#cfa-level-1 #reading-22-financial-statement-analysis-intro
The [...] disclose the basis of preparation for the financial statements.
The notes disclose the basis of preparation for the financial statements.
3.1.5. Financial Notes and Supplementary Schedules
ete set of financial statements. The notes provide information that is essential to understanding the information provided in the primary statements. Volkswagen's 2009 financial statements, for example, include 91 pages of notes. The notes disclose the basis of preparation for the financial statements. For example, Volkswagen discloses in its first note that its fiscal year corresponds to the calendar year, that its financial statements are prepared in accordance with IFRS as adopted
#daniel-goleman #emotional-brain #emotional-iq #how-the-brain-grew #what-are-emotions-for #when-passions-overwhelm-reasons
The more complex the social system, the more essential is [...]
flexibility in emotional responses.
The more complex the social system, the more essential is such flexibility in emotional responses.
#cashflow-statement
FCF accounts for Capex and [...]
dividend payments
Subject 3. Cash Flow Statement Analysis
CFO. CFO does not include cash outlays for replacing old equipment. Free Cash Flow (FCF) is intended to measure the cash available to a company for discretionary uses after making all required cash outlays. It accounts for capital expenditures and dividend payments, which are essential to the ongoing nature of the business. The basic definition is cash from operations less the amount of capital expenditures required to maintain the company's present productive capacity.
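A minimal arithmetic sketch of the basic definition above; the CFO, capex, and dividend figures are hypothetical, not from the reading:

# Illustrative free cash flow calculation (hypothetical numbers).
cfo = 1_200.0        # cash flow from operations
capex = 450.0        # capital expenditures to maintain productive capacity
dividends = 200.0    # dividend payments

# Basic definition from the reading: CFO less required capital expenditures.
fcf = cfo - capex
# A stricter discretionary measure also nets out dividend payments.
fcf_after_dividends = fcf - dividends

print(fcf, fcf_after_dividends)  # 750.0 550.0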
#reading-9-probability-concepts
If P(AB) = 0.7 and P(B) = 0.8, then P(A | B) = [...] = 0.875.
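A worked version of the hidden step, reading 0.7 as the joint probability P(AB) and 0.8 as P(B):

P(A \mid B) = \frac{P(AB)}{P(B)} = \frac{0.7}{0.8} = 0.875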
Probabilistic principal components analysis (PCA) analyzes data via a [...] (Tipping & Bishop, 1999) .
lower dimensional latent space
Probabilistic principal components analysis (PCA) is a dimensionality reduction technique that analyzes data via a lower dimensional latent space (Tipping & Bishop, 1999) .
Edward – Probabilistic PCA
Probabilistic principal components analysis (PCA) is a dimensionality reduction technique that analyzes data via a lower dimensional latent space (Tipping & Bishop, 1999). It is often used when there are missing values in the data or for multidimensional scaling. We demonstrate with an example in Edward. An interactive version with Jupyter notebook is a
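A minimal NumPy sketch of the probabilistic PCA model described above (not the Edward tutorial code itself); the data, dimensions, and the closed-form maximum-likelihood fit follow Tipping & Bishop (1999):

import numpy as np

# Probabilistic PCA generative model: x = W z + noise, z ~ N(0, I).
rng = np.random.default_rng(0)
N, D, K = 500, 5, 2                      # samples, data dim, latent dim
W_true = rng.normal(size=(D, K))
Z = rng.normal(size=(N, K))
X = Z @ W_true.T + 0.1 * rng.normal(size=(N, D))

# Closed-form maximum-likelihood fit via the sample covariance eigendecomposition.
S = np.cov(X, rowvar=False)
eigval, eigvec = np.linalg.eigh(S)       # ascending order
eigval, eigvec = eigval[::-1], eigvec[:, ::-1]
sigma2 = eigval[K:].mean()               # noise variance: mean of discarded eigenvalues
W_ml = eigvec[:, :K] @ np.diag(np.sqrt(eigval[:K] - sigma2))
print(sigma2, W_ml.shape)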
#gaussian-process
In Gaussian process, the non-negative definiteness of the covariance function enables its [...] using the Karhunen–Loeve expansion.
spectral decomposition
if a Gaussian process is assumed to have mean zero, defining the covariance function completely defines the process' behaviour. Importantly the non-negative definiteness of this function enables its spectral decomposition using the Karhunen–Loeve expansion.
Gaussian process - Wikipedia
can be shown to be the covariances and means of the variables in the process. [3] Covariance functions: A key fact of Gaussian processes is that they can be completely defined by their second-order statistics. [4] Thus, if a Gaussian process is assumed to have mean zero, defining the covariance function completely defines the process' behaviour. Importantly the non-negative definiteness of this function enables its spectral decomposition using the Karhunen–Loeve expansion. Basic aspects that can be defined through the covariance function are the process' stationarity, isotropy, smoothness and periodicity. [5] [6] Stationarity refers to the process' beha
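A small numerical sketch of the point about non-negative definiteness: on any finite grid, the covariance function yields a matrix whose eigendecomposition is the discrete analogue of the Karhunen–Loeve expansion. The squared-exponential kernel and grid below are illustrative assumptions, not from the excerpt:

import numpy as np

# Squared-exponential covariance on a grid of inputs (illustrative kernel).
t = np.linspace(0.0, 1.0, 100)
K = np.exp(-0.5 * (t[:, None] - t[None, :])**2 / 0.1**2)

# Non-negative definiteness => real, non-negative eigenvalues and an
# orthonormal eigenbasis: K = V diag(lam) V^T (discrete Karhunen-Loeve).
lam, V = np.linalg.eigh(K)
assert lam.min() > -1e-8                 # numerically non-negative

# Sample a zero-mean GP path from its KL expansion: f = sum_i sqrt(lam_i) xi_i v_i.
xi = np.random.default_rng(1).normal(size=lam.shape)
f = V @ (np.sqrt(np.clip(lam, 0, None)) * xi)
print(f.shape)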
#matrices #spectral-theorem
[...] is a result about when a linear operator or matrix can be diagonalized
spectral theorem
In mathematics, particularly linear algebra and functional analysis, a spectral theorem is a result about when a linear operator or matrix can be diagonalized (that is, represented as a diagonal matrix in some basis).
Spectral theorem - Wikipedia
for more information. In mathematics, particularly linear algebra and functional analysis, a spectral theorem is a result about when a linear operator or matrix can be diagonalized (that is, represented as a diagonal matrix in some basis). This is extremely useful because computations involving a diagonalizable matrix can often be reduced to much simpler computations involving the corresponding diagonal matrix. The concep
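A short sketch of why diagonalization is useful, using an illustrative real symmetric matrix (a case the spectral theorem covers):

import numpy as np

# For a real symmetric matrix the spectral theorem guarantees an orthonormal
# eigenbasis, i.e. A = Q diag(w) Q^T (matrix chosen for illustration).
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
w, Q = np.linalg.eigh(A)

# Computations reduce to the diagonal form, e.g. a matrix power:
A_cubed = Q @ np.diag(w**3) @ Q.T
assert np.allclose(A_cubed, np.linalg.matrix_power(A, 3))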
#computer-science #mathematics
a [...] form of a mathematical object is a standard way of presenting that object as a mathematical expression.
canonical, normal, or standard form
In mathematics and computer science, a canonical, normal, or standard form of a mathematical object is a standard way of presenting that object as a mathematical expression.
Canonical form - Wikipedia
strings " madam curie " and " radium came " are given as C arrays. Each one is converted into a canonical form by sorting. Since both sorted strings literally agree, the original strings were anagrams of each other. <span>In mathematics and computer science, a canonical, normal, or standard form of a mathematical object is a standard way of presenting that object as a mathematical expression. The distinction between "canonical" and "normal" forms varies by subfield. In most fields, a canonical form specifies a unique representation for every object, while
#stochastics
The Wiener process has independent increments: for every t > 0, [...]
the future increments W_{t+u} − W_t, u ≥ 0, are independent of the past values W_s, s ≤ t
The Wiener process W_t is characterised by the following properties: [1] W_0 = 0 a.s.; W has independent increments: for every t > 0, the future increments W_{t+u} − W_t, u ≥ 0, are independent of the past values W_s, s ≤ t; W has Gaussian increments: W_{t+u} − W_t is normally distributed with mean 0 and variance u; W has continuous paths: with probability 1, W_t is continuous in t.
Wiener process - Wikipedia
Characterisations of the Wiener process: The Wiener process W_t is characterised by the following properties: [1] W_0 = 0 a.s.; W has independent increments: for every t > 0, the future increments W_{t+u} − W_t, u ≥ 0, are independent of the past values W_s, s ≤ t; W has Gaussian increments: W_{t+u} − W_t is normally distributed with mean 0 and variance u, i.e. W_{t+u} − W_t ~ N(0, u); W has continuous paths: with probability 1, W_t is continuous in t. The independent increments means that if 0 ≤ s_1 < t_1 ≤ s_2 < t_2 then W_{t_1} − W_{s_1} and W_{t_2} − W_{s_2} are independent random variables, and the similar condition holds for
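A minimal simulation sketch built directly from the defining properties above (step count and horizon are arbitrary choices):

import numpy as np

# Simulate a Wiener process path from its defining properties: W_0 = 0 and
# independent Gaussian increments with variance equal to the time step.
rng = np.random.default_rng(0)
n_steps, T = 1000, 1.0
dt = T / n_steps
increments = rng.normal(loc=0.0, scale=np.sqrt(dt), size=n_steps)
W = np.concatenate([[0.0], np.cumsum(increments)])  # W_0 = 0
print(W.shape, W[0])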
Deep Gaussian processes are formally equivalent to neural networks with [...] .
multiple, infinitely wide hidden layers
Deep Gaussian processes (DGPs) are multi-layer hierarchical generalisations of Gaussian processes (GPs) and are formally equivalent to neural networks with multiple, infinitely wide hidden layers.
#history #logic
Johannes Gutenberg introduced new printing techniques in Europe around [...].
You can't terrorise Aristotle!
It is also not happenstance that the downfall of the disputational culture roughly coincided with the introduction of new printing techniques in Europe by Johannes Gutenberg, around 1440.
The rise and fall and rise of logic | Aeon Essays
ich is thoroughly disputational, with Meditations on First Philosophy (1641) by Descartes, a book argued through long paragraphs driven by the first-person singular. The nature of intellectual enquiry shifted with the downfall of disputation. It is also not happenstance that the downfall of the disputational culture roughly coincided with the introduction of new printing techniques in Europe by Johannes Gutenberg, around 1440. Before that, books were a rare commodity, and education was conducted almost exclusively by means of oral contact between masters and pupils in the form of expository lectures in which
Viewed from a functional analysis perspective, a single outcome of a stochastic process can be called a [...]
sample function
Again, remember that a function is just a vector of infinite length, together with a topology that gives a notion of proximity and continuity.
A stochastic process can have many outcomes, due to its randomness, and a single outcome of a stochastic process is called, among other names, a sample function or realization
Stochastic process - Wikipedia
n-dimensional Euclidean space. [1] [5] An increment is the amount that a stochastic process changes between two index values, often interpreted as two points in time. [48] [49] A stochastic process can have many outcomes, due to its randomness, and a single outcome of a stochastic process is called, among other names, a sample function or realization. [28] [50] A single computer-simulated sample function or realization, among other terms, of a three-dimensional Wiener or Brownian motion process for time 0 ≤ t ≤ 2.
#has-images #puerquito-session #reading-puerquito-verde
Information asymmetry. Managers almost always have more information than shareholders. Thus, it is difficult for shareholders to measure managers' performance or to hold them accountable for their performance.
a way to understand why managers do not always act in the best interests of stakeholders. Managers and shareholders may have different goals. They may also have different attitudes towards risk. Information asymmetry. Managers almost always have more information than shareholders. Thus, it is difficult for shareholders to measure managers' performance or to hold them accountable for their performance.
Subject 3. Principal-Agent and Other Relationships in Corporate Governance
Shareholder and Manager/Director Relationships Problems can arise in a business relationship when one person delegates decision-making authority to another. The principal is the person delegating authority, and the agent is the person to whom the authority is delegated. Agency theory offers a way to understand why managers do not always act in the best interests of stakeholders. Managers and shareholders may have different goals. They may also have different attitudes towards risk. Information asymmetry. Managers almost always have more information than shareholders. Thus, it is difficult for shareholders to measure managers' performance or to hold them accountable for their performance. Controlling and Minority Shareholder Relationships Ownership structure is one of the main dimensions of corporate governance. For firms with controllin
#has-images #puerquito-session #reading-puerquito-verde #stakeholder-management
Employee laws, contracts, codes of ethics and business conduct, and compliance offer(s) are all means a company can use to manage its relationship with its employees.
tives employed by the company should be compensated. Contractual agreements with creditors; indentures, covenants, collaterals and credit committees are tools used by creditors to protect their interests. Employee laws, contracts, codes of ethics and business conduct, and compliance offer(s) are all means a company can use to manage its relationship with its employees. Contractual agreements with customers and suppliers. Laws and regulations a company must follow to protect the rights of specific groups.
Subject 4. Stakeholder Management
groups and on that basis managing the company's relationships with stakeholders. The framework of corporate governance and stakeholder management reflects a legal, contractual, organizational, and governmental infrastructure. <span>Mechanisms of Stakeholder Management Mechanisms of stakeholder management may include: • General meetings. o The right to participate in general shareholder meetings is a fundamental shareholder right. Shareholders, especially minority shareholders, should have the opportunity to ask questions of the board, to place items on the agenda and to propose resolutions, to vote on major corporate matters and transactions, and to participate in key corporate governance decisions, such as the nomination and election of board members. o Shareholders should be able to vote in person or in absentia, and equal consideration should be given to votes cast in person or in absentia. A board of directors, which serves as a link between shareholders and managers, acts as the shareholders' monitoring tool within the company. The audit function. It plays a critical role in ensuring the corporation's financial integrity and consideration of legal and compliance issues. The primary objective is to ensure that the financial information reported by the company to shareholders is complete, accurate, reliable, relevant, and timely. Company reporting and transparency. It helps reduce of information asymmetry and agency costs. Related-party transactions. Related-party transactions involve buying, selling, and other transactions with board members, managers, employees, family members, and so on. They can create an inherent conflict of interest. Policies should be established to disclose, mitigate, and manage such transactions. Remuneration policies. Does the company's remuneration strategy reward long-term or short-term growth? Are equity-based compensation plans linked to the long-term performance of the company? o Say on pay is the ability of shareholders in a company to actively vote on how much executives employed by the company should be compensated. Contractual agreements with creditors; indentures, covenants, collaterals and credit committees are tools used by creditors to protect their interests. Employee laws, contracts, codes of ethics and business conduct, and compliance offer(s) are all means a company can use to manage its relationship with its employees. Contractual agreements with customers and suppliers. Laws and regulations a company must follow to protect the rights of specific groups. <span><body><html>
#has-images #puerquito-session #reading-puerquito-verde #stakeholder-management-mechanisms
What is the shareholders' monitoring tool within the company?
A board of directors
A board of directors, which serves as a link between shareholders and managers, acts as the shareholders' monitoring tool within the company.
#state-space-models
[...] is also known as sum-product message passing
Belief propagation
Belief propagation, also known as sum-product message passing, is a message-passing algorithm for performing inference on graphical models, such as Bayesian networks and Markov random fields.
Belief propagation - Wikipedia
Belief propagation, also known as sum-product message passing, is a message-passing algorithm for performing inference on graphical models, such as Bayesian networks and Markov random fields. It calculates the marginal distribution for each unobserved node, conditional on any observed nodes. Belief propagation is commonly used in artificial intelligence and information theor
Belief propagation is also known as [...]
sum-product message passing
Belief propagation, also known as sum-product message passing, is a message-passing algorithm for performing inference on graphical models, such as Bayesian networks and Markov random fields.
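A minimal sum-product sketch on a three-node chain with binary variables; the unary and pairwise potentials are made up for illustration. On a tree-structured graph like this chain, the resulting beliefs are the exact marginals:

import numpy as np

# Sum-product (belief propagation) on the chain x1 - x2 - x3.
unary = [np.array([0.7, 0.3]),
         np.array([0.5, 0.5]),
         np.array([0.2, 0.8])]
pair = np.array([[0.9, 0.1],
                 [0.1, 0.9]])            # same pairwise potential on both edges

# fwd[i]: message arriving at node i from the left; bwd[i]: from the right.
fwd = [np.ones(2) for _ in range(3)]
bwd = [np.ones(2) for _ in range(3)]
fwd[1] = pair.T @ (unary[0] * fwd[0])    # m_{1->2}
fwd[2] = pair.T @ (unary[1] * fwd[1])    # m_{2->3}
bwd[1] = pair @ (unary[2] * bwd[2])      # m_{3->2}
bwd[0] = pair @ (unary[1] * bwd[1])      # m_{2->1}

# Node beliefs: product of local potential and incoming messages, normalised.
for i in range(3):
    b = unary[i] * fwd[i] * bwd[i]
    print(i, b / b.sum())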
#inner-product-space #vector-space
Among the topologies of vector spaces, those that are defined by a norm or inner product are more commonly used, as having a notion of [...].
The norm can be understood as the square root of the inner product of a vector with itself.
Among the topologies of vector spaces, those that are defined by a norm or inner product are more commonly used, as having a notion of distance between two vectors.
Vector space - Wikipedia
roperties, which in some cases can be visualized as arrows. Vector spaces are the subject of linear algebra and are well characterized by their dimension, which, roughly speaking, specifies the number of independent directions in the space. Infinite-dimensional vector spaces arise naturally in mathematical analysis, as function spaces, whose vectors are functions. These vector spaces are generally endowed with additional structure, which may be a topology, allowing the consideration of issues of proximity and continuity. Among these topologies, those that are defined by a norm or inner product are more commonly used, as having a notion of distance between two vectors. This is particularly the case of Banach spaces and Hilbert spaces, which are fundamental in mathematical analysis. Historically, the first ideas leading to vector spaces can be traced back as far as the 17th century's analytic geometry, matrices, systems of linear equations, and Euclidean vectors.
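A tiny sketch of the norm and distance induced by the standard inner product (the vectors are illustrative):

import numpy as np

# Norm induced by an inner product: ||v|| = sqrt(<v, v>); the distance
# between two vectors is the norm of their difference.
u = np.array([1.0, 2.0, 2.0])
v = np.array([4.0, 0.0, 2.0])
norm_v = np.sqrt(np.dot(v, v))
distance = np.sqrt(np.dot(u - v, u - v))
print(norm_v, distance, np.isclose(distance, np.linalg.norm(u - v)))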
#lebesgue-integration
Riemann integral considers the area under a curve as made out of [...shape...]
vertical rectangles
While the Riemann integral considers the area under a curve as made out of vertical rectangles, the Lebesgue definition considers horizontal slabs that are not necessarily just rectangles, and so it is more flexible.
Lebesgue integration - Wikipedia
es, Fourier transforms, and other topics. The Lebesgue integral is better able to describe how and when it is possible to take limits under the integral sign (via the powerful monotone convergence theorem and dominated convergence theorem). While the Riemann integral considers the area under a curve as made out of vertical rectangles, the Lebesgue definition considers horizontal slabs that are not necessarily just rectangles, and so it is more flexible. For this reason, the Lebesgue definition makes it possible to calculate integrals for a broader class of functions. For example, the Dirichlet function, which is 0 where its argument is
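A numerical sketch of the two pictures for the illustrative function f(x) = x^2 on [0, 1] (exact integral 1/3): vertical rectangles over the domain versus horizontal slabs over the range, the latter via the layer-cake form, integrating the measure of {x : f(x) > y} over y:

import numpy as np

f = lambda x: x**2
n = 10_000

# Riemann: vertical rectangles of width 1/n over the domain.
xs = (np.arange(n) + 0.5) / n
riemann = np.sum(f(xs)) / n

# Lebesgue-style (layer cake): integrate measure({f > y}) over the range.
ys = (np.arange(n) + 0.5) / n            # f ranges over [0, 1]
measure_above = 1.0 - np.sqrt(ys)        # length of {x in [0,1] : x^2 > y}
lebesgue = np.sum(measure_above) / n

print(riemann, lebesgue)                 # both close to 1/3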
it is actually impossible to assign a length to [...] in a way that preserves some natural additivity and translation invariance properties.
all subsets of ℝ
As later set theory developments showed (see non-measurable set), it is actually impossible to assign a length to all subsets of ℝ in a way that preserves some natural additivity and translation invariance properties. This suggests that picking out a suitable class of measurable subsets is an essential prerequisite
a useful abstraction of the notion of length of subsets of the real line—and, more generally, area and volume of subsets of Euclidean spaces. In particular, it provided a systematic answer to the question of which subsets of ℝ have a length. As later set theory developments showed (see non-measurable set), it is actually impossible to assign a length to all subsets of ℝ in a way that preserves some natural additivity and translation invariance properties. This suggests that picking out a suitable class of measurable subsets is an essential prerequisite. The Riemann integral uses the notion of length explicitly. Indeed, the element of calculation for the Riemann integral is the rectangle [a, b] × [c, d], whose area is calculated to be
#functions-of-money #globo-terraqueo-session #has-images #monetary-policy #money #reading-agustin-carsten
The most generic definition of money is that it is any generally accepted medium of exchange. A medium of exchange is any asset that can be used to purchase goods and services or to repay debts. Money can thus eliminate the debilitating double coincidence of the "wants" problem that exists in a barter economy. When this medium of exchange exists, a farmer wishing to sell wheat for wine does not need to identify a wine producer in search of wheat. Instead, he can sell wheat to those who want wheat in exchange for money. The farmer can then exchange this money for wine with a wine producer, who in turn can exchange that money for the goods or services that she wants.
The most generic definition of money is that it is any generally accepted medium of exchange. A medium of exchange is any asset that can be used to purchase goods and services or to repay debts. Money can thus eliminate the debilitating double coincidence of the "wants" problem that exists in a barter economy. When this medium of exchange exists, a farmer wishing to sell wheat for wine does not need to identify a wine producer in search of wheat. Instead, he can sell wheat to those who want wheat in exchange for money. The farmer can then exchange this money for wine with a wine producer, who in turn can exchange that money for the goods or services that she wants. However, for money to act as this liberating medium of exchange, it must possess certain qualities. It must: be readily acceptable, hav
: the price of oranges in terms of pears; of pears in terms of bread; of bread in terms of milk; or of milk in terms of oranges. A barter economy has no common measure of value that would make multiple transactions simple. <span>2.1.1. The Functions of Money The most generic definition of money is that it is any generally accepted medium of exchange. A medium of exchange is any asset that can be used to purchase goods and services or to repay debts. Money can thus eliminate the debilitating double coincidence of the "wants" problem that exists in a barter economy. When this medium of exchange exists, a farmer wishing to sell wheat for wine does not need to identify a wine producer in search of wheat. Instead, he can sell wheat to those who want wheat in exchange for money. The farmer can then exchange this money for wine with a wine producer, who in turn can exchange that money for the goods or services that she wants. However, for money to act as this liberating medium of exchange, it must possess certain qualities. It must: be readily acceptable, have a known value, be easily divisible, have a high value relative to its weight, and be difficult to counterfeit. Qualities (i) and (ii) are closely related; the medium of exchange will only be acceptable if it has a known value. If the medium of exchange has quality (iii), then it can be used to purchase items of relatively little value and of relatively large value with equal ease. Having a high value relative to its weight is a practical convenience, meaning that people can carry around sufficient wealth for their transaction needs. Finally, if the medium of exchange can be counterfeited easily, then it would soon cease to have a value and would not be readily acceptable as a means of effecting transactions; in other words, it would not satisfy qualities (i) and (ii). Given the qualities that money needs to have, it is clear why precious metals (particularly gold and silver) often fulfilled the role of medium of exchange in early societies, and as recently as the early part of the twentieth century. Precious metals were acceptable as a medium of exchange because they had a known value, were easily divisible, had a high value relative to their weight, and could not be easily counterfeited. Thus, precious metals were capable of acting as a medium of exchange. But they also fulfilled two other useful functions that are essential for the characteristics of money. In a barter economy, it is difficult to store wealth from one year to the next when one's produce is perishable, or indeed, if it requires large warehouses in which to store it. Because precious metals like gold had a high value relative to their bulk and were not perishable, they could act as a store of wealth . However, their ability to act as a store of wealth not only depended on the fact that they did not perish physically over time, but also on the belief that others would always value precious metals. The value from year to year of precious metals depended on people's continued demand for them in ornaments, jewellery, and so on. For example, people were willing to use gold as a store of wealth because they believed that it would remain highly valued. However, if gold became less valuable to people relative to other goods and services year after year it would not be able to fulfill its role as a store of value , and as such might also lose its status as a medium of exchange. Another important characteristic of money is that it can be used as a universal unit of account. 
As such, it can create a single unitary measure of value for all goods and services. In an economy where gold and silver are the accepted medium of exchange, all prices, debts, and wealth can be recorded in terms of their gold or silver coin exchange value. Money, in its role as a unit of account, drastically reduces the number of prices in an economy compared to barter, which requires that prices be established for a good in terms of all other goods for which it might be exchanged. In summary, money fulfills three important functions, it: acts as a medium of exchange; provides individuals with a way of storing wealth; and provides society with a convenient measure of value and unit of account. 2.1.2. Paper Money and the Money Creation Process Although precious metals like gold and silver fulfilled the required functions of money
#function-of-money #globo-terraqueo-session #has-images #monetary-policy #money #reading-agustin-carsten
for money to act as this liberating medium of exchange, it must possess certain qualities. It must:
be readily acceptable,
have a known value,
be easily divisible,
have a high value relative to its weight, and
be difficult to counterfeit.
stead, he can sell wheat to those who want wheat in exchange for money. The farmer can then exchange this money for wine with a wine producer, who in turn can exchange that money for the goods or services that she wants. However, for money to act as this liberating medium of exchange, it must possess certain qualities. It must: be readily acceptable, have a known value, be easily divisible, have a high value relative to its weight, and be difficult to counterfeit. Qualities (i) and (ii) are closely related; the medium of exchange will only be acceptable if it has a known value. If the medium of exchange has quality (iii), the
the medium of exchange will only be acceptable if it has a known value. If the medium of exchange has quality (iii), then it can be used to purchase items of relatively little value and of relatively large value with equal ease.
have a known value, be easily divisible, have a high value relative to its weight, and be difficult to counterfeit. Qualities (i) and (ii) are closely related; the medium of exchange will only be acceptable if it has a known value. If the medium of exchange has quality (iii), then it can be used to purchase items of relatively little value and of relatively large value with equal ease. Having a high value relative to its weight is a practical convenience, meaning that people can carry around sufficient wealth for their transaction needs. Finally, if the medium of exch
#globo-terraqueo-session #has-images #monetary-policy #paper-money-creation-process #reading-agustin-carsten
The process of money creation is a crucial concept for understanding the role that money plays in an economy. Its potency depends on the amount of money that banks keep in reserve to meet the withdrawals of its customers. This practice of lending customers' money to others on the assumption that not all customers will want all of their money back at any one time is known as fractional reserve banking
h the flow of commerce over time. A certain proportion of the gold that was not being withdrawn and used directly for commerce could therefore be lent to others at a rate of interest. By doing this, the early banks created money. The process of money creation is a crucial concept for understanding the role that money plays in an economy. Its potency depends on the amount of money that banks keep in reserve to meet the withdrawals of its customers. This practice of lending customers' money to others on the assumption that not all customers will want all of their money back at any one time is known as fractional reserve banking. We can illustrate how it works through a simple example. Suppose that the bankers in an economy come to the view that they need to retain only 10 percent of any money dep
3; acts as a medium of exchange; provides individuals with a way of storing wealth; and provides society with a convenient measure of value and unit of account. <span>2.1.2. Paper Money and the Money Creation Process Although precious metals like gold and silver fulfilled the required functions of money relatively well for many years, and although carrying gold coins around was easier than carrying around one's physical produce, it was not necessarily a safe way to conduct business. A crucial development in the history of money was the promissory note . The process began when individuals began leaving their excess gold with goldsmiths, who would look after it for them. In turn the goldsmiths would give the depositors a receipt, stating how much gold they had deposited. Eventually these receipts were traded directly for goods and services, rather than there being a physical transfer of gold from the goods buyer to the goods seller. Of course, both the buyer and seller had to trust the goldsmith because the goldsmith had all the gold and the goldsmith's customers had only pieces of paper. These depository receipts represented a promise to pay a certain amount of gold on demand. This paper money therefore became a proxy for the precious metals on which they were based, that is, they were directly related to a physical commodity. Many of these early goldsmiths evolved into banks, taking in excess wealth and in turn issuing promissory notes that could be used in commerce. In taking in other people's gold and issuing depository receipts and later promissory notes, it became clear to the goldsmiths and early banks that not all the gold that they held in their vaults would be withdrawn at any one time. Individuals were willing to buy and sell goods and services with the promissory notes, but the majority of the gold that backed the notes just sat in the vaults—although its ownership would change with the flow of commerce over time. A certain proportion of the gold that was not being withdrawn and used directly for commerce could therefore be lent to others at a rate of interest. By doing this, the early banks created money. The process of money creation is a crucial concept for understanding the role that money plays in an economy. Its potency depends on the amount of money that banks keep in reserve to meet the withdrawals of its customers. This practice of lending customers' money to others on the assumption that not all customers will want all of their money back at any one time is known as fractional reserve banking . We can illustrate how it works through a simple example. Suppose that the bankers in an economy come to the view that they need to retain only 10 percent of any money deposited with them. This is known as the reserve requirement .2 Now consider what happens when a customer deposits €100 in the First Bank of Nations. This deposit changes the balance sheet of First Bank of Nations, as shown in Exhibit 2, and it represents a liability to the bank because it is effectively loaned to the bank by the customer. By lending 90 percent of this deposit to another customer the bank has two types of assets: (1) the bank's reserves of €10, and (2) the loan equivalent to €90. Notice that the balance sheet still balances; €100 worth of assets and €100 worth of liabilities are on the balance sheet. Now suppose that the recipient of the loan of €90 uses this money to purchase some goods of this value and the seller of the goods deposits this €90 in another bank, the Second Bank of Nations. 
The Second Bank of Nations goes through the same process; it retains €9 in reserve and loans 90 percent of the deposit (€81) to another customer. This customer in turn spends €81 on some goods or services. The recipient of this money deposits it at the Third Bank of Nations, and so on. This example shows how money is created when a bank makes a loan.
Exhibit 2. Money Creation via Fractional Reserve Banking
First Bank of Nations — Assets: Reserves €10, Loans €90; Liabilities: Deposits €100
Second Bank of Nations — Assets: Reserves €9, Loans €81; Liabilities: Deposits €90
Third Bank of Nations — Assets: Reserves €8.1, Loans €72.9; Liabilities: Deposits €81
This process continues until there is no more money left to be deposited and loaned out. The total amount of money 'created' from this one deposit of €100 can be calculated as: Equation (1) New deposit/Reserve requirement = €100/0.10 = €1,000. It is the sum of all the deposits now in the banking system. You should also note that the original deposit of €100, via the practice of reserve banking, was the catalyst for €1,000 worth of economic transactions. That is not to say that economic growth would be zero without this process, but instead that it can be an important component in economic activity. The amount of money that the banking system creates through the practice of fractional reserve banking is a function of 1 divided by the reserve requirement, a quantity known as the money multiplier.3 In the case just examined, the money multiplier is 1/0.10 = 10. Equation 1 implies that the smaller the reserve requirement, the greater the money multiplier effect. In our simplistic example, we assumed that the banks themselves set their own reserve requirements. However, in some economies, the central bank sets the reserve requirement, which is a potential means of affecting money growth. In any case, a prudent bank would be wise to have sufficient reserves such that the withdrawal demands of their depositors can be met in stressful economic and credit market conditions. Later, when we discuss central banks and central bank policy, we will see how central banks can use the mechanism just described to affect the money supply. Specifically, the central bank could, by purchasing €100 in government securities credited to the bank account of the seller, seek to initiate an increase in the money supply. The central bank may also lend reserves directly to banks, creating excess reserves (relative to any imposed or self-imposed reserve requirement) that can support new loans and money expansion. 2.1.3. Definitions of Money The process of money creation raises a fundamental issue: What is money? In an economy with money but without promisso
The amount of money that the banking system creates through the practice of fractional reserve banking is a function of 1 divided by the reserve requirement, a quantity known as the money multiplier. In the case just examined, the money multiplier is 1/0.10 = 10. This equation implies that the smaller the reserve requirement, the greater the money multiplier effect.
e of reserve banking, was the catalyst for €1,000 worth of economic transactions. That is not to say that economic growth would be zero without this process, but instead that it can be an important component in economic activity. The amount of money that the banking system creates through the practice of fractional reserve banking is a function of 1 divided by the reserve requirement, a quantity known as the money multiplier.3 In the case just examined, the money multiplier is 1/0.10 = 10. Equation 1 implies that the smaller the reserve requirement, the greater the money multiplier effect. In our simplistic example, we assumed that the banks themselves set their own reserve requirements. However, in some economies, the central bank sets the reserve requiremen
Later, when we discuss central banks and central bank policy, we will see how central banks can use the money creation mechanism to affect the money supply. Specifically, the central bank could, by purchasing $100 in government securities credited to the bank account of the seller, seek to initiate an increase in the money supply. The central bank may also lend reserves directly to banks, creating excess reserves (relative to any imposed or self-imposed reserve requirement) that can support new loans and money expansion.
potential means of affecting money growth. In any case, a prudent bank would be wise to have sufficient reserves such that the withdrawal demands of their depositors can be met in stressful economic and credit market conditions. Later, when we discuss central banks and central bank policy, we will see how central banks can use the mechanism just described to affect the money supply. Specifically, the central bank could, by purchasing €100 in government securities credited to the bank account of the seller, seek to initiate an increase in the money supply. The central bank may also lend reserves directly to banks, creating excess reserves (relative to any imposed or self-imposed reserve requirement) that can support new loans and money expansion.
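A small sketch of the deposit-and-relend chain from the reading, using the same EUR 100 deposit and 10 percent reserve requirement; the stopping threshold is an arbitrary choice:

# Fractional-reserve money creation: each bank keeps 10% and re-lends the rest.
reserve_requirement = 0.10
initial_deposit = 100.0

total_deposits, deposit = 0.0, initial_deposit
while deposit > 0.01:                        # iterate until new deposits are negligible
    total_deposits += deposit
    deposit *= (1.0 - reserve_requirement)   # the re-lent portion becomes the next deposit

money_multiplier = 1.0 / reserve_requirement
print(round(total_deposits, 2))              # approaches 1000.0
print(initial_deposit * money_multiplier)    # 1000.0, as in Equation (1)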
#definitions-of-money #globo-terraqueo-session #has-images #monetary-policy #money #reading-agustin-carsten
The process of money creation raises a fundamental issue: What is money? In an economy with money but without promissory notes and fractional reserve banking, money is relatively easy to define: Money is the total amount of gold and silver coins in circulation, or their equivalent. The money creation process above, however, indicates that a broader definition of money might encompass all the notes and coins in circulation plus all bank deposits.
Definitions of Money The process of money creation raises a fundamental issue: What is money? In an economy with money but without promissory notes and fractional reserve banking, money is relatively easy to define: Money is the total amount of gold and silver coins in circulation, or their equivalent. The money creation process above, however, indicates that a broader definition of money might encompass all the notes and coins in circulation plus all bank deposits. More generally, we might define money as any medium that can be used to purchase goods and services. Notes and coins can be used to fulfill this purpose, and yet such curre
e in the money supply. The central bank may also lend reserves directly to banks, creating excess reserves (relative to any imposed or self-imposed reserve requirement) that can support new loans and money expansion. <span>2.1.3. Definitions of Money The process of money creation raises a fundamental issue: What is money? In an economy with money but without promissory notes and fractional reserve banking, money is relatively easy to define: Money is the total amount of gold and silver coins in circulation, or their equivalent. The money creation process above, however, indicates that a broader definition of money might encompass all the notes and coins in circulation plus all bank deposits. More generally, we might define money as any medium that can be used to purchase goods and services. Notes and coins can be used to fulfill this purpose, and yet such currency is not the only means of purchasing goods and services. Personal cheques can be written based on a bank chequing account, while debit cards can be used for the same purpose. But what about time deposits or savings accounts? Nowadays transfers can be made relatively easily from a savings account to a current account; therefore, these savings accounts might also be considered as part of the stock of money. Credit cards are also used to pay for goods and services; however, there is an important difference between credit card payments and those made by cheques and debit cards. Unlike a cheque or debit card payment, a credit card payment involves a deferred payment. Basically, the greater the complexity of any financial system, the harder it is to define money. The monetary authorities in most modern economies produce a range of measures of money (see Exhibit 3). But generally speaking, the money stock consists of notes and coins in circulation, plus the deposits in banks and other financial institutions that can be readily used to make purchases of goods and services in the economy. In this regard, economists often speak of the rate of growth of narrow money and/or broad money . By narrow money, they generally mean the notes and coins in circulation in an economy, plus other very highly liquid deposits. Broad money encompasses narrow money but also includes the entire range of liquid assets that can be used to make purchases. Because financial systems, practice, and institutions vary from economy to economy, so do definitions of money; thus, it is difficult to make international comparisons. Still, most central banks produce both a narrow and broad measure of money, plus some intermediate ones too. Exhibit 3 shows the money definitions in four economies. <span><body><html>
Money Measures in the United Kingdom
The United Kingdom produces a set of four measures of the money stock. M0 is the narrowest measure and comprises notes and coins held outside the Bank of England, plus Bankers' deposits at the Bank of England. M2 includes M0, plus (effectively) all retail bank deposits. M4 includes M2, plus wholesale bank and building society deposits and also certificates of deposit. Finally, the Bank of England produces another measure called M3H, which is a measure created to be comparable with money definitions in the EU (see above). M3H includes M4, plus UK residents' and corporations' foreign currency deposits in banks and building societies.
plus other savings and deposits with financial institutions. There is also a "broad measure of liquidity" that encompasses M3 as well as a range of other liquid assets, such as government bonds and commercial paper. Money Measures in the United Kingdom: The United Kingdom produces a set of four measures of the money stock. M0 is the narrowest measure and comprises notes and coins held outside the Bank of England, plus Bankers' deposits at the Bank of England. M2 includes M0, plus (effectively) all retail bank deposits. M4 includes M2, plus wholesale bank and building society deposits and also certificates of deposit. Finally, the Bank of England produces another measure called M3H, which is a measure created to be comparable with money definitions in the EU (see above). M3H includes M4, plus UK residents' and corporations' foreign currency deposits in banks and building societies.
External Factors Affecting Working Capital Needs
#has-images #introduction #pie-de-cabra-session #reading-molo
New technologies and products
Both internal and external factors influence working capital needs; we summarize them in Exhibit 1.
Exhibit 1. Internal and External Factors That Affect Working Capital Needs
Internal Factors: company size and growth rates; organizational structure; sophistication of working capital management; borrowing and investing positions/activities/capacities.
External Factors: banking services; interest rates; new technologies and new products; the economy; competitors.
Reading 38 Working Capital Management Intro
and collecting on this credit, managing inventory, and managing payables. Effective working capital management also requires reliable cash forecasts, as well as current and accurate information on transactions and bank balances. Both internal and external factors influence working capital needs; we summarize them in Exhibit 1.
Exhibit 1. Internal and External Factors That Affect Working Capital Needs
Internal Factors: company size and growth rates; organizational structure; sophistication of working capital management; borrowing and investing positions/activities/capacities.
External Factors: banking services; interest rates; new technologies and new products; the economy; competitors.
The scope of working capital management includes transactions, relations, analyses, and focus: Transactions include payments for trade, financing, and investment. Relations with financial institutions and trading partners must be maintained to ensure that the transactions work effectively. Analyses of working capital management activities are required so that appropriate strategies can be formulated and implemented. Focus requires that organizations of all sizes today must have a global viewpoint with strong emphasis on liquidity. In this reading, we examine the different types of working capital and the management issues associated with each. We also look at methods of evaluating the effectiveness of working capital management.
#has-images #manzana-session #reading-dedo-indice
Because analyzing vast amounts of data can be both time consuming and difficult, investors often use a single measure that consolidates this information and reflects the performance of an entire security market.
Investors gather and analyze vast amounts of information about security markets on a continual basis. Because this work can be both time consuming and data intensive, investors often use a single measure that consolidates this information and reflects the performance of an entire security market. Security market indexes were first introduced as a simple measure to reflect the performance of the US stock market. Since then, security market indexes have evolved into i
Reading 45 Security Market Indexes (Intro)
Investors gather and analyze vast amounts of information about security markets on a continual basis. Because this work can be both time consuming and data intensive, investors often use a single measure that consolidates this information and reflects the performance of an entire security market. Security market indexes were first introduced as a simple measure to reflect the performance of the US stock market. Since then, security market indexes have evolved into important multi-purpose tools that help investors track the performance of various security markets, estimate risk, and evaluate the performance of investment managers. They also form the basis for new investment products. in·dex, noun (pl. in·dex·es or in·di·ces) Latin indic-, index, from indicare to indicate: an indicator, sign, or measure of something. ORIGIN OF MARKET INDEXES Investors had access to regularly published data on individual security prices in London as early as 1698, but nearly 200 years passed before they had access to a simple indicator to reflect security market information.1 To give readers a sense of how the US stock market in general performed on a given day, publishers Charles H. Dow and Edward D. Jones introduced the Dow Jones Average, the world's first security market index, in 1884.2 The index, which appeared in The Customers' Afternoon Letter, consisted of the stocks of nine railroads and two industrial companies. It eventually became the Dow Jones Transportation Average.3Convinced that industrial companies, rather than railroads, would be "the great speculative market" of the future, Dow and Jones introduced a second index in May 1896—the Dow Jones Industrial Average (DJIA). It had an initial value of 40.94 and consisted of 12 stocks from major US industries.4 , 5 Today, investors can choose from among thousands of indexes to measure and monitor different security markets and asset classes. This reading is organized as follows. Section 2 defines a security market index and explains how to calculate the price return and total return of an index for a single per
#has-images #reading-puerquito-verde
Corporate governance can be defined as: [...] .
"the system of internal controls and procedures by which individual companies are managed
Corporate Governance Overview
Corporate governance can be defined as: "the system of internal controls and procedures by which individual companies are managed. It provides a framework that defines the rights, roles and responsibilities of various groups . . . within an organization. At its core, corporate governance is the arrangement of check
[...] theory is concerned with resolving problems that can exist between stakeholders due to unaligned goals or different aversion levels to risk.
The agency theory is a supposition that explains the relationship between principals and agents in business. Agency theory is concerned with resolving problems that can exist in agency relationships due to unaligned goals or different aversion levels to risk. The most common agency relationship in finance occurs between shareholders (principal) and company executives (agents).
Agency Theory
What is the 'Agency Theory' The agency theory is a supposition that explains the relationship between principals and agents in business. Agency theory is concerned with resolving problems that can exist in agency relationships due to unaligned goals or different aversion levels to risk. The most common agency relationship in finance occurs between shareholders (principal) and company executives (agents). BREAKING DOWN 'Agency Theory' Agency theory addresses problems that arise due to differences between the goals or desires between the principal and a
#has-images #lingote-de-oro-session #reading-jens
Whatever the approach (top-down or bottom-up), an analyst who estimates the intrinsic value of an equity security is implicitly questioning the accuracy of the market price as an estimate of value.
ties of companies from previously identified attractive sectors. In a bottom-up approach, an analyst typically follows an industry or industries and forecasts fundamentals for the companies in those industries in order to determine valuation. Whatever the approach, an analyst who estimates the intrinsic value of an equity security is implicitly questioning the accuracy of the market price as an estimate of value. Valuation is particularly important in active equity portfolio management, which aims to improve on the return–risk trade-off of a portfolio's benchmark by identifying mispriced securit
Reading 49 Equity Valuation: Concepts and Basic Tools (Intro)
Analysts gather and process information to make investment decisions, including buy and sell recommendations. What information is gathered and how it is processed depend on the analyst and the purpose of the analysis. Technical analysis uses such information as stock price and trading volume as the basis for investment decisions. Fundamental analysis uses information about the economy, industry, and company as the basis for investment decisions. Examples of fundamentals are unemployment rates, gross domestic product (GDP) growth, industry growth, and quality of and growth in company earnings. Whereas technical analysts use information to predict price movements and base investment decisions on the direction of predicted change in prices, fundamental analysts use information to estimate the value of a security and to compare the estimated value to the market price and then base investment decisions on that comparison. This reading introduces equity valuation models used to estimate the intrinsic value (synonym: fundamental value ) of a security; intrinsic value is based on an analysis of investment fundamentals and characteristics. The fundamentals to be considered depend on the analyst's approach to valuation. In a top-down approach, an analyst examines the economic environment, identifies sectors that are expected to prosper in that environment, and analyzes securities of companies from previously identified attractive sectors. In a bottom-up approach, an analyst typically follows an industry or industries and forecasts fundamentals for the companies in those industries in order to determine valuation. Whatever the approach, an analyst who estimates the intrinsic value of an equity security is implicitly questioning the accuracy of the market price as an estimate of value. Valuation is particularly important in active equity portfolio management, which aims to improve on the return–risk trade-off of a portfolio's benchmark by identifying mispriced securities. This reading is organized as follows. Section 2 discusses the implications of differences between estimated value and market price. Section 3 introduces three major categor
#has-images #lingote-de-oro-session #reading-chimenea-industrial
[...] is the analysis of a specific branch of manufacturing, service, or trade.
Industry analysis is the analysis of a specific branch of manufacturing, service, or trade. Understanding the industry in which a company operates provides an essential framework for the analysis of the individual company—that is, company analysis. Equity analysis and credit
Reading 48 Introduction to Industry and Company Analysis (Intro)
Industry analysis is the analysis of a specific branch of manufacturing, service, or trade. Understanding the industry in which a company operates provides an essential framework for the analysis of the individual company—that is, company analysis . Equity analysis and credit analysis are often conducted by analysts who concentrate on one or several industries, which results in synergies and efficiencies in gathering and interpreting information. Among the questions we address in this reading are the following: What are the similarities and differences among industry classification systems? How does an analyst go about choosing a peer group of companies? What are the key factors to consider when analyzing an industry? What advantages are enjoyed by companies in strategically well-positioned industries? After discussing the uses of industry analysis in the next section, Sections 3 and 4 discuss, respectively, approaches to identifying similar companies and industry
#analyst-notes #has-images #money #reading-agustin-carsten
The money multiplier is the amount by which a change in the monetary base is multiplied to calculate the final change in the money supply. Money Multiplier = 1/b, where b is the required reserve ratio. In our example, b is 0.2, so money multiplier = 1/0.2 = 5.
n out $640 to another person. Thus, BOA creates $640 of money supply. The process goes on and on. With each deposit and loan, more money is created. However, the money creation process does not create an infinite amount of money. The money multiplier is the amount by which a change in the monetary base is multiplied to calculate the final change in the money supply. Money Multiplier = 1/b, where b is the required reserve ratio. In our example, b is 0.2, so money multiplier = 1/0.2 = 5.
Subject 1. What is Money?
liquid of all assets, due to its function as the medium of exchange. However, many methods of holding money do not yield an interest return and the purchase power of money will decline during a time of inflation. <span>The Money Creation Process Reserves are the cash in a bank's vault and deposits at Federal Reserve Banks. Under the fractional reserve banking system, a bank is obligated to hold a minimum amount of reserves to back up its deposits. Reserves held for that purpose, which are expressed as a percentage of a bank's demand deposits, are called required reserves. Therefore, the required reserve ratio is the percentage of a bank's deposits that are required to be held as reserves. Banks create deposits when they make loans; the new deposits created are new money. Example Suppose the required reserve ratio in the U.S. is 20%, and then suppose that you deposit $1,000 cash with Citibank. Citibank keeps $200 of the $1,000 in reserves. The remaining $800 of excess reserves can be loaned out to, say, John. After the loan is made, the money supply increases by $800 (your $1,000 + John's $800). After getting the loan, John deposits the $800 with Bank of America (BOA). BOA keeps $160 of the $800 in reserves and can now loan out $640 to another person. Thus, BOA creates $640 of money supply. The process goes on and on. With each deposit and loan, more money is created. However, the money creation process does not create an infinite amount of money. The money multiplier is the amount by which a change in the monetary base is multiplied to calculate the final change in the money supply. Money Multiplier = 1/b, where b is the required reserve ratio. In our example, b is 0.2, so money multiplier = 1/0.2 = 5. Definitions of Money There are different definitions of money. The two most widely used measures of money in the U.S. are: The M1 Money Su
#has-images #microscopio-session #reading-calculadora
Macroeconomics focuses on national aggregates, such as:
The rate of change in the general level of prices
The overall level of interest rates.
the economic activity and behavior of individual economic units, such as a household, a company, or a market for a particular good or service, and macroeconomics is the study of the aggregate activities of households, companies, and markets. <span>Macroeconomics focuses on national aggregates, such as total investment, the amount spent by all businesses on plant and equipment; total consumption, the amount spent by all households on goods and services; the rate of change in the general level of prices; and the overall level of interest rates. <span><body><html>
Reading 16 Aggregate Output, Prices, and Economic Growth Introduction
In the field of economics, microeconomics is the study of the economic activity and behavior of individual economic units, such as a household, a company, or a market for a particular good or service, and macroeconomics is the study of the aggregate activities of households, companies, and markets. Macroeconomics focuses on national aggregates, such as total investment, the amount spent by all businesses on plant and equipment; total consumption, the amount spent by all households on goods and services; the rate of change in the general level of prices; and the overall level of interest rates. Macroeconomic analysis examines a nation's aggregate output and income, its competitive and comparative advantages, the productivity of its labor force, its price level and inflation rate, and the actions of its national government and central bank. The objective of macroeconomic analysis is to address such fundamental questions as: What is an economy's aggregate output, and how is aggregate income measured? What factors determine the level of aggregate output/income for an economy? What are the levels of aggregate demand and aggregate supply of goods and services within the country? Is the level of output increasing or decreasing, and at what rate? Is the general price level stable, rising, or falling? Is unemployment rising or falling? Are households spending or saving more? Are workers able to produce more output for a given level of inputs? Are businesses investing in and expanding their productive capacity? Are exports (imports) rising or falling? From an investment perspective, investors must be able to evaluate a country's current economic environment and to forecast its future economic environment in order to identify asset classes and securities that will benefit from economic trends occurring within that country. Macroeconomic variables—such as the level of inflation, unemployment, consumption, government spending, and investment—affect the overall level of activity within a country. They also have different impacts on the growth and profitability of industries within a country, the companies within those industries, and the returns of the securities issued by those companies. This reading is organized as follows: Section 2 describes gross domestic product and related measures of domestic output and income. Section 3 discusses short-run and long-
#fourier-analysis #functional-analysis #harmonic-analysis #hilbert-space
Fourier series can be conveniently studied in the context of Hilbert spaces, which provides a connection between harmonic analysis and functional analysis.
Harmonic analysis - Wikipedia
pport (these include functions of compact support), then its Fourier transform is never compactly supported. This is a very elementary form of an uncertainty principle in a harmonic analysis setting. See also: Convergence of Fourier series. <span>Fourier series can be conveniently studied in the context of Hilbert spaces, which provides a connection between harmonic analysis and functional analysis. Contents [hide] 1 Applied harmonic analysis 2 Abstract harmonic analysis 3 Other branches 4 See also 5 References 6 Bibliography 7 External links Applied harmonic analy
A Hilbert space is an abstract vector space possessing the structure of [...] that allows length and angle to be measured.
an inner product
A Hilbert space is an abstract vector space possessing the structure of an inner product that allows length and angle to be measured.
Hilbert space - Wikipedia
e state of a vibrating string can be modeled as a point in a Hilbert space. The decomposition of a vibrating string into its vibrations in distinct overtones is given by the projection of the point onto the coordinate axes in the space. <span>The mathematical concept of a Hilbert space, named after David Hilbert, generalizes the notion of Euclidean space. It extends the methods of vector algebra and calculus from the two-dimensional Euclidean plane and three-dimensional space to spaces with any finite or infinite number of dimensions. A Hilbert space is an abstract vector space possessing the structure of an inner product that allows length and angle to be measured. Furthermore, Hilbert spaces are complete: there are enough limits in the space to allow the techniques of calculus to be used. Hilbert spaces arise naturally and frequently in mathematics and physics, typically as infinite-dimensional function spaces. The earliest Hilbert spaces were studied from this point o
#topology
the structure of an inner product allows [...] to be measured.
length and angle
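As a brief illustration (standard definitions, not taken from the extract above), the inner product induces both notions directly:

\[
\|x\| = \sqrt{\langle x, x \rangle},
\qquad
\cos\theta = \frac{\langle x, y \rangle}{\|x\|\,\|y\|},
\]

so specifying the inner product is enough to measure length and angle.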
#factors-affecting-relationships-and-cg #has-images #market-factors #puerquito-session #reading-puerquito-verde
Corporate takeovers can happen in different ways: proxy contest or proxy fight, tender offer, hostile takeover, etc.
Corporate takeovers can happen in different ways: proxy contest or proxy fight, tender offer, hostile takeover, etc. The justification for the use of various anti-takeover defenses should rest on the support of the majority of shareholders and on the demonstration that preservation of the integrity of
Subject 6. Factors Affecting Stakeholder Relationships and Corporate Governance
Stakeholder relationships and corporate governance are continually shaped and influenced by a variety of market and non-market factors. Market Factors Shareholder engagement involves a company's interactions with its shareholders. It can provide benefits that include building support against short-term activist investors, countering negative recommendations from proxy advisory firms, and receiving greater support for management's position. Shareholder activism encompasses a range of strategies that may be used by shareholders seeking to compel a company to act in a desired manner. It can take any of several forms: proxy battles, public campaigns, shareholder resolutions, litigation, and negotiations with management. Corporate takeovers can happen in different ways: proxy contest or proxy fight, tender offer, hostile takeover, etc. The justification for the use of various anti-takeover defenses should rest on the support of the majority of shareholders and on the demonstration that preservation of the integrity of the company is in the long-term interests of shareholders. Non-Market Factors These factors include the legal environment, the media, and the corporate governance industry itself.
The justification for the use of various anti-takeover defenses should rest on the support of the majority of shareholders and on the demonstration that preservation of the integrity of the company is in the long-term interests of shareholders.
Corporate takeovers can happen in different ways: proxy contest or proxy fight, tender offer, hostile takeover, etc. The justification for the use of various anti-takeover defenses should rest on the support of the majority of shareholders and on the demonstration that preservation of the integrity of the company is in the long-term interests of shareholders.
#hilbert-space
Hilbert spaces are complete: there are [...] to allow the techniques of calculus to be used.
enough limits in the space
Hilbert spaces are complete: there are enough limits in the space to allow the techniques of calculus to be used.
Hilbert spaces are complete: there are enough limits in the space to allow [...].
the techniques of calculus to be used
Hilbert spaces are [...]: there are enough limits in the space to allow the techniques of calculus to be used.
#matrix-decomposition
By the spectral theorem, real symmetric matrices and complex Hermitian matrices have an eigenbasis; that is, every vector is expressible as a linear combination of eigenvectors. In both cases, all eigenvalues are real.
Matrix (mathematics) - Wikipedia
x. In complex matrices, symmetry is often replaced by the concept of Hermitian matrices, which satisfy A ∗ = A, where the star or asterisk denotes the conjugate transpose of the matrix, that is, the transpose of the complex conjugate of A. <span>By the spectral theorem, real symmetric matrices and complex Hermitian matrices have an eigenbasis; that is, every vector is expressible as a linear combination of eigenvectors. In both cases, all eigenvalues are real. [29] This theorem can be generalized to infinite-dimensional situations related to matrices with infinitely many rows and columns, see below. Invertible matrix and its inverse[edit s
a spectral theorem is a result about when a linear operator or matrix can be diagonalized
a spectral theorem is a result about when [...]
a linear operator or matrix can be diagonalized
By the spectral theorem, real symmetric matrices and complex Hermitian matrices have an eigenbasis
By the spectral theorem, real symmetric matrices and complex Hermitian matrices have an [...]
eigenbasis
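For reference, the finite-dimensional statement behind these cards can be written as follows (standard form, not quoted from the source extract):

\[
A = A^{\mathsf T} \in \mathbb{R}^{n\times n} \;\Rightarrow\; A = Q \Lambda Q^{\mathsf T},
\qquad
A = A^{*} \in \mathbb{C}^{n\times n} \;\Rightarrow\; A = U \Lambda U^{*},
\]

where Q is orthogonal, U is unitary, the columns of Q (or U) form an eigenbasis, and \(\Lambda\) is a diagonal matrix of real eigenvalues.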
#inner-product #linear-algebra
Inner product spaces generalize Euclidean spaces to vector spaces of any (possibly infinite) dimension, and are studied in functional analysis.
Inner product space - Wikipedia
tors. Inner products allow the rigorous introduction of intuitive geometrical notions such as the length of a vector or the angle between two vectors. They also provide the means of defining orthogonality between vectors (zero inner product). <span>Inner product spaces generalize Euclidean spaces (in which the inner product is the dot product, also known as the scalar product) to vector spaces of any (possibly infinite) dimension, and are studied in functional analysis. The first usage of the concept of a vector space with an inner product is due to Peano, in 1898. [1] An inner product naturally induces an associated norm, thus an inner product space
the dot product is the inner product in [...]
the Euclidean space
Inner product spaces generalize Euclidean spaces (in which the inner product is the dot product, also known as the scalar product) to vector spaces of any (possibly infinite) dimension, and are studied in functional analysis.
Inner product spaces generalize Euclidean spaces to [...], and are studied in functional analysis.
vector spaces of any (possibly infinite) dimension
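A short illustration (standard notation, not from the extract): in Euclidean space the inner product is the dot product,

\[
\langle x, y \rangle = x \cdot y = \sum_{i=1}^{n} x_i y_i , \qquad x, y \in \mathbb{R}^{n},
\]

and the same axioms carry over unchanged to spaces of infinite dimension.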
[unknown IMAGE 1784659447052] #has-images
In mathematics, a Cauchy sequence is a sequence whose elements become arbitrarily close to each other as the sequence progresses.
Cauchy sequence - Wikipedia
"ultimate destination" of this sequence (that is, the limit) exists. [imagelink] (b) A sequence that is not Cauchy. The elements of the sequence fail to get arbitrarily close to each other as the sequence progresses. <span>In mathematics, a Cauchy sequence ( French pronunciation: [koʃi]; English: /ˈkoʊʃiː/ KOH-shee), named after Augustin-Louis Cauchy, is a sequence whose elements become arbitrarily close to each other as the sequence progresses. [1] More precisely, given any small positive distance, all but a finite number of elements of the sequence are less than that given distance from each other. It is not sufficient for
#has-images
[unknown IMAGE 1784662068492]
In mathematics, a Cauchy sequence is a sequence whose elements [...] as the sequence progresses.
become arbitrarily close to each other
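In symbols (a standard formalization, not quoted from the Wikipedia extract), a real sequence \((a_n)\) is Cauchy if

\[
\forall \varepsilon > 0 \;\; \exists N \;\; \forall m, n > N : \; |a_m - a_n| < \varepsilon .
\]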
A Hilbert space is an inner product space that is complete with respect to the norm.
Hilbert vs Inner Product Space - Mathematics Stack Exchange
add a comment | up vote 1 down vote <span>A Hilbert space is an inner product space that is complete with respect to the norm. Completeness is what differentiates the two. Not every metric space can be defined by an inner product, for instance the space of continuous functions on [0,1][0,1] with the supremu
When an interrupt occurs, the system needs to save the current context of the process running on the CPU so that it can restore that context when its processing is done, essentially suspending the process and then resuming it. The context is represented in the PCB of the process. It includes the value of the CPU registers, the process state (see Figure 3.2), and memory-management information
Switching the CPU to another process requires performing a state save of the current process and a state restore of a different process. This task is known as a context switch. When a context switch occurs, the kernel saves the context of the old process in its PCB and loads the saved context of the new process scheduled to run. Context-switch time is pure overhead, because the system does no useful work while switching
The iOS 4 programming API provides support for multitasking, thus allowing a process to run in the background without being suspended. However, this support is limited and available only for certain application types, including applications
• running a single, finite-length task (such as completing a download of content from a network);
• receiving notifications of an event occurring (such as a new email message);
• with long-running background tasks (such as an audio player).
Apple probably limits multitasking due to battery life and memory use concerns. The CPU certainly has the features to support multitasking, but Apple chooses to not take advantage of some of them in order to better manage resource use. Android does not place such constraints on the types of applications that can run in the background. If an application requires processing while in the background, the application must use a service, a separate application component that runs on behalf of the background process. Consider a streaming audio application: if the application moves to the background, the service continues to send audio files to the audio device driver on behalf of the background application. In fact, the service will continue to run even if the background application is suspended. Services do not have a user interface and have a small memory footprint, thus providing an efficient technique for multitasking in a mobile environment
Most operating systems (including UNIX, Linux, and Windows) identify processes according to a unique process identifier (or pid), which is typically an integer number. The pid provides a unique value for each process in the system, and it can be used as an index to access various attributes of a process within the kernel
The init process (which always has a pid of 1) serves as the root parent process for all user processes
The kthreadd process is responsible for creating additional processes that perform tasks on behalf of the kernel
The sshd process is responsible for managing clients that connect to the system by using ssh (which is short for secure shell). The login process is responsible for managing clients that directly log onto the system
A child process may be able to obtain its resources directly from the operating system, or it may be constrained to a subset of the resources of the parent process. The parent may have to partition its resources among its children, or it may be able to share some resources (such as memory or files) among several of its children
Restricting a child process to a subset of the parent's resources prevents any process from overloading the system by creating too many child processes
When a process creates a new process, two possibilities for execution exist:
1. The parent continues to execute concurrently with its children.
2. The parent waits until some or all of its children have terminated.
There are also two address-space possibilities for the new process:
1. The child process is a duplicate of the parent process (it has the same program and data as the parent).
2. The child process has a new program loaded into it
To illustrate these differences, let's first consider the UNIX operating system. In UNIX, as we've seen, each process is identified by its process identifier, which is a unique integer. A new process is created by the fork() system call. The new process consists of a copy of the address space of the original process. This mechanism allows the parent process to communicate easily with its child process. Both processes (the parent and the child) continue execution at the instruction after the fork(), with one difference: the return code for the fork() is zero for the new (child) process, whereas the (nonzero) process identifier of the child is returned to the parent. After a fork() system call, one of the two processes typically uses the exec() system call to replace the process's memory space with a new program. The exec() system call loads a binary file into memory (destroying the memory image of the program containing the exec() system call) and starts its execution. In this manner, the two processes are able to communicate and then go their separate ways. The parent can then create more children; or, if it has nothing else to do while the child runs, it can issue a wait() system call to move itself off the ready queue until the termination of the child
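A minimal POSIX-only sketch of that sequence, using Python's os module as a thin wrapper over the UNIX calls described above (the program /bin/ls is just an illustrative choice of ours):

import os

pid = os.fork()                          # duplicate the address space of this process

if pid == 0:
    # Child: fork() returned 0 here; replace our memory image with a new program.
    os.execv("/bin/ls", ["ls", "-l"])    # does not return on success
else:
    # Parent: fork() returned the child's (nonzero) pid.
    print(f"parent {os.getpid()} waiting for child {pid}")
    child_pid, status = os.wait()        # move off the ready queue until the child terminates
    print(f"child {child_pid} exited with status {status}")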
Processes executing concurrently in the operating system may be either independent processes or cooperating processes. A process is independent if it cannot affect or be affected by the other processes executing in the system. Any process that does not share data with any other process is independent. A process is cooperating if it can affect or be affected by the other processes executing in the system. Clearly, any process that shares data with other processes is a cooperating process
There are several reasons for providing an environment that allows process cooperation:
• Information sharing. Since several users may be interested in the same piece of information (for instance, a shared file), we must provide an environment to allow concurrent access to such information.
• Computation speedup. If we want a particular task to run faster, we must break it into subtasks, each of which will be executing in parallel with the others. Notice that such a speedup can be achieved only if the computer has multiple processing cores.
• Modularity. We may want to construct the system in a modular fashion, dividing the system functions into separate processes or threads, as we discussed in Chapter 2.
• Convenience. Even an individual user may work on many tasks at the same time. For instance, a user may be editing, listening to music, and compiling in parallel
Cooperating processes require an interprocess communication (IPC) mechanism that will allow them to exchange data and information. There are two fundamental models of interprocess communication: shared memory and message passing
As the number of processing cores on systems increases, it is possible that we will see message passing as the preferred mechanism for IPC
Two types of buffers can be used. The unbounded buffer places no practical limit on the size of the buffer. The consumer may have to wait for new items, but the producer can always produce new items. The bounded buffer assumes a fixed buffer size. In this case, the consumer must wait if the buffer is empty, and the producer must wait if the buffer is full
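A minimal sketch of the bounded-buffer case (the buffer size 5 and the item count 20 are arbitrary choices of ours), using Python's thread-safe queue: put() blocks while the buffer is full and get() blocks while it is empty, which is exactly the waiting behaviour described above.

import queue
import threading

buffer = queue.Queue(maxsize=5)          # bounded buffer with a fixed size of 5

def producer():
    for item in range(20):
        buffer.put(item)                 # blocks while the buffer is full
        print(f"produced {item}")

def consumer():
    for _ in range(20):
        item = buffer.get()              # blocks while the buffer is empty
        print(f"consumed {item}")

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()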
The process that creates a new mailbox is that mailbox's owner by default. Initially, the owner is the only process that can receive messages through this mailbox. However, the ownership and receiving privilege may be passed to other processes through appropriate system calls
#cabra-session #ethical-decisions-framework #ethics #has-images #reading-rene-toussaint
The CFA Institute Ethical Decision-Making Framework is a tool for analyzing and evaluating ethical scenarios in the investment profession. The Identify-Consider-Act-Reflect framework advances a decision-making structure for situations that often fall outside the clear confines of "right" and "wrong."
Subject 6. Ethical Decision-Making Frameworks
The CFA Institute Ethical Decision-Making Framework is a tool for analyzing and evaluating ethical scenarios in the investment profession. The Identify-Consider-Act- Reflect framework advances a decision-making structure for situations that often fall outside the clear confines of "right" and "wrong." Neither a linear model nor a checklist, the framework provides a summary of the key elements of making ethical decisions. The framework is offered with the understanding th
Neither a linear model nor a checklist, the framework provides a summary of the key elements of making ethical decisions. The framework is offered with the understanding that there likely will be additional influences, conflicts, and actions unique to each ethical scenario and beyond those detailed in the framework.
hical scenarios in the investment profession. The Identify-Consider-Act- Reflect framework advances a decision-making structure for situations that often fall outside the clear confines of "right" and "wrong." <span>Neither a linear model nor a checklist, the framework provides a summary of the key elements of making ethical decisions. The framework is offered with the understanding that there likely will be additional influences, conflicts, and actions unique to each ethical scenario and beyond those detailed in the framework. Identify: ethical principles, duties to others (to whom do you owe a duty?), important facts, conflicts of interest. Consider: situational Influen
Identify: ethical principles, duties to others (to whom do you owe a duty?), important facts, conflicts of interest.
g ethical decisions. The framework is offered with the understanding that there likely will be additional influences, conflicts, and actions unique to each ethical scenario and beyond those detailed in the framework. <span>Identify: ethical principles, duties to others (to whom do you owe a duty?), important facts, conflicts of interest. Consider: situational Influences, alternative actions, additional guidance. Act: make a decision or elevate the issue to a higher authority. Reflect: What did you
Consider: situational Influences, alternative actions, additional guidance.
nd actions unique to each ethical scenario and beyond those detailed in the framework. Identify: ethical principles, duties to others (to whom do you owe a duty?), important facts, conflicts of interest. <span>Consider: situational Influences, alternative actions, additional guidance. Act: make a decision or elevate the issue to a higher authority. Reflect: What did you learn? The lessons you learn will help you reach ethical decisions more quickly in the f
Act: make a decision or elevate the issue to a higher authority.
work. Identify: ethical principles, duties to others (to whom do you owe a duty?), important facts, conflicts of interest. Consider: situational Influences, alternative actions, additional guidance. <span>Act: make a decision or elevate the issue to a higher authority. Reflect: What did you learn? The lessons you learn will help you reach ethical decisions more quickly in the future <span><body><html>
Reflect: What did you learn? The lessons you learn will help you reach ethical decisions more quickly in the future
o others (to whom do you owe a duty?), important facts, conflicts of interest. Consider: situational Influences, alternative actions, additional guidance. Act: make a decision or elevate the issue to a higher authority. <span>Reflect: What did you learn? The lessons you learn will help you reach ethical decisions more quickly in the future <span><body><html>
#globo-terraqueo-session #has-images #monetary-policy #reading-agustin-carsten
Monetary policy refers to government or central bank activities that are directed toward influencing the [...] in an economy.
quantity of money and credit
Monetary policy refers to government or central bank activities that are directed toward influencing the quantity of money and credit in an economy.
2. Monetary Policy
As stated above, monetary policy refers to government or central bank activities that are directed toward influencing the quantity of money and credit in an economy. Before we can begin to understand how monetary policy is implemented, we must examine the functions and role of money . We can then explore the special role that central banks play i
Before we can begin to understand how monetary policy is implemented, we must examine the functions and role of [...] .
Before we can begin to understand how monetary policy is implemented, we must examine the functions and role of money . We can then explore the special role that central banks play in today's economies.
As stated above, monetary policy refers to government or central bank activities that are directed toward influencing the quantity of money and credit in an economy. Before we can begin to understand how monetary policy is implemented, we must examine the functions and role of money . We can then explore the special role that central banks play in today's economies.
The most generic definition of money is that it is any generally accepted [...] .
medium of exchange
The most generic definition of money is that it is any generally accepted medium of exchange. A medium of exchange is any asset that can be used to purchase goods and services or to repay debts. Money can thus eliminate the debilitating double coincidence of the "wants" problem
A medium of exchange is any asset that can be used to purchase goods and services or to repay debts.
The most generic definition of money is that it is any generally accepted medium of exchange. A medium of exchange is any asset that can be used to purchase goods and services or to repay debts. Money can thus eliminate the debilitating double coincidence of the "wants" problem that exists in a barter economy. When this medium of exchange exists, a farmer wishing to sell wheat
When a medium of exchange exists, a farmer wishing to sell wheat for wine does not need to identify a wine producer in search of wheat. Instead, he can sell wheat to those who want wheat in exchange for money. The farmer can then exchange this money for wine with a wine producer, who in turn can exchange that money for the goods or services that she wants.
pted medium of exchange. A medium of exchange is any asset that can be used to purchase goods and services or to repay debts. Money can thus eliminate the debilitating double coincidence of the "wants" problem that exists in a barter economy. <span>When this medium of exchange exists, a farmer wishing to sell wheat for wine does not need to identify a wine producer in search of wheat. Instead, he can sell wheat to those who want wheat in exchange for money. The farmer can then exchange this money for wine with a wine producer, who in turn can exchange that money for the goods or services that she wants. <span><body><html>
A medium of exchange will only be acceptable if it has a [...].
known value
The process of [...] is a crucial concept for understanding the role that money plays in an economy.
money creation
The process of money creation is a crucial concept for understanding the role that money plays in an economy. Its potency depends on the amount of money that banks keep in reserve to meet the withdrawals of its cust
#has-images #reading-agustin-carsten
Banking in which reserves constitute a fraction of deposits.
The potency of money creation process depends on the amount of money that banks keep in reserve to meet the withdrawals of its customers.
The process of money creation is a crucial concept for understanding the role that money plays in an economy. Its potency depends on the amount of money that banks keep in reserve to meet the withdrawals of its customers. This practice of lending customers' money to others on the assumption that not all customers will want all of their money back at any one time is known as fractional reserve banking 
The practice of lending customers' money to others on the assumption that not all customers will want all of their money back at any one time is known as fractional reserve banking
The process of money creation is a crucial concept for understanding the role that money plays in an economy. Its potency depends on the amount of money that banks keep in reserve to meet the withdrawals of its customers. <span>This practice of lending customers' money to others on the assumption that not all customers will want all of their money back at any one time is known as fractional reserve banking <span><body><html>
#has-images #monetary-policy #paper-money-creation-process #reading-agustin-carsten
The practice of lending customers' money to others on the assumption that not all customers will want all of their money back at any one time is known as [...]
The amount of money that the banking system creates through the practice of fractional reserve banking is a function of 1 divided by the reserve requirement, a quantity known as the money multiplier.
The amount of money that the banking system creates through the practice of fractional reserve banking is a function of 1 divided by the reserve requirement, a quantity known as the money multiplier. In the case just examined, the money multiplier is 1/0.10 = 10. This equation implies that the smaller the reserve requirement, the greater the money multiplier effect. </
The equation (1/Reserve Requirement) implies that the smaller the reserve requirement, the greater the money multiplier effect.
hat the banking system creates through the practice of fractional reserve banking is a function of 1 divided by the reserve requirement, a quantity known as the money multiplier. In the case just examined, the money multiplier is 1/0.10 = 10. <span>This equation implies that the smaller the reserve requirement, the greater the money multiplier effect. <span><body><html>
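The 1/(reserve requirement) result is just the sum of a geometric series; a sketch of the step the extract omits (with D the initial deposit and r the reserve requirement):

\[
\text{Total money} = D + D(1-r) + D(1-r)^2 + \cdots = D \sum_{k=0}^{\infty} (1-r)^k = \frac{D}{r},
\]

so the money multiplier is \(1/r\); with \(r = 0.10\) this gives 10, as in the example above.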
What serves as a link between shareholders and managers?
#cabra-session #ethics #has-images #reading-rene-toussaint
[...] are those actions that are perceived as beneficial and conform to the ethical expectations of society.
Ethical actions
Ethical actions are those actions that are perceived as beneficial and conform to the ethical expectations of society.
Subject 1. Ethics
rding what is good, acceptable, or responsible behavior, and what is bad, unacceptable, or forbidden behavior. They provide guidance for our behavior. Ethical conduct is behavior that follows moral principles. <span>Ethical actions are those actions that are perceived as beneficial and conform to the ethical expectations of society. Ethics encompass a set of moral principles ( code of ethics ) and standards of conduct that provide guidance for our behavior. Violations can ha
#cabra-session #ethics #ethics-and-professionalism #has-images #reading-rene-toussaint
A profession may adopt [...] to enhance and clarify the code of ethics.
A profession may adopt standards of conduct to enhance and clarify the code of ethics.
Subject 2. Ethics and Professionalism
ected behaviors of its members. generates confidence not only among members of the organization but also among non-members (clients, prospective clients, and/or the general public). <span>A profession may adopt standards of conduct to enhance and clarify the code of ethics. <span><body><html>
#cabra-session #ethical-vs-legal-standards #ethics #has-images #reading-rene-toussaint
[...] are not always the best mechanism to reduce unethical behavior.
Laws and regulations are not always the best mechanism to reduce unethical behavior. They often lag behind current circumstances; legal standards are often created to address past ethical failings and do not provide guidance for an evolving and increasingly complex worl
Subject 5. Ethical vs. Legal Standards
but they are not always the same. Some legal behaviors are not considered ethical. Some ethical behaviors may not be legal in certain countries. <span>Laws and regulations are not always the best mechanism to reduce unethical behavior. They often lag behind current circumstances; legal standards are often created to address past ethical failings and do not provide guidance for an evolving and increasingly complex world. In addition, new laws designed to reduce or eliminate conduct that adversely affects the markets can create opportunities for different but similarly problematic conduct. Investment professionals should act beyond legal standards, making good judgments and responsible choices even in the absence of clear laws or rules.
Laws and regulations often lag behind current circumstances; legal standards are often created to address past ethical failings and do not provide guidance for an evolving and increasingly complex world.
Laws and regulations are not always the best mechanism to reduce unethical behavior. They often lag behind current circumstances; legal standards are often created to address past ethical failings and do not provide guidance for an evolving and increasingly complex world. In addition, new laws designed to reduce or eliminate conduct that adversely affects the markets can create opportunities for different but similarly problematic conduct.
New laws designed to reduce or eliminate conduct that adversely affects the markets can create opportunities for different but similarly problematic conduct.
t mechanism to reduce unethical behavior. They often lag behind current circumstances; legal standards are often created to address past ethical failings and do not provide guidance for an evolving and increasingly complex world. In addition, <span>new laws designed to reduce or eliminate conduct that adversely affects the markets can create opportunities for different but similarly problematic conduct. <span><body><html>
What are the Internal factors that affect Working Capital needs?
Company size and growth rates
Organizational structure
Sophistication of working capital management
Borrowing and investing positions/activities/capacities
Company size and growth rates
Organizational structure
Sophistication of working capital management
Borrowing and investing positions/activities/capacities
What are the External Factors Affecting Working Capital Needs?
BINCE
Banking services
Interest rates
New technologies and products
Competitors
Economy
#has-images #introduction #lingote-de-oro-session #reading-ana-de-la-garza
This reading provides an overview of equity securities and their different features and establishes the background required to analyze and value equity securities in a global context.
This reading provides an overview of equity securities and their different features and establishes the background required to analyze and value equity securities in a global context. It addresses the following questions: What distinguishes common shares from preference shares, and what purposes do these securities serve in financing a company's o
Reading 47 Overview of Equity Securities (Intro)
Equity securities represent ownership claims on a company's net assets. As an asset class, equity plays a fundamental role in investment analysis and portfolio management because it represents a significant portion of many individual and institutional investment portfolios. The study of equity securities is important for many reasons. First, the decision on how much of a client's portfolio to allocate to equities affects the risk and return characteristics of the entire portfolio. Second, different types of equity securities have different ownership claims on a company's net assets, which affect their risk and return characteristics in different ways. Finally, variations in the features of equity securities are reflected in their market prices, so it is important to understand the valuation implications of these features. This reading provides an overview of equity securities and their different features and establishes the background required to analyze and value equity securities in a global context. It addresses the following questions: What distinguishes common shares from preference shares, and what purposes do these securities serve in financing a company's operations? What are convertible preference shares, and why are they often used to raise equity for unseasoned or highly risky companies? What are private equity securities, and how do they differ from public equity securities? What are depository receipts and their various types, and what is the rationale for investing in them? What are the risk factors involved in investing in equity securities? How do equity securities create company value? What is the relationship between a company's cost of equity, its return on equity, and investors' required rate of return? The remainder of this reading is organized as follows. Section 2 provides an overview of global equity markets and their historical performance. Section 3 examines
This reading addresses the following questions:
What distinguishes common shares from preference shares, and what purposes do these securities serve in financing a company's operations?
What are convertible preference shares, and why are they often used to raise equity for unseasoned or highly risky companies?
What are private equity securities, and how do they differ from public equity securities?
What are depository receipts and their various types, and what is the rationale for investing in them?
What are the risk factors involved in investing in equity securities?
How do equity securities create company value?
What is the relationship between a company's cost of equity, its return on equity, and investors' required rate of return?
This reading provides an overview of equity securities and their different features and establishes the background required to analyze and value equity securities in a global context. It addresses the following questions: What distinguishes common shares from preference shares, and what purposes do these securities serve in financing a company's operations? What are convertible preference shares, and why are they often used to raise equity for unseasoned or highly risky companies? What are private equity securities, and how do they differ from public equity securities? What are depository receipts and their various types, and what is the rationale for investing in them? What are the risk factors involved in investing in equity securities? How do equity securities create company value? What is the relationship between a company's cost of equity, its return on equity, and investors' required rate of return?
#has-images #introduction #prerequisite-session #reading-dildo
If a firm lowers its price, will its total revenue also fall? Are there conditions under which revenue might rise as price falls and what are those? Why?
nal to violate this "law of demand"? What are appropriate measures of how sensitive the quantity demanded or supplied is to changes in price, income, and prices of other goods? What affects those sensitivities? <span>If a firm lowers its price, will its total revenue also fall? Are there conditions under which revenue might rise as price falls and what are those? Why? What is an appropriate measure of the total value consumers or producers receive from the opportunity to buy and sell goods and services in a free market? How might gove
Prerequisite Reading Demand and Supply Analysis: Introduction
el of markets, he or she cannot hope to forecast how external events—such as a shift in consumer tastes or changes in taxes and subsidies or other intervention in markets—will influence a firm's revenue, earnings, and cash flows. <span>Having grasped the tools and concepts presented in this reading, the reader should also be able to understand many important economic relations and facts and be able to answer questions, such as: Why do consumers usually buy more when the price falls? Is it irrational to violate this "law of demand"? What are appropriate measures of how sensitive the quantity demanded or supplied is to changes in price, income, and prices of other goods? What affects those sensitivities? If a firm lowers its price, will its total revenue also fall? Are there conditions under which revenue might rise as price falls and what are those? Why? What is an appropriate measure of the total value consumers or producers receive from the opportunity to buy and sell goods and services in a free market? How might government intervention reduce that value, and what is an appropriate measure of that loss? What tools are available that help us frame the trade-offs that consumers and investors face as they must give up one opportunity to pursue another? Is it reasonable to expect markets to converge to an equilibrium price? What are the conditions that would make that equilibrium stable or unstable in response to external shocks? How do different types of auctions affect price discovery? This reading is organized as follows. Section 2 explains how economists classify markets. Section 3 covers the basic principles and concepts o
What is an appropriate measure of the total value consumers or producers receive from the opportunity to buy and sell goods and services in a free market? How might government intervention reduce that value, and what is an appropriate measure of that loss?
prices of other goods? What affects those sensitivities? If a firm lowers its price, will its total revenue also fall? Are there conditions under which revenue might rise as price falls and what are those? Why? <span>What is an appropriate measure of the total value consumers or producers receive from the opportunity to buy and sell goods and services in a free market? How might government intervention reduce that value, and what is an appropriate measure of that loss? What tools are available that help us frame the trade-offs that consumers and investors face as they must give up one opportunity to pursue another? Is i
Is it reasonable to expect markets to converge to an equilibrium price? What are the conditions that would make that equilibrium stable or unstable in response to external shocks?
that value, and what is an appropriate measure of that loss? What tools are available that help us frame the trade-offs that consumers and investors face as they must give up one opportunity to pursue another? <span>Is it reasonable to expect markets to converge to an equilibrium price? What are the conditions that would make that equilibrium stable or unstable in response to external shocks? How do different types of auctions affect price discovery? <span><body><html>
How do different types of auctions affect price discovery?
opportunity to pursue another? Is it reasonable to expect markets to converge to an equilibrium price? What are the conditions that would make that equilibrium stable or unstable in response to external shocks? <span>How do different types of auctions affect price discovery? <span><body><html>
#has-images #introduction #prerequisite-session #reading-saco-de-polipropileno
The theory of the firm , the subject of this reading, is the study of the supply of goods and services by profit-maximizing firms.
The theory of the firm , the subject of this reading, is the study of the supply of goods and services by profit-maximizing firms. Conceptually, profit is the difference between revenue and costs. Revenue is a function of selling price and quantity sold, which are determined by the demand and supply behavior in the
Prerequisite Demand and Supply Analysis: The firm
icroeconomics gives rise to the theory of the consumer and theory of the firm as two branches of study. The theory of the consumer is the study of consumption—the demand for goods and services—by utility-maximizing individuals. <span>The theory of the firm , the subject of this reading, is the study of the supply of goods and services by profit-maximizing firms. Conceptually, profit is the difference between revenue and costs. Revenue is a function of selling price and quantity sold, which are determined by the demand and supply behavior in the markets into which the firm sells/provides its goods or services. Costs are a function of the demand and supply interactions in resource markets, such as markets for labor and for physical inputs. The main focus of this reading is the cost side of the profit equation for companies competing in market economies under perfect competition. A subsequent reading will examine the different types of markets into which a firm may sell its output. The study of the profit-maximizing firm in a single time period is the essential starting point for the analysis of the economics of corporate decision making. Furthermore,
Conceptually, profit is the difference between revenue and costs.
The theory of the firm , the subject of this reading, is the study of the supply of goods and services by profit-maximizing firms. Conceptually, profit is the difference between revenue and costs. Revenue is a function of selling price and quantity sold, which are determined by the demand and supply behavior in the markets into which the firm sells/provides its goods or services.
Revenue is a function of selling price and quantity sold, which are determined by the demand and supply behavior in the markets into which the firm sells/provides its goods or services.
The theory of the firm , the subject of this reading, is the study of the supply of goods and services by profit-maximizing firms. Conceptually, profit is the difference between revenue and costs. Revenue is a function of selling price and quantity sold, which are determined by the demand and supply behavior in the markets into which the firm sells/provides its goods or services. Costs are a function of the demand and supply interactions in resource markets, such as markets for labor and for physical inputs. The main focus of this reading is the cost side of the
Costs are a function of the demand and supply interactions in resource markets, such as markets for labor and for physical inputs.
lly, profit is the difference between revenue and costs. Revenue is a function of selling price and quantity sold, which are determined by the demand and supply behavior in the markets into which the firm sells/provides its goods or services. <span>Costs are a function of the demand and supply interactions in resource markets, such as markets for labor and for physical inputs. The main focus of this reading is the cost side of the profit equation for companies competing in market economies under perfect competition. A subsequent reading will examine the diffe
The main focus of this reading is the cost side of the profit equation for companies competing in market economies under perfect competition.
ined by the demand and supply behavior in the markets into which the firm sells/provides its goods or services. Costs are a function of the demand and supply interactions in resource markets, such as markets for labor and for physical inputs. <span>The main focus of this reading is the cost side of the profit equation for companies competing in market economies under perfect competition. A subsequent reading will examine the different types of markets into which a firm may sell its output. <span><body><html>
Effective Working capital management requires managing and coordinating several tasks within the company, including managing short-term investments, granting credit to customers and collecting on this credit, managing inventory, and managing payables.
Working capital management is a broad-based function. Effective execution requires managing and coordinating several tasks within the company, including managing short-term investments, granting credit to customers and collecting on this credit, managing inventory, and managing payables. Effective working capital management also requires reliable cash forecasts, as well as current and accurate information on transactions and bank balances.
The focus of this reading is on the short-term aspects of corporate finance activities collectively referred to as working capital management . The goal of effective working capital management is to ensure that a company has adequate ready access to the funds necessary for day-to-day operating expenses, while at the same time making sure that the company's assets are invested in the most productive way. Achieving this goal requires a balancing of concerns. Insufficient access to cash could ultimately lead to severe restructuring of a company by selling off assets, reorganization via bankruptcy proceedings, or final liquidation of the company. On the other hand, excessive investment in cash and liquid assets may not be the best use of company resources. Effective working capital management encompasses several aspects of short-term finance: maintaining adequate levels of cash, converting short-term assets (i.e., accounts receivable and inventory) into cash, and controlling outgoing payments to vendors, employees, and others. To do this successfully, companies invest short-term funds in working capital portfolios of short-dated, highly liquid securities, or they maintain credit reserves in the form of bank lines of credit or access to financing by issuing commercial paper or other money market instruments. Working capital management is a broad-based function. Effective execution requires managing and coordinating several tasks within the company, including managing short-term investments, granting credit to customers and collecting on this credit, managing inventory, and managing payables. Effective working capital management also requires reliable cash forecasts, as well as current and accurate information on transactions and bank balances. Both internal and external factors influence working capital needs; we summarize them in Exhibit 1. Exhibit 1. Internal and External Factors That Affect Working Capit
Effective working capital management also requires reliable cash forecasts, as well as current and accurate information on transactions and bank balances.
nction. Effective execution requires managing and coordinating several tasks within the company, including managing short-term investments, granting credit to customers and collecting on this credit, managing inventory, and managing payables. <span>Effective working capital management also requires reliable cash forecasts, as well as current and accurate information on transactions and bank balances. <span><body><html>
Suppliers, like creditors, are concerned with a company's ability to generate sufficient cash flows to meet its financial obligations.
3.1.6 Suppliers A company's suppliers have a primary interest in being paid as contracted or agreed on, and in a timely manner, for products or services delivered to the company. Suppliers, like creditors, are concerned with a company's ability to generate sufficient cash flows to meet its financial obligations.
its given the price paid, as well as to meet applicable standards of safety. Compared with other stakeholder groups, customers tend to be less concerned with, and affected by, a company's financial performance. <span>3.1.6 Suppliers A company's suppliers have a primary interest in being paid as contracted or agreed on, and in a timely manner, for products or services delivered to the company. Suppliers, like creditors, are concerned with a company's ability to generate sufficient cash flows to meet its financial obligations. 3.1.7 Governments/Regulators Governments and regulators seek to protect the interests of the general public and ensure the well being of their nations' econom
#analyst-consideration-in-cg-sm #has-images #puerquito-session #reading-puerquito-verde
Analysts should assess whether the experience and skill sets of board members match the needs of the company.
Board of Directors Representation Analysts should assess whether the experience and skill sets of board members match the needs of the company. Are they truly independent? Are there inherent conflicts of interest?
Subject 8. Analyst Considerations in Corporate Governance and Stakeholder Management
3; Does the practice really insulate managers from Wall Street's short-term mindset? Dual-class structures create an inferior class of shareholders, and may allow management to make bad decisions with few consequences. <span>Board of Directors Representation Analysts should assess whether the experience and skill sets of board members match the needs of the company. Are they truly independent? Are there inherent conflicts of interest? Remuneration and Company Performance What are the main drivers of the management team's remuneration and incentive structure? Does the remuneration plan reward lo
Analysts should assess whether the board of directors is truly independent.
Are there inherent conflicts of interest?
#esg-considerations-for-investors #has-images #puerquito-session #reading-puerquito-verde
ESG Implementation Methods
Asset managers and asset owners can incorporate ESG issues into the investment process in a variety of ways.
Negative screening is a type of investment strategy that excludes certain companies or sectors from investment consideration because of their underlying business activities or other environmental or social concerns.
Positive screening and best-in-class strategies focus on investments with favorable ESG aspects.
ESG Implementation Methods Asset managers and asset owners can incorporate ESG issues into the investment process in a variety of ways. Negative screening is a type of investment strategy that excludes certain companies or sectors from investment consideration because of their underlying business activities or other environmental or social concerns. Positive screening and best-in-class strategies focus on investments with favorable ESG aspects. Thematic investing focuses on a single factor, such as energy efficiency or climate change. Impact investing strategies are targeted investments, typically
Subject 9. ESG Considerations for Investors
d adherence to environmental safety and regulatory standards. Social factors generally pertain to human rights and welfare concerns in the workplace, product development, and, in some cases, community impact. <span>ESG Implementation Methods Asset managers and asset owners can incorporate ESG issues into the investment process in a variety of ways. Negative screening is a type of investment strategy that excludes certain companies or sectors from investment consideration because of their underlying business activities or other environmental or social concerns. Positive screening and best-in-class strategies focus on investments with favorable ESG aspects. Thematic investing focuses on a single factor, such as energy efficiency or climate change. Impact investing strategies are targeted investments, typically made in private markets, aimed at solving social or environmental problems. <span><body><html>
Impact investing strategies are targeted investments, typically made in private markets, aimed at solving social or environmental problems.
Thematic investing focuses on a single factor, such as energy efficiency or climate change.
investment consideration because of their underlying business activities or other environmental or social concerns. Positive screening and best-in-class strategies focus on investments with favorable ESG aspects. <span>Thematic investing focuses on a single factor, such as energy efficiency or climate change. Impact investing strategies are targeted investments, typically made in private markets, aimed at solving social or environmental problems. <span><body><html>
What are the ESG Implementation Methods?
Negative screening
Positive screening and best-in-class strategies
Thematic investing
Impact investing
ESG Implementation Methods Asset managers and asset owners can incorporate ESG issues into the investment process in a variety of ways. Negative screening is a type of investment strategy that excludes certain companies or sectors from investment consideration because of their underlying business activities or other environmental or social concerns. Positive screening and best-in-class strategies focus on investments with favorable ESG aspects. Thematic investing focuses on a single factor, such as energy efficiency or climate change. Impact investing strategies are targeted investments, typically made in private markets, aimed at solving social or environmental problems.
intrinsic value is based on an analysis of investment fundamentals and characteristics.
This reading introduces equity valuation models used to estimate the intrinsic value (synonym: fundamental value ) of a security; intrinsic value is based on an analysis of investment fundamentals and characteristics.
an analyst who estimates the intrinsic value of an equity security is implicitly questioning the accuracy of the market price as an estimate of value.
#globo-terraqueo-session #has-images #reading-fajo-de-pounds
some estimates about 40 percent of S&P 500 Index earnings are from outside the United States
Given the globalization of the world economy, most large companies depend heavily on their foreign operations (for example, by some estimates about 40 percent of S&P 500 Index earnings are from outside the United States).
Reading 20 Currency Exchange Rates Introduction
Measured by daily turnover, the foreign exchange (FX) market—the market in which currencies are traded against each other—is by far the world's largest market. Current estimates put daily turnover at approximately USD4 trillion for 2010. This is about 10 to 15 times larger than daily turnover in global fixed-income markets and about 50 times larger than global turnover in equities. Moreover, volumes in FX turnover continue to grow: Some predict that daily FX turnover will reach USD10 trillion by 2020 as market participation spreads and deepens. The FX market is also a truly global market that operates 24 hours a day, each business day. It involves market participants from every time zone connected through electronic communications networks that link players as large as multibillion-dollar investment funds and as small as individuals trading for their own account—all brought together in real time. International trade would be impossible without the trade in currencies that facilitates it, and so too would cross-border capital flows that connect all financial markets globally through the FX market. These factors make foreign exchange a key market for investors and market participants to understand. The world economy is increasingly transnational in nature, with both production processes and trade flows often determined more by global factors than by domestic considerations. Likewise, investment portfolio performance increasingly reflects global determinants because pricing in financial markets responds to the array of investment opportunities available worldwide, not just locally. All of these factors funnel through, and are reflected in, the foreign exchange market. As investors shed their "home bias" and invest in foreign markets, the exchange rate—the price at which foreign-currency-denominated investments are valued in terms of the domestic currency—becomes an increasingly important determinant of portfolio performance. Even investors adhering to a purely "domestic" portfolio mandate are increasingly affected by what happens in the foreign exchange market. Given the globalization of the world economy, most large companies depend heavily on their foreign operations (for example, by some estimates about 40 percent of S&P 500 Index earnings are from outside the United States). Almost all companies are exposed to some degree of foreign competition, and the pricing for domestic assets—equities, bonds, real estate, and others—will also depend on demand from foreign investors. All of these various influences on investment performance reflect developments in the foreign exchange market. This reading introduces the foreign exchange market, providing the basic concepts and terminology necessary to understand exchange rates as well as some of the basics of ex
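To make the exchange-rate effect on portfolio performance concrete, here is a minimal Python sketch (illustrative only; the function name and the numbers are assumptions, not from the reading) converting a foreign asset's local-currency return into a domestic-currency return:

# Illustrative sketch: domestic-currency return on a foreign-currency investment.
# fx_return is the percentage change in the domestic price of the foreign currency
# (positive = the foreign currency appreciated against the home currency).
def domestic_return(foreign_asset_return, fx_return):
    return (1 + foreign_asset_return) * (1 + fx_return) - 1

# A foreign equity gains 8% in local-currency terms, but the foreign currency
# depreciates 5% against the investor's home currency.
print(f"{domestic_return(0.08, -0.05):.2%}")  # 2.60% in domestic-currency terms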
#has-images #puerquito-session #reading-garrafon-lleno-de-monedas-de-diez-pesos
if the company invests in projects whose returns are less than the cost of capital, the company has actually destroyed value.
If a company invests in projects that produce a return in excess of the cost of capital, the company has created value; in contrast, if the company invests in projects whose returns are less than the cost of capital, the company has actually destroyed value.
Reading 36 Cost of Capital Introduction
A company grows by making investments that are expected to increase revenues and profits. The company acquires the capital or funds necessary to make such investments by borrowing or using funds from owners. By applying this capital to investments with long-term benefits, the company is producing value today. But, how much value? The answer depends not only on the investments' expected future cash flows but also on the cost of the funds. Borrowing is not costless. Neither is using owners' funds. The cost of this capital is an important ingredient in both investment decision making by the company's management and the valuation of the company by investors. If a company invests in projects that produce a return in excess of the cost of capital, the company has created value; in contrast, if the company invests in projects whose returns are less than the cost of capital, the company has actually destroyed value. Therefore, the estimation of the cost of capital is a central issue in corporate financial management. For the analyst seeking to evaluate a company's investment program and its competitive position, an accurate estimate of a company's cost of capital is important as well. Cost of capital estimation is a challenging task. As we have already implied, the cost of capital is not observable but, rather, must be estimated. Arriving at a cost of capital estimate requires a host of assumptions and estimates. Another challenge is that the cost of capital that is appropriately applied to a specific investment depends on the characteristics of that investment: The riskier the investment's cash flows, the greater its cost of capital. In reality, a company must estimate project-specific costs of capital. What is often done, however, is to estimate the cost of capital for the company as a whole and then adjust this overall corporate cost of capital upward or downward to reflect the risk of the contemplated project relative to the company's average project. This reading is organized as follows: In the next section, we introduce the cost of capital and its basic computation. Section 3 presents a selection of methods for estimat
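As a hypothetical illustration of the value-creation point above (the cash flows are invented, and npv is a small helper defined here, not a library call), the same project has a positive net present value when discounted at a cost of capital below its internal rate of return of roughly 9.7%, and a negative one when discounted above it:

def npv(rate, cash_flows):
    # cash_flows[0] is the time-0 outlay (negative); later entries are inflows.
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

project = [-1000.0, 400.0, 400.0, 400.0]   # IRR of roughly 9.7%
print(round(npv(0.08, project), 2))   # 30.84  -> cost of capital 8%: value created
print(round(npv(0.12, project), 2))   # -39.27 -> cost of capital 12%: value destroyed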
#board-of-directors-and-committees #composition-of-the-bod #has-images #puerquito-session #reading-puerquito-verde
Staggered board terms make it more difficult for shareholders to make fundamental changes to the composition and behavior of the board.
Staggered board terms make it more difficult for shareholders to make fundamental changes to the composition and behavior of the board and could result in a permanent impairment of long-term shareholder value.
Subject 5. Board of Directors and Committees
Composition of the Board of Directors A board of directors is the central pillar of the governance structure, serves as the link between shareholders and managers, and acts as the shareholders' internal monitoring tool within the company. The structure and composition of a board of directors vary across countries and companies. The number of directors may vary, and the board typically includes a mix of expertise levels, backgrounds, and competencies. Board members must have extensive experience in business, education, the professions and/or public service so they can make informed decisions about the company's future. If directors lack the skills, knowledge and expertise to conduct a meaningful review of the company's activities, and are unable to conduct in-depth evaluations of the issues affecting the company's business, they are more likely to defer to management when making decisions. Executive (internal) directors are employed by the company and are typically members of senior management. Non-executive (external) directors have limited involvement in daily operations but serve an important oversight role. In a classified or staggered board, directors are typically elected in two or more classes, serving terms greater than one year. Proponents argue that by staggering the election of directors, a certain level of continuity and skill is maintained. However, staggered terms make it more difficult for shareholders to make fundamental changes to the composition and behavior of the board and could result in a permanent impairment of long-term shareholder value. Functions and Responsibilities of the Board Two primary duties of a board of directors are duty of care and duty of loyalty. Among other responsibilities, the
#has-images #portfolio-session #reading-tiburon
The choice of which risks to undertake through the allocation of its scarce resources is the key tool available to management.
The choice of which risks to undertake through the allocation of its scarce resources is the key tool available to management. An organization with a comprehensive risk management culture in place, in which risk is integral to every key strategy and decision, should perform better in the long-term, in good times and bad, as a result of better decision making.
Reading 40 Risk Management: An Introduction Intro
Risk—and risk management—is an inescapable part of economic activity. People generally manage their affairs in order to be as happy and secure as their environment and resources will allow. But regardless of how carefully these affairs are managed, there is risk because the outcome, whether good or bad, is seldom predictable with complete certainty. There is risk inherent in nearly everything we do, but this reading will focus on economic and financial risk, particularly as it relates to investment management. All businesses and investors manage risk, whether consciously or not, in the choices they make. At its core, business and investing are about allocating resources and capital to chosen risks. In their decision process, within an environment of uncertainty, these entities may take steps to avoid some risks, pursue the risks that provide the highest rewards, and measure and mitigate their exposure to these risks as necessary. Risk management processes and tools make difficult business and financial problems easier to address in an uncertain world. Risk is not just a matter of fate; it is something that organizations can actively control with their decisions, within a risk management framework. Risk is an integral part of the business or investment process. Even in the earliest models of modern portfolio theory, such as mean–variance portfolio optimization and the capital asset pricing model, investment return is linked directly to risk but requires that risk be managed optimally. Proper identification and measurement of risk, and keeping risks aligned with the goals of the enterprise, are key factors in managing businesses and investments. Good risk management results in a higher chance of a preferred outcome—more value for the company or portfolio or more utility for the individual. Portfolio managers need to be familiar with risk management not only to improve the portfolio's risk–return outcome, but also because of two other ways in which they use risk management at an enterprise level. First, they help to manage their own companies that have their own enterprise risk issues. Second, many portfolio assets are claims on companies that have risks. Portfolio managers need to evaluate the companies' risks and how those companies are addressing them. This reading takes a broad approach that addresses both the risk management of enterprises in general and portfolio risk management. The principles underlying portfolio risk management are generally applicable to the risk management of financial and non-financial institutions as well. The concept of risk management is also relevant to individuals. Although many large entities formally practice risk management, most individuals practice it more informally and some practice it haphazardly, oftentimes responding to risk events after they occur. Although many individuals do take reasonable precautions against unwanted risks, these precautions are often against obvious risks, such as sticking a wet hand into an electrical socket or swallowing poison. The more subtle risks are often ignored. Many individuals simply do not view risk management as a formal, systematic process that would help them achieve not only their financial goals but also the ultimate end result of happiness, or maximum utility as economists like to call it, but they should. Although the primary focus of this reading is on institutions, we will also cover risk management as it applies to individuals. 
We will show that many common themes underlie risk management—themes that are applicable to both organizations and individuals. Although often viewed as defensive, risk management is a valuable offensive weapon in the manager's arsenal. In the quest for preferred outcomes, such as higher profit, returns, or share price, management does not usually get to choose the outcomes but does choose the risks it takes in pursuit of those outcomes. The choice of which risks to undertake through the allocation of its scarce resources is the key tool available to management. An organization with a comprehensive risk management culture in place, in which risk is integral to every key strategy and decision, should perform better in the long-term, in good times and bad, as a result of better decision making. The fact that all businesses and investors engage in risky activities (i.e., activities with uncertain outcomes) raises a number of important questions. The questions that this reading will address include the following: What is risk management, and why is it important? What risks does an organization (or individual) face in pursuing its objectives? How are an entity's goals affected by risk, and how does it make risk management decisions to produce better results? How does risk governance guide the risk management process and risk budgeting to integrate an organization's goals with its activities? How does an organization measure and evaluate the risks it faces, and what tools does it have to address these risks? The answers to these questions collectively help to define the process of risk management. This reading is organized along the lines of these questions. Section 2 describes the risk management process, and Section 3 discusses risk governance and risk tolerance. Section 4 cove
Fiscal policy refers to the use of government expenditure, tax, and borrowing activities to achieve economic goals.
Fiscal policy refers to the use of government expenditure, tax, and borrowing activities to achieve economic goals. Monetary policy refers to central bank activities to control the supply of money. Their goals are maximum employment, stable prices, and moderate long-term interest rates. The Functions of Money Money performs three basic functions. • It serves as a medium of exchange to buy and sell goods and services. Money simplif
Monetary policy refers to central bank activities to control the supply of money.
Fiscal policy refers to the use of government expenditure, tax, and borrowing activities to achieve economic goals. Monetary policy refers to central bank activities to control the supply of money. Their goals are maximum employment, stable prices, and moderate long-term interest rates.
The goals of monetary and fiscal policy are maximum employment, stable prices, and moderate long-term interest rates.
The [...] is the amount by which a change in the monetary base is multiplied to calculate the final change in the money supply.
money multiplier
The money multiplier is the amount by which a change in the monetary base is multiplied to calculate the final change in the money supply. Money Multiplier = 1/b, where b is the required reserve ratio. In
Money Multiplier = 1/b, where b is the required reserve ratio. In our example, b is 0.2, so money multiplier = 1/0.2 = 5.
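A minimal Python sketch of the same arithmetic (purely illustrative; the function name is made up):

def money_multiplier(required_reserve_ratio):
    return 1 / required_reserve_ratio

b = 0.2                              # required reserve ratio from the example above
print(money_multiplier(b))           # 5.0
print(money_multiplier(b) * 100)     # 500.0 -> a 100-unit rise in the monetary base expands the money supply by 500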
Definitions of Money
There are different definitions of money. The two most widely used measures of money in the U.S. are:
The M1 Money Supply: cash, checking accounts and traveler's checks. This is the narrowest definition of the money supply. This definition focuses on money's function as a medium of exchange.
The M2 Money Supply: M1 + savings + small time deposits + retail money funds. This definition focuses on money's function as a medium of exchange and store of value.
Definitions of Money There are different definitions of money. The two most widely used measures of money in the U.S. are: The M1 Money Supply: cash, checking accounts and traveler's checks. This is the narrowest definition of the money supply. This definition focuses on money's function as a medium of exchange. The M2 Money Supply: M1 + savings + small time deposits + retail money funds. This definition focuses on money's function as a medium of exchange and store of value. Credit cards are not purchasing power, but instead are a convenient means of arranging a loan. Credit is a liability acquired when one borrows funds, while money is
The Quantity Theory of Money: Money Supply (M) x Velocity of Money (V) = Price (P) x Real Output (Y). The velocity of money is the average n
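A quick, hypothetical worked example of the equation of exchange quoted above (all numbers invented), rearranged to solve for the price level:

M = 1000.0    # money supply
V = 5.0       # velocity of money
Y = 2500.0    # real output
P = M * V / Y
print(P)              # 2.0
print(M * V, P * Y)   # 5000.0 5000.0 -> both sides equal nominal GDP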
Credit cards are not purchasing power, but instead are a convenient means of arranging a loan. Credit is a liability acquired when one borrows funds, while money is a financial asset that provides the holder with future purchasing power. However, the widespread use of credit cards will tend to reduce the average quantity of money people hold.
Deposits are money, but checks are not - a check is an instruction to a bank to transfer money.
Violations of [...] can harm the community in a variety of ways.
moral principles
Violations of moral principles can harm the community in a variety of ways.
Ethical conduct is behavior that follows moral principles. Ethical actions are those actions that are perceived as beneficial and conform to the ethical expectations of society. Ethics encompass a set of moral principles (code of ethics) and standards of conduct that provide guidance for our behavior. Violations can harm the community in a variety of ways.
#has-images #prerequisite-session #reading-dildo
[...] the study of how buyers and sellers interact to determine transaction prices and quantities.
Demand and supply analysis
Demand and supply analysis is the study of how buyers and sellers interact to determine transaction prices and quantities.
croeconomics has its roots in microeconomics, which deals with markets and decision making of individual economic units, including consumers and businesses. Microeconomics is a logical starting point for the study of economics. This reading focuses on a fundamental subject in microeconomics: demand and supply analysis. Demand and supply analysis is the study of how buyers and sellers interact to determine transaction prices and quantities. As we will see, prices simultaneously reflect both the value to the buyer of the next (or marginal) unit and the cost to the seller of that unit. In private enterprise market economies, which are the chief concern of investment analysts, demand and supply analysis encompasses the most basic set of microeconomic tools. Traditionally, microeconomics classifies private economic units into two groups: consumers (or households) and firms. These two groups give rise, respectively, to the theor
#bascula-session #has-images #reading-magnifying-glass
In essence, an analyst converts data into financial metrics that assist in decision making.
Financial analysis tools can be useful in assessing a company's performance and trends in that performance. In essence, an analyst converts data into financial metrics that assist in decision making.
Reading 27 Financial Analysis Techniques Introduction
Financial analysis tools can be useful in assessing a company's performance and trends in that performance. In essence, an analyst converts data into financial metrics that assist in decision making. Analysts seek to answer such questions as: How successfully has the company performed, relative to its own past performance and relative to its competitors? How is the company likely to perform in the future? Based on expectations about future performance, what is the value of this company or the securities it issues? A primary source of data is a company's annual report, including the financial statements and notes, and management commentary (operating and financial review or management's discussion and analysis). This reading focuses on data presented in financial reports prepared under International Financial Reporting Standards (IFRS) and United States generally accepted accounting principles (US GAAP). However, financial reports do not contain all the information needed to perform effective financial analysis. Although financial statements do contain data about the past performance of a company (its income and cash flows) as well as its current financial condition (assets, liabilities, and owners' equity), such statements do not necessarily provide all the information useful for analysis nor do they forecast future results. The financial analyst must be capable of using financial statements in conjunction with other information to make projections and reach valid conclusions. Accordingly, an analyst typically needs to supplement the information found in a company's financial reports with other information, including information on the economy, industry, comparable companies, and the company itself. This reading describes various techniques used to analyze a company's financial statements. Financial analysis of a company may be performed for a variety of reasons, such as valuing equity securities, assessing credit risk, conducting due diligence related to an acquisition, or assessing a subsidiary's performance. This reading will describe techniques common to any financial analysis and then discuss more specific aspects for the two most common categories: equity analysis and credit analysis. Equity analysis incorporates an owner's perspective, either for valuation or performance evaluation. Credit analysis incorporates a creditor's (such as a banker or bondholder) perspective. In either case, there is a need to gather and analyze information to make a decision (ownership or credit); the focus of analysis varies because of the differing interest of owners and creditors. Both equity and credit analyses assess the entity's ability to generate and grow earnings, and cash flow, as well as any associated risks. Equity analysis usually places a greater emphasis on growth, whereas credit analysis usually places a greater emphasis on risks. The difference in emphasis reflects the different fundamentals of these types of investments: The value of a company's equity generally increases as the company's earnings and cash flow increase, whereas the value of a company's debt has an upper limit.1 The balance of this reading is organized as follows: Section 2 recaps the framework for financial statements and the place of financial analysis techniques within the frame
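As a small, purely illustrative example of "converting data into financial metrics" (the statement figures below are made up), an analyst might compute ratios such as these and compare them across periods or against competitors:

data = {"revenue": 500.0, "net_income": 45.0, "total_assets": 450.0, "equity": 250.0}

metrics = {
    "net profit margin": data["net_income"] / data["revenue"],
    "return on assets": data["net_income"] / data["total_assets"],
    "return on equity": data["net_income"] / data["equity"],
}
for name, value in metrics.items():
    print(f"{name}: {value:.1%}")   # 9.0%, 10.0%, 18.0%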
A primary source of data is a company's annual report, including the financial statements and notes, and management commentary (operating and financial review or management's discussion and analysis).
A primary source of data is a company's annual report, including the financial statements and notes, and management commentary (operating and financial review or management's discussion and analysis). This reading focuses on data presented in financial reports prepared under IFRS and US GAAP. However, financial reports do not contain all the information needed to perform effective financial analysis.
financial reports do not contain all the information needed to perform effective financial analysis.
Although financial statements do contain data about the past performance of a company (its income and cash flows) as well as its current financial condition (assets, liabilities, and owners' equity), such statements do not necessarily provide all the information useful for analysis nor do they forecast future results.
The financial analyst must be capable of using financial statements in conjunction with other information to make projections and reach valid conclusions.
an analyst typically needs to supplement the information found in a company's financial reports with other information, including information on the economy, industry, comparable companies, and the company itself.
Financial analysis of a company may be performed for a variety of reasons, such as valuing equity securities, assessing credit risk, conducting due diligence related to an acquisition, or assessing a subsidiary's performance.
This reading describes various techniques used to analyze a company's financial statements. Financial analysis of a company may be performed for a variety of reasons, such as valuing equity securities, assessing credit risk, conducting due diligence related to an acquisition, or assessing a subsidiary's performance. This reading will describe techniques common to any financial analysis and then discuss more specific aspects for the two most common categories: equity analysis and credit analysis.
Equity analysis incorporates an owner's perspective, either for valuation or performance evaluation. Credit analysis incorporates a creditor's (such as a banker or bondholder) perspective.
Equity analysis incorporates an owner's perspective, either for valuation or performance evaluation. Credit analysis incorporates a creditor's (such as a banker or bondholder) perspective. In either case, there is a need to gather and analyze information to make a decision (ownership or credit); the focus of analysis varies because of the differing interest of owners and
Both equity and credit analyses assess the entity's ability to generate and grow earnings, and cash flow, as well as any associated risks.
Equity analysis usually places a greater emphasis on growth, whereas credit analysis usually places a greater emphasis on risks.
The difference in emphasis reflects the different fundamentals of these types of investments: The value of a company's equity generally increases as the company's earnings and cash flow increase, whereas the value of a company's debt has an upper limit.
#essay-tubes-session #has-images #reading-max
Examples of long-lived financial assets include investments in [...].
equity or debt securities issued by other companies
Examples of long-lived financial assets include investments in equity or debt securities issued by other companies.
Reading 29 Long-Lived Assets Introduction
Long-lived assets , also referred to as non-current assets or long-term assets, are assets that are expected to provide economic benefits over a future period of time, typically greater than one year.1 Long-lived assets may be tangible, intangible, or financial assets. Examples of long-lived tangible assets, typically referred to as property, plant, and equipment and sometimes as fixed assets, include land, buildings, furniture and fixtures, machinery and equipment, and vehicles; examples of long-lived intangible assets (assets lacking physical substance) include patents and trademarks; and examples of long-lived financial assets include investments in equity or debt securities issued by other companies. The scope of this reading is limited to long-lived tangible and intangible assets (hereafter, referred to for simplicity as long-lived assets). The first issue in accounting for a long-lived asset is determining its cost at acquisition. The second issue is how to allocate the cost to expense over time. The costs of most long-lived assets are capitalised and then allocated as expenses in the profit or loss (income) statement over the period of time during which they are expected to provide economic benefits. The two main types of long-lived assets with costs that are typically not allocated over time are land, which is not depreciated, and those intangible assets with indefinite useful lives. Additional issues that arise are the treatment of subsequent costs incurred related to the asset, the use of the cost model versus the revaluation model, unexpected declines in the value of the asset, classification of the asset with respect to intent (for example, held for use or held for sale), and the derecognition of the asset. This reading is organised as follows. Section 2 describes and illustrates accounting for the acquisition of long-lived assets, with particular attention to the impact of ca
The first issue in accounting for a long-lived asset is determining its [...].
cost at acquisition
The first issue in accounting for a long-lived asset is determining its cost at acquisition. The second issue is how to allocate the cost to expense over time.
The first issue in accounting for a long-lived asset is determining its cost at acquisition. The second issue is how to [...].
allocate the cost to expense over time
The two main types of long-lived assets with costs that are typically not allocated over time are [...]
land and intangible assets with indefinite useful lives.
The two main types of long-lived assets with costs that are typically not allocated over time are land, which is not depreciated, and those intangible assets with indefinite useful lives.
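A minimal, hypothetical sketch of the allocation idea above: a capitalised cost (less salvage value) is spread on a straight-line basis over the useful life, whereas land and indefinite-lived intangibles, per the note above, are not allocated this way. The function name and figures are assumptions for illustration:

def straight_line_schedule(cost, salvage, useful_life_years):
    # Equal annual depreciation expense over the asset's useful life.
    annual = (cost - salvage) / useful_life_years
    return [annual] * useful_life_years

schedule = straight_line_schedule(cost=100_000, salvage=10_000, useful_life_years=5)
print(schedule)       # [18000.0, 18000.0, 18000.0, 18000.0, 18000.0]
print(sum(schedule))  # 90000.0 -> the depreciable cost is fully allocated to expense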
Total investment is the amount spent by all businesses on plant and equipment
Macroeconomics focuses on national aggregates, such as total investment, the amount spent by all businesses on plant and equipment; total consumption, the amount spent by all households on goods and services; the rate of change in the general level of prices; and the overall level of interest rates.
Total consumption is the amount spent by all households on goods and services
Macroeconomic variables—such as the level of inflation, unemployment, consumption, government spending, and investment—affect the overall level of activity within a country.
Macroeconomic variables—such as the level of inflation, unemployment, consumption, government spending, and investment—affect the overall level of activity within a country. They also have different impacts on the growth and profitability of industries within a country, the companies within those industries, and the returns of the securities issued by those
#has-images #investopedia
Endowment funds are typically funded entirely by [...]
Endowment funds are typically funded entirely by donations that are deductible for the donors.
What is an 'Endowment Fund' An endowment fund is an investment fund established by a foundation that makes consistent withdrawals from invested capital. The capital in endowment funds, often used by universities, nonprofit organizations, churches and hospitals, is generally utilized for specific needs or to further a company's operating process. Endowment funds are typically funded entirely by donations that are deductible for the donors. BREAKING DOWN 'Endowment Fund' Financial endowments are typically structured so the principal amount invested remains intact, while investment income
#has-images #manzana-session #reading-pure-de-manzana
Active returns refer to returns earned by strategies that do not assume that all information is fully reflected in market prices.
The CIO's description underscores the importance of not assuming that past active returns that might be found in a historical dataset will repeat themselves in the future. Active returns refer to returns earned by strategies that do not assume that all information is fully reflected in market prices.
Reading 46 Market Efficiency (Intro)
Market efficiency concerns the extent to which market prices incorporate available information. If market prices do not fully incorporate information, then opportunities may exist to make a profit from the gathering and processing of information. The subject of market efficiency is, therefore, of great interest to investment managers, as illustrated in Example 1. EXAMPLE 1 Market Efficiency and Active Manager Selection The chief investment officer (CIO) of a major university endowment fund has listed eight steps in the active manager selection process that can be applied both to traditional investments (e.g., common equity and fixed-income securities) and to alternative investments (e.g., private equity, hedge funds, and real assets). The first step specified is the evaluation of market opportunity: What is the opportunity and why is it there? To answer this question we start by studying capital markets and the types of managers operating within those markets. We identify market inefficiencies and try to understand their causes, such as regulatory structures or behavioral biases. We can rule out many broad groups of managers and strategies by simply determining that the degree of market inefficiency necessary to support a strategy is implausible. Importantly, we consider the past history of active returns meaningless unless we understand why markets will allow those active returns to continue into the future.1 The CIO's description underscores the importance of not assuming that past active returns that might be found in a historical dataset will repeat themselves in the future. Active returns refer to returns earned by strategies that do not assume that all information is fully reflected in market prices. Governments and market regulators also care about the extent to which market prices incorporate information. Efficient markets imply informative prices—prices that accurately reflect available information about fundamental values. In market-based economies, market prices help determine which companies (and which projects) obtain capital. If these prices do not efficiently incorporate information about a company's prospects, then it is possible that funds will be misdirected. By contrast, prices that are informative help direct scarce resources and funds available for investment to their highest-valued uses.2 Informative prices thus promote economic growth. The efficiency of a country's capital markets (in which businesses raise financing) is an important characteristic of a well-functioning financial system. The remainder of this reading is organized as follows. Section 2 provides specifics on how the efficiency of an asset market is described and discusses the factors affectin
If market prices do not fully incorporate information, then [...]
opportunities may exist to make a profit from the gathering and processing of information.
If market prices do not fully incorporate information, then opportunities may exist to make a profit from the gathering and processing of information.
#has-images #portfolio-session #reading-apollo-creed
Our objective in this reading is to identify the optimal risky portfolio for all investors by using the capital asset pricing model (CAPM). The foundation of this reading is the computation of risk and return of a portfolio and the role that correlation plays in diversifying portfolio risk and arriving at the efficient frontier. The efficient frontier and the capital allocation line consist of portfolios that are generally acceptable to all investors. By combining an investor's individual indifference curves with the market-determined capital allocation line, we are able to illustrate that the only optimal risky portfolio for an investor is the portfolio of all risky assets (i.e., the market).
Our objective in this reading is to identify the optimal risky portfolio for all investors by using the capital asset pricing model (CAPM). The foundation of this reading is the computation of risk and return of a portfolio and the role that correlation plays in diversifying portfolio risk and arriving at the efficient frontier. The efficient frontier and the capital allocation line consist of portfolios that are generally acceptable to all investors. By combining an investor's individual indifference curves with the market-determined capital allocation line, we are able to illustrate that the only optimal risky portfolio for an investor is the portfolio of all risky assets (i.e., the market). Additionally, we discuss the capital market line, a special case of the capital allocation line that is used for passive investor portfolios. We also differentiate between
Reading 42 Portfolio Risk and Return: Part II (Intro)
Our objective in this reading is to identify the optimal risky portfolio for all investors by using the capital asset pricing model (CAPM). The foundation of this reading is the computation of risk and return of a portfolio and the role that correlation plays in diversifying portfolio risk and arriving at the efficient frontier. The efficient frontier and the capital allocation line consist of portfolios that are generally acceptable to all investors. By combining an investor's individual indifference curves with the market-determined capital allocation line, we are able to illustrate that the only optimal risky portfolio for an investor is the portfolio of all risky assets (i.e., the market). Additionally, we discuss the capital market line, a special case of the capital allocation line that is used for passive investor portfolios. We also differentiate between systematic and nonsystematic risk, and explain why investors are compensated for bearing systematic risk but receive no compensation for bearing nonsystematic risk. We discuss in detail the CAPM, which is a simple model for estimating asset returns based only on the asset's systematic risk. Finally, we illustrate how the CAPM allows security selection to build an optimal portfolio for an investor by changing the asset mix beyond a passive market portfolio. The reading is organized as follows. In Section 2, we discuss the consequences of combining a risk-free asset with the market portfolio and provide an interpretation of the
Additionally, we discuss the capital market line, a special case of the capital allocation line that is used for passive investor portfolios. We also differentiate between systematic and nonsystematic risk, and explain why investors are compensated for bearing systematic risk but receive no compensation for bearing nonsystematic risk. We discuss in detail the CAPM, which is a simple model for estimating asset returns based only on the asset's systematic risk. Finally, we illustrate how the CAPM allows security selection to build an optimal portfolio for an investor by changing the asset mix beyond a passive market portfolio.
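A minimal sketch of the CAPM relation the reading relies on, expected return = risk-free rate + beta x (expected market return - risk-free rate); the inputs below are hypothetical:

def capm_expected_return(risk_free, beta, market_return):
    return risk_free + beta * (market_return - risk_free)

rf, rm = 0.03, 0.09               # assumed risk-free rate and expected market return
for beta in (0.0, 0.8, 1.0, 1.5):
    print(beta, f"{capm_expected_return(rf, beta, rm):.2%}")
# 0.0 3.00%, 0.8 7.80%, 1.0 9.00% (the market itself), 1.5 12.00%
# Only systematic risk (beta) is compensated; nonsystematic risk earns no premium.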
#essay-tubes-session #has-images #reading-placas-del-df
A non-current liability (long-term liability) broadly represents a probable sacrifice of economic benefits in periods generally greater than one year in the future.
A non-current liability (long-term liability) broadly represents a probable sacrifice of economic benefits in periods generally greater than one year in the future. Common types of non-current liabilities reported in a company's financial statements include long-term debt (e.g., bonds payable, long-term notes payable), finance leases, pension liabi
Reading 31 Non-Current (Long-Term) Liabilities Introduction
A non-current liability (long-term liability) broadly represents a probable sacrifice of economic benefits in periods generally greater than one year in the future. Common types of non-current liabilities reported in a company's financial statements include long-term debt (e.g., bonds payable, long-term notes payable), finance leases, pension liabilities, and deferred tax liabilities. This reading focuses on bonds payable and leases. Pension liabilities are also introduced. This reading is organised as follows. Section 2 describes and illustrates the accounting for long-term bonds, including the issuance of bonds, the recording of interest exp
Common types of non-current liabilities reported in a company's financial statements include long-term debt (e.g., bonds payable, long-term notes payable), finance leases, pension liabilities, and deferred tax liabilities.
A non-current liability (long-term liability) broadly represents a probable sacrifice of economic benefits in periods generally greater than one year in the future. Common types of non-current liabilities reported in a company's financial statements include long-term debt (e.g., bonds payable, long-term notes payable), finance leases, pension liabilities, and deferred tax liabilities. This reading focuses on bonds payable and leases. Pension liabilities are also introduced.
This reading focuses on bonds payable and leases. Pension liabilities are also introduced.
#has-images #puerquito-session #reading-bulldozer
Capital projects, which make up the long-term asset portion of the balance sheet, can be so large that sound capital budgeting decisions ultimately decide the future of many corporations. Capital decisions cannot be reversed at a low cost, so mistakes are very costly. Indeed, the real capital investments of a company describe a company better than its working capital or capital structures, which are intangible and tend to be similar for many corporations.
Reading 35 Capital Budgeting Introduction
Capital budgeting is the process that companies use for decision making on capital projects—those projects with a life of a year or more. This is a fundamental area of knowledge for financial analysts for many reasons. First, capital budgeting is very important for corporations. Capital projects, which make up the long-term asset portion of the balance sheet, can be so large that sound capital budgeting decisions ultimately decide the future of many corporations. Capital decisions cannot be reversed at a low cost, so mistakes are very costly. Indeed, the real capital investments of a company describe a company better than its working capital or capital structures, which are intangible and tend to be similar for many corporations. Second, the principles of capital budgeting have been adapted for many other corporate decisions, such as investments in working capital, leasing, mergers and acquisitions, and bond refunding. Third, the valuation principles used in capital budgeting are similar to the valuation principles used in security analysis and portfolio management. Many of the methods used by security analysts and portfolio managers are based on capital budgeting methods. Conversely, there have been innovations in security analysis and portfolio management that have also been adapted to capital budgeting. Finally, although analysts have a vantage point outside the company, their interest in valuation coincides with the capital budgeting focus of maximizing shareholder value. Because capital budgeting information is not ordinarily available outside the company, the analyst may attempt to estimate the process, within reason, at least for companies that are not too complex. Further, analysts may be able to appraise the quality of the company's capital budgeting process—for example, on the basis of whether the company has an accounting focus or an economic focus. This reading is organized as follows: Section 2 presents the steps in a typical capital budgeting process. After introducing the basic principles of capital budgeti
the principles of capital budgeting have been adapted for many other corporate decisions, such as investments in working capital, leasing, mergers and acquisitions, and bond refunding.
the valuation principles used in capital budgeting are similar to the valuation principles used in security analysis and portfolio management. Many of the methods used by security analysts and portfolio managers are based on capital budgeting methods. Conversely, there have been innovations in security analysis and portfolio management that have also been adapted to capital budgeting.
although analysts have a vantage point outside the company, their interest in valuation coincides with the capital budgeting focus of maximizing shareholder value. Because capital budgeting information is not ordinarily available outside the company, the analyst may attempt to estimate the process, within reason, at least for companies that are not too complex. Further, analysts may be able to appraise the quality of the company's capital budgeting process—for example, on the basis of whether the company has an accounting focus or an economic focus.
#has-images #paracaidas-session #reading-la-ñora
The starting point for this analysis is the yield-to-maturity, or internal rate of return on future cash flows, which was introduced in the fixed-income valuation reading.
Reading 54 Understanding Fixed‑Income Risk and Return (Intro)
It is important for analysts to have a well-developed understanding of the risk and return characteristics of fixed-income investments. Beyond the vast worldwide market for publicly and privately issued fixed-rate bonds, many financial assets and liabilities with known future cash flows may be evaluated using the same principles. The starting point for this analysis is the yield-to-maturity, or internal rate of return on future cash flows, which was introduced in the fixed-income valuation reading. The return on a fixed-rate bond is affected by many factors, the most important of which is the receipt of the interest and principal payments in the full amount and on the scheduled dates. Assuming no default, the return is also affected by changes in interest rates that affect coupon reinvestment and the price of the bond if it is sold before it matures. Measures of the price change can be derived from the mathematical relationship used to calculate the price of the bond. The first of these measures (duration) estimates the change in the price for a given change in interest rates. The second measure (convexity) improves on the duration estimate by taking into account the fact that the relationship between price and yield-to-maturity of a fixed-rate bond is not linear. Section 2 uses numerical examples to demonstrate the sources of return on an investment in a fixed-rate bond, which includes the receipt and reinvestment of coupon interest
The return on a fixed-rate bond is affected by many factors, the most important of which is the receipt of the interest and principal payments in the full amount and on the scheduled dates. Assuming no default, the return is also affected by changes in interest rates that affect coupon reinvestment and the price of the bond if it is sold before it matures.
Measures of the price change can be derived from the mathematical relationship used to calculate the price of the bond. The first of these measures (duration) estimates the change in the price for a given change in interest rates. The second measure (convexity) improves on the duration estimate by taking into account the fact that the relationship between price and yield-to-maturity of a fixed-rate bond is not linear.
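As a rough numerical companion to the duration and convexity discussion above, the sketch below prices an assumed 5-year, 4% annual-pay bond at a 5% yield-to-maturity, approximates modified duration and convexity from finite differences of the price-yield relationship, and uses them to estimate the price change for a yield shift. All inputs are illustrative assumptions.

```python
# Minimal sketch of price, approximate modified duration, and approximate
# convexity for a fixed-rate bond. The 5-year, 4% annual-pay bond priced at a
# 5% yield-to-maturity and the shift sizes are illustrative assumptions.

def price(ytm, coupon=4.0, face=100.0, years=5):
    """Price per 100 of face value of an annual-pay fixed-rate bond."""
    flows = [coupon] * (years - 1) + [coupon + face]
    return sum(cf / (1 + ytm) ** t for t, cf in enumerate(flows, start=1))

y, dy = 0.05, 0.0001                      # yield-to-maturity and a small shift
p0, p_dn, p_up = price(y), price(y - dy), price(y + dy)

approx_mod_duration = (p_dn - p_up) / (2 * p0 * dy)
approx_convexity = (p_dn + p_up - 2 * p0) / (p0 * dy ** 2)

# Estimated percentage price change for a 100 bp rise in yield: duration alone
# (linear) versus duration plus the convexity adjustment (second order).
shift = 0.01
print(-approx_mod_duration * shift)
print(-approx_mod_duration * shift + 0.5 * approx_convexity * shift ** 2)
```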
#has-images #reading-ferrari #volante-session
Discounted cash flow methods and models, such as the capital asset pricing model and its variations, are useful for determining the prices of financial assets.
Reading 57 Basics of Derivative Pricing and Valuation (Intro)
It is important to understand how prices of derivatives are determined. Whether one is on the buy side or the sell side, a solid understanding of pricing financial products is critical to effective investment decision making. After all, one can hardly determine what to offer or bid for a financial product, or any product for that matter, if one has no idea how its characteristics combine to create value. Understanding the pricing of financial assets is important. Discounted cash flow methods and models, such as the capital asset pricing model and its variations, are useful for determining the prices of financial assets. The unique characteristics of derivatives, however, pose some complexities not associated with assets, such as equities and fixed-income instruments. Somewhat surprisingly, however, derivatives also have some simplifying characteristics. For example, as we will see in this reading, in well-functioning derivatives markets the need to determine risk premiums is obviated by the ability to construct a risk-free hedge. Correspondingly, the need to determine an investor's risk aversion is irrelevant for derivative pricing, although it is certainly relevant for pricing the underlying. The purpose of this reading is to establish the foundations of derivative pricing on a basic conceptual level. The following topics are covered: How does the pricing of the underlying asset affect the pricing of derivatives? How are derivatives priced using the principle of arbitrage? How are the prices and values of forward contracts determined? How are futures contracts priced differently from forward contracts? How are the prices and values of swaps determined? How are the prices and values of European options determined? How does American option pricing differ from European option pricing? This reading is organized as follows. Section 2 explores two related topics, the pricing of the underlying assets on which derivatives are created and the principle
The unique characteristics of derivatives pose some complexities not associated with assets, such as equities and fixed-income instruments.
Somewhat surprisingly, derivatives also have some simplifying characteristics. For example, as we will see in this reading, in well-functioning derivatives markets the need to determine risk premiums is obviated by the ability to construct a risk-free hedge. Correspondingly, the need to determine an investor's risk aversion is irrelevant for derivative pricing, although it is certainly relevant for pricing the underlying.
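A minimal sketch of the arbitrage logic mentioned above, applied to forward pricing for an underlying with no interim cash flows. The spot price, risk-free rate, horizon, and the discrete compounding convention are assumptions chosen for illustration.

```python
# Minimal sketch of arbitrage-free forward pricing for an underlying with no
# interim cash flows. Spot price, risk-free rate, and horizon are illustrative.

spot = 100.0    # current price of the underlying
r = 0.03        # annual risk-free rate
T = 0.5         # time to expiration in years

# Carrying the asset to T (buy now, finance at the risk-free rate) must cost
# the same as committing today to the forward price; otherwise a risk-free
# arbitrage exists.
forward_price = spot * (1 + r) ** T
print(round(forward_price, 4))

def forward_value_long(S_t, F0, r, time_left):
    """Value of an existing long forward at an interim date, given spot S_t."""
    return S_t - F0 / (1 + r) ** time_left

print(round(forward_value_long(104.0, forward_price, r, 0.25), 4))
```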
The purpose of this reading is to establish the foundations of derivative pricing on a basic conceptual level.
The following topics are covered:
How does the pricing of the underlying asset affect the pricing of derivatives?
How are derivatives priced using the principle of arbitrage?
How are the prices and values of forward contracts determined?
How are futures contracts priced differently from forward contracts?
How are the prices and values of swaps determined?
How are the prices and values of European options determined?
How does American option pricing differ from European option pricing?
#has-images #microscopio-session #reading-mano
Microeconomics classifies private economic units into two groups: consumers (or households) and firms.
Reading 14 Topics in Demand and Supply Analysis
In a general sense, economics is the study of production, distribution, and consumption and can be divided into two broad areas of study: macroeconomics and microeconomics. Macroeconomics deals with aggregate economic quantities, such as national output and national income, and is rooted in microeconomics , which deals with markets and decision making of individual economic units, including consumers and businesses. Microeconomics is a logical starting point for the study of economics. Microeconomics classifies private economic units into two groups: consumers (or households) and firms. These two groups give rise, respectively, to the theory of the consumer and the theory of the firm as two branches of study. The theory of the consumer deals with consumption (the demand for goods and services) by utility-maximizing individuals (i.e., individuals who make decisions that maximize the satisfaction received from present and future consumption). The theory of the firm deals with the supply of goods and services by profit-maximizing firms. It is expected that candidates will be familiar with the basic concepts of demand and supply. This material is covered in detail in the recommended prerequisite readings. I
The theory of the consumer deals with consumption (the demand for goods and services) by utility-maximizing individuals
The theory of the firm deals with the supply of goods and services by profit-maximizing firms.
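To make the consumer's problem concrete, here is a minimal sketch under an assumed Cobb-Douglas utility function; the preference parameter, prices, and income are illustrative, and the closed-form solution shown holds for this functional form only.

```python
# Minimal sketch of the consumer's problem under an assumed Cobb-Douglas
# utility U(x, y) = x**a * y**(1 - a) with prices px, py and income m. For this
# functional form the utility-maximizing bundle spends the share a of income on
# good x and the share (1 - a) on good y. All numbers are illustrative.

def optimal_bundle(a, px, py, m):
    x_star = a * m / px
    y_star = (1 - a) * m / py
    return x_star, y_star

print(optimal_bundle(a=0.3, px=2.0, py=5.0, m=100.0))   # (15.0, 14.0)
```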
#essay-tubes-session #has-images #reading-tubos-de-ensayo
Inventories and cost of sales (cost of goods sold) are significant items in the financial statements of many companies.
Asymptotics for weakly dependent errors-in-variables
Michal Pešta
Linear relations, containing measurement errors in input and output data, are taken into account in this paper. Parameters of these so-called \emph{errors-in-variables} (EIV) models can be estimated by minimizing the \emph{total least squares} (TLS) of the input-output disturbances. Such an estimate is highly non-linear. Moreover in some realistic situations, the errors cannot be considered as independent by nature. \emph{Weakly dependent} ($\alpha$- and $\varphi$-mixing) disturbances, which are not necessarily stationary nor identically distributed, are considered in the EIV model. Asymptotic normality of the TLS estimate is proved under some reasonable stochastic assumptions on the errors. Derived asymptotic properties provide necessary basis for the validity of block-bootstrap procedures.
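As a rough illustration of the estimator discussed in the abstract, the classical TLS solution of a linear EIV model can be read off the right singular vector of the augmented data matrix [X y] belonging to its smallest singular value. The simulated data, the AR(1)-style dependent disturbances, and all parameter values below are assumptions made for this sketch, not the paper's setting.

```python
# Rough illustration only: the classical TLS estimate of an errors-in-variables
# regression, read off the right singular vector of [X y] that belongs to the
# smallest singular value. Data, noise model, and parameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, beta_true = 500, np.array([2.0, -1.0])

def ar1_noise(size, rho=0.5, scale=0.1):
    """Simple autoregressive noise, i.e. disturbances that are not independent."""
    e = np.empty(size)
    e[0] = rng.normal(scale=scale)
    for t in range(1, size):
        e[t] = rho * e[t - 1] + rng.normal(scale=scale)
    return e

X_true = rng.normal(size=(n, 2))
y_true = X_true @ beta_true
X = X_true + np.column_stack([ar1_noise(n), ar1_noise(n)])   # errors in inputs
y = y_true + ar1_noise(n)                                     # errors in output

Z = np.column_stack([X, y])
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
v = Vt[-1]                    # right singular vector of the smallest singular value
beta_tls = -v[:-1] / v[-1]    # TLS estimate of the regression coefficients
print(beta_tls)               # should be near [2.0, -1.0]
```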
asymptotic normality, errors-in-variables (EIV), dependent errors, total least squares (TLS)
15A51, 15A52, 62E20, 65F15, 62J99
Establishment of a Microsatellite Marker Set for Individual, Pork Brand and Product Origin Identification in Pigs
Lim, Hyun-Tae (Division of Applied Life Science (BK21 program) Graduate School of Gyeongsang National University) ;
Seo, Bo-Yeong (Division of Applied Life Science (BK21 program) Graduate School of Gyeongsang National University) ;
Jung, Eun-Ji (Division of Applied Life Science (BK21 program) Graduate School of Gyeongsang National University) ;
Yoo, Chae-Kyoung (Division of Applied Life Science (BK21 program) Graduate School of Gyeongsang National University) ;
Zhong, Tao (Division of Applied Life Science (BK21 program) Graduate School of Gyeongsang National University) ;
Cho, In-Cheol (National Institute of Animal Science, R. D. A.) ;
Yoon, Du-Hak (National Institute of Animal Science, R. D. A.) ;
Lee, Jung-Gyu (Division of Applied Life Science (BK21 program) Graduate School of Gyeongsang National University) ;
Jeon, Jin-Tae (Division of Applied Life Science (BK21 program) Graduate School of Gyeongsang National University)
Seventeen porcine microsatellite (MS) markers recommended by the EID+DNA Tracing EU project, ISAG and the Roslin Institute were selected for use in porcine individual and brand identification. The MSA, CERVUS, FSTAT, GENEPOP and API-CALC programs were applied for calculating heterozygosity indices. By considering the heterozygosity value and PCR product size of each marker, we established a MS marker set composed of 13 MS markers (SW936, SW951, SW787, S00090, S0026, SW122, SW857, S0005, SW72, S0155, S0225, SW24 and SW632) and two sexing markers. The expected probability of identity among genotypes of random individuals (PI), among genotypes of random half sibs ($PI_{half-sibs}$) and among genotypes of random sibs ($PI_{sibs}$) were estimated as $2.47\times10^{-18}$, $6.39\times10^{-13}$ and $1.08\times10^{-8}$, respectively. The results indicate that the established marker set can provide sufficient discriminating power in both individual and parentage identification for the commercial pigs produced in Korea.
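For orientation, a commonly used per-locus formula for the probability of identity between unrelated individuals is PI = Σ p_i^4 + Σ_{i<j} (2 p_i p_j)^2, with per-locus values multiplied across loci treated as independent. The sketch below uses made-up allele frequencies rather than the study's data, and does not cover the related-pair variants PI_sibs and PI_half-sibs, which use different formulas.

```python
# Sketch of the probability of identity (PI) between unrelated individuals,
# computed per locus from allele frequencies as sum(p_i^4) + sum_{i<j}(2 p_i p_j)^2
# and multiplied across loci assumed independent. Allele frequencies are made up.
from itertools import combinations

def pi_locus(freqs):
    homozygote_term = sum(p ** 4 for p in freqs)
    heterozygote_term = sum((2 * p * q) ** 2 for p, q in combinations(freqs, 2))
    return homozygote_term + heterozygote_term

loci = [
    [0.4, 0.3, 0.2, 0.1],
    [0.5, 0.25, 0.25],
    [0.35, 0.35, 0.2, 0.1],
]

pi_combined = 1.0
for freqs in loci:
    pi_combined *= pi_locus(freqs)

print(pi_combined)   # combined PI over the assumed-independent loci
```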
Microsatellite markers; Brand identification; Porcine
Supported by: Ministry for Food, Agriculture, Forestry and Fisheries (Korea)
Ayres, K. L. and Overall, A. D. J. 2004. API-CALC 1.0: a computer program for calculating the average probability of identity allowing for substructure, inbreeding and the presence of close relatives. Molecular Ecology Notes 4:315-318. https://doi.org/10.1111/j.1471-8286.2004.00616.x
Barker, J. S. F., Tan, S. G., Selvaraj, O. S. and Mukherjee, T. K. 1997. Genetic variation within and relationships among populations of Asian water buffalo (Bubalus bualis). Anim. Genet. 28:1-13. https://doi.org/10.1111/j.1365-2052.1997.00036.x
Bjornstad, G., Nilsen, N. O. and Roed, K. H. 2003. Genetic relationship between Mongolian and Norwegian horses ? Anim. Genet. 34:55-58 https://doi.org/10.1046/j.1365-2052.2003.00922.x
Blott, S. C., Williams, J. L. and Haley, C. S. 1999. Discriminating among cattle breeds using genetic markers. Heredity 82:613-619 https://doi.org/10.1046/j.1365-2540.1999.00521.x
Dieringer, D. and Schlötterer, C. 2002. Microsatellite analyser (MSA): a platform independent analysis tool for large microsatellite data sets. Molecular Ecology Notes 3(1):167-169.
Goudet, J. 2001. FSTAT, a program to Estimate and Test Gene Diversities and Fixation Indices (version 2.9.3). Available from http://www.unil.ch/izea/software/fstat.html
IHGSC (International Human Genome Sequencing Consortium). 2001. Initial sequencing and analysis of the human genome. Nature 409:860-921. https://doi.org/10.1038/35057062
Li, K., Chen, Y., Moran, C., Fan, B., Zhao, S. and Peng, Z. 2000. Analysis of diversity and genetic relationships between four Chinese indigenous pig breeds and one Australian commercial pig breed. Anim. Genet. 31:322-325. https://doi.org/10.1046/j.1365-2052.2000.00649.x
Marshall, T. C., Slate, J., Kruuk, L. E. B. and Pemberton, J. M. 1998. Statistical confidence for likelihood-based paternity inference in natural populations. Mol. Ecol. 7:639-655. https://doi.org/10.1046/j.1365-294x.1998.00374.x
Nei, M. 1972. Genetic distance between populations. Am. Nat. 106:283-292. https://doi.org/10.1086/282771
Kaul, R., Singh, A., Vijh, R. K., Tantia, M. S. and Behl, R. 2001. Evaluation of the genetic variability of 13 microsatellite markers in native Indian pigs. J. Genet 80:149-153. https://doi.org/10.1007/BF02717911
Raymond, M. and Rousset, F. 1995. GENEPOP (version 1.2): population genetics software for exact tests and ecumenicism. J. Heredity 86:248-249.
Weir, B. S. and Hill, W. G. 2002. Estimating F-statistics. Annu. Rev. Genet. 36:721-50. https://doi.org/10.1146/annurev.genet.36.050802.093940
Ministry for Food, Agriculture, Forestry and Fisheries. 2008. Food, Agriculture, Forestry and Fisheries Statistical Yearbook.
Lim, H. T., Min, H. S., Moon, W. G., Lee, J. B., Kim, J. H., Cho, I. C., Lee, H. K., Lee, Y. W., Lee, J. G. and Jeon, J. T. 2005. Analysis and selection of microsatellites applicable to the Hanwoo (Korean cattle) traceability system. Journal of Animal Science and Technology 47(4):491-500.
March 2013, 8(1): 291-325. doi: 10.3934/nhm.2013.8.291
The existence and uniqueness of unstable eigenvalues for stripe patterns in the Gierer-Meinhardt system
Kota Ikeda 1,
Graduate School of Advanced Mathematical Science, Meiji University, 1-1-1 Higashimita, Tama-ku, Kawasaki, Kanagawa 214-8571, Japan
Received March 2012 Revised January 2013 Published April 2013
The Gierer-Meinhardt system is a mathematical model describing the process of hydra regeneration. This system has a stationary solution with a stripe pattern on a rectangular domain, but numerical results suggest that such a stripe pattern is unstable. In [8], Kolokolnikov et al. proved the existence of a positive eigenvalue, which is called an unstable eigenvalue, for a stationary solution with a stripe pattern by the NLEP method, which implies the instability of the stripe pattern. In addition, the uniqueness of the unstable eigenvalue was shown under some technical assumptions in [8]. In this paper, we prove the existence and uniqueness of an unstable eigenvalue by using the SLEP method without any extra conditions. We also prove the existence of a single-spike solution in one dimension.
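For readers who want to see the kind of system behind this analysis, the sketch below time-steps a commonly used nondimensional form of the 1D Gierer-Meinhardt activator-inhibitor equations with an explicit finite-difference scheme. The specific kinetics, parameters, grid, and initial data are assumptions for illustration; the sketch only shows the instability of the homogeneous state and the activator's tendency to localize, not the singular-limit eigenvalue analysis of the paper.

```python
# Illustrative sketch only: explicit finite-difference time stepping for a
# commonly used nondimensional 1D Gierer-Meinhardt activator-inhibitor system
#     a_t = eps^2 a_xx - a + a^2 / h,      tau h_t = D h_xx - h + a^2,
# with zero-flux (Neumann) boundary conditions. Kinetics, parameters, grid, and
# initial data are assumptions; this is not the paper's SLEP analysis.
import numpy as np

eps2, D, tau = 0.01, 1.0, 0.5
N, length = 100, 1.0
dx = length / (N - 1)
dt, steps = 1e-5, 500_000          # dt kept below the diffusive limit ~ dx^2 * tau / (2 D)
x = np.linspace(0.0, length, N)

a = 1.0 + 0.1 * np.cos(np.pi * x)  # perturbed homogeneous steady state (a, h) = (1, 1)
h = np.ones(N)

def laplacian(u):
    lap = np.empty_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx ** 2
    lap[0] = 2 * (u[1] - u[0]) / dx ** 2      # zero-flux ends via ghost points
    lap[-1] = 2 * (u[-2] - u[-1]) / dx ** 2
    return lap

for _ in range(steps):             # runs for some tens of seconds in plain NumPy
    a_new = a + dt * (eps2 * laplacian(a) - a + a ** 2 / h)
    h_new = h + (dt / tau) * (D * laplacian(h) - h + a ** 2)
    a, h = a_new, h_new

# The homogeneous state is Turing-unstable for these parameters, and the
# activator typically concentrates into a localized peak near one end.
print(float(a.max()), float(a.min()))
```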
Keywords: eigenvalue problem, Gierer-Meinhardt system, SLEP method.
Mathematics Subject Classification: 35K57, 35K5.
Citation: Kota Ikeda. The existence and uniqueness of unstable eigenvalues for stripe patterns in the Gierer-Meinhardt system. Networks & Heterogeneous Media, 2013, 8 (1) : 291-325. doi: 10.3934/nhm.2013.8.291
H. Brezis, "Functional Analysis, Sobolev Spaces and Partial Differential Equations," Universitext, 2011.
A. Doelman, R. Gardner and T. J. Kaper, Large stable pulse solutions in reaction-diffusion equations, Indiana Univ. Math. J., 50 (2001), 443. doi: 10.1512/iumj.2001.50.1873.
A. Doelman and H. van der Ploeg, Homoclinic stripe patterns, SIAM J. Appl. Dyn. Syst., 1 (2002), 65. doi: 10.1137/S1111111101392831.
S. Ei and J. Wei, Dynamics of metastable localized patterns and its application to the interaction of spike solutions for the Gierer-Meinhardt systems in two spatial dimensions, Japan J. Indust. Appl. Math., 19 (2002), 181. doi: 10.1007/BF03167453.
A. Gierer and H. Meinhardt, A theory of biological pattern formation, Kybernetik, 12 (1972), 30. doi: 10.1007/BF00289234.
P. Hartman, "Ordinary Differential Equations," Birkhäuser Boston, 1982.
D. Iron, M. J. Ward and J. Wei, The stability of spike solutions to the one-dimensional Gierer-Meinhardt model, Phys. D, 150 (2001), 25. doi: 10.1016/S0167-2789(00)00206-2.
T. Kolokolnikov, W. Sun, M. J. Ward and J. Wei, The stability of a stripe for the Gierer-Meinhardt model and the effect of saturation, SIAM J. Appl. Dyn. Syst., 5 (2006), 313. doi: 10.1137/050635080.
S. Kondo and R. Asai, A reaction-diffusion wave on the marine angelfish Pomacanthus, Nature, 376 (1995), 765. doi: 10.1038/376765a0.
S. Kondo and T. Miura, Reaction-diffusion model as a framework for understanding biological pattern formation, Science, 329 (2010), 1616.
P. K. Maini, K. J. Painter and H. N. P. Chau, Spatial pattern formation in chemical and biological systems, J. Chem. Soc. Faraday Trans., 93 (1997), 3601. doi: 10.1039/a702602a.
H. Meinhardt, "Models of Biological Pattern Formation," Academic Press, 1982.
J. D. Murray, "Mathematical Biology," Biomathematics, 19 (1989). doi: 10.1007/978-3-662-08539-4.
A. Nakamasu, G. Takahashi, A. Kanbe and S. Kondo, Interactions between zebrafish pigment cells responsible for the generation of Turing patterns, Proceedings of the National Academy of Sciences, 106 (2009), 8429. doi: 10.1073/pnas.0808622106.
Y. Nakamura, C. D. Tsiairis, S. Özbek and T. W. Holstein, Autoregulatory and repressive inputs localize Hydra Wnt3 to the head organizer, Proceedings of the National Academy of Sciences, 108 (2011), 9137. doi: 10.1073/pnas.1018109108.
Y. Nishiura, Coexistence of infinitely many stable solutions to reaction diffusion systems in the singular limit, in, 3 (1994), 25.
Y. Nishiura and H. Fujii, Stability of singularly perturbed solutions to systems of reaction-diffusion equations, SIAM J. Math. Anal., 18 (1987), 1726. doi: 10.1137/0518124.
H. Shoji, Y. Iwasa and S. Kondo, Stripes, spots, or reversed spots in two-dimensional Turing systems, J. Theoret. Biol., 224 (2003), 339. doi: 10.1016/S0022-5193(03)00170-X.
I. Takagi, Point-condensation for a reaction-diffusion system, J. Differential Equations, 61 (1986), 208. doi: 10.1016/0022-0396(86)90119-1.
M. Taniguchi, A uniform convergence theorem for singular limit eigenvalue problems, Adv. Differential Equations, 8 (2003), 29.
M. Taniguchi and Y. Nishiura, Stability and characteristic wavelength of planar interfaces in the large diffusion limit of the inhibitor, Proc. Roy. Soc. Edinburgh Sect. A, 126 (1996), 117. doi: 10.1017/S0308210500030638.
M. Taniguchi and Y. Nishiura, Instability of planar interfaces in reaction-diffusion systems, SIAM J. Math. Anal., 25 (1994), 99. doi: 10.1137/S0036141092233500.
A. Turing, The chemical basis of morphogenesis, Phil. Trans. R. Soc. Lond. B, 327 (1952), 37. doi: 10.1098/rstb.1952.0012.
M. J. Ward and J. Wei, Asymmetric spike patterns for the one-dimensional Gierer-Meinhardt model: equilibria and stability, European J. Appl. Math., 13 (2002), 283. doi: 10.1017/S0956792501004442.
J. Wei and M. Winter, Existence and stability analysis of asymmetric patterns for the Gierer-Meinhardt system, J. Math. Pures Appl. (9), 83 (2004), 433. doi: 10.1016/j.matpur.2003.09.006.
J. Wei and M. Winter, Spikes for the two-dimensional Gierer-Meinhardt system: The weak coupling case, J. Nonlinear Sci., 11 (2001), 415. doi: 10.1007/s00332-001-0380-1.
September 2013, 12(5): 1943-1957. doi: 10.3934/cpaa.2013.12.1943
Elliptic equations with cylindrical potential and multiple critical exponents
Xiaomei Sun 1, and Yimin Zhang 2,
Wuhan Institute of Physics and Mathematics, Chinese Academy of Sciences, Wuhan 430071, China
Wuhan Institute of Physics and Mathematics, Chinese Academy of Sciences, PO Box 71010, Wuhan 430071, China
Received November 2011 Revised August 2012 Published January 2013
In this paper, we deal with the following problem: \begin{eqnarray*} -\Delta u-\lambda |y|^{-2}u=|y|^{-s}u^{2^{*}(s)-1}+u^{2^{*}-1}\ \ \ in \ \ R^N , y\neq 0\\ u\geq 0 \end{eqnarray*} where $u(x)=u(y,z): R^m\times R^{N-m}\longrightarrow R$, $N\geq 4$, $2 < m < N$, $\lambda < (\frac{m-2}{2})^2$ and $0 < s < 2$, $2^*(s)=\frac{2(N-s)}{N-2}$, $2^*=\frac{2N}{N-2}$. Using the variational method, we prove the existence of a ground state solution for the case $0 < \lambda < (\frac{m-2}{2})^2$ and the existence of a cylindrical weak solution for the case $\lambda<0$.
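As a quick check on the notation, the two critical exponents appearing in the problem can be evaluated for sample values of N and s (the sample values are chosen only for illustration):

```python
# Quick evaluation of the two critical exponents defined above, for sample
# values of the dimension N and the Hardy-Sobolev parameter s (illustrative).
def critical_exponents(N, s):
    two_star_s = 2 * (N - s) / (N - 2)    # 2*(s) = 2(N - s)/(N - 2)
    two_star = 2 * N / (N - 2)            # 2*    = 2N/(N - 2)
    return two_star_s, two_star

print(critical_exponents(N=4, s=1.0))     # (3.0, 4.0)
print(critical_exponents(N=6, s=0.5))     # (2.75, 3.0)
```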
Keywords: Variational method, cylindrical weight, multiple critical exponents.
Mathematics Subject Classification: Primary: 35J15, 35J61; Secondary: 35J2.
Citation: Xiaomei Sun, Yimin Zhang. Elliptic equations with cylindrical potential and multiple critical exponents. Communications on Pure & Applied Analysis, 2013, 12 (5) : 1943-1957. doi: 10.3934/cpaa.2013.12.1943
${\mathit m}_{{{\mathit B}_{{s1}}^{0}}}–{\mathit m}_{{{\mathit B}^{*+}}}$
VALUE (MeV)                        DOCUMENT ID          TECN   COMMENT
$\bf{ 503.99 \pm0.17}$ OUR FIT
$504.03$ $\pm0.12$ $\pm0.15$       1  AALTONEN 2014I    CDF    ${{\mathit p}}{{\overline{\mathit p}}}$ at 1.96 TeV
• • We do not use the following data for averages, fits, limits, etc. • •
                                   2  AALTONEN 2008K    CDF    Repl. by AALTONEN 2014I
1 AALTONEN 2014I reports ${\mathit m}_{{{\mathit B}_{{s1}}{(5830)}^{0}}}$ $−$ ${\mathit m}_{{{\mathit B}^{*+}}}$ $−$ ${\mathit m}_{{{\mathit K}^{-}}}$ = $10.35$ $\pm0.12$ $\pm0.15$ MeV which we adjusted by the ${{\mathit K}^{-}}$ mass.
2 Uses two-body decays into ${{\mathit K}^{-}}$ and ${{\mathit B}^{+}}$ mesons reconstructed as ${{\mathit B}^{+}}$ $\rightarrow$ ${{\mathit J / \psi}}{{\mathit K}^{+}}$ , ${{\mathit J / \psi}}$ $\rightarrow$ ${{\mathit \mu}^{+}}{{\mathit \mu}^{-}}$ or ${{\mathit B}^{+}}$ $\rightarrow$ ${{\overline{\mathit D}}^{0}}{{\mathit \pi}^{+}}$ , ${{\overline{\mathit D}}^{0}}$ $\rightarrow$ ${{\mathit K}^{+}}{{\mathit \pi}^{-}}$ .
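The adjustment in footnote 1 is a shift by the charged kaon mass; a minimal sketch of the arithmetic, where the kaon mass figure is the standard PDG value and is quoted here as an assumption rather than taken from this page:

```python
# Sketch of the adjustment described in footnote 1: the quoted value is the
# reported m(Bs1) - m(B*+) - m(K-) shifted up by the charged kaon mass. The
# kaon mass below is the standard PDG value, stated here as an assumption.
m_K = 493.677                  # MeV, charged kaon mass (assumed)
reported = 10.35               # MeV, AALTONEN 2014I: m(Bs1) - m(B*+) - m(K-)
stat, syst = 0.12, 0.15        # MeV

adjusted = reported + m_K
print(f"{adjusted:.2f} +/- {stat} +/- {syst} MeV")   # ~504.03 MeV
```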
AALTONEN 2014I
PR D90 012013 Study of Orbitally Excited ${{\mathit B}}$ Mesons and Evidence for a New ${{\mathit B}}{{\mathit \pi}}$ Resonance
AALTONEN 2008K
PRL 100 082001 Observation of Orbitally Excited ${{\mathit B}_{{s}}}$ Mesons
Friday, December 01, 2017
Fall 2017 Architecture Sustainability Colloquium
Colloquium | August 25 – December 1, 2017 every day | 112 Wurster Hall
FRIDAYS - AUG 25 through DEC 1. CHECK THE SCHEDULE FOR SPEAKERS. Bay Area Leaders discuss topics in sustainability.
Chemistry Graduate Life Committee December Meeting
Meeting | December 1 | Latimer Hall, Bixby North
GLC: Members, Chemistry Graduate Life Committee
Sponsor: College of Chemistry
The Chemistry Graduate Life Committee exists to support graduate students in the Chemistry Department at UC Berkeley. We provide peer mentoring, advocate for graduate student wellness, coordinate social events, distribute news of campus events and resources, and coordinate recruiting weekends. We meet monthly. Joining the CGLC is easy: just show up and introduce yourself!
canceled - Colloquium: Claire Chase
Colloquium | December 1 | 125 Morrison Hall | Canceled
Sponsor: Department of Music
Presented by Cal Performances in association with the Department of Music at UC Berkeley.
Claire Chase is a soloist, collaborative artist, curator and advocate for new and experimental music. Over the past decade she has given the world premieres of hundreds of new works for the flute in performances throughout the Americas, Europe and Asia, and she has championed new music throughout the...
Suitcase Clinic Donations: Supply Drive for their 3 Clinics
Miscellaneous | November 20 – December 4, 2017 every day | University Hall, 207 Suite, Cubicle 4
Sponsor: Division of Community Health Sciences
Items needed include basic toiletries, warm clothing, blankets, socks, infant/child supplies, packaged foods, gummy vitamins, etc.
Botanical Illustration: Winter / Holiday
Workshop | November 30 – December 1, 2017 every day | 10 a.m.-4 p.m. | UC Botanical Garden
This two-day class will introduce you to the fascinating world of Botanical Art. Catherine Watters will teach you to observe, measure and draw plants in great detail and with botanical accuracy. Students will work with graphite, colored pencil and watercolors. All levels are welcome.
Registration required: $190, $175 members
Registration info: Register online or by calling 510-664-9841, or by emailing [email protected]
Clark Kerr Garden Open Work Hours
Tour/Open House | October 27 – December 8, 2017 every Friday | 10 a.m.-12 p.m. | Clark Kerr Campus
Sponsor: Campus Gardens
Located on the CKC campus, sponsored by UC Grounds Dept. and Cal Dining
TS-04 Improving Safety at Intersections
Course | November 29 – December 1, 2017 every day | 10 a.m.-12 p.m. | Online
Speaker: Nazir Lalani, PE, President, Traffex Engineers, Inc
Sponsor: Institute of Transportation Studies
About 65 percent of all crashes in urban areas and 40 percent of those in rural areas occur at or near intersections or driveways. Safety improvements at these locations have always been a priority and pose a challenge for most transportation agencies in California.
Registration required: $290.00
Europe and the Euro: The way ahead
Panel Discussion | December 1 | 11 a.m.-12:30 p.m. | N270 Haas School of Business
Panelist/Discussants: Barry Eichengreen, UC Berkeley; Gabriele Giudice, European Commission; Gérard Roland, UC Berkeley
Sponsors: Institute of European Studies, Haas School of Business
Ten years after the crisis struck the European economy, wind seems to be back in Europe's sails. How can the European Union exploit this window of opportunity to strengthen the Economic and Monetary Union, increase the resilience of individual economies and relaunch economic and social convergence between its Member States?
RSVP recommended
RSVP info: RSVP by emailing Viviana Padelli at [email protected] by December 1.
Special Event | December 1 | 11 a.m.-6 p.m. | Martin Luther King Jr. Student Union, East Pauley Ballroom
Sponsor: Be Well at Work - Wellness Program
Blood Drives at UC Berkeley are sponsored by the American Red Cross (ARC) to provide much needed blood to hospitals throughout the Bay Area.
Blood Drives at UC Berkeley are held once a month. Appointments to donate are encouraged and walk-ins are always welcome. Drives are held at two locations on all dates from 11am-6pm: MLK- Student Union – East Pauley Ballroom and Red Cross Bloodmobile (at...
Jacobs Design Conversations: Steve Vassallo, "How to Design a Company that Matters"
Lecture | December 1 | 12-1 p.m. | 310 Jacobs Hall
Sponsor: Jacobs Institute for Design Innovation
Foundation Capital general partner Steve Vassallo will speak at Jacobs Hall as part of the Jacobs Design Conversations series.
Post-Baccalaureate Health Professions Program Online Information Session
Information Session | December 1 | 12-1 p.m. | Online
Gain academic preparation in the sciences along with one-on-one advising to enhance your application to medical, dental or veterinary school, as well as to advanced degree programs in medical- and health-related fields.
Labor Lunch Seminar: "$15 Minimum Wage Policies: Early Evidence."
Seminar | December 1 | 12-1 p.m. | 648 Evans Hall
Featured Speaker: Carl Nadler, IRLE
Dancing for Fun and Fitness (BEUHS605)
Workshop | December 1 | 12:10-1 p.m. | 251 Hearst Gymnasium
Speaker: Nadia Qabazard
Sponsor: Be Well at Work - Wellness
Fit some fun and fitness into your day with these free, beginner dance classes. Zumba will be on 9/8 and 12/1, Samba will be on 10/6 and Polynesian/Hula will be on 11/3. No partner required. Comfortable clothing and athletic shoes recommended.
Solid State Technology and Devices Seminar: Antenna-LED for on-chip optical communication
Seminar | December 1 | 1-2 p.m. | Cory Hall, 521 Cory (Hogan Room)
Speaker/Performer: Seth A. Fortuna, UC Berkeley
Sponsor: Electrical Engineering and Computer Sciences (EECS)
Using an optical antenna, it is now possible to make spontaneous emission faster than stimulated emission. This alludes to the exciting possibility of an LED faster than the LASER. Such an antenna-coupled LED (or simply antenna-LED) is well-suited as a light source for on-chip optical communication where small size, fast speed, and high efficiency are needed to achieve the promised benefit of...
Ground Penetrating Radar for Archaeology
Workshop | December 1 | 1-5 p.m. | 101 2251 College (Archaeological Research Facility)
Speaker: Scott Byram, Research Associate, Archaeological Research Facility, UC Berkeley
Sponsor: Archaeological Research Facility
At 1pm the workshop will begin at the UC Faculty Club lawn where subsurface features are being mapped.
Reservation info: Workshops cost $50 for non-UC attendees. The workshops are free for students, faculty, and staff. Make reservations online
GSSI 350 MHz Hyperstacking Antenna
Talking About Combinatorial Objects Student Seminar: Grassmannians, Matroids, and Flags
Seminar | December 1 | 1-2 p.m. | 748 Evans Hall
Speaker: Charles Wang, UC Berkeley
Sponsor: Department of Mathematics
We will study the relationship between matroids and the geometric structure of the Grassmannian in terms of Schubert cells and other related objects. We will briefly extend these geometric ideas to the more general setting of flag varieties and flag matroids. Time permitting, we will discuss various ways to realize matroid polytopes for matroids associated to the Grassmannian.
Tour/Open House | January 6, 2017 – December 30, 2018 every Sunday, Thursday, Friday & Saturday with exceptions | 1:30-2:45 p.m. | UC Botanical Garden
Join us for a free, docent-led tour of the Garden as we explore interesting plant species, learn about the vast collection, and see what is currently in bloom. Meet at the Entry Plaza.
Free with Garden admission
Advanced registration not required
Tours may be cancelled without notice.
For day-of inquiries, please call 510-643-2755
For tour questions, please email [email protected]...
Western Language Resources for East Asian Studies
Information Session | December 1 | 2-3:30 p.m. | 341 East Asian Library
Speaker/Performer: Bruce Williams
Sponsor: Library
Introduction to locating materials and information in Western languages in the area of East Asian Studies by using library databases, catalogs and other bibliographic tools.
Translating Nanotechnologies into Clinical Settings to Measure Response to Novel Cancer Agents: Nano Seminar Series
Seminar | December 1 | 2-3 p.m. | 180 Tan Hall | Note change in location
Speaker/Performer: Prof. Alice C. Fan, MD, Stanford University School of Medicine, Oncology
Sponsor: Berkeley Nanosciences and Nanoengineering Institute
My laboratory studies how kinase inhibitors modulate protein signaling in patients with cancer. A barrier to this work has been the requirement for serial tumor samples from patients in order to measure changes in protein activation. To overcome this barrier, I use nanotechnologies to profile proteins in small numbers of patient tumor cells.
Our work addresses a series of questions at...
Functional Analysis Seminar: The Paulsen problem, continuous operator scaling, and smoothed analysis
Seminar | December 1 | 2:10-3 p.m. | 748 Evans Hall | Note change in location
Speaker: Tsz Chiu Kwok, University of Waterloo
The Paulsen problem is a basic open problem in operator theory: Given vectors $u_1, ..., u_n$ in $\mathbb R^d$ that are $\varepsilon $-nearly satisfying the Parseval's condition and the equal norm condition, is it close to a set of vectors $v_1, ..., v_n$ in $\mathbb R^d$ that exactly satisfy the Parseval's condition and the equal norm condition? Given $u_1,..., u_n$, we consider the squared...
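A minimal sketch of what "ε-nearly satisfying" the two conditions can mean in practice: measure how far a given set of vectors is from the Parseval condition (frame operator equal to the identity) and from the equal-norm condition. The random vectors and the normalization below are illustrative assumptions.

```python
# Sketch: for vectors u_1, ..., u_n in R^d, measure the deviation of the frame
# operator (sum of u_i u_i^T) from the identity and of each squared norm from
# d/n. The random vectors and the normalization are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n, d = 8, 3
U = rng.normal(size=(n, d))               # rows are the vectors u_i
U *= np.sqrt(d) / np.linalg.norm(U)       # scale so the squared norms sum to d

frame_operator = U.T @ U                  # sum over i of u_i u_i^T
parseval_deviation = np.linalg.norm(frame_operator - np.eye(d))
equal_norm_deviation = np.max(np.abs((U ** 2).sum(axis=1) - d / n))

print(parseval_deviation, equal_norm_deviation)
```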
Student Probability/PDE Seminar: Large Deviation Principle for random graphs II
Seminar | December 1 | 2:10-3:30 p.m. | 891 Evans Hall
Speaker: Fraydoun Rezakhanlou, UC Berkeley
MENA Salon
Workshop | December 1 | 3-4 p.m. | 340 Stephens Hall
Sponsor: Center for Middle Eastern Studies
Every Friday in the semester, the CMES hosts an informal weekly coffee hour and guided discussion of current events in the Middle East and North Africa, open to all and free of charge. Check the calendar the Monday before the Salon for the current week's topic.
BLC Fellows Forum
Presentation | December 1 | 3-5 p.m. | Dwinelle Hall, B-4 (Classroom side)
Speaker/Performer: FAll 2017 BLC Fellows, UC Berkeley
Sponsor: Berkeley Language Center
Teaching French Listening Comprehension and Cultural Awareness through Regional Variation
Elyse Ritchey, GSR, French
At the university level, French language instruction in the US traditionally includes a course on phonetics and pronunciation. While the major aim of such courses is to improve students' speaking and listening competence, they also emphasize speaking 'correctly' using...
Solomon Northup's Odyssey
Film - Feature | December 1 | 4 p.m. | Berkeley Art Museum and Pacific Film Archive
Based on the 1853 memoir of a Northern black man kidnapped into slavery, Gordon Parks's made-for-TV drama predates 12 Years a Slave by almost three decades. It has "a somber lyricism that's hard to shake" (Bilge Ebiri).
Transportation Futures: Integrating transportation, land use, and environmental planning for sustainable urban development
Lecture | December 1 | 4 p.m. | 290 Hearst Memorial Mining Building
Speaker/Performer: Elizabeth Deakin, UC Berkeley
Hydrogenase- and ACS-Inspired Bioorganometallic Chemistry
Seminar | December 1 | 4-5 p.m. | 120 Latimer Hall
Featured Speaker: Prof. Marcetta Darensbourg, Department of Chemistry, Texas A&M University
Enzyme active sites such as the [FeFe]- and [NiFe]-H2ases, as well as Carbon Monoxide Dehydrogenase and Acetyl coA Synthase, ACS, have inspired chemists to use well-established principles and synthetic tools of organometallic chemistry in the development of biomimetics. The usual purpose of such studies is to understand how first-row transition metals are rendered by nature into molecular...
Dissertation Talk: Meta Learning for Control
Presentation | December 1 | 4-5 p.m. | 730 Sutardja Dai Hall
Speaker/Performer: Yan Duan, UC Berkeley
Dissertation talk on meta learning for control: policy learning algorithms that can themselves generate algorithms that are highly customized towards a certain domain of tasks.
Logic Colloquium: What is ordinary mathematics?
Colloquium | December 1 | 4:10-5 p.m. | 60 Evans Hall
Speaker: Marianna Antonutti, Munich Center for Mathematical Philosophy
The term "ordinary mathematics" is used to denote a collection of areas that are, in some sense, central to the practice of most working mathematicians, typically taken to contain such fields as number theory, real and complex analysis, geometry, and algebra. This notion has been taken for granted by both philosophers and logicians; for example, reverse mathematics is often described as the...
QB3 Postdoc Seminar: Examining Eph/ephrin signaling pathways that regulate adult neurogenesis and migration of hippocampal stem cells
Seminar | December 1 | 4:30-5:30 p.m. | 177 Stanley Hall
Speaker/Performer: Kira Mosher (David Schaffer lab)
Sponsor: QB3 - California Institute for Quantitative Biosciences
Approximately two decades ago, the longstanding dogma that the adult CNS is incapable of generating new neurons was overturned with the acceptance that adult neurogenesis truly occurs in mammals. With this understanding also came excitement that in maintaining some level of plasticity in adulthood, diseased brains may be capable of regeneration. While adult neurogenesis has become an important... More >
East Bay World History Reading Group: Seeing Like a State by James C. Scott
Meeting | December 1 | 5-7 p.m. | ORIAS Office, Room 520 C
Location: 1995 University Ave, Berkeley, CA 94704
Sponsor: ORIAS (Office of Resources for International and Area Studies)
Teachers in ORIAS World History Reading Groups read one book each month within a global studies theme. Participants meet monthly to eat and spend two hours in collegial conversation. It is a relaxing, intellectually rich atmosphere for both new and experienced teachers.
Attendance restrictions: This event is for k-14 teachers.
Registration info: Register online or by emailing Shane Carter at [email protected]
Mashrou' Leila's Hamed Sinno: In Conversation at UC Berkeley
Special Event | December 1 | 5-7 p.m. | 125 Morrison Hall
Speaker/Performer: Hamed Sinno, Mashrou' Leila
Join the Center for Middle Eastern Studies (CMES) for a conversation with Hamed Sinno, lead singer of the Lebanese band Mashrou' Leila, on art, aesthetics, performance, and what it means to be an artist today. How is artistic production and consumption changing in the Middle East and beyond? How are new aesthetic territories and experiments being defined? What is the political, cultural, and... More >
TRANSOC Annual Holiday Party
Holiday | December 1 | 5:15-11:15 p.m. | 412 McLaughlin Hall
Speaker/Performer: Aqshems Meten Egun Tola Ade Le Kon Nichols
Sponsor: ASUC (Associated Students of the University of California)
Raina J. Léon, James Cagney, and Josiah Luis Alderete
Reading - Literary | December 1 | 6 p.m. | Berkeley Art Museum and Pacific Film Archive
Raina J. Léon, James Cagney, and Josiah Luis Alderete perform their poetry.
"Set in a vividly mod Swinging London, Antonioni's first English-language film [is] a cryptic murder mystery . . . a landmark of the decade's observational outrage and Pop disposability" (Time Out).
Exhibits and Ongoing Events
People Made These Things: Connecting with the Makers of Our World
Exhibit - Multimedia | April 12 – December 17, 2017 every Sunday, Wednesday, Thursday, Friday & Saturday with exceptions | Hearst Museum of Anthropology, 102 Kroeber Hall
Sponsor: Phoebe A. Hearst Museum of Anthropology
Why do we sometimes know a lot about who made things, and why do we sometimes not? Why does it sometimes matter to us, and why might it sometimes not? These are the questions that will be raised in the exhibit that will inaugurate the Phoebe A. Hearst Museum of Anthropology's renovated Kroeber Hall Gallery. The Museum will display objects from the collection that urge visitors to think... More >
Tickets required: Free for UC Berkeley Students, Faculty, Staff, Hearst Museum Members, and Youth under 18; $6 General Admission; $3 Non-UC Berkeley Students and seniors over 65
In-Between Places: Korean American Artists in the Bay Area
Exhibit - Painting | September 13 – December 10, 2017 every day | Mills College Art Museum
Location: 5000 MacArthur Boulevard, Oakland, CA 94613
Sponsor: Mills College Art Museum
In-Between Places (사이에 머물다) is the story of Korean American artists and their dreams, featuring new work by: Jung Ran Bae; Sohyung Choi; Kay Kang; Miran Lee; Young June Lew; Nicholas Oh; Younhee Paik; and Minji Sohn.
Street by Minji Sohn, 2016
Jennie Smith: New Drawings
Exhibit - Painting | September 18 – December 15, 2017 every Monday, Tuesday, Wednesday, Thursday & Friday | Stephens Hall, Townsend Center for the Humanities
Sponsor: Townsend Center for the Humanities
San Francisco artist Jennie Smith infuses her detailed drawings of the natural world with an imaginative sensibility.
Viewing hours are generally Monday through Friday, 9 am to 4 pm. The exhibit is located in a space also used for events and meetings; please call (510) 643-9670 or email in advance to confirm room availability.
The Russian Revolution Centenary: 1917-2017: Politics, Propaganda and People's Art
Exhibit - Multimedia | September 11, 2017 – January 8, 2018 every day | Moffitt Undergraduate Library
This exhibition is dedicated to the centenary of the Russian Revolution that took place in October of 1917. The exhibition will take place in the Moffitt Library, and it will highlight several print-items from the revolutionary times.
Attendance restrictions: Access to the Moffitt Undergraduate Library is restricted and you'll need the UC Berkeley/ Cal Card for entry.
Gordon Parks: The Making of an Argument
Exhibit - Photography | October 6 – December 17, 2017 every Sunday, Wednesday, Thursday, Friday & Saturday | Berkeley Art Museum and Pacific Film Archive
This investigation of the editorial process behind Parks's photo-essay "Harlem Gang Leader" reveals unspoken conflicts between photographer, editor, subject, and truth.
Miyoko Ito/ MATRIX 267
Exhibit - Painting | October 6, 2017 – January 28, 2018 every Sunday, Wednesday, Thursday, Friday & Saturday | Berkeley Art Museum and Pacific Film Archive
Discover the singular vision of a Berkeley-born artist whose paintings explore both exterior and interior landscapes.
Repentant Monk: Illusion and Disillusion in the Art of Chen Hongshou
Exhibit - Painting | October 25, 2017 – January 28, 2018 every Sunday, Wednesday, Thursday, Friday & Saturday | Berkeley Art Museum and Pacific Film Archive
Chen Hongshou is a major figure in Chinese art of the late Ming and early Qing dynasties. This exhibition explores his visually compelling work and his response to the turmoil of his times.
Veronica De Jesus/ MATRIX 268
Exhibit - Multimedia | October 25, 2017 – February 25, 2018 every Sunday, Wednesday, Thursday, Friday & Saturday | Berkeley Art Museum and Pacific Film Archive
De Jesus's memorial portraits honor artists, writers, and diverse cultural figures, testifying to the fact that each life is valuable and worthy of recognition.
Buddhist Realms
Exhibit - Multimedia | October 25, 2017 – April 22, 2018 every Sunday, Wednesday, Thursday, Friday & Saturday | Berkeley Art Museum and Pacific Film Archive
This presentation showcases exquisite examples of Buddhist art from the Himalayan region.
On the Hour/ Hayoun Kwon
Exhibit - Multimedia | November 1 – December 29, 2017 every Sunday, Wednesday, Thursday, Friday & Saturday | Berkeley Art Museum and Pacific Film Archive
Commissioned for BAMPFA's outdoor screen, Kwon's imaginative digital animation evokes a woman who transformed her apartment into an aviary.
Art Wall: Karabo Poppy Moletsane
Exhibit - Painting | November 22, 2017 – July 15, 2018 every Sunday, Wednesday, Thursday, Friday & Saturday | Berkeley Art Museum and Pacific Film Archive
Moletsane's vibrant, large-scale portraits for the Art Wall draw on both traditional African visual culture and Afrofuturism.
Fiat Yuks: Cal Student Humor, Then and Now
Exhibit - Artifacts | October 16, 2017 – June 3, 2018 every day | Bancroft Library, Rowell Cases, 2nd floor corridor between The Bancroft Library and Doe Library
Sponsor: Bancroft Library
Let there be laughter! This exhibition features Cal students' cartoons, jokes, and satire throughout the years, selected from their humor magazines and other publications.
Exhibit - Artifacts | October 13, 2017 – May 30, 2019 every Monday, Tuesday, Wednesday, Thursday, Friday & Saturday | Bancroft Library, Rowell Cases, near Heyns Reading Room, 2nd floor corridor between The Bancroft Library and Doe
Let there be laughter! This exhibition features Cal students' cartoons, jokes, and satire from throughout the years, selected from their humor magazines and other publications.
The Summer of Love 50th Anniversary
Exhibit - Artifacts | July 21 – December 29, 2017 every Monday, Tuesday, Wednesday, Thursday & Friday | 9 a.m.-5 p.m. | Bancroft Library, Bancroft Corridor between The Bancroft Library and Doe Library
Marking a 50th anniversary, Bancroft's rare and unique collections documenting the 1967 "Summer of Love" are on exhibit in the corridor cases. Presented are images from the Bay Area alternative press, psychedelic rock posters and mailers, documentary photographs of the Haight-Ashbury scene and major rock concerts, and material from the personal papers of author Joan Didion and poet Michael... More >
¡Viva La Fiesta! Mexican Traditions of Celebration
Exhibit - Artifacts | October 13, 2017 – February 28, 2018 every Monday, Tuesday, Wednesday, Thursday & Friday | 10 a.m.-4 p.m. | Bancroft Library, The Bancroft Library Gallery
¡Viva la Fiesta! explores the cycle of traditional religious and patriotic celebrations that have for centuries marked the Mexican calendar. The exhibition draws on unique historical representations of the fiestas and examines their relationship to communal identities, national politics, religious practices, and indigenous customs. These original materials, which are preserved in the... More >
The Invisible Museum: History and Memory of Morocco
Exhibit - Multimedia | August 29 – December 15, 2017 every Tuesday, Wednesday, Thursday & Friday | 11 a.m.-4 p.m. | Magnes Collection of Jewish Art and Life (2121 Allston Way)
Since its inception in 1962, the former Judah L. Magnes Museum distinguished itself by directing its collecting efforts outside the focus on European Jewish culture and history that was prevalent among American Jewish museums at the time. During the 1970s and 1980s, its founders, Seymour and Rebecca Fromer, actively corralled an informal team of activist collectors and supporters. Together, they... More >
Sketching "Fiddler": Set Designs by Mentor Huebner
The Power of Attention: Magic and Meditation in Hebrew "shiviti" Manuscript Art
Exhibit - Artifacts | August 29 – December 15, 2017 every Tuesday, Wednesday, Thursday & Friday | 11 a.m.-4 p.m. | Magnes Collection of Jewish Art and Life (2121 Allston Way)
Created from the early-modern period and into the present, shiviti manuscripts are found in Hebrew prayer books, ritual textiles, and on the walls of synagogues and homes throughout the Jewish diaspora. Wrestling with ways to externalize the presence of God in Jewish life, these documents center upon the graphic representation of God's ineffable four-letter Hebrew name, the Tetragrammaton, and... More >
The Worlds of Arthur Szyk: The Taube Family Arthur Szyk Collection
Auditorium installation of high-resolution images of select collection items.
Acquired by The Magnes Collection of Jewish Art and Life in 2017 thanks to an unprecedented gift from Taube Philanthropies, the most significant collection of works by Arthur Szyk (Łódź, Poland, 1894 – New Canaan, Connecticut, 1951) is now available to the world in a public institution for the first time as... More >
To the Letter: Regarding the Written Word
Exhibit - Multimedia | October 6, 2017 – January 28, 2018 every Sunday, Wednesday, Thursday, Friday & Saturday | 11 a.m.-7 p.m. | Berkeley Art Museum and Pacific Film Archive
This exhibition crosses cultures and centuries to bring together works that activate the expressive and aesthetic potential of letters and words.
Martin Wong: Human Instamatic
Exhibit - Multimedia | October 6 – December 10, 2017 every Sunday, Wednesday, Thursday, Friday & Saturday | 11 a.m.-7 p.m. | Berkeley Art Museum and Pacific Film Archive
This retrospective surveys the career of "one of our great urban visionaries" (New York Times), from Northern California to New York and back.
Programming Languages and Systems
European Symposium on Programming
ESOP 2018: Programming Languages and Systems pp 186-213 | Cite as
How long, O Bayesian network, will I sample thee?
A program analysis perspective on expected sampling times
Kevin Batz
Benjamin Lucien Kaminski
Christoph Matheja
First Online: 14 April 2018
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10801)
Bayesian networks (BNs) are probabilistic graphical models for describing complex joint probability distributions. The main problem for BNs is inference: Determine the probability of an event given observed evidence. Since exact inference is often infeasible for large BNs, popular approximate inference methods rely on sampling.
We study the problem of determining the expected time to obtain a single valid sample from a BN. To this end, we translate the BN together with observations into a probabilistic program. We provide proof rules that yield the exact expected runtime of this program in a fully automated fashion. We implemented our approach and successfully analyzed various real–world BNs taken from the Bayesian network repository.
Probabilistic programs Expected runtimes Weakest preconditions Program verification
Bayesian networks (BNs) are probabilistic graphical models representing joint probability distributions of sets of random variables with conditional dependencies. Graphical models are a popular and appealing modeling formalism, as they allow complex distributions to be represented succinctly and in a human–readable way. BNs have been intensively studied at least since 1985 [43] and have a wide range of applications including machine learning [24], speech recognition [50], sports betting [11], gene regulatory networks [18], diagnosis of diseases [27], and finance [39].
Probabilistic programs are programs with the key ability to draw values at random. Seminal papers by Kozen from the 1980s consider formal semantics [32] as well as initial work on verification [33, 47]. McIver and Morgan [35] build on this work to further weakest–precondition style verification for imperative probabilistic programs.
The interest in probabilistic programs has been rapidly growing in recent years [20, 23]. Part of the reason for this déjà vu is their use for representing probabilistic graphical models [31] such as BNs. The full potential of modern probabilistic programming languages like Anglican [48], Church [21], Figaro [44], R2 [40], or Tabular [22] is that they enable rapid prototyping and obviate the need to manually provide inference methods tailored to an individual model.
Probabilistic inference is the problem of determining the probability of an event given observed evidence. It is a major problem for both BNs and probabilistic programs, and has been subject to intense investigations by both theoreticians and practitioners for more than three decades; see [31] for a survey. In particular, it has been shown that for probabilistic programs exact inference is highly undecidable [28], while for BNs both exact inference as well as approximate inference to an arbitrary precision are NP–hard [12, 13]. In light of these complexity–theoretical hurdles, a popular way to analyze probabilistic graphical models as well as probabilistic programs is to gather a large number of independent and identically distributed (i.i.d. for short) samples and then do statistical reasoning on these samples. In fact, all of the aforementioned probabilistic programming languages support sampling based inference methods.
Rejection sampling is a fundamental approach to obtain valid samples from BNs with observed evidence. In a nutshell, this method first samples from the joint (unconditional) distribution of the BN. If the sample complies with all evidence, it is valid and accepted; otherwise it is rejected and one has to resample.
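As a minimal illustration of this method, the following Python sketch samples a toy three-node network ancestrally and resamples until the observed evidence holds. The network, its CPT values, and all function names are our own and serve only to make the rejection loop concrete.

import random

def sample_joint():
    # Ancestral sampling of a toy network A -> C <- B (all numbers made up).
    a = 1 if random.random() < 0.3 else 0
    b = 1 if random.random() < 0.6 else 0
    p_c1 = {(0, 0): 0.1, (0, 1): 0.4, (1, 0): 0.5, (1, 1): 0.9}[(a, b)]
    c = 1 if random.random() < p_c1 else 0
    return a, b, c

def rejection_sample(evidence_c=1):
    # Sample from the joint distribution; reject and resample until C = evidence_c.
    attempts = 0
    while True:
        attempts += 1
        a, b, c = sample_joint()
        if c == evidence_c:
            return (a, b, c), attempts

sample, attempts = rejection_sample()
print(sample, "accepted after", attempts, "attempts")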
Apart from rejection sampling, there are more sophisticated sampling techniques, which mainly fall in two categories: Markov Chain Monte Carlo (MCMC) and importance sampling. But while MCMC requires heavy hand–tuning and suffers from slow convergence rates on real–world instances [31, Chapter 12.3], virtually all variants of importance sampling rely again on rejection sampling [31, 49].
A major problem with rejection sampling is that for poorly conditioned data, this approach might have to reject and resample very often in order to obtain just a single accepting sample. Even worse, being poorly conditioned need not be immediately evident for a given BN, let alone a probabilistic program. In fact, Gordon et al. [23, p. 177] point out that
"the main challenge in this setting [i.e. sampling based approaches] is that many samples that are generated during execution are ultimately rejected for not satisfying the observations."
If too many samples are rejected, the expected sampling time grows so large that sampling becomes infeasible. The expected sampling time of a BN is therefore a key figure for deciding whether sampling based inference is the method of choice.
How Long, O Bayesian Network, will I Sample Thee? More precisely, we use techniques from program verification to give an answer to the following question:
Given a Bayesian network with observed evidence, how long does it take in expectation to obtain a single sample that satisfies the observations?
Fig. 1. A simple Bayesian network.
As an example, consider the BN in Fig. 1 which consists of just three nodes (random variables) that can each assume values 0 or 1. Each node X comes with a conditional probability table determining the probability of X assuming some value given the values of all nodes Y that X depends on (i.e. X has an incoming edge from Y), see [3, Appendix A.1] for detailed calculations. For instance, the probability that G assumes value 0, given that S and R both assume 1, is 0.2. Note that this BN is parameterized by \(a \in [0,1]\).
Now, assume that our observed evidence is the event \(G=0\) and we apply rejection sampling to obtain one accepting sample from this BN. Then our approach will yield that a rejection sampling algorithm will, on average, require
$$\begin{aligned} \frac{200 a^2 - 40 a - 460}{89 a^2 - 69 a - 21} \end{aligned}$$
guard evaluations, random assignments, etc. until it obtains a single sample that complies with the observation \(G=0\) (the underlying runtime model is discussed in detail in Sect. 3.3). By examination of this function, we see that for large ranges of values of a the BN is rather well–behaved: For \(a \in [0.08,\ 0.78]\) the expected sampling time stays below 18. Above \(a = 0.95\) the expected sampling time starts to grow rapidly up to 300.
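For concreteness, the closed form above can be tabulated numerically; the following snippet is only a check of the expression quoted in the text and is not part of the paper's tool chain.

def expected_sampling_time(a):
    # Closed form quoted above for the BN of Fig. 1 with observation G = 0.
    return (200 * a**2 - 40 * a - 460) / (89 * a**2 - 69 * a - 21)

for a in (0.08, 0.5, 0.78, 0.95, 1.0):
    print(f"a = {a:.2f}  ->  expected sampling time ~ {expected_sampling_time(a):.1f}")

For a between 0.08 and 0.78 the value stays below 18, and it reaches 300 at a = 1, matching the figures above.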
While 300 is still moderate, we will see later that expected sampling times of real–world BNs can be much larger. For some BNs, the expected sampling time even exceeded \(10^{18}\), rendering sampling based methods infeasible. In this case, exact inference (despite NP–hardness) was a viable alternative (see Sect. 6).
Our Approach. We apply weakest precondition style reasoning à la McIver and Morgan [35] and Kaminski et al. [30] to analyze both expected outcomes and expected runtimes (ERT) of a syntactic fragment of pGCL, which we call the Bayesian Network Language (BNL). Note that since BNL is a syntactic fragment of pGCL, every BNL program is a pGCL program but not vice versa. The main restriction of BNL is that (in contrast to pGCL) loops are of a special form that prohibits undesired data flow across multiple loop iterations. While this restriction renders BNL incapable of, for instance, counting the number of loop iterations, BNL is expressive enough to encode Bayesian networks with observed evidence.
For BNL, we develop dedicated proof rules to determine exact expected values and the exact ERT of any BNL program, including loops, without any user–supplied data, such as invariants [30, 35], ranking or metering functions [19], (super)martingales [8, 9, 10], etc.
As a central notion behind these rules, we introduce f–i.i.d.–ness of probabilistic loops, a concept closely related to stochastic independence, that allows us to rule out undesired parts of the data flow across loop iterations. Furthermore, we show how every BN with observations is translated into a BNL program, such that
executing the BNL program corresponds to sampling from the conditional joint distribution given by the BN and observed data, and
the ERT of the BNL program corresponds to the expected time until a sample that satisfies the observations is obtained from the BN.
As a consequence, exact expected sampling times of BNs can be inferred by means of weakest precondition reasoning in a fully automated fashion. This can be seen as a first step towards formally evaluating the quality of a plethora of different sampling methods (cf. [31, 49]) on source code level.
Contributions. To summarize, our main contributions are as follows:
We develop easy–to–apply proof rules to reason about expected outcomes and expected runtimes of probabilistic programs with f–i.i.d. loops.
We study a syntactic fragment of probabilistic programs, the Bayesian network language (BNL), and show that our proof rules are applicable to every BNL program; expected runtimes of \( \textsf {{BNL}} \) programs can thus be inferred.
We give a formal translation from Bayesian networks with observations to \( \textsf {{BNL}} \) programs; expected sampling times of BNs can thus be inferred.
We implemented a prototype tool that automatically analyzes the expected sampling time of BNs with observations. An experimental evaluation on real–world BNs demonstrates that very large expected sampling times (in the magnitude of millions of years) can be inferred within less than a second; This provides practitioners the means to decide whether sampling based methods are appropriate for their models.
Outline. We discuss related work in Sect. 2. Syntax and semantics of the probabilistic programming language \( \textsf {{pGCL}} \) are presented in Sect. 3. Our proof rules are introduced in Sect. 4 and applied to BNs in Sect. 5. Section 6 reports on experimental results and Sect. 7 concludes.
2 Related Work
While various techniques for formal reasoning about runtimes and expected outcomes of probabilistic programs have been developed, e.g. [6, 7, 17, 25, 38], none of them explicitly apply formal methods to reason about Bayesian networks on source code level. In the following, we focus on approaches close to our work.
Weakest Preexpectation Calculus. Our approach builds upon the expected runtime calculus [30], which is itself based on work by Kozen [32, 33] and McIver and Morgan [35]. In contrast to [30], we develop specialized proof rules for a clearly specified program fragment without requiring user–supplied invariants. Since finding invariants often requires heavy calculations, our proof rules contribute towards simplifying and automating verification of probabilistic programs.
Ranking Supermartingales. Reasoning about almost–sure termination is often based on ranking (super)martingales (cf. [8, 10]). In particular, Chatterjee et al. [9] consider the class of affine probabilistic programs for which linear ranking supermartingales exist (Lrapp); thus proving (positive) almost–sure termination for all programs within this class. They also present a doubly–exponential algorithm to approximate ERTs of Lrapp programs. While all BNL programs lie within Lrapp, our proof rules yield exact ERTs as expectations (thus allowing for compositional proofs), in contrast to a single number for a fixed initial state.
Bayesian Networks and Probabilistic Programs. Bayesian networks are a—if not the most—popular probabilistic graphical model (cf. [4, 31] for details) for reasoning about conditional probabilities. They are closely tied to (a fragment of) probabilistic programs. For example, Infer.NET [36] performs inference by compiling a probabilistic program into a Bayesian network. While correspondences between probabilistic graphical models, such as BNs, and probabilistic programs have been considered in the literature [21, 23, 37], we are not aware of a formal soundness proof for a translation from classical BNs into probabilistic programs including conditioning.
Conversely, some probabilistic programming languages such as Church [21], Stan [26], and R2 [40] directly perform inference on the program level using sampling techniques similar to those developed for Bayesian networks. Our approach is a step towards understanding sampling based approaches formally: We obtain the exact expected runtime required to generate a sample that satisfies all observations. This may ultimately be used to evaluate the quality of a plethora of proposed sampling methods for Bayesian inference (cf. [31, 49]).
3 Probabilistic Programs
We briefly present the probabilistic programming language that is used throughout this paper. Since our approach is embedded into weakest-precondition style approaches, we also recap calculi for reasoning about both expected outcomes and expected runtimes of probabilistic programs.
3.1 The Probabilistic Guarded Command Language
We enhance Dijkstra's Guarded Command Language [14, 15] by a probabilistic construct, namely a random assignment. We thereby obtain a probabilistic Guarded Command Language (for a closely related language, see [35]).
Let \(\textsf {Vars}\) be a finite set of program variables. Moreover, let \(\mathbb {Q}\) be the set of rational numbers, and let \(\mathcal {D}\left( {\mathbb {Q}} \right) \) be the set of discrete probability distributions over \(\mathbb {Q}\). The set of program states is given by \(\varSigma ~{}={}~\{\, \sigma \mid \sigma :\textsf {Vars}\rightarrow \mathbb {Q}\,\}\).
A distribution expression \(\mu \) is a function of type \(\mu :\varSigma \rightarrow \mathcal {D}\left( {\mathbb {Q}} \right) \) that takes a program state and maps it to a probability distribution on values from \(\mathbb {Q}\). We denote by \(\mu _\sigma \) the distribution obtained from applying \(\sigma \) to \(\mu \).
The probabilistic guarded command language (\( \textsf {{pGCL}} \)) is given by the grammar
$$\begin{aligned} C ~{}{:}{:}{=}{}~ \mathtt {skip}~\mid ~\mathtt {diverge}~\mid ~{x}\mathrel {:\approx }{\mu }~\mid ~{C};\,{C}~\mid ~\mathtt {if} \left( {\varphi } \right) \left\{ {C} \right\} \mathtt {else} \left\{ {C} \right\} ~\mid ~\mathtt {while} \left( {\varphi }\right) \left\{ {C} \right\} ~\mid ~\mathtt {repeat}\left\{ {C}\right\} \mathtt {until}\left( {\varphi }\right) , \end{aligned}$$
where \(x \in \textsf {Vars}\) is a program variable, \(\mu \) is a distribution expression, and \(\varphi \) is a Boolean expression guarding a choice or a loop. A \( \textsf {{pGCL}} \) program that contains neither \(\mathtt {diverge}\), nor \(\mathtt {while}\), nor \(\mathtt {repeat-until}\) loops is called loop–free.
For \(\sigma \in \varSigma \) and an arithmetical expression E over \(\textsf {Vars}\), we denote by \(\sigma (E)\) the evaluation of E in \(\sigma \), i.e. the value that is obtained by evaluating E after replacing any occurrence of any program variable x in E by the value \(\sigma (x)\). Analogously, we denote by \(\sigma (\varphi )\) the evaluation of a guard \(\varphi \) in state \(\sigma \) to either \(\textsf {true}\) or \(\textsf {false}\). Furthermore, for a value \(v \in \mathbb {Q}\) we write \(\sigma \left[ {x} \mapsto {v}\right] \) to indicate that we set program variable x to value v in program state \(\sigma \), i.e. \(\sigma \left[ {x} \mapsto {v}\right] (y) = v\) if \(y = x\), and \(\sigma \left[ {x} \mapsto {v}\right] (y) = \sigma (y)\) otherwise.
We use the Iverson bracket notation to associate with each guard its corresponding indicator function. Formally, the Iverson bracket \(\left[ {\varphi } \right] \) of \(\varphi \) is thus defined as the function that maps a state \(\sigma \) to 1 if \(\sigma (\varphi ) = \textsf {true}\), and to 0 otherwise.
Let us briefly go over the \( \textsf {{pGCL}} \) constructs and their effects: \(\mathtt {skip}\) does not alter the current program state. The program \(\mathtt {diverge}\) is an infinite busy loop, thus takes infinite time to execute. It returns no final state whatsoever.
The random assignment \({x}\mathrel {:\approx }{\mu }\) is (a) the only construct that can actually alter the program state and (b) the only construct that may introduce random behavior into the computation. It takes the current program state \(\sigma \), then samples a value v from probability distribution \(\mu _\sigma \), and then assigns v to program variable x. An example of a random assignment is
If the current program state is \(\sigma \), then the program state is altered to either \(\sigma \left[ {x} \mapsto {5}\right] \) with probability Open image in new window , or to \(\sigma \left[ {x} \mapsto {\sigma (y) + 1}\right] \) with probability Open image in new window , or to \(\sigma \left[ {x} \mapsto {\sigma (y) - 1}\right] \) with probability Open image in new window . The remainder of the pGCL constructs are standard programming language constructs.
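A random assignment can be mimicked in Python by drawing from a weighted discrete distribution and updating only the assigned variable. The sketch below is ours; in particular the weights 1/2, 1/4, 1/4 are assumed for the sake of illustration, since the example's actual distribution expression is not reproduced above.

import random

def random_assignment(state):
    # Sketch of x :~ 1/2<5> + 1/4<y+1> + 1/4<y-1>  (weights are our assumption).
    y = state["y"]
    value = random.choices([5, y + 1, y - 1], weights=[0.5, 0.25, 0.25])[0]
    new_state = dict(state)      # only x is altered, as in sigma[x -> v]
    new_state["x"] = value
    return new_state

print(random_assignment({"x": 0, "y": 7}))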
In general, a \( \textsf {{pGCL}} \) program C is executed on an input state and yields a probability distribution over final states due to possibly occurring random assignments inside of C. We denote that resulting distribution by Open image in new window . Strictly speaking, programs can yield subdistributions, i.e. probability distributions whose total mass may be below 1. The "missing" probability mass represents the probability of nontermination. Let us conclude our presentation of pGCL with an example:
Example 1 (Geometric Loop)
Consider the program \(C_ geo \) given by
This program basically keeps flipping coins until it flips, say, heads (\(c=0\)). In x it counts the number of unsuccessful trials. In effect, it almost surely sets c to 0 and moreover it establishes a geometric distribution on x. The resulting distribution is given by
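A small simulation of such a geometric loop checks the distribution on x empirically. The code below is ours, not the paper's; it assumes a fair coin and that x counts only the failed flips, which is our reading of the description above.

import random
from collections import Counter

def geometric_loop():
    # Flip a fair coin until heads (c = 0); x counts the unsuccessful trials.
    x, c = 0, 1
    while c != 0:
        c = random.randint(0, 1)
        if c != 0:
            x += 1
    return x

runs = 100_000
counts = Counter(geometric_loop() for _ in range(runs))
for n in range(5):
    print(f"Pr[x = {n}] ~ {counts[n] / runs:.3f}  (geometric value {0.5 ** (n + 1):.3f})")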
3.2 The Weakest Preexpectation Transformer
We now present the weakest preexpectation transformer \(\mathsf {wp}\) for reasoning about expected outcomes of executing probabilistic programs in the style of McIver and Morgan [35]. Given a random variable f mapping program states to reals, it allows us to reason about the expected value of f after executing a probabilistic program on a given state.
Expectations. The random variables the \(\mathsf {wp}\) transformer acts upon are taken from a set of so-called expectations, a term coined by McIver and Morgan [35]:
Definition 1 (Expectations)
The set of expectations \(\mathbb {E}\) is defined as \(\mathbb {E}~{}={}~\{\, f \mid f :\varSigma \rightarrow \mathbb {R}_{\ge 0}^{\infty } \,\} ~.\)
We will use the notation \(f[{x}/{E}]\) to indicate the replacement of every occurrence of x in f by E. Since x, however, does not actually occur in f, we more formally define \(f[{x}/{E}](\sigma ) ~{}={}~f\bigl (\sigma \left[ {x} \mapsto {\sigma (E)}\right] \bigr )\).
A complete partial order \(\preceq \) on \(\mathbb {E}\) is obtained by point–wise lifting the canonical total order \(\le \) on \(\mathbb {R}_{\ge 0}^{\infty }\), i.e.
$$\begin{aligned} f_1 ~{}\preceq {}~f_2 \quad \text {iff}\quad \forall \sigma \in \varSigma :~~ f_1(\sigma ) ~{}\le {}~f_2(\sigma ) ~. \end{aligned}$$
Its least element is the expectation that maps every state to 0, which we (by slight abuse of notation) also denote by 0. Suprema are constructed pointwise, i.e. for \(S \subseteq \mathbb {E}\) the supremum \(\sup S\) is given by \((\sup S)(\sigma ) ~{}={}~\sup _{f \in S} f(\sigma )\) for all \(\sigma \in \varSigma \).
We allow expectations to map only to positive reals, so that we have a complete partial order readily available, which would not be the case for expectations of type \(\varSigma \rightarrow \mathbb {R}\cup \{-\infty ,\, +\infty \}\). A \(\mathsf {wp}\) calculus that can handle expectations of such type needs more technical machinery and cannot make use of this underlying natural partial order [29]. Since we want to reason about ERTs which are by nature non–negative, we will not need such complicated calculi.
Notice that we use a slightly different definition of expectations than McIver and Morgan [35], as we allow for unbounded expectations, whereas [35] requires that expectations are bounded. This however would prevent us from capturing ERTs, which are potentially unbounded.
Expectation Transformers. For reasoning about the expected value of \(f \in \mathbb {E}\) after execution of C, we employ a backward–moving weakest preexpectation transformer \(\mathsf {wp}\llbracket C \rrbracket :\mathbb {E}\rightarrow \mathbb {E}\), that maps a postexpectation \(f \in \mathbb {E}\) to a preexpectation Open image in new window , such that Open image in new window is the expected value of f after executing C on initial state \(\sigma \). Formally, if C executed on input \(\sigma \) yields final distribution Open image in new window , then the weakest preexpectation Open image in new window of C with respect to postexpectation f is given by
where we denote by \(\int _{A}~{h}~d{\nu }\) the expected value of a random variable \(h:A \rightarrow \mathbb {R}_{\ge 0}^{\infty }\) with respect to a probability distribution \(\nu :A \rightarrow [0,\, 1]\). Weakest preexpectations can be defined in a very systematic way:
(The \({{\mathbf {\mathsf{{wp}}}}}\) Transformer [35]). The weakest preexpectation transformer \(\mathsf {wp}: \textsf {{pGCL}} \rightarrow \mathbb {E}\rightarrow \mathbb {E}\) is defined by induction on all \( \textsf {{pGCL}} \) programs according to the rules in Table 1. We call \(F_f(X) ~{}={}~\left[ {\varphi } \right] \cdot \mathsf {wp}\llbracket C \rrbracket (X) + \left[ {\lnot \varphi } \right] \cdot f\) the \(\mathsf {wp}\)–characteristic functional of the loop \(\mathtt {while} \left( {\varphi }\right) \left\{ {C} \right\} \) with respect to postexpectation f. For a given \(\mathsf {wp}\)–characteristic function \(F_f\), we call the sequence \(\{F_f^n(0) \}_{n\in \mathbb {N}}\) the orbit of \(F_f\).
Table 1. Rules for the \(\mathsf {wp}\)–transformer.
\(\varvec{C}\) | \(\mathsf {wp}\llbracket C \rrbracket (f)\)
\(\mathtt {skip}\) | \(f\)
\(\mathtt {diverge}\) | \(0\)
\({x}\mathrel {:\approx }{\mu }\) | \(\lambda \sigma .\ \sum _{v \in \mathbb {Q}} \mu _\sigma (v) \cdot f[{x}/{v}](\sigma )\)
\(\mathtt {if} \left( {\varphi } \right) \left\{ {C_1} \right\} \mathtt {else} \left\{ {C_2} \right\} \) | \(\left[ {\varphi } \right] \cdot \mathsf {wp}\llbracket C_1 \rrbracket (f) + \left[ {\lnot \varphi } \right] \cdot \mathsf {wp}\llbracket C_2 \rrbracket (f)\)
\({C_1};\,{C_2}\) | \(\mathsf {wp}\llbracket C_1 \rrbracket \bigl (\mathsf {wp}\llbracket C_2 \rrbracket (f)\bigr )\)
\(\mathtt {while} \left( {\varphi }\right) \left\{ {C'} \right\} \) | \(\textsf {lfp}~X.~\left[ {\varphi } \right] \cdot \mathsf {wp}\llbracket C' \rrbracket (X) + \left[ {\lnot \varphi } \right] \cdot f\)
\(\mathtt {repeat}\left\{ {C'}\right\} \mathtt {until}\left( {\varphi }\right) \) | \(\mathsf {wp}\llbracket {C'};\,{\mathtt {while} \left( {\lnot \varphi }\right) \left\{ {C'} \right\} } \rrbracket (f)\)
Let us briefly go over the definitions in Table 1: For \(\mathtt {skip}\) the program state is not altered and thus the expected value of f is just f. The program \(\mathtt {diverge}\) will never yield any final state. The distribution over the final states yielded by \(\mathtt {diverge}\) is thus the null distribution \(\nu _0(\tau ) = 0\), that assigns probability 0 to every state. Consequently, the expected value of f after execution of \(\mathtt {diverge}\) is given by \(\int _{\varSigma }~{f}~d{\nu _0} = \sum _{\tau \in \varSigma }0 \cdot f(\tau ) = 0\).
The rule for the random assignment \({x}\mathrel {:\approx }{\mu }\) is a bit more technical: Let the current program state be \(\sigma \). Then for every value \(v \in \mathbb {Q}\), the random assignment assigns v to x with probability \(\mu _\sigma (v)\), where \(\sigma \) is the current program state. The value of f after assigning v to x is \(f(\sigma \left[ {x} \mapsto {v}\right] ) = f[{x}/{v}](\sigma )\) and therefore the expected value of f after executing the random assignment is given by \(\sum _{v \in \mathbb {Q}} \mu _\sigma (v) \cdot f[{x}/{v}](\sigma ) ~.\)
Expressed as a function of \(\sigma \), the latter yields precisely the definition in Table 1.
The definition for the conditional choice \(\mathtt {if} \left( {\varphi } \right) \left\{ {C_1} \right\} \mathtt {else} \left\{ {C_2} \right\} \) is not surprising: if the current state satisfies \(\varphi \), we have to opt for the weakest preexpectation of \(C_1\), whereas if it does not satisfy \(\varphi \), we have to choose the weakest preexpectation of \(C_2\). This yields precisely the definition in Table 1.
The definition for the sequential composition \({C_1};\,{C_2}\) is also straightforward: We first determine Open image in new window to obtain the expected value of f after executing \(C_2\). Then we mentally prepend the program \(C_2\) by \(C_1\) and therefore determine the expected value of Open image in new window after executing \(C_1\). This gives the weakest preexpectation of \({C_1};\,{C_2}\) with respect to postexpectation f.
The definition for the while loop makes use of a least fixed point, which is a standard construction in program semantics. Intuitively, the fixed point iteration of the \(\mathsf {wp}\)–characteristic functional, given by \(0,\, F_f(0),\, F_f^2(0),\, F_f^3(0),\, \ldots \), corresponds to the portion of the expected value of f after termination of the loop that can be collected within at most \(0,\, 1,\, 2,\, 3,\, \ldots \) loop guard evaluations. The Kleene Fixed Point Theorem [34] ensures that this iteration converges to the least fixed point, i.e. \(\textsf {lfp}~ F_f ~{}={}~\sup _{n \in \mathbb {N}} F_f^n(0) ~.\)
By inspection of the above equality, we see that the least fixed point is exactly the construct that we want for while loops, since \(\sup _{n \in \mathbb {N}} F_f^n(0)\) in principle allows the loop to run for any number of iterations, which captures precisely the semantics of a while loop, where the number of loop iterations is—in contrast to e.g. for loops—not determined upfront.
Finally, since \(\mathtt {repeat}\left\{ {C}\right\} \mathtt {until}\left( {\varphi }\right) \) is syntactic sugar for \({C};\,{\mathtt {while} \left( {\lnot \varphi }\right) \left\{ {C} \right\} }\), we simply define the weakest preexpectation of the former as the weakest preexpectation of the latter. Let us conclude our study of the effects of the \(\mathsf {wp}\) transformer by means of an example:
Consider the following program C:
Say we wish to reason about the expected value of \(x + c\) after execution of the above program. We can do so by calculating Open image in new window using the rules in Table 1. This calculation in the end yields Open image in new window The expected valuation of the expression \(x + c\) after executing C is thus Open image in new window . Note that \(x + c\) can be thought of as an expression that is evaluated in the final states after execution, whereas Open image in new window must be evaluated in the initial state before execution of C. \(\triangle \)
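Because \(\mathsf {wp}\llbracket C \rrbracket (f)(\sigma )\) is simply the expected value of f over the distribution of final states, it can also be approximated by simulation whenever the program terminates almost surely. The sketch below does this for a stand-in program (a single fair coin flip into c); the program, the function names, and the initial state are all ours and only illustrate how wp is read, not the calculus itself.

import random

def run_program(state):
    # Stand-in program C: one fair coin flip into c (illustrative only).
    final = dict(state)
    final["c"] = random.randint(0, 1)
    return final

def estimate_wp(program, f, sigma, runs=100_000):
    # Monte Carlo estimate of wp[[C]](f)(sigma): average f over sampled final states.
    return sum(f(program(sigma)) for _ in range(runs)) / runs

post = lambda tau: tau["x"] + tau["c"]            # postexpectation x + c
print(estimate_wp(run_program, post, {"x": 3, "c": 0}))   # ~ 3.5 for this stand-in C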
Healthiness Conditions of wp. The \(\mathsf {wp}\) transformer enjoys some useful properties, sometimes called healthiness conditions [35]. Two of these healthiness conditions that we will heavily make use of are given below:
Theorem 1
(Healthiness Conditions for the \({{\mathbf {\mathsf{{wp}}}}}\) Transformer [35]). For all \(C \in \textsf {{pGCL}} \), \(f_1, f_2 \in \mathbb {E}\), and \(a \in \mathbb {R}_{\ge 0}\), the following holds:
1. \(\mathsf {wp}\llbracket C \rrbracket (a \cdot f_1) ~{}={}~a \cdot \mathsf {wp}\llbracket C \rrbracket (f_1)\) (scaling by constants), and
2. \(\mathsf {wp}\llbracket C \rrbracket (a \cdot f_1 + f_2) ~{}={}~a \cdot \mathsf {wp}\llbracket C \rrbracket (f_1) + \mathsf {wp}\llbracket C \rrbracket (f_2)\) (linearity).
3.3 The Expected Runtime Transformer
While for deterministic programs we can speak of the runtime of a program on a given input, the situation is different for probabilistic programs: For those we instead have to speak of the expected runtime (ERT). Notice that the ERT can be finite (even constant) while the program may still admit infinite executions. An example of this is the geometric loop in Example 1.
A \(\mathsf {wp}\)–like transformer designed specifically for reasoning about ERTs is the \(\mathsf {ert}\) transformer [30]. Like \(\mathsf {wp}\), it is of type \(\mathsf {ert}\llbracket C \rrbracket :\mathbb {E}\rightarrow \mathbb {E}\) and it can be shown that \(\mathsf {ert}\llbracket C \rrbracket (0)(\sigma )\) is precisely the expected runtime of executing C on input \(\sigma \). More generally, if \(f:\varSigma \rightarrow \mathbb {R}_{\ge 0}^{\infty }\) measures the time that is needed after executing C (thus f is evaluated in the final states after termination of C), then \(\mathsf {ert}\llbracket C \rrbracket (f)(\sigma )\) is the expected time that is needed to run C on input \(\sigma \) and then let time f pass. For a more in–depth treatment of the \(\mathsf {ert}\) transformer, see [30, Sect. 3]. The transformer is defined as follows:
(The \(\mathsf {ert}\) Transformer [30]). The expected runtime transformer \(\mathsf {ert}: \textsf {{pGCL}} \rightarrow \mathbb {E}\rightarrow \mathbb {E}\) is defined by induction on all \( \textsf {{pGCL}} \) programs according to the rules given in Table 2. We call \(F_f(X) ~{}={}~1 + \left[ {\varphi } \right] \cdot \mathsf {ert}\llbracket C \rrbracket (X) + \left[ {\lnot \varphi } \right] \cdot f\) the \(\mathsf {ert}\)–characteristic functional of the loop \(\mathtt {while} \left( {\varphi }\right) \left\{ {C} \right\} \) with respect to postexpectation f. As with \(\mathsf {wp}\), for a given \(\mathsf {ert}\)–characteristic function \(F_f\), we call the sequence \(\{F_f^n(0) \}_{n\in \mathbb {N}}\) the orbit of \(F_f\). Notice that
Table 2. Rules for the \(\mathsf {ert}\)–transformer.
\(\varvec{C}\) | \(\mathsf {ert}\llbracket C \rrbracket (f)\)
\(\mathtt {skip}\) | \(1 + f\)
\(\mathtt {diverge}\) | \(\infty \)
\({x}\mathrel {:\approx }{\mu }\) | \(1 + \lambda \sigma .\ \sum _{v \in \mathbb {Q}} \mu _\sigma (v) \cdot f[{x}/{v}](\sigma )\)
\(\mathtt {if} \left( {\varphi } \right) \left\{ {C_1} \right\} \mathtt {else} \left\{ {C_2} \right\} \) | \(1 + \left[ {\varphi } \right] \cdot \mathsf {ert}\llbracket C_1 \rrbracket (f) + \left[ {\lnot \varphi } \right] \cdot \mathsf {ert}\llbracket C_2 \rrbracket (f)\)
\({C_1};\,{C_2}\) | \(\mathsf {ert}\llbracket C_1 \rrbracket \bigl (\mathsf {ert}\llbracket C_2 \rrbracket (f)\bigr )\)
\(\mathtt {while} \left( {\varphi }\right) \left\{ {C'} \right\} \) | \(\textsf {lfp}~X.~1 + \left[ {\varphi } \right] \cdot \mathsf {ert}\llbracket C' \rrbracket (X) + \left[ {\lnot \varphi } \right] \cdot f\)
\(\mathtt {repeat}\left\{ {C'}\right\} \mathtt {until}\left( {\varphi }\right) \) | \(\mathsf {ert}\llbracket {C'};\,{\mathtt {while} \left( {\lnot \varphi }\right) \left\{ {C'} \right\} } \rrbracket (f)\)
The rules for \(\mathsf {ert}\) are very similar to the rules for \(\mathsf {wp}\). The runtime model we assume is that \(\mathtt {skip}\) statements, random assignments, and guard evaluations for both conditional choice and while loops cost one unit of time. This runtime model can easily be adapted to count only the number of loop iterations or only the number of random assignments, etc. We conclude with a strong connection between the \(\mathsf {wp}\) and the \(\mathsf {ert}\) transformer, which is crucial in our proofs:
Theorem 2
(Decomposition of \(\mathsf {ert}\) [41]). For any \(C \in \textsf {{pGCL}} \) and \(f \in \mathbb {E}\),
$$\begin{aligned} \mathsf {ert}\llbracket C \rrbracket (f) ~{}={}~\mathsf {ert}\llbracket C \rrbracket (0) + \mathsf {wp}\llbracket C \rrbracket (f) ~. \end{aligned}$$
4 Expected Runtimes of i.i.d. Loops
We derive a proof rule that allows to determine exact ERTs of independent and identically distributed loops (or i.i.d. loops for short). Intuitively, a loop is i.i.d. if the distributions of states that are reached at the end of different loop iterations are equal. This is the case whenever there is no data flow across different iterations. In the non–probabilistic case, such loops either terminate after exactly one iteration or never. This is different for probabilistic programs.
Fig. 2. An i.i.d. loop sampling a point within a circle uniformly at random using rejection sampling. The picture on the right–hand side visualizes the procedure: In each iteration a point (\(\times \)) is sampled. If we obtain a point within the white area inside the square, we terminate. Otherwise, i.e. if we obtain a point within the gray area outside the circle, we resample.
As a running example, consider the program \(C_{ circle }\) in Fig. 2. \(C_{ circle }\) samples a point within a circle with center (5, 5) and radius \(r=5\) uniformly at random using rejection sampling. In each iteration, it samples a point \((x,y) \in [0, \ldots , 10]^2\) within the square (with some fixed precision). The loop ensures that we resample if a sample is not located within the circle. Our proof rule will allow us to systematically determine the ERT of this loop, i.e. the average amount of time required until a single point within the circle is sampled.
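A direct Python rendering of the rejection sampling behind \(C_{ circle }\) makes this concrete; the grid used as the "fixed precision" below is an arbitrary choice of ours. The empirical average number of iterations per accepted point should be close to \(4/\pi \approx 1.27\), the reciprocal of the acceptance probability \(\pi /4\).

import random

def sample_in_circle(precision=100):
    # Sample (x, y) on a grid over [0, 10]^2 and accept if it lies within the
    # circle of radius 5 around (5, 5); otherwise resample.
    iterations = 0
    while True:
        iterations += 1
        x = random.randint(0, 10 * precision) / precision
        y = random.randint(0, 10 * precision) / precision
        if (x - 5) ** 2 + (y - 5) ** 2 <= 25:
            return (x, y), iterations

runs = 50_000
avg = sum(sample_in_circle()[1] for _ in range(runs)) / runs
print("average iterations per accepted sample:", avg)   # ~ 4 / pi ~ 1.27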
Towards obtaining such a proof rule, we first present a syntactical notion of the i.i.d. property. It relies on expectations that are not affected by a \( \textsf {{pGCL}} \) program:
Let \(C \in \textsf {{pGCL}} \) and \(f \in \mathbb {E}\). Moreover, let \(\textsf {Mod} \left( C \right) \) denote the set of all variables that occur on the left–hand side of an assignment in C, and let \(\textsf {Vars}\left( f \right) \) be the set of all variables that "occur in f", i.e. formally
$$\begin{aligned} x \in \textsf {Vars}\left( f \right) \qquad iff \qquad \exists \, \sigma ~ \exists \, v, v' :\quad f(\sigma \left[ {x} \mapsto {v}\right] ) ~{}\ne {}~f(\sigma \left[ {x} \mapsto {v'}\right] ). \end{aligned}$$
Then f is unaffected by C, denoted Open image in new window , iff \(\textsf {Vars}\left( f \right) \cap \textsf {Mod} \left( C \right) = \emptyset \).
We are interested in expectations that are unaffected by \( \textsf {{pGCL}} \) programs because of a simple, yet useful observation: If Open image in new window , then g can be treated like a constant w.r.t. the transformer \(\mathsf {wp}\) (i.e. like the a in Theorem 1 (1)). For our running example \(C_{ circle }\) (see Fig. 2), the expectation Open image in new window is unaffected by the loop body \(C_{ body }\) of \(C_{ circle }\). Consequently, we have Open image in new window . In general, we obtain the following property:
Lemma 1 (Scaling by Unaffected Expectations)
Let \(C\in \textsf {{pGCL}} \) and \(f,g \in \mathbb {E}\). Then Open image in new window implies Open image in new window .
By induction on the structure of C. See [3, Appendix A.2]. \(\square \)
We develop a proof rule that only requires that both the probability of the guard evaluating to true after one iteration of the loop body (i.e. Open image in new window ) as well as the expected value of \(\left[ {\lnot \varphi } \right] \cdot f\) after one iteration (i.e. Open image in new window ) are unaffected by the loop body. We thus define the following:
(\(\varvec{f}\)–Independent and Identically Distributed Loops). Let \(C \in \textsf {{pGCL}} \), \(\varphi \) be a guard, and \(f \in \mathbb {E}\). Then we call the loop \(\mathtt {while} \left( {\varphi }\right) \left\{ {C} \right\} \) f–independent and identically distributed (or f–i.i.d. for short), if both
Our example program \(C_{ circle }\) (see Fig. 2) is f–i.i.d. for all \(f \in \mathbb {E}\). This is due to the fact that
and (again for some fixed precision \(p \in \mathbb {N}\setminus \{0\}\))
Our main technical Lemma is that we can express the orbit of the \(\mathsf {wp}\)–characteristic function as a partial geometric series:
Lemma 2
(Orbits of \(\varvec{f}\)–i.i.d. Loops). Let \(C \in \textsf {{pGCL}} \), \(\varphi \) be a guard, \(f \in \mathbb {E}\) such that the loop \(\mathtt {while} \left( {\varphi }\right) \left\{ {C} \right\} \) is f–i.i.d, and let \(F_f\) be the corresponding \(\mathsf {wp}\)–characteristic function. Then for all \(n \in \mathbb {N}\setminus \{ 0 \}\), it holds that
By use of Lemma 1, see [3, Appendix A.3].
Using this precise description of the \(\mathsf {wp}\) orbits, we now establish proof rules for f–i.i.d. loops, first for \(\mathsf {wp}\) and later for \(\mathsf {ert}\).
Theorem 3
(Weakest Preexpectations of \(\varvec{f}\)–i.i.d. Loops). Let \(C \in \textsf {{pGCL}} \), \(\varphi \) be a guard, and \(f \in \mathbb {E}\). If the loop \(\mathtt {while} \left( {\varphi }\right) \left\{ {C} \right\} \) is f–i.i.d., then
where we define Open image in new window .
The preexpectation (\(\dagger \)) is to be evaluated in some state \(\sigma \) for which we have two cases: The first case is when Open image in new window . Using the closed form of the geometric series, i.e. \(\sum _{i=0}^{\infty } q^i = \frac{1}{1-q}\) if \(|q| < 1\), we get
The second case is when Open image in new window . This case is technically slightly more involved. The full proof can be found in [3, Appendix A.4]. \(\square \)
We now derive a similar proof rule for the ERT of an f–i.i.d. loop \(\mathtt {while} \left( {\varphi }\right) \left\{ {C} \right\} \).
Theorem 4
(Proof Rule for ERTs of \(\varvec{f}\)–i.i.d. Loops). Let \(C \in \textsf {{pGCL}} \), \(\varphi \) be a guard, and \(f \in \mathbb {E}\) such that all of the following conditions hold:
\(\mathtt {while} \left( {\varphi }\right) \left\{ {C} \right\} \) is f–i.i.d.
Open image in new window (loop body terminates almost–surely).
Open image in new window (every iteration runs in the same expected time).
Then for the ERT of the loop \(\mathtt {while} \left( {\varphi }\right) \left\{ {C} \right\} \) w.r.t. postruntime f it holds that
where we define Open image in new window and Open image in new window , for \(a \ne 0\).
We first prove
To this end, we propose the following expression as the orbit of the \(\mathsf {ert}\)–characteristic function of the loop w.r.t. 0:
For a verification that the above expression is indeed the correct orbit, we refer to the rigorous proof of this theorem in [3, Appendix A.5]. Now, analogously to the reasoning in the proof of Theorem 3 (i.e. using the closed form of the geometric series and case distinction on whether Open image in new window or Open image in new window ), we get that the supremum of this orbit is indeed the right–hand side of (\(\ddag \)). To complete the proof, consider the following:
\(\square \)
5 A Programming Language for Bayesian Networks
So far we have derived proof rules for formal reasoning about expected outcomes and expected run-times of i.i.d. loops (Theorems 3 and 4). In this section, we apply these results to develop a syntactic \( \textsf {{pGCL}} \) fragment that allows exact computations of closed forms of ERTs. In particular, no invariants, (super)martingales or fixed point computations are required.
After that, we show how BNs with observations can be translated into \( \textsf {{pGCL}} \) programs within this fragment. Consequently, we call our \( \textsf {{pGCL}} \) fragment the Bayesian Network Language. As a result of the above translation, we obtain a systematic and automatable approach to compute the expected sampling time of a BN in the presence of observations. That is, the expected time it takes to obtain a single sample that satisfies all observations.
5.1 The Bayesian Network Language
Programs in the Bayesian Network Language are organized as sequences of blocks. Every block is associated with a single variable, say x, and satisfies two constraints: First, no variable other than x is modified inside the block, i.e. occurs on the left–hand side of a random assignment. Second, every variable accessed inside of a guard has been initialized before. These restrictions ensure that there is no data flow across multiple executions of the same block. Thus, intuitively, all loops whose body is composed from blocks (as described above) are f–i.i.d. loops.
(The Bayesian Network Language). Let \(\textsf {Vars}= \{x_1,\, x_2,\, \ldots \}\) be a finite set of program variables as in Sect. 3. The set of programs in Bayesian Network Language, denoted \( \textsf {{BNL}} \), is given by the grammar
where \(x_i \in \textsf {Vars}\) is a program variable, all variables in \(\varphi \) have been initialized before, and \(B_{x_i}\) is a non–terminal parameterized with program variable \(x_i \in \textsf {Vars}\). That is, for all \(x_i \in \textsf {Vars}\) there is a non–terminal \(B_{x_i}\). Moreover, \(\psi \) is an arbitrary guard and \(\mu \) is a distribution expression of the form \(\mu = \sum _{j=1}^{n} p_j \cdot \langle a_j \rangle \) with \(a_j \in \mathbb {Q}\) for \(1 \le j \le n\).
Example 4
Consider the BNL program \(C_{\textit{dice}}\):
This program first throws a fair die. After that it keeps throwing a second die until its result is at least as large as the first die. \(\triangle \)
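The loop in \(C_{\textit{dice}}\) is f–i.i.d., and its expected behaviour is easy to check by simulation. The sketch below is our code, not the paper's; it counts the throws of the second die, whose expected number is \(\tfrac{1}{6}\sum _{k=1}^{6} \tfrac{6}{7-k} = 1 + \tfrac{1}{2} + \cdots + \tfrac{1}{6} \approx 2.45\).

import random

def dice_program():
    # Throw a fair die, then rethrow a second die until it is at least as large.
    first = random.randint(1, 6)
    throws = 0
    while True:
        throws += 1
        if random.randint(1, 6) >= first:
            return first, throws

runs = 200_000
avg = sum(dice_program()[1] for _ in range(runs)) / runs
print("average throws of the second die:", avg)   # ~ 2.45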
For any \(C \in \textsf {{BNL}} \), our goal is to compute the exact ERT of C, i.e. Open image in new window . In case of loop–free programs, this amounts to a straightforward application of the \(\mathsf {ert}\) calculus presented in Sect. 3. To deal with loops, however, we have to perform fixed point computations or require user–supplied artifacts, e.g. invariants, supermartingales, etc. For \( \textsf {{BNL}} \) programs, on the other hand, it suffices to apply the proof rules developed in Sect. 4. As a result, we directly obtain an exact closed form solution for the ERT of a loop. This is a consequence of the fact that all loops in \( \textsf {{BNL}} \) are f–i.i.d., which we establish in the following.
By definition, every loop in \( \textsf {{BNL}} \) is of the form \(\mathtt {repeat}\left\{ {B_{x_{i}}}\right\} \mathtt {until}\left( {\psi }\right) \), which is equivalent to \({B_{x_{i}}};\,{\mathtt {while} \left( {\lnot \psi }\right) \left\{ {B_{x_{i}}} \right\} }\). Hence, we want to apply Theorem 4 to that while loop. Our first step is to discharge the theorem's premises:
Lemma 3
Let \(\textit{Seq}\) be a sequence of BNL–blocks, \(g \in \mathbb {E}\), and \(\psi \) be a guard. Then:
The expected value of g after executing \(\textit{Seq}\) is unaffected by \(\textit{Seq}\). That is, Open image in new window .
The ERT of \(\textit{Seq}\) is unaffected by \(\textit{Seq}\), i.e. Open image in new window .
For every \(f \in \mathbb {E}\), the loop \(\mathtt {while} \left( {\lnot \psi }\right) \left\{ {\textit{Seq}} \right\} \) is f–i.i.d.
1. is proven by induction on the length of the sequence of blocks \(\textit{Seq}\) and 2. is a consequence of 1., see [3, Appendix A.6]. 3. follows immediately from 1. by instantiating g with \(\left[ {\lnot \psi } \right] \) and \(\left[ {\psi } \right] \cdot f\), respectively. \(\square \)
We are now in a position to derive a closed form for the ERT of loops in \( \textsf {{BNL}} \).
Theorem 5
For every loop \(\mathtt {repeat}\left\{ {\textit{Seq}}\right\} \mathtt {until}\left( {\psi }\right) \in \textsf {{BNL}} \) and every \(f \in \mathbb {E}\),
Let \(f \in \mathbb {E}\). Moreover, recall that \(\mathtt {repeat}\left\{ {\textit{Seq}}\right\} \mathtt {until}\left( {\psi }\right) \) is equivalent to the program \({\textit{Seq}};\,{\mathtt {while} \left( {\lnot \psi }\right) \left\{ {\textit{Seq}} \right\} } \in \textsf {{BNL}} \). Applying the semantics of \(\mathsf {ert}\) (Table 2), we proceed as follows:
Since the loop body \(\textit{Seq}\) is loop–free, it terminates certainly, i.e. Open image in new window (Premise 2. of Theorem 4). Together with Lemma 3.1. and 3., all premises of Theorem 4 are satisfied. Hence, we obtain a closed form for Open image in new window :
By Theorem 2, we know Open image in new window for any g. Thus:
Since \(\mathsf {wp}\) is linear (Theorem 1 (2)), we obtain:
By a few simple algebraic transformations, this coincides with:
Let R denote the fraction above. Then Lemma 3.1. and 2. implies Open image in new window . We may thus apply Lemma 1 to derive Open image in new window . Hence:
Again, by Theorem 2, we know that Open image in new window for any g. Thus, for \(g = \left[ {\psi } \right] \cdot f\), this yields:
Then a few algebraic transformations lead us to the claimed ERT:
Note that Theorem 5 holds for arbitrary postexpectations \(f \in \mathbb {E}\). This enables compositional reasoning about ERTs of \( \textsf {{BNL}} \) programs. Since all other rules of the \(\mathsf {ert}\)–calculus for loop–free programs amount to simple syntactical transformations (see Table 2), we conclude that
Corollary 1
For any \(C \in \textsf {{BNL}} \), a closed form for the ERT \(\mathsf {ert}\llbracket C \rrbracket (0)\) can be computed compositionally.
Theorem 5 allows us to comfortably compute the ERT of the BNL program \(C_{\textit{dice}}\) introduced in Example 4:
For the ERT, we have
5.2 Bayesian Networks
To reason about expected sampling times of BNs, it remains to develop a sound translation from BNs with observations into equivalent \( \textsf {{BNL}} \) programs. A BN is a probabilistic graphical model that is given by a directed acyclic graph. Every node is a random variable and a directed edge between two nodes expresses a probabilistic dependency between these nodes.
As a running example, consider the BN depicted in Fig. 3 (inspired by [31]) that models the mood of students after taking an exam. The network contains four random variables. They represent the difficulty of the exam (D), the level of preparation of a student (P), the achieved grade (G), and the resulting mood (M). For simplicity, let us assume that each random variable assumes either 0 or 1. The edges express that the student's mood depends on the achieved grade which, in turn, depends on the difficulty of the exam and the preparation of the student. Every node is accompanied by a table that provides the conditional probabilities of a node given the values of all the nodes it depends upon. We can then use the BN to answer queries such as "What is the probability that a student is well–prepared for an exam (\(P = 1\)), but ends up with a bad mood (\(M=0\))?"
Fig. 3. A Bayesian network
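Queries like the one above can be answered by enumerating the joint distribution, which by the chain rule is the product of the CPT entries. The following sketch is ours: the CPT values are placeholders (only \(P(G=1 \mid D=1, P=0) = 0.4\), quoted further below, is taken from the text), so the printed number is purely illustrative.

from itertools import product

# Placeholder CPTs for the student-mood network; all values are assumptions,
# except P(G=1 | D=1, P=0) = 0.4, which is quoted in the text.
p_D1 = 0.4                                                    # P(D = 1)
p_P1 = 0.6                                                    # P(P = 1)
p_G1 = {(0, 0): 0.8, (0, 1): 0.95, (1, 0): 0.4, (1, 1): 0.7}  # P(G = 1 | D, P)
p_M1 = {0: 0.3, 1: 0.9}                                       # P(M = 1 | G)

def joint(d, p, g, m):
    # Chain rule: the joint probability is the product of the CPT entries.
    pr = (p_D1 if d else 1 - p_D1) * (p_P1 if p else 1 - p_P1)
    pr *= p_G1[(d, p)] if g else 1 - p_G1[(d, p)]
    pr *= p_M1[g] if m else 1 - p_M1[g]
    return pr

# "What is the probability that a student is well-prepared (P = 1),
#  but ends up with a bad mood (M = 0)?"
answer = sum(joint(d, 1, g, 0) for d, g in product((0, 1), repeat=2))
print("Pr(P = 1, M = 0) =", answer)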
In order to translate BNs into equivalent BNL programs, we need a formal representation first. Technically, we consider extended BNs in which nodes may additionally depend on inputs that are not represented by nodes in the network. This allows us to define a compositional translation without modifying conditional probability tables.
Towards a formal definition of extended BNs, we use the following notation. A tuple \((s_1,\ldots ,s_k) \in S^{k}\) of length k over some set S is denoted by \(\mathbf {s}\). The empty tuple is \(\mathbf {\varepsilon }\). Moreover, for \(1 \le i \le k\), the i-th element of tuple \(\mathbf {s}\) is given by \(\mathbf {s}(i)\). To simplify the presentation, we assume that all nodes and all inputs are represented by natural numbers.
An extended Bayesian network, \(\text {EBN}\) for short, is a tuple \(\mathcal {B}= (V,I,E,\textsf {Vals},\mathsf {dep},\mathsf {cpt})\), where
\(V\subseteq \mathbb {N}\) and \(I\subseteq \mathbb {N}\) are finite disjoint sets of nodes and inputs.
\(E\subseteq V\times V\) is a set of edges such that \((V,E)\) is a directed acyclic graph.
\(\textsf {Vals}\) is a finite set of possible values that can be assigned to each node.
\(\mathsf {dep}:V\rightarrow (V\cup I)^{*}\) is a function assigning each node v to an ordered sequence of dependencies. That is, \(\mathsf {dep}(v) ~{}={}~(u_{1}, \ldots , u_{m})\) such that \(u_i < u_{i+1}\) (\(1 \le i < m\)). Moreover, every dependency \(u_j\) \((1 \le j \le m\)) is either an input, i.e. \(u_j \in I\), or a node with an edge to v, i.e. \(u_j \in V\) and \((u_j,v) \in E\).
\(\mathsf {cpt}\) is a function mapping each node v to its conditional probability table \(\mathsf {cpt}[v]\). That is, for \(k = |\mathsf {dep}(v)|\), \(\mathsf {cpt}[v]\) is given by a function of the form
$$\begin{aligned} \mathsf {cpt}[v] \,:\, \textsf {Vals}^{k} \rightarrow \textsf {Vals}\rightarrow [0,1] \quad \text {such that}\quad \forall \, \mathbf {z} \in \textsf {Vals}^{k} :~~ \sum _{a \in \textsf {Vals}} \mathsf {cpt}[v](\mathbf {z})(a) ~{}={}~1. \end{aligned}$$
Here, the i-th entry in a tuple \(\mathbf {z} \in \textsf {Vals}^{k}\) corresponds to the value assigned to the i-th entry in the sequence of dependencies \(\mathsf {dep}(v)\).
A Bayesian network (BN) is an extended BN without inputs, i.e. \(I= \emptyset \). In particular, the dependency function is of the form \(\mathsf {dep}:V\rightarrow V^{*}\).
The formalization of our example BN (Fig. 3) is straightforward. For instance, the dependencies of variable G are given by \(\mathsf {dep}(G) = (D,P)\) (assuming D is encoded by an integer less than P). Furthermore, every entry in the conditional probability table of node G corresponds to an evaluation of the function \(\mathsf {cpt}[G]\). For example, if \(D = 1\), \(P = 0\), and \(G = 1\), we have \(\mathsf {cpt}[G](1,0)(1) = 0.4\).\(\triangle \)
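To make the formalization tangible, the following is a minimal Python sketch (not taken from the paper's artifact) of how such a BN can be represented as plain data: dep maps every node to its ordered dependency tuple, and cpt maps every node to a table from dependency valuations to distributions over Vals = {0, 1}. Apart from \(\mathsf {cpt}[G](1,0)(1) = 0.4\), which is the entry quoted above, all probabilities below are illustrative placeholders, not the values of Fig. 3.

```python
# Plain-data sketch of the mood network; probability values are placeholders
# except for cpt["G"][(1, 0)][1] = 0.4, which is quoted in the running example.

Vals = (0, 1)

dep = {
    "D": (),            # difficulty has no dependencies
    "P": (),            # preparation has no dependencies
    "G": ("D", "P"),    # grade depends on difficulty and preparation
    "M": ("G",),        # mood depends on the grade
}

cpt = {
    "D": {(): {0: 0.6, 1: 0.4}},                 # placeholder values
    "P": {(): {0: 0.7, 1: 0.3}},                 # placeholder values
    "G": {
        (0, 0): {0: 0.95, 1: 0.05},              # placeholder values
        (0, 1): {0: 0.5,  1: 0.5},               # placeholder values
        (1, 0): {0: 0.6,  1: 0.4},               # cpt[G](1,0)(1) = 0.4 (from the example)
        (1, 1): {0: 0.1,  1: 0.9},               # placeholder values
    },
    "M": {
        (0,): {0: 0.9, 1: 0.1},                  # placeholder values
        (1,): {0: 0.3, 1: 0.7},                  # placeholder values
    },
}

# Every row of a conditional probability table is a distribution over Vals:
assert all(abs(sum(row.values()) - 1.0) < 1e-9
           for table in cpt.values() for row in table.values())
```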
In general, the conditional probability table \(\mathsf {cpt}\) determines the conditional probability distribution of each node \(v \in V\) given the nodes and inputs it depends on. Formally, we interpret an entry in a conditional probability table as follows:
$$\begin{aligned} \mathsf {Pr}\left( v=a \,|\, \mathsf {dep}(v) = \mathbf {z}\right) ~{}={}~\mathsf {cpt}[v](\mathbf {z})(a), \end{aligned}$$
where \(v \in V\) is a node, \(a \in \textsf {Vals}\) is a value, and \(\mathbf {z}\) is a tuple of values of length \(|\mathsf {dep}(v)|\). Then, by the chain rule, the joint probability of a BN is given by the product of its conditional probability tables (cf. [4]).
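As a small illustration of the chain rule, the following sketch computes the joint probability of a full assignment to all nodes by multiplying the corresponding CPT entries; it reuses the dep and cpt dictionaries from the sketch above and is not part of the paper's formal development.

```python
# Chain rule for a BN (no inputs): Pr(V = assignment) is the product of the
# CPT entries selected by each node's parent values.

def joint_probability(assignment, dep, cpt):
    """Pr(V = assignment) for a full assignment {node: value}."""
    prob = 1.0
    for node, value in assignment.items():
        parent_values = tuple(assignment[u] for u in dep[node])
        prob *= cpt[node][parent_values][value]
    return prob

# e.g. Pr(D=1, P=0, G=1, M=0) = 0.4 * 0.7 * 0.4 * 0.3 = 0.0336
# under the placeholder tables above.
print(joint_probability({"D": 1, "P": 0, "G": 1, "M": 0}, dep, cpt))
```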
Let \(\mathcal {B}= (V,I,E,\textsf {Vals},\mathsf {dep},\mathsf {cpt})\) be an extended Bayesian network. Moreover, let \(W \subseteq V\) be a downward closed\(^{5}\) set of nodes. With each \(w \in W \cup I\), we associate a fixed value \(\underline{w} \in \textsf {Vals}\). This notation is lifted pointwise to tuples of nodes and inputs. Then the joint probability in which nodes in W assume values \(\underline{W}\) is given by
The conditional joint probability distribution of a set of nodes W, given observations on a set of nodes O, is then given by the quotient \(\nicefrac {\mathsf {Pr}\left( W = \underline{W}\right) }{\mathsf {Pr}\left( O = \underline{O}\right) }\).
For example, the probability of a student having a bad mood, i.e. \(M = 0\), after getting a bad grade (\(G=0\)) for an easy exam (\(D=0\)) given that she was well–prepared, i.e. \(P = 1\), is
5.3 From Bayesian Networks to BNL
We now develop a compositional translation from EBNs into BNL programs. Throughout this section, let \(\mathcal {B}= (V,I,E,\textsf {Vals},\mathsf {dep},\mathsf {cpt})\) be a fixed EBN. Moreover, with every node or input \(v \in V\cup I\) we associate a program variable \(x_{v}\).
We proceed in three steps: First, every node together with its dependencies is translated into a block of a \( \textsf {{BNL}} \) program. These blocks are then composed into a single \( \textsf {{BNL}} \) program that captures the whole BN. Finally, we implement conditioning by means of rejection sampling.
Step 1: We first present the atomic building blocks of our translation. Let \(v \in V\) be a node. Moreover, let \(\mathbf {z} \in \textsf {Vals}^{|\mathsf {dep}(v)|}\) be an evaluation of the dependencies of v. That is, \(\mathbf {z}\) is a tuple that associates a value with every node and input that v depends on (in the same order as \(\mathsf {dep}(v)\)). For every node v and evaluation of its dependencies \(\mathbf {z}\), we define a corresponding guard and a random assignment:
Note that \(\mathsf {dep}(v)(i)\) is the i-th element from the sequence of nodes \(\mathsf {dep}(v)\).
Continuing our previous example (see Fig. 1), assume we fixed the node \(v = G\). Moreover, let \(\mathbf {z} = (1,0)\) be an evaluation of \(\mathsf {dep}(v) = (S,R)\). Then the guard and assignment corresponding to v and \(\mathbf {z}\) are given by:
We then translate every node \(v \in V\) into a program block that uses guards to determine the rows in the conditional probability table under consideration. After that, the program samples from the resulting probability distribution using the previously constructed assignments. If a node depends neither on other nodes nor on input variables, we omit the guards. Formally,
Remark 1
The guards under consideration are conjunctions of equalities between variables and literals. We could thus use a more efficient translation of conditional probability tables by adding a switch-case statement to our probabilistic programming language. Such a statement is of the form
$$\begin{aligned} \texttt {switch}(\mathbf {x})~\{~\texttt {case}~\mathbf {a}_1: C_1~\texttt {case}~\mathbf {a}_2: C_2~\ldots ~\texttt {default}: C_m \}, \end{aligned}$$
where \(\mathbf {x}\) is a tuple of variables, and \(\mathbf {a}_1, \ldots \mathbf {a}_{m-1}\) are tuples of rational numbers of the same length as \(\mathbf {x}\). With respect to the \(\mathsf {wp}\) semantics, a switch-case statement is syntactic sugar for nested if-then-else blocks as used in the above translation. However, the runtime model of a switch-case statement requires just a single guard evaluation (\(\varphi \)) instead of potentially multiple guard evaluations when evaluating nested if-then-else blocks. Since the above adaption is straightforward, we opted to use nested if-then-else blocks to keep our programming language simple and allow, in principle, more general guards. \(\triangle \)
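Returning to Step 1, the following sketch (continuing the Python representation from above) enumerates the rows of one node's conditional probability table and emits one guarded probabilistic assignment per row. The emitted syntax only approximates BNL: the random-assignment notation is abbreviated here as "a @ p ⊕ ...", and a flat sequence of guarded statements stands in for the paper's nested if–then–else blocks.

```python
# Step 1 sketch: one block per node, one guarded probabilistic assignment per
# CPT row. The concrete BNL syntax is approximated, not reproduced.

def block(v, dep, cpt):
    lines = []
    for z, dist in cpt[v].items():
        guard = " and ".join(f"x_{u} = {val}" for u, val in zip(dep[v], z))
        choice = " ⊕ ".join(f"{a} @ {p}" for a, p in dist.items())
        if guard:                          # guarded row for dependent nodes
            lines.append(f"if ({guard}) {{ x_{v} := {choice} }}")
        else:                              # nodes without dependencies get no guard
            lines.append(f"x_{v} := {choice}")
    return ";\n".join(lines)

print(block("G", dep, cpt))
```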
Step 2: The next step is to translate a complete EBN into a \( \textsf {{BNL}} \) program. To this end, we compose the blocks obtained from each node starting at the roots of the network. That is, all nodes that have no incoming edges. Formally,
$$\begin{aligned} \textit{roots}(\mathcal {B}) = \{ v \in V_{\mathcal {B}} ~|~ \lnot \exists u \in V_{\mathcal {B}} :(u,v) \in E_{\mathcal {B}} \}. \end{aligned}$$
After translating every node in the network, we remove them from the graph, i.e. every root becomes an input, and proceed with the translation until all nodes have been removed. More precisely, given a set of nodes \(S \subseteq V\), the extended BN \(\mathcal {B}\setminus S\) obtained by removing S from \(\mathcal {B}\) is defined as
$$\begin{aligned} \mathcal {B}\setminus S ~{}={}~\left( V\setminus S,\, I\cup S,\, E\setminus (V\times S \cup S \times V),\, \mathsf {dep},\, \mathsf {cpt}\right) . \end{aligned}$$
With these auxiliary definitions readily available, an extended BN \(\mathcal {B}\) is translated into a \( \textsf {{BNL}} \) program as follows:
$$\begin{aligned} \textit{BNL}(\mathcal {B})~{}={}~{\left\{ \begin{array}{ll} \textit{block}_{\mathcal {B}}(r_1);\ldots ;\textit{block}_{\mathcal {B}}(r_m) &{} ~\text {if}~ \textit{roots}(\mathcal {B}) = \{r_1,\ldots ,r_m\} = V\\ \textit{block}_{\mathcal {B}}(r_1);\ldots ;\textit{block}_{\mathcal {B}}(r_m);\, \textit{BNL}(\mathcal {B}\setminus \textit{roots}(\mathcal {B})) &{} ~\text {if}~ \textit{roots}(\mathcal {B}) = \{r_1,\ldots ,r_m\} \subsetneqq V \end{array}\right. } \end{aligned}$$
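Step 2 can be read as the following recursion sketch, which reuses the block generator from above: emit blocks for the current roots, remove them (so they become inputs that later blocks may still guard on), and repeat until no nodes remain. This is an illustrative rendering of \(\textit{BNL}(\mathcal {B})\), not the tool's implementation.

```python
# Step 2 sketch: peel off the roots of the DAG layer by layer and
# concatenate their blocks (assumes the graph is acyclic).

def roots(nodes, edges):
    return [v for v in nodes if all(w != v for (_, w) in edges)]

def bnl(nodes, edges, dep, cpt):
    nodes, edges = set(nodes), set(edges)
    program = []
    while nodes:
        layer = roots(nodes, edges)
        program += [block(v, dep, cpt) for v in sorted(layer)]
        nodes -= set(layer)
        edges = {(u, w) for (u, w) in edges if u in nodes and w in nodes}
    return ";\n".join(program)

edges = {("D", "G"), ("P", "G"), ("G", "M")}
print(bnl({"D", "P", "G", "M"}, edges, dep, cpt))
```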
Step 3: To complete the translation, it remains to account for observations. Let \(\textit{cond}: V\rightarrow \textsf {Vals}\cup \{ \bot \}\) be a function mapping every node either to an observed value in \(\textsf {Vals}\) or to \(\bot \). The former case is interpreted as an observation that node v has value \(\textit{cond}(v)\). Otherwise, i.e. if \(\textit{cond}(v) = \bot \), the value of node v is not observed. We collect all observed nodes in the set \(O= \{ v \in V~|~ \textit{cond}(v) \ne \bot \}\). It is then natural to incorporate conditioning into our translation by applying rejection sampling: We repeatedly execute a BNL program until every observed node has the desired value \(\textit{cond}(v)\). In the presence of observations, we translate the extended BN \(\mathcal {B}\) into a \( \textsf {{BNL}} \) program as follows:
$$\begin{aligned} \textit{BNL}(\mathcal {B},\textit{cond}) ~{}={}~\mathtt {repeat}\left\{ {\textit{BNL}(\mathcal {B})}\right\} \mathtt {until}\left( {\bigwedge _{v \in O} x_{v} = \textit{cond}(v)}\right) \end{aligned}$$
Fig. 4. The \( \textsf {{BNL}} \) program \(C_{\textit{mood}}\) obtained from the BN in Fig. 3.
Consider, again, the BN \(\mathcal {B}\) depicted in Fig. 3. Moreover, assume we observe \(P = 1\). Hence, the conditioning function \(\textit{cond}\) is given by \(\textit{cond}(P) = 1\) and \(\textit{cond}(v) = \bot \) for \(v \in \{D,G,M\}\). Then the translation of \(\mathcal {B}\) and \(\textit{cond}\), i.e. \(\textit{BNL}(\mathcal {B},\textit{cond})\), is the \( \textsf {{BNL}} \) program \(C_{\textit{mood}}\) depicted in Fig. 4.\(\triangle \)
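Operationally, \(\textit{BNL}(\mathcal {B},\textit{cond})\) behaves like the following rejection sampler: forward-sample the network root-first and retry until every observed node carries its observed value. The sketch below (again reusing the dictionaries from above) is an executable illustration of this behaviour, not the paper's formal semantics.

```python
# Step 3 sketch: rejection sampling mirrors the repeat-until loop of
# BNL(B, cond): resample until all observed nodes match their observations.
import random

def sample_once(dep, cpt, order):
    state = {}
    for v in order:                      # order must be topological (root-first)
        dist = cpt[v][tuple(state[u] for u in dep[v])]
        r, acc = random.random(), 0.0
        for a, p in dist.items():
            acc += p
            if r < acc:
                break
        state[v] = a                     # falls back to the last value on rounding slack
    return state

def rejection_sample(dep, cpt, order, cond):
    trials = 0
    while True:
        trials += 1
        state = sample_once(dep, cpt, order)
        if all(state[v] == val for v, val in cond.items()):
            return state, trials

state, trials = rejection_sample(dep, cpt, ["D", "P", "G", "M"], {"P": 1})
print(state, "accepted after", trials, "trial(s)")
```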
Since our translation yields a \( \textsf {{BNL}} \) program for any given BN, we can compositionally compute a closed form for the expected simulation time of a BN. This is an immediate consequence of Corollary 1.
We still have to prove, however, that our translation is sound, i.e. the conditional joint probabilities inferred from a BN coincide with the (conditional) joint probabilities from the corresponding \( \textsf {{BNL}} \) program. Formally, we obtain the following soundness result.
Theorem 6 (Soundness of Translation)
Let \(\mathcal {B}= (V,I,E,\textsf {Vals},\mathsf {dep},\mathsf {cpt})\) be a BN and \(\textit{cond}: V\rightarrow \textsf {Vals}\,\cup \, \{\bot \}\) be a function determining the observed nodes. For each node and input v, let \(\underline{v} \in \textsf {Vals}\) be a fixed value associated with v. In particular, we set \(\underline{v} = \textit{cond}(v)\) for each observed node \(v \in O\). Then
Without conditioning, i.e. \(O= \emptyset \), the proof proceeds by induction on the number of nodes of \(\mathcal {B}\). With conditioning, we additionally apply Theorems 3 and 5 to deal with loops introduced by observed nodes. See [3, Appendix A.7]. \(\square \)
Example 9 (Expected Sampling Time of a BN)
Consider, again, the BN \(\mathcal {B}\) in Fig. 3. Moreover, recall the corresponding program \(C_{\textit{mood}}\) derived from \(\mathcal {B}\) in Fig. 4, where we observed \(P=1\). By Theorem 6 we can also determine the probability that a student who got a bad grade in an easy exam was well–prepared by means of weakest precondition reasoning. This yields
Furthermore, by Corollary 1, it is straightforward to determine the expected time to obtain a single sample of \(\mathcal {B}\) that satisfies the observation \(P = 1\):
We implemented a prototype in Java to analyze expected sampling times of Bayesian networks. More concretely, our tool takes as input a BN together with observations in the popular Bayesian Network Interchange Format\(^{6}\). The BN is then translated into a BNL program as shown in Sect. 5. Our tool applies the \(\mathsf {ert}\)–calculus together with our proof rules developed in Sect. 4 to compute the exact expected runtime of the BNL program.
The size of the resulting BNL program is linear in the total number of rows of all conditional probability tables in the BN. The program size is thus not the bottleneck of our analysis. As we are dealing with an NP–hard problem [12, 13], it is not surprising that our algorithm has a worst–case exponential time complexity. However, the space complexity of our algorithm is also exponential in the worst case: as an expectation is propagated backwards through an \(\texttt {if}\)–clause of the BNL program, its size can grow multiplicatively. This is also the reason that our analysis runs out of memory on some benchmarks.
We evaluated our implementation on the largest BNs in the Bayesian Network Repository [46] that consists—to a large extent—of real–world BNs including expert systems for, e.g., electromyography (munin) [2], hematopathology diagnosis (hepar2) [42], weather forecasting (hailfinder) [1], and printer troubleshooting in Windows 95 (win95pts) [45, Sect. 5.6.2]. For an evaluation of all BNs in the repository, we refer to the extended version of this paper [3, Sect. 6].
All experiments were performed on an HP BL685C G7. Although up to 48 cores with 2.0 GHz were available, only one core was used apart from Java's garbage collection. The Java virtual machine was limited to 8 GB of RAM.
Our experimental results are shown in Table 3. The number of nodes of the considered BNs ranges from 56 to 1041. For each Bayesian network, we computed the expected sampling time (EST) for different collections of observed nodes (#obs). Furthermore, Table 3 provides the average Markov Blanket size, i.e. the average number of parents, children and children's parents of nodes in the BN [43], as an indicator measuring how independent nodes in the BN are.
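For reference, the average Markov blanket size reported in Table 3 below can be computed as in the following sketch: for every node, collect its parents, its children, and its children's other parents, and average the resulting set sizes. This is the standard definition, not code from the paper's tool.

```python
# Average Markov blanket size of a directed graph given as a set of edges.

def average_markov_blanket(nodes, edges):
    parents  = {v: {u for (u, w) in edges if w == v} for v in nodes}
    children = {v: {w for (u, w) in edges if u == v} for v in nodes}
    total = 0
    for v in nodes:
        blanket = parents[v] | children[v]
        for c in children[v]:
            blanket |= parents[c]        # children's other parents
        blanket.discard(v)
        total += len(blanket)
    return total / len(nodes)

# For the four-node mood network this yields (2 + 2 + 3 + 1) / 4 = 2.0.
print(average_markov_blanket({"D", "P", "G", "M"},
                             {("D", "G"), ("P", "G"), ("G", "M")}))
```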
Table 3. Experimental results. Time is in seconds. MO denotes out of memory. For each BN, expected sampling times (EST) were computed for several numbers of observed nodes (#obs); most table entries were lost in extraction. The benchmarks shown are hailfinder (#nodes: 56, #edges: 66, avg. Markov Blanket: 3.54), hepar2 (#nodes: 70, #edges: 123, avg. Markov Blanket: 4.51), win95pts (#nodes: 135, #edges: 200, avg. Markov Blanket: 3.04), and the largest network (#nodes: 1041, #edges: 1397, avg. Markov Blanket: 3.54). Among the surviving entries are ESTs of \(9.500 \cdot 10^1\) (hailfinder) and \(1.110 \cdot 10^{15}\) (win95pts).
Observations were picked at random. Note that the time required by our prototype varies depending on both the number of observed nodes and the actual observations. Thus, there are cases in which we run out of memory although the total number of observations is small.
In order to obtain an understanding of what the EST corresponds to in actual execution times on a real machine, we also performed simulations for the win95pts network. More precisely, we generated Java programs from this network analogously to the translation in Sect. 5. This allowed us to approximate that our Java setup can execute \(9.714\cdot 10^6\) steps (in terms of EST) per second.
For the win95pts with 17 observations, an EST of \(1.11 \cdot 10^{15}\) then corresponds to an expected time of approximately 3.6 years in order to obtain a single valid sample. We were additionally able to find a case with 13 observed nodes where our tool discovered within 0.32 s an EST that corresponds to approximately 4.3 million years. In contrast, exact inference using variable elimination was almost instantaneous. This demonstrates that knowing expected sampling times upfront can indeed be beneficial when selecting an inference method.
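The conversion from EST to wall-clock time is plain arithmetic; the following snippet reproduces the 3.6-year figure, assuming the measured throughput of roughly 9.714e6 EST steps per second quoted above.

```python
# Back-of-the-envelope conversion of EST to simulated wall-clock time.
steps_per_second = 9.714e6           # measured throughput of the Java setup
est = 1.11e15                        # win95pts with 17 observations
seconds = est / steps_per_second
years = seconds / (365.25 * 24 * 3600)
print(f"{seconds:.3e} s  ≈  {years:.1f} years")   # ≈ 3.6 years
```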
We presented a syntactic notion of independent and identically distributed probabilistic loops and derived dedicated proof rules to determine exact expected outcomes and runtimes of such loops. These rules do not require any user–supplied information, such as invariants, (super)martingales, etc.
Moreover, we isolated a syntactic fragment of probabilistic programs that allows expected runtimes to be computed in a highly automatable fashion. This fragment is non–trivial: we showed that all Bayesian networks can be translated into programs within this fragment. Hence, we obtain an automated formal method for computing expected simulation times of Bayesian networks. We implemented this method and successfully applied it to various real–world BNs that stem from, amongst others, medical applications. Remarkably, our tool was capable of proving extremely large expected sampling times within seconds.
There are several directions for future work: For example, there exist subclasses of BNs for which exact inference is in \(\textsf {P}\), e.g. polytrees. Are there analogies for probabilistic programs? Moreover, it would be interesting to consider more complex graphical models, such as recursive BNs [16].
An example of a program that is not expressible in BNL is given in Example 1.
Positive almost–sure termination means termination in finite expected time [5].
We use \(\lambda \)–expressions to construct functions: the function \(\lambda X.\ \epsilon \) applied to an argument \(\alpha \) evaluates to \(\epsilon \) in which every occurrence of \(X\) is replaced by \(\alpha \).
This counting is also the reason that \(C_{ geo }\) is an example of a program that is not expressible in our BNL language that we present later.
W is downward closed if \(v \in W\) and \((u,v) \in E\) implies \(u \in W\).
http://www.cs.cmu.edu/~fgcozman/Research/InterchangeFormat/.
[1] Abramson, B., Brown, J., Edwards, W., Murphy, A., Winkler, R.L.: Hailfinder: a Bayesian system for forecasting severe weather. Int. J. Forecast. 12(1), 57–71 (1996)
[2] Andreassen, S., Jensen, F.V., Andersen, S.K., Falck, B., Kjærulff, U., Woldbye, M., Sørensen, A., Rosenfalck, A., Jensen, F.: MUNIN: an expert EMG Assistant. In: Computer-Aided Electromyography and Expert Systems, pp. 255–277. Pergamon Press (1989)
[3] Batz, K., Kaminski, B.L., Katoen, J., Matheja, C.: How long, O Bayesian network, will I sample thee? arXiv extended version (2018)
[4] Bishop, C.: Pattern Recognition and Machine Learning. Springer, New York (2006)
[5] Bournez, O., Garnier, F.: Proving positive almost-sure termination. In: Giesl, J. (ed.) RTA 2005. LNCS, vol. 3467, pp. 323–337. Springer, Heidelberg (2005). https://doi.org/10.1007/978-3-540-32033-3_24
[6] Brázdil, T., Kiefer, S., Kucera, A., Vareková, I.H.: Runtime analysis of probabilistic programs with unbounded recursion. J. Comput. Syst. Sci. 81(1), 288–310 (2015)
[7] Celiku, O., McIver, A.: Compositional specification and analysis of cost-based properties in probabilistic programs. In: Fitzgerald, J., Hayes, I.J., Tarlecki, A. (eds.) FM 2005. LNCS, vol. 3582, pp. 107–122. Springer, Heidelberg (2005). https://doi.org/10.1007/11526841_9
[8] Chakarov, A., Sankaranarayanan, S.: Probabilistic program analysis with martingales. In: Sharygina, N., Veith, H. (eds.) CAV 2013. LNCS, vol. 8044, pp. 511–526. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-39799-8_34
[9] Chatterjee, K., Fu, H., Novotný, P., Hasheminezhad, R.: Algorithmic analysis of qualitative and quantitative termination problems for affine probabilistic programs. In: POPL, pp. 327–342. ACM (2016)
[10] Chatterjee, K., Novotný, P., Zikelic, D.: Stochastic invariants for probabilistic termination. In: POPL, pp. 145–160. ACM (2017)
[11] Constantinou, A.C., Fenton, N.E., Neil, M.: pi-football: a Bayesian network model for forecasting association football match outcomes. Knowl. Based Syst. 36, 322–339 (2012)
[12] Cooper, G.F.: The computational complexity of probabilistic inference using Bayesian belief networks. Artif. Intell. 42(2–3), 393–405 (1990)
[13] Dagum, P., Luby, M.: Approximating probabilistic inference in Bayesian belief networks is NP-hard. Artif. Intell. 60(1), 141–153 (1993)
[14] Dijkstra, E.W.: Guarded commands, nondeterminacy and formal derivation of programs. Commun. ACM 18(8), 453–457 (1975)
[15] Dijkstra, E.W.: A Discipline of Programming. Prentice-Hall, Upper Saddle River (1976)
[16] Etessami, K., Yannakakis, M.: Recursive Markov chains, stochastic grammars, and monotone systems of nonlinear equations. JACM 56(1), 1:1–1:66 (2009)
[17] Fioriti, L.M.F., Hermanns, H.: Probabilistic termination: soundness, completeness, and compositionality. In: POPL, pp. 489–501. ACM (2015)
[18] Friedman, N., Linial, M., Nachman, I., Pe'er, D.: Using Bayesian networks to analyze expression data. In: RECOMB, pp. 127–135. ACM (2000)
[19] Frohn, F., Naaf, M., Hensel, J., Brockschmidt, M., Giesl, J.: Lower runtime bounds for integer programs. In: Olivetti, N., Tiwari, A. (eds.) IJCAR 2016. LNCS (LNAI), vol. 9706, pp. 550–567. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-40229-1_37
[20] Goodman, N.D.: The principles and practice of probabilistic programming. In: POPL, pp. 399–402. ACM (2013)
[21] Goodman, N.D., Mansinghka, V.K., Roy, D.M., Bonawitz, K., Tenenbaum, J.B.: Church: A language for generative models. In: UAI, pp. 220–229. AUAI Press (2008)
[22] Gordon, A.D., Graepel, T., Rolland, N., Russo, C.V., Borgström, J., Guiver, J.: Tabular: a schema-driven probabilistic programming language. In: POPL, pp. 321–334. ACM (2014)
[23] Gordon, A.D., Henzinger, T.A., Nori, A.V., Rajamani, S.K.: Probabilistic programming. In: Future of Software Engineering, pp. 167–181. ACM (2014)
[24] Heckerman, D.: A tutorial on learning with Bayesian networks. In: Holmes, D.E., Jain, L.C. (eds.) Innovations in Bayesian Networks. Studies in Computational Intelligence, vol. 156, pp. 33–82. Springer, Heidelberg (2008)
[25] Hehner, E.C.R.: A probability perspective. Formal Aspects Comput. 23(4), 391–419 (2011)
[26] Hoffman, M.D., Gelman, A.: The No-U-turn sampler: adaptively setting path lengths in Hamiltonian Monte Carlo. J. Mach. Learn. Res. 15(1), 1593–1623 (2014)
[27] Jiang, X., Cooper, G.F.: A Bayesian spatio-temporal method for disease outbreak detection. JAMIA 17(4), 462–471 (2010)
[28] Kaminski, B.L., Katoen, J.-P.: On the hardness of almost–sure termination. In: Italiano, G.F., Pighizzini, G., Sannella, D.T. (eds.) MFCS 2015. LNCS, vol. 9234, pp. 307–318. Springer, Heidelberg (2015). https://doi.org/10.1007/978-3-662-48057-1_24
[29] Kaminski, B.L., Katoen, J.: A weakest pre-expectation semantics for mixed-sign expectations. In: LICS (2017)
[30] Kaminski, B.L., Katoen, J.-P., Matheja, C., Olmedo, F.: Weakest precondition reasoning for expected run–times of probabilistic programs. In: Thiemann, P. (ed.) ESOP 2016. LNCS, vol. 9632, pp. 364–389. Springer, Heidelberg (2016). https://doi.org/10.1007/978-3-662-49498-1_15
[31] Koller, D., Friedman, N.: Probabilistic Graphical Models - Principles and Techniques. MIT Press, Cambridge (2009)
[32] Kozen, D.: Semantics of probabilistic programs. J. Comput. Syst. Sci. 22(3), 328–350 (1981)
[33] Kozen, D.: A probabilistic PDL. J. Comput. Syst. Sci. 30(2), 162–178 (1985)
[34] Lassez, J.L., Nguyen, V.L., Sonenberg, L.: Fixed point theorems and semantics: a folk tale. Inf. Process. Lett. 14(3), 112–116 (1982)
[35] McIver, A., Morgan, C.: Abstraction, Refinement and Proof for Probabilistic Systems. Springer, New York (2004). http://doi.org/10.1007/b138392
[36] Minka, T., Winn, J.: Infer.NET (2017). http://infernet.azurewebsites.net/. Accessed Oct 17
[37] Minka, T., Winn, J.M.: Gates. In: NIPS, pp. 1073–1080. Curran Associates (2008)
[38] Monniaux, D.: An abstract analysis of the probabilistic termination of programs. In: Cousot, P. (ed.) SAS 2001. LNCS, vol. 2126, pp. 111–126. Springer, Heidelberg (2001). https://doi.org/10.1007/3-540-47764-0_7
[39] Neapolitan, R.E., Jiang, X.: Probabilistic Methods for Financial and Marketing Informatics. Morgan Kaufmann, Burlington (2010)
[40] Nori, A.V., Hur, C., Rajamani, S.K., Samuel, S.: R2: an efficient MCMC sampler for probabilistic programs. In: AAAI, pp. 2476–2482. AAAI Press (2014)
[41] Olmedo, F., Kaminski, B.L., Katoen, J., Matheja, C.: Reasoning about recursive probabilistic programs. In: LICS, pp. 672–681. ACM (2016)
[42] Onisko, A., Druzdzel, M.J., Wasyluk, H.: A probabilistic causal model for diagnosis of liver disorders. In: Proceedings of the Seventh International Symposium on Intelligent Information Systems (IIS-98), pp. 379–387 (1998)
[43] Pearl, J.: Bayesian networks: a model of self-activated memory for evidential reasoning. In: Proceedings of CogSci, pp. 329–334 (1985)
[44] Pfeffer, A.: Figaro: an object-oriented probabilistic programming language. Charles River Analytics Technical Report 137, 96 (2009)
[45] Ramanna, S., Jain, L.C., Howlett, R.J.: Emerging Paradigms in Machine Learning. Springer, Heidelberg (2013)
[46] Scutari, M.: Bayesian Network Repository (2017). http://www.bnlearn.com
[47] Sharir, M., Pnueli, A., Hart, S.: Verification of probabilistic programs. SIAM J. Comput. 13(2), 292–314 (1984)
[48] Wood, F., van de Meent, J., Mansinghka, V.: A new approach to probabilistic programming inference. In: JMLR Workshop and Conference Proceedings, AISTATS, vol. 33, pp. 1024–1032 (2014). JMLR.org
[49] Yuan, C., Druzdzel, M.J.: Importance sampling algorithms for Bayesian networks: principles and performance. Math. Comput. Model. 43(9–10), 1189–1207 (2006)
[50] Zweig, G., Russell, S.J.: Speech recognition with dynamic Bayesian networks. In: AAAI/IAAI, pp. 173–180. AAAI Press/The MIT Press (1998)
© The Author(s) 2018
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. The images or other third party material in this book are included in the book's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the book's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
RWTH Aachen University, Aachen, Germany
Batz K., Kaminski B.L., Katoen JP., Matheja C. (2018) How long, O Bayesian network, will I sample thee?. In: Ahmed A. (eds) Programming Languages and Systems. ESOP 2018. Lecture Notes in Computer Science, vol 10801. Springer, Cham. https://doi.org/10.1007/978-3-319-89884-1_7
First Online 14 April 2018
DOI https://doi.org/10.1007/978-3-319-89884-1_7
Publisher Name Springer, Cham
Print ISBN 978-3-319-89883-4
Online ISBN 978-3-319-89884-1
Plasma channel formation in the knife-like focus of laser beam
O. G. Olkhovskaya, G. A. Bagdasarov, N. A. Bobrova, V. A. Gasilov, L. V. N. Goncalves, C. M. Lazzarini, M. Nevrkla, G. Grittani, S. S. Bulanov, A. J. Gonsalves, C. B. Schroeder, E. Esarey, W. P. Leemans, P. V. Sasorov, S. V. Bulanov, G. Korn
Journal: Journal of Plasma Physics / Volume 86 / Issue 3 / June 2020
Published online by Cambridge University Press: 02 June 2020, 905860307
The plasma channel formation in the focus of a knife-like nanosecond laser pulse irradiating a gas target is studied theoretically, and in gas-dynamics computer simulations. The distribution of the electromagnetic field in the focus region, obtained analytically, is used to calculate the energy deposition in the plasma, which then is implemented in the magnetohydrodynamic computer code. The modelling of the channel evolution shows that the plasma profile, which can guide the laser pulse, is formed by the tightly focused short knife-like lasers. The results of the simulations show that a proper choice of the convergence angle of a knife-like laser beam (determined by the focal length of the last cylindrical lens), and laser pulse duration may provide a sufficient degree of azimuthal symmetry of the formed plasma channel.
Towards laser ion acceleration with holed targets
Prokopis Hadjisolomou, S. V. Bulanov, G. Korn
Published online by Cambridge University Press: 26 May 2020, 905860304
Although the interaction of a flat foil with currently available laser intensities is now considered a routine process, during the last decade, emphasis has been given to targets with complex geometries aiming at increasing the ion energy. This work presents a target geometry where two symmetric side holes and a central hole are drilled into the foil. A study of the various side-hole and central-hole length combinations is performed with two-dimensional particle-in-cell simulations for polyethylene targets and a laser intensity of $5.2\times 10^{21}~\text{W}~\text{cm}^{-2}$ . The holed targets show a remarkable increase of the conversion efficiency, which corresponds to a different target configuration for electrons, protons and carbon ions. Furthermore, diffraction of the laser pulse leads to a directional high energy electron beam, with a temperature of ${\sim}40~\text{MeV}$ , or seven times higher than in the case of a flat foil. The higher conversion efficiency consequently leads to a significant enhancement of the maximum proton energy from holed targets.
Correction of hyperprolactinemia under risperidon treatment
V. Bulanov, L. Gorobets, L. Vasilenko, G. Granenov, S. Mosolov
Journal: European Psychiatry / Volume 17 / Issue S1 / May 2002
On annihilation of the relativistic electron vortex pair in collisionless plasmas
K. V. Lezhnin, F. F. Kamenets, T. Zh. Esirkepov, S. V. Bulanov
Journal: Journal of Plasma Physics / Volume 84 / Issue 6 / December 2018
Published online by Cambridge University Press: 26 November 2018, 905840610
In contrast to hydrodynamic vortices, vortices in a plasma contain an electric current circulating around the centre of the vortex, which generates a magnetic field localized inside. Using computer simulations, we demonstrate that the magnetic field associated with the vortex gives rise to a mechanism of dissipation of the vortex pair in a collisionless plasma, leading to fast annihilation of the magnetic field with its energy transforming into the energy of fast electrons, secondary vortices and plasma waves. Two major contributors to the energy damping of a double vortex system, namely, magnetic field annihilation and secondary vortex formation, are regulated by the size of the vortex with respect to the electron skin depth, which scales with the electron $\gamma$ factor, $\gamma_{e}$, as $R/d_{e}\propto \gamma_{e}^{1/2}$. Magnetic field annihilation appears to be dominant in mildly relativistic vortices, while for the ultrarelativistic case, secondary vortex formation is the main channel for damping of the initial double vortex system.
Charged particle dynamics in multiple colliding electromagnetic waves. Survey of random walk, Lévy flights, limit circles, attractors and structurally determinate patterns
S. V. Bulanov, T. Zh. Esirkepov, J. K. Koga, S. S. Bulanov, Z. Gong, X. Q. Yan, M. Kando
Journal: Journal of Plasma Physics / Volume 83 / Issue 2 / April 2017
Published online by Cambridge University Press: 09 March 2017, 905830202
The multiple colliding laser pulse concept formulated by Bulanov et al. (Phys. Rev. Lett., vol. 104, 2010b, 220404) is beneficial for achieving an extremely high amplitude of coherent electromagnetic field. Since the topology of electric and magnetic fields of multiple colliding laser pulses oscillating in time is far from trivial and the radiation friction effects are significant in the high field limit, the dynamics of charged particles interacting with the multiple colliding laser pulses demonstrates remarkable features corresponding to random walk trajectories, limit circles, attractors, regular patterns and Lévy flights. Under extremely high intensity conditions the nonlinear dissipation mechanism stabilizes the particle motion resulting in the charged particle trajectory being located within narrow regions and in the occurrence of a new class of regular patterns made by the particle ensembles.
On some theoretical problems of laser wake-field accelerators
S. V. Bulanov, T. Zh. Esirkepov, Y. Hayashi, H. Kiriyama, J. K. Koga, H. Kotaki, M. Mori, M. Kando
Enhancement of the quality of laser wake-field accelerated (LWFA) electron beams implies the improvement and controllability of the properties of the wake waves generated by ultra-short pulse lasers in underdense plasmas. In this work we present a compendium of useful formulas giving relations between the laser and plasma target parameters allowing one to obtain basic dependences, e.g. the energy scaling of the electrons accelerated by the wake field excited in inhomogeneous media including multi-stage LWFA accelerators. Consideration of the effects of using the chirped laser pulse driver allows us to find the regimes where the chirp enhances the wake field amplitude. We present an analysis of the three-dimensional effects on the electron beam loading and on the unlimited LWFA acceleration in inhomogeneous plasmas. Using the conditions of electron trapping to the wake-field acceleration phase we analyse the multi-equal stage and multiuneven stage LWFA configurations. In the first configuration the energy of fast electrons is a linear function of the number of stages, and in the second case, the accelerated electron energy grows exponentially with the number of stages. The results of the two-dimensional particle-in-cell simulations presented here show the high quality electron acceleration in the triple stage injection–acceleration configuration.
Fast magnetic energy dissipation in relativistic plasma induced by high order laser modes
HEDP and HPL 2016
Y. J. Gu, Q. Yu, O. Klimo, T. Zh. Esirkepov, S. V. Bulanov, S. Weber, G. Korn
Journal: High Power Laser Science and Engineering / Volume 4 / 2016
Published online by Cambridge University Press: 22 June 2016, e19
Fast magnetic field annihilation in a collisionless plasma is induced by using a TEM(1,0) laser pulse. The magnetic quadrupole structure formation, expansion and annihilation stages are demonstrated with 2.5-dimensional particle-in-cell simulations. The magnetic field energy is converted into electric field energy, which accelerates the particles inside the annihilation plane. A bunch of high energy electrons moving backwards is detected in the current sheet. The strong displacement current is the dominant contribution which induces the longitudinal inductive electric field.
Characterization of preformed plasmas using a multi-dimensional hydrodynamic simulation code in the study of high-intensity laser–plasma interactions
AKITO SAGISAKA, TAKAYUKI UTSUMI, HIROYUKI DAIDO, KOICHI OGURA, SATOSHI ORIMO, MAMIKO NISHIUCHI, YUKIO HAYASHI, MICHIAKI MORI, AKIFUMI YOGO, MASATAKA KADO, ATSUSHI FUKUMI, ZHONG LI, SHU NAKAMURA, AKIRA NODA, YUJI OISHI, TAKUYA NAYUKI, TAKASHI FUJII, KOSHICHI NEMOTO, SERGEI V. BULANOV, TIMUR ZH. ESIRKEPOV, ALEXANDER S. PIROZHKOV, DAISUKE WAKABAYASHI, TOSHIMASA MORITA, MITSURU YAMAGIWA
Published online by Cambridge University Press: 20 December 2006, pp. 1281-1284
We observed a preformed plasma of an aluminum slab target produced by a high-intensity Ti:sapphire laser. The expansion length of the preformed plasma at the electron density of $3 \times 10^{18}~\text{cm}^{-3}$, which was the detection limit, was around 100 μm measured with a laser interferometer. In order to characterize quantitatively and to control the preformed plasmas, we perform a two-dimensional hydrodynamic simulation. The expansion length of the preformed plasma was almost the same as the experimental result, if we assumed that the amplified spontaneous emission lasted 3.5 ns before the main pulse arrived.
Production of ion beams in high-power laser–plasma interactions and their applications
F. PEGORARO, S. ATZENI, M. BORGHESI, S. BULANOV, T. ESIRKEPOV, J. HONRUBIA, Y. KATO, V. KHOROSHKOV, K. NISHIHARA, T. TAJIMA, M. TEMPORAL, O. WILLI
Journal: Laser and Particle Beams / Volume 22 / Issue 1 / March 2004
Energetic ion beams are produced during the interaction of ultrahigh-intensity, short laser pulses with plasmas. These laser-produced ion beams have important applications ranging from the fast ignition of thermonuclear targets to proton imaging, deep proton lithography, medical physics, and injectors for conventional accelerators. Although the basic physical mechanisms of ion beam generation in the plasma produced by the laser pulse interaction with the target are common to all these applications, each application requires a specific optimization of the ion beam properties, that is, an appropriate choice of the target design and of the laser pulse intensity, shape, and duration.
On the motion of charged particles in a sheared force-free magnetic field
G. E. VEKSTEIN, N. A. BOBROVA, S. V. BULANOV
Journal: Journal of Plasma Physics / Volume 67 / Issue 2-3 / April 2002
This paper considers single-particle trajectories in a planar sheared force-free magnetic field. A specific feature of this magnetic configuration is the absence of both gradient and curvature magnetic drifts, as well as a diamagnetic force along field lines. Therefore, in the framework of the drift approximation, the motion of the particle guiding centre does not feel the magnetic field's non-uniformity. Here we discuss how the latter affects actual particle trajectories, making them quite different from simple circular gyromotion even when the Larmor radius is small. It is also shown how magnetic confinement ceases to work when the Larmor radius becomes comparable to the spatial scale of the field variation.
Coulomb explosion of a cluster irradiated by a high intensity laser pulse
T. ESIRKEPOV, R. BINGHAM, S. BULANOV, T. HONDA, K. NISHIHARA, F. PEGORARO
Journal: Laser and Particle Beams / Volume 18 / Issue 3 / July 2000
Clusters represent a new class of laser pulse targets which show both the properties of underdense and of overdense plasmas. We present analytical and numerical results (based on 2D- and 3D-PIC simulations) of the Coulomb explosion of the ion cloud that is formed when a cluster is irradiated by a high-intensity laser pulse. For laser pulse intensities in the range of $10^{21}$–$10^{22}~\text{W/cm}^{2}$, the laser light can rip electrons from atoms almost instantaneously and can create a cloud made of an electrically nonneutral plasma. Ions can then be accelerated up to high energy during the Coulomb explosion of the cloud.
Magnetic-field generation and wave-breaking in collisionless plasmas
F. CALIFANO, R. PRANDI, F. PEGORARO, S. V. BULANOV
Journal: Journal of Plasma Physics / Volume 60 / Issue 2 / September 1998
We discuss the nonlinear evolution of the Weibel instability of two counter-streaming electron beams in the relativistic and non-relativistic regimes in the framework of a two-fluid description. We show the presence of two singularities per wavelength, responsible for the formation of very large density spikes. In the case of non-symmetric beams, we observe the compressive wave-break of the fast electron population, and the generation of a dipolar magnetic field associated with a central fast thin current and two larger slow return currents. This structure is very similar to those observed in PIC simulations of laser-plasma interactions. | CommonCrawl |
Hurricane Florence in 2018 as seen from the International Space Station. The eye, eyewall, and surrounding rainbands, characteristics of tropical cyclones in the narrow sense, are clearly visible in this view from space.
A tropical cyclone is a rapidly rotating storm system characterized by a low-pressure center, a closed low-level atmospheric circulation, strong winds, and a spiral arrangement of thunderstorms that produce heavy rain and squalls. Depending on its location and strength, a tropical cyclone is referred to by different names, including hurricane (/ˈhʌrɪkən, -keɪn/), typhoon (/taɪˈfuːn/), tropical storm, cyclonic storm, tropical depression, or simply cyclone.[citation needed] A hurricane is a strong tropical cyclone that occurs in the Atlantic Ocean or northeastern Pacific Ocean, and a typhoon occurs in the northwestern Pacific Ocean. In the Indian Ocean, South Pacific, or (rarely) South Atlantic, comparable storms are referred to simply as "tropical cyclones", and such storms in the Indian Ocean can also be called "severe cyclonic storms".
"Tropical" refers to the geographical origin of these systems, which form almost exclusively over tropical seas. "Cyclone" refers to their winds moving in a circle, whirling round their central clear eye, with their surface winds blowing counterclockwise in the Northern Hemisphere and clockwise in the Southern Hemisphere. The opposite direction of circulation is due to the Coriolis effect. Tropical cyclones typically form over large bodies of relatively warm water. They derive their energy through the evaporation of water from the ocean surface, which ultimately condenses into clouds and rain when moist air rises and cools to saturation. This energy source differs from that of mid-latitude cyclonic storms, such as nor'easters and European windstorms, which are powered primarily by horizontal temperature contrasts. Tropical cyclones are typically between 100 and 2,000 km (62 and 1,243 mi) in diameter. Every year tropical cyclones impact various regions of the globe including the Gulf Coast of North America, Australia, India, and Bangladesh.
The strong rotating winds of a tropical cyclone are a result of the conservation of angular momentum imparted by the Earth's rotation as air flows inwards toward the axis of rotation. As a result, they rarely form within 5° of the equator. Tropical cyclones are very rare in the South Atlantic (although occasional examples do occur) due to consistently strong wind shear and a weak Intertropical Convergence Zone. Conversely, the African easterly jet and areas of atmospheric instability give rise to cyclones in the Atlantic Ocean and Caribbean Sea, while cyclones near Australia owe their genesis to the Asian monsoon and Western Pacific Warm Pool.
The primary energy source for these storms is warm ocean waters. These storms are therefore typically strongest when over or near water, and they weaken quite rapidly over land. This causes coastal regions to be particularly vulnerable to tropical cyclones, compared to inland regions. Coastal damage may be caused by strong winds and rain, high waves (due to winds), storm surges (due to wind and severe pressure changes), and the potential of spawning tornadoes. Tropical cyclones draw in air from a large area and concentrate the water content of that air (from atmospheric moisture and moisture evaporated from water) into precipitation over a much smaller area. This replenishing of moisture-bearing air after rain may cause multi-hour or multi-day extremely heavy rain up to 40 km (25 mi) from the coastline, far beyond the amount of water that the local atmosphere holds at any one time. This in turn can lead to river flooding, overland flooding, and a general overwhelming of local water control structures across a large area. Although their effects on human populations can be devastating, tropical cyclones may play a role in relieving drought conditions, though this claim is disputed[disputed – discuss]. They also carry heat and energy away from the tropics and transport it towards temperate latitudes, which plays an important role in regulating global climate.
A tropical cyclone is the generic term for a warm-cored, non-frontal synoptic-scale low-pressure system over tropical or subtropical waters around the world.[1][2] The systems generally have a well-defined center which is surrounded by deep atmospheric convection and a closed wind circulation at the surface.[1]
Historically, tropical cyclones have occurred around the world for thousands of years, with one of the earliest tropical cyclones on record estimated to have occurred in Western Australia in around 4000 BC.[3] However, before satellite imagery became available during the 20th century, there was no way to detect a tropical cyclone unless it impacted land or a ship encountered it by chance.[4]
These days, on average around 80 to 90 named tropical cyclones form each year around the world, over half of which develop hurricane-force winds of 65 kn (120 km/h; 75 mph) or more.[4] Around the world, a tropical cyclone is generally deemed to have formed once mean surface winds in excess of 35 kn (65 km/h; 40 mph) are observed.[4] It is assumed at this stage that a tropical cyclone has become self-sustaining and can continue to intensify without any help from its environment.[4]
A study review article published in 2021 in Nature Geoscience concluded that the geographic range of tropical cyclones will probably expand poleward in response to climate warming of the Hadley circulation.[5]
Tropical cyclone intensity is based on wind speeds and pressure; relationships between winds and pressure are often used in determining the intensity of a storm.[6] Tropical cyclone scales such as the Saffir-Simpson Hurricane Wind Scale and Australia's scale (Bureau of Meteorology) only use wind speed for determining the category of a storm.[7][8] The most intense storm on record is Typhoon Tip in the northwestern Pacific Ocean in 1979, which reached a minimum pressure of 870 hPa (26 inHg) and maximum sustained wind speeds of 165 kn (85 m/s; 306 km/h; 190 mph).[9] The highest maximum sustained wind speed ever recorded was 185 kn (95 m/s; 343 km/h; 213 mph) in Hurricane Patricia in 2015—the most intense cyclone ever recorded in the Western Hemisphere.[10]
Factors that influence intensity
Warm sea surface temperatures are required in order for tropical cyclones to form and strengthen. The commonly-accepted minimum temperature range for this to occur is 26–27 °C (79–81 °F), however, multiple studies have proposed a lower minimum of 25.5 °C (77.9 °F).[11][12] Higher sea surface temperatures result in faster intensification rates and sometimes even rapid intensification.[13] High ocean heat content, also known as Tropical Cyclone Heat Potential, allows storms to achieve a higher intensity.[14] Most tropical cyclones that experience rapid intensification are traversing regions of high ocean heat content rather than lower values.[15] High ocean heat content values can help to offset the oceanic cooling caused by the passage of a tropical cyclone, limiting the effect this cooling has on the storm.[16] Faster-moving systems are able to intensify to higher intensities with lower ocean heat content values. Slower-moving systems require higher values of ocean heat content to achieve the same intensity.[15]
The passage of a tropical cyclone over the ocean causes the upper layers of the ocean to cool substantially, a process known as upwelling,[17] which can negatively influence subsequent cyclone development. This cooling is primarily caused by wind-driven mixing of cold water from deeper in the ocean with the warm surface waters. This effect results in a negative feedback process that can inhibit further development or lead to weakening. Additional cooling may come in the form of cold water from falling raindrops (this is because the atmosphere is cooler at higher altitudes). Cloud cover may also play a role in cooling the ocean, by shielding the ocean surface from direct sunlight before and slightly after the storm passage. All these effects can combine to produce a dramatic drop in sea surface temperature over a large area in just a few days.[18] Conversely, the mixing of the sea can result in heat being inserted in deeper waters, with potential effects on global climate.[19]
Vertical wind shear negatively impacts tropical cyclone intensification by displacing moisture and heat from a system's center.[20] Low levels of vertical wind shear are most optimal for strengthening, while stronger wind shear induces weakening.[21][22] Dry air entraining into a tropical cyclone's core has a negative effect on its development and intensity by diminishing atmospheric convection and introducing asymmetries in the storm's structure.[23][24][25] Symmetric, strong outflow leads to a faster rate of intensification than observed in other systems by mitigating local wind shear.[26][27][28] Weakening outflow is associated with the weakening of rainbands within a tropical cyclone.[29]
The size of tropical cyclones plays a role in how quickly they intensify. Smaller tropical cyclones are more prone to rapid intensification than larger ones.[30] The Fujiwhara effect, which involves interaction between two tropical cyclones, can weaken and ultimately result in the dissipation of the weaker of two tropical cyclones by reducing the organization of the system's convection and imparting horizontal wind shear.[31] Tropical cyclones typically weaken while situated over a landmass because conditions are often unfavorable as a result of the lack of oceanic forcing.[32] The Brown ocean effect can allow a tropical cyclone to maintain or increase its intensity following landfall, in cases where there has been copious rainfall, through the release of latent heat from the saturated soil.[33] Orographic lift can cause a significant increase in the intensity of the convection of a tropical cyclone when its eye moves over a mountain, breaking the capped boundary layer that had been restraining it.[34] Jet streams can both enhance and inhibit tropical cyclone intensity by influencing the storm's outflow as well as vertical wind shear.[35][36]
Main article: Tropical cyclogenesis
Diagram of a tropical cyclone in the Northern Hemisphere
Tropical cyclones tend to develop during the summer, but have been noted in nearly every month in most tropical cyclone basins. Tropical cyclones on either side of the Equator generally have their origins in the Intertropical Convergence Zone, where winds blow from either the northeast or southeast.[37] Within this broad area of low-pressure, air is heated over the warm tropical ocean and rises in discrete parcels, which causes thundery showers to form.[37] These showers dissipate quite quickly; however, they can group together into large clusters of thunderstorms.[37] This creates a flow of warm, moist, rapidly rising air, which starts to rotate cyclonically as it interacts with the rotation of the earth.[37]
Several factors are required for these thunderstorms to develop further, including sea surface temperatures of around 27 °C (81 °F) and low vertical wind shear surrounding the system,[37][38] atmospheric instability, high humidity in the lower to middle levels of the troposphere, enough Coriolis force to develop a low-pressure center, a pre-existing low-level focus or disturbance,[38] and upper-level divergence.[40] There is a limit on tropical cyclone intensity which is strongly related to the water temperatures along its path.[39] An average of 86 tropical cyclones of tropical storm intensity form annually worldwide. Of those, 47 reach strength higher than 119 km/h (74 mph), and 20 become intense tropical cyclones (at least Category 3 intensity on the Saffir–Simpson scale).[41]
Climate cycles such as ENSO and the Madden–Julian oscillation modulate the timing and frequency of tropical cyclone development.[42][43][44][45] Rossby waves can aid in the formation of a new tropical cyclone by disseminating the energy of an existing, mature storm.[46][47] Kelvin waves can contribute to tropical cyclone formation by regulating the development of the westerlies.[48] Cyclone formation is usually reduced 3 days prior to the wave's crest and increased during the 3 days after.[49]
Rapid intensification
Main article: Rapid intensification
On occasion, tropical cyclones may undergo a process known as rapid intensification, a period in which the maximum sustained winds of a tropical cyclone increase by 30 kn (56 km/h; 35 mph) or more within 24 hours.[50] Similarly, rapid deepening in tropical cyclones is defined as a minimum sea surface pressure decrease of 1.75 hPa (0.052 inHg) per hour or 42 hPa (1.2 inHg) within a 24-hour period; explosive deepening occurs when the surface pressure decreases by 2.5 hPa (0.074 inHg) per hour for at least 12 hours or 5 hPa (0.15 inHg) per hour for at least 6 hours.[51] For rapid intensification to occur, several conditions must be in place. Water temperatures must be extremely high (near or above 30 °C (86 °F)), and water of this temperature must be sufficiently deep such that waves do not upwell cooler waters to the surface. Tropical Cyclone Heat Potential is one such non-conventional subsurface oceanographic parameter that influences cyclone intensity. Wind shear must be low; when wind shear is high, the convection and circulation in the cyclone will be disrupted. Usually, an anticyclone in the upper layers of the troposphere above the storm must be present as well—for extremely low surface pressures to develop, air must be rising very rapidly in the eyewall of the storm, and an upper-level anticyclone helps channel this air away from the cyclone efficiently.[52] However, some cyclones such as Hurricane Epsilon have rapidly intensified despite relatively unfavorable conditions.[53][54]
Dissipation
Hurricane Paulette, in 2020, is an example of a sheared tropical cyclone, with deep convection slightly removed from the center of the system.
There are a number of ways a tropical cyclone can weaken, dissipate, or lose its tropical characteristics. These include making landfall, moving over cooler water, encountering dry air, or interacting with other weather systems; however, once a system has dissipated or lost its tropical characteristics, its remnants could regenerate a tropical cyclone if environmental conditions become favorable.[55][56]
A tropical cyclone can dissipate when it moves over waters significantly cooler than 26.5 °C (79.7 °F). This will deprive the storm of such tropical characteristics as a warm core with thunderstorms near the center, so that it becomes a remnant low-pressure area. Remnant systems may persist for several days before losing their identity. This dissipation mechanism is most common in the eastern North Pacific. Weakening or dissipation can also occur if a storm experiences vertical wind shear which causes the convection and heat engine to move away from the center; this normally ceases the development of a tropical cyclone.[57] In addition, its interaction with the main belt of the Westerlies, by means of merging with a nearby frontal zone, can cause tropical cyclones to evolve into extratropical cyclones. This transition can take 1–3 days.[58]
Should a tropical cyclone make landfall or pass over an island, its circulation could start to break down, especially if it encounters mountainous terrain.[59] When a system makes landfall on a large landmass, it is cut off from its supply of warm moist maritime air and starts to draw in dry continental air.[59] This, combined with the increased friction over land areas, leads to the weakening and dissipation of the tropical cyclone.[59] Over a mountainous terrain, a system can quickly weaken; however, over flat areas, it may endure for two to three days before circulation breaks down and dissipates.[59]
Over the years, there have been a number of techniques considered to try to artificially modify tropical cyclones.[60] These techniques have included using nuclear weapons, cooling the ocean with icebergs, blowing the storm away from land with giant fans, and seeding selected storms with dry ice or silver iodide.[60] These techniques, however, fail to appreciate the duration, intensity, power or size of tropical cyclones.[60]
Methods for assessing intensity
For broader coverage of this topic, see Dvorak technique and Scatterometer.
A variety of methods or techniques, including surface, satellite, and aerial, are used to assess the intensity of a tropical cyclone. Reconnaissance aircraft fly around and through tropical cyclones, outfitted with specialized instruments, to collect information that can be used to ascertain the winds and pressure of a system.[4] Tropical cyclones possess winds of different speeds at different heights. Winds recorded at flight level can be converted to find the wind speeds at the surface.[61] Surface observations, such as ship reports, land stations, mesonets, coastal stations, and buoys, can provide information on a tropical cyclone's intensity or the direction it is traveling.[4] Wind-pressure relationships (WPRs) are used as a way to determine the pressure of a storm based on its wind speed. Several different methods and equations have been proposed to calculate WPRs.[62][63] Tropical cyclone agencies each use their own fixed WPR, which can result in inaccuracies between agencies that are issuing estimates on the same system.[4] The ASCAT is a scatterometer used by the MetOp satellites to map the wind field vectors of tropical cyclones.[4] The SMAP uses an L-band radiometer channel to determine the wind speeds of tropical cyclones at the ocean surface, and has been shown to be reliable at higher intensities and under heavy rainfall conditions, unlike scatterometer-based and other radiometer-based instruments.[64]
The Dvorak technique plays a large role in both the classification of a tropical cyclone and the determination of its intensity. Used in warning centers, the method was developed by Vernon Dvorak in the 1970s, and uses both visible and infrared satellite imagery in the assessment of tropical cyclone intensity. The Dvorak technique uses a scale of "T-numbers", scaling in increments of 0.5 from T1.0 to T8.0. Each T-number has an intensity assigned to it, with larger T-numbers indicating a stronger system. Tropical cyclones are assessed by forecasters according to an array of patterns, including curved banding features, shear, central dense overcast, and eye, in order to determine the T-number and thus assess the intensity of the storm.[65] The Cooperative Institute for Meteorological Satellite Studies works to develop and improve automated satellite methods, such as the Advanced Dvorak Technique (ADT) and SATCON. The ADT, used by a large number of forecasting centers, uses infrared geostationary satellite imagery and an algorithm based upon the Dvorak technique to assess the intensity of tropical cyclones. The ADT has a number of differences from the conventional Dvorak technique, including changes to intensity constraint rules and the usage of microwave imagery to base a system's intensity upon its internal structure, which prevents the intensity from leveling off before an eye emerges in infrared imagery.[66] The SATCON weights estimates from various satellite-based systems and microwave sounders, accounting for the strengths and flaws in each individual estimate, to produce a consensus estimate of a tropical cyclone's intensity which can be more reliable than the Dvorak technique at times.[67][68]
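In practice, converting an assessed T-number into a wind estimate amounts to a table lookup, with interpolation for intermediate values. The sketch below, in Python, illustrates only that mechanism; the table values in the example are illustrative placeholders rather than the operational Dvorak conversion figures, which differ by basin and are not reproduced in this article.

```python
# Minimal sketch: convert a Dvorak T-number to a wind estimate by table lookup
# with linear interpolation. The example table holds PLACEHOLDER values for
# illustration only; it is not an operational Dvorak conversion table.

def wind_from_t_number(t_number, table):
    """Interpolate a wind estimate (knots) from a T-number -> wind table."""
    points = sorted(table.items())
    if t_number <= points[0][0]:
        return points[0][1]
    if t_number >= points[-1][0]:
        return points[-1][1]
    for (t0, v0), (t1, v1) in zip(points, points[1:]):
        if t0 <= t_number <= t1:
            return v0 + (v1 - v0) * (t_number - t0) / (t1 - t0)

# Placeholder table: T-number -> one-minute sustained wind in knots
example_table = {1.0: 25, 2.0: 30, 3.0: 45, 4.0: 65, 5.0: 90, 6.0: 115, 7.0: 140, 8.0: 170}
print(wind_from_t_number(4.5, example_table))  # interpolated estimate for T4.5
```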
Intensity metrics
Multiple intensity metrics are used, including accumulated cyclone energy (ACE), the Hurricane Surge Index, the Hurricane Severity Index, the Power Dissipation Index (PDI), and integrated kinetic energy (IKE). ACE is a metric of the total energy a system has exerted over its lifespan. ACE is calculated by summing the squares of a cyclone's sustained wind speed every six hours, for as long as the system is at or above tropical storm intensity and is either tropical or subtropical.[69] The calculation of the PDI is similar in nature to ACE, with the major difference being that wind speeds are cubed rather than squared.[70] The Hurricane Surge Index is a metric of the potential damage a storm may inflict via storm surge. It is calculated by squaring the quotient of the storm's wind speed and a climatological value (33 metres per second (74 mph)), and then multiplying that quantity by the quotient of the radius of hurricane-force winds and its climatological value (96.6 kilometres (60.0 mi)). This can be represented in equation form as:
\left(\frac{v}{33\ \mathrm{m/s}}\right)^{2}\times\left(\frac{r}{96.6\ \mathrm{km}}\right)
where v is the storm's wind speed and r is the radius of hurricane-force winds.[71] The Hurricane Severity Index is a scale that can assign up to 50 points to a system; up to 25 points come from intensity, while the other 25 come from the size of the storm's wind field.[72] The IKE model measures the destructive capability of a tropical cyclone via winds, waves, and surge. It is calculated as:
\int_{Vol}\frac{1}{2}\,p\,u^{2}\,dv
where p is the density of air, u is a sustained surface wind speed value, and dv is the volume element.[72][73]
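As a concrete illustration of the two simpler formulas above, the following Python sketch computes ACE from a list of six-hourly sustained winds and evaluates the Hurricane Surge Index exactly as written in the equation. The use of knots, the 34 kt tropical-storm threshold, and the conventional 10⁻⁴ scaling for ACE are assumptions drawn from common practice rather than values stated in this article.

```python
# Sketch of two intensity metrics described above. The knot units, 34 kt
# tropical-storm threshold, and 1e-4 scaling for ACE are conventional
# assumptions, not values given in the text.

def accumulated_cyclone_energy(six_hourly_winds_kt):
    """Sum of squared six-hourly sustained winds (knots) while at tropical-storm strength or above."""
    return sum(v ** 2 for v in six_hourly_winds_kt if v >= 34) * 1e-4

def hurricane_surge_index(wind_ms, hurricane_wind_radius_km):
    """(v / 33 m/s)^2 * (r / 96.6 km), following the equation above."""
    return (wind_ms / 33.0) ** 2 * (hurricane_wind_radius_km / 96.6)

# Example: a short-lived storm sampled every six hours (winds in knots)
print(accumulated_cyclone_energy([35, 50, 75, 90, 70, 40, 25]))            # ACE in units of 10^4 kt^2
print(hurricane_surge_index(wind_ms=50.0, hurricane_wind_radius_km=120.0))
```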
Classification and naming
Intensity classifications
Main article: Tropical cyclone scales
Three tropical cyclones of the 2006 Pacific typhoon season at different stages of development. The weakest (left) demonstrates only the most basic circular shape. A stronger storm (top right) demonstrates spiral banding and increased centralization, while the strongest (lower right) has developed an eye.
Around the world, tropical cyclones are classified in different ways, based on the location (tropical cyclone basins), the structure of the system and its intensity. For example, within the Northern Atlantic and Eastern Pacific basins, a tropical cyclone with wind speeds of over 65 kn (120 km/h; 75 mph) is called a hurricane, while it is called a typhoon or a severe cyclonic storm within the Western Pacific or North Indian Oceans.[74][75][76] Within the Southern Hemisphere, it is either called a hurricane, tropical cyclone or a severe tropical cyclone, depending on whether it is located within the South Atlantic, South-West Indian Ocean, Australian region or the South Pacific Ocean.[77][78]
Main articles: Tropical cyclone naming and History of tropical cyclone naming
The practice of using names to identify tropical cyclones goes back many years, with systems named after places or things they hit before the formal start of naming.[79][80] The system currently used provides positive identification of severe weather systems in a brief form that is readily understood and recognized by the public.[79][80] The credit for the first usage of personal names for weather systems is generally given to the Queensland Government Meteorologist Clement Wragge, who named systems between 1887 and 1907.[79][80] This system of naming weather systems subsequently fell into disuse for several years after Wragge retired, until it was revived in the latter part of World War II for the Western Pacific.[79][80] Formal naming schemes have subsequently been introduced for the North and South Atlantic, Eastern, Central, Western and Southern Pacific basins as well as the Australian region and Indian Ocean.[80]
At present, tropical cyclones are officially named by one of twelve meteorological services and retain their names throughout their lifetimes to provide ease of communication between forecasters and the general public regarding forecasts, watches, and warnings.[79] Since the systems can last a week or longer and more than one can be occurring in the same basin at the same time, the names are thought to reduce the confusion about what storm is being described.[79] Names are assigned in order from predetermined lists once a system has one-, three-, or ten-minute sustained wind speeds of more than 65 km/h (40 mph), depending on the basin in which it originates.[74][76][77] However, standards vary from basin to basin, with some tropical depressions named in the Western Pacific, while tropical cyclones have to have a significant amount of gale-force winds occurring around the center before they are named within the Southern Hemisphere.[77][78] The names of significant tropical cyclones in the North Atlantic Ocean, Pacific Ocean, and Australian region are retired from the naming lists and replaced with another name.[74][75][78] Tropical cyclones that develop around the world are assigned an identification code consisting of a two-digit number and suffix letter by the warning centers that monitor them.[78][81]
Eye and center
Main article: Eye (cyclone)
The eye and surrounding clouds of 2018 Hurricane Florence as seen from the International Space Station
At the center of a mature tropical cyclone, air sinks rather than rises. For a sufficiently strong storm, air may sink over a layer deep enough to suppress cloud formation, thereby creating a clear "eye". Weather in the eye is normally calm and free of convective clouds, although the sea may be extremely violent.[82] The eye is normally circular and is typically 30–65 km (19–40 mi) in diameter, though eyes as small as 3 km (1.9 mi) and as large as 370 km (230 mi) have been observed.[83][84]
The cloudy outer edge of the eye is called the "eyewall". The eyewall typically expands outward with height, resembling an arena football stadium; this phenomenon is sometimes referred to as the "stadium effect".[84] The eyewall is where the greatest wind speeds are found, air rises most rapidly, clouds reach their highest altitude, and precipitation is the heaviest. The heaviest wind damage occurs where a tropical cyclone's eyewall passes over land.[82]
In a weaker storm, the eye may be obscured by the central dense overcast, which is the upper-level cirrus shield that is associated with a concentrated area of strong thunderstorm activity near the center of a tropical cyclone.[85]
The eyewall may vary over time in the form of eyewall replacement cycles, particularly in intense tropical cyclones. Outer rainbands can organize into an outer ring of thunderstorms that slowly moves inward, which is believed to rob the primary eyewall of moisture and angular momentum. When the primary eyewall weakens, the tropical cyclone weakens temporarily. The outer eyewall eventually replaces the primary one at the end of the cycle, at which time the storm may return to its original intensity.[86]
There are a variety of metrics commonly used to measure storm size. The most common metrics include the radius of maximum wind, the radius of 34-knot (17 m/s; 63 km/h; 39 mph) wind (i.e. gale force), the radius of outermost closed isobar (ROCI), and the radius of vanishing wind.[87][88] An additional metric is the radius at which the cyclone's relative vorticity field decreases to 1×10−5 s−1.[84]
Size descriptions of tropical cyclones
ROCI (diameter) | Size description
Less than 2 degrees of latitude | Very small/minor
2 to 3 degrees of latitude | Small
3 to 6 degrees of latitude | Medium/Average/Normal
6 to 8 degrees of latitude | Large
Over 8 degrees of latitude | Very large[89]
On Earth, tropical cyclones span a large range of sizes, from 100–2,000 km (62–1,243 mi) as measured by the radius of vanishing wind. They are largest on average in the northwest Pacific Ocean basin and smallest in the northeastern Pacific Ocean basin.[90] If the radius of outermost closed isobar is less than two degrees of latitude (222 km (138 mi)), then the cyclone is "very small" or a "midget". A radius of 3–6 latitude degrees (333–670 km (207–416 mi)) is considered "average sized". "Very large" tropical cyclones have a radius of greater than 8 degrees (888 km (552 mi)).[89] Observations indicate that size is only weakly correlated to variables such as storm intensity (i.e. maximum wind speed), radius of maximum wind, latitude, and maximum potential intensity.[88][90] Typhoon Tip is the largest cyclone on record, with tropical storm-force winds 2,170 km (1,350 mi) in diameter. The smallest storm on record is Tropical Storm Marco (2008), which generated tropical storm-force winds only 37 km (23 mi) in diameter.[91]
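Because the size descriptions above map directly onto ROCI thresholds, classification is a simple range check. The sketch below encodes the table; the degree-to-kilometre conversion (about 111 km per degree of latitude) is an added convenience and an assumption, not a figure from the table itself.

```python
# Classify a tropical cyclone's size from its radius of outermost closed isobar
# (ROCI), following the degree-of-latitude thresholds in the table above.

KM_PER_DEGREE_LATITUDE = 111.0  # approximate conversion, assumed for convenience

def size_description(roci_degrees):
    """Return the size description for a ROCI expressed in degrees of latitude."""
    if roci_degrees < 2:
        return "Very small/minor"
    elif roci_degrees < 3:
        return "Small"
    elif roci_degrees < 6:
        return "Medium/Average/Normal"
    elif roci_degrees < 8:
        return "Large"
    return "Very large"

print(size_description(250.0 / KM_PER_DEGREE_LATITUDE))  # a 250 km ROCI -> "Small"
```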
The movement of a tropical cyclone (i.e. its "track") is typically approximated as the sum of two terms: "steering" by the background environmental wind and "beta drift".[92] Some tropical cyclones can move across large distances, such as Hurricane John, the longest-lasting tropical cyclone on record, which traveled 13,280 km (8,250 mi), the longest track of any Northern Hemisphere tropical cyclone, over its 31-day lifespan in 1994.[93][94]
Environmental steering
Environmental steering is the primary influence on the motion of tropical cyclones.[95] It represents the movement of the storm due to prevailing winds and other wider environmental conditions, similar to "leaves carried along by a stream".[96]
Physically, the winds, or flow field, in the vicinity of a tropical cyclone may be treated as having two parts: the flow associated with the storm itself, and the large-scale background flow of the environment.[95] Tropical cyclones can be treated as local maxima of vorticity suspended within the large-scale background flow of the environment.[97] In this way, tropical cyclone motion may be represented to first-order as advection of the storm by the local environmental flow.[98] This environmental flow is termed the "steering flow" and is the dominant influence on tropical cyclone motion.[95] The strength and direction of the steering flow can be approximated as a vertical integration of the winds blowing horizontally in the cyclone's vicinity, weighted by the altitude at which those winds are occurring. Because winds can vary with height, determining the steering flow precisely can be difficult.
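A common way to approximate the steering flow just described is a weighted average of the environmental wind over a deep layer of the troposphere. The Python sketch below shows that calculation for a handful of pressure levels; the levels, weights, and wind values are illustrative assumptions rather than data from this article.

```python
# Minimal sketch of a deep-layer-mean steering flow: a weighted average of the
# environmental (u, v) wind components across several pressure levels.
# The levels, weights, and winds below are illustrative assumptions only.

def deep_layer_mean(winds, weights):
    """Weighted mean of (u, v) wind components (m/s) across pressure levels."""
    total = sum(weights)
    u = sum(w * uw for (uw, _), w in zip(winds, weights)) / total
    v = sum(w * vw for (_, vw), w in zip(winds, weights)) / total
    return u, v

# Environmental (u, v) winds at, say, 850, 700, 500, and 250 hPa
winds = [(-6.0, 1.0), (-5.0, 2.0), (-3.0, 3.0), (2.0, 5.0)]
weights = [1.0, 1.0, 1.0, 0.5]   # lower, thicker layers weighted more in this example
print(deep_layer_mean(winds, weights))  # approximate steering vector in m/s
```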
The pressure altitude at which the background winds are most correlated with a tropical cyclone's motion is known as the "steering level".[97] The motion of stronger tropical cyclones is more correlated with the background flow averaged across a thicker portion of troposphere compared to weaker tropical cyclones whose motion is more correlated with the background flow averaged across a narrower extent of the lower troposphere.[99] When wind shear and latent heat release is present, tropical cyclones tend to move towards regions where potential vorticity is increasing most quickly.[100]
Climatologically, tropical cyclones are steered primarily westward by the east-to-west trade winds on the equatorial side of the subtropical ridge—a persistent high-pressure area over the world's subtropical oceans.[96] In the tropical North Atlantic and Northeast Pacific oceans, the trade winds steer tropical easterly waves westward from the African coast toward the Caribbean Sea, North America, and ultimately into the central Pacific Ocean before the waves dampen out.[101] These waves are the precursors to many tropical cyclones within this region.[102] In contrast, in the Indian Ocean and Western Pacific in both hemispheres, tropical cyclogenesis is influenced less by tropical easterly waves and more by the seasonal movement of the Intertropical Convergence Zone and the monsoon trough.[103] Other weather systems such as mid-latitude troughs and broad monsoon gyres can also influence tropical cyclone motion by modifying the steering flow.[99][104]
Beta drift
In addition to environmental steering, a tropical cyclone will tend to drift poleward and westward, a motion known as "beta drift".[105] This motion is due to the superposition of a vortex, such as a tropical cyclone, onto an environment in which the Coriolis force varies with latitude, such as on a sphere or beta plane.[106] The magnitude of the component of tropical cyclone motion associated with the beta drift ranges between 1–3 m/s (3.6–10.8 km/h; 2.2–6.7 mph) and tends to be larger for more intense tropical cyclones and at higher latitudes. It is induced indirectly by the storm itself as a result of feedback between the cyclonic flow of the storm and its environment.[107][105]
Physically, the cyclonic circulation of the storm advects environmental air poleward east of center and equatorial west of center. Because air must conserve its angular momentum, this flow configuration induces a cyclonic gyre equatorward and westward of the storm center and an anticyclonic gyre poleward and eastward of the storm center. The combined flow of these gyres acts to advect the storm slowly poleward and westward. This effect occurs even if there is zero environmental flow.[108][109] Due to a direct dependence of the beta drift on angular momentum, the size of a tropical cyclone can impact the influence of beta drift on its motion; beta drift imparts a greater influence on the movement of larger tropical cyclones than that of smaller ones.[110][111]
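Since a storm's track is approximated as the sum of the steering flow and beta drift, a single forecast step can be sketched as a vector addition followed by a position update. The velocities and the flat-Earth conversion to degrees in the example below are illustrative assumptions only.

```python
# Sketch of one track step: motion = environmental steering + beta drift.
# All numeric values and the flat-Earth degree conversion are assumptions.

import math

def advance_position(lat, lon, steering_uv, beta_uv, hours=6.0):
    """Advance a storm center by (steering + beta drift) over a number of hours."""
    u = steering_uv[0] + beta_uv[0]      # eastward speed, m/s
    v = steering_uv[1] + beta_uv[1]      # northward speed, m/s
    meters = hours * 3600.0
    dlat = v * meters / 111_000.0        # ~111 km per degree of latitude
    dlon = u * meters / (111_000.0 * math.cos(math.radians(lat)))
    return lat + dlat, lon + dlon

# Westward steering of 5 m/s plus a ~2 m/s poleward-westward beta drift (Northern Hemisphere)
print(advance_position(15.0, -45.0, steering_uv=(-5.0, 0.0), beta_uv=(-1.5, 1.5)))
```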
Multiple storm interaction
Main article: Fujiwhara effect
A third component of motion that occurs relatively infrequently involves the interaction of multiple tropical cyclones. When two cyclones approach one another, their centers will begin orbiting cyclonically about a point between the two systems. Depending on their separation distance and strength, the two vortices may simply orbit around one another, or else may spiral into the center point and merge. When the two vortices are of unequal size, the larger vortex will tend to dominate the interaction, and the smaller vortex will orbit around it. This phenomenon is called the Fujiwhara effect, after Sakuhei Fujiwhara.[112]
Interaction with the mid-latitude westerlies
See also: Westerlies
Storm track of Typhoon Ioke, showing recurvature off the Japanese coast in 2006
Though a tropical cyclone typically moves from east to west in the tropics, its track may shift poleward and eastward either as it moves west of the subtropical ridge axis or else if it interacts with the mid-latitude flow, such as the jet stream or an extratropical cyclone. This motion, termed "recurvature", commonly occurs near the western edge of the major ocean basins, where the jet stream typically has a poleward component and extratropical cyclones are common.[113] An example of tropical cyclone recurvature was Typhoon Ioke in 2006.[114]
Formation regions and warning centers
Main articles: Tropical cyclone basins and Regional Specialized Meteorological Centre
Tropical cyclone basins and official warning centers
Basin | Warning center | Area of responsibility
Northern Hemisphere
North Atlantic, Eastern Pacific | United States National Hurricane Center | Equator northward, African Coast – 140°W [74]
Eastern Pacific | United States Central Pacific Hurricane Center | Equator northward, 140–180°W [74]
Western Pacific | Japan Meteorological Agency | Equator – 60°N, 180–100°E [75]
North Indian Ocean | India Meteorological Department | Equator northwards, 100–40°E [76]
Southern Hemisphere
South-West Indian Ocean | Météo-France Reunion | Equator – 40°S, African Coast – 90°E [77]
Australian region | Indonesian Meteorology, Climatology, and Geophysical Agency (BMKG) | Equator – 10°S, 90–141°E [78]
Australian region | Papua New Guinea National Weather Service | Equator – 10°S, 141–160°E [78]
Australian region | Australian Bureau of Meteorology | 10–40°S, 90–160°E [78]
Southern Pacific | Fiji Meteorological Service | Equator – 25°S, 160°E – 120°W [78]
Southern Pacific | Meteorological Service of New Zealand | 25–40°S, 160°E – 120°W [78]
The majority of tropical cyclones each year form in one of seven tropical cyclone basins, which are monitored by a variety of meteorological services and warning centres.[4] Ten of these warning centres worldwide are designated as either a Regional Specialized Meteorological Centre or a Tropical Cyclone Warning Centre by the World Meteorological Organisation's (WMO) tropical cyclone programme.[4] These warning centres issue advisories that provide basic information on a system's present and forecast position, movement, and intensity within their designated areas of responsibility.[4] Meteorological services around the world are generally responsible for issuing warnings for their own country; however, there are exceptions, as the United States National Hurricane Center and the Fiji Meteorological Service issue alerts, watches, and warnings for various island nations in their areas of responsibility.[4][78] The United States Joint Typhoon Warning Center and Fleet Weather Center also publicly issue warnings about tropical cyclones on behalf of the United States Government.[4] The Brazilian Navy Hydrographic Center names South Atlantic tropical cyclones; however, the South Atlantic is not a major basin and is not an official basin according to the WMO.[115]
Main articles: Tropical cyclone preparedness and Tropical cyclone engineering
Ahead of the formal start of the season, people are urged by politicians and weather forecasters, amongst others, to prepare for the effects of a tropical cyclone. They prepare by determining their risk from the different types of weather tropical cyclones cause, checking their insurance coverage and emergency supplies, and determining where to evacuate to if needed.[116][117][118] When a tropical cyclone develops and is forecast to impact land, each member nation of the World Meteorological Organization issues various watches and warnings to cover the expected impacts.[119] However, there are some exceptions, with the United States National Hurricane Center and the Fiji Meteorological Service responsible for issuing or recommending warnings for other nations in their areas of responsibility.[120][121][122]: 2–4
Main articles: Effects of tropical cyclones and Tropical cyclone effects by region
Natural phenomena caused or worsened by tropical cyclones
Tropical cyclones out at sea cause large waves, heavy rain, floods and high winds, disrupting international shipping and, at times, causing shipwrecks.[123] Tropical cyclones stir up water, leaving a cool wake behind them, which causes the region to be less favorable for subsequent tropical cyclones.[18] On land, strong winds can damage or destroy vehicles, buildings, bridges, and other outside objects, turning loose debris into deadly flying projectiles. The storm surge, or the increase in sea level due to the cyclone, is typically the worst effect from landfalling tropical cyclones, historically resulting in 90% of tropical cyclone deaths.[124] Cyclone Mahina produced the highest storm surge on record, 13 m (43 ft), at Bathurst Bay, Queensland, Australia, in March 1899.[125] Other ocean-based hazards that tropical cyclones produce are rip currents and undertow. These hazards can occur hundreds of kilometers (hundreds of miles) away from the center of a cyclone, even if other weather conditions are favorable.[126][127] The broad rotation of a landfalling tropical cyclone, and vertical wind shear at its periphery, spawn tornadoes. Tornadoes can also be spawned as a result of eyewall mesovortices, which persist until landfall.[128] Hurricane Ivan produced 120 tornadoes, more than any other tropical cyclone.[129] Lightning activity is produced within tropical cyclones; this activity is more intense within stronger storms and closer to and within the storm's eyewall.[130][131] Tropical cyclones can increase the amount of snowfall a region experiences by delivering additional moisture.[132] Wildfires can be worsened when a nearby storm fans their flames with its strong winds.[133][134]
Impact on property and human life
Aftermath of Hurricane Ike in Bolivar Peninsula, Texas
The number of $1 billion Atlantic hurricanes almost doubled from the 1980s to the 2010s, and inflation-adjusted costs have increased more than elevenfold.[135] The increases have been attributed to climate change and to greater numbers of people moving to coastal areas.[135]
Tropical cyclones regularly affect the coastlines of most of Earth's major bodies of water along the Atlantic, Pacific, and Indian oceans. Tropical cyclones have caused significant destruction and loss of human life, resulting in about 2 million deaths since the 19th century.[136] Large areas of standing water caused by flooding lead to infection, as well as contributing to mosquito-borne illnesses. Crowded evacuees in shelters increase the risk of disease propagation.[124] Tropical cyclones significantly interrupt infrastructure, leading to power outages, bridge and road destruction, and the hampering of reconstruction efforts.[124][137][138] Winds and water from storms can damage or destroy homes, buildings, and other manmade structures.[139][140] Tropical cyclones destroy agriculture, kill livestock, and prevent access to marketplaces for both buyers and sellers, all of which result in financial losses.[141][142][143] Powerful cyclones that make landfall – moving from the ocean to over land – are some of the most impactful, although that is not always the case. An average of 86 tropical cyclones of tropical storm intensity form annually worldwide, with 47 reaching hurricane or typhoon strength, and 20 becoming intense tropical cyclones, super typhoons, or major hurricanes (at least of Category 3 intensity).[144]
In Africa, tropical cyclones can originate from tropical waves generated over the Sahara Desert,[145] or otherwise strike the Horn of Africa and Southern Africa.[146][147] Cyclone Idai in March 2019 hit central Mozambique, becoming the deadliest tropical cyclone on record in Africa, with 1,302 fatalities, and damage estimated at US$2.2 billion.[148][149] Réunion island, located east of Southern Africa, experiences some of the wettest tropical cyclones on record. In January 1980, Cyclone Hyacinthe produced 6,083 mm (239.5 in) of rain over 15 days, the largest rainfall total ever recorded from a tropical cyclone.[150][151][152] In Asia, tropical cyclones from the Indian and Pacific oceans regularly affect some of the most populated countries on Earth. In 1970, a cyclone struck Bangladesh, then known as East Pakistan, producing a 6.1 m (20 ft) storm surge that killed at least 300,000 people; this made it the deadliest tropical cyclone on record.[153] In October 2019, Typhoon Hagibis struck the Japanese island of Honshu and inflicted US$15 billion in damage, making it the costliest storm on record in Japan.[154] The islands that comprise Oceania, from Australia to French Polynesia, are routinely affected by tropical cyclones.[155][156][157] In Indonesia, a cyclone struck the island of Flores in April 1973, killing 1,653 people, making it the deadliest tropical cyclone recorded in the Southern Hemisphere.[158][159]
Atlantic and Pacific hurricanes regularly affect North America. In the United States, hurricanes Katrina in 2005 and Harvey in 2017 are the country's costliest ever natural disasters, with monetary damage estimated at US$125 billion. Katrina struck Louisiana and largely destroyed the city of New Orleans,[160][161] while Harvey caused significant flooding in southeastern Texas after it dropped 60.58 in (1,539 mm) of rainfall; this was the highest rainfall total on record in the country.[161] Europe is rarely affected by tropical cyclones; however, the continent regularly encounters storms after they have transitioned into extratropical cyclones. Only one tropical depression – Vince in 2005 – struck Spain,[162] and only one subtropical cyclone – Subtropical Storm Alpha in 2020 – struck Portugal.[163] Occasionally, there are tropical-like cyclones in the Mediterranean Sea.[164] The northern portion of South America experiences occasional tropical cyclones, with 173 fatalities from Tropical Storm Bret in August 1993.[165][166] The South Atlantic Ocean is generally inhospitable to the formation of a tropical storm.[167] However, in March 2004, Hurricane Catarina struck southeastern Brazil as the first hurricane on record in the South Atlantic Ocean.[168]
Although cyclones take an enormous toll in lives and personal property, they may be important factors in the precipitation regimes of places they impact, as they may bring much-needed precipitation to otherwise dry regions.[169] Their precipitation may also alleviate drought conditions by restoring soil moisture, though one study focused on the Southeastern United States suggested tropical cyclones did not offer significant drought recovery.[170][171][172] Tropical cyclones also help maintain the global heat balance by moving warm, moist tropical air to the middle latitudes and polar regions,[173] and by regulating the thermohaline circulation through upwelling.[174] The storm surge and winds of hurricanes may be destructive to human-made structures, but they also stir up the waters of coastal estuaries, which are typically important fish breeding locales.[175] Ecosystems, such as saltmarshes and mangrove forests, can be severely damaged or destroyed by tropical cyclones, which erode land and destroy vegetation.[176][177] Tropical cyclones can cause harmful algal blooms to form in bodies of water by increasing the amount of nutrients available.[178][179][180] Insect populations can decrease in both quantity and diversity after the passage of storms.[181] Strong winds associated with tropical cyclones and their remnants are capable of felling thousands of trees, causing damage to forests.[182]
When hurricanes surge upon shore from the ocean, salt is introduced to many freshwater areas and raises the salinity levels too high for some habitats to withstand. Some are able to cope with the salt and recycle it back into the ocean, but others can not release the extra surface water quickly enough or do not have a large enough freshwater source to replace it. Because of this, some species of plants and vegetation die due to the excess salt.[183] In addition, hurricanes can carry toxins and acids onshore when they make landfall. The floodwater can pick up the toxins from different spills and contaminate the land that it passes over. These toxins are harmful to the people and animals in the area, as well as the environment around them.[184] Tropical cyclones can cause oil spills by damaging or destroying pipelines and storage facilities.[185][178][186] Similarly, chemical spills have been reported when chemical and processing facilities were damaged.[186][187][188] Waterways have become contaminated with toxic levels of metals such as nickel, chromium, and mercury during tropical cyclones.[189][190]
Tropical cyclones can have an extensive effect on geography, such as creating or destroying land.[191][192] Cyclone Bebe increased the size of Tuvalu island, Funafuti Atoll, by nearly 20%.[191][193][194] Hurricane Walaka destroyed the small East Island in 2018,[192][195] eliminating habitat for the endangered Hawaiian monk seal as well as for threatened sea turtles and seabirds.[196] Landslides frequently occur during tropical cyclones and can vastly alter landscapes; some storms are capable of causing hundreds to tens of thousands of landslides.[197][198][199][200] Storms can erode coastlines over an extensive area and transport the sediment to other locations.[190][201][202]
Main article: Tropical cyclone response
Relief efforts for Hurricane Dorian in the Bahamas
Hurricane response is the disaster response after a hurricane. Activities performed by hurricane responders include assessment, restoration, and demolition of buildings; removal of debris and waste; repairs to land-based and maritime infrastructure; and public health services including search and rescue operations.[203] Hurricane response requires coordination between federal, tribal, state, local, and private entities.[204] According to the National Voluntary Organizations Active in Disaster, potential response volunteers should affiliate with established organizations and should not self-deploy, so that proper training and support can be provided to mitigate the danger and stress of response work.[205]
Hurricane responders face many hazards. Hurricane responders may be exposed to chemical and biological contaminants including stored chemicals, sewage, human remains, and mold growth encouraged by flooding,[206][207][208] as well as asbestos and lead that may be present in older buildings.[207][209] Common injuries arise from falls from heights, such as from a ladder or from level surfaces; from electrocution in flooded areas, including from backfeed from portable generators; or from motor vehicle accidents.[206][209][210] Long and irregular shifts may lead to sleep deprivation and fatigue, increasing the risk of injuries, and workers may experience mental stress associated with a traumatic incident. Additionally, heat stress is a concern as workers are often exposed to hot and humid conditions, wear protective clothing and equipment, and have physically difficult tasks.[206][209]
Main article: Tropical cyclones by year
Tropical cyclones have occurred around the world for millennia. Reanalyses and research are being undertaken to extend the historical record through the use of proxy data such as overwash deposits, beach ridges and historical documents such as diaries.[3] Major tropical cyclones leave traces in overwash records and shell layers in some coastal areas, which have been used to gain insight into hurricane activity over the past thousands of years.[211] Sediment records in Western Australia suggest an intense tropical cyclone in the 4th millennium BC.[3] Proxy records based on paleotempestological research have revealed that major hurricane activity along the Gulf of Mexico coast varies on timescales of centuries to millennia.[212][213] In the year 957, a powerful typhoon struck southern China, killing around 10,000 people due to flooding.[214] Spanish colonial records from Mexico described "tempestades" in 1730,[215] although the official record for Pacific hurricanes only dates to 1949.[216] In the south-west Indian Ocean, the tropical cyclone record goes back to 1848.[217] In 2003, the Atlantic hurricane reanalysis project examined and analyzed the historical record of tropical cyclones in the Atlantic back to 1851, extending the existing database from 1886.[218]
Before satellite imagery became available during the 20th century, many of these systems went undetected unless they impacted land or a ship encountered them by chance.[4] Often in part because of the threat of hurricanes, many coastal regions had sparse population between major ports until the advent of automobile tourism; therefore, the most severe portions of hurricanes striking the coast may have gone unmeasured in some instances. The combined effects of ship destruction and remote landfall severely limit the number of intense hurricanes in the official record before the era of hurricane reconnaissance aircraft and satellite meteorology. As a result, although the record shows a distinct increase in the number and strength of intense hurricanes, experts regard the early data as suspect.[219] The ability of climatologists to make a long-term analysis of tropical cyclones is limited by the amount of reliable historical data.[220] Routine aircraft reconnaissance started in both the Atlantic and Western Pacific basins during the mid-1940s, which provided ground-truth data; however, early flights were only made once or twice a day.[4] Polar-orbiting weather satellites were first launched by the United States National Aeronautics and Space Administration in 1960 but were not declared operational until 1965.[4] However, it took several years for some of the warning centres to take advantage of this new viewing platform and develop the expertise to associate satellite signatures with storm position and intensity.[4]
Each year on average, around 80 to 90 named tropical cyclones form around the world, of which over half develop hurricane-force winds of 65 kn (120 km/h; 75 mph) or more.[4] Worldwide, tropical cyclone activity peaks in late summer, when the difference between temperatures aloft and sea surface temperatures is the greatest. However, each particular basin has its own seasonal patterns. On a worldwide scale, May is the least active month, while September is the most active month. November is the only month in which all the tropical cyclone basins are in season.[221] In the Northern Atlantic Ocean, a distinct cyclone season occurs from June 1 to November 30, sharply peaking from late August through September.[221] The statistical peak of the Atlantic hurricane season is September 10. The Northeast Pacific Ocean has a broader period of activity, but in a similar time frame to the Atlantic.[222] The Northwest Pacific sees tropical cyclones year-round, with a minimum in February and March and a peak in early September.[221] In the North Indian basin, storms are most common from April to December, with peaks in May and November.[221] In the Southern Hemisphere, the tropical cyclone year begins on July 1 and runs all year-round encompassing the tropical cyclone seasons, which run from November 1 until the end of April, with peaks in mid-February to early March.[221][78]
Of various modes of variability in the climate system, El Niño–Southern Oscillation has the largest impact on tropical cyclone activity.[223] Most tropical cyclones form on the side of the subtropical ridge closer to the equator, then move poleward past the ridge axis before recurving into the main belt of the Westerlies.[224] When the subtropical ridge position shifts due to El Niño, so will the preferred tropical cyclone tracks. Areas west of Japan and Korea tend to experience much fewer September–November tropical cyclone impacts during El Niño and neutral years.[225] During La Niña years, the formation of tropical cyclones, along with the subtropical ridge position, shifts westward across the western Pacific Ocean, which increases the landfall threat to China and results in much greater intensity for storms affecting the Philippines.[225] The Atlantic Ocean experiences depressed activity due to increased vertical wind shear across the region during El Niño years.[226] Tropical cyclones are further influenced by the Atlantic Meridional Mode, the Quasi-biennial oscillation and the Madden–Julian oscillation.[223][227]
Season lengths and averages
Basin | Season start | Season end | Average per season
North Atlantic | June 1 | November 30 | 14.4 [228]
Eastern Pacific | May 15 | November 30 | 16.6 [228]
Western Pacific | January 1 | December 31 | 26.0 [228]
North Indian | January 1 | December 31 | 12 [229]
South-West Indian | July 1 | June 30 | 9.3 [228][77]
Australian region | November 1 | April 30 | 11.0 [230]
Southern Pacific | November 1 | April 30 | 7.1 [231]
Total | | | 96.4
Influence of climate change
Main article: Tropical cyclones and climate change
The 20-year average of the number of annual Category 4 and 5 hurricanes in the Atlantic region has approximately doubled since the year 2000.[232]
Climate change can affect tropical cyclones in a variety of ways: an intensification of rainfall and wind speed, a decrease in overall frequency, an increase in the frequency of very intense storms and a poleward extension of where the cyclones reach maximum intensity are among the possible consequences of human-induced climate change.[233] Tropical cyclones use warm, moist air as their fuel. As climate change is warming ocean temperatures, there is potentially more of this fuel available.[234] Between 1979 and 2017, there was a global increase in the proportion of tropical cyclones of Category 3 and higher on the Saffir–Simpson scale. The trend was most clear in the North Atlantic and in the Southern Indian Ocean. In the North Pacific, tropical cyclones have been moving poleward into colder waters and there was no increase in intensity over this period.[235] With 2 °C (3.6 °F) warming, a greater percentage (+13%) of tropical cyclones are expected to reach Category 4 and 5 strength.[233] A 2019 study indicates that climate change has been driving the observed trend of rapid intensification of tropical cyclones in the Atlantic basin. Rapidly intensifying cyclones are hard to forecast and therefore pose additional risk to coastal communities.[236]
Warmer air can hold more water vapor: the theoretical maximum water vapor content is given by the Clausius–Clapeyron relation, which yields an ≈7% increase in water vapor in the atmosphere per 1 °C (1.8 °F) of warming.[237][238] All models that were assessed in a 2019 review paper show a future increase in rainfall rates.[233] Additional sea level rise will increase storm surge levels.[239][240] It is plausible that extreme wind waves will increase as a consequence of changes in tropical cyclones, further exacerbating storm surge dangers to coastal communities.[241] The compounding effects from floods, storm surge, and terrestrial flooding (rivers) are projected to increase due to global warming.[240]
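The ≈7% figure can be recovered from the Clausius–Clapeyron relation for the saturation vapor pressure e_s. Using standard textbook values for the latent heat of vaporization L_v and the specific gas constant of water vapor R_v (neither is given in this article), a rough evaluation near typical surface temperatures is

\frac{d\ln e_s}{dT} = \frac{L_v}{R_v T^{2}} \approx \frac{2.5\times 10^{6}\ \mathrm{J\,kg^{-1}}}{\left(461\ \mathrm{J\,kg^{-1}\,K^{-1}}\right)\left(288\ \mathrm{K}\right)^{2}} \approx 0.065\ \mathrm{K^{-1}},

i.e. roughly a 6–7% increase in saturation vapor pressure per 1 °C (1.8 °F) of warming, consistent with the figure quoted above.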
There is currently no consensus on how climate change will affect the overall frequency of tropical cyclones.[233] A majority of climate models show a decreased frequency in future projections.[241] For instance, a 2020 paper comparing nine high-resolution climate models found robust decreases in frequency in the Southern Indian Ocean and the Southern Hemisphere more generally, while finding mixed signals for Northern Hemisphere tropical cyclones.[242] Observations have shown little change in the overall frequency of tropical cyclones worldwide,[243] with increased frequency in the North Atlantic and central Pacific, and significant decreases in the southern Indian Ocean and western North Pacific.[244] There has been a poleward expansion of the latitude at which the maximum intensity of tropical cyclones occurs, which may be associated with climate change.[245] In the North Pacific, there may also have been an eastward expansion.[239] Between 1949 and 2016, there was a slowdown in tropical cyclone translation speeds. It is still unclear to what extent this can be attributed to climate change: climate models do not all show this feature.[241]
Observation and forecasting
Main article: Tropical cyclone observation
Sunset view of Hurricane Isidore's rainbands photographed at 2,100 m (7,000 ft)
"Hurricane Hunter" – WP-3D Orion is used to go into the eye of a hurricane for data collection and measurements purposes.
Intense tropical cyclones pose a particular observation challenge, as they are a dangerous oceanic phenomenon, and weather stations, being relatively sparse, are rarely available on the site of the storm itself. In general, surface observations are available only if the storm is passing over an island or a coastal area, or if there is a nearby ship. Real-time measurements are usually taken in the periphery of the cyclone, where conditions are less catastrophic and its true strength cannot be evaluated. For this reason, there are teams of meteorologists that move into the path of tropical cyclones to help evaluate their strength at the point of landfall.[246]
Tropical cyclones are tracked by weather satellites capturing visible and infrared images from space, usually at half-hour to quarter-hour intervals. As a storm approaches land, it can be observed by land-based Doppler weather radar. Radar plays a crucial role around landfall by showing a storm's location and intensity every several minutes.[247] Other satellites provide information from the perturbations of GPS signals, providing thousands of snapshots per day and capturing atmospheric temperature, pressure, and moisture content.[248]
In situ measurements, in real-time, can be taken by sending specially equipped reconnaissance flights into the cyclone. In the Atlantic basin, these flights are regularly flown by United States government hurricane hunters.[249] These aircraft fly directly into the cyclone and take direct and remote-sensing measurements. The aircraft also launch GPS dropsondes inside the cyclone. These sondes measure temperature, humidity, pressure, and especially winds between flight level and the ocean's surface. A new era in hurricane observation began when a remotely piloted Aerosonde, a small drone aircraft, was flown through Tropical Storm Ophelia as it passed Virginia's eastern shore during the 2005 hurricane season. A similar mission was also completed successfully in the western Pacific Ocean.[250]
A general decrease in error trends in tropical cyclone path prediction is evident since the 1970s
See also: Tropical cyclone track forecasting, Tropical cyclone prediction model, and Tropical cyclone rainfall forecasting
High-speed computers and sophisticated simulation software allow forecasters to produce computer models that predict tropical cyclone tracks based on the future position and strength of high- and low-pressure systems. Combining forecast models with increased understanding of the forces that act on tropical cyclones, as well as with a wealth of data from Earth-orbiting satellites and other sensors, scientists have increased the accuracy of track forecasts over recent decades.[251] However, scientists are not as skillful at predicting the intensity of tropical cyclones.[252] The lack of improvement in intensity forecasting is attributed to the complexity of tropical systems and an incomplete understanding of factors that affect their development. New tropical cyclone position and forecast information is available at least every six hours from the various warning centers.[253][254][255][256][257]
Geopotential height
Main article: Geopotential height
In meteorology, geopotential heights are used when creating forecasts and analyzing pressure systems. Geopotential heights represent the estimate of the real height of a pressure system above the average sea level.[258] Geopotential heights used for weather analysis are divided into several standard levels. The lowest geopotential height level is 850 hPa (25.10 inHg), which represents the lowest 1,500 m (5,000 ft) of the atmosphere. The moisture content, obtained from either the relative humidity or the precipitable water value, is used in creating forecasts for precipitation.[259] The next level, 700 hPa (20.67 inHg), is at a height of 2,300–3,200 m (7,700–10,500 ft); 700 hPa is regarded as the highest point in the lower atmosphere. At this layer, both vertical movement and moisture levels are used to locate and create forecasts for precipitation.[260] The middle level of the atmosphere is at 500 hPa (14.76 inHg), or a height of 4,900–6,100 m (16,000–20,000 ft). The 500 hPa level is used for measuring atmospheric vorticity, commonly known as the spin of air. The relative humidity is also analyzed at this height in order to establish where precipitation is likely to materialize.[261] The next level occurs at 300 hPa (8.859 inHg), or a height of 8,200–9,800 m (27,000–32,000 ft).[262] The top-most level is located at 200 hPa (5.906 inHg), which corresponds to a height of 11,000–12,000 m (35,000–41,000 ft). Both the 200 and 300 hPa levels are mainly used to locate the jet stream.[263]
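The heights quoted for each pressure level can be approximated with the standard hypsometric equation, which relates the height of a pressure surface to the mean temperature of the air column beneath it. The Python sketch below uses textbook constants and an assumed column-mean temperature, neither of which is given in this article, so the results are rough estimates rather than the quoted ranges.

```python
# Minimal sketch: estimate the geopotential height of a pressure level from the
# hypsometric equation Z = (R_d * T_mean / g) * ln(p0 / p).
# R_d, g, the surface pressure, and the column-mean temperature are assumed values.

import math

R_D = 287.0   # specific gas constant for dry air, J/(kg*K)
G = 9.81      # gravitational acceleration, m/s^2

def pressure_level_height(p_hpa, p0_hpa=1013.25, mean_temp_k=270.0):
    """Approximate geopotential height (m) of a pressure level above sea level."""
    return (R_D * mean_temp_k / G) * math.log(p0_hpa / p_hpa)

for level_hpa in (850, 700, 500, 300, 200):
    print(level_hpa, "hPa ->", round(pressure_level_height(level_hpa)), "m")
```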
Related cyclone types
See also: Cyclone, Extratropical cyclone, and Subtropical cyclone
In addition to tropical cyclones, there are two other classes of cyclones within the spectrum of cyclone types. These kinds of cyclones, known as extratropical cyclones and subtropical cyclones, can be stages a tropical cyclone passes through during its formation or dissipation.[264] An extratropical cyclone is a storm that derives energy from horizontal temperature differences, which are typical in higher latitudes. A tropical cyclone can become extratropical as it moves toward higher latitudes if its energy source changes from heat released by condensation to differences in temperature between air masses; although not as frequently, an extratropical cyclone can transform into a subtropical storm, and from there into a tropical cyclone.[265] From space, extratropical storms have a characteristic "comma-shaped" cloud pattern.[266] Extratropical cyclones can also be dangerous when their low-pressure centers cause powerful winds and high seas.[267]
A subtropical cyclone is a weather system that has some characteristics of a tropical cyclone and some characteristics of an extratropical cyclone. They can form in a wide band of latitudes, from the equator to 50°. Although subtropical storms rarely have hurricane-force winds, they may become tropical in nature as their cores warm.[268]
2023 Pacific hurricane season
2023 Pacific typhoon season
2023 North Indian Ocean cyclone season
2022–23 South-West Indian Ocean cyclone season
2022–23 Australian region cyclone season
2022–23 South Pacific cyclone season
^ a b "Glossary of NHC Terms". United States National Hurricane Center. Archived from the original on February 16, 2021. Retrieved February 18, 2021.
^ "Tropical cyclone facts: What is a tropical cyclone?". United Kingdom Met Office. Archived from the original on February 2, 2021. Retrieved February 25, 2021.
^ a b c Nott, Jonathan (March 1, 2011). "A 6000 year tropical cyclone record from Western Australia". Quaternary Science Reviews. 30 (5): 713–722. Bibcode:2011QSRv...30..713N. doi:10.1016/j.quascirev.2010.12.004. ISSN 0277-3791. Archived from the original on December 21, 2020. Retrieved March 13, 2021.
^ a b c d e f g h i j k l m n o p q Global Guide to Tropical Cyclone Forecasting: 2017 (PDF) (Report). World Meteorological Organization. April 17, 2018. Archived (PDF) from the original on July 14, 2019. Retrieved September 6, 2020.
^ Studholme, Joshua; Fedorov, Alexey V.; Gulev, Sergey K.; Emanuel, Kerry; Hodges, Kevin (December 29, 2021). "Poleward expansion of tropical cyclone latitudes in warming climates". Nature Geoscience. 15: 14–28. doi:10.1038/s41561-021-00859-1. S2CID 245540084. Archived from the original on January 4, 2022. Retrieved January 4, 2022.
^ Knapp, Kenneth R.; Knaff, John A.; Sampson, Charles R.; Riggio, Gustavo M.; Schnapp, Adam D. (August 1, 2013). "A Pressure-Based Analysis of the Historical Western North Pacific Tropical Cyclone Intensity Record". Monthly Weather Review. American Meteorological Society. 141 (8): 2611–2631. Bibcode:2013MWRv..141.2611K. doi:10.1175/MWR-D-12-00323.1. S2CID 19031120. Archived from the original on October 7, 2022. Retrieved October 7, 2022.
^ "What is a Tropical Cyclone?". Bureau of Meteorology. Archived from the original on October 3, 2022. Retrieved October 7, 2022.
^ "Saffir-Simpson Hurricane Wind Scale". National Hurricane Center. Archived from the original on June 20, 2020. Retrieved October 7, 2022.
^ Dunnavan, G.M.; Diercks, J.W. (1980). "An Analysis of Super Typhoon Tip (October 1979)". Monthly Weather Review. 108 (11): 1915–1923. Bibcode:1980MWRv..108.1915D. doi:10.1175/1520-0493(1980)108<1915:AAOSTT>2.0.CO;2.
^ Pasch, Richard (October 23, 2015). "Hurricane Patricia Discussion Number 14". National Hurricane Center. Archived from the original on October 25, 2015. Retrieved October 23, 2015. Data from three center fixes by the Hurricane Hunters indicate that the intensity, based on a blend of 700 mb-flight level and SFMR-observed surface winds, is near 175 kt. This makes Patricia the strongest hurricane on record in the National Hurricane Center's area of responsibility (AOR) which includes the Atlantic and the eastern North Pacific basins.
^ Tory, K. J.; Dare, R. A. (October 15, 2015). "Sea Surface Temperature Thresholds for Tropical Cyclone Formation". Journal of Climate. American Meteorological Society. 28 (20): 8171. Bibcode:2015JCli...28.8171T. doi:10.1175/JCLI-D-14-00637.1. Archived from the original on April 28, 2021. Retrieved April 28, 2021.
^ Lavender, Sally; Hoeke, Ron; Abbs, Deborah (March 9, 2018). "The influence of sea surface temperature on the intensity and associated storm surge of tropical cyclone Yasi: a sensitivity study". Natural Hazards and Earth System Sciences. Copernicus Publications. 18 (3): 795–805. Bibcode:2018NHESS..18..795L. doi:10.5194/nhess-18-795-2018. Archived from the original on April 28, 2021. Retrieved April 28, 2021.
^ Xu, Jing; Wang, Yuqing (April 1, 2018). "Dependence of Tropical Cyclone Intensification Rate on Sea SurfaceTemperature, Storm Intensity, and Size in the Western North Pacific". Weather and Forecasting. American Meteorological Society. 33 (2): 523–527. Bibcode:2018WtFor..33..523X. doi:10.1175/WAF-D-17-0095.1. Archived from the original on April 28, 2021. Retrieved April 28, 2021.
^ Brown, Daniel (April 20, 2017). "Tropical Cyclone Intensity Forecasting: Still a Challenging Proposition" (PDF). National Hurricane Center. p. 7. Archived (PDF) from the original on April 27, 2021. Retrieved April 27, 2021.
^ a b Chih, Cheng-Hsiang; Wu, Chun-Chieh (February 1, 2020). "Exploratory Analysis of Upper-Ocean Heat Content and Sea Surface Temperature Underlying Tropical Cyclone Rapid Intensification in the Western North Pacific". Journal of Climate. 33 (3): 1031–1033. Bibcode:2020JCli...33.1031C. doi:10.1175/JCLI-D-19-0305.1. S2CID 210249119. Archived from the original on April 27, 2021. Retrieved April 27, 2021.
^ Lin, I.; Goni, Gustavo; Knaff, John; Forbes, Cristina; Ali, M. (May 31, 2012). "Ocean heat content for tropical cyclone intensity forecasting and its impact on storm surge" (PDF). Journal of the International Society for the Prevention and Mitigation of Natural Hazards. Springer Science+Business Media. 66 (3): 3–4. doi:10.1007/s11069-012-0214-5. ISSN 0921-030X. S2CID 9130662. Archived (PDF) from the original on April 27, 2021. Retrieved April 27, 2021.
^ Hu, Jianyu; Wang, Xiao Hua (September 2016). "Progress on upwelling studies in the China seas". Reviews of Geophysics. AGU. 54 (3): 653–673. Bibcode:2016RvGeo..54..653H. doi:10.1002/2015RG000505. S2CID 132158526. Archived from the original on May 23, 2022. Retrieved May 14, 2022.
^ a b D'Asaro, Eric A. & Black, Peter G. (2006). "J8.4 Turbulence in the Ocean Boundary Layer Below Hurricane Dennis". University of Washington. Archived (PDF) from the original on March 30, 2012. Retrieved February 22, 2008.
^ Fedorov, Alexey V.; Brierley, Christopher M.; Emanuel, Kerry (February 2010). "Tropical cyclones and permanent El Niño in the early Pliocene epoch". Nature. 463 (7284): 1066–1070. Bibcode:2010Natur.463.1066F. doi:10.1038/nature08831. hdl:1721.1/63099. ISSN 0028-0836. PMID 20182509. S2CID 4330367.
^ Stovern, Diana; Ritchie, Elizabeth. "Modeling the Effect of Vertical Wind Shear on Tropical Cyclone Size and Structure" (PDF). American Meteorological Society. pp. 1–2. Archived (PDF) from the original on June 18, 2021. Retrieved April 28, 2021.
^ Wingo, Matthew; Cecil, Daniel (March 1, 2010). "Effects of Vertical Wind Shear on Tropical Cyclone Precipitation". Monthly Weather Review. American Meteorological Society. 138 (3): 645–662. Bibcode:2010MWRv..138..645W. doi:10.1175/2009MWR2921.1. S2CID 73622535. Archived from the original on April 29, 2021. Retrieved April 28, 2021.
^ Liang, Xiuji; Li, Qingqing (March 1, 2021). "Revisiting the response of western North Pacific tropical cyclone intensity change to vertical wind shear in different directions". Atmospheric and Oceanic Science Letters. Science Direct. 14 (3): 100041. doi:10.1016/j.aosl.2021.100041.
^ Shi, Donglei; Ge, Xuyang; Peng, Melinda (September 2019). "Latitudinal dependence of the dry air effect on tropical cyclone development". Dynamics of Atmospheres and Oceans. Science Direct. 87: 101102. Bibcode:2019DyAtO..8701102S. doi:10.1016/j.dynatmoce.2019.101102. S2CID 202123299. Archived from the original on May 14, 2022. Retrieved May 14, 2022.
^ Wang, Shuai; Toumi, Ralf (June 1, 2019). "Impact of Dry Midlevel Air on the Tropical Cyclone Outer Circulation". Journal of the Atmospheric Sciences. American Meteorological Society. 76 (6): 1809–1826. Bibcode:2019JAtS...76.1809W. doi:10.1175/JAS-D-18-0302.1. hdl:10044/1/70065. S2CID 145965553. Archived from the original on May 23, 2022. Retrieved May 14, 2022.
^ Alland, Joshua J.; Tang, Brian H.; Corbosiero, Kristen L.; Bryan, George H. (February 24, 2021). "Combined Effects of Midlevel Dry Air and Vertical Wind Shear on Tropical Cyclone Development. Part II: Radial Ventilation". Journal of the Atmospheric Sciences. American Meteorological Society. 78 (3): 783–796. Bibcode:2021JAtS...78..783A. doi:10.1175/JAS-D-20-0055.1. S2CID 230602004. Archived from the original on May 14, 2022. Retrieved May 14, 2022.
^ Rappin, Eric D.; Morgan, Michael C.; Tripoli, Gregory J. (February 1, 2011). "The Impact of Outflow Environment on Tropical Cyclone Intensification and Structure". Journal of the Atmospheric Sciences. American Meteorological Society. 68 (2): 177–194. Bibcode:2011JAtS...68..177R. doi:10.1175/2009JAS2970.1. S2CID 123508815. Archived from the original on May 14, 2022. Retrieved May 15, 2022.
^ Shi, Donglei; Chen, Guanghua (December 10, 2021). "The Implication of Outflow Structure for the Rapid Intensification of Tropical Cyclones under Vertical Wind Shear". Monthly Weather Review. American Meteorological Society. 149 (12): 4107–4127. Bibcode:2021MWRv..149.4107S. doi:10.1175/MWR-D-21-0141.1. S2CID 244001444. Archived from the original on May 14, 2022. Retrieved May 15, 2022.
^ Ryglicki, David R.; Doyle, James D.; Hodyss, Daniel; Cossuth, Joshua H.; Jin, Yi; Viner, Kevin C.; Schmidt, Jerome M. (August 1, 2019). "The Unexpected Rapid Intensification of Tropical Cyclones in Moderate Vertical Wind Shear. Part III: Outflow–Environment Interaction". Monthly Weather Review. American Meteorological Society. 147 (8): 2919–2940. Bibcode:2019MWRv..147.2919R. doi:10.1175/MWR-D-18-0370.1. S2CID 197485216. Archived from the original on May 14, 2022. Retrieved May 15, 2022.
^ Dai, Yi; Majumdar, Sharanya J.; Nolan, David S. (July 1, 2019). "The Outflow–Rainband Relationship Induced by Environmental Flow around Tropical Cyclones". Journal of the Atmospheric Sciences. American Meteorological Society. 76 (7): 1845–1863. Bibcode:2019JAtS...76.1845D. doi:10.1175/JAS-D-18-0208.1. S2CID 146062929. Archived from the original on May 14, 2022. Retrieved May 15, 2022.
^ Carrasco, Cristina; Landsea, Christopher; Lin, Yuh-Lang (June 1, 2014). "The Influence of Tropical Cyclone Size on Its Intensification". Weather and Forecasting. American Meteorological Society. 29 (3): 582–590. Bibcode:2014WtFor..29..582C. doi:10.1175/WAF-D-13-00092.1. S2CID 18429068. Archived from the original on May 1, 2021. Retrieved May 1, 2021.
^ Lander, Mark; Holland, Greg J. (October 1993). "On the interaction of tropical-cyclone-scale vortices. I: Observations". Quarterly Journal of the Royal Meteorological Society. Royal Meteorological Society. 119 (514): 1347–1361. Bibcode:1993QJRMS.119.1347L. doi:10.1002/qj.49711951406. Archived from the original on June 1, 2022. Retrieved May 20, 2022.
^ Andersen, Theresa K.; Shepherd, J. Marshall (March 21, 2013). "A global spatiotemporal analysis of inland tropical cyclone maintenance or intensification". International Journal of Climatology. Royal Meteorological Society. 34 (2): 391–402. doi:10.1002/joc.3693. S2CID 129080562. Retrieved October 7, 2022.
^ Andersen, Theresa; Shepherd, Marshall (February 17, 2017). "Inland Tropical Cyclones and the "Brown Ocean" Concept". Hurricanes and Climate Change. Springer. pp. 117–134. doi:10.1007/978-3-319-47594-3_5. ISBN 978-3-319-47592-9. Archived from the original on May 15, 2022. Retrieved May 20, 2022.
^ Houze, Robert A. Jr. (January 6, 2012). "Orographic effects on precipitating clouds". Reviews of Geophysics. AGU. 50 (1). Bibcode:2012RvGeo..50.1001H. doi:10.1029/2011RG000365. S2CID 46645620. Archived from the original on May 15, 2022. Retrieved May 20, 2022.
^ Ito, Kosuke; Ichikawa, Hana (August 31, 2020). "Warm ocean accelerating tropical cyclone Hagibis (2019) through interaction with a mid-latitude westerly jet". Scientific Online Letters on the Atmosphere. Meteorological Society of Japan. 17A: 1–6. doi:10.2151/sola.17A-001. S2CID 224874804. Archived from the original on October 7, 2022. Retrieved October 7, 2022.
^ Do, Gunwoo; Kim, Hyeong-Seog (August 18, 2021). "Effect of Mid-Latitude Jet Stream on the Intensity of Tropical Cyclones Affecting Korea: Observational Analysis and Implication from the Numerical Model Experiments of Typhoon Chaba (2016)". Atmosphere. MDPI. 12 (8): 1061. Bibcode:2021Atmos..12.1061D. doi:10.3390/atmos12081061.
^ a b c d e "Tropical cyclone facts: How do tropical cyclones form?". United Kingdom Met Office. Archived from the original on February 2, 2021. Retrieved March 1, 2021.
^ a b Landsea, Chris. "How do tropical cyclones form?". Frequently Asked Questions. Atlantic Oceanographic and Meteorological Laboratory, Hurricane Research Division. Archived from the original on August 27, 2009. Retrieved October 9, 2017.
^ Berg, Robbie. "Tropical cyclone intensity in relation to SST and moisture variability" (PDF). RSMAS (University of Miami). Archived (PDF) from the original on June 10, 2011. Retrieved September 23, 2010.
^ Zhang, Da-Lin; Zhu, Lin (September 12, 2012). "Roles of upper-level processes in tropical cyclogenesis". Geophysical Research Letters. AGU. 39 (17). Bibcode:2012GeoRL..3917804Z. doi:10.1029/2012GL053140. S2CID 53341455. Archived from the original on October 2, 2022. Retrieved October 4, 2022.
^ Chris Landsea (January 4, 2000). "Climate Variability table — Tropical Cyclones". Atlantic Oceanographic and Meteorological Laboratory, National Oceanic and Atmospheric Administration. Archived from the original on October 2, 2012. Retrieved October 19, 2006.
^ Landsea, Christopher. "AOML Climate Variability of Tropical Cyclones paper". Atlantic Oceanographic and Meteorological Laboratory. Archived from the original on October 26, 2021. Retrieved September 23, 2010.
^ Aiyyer, Anantha; Molinari, John (August 1, 2008). "MJO and Tropical Cyclogenesis in the Gulf of Mexico and Eastern Pacific: Case Study and Idealized Numerical Modeling". Journal of the Atmospheric Sciences. American Meteorological Society. 65 (8): 2691–2704. Bibcode:2008JAtS...65.2691A. doi:10.1175/2007JAS2348.1. S2CID 17409876. Archived from the original on October 2, 2022. Retrieved October 5, 2022.
^ Zhao, Chen; Li, Tim (October 20, 2018). "Basin dependence of the MJO modulating tropical cyclone genesis". Climate Dynamics. Springer. 52 (9–10): 6081–6096. doi:10.1007/s00382-018-4502-y. S2CID 134747858. Archived from the original on October 2, 2022. Retrieved October 5, 2022.
^ Camargo, Suzana J.; Emanuel, Kerry A.; Sobel, Adam H. (October 1, 2007). "Use of a Genesis Potential Index to Diagnose ENSO Effects on Tropical Cyclone Genesis". Journal of Climate. American Meteorological Society. 20 (19): 4819–4834. Bibcode:2007JCli...20.4819C. doi:10.1175/JCLI4282.1. S2CID 17340459. Archived from the original on October 2, 2022. Retrieved October 5, 2022.
^ Molinari, John; Lombardo, Kelly; Vollaro, David (April 1, 2007). "Tropical Cyclogenesis within an Equatorial Rossby Wave Packet". Journal of the Atmospheric Sciences. American Meteorological Society. 64 (4): 1301–1317. Bibcode:2007JAtS...64.1301M. doi:10.1175/JAS3902.1. S2CID 12920242. Archived from the original on October 2, 2022. Retrieved October 5, 2022.
^ Li, Tim; Fu, Bing (May 1, 2006). "Tropical Cyclogenesis Associated with Rossby Wave Energy Dispersion of a Preexisting Typhoon. Part I: Satellite Data Analyses". Journal of the Atmospheric Sciences. American Meteorological Society. 63 (5): 1377–1389. Bibcode:2006JAtS...63.1377L. doi:10.1175/JAS3692.1. S2CID 15372289. Archived from the original on October 2, 2022. Retrieved October 5, 2022.
^ Schreck III, Carl J.; Molinari, John (September 1, 2011). "Tropical Cyclogenesis Associated with Kelvin Waves and the Madden–Julian Oscillation". Monthly Weather Review. American Meteorological Society. 139 (9): 2723–2734. Bibcode:2011MWRv..139.2723S. doi:10.1175/MWR-D-10-05060.1. S2CID 16983131. Archived from the original on October 2, 2022. Retrieved October 5, 2022.
^ Schreck III, Carl J. (October 1, 2015). "Kelvin Waves and Tropical Cyclogenesis: A Global Survey". Monthly Weather Review. American Meteorological Society. 143 (10): 3996–4011. Bibcode:2015MWRv..143.3996S. doi:10.1175/MWR-D-15-0111.1. S2CID 118859063. Archived from the original on October 2, 2022. Retrieved October 5, 2022.
^ "Glossary of NHC Terms". United States National Oceanic and Atmospheric Administration's National Hurricane Center. Archived from the original on September 12, 2019. Retrieved June 2, 2019.
^ Oropeza, Fernando; Raga, Graciela B. (January 2015). "Rapid deepening of tropical cyclones in the northeastern Tropical Pacific: The relationship with oceanic eddies". Atmósfera. Science Direct. 28 (1): 27–42. doi:10.1016/S0187-6236(15)72157-0. Archived from the original on May 15, 2022. Retrieved May 15, 2022.
^ Diana Engle. "Hurricane Structure and Energetics". Data Discovery Hurricane Science Center. Archived from the original on May 27, 2008. Retrieved October 26, 2008.
^ Brad Reinhart; Daniel Brown (October 21, 2020). "Hurricane Epsilon Discussion Number 12". nhc.noaa.gov. Miami, Florida: National Hurricane Center. Archived from the original on March 21, 2021. Retrieved February 4, 2021.
^ Cappucci, Matthew (October 21, 2020). "Epsilon shatters records as it rapidly intensifies into major hurricane near Bermuda". The Washington Post. Archived from the original on December 10, 2020. Retrieved February 4, 2021.
^ Lam, Linda (September 4, 2019). "Why the Eastern Caribbean Sea Can Be a 'Hurricane Graveyard'". The Weather Channel. TWC Product and Technology. Archived from the original on July 4, 2021. Retrieved April 6, 2021.
^ Sadler, James C.; Kilonsky, Bernard J. (May 1977). The Regeneration of South China Sea Tropical Cyclones in the Bay of Bengal (PDF) (Report). Monterey, California: Naval Environmental Prediction Research Facility. Archived (PDF) from the original on June 22, 2021. Retrieved April 6, 2021 – via Defense Technical Information Center.
^ Chang, Chih-Pei (2004). East Asian Monsoon. World Scientific. ISBN 978-981-238-769-1. OCLC 61353183. Archived from the original on August 14, 2021. Retrieved November 22, 2020.
^ United States Naval Research Laboratory (September 23, 1999). "Tropical Cyclone Intensity Terminology". Tropical Cyclone Forecasters' Reference Guide. Archived from the original on July 12, 2012. Retrieved November 30, 2006.
^ a b c d "Anatomy and Life Cycle of a Storm: What Is the Life Cycle of a Hurricane and How Do They Move?". United States Hurricane Research Division. 2020. Archived from the original on February 17, 2021. Retrieved February 17, 2021.
^ a b c "Attempts to Stop a Hurricane in its Track: What Else has been Considered to Stop a Hurricane?". United States Hurricane Research Division. 2020. Archived from the original on February 17, 2021. Retrieved February 17, 2021.
^ Knaff, John; Longmore, Scott; DeMaria, Robert; Molenar, Debra (February 1, 2015). "Improved Tropical-Cyclone Flight-Level Wind Estimates Using Routine Infrared Satellite Reconnaissance". Journal of Applied Meteorology and Climatology. American Meteorological Society. 54 (2): 464. Bibcode:2015JApMC..54..463K. doi:10.1175/JAMC-D-14-0112.1. S2CID 17309033. Archived from the original on April 24, 2021. Retrieved April 23, 2021.
^ Knaff, John; Reed, Kevin; Chavas, Daniel (November 8, 2017). "Physical understanding of the tropical cyclone wind-pressure relationship". Nature Communications. 8 (1360): 1360. Bibcode:2017NatCo...8.1360C. doi:10.1038/s41467-017-01546-9. PMC 5678138. PMID 29118342.
^ a b Kueh, Mien-Tze (May 16, 2012). "Multiformity of the tropical cyclone wind–pressure relationship in the western North Pacific: discrepancies among four best-track archives". Environmental Research Letters. IOP Publishing. 7 (2): 2–6. Bibcode:2012ERL.....7b4015K. doi:10.1088/1748-9326/7/2/024015.
^ Meissner, Thomas; Ricciardulli, L.; Wentz, F.; Sampson, C. (April 18, 2018). "Intensity and Size of Strong Tropical Cyclones in 2017 from NASA's SMAP L-Band Radiometer". American Meteorological Society. Archived from the original on April 21, 2021. Retrieved April 21, 2021.
^ DeMaria, Mark; Knaff, John; Zehr, Raymond (2013). Satellite-based Applications on Climate Change (PDF). Springer. pp. 152–154. Archived (PDF) from the original on April 22, 2021. Retrieved April 21, 2021.
^ Olander, Timothy; Velden, Christopher (August 1, 2019). "The Advanced Dvorak Technique (ADT) for Estimating Tropical Cyclone Intensity: Update and New Capabilities". Weather and Forecasting. American Meteorological Society. 34 (4): 905–907. Bibcode:2019WtFor..34..905O. doi:10.1175/WAF-D-19-0007.1. Archived from the original on April 21, 2021. Retrieved April 21, 2021.
^ Velden, Christopher; Herndon, Derrick (July 21, 2020). "A Consensus Approach for Estimating Tropical Cyclone Intensity from Meteorological Satellites: SATCON". Weather and Forecasting. American Meteorological Society. 35 (4): 1645–1650. Bibcode:2020WtFor..35.1645V. doi:10.1175/WAF-D-20-0015.1. Archived from the original on April 21, 2021. Retrieved April 21, 2021.
^ Chen, Buo-Fu; Chen, Boyo; Lin, Hsuan-Tien; Elsberry, Russell (April 2019). "Estimating tropical cyclone intensity by satellite imagery utilizing convolutional neural networks". Weather and Forecasting. American Meteorological Society. 34 (2): 448. Bibcode:2019WtFor..34..447C. doi:10.1175/WAF-D-18-0136.1. Archived from the original on April 21, 2021. Retrieved April 21, 2021.
^ Davis, Kyle; Zeng, Xubin (February 1, 2019). "Seasonal Prediction of North Atlantic Accumulated Cyclone Energy and Major Hurricane Activity". Weather and Forecasting. American Meteorological Society. 34 (1): 221–232. Bibcode:2019WtFor..34..221D. doi:10.1175/WAF-D-18-0125.1. hdl:10150/632896. S2CID 128293725.
^ Villarini, Gabriele; Vecchi, Gabriel A (January 15, 2012). "North Atlantic Power Dissipation Index (PDI) and Accumulated Cyclone Energy (ACE): Statistical Modeling and Sensitivity to Sea Surface Temperature Changes". Journal of Climate. American Meteorological Society. 25 (2): 625–637. Bibcode:2012JCli...25..625V. doi:10.1175/JCLI-D-11-00146.1. S2CID 129106927.
^ Islam, Md. Rezuanal; Lee, Chia-Ying; Mandli, Kyle T.; Takagi, Hiroshi (August 18, 2021). "A new tropical cyclone surge index incorporating the effects of coastal geometry, bathymetry and storm information". Scientific Reports. 11 (1): 16747. Bibcode:2021NatSR..1116747I. doi:10.1038/s41598-021-95825-7. PMC 8373937. PMID 34408207.
^ a b Rezapour, Mehdi; Baldock, Tom E. (December 1, 2014). "Classification of Hurricane Hazards: The Importance of Rainfall". Weather and Forecasting. American Meteorological Society. 29 (6): 1319–1331. Bibcode:2014WtFor..29.1319R. doi:10.1175/WAF-D-14-00014.1. S2CID 121762550.
^ Kozar, Michael E; Misra, Vasubandhu (February 16, 2019). "3". Hurricane Risk. Springer. pp. 43–69. doi:10.1007/978-3-030-02402-4_3. ISBN 978-3-030-02402-4. S2CID 133717045.
^ a b c d e RA IV Hurricane Committee. Regional Association IV Hurricane Operational Plan 2019 (PDF) (Report). World Meteorological Organization. Archived (PDF) from the original on July 2, 2019. Retrieved July 2, 2019.
^ a b c WMO/ESCP Typhoon Committee (March 13, 2015). Typhoon Committee Operational Manual Meteorological Component 2015 (PDF) (Report No. TCP-23). World Meteorological Organization. pp. 40–41. Archived (PDF) from the original on October 1, 2015. Retrieved March 28, 2015.
^ a b c WMO/ESCAP Panel on Tropical Cyclones (November 2, 2018). Tropical Cyclone Operational Plan for the Bay of Bengal and the Arabian Sea 2018 (PDF) (Report No. TCP-21). World Meteorological Organization. pp. 11–12. Archived (PDF) from the original on July 2, 2019. Retrieved July 2, 2019.
^ a b c d e RA I Tropical Cyclone Committee (November 9, 2012). Tropical Cyclone Operational Plan for the South-West Indian Ocean: 2012 (PDF) (Report No. TCP-12). World Meteorological Organization. pp. 11–14. Archived (PDF) from the original on March 29, 2015. Retrieved March 29, 2015.
^ a b c d e f g h i j k RA V Tropical Cyclone Committee (October 31, 2022). Tropical Cyclone Operational Plan for the South-East Indian Ocean and the Southern Pacific Ocean 2022 (PDF) (Report). World Meteorological Organization. pp. I-4–II-9 (9–21). Retrieved November 4, 2022.
^ a b c d e f Smith, Ray (1990). "What's in a Name?" (PDF). Weather and Climate. The Meteorological Society of New Zealand. 10 (1): 24–26. doi:10.2307/44279572. JSTOR 44279572. S2CID 201717866. Archived from the original (PDF) on November 29, 2014. Retrieved August 25, 2014.
^ a b c d e Dorst, Neal M (October 23, 2012). "They Called the Wind Mahina: The History of Naming Cyclones". Hurricane Research Division, Atlantic Oceanographic and Meteorological Laboratory. National Oceanic and Atmospheric Administration. p. Slides 8–72.
^ Office of the Federal Coordinator for Meteorological Services and Supporting Research (May 2017). National Hurricane Operations Plan (PDF) (Report). National Oceanic and Atmospheric Administration. pp. 26–28. Archived (PDF) from the original on October 15, 2018. Retrieved October 14, 2018.
^ a b National Weather Service (October 19, 2005). "Tropical Cyclone Structure". JetStream – An Online School for Weather. National Oceanic & Atmospheric Administration. Archived from the original on December 7, 2013. Retrieved May 7, 2009.
^ Pasch, Richard J.; Eric S. Blake; Hugh D. Cobb III; David P. Roberts (September 28, 2006). "Tropical Cyclone Report: Hurricane Wilma: 15–25 October 2005" (PDF). National Hurricane Center. Archived (PDF) from the original on March 4, 2016. Retrieved December 14, 2006.
^ a b c Annamalai, H.; Slingo, J.M.; Sperber, K.R.; Hodges, K. (1999). "The Mean Evolution and Variability of the Asian Summer Monsoon: Comparison of ECMWF and NCEP–NCAR Reanalyses". Monthly Weather Review. 127 (6): 1157–1186. Bibcode:1999MWRv..127.1157A. doi:10.1175/1520-0493(1999)127<1157:TMEAVO>2.0.CO;2. Archived from the original on August 1, 2020. Retrieved December 12, 2019.
^ American Meteorological Society. "AMS Glossary: C". Glossary of Meteorology. Allen Press. Archived from the original on January 26, 2011. Retrieved December 14, 2006.
^ Atlantic Oceanographic and Hurricane Research Division. "Frequently Asked Questions: What are "concentric eyewall cycles" (or "eyewall replacement cycles") and why do they cause a hurricane's maximum winds to weaken?". National Oceanic and Atmospheric Administration. Archived from the original on December 6, 2006. Retrieved December 14, 2006.
^ "Global Guide to Tropical Cyclone Forecasting: chapter 2: Tropical Cyclone Structure". Bureau of Meteorology. May 7, 2009. Archived from the original on June 1, 2011. Retrieved May 6, 2009.
^ a b Chavas, D.R.; Emanuel, K.A. (2010). "A QuikSCAT climatology of tropical cyclone size". Geophysical Research Letters. 37 (18): n/a. Bibcode:2010GeoRL..3718816C. doi:10.1029/2010GL044558. hdl:1721.1/64407. S2CID 16166641.
^ a b "Q: What is the average size of a tropical cyclone?". Joint Typhoon Warning Center. 2009. Archived from the original on October 4, 2013. Retrieved May 7, 2009.
^ a b Merrill, Robert T (1984). "A comparison of Large and Small Tropical cyclones". Monthly Weather Review. 112 (7): 1408–1418. Bibcode:1984MWRv..112.1408M. doi:10.1175/1520-0493(1984)112<1408:ACOLAS>2.0.CO;2. hdl:10217/200. S2CID 123276607. Archived from the original on May 23, 2022. Retrieved December 12, 2019.
^ Dorst, Neal; Hurricane Research Division (May 29, 2009). "Frequently Asked Questions: Subject: E5) Which are the largest and smallest tropical cyclones on record?". National Oceanic and Atmospheric Administration's Atlantic Oceanographic and Meteorological Laboratory. Archived from the original on December 22, 2008. Retrieved June 12, 2013.
^ Holland, G.J. (1983). "Tropical Cyclone Motion: Environmental Interaction Plus a Beta Effect". Journal of the Atmospheric Sciences. 40 (2): 328–342. Bibcode:1983JAtS...40..328H. doi:10.1175/1520-0469(1983)040<0328:TCMEIP>2.0.CO;2. S2CID 124178238. Archived from the original on May 23, 2022. Retrieved January 10, 2020.
^ Dorst, Neal; Hurricane Research Division (January 26, 2010). "Subject: E6) Frequently Asked Questions: Which tropical cyclone lasted the longest?". National Oceanic and Atmospheric Administration's Atlantic Oceanographic and Meteorological Laboratory. Archived from the original on May 6, 2009. Retrieved June 12, 2013.
^ Dorst, Neal; Delgado, Sandy; Hurricane Research Division (May 20, 2011). "Frequently Asked Questions: Subject: E7) What is the farthest a tropical cyclone has travelled?". National Oceanic and Atmospheric Administration's Atlantic Oceanographic and Meteorological Laboratory. Archived from the original on May 6, 2009. Retrieved June 12, 2013.
^ a b c Galarneau, Thomas J.; Davis, Christopher A. (February 1, 2013). "Diagnosing Forecast Errors in Tropical Cyclone Motion". Monthly Weather Review. American Meteorological Society. 141 (2): 405–430. Bibcode:2013MWRv..141..405G. doi:10.1175/MWR-D-12-00071.1. S2CID 58921153.
^ a b Atlantic Oceanographic and Meteorological Laboratory, Hurricane Research Division. "Frequently Asked Questions: What determines the movement of tropical cyclones?". National Oceanic and Atmospheric Administration. Archived from the original on July 16, 2012. Retrieved July 25, 2006.
^ a b Wu, Chun-Chieh; Emanuel, Kerry A. (January 1, 1995). "Potential vorticity Diagnostics of Hurricane Movement. Part 1: A Case Study of Hurricane Bob (1991)". Monthly Weather Review. American Meteorological Society. 123 (1): 69–92. Bibcode:1995MWRv..123...69W. doi:10.1175/1520-0493(1995)123<0069:PVDOHM>2.0.CO;2.
^ Carr, L. E.; Elsberry, Russell L. (February 15, 1990). "Observational Evidence for Predictions of Tropical Cyclone Propagation Relative to Environmental Steering". Journal of the Atmospheric Sciences. American Meteorological Society. 47 (4): 542–546. Bibcode:1990JAtS...47..542C. doi:10.1175/1520-0469(1990)047<0542:OEFPOT>2.0.CO;2. S2CID 121754290.
^ a b Velden, Christopher S.; Leslie, Lance M. (June 1, 1991). "The Basic Relationship between Tropical Cyclone Intensity and the Depth of the Environmental Steering Layer in the Australian Region". Weather and Forecasting. American Meteorological Society. 6 (2): 244–253. Bibcode:1991WtFor...6..244V. doi:10.1175/1520-0434(1991)006<0244:TBRBTC>2.0.CO;2.
^ Chan, Johnny C.L. (January 2005). "The Physics of Tropical Cyclone Motion". Annual Review of Fluid Mechanics. Annual Reviews. 37 (1): 99–128. Bibcode:2005AnRFM..37...99C. doi:10.1146/annurev.fluid.37.061903.175702.
^ Atlantic Oceanographic and Meteorological Laboratory, Hurricane Research Division. "Frequently Asked Questions: What is an easterly wave?". National Oceanic and Atmospheric Administration. Archived from the original on July 18, 2006. Retrieved July 25, 2006.
^ Avila, L.A.; Pasch, R.J. (1995). "Atlantic Tropical Systems of 1993". Monthly Weather Review. 123 (3): 887–896. Bibcode:1995MWRv..123..887A. doi:10.1175/1520-0493(1995)123<0887:ATSO>2.0.CO;2.
^ DeCaria, Alex (2005). "Lesson 5 – Tropical Cyclones: Climatology". ESCI 344 – Tropical Meteorology. Millersville University. Archived from the original on May 7, 2008. Retrieved February 22, 2008.
^ Carr, Lester E.; Elsberry, Russell L. (February 1, 1995). "Monsoonal Interactions Leading to Sudden Tropical Cyclone Track Changes". Monthly Weather Review. American Meteorological Society. 123 (2): 265–290. Bibcode:1995MWRv..123..265C. doi:10.1175/1520-0493(1995)123<0265:MILTST>2.0.CO;2.
^ a b Wang, Bin; Elsberry, Russell L.; Yuqing, Wang; Liguang, Wu (1998). "Dynamics in Tropical Cyclone Motion: A Review" (PDF). Chinese Journal of the Atmospheric Sciences. Allerton Press. 22 (4): 416–434. Archived (PDF) from the original on June 17, 2021. Retrieved April 6, 2021 – via University of Hawaii.
^ Holland, Greg J. (February 1, 1983). "Tropical Cyclone Motion: Environmental Interaction Plus a Beta Effect". Journal of the Atmospheric Sciences. American Meteorological Society. 40 (2): 328–342. Bibcode:1983JAtS...40..328H. doi:10.1175/1520-0469(1983)040<0328:TCMEIP>2.0.CO;2.
^ Fiorino, Michael; Elsberry, Russell L. (April 1, 1989). "Some Aspects of Vortex Structure Related to Tropical Cyclone Motion". Journal of the Atmospheric Sciences. American Meteorological Society. 46 (7): 975–990. Bibcode:1989JAtS...46..975F. doi:10.1175/1520-0469(1989)046<0975:SAOVSR>2.0.CO;2.
^ Li, Xiaofan; Wang, Bin (March 1, 1994). "Barotropic Dynamics of the Beta Gyres and Beta Drift". Journal of the Atmospheric Sciences. American Meteorological Society. 51 (5): 746–756. Bibcode:1994JAtS...51..746L. doi:10.1175/1520-0469(1994)051<0746:BDOTBG>2.0.CO;2.
^ Willoughby, H. E. (September 1, 1990). "Linear Normal Modes of a Moving, Shallow-Water Barotropic Vortex". Journal of the Atmospheric Sciences. American Meteorological Society. 47 (17): 2141–2148. Bibcode:1990JAtS...47.2141W. doi:10.1175/1520-0469(1990)047<2141:LNMOAM>2.0.CO;2.
^ Hill, Kevin A.; Lackmann, Gary M. (October 1, 2009). "Influence of Environmental Humidity on Tropical Cyclone Size". Monthly Weather Review. American Meteorological Society. 137 (10): 3294–3315. Bibcode:2009MWRv..137.3294H. doi:10.1175/2009MWR2679.1.
^ Sun, Yuan; Zhong, Zhong; Yi, Lan; Li, Tim; Chen, Ming; Wan, Hongchao; Wang, Yuxing; Zhong, Kai (November 27, 2015). "Dependence of the relationship between the tropical cyclone track and western Pacific subtropical high intensity on initial storm size: A numerical investigation". Journal of Geophysical Research: Atmospheres. John Wiley & Sons. 120 (22): 11,451–11,467. doi:10.1002/2015JD023716.
^ "Fujiwhara effect describes a stormy waltz". USA Today. November 9, 2007. Archived from the original on November 5, 2012. Retrieved February 21, 2008.
^ "Section 2: Tropical Cyclone Motion Terminology". United States Naval Research Laboratory. April 10, 2007. Archived from the original on February 12, 2012. Retrieved May 7, 2009.
^ Powell, Jeff; et al. (May 2007). "Hurricane Ioke: 20–27 August 2006". 2006 Tropical Cyclones Central North Pacific. Central Pacific Hurricane Center. Archived from the original on March 6, 2016. Retrieved June 9, 2007.
^ "Normas Da Autoridade Marítima Para As Atividades De Meteorologia Marítima" (PDF) (in Portuguese). Brazilian Navy. 2011. Archived from the original (PDF) on February 6, 2015. Retrieved October 5, 2018.
^ "Hurricane Seasonal Preparedness Digital Toolkit". Ready.gov. February 18, 2021. Archived from the original on March 21, 2021. Retrieved April 6, 2021.
^ Gray, Briony; Weal, Mark; Martin, David (2019). "The Role of Social Networking in Small Island Communities: Lessons from the 2017 Atlantic Hurricane Season". Proceedings of the 52nd Hawaii International Conference on System Sciences. 52nd Hawaii International Conference on System Sciences. University of Hawaii. doi:10.24251/HICSS.2019.338. ISBN 978-0-9981331-2-6.
^ Morrissey, Shirley A.; Reser, Joseph P. (May 1, 2003). "Evaluating the Effectiveness of Psychological Preparedness Advice in Community Cyclone Preparedness Materials". The Australian Journal of Emergency Management. 18 (2): 46–61. Archived from the original on May 23, 2022. Retrieved April 6, 2021.
^ "Tropical Cyclones". World Meteorological Organization. April 8, 2020. Archived from the original on April 15, 2021. Retrieved April 6, 2021.
^ "Fiji Meteorological Services". Ministry of Infrastructure & Meteorological Services. Ministry of Infrastructure & Transport. Archived from the original on August 14, 2021. Retrieved April 6, 2021.
^ "About the National Hurricane Center". Miami, Florida: National Hurricane Center. Archived from the original on October 12, 2020. Retrieved April 6, 2021.
^ Regional Association IV – Hurricane Operational Plan for North America, Central America and the Caribbean (PDF). World Meteorological Organization. 2017. ISBN 9789263111630. Archived from the original on November 14, 2020. Retrieved April 6, 2021.
^ Roth, David & Cobb, Hugh (2001). "Eighteenth Century Virginia Hurricanes". NOAA. Archived from the original on May 1, 2013. Retrieved February 24, 2007.
^ a b c Shultz, J.M.; Russell, J.; Espinel, Z. (2005). "Epidemiology of Tropical Cyclones: The Dynamics of Disaster, Disease, and Development". Epidemiologic Reviews. 27: 21–35. doi:10.1093/epirev/mxi011. PMID 15958424.
^ Nott, Jonathan; Green, Camilla; Townsend, Ian; Callaghan, Jeffrey (July 9, 2014). "The World Record Storm Surge and the Most Intense Southern Hemisphere Tropical Cyclone: New Evidence and Modeling". Bulletin of the American Meteorological Society. 5 (95): 757. Bibcode:2014BAMS...95..757N. doi:10.1175/BAMS-D-12-00233.1.
^ Carey, Wendy; Rogers, Spencer (April 26, 2012). "Rip Currents — Coordinating Coastal Research, Outreach and Forecast Methodologies to Improve Public Safety". Solutions to Coastal Disasters Conference 2005. American Society of Civil Engineers: 285–296. doi:10.1061/40774(176)29. ISBN 9780784407745. Archived from the original on May 26, 2022. Retrieved May 25, 2022.
^ Rappaport, Edward N. (September 1, 2000). "Loss of Life in the United States Associated with Recent Atlantic Tropical Cyclones". Bulletin of the American Meteorological Society. American Meteorological Society. 81 (9): 2065–2074. Bibcode:2000BAMS...81.2065R. doi:10.1175/1520-0477(2000)081<2065:LOLITU>2.3.CO;2. S2CID 120065630. Archived from the original on May 26, 2022. Retrieved May 25, 2022.
^ Atlantic Oceanographic and Meteorological Laboratory, Hurricane Research Division. "Frequently Asked Questions: Are TC tornadoes weaker than midlatitude tornadoes?". National Oceanic and Atmospheric Administration. Archived from the original on September 14, 2009. Retrieved July 25, 2006.
^ Grazulis, Thomas P.; Grazulis, Doris (February 27, 2018). "Top 25 Tornado-Generating Hurricanes". The Tornado Project. St. Johnsbury, Vermont: Environmental Films. Archived from the original on December 12, 2013. Retrieved November 8, 2021.
^ Bovalo, C.; Barthe, C.; Yu, N.; Bègue, N. (July 16, 2014). "Lightning activity within tropical cyclones in the South West Indian Ocean". Journal of Geophysical Research: Atmospheres. AGU. 119 (13): 8231–8244. Bibcode:2014JGRD..119.8231B. doi:10.1002/2014JD021651. S2CID 56304603. Archived from the original on May 22, 2022. Retrieved May 25, 2022.
^ Samsury, Christopher E.; Orville, Richard E. (August 1, 1994). "Cloud-to-Ground Lightning in Tropical Cyclones: A Study of Hurricanes Hugo (1989) and Jerry (1989)". Monthly Weather Review. American Meteorological Society. 122 (8): 1887–1896. Bibcode:1994MWRv..122.1887S. doi:10.1175/1520-0493(1994)122<1887:CTGLIT>2.0.CO;2. Archived from the original on May 25, 2022. Retrieved May 25, 2022.
^ Collier, E.; Sauter, T.; Mölg, T.; Hardy, D. (June 10, 2019). "The Influence of Tropical Cyclones on Circulation, Moisture Transport, and Snow Accumulation at Kilimanjaro During the 2006–2007 Season". JGR Atmospheres. AGU. 124 (13): 6919–6928. Bibcode:2019JGRD..124.6919C. doi:10.1029/2019JD030682. S2CID 197581044. Archived from the original on June 1, 2022. Retrieved May 25, 2022.
^ Osborne, Martin; Malavelle, Florent F.; Adam, Mariana; Buxmann, Joelle; Sugier, Jaqueline; Marenco, Franco (March 20, 2019). "Saharan dust and biomass burning aerosols during ex-hurricane Ophelia: observations from the new UK lidar and sun-photometer network". Atmospheric Chemistry and Physics. Copernicus Publications. 19 (6): 3557–3578. Bibcode:2019ACP....19.3557O. doi:10.5194/acp-19-3557-2019. S2CID 208084167. Archived from the original on January 24, 2022. Retrieved May 25, 2022.
^ Moore, Paul (August 3, 2021). "An analysis of storm Ophelia which struck Ireland on 16 October 2017". Weather. Royal Meteorological Society. 76 (9): 301–306. Bibcode:2021Wthr...76..301M. doi:10.1002/wea.3978. S2CID 238835099. Archived from the original on June 1, 2022. Retrieved May 25, 2022.
^ a b Philbrick, Ian Prasad; Wu, Ashley (December 2, 2022). "Population Growth Is Making Hurricanes More Expensive". The New York Times. Archived from the original on December 6, 2022. Newspaper states data source: NOAA.
^ Haque, Ubydul; Hashizume, Masahiro; Kolivras, Korine N; Overgaard, Hans J; Das, Bivash; Yamamoto, Taro (March 16, 2011). "Reduced death rates from cyclones in Bangladesh: what more needs to be done?". Bulletin of the World Health Organization. Archived from the original on October 5, 2020. Retrieved October 12, 2020.
^ Staff Writer (August 30, 2005). "Hurricane Katrina Situation Report #11" (PDF). Office of Electricity Delivery and Energy Reliability (OE) United States Department of Energy. Archived from the original (PDF) on November 8, 2006. Retrieved February 24, 2007.
^ Adam, Christopher; Bevan, David (December 2020). "Tropical cyclones and post-disaster reconstruction of public infrastructure in developing countries". Economic Modelling. Science Direct. 93: 82–99. doi:10.1016/j.econmod.2020.07.003. S2CID 224926212. Archived from the original on May 26, 2022. Retrieved May 25, 2022.
^ Cuny, Frederick C. (1994). Abrams, Susan (ed.). Disasters and Development (PDF). INTERTECT Press. p. 45. ISBN 0-19-503292-6. Archived (PDF) from the original on May 26, 2022. Retrieved May 25, 2022.
^ Le Dé, Loïc; Rey, Tony; Leone, Frederic; Gilbert, David (January 16, 2018). "Sustainable livelihoods and effectiveness of disaster responses: a case study of tropical cyclone Pam in Vanuatu". Natural Hazards. Springer. 91 (3): 1203–1221. doi:10.1007/s11069-018-3174-6. S2CID 133651688. Archived from the original on May 26, 2022. Retrieved May 25, 2022.
^ Perez, Eddie; Thompson, Paul (September 1995). "Natural Hazards: Causes and Effects: Lesson 5—Tropical Cyclones (Hurricanes, Typhoons, Baguios, Cordonazos, Tainos)". Prehospital and Disaster Medicine. Cambridge University Press. 10 (3): 202–217. doi:10.1017/S1049023X00042023. PMID 10155431. S2CID 36983623. Archived from the original on May 26, 2022. Retrieved May 25, 2022.
^ Debnath, Ajay (July 2013). "Condition of Agricultural Productivity of Gosaba C.D. Block, South24 Parganas, West Bengal, India after Severe Cyclone Aila". International Journal of Scientific and Research Publications. 3 (7): 97–100. CiteSeerX 10.1.1.416.3757. ISSN 2250-3153. Archived from the original on May 26, 2022. Retrieved May 25, 2022.
^ Needham, Hal F.; Keim, Barry D.; Sathiaraj, David (May 19, 2015). "A review of tropical cyclone-generated storm surges: Global data sources, observations, and impacts". Reviews of Geophysics. AGU. 53 (2): 545–591. Bibcode:2015RvGeo..53..545N. doi:10.1002/2014RG000477. S2CID 129145744. Retrieved May 25, 2022. [permanent dead link]
^ Landsea, Chris. "Climate Variability table — Tropical Cyclones". Atlantic Oceanographic and Meteorological Laboratory, National Oceanic and Atmospheric Administration. Archived from the original on October 2, 2012. Retrieved October 19, 2006.
^ Belles, Jonathan (August 28, 2018). "Why Tropical Waves Are Important During Hurricane Season". Weather.com. Archived from the original on October 1, 2020. Retrieved October 2, 2020.
^ Schwartz, Matthew (November 22, 2020). "Somalia's Strongest Tropical Cyclone Ever Recorded Could Drop 2 Years' Rain In 2 Days". NPR. Archived from the original on November 23, 2020. Retrieved November 23, 2020.
^ Muthige, M. S.; Malherbe, J.; Englebrecht, F. A.; Grab, S.; Beraki, A.; Maisha, T. R.; Van der Merwe, J. (2018). "Projected changes in tropical cyclones over the South West Indian Ocean under different extents of global warming". Environmental Research Letters. 13 (6): 065019. Bibcode:2018ERL....13f5019M. doi:10.1088/1748-9326/aabc60. S2CID 54879038. Archived from the original on March 8, 2021. Retrieved August 23, 2021.
^ Masters, Jeff. "Africa's Hurricane Katrina: Tropical Cyclone Idai Causes an Extreme Catastrophe". Weather Underground. Archived from the original on March 22, 2019. Retrieved March 23, 2019.
^ "Global Catastrophe Recap: First Half of 2019" (PDF). Aon Benfield. Archived (PDF) from the original on August 12, 2019. Retrieved August 12, 2019.
^ Lyons, Steve (February 17, 2010). "La Reunion Island's Rainfall Dynasty!". The Weather Channel. Archived from the original on February 10, 2014. Retrieved February 4, 2014.
^ Précipitations extrêmes [Extreme precipitation] (Report) (in French). Meteo France. Archived from the original on February 21, 2014. Retrieved April 15, 2013.
^ Randall S. Cerveny; et al. (June 2007). "Extreme Weather Records". Bulletin of the American Meteorological Society. 88 (6): 856, 858. Bibcode:2007BAMS...88..853C. doi:10.1175/BAMS-88-6-853.
^ Frank, Neil L.; Husain, S. A. (June 1971). "The Deadliest Tropical Cyclone in history?". Bulletin of the American Meteorological Society. 52 (6): 438. Bibcode:1971BAMS...52..438F. doi:10.1175/1520-0477(1971)052<0438:TDTCIH>2.0.CO;2. S2CID 123589011.
^ Weather, Climate & Catastrophe Insight: 2019 Annual Report (PDF) (Report). AON Benfield. January 22, 2020. Archived (PDF) from the original on January 22, 2020. Retrieved January 23, 2020.
^ Sharp, Alan; Arthur, Craig; Bob Cechet; Mark Edwards (2007). Natural hazards in Australia: Identifying risk analysis requirements (PDF) (Report). Geoscience Australia. p. 45. Archived (PDF) from the original on October 31, 2020. Retrieved October 11, 2020.
^ The Climate of Fiji (PDF) (Information Sheet: 35). Fiji Meteorological Service. April 28, 2006. Archived (PDF) from the original on March 20, 2021. Retrieved April 29, 2021.
^ Republic of Fiji: Third National Communication Report to the United Nations Framework Convention on Climate Change (PDF) (Report). United Nations Framework Convention on Climate Change. April 27, 2020. p. 62. Archived (PDF) from the original on July 6, 2021. Retrieved August 23, 2021.
^ "Death toll". The Canberra Times. Australian Associated Press. June 18, 1973. Archived from the original on August 27, 2020. Retrieved April 22, 2020.
^ Masters, Jeff. "Africa's Hurricane Katrina: Tropical Cyclone Idai Causes an Extreme Catastrophe". Weather Underground. Archived from the original on August 4, 2019. Retrieved March 23, 2019.
^ "Billion-Dollar Weather and Climate Disasters". National Centers for Environmental Information. Archived from the original on August 11, 2021. Retrieved August 23, 2021.
^ a b Blake, Eric S.; Zelensky, David A. Tropical Cyclone Report: Hurricane Harvey (PDF) (Report). National Hurricane Center. Archived (PDF) from the original on January 26, 2018. Retrieved August 23, 2021.
^ Franklin, James L. (February 22, 2006). Tropical Cyclone Report: Hurricane Vince (PDF) (Report). National Hurricane Center. Archived (PDF) from the original on October 2, 2015. Retrieved August 14, 2011.
^ Blake, Eric (September 18, 2020). Subtropical Storm Alpha Discussion Number 2 (Report). National Hurricane Center. Archived from the original on October 9, 2020. Retrieved September 18, 2020.
^ Emanuel, K. (June 2005). "Genesis and maintenance of 'Mediterranean hurricanes'". Advances in Geosciences. 2: 217–220. Bibcode:2005AdG.....2..217E. doi:10.5194/adgeo-2-217-2005. Archived from the original on May 23, 2022. Retrieved May 23, 2022.
^ Pielke, Rubiera, Landsea, Fernández, and Klein (2003). "Hurricane Vulnerability in Latin America & The Caribbean" (PDF). Natural Hazards Review. Archived (PDF) from the original on August 10, 2006. Retrieved July 20, 2006.
^ Rappaport, Ed (December 9, 1993). Tropical Storm Bret Preliminary Report (GIF) (Report). National Hurricane Center. p. 3. Archived from the original on March 3, 2016. Retrieved August 11, 2015.
^ Landsea, Christopher W. (July 13, 2005). "Subject: Tropical Cyclone Names: G6) Why doesn't the South Atlantic Ocean experience tropical cyclones?". Tropical Cyclone Frequently Asked Question. United States National Oceanic and Atmospheric Administration's Hurricane Research Division. Archived from the original on March 27, 2015. Retrieved February 7, 2015.
^ McTaggart-Cowan, Ron; Bosart, Lance F.; Davis, Christopher A.; Atallah, Eyad H.; Gyakum, John R.; Emanuel, Kerry A. (November 2006). "Analysis of Hurricane Catarina (2004)" (PDF). Monthly Weather Review. American Meteorological Society. 134 (11): 3029–3053. Bibcode:2006MWRv..134.3029M. doi:10.1175/MWR3330.1. Archived (PDF) from the original on August 30, 2021. Retrieved May 23, 2022.
^ National Oceanic and Atmospheric Administration. 2005 Tropical Eastern North Pacific Hurricane Outlook. Archived June 12, 2015, at the Wayback Machine. Retrieved May 2, 2006.
^ "Summer tropical storms don't fix drought conditions". ScienceDaily. May 27, 2015. Archived from the original on October 9, 2021. Retrieved April 10, 2021.
^ Yoo, Jiyoung; Kwon, Hyun-Han; So, Byung-Jin; Rajagopalan, Balaji; Kim, Tae-Woong (April 28, 2015). "Identifying the role of typhoons as drought busters in South Korea based on hidden Markov chain models". Geophysical Research Letters. 42 (8): 2797–2804. doi:10.1002/2015GL063753.
^ Kam, Jonghun; Sheffield, Justin; Yuan, Xing; Wood, Eric F. (May 15, 2013). "The Influence of Atlantic Tropical Cyclones on Drought over the Eastern United States (1980–2007)". Journal of Climate. American Meteorological Society. 26 (10): 3067–3086. Bibcode:2013JCli...26.3067K. doi:10.1175/JCLI-D-12-00244.1.
^ National Weather Service (October 19, 2005). "Tropical Cyclone Introduction". JetStream – An Online School for Weather. National Oceanic & Atmospheric Administration. Archived from the original on June 14, 2012. Retrieved September 7, 2010.
^ Emanuel, Kerry (July 2001). "Contribution of tropical cyclones to meridional heat transport by the oceans". Journal of Geophysical Research. 106 (D14): 14771–14781. Bibcode:2001JGR...10614771E. doi:10.1029/2000JD900641.
^ Christopherson, Robert W. (1992). Geosystems: An Introduction to Physical Geography. New York: Macmillan Publishing Company. pp. 222–224. ISBN 978-0-02-322443-0.
^ Khanna, Shruti; Santos, Maria J.; Koltunov, Alexander; Shapiro, Kristen D.; Lay, Mui; Ustin, Susan L. (February 17, 2017). "Marsh Loss Due to Cumulative Impacts of Hurricane Isaac and the Deepwater Horizon Oil Spill in Louisiana". Remote Sensing. MDPI. 9 (2): 169. Bibcode:2017RemS....9..169K. doi:10.3390/rs9020169.
^ Osland, Michael J.; Feher, Laura C.; Anderson, Gordon H.; Varvaeke, William C.; Krauss, Ken W.; Whelan, Kevin R.T.; Balentine, Karen M.; Tiling-Range, Ginger; Smith III, Thomas J.; Cahoon, Donald R. (May 26, 2020). "A Tropical Cyclone-Induced Ecological Regime Shift: Mangrove Forest Conversion to Mudflat in Everglades National Park (Florida, USA)". Wetlands and Climate Change. Springer. 40 (5): 1445–1458. doi:10.1007/s13157-020-01291-8. S2CID 218897776. Archived from the original on May 17, 2022. Retrieved May 27, 2022.
^ a b You, Zai-Jin (March 18, 2019). "Tropical Cyclone-Induced Hazards Caused by Storm Surges and Large Waves on the Coast of China". Geosciences. 9 (3): 131. Bibcode:2019Geosc...9..131Y. doi:10.3390/geosciences9030131. ISSN 2076-3263.
^ Zang, Zhengchen; Xue, Z. George; Xu, Kehui; Bentley, Samuel J.; Chen, Qin; D'Sa, Eurico J.; Zhang, Le; Ou, Yanda (October 20, 2020). "The role of sediment-induced light attenuation on primary production during Hurricane Gustav (2008)". Biogeosciences. Copernicus Publications. 17 (20): 5043–5055. Bibcode:2020BGeo...17.5043Z. doi:10.5194/bg-17-5043-2020. S2CID 238986315. Archived from the original on January 19, 2022. Retrieved May 19, 2022.
^ Huang, Wenrui; Mukherjee, Debraj; Chen, Shuisen (March 2011). "Assessment of Hurricane Ivan impact on chlorophyll-a in Pensacola Bay by MODIS 250 m remote sensing". Marine Pollution Bulletin. Science Direct. 62 (3): 490–498. doi:10.1016/j.marpolbul.2010.12.010. PMID 21272900. Archived from the original on May 19, 2022. Retrieved May 19, 2022.
^ Chen, Xuan; Adams, Benjamin J.; Platt, William J.; Hooper-Bùi, Linda M. (February 28, 2020). "Effects of a tropical cyclone on salt marsh insect communities and post-cyclone reassembly processes". Ecography. Wiley Online Library. 43 (6): 834–847. doi:10.1111/ecog.04932. S2CID 212990211. Archived from the original on May 19, 2022. Retrieved May 21, 2022.
^ "Tempestade Leslie provoca grande destruição nas Matas Nacionais" [Storm Leslie wreaks havoc in the National Forests]. Notícias de Coimbra (in Portuguese). October 17, 2018. Archived from the original on January 28, 2019. Retrieved May 27, 2022.
^ Doyle, Thomas (2005). "Wind damage and Salinity Effects of Hurricanes Katrina and Rita on Coastal Baldcypress Forests of Louisiana" (PDF). Archived (PDF) from the original on March 4, 2016. Retrieved February 13, 2014.
^ Cappielo, Dina (2005). "Spills from hurricanes stain coast With gallery". Houston Chronicle. Archived from the original on April 25, 2014. Retrieved February 12, 2014.
^ Pine, John C. (2006). "Hurricane Katrina and Oil Spills: Impact on Coastal and Ocean Environments" (PDF). Oceanography. The Oceanography Society. 19 (2): 37–39. doi:10.5670/oceanog.2006.61. Archived (PDF) from the original on January 20, 2022. Retrieved May 19, 2022.
^ a b Santella, Nicholas; Steinberg, Laura J.; Sengul, Hatice (April 12, 2010). "Petroleum and Hazardous Material Releases from Industrial Facilities Associated with Hurricane Katrina". Risk Analysis. 30 (4): 635–649. doi:10.1111/j.1539-6924.2010.01390.x. PMID 20345576. S2CID 24147578. Archived from the original on May 21, 2022. Retrieved May 21, 2022.
^ Qin, Rongshui; Khakzad, Nima; Zhu, Jiping (May 2020). "An overview of the impact of Hurricane Harvey on chemical and process facilities in Texas". International Journal of Disaster Risk Reduction. Science Direct. 45: 101453. doi:10.1016/j.ijdrr.2019.101453. S2CID 214418578. Archived from the original on May 17, 2022. Retrieved May 19, 2022.
^ Misuri, Alessio; Moreno, Valeria Casson; Quddus, Noor; Cozzani, Valerio (October 2019). "Lessons learnt from the impact of hurricane Harvey on the chemical and process industry". Reliability Engineering & System Safety. Science Direct. 190: 106521. doi:10.1016/j.ress.2019.106521. S2CID 191214528. Archived from the original on May 19, 2022. Retrieved May 19, 2022.
^ Cañedo, Sibely (March 29, 2019). "Tras el Huracán Willa, suben niveles de metales en río Baluarte" [After Hurricane Willa, metal levels rise in the Baluarte River] (in Spanish). Noreste. Archived from the original on September 30, 2020. Retrieved May 19, 2022.
^ a b Dellapenna, Timothy M.; Hoelscher, Christena; Hill, Lisa; Al Mukaimi, Mohammad E.; Knap, Anthony (December 15, 2020). "How tropical cyclone flooding caused erosion and dispersal of mercury-contaminated sediment in an urban estuary: The impact of Hurricane Harvey on Buffalo Bayou and the San Jacinto Estuary, Galveston Bay, USA". Science of the Total Environment. Science Direct. 748: 141226. Bibcode:2020ScTEn.748n1226D. doi:10.1016/j.scitotenv.2020.141226. PMC 7606715. PMID 32818899.
^ a b Volto, Natacha; Duvat, Virginie K.E. (July 9, 2020). "Applying Directional Filters to Satellite Imagery for the Assessment of Tropical Cyclone Impacts on Atoll Islands". Coastal Research. Meridian Allen Press. 36 (4): 732–740. doi:10.2112/JCOASTRES-D-19-00153.1. S2CID 220323810. Archived from the original on January 25, 2021. Retrieved May 21, 2022.
^ a b Bush, Martin J. (October 9, 2019). "How to End the Climate Crisis". Climate Change and Renewable Energy. Springer. pp. 421–475. doi:10.1007/978-3-030-15424-0_9. ISBN 978-3-030-15423-3. S2CID 211444296. Archived from the original on May 17, 2022. Retrieved May 21, 2022.
^ Onaka, Susumu; Ichikawa, Shingo; Izumi, Masatoshi; Uda, Takaaki; Hirano, Junichi; Sawada, Hideki (2017). "Effectiveness of Gravel Beach Nourishment on Pacific Island". Asian and Pacific Coasts. World Scientific: 651–662. doi:10.1142/9789813233812_0059. ISBN 978-981-323-380-5. Archived from the original on May 16, 2022. Retrieved May 21, 2022.
^ Kench, P.S.; McLean, R.F.; Owen, S.D.; Tuck, M.; Ford, M.R. (October 1, 2018). "Storm-deposited coral blocks: A mechanism of island genesis, Tutaga island, Funafuti atoll, Tuvalu". Geology. Geo Science World. 46 (10): 915–918. Bibcode:2018Geo....46..915K. doi:10.1130/G45045.1. S2CID 135443385. Archived from the original on May 23, 2022. Retrieved May 21, 2022.
^ Baker, Jason D.; Harting, Albert L.; Johanos, Thea C.; London, Joshua M.; Barbieri, Michelle M.; Littnan, Charles L. (August 2020). "Terrestrial Habitat Loss and the Long-term Viability of the French Frigate Shoals Hawaiian Monk Seal Subpopulation". NOAA Technical Memorandum NMFS-PIFSC. NOAA Fisheries. doi:10.25923/76vx-ve75. Archived from the original on May 12, 2022. Retrieved May 20, 2022.
^ Tokar, Brian; Gilbertson, Tamra (March 31, 2020). Climate Justice and Community Renewal: Resistance and Grassroots Solutions. p. 70. ISBN 9781000049213. Archived from the original on May 17, 2022. Retrieved May 27, 2022.
^ Samodra, Guruh; Ngadisih, Ngadisih; Malawani, Mukhamad Ngainul; Mardiatno, Djati; Cahyadi, Ahmad; Nugroho, Ferman Setia (April 11, 2020). "Frequency–magnitude of landslides affected by the 27–29 November 2017 Tropical Cyclone Cempaka in Pacitan, East Java". Journal of Mountain Science. Springer. 17 (4): 773–786. doi:10.1007/s11629-019-5734-y. S2CID 215725140. Archived from the original on May 17, 2022. Retrieved May 21, 2022.
^ Zinke, Laura (April 28, 2021). "Hurricanes and landslides". Geomorphology. Nature Reviews Earth & Environment. 2 (5): 304. Bibcode:2021NRvEE...2..304Z. doi:10.1038/s43017-021-00171-x. S2CID 233435990. Archived from the original on May 17, 2022. Retrieved May 21, 2022.
^ Tien, Pham Van; Luong, Le Hong; Duc, Do Minh; Trinh, Phan Trong; Quynh, Dinh Thi; Lan, Nguyen Chau; Thuy, Dang Thi; Phi, Nguyen Quoc; Cuong, Tran Quoc; Dang, Khang; Loi, Doan Huy (April 9, 2021). "Rainfall-induced catastrophic landslide in Quang Tri Province: the deadliest single landslide event in Vietnam in 2020". Landslides. Springer. 18 (6): 2323–2327. doi:10.1007/s10346-021-01664-y. S2CID 233187785. Archived from the original on May 17, 2022. Retrieved May 21, 2022.
^ Santos, Gemma Dela Cruz (September 20, 2021). "2020 tropical cyclones in the Philippines: A review". Tropical Cyclone Research and Review. Science Direct. 10 (3): 191–199. doi:10.1016/j.tcrr.2021.09.003. S2CID 239244161. Archived from the original on May 17, 2022. Retrieved May 21, 2022.
^ Mishra, Manoranjan; Kar, Dipika; Debnath, Manasi; Sahu, Netrananda; Goswami, Shreerup (August 30, 2021). "Rapid eco-physical impact assessment of tropical cyclones using geospatial technology: a case from severe cyclonic storms Amphan". Natural Hazards. Springer. 110 (3): 2381–2395. doi:10.1007/s11069-021-05008-w. S2CID 237358608. Archived from the original on May 17, 2022. Retrieved May 21, 2022.
^ Tamura, Toru; Nicholas, William A.; Oliver, Thomas S. N.; Brooke, Brendan P. (July 14, 2017). "Coarse-sand beach ridges at Cowley Beach, north-eastern Australia: Their formative processes and potential as records of tropical cyclone history". Sedimentology. Wiley Library. 65 (3): 721–744. doi:10.1111/sed.12402. S2CID 53403886. Archived from the original on May 16, 2022. Retrieved May 21, 2022.
^ "OSHA's Hazard Exposure and Risk Assessment Matrix for Hurricane Response and Recovery Work: List of Activity Sheets". U.S. Occupational Safety and Health Administration. 2005. Archived from the original on September 29, 2018. Retrieved September 25, 2018.
^ "Before You Begin – The Incident Command System (ICS)". American Industrial Hygiene Association. Archived from the original on September 29, 2018. Retrieved September 26, 2018.
^ "Volunteer". National Voluntary Organizations Active in Disaster. Archived from the original on September 29, 2018. Retrieved September 25, 2018.
^ a b c "Hurricane Key Messages for Employers, Workers and Volunteers". U.S. National Institute for Occupational Safety and Health. 2017. Archived from the original on November 24, 2018. Retrieved September 24, 2018.
^ a b "Hazardous Materials and Conditions". American Industrial Hygiene Association. Archived from the original on September 29, 2018. Retrieved September 26, 2018.
^ "Mold and Other Microbial Growth". American Industrial Hygiene Association. Archived from the original on September 29, 2018. Retrieved September 26, 2018.
^ a b c "OSHA's Hazard Exposure and Risk Assessment Matrix for Hurricane Response and Recovery Work: Recommendations for General Hazards Commonly Encountered during Hurricane Response and Recovery Operations". U.S. Occupational Safety and Health Administration. 2005. Archived from the original on September 29, 2018. Retrieved September 25, 2018.
^ "Electrical Hazards". American Industrial Hygiene Association. Archived from the original on September 29, 2018. Retrieved September 26, 2018.
^ Muller, Joanne; Collins, Jennifer M.; Gibson, Samantha; Paxton, Leilani (2017), Collins, Jennifer M.; Walsh, Kevin (eds.), "Recent Advances in the Emerging Field of Paleotempestology", Hurricanes and Climate Change: Volume 3, Cham: Springer International Publishing, pp. 1–33, doi:10.1007/978-3-319-47594-3_1, ISBN 978-3-319-47594-3, S2CID 133456333
^ Liu, Kam-biu (1999). Millennial-scale variability in catastrophic hurricane landfalls along the Gulf of Mexico coast. 23rd Conference on Hurricanes and Tropical Meteorology. Dallas, TX: American Meteorological Society. pp. 374–377.
^ Liu, Kam-biu; Fearn, Miriam L. (2000). "Reconstruction of Prehistoric Landfall Frequencies of Catastrophic Hurricanes in Northwestern Florida from Lake Sediment Records". Quaternary Research. 54 (2): 238–245. Bibcode:2000QuRes..54..238L. doi:10.1006/qres.2000.2166. S2CID 140723229.
^ G. Huang; W.W. S. Yim (January 2001). "Reconstruction of an 8,000-year record of typhoons in the Pearl River estuary, China" (PDF). University of Hong Kong. Archived (PDF) from the original on July 20, 2021. Retrieved April 2, 2021.
^ Arnold Court (1980). Tropical Cyclone Effects on California. NOAA technical memorandum NWS WR; 159. Northridge, California: California State University. pp. 2, 4, 6, 8, 34. Archived from the original on October 1, 2018. Retrieved February 2, 2012.
^ "Atlantic hurricane best track (HURDAT version 2)" (Database). United States National Hurricane Center. September 19, 2022. Retrieved January 29, 2023. This article incorporates text from this source, which is in the public domain.
^ Philippe Caroff; et al. (June 2011). Operational procedures of TC satellite analysis at RSMC La Reunion (PDF) (Report). World Meteorological Organization. Archived from the original on April 27, 2021. Retrieved April 22, 2013.
^ Christopher W. Landsea; et al. "Documentation for 1851–1910 Alterations and Additions to the HURDAT Database". The Atlantic Hurricane Database Re-analysis Project. Hurricane Research Division. Archived from the original on June 15, 2021. Retrieved April 27, 2021.
^ Neumann, Charles J. "1.3: A Global Climatology". Global Guide to Tropical Cyclone Forecasting. Bureau of Meteorology. Archived from the original on June 1, 2011. Retrieved November 30, 2006.
^ Knutson, Thomas; Camargo, Suzana; Chan, Johnny; Emanuel, Kerry; Ho, Chang-Hoi; Kossin, James; Mohapatra, Mrutyunjay; Satoh, Masaki; Sugi, Masato; Walsh, Kevin; Wu, Liguang (October 1, 2019). "Tropical Cyclones and Climate Change Assessment: Part I. Detection and Attribution". Bulletin of the American Meteorological Society. 100 (10): 1988. Bibcode:2019BAMS..100.1987K. doi:10.1175/BAMS-D-18-0189.1. hdl:1721.1/125577. S2CID 191139413. Archived from the original on August 13, 2021. Retrieved April 17, 2021.
^ a b c d e Atlantic Oceanographic and Meteorological Laboratory, Hurricane Research Division. "Frequently Asked Questions: When is hurricane season?". National Oceanic and Atmospheric Administration. Archived from the original on May 6, 2009. Retrieved July 25, 2006.
^ McAdie, Colin (May 10, 2007). "Tropical Cyclone Climatology". National Hurricane Center. Archived from the original on March 21, 2015. Retrieved June 9, 2007.
^ a b Ramsay, Hamish (2017). "The Global Climatology of Tropical Cyclones". Oxford Research Encyclopedia of Natural Hazard Science. Oxford University Press. doi:10.1093/acrefore/9780199389407.013.79. ISBN 9780199389407. Archived from the original on August 15, 2021.
^ Joint Typhoon Warning Center (2006). "3.3 JTWC Forecasting Philosophies" (PDF). United States Navy. Archived (PDF) from the original on November 29, 2007. Retrieved February 11, 2007.
^ a b Wu, M.C.; Chang, W.L.; Leung, W.M. (2004). "Impacts of El Niño–Southern Oscillation Events on Tropical Cyclone Landfalling Activity in the Western North Pacific". Journal of Climate. 17 (6): 1419–1428. Bibcode:2004JCli...17.1419W. CiteSeerX 10.1.1.461.2391. doi:10.1175/1520-0442(2004)017<1419:IOENOE>2.0.CO;2.
^ Klotzbach, Philip J. (2011). "El Niño–Southern Oscillation's Impact on Atlantic Basin Hurricanes and U.S. Landfalls". Journal of Climate. 24 (4): 1252–1263. Bibcode:2011JCli...24.1252K. doi:10.1175/2010JCLI3799.1. ISSN 0894-8755.
^ Camargo, Suzana J.; Sobel, Adam H.; Barnston, Anthony G.; Klotzbach, Philip J. (2010), "The Influence of Natural Climate Variability on Tropical Cyclones, and Seasonal Forecasts of Tropical Cyclone Activity", Global Perspectives on Tropical Cyclones, World Scientific Series on Asia-Pacific Weather and Climate, World Scientific, vol. 4, pp. 325–360, doi:10.1142/9789814293488_0011, ISBN 978-981-4293-47-1, archived from the original on August 15, 2021
^ a b c d Hurricane Research Division. "Frequently Asked Questions: What are the average, most, and least tropical cyclones occurring in each basin?". National Oceanic and Atmospheric Administration's Atlantic Oceanographic and Meteorological Laboratory. Retrieved December 5, 2012.
^ Annual RSMC Report 2018 (PDF) (Report). Regional Specialised Meteorological Centre New Delhi, India Meteorological Department. http://www.rsmcnewdelhi.imd.gov.in/images/pdf/publications/annual-rsmc-report/rsmc-2018.pdf
^ "Australian Tropical Cyclone Outlook for 2019 to 2020". Australian Bureau of Meteorology. October 11, 2019. Archived from the original on October 14, 2019. Retrieved October 14, 2019.
^ 2019–20 Tropical Cyclone Season Outlook [in the] Regional Specialised Meteorological Centre Nadi – Tropical Cyclone Centre (RSMC Nadi – TCC) Area of Responsibility (AOR) (PDF) (Report). Fiji Meteorological Service. October 11, 2019. Archived (PDF) from the original on October 11, 2019. Retrieved October 11, 2019.
^ Leonhardt, David; Moses, Claire; Philbrick, Ian Prasad (September 29, 2022). "Ian Moves North / Category 4 and 5 Atlantic hurricanes since 1980". The New York Times. Archived from the original on September 30, 2022. Source: NOAA - Graphic by Ashley Wu, The New York Times
^ a b c d Knutson, Thomas; Camargo, Suzana J.; Chan, Johnny C. L.; Emanuel, Kerry; Ho, Chang-Hoi; Kossin, James; Mohapatra, Mrutyunjay; Satoh, Masaki; Sugi, Masato; Walsh, Kevin; Wu, Liguang (August 6, 2019). "Tropical Cyclones and Climate Change Assessment: Part II. Projected Response to Anthropogenic Warming". Bulletin of the American Meteorological Society. 101 (3): BAMS–D–18–0194.1. doi:10.1175/BAMS-D-18-0194.1. ISSN 0003-0007.
^ "Major tropical cyclones have become '15% more likely' over past 40 years". Carbon Brief. May 18, 2020. Archived from the original on August 8, 2020. Retrieved August 31, 2020.
^ Kossin, James P.; Knapp, Kenneth R.; Olander, Timothy L.; Velden, Christopher S. (May 18, 2020). "Global increase in major tropical cyclone exceedance probability over the past four decades" (PDF). Proceedings of the National Academy of Sciences. 117 (22): 11975–11980. Bibcode:2020PNAS..11711975K. doi:10.1073/pnas.1920849117. ISSN 0027-8424. PMC 7275711. PMID 32424081. Archived (PDF) from the original on November 19, 2020. Retrieved October 6, 2020.
^ Collins, M.; Sutherland, M.; Bouwer, L.; Cheong, S.-M.; et al. (2019). "Chapter 6: Extremes, Abrupt Changes and Managing Risks" (PDF). IPCC Special Report on the Ocean and Cryosphere in a Changing Climate. p. 602. Archived (PDF) from the original on December 20, 2019. Retrieved October 6, 2020.
^ Thomas R. Knutson; Joseph J. Sirutis; Ming Zhao (2015). "Global Projections of Intense Tropical Cyclone Activity for the Late Twenty-First Century from Dynamical Downscaling of CMIP5/RCP4.5 Scenarios". Journal of Climate. 28 (18): 7203–7224. Bibcode:2015JCli...28.7203K. doi:10.1175/JCLI-D-15-0129.1. S2CID 129209836. Archived from the original on January 5, 2020. Retrieved October 6, 2020.
^ Knutson; et al. (2013). "Dynamical Downscaling Projections of Late 21st Century Atlantic Hurricane Activity: CMIP3 and CMIP5 Model-based Scenarios". Journal of Climate. 26 (17): 6591–6617. Bibcode:2013JCli...26.6591K. doi:10.1175/JCLI-D-12-00539.1. S2CID 129571840. Archived from the original on October 5, 2020. Retrieved October 6, 2020.
^ a b Collins, M.; Sutherland, M.; Bouwer, L.; Cheong, S.-M.; et al. (2019). "Chapter 6: Extremes, Abrupt Changes and Managing Risks" (PDF). IPCC Special Report on the Ocean and Cryosphere in a Changing Climate. p. 603. Archived (PDF) from the original on December 20, 2019. Retrieved October 6, 2020.
^ a b "Hurricane Harvey shows how we underestimate flooding risks in coastal cities, scientists say". The Washington Post. August 29, 2017. Archived from the original on August 30, 2017. Retrieved August 30, 2017.
^ a b c Walsh, K. J. E.; Camargo, S. J.; Knutson, T. R.; Kossin, J.; Lee, T. -C.; Murakami, H.; Patricola, C. (December 1, 2019). "Tropical cyclones and climate change". Tropical Cyclone Research and Review. 8 (4): 240–250. doi:10.1016/j.tcrr.2020.01.004. ISSN 2225-6032.
^ Roberts, Malcolm John; Camp, Joanne; Seddon, Jon; Vidale, Pier Luigi; Hodges, Kevin; Vannière, Benoît; Mecking, Jenny; Haarsma, Rein; Bellucci, Alessio; Scoccimarro, Enrico; Caron, Louis-Philippe (2020). "Projected Future Changes in Tropical Cyclones Using the CMIP6 HighResMIP Multimodel Ensemble". Geophysical Research Letters. 47 (14): e2020GL088662. Bibcode:2020GeoRL..4788662R. doi:10.1029/2020GL088662. ISSN 1944-8007. PMC 7507130. PMID 32999514. S2CID 221972087.
^ "Hurricanes and Climate Change". Union of Concerned Scientists. Archived from the original on September 24, 2019. Retrieved September 29, 2019.
^ Murakami, Hiroyuki; Delworth, Thomas L.; Cooke, William F.; Zhao, Ming; Xiang, Baoqiang; Hsu, Pang-Chi (2020). "Detected climatic change in global distribution of tropical cyclones". Proceedings of the National Academy of Sciences. 117 (20): 10706–10714. Bibcode:2020PNAS..11710706M. doi:10.1073/pnas.1922500117. ISSN 0027-8424. PMC 7245084. PMID 32366651.
^ James P. Kossin; Kerry A. Emanuel; Gabriel A. Vecchi (2014). "The poleward migration of the location of tropical cyclone maximum intensity". Nature. 509 (7500): 349–352. Bibcode:2014Natur.509..349K. doi:10.1038/nature13278. hdl:1721.1/91576. PMID 24828193. S2CID 4463311.
^ Florida Coastal Monitoring Program. "Project Overview". University of Florida. Archived from the original on May 3, 2006. Retrieved March 30, 2006.
^ "Observations". Central Pacific Hurricane Center. December 9, 2006. Archived from the original on February 12, 2012. Retrieved May 7, 2009.
^ "NOAA harnessing the power of new satellite data this hurricane season". National Oceanic and Atmospheric Administration. June 1, 2020. Archived from the original on March 18, 2021. Retrieved March 25, 2021.
^ 403rd Wing. "The Hurricane Hunters". 53rd Weather Reconnaissance Squadron. Archived from the original on May 30, 2012. Retrieved March 30, 2006.
^ Lee, Christopher. "Drone, Sensors May Open Path Into Eye of Storm". The Washington Post. Archived from the original on November 11, 2012. Retrieved February 22, 2008.
^ National Hurricane Center (May 22, 2006). "Annual average model track errors for Atlantic basin tropical cyclones for the period 1994–2005, for a homogeneous selection of "early" models". National Hurricane Center Forecast Verification. National Oceanic and Atmospheric Administration. Archived from the original on May 10, 2012. Retrieved November 30, 2006.
^ National Hurricane Center (May 22, 2006). "Annual average official track errors for Atlantic basin tropical cyclones for the period 1989–2005, with least-squares trend lines superimposed". National Hurricane Center Forecast Verification. National Oceanic and Atmospheric Administration. Archived from the original on May 10, 2012. Retrieved November 30, 2006.
^ "Regional Specialized Meteorological Center". Tropical Cyclone Program (TCP). World Meteorological Organization. April 25, 2006. Archived from the original on August 14, 2010. Retrieved November 5, 2006.
^ Fiji Meteorological Service (2017). "Services". Archived from the original on June 18, 2017. Retrieved June 4, 2017.
^ Joint Typhoon Warning Center (2017). "Products and Service Notice". United States Navy. Archived from the original on June 9, 2017. Retrieved June 4, 2017.
^ National Hurricane Center (March 2016). "National Hurricane Center Product Description Document: A User's Guide to Hurricane Products" (PDF). National Oceanic and Atmospheric Administration. Archived (PDF) from the original on June 17, 2017. Retrieved June 3, 2017.
^ "Notes on RSMC Tropical Cyclone Information". Japan Meteorological Agency. 2017. Archived from the original on March 19, 2017. Retrieved June 4, 2017.
^ "Geopotential Height". National Weather Service. Archived from the original on March 24, 2022. Retrieved October 7, 2022.
^ "Constant Pressure Charts: 850 mb". National Weather Service. Archived from the original on May 4, 2022. Retrieved October 7, 2022.
^ "Constant Pressure Charts: 700 mb". National Weather Service. Archived from the original on June 29, 2022. Retrieved October 7, 2022.
^ "Constant Pressure Charts: 500 mb". National Weather Service. Archived from the original on May 21, 2022. Retrieved October 7, 2022.
^ "Constant Pressure Charts: 300 mb". National Weather Service. Archived from the original on October 7, 2022. Retrieved October 7, 2022.
^ "Constant Pressure Charts: 200 mb". National Weather Service. Archived from the original on August 5, 2022. Retrieved October 7, 2022.
^ Lander, Mark A.; et al. (August 3, 2003). "Fifth International Workshop on Tropical Cyclones". World Meteorological Organization. Archived from the original on May 9, 2009. Retrieved May 6, 2009.
^ Atlantic Oceanographic and Meteorological Laboratory, Hurricane Research Division. "Frequently Asked Questions: What is an extra-tropical cyclone?". National Oceanic and Atmospheric Administration. Archived from the original on February 9, 2007. Retrieved July 25, 2006.
^ "Lesson 14: Background: Synoptic Scale". University of Wisconsin–Madison. February 25, 2008. Archived from the original on February 20, 2009. Retrieved May 6, 2009.
^ "An Overview of Coastal Land Loss: With Emphasis on the Southeastern United States". United States Geological Survey. 2008. Archived from the original on February 12, 2009. Retrieved May 6, 2009.
^ Atlantic Oceanographic and Meteorological Laboratory, Hurricane Research Division. "Frequently Asked Questions: What is a sub-tropical cyclone?". National Oceanic and Atmospheric Administration. Archived from the original on October 11, 2011. Retrieved July 25, 2006.
Barnes, Jay. Fifteen Hurricanes That Changed the Carolinas: Powerful Storms, Climate Change, and What We Do Next (University of North Carolina Press, 2022) online review
Vecchi, Gabriel A., et al. "Changes in Atlantic major hurricane frequency since the late-19th century." Nature communications 12.1 (2021): 1-9. online
Weinkle, Jessica, et al. "Normalized hurricane damage in the continental United States 1900–2017." Nature Sustainability 1.12 (2018): 808-813. online
Retrieved from "https://en.wikipedia.org/w/index.php?title=Tropical_cyclone&oldid=1131229263#Classification_and_naming"
Tropical cyclone meteorology
Climate change and hurricanes
Meteorological phenomena
Types of cyclone
Weather hazards
Articles with dead external links from June 2022
Source attribution
Use mdy dates from April 2021
All accuracy disputes
Articles with disputed statements from October 2022
Articles to be expanded from April 2021
All articles to be expanded
Articles using small message boxes
Articles to be expanded from October 2022 | CommonCrawl |
C4′/H4′ selective, non-uniformly sampled 4D HC(P)CH experiment for sequential assignments of 13C-labeled RNAs
Saurabh Saxena1,
Jan Stanek1,
Mirko Cevec2,
Janez Plavec2,3,4 &
Wiktor Koźmiński1
Journal of Biomolecular NMR volume 60, pages 91–98 (2014)
A through-bond, C4′/H4′ selective, "out and stay" type 4D HC(P)CH experiment is introduced which provides sequential connectivity via H4′(i)–C4′(i)–C4′(i−1)–H4′(i−1) correlations. The 31P dimension (used in the conventional 3D HCP experiment) is replaced with evolution of the better dispersed C4′ dimension. The experiment fully utilizes 13C-labeling of RNA by inclusion of two C4′ evolution periods. An additional evolution of H4′ is included to further enhance peak resolution. Band selective 13C inversion pulses are used to achieve selectivity and prevent signal dephasing due to the C4′–C3′ and C4′–C5′ homonuclear couplings. For reasonable resolution, non-uniform sampling is employed in all indirect dimensions. To reduce sensitivity losses, multiple quantum coherences are preserved during shared-time evolution and coherence transfer delays. In the experiment the intra-nucleotide peaks are suppressed whereas inter-nucleotide peaks are enhanced to reduce ambiguities. The performance of the experiment is verified on a fully 13C, 15N-labeled 34-nt hairpin RNA comprising typical structural elements.
With the advent of several new classes of non-coding RNAs (e.g. siRNA, miRNAs), research has been heavily focused on understanding the role of RNA in cellular processes during normal and diseased states (Esteller 2011), through exploring its structure–function relationship (Briones et al. 2009; Mercer et al. 2009). Over the years, in conjunction with isotope labeling techniques, several NMR approaches (Varani et al. 1996; Wijmenga and van Buuren 1998; Furtig et al. 2003; Flinders and Dieckmann 2006) have proved highly useful in expanding our knowledge about RNA structure, its basic structural motifs, catalysis and interactions with small molecules or proteins. However, precise structural determination of even moderately sized RNAs can still be problematic. In addition to the low proton density in RNAs, these biopolymers comprise only four different nucleotides. Effectively, chemical shift dispersion is inherently low, which entails severe spectral overlaps. For non-coding RNAs, a frequent lack of base stacking results in even greater crowding in the NMR spectra. Moreover, similar chemical shifts are observed for many nucleotides having a similar chemical environment in helical secondary structures. Recently, an automated assignment approach involving no isotope labeling (Aeschbacher et al. 2013) was proposed that requires peak lists from 2D TOCSY, 2D NOESY and natural abundance 1H–13C HSQC spectra. However, difficulty may arise while assigning regions/nuclei with irregular or limited statistics. In addition, chemical shift degeneracy or low dispersion still remains a bottleneck for such approaches. This suggests that new high dimensional techniques, resembling the 4D/5D methods employed for intrinsically disordered proteins (Zawadzka-Kazimierczuk et al. 2012; Stanek et al. 2013b; Bermel et al. 2013), can be explored for the assignment of poorly resolved resonances in RNAs.
The sequential resonance assignment in RNA is usually achieved using through-space NOE-type (Nikonowicz and Pardi 1993) and/or through-bond HCP (Marino et al. 1994) experiments. The efficacy of both types of experiments is severely affected by spectral crowding and overlaps, which increase dramatically with the size of the RNA. To increase the peak resolution, experiments having HCP concatenated with HCCH-TOCSY were also proposed (Marino et al. 1995; Ramachandran et al. 1996); however, their application remained limited due to significant relaxation losses during the TOCSY mixing time and limited resolution owing to relatively short maximum evolution times. On account of this, a high resolution 4D C(aro),C(ribo)-NOESY experiment (Stanek et al. 2013a) was recently reported, which aimed at providing the intra- and inter-nucleotide (sequential) NOE correlations in RNA. As NOE interactions are largely dependent on conformations, ambiguities and gaps may arise during the assignment of the NOESY spectrum, preventing the possible correlation of genuine peaks that may be present in the spectrum. These ambiguities can largely be resolved if the spectral analysis is complemented by some through-bond experiments. For example, the 3D HCP experiment, which provides sequential connectivity via intervening 31P nuclei, i.e. correlating H4′(i)–C4′(i)–P(i) with P(i)–C4′(i−1)–H4′(i−1), has been successful in many applications. However, it suffers from severe spectral overlaps (see Fig. S1a–d in Supplementary Materials) and relies on the quite poor resolution of the 31P dimension. In principle the peaks can be resolved in the 31P dimension, but in practice this is limited by its low chemical shift dispersion (~1.8 ppm) (see Fig. S1e), which makes unambiguous assignment of peaks a challenging task even for moderately sized RNAs. Additionally, in this experiment it is very difficult to unambiguously assign intra- and inter-nucleotide peaks (see Fig. S1b, c), especially for the H4′C4′ region, the most crowded among the sugar carbons. The possibility of sequential correlation through other sugar carbons, i.e. C3′(i−1) and C5′(i), is also hampered by weak 31P–C3′/5′ couplings, making such peaks either absent or too weak; the problem is further compounded by peak overlaps and the presence of H5′/H5″ doublets. Clearly, more advanced approaches are needed to achieve an unambiguous sequential assignment in RNAs.
To address these issues we have developed a C4′/H4′ selective, four-dimensional HC(P)CH experiment with "out and stay" type transfer. The experiment includes chemical shift evolution of the 1H4′s and 13C4′s of adjoining nucleotides, thereby linking them in a single experiment with higher peak resolution. The experiment provides sequential connectivities via the H4′(i)–C4′(i)–C4′(i−1)–H4′(i−1) correlation. The 31P dimension (e.g. used in 3D HCP) is replaced with the better dispersed C4′ nuclei (~5 ppm) to improve resolution and alleviate ambiguities during assignments. The multiple quantum (MQ) line-narrowing effect (Grzesiek et al. 1995) is exploited to improve the sensitivity of the experiment. In the proposed experiment the intra-nucleotide peaks are efficiently suppressed whereas the inter-nucleotide peaks are enhanced. Different settings of the coherence transfer delay allow for suppression of intra-nucleotide peaks. In the cases where intra-nucleotide peaks are only partially suppressed, the opposite signs of the two types of peaks still make it convenient to assign them separately without any ambiguities. The experiment employs C3′/C5′ selective inversion pulses to prevent signal modulation due to 13C–13C homonuclear couplings; these pulses also indirectly enforce the C4′/H4′ selectivity. The schematic design of the experiment is illustrated in Fig. 1, which also highlights the differences from the 3D HCP experiment. Figure 1a describes the pathways for generation of both intra- and inter-nucleotide peaks in the 3D HCP experiment and the almost unidirectional flow of magnetization, due to suppression of intra-nucleotide peaks, in the C4′/H4′ selective 4D HC(P)CH experiment. Between the adjoining nucleotides, the magnetization on 31P is forward transferred not only to the desired C4′s, but also to other weakly coupled carbon spins, i.e. C3′ and C5′. This, collectively, causes a significant loss in sensitivity resulting in weak or undetectable resonances; such a loss is not affordable in a 4D experiment. To eliminate these deleterious effects we utilized a C4′ selective inversion pulse during the coherence transfer in the experiment. Figure 1b shows a comparative illustration of the non-selective transfer in 3D HCP with the selective transfer in 4D HC(P)CH.
A schematic and comparative illustration of magnetization transfer in C4′/H4′ selective 4D HC(P)CH experiment. Red and blue paths represent the magnetization flow in 3′ → 5′ and 5′ → 3′ directions, respectively. The numbers in circles represent the coherence transfer steps leading to cross-peaks whereas the suffix "a" inside circles represents a path which generates intra-nucleotide peaks. In a 3D HCP experiment magnetization flow splits from 31P, generating both intra- and inter-nucleotide peaks whereas in 4D HC(P)CH experiment (a) intra-nucleotide peaks are suppressed (denoted by dotted red/blue arrows) and involving mostly unidirectional flow of magnetization. (b) Illustrates other key differences in coherence transfers between 3D HCP and 4D HCPCH experiments. In 3D HCP the magnetization on 31P gets forward transferred (P → C3′/C5′, orange/green solid arrows) to other sugar carbons (C3′ and C5′) whereas in 4D HC(P)CH experiment such pathways are blocked (denoted with cross on orange/green arrows); again the suppressed intra-nucleotide peak is shown by dashed red arrow. For the interpretation of colors in this figure the reader is referred to the online version of the Journal
The pulse scheme for C4′/H4′ selective 4D HC(P)CH experiment is shown in Fig. 2. The experiment is designed with an emphasis on achieving higher resolution with minimum sensitivity losses. High dimensionality is achieved by incorporating three indirect chemical shift evolution periods into the sequence. The pulse sequence (see Fig. 2) comprises two 1H–13C MQ periods (MQ1 and MQ2; storing MQ coherences for most of the period) and a middle 31P–13C single quantum transfer period (SQ). The magnetization flow scheme is as follows:
$$ {}^{1}\mathrm{H4'}(t_{1}) \xrightarrow{{}^{1}J_{\mathrm{CH}}} {}^{13}\mathrm{C4'}(t_{2}) \xrightarrow{{}^{3}J_{\mathrm{CP}}} {}^{31}\mathrm{P} \xrightarrow{{}^{3}J_{\mathrm{CP}}} {}^{13}\mathrm{C4'}(t_{3}) \xrightarrow{{}^{1}J_{\mathrm{CH}}} {}^{1}\mathrm{H4'}(t_{4}) $$
Pulse sequence scheme for through-bond, C4′/H4′ selective 4D HC(P)CH experiment. The 90° and 180° 'hard' pulses are represented by filled and open bars, respectively. All pulses are applied along the x-axis of the rotating frame unless indicated otherwise. Grey sine bell-shaped pulses (P and Q) indicate cosine modulated IBURP-2 (Geen and Freeman 1991) pulses. P inverts the chemical shift range 69.5 ± 6 ppm (C3′s and C5′s) with a duration of 2.5 ms (13.8 kHz peak r.f. field) and Q inverts the chemical shift range 83 ± 8 ppm (C4′s) with a duration of 1.9 ms (13.8 kHz peak r.f. field). W represent spin-lock pulses (SLx, SLy) implemented for dephasing of transverse water magnetization. 13C adiabatic composite pulse decoupling was performed with WURST (Kupce and Freeman 1995). The durations of 'hard' π/2 pulses were 7.8, 18.1 and 26.5 µs for 1H, 13C and 31P, respectively. Proton carrier frequency was set on resonance with water (4.68 ppm), carbon carrier was set to the centre of 13C4′s (83.00 ppm) and 31P carrier was set to −4.10 ppm. Quadrature detection in t 1, t 2 and t 3 is accomplished by altering ϕ1, ϕ2 and ϕ5, respectively, according to the States-TPPI procedure. 16-step phase cycle is as follows: ϕ1 = x; ϕ2 = x, −x, ϕ3 = 2(y), 2(−y); ϕ4 = 4(x), 4(−x); ϕ5 = 8(x), 8(−x) and ϕrec = y, 2(−y), y, 2(−y, 2(y), −y), y, (−2y), y. Delays are set as follows: ∆ = 3.5 ms ≈ (2 J CH)−1, τa = τc = 20.9 ms and τb = 38 ms. Gradient levels and durations are: G 1 (0.2 ms, 12.7 G/cm), G 2 (0.8 ms, 33.7 G/cm), G 3 (1.0 ms, 42.5 G/cm), G 4 (0.2 ms, 15.61 G/cm) and G 5 (0.5 ms, 4.6 G/cm). A total of 1,300 (~9 %) sampling points (t 1, t 2, t 3) were randomly chosen from a 31 × 22 × 22 Cartesian grid according to Gaussian probability distribution, p(t) = exp[−(t/t max)2/2σ2], σ = 0.5, with Poisson disk restrictions (Kazimierczuk et al. 2008). Maximum evolution times of 20 (t 1max), 14 (t 2max) and 14 ms (t 3max) were achieved in the indirectly detected dimensions. Acquisition time was set to 85 ms (t 4max). Spectral widths of 15 (ω1), 15 (ω2), 15 (ω3) and 12 kHz (ω4) were assumed. The total experiment duration was 75 h. The interscan delay of 1.8 s for optimal recovery of 1H magnetization (sensitivity per unit time) was used. The experiment was performed at 298 K on the Agilent DDR2 600 MHz spectrometer equipped with a room-temperature penta (1H/13C/15N/2H/31P) probe
Since, as was shown earlier for nucleic acids (Fiala et al. 1998, 2000), the dominant 1H–13C dipolar relaxation mechanism is significantly attenuated for zero- and double-quantum coherences, MQ coherences are preserved during the frequency labeling of both C4′ evolution periods. In the first MQ1 period (see Fig. 2), the coherence starts from H4′ in the sugars and is transferred to C4′ via a non-refocused INEPT. H4′ and C4′ are then brought into an MQ state and the shared evolution of the chemical shifts of H4′ (t1) and C4′ (t2) is performed in a constant-time manner by shifting the corresponding hard 180° pulses within the MQ1 period. In order to evolve the C4′–P couplings and achieve a coherence transfer, a 180° pulse on 31P is applied simultaneously with the C4′ inversion pulse on the 13C channel. During this period (τa), the evolution due to homonuclear carbon coupling (C4′–C3′ and C4′–C5′) is refocused by two cosine-modulated IBURP-2 (Geen and Freeman 1991) pulses (P in Fig. 2) which selectively and simultaneously invert the frequency bands of the C3′ and C5′ ribose sugar carbons. Since the C2′ carbons share the same spectral region as C3′, inversion of the latter also inverts the C2′ carbons. Effectively, the use of inversion pulses leads to an indirect selection of C4′, and hence H4′, during the MQ1 period. The next 90° pulse on H4′ and the ∆ delay refocus the C4′ anti-phase with respect to H4′, whereas the subsequent 90° pulses on C4′ and 31P transfer the coherence onto 31P.
In the next SQ period, the magnetization on 31P is brought into the transverse plane and the 31P–13C couplings evolve. In the middle of this period, a C4′ selective cosine-modulated IBURP-2 pulse (Q in Fig. 2) is employed to prevent dephasing due to the Pi–C3′i−1 and Pi–C5′i couplings and to achieve selectivity for the P → C4′ transfer.
The evolution during the delay τb refocuses the Pi–C4′i anti-phase and creates the Pi–C4′i−1 anti-phase operators, and thus determines the suppression of intra-nucleotide peaks or the enhancement of inter-nucleotide peaks. The suppression level of intra-nucleotide peaks is a trade-off between the J-coupling-optimal delay τb and relaxation. The intensity of an intra-nucleotide peak scales as:
$$ I_{\mathrm{intra}(\mathrm{C4'}_{i}\mathrm{-P}_{i}\mathrm{-C4'}_{i})} \propto r \cdot \cos\!\big(\pi\,{}^{3}J_{\mathrm{C4'}_{i}\mathrm{P}_{i}}\,\tau_{\mathrm{b}}\big) \cdot \cos\!\big(\pi\,{}^{3}J_{\mathrm{P}_{i}\mathrm{C4'}_{i}}\,\tau_{\mathrm{b}}\big) \cdot e^{-R\tau_{\mathrm{b}}} $$
whereas that of an inter-nucleotide peak scales as:
$$ I_{\mathrm{inter}(\mathrm{C4'}_{i}\mathrm{-P}_{i}\mathrm{-C4'}_{i-1})} \propto r \cdot \sin\!\big(\pi\,{}^{3}J_{\mathrm{C4'}_{i}\mathrm{P}_{i}}\,\tau_{\mathrm{b}}\big) \cdot \sin\!\big(\pi\,{}^{3}J_{\mathrm{P}_{i}\mathrm{C4'}_{i-1}}\,\tau_{\mathrm{b}}\big) \cdot e^{-R\tau_{\mathrm{b}}} $$
where R is the 31P SQ transverse relaxation rate and r incorporates the contributions from all other passive couplings.
The coherence transfer efficiency, and hence the intensities, depend on the 3J(C4′,P) values (discussed later in the text). The experiments were performed for various J values; however, at ~10 Hz we found the smallest loss in the number of resonances in the spectrum. The transverse relaxation rate (R) was estimated experimentally for 31P, and the intensities of both types of peaks are plotted against increasing transfer delay (τb) (see Fig. S2 in Supplementary Materials). A suitable delay (~38 ms for this study) is chosen to maximize the inter-nucleotide peak intensities, which in turn also minimizes the intra-nucleotide terms, especially in the cases where 3J(C4′i, Pi) ≈ 3J(Pi, C4′i−1).
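To make this trade-off concrete, the short Python sketch below evaluates the two proportionalities given above as a function of the transfer delay. The coupling constants and the 31P relaxation rate in the script are illustrative assumptions only (chosen so that 3J(C4′i,Pi) ≈ 3J(Pi,C4′i−1) ≈ 10 Hz, roughly the regime discussed in the text); they are not values measured in this study.

```python
import numpy as np

# Illustrative evaluation of the intra- and inter-nucleotide transfer
# amplitudes versus the delay tau_b. All numbers are assumptions made
# for this sketch, not experimental values.
J1 = 10.0   # 3J(C4'_i, P_i) in Hz (assumed)
J2 = 10.0   # 3J(P_i, C4'_{i-1}) in Hz (assumed); with J1 = J2 the intra
            # term reduces to cos^2 and the inter term to sin^2
R = 25.0    # 31P single-quantum transverse relaxation rate in 1/s (assumed)

tau_b = np.linspace(0.0, 0.06, 601)   # transfer delay from 0 to 60 ms

I_intra = np.cos(np.pi * J1 * tau_b) * np.cos(np.pi * J2 * tau_b) * np.exp(-R * tau_b)
I_inter = np.sin(np.pi * J1 * tau_b) * np.sin(np.pi * J2 * tau_b) * np.exp(-R * tau_b)

k = np.argmax(I_inter)
print(f"inter-nucleotide transfer maximal at tau_b ~ {1e3 * tau_b[k]:.0f} ms")
print(f"residual intra-nucleotide amplitude there: {I_intra[k]:.3f}")
```

With these assumed numbers the optimum falls close to the ~38 ms delay used here; with weaker relaxation it moves toward 1/(2J) = 50 ms.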
In the subsequent MQ2 block, the coherence is forward transferred to C4′(i), where its chemical shift is indirectly recorded (t3) while preserving MQ coherences, similarly to the MQ1 block of the sequence. During the same period (τc), refocusing of the P–C4′ couplings is also achieved by applying a 180° pulse on 31P in synchrony with the moving 180° pulse on the 13C channel. Another 180° pulse is centrally placed on the 1H channel to refocus its chemical shift evolution. Again, the use of C3′/C5′ selective IBURP-2 pulses (P in Fig. 2) prevents evolution due to the C4′–C3′/5′ homonuclear couplings and indirectly selects C4′/H4′. Finally, in-phase coherence is generated on the H4′ spins by a refocused INEPT transfer during the ∆ period.
The inversion profiles for the shaped pulses were simulated and tested using the Spinach library (Hogben et al. 2011) in MATLAB®. Gradients and phase cycling are employed to eliminate undesired coherences and improve the C4′/H4′ selectivity of the experiment. After the P → C4′ transfer period, spin-lock pulses (SLx, SLy) are employed (W in Fig. 2) to dephase any remaining transverse water magnetization. The experiment complements the set of recently reported high dimensional experiments, 5D HCP-CCH COSY (Krahenbuhl et al. 2014) and 4D-NUS C(aro),C(ribo)-NOESY (Stanek et al. 2013a), dedicated to sequential resonance assignment in RNAs.
To achieve higher dimensionality with reasonable resolution in the indirectly detected dimensions, non-uniform sampling (NUS) was employed. Using NUS we are able to acquire the 4D HC(P)CH experiment with long maximum evolution times: 20 ms (t1), 14 ms (t2) and 14 ms (t3). The processing of the 4D NUS data was accomplished by the home-written software package Signal Separation Algorithm (SSA) (Stanek et al. 2012), which can be downloaded free of charge for non-commercial purposes from the website http://nmr.cent3.uw.edu.pl.
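As an illustration of the sampling scheme summarized in the legend of Fig. 2, the following Python sketch draws a Gaussian-weighted random NUS schedule on a 31 × 22 × 22 grid. It is a deliberately simplified version of the actual procedure: the Poisson-disk (minimum-gap) restriction of Kazimierczuk et al. (2008) is omitted, and the random seed and variable names are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

grid = (31, 22, 22)   # number of (t1, t2, t3) increments on the full Cartesian grid
n_points = 1300       # number of sampled (t1, t2, t3) triples (~9 % of the grid)
sigma = 0.5           # width of the Gaussian density p(t) ~ exp[-(t/tmax)^2 / (2 sigma^2)]

# Gaussian sampling density evaluated on the normalised evolution-time grid
axes = [np.arange(n) / (n - 1) for n in grid]         # t/tmax along each dimension
t1, t2, t3 = np.meshgrid(*axes, indexing="ij")
p = np.exp(-(t1**2 + t2**2 + t3**2) / (2.0 * sigma**2))
p /= p.sum()

# Draw unique grid points according to the density
flat_idx = rng.choice(p.size, size=n_points, replace=False, p=p.ravel())
schedule = np.column_stack(np.unravel_index(flat_idx, grid))
print(schedule[:5])   # each row is one (t1, t2, t3) increment to be acquired
```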
We have tested the performance of the C4′/H4′ selective 4D HC(P)CH experiment on a fairly demanding RNA sample which encompasses typical structural elements. The experiments were run on a 13C,15N-labeled 34-nt hairpin RNA (1.5 mM in D2O solution) consisting of two A-RNA form stems, one adenine bulge, an asymmetric internal loop and a GAAA terminal loop (Cevec et al. 2010). The 4D HC(P)CH spectrum was easily analyzed with the SPARKY (Goddard and Kneller 2004) program by synchronizing two dimensions (H4′ and C4′) of the ith nucleotide (see Figs. 3, S3 in Supplementary Materials); the resulting 2D plane consists of inter-nucleotide peaks, i.e. to the (i−1)th and (i+1)th nucleotides. In other words, to achieve the sequential assignment, the H4′C4′ plane of one nucleotide is correlated with the H4′C4′ planes of the two neighboring nucleotides. Figures 3, S3 show representative 2D planes of the 4D HC(P)CH spectrum illustrating the resolution advantage of the experiment. It can clearly be seen that heavily overlapping peaks (e.g. C33, U32, C31, A28, C29, A20, G19) in the 2D 13C-HSQC (Fig. 3a) are resolved in the 4D experiment along the H4′C4′ planes of the adjoining nucleotides (Figs. 3b–d, S3a–d). To compare the suppression levels of intra-nucleotide peaks, another 4D experiment was acquired without emphasis on suppression, i.e. using τb ~22 ms. The 2D planes from this version of the experiment consist of one intra- and two inter-nucleotide peaks (of course, with the exception of the terminal nucleotide). As can be expected, in this version of the experiment intra-nucleotide peaks are more intense than inter-nucleotide peaks (see Fig. 3e–g). A significant suppression of intra- and enhancement of inter-nucleotide peaks can be compared between the two versions of the experiment in the Fig. 3 (b, e), (c, f) and (d, g) pairs. In the cases of incomplete suppression of intra-nucleotide peaks, their opposite sign still reduces ambiguities during assignments.
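The plane-synchronisation step described above amounts to fixing the two coordinates of nucleotide i and inspecting the remaining two dimensions. A minimal numpy sketch is shown below; the axis ordering of the 4D array and the helper function are assumptions for illustration and do not reproduce the actual SPARKY workflow.

```python
import numpy as np

def h4c4_plane(spectrum_4d, h4_ppm, c4_ppm, h4_axis, c4_axis):
    """Extract the 2D plane correlated with a given nucleotide.

    spectrum_4d : 4D intensity array assumed to be ordered as
                  (H4'_i, C4'_i, C4'_j, H4'_j).
    h4_ppm, c4_ppm : H4' and C4' chemical shifts of nucleotide i.
    h4_axis, c4_axis : ppm scales of the first two dimensions.
    """
    i_h4 = int(np.argmin(np.abs(h4_axis - h4_ppm)))
    i_c4 = int(np.argmin(np.abs(c4_axis - c4_ppm)))
    return spectrum_4d[i_h4, i_c4, :, :]   # contains the peaks of nucleotides i-1 and i+1

# Toy usage with random data and arbitrary ppm scales
spec = np.random.rand(64, 48, 48, 64)
h4_axis = np.linspace(4.9, 4.0, 64)
c4_axis = np.linspace(85.0, 80.0, 48)
plane = h4c4_plane(spec, h4_ppm=4.42, c4_ppm=82.6, h4_axis=h4_axis, c4_axis=c4_axis)
print(plane.shape)   # (48, 64): the C4'/H4' plane to match against the neighbours
```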
Representative cross-sections from the 4D HC(P)CH experiment. (a) shows the overlapped H4′C4′ region of the 2D 13C-HSQC spectrum. Resolution enhancement can be seen in (b–d), which are the 2D cross-sections of the 4D HC(P)CH spectrum extracted along the H4′C4′ dimensions of C34, C33, U32, respectively. The peaks are clearly resolved in the H4′C4′ plane, enabling an unambiguous assignment of cross-peaks to the neighboring nucleotides. For example, the assignment of C34–C33, C33–U32, U32–C31 inter-nucleotide peaks (marked in blue) is achieved based on the H4′C4′ planes of C34 (b), C33 (c), U32 (d), respectively. Intra-nucleotide peaks are labeled in grey. Also illustrated is the comparison between 4D HC(P)CH experiments with (b–d) and without (e–g) suppression of intra-nucleotide peaks. For the non-suppressed version of the experiment each 2D cross section (e–g) contains one intra-nucleotide peak (green contours) and two inter-nucleotide peaks (red contours), i.e. to the previous and the next nucleotide, respectively. Dotted vertical lines in the (b, e), (c, f) and (d, g) pairs compare the suppression of intra-nucleotide peaks and enhancement of inter-nucleotide peaks between the two versions of the experiment. Since C34 is the terminal nucleotide, only one inter-nucleotide peak is observed in its C/H plane (b, e). The position of completely suppressed intra-nucleotide peaks is indicated by solid green dots (c, d) whereas inter-nucleotide peaks below the detection limit are indicated by solid red dots (g). For the interpretation of colors in this figure the reader is referred to the online version of the Journal
Overall, 19 sequential connectivities were successfully established (see Fig. 4) using the C4′/H4′ selective 4D HC(P)CH experiment, whereas the 3D HCP experiment could provide only 4 sequential links in the 34-nt RNA. Comparatively, the previously reported 4D C(aro),C(ribo)-NOESY experiment provided 17 sequential links, which reflects the difficulty of the investigated RNA sample. Interestingly, the 4D HC(P)CH and 4D NOESY experiments provided complementary data for sequential assignment. In combination, 26 (out of 33) sequential links were successfully assigned. The missing assignments are either due to structural mobility, manifesting in enhanced relaxation during the coherence transfer periods, or due to small C4′–P couplings. In our previous study we have shown that, in this RNA, the asymmetric internal loop adopts two energetically comparable families of structures, which both satisfy the NMR data (Cevec et al. 2010). In addition, the C4′ → P/P → C4′ transfers in the 4D HC(P)CH experiment rely on the C4′–P couplings (3JC4′,P), which in turn depend on the β/ε torsional angles in RNA (Schwalbe et al. 1994; Legault et al. 1995; Hu et al. 1999). For the RNA used in this study (PDB ID: 2KPV), the coupling constants (3JC4′,P) were calculated based on the parameterized Karplus equation (Mooren et al. 1994). It can be observed that for some of the cases the C4′–P couplings are very small (see Fig. 5) and efficient coherence transfer is difficult to achieve. It is noteworthy that, in this study, most of the missing resonances relate to the internal loop or to the proximate residues, where the β/ε angles are large and therefore the C4′–P couplings are very small.
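For orientation, Karplus-type relations of this kind have the general three-parameter form

$$ {}^{3}J_{\mathrm{C4'P}}(\theta) = A\cos^{2}\theta + B\cos\theta + C $$

where θ is the backbone torsion angle governing the coupling pathway (β for the P–O5′–C5′–C4′ fragment and ε for the C4′–C3′–O3′–P fragment). The specific A, B and C coefficients of the Mooren et al. (1994) parameterization used for Fig. 5 are not reproduced here and should be taken from that reference.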
Schematic presentation of the investigated 34-nt RNA showing the sequential connectivities observed in the 3D HCP and 4D HC(P)CH spectra. Blue arrows indicate the sequential links assigned using the 3D HCP experiment, while orange arrows indicate sequential connectivities obtained from the C4′/H4′ selective 4D HC(P)CH experiment. Very weak or missing correlations are marked with grey arrows, most of which belong to the internal loop or to the proximate residues
Coupling constant versus β and ε torsional angles of the 34-nt RNA (PDB ID: 2KPV). The thin solid line is the Karplus curve of 3JC4′–P based on the parameterized Karplus equation (Mooren et al. 1994). The coupling constant values based on β angles (5′ → 3′, 3JC4′–P) are indicated by green circles while those obtained from ε angles (3′ → 5′, 3JC4′–P) are indicated by blue triangles. The unfavorable or weak couplings (large β/ε angles), as labeled, mostly belong to the internal loop or to the proximate residues
To conclude, we have introduced a through-bond, C4′/H4′ selective, non-uniformly sampled 4D HC(P)CH experiment for sequential assignments in RNAs. The incorporated indirect dimensions, along with the replacement of the evolution of 31P by 13C4′, significantly enhanced the spectral dispersion. NUS was employed to achieve high resolution in all the indirectly detected dimensions. Band selective inversion pulses were used to prevent signal modulation due to the C4′–C3′ and C4′–C5′ couplings and to indirectly select the C4′/H4′ region. The experiment involves the suppression of intra-nucleotide peaks; as a result, the number of ambiguities is further reduced. We have demonstrated that the C4′/H4′ selectivity and the attenuated relaxation of MQ coherences partially compensated for the sensitivity losses entailed by the increased dimensionality. Despite lower sensitivity, the proposed experiment clearly outperforms the conventional HCP experiment, which suffers from critical overlap in the "linking" 31P dimension. The experiment is proposed as a complementary tool to 3D/4D NOESY experiments and augments the set of high dimensional experiments aimed at improving resolution and reducing ambiguities during resonance assignments in RNAs with poor chemical shift dispersion.
Aeschbacher T, Schmidt E, Blatter M, Maris C, Duss O, Allain FH-T, Güntert P, Schubert M (2013) Automated and assisted RNA resonance assignment using NMR chemical shift statistics. Nucleic Acids Res gkt665
Bermel W, Felli IC, Gonnelli L, Kozminski W, Piai A, Pierattelli R, Zawadzka-Kazimierczuk A (2013) High-dimensionality 13C direct-detected NMR experiments for the automatic assignment of intrinsically disordered proteins. J Biomol NMR 57:353–361
Briones C, Stich M, Manrubia SC (2009) The dawn of the RNA World: toward functional complexity through ligation of random RNA oligomers. RNA 15:743–749
Cevec M, Thibaudeau C, Plavec J (2010) NMR structure of the let-7 miRNA interacting with the site LCS1 of lin-41 mRNA from Caenorhabditis elegans. Nucleic Acids Res 38:7814–7821
Esteller M (2011) Non-coding RNAs in human disease. Nat Rev Genet 12:861–874
Fiala R, Jiang F, Sklenář V (1998) Sensitivity optimized HCN and HCNCH experiments for 13C/15 N labeled oligonucleotides. J Biomol NMR 12:373–383
Fiala R, Czernek J, Sklenář V (2000) Transverse relaxation optimized triple-resonance NMR experiments for nucleic acids. J Biomol NMR 16:291–302
Flinders J, Dieckmann T (2006) NMR spectroscopy of ribonucleic acids. Prog Nucl Magn Reson Spectrosc 48:137–159
Furtig B, Richter C, Wohnert J, Schwalbe H (2003) NMR spectroscopy of RNA. Chembiochem Eur J Chem Biol 4:936–962
Geen H, Freeman R (1991) Band-selective radiofrequency pulses (1969). J Magn Reson 93:93–141
Goddard T, Kneller D (2004) SPARKY 3. University of California, San Francisco 14:15
Grzesiek S, Kuboniwa H, Hinck AP, Bax A (1995) Multiple-quantum line narrowing for measurement of H.alpha.-H.beta. J couplings in isotopically enriched proteins. J Am Chem Soc 117:5312–5315
^ Hogben HJ, Krzystyniak M, Charnock GT, Hore PJ, Kuprov I (2011) Spinach–a software library for simulation of spin dynamics in large spin systems. J Magn Reson 208:179–194
^ Hu W, Bouaziz S, Skripkin E, Kettani A (1999) Determination of 3J(H3i, Pi+1) and 3J(H5i/5i, Pi) coupling constants in 13C-labeled nucleic acids using constant-time HMQC. J Magn Reson 139:181–185
^ Kazimierczuk K, Zawadzka A, Kozminski W (2008) Optimization of random time domain sampling in multidimensional NMR. J Magn Reson 192:123–130
Krahenbuhl B, El Bakkali I, Schmidt E, Guntert P, Wider G (2014) Automated NMR resonance assignment strategy for RNA via the phosphodiester backbone based on high-dimensional through-bond APSY experiments. J Biomol NMR
Kupce E, Freeman R (1995) Adiabatic pulses for wideband inversion and broadband decoupling. J Magn Reson Ser A 115:273–276
Legault P, Jucker FM, Pardi A (1995) Improved measurement of 13C, 31P J coupling constants in isotopically labeled RNA. FEBS Lett 362:156–160
Marino JP, Schwalbe H, Anklin C, Bermel W, Crothers DM, Griesinger C (1994) Three-dimensional triple-resonance 1H, 13C, 31P experiment: sequential through-bond correlation of ribose protons and intervening phosphorus along the RNA oligonucleotide backbone. J Am Chem Soc 116:6472–6473
Marino JP, Schwalbe H, Anklin C, Bermel W, Crothers DM, Griesinger C (1995) Sequential correlation of anomeric ribose protons and intervening phosphorus in RNA oligonucleotides by a 1H, 13C, 31P triple resonance experiment: HCP-CCH-TOCSY. J Biomol NMR 5:87–92
Mercer TR, Dinger ME, Mattick JS (2009) Long non-coding RNAs: insights into functions. Nat Rev Genet 10:155–159
Mooren MM, Wijmenga SS, van der Marel GA, van Boom JH, Hilbers CW (1994) The solution structure of the circular trinucleotide cr (GpGpGp) determined by NMR and molecular mechanics calculation. Nucleic Acids Res 22:2658–2666
Nikonowicz EP, Pardi A (1993) An efficient procedure for assignment of the proton, carbon and nitrogen resonances in 13C/15 N labeled nucleic acids. J Mol Biol 232:1141–1156
Ramachandran R, Sich C, Grüne M, Soskic V, Brown L (1996) Sequential assignments in uniformly 13C-and 15 N-labelled RNAs: the HC (N, P) and HC (N, P)-CCH-TOCSY experiments. J Biomol NMR 7:251–255
Schwalbe H, Marino J, King G, Wechselberger R, Bermel W, Griesinger C (1994) Determination of a complete set of coupling constants in 13C-labeled oligonucleotides. J Biomol NMR 4:631–644
Stanek J, Augustyniak R, Kozminski W (2012) Suppression of sampling artefacts in high-resolution four-dimensional NMR spectra using signal separation algorithm. J Magn Reson 214:91–102
Stanek J, Podbevšek P, Koźmiński W, Plavec J, Cevec M (2013a) 4D Non-uniformly sampled C, C-NOESY experiment for sequential assignment of 13C, 15 N-labeled RNAs. J Biomol NMR 57:1–9
Stanek J, Saxena S, Geist L, Konrat R, Koźmiński W (2013b) Probing local backbone geometries in intrinsically disordered proteins by cross-correlated NMR relaxation. Angew Chem Int Ed 52:4604–4606
Varani G, Aboul-ela F, Allain FHT (1996) NMR investigation of RNA structure. Prog Nucl Magn Reson Spectrosc 29:51–127
Wijmenga SS, van Buuren BNM (1998) The use of NMR methods for conformational studies of nucleic acids. Prog Nucl Magn Reson Spectrosc 32:287–387
Zawadzka-Kazimierczuk A, Kozminski W, Sanderova H, Krasny L (2012) High dimensional and high resolution pulse sequences for backbone resonance assignment of intrinsically disordered proteins. J Biomol NMR 52:329–337
This work was supported by TEAM project operated within the Foundation for Polish Science. S.S. and W.K. thank Polish National Science Centre for the financial support with the Grant No. 2013/11/N/ST4/01827. The research was co-financed from Polish budget funds for science for years 2013–2014, the project No. IP2012 057872 (J.S.). M.C. and J.P. thank Slovenian Research Agency (ARRS, P1-242 and J1-6733). The study was carried out at the Biological and Chemical Research Centre, University of Warsaw, established within the project co-financed by European Union from the European Regional Development Fund under the Operational Programme Innovative Economy.
Biological and Chemical Research Centre (CENT III), Faculty of Chemistry, University of Warsaw, Pasteura1, 02093, Warsaw, Poland
Saurabh Saxena, Jan Stanek & Wiktor Koźmiński
Slovenian NMR Centre, National Institute of Chemistry, Hajdrihova ulica 19, 1000, Ljubljana, Slovenia
Mirko Cevec & Janez Plavec
EN-FIST Centre of Excellence, Dunajska cesta 156, 1000, Ljubljana, Slovenia
Janez Plavec
Faculty of Chemistry and Chemical Technology, University of Ljubljana, Aškerčeva cesta 5, 1000, Ljubljana, Slovenia
Correspondence to Wiktor Koźmiński.
Open Access This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
Saxena, S., Stanek, J., Cevec, M. et al. C4′/H4′ selective, non-uniformly sampled 4D HC(P)CH experiment for sequential assignments of 13C-labeled RNAs. J Biomol NMR 60, 91–98 (2014). https://doi.org/10.1007/s10858-014-9861-z
Issue Date: November 2014
RNA resonance assignment
Selective pulses
Four-dimensional NMR
Non-uniform sampling | CommonCrawl |
The BrightEyes-TTM as an open-source time-tagging module for democratising single-photon microscopy
Alessandro Rossetta1,2,3 na1,
Eli Slenders1 na1,
Mattia Donato ORCID: orcid.org/0000-0003-0026-747X1 na1,
Sabrina Zappone ORCID: orcid.org/0000-0003-2695-49991,3,
Francesco Fersini ORCID: orcid.org/0000-0003-1534-52741,3,
Martina Bruno4,5,
Francesco Diotalevi6,
Luca Lanzanò2,7,
Sami Koho1,
Giorgio Tortarolo1,
Andrea Barberis4,
Marco Crepaldi ORCID: orcid.org/0000-0002-5881-37206,
Eleonora Perego ORCID: orcid.org/0000-0002-1700-584X1 &
Giuseppe Vicidomini ORCID: orcid.org/0000-0002-3085-730X1
Confocal microscopy
Fluorescence imaging
Fluorescence laser-scanning microscopy (LSM) is experiencing a revolution thanks to new single-photon (SP) array detectors, which give access to an entirely new set of single-photon information. Together with the blooming of new SP LSM techniques and the development of tailored SP array detectors, there is a growing need for (i) DAQ systems capable of handling the high-throughput and high-resolution photon information generated by these detectors, and (ii) incorporating these DAQ protocols in existing fluorescence LSMs. We developed an open-source, low-cost, multi-channel time-tagging module (TTM) based on a field-programmable gate array that can tag in parallel multiple single-photon events, with 30 ps precision, and multiple synchronisation events, with 4 ns precision. We use the TTM to demonstrate live-cell super-resolved fluorescence lifetime image scanning microscopy and fluorescence lifetime fluctuation spectroscopy. We expect that our BrightEyes-TTM will support the microscopy community in spreading SP-LSM in many life science laboratories.
A revolution is happening in fluorescence laser-scanning microscopy (LSM): the radically new way in which the fluorescence signal is recorded with single-photon array detectors drastically expands the information content of any LSM experiment. For a conventional fluorescence laser-scanning microscope, an objective lens focuses the laser beam at a specific position in the sample, called the excitation or probing region. The same objective lens collects the emitted fluorescence and, together with a tube lens, projects the light onto the sensitive area of a single-element detector, such as a photomultiplier tube (PMT). Depending on the type of experiment, the probing region is scanned across the sample or kept steady. For imaging, the sample is raster scanned. For single-point fluorescence correlation spectroscopy (FCS), the laser beam is kept steady, whereas for scanning FCS, the laser beam scans the sample repeatedly in a circular or linear manner. During the whole measurement, the detector generates a one-dimensional signal (intensity versus time) that the data acquisition (DAQ) system integrates within the pixel dwell time (for imaging) or the temporal bins (for FCS). Such a signal-recording process induces information loss: the signal from the photons is integrated by the detector regardless of the photons' positions on the sensitive area and their arrival time with respect to a particular event, such as the fluorophore excitation event. Notably, other properties of light, such as the wavelength and the polarization, are also typically completely or partially discarded.
Single-photon (SP) array detectors, when combined with adequate DAQ systems, can preserve most of this information. In particular, asynchronous read-out SP array detectors—consisting of a matrix of fully independent elements that can detect a single-photon with several tens of picoseconds timing precision—have made it possible to spatiotemporally tag each fluorescence photon, i.e., to record simultaneously at which position of the array (spatial tag) and at which delay with respect to a reference time (temporal tag) the photon hits the detector.
Currently, the spatial tags can be used in two ways: firstly, by placing a bidimensional detector array in the LSM image plane, the probing region can be imaged (Fig. 1a). Secondly, by dispersing the fluorescence across the long axis of a linear detector array, the spatial tags enable spectrally resolved recording of the probing region, i.e., the spatial tag encodes the wavelength of the photon. At the same time, by exciting the sample with a pulsed laser and recording the fluorescence photon arrival times with respect to the excitation events, the temporal tags (i.e., the time difference between the excitation event and the photon detection, also called the start–stop time) allow for sub-nanosecond time-resolved measurements, such as fluorescence lifetime or photon correlation. Furthermore, recording the photon arrival times with respect to the beginning of the experiment allows for microsecond intensity fluctuation analysis, e.g., for time-resolved fluorescence fluctuation spectroscopy (FFS).
Fig. 1: Concepts of FLISM and FLFS.
a In FLISM, a pulsed laser beam is focused and scanned across the sample. For each position of the laser beam, the fluorescence is collected and focused onto a SPAD array detector. Every photon produces a pulse in one of the detection channels almost instantaneously. The BrightEyes-TTM measures the arrival time of the photon with respect to the last laser pulse and the pixel, line, and frame clock of the microscope. In single-point spot-variation FLFS, the laser beam is kept steady while the photon arrival times are measured. The movement of the fluorophores results in temporal fluctuations in the intensity trace. b A super-resolution fluorescence lifetime (FLISM) image can be reconstructed from the resulting 4D dataset (x, y, t, ch). For each time bin of the TCSPC histogram, a super-resolved ISM image is reconstructed with the adaptive pixel reassignment algorithm. All the images are then recombined, and the fluorescence lifetimes are calculated for each pixel, resulting in the final FLISM image. c In spot-variation FLFS, the diffusion time as a function of the focal spot area is measured. The dimension of the focal spot can be changed by combining the photon traces coming from different detection channels. From the autocorrelations of the resulting intensity time traces, the diffusion times, and hence the diffusion modality (free diffusion, diffusion through a meshwork, or diffusion in a sample comprising isolated microdomains), can be found. Simultaneously, from the start–stop times, the fluorescence lifetime τfl is measured. PCR photon count rate.
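As a minimal illustration of how a per-pixel lifetime estimate can be obtained from data of the kind sketched in Fig. 1b, the Python snippet below applies the mean-arrival-time (centroid) estimator to an (x, y, t) stack of start–stop time histograms. This is a common 'fast FLIM' shortcut, not the reconstruction pipeline used in this work (which combines adaptive pixel reassignment with lifetime fitting); the array shapes, names and toy numbers are assumptions for illustration.

```python
import numpy as np

def lifetime_map(tcspc, bin_width_ns, offset_ns=0.0):
    """Centroid ('fast FLIM') lifetime estimate from a TCSPC stack.

    tcspc : photon counts with shape (ny, nx, n_bins), i.e. one start-stop
            time histogram per pixel. For a mono-exponential decay well
            separated from the IRF, the mean arrival time approximates the
            fluorescence lifetime.
    """
    n_bins = tcspc.shape[-1]
    t = (np.arange(n_bins) + 0.5) * bin_width_ns       # bin centres in ns
    counts = tcspc.sum(axis=-1)
    with np.errstate(invalid="ignore", divide="ignore"):
        tau = (tcspc * t).sum(axis=-1) / counts - offset_ns
    return np.where(counts > 0, tau, np.nan)

# Toy example: 64 x 64 pixels, 4 ns lifetime, 48 ps bins over a 20 ns range
rng = np.random.default_rng(1)
t_axis = np.arange(0.0, 20.0, 0.048)
decay = np.exp(-t_axis / 4.0)
stack = rng.poisson(50 * decay / decay.sum(), size=(64, 64, t_axis.size))
print(np.nanmean(lifetime_map(stack, bin_width_ns=0.048)))
# ~3.9 ns: close to the 4 ns input, slightly reduced by the finite 20 ns window
```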
In summary, these spatial, temporal, and spectral photon signatures have opened a series of advanced fluorescence imaging and spectroscopy techniques precluded (or made more complex) by conventional single-element detectors. Recently, a new LSM architecture based on linear SP detectors has led to a revival of the combination of fluorescence lifetime and spectral imaging1—spectral fluorescence lifetime imaging microscopy (S-FLIM). At the same time, bidimensional SP array detectors have opened up new perspectives for image-scanning microscopy (ISM). ISM uses the information contained in the image of the probing region to reconstruct the specimen image with a twofold increase in spatial resolution and a higher signal-to-noise ratio (SNR) compared to conventional LSM2,3,4. Because bidimensional SP array detectors provide a sub-nanosecond time-resolved image of the probing region, ISM can be combined with fluorescence lifetime to create a super-resolution fluorescence lifetime imaging technique, called fluorescence lifetime ISM (FLISM)5, or to trigger the implementation of a new class of nanoscopy techniques, namely quantum ISM6,7. The microsecond time-resolved images can also be used to implement (i) high information content FCS and, more generally, fluorescence fluctuation spectroscopy8—usually referred to as comprehensive-correlation analysis (CCA)9; and (ii) the combination of super-resolution optical fluctuation imaging with ISM10. Hereafter, we refer to these techniques with the collective term single-photon laser-scanning microscopy (SP-LSM).
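For readers new to fluctuation analysis, the sketch below shows the normalized autocorrelation that underlies FCS-type analyses, computed from an intensity trace that has already been binned from the photon arrival times. It is a plain linear-lag estimator written for clarity; production FCS/CCA software typically uses multi-tau or FFT-based correlators, and the toy trace and its parameters are arbitrary.

```python
import numpy as np

def autocorrelation(intensity, max_lag=500):
    """Normalised autocorrelation G(tau) = <dI(t) dI(t+tau)> / <I>^2
    of a binned intensity trace, for lags 0 .. max_lag-1 bins."""
    I = np.asarray(intensity, dtype=float)
    dI = I - I.mean()
    n = I.size
    G = np.empty(max_lag)
    for lag in range(max_lag):
        G[lag] = np.mean(dI[: n - lag] * dI[lag:]) / I.mean() ** 2
    return G

# Toy trace: Poisson-distributed counts around a slowly fluctuating mean rate
rng = np.random.default_rng(2)
rate = 100.0 + 20.0 * np.sin(np.linspace(0.0, 40.0 * np.pi, 50_000))
trace = rng.poisson(rate)
G = autocorrelation(trace)
print(G[:5])   # decays on the time scale of the underlying fluctuations
```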
Key elements for implementing SP-LSM are the SP array detector and the DAQ system. Although analog-to-digital converters (e.g., constant-fraction discriminators) allow the use of photomultiplier tube arrays as SP detectors, they introduce a significant amount of unwanted correlation into the measurements11 and are outperformed by true SP detectors regarding the photon-time jitter/precision. An alternative to PMT-based SP array detectors is the AiryScan-inspired module in which the hexagonal-shaped multi-core fiber bundle is connected to a series of single-element single-photon avalanche diodes (SPADs)6 instead of PMTs as in the conventional AiryScan module12. Clearly, this module is expensive and not scalable. True SP array detectors based on the well-established SPAD array technology13,14 solved these limitations. In particular, asynchronous read-out SPAD array detectors have been tailored for SP-LSM applications15,16,17,18. These detectors have a small number of elements—but enough for sub-Nyquist sampling of the probing region—high photon-detection efficiency, low dark noise, high dynamics, high fill factor, low cross-talk, and low photon time-jitters.
Although the development of SPAD array detectors specifically designed for SP-LSM is gaining substantial momentum, no effort has been placed into developing an open-source data acquisition system able to (i) fully exploit the high-throughput and high-resolution photon-level information that these detectors provide; and (ii) offer flexibility and upgradability of the system. The lack of an open-source DAQ system may preclude a massive spreading of the above SP-LSM techniques and the emergence of new ones.
To address this need, we propose here an open-source multi-channel time-tagging module (TTM), called the BrightEyes-TTM, specifically designed to implement current and future fluorescence SP-LSM techniques. The BrightEyes-TTM has multiple photon- and reference channels to record at which element of the detector array, and when (with tunable precision) with respect to the reference events a single photon reaches the detector. The BrightEyes-TTM is based on a commercially available and low-cost field-programmable gate array (FPGA) evaluation board, equipped with a state-of-the-art FPGA and a series of I/O connectors that provide an easy interface of the board with the microscope, the SPAD array detector, and the computer. We chose an FPGA-based implementation to grant quick prototyping, easy updating, and adaptation: in particular, we envisage a module that can be updated—also remotely—by us or other groups to meet future requests from new SP-LSM techniques and SPAD array detectors.
We integrated the BrightEyes-TTM into existing custom SP-LSM architectures equipped with a 5 × 5 SPAD array detector prototype or a commercial 7 × 7 SPAD array detector, and we performed FLISM imaging (Fig. 1b) on a series of calibration and biological samples, including living cells. Furthermore, for the first time, we demonstrated the combination of CCA with fluorescence lifetime analysis (Fig. 1c). This synergy opens up a new series of fluorescence lifetime fluctuation spectroscopy (FLFS) techniques able to provide a more complete picture of the biomolecular processes inside living cells. As proof-of-principle, we correlated the diffusion mode of the eGFP protein with its fluorescence lifetime in live cells.
Despite the great potential of SP-LSM, we are aware that massive dissemination of this paradigm will be effective only if a broad range of laboratories have access to the TTM and can modify it according to their needs. For this reason, this manuscript provides detailed guidelines, hardware parts lists, and open-source code for the FPGA firmware and operational software.
Multi-channel time-tagging module
The BrightEyes-TTM includes multiple fine (picosecond precision) time-to-digital converters (TDCs) to measure the start–stop times (25 channels in the current release and 49 in the next), and three coarse (nanosecond precision) TDCs to measure the relative delays between photon and clock signals. To characterize the performance of the fine TDCs, we used a test-bench architecture based on the SYLAP pulse generator. Independently, we validated the coarse TDCs directly by integrating the BrightEyes-TTM into different SP-LSM systems.
We measured the linearity of the fine TDCs, which expresses the deviation from the ideal behavior of the converter, by performing a statistical code-density test. We fed a fixed-frequency signal (50 MHz) into the synchronization (SYNC) channel and a random signal into one of the photon channels (channel #12). After accumulating several million photon events, we built the start–stop time histogram, also called the time-correlated single-photon counting (TCSPC) histogram, which shows a differential nonlinearity (DNL) of σDNL = 0.06 least-significant-bit (LSB) and an integral nonlinearity (INL) of σINL = 0.08 LSB—with LSB = 48 ps (Supplementary Fig. S1). Such low values of nonlinearity are negligible in a typical measurement of the photons' temporal distribution.
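For readers who wish to reproduce this characterization, the following Python sketch shows how DNL and INL can be estimated from a code-density histogram (uniformly distributed input events). It is a minimal illustration, not the released BrightEyes-TTM analysis code; the number of bins and count level are hypothetical.

```python
import numpy as np

def dnl_inl(counts):
    """Estimate DNL and INL (in LSB) from a code-density test.

    counts: 1D array with the number of events recorded in each TDC bin
    when the input events are uniformly distributed in time.
    """
    counts = np.asarray(counts, dtype=float)
    expected = counts.mean()              # ideal (uniform) counts per bin
    dnl = counts / expected - 1.0         # per-bin deviation from 1 LSB
    inl = np.cumsum(dnl)                  # accumulated deviation
    return dnl, inl

# Synthetic example: ~416 bins of 48 ps covering a 20 ns range
rng = np.random.default_rng(0)
hist = rng.poisson(lam=10_000, size=416)
dnl, inl = dnl_inl(hist)
print(f"sigma_DNL = {dnl.std():.3f} LSB, sigma_INL = {inl.std():.3f} LSB")
```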
We then characterized the single-shot precision (SSP) of the fine TDC by repeatedly measuring a constant start–stop time interval. The SSP is the standard deviation of the distribution of measured values when a constant time interval is measured many times. We fed a fixed-frequency (50 MHz) signal into the SYNC channel and a second, synchronized signal (decimated 30 times, i.e., ~1.6 MHz) with a tunable fixed delay into one of the photon channels (channel #12). After accumulating several million sync–photon pairs, we built the start–stop time histogram, which in this case represents the distribution of the measurement error (Fig. 2a–c). By fitting the start–stop time histogram with a Gaussian function, we estimated a precision of σ = 30 ps (standard deviation of the fitted Gaussian distribution). We tuned the delay between the two signals across the whole temporal range of the fine TDC (here, 20 ns) and observed a similar precision for all the imposed delays (Supplementary Fig. S2), confirming the linearity of the fine TDC. We repeated the same SSP experiment for the other channels and obtained a similar precision (Fig. 2d, e). This SSP allows leveraging the photon-timing precision of the SPAD array detector.
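As a concrete illustration of how the SSP value is extracted, the sketch below builds a start–stop histogram from repeated measurements of a constant delay and fits it with a Gaussian. The function names, the synthetic data, and the 48 ps binning choice are illustrative assumptions, not the analysis code used for Fig. 2.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, amplitude, mu, sigma):
    return amplitude * np.exp(-0.5 * ((t - mu) / sigma) ** 2)

def single_shot_precision(start_stop_times_ps, bin_width_ps=48.0):
    """Return the SSP (sigma of a Gaussian fit, in ps) obtained from
    repeated measurements of a constant start-stop interval."""
    t = np.asarray(start_stop_times_ps, dtype=float)
    edges = np.arange(t.min(), t.max() + bin_width_ps, bin_width_ps)
    counts, edges = np.histogram(t, bins=edges)
    centers = 0.5 * (edges[:-1] + edges[1:])
    p0 = (counts.max(), centers[np.argmax(counts)], bin_width_ps)
    popt, _ = curve_fit(gaussian, centers, counts, p0=p0)
    return abs(popt[2])

# Synthetic example: a 5 ns delay measured with 30 ps jitter
rng = np.random.default_rng(1)
measured = rng.normal(loc=5_000.0, scale=30.0, size=1_000_000)
print(f"SSP ~ {single_shot_precision(measured):.1f} ps")
```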
Fig. 2: BrightEyes-TTM characterization and validation.
a–c Single-shot precision experiment. a Temporal schematic representation: a fixed frequency SYNC clock signal and a synchronized but delayed (in a controlled way) signal. b Unified representation of the start–stop time histograms as a function of the imposed delay between the two signals. c Single start–stop time histogram for the delay denoted by the dotted white line in the middle panel. The inset shows a magnification of the histogram for a selected temporal interval, superimposed with the Gaussian fit (red line). d, e Dual-channel single-shot precision experiment. d Temporal schematic representation: a fixed frequency SYNC clock signal and a pair of synchronized signals (channel A and channel B). The delays between all three signals are fixed. e Jitter map for each pair of channels (here, 25 channels), i.e., error in the time-difference estimation between any two channels, measured as the standard deviation of a Gaussian fit of the error distribution. The diagonal of the map represents the sigma of the single-channel single-shot precision experiment. f Normalized impulse-response functions (dark colors) and fluorescein–water solution decay histograms (light colors) for the BrightEyes-TTM and DPC-230 multi-channel card. The instrument response functions (IRFs) represent the response of the whole architecture (microscope and DAQ) to a fast (sub-nanosecond) fluorescence emission. The full-width-at-half-max values are 250 ps for the DPC-230 card and 200 ps for the BrightEyes-TTM. The decay histograms are also typically referred to as start–stop time histograms or TCSPC histograms. All single-channel measurements were done with TTM channel #12, which received the photon signal from the central element of the SPAD array detector. All start–stop histograms have 48 ps granularity (bin width).
Furthermore, we repeated the SSP experiment by feeding the same signal into a second photon channel. In this case, the delays between all three signals (channel A, channel B, and SYNC) are kept fixed, and we used the TTM to measure the delay between the two photon-channel signals. Similar to the start–stop time histogram, we built a histogram that reports the elapsed time between the two photon-channel signals, and we fitted the distribution with a Gaussian function. We performed the experiment for all possible channel pairs and obtained a σ precision value ranging from 23 to 33 ps, depending on the channel pair (Fig. 2d, e).
Lastly, we checked the sustained photon rate of the BrightEyes-TTM by repeating the SSP experiment for increasing photon rates (from 100 kHz to 50 MHz), keeping the SYNC signal at 50 MHz. The module starts saturating at ~15 MHz for a single channel and at ~5 MHz when all 25 channels simultaneously receive a photon signal, which corresponds to a total photon flux of 125 MHz (Supplementary Fig. S3).
After the test-bench measurements, we integrated the BrightEyes-TTM into a custom-built single-photon laser-scanning microscope equipped with a 5 × 5 SPAD array detector prototype and a picosecond pulsed diode laser. To measure the impulse-response function (IRF) of the system, we used a solution of fluorescein, saturated with potassium iodide to quench the fluorescence19 (Fig. 2f). The relatively high full-width at half-maximum value of the IRF (240 ps) is due to the convolution of the single-shot response (~30 ps) with the laser pulse width (>100 ps), the SPAD photon-timing jitter (>90 ps), and the jitter/dispersion introduced by the optical system. We compared the IRF of the BrightEyes-TTM with the IRF of the DPC-230 commercial multi-channel time-tagging card measured on the same optical architecture. Notably, because of its poorer time resolution (164 ps from the datasheet), the DPC-230 is not able to reveal the typical cross-talk effect of the SPAD array detector17, which is visible with the BrightEyes-TTM as an additional bump (Fig. 2f). We used the two time-tagging systems to compare the decay distributions of a pure (not quenched) solution of fluorescein. The two TCSPC histograms show very similar shapes (Fig. 2f), which is confirmed by fitting them with a single-exponential decay model: τfl = (3.97 ± 0.04) ns and τfl = (3.99 ± 0.01) ns for the BrightEyes-TTM and the DPC-230, respectively. To demonstrate the ability of the BrightEyes-TTM to work at different temporal ranges, we repeated the fluorescein experiment for different laser frequencies (80, 40, 20, 10, and 5 MHz). The TCSPC histograms do not show significant differences (Supplementary Fig. S4).
To test the BrightEyes-TTM for different fluorescence lifetime values, we measured the decay distributions of the quenched fluorescein solution for increasing concentrations of potassium iodide (Supplementary Fig. S5). Each measurement was analyzed by performing a single-exponential fit of the TCSPC histogram and by phasor analysis. Phasor plots visualize the fluorescence lifetime by projecting the TCSPC histogram onto a 2D coordinate system20, which allows interpreting FLIM data without the need for fitting. The higher the potassium iodide concentration, the stronger the quenching, and thus the longer the measurement needs to be in order to accumulate good photon statistics. For this reason, the dark noise (which appears as an uncorrelated background in the TCSPC histogram) increases with the quencher concentration. The same effect appears on the phasor plot: because the decays follow a single-exponential function, owing to the collisional mechanism of the quenching, all points, regardless of the concentration, should lie on the universal semicircle. However, the uncorrelated background shifts the points toward the origin, since a lower signal-to-background ratio results in stronger demodulation.
In conclusion, the BrightEyes-TTM offers a combination of single-shot precision, linearity, temporal range, number of channels, and sustained count rate that is suitable for measuring fluorescence lifetimes with state-of-the-art SPAD array detectors for LSM. It is worth noting that the literature reports different TDC implementations based on the same Kintex-7 FPGA family with superior characteristics (Supplementary Table S2). This indicates the possibility of further improving the characteristics of our BrightEyes-TTM to match the expected enhancement in performance of the next SPAD array generations. In fact, it is important to observe that our current 25-channel TTM implementation (as well as the 49-channel implementation of the next BrightEyes-TTM release) uses a small portion of the available FPGA resources (Supplementary Fig. S6), thus offering room for implementing strategies to improve the overall characteristics of our TTM21.
Fluorescence lifetime image-scanning microscopy
To demonstrate the ability of the BrightEyes-TTM in the context of SP-LSM imaging, we implemented FLISM on the same custom-made LSM used for the previous measurements. As with all LSM imaging techniques, FLISM requires acquiring the fluorescence photons in synchronization with the scanning system (e.g., galvanometric mirrors and piezo stages). We obtain this synchronization by measuring the photon arrival times with respect to the clock signals typically provided by the scanning system (i.e., pixel/line/frame clocks). Since the scanning synchronization does not need high precision, we use the reference (REF) channels of the TTM, which have a coarse TDC with nanosecond precision (~4.2 ns). The current public BrightEyes-TTM has three REF channels (pixel, line, and frame), but additional channels can be implemented with minimal changes to the architecture. Having more REF channels would, for example, allow recording the photons in synchronization with a change in the excitation conditions, such as intensity, laser wavelength, or polarization.
Thanks to the synchronization signals, the stream of photons recorded by the BrightEyes-TTM leads to a 4D photon-counting image (ch, x, y, Δt), where ch is the dimension that describes the element of the SPAD array, (x, y) are the spatial coordinates of the laser-beam scanning system (in these experiments we recorded a single frame), and Δt is the dimension of the TCSPC histogram (Supplementary Table S1). First, we used the adaptive pixel-reassignment (APR) algorithm16,22 to reconstruct the 3D (x, y, Δt) image-scanning microscopy (ISM) intensity image, with high spatial resolution and high signal-to-noise ratio. Next, we applied both conventional fluorescence lifetime fitting and phasor analysis to obtain the FLISM image (x, y) and the phasor plot (g, s), respectively. The side-by-side comparison of the intensity-based images of fluorescent beads clearly shows the optical resolution enhancement of ISM with respect to conventional (1.4 AU) imaging and the higher SNR with respect to confocal (0.2 AU) imaging (Fig. 3a, c and Supplementary Fig. S7). In the context of fluorescence lifetime imaging, the higher SNR leads to a higher lifetime precision, as depicted by the lifetime histograms and the phasor plots (Fig. 3b, d). We performed the same analysis on biological samples, fixed and live cells (Supplementary Fig. S8), which illustrates the flexibility of the BrightEyes-TTM for different samples. Because the data collected by the BrightEyes-TTM need post-processing before being visualized, to obtain a real-time intensity-based image as a guide during FLISM experiments, we implemented in the BrightEyes-TTM a digital output signal which duplicates the signal of one photon channel. This duplicated signal (here the channel of the central element of the SPAD array) is sent back to the control system of the microscope, which performs real-time imaging (Supplementary Fig. S9).
Fig. 3: BrightEyes-TTM for FLISM.
Imaging and FLISM analysis of 100 nm fluorescent beads with a custom-built single-photon laser-scanning microscope equipped with a 5 × 5 SPAD array detector prototype. a Side-by-side comparison of confocal (left, pinhole 0.2 AU), adaptive pixel-reassignment ISM (center), and open confocal (right, pinhole 1.4 AU) imaging. AU = Airy unit. Each imaging modality shows both the intensity-based image (top-left corner) and the lifetime image (bottom-right corner). In the lifetime images, a two-dimensional look-up table encodes both the intensity values (i.e., photon counts) and the excited-state lifetime values (i.e., τfl). The intensity-based images are obtained by integrating the corresponding 3D data (x, y, Δt) along the start–stop time dimension Δt. b Histogram distributions showing the number of pixels versus lifetime values—in violet, lifetime values which fall outside the selected lifetime interval. The selected interval was chosen by visually inspecting the FLISM image. The same intervals were used for the confocal and open confocal data. The lifetime images report in violet the pixels whose lifetime is in this interval. c Zoomed regions in the white-dashed boxes, with the intensity panels re-normalized to the maximum and minimum values. d Pixel-intensity phasor plots, with 5% and 10% thresholds in gray and color, respectively. Pixel dwell time 100 μs. Scale bars 2 μm.
We further validated the BrightEyes-TTM for a realistic application of fluorescence lifetime imaging, namely assessing microenvironmental variations in a living cell (Fig. 4). In particular, we showed how the fluorescent polarity-sensitive membrane dye di-4-ANEPPDHQ allows visualizing ordered/disordered-phase membrane domains16,23. Monitoring the phase and phase changes of membranes is important since these domains affect many cellular membrane processes. While the fluorescent dye molecules in the plasma membrane show relatively long fluorescence lifetimes, denoting a high membrane order, dye molecules in the intracellular membranes show a shorter fluorescence lifetime, denoting membrane disorder. The difference between the two microenvironmental conditions can be highlighted with a phasor-based segmentation (Fig. 4c, d). We can follow the change in lifetime over time (Supplementary Fig. S31, Supplementary Movie 1), which gives kinetic information on changes in the microenvironment of the fluorescent molecules. In general, this approach allows real-time tracking of changes in the structure and function of living systems, also enabling functional measurements such as Förster resonance energy transfer (FRET).
Fig. 4: Fluorescence lifetime image-scanning microscopy in live cells.
a Intensity-based ISM image and b lifetime-based FLISM image of live HeLa cells stained with the polarity-sensitive fluorescent probe di-4-ANEPPDHQ. Side images depict the areas within the dashed white boxes. c Histogram distribution of the fluorescence lifetime values (top): number of pixels versus lifetime values. Pixel-intensity-thresholded phasor plots (bottom): number of pixels versus the polar coordinates (10% threshold). d Phasor-based segmentation, i.e., images obtained by back-projection of the points within the red (long lifetime, ordered membrane) and green (short lifetime, disordered membrane) areas. Scale bars 5 μm. Pixel dwell time 150 μs. While here we show one cell, this experiment was independently performed two times on the same HeLa cell line. Setup: custom-built single-photon laser-scanning microscope equipped with a 5 × 5 SPAD array detector prototype.
Finally, to confirm the versatility of the BrightEyes-TTM and to consolidate our vision about the future of SP-LSM, we demonstrated FLISM on a custom LSM setup equipped with a commercial 7 × 7 SPAD array detector (Supplementary Fig. S10). This detector uses the low-voltage differential signaling (LVDS) output standard instead of the transistor–transistor logic (TTL) standard used by the SPAD array prototype. Nonetheless, our BrightEyes-TTM can be used by upgrading the FPGA firmware and by substituting the I/O daughter card with one designed for the commercial 7 × 7 SPAD array detector. In this work, we connected only the 5 × 5 central elements of the 7 × 7 SPAD array to the TTM.
Fluorescence lifetime fluctuation spectroscopy
To show the potential opened by the BrightEyes-TTM in the context of SP-LSM, we introduced fluorescence lifetime fluctuation spectroscopy (FLFS). Specifically, we developed two types of spot-variation FLFS, i.e., circular scanning FLFS and steady-beam (i.e., single-point) FLFS, and we used these techniques to probe the dynamics of fluorescent molecules both in vitro and in living cells. Because the BrightEyes-TTM provides both the photons' absolute times tphoton (i.e., the delay of each photon with respect to the beginning of the experiment) and the photons' start–stop times, the diffusion dynamics and the fluorescence lifetime of a molecule can be obtained simultaneously. Furthermore, because the detector array allows the simultaneous acquisition of fluorescence fluctuations in different detection volumes, spot-variation FCS can be straightforwardly implemented in a single experiment8: by measuring the diffusion time τD for different detection volumes, spot-variation FCS allows distinguishing between different molecular dynamics modes24. In addition, accessing the start–stop times allows filtering of the autocorrelation curves to mitigate artefacts such as detector afterpulsing.
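To make the FLFS analysis chain concrete, the sketch below shows one way to compute a normalized intensity autocorrelation from the absolute photon times of a single channel by binning them into an intensity trace. It uses a simple linear-lag estimator for clarity; a multi-tau scheme is normally preferred for the wide lag ranges used in FCS, and the function and variable names here are illustrative rather than part of the released analysis software.

```python
import numpy as np

def autocorrelation(abs_times_s, bin_width_s=1e-5, max_lag_bins=1000):
    """Normalized intensity autocorrelation G(tau) from absolute photon
    arrival times (linear-lag estimator, for illustration only)."""
    t = np.asarray(abs_times_s, dtype=float)
    n_bins = int(np.ceil(t.max() / bin_width_s))
    trace, _ = np.histogram(t, bins=n_bins, range=(0.0, n_bins * bin_width_s))
    trace = trace.astype(float)
    mean = trace.mean()
    max_lag = min(max_lag_bins, trace.size - 1)
    lags = np.arange(1, max_lag + 1)
    g = np.empty(lags.size)
    for i, lag in enumerate(lags):
        g[i] = np.mean(trace[:-lag] * trace[lag:]) / mean**2 - 1.0
    return lags * bin_width_s, g

# Usage sketch: tau, g = autocorrelation(photon_times_central_element)
```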
As an example of circular scanning FLFS, we measured freely diffusing fluorescent nanobeads (Fig. 5). From the absolute times, we calculated the unfiltered autocorrelations for the central SPAD array detector element, for the sum of the nine most central elements (called sum 3 × 3), and for the sum over all elements except the four corner elements (called sum 5 × 5). By scanning the probing region in a circle across the sample, the focal-spot size ω0 and the diffusion time τD can be derived from the same experiment25, i.e., no further calibration measurement is needed to obtain ω0 (Fig. 5a). In all cases, we fitted the data with a model assuming a 3D Gaussian focal volume, with the diffusion time, the particle concentration, and ω0 as fit parameters (Fig. 5d). The fitted diffusion coefficient, (14.3 ± 0.5) μm2/s, corresponds to the value expected from the diameter of the beads (estimated bead radius r = 27 nm). The diffusion law \(\tau_D(\omega_0^2)\)26,27 confirms that the beads were freely diffusing: τD depends linearly on \(\omega_0^2\) with a zero intercept (Fig. 5e, left panel). However, the fitted concentration of the bead sample does not scale proportionally to the focal volume (Fig. 5e, right panel, squares), indicating that the amplitudes of the autocorrelation functions are not correct. Using the start–stop time histograms (Fig. 5b), we calculated the filter functions which attenuate the detector afterpulsing and background signals (Fig. 5c). The TCSPC histogram of each channel was fitted with an exponential decay function plus an offset, describing the fluorescence and the background, respectively. From the resulting fit, the filter functions, which assign a weight to every count, were calculated28,29. Calculating the autocorrelation functions of the weighted photon traces yields significantly altered amplitudes, whereas the diffusion times are mostly left unchanged. As a result, the filtered autocorrelations show the expected behavior for both the diffusion time and the concentration (Fig. 5e, circles). If the fluorescence signal is strong enough with respect to the background, filtering is not needed, as shown for freely diffusing goat anti-mouse antibodies conjugated with Alexa 488 (Supplementary Fig. S11).
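The spot-variation step itself reduces to a linear fit of the diffusion law, τD = ω0²/(4D) + t0, across the detection volumes. The sketch below illustrates this arithmetic with purely illustrative numbers (they are not the measured values reported above); a zero intercept t0 indicates free diffusion.

```python
import numpy as np

def diffusion_law(omega0_um, tau_d_s):
    """Fit the FCS diffusion law tau_D = omega0^2 / (4 D) + t0.

    omega0_um: focal-spot radii (in micrometers) for the different
               detection volumes (e.g., central, sum 3x3, sum 5x5).
    tau_d_s:   corresponding diffusion times in seconds.
    Returns (D in um^2/s, intercept t0 in s).
    """
    w2 = np.asarray(omega0_um, dtype=float) ** 2
    tau = np.asarray(tau_d_s, dtype=float)
    slope, intercept = np.polyfit(w2, tau, deg=1)
    return 1.0 / (4.0 * slope), intercept

# Illustrative numbers only:
D, t0 = diffusion_law([0.25, 0.32, 0.40], [1.1e-3, 1.8e-3, 2.8e-3])
print(f"D ~ {D:.1f} um^2/s, intercept ~ {t0*1e3:.2f} ms")
```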
Fig. 5: BrightEyes-TTM for circular scanning FLFS on freely diffusing fluorescent beads.
a Schematic representation of the concept of circular scanning FLFS. A pulsed laser beam is scanned in circles of radius R (top panel) while both the absolute arrival times (center-left panel) and the start–stop times (center-right panel) are registered. The autocorrelation function of the intensity trace is calculated, from which the size of the focal spot ω0 and the diffusion time τD can be simultaneously extracted. b Start–stop time histograms for the different pixels, bin width 48 ps, total measurement time 226 s, central pixel in black. c Exemplary filter functions for the central-pixel data. d Autocorrelations and fits for the central pixel, sum 3 × 3, and sum 5 × 5, for the unfiltered (left) and filtered (right) case. e Diffusion time as a function of \(\omega_0^2\) (left) and average number of particles in the focal volume as a function of the focal volume (right). The corresponding diffusion coefficients are (14.3 ± 0.5) μm2/s (unfiltered) and (14.0 ± 0.4) μm2/s (filtered). The fitted particle concentrations are (7 ± 3)/μm3 (unfiltered) and (1.70 ± 0.03)/μm3 (filtered). Setup: custom-built single-photon laser-scanning microscope equipped with a 5 × 5 SPAD array detector prototype.
Especially in a biological context, having simultaneous access to lifetime and diffusion information is useful not only for obtaining a robust spot-variation FCS analysis but also for correlating the mobility mode of the investigated molecules with their microenvironment or structural changes. In fact, by using probes sensitive to specific environmental states or FRET constructs, variations in the fluorescence lifetime can be linked to biomolecular changes30. As a proof-of-principle example, we used the BrightEyes-TTM to measure the diffusion of monomeric eGFP inside living cells and correlated its mobility with its fluorescence lifetime (Fig. 6). Since our spot-variation FCS implementation allows extracting the mobility mode from a specific cell position18, e.g., the measurement position for single-point FCS, and within a specific temporal window (a few seconds), it allows revealing both spatial and temporal heterogeneity in the molecular diffusive behavior. With the BrightEyes-TTM, we can simultaneously perform ISM and FLISM on whole cells (Fig. 6a), and then select specific positions for performing FLFS. For each position, we use (i) the absolute arrival times to create the fluorescence time traces (Fig. 6c) and calculate the corresponding autocorrelations (Fig. 6d), and (ii) the photon start–stop times to create the start–stop histograms (Fig. 6b). Spot-variation FCS performed at different cell positions showed free-diffusion dynamics but different diffusion coefficients (Fig. 6e), indicating mobility variability at the single-cell level. However, the average diffusion coefficient, D = (34 ± 12) μm2/s, compares well with previous measurements of free eGFP in living cells18,31 and with the value expected from its size (MW = 27 kDa for monomeric eGFP). We performed measurements in both the cell cytoplasm and the nuclei. Even though the eGFP molecules were not expected to be present in the cell nuclei, the (monomeric) eGFP molecules could diffuse into the nuclei owing to their small size32. A slower diffusion of (20 ± 3) μm2/s was measured in the nuclei compared to (35 ± 10) μm2/s in the cytoplasm. This difference might be caused by the densely packed molecular environment of the nuclei. Repeating the analysis for consecutive time windows of 5 s each for a single position in the cell (here shown for the cytoplasm, circle in Fig. 6a), we find a temporal variability of the mobility mode (Fig. 6f). As a parameter for mobility, we calculated the ratio between the diffusion coefficients at different spatial scales, i.e., Dcentral/D5×5, which reflects the type of mobility, and we correlated it with the fluorescence lifetime (Fig. 6g). In this biological context, no differences in the fluorescence lifetime are expected over time, as eGFP is not tagged to any specific protein. Moreover, the constant behavior of the fluorescence lifetime over time demonstrates the viability of our system with biological samples for long acquisition times. On the other hand, the change in the diffusion modality suggests more complex protein dynamics, which might be caused by the heterogeneous cytosolic environment during physiological cellular processes. Similar to imaging, we took advantage of the BrightEyes-TTM output signal, which duplicates the central-element photon channel, and we fed this signal into the control system of the microscope for a real-time display of the intensity trace and the corresponding autocorrelation function.
The autocorrelation obtained from the BrightEyes-TTM is identical to the curve obtained from the reference system (Supplementary Fig. S12).
Fig. 6: Fluorescence fluctuation spectroscopy on living cells.
a ISM (bottom-left corner) and FLISM (top-right corner) images of a HEK293T cell expressing eGFP. b, c Start–stop time histograms (central pixel in black) and intensity time traces for all 25 channels of a 100 s FLFS measurement. The blue circle in a depicts the position in the cell where the measurement was performed. d Autocorrelation curves (scattered points) and fits (lines) for the central pixel, sum 3 × 3, and sum 5 × 5 curves obtained from (c). e Spot-variation analysis: the dashed black line represents the average (D = 34 ± 12 μm2/s, N = 5) of the dashed light-gray lines. Each dashed light-gray line represents a different position within the same cell. f Spot-variation analysis as a function of the measurement time course. Data from (c). The intensity time traces are divided into chunks of 5 s; each chunk is analyzed by means of spot-variation FCS and generates a dashed light-gray line. The dashed black line represents the average (D = 32 ± 5 μm2/s, N = 14) of the dashed light-gray lines. Error bars in (e, f) represent standard deviations. g Ratio between the diffusion coefficients measured for the central pixel and sum 5 × 5 (Dcentral/D5×5), overlaid with the fluorescence lifetime as a function of the measurement time course. Data from (f). Scale bar 5 μm. Pixel dwell time 100 μs. The data were acquired from five different cells; in each cell multiple positions (from 3 to 5) were sampled. Setup: custom-built single-photon laser-scanning microscope equipped with a 5 × 5 SPAD array detector prototype.
We present a low-cost (<$3000) and multi-channel TTM-DAQ system specifically designed for fluorescence SP-LSM: a family of methods that leverages the ability of SP array detectors to analyze the specimen's fluorescence photon by photon. We first validated the versatility of the BrightEyes-TTM by implementing super-resolution FLISM. Next, to demonstrate the perspectives opened by the BrightEyes-TTM, we introduced another SP-LSM technique, namely FLFS. We recently demonstrated that SPAD array detectors are the method of choice to implement CCA8. CCA combines in a single experiment a series of well-established fluorescence fluctuation spectroscopy approaches to provide a global analysis for deciphering biomolecular dynamics in living cells. FLFS combines CCA with fluorescence lifetime analysis to further enhance the information content of a single experiment. We used FLFS to monitor the relationship between the diffusion mode of eGFP and its fluorescence lifetime in living cells. In this context, we envisage the combination of FLFS with FRET sensors to open new opportunities for studying complex processes under physiological conditions33.
We implemented the TTM architecture on an FPGA, mounted on a readily available development kit, thus providing design flexibility and upgradability while also allowing fast prototyping and testing of potential new BrightEyes-TTM releases. Compared to general-purpose commercial TTMs, our current BrightEyes-TTM version provides similar characteristics (linearity, temporal range, and sustained count rate) or partially inferior characteristics (temporal precision) but is still ideally suited for SP-LSM with current SPAD array detectors (Supplementary Table S3). Indeed, because our TTM has been specifically designed for fluorescence analysis, temporal constraints can be relaxed to the benefit of (i) compatibility with existing LSM systems, (ii) the possibility to integrate other features, and (iii) scalability in terms of photon channels. The BrightEyes-TTM supports 25 channels (i.e., the optimal number of elements requested for an SP detector array to implement ISM) in a single module. All commercial TTMs require multiple synchronized modules to support 25 channels. Furthermore, we anticipate the implementation of a new open-source BrightEyes-TTM version capable of handling up to 49 photon channels, which is useful for other SP-LSM techniques, such as S-FLIM.
Whilst the time-tagging approach implements the most informative DAQ strategy, it requires transferring (to the PC) and storing a substantial amount of data. Thus, this approach is not practically scalable to the large (megapixel) SPAD cameras requested by wide-field microscopy34. There, an application-specific integrated circuit (ASIC) TDC is typically implemented directly at the chip level for each SPAD element or cluster of SPAD elements35, and the temporal information is pre-processed to reduce the amount of data to transfer. For example, during the measurement, the detector records the start–stop histogram rather than the photon time-tags. However, also in the context of small SPAD array detectors, it could be important to reduce the data transferred to the PC. For this reason, a potential direction for the next developments in the BrightEyes-TTM project is the migration to a new FPGA development kit equipped with ARM-based processors. In this case, it might also be interesting to explore next-generation FPGAs with an improved technological node (e.g., 20-nm UltraScale), for which TDCs with superior performance have been demonstrated36.
The principal aim of this work is to democratize SP-LSM by giving any microscopy laboratory the possibility to upgrade its existing LSM systems. Together with the TTM, the other important element needed to achieve this aim is the SPAD array detector. We demonstrate here the compatibility of the BrightEyes-TTM not only with a SPAD array detector prototype but also with a commercial SPAD array detector. Currently, two startups are selling SPAD array detectors for SP-LSM (Genoa Instruments and Pi Imaging Technology), but the interest of the major microscopy companies in this technology is growing: the AiryScan from Zeiss has become very popular, and its transition to a single-photon detector will be natural. Furthermore, Genoa Instruments and Abberior Instruments have already proposed integrating a SPAD array detector into their LSM products. Following this trend, we expect that well-established detector companies will also release SPAD array detectors specifically designed for SP-LSM.
The second aim of this work is to trigger the interest of the microscopy community and establish the BrightEyes-TTM as a common platform for further developments in the context of SP-LSM. We are fully aware that these aims can only be achieved if the microscopy maker community37 has full access to this device; thus, we released the results of this work as open source. The transition from conventional LSM to SP-LSM can pass through the implementation of standard FLIM with a single-element detector, such as a PMT or a SPAD. Therefore, we also demonstrated the use of the BrightEyes-TTM to transform a commercial Nikon confocal LSM into a FLIM system (Supplementary Fig. S13) and to support well-established single-element SPADs for FLIM (Supplementary Fig. S14).
By implementing different low-precision reference signals, the TTM can collect photons in synchronization with many different optomechanical devices that could, directly or indirectly, provide additional information about each photon. For example, polarization modulators and/or analyzers can help tag photons with an excitation and emission polarization signature. Acousto-optic tunable filters and spectrometers can provide an excitation and emission wavelength signature.
We anticipate a future in which each single fluorescence photon will be tagged with a series of stamps describing not only its spatial and temporal properties, as shown in this work, but also its polarization and wavelength characteristics. A series of new algorithms, based on conventional statistical analysis or machine learning techniques, and new data formats will be developed to analyze and store such multiparameter single-photon datasets. As single-molecule techniques38 have revolutionized cellular and molecular biology by observing molecule by molecule rather than as an ensemble, we expect that recording the fluorescence signal photon by photon, while preserving most of the encoded information, can provide similar outcomes.
Although we believe that the life-science community will benefit the most from SP-LSM technology, we are convinced that SP-LSM will also find many applications in materials science, in particular in the field of single-dot spectroscopy39.
Time-tagging module architecture
In the context of SP-LSM, the goal of the BrightEyes-TTM is to tag every single photon that reaches the detector with a spatial signature and a series of temporal signatures (Supplementary Fig. S15). The temporal signatures are the delays of the photons with respect to specific events: (i) the fluorophore excitation events induced by a pulsed laser; (ii) the changes of some experimental condition (called here second-class or REF events), e.g., the start of the experiment or a position change of the probing region. Measuring the photon arrival time with respect to the excitation event (the so-called start–stop time) typically requires high temporal precision (better than the SPAD array detector's photon-timing jitter, <100 ps) and a temporal range larger than the pulse-to-pulse interval of the laser (10–1000 ns for 1–100 MHz laser pulse frequencies). For all second-class events, a nanosecond temporal precision is sufficient, but the requested temporal range increases up to seconds. To meet these conditions, we implemented different time-to-digital converters (TDCs) for the different types of input signals. In particular, we implemented fine TDCs with picosecond precision for the photon and laser SYNC signals and coarse TDCs with nanosecond precision for the reference (REF) signals (Supplementary Fig. S16). Since most fluorescence applications do not require sub-picosecond temporal precision and we aimed to develop an open-source upgradable TTM, FPGA-based TDCs were the natural choice for the BrightEyes-TTM.
Within FPGA-based TDCs, the literature reports several different architectures21. Each architecture offers a compromise between different specifications (e.g., temporal precision, temporal range, temporal resolution, dead time, linearity, and FPGA resources). Using an FPGA, the delay between two events can be measured with a counter which simply counts the number of clock cycles between the two events. The counter approach yields a precision no better than the clock period, which is typically a few nanoseconds for low-cost devices (as in our case), and—in principle—an infinite temporal range. While this precision is suitable for measuring the photon arrival time with respect to second-class events, the counter approach does not meet the requirements of the start–stop time measurement. For the delay with respect to the excitation event (fine TDC), a higher precision (10–100 ps) is achieved by running the photon signal (i.e., the START signal) through a fast tapped delay line (TDL) and then measuring the position that is reached when the SYNC signal from the laser (i.e., the STOP signal) is received. The downside of the TDL approach is that the maximum measurable delay (i.e., the temporal range) depends on the length of the TDL, and thus on the available FPGA resources. To keep the FPGA resources low and to implement on the same architecture both a few coarse TDCs and one fine TDC for each SPAD array element, we implemented a sliding-scale interpolating TDC architecture40 (Supplementary Fig. S17). This method uses a pair of tapped delay lines to measure the start–stop time with tens of picoseconds precision, combined with a free-running coarse counter to extend the temporal range. At the same time, the architecture uses the free-running coarse counter to measure the second-class events with a few nanoseconds precision. In particular, the architecture combines N + 1 (N = 25 photon channels in this implementation) tapped delay lines and a coarse counter at 240 MHz to obtain N fine TDCs with tens of picoseconds precision (for the start–stop time of each photon channel) and M coarse TDCs with nanosecond precision (M = 3 reference channels in this implementation). For both TDCs, the temporal range is—in principle—limitless. Notably, each fine TDC uses a dedicated START tapped delay line but shares the STOP delay line with the other fine TDCs; thus, the start–stop time resulting from each fine TDC is measured with respect to the same SYNC signal, which is what is typically needed in all SP-LSM applications. Furthermore, the high-frequency clock (240 MHz in our case) allows for keeping the delay lines short (slightly longer than the clock period, ~4.2 ns), thus reducing both the FPGA resources needed and the dead time of the fine TDC. Notably, the dead time of our TDC (~4.2 ns) is shorter than typical SPAD detector hold-off times (>10 ns) and typical pulsed-laser excitation periods for fluorescence applications (12.5–50 ns).
By using two different tapped delay lines for the START and STOP signals, the architecture ensures that the TDCs are asynchronous with respect to the clock counter. The asynchronous design reduces the nonlinearity of the FPGA-based TDC and auto-calibrates the tapped delay lines (Supplementary Note 1 and Supplementary Fig. S17).
Here, we describe a single fine TDC. The parallelization to N TDCs is straightforward, and the implementation of the coarse TDCs comes naturally with the use of the coarse counter. Each fine TDC is composed of two flash TDCs, each one containing a tapped delay line and a thermometer-to-binary converter (Supplementary Note 3 and Supplementary Fig. S18). The START delay line measures the difference ΔtSTART between the START signal and the next active edge of the counter clock. Similarly, the STOP delay line measures the difference ΔtSTOP between the STOP signal and the next active clock rising edge. Thanks to the free-running coarse counter, the architecture is also able to measure (i) nphoton and (ii) nSYNC, corresponding to the number of clock cycles between the start of the experiment and (i) the photon signal or (ii) the SYNC signal. Given these values, the start–stop time Δt is equal to \(\Delta t_{\mathrm{STOP}}-\Delta t_{\mathrm{START}}+\Delta n\cdot \mathcal{T}_{\mathrm{sysclk}}\), with \(\Delta n = n_{\mathrm{SYNC}}-n_{\mathrm{photon}}\) and \(\mathcal{T}_{\mathrm{sysclk}}=1/240\,\mathrm{MHz}\) the clock period; the coarse absolute time \(\hat{t}_{\mathrm{photon}}\) is equal to \(n_{\mathrm{photon}}\cdot \mathcal{T}_{\mathrm{sysclk}}\). Similarly, the nanosecond delay from the REF signal \(\Delta \hat{t}_{\mathrm{REF}}\) is equal to \((n_{\mathrm{photon}}-n_{\mathrm{REF}})\cdot \mathcal{T}_{\mathrm{sysclk}}\), with nREF the number of clock cycles between the beginning of the experiment and the REF signal. Since the TTM architecture can only provide integer values, a calibration is required to obtain the time values. In particular, the coarse counter provides the values nphoton and nSYNC, and the thermometer-to-binary converters of the flash TDCs provide the values ΔTphoton and ΔTSYNC. After the raw data are transferred to the PC, the calibration is used to calculate the time values Δtphoton and ΔtSYNC (Supplementary Note 1), and the application-dependent analysis software allows calculating the time interval between specific events (e.g., Δt, \(\Delta \hat{t}_{\mathrm{REF}}^{\,\mathrm{photon}}\)). This approach does not preclude applications in which some other notion of an absolute time is required, such as the absolute photon time (e.g., coarse \(\hat{t}_{\mathrm{photon}}\) or fine tphoton), rather than the time interval between two events.
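The reconstruction arithmetic above can be summarized in a few lines of Python. The sketch below assumes the fine delays have already been converted to seconds by the calibration; the function and variable names are ours, chosen for illustration, and do not correspond to the released software.

```python
T_SYSCLK = 1.0 / 240e6  # period of the 240 MHz free-running coarse counter (s)

def start_stop_time(n_photon, dt_start, n_sync, dt_stop):
    """Start-stop time of a photon (reverse start-stop scheme):
    Delta_t = dt_stop - dt_start + (n_sync - n_photon) * T_sysclk."""
    return dt_stop - dt_start + (n_sync - n_photon) * T_SYSCLK

def ref_delay(n_photon, n_ref):
    """Coarse (nanosecond-precision) delay of a photon with respect to a
    reference (pixel/line/frame) event."""
    return (n_photon - n_ref) * T_SYSCLK

# Example: photon two clock cycles before the SYNC edge,
# with calibrated fine delays of 1.3 ns (START) and 0.9 ns (STOP)
dt = start_stop_time(n_photon=100, dt_start=1.3e-9, n_sync=102, dt_stop=0.9e-9)
print(f"start-stop time = {dt*1e9:.2f} ns")   # ~7.93 ns
```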
The TTM architecture also contains hit filters (Supplementary Fig. S19), an event filter (Supplementary Fig. S20), and a data module in charge of preparing and transferring the data to the PC. The hit filter is a circuit used to shape and stabilize the input signal. The event filter is a data input/output filter that reduces the data throughput by avoiding the transmission of information when no photons are detected. The data module buffers the data into a FIFO before they are transferred to the PC. In the current implementation, data transfer is done with the USB 3.0 protocol, but the platform is compatible with other data-communication protocols, such as PCIe.
We implemented the BrightEyes-TTM architecture on a commercially available Kintex-7 FPGA evaluation board (KC705 Evaluation Board, Xilinx), featuring a state-of-the-art FPGA (Kintex-7 XC7K325T-2FFG900C) and, importantly for the versatility of the architecture, a series of hardware components (e.g., serial connectors, expansion connectors, and memories). In particular, the different serial connectors potentially allow for different data-transfer rates, and the expansion connectors allow compatibility with other detectors and microscope components (Supplementary Fig. S21).
To transmit the data to the PC via USB 3.0, the current TTM design uses an FX3-based board (Cypress SuperSpeed Explorer kit board, CYUSB3KIT) connected through an adapter card (CYUSB3ACC) to the LPC-FMC connector of the Kintex-7 evaluation board (Supplementary Fig. S21). To use the FX3, we developed a dedicated module in the FPGA. This module has a simple interface (essentially FIFO with a data-valid flag) for the data transmission, and it manages the FX3 control signals and the data bus. We designed the module to work with the FX3 programmed with the SF_streamIN firmware, which is part of the AN65974 example provided by Cypress. The total component cost for the BrightEyes-TTM is about $3000.
Test-bench architecture and analysis
We performed the code-density test (Supplementary Note 2), the single-shot precision measurements, and the saturation analysis by connecting the BrightEyes-TTM to a dedicated signal generator, named SYLAP (Supplementary Fig. S22). We implemented SYLAP on an FPGA evaluation board identical to the one used for the BrightEyes-TTM. The SYLAP architecture generates a fixed-frequency clock, which we used to simulate the laser SYNC signal, and a synchronized pulse train, which we used to simulate the photon signal. The key features of SYLAP are (i) the possibility of adjusting the delay between the clock and the pulse with a granularity of 39.0625 ps; (ii) the possibility of setting the clock period and the pulse duration with a granularity of 2.5 ns; (iii) the possibility of setting the number of clock cycles needed to generate a pulse, i.e., the clock decimation. The native USB 2.0 serial port of the evaluation board allows configuring SYLAP. The timing jitter between the clock and the pulse is 13 ps, measured with an oscilloscope (Keysight DSO9404A with 4 GHz bandwidth). Since both the BrightEyes-TTM and the SYLAP signal generator are based on the same type of board, we also implemented both systems on the very same FPGA. Importantly, in this case, we separated the clock domains and the clock sources of the BrightEyes-TTM and SYLAP projects. We performed a single-shot precision experiment to compare the stand-alone SYLAP configuration (i.e., two different boards connected with coaxial cables) with the internal configuration, and we did not observe substantial differences.
To perform the code-density test (Supplementary Note 2), we used the SYLAP clock signal as the SYNC signal and the TTL signal from an avalanche photodiode (APD, SPCM-AQRH-13-FC, Perkin Elmer) as the temporally uncorrelated photon signal. We illuminated the APD with natural light, maintaining a photon flux well below the saturation value of the detector. For the single-shot precision experiment, we used the clock and pulse signals from SYLAP. Specifically, to measure the precision for all photon channels, we again took advantage of the flexibility of the BrightEyes-TTM architecture: we implemented physical switches which allow connecting a single input/photon channel of the board to all photon channels simultaneously. This feature makes it possible to measure the same event with all photon channels. Finally, to analyze the sustained read-out rate of the BrightEyes-TTM, we performed the single-shot precision experiment for different clock decimations. We measured the rate both for a single active channel and for all channels active at the same time.
Laser-scanning microscopes
Optical architectures
To test the BrightEyes-TTM, we used two custom-built laser-scanning microscopes and a commercial confocal laser-scanning microscope. We performed the majority of the FLISM and FLFS experiments with a custom system (Supplementary Fig. S23) previously implemented to demonstrate confocal-based FFS with a SPAD array detector prototype8,18. In particular, the microscope excites the sample with a triggerable 485 nm pulsed laser diode (LDH-D-C-485, PicoQuant) and records the fluorescence signal with a cooled BCD-based 5 × 5 SPAD array detector17,18 or with a commercial single-element SPAD detector (PD-050-CTC-FC, Micro Photon Devices). To test the BrightEyes-TTM with a commercial SPAD array detector, we used a second custom system (Supplementary Fig. S10a) previously implemented to introduce FLISM16. In particular, the microscope excites the sample with a triggerable 640 nm pulsed laser diode (LDH-D-C-640, PicoQuant) and records the emitted fluorescence with a 7 × 7 CMOS-based SPAD array detector (PRISM Light detector, LVDS outputs, Genoa Instruments). For the experiments of this work, we registered only the 5 × 5 central elements of the 7 × 7 SPAD array. Finally, we integrated the BrightEyes-TTM into a Nikon AXR confocal microscope system (Supplementary Fig. S13a). The microscope was equipped with a 485 nm pulsed diode laser (ISS) and a conventional GaAs PMT (H7422P, Hamamatsu) to implement conventional FLIM. A constant-fraction discriminator (CFD, Flim Labs) was used to convert the analog PMT output signal into the digital input signal requested by the BrightEyes-TTM.
To control the custom laser-scanning microscopes, we built a LabVIEW-based system inspired by the Carma microscope control system16,41. The LabVIEW control system uses an FPGA-based general-purpose National Instruments (NI) data-acquisition card (USB-7856R; National Instruments) to control all microscope components, such as the galvanometric mirrors and the piezo stage, and to initialize the SPAD array detector. The control system delivers the pixel, line, and frame clocks as digital TTL outputs. Finally, the control system is able to count the digital TTL photon signals from the SPAD array detector and transfer the data to the PC (via USB 2.0), where a LabVIEW-based software allowed real-time visualization of the images (in the case of imaging) or of the intensity time traces and correlations (in the case of FFS). When we use the custom laser-scanning microscopes in the time-tagging modality, we switch the photon signals from the NI card to the BrightEyes-TTM. In this case, the BrightEyes-TTM reads the photon signals from the SPAD array detector through a custom I/O daughter card connected via the FPGA mezzanine connector (FMC), Supplementary Fig. S21. Notably, we used different cards to match the detector cable connector and the signal standard, e.g., TTL or LVDS. The same I/O daughter cards were used to transmit the duplicate signal from the central element of the SPAD array to the LabVIEW control system. The pixel, line, and frame signals passed through a custom buffer, to match the impedance between the NI card and the Xilinx card, before being connected to an SMA digital input. The SYNC signal provided by the laser driver was converted from the nuclear-instrumentation-module (NIM) standard to TTL (NIM2TTL Converter, Micro Photon Devices) and read by the Xilinx card via an SMA digital input. The data recorded by the BrightEyes-TTM were transferred to a PC via USB 3.0. To compare the BrightEyes-TTM with a commercial reference TTM, the SYNC (NIM) signal and the signal from the central element of the SPAD array could also be sent to a commercial multi-channel TTM (DPC-230, Becker & Hickl GmbH), running on a dedicated PC. For the experiments with the Nikon AXR confocal system, we controlled the microscope with the NIS-Elements software and obtained the pixel, line, and frame clocks from the Nikon control unit. Also in this case, we used a buffer to match the impedance between the control unit and the Xilinx card. The laser driver already provides a TTL SYNC signal.
The data structure
To transfer the data from the TTM to the PC, we designed a simple data protocol, whose major advantages are its scalability to additional photon channels and its flexibility. Briefly, the protocol perceives the SP detector array as a fast camera with a maximum frame rate of 240 MHz, i.e., the frequency at which the whole TTM architecture works. Under this scenario, the data protocol foresees a frame-like data structure streamed to the communication port (USB 3.0 in this implementation) in 32-bit words, Supplementary Fig. S24.
The data are transmitted in a data structure composed of 16-bit words: each word contains a 7-bit identifier (ID), a 1-bit valid flag, and an 8-bit payload. The data structure contains a header (5 words) and the channel data. The number of words in the channel data is not fixed. In typical applications, the expected number of channels hit by a photon within the 240 MHz frame-rate window is very low. Therefore, we implemented a zero-suppression algorithm that prevents the transmission of data from channels without hits.
The channel data words (ID < No. channels) contain, for each channel: (i) 8 bits representing the value measured by the tapped delay line of the respective photon channel, ΔTSTART(ch); (ii) the channel data-valid (Vn) boolean flag, which confirms that an event has occurred in that channel.
The header (ID > 122) includes the following information: (i) 3 bits used as boolean flags for the reference (REF) events (in our applications the pixel, line, and frame clocks); (ii) 8 bits representing the value measured by the tapped delay line of the SYNC channel, ΔTSTOP (in our application the synchronization signal from the pulsed laser); (iii) the "laser data valid" (VL) boolean flag, which confirms that a SYNC event has occurred; (iv) 16 bits representing the number of clock cycles (STEP) of the free-running 240 MHz counter; (v) 8 spare bits that can be used for debugging purposes.
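To illustrate how such words can be parsed on the receiving side, the sketch below unpacks a 16-bit word into its identifier, valid flag, and payload. The bit ordering (ID in the upper bits, then the valid flag, then the payload) is an assumption made here for illustration only; the authoritative layout is the one documented with the firmware in the repository.

```python
def decode_word(word: int):
    """Unpack a 16-bit word into (identifier, valid, payload).

    Assumed layout (illustrative, check the firmware documentation):
    bits 15..9 -> 7-bit identifier, bit 8 -> valid flag, bits 7..0 -> payload.
    """
    identifier = (word >> 9) & 0x7F
    valid = bool((word >> 8) & 0x1)
    payload = word & 0xFF
    return identifier, valid, payload

# A word with ID = 12 (e.g., photon channel #12), valid = 1, payload = 0xA7
word = (12 << 9) | (1 << 8) | 0xA7
print(decode_word(word))   # (12, True, 167)
```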
To reduce the data throughput, an event is transmitted only if one of the following conditions occurs: (i) a START event in one of the photon channels; (ii) a STOP event following a START event, e.g., a laser SYNC event occurs after a photon event (event filter); (iii) a REF event, e.g., a pixel/line/frame event; (iv) a force-write event. The force-write event is fundamental for reconstructing any time measurement (relative or absolute) that requires the coarse counter values. Indeed, since the data structure uses only 16 bits to store the 240 MHz coarse counter value, the counter resets every 273 μs. To guarantee the possibility of always reconstructing the relative and absolute values for each event, we implemented an internal trigger that forces the transmission of the data structure at least once every 17 μs, which corresponds to one-sixteenth of the coarse counter reset period.
To improve the robustness of the data-transfer process, we mitigated data-throughput peaks by buffering the data in a large (512 kB) BRAM FIFO on the same FPGA. The TTM code contains a mechanism that guarantees that, in case the FIFO is filled over a certain threshold, i.e., the average data throughput exceeds the USB 3.0 bandwidth (because of PC latency, a high rate of events, or other reasons), the TTM temporarily enters a fail-safe mode, giving priority to some types of events. For example, in the case of imaging, the TTM gives priority to the pixel/line/frame flags and to the force-write events. This strategy guarantees the reconstruction of the absolute time of each event (photon or REF) and of the image.
In the current TTM implementation, a USB 3.0 bus transfers the data to the PC. The data receiver software checks the integrity of the data structures received (i.e., the IDs must be in the correct order) and, if the data structure is properly received, it sequentially writes the data to a binary file without any processing. The data receiver uses libusb-1.0 and is developed in the C programming language, Supplementary Fig. S25.
The data pre-processing
To create a user-friendly dataset, we pre-processed the binary file, Supplementary Fig. S26. First, the binary data are read as a table, which is saved to HDF5 (raw data). Since the free-running counter has 16 bits, it resets every 273 μs. Therefore, to obtain a consistent monotonic counter n, we unwrap the 16-bit free-running counter value provided by the data structure: whenever the counter value is lower than the one in the previous data structure, the value 2^16 is added to it and to all following counter values.
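A minimal sketch of this unwrapping step is shown below, assuming the raw counter values have been collected into an array; it is an illustration of the logic rather than the released pre-processing code.

```python
import numpy as np

def unwrap_counter(raw_counter, n_bits=16):
    """Convert the rolling n-bit free-running counter values into a
    monotonic counter by adding 2**n_bits at every wrap-around."""
    raw = np.asarray(raw_counter, dtype=np.int64)
    wraps = np.cumsum(np.diff(raw, prepend=raw[0]) < 0)  # one step per reset
    return raw + wraps * (1 << n_bits)

print(unwrap_counter([65530, 65534, 3, 10, 65535, 2]))
# [ 65530  65534  65539  65546 131071 131074]
```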
With the monotonic counter n, it is possible to calculate the arrival time of each photon event with respect to a REF event or with respect to the SYNC event (start–stop time). While the former calculation is trivial, the latter is more complex. Since the TTM architecture uses a reversed start–stop strategy to reconstruct the photon start–stop time, it is necessary to collect information about successive SYNC laser events. This information can be contained in the same data structure as the photon, or in a successive data structure. The pre-processing step identifies for each STOP event the corresponding START event and creates a table. Each table row contains a STOP event and includes (i) the relative monotonic counter nSYNC, (ii) the relative TDL value ΔTSTOP, and (iii) an entry for each photon event linked to this specific STOP event. In particular, each entry contains: (i) the number of elapsed clock cycles Δn(ch) = nSYNC − nphoton(ch) and (ii) the TDL value ΔTSTART(ch).
To simplify the reconstruction of the image, we also pre-processed the REF information for the pixels, lines, and frames. We used the pixel/line/frame events to include in the table the columns x, y, and fr. The pixel event increases the x counter; in the case of a line event, the x counter resets and the line counter y increases; in the case of a frame event, both the x and y counters reset and the fr counter increases.
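The following sketch reproduces this bookkeeping for a stream of reference events; it is a simplified illustration of the rule stated above, not the actual pre-processing implementation.

```python
def scan_coordinates(events):
    """Assign (x, y, fr) scan coordinates from a stream of reference events.

    events: iterable of strings 'pixel', 'line' or 'frame' in arrival order.
    Pixel events increment x; line events reset x and increment y;
    frame events reset x and y and increment fr.
    """
    x = y = fr = 0
    coords = []
    for ev in events:
        if ev == "pixel":
            x += 1
        elif ev == "line":
            x = 0
            y += 1
        elif ev == "frame":
            x = y = 0
            fr += 1
        coords.append((x, y, fr))
    return coords

print(scan_coordinates(["pixel", "pixel", "line", "pixel", "frame", "pixel"]))
# [(1, 0, 0), (2, 0, 0), (0, 1, 0), (1, 1, 0), (0, 0, 1), (1, 0, 1)]
```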
We saved the pre-processed dataset in a single (optionally compressed) HDF5 file composed of different tables: one main table and 25 additional tables, one per photon channel. All tables use a unique column identifier (idx) which allows the application software (e.g., FLISM software, FLFS software) to easily merge the information. The main table has a row for each SYNC-channel event. Each row contains the corrected monotonic counter nSYNC, the coordinates x, y, fr, the TDL value ΔTSTOP, and the unique row index idx. The photon-channel tables have a row for each photon. Each row contains the TDL value ΔTSTART(ch), the number of elapsed clock cycles Δn(ch), and the index idx of the row of the corresponding SYNC event.
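As an example of how such tables can be consumed, the sketch below merges one photon-channel table with the main table on the shared idx column so that each photon inherits its scan coordinates. It assumes, purely for illustration, a pandas/PyTables-compatible file with hypothetical keys "main" and "channel_12" and hypothetical column names; the actual file layout is the one produced by the released pre-processing software.

```python
import pandas as pd

# Merge a photon-channel table with the main (SYNC) table on 'idx'
# to attach scan coordinates to each photon (illustrative keys/columns).
with pd.HDFStore("measurement_preprocessed.h5", mode="r") as store:
    main = store["main"]          # e.g., columns: n_sync, x, y, fr, idx
    ch12 = store["channel_12"]    # e.g., columns: dT_start, delta_n, idx

photons = ch12.merge(main, on="idx", how="left")
print(photons[["x", "y", "fr"]].head())
```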
The data calibration
The HDF5 file contains all the information received by the TTM, but structured in a way that is easier to access. However, this information still consists of numbers of clock cycles and tapped delays. An off-line calibration phase transforms this information into temporal information. In particular, the calibration (Supplementary Note 1 and Supplementary Fig. S27) transforms each TDL value ΔTSTART(ch) or ΔTSYNC into a temporal value ΔtSTART(ch) or ΔtSYNC, which we can use to calculate all (relative and absolute) temporal signatures of each event. The output of the calibration is again an HDF5 file with a structure similar to the uncalibrated file. The main table has a row for each SYNC-channel event. Each row contains the absolute SYNC time \(t_{\mathrm{SYNC}} = n_{\mathrm{SYNC}}\cdot \mathcal{T}_{\mathrm{sysclk}}+\Delta t_{\mathrm{SYNC}}\), the coordinates x, y, fr, and a unique row index idx. The photon-channel tables have for each row the start–stop time Δt(ch) and the index idx of the row of the corresponding SYNC event.
The application-dependent analysis
Depending on the application, we further processed the calibrated HDF5 file. In general, any application requires the generation of the start–stop time (or TCSPC) histogram, which is simply the histogram of a series of Δt values. Because of the autocalibration steps (Supplementary Note 1), the Δt values are float values; thus, the bin width of the histogram (i.e., the temporal resolution) can be chosen almost arbitrarily by the user (Supplementary Fig. S28). In this work, we used 48 ps, which is a value well below the IRF of the systems used in this work (e.g., 200 ps, Supplementary Fig. 1) and in the range of the average delay values (Supplementary Fig. S29) of the FPGA delay elements forming the tapped-delay lines (Supplementary Note 1). In the case of FLISM analysis, the calibrated data are binned into a multidimensional photon-counts array (ch, x, y, fr, Δt). The Δt dimension is the start–stop histogram for a given set of (ch, x, y, fr) coordinates. This multidimensional array can be saved in HDF5 and further processed with ad hoc scripts or software such as ImageJ. Importantly, because each SPAD element and each TDC can introduce a different, but fixed, delay, it is necessary to temporally align the histograms of the different channels: we performed a reference measurement under identical conditions with freely diffusing fluorescein. The resulting histograms were plotted in the phasor space. The phase difference between the expected position in the phasor space (given the known lifetime of 4.1 ns of fluorescein) and the measured position for each channel was used to calibrate each channel, i.e., to shift the histograms of the actual measurement back to the correct position. In the case of FLFS analysis, we ignored the x, y, and fr information of the calibrated HDF5 file and did not apply any further processing. Indeed, FLFS needs a list of photon events for each channel, in which each photon has an absolute time (with respect to the beginning of the experiment) and a start–stop time (Δt). By using tSYNC as the absolute time (instead of tphoton), the calibrated HDF5 file already contains all the information in a suitable structure.
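As a minimal sketch of the step common to all applications, the start–stop (TCSPC) histogram can be built from the calibrated Δt values with a user-chosen bin width; here the 48 ps of this work and the 25 ns laser repetition period are used as defaults.

```python
# A minimal sketch of building the start-stop (TCSPC) histogram from the calibrated
# start-stop times dt (in seconds), with the 48 ps bin width chosen in this work and
# the 25 ns laser repetition period.
import numpy as np

def tcspc_histogram(dt, bin_width=48e-12, laser_period=25e-9):
    edges = np.arange(0.0, laser_period + bin_width, bin_width)
    counts, _ = np.histogram(np.asarray(dt), bins=edges)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return counts, centers
```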
FLISM and FLFS analysis
Image reconstruction and analysis
We reconstructed the ISM images with the adaptive pixel-reassignment method16. In short, we integrated the 4D dataset (ch, x, y, Δt) along the Δt dimension and applied a phase-correlation registration algorithm to align all the images (x, y∣ch) with respect to the central image. The registration generated the so-called shift-vector fingerprint (sx(ch), sy(ch)). To obtain the ISM intensity-based images, we integrated the shifted dataset along the ch dimension. To obtain the lifetime-based ISM image, we started from the 4D dataset (ch, x, y, Δt), Supplementary Fig. S30; for each Δt value, we used the same shift-vector fingerprint to shift the corresponding 2D image; we integrated the result along the ch dimension; and we used the resulting 3D dataset (x, y, Δt) and the FLIMJ software42 to obtain the τfl maps (fitted with a single-exponential decay model). Alternatively, we applied the phasor analysis to the same 3D dataset. We calculated the phasor coordinates (g, s) using cosine and sine summations20,43. To avoid artefacts, we performed the MOD mathematical operation of the TCSPC histograms with the laser repetition period value43. To calibrate the acquisition system and thus account for the instrument response function of the complete setup (microscope, detector, and TTM), measurements were referenced to a solution of fluorescein in water.
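For the phasor analysis, a minimal illustration (not the released analysis code) of the cosine and sine summations at the first harmonic of the laser repetition period, with the MOD operation applied to the time axis, could look like this:

```python
# A minimal illustration of the phasor coordinates (g, s) from a TCSPC histogram,
# using the first harmonic of the 25 ns laser repetition period and the MOD operation.
import numpy as np

def phasor(counts, bin_centers, laser_period=25e-9):
    t = np.mod(bin_centers, laser_period)
    w = 2.0 * np.pi / laser_period
    total = counts.sum()
    g = np.sum(counts * np.cos(w * t)) / total
    s = np.sum(counts * np.sin(w * t)) / total
    return g, s
```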
To demonstrate the spatial resolution enhancement achieved by ISM, we performed a Fourier ring correlation (FRC) analysis44,45. The FRC analysis requires two near-identical images obtained from two different measurements of the same sample, so that the two images contain different noise realizations. The two images are correlated to obtain the effective cut-off frequency (i.e., the frequency of the specimen above the noise level) of the images, and thus the effective resolution. Since, in this work, we built up the images photon-by-photon, we used the temporal tags of the photons to simultaneously generate two 4D datasets (ch, x, y, Δt). Then, we used the two datasets to reconstruct the two statistically independent ISM images for the FRC analysis. In particular, we assigned each photon to one of the two images in an odd–even fashion, based on the parity of the ΔTSTART integer values. As explained for the sliding-scale approach, the photons are distributed uniformly across the START and STOP tapped-delay lines; thus, the method generates two statistically independent datasets with similar photon counts.
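A minimal sketch of this odd–even photon split, based on the parity of the integer ΔTSTART values, is given below.

```python
# A minimal sketch of the odd-even split used to build two statistically independent
# photon datasets for the FRC analysis.
import numpy as np

def frc_split(photon_indices, dT_start):
    parity = np.asarray(dT_start).astype(np.int64) % 2
    photon_indices = np.asarray(photon_indices)
    return photon_indices[parity == 0], photon_indices[parity == 1]

# Each photon subset is then binned into its own (ch, x, y, dt) array and reconstructed
# into one of the two ISM images entering the FRC computation.
```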
Fluorescence correlation calculation and analysis
We calculated the correlations directly on the lists of absolute photon times46. For the sum 3 × 3 and sum 5 × 5 analysis, the lists of all relevant SPAD channels were merged and sorted. The data were then split into chunks of 10 or 5 s, for the beads and eGFP in living cells, respectively, and for each chunk, the correlation was calculated. The individual correlation curves were visually inspected and all curves without artifacts were averaged. To obtain the filtered correlation curves for the beads sample, the same procedure was followed, except that a weight was assigned to each photon based on its start–stop time28,29. The weights were obtained from the start–stop time histograms of each channel. Background counts coming from sources other than fluorescence, such as dark counts and detector afterpulsing, appear as an offset in the histogram because of their uniform probability distribution on the time scale of the histogram (25 ns). As a result, bins in which the number of photons approaches this offset value can be considered to contain only background. Having access to the start–stop time with ps-range resolution, the BrightEyes-TTM allows classifying every photon into one of two classes: (i) almost certainly background or (ii) possibly fluorescence. In the former case, the photon can be completely removed from the data; in the latter case, the filter function is used to add a weight to the photon depending on how likely it is that the photon comes from the sample. Here, only counts in time bins between the peak of the histogram and about 10 ns later were included. The cropped TCSPC histogram of each channel was fitted with a single exponential decay function H(t) = A·exp(−t/τfl) + B, with amplitude A, lifetime τfl, and offset B as free fit parameters. The filters were calculated assuming a single exponential decay with amplitude 1 and lifetime τfl for the fluorescence histogram and a uniform distribution with value B/A for the background component. Then, for each count, a weight was assigned equal to the value of the fluorescence filter function at the corresponding start–stop time; counts that were detected directly after the laser pulse were assigned a higher weight than counts detected some time later, since the probability of a count being a fluorescence photon decreases with increasing start–stop time. The second filter function describes the background component and was not used for further analysis.
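A minimal sketch of the filter construction described above, following refs. 28,29 in spirit: a single-exponential fluorescence pattern with amplitude 1 and lifetime τfl, a uniform background pattern of value B/A, and filters computed as F = (MᵀDM)⁻¹MᵀD with D = diag(1/counts). The released analysis scripts may differ in detail.

```python
# A minimal sketch of the lifetime-filtering weights; not the released analysis code.
import numpy as np

def lifetime_filters(counts, t, tau_fl, A, B):
    """counts: cropped TCSPC histogram; t: bin centers (same units as tau_fl)."""
    M = np.column_stack([np.exp(-t / tau_fl),        # fluorescence component, amplitude 1
                         np.full_like(t, B / A)])    # uniform background component
    D = np.diag(1.0 / np.clip(counts, 1, None).astype(float))
    F = np.linalg.solve(M.T @ D @ M, M.T @ D)        # shape (2, n_bins)
    return F[0], F[1]                                # fluorescence filter, background filter

# Each photon then receives, as a weight in the correlation calculation, the value of the
# fluorescence filter at its start-stop time bin.
```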
For both single-point and circular scanning47 FCS, the correlation curves were fitted with a one-component model assuming a Gaussian detection volume. For the circular FCS measurements47, the periodicity and radius of the scan movement were kept fixed while the amplitude, diffusion time, and focal spot size were fitted. This procedure was used for the fluorescent beads and allowed calibrating the different focal spot sizes (i.e., central, sum 3 × 3 and sum 5 × 5). For the conventional FCS measurements, the focal spot size was kept fixed at the values found with circular scanning FCS, and the amplitude and diffusion time were fitted. Since we approximated the PSF as a 3D Gaussian function with a 1/e² lateral radius of ω0 and a 1/e² height of k × ω0 (with k the eccentricity of the detection volume, k = 4.5 for the central element probing volume, k = 4.1 for the sum 3 × 3 and sum 5 × 5 probing volumes), the diffusion coefficient D can be calculated from the diffusion time τD and the focal spot size ω0 via D = ω0²/(4τD). All analysis scripts are available in our repository and are based on Python (Python Software Foundation, Python Language Reference, version 3.7; available at http://www.python.org).
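For reference, the final conversion from the fitted diffusion time and the calibrated focal spot size to a diffusion coefficient is a one-liner:

```python
# A minimal sketch of D = w0**2 / (4 * tau_D) for a 3D Gaussian PSF of 1/e^2 radius w0.
def diffusion_coefficient(w0_um, tau_d_s):
    """w0_um: 1/e^2 focal-spot radius in micrometres; tau_d_s: diffusion time in seconds.
    Returns D in um^2 / s."""
    return w0_um ** 2 / (4.0 * tau_d_s)

# Example: w0 = 0.3 um and tau_D = 1 ms give D = 22.5 um^2/s.
print(diffusion_coefficient(0.3, 1e-3))
```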
Samples preparation
BrightEyes-TTM characterization
For the characterization and validation of the TTM, we used fluorescein (46955, free acid, Sigma-Aldrich, Steinheim, Germany) and potassium iodide (KI) (60399-100G-F, BioUltra, ≥99.5% (AT), Sigma-Aldrich). We dissolved fluorescein from powder into DMSO (Sigma-Aldrich) and then further diluted it to a 1:1000 v/v concentration by adding ultrapure water. For the fluorescein quenching experiments, we diluted the 1:1000 fluorescein solution at different volume ratios with the KI quencher (from 1:2 to 1:256). All samples were made at room temperature. A fresh sample solution was prepared prior to each measurement.
For the imaging experiments, we used (i) a sample of 100 nm fluorescent beads (yellow-green FluoSpheres Q7 Carboxylate-Modified Microspheres, F8803; Invitrogen). We treated the glass coverslips with poly-L-lysine (0.1% (w/v) in H2O, P8920, Sigma-Aldrich) for 20 min at room temperature, and then we diluted the beads in Milli-Q water by 1:10,000 v/v. We drop-casted the beads on the coverslip and washed the coverslip after 10 min with ultrapure water. Then, the glass coverslip was dried under nitrogen flow and mounted overnight with Invitrogen ProLong Diamond Antifade Mounting Medium (P36965); (ii) hippocampal mouse neurons expressing the super-ecliptic pHluorin (SEP)-tagged β3 subunit of the γ-aminobutyric acid type A (GABAA) receptor, for live-cell imaging experiments. Hippocampal neuronal cells were isolated and dissected from early postnatal (day 1) B6;129-Nlgn3wt/J (B6129SF1/J) mice of either sex using a previously published protocol48, in accordance with the guidelines established by the European Communities Council (Directive 2010/63/EU of 22 September 2010) and by the national legislation (D.Lgs.26/2014). After the dissection, hippocampal neurons were plated onto glass coverslips coated with poly-D-lysine (0.1 μg/ml, Sigma-Aldrich) and maintained in Neurobasal-A medium supplemented with 1% GlutaMAX™, 2% B-27 (all from ThermoFisher Scientific, Italy) and 5 μg/ml Gentamycin at 37 °C in 5% CO2. After 8 days in vitro (DIV8), neurons were transfected with the SEP-tagged β3 subunit GABAA receptor49 using Effectene Transfection Reagent (Qiagen, Italy) according to the manufacturer's recommendations. Measurements were made at DIV14 in Live-Cell Imaging Solution (ThermoFisher Scientific) at 37 °C. (iii) a microscope slide containing fixed HeLa cells stained for α-tubulin. We cultured HeLa cells in Dulbecco's Modified Eagle Medium (DMEM, Gibco, ThermoFisher Scientific) supplemented with 10% fetal bovine serum (Sigma-Aldrich) and 1% penicillin/streptomycin (Sigma-Aldrich) at 37 °C in 5% CO2. The day before immunostaining, we seeded HeLa cells on coverslips in a 12-well plate (Corning Inc., Corning, NY). The day after, we incubated cells in a solution of 0.3% Triton X-100 (Sigma-Aldrich) and 0.1% glutaraldehyde (Sigma-Aldrich) in BRB80 buffer (80 mM Pipes, 1 mM EGTA, 4 mM MgCl, pH 6.8, Sigma-Aldrich) for 1 min. We fixed HeLa cells with a solution of 4% paraformaldehyde (Sigma-Aldrich) and 4% sucrose (Sigma-Aldrich) in the BRB80 buffer for 10 min, and then we washed them three times for 15 min in phosphate-buffered saline (PBS, Gibco™, ThermoFisher Scientific). Next, we treated the cells with a 0.25% Triton X-100 solution in BRB80 buffer for 10 min and washed three times for 15 min in PBS. After 1 h in blocking buffer (3% bovine serum albumin (BSA, Sigma-Aldrich) in BRB80 buffer), we incubated the cells with monoclonal mouse anti-α-tubulin antibody (1:1000, Sigma-Aldrich) diluted in the blocking buffer for 1 h at room temperature. The anti-α-tubulin primary antibody was revealed with an Alexa Fluor 488 goat anti-mouse secondary antibody (1:1000, Invitrogen, ThermoFisher Scientific) incubated for 1 h in BRB80 buffer. We rinsed HeLa cells three times in PBS for 15 min. Finally, we mounted the coverslips onto microscope slides (Avantor, VWR International) with ProLong Diamond Antifade Mountant (Invitrogen, ThermoFisher Scientific). (iv) live HeLa cells stained with di-4-ANEPPDHQ for live-cell imaging experiments. The day before live-cell imaging, we seeded HeLa cells in a μ-Slide eight-well plate (Ibidi, Grafelfing, Germany).
Just before measurement, cells were incubated with DMEM (Gibco, ThermoFisher Scientific) supplemented with 5 μM di-4-ANEPPDHQ (Invitrogen, ThermoFisher Scientific) at 37 °C in 5% CO2 for 30 min. The cells were washed three times with DMEM. Measurements were made in Live-Cell Imaging Solution (ThermoFisher Scientific) at 37 °C.
FLFS experiments
For fluorescence fluctuation spectroscopy experiments, we used: (i) YG carboxylate FluoSpheres (REF F8787, 2% solids, 20 nm diameter, actual size 27 nm, exc./em. 505/515 nm, Invitrogen, ThermoFisher) diluted 5000× in ultrapure water. A droplet was poured on a coverslip for the FLFS measurements; (ii) goat anti-mouse IgG secondary antibody with Alexa Fluor 488 sample (REF A11029, Invitrogen, ThermoFisher). The antibody was diluted 100× in PBS to a final concentration of 20 μg/mL. In total, 200 μL of the resulting dilution was poured into an eight-well chamber previously treated with a 1% BSA (ThermoFisher Scientific) solution to prevent the sample from sticking to the glass. All samples were prepared at room temperature. A fresh sample solution was prepared for each measurement. (iii) HEK293T cells expressing monomeric eGFP. HEK293T cells (Sigma-Aldrich, non-authenticated cell line) were cultured in DMEM (Dulbecco's Modified Eagle Medium, Gibco™, ThermoFisher Scientific) supplemented with 1% MEM (Eagle's minimum essential medium) Non-essential Amino Acid Solution (Sigma-Aldrich), 10% fetal bovine serum (Sigma-Aldrich) and 1% penicillin/streptomycin (Sigma-Aldrich) at 37 °C in 5% CO2. HEK293T cells were seeded onto a μ-Slide eight-well plate (Ibidi GmbH). HEK293T cells were transfected with pcDNA3.1(+)eGFP (Addgene plasmid #129020). Transfection was performed using Effectene® Transfection Reagent (Qiagen, Hilden, Germany) according to the manufacturer's protocol. Measurements were performed in Live-Cell Imaging Solution (ThermoFisher Scientific) at room temperature. As the experiments performed with HEK293T cells do not address biological processes specific to this cell line, cell-line authentication is not relevant in this case.
Ethical approval declarations
The experiments involving primary neuronal cultures from mice were prepared in accordance with the guidelines established by the European Communities Council (Directive 2010/63/EU of 22 September 2010) and following the Italian law D.Lgs.26/2014. All the animal procedures have been approved by the Italian Ministry of Health Regulation (Authorization 800/2021-PR) and by the Italian Institute of Technology welfare body.
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.
The raw time-tagged data generated in this study have been deposited in Zenodo under the accession code https://doi.org/10.5281/zenodo.4912656. Full build instructions for the BrightEyes-TTM have been deposited in GitHub and ReadTheDocs under accession codes https://github.com/VicidominiLab/BrightEyes-TTM and https://brighteyes-ttm.readthedocs.io.
The firmware and the VHDL/Verilog source code for implementing time-tagging on the FPGA evaluation board, the data receiver software to install on the personal computer, and the operating software have been deposited in GitHub under accession code https://github.com/VicidominiLab/BrightEyes-TTM and Zenodo50,51.
Scipioni, L., Rossetta, A., Tedeschi, G. & Gratton, E. Phasor s-FLIM: a new paradigm for fast and robust spectral fluorescence lifetime imaging. Nat. Methods 18, 542–550 (2021).
Bertero, M., Mol, C. D., Pike, E. & Walker, J. Resolution in diffraction-limited imaging, a singular value analysis. IV. The case of uncertain localization or non uniform illumination object. Opt. Acta: Int. J. Optics 31, 923–946 (1984).
Sheppard, C. J. R. Super-resolution in confocal imaging. Optik 80, 53–54 (1988).
Müller, C. B. & Enderlein, J. Image scanning microscopy. Phys. Rev. Lett. 104, 198101 (2010).
Castello, M., Diaspro, A. & Vicidomini, G. Multi-images deconvolution improves signal-to-noise ratio on gated stimulated emission depletion microscopy. Appl. Phys. Lett. 105, 234106 (2014).
Tenne, R. et al. Super-resolution enhancement by quantum image scanning microscopy. Nat. Photonics 13, 116–122 (2018).
Lubin, G. et al. Quantum correlation measurement with single photon avalanche diode arrays. Opt. Express 27, 32863–32882 (2019).
Slenders, E. et al. Confocal-based fluorescence fluctuation spectroscopy with a SPAD array detector. Light Sci. Appl. 10, 1–12 (2021).
Scipioni, L., Lanzanó, L., Diaspro, A. & Gratton, E. Comprehensive correlation analysis for super-resolution dynamic fingerprinting of cellular compartments using the Zeiss Airyscan detector. Nat. Commun. 9, 1–7 (2018).
Sroda, A. et al. SOFISM: super-resolution optical fluctuation image scanning microscopy. Optica 7, 1308–1316 (2020).
Brown, C. M. et al. Raster image correlation spectroscopy (RICS) for measuring fast protein dynamics and concentrations with a commercial laser scanning confocal microscope. J. Microsc. 229, 78–91 (2008).
Huff, J. The Airyscan detector from Zeiss: confocal imaging with improved signal-to-noise ratio and super-resolution. Nat. Methods 12, i–ii (2015).
Zappa, F., Tisa, S., Tosi, A. & Cova, S. Principles and features of single-photon avalanche diode arrays. Sens. Actuators A Phys. 140, 103–112 (2007).
Bruschini, C., Homulle, H., Antolovic, I. M., Burri, S. & Charbon, E. Single-photon avalanche diode imagers in biophotonics: review and outlook. Light Sci. Appl. 8, 1–28 (2019).
Antolovic, I. M., Bruschini, C. & Charbon, E. Dynamic range extension for photon counting arrays. Opt. Express 26, 22234–22248 (2018).
Castello, M. et al. A robust and versatile platform for image scanning microscopy enabling super-resolution FLIM. Nat. Methods 16, 175–178 (2019).
Buttafava, M. et al. SPAD-based asynchronous-readout array detectors for image-scanning microscopy. Optica 7, 755–765 (2020).
Slenders, E. et al. Cooled SPAD array detector for low light-dose fluorescence laser scanning microscopy. Biophys. Rep. 1, 1–13 (2021).
Liu, M. et al. Instrument response standard in time-resolved fluorescence spectroscopy at visible wavelength: quenched fluorescein sodium. Appl. Spectrosc. 68, 577–583 (2014).
Digman, M. A., Caiolfa, V. R., Zamai, M. & Gratton, E. The phasor approach to fluorescence lifetime imaging analysis. Biophys. J. 94, L14–L16 (2008).
Machado, R., Cabral, J. & Alves, F. S. Recent developments and challenges in FPGA-based time-to-digital converters. IEEE Trans. Instrum. Meas. 68, 4205–4221 (2019).
Koho, S. V. et al. Two-photon image-scanning microscopy with SPAD array and blind image reconstruction. Biomed. Opt. Express 11, 2905–2924 (2020).
Owen, D. M. et al. Fluorescence lifetime imaging provides enhanced contrast when imaging the phase-sensitive dye di-4-ANEPPDHQ in model membranes and live cells. Biophys. J. 90, L80–L82 (2006).
Wawrezinieck, L., Rigneault, H., Marguet, D. & Lenne, P.-F. Fluorescence correlation spectroscopy diffusion laws to probe the submicron cell membrane organization. Biophys. J. 89, 4029–4042 (2005).
Petrášek, Z. & Schwille, P. Precise measurement of diffusion coefficients using scanning fluorescence correlation spectroscopy. Biophys. J. 94, 1437–1448 (2008).
Ruprecht, V., Wieser, S., Marguet, D. & Schütz, G. J. Spot variation fluorescence correlation spectroscopy allows for superresolution chronoscopy of confinement times in membranes. Biophys. J. 100, 2839–2845 (2011).
Masuda, A., Ushida, K. & Okamoto, T. New fluorescence correlation spectroscopy enabling direct observation of spatiotemporal dependence of diffusion constants as an evidence of anomalous transport in extracellular matrices. Biophys. J. 88, 3584–3591 (2005).
Enderlein, J. & Gregor, I. Using fluorescence lifetime for discriminating detector afterpulsing in fluorescence-correlation spectroscopy. Rev. Sci. Instrum. 76, 033102 (2005).
Kapusta, P., Macháň, R., Benda, A. & Hof, M. Fluorescence lifetime correlation spectroscopy (FLCS): concepts, applications and outlook. Int. J. Mol. Sci. 13, 12890–12910 (2012).
Wallrabe, H. & Periasamy, A. Imaging protein molecules using FRET and FLIM microscopy. Curr. Opin. Biotechnol. 16, 19–27 (2005).
Sadovsky, R. G., Brielle, S., Kaganovich, D. & England, J. L. Measurement of rapid protein diffusion in the cytoplasm by photo-converted intensity profile expansion. Cell Rep. 18, 2795–2806 (2017).
Seibel, N. M., Eljouni, J., Nalaskowski, M. M. & Hampe, W. Nuclear localization of enhanced green fluorescent protein homomultimers. Anal. Biochem. 368, 95–99 (2007).
Hellenkamp, B. et al. Precision and accuracy of single-molecule FRET measurements—a multi-laboratory benchmark study. Nat. Methods 15, 669–676 (2018).
Ulku, A. C. et al. A 512x512 SPAD image sensor with integrated gating for widefield FLIM. IEEE J. Sel. Top. Quant. 25, 6801212 (2019).
Scott, R., Jiang, W. & Deen, M. J. CMOS time-to-digital converters for biomedical imaging applications. IEEE Rev. Biomed. Eng. (2021).
Lusardi, N., Garzetti, F., Corna, N., Marco, R. D. & Geraci, A. Very high-performance 24-channels time-to-digital converter in Xilinx 20-nm Kintex UltraScale FPGA. in 2019 IEEE Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC) (IEEE, 2019).
Owens, B. The microscope makers. Nature 551, 659–662 (2017).
Lelek, M. et al. Single-molecule localization microscopy. Nat. Rev. Methods Primers 1, 1–27 (2021).
Lindquist, N. C., de Albuquerque, C. D. L., Sobral-Filho, R. G., Paci, I. & Brolo, A. G. High-speed imaging of surface-enhanced Raman scattering fluctuations from individual nanoparticles. Nat. Nanotechnol. 14, 981–987 (2019).
Tontini, A., Gasparini, L., Pancheri, L. & Passerone, R. Design and characterization of a low-cost FPGA-based TDC. IEEE Trans. Nucl. Sci. 65, 680–690 (2018).
Castello, M. et al. Universal removal of anti-stokes emission background in STED microscopy via FPGA-based synchronous detection. Rev. Sci. Instrum. 88, 053701 (2017).
Gao, D. et al. FLIMJ: an open-source ImageJ toolkit for fluorescence lifetime image data analysis. PLOS ONE 15, e0238327 (2020).
Ranjit, S., Malacrida, L., Jameson, D. M. & Gratton, E. Fit-free analysis of fluorescence lifetime imaging data using the phasor approach. Nat. Protoc. 13, 1979–2004 (2018).
Tortarolo, G., Castello, M., Diaspro, A., Koho, S. & Vicidomini, G. Evaluating image resolution in STED microscopy. Optica 5, 32–35 (2018).
Koho, S. et al. Fourier ring correlation simplifies image restoration in fluorescence microscopy. Nat. Commun. 10, 3103 (2019).
Wahl, M., Gregor, I., Patting, M. & Enderlein, J. Fast calculation of fluorescence correlation data with asynchronous time-correlated single-photon counting. Opt. Express 11, 3583 (2003).
Petrášek, Z., Derenko, S. & Schwille, P. Circular scanning fluorescence correlation spectroscopy on membranes. Opt. Express 19, 25006 (2011).
de Luca, E. et al. Inter-synaptic lateral diffusion of GABAA receptors shapes inhibitory synaptic currents. Neuron 95, 63–69.e5 (2017).
Petrini, E. M. et al. Synaptic recruitment of gephyrin regulates surface GABAA receptor dynamics for the expression of inhibitory LTP. Nat. Commun. 5, 1–19 (2014).
Rossetta, A. et al. The BrightEyes-TTM as an open-source time-tagging module for democratising single-photon microscopy. VicidominiLab/BrightEyes-TTM: v.1.0 https://doi.org/10.5281/zenodo.7064910 (2022).
This research was supported by Fondazione San Paolo, "Observation of biomolecular processes in live-cell with nanocamera", No. EPFD0098 (E.S. and G.V.), by the European Research Council, Bright Eyes, No. 818699 (G.T. and G.V.), and by the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement no. 890923 (SMSPAD) (E.S. and G.V.). We thank Prof. Alberto Diaspro and Dr. Paolo Bianchini (Nanoscopy & NIC@IIT, Istituto Italiano di Tecnologia) for useful discussions; Dr. Michele Oneto (Nikon Imaging Center) and Marco Scotto (Molecular Microscopy and Spectroscopy, Istituto Italiano di Tecnologia) for support on the experiments; Alessandro Barcellona (Electronic Design Laboratory, Istituto Italiano di Tecnologia) for design and implementation of the custom-made buffer; Prof. Alberto Tosi, Prof. Federica Villa, Dr. Mauro Buttafava (Politecnico di Milano), Dr. Marco Castello, and Dr. Simonluca Piazza (Istituto Italiano di Tecnologia and Genoa Instruments) for useful initial discussions in the time-to-digital design and for the realization of the single-photon-avalanche-diode detector array; All members of the RNA Initiative at the Istituto Italiano di Tecnologia for their contribution to the long-term vision of this project.
These authors contributed equally: Alessandro Rossetta, Eli Slenders, Mattia Donato.
Molecular Microscopy and Spectroscopy, Istituto Italiano di Tecnologia, Via Enrico Melen 85, Genoa, 16152, Italy
Alessandro Rossetta, Eli Slenders, Mattia Donato, Sabrina Zappone, Francesco Fersini, Sami Koho, Giorgio Tortarolo, Eleonora Perego & Giuseppe Vicidomini
Nanoscopy and NIC@IIT, Istituto Italiano di Tecnologia, Via Enrico Melen 85, Genoa, 16152, Italy
Alessandro Rossetta & Luca Lanzanò
Department of Informatics, Bioengineering, Robotics, and Systems Engineering, University of Genoa, Via All'Opera Pia 13, Genoa, 16145, Italy
Alessandro Rossetta, Sabrina Zappone & Francesco Fersini
Synaptic Plasticity of Inhibitory Networks, Istituto Italiano di Tecnologia, Via Morego, 30, Genoa, 16163, Italy
Martina Bruno & Andrea Barberis
Department of Neuroscience, Rehabilitation, Ophthalmology, Genetics and Maternal and Child Sciences, University of Genoa, Largo Paolo Daneo, 3, Genoa, 16132, Italy
Martina Bruno
Electronic Design Laboratory, Istituto Italiano di Tecnologia, Via Enrico Melen 85, Genoa, 16152, Italy
Francesco Diotalevi & Marco Crepaldi
Department of Physics and Astronomy "Ettore Majorana", University of Catania, Via S. Sofia 64, Catania, 95123, Italy
Luca Lanzanò
Correspondence to Giuseppe Vicidomini.
G.V. has a personal financial interest (co-founder) in Genoa Instruments, Italy; A.R. has a personal financial interest (founder) in FLIM LABS, Italy, outside the scope of this work. The remaining authors declare no competing interests.
Nature Communications thanks Douglas Shepherd and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.
Rossetta, A., Slenders, E., Donato, M. et al. The BrightEyes-TTM as an open-source time-tagging module for democratising single-photon microscopy. Nat Commun 13, 7406 (2022). https://doi.org/10.1038/s41467-022-35064-0
Received: 03 September 2021
Accepted: 09 November 2022
Emmy Noether lived from 1882 to 1935. In a letter to the New York Times in 1935, just after she died, Albert Einstein wrote:
"Fräulein Noether was the most significant creative mathematical genius thus far produced since the higher education of women began."
And he wasn't the only one to praise her — in fact, many people think she was one of the greatest mathematicians of her time.
But what did Einstein mean by "creative mathematical genius"? Noether mainly worked on algebra, which is about writing symbols for mathematical objects, such as numbers. Algebra is a useful tool to describe the relationships between such objects. The relationships contain patterns and structures that a good mathematician can sense, just as a musician can feel the rhythms and structures in a piece of music. And just as a musician has to stick to certain rules when writing a new piece of music (otherwise it'll sound awful) so a mathematician is guided by the rules of logic when discovering new equations and relationships. In fact, Albert Einstein called pure mathematics the "poetry of logical ideas".
As an example of how algebra can describe patterns in relationships, for example between numbers, think of the two equations $$2x=10,$$ and $$3x=9.$$ Instead of thinking of them as entirely different things, you can recognise that they belong to the family of equations that have the form $$ax=b$$ for some numbers $a$ and $b$. You can now work out a general solution: $$x=\frac{b}{a},$$ which works for every equation of that family. You simply have to fill in the appropriate values for $a$ and $b.$
Linking algebra to physics
Noether's deep understanding of algebra led to a result that is actually important in physics. As you will learn in GCSE physics, some things always stay the same. An example is energy: energy can't be created out of nothing, and it can't just disappear. When you kick a football, the energy of your foot doesn't disappear after the kick, rather it's transferred to the ball, so the overall amount of energy remains the same. We say that "energy is conserved". Another conserved quantity is momentum, which is related to the speed and mass of an object (loosely speaking it measures how hard it would be to stop that object in its tracks). When you kick the ball, your foot might lose its momentum after the kick, but the momentum hasn't disappeared: it's been turned into the momentum of the ball, and the overall amount of momentum remains the same.
With great insight, Noether figured out how these conservation laws are reflected within the mathematical equations that describe the physics. Conservation of momentum, for example, is linked to the fact that the equations look the same no matter where you are in space — you use the same equations whether you are in London, New York or on the Moon. Conservation of energy is linked to the fact that the equations look the same no matter where you are in time — whether you apply them today, tomorrow, or in ten years' time.
When an object remains unchanged even though you are doing something to it, mathematicians say that the object is symmetric. A picture of a butterfly has a mirror symmetry, for example, because you can reflect it in a line down its centre without changing its appearance. The equations we just talked about remain unchanged when you "move them around" in time and space, so they are also symmetric in that sense. Therefore, Noether's result links the conservation laws of physics with a concept that might at first appear to have nothing to do with them: symmetry.
A butterfly has mirror symmetry
Why did Einstein mention the education of women in his praise for Noether? Because Noether had to fight every step of the way to be allowed to follow her passion for mathematics. When she decided to enter university in 1900, women were still not allowed to obtain university degrees. They were allowed to sit in on lectures, but only if they had the permission of the professor. As Noether's father, Max Noether, was a mathematician at the University of Erlangen, the professors were family friends, and so Noether was able to gain their consent.
In 1904, after four years of unofficial study, the rules were relaxed and Noether was finally allowed to enroll at university. She went on to complete an excellent degree in 1907, which earned her a PhD. But at the time women were not allowed to get jobs at universities, so Noether spent years working without pay and without an official position. Several famous mathematicians and scientists supported Noether (including Einstein), but it was not until the 1920s that she was given a job with a salary at the University of Göttingen. The salary was tiny, however, and she could barely survive on it.
When the Nazis came to power in the 1930s Noether was one of the first six professors fired from Göttingen because she was both Jewish and politically liberal. Noether's friends started a frantic search to find her a university position abroad. Eventually, she was granted a temporary one-year position on a modest salary at a small women's college, Bryn Mawr, in the United States. By 1935 enough funds were scraped together to support Noether at a reduced salary for another two years.
Sadly, Emmy Noether died relatively young. In 1935 she went into hospital to have an operation to remove a large tumour. For a few days it appeared as though the surgery had been successful but then she suddenly died. She was in her early fifties at the height of her creativity and powers. But her legacy lives on — she is still considered one of the best mathematicians of all time.
To find out more about the life of Emmy Noether, read the Plus magazine article Against the odds by Danielle Stretch, on which this article is based (and which is a little more advanced). | CommonCrawl |
The Polyphase Implementation of Interpolation Filters in Digital Signal Processing
December 06, 2017 by Dr. Steve Arar
This article discusses an efficient implementation of the interpolation filters called the polyphase implementation.
In digital signal processing (DSP), we commonly use the multirate concept to make a system, such as an A/D or D/A converter, more efficient. This article discusses an efficient implementation of one of the main building blocks of the multirate systems, the interpolation filter. The method we'll cover here is called the polyphase implementation.
We can derive the polyphase implementation of the decimation and interpolation systems using the frequency-domain representation of the signals and systems. This is outside the scope of this article, but you can learn more in section 11.5 of the book Digital Signal Processing by John Proakis.
Here, we will attempt to clarify the operation of a polyphase interpolation filter examining a specific example in time-domain.
As shown in Figure 1, the straightforward implementation of interpolation uses an upsampler by a factor of $$L$$ and, then, applies a lowpass filter with a normalized cutoff frequency of $$\frac{\pi}{L}$$. You can read about the interpolation filter in my article, Multirate DSP and Its Application in D/A Conversion.
Figure 1. Upsampling followed by a low-pass filter with a normalized cutoff frequency of $$\frac{\pi}{L}$$ performs interpolation.
The upsampler places $$L-1$$ zero-valued samples between adjacent samples of the input, $$x(n)$$, and increases the sample rate by a factor of $$L$$. Hence, the filter in Figure 1 is placed at the part of the system which has a higher sample rate.
A finite impulse response (FIR) filter of length $$N$$ which is placed before the upsampler needs to perform $$N$$ multiplications and $$N-1$$ additions for each sample of $$x(n)$$. However, the filter of Figure 1, which is placed after the upsampler, will have to perform $$LN$$ multiplications and $$L(N-1)$$ additions for each sample of $$x(n)$$.
Is there any way to relax the computational complexity of this system?
To answer this question, we need to note that while the filter realizing $$H(z)$$ in Figure 1 is clocked at a higher sample rate, $$L-1$$ samples out of every $$L$$ samples that $$H(z)$$ processes are zero-valued. Hence, for $$L=2$$ at least $$50$$% of the input samples of $$H(z)$$ are zero-valued. This percentage will increase even further for $$L>2$$.
Considering the fact that multiplying a filter coefficient by a zero-valued input leads to a zero-valued product, we may be able to decrease the computational complexity of the system in Figure 1. To get a better insight, let's investigate a simple example of interpolation where $$L=2$$.
Interpolation with $$L=2$$
Let's assume that $$L=2$$ and $$H(z)$$ is an FIR filter of length six with the following difference equation:
$$y(n)=\sum_{k=0}^{5}b_{k}x(n-k)$$
Equation 1
Assume that the input signal, $$x(n)$$, is as shown in Figure 2.
Figure 2. The input sequence $$x(n)$$.
After upsampling by a factor of two, we have $$x_1(m)$$ shown in Figure 3 below:
Figure 3. The upsampled sequence $$x_1(m)$$.
Assume that the six-tap FIR filter is implemented with the direct-form structure below:
Figure 4. The direct-form realization of a six-tap FIR filter.
With these assumptions, let's examine the straightforward implementation of the interpolation filter in Figure 1. At time index $$m=5$$, the FIR filter will be as shown in Figure 5.
Figure 5. The FIR filter at $$m=5$$.
As you can see, at $$m=5$$, half of the multiplications of the FIR filter have a zero-valued input. The branches corresponding to these multiplications are shown by the dashed lines. You can verify that, for an odd $$m$$, these multiplications will always be zero and $$y(m)$$ will be determined only by the coefficients $$b_1$$, $$b_3$$, and $$b_5$$. At the next time index, i.e. $$m=6$$, we obtain Figure 6 below:
Figure 6. The FIR filter at m=6.
Again those branches which incorporate a zero-valued input are shown by dashed lines. Figure 6 shows that, again, half of the multiplications have a zero-valued input. Examining Figures 5 and 6, we observe that, for an odd time index, half of the coefficients, namely $$b_1$$, $$b_3$$, and $$b_5$$, determine the output value and the sum of the products incorporating the other coefficients is zero. For an even time index, the coefficients, i.e. $$b_0$$, $$b_2$$, and $$b_4$$, are important and the sum of the products for the rest of the coefficients becomes zero.
Let's use two different filters after the upsampler: one with the odd coefficients and the other one with the even coefficients and add the output of these two filters together to get $$y(m)$$. The result is shown in Figure 7.
Figure 7. Breaking the filter's difference equation into two sets of coefficients: the odd coefficients and the even ones.
We can easily obtain the above figure by manipulating Equation 1 as
$$y(n)= \big ( b_0 x(n)+ b_2 x(n-2) + b_4 x(n-4) \big ) + \big ( b_1 x(n-1)+ b_3 x(n-3) + b_5 x(n-5) \big )$$
However, our previous discussion shows why we are interested in this decomposition: at each time index, only one of these two filters can produce a non-zero output and the other one outputs zero. To further clarify, let's consider the lower path of Figure 7. We know that the output of this path is non-zero only for even time indexes. As a result, we only need to simplify the cascade of the upsampler and FIR2 at even time indexes where the filter output is non-zero. At the next time index, we can simply connect the output of the path to zero. This will be further explained in the rest of the article.
Now, let's examine the upsampler followed by the lower path of Figure 7 which incorporates the even coefficients. In this path, we are first upsampling the input $$x(n)$$ to obtain $$x_1(m)$$. With this operation, as shown in Figures 2 and 3, we are creating a time difference equal to two time units between every two successive samples of $$x(n)$$. On the other hand, the filter FIR2 in Figure 7, "looks" at its input at multiples of "two time units". For example, while the multiplication by $$b_0$$ takes the current sample, multiplications by $$b_2$$ and $$b_4$$ are receiving samples with two time units and four time units distances, respectively. Therefore, when the output of FIR2 is going to be non-zero, we can simply find the output by applying $$x(n)$$ rather than $$x_1(m)$$ to the coefficients $$b_0$$, $$b_2$$, and $$b_4$$ provided that we are using a delay of one unit time, i.e. $$Z^{-1}$$, between these coefficients. This equivalent filtering is shown in Figure 8.
Figure 8. The schematic is equivalent to the cascade of the upsampler and FIR2 in Figure 7.
Figure 8 also includes a switch after the filter, why do we need this switch? Remember that FIR2 in Figure 7 has a non-zero output for an even $$m$$. For an odd $$m$$, the output of this filter will be always zero in our example. That's why we need to force the output of the equivalent circuit in Figure 8 to be zero for an odd m. Interestingly, the operation of this particular switch is exactly the same as that of an upsampler by a factor of two. Hence, we obtain the final equivalent schematic in Figure 9.
Figure 9. The schematic is equivalent to the cascade of upsampler and FIR2 in Figure 7.
What is the advantage of Figure 9 over the cascade of the upsampler and FIR2 in Figure 7? In Figure 7, we were evaluating FIR2 at both the odd and even time indexes regardless of the fact that, for an odd time index, the output of FIR2 is always zero. In Figures 8 and 9, this property is taken into account and the output is directly connected to zero for an odd time index. In this way, we are avoiding unnecessary calculations. In other words, the three-tap FIR filter in Figure 9 is placed before the upsampler, hence, we only perform three multiplications and two additions for each input sample of x(n). However, the lower path of Figure 7 places the multiplications after the upsampler and we would have to perform six multiplications and four additions for each input sample of $$x(n)$$.
The process of simplifying the lower path of Figure 7 to the block diagram in Figure 9 is actually a particular example of an identity called the second noble identity. This identity is shown in Figure 10.
Figure 10. The second noble identity states that these two systems are equivalent. Image courtesy of Digital Signal Processing.
Considering our previous discussion, you should now be able to imagine why we are allowed to bring a system which can be expressed in terms of $$Z^I$$, i.e., $$H(Z^I)$$, before the factor-of-I upsampler provided that, for the new system, $$Z^I$$ is replaced by $$Z$$ in the transfer function. In fact, the upsampler creates a time difference equal to I time units between every two successive samples of $$x(n)$$. However, for a time index at which the output is non-zero, the system function $$H(Z^I)$$ "looks" at its input at multiples of "I time units". Hence, we can simplify the cascade of the upsampler and the system function in a manner similar to what we did with the FIR2 path in Figure 7. To read about the proof of the second noble identity, read Section 11.5.2 of this book.
How can we simplify the upper path of Figure 7? We can obtain the system function FIR1 as
$$H_{FIR1}(z)=b_{1}z^{-1}+b_{3}z^{-3}+b_{5}z^{-5}$$
To use the second noble identity, we only need to express this function in terms of $$z^{-2}$$. We can rewrite the system function as
$$H_{FIR1}(z)=\big ( b_{1}+b_{3}z^{-2}+b_{5}z^{-4} \big ) z^{-1} = P_{1}(z^{2})z^{-1}$$
Since $$P_1(z^2)$$ is in terms of $$z^2$$, we can use the noble identity to move this part of the transfer function before the upsampler. In this case, we will have to replace $$z^2$$ with $$z$$ in $$P_1(z^2)$$. The final system is shown in Figure 11.
Figure 11. The final system obtained after applying the second noble identity.
In this system, all of the multiplications are performed before the upsampling operations. Hence, a significant reduction in the computational complexity is achieved. The schematic of Figure 11 is called the polyphase implementation of the interpolation filter.
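The equivalence of the two structures is easy to verify numerically. The sketch below (an illustration using SciPy, not taken from the article) zero-stuffs and filters at the high rate for the straightforward structure of Figure 1, then runs the even- and odd-coefficient subfilters at the low rate and interleaves their outputs for the polyphase structure of Figure 11; the two outputs agree exactly.

```python
# A minimal sketch verifying, for L = 2 and a six-tap filter as in Equation 1, that the
# polyphase structure of Figure 11 matches the straightforward structure of Figure 1.
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)
x = rng.standard_normal(50)                 # input x(n)
b = rng.standard_normal(6)                  # coefficients b0 ... b5

# Figure 1: zero-stuff by 2, then filter at the high sample rate.
x1 = np.zeros(2 * len(x))
x1[::2] = x
y_naive = lfilter(b, 1.0, x1)

# Figure 11: run the even- and odd-indexed coefficients at the low sample rate,
# then interleave the two branch outputs (the upsamplers plus the z^-1 delay).
y_even = lfilter(b[0::2], 1.0, x)           # b0, b2, b4
y_odd = lfilter(b[1::2], 1.0, x)            # b1, b3, b5
y_poly = np.zeros_like(y_naive)
y_poly[0::2] = y_even
y_poly[1::2] = y_odd

print(np.allclose(y_naive, y_poly))         # True
```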
Now, let's examine the general form of the above example. In this case, we have a factor-of-M upsampler followed by a system function H(z).
Polyphase Decomposition and Efficient Implementation of an Interpolator
To find the M-component polyphase decomposition of a given system $$H(z)$$, we need to rewrite the system function as
$$H(z)=\sum_{k=0}^{M-1}z^{-k} P_{k}(z^M)$$
Equation 2
where $$P_k(z)$$ is called a polyphase component of $$H(z)$$ which is given by
$$P_{k}(z)=\sum_{n=-\infty}^{+\infty}h(nM+k)z^{-n}$$
Equation 3
Now, if $$H(z)$$ is preceded by a factor-of-M upsampler, we can apply the second noble identity to $$P_k(z^M)$$ components and achieve a more efficient implementation.
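The sketch below illustrates Equations 2 and 3 for an FIR filter: the k-th polyphase component simply collects the coefficients $$h(nM+k)$$, and running each component at the low rate and interleaving the branch outputs reproduces the straightforward upsample-then-filter result. As a side note, scipy.signal.upfirdn uses the same polyphase idea internally, so it serves as a convenient cross-check.

```python
# A minimal sketch of Equations 2 and 3 for an FIR filter h: the k-th polyphase
# component collects the coefficients h(nM + k).
import numpy as np
from scipy.signal import lfilter, upfirdn

def polyphase_components(h, M):
    return [np.asarray(h, dtype=float)[k::M] for k in range(M)]

h = np.arange(1.0, 10.0)                       # a 9-tap example filter
M = 3
P = polyphase_components(h, M)                 # P[k] = h[k], h[k+3], h[k+6]

x = np.random.default_rng(1).standard_normal(40)

# Straightforward implementation: zero-stuff by M, then filter at the high rate.
x_up = np.zeros(M * len(x))
x_up[::M] = x
y_naive = lfilter(h, 1.0, x_up)

# Polyphase implementation: each branch filters x(n) at the low rate and feeds
# one output phase (the z^-k delays of Equation 2 become the interleaving).
y_poly = np.zeros_like(y_naive)
for k in range(M):
    y_poly[k::M] = lfilter(P[k], 1.0, x)

print(np.allclose(y_naive, y_poly))                              # True
# scipy.signal.upfirdn performs the same polyphase interpolation:
print(np.allclose(upfirdn(h, x, up=M)[: M * len(x)], y_naive))   # True
```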
For example, if $$H(z)$$ is preceded by a factor-of-three upsampler, we can use the decomposition of Equation 2 with $$M=3$$ to obtain Figure 12 below. Now, applying the second noble identity, we will have Figure 13. To get more comfortable with Equations 2 and 3, try using these two equations to obtain the schematic of Figure 11 directly from the system function of the filter in Equation 1.
Figure 12. The three-component polyphase decomposition of $$H(z)$$ preceded by a factor-of-three upsampler. Image courtesy of Digital Signal Processing.
Figure 13. Use of the three-component polyphase decomposition of $$H(z)$$ to implement the factor-of-three upsampler followed by $$H(z)$$. Image courtesy of Digital Signal Processing.
For more details and examples see Section 11.5 of Digital Signal Processing, Section 12.2 of Digital Signal Processing: Fundamentals and Applications, and also this excellent paper from IEEE.
The straightforward implementation of the interpolation filter places $$H(z)$$ at the part of the system which has a higher sample rate.
A finite impulse response (FIR) filter of length $$N$$ which is placed before the upsampler needs to perform $$N$$ multiplications and $$N-1$$ additions for each sample of $$x(n)$$. However, the filter of Figure 1, which is placed after the upsampler, will have to perform $$LN$$ multiplications and $$L(N-1)$$ additions for each sample of $$x(n)$$.
According to the second noble identity, we are allowed to bring a system which can be expressed in terms of $$Z^I$$, i.e., $$H(Z^I)$$, before the factor-of-I upsampler provided that, for the new system, $$Z^I$$ is replaced by $$Z$$ in the transfer function.
If $$H(z)$$ is preceded by a factor-of-M upsampler, we can rewrite the system function in terms of its polyphase components, $$P_k(z^M)$$, and apply the second noble identity to swap the position of the polyphase components and the upsampler.
To see a complete list of my DSP-related articles on AAC, please see this page.
May 2018, 17(3): 1001-1022. doi: 10.3934/cpaa.2018049
A doubly nonlinear Cahn-Hilliard system with nonlinear viscosity
Elena Bonetti 1, Pierluigi Colli 2, Luca Scarpa 3 and Giuseppe Tomassetti 4
Dipartimento di Matematica "F. Enriques", Università degli Studi di Milano, Via Saldini 50,20133 Milano, Italy
Dipartimento di Matematica "F. Casorati", Università di Pavia, Via Ferrata 1,27100 Pavia, Italy
Department of Mathematics, University College London, Gower Street, London WC1E 6BT, United Kingdom
Dipartimento di Ingegneria -Sezione Ingegneria Civile, Università degli Studi "Roma Tre", Via Vito Volterra 62, Roma, Italy
Received October 2017 Revised November 2017 Published January 2018
Fund Project: PC gratefully acknowledges some financial support from the MIUR-PRIN Grant 2015PA5MP7 "Calculus of Variations"; the present paper also benefits from the support of the GNAMPA (Gruppo Nazionale per l'Analisi Matematica, la Probabilità e le loro Applicazioni) of INdAM (Istituto Nazionale di Alta Matematica) and the IMATI – C.N.R. Pavia for EB and PC
In this paper we discuss a family of viscous Cahn-Hilliard equations with a non-smooth viscosity term. This system may be viewed as an approximation of a "forward-backward" parabolic equation. The resulting problem is highly nonlinear, coupling in the same equation two nonlinearities with the diffusion term. In particular, we prove existence of solutions for the related initial and boundary value problem. Under suitable assumptions, we also state uniqueness and continuous dependence on data.
Keywords: Diffusion of species, Cahn-Hilliard equations, viscosity, non-smooth regularization, nonlinearities, initial-boundary value problem, existence of solutions, continuous dependence.
Mathematics Subject Classification: Primary: 35G31, 35K52, 35D35, 74N20.
Citation: Elena Bonetti, Pierluigi Colli, Luca Scarpa, Giuseppe Tomassetti. A doubly nonlinear Cahn-Hilliard system with nonlinear viscosity. Communications on Pure & Applied Analysis, 2018, 17 (3) : 1001-1022. doi: 10.3934/cpaa.2018049
Alain Miranville. Existence of solutions for Cahn-Hilliard type equations. Conference Publications, 2003, 2003 (Special) : 630-637. doi: 10.3934/proc.2003.2003.630
L. Chupin. Existence result for a mixture of non Newtonian flows with stress diffusion using the Cahn-Hilliard formulation. Discrete & Continuous Dynamical Systems - B, 2003, 3 (1) : 45-68. doi: 10.3934/dcdsb.2003.3.45
Forecast performance of a quantity theory of labor
One of the dynamic information equilibrium model forecasts I've been tracking for on the order of a year now to measure its performance is what I call the "N/L" or "NGDP/L" model [1] (specifically FRED GDP, i.e. nominal GDP, divided by FRED PAYEMS, i.e. total nonfarm payrolls). Revised GDP data came out today, so I thought it'd be a good time to check back in with the model [2]:
One way to think about this is as a measure of nominal productivity. We are coming out of the aftermath of the shock to the labor force following the great recession, so we can see a gradual increase back towards the long-run equilibrium.
If we use this dynamic equilibrium model instead of NGDP alone as the shocks, we can see in a history "seismograph" that this measure basically coincides with the inflation measures.
There's a good reason for this: this is effectively a model of Okun's law (as described here) if we identify the "abstract price" with the price level P:
P \equiv \frac{dNGDP}{dL} = k \; \frac{NGDP}{L}
which can be rearranged
\begin{aligned}
L & = k \; \frac{NGDP}{P} \equiv RGDP \\
\frac{d}{dt} \log L & = \frac{d}{dt} \log RGDP
\end{aligned}
to show changes in employment (and therefore unemployment) are directly related to changes in real GDP.
[1] Also, the "quantity theory of labor" per the title because the model implies log NGDP ~ k log L.
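For completeness, that relation follows directly from integrating the information equilibrium condition above (a one-line derivation, nothing beyond separation of variables):

\frac{dNGDP}{dL} = k \; \frac{NGDP}{L} \;\; \Rightarrow \;\; \int \frac{dNGDP}{NGDP} = k \int \frac{dL}{L} \;\; \Rightarrow \;\; \log NGDP = k \log L + c

i.e. NGDP ~ L^k up to a constant, which is the "quantity theory of labor" form.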
[2] Here is the complete model:
Women in the workforce and the Solow paradox
Paddy Carter sent me a link on Twitter to a study [pdf] using a different model that came to conclusions similar to the view I've been expressing on this blog:
The increase in female employment and participation rates is one of the most dramatic changes to have taken place in the economy during the last century.
From their conclusions:
Furthermore, the unexplained portion [of the rise in women's employment] is quite large and positive; in other words, for cohorts born before 1955, the simulations overpredict female employment and for more recent cohorts, underpredict it. Therefore, there must have been other changes taking place among married women by cohort. We have shown above that it is consistent with the model to claim that technological progress in household production or a change in social norms has brought down the costs of working outside the home.
As part of my continuing series of dynamic information equilibrium "seismograms" (previously here), I put together another version of the story of women entering the workforce [1] as a driver of dramatic changes in the economy:
A big lesson is about causality. The positive shock to the level of women in the labor force precedes a (smaller relative) shock to the level of men in the labor force, both of which precede shocks to output and finally inflation (both CPI and PCE shown). Women entering the workforce caused a general economic boom — which drew additional men into the workforce and increased output and prices.
In the study linked above, the authors speculate that some of the effect was due to "technological progress in household production" (e.g. household labor-saving devices like washing machines and dishwashers) which made me think of the 'Solow paradox' ("You can see the computer age everywhere but in the productivity statistics."). What if the reason that household technology shows up in the economy, while computers don't show up in the productivity statistics, is that household technology enabled more people to enter the measured labor force, whereas computers were mostly used by people already in it [2]? This idea can be taken a step further to suggest that maybe the high GDP growth and inflation of the 1970s was due to the fact that a significant fraction of work that isn't counted in GDP statistics (household production) was automated allowing people to participate in work that was counted in GDP. That is to say that if household production was counted in GDP, it is possible that there might not have been a "great inflation".
This is of course speculative. However it is a good "thought experiment" to keep in mind to keep you from assuming that GDP is some ideal measure and remind you that the "events" that appear in the GDP data may well be artefacts of the measurement methodology [3].
Diane Coyle makes the case [pdf] for the possibility I mention in footnote [2]: transition to more "digital production" behind recent low productivity.
Update 28 February 2018
Commenter Anti below mentions inflation expectations, so I thought I'd add the dynamic information equilibrium model of the price level implied by the University of Michigan inflation expectations data [4]. I've added the result to the macroeconomic seismogram:
Note that shock to inflation expectations follows the shock to measured inflation (making a simple backward-looking martingale a plausible model).
[1] Here are the CLF models for men and women:
The NGDP model is from here; the inflation models were used in my first history seismogram.
[2] This brings up the question of whether current home production that isn't counted in GDP — much of which is done on computers — is behind the recent "low growth" of a lot of developed economies.
[3] And even the economic system as households (as well as firms) are typically more like miniature centrally planned economies.
[4] The model fit is pretty good (dynamic equilibrium is α = 0.03, or basically 3% inflation):
Dynamic equilibrium in wage growth
I saw some data from the Atlanta Fed [1] on wage growth that looked remarkably suitable for a dynamic information equilibrium model (also described in my recent paper). One of the interesting things here is that it is a dynamic equilibrium between wages ($W$) and the rate of change of wages ($dW/dt$) so that we have the model $dW/dt \rightleftarrows W$:
\frac{d}{dt} \log \frac{d}{dt} \log W = \frac{d}{dt} \log \frac{dW/dt}{W} \approx \gamma + \sigma_{i} (t)
where $\gamma$ is the dynamic equilibrium growth rate and $\sigma_{i} (t)$ represents a series of shocks. This model works remarkably well:
The shock transitions are in 1992.0, 2002.4, 2009.4, and 2014.7 which all follow the related shock to unemployment. A negative shock to employment drives down wage growth (who knew?), but it also appears that wage growth has a tendency to increase at about 4.2% per year [2] unless there is a positive shock to employment (such as in 2014) when it can increase faster. The most recent downturn in the data is possibly consistent with the JOLTS leading indicators showing a deviation, however since the wage growth data seems to lag recessions it is more likely that this is a measurement/noise fluctuation.
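For readers who want to reproduce this kind of fit, here is a minimal sketch. It is not the actual code behind the figures: the data is synthetic, and the two-shock structure and parameter values are illustrative assumptions. The idea is simply that the log of the series is a linear trend (the dynamic equilibrium rate) plus logistic shock transitions.

import numpy as np
from scipy.optimize import curve_fit

def logistic(t, a, t0, b):
    # integrated shock: a smooth step of size a centered at t0 with width b
    return a / (1.0 + np.exp(-(t - t0) / b))

def log_model(t, gamma, c, a1, t1, b1, a2, t2, b2):
    # dynamic equilibrium slope gamma plus two shocks (add terms as needed)
    return gamma * t + c + logistic(t, a1, t1, b1) + logistic(t, a2, t2, b2)

# synthetic stand-in for the wage growth series (monthly, 1998-2018)
t = np.linspace(1998, 2018, 241)
y = np.exp(log_model(t, 0.042, -83.0, -0.3, 2002.4, 0.5, -0.4, 2009.4, 0.5))
y *= np.exp(0.01 * np.random.randn(t.size))

p0 = [0.04, -83.0, -0.3, 2002.0, 1.0, -0.4, 2009.0, 1.0]   # initial guesses
popt, _ = curve_fit(log_model, t, np.log(y), p0=p0, maxfev=20000)
print("estimated dynamic equilibrium growth rate:", popt[0])

With real data you would replace the synthetic series with the Atlanta Fed tracker values and add or remove shock terms as the residuals demand.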
I added the wage growth series to the labor market "seismogram" collection, and we can see a fall in wage growth typically follows a recession:
[1] The time series is broken before 1997, but data goes back to 1983 in the source material. I included data back to 1987. However the data prior to the 1991 recession does not have the complete 1980s recession(s), so the fit to that recession shock would be highly uncertain and so I left it out.
[2] Wage growth is typically around 3.0% lately, so a 4.2% increase in that rate would mean that after a year wage growth would be about 3.1% and after 2 years about 3.3% in the absence of shocks.
Are interest rates inexplicably high?
The interest rate models of the long and short term interest rates are predicting average interest rates below the currently observed rates. For example, in this forecast:
Now the actual forecast is for the average trend of monthly rates and I'm showing actual daily interest rate data, so we can expect to see occasional deviations even if the model is working correctly.
But how can we tell the difference between some expected theoretical error and a deviation? I decided to look at the elevated recent data in the light of the models' typical error. In the case of the long rate above, we're in the normal range:
The short rate is on a significant deviation:
However these errors basically assume that the model error is roughly constant in percentage (i.e. a 10% error means 100 basis point error on a 10% interest rate while a 10% error means a 10 basis point error on a 1% interest rate). This is definitely not true because the data is reported only to the nearest basis point, but the finite precision effect should only come into play near log(0.01) ~ -4.6. This error is possibly due to the Federal Reserve's implied precision of 25 basis points where log(0.25) ~ -1.4. Since the Fed doesn't make changes of less than a quarter of a percentage point (25 basis points), and the short rate typically sticks close to the Fed funds rate, we'd expect data near or below log(0.25) as shown on the graph to have larger error than points above log(0.25).
I don't see any particular reason to abandon these models without a more significant deviation.
Comparing CPI forecasts to data
New CPI data is out today, and here is the latest data point as both continuously compounded annual rate of change and year-over-year change. The latest uptick is consistent with a general upward trend after the post-recession shock to the labor force.
Some historical myths about Einstein and relativity
Which "thought experiment" leads you this geometry?
One of the things I've noticed ever since I started doing some "freelance" (or "maverick" or "nutcase") economic research is how many strange accounts of how special relativity came about are out there in the world. It's a story frequently invoked by people from all walks of life from economists to philosophers to general fans of science as an example of an ideal process of science. However the story invoked is often at odds with what actually happened or with how physicists today view the outcome.
The popular re-telling actually has many parallels with the popular but erroneous [1] re-telling of how 70s inflation "proved Friedman was right" in macroeconomics — even to the point where some practitioners themselves believe the historical myths. The popular (but false) narrative goes something like this: Michelson and Morley conclusively disproved the idea of the aether and in order to solve the resulting problems, Einstein used intuition and some thought experiments about moving clocks to derive a new theory of physics that refuted the old Newtonian world.
This should immediately raise some questions. 1) What problems with Newtonian physics would be caused by showing the aether (which doesn't exist) doesn't exist? 2) Why do physicists still use Newtonian physics? 3) Isn't Einstein famous for the equation E = mc² — which thought experiment leads to that?
The real story is more like this: Maxwell had produced an aether-based framework that was unifying the physics of light waves, electricity, and magnetism but there were some counterintuitive aspects of this framework that all had to do with moving charges and light sources involving a bunch of ad hoc mathematical modifications like length contraction, models of the aether, and an inconsistency in the interpretation of Maxwell's equations; Einstein came up with a general principle that unified all of these ad hoc modifications, made the aether models unnecessary, and resolved the asymmetry.
This answers my questions 1) through 3) above. 1) The aether was shown to be unnecessary, not erroneous. 2) Newtonian physics is a valid approximation when velocity is small compared to the speed of light. 3) E = mc² is a result of Lorentz invariance (i.e. math), not the thought experiments that help us get over the counterintuitive aspects of Lorentz invariance.
Now I am not a historian, so you should take this blog post as you would any amateur's. I did an undergraduate research project on the motivations for special relativity as part of my interdisciplinary science honors program [2], presented the result in a seminar, and I'm fairly familiar with the original papers (translated from German and available in this book). I also spent a bit of time talking with Cecile DeWitt about Einstein, but I'd only really use this to confirm the popular notion that Einstein had a pretty robust sense of humor so direct quotes should be considered with that in mind.
Myth: Einstein was "bad at math"
This takes many forms from denial that the theoretical advances Einstein made were extremely advanced math at the time, to that he was actually bad at math leading him to his "thought experiments". This myth likely arises from a quote from a 1943 letter in response to a high school student (Barbara Wilson) who had called Einstein one of her heroes (emphasis mine):
Dear Barbara:
I was very pleased with your kind letter. Until now I never dreamed to be something like a hero. But since you have given me the nomination, I feel that I am one. It's like a man must feel who has been elected by the people as President of the United States.
Do not worry about your difficulties in mathematics; I can assure you that mine are still greater.
This probably was just said as encouragement, and Einstein might even have been thinking about his own crash course in differential geometry and comparing himself to mathematicians he knew like his teacher Minkowski. Einstein was something of a mathematical prodigy when he was younger and all of his work on relativity is mathematically challenging even for modern physics students. It would be hard to look at mathematics like this and say the person who was able to use it to produce an empirically successful theory of gravity was "bad at math". Also, here's the blackboard he left after his death:
You forgot to contract the indices on the Christoffel symbols.
Update 19 January 2019 (H/T Beatrice Cherrier). Apparently the math was so obscure at the time that only Einstein and his close, mostly German, colleagues really understood it, and due to English-German animosity in WWI it took some time to reach English physicists. The article that link is from also has other things related to the rest of this post.
Myth: Einstein's thought experiments led to relativity
There are quite a few versions of this idea, but really it is more the reverse. Math led Einstein to conclusions he used thought experiments to understand (i.e. explain to himself and others) because of how counterintuitive they were. Maxwell's equations and their Lorentz invariance led Einstein to effectively promote a symmetry of electromagnetism to a symmetry of the universe. Einstein later used Minkowski's mathematical representation of a 4-dimensional spacetime as the framework for what would become general relativity.
It's somewhat ironic because Mach — who coined our modern use of "thought experiment" and that Einstein had learned "relativity" from — believed that human intuition was accurate because it was honed by evolution. But why would evolution provide humans with the capacity to intuitively understand the bending of space and time (or the quantum fluctuations at the atomic scale)? Einstein turned that upside-down, and used Mach's thought experiments to instead explain counterintuitive concepts like time dilation and length contraction. I think a lot of people confuse Einstein's and Mach's ideas of "thought experiments" which led to this myth [3]. You can read more about this here.
I once had a commenter on this blog who decided to argue against even direct quotes from Einstein saying he got the idea of space-time for general relativity from Minkowski's 4-dimensional mathematics. Although some things in physics get named for the wrong person (the Lorentz force wasn't first derived by Lorentz), it's called Minkowski space-time for a reason.
This is a powerful narrative for some reason; I suspect it is the math-phobic environment that seems unique to American discourse. It is fine as an American to freely admit you are bad at math and still think of yourself as somehow "cultured" or "intellectual" (or in fact to elevate your status). The myth that Einstein didn't need math to come up with relativity plays into that.
Myth: The aether was disproved just before (or by) relativity
As I talked about here, there were actually several different theories of the aether (e.g. aether dragging) and various negative results over 50 years from Fizeau's experiment to Michelson and Morley's were often seen as confirmation of particular versions. Experiments continued for many years after Einstein's 1905 paper [3], and despite the modern narrative that Michelson and Morley's experiment led to special relativity it was really more about mathematical theory than experiment [4].
I'm not entirely convinced that the aether has been completely "disproved" in the popular imagination or even among physicists anyway. We frequently see general relativity and gravitational waves explained through the "rubber sheet" analogy which might as well be called an "aether sheet". If the strong and weak nuclear forces hadn't been discovered in the meantime it is entirely possible that Kaluza and Klein's 5-dimensional theory that combined general relativity and electromagnetism would have become the dominant "standard model" and the aether could have been re-written in history as what space-time is made of [5].
What the #$@& is this substance that's oscillating here?
Myth: Special relativity "falsified" Newtonian physics
This one can be partially blamed on Karl Popper, but also on various representations and interpretations of Popper. I've frequently found descriptions of Popper's idea of falsification that say something like "Eddington's 1919 experiment falsified Newton's theory of gravity and caused it to be replaced with Einstein's". For example, here:
Popper argues, however, that [General Relativity] is scientific while psychoanalysis is not. The reason for this has to do with the testability of Einstein's theory. As a young man, Popper was especially impressed by Arthur Eddington's 1919 test of GR, which involved observing during a solar eclipse the degree to which the light from distant stars was shifted when passing by the sun. Importantly, the predictions of GR regarding the magnitude shift disagreed with the then-dominant theory of Newtonian mechanics. Eddington's observation thus served as a crucial experiment for deciding between the theories, since it was impossible for both theories to give accurate predictions. Of necessity, at least one theory would be falsified by the experiment, which would provide strong reason for scientists to accept its unfalsified rival.
As best as I can tell, Popper only thought that Eddington's experiment demonstrated the falsifiability of Einstein's general relativity (e.g. here [pdf]): Eddington's experiment could have come out differently meaning GR was falsifiable. I have never been able to find any instance of Popper himself saying Newton's theory was falsified (falsifiable, yes, but not falsified). Popper was a major fanboy for Einstein which doesn't help — it's hard to read Popper's gushing about Einstein and not believe he thought Einstein had "falsified" Newton. Also it's important to note that general relativity isn't required for light to bend (just the equivalence principle), but the relativistic calculation predicts twice the purely "Newtonian" effect. That is to say that light bending alone doesn't "falsify" Newtonian physics, just the particular model of photon-matter gravitational scattering.
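For reference, the standard deflection results for light grazing the sun (with impact parameter equal to the solar radius) are

\delta_{\text{Newton}} = \frac{2GM_{\odot}}{c^{2} R_{\odot}} \approx 0.87'' \qquad \delta_{\text{GR}} = \frac{4GM_{\odot}}{c^{2} R_{\odot}} \approx 1.75''

so the relativistic value is exactly twice the "Newtonian" one, and Eddington's measurements favored the larger value.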
In any case, both Newtonian gravity and Newtonian mechanics are used today by physicists unless one is dealing with a velocity close to the speed of light or in the presence of significant gravitational fields (or at sufficient precision to warrant it such as in your GPS which includes some corrections due to general relativity). The modern language we use is that Newtonian physics is an effective theory.
More myths?
I will leave this space available for more myths that I encounter in my travels.
[1] Read James Forder on this.
[2] Dean's Scholars at the University of Texas at Austin
[3] I sometimes jokingly point out that there is a privileged frame of reference that observers would agree on: the Big Bang rest frame. We only recently discovered our motion with respect to it in the 1990s. This idea also complicates some of the "thought experiments" used to explain special relativity (i.e. an absolute clock could be defined as one ticking in the rest frame of the CMB).
[4] I blame Popper for this:
Famous examples are the Michelson-Morley experiment which led to the theory of relativity
Einstein actually begins [pdf] with the "asymmetries" in Maxwell's equations, and relegates the aether experiments to an aside:
Examples [from electrodynamics], together with the unsuccessful attempts to discover any motion of the earth relatively to the "light medium," suggest that the phenomena of electrodynamics as well as of mechanics possess no properties corresponding to the idea of absolute rest.
The paper itself is titled On the electrodynamics of moving bodies, further emphasizing that Einstein's motivation was more about understanding the "asymmetries" of Maxwell's equations and Lorentz's electrodynamics. Einstein's paper basically reformulates Lorentz's "stationary aether" electrodynamics, but does it without recourse to the aether.
Experiments like Michelson and Morley's (such as Fizeau's 50 years prior, and a long list of others) were part of a drumbeat of negative results of measurements of motion with respect to the aether. In a sense, Einstein is telling us the aether (and therefore any attempt to measure our motion with respect to it) is basically moot — not that some experiment "disproved" it:
The introduction of a "luminiferous ether" will prove to be superfluous inasmuch as the view here to be developed will not require an "absolutely stationary space" provided with special properties, nor assign a velocity-vector to a point of the empty space in which electromagnetic processes take place.
[5] For example: "In the early 1800s Fresnel came up with the wave theory of light where the electromagnetic vibrations occurred in a medium called the luminiferous aether that we now refer to as space-time after Kaluza and Klein's unification of the two known forces in the universe: gravity and electromagnetism."
Posted by Jason Smith at 7:00 PM
Economic seismograms: labor and financial markets
Steve Randy Waldman wrote a tweet asking about whether the stock market falls imperfectly predicted recessions or caused them, to which I responded saying the former in the "Phillips curve era" and the latter in the "asset bubble era" (both described here). But I thought I'd show a dynamic information equilibrium history chart that helps illustrate this a bit better for the US data. I first started making these graphs a few months ago partially inspired by this 85 foot long infographic from the 1930s; I thought they provided a simpler representation of the important takeaways from the dynamic information equilibrium models (presentation here or see also my paper) that I plan on using in my next book. Be sure to click on the graphics to expand them.
The light orange bars are NBER recessions. The darker orange bars represent the "negative" shocks (in the sense that you'd consider a bad change in the measure — unemployment rate goes up or the stock market goes down) with the wider ones meaning a longer duration shock. The blue bars are "positive" shocks (unemployment rate goes down, stock market goes up). The models shown here are the S&P 500, unemployment rate, JOLTS (quits, openings, hires), and prime age Civilian Labor Force participation rate.
As you can see in the top graph, major shocks to the S&P 500 precede recessions (and unemployment shocks) in the Phillips curve era (the 1960s to roughly the 1980s) and are basically concurrent with recessions (and unemployment shocks) in the asset bubble era (late 90s to the present).
At the bottom of this post, I focused in on the latter five labor market measures. This graph illustrates the potential "leading indicators" in the JOLTS data with hires coming first, openings second, and quits third. I don't know if the order is fixed (if there is a recession coming up, openings appears to be leading a bit more than hires). The other interesting piece is that shocks (in both directions [1]) to prime age CLF participation lag shocks to unemployment. There's an intuitive "story" behind this: people become unemployed, search for a while, and then leave the labor force.
PS I thought I'd include these measures that illustrate my contention that the "great inflation" of the 1970s was primarily a demographic phenomenon of women entering the workforce that I describe here in order to have a single post to reference for some of my more outside the mainstream conjectures. I present two measures of inflation (CPI and PCE) as well as the civilian labor force (total size) alongside the employment population ratio for men and women.
[1] You may be asking why there's a positive shock to unemployment, but no (apparent) shock to any of the JOLTS measures. That's an excellent question. The answer probably lies in the fact that shocks to unemployment are made up of a combination of smaller shocks to the other measures as well as a shock to the matching function itself. Therefore the shock to hires and openings might be too small to see in those (much noisier) measures. One way to think about it is that the unemployment rate is a sensitive detector of changes in hires, openings, and the matching function.
What is the chance of seeing deviations in three JOLTS measures?
JW Mason had a post the other day wherein he said:
The probability approach in economics. Empirical economics focuses on estimating the parameters of a data-generating process supposed to underlie some observable phenomena; this is then used to make ceteris paribus (all else equal) predictions about what will happen if something changes. Critics object that these kinds of predictions are meaningless, that the goal should be unconditional forecasts instead ("economists failed to call the crisis"). Trygve Haavelmo's writings on empirics from the 1940s suggest a third possible goal: unconditional predictions about the joint distribution of several variables within a particular domain.
To that end, I thought I'd look at the joint probabilities of the JOLTS data time series falling below the model estimates. First, let's look at some density plots of the deviation from the model (these are percentage points) for JOLTS hires (HIR), openings (JOR), and quits (QUR) for the data from 2004-2015 and then place the data from January 2017 to the most recent (Dec 2017) on top of it (points):
Can we quantify this a bit more? I looked at two measures using the full 3-dimensional distribution: the probability of finding a point that is further out from the center as well as the probability that at least one of the data series has a worse negative deviation than the given point and plotted both of those measures versus the distance from zero:
The first measure doesn't account for the correlation between the different series very well, but does give a sense of how far out these points are from the center of the distribution. The second measure gives us a better indication of not only the joint probabilities but the correlation between them — even if one of the three series is far from the center, it can be mitigated by one that is closer especially if they are correlated.
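A rough sketch of how these two measures can be computed from the empirical residual distribution follows. The arrays here are synthetic stand-ins for the actual 2004-2015 model residuals, and the "latest deviation" point is a made-up example, not the real data.

import numpy as np

# synthetic stand-in for model residuals (hires, openings, quits), 2004-2015
rng = np.random.default_rng(0)
resid = rng.multivariate_normal(
    mean=[0.0, 0.0, 0.0],
    cov=[[0.02, 0.01, 0.01],
         [0.01, 0.03, 0.01],
         [0.01, 0.01, 0.02]],
    size=1000)

point = np.array([-0.05, -0.20, -0.02])   # hypothetical latest deviation

# measure 1: chance a historical point lies farther from the center than this one
p_farther = np.mean(np.linalg.norm(resid, axis=1) > np.linalg.norm(point))

# measure 2: chance at least one series shows a worse (more negative) deviation
p_worse = np.mean((resid < point).any(axis=1))

print(p_farther, p_worse)

The second measure automatically picks up the correlation between the series because it is computed directly from the joint empirical distribution rather than from the marginals.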
While there is a 19% chance that one of the hires, openings, or quits data could've come in worse than it did on Tuesday based on the data from 2004-2015, that's not all that small of a probability, leaving open the possibility that the data is simply on a correlated jog away from the model. This is basically capturing the fact that most of the deviation is coming from the openings data while the other two are showing smaller deviations:
JOLTS data ... and that market crash?
The latest JOLTS data does seem to continue the deviation from the dynamic information equilibrium we might see during the onset of a new shock (shown here with the original forecast and updated counterfactual shock in gray; post-forecast data is in black):
I will admit that the way I decided to implement the counterfactual shock (as a Taylor expansion of the shock function that looks roughly exponential on the leading edge) might have some limitations if we proceed into the shock proper because adding successive terms causes the longer ranges of the forecast to wildly oscillate back and forth as can be seen here for a sine function. Using the full logistic function isn't necessarily a solution because it produces a series of under- and over-estimates (see here). Basically, forecasting a function that grows exponentially at first can be hard. One other measure is the joint function of openings and unemployment making up the Beveridge curve which is starting to show a deviation from the expected path as well (moving almost perpendicularly to it):
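To make the "roughly exponential leading edge" point concrete: for a logistic shock of size a and width b centered at t_0, well before the center (t much less than t_0) we have

\frac{a}{1 + e^{-(t-t_{0})/b}} \approx a \, e^{(t-t_{0})/b}

so the early data only pins down a combination of the parameters in the exponent, which is why truncated Taylor expansions of the shock (and even full logistic fits) are poorly constrained until the transition is well underway.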
This brings me to the discussion around the latest market crash which included a lot of "the market is not the economy" and a pretty definitive "literally zero percent chance we are in a recession now" from Tim Duy. The only thing I would bring up is that the JOLTS data is a possible leading indicator of a recession and that data is not obviously saying "no recession" — and is in fact hinting at one (in the next year or so).
Coincidentally, I just updated the S&P 500 model I've been tracking and the latest drop puts us almost exactly back at the dynamic equilibrium (red, data and ARMA process forecast is blue, post-forecast data is black):
Which is to say that we're right where we'd expect to be — not on some negative deviation from equilibrium (just a correction to a positive deviation). I think it is just coincidental that the market fell to exactly the dynamic equilibrium model center; I wouldn't read too much into that. The fluctuations we see are well within the historical deviations from the dynamic equilibrium (red band is the 90% band).
Update 7 February 2018
I thought I'd add in the interest rate model forecast that's been going on for over three years as well. Note that the model prediction is for monthly data, therefore the random noise in daily data will have somewhat larger spread, but it is still a bit high (which is one of the possible precursors of recession, connected to yield curve inversion in the model, see also here or here):
Long term exercises in hubris: forecasting the S&P 500
I've been tracking the S&P 500 forecast made with the dynamic information equilibrium model. The latest mini-boom and subsequent fall are still within the normal fluctuations of the market:
However, I wouldn't be surprised if the massive giveaway to corporations in the latest Republican tax cut didn't in fact constitute a "shock" (dashed line in the graph above). Also relevant: the multi-scale self-similarity of the S&P 500 in terms of dynamic equilibrium.
Also, the close today brings us almost exactly back to the dynamic equilibrium:
Also bitcoin continues to fall (this is not a forecast, but rather a model description):
Continued update of S&P 500 and bitcoin:
African American unemployment spike
There's almost a sense of dramatic irony that after the State of the Union speech last week where credit was taken for the stock market and African American unemployment, both reversed themselves in the most recent data. While the spike in unemployment is outside the 90% confidence bands for the dynamic information equilibrium model for black unemployment, I do think it is just a fluctuation (statistical or measurement error):
We'd expect 90% of the individual measurements to fall inside the bands, so occasionally we should see one fall outside. It's not an actual increase in human suffering, and in fact is consistent with the continued decline in unemployment seen by the model. The unemployment rate is somewhat of a lagging indicator of recessions as well, so we should expect to see a decline in one or more JOLTS measures first if this is the leading edge of a recession.
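As a rough check on how often a point outside the 90% band should show up (a back-of-the-envelope calculation assuming independent monthly draws): the chance of at least one such point in n months is

1 - 0.9^{n} \approx 0.72 \quad \text{for } n = 12

so a single outlying month in a year of data is entirely unremarkable.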
We should always keep our minds open to alternative theories, and along with the spike in hate crimes since the 2016 election it is possible that employers have felt more empowered to discriminate against African Americans. JOLTS data is not broken out by race, and so a racially biased decline in hires could well be hidden in the data (e.g. it could be partially responsible for the potential decline we are currently seeing in the aggregate measures — why would JOLTS hires fall when the "conventional wisdom" is that the economy is doing "great"?). This "leading" indicator wouldn't be as good of a leading indicator for a racially biased recession. In the past two recessions, the shocks to unemployment hit African Americans a couple months later (the centers are at 2002.0 vs 2001.8, and 2009.0 vs 2008.8), so a recession where black unemployment leads would be anomalous.
I don't think that is what is happening (it's just a single data point after all), but it can't be ruled out using available data. And after the experience of the past two years, I wouldn't put money on the better angels of white Americans' nature.
Unemployment and labor force participation (models vs data)
The latest employment situation data is out and the unemployment rate holds steady at 4.1%. This is still in line with the dynamic information equilibrium model (here or in my recent paper) as we begin the model's second year of accurate forecasting:
The data is also still in line with some of the latest forecasts from the Fed and FRBSF (but not their earlier ones):
Note that the unemployment rate seems to be a lagging indicator compared to JOLTS data (out next Tuesday 6 February 2018), so while there is some evidence in the JOLTS hires data of a possible turnaround it won't show up in the unemployment rate for several months.
Also out is the latest labor force participation data which doesn't help us distinguish between the two models (with and without a small positive shock in 2016) as it's consistent with both:
And finally there is the novel "Beveridge curve" connecting labor force participation and unemployment rate:
In light of this post by JW Mason, I decided to add the error bands to the "Beveridge" curve above based on the individual errors. It's not exactly looking at the probability of the joint distribution of multiple variables, but it's a step in that direction.
When did we become gluten intolerant?
I don't know about you all, but I've been doing this since the early 2000s.
The dynamic information equilibrium approach I talk about in my recent paper doesn't just apply to economic data. The idea of comparing the information content of observing one event relative to observing another has rather general application. As an example, I will look at search term frequency. Now if the English language was unchanging, given that there are a huge number of speakers, we'd expect relative word frequencies to remain constant and the distributions to be relatively stable. Changes to the language would show up as "non-equilibrium shocks" — a change in the relative frequency of use that may or may not reach a new equilibrium. A given word becomes more or less common and therefore has a different information content when that word is observed (a "word event").
We might be able to see some of these shocks in Google trends data — a collection of "word events" entered as search terms. It's only available since 2004, so we really can only look at language changes that happen within a few years. Longer changes (e.g. words falling into disuse) won't show up clearly, but this time series is well-suited for looking at fads.
I wanted to try this because I read an offhand comment somewhere (probably on Twitter) that said something like "everyone suddenly became gluten intolerant in 2015" [1]. What does the search data say?
The gluten transition in the US is centered near January 2009, but takes place over about 6 years (using the full width at half maximum for the shock). It "begins" in the mid-2000s and we seem to have achieved a new equilibrium over the past couple years.
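Here is a minimal sketch of the kind of logistic fit behind that statement. The time series below is synthetic; with real Google Trends data you would substitute the downloaded monthly interest values, and the parameter values are just illustrative.

import numpy as np
from scipy.optimize import curve_fit

def shock(t, a, t0, b, c):
    # single logistic transition of size a, center t0, width b, plus baseline c
    return a / (1.0 + np.exp(-(t - t0) / b)) + c

t = np.linspace(2004, 2018, 169)                       # monthly, in years
y = shock(t, 60.0, 2009.0, 1.7, 5.0) + 2.0 * np.random.randn(t.size)  # stand-in data

popt, _ = curve_fit(shock, t, y, p0=[50.0, 2009.0, 1.5, 5.0])
a, t0, b, c = popt
fwhm = 4.0 * b * np.log(1.0 + np.sqrt(2.0))   # FWHM of the shock's rate of change
print("center:", round(t0, 1), "duration (FWHM):", round(fwhm, 1), "years")

The "duration" quoted in the post is the full width at half maximum of the transition rate, which for a logistic step of width b works out to about 3.5 b.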
How about avocado toast? That happened around 2015 in the US:
However, I did notice on Twitter there were a lot more (and earlier) references to avocado toast from Australians. In fact, I think it was a mention in Australian media that made me realize it wasn't just the breakfast I had made myself for years after a Chilean friend introduced me to it (in Chile, where it has been a common dish for a long time, it's called "palta"). Was this hunch visible in the data? Yes — almost a full year earlier:
So anyway, I just wanted to show a fun application of the information equilibrium framework. It applies to a lot of situations where there is some concept of balance between different things: supply and demand, words and their language, cars and the flow of traffic, neurons and the cognitive state, or electrons and information.
The "macro wars" (Nov 2007–Mar 2011):
[1] Update: found it.
As a casual student of American food faddism, something that is still more than alive and well today (Yes, it's an amazing coincidence that a sizable percentage of the educated liberal upper middle class all became gluten intolerant over a 3 year period. Must be pollution or something), I always love stories about our ridiculous food history.
It's a 6-year period above, but the definition of the "width" of a transition is somewhat arbitrary (I used the full width at half maximum above).
Write chromyl chloride test with equation
The chloride radical is detected by the chromyl chloride test, in which orange-red chromyl chloride gas is produced. Equation involved: 4NaCl + K2Cr2O7 + 6H2SO4 → 4NaHSO4 + 2KHSO4 + 3H2O + 2CrO2Cl2 (chromyl chloride, orange-red vapours).
What do you understand by lanthanide contraction
The lanthanide contraction is the steady decrease in the atomic and ionic radii of the lanthanides with increase in atomic number.
What are lanthanide elements?
Lanthanide elements closely resemble lanthanum in their properties. The lanthanides are a group of 14 elements from atomic number 58 to 71. In these elements, on increasing the atomic number, the differentiating electron enters the inner 4f subshell.
Explain oxidization properties of potassium permanganate in acidic medium.
In acidic medium, KMnO4 acts as a strong oxidising agent (Mn is reduced from +7 to +2). For example:
2KMnO4 + 8H2SO4 + 10KI → 6K2SO4 + 2MnSO4 + 8H2O + 5I2
2KMnO4 + 5SO2 + 2H2O → 2MnSO4 + 2H2SO4 + K2SO4
2KMnO4 + 16HCl → 2KCl + 2MnCl2 + 8H2O + 5Cl2
5(COOH)2 + 5[O] → 10CO2 + 5H2O (oxalic acid is oxidised to carbon dioxide and water)
Give two differences between double salt and complex salt.
Answer:
Double salt: A double salt is a combination of two simple salts crystallised together. In solution it dissociates completely into its constituent ions, so it gives the tests of all the ions present (e.g. Mohr's salt).
Complex salt: A complex salt contains a complex ion. The complex ion does not dissociate into simple ions in solution, so the salt does not give the tests of all the individual ions (e.g. potassium ferrocyanide).
Give two differences between DNA and RNA.
Answer:
DNA (deoxyribonucleic acid): contains the bases adenine (A), cytosine (C), guanine (G) and thymine (T), and the sugar deoxyribose.
RNA (ribonucleic acid): contains the bases adenine (A), cytosine (C), guanine (G) and uracil (U), and the sugar ribose.
Which of the following compounds has tetrahedral geometry? (a) (b) (c) (d)
Sol: Correct option is (d), $\left[\mathrm{NiCl}_{4}\right]^{2-}$.
Oxidation number of gold metal is (a)+1 (b) 0 (c) (d) all of these
Sol: Correct option is B. 0
Match the example given in Column I with the name of the reaction in Column II
Solution: (i) is e (ii) is d (iii) is a (iv) is b (v) is f (vi) is c
Match the reactions given in Column I with the suitable reagents given in Column II.
Solution: (i) is c (ii) is d (iii) is a (iv) is b
Match the acids given in Column I with their correct IUPAC names given in Column II.
Solution: (i) is b (ii) is e (iii) is d (iv) is a (v) is c
Match the common names given in Column I with the IUPAC names given in Column II
Solution: (i) is d (ii) is e (iii) is a (iv) is b (v) is c
Can Gatterman-Koch reaction be considered similar to Friedel Craft's acylation? Discuss.
Solution: Both reactions resemble each other. In Friedel-Crafts acylation, benzene (or another arene) is treated with an acid chloride in the presence of anhydrous AlCl3 to give the corresponding ketone. In the Gattermann-Koch reaction, benzene is treated with carbon monoxide and HCl in the presence of anhydrous AlCl3/CuCl, which behaves as if formyl chloride (HCOCl) were the acylating agent, giving benzaldehyde. Hence the Gattermann-Koch reaction can be regarded as a special case of Friedel-Crafts acylation.
Ethylbenzene is generally prepared by acetylation of benzene followed by reduction and not by direct alkylation. Think of a possible reason.
Solution: Direct alkylation is avoided because Friedel-Crafts alkylation gives polysubstituted products (the alkyl group activates the ring towards further substitution) and can also involve carbocation rearrangements. Instead, benzene is acylated to acetophenone (the acyl group deactivates the ring, so the reaction stops at monosubstitution) and the ketone is then reduced, e.g. by Clemmensen reduction, to give ethylbenzene.
Complete the following reaction sequence.
Why are carboxylic acids more acidic than alcohols or phenols although all of them have a hydrogen atom attached to an oxygen atom (—O—H)?
Solution: The carboxylate ion formed by loss of a proton from a carboxylic acid is stabilised by resonance in which the negative charge is delocalised equally over two electronegative oxygen atoms. In the phenoxide ion the negative charge is delocalised partly onto the less electronegative carbon atoms of the ring, and in an alkoxide ion there is no resonance stabilisation at all. Since the carboxylate ion is the most stabilised conjugate base, carboxylic acids are more acidic than phenols and alcohols even though all three contain an O—H bond.
Identify the compounds A, B and C in the following reaction.
Solution: Compound A = CH3-MgBr Compound B = CH3-COOH Compound C = CH3COOCH3
Carboxylic acids contain carbonyl group but do not show the nucleophilic addition reaction like aldehydes or ketones. Why?
Solution: In aldehydes and ketones the oxygen atom pulls the shared electron pair towards itself, so the carbonyl carbon acquires a partial positive charge and is readily attacked by nucleophiles. In carboxylic acids, however, the lone pair on the -OH oxygen is delocalised towards the carbonyl carbon by resonance, which greatly reduces the partial positive charge (electrophilicity) of that carbon. Hence carboxylic acids do not show nucleophilic addition reactions like aldehydes and ketones.
Alkenes and carbonyl compounds both contain a π bond but alkenes show electrophilic addition reactions whereas carbonyl compounds show nucleophilic addition reactions. Explain.
Solution: In carbonyl compounds the carbon is attached to the more electronegative oxygen atom, which pulls the shared pi-electron pair towards itself; the carbonyl carbon therefore carries a partial positive charge and is attacked by nucleophiles (nucleophilic addition). In alkenes there is no such electronegative atom and the C=C pi-electron cloud is electron-rich, so alkenes are attacked by electrophiles instead (electrophilic addition).
Arrange the following in decreasing order of their acidic strength. Explain the arrangement. C6H5COOH, FCH2COOH, NO2CH2COOH
Solution: NO2CH2COOH > FCH2COOH > C6H5COOH. Electron-withdrawing groups such as -NO2 and -F increase the acidity of carboxylic acids by stabilising the carboxylate anion through their -I (inductive) effect. Since -NO2 is a stronger electron-withdrawing group than -F, NO2CH2COOH is the most acidic, followed by FCH2COOH; C6H5COOH, with no such group, is the least acidic of the three.
Compound 'A' was prepared by oxidation of compound 'B' with alkaline KMnO4. Compound 'A' on reduction with lithium aluminium hydride gets converted back to compound 'B'. When compound 'A' is heated with compound B in the presence of H2SO4 it produces the fruity smell of compound C to which family the compounds 'A', 'B' and 'C' belong to?
Solution: Compound 'A' belongs to the carboxylic acid. Compound 'B' belongs to alcohol. Compound 'C' belongs to an ester group.
What product will be formed on reaction of propanal with 2-methyl propanal in the presence of NaOH? What products will be formed? Write the name of the reaction also.
Solution: Both propanal and 2-methylpropanal contain alpha-hydrogens and can therefore undergo cross aldol condensation in the presence of NaOH. A mixture of four products is formed: two self-condensation products (one from each aldehyde) and two cross-condensation products; on heating, the beta-hydroxy aldehydes dehydrate to the corresponding alpha,beta-unsaturated aldehydes. The reaction is called cross (mixed) aldol condensation.
Arrange the following in decreasing order of their acidic strength and give the reason for your answer.
Solution: FCH2COOH > ClCH2COOH > C6H5CH2COOH > CH3COOH > CH3CH2OH. CH3CH2OH is the least acidic of the given compounds. C6H5CH2COOH is more acidic than CH3COOH because of the electron-withdrawing (-I) effect of the phenyl group, while FCH2COOH is more acidic than ClCH2COOH because fluorine is more electronegative than chlorine.
Oxidation of ketones involves carbon-carbon bond cleavage. Name the products formed on oxidation of 2, 5-dimethylhexan-3-one.
Solution: Oxidation of 2,5-dimethylhexan-3-one involves cleavage of the carbon-carbon bonds on either side of the carbonyl group, giving a mixture of carboxylic acids: 2-methylpropanoic acid, (CH3)2CHCOOH, and 3-methylbutanoic acid, (CH3)2CHCH2COOH.
Name the electrophile produced in the reaction of benzene with benzoyl chloride in the presence of anhydrous AlCl3. Name the reaction also.
Solution: The electrophile produced in the reaction of benzene with benzoyl chloride in the presence of anhydrous AlCl3 is the benzoyl cation (C6H5CO+). The product formed is benzophenone, and the reaction is called Friedel-Crafts acylation.
Benzaldehyde can be obtained from benzal chloride. Write reactions for obtaining benzal chloride and then benzaldehyde from it.
Solution: Toluene is first converted to benzal chloride (C6H5CHCl2) by side-chain chlorination with chlorine gas in the presence of light. Benzal chloride on hydrolysis at 373 K gives benzaldehyde.
Write IUPAC names of the following structures.
Solution: (i) Ethane-1,2-dial. (ii) Benzene-1, 4-dicarbaldehyde. (iii) 3-Bromobenzaldehyde.
Give the structure of the following compounds. (i) 4-Nitropropiophenone (ii) 2-Hydroxycyclopentanecarbaldehyde (iii) Phenyl acetaldehyde
Give the IUPAC names of the following compounds
Solution: (i) 3-Phenylprop-2-en-1-al (ii) Cyclohexanecarbaldehyde (iii) 3-Oxopentan-1-al (iv) But-2-enal
Temperature dependence of resistivity ρ(T) of semiconductors, insulators, and metals is significantly based on the following factors:
a) number of charge carriers can change with temperature T
b) time interval between two successive collisions can depend on T
c) length of material can be a function of T
d) mass of carriers is a function of T
The correct answer is a) number of charge carriers can change with temperature T b) time interval between two successive collisions can depend on T
Write a test to differentiate between pentan-2-one and pentan-3-one.
Solution: Pentan-2-one and pentan-3-one can be distinguished by the iodoform test. Pentan-2-one contains a CH3CO- group and therefore gives a yellow precipitate of iodoform on warming with iodine and NaOH solution; pentan-3-one has no CH3CO- group and does not give this test.
Why is there a large difference in the boiling points of butanal and butane-1-ol?
Solution: Butan-1-ol has extensive intermolecular hydrogen bonding through its -OH group, whereas butanal cannot form hydrogen bonds with itself and has only weaker dipole-dipole interactions. Extra energy is needed to break these hydrogen bonds, so butan-1-ol has a much higher boiling point than butanal.
Which of the following is the correct representation for intermediate of nucleophilic addition reaction to the given carbonyl compound (A) :
Solution: Option (A) and (B) are the answers. Reason:
Benzophenone can be obtained by ____________. (i) Benzoyl chloride + Benzene + AlCl3 (ii) Benzoyl chloride + Diphenyl cadmium (iii) Benzoyl chloride + Phenyl magnesium chloride (iv) Benzene + Carbon monoxide + ZnCl2
Solution: Option (i) and (ii) are the answers Reason: Benzophenone can be obtained by the Friedel-Craft acylation reaction. The reaction is shown as
Through which of the number of the following reactions of carbon atoms can be increased in the chain? (i) Grignard reaction (ii) Cannizaro's reaction (iii) Aldol condensation (iv) HVZ reaction
Solution: Option (i) and (iii) are the answers. Reason: Grigned reaction and aldol condensation is used to increase the number of carbon attom in the chain as follows:
Which of the following conversions can be carried out by Clemmensen Reduction? (i) Benzaldehyde into benzyl alcohol (ii) Cyclohexanone into cyclohexane (iii) Benzoyl chloride into benzaldehyde (iv) Benzophenone into diphenylmethane
Solution: Option (ii) and (iv) are the answers. Reason: The carbonyl group of aldehydes and ketones is reduced to a CH2 group on treatment with zinc amalgam and concentrated hydrochloric acid (Clemmensen reduction). Hence cyclohexanone is reduced to cyclohexane and benzophenone is reduced to diphenylmethane.
Treatment of compound with NaOH solution yields(i) Phenol (ii) Sodium phenoxide (iii) Sodium benzoate (iv) Benzophenone
Solution: Option (ii) and (iii) are the answers. Reason: Treatment of the compound with NaOH gives sodium phenoxide and sodium benzoate by alkaline hydrolysis of the ester (a nucleophilic acyl substitution), as follows
13. Which of the following compounds do not undergo aldol condensation?
Solution: Option (ii) and (iv) are the answers. Reason: Aldehydes and ketones having at least one alpha-hydrogen undergo aldol condensation in the presence of dilute alkali as catalyst to give beta-hydroxy aldehydes (aldols) or beta-hydroxy ketones (ketols). Compounds that have no alpha-hydrogen cannot undergo aldol condensation.
In Clemmensen Reduction carbonyl compound is treated with _____________. (i) Zinc amalgam + HCl (ii) Sodium amalgam + HCl (iii) Zinc amalgam + nitric acid (iv) Sodium amalgam + HNO3
Solution: Option (i) is the answer. Reason: In Clemmensen reduction the carbonyl compound is treated with zinc amalgam (Zn-Hg) and concentrated HCl.
Which of the following compounds will give butanone on oxidation with alkaline KMnO4 solution? (i) Butan-1-ol (ii) Butan-2-ol (iii) Both of these (iv) None of these
Solution: Option (ii) is the answer.
Which is the most suitable reagent for the following conversion?(i) Tollen's reagent (ii) Benzoyl peroxide (iii) I2 and NaOH solution (iv) Sn and NaOH solution
Solution: Option (iii) is the answer. Reason: This is the iodoform (haloform) reaction, carried out with I2 and NaOH solution.
Compound A and C in the following reaction are :_____________
Solution: Option (ii) is the answer. Reason:
Structure of 'A' and type of isomerism in the above reaction are respectively. (i) Prop–1–en–2–ol, metamerism (ii) Prop-1-en-1-ol, tautomerism (iii) Prop-2-en-2-ol, geometrical isomerism (iv) Prop-1-en-2-ol, tautomerism
Solution: Option (iv) is the answer. Reason: The structure of A and the type of isomerism in the above reaction are prop-1-en-2-ol and tautomerism, respectively. The enol form tautomerises into the more stable keto form.
Which product is formed when the compoundis treated with concentrated aqueous KOH solution?
Solution: Option (ii) is the answer. Reason: Benzaldehyde (C6H5CHO) has no alpha-hydrogen atom, so on treatment with concentrated KOH it undergoes the Cannizzaro reaction (disproportionation), giving the corresponding alcohol (benzyl alcohol) and the salt of the acid (potassium benzoate).
Cannizaro's reaction is not given by _____________.
Solution: Option (iv) is the answer. Reason: CH3CHO will not give Cannizzaro's reaction because it contains an alpha-hydrogen, while the other three compounds have no alpha-hydrogen and hence will give Cannizzaro's reaction.
The reagent which does not react with both, acetone and benzaldehyde. (i) Sodium hydrogen sulphite (ii) Phenyl hydrazine (iii) Fehling's solution (iv) Grignard reagent
Solution: Option (iii) is the answer. Reason: Aromatic aldehydes do not respond to Fehling's test and ketones do not reduce Fehling's solution, so Fehling's solution reacts with neither acetone nor benzaldehyde. Sodium hydrogen sulphite, phenylhydrazine and Grignard reagent all react with both compounds, since these are general reactions of the carbonyl group.
Compound can be prepared by the reaction of _____________.
The correct order of increasing acidic strength is _____________. (i) Phenol < Ethanol < Chloroacetic acid < Acetic acid (ii) Ethanol < Phenol < Chloroacetic acid < Acetic acid (iii) Ethanol < Phenol < Acetic acid < Chloroacetic acid (iv) Chloroacetic acid < Acetic acid < Phenol < Ethanol
Solution: Option (iii) is the answer. Reason: The correct order of increasing acidic strength is Ethanol < Phenol < Acetic acid < Chloroacetic acid. Phenol is more acidic than ethanol because the phenoxide ion formed on losing a proton is stabilised by resonance. Acetic acid is more acidic than phenol because the negative charge of the carboxylate ion is delocalised over two electronegative oxygen atoms. Chloroacetic acid is the most acidic because the electron-withdrawing chlorine atom further stabilises the carboxylate ion.
Which of the following compounds is most reactive towards nucleophilic addition reactions?
Solution: Option (i) is the answer.
Addition of water to alkynes occurs in acidic medium and the presence of Hg2+ ions as a catalyst. Which of the following products will be formed on addition of water to but-1-yne under these conditions.
Solution: Option (ii) is the answer. Reason: Addition of water to but-1-yne in the presence of H2SO4 and HgSO4 gives butan-2-one. The addition takes place according to Markovnikov's rule.
Arrange the following compounds in increasing order of dipole moment. CH3CH2CH3, CH3CH2NH2, CH3CH2OH
Solution: CH3CH2CH3 < CH3CH2NH2 < CH3CH2OH. CH3CH2CH3 has the least dipole moment among the three because its C-C and C-H bonds are almost non-polar. Oxygen is more electronegative than nitrogen, so the O-H and C-O bonds in CH3CH2OH are more polar than the N-H and C-N bonds in CH3CH2NH2, giving ethanol the largest dipole moment.
Predict the product of the reaction of aniline with bromine in a non-polar solvent such as CS2.
Solution: The products formed in the reaction of aniline with bromine in a non-polar solvent such as CS2 are 4-Bromoaniline and 2-Bromoaniline where 4-Bromoaniline is the major product.
Under what reaction conditions (acidic/basic), the coupling reaction of aryldiazonium chloride with aniline is carried out?
Solution: The coupling reaction of aryldiazonium chloride with aniline is carried out in a mildly acidic medium (pH 4-5). It is an electrophilic substitution reaction in which the diazonium cation attacks the ring of aniline to form the yellow dye p-aminoazobenzene.
Explain why MeNH2 is a stronger base than MeOH?
Solution: MeNH2 is a stronger base than MeOH because nitrogen is less electronegative than oxygen, so the lone pair of electrons on the nitrogen atom in MeNH2 is more readily donated to a proton than the lone pair on the oxygen atom of MeOH.
Why is benzene diazonium chloride not stored and is used immediately after its preparation?
Solution: Benzene diazonium chloride is stable only at low temperatures (0-5 °C) in aqueous solution; at higher temperatures it readily decomposes, for example to phenol with loss of nitrogen. Because it is so unstable, it is not stored and is used immediately after its preparation.
What is the best reagent to convert nitrile to primary amine?
Solution: LiAlH4 and Sodium/Alcohol are the best reagents for converting nitrile to primary amine. The nitriles can be converted into a corresponding primary amine through reduction.
Why is NH2 group of aniline acetylated before carrying out nitration?
Solution: Direct nitration of aniline gives tarry oxidation products in addition to the nitro derivatives; moreover, in the strongly acidic medium aniline is protonated to the anilinium ion, which is meta-directing, so a significant amount of the meta isomer is also formed. The -NH2 group is therefore acetylated (protected as acetanilide) before nitration; this moderates the activating effect and prevents oxidation, and p-nitroaniline is obtained as the main product after hydrolysis of the amide.
Which of the following reactions belong to electrophilic aromatic substitution?
(i) Bromination of acetanilide
(ii) Coupling reaction of aryldiazonium salts
(iii) Diazotisation of aniline
(iv) Acylation of aniline
Solution: Option (i) and (ii) are the answers. Reason:...
Under which of the following reaction conditions, aniline gives p-nitro derivative as the major product?
(i) Acetyl chloride/pyridine followed by reaction with conc. H2SO4 + conc. HNO3
(ii) Acetic anhydride/pyridine followed by conc. H2SO4 + conc. HNO3
(iii) Dil. HCl followed by reaction with conc. H2SO4 + conc. HNO3
(iv) Reaction with conc. HNO3 + conc.H2SO4
Solution: Option (i) and (ii) are the answers. Reason: In addition to the nitro derivatives, direct nitration of aniline produces tarry oxidation products. Furthermore, in a strongly acidic medium aniline is protonated to the anilinium ion, which is meta-directing, so a considerable amount of the meta isomer is formed. Acetylating the amino group first (with acetyl chloride or acetic anhydride in pyridine) and then nitrating gives the p-nitro derivative as the major product.
Which of the following reactions are correct?
Solution: Option (i) and (iii) are the answers. Reason:
Which of the following amines can be prepared by Gabriel synthesis.
(i) Isobutyl amine
(ii) 2-Phenylethylamine
(iii) N-methyl benzylamine
(iv) Aniline
Solution: Option (i) and (ii) are the answers. Reason: Gabriel synthesis is used for the preparation of primary amines. Phthalimide on treatment with ethanolic potassium hydroxide forms potassium phthalimide, which on heating with an alkyl halide followed by alkaline hydrolysis gives the corresponding primary amine. Aryl halides do not undergo nucleophilic substitution with potassium phthalimide, so aniline cannot be prepared by this method, and N-methylbenzylamine is a secondary amine.
Arenium ion involved in the bromination of aniline is __________.
Solution: Option (i), (ii) and (iii) are the answers. Reason: Arenium ion involved in the bromination of aniline are as follows:
The product of the following reaction is __________.
Solution: Option (A) and (B) is the answer. Reason:
The reagents that can be used to convert benzene diazonium chloride to benzene are __________.
Solution: Option (ii) and (iii) are the answers. Reason: Hypophosphorous acid (phosphinic acid) and ethanol reduce diazonium salts to arenes and are themselves oxidised to phosphorous acid and ethanal, respectively.
Which of the following species are involved in the carbylamine test?
Solution: Option (i) and (ii) are the answers. Reason: Only RNC and CHCl3 are involved in carbylamine reaction.
Reduction of nitrobenzene by which of the following reagent gives aniline?
(i) Sn/HCl
(ii) Fe/HCl
(iii) H2-Pd
(iv) Sn/NH4OH
Solution: Option (i), (ii), and (iii) are the answers. Reason: They are reducing agents.
Which of the following cannot be prepared by Sandmeyer's reaction?
(i) Chlorobenzene
(ii) Bromobenzene
(iii) Iodobenzene
(iv) Fluorobenzene
Solution: Option (iii) and (iv) are the answers. Reason: Sandmeyer's reaction is used for the preparation of chlorobenzene and bromobenzene.
Which of the following methods of preparation of amines will not give the same number of carbon atoms in the chain of amines as in the reactant?
(i) The reaction of nitrite with LiAlH4.
(ii) The reaction of the amide with LiAlH4 followed by treatment with water.
(iii) Heating alkyl halide with potassium salt of phthalimide followed by hydrolysis.
(iv) Treatment of amide with bromine in the aqueous solution of sodium hydroxide.
Solution: Option (iv) is the answer. Reason: In the Hoffmann bromamide degradation, the amide is converted into a primary amine containing one carbon atom fewer than the parent amide, so this is the method in which the number of carbon atoms in the chain is not preserved.
Which of the following should be most volatile?
Solution: Option (ii) is the answer. Reason: The order of boiling points of isomeric amines is 1° amine > 2° amine > 3° amine. Tertiary amines have no N-H bond and hence no intermolecular hydrogen bonding (association), so they are the most volatile among isomeric amines.
Among the following amines, the strongest Brönsted base is __________.
Solution: Option (iv) is the answer. Reason: Option (iv) is the strongest Brönsted base because its nitrogen lone pair is not delocalised and remains available for protonation, which is not the case in aniline and in...
The correct decreasing order of basic strength of the following species is _______. H2O, NH3, OH–, NH2–
(i) NH2– > OH– > NH3 > H2O
(ii) OH– > NH2– > H2O > NH3
(iii) NH3 > H2O > NH2– > OH–
(iv) H2O > NH3 > OH– > NH2–
Solution: Option (i) is the answer. Reason: NH3 is more basic than H2O, therefore NH2− (Conjugate base of weak acid NH3) is a stronger base than OH−.
Which of the following compounds is the weakest Brönsted base?
Solution: Option (iii) is the answer. Reason: A Bronsted Lowry base is a proton acceptor or hydrogen ion acceptor. Amines have a stronger tendency to accept protons and are strong Bronsted bases....
Which of the following compound will not undergo an azo coupling reaction with benzene diazonium chloride.
(i) Aniline
(ii) Phenol
(iii) Anisole
(iv) Nitrobenzene
Solution: Option (iv) is the answer. Reason: Diazonium cation is a weak electrophile and hence it reacts with electron-rich compounds containing electron-donating groups such as −OH, -$NH_2$ and...
The best method for preparing primary amines from alkyl halides without changing the number of carbon atoms in the chain is
(i) Hoffmann Bromamide reaction
(ii) Gabriel phthalimide synthesis
(iii) Sandmeyer reaction
(iv) Reaction with
Solution: Option (ii) is the answer. Reason: The best method for preparing primary amines from alkyl halides without changing the number of carbon atoms in the chain is Gabriel synthesis. Because this...
The reaction $ArN_2^+Cl^- \xrightarrow{\text{Cu/HCl}} ArCl + N_2 + CuCl$ is named as _________.
(i) Sandmeyer reaction
(ii) Gatterman reaction
(iii) Claisen reaction
(iv) Carbylamine reaction
Solution: Option (ii) is the answer. Reason: Diazonium salts in the presence of copper powder and halogen acid give aryl halide. Gattermann reaction is a variation of Sandmeyer reaction in which...
Acid anhydrides on reaction with primary amines give ____________.
(i) amide
(ii) imide
(iii) secondary amine
(iv) imine
Solution: Option (i) is the answer Reason: When acid anhydrides react with primary amines, they produce amide. The H atom of the amino group is replaced with an acyl group in this nucleophilic...
The most reactive amine towards dilute hydrochloric acid is ___________.
Solution: Option (ii) is the answer. Reason: The reactivity of amines towards dilute HCl parallels their basicity. For small alkyl groups (R), the order of basicity is secondary amine ...
Reduction of aromatic nitro compounds using Fe and HCl gives __________.
(i) aromatic oxime
(ii) aromatic hydrocarbon
(iii) aromatic primary amine
(iv) aromatic amide
Solution: Option (iii) is the answer. Reason: Reduction of nitro aryl compounds in presence of Fe and HCl gives aromatic primary amines.
In the nitration of benzene using a mixture of conc. HNO3 and conc. H2SO4, the species which initiates the reaction is __________.
Solution: Option (iii) is the answer. Reason: The electrophile that initiates the reaction is the nitronium ion, $NO_2^+$, generated by the protonation of nitric acid by sulphuric acid.
The gas evolved when methylamine reacts with nitrous acid is __________.
(i) (ii) (iii) (iv)
Methylamine reacts with HNO2 to form _________.
The correct increasing order of basic strength for the following compounds is _________.
(i) II < III < I
(ii) III < I < II
(iii) III < II < I
(iv) II < I < III
Solution: Option (iv) is the answer. Reason: Electron-donating groups increase the basicity while electron-withdrawing groups decrease the basicity of...
Hoffmann Bromamide Degradation reaction is shown by __________.
Solution: Option (ii) is the answer. Reason: The aryl amide is converted to an arylamine in the presence of $Br_2$ and $NaOH$.
The best reagent for converting 2–phenylpropanamide into 2–phenylpropanamine is _____.
(i) excess H2 (ii) Br2 in aqueous NaOH (iii) iodine in the presence of red phosphorus (iv) LiAlH4 in ether
Solution: Option (iv) is the answer. Reason:
Amongst the given set of reactants, the most appropriate for preparing 2° amine is _____.
(i) 2° R—Br + NH3
(ii) 2° R—Br + NaCN followed by
(iii) 1° R—NH$_2$ + RCHO followed by $H_2$/Pt, $H_3O^+$/heat
The source of nitrogen in Gabriel synthesis of amines is _____________.
(i) Sodium azide, NaN3
(ii) Sodium nitrite, NaNO2
(iii) Potassium cyanide, KCN
(iv) Potassium phthalimide
Solution: Option (iv) is the answer. Reason: Gabriel synthesis: the reaction is shown in the image. The source of the nitrogen atom in Gabriel synthesis is potassium phthalimide.
To prepare a 1° amine from an alkyl halide with simultaneous addition of one group in the carbon chain, the reagent used as a source of nitrogen is ___________.
(i) Sodium amide, NaNH2
(ii) Sodium azide, NaN3
6. Which of the following reagents would not be a good choice for reducing an aryl nitro compound to an amine? (i) H2 (excess)/Pt (ii) LiAlH4 in ether (iii) Fe and HCl (iv) Sn and HCl
Solution: Option (ii) is the answer. Reason: LiAlH4/ether reduces aryl nitro compounds to azo compounds: $2C_6H_5NO_2 \xrightarrow{LiAlH_4} C_6H_5N{=}N{-}C_6H_5$
5. Benzylamine may be alkylated as shown in the following equation : C6H5CH2NH2 + R—X → C6H5CH2NHR Which of the following alkyl halides is best suited for this reaction through SN1 mechanism? (i) CH3Br (ii) C6H5Br (iii) C6H5CH2Br (iv) C2H5 Br
Solution: Option (iii) is the answer. Reason: C6H5CH2Br is best suited for this reaction through the SN1 mechanism as the benzylic carbocation ($C_6H_5CH_2^+$) formed is resonance...
Which of the following is the weakest Brönsted base?
Solution: Option (A) is the answer. Reason: Aniline is the weakest Bronsted base due to delocalization of lone pair of electron...
Amongst the following, the strongest base in aqueous medium is ____________.
Solution: Option (iii) is the answer. Reason: Due to the electron-releasing nature of the alkyl group (R), it pushes electrons towards nitrogen and thus makes the unshared electron pair available...
The correct IUPAC name for CH2=CH–CH2–NH–CH3 is (i) Allylmethylamine (ii) 2-amino-4-pentene (iii) 4-aminopent-1-ene (iv) N-methylprop-2-en-1-amine
Solution: Option (iv) is the answer. Reason: $CH_2=CHCH_2-NHCH_3$ N−methylprop−2−en−1−amine.
Which of the following is a 3° amine?(i) 1-methylcyclohexylamine (ii) Triethylamine (iii) tert-butylamine (iv) N-methyl aniline
Solution: Option (ii) is the answer. Reason: Triethylamine is a 3° amine because it is derived from ammonia with each hydrogen atom replaced by an ethyl group.
Which of the following reaction does NOT yield an amine
(1) (2) (3) (4) Correct Answer: Option (4) Explanation: The given reaction in the fourth option does not yield an amine whereas the rest of the reactions...
Identify 'Z' in the following series of reactions
1.Butan-1-ol 2.2-chlorobutane 3.Butan-2-ol 4.But-2-ene Correct Answer: 3.Butan-2-ol Explanation: Identification of Z - Butan-2-ol
Which of the following compounds is obtained when t-butyl bromide is treated with alcoholic ammonia?
(1) (2) (3) (4) Correct Answer: Option (3) Explanation:
Identify the product Y in the following reaction
1.Gluconic acid 2.Saccharic acid 3.n-Hexane 4.Glucoxime Correct Answer: 2. Saccharic acid Explanation: Identification of Y - Saccharic acid
Identify 'A' in the following reaction
1. 2.R-NH-OH 3.R-COOH 4.R-NH2 Correct Answer: 2.R-NH-OH Explanation: Identification of A - R-NH-OH
Which among the following statements about terpenes is NOT true?
1.Terpenes occur in essential oils 2.Terpenes include vitamin A, E and K 3.Terpenes consist of isoprene units 4.Terpenes are saturated hydrocarbons Correct Answer: 4. Terpenes are saturated...
How many pi bonds and sigma bound are present in following molecule?
1.5 pi,14 sigma - bonds 2.3 pi,17 sigma - bonds 3.3 pi, 17 sigma - bonds 4.2 pi, sigma - bonds Correct Answer: 3.3 pi, 17 sigma - bonds Explanation: 3 pi, 17 sigma - bonds are...
Aluminium crystallizes in face centered cubic structure, having atomic radius 125pm. The edge length of unit cell of aluminium is
253.5 pm 353.5 pm 465 pm 250 pm Solution: 353.5 pm. For an FCC (CCP) unit cell, $4r = \sqrt{2}\,a$, so $a = \frac{4r}{\sqrt{2}} = \frac{4 \times 125}{\sqrt{2}} \approx 353.5$ pm.
Which of the following is Rosenmund reduction?
(1) (2) (3) (4) Correct Answer: (3) Explanation: Rosenmund reduction reaction -
Identify the correct statement from the following :
(1) Blister copper has blistered appearance due to evolution of CO2.
(2) Vapour phase refining is carried out for Nickel by Van Arkel method.
(3) Pig iron can be moulded into a variety of shapes.
(4) Wrought iron is impure iron with 4% carbon.
Correct option: (3) Explanation: Because of the evolution of $SO_2$, not $CO_2$, blister copper has a blistered appearance. The Van Arkel method is used to obtain ultra-pure Titanium through vapor...
An alkene on ozonolysis gives methanal as one of the product. Its structure is
Correct option: (2) On ozonolysis, the structure in option B produces methanal, as shown in the reaction above.
When carbolic acid is heated with concentrated nitric acid in presence of concentrated sulphuric acid it forms
1.benzoic acid 2.picric acid 3.phthalic acid 4.benzene sulphonic acid Correct Answer: 2. picric acid Explanation: When carbolic acid is heated with concentrated nitric acid in presence of...
For the following reaction, what is the value of ∆S(total) at 298 K?
Fe2O3(s) + 3CO(g) → 2Fe(s) + 3CO2(g); ΔH° = −29.8 kJ and ΔS° = 15 J K−1. 1. 29.8 J K−1 2. 100.0 J K−1 3. 298.0 J K−1 4. 115.0 J K−1 Correct Answer: 4. 115.0 J K−1 Explanation: The value of ∆S(total) at 298 K is...
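A worked check of this answer, using only the values given in the question:
$$ \Delta S_{\text{total}} = \Delta S_{\text{sys}} + \Delta S_{\text{surr}} = \Delta S^{\circ} + \frac{-\Delta H^{\circ}}{T} = 15\ \mathrm{J\,K^{-1}} + \frac{29{,}800\ \mathrm{J}}{298\ \mathrm{K}} = 15 + 100 = 115\ \mathrm{J\,K^{-1}} $$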
What is the oxidation number of carbons in glucose?
1.-6 2.+6 3.+3 4.Zero Correct Answer: 4. Zero Explanation: Zero is the oxidation number of carbons in glucose.
The rate constant for a second order reaction A → product is 1.62 M−1 s−1. What will be the rate of reaction when the concentration of the reactant is 2 × 10−3 M?
1.3.24 x 10-3Ms-1 2.3.24 x 10-6 Ms-1 3.6.48 x 10-6 Ms-1 4.2 x 10-3 Ms-1 Correct Answer: 3.6.48 x 10-6 Ms-1 Explanation: The rate of reaction when concentration of reactant is 2 x 10-3M is...
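A worked evaluation of the second-order rate law with the values stated above:
$$ \text{rate} = k[\mathrm{A}]^{2} = 1.62\ \mathrm{M^{-1}\,s^{-1}} \times \left(2 \times 10^{-3}\ \mathrm{M}\right)^{2} = 6.48 \times 10^{-6}\ \mathrm{M\,s^{-1}} $$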
Calcite crystals used in Nicol's prism are formed of
1.CaC2 2.CaCO3 3.CaCL2 4.CaO Correct Answer: 2.CaCO3 Explanation: Calcite crystals used in Nicol's prism are formed of CaCO3
H2 molecule is more stable than Li2 molecule, because
1. In H2 molecule 1s molecular orbitals are shielded by electrons. 2. In H2 bond order is one. 3. In Li2 molecule 1s molecular orbitals are shielded by electrons. 4. In Li2...
Which of the following monomers is used in manufacture of Neoprene rubber?
1.1,3-Butadien 2.styrene 3.2-chlorobuta-1,3-diene 4.Isobutylene Correct Answer: 3.2-chlorobuta-1,3-diene Explanation: 2-chlorobuta-1,3-diene monomer is used in the manufacture of Neoprene...
The unit of atomic mass, amu is replaced by u, here u stands for
1.unified mass 2.united mass 3.unique mass 4.universal mass Correct Answer: 1. unified mass Explanation: u stands for unified mass.
What is the lowest oxidation state possessed by phosphorus in its oxyacid?
1.+4 2.+2 3.+5 4.+1 Correct Answer: 4. +1 Explanation: +1 is the lowest oxidation state possessed by phosphorus in its oxyacid
What happen during bessemerization process of copper from copper pyrites?
1.Au and Ag metals are deposited as anode mud. 2.Impurities as As and Sb are removed as volatile oxides. 3.Cu is obtained by auto reduction of Cu2O and CuS. 4.Iron is removed in the form of slag....
What is the common unit of conductivity if the dimension is expressed in centimeter?
1.Ω cm-1 2.Ω-1 cm-1 3.Ω cm 4.Ω-1 cm Correct Answer: 2. Ω-1 cm-1 Explanation: Ω-1 cm-1 is the common unit of conductivity if the dimension is expressed in centimeter.
Blurring of vision is a side effect caused by the use of
1.antibiotics 2.antacides 3.tranquilizers 4.analgesics Correct Answer: 3. tranquilizers Explanation: Blurring of vision is a side effect caused by the use of tranquillizers....
What is the boiling point of heavy water?
1. 100.4 °C 2. 101.4 °C 3. 273 °C 4. 100 °C Correct Answer: 2. 101.4 °C Explanation: 101.4 °C is the boiling point of heavy water.
What is effective atomic number of Fe in [Fe(CN)6]4- (At.no. of Fe =26)
1.34 2.26 3.36 4.35 Correct Answer: 3.36 Explanation: 36 is effective atomic number of Fe in [Fe(CN)6]4-
Which among the following elements has lowest density and is lightest?
1. Scandium 2. Cobalt 3. Copper 4. Iron Correct Answer: 1. Scandium Explanation: Scandium is the element that has the lowest density and is the lightest.
What is the value of radius ratio of ionic crystal having coordination number six?
1. Greater than 0.732 2. In between 0.414 to 0.732 3. In between 0.225 to 0.414 4. Less than 0.225 Correct Answer: 2. In between 0.414 to 0.732 Explanation: The value of the radius ratio of an ionic crystal having coordination number six lies between 0.414 and 0.732.
What is the molar conductivity of 0.1 M NaCl if its conductivity is 1.06 × 10−2 Ω−1 cm−1?
1.1.06 x 102 Ω-1cm2 mol-1 2.1.06 x 10-2 Ω-1cm2 mol-1 3.9.4 x 10-2 Ω-1cm2 mol-1 4.5.3 x 103 Ω-1cm2 mol-1 Correct Answer: 1.1.06 x 102 Ω-1cm2 mol-1 Explanation: The molar conductivity of 0.1 M...
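A worked check, using the relation between molar conductivity and conductivity (κ in Ω−1 cm−1, concentration in mol L−1):
$$ \Lambda_{m} = \frac{\kappa \times 1000}{C} = \frac{1.06 \times 10^{-2} \times 1000}{0.1} = 1.06 \times 10^{2}\ \Omega^{-1}\,\mathrm{cm^{2}\,mol^{-1}} $$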
If a mixture of iodomethane and iodoethane is treated is treated with sodium metal in presence of dry ether it forms
1.popane and ethane 2.ethane and butane 3.propane and butane 4.ethane, propane and butane Correct Answer: 4. ethane, propane and butane Explanation: If a mixture of iodomethane and iodoethane...
Which of the following carbonyl compounds does NOT undergoes aldol condensation?
1.Acetone 2.Benzophenone 3.Acetaldehyde 4.Accetophenone Correct Answer: 2. Benzophenone Explanation: Benzophenone is the carbonyl compound which does not undergoes aldol...
Calculate the number of unit cells in 38.6 g of a noble metal having density 19.3 g cm−3, if the volume of one unit cell is 6.18 × 10−23 cm3.
1. 3.236 × 1022 2. 6.180 × 1023 3. 6.236 × 1020 4. 3.236 × 1023 Correct Answer: 1. 3.236 × 1022 Explanation: Volume of the metal = 38.6 g ÷ 19.3 g cm−3 = 2 cm3; number of unit cells = 2 cm3 ÷ (6.18 × 10−23 cm3) ≈ 3.236 × 1022.
What is the percentage of formaldehyde in formalin?
1.60% 2.40% 3.10% 4.20% Correct Answer: 2.40% Explanation: The percentage of formaldehyde in formalin is 40%.
Which of the following antihistamine contain –CN group?
1.Dimetapp 2.Cimetidine 3.Terfenadine 4.Ranitidine Correct Answer: 2.Cimetidine Explanation: Cimetidine contain –CN group.
Which among the following coordination compounds does not have coordination number equal to number of ligands?
1.[pt(NH3)6]4+ 2.[Co(en)3]3+ 3.[Cu(NH3)4]2+ 4.[Co (NH3)6]3+ Correct Answer: 2.[Co(en)3]3+ Explanation: [Co(en)3]3+does not have coordination number equal to number of...
According to Andrews' isotherms, at what temperature does carbon dioxide gas start to condense at 73 atmospheres?
1.21.5oC 2.30.98oC 3.13.1oC 4.48.1oC Correct Answer: 2.30.98o C Explanation: According to Andrews isothermals at 30.98oC the carbon dioxide gas starts to condense at 73...
Sodium crystallizes in bcc structure with radius 1.86 x 10-8cm.What is the edge length of unit cell of sodium?
1.4.3x 10-8 cm 2.3.72 x 10-8 cm 3.7.44 x 10-8 cm 4.5.26 x 10-8 cm Correct Answer: 1.4.3x 10-8 cm Explanation: Sodium crystallizes in bcc structure with radius 1.86 x 10-8cm. The edge length...
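The two unit-cell questions above (FCC aluminium and bcc sodium) follow from the standard close-packing relations; a minimal Python sketch of the arithmetic (the function names are illustrative, not from the source):

```python
import math

def fcc_edge_length(r):
    """FCC/CCP cell: atoms touch along the face diagonal, so 4r = a*sqrt(2)."""
    return 4 * r / math.sqrt(2)

def bcc_edge_length(r):
    """BCC cell: atoms touch along the body diagonal, so 4r = a*sqrt(3)."""
    return 4 * r / math.sqrt(3)

print(fcc_edge_length(125))      # aluminium, r = 125 pm   -> ~353.6 pm
print(bcc_edge_length(1.86e-8))  # sodium, r = 1.86e-8 cm  -> ~4.3e-8 cm
```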
Which among the following reactions occurs at the zone of slag formation in extraction of iron by blast furnace?
1.C + ½ O2 → CO 2.CaO + SiO2 → CaSiO3 3.Fe2O3 + 3 CO → 2 Fe + 3CO2 4.Fe2O3 + 3C → 2 Fe + 3 CO Correct Answer: 2. CaO + SiO2 → CaSiO3 Explanation: Reaction Involved - CaO + SiO2...
In the reaction, N2 +3H2 → 2NH3, the rate of disappearance of H2 is 0.002 M/s. The rate of appearance of NH3 is
1.0.0133 M/s 2.0.023 M/s 3.0.004 M/s 4.0.032 M/s Correct Answer: 1.0.0133 M/s Explanation: In the given reaction , the rate of appearance of NH3 is 0.0133 M/s.
Identify the polymer obtained by heating n moles of isobutylene with n moles of isoprene at 100 °C in the presence of anhydrous AlCl3
1.Butyl rubber 2.Buna-N 3.Buna-S 4.Neoprene rubber Correct Answer: 1.Butyl rubber Explanation: Butyl rubber is the polymer which is obtained by heating n moles of isobutylene with n moles of...
Which among the following gas is bubbled through the brine solution during the preparation of sodium carbonate in Solvay's process?
1.CO2(g) 2.N2(g) 3.NO2(g) 4.O2(g) Correct Answer: 1. CO2(g) Explanation: Carbon dioxide gas (together with ammonia) is bubbled through the brine solution during the preparation of sodium carbonate in Solvay's process.
Which among the following elements is a soft element as compared to others
1.CO 2.Zn 3.W 4.Mo Correct Answer: 2.Zn Explanation: Zinc is a soft element when compared to the rest of the elements.
Which of the following changes will cause an increase in the vapour pressure of a 1 molal aqueous KI solution at the same temperature?
1.addition of 0.1 molal solution of NaCl 2.addition of 0.5 molal solution of Na2So4 3.addition of water 4.addition of 1 molal KI solution Correct Answer: 3. addition of water Explanation: By...
In the resonance hybrid of ozone molecule, O-O bond length is
1.128 pm 2.134.5 pm 3.121pm 4.148pm Correct Answer: 1.128 pm Explanation: In the resonance hybrid of ozone molecule, O-O bond length is 128 pm.
In the following reaction, what is the mass of KCl(s) produced?
2KClO3(s) → 2KCl(s) + 3O2(g); ΔH° = −78 kJ, if 33.6 L of oxygen gas is liberated at S.T.P. (at. mass K = 39, Cl = 35.5 g mol−1) 1. 48.0 g 2. 7.45 g 3. 24.0 g 4. 74.5 g Correct Answer: 4. 74.5 g Explanation: 33.6 L of O2 at S.T.P. corresponds to 1.5 mol; from the equation, 1.5 mol O2 is produced along with 1 mol KCl = 39 + 35.5 = 74.5 g.
Enthalpy of fusion and enthalpy of vaporization for water are 6.01 kJ mol−1 and 45.07 kJ mol−1, respectively, at 0 °C. What is the enthalpy of sublimation at 0 °C?
1. 27.50 kJ mol−1 2. 48.07 kJ mol−1 3. 51.08 kJ mol−1 4. 39.06 kJ mol−1 Correct Answer: 3. 51.08 kJ mol−1 Explanation: ΔH(sublimation) = ΔH(fusion) + ΔH(vaporisation) = 6.01 + 45.07 = 51.08 kJ mol−1.
Which of the following statements is NOT correct about solutions?
1.The three states of matter solid, liquid and gas may play role of either solute or solvent 2.the component of solution which constitute smaller part is called solute. 3.When water is solvent, the...
The P-P-P bond angle in white phosphorus is
1. 90° 2. 109°28′ 3. 120° 4. 60° Correct Answer: 4. 60° Explanation: The P–P–P bond angle in the tetrahedral P4 molecule of white phosphorus is 60°.
Which of the following pairs of solution is isotonic? (Molar mass urea=60, sucrose=342 g mol-1)
1.3.0 gL-1 urea and 17.19 gL-1 sucrose 2.0.3 gL-1 urea and 1.719 gL-1 sucrose 3.0.gL-1 urea and 1.719 gL-1 sucrose 4.0.3 gL-1 urea and 17.19 gL-1 sucrose Correct Answer: 1.3.0 gL-1 urea and...
When alkyl halide is boiled with large excess of alcoholic ammonia it forms
1. Primary amine 2. Tertiary amine 3. Secondary amine 4. Quaternary ammonium salt Correct Answer: 1. Primary amine Explanation: When an alkyl halide is boiled with a large excess of alcoholic ammonia it forms a primary amine.
Methoxy ethane on reaction with hot concentrated HI gives
1.iodomethane and ethanol 2.iodomethane and iodoethane 3.methanol and ethanol 4.methanol and iodoethane Correct Answer: 2.iodomethane and iodoethane Explanation: Methoxy ethane on reaction...
Which of the following oxyacid of Sulphur contains S=S linkage?
1.H2S2O4 2.H2SO3 3.H2S2O5 4.H2S2O2 Correct Answer: 4.H2S2O2 Explanation: H2S2O2 contains S=S linkage.
Identify the decreasing order of boiling points of the alkanes: (1) n-pentane (2) isopentane (3) neopentane
1.Isopentene ˃ n-pentane ˃ Neopentane 2.Neopentane ˃Isopentane ˃ n-pentane 3.n-pentane ˃ Isopentane ˃ Neopentane 4.Isopentane ˃ Neopentane ˃ n-pentane Correct Answer: 3.n-pentane ˃ Isopentane...
Which of the following is a characteristic of a catalyst?
1.it changes the position of equilibrium. 2.it increases the rates of both forward and backward reaction equally in reversible reaction 3.it affects the energies of reactants and products of the...
An increase in the concentration of the reactants of a reaction leads to change in (1) heat of reaction (2) threshold energy (3) collision frequency (4) activation energy
Correct option (3) As the number of molecules per unit volume grows, so does the frequency of collisions.
Match the following and identify the correct option.
Correct option (4) CO(g) + H2(g) = Synthetic gas or water gas Bicarbonates of Ca2+ and Mg2+...
Anisole on cleavage with HI gives
Correct option (4) The methyl(phenyl) oxonium ion is formed when anisole reacts with protons from hydroiodic acid. The reaction is an SN2...
Which of the following oxoacid of sulphur has -O-O- linkage?
(1) $H_2S_2O_8$, peroxodisulphuric acid
(2) $H_2S_2O_7$, pyrosulphuric acid
(3) $H_2SO_3$, sulphurous acid
(4) $H_2SO_4$, sulphuric acid
The correct option is (1) Diagrammatic representation of each compound is as follows,
What is the change in oxidation number of carbon in the following reaction?
(1) 0 to
Correct option: (2) The oxidation state of carbon in methane (the reactant) is −4, while in carbon tetrachloride (the product) it is +4.
The mixture which shows positive deviation from Raoult's law is
(1) Benzene + Toluene
(2) Acetone + Chloroform
(3) Chloroethane + Bromoethane
(4) Ethanol + Acetone
Correct option (4) Acetone + ethanol deviates from Raoult's law in a positive way. H-bonding exists in pure ethanol, and adding acetone to it causes some H-bonds to break. The observed vapour...
The freezing point depression constant of benzene is 5.12 K kg mol−1. The freezing point depression for a solution of molality m containing a non-electrolyte solute in benzene is (rounded off up to two decimal places):
Correct option: (2) Using the formula directly, $\Delta T_f = i\,K_f\,m = 1 \times 5.12 \times ...$
Identify the incorrect statement.
(1) The transition metals and their compounds are known for their catalytic activity due to their ability to adopt multiple oxidation states and to form complexes.
(2) Interstitial compounds are those that are formed when small atoms like or are trapped inside the crystal lattices of metals.
(3) The oxidation states of chromium in and are not the same.
(4) is a stronger reducing agent than in water.
Correct option (3) Explanation: Chromium is in the +6 oxidation state in both of the oxoanions mentioned, so statement (3) is incorrect. Interstitial compounds are those formed when small atoms such as H, B, C, or N are trapped inside metal crystal lattices. Fact: On the basis of standard reduction...
Which of the following equations shows the relationship between heat of reaction at constant pressure and heat of reaction at constant volume if the temperature is not constant?
$ 1.\,\,\Delta H-\Delta n=\Delta URT $ $ 2.\,\,\Delta H-\Delta U=\Delta nRT $ $ 3.\,\,\Delta H=\Delta nRT $ $ 4.\,\,\Delta H=\Delta U-RT $ Solution: $ 2.\,\,\Delta H-\Delta U=\Delta nRT $...
Which of the following elements is refined by zone refining?
A. Gallium B. Bismuth C. Copper D. Zinc Solution: gallium The zone refining method is generally used to refine metalloids and ultra-pure metal is obtained. This is based on the idea that impurities...
Slope of the straight line obtained by plotting $\log_{10} k$ against $1/T$ represents what term? (A) (B) (C) (D)
Correct option is C ($-\mathrm{E}_{\mathrm{a}} / 2.303 \mathrm{R}$) Explanation: The Arrhenius equation is: $\log_{10} \mathrm{k}=\log_{10} \mathrm{A}-\frac{\mathrm{E}_{\mathrm{a}}}{2.303\,\mathrm{RT}}$. A plot of $\log_{10} \mathrm{k}$ versus $1/T$ is therefore a straight line whose slope is $-\mathrm{E}_{\mathrm{a}} / 2.303 \mathrm{R}$.
Calculate the work done during combustion of of ethanol, at at . Given : , molar mass of ethanol (A) (B) (C) (D)
The correct option is B Explanation: The combustion of ethanol, involves the following reaction. $ C_{2} H_{5} O H(l)+3 O_{2}(g) \rightarrow 2 C O_{2}+3 H_{2} O $ Given, Mass of ethanol $=0.138...
With which halogen the reactions of alkanes are explosive ? (A) Fluorine (B) Chlorine (C) Bromine (D) Iodine
Correct answer is A (Fluorine) Explanation: The reactions of alkanes with fluorine are explosive. Fluorine is a highly reactive element exhibiting an exothermic reaction when it reacts with...
What is the geometry of water molecule ? (A) distorted tetrahedral (B) tetrahedral (C) trigonal planer (D) diagonal
Correct option is A Explanation: The tetrahedral geometry of the water molecule is deformed. The electrons in the O atom are divided into two lone pairs and two bond pairs. The sp 3 hybridisation of...
Lactic acid and glycollic acid are the monomers used for preparation of Polymer (A) Nylon2nylon6 (B) Dextron (C) PHBV (D) BunaN
Correct option is B Explanation: Dextron is a biodegradable copolymer of glycolic acid and lactic acid that contains ester linkages.
In case of R, S configuration the group having highest priority is (A) (B) (C)
Correct option is D Explanation: The group with the highest priority in the R, S configuration is -OH. The atomic number of O is larger than that of N and C. As a result, oxygen takes priority.
Arenes on treatment with chlorine in presence of ferric chloride as a catalyst undergo what type of reaction ? (A) Electrophilic substitution (B) Nucleophilic substitution (C) Electrophilic addition (D) Nucleophilic addition
Correct option is A Explanation: Arenes undergo halogenation when exposed to chlorine in the presence of a Lewis acid catalyst, ferric chloride, or aluminium chloride, and in the absence of light....
Identify the oxidation states of titanium and copper in their colourless compounds. (A) (B) (C) (D)
Correct option is C $\mathrm{T} \mathrm{i}^{4+}, \mathrm{Cu}^{+}$ Explanation: The oxidation states of titanium $(\mathrm{Z}=22)$ and copper $(\mathrm{Z}=29)$ in their colourless compounds are...
Identify the monosaccharide containing only one asymmetric carbon atom in its molecule. (A) Ribulose (B) Ribose (C) Erythrose (D) Glyceraldehyde
D is the correct answer Explanation: The monosaccharide glyceraldehyde has only one asymmetric carbon atom in its molecule. An asterisk denotes an asymmetric carbon atom.
Which among the group -15 elements does NOT exists as tetra atomic molecule ? (A) Nitrogen (B) Phosphorus (C) Arsenic (D) Antimony
Correct option is A (Nitrogen) Explanation: Nitrogen does not exist as tetra atomic molecule. It exists as diatomic molecule $N_2$ that can be represented as N≡N.
The correct relation between elevation of boiling point and molar mass of solute is (A) (B) (C) (D)
Correct option is (A) $M_{2}=\frac{K_{b} \cdot W_{2}}{\Delta T_{b} \cdot W_{1}}$ EXPLANATION: The correct relation between elevation of boiling point (ΔTb) and molar mass of solute (M2) is...
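A short derivation behind the quoted relation; the form without the usual factor of 1000 assumes the mass of solvent $W_1$ is taken in kilograms (if $W_1$ is in grams, the factor 1000 appears in the numerator):
$$ \Delta T_{b} = K_{b}\,m = K_{b}\,\frac{W_{2}/M_{2}}{W_{1}} \;\;\Rightarrow\;\; M_{2} = \frac{K_{b}\,W_{2}}{\Delta T_{b}\,W_{1}} $$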
Which of the following reactions is used to prepare aryl fluorides from diazonium salts and fluoroboric acid ? (A) Sandmeyer reaction (B) BalzSchiemann reaction (C) Gattermann reaction (D) Swarts reaction
Correct option is B Balz-Schiemann reaction EXPLANATION: Balz-Schiemann reaction is used to prepare aryl fluorides from diazonium salts and fluoroboric acid. $ArN_2^+X^- + HBF_4 \rightarrow ArN_2^+BF_4^- \xrightarrow{\Delta} ArF + BF_3 + N_2$
The element that does NOT exhibit allotropy is (A) Phosphorus (B) Arsenic (C) Antimony (D) Bismuth
Correct option is D Explanation: Bismuth is the only element that does not exhibit allotropy. There are two allotropes of nitrogen (solid). There are three types of antimony allotropes. There are...
Conversion of hexane into benzene involves the reaction of (A) Hydration (B) Hydrolysis (C) Hydrogenation (D) Dehydrogenation
Correct option is D Explanation: Conversion of hexane into benzene involves the reaction of dehydrogenation.
Which of the following is NOT a tranquilizer ? (A) Meprobamate (B) Equanil (C) Chlordiazepoxide (D) Bromopheniramine
Correct option is D Explanation: A tranquillizer is a medicine that is used to treat anxiety, fear, tension, agitation, and other mental problems, with the goal of reducing anxiety and tension....
Which element is obtained in the pure form by van Arkel method ? (A) Aluminium (B) Titanium (C) Silicon (D) Nickel
Correct option is B (Titanium) Explanation: The van Arkel process is used to obtain pure titanium. This procedure can also be used to obtain zirconium. Hall's procedure, Baeyer's process, or...
Which of the following polymers is used to manufacture clothes for firefighters ? (A) Thiokol (B) Kevlar (C) Nomex (D) Dynel
Correct option is C (Nomex) Explanation: Nomex is a synthetic fibre that is used to make clothing for firefighters. It's also utilised to make race car drivers' protective gear. Dynel is a synthetic...
What is the density of solution of sulphuric acid used as an electrolyte in lead accumulator? (A) (B) (C) (D)
B $1.2 gmL^{-1}$ is the correct answer. Explanation: Sulphuric acid solution used as an electrolyte in a lead accumulator has a density of 1.2 gmL^-1. It is equivalent to 38 percent sulphuric acid...
Excess of ammonia with sodium hypochlorite solution in the presence of glue or gelatine gives (A) (B) (C) (D)
Correct option is B $\mathrm{NH}_{2} \mathrm{NH}_{2}$ Explanation: Excess of ammonia with sodium hypochlorite solution in the presence of glue or gelatine gives hydrazine ($\mathrm{NH_2NH_2}$).
Which among the following elements of group-2 exhibits anomalous properties ? (A) Be (B) Mg (C) Ca (D) Ba
A is the correct answer. Explanation: Be has anomalous characteristics. This is due to beryllium's smaller atomic and ionic radii when compared to the other members of the group. It's also because...
Phenol in the presence of sodium hydroxide reacts with chloroform to form salicylaldehyde. The reaction is known as (A) Kolbe's reaction (B) Reimer-Tiemann reaction (C) Stephen reaction (D) Etard reaction
Correct option is B Explanation: Salicylaldehyde is formed when phenol reacts with chloroform in the presence of sodium hydroxide. This reaction is known as the Reimer-Tiemann reaction.
Bauxite, the ore of aluminium, is purified by which process ? (A) Hoope's process (B) Hall's process (C) Mond's process (D) Liquation process
Correct option is B Explanation: (i)The Hall's process is the most widely used industrial method for smelting aluminium. The process entails dissolving aluminium oxide in molten cryolite and...
What are the products of auto-photolysis of water? (A) and (B) Stream (C) and (D) Hydrogen peroxide
Correct Option is A Explanation: Water undergoes auto photolysis, which entails the breaking of chemical bonds as a result of the transmission of light energy to these bonds. It results in the...
In which among the following solids, Schottky defect is NOT observed ? (A) ZnS (B) NaCl (C) KCl (D) CsCl
Correct option is A (ZnS) Explanation: In ZnS, the Schottky defect does not exist. NaCl, KCl, and CsCl all have the Schottky defect. The Schottky defect occurs in solids where the cations and anions...
Two moles of an ideal gas are allowed to expand from a volume of 10 dm3 to 2 m3 at … against a pressure of 101.325 kPa. Calculate the work done (A) (B) (C) -810.6kJ (D)
Correct option is A $-201.6 \mathrm{~kJ}$ Explanation: $\Delta V = V_2 - V_1 = 2\ \mathrm{m^3} - (10\ \mathrm{dm^3} \times 10^{-3}\ \mathrm{m^3/dm^3}) = 1.99\ \mathrm{m^3}$; $W = -P_{ext}\,\Delta V = -101.325\ \mathrm{kPa} \times 1.99\ \mathrm{m^3} \approx -201.6\ \mathrm{kJ}$.
Applied Water Science
March 2013 , Volume 3, Issue 1, pp 125–132 | Cite as
Removal of turbidity, COD and BOD from secondarily treated sewage water by electrolytic treatment
Ashok Kumar Chopra
Arun Kumar Sharma
A preliminary study was conducted for the removal of turbidity (TD), chemical oxygen demand (COD) and biochemical oxygen demand (BOD) from secondarily treated sewage (STS) water through the electrolytic batch mode experiments with DC power supply (12 V) up to 30 min and using a novel concept of electrode combinations of different metals. The different surface areas (40, 80, 120 and 160 cm2) of the electrodes as a function of cross-sectional area of the reactor and the effect of inter-electrode distances (2.5–10 cm) on the electrolysis of STS water were studied. This study revealed that the effluent can be effectively treated with the aluminum (Al) and iron (Fe) electrode combinations (Al–Fe and Fe–Al). The maximum removal of TD (81.51 %), COD (74.36 %) and BOD (70.86 %) was recorded with Al–Fe electrode system, while the removal of these parameters was found to be 71.11, 64.95 and 61.87 %, respectively, with Fe–Al electrode combination. The Al–Fe electrode combination had lower electrical energy consumption (2.29 kWh/m3) as compared to Fe–Al electrode combination (2.50 kWh/m3). The economic evaluation of electrodes showed that Al–Fe electrode combination was better than Fe–Al electrode combination. This revealed the superiority of aluminum as a sacrificial electrode over that of iron which can probably be attributed to better flocculation capabilities of aluminum than that of iron.
Electrolytic process Removal efficiency Electrode combinations Sewage water
Water is an essential substance for living system as it allows the transport of nutrients as well as waste products in the living systems. However, sustainable water supply is becoming more challenging by the day due to ever increasing demand of growing population as well as increasing contamination of water resources. At the same time, huge quantities of wastewater generated by industries of every hue and kind and also by exponential growth in the number of households are becoming a serious concern for society.
The role of electrochemistry in water and effluent treatment is relatively small, since conventional electrode materials achieve only low current efficiencies due to the water electrolysis side reactions (Comninellis 1994; Simonsson 1997). However, the use of sacrificial electrodes of metals which can give rise to multiple charged ions and their corresponding salts in the electrolytic systems results in coagulation and flocculation of dissolved and undissolved water impurities. This helps in the removal of contaminants from wastewater. Matteson et al. (1995) described a device, referred to as an "electronic coagulator" which electrochemically dissolved aluminum (from the anode) into the solution, reacting this with the hydroxyl ion (from the cathode) to form aluminum hydroxide. The aluminum hydroxide, thus formed, flocculates and coagulates the suspended solids and thereby purifies waste water. Carmona et al. (2006) reported that Al or Fe was usually used as electrode material and their actions were generated by the dissolution of sacrificial anodes upon the application of a direct current. This electrolytic process of generating metallic hydroxide flocks in situ via electro-dissolution of the sacrificial anode immersed in the waste water is referred to as electrocoagulation (EC). The generation rate of flocks can be controlled by applying varying amount of current. The electrochemically generated metallic ions can be hydrolyzed next to the anode and generate a series of metal hydroxides that are able to destabilize the dispersed particles present in the wastewater. The destabilized particles are believed to be responsible for the aggregation and precipitation of the suspended particles and for the adsorption of the dissolved and/or colloidal pollutants which are subsequently removed by sedimentation and/or flotation (Bayramoglu et al. 2004; Lung Chou 2010). Thus, the EC process offers the possibility of anodic oxidation which leads to in situ generation of adsorbents such as hydrous ferric oxides, hydroxides of aluminum, etc. (Kumar et al. 2004). The electrode material has a significant effect on the treatment efficiency in terms of cost of the treatment and removal of pollutants, and iron and aluminum electrodes are reasonably inexpensive and are easily available. These electrodes are anodically soluble leading to high wear and tear and thus generate sludge (Mollah et al. 2001; Holt et al. 2002; Kobaya and Can 2003; Daneshvar et al. 2003; Bayramoglu et al. 2004 and Chen 2004).
Electrolytic mechanism with Al and Fe electrodes
The electrolytic process involves the generation of coagulants in situ by electrolytic oxidation of the sacrificial electrode material. Aluminum or iron is usually used as electrodes and their cations are generated by the dissolution of sacrificial anodes upon the application of direct current. The metal ions generated are hydrolyzed in the electrochemical cell to produce metal hydroxide ions according to the reactions (1)–(7) and the solubility of the metal hydroxide complexes formed depends on pH and ionic strength. Insoluble flocs are generated in a pH range between 6.0 and 7.0 as seen from the solubility diagrams of aluminum hydroxide at various pH values (Bensadok et al. 2008). The Al plates are also finding applications in wastewater treatment either alone or in combination with Fe plates due to the high coagulation efficiency of Al3+ (Chen 2004). Mollah et al. (2001) had reported that the electrolytic dissolution of the Al anode produces the cationic monomeric species such as Al3+ and Al(OH)2+ under acidic conditions. At appropriate pH values, they are transformed initially into Al(OH)3 and finally polymerized to Al n (OH)3n according to the following reactions:
$$ {\text{Al}} \to {{\text{Al}}_{(\text{aq})}}^{3 +} + 3{\text{e}}^{ - } . $$
$$ {{\text{Al}}_{(\text{aq})}}^{3 +} + 3{\text{H}}_{2} {\text{O}} \to {\text{Al}}\left( {\text{OH}} \right)_{3} + 3{{\text{H}}_{{({\text{aq}})}}}^{ + } $$
$$ n{\text{Al}}\left( {\text{OH}} \right)_{3} \to {\text{Al}}_{n} \left( {\text{OH}} \right)_{{3n}} . $$
However, depending on the pH of the aqueous medium, other ionic species, such as Al(OH)2+, Al2(OH)24+ and Al(OH)4− may also be present in the system.
In addition, various forms of charged multimeric hydroxo Al3+ species may be formed under appropriate conditions. These gelatinous charged hydroxo cationic complexes can effectively remove pollutants by adsorption (Yetilmezsoy et al. 2009).
When a DC electric field is applied, the following electrolysis reactions are expected in the vicinity of the iron electrodes (Ofir et al. 2007).
At anode:
$$ {\text{Fe}}_{{({\text{s}})}} \to {\text{Fe}}_{{({\text{aq}})}}{}^{2 + } + 2{\text{e}}^{ - } $$
$$ {\text{Fe}}_{{({\text{aq}})}}{}^{2 + }+ 2{\text{OH}}_{{({\text{aq}})}}{}^{ - } \to {\text{Fe}}\left( {\text{OH}} \right)_{{2({\text{s}})}} $$
At the cathode:
$$ 2{\text{H}}_{2} {\text{O}}_{{({\text{l}})}} + 2{\text{e}}^{ - } \to 2{\text{OH}}_{{({\text{aq}})}}{}^{ - } + {\text{H}}_{{2({\text{g}})}} $$
$$ {\text{Fe}}_{{({\text{s}})}} + 2{\text{H}}_{2} {\text{O}}\left( {\text{l}} \right) \to {\text{Fe}}\left( {\text{OH}} \right)_{{2({\text{s}})}} + {\text{H}}_{{2({\text{g}})}} $$
Generation of iron hydroxide is followed by an electrophoretic concentration of colloids (usually negatively charged), which are then swept by the electric field in the region close to the anode. The particles subsequently interact with the iron hydroxide and can be removed by the electrostatic attraction. In the region close to the anode, the high concentration of local iron hydroxide increases the probability of coagulation of colloids.
The present investigation was focused on the electrolytic treatment of secondarily treated sewage (STS) water and to find out the removal efficiency of Al–Fe and Fe–Al electrode combinations with different electrode surface areas and inter-electrode distances.
Collection of wastewater samples
The samples of STS water were collected from the outlet of activated sludge process (ASP) of the sewage treatment plant (STP), Jagjeetpur, Haridwar (Uttarakhand), India, brought to the laboratory and then used for electrolytic treatment using Al–Fe and Fe–Al electrode combinations.
Electrolytic experimental set up
The schematic arrangement of the experimental setup is shown in Fig. 1. The experiments were carried out in a cylindrical reactor having a capacity of 5 L. Al and Fe electrode plates in two combinations (Al–Fe and Fe–Al) having different surface areas (40, 80, 120 and 160 cm2) were connected to the respective anode and cathode leading to the DC rectifier and energized for a required duration of time at a fixed voltage. The inter-electrode distances between the two neighboring electrode plates varied between 2.5 and 10 cm (Table 1). All the experiments were performed at room temperature (30 ± 2 °C) and at a constant stirring speed (100 rpm) to maintain the uniform mixing of effluent during the electrolytic procedure. Before conducting an experiment, the electrodes were washed with water, dipped in dilute hydrochloric acid (HCl, 5 % v/v) for 5 min, thoroughly washed with water and finally rinsed twice with distilled water. After electrolytic treatment, the effluent was allowed to stand for 2 h and then sampled for analysis.
Schematic design of experimental set-up. A Anode, C cathode, R reactor, M magnetic stirrer, P DC power supply
Table 1 Operating conditions for electrolytic treatment of STS water
Electrode material (anode/cathode): Al/Fe and Fe/Al
Applied voltage: 12 V
Shape of electrode:
Electrode area (cm2): 40, 80, 120 and 160
Inter-electrode space (cm): 0.5, 1.0, 1.5 and 2.0
Time of operation (min): 10, 20 and 30
Volume of sample:
Analytical methods
The TD, COD and BOD of wastewater were analyzed before and after the electrolytic treatment following the standard methods for examination of water and wastewater (APHA 2005). The calculation of TD, COD and BOD removal efficiencies after electrolytic treatment was carried out using the formula:
$$ \hbox{CR}\ \% = \frac{{C_{0} -C}}{{C_{0} }} \times 100, $$
where C0 and C are TD, COD or BOD of wastewater before and after electrolysis.
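A minimal helper implementing this removal-efficiency formula; the numbers in the example are illustrative only (a COD close to the raw-water value in Table 2 reduced to roughly the level implied by the reported ~74 % removal):

```python
def removal_pct(c0, c):
    """Percentage removal (C0 - C) / C0 * 100, where C0 and C are the TD, COD or BOD
    values before and after electrolysis."""
    return (c0 - c) / c0 * 100

# e.g. COD of 106.7 mg/L reduced to 27.4 mg/L -> ~74.3 % removal
print(round(removal_pct(106.7, 27.4), 1))
```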
Removal of chemical and biological impurities from a contaminated water system by electrolysis is governed by several factors, including the electrode material, the distance between the electrodes, the time of electrolysis, electrical parameters such as voltage and current density, the pH of the system and, last but not least, the presence of other coagulants in the system. This preliminary study was devoted to determining the effect of electrode systems made of different materials on STS water treatment. Changes in surface area and distance between the electrodes were studied in detail. The characteristics of different parameters of the STS water are given in Table 2. The removal of TD, COD and BOD of STS water with electrode combinations (Al–Fe and Fe–Al) using different surface areas (40–160 cm2) and different inter-electrode distances (2.5–10 cm) are shown in Figs. 2, 3, 4, 5 and 6.
Table 2 Characteristics of STS water (mean ± SD)
pH: 7.40 ± 0.17
Conductivity (μS): 727.8 ± 23.05
TDS (mg/L):
TD (NTU): 16.34 ± 1.28
BOD (mg/L):
COD (mg/L): 106.69 ± 8.11
Percentage removal of TD, COD and BOD of STS water using Al–Fe electrode combination with different inter-electrode distances at constant voltage (12 V), time (30 min) and electrode area (80 cm2)
Percentage removal of TD, COD and BOD of STS water using Fe–Al electrode combination with different inter-electrode distances at constant voltage (12 V), time (30 min) and electrode area (80 cm2)
Percentage removal of TD, COD and BOD of STS water using Al–Fe electrode combination with different electrode areas at constant voltage (12 V), time (30 min) and inter-electrode distance (2.5 cm)
Percentage removal of TD, COD and BOD of STS water using Fe–Al electrode combination with different electrode areas at constant voltage (12 V), time (30 min) and inter-electrode distance (2.5 cm)
Percentage removal of TD, COD and BOD of STS water using Al–Fe and Fe–Al electrode combination at constant voltage (12 V), inter-electrode distance (2.5 cm) and electrode area (160 cm2) with different time
Inter-electrode distance was observed to be an effective factor in the electrolytic treatment of STS water. The removal percentage of TD, COD and BOD increased progressively with decrease in inter-electrode distance from 10.0 to 2.5 cm, whereby it exhibited the maximum removal of TD (65.9 %), COD (57.41 %) and BOD (59.56 %) at the shortest distance (2.5 cm) between the electrodes (Al and Fe) with each electrode area of 80 cm2, whereas the Fe and Al electrode combination showed the removal of TD (59.66 %), COD (56.46 %) and BOD (51.99 %) (Figs. 2, 3). Similar observations have also been reported by Li et al. (2008) that COD decreases with the decrease in distance between electrodes of the same composition. This is because the shorter distance speeds up the anion discharge on the anode and improves the oxidation. It also reduces resistance, the electricity consumption and the cost of the wastewater treatment. Ghosh et al. (2008) have also observed that with the increase of inter-electrode distance, the percentage removal of dye products from waste water decreases. At a lower inter-electrode distance, the resistance encountered by current flowing in the solution medium decreases thereby facilitating the electrolytic process and resulting in enhanced dye removal. The above results also indicated the superiority of aluminum as sacrificial electrode when compared to that of iron as sacrificial electrode. This can probably be attributed to better coagulating properties of Al3+ to those of oxidized products of Fe. It may be due to the fact that the majority of Al3+ ions subsequently precipitates in the form of hydroxides. The adsorption of Al3+ ion with colloidal pollutants results in coagulation, and resulting coagulants can be more efficiently removed by settling, surface complexation and electrostatic attraction in comparison to Fe2+ ions.
With a fourfold increase in the electrode area of Al–Fe from 40 to 160 cm2, the current increased from 0.24 to 0.58 A; this resulted in an increase in the removal percentage of TD, COD and BOD. The highest removal efficiencies of 81.51 % (TD), 74.36 % (COD) and 70.86 % (BOD) were achieved at an electrode area of 160 cm2 and at 2.5 cm inter-electrode distance. The removal efficiency can be attributed to the greater electrode area, which produced larger amounts of anions and cations from the anode and cathode. The greater electrode area increased the rate of floc formation, which in turn influenced the removal efficiency (Figs. 4, 5). Escobara et al. (2006) have also observed a logistical relationship between electrode geometric area (AG) and copper removal efficiency and concluded that the increase in copper removal was related to an increase in AG, reaching an optimal value of 35 cm2, with an asymptotic value near 80 %. In the case of the Fe–Al electrode combination, the removal efficiency of TD, COD and BOD was 71.11, 64.95 and 61.87 %, respectively, which was somewhat lower than the values obtained with the Al–Fe electrode.
Also in the present study, the electrolytic reactor equipped with the higher electrode area of the Al and Fe electrode combinations was able to produce significant quantities of coagulants, thereby enhancing the removal efficiency of TD, COD and BOD from STS water. The increase in the electrode area during electrolytic treatment was expected to lead to an increase in the number of hydroxide ions (OH−) in solution resulting from water reduction at the cathode.
$$ 2{\text{H}}_{2} {\text{O}}\left( {\text{l}} \right) + 2{\text{e}} ^{-} \Leftrightarrow {\text{H}}_{2} \left( {\text{g}} \right) + 2{\text{OH}}^{ - } . $$
In the electrolytic treatment, the selection of suitable electrode material is important and so is the time required to effect an acceptable removal of dissolved and undissolved impurities. Therefore, we studied the electrolytic treatment using both electrode combinations under the same conditions but as a function of time. The comparative results of TD, COD and BOD removal, obtained with the same voltage (12 V), same inter-electrode spacing (2.5 cm) and same area of electrode (160 cm2) but a varying time of up to 30 min again demonstrated the superiority of Al–Fe electrode combination over that of Fe–Al electrode combination (Fig. 6). During their study of the electrolytic treatment of latex wastewater, Vijayaraghavan et al. (2008) also observed that the increase in the electrolysis period resulted in a decrease in residual COD and BOD concentrations irrespective of the current densities. An increase in the operating time from 10 to 60 min in the treatment of the baker's yeast wastewater by electrocoagulation resulted in an increase in the removal efficiencies of COD, TOC and turbidity as reported by Kobya and Delipinar (2008).
Metallic hydroxides are produced up to a concentration of coagulant sufficient to induce the formation of a white and a slightly greenish precipitate with the Al–Fe and Fe–Al electrode combinations, corresponding to the hydroxides of Al and Fe, respectively. This indicates that the STS water can be efficiently treated with the Al–Fe combination, which ensures better adsorption of soluble and colloidal species that settle down in the form of Al(OH)3 from STS water. Zongo et al. (2009), in their investigation of electro-coagulation for the treatment of textile wastewater with Al or Fe electrodes, elucidated that the Fe electrode is more easily dissolved in water in comparison with Al. However, the use of iron electrodes often results in the formation of very fine brown particles which are less prone to settling than the gel flocs formed with aluminum. For further re-use of the treated water, the post-treatment downstream of the electro-coagulation–electro-flotation system might represent a penalty to the use of iron over aluminum.
The present finding is in support of Lai and Lin (2003) who observed that the Al–Fe electrode pair is deemed to be a better choice out of the five electrode pair combinations tested. They also observed that Al–Fe electrode pair offers good overall COD and copper removal, low final wastewater NTU and reasonably low sludge production. Adhoum and Monser (2004) using Al electrodes achieved a COD removal efficiency of 76 % in the treatment of olive mill effluents. Ilhan et al. (2008) indicated the maximum removal of COD 56 and 35 % on the EC of leachate using Al and Fe electrode, respectively, in 30-min contact time. According to Lung Chou (2010), removal efficiency of polyvinyl alcohol (PVA) from aqueous solutions for Fe–Al and Fe–Fe pairs using Fe as the anode was greater than those of Al–Al and Al–Fe pairs using Al as the anode. This has been explained by the chemical reactions that take place at the aluminum anode and the iron anode. Katal and Pahlavanzadeh (2011) observed that the Fe–Al electrode combination has higher COD removal efficiency in comparison to Al–Fe electrode combination, while in present study, Al–Fe electrode combination was more efficient in comparison to Fe–Al electrode combination. In our opinion, this difference can probably be attributed to the different types of contaminants present in the waste water being studied. Iron is a 3d block transition metal and has better complexing properties with organic/inorganic impurities present in water which can act as complexing ligands than aluminum which is a p block metal and lacks empty d-orbitals necessary for making coordination compounds. Therefore, Fe–Al electrode system may be a better choice if electron-rich nucleophilic organic compounds such as dyes, their intermediates and degradation products are present in waste water. However, in the present study, Al–Fe system was found to be more efficient. Therefore, chemical behavior of the contaminants present in waste water may be a deciding factor in the selection of anodic sacrificial electrode.
Energy consumption and operating cost
Electrical energy and electrode consumption are important economical parameters in EC process. In EC process, the operating cost includes material, mainly electrodes and electrical energy costs, as well as labor, maintenance, sludge dewatering and its disposal. In the present study, energy and electrode material costs have been taken into account as major cost items in the calculation of the operating cost (US $/m3) (Ghosh et al. 2008) as follows:
$$ {\text{Operating}}\,{\text{cost}} = a\mathop C\nolimits_{\text{energy}} + b\mathop C\nolimits_{\text{electrode}}, $$
where Cenergy (kWh/m3) and Celectrode (kg Al/m3) are the consumption quantities for the turbidity, COD and BOD removal, "a" is the electrical energy price 0.1 US$/kWh, "b" is the electrode material price 3.4 US$/kg for Al electrode and 1.3 US$/kg for Fe electrode. Cost due to electrical energy (kWh/m3) is calculated as:
$$ \mathop C\nolimits_{\text{energy}} = \frac{{U \times I \times \mathop t\nolimits_\text{EC} }}{v}. $$
Cost for electrode (kg Al/m3) was calculated as follows using the equation:
$$ \mathop C\nolimits_{\text{electrode}} = \frac{{I \times t \times \mathop M\nolimits_{w} }}{z \times F \times v}, $$
where U is the cell voltage (V), I current (A), tEC time of electrolysis (s), v volume (m3) of STS water, MW molecular mass of aluminum (26.98 g/mol) and iron (55.84 g/mol), z no. of electrons transferred (z = 3 for Al and 2 for Fe) and F is the Faraday's constant (96487C/mol) .
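A sketch of the operating-cost calculation defined by the three equations above. The unit conversion from joules to kWh is made explicit here, and the numerical inputs (current, electrolysis time, treated volume) are illustrative assumptions rather than the paper's measured operating data:

```python
F = 96487.0  # Faraday constant, C/mol

def energy_consumption(U, I, t_s, v_m3):
    """C_energy in kWh/m3: U*I*t / v, with t in seconds and joules converted to kWh."""
    return (U * I * t_s) / 3.6e6 / v_m3

def electrode_consumption(I, t_s, M_w_kg, z, v_m3):
    """C_electrode in kg/m3 from Faraday's law: I*t*Mw / (z*F*v)."""
    return (I * t_s * M_w_kg) / (z * F * v_m3)

def operating_cost(U, I, t_s, v_m3, M_w_kg, z, a=0.1, b=3.4):
    """Operating cost in US$/m3 = a*C_energy + b*C_electrode (a and b as quoted above for Al)."""
    return a * energy_consumption(U, I, t_s, v_m3) + b * electrode_consumption(I, t_s, M_w_kg, z, v_m3)

# Al anode (Mw = 26.98 g/mol, z = 3), 12 V, 0.53 A, 30 min, an assumed 1.5 L of STS water
print(operating_cost(12, 0.53, 30 * 60, 1.5e-3, 26.98e-3, 3))
```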
For both electrode combinations (Al–Fe and Fe–Al), the energy consumption increased from 1.04 to 2.5 kWh/m3 with an increase in current from 0.24 to 0.58 A that resulted in increasing the electrode consumption (0.85 × 10−5 to 6.04 × 10−5 kg/m3). The cost due to electrical energy consumption as well as an electrode assembly was calculated for both electrode combinations at optimum operating condition. The operating cost of Fe–Al (0.25006 US$/m3) electrode combination was found to be slightly higher than Al–Fe (0.22906 US$/m3) electrode combination (Table 3).
Table 3 Economic evaluation of Al–Fe and Fe–Al electrode combinations at the optimum operating condition (current 0.53 and 0.58 A; voltage 12 V; time 30 min)
Al–Fe: electrode consumption 1.77 × 10−5 kg/m3; energy consumption 2.29 kWh/m3; operating cost 0.22906 US$/m3
Fe–Al: energy consumption 2.50 kWh/m3; operating cost 0.25006 US$/m3
Kinetic study of turbidity, COD and BOD
The rate of removal of TD, COD and BOD is represented by the following first-order mechanism (El-Ashtoukhy and Amin 2010):
$$ \ln \left( {\frac{{\mathop C\nolimits_{0} }}{{\mathop C\nolimits_{t} }}} \right) = kt, $$
where C0 is the initial concentration (mg/L), C t final concentration with respect to time, t the time (min) and k is the rate constant (min−1) for TD, COD and BOD for electrolytic treatment using Al–Fe and Fe–Al electrode combination. Rate constants for electrolytic treatment of TD, COD and BOD from STS water using two types of electrode combination are given in Table 4.
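A minimal sketch of how a pseudo-first-order rate constant can be extracted from concentration-time data by regressing ln(C0/Ct) on t; the data points below are made up for illustration and are not taken from Table 4:

```python
import numpy as np

def first_order_k(times_min, concentrations):
    """Least-squares slope of ln(C0/Ct) versus t through the origin,
    i.e. the pseudo-first-order rate constant in 1/min."""
    t = np.asarray(times_min, dtype=float)
    c = np.asarray(concentrations, dtype=float)
    y = np.log(c[0] / c)
    return float(np.sum(t * y) / np.sum(t * t))

# illustrative COD values (mg/L) sampled at 0, 10, 20 and 30 min
print(first_order_k([0, 10, 20, 30], [106.7, 87.0, 71.0, 58.0]))  # ~0.02 min^-1
```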
Table 4 Rate constant (k, min−1) values at variable distances between the electrodes, and their correlation coefficients (r2), for the Al–Fe and Fe–Al electrode combinations
The kinetic study of the effect of the distance between electrodes on electrolytic treatment has not been given due consideration so far; there appears to be no published work on this aspect. In the present study, a strong correlation was found between the inter-electrode spacing and the TD, COD and BOD abatement rates and rate coefficients. The pseudo-first-order abatement kinetics fitted the data reasonably well. The decrease in the distance between electrodes from 10.0 to 2.5 cm increased the rate constant from 0.01 to 0.026 min−1 for TD, 0.007 to 0.020 min−1 for COD and 0.006 to 0.018 min−1 for BOD using the Al–Fe electrode combination, and from 0.008 to 0.019 min−1 for TD, 0.006 to 0.015 min−1 for COD and 0.005 to 0.014 min−1 for BOD using the Fe–Al electrode combination. The increase in the rate constants of both the Al–Fe and Fe–Al electrode combinations may be ascribed to the faster decrease of TD, COD and BOD of the STS water. The fits showed high correlation coefficients (r2 ≥ 0.959). Thus, the kinetic study is well suited for explaining the effect of the distance between electrodes on electrolytic treatment.
The use of electrode systems using different metals for anodes and cathodes was studied in an attempt to improve upon the existing system and to further understand the process of electrolysis. The removal of TD, COD and BOD was found to depend on the inter-electrode distance, the electrode area and the electrode combination (Al–Fe or Fe–Al) in the treatment of STS water. An increase in the surface area of the electrodes and a decrease in the distance between them resulted in better removal of contaminants from the wastewater; the optimal removal was obtained with an electrode area of 160 cm2 and a short distance of 2.5 cm between electrodes in a 5-L reactor. The results of the present study could be described and evaluated by employing pseudo-first-order kinetics. The operating cost was calculated as 0.229 US$/m3 for the Al–Fe and 0.250 US$/m3 for the Fe–Al electrode combination, with corresponding electrical energy consumptions of 2.29 and 2.50 kWh/m3. The Al–Fe electrode combination proved more effective than the Fe–Al electrode combination for the treatment of STS water and, on economic grounds, EC with the Al–Fe electrode combination should be preferred. The Al anode was more efficient than the Fe anode, establishing the superiority of aluminum as the preferred material for the sacrificial electrode for the treatment of sewage water obtained from the STP, Jagjeetpur, Haridwar (Uttarakhand), India.
The University Grant Commission, New Delhi, India is acknowledged for providing the financial support in the form of UGC research fellowship (F.4-1/2006 (BSR) 7-70/2007 BSR) to Mr. Arun Kumar Sharma.
Adhoum N, Monser L (2004) Decolourization and removal of phenolic compounds from olive mill wastewater by electrocoagulation. Chem Eng Process 43:1281–1287
APHA (2005) Standard methods for the examination of water and wastewater, 21st edn. American Public Health Association, Washington, DC
Bayramoglu M, Kobya M, Can OT, Sozbir M (2004) Operating costs analysis of electrocoagulation of textile dye wastewater. Sep Purif Technol 37:117–125
Bensadok K, Benammar S, Lapicque F, Nezzal G (2008) Electrocoagulation of cutting oil emulsions using aluminum plate electrodes. J Hazard Mater 152(1):423–430
Carmona M, Khemis M, Leclerc JP, Lapicque F (2006) A simple model to predict the removal of oil suspensions from water using the electrocoagulation technique. Chem Eng Sci 61:1237–1246
Chen G (2004) Electrochemical technologies in wastewater treatment. Sep Purif Technol 38:11–41
Comninellis Ch (1994) Electrocatalysis in the electrochemical conversion/combustion of organic pollutants for wastewater treatment. Electrochim Acta 39(11):1857–1862
Daneshvar N, Ashassi-Sorkhabi H, Tizpar A (2003) Decolorization of orange II by electrocoagulation method. Sep Purif Technol 31:153–162
El-Ashtoukhy ESZ, Amin NK (2010) Removal of acid green dye 50 from wastewater by anodic oxidation and electrocoagulation—a comparative study. J Hazard Mater 179:113–119
Escobara C, Cesar SS, Toral M (2006) Optimization of the electrocoagulation process for the removal of copper, lead and cadmium in natural waters and simulated wastewater. J Environ Manag 81(4):384–391
Ghosh D, Medhi CR, Solanki H, Purkait MK (2008) Decolorization of crystal violet solution by electrocoagulation. J Environ Prot Sci 2:25–35
Holt PK, Barton GW, Wark M, Mitchell CA (2002) A quantitative comparison between chemical dosing and electrocoagulation. Colloids Surf A Physicochem Eng Aspects 211:233–248
Ilhan F, Kurt U, Apaydin O, Gonullu MT (2008) Treatment of leachate by electrocoagulation using aluminum and iron electrodes. J Hazard Mater 154:381–389
Katal R, Pahlavanzadeh H (2011) Influence of different combinations of aluminum and iron electrode on electrocoagulation efficiency: application to the treatment of paper mill wastewater. Desalination 265:199–205
Kobya M, Can OT, Bayramoglu M (2003) Treatment of textile wastewaters by electrocoagulation using iron and aluminum electrodes. J Hazard Mater B 100:163–178
Kobya M, Delipinar S (2008) Treatment of the baker's yeast wastewater by electrocoagulation. J Hazard Mater 154:1133–1140
Kumar PR, Chaudhari S, Khilar KC, Mahajan SP (2004) Removal of arsenic from water by electrocoagulation. Chemosphere 55:1245–1252
Lai CL, Lin SH (2003) Electrocoagulation of chemical mechanical polishing (CMP) wastewater from semiconductor fabrication. Chem Eng J 95:205–211
Li X, Wang W, Wang M, Cai Y (2008) Electrochemical degradation of tridecane dicarboxylic acid wastewater with tantalum-based diamond film electrode. Desalination 222:388–393
Lung Chou W (2010) Removal and adsorption characteristics of polyvinyl alcohol from aqueous solutions using electrocoagulation. J Hazard Mater 177:842–850
Matteson MJ, Dobson RL, Glenn RW Jr, Kukunoor NS, Waits WH III, Clayfield EJ (1995) Electrocoagulation and separation of aqueous suspensions of ultrafine particles. Colloids Surf A 104(1):101–109
Mollah MYA, Schennach R, Parga JR, Cocke DL (2001) Electrocoagulation (EC)—science and applications. J Hazard Mater B 84:29–41
Ofir E, Oren Y, Adin A (2007) Comparing pretreatment by iron of electroflocculation and chemical flocculation. Desalination 204:87–93
Simonsson D (1997) Electrochemistry for a cleaner environment. Chem Soc Rev 26:181–189
Vijayaraghavan K, Ahmad D, Yuzri A, Yazid A (2008) Electrolytic treatment of latex wastewater. Desalination 219:214–221
Yetilmezsoy K, Ilhan F, Zengin ZS, Sakar S, Gonullu MT (2009) Decolorization and COD reduction of UASB pretreated poultry manure wastewater by electrocoagulation process: a post-treatment study. J Hazard Mater 162:120–132
Zongo I, Maiga AH, Wéthé J, Valentin G, Leclerc JP, Paternotte G, Lapicque F (2009) Electrocoagulation for the treatment of textile wastewaters with Al or Fe electrodes: compared variations in COD levels, turbidity and absorbance. J Hazard Mater 169:70
1. Department of Zoology and Environmental Sciences, Gurukula Kangri University, Haridwar, India
Chopra, A.K. & Sharma, A.K. Appl Water Sci (2013) 3: 125. https://doi.org/10.1007/s13201-012-0066-x
Received 20 October 2011
To Group or Not to Group? Good Practice for Housing Male Laboratory Mice
Sarah Kappel, Penny Hawkins, Michael T. Mendl
Subject: Biology, Animal Sciences & Zoology Keywords: refinement; mouse welfare; mouse husbandry; mouse aggression; male mice; social organisation; group housing; single housing; animal husbandry; animal welfare; animal management
It is widely recommended to group house male laboratory mice because they are 'social animals', but male mice do not naturally share territories and aggression can be a serious welfare problem. Even without aggression, not all animals within a group will be in a state of positive welfare. Rather, many male mice may be negatively affected by the stress of repeated social defeat and subordination, raising concerns about welfare and also research validity. However, individual housing may not be an appropriate solution, given the welfare implications associated with no social contact. An essential question is whether it is in the best welfare interests of male mice to be group- or singly-housed. This review explores the likely impacts, positive and negative, of both housing conditions, presents results of a survey of current practice and awareness of mouse behaviour, and includes recommendations for good practice and future research. We conclude that whether group- or single-housing is better (or less worse) in any situation is highly context-dependent according to several factors including strain, age, social position, life experiences, and housing and husbandry protocols. It is important to recognise this and evaluate what is preferable from animal welfare and ethical perspectives in each case.
Maximal Order of NG-Transformation Group
Faraj Abdunabi
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: Ng group; permutation group
In this paper, we consider the problem of the maximal order of groups consisting of transformations, which we call NG-transformations, on a nonempty set A, where no element of the group is a bijection. We find that the order of these groups is not greater than (n-1)!. In addition, we prove our result by showing that any NG-group of this kind is isomorphic to a permutation group on a quotient set of A with respect to an equivalence relation on A.
Peer Group : A New Approach Of Nursing Intervention
Suyanto Suyanto, Moses Glorino Rumambo Pandin
Subject: Medicine & Pharmacology, Nursing & Health Studies Keywords: peer group support; peer group education and technology
Abstract. Background: the development of nursing, especially with regard to nursing intervention approaches, is progressing rapidly. This can be seen from the use of peer group support in nursing interventions for individuals. The purpose of this literature review is to identify the impact of implementing nursing interventions using a peer group support approach. Method: this literature review uses JBI and PRISMA guidelines on 120 articles taken from the journal databases Scopus, PubMed and ScienceDirect. Result: from the articles analyzed, it was found that the application of peer groups can improve individual abilities in both psychological and behavioral aspects. Conclusion: the peer group approach can serve as one of the approaches in nursing for carrying out nursing actions today.
Further Properties of Bornological Groups
Anwar N. Imran, I. S. Rakhimov
Subject: Materials Science, Other Keywords: bornological set; bornological group
In this work, further properties of bornological groups are studied in order to find sufficient conditions for introducing a bornology on a group. In particular, we show that every left (right) translation in a bornological group is an isomorphism and that bornological group structures are therefore homogeneous; this property of bornological groups is not shared by bornological semigroups. Further, the boundedness of a bornological group can be deduced from its boundedness at the identity.
Generating the Triangulations of the Torus with the Vertex-Labeled Complete 4-Partite Graph K_{2,2,2,2}
Serge Lawrencenko, Abdulkarim Magomedov
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: group action; orbit decomposition; polynomial; graph; tree; triangulation; torus; automorphism; quaternion group
Using the orbit decomposition, a new enumerative polynomial P(x) is introduced for abstract (simplicial) complexes of a given type, e.g., trees with a fixed number of vertices or triangulations of the torus with a fixed graph. The polynomial has the following three useful properties. (I) The value P(1) is equal to the total number of unlabeled complexes (of a given type). (II) The value of the derivative P'(1) is equal to the total number of nontrivial automorphisms when counted across all unlabeled complexes. (III) The integral of P(x) from 0 to 1 is equal to the total number of vertex-labeled complexes, divided by the order of the acting group. The enumerative polynomial P(x) is demonstrated for trees and then is applied to the triangulations of the torus with the vertex-labeled complete four-partite graph G = K_{2,2,2,2}, in which specific case P(x) = x^{31}. The graph G embeds in the torus as a triangulation, T(G). The automorphism group of G naturally acts on the set of triangulations of the torus with the vertex-labeled graph G. For the first time, by a combination of algebraic and symmetry techniques, all vertex-labeled triangulations of the torus (twelve in number) with the graph G are classified intelligently without using computing technology, in a uniform and systematic way. It is helpful to notice that the graph G can be converted to the Cayley graph of the quaternion group Q_8 with the three imaginary quaternions i, j, k as generators.
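As a quick plausibility check of the three stated properties in the quoted case P(x) = x^31, the few SymPy lines below recover the twelve vertex-labeled triangulations; note that the order of the acting group is taken here as |Aut(K_{2,2,2,2})| = 2^4·4! = 384, an assumption of mine rather than a number given in the abstract.

import sympy as sp

x = sp.symbols('x')
P = x**31                                 # enumerative polynomial quoted for this case
G_order = 2**4 * sp.factorial(4)          # assumed |Aut(K_{2,2,2,2})| = 384

unlabeled = P.subs(x, 1)                          # property (I): unlabeled complexes
nontrivial_autos = sp.diff(P, x).subs(x, 1)       # property (II): nontrivial automorphisms
labeled = sp.integrate(P, (x, 0, 1)) * G_order    # property (III), rearranged
print(unlabeled, nontrivial_autos, labeled)       # -> 1 31 12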
Variations à la Fourier-Weyl-Wigner on Quantizations of the Plane and the Half-Plane
Hervé Bergeron, Jean-Pierre Gazeau
Subject: Physical Sciences, Mathematical Physics Keywords: Weyl-Heisenberg group; affine group; weyl quantization; wigner function; covariant integral quantization
Any quantization linearly maps functions on a phase space to symmetric operators in a Hilbert space. Covariant integral quantization combines an operator-valued measure with the symmetry group of the phase space. Covariant means that the quantization map intertwines classical (geometric operation) and quantum (unitary transformation) symmetries. Integral means that we use all the resources of integral calculus in order to implement the method when we apply it to singular functions, or distributions, for which the integral calculus is an essential ingredient. In this paper we emphasize the deep connection between the Fourier transform and covariant integral quantization when the Weyl-Heisenberg and affine groups are involved. We show with our generalisations of the Wigner-Weyl transform that many properties of the Weyl integral quantization, commonly viewed as optimal, are actually shared by a large family of integral quantizations.
On the Group-Theoretical Approach to Relativistic Wave Equations for Arbitrary Spin
Luca Nanni
Subject: Physical Sciences, Acoustics Keywords: relativistic wave equations; higher spin; de Sitter group; irreducible representations of Lorentz group
Formulating a relativistic equation for particles with arbitrary spin remains an open challenge in theoretical physics. In this study, the main algebraic approaches used to generalize the Dirac and Kemmer–Duffin equations for particles of arbitrary spin are investigated. It is proved that an irreducible relativistic equation formulated using spin matrices satisfying the commutation relations of the de Sitter group leads to inconsistent results, mainly as a consequence of violation of unitarity and the appearance of a mass spectrum that does not reflect the physical reality of elementary particles. However, the introduction of subsidiary conditions resolves the problem of unitarity and restores the physical meaning of the mass spectrum. The equations obtained by these approaches are solved and the physical nature of the solutions is discussed.
Calculation of the Surface Tension of Ordinary Organic and Ionic Liquids by Means of a Generally Applicable Computer Algorithm Based on the Group—Additivity Method
Rudolf Naef, William E. Acree
Subject: Chemistry, Other Keywords: group-additivity method; surface tension
The calculation of the surface tension of ordinary organic and ionic liquids, based on a computer algorithm applying a refined group-additivity method, is presented. The refinement consists of a complete breakdown of the molecules into their constituting atoms, further distinguishing them by their immediate neighbour atoms and bond constitution. The evaluation of the atom-groups' contributions was carried out by means of a fast Gauss-Seidel fitting method founded upon the experimental data of 1895 compounds from literature. The result has been tested for plausibility using a 10-fold cross-validation (cv) procedure. The direct calculation and the cv test proved the applicability of the present method by the close similarity and excellent goodness of fit R2 and Q2 of 0.9023 and 0.8821, respectively. The respective standard deviations are ±2.01 and ±2.16 dyn/cm. Some correlation peculiarities have been observed in a series of ordinary and ionic liquids with homologous alkyl chains which are illustrated and discussed in detail, exhibiting the limit of the present method.
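The group-additivity scheme described above boils down to a linear model: the property of a molecule is the sum of the contributions of the atom groups it contains, with the contributions fitted to experimental data. The sketch below illustrates that structure on made-up numbers, solved by ordinary least squares rather than the authors' Gauss-Seidel procedure (the group counts, surface-tension values and resulting contributions are all toy assumptions).

import numpy as np

# Rows: molecules; columns: counts of each atom group in the molecule (toy data).
group_counts = np.array([
    [2, 1, 0],
    [3, 0, 1],
    [1, 2, 1],
    [4, 1, 1],
], dtype=float)

# Experimental surface tensions (dyn/cm) for the same molecules (toy data).
sigma_exp = np.array([24.0, 27.5, 30.1, 33.8])

# Least-squares estimate of the per-group contributions, then back-prediction.
contrib, *_ = np.linalg.lstsq(group_counts, sigma_exp, rcond=None)
sigma_pred = group_counts @ contrib
print("group contributions:", contrib)
print("predicted surface tensions:", sigma_pred)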
Calculation of the Higgs Mass for Quark and Lepton Electric Charges Swap Lie-Groupoid
Elias Koorambas
Subject: Physical Sciences, Nuclear & High Energy Physics Keywords: Higgs mass; Group Theory; Hypothetical Particles
Starting from the SU(2) group of weak interactions in the presence of Electric Charge Swap (ECS) symmetry, we show that ordinary and non-regular (ECS) leptons are related by the ECS rotational group SO(3). We find that many Standard Model (SM) algebras depend on the sine of the angle θs of the ECS rotational group SO(3). We call these ECSM algebras. Furthermore, the breaking of the gauge symmetry of the SM groupoid gives the massive ECS particle. We find that the ECS particle masses are related to the SM particle masses through sinθs. We also investigate the finite subgroups of the ECS Möbius transformations. We find that sinθs could be derived from the ECS dihedral group DF, which describes the symmetry of the fermionic polygon (F-gon). The average value of the anchor of the SM algebroid depends on the fermionic Catalan numbers (CF). Finally, we find that the ECS physics at loop level differs from the SM physics. The ECSM mass is suppressed by the CF numbers. For 24 fermions, the calculated one-loop radiative correction to the bare Higgs mass µ is 125 GeV, a value very close to the experimental one.
On the Average of p-Selmer Rank in Quadratic Twist Families of Elliptic Curves over Function Field
Niudun Wang, SunWoo Park
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: Selmer group; quadratic twist; elliptic curve
We show that if the quadratic twist family of a given elliptic curve over F_q[t] with char(F_q) > 5 has an element whose Neron model has a multiplicative reduction away from infinity, then the average p-Selmer rank is p+1 in large q-limit for almost all primes p.
Preprint HYPOTHESIS | doi:10.20944/preprints202008.0030.v1
Household Representative Sample Strategy for COVID-19 Large-Scale Population Screening
John Takyi-Williams
Subject: Medicine & Pharmacology, General Medical Research Keywords: COVID-19; group testing; household; screening
With the advent of the COVID-19 pandemic, testing is essential in order to isolate and treat infected persons and ultimately curb transmission of this infectious respiratory disease. Group testing has been used previously for various infectious diseases and has recently been reported for large-scale population testing of COVID-19. However, possible sample dilution as a result of large pool sizes has been reported, limiting the detection sensitivity of testing methods. Moreover, the need to sample all individuals prior to pooling overburdens limited resources such as test kits. An alternative strategy is proposed here, in which the test is performed on pooled samples from individuals representing different households. This strategy intends to improve the group testing method by reducing the number of samples collected and pooled during large-scale population testing. Moreover, it introduces a database system that enables continuous monitoring of the population's virus exposure for better decision making.
Vacuum Effective Actions and Mass-Dependent Renormalization in Curved Space
Omar Zanusso, Sebastián A. Franchino-Viñas, Tibério de Paula Netto
Subject: Physical Sciences, Particle & Field Physics Keywords: Effective actions, Renormalization Group, Semiclassical contributions
We review past and present results on the non-local form-factors of the effective action of semiclassical gravity in two and four dimensions computed by means of a covariant expansion of the heat kernel up to the second order in the curvatures. We discuss the importance of these form-factors in the construction of mass-dependent beta functions for the Newton's constant and the other gravitational couplings.
Blood Groups (ABO/Rh) And Sociodemographic and Clinical Profile among Patients with Leprosy in Angola
Euclides Nenga Manuel Sacomboio, Tomásia Oliveira Muhongo, Adelino Tchilanda Tchivango, Edson Kuatelela Cassinela, Silvana da Rocha Silveira, Mauricio Da Costa, Carlos Alberto Pinto de Sousa, Cruz Sebastião Sebastião, Eduardo Ekundi Valentim
Subject: Medicine & Pharmacology, General Medical Research Keywords: leprosy; ABO/Rh blood group; Clinical; Angola
Introduction: Leprosy, caused by Mycobacterium leprae, is one of the oldest infectious diseases in human history, and its eradication is linked to the control of poverty, the lack of basic sanitation, and the fragility of health and education services. Objective: To evaluate the frequency of blood groups (ABO/Rh) and the sociodemographic and clinical profile of Angolan patients with leprosy treated at the Anti-Tuberculosis and Leprosy Dispensary in Luanda, the capital city of Angola. Methodology: A descriptive, retrospective, cross-sectional study with a quantitative approach was carried out with 102 patients in Luanda in the second half of 2021. Results: Of the 102 patients included in the study, the majority belonged to the ORh+ group (51.9%), followed by the BRh+ (27.4%) and ARh+ (18.6%) groups; most were under 51 years of age (87.3%), had low education (54.9%) and came from urban areas (44.1%). As for clinical conditions, most had a multibacillary infection (93.1%), diagnosed mainly by smear microscopy (75.5%), without other infections (79.4%); some of them had complications (28.4%), and individuals with non-O blood groups showed changes in the blood count. Conclusion: Leprosy seems to be common in ORh+ individuals; it continues to affect especially those residing in densely populated areas and with low education, presenting as a multibacillary infection, with changes in the blood count being greater in non-O individuals.
Membership Enforcement as a Driver of the Evolution of Language
Daniil Ryabko, Alvaro Mbeju Moreno
Subject: Biology, Other Keywords: language evolution; evolution of altruism; group evolution
A novel hypothesis concerning language evolution is advanced. It posits that languages have evolved as a means of binding individuals to a group, as well as for defining those groups. This means that language evolution has to be considered on the level of groups and not only on the level of individuals. This hypothesis helps to explain the huge diversity of human languages, as well as their complexity. Perhaps more importantly, it explains why adults lose the ability to learn languages with the ease that children possess.
Factoring Continuous Characters Defined on Subgroups of Products of Topological Groups
Mikhail Tkachenko
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: Monoid; Group; Character; Homomorphism; Factorization; Roelcke uniformity
We study factorization properties of continuous homomorphisms defined on subgroups (or submonoids) of products of (para)topological groups (or monoids). A typical result is the following one: Let $D=\prod_{i\in I}D_i$ be a product of paratopological groups, $S$ be a dense subgroup of $D$, and $\chi$ a continuous character of $S$. Then one can find a finite set $E\subset I$ and continuous characters $\chi_i$ of $D_i$, for $i\in E$, such that $\chi=\big(\prod_{i\in E} \chi_i\circ p_i\big)\restriction_S$, where $p_i\colon D\to D_i$ is the projection.
The Effect of Group Spiritual Care on Hope in Patients with Multiple Sclerosis referred to the MS Society of Zahedan, Iran
Mozhgan Rahnama, Malihe Rahdar, Mehdi Afshari
Subject: Medicine & Pharmacology, Allergology Keywords: Group Spiritual Care; Hope; Multiple Sclerosis; Iran
Background and Aim: Multiple sclerosis (MS) is an autoimmune disease and a chronic inflammatory condition that induces a wide variety of mood and affective disorders, including depression and feelings of hopelessness, affecting many aspects of patients' quality of life (QoL). In view of the positive effects of spirituality and spiritual care in finding appropriate strategies for adaptation, this study aimed to determine the impact of group spiritual care (GSC) on levels of hope in patients suffering from MS. Materials and Methods: This clinical trial was conducted on a total of 96 patients with MS referred to the National Multiple Sclerosis Society (NMSS) in the city of Zahedan, Iran. Following sample selection via convenience sampling, the patients meeting the inclusion criteria were randomized into two groups, i.e., intervention and control. The data collection tools included a demographic information form and the Adult Hope Scale (AHS, Snyder et al. 1991), completed by the subjects at the pre- and post-intervention stages. The intervention group received five sessions of GSC over three weeks, while the control group members only talked over daily issues and their mental health problems. The data were analyzed using the SPSS Statistics software (ver. 14). Results: The Kruskal-Wallis test results revealed that the GSC intervention had a significant positive effect on raising hope in the patients with MS (p<0.001). Moreover, a significant increase was observed in the scores of the hope dimensions, including agency and pathway (p<0.001). Conclusion: GSC can effectively boost levels of hope in patients suffering from MS in all dimensions. Therefore, it is recommended to utilize this type of care in order to nurture hope in such individuals.
Calculation of the Isobaric Heat Capacities of the Liquid and Solid Phase of Organic Compounds at 298.15K by Means of a Generally Applicable Computer Algorithm Based on the Group-Additivity Method
Rudolf Naef
Subject: Chemistry, Physical Chemistry Keywords: heat capacity; group-additivity method; ionic liquids
The calculation of the isobaric heat capacities of the liquid and solid phase of molecules at 298.15 K is presented, applying a universal computer algorithm based on the atom-groups additivity method, using refined atom groups. The atom groups are defined as the molecules' constituting atoms and their immediate neighbourhood. In addition, the hydroxy groups of alcohols are further subdivided to take account of the different intermolecular interactions of primary, secondary and tertiary alcohols. The evaluation of the groups' contributions has been carried out by means of a fast Gauss-Seidel fitting calculus using experimental data from the literature. Plausibility has been tested immediately after each fitting calculation using a 10-fold cross-validation procedure. For the heat capacity of liquids, the respective goodness of fit of the direct (R2) and the cross-validation calculations (Q2) of 0.998 and 0.9975, the respective standard deviations of 8.2 and 9.16 J/mol/K, and a mean absolute percentage deviation (MAPD) of 2.69%, based on the experimental data of 1133 compounds, prove the excellent predictive applicability of the present method. The statistical values for the heat capacity of solids are only slightly inferior: the respective values of R2 and Q2 are 0.9915 and 0.9875, the respective standard deviations are 12.19 and 14.13 J/mol/K, and the MAPD is 4.65%, based on 732 solids. The predicted heat capacities for a series of liquid and solid compounds have been directly compared to those obtained by a complementary method based on the "true" molecular volume [1], and their deviations are elucidated.
Group entropies: from phase space geometry to entropy functionals via group theory
Henrik Jeldtoft Jensen, Piergiulio Tempesta
Subject: Physical Sciences, General & Theoretical Physics Keywords: Phase space volume, formal group theory, entropy
The entropy of Boltzmann-Gibbs, as proved by Shannon and Khinchin, is based on four axioms, where the fourth one concerns additivity. The group-theoretic entropies make use of formal group theory to replace this axiom with a more general composability axiom. As has been pointed out before, generalized entropies crucially depend on the allowed number of degrees of freedom $N$. The functional form of group entropies is restricted (though not uniquely determined) by assuming extensivity on the equal-probability ensemble, which leads to classes of functionals corresponding to sub-exponential, exponential or super-exponential dependence of the phase space volume $W$ on $N$. We review the ensuing entropies, discuss the composability axiom, relate them to the Gibbs paradox discussion, and explain why group entropies may be particularly relevant from an information-theoretic perspective.
Quasirecognition by Prime Graph of the Groups 2D2n(q) Where q < 105
Hossein Moradi, Mohammad Reza Darafsheh, Ali Iranmanesh
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: prime graph; simple group; orthogonal groups; quasirecognition
Let G be a finite group. The prime graph Γ(G) of G is defined as follows: the set of vertices of Γ(G) is the set of prime divisors of |G|, and two distinct vertices p and p' are connected in Γ(G) whenever G has an element of order pp'. A non-abelian simple group P is called recognizable by prime graph if, for any finite group G with Γ(G)=Γ(P), G has a composition factor isomorphic to P. In [4] it was proved that the finite simple groups 2Dn(q), where n ≠ 4k, are quasirecognizable by prime graph. Now in this paper we discuss the quasirecognizability by prime graph of the simple groups 2D2k(q), where k ≥ 9 and q is a prime power less than 105.
Efficient and Stable Voxel-Based Algorithm for Computing 3D Zernike Moments and Shape Reconstruction
An-Wen Deng, Chih-Ying Gwo
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: 3D Zernike moments; 3D Zernike radial polynomials; 3D Zernike polynomials; Spherical harmonics; Recurrence formula; Matrix Lie Group; Group action
3D Zernike moments based on 3D Zernike polynomials have been successfully applied to the field of voxelized 3D shape retrieval and have attracted increasing attention in biomedical image processing. As the order of the 3D Zernike moments increases, both computational efficiency and numerical accuracy decrease. Because of this, a more efficient and stable method for computing high-order 3D Zernike moments is proposed in this study. The proposed recursive formula for computing 3D Zernike radial polynomials is combined with the recursive calculation of spherical harmonics to develop a voxel-based algorithm for the calculation of 3D Zernike moments. The algorithm was applied to the 3D shape of Michelangelo's David with a size of 150×150×150 voxels. Compared to the method without additional acceleration, the proposed method uses the action of an orthogonal group of order sixteen and avoids unnecessary iterations; the speed-up factor is 56.783±3.999 when the order of the Zernike moments is between 10 and 450. The proposed method also obtained an accurate reconstructed shape, with an error rate (normalized mean square error) of 0.00 (4.17×10^-3), when the reconstruction was computed for all moments up to order 450.
The Evolution of the Majorana Neutrino Mass Renormalization Group in the Super-Weak Theory
Chitta Ranjan Das
Subject: Physical Sciences, Particle & Field Physics Keywords: super-weak theory; renormalization group; majorana neutrino mass
The super-weak interaction includes three simple extensions of the standard model: gauge extension, fermionic extension, and scalar extension. All of these extensions are strongly influenced by their complex phenomenology. They can explain a number of unresolved questions in particle physics and cosmology, including the genesis of dark matter, cosmic inflation, asymmetry of matter and antimatter, neutrino masses, and vacuum stability, if combined into a single structure. This is an extension of the gauge group of the standard model $G_{\rm SM}$ by $G_{\rm SM}\otimes U(1)_Z$ without any anomalies. We investigate the implications of the development of a general Majorana mass renormalization group for neutrinos with masses in the range of 0.03 $eV$ and 0.1 $eV$, which fall within the recently published range as well as the range to be explored in future planned experiments.
Automorphisms and Definability for Upward Complete Structures
Alexei L. Semenov, Sergei F. Soprunov
Subject: Mathematics & Computer Science, Computational Mathematics Keywords: definability; definability lattice; automorphism group; reduct; Svenonius theorem
The Svenonius theorem establishes the correspondence between the definability of relations in a countable structure and the automorphism groups of these relations in extensions of the structure. This correspondence may help in describing the definability lattice constituted by all definability spaces (reducts) of the original structure. However, the major difficulty here is the need to consider the extensions, which generally are obscure and hardly amenable to classification. Because of that, results on definability lattices were obtained only for $\omega$-categorical structures (i.e. those in which all elementary extensions are isomorphic to the structure itself). All known definability lattices for such structures proved to be finite. In this work we introduce the concept of an upwards complete structure, in which all extensions are isomorphic. Further, we define upwards completions of structures. For such structures, the Galois correspondence between the definability lattice and the lattice of closed supergroups of the automorphism group of the structure is an anti-automorphism. These lattices could be infinite in general. We describe a natural class of structures which have upwards completions, namely discretely homogeneous graphs, and present the explicit construction of their completions and the automorphism groups of the completions. We establish the general localness property of discretely homogeneous graphs and present examples of completable structures and their completions.
The Origins of Seljuk Ornamental Art in Anatolia
Mehmet Erbudak
Subject: Arts & Humanities, Art History & Restoration Keywords: Ornaments; Symmetry; Wallpaper Group; Rum Seljuks; Islamic Art
The Seljuks, who came from the Central Asian prairies, invaded Asia Minor towards the end of the 11th century. The land had been settled by then mainly by the Christian Eastern Romans and Armenian peoples. Seljuks were Moslems; they built monumental structures, some of which have survived the natural disasters of several centuries to the present day. Most of these architectural marvels contain extraordinary decorations in the form of ornaments, friezes and rosettes. I have studied periodic ornaments and classified them into 17 mathematical wallpaper groups according to their symmetry properties that reveal their global structure. On the other hand, the local details of the ornaments, the motifs, show a clear variation from simple geometric patterns to complicated and refined forms. Seljuk art was originally influenced by Persian styles, later influenced by the Christian population in Asia Minor, and finally represents the impact of Islamic culture.
The Study of Network Community Capacity to be a Subject: Digital Discursive Footprints
Anatoly N. Voronin, Taisiya A. Grebenschikova, Tina A. Kubrak, Timofey A. Nestik, Natalya D. Pavlova
Subject: Behavioral Sciences, Social Psychology Keywords: discourse; digital footprints; group reflexivity; network community; subjectness
The article is devoted to the assessment of the network community as a collective subject, as a group of interconnected and interdependent persons performing joint activities. According to the main research hypothesis, various forms of group subjectness, which determine its readiness for joint activities, are manifested in the discourse of the network community. Discourse constitutes a network community, mediates the interaction of its participants, represents ideas about the world, values, relationships, attitudes, sets patterns of behavior. A procedure is proposed for identifying discernible traces of the subjectness of a network community at various levels (lexical, semantic, content-analytical scales, etc.). The subjective structure of the network community is described based on experts' implicit representations. The revealed components of the subjectness of network communities are compared with the characteristics of the subjectness of offline social groups. It is shown that the structure of the subjectness of network communities for some components is similar to the structure of the characteristics of the subjectness of offline social groups: the discourse of the network community represents a discussion of joint activities, group norms and values, problems of civic identity. The specificity of network communities' subjectness is revealed, which is manifested in the positive support of communication within the community, the identification and support of distinction between "us" and "them". Two models of the relationship between discursive features and the construct "subjectness" are compared: additive-cumulative and additive. The equivalence of models is established based on the discriminativeness and the level of consistency with expert evaluation by external criteria.
Aesthetic Patterns with symmetries of the Regular Polyhedron
Peichang Ouyang, Liying Wang, Tao Yu, Xuan Huang
Subject: Mathematics & Computer Science, Computational Mathematics Keywords: Regular polyhedra, reflection group, fundamental region, invariant mapping
A fast algorithm is established to transform points of the unit sphere into the fundamental region symmetrically. With the resulting algorithm, a flexible form of invariant mappings is achieved to generate aesthetic patterns with the symmetries of the regular polyhedra. This method avoids the order restriction of symmetry groups and can be similarly extended to treat regular polytopes in n-dimensional space for n ≥ 4.
Amenability Modulo an Ideal of Second Duals of Semigroup Algebras
Hamidreza Rahimi, Khalil Nabizadeh
Subject: Mathematics & Computer Science, Analysis Keywords: amenability modulo an ideal; semigroup algebra; group congruence
The aim of this paper is to investigate the amenability modulo an ideal of Banach algebras, with emphasis on applications to homological algebras. In doing so, we show that amenability modulo an ideal of A** implies amenability modulo an ideal of A. Finally, for a large class of semigroups, we prove that l1(S)** is amenable modulo Iσ** if and only if an appropriate group homomorphic image of S is finite, where Iσ is the closed ideal induced by the least group congruence σ.
Adaptive Badging Control for a Decentralized Epidemic Policy Using Feasible Group Testing Protocol (ABCDEFG Protocol)
Inavamsi Enaganti, Shirshendu Chatterjee, Bud Mishra
Subject: Mathematics & Computer Science, Other Keywords: Epidemics; Badging, Pool-testing; Group-testing; ODE model; Policies
The ABCDEFG Protocol provides an evolving tool for imposing structure on the flow of Covid infection information obtained from community testing, collective policy and individual compliance. The ABCDEFG Protocol could not assume soundness, invariance, symmetry and completeness of the available information, and relied on signaling game theory to design solutions that could evolve with the variable narratives, theories, individual utilities and pathogen variants. Thus, the ABCDEFG Protocol suggests a novel and very flexible pool-testing and badging protocol in the context of controlling contagious epidemics and tackling the far-reaching associated challenges, including understanding and evaluating the individual and collective risks of returning previously infected individuals to normal society, and other economic and social arrangements and interventions to protect against disease. The ABCDEFG Protocol uses both control-theoretic and game-theoretic mathematical models that may be centralized (an optimizing policy maker mandates behavior based on estimated models) or decentralized (a strategizing individual selects their behavior based on available asymmetric information). The ABCDEFG Protocol demonstrates how society can continue to carry out plausible economic activities while controlling the prevalence of a contagious disease by keeping the number of infected people below a desired limit, without compromising individuals' privacy, despite the presence of deception and selfishness among people and the limitations of available resources. Different types of badges would come with different restrictions. Badges would be reissued periodically by third-party testing centers via suitably frequent pool testing of samples of the participants. The size of the pools, frequency of tests, and allowable activities for people with a given type of badge would depend on the available resources, the prevalence of the disease, and the efficacy of the equipment used in the tests.
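The abstract does not spell out the protocol's testing arithmetic, so the sketch below falls back on classic two-stage Dorfman pooling as a generic stand-in (not the ABCDEFG Protocol itself): each pool of size n is tested once, and only a positive pool triggers n individual follow-up tests.

def dorfman_tests_per_person(prevalence, pool_size):
    """Expected number of tests per person under two-stage Dorfman pooling."""
    p_pool_positive = 1.0 - (1.0 - prevalence) ** pool_size
    return 1.0 / pool_size + p_pool_positive

# At 2% prevalence, moderate pool sizes need far fewer than one test per person.
for n in (2, 5, 10, 20):
    print(n, round(dorfman_tests_per_person(0.02, n), 3))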
Weakly-Interacting Bose-Bose Mixtures from the Functional Renormalisation Group
Felipe Isaule, Ivan Morera
Subject: Physical Sciences, Atomic & Molecular Physics Keywords: quantum gases; bose-bose mixtures; bose polarons; renormalisation group
We provide a detailed presentation of the functional renormalisation group (FRG) approach to weakly-interacting Bose-Bose mixtures, including a complete discussion on the RG equations. To test this approach, we examine thermodynamic properties of balanced three-dimensional Bose-Bose gases at zero and finite temperatures and find a good agreement with related works. We also study ground-state energies of repulsive Bose polarons by examining mixtures in the limit of infinite population imbalance. Finally, we discuss future applications of the FRG to novel problems in Bose-Bose mixtures and related systems.
Biophysics of Nervous Embryo-Fetal Development
Arturo Tozzi
Subject: Keywords: soft matter; liquid crystals; braid group; anyon; brain; microcolumn
The dynamical processes of living systems are characterized by the cooperative interaction of many units. This claim enables us to portray the embryo-fetal development of the central and peripheral nervous systems in terms of assemblies of building blocks. We describe how the structure and arrangement of nervous fibers is - at least partially - dictated by biophysical and topological constraints. The far-flung field of soft-matter polymers/nematic colloids sheds new light on neurulation in mammalian embryos, suggesting an intriguing testable hypothesis: the development of the central and peripheral nervous systems might be correlated with the occurrence of local thermal changes in embryo-fetal tissues. Further, we show a correlation between the fullerene-like arrangement of the cortical microcolumns and the Frank-Kasper phases of artificial quasicrystal assemblies. Last, but not least, we explain how and why the multisynaptic ascending nervous fibers connecting the peripheral receptors to the neocortical areas can be viewed as the real counterpart of mathematical tools such as knot theory and braid groups. Their group structure and generator operations point towards a novel approach to long-standing questions concerning human sensation and perception, leading to the suggestion that the very arrangement and intermingling of the peripheral nervous fibers contributes to the cortical brain activity. In line with the old claims of D'Arcy Thompson, we conclude that the arrangement and the pattern make the function in a variety of biological instances, leading to countless testable hypotheses.
Stakeholders' Recount on the Dynamics of Indonesia's Renewable Energy Sector
Satya Widya Yudha, Benny Tjahjono, Philip Longhurst
Subject: Engineering, Energy & Fuel Technology Keywords: Focus group discussion; sustainability; renewable energy development; Indonesia; geothermal
The study described in this paper uses direct evidence from processes applied in the developing economy of Indonesia as it defines the trajectory for its future energy policy and energy research agenda. The paper makes explicit the process undertaken by key stakeholders in assessing and determining the suitability, feasibility and dynamics of the renewable energy sector. Barriers and enablers that are key in selecting the most suitable renewable energy sources for developing economies have been identified from extensive analyses of research documents alongside qualitative data from the focus group discussions (FGD). The selected FGD participants encompass the collective views that cut across the political, economic, social, technological, legal and environmental aspects of renewable energy development in Indonesia. The information gained from the FGD gives insights into the outlook and challenges that are central to the energy transition within the country, alongside the perceptions of renewable energy development from the influential stakeholders contributing to the process. It is notable that the biggest barriers to transition are centred on planning and implementation aspects, as it is also evident that many in the community do not adhere to the same vision.
Experimental Investigation of the Wetting Ability of Microemulsion to Different Coal Samples
Fengyun Sang, Song Yan, Gang Wang, Zujie Ma, Jinzhou Li
Subject: Engineering, Energy & Fuel Technology Keywords: coal wettability; microemulsion; contact angle; functional group; clay mineral
To improve the water injection effect, microemulsions (MEs) were used to wet coal seams, in comparison with water and sodium dodecyl sulfate (SDS) solution. Wetting effects were characterized by contact angle, X-ray diffraction and Fourier infrared spectroscopy. The results showed that the microemulsion has better spreadability on the coal surface and stronger wettability for coals of different ranks and different particle sizes than traditional wetting agents. The W/O-type microemulsion has a greater affinity for coal than the O/W and bicontinuous types. Oxygen and hydrogen contents contributed to wetting. Different wetting agents have the greatest impact on the oxygen-containing functional group absorption zone of coal, but have little impact on the clay mineral composition. As the content of quartz increased, the content of montmorillonite decreased, and the hydrophilicity of coal increased. This research proposes new ideas for solving coal dust problems and reducing coal mine disasters.
The Basis For A Neurobiological-Associative Model of Personality and Group Cohesion: The Evolutionary And System Biological Origins Of Social Exclusion, Hierarchy, and Structure
Michael Thomas
Subject: Behavioral Sciences, Behavioral Neuroscience Keywords: neurobiological; personality; group processes; cohesion; intelligence; oxytocin; serotonin; dopamine
By using a systems biological perspective and the available literature on human social interaction, grouping, and cohesiveness, a new coherent model is proposed that integrates existing social integration and neurobiological research into a theoretical neurobiological framework of personality and social interaction. This model allows for the coherent analysis of complex social systems and the interactions within them, and proposes a framework for estimating group cohesiveness and evaluating group structures in order to build and organize optimized social groups. This "Neurobiological-Associative" model proposes two primary feedback loops, with environmental conditioning (learning) being sorted into an associative model that modulates interaction with the social environment, which in turn impacts the second feedback loop involving the individual's neurobiological capacity. In this paper, the concept of neurobiological capacity is developed based upon contemporary research on intelligence, personality, and social behavior, with a focus on the oxytocin, serotonin, and dopamine systems. The basis of social exclusion and group structure is thus, expressed in the simplest terms, neurobiological compatibility and risk assessment modulated by an internal associative model.
Preprint ESSAY | doi:10.20944/preprints201810.0102.v3
What if the 'Anthropocene' is not Formalised as a New Geological Series/Epoch?
Valenti Rull
Subject: Earth Sciences, Other Keywords: Anthropocene, series/epoch, chronostratigraphic units, formalization, Anthropocene Working Group
In the coming years, the Anthropocene Working Group (AWG) will submit its proposal on the 'Anthropocene' as a new geological epoch to the International Commission on Stratigraphy (ICS) for approval. If approved, the proposal will be sent to the International Union of Geological Sciences (IUGS) for ratification. If the proposal is approved and ratified, the 'Anthropocene' will be formalised and the Holocene Series/Epoch will be officially terminated. Currently, the 'Anthropocene' is a broadly used term and concept in a wide range of scientific and non-scientific contexts and, for many, the official acceptance of this term is only a matter of time. However, the AWG proposal, in its present state, seems not to fully meet the ICS requirements for a new geological epoch. This paper asks what could happen if the current 'Anthropocene' proposal is not formalised by the ICS/IUGS. The possible stratigraphic alternatives are evaluated on the basis of the more recent literature and the personal opinions of distinguished AWG and ICS members. The eventual impact on the environmental sciences and on non-scientific sectors, where the 'Anthropocene' seems already firmly rooted and de facto accepted as a new geological epoch, is also discussed.
Chemical Interaction–Induced Evolution of Phase Compatibilization in Blends of Poly(hydroxy ether of bisphenol-A) with Poly(1,4-butylene terephthalate)
Jing Liu, Hsiang-Ching Wang, Chean-Cheng Su, Cheng-Fu Yang
Subject: Materials Science, Polymers & Plastics Keywords: immiscible blend; compatibilization; homogeneous phase; alcoholysis; carbonyl group; copolymers
An immiscible blend of poly(hydroxy ether of bisphenol-A) (phenoxy) and poly(1,4-butylene terephthalate) (PBT) with phase separation was observed in as-blended samples. However, compatibilization of the phenoxy/PBT blends can be promoted through chemical exchange reactions of phenoxy with PBT upon annealing. In contrast to the as-blended samples, the annealed phenoxy/PBT blends had a homogeneous phase with a single Tg that could be enhanced by annealing at 260°C. Infrared (IR) spectroscopy demonstrated that phase homogenization could be promoted by annealing of the phenoxy/PBT blend, where alcoholytic exchange occurred between the dangling hydroxyl group in phenoxy and the carbonyl group in PBT in the heated blends. The alcoholysis reaction changes the aromatic linkages to aliphatic linkages in carbonyl groups, which initially led to the formation of a graft copolymer of phenoxy and PBT with an aliphatic/aliphatic carbonyl link. The progressive alcoholysis reaction resulted in the transformation of the initial homopolymers into block copolymers and finally into random copolymers, which promoted phase compatibilization in blends of phenoxy with PBT. Due to the fact that the amount of copolymers increased upon annealing, crystallization of PBT was inhibited by alcoholytic exchange in the blends.
The Symmetry of Linear Molecules
Katy L. Chubb, Per Jensen, Sergei N. Yurchenko
Subject: Physical Sciences, Atomic & Molecular Physics Keywords: symmetry; linear molecules; group theory; finite symmetry; point groups
A numerical application of linear-molecule symmetry properties, described by the D∞h point group, is formulated in terms of lower-order symmetry groups Dnh with finite n. Character tables and irreducible representation transformation matrices are presented for Dnh groups with arbitrary n-values. These groups are subsequently used in the construction of symmetry-adapted ro-vibrational basis functions for solving the Schrödinger equations of linear molecules as part of the variational nuclear motion program TROVE. The TROVE symmetrisation procedure is based on a set of "reduced" vibrational eigenvalue problems with simplified Hamiltonians. The solutions of these eigenvalue problems have now been extended to include the classification of basis-set functions using ℓ, the eigenvalue (in units of ℏ) of the vibrational angular momentum operator L̂z. This facilitates the symmetry adaptation of the basis set functions in terms of the irreducible representations of Dnh. 12C2H2 is used as an example of a linear molecule of D∞h point group symmetry to illustrate the symmetrisation procedure.
A Solvable Algebra for Massless Fermions
Stefan Groote, Rein Saar
Subject: Physical Sciences, Mathematical Physics Keywords: Solvable Lie group; Borel subgroup; massless particle states; chirality states
We derive the stabiliser group of the four-vector, also known as Wigner's little group, in case of massless particle states, as the maximal solvable subgroup of the proper orthochronous Lorentz group of dimension four, known as the Borel subgroup. In the absence of mass, particle states are disentangled into left and right handed chiral states, governed by the maximal solvable subgroups sol2± of order two. Induced Lorentz transformations are constructed and applied to general representations of particle states. Finally, it is argued how the spin-flip contribution is closely related to the occurrence of nonphysical spin operators.
Notes on the Lie Symmetry Exact Explicit Solutions for Nonlinear Burgers' Equation
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: Lie group; Burgers equation; exact solution; general solution; elementary function
In light of Liu et al.'s original works, this paper revisits the solution of Burgers' nonlinear equation $u_t=a(u_x)^2+bu_{xx}$. The study finds two exact and explicit solutions for the groups $G_4$ and $G_6$, as well as a general solution. A numerical simulation is carried out. In the appendix, a Maple code is provided.
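The exact solutions themselves are not reproduced in this listing, but a crude numerical check of the equation u_t = a(u_x)^2 + b*u_xx is straightforward to set up. The sketch below uses an explicit central-difference scheme with arbitrary coefficients, initial data and boundary handling (an illustrative toy, not the paper's simulation or its Maple code).

import numpy as np

a, b = 1.0, 0.1
nx, L = 201, 10.0
x = np.linspace(0.0, L, nx)
dx = x[1] - x[0]
dt = 0.2 * dx**2 / b           # time step well below the explicit diffusion limit dx^2/(2b)

u = np.exp(-(x - L / 2.0)**2)  # arbitrary smooth initial profile

for _ in range(2000):
    ux = (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)        # central first derivative
    uxx = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2  # central second derivative
    u = u + dt * (a * ux**2 + b * uxx)                        # u_t = a*(u_x)^2 + b*u_xx
    u[0] = u[-1] = 0.0                                        # simple fixed boundaries

print(u.max(), u.sum() * dx)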
Sweetening Pharmaceutical Radiochemistry by 18F-Fluoroglycosylation: Recent Progress and Future Prospects
Sandip S. Shinde, Simone Maschauer, Olaf Prante
Subject: Chemistry, Medicinal Chemistry Keywords: fluorine-18; prosthetic group; 18F-fluoroglycosylation; positron emission tomography; PET
In the field of 18F-chemistry for the development of radiopharmaceuticals for positron emission tomography (PET), various labeling strategies by the use of prosthetic groups have been implemented, including chemoselective 18F-labeling of biomolecules. Among those, chemoselective 18F-fluoroglycosylation methods focus on the sweetening of pharmaceutical radiochemistry by offering a highly valuable tool for the synthesis of 18F-glycoconjugates with suitable in vivo properties for PET imaging studies. A previous review covered the various 18F-fluoroglycosylation methods that have been developed and applied as of 2014 [Maschauer and Prante, BioMed. Res. Int. 2014, 214748]. This paper is an updated review, providing the recent progress in 18F-fluoroglycosylation reactions and the preclinical application of 18F-glycoconjugates, including small molecules, peptides, and high-molecular-weight proteins.
Prevalence, Phylogroups, Antimicrobial Susceptibility and Genetic Diversity of Escherichia coli Isolates From Food Products
Babak Pakbin, Samaneh Allahyari, Zahra Amani, Wolfram Manuel Bruck, Razzagh Mahmoudi, Amir Peymani
Subject: Life Sciences, Microbiology Keywords: Escherichia coli; Antimicrobial resistance; Food samples; Phylogenetic group; Genetic diversity
The emergence of multi-drug resistant E. coli is a matter of increasing concern to global public health. The aim of this study was to investigate the incidence, antibiotic resistance pattern, phylogroups and genetic variation of E. coli isolates from raw milk, vegetable salad and ground meat samples. Methods: Culture-based techniques, Kirby-Bauer disk diffusion susceptibility testing, PCR and RAPD assays were used to determine the incidence rate, antimicrobial resistance pattern, phylogenetic groups and genetic diversity of the E. coli isolates. Results: E. coli isolates were highly resistant to amoxicillin (79.16%), trimethoprim-sulfamethoxazole (70.83%), amoxicillin-clavulanic acid (62.50%), tetracycline (54.16%), chloramphenicol (54.16%), nitrofurantoin (54.16%), ampicillin (45.83%), streptomycin (45.83%), and kanamycin (33.33%), and completely susceptible to norfloxacin and azithromycin. 70.83% of the isolates were multi-drug resistant. Most E. coli isolates (46%) belonged to phylogroup A. RAPD with the UBC245 primer categorized the isolates into 11 clusters. A high level of genetic diversity was found among the isolates; however, 33.3% of the isolates were grouped in a major cluster (R5). Conclusions: Antibiotic resistance patterns are randomly distributed among the genetic clusters. Novel, practical and efficient food safety control and surveillance systems for multi-drug resistant foodborne pathogens are required to control foodborne pathogen contamination.
Attitudes of Teachers in Training Towards People With HIV/AIDS.
Aranzazú Cejudo-Cortés, Encarnación Pedrero-García, Pilar Moreno-Crespo, Olga Moreno-Fernández
Subject: Social Sciences, Accounting Keywords: HIV; AIDS; vulnerable group; young people; trainee teachers; health education
Discriminatory attitudes towards people living with HIV/AIDS are prevalent. A Joint United Nations Program on HIV/AIDS report (2019) indicated that more than 50% of the people surveyed in one of the studies spanning 26 countries expressed unfavorable attitudes towards HIV-positive people. The objective of this study was to assess the attitudes of senior Education Studies students at a university in Spain towards people with HIV/AIDS so as to propose specific educational interventions. The study employed a quantitative methodological approach; a questionnaire with a 14-item attitude score served as the analytical instrument. The study sample comprised 613 students from the School of Education at the University of Huelva, Spain. The results showed that more than 50% of the School's senior students had discriminatory attitudes towards HIV-positive people, some of whom were fellow classmates. This study proposes several formative approaches to reducing the stigma suffered by HIV-positive people, while also improving senior students' skills and capabilities in the field of health promotion.
Four-Dimensional Almost Einstein Manifolds with Skew-Circulant Structures
Iva Dokuzova, Dimitar Razpopov
Subject: Mathematics & Computer Science, Geometry & Topology Keywords: Riemannian manifold; Einstein manifold; sectional curvatures; Ricci curvature; Lie group
We consider a four-dimensional Riemannian manifold M with an additional structure S, whose fourth power is minus identity. In a local coordinate system the components of the metric g and the structure S form skew-circulant matrices. Both structures S and g are compatible, such that an isometry is induced in every tangent space of M. By a special identity for the curvature tensor, generated by the Riemannian connection of g, we determine classes of Einstein and almost Einstein manifolds. For such manifolds we obtain propositions for the sectional curvatures of some characteristic 2-planes in a tangent space of M. We consider a Hermitian manifold associated with the studied manifold and find conditions for g under which it is a Kähler manifold. We construct some examples of the considered manifolds on Lie groups.
Decision Support System Research on Forest Tending Problems Based on Process Management System
Hui Jing, Wukui Wang, Aline Umutoni
Subject: Engineering, Other Keywords: forest tending; group decision support system; process management; data integration
In this study, the decision-making process management of forest tending in the forestry business is decentralized, and forest tending decision-making activities at different points in time are integrated by decision makers at different geographical locations. The decision-making process was analyzed and optimized from a system perspective. Based on the optimized decision-making process, a forest tending business group decision support system (FTGDSS) was established. We first reviewed and discussed the characteristics and development of the forest tending business and forestry decision support systems. Business Process Modeling Notation was used to draw a current-state flow chart of the forest tending business and to identify important decision points in the process of tending decision-making. We also analyzed the content and attributes of each decision point, and described the system structure, functional framework, knowledge base structure, and reasoning algorithm of FTGDSS in detail. Finally, FTGDSS was evaluated along the two dimensions of the technology adoption model. FTGDSS integrates different levels of time-space decision-making activities, historical tending data, business plans, and decision-makers' management tendencies into the decision-making process, and automatically extracts decision-making data from the forest business process management enterprise resource planning system (Smartforest), which improves the ease of use of the decision support system (DSS). It also improves the quality of forest tending decisions and enables the DSS to better support multi-target management strategies.
Exploring Group Movement Pattern through Cellular Data: A Case Study of Tourists in Hainan
Xinning Zhu, Tianyue Sun, Hao Yuan, Zheng Hu, Jiansong Miao
Subject: Mathematics & Computer Science, General & Theoretical Computer Science Keywords: Low accuracy CDRs; Group movement pattern; Data mining; Travel behaviors
Identifying group movement patterns of crowds and understanding group behaviors is valuable for urban planners, especially when the groups are special such as tourist groups. In this paper, we present a framework to discover tourist groups and investigate the tourist behaviors using mobile phone call detail records (CDRs). Unlike GPS data, CDRs are relatively poor in spatial resolution with low sampling rates, which makes it a big challenge to identify group members from thousands of tourists. Moreover, since touristic trips are not on a regular basis, no historical data of the specific group can be used to reduce the uncertainty of trajectories. To address such challenges, we propose a method called group movement pattern mining based on similarity (GMPMS) to discover tourist groups. To avoid large amounts of trajectory similarity measurements, snapshots of the trajectories are firstly generated to extract candidate groups containing co-occurring tourists. Then, considering that different groups may follow the same itineraries, additional traveling behavioral features are defined to identify the group members. Finally, with Hainan province as an example, we provide a number of interesting insights of travel behaviors of group tours as well as individual tours, which will be helpful for tourism planning and management.
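To make the snapshot idea above concrete, the following is a minimal, hypothetical Python sketch (not the authors' GMPMS implementation): users repeatedly observed at the same cell tower in the same time slot are collected as candidate group members before any finer behavioral filtering. The record format and the min_cooccur threshold are illustrative assumptions.

```python
from collections import defaultdict
from itertools import combinations

def candidate_groups(records, min_cooccur=3):
    """Group users who repeatedly co-occur in CDR snapshots.

    records     : iterable of (user_id, time_slot, cell_id) events
    min_cooccur : minimum number of shared snapshots for a candidate pair
    """
    snapshot = defaultdict(set)            # (time_slot, cell_id) -> users present
    for user, slot, cell in records:
        snapshot[(slot, cell)].add(user)

    cooccur = defaultdict(int)             # unordered user pair -> co-occurrence count
    for users in snapshot.values():
        for pair in combinations(sorted(users), 2):
            cooccur[pair] += 1

    return [pair for pair, count in cooccur.items() if count >= min_cooccur]
```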
Valuation of Large Variable Annuity Portfolios Using Linear Models with Interactions
Guojun Gan
Subject: Mathematics & Computer Science, Probability And Statistics Keywords: variable annuity; portfolio valuation; linear regression; group-lasso; interaction effect
A variable annuity is a popular life insurance product that comes with financial guarantees. Using Monte Carlo simulation to value a large variable annuity portfolio is extremely time-consuming. Metamodeling approaches have been proposed in the literature to speed up the valuation process. In metamodeling, a metamodel is first fitted to a small number of variable annuity contracts and then used to predict the values of all other contracts. However, metamodels that have been investigated in the literature are sophisticated predictive models. In this paper, we investigate the use of linear regression models with interaction effects for the valuation of large variable annuity portfolios. Our numerical results show that linear regression models with interactions are able to produce accurate predictions and can be useful additions to the toolbox of metamodels that insurance companies can use to speed up the valuation of large VA portfolios.
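As an illustration of the metamodeling idea described above, the sketch below fits a linear model with pairwise interaction terms on a small set of valued contracts and uses it to predict the rest of the portfolio. The contract features and the scikit-learn pipeline are assumptions for illustration only, not the paper's actual specification (which also considers group-lasso selection of the interaction terms).

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# Hypothetical contract features: age, account value, guarantee level
X = rng.uniform([30, 1e4, 0.5], [80, 5e5, 1.5], size=(10_000, 3))
fair_value = 0.02 * X[:, 1] * X[:, 2] - 50 * X[:, 0] + rng.normal(0, 500, 10_000)

# Fit the metamodel on a small "representative" subset of valued contracts...
idx = rng.choice(len(X), size=300, replace=False)
model = make_pipeline(
    PolynomialFeatures(degree=2, interaction_only=True, include_bias=False),
    LinearRegression(),
)
model.fit(X[idx], fair_value[idx])

# ...then predict the whole portfolio instead of running Monte Carlo on every contract.
portfolio_estimate = model.predict(X).sum()
```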
Application of a General Computer Algorithm Based on the Group-Additivity Method for the Calculation of Two Molecular Descriptors at Both Ends of Dilution: Liquid Viscosity and Activity Coefficient in Water at Infinite Dilution
Subject: Chemistry, General & Theoretical Chemistry Keywords: liquid viscosity; activity coefficient at infinite dilution; group-additivity method
The application of a commonly used computer algorithm based on the group-additivity method for the calculation of the liquid viscosity coefficient at 292.15 K and the activity coefficient at infinite dilution in water at 298.15 K of organic molecules is presented. The method is based on the complete breakdown of the molecules into their constituting atoms, further subdividing them by their immediate neighbourhood. A fast Gauss-Seidel fitting method using experimental data from the literature is applied for the calculation of the atom groups' contributions. Plausibility tests have been carried out on each of the calculations using a 10-fold cross-validation procedure, which confirms the excellent predictive quality of the method. The goodness of fit (Q^2) and the standard deviation (σ) of the cross-validation calculations for the viscosity coefficient, expressed as log(η), were 0.9728 and 0.11, respectively, for 413 test molecules, and for the activity coefficient, expressed as log(γ∞), the corresponding values were 0.9736 and 0.31, respectively, for 621 test compounds. The present approach has proven its versatility in that it enabled at once the evaluation of the liquid viscosity of normal organic compounds as well as of ionic liquids.
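The Gauss–Seidel fitting step mentioned above can be illustrated with a small sketch: assuming each molecule is encoded as a row of atom-group occurrence counts, the group contributions solve a least-squares problem whose normal equations Gauss–Seidel handles one contribution at a time. This is a generic illustration, not the authors' in-house code.

```python
import numpy as np

def gauss_seidel_contributions(A, y, n_iter=500):
    """Fit atom-group contributions x so that A @ x approximates y.

    A : (n_molecules, n_groups) matrix of group occurrence counts
    y : (n_molecules,) experimental property values, e.g. log(viscosity)
    """
    M = A.T @ A                      # normal-equation matrix (SPD if A has full rank)
    b = A.T @ y
    x = np.zeros(M.shape[0])
    for _ in range(n_iter):
        for i in range(len(x)):
            # Solve row i for x[i] using the most recent values of the other entries
            residual = b[i] - (M[i] @ x - M[i, i] * x[i])
            x[i] = residual / M[i, i]
    return x
```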
Two Genetic Codes: Repetitive Syntax for Active non-Coding RNAs; non-Repetitive Syntax for the DNA Archives
Guenther Witzany
Subject: Life Sciences, Microbiology Keywords: RNA; DNA; Repetitive sequences; RNA stem loops; RNA group identities
Current knowledge of the RNA world indicates two different genetic codes being present throughout the living world. In contrast to non-coding RNAs that are built of repetitive nucleotide syntax, the sequences that serve as templates for proteins share – as main characteristics – a non-repetitive syntax. The difference in their syntax structures is consistent with the difference in the functions they represent. Whereas non-coding RNAs build groups that serve as regulatory tools in nearly all genetic processes, the coding sections represent the evolutionarily successful function of the genetic information storage medium. The DNA genomes themselves are rather inactive, whereas the non-coding RNA domain is highly active, even as non-random genetic innovation operators. This indicates that repetitive syntax is the essential prerequisite for RNA interactions to install variable RNA-group identities, whereas the non-repetitive syntax serves as a stable conservation tool for successful selection processes out of RNA-group cooperation and competition. The interaction opportunities of RNA loops with repetitive syntax are higher than with non-repetitive ones. Interestingly, these two genetic codes resemble the function of all natural languages, i.e., (a) everyday language use for organization and coordination of biotic group behavior, and (b) artificial (instrumental) language use for conservation of blueprints for complex protein-body constructions.
Lie Group Cohomology and (Multi)Symplectic Integrators: New Geometric Tools for Lie Group Machine Learning based on Souriau Geometric Statistical Mechanics
Frédéric Barbaresco, François Gay-Balmaz
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: Momentum Maps; Cocycles; Lie Group Actions; Coadjoint Orbits; Variational Integrators; (Multi)symplectic Integrators; Fisher Metric; Gibbs Probability Density; Entropy; Lie Group Machine Learning; Casimir Functions
In this paper we describe and exploit a geometric framework for Gibbs probability densities and the associated concepts in statistical mechanics, which unifies several earlier works on the subject, including Souriau's symplectic model of statistical mechanics, its polysymplectic extension, Koszul model, and approaches developed in quantum information geometry. We emphasize the role of equivariance with respect to Lie group actions and the role of several concepts from geometric mechanics, such as momentum maps, Casimir functions, coadjoint orbits, and Lie-Poisson brackets with cocycles, as unifying structures appearing in various applications of this framework to information geometry and machine learning. For instance, we discuss the expression of the Fisher metric in presence of equivariance and we exploit the property of the entropy of the Souriau model as a Casimir function to apply a geometric model for energy preserving entropy production. We illustrate this framework with several examples including multivariate Gaussian probability densities, and the Bogoliubov-Kubo-Mori metric as a quantum version of the Fisher metric for quantum information on coadjoint orbits. We exploit this geometric setting and Lie group equivariance to present symplectic and multisymplectic variational Lie group integration schemes for some of the equations associated to Souriau symplectic and polysymplectic models, such as the Lie-Poisson equation with cocycle.
Regularity for Quasi-Linear p-Laplacian Type Non-homogeneous Equations in the Heisenberg Group
Chengwei Yu
Subject: Mathematics & Computer Science, Analysis Keywords: p-Laplacian type; non-homogeneous equations; Heisenberg group; regularities; Riesz potentials.
When 2 − 1/Q < p ≤ 2, we establish the C^{0,1}- and C^{1,α}-regularity of weak solutions to quasi-linear p-Laplacian type non-homogeneous equations in the Heisenberg group.
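For orientation, equations of this type are usually written as sub-elliptic p-Laplacian equations with a right-hand side. The display below is a representative model form inferred from the stated keywords (an assumption, not a quotation from the paper), where \nabla_{H} denotes the horizontal gradient in the Heisenberg group and the non-homogeneous datum \mu is typically controlled via Riesz potentials:

\[
-\operatorname{div}_{H}\!\left( |\nabla_{H} u|^{p-2}\, \nabla_{H} u \right) \;=\; \mu
\quad \text{in } \Omega \subset \mathbb{H}^{n},
\qquad 2 - \tfrac{1}{Q} < p \le 2,
\]

with Q = 2n + 2 the homogeneous dimension of the Heisenberg group \mathbb{H}^{n}.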
The Collective Mind: An Experimental Analysis of Imitation and Self‑organization in Humans
Emmanuel Olarewaju
Subject: Behavioral Sciences, Cognitive & Experimental Psychology Keywords: Social interaction; Self-organization; Imitation; Coordination dynamics; Group normalization; Interpersonal symmetry
I present an experimental paradigm to explore the interpersonal dynamics generating a collective mind. I hypothesized that collective organization is based on dual interpersonal modes: (1) symmetrical and (2) anti‑symmetrical. I specified the geometric topology of these modes by detecting the spatiotemporal patterns that embed cooperative agents in a three‑dimensional matrix. I found that the symmetrical mode is executed automatically and without guidance. Conversely, the anti‑symmetrical mode required explicit direction and recruited attention for execution. I demonstrate that self‑other mirror‑symmetry stabilized group dynamics, enabled fast and efficient symmetrical imitation that optimized information transmission, whereas anti‑symmetrical imitation was comparatively slow, inefficient, and unstable. I determined that the anti‑symmetrical mode spontaneously transitioned to the symmetrical mode under perturbations. Crucially, this renormalization mechanism never transitioned from symmetrical to anti‑symmetrical. These self-organizing dynamics speak to interpersonal symmetry‑breaking. In the present work, spontaneous group choice mandated that agents synchronize cooperative cycles in symmetrical space under internal or external perturbations. I provide examples to illustrate that this self-regulating pullback attractor manifests in invertebrates and vertebrates alike. I conclude by suggesting that inter‑agent symmetry provides the social stability manifold through which attention-driven interactions enable intrapersonal and interpersonal change.
Logarithmic-Time Addition for BIT-Predicate With Applications for a Simple and Linear Fast Adder and Data Structures
Juan Ramírez
Subject: Mathematics & Computer Science, Logic Keywords: Structuralism; Set Theory; Type Theory; Arithmetic Model; Data Type; Tree; Group
A construction for the systems of natural and real numbers is presented in Zermelo-Fraenkel Set Theory that allows for simple proofs of the properties of these systems, and practical and mathematical applications. A practical application is discussed, in the form of a Simple and Linear Fast Adder (Patent Pending). Applications to finite group theory and analysis are also presented. A method is illustrated for finding the automorphisms of any finite group $G$, which consists of defining a canonical block form for finite groups. Examples are given to illustrate the procedure for finding all groups of $n$ elements along with their automorphisms. The canonical block form of the symmetry group $\Delta_4$ is provided along with its automorphisms. The construction of natural numbers is naturally generalized to provide a simple and sound construction of the continuum with order and addition properties, where a real number is an infinite set of natural numbers. A basic outline of analysis is proposed with a fast derivative algorithm. Under this representation, a countable sequence of real numbers is represented by a single real number. Furthermore, an infinite $\infty\times\infty$ real-valued matrix is represented by a single real number. A real function is represented by a set of real numbers, and a countable sequence of real functions is also represented by a set of real numbers. In general, mathematical objects can be represented using the smallest possible data type and these representations are calculable. In the last section, mathematical objects of all types are well assigned to tree structures in a proposed type hierarchy.
Precisely Equal Group Size and Allocation Bias in Nursing Randomized Controlled Trials: A Scientometric Study
Richard Gray, Daniel Bressington, Martin Jones, David R. Thompson
Subject: Medicine & Pharmacology, Allergology Keywords: Randomized Controlled Trial; Equal Group Size; Nursing; Allocation Bias; Effect Size
The manipulation of participant allocation in randomized controlled trials to achieve equal group sizes may introduce allocation bias, potentially leading to larger treatment effect estimates. This study aimed to estimate the proportion of nursing trials that have precisely equal group sizes and to examine whether there was an association with trial outcome. Data were extracted from a sample of 148 randomized controlled trials published in nursing science journals in 2017. One hundred trials (68%) had precisely equal group sizes. A positive outcome was reported in 70% and 58% of trials with equal and unequal groups, respectively. Trials from Asia were more likely to have equal group sizes than those from the rest of the world. Most trials reported a sample size calculation (n=105, 71%). In a third of trials (n=36, 34%), the number of participants recruited precisely matched the requirement of the sample size calculation; this was significantly more common in studies with equal group sizes. The high number of nursing trials with equal groups may suggest that nurses conducting clinical trials are manipulating participant allocation to ensure equal group sizes, increasing the risk of bias.
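For context on why exact balance is informative, under simple (unrestricted) 1:1 randomization the chance of ending up with exactly equal arms is modest; the snippet below computes it for a hypothetical 100-participant trial. This is an illustrative calculation, not an analysis from the study, and restricted designs such as block randomization of course produce equal groups legitimately.

```python
from math import comb

def p_exactly_equal(n):
    """P(exactly n/2 per arm) under simple 1:1 randomization of n participants (n even)."""
    return comb(n, n // 2) / 2 ** n

print(round(p_exactly_equal(100), 3))  # ~0.08, i.e. roughly an 8% chance
```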
Polyadic Braid Operators and Higher Braiding Gates
Steven Duplij, Raimund Vogl
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: Yang-Baxter equation; braid group; qubit; ternary; polyadic; braiding quantum gate
A new kind of quantum gates, higher braiding gates, as matrix solutions of the polyadic braid equations (different from the generalized Yang-Baxter equations) is introduced. Such gates lead to another special multiqubit entanglement which can speed up key distribution and accelerate algorithms. Ternary braiding gates acting on three-qubit states are studied in detail. We also consider exotic noninvertible gates which can be related to qubit loss, and define partial identities (which can be orthogonal), partial unitarity, and partially bounded operators (which can be noninvertible). We define two classes of matrices, star and circle ones, such that the magic matrices (connected with the Cartan decomposition) belong to the star class. The general algebraic structure of the introduced classes is described in terms of semigroups, ternary and $5$-ary groups and modules. The higher braid group and its representation by the higher braid operators are given. Finally, we show that for each multiqubit state there exist higher braiding gates which are not entangling, and the concrete conditions to be non-entangling are given for the obtained binary and ternary gates.
Overview of Gulf of Mottama Wetland (GoMW) & Size Distribution and Economic Status of Sea Bass in Myanmar
Phyoe Marnn, Chunguang He, Haider Ali, Soe Moe Tun, Khin Swe Wynn, Nyein Nyein Moe, Tao Yang, Nizeyimana Jean Claude, Muhammad Hasnain, Thaw Tar Oo, Yousef A. Al-Masnay
Subject: Biology, Other Keywords: The Gulf of Mottama Wetland; Morphometric measurement; catch weight; size group
The present study investigated the status of sea bass from Kokko and Kyuntone in the Gulf of Mottama Wetland (GoMW) area in Thanatpin Township, Bago Region, Myanmar, from September 2019 to August 2020. Fifty specimens were collected, measured and weighed monthly. Invoices of sea bass were collected monthly from the depot and fish sellers. In Kokko, the mean values of standard length and body weight were highest in March (32.70±1.58 and 660.7±112.23). In Kyuntone, the mean standard length peaked in January (31.39±7.16) but body weight peaked in March (963.24±280.86). The lowest mean values of standard length and body weight were found in June at both study areas. The invoice data revealed that the monthly catch weight of sea bass was most abundant in October: 829.92 kg in Kokko and 339.12 kg in Kyuntone. Based on price in relation to size group, small size C (<300 g) was the most abundant in Kokko (41%) and the second most abundant in Kyuntone (35%). Specimens were not landed in April and May. In June, young specimens were very rarely seen at both study sites. This study also describes the important roles of wetland fishes, the economic valuation of GoMW in Myanmar, and examples of fishing gear and the value chain of sea bass in Myanmar.
Mapping Effective Field Theory to Multifractal Geometry
Ervin Goldfain
Subject: Physical Sciences, General & Theoretical Physics Keywords: deterministic chaos; multifractals; effective field theory; Lyapunov exponents; Renormalization Group; selfsimilarity
Fractals and multifractals are well-known trademarks of nonlinear dynamics and classical chaos. The goal of this work is to tentatively uncover the unforeseen path from multifractals and selfsimilarity to the framework of effective field theory (EFT). An intriguing finding is that the partition function of multifractal geometry includes a signature analogous to that of gravitational interaction. Our results also suggest that multifractal geometry may offer insights into the non-renormalizable interactions presumed to develop beyond the Standard Model scale.
EIMECA: A Proposal for a Model of Collective Environmental Actions
Beatriz Carmona-Moya, Antonia Calvo-Salguero, M.Carmen Aguilar-Luzón
Subject: Behavioral Sciences, Applied Psychology Keywords: environmental identity; environmental collective action; emotions; moral conviction; group efficacy beliefs.
The deterioration and destruction of the environment are becoming more and more considerable, and greater efforts are needed to stop them. To accomplish this feat, all members of society must identify with environmental problems, with collective environmental action being one of the most relevant means of doing so. From this perspective, the analysis of the psychosocial factors that lead to participation in environmental collective action emerges as a priority objective in the research agenda. Thus, the aim of this study is to examine the role of "environmental identity", as conceptualized by Clayton, as a central axis for explaining environmental collective action. The inclusion of the latter in the theoretical framework of the SIMCA model gives rise to the model that we have called EIMECA. Two studies were conducted, and the results reveal that environmental identity, a variety of negative affects, as well as group efficacy accompanied by hope for a simultaneous additive effect, are critical when it comes to predicting environmental collective action.
Forecast of Growth and Development of Modal Fir Stands in the Lower Angara Region
Pavel Mikhaylov Mikhaylov, Svetlana Sultson, Andrey Goroshko
Subject: Earth Sciences, Atmospheric Science Keywords: Siberian fir; regression model; forest type group; bonitet; growth rate table
The paper presents an assessment of the growth dynamics of modal fir plantations in the Lower Angara region. At present, a vast area of fir forests in the Lower Angara region is characterised by a significant decrease in sustainability due to periodic forest fires, insect pest outbreaks and diseases, which lead to their natural degradation and death. However, the intensity of coniferous stand growth in certain forest site characteristics persists in the long term. Therefore, creating regression models of forest growth and development involving the identification of site conditions is very important both from a practical point of view and for environmental monitoring. The materials of the mass inventory of 3491 stands served as the initial data for studying the processes of natural growth of fir plantations. The Hoerl Model function is suitable for the best approximation of stand growth since it is characterised by a high levelling factor (from 0.970 to 0.987) and a small standard error (not exceeding 7%). As a result of the research, sketches of growth rate tables for the modal Siberian fir stands of the third bonitet class of the forb and mossy groups of forest types have been constructed.
Grid-friendly Active Demand Strategy on Air Conditioning Class Load
Jian-hong Zhu, Juping Gu, Min Wu
Subject: Engineering, Civil Engineering Keywords: air conditioning group load; grid friendly; active demand; storage; coordinated control
The growing number of grid-connected energy-efficient frequency-conversion air conditioners is likely to generate a large number of harmonics on the power grid, and the resulting shortage of reactive power at peak load may trigger voltage collapse. This conflicts with people's expectations of a comfortable environment. Concerning the problems mentioned above, an active management scheme is put forward to balance electricity use and the normal operation of air conditioning systems. Specifically, schemes to suppress low voltage ride through (LVRT) events and harmonics are designed first. Then, to dampen the adverse effects of the nonlinear group load running on the grid and to prevent unexpected accidents arising from grid malfunction, the dynamic sensing information obtained by an online monitor is analyzed, which serves as an active supervision mechanism. The combined application of active and passive filtering technology is studied as well. Thirdly, new energy storage is accessed reliably to cope with peak-cutting or grid-breaking emergencies, and a fuzzy control algorithm is investigated. Finally, system feasibility is verified by a co-operative simulation of the functional modules, and the active management target is achieved under a scientific and reasonable state-of-charge (SOC) management strategy.
Water-soluble and Cytocompatible Phospholipid Polymers for Molecular Complexation to Enhance Biomolecule Transportation to Cell in vitro
Kazuhiko Ishihara, Shohei Hachiya, Yuuki Inoue, Kyoko Fukazawa, Tomohiro Konno
Subject: Materials Science, Biomaterials Keywords: 2-methacryloyloxyethyl phosphorylcholine polymer; amphiphilic nature; cationic group; polymer aggregate; endocytosis
Water-soluble and cytocompatible polymers were investigated to enhance the transport efficiency of biomolecules into cells in vitro. Polymers composed of a 2-methacryloyloxyethyl phosphorylcholine (MPC) unit, a hydrophobic monomer unit, and a cationic monomer unit bearing an amino group were synthesized for complexation with a model biomolecule, siRNA. The cationic MPC polymer was shown to interact with both siRNA and the cell membrane and successfully transported siRNA into cells. Introducing 20−50 mol% hydrophobic units into the cationic MPC polymer enhanced the transport of siRNA into cells. The MPC units (10−20 mol%) in the cationic MPC polymer were able to impart cytocompatibility while maintaining interaction with siRNA and the cell membrane. The level of gene suppression of the siRNA/MPC polymer complex was evaluated in vitro and was at the same level as that of a conventional siRNA transfection reagent, whereas its cytotoxicity was significantly lower. We conclude that these cytocompatible MPC polymers may be promising complexation reagents for introducing biomolecules into cells, with the potential to contribute to future fields of biotechnology, such as the in vitro evaluation of gene functionality and the production of engineered cells with biological functions.
Public Health and Population Perspective of COVID-19 As A Global Pandemic
Nazneen Akhter, M. Salim Uzzaman, Amr Ravine
Subject: Social Sciences, Other Keywords: COVID-19; Risk Group; Syndrome; Community Quarantine; Population dynamics; Social distancing
COVID-19 emerged as an infectious disease of global health emergency and the foremost public health concern of the 21st century owing to its rapid spread across the globe. The disease grew from a single case to a cluster of cases in Wuhan, China (December 2019) and, within a few months of continuous upsurge, spread globally, placing enormous strain on the global public health field. Most significantly, the disease has caused high mortality in risk groups, high morbidity, a heavy burden on health care services, panic, anxiety, mental trauma, and social and economic insecurity, accompanied by a diverse range of social reactions and political pressure across the world. The new and initially unknown pathogenesis of the disease has also drawn significant attention and concern from the scientific community and political leaders. The disease varies considerably in its viral pattern, signs, symptoms, and characteristics, including its epidemiological and public health response (prevention strategy, diagnosis, case management, and treatment pattern) across countries. However, this variance is comparatively small for the basic public health prevention interventions, such as frequent hand washing, wearing face masks, maintaining social/physical distancing (2-6 meters), individual isolation and community quarantine for suspected exposure, and lockdown of community areas where cases are identified; these are the most commonly practiced public health interventions against the disease in the majority of epicenters across countries. Moreover, the diverse categorical appearance of signs and symptoms suggests that the condition is better portrayed as a COVID-19 syndrome than simply as COVID-19. Various factors, including socio-economic status, health status, population dynamics, health systems and infrastructure, health behavior, nutrition and food habits, and access to information and knowledge, have made this viral disease one of the most expensive diseases of the modern world to fight. In particular, the case fatality rate varies distinctly with population dynamics and with health system infrastructure and capacity, and this paper explains how these factors are uniquely distinguished from country to country. Apart from the contextual differences, the ongoing preventive measures (prevention, diagnosis, and treatment) are informed daily by the emerging characteristics and pattern of this viral disease and are practiced across countries with some variance. In terms of public health prevention practice and interventions, the whole world is connected and learning from each other's experiences in fighting the virus. The world community is eagerly awaiting the results of the ongoing therapeutic and vaccine trials initiated in several countries, and people remain optimistic that the global scientific community will deliver a solution in the coming months that will ultimately free the world from this most terrifying COVID-19 pandemic of the 21st century.
Group Analysis of the Boundary Layer Equations in the Models of Polymer Solutions
Sergey V. Meleshko, Vladislav V. Pukhnachev
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: symmetry; admitted Lie group; invariant solution; boundary layer equations; polymer solution
The famous Toms effect (1948) consists of a substantial increase of the critical Reynolds number when a small amount of soluble polymer is introduced into water. The most noticeable influence of polymer additives is manifested in the boundary layer near solid surfaces. The goal of the present paper is a group analysis of the boundary layer equations in two mathematical models of the flow of aqueous polymer solutions: the second grade fluid (Rivlin and Ericksen, 1955) and the model derived by Pavlovskii (1971). The equations of the unsteady two-dimensional boundary layer in the Pavlovskii and Rivlin-Ericksen models are analyzed for the first time here. These equations have no definite type so that finding their exact solutions is very important in order to understand the mathematical nature of the above mentioned models. The problem of group classification with respect to the arbitrary function of the longitudinal coordinate and time present in the equations, which sets the pressure gradient of the external flow, arises. All functions for which an extension of the admitted Lie group occurs are found. The task includes the ratio of two characteristic length scales. One of them is the Prandtl scale, and another is defined as the square root of the normalized coefficient of relaxation viscosity (Frolovskaya and Pukhnachev, 2018) and does not depend on the characteristics of the motion. The paper contains a number of exact solutions in the Pavlovskii model including a solution describing the flow near a critical point. Among the solutions of the new model of the boundary layer, a special place is taken by the solution of the stationary problem of flow around a rectilinear plate. Within the framework of the Prandtl theory of the boundary layer, such a solution was constructed by Blasius (1908). As is well-known, this solution has a non-removable defect: the transverse velocity near the edge of the plate increases without bound. The introduction of a relaxation term into the model makes it possible to eliminate this singularity.
Multi-Stage Meta-Learning for Few-Shot with Lie Group Network Constraint
Fang Dong, Fanzhang Li
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: meta-learning; lie group; machine learning; deep learning; convolutional neural network
Deep learning has achieved many successes in numerous fields, but when trainable samples are extremely limited, deep learning models often underfit or overfit the few available samples. Meta-learning was proposed to address the difficulties of few-shot learning and fast adaptation. A meta-learner learns to retain common knowledge by training on large-scale tasks sampled from a certain data distribution, so that it generalizes when facing unseen new tasks. Because of the limited samples, most approaches use only shallow neural networks to avoid overfitting and to reduce the difficulty of the training process, which wastes much extra information when adapting to unseen tasks. Euclidean-space gradient descent also makes the meta-learner's updates inaccurate. These issues make it hard for many meta-learning models to extract features from samples and to update network parameters. In this paper, we propose a novel method that uses a multi-stage joint training approach to overcome this bottleneck during the adaptation process. To accelerate the adaptation procedure, we also constrain the network to the Stiefel manifold, so that the meta-learner can perform more stable gradient descent within a limited number of steps. Experiments on mini-ImageNet show that our method reaches better accuracy under 5-way 1-shot and 5-way 5-shot conditions.
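The manifold constraint mentioned above can be sketched as follows: keep a weight matrix with orthonormal columns, project the Euclidean gradient onto the tangent space of the Stiefel manifold, take a step, and retract with a QR decomposition. This is a generic Riemannian-gradient sketch under assumed notation, not the authors' training code.

```python
import numpy as np

def stiefel_step(W, grad, lr=0.1):
    """One Riemannian gradient step on the Stiefel manifold {W : W.T @ W = I}.

    W    : (n, p) parameter matrix with orthonormal columns
    grad : (n, p) Euclidean gradient of the loss at W
    lr   : step size
    """
    WtG = W.T @ grad
    sym = 0.5 * (WtG + WtG.T)
    riem_grad = grad - W @ sym             # projection onto the tangent space at W
    Q, R = np.linalg.qr(W - lr * riem_grad)
    # Fix the sign ambiguity of QR so the retraction varies continuously
    signs = np.sign(np.diag(R))
    signs[signs == 0] = 1.0
    return Q * signs                       # rescale each column by its sign
```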
Almost Hypercomplex Manifolds with Hermitian-Norden Metrics and 4-dimensional Indecomposable Real Lie Algebras Depending on One Parameter
Hristo Manev
Subject: Mathematics & Computer Science, Geometry & Topology Keywords: Almost hypercomplex structure; Hermitian metric; Norden metric; Lie group; Lie algebra
We study almost hypercomplex structures with Hermitian-Norden metrics on 4-dimensional Lie groups considered as smooth manifolds. All the basic classes of a classification of 4-dimensional indecomposable real Lie algebras depending on one parameter are investigated. Some geometrical characteristics of the respective almost hypercomplex manifolds with Hermitian-Norden metrics are studied.
Feasibility of Children in Disaster: Evaluation and Recovery (CIDER) Protocol for Traumatized Adolescents in South Korea: A Preliminary Study
Mi-Sun Lee, Hyun Soo Kim, Eun Jin Park, Soo-Young Bhang
Subject: Medicine & Pharmacology, Psychiatry & Mental Health Studies Keywords: CIDER; post-traumatic stress disorder; trauma; adolescent; trauma-focused group psychotherapy
We aimed to evaluate the feasibility and preliminary efficacy of trauma-focused group psychotherapy in adolescents who experienced traumatic events in Korea. Participants were recruited from two sites in Korea. Children in Disaster: Evaluation and Recovery (CIDER) V1.0 is a trauma-focused group psychotherapy approach consisting of psychoeducation, normalization, stabilization, and techniques for managing the traumatic memory. The CIDER intervention consists of eight 50-minute-long sessions. The effectiveness of the intervention was evaluated using the Korean version of the Children's Response to Traumatic Events Scale-Revised (K-CRTES-R), the Beck Depression Inventory (BDI), the State Anxiety Inventory for Children (SAIC), and the Pediatric Quality of Life Inventory (PedQL). Data were analyzed by the Wilcoxon signed-rank test. We recruited 22 traumatized adolescents (mean age 16 years; SD 1.43; range 13–18 years old; 71.4% boys) in this pilot study. The K-CRTES-R scores were significantly improved (Z = −2.85, p < 0.01). The BDI demonstrated the effectiveness of the therapy (Z = −2.35, p < 0.05). The assessment of the PedQL supported the effect of CIDER (Z = −3.08, p < 0.01). However, there were no statistically significant differences in the SAIC scores (Z = −1.90, p > 0.05). The results show that there is preliminary evidence that the CIDER intervention reduces post-traumatic stress and depressive symptoms and improves quality of life. Our findings indicate that CIDER is feasible for treating adolescents exposed to traumatic events. Larger controlled trials are needed to establish the efficacy of this trauma-focused group psychotherapy and examine its impact on post-traumatic stress disorder.
Transversely Modulated Wave Packet
Vladimir N. Salomatov
Subject: Physical Sciences, General & Theoretical Physics Keywords: quasi-monochromatic waves; group velocity; dispersion relation; longitudinal modulation; coherence time
A wave packet consisting of two harmonic plane waves with the same frequency but different wave vectors is considered. The dispersion relation of the packet is structurally similar to the dispersion relation of a relativistic particle with a nonzero rest mass. The possibility of controlling the group velocity of a quasi-monochromatic wave packet by varying the angle between the wave vectors of its constituent waves is discussed.
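The relativistic analogy can be made explicit with a standard calculation (a sketch assuming nondispersive waves with \omega = c|\mathbf{k}| whose wave vectors of equal magnitude k make an angle \theta; the symbols are ours, not necessarily the paper's notation). Decomposing the wave vectors into a common longitudinal part and opposite transverse parts gives

\[
\omega^{2} = c^{2} k_{\parallel}^{2} + c^{2} k_{\perp}^{2},
\qquad k_{\parallel} = k\cos\tfrac{\theta}{2},\quad k_{\perp} = k\sin\tfrac{\theta}{2},
\]
\[
v_{g} = \frac{\partial \omega}{\partial k_{\parallel}} = \frac{c^{2} k_{\parallel}}{\omega} = c\cos\tfrac{\theta}{2},
\]

which has the same structure as the relativistic relation $E^{2} = p^{2}c^{2} + m^{2}c^{4}$, with the fixed transverse term $c\,k_{\perp}$ playing the role of the rest energy; the group velocity is then tuned by the angle between the constituent wave vectors.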
A Demographic and Biochemical Analytic of Cases of G6PD Deficiency: A Cross-Sectional Study from the Middle East
Ahmed Al-Imam
Subject: Medicine & Pharmacology, Pediatrics Keywords: Glucosephosphate Dehydrogenase Deficiency; G6PD; ABO Blood-Group System; Middle East; Arabs.
BACKGROUND G6PD deficiency is an inherited X-linked recessive condition leading to insufficient levels of glucose-6-phosphate dehydrogenase, thus causing hemolytic anaemia under certain conditions. METHODS Our study is an exploratory analysis of cases admitted to Jordan University Hospital. The studied parameters include demographics, clinical manifestations, and biochemical markers including Hb level, WBC count, liver enzymes, and blood grouping. RESULTS Most of the patients were admitted to the emergency unit (53.13%). Rh-positive individuals represented 57.81%, while patients of the AB blood group accounted for 75%. The mean values were 4.81 years (age), 29.06 hours (time-to-hospital admission), 38.10 degrees Celsius (temperature), 6.11 gm/dl (Hb), 13242.19 (WBC count), 343.20 U/L (S. ALP), and 50.98 IU/L (S. ALT). There was no significant difference between males and females or between favism-induced and drug-induced hemolytic episodes. The AB and Rh-positive blood groups had a protective effect in relation to liver enzymes. Patients who were admitted to the hospital within 24 hours of developing clinical manifestations had a better prognosis. CONCLUSION This study is the first inferential research on G6PD deficiency from the Middle East to explore cases from one of the largest healthcare centres in Jordan. The role of blood grouping should be investigated prospectively.
Scaling Group Analysis of Mixed Bioconvective Flow in Nanofluid with Presence of Slips, MHD and Chemical Reactions
Mohd Faisal Mohd Basir, M. Jashim Uddin, Ahmad Izani Md. Ismail
Subject: Physical Sciences, Mathematical Physics Keywords: Bioconvection; Magnetohydrodynamics; Scaling group of transformations; Slip boundary conditions; Chemical reaction
Bioconvective flows have attracted attention in recent years due to actual and potential applications. In this paper, we consider a steady and laminar convective MHD flow of a nanofluid with heat, mass and microorganism transfer in the presence of a heat source/sink. In addition, we assume there exists a first-order chemical reaction. The governing partial differential equations (PDEs) are reduced to ordinary differential equations (ODEs) using the scaling group of transformations, and the associated boundary value problem is then solved. The influences of selected governing parameters on the dimensionless velocity, temperature, nanoparticle concentration, density of motile microorganisms, skin friction, heat transfer, mass transfer, and motile microorganism density rates are computed and discussed.
Geology and Structure of Uru-Ugworji Diorite Lokpanukwu, Southeastern Nigeria
Nwosu Obinnaya Chikezie Victor, Ibe Kalu Kalu
Subject: Earth Sciences, Geology Keywords: Geological; Geophysical; Shale; Dolerite; Calcareous Sandstone; Asu River Group; Eze-Aku Formation
The Lokpaukwu Uru Quarry was examined geologically, geophysically, and through core analysis. The location lies between 5°56.149′N and 5°56.193′N and between 7°28.312′E and 7°28.356′E. The study location may include the Asu River Group and the Eze-Aku Formation. This area has five rock units. In the eastern research region, siltstone forms a "CAP" on the shale. Shale underlies half of the study area. The west has calcareous sandstone. The eastern part of the area is dolerite, the main rock, which spans siltstone and shale. The region's geological material contains iron. Two geological sections were analysed and interpreted to identify the five rock units and their outcrops in the study area. Geophysical electrical-resistivity research using the Schlumberger configuration found that the western, northwesterly, and central sections of the research region have a thick sedimentary sequence, whereas the eastern half has an igneous body, the project's main component. Sandstone, siltstone, and shale follow the high-resistivity rock in this location. The rock unit in the region was found in eleven core samples from the east half of the study area. Nine rock-unit core samples were found near Obichioke. The Lokpaukwu area's core data show the rocks' positions, kinds, minerals, and strengths. Geologic mapping shows that a major fault separates the viable Uru end from the unviable Obichioke lot. Recrystallization (calcitic matter) dominates the fault track. Thus, prior to quarrying the igneous (basic) units, comparable investigations are advised.
Revision and Extension of a Generally Applicable Group-Additivity Method for the Calculation of the Refractivity and Polarizability of Organic Molecules
Rudolf Naef, William E. Acree Jr.
Subject: Chemistry, Physical Chemistry Keywords: group-additivity method; Gauss-Seidel diagonalization; refractivity; polarizability; ionic liquids; silanes; boranes
In a continuation and extension of an earlier publication, the calculation of the refractivity and polarizability of organic molecules at standard conditions is presented, applying a commonly applicable computer algorithm based on an atom-group additivity method, where the molecules are broken down into their constituting atoms, these again being further characterized by their immediate neighbor atoms. The calculation of their group contributions, carried out by means of a fast Gauss–Seidel fitting calculus, used the experimental data of 5988 molecules from literature. An immediate subsequent ten-fold cross-validation test confirmed the extraordinary accuracy of the prediction of the molar refractivity, indicated by a correlation coefficient R^2 and a cross-validated analog Q^2 of 0.9997, a standard deviation σ of 0.38, a cross-validated analog S of 0.41, and a mean absolute deviation of 0.76%. The high reliability of the predictions has been exemplified with three classes of molecules: ionic liquids, silicon- and boron-containing compounds. The corresponding molecular polarizabilities have been calculated indirectly from the refractivity using the inverse Lorentz-Lorenz relation. In addition, it could be shown that there is a close relationship between the "true" volume and the refractivity of a molecule, revealing an excellent correlation coefficient R^2 of 0.9645 and a mean absolute deviation of 7.53%.
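For reference, the Lorentz–Lorenz relation invoked above connects the molar refractivity $R_m$ of a compound of refractive index $n$, molar mass $M$ and density $\rho$ with the mean molecular polarizability volume $\alpha$; the display below is the standard textbook form, quoted for orientation rather than taken from the paper:

\[
R_{m} \;=\; \frac{n^{2}-1}{n^{2}+2}\,\frac{M}{\rho}
\;=\; \frac{4\pi}{3}\, N_{A}\,\alpha ,
\qquad\text{so that}\qquad
\alpha \;=\; \frac{3 R_{m}}{4\pi N_{A}} .
\]

Inverting this relation is what allows the polarizability to be obtained indirectly from the group-additivity prediction of the refractivity.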
Community Knowledge, Attitudes and Practices Regarding Onchocerciasis in Rural Villages With a High Epilepsy Prevalence in Mahenge, Tanzania: A Qualitative Study
Dan K. Bhwana, Isolde S. Massawe, Adiel K. Mushi, Pendo Mashili, Luis-Jorge Amaral, Williams Makunde, Bruno P. Mmbando, Robert Colebunders
Subject: Life Sciences, Other Keywords: onchocerciasis; community directed treatment with ivermectin; elimination; epilepsy; focus group discussions; misconceptions
Despite over 20 years of community-directed treatment with ivermectin (CDTI), a high prevalence of onchocerciasis and onchocerciasis-associated epilepsy has been observed in rural villages in Mahenge, Tanzania. Therefore, we assessed the knowledge, attitudes and practices regarding onchocerciasis in four rural villages in the Mahenge area. This was a qualitative study conducted between June and July 2019. Eleven focus group discussions were organized with persons with epilepsy and their caretakers, community resource persons, and community drug distributors (CDDs), and two in-depth interviews were held with district programme coordinators of neglected tropical diseases (NTD). Most participants were aware of the symptoms of onchocerciasis, using local terminologies such as "ukurutu/rough dry skin" and "kuwashwa/itching". A small proportion of people did not take ivermectin during CDTI for fear of adverse reactions such as itching and swelling. Some men believed that ivermectin may decrease libido. Challenges for high CDTI coverage included long walking distances by CDDs to deliver drugs to households, persons being away for farming, low awareness of the disease, and limited supervision by the NTD coordinators. In conclusion, ivermectin uptake in Mahenge should be optimised by continuous advocacy about the importance of taking ivermectin to prevent onchocerciasis-associated morbidity and by improving supervision during CDTI.
MISF2 Encodes an Essential Mitochondrial Splicing Co-factor Required for nad2 mRNA Processing and Embryo Development in Arabidopsis thaliana
Tan-Trung Nguyen, Corinne Best, Sofia Shevtsov, Michal Zmudjak, Martine Quadrado, Ron Mizrahi, Hagit Zer, Hakim Mireau, Oren Ostersetzer-Biran
Subject: Biology, Plant Sciences Keywords: Group II; Intron; Splicing; PPR; Respiration; Complex I; Mitochondria; Embryogenesis; Arabidopsis; Angiosperms.
Mitochondria play key roles in cellular energy metabolism in eukaryotes. Mitochondria of most organisms contain their own genome and specific transcription and translation machineries. The expression of angiosperm mtDNA involves extensive RNA-processing steps, such as RNA trimming, editing, and the splicing of numerous group II-type introns. Pentatricopeptide repeat (PPR) proteins are key players of plant organelle gene expression and RNA metabolism. In the present analysis, we reveal the function of the MITOCHONDRIAL SPLICING FACTOR 2 gene (MISF2, AT3G22670) and show that it encodes a mitochondria-localized PPR protein that is crucial for early embryo-development in Arabidopsis. Molecular characterization of embryo-rescued misf2 plantlets indicates that the splicing of nad2 intron 1 and thus respiratory complex I biogenesis are strongly compromised. Moreover, the molecular function seems conserved between MISF2 protein in Arabidopsis and its orthologous gene (EMP10) in maize, suggesting that the ancestor of MISF2/EMP10 was recruited to function in nad2 processing before the monocot-dicot divergence, ~200 million years ago. These data provide new insights into the function of nuclear-encoded factors in mitochondrial gene expression and respiratory chain biogenesis during plant embryo development.
Fanconi Anemia Patients from an Indigenous Community in Mexico Carry a New Founder Pathogenic Variant in FANCG
Pedro Reyes, Benilde García-deTeresa, Ulises Juárez, Fernando Pérez-Villatoro, Moisés O Fiesco-Roa, Alfredo Rodríguez, Bertha Molina, Maria Teresa Villarreal-Molina, Jorge Melendez-Zajgla, Alessandra Carnevale, Leda Torres, Sara Frias
Subject: Medicine & Pharmacology, Other Keywords: Fanconi anemia; Chromosome instability; FANCG; splicing; founder pathogenic variant; Mixe indigenous group.
Fanconi anemia (FA) is a rare genetic disorder caused by pathogenic variants (PV) in at least 22 genes, which cooperate in the FA/BRCA pathway to maintain genome stability. PV in FANCA, FANCC, and FANCG account for most cases (~90%). This study evaluated the chromosomal, molecular, and phenotypic findings of a novel founder FANCG PV, identified in three patients with FA from the Mixe community of Oaxaca, Mexico. All patients presented chromosomal instability and a homozygous PV, FANCG: c.511-3_511-2delCA, identified by next-generation sequencing analysis. Bioinformatic predictions suggest that this deletion disrupts a splice acceptor site, promoting exon 5 skipping. Analysis of Cytoscan 750K arrays for haplotyping and global ancestry supported the Mexican origin and founder effect of the variant, reaffirming the high frequency of founder PV in FANCG. The degree of bone marrow failure and physical findings (described through the acronyms VACTERL-H and PHENOS) were used to depict the phenotype of the patients. Despite having a similar frequency of chromosomal aberrations and genetic constitution, the phenotype showed a wide spectrum of severity. The identification of a founder PV could support systematic and accurate genetic screening of patients with suspected FA in this population.
Optimised ARG Based Group Activity Recognition for Video Understanding
Pranjal Kumar
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: group activity recognition; graph convolution network; video understanding; video analytics; activity recognition
In this paper, we propose a robust video understanding model for activity recognition by learning the actors' pair-wise correlations and relational reasoning, exploiting spatial and temporal information. In order to measure the similarity between pair appearances and construct an actor relations map, the Zero Mean Normalized Cross-Correlation (ZNCC) and the Zero Mean Sum of Absolute Differences (ZSAD) are proposed to allow the Graph Convolution Network (GCN) to learn how to distinguish group actions. We recommend that MNASNet be used as the backbone to retrieve features. Experiments show a 38.50% and 23.7% reduction in training time in the 2-stage training process, along with a 1.52% improvement in accuracy over traditional methods.
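Since ZNCC is a standard similarity measure, a minimal sketch of how a pair of actor feature vectors could be compared is shown below; the feature extraction and the way the scores feed the GCN are beyond this illustration and are not taken from the paper.

```python
import numpy as np

def zncc(a, b):
    """Zero Mean Normalized Cross-Correlation between two feature vectors.

    Returns a similarity in [-1, 1]; 1 means the mean-removed patterns
    match up to a positive scale factor.
    """
    a = np.asarray(a, dtype=float).ravel()
    b = np.asarray(b, dtype=float).ravel()
    a0, b0 = a - a.mean(), b - b.mean()
    denom = np.linalg.norm(a0) * np.linalg.norm(b0)
    return float(a0 @ b0 / denom) if denom > 0 else 0.0
```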
Tsunami Deposits on a Paleoproterozoic Unconformity? The 2.2 Ga Yerrida Marine Transgression on the Northern Margin of the Yilgarn Craton, Western Australia
Desmond Lascelles, Ryan J. Lowe
Subject: Earth Sciences, Atmospheric Science Keywords: paleoproterozoic; tsunami boulder deposit; marine transgression; Yilgarn Craton; Yerrida Basin; Windplain Group
Large blocks and boulders of banded iron formation and massive hematite, up to 40 x 27 x 6 m and in excess of 10,000 metric tonnes, were detached from outcrop of the Wilgie Mia Formation during the ca 2.20 Ga marine transgression at the base of the Paleoproterozoic Windplain Group, and deposited in a broad band on the wave-cut surface 900 to 1200 m to the east. At the same time, sand and shingle were scoured from the sea floor, leaving remnants only on the western side of the Wilgie Mia Formation and on the eastern sides of the boulders. Evidence suggesting that the blocks were detached and transported and the sea floor scoured by a tsunami bore with a height of at least 40 m is provided by (1) the deposition of the blocks, which indicates transportation by a unidirectional sub-horizontal force, whereas the smaller boulders are randomly oriented; (2) the 900-1200 m separating the BIF outcrop and the blocks; (3) the absence of the basal conglomerate between the blocks; (4) the blocks and boulders resting directly on the wave-cut surface of deeply weathered amphibolites; and (5) the blocks and boulders being surrounded and overlain by fine-grained sandstone of the Windplain Group.
Ain't too Proud to Beg! Effects of Leader's Use of Pride on Groups
Steve Baumgartner, Catherine Daus
Subject: Behavioral Sciences, Applied Psychology Keywords: leadership; pride; authentic pride; hubristic pride; task satisfaction; group cohesion; leader satisfaction
Studies of discrete pride in the workplace are both few and on the rise. We examined what has, to date, been yet unstudied: the impact that a leader's expressions of authentic and hubristic pride can have on the followers at that moment, and on their feelings about their task, leader, and group. Students working in groups building Lego structures rated their perceived leader regarding expressions of pride, both authentic and hubristic. Students who perceived the leader as expressing more authentic pride rated the task, group (satisfaction and cohesion), and leader more positively; while the reverse was generally true for perceptions of expressions of hubristic pride. We found these effects both at the individual level, and at the group level. We also predicted and found moderation for the type of task worked on, creative or detailed. Implications abound for leader emotional labor and emotion management.
The Description of Heat Capacity by the Debye – Mayer – Kelly Hybrid Model
Valery P. Vasiliev, Alex F. Taldrik
Subject: Materials Science, Metallurgy Keywords: Heat Capacity; Entropy; Similarity Method; AIIIBV, AIIBVI Phases; Pure Elements IV Group
The universal Debye – Mayer – Kelly hybrid model is proposed, for the first time, for the description of the heat capacity from 0 K to the melting points of substances within the experimental uncertainty. To describe the heat capacity, in-house software built on the commercial DELPHI-7 environment was used with a 95% confidence level. To demonstrate the suitability of this model, a thermodynamic analysis of the heat capacities of the fourth-group elements and of some compounds of the AIIIBV and AIIBVI phases was carried out; it produced good agreement within the experimental uncertainty. There is no similar model description in the literature. The Similarity Method is a convenient and effective tool for critical analysis of the heat capacities of isostructural phases, which was applied here to diamond-like compounds as an example. Phases with the same sum of the atomic numbers of the elements (Z), such as diamond and B0.5N0.5 (cub) (Z = 6); pure silicon (Si) and Al0.5P0.5 (Z = 14); pure germanium (Ge) and Ga0.5As0.5 (Z = 32); and pure grey tin (alpha-Sn), In0.5Sb0.5, and Cd0.5Te0.5 (Z = 50), have the same experimental heat capacity values in the solid state. The proposed models can be applied to both binary and multicomponent phases, which helps to standardize the physicochemical constants.
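For orientation, the Debye contribution that anchors the low-temperature end of such a hybrid description has the standard form below; the high-temperature part is presumably represented by Maier–Kelley-type polynomial terms, but the exact parameterization of the hybrid model is given in the paper itself, so this display is only a reminder of the standard ingredients and not a quotation of the authors' equations:

\[
C_{V}^{\text{Debye}}(T) \;=\; 9 N k_{B} \left(\frac{T}{\Theta_{D}}\right)^{3}
\int_{0}^{\Theta_{D}/T} \frac{x^{4} e^{x}}{\left(e^{x}-1\right)^{2}}\, dx ,
\qquad
C_{p}^{\text{MK}}(T) \;\approx\; a + bT - \frac{c}{T^{2}} .
\]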
Development and Application of a Patient Group Engagement Prioritization Tool for Use in Medical Product Development
Brian Perry, Carrie Dombeck, Jaye Bea Smalley, Bennett Levitan, David Leventhal, Bray Patrick-Lake, Linda Brennan, Kevin McKenna, Zachary Hallinan, Amy Corneli
Subject: Medicine & Pharmacology, Other Keywords: patient engagement; stakeholder engagement; patient group engagement; prioritization tool; patient engagement activities
Patient group engagement is increasingly used to inform the design, conduct, and dissemination of clinical trials and other medical research activities. However, the priorities of industry sponsors and patient groups differ, and there is currently no framework to help these groups identify mutually beneficial engagement activities. Methods: We conducted 28 qualitative, semi-structured interviews with representatives from research sponsor organizations (n=14) and patient groups (n=14) to determine: 1) how representatives define benefits and investments of patient group engagement in medical product development, and 2) to refine a list of 31 predefined patient group engagement activities. Results: Patient group and sponsor representatives described similar benefits: engagement activities can enhance the quality and efficiency of clinical trials by improving patient recruitment and retention, reduce costs, and help trials meet expectations of regulators and payers. All representatives indicated that investments include both dedicated staff time and expertise, and financial resources. Factors to consider when evaluating benefits and investments were also identified, as were suggestions for clarifying the list of engagement activities. Discussion: Using these findings, we refined the 31 engagement activities to 24 unique activities across the medical product development lifecycle. We also developed a web-based prioritization tool (https://prioritizationtool.ctti-clinicaltrials.org/) to help clinical research sponsors and patient groups identify high-priority engagement activities. Use of this tool can help sponsors and patient groups identify the engagement activities that they believe will provide the most benefit for the least investment and may lead to more meaningful and mutually beneficial partnerships in medical product development.
Pyrochlore-Group Minerals in the Granite-Hosted Katugin Rare-Metal Deposit, Transbaikalia, Russia
Anastasiya E. Starikova, Ekaterina P. Bazarova, Valentina B. Savelieva, Eugene V. Sklyarov, Elena A. Khromova, Sergei V. Kanakin
Subject: Earth Sciences, Geochemistry & Petrology Keywords: pyrochlore-group minerals; fluornatropyrochlore; alkaline granites; Katugin rare-metal deposit; East Transbaikalia
Pyrochlore group minerals are the main raw phases in granitic rocks of the Katugin complex-ore deposit that stores Nb, Ta, Y, REE, U, Th, Zr, and cryolite. They are of three main generations: primary magmatic (I), early postmagmatic (II), and supergene (III) pyrochlores. The primary magmatic phase (generation I) is fluornatropyrochlore with high concentrations of Na2O (to 10.5 wt.%), F (to 5.4 wt.%) and REE2O3 (to 17.1 wt.%) but low CaO (0.6-4.3 wt.%), UO2 (to 2.6 wt.%), ThO2 (to 1.8 wt.%), and PbO (to 1.4 wt.%). Pyrochlore of this type is very rare in nature and limited to a few occurrences, such as the rare-metal deposits of Nechalacho in syenite and nepheline syenite (Canada) and Mariupol in nepheline syenite (Ukraine). It may have crystallized synchronously with or slightly later than the melanocratic minerals (aegirine, biotite, and arfvedsonite) at the late magmatic stage, when Fe from the melt became bound, making the formation of columbite impossible. Second generation pyrochlore formed at the early postmagmatic stage of the Katugin deposit. It differs from that of the first generation in lower Na2O concentrations (2.8 wt.%), relatively low F (4 wt.%), and less occupancy of the A and Y sites at similar contents of other components. Generation III pyrochlore is a product of supergene alteration processes. It is compositionally heterogeneous and contains K, Ba, Pb, Fe, and significant Si concentrations but low Na and F. Its compositions mostly fall within the field of hydro- and kenopyrochlore.
Fuzzy Multicriteria Analysis for Performance Evaluation of Internet of Things Based Supply Chains
Santoso Wibowo, Srimannarayana Grandhi
Subject: Mathematics & Computer Science, Other Keywords: group decision makers; multicriteria analysis; performance evaluation; internet of things; intuitionistic environment
The performance evaluation of Internet of Things (IoT) based supply chains is challenging due to the involvement of multiple decision makers, the multi-dimensional nature of the evaluation process, and the existence of uncertainty and imprecision in the decision making process. To ensure effective decisions are made, this paper presents a fuzzy multicriteria analysis model for evaluating the performance of IoT based supply chains. The inherent uncertainty and imprecision of the performance evaluation process are adequately handled by using intuitionistic fuzzy numbers. A new algorithm is developed for determining the overall performance index for each alternative across all criteria. The development of the fuzzy multicriteria group decision making model provides organizations with the ability to effectively evaluate the performance of their IoT based supply chains for improving their competitiveness. An example is presented for demonstrating the applicability of the model for dealing with real world IoT-based performance evaluation problems.
Almost Fully Secure Lattice-Based Group Signatures with Verifier-Local Revocation
Maharage Nisanasla Sevwandi Perera, Takeshi Koshiba
Subject: Mathematics & Computer Science, Other Keywords: lattice-based group signatures; verifier-local revocation; anonymity; almost-full anonymity; traceability
Efficient member revocation and strong security against attacks are prominent requirements in group signature schemes. Among the revocation approaches, verifier-local revocation is the most flexible and efficient method, since only the verifiers need to be informed about revoked members. The verifier-local revocation technique uses a token system to manage members' status. However, the existing group signature schemes with verifier-local revocability rely on weaker security. On the other hand, existing static group signature schemes rely on a stronger security notion called full-anonymity. Achieving full-anonymity for group signature schemes with verifier-local revocation is quite a challenging task. This paper aims to obtain stronger security for lattice-based group signature schemes with verifier-local revocability, which is closer to full-anonymity. Moreover, this paper delivers a new key-generation method which outputs revocation tokens that are not derived from the users' signing keys. By applying the tracing algorithm given in group signature schemes for static groups, this paper also obtains an efficient tracing mechanism. Thus, we deliver a new group signature scheme with verifier-local revocation that satisfies stronger security based on lattices.
Effectiveness of a Group B outer Membrane Vesicle Meningococcal Vaccine in Preventing Hospitalization from Gonorrhea in New Zealand: a Retrospective Cohort Study
Janine Paynter, Felicity Goodyear-Smith, Jane Morgan, Peter Saxton, Steve Black, Helen Petousis-Harris
Subject: Medicine & Pharmacology, Other Keywords: Gonorrhea; Outer membrane vesicle vaccine; Group B meningococcus; Cohort study; New Zealand
Gonorrhea is a major global public health problem, with multiple drug-resistant strains emerging and no effective vaccine available. This retrospective cohort study aimed to estimate the effectiveness of the New Zealand meningococcal B vaccine against gonorrhea-associated hospitalization. The cohort consisted of individuals born 1984-1999 and residing in New Zealand, who were therefore eligible for meningococcal B vaccination during 2004-2008. Administrative datasets of demographics, customs, hospitalization, education, income tax and immunization were linked using the national Integrated Data Infrastructure. The primary outcome was hospitalization with a primary diagnosis of gonorrhea. Cox's proportional hazards models were applied with a Firth correction for rare outcomes to generate estimates of hazard ratios. Vaccine effectiveness estimates were calculated as 1 minus the hazard ratio, expressed as a percentage. There were 1,143,897 eligible cohort members, with 135 missing information on gender, 16,245 missing ethnicity and/or 197,502 missing deprivation; hence 935,496 were included in the analysis. After adjustment for gender, ethnicity and deprivation, vaccine effectiveness (MeNZB™) against hospitalization caused by gonorrhea was estimated to be 24% (95% CI 1-42%). In conclusion, vaccination with MeNZB™ significantly reduced the rate of hospitalization from gonorrhea. This supports prior research indicating possible cross-protection of this vaccine against gonorrhea acquisition and disease in the outpatient setting.
A Fuzzy Path Selection Strategy for Aircraft Landing on the Carrier
Xichao Su, Yu Wu, Jingyu Song, Peilong Yuan
Subject: Engineering, Control & Systems Engineering Keywords: landing; aircraft carrier; landing path; fuzziness; fuzzy multi-attribute group decision making
Landing is one of the most dangerous tasks in all the operations on the aircraft carrier, and landing safety is very important to the pilot and to flight deck operations. The problem of landing path selection is studied in this paper, as there are several candidate paths corresponding to different situations. A fuzzy path selection strategy is proposed to solve the problem considering the fuzziness of environmental information and human judgment, and the goal is to provide the pilot with a more reasonable decision. The strategy is based on Fuzzy Multi-attribute Group Decision Making (FMAGDM), which has been widely used in industry. Firstly, the background of the path selection problem is given. Then the essential elements of the problem are abstracted to build the conceptual model. A group decision-making method is applied to denote the preference of each decision maker for each alternative route, and the optimal landing path under the current environment is determined taking into account the knowledge and the weight of both decision makers. Experimental studies under different setups, i.e., different environments, are carried out. The results demonstrate that the proposed path selection strategy is valid in different environments and that the optimal landing path corresponding to each environment can be determined.
New Labour's Policies to Influence and Challenge Islam in Contemporary Britain: A Case Study on the National Muslim Women's Advisory Group's Theology Project
Subject: Keywords: Muslim women; Islam; political engagement; National Muslim Women's Advisory Group; extremist ideologies
The creation of the National Muslim Women's Advisory Group (NMWAG) in 2008 by Britain's New Labour Government was part of a strategy which sought to engage different levels of Muslim communities beneath an overarching focus on reducing 'Islamic extremism'. To do so, however, the Government acknowledged that it would need to support Muslim women to overcome some of the constraints it believed were placed on Muslim women in contemporary Britain. Deeming theology and religious interpretation to be one of those constraints, Government saw the need to empower Muslim women to 'influence and challenge' religious and theological discourses as a priority. This article therefore offers a case study on a project that was commissioned by Government that sought to empower Muslim women to 'influence and challenge' theological interpretations in collaboration with the NMWAG. Having gained unprecedented access to the NMWAG, its activities and engagement with Government, this article presents previously unpublished findings from that project to focus on two key themes: Muslim women, their identity and position; and theology, leadership and the participation of women. Having explored these in detail, this article concludes by critically reflecting on the way in which Government engaged and interacted with Muslim women, the role and relative success of the NMWAG and, most importantly, the extent to which the NMWAG was able to 'influence and challenge' interpretations of Islamic theology.
Evaluation of Cloud Services: A Fuzzy Multicriteria Group Decision Making Method
Santoso Wibowo, Hepu Deng, Wei Xu
Subject: Mathematics & Computer Science, Analysis Keywords: performance evaluation; cloud services; group decision making; multicriteria decision making; fuzzy sets
This paper formulates the performance evaluation of cloud services as a multicriteria group decision making problem, and presents a fuzzy multicriteria group decision making method for evaluating the performance of cloud services. Interval-valued intuitionistic fuzzy numbers are used to model the inherent subjectiveness and imprecision of the performance evaluation process. An effective algorithm is developed based on the technique for order preference by similarity to ideal solution method and the Choquet integral operator for adequately solving the performance evaluation problem. An example is presented to demonstrate the applicability of the proposed fuzzy multicriteria group decision making method for solving the multicriteria group decision making problem in real world situations.
Perceptions of Teachers and School Nurses on Child and Adolescent Oral Health
Carl A. Maida, Mavin Marcus, Di Xiong, Paula Ortega-Verdugo, Elizabeth Agredano, Yilan Huang, Linyu Zhou, Steve Y. Lee, Jie Shen, Ron D. Hays, James J. Crall, Honghu Liu
Subject: Medicine & Pharmacology, Dentistry Keywords: focus group; patient reported outcome measures; oral health; education; COVID-19; dental problem
This study reports results of focus groups with school nurses and teachers from elementary, middle, and high schools to explore their perceptions of child and adolescent oral health. Participants included 14 school nurses and 15 teachers (83% Female; 31% Hispanic, 21% White, 21% Asian, 14% African American, and 13% Others). Respondents were recruited from Los Angeles County schools and scheduled by school level for six one-hour focus groups, using Zoom. Audio recordings were transcribed, reviewed, and saved with anonymization of speaker identities. NVivo software was used to facilitate content analysis and identify key themes. The nurses' rate of "Oral Health Education" comments statistically exceeded that of teachers, while teachers had higher rates for "Parental Involvement" and "Mutual Perception." "Need for Care" was perceived to be more prevalent in immigrants to the United States based on student behaviors and complaints. "Access to Care" was seen as primarily the nurse's role. Strong relationships between community clinics and schools were viewed by some as integral to students achieving good oral health. The results suggest dimensions and questions important to item development for oral health surveys of children and parents to address screening, management, program assessment, and policy planning.
Read the Signs: Detecting Early Warning Signals of Interreligious Conflict
Peter Ochs, Essam Fahim, Paola Pinzon
Subject: Social Sciences, Other Keywords: religion; interreligious conflict; science; constative; performative; peacemaking; ethnolinguistics; semiotics; behavioral signals; group behavior
Building on recent directions in religion-related social and political science, our essay addresses a need for location-specific and religion-specific scientific research that might contribute directly to local and regional interreligious peacemaking. Over the past 11 years, our US-Pakistani research team has conducted research of this kind: a social scientific method for diagnosing the probable near-future behavior of religious stakeholder groups toward other groups. Integrating features of ethnography, linguistics, and semiotics, the method enables researchers to read a range of ethno-linguistic signals that appear uniquely in the discourses of religious groups. Examining the results, we observe, firstly, that our religion- and location-specific science identifies features of religious group behavior that are not evident in broader, social scientific studies of religion and conflict; we observe, secondly, that our science integrates constative and performative elements: it seeks facts and it serves a purpose. We conclude that strictly constative, fact-driven sciences may fail to detect certain crucial features of religious stakeholder group behavior.
Confounding Factors Influencing the Kinetic and the Magnitude of Serological Response Following Administration of BNT162b2
Jean-Louis Bayart, Laure Morimont, Mélanie Closset, Grégoire Wieërs, Tatiana Roy, Vincent Gerin, Marc Elsen, Christelle Eucher, Sandrine Van Eeckhoudt, Nathalie Ausselet, Clara David, François Mullier, Jean-Michel Dogné, Julien Favresse, Jonathan Douxfils
Subject: Life Sciences, Biochemistry Keywords: SARS-CoV-2; vaccine; BNT162b2; antibody; serology; kinetic; age; gender; BMI; blood-group.
Background: Little is known about potential confounding factors influencing the humoral response in individuals having received the BNT162b2 vaccine. Methods: Blood samples from 231 subjects were collected before and 14, 28 and 42 days following COVID-19 vaccination with BNT162b2. Anti-Spike Receptor-Binding-Domain protein (anti-Spike/RBD) immunoglobulin G (IgG) antibodies were measured at each time-point. The impact of age, sex, childbearing age status, hormonal therapy, blood group, body mass index and past history of SARS-CoV-2 infection was assessed by multivariable analyses. Results and Conclusions: In naïve subjects, the level of anti-Spike/RBD antibodies gradually increased following administration of the first dose, reaching the maximal response at day 28 and plateauing at day 42. In vaccinated subjects with previous SARS-CoV-2 infection, the plateau was reached sooner (i.e. at day 14). In the naïve population, age had a significant negative impact on anti-Spike/RBD titers at days 14 and 28, while lower levels were observed for males at day 42 when corrected for other confounding factors. BMI as well as B and AB blood groups had a significant impact in various subgroups on the early response at day 14 but not thereafter. No significant confounding factors were highlighted in the previously infected group.
Domain-Specific Stimulation of Executive Functioning in Low-Performing Students with a Roma Background: Cognitive Potential of Mathematics
Iveta Kovalčíková, Jochanan Veerbeek, Bart Vogelaar, Alena Prídavková, Ján Ferjenčík, Edita Šimčíková, Blanka Tomková
Subject: Social Sciences, Accounting Keywords: executive functioning; domain-specific cognitive stimulation; math; low-performing student; Roma ethnic group.
The current study investigated whether a domain-specific intervention targeting maths and executive functions of primary school children with a Roma background would be effective in improving their scholastic performance and executive functioning. In total, 122 students attending Grade 4 of elementary school took part in the project. The study employed a pretest-intervention-training experimental design with three conditions: the experimental condition, an active control group, and a passive control group. The results suggested that both maths performance and executive functions improved over time, with no significant differences between the three conditions. An additional correlational analysis indicated that pretest performance was not related to posttest performance for the children in the experimental and active control groups.
How Collective Intelligent Decisions in Public Policymaking are Made: Case Study of Participatory Budgeting in Kraków
Rafał Olszowski
Subject: Social Sciences, Accounting Keywords: collective intelligence; policymaking; public policy; e-participation; participatory budgeting; cognitive systems; group cognition.
In open, sustainable policymaking, we expect to develop valuable results that will bring us closer to a fairer and more balanced society. One way to involve the public in these processes is to engage them in online e-participation projects. Despite these hopes, empirical analyses show that many e-participation initiatives have failed to deliver the expected benefits. Revealing what actually works in examined projects and what requires improvement would allow for better policy planning in the future. In this article, I attempt to identify and assess the cognitive processes enabling the emergence of collective intelligence (CI) in a single e-participation project. To this end, I developed and tested an evaluation technique combining the MILCS framework for group cognition and the results of empirical research on CI. A case study method based on semi-structured interviews was selected to evaluate a sample participatory budgeting initiative, the Civic Budget of the City of Kraków. Results reveal that most cognitive processes are working satisfactorily, but the real problem is the use of collective memory, which works only to a very limited extent. A guideline for future policymakers should be to develop a shared memory system, to which all community members should have access.
Calculation of the Vapour Pressure of Organic Molecules by Means of a Group-Additivity Method and their Resultant Gibbs Free Energy and Entropy of Vaporization at 298.15K
Subject: Chemistry, General & Theoretical Chemistry Keywords: group-additivity method; vapour pressure; Gibbs free energy of vaporization; entropy of vaporization
The calculation of the vapour pressure of organic molecules at 298.15 K is presented using a commonly applicable computer algorithm based on the group-additivity method. The basic principle of this method rests on the complete breakdown of the molecules into their constituent atoms, further characterized by their immediate neighbour atoms. The group contributions are calculated by means of a fast Gauss-Seidel fitting algorithm using the experimental data of 2036 molecules from literature. A ten-fold cross-validation procedure has been carried out to test the applicability of this method, which confirmed excellent quality for the prediction of the vapour pressure, expressed in log(pa), with a cross-validated correlation coefficient Q2 of 0.9938 and a standard deviation of 0.26. Based on these data, the molecules' standard Gibbs free energy G°vap has been calculated. Furthermore, using their enthalpies of vaporization, predicted by an analogous group-additivity approach published earlier, the standard entropy of vaporization S°vap has been determined and compared with experimental data of 1129 molecules, exhibiting excellent conformance with a correlation coefficient R2 of 0.9598, a standard error of 8.14 J/mol/K and a medium absolute deviation of 4.68%.
A Pre-aggregation Fuzzy Collaborative Intelligence-based Fuzzy Analytic Hierarchy Process Approach for Selecting Alternative Suppliers amid the COVID-19 Pandemic
Toly Chen, Hsin-Chieh Wu
Subject: Keywords: group decision-making; fuzzy analytic hierarchy process; consensus; wafer foundry; COVID-19 pandemic
In the existing group decision-making fuzzy analytic hierarchy process (FAHP) methods, the consensus among experts has rarely been fully reached. To fill this gap, in this study, a pre-aggregation fuzzy collaborative intelligence (FCI)-based FAHP approach is proposed. In the proposed pre-aggregation FCI-based FAHP approach, fuzzy intersection is applied to aggregate experts' pairwise comparison results if these pairwise comparison results overlap. The aggregation result is a matrix of polygonal fuzzy numbers. Subsequently, alpha-cut operations are applied to derive the fuzzy priorities of criteria from the aggregation result. The pre-aggregation FCI-based FAHP approach has been applied to select suitable alternative suppliers for a wafer foundry in Taiwan amid the COVID-19 pandemic. The experimental results revealed that the pre-aggregation FCI-based FAHP approach significantly reduced the uncertainty inherent in the decision-making process by deriving fuzzy priorities with very narrow ranges.
Case Study on Privacy-Aware Social Media Data Processing in Disaster Management
Marc Löchner, Ramian Fathi, David Schmid, Alexander Dunkel, Dirk Burghardt, Frank Fiedrich, Steffen Koch
Subject: Earth Sciences, Atmospheric Science Keywords: disaster management; virtual operation support teams; privacy; data retention; hyperloglog; focus group discussion
Social media data is heavily used to analyze and evaluate situations in times of disasters, and to derive decisions for action from it. A crucial part of the analysis is to avoid unnecessary data retention during that process, in order to prevent subsequent abuse, theft or public exposure of collected datasets and thus protect the privacy of social media users. There are a number of technical approaches to address this problem. One of them is using a cardinality estimation algorithm called HyperLogLog to store data in a privacy-aware structure that cannot be used for purposes other than those originally intended. In this case study, we developed and conducted a focus group discussion with teams of social media analysts, in which we identified challenges and opportunities of working with such a privacy-enhanced social media data structure in place of conventional techniques. Our findings show that, with the exception of training scenarios, deploying HyperLogLog in the data acquisition process does not disrupt the data analysis process. Instead, it improves working with huge datasets owing to the improved characteristics of the resulting data structure.
Some Construction Methods of Aggregation Operators in Decision Making Problems: An Overview
Azadeh Zahedi Khameneh, Adem Kilicman
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: aggregation operators; composite aggregation operators; weighted aggregation operators; transformation; duality; group decision making
Aggregating data is a core task in any discipline dealing with the fusion of information, from knowledge-based systems to decision-making. The purpose of aggregation methods is to convert a list of objects, all belonging to a given set, into a single representative object of the same set, usually by an n-ary function, the so-called aggregation operator. Since the aggregation functions useful for modeling real-life problems are limited, the basic problem is to construct a proper aggregation operator for each situation. During the last decades, a number of construction methods for aggregation functions have been developed to build new classes based on the well-known operators. This paper reviews some of these construction methods that are based on transformation, composition, and weighting rules.
Lorentz Symmetry Group, Retardation, and Galactic Rotation Curves
Asher Yahalom
Subject: Physical Sciences, Astronomy & Astrophysics Keywords: spacetime symmetry; Relativity of space-time; Lorentz Symmetry Group; retardation; Galactic Rotation Curves
The general theory of relativity (GR) is known to be invariant under smooth coordinate transformations (diffeomorphism). This group has a subgroup known as the Lorentz symmetry group, which is manifested in the weak field approximation to GR. The dominant operator in the weak field equation of GR is thus the d'Alembert (wave) operator, which has a retarded potential solution. Galaxies are huge physical systems having dimensions of many tens of thousands of light years. Thus any change at the galactic center will be noticed at the rim only tens of thousands of years later. Those retardation effects are neglected in present day galactic modelling used to calculate rotational velocities of matter in the rims of the galaxy and surrounding gas. The significant differences between the predictions of Newtonian instantaneous action at a distance and observed velocities are usually explained by either assuming dark matter or by modifying the laws of gravity (MOND). In this paper we show that, by taking general relativity seriously and not neglecting retardation effects, one can explain the radial velocities of galactic matter in the M33 galaxy without postulating dark matter.
Mineralogical and Geochemical Constraints on the Origin and Late-Stage Crystallization History of the Breivikbotn Silicocarbonatite, Seiland Igneous Province in Northern Norway: Prerequisites for Zeolite Deposits in Carbonatite Complexes
Dmitry Zozulya, Kåre Kullerud, Erling Ravna, Yevgeny Savchenko, Ekaterina Selivanova, Marina Timofeeva
Subject: Earth Sciences, Geochemistry & Petrology Keywords: : silicocarbonatite; melteigite; calcite; nepheline; zeolite group minerals; garnet; crystal fractionation; Breivikbotn; Northern Norway
The present work reports new mineralogical and whole rock geochemical data from the Breivikbotn silicocarbonatite (Seiland igneous province, North Norway), allowing conclusions to be drawn concerning its origin and the role of late fluid alteration. The rock shows a rare mineral association: calcite + pyroxene + amphibole + zeolite group minerals + garnet + titanite, with apatite, allanite, magnetite and zircon as minor and accessory minerals, and it is classified as silicocarbonatite. Calcite, titanite and pyroxene (Di36-46 Acm22-37 Hd14-21) are primarily magmatic minerals. Amphibole of hastingsitic composition has formed after pyroxene at a late-magmatic stage. Zeolite group minerals (natrolite, gonnardite, Sr-rich thomsonite-(Ca)) were formed during hydrothermal alteration of primary nepheline by fluids/solutions with high Si-Al-Ca activities. Poikilitic garnet (Ti-bearing andradite) has inclusions of all primary minerals, amphibole and zeolites, and presumably crystallized metasomatically during a late metamorphic event (Caledonian orogeny). The whole-rock chemical composition of the silicocarbonatite differs from the global average of calciocarbonatites in its elevated silica, aluminium, sodium and iron, but shows comparable contents of trace elements (REE, Sr, Ba). Trace element distributions indicate a within-plate tectonic setting for the carbonatite. The spatial proximity of carbonatite and alkaline ultramafic rock (melteigite), and the presence of "primary nepheline" in carbonatite, together with the trace element distributions, indicate that the carbonatite was derived from crystal fractionation of a parental carbonated foidite magma. The main prerequisites for the extensive formation of zeolite group minerals in silicocarbonatite are revealed.
The User's Perspective on Home Energy Management Systems
Ad Straub, Ellard Volmer
Subject: Behavioral Sciences, Other Keywords: Energy consumption, Energy savings, Home Energy Management System (HEMS), Homeowners, Target group segmentation
In contrast to physical sustainable measures carried out in homes, such as insulation, the installation of a Home Energy Management System (HEMS) has no direct and immediate energy-saving effect. A HEMS gives insight into resident behaviour regarding energy use. When this is linked to the appropriate feedback, the resident is in a position to change his or her behaviour. This should result in reduced gas and/or electricity consumption. The aim of our study is to contribute towards the effective use of home energy management systems (HEMS) by identifying types of homeowners in relation to the use of HEMS. The research methods used were a literature review and the Q-method. A survey using the Q-method was conducted among 39 owners of single-family homes in various Rotterdam neighbourhoods. In order to find shared views among respondents, a principal component analysis (PCA) was performed. Five different types of homeowner could be distinguished: the optimists, the privacy-conscious, the technicians, the sceptics, and the indifferent. Their opinions vary as regards the added value of a HEMS, what characteristics a HEMS should have, how much confidence they have in the energy-saving effect of such systems, and their views on the privacy and safety of HEMS. The target group classification can be used as input for how local stakeholders, e.g. a municipality, can offer a HEMS that is in line with the wishes of the homeowner.
Geometric Theory of Heat from Souriau Lie Groups Thermodynamics and Koszul Hessian Geometry: Applications in Information Geometry for Exponential Families
Frédéric Barbaresco
Subject: Physical Sciences, Mathematical Physics Keywords: Lie Group Thermodynamics; Moment map; Gibbs Density; Gibbs Equilibrium; Maximum Entropy; Information Geometry; Symplectic Geometry; Cartan-Poincaré Integral Invariant; Geometric Mechanics; Euler-Poincaré Equation; Fisher Metric; Gauge Theory; Affine Group
We introduce the Symplectic Structure of Information Geometry based on Souriau's Lie Group Thermodynamics model, with a covariant definition of Gibbs equilibrium via invariances through co-adjoint action of a group on its moment space, defining physical observables like energy, heat, and moment as pure geometrical objects. Using the Geometric (Planck) Temperature of the Souriau model and the Symplectic cocycle notion, the Fisher metric is identified as a Souriau Geometric Heat Capacity. The Souriau model is based on the affine representation of Lie groups and Lie algebras, which we compare with Koszul's works on G/K homogeneous spaces and the bijective correspondence between the set of G-invariant flat connections on G/K and the set of affine representations of the Lie algebra of G. In the framework of Lie Group Thermodynamics, an Euler-Poincaré equation is elaborated with respect to thermodynamic variables, and a new variational principle for thermodynamics is built through an invariant Poincaré-Cartan-Souriau integral. The Souriau-Fisher metric is linked to the KKS (Kostant-Kirillov-Souriau) 2-form that associates a canonical homogeneous symplectic manifold to the co-adjoint orbits. We apply this model in the framework of Information Geometry for the action of an affine group on exponential families, and provide some illustrations of use cases for multivariate Gaussian densities. Information Geometry is presented in the context of the seminal work of Fréchet and his Clairaut-Legendre equation. The Souriau model of Statistical Physics is validated as compatible with the Balian gauge model of thermodynamics. We recall the precursor work of Casalis on affine group invariance for natural exponential families.
Modeling Community Residents' Exercise Adherence and Life Satisfaction: An Application of the Influence of the Reference Group
Huimin Song, Wei Zeng, Tingting Zeng
Subject: Physical Sciences, Other Keywords: the reference group theory; life satisfaction; exercise adherence; personal investment; strategic and cultural fit
To expand the application area of the reference group and enrich theoretical research on exercise, this study examines, based on the Stimulus-Organism-Response (S-O-R) framework, the external factors that motivate adherence to exercise. Taking the reference group and strategic and cultural fit as the main stimuli, and personal investment and life satisfaction as mediating variables, this study tries to explore the influence of external stimuli on residents' exercise behavior. In order to enrich the sample size, two surveys of 734 Chinese residents in two cities (Xiamen vs. Fuzhou) were conducted using factor analyses, regression analysis, and T-test analysis. The results indicated that the reference group and strategic and cultural fit, as external stimuli, have an impact on residents' personal investment, life satisfaction and exercise adherence, and that personal investment and life satisfaction, as the organism, have an impact on residents' exercise adherence. Personal investment and life satisfaction play a chain mediating role between the reference group and exercise adherence, and between strategic and cultural fit and exercise adherence. Moreover, the T-test determined the differences between Xiamen and Fuzhou residents' exercise adherence and life satisfaction. Residents' surroundings affect their exercise behavior and life satisfaction. These findings have implications for policymaking aimed at promoting national exercise, which could gradually improve residents' physical fitness, particularly in light of the current coronavirus emergency.
Online Prediction of Lead Seizures from iEEG Data
Hsiang-Han Chen, Han-Tai Shiao, Vladimir Cherkassky
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: iEEG; non-stationarity; lead seizure; seizure prediction; support vector machines; unbalanced classification; group learning
We describe a novel system for online prediction of lead seizures from long-term intracranial electroencephalogram (iEEG) recordings for canines with naturally occurring epilepsy. This study adopts a new specification of lead seizures, reflecting the strong clustering of seizures in observed data. This clustering results in fewer lead seizures (~7 lead seizures per dog), and hence new challenges for online seizure prediction, which are addressed in the proposed system. In particular, the machine learning part of the system is implemented using the Group Learning method suitable for modeling sparse and noisy seizure data. In addition, several modifications of the proposed system are introduced to cope with the non-stationarity of the noisy iEEG signal. They include: (1) periodic re-training of the SVM classifier using the most recent training data; (2) removing samples with noisy labels from the training data; (3) introducing a new adaptive post-processing technique for combining many predictions made for 20-second windows into a single prediction for a 4-hour segment. Application of the proposed system requires only 2 lead seizures for training the initial model, and results in high prediction performance for all four dogs (with mean 0.84 sensitivity, 0.27 time-in-warning, and 0.78 false-positive rate per day). The proposed system achieves accurate prediction of lead seizures during long-term test periods, 3–16 lead seizures during a 169–364 day test period, whereas earlier studies did not differentiate between lead vs. non-lead seizures and used much shorter test periods (~few days long).
Modified Gravity and Dark Matter
Syed Abbas, Nasim Akhtar, Danish Alam
Subject: Physical Sciences, General & Theoretical Physics Keywords: Modified Gravity; quantum mechanics; Galilei group; exchange forces; Bargmann Superselection rule; neutron-proton Majorana interaction; Dark Matter
At present there is a renewed interest in theories of "modified" gravity. Here, under a more drastic modification enforced by the Galilei group, we obtain a completely new gravitational structure, which exists in addition to the already available general relativity of today. Correlated with this, we show that in addition there is a new "modified" quantum mechanics, inasmuch as it exists as an independent and new "pure" non-relativistic quantum mechanics, which has no relativistic counterpart. This is in addition to the present quantum mechanics, where the relativistic and non-relativistic structures are counterparts of each other. The above holds, firstly, due to the correlation between the Galilei group and quantum mechanics. These mathematical conclusions are consolidated by the fact that there exists a physical Majorana interaction between each neutron-proton pair in nuclei. Galilei invariance of the Majorana exchange in the Majorana interaction shows that the mass here is of pure gravitational nature, and is immune to the other three forces. This makes an amazing connection between the gravitational force and quantum mechanics. This pure gravitational mass would manifest itself as the dark matter of the universe. It is our new modified gravity that generates the dark matter.
A Multi-Level System of Performance Evaluation Using Diagnosis-Related Groups for Cost Containment
Shuguang Lin, Paul Rouse, Fan Zhang, Ying-Ming Wang
Subject: Social Sciences, Accounting Keywords: Cost containment; Performance Evaluation; Multi-level System; Diagnosis-related Group (DRG); Health system sustainability
This study aims to develop a performance evaluation system that can facilitate performance evaluation at region, hospital, and department levels to enable better cost management for sustainable development. A multi-level system of performance evaluation informs a hierarchical assessment of cost management from regions to hospitals to departments using diagnosis-related groups (DRGs). Various metrics are developed employing the variances between targets and actuals, where targets are determined from two perspectives: benchmarking using external regional prices and change management using internal data. Targets for the latter are statistically based and specifically incorporate variability. The model is applied to two hospitals, twenty departments, nine DRGs and 1071 inpatients. The analyses indicate that the approach can provide a practical evaluation tool that allows for particular characteristics at multiple levels. The system provides macro-micro and external-internal perspectives on performance, enabling high-level variances to be decomposed, thereby identifying sources of performance variability and financial impact.
Leanne Dong
Gina Cody School of Engineering and Computer Science, 1455 De Maisonneuve Blvd., W. Montreal, QC H3G 1M8, Canada
Corresponding author: Leanne Dong
Received November 2019; Revised June 2020; Early access December 2020; Published October 2021
In this paper we prove that the stochastic Navier-Stokes equations with stable Lévy noise generate a random dynamical system. Then we prove the existence of a random attractor for the Navier-Stokes equations on 2D spheres under (finite dimensional) stable Lévy noise. We also deduce the existence of a Feller Markov invariant measure.
Keywords: Random attractors, random dynamical systems, stochastic Navier-Stokes, unit spheres, stable Lévy noise, Feller Markov invariant measure.
Mathematics Subject Classification: Primary: 60H15, 35R60; Secondary: 37H10, 34F05.
Citation: Leanne Dong. Random attractors for stochastic Navier-Stokes equation on a 2D rotating sphere with stable Lévy noise. Discrete & Continuous Dynamical Systems - B, 2021, 26 (10) : 5421-5448. doi: 10.3934/dcdsb.2020352
Wei-Xin Yao*, Dan Yang**, Gui-Fu Lu**, and Jun Wang**
A Fast Rough Mode Decision Algorithm for HEVC
Abstract: HEVC is the high efficiency video coding standard, which provides better coding efficiency than previous video coding standards, but at the same time its computational complexity increases drastically. Thirty-five intra-prediction modes are defined in HEVC, compared with nine intra-prediction modes in H.264/AVC. This paper proposes a fast rough mode decision (RMD) algorithm which uses the smoothness of the up-reference pixels and the left-reference pixels to decrease the computational complexity. A three-step search method is implemented in the RMD process. Experimental results show that, compared with HM13.0, the proposed algorithm saves 39.7% of the encoding time, while the Bjontegaard delta bitrate (BDBR) increases slightly by 1.35% and the Bjontegaard delta peak signal-to-noise ratio (BDPSNR) loss is negligible.
Keywords: HEVC, Intra Prediction, Rough Mode Decision, Video Coding
The high efficiency video coding (HEVC) standard uses various coding tools to enhance compression performance. Among them, intra prediction introduces 35 prediction modes and a flexible coding unit (CU) size based on a quadtree. HEVC defines the coding tree unit (CTU), whose size is 64×64. As the root of the coding structure, a CTU is divided into smaller CUs. A CU is further divided into prediction units (PUs) and transform units (TUs) [1].
To decide the optimal prediction mode and CU size, HEVC makes use of rate distortion optimization (RDO). However, this increases the computational burden of HEVC compared with H.264. At present, research is focused on fast intra-prediction methods, which fall into two categories: fast intra prediction mode decision (FIPMD) algorithms and fast CU or PU size decision algorithms [2].
FIPMD algorithms decrease the number of candidate modes before the RD cost is computed. In [3], the authors proposed an improved FIPMD algorithm based on gradients: the prediction mode is decided by the texture direction, which is denoted by the intensity gradient. In [4], a progressive rough mode search (PRMS) based on the calculation of the Hadamard cost is proposed, and rate-distortion optimized quantization (RDOQ) is applied only to the few effective candidates selected by PRMS. In [5], the edge information of the PU is also used to speed up the selection of the best mode. In order to obtain the minimal direction differences, Wang and Siu [6] made use of the features of every directional prediction mode to calculate the strength of directional errors in the initial spatial domain. Na et al. [7] made use of spatial correlation to detect edges, and the candidate modes are adaptively chosen from the edge map. Lee et al. [8] analyzed the probability distribution of local binary patterns to obtain textural information about the most probable mode.
Fast CU or PU size decision algorithms skip unnecessary CU size partitions [9-14]. In [9], the proposed method makes use of motion homogeneity to decide the CU depth range, so unnecessary depth levels do not need to be traversed. In [10], the CU depth decision is terminated early according to regularly updated statistical parameters. In [11], a novel CU size decision strategy based on local and global edge complexities is presented. In order to reduce the complexity of screen content coding, Duanmu et al. [12] proposed a machine learning approach based on CU statistics and sub-CU homogeneity. Yu et al. [13] adopted a convolutional neural network to reduce the number of CU/PU candidate modes, which decreases the complexity of a real-time hardwired encoder. The CU size decision can also be terminated early according to the depths of adjacent CUs. Based on the correlation between the current-layer mode and the higher-layer mode of a PU, a new fast PU mode decision algorithm is proposed in [14].
In this paper, we analyze the reference pixels of the PU to exclude some angular intra-prediction modes before intra prediction. The number of candidate modes is decreased from 35 to 19, and the three-step search method is then applied to these 19 modes. The proposed algorithm has many advantages, such as easy implementation, good performance, and high robustness.
The rest of the paper is arranged as follows. A brief review of the intra coding process is given in Section 2. In Section 3, the fast rough mode decision algorithm is presented. Experiments and the corresponding result analysis of the proposed algorithm are presented in Section 4. Finally, some conclusions are given in Section 5.
2. INTRA-PREDICTION IN HEVC
HEVC makes use of a quadtree structure to split a large coding unit (LCU) into CUs with sizes ranging from 8×8 to 64×64. The numbers in Fig. 1 denote depth levels: depths 3, 2, 1, and 0 correspond to CU sizes of 8×8, 16×16, 32×32, and 64×64, respectively. A CU is divided into PUs, which are used as the basic units for intra-prediction. The predicted pixels are obtained by linear interpolation from the adjacent decoded pixels. The two PU partition sizes are 2M×2M and M×M; therefore, the PU sizes range from 4×4 to 64×64.
LCU partition and its corresponding quadtree.
A CU is also divided into TUs according to a quadtree structure. TUs are used for transformation and quantization, and TU sizes range from 4×4 to 32×32. Fig. 2 shows the process of dividing a 32×32 CU into PUs and TUs.
The process of dividing a CU of 32×32 size into PU and TU.
2.1 Rough Mode Decision
HEVC supports 35 intra-prediction modes, which are shown in Fig. 3. Mode indices 0 and 1 represent the planar and DC modes, and indices 2 to 34 represent the 33 angular prediction modes. The N most probable candidate modes are obtained by rough mode decision (RMD). In the RMD process, the Hadamard cost and bit cost of the 35 prediction modes are computed. N depends on the PU size: it is 8, 8, 3, 3, and 3 for PU sizes of 4×4, 8×8, 16×16, 32×32, and 64×64, respectively. The N modes with the lowest Hadamard cost, defined in Eq. (1) [15], form the candidate subset.
[TeX:] $$H_{cost}=\mathrm{SATD}+\lambda R_{mode}$$
where SATD denotes the sum of absolute values of the Hadamard-transformed residual signal (the Hadamard transform being a generalized Fourier transform), Rmode represents the bit cost of signaling the prediction mode, and λ is the Lagrange multiplier.
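As a rough illustration of Eq. (1), the following sketch computes SATD for a single 4×4 block using an unnormalized Hadamard transform and adds the mode-bit term; the scaling convention, λ value, and function names are assumptions for illustration, not the HM reference implementation.

```python
# Illustrative sketch of Eq. (1), not HM reference code. The unnormalized
# 4x4 Hadamard transform and the naming are assumptions for illustration.
import numpy as np

H4 = np.array([[1,  1,  1,  1],
               [1, -1,  1, -1],
               [1,  1, -1, -1],
               [1, -1, -1,  1]])  # order-4 Hadamard matrix (Sylvester form)

def satd_4x4(orig, pred):
    """Sum of absolute Hadamard-transformed differences of one 4x4 block."""
    residual = orig.astype(np.int32) - pred.astype(np.int32)
    transformed = H4 @ residual @ H4.T      # 2-D Hadamard transform
    return int(np.abs(transformed).sum())

def hadamard_cost(orig, pred, mode_bits, lam):
    """H_cost = SATD + lambda * R_mode."""
    return satd_4x4(orig, pred) + lam * mode_bits
```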
Thirty-five intra-prediction modes in HEVC.
After the RMD process, RDO is applied to the candidate subset to select the optimal prediction mode for the PU. The mode with the least RDO cost, defined in Eq. (2), is selected.
[TeX:] $$RDO_{cost}=\mathrm{SSE}+\lambda R$$
where SSE represents the sum of squared differences between the predicted and original PU, and R and λ denote the bit rate of the encoding process and the Lagrange multiplier, respectively.
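A corresponding sketch of Eq. (2) is shown below; in a real encoder R would come from actual entropy coding of the residual and mode, so the argument here is only a stand-in.

```python
# Illustrative sketch of Eq. (2); 'rate_bits' stands in for the bits produced
# by actual entropy coding in a real encoder.
def rdo_cost(orig, pred, rate_bits, lam):
    sse = float(((orig.astype(float) - pred.astype(float)) ** 2).sum())
    return sse + lam * rate_bits
```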
2.2 Most Probable Modes
Zhao et al. [16] proposed a fast intra-prediction mode decision. Owing to spatial correlation, the most probable modes (MPMs) of the current PU are strongly correlated with the prediction modes of the neighboring PUs, so the optimal prediction mode of a PU is very likely to be among the MPMs; this probability varies little across different types of test sequences. The MPMs are inserted into the candidate subset if the modes of the above and left neighboring PUs are not already included in it, and RDO is then applied to the N+MPM modes.
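A minimal sketch of how the candidate list could be assembled before RDO, following the description above; the function and variable names are hypothetical.

```python
# Hypothetical sketch: append the MPMs of the above and left neighbouring PUs
# to the N modes chosen by RMD, avoiding duplicates, before running RDO.
def build_rdo_candidates(rmd_best, mpm_above, mpm_left):
    candidates = list(rmd_best)
    for mpm in (mpm_above, mpm_left):
        if mpm is not None and mpm not in candidates:
            candidates.append(mpm)
    return candidates

# e.g. build_rdo_candidates([8, 9, 10], mpm_above=26, mpm_left=10) -> [8, 9, 10, 26]
```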
3. Fast Rough Mode Decision
However, all 35 prediction modes are evaluated in the RMD process, which is time-consuming. A fast RMD algorithm is therefore needed.
Three cases of reference pixels for the prediction block: (a) all the same reference pixels, (b) the same up-reference pixels and the same left-reference pixels, and (c) different reference pixels.
Fig. 4 shows the reference pixels for a 4×4 block. When all the reference pixels are identical, the current PU is a smooth block; planar mode is selected and the other modes are skipped. Planar mode obtains the PU prediction values by averaging linear interpolations in the horizontal and vertical directions, whereas DC mode uses the mean of the reference pixels. Both DC and planar modes are suitable for blocks with smooth texture, but planar mode is more appropriate for blocks with gradient texture because it maintains the maximum continuity at block boundaries. We skip unnecessary modes based on the smoothness of the reference pixels, which is defined as follows [17].
[TeX:] $$Var_{up}=\frac{1}{2N+1} \sum_{K=0}^{2N}\left(f_{K}^{up}-U_{up}\right)^{2}$$
[TeX:] $$Var_{left}=\frac{1}{2N+1} \sum_{K=0}^{2N}\left(f_{K}^{left}-U_{left}\right)^{2}$$
where Uup denotes the average of the up-reference pixels, Uleft denotes the average of the left-reference pixels, N represents the size of the current block, [TeX:] $$f_{K}^{up}$$ denotes the up-reference pixel with index K, and [TeX:] $$f_{K}^{left}$$ denotes the left-reference pixel with index K. If Varup is equal to Varleft, planar mode is selected for the current block. If Varup is smaller than Varleft, the up-reference pixels are smoother than the left-reference pixels, and the RMD process is performed over prediction modes 0–18.
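The following sketch implements Eqs. (3) and (4) and the resulting restriction of the RMD search range; the branch for Var_up greater than Var_left (planar, DC and modes 18–34) is our assumption by symmetry, since the text states only the opposite case explicitly.

```python
# Sketch of Eqs. (3)-(4) and the mode-range restriction. The Var_up > Var_left
# branch (planar, DC and modes 18-34) is assumed by symmetry with the stated case.
import numpy as np

def reference_variances(up_ref, left_ref):
    """Var_up and Var_left over the 2N+1 up/left reference pixels."""
    up_ref = np.asarray(up_ref, dtype=float)
    left_ref = np.asarray(left_ref, dtype=float)
    return ((up_ref - up_ref.mean()) ** 2).mean(), ((left_ref - left_ref.mean()) ** 2).mean()

def select_mode_range(up_ref, left_ref):
    var_up, var_left = reference_variances(up_ref, left_ref)
    if var_up == var_left:
        return [0]                               # smooth block: planar only
    if var_up < var_left:
        return list(range(0, 19))                # planar, DC and angular modes 2-18
    return [0, 1] + list(range(18, 35))          # assumed symmetric case: 19 modes
```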
The three-step search is performed over the modes [TeX:] $$0, 1, 2+4\delta$$, where [TeX:] $$\delta \in[0,4]$$ or [TeX:] $$\delta \in[4,8]$$. The candidate mode subset is first selected coarsely with step size 4 and then refined with step sizes 2 and 1. Taking a 16×16 PU as an example, suppose prediction modes 0–18 are selected according to the smoothness of the reference pixels. First, a search with step size 4 is performed over mode indices 0–18, giving the initial candidate subset; the Hadamard costs are computed, and the ascending cost order is {10, 6, 14, 18, 2, 1, 0}, from which the three best modes {10, 6, 14} are kept. Second, the angular neighbors at distance two are added to {10, 6, 14}, the subset is updated to {8, 10, 6, 4, 12, 14, 16, 18, 2, 1, 0}, and the three best modes {8, 10, 6} are kept. Finally, the angular neighbors at distance one are added to {8, 10, 6}, the subset is updated to {8, 9, 10, 7, 6, 11, 5, 4, 12, 14, 16, 18, 2, 1, 0}, and the RDO process is applied to the three best modes {8, 9, 10}. The three-step search process is shown in Fig. 5.
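A compact sketch of the three-step refinement described above; the cost function stands in for the Hadamard cost of Eq. (1), and restricting neighbours to the allowed range is an assumption.

```python
# Sketch of the three-step search. 'cost(mode)' stands in for the Hadamard
# cost of Eq. (1); limiting neighbours to the allowed range is an assumption.
def three_step_search(allowed_modes, cost, keep=3):
    allowed = set(allowed_modes)

    # Step 1: coarse search with step size 4 over the angular modes, plus planar/DC.
    coarse = [m for m in allowed if m in (0, 1) or (m - 2) % 4 == 0]
    subset = sorted(coarse, key=cost)[:keep]

    # Step 2: add angular neighbours at distance 2, re-rank, keep the best three.
    step2 = set(subset)
    for m in subset:
        step2.update(n for n in (m - 2, m + 2) if n in allowed and n >= 2)
    subset = sorted(step2, key=cost)[:keep]

    # Step 3: add angular neighbours at distance 1; the best three go on to RDO.
    step3 = set(subset)
    for m in subset:
        step3.update(n for n in (m - 1, m + 1) if n in allowed and n >= 2)
    return sorted(step3, key=cost)[:keep]
```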
The three step search method: (a) the first step, (b) the second step, and (c) the third step.
4. Experimental Results
The proposed fast mode decision is implemented in the HM13.0 reference software. The experimental environment is an Intel Core i7-6500 at 2.5 GHz with 4 GB of RAM. We use 100 frames of each video sequence in the experiments. The test sequences, provided by JCT-VC, cover five classes, and four QPs (37, 32, 27, 22) are tested. To evaluate the performance of the proposed algorithm, we measure the Bjontegaard delta bitrate (BDBR), the Bjontegaard delta peak signal-to-noise ratio (BDPSNR), and the encoding time saving. BDBR and BDPSNR represent average differences, and the encoding time saving represents the total time reduction in percentage compared with HM13.0, as defined in Eq. (5).
[TeX:] $$\operatorname{Time}(\%)=\frac{T_{HM}-T_{prop}}{T_{HM}} \times 100 \%$$
where THM and Tprop denote the encoding times required by HM13.0 and the proposed algorithm, respectively.
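For clarity, Eq. (5) amounts to the following one-line computation (the timings in the example are hypothetical):

```python
def time_saving_percent(t_hm, t_prop):
    """Encoding time saving (%) relative to HM13.0, Eq. (5)."""
    return (t_hm - t_prop) / t_hm * 100.0

# e.g. time_saving_percent(1000.0, 650.0) -> 35.0
```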
Table 1 compares the experimental results of the Gao algorithm and the proposed algorithm. The proposed fast rough mode decision algorithm saves between 30.0% and 39.7% of encoding time, while increasing BDBR by 0.56% to 2.64% and decreasing BDPSNR by 0.01 to 1.01 dB. The Gao algorithm [18] increases BDBR less than our algorithm, but our algorithm saves more time. The BDBR increase and BDPSNR loss are negligible.
Comparison of coding performance (%) of two different algorithms

                                        Proposed algorithm       Gao algorithm
Class               Sequence            BDBR   BDPSNR   T        BDBR   BDPSNR   T
Class A (2560×1600) Traffic             1.10   -0.07    35.1     0.92   -0.05    27.3
                    PeopleOnStreet      0.70   -0.05    38.2     0.91   -0.06    24.7
Class B (1920×1080) Kimono1             0.56   -0.06    30.0     1.31   -0.04    24.1
                    Cactus              0.81   -0.07    34.2     1.07   -0.03    24.2
Class C (832×480)   BasketballDrill     2.01   -0.07    38.4     1.21   -0.05    28.6
                    BQMall              0.85   -0.01    31.5     0.93   -0.08    34.2
Class D (416×240)   BasketballPass      1.98   -0.12    39.7     1.20   -0.08    32.5
                    BQSquare            2.64   -0.22    36.6     2.78   -0.18    30.1
Class E (1280×720)  FourPeople          1.91   -0.86    30.4     1.10   -0.06    22.3
                    Johnny              0.94   -1.01    35.7     1.34   -0.06    29.7
Average                                 1.35   -0.25    34.98    1.28   -0.07    27.8
Fig. 6 shows that the RD performance of the proposed algorithm is close to that of HM13.0. The encoding efficiency of the proposed algorithm is approximately equal to that of HM13.0, while the proposed algorithm requires considerably less encoding time.
Comparison of the RD curve of the proposed algorithm and HM13.0: (a) basketball and (b) people on the street.
5. Conclusion
This paper proposes a fast RMD algorithm that combines two methods. First, we compare the upper and left reference pixels to decrease the number of candidate modes from 35 to 19. Second, a three-step search is performed over the 19 candidate modes to reduce computational complexity. As a result, only 12–15 modes are evaluated in the RMD process instead of 35. Our algorithm saves a substantial amount of encoding time while the RD performance remains approximately unchanged.
This work is supported by the National Natural Science Foundation of China (No. 61572033) and Key Projects of Natural Science Research Fund of Anhui Province (No. KJ2015A311), and Provincial Natural Science Research General Project of Anhui higher Education Promotion Plan in 2014 (No. TSKJ2014B07).
Wei-Xin Yao
She received M.S. degree in communication and information system from Gui Zhou University, Guiyang, China, in 2007. She is currently a Lecturer with School of Electrical Engineering, Anhui Polytechnic University. Her research interests include video coding and image processing.
Dan Yang
He received M.S. degree in computer application technology from Gui Zhou University, Guiyang, China, in 2007. He is currently an Associate Professor with School of Computer Science and Technology, Anhui Polytechnic University. His research interests include video coding and image processing.
Gui-Fu Lu
He received the B.S. degree in 1997 from Hefei University of Technology in China, the M.S. degree in 2004 from Hangzhou Institute of Electronics Engineering, and the Ph.D. degree in 2012 from Nanjing University of Science and Information. Since 2004, he has been teaching in the School of Computer Science and Information, Anhui Polytechnic University, Wuhu, China. His research interests include computer vision, digital image processing, and pattern recognition.
Jun Wang
He was born in 1975. He is currently an Associate Professor with the School of Computer Science and Technology, Anhui Polytechnic University. His research interests include image processing, machine vision, and intelligent detection.
1 M. Ramezanpour and F. Zargari, "Fast CU size and prediction mode decision method for HEVC encoder based on spatial features," Signal, Image and Video Processing, vol. 10, no. 7, pp. 1233-1240, 2016.doi:[[[10.1007/s11760-016-0885-6]]]
2 G. J. Sullivan, J. R. Ohm, J. W. Han, T. Wiegand, "Overview of the high efficiency video coding (HEVC) standard," IEEE Transactions on Circuits and Systems for Video Technology, vol. 22, no. 12, pp. 1649-1668, 2012.doi:[[[10.1109/tcsvt.2012.2221191]]]
3 Y. Zhang, Z. Li, B. Li, "Gradient-based fast decision for intra prediction in HEVC," in Proceedings of 2012 Visual Communications and Image Processing, San Diego, CA, 2012;pp. 1-6. doi:[[[10.1109/VCIP.2012.6410739]]]
4 H. Zhang, Z. Ma, "Fast intra mode decision for high efficiency video coding (HEVC)," IEEE Transactions on Circuits and Systems for Video Technology, vol. 24, no. 4, pp. 660-668, 2013.doi:[[[10.1109/tcsvt.2013.2290578]]]
5 T. L. da Silva, L. V. Agostini, L. A. da Silva Cruz, "Fast HEVC intra prediction mode decision based on EDGE direction information," in Proceedings of 2012 Proceedings of the 20th European Signal Processing Conference (EUSIPCO), Bucharest, Romania, 2012;pp. 1214-1218. custom:[[[]]]
6 L. L. Wang, W. C. Siu, "H. 264 fast intra mode selection algorithm based on direction difference measure in the pixel domain," in Proceedings of 2009 IEEE International Conference on Acoustics, Speech and Signal Processing, Taipei, Taiwan, 2009;pp. 1037-1040. doi:[[[10.1109/ICASSP.2009.4959764]]]
7 S. Na, W. Lee, K. Yoo, "Edge-based fast mode decision algorithm for intra prediction in HEVC," in Proceedings of 2014 IEEE International Conference on Consumer Electronics (ICCE), Las Vegas, NV, 2014;pp. 11-14. doi:[[[10.1109/ICCE.2014.6775887]]]
8 J. H. Lee, K. S. Jang, B. G. Kim, S. Jeong, J. S. Choi, "Fast intra mode decision algorithm based on local binary patterns in High Efficiency Video Coding (HEVC)," in Proceedings of 2015 IEEE International Conference on Consumer Electronics (ICCE), Las Vegas, NV, 2015;pp. 270-272. doi:[[[10.1109/ICCE.2015.7066409]]]
9 L. Shen, Z. Liu, X. Zhang, W. Zhao, Z. Zhang, "An effective CU size decision method for HEVC encoders," IEEE Transactions on Multimedia, vol. 15, no. 2, pp. 465-470, 2013.doi:[[[10.1109/TMM.2012.2231060]]]
10 S. Cho, M. Kim, "Fast CU splitting and pruning for suboptimal CU partitioning in HEVC intra coding," IEEE Transactions on Circuits and Systems for Video Technology, vol. 23, no. 9, pp. 1555-1564, 2013.doi:[[[10.1109/TCSVT.2013.2249017]]]
11 B. Min, R. C. Cheung, "A fast CU size decision algorithm for the HEVC intra encoder," IEEE Transactions on Circuits and Systems for Video Technology, vol. 25, no. 5, pp. 892-896, 2015.doi:[[[10.1109/TCSVT.2014.2363739]]]
12 F. Duanmu, Z. Ma, Y. Wang, "Fast CU partition decision using machine learning for screen content compression," in Proceedings of 2015 IEEE International Conference on Image Processing (ICIP), Quebec City, Canada, 2015;pp. 4972-4976. doi:[[[10.1109/ICIP.2015.7351753]]]
13 X. Yu, Z. Liu, J. Liu, Y. Gao, D. Wang, "VLSI friendly fast CU/PU mode decision for HEVC intra encoding: leveraging convolution neural network," in Proceedings of 2015 IEEE International Conference on Image Processing (ICIP), Quebec City, Canada, 2015;pp. 1285-1289. doi:[[[10.1109/ICIP.2015.7351007]]]
14 X. Shang, G. Wang, T. Fan, Y. Li, "Fast CU size decision and PU mode decision algorithm in HEVC intra coding," in Proceedings of 2015 IEEE International Conference on Image Processing (ICIP), 2015;pp. 1593-1597. doi:[[[10.1109/ICIP.2015.7351069]]]
15 Q. Zhang, Y. Yang, H. Chang, W. Zhang, Y. Gan, "Fast intra mode decision for depth coding in 3D-HEVC," Multidimensional Systems and Signal Processing, vol. 28, no. 4, pp. 1203-1226, 2017.doi:[[[10.1007/s11045-016-0388-1]]]
16 L. Zhao, L. Zhang, S. Ma, D. Zhao, "Fast mode decision algorithm for intra prediction in HEVC," in Proceedings of 2011 Visual Communications and Image Processing (VCIP), Tainan, Taiwan, 2011;pp. 1-4. doi:[[[10.1109/VCIP.2011.6115979]]]
17 L. L. Wang, W. C. Siu, "Novel adaptive algorithm for intra prediction with compromised modes skipping and signaling processes in HEVC," IEEE Transactions on Circuits and Systems for Video Technology, vol. 23, no. 10, pp. 1686-1694, 2013.doi:[[[10.1109/TCSVT.2013.2255398]]]
18 L. Gao, S. Dong, W. Wang, R. Wang, W. Gao, "Fast intra mode decision algorithm based on refinement in HEVC," in Proceedings of 2015 IEEE International Symposium on Circuits and Systems (ISCAS), Lisbon, Portugal, 2015; pp. 517-520. custom:[[[]]]
Revision received: December 6 2017
Corresponding Author: Wei-Xin Yao* ([email protected])
Wei-Xin Yao*, School of Electrical Engineering, Anhui Polytechnic University, Wuhu, China, [email protected]
Dan Yang**, School of Computer Science and Information, Anhui Polytechnic University, Wuhu, China, [email protected]
Gui-Fu Lu**, School of Computer Science and Information, Anhui Polytechnic University, Wuhu, China, [email protected]
Jun Wang**, School of Computer Science and Information, Anhui Polytechnic University, Wuhu, China, [email protected] | CommonCrawl |
Research | Open | Published: 10 March 2017
Evaluation of environmental bacterial communities as a factor affecting the growth of duckweed Lemna minor
Hidehiro Ishizawa, Masashi Kuroda, Masaaki Morikawa & Michihiko Ike
Biotechnology for Biofuels, volume 10, Article number: 62 (2017)
Duckweed (family Lemnaceae) has recently been recognized as an ideal biomass feedstock for biofuel production because of its rapid growth and high starch content, which has inspired interest in improving its productivity. Since previous studies on terrestrial plants have shown that co-existing microbes can have significant effects on plant growth, this study attempted to understand the plant–microbe interactions of a duckweed, Lemna minor, focusing on growth promotion/inhibition effects, so as to assess the possibility of accelerating duckweed production by modifying the co-existing bacterial community.
Co-cultivation of aseptic L. minor with bacterial communities collected from various aquatic environments resulted in changes in duckweed growth ranging from −24 to +14% compared to the aseptic control. A number of bacterial strains were isolated from both growth-promoting and growth-inhibiting communities and examined for their effects on duckweed growth. Irrespective of the source, each strain showed a promotive, inhibitory, or neutral effect when individually co-cultured with L. minor. To further analyze the interactions among these bacterial strains in a community, binary combinations of promotive and inhibitory strains were co-cultured with aseptic L. minor; combinations of promotive–promotive or inhibitory–inhibitory strains generally showed effects similar to those of the individual strains. However, combinations of promotive and inhibitory strains tended to show inhibitory effects, and only Aquitalea magnusonii H3 exerted its plant growth-promoting effect in all combinations tested.
A significant change in biomass production was observed when duckweed was co-cultivated with environmental bacterial communities. Promotive, neutral, and inhibitory bacteria in the community would synergistically determine the overall effect. The results indicate the possibility of improving duckweed biomass production via regulation of the co-existing bacterial community.
Duckweed is a tiny floating aquatic plant that is characterized by a rapid growth, high tolerance to polluted water, global distribution, and high starch content [1]. For decades, duckweed was considered as an industrially versatile plant that could be used for animal feed [2, 3], organic fertilizer [4], and chemical toxicity tests [5, 6]. In recent years, duckweed has been recognized as an ideal feedstock for biofuel production, because their soft and starch-rich biomass enables larger yield of fuel ethanol, butanol, and biogas [7, 8]. Xu et al. [9] calculated that bioethanol production from duckweed is 1.5 times greater than that from maize, when considering all parts of the cultivation and fermentation processes. Further, since duckweed can efficiently remove nitrogen, phosphorus, and heavy metals from water during growth, it has also been used in low-cost and low-energy wastewater treatment systems [10–12]. Thus, co-beneficial systems that combine biofuel production and water purification using duckweed have been proposed [13].
The attractive features of duckweed as a biomass resource have inspired interest in improving their productivity via selection of species/strains with higher growth rates [14] and optimizing the design and operational parameters, such as harvest period and water depth, of cultivation systems [15–17]. The effects of nutrient strength [18, 19], light intensity, photoperiod [20], and temperature [19] on duckweed growth and starch accumulation have also been examined to improve the production.
In addition, microbes that co-exist with duckweed are believed to have significant effects on growth in natural cultivation systems. In the terrestrial sphere, plants are widely recognized to develop intimate interactions with microbes that are critical for their growth or survival [21]. Some symbiotic bacteria called plant growth-promoting bacteria (PGPB) are known to enhance host plant growth by increasing nutrient acquisition or alleviating biotic and abiotic stresses [22]. In the last few decades, considerable efforts have been dedicated to isolate and characterize PGPB for important terrestrial agricrops, and it is clear that an extremely wide range of the plants harbor beneficial bacteria such as PGPB.
Although there have been several studies to understand plant-associated microbial communities and to engineer them for optimal production of crops [23, 24], such studies on aquatic plants, including duckweed, have only recently begun [25]. Crump et al. [26], Xie et al. [27], and Matsuzawa et al. [28] recently found that aquatic plants, including duckweed, also harbor diverse and specific bacterial communities. Yamaga et al. [29] isolated the first PGPB recognized to promote duckweed growth in a sterile synthetic medium. Another bacterium was recently found to promote the growth of the duckweed Lemna minor in a medium containing chromium [30]. These studies confirmed that bacteria living with duckweed can exert significant effects on host plant growth, similar to those seen in terrestrial crops. Thus, extended studies on duckweed–microbe interactions, especially those affecting duckweed growth, are needed to realize efficient and sustainable cultivation of duckweed species utilizing beneficial bacteria.
The aim of the current research was to evaluate the effects of diverse environmental bacteria on the growth of the duckweed L. minor. Fifteen native bacterial communities collected from various aquatic environments were investigated for their effects on duckweed growth. Bacterial strains in communities that strongly enhanced or repressed the growth of duckweed were isolated for a more profound understanding of duckweed–microbe interactions.
Effects of bacterial communities on duckweed growth
The growth of duckweed cultivated with fifteen environmental bacterial communities is shown in Fig. 1. Because all of the experiments were performed in the same axenic culture conditions, both promotive and inhibitory effects on duckweed growth should be a function of the bacterial communities. Many bacterial communities were found to have promotive or neutral effects on duckweed growth, with bacterial community H, which showed the greatest growth promotion, increasing the number of fronds by +14% over aseptically cultured L. minor. On the other hand, bacterial communities M and N decreased the number of duckweed fronds by −24 and −14%, respectively, compared to that of the controls. No remarkable difference in frond size, shape, color, or disease symptoms was observed in plants grown with the different bacterial communities in this series of experiments (Fig. 2).
Effects on plant growth (EPGs) of bacterial communities collected from ponds or rivers. A–O indicate bacterial communities recovered from water samples. EPGs were evaluated based on the number of fronds after 7 days of cultivation compared to that of an aseptic control. There were 91.33 (±4.50), 85.67 (±2.05), and 81.33 (±6.60) fronds at the end of control experiments for bacterial communities A–E, F–J, and K–O. Error bars show the standard errors and include errors among treatments performed in triplicate and the control
Images of Lemna minor after 7 days of cultivation with bacterial communities H (a) and M (b) and the aseptic control (c)
Isolation of bacterial strains from duckweed
From the plant bodies cultivated with bacterial communities H and M, which conferred the highest and lowest L. minor growth in the previous experiment, 10 and 12 morphologically distinct bacterial strains were isolated, respectively, and used for the further investigations. Table 1 shows the results of a BLAST search for the 16S rDNA sequences of all 22 isolates. All isolates showed at least 97% sequence identity with known strains. All isolates were found to be members of the alpha, beta, and gamma subclasses of Proteobacteria, except for H8, which belonged to the phylum Actinobacteria. In addition, both communities contained members of the order Rhizobiales (H1, H2, H5, M2), Pseudomonadales (H4, H6, M10, M12), Burkholderiales (H7, H9, M7, M8, M9), and Sphingomonadales (H10, M5, M11), which often comprise the large fraction of the rhizobacterial communities of terrestrial plants [31, 32].
Table 1 Nucleotide BLAST search of bacterial strains isolated from Lemna minor cultivated with communities H and M
Cultivation of L. minor with single bacterial isolates
A total of 22 isolates were evaluated for their effects on duckweed growth by co-culture with sterilized L. minor. In this experiment, frond numbers and dry weights were highly correlated (r = 0.93), so only the EPGs calculated from the dry weight were used in further analyses. As shown in Fig. 3, duckweed growth was affected both positively and negatively by the inoculation of isolates. The EPGs of members of bacterial community H varied from −6.3 to +21%, whereas in community M, they ranged from −14.4 to +17.5%. Bacterial strains were classified into five groups: those showed EPG values greater than 10% (++), between +5 and +10% (+), between −5 and +5% (+/−), between −5 and −10% (−), and less than −10% (−−). According to this classification scheme, there were 3, 2, 3, 2, and 0 isolates from community H, corresponding to (++), (+), (+/−), (−), and (−−), respectively, whereas there were 3, 1, 5, 0, and 3 isolates from community M. Bacterial communities H and M, which were associated the greatest and least growth in the first experiment, contained both promotive bacteria (++ or +) and inhibitory bacteria (− or −−). However, it is worth noting that inhibitory bacteria with EPG values less than −10% (−−) were isolated only from community M.
Effects on plant growth (EPGs) of single bacteria isolated from communities H (black bars) and M (gray bars). EPGs were evaluated by the change in dry weight of Lemna minor relative to that of the aseptic control, which had 119.67 (±5.19) fronds at the end. Error bars show the standard errors (n = 3)
Cultivation of L. minor with mixtures of two bacterial isolates
Based on the results of the previous experiment and ease of cultivation, promotive strains H1 (++), H3 (++), and M12 (++) and inhibitory strains H6 (−), M3 (−−), M5 (−−), and M6 (−−) were selected for further experiments, and all binary combinations of these isolates (7C2 = 21 patterns) were examined for their effects on duckweed growth. Simultaneously, single cultures of seven strains were tested again as positive controls. Because the frond number and dry weight showed a strong correlation (r = 0.95), only the dry weights were used to calculate EPGs (%). As shown in Table 4, all positive controls showed similar effects as those seen in the previous experiment, although the effects of H1 and H6 in this experiment were regarded as (+) and (−−), respectively. Generally, binary combinations of promotive bacteria showed promotive or neutral effects on duckweed growth, and combinations of inhibitory bacteria showed inhibitory effects. Combinations of promotive and inhibitory strains, however, resulted in both growth promotion and growth inhibition. Specifically, combinations of strain H3 (++) and an inhibitory bacterium tended to show promotive effects, whereas all combinations of strain H1 (+) and M12 (++) with inhibitory strains resulted in a negative effect.
Evaluation of bacterial isolates for plant growth-affecting traits
The isolates were examined for traits that affect plant growth in a series of assays. The results for the 22 isolates are shown in Table 2. Of the 22 isolates, 10 could synthesize IAA, 11 could solubilize insoluble phosphate, and 12 could produce siderophores. In addition, 12 isolates were positive for hydrogen cyanide production, which is believed to be involved in plant growth inhibition. All of the isolates excepting for H2 (–) and H5 (+/−) exhibited at least one of these traits. Strains H3 (++), M1 (++), and M12 (++) were positive for all traits. The occurrence of these traits was similar between the isolates from communities H and M, except that the siderophore-producing isolates were more frequently found in community M than H. Table 3 shows the results of the multiple-way ANOVA test to detect the contribution of these traits to the effects on duckweed growth. Of the four traits, only phosphate solubilization was found to correlate (p < 0.05) with duckweed growth promotion, whereas the other traits correlated with neither growth promotion nor growth inhibition.
Table 2 Indole acetic acid (IAA) production, phosphate (P) P solubilization, siderophore production, and hydrogen cyanide (HCN) production by bacterial strains
Table 3 The result of multiple-way analysis of variance (ANOVA)
This study revealed that bacterial communities in freshwater environments can both enhance and repress the growth of the duckweed L. minor (Fig. 1). The effects of bacterial communities on the 7-day growth of L. minor ranged from −24 to +14%, which indicates that a 1.5-fold difference in duckweed yield can be controlled by selecting a bacterial community. Approximately the same change in relative growth rate of L. minor was observed between light intensities of 400 and 110 µmol/m2/s [20], and between ammonium (as the sole nitrogen source) concentrations of 28 and 2 mg/L [33]. Considering these facts, environmental bacterial communities are a critical factor affecting duckweed growth, with effects comparable to those of other important environmental factors such as light and nutrients. Enhancement of crop yields by optimizing co-existing bacteria has long been a goal of sustainable agriculture. Our results show that this is feasible even for duckweed hydroculture. This strategy would be an attractive option, since co-existing bacteria can potentially be modulated with lower energy input than light, nutrients, or temperature.
Although culture-dependent methods have clear limitations for analyzing bacterial communities, we believe that it is useful to isolate bacterial strains in order to characterize their functions, since it is expected that readily culturable bacteria comprise larger fraction in duckweed rhizoplane than that in other natural environments according to Matsuzawa et al. [28]. Moreover, to rationally design co-existent bacteria for enhanced duckweed biomass production, understanding which bacterial strains promote or inhibit duckweed growth is indispensable. In this study, we isolated and characterized representative bacterial strains from both promotive and inhibitory bacterial communities H and M, respectively (Fig. 3). Taxonomically, large part of the bacterial isolates belonged to taxa that are known inhabitants of the terrestrial plant rhizosphere. It might suggest that duckweed rhizobacteria share the same characteristics with those of terrestrial plants to a certain extent. Interestingly, both promotive and inhibitory communities contained bacterial strains that expressed promotive, inhibitory, and neutral effects on duckweed growth, and their isolation frequencies were not significantly different between the two communities. The only notable difference was that the activities of inhibitory bacterial strains isolated from the inhibitory community were stronger than those of the strains isolated from the promotive community. We conclude that promotive, inhibitory, and neutral bacteria are ubiquitous in duckweed-associated bacterial communities, and that the activities of these bacteria likely determine, synergistically, the net effect of a bacterial community on duckweed growth.
As far as we know, Acinetobacter calcoaceticus P23 [29] and Exiguobacterium sp. MH3 [30] are the only PGPB that have been reported for duckweed species. In this study, we discovered six new bacterial strains that promoted the growth of duckweed by more than 10% with 7 days of cultivation. Sequencing of 16S rRNA genes revealed that these strains belong to diverse genera that were different from previously isolated PGPB, suggesting that PGPB for duckweed are distributed across a wider range of taxa. Interestingly, strains M1 (++) and M12 (++) were identified as Azospirillum and fluorescent Pseudomonas, respectively, both of which are common PGPB for terrestrial plants, except for Pseudomonas syringae, which is a plant pathogen. On the other hand, the most efficient PGPB strain H3 (++) was identified as belonging to the genus Aquitalea, which has been discovered only in freshwater environments. Quisehuatl-Tepexicuapan et al. [34] isolated one strain of Aquitalea from the rhizoplane of duckweed L. gibba. Therefore, strain H3 may be a PGPB specific for aquatic plants, including duckweed, which has evolved in freshwater environments.
Bacterial strains that suppress the plant growth without any apparent pathogenic symptoms are known as plant growth-inhibiting bacteria (PGIB) or deleterious rhizobacteria (DRB) in the field of agriculture. Although these bacteria are difficult to detect, a number of studies indicate that PGIB and DRB can be regulated to improve crop production [35, 36] and to control weeds [37]. We isolated these bacteria for the first time from duckweed or aquatic plants in this study. Because these bacteria can significantly lower the efficiency of duckweed production, attention should be paid to PGIB, as well as PGPB. Interestingly, we found PGIB in the genus Acinetobacter, which is the same taxonomic group as that of the first PGPB identified in duckweed [29]. Therefore, culture-independent metagenomic analysis of the 16S rRNA gene is not sufficient to detect and distinguish between duckweed PGPB and PGIB in bacterial communities, and further isolation-based research, such as this study, will contribute to not only a deeper understanding of duckweed–microbe interactions but also the construction of a relevant bacterial database.
Many studies have been dedicated to elucidating the mechanisms by which PGPB affect plant growth [22]. In this study, we examined the correlation between EPGs assessed by co-cultivation and the presence of four physiological traits that are known to be associated with plant growth promotion or inhibition. Although many bacteria were found to have more than one of these traits (Table 2), no clear-cut correlation was found between the possession of these traits and duckweed growth promotion/inhibition effects of the bacterial strains in a multiple ANOVA analysis (Table 3). Therefore, multiple mechanisms, probably including unknown ones, are related to bacterial promotion/inhibition of duckweed growth. Among the four tested traits, only the ability to solubilize phosphate was shown to be slightly correlated with duckweed growth promotion. Although bacterial phosphate solubilization is widely recognized to contribute to phosphorus availability in soil environments [38], this result was unexpected, because all of the phosphorus was added in soluble form at the start of an experiment. However, it is possible that phosphate supply via degradation of dead bacterial cells, plant exudates, and phosphate salts formed in the medium was influenced by phosphate-solubilizing activity of duckweed-associated bacteria. Since aquatic environments also contain a variety of unavailable phosphorus [39], the effects of bacterial phosphate supply to plants should be evaluated in real hydroculture.
In contrast to our relatively substantial knowledge on the mechanisms of PGPB, reports on the mechanisms by which PGIB inhibit the growth of plants are quite limited. Cyanide production is virtually the only proposed mechanism with enough supporting data [40, 41], whereas other studies have suggested the benefit of hydrogen cyanide based on antifungal activity [42]. The current study did not show a significant correlation between cyanide production and plant growth. Further studies are required to understand duckweed growth inhibition associated with bacteria or bacterial community.
To better understand the complex effects of bacterial communities, effects of binary combinations of selected isolates on duckweed growth were tested as simple artificial bacterial community models. In contrast with results of a previous study conducted for terrestrial plant [43], synergistic effects were generally not observed with promotive–promotive or inhibitory–inhibitory bacterial combinations. Interestingly, the results of promotive–inhibitory bacterial combinations showed that promotive strains H1 (+) and M12 (++) were not effective in the presence of any of the inhibitory strains (Table 4). This suggests that not all PGPB are able to function in their native environments, and that inhibitory bacteria may have a stronger influence on the effects of the bacterial community as a whole. This observation shows the difficulty of using PGPB in non-sterilized conditions as reported in Liu et al. [44]. It also indicates that regulation of PGIB may be effective for maximizing PGPB activity in a bacterial community. In contrast to strains H1 (+) and M12 (++), promotive strain H3 (++) was less susceptible to the deleterious effects of inhibitory strains, and was found to exert a promotive effect or at least negate the inhibitory effects of other bacteria. From this point of view, strain Aquitalea magnusonii H3 can be regarded as a PGPB for potential use in open environments.
Table 4 The effects on plant growth (EPGs, %) based on dry weight of a mixed inoculation of two species of bacteria
There are many possible explanations for what determines the result of the conflicting effects of duckweed PGPB and PGIB described above. For example, competition between bacteria on root exudates and spaces, inactivation of promotive or inhibitory mechanisms, and masking effects are likely. Elucidating such bacterial interactions is an important next step for optimizing duckweed hydroculture systems via the design of beneficial bacterial communities. For this reason, bacterial strains obtained in this study may be useful as model PGPB and PGIB for duckweed.
This study reported that (1) the bacterial community strongly influences the rate of duckweed biomass production; (2) duckweed harbors bacteria that have promotive, neutral, or inhibitory effects on its growth; (3) the promotive effects of PGPB strains may or may not persist in the presence of other bacteria, depending on the PGPB strain and on mechanisms that remain unknown; and (4) many isolates from duckweed-associated bacterial communities share taxa and plant growth-influencing traits with terrestrial rhizobacteria. From these findings, it can be concluded that modulating the bacterial community is a possible route to improving biomass production in duckweed hydroculture. Further, this approach may be applicable to other aquatic feedstocks, such as water lettuce, water hyacinth, and Azolla, which have a morphology similar to that of duckweed.
Common duckweed (Lemna minor, RDSC #5512), obtained from a small pond in a botanical garden of Hokkaido University (Sapporo, Japan), was used in the experiments. The plants were sterilized by washing with 0.5% sodium hypochlorite for 7 min, followed by washing with sterilized water twice. The sterilized plants were successively cultured in flasks containing Hoagland medium (36.1 mg/L KNO3, 293 mg/L K2SO4, 3.87 mg/L NaH2PO4, 103 mg/L MgSO4·7H2O, 147 mg/L CaCl2·H2O, 3.33 mg/L FeSO4·7H2O, 0.95 mg/L H3BO3, 0.39 mg/L MnCl2·4H2O, 0.03 mg/L CuSO4·5H2O, 0.08 mg/L ZnSO4·7H2O, and 0.254 mg/L H2MoO4·4H2O; pH 7.0) in an incubation chamber at 28 °C, an irradiance of 80 µmol/m2/s, and a photoperiod of 16 h/8 h day/night.
Cultivation of duckweed with environmental bacterial communities
Water samples were taken from the surfaces of 15 freshwater ponds and rivers located in the northern part of Osaka, Japan in August 24, 27, and 30 of 2015. Descriptions of sampled sites are shown in Additional file 1: Figure S1. The native bacterial communities in the samples were recovered and used for duckweed cultivation experiments as follows. First, coarse particles, including fungi and microalgae, were removed from the water samples using filters with a pore size of 3.0 µm (SSWP, MF-Millipore, Merck Millipore, Darmstadt, Germany), followed by centrifugation (10,000×g, 4 °C, 10 min) to collect bacterial cells from the native bacterial communities. The collected bacterial cells were washed twice with sterilized Hoagland medium and re-suspended in the original volume of Hoagland medium. Ten fronds of L. minor were transferred to flasks filled with 60 mL of the medium containing the bacteria and cultivated for 7 days in the above-mentioned conditions. During cultivation, duckweed growth was monitored by counting the frond number. The effects of the bacterial communities on duckweed growth were evaluated in comparison with growth of a control without introduced bacteria (sterile Hoagland medium).
Isolation of bacterial strains attached to duckweed
At the end of 7 days of cultivation of duckweed with bacterial communities, whole plant bodies in each flask were collected and washed with 20 mL of sterilized 5 mg/L sodium tripolyphosphate (TPP). Then, the duckweed samples were homogenized in TPP using a BioMasher II (Nippi, Tokyo, Japan). The homogenates were spread onto solid 1:10 LB medium in TPP containing 1.5% agar and incubated at 28 °C for 7 days. All morphologically distinct colonies were picked and purified using the same medium.
Identification of bacterial strains
Isolated bacterial strains were identified based on their 16S rRNA gene sequences. A single colony of each bacterial strain was picked and added to PCR reagent containing primers 27F (5′-AGAGTTTGATCTGGCTCAG-3′) [45] and 1392R (5′-ACGGGCGGTGTGTACA-3′) [46] and Ex Taq DNA polymerase (TaKaRa Bio Inc., Shiga, Japan). PCR amplification of the 16S rRNA gene fragments was performed as described previously [47] using a T100 Thermal Cycler (Bio-Rad Laboratories, Hercules, CA, USA). The amplicons were sequenced by Hokkaido System Science Co., Ltd (Hokkaido, Japan). The NCBI Nucleotide BLAST tool (http://blast.ncbi.nlm.nih.gov/Blast.cgi) was employed for taxonomic identification of strains H1–H10 and M1–12 using the obtained sequences as queries. The nucleotide sequences of the partial 16S rRNA gene from strains H1–H10 and M1–12 were submitted to the DNA Data Bank of Japan (DDBJ) under accession number LC191965–LC191986.
Cultivation of duckweed with isolated bacterial strains
To cultivate bacterial isolates used in the experiments, a loop of bacterial colony was inoculated into 20 or 100 mL of liquid LB medium in a vial or flask that was held overnight at 28 °C with shaking at 120 rpm. Cells were harvested by centrifugation (10,000×g, 4 °C, 10 min), washed twice with sterilized Hoagland medium, and then re-suspended in the same medium with cells at an optical density at 600 nm (OD600) = 0.1. To allow bacterial strains to attach to the plants, aseptic L. minor were placed on each bacterial suspension for 24 h. Then, 10 duckweed fronds were transferred to a flask filled with fresh bacteria-free medium at the start of cultivation. This method minimized the effect of nutrient leakage from dead bacterial cells and enabled an evaluation of the direct physiological effects of bacteria on the duckweed [48]. After 7 days of cultivation, the number of duckweed fronds and dry weight (12 h drying at 80 °C) were measured. Cultivation experiments using a combination of two bacterial strains were performed using the same procedure, except that equal amounts (30 mL each) of two separately prepared bacterial suspensions (OD600 = 0.1) were mixed and allowed to attach to plants. Control experiments were performed using sterile Hoagland medium without the introduction of bacterial strains.
Evaluation of bacterial isolates for traits that affect plant growth
Indole acetic acid (IAA) production
Indole acetic acid production in the presence of L-Trp was tested according to a method described by Orlando [49], with some modifications. First, a single bacterial colony was inoculated into a vial containing 20 mL of LB medium with 0.05% (w/v) of L-Trp. After 5 days of incubation with shaking (28 °C, 120 rpm), the culture was centrifuged (2000×g, 30 min, 24 °C), and 2 mL of the supernatant was added to 2 mL of Salkowski reagent (98 mL of 35% HClO4, 2 mL of 0.5 M FeCl3). Then, the mixture was placed at room temperature for 30 min for observation. Development of a pink color indicated the production of IAA.
Phosphate-solubilizing ability
The ability to solubilize insoluble phosphate was evaluated using Pikovskaya's agar, which contains calcium phosphate as an insoluble phosphate [50]. Then, each bacterial colony was streaked onto an agar plate and incubated at 28 °C for 7 days. Results were considered positive when clear zones developed around a colony.
Siderophore production
Bacterial siderophore production was detected using the method of Schwyn and Neilands [51]. In this assay, each bacterial colony was streaked on a chrome azurol S (CAS) agar plate containing blue dye. Plates were incubated at 28 °C for 7 days and then examined for a yellow or orange halo around the colonies, which would indicate the production of a siderophore.
Hydrogen cyanide (HCN) production
The assay of bacterial cyanide production was performed according to a method described by Saber et al. [52]. In short, the bacterial strains were grown in 5 mL of LB medium in a test tube with Whatman No. 1 filter paper (GE Healthcare Life Science, Buckinghamshire, UK) soaked in cyanide reagent. Cyanide-producing bacteria were detected when the Whatman paper changed color from yellow to orange or red.
Duckweed cultivation experiments were performed in triplicate for all treatments and controls. In all duckweed cultures, the effects on plant growth (EPG) of each bacterial community or each bacterial strain were calculated as follows:
$$\text{EPG}\,(\%) = \frac{G(T) - G(C)}{G(C)} \times 100,$$
where G(T) is the mean growth of duckweeds in the presence of microbes, which was evaluated by the frond number or dry weight of the duckweed after 7 days of cultivation, and G(C) is that in the aseptic controls. Here, the standard errors (SE) for EPG were calculated using the following formula:
$$\text{SE}(\text{EPG}) = \frac{\sqrt{\text{SE}(G(T))^{2} + \text{SE}(G(C))^{2}}}{G(C)} \times 100.$$
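A small sketch of how EPG and its standard error could be computed from triplicate measurements; the numbers are hypothetical, and treating SE(G) as the standard error of the mean is an assumption.

```python
# Hypothetical sketch: EPG and its standard error from triplicate measurements.
# Using the standard error of the mean for SE(G) is an assumption.
import numpy as np

def epg_and_se(growth_treated, growth_control):
    g_t, g_c = np.mean(growth_treated), np.mean(growth_control)
    se_t = np.std(growth_treated, ddof=1) / np.sqrt(len(growth_treated))
    se_c = np.std(growth_control, ddof=1) / np.sqrt(len(growth_control))
    epg = (g_t - g_c) / g_c * 100
    se_epg = np.sqrt(se_t ** 2 + se_c ** 2) / g_c * 100
    return epg, se_epg

# e.g. dry weights (mg) after 7 days: treated vs. aseptic control
epg, se = epg_and_se([21.5, 22.0, 20.8], [18.9, 19.4, 18.6])
```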
Multiple-way analysis of variance (ANOVA) was performed to test whether bacterial IAA production, phosphate-solubilizing ability, siderophore production, or hydrogen cyanide production correlated with growth promotion or inhibition of L. minor. In this analysis, the results of four assays were treated as qualitative factors, and the EPG (%) based on dry weights in the duckweed cultivation experiments with single bacterial strains were used as the response variable. All statistical analyses were performed in R v3.2.3. (http://www.r-project.org).
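The analysis was carried out in R; a roughly equivalent sketch using Python's statsmodels (not the authors' script) is shown below, with purely illustrative data and column names.

```python
# Illustrative multiple-way ANOVA with Python/statsmodels (the study used R);
# the data frame contents and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.DataFrame({
    "EPG": [21.0, -6.3, 17.5, -14.4, 2.1, 8.0, -9.5, 12.3],   # % change in dry weight
    "IAA": ["pos", "neg", "pos", "neg", "neg", "pos", "neg", "pos"],
    "P":   ["pos", "neg", "pos", "neg", "pos", "neg", "neg", "pos"],
    "Sid": ["neg", "pos", "pos", "pos", "neg", "neg", "pos", "neg"],
    "HCN": ["neg", "neg", "pos", "pos", "neg", "pos", "pos", "neg"],
})

model = smf.ols("EPG ~ C(IAA) + C(P) + C(Sid) + C(HCN)", data=df).fit()
print(anova_lm(model, typ=2))   # factor-wise sums of squares, F values, p-values
```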
Abbreviations
PGPB: plant growth-promoting bacteria
BLAST: Basic Local Alignment Search Tool
DNA: deoxyribonucleic acid
EPG: effects on plant growth
IAA: indole acetic acid
RNA: ribonucleic acid
PGIB: plant growth-inhibiting bacteria
DRB: deleterious rhizobacteria
NCBI: National Center for Biotechnology Information
CAS: chrome azurol S
Landolt E. Biosystematic investigations in the family of duckweed (Lemnaceae), vol. 2. Stiftung Rubel, Zúrich: Geobotanischen Inst ETH; 1986.
Islam MS, Kabir MS, Khan SI, Ekramullah M, Nair GB, Sack RB, Sack DA. Wastewater-grown duckweed may be safely used as fish feed. Can J Microbiol. 2004;50:51–6.
Cheng JJ, Stomp AM. Growing duckweed to recover nutrients from wastewaters and for production of fuel ethanol and animal feed. Clean. 2009;37:17–26.
Ahmad Z, Hossain NS, Hussain SG, Khan AH. Effect of duckweed (Lemna minor) as complement to fertilizer nitrogen on the growth and yield of rice. Int J Trop Agric. 1990;8:72–9.
Lahive E, O'Halloran J, Jansen MAK. Differential sensitivity of four Lemnaceae species to zinc sulphate. Environ Exp Bot. 2011;71:25–33.
Ziegler P, Sree KS, Appenroth KJ. Duckweeds for water remediation and toxicity testing. Toxicol Environ Chem. 2016;98:1127–54.
Cui W, Cheng JJ. Growing duckweed for biofuel production: a review. Plant Biol. 2015;17:16–23.
Yu C, Sun C, Yu L, Zhu M, Xu H, Zhao J, Ma Y, Zhou G. Comparative analysis of duckweed cultivation with sewage water and SH media for production of fuel ethanol. PLoS ONE. 2014. doi:10.1371/journal.pone.0115023.
Xu J, Cui W, Cheng JJ, Stomp AM. Production of high-starch duckweed and its conversion to bioethanol. Biosyst Eng. 2011;110:67–72.
El-Shafai SA, El-Gohary FA, Nasr FA, van der Steen NP, Gijzen HJ. Nutrient recovery from domestic wastewater using a UASB-duckweed ponds system. Bioresour Technol. 2007;98:798–807.
Stout L, Nüsslein K. Biotechnological potential of aquatic plant–microbe interactions. Curr Opin Biotechnol. 2010;21:339–45.
Alaerts GJ, Rahman Mahbubar MD, Kelderman P. Performance analysis of a full-scale duckweed-covered sewage lagoon. Water Res. 1996;30:843–52.
Fujita M, Mori K, Kodera T. Nutrient removal and starch production through cultivation of Wolffia arrhiza. J Biosci Bioeng. 1999;87:194–8.
Bergmann BA, Cheng J, Classen J, Stomp AM. In vitro selection of duckweed geographical isolates for potential use in swine lagoon effluent renovation. Bioresour Technol. 2000;73:13–20.
Zhao Y, Fang Y, Jin Y, Huang J, Bao S, He Z, Wang F, Zhao H. Effects of operation parameters on nutrient removal from wastewater and high-protein biomass production in a duckweed-based (Lemma aequinoctialis) pilot-scale system. Water Sci Technol. 2014;70:1195–204.
Verma R, Suthar S. Impact of density loads on performance of duckweed bioreactor: a potential system for synchronized wastewater treatment and energy biomass production. Environ Prog Sustain Energy. 2015;34:1596–604.
Oron G, de-Vegt A, Porath D. Nitrogen removal and conversion by duckweed grown on waste-water. Water Res. 1988;22:179–84.
Caicedo JR, Steen NP, Arce O, Gijzen HJ. Effect of total ammonia nitrogen concentration and pH on growth rates of duckweed (Spirodela polyrrhiza). Water Res. 2000;34:3829–35.
Lasfar S, Monette F, Millette L, Azzouz A. Intrinsic growth rate: a new approach to evaluate the effects of temperature, photoperiod and phosphorus-nitrogen concentrations on duckweed growth under controlled eutrophication. Water Res. 2007;41:2333–40.
Yin Y, Yu C, Yu L, Zhao J, Sun C, Ma Y, Zhou G. The influence of light intensity and photoperiod on duckweed biomass and starch accumulation for bioethanol production. Bioresour Technol. 2015;187:84–90.
Berg G, Grube M, Schloter M, Smalla K. Unraveling the plant microbiome: looking back and future perspectives. Front Microbiol. 2014;5:1–7.
Glick BR. Plant growth-promoting bacteria: mechanisms and applications. Scientifica. 2012. doi:10.6064/2012/963401.
Anderson M, Habiger J. Characterization and identification of productivity-associated rhizobacteria in wheat. Appl Environ Microbiol. 2012;78:4434–46.
Mendes R, Kruijt M, de Bruijn I, Dekker E, van der Voort M, Schneider JHM, Piceno YM, DeSantis TZ, Anderson GL, Bakker PAHM, Raaijmakers JM. Deciphering the rhizosphere microbiome for disease-suppressive bacteria. Science. 2011;332:1097–100.
Appenroth KJ, Ziegler P, Sree KS. Duckweed as a model organism for investigating plant–microbe interactions in an aquatic environment and its applications. Endocytobiosis Cell Res. 2016;27:94–106.
Crump BC, Koch EW. Attached bacterial populations shared by four species of aquatic angiosperms. Appl Environ Microbiol. 2008;74:5948–57.
Xie WY, Su JQ, Zhu YG. Phyllosphere bacterial community of floating macrophytes in paddy soil environments as revealed by Illumina high-throughput sequencing. Appl Environ Microbiol. 2015;81:522–32.
Matsuzawa H, Tanaka Y, Tamaki H, Kamagata Y, Mori K. Culture-dependent and independent analyses of the microbial communities inhabiting the giant duckweed (Spirodela polyrrhiza) rhizoplane and isolation of a variety of rarely cultivated organisms within the phylum Verrucomicrobia. Microbes Environ. 2010;25:302–8.
Yamaga F, Washio K, Morikawa M. Sustainable biodegradation of phenol by Acinetobacter calcoaceticus P23 isolated from the rhizosphere of duckweed Lemna aoukikusa. Environ Sci Technol. 2010;44:6470–4.
Tang J, Zhang Y, Cui Y, Ma J. Effects of a rhizobacterium on the growth of and chromium remediation by Lemna minor. Environ Sci Pollut Res. 2015;22:9686–93.
Lundberg DS, Lebeis SL, Paredes SH, Yourstone S, Gehring J, Malfatti S, Tremblay J, Engelbrektson A, Glavina del Rio T, Edgar RC, Eickhorst T, Ley RE, Hugenholtz P, Green Tringe S, Dangl JL. Defining the core Arabidopsis thaliana root microbiome. Nature. 2012;488:86–90.
Buée M, de Boer W, Martin F, van Overbeek L, Jurkevitch E. The rhizosphere zoo: an overview of plant-associated communities of microorganisms, including phages, bacteria, archaea, and fungi, and of some of their structuring factors. Plant Soil. 2009;321:189–212.
Wang W, Yang C, Tang X, Gu X, Zhu Q, Pan K, Hu Q, Ma D. Effects of high ammonium level on biomass accumulation of common duckweed Lemna minor L. Environ Sci Pollut Res. 2014;21:14202–10.
Quisehuatl-Tepexicuapan E, Ferrera-Cerrato R, Silva-Rojas HV, Rodriguez-Zaragoza S, Alarcón A, Almaraz-Suárez JJ. Free-living culturable bacteria and protozoa from the rhizoplanes of three floating aquatic plant species. Plant Biosyst. 2014;2014:1–11.
Suslow TV, Schroth MN. Role of deleterious rhizobacteria as minor pathogens in reducing crop growth. Dis Control Pest Manag. 1981;72:111–5.
Jagadeesh KS, Krishnaraj PU, Kulkarni JH. Suppression of deleterious bacteria by rhizobacteria and subsequent improvement of germination and growth of tomato seedlings. Curr Sci. 2006;91:1458–9.
Li J, Kremer RJ. Growth response of weed and crop seedlings to deleterious rhizobacteria. Biol Cont. 2006;39:58–65.
Gyaneshwar P, Naresh Kumar G, Parekh LJ, Poole PS. Role of soil microorganisms in improving P nutrition of plants. Plant Soil. 2002;245:83–93.
Baldwin DS. Organic phosphorus in the aquatic environment. Environ Chem. 2013;10:439–54.
Blom D, Fabbri C, Eberl L, Weisskopf L. Volatile-mediated killing of Arabidopsis thaliana by bacteria is mainly due to hydrogen cyanide. Appl Environ Microbiol. 2011;77:1000–8.
Astrom B. Role of bacterial cyanide production in differential reaction of plant cultivars to deleterious rhizosphere pseudomonads. Plant Soil. 1991;133:93–100.
Voisard C, Keel C, Haas D, Défago G. Cyanide production by Pseudomonas fluorescens helps suppress black root rot of tobacco under gnotobiotic conditions. EMBO J. 1989;8:351–8.
Singh M, Awasthi A, Soni SK, Singh R, Verma RK, Kalra A. Complementarity among plant growth promoting traits in rhizospheric bacterial communities promotes plant growth. Sci Rep. 2015. doi:10.1038/srep15500.
Liu W, Yang C, Shu W. Effects of plant growth-promoting bacteria isolated from copper tailings on plants in sterilized and non-sterilized tailings. Chemosphere. 2014;97:47–53.
Weisburg WG, Barns SM, Pelletier DA, Lane DJ. 16S ribosomal DNA amplification for phylogenetic study. J Bacteriol. 1991;173:697–703.
Lane DJ, Pace B, Olsen G, Stahl DA, Sogin ML, Pace NR. Rapid determination of 16S ribosomal RNA sequences for phylogenetic analysis. Proc Natl Acad Sci USA. 1985;82:6955–9.
Sambrook J, Russell DW. Molecular cloning: a laboratory manual. 3rd ed. New York: Cold Spring Harbor Laboratory Press; 2001.
Suzuki W, Sugawara M, Miwa K, Morikawa M. Plant growth-promoting bacterium Acinetobacter calcoaceticus P23 increases the chlorophyll content of the monocot Lemna minor (duckweed) and the dicot Lactuca sativa (lettuce). J Biosci Bioeng. 2014;118:41–4.
Orland OA, Aida VRT, Eugenia LL, Angelica RD. Characterization of indole acetic acid endophyte producers in autochthonous Lemna gibba plants from Xochimilco Lake. Afr J Biotechnol. 2015;14:604–11.
Goteti PK, Desai S, Emmanuel LDA, Taduri M, Sultana U. Phosphate solubilization potential of fluorescent Pseudomonas spp. isolated from diverse agro-ecosystems of India. Int J Soil Sci. 2014;9:101–10.
Schwyn B, Neilands JB. Universal chemical assay for the detection and determination of siderophores. Anal Biochem. 1987;160:47–56.
Saber FMA, Abdelhafez AA, Hassan EA, Ramadan EM. Characterization of fluorescent pseudomonads isolates and their efficiency on the growth promotion of tomato plant. Ann Agric Sci. 2015;60:131–40.
HI designed and performed the experiments, interpreted the results, and drafted the manuscript. MK designed the experiments, interpreted the results, drafted and revised the manuscript. MM interpreted the results, critically revised the manuscript, and supervised the study. MI designed the experiments, interpreted the results, drafted and revised the manuscript, and supervised the study. All authors read and approved the final manuscript.
We thank Dr. Tokitaka Oyama and Dr. Shogo Ito for giving valuable information about the plant material.
The datasets used and/or analyzed during the current study available from the corresponding author on reasonable request.
This study was supported by the Advanced Low Carbon Technology Research and Development Program (ALCA) of the Japan Science and Technology Agency (JST), and partially supported by Japan Society for the Promotion of Science (JSPS) KAKENHI Grant Number 26281041.
Division of Sustainable Energy and Environmental Engineering, Graduate School of Engineering, Osaka University, 2-1 Yamadaoka, Suita, Osaka, 565-0871, Japan
Hidehiro Ishizawa
, Masashi Kuroda
& Michihiko Ike
Division of Biosphere Science, Graduate School of Environmental Science, Hokkaido University, N10-W5, Kita-ku, Sapporo, 060-0810, Japan
Masaaki Morikawa
Correspondence to Michihiko Ike.
Additional file 1: Figure S1. Locations and descriptions of water sampled sites.
Journal of Mathematics of Kyoto University
J. Math. Kyoto Univ.
Volume 11, Number 2 (1971), 253-300.
One-parameter Subgroups and a Lie Subgroup of an Infinite Dimensional Rotation Group.
Hiroshi Sato
Let $\mathscr{S}_{r}$, be the real topological vector space of real-valued rapidly decreasing functions and let $\mathcal{O}(\mathscr{S}_{r})$ be the group of rotations of $\mathscr{S}_{r}$. Then every one-parameter subgroup of $\mathcal{O}(\mathscr{S}_{r})$ induces a flow in $\mathscr{S}_{r}^{*}$ the conjugate space of $\mathscr{S}_{r}$ with the Gaussian White Noise as an invariant measure.
The author constructed a group of functions which is isomorphic to a subgroup of $\mathcal{O}(\mathscr{S}_{r})$ and some of its one-parameter subgroups.
However, whether it contains sufficiently many one-parameter subgroups has remained an open question. In Part I of the present paper, we answer this question affirmatively by constructing two classes of one-parameter subgroups in a concrete way.
In Part II, we construct an infinite dimensional Lie subgroup of $\mathcal{O}(\mathscr{S}_{r})$ and the corresponding Lie algebra. Namely, we construct a topological subgroup $\mathfrak{G}$ of $\mathcal{O}(\mathscr{S}_{r})$ which is coordinated by the nuclear space $\mathscr{S}_{r}$ and the algebra $\mathfrak{a}$ of generators of one-parameter subgroups of $\mathfrak{G}$ which is closed under the commutation. Furthermore, we establish the exponential map from $\mathfrak{a}$ into $\mathfrak{G}$ and prove continuity.
First available in Project Euclid: 17 August 2009
Sato, Hiroshi. One-parameter Subgroups and a Lie Subgroup of an Infinite Dimensional Rotation Group. J. Math. Kyoto Univ. 11 (1971), no. 2, 253--300. doi:10.1215/kjm/1250523648. https://projecteuclid.org/euclid.kjm/1250523648
Realified L1-PCA for direction-of-arrival estimation: theory and algorithms
Panos P. Markopoulos, Nicholas Tsagkarakis, Dimitris A. Pados, and George N. Karystinos
Accepted: 31 May 2019
Subspace-based direction-of-arrival (DoA) estimation commonly relies on the Principal-Component Analysis (PCA) of the sensor-array recorded snapshots. Therefore, it naturally inherits the sensitivity of PCA against outliers that may exist among the collected snapshots (e.g., due to unexpected directional jamming). In this work, we present DoA-estimation based on outlier-resistant L1-norm principal component analysis (L1-PCA) of the realified snapshots and a complete algorithmic/theoretical framework for L1-PCA of complex data through realification. Our numerical studies illustrate that the proposed DoA estimation method exhibits (i) similar performance to the conventional L2-PCA-based method, when the processed snapshots are nominal/clean, and (ii) significantly superior performance when the snapshots are faulty/corrupted.
Data contamination
Direction-of-arrival estimation
Faulty measurements
L1 norm
Multiple signal classification
Principal-component analysis
Outlier resistance
Singular-value decomposition
Subspace data processing
Direction-of-arrival (DoA) estimation is a fundamental problem in signal processing theory with important applications in localization, navigation, and wireless communications [1–6]. Existing DoA-estimation methods can be broadly categorized as (i) likelihood maximization methods [7–13], (ii) spectral estimation methods, as in the early works of [14, 15], and (iii) subspace-based methods [16–19]. Subspace-based methods have enjoyed great popularity in applications, mostly due to their favorable trade-off between angle estimation quality and computational simplicity in implementation.
In their most common form, subspace-based DoA estimation methods rely on the L2-norm principal components (L2-PCs) of the recorded snapshots, which can be simply obtained by means of singular-value decomposition (SVD) of the sensor-array data matrix, or by eigenvalue decomposition (EVD) of the received-signal autocorrelation matrix [20]. Importantly, under nominal system operation (i.e., no faulty measurements or unexpected jamming/interfering sources), in additive white Gaussian noise (AWGN) environment, such methods are known to offer unbiased, asymptotically consistent DoA estimates [21–23] and exhibit high target-angle resolution ("super-resolution" methods).
However, in many real-world applications, the collected snapshot record may be unexpectedly corrupted by faulty measurements, impulsive additive noise [24–26], and/or intermittent directional interference. Such interference may appear either as an endogenous characteristic of the underlying communication system, as for example in frequency-hopped spread-spectrum systems [27], or as an exogenous factor (e.g., jamming). In cases of such snapshot corruption, L2-PC-based methods are well known to suffer from significant performance degradation [28–30]. The reason is that, as squared error-fitting minimizers, L2-PCs respond strongly to corrupted snapshots that appear in the processed data matrix as points that lie far from the nominal signal subspace [29]. Accordingly, DoA estimators that rely upon the L2-PCs are inevitably misled.
At the same time, research in signal processing and data analysis has shown that absolute error-fitting minimizers place much less emphasis on individual data points that diverge from the nominal signal subspace than square-fitting-error minimizers. Based on this observation, in the past few years, there have been extended documented research efforts toward defining and calculating L1-norm principal components (L1-PCs) of data under various forms of L1-norm optimality, including absolute-error minimization and projection maximization [31–46]. Recently, Markopoulos et al. [47, 48] calculated optimally the maximum-projection L1-PCs of real-valued data, for which up to that point only suboptimal approximations were known [36–38]. Experimental studies in [47–53] demonstrated the sturdy resistance of optimal L1-norm principal-component analysis (L1-PCA) against outliers, in various signal processing applications. Recently, [43, 45] introduced a heuristic algorithm for L1-PCA that was shown to attain state-of-the-art performance/cost trade-off. Another popular approach for outlier-resistant PCA is "Robust PCA" (RPCA), as introduced in [29] and further developed in [54, 55].
In this work, we consider system operation in the presence of unexpected, intermittent directional interference and propose a new method for DoA-estimation that relies on the L1-PCA of the recorded complex snapshots. Importantly, this work introduces a complete paradigm on how L1-PCA, defined and solved over the real field [47, 48], can be used for processing complex data, through a simple "realification" step. An alternative approach for L1-PCA of complex-valued data was presented in [46], where the authors reformulated complex L1-PCA into unimodular nuclear-norm maximization (UNM) and estimated its solution through a sequence of converging iterations. It is noteworthy that for the UNM introduced in [46], no general exact solver exists to date.
Our numerical studies show that the proposed L1-PCA-based DoA-estimation method attains performance similar to the conventional L2-PCA-based one (i.e., MUSIC [16]) in the absence of jamming sources, while it offers significantly superior performance in the case of unexpected, sporadic contamination of the snapshot record.
Preliminary results were presented in [56]. The present paper is significantly expanded to include (i) an Appendix section with all necessary technical proofs, (ii) important new theoretical findings (Proposition 3 on page 7), (iii) new algorithmic solutions (Section 3.5), and (iv) extensive numerical studies (Section 4).
The rest of the paper is organized as follows. In Section 2, we present the system model and offer a preliminary discussion on subspace-based DoA estimation. In Section 3, we describe in detail the proposed L1-PCA-based DoA-estimation method and present three algorithms for L1-PCA of the snapshot record. Section 4 presents our numerical studies on the performance of the proposed DoA estimation method. Finally, Section 5 holds some concluding remarks.
1.1 Notation
We denote by \(\mathbb {R}\) and \(\mathbb {C}\) the set of real and complex numbers, respectively, and by j the imaginary unit (i.e., j2=−1). ℜ{(·)}, ℑ{(·)}, (·)∗, (·)⊤, and (·)H denote the real part, imaginary part, complex conjugate, transpose, and conjugate transpose (Hermitian) of the argument, respectively. Bold lowercase letters represent vectors and bold uppercase letters represent matrices. diag(·) is the diagonal matrix formed by the entries of the vector argument. For any \(\mathbf {A} \in \mathbb {C}^{m \times n}, [\mathbf {A}]_{i,q}\) denotes its (i,q)th entry, [A]:,q its qth column, and [A]i,: its ith row; \(\left \| \mathbf {A} \right \|_{p} \stackrel {\triangle }{=} \left (\sum \nolimits _{i=1}^{m} \sum \nolimits _{q=1}^{n} | [\mathbf {A}]_{i,q} |^{p}\right)^{\frac {1}{p}}\) is the pth entry-wise norm of A, ∥A∥∗ is the nuclear norm of A (sum of singular values), span(A) represents the vector subspace spanned by the columns of A, rank(A) is the dimension of span(A), and null(A⊤) is the kernel of span(A) (i.e., the nullspace of A⊤). For any square matrix \(\mathbf {A} \in \mathbb {C}^{m \times m}, \text {det} (\mathbf {A})\) denotes its determinant, equal to the product of its eigenvalues. ⊗ and ⊙ are the Kronecker and entry-wise (Hadamard) product operators [57], respectively. 0m×n,1m×n, and Im are the m×n all-zero, m×n all-one, and size-m identity matrices, respectively. Also, \(\mathbf {E}_{m} \stackrel {\triangle }{=} \left [ \begin {array}{cc} 0 & -1 \\ 1 & 0 \end {array} \right ] \otimes \mathbf {I}_{m}\), for \(m \in \mathbb {N}_{\geq 1}\), and ei,m is the ith column of Im. Finally, E{·} is the statistical-expectation operator.
2 System model and preliminaries
We consider a uniform linear antenna array (ULA) of D elements. The length-D response vector to a far-field signal that impinges on the array with angle of arrival \(\theta \in (-\frac {\pi }{2}, \frac {\pi }{2}]\) with respect to (w.r.t.) the broadside is defined as
$$\begin{array}{*{20}l} \mathbf{s} (\theta) \stackrel{\triangle}{=} \left[1,~ e^{- j \frac{2 \pi f_{c} d \sin (\theta)}{c}},~ \ldots,~ e^{-j \frac{(D-1) 2 \pi f_{c} d \sin (\theta)}{c}}\right]^{\top} \end{array} $$
where fc is the carrier frequency, c is the signal propagation speed, and d is the fixed inter-element spacing of the array. We consider that the uniform inter-element spacing d is no greater than half the carrier wavelength, adhering to the Nyquist spatial sampling theorem; i.e., \(d \leq \frac {c}{2 f_{c}}\). Accordingly, for any two distinct angles of arrival \(\theta, \theta ' \in (-\frac {\pi }{2}, \frac {\pi }{2}]\), the corresponding array response vectors s(θ) and s(θ′) are linearly independent.
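As an illustration only (not part of the original development), the response vector in (1) can be computed as in the following Python sketch; the helper name steering_vector and the half-wavelength default spacing are our own assumptions.

```python
# Illustrative sketch (ours, not from the paper): ULA response vector s(theta) of (1),
# with the inter-element spacing d expressed as a fraction of the carrier wavelength.
import numpy as np

def steering_vector(theta, D, d_over_lambda=0.5):
    """Length-D ULA response for angle of arrival theta (radians)."""
    k = np.arange(D)                                   # element indices 0, ..., D-1
    return np.exp(-1j * 2 * np.pi * d_over_lambda * k * np.sin(theta))
```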
The ULA collects N narrowband snapshots from K sources of interest (targets) arriving from distinct DoAs \(\theta _{1}, \theta _{2}, \ldots, \theta _{K} \in \left (-\frac {\pi }{2}, \frac {\pi }{2} \right ], K < D \leq N\). We assume that the system may also experience intermittent directional interference from L independent sources (jammers), at angles \(\theta _{1}^{\prime }, \theta _{2}^{\prime }, \ldots, \theta _{L}^{\prime } \in \left (-\frac {\pi }{2}, \frac {\pi }{2} \right ]\). A schematic illustration of the targets and jammers is given in Fig. 1. We assume that \(\theta _{i} \neq \theta _{q}^{\prime }\), for any i∈{1,2,…,K} and q∈{1,2,…,L}. For any l∈{1,2,…,L}, the l-th jammer may be active during any of the N snapshots with some fixed and unknown to the receiver probability pl. Accordingly, the n-th down-converted received data vector is of the form
$$ {}\begin{aligned} \mathbf{y}_{n} &= \sum\limits_{k=1}^{K} x_{n,k} \mathbf{s} (\theta_{k}) + \sum\limits_{l=1}^{L} \gamma_{n,l} x_{n,l}^{\prime} \mathbf{s} \left(\theta_{l}^{\prime}\right)+ \mathbf{n}_{n} \in \mathbb{C}^{D \times 1},\\ n&=1,2, \ldots, N, \end{aligned} $$
Schematic representation of the K target sources and the L directional jammers
where, xn,k and \(x_{n,l}^{\prime } \in \mathbb {C}\) denote the statistically independent signal values of target k and jammer l, respectively, comprising power-scaled information symbols and flat-fading channel coefficients, and γn,l is the activity indicator for jammer l, modeled as a {0,1}-Bernoulli random variable with activation probability pl. \(\mathbf {n}_{n} \in \mathbb {C}^{D \times 1}\) accounts for additive white Gaussian noise (AWGN) with mean equal to zero and per-element variance σ2; i.e., \(\mathbf {n}_{n} \sim \mathcal {CN} \left (\mathbf {0}_{D}, \sigma ^{2} \mathbf {I}_{D}\right)\). Henceforth, we refer to the case of target-only presence in the collected snapshots (i.e., γn,l=0 for every n=1,2,…,N and every l=1,2,…,L) as normal system operation.
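For concreteness, a hypothetical snapshot generator following (2) is sketched below; it reuses the steering_vector() sketch above and models, purely for illustration, the target and jammer waveforms as zero-mean complex Gaussian with the stated SNRs over unit-variance noise.

```python
# Hypothetical generator for the model in (2): K targets always present, each jammer
# active per snapshot with Bernoulli probability p_l, plus AWGN (per-element variance 1).
# Assumes the steering_vector() sketch given earlier; the waveform statistics are our choice.
import numpy as np

def generate_snapshots(N, D, target_doas, target_snr_db, jammer_doas, jammer_snr_db, p_jam, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    cgauss = lambda size=None: (rng.standard_normal(size) + 1j * rng.standard_normal(size)) / np.sqrt(2)
    Y = np.zeros((D, N), dtype=complex)
    for n in range(N):
        for theta, snr in zip(target_doas, target_snr_db):
            Y[:, n] += np.sqrt(10 ** (snr / 10)) * cgauss() * steering_vector(theta, D)
        for theta_j, snr_j, p in zip(jammer_doas, jammer_snr_db, p_jam):
            if rng.random() < p:                       # Bernoulli activity indicator gamma_{n,l}
                Y[:, n] += np.sqrt(10 ** (snr_j / 10)) * cgauss() * steering_vector(theta_j, D)
        Y[:, n] += cgauss(D)                           # additive white Gaussian noise n_n
    return Y
```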
Defining \(\mathbf {x}_{n}\! \stackrel {\triangle }{=} [x_{n,1}, x_{n,2}, \ldots, x_{n,K}]^{\top }, \mathbf {x}_{n}^{\prime } \stackrel {\triangle }{=} [x_{n,1}^{\prime }, x_{n,2}^{\prime }, \ldots, x_{n,L}^{\prime }]^{\top }, {\mathbf {\Gamma }}_{n} \stackrel {\triangle }{=} \mathbf {diag}\left ([\gamma _{n,1}, \gamma _{n,2}, \ldots, \gamma _{n,L}]^{\top }\right)\), and \( \mathbf {S}_{\Phi } \stackrel {\triangle }{=} \left [ \mathbf {s} (\phi _{1}), \mathbf {s} (\phi _{2}), \ldots, \mathbf {s}(\phi _{m}) \right ] \in \mathbb {C}^{D \times m} \) for any size-m set of angles \(\Phi \stackrel {\triangle }{=} \{{\phi }_{1}, {\phi }_{2}, \dots, {\phi }_{m} \} \in \left (-\frac {\pi }{2}, \frac {\pi }{2} \right ]^{m}\),1 (2) can be rewritten as
$$ {}\mathbf{y}_{n} = \mathbf{S}_{\Theta} \mathbf{x}_{n} + \mathbf{S}_{\Theta^{\prime}} {\mathbf{\Gamma}}_{n} \mathbf{x}_{n}^{\prime} + \mathbf{n}_{n} \in \mathbb{C}^{D \times 1}, \,\,n=1,2, \ldots, N, $$
for \(\Theta \stackrel {\triangle }{=} \left \{{\theta }_{1}, {\theta }_{2}, \dots, {\theta }_{K} \right \}\) and \(\Theta ^{\prime } \stackrel {\triangle }{=} \left \{{\theta }_{1}^{\prime }, {\theta }_{2}^{\prime }, \dots, {\theta }_{L}^{\prime }\right \}\). The goal of a DoA estimator is to identify correctly all angles in the DoA set Θ. Importantly, by the Vandermonde structure of SΘ, it holds that
$$\begin{array}{*{20}l} {\kern35pt}\mathbf{s} (\phi) \subseteq \text{span}(\mathbf{S}_{\Theta}) \Leftrightarrow \phi \in \Theta, \end{array} $$
for any \( \phi \in \left (-\frac {\pi }{2}, \frac {\pi }{2} \right ]\) [16]. That is, given \(\mathcal {S} \stackrel {\triangle }{=} \text {span}(\mathbf {S}_{\Theta })\), the receiver can decide accurately for any candidate angle \( \phi \in \left (-\frac {\pi }{2}, \frac {\pi }{2} \right ]\) whether it is a DoA in Θ, or not.
2.1 DoA estimation under normal system operation
Considering for a moment pl=0 for every l∈{1,2,…,L}, (2) becomes
$$\begin{array}{*{20}l} \mathbf{y}_{n} = \mathbf{S}_{\Theta} \mathbf{x}_{n} + \mathbf{n}_{n} \in \mathbb{C}^{D \times 1}, ~~n=1,2, \ldots, N \end{array} $$
with autocorrelation matrix \( \mathbf {R} \stackrel {\triangle }{=} E \left \{\mathbf {y}_{n} \mathbf {y}_{n}^{\mathrm {H}} \right \} = \mathbf {S}_{\Theta } E \left \{\mathbf {x}_{n} \mathbf {x}_{n}^{\mathrm {H}}\right \} \mathbf {S}_{\Theta }^{\mathrm {H}} + \sigma ^{2} \mathbf {I}_{D} \). Certainly, \(\mathcal {S} = \text {span}(\mathbf {S}_{\Theta })\) coincides with the K-dimensional principal subspace of R, spanned by its K highest-eigenvalue eigenvectors [5]. Therefore, being aware of R, the receiver could obtain \(\mathcal {S}\) through standard EVD and then conduct accurate DoA estimation by means of (4). However, in practice, the nominal received-signal autocorrelation matrix R is unknown to the receiver and sample-average estimated as \(\hat {\mathbf {R}} = \frac {1}{N} \sum \nolimits _{n=1}^{N} \mathbf {y}_{n} \mathbf {y}_{n}^{\mathrm {H}}\) [5, 16]. Accordingly, \(\mathcal {S}\) is estimated by the span of the K highest-eigenvalue eigenvectors of \(\hat {\mathbf {R}}\), which coincide with the K highest-singular-value left singular-vectors of Y=△[y1,y2,…,yN]. The eigenvectors of \(\hat {\mathbf {R}}\), or left singular-vectors of Y, are also commonly referred to as the L2-PCs of Y, since they constitute a solution to the L2-PCA problem
$$\begin{array}{*{20}l} {\kern30pt}\mathbf{Q}_{L2} = \underset{{\mathbf{Q} \in \mathbb{C}^{D \times K},~\mathbf{Q}^{\mathrm{H}}\mathbf{Q} = \mathbf{I}_{K}}}{\text{argmax}}~ \left\| \mathbf{Q}^{\mathrm{H}} \mathbf{Y} \right\|_{2}^{2}. \end{array} $$
In accordance to (4), the DoA set Θ is estimated by the arguments that yield the K local maxima (peaks) of the familiar MUSIC [16] spectrum
$$\begin{array}{*{20}l} {}P (\phi) = \left\| \left(\mathbf{I}_{D} - \mathbf{Q}_{L2} \mathbf{Q}_{L2}^{\mathrm{H}} \right) \mathbf{s} (\phi) \right\|_{2}^{-2},~~\phi \in \left(-\frac{\pi}{2}, \frac{\pi}{2}\right], \end{array} $$
which clarifies why MUSIC is, in fact, an L2-PCA-based DoA estimation method. Certainly, as N increases asymptotically, \(\hat {\mathbf {R}}\) tends to R, QL2 tends to span SΘ, and P(ϕ) goes to infinity for every ϕ∈Θ, so that finding its peaks becomes a criterion equivalent to (4). Therefore, for sufficient N, L2-PCA-based MUSIC is well known to attain high performance in normal system operation.
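A minimal sketch of this L2-PCA (MUSIC) pipeline follows, assuming a D x N complex snapshot matrix Y and a steering-vector callable; the helper name music_spectrum is ours.

```python
# Minimal L2-PCA (MUSIC) sketch per (6)-(7): the K dominant left singular vectors of Y
# span the estimated signal subspace; the spectrum is the inverse squared distance of
# s(phi) from that subspace. 'steering' is any callable phi -> length-D response vector.
import numpy as np

def music_spectrum(Y, K, grid, steering):
    U, _, _ = np.linalg.svd(Y)                 # left singular vectors of the data matrix
    Q = U[:, :K]                               # K L2-principal components (Q_L2)
    P = np.empty(len(grid))
    for i, phi in enumerate(grid):
        s = steering(phi)
        r = s - Q @ (Q.conj().T @ s)           # residual outside the signal subspace
        P[i] = 1.0 / np.linalg.norm(r) ** 2    # MUSIC pseudo-spectrum value
    return P
```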
2.2 Complications in the presence of unexpected jamming
In this work, we focus on the case where pl>0 for all l, so that some snapshots in Y are corrupted by unexpected, unknown, directional interference, as modeled in (2). In this case, the K eigenvectors of \(\mathbf {R} = E \left \{\mathbf {y}_{n} \mathbf {y}_{n}^{\mathrm {H}}\right \}\) do not span \(\mathcal {S}\) any more. Thus, the K eigenvectors of \(\hat {\mathbf {R}}\) or singular-vectors of Y would be of no use, even for very high sample-support N. In fact, interference-corrupted snapshots in Y may constitute outliers with respect to \(\mathcal {S}\). Accordingly, due to the well documented high responsiveness of L2-PCA in (6) to outlying data, QL2 may diverge significantly from \(\mathcal {S}\) [29, 48], rendering DoA estimation by means of (7) highly inaccurate. Below, we introduce a novel method that exploits the outlier-resistance of L1-PCA [36, 47, 48] to offer improved DoA estimates.
3 Proposed DoA estimation method
3.1 Operation on realified snapshots
In order to employ L1-PCA algorithms that are defined for the processing of real-valued data, the proposed DoA estimation method operates on real-valued representations of the recorded complex snapshots in (2), similar to a number of previous works in the field [58–60]. In particular, we define the real-valued representation of any complex-valued matrix \(\mathbf {A} \in \mathbb {C}^{m \times n}\), by concatenating its real and imaginary parts, as
$$\begin{array}{*{20}l} {\kern30pt}\overline{\mathbf{A}} & \stackrel{\triangle}{=} \left[ \begin{array}{lr} \Re\{\mathbf{A}\}, & -\Im\{\mathbf{A}\} \\ \Im\{\mathbf{A}\}, & \Re\{\mathbf{A}\} \end{array} \right] \in \mathbb{R}^{2m \times 2n}. \end{array} $$
In Lie algebras and representation theory, this transition from \(\mathbb {C}^{m \times n}\) to \( \mathbb {R}^{2m \times 2n}\) is commonly referred to as complex-number realification [61, 62] and is a method that allows for any complex system of equations to be converted into (and solved through) a corresponding real system [63]. Lemmas 1, 2, and 3 presented in the Appendix provide three important properties of realification. By (8) and Lemma 1, the nth complex snapshot yn in (3) can be realified as
$$\begin{array}{*{20}l} {\kern20pt}\overline{\mathbf{y}}_{n} = \overline{\mathbf{S}}_{\Theta} \overline{\mathbf{x}}_{n} + \overline{\mathbf{S}}_{\Theta^{\prime}} \overline{\mathbf{\Gamma}}_{n} \overline{\mathbf{x}^{\prime}}_{n} + \overline{\mathbf{n}}_{n} \in \mathbb{R}^{2D \times 2}. \end{array} $$
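The realification map (8) is mechanical; a short sketch (ours) is given below and is reused in later snippets.

```python
# Sketch of the realification map (8): a complex m x n matrix A becomes the
# 2m x 2n real matrix [[Re A, -Im A], [Im A, Re A]]. A must be 2-D
# (reshape a vector to a column first).
import numpy as np

def realify(A):
    top = np.hstack([A.real, -A.imag])
    bot = np.hstack([A.imag,  A.real])
    return np.vstack([top, bot])
```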
In accordance with Lemma 2, the rank of \( \overline {\mathbf {S}}_{\Theta } \) is 2K and, hence, \(\mathcal {S}_{R} \stackrel {\triangle }{=} \text {span} \left (\overline {\mathbf {S}}_{\Theta }\right) \) is a 2K-dimensional subspace wherein the K realified signal components of interest with angles of arrival in Θ lie. The following Proposition, deriving straightforwardly from (4) by means of Lemma 1 and Lemma 2, highlights the utility of \(\mathcal {S}_{R}\) for estimating the target DoAs.
Proposition 1
For any \( \phi \in \left (-\frac {\pi }{2}, \frac {\pi }{2} \right ]\), it holds that
$$\begin{array}{*{20}l} {\kern35pt}\text{span}\left(\overline{\mathbf{s}} (\phi) \right) \subseteq \mathcal{S}_{R} ~ \Leftrightarrow~ \phi \in \Theta. \end{array} $$
Set equality may hold only if K=1. ■
By Proposition 1, given an orthonormal basis \(\mathbf {Q}_{R} \in \mathbb {R}^{2D \times 2K}\) that spans \(\mathcal {S}_{R}\), the receiver can decide accurately whether some \(\phi \in \left (-\frac {\pi }{2}, \frac {\pi }{2} \right ]\) is a target DoA, or not, by means of the criterion
$$\begin{array}{*{20}l} \left(\mathbf{I}_{2D} - \mathbf{Q}_{R}\mathbf{Q}_{R}^{\top} \right) \overline{\mathbf{s}}(\phi) = \mathbf{0}_{2D \times 2} ~ \Leftrightarrow ~ ~\phi \in \Theta. \end{array} $$
Similar to the complex-data case presented above, in normal system operation, \(\mathcal {S}_{R}\) coincides with the span of the K dominant eigenvectors of \( \mathbf {R}_{R} \stackrel {\triangle }{=} \mathrm {E} \left \{\overline {\mathbf {y}}_{n} \overline {\mathbf {y}}_{n}^{\top } \right \}\). When the receiver, instead of RR, possesses only the realified snapshot record \(\overline {\mathbf {Y}}, \mathcal {S}_{R}\) can be estimated as the span of
$$\begin{array}{*{20}l} \mathbf{Q}_{R,L2} = \underset{{\mathbf{Q} \in \mathbb{R}^{2D \times 2K},~\mathbf{Q}^{\top}\mathbf{Q} = \mathbf{I}_{2K}}}{\text{argmax}}~ \left\| \mathbf{Q}^{\top} \overline{\mathbf{Y}} \right\|_{2}^{2}. \end{array} $$
Then, in accordance with (11), the target DoAs can be estimated as the arguments that yield the K highest peaks of the spectrum
$$ \begin{aligned} P_{R}(\phi; ~ {\mathbf{Q}}_{R,L2}) & \stackrel{\triangle}{=} 2\left\| \left(\mathbf{I}_{2D} - {\mathbf{Q}}_{R,L2} {\mathbf{Q}}_{R,L2}^{\top} \right) \overline{\mathbf{s}} (\phi) \right\|_{2}^{-2},\\ &~~\phi \in (-\frac{\pi}{2}, \frac{\pi}{2} ]. \end{aligned} $$
Similar to (6), the solution to (12) can be obtained by singular-value decomposition (SVD) of \(\overline {\mathbf {Y}}\). Interestingly, the L2-PCA-based DoA estimator of (13) is equivalent to the complex-field MUSIC estimator presented in Section 2. In fact, as we prove in the Appendix,
$$\begin{array}{*{20}l} {\kern20pt}P_{R}(\phi; {\mathbf{Q}}_{R,L2}) = P(\phi) ~~\forall \phi \in \left(-\frac{\pi}{2},\frac{\pi}{2} \right]. \end{array} $$
Hence, exhibiting performance identical to that of MUSIC, (12) can offer highly accurate estimates of the target DoAs under normal system operation. However, when Y contains corrupted snapshots, the L2-PCA-calculated span(QR,L2) is a poor approximation to \(\mathcal {S}_{R}\) and DoA estimation by means of PR(ϕ;QR,L2) tends to be highly inaccurate. In the following subsection, we present an alternative, L1-PCA-based method for obtaining an outlier-resistant estimate of Θ.
3.2 DoA estimation by realified L1-PCA
Over the past few years, L1-PCA has been shown to be far more resistant than L2-PCA against outliers in the data matrix [31–40, 47, 48]. In this work, we propose the use of a DoA-estimation spectrum analogous to that in (13) that is formed by the L1-PCs of \(\overline {\mathbf {Y}}\). Specifically, the proposed method has two steps. First, we obtain the L1-PCs of \(\overline {\mathbf {Y}}\), solving the L1-PCA problem
$$\begin{array}{*{20}l} \mathbf{Q}_{R,L1} = \underset{\mathbf{Q} \in \mathbb{R}^{2D \times 2K},~ \mathbf{Q}^{\top} \mathbf{Q} = \mathbf{I}_{2K}}{\text{argmax}} \sum\limits_{n=1}^{N} \left\| \mathbf{Q}^{\top} \overline{\mathbf{y}}_{n} \right\|_{1}. \end{array} $$
That is, (15) searches for the subspace that maximizes data presence, quantified as the aggregate L1-norm of the projected points.
Then, similarly to MUSIC, we estimate the target angles in Θ by the K highest peaks of the L1-PCA-based spectrum
$$ \begin{aligned} P_{R}(\phi; \mathbf{Q}_{R,L1}) & = 2\left\| \left(\mathbf{I}_{2D} - \mathbf{Q}_{R,L1} \mathbf{Q}_{R,L1}^{\top}\right) \overline{\mathbf{s}} (\phi) \right\|_{2}^{-2},\\ &~~\phi \in (-\frac{\pi}{2}, \frac{\pi}{2} ]. \end{aligned} $$
In accordance to standard practice, to find the K highest peaks of (16), we examine every angle in \(\left \{\phi =-\frac {\pi }{2}+k \Delta \phi :~ k\in \left \{1, 2, \ldots, \left \lfloor \frac {\pi }{\Delta \phi } \right \rfloor ~ \right \}\right \}\), for some small scanning step Δϕ>0. Next, we place our focus on solving the L1-PCA in (15).
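The scanning step just described can be sketched as follows (our illustration, reusing the realify() and steering_vector() helpers above); the default Δϕ of 0.1 degrees is an arbitrary choice.

```python
# Grid scan of the L1-PCA spectrum (16). Q_L1 is a 2D x 2K realified L1-PC basis;
# reuses the realify() and steering_vector() sketches above. Peak picking is a
# simple interior local-maximum test, kept deliberately naive.
import numpy as np

def l1_spectrum_peaks(Q_L1, K, D, delta_phi=np.pi / 1800):
    grid = -np.pi / 2 + delta_phi * np.arange(1, int(np.pi / delta_phi) + 1)
    P = np.empty(len(grid))
    for i, phi in enumerate(grid):
        sbar = realify(steering_vector(phi, D).reshape(-1, 1))     # 2D x 2 realified s(phi)
        r = sbar - Q_L1 @ (Q_L1.T @ sbar)
        P[i] = 2.0 / np.linalg.norm(r, 'fro') ** 2
    peaks = [i for i in range(1, len(P) - 1) if P[i] > P[i - 1] and P[i] > P[i + 1]]
    peaks = sorted(peaks, key=lambda i: P[i], reverse=True)[:K]    # K highest peaks
    return grid[peaks], P
```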
3.3 Principles of realified L1-PCA
Although L1-PCA is not a new problem in the literature (see, e.g., [36–38]), its exact optimal solution was unknown until the recent work in [48], where the authors proved that (15) is formally NP-hard and offered the first two exact algorithms for solving it. Proposition 2 below, originally presented in [48] for real-valued data matrices of general structure (i.e., not having necessarily the realified structure of \(\overline {\mathbf {Y}}\)) translates L1-PCA in (15) to a nuclear-norm maximization problem over the binary field.
Proposition 2
If Bopt is a solution to
$$\begin{array}{*{20}l} {\kern55pt}\underset{\mathbf{B} \in \{\pm 1\}^{2N \times 2K}}{\text{maximize}}~\| \overline{\mathbf{Y}} \mathbf{B}\|_{*}^{2} \end{array} $$
and \(\overline {\mathbf {Y}} \mathbf {B}_{\text {opt}}\) admits SVD \(\overline {\mathbf {Y}} \mathbf {B}_{\text {opt}} \overset {\text {}}{=} \mathbf {U} \mathbf {\Sigma }_{2K \times 2K} \mathbf {V}^{\top }\), then
$$\begin{array}{*{20}l} {\kern63pt}{\mathbf{Q}}_{R,L1} = \mathbf{U} \mathbf{V}^{\top} \end{array} $$
is a solution to (15). Moreover, \(\left \| {\mathbf {Q}}_{R,L1}^{\top } \overline {\mathbf {Y}}\right \|_{1} = \left \| \overline {\mathbf {Y}} \mathbf {B}_{\text {opt}}\right \|_{*}\). ■
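In code, the SVD step of the proposition is a one-liner (sketch; function name ours).

```python
# Sketch of the Proposition 2 step: given a binary B_opt, recover the L1-PC basis from
# the SVD of Ybar @ B_opt (a Procrustes-type projection onto orthonormal matrices).
import numpy as np

def pcs_from_binary(Ybar, B_opt):
    U, _, Vt = np.linalg.svd(Ybar @ B_opt, full_matrices=False)
    return U @ Vt                               # orthonormal 2D x 2K matrix Q_{R,L1}
```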
Since QR,L1 can be obtained by Bopt via standard SVD, L1-PCA is in practice equivalent to a combinatorial optimization problem over the 4NK binary variables in B. The authors in [48] presented two algorithms for exact solution of (17), defined upon real-valued data matrices of general structure.
In this work, for the first time, we simplify the solutions of [48] in view of the special, realified structure of \(\overline {\mathbf {Y}}\). Specifically, in the following Proposition 3, we show that for K=1 we can exploit the special structure of \(\overline {\mathbf {Y}}\) and reduce (17) to a binary quadratic-form maximization problem over half the number of binary variables (i.e., 2N instead of 4N). A proof for Proposition 3 is provided in the Appendix.
Proposition 3
If bopt is a solution to
$$\begin{array}{*{20}l} {\kern63pt}\underset{\mathbf{b} \in \{\pm 1\}^{2N \times 1}}{\text{maximize}}~\| \overline{\mathbf{Y}} \mathbf{b}\|_{2}^{2}, \end{array} $$
then [bopt, ENbopt] is a solution to
$$\begin{array}{*{20}l} {\kern63pt}\underset{\mathbf{B} \in \{\pm 1\}^{2N \times 2}}{\text{maximize}}~\| \overline{\mathbf{Y}} \mathbf{B}\|_{*}^{2}. \end{array} $$
with \( \| \overline {\mathbf {Y}}~[\mathbf {b}_{\text {opt}}, \mathbf {E}_{N} \mathbf {b}_{\text {opt}}]\|_{*}^{2} = 4~ \|\overline {\mathbf {Y}} \mathbf {b}_{\text {opt}} \|_{2}^{2} \). ■
In view of Propositions 2 and 3, QR,L1 derives easily from the solution of
$$\begin{array}{*{20}l} {\kern63pt}\underset{\mathbf{B} \in \{\pm 1 \}^{2N \times m}}{\text{maximize}}~\| \overline{\mathbf{Y}} \mathbf{B} \|_{*}, \end{array} $$
for m=1, if K=1, or m=2K, if K>1.
Since (21) is a combinatorial problem, the conceptually simplest approach for solving it is an exhaustive search (possibly in parallel fashion) over all elements of its feasibility set {±1}2N×m. By means of this method, one should conduct \(2^{2Nm}\) nuclear-norm evaluations (e.g., by means of SVD of \(\overline {\mathbf {Y}} \mathbf {B}\)) to identify the optimum argument in the feasibility set; thus, the asymptotic complexity of this method is \(\mathcal {O}\left (2^{2Nm}\right)\). Exploiting the well-known nuclear-norm properties of column-permutation and column-negation invariance, we can expedite practically the exhaustive procedure by searching for a solution to (21) in the set of all binary matrices that are column-wise built by the elements of a size-m multiset of {b∈{±1}2N: [b]1=1}. By this modification, the exact number of binary matrices examined (thus, the number of nuclear-norm evaluations) decreases from \(2^{2Nm}\) to \({{2^{2N-1}+2K-1}\choose {m}}\). Of course, exhaustive-search approaches, being of exponential complexity in N, become impractical as the number of snapshots increases. For completeness, in Fig. 2, we provide a pseudocode for the exhaustive-search algorithm presented above.
Algorithm for optimal computation of the 2K L1-PCs of rank- 2D data matrix \(\overline {\mathbf {Y}}_{2D \times 2N}\) with exponential (w.r.t. N) asymptotic complexity \({\mathcal {O}}\left (2^{2Nm}\right)\) (m=1, for K=1; m=2K, for K>1)
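A brute-force sketch of (21) for toy sizes is given below (our illustration; it enumerates all of {±1}2N×m without the multiset reduction, so it is usable only for very small N and m).

```python
# Brute-force sketch of (21): enumerate every B in {+-1}^{2N x m} and keep the
# nuclear-norm maximizer. Exponential in N*m; intended only as a sanity check.
import itertools
import numpy as np

def exhaustive_l1pca_binary(Ybar, m):
    twoN = Ybar.shape[1]
    best_B, best_val = None, -np.inf
    for bits in itertools.product([-1.0, 1.0], repeat=twoN * m):
        B = np.asarray(bits).reshape(twoN, m)
        val = np.linalg.norm(Ybar @ B, 'nuc')       # nuclear norm via SVD
        if val > best_val:
            best_val, best_B = val, B
    return best_B, best_val
```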
For the case of engineering interest where N>D and D is a constant, the authors in [48] presented a polynomial-cost algorithm that solves (21) with complexity \(\mathcal {O}(N^{2Dm})\). In the following subsection, we exploit further the structure of \(\overline {\mathbf {Y}}\) and reduce significantly the computational cost of this algorithm.
3.4 Polynomial-cost realified L1-PCA
The authors in [48] showed that, according to Proposition 2, a solution to (21) can be found among the binary matrices that draw columns from
$$\begin{array}{*{20}l} \mathcal{B} \stackrel{\triangle}{=} \left\{\text{sgn} \left(\overline{\mathbf{Y}}^{\top} \mathbf{a}\right):~\mathbf{a} \in \Omega_{2D}\right\} \subseteq \{\pm 1 \}^{2N \times 1} \end{array} $$
where \(\Omega _{2D} \stackrel {\triangle }{=} \left \{\mathbf {a} \in \mathbb {R}^{2D \times 1}:~ \|\mathbf {a} \|_{2}=1, [\!\mathbf {a}]_{2D} >0\right \}\) –with the positivity constraint in the last entry of a deriving from the invariance of the nuclear norm to column negations of its matrix argument. That is, a solution to (21) belongs to the mth Cartesian power of \(\mathcal {B}, \mathcal {B}^{m} \subseteq \{\pm 1 \}^{2N \times m}\).
In addition, [48] pointed out that, since the nuclear-norm maximization is also invariant to column permutations of the argument, we can maintain problem equivalence while further narrowing down our search to the elements of a set \( \tilde {\mathcal {B}}\), subset of \(\mathcal {B}^{m}\), that contains the \({{|\mathcal {B}| +m-1}\choose {m}}\) binary matrices that are built by the elements of all size-m multisets of \(\mathcal {B}\). That is, we can obtain a solution to (21) by solving instead
$$\begin{array}{*{20}l} {\kern60pt}\underset{\mathbf{B} \in \tilde{\mathcal{B}}} {\text{maximize}} \left\| \overline{\mathbf{Y}} \mathbf{B}\right\|_{*}^{2}. \end{array} $$
Importantly, \(|\tilde {\mathcal {B}}| = {{|\mathcal {B}| +m-1}\choose {m}} < |\mathcal {B}|^{m} = |\mathcal {B}^{m}|\). The exact multiset-extraction procedure for obtaining \(\tilde {\mathcal {B}}\) from \(\mathcal {B}\) follows.
Calculation of \(\tilde {\mathcal {B}}\) from \(\mathcal {B}\) [48]. For every \( i \in \left \{1,2,\ldots, \binom {|\mathcal {B}| + m-1 }{m} \right \}\), we define a distinct indicator function \( f_{i} : \mathcal {B} \mapsto \{0, 1, \ldots, m\}\) that assigns to every \( \mathbf {b} \in \mathcal {B}\) a natural number fi(b)≤m, such that \(\sum \nolimits _{\mathbf {b}\in \mathcal {B}} f_{i}(\mathbf {b})=m\). Then, for every \(i \in \left \{1,2,\ldots, {{|\mathcal {B}| +m-1}\choose {m}}\right \}\), we define a unique binary matrix Bi∈{±1}2N×m such that every \(\mathbf {b} \in \mathcal {B}\) appears exactly fi(b) times among the columns of Bi. Finally, we define the sought-after set as \(\tilde {\mathcal {B}} \stackrel {\triangle }{=} \left \{\mathbf {B}_{1}, \mathbf {B}_{2}, \ldots, \mathbf {B}_{{{|\mathcal {B}| +m-1}\choose {m}}}\right \}\).
Evidently, the cost to solve (23), and thus (21), amounts to the cost of constructing the feasibility set \(\tilde {\mathcal {B}}\) added to the cost of conducting nuclear-norm evaluations (through SVD) over all its elements. Therefore, the cost to solve (23) depends on the construction cost and cardinality of \(\tilde {\mathcal {B}}\). As seen above, \(|\tilde {\mathcal {B}}| = {{|\mathcal {B}| + m-1}\choose {m}}\) and \(\tilde {\mathcal {B}}\) can be constructed online, by multiset selection on \(\mathcal {B}\), with negligible computational cost. Therefore, for determining the cardinality and construction cost of \(\tilde {\mathcal {B}}\), we have to find the cardinality and construction cost of \(\mathcal {B}\).
Next, we present a novel method to construct \({\mathcal {B}}\), different than the one in [48], that exploits the realified structure of \(\overline {\mathbf {Y}}\) to achieve lower computational cost.
Construction of \(\mathcal {B}\), in view of the structure of \(\overline {\mathbf {Y}}\).
Considering that any group of m≤2D columns of \(\overline {\mathbf {Y}}\) spans a m-dimensional subspace, for each index set \(\mathcal {X} \subseteq \{1, 2, \ldots, 2N \}\) –elements in ascending order (e.a.o.)– of cardinality \(|\mathcal {X}| = 2D-1\), we denote by \(\mathbf {z}(\mathcal {X})\) the unique left-singular vector of \(\left [\overline {\mathbf {Y}}\right ]_{:,\mathcal {X}}\) that corresponds to zero singular value. Calculation of \(\mathbf {z}(\mathcal {X})\) can be achieved either by means of SVD or by simple Gram-Schmidt Orthonormalization (GMO) of \(\left [\overline {\mathbf {Y}}\right ]_{:,\mathcal {X}}\) –both SVD and GMO are of constant cost with respect to N. Accordingly, we define
$$\begin{array}{*{20}l} {\kern25pt}\mathbf{c}(\mathcal{X}) \stackrel{\triangle}{=} \text{sgn}([\mathbf{z}(\mathcal{X})]_{2D})\mathbf{z}(\mathcal{X}) \in \Omega_{2D}. \end{array} $$
Being a scaled version of \(\mathbf {z}(\mathcal {X}), \mathbf {c}(\mathcal {X}) \) also belongs to \( \text {null}\left (\left [\overline {\mathbf {Y}}\right ]_{:,\mathcal {X}}^{\top }\right) \), satisfying \( \left [\overline {\mathbf {Y}}\right ]_{:,\mathcal {X}}^{\top } \mathbf {c}(\mathcal {X}) = \mathbf {0}_{2D-1} \label {c}. \) Next, we define the set of binary vectors
$$\begin{array}{*{20}l} {}\mathcal{B}(\mathcal{X})\! \stackrel{\triangle}{=} \!\left\{\!\mathbf{b} \in \{\pm 1 \}^{2N \times 1}\!:~ [\!\mathbf{b}]_{\mathcal{X}^{c}}=\text{sgn}\left(\![\!\overline{\mathbf{Y}}]_{:, \mathcal{X}^{c}}^{\top}\mathbf{c} (\mathcal{X})\! \right) \!\right\} \end{array} $$
of cardinality \(|\mathcal {B}(\mathcal {X})| = 2^{2D-1}\), where \(\mathcal {X}^{c} \stackrel {\triangle }{=} \{1, 2, \ldots, 2N \} \setminus \mathcal {X}\) (e.a.o.) is the complement of \(\mathcal {X}\). In [48], the authors showed that
$$\begin{array}{*{20}l} {\kern48pt}\mathcal{B} = \underset{\underset{|\mathcal{X}| = 2D-1}{\mathcal{X} \subseteq \{1, 2, \ldots, 2N \}}}{\bigcup} \mathcal{B} (\mathcal{X}). \end{array} $$
Since \(\mathcal {X}\) can take \({{2N}\choose {2D-1}}\) different values, \(\mathcal {B}\) can be built by (26) through \({{2N}\choose {2D-1}}\) nullspace calculations in the form of (24), with cost \({{2N}\choose {2D-1}}D^{3} \in \mathcal {O}\left (N^{2D}\right)\). Accordingly, \(\mathcal {B}\) consists of
$$\begin{array}{*{20}l} {\kern25pt}|\mathcal{B}| \leq 2^{2D-1} {{2N}\choose{2D-1}} \in \mathcal{O}\left(N^{2D-1}\right) \end{array} $$
elements. In fact, in view of [64], the exact cardinality of \(\mathcal {B}\) is
$$\begin{array}{*{20}l} {\kern25pt}|\mathcal{B}| = \sum\limits_{d=0}^{2D-1}{{2N-1}\choose{d}} \in \mathcal{O}\left(N^{2D-1}\right). \end{array} $$
Next, we show for the first time how we can reduce the cost of calculating \(\mathcal {B}\), exploiting the realified structure of \(\overline {\mathbf {Y}}\).
Consider \(\mathcal {X}_{1} \subseteq \{1, 2, \ldots, N \}\) (e.a.o.), \(\mathcal {X}_{2} \subseteq \{N+1, N+2, \ldots, 2N \}\) (e.a.o.), and their union \(\mathcal {X}_{A} = \{\mathcal {X}_{1}, \mathcal {X}_{2} \} \) (e.a.o.), such that \(|\mathcal {X}_{1}| < D\) and \(|\mathcal {X}_{A}|=|\mathcal {X}_{1}| + |\mathcal {X}_{2}| = 2D-1\). Define also the set of indices \(\mathcal {X}_{B} = \{\mathcal {X}_{1} + N, \mathcal {X}_{2} - N\}\) (e.a.o.) with \(|\mathcal {X}_{B}| = 2D-1\). By the structure of \(\overline {\mathbf {Y}}\), it is straightforward that
$$\begin{array}{*{20}l} {\kern23pt}\mathbf{c}(\mathcal{X}_{B}) = \mathbf{E}_{D} \mathbf{c}(\mathcal{X}_{A}) \text{sgn}([\mathbf{c}(\mathcal{X}_{A})]_{D}). \end{array} $$
In turn, by the definition in (25) and (29), it holds that
$$\begin{array}{*{20}l} {\kern22pt}\mathcal{B}(\mathcal{X}_{B}) = \text{sgn}([\mathbf{c}(\mathcal{X}_{A})]_{D}) \mathbf{E}_{N} \mathcal{B}(\mathcal{X}_{A}). \end{array} $$
The proof of (29) and (30) is offered in the Appendix. Notice now that, for every \(\mathcal {X} \subset \{1, 2, \ldots, 2N \}\) with \(|\mathcal {X}| = 2D-1\), there exist \(\mathcal {X}_{1} \subset \{1, 2, \ldots, N\}\) and \(\mathcal {X}_{2} \subset \{N+1, N+2, \ldots, 2N\}\), satisfying \(|\mathcal {X}_{1}| < D\) and \(|\mathcal {X}_{1}| + |\mathcal {X}_{2}|=2D-1\), such that
$$\begin{array}{*{20}l} \text{either}\ \mathcal{X} = \{\mathcal{X}_{1}, \mathcal{X}_{2} \}, ~~\text{or}~~ \mathcal{X} = \{\mathcal{X}_{1} +N, \mathcal{X}_{2} -N\}. \end{array} $$
Thus, by (26) and (31), \(\mathcal {B}\) can be constructed as
$$ {\begin{aligned} \mathcal{B} &= \bigcup_{d=0}^{D-1} \hspace{0.1cm} \underset{\underset{\mathcal{X}_{2} \subset \{\{1, 2, \ldots, N \}+N\}, \; |\mathcal{X}_{2}|=2D-1-d}{\mathcal{X}_{1} \subset \{1, 2, \ldots, N \}, \; |\mathcal{X}_{1}|=d}}{\bigcup} \left\{\mathcal{B} (\{\mathcal{X}_{1}, \mathcal{X}_{2}\}), \mathcal{B} (\{\mathcal{X}_{1} +N, \mathcal{X}_{2} -N \})\right\}. \end{aligned}} $$
In view of (30), \(\mathcal {B} (\{\mathcal {X}_{1} +N, \mathcal {X}_{2} -N\}) \) can be directly constructed from \(\mathcal {B} (\{\mathcal {X}_{1}, \mathcal {X}_{2} \})\) with negligible computational overhead. In addition, for \(|\mathcal {X}_{1}| < D\) and \(|\mathcal {X}_{1}|+|\mathcal {X}_{2}| =2D-1\), by the Chu-Vandermonde binomial-coefficient property [65], \(\{\mathcal {X}_{1}, \mathcal {X}_{2} \}\) can take
$$\begin{array}{*{20}l} \sum\limits_{d=0}^{D-1} {{N}\choose{d}} {{N}\choose{2D-1-d}} = \frac{1}{2} {{2N}\choose{2D-1}} \end{array} $$
values. Therefore, exploiting the structure of \(\overline {\mathbf {Y}}\), the proposed algorithm constructs \(\mathcal {B}\) by (32), avoiding half the nullspace calculations needed in the generic method of [48], presented in (26).
In view of (28), the feasibility set of (23), \(\tilde {\mathcal {B}}\), consists of exactly
$$ \begin{aligned} |\tilde{\mathcal{B}}|& = {{|\mathcal{B}| +m-1}\choose{m}} = {{\sum\nolimits_{d=0}^{2D-1}\binom{2N-1}{d} +m-1}\choose{m}}\\ &\in \mathcal{O}\left(N^{2Dm - m}\right) \end{aligned} $$
elements. Thus, \(\mathcal {O}(N^{2Dm - m})\) nuclear-norm evaluations suffice to obtain a solution to (17). The asymptotic complexity for solving (15) by the presented algorithm is then \(\mathcal {O}\left (N^{2Dm-m}\right)\). The described polynomial-time algorithm is presented in detail in Fig. 3, including element-by-element construction of \(\mathcal {B}\).
Algorithm for optimal computation of the 2K L1-PCs of the rank- 2D data matrix \(\overline {\mathbf {Y}}_{2D \times 2N}\) with polynomial (w.r.t. N) asymptotic complexity \({\mathcal {O}}\left (N^{4DK - m+1}\right)\) (m=1 for K=1; m=2K, for K>1)
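To make the construction concrete, a rough sketch of the generic candidate-set build-up (22), (24)-(26) is given below (ours; it omits the realification shortcut (29)-(32) and any multiset bookkeeping, and assumes generic data so that each 2D x (2D-1) submatrix has a one-dimensional left nullspace).

```python
# Rough sketch of (24)-(26): for every index set X of 2D-1 columns of Ybar, compute the
# sign-normalized left null vector c(X) and enumerate the 2^{2D-1} free sign patterns on X,
# fixing the entries on X^c to sgn([Ybar]_{:,X^c}^T c(X)). Duplicates are merged by the set.
import itertools
import numpy as np

def candidate_binary_vectors(Ybar):
    twoD, twoN = Ybar.shape
    cands = set()
    for X in itertools.combinations(range(twoN), twoD - 1):
        U, _, _ = np.linalg.svd(Ybar[:, X])
        c = U[:, -1]                                        # left null vector of [Ybar]_{:,X}
        c = np.sign(c[-1]) * c if c[-1] != 0 else c         # enforce membership in Omega_{2D}
        Xc = [j for j in range(twoN) if j not in X]
        fixed = np.sign(Ybar[:, Xc].T @ c)                  # entries indexed by X^c, per (25)
        for signs in itertools.product([-1.0, 1.0], repeat=twoD - 1):
            b = np.empty(twoN)
            b[Xc] = fixed
            b[list(X)] = signs
            cands.add(tuple(b))
    return [np.array(b) for b in cands]
```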
3.5 Iterative realified L1-PCA
For large problem instances (large N,D), the above presented optimal L1-PCA calculators could be computationally impractical. Therefore, at this point we present and employ the bit-flipping-based iterative L1-PCA calculator, originally introduced in [43] for processing general real-valued data matrices. Given \(\overline {\mathbf {Y}} \in \mathbb {R}^{2D \times 2N}\), and some m<2 min(D,N), the algorithm presented below attempts to solve (21), by conducting a converging sequence of optimal single-bit flips.
The algorithm is initialized at a 2N×m binary matrix B(1) and conducts optimal single-bit-flipping iterations. Specifically, at the tth iteration step (t>1), the algorithm generates the new matrix B(t) so that (i) B(t) differs from B(t−1) in exactly one entry (bit flipping) and (ii) \(\| \overline {\mathbf {Y}} \mathbf {B}^{(t)} \|_{*} > \| \overline {\mathbf {Y}} \mathbf {B}^{(t-1)} \|_{*}\). Mathematically, we notice that if we flip at the tth iteration the (n,k)th bit of B(t−1) setting B(t)=B(t−1)−2[B(t−1)]n,ken,2Nek,m⊤, it holds that
$$\begin{array}{*{20}l} \overline{\mathbf{Y}} \mathbf{B}^{(t)} = \overline{\mathbf{Y}} \mathbf{B}^{(t-1)} - 2 \left[\mathbf{B}^{(t-1)}\right]_{n,k} [\!\overline{\mathbf{Y}}]_{:,n} {\mathbf{e}_{k,m}}^{\top}. \end{array} $$
Therefore, at step t, the presented algorithm searches for a solution (n,k) to
$$\begin{array}{*{20}l} \underset{\underset{(l-1)2N+j \in \mathcal{L}^{(t)}}{(j,l) \in \{1, 2, \ldots, 2N \} \times \{1, 2, \ldots, m \}}}{\text{maximize}} \left\| \overline{\mathbf{Y}} \mathbf{B}^{(t-1)} - 2 \left[\mathbf{B}^{(t-1)}\right]_{j,l} [\!\overline{\mathbf{Y}}]_{:,j} {{\mathbf{e}_{l}^{m}}}^{\top} \right\|_{*}. \end{array} $$
The constraint set \(\mathcal {L}^{(t)} \subseteq \{1,2, \ldots, 2Nm \}\), employed to restrain the greediness of the presented iterations, contains the indices of bits that have not been flipped before and, thus, is initialized as \(\mathcal {L}^{(1)} = \{1,2, \ldots, 2Nm \}\). Having obtained the solution to (36), (n,k), the algorithm proceeds as follows. If \( \left \| \overline {\mathbf {Y}} \mathbf {B}^{(t-1)} - 2 \left [\mathbf {B}^{(t-1)}\right ]_{n,k} [\!\overline {\mathbf {Y}}]_{:,n} \mathbf {e}_{k,m}^{\top } \right \|_{*} > \left \| \overline {\mathbf {Y}} \mathbf {B}^{(t-1)} \right \|_{*}\), the algorithm generates \( \mathbf {B}^{(t)} = \mathbf {B}^{(t-1)} - 2 \left [\mathbf {B}^{(t-1)}\right ]_{n,k} \mathbf {e}_{n,2N}\mathbf {e}_{k,m}^{\top }\) and updates \(\mathcal {L}^{(t+1)}\) to \(\mathcal {L}^{(t)} \setminus \{ (k-1)2N+n\}\). If, otherwise, \( \left \| \overline {\mathbf {Y}} \mathbf {B}^{(t-1)} - 2 \left [\mathbf {B}^{(t-1)}\right ]_{n,k}[\!\overline {\mathbf {Y}}]_{:,n} \mathbf {e}_{k,m}^{\top } \right \|_{*} \leq \left \| \overline {\mathbf {Y}} \mathbf {B}^{(t-1)} \right \|_{*}\), the algorithm obtains a new solution (n,k) to (36) after resetting \(\mathcal {L}^{(t)}\) to {1,2,…,2Nm}. If this new (n,k) is such that \( \left \| \overline {\mathbf {Y}} \mathbf {B}^{(t-1)} - 2 \left [\mathbf {B}^{(t-1)}\right ]_{n,k} [\!\overline {\mathbf {Y}}]_{:,n} \mathbf {e}_{k,m}^{\top } \right \|_{*} > \left \| \overline {\mathbf {Y}} \mathbf {B}^{(t-1)} \right \|_{*}\), then the algorithm sets \(\mathbf {B}^{(t)} = \mathbf {B}^{(t-1)} - 2 [\mathbf {B}^{(t-1)}]_{n,k} \mathbf {e}_{n,2N}\mathbf {e}_{k,m}^{\top }\) and updates \(\mathcal {L}^{(t+1)} = \mathcal {L}^{(t)} \setminus \{(k-1)2N+n\}\). Otherwise, the iterations terminate and the algorithm returns B(t) as a heuristic solution to (21). Notice that at each iteration the optimization metric strictly increases, while it is certainly upper-bounded by \(\| \overline {\mathbf {Y}} \mathbf {B}_{opt}\|_{*}\). Therefore, the iterations are guaranteed to terminate in a finite number of steps, for any initialization B(1). Our studies have shown that, in fact, the iterations terminate for t<2Nm, with very high frequency of occurrence.
For solving (36), one has to calculate \( \left \|{\vphantom {\mathbf {e}_{l,m}^{\top }}} \overline {\mathbf {Y}} \mathbf {B}^{(t-1)} -2 \left [\mathbf {B}^{(t-1)}\right ]_{j,l} [\!\overline {\mathbf {Y}}]_{:,j} \mathbf {e}_{l,m}^{\top } \right \|_{*}\), for all (j,l)∈{1,2,…,2N}×{1,2,…,m} such that \( (l-1)2N+j \in \mathcal {L}\). At worst case, \(\mathcal {L} = \{1, 2, \ldots, 2Nm \}\) and this demands 2Nm independent singular-value/nuclear-norm calculations. Therefore, the total cost for solving (36) is \(\mathcal {O}\left (N^{2}m^{3}\right)\). If we limit the number of iterations to 2Nm, for the sake of practicality, then the total cost for obtaining a heuristic solution to (21) is \(\mathcal {O} \left (N^{3} m^{4}\right)\) –significantly lower than the cost of the polynomial-time optimal algorithm presented above, \(\mathcal {O}\left (N^{2Dm-m}\right)\). When the iterations terminate, the algorithm returns the bit-flipping-derived L1-PC matrix \( {\mathbf {Q}}_{R,BF} \stackrel {\triangle }{=} \mathbf {U} \mathbf {V}^{\top } \), where \( \mathbf {U} \mathbf {\Sigma }_{2K \times 2K} \mathbf {V}^{\top } \overset {\text {svd}}{=} \overline {\mathbf {Y}} \mathbf {B}_{\text {opt}} \). Formal performance guarantees for the presented bit-flipping procedure were offered in [43], for general real-valued matrices and K=1.
A pseudocode of the presented algorithm for the calculation of the 2K L1-PCs of \(\overline {\mathbf {Y}}_{2D \times 2N}\) is presented in Fig. 4.
Algorithm for estimation of the 2K L1-PCs of rank- 2D data matrix \(\overline {\mathbf {Y}}_{2D \times 2N}\) with cubic (w.r.t. N) asymptotic complexity \(\mathcal {O} \left (N^{3} m^{4}\right)\) (m=1 for K=1; m=2K, for K>1)
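A compact sketch of this bit-flipping procedure is given below; it is our own rendering of the description above (the random ±1 initialization and the 2Nm iteration cap are our choices), not the authors' implementation.

```python
# Heuristic bit-flipping sketch in the spirit of Section 3.5: greedy single-bit flips
# that increase ||Ybar B||_*, with one reset of the "unflipped" index set before stopping.
import numpy as np

def bitflip_l1pca(Ybar, m, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    twoN = Ybar.shape[1]
    nuc = lambda M: np.linalg.norm(M, 'nuc')
    B = rng.choice([-1.0, 1.0], size=(twoN, m))        # arbitrary initialization B^(1)
    cur = nuc(Ybar @ B)
    unflipped = {(n, k) for n in range(twoN) for k in range(m)}
    for _ in range(2 * twoN * m):                      # practical iteration cap
        best, best_val = None, cur
        for (n, k) in unflipped:                       # evaluate every allowed single flip
            B[n, k] *= -1
            val = nuc(Ybar @ B)
            B[n, k] *= -1
            if val > best_val:
                best, best_val = (n, k), val
        if best is None:
            if len(unflipped) == twoN * m:
                break                                  # no improving flip anywhere: stop
            unflipped = {(n, k) for n in range(twoN) for k in range(m)}
            continue                                   # reset the index set and search again
        B[best] *= -1
        cur = best_val
        unflipped.discard(best)
    U, _, Vt = np.linalg.svd(Ybar @ B, full_matrices=False)
    return U @ Vt, B                                   # Q_{R,BF} and the final binary matrix
```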
4 Numerical results and discussion
We present numerical studies to evaluate the DoA estimation performance of realified L1-PCA, compared to other PCA calculation counterparts. Our focus lies on cases where a nominal source (the DoA of which we are looking for) operates in the intermittent presence of a jammer located at a different angle. Ideally, we would like the DoA estimator to be able to identify successfully the DoA of the source of interest, despite the unexpected directional interference.
To offer a first insight into the performance of the proposed method, in Fig. 5 we present a realization of the DoA-estimation spectra PR(ϕ;QR,L1) and PR(ϕ;QR,L2), as defined in (16) and (14), respectively. In this study, we calculate the exact L1-PCs of \(\overline {\mathbf {Y}}\), using the polynomial-cost optimal algorithm of Fig. 3. The receiver antenna-array is equipped with D=3 elements and collects N = 8 snapshots. All snapshots contain a signal from the single source of interest (K=1) impinging on the array with DoA − 20∘. One out of the eight snapshots is corrupted by two jamming sources with DoAs 31∘ and 54∘. The signal-to-noise ratio (SNR) is set to 2 dB for the target source and to 5 dB for each of the jammers. We observe that standard MUSIC (L2-PCA) is clearly misled by the jammer-corrupted snapshot. Interestingly, the proposed L1-PCA-based method manages to identify the target location successfully.
DoA-estimation spectra PR(ϕ;QR,L2) (MUSIC) and PR(ϕ;QR,L1) (proposed); one target and two jamming signals with angles of arrival marked by \(\blacktriangle \) and ∙, respectively
Next, we generalize our study to include probabilistic presence of a jammer. Specifically, we keep D=3 and N=8 and consider K=1 target at θ=− 41∘ with SNR 2 dB, and L=1 jammer at θ′=24∘ with activation probability p taking values in {0,.1,.2,.3,.4,.5}.
In Fig. 6, we plot the root-mean-square-error (RMSE), calculated over 5000 independent realizations, vs. jammer SNR, for three DoA estimators: (a) the standard L2-PCA-based one (MUSIC), (b) the proposed L1-PCA DoA estimator with the L1-PCs calculated optimally by means of the polynomial-cost algorithm of Fig. 3, and (c) the proposed L1-PCA estimator with the L1-PCs found by means of the algorithm of Fig. 4. For all three methods, we plot the performance attained for each value of p∈{0,.1,.2,.3,.4,.5}. Our first observation is that the two L1-PCA-based estimators exhibit almost identical performance for every value of p and jammer SNR. Then, we notice that, in normal system operation (p=0), the RMSEs of the L2-PCA-based and L1-PCA-based estimators are extremely close to each other and low, with slight (almost negligible) superiority of the L2-PCA-based method. Quite interestingly, for any non-zero jammer activation probability p and over the entire range of jammer SNR values, the RMSE attained by the proposed L1-PCA-based methods is lower than that attained by the L2-PCA-based one. For instance, for jammer SNR 12 dB and p=.1, the proposed methods offer 8∘ smaller RMSE than MUSIC. Of course, at high jammer SNR values and p=.5 the RMSE of both methods approaches 65∘, which is the angular distance of the target and the jammer; i.e., both methods tend to pick the significantly (18 dB) stronger jammer present in half the snapshots.
Root-mean-squared-error (RMSE) vs. jammer SNR, for: L2-PCA (MUSIC), optimal L1-PCA, calculated by means of Algorithm 2 in Fig. 3, and L1-PCA by means of Algorithm 3 in Fig. 4. For each estimator, we present the RMSE curves for p=0,.1,.2,.3,.4,.5. N=8,D=3,θ=− 41∘,θ′=24∘, source SNR 2 dB
In Fig. 7, we change the metric and study the more general Subspace Representation Ratio (SRR), attained by L2-PCA and L1-PCA. For any orthonormal basis \(\mathbf {Q} \in \mathbb {R}^{2D \times 2K}\), SRR is defined as
$$\begin{array}{*{20}l} {\kern22pt}\text{SRR} (\mathbf{Q}) \stackrel{\triangle}{=} \frac{\left\| \mathbf{Q}^{\top} \overline{\mathbf{s}} ({\theta})\right\|_{2}^{2}}{\left\| \mathbf{Q}^{\top} \overline{\mathbf{s}}({\theta^{\prime}})\right\|_{2}^{2}+\left\| \mathbf{Q}^{\top} \overline{\mathbf{s}}({\theta})\right\|_{2}^{2}}. \end{array} $$
Average SRR vs. jammer SNR, for: L2-PCA (MUSIC), optimal L1-PCA, calculated by means of Algorithm 2 in Fig. 3, and L1-PCA by means of Algorithm 3 in Fig. 4. For each estimator, we present the SRR curves for p=0,.1,.2,.3,.4,.5. N=8,D=3,θ=− 41∘,θ′=24∘, source SNR 2 dB
In Fig. 7, we plot SRR(QR,L2) (L2-PCA), SRR(QR,L1) (optimal L1-PCA), and SRR(QR,BF) averaged over 5000 realizations, for multiple values of p, versus the jammer SNR. We observe that, again, the performance of the optimal and heuristic L1-PCA calculators almost coincides for every value of p and jammer SNR. Also, we notice that under normal system operation (p=0) the spans of QR,L2,QR,L1, and QR,BF are equally good approximations to \(\mathcal {S}_{R}\) and their respective SRR curves lie close (as close as the target SNR and the number of snapshots allow) to the benchmark of SRR(U), where U is an orthonormal basis for the exact \(\mathcal {S}_{R}\). On the other hand, when half the snapshots are jammer corrupted (p=.5) both methods capture more of the interference. Similar to Fig. 6, for any jammer activation probability and over the entire range of jammer SNR values, the SRR attained by L1-PCA (both algorithms) is superior to that attained by conventional L2-PCA.
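For reference, the SRR metric (37) can be computed as in the following sketch (ours, reusing the realify() and steering_vector() helpers assumed earlier).

```python
# Sketch of the subspace representation ratio (37) for a 2D x 2K orthonormal basis Q.
import numpy as np

def srr(Q, theta, theta_prime, D):
    s_t = realify(steering_vector(theta, D).reshape(-1, 1))        # realified target response
    s_j = realify(steering_vector(theta_prime, D).reshape(-1, 1))  # realified jammer response
    num = np.linalg.norm(Q.T @ s_t, 'fro') ** 2
    den = np.linalg.norm(Q.T @ s_j, 'fro') ** 2
    return num / (num + den)
```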
Next, we set D=4,N=10,θ=− 20∘ and θ′=50∘. The source SNR is set to 5 dB. In Fig. 8, we plot the RMSE vs. jamming SNR performance attained by L2-PCA, RPCA (algorithm of [29]), and L1-PCA (proposed, computed by the efficient Algorithm 3), all computed on the realified snapshots. We observe that for p = 0 (i.e., no jamming corruption) all methods perform well; in particular, L2-PCA and L1-PCA demonstrate almost identical performance of about 3∘ RMSE. For jammer operation probability p>0, we observe that the proposed L1-PCA method clearly outperforms all counterparts, exhibiting from 5∘ (for jammer SNR 6 dB) to 20∘ (for jammer SNR 11 dB) lower RMSE.
RMSE vs. jammer SNR, for: L2-PCA (MUSIC), RPCA [29], and L1-PCA by means of Algorithm 3 in Fig. (4). For each estimator, we present the RMSE curves for p=0,.2. N=10,D=4,θ=− 20∘,θ′=50∘, source SNR 5 dB
In Fig. 9, we plot the RMSE attained by the three counterparts, this time fixing jamming SNR to 10 dB and varying the snapshot corruption probability p∈{0,.1,.2,.3,.4,.5,.6}. Once again, we observe that, for p=0 (no jamming activity), all methods perform well. For p>0, L1-PCA outperforms both counterparts across the board.
RMSE vs. jammer operation probability p, for: L2-PCA (MUSIC), RPCA [29], and L1-PCA by means of Algorithm 3 in Fig. 4. N=10,D=4,θ=−20∘,θ′=50∘, source SNR 5dB, jammer SNR 10 dB
Finally, in the study of Fig. 9, we measure the computation time expended by the three PCA methods, for p=0 and p=0.5. We observe that standard PCA, implemented by SVD, is the fastest method, with average computation time about 4·10−5 s, for both values of p. The computation time of RPCA is 1.5·10−2 s for p = 0 and 1.9·10−2 s for p=0.5. L1-PCA (Algorithm 3) computation takes, on average, 4.3·10−2 s for both values of p, comparable to RPCA.
5 Conclusions
We considered the problem of DoA estimation in the possible presence of unexpected, intermittent directional interference and presented a new method that relies on the L1-PCA of the recorded snapshots. Accordingly, we presented three algorithms (two optimal ones and one iterative/heuristic) for realified L1-PCA; i.e., L1-PCA of realified complex data matrices. Our numerical studies showed that the proposed method attains performance similar to conventional L2-PCA-based DoA estimation (MUSIC) in normal system operation (absence of jammers), while it attains significantly superior performance in the case of unexpected, sporadic corruption of the snapshots.
6 Appendix
6.1 Useful properties of realification
Lemma 1 below follows straightforwardly from the definition in (8).
Lemma 1
For any \(\mathbf {A}, \mathbf {B} \in \mathbb {C}^{m \times n}\), it holds that \(\overline {(\mathbf {A} + \mathbf {B})} = \overline {\mathbf {A}} + \overline {\mathbf {B}}\). For any \(\mathbf {A} \in \mathbb {C}^{m \times n}\) and \(\mathbf {B} \in \mathbb {C}^{n \times q}\), it holds that \(\overline {(\mathbf {A} \mathbf {B})} = \overline {\mathbf {A}}\; \overline {\mathbf {B}}\) and \(\overline {\left (\mathbf {A}^{\mathrm {H}} \right)} = \overline {\mathbf {A}}^{\top }\). ■
Lemma 2 below was discussed in [70] and [20], in the form of problem 8.6.4. Here, we also provide a proof, for the sake of completeness.
Lemma 2
For any \(\mathbf {A} \in \mathbb {C}^{m \times n}, \text {rank}(\overline {\mathbf {A}}) =2~ \text {rank}(\mathbf {A})\). In particular, each singular value of A will appear twice among the singular values of \(\overline {\mathbf {A}}\). ■
Consider a complex matrix \(\mathbf {A} \in \mathbb {C}^{m \times n}\) of rank k≤ min{m,n} and its singular value decomposition \(\mathbf {A} \overset {\text {SVD}}{=} \mathbf {U}_{m \times m} \mathbf {\Sigma }_{m \times n} \mathbf {V}_{n \times n}^{\mathrm {H}}\), where
$$\begin{array}{*{20}l} {\kern25pt}\boldsymbol \Sigma = \left[\begin{array}{cc} \mathbf{diag}(\boldsymbol \sigma) & \mathbf{0}_{k \times (n-k)} \\ \mathbf{0}_{(m-k) \times k} & \mathbf{0}_{(m-k) \times (n-k)} \end{array}\right] \end{array} $$
and \(\boldsymbol \sigma \stackrel {\triangle }{=} [\sigma _{1}, \sigma _{2}, \ldots, \sigma _{k}]^{\top } \in \mathbb {R}_{+}^{k}\) is the length k vector containing (in descending order) the positive singular values of A. By Lemma 1,
$$\begin{array}{*{20}l} {\kern65pt}\overline{\mathbf{A}} = \overline{\mathbf{U}} \; \overline{\boldsymbol \Sigma} \; \overline{\mathbf{V}}^{\top} \end{array} $$
with \(\overline {\mathbf {U}}^{\top } \overline {\mathbf {U}} = \overline {\mathbf {U}} \; \overline {\mathbf {U}}^{\top } = \mathbf {I}_{2m}\) and \(\overline {\mathbf {V}}^{\top } \overline {\mathbf {V}} = \overline {\mathbf {V}} \; \overline {\mathbf {V}}^{\top } = \mathbf {I}_{2n}\). Define now, for every \(a,b \in \mathbb {N}_{\geq 1}\), the ab×ab permutation matrix
$$\begin{array}{*{20}l} \mathbf{Z}_{a,b} \stackrel{\triangle}{=} \left[\mathbf{I}_{a} \otimes {\mathbf{e}_{1}^{b}},\; \mathbf{I}_{a} \otimes {\mathbf{e}_{2}^{b}},\; \ldots, \; \mathbf{I}_{a} \otimes {\mathbf{e}_{b}^{b}} \right]^{\top} \end{array} $$
where \({\mathbf {e}_{i}^{b}} \stackrel {\triangle }{=} [\mathbf {I}_{b}]_{:,i}\), for every i∈{1,2,…,b}. Then,
$$\begin{array}{*{20}l} \mathbf{Z}_{a,b}^{\top} \mathbf{Z}_{a,b} & = \sum\limits_{i=1}^{b} \left(\mathbf{I}_{a} \otimes \left({\mathbf{e}_{i}^{b}}\right)^{\top} \right)^{\top} \left(\mathbf{I}_{a} \otimes \left({\mathbf{e}_{i}^{b}}\right)^{\top} \right) \\ & = \mathbf{I}_{a} \otimes \left(\sum\limits_{i=1}^{b} {\mathbf{e}_{i}^{b}} \left({\mathbf{e}_{i}^{b}}\right)^{\top} \right) = \mathbf{I}_{a} \otimes \mathbf{I}_{b} = \mathbf{I}_{ab}. \end{array} $$
By (39),
$$\begin{array}{*{20}l} \overline{\mathbf{A}} & = \overline{\mathbf{U}} \mathbf{Z}_{2,m}^{\top} \mathbf{Z}_{2,m} \overline{\boldsymbol \Sigma} \mathbf{Z}_{2,n}^{\top} \mathbf{Z}_{2,n} \overline{\mathbf{V}}^{\top} \\ & = \left(\overline{\mathbf{U}} \mathbf{Z}_{2,m}^{\top}\right) \left(\mathbf{Z}_{2,m} (\mathbf{I}_{2} \otimes \boldsymbol \Sigma) \mathbf{Z}_{2,n}^{\top}\right) \left(\overline{\mathbf{V}} \mathbf{Z}_{2,n}^{\top}\right)^{\top} \\ & = \check{\mathbf{U}} \check{\boldsymbol \Sigma} \check{\mathbf{V}}^{\top} \end{array} $$
where \(\check {\mathbf {U}} \stackrel {\triangle }{=} \overline {\mathbf {U}} \mathbf {Z}_{2,m}^{\top }, \check {\mathbf {V}} \stackrel {\triangle }{=} \overline {\mathbf {V}} \mathbf {Z}_{2,n}^{\top }\), and \(\check {\boldsymbol \Sigma } \stackrel {\triangle }{=} \mathbf {Z}_{2,m} (\mathbf {I}_{2} \otimes \boldsymbol \Sigma) \mathbf {Z}_{2,n}^{\top }\). It is easy to show that \(\check {\mathbf {U}}^{\top } \check {\mathbf {U}} = \check {\mathbf {U}} \check {\mathbf {U}}^{\top } = \mathbf {I}_{2m}, \check {\mathbf {V}}^{\top } \check {\mathbf {V}} = \check {\mathbf {V}} \check {\mathbf {V}}^{\top } = \mathbf {I}_{2n}\), and \(\check {\boldsymbol \Sigma } = \boldsymbol \Sigma \otimes \mathbf {I}_{2}\). Therefore, (42) constitutes the standard (sorted singular values) SVD of \(\overline {\mathbf {A}}\) and \(\text {rank}(\overline {\mathbf {A}}) = 2k\). □
Lemma 3 below follows from Lemmas 1 and 2.
Lemma 3
For any \(\mathbf {A} \in \mathbb {C}^{m \times n}, \| \mathbf {A}\|_{2}^{2} = \frac {1}{2}\| \overline {\mathbf {A}} \|_{2}^{2}\). ■
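The two lemmas above lend themselves to a quick numerical sanity check. The short sketch below is illustrative only and assumes that the realification in (8) maps A to the block matrix with Re(A), −Im(A) on top and Im(A), Re(A) below, and interprets ∥·∥2 as the Frobenius norm; since rank, singular values, and the Frobenius norm are invariant to row/column permutations, the check remains valid if (8) uses an interleaved arrangement instead.

```python
# Numerical sanity check of Lemmas 2 and 3 (illustrative; the block realification
# convention below is an assumption about definition (8)).
import numpy as np

def realify(A: np.ndarray) -> np.ndarray:
    """Map a complex m x n matrix to a real 2m x 2n matrix."""
    return np.block([[A.real, -A.imag],
                     [A.imag,  A.real]])

rng = np.random.default_rng(0)
m, n, k = 5, 7, 3
# Rank-k complex matrix A.
A = (rng.standard_normal((m, k)) + 1j * rng.standard_normal((m, k))) @ \
    (rng.standard_normal((k, n)) + 1j * rng.standard_normal((k, n)))
Ar = realify(A)

# Lemma 2: the rank doubles and every singular value of A appears twice.
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(Ar))        # k and 2k
print(np.round(np.linalg.svd(A, compute_uv=False), 4))
print(np.round(np.linalg.svd(Ar, compute_uv=False)[:2 * k], 4))   # duplicated values

# Lemma 3: ||A||_F^2 equals one half of ||realify(A)||_F^2.
print(np.isclose(np.linalg.norm(A, 'fro') ** 2,
                 0.5 * np.linalg.norm(Ar, 'fro') ** 2))
```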
6.2 Proof of (14)
We commence our proof with the following auxiliary Lemma 4.
Lemma 4
For any matrix \(\mathbf {A} \in \mathbb {C}^{m \times n}\), if \(\mathbf {Q}_{real} \in \mathbb {R}^{2m \times 2l}, l < m \leq n\) is a solution to
$$\begin{array}{*{20}l} {\kern43pt}\underset{\mathbf{Q}\in\mathbb{R}^{2m \times 2l}, ~\mathbf{Q}^{\top}\mathbf{Q} = \mathbf{I}_{2l}}{\text{maximize}}~\| \overline{\mathbf{A}}^{\top} \mathbf{Q}\|_{2}. \end{array} $$
and \(\mathbf {Q}_{comp.} \in \mathbb {C}^{m \times l}\) is a solution to
$$\begin{array}{*{20}l} {\kern43pt}\underset{\mathbf{Q}\in\mathbb{C}^{m \times l}, ~\mathbf{Q}^{\mathrm{H}}\mathbf{Q} = \mathbf{I}_{l} }{\text{maximize}}~\| {\mathbf{A}}^{\mathrm{H}} \mathbf{Q}\|_{2}, \end{array} $$
then \(\mathbf {Q}_{{\text {real}}} \mathbf {Q}_{{\text {real}}}^{\top } = \overline {\left (\mathbf {Q}_{{\mathrm {comp.}}} \mathbf {Q}_{{\mathrm {comp.}}}^{\mathrm {H}}\right)} = \overline {\mathbf {Q}}_{{\mathrm {comp.}}} \overline {\mathbf {Q}}_{{\mathrm {comp.}}}^{\top } \). ■
Orthonormal basis \([\check {\mathbf {U}}]_{:,1:2l}\), defined in the proof of Lemma 2 above, contains the 2l highest-singular-value left-singular vectors of \(\overline {\mathbf {A}}\), and, thus, solves (43) [20]. Since the objective value in (43) is invariant to column permutations of the argument Q, any column permutation of \([\check {\mathbf {U}}]_{:,1:2l}\) is still a solution to (43). Next, we define the permutation matrix Wm,l=△[Il, 0l×(m−l)]⊤ and notice that \( [\check {\mathbf {U}}]_{:,1:2l} = \overline {\mathbf {U}} [\mathbf {I}_{2} \otimes {\mathbf {e}_{1}^{m}}, \mathbf {I}_{2} \otimes {\mathbf {e}_{2}^{m}}, \ldots, \mathbf {I}_{2} \otimes {\mathbf {e}_{l}^{m}}] \) is a column permutation of \(\overline {\mathbf {U}} (\mathbf {I}_{2} \otimes \mathbf {W}_{m,l}) =\overline {\mathbf {U}} \; \overline {\mathbf {W}_{m,l}} \), which, by Lemma 1, equals \( \overline {\left (\mathbf {U} \mathbf {W}_{m,l} \right)} = \overline {[\mathbf {U}]_{:,1:l}}\). Thus, \( \overline {[\mathbf {U}]_{:,1:l}}\) solves (43) too. At the same time, by (39), [U]:,1:l contains the l highest-singular-value left-singular vectors of A and solves (44) [20]. By the above, we conclude that a realification per (8) of any solution to (44) constitutes a solution to (43) and, thus, \(\mathbf {Q}_{{\text {real}}} \mathbf {Q}_{{\text {real}}}^{\top } = \overline {\left (\mathbf {Q}_{{\mathrm {comp.}}} \mathbf {Q}_{{\mathrm {comp.}}}^{\mathrm {H}}\right)} = \overline {\mathbf {Q}}_{{\mathrm {comp.}}} \overline {\mathbf {Q}}_{{\mathrm {comp.}}}^{\top } \). □
By Lemmas 1, 3, and 4, (14) holds true.
6.3 Proof of (29)
We commence our proof by defining \(d = |\mathcal {X}_{1}|\) and the sets \({\mathcal {X}_{A}^{c}} \stackrel {\triangle }{=} \{1, 2, \ldots, 2N \}\setminus \mathcal {X}_{A}\) (e.a.o.) and \({\mathcal {X}_{B}^{c}} \stackrel {\triangle }{=} \{1, 2, \ldots, 2N \}\setminus \mathcal {X}_{B}\) (e.a.o.). Then, we notice that
$$\begin{array}{*{20}l} {\kern43pt}[\mathbf{I}_{2N}]_{:,\mathcal{X}_{B}} = \mathbf{E}_{N} [\mathbf{I}_{2N}]_{:,\mathcal{X}_{A}} \mathbf{P}, \end{array} $$
where \(\mathbf {P} \stackrel {\triangle }{=} \left [ -[\mathbf {I}_{2D-1}]_{:,d+1:2D-1}, ~[\mathbf {I}_{2D-1}]_{:,1:d} \right ] \). Similarly,
$$\begin{array}{*{20}l} {\kern43pt}[\mathbf{I}_{2N}]_{:,{\mathcal{X}_{B}^{c}}} = \mathbf{E}_{N} [\mathbf{I}_{2N}]_{:,{\mathcal{X}_{A}^{c}}} \mathbf{P}_{c}, \end{array} $$
where \(\mathbf {P}_{c} \stackrel {\triangle }{=} \left [ -[\mathbf {I}_{2N- 2D+1}]_{:,N-d+1:2N-2D+1}, ~[\mathbf {I}_{2N- 2D + 1}]_{:,1:N-d} \right ] \). Then,
$$\begin{array}{*{20}l} {\kern13pt}[\!\overline{\mathbf{Y}}]_{:,\mathcal{X}_{B}}& = \overline{\mathbf{Y}} [\mathbf{I}_{2N}]_{:,\mathcal{X}_{B}} = \overline{\mathbf{Y}} \mathbf{E}_{N} [\mathbf{I}_{2N}]_{:,\mathcal{X}_{A}} \mathbf{P} \\ &= \mathbf{E}_{D} \overline{\mathbf{Y}} [\mathbf{I}_{2N}]_{:,\mathcal{X}_{A}} \mathbf{P} = \mathbf{E}_{D} [\!\overline{\mathbf{Y}}]_{:,\mathcal{X}_{A}} \mathbf{P}. \end{array} $$
Consider now \(\mathbf {z} = \text {sgn}{([\mathbf {c}(\mathcal {X}_{A})]_{D})} \mathbf {E}_{D} \mathbf {c}(\mathcal {X}_{A})\). It holds that [z]2D>0 and
$$\begin{array}{*{20}l} [\!\overline{\mathbf{Y}}]_{:,\mathcal{X}_{B}}^{T} \mathbf{z} &= \text{sgn}{([\mathbf{c}(\mathcal{X}_{A})]_{D})} \mathbf{P}^{\top} [\!\overline{\mathbf{Y}}]_{:,\mathcal{X}_{A}}^{\top} \mathbf{E}_{D}^{\top} \mathbf{E}_{D} \mathbf{c}(\mathcal{X}_{A}) \\ &=\text{sgn}{([\mathbf{c}(\mathcal{X}_{A})]_{D})} \mathbf{P}^{\top} [\!\overline{\mathbf{Y}}]_{:,\mathcal{X}_{A}}^{\top} \mathbf{c}(\mathcal{X}_{A}) = \mathbf{0}_{2D}. \end{array} $$
Therefore, \(\mathbf {z} = \mathbf {c}(\mathcal {X}_{B}) \in \text {null} \left ([\!\overline {\mathbf {Y}}]_{:,\mathcal {X}_{B}}^{\top }\right) \cap \Omega _{2D}\) and, hence, (29) holds true.
6.4 Proof of Prop. 3
We begin by rewriting the maximization argument of (20) as
$$\begin{array}{*{20}l} \| \overline{\mathbf{Y}} \mathbf{B}\|_{*}^{2} &= \| \overline{\mathbf{Y}} \mathbf{B} \|_{2}^{2} + 2\sqrt{\text{det}\left(\mathbf{B}^{\top} \overline{\mathbf{Y}}^{\top} \overline{\mathbf{Y}} \mathbf{B}\right)} \\ &= \|\overline{\mathbf{Y}} \mathbf{b}_{1} \|_{2}^{2} + \|\overline{\mathbf{Y}} \mathbf{b}_{2} \|_{2}^{2} \\&\quad+ 2 \sqrt{ \|\overline{\mathbf{Y}} \mathbf{b}_{1} \|_{2}^{2}\|\overline{\mathbf{Y}} \mathbf{b}_{2} \|_{2}^{2} - \left(\mathbf{b}_{1}^{\top} \overline{\mathbf{Y}}^{\top} \overline{\mathbf{Y}} \mathbf{b}_{2}\right)^{2}}, \end{array} $$
where b1 and b2 are the first and second columns of B, respectively. Evidently, the maximum value attained at (17) is upper bounded as
$$\begin{array}{*{20}l} \underset{\mathbf{B} \in \{\pm 1\}^{2N \times 2}}{\text{max}}~\| \overline{\mathbf{Y}} \mathbf{B}\|_{*}^{2} & \leq \underset{\mathbf{b}_{1} \in \{\pm 1\}^{2N}, \mathbf{b}_{2} \in \{\pm 1\}^{2N}}{\text{max}}~ \|\overline{\mathbf{Y}} \mathbf{b}_{1} \|_{2}^{2}\\ & + \|\overline{\mathbf{Y}} \mathbf{b}_{2} \|_{2}^{2} + 2 {\|\overline{\mathbf{Y}} \mathbf{b}_{1} \|_{2} \|\overline{\mathbf{Y}} \mathbf{b}_{2} \|_{2} } \\ & = 4~\underset{\mathbf{b} \in \{\pm 1\}^{2N}}{\text{max}}~ \|\overline{\mathbf{Y}} \mathbf{b} \|_{2}^{2}. \end{array} $$
Considering now a solution bopt to \( {\text {maximize}}_{\mathbf {b} \in \{\pm 1\}^{2N \times 1}}~ \|\overline {\mathbf {Y}} \mathbf {b} \|_{2}^{2}, \) and defining \(\mathbf {b}_{\text {opt}}^{\prime } = \mathbf {E}_{N} \mathbf {b}_{\text {opt}} \), we notice that \( \|\overline {\mathbf {Y}} \mathbf {b}_{\text {opt}}^{\prime } \|_{2}^{2} = \|\overline {\mathbf {Y}} \mathbf {E}_{N} \mathbf {b}_{\text {opt}} \|_{2}^{2} = \|\mathbf {E}_{D} \overline {\mathbf {Y}} \mathbf {b}_{\text {opt}} \|_{2}^{2} = \| \overline {\mathbf {Y}} \mathbf {b}_{\text {opt}} \|_{2}^{2} \) and \( \mathbf {b}_{\text {opt}}^{\top } \overline {\mathbf {Y}}^{\top } \overline {\mathbf {Y}} \mathbf {b}_{\text {opt}}^{\prime } = \mathbf {b}_{\text {opt}}^{\top } \overline {\mathbf {Y}}^{\top } \overline {\mathbf {Y}} \mathbf {E}_{N} \mathbf {b}_{\text {opt}} = \mathbf {b}_{\text {opt}}^{\top } \overline {\mathbf {Y}}^{\top } \mathbf {E}_{D} \overline {\mathbf {Y}} \mathbf {b}_{\text {opt}} = 0. \) Therefore, \( \| \overline {\mathbf {Y}}~\left [\mathbf {b}_{\text {opt}}, \mathbf {b}_{\text {opt}}^{\prime }\right ]\|_{*}^{2} = 4~ \|\overline {\mathbf {Y}} \mathbf {b}_{\text {opt}} \|_{2}^{2} \) and, in view of (50), [bopt, ENbopt] is a solution to (20).
By (29) and (46), it holds that
$$\begin{array}{*{20}l} [\!\overline{\mathbf{Y}}]_{:,{\mathcal{X}_{B}^{c}}}^{\top} \mathbf{c}(\mathcal{X}_{B}) &= \text{sgn}{([\mathbf{c}(\mathcal{X}_{A})]_{D})} \mathbf{P}_{c}^{\top} [\!\overline{\mathbf{Y}}]_{:,{\mathcal{X}_{A}^{c}}}^{\top} \mathbf{E}_{D}^{\top} \mathbf{E}_{D} \mathbf{c}(\mathcal{X}_{A}) \\ & = \text{sgn}{([\mathbf{c}(\mathcal{X}_{A})]_{D})} \mathbf{P}_{c}^{\top} [\!\overline{\mathbf{Y}}]_{:,{\mathcal{X}_{A}^{c}}}^{\top} \mathbf{c}(\mathcal{X}_{A}). \end{array} $$
Consider now some \(\mathbf {b} \in \mathcal {B}(\mathcal {X}_{A})\) and define \(\mathbf {b}^{\prime } \stackrel {\triangle }{=} \text {sgn}{([\mathbf {c}(\mathcal {X}_{A})]_{D})} \mathbf {E}_{N} \mathbf {b}\). By (46), (51), and the definition in (25), it holds that
$$ \begin{aligned} [\mathbf{b}^{\prime}]_{{\mathcal{X}_{B}^{c}}} &= [ \text{sgn}{([\mathbf{c}(\mathcal{X}_{A})]_{D})} \mathbf{E}_{N} \mathbf{b}]_{{\mathcal{X}_{B}^{c}}} = \text{sgn}{([\mathbf{c}(\mathcal{X}_{A})]_{D})} [\mathbf{I}_{2N}]_{:,{\mathcal{X}_{B}^{c}}}^{\top} \mathbf{E}_{N} \mathbf{b} \\ &=\text{sgn}{([\mathbf{c}(\mathcal{X}_{A})]_{D})} \left(\mathbf{E}_{N} [\mathbf{I}_{2N}]_{:,{\mathcal{X}_{A}^{c}}} \mathbf{P}_{c}\right)^{\top} \mathbf{E}_{N} \mathbf{b} \\ & = \text{sgn}{([\mathbf{c}(\mathcal{X}_{A})]_{D})} \mathbf{P}_{c}^{\top} [\mathbf{I}_{2N}]_{:,{\mathcal{X}_{A}^{c}}}^{\top} \mathbf{b} = \text{sgn}{([\mathbf{c}(\mathcal{X}_{A})]_{D})} \mathbf{P}_{c}^{\top} [\mathbf{b}]_{{\mathcal{X}_{A}^{c}}} \\ &= \text{sgn}{([\mathbf{c}(\mathcal{X}_{A})]_{D})} \mathbf{P}_{c}^{\top} \text{sgn}([\!\overline{\mathbf{Y}}]_{:,{\mathcal{X}_{A}^{c}}}^{\top} \mathbf{c}(\mathcal{X}_{A})) \\ &= \text{sgn}(\text{sgn}{([\mathbf{c}(\mathcal{X}_{A})]_{D})} \mathbf{P}_{c}^{\top} [\!\overline{\mathbf{Y}}]_{:,{\mathcal{X}_{A}^{c}}}^{\top} \mathbf{c}(\mathcal{X}_{A})) \\ &= \text{sgn}([\!\overline{\mathbf{Y}}]_{:,{\mathcal{X}_{B}^{c}}}^{\top} \mathbf{c}(\mathcal{X}_{B})). \end{aligned} $$
Hence, b′ belongs to \(\mathcal {B}(\mathcal {X}_{B})\) and (30) holds true.
\(\mathbf {S}_{\Phi } \in \mathbb {C}^{D \times m}\) is a transposed Vandermonde matrix [66] and has rank m if |Φ|=m<D.
For any underlying set of elements \(\mathcal {A}\) of cardinality n, a size m multiset may be defined as a pair \(\left (\mathcal {A}, f\right)\) where \(f:~ \mathcal {A} \to \mathbb {N}_{\geq 1}\) is a function from \(\mathcal {A}\) to \(\mathbb {N}_{\geq 1}\) such that \(\sum \nolimits _{a \in \mathcal {A}} f(a) =m\); the number of all distinct size m multisets defined upon \(\mathcal {A}\) is \({{n + m -1}\choose {m}}\) [67].
The (n,k)th bit of the binary matrix argument in (21) has the corresponding single-integer index (k−1)2N+n∈{1,2,…,2Nm}.
Denoting by θ the DoA and \(\hat {\theta }\) the DoA estimate, RMSE is defined as the square root of the average value of \(|\hat {\theta } - {\theta }|^{2}\).
Given \(\overline {\mathbf {Y}}\), RPCA minimizes ∥L∥∗+λ∥O∥1, over L and O, subject to \(\overline {\mathbf {Y}} = \mathbf {L} + \mathbf {O}\) and \(\lambda = {\sqrt {\max \{2D, 2N \}}}^{-1}\) [29]. Then, it returns the 2K L2-PCs of L (computed by SVD).
Reported computation times are measured in MATLAB R2017a, run on a computer equipped with Intel(R) core(TM) i7-6700 processor 3.40 GHz and 32 GB RAM. The MATLAB code of L1-PCA (Algorithm 3) can be found at [68]. The MATLAB code for RPCA was provided at [69].
Abbreviations
AWGN:
Additive white Gaussian noise
DoA:
Direction-of-arrival
EVD:
Eigenvalue decomposition
L1-PCA:
L1-norm principal-component analysis
L1-PCs:
L1-norm principal components
PCA:
Principal-component analysis
RMSE:
Root-mean-squared-error
RPCA:
Robust PCA
SNR:
Signal-to-noise ratio
SRR:
Subspace-representation ratio
SVD:
Singular value decomposition
ULA:
Uniform linear array
UNM:
Unimodular nuclear-norm maximization
The authors would like to thank the U.S. NSF, the U.S. AFOSR, and the Ministry of Education, Research, and Religious Affairs of Greece for their support in this research work.
This work was supported in part by the U.S. National Science Foundation (NSF) under grants CNS-1117121, ECCS-1462341, and OAC-1808582, the U.S. Air Force Office of Scientific Research (AFOSR) under the Dynamic Data Driven Applications Systems (DDDAS) program, and the Ministry of Education, Research, and Religious Affairs of Greece under Thales Program Grant MIS-379418-DISCO.
Not applicable. The manuscript does not report any studies involving human participants, human data, or human tissue.
Not applicable. The manuscript does not contain any individual person's data.
Department of Electrical and Microelectronic Engineering, Rochester Institute of Technology, Rochester, 14623, NY, USA
Ericsson AB, Göteborg, 417 56, Sweden
I-SENSE and Department of Computer and Electrical Engineering and Computer Science, Florida Atlantic University, Boca Raton, 33431, FL, USA
School of Electrical and Computer Engineering, Technical University of Crete, Chania, Crete, 73100, Greece
References
S. A. Zekavat, M. Buehrer, Handbook of Position Location: Theory, Practice and Advances (Wiley-IEEE Press, New York, 2011).
W.-J. Zeng, X.-L. Li, High-resolution multiple wideband and non-stationary source localization with unknown number of sources. IEEE Trans. Signal Process. 58, 3125–3136 (2010).
M. G. Amin, W. Sun, A novel interference suppression scheme for global navigation satellite systems using antenna array. IEEE J. Select. Areas Commun. 23, 999–1012 (2005).
S. Gezici, Z. Tian, G. B. Giannakis, H. Kobayashi, A. F. Molisch, H. V. Poor, Z. Sahinoglu, Localization via ultra-wideband radios. IEEE Signal Process. Mag. 22, 70–84 (2005).
L. C. Godara, Application of antenna arrays to mobile communications, part II: Beam-forming and direction-of-arrival considerations. Proc. IEEE 85, 1195–1245 (1997).
H. Krim, M. Viberg, Two decades of array signal processing research. IEEE Signal Process. Mag., 67–94 (1996).
J. M. Kantor, C. D. Richmond, D. W. Bliss, B. C. Jr., Mean-squared-error prediction for Bayesian direction-of-arrival estimation. IEEE Trans. Signal Process. 61, 4729–4739 (2013).
P. Stoica, A. B. Gershman, Maximum-likelihood DOA estimation by data-supported grid search. IEEE Signal Process. Lett. 6, 273–275 (1999).
P. Stoica, K. C. Sharman, Maximum likelihood method for direction of arrival estimation. IEEE Trans. Acoust. Speech Signal Process., 1132–1143 (1990).
J. Sheinvald, M. Wax, A. J. Weiss, On maximum-likelihood localization of coherent signals. IEEE Trans. Signal Process. 44, 2475–2482 (1996).
B. Ottersten, M. Viberg, P. Stoica, A. Nehorai, in Radar Array Processing, ed. by S. Haykin, J. Litva, and T. J. Shepherd. Exact and large sample maximum likelihood techniques for parameter estimation and detection in array processing (Springer, Berlin, 1993), pp. 99–151.
M. I. Miller, D. R. Fuhrmann, Maximum likelihood narrow-band direction finding and the EM algorithm. IEEE Trans. Acoust. Speech Signal Process. 38, 1560–1577 (1990).
F. C. Schweppe, Sensor array data processing for multiple-signal sources. IEEE Trans. Inf. Theory 14, 294–305 (1968).
D. H. Johnson, The application of spectral estimation methods to bearing estimation problems. Proc. IEEE 70, 1018–1028 (1982).
R. T. Lacoss, Data adaptive spectral analysis method. Geophysics 36, 661–675 (1971).
R. O. Schmidt, Multiple emitter location and signal parameter estimation. IEEE Trans. Antennas Propag. 34, 276–280 (1986).
S. Haykin, Advances in Spectrum Analysis and Array Processing (Prentice-Hall, Englewood Cliffs, 1995).
H. L. V. Trees, Optimum Array Processing, Part IV of Detection, Estimation, and Modulation Theory (Wiley, New York, 2002).
R. Grover, D. A. Pados, M. J. Medley, Subspace direction finding with an auxiliary-vector basis. IEEE Trans. Signal Process. 55, 758–763 (2007).
G. H. Golub, C. F. V. Loan, Matrix Computations, 4th edn. (The Johns Hopkins Univ. Press, Baltimore, 2012).
P. Stoica, A. Nehorai, MUSIC, maximum likelihood, and Cramer-Rao bound. IEEE Trans. Acoust. Speech Signal Process. 37, 720–741 (1989).
P. Stoica, A. Nehorai, MUSIC, maximum likelihood, and Cramer-Rao bound: further results and comparisons. IEEE Trans. Acoust. Speech Signal Process. 38, 2140–2150 (1990).
H. Abeida, J.-P. Delmas, Efficiency of subspace-based DOA estimators. Signal Process. (Elsevier) 87, 2075–2084 (2007).
K. L. Blackard, T. S. Rappaport, C. W. Bostian, Measurements and models of radio frequency impulsive noise for indoor wireless communications. IEEE J. Select. Areas Commun. 11, 991–1001 (1993).
J. B. Billingsley, Ground clutter measurements for surface-sited radar. Massachusetts Inst. Technol., Cambridge, MA, Tech. Rep. 780 (1993). https://apps.dtic.mil/docs/citations/ADA262472.
F. Pascal, P. Forster, J. P. Ovarlez, P. Larzabal, Performance analysis of covariance matrix estimates in impulsive noise. IEEE Trans. Signal Process. 56, 2206–2217 (2008).
R. L. Peterson, R. E. Ziemer, D. E. Borth, Introduction to Spread Spectrum Communications (Prentice Hall, Englewood Cliffs, 1995).
O. Besson, P. Stoica, Y. Kamiya, Direction finding in the presence of an intermittent interference. IEEE Trans. Signal Process. 50, 1554–1564 (2002).
E. J. Candes, X. Li, Y. Ma, J. Wright, Robust principal component analysis? J. ACM 58(11), 1–37 (2011).
F. De la Torre, M. J. Black, in Proc. Int. Conf. Computer Vision (ICCV). Robust principal component analysis for computer vision (Vancouver, 2001), pp. 1–8.
Q. Ke, T. Kanade, Robust subspace computation using L1 norm. Internal Technical Report, Computer Science Department, Carnegie Mellon University, CMU-CS-03-172 (2003). http://www-2.cs.cmu.edu/~ke/publications/CMU-CS-03-172.pdf.
Q. Ke, T. Kanade, in Proc. IEEE Conf. Comput. Vision Pattern Recog. (CVPR). Robust l1-norm factorization in the presence of outliers and missing data by alternative convex programming (SPIE, San Diego, 2005), pp. 739–746.
L. Yu, M. Zhang, C. Ding, in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process. (ICASSP). An efficient algorithm for l1-norm principal component analysis (IEEE, Kyoto, 2012), pp. 1377–1380.
J. P. Brooks, J. H. Dulá, The l1-norm best-fit hyperplane problem. Appl. Math. Lett. 26, 51–55 (2013).
J. P. Brooks, J. H. Dulá, E. L. Boone, A pure l1-norm principal component analysis. J. Comput. Stat. Data Anal. 61, 83–98 (2013).
N. Kwak, Principal component analysis based on l1-norm maximization. IEEE Trans. Pattern Anal. Mach. Intell. 30, 1672–1680 (2008).
F. Nie, H. Huang, C. Ding, D. Luo, H. Wang, in Proc. Int. Joint Conf. Artif. Intell. (IJCAI). Robust principal component analysis with non-greedy l1-norm maximization (IJCAI, 2011), pp. 1433–1438.
M. McCoy, J. A. Tropp, Two proposals for robust PCA using semidefinite programming. Electron. J. Stat. 5, 1123–1160 (2011).
C. Ding, D. Zhou, X. He, H. Zha, in Proc. Int. Conf. Mach. Learn. (ICML). R1-PCA: Rotational invariant l1-norm principal component analysis for robust subspace factorization (ICML, Pittsburgh, 2006), pp. 281–288.
X. Li, Y. Pang, Y. Yuan, l1-norm-based 2DPCA. IEEE Trans. Syst. Man Cybern. B Cybern. 40, 1170–1175 (2009).
Y. Liu, D. A. Pados, Compressed-sensed-domain L1-PCA video surveillance. IEEE Trans. Multimedia 18, 351–363 (2016).
P. P. Markopoulos, D. A. Pados, G. N. Karystinos, M. Langberg, in Proc. SPIE Compressive Sensing Conference, Defense and Commercial Sensing (SPIE DCS 2017). L1-norm principal-component analysis in l2-norm-reduced-rank data subspaces (Anaheim, 2017).
P. P. Markopoulos, S. Kundu, S. Chamadia, D. A. Pados, Efficient L1-norm principal-component analysis via bit flipping. IEEE Trans. Signal Process. 62, 4252–4264 (2017).
N. Tsagkarakis, P. P. Markopoulos, D. A. Pados, in Proc. IEEE International Conference on Machine Learning and Applications (IEEE ICMLA 2016). On the L1-norm approximation of a matrix by another of lower rank (IEEE, Anaheim, 2016), pp. 768–773.
P. P. Markopoulos, S. Kundu, S. Chamadia, D. A. Pados, in Proc. IEEE International Conference on Machine Learning and Applications (IEEE ICMLA 2016). L1-norm principal-component analysis via bit flipping (IEEE, Anaheim, 2016), pp. 326–332.
N. Tsagkarakis, P. P. Markopoulos, G. Sklivanitis, D. A. Pados, L1-norm principal-component analysis of complex data. IEEE Trans. Signal Process. 66, 3256–3267 (2018).
P. P. Markopoulos, G. N. Karystinos, D. A. Pados, in Proc. 10th Int. Symp. on Wireless Commun. Syst. (ISWCS). Some options for l1-subspace signal processing (IEEE, Ilmenau, 2013), pp. 622–626.
P. P. Markopoulos, G. N. Karystinos, D. A. Pados, Optimal algorithms for l1-subspace signal processing. IEEE Trans. Signal Process. 62, 5046–5058 (2014).
P. P. Markopoulos, S. Kundu, D. A. Pados, in Proc. IEEE Int. Conf. Image Process. (ICIP). L1-fusion: Robust linear-time image recovery from few severely corrupted copies (IEEE, Quebec City, 2015), pp. 1225–1229.
P. P. Markopoulos, F. Ahmad, in Proc. IEEE Radar Conference (IEEE Radarcon 2017). Indoor human motion classification by L1-norm subspaces of micro-Doppler signatures (IEEE, Seattle, 2017).
D. G. Chachlakis, P. P. Markopoulos, R. J. Muchhala, A. Savakis, in Proc. SPIE Compressive Sensing Conference, Defense and Commercial Sensing (SPIE DCS 2017). Visual tracking with L1-Grassmann manifold modeling (SPIE, Anaheim, 2017).
P. P. Markopoulos, F. Ahmad, in Int. Microwave Biomed. Conf. (IEEE IMBioC 2018). Robust radar-based human motion recognition with L1-norm linear discriminant analysis (IEEE, Philadelphia, 2018), pp. 145–147.
A. Gannon, G. Sklivanitis, P. P. Markopoulos, D. A. Pados, S. N. Batalama, Semi-blind signal recovery in impulsive noise with L1-norm PCA (IEEE, Pacific Grove, 2018), to appear.
X. Ding, L. He, L. Carin, Bayesian robust principal component analysis. IEEE Trans. Image Process. 20(12), 3419–3430 (2011).
J. Wright, A. Ganesh, S. Rao, Y. Peng, Y. Ma, in Adv. Neural Info. Process. Syst. (NIPS). Robust principal component analysis: Exact recovery of corrupted low-rank matrices via convex optimization (2009), pp. 2080–2088.
P. P. Markopoulos, N. Tsagkarakis, D. A. Pados, G. N. Karystinos, in Proc. IEEE Phased Array Systems and Technology (PAST 2016). Direction-of-arrival estimation by L1-norm principal components (Waltham, 2016), pp. 1–6.
A. Graham, Kronecker Products and Matrix Calculus with Applications (Ellis Horwood, Chichester, 1981).
M. Haardt, J. Nossek, Unitary ESPRIT: How to obtain increased estimation accuracy with a reduced computational burden. IEEE Trans. Signal Process. 43, 1232–1242 (1995).
M. Pesavento, A. B. Gershman, M. Haardt, Unitary root-MUSIC with a real-valued eigendecomposition: A theoretical and experimental performance study. IEEE Trans. Signal Process. 48, 1306–1314 (2000).
S. A. Vorobyov, A. B. Gershman, Z.-Q. Luo, Robust adaptive beamforming using worst-case performance optimization. IEEE Trans. Signal Process. 51, 313–324 (2003).
J. E. Humphreys, Introduction to Lie Algebras and Representation Theory (Springer, New York, 1972).
A. W. Knapp, Lie Groups Beyond an Introduction, vol. 140 (Birkhauser, Boston, 2002).
L. W. Ehrlich, Complex matrix inversion versus real. Commun. ACM 13, 561–562 (1970).
G. N. Karystinos, A. P. Liavas, Efficient computation of the binary vector that maximizes a rank-deficient quadratic form. IEEE Trans. Inf. Theory 56, 3581–3593 (2010).
E. S. Andersen, Two summation formulae for product sums of binomial coefficients. Math. Scand. 1, 261–262 (1953).
C. D. Meyer, Matrix Analysis and Applied Linear Algebra (SIAM, Philadelphia, 2001).
R. P. Stanley, Enumerative Combinatorics, vol. 1, 2nd edn. (Cambridge University Press, New York, 2012).
P. P. Markopoulos, L1-PCA Toolbox. https://www.mathworks.com/matlabcentral/fileexchange/64855-l1-PCA-toolbox. Accessed 18 July 2019.
D. Laptev, RobustPCA. https://github.com/dlaptev. Accessed 18 July 2019.
D. Day, M. A. Heroux, Solving complex-valued linear systems via equivalent real formulations. SIAM J. Sci. Comput. 23(2), 480–498 (2001).
Graph neural network inspired algorithm for unsupervised network community detection
Stanislav Sobolevsky1,2 (ORCID: orcid.org/0000-0001-6281-0656) and Alexander Belyi2
Network community detection often relies on optimizing partition quality functions, like modularity. This optimization appears to be a complex problem traditionally relying on discrete heuristics. Although the problem can be reformulated as continuous optimization, direct application of standard optimization methods has limited efficiency in overcoming the numerous local extrema. However, the rise of deep learning and its applications to graphs offers new opportunities. While graph neural networks have been used for supervised and unsupervised learning on networks, their application to modularity optimization has not been explored yet. This paper proposes a new variant of the recurrent graph neural network algorithm for unsupervised network community detection through modularity optimization. The new algorithm's performance is compared against the state-of-the-art methods. The approach also serves as a proof-of-concept for the broader application of recurrent graph neural networks to unsupervised network optimization.
Complex networks play a pivotal role in various fields such as physics, biology, economics, social sciences, and urban planning. Understanding the underlying community structure of networks has therefore found a wide range of applications, including social science (Plantié and Crampes 2013), biology (Guimerà and Nunes Amaral 2005), and economics (Piccardi and Tajoli 2012). In particular, partitioning the networks of human mobility and interactions is broadly applied to regional delineation (Ratti et al. 2010; Blondel et al. 2010; Sobolevsky et al. 2013; Amini et al. 2014; Hawelka et al. 2014; Kang et al. 2013; Sobolevsky et al. 2014; Belyi et al. 2017; Grauwin et al. 2017; Xu et al. 2021) as well as urban zoning (Sobolevsky et al. 2018; Landsman et al. 2020, 2021).
Over the last two decades, a large number of approaches and algorithms for community detection in complex networks have been suggested. Some of them are just straightforward heuristics such as hierarchical clustering (Hastie 2001) or the Girvan-Newman (Girvan and Newman 2002) algorithm, while the vast majority rely on optimization techniques based on the maximization of various objective functions. The first and the most well-known partition quality function is modularity (Newman and Girvan 2004; Newman 2006) assessing the relative strength of edges and quantifying the cumulative strength of the intra-community links. Many modularity optimization strategies have been suggested over the last two decades (Newman and Girvan 2004; Newman 2006, 2004; Clauset et al. 2004; Agarwal and Kempe 2008; Sun et al. 2009; Blondel et al. 2008; Guimera et al. 2004; Good et al. 2010; Duch and Arenas 2005; Lee et al. 2012; Aloise et al. 2012; Barber and Clark 2009; Liu and Murata 2010; Sobolevsky et al. 2014; Džamić et al. 2019; Traag et al. 2019; Biedermann et al. 2018). Comprehensive historical overviews are presented in Fortunato (2010); Fortunato and Hric (2016) as well as some later surveys (Khan and Niazi 2017; Javed et al. 2018).
And while the problem of finding the exact modularity maximum is known to be NP-hard (Brandes et al. 2006), most of the available modularity optimization approaches rely on specific discrete optimization heuristics (although, in some cases, an algorithmic optimality proof of the partition is possible Agarwal and Kempe 2008; Aloise et al. 2010; Sobolevsky et al. 2017; Belyi et al. 2019, 2021; Belyi and Sobolevsky 2022).
As we show below, modularity optimization can be formulated as a continuous matrix optimization problem. However, the direct application of generic gradient descent methods is inefficient due to a large number of local maxima, which gradient descent might not be able to overcome.
Recently, graph neural networks (GNNs) became increasingly popular for supervised classifications and unsupervised embedding of the graph nodes with diverse applications in text classification, recommendation systems, traffic prediction, computer vision and many others (Wu et al. 2020). GNNs were already successfully applied for community detection, including supervised learning of the ground-truth community structure (Chen et al. 2017) as well as unsupervised learning of the node features enabling representation modeling of the network, including stochastic block-model (Bruna and Li 2017) and other probabilistic models with overlapping communities (Shchur and Günnemann 2019) or more complex self-expressive representation (Bandyopadhyay and Peter 2020). However, existing GNN applications overlook unsupervised modularity optimization, which so far has been a major approach in classic community detection.
This paper aims to fill this gap by proposing a straightforward GNN-inspired algorithmic framework for unsupervised community detection through modularity optimization. We perform a comprehensive comparative evaluation of the performance of the proposed method against the state-of-the-art ADVNDS and Combo algorithms (capable of reaching the best known partition in most cases), the widely used Louvain algorithm (which, despite its sub-optimal performance, is very fast and capable of handling large-scale networks), and its successor, the Leiden algorithm, which improves partition quality while preserving short execution time. We demonstrate that the method provides a reasonable balance between performance and speed for classic, synthetic and real-world networks, including temporal networks, and is sometimes capable of finding partitions with a higher modularity score that other algorithms cannot achieve.
More importantly, we believe the proposed approach serves as a proof of concept of leveraging GNN approaches for solving a broader range of network optimization problems. Such problems often arise when the aim is to reconstruct nodes' attributes based on their features and the network structure, including various types of unsupervised graph clustering (Kampffmeyer et al. 2019; Bianchi 2022). Following the work applying machine learning to combinatorial optimization problems by Bengio et al. (2021), several attempts to apply GNNs to hard combinatorial optimization problems were recently made. Some first promising results were obtained for the problems of minimum vertex cover, maximal clique, maximal independent set, and the satisfiability problem (Li et al. 2018). Furthermore, various GNN architectures were adapted to address the graph correlation clustering problem formulated as the minimum cost multicuts problem (Jung and Keuper 2022). Initial steps were even taken towards graph clustering via maximizing some variant of modularity function (Lobov and Ivanov 2019; Tsitsulin et al. 2020). However, their results still fall behind current state-of-the-art approaches in terms of modularity score. In this work, we show that with GNN-inspired techniques it is possible to achieve more practical results close to state-of-the-art.
In the following sections, first, we recall the formulation of community detection through modularity maximization and show how it could be framed as continuous quadratic optimization. Then we propose our GNN-inspired method and describe how to select its parameters. Lastly, we evaluate our approach using three benchmarks: classical real-world networks used previously in the literature to test modularity maximization algorithms, synthetic-networks benchmark, and a temporal network of taxi trips. The paper ends with conclusions and discussions.
The modularity optimization problem
The network modularity was among the first quality/objective functions proposed to assess and optimize the community structure (Newman 2006). However, it is now known to have certain shortcomings, including a resolution limit (Fortunato and Barthélémy 2007; Good et al. 2010) and the fact that it does not compare with a proper baseline and finds communities in random networks. Therefore alternative objective functions should be mentioned, e.g., Infomap description code length (Rosvall and Bergstrom 2007, 2008), Stochastic Block Model likelihood (Karrer and Newman 2011; Ball et al. 2011; Bickel and Chen 2009; Decelle et al. 2011, 2011; Yan et al. 2014), and Surprise (Aldecoa and Marìn 2011). Nevertheless, despite its limitations, modularity remains perhaps the most commonly used objective function so far.
In 2014, the authors proposed a novel optimization technique for community detection, "Combo" (Sobolevsky et al. 2014), capable of maximizing various objective functions, including modularity, description code length, and pretty much any other metric based on scoring the links and assessing the cumulative score of the intra-community links. At the time of publication, for modularity optimization, Combo outperformed other state-of-the-art algorithms, including the popular Louvain method (Blondel et al. 2008), in terms of the quality (modularity score) of the resulting partitioning, which could be achieved within a reasonable time for most real-world and synthetic networks of up to tens of thousands of nodes. The size limitation in the algorithm's evaluation is due to the current implementation storing the full modularity matrix in memory. However, this is not a fundamental limitation and could be overcome by using sparse matrix operations. Recently, more precise algorithms were proposed, but they are slower and often designed to run on a distributed cluster (Džamić et al. 2019; Biedermann et al. 2018; Hamann et al. 2018; Lu et al. 2015). Moreover, their code is not available.
The proposed algorithms, including Combo, are often quite efficient and, in some cases, are able to reach the theoretical maximum of the modularity score as revealed by a suitable upper bound estimate (Sobolevsky et al. 2017). However, in general, finding the theoretically optimal solution may not be feasible, and one has to rely on heuristic algorithmic solutions without being certain of their optimality. Instead, an empiric assessment of their performance in comparison with other available algorithms could be performed.
The modularity function
In short, the modularity (Newman and Girvan 2004; Newman 2006) function of a proposed network partition quantifies the relative strength of the edges between nodes assigned to the same community. Specifically, if the network edge weights between each pair of nodes i, j are denoted as \(e_{i,j}\), then the modularity of the partition com(i) (expressed as a mapping assigning community number com to each node i) can be defined as
$$\begin{aligned} M=\sum _{i,j, com(i)=com(j)}q_{i,j}, \end{aligned}$$
where the quantity \(q_{i,j}\) for each edge i, j (call q a modularity score for an edge) is defined as its normalized relative edge weight in comparison with the random network model with the same node strengths. Namely,
$$\begin{aligned} q_{i,j}=\frac{e_{i,j}}{T}-\frac{w^{out}(i)w^{in}(j)}{T^2}, \end{aligned}$$
where \(w^{out}(i)=\sum _k e_{i,k}\), \(w^{in}(j)=\sum _k e_{k,j}\), \(T=\sum _i w^{out}(i)=\sum _j w^{in}(j)=\sum _{i,j}e_{i,j}\).
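As a small illustration (not taken from the paper's implementation), the modularity-score matrix \(Q=\left(q_{i,j}\right)\) of (1)-(2) can be assembled directly from a weighted adjacency matrix; note that, by construction, each row of Q sums to zero.

```python
# Build the modularity-score matrix Q from a (possibly directed) weighted adjacency matrix E.
import numpy as np

def modularity_matrix(E: np.ndarray) -> np.ndarray:
    """q_ij = e_ij / T - w_out(i) * w_in(j) / T^2."""
    T = E.sum()
    w_out = E.sum(axis=1)          # out-strengths (row sums)
    w_in = E.sum(axis=0)           # in-strengths (column sums)
    return E / T - np.outer(w_out, w_in) / T ** 2

# Toy network: two 3-node cliques joined by a single edge (undirected, so E is symmetric).
E = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    E[i, j] = E[j, i] = 1.0
Q = modularity_matrix(E)
print(np.allclose(Q.sum(axis=1), 0))   # True: rows of Q sum to zero
```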
Rewrite the modularity optimization problem in a vector form: let \(Q=\left( q_{i,j}\right)\) be the matrix of all the modularity scores for all the edges (call it a modularity matrix). Let C be an \(n\times k\) matrix, where n is the number of network nodes and k is the number of communities we are looking to build. Each element \(c_{i,p}\) of the matrix can be zero or one depending on whether the node i belongs to the community p or not, i.e., whether \(com(i)=p\). If the communities are not overlapping, then each row of the matrix has one single unit element, and the rest of its elements are zeros.
More generally, if we admit uncertainty in community attachment, then the elements \(c_{i,p}\) of the matrix C could represent the probabilities of the node i to be attached to the community p. This way, \(c_{i,p}\in [0,1]\) and the sum of each row of the matrix C equals 1.
Then the modularity score M in the case of a discrete community attachment could be represented as a trace of matrix product
$$\begin{aligned} M=tr(C^T Q C), \end{aligned}$$
where tr denotes the trace of the matrix – a sum of all of its diagonal elements.
This way, finding the community structure of up to k communities optimizing the network modularity could be expressed as a constrained quadratic optimization problem of finding the \(n\times k\) matrix C maximizing the trace of matrix product \(M=tr(C^TQC)\), such that all \(c_{i,p}\in \{0,1\}\) and the sum of each row of the matrix C equals 1 (having a single unit element).
Replacing the binary attachment constraint \(c_{i,p}\in \{0,1\}\) with a continuous attachment \(c_{i,p}\in [0,1]\) relaxes the optimization problem to finding probabilistic community attachments. It could be easily shown that the optimal solution of the binary attachment problem could be derived from the optimal solution of the probabilistic attachment problem after assigning \(q_{i,i}=0\) for all the diagonal elements of the matrix Q. As diagonal elements \(q_{i,i}\) are always included in the sum M since \(com(i)=com(j)\) for \(i=j\), the values of the diagonal elements serve as constant adjustment of the objective function M and do not affect the choice of the optimal partition, so we are free to null them without loss of generality. At the same time, for each given i, once \(q_{i,i}=0\), if we fix the community attachments of all the other nodes \(j\ne i\), the objective function M becomes a linear function of the variables \(c_{i,p}\) subject to constraints \(\sum _p c_{i,p}=1\) and \(c_{i,p}\in [0,1]\). Obviously, the maximum of the linear function with linear constraints is reached at one of the vertices of the domain of the allowed values for \(c_{i,p}\), which will involve a single \(c_{i,p}\) being one and the rest being zeros. This way, we have proven the following:
The optimal probabilistic attachment \(c_{i,p}\in [0,1]\) maximizing (2) in the case of \(q_{i,i}=0\) represents a binary attachment \(c_{i,p}\in \{0,1\}\) maximizing (2) for an arbitrary original Q.
So the discrete community detection problem through modularity optimization could be solved within the continuous constrained quadratic optimization framework. This allows the application of methods and techniques developed for continuous optimization, such as gradient descent, for example. However, despite its analytic simplicity, the quadratic programming problem with indefinite matrix Q is still NP-hard. In particular, the dimensionality of the problem leads to multiple local maxima challenging direct application of the standard continuous optimization techniques, like gradient descent. Indeed, any discrete partition, such that no single node could be moved to a different community with a modularity gain, will become such a local maximum. Unfortunately, finding such a local maximum rarely provides a plausible partition – such solutions could have been obtained with a simple greedy discrete heuristic iteratively adjusting the single node attachments, while we know that the modularity optimization, being NP-hard, generally requires more sophisticated non-greedy heuristics, like Sobolevsky et al. (2014). To address this challenge, we introduce a new heuristic method that efficiently finds high-quality solutions to the described quadratic optimization problem, although without guaranteed achievement of the global optimum.
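Continuing the small sketch above (and reusing its E, Q, and modularity_matrix), the matrix form (2) of the objective can be evaluated as follows; consistently with the definitions, the trivial one-community partition of the toy network scores zero, while splitting the two cliques yields a positive modularity.

```python
import numpy as np

def membership_matrix(com: np.ndarray, k: int) -> np.ndarray:
    """One-hot n x k attachment matrix C with c_ip = 1 iff com(i) = p."""
    C = np.zeros((len(com), k))
    C[np.arange(len(com)), com] = 1.0
    return C

def modularity(Q: np.ndarray, C: np.ndarray) -> float:
    """M = tr(C^T Q C), as in (2)."""
    return float(np.trace(C.T @ Q @ C))

print(modularity(Q, membership_matrix(np.array([0, 0, 0, 1, 1, 1]), 2)))  # ~0.357
print(modularity(Q, membership_matrix(np.zeros(6, dtype=int), 2)))        # 0.0
```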
The GNNS method
In this section, we present a GNN-style method (GNNS) for unsupervised network partition through modularity optimization, inspired by recurrent graph neural network models (in the definition of Wu et al. (2020)) as well as the older Weisfeiler-Lehmann graph node labeling algorithm Weisfeiler and Leman (1968). Ideas similar to the Weisfeiler-Lehmann algorithm have already found their application to community detection in a well-known label propagation algorithm Raghavan et al. (2007). Their development and application to modularity maximization were proposed and studied in detail in consequent works (Barber and Clark 2009; Liu and Murata 2010). However, they were still discrete optimization heuristics in spirit and could not take into account recent advances in graph neural networks. Unlike these methods, we propose a continuous optimization technique that considers current nodes' attachments combined with the attachments of their neighbors. Namely, we propose a simple iterative process starting with a random initial matrix \(C=C_0\) and at each step \(t=1,2,3,...,N\) performing an iterative update of the rows \(c_i\) of the matrix C representing the node i community attachments as follows:
$$\begin{aligned} \tilde{c}_{i}^t = F\left( c_i^{t-1}, Q_i C^{t-1}\right) , \end{aligned}$$
where \(Q_i=\left( q_{i,j}:j=1,2,...,n\right)\) is the i-th row of the modularity matrix Q representing the outgoing edges from the node i. This way, the term \(Q_i C^{t-1}\) collects information about the neighbor nodes' community attachments (this could be viewed as a development of ideas discussed in Barber and Clark (2009)), and the equation (3) updates the node community attachments with respect to their previous attachments as well as the neighbor node attachments. In order to ensure the conditions \(\sum _p c_{i,p}=1\), a further normalization \(c_{i,p}^t=\tilde{c}_{i,p}^t/\sum _{p^*}\tilde{c}_{i,p^*}^t\) needs to be applied at each iteration.
A simple form for an activation function F could be a superposition of a linear function subject to appropriate scale normalization and a rectified linear unit \(ReLU(x)=\left\{ \begin{array}{c}0,x\le 0\\ x,x>0\end{array}\right.\), leading to
$$\begin{aligned} \tilde{c}_{i,p}^t=ReLU\left( f_1 c_{i,p}^{t-1} + f_2 Q_i C_p^{t-1}/\tau _i^t + f_0\right) , \ c_{i,p}^t = \tilde{c}_{i,p}^t/\sum _{p^*}\tilde{c}_{i,p^*}^t, \end{aligned}$$
where \(f_0, f_1, f_2\) are the model parameters, \(C_p=\left( c_{j,p}:j=1,2,...,n\right)\) is the p-th column of the matrix C representing all the node attachments to the community p, and \(\tau _i^t = \left| \max _{p^*}{Q_i C_{p^*}^{t-1}}\right|\) are the normalization coefficients ensuring the same scale for the terms of the formula.
Intuitive considerations allow defining possible ranges for the model coefficients \(f_0,f_1,f_2\). Defining the coefficient \(f_1\) within the range \(f_1\in [0,1]\) would ensure decay scaling of the community attachment at each iteration unless confirmed by the strength of the node's attachment to the rest of the community expressed by \(Q_i C_p^{t-1}\). A free term \(f_0\in [-1,0]\) provides some additional constant decay of the community attachment at each iteration, while the term \(Q_i C_p^{t-1}/\tau _i^t\) strengthens the attachment of node i to those communities having positive modularity scores of the edges between node i and the rest of the community. Normalization term \(\tau _i^t\) ensures that the strongest community attachment gets a maximum improvement of a fixed scale \(f_2\).
A consistent node attachment that cannot be improved by assigning the given node i to a different community p (i.e., having \(Q_i C_p^{t-1}=\tau _i^{t-1}=\max _{p^*} Q_i C_{p^*}^{t-1}\)) should see the fastest increase in the attachment score \(\tilde{c}_{i,p}^t\), eventually converging to the case of \(c_{i,p}=1\). Any weaker attachment should see a decreasing community membership score \(c_{i,p}^t\), eventually dropping to zero. This could be ensured by the balancing equation \(f_1+f_2+f_0 = 1\), allowing to define appropriate \(f_2\) given \(f_0\) and \(f_1\).
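A minimal sketch of a single GNNS update (4) for all nodes at once is given below; it follows the formulas above, with Q the modularity matrix with zeroed diagonal, C the current \(n\times k\) attachment matrix, and \(f_2=1-f_0-f_1\). The guard that re-seeds a row uniformly if the ReLU zeroes it out entirely is our own addition, not from the text; the authors' reference implementation is available on GitHub and may differ in details.

```python
import numpy as np

def gnns_step(Q: np.ndarray, C: np.ndarray, f0: float, f1: float) -> np.ndarray:
    """One iteration of update (4): ReLU(f1*C + f2*QC/tau + f0), then row normalization."""
    f2 = 1.0 - f0 - f1                              # balancing equation f0 + f1 + f2 = 1
    QC = Q @ C                                      # entry (i, p) equals Q_i C_p
    tau = np.abs(QC.max(axis=1, keepdims=True))     # tau_i = |max_p Q_i C_p|
    tau[tau == 0] = 1.0                             # avoid division by zero
    C_new = np.maximum(f1 * C + f2 * QC / tau + f0, 0.0)
    row_sums = C_new.sum(axis=1, keepdims=True)
    C_new = np.where(row_sums > 0,
                     C_new / np.where(row_sums > 0, row_sums, 1.0),
                     1.0 / C.shape[1])              # re-seed all-zero rows uniformly (added guard)
    return C_new
```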
Training the GNNS
The sequence of the GNNS iterations (4) depends on the choice of the model parameters \(f_0,f_1\), as well as the initial community attachments. The final convergence also often depends on those choices. Given that, a good strategy is to simulate multiple iteration sequences with different initial attachments and choose the best final result. Also, it turns out that the method demonstrates reasonable performance for a broad range of parameter values \(f_0\in [-1,0]\), \(f_1\in [0,1]\), so rather than trying to fit the best choice for all the networks or a given network, one may simply include the random choice for \(f_0, f_1\) along with a random choice of the initial community attachments.
So the proposed GNNS algorithm starts with a certain number of S random partitions and parameter choices. Then the GNNS performs 10 iterations of the partition updates according to (4). Among those, a batch of the best \(\lfloor S/3 \rfloor\) partitions (with the highest achieved modularity scores) is selected and further supplemented with another \(S-\lfloor S/3 \rfloor\) configurations derived from the selected batch by randomly shuffling partitions and assigning new random parameters. Another 10 iterations are performed. Another batch of \(\lfloor S/9 \rfloor\) best partitions is selected and shuffled, creating a total of \(\lfloor S/3 \rfloor\) samples. Then another 30 iterations are performed with those, and for small S (\(S \le 1000\)), the best resulting partition is selected as the final outcome of the algorithm. For larger S (\(S > 1000\)), another batch of \(\lfloor S/30 \rfloor\) best partitions is selected and shuffled, creating a total of \(\lfloor S/10 \rfloor\) samples. Then the final 100 iterations are performed with those, and the best resulting partition is selected as the final outcome of the algorithm. Finally, the partition is discretized by assigning each node to the cluster with maximum probability. The following algorithm describes the whole process.
If matrix Q is stored as a sparse graph matrix and two separate vectors for in- and out-degrees, this algorithm has a time complexity of O(Simk), where S is the number of attempts with different initial attachments and parameter values, i is the number of iterations, m is the number of edges, and k is the maximum number of communities. Value k is usually selected as an (educated) guess of the expected maximum number of communities or as the maximum feasible value. One nice property the GNNS inherits from neural networks is that it is easily parallelizable by modern frameworks.
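For illustration, a simplified driver of the above procedure is sketched below: it performs S independent restarts with random \((f_0, f_1)\) and random initial attachments, runs a fixed number of iterations each, and keeps the best discretized partition. The staged batch selection and reshuffling described above are omitted for brevity, so this is a sketch of the idea rather than the authors' exact procedure; it reuses gnns_step from the previous sketch.

```python
import numpy as np

def gnns_partition(Q: np.ndarray, k: int, S: int = 100, iters: int = 50, seed: int = 0):
    """Simplified GNNS driver: best-of-S random restarts, no staged selection."""
    rng = np.random.default_rng(seed)
    Q0 = Q - np.diag(np.diag(Q))                 # zero the diagonal (see the proposition above)
    best_labels, best_score = None, -np.inf
    for _ in range(S):
        f0, f1 = rng.uniform(-1.0, 0.0), rng.uniform(0.0, 1.0)
        C = rng.random((Q.shape[0], k))
        C /= C.sum(axis=1, keepdims=True)
        for _ in range(iters):
            C = gnns_step(Q0, C, f0, f1)
        labels = C.argmax(axis=1)                # discretize: most probable community
        Cd = np.eye(k)[labels]
        score = float(np.trace(Cd.T @ Q @ Cd))   # modularity of the discretized partition
        if score > best_score:
            best_score, best_labels = score, labels
    return best_labels, best_score

# Example with the toy matrix Q from the earlier sketches: gnns_partition(Q, k=2)
```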
Below we evaluate the three versions of the GNNS: a fast version with \(S=100\) denoted GNNS100, a slower but more precise version with \(S=2500\) denoted GNNS2500, and a slow but very precise version with \(S=25000\) denoted GNNS25000.
Comparative evaluation
We implemented the proposed GNNS modularity optimization algorithm in Python and ran our experiments in Google Colab (results of Combo runs on the three largest networks were obtained on a laptop because of the time limits in Colab). The source code is available on GitHub. We evaluate our algorithm against other state-of-the-art techniques mentioned above: the fast and popular Louvain method Blondel et al. (2008), its successor, the Leiden algorithm Traag et al. (2019), the Combo algorithm Sobolevsky et al. (2014), often capable of reaching the best known modularity score over a wide range of networks, and the ADVNDS method, claimed to be the state-of-the-art method in 2017 Džamić et al. (2019). Both papers introducing Combo and ADVNDS make a thorough comparison with other methods available at their time and conclude that those methods outperform their competitors. We also tried all other modularity maximization methods implemented in the CDLib library developed specifically for evaluating community detection methods Rossetti et al. (2019). However, their performance was much worse both in terms of running time and achieved modularity scores. Some more recent methods claim to be the new state-of-the-art, but they are designed to be run on distributed systems, and their results cannot be fairly compared Biedermann et al. (2018). Thus, first, we shall compare our method with four selected approaches over a sample of classic network examples. And then, all those methods but ADVNDS (whose implementation is not available to us) shall also be evaluated over series of the two types of random graphs often used for benchmarking community detection algorithms: LFR graphs Lancichinetti et al. (2008) and Block-model graphs (Karrer and Newman 2011).
As the GNNS chooses the best partition among multiple runs, we consider its three configurations involving a) 100, b) 2500, and c) 25000 initial random samples of partitions plus model configurations. We shall refer to those as GNNS100, GNNS2500, and GNNS25000. All other algorithms in our comparison also involve random steps, and the best partition they converge to is not perfectly stable. Thus their performance could also benefit from choosing the best partition among multiple runs, especially for the Louvain method. It often takes up to 10-20 attempts to find the best partition they are capable of producing. For example, applying Combo and Louvain to the classic case of the Email network leads to the performance reported in Table 1 below. As we see, both reach their best performance after 20 attempts (although Combo's results for this network are significantly better than Louvain's) and do not further improve over the subsequent 30 attempts. Based on that, in the further experiments, we shall report the best performance of the Combo, Louvain, and Leiden algorithms achieved after 20 attempts for each. Since the implementation of the ADVNDS algorithm is not available to us, we provide the results reported in its original paper after ten runs, with running time calculated as the reported average multiplied by ten Džamić et al. (2019).
Table 1 The best modularity scores reached by the Combo and Louvain method after a different number of attempts for the Email network
Classic examples
Most of the classic instances were taken from the clustering chapter of the 10th DIMACS Implementation Challenge and were often reused in community detection literature Sanders et al. (2014). Table 2 reports the sources and details of those networks. Since the complexity of our method is proportional to the maximum number of communities, we limited this dataset to the networks with fewer than three hundred communities, according to the ADVNDS paper Džamić et al. (2019). All originally directed networks were symmetrized, and self-loops were removed. The largest network, Krong500slogn16, is synthetic, while all other networks represent real-world data.
Table 2 List, with sources, of the networks we used in our benchmark
Tables 3 and 4 report the performance of the proposed approach compared with ADVNDS Džamić et al. (2019), Leiden Traag et al. (2019), Louvain Blondel et al. (2008), and Combo methods Sobolevsky et al. (2014). Missing values in the ADVNDS column mean that the corresponding networks were not present in the original paper. For the Leiden and Louvain methods, we used their implementation in the Python igraph package, setting the number of iterations to \(-1\) allowing the Leiden algorithm to run until the best modularity is achieved. The missing value in the Combo column corresponds to the network too large for the current implementation.
Table 3 Execution time (in seconds) of the ADVNDS, Leiden, Louvain, Combo, and GNNS algorithms over the classic network examples
Table 4 Best modularity scores of the ADVNDS, Leiden, Louvain, Combo, and GNNS algorithms over the classic network examples
According to the results reported in Tables 3 and 4, Louvain is the fastest algorithm, closely followed by the GNNS100 method, especially for larger networks, with GNNS100 often providing higher modularity scores, especially for smaller networks, while ADVNDS, Combo, and GNNS25000 are by far the slowest, demonstrating however superior performance. GNNS2500 finds pretty good partitions for some networks, and it works much faster than ADVNDS and Combo. In general, its performance is comparable to Leiden. Both algorithms work quickly and find partitions with modularity just a bit below the best-known. Moreover, Combo could not handle the largest network, while GNNS2500 found a partition better than Leiden and did so almost six times faster. For that network, GNNS25000 finds the highest modularity score, outperforming all other methods, including ADVNDS.
While ADVNDS reported the highest known modularity scores for all the networks where it was applied except the largest one, it is much slower than GNNS, and its implementation is not available, so we could not test it on other networks. Besides, the current GNNS implementation uses pure Python, while implementing it in C++ (as done for all other algorithms) could provide further speed improvements.
Overall, while no single heuristic is the best solution for all the cases, a GNNS algorithm often finds a plausible solution, sometimes the best-known one, and provides a flexible parameter-controlled trade-off between speed and performance ranging from the fastest to the close-to-optimal performance, which makes it a valuable addition to an existing collection of algorithms. More importantly, since this simple GNN-style heuristic can perform comparably to the state-of-the-art, that serves as a proof-of-concept for considering more sophisticated GNN architectures, configurations, and learning techniques that could provide further improvement in solving community detection problem as well as other complex network optimization problems like minimum vertex cover, maximal independent set Li et al. (2018), clique partitioning, correlation clustering Jung and Keuper (2022), spectral clustering Bianchi et al. (2020), and others Bengio et al. (2021), Yow and Luo (2022).
Synthetic networks
In the next test, the methods were applied to two families of synthetic networks – Lancichinetti–Fortunato–Radicchi (LFR) Lancichinetti et al. (2008) and Stochastic Block-Model (SBM) Holland et al. (1983) graphs. We built two sets of ten LFR networks of size 250 each, with overall average node degrees of \(\overline{d}=29.6\) and \(\overline{d}=13.3\), and three sets of ten SBM networks of size 300, one set for each value of the parameter \(\nu\), defined as the ratio of the probability of inner-community edges (three communities of size 100 each) to the probability of inter-community edges. For all SBM networks, the average probability of an edge was 0.1, leading to an average node degree of \(\overline{d}=30\). The Python package networkx was used to generate these synthetic networks.
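A minimal sketch of how such networks can be generated with networkx is given below; the degree exponents, mixing parameter, and the particular value of \(\nu\) are illustrative choices rather than the exact settings of our experiments.

```python
import networkx as nx

# LFR benchmark graph of 250 nodes. The exponents and mixing parameter are
# illustrative; generation may require adjusting them if it fails to converge.
lfr = nx.LFR_benchmark_graph(
    n=250, tau1=3.0, tau2=1.5, mu=0.1,
    average_degree=30, min_community=20, seed=42,
)

# Stochastic block model with three communities of 100 nodes each.
# nu = p_in / p_out; p_out is chosen so that the average edge probability
# (p_in + 2 * p_out) / 3 is approximately 0.1.
nu = 2.0
p_out = 0.3 / (nu + 2.0)
p_in = nu * p_out
sbm = nx.stochastic_block_model(
    sizes=[100, 100, 100],
    p=[[p_in, p_out, p_out],
       [p_out, p_in, p_out],
       [p_out, p_out, p_in]],
    seed=42,
)
```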
Table 5 shows the average values of modularity, normalized mutual information (NMI), and execution time obtained after partitioning ten instances of each type using the Leiden, Louvain, Combo (best of 20 runs), and GNNS100 algorithms. As we can see, Combo demonstrates superior performance in terms of modularity on all sets of networks except for the last SBM set with the highest \(\nu =3\), where all algorithms achieve the same score. While GNNS100 demonstrates suboptimal performance on those networks compared to Combo, it works at unparalleled speed, several times faster than Louvain and Leiden and two orders of magnitude faster than Combo. Based on that, GNNS100 proves to be the fastest solution for partitioning the synthetic networks while demonstrating reasonable performance in terms of the modularity and NMI scores achieved.
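The evaluation itself can be sketched as follows; the detected partition is produced here by networkx's built-in greedy modularity heuristic purely for illustration, and scikit-learn's NMI implementation stands in for whichever NMI routine is used.

```python
from sklearn.metrics import normalized_mutual_info_score
import networkx.algorithms.community as nx_comm

# Ground-truth labels of the SBM graph generated above; the "block" node
# attribute is set by nx.stochastic_block_model.
true_labels = [sbm.nodes[v]["block"] for v in sbm.nodes]

# Any detected partition as a list of labels, here obtained from a simple
# greedy modularity heuristic used only for illustration.
communities = nx_comm.greedy_modularity_communities(sbm)
pred_labels = [None] * sbm.number_of_nodes()
for label, nodes in enumerate(communities):
    for v in nodes:
        pred_labels[v] = label

print("modularity:", nx_comm.modularity(sbm, communities))
print("NMI:       ", normalized_mutual_info_score(true_labels, pred_labels))
```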
Table 5 Comparative evaluation of the Leiden, Louvain, Combo, and GNNS algorithms over the synthetic networks
Temporal networks
Earlier works established the applicability of the GNN architecture for capturing dynamic properties of the evolving graphs Ma et al. (2020). As GNNS is well-suited for the iterative partition improvement, it could be suggested for active learning of the temporal network partition. For example, initial warm-up training could be performed over the first temporal layers with subsequent tuning iterations while moving from a current temporal layer to the next one.
Below we apply the approach to the temporal network of daily taxi mobility between the taxi zones in New York City (NYC). We use the 2016-2017 data provided by the NYC Taxi and Limousine CommissionFootnote 4 to build the origin-destination network of yellow and green taxi ridership between the NYC taxi zones (edges of the network are weighted by the number of trips). The results of the temporal GNNS (GNNStemp) for each daily ridership network are compared against single runs (for the sake of speed) of the Louvain and Combo algorithms as the fastest and the most precise of the available algorithms. GNNStemp uses an initial warm-up over the aggregated network for the year 2016 and then performs a single run of 20 fine-tune iterations for each daily temporal layer in 2017, starting from the previously achieved partition. The achieved best modularity scores fluctuate slightly between the daily layers. The 2017 yearly averages of the ratios of the daily scores achieved by each algorithm to the best score of all three algorithms for that day are as follows: \(97.97\%\) for Louvain, \(99.99\%\) for Combo, and \(99.78\%\) for GNNStemp. At the same time, the total elapsed time is 1.71 sec for Louvain, 16.04 sec for Combo, and 8.40 sec for GNNStemp. Furthermore, GNNStemp managed to find the best modularity score, not reached by the two other algorithms, on \(11.2\%\) of the temporal layers.
So the performance of GNNStemp in terms of the achieved modularity score falls right in the middle between Louvain and Combo, while GNNStemp is nearly twice as fast as the single run of Combo.
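The warm-start logic behind this temporal setup can be illustrated with the following runnable sketch; igraph's Leiden with an initial membership is used here only as a stand-in for the GNNS warm-up and fine-tuning steps, and the random graphs and parameter values replace the actual taxi networks.

```python
import igraph as ig

# Illustrative warm-start loop in the spirit of GNNStemp. The random graphs
# stand in for the aggregated 2016 network and the daily 2017 layers, which
# all share the same node set (the taxi zones).
n_zones = 200
aggregated = ig.Graph.Erdos_Renyi(n=n_zones, p=0.05)
daily_layers = [ig.Graph.Erdos_Renyi(n=n_zones, p=0.05) for _ in range(7)]

# "Warm-up": partition the aggregated network first.
partition = aggregated.community_leiden(objective_function="modularity",
                                        n_iterations=-1).membership

daily_scores = []
for layer in daily_layers:
    # Refine the previous partition on the current temporal layer with a
    # fixed, small number of iterations (20 in the setup described above).
    clustering = layer.community_leiden(objective_function="modularity",
                                        initial_membership=partition,
                                        n_iterations=20)
    partition = clustering.membership
    daily_scores.append(clustering.modularity)

print(daily_scores)
```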
Details of the day-by-day GNNStemp performance compared to the best of the three algorithms are shown in fig. 1. On some days, one may notice visible differences between the top algorithm's performance and that of GNNStemp, but GNNStemp captures the temporal pattern of the modularity score dynamics well, with a correlation of \(99.44\%\) between the GNNStemp score timeline and that of the top algorithm.
Comparative performance of GNNS vs. the best of GNNS, Louvain, and Combo for community detection on the temporal network of daily taxi ridership in NYC. Subplots depict achieved network modularity and its time-series properties: autocorrelation, seasonality, and periodicity obtained by seasonal decomposition using moving averages
By performing time-series periodicity and trend-seasonality analysis, one can notice some interesting temporal patterns in the strength of the network community structure, as also presented in fig. 1. The community structure, quantified by means of the best achieved modularity score, demonstrates a strong weekly periodicity, with a stronger community structure over the weekends, including Fridays. The strength of the community structure also shows noticeable seasonality, with stronger communities over the winter and weaker ones over the summer. One may relate this observation to people exploring more destinations during weekends and holidays. As seen in fig. 1, GNNStemp accurately reproduces the patterns discovered for the best partition timeline.
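A decomposition along these lines can be obtained, for instance, with statsmodels; the series below is synthetic and only stands in for the actual daily modularity scores.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

# Synthetic stand-in for the series of best daily modularity scores in 2017,
# with a weak weekend effect added purely for illustration.
dates = pd.date_range("2017-01-01", "2017-12-31", freq="D")
rng = np.random.default_rng(0)
daily_modularity = pd.Series(
    0.5 + 0.02 * (dates.dayofweek >= 5) + 0.005 * rng.standard_normal(len(dates)),
    index=dates,
)

# Seasonal decomposition using moving averages with a 7-day period.
decomposition = seasonal_decompose(daily_modularity, model="additive", period=7)
trend, seasonal, resid = decomposition.trend, decomposition.seasonal, decomposition.resid

# A quick check of the weekly pattern via the lag-7 autocorrelation.
print("lag-7 autocorrelation:", daily_modularity.autocorr(lag=7))
```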
In conclusion, while Combo runs could demonstrate higher partition accuracy at certain temporal layers, the GNNStemp is capable of reaching similar performance much faster while adequately reproducing the qualitative temporal patterns. So GNNStemp could be trusted as a fast and efficient solution for extracting insights into the dynamics of the temporal network structure.
We proposed a novel recurrent GNN-inspired framework for unsupervised learning of the network community structure through modularity optimization. The simple iterative algorithm depends on only two adjustable parameters, and we propose an integrated technique for tuning them, so that parameter selection and modularity optimization are performed within the same iterative learning process.
The algorithm's performance has been evaluated on classic network examples, synthetic network models, and a real-world temporal network case. Despite its simplicity, the new algorithm reaches similar and, in some cases, higher modularity scores compared to the more sophisticated discrete optimization state-of-the-art algorithms. One of the possible limitations of the proposed method is its dependence on the number of runs. However, this could be viewed as an advantage since it allows flexible adjustment of the algorithm's complexity, tuning the model parameters to find the right balance between speed and performance. At the low-complexity settings, it can significantly outperform alternative methods in terms of running time while maintaining reasonable performance in terms of partition quality. Thus, the algorithm is efficiently applicable in both scenarios—when the execution time is of the essence as well as when the quality of the resulting partition is a paramount priority.
Furthermore, the algorithm enables a special configuration for the active learning of the community structure on temporal networks, reconstructing all the important longitudinal patterns.
But more importantly, we believe the algorithm serves as a successful proof of concept for applying more advanced GNN-type techniques to unsupervised network learning, and it opens possibilities for solving a broader range of network optimization problems. Similar approaches could work for related problems such as clique partitioning and correlation clustering. Applying more sophisticated model architectures, along with constantly developing new methods for neural network training, could further improve the quality of the solutions found. Developing these methods will constitute our future work in this direction.
All the data (sample networks) are taken from publicly available sources.
https://colab.research.google.com/
https://github.com/Alexander-Belyi/GNNS
All networks were downloaded from http://www.cc.gatech.edu/dimacs10/downloads.shtml
https://www1.nyc.gov/site/tlc/about/data.page
GNN:
Graph neural network
GNNS:
GNN-style method
NYC:
New York City
LFR:
Lancichinetti–Fortunato–Radicchi
SBM:
Stochastic block-model
NMI:
Normalized mutual information
Agarwal G, Kempe D (2008) Modularity-maximizing graph communities via mathematical programming. Eur Phys J B 66(3):409–418. https://doi.org/10.1140/epjb/e2008-00425-1
Aldecoa R, Marìn I (2011) Deciphering network community structure by surprise. PLoS one 6(9):e24195. https://doi.org/10.1371/journal.pone.0024195
Aloise D, Cafieri S, Caporossi G, Hansen P, Perron S, Liberti L (2010) Column generation algorithms for exact modularity maximization in networks. Phys Rev E 82(4):46112. https://doi.org/10.1103/PhysRevE.82.046112
Amini A, Kung K, Kang C, Sobolevsky S, Ratti C (2014) The impact of social segregation on human mobility in developing and industrialized regions. EPJ Data Sci 3(1):6
Baird D, Ulanowicz RE (1989) The seasonal dynamics of the Chesapeake Bay ecosystem. Ecol Monogr 59(4):329–364
Ball B, Karrer B, Newman MEJ (2011) Efficient and principled method for detecting communities in networks. Phys Rev E 84:036103. https://doi.org/10.1103/PhysRevE.84.036103
Belyi A, Bojic I, Sobolevsky S, Sitko I, Hawelka B, Rudikova L et al (2017) Global multi-layer network of human mobility. Int J Geogr Inf Sci 31(7):1381–1402
Belyi A, Sobolevsky S, Kurbatski A, Ratti C (2019) Improved upper bounds in clique partitioning problem. J Belarusian State Univ Math Inf 2019(3):93–104. https://doi.org/10.33581/2520-6508-2019-3-93-104
Bengio Y, Lodi A, Prouvost A (2021) Machine learning for combinatorial optimization: a methodological tour d'horizon. European J Oper Res 290(2):405–421. https://doi.org/10.1016/j.ejor.2020.07.063
Bickel PJ, Chen A (2009) A nonparametric view of network models and Newman-Girvan and other modularities. Proceed Natl Acad Sci 106(50):21068–21073
Blondel VD, Guillaume JL, Lambiotte R, Lefebvre E (2008) Fast unfolding of communities in large networks. J Stat Mech Theory Exp 2008(10):P10008
Bruna J, Li X (2017) Community detection with graph neural networks. Stat 1050:27
Clauset A, Newman MEJ, Moore C (2004) Finding community structure in very large networks. Phys Rev E 70:066111. https://doi.org/10.1103/PhysRevE.70.066111
Decelle A, Krzakala F, Moore C, Zdeborová L (2011) Asymptotic analysis of the stochastic block model for modular networks and its algorithmic applications. Phys Rev E 84:066106. https://doi.org/10.1103/PhysRevE.84.066106
Decelle A, Krzakala F, Moore C, Zdeborová L (2011) Inference and phase transitions in the detection of modules in sparse networks. Phys Rev Lett 107:065701. https://doi.org/10.1103/PhysRevLett.107.065701
Duch J, Arenas A (2005) Community detection in complex networks using extremal optimization. Phys Rev E 72:027104. https://doi.org/10.1103/PhysRevE.72.027104
Džamić D, Aloise D, Mladenović N (2019) Ascent-descent variable neighborhood decomposition search for community detection by modularity maximization. Ann Oper Res 272(1):273–287. https://doi.org/10.1007/s10479-017-2553-9
Fortunato S (2010) Community detection in graphs. Phys Rep 486:75–174
Fortunato S, Barthélémy M (2007) Resolution limit in community detection. Proceed Natl Acad Sci 104(1):36–41. https://doi.org/10.1073/pnas.0605965104
Fortunato S, Hric D (2016) Community detection in networks: a user guide. Phys Rep 659:1–44
Girvan M, Newman MEJ (2002) Community structure in social and biological networks. Proceed Natl Acad Sci 99(12):7821–7826. https://doi.org/10.1073/pnas.122653799
Gleiser PM, Danon L (2003) Community structure in Jazz. Adv Complex Syst 06(04):565–573. https://doi.org/10.1142/S0219525903001067
Good BH, de Montjoye YA, Clauset A (2010) Performance of modularity maximization in practical contexts. Phys Rev E 81:046106. https://doi.org/10.1103/PhysRevE.81.046106
Grauwin S, Szell M, Sobolevsky S, Hövel P, Simini F, Vanhoof M et al (2017) Identifying and modeling the structural discontinuities of human interactions. Sci Rep 7(1):1–11
Guimerà R, Nunes Amaral LA (2005) Functional cartography of complex metabolic networks. Nature 433(7028):895–900. https://doi.org/10.1038/nature03288
Guimerà R, Danon L, Díaz-Guilera A, Giralt F, Arenas A (2003) Self-similar community structure in a network of human interactions. Phys Rev E 68:065103. https://doi.org/10.1103/PhysRevE.68.065103
Guimera R, Sales-Pardo M, Amaral LAN (2004) Modularity from fluctuations in random graphs and complex networks. Phys Rev E 70(2):025101
Hamann M, Strasser B, Wagner D, Zeitz T (2018) Distributed graph clustering using modularity and map equation. In: Aldinucci M, Padovani L, Torquati M (eds) Euro-Par 2018: parallel processing. Springer International Publishing, Cham, pp 688–702
Hastie T (2001) The elements of statistical learning : data mining, inference, and prediction : with 200 full-color illustrations. Springer, New York
Hawelka B, Sitko I, Beinat E, Sobolevsky S, Kazakopoulos P, Ratti C (2014) Geo-located Twitter as proxy for global mobility patterns. Cartogr Geogr Inf Sci 41(3):260–271
Holland PW, Laskey KB, Leinhardt S (1983) Stochastic blockmodels: first steps. Soc Networks 5(2):109–137
Javed MA, Younis MS, Latif S, Qadir J, Baig A (2018) Community detection in networks: a multidisciplinary review. J Network Comput Appl 108:87–111
Kampffmeyer M, Løkse S, Bianchi FM, Livi L, Salberg AB, Jenssen R (2019) Deep divergence-based approach to clustering. Neural Networks 113:91–101. https://doi.org/10.1016/j.neunet.2019.01.015
Karrer B, Newman MEJ (2011) Stochastic blockmodels and community structure in networks. Phys Rev E 83:016107. https://doi.org/10.1103/PhysRevE.83.016107
Lancichinetti A, Fortunato S, Radicchi F (2008) Benchmark graphs for testing community detection algorithms. Phys Rev E 78(4):046110
Landsman D, Kats P, Nenko A, Sobolevsky S (2020) Zoning of St. Petersburg through the prism of social activity networks. Procedia Comput Sci 178:125–133
Lee J, Gross SP, Lee J (2012) Modularity optimization by conformational space annealing. Phys Rev E 85:056702. https://doi.org/10.1103/PhysRevE.85.056702
Liu X, Murata T (2010) Advanced modularity-specialized label propagation algorithm for detecting communities in networks. Phys A Stat Mech Appl 389(7):1493–1500. https://doi.org/10.1016/j.physa.2009.12.019
Lu H, Halappanavar M, Kalyanaraman A (2015) Parallel heuristics for scalable community detection. Parallel Comput 47:19–37. https://doi.org/10.1016/j.parco.2015.03.003
Lusseau D, Schneider K, Boisseau OJ, Haase P, Slooten E, Dawson SM (2003) The bottlenose dolphin community of Doubtful Sound features a large proportion of long-lasting associations. Behav Ecol Sociobiol 54(4):396–405. https://doi.org/10.1007/s00265-003-0651-y
Newman MEJ (2004) Fast algorithm for detecting community structure in networks. Phys Rev E 69:066133. https://doi.org/10.1103/PhysRevE.69.066133
Newman MEJ (2006) Finding community structure in networks using the eigenvectors of matrices. Phys Rev E 74:036104. https://doi.org/10.1103/PhysRevE.74.036104
Newman MEJ (2006) Modularity and community structure in networks. Proceed Nat Academ Sci 103(23):8577–8582
Newman MEJ, Girvan M (2004) Finding and evaluating community structure in networks. Phys Rev E 69(2):026113
Piccardi C, Tajoli L (2012) Existence and significance of communities in the World Trade Web. Phys Rev E. https://doi.org/10.1103/PhysRevE.85.066119
Raghavan UN, Albert R, Kumara S (2007) Near linear time algorithm to detect community structures in large-scale networks. Phys Rev E 76:036106. https://doi.org/10.1103/PhysRevE.76.036106
Ratti C, Sobolevsky S, Calabrese F, Andris C, Reades J, Martino M et al (2010) Redrawing the Map of Great Britain from a Network of Human Interactions. PLoS one 5(12):e14248. https://doi.org/10.1371/journal.pone.0014248
Rossetti G, Milli L, Cazabet R (2019) CDLIB: a python library to extract, compare and evaluate communities from complex networks. Appl Network Sci 4(1):1–26. https://doi.org/10.1007/s41109-019-0165-9
Rosvall M, Bergstrom CT (2007) An information-theoretic framework for resolving community structure in complex networks. Proceed Natl Acad Sci 104(18):7327–7331. https://doi.org/10.1073/pnas.0611034104
Rosvall M, Bergstrom CT (2008) Maps of random walks on complex networks reveal community structure. Proc Natl Acad Sci USA 105:1118–1123
Sobolevsky S, Szell M, Campari R, Couronné T, Smoreda Z, Ratti C (2013) Delineating geographical regions with networks of human interactions in an extensive set of countries. PloS one 8(12):e81707
Sobolevsky S, Campari R, Belyi A, Ratti C (2014) General optimization technique for high-quality community detection in complex networks. Phys Rev E 90(1):012811
Traag VA, Waltman L, Van Eck NJ (2019) From Louvain to Leiden: guaranteeing well-connected communities. Sci Rep 9(1):1–12. https://doi.org/10.1038/s41598-019-41695-z
Watts DJ, Strogatz SH (1998) Collective dynamics of 'small-world' networks. Nature 393(6684):440–442
Weisfeiler B, Leman A (1968) The reduction of a graph to canonical form and the algebra which appears therein. NTI, Series 2(9):12–16
White JG, Southgate E, Thomson JN, Brenner S (1986) The structure of the nervous system of the nematode caenorhabditis elegans. Philos Trans Royal Soc London B Biol Sci 314(1165):1–340. https://doi.org/10.1098/rstb.1986.0056
Xu Y, Li J, Belyi A, Park S (2021) Characterizing destination networks through mobility traces of international tourists - a case study using a nationwide mobile positioning dataset. Tour Manag. https://doi.org/10.1016/j.tourman.2020.104195
Yan X, Shalizi C, Jensen JE, Krzakala F, Moore C, Zdeborová L et al (2014) Model selection for degree-corrected block models. J Stat Mech Theory Exp 2014(5):P05007
Zachary WW (1977) An information flow model for conflict and fission in small groups. J Anthropol Res 33:452–473
Adamic LA, Glance N (2005) The political blogosphere and the 2004 U.S. election: divided they blog. In: Proceedings of the 3rd international workshop on Link discovery. LinkKDD '05. New York, NY, USA: ACM; p 36–43. Available from: http://doi.acm.org/10.1145/1134271.1134277. https://doi.org/10.1145/1134271.1134277
Aloise D, Caporossi G, Hansen P, Liberti L, Perron S, Ruiz M (2012) Modularity maximization in networks by variable neighborhood search. Graph Partitioning and Graph Clustering. 588(113)
Bandyopadhyay S, Peter V (2020) Self-expressive graph neural network for unsupervised community detection. arXiv preprint arXiv:2011.14078
Barber MJ, Clark JW (2009) Detecting network communities by propagating labels under constraints. Phys Rev E. 80, 026129. https://doi.org/10.1103/PhysRevE.80.026129
Belyi A, Sobolevsky S (2022) Network Size Reduction Preserving Optimal Modularity and Clique Partition. In: Gervasi O, Murgante B, Hendrix EMT, Taniar D, Apduhan BO (eds). Computational science and its applications – ICCSA 2022. Cham: Springer International Publishing; p 19–33. https://doi.org/10.1007/978-3-031-10522-7_2
Belyi A, Sobolevsky S, Kurbatski A, Ratti C (2021) Subnetwork Constraints for Tighter Upper Bounds and Exact Solution of the Clique Partitioning Problem. arXiv preprint arXiv:2110.05627
Bianchi FM (2022) Simplifying clustering with graph neural networks. arXiv preprint arXiv:2207.08779
Bianchi FM, Grattarola D, Alippi C (2020) Spectral Clustering with Graph Neural Networks for Graph Pooling. In: III HD, Singh A, editors. In: Proceedings of the 37th international conference on machine learning. vol. 119 of Proceedings of Machine Learning Research. PMLR; p 874–883. Available from: https://proceedings.mlr.press/v119/bianchi20a.html
Biedermann S, Henzinger M, Schulz C, Schuster B (2018) Memetic Graph Clustering. In: D'Angelo G, editor. 17th International Symposium on Experimental Algorithms (SEA 2018). vol. 103 of Leibniz International Proceedings in Informatics (LIPIcs). Dagstuhl, Germany: Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik. p. 3:1–3:15. Available from: http://drops.dagstuhl.de/opus/volltexte/2018/8938. https://doi.org/10.4230/LIPIcs.SEA.2018.3
Blondel V, Krings G, Thomas I (2010) Regions and borders of mobile telephony in Belgium and in the Brussels metropolitan zone. Brussels Studies La revue scientifique électronique pour les recherches sur Bruxelles/Het elektronisch wetenschappelijk tijdschrift voor onderzoek over Brussel/The e-journal for academic research on Brussels
Boguñá M, Pastor-Satorras R, Díaz-Guilera A, Arenas A (2004 Nov) Models of social networks based on social distance attachment. Phys Rev E. 70:056122. https://doi.org/10.1103/PhysRevE.70.056122
Brandes U, Delling D, Gaertler M, Görke R, Hoefer M, Nikoloski Z, et al (2006) Maximizing modularity is hard. arXiv preprint physics/0608255
Chen Z, Li X, Bruna J (2017) Supervised community detection with line graph neural networks. arXiv preprint arXiv:1705.08415
Jung S, Keuper M (2022) Learning to solve minimum cost multicuts efficiently using edge-weighted graph convolutional neural networks. arXiv preprint arXiv:2204.01366
Kang C, Sobolevsky S, Liu Y, Ratti C (2013) Exploring human movements in Singapore: A comparative analysis based on mobile phone and taxicab usages. In: Proceedings of the 2nd ACM SIGKDD international workshop on urban computing. ACM; p 1
Khan BS, Niazi MA (2017) Network community detection: a review and visual survey. arXiv preprint arXiv:1708.00977
Knuth DE (1993) The Stanford GraphBase: a platform for combinatorial computing. Addison-Wesley; Available from: http://www-cs-staff.stanford.edu/~uno/sgb.html
Landsman D, Kats P, Nenko A, Kudinov S, Sobolevsky S (2021) Social activity networks shaping St. Petersburg. In: Proceedings of the 54th Hawaii international conference on system sciences; p 1149
Li Z, Chen Q, Koltun V (2018) Combinatorial Optimization with Graph Convolutional Networks and Guided Tree Search. In: Bengio S, Wallach H, Larochelle H, Grauman K, Cesa-Bianchi N, Garnett R (eds). Advances in neural information processing systems. vol 31. Curran Associates, Inc. p 1–10. Available from: https://proceedings.neurips.cc/paper/2018/file/8d3bba7425e7c98c50f52ca1b52d3735-Paper.pdf
Lobov I, Ivanov S (2019) Unsupervised community detection with modularity-based attention model. arXiv preprint arXiv:1905.10350
Ma Y, Guo Z, Ren Z, Tang J, Yin D (2020) Streaming graph neural networks. In: Proceedings of the 43rd international ACM SIGIR conference on research and development in information retrieval; p 719–728
Plantié M, Crampes M (2013) Survey on social community detection. In: Social media retrieval. Springer; p 65–85
Sanders P, Schulz C, Wagner D (2014) Benchmarking for graph clustering and partitioning. Encyclopedia of social network analysis and mining Springer
Shchur O, Günnemann S (2019) Overlapping community detection with graph neural networks. arXiv preprint arXiv:1909.12201
Sobolevsky S, Sitko I, Des Combes RT, Hawelka B, Arias JM, Ratti C (2014) Money on the move: Big data of bank card transactions as the new proxy for human mobility patterns and regional delineation. the case of residents and foreign visitors in spain. In: Big data (BigData Congress), 2014 IEEE international congress on. IEEE; p 136–143
Sobolevsky S, Belyi A, Ratti C (2017) Optimality of community structure in complex networks. arXiv preprint arXiv:1712.05110
Sobolevsky S, Kats P, Malinchik S, Hoffman M, Kettler B, Kontokosta C (2018) Twitter Connections Shaping New York City. In: Proceedings of the 51st Hawaii international conference on system sciences. p 1008–1016
Sun Y, Danila B, Josić K, Bassler KE (2009) Improved community structure detection using a modified fine-tuning strategy. EPL (Europhysics Letters). 86(2):28004. Available from: http://stacks.iop.org/0295-5075/86/i=2/a=28004
Tsitsulin A, Palowitch J, Perozzi B, Müller E (2020) Graph clustering with graph neural networks. arXiv preprint arXiv:2006.16904
Wu Z, Pan S, Chen F, Long G, Zhang C, Philip SY (2020) A comprehensive survey on graph neural networks. In: IEEE transactions on neural networks and learning systems
Yow KS, Luo S (2022) Learning-based approaches for graph problems: a survey. arXiv preprint arXiv:2204.01057
This work was partially supported by the MUNI Award in Science and Humanities (MASH) of the Grant Agency of Masaryk University under the Digital City project (MUNI/J/0008/2021). This research/work/article was supported by ERDF "CyberSecurity, CyberCrime and Critical Information Infrastructures Center of Excellence" (No. CZ.02.1.01/0.0/0.0/16_019/0000822).
Center For Urban Science+Progress, New York University, Brooklyn, NY, USA
Stanislav Sobolevsky
Department of Mathematics and Statistics, Faculty of Science, Masaryk University, Brno, Czech Republic
Stanislav Sobolevsky & Alexander Belyi
Alexander Belyi
SS designed the algorithm, performed initial analysis and wrote the first draft. AB implemented the algorithm, performed comparison with other methods, contributed to writing of the final paper. All authors read and approved the final manuscript.
Correspondence to Stanislav Sobolevsky.
Appendix. Results for directed networks
The analysis presented in the paper includes 17 classic sample networks. Their sources and characteristics (network size and whether the network is weighted and/or directed) are presented in Table 2. Six of the 17 sample networks are directed in their original form. However, the Python igraph implementations of the Leiden and Louvain methods handle only undirected networks, while the Combo and GNNS implementations are capable of handling directed versions of the networks, and in Table 6 below we present the modularity scores achieved by Combo, GNNS100, GNNS2500, and GNNS25000 on the original directed versions of the sample networks. One can see that, except for Jazz and US Airports1997, where Combo underperforms, all methods reach closely similar modularity scores, with a slight lead for Combo.
Table 6 Achieved modularity scores for the original directed configurations of the classical network examples
Sobolevsky, S., Belyi, A. Graph neural network inspired algorithm for unsupervised network community detection. Appl Netw Sci 7, 63 (2022). https://doi.org/10.1007/s41109-022-00500-z
Complex networks
Community detection
Special Issue on Community Structure in Networks 2021 | CommonCrawl |
A fast algorithm for reversion of power series
Author: Fredrik Johansson
Final submitted version (PDF)
To appear in Mathematics of Computation.
arXiv preprint: http://arxiv.org/abs/1108.4772, published August 24, 2011; revised November 30, 2013.
We give an algorithm for reversion of formal power series, based on an efficient way to implement the Lagrange inversion formula. Our algorithm requires $O(n^{1/2}(M(n) + M\!M(n^{1/2})))$ operations where $M(n)$ and $M\!M(n)$ are the costs of polynomial and matrix multiplication respectively. This matches the asymptotic complexity of an algorithm of Brent and Kung, but we achieve a constant factor speedup whose magnitude depends on the polynomial and matrix multiplication algorithms used. Benchmarks confirm that the algorithm performs well in practice.
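To make the role of the Lagrange inversion formula concrete, here is a naive reference implementation in Python over exact rationals. It is only the straightforward quadratic-time version of the formula, not the fast algorithm of the paper, and the helper names are our own.

```python
from fractions import Fraction

def poly_mul(a, b, n):
    """Multiply two power series (coefficient lists) truncated to length n."""
    c = [Fraction(0)] * n
    for i, ai in enumerate(a[:n]):
        if ai:
            for j, bj in enumerate(b[: n - i]):
                c[i + j] += ai * bj
    return c

def poly_inverse(a, n):
    """Power series inverse of a (requires a[0] != 0), truncated to length n."""
    b = [Fraction(0)] * n
    b[0] = 1 / a[0]
    for k in range(1, n):
        s = sum(a[j] * b[k - j] for j in range(1, min(k, len(a) - 1) + 1))
        b[k] = -s / a[0]
    return b

def revert_series(f, n):
    """
    Compositional inverse g of f = f[1]*x + f[2]*x^2 + ... (f[0] == 0, f[1] != 0),
    truncated to length n, via the Lagrange inversion formula
        [x^k] g = (1/k) [x^{k-1}] (x / f(x))^k.
    Naive reference implementation, not the fast algorithm described above.
    """
    h = poly_inverse(f[1:], n)                    # h = x / f(x) as a power series
    g = [Fraction(0)] * n
    hk = [Fraction(1)] + [Fraction(0)] * (n - 1)  # h^0
    for k in range(1, n):
        hk = poly_mul(hk, h, n)                   # h^k
        g[k] = hk[k - 1] / k
    return g

# Example: f = x + x^2. Its reversion starts g = x - x^2 + 2x^3 - 5x^4 + 14x^5 - ...
# (Catalan numbers with alternating signs).
f = [Fraction(0), Fraction(1), Fraction(1)]
print(revert_series(f, 6))
```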
This algorithm is implemented in FLINT, which uses it by default for reversion of power series over $\mathbb{Z}$, $\mathbb{Q}$ and $\mathbb{Z}/n\mathbb{Z}$. As reported in the paper, it is usually faster in practice than previously known algorithms, including Horner's rule and the first Brent-Kung algorithm (both available in FLINT), as well as the second Brent-Kung algorithm, which in theory should be asymptotically faster. The second Brent-Kung algorithm is currently not available in FLINT since it is complicated to implement with a similar level of optimization as the other algorithms, and it is in any case not faster (I have privately implemented an experimental version over $\mathbb{Z}/n\mathbb{Z}$ just for benchmarking purposes).
The algorithm is now also used in Arb for reversion of power series with ball coefficients over $\mathbb{R}$ and $\mathbb{C}$. Initial experiments indicate that its numerical stability is typically much better than Horner's rule and slightly better than the first Brent-Kung algorithm, with speed similar to or slightly better than that of the latter.
Last updated July 31, 2014. Contact: [email protected].
Back to fredrikj.net | CommonCrawl |
April 2017, 11(2): 305-338. doi: 10.3934/ipi.2017015
Optical flow on evolving sphere-like surfaces
Lukas F. Lang 1, and Otmar Scherzer 2,3
Johann Radon Institute for Computational and Applied Mathematics, Austrian Academy of Sciences, Altenberger Straße 69, 4040 Linz, Austria
Computational Science Center, University of Vienna, Oskar-Morgenstern-Platz 1,1090 Vienna, Austria
* Corresponding author
Received: June 2015. Revised: April 2016. Published: March 2017.
In this work we consider optical flow on evolving Riemannian 2-manifolds which can be parametrised from the 2-sphere. Our main motivation is to estimate cell motion in time-lapse volumetric microscopy images depicting fluorescently labelled cells of a live zebrafish embryo. We exploit the fact that the recorded cells float on the surface of the embryo and allow for the extraction of an image sequence together with a sphere-like surface. We solve the resulting variational problem by means of a Galerkin method based on vector spherical harmonics and present numerical results computed from the aforementioned microscopy data.
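The scalar spherical harmonics underlying this basis (see the notation table below) can be evaluated numerically, for instance with SciPy; the degree, order, and grid resolution here are arbitrary illustrative choices, and the snippet shows only the scalar functions, not the vector spherical harmonics themselves.

```python
import numpy as np
from scipy.special import sph_harm

# Evaluate the complex scalar spherical harmonic Y_n^m on a grid of the
# 2-sphere. In SciPy's convention, sph_harm(m, n, theta, phi) takes the
# azimuthal angle theta in [0, 2*pi) and the polar angle phi in [0, pi].
n, m = 3, 1
theta = np.linspace(0.0, 2.0 * np.pi, 64)   # azimuth
phi = np.linspace(0.0, np.pi, 32)           # polar angle
theta_grid, phi_grid = np.meshgrid(theta, phi)

Y = sph_harm(m, n, theta_grid, phi_grid)
print(Y.shape, Y.dtype)
```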
Keywords: Optical flow, sphere-like surfaces, vector spherical harmonics, variational methods, biomedical imaging, computer vision.
Mathematics Subject Classification: 35A15, 68U10, 92C55, 33C55, 92C37, 92C17, 53A05.
Citation: Lukas F. Lang, Otmar Scherzer. Optical flow on evolving sphere-like surfaces. Inverse Problems & Imaging, 2017, 11 (2) : 305-338. doi: 10.3934/ipi.2017015
Figure 1. Frames 140 (left) and 141 (right) of the volumetric zebrafish microscopy images recorded during early embryogenesis. The sequence contains a total number of 151 frames recorded at time intervals of $120 \, \mathrm{s}$. Fluorescence response is indicated by blue colour and is proportional to the observed intensity. The embryonic axis of the animal forms around the clearly visible dent. All dimensions are in micrometer ($\mu$m).
Figure 2. Depicted are frames no. 140 (left) and 141 (right) of the processed zebrafish microscopy sequence. Top and bottom row differ by a rotation of 180 degrees around the $x^{3}$-axis. All dimensions are in micrometer ($\mu$m).
Figure 11. From left to right, top to bottom: a) colour-coded surface velocity $\mathbf{\hat{V}}$, b) signed norm $\text{sign}(\partial_{t} \tilde{\rho}) \left\| {{\bf{\hat V}}} \right\|$, c) optical flow $\mathbf{\hat{v}}$ for $\alpha = 1$, and d) total motion $\mathbf{\hat{M}} = \mathbf{\hat{V}} + \mathbf{\hat{v}}$. Values are computed for the interval between frames 140 and 141. All surfaces are depicted in a top view.
Figure 12. Top view of the optical flow field computed on a static spherical geometry. The same values of α as in Fig. 10 were used.
Figure 3. Schematic illustration of a cut through the surfaces $\mathcal{S}^2$ and $\mathcal{M}_{t}$ intersecting the origin. In addition, we show a radial line along which the extension $\bar{f}(t, \cdot)$ is constant. The surface normals are shown in grey.
Figure 5. Commutative diagram relating spaces $\Omega$, $\mathcal{S}^{2}$, and $\mathcal{M}_{t}$, and tangent vector fields. We highlight that $\mathbf{y}_{p}$ is the coordinate representation, see Sec. 2.1, of a particular tangential vector spherical harmonic $\mathbf{\tilde{y}}_{p}$ and $\mathbf{\hat{y}}_{p}$ is its uniquely identified tangent vector field on $\mathcal{M}_{t}$.
Figure 4. Illustration of trajectories through the evolving surface. Their corresponding velocities are shown in grey.
Figure 6. Illustration of a triangular face (filled gray) intersecting the sphere $\mathcal{S}^{2}$ at the vertices (hollow circles). The six nodal points consist of the vertices of the triangle together the edge midpoints (filled black dots). The approximated sphere-like surface is shown by the hatched gray area. A radial line passing through the vertex $v_{i}$ is shown. The hollow circle indicates the intersection with $\mathcal{S}^{2}$ at which $\bar{f}(v_{i})$ in (49) is taken. $\bar{f}$ itself, as described in Sec. 5.2, is assigned by taking the maximum image intensity along the drawn radial line between the two cross marks.
Figure 7. Frames no. 140 (left) and 141 (right) of the processed image sequence in a top view. The embryo's body axis is oriented from bottom left to top right.
Figure 9. Tangent vector field minimising $\mathcal{E}_{\alpha}$. Depicted is the colour-coded optical flow field computed between frames 140 and 141 for different values of $\alpha$. The bottom row differs from the top view by a rotation of 180 degrees around the $x^{3}$-axis. From left to right: a) $\alpha = 10^{-2}$, b) $\alpha = 10^{-1}$, c) $\alpha = 1$, and d) $\alpha = 10$.
Figure 8. Function $\tilde{\rho}_{h}$ obtained by minimising $\mathcal{F}_{\beta}$ for frames 140 (left column) and 141 (right column). Colour corresponds to the radius ($\mu$m) of the fitted surface. The top row depicts $\mathcal{S}_{h}^{2}$ in a top view.
Figure 10. Top view of the optical flow field computed for different values of $\alpha$. From left to right, top to bottom: a) $\alpha = 10^{-2}$, b) $\alpha = 10^{-1}$, c) $\alpha = 1$, and d) $\alpha = 10$.
Table 1. Summary of notation used throughout the paper.
$\Omega$ coordinate domain
$I$ time interval
$\mathcal{S}^2$ 2-sphere
$\mathcal{M}$ family of sphere-like surfaces $\mathcal{M}_{t}$
$T_{y}\mathcal{M}_{t}$ tangent plane at $y \in \mathcal{M}_{t}$
$\mathbf{\tilde{N}}, \mathbf{\hat{N}}$ outward unit normals to $\mathcal{S}^2$ and $\mathcal{M}$
$\bf{x}$ , $\bf{y}$ parametrisations of $\mathcal{S}^2$ and $\mathcal{M}$
$D\bf{x}$ , $D\bf{y}$ gradient matrix of $\bf{x}$ and $\bf{y}$
$\{ \partial_{1} \bf{x}, \partial_{2} \bf{x} \}$ basis for $T\mathcal{S}^2$
$\{ \partial_{1} \bf{y}, \partial_{2} \bf{y} \}$ basis for $T\mathcal{M}$
$\{ \mathbf{\hat{e}}_{1}, \mathbf{\hat{e}}_{2} \}$ orthonormal basis for $T\mathcal{M}_{t}$
$\mathbf{\hat{V}}$ surface velocity of $\mathcal{M}$
$\tilde{\phi}, D\tilde{\phi}$ smooth map from $\mathcal{S}^{2}$ to $\mathcal{M}$ and its differential
$\tilde{f}$ , $\hat{f}$ , $f$ scalar function on $\mathcal{S}^2$ , $\mathcal{M}$ , and their coordinate version
$\nabla_{\mathcal{S}^2} \tilde{f}$ , $\nabla_{\mathcal{M}} \hat{f}$ surface gradient on $\mathcal{S}^2$ and $\mathcal{M}_{t}$
$\mathbf{\tilde{v}}$ , $\mathbf{\hat{v}}$ , $\mathbf{v}$ tangent vector fields on $\mathcal{S}^2$ , $\mathcal{M}$ , and their coordinate version
$\nabla_{\mathbf{\hat{u}}} \mathbf{\hat{v}}$ covariant derivative of $\mathbf{\hat{v}}$ along direction $\mathbf{\hat{u}}$ on $\mathcal{M}_{t}$
$\bar{f}$ , $\mathbf{\bar{v}}$ radially constant extensions of $\hat{f}$ and $\mathbf{\hat{v}}$ to $\mathbb{R}^{3} \setminus \{ 0 \}$
$\tilde{Y}_{n,j}$ scalar spherical harmonic of degree $n$ and order $j$
$\mathbf{\tilde{y}}_{n,j}^{(i)}$ vector spherical harmonic of degree $n$ , order $j$ , and type $i$
$\mathbf{\hat{y}}_{n,j}^{(i)}$ pushforward of $\mathbf{\tilde{y}}_{n,j}^{(i)}$ via the differential $D\tilde{\phi}$
Table 2. Radii $R$ of the colour disks used for colour-coded visualisation of tangent vector fields.
Figures 9(a) 9(b) 9(c) 9(d)
10(a) 10(b) 10(c) 10(d) 11(a) 11(c) 11(d) 12(a) 12(b) 12(c) 12(d)
$R$ $8.92$ $4.62$ $2.90$ $2.19$ $12.07$ $2.90$ $12.02$ $9.23$ $4.87$ $2.92$ $2.10$
Lukas F. Lang Otmar Scherzer | CommonCrawl |
Preaching water but drinking wine? Relative performance evaluation in international banking
Dragan Ilić ORCID: orcid.org/0000-0002-6788-7836 1,2,
Sonja Pisarov3 &
Peter S. Schmidt4
Swiss Journal of Economics and Statistics volume 155, Article number: 6 (2019)
The rise in the level of executive compensation in international banking in the last two decades has been striking. By the end of 2003, Citigroup, Lehman Brothers, and Bear Stearns, large players in the banking industry at that time, were run by CEOs whose earnings were among the top ten in the S&P 500 (Hodgson 2004). Two of these banks—Lehman Brothers and Bear Stearns—collapsed in 2008. According to Bebchuk et al. (2010), in spite of obvious mismanagement the executives of these banks had received considerable performance-based compensation packages during the years preceding the financial crisis. Switzerland was not exempt. In 2012, UBS paid out 70 million Swiss Francs to the members of its executive committees despite a 2.5 billion loss that year. It stands to reason that the effectiveness of such compensation schemes has since become the subject of ever more heated discussions, not least in international banking. The recent rise in executive compensation has not been confined to the banking industry. Other industries have been following the same trend. Murphy (2013) documents that total pay for executives in the S&P 500 skyrocketed in the late 2000s.
The rise in the level of executive compensation in international banking in the last two decades has been striking. By the end of 2003 Citigroup, Lehman Brothers, and Bear Stearns, large players in the banking industry at that time, were run by CEOs whose earnings were among the top ten in the S&P 500 (Hodgson 2004). Two of these banks—Lehman Brothers and Bear Stearns—collapsed in 2008. According to Bebchuk et al. (2010), in spite of obvious mismanagement the executives of these banks had received considerable performance-based compensation packages during the years preceding the financial crisis. Switzerland was not exempt. In 2012, UBS paid out 70 million Swiss Francs to the members of its executive committees despite a 2.5 billion loss that year. It stands to reason that the effectiveness of such compensation schemes have since become subject of ever more heated discussions, not least in international banking. The recent rise in executive compensation has not been confined to the banking industry. Other industries have been following the same trend. Murphy (2013) documents that total pay for executives in the S&P 500 skyrocketed in the late 2000s.
This general development has also piqued the interest of economists, who are intrigued by the underlying pay-setting mechanism. Executive compensation is a classic example of a principal-agent problem and lies at the heart of the controversy of corporate separation of ownership and control (Jensen and Meckling 1976). Put succinctly, the challenge lies in motivating the CEO (the agent) to act in the best interest of the shareholder (the principal). Because the effort of the agent is not perfectly observable, the principal is not able to force the agent to choose the action that would be optimal from the principal's perspective. This invokes a moral hazard problem. There has been much discussion about how firms are to solve this agency problem (Ross 1973; Gjesdal 1982; Mahoney 1995). A straightforward solution would involve a compensation scheme which provides proper incentives for the CEO. Economic intuition would suggest tying compensation to firm performance. However, firm performance is also influenced by a myriad of factors that are beyond the control of the agent. This exogeneity introduces undesirable risk into contracting.
This is where relative performance evaluation (RPE) comes in (Holmstrom 1982). RPE implies that compensation contracts should be linked to firm performance in relation to peers with similar characteristics. Such contracts account for common shocks that are out of an agent's control and thus offer a more conclusive way to assess the agent's individual performance. At the same time, RPE contracts offer the same incentives as contracts based on absolute performance. For shareholders, knowing that RPE is implemented is of particular importance because the mechanism creates incentives for CEOs to increase shareholder wealth.Footnote 1 The case for employing RPE in executive compensation contracts, then, seems clear-cut. Indeed, RPE has become seemingly popular in practice. Recent studies show that roughly every fourth firm in the S&P 1500 openly claims to use RPE in their compensation contracts (Carter et al. 2009; Gong et al. 2011).
In this paper, we test for the existence of RPE in international banking and pay particular attention to banks that claim to purport its application. For banks might have incentives to misreport RPE practice if their board of directors are prone to managerial influence. Board members might want to appease executives rather than constrain them. For example, managers of large banks in particular usually enjoy a good reputation inside and outside of the bank, which could be beneficial for the members of the board. Bebchuk and Fried (2003; 2004) note that managers also enjoy great authority within the company, rendering conflict less appealing. Designs of executive compensation schemes endorsed by the board of directors might thus succumb to this pressure. This is not without corporate risk. If managerial compensation packages are identified as unjustified by relevant outsiders, managers and directors face considerable social disapproval, denoted as "outrage" costs by Bebchuk and Fried (2004). The stronger the negative perception of outsiders, the greater the managers' cost of enjoying large compensation packages. This criticism can be avoided by camouflaging compensation packages.Footnote 2 Camouflaging the peer group of an underperforming bank manager would make excessive pay more acceptable to the public and investors. In other words, we might want to be cautious of flaunting RPE statements; some banks might want to avoid evaluating the actual performance of their underperforming manager relative to an appropriate peer group and instead pick particularly low-performing peers.
RPE has been investigated empirically. Most studies try to infer its usage by regressing executive compensation on the performance of a target firm and some measure of peer group performance. A negative and statistically significant coefficient on peer performance is taken as indication that common shocks are being removed from compensation contracts, constituting evidence of RPE. The scope of the existing literature is rather limited. The focus lies on compensation practices of industrial firms, typically in the USA.Footnote 3 This regional limitation has its reasons. It is difficult to obtain comprehensive data on executive compensation outside of the USA. Despite the ubiquitously proclaimed use of RPE in practice, the empirical results of the literature have been a mixed bag.Footnote 4 This is partly owed to the fact that the post hoc construction of peer groups is fraught with issues. If researchers identify a different peer group than the target firm itself had actually used, inferences on RPE are no longer meaningful. Correct identification is one reason why one may fail to find evidence of RPE. A simpler explanation would be that the RPE claims are merely empty rhetoric to appease stakeholders. As Albuquerque (2009) puts it, any empirical tests of RPE are, in this sense, joint tests.
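As an illustration of this standard test design (not the exact specification of any particular study), such a regression could be run as follows; the data file and column names are hypothetical placeholders.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical bank-year panel with (log) CEO compensation, own-bank
# performance, and aggregated peer-group performance; the file and column
# names are illustrative and not taken from any particular data set.
df = pd.read_csv("bank_ceo_panel.csv")

# Weak-form RPE test: regress pay on own and peer performance. A negative,
# statistically significant coefficient on peer_perf is read as evidence
# that common shocks are filtered out of compensation.
model = smf.ols("log_total_comp ~ firm_perf + peer_perf", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["bank_id"]}
)
print(model.summary())
```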
This paper embraces this duality and tests for RPE in a new sample of large and globally operating non-U.S. banks.Footnote 5 We contend that the global banking industry is an ideal playground to test the usage of RPE, for at least three reasons. First, RPE makes especially sense for firms that are exposed to common shocks. This applies particularly well to international banking. The main reason for this exposure is that banks are highly leveraged institutions. Around 90 percent of their assets come from debt, making them more prone to exogenous volatility (Houston and James 1995; Chen et al. 2006). Second, the barriers to global integration in the banking industry have been significantly trimmed in the last two decades, shifting banks from once centralized domestic organizations to global behemoths. In turn, the structure of competition in the industry has adjusted (Berger and Smith 2003). Large banks operating on the international level are now dealing with intense competition.Footnote 6 Third, the recent financial crisis was characterized by failures of large international banks such as Bear Stearns or Lehman Brothers. The downfall of these banks has drawn increasing attention to corporate governance issues in remuneration policy.Footnote 7 If anything, this pressure has prompted banks to make more efficient use of RPE.
Our study tackles the caveat that the soundness of empirical tests on RPE critically hinges on the correct identification of the peer groups. We follow the seminal approach by Albuquerque (2009) and aggregate peer performance on the basis of industry and industry/size peer groups. Aggregating in this manner accounts for the observation that industry affiliation and firm size are informative proxies for the common market risks that RPE-setting firms face. In doing so, this approach takes up Holmstrom's (1979) theoretical requirement of common uncertainties. Our study also deals with the potential issue of RPE being corporate window dressing, with the firm not actually engaging in RPE. If that were the case, then the disclosure of peer group usage would be mere noise, and incorporating this information should not alter our results qualitatively. To test this hypothesis, we differentiate between disclosing and non-disclosing banks. Disclosing banks claim to compare their executives' performance to peer group performance in determining one or several components of executive pay.Footnote 8
We collect a new data set with information on 46 large international banks. The results of our basic regression specification document negative but insignificant parameter estimates on industry peer performance. Taken by itself, this casts doubt on the use of RPE in our sample. When we perform tests of RPE on industry/size peers, we find moderate evidence consistent with RPE. In restricting attention to the subsample of disclosing banks, we find stronger and more conclusive evidence that systematic risk is filtered out from CEO compensation. Strong-form RPE tests support this conclusion. This result stands in contrast to Gong et al. (2011), who do not find informational value in RPE disclosure among US firms.Footnote 9 To gain more insight, we disentangle potential factors related to RPE usage. A logistic regression indicates that firm size and, to a smaller extent, growth options are associated with RPE usage. The results imply that the larger a bank, the higher the probability that it uses RPE in its compensation contracts. On the other hand, the probability of using RPE decreases with the magnitude of growth options.
Our paper contributes to the ongoing discussion on RPE along several dimensions. Existing studies testing RPE on banks have focused solely on US data. This is hard to square with an industry that is characterized by pronounced international competition. We provide broader evidence by conducting tests on a newly collected sample of large international banks. We also shed new light on the informative value of disclosure. Our results withstand several robustness tests and suggest that the banks in our sample which proclaim the use of peers in assessing the performance of their CEOs are not merely window dressing: we do find stronger evidence for RPE usage among disclosing banks. This indicates that lumping together disclosing and non-disclosing firms can be detrimental to the conclusiveness of RPE tests. Finally, we examine the association of several bank characteristics with the intensity of RPE usage.
The rest of the paper is organized as follows. In the "Relative performance evaluation and the banking industry" section we describe the main characteristics of the banking industry. This section also introduces the empirical model and depicts the peer group construction mechanism. The "Data description" section presents our novel data set of international banks. The "Results" section reports summary statistics and regression results. In the "Extensions and robustness checks" section, we explore factors related to RPE, investigate the association of RPE pay practices with bank-level characteristics, and conduct several robustness checks. The last section concludes.
Relative performance evaluation and the banking industry
Executive pay in the banking industry
The literature on executive compensation in banking attends to the particularities of the industry. There are three characteristic features of banks (John and Qian 2003; Macey and O'Hara 2003; Tung 2011). First, banks have a peculiar capital structure. They hold much less equity than other companies, rendering them highly leveraged. Roughly 90% of a bank's funds come from debt. In addition, a bank's assets and liabilities are mismatched (Diamond and Dybvig 1983). Second, the presence of federal guarantees of bank deposits, a public measure to protect private depositors from losses in case of insolvency, distinguishes banks from other firms. Third, these deposits can increase the risk of fraud and self-dealing in the banking industry by reducing the incentives for monitoring (Macey and O'Hara 2003).
Against this background, the literature addresses three main topics (Houston and James 1995). One cluster of studies examines whether the sensitivity of executive compensation to a bank's performance was affected by the US corporate control market deregulation (Crawford et al. 1995; Hubbard and Palia 1995; Cuñat and Guadalupe 2009)Footnote 10. Two other studies test whether the existing compensation policies promote risk-taking in the US banking sector by examining the relation between the specific component of the compensation and market measures of risk (Houston and James 1995; Chen et al. 2006).Footnote 11 Other research examines the usage of RPE in the US banking industry. Barro and Barro (1990) test RPE on a data set that covers 83 commercial banks in the US between 1982 and 1987. They regress the growth rate of real compensation on the average of the real total rate of return from the current and previous period, the first difference of accounting-based returns, regional averages for both accounting-based return, and the average of the real total rate of return. This effectively compares the performance of banks relative to the performance of other banks in the same region. Their evidence is not consistent with the use of RPE. Crawford (1999) tests two hypotheses on 215 executives from 118 US commercial banks from 1976 to 1988. He regresses change in CEO pay for a specific bank on a change in shareholder wealth for that bank, an industry relative performance measure, and a market performance measure using S&P 500 returns. His findings suggest that relative compensation is negatively related to market and industry returns and positively related to shareholder returns. In addition, in his sample, the use of RPE increases upon introduction of banking deregulation. Crawford reports evidence consistent with RPE if CEO compensation is evaluated relative to industry peers. He does not, however, find evidence of RPE when using market performance measures.
The literature also provides insight into the remuneration practice in the banking industry. The data show that bank CEOs receive less cash compensation on average, are less likely to participate in a stock option plan, and hold fewer growth options than CEOs in other industries. These differences are likely to stem from banks' different investment opportunities (Houston and James 1995). But not all is different in the banking industry. Houston and James do not find any differences between banking and non-banking industries regarding the overall sensitivity of pay to performance. They presume that the factors that influence compensation in the banking industry are similar to those in non-banking industries despite differences in the compensation structure. Adams and Mehran (2003) suggest that the differences in the governance structures between manufacturing firms and banks are industry-specific. Furthermore, the differences seem to be mostly due to different investment opportunities of bank holding companies (BHCs) and pertinent regulation. Adams and Mehran's study examines whether firm performance measures are influenced by the governance structure. Their results indicate that differences between the board structures of manufacturing firms and banks might not be a reason for concern in this respect. Aebi et al. (2012) study the strength of incentive features of top management compensation contracts in banks. They compare the pay-performance sensitivity in banks with that in manufacturing firms and show that debt ratio, firm size, risk, and regulation are important determinants of pay-performance sensitivity in banks. Finally, the executive compensation structure and the governance structure of banks differ from firms in other industries. Even so, the factors that influence the overall pay-performance sensitivity do not seem to differ significantly across industries.
Empirical model
To test for RPE, we employ a model that is based on Holmstrom and Milgrom (1987). Specifically, we use the following weak-form test of RPE:Footnote 12
$$ {Comp}_{it}=\alpha_{0}+\alpha_{1}\cdot {FirmPerf}_{it}+\alpha_{2}\cdot {PeerPerf}_{it}+\alpha_{3}\cdot C_{it}+ \epsilon_{it}. $$
\(Comp_{it}\) measures executive compensation in monetary terms, \(FirmPerf_{it}\) stands for the performance of firm \(i\) measured as the continuously compounded gross real rate of return to shareholders (assuming that dividends are reinvested), and \(PeerPerf_{it}\) denotes the performance of firm \(i\)'s peer group. To account for variation not included in firm \(i\)'s and its peer group's performance we include several control variables, subsumed in the column vector \(C_{it}\). These variables include firm size and growth options. In addition, we include time, industry, and country dummies. The subscript \(t\) denotes the respective year and \(\epsilon_{it}\) represents an independent firm-specific white noise process. \(\alpha_{0}\), \(\alpha_{1}\), \(\alpha_{2}\), and \(\alpha_{3}\) denote model parameters.Footnote 13
In this model, rejecting the null hypothesis \(H_{0}\!:\alpha_{2}\geq 0\) against the one-sided alternative \(H_{1}\!:\alpha_{2}<0\) provides evidence of RPE in executive compensation contracts. In that case, exogenous shocks outside of the control of the executive management are filtered out from the compensation contract.
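To make the estimation concrete, the following is a minimal sketch of how Eq. (1) could be run, assuming a hypothetical pandas DataFrame `df` with illustrative columns `log_comp`, `firm_ret`, `peer_ret`, `log_sales`, `growth_opt`, `year`, `industry`, and `country`; none of these names come from our actual data files.

```python
# Minimal sketch of the weak-form RPE test (Eq. 1); all column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

def weak_form_rpe_test(df: pd.DataFrame):
    """Regress log CEO pay on firm return, peer return, and controls."""
    model = smf.ols(
        "log_comp ~ firm_ret + peer_ret + log_sales + growth_opt"
        " + C(year) + C(industry) + C(country)",
        data=df,
    ).fit()
    alpha2 = model.params["peer_ret"]
    p_two_sided = model.pvalues["peer_ret"]
    # One-sided p value for H1: alpha2 < 0.
    p_one_sided = p_two_sided / 2 if alpha2 < 0 else 1 - p_two_sided / 2
    return alpha2, p_one_sided
```

In this sketch, evidence consistent with RPE corresponds to a negative and significant coefficient on the peer-return regressor.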
Researchers typically use the so-called strong-form RPE test to examine whether all exogenous shocks are removed from the compensation contract. The first step in conducting this test is to regress firm performance on peer performance (Antle and Smith 1986). The first-step regression model is
$$ {FirmPerf}_{it}=\gamma_{i}+\beta_{i}\cdot {PeerPerf}_{it}+ \eta_{i}\cdot C_{it} +\varepsilon_{it}. $$
The unsystematic and systematic performance are obtained from the equation above in the following manner:
$$ \begin{aligned} {UnsysFirmPerf}_{it}&=\widehat{\eta}_{i}\cdot C_{it} + \widehat{\varepsilon}_{it}, \\ {SysFirmPerf}_{it}&=\widehat{\gamma}_{i}+\widehat{\beta}_{i}\cdot {PeerPerf}_{it}. \end{aligned} $$
\(\widehat {\varepsilon }_{it}\) denotes regression residuals and \(\widehat {\gamma _{i}},\,\widehat {\beta _{i}}\) denote parameter estimates.Footnote 14 The second step estimates the sensitivity of CEO compensation with respect to the unsystematic and systematic components of firm performance, that is:
$$ {Comp}_{it}=\delta_{0}+\delta_{1}\cdot {UnsysFirmPerf}_{it}+\delta_{2} \cdot {SysFirmPerf}_{it}+ \delta_{3}\cdot C_{it}+ e_{it}. $$
\(C_{it}\) denotes a column vector of control variables and the row vector \(\delta_{3}\) its coefficients. If systematic risk is filtered out from the compensation contract, the coefficient \(\delta_{2}\) on systematic performance in Eq. (4) should not be significantly different from zero.
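A minimal sketch of the two-step procedure is given below. For brevity it estimates the first step as a pooled regression, whereas the text above allows firm-specific coefficients; the hypothetical column names carry over from the previous sketch.

```python
# Minimal sketch of the two-step strong-form test; pooled first step, hypothetical columns.
import statsmodels.formula.api as smf

def strong_form_rpe_test(df):
    # Step 1: split firm performance into systematic and unsystematic parts.
    step1 = smf.ols(
        "firm_ret ~ peer_ret + log_sales + growth_opt"
        " + C(year) + C(industry) + C(country)", data=df).fit()
    df = df.copy()
    df["sys_perf"] = step1.params["Intercept"] + step1.params["peer_ret"] * df["peer_ret"]
    df["unsys_perf"] = df["firm_ret"] - df["sys_perf"]
    # Step 2: CEO pay should load on the unsystematic part only if RPE holds.
    step2 = smf.ols(
        "log_comp ~ unsys_perf + sys_perf + log_sales + growth_opt"
        " + C(year) + C(industry) + C(country)", data=df).fit()
    return (step2.params[["unsys_perf", "sys_perf"]],
            step2.pvalues[["unsys_perf", "sys_perf"]])
```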
Data description
This section describes the data preparation process. The "Compensation data" subsection reports the collection of the international compensation data, the "International banking sample" subsection provides details about the sample of international banks that we use in the regression analysis, and the "Peer group composition" subsection documents the peer group data selection process.
Compensation data
There is no standardized database for international corporate executive compensation. We collect our data from several sources for the years 2003–2014. Financial and accounting data are obtained from Thomson Reuters Datastream and Thomson Reuters Worldscope. Compensation data are collected manually either from annual reports or management proxy circulars available online. We do not include US banks in our analysis. In August 2006, a new regulatory requirement by the US Securities and Exchange Commission mandated, among other things, full disclosure of peer group compositions (if applicable) for fiscal years ending on or after December 15, 2006. In a recent study, Faulkender and Yang (2013) provide evidence that this event generated a structural break in the peer group selection, discouraging the use of US compensation data for our purpose.Footnote 15 For the other countries in our sample, we could not find any corresponding regulation that was introduced in our observed timeframe.Footnote 16
Our initial data set is composed of firms classified as banks from the FTSE All World Index with an index weight higher than 0.02. This yields 75 firms. Based on this list, we collect remuneration data for 46 different firms with a total of 335 firm-year observations (henceforth dubbed the "full sample"). A detailed list of all banks in our sample is shown in Table 12 in the Appendix. In addition, we list all non-US global systemically important banks (G-SIBs) from 2011 to 2012, indicating the inclusion in our sample or stating the reason for exclusion (see Table 13 in the Appendix).
In line with the source information, we quantify the compensation in nominal terms. As CEO compensation, we define the compensation paid by the parent company as well as the one paid by subsidiaries (for the CEO position). In rare cases, firms only provide a certain wage range. In that case, we always include the higher bound as the actual compensation. We do not include the measure of CEO compensation changes in the value of existing firm options and stock holdings owned by the CEO.
In order to collect the total compensation data, we focus on the amount the firm itself defines as the "total." This always includes all the positions used for the fixed compensation amount as well as performance-related components. The name and the exact composition of these performance-related components vary significantly between firms. For example, some firms differentiate between long-term and short-term incentives, whereas others just talk about bonuses. This seems to be related to the pertaining country and its national regulations. We ignore any extraordinary compensation such as restricted shares (which had been allocated when starting as CEO), payment in lieu of notice, or buyout. Finally, we exclude all amounts received related to the holding of a director position in addition to the CEO position.
We also collect information to create a dummy variable that indicates explicit disclosure of peer firms in determining a company's relative performance pay to its CEO. We interpret such disclosures as an indication of RPE usage and examine the subsample of firms that disclose in this way in the "Regression results" section. We then identify possible factors related to this disclosure in the "Extensions and robustness checks" section. Note that this approach is less restrictive than a strict requirement of overt RPE claims, and would therefore also pick up simple benchmarking (for details on the difference between RPE and benchmarking, see Gong et al. (2011)). This runs a higher risk of not rejecting the null hypothesis even if it is false. If we do find evidence against the null hypothesis, however, we can be quite confident that disclosure has a significant impact.
International banking sample
We convert all compensation data into US Dollars by using exchange rates from Thomson Reuters Datastream. The exchange rate is determined by the day after the end of the fiscal year (e.g., if the fiscal year ends on December 31, 2010, we take the exchange rate on January 1, 2011). We measure firm performance with stock market return data from Thomson Reuters Datastream. Following the literature, we control for firm size (sales) (Smith and Watts 1992; Fama and French 1992) and growth opportunities (Fama and French 1992). In addition, we include dummies to control for year-specific differences in the level of compensation, industry dummies that capture unobservable variation at the industry level, and country dummies that capture any country-specific variation (e.g., due to different regulations or legal directives). In order to control for this possible country-specific heterogeneity, we only keep banks from countries with at least two observations.
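As an illustration of the conversion step only, the sketch below applies the exchange rate of the day after the fiscal year end; the inputs `comp` (one row per bank-year with columns `fiscal_year_end`, `local_comp`, `currency`) and `fx` (a date-indexed table of USD rates per currency) are hypothetical and do not reflect the layout of our source files.

```python
# Minimal sketch of the USD conversion; inputs and column names are hypothetical.
import pandas as pd

def to_usd(comp: pd.DataFrame, fx: pd.DataFrame) -> pd.Series:
    # Rate on the day after the fiscal year end (e.g., Jan 1 for a Dec 31 close).
    conv_date = pd.to_datetime(comp["fiscal_year_end"]) + pd.Timedelta(days=1)
    rates = [fx.loc[d, c] for d, c in zip(conv_date, comp["currency"])]
    return comp["local_comp"] * pd.Series(rates, index=comp.index)
```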
Panel A of Table 1 shows the frequencies for the full sample, the RPE disclosure subsample, and the non-RPE disclosure subsamples for each year. Altogether, the data for the full sample are evenly distributed over the years 2004–2013, though the frequency of the data tends to increase somewhat over time.Footnote 17 The same applies for both subsamples.
Panel B of Table 1 displays the sample frequency by industry group within the banking industry. In addition, it reports the frequency of RPE disclosing and non-disclosing banks by industry group. Subsector 6029 (Commercial Banks) dominates the full sample with more than 80% of all observations. The other subsectors are National Commercial Banks (6021), State Commercial Banks (6022), Federal Saving Institutions (6035), and Security Brokers and Dealers (6211). Similar to the full sample results, RPE disclosing (84.57%) and non-RPE disclosing banks (86.71%) mostly belong to the subsector 6029 (Commercial Banks). All the banks belonging to the subsector 6022 (State Commercial Banks) and the subsector 6035 (Federal Saving Institutions) disclose their peer groups.
Panel C of Table 1 depicts the sample frequency by country, including the RPE and non-RPE subsamples. Among the 14 countries in the full sample, Canada, Australia, Singapore, Sweden, and the UK provide the largest shares of our observations. The banks in Canada, Australia, and Germany have the highest propensities of RPE disclosure. In Australia and Germany, all banks provide information about their peer groups, whereas none of the banks in Hong Kong, China, and Norway do so.
Table 2 shows Pearson correlation coefficients between performance measures and the control variables firm size and growth options. Firm stock return and industry peer return as well as firm stock return and industry/size peer return display positive correlations. The correlation of firm stock return with its industry peer return (0.73) is lower than the correlation of firm stock return with its industry/size peer return (0.79), which is consistent with previous evidence (Albuquerque 2009).Footnote 18 The statistically significant correlation coefficients increase our confidence that industry and industry/size peers are suited for filtering out noise from firm performance measures. In addition, total executive compensation is positively and significantly correlated with stock return (0.15). The same holds for the correlation between total compensation and industry peer return, and total compensation and industry/size peer return. Not surprisingly, total compensation is positively correlated with firm size. In order to identify a possible multicollinearity problem in the upcoming regressions, we report variance inflation factors in all respective tables.Footnote 19
Peer group composition
For the selection of the peer firm pool, we start with a comprehensive list of 4,228 firms, most of which are financials. We use SIC-codes to remove firms which do not belong to the banking industry.Footnote 20 We also exclude other firms which we do not consider valid peers, such as the Allied Irish Banks, which technically became state-owned during the financial crisis. We then apply a number of screens to the return data to obtain a qualitatively sound data set (Ince and Porter 2006). First, we delete any consecutive zero returns at the end of the sample period. Second, we remove returns below − 80% and above 300%. We also require that the one-year continuously compounded return obtained from monthly data is available. We end up with 1,570 firms as the pool of potential peers. Note that in order to mitigate survivorship bias, this pool also contains so-called "dead stocks" which were delisted from the stock market during the sample period.
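The return screens can be summarized in a short sketch, assuming a hypothetical monthly-return DataFrame `returns` with months in the index and firms in the columns; the layout is illustrative, not the structure of our Datastream extract.

```python
# Minimal sketch of the return screens; the input layout is hypothetical.
import numpy as np
import pandas as pd

def screen_peer_returns(returns: pd.DataFrame) -> pd.DataFrame:
    out = returns.copy()
    for firm in out.columns:
        r = out[firm]
        # Flag trailing runs of zero returns at the end of the sample as stale observations.
        trailing_zero = r[::-1].ne(0).cumsum()[::-1].eq(0)
        out.loc[trailing_zero, firm] = np.nan
    # Drop implausible monthly observations below -80% or above +300%.
    out = out.mask((out < -0.8) | (out > 3.0))
    return out

# A firm-year is retained only if a full 12-month continuously compounded return
# can be built from the screened monthly data, e.g.
# annual_log_ret = np.log1p(screened).groupby(screened.index.year).sum(min_count=12)
```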
RPE firms assess their CEOs' compensation levels based on performance in relation to their respective peers. These peers are not simply a random draw of the market; firms follow a specific methodology in selecting their peers. Because researchers usually do not know a firm's peers, a different approach is needed to approximate the peer group. Most studies assessing RPE employ broad industry or market indices as a comparison group for peer performance. This is not without problems. Firms within an industry are hardly homogenous in their characteristics, so simple benchmarks are not able to adequately capture common shocks (Albuquerque 2009).Footnote 21 This introduces a bias in the statistical estimation and can distort inferences. An inappropriate comparison group can lead to a higher (or lower) prescribed level of CEO pay. An expedient and replicable comparison group based on a reasonable and objective criterion is therefore the key element when empirically testing for RPE.
Albuquerque (2009) provides a pragmatic solution for the ex post reconstruction of RPE peer groups. She composes groups based on both the two-digit SIC level and firm size. The first step in her process sorts firms by beginning-of-year market value into size quartiles within an industry. This yields four peer groups per industry. Each firm is then matched with its industry-size peer group. It turns out that this approach yields stronger empirical support for the use of RPE in executive compensation than sorting by industry classification alone, an improvement that is due to the information that firm size captures. Firms of comparable size are similar along several other characteristics which proxy for systematic risk. Albuquerque shows how the levels of diversification, financing constraints, and operating leverage vary with industry-size-ranked portfolios and provides evidence that firm size subsumes these characteristics. She finds that larger firms tend to be more diversified, have greater operating leverage, and smaller financing constraints. This claim is supported by other literature. Demsetz and Strahan (1997), for instance, construct a measure of diversification of BHCs. Their results establish a strong, positive effect of bank size on the diversification of BHCs. Moreover, small firms tend to face bigger financial constraints in comparison to large ones. In other words, firm size is a proxy with high explanatory power for the common uncertainty Holmstrom (1982) insinuated. We thus proceed to build the specific peer groups by adapting the industry/size approach by Albuquerque (2009).
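A minimal sketch of this construction is shown below, assuming a hypothetical firm-year panel with columns `year`, `sic2`, `mv_boy` (beginning-of-year market value), and `ret`; excluding the target firm from its own peer average is an illustrative choice here, not a statement about the exact aggregation we use.

```python
# Minimal sketch of industry/size peer groups in the spirit of Albuquerque (2009);
# the panel layout and the exclusion of the target firm are illustrative assumptions.
import pandas as pd

def industry_size_peer_return(panel: pd.DataFrame) -> pd.Series:
    df = panel.copy()
    # Sort firms into size quartiles within each industry-year by beginning-of-year market value.
    df["size_q"] = df.groupby(["year", "sic2"])["mv_boy"].transform(
        lambda s: pd.qcut(s, 4, labels=False, duplicates="drop")
    )
    # Equal-weighted peer return of the industry/size cell, excluding the firm itself.
    grp = df.groupby(["year", "sic2", "size_q"])["ret"]
    cell_sum, cell_n = grp.transform("sum"), grp.transform("count")
    return (cell_sum - df["ret"]) / (cell_n - 1)
```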
Results
Full sample results
This subsection presents the results for our full banking sample. We first present descriptive statistics of compensation data, performance measures, and firm characteristics for the 46 firms during 2004–2013 (see the "Summary statistics" subsection). In the "Regression results" section, we then document the statistical results. We regress the logarithm of total CEO compensation on firm stock performance, peer return, and several control variables.
Summary statistics
Table 3 presents descriptive statistics for the full sample. We report two measures of compensation: total compensation and the logarithm of total compensation. In the regression analysis, we use the logarithm of total compensation as a dependent variable because its empirical distribution is more symmetrical than the one for total compensation. This mitigates heteroscedasticity as well as extreme skewness and allows for a direct comparison with results from previous studies (Murphy 1999). We also report summary statistics for the control variables firm size (log of sales and sales) and growth options. Table 3 shows that the average (median) total compensation of an executive in our sample is USD 5.28 million (USD 4.12 million), which is not all that surprising in a sample that largely consists of major global players in the banking industry. Firm performance is measured using log-returns. The mean firm stock return is 5% and the median return is 14%. Averages of peer returns hover around 8%. The average (median) size in terms of sales of a bank is USD 34.15 billion (USD 20.63 billion). Using total assets as a proxy for size, the according value is USD 738.47 billion (USD 376.83 billion).
Table 3 Descriptive statistics
Regression results
We test the use of RPE in CEO compensation using Eq. 1. Peer groups are constructed with the industry and industry/size approach. We regress the logarithm of total CEO compensation on firm stock return, peer return, growth options, and log of sales. Year, country, and industry dummies are also included.
Panel A of Table 4 shows the sensitivity of CEO total compensation to RPE when using industry and industry/size peer groups. The coefficient on firm stock return is positive and statistically significant at the 1% level for both peer group specifications, with values of 0.47 and 0.55 for the industry and industry/size specifications, respectively. When the peer group is restricted to firms within the same industry, the coefficient of the peer portfolio is negative but not significant (− 0.11 with a p value of 0.56). Put differently, the performance of these peers does not seem to be filtered out from the CEO compensation contracts. If we include size into sorting and consider industry/size peers, the parameter estimates become statistically significant (with a coefficient of − 0.32 and a p value of 0.05). Robustness checks yield mixed results.Footnote 22Footnote 23
By and large, the results for our international banks match previous findings for US firms, which showed that industry/size peers are better able to capture exogenous shocks than industry peers alone (Albuquerque 2009).
RPE subsample results
Weak tests of RPE
The results above are consistent with the notion that the 46 banks in our full sample follow an RPE scheme. We now turn to the informational value of peer disclosure. Although there is a risk of taking disclosure at face value, we exploit this information to sharpen our sample's profile. We test the sensitivity of CEO pay to RPE in the subsample of 25 banks that explicitly declare the use of peers in determining the performance of their CEOs in their proxy statements (see the "Compensation data" section). We follow the same empirical specification used in the previous analysis.
Panel B of Table 4 shows the sensitivity of CEO total compensation to RPE when using industry and industry/size peers. The results show positive and statistically significant parameter estimates on firm stock performance for both peer group specifications. The estimates are 0.49 and 0.69, respectively, indicating that a CEO is being rewarded for positive firm performance. Hence, on average, CEO compensation increases with firm performance. When the peer groups are composed of banks within the same industry, the coefficient on the peer portfolio is negative and not significant (with a coefficient of − 0.30 and a p value of 0.40). The industry/size parameter estimate is also negative but statistically significant at the 5% level (with a coefficient of − 0.66 and a p value of 0.02). The results for the subsample of disclosing banks, too, provide evidence consistent with RPE, but more conclusively so than the results for the full sample. The coefficient on the peer portfolio doubles in size and increases in statistical significance.Footnote 24 This suggests that peer group disclosure holds informational value regarding RPE. One could also say that the inclusion of non-disclosing banks in the full sample tends to dilute the statistical inference and renders it less conclusive.Footnote 25 These results stand in contrast to Gong et al. (2011), who find no informational value of RPE disclosure. However, their sample only comprises US firms for 1 year.
Strong-form test of RPE
Following Antle and Smith (1986), we perform so-called strong-form tests of RPE on the subsample of RPE disclosures to verify the robustness of our results. Strong-form tests of RPE examine whether all the noise that can be removed is indeed filtered out from the compensation contracts. Details on the construction of systematic and unsystematic firm performances and the employed empirical model are reported in the "Empirical model" subsection. In a nutshell, the results are consistent with RPE if only the unsystematic performance exerts influence on CEO pay, and not the systematic one.
Panel C of Table 4 documents the regression results from Eq. (4) for the subsample of disclosing banks. Here, we regress the logarithm of CEO compensation on unsystematic firm performance, systematic firm performance, and control variables for 162 firm-year observations over the time span 2004–2013. In that specification, we restrict ourselves to industry/size groups for constructing the systematic performance variable. The systematic component is not significant with a coefficient estimate of 0.03 (p value = 0.89). The unsystematic performance variable, on the other hand, is positive and statistically significant with a coefficient of 0.69 (p value = 0.00). This suggests that the CEOs in our subsample are being compensated for unsystematic performance only. These results hold up to several robustness tests and provide evidence in keeping with the use of strong-form RPE and reinforce the previous finding that CEOs are not being compensated for systematic performance in the subsample of RPE disclosures.Footnote 26
Extensions and robustness checks
Associated factors of RPE in the banking industry
Prior studies have put forth a variety of factors that are related to the usage of RPE in compensation contracts in UK and US firms (Carter et al. 2009; Gong et al. 2011; Albuquerque 2014). They do not, however, examine the effect of one factor at a time on the usage of RPE while controlling for the other factors. Gong et al. (2011) investigate explicit disclosures on RPE in the US to identify the factors that prompt the use of RPE in compensation contracts in 2006. Carter et al. (2009) examine the use of RPE in performance-vested equity grants in a sample of UK firms in 2002. This section examines international firms over a longer time span. Understanding what factors are linked to RPE is instructive for researchers testing for RPE and could offer yet another reason for the mixed evidence in existing empirical studies.
In order to pinpoint possible factors related to RPE, we conduct a logit regression. The dependent variable \(y_{it}\) is an indicator variable that equals 1 for banks that disclose information on the use of a peer group to determine the level of executive compensation, and 0 otherwise (see the "Compensation data" section). The independent variables include CEO pay (\(Comp_{it}\)), firm performance (\(FirmPerf_{it}\)), various specifications of peer return (\(PeerPerf_{it}\)), and control variables. We control for firm size (\(FirmSize_{it}\)) and growth options (\(GrowthOptions_{it}\)) and include year (\(YearDummy_{it}\)), industry (\(IndustryDummy_{it}\)), and country (\(CountryDummy_{it}\)) dummies to control for cross-sectional variation. Sales are used as a proxy for firm size. Growth options are calculated as follows: (Market Equity + Total Assets − Common Equity)/Total Assets.
Our logit model is based on the following latent variable model:
$$ \begin{aligned} y_{it} = \; & \gamma_{0}+\gamma_{1}\cdot {Comp}_{it}+ \gamma_{2}\cdot {FirmPerf}_{it}+ \gamma_{3} \cdot {PeerPerf}_{it} \\ & + \gamma_{4}\cdot {FirmSize}_{it} + \gamma_{5} \cdot {GrowthOptions}_{it}+\gamma_{7} \cdot {YearDummy}_{it} \\ & + \gamma_{8} \cdot {IndustryDummy}_{it} + \gamma_{9} \cdot {CountryDummy}_{it} + u_{it}. \end{aligned} $$
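A minimal sketch of how Eq. (5) could be estimated is shown below, reusing the hypothetical column names from the earlier sketches plus a binary `rpe_disclosed` indicator; the names are illustrative rather than taken from our data set.

```python
# Minimal sketch of the disclosure logit (Eq. 5); variable names are hypothetical.
import statsmodels.formula.api as smf

def rpe_disclosure_logit(df):
    # growth_opt is assumed to be precomputed as
    # (market equity + total assets - common equity) / total assets.
    model = smf.logit(
        "rpe_disclosed ~ log_comp + firm_ret + peer_ret + log_sales + growth_opt"
        " + C(year) + C(industry) + C(country)",
        data=df,
    ).fit(disp=False)
    return model.params, model.pvalues
```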
We estimate Eq. (5) with the full sample of 335 firm-year observations from 2004 to 2013. Table 5 reports the results. We find that the likelihood of using RPE is positively related to firm size for industry and industry/size peers.Footnote 27 The opposite holds for growth options.Footnote 28 None of the other predictors are statistically significant, indicating that in our sample there is a strong link between size and growth options on the one hand and RPE on the other.Footnote 29
These results are in line with existing evidence. Gong et al. (2011) find that larger firms are more likely to use RPE. This could be for several reasons. Firm size might represent a crude proxy for public scrutiny and shareholder concerns about compensation practices. Larger firms are also more exposed to monitoring pressure compared to smaller firms. This might well force them to be more committed to RPE (Bannister and Newman 2003). Albuquerque (2014) and Gong et al. (2011) find that the level of RPE in CEO compensation contracts is negatively associated with a firm's level of growth options. Carter et al. (2009) examine the disclosure of performance-based conditions in equity grants and document that growth options are inversely related to the performance-based conditions. Albuquerque (2014) argues that high growth options firms have to bear more risks and thus exhibit a higher idiosyncratic variance. These firms are also characterized by firm-specific know-how and operate in markets with high barriers to entry. As a consequence, these characteristics make peer performance uninformative with respect to capturing external shocks. This eventually leads to less usage of RPE among high growth options firms (Albuquerque, 2014, p.1).
RPE pay practices and bank-level characteristics
We next extend our analysis and investigate the relation between the magnitude of RPE pay practices and various bank-level characteristics. We first repeat the standard estimation (Eq. 1) conducted in the "Regression results" section with the industry/size peer group. We quantify RPE-intensity via the ratio of predicted (log) CEO-compensation to the actual (log) CEO-compensation. This prediction is only based on firm stock return and peer group return. The idea here is to separate firms (or firm-years) for which compensation is mainly based on firm performance and peer group performance from firms (or firm-years) for which other factors are more important. We then proceed to sort all firm-years based on this measure of RPE-intensity into four groups of equal size to examine if various bank-level characteristics are related to RPE-intensity. We analyze three different measures; two proxies for firm performance and one proxy for firm-specific risk: (1) return on equity (ROE), (2) the (yearly) firm stock return, and (3) the variance of the firm stock return. We calculate ROE by dividing net sales (Datastream code DWSL) by lagged common equity (Worldscope item WC03501). The calculation of the yearly stock return is described in the "International banking sample" section. Finally, firm stock return variance is calculated as the variance of the stock returns over the previous 36 months.
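The RPE-intensity sort can be sketched as follows, reusing the fitted weak-form model from the earlier sketch; since the treatment of the regression intercept is not spelled out above, dropping it here is an illustrative assumption.

```python
# Minimal sketch of the RPE-intensity measure; intercept handling is an assumption.
import pandas as pd

def rpe_intensity_groups(df: pd.DataFrame, model) -> pd.Series:
    # Predicted (log) pay based only on firm return and peer return.
    predicted = (model.params["firm_ret"] * df["firm_ret"]
                 + model.params["peer_ret"] * df["peer_ret"])
    intensity = predicted / df["log_comp"]
    # Sort firm-years into four equally sized RPE-intensity groups.
    return pd.qcut(intensity, 4, labels=["Q1", "Q2", "Q3", "Q4"])

# Firm-specific risk proxy: variance of monthly stock returns over the previous
# 36 months, e.g. monthly_ret.rolling(36).var() on a months-by-firms DataFrame.
```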
The results are shown in Table 6. No clear relation between RPE-intensity and ROE can be made out. Returns seem to decrease with RPE-intensity. This is a purely descriptive exercise which cannot pinpoint any causality, so one could only speculate whether lower stock returns lead to more RPE practices or whether fewer RPE practices lead to higher stock returns (or whether there is an unobservable characteristic driving both of them). There is no monotonic relation between stock return variance and RPE-intensity, but the correlations show that firms with comparatively high and low RPE-intensities have somewhat higher return variances, suggesting that these firms are riskier. A more detailed investigation of the mechanism behind these observations could be an area for future research.
Table 6 RPE pay practices and bank-level characteristics
Robustness checks
In this section, we conduct three different checks to gauge the robustness of the results obtained in the "Results" section: (1) we construct regional instead of global peer groups, (2) we construct value-weighted instead of equal-weighted peer groups, and (3) we examine the effect of excluding the years of the financial crisis (2007 and 2008).
Regional peer groups
Our first robustness check restricts the construction of the peer group by employing regional instead of global peer groups. We first classify seven regions as defined by the World Bank:Footnote 30 "Europe and Central Asia," "Middle East and North Africa," "Latin America and Caribbean," "East Asia and Pacific," "South Asia," "Sub-Saharan Africa," and "North America." Not surprisingly, the correlations between peer and firm stock returns are somewhat higher with regional peer groups than with global peer groups (as shown in Tables 7 and 2, respectively). The correlation between industry peer group and total compensation is no longer significant. The correlation between industry/size peer group and total compensation, on the other hand, does not change from going regional.
Table 7 Pearson correlation coefficients with regional peer groups
Table 8 replicates the main results of the "Regression results" section using regional peer groups. By and large, the results are similar to the ones obtained with global peer groups in the "Regression results" section. The coefficients of the industry/size peer group remain significant at the 5% level for both the full sample and the disclosure subsample. The strong-form RPE test for the disclosure subsample continues to support the hypothesis that firms apply RPE.
Table 8 Regressions estimating the sensitivity of CEO compensation to RPE with regional peer groups
Value-weighted peer groups
With a skewed distribution, equal weights might overstate the influence of smaller banks in peer groups. This is a concern in our sample because it contains the largest banks in the world, possibly biasing our results. To mitigate the impact of smaller banks, the next robustness check employs value-weighted instead of equal-weighted peer groups. For this purpose, we use the market capitalization at the fiscal year date. Table 9 shows the according correlations. The results do not change much compared to equal-weighted peer groups as shown in Table 2.
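The value-weighted variant only changes the aggregation step; the sketch below assumes the hypothetical panel from the industry/size sketch plus an `mcap` column holding market capitalization at the fiscal year date, and for brevity does not exclude the target firm from its own cell.

```python
# Minimal sketch of value-weighted peer returns; column names are hypothetical.
import pandas as pd

def value_weighted_peer_return(panel: pd.DataFrame) -> pd.Series:
    def vw(group: pd.DataFrame) -> float:
        weights = group["mcap"] / group["mcap"].sum()
        return float((weights * group["ret"]).sum())
    # One value-weighted return per industry/size cell and year.
    return panel.groupby(["year", "sic2", "size_q"]).apply(vw)
```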
Table 9 Pearson correlation coefficients with value-weighted peer groups
Table 10 replicates the main results of the "Regression results" section using value-weighted peer groups. The differences between the industry and the industry/size peer groups become less pronounced. This is to be expected because value weights shift the focus to the biggest firms in each industry, and the banks in our sample are most likely in this group. Taken together, we come to the same conclusions: the size/industry peer group still performs better than the industry peer group, and the strong-form RPE test for the disclosure subsample continues to support the hypothesis that firms apply RPE.
Table 10 Regressions estimating the sensitivity of CEO compensation to RPE with value-weighted peer groups
Exclusion of financial crisis years
The financial crisis in 2007 and 2008 had far-reaching implications for the performance of banks (e.g., Fahlenbrach and Stulz (2011)). These crisis years might distort the results of our analysis by driving the correlation between firm performance and industry/size peer return. In a third robustness check, we exclude the years 2007 and 2008 from our sample.Footnote 31
The correlations obtained without the years 2007 and 2008 strongly differ from the baseline results in Table 2 (see Table 16). While still significant, the correlations between peer return and firm stock return become much less pronounced. The correlations between total compensation and peer return even turn negative, albeit not statistically significantly.
We next examine whether this substantial change in correlations affects the main conclusions from our baseline results. Table 11 shows that the peer group coefficients become generally more negative. But the weak-form regression paints the same picture as the baseline results: Size/industry peer groups have larger coefficients than industry-only peer groups, and the disclosure subsample shows stronger evidence in favor of RPE than the full sample. The strong-form regressions reject the hypothesis that systematic risk is not filtered out of the compensation contract at a 5% level of statistical significance (instead of 10% in the other results).
Table 11 Regressions estimating the sensitivity of CEO compensation to RPE without the years 2007 and 2008
Additional robustness checks
We performed additional, unreported robustness checks.Footnote 32 Using total assets instead of sales as a proxy for firm size yields very similar results. Value-weighted regional peer groups do not substantially alter the results of the baseline specification, either. Finally, in order to disentangle cross-sectional from time-series effects, we conduct a panel estimation with fixed year-effects and fixed bank-effects. The between-group estimates yield mostly insignificant coefficients. The coefficient for the disclosure subsample with the size/industry peer group, however, is negative and significant (at the 5% level), and thus in line with our baseline result.
This paper tests for the presence of RPE in an original sample of 46 international banks from 2004 to 2013. We regress the logarithm of total compensation on firm performance, industry and industry/size peer performance, and control variables such as firm size and growth options. We control for unobservable variation in the level of compensation across years, industries, and countries. When we construct peer groups based on industry and firm size, we find evidence for the use of RPE in international banking. This evidence becomes stronger once we focus on banks that openly disclose the use of peers in their remuneration practice. This insight contrasts with and complements previous findings for the US.
We next employ a logit regression model to identify factors related to RPE in international banking. The evidence supports the working theory that growth options and firm size play a crucial role in banks' decisions to use RPE. Our results are robust to different model specifications and are consistent with existing evidence. We find that the likelihood of RPE usage decreases with growth options. A possible explanation for this result is that the implementation of RPE in high growth option banks might be too costly due to difficulties in identifying the correct peer group, rendering such banks less likely to use RPE. We also find that larger banks are more inclined to use RPE in their compensation contracts. This is a plausible finding. In light of the recent financial crisis, high levels of CEO compensation have attracted a lot of attention, and large banks in particular have been under significant monitoring and shareholder pressure. In response to such pressure, large banks are more likely to have committed to RPE in determining the level of CEO pay.
Our overarching findings suggest at least four things. First, large international banks seem to entertain the use of RPE in assessing the performance of their CEOs. This holds more conclusively for banks that disclose their peer groups. The latter implies the second point: disclosure statements seem to have some merit, at least in our sample, and credibly reflect good corporate practice on that score. Disclosing firms do not appear to merely preach RPE; they seem to practice it, too. This finding lends support to the credibility and thus to the informational value of RPE commitments. These first two points have important consequences for shareholders. Left to their own devices, CEOs would rather pursue their own utility. RPE helps align the interests of shareholders and CEOs by creating incentives for CEOs to take actions that increase shareholder wealth. Third, in line with previous studies, our evidence indicates that industry/size peers are better able to capture exogenous shocks than industry peers alone. Finally, empirical evidence on RPE runs the risk of being diluted when disclosing and non-disclosing firms are pooled. In studies of RPE, it therefore seems advisable, if nothing else for robustness, to stratify empirical samples by disclosure. This should inform future research.
Sample banks and (non-US) global systemically important banks
Table 12 List of international banks in the full sample
Table 13 List of non-US global systemically important banks (G-SIBs), 2011–2012
Kernel-based peer group construction
This appendix addresses some issues one might have with the industry/size quartile approach and introduces a novel Kernel-based alternative. This alternative extends Albuquerque (2009) with a more flexible peer group construction method. The following example illustrates a possible caveat of the industry/size quartile approach. In Albuquerque (2009), all firms are partitioned and ranked into four size groups (per industry). In ascending order, the first group contains 25% of the firms with the smallest size, and the fourth group contains 25% of the firms with the largest size. The boundaries between the four groups, the so-called breakpoints, thus lie on 25%, 50%, and 75% of the ranked values of firm size. Now let us assume that we want to test the RPE hypothesis on a target company that is very close to the breakpoint between the first and the second quartile, but just happens to fall into the first one. In this particular case, it is not readily obvious why the first peer group, and not the second one, should be assigned to the target firm. Our alternative method of peer group composition addresses this issue. For every target firm, we assign a unique peer group that is determined by the target firm's size. We implement this with a Kernel-based weighting scheme. Firms that are closer to the target firm in terms of firm size receive a weight specific to the distance from the target firm. More concretely, a weighting function assigns a higher weight to a peer firm if it exhibits a smaller distance to the target firm in terms of firm size (we will also allow for equal weights). We measure the differences of firm sizes as follows:
$$ \begin{aligned} \mathrm{D}_{i}=\text{Size}_{T}-\text{Size}_{i} && \text{where} && i=1,..., N. \end{aligned} $$
\(\text{Size}_{T}\) denotes the size of the target company measured in terms of firm sales, and \(\text{Size}_{i}\) is a proxy for the size of all other firms. We standardize the "distances" by dividing them by the cross-sectional standard deviation, \(\mathrm{s}(\mathrm{D}_{i})\):
$$ \begin{aligned} \mathrm{D}^{*}_{i}=\frac{\mathrm{D}_{i}}{\mathrm{s}(\mathrm{D}_{i})} && \text{where} && i=1,..., N \end{aligned} $$
From these standardized distances, we construct weights using a kernel weighting function. The firm i in the sample of N firms will be assigned the weight
$$ \begin{aligned} \mathrm{w}_{i}=\mathrm{K}(\mathrm{D}^{*}_{i}) \end{aligned} $$
Additionally, we create weights by multiplying the standardized difference with the following scaling factor (SF):
$$ \begin{aligned} \text{SF}_{i} &= \text{Median}\left(\frac{\mathrm{s}(\text{Size}_{i})}{\mathrm{s}(\text{Size}_{T})}\right)\cdot 2 \\ \mathrm{D}^{**}_{i} &= \mathrm{D}^{*}_{i} \cdot \text{SF} \\ \mathrm{w}_{i} &= \mathrm{K}(\mathrm{D}^{**}_{i}). \end{aligned} $$
For robustness, we use three types of kernel functions to assign weights: (1) the probability density function (pdf) of the standard normal distribution, (2) the pdf of the uniform distribution, and (3) the pdf of the "cosine distribution".Footnote 1 In addition, we standardize each weight by the sum of all weights. This amounts to the following peer performance weight:
$$ {\mathrm{w}_{i}}^{*}=\frac{\mathrm{w}_{i}}{{\sum_{j=1}^{N}\mathrm{w}_{j}}} $$
such that
$$ \sum_{i=1}^{N}{\mathrm{w}_{i}}^{*}=1. $$
Finally, we use the performance weights and individual firm performance \(\text{Perf}_{i}\) to construct each target firm's peer group performance as follows:
$$ \text{PeerPerf}=\sum_{i=1}^{N}{\mathrm{w}_{i}^{*}\cdot \text{Perf}_{i}} \; \; \text{where} \;\; i=1,..., N. $$
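The following sketch implements the unscaled weighting scheme for a single target firm with the normal-pdf kernel; the input arrays of peer sales and peer returns are hypothetical, and replacing `norm.pdf` with a uniform or cosine kernel (or rescaling the standardized distances by the scaling factor above) gives the other variants.

```python
# Minimal sketch of kernel-based peer performance (normal-pdf kernel, unscaled weights).
import numpy as np
from scipy.stats import norm

def kernel_peer_performance(size_target: float,
                            sizes: np.ndarray,
                            perf: np.ndarray) -> float:
    d = size_target - sizes              # size distances D_i
    d_star = d / d.std(ddof=1)           # standardized distances D*_i
    w = norm.pdf(d_star)                 # kernel weights w_i
    w_star = w / w.sum()                 # weights normalized to sum to one
    return float(np.dot(w_star, perf))   # weighted peer performance
```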
Table 14 Regressions estimating the sensitivity of CEO compensation to RPE
Table 15 Logit regression of RPE usage in executive compensation contracts
Table 16 Pearson correlation coefficients without the years 2007 and 2008
Regression results (kernel-based peers)
Full sample of banks
Panel A of Table 14 reports the results from regressing the logarithm of total compensation on firm stock return, Kernel-based peer performance, growth options, and log of sales. The peer parameter estimates are negative and insignificant, which is not consistent with the presence of RPE. The estimates hardly differ across the different Kernel specifications. They are − 0.26 (p value = 0.38) for the normal Kernel function, − 0.16 (p value = 0.54) for the cosine Kernel function, and − 0.20 (p value = 0.48) for the uniform Kernel function. Panel A of Table 14 also reports results for a slightly adjusted Kernel-based approach, in which the firm-size differences are multiplied by the scaling factor introduced in the previous section. We test the presence of RPE by regressing the log of total CEO compensation on firm stock return, peer performance, growth options, and log of sales. We also include year, country, and industry dummies. The coefficient on firm stock return is again positive and statistically significant at the 1% level for every specification. The negative coefficients on the Kernel-based peer portfolio persist. They are − 0.22 (p value = 0.39) for the normal Kernel function, − 0.20 (p value = 0.38) for the cosine Kernel function, and − 0.27 (p value = 0.29) for the uniform Kernel function. The adjusted Kernel-based approach reports smaller p values. The coefficients remain insignificant, revealing no evidence of RPE in the full sample.
Weak tests of RPE (disclosure subsample)
Panel B of Table 14 documents the same regression procedure on the subsample of banks that explicitly disclose the use of peers in determining their level of CEO compensation. Under the Kernel-based peer group specification, external shocks are removed from the compensation contract, which is consistent with RPE. The peer coefficients do not differ much across the Kernel specifications. They are − 0.99 (p value = 0.03) for the normal Kernel function, − 0.81 (p value = 0.05) for the "cosine" Kernel function, and − 0.88 (p value = 0.06) for the uniform Kernel function. The coefficient on firm stock performance is positive, statistically significant, and ranges from 0.70 to 0.77. Panel B of Table 14 also reports the regression results with the adjusted Kernel-based peers (columns labeled "scaled"). All the Kernel-based peer coefficients remain negative and statistically significant, soundly rejecting the null hypothesis of no RPE. The coefficient of the normal Kernel peer group is − 0.82 (p value = 0.03), of the cosine Kernel peer group − 0.83 (p value = 0.01), and of the uniform Kernel peer group − 0.77 (p value = 0.04).
Strong-form tests of RPE (disclosure subsample)
We now use strong-form RPE tests in order to test the RPE hypothesis on the disclosing subsample. We use the Kernel-based method to construct a systematic performance variable and run the same regression model. Panel C of Table 14 reports the results. We document insignificant parameter estimates on systematic firm performance. The coefficient of the normal Kernel peer group is 0.04 (p value = 0.85), of the cosine Kernel peer group 0.08 (p value = 0.70), and of the uniform Kernel peer group 0.09 (p value = 0.65). Our results are robust to different specifications of the weights in the Kernel-based approach, which is reported in panel C of Table 14. The parameter estimates of the normal Kernel peer group is 0.07 (p value = 0.75), of the cosine Kernel peer group 0.01 (p value = 0.96), and of the uniform Kernel peer group 0.09 (p value = 0.66). The unsystematic firm performance is significant at the 1% level for every Kernel-based specification.
Associated factors of RPE
In this section, we estimate Eq. (5). For this purpose, as in the previous section, we use the alternative peer group definitions based on the Kernel approach. The results are presented in Table 15. The parameter estimates on firm size remain statistically significant and have the same sign. The firm size coefficient of the normal Kernel peer group is 2.68 (p value = 0.00), of the cosine Kernel peer group 2.70 (p value = 0.00), and of the uniform Kernel peer group 2.69 (p value = 0.00). The coefficient for growth options remains negative and in most cases statistically significant. The results are qualitatively similar when we use the adjusted Kernel-based approach. The coefficients are − 19.64 (p value = 0.09) for the normal Kernel peer group, − 19.28 (p value = 0.10) for the cosine Kernel peer group, and for the uniform Kernel peer group − 19.74 (p value = 0.09).
Correlation coefficients of non-crisis years
Table 16 shows Pearson correlations obtained without the years 2007 and 2008.
Clustered standard errors
Here, we consider the same regression procedure (Eq. (1)) for the full sample of 42 banks but include clustered standard errors across industry codes. Table 17 reports the regression results when peers are based on industry and industry/size. The coefficient on industry peer is − 0.06 (p value = 0.87), and the coefficient on industry/size peers is − 0.31 (p value = 0.06). That is to say, we find qualitatively similar results to those presented in panel A of Table 4. In addition, in unreported results, we find that the results for the Kernel-based approaches are robust to the inclusion of clustered standard errors.
Implementing RPE does also entail potential costs. (Gibbons and Murphy 1990, p. 31) discuss the benefits and costs of RPE and state that: "One occupation for which the risk-sharing advantages of RPE likely exceed its counterproductive side-effects is top-level corporate management." By discussing several potential costs of RPE in general (like costs induced by sabotage, collusion, choice of the reference group, or production externalities), they find that most of the costs are of minor relevance for CEO compensation (for details see Gibbons and Murphy 1990).
This is known as Bebchuk and Fried's (2004) managerial power hypothesis, which argues that there are serious flaws in the corporate governance system, flaws that allow executives to exert influence over the board of directors and thereby hold sway over the level of their pay. Bebchuk and Fried provide evidence that supports their hypothesis, which helps explain why the actual pay-setting process is hard to reconcile with traditional contract theory.
Antle and Smith (1986) examine RPE in a sample of chemical, aerospace, and electronics firms. Rajgopal et al. (2006) cover a wide range of industries with the three largest groups being electric, gas, and sanitary services, chemicals and allied products, and depository institutions. Aggarwal and Samwick (1999b) and Aggarwal and Samwick (1999a) exploit ExecuComp data, compensation information for US firms. Joh (1999) tests RPE on a sample of Japanese firms in the manufacturing sector.
For example, Gibbons and Murphy (1990), Albuquerque (2009), and Black et al. (2015) find empirical support for the RPE hypothesis. In contrast, Janakiraman et al. (1992), Antle and Smith (1986), Aggarwal and Samwick (1999b), and Jensen and Murphy (1990) fail to provide evidence for RPE or present mixed results.
US banks were excluded in our analysis because of a regulatory event during the observed timeframe (see the section "Compensation data" for details). To our knowledge, there are only two studies that test RPE on US banks, Barro and Barro (1990) and Crawford (1999). We elaborate on them in the section "International banking sample".
Bikker and Haaf (2002) investigate the competitive conditions and concentration in banking markets of 23 industrialized countries inside and outside Europe over 10 years. They form three sub-markets in terms of bank sizes for each country and estimate the corresponding competition conditions. They show that large banks operate mostly in international markets and are exposed to strong competition. On the other hand, smaller banks operate mainly in local markets and are facing less competition.
Bebchuk et al. (2010) conclude that the pay structures in Bear Stearns and Lehman Brothers had provided top executives with overbearing risk-taking incentives. This misalignment let them focus on their company's short-term performance while paying too little attention to the long-term value.
We collect this information from the annual reports and proxy statements of the respective banks. See the "Compensation data" section for more details.
Our classification of RPE statements is less strict than that of Gong et al. (2011), so our estimates of its informational value are, if anything, to be interpreted conservatively. See the section "Compensation data" for more details.
Since 1980, many states in the US have passed so-called interstate banking laws that allow local banks to be acquired by out-of-state banks. This has led to higher competition among banks on the interstate market and has had consequences on the pay-performance relation.
The results are inconclusive. For example, Saunders et al. (1990) find evidence for this hypothesis and observe a positive and statistically significant relation between bank risk and stock held by the executive. In contrast, Houston and James (1995) provide results that are inconsistent with this hypothesis.
Originally, Holmstrom and Milgrom (1987) defined RPE as \(\frac{\alpha_2}{\alpha_1}\). They test \(H_0:\frac{\alpha_2}{\alpha_1}\geq 0\) against the alternative \(H_1:\frac{\alpha_2}{\alpha_1}<0\). Since α1 is expected to be positive, most of the literature that uses the model proposed by Holmstrom and Milgrom tests whether α2<0. We follow that approach.
Note that α3 is a row vector.
Firm-specific fixed effects in firm performance would be included in the unsystematic part of the performance, potentially biasing the coefficient estimate βi used to determine systematic performance. We address this issue by including control variables (including industry and country dummies) in the first step of the regression. Still, because the correlation between peer performance and the control variables is low, excluding them would have no major impact on the results.
We retain US banks as possible peers, however. See the section "Peer group composition".
For EU firms, for example, Ferrarini and Moloney (2005, p. 318) point out that peer group disclosure is not required. We are not aware that this has changed since 2005.
The low number of firms in the last year is because not all firms in our sample had yet released proxy circulars by the time we collected the data.
Table 1 Sample frequency by year, SIC level, and country
Compared to the correlation coefficients in Albuquerque (2009, Table 4), our values seem rather high (e.g., 0.45 vs. 0.79 for the correlation between firm performance and industry/size peer return). One explanation for this difference is our focus on the banking industry. Another explanation is the financial crisis. It turns out that the years 2007 and 2008 explain roughly half of the difference (excluding these years yields a value of 0.63 for our sample; see Table 16). See also our robustness check in the section "Exclusion of financial crisis years". The correlation coefficients for our disclosing subsample do not differ substantially from our full sample. We do not report these results separately.
Table 2 Pearson correlation coefficients
A variance inflation factor (VIF) indicates how much the variance of a variable is increased because of collinearity. There is no clear rule for when a VIF indicates high multicollinearity. A rule of thumb is that values greater than 10 are considered to be problematic. In our results, the variables of interest are below this threshold. We are grateful to an anonymous referee for bringing this to our attention.
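As an illustration of this rule of thumb, the following sketch computes VIFs for a set of hypothetical regressors and flags values greater than 10; the variable names are placeholders, not the paper's actual covariates.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Hypothetical regressor matrix standing in for the performance measures
# and controls of the compensation regression.
rng = np.random.default_rng(1)
X = pd.DataFrame({
    "firm_return": rng.normal(size=100),
    "peer_return": rng.normal(size=100),
    "log_sales": rng.normal(size=100),
})
X = sm.add_constant(X)

# VIF for each regressor (the constant is ignored); values greater than 10
# are commonly taken to indicate problematic multicollinearity.
for i, col in enumerate(X.columns):
    if col == "const":
        continue
    vif = variance_inflation_factor(X.values, i)
    print(f"{col}: VIF = {vif:.2f}{'  <-- high' if vif > 10 else ''}")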
Datastream provides up to five different SIC codes for each firm, in order of relevance. We include a firm if its first SIC code is one of the following: 6021, 6022, 6029, 6035, 6036, 6061, 6062, 6081, 6082, 6091, 6111, 6141, 6159, 6162, or 6712. If the first SIC code is either 6311, 6211, 6153, 6163, or 6221, we include the firm only if one of its four other SIC-codes is in the previous list.
Jensen and Murphy (1990) aggregate peer performance based on the two-digit SIC level or a market index, while Janakiraman et al. (1992) match their peers on the same two-digit SIC industry level. Aggarwal and Samwick (1999a) use two-, three-, and four-digit SIC levels in order to compose a peer group, and Aggarwal and Samwick (1999b) choose peers in the same two-, three-, and four-digit SIC level industry or a market index.
In the Appendix, we provide two additional robustness checks beyond the ones presented in the subsequent section "Extensions and robustness checks". First, the results are robust across different treatments of standard errors (see the section "Clustered standard errors"). Second, we implement a novel Kernel-based peer group construction approach with three different Kernel functions: a standard normal probability distribution function (pdf), a "cosine" pdf, and a uniform pdf. These generalized approaches implement different weights depending on a peer's size "distance" from the target firm's size. The corresponding results are reported in the section "Regression results (kernel-based peers)" (Table 14).
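The endnote does not spell out the exact distance metric or bandwidth of the Kernel-based construction, so the sketch below is only a plausible reading: peers are weighted by a normal, cosine, or uniform kernel applied to the log-size difference from the target bank, with a user-chosen bandwidth. The sizes, returns, and bandwidth in the example are assumed values.

import numpy as np
from scipy.stats import norm

def kernel_peer_weights(target_size, peer_sizes, bandwidth, kernel="normal"):
    """Weight potential peers by their size 'distance' from the target firm.

    Illustrative sketch only: log-size differences and a user-supplied
    bandwidth are assumed here, not the paper's exact specification.
    """
    z = (np.log(peer_sizes) - np.log(target_size)) / bandwidth
    if kernel == "normal":          # standard normal pdf
        w = norm.pdf(z)
    elif kernel == "cosine":        # "cosine" pdf, supported on |z| < 1
        w = np.where(np.abs(z) < 1, (np.pi / 4) * np.cos(np.pi * z / 2), 0.0)
    elif kernel == "uniform":       # uniform pdf, supported on |z| < 1
        w = np.where(np.abs(z) < 1, 0.5, 0.0)
    else:
        raise ValueError(kernel)
    return w / w.sum()              # normalize so the weights sum to one

# Example: kernel-weighted peer return for one target bank.
peer_sizes = np.array([5e9, 2e10, 8e10, 3e11])      # hypothetical total assets
peer_returns = np.array([0.04, -0.02, 0.07, 0.01])
w = kernel_peer_weights(target_size=6e10, peer_sizes=peer_sizes, bandwidth=1.0)
print(float(w @ peer_returns))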
Table 4 Regressions estimating the sensitivity of CEO compensation to RPE
The result does not remain statistically significant when we conduct the complementary changes regression presented in Albuquerque (2009). Untabulated results show statistically insignificant coefficients on industry/size peer returns. Neither are we able to document robustness when we use the Kernel-based peer group construction approach from the previous endnote.
In contrast to the full sample, for the subsample this result is robust to the Kernel-based approach presented in endnote 22. For more details see Panel B of Table 14 in the "Weak tests of RPE (disclosure subsample)" section of the Appendix.
Untabulated results for the non-disclosing subsample support this reasoning. On their own, non-disclosing banks do not seem to make use of RPE.
For the robustness tests, see panel C in Table 14 in the "Strong-form tests of RPE (disclosure subsample)" section of the Appendix.
Strictly speaking, we do not examine the likelihood of using RPE but the more encompassing likelihood of disclosing peer groups. This translates into a conservative estimation. Because peer group usage is only a necessary condition for RPE, our findings reflect a lower bound on the likelihood of using RPE.
Table 5 Logit regression of RPE usage in executive compensation contracts
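A minimal sketch of the kind of logit model behind such a table is shown below, using hypothetical bank-level data and placeholder covariates; the paper's actual explanatory variables and estimation differ.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical bank-level data: disclosure indicator and candidate
# explanatory factors (proxies only).
rng = np.random.default_rng(2)
n = 150
banks = pd.DataFrame({
    "discloses_peers": rng.integers(0, 2, n),
    "log_sales": rng.normal(9.0, 1.0, n),
    "growth_options": rng.normal(1.2, 0.3, n),
    "volatility": rng.normal(0.3, 0.1, n),
})

# Logit of the probability of disclosing a peer group -- a necessary
# condition for RPE, so estimates bound the likelihood of RPE from below.
logit = smf.logit(
    "discloses_peers ~ log_sales + growth_options + volatility", data=banks
).fit(disp=False)
print(logit.summary())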
When using total assets instead of sales as a proxy for size, this coefficient becomes insignificant. The results are not reported but are available from the authors upon request.
The regression results based on our self-created Kernel-based peer group specifications are presented in Table 15 in the "Associated factors of RPE" section of the Appendix and are similar to the results in this section.
See https://datahelpdesk.worldbank.org/knowledgebase/articles/906519-world-bank-country-and-lending-groups.
We are grateful to an anonymous referee for this suggestion.
Detailed results are available from the authors upon request.
Adams, R.B., & Mehran, H. (2003). Is corporate governance different for bank holding companies?. FRBNY Economic Policy Review, 9(1), 123–142.
Aebi, V., Sabato, G., Schmid, M. (2012). Risk management, corporate governance, and bank performance in the financial crisis. Journal of Banking and Finance, 36(12), 3213–3226. ISSN 0378-4266.
Aggarwal, R.K., & Samwick, A.A. (1999). Executive compensation, strategic competition, and relative performance evaluation: Theory and evidence. Journal of Finance, 54(6), 1999–2043.
Aggarwal, R.K., & Samwick, A.A. (1999). The other side of the trade-off: The impact of risk on executive compensation. Journal of Political Economy, 107(1), 65–105.
Albuquerque, A.M. (2009). Peer firms in relative performance evaluation. Journal of Accounting and Economics, 48(1), 69–89. ISSN 0165-4101.
Albuquerque, A.M. (2014). Do growth-option firms use less relative performance evaluation?. The Accounting Review, 89(1), 27–60.
Antle, R., & Smith, A. (1986). An empirical investigation of the relative performance evaluation of corporate executives. Journal of Accounting Research, 24(1), 1–39. ISSN 0021-8456.
Bannister, J.W., & Newman, H.A. (2003). Analysis of corporate disclosures on relative performance evaluation. Accounting Horizons, 17(3), 235–246.
Barro, J.R., & Barro, R.J. (1990). Pay, performance, and turnover of bank CEOs. Journal of Labor Economics, 8(4), 448–481. ISSN 0734-306X, 1537-5307.
Bebchuk, L.A., & Fried, J.M. (2003). Executive compensation as an agency problem. Journal of Economic Perspectives, 17(3), 71–92.
Bebchuk, L.A., & Fried, J.M. (2004). Pay Without Performance: The Unfulfilled Promise of Executive Compensation. Cambridge: Harvard University Press.
Bebchuk, L.A., Cohen, A., Spamann, H. (2010). The wages of failure: Executive compensation at Bear Stearns and Lehman 2000–2008. Yale Journal on Regulation, 27, 257–282.
Berger, A.N., & Smith, D.A. (2003). Global integration in the banking industry. Federal Reserve Bulletin, 90, 451–460.
Bikker, J.A., & Haaf, K. (2002). Competition, concentration and their relationship: An empirical analysis of the banking industry. Journal of Banking and Finance, 26(11), 2191–2214.
Black, D.E., Dikolli, S.D., Hofmann, C. (2015). Peer group composition, peer performance aggregation, and detecting relative performance evaluation. Unpublished working paper. Duke University.
Carter, M.E., Ittner, C.D., Zechman, S.L.C. (2009). Explicit relative performance evaluation in performance-vested equity grants. Review of Accounting Studies, 14(2-3), 269–306.
Chen, C.R., Steiner, T.L., Whyte, A.M. (2006). Does stock option-based executive compensation induce risk-taking? An analysis of the banking industry. Journal of Banking and Finance, 30(3), 915–945. ISSN 0378-4266.
Crawford, A.J. (1999). Relative performance evaluation in CEO pay contracts: evidence from the commercial banking industry. Managerial Finance, 25(9), 34–54. ISSN 0307-4358.
Crawford, A.J., Ezzell, J.R., Miles, J.A. (1995). Bank CEO pay-performance relations and the effects of deregulation. The Journal of Business, 68(2), 231–256. ISSN 0021-9398.
Cuñat, V., & Guadalupe, M. (2009). Executive compensation and competition in the banking and financial sectors. Journal of Banking and Finance, 33(3), 495–504. ISSN 0378-4266.
Demsetz, R.S., & Strahan, P.E. (1997). Diversification, size, and risk at bank holding companies. Journal of Money, Credit and Banking, 29(3), 300–313. ISSN 0022-2879.
Diamond, D.W., & Dybvig, P.H. (1983). Bank runs, deposit insurance, and liquidity. Journal of Political Economy, 91(3), 401–419.
Fahlenbrach, R., & Stulz, R.M. (2011). Bank CEO incentives and the credit crisis. Journal of Financial Economics, 99(1), 11–26. ISSN 0304-405X.
Fama, E.F., & French, K.R. (1992). The cross-section of expected stock returns. The Journal of Finance, 47(2), 427–465. ISSN 1540-6261.
Faulkender, M., & Yang, J. (2013). Is disclosure an effective cleansing mechanism? The dynamics of compensation peer benchmarking. Review of Financial Studies, 26(3), 806–839.
Ferrarini, G., & Moloney, N. (2005). Executive remuneration in the EU: The context for reform. Oxford Review of Economic Policy, 21(2), 304–323. ISSN 0266-903X, 1460-2121.
Gibbons, R., & Murphy, K.J. (1990). Relative performance evaluation for chief executive officers. Industrial and Labor Relations Review, 43(3), 30–51.
Gjesdal, F. (1982). Information and incentives: The agency information problem. The Review of Economic Studies, 49(3), 373–390.
Gong, G., Li, L.Y., Shin, J.Y. (2011). Relative performance evaluation and related peer groups in executive compensation contracts. The Accounting Review, 86(3), 1007–1043.
Hodgson, P. (2004). The Wall Street example: Bringing excessive executive compensation into line. Ivey Business Journal.
Holmstrom, B. (1979). Moral hazard and observability. The Bell Journal of Economics, 10(1), 74–91.
Holmstrom, B. (1982). Moral hazard in teams. The Bell Journal of Economics, 13(2), 324–340.
Holmstrom, B., & Milgrom, P. (1987). Aggregation and linearity in the provision of intertemporal incentives. Econometrica, 55(2), 303–328.
Houston, J.F., & James, C. (1995). CEO compensation and bank risk: Is compensation in banking structured to promote risk taking?. Journal of Monetary Economics, 36(2), 405–431. ISSN 0304-3932.
Hubbard, R.G., & Palia, D. (1995). Executive pay and performance evidence from the U.S. banking industry. Journal of Financial Economics, 39(1), 105–130. ISSN 0304-405X.
Ince, O.S., & Porter, R.B. (2006). Individual equity return data from Thomson Datastream: Handle with care!. Journal of Financial Research, 29(4), 463–479. ISSN 1475-6803.
Janakiraman, S.N., Lambert, R.A., Larcker, D.F. (1992). An empirical investigation of the relative performance evaluation hypothesis. Journal of Accounting Research, 30(1), 53–69. ISSN 0021-8456.
Jensen, M.C., & Meckling, W.H. (1976). Theory of the firm: Managerial behavior, agency costs and ownership structure. Journal of Financial Economics, 3(4), 305–360.
Jensen, M.C., & Murphy, K.J. (1990). Performance pay and top-management incentives. Journal of Political Economy, 98(2), 225–264.
Joh, S.W. (1999). Strategic managerial incentive compensation in Japan: Relative performance evaluation and product market collusion. The Review of Economics and Statistics, 81(2), 303–313.
John, K., & Qian, Y. (2003). Incentive features in CEO compensation in the banking industry. FRBNY Economic Policy Review, 9(1), 109–121.
Macey, J.R., & O'Hara, M. (2003). The corporate governance of banks. FRBNY Economic Policy Review, 9(1), 91–107.
Mahoney, P.G. (1995). Mandatory disclosure as a solution to agency problems. The University of Chicago Law Review, 62(3), 1047–1112.
Murphy, K.J. (1999). Executive compensation. In: Ashenfelter, O., & Card, D. (Eds.) Handbook of Labor Economics, volume 3, Part B, chapter 38, pages 2485–2563. Elsevier Science North Holland, New York and Oxford.
Murphy, K.J. (2013). Executive compensation: Where we are, and how we got there. In: Constantinides, G.M., Harris, M., & Stulz, R.M. (Eds.) Handbook of the Economics of Finance, volume 2, Part A, chapter 4, pages 211–356. Elsevier/North-Holland, Amsterdam.
Rajgopal, S., Shevlin, T.J., Zamora, V.L. (2006). CEOs' outside employment opportunities and the lack of relative performance evaluation in compensation contracts. The Journal of Finance, 61(4), 1813–1844.
Ross, S.A. (1973). The economic theory of agency: The principal's problem. The American Economic Review, 63(2), 134–139.
Saunders, A., Strock, E., Travlos, N.G. (1990). Ownership structure, deregulation, and bank risk taking. The Journal of Finance, 45(2), 643–654. ISSN 0022-1082.
Smith, C.W., & Watts, R.L. (1992). The investment opportunity set and corporate financing, dividend, and compensation policies. Journal of Financial Economics, 32(3), 263–292.
Tung, F. (2011). Pay for bank performance: Structuring executive compensation for risk regulation. Northwestern University Law Review, 105(3), 1205–1252.
We are grateful to Gerhard Fehr and Alain Kamm for valuable discussions, and would like to thank Lara Vogt for her tireless effort in collecting the compensation data and for excellent research assistance. The authors appreciate helpful comments from Hui Chen, Ernst Fehr, Robert Göx, Stefan Hirth, Martin Brown, two anonymous referees, and finance seminar participants at the Department of Economics and Business Economics at the University of Aarhus, Denmark.
We gratefully acknowledge financial support from the Swiss Commission for Technology and Innovation (Grant 12740.1: "Relative Performance Evaluation and Executive Compensation"). We would also like to thank FehrAdvice AG for additional support.
The datasets used and/or analyzed in this study are available from the corresponding author upon reasonable request.
ETH Zurich, CER Center of Economic Research, Zürich, Switzerland
Dragan Ilić
University of Basel, Faculty of Business and Economics, Peter Merian-Weg 6, Basel, 4052, Switzerland
University of Zurich, Zürich, Switzerland
Sonja Pisarov
University of Geneva, the Geneva Finance Research Institute, Geneva, Switzerland
Peter S. Schmidt
DI and SP have written major parts of the paper. PS contributed to the empirical part of the paper. PS and SP were involved in the data collection and did the estimations. DI wrote letters to the editor and the referees. DI and PS have been working together closely in the revision of the paper. All authors have read and approved the final manuscript.
Correspondence to Dragan Ilić.
Ilić, D., Pisarov, S. & Schmidt, P. Preaching water but drinking wine? Relative performance evaluation in international banking. Swiss J Economics Statistics 155, 6 (2019) doi:10.1186/s41937-019-0032-8
Accepted: 05 April 2019
Relative performance evaluation | CommonCrawl |
Mass measurements in the vicinity of the rp-process and the νp-process paths with JYFLTRAP and SHIPTRAP (0808.4065)
C. Weber, V.-V. Elomaa, R. Ferrer, C. Fröhlich, D. Ackermann, J. Äystö, G. Audi, L. Batist, K. Blaum, M. Block, A. Chaudhuri, M. Dworschak, S. Eliseev, T. Eronen, U. Hager, J. Hakala, F. Herfurth, F.P. Heßberger, S. Hofmann, A. Jokinen, A. Kankainen, H.-J. Kluge, K. Langanke, A. Martín, G. Martínez-Pinedo, M. Mazzocco, I.D. Moore, J.B. Neumayr, Yu.N. Novikov, H. Penttilä, W.R. Plaß, A.V. Popov, S. Rahaman, T. Rauscher, C. Rauth, J. Rissanen, D. Rodríguez, A. Saastamoinen, C. Scheidenberger, L. Schweikhard, D.M. Seliverstov, T. Sonoda, F.-K. Thielemann, P.G. Thirolf, G.K. Vorobjev
Aug. 29, 2008 nucl-ex
The masses of very neutron-deficient nuclides close to the astrophysical rp- and νp-process paths have been determined with the Penning trap facilities JYFLTRAP at JYFL/Jyväskylä and SHIPTRAP at GSI/Darmstadt. Isotopes from yttrium (Z = 39) to palladium (Z = 46) have been produced in heavy-ion fusion-evaporation reactions. In total, 21 nuclides were studied and almost half of the mass values were experimentally determined for the first time: 88Tc, 90-92Ru, 92-94Rh, and 94,95Pd. For the 95Pd^m (21/2^+) high-spin state, a first direct mass determination was performed. Relative mass uncertainties of typically $\delta m / m = 5 \times 10^{-8}$ were obtained. The impact of the new mass values has been studied in νp-process nucleosynthesis calculations. The resulting reaction flow and the final abundances are compared to those obtained with the data of the Atomic Mass Evaluation 2003. | CommonCrawl |
Fuel dynamics after a bark beetle outbreak impacts experimental fuel treatments
Justin S. Crotteau, Christopher R. Keyes, Sharon M. Hood, David L. R. Affleck & Anna Sala
Fire Ecology volume 14, Article number: 13 (2018)
Fuel reduction treatments have been widely implemented across the western US in recent decades for both fire protection and restoration. Although research has demonstrated that combined thinning and burning effectively reduces crown fire potential in the few years immediately following treatment, little research has identified effectiveness of thinning and burning treatments beyond a decade. Furthermore, it is unclear how post-treatment disturbances such as a bark beetle outbreak affect fuel treatment effectiveness.
We evaluated differences in surface and canopy fuel characteristics and potential fire behavior metrics between fuel reduction treatments (no-action or control, burn-only, thin-only, thin+burn) implemented in ponderosa pine (Pinus ponderosa Lawson & C. Lawson)−Douglas-fir (Pseudotsuga menziesii [Mirb.] Franco)-dominated forests that were subsequently affected by a mountain pine beetle (Dendroctonus ponderosae Hopkins) outbreak after treatment. Experimental units were measured in 2002 (immediately following fuel treatment) and in 2016 (14 years after treatment and at least 4 years following a beetle outbreak). We found that beetle-altered thinning treatments (thin-only and thin+burn combined) had less fuel (i.e., 34% and 83% lower fine and coarse woody debris loading, respectively) and lower crown fire potential (i.e., 47% lower probability of torching and 42% greater crowning index) than corresponding unthinned treatments (control and burn-only). There was no post-beetle-outbreak effect of burning treatments (burn-only and thin+burn combined) on surface fuel loading, but burning reduced crown fire potential (i.e., 37% greater crowning index) over unburned units (control and thin-only) 14 years after treatment. Additionally, we determined the relative impacts of fuel treatments and the bark beetle outbreak on fuel and crown fire potential differences and found that bark beetle-caused tree mortality inflated differences between controls and thinned treatments (thin-only and thin+burn) for surface fuel loading and probability of torching, but diminished differences between these treatments for canopy fuel loading, canopy bulk density, and crowning index.
Despite the differential effects of bark beetle-caused tree mortality in the treatments, our study suggests that the effects of fuel treatments on mitigating crown fire potential persist even after a stand-transforming insect outbreak, especially when thinning and burning are combined.
Fuel reduction treatments have been widely used across the western US in recent decades, both for fire protection and for restoration. Although research has demonstrated that the combination of thinning and prescribed burning effectively reduces crown fire potential for some years after these treatments, few studies have identified the effectiveness of thinning and subsequent burning beyond a decade. Furthermore, it is unclear how post-treatment disturbances such as bark beetle outbreaks affect the effectiveness of these treatments.
We evaluated differences in surface and canopy fuel characteristics and in potential fire behavior metrics among fuel reduction treatments (no action or control, burn-only, thin-only, and thin+burn) implemented in forests dominated by ponderosa pine (Pinus ponderosa Lawson & C. Lawson) and Douglas-fir (Pseudotsuga menziesii [Mirb.] Franco) that were affected by a mountain pine beetle (Dendroctonus ponderosae Hopkins) outbreak after the treatments. Experimental units were measured in 2002 (immediately after treatment) and in 2016 (14 years after treatment and at least 4 years after the beetle outbreak). We found that the beetle-affected thinning treatments (thin-only and thin+burn) had less fuel (i.e., 34% less fine woody debris and 83% less coarse woody debris on the surface) and lower crown fire potential (i.e., 47% lower probability of torching and a 42% greater crowning index) than the corresponding unthinned treatments (control and burn-only). There was no post-outbreak effect of the burning treatments (burn-only and thin+burn combined) on surface fuel loading, although burning reduced crown fire potential (i.e., a 37% greater crowning index) relative to unburned units (control and thin-only) 14 years after treatment. Additionally, we determined the relative impacts of the fuel treatments and the beetle outbreak on differences in fuels and in crown fire potential. We found that beetle-caused tree mortality increased the differences between the control and the thinning treatments (thin-only and thin+burn) in surface fuel loading and probability of torching, but reduced the differences between these treatments in canopy fuel loading, canopy bulk density, and crowning index.
Despite the differential effects of beetle-caused tree mortality across the treatments, our study suggests that the effects of fuel treatments in mitigating crown fire potential persist even after the stands are transformed by a mountain pine beetle outbreak, especially when thinning and burning are combined.
A major goal of fuel reduction treatments is to mitigate potential wildfire behavior, especially to reduce the probability of crown fires in forests (Reinhardt et al. 2008). Many restoration efforts in fire-prone ecosystems also include fuel reduction strategies to reverse the effects of fire exclusion on forest fuels, structure, and composition (Brown et al. 2004, Fulé et al. 2012). However, treatment effects are ephemeral, as forest development in the years following treatment may compromise the effectiveness of treatments to mitigate crown fire behavior (Keyes and Varner 2006, Affleck et al. 2012, Tinkham et al. 2016). Wildland fire is not the only disturbance in many forests (Bebi et al. 2003, Bigler et al. 2005, Raffa et al. 2008, Kolb et al. 2016); fuel treatments may also interact with insect outbreaks and droughts, for example. Understanding how fuel treatments develop with time and in response to an exogenous disturbance such as a beetle outbreak has important implications for fuel treatment effectiveness and longevity in light of management objectives.
Fuel treatments in forests are often designed to decrease crown fire behavior (i.e., propensity for crown fire ignition or spread) by reducing canopy and surface fuels, and to increase resistance to fire (i.e., overstory survival) by favoring fire-tolerant species and removing competition. Fire exclusion and canopy ingrowth over the past century have elevated surface and canopy fuel loading in the western US (Parsons and DeBenedetti 1979, Covington and Moore 1994, Keeling et al. 2006). Increased surface and canopy fuel loading, in conjunction with a warmer and drier climate, has caused wildfires to increase in size and frequency, resulting in greater suppression costs to protect resources (Westerling et al. 2006, Flannigan et al. 2009, Miller et al. 2009). Crown fires threaten human safety and property, and, in forest types where crown fire is uncharacteristic, they also threaten ecological resilience (Savage and Nystrom Mast 2005). Fuel reduction is a proactive silvicultural treatment that alters potential fire behavior by removing and modifying fuels to encourage low-severity (low overstory mortality) surface fire instead of high-severity (high overstory mortality) crown fire. Fuel treatments are typically designed to reduce surface fuel loading and canopy densities; increase heights to canopy base; and retain large, fire-resistant trees (Agee and Skinner 2005, Hessburg et al. 2015). Although these goals can be attained with various silvicultural techniques, thinning and burning are the most typical means of fuel reduction. The relative effectiveness of thinning and burning to reduce crown fire behavior has been thoroughly studied immediately after treatment (Stephens and Moghaddas 2005, Harrington et al. 2007, Stephens et al. 2009, Fulé et al. 2012, McIver et al. 2012), generally highlighting that burning reduces surface fuels, thinning improves forest structure, and the combination of the two best reduces crown fire potential.
However, fuel treatments are only temporarily effective (Reinhardt et al. 2008, Martinson and Omi 2013). As treated areas age, regeneration, ingrowth, and residual trees grow into open space and increase surface and canopy fuel loading, causing concomitant increases in crown fire potential (Keyes and O'Hara 2002, Keyes and Varner 2006, Affleck et al. 2012). Although stimulated growth and regeneration are expected to follow treatment because of release from competition, it is still unclear how long treatments remain effective. Studies have identified that fuel treatments may be effective for a decade following treatment (Finney et al. 2005, Fernandes 2009, Jain et al. 2012, Stephens et al. 2012), but evidence beyond a decade is scant (though simulated by Tinkham et al. 2016). Understanding of treatment longevity is especially important when logistics and economics limit successive treatments.
Fire exclusion, past management actions, and a warming climate have increased the probability of crown fire in the West, and they have also been implicated in abetting recent insect outbreaks (Raffa et al. 2008, Bentz et al. 2010). Bark beetle outbreaks from the late 1990s to 2012 profoundly affected several forest types, killing trees over millions of hectares in western US forests (Hicke et al. 2016). Wildfire in beetle-impacted forests is a primary concern for managers because beetle-killed trees alter canopy and surface fuel profiles (Page and Jenkins 2007, Hicke et al. 2012), as foliage on killed trees transitions from green to red to gray phases on the tree, then progressively falls to the forest floor with accompanying limbs and stems (British Columbia Ministry of Forests 2004, Donato et al. 2013). The impact of beetle-caused tree mortality on potential fire behavior in unmanaged forest landscapes has been a controversial topic (Jenkins et al. 2008, 2014; Simard et al. 2011; Harvey et al. 2013; Hart et al. 2015; Kane et al. 2017).
Bark beetle outbreaks can also directly impact fuel treatments, further altering the fuel profile and fire hazard. The interactions between fuel treatments and beetle outbreaks remain largely uncharacterized. A few studies have identified that fuel treatments may moderate beetle-caused mortality (Fettig et al. 2010, Jenkins et al. 2014, Hood et al. 2016). Conversely, beetle-caused tree mortality may compromise fuel treatment effectiveness by altering surface fuel loading and vegetative competition, depending on time since outbreak initiation. In unmanaged units with beetle-killed trees that have lost their needles (i.e., gray phase), active crown fire behavior and torching probability may be reduced because of altered canopy fuel attributes (Simard et al. 2011), similar to a fuel treatment. But beetle-caused mortality eventually adds fuel to the surface profile, and can thereby increase surface flame lengths, spotting, and residence times, exacerbating crown fire potential in surviving trees (Moran and Cochrane 2012, Jenkins et al. 2014). In treated units that densify and become more prone to crown fire over time, bark beetle-caused mortality may either maintain fuel treatment effectiveness by prolonging the duration of low canopy fuel loading and reducing canopy connectivity, or it may render treatments useless to their original objective by increasing surface fire behavior and probability of torching. Knowledge of fuels and potential crown fire behavior between treated and untreated units is valuable for safety assessment, inventory, and planning, as well as for determining resilience of actively managed units to subsequent disturbance.
The purpose of this study is to understand how silvicultural fuel reduction and a subsequent bark beetle outbreak influence fuel and the potential for crown fire. This study is distinct because it examines fuel treatments that were tested by a beetle outbreak with high levels of tree mortality rather than by low, endemic levels of bark beetle-caused tree mortality. We utilized the northern Rocky Mountains study site of the Fire and Fire Surrogate Study (McIver and Weatherspoon 2010) as a balanced experimental design to contrast fuel treatments (no-action control, burn-only, thin-only, thin+burn) in a (Pinus ponderosa Lawson & C. Lawson)−Douglas-fir (Pseudotsuga menziesii [Mirb.] Franco)-dominated forest. Treatments were fully implemented by 2002, at least four years before a widespread mountain pine beetle (MPB; Dendroctonus ponderosae Hopkins) outbreak that overlapped all experimental units. We analyzed surface and canopy fuels data from 14 years after silvicultural treatment and at least 4 years after an MPB outbreak, with the specific objective to determine temporal effects of the combined silviculture and MPB outbreak on surface and canopy fuel characteristics. As a follow-up demonstration, we used an industry standard modeling approach (i.e., using the Fire and Fuels Extension to the Forest Vegetation Simulator [FFE-FVS]) to predict crown fire potential given post-treatment conditions. Our final objective was to determine the individual effects of silvicultural treatment and the MPB outbreak on surface and canopy fuel characteristics and crown fire potential. This study uniquely showcases the impact that time and beetle outbreak have on restorative fuel treatments, demonstrating how beetle-caused mortality interacts with the development of fuel characteristics and crown fire potential in treated versus untreated units.
This study was conducted at the University of Montana's Lubrecht Experimental Forest (46°53'N, 113°26'W), an 11 300 ha forest in western Montana's Blackfoot River drainage of the Garnet Range, USA. Study sites ranged in elevation from 1230 to 1388 m ASL, and were composed of Pseudotsuga menziesii−Vaccinium caespitosum Michx. and Pseudotsuga menziesii−Spiraea betulifolia Pall. habitat types (Pfister et al. 1977). This forest was generally composed of second-growth ponderosa pine (Pinus ponderosa Lawson & C. Lawson var. scopulorum Engelm.), Douglas-fir (Pseudotsuga menziesii [Mirb.] Franco var. glauca [Beissn.] Franco), with western larch (Larix occidentalis Nutt.) regenerated from heavy cutting in the early twentieth century. Soils are fine or clayey-skeletal, mixed, Typic Eutroboralfs, as well as loamy-skeletal, mixed, frigid, Udic Ustochrepts (Nimlos 1986).
Climate in this study area is maritime-continental. Annual precipitation is approximately 460 mm (PRISM Climate Group, Oregon State University, http://prism.oregonstate.edu), nearly half of which falls as snow. Mean temperatures range from − 6 °C in December and January to 17 °C in July and August. Average plant growing season is between 60 and 90 days. Historic fire frequency at Lubrecht prior to the twentieth century ranged from 2 to 14 years, with a mean composite fire return interval of 7 years (Grissino-Mayer et al. 2006).
Silvicultural activities and MPB "treatment"
A portion of the Lubrecht Experimental Forest was selected as a site for the Fire and Fire Surrogate Study, a multidisciplinary research project that aimed to quantify the short-term effects of restorative fuel treatments in frequent-fire forests across the US (McIver and Weatherspoon 2010, Weatherspoon 2000). The study provided a framework to examine the effects of common fuel treatments on treatment longevity, fuel development, and potential fire behavior. Treatments were implemented in each of three blocks of approximately 36 ha (9 ha per treatment unit), using a randomized factorial design: two levels of thinning (thinned and unthinned) by two levels of prescribed burning (burned and unburned), for a total of four treatment levels (no-action control, burn-only, thin-only, thin+burn), with one treatment replicate per block. Prescription intensity was designed to maintain 80% overstory tree survival given a wildfire during 80th percentile weather conditions (Weatherspoon 2000). Units were cut in 2001 and burned in 2002, creating twelve 9 ha experimental units. The silvicultural cutting prescription was a combined low thinning and improvement cut to a residual basal area of 11.5 m2 ha−1, favoring retention of ponderosa pine and western larch over Douglas-fir (although Douglas-fir maintained a significant presence in residual overstories; Metlen and Fiedler 2006). Burning treatments were spring burns with wind speeds less than 13 km h−1. Burns were generally low severity, with pockets of high severity in two of the thin+burn treatments. Fiedler et al. (2010) analyzed treatment effect on stand structure and short-term growth, and Stephens et al. (2009) summarized short-term woody fuel and potential fire behavior responses to treatment across western Fire and Fire Surrogate sites.
Approximately four growing seasons after treatment, a regional MPB outbreak in Montana began affecting all experimental blocks in the study (Gannon and Sontag 2010). Beetle-caused overstory mortality levels were high in the control and burn-only units from 2006 to 2012 (Hood et al. 2016), leading to comparable live ponderosa pine basal area across all treatments. After the outbreak, fuel loading, crown fire potential, productivity, and stand dynamics were no longer a pure effect of fuel treatments, but rather of the combination of fuel treatments and beetle-caused tree mortality. This beetle outbreak was an opportunity to assess a novel but increasingly common condition in the West: fuel treatment followed by a MPB outbreak. Therefore, the meaning of "treatment" in this study changes with measurement year. Before the MPB outbreak (prior to 2006), "treatment" refers to the silvicultural fuel treatment. After the MPB outbreak, "treatment" refers to the combination of fuel treatment and MPB-caused tree mortality.
Live trees were measured twice on permanently monumented plots in the Fire and Fire Surrogate Study. Trees were initially measured the year after treatment (measured in 1999 for control, 2001 for thin-only, 2002 for burn-only and thin+burn) on 10 randomly selected plot locations from 36 systematically located grid points within each treatment unit, for a total of 120 plots (Metlen and Fiedler 2006). Mature trees were measured on 0.04 ha circular subplots; for each mature tree with a diameter at breast height (dbh; 1.37 m) greater than 10.16 cm, species, dbh, and total height were recorded. Height to the base of live crown was recorded for all "leave" trees according to the silvicultural prescription (i.e., all trees retained in the thin-only and thin+burn treatments, but only a similar subset of trees in the control and burn-only). Height to the base of live crown was estimated based upon the average branch height of the compacted lower limit of the crown (US Forest Service 2005). Trees smaller than 10.16 cm dbh but taller than 1.37 m were measured on five 100 m2 subplots; trees between 0.10 m and 1.37 m tall were measured on twenty 1 m2 subplots. Mature trees were re-measured in 2014, and trees smaller than 10.16 cm dbh were re-measured in 2016 using the protocol outlined above.
Dead surface fuel loading was first measured the year after treatment (same years as above: 1999 for control, 2001 for thin-only, 2002 for burn-only and thin+burn) using a mixture of planar intercept and destructive sampling. A modified Brown's (1974) protocol was used to quantify 1-h (woody material <0.64 cm diameter), 10-h (0.64 cm ≤ diameter < 2.54 cm), 100-h (2.54 cm ≤ diameter < 7.62 cm), and 1000-h+ (diameter ≥7.62 cm) timelag classes. On each of the 36 grid points, two 15.2 m transects were established; 1-h and 10-h fuels were tallied for 1.8 m of the length, 100-h fuels were tallied for 3.7 m, and 1000-h+ fuel diameters were recorded along the entire transect lengths. Duff and litter depths were each measured along transects at 4.6 m and 10.7 m from plot center. In the thin-only and thin+burn treatments, 1-h, 10-h, litter, and duff materials were not measured along transects but destructively sampled on two 1 m2 quadrats. These materials were taken to the lab, oven dried, and weighed to determine loading by fuel component type. In 2016, we re-measured dead surface fuels using the original modified Brown's transects for all 36 grid points in all of the treatment units.
For simplicity's sake in this paper, the datasets will be referred to by the last year of measurement. Namely, "2002" for the collective immediate post-treatment dataset, and "2016" for the post-beetle-outbreak dataset. By the time of final measurement, units were in the post-epidemic, leaf-off, gray phase of the MPB rotation (Jenkins et al. 2008).
In addition to the 2002 and 2016 datasets, we supplemented our dataset with data measured and analyzed by Hood et al. (2016). Using the same measurement points described above, they measured MPB-caused tree mortality to the overstory (dbh >10.16 cm) between 2006 and 2012. We augmented our dataset with their data on plot-scale MPB outbreak severity (overstory stems ha−1).
Analytical and statistical methods
We calculated dead downed woody debris loading (Mg ha−1) according to Brown (1974), and used site-specific depth-to-loading regressions to calculate litter and duff loading (Mg ha−1) (M. Harrington, retired, USDA Forest Service, Missoula, Montana, USA, unpublished data). Dead downed woody fuel loading were grouped into two pools for analysis: fine woody debris (FWD) was composed of fuel less than 7.62 cm diameter (1-h, 10-h, and 100-h); coarse woody debris (CWD) was composed of sound fuel greater than or equal to 7.62 cm diameter (1000-h). Litter and duff layers (LD) were combined for analysis.
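As a rough illustration of how transect tallies become loads, the sketch below applies a simplified Van Wagner/Brown line-intersect estimator. It omits Brown's slope and particle-angle corrections and the site-specific constants actually used in this study, and the class quadratic mean squared diameter and specific gravity in the example are assumed values.

import math

def planar_intercept_load(tally, qmsd_cm2, specific_gravity, transect_m):
    """Downed woody fuel load (Mg/ha) from a planar-intercept tally.

    Simplified line-intersect estimator: slope and particle-angle
    corrections and the study's site-specific composite constants are
    omitted; qmsd_cm2 is the class quadratic mean squared diameter (cm^2).
    """
    sum_d2_m2 = tally * qmsd_cm2 * 1e-4          # sum of d^2, converted cm^2 -> m^2
    rho = specific_gravity * 1000.0              # particle density, kg/m^3
    load_kg_m2 = (math.pi ** 2 * rho * sum_d2_m2) / (8.0 * transect_m)
    return load_kg_m2 * 10.0                     # kg/m^2 -> Mg/ha

# Example: a 1-h tally of 12 pieces on a 1.8 m transect segment, assuming a
# squared mean diameter of 0.10 cm^2 and a specific gravity of 0.48.
print(round(planar_intercept_load(12, 0.10, 0.48, 1.8), 2))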
Surface fuels data and measured tree data were input into FFE-FVS (Dixon 2002, Rebain 2010) to calculate plot-scale canopy fuel characteristics and potential fire behavior for our two measurement years (2002, 2016). We estimated fire behavior using FFE-FVS methods, whereby measured dead fuel loadings are keyed to Albini's (1976) 13 original fire behavior fuel models, and the FFE-FVS algorithm selects and weights predicted fire behavior from one to two most similar models (Rebain 2010). This default FFE-FVS method produces consistent results to predictions based entirely on user-inputted, customized fire behavior fuel models in FFE-FVS (Noonan-Wright et al. 2014). Potential fire behavior was based on FFE-FVS's default "severe" fire weather scenario (4% 10-h fuel moisture, 21.1 °C ambient temperature, and 32.2 km h−1 wind speed at 6.1 m) instead of percentile (e.g., 80th or 95th) fire weather conditions to provide standardized analysis. Output gathered from FFE-FVS FUELOUT and POTFIRE reports included canopy fuel loading (CF; Mg ha−1), canopy base height (CBH; m), canopy bulk density (CBD; kg m−3), potential fire behavior (fire type and surface flame length, m), and crown fire potential (probability of torching, PT, %; and crowning index, CI, km h−1) calculations (see Rebain 2010 for further variable descriptions). We note that the CF loading from the FUELOUT report includes foliage plus branchwood <7.62 cm diameter, but FFE-FVS only uses foliage plus half of all materials <0.64 cm to calculate CBH and CBD (Rebain 2010).
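The canopy metrics reported by FFE-FVS can be approximated with the running-mean logic described above. The sketch below is not FFE-FVS's implementation: per-tree available canopy fuel is taken as a given input rather than derived from the model's allometries, and the 4.5 m window and 0.011 kg m−3 threshold are taken from the table footnotes.

import numpy as np

def canopy_profile_metrics(trees, plot_area_m2, bin_m=0.3, window_m=4.5,
                           cbh_threshold=0.011):
    """Approximate canopy bulk density (CBD, kg/m^3) and canopy base height (CBH, m).

    Each tree's available canopy fuel (kg, assumed as input) is spread
    uniformly from crown base to tip, the vertical profile is smoothed with a
    4.5 m running mean, CBD is the profile maximum, and CBH is the lowest
    height where the smoothed density exceeds 0.011 kg/m^3.
    """
    top = max(h for h, _, _ in trees)
    heights = np.arange(0.0, top + bin_m, bin_m)
    density = np.zeros_like(heights)                 # kg of fuel per m^3 of canopy space
    for height, crown_base, fuel_kg in trees:
        crown_len = max(height - crown_base, bin_m)
        in_crown = (heights >= crown_base) & (heights <= height)
        density[in_crown] += fuel_kg / (crown_len * plot_area_m2)
    window = max(int(round(window_m / bin_m)), 1)
    smoothed = np.convolve(density, np.ones(window) / window, mode="same")
    cbd = float(smoothed.max())
    above = heights[smoothed > cbh_threshold]
    cbh = float(above.min()) if above.size else float("nan")
    return cbd, cbh

# Example: three trees on a 0.04 ha (400 m^2) plot, given as
# (total height m, crown base height m, canopy fuel kg); values are hypothetical.
trees = [(22.0, 8.0, 35.0), (18.0, 6.0, 25.0), (12.0, 2.0, 15.0)]
print(canopy_profile_metrics(trees, plot_area_m2=400.0))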
We recognize that fire prediction models such as FFE-FVS have shortcomings and are simplified representations of actual fuel and weather conditions, especially in bark beetle-impacted forests. Cruz and Alexander (2010) discussed these limitations and found that models typically underpredict fire rate of spread in the forest canopy, and Fernandes (2009) identified that simulation for treatment comparison may produce conflicting results with actual fire behavior. There is no universally accepted alternative to modeling potential fire behavior over large spatial extents, although some researchers have developed empirical crown fire transition models with coarse data inputs (e.g., Cruz et al. 2005) or novel approaches to address FFE-FVS's shortcomings (e.g., Nelson et al. 2017). Computational fluid dynamics models have been used to examine potential fire behavior in beetle-impacted forests (Sieg et al. 2017), but these models are not feasible to use on the large, 9 ha treatment units. Despite these shortcomings, FFE-FVS is the standard fuel and fire assessment tool used by vegetation managers, planners, and researchers because it extends fuels data to more meaningful management objectives. Therefore, we used FFE-FVS as a tool to demonstrate treatment-specific outcomes for mitigating fire hazard as it is commonly assessed and given model assumptions. This is a useful means for contrasting treatment alternatives (Johnson et al. 2011); however, we caution users that these predictions may not reflect actual fire behavior, a critical consideration for actual management applications.
We used nested ANOVA to investigate treatment influence on eight fuel and crown fire potential response variables in 2002 and 2016. In this study, plot is nested within experimental unit, which is nested within block. We performed this analysis using the anova.lme function in R's nlme package (Pinheiro et al. 2016, R Core Team 2016). ANOVA models had the form:
$$ \hat{y}_{ijkl} = \mu + \alpha_i + \beta_j \times \gamma_k + \varepsilon_{ijk} + \delta_{ijkl} $$
where \( \widehat{y} \) is the plot-scale response variable (2002 and 2016 FWD loading, CWD loading, LD loading, CF loading, CBH, CBD, PT, and CI), μ is the grand mean, αi is the block effect (levels 1 to 3), βj is the prescribed burn effect (levels not burned and burned), γk is the thinning effect (levels not thinned and thinned), εijk is the experimental unit error term, and δijkl is the residual error term associated with plots. Although the block effect would ideally be treated as a random effect, we considered it a fixed effect in this model because there were only three factor levels; therefore, only experimental unit was treated as a random effect.
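The nested ANOVA described above was fit with R's nlme; a roughly equivalent Python sketch on simulated data is shown below, with a fixed block effect, crossed burn and thin effects, and a random intercept for experimental unit. The data-generating values are arbitrary and only mimic the design.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical plot-level data mimicking the design: 3 blocks x 4 treatments
# (burn x thin factorial), 10 plots per experimental unit.
rng = np.random.default_rng(3)
rows = []
for block in range(3):
    for burn in (0, 1):
        for thin in (0, 1):
            unit = f"b{block}_burn{burn}_thin{thin}"
            u = rng.normal(0, 1)                      # unit-level random effect
            for plot in range(10):
                y = 10 + 2 * block - 3 * burn + 4 * thin + u + rng.normal(0, 2)
                rows.append(dict(block=block, burn=burn, thin=thin, unit=unit, y=y))
df = pd.DataFrame(rows)

# Mixed model analogous to the equation above: fixed block and burn x thin
# effects, a random intercept for experimental unit, and plot-level residuals.
model = smf.mixedlm("y ~ C(block) + burn * thin", df, groups=df["unit"]).fit()
print(model.summary())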
Next, we used linear mixed effects regression to determine the effect that treatment has on the development of fuel loading and crown fire potential over time (i.e., change in value from 2002 to 2016) using the lme function in nlme. Regression models had the same structure as the nested ANOVA model, except that \( \widehat{y} \) is the change in plot-scale response variable (FWD, CWD, LD, CF, CBH, CBD, PT, and CI) from 2002 to 2016.
Finally, we conducted mediation analyses to parse out the effects of silvicultural manipulation and the MPB outbreak on fuel and crown fire potential metrics. Mediation analysis allows insight into direct treatment effects and the indirect effects mediated by MPB, as, by 2016, the fuel complex in the treatments was a function of both the fuel treatment and the MPB outbreak. The goal was to characterize the direct effect of the silvicultural treatments (X) on eight different 2016 fuel and crown fire potential metrics (Y), the indirect effect of the treatments on the metrics as mediated by the outbreak (M), and the total effect of the treatments on the metrics given mediation by the outbreak (Fig. 1; Baron and Kenny 1986, MacKinnon et al. 2007). Coefficients are derived by fitting two statistical models: Y = f(X, M), and M = f(X). The direct effect is quantified as the regression coefficient of the relationship between X and Y (leg c of Fig. 1), the indirect effect is quantified as the product of the relationships between X and M (leg a of Fig. 1) and M and Y (leg b of Fig. 1), while the total effect is the sum of direct and indirect effects. We determined the relationships a, b, and c using linear mixed effects regression with the same nesting structure characterized in our ANOVA models. Since we wanted to determine the effect that treatment had on mediation, we contrasted each of the active treatment effects with the control (i.e., burn-only vs. control, thin-only vs. control, thin+burn vs. control). All variables were standardized for interpretation of effect size across fuel and crown fire potential metrics, with effect sizes near zero meaning no difference from the control treatment. The bottom panel of Fig. 1 illustrates standardized coefficients for CBH as an example. We utilized a non-parametric bootstrap resampling routine (N = 1000 replications) to determine if direct, indirect, and total effects were significantly different from zero.
Conceptual diagram of mediation analysis testing the effect of fuel treatment (versus control) on fuel and crown fire potential as mediated by mountain pine beetle outbreak in the northern Rocky Mountains' Fire and Fire Surrogate Study, 2002 to 2016. Upper panel illustrates overall conceptual framework, with the direct effect as the solid arrow and the indirect effect represented by the dashed-arrow pathway. As an example, the lower panel illustrates regression coefficients linking treatment (Burn-only, Thin-only, Thin+Burn) to canopy base height (CBH2016) with the number of trees killed by mountain pine beetles (n killed) representing outbreak severity.
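The mediation logic (legs a, b, and c, with the indirect effect a × b and a bootstrap for inference) can be sketched as follows. This simplified version uses plain OLS on standardized variables and ignores the block/unit nesting that the actual analysis handled with mixed-effects models; the data are hypothetical.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def mediation_effects(df, n_boot=500, seed=0):
    """Direct, indirect, and total effects of treatment (X) on outcome (Y),
    mediated by beetle-kill severity (M), with a nonparametric bootstrap."""
    rng = np.random.default_rng(seed)
    z = (df - df.mean()) / df.std()                      # standardize all columns

    def effects(d):
        a = smf.ols("M ~ X", d).fit().params["X"]        # leg a: X -> M
        fit_y = smf.ols("Y ~ X + M", d).fit()
        b, c = fit_y.params["M"], fit_y.params["X"]      # legs b and c
        return c, a * b, c + a * b                       # direct, indirect, total

    point = effects(z)
    boot = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(z), len(z))            # resample plots with replacement
        boot.append(effects(z.iloc[idx].reset_index(drop=True)))
    ci = np.percentile(np.array(boot), [2.5, 97.5], axis=0)
    return point, ci

# Hypothetical data: X = thinned (0/1), M = beetle-killed stems, Y = 2016 CBH.
rng = np.random.default_rng(4)
X = rng.integers(0, 2, 120)
M = 50 - 30 * X + rng.normal(0, 10, 120)
Y = 2.0 + 3.0 * X - 0.02 * M + rng.normal(0, 1, 120)
print(mediation_effects(pd.DataFrame({"X": X, "M": M, "Y": Y})))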
In all analyses, treatment effects were considered to have strong evidence of significance at the 95% confidence level, and marginal evidence of significance at the 90% level. We inspected residuals from nested ANOVA and linear mixed effects regression models of response state and change for constant variance across treatments using Levene's test of homoscedasticity. When residuals were heteroscedastic, we applied treatment level variance functions using R's varIdent function. Furthermore, we applied square root transformations on responses that showed increasing residual variance with predicted values.
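For reference, the homoscedasticity check can be reproduced with SciPy's implementation of Levene's test on residuals grouped by treatment, as in the hypothetical sketch below.

import numpy as np
from scipy.stats import levene

# Hypothetical model residuals grouped by the four treatments; Levene's test
# checks whether residual variance is constant across treatment groups.
rng = np.random.default_rng(5)
residuals = {
    "control":   rng.normal(0, 1.0, 30),
    "burn_only": rng.normal(0, 1.0, 30),
    "thin_only": rng.normal(0, 2.0, 30),   # deliberately more variable
    "thin_burn": rng.normal(0, 1.0, 30),
}

stat, p = levene(*residuals.values())
print(f"Levene W = {stat:.2f}, p = {p:.3f}")
# A small p-value would motivate treatment-specific variance functions
# (the paper used R's varIdent) or a square-root transformation of the response.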
When expedient for summarizing broad patterns and concise interpretation, we grouped treatments according to the crossed factorial design nomenclature. Thinned or thinning refers to thin-only and thin+burn treatments, while unthinned refers to control and burn-only. Burned or burning refers to burn-only and thin+burn treatments, while unburned refers to control and thin-only.
Thinning was the dominant influence on overstory stand structure and development across measurement years (Table 1), influencing the fuel and fire behavior responses in this study. Thinned units had 67% lower stem densities than unthinned units in 2002 (79% and 60% lower by basal area and stand density index, respectively), but all density metrics were more similar across treatments by 2016. Although differences between thinned and unthinned units in stem density, basal area, and stand density index abated over time by 33%, 63%, and 54%, respectively, the contrast between thinned and unthinned quadratic mean diameters increased by 161% over the measurement period as large trees in the unthinned units were killed by MPB.
Table 1 Mean (and standard error) stand structure metrics by treatment following the Fire and Fire Surrogate Study's fuel treatments in 2002 (immediately after treatment) and in 2016 (following the 2005 to 2012 regional mountain pine beetle outbreak). Stand density index was calculated according to Reineke (1933) and is expressed as the equivalent number of 25.4 cm trees per hectare
Surface fuel loading
Treatment effect on FWD loading in 2002 (the year following silvicultural treatment) followed an expected pattern (Fig. 2). Burning, thinning, and their interaction all had significant effects on FWD (Table 2; P ≤ 0.024). Burning reduced FWD loading by 63% (compared to unburned) and thinning increased FWD loading by 250% (compared to unthinned). In 2016 (at least 4 years following MPB-caused mortality), only thinning had a significant effect, with 34% less FWD in thinned than unthinned treatments (Table 2; P = 0.028). Whereas in 2002, thin+burn and control loading were no different, in 2016 these two were the only individual treatments that were statistically distinct (Fig. 2). Unthinned units significantly accumulated fuel between 2002 and 2016 (Table 3; P ≤ 0.007), but FWD in the thin-only treatment decreased (P = 0.010) and did not change in the thin+burn treatment (P = 0.490).
Surface fuel loads (mean and standard error) by treatment following the northern Rocky Mountains' Fire and Fire Surrogate Study's fuel treatments in 2002 (immediately after treatment) and in 2016 (following the 2005 to 2012 regional mountain pine beetle outbreak). C = Control, BO = Burn-only, TO = Thin-only, TB = Thin+Burn. Fine woody debris loading includes surface wood <7.62 cm diameter; coarse woody debris loading includes sound surface wood ≥7.62 cm diameter. Letters above bars denote pairwise differences between treatments (lowercase = 2002 differences, uppercase = 2016 differences); letters are not shown when ANOVA tests were not significant
Table 2 Nested ANOVA of fuel and crown fire potential by treatment following the Fire and Fire Surrogate Study's fuel treatments in 2002 (immediately after treatment) and in 2016 (following the 2005 to 2012 regional mountain pine beetle outbreak). NumDF and DenDF are the numerator and denominator degrees of freedom used in ANOVA, respectively. a Fine woody debris (surface wood <7.62 cm diameter); b coarse woody debris (sound surface wood ≥7.62 cm diameter); c litter and duff (litter and duff layers); d canopy fuels (foliage and materials <7.62 cm diameter); e canopy base height (lowest height where canopy bulk density exceeds 0.011 kg m−3); f canopy bulk density (maximum canopy fuel mass per volume given 4.5 m running mean); g probability of torching (probability of surface fire ascending into crowns given Monte Carlo simulation); h crowning index (6.1 m wind speed required to cause active crown fire)
Table 3 Fuel and crown fire potential change by treatment between 2002 (immediately following treatment) and 2016 (following the 2005 to 2012 regional mountain pine beetle outbreak) at the Fire and Fire Surrogate Study. Estimates were derived and tested against zero using linear mixed effects models. a Fine woody debris (surface wood <7.62 cm diameter); b coarse woody debris (sound surface wood ≥7.62 cm diameter); c litter and duff (litter and duff layers); d canopy fuels (foliage and materials <7.62 cm diameter); e canopy base height (lowest height where canopy bulk density exceeds 0.011 kg m−3); f canopy bulk density (maximum canopy fuel mass per volume given 4.5 m running mean); g probability of torching (probability of surface fire ascending into crowns given Monte Carlo simulation); h crowning index (6.1 m wind speed required to cause active crown fire)
CWD loading was similar across treatments in 2002 (Fig. 2; Table 2), but by 2016, CWD loading was lower in thinned than unthinned treatments (Table 2; P = 0.002). Variability of CWD loading among control units (i.e., standard deviation) was 15 times greater than among treated units because of one unit with particularly high CWD loading. Overall, thinned treatment CWD loading was 83% less than in the unthinned treatments. Similar to trends observed in FWD dynamics, unthinned treatments accumulated CWD from 2002 to 2016 (Table 3; P ≤ 0.057), whereas CWD in the thin-only treatment decreased over time (Table 3; P = 0.015) and the thin+burn treatment did not change (P = 0.601).
LD loading varied by treatment in 2002 (Fig. 2; Table 2). Loads were 59% lower in burned treatments than unburned treatments (P = 0.007). Although ANOVA results indicated a significant burning and thinning interaction (P = 0.038), pairwise comparisons show LD loading assembles into two main treatment groups: burned and unburned. In 2016, LD loading did not vary by treatment (Table 2). Burned treatments significantly accumulated LD loading between 2002 and 2016 (Table 3; P < 0.006), but LD in unburned treatments either decreased (control; P = 0.083) or did not change (P = 0.9288).
Canopy fuel characteristics
CF loading differed by thinning in 2002 (Fig. 3; Table 2). The immediate effect of thinning was a 58% reduction in CF versus the unthinned treatments (P = 0.001). In 2016, there was slight evidence of both thinning and burning effects (Table 2; P = 0.096 and 0.060, respectively). These effects were relatively minor on their own, but when combined, caused the thin+burn treatment to have 43% less CF than the control. Thinned treatment CF loading increased between 2002 and 2016 (P ≤ 0.018), but unthinned treatment loading decreased (P ≤ 0.049).
Canopy fuel characteristics (mean and standard error) by treatment following the northern Rocky Mountains' Fire and Fire Surrogate Study's fuel treatments in 2002 (immediately after treatment) and in 2016 (following the 2005 to 2012 regional mountain pine beetle outbreak). C = Control, BO = Burn-only, TO = Thin-only, TB = Thin+Burn. Canopy fuel loading include foliage and materials <7.62 cm diameter; canopy base height is the lowest height at which canopy bulk density exceeds 0.011 kg m−3; canopy bulk density is the maximum canopy fuel mass per volume given 4.5 m running mean. Letters above bars denote pairwise differences between treatments (lowercase = 2002 differences, uppercase = 2016 differences); letters are not shown when ANOVA tests were not significant.
CBH in 2002 varied due to thinning (Fig. 3; Table 2), whereby mean CBHs were 130% higher in thinned treatments than unthinned (P < 0.001). By 2016, CBH varied by both burning and thinning (Table 2). Burned treatments were associated with 105% greater CBHs than unburned (P = 0.008), and thinned treatments had 79% greater CBHs than unthinned treatments (P = 0.008). There was slight evidence that interaction amplified these effects (P = 0.094) such that the thin+burn treatment was 3.2 times greater than the control. CBH dropped significantly in the thin-only treatment from 2002 to 2016 (Table 3; P = 0.003) as ladder fuel ingrowth densified the canopy from below, but CBH did not change in the remaining treatments (P ≥ 0.350).
Immediately after treatment (i.e., in 2002), CBD in thinned treatments was 55% less than in unthinned treatments (Fig. 3; Table 2; P < 0.001). In 2016, CBDs were more similar among treatments than in 2002, but still varied significantly by treatment (Table 2). Burned treatments had 33% lower CBDs than unburned treatments (P = 0.0314), and thinned treatments had 46% lower CBDs than unthinned treatments (P = 0.004). Although change in CBD between 2002 and 2016 appears to vary by thinning (reduction in unthinned due to MPB-caused tree mortality and accumulation in thinned due to ingrowth), reduction was only significant for the burn-only treatment (Table 3; P = 0.040) and accumulation for the thin-only treatment (P = 0.008).
Surface and crown fire potential
The FFE-FVS software keyed at least one fire behavior fuel model to each plot, but fuel model 8 ("closed timber litter") was the most commonly assigned model across 2002 treatments (Table 4). Fuel model 8 was also the most assigned model in thinned treatments in 2016, but the unthinned treatments were better characterized by fuel model 10 ("timber [litter and understory]"), with occasional assignments of fuel model 12 ("medium logging slash"). Predicted surface fire flame length was greatest in the thin-only treatment and lowest in the burn-only treatment in 2002, but the thin-only treatment had the lowest predicted flame lengths in 2016. In 2002, crown fire (passive type) was only predicted for the burned treatments (13% of plots). Passive crown fire was predicted for all treatment types in 2016. However, the control had the greatest propensity by far for crown fire, whether active, passive, or conditional.
Table 4 Dominant fuel models and potential fire behavior by treatment following the Fire and Fire Surrogate Study's fuel treatments in 2002 (immediately after treatment) and in 2016 (following the 2005 to 2012 regional mountain pine beetle outbreak). Fuel models and fire behavior were determined using the Fire and Fuels Extension of the Forest Vegetation Simulator (FFE-FVS). Predicted fire behavior is based on standard severe fire weather conditions (FFE-FVS "severe" category: 4% 10-h fuel moisture, 21.1 °C, and 32.2 km h−1 6.1 m wind speed). Fuel model numbers refer to those developed by Albini (1976). Predicted fire type refers to categories outlined by Scott and Reinhardt (2001)
PT in 2002 depended on both burning and thinning treatments (Fig. 4; Table 2). Burning reduced PT by 64% (P = 0.003) and thinning reduced it by 34% (P = 0.007). Probability of torching in 2016 was only dependent on thinning (Table 3). In this case, unthinned treatment PT was 111% greater than that of thinned treatments (P = 0.005). Between 2002 and 2016, PT in unthinned treatments significantly increased with inputs to the surface fuel loading (Table 3; P ≤ 0.036), but PT did not significantly change in thinned treatments (P ≥ 0.124).
Crown fire potential (mean and standard error) by treatment following the northern Rocky Mountains' Fire and Fire Surrogate Study's fuel treatments (completed in 2001) and regional mountain pine beetle outbreak (2005 to 2012). C = Control, BO = Burn-only, TO = Thin-only, TB = Thin+Burn. Probability of torching is the probability of surface fire ascending into crowns given Monte Carlo simulation; crowning index is the 6.1 m wind speed required to cause active crown fire. Letters above bars denote pairwise differences between treatments (lowercase = 2002 differences, uppercase = 2016 differences); letters are not shown when ANOVA tests were not significant.
CI differed by thinning in 2002 (Fig. 4; Table 2), which was expected because thinning reduced CBDs. More specifically, thinning resulted in 95% greater CIs than the unthinned treatments. By 2016, CI differed by both burning and thinning (Table 2). Burned units had 37% greater CI than unburned units (P = 0.037) and thinned units had 42% greater CI than unthinned units (P = 0.025). Although the ANOVA interaction term was not significant, pairwise differences revealed that the thin+burn treatment had 48% to 89% greater CI than the remaining treatments. CIs remained relatively constant between 2002 and 2016 except in the thin-only treatment, where CI dropped significantly as ingrowth densified the canopy (Table 3; P = 0.019).
Fuel and crown fire potential mediation
Differences in fuel loading between control and thinning treatments were mediated by the MPB outbreak, but the outbreak did not affect differences in loading between the control and burn-only treatments (Fig. 5). The significantly non-zero indirect effects in the thin-only and thin+burn for FWD, CWD, and CF responses demonstrate that the MPB outbreak mediated differences between control and thinned treatments for three of our four fuel loading metrics. Substantial (standardized effect size > 0.05) indirect effects were consistent with total effect direction for all responses except for CF, meaning that the MPB outbreak generally increased contrasts between control and treatments, but decreased CF differences between control and thinned treatments.
Mediation analysis treatment effect sizes (vs. Control) on 2016 fuel (top panel) and crown fire potential (bottom panel) after the northern Rocky Mountains' Fire and Fire Surrogate Study. BO = Burn-only, TO = Thin-only, TB = Thin+Burn. Total effect represents observed or calculated treatment effect, indirect effect represents influence of treatment mediated by mountain pine beetle outbreak on total effect, and direct effect (total minus indirect) represents stand-alone treatment effect. Effect significance at 95% confidence level is shown by capital letter above bars (T = total, I = indirect, D = direct); lowercase letters signify significance at 90% confidence. Response variables include: FWD = fine woody debris, CWD = coarse woody debris, LD = litter and duff, CF = canopy fuels, CBH = canopy base height, CBD = canopy bulk density, PT = probability of torching, CI = crowning index.
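For reference, the effect decomposition in Fig. 5 follows the standard mediation framework (Baron and Kenny 1986; MacKinnon et al. 2007). With notation that is ours rather than the article's — \(a\) for the treatment-to-mediator path (treatment effect on MPB-caused tree mortality), \(b\) for the mediator-to-response path, \(c\) for the total effect, and \(c'\) for the direct effect — the plotted quantities relate as

$$c = c' + ab, \qquad \text{indirect effect} = ab, \qquad \text{direct effect} = c' = c - ab.$$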
Consistent with fuel loadings, differences in CBH, CBD, and crown fire potential metrics between control and thinning treatments depended on the MPB outbreak, but differences between control and burn-only did not (Fig. 5). Thin-only indirect effects were significantly non-zero for CBD, PT, and CI, while thin+burn indirect effects were only significant for CBD and PT. Thus, the MPB outbreak did not affect CBH differences between control and treatments, nor did it affect any response differences between control and burn-only. Indirect effects were consistent with total effect directions only for PT; they were inconsistent with total effects for CBH, CBD, and CI. The magnitude and direction of indirect effects on PT illustrate that most of the difference between the control and thinned treatments was due to the MPB outbreak. Conversely, the MPB outbreak obscured differences in CBH, CBD, and CI between control and thinned treatments.
This study characterized fuel development and crown fire potential dynamics 14 years after initial treatment and at least 4 years following an MPB outbreak. In general, we observed that fuel loading was elevated after the outbreak and ingrowth, and potential for crown fire was greatest in the untreated control, intermediate in burn-only and thin-only, and lowest in thin+burn (Table 5).
Table 5 Summary of fuel load and crown fire potential differences by treatment at the Fire and Fire Surrogate Study. Fuel and crown fire potential attributes: FWD = fine woody debris load, CWD = coarse woody debris loading, LD = litter and duff loading, CF = canopy fuel loading, CBH = canopy base height, CBD = canopy bulk density, PT = probability of torching, CI = crowning index
Despite the subsequent biotic disturbance, the primary management objective in these treated units is still to resist crown fire. Interaction between disturbances such as a beetle outbreak and fire has been a growing concern (Bebi et al. 2003, Schoennagel et al. 2012, Jenkins et al. 2014, Kane et al. 2017), but other studies have not specifically considered this interaction within treated areas. To more satisfactorily address the drivers and outcomes of combined treatment and MPB-caused tree mortality effects, we discuss the mediation analysis prior to the assessment of fuel and crown fire potential dynamics.
Are differences driven by silvicultural treatment or MPB?
To effectively interpret treatment outcomes we must begin with the relationship between the two "treatment" components in 2016: silvicultural fuel reduction and MPB-caused tree mortality. Studies have shown endemic (non-outbreak) beetle populations can cause additional mortality in response to burning treatments, killing injured or less vigorous trees that may have survived burning alone (e.g., Larsson et al. 1983, Negrón and Popp 2004, Fettig et al. 2010). On our sites, Six and Skov (2009) identified that, by 2008, three bark beetle species (Douglas-fir beetle [Dendroctonus pseudotsugae Hopkins], pine engraver [Ips pini Say], and western pine beetle [Dendroctonus brevicomis LeConte]) increased in abundance because of burning treatments. MPB population size did not respond to treatment, but successful MPB attacks were more prevalent in unthinned treatments. By 2012, the regional MPB outbreak had caused high overstory mortality in the control (50%) and burn-only (39%) treatment units, leading to similar live ponderosa pine basal areas across all treatments (Hood et al. 2016).
We applied these MPB-caused tree mortality data to our mediation analysis, confirming that fuel treatment and MPB-caused mortality were inextricably linked: the number of overstory trees killed had a strong negative association with thinning and a slight positive association with burning. In addition to characterizing the combined effects of these "treatments," our analysis ascertains the relative effects of silviculture and MPB on forest fuels and crown fire potential, including treatment-outbreak agreement or antagonism (Fig. 5; Table 5).
A number of studies have shown that, in unmanaged units, MPB-caused mortality alters forest fuel profiles (summarized in Jenkins et al. 2012, but see Simard et al. 2011). Our mediation analysis illustrates that CF loading after the outbreak was significantly lower in thin+burn than control despite high MPB-caused tree mortality in control units, and FWD loading was lower in thin+burn units because of the high MPB-caused tree mortality. Additionally, MPB-caused tree mortality inflated differences between thinned and control units in FWD and CWD pools, but reduced existing differences in the CF pool. These linked fuel loadings demonstrate that MPB caused fuel transfer from overstory to surface pools, and that although MPB-caused tree mortality partially masked or diminished differences between thinned and unthinned canopies, unthinned canopy fuels translocated to the ground inflated surface fuel loading beyond thinned treatments. The nature of these differences is also manifested in the divergence of assigned fire behavior fuel models by treatments. FFE-FVS assigned slash group fire behavior fuel models ("medium logging slash" and "heavy logging slash") to characterize unthinned surface fuel profiles, which is expected to make potential fire behavior more volatile, increasing soil heating and belowground severity.
Although MPB-caused tree mortality inflated surface fuel differences between thinned and control units, our analysis of fire potential indicated MPB only inflated differences in PT between those treatments. Studies have shown that an MPB outbreak can exacerbate fire behavior, depending on time since disturbance and metrics analyzed (synthesized in Jenkins et al. 2014, but see Harvey et al. 2013). But beyond beetle-caused inputs to surface fuels, beetles actually thin out forest canopies and eventually moderate potential crown fire spread, akin to the silviculturist's fuel treatments. Our analysis shows that MPB-caused tree mortality—which was greater in unthinned units—offsets the initial positive effect of thinning on canopy fuel (CBD) and therefore crown fire potential (CI). However, since this offset effect is minor (indirect effect magnitude was smaller than direct effect magnitude), it demonstrates that "natural thinning" by MPB neither reduces crown fire potential as well as active management nor hinders the effective longevity of silvicultural thinning. PT, on the other hand, incorporates potential surface fire behavior where the canopy-based metrics do not. Silviculture and MPB effects had consistent influence on the thinning versus control contrast for this metric because silvicultural thinning reduces PT but MPB-caused tree mortality increases it by compounding ladder fuels with heavy surface loading, inflating the difference (in fuel loading and longevity) between thinned and unthinned units.
We found no fuel or crown fire potential differences (treatment or MPB caused) between burn-only and control treatments. This is likely because the prescribed burning treatment was mostly kept to low fire intensities to limit overstory mortality. Fire effects were also limited because many trees, including successional species, had grown to fire-tolerant sizes in the fire-excluded twentieth century. The burning effect may have been muddled by MPB if the treatment was more severe and had more strongly influenced successful beetle attacks (Wallin et al. 2003) or reduced overstory density as much as thinning treatments. When combined with thinning in the form of the thin+burn treatment, however, we did observe a minor effect of MPB on burned treatments: MPB effects in the thin+burn treatments were always slightly less than in the thin-only treatment, even to the point of being absent in the CI contrast. This is because treatment more poorly differentiated MPB-caused tree mortality between thin+burn and control than between thin-only and control (leg a of Fig. 1). Thus, the MPB outbreak obfuscated the thin-only versus control contrast on crown fire spread (CI), but did not impact the thin+burn contrast, suggesting that differences were mostly due to prescribed burning treatment.
Combined effects: the state of treatment in 2016
Disentangling the silviculture and MPB effects on fuel treatments in 2016 is useful for understanding the relative importance of these factors on the fuel development process and on change in crown fire potential, but land managers may be more concerned with the resultant state of treatment fuel loading and crown fire potential. In this sense, treatment effectiveness at maintaining low crown fire potential may be a more practically important matter than effect mediation.
We found that thinned units had less fuel and lower crown fire potential than unthinned units in 2016. One major difference between 2002 and 2016 thinning effects was the radical increase of surface fuel (FWD and CWD) in unthinned units, which likely would not have happened without the MPB outbreak. These surface fuels are directly tied to increased surface fire behavior and potential for torching. Although thinning was a statistically significant predictor of PT in both 2002 and 2016, 2016 probabilities were higher in unthinned (50%) than thinned units and therefore more significant, and undesirable, in practical terms. This condition is typical of unmanaged second-growth ponderosa pine−Douglas fir forests impacted by beetles throughout the Interior West and reflects that torching and crowning fire behavior may be commonplace in many unmaintained, post-outbreak units (Jain et al. 2012).
We also found that burned units had less canopy fuel and lower probabilities of sustaining crown fire than unburned units in 2016. Interestingly, we observed delayed effects of burning on canopy fuel characteristics and crown fire potential (CBH, CBD, CI) that were not present in 2002. This delay is likely due to secondary fire-induced mortality, namely pre-MPB-outbreak attacks by other beetles on trees weakened by prescribed fire, as documented by Six and Skov (2009). Despite these two beetle episodes (endemic, then epidemic) in the burn-only treatment, and despite the control being only slightly different from the burn-only in all of the 2016 fuel and crown fire potential facets, FFE-FVS predicted "surface" type fire only 40% of the time in the control versus 70% in the burn-only. Fire (and fire modeling) is very sensitive to thresholds in fuels, weather, and topography. Although the potential fire behavior metrics that this study presents are more valuable for comparative analysis than absolute characterization, they illustrate that prescribed fire may have only mild effects on measured vegetation and fuels structure, but still reduce potential fire behavior below important crowning thresholds.
The combination of thinning and burning is clearly the most effective at sustaining low crown fire potential in light of post-treatment growth and subsequent disturbance. The 2016 thin+burn was superior to the burn-only and thin-only treatments for three main reasons. First, it reduced surface and canopy fuels. Combined thinning and burning most effectively reduces fuels because thinning removes substantial tree and canopy biomass, while burning consumes surface fuels that have built up prior to thinning in addition to activity fuels (Stephens and Moghaddas 2005). Second, thin+burn reduced MPB-caused mortality relative to control and burn-only. Although thinning during outbreak-level disturbances may be ineffective (Six et al. 2014), thinning prior to an outbreak has been shown to moderate mortality (Fettig et al. 2014, Jenkins et al. 2014, Hood et al. 2016). Third, the thin+burn dampened development of ladder fuels by killing regeneration and potential ingrowth present after commercial thinning. Keyes and O'Hara (2002) identified that fuel treatments stimulate forest regeneration, which can in turn negate fuel reduction objectives. Although thin+burn units are in fact regenerating, the combination of these treatments killed advanced regeneration and reset an understory development phase, lengthening the duration of treatment effectiveness. The recent divergence from thin-only emphasizes that treating the understory is imperative for extending treatment longevity. Although stand densities in both thin-only and thin+burn were most similar to stand densities in local historical units with intact frequent fire regimes (e.g., per Clyatt et al. 2016), without re-entry or burning, the thin-only treatment may not be able to resist crown fire like the thin+burn or historical, open ponderosa pine units (Arno et al. 2008). Although the thin+burn treatment had slightly greater MPB-caused mortality than thin-only, fuel treatments in ponderosa pine forest types that include both thinning and burning (i.e., because burning treated the understory) best establish forest structure that is able to resist both beetles and crown fire well into the second decade. This timeframe is especially important for ponderosa pine forests in the inland Northwest, where fire return intervals may range up to half a century (Arno et al. 1995) and managers may be financially or logistically unable to keep up fuel treatments.
Fuel treatments have been widely implemented to reduce crown fire potential in fire-prone forests. However, recent bark beetle outbreaks have impacted millions of hectares of unmanaged and managed forests throughout the West. This study shows that fuel treatments followed by an MPB outbreak generate their own unique responses that differ from original treatment responses. Overall, thinned then MPB-attacked units had less fuel and lower crown fire potential than unthinned attacked units. Burned then beetle-attacked units had less canopy fuel and also had lower crown fire potential than unburned attacked units. Combined thinning and burning best improved fuel treatment longevity; even after MPB outbreak, this treatment exhibited little change in fuel profile and crown fire potential.
Bark beetle outbreaks reduce live stem densities and canopy fuels. However, bark beetle outbreaks and fuel treatments typically target different tree sizes and species, resulting in different forest structures and species composition, and these differences can have profound impacts on potential fire behavior. The MPB outbreak in our study had a complex effect on fuels and crown fire potential in treated versus untreated units, amplifying some differences and reducing others. High levels of MPB-caused mortality in control units and ladder fuel ingrowth in thin-only units made fuel and crown fire potential in these two treatments more similar in a number of ways. Despite becoming more similar to thinned treatments following MPB outbreak, control treatments were still characterized by greater crown fire potential, emphasizing a key difference between silvicultural and MPB "treatment."
Affleck, D., C. Keyes, and J. Goodburn. 2012. Conifer crown fuel modeling: Current limits and potential for improvement. Western Journal of Applied Forestry 27 (4): 165–169 https://doi.org/10.5849/wjaf.11-039.
Agee, J.K., and C.N. Skinner. 2005. Basic principles of forest fuel reduction treatments. Forest Ecology and Management 211 (1): 83–96 https://doi.org/10.1016/j.foreco.2005.01.034.
Albini, F.A. 1976. Estimating wildfire behavior and effects. In USDA Forest Service general technical report INT-30. Intermountain Forest and Range Experiment Station, Ogden, Utah, USA.
Arno, S.F., C.E. Fiedler, and M.K. Arno. 2008. Giant pines and grassy glades. The historic ponderosa pine ecosystem, disappearing icon of the American west, Forest history today (spring), 12–19.
Arno, S.F., J.H. Scott, and G. Hartwell. 1995. Age-class structure of old growth ponderosa pine/ Douglas-fir stands and its relationship to fire history. In USDA Forest Service research paper INT-RP-481. Intermountain Research Station, Ogden, Utah: USA.
Baron, R., and D. Kenny. 1986. The moderator-mediator variable distinction in social psychological research. Journal of Personality and Social Psychology 51 (6): 1173–1182 https://doi.org/10.1037/0022-3514.51.6.1173.
Bebi, P., D. Kulakowski, and T.T. Veblen. 2003. Interactions between fire and spruce beetles in a subalpine Rocky Mountain forest landscape. Ecology 84 (2): 362–371 https://doi.org/10.1890/0012-9658(2003)084[0362:IBFASB]2.0.CO;2.
Bentz, B.J., J. Regniere, C.J. Fettig, M. Hansen, J.L. Hayes, J.A. Hicke, R.G. Kelsey, J.F. Negrón, and S. Seybold. 2010. Climate change and bark beetles of the western United States and Canada: Direct and indirect effects. BioScience 60 (8): 602–613 https://doi.org/10.1525/bio.2010.60.8.6.
Bigler, C., D. Kulakowski, and T.T. Veblen. 2005. Multiple disturbance interactions and drought influence fire severity in Rocky Mountain subalpine forests. Ecology 86 (11): 3018–3029 https://doi.org/10.1890/05-0011.
British Columbia Ministry of Forests. 2004. Bark beetle management guidebook. Victoria, British Columbia, Canada: British Columbia Ministry of Forests.
Brown, J.K. 1974. Handbook for inventorying downed woody material. In USDA Forest Service general technical report INT-GTR-16. Intermountain Research Station, Ogden, Utah: USA.
Brown, R.T., J.K. Agee, and J.F. Franklin. 2004. Forest restoration and fire: Principles in the context of place. Conservation Biology 18 (4): 903–912 https://doi.org/10.1111/j.1523-1739.2004.521_1.x.
Clyatt, K.A., J.S. Crotteau, M.S. Schaedel, H.L. Wiggins, H. Kelley, D.J. Churchill, and A.J. Larson. 2016. Historical spatial patterns and contemporary tree mortality in dry mixed-conifer forests. Forest Ecology and Management 361: 23–37 https://doi.org/10.1016/j.foreco.2015.10.049.
R Core Team. 2016. R: A language and environment for statistical computing. <http://www.r-project.org/>. Accessed 9 Oct 2018.
Covington, W.W., and M.M. Moore. 1994. Postsettlement changes in natural fire regimes and forest structure. Journal of Sustainable Forestry 2 (1): 153–181 https://doi.org/10.1300/J091v02n01_07.
Cruz, M.G., and M.E. Alexander. 2010. Assessing crown fire potential in coniferous forests of western North America: A critique of current approaches and recent simulation studies. International Journal of Wildland Fire 19: 377–398 https://doi.org/10.1071/WF08132.
Cruz, M.G., M.E. Alexander, and R.H. Wakimoto. 2005. Development and testing of models for predicting crown fire rate of spread in conifer forest stands. Canadian Journal of Forest Research 35 (7): 1626–1639 https://doi.org/10.1139/x05-085.
Dixon, G. 2002. Essential FVS: A user's guide to the Forest vegetation simulator. USDA Forest Service internal report. Fort Collins, Colorado, USA: Forest Management Service Center.
Donato, D.C., B.J. Harvey, W.H. Romme, M. Simard, and M.G. Turner. 2013. Bark beetle effects on fuel profiles across a range of stand structures in Douglas-fir forests of greater Yellowstone. Ecological Applications 23 (1): 3–20 https://doi.org/10.1890/12-0772.1.
Fernandes, P.M. 2009. Examining fuel treatment longevity through experimental and simulated surface fire behaviour: A maritime pine case study. Canadian Journal of Forest Research 39 (12): 2529–2535 https://doi.org/10.1139/X09-145.
Fettig, C., R. Borys, and C. Dabney. 2010. Effects of fire and fire surrogate treatments on bark beetle-caused tree mortality in the southern cascades, California. Forest Science 56 (1): 60–73.
Fettig, C.J., K.E. Gibson, A.S. Munson, and J.F. Negrón. 2014. A comment on "management for mountain pine beetle outbreak suppression: Does relevant science support current policy?". Forests 5 (4): 822–826 https://doi.org/10.3390/f5040822.
Fiedler, C.E., K.L. Metlen, and E.K. Dodson. 2010. Restoration treatment effects on stand structure, tree growth, and fire hazard in a ponderosa pine/Douglas-fir forest in Montana. Forest Science 56 (1): 18–31.
Finney, M.A., C.W. McHugh, and I.C. Grenfell. 2005. Stand- and landscape-level effects of prescribed burning on two Arizona wildfires. Canadian Journal of Forest Research 35 (7): 1714–1722 https://doi.org/10.1139/x05-090.
Flannigan, M.D., M.A. Krawchuk, W.J. de Groot, M.B. Wotton, and L.M. Gowman. 2009. Implications of changing climate for global wildland fire. International Journal of Wildland Fire 18 (5): 483–507 https://doi.org/10.1071/WF08187.
Fulé, P.Z., J.E. Crouse, J.P. Roccaforte, and E.L. Kalies. 2012. Do thinning and/or burning treatments in western USA ponderosa or Jeffrey pine-dominated forests help restore natural fire behavior? Forest Ecology and Management 269: 68–81.
Gannon, A., and S. Sontag. 2010. Montana forest insect and disease conditions and program highlights-2010. In USDA Forest Service region 1, Forest health and protection report 11–1. Missoula, Montana: USA.
Grissino-Mayer, H.D., C.M. Gentry, S.Q. Croy, J. Hiatt, B. Osborne, A. Stan, and G. DeWeese Wight. 2006. Fire history of western Montana forested landscapes via tree-ring analyses. Professional Paper No. 23. Terre Haute, Indiana, USA: Indiana State University Department of Geography, Geology, and Anthropology.
Harrington, M.G., E. Noonan-Wright, M. Doherty. 2007. Testing the modeled effectiveness of an operational fuel reduction treatment in a small western Montana interface landscape using two spatial scales. Pages 301–314 in: B.W. Butler, and W. Cook, The fire environment—innovations, management, and policy. USDA Forest Service Proceedings RMRS-P-46CD, Rocky Mountain Research Station, Fort Collins, Colorado, USA.
Hart, S.J., T. Schoennagel, T.T. Veblen, and T.B. Chapman. 2015. Area burned in the western United States is unaffected by recent mountain pine beetle outbreaks. Proceedings of the National Academy of Sciences of the United States of America 112 (14): 4375–4380 https://doi.org/10.1073/pnas.1424037112.
Harvey, B.J., D.C. Donato, W.H. Romme, and M.G. Turner. 2013. Influence of recent bark beetle outbreak on fire severity and postfire tree regeneration in montane Douglas-fir forests. Ecology 94 (11): 2475–2486 https://doi.org/10.1890/13-0188.1.
Hessburg, P.F., D.J. Churchill, A.J. Larson, R.D. Haugo, C. Miller, T.A. Spies, M.P. North, N.A. Povak, R.T. Belote, P.H. Singleton, W.L. Gaines, R.E. Keane, G.H. Aplet, S.L. Stephens, P. Morgan, P.A. Bisson, B.E. Rieman, R.B. Salter, and G.H. Reeves. 2015. Restoring fire-prone inland Pacific landscapes: Seven core principles. Landscape Ecology 30: 1805–1835 https://doi.org/10.1007/s10980-015-0218-0.
Hicke, J.A., M.C. Johnson, L.H.D. Jane, and H.K. Preisler. 2012. Effects of bark beetle-caused tree mortality on wildfire. Forest Ecology and Management 271: 81–90.
Hicke, J.A., A.J.H. Meddens, and C.A. Kolden. 2016. Recent tree mortality in the western United States from bark beetles and forest fires. Forest Science 62 (2): 141–153 https://doi.org/10.5849/forsci.15-086.
Hood, S.M., S. Baker, and A. Sala. 2016. Fortifying the forest: Thinning and burning increase resistance to a bark beetle outbreak and promote forest resilience. Ecological Applications 26 (7): 1984–2000 https://doi.org/10.1002/eap.1363.
Jain, T.B., M.A. Battaglia, H.-S. Han, R.T. Graham, C.R. Keyes, J.S. Fried, and J.E. Sandquist. 2012. A comprehensive guide to fuel management practices for dry mixed conifer forests in the northwestern United States. In USDA Forest Service general technical report RMRS-GTR-292. Rocky Mountain Research Station, Fort Collins, Colorado: USA.
Jenkins, M.J., E. Hebertson, W. Page, and C.A. Jorgensen. 2008. Bark beetles, fuels, fires and implications for forest management in the intermountain west. Forest Ecology and Management 254 (1): 16–34 https://doi.org/10.1016/j.foreco.2007.09.045.
Jenkins, M.J., W.G. Page, E.G. Hebertson, and M.E. Alexander. 2012. Fuels and fire behavior dynamics in bark beetle-attacked forests in western North America and implications for fire management. Forest Ecology and Management 275: 23–34 https://doi.org/10.1016/j.foreco.2012.02.036.
Jenkins, M.J., J.B. Runyon, C.J. Fettig, W.G. Page, and B.J. Bentz. 2014. Interactions among the mountain pine beetle, fires, and fuels. Forest Science 60 (3): 489–501 https://doi.org/10.5849/forsci.13-017.
Johnson, M.C., M.C. Kennedy, and D.L. Peterson. 2011. Simulating fuel treatment effects in dry forests of the western United States: Testing the principles of a fire-safe forest. Canadian Journal of Forest Research 41 (5): 1018–1030 https://doi.org/10.1139/x11-032.
Kane, J.M., J.M. Varner, M.R. Metz, and P.J. van Mantgem. 2017. Characterizing fire-disturbance interactions and their potential impacts on tree mortality in western US forests. Forest Ecology and Management 405: 188–199 https://doi.org/10.1016/j.foreco.2017.09.037.
Keeling, E.G., A. Sala, and T.H. DeLuca. 2006. Effects of fire exclusion on forest structure and composition in unlogged ponderosa pine/Douglas-fir forests. Forest Ecology and Management 237: 418–428 https://doi.org/10.1016/j.foreco.2006.09.064.
Keyes, C.R., and K.L. O'Hara. 2002. Quantifying stand targets for silvicultural prevention of crown fires. Western Journal of Applied Forestry 17 (2): 101–109.
Keyes, C.R., and J.M. Varner. 2006. Pitfalls in the silvicultural treatment of canopy fuels. Fire Manage Today 66: 46–50.
Kolb, T.E., C.J. Fettig, M.P. Ayres, B.J. Bentz, J.A. Hicke, R. Mathiasen, J.E. Stewart, and A.S. Weed. 2016. Observed and anticipated impacts of drought on forest insects and diseases in the United States. Forest Ecology and Management 380: 321–334 https://doi.org/10.1016/j.foreco.2016.04.051.
Larsson, S., R. Oren, R.H. Waring, and J.W. Barrett. 1983. Attacks of mountain pine beetle as related to tree vigor of ponderosa pine. Forest Science 29 (2): 395–402.
MacKinnon, D.P., A.J. Fairchild, and M.S. Fritz. 2007. Mediation analysis. Annual Review of Psychology 58: 593–614 https://doi.org/10.1146/annurev.psych.58.110405.085542.
Martinson, E.J., and P.N. Omi. 2013. Fuel treatments and fire severity: A meta-analysis. Fort Collins, Colorado, USA: USDA Forest Service Research Paper RMRS-RP-103WWW, Rocky Mountain Research Station https://doi.org/10.2737/RMRS-RP-103.
McIver, J.D., S.L. Stephens, J.K. Agee, J. Barbour, R.E.J. Boerner, C.B. Edminster, K.L. Erickson, K.L. Farris, C.J. Fettig, C.E. Fiedler, S. Haase, S.C. Hart, J.E. Keeley, E.E. Knapp, J.F. Lehmkuhl, J.J. Moghaddas, W. Otrosina, K.W. Outcalt, D.W. Schwilk, C.N. Skinner, T.A. Waldrop, C.P. Weatherspoon, D.A. Yaussy, A. Youngblood, and S. Zack. 2012. Ecological effects of alternative fuel-reduction treatments: Highlights of the National Fire and fire surrogate study (FFS). International Journal of Wildland Fire 22: 63–82 https://doi.org/10.1071/WF11130.
McIver, J.D., and C.P. Weatherspoon. 2010. On conducting a multisite, multidisciplinary forestry research project: Lessons from the national fire and fire surrogate study. Forest Science 56 (1): 4–17.
Metlen, K.L., and C.E. Fiedler. 2006. Restoration treatment effects on the understory of ponderosa pine/Douglas-fir forests in western Montana, USA. Forest Ecology and Management 222 (1): 355–369 https://doi.org/10.1016/j.foreco.2005.10.037.
Miller, J.D., H.D. Safford, M. Crimmins, and A.E. Thode. 2009. Quantitative evidence for increasing forest fire severity in the Sierra Nevada and southern Cascade Mountains, California and Nevada, USA. Ecosystems 12 (1): 16–32.
Moran, C.J., and M.A. Cochrane. 2012. Do mountain pine beetle outbreaks change the probability of active crown fire in lodgepole pine forests? Comment. Ecology 93 (4): 939–941.
Negrón, J.F., and J.B. Popp. 2004. Probability of ponderosa pine infestation by mountain pine beetle in the Colorado front range. Forest Ecology and Management 191 (1–3): 17–27 https://doi.org/10.1016/j.foreco.2003.10.026.
Nelson, K.N., M.G. Turner, W.H. Romme, and D.B. Tinker. 2017. Simulated fire behaviour in young, postfire lodgepole pine forests. International Journal of Wildland Fire 26 (10): 852–865 https://doi.org/10.1071/WF16226.
Nimlos, T.J. 1986. Soils of Lubrecht Experimental Forest. Miscellaneous publication no. 44. Missoula, Montana, USA: Montana Forest and Conservation Experiment Station.
Noonan-Wright, E.K., N.M. Vaillant, and A.L. Reiner. 2014. The effectiveness and limitations of fuel modeling using the fire and fuels extension to the Forest vegetation simulator. Forest Science 60 (2): 231–240 https://doi.org/10.5849/forsci.12-062.
Page, W.G., and M.J. Jenkins. 2007. Mountain pine beetle-induced changes to selected lodgepole pine fuel complexes within the intermountain region. Forest Science 53 (4): 507–518.
Parsons, D.J., and S.H. DeBenedetti. 1979. Impact of fire suppression on a mixed-conifer forest. Forest Ecology and Management 2: 21–33 https://doi.org/10.1016/0378-1127(79)90034-3.
Pfister, R.D., B.L. Kovalchik, S.F. Arno, and R.C. Presby. 1977. Forest habitat types of Montana. In USDA Forest Service general technical report INT-GTR-34. Intermountain Research Station, Ogden, Utah: USA.
Pinheiro J, Bates D, DebRoy S, R Core Team. 2016. Nlme: Linear and nonlinear mixed effects models. http://cran.r-project.org/package=nlme. Accessed 9 Oct 2018.
Raffa, K.F., B.H. Aukema, B.J. Bentz, A.L. Carroll, J.A. Hicke, M.G. Turner, and W.H. Romme. 2008. Cross-scale drivers of natural disturbances prone to anthropogenic amplification: The dynamics of bark beetle eruptions. BioScience 58 (6): 501–517 https://doi.org/10.1641/B580607.
Rebain, S. 2010. The fire and fuels extension to the Forest Vegetation Simulator: Updated model documentation. Fort Collins, Colorado, USA: USDA Forest Service, Forest Management Service Center. Available online at: http://www.fs.fed.us/fmsc/ftp/fvs/docs/gtr/FFEguide.pdf.
Reineke, L.H. 1933. Perfecting a stand-density index for even-aged forests. Journal of Agricultural Research 46: 627–638.
Reinhardt, E.D., R.E. Keane, D.E. Calkin, and J.D. Cohen. 2008. Objectives and considerations for wildland fuel treatment in forested ecosystems of the interior western United States. Forest Ecology and Management 256 (12): 1997–2006 https://doi.org/10.1016/j.foreco.2008.09.016.
Savage, M., and J. Nystrom Mast. 2005. How resilient are southwestern ponderosa pine forests after crown fires? Canadian Journal of Forest Research 35 (4): 967–977 https://doi.org/10.1139/x05-028.
Schoennagel, T., T.T. Veblen, J.F. Negron, and J.M. Smith. 2012. Effects of mountain pine beetle on fuels and expected fire behavior in lodgepole pine forests, Colorado, USA. PLoS One 7 (1): 1–14 https://doi.org/10.1371/journal.pone.0030002.
Scott, J.H., and E.D. Reinhardt. 2001. Assessing crown fire potential by linking models of surface and crown fire behavior. In USDA Forest Service research paper RMRS-RP-29. Rocky Mountain Research Station, Fort Collins, Colorado, USA.
Sieg, C.H., R.R. Linn, F. Pimont, C.M. Hoffman, J.D. McMillin, J. Winterkamp, and L.S. Baggett. 2017. Fires following bark beetles: Factors controlling severity and disturbance interactions in ponderosa pine. Fire Ecology 13 (3): 1–23 https://doi.org/10.4996/fireecology.130300123.
Simard, M., W.H. Romme, J.M. Griffin, and M.G. Turner. 2011. Do mountain pine beetle outbreaks change the probability of active crown fire in lodgepole pine forests? Ecological Monographs 81 (1): 3–24 https://doi.org/10.1890/10-1176.1.
Six, D.L., E. Biber, and E. Long. 2014. Management for mountain pine beetle outbreak suppression: Does relevant science support current policy? Forests 5 (1): 103–133 https://doi.org/10.3390/f5010103.
Six, D.L., and K. Skov. 2009. Response of bark beetles and their natural enemies to fire and fire surrogate treatments in mixed-conifer forests in western Montana. Forest Ecology and Management 258: 761–772 https://doi.org/10.1016/j.foreco.2009.05.016.
Stephens, S.L., B.M. Collins, and G. Roller. 2012. Fuel treatment longevity in a Sierra Nevada mixed conifer forest. Forest Ecology and Management 285: 204–212 https://doi.org/10.1016/j.foreco.2012.08.030.
Stephens, S.L., and J.J. Moghaddas. 2005. Experimental fuel treatment impacts on forest structure, potential fire behavior, and predicted tree mortality in a California mixed conifer forest. Forest Ecology and Management 215: 21–36 https://doi.org/10.1016/j.foreco.2005.03.070.
Stephens, S.L., J.J. Moghaddas, C.B. Edminster, C.E. Fiedler, S.M. Haase, M.G. Harrington, J.E. Keeley, E.E. Knapp, J.D. McIver, K. Metlen, C.N. Skinner, and A. Youngblood. 2009. Fire treatment effects on vegetation structure, fuels, and potential fire severity in western US forests. Ecological Applications 19 (2): 305–320 https://doi.org/10.1890/07-1755.1.
Tinkham, W., C. Hoffman, S. Ex, M. Battaglia, and J. Saralecos. 2016. Ponderosa pine forest restoration treatment longevity: Implications of regeneration on fire hazard. Forests 7 (7): 137 https://doi.org/10.3390/f7070137.
US Forest Service. 2005. Forest inventory and analysis National Core Field Guide Volume I: Field data collection procedures for phase 2 plots. <https://www.fia.fs.fed.us/library/field-guides-methods-proc/>. Accessed 9 Oct 2018.
Wallin, K.F., T.E. Kolb, K.R. Skov, and M.R. Wagner. 2003. Effects of crown scorch on ponderosa pine resistance to bark beetles in northern Arizona. Environmental Entomology 32 (3): 652–661 https://doi.org/10.1603/0046-225X-32.3.652.
Weatherspoon, C.P. 2000. A long-term national study of the consequences of fire and fire surrogate treatments. In Proceedings of the joint fire science conference and workshop—Crossing the millenium: Integrating spatial technologies and ecological principles for a new age in fire management, ed. L.F. Neuenschwander and K.C. Ryan, 117–126. Moscow: University of Idaho Press.
Westerling, A.L., H.G. Hidalgo, D.R. Cayan, and T.W. Swetnam. 2006. Warming and earlier spring increase western US forest wildfire activity. Science 313: 940–943 https://doi.org/10.1126/science.1128834.
This was a study of the Applied Forest Management Program at the University of Montana, a research and demonstration unit of the Montana Forest and Conservation Experiment Station. We are also indebted to A. Larson for providing feedback on an earlier draft.
This work was supported by the USDA National Institute of Food and Agriculture, McIntire Stennis project 233356 (awarded to C. Keyes). The study was made possible with the foresight of past scientists who designed and implemented the Fire and Fire Surrogate Study, a nation-wide project funded by the Joint Fire Sciences Program (FFS 99-S-01).
Data available upon request.
University of Montana, WA Franke College of Forestry and Conservation, 32 Campus Drive, Missoula, Montana, 59812, USA
Justin S. Crotteau, Christopher R. Keyes & David L. R. Affleck
Present address: USDA Forest Service, Pacific Northwest Research Station, 11175 Auke Lake Way, Juneau, Alaska, 99801, USA
Justin S. Crotteau
USDA Forest Service, Rocky Mountain Research Station, Fire, Fuel, and Smoke Science Program, 5775 Highway 10 W, Missoula, Montana, 59808, USA
Sharon M. Hood
University of Montana, Division of Biological Sciences, 32 Campus Drive, Missoula, Montana, 59812, USA
Anna Sala
Christopher R. Keyes
David L. R. Affleck
JSC and CRK designed the study and acquired the data. JSC performed the analysis and wrote the manuscript. JSC, CRK, SMH, DLRA, and AS contributed to data interpretation and manuscript revision. All authors read and approved the final manuscript.
Correspondence to Justin S. Crotteau.
Crotteau, J.S., Keyes, C.R., Hood, S.M. et al. Fuel dynamics after a bark beetle outbreak impacts experimental fuel treatments. fire ecol 14, 13 (2018). https://doi.org/10.1186/s42408-018-0016-6
Dendroctonus ponderosae
disturbance interaction
fire and fire surrogate study
fuel dynamics
mediation analysis
treatment longevity
Creating a Speaker Embedding for Speaker ID
Speaker Identification is the process of automatically determining who is speaking based on a recording of their voice. In the past, Gaussian Mixture Models (GMMs) and GMM-UBMs were used for this sort of thing; however, Deep Learning has basically replaced all the old classification techniques at this point, so why not speaker ID too?
This tutorial will be about creating a deep neural embedding i.e. a neural network that directly learns a mapping from speech segments to a compact Euclidean space where distances directly correspond to a measure of speaker similarity. Once this space has been trained, tasks such as speaker recognition, verification and clustering can be easily implemented using standard techniques with the embeddings as feature vectors. We'll use a deep convolutional network trained to directly optimize the embedding itself, rather than an intermediate bottleneck layer, similar to how FaceNet is used for face recognition.
There is a fair bit of stuff to get through, from datasets, to network architecture, to applications of the final embeddings, so I'll just get started.
• Datasets
• Training and Loss
• Architecture
• A Worked DTW Example
• Using DTW on ISOLET
• Incorporating DTW as SVM Kernel
If you look at FaceNet, they used millions of images to train it, but unfortunately finding a database with millions of speakers is going to be tough. The largest speech dataset I know of is librispeech which has 1266 speakers (meant for ASR, but we can use it for speaker), followed by VoxCeleb (1245 speakers), a rather challenging dataset of celebrities speaking in youtube clips. VoxCeleb is difficult because there are so many different recording environments, background noises and time between recordings. Other datasets include TIMIT (630 speakers), ANDOSL (108 speakers) and YOHO (130 speakers). I also played around with voxforge a little bit, but it would take a lot of cleaning up to get it into a workable state.
I will use librispeech, VoxCeleb, ANDOSL and YOHO pooled together for training, and TIMIT for testing. This allows me to use TIMIT's 630 speakers that the embedding has never seen before to test the system; since this is the scenario that it will be used in, this should provide a good indication of its ability to discriminate arbitrary speakers.
Training and Loss
We are going to be training the network with triplet loss, which allows us to directly optimise the embedding instead of using a bottleneck or some other trick. What we want for an embedding is that all speech segments from one speaker map closely together, while segments from different speakers are far apart, e.g.:
\begin{equation}\label{tripleteq} \norm{x^a - x^p} < \norm{x^a - x^n} \quad \forall a, \forall p, \forall n \end{equation}

In the equation above \(x^a\) is the embedding for a given speech segment, \(x^p\) is the embedding for a positive exemplar (another speech segment of the same speaker) and \(x^n\) is a negative exemplar (a speech segment from a different speaker). To translate this into a loss function we can use for deep learning we use:
$$ loss = \sum_i^N \left[ \norm{f(x^a_i) - f(x^p_i)} - \norm{f(x^a_i) - f(x^n_i)} + \alpha \right] $$
Where \(f()\) is the neural network and \(\alpha\) is a small number that represents the margin. This means we need sets of three speech segments where 2 are from the same speaker and 1 is from another speaker, which is where 'triplet loss' gets its name. There are some tricks to implementing this efficiently: we don't want to have three copies of our network in memory, and in addition we don't want to use all possible triplets - just the difficult ones! The FaceNet paper goes into this a bit deeper; many triplets satisfy equation \ref{tripleteq}, but these don't help our training. We need hard triplets, i.e. the ones that violate equation \ref{tripleteq} (but not too hard!). What we want is to mine moderately hard triplets. This is done easily by taking a minibatch and running it through the network on the GPU, then computing all valid triplets in the minibatch on the CPU. At this stage we can discard all the easy triplets, and only keep the hard ones in the batch. This satisfies our need for moderately hard triplets, i.e. they are the hardest in the batch. Once we have the hard triplets, we then pass the indexes of the embeddings back into TensorFlow, which can then index into the batch of embeddings to compute the loss.
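As a rough illustration of the mining step described above, here is a minimal numpy sketch (the names and code are mine, not the post's actual implementation; it uses squared distances and the hinged FaceNet-style loss, whereas the equations above use unsquared norms without the hinge):

import numpy as np

def mine_hard_triplets(embeddings, labels, alpha=0.2):
    # embeddings: (B, d) array from the forward pass, labels: (B,) speaker ids
    # returns index triplets (a, p, n) from the minibatch that violate the margin
    sq = np.sum(np.square(embeddings), axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * embeddings @ embeddings.T  # pairwise squared distances
    triplets = []
    B = embeddings.shape[0]
    for a in range(B):
        for p in range(B):
            if p == a or labels[p] != labels[a]:
                continue
            for n in range(B):
                if labels[n] == labels[a]:
                    continue
                if d2[a, p] - d2[a, n] + alpha > 0:  # "hard": the margin is violated
                    triplets.append((a, p, n))
    return np.array(triplets)

def triplet_loss(embeddings, triplets, alpha=0.2):
    # hinged triplet loss over the mined (anchor, positive, negative) index triplets
    # (assumes at least one hard triplet was found)
    a, p, n = triplets[:, 0], triplets[:, 1], triplets[:, 2]
    d_ap = np.sum(np.square(embeddings[a] - embeddings[p]), axis=1)
    d_an = np.sum(np.square(embeddings[a] - embeddings[n]), axis=1)
    return np.mean(np.maximum(d_ap - d_an + alpha, 0.0))

In practice these indices would be fed back into the TensorFlow graph (as described above) so the gradient flows through the embedding network; the numpy version is only meant to show which triplets get kept.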
This network will be implemented
Often in machine learning we need the distance between two objects. This will usually be, for example, the Euclidean distance or cosine similarity between vectors. These distance functions only work on fixed size vectors though, and for some problems we would like to know the distance between sequences of different lengths.
Dynamic Time Warping is a method for aligning sequences and computing the distance between them, these can be time sequences like audio recordings or non-time sequences like protein sequences. In bioinformatics the algorithm is called either the Needleman-Wunsch algorithm or Smith–Waterman (these are slight variations of the same thing). It finds uses in many other areas as well.
This section will introduce DTW in a slightly mathematical way; in the next section I'll go through a worked example that shows how all this stuff is applied. Let \(P\) and \(Q\) be sequences of length \(L_P\) and \(L_Q\) respectively. The first step is to compute the distance between all the different combinations of rows. Let \(p_i\) be the \(i^{th}\) row of \(P\), and \(q_j\) be the \(j^{th}\) row of \(Q\).
Here we will use the cosine distance, which is defined between vectors. We could just as easily use euclidean distance between each row here as well. The cosine distance between vectors \(p_i\) and \(q_j\) is defined as:
\begin{equation} d(p_i,q_j) = 1 - \dfrac{p_i q_j^T}{\sqrt{p_i p_i^T q_j q_j^T}}, \quad \textrm{for } i=1,\ldots, L_P \textrm{ and } j=1,\ldots, L_Q \end{equation}
The distance \(d\) is computed for all rows in \(P\) and all rows in \(Q\) to give an \(L_P \times L_Q\) distance matrix \(S\). The dynamic time warping algorithm is applied to \(S\) to find the minimum cost path. This gives a cumulative distance matrix \(D\). The matrix \(D\) defines the total cost of alignment between \((p_1,q_1)\) and \((p_{L_P}, q_{L_Q})\). A lower cost implies a better alignment, which indicates the sequences \(P\) and \(Q\) are more similar. The cumulative distance matrix \(D\) is calculated like this:
\begin{equation} \label{eqn_dtw_D} D_{i,j} = \min(D_{i-1,j},D_{i,j-1},D_{i-1,j-1}) + S_{i,j} \end{equation}
The equation above is computed for all \(i=1,2,\ldots,L_P\) and \(j=1,2,\ldots ,L_Q\). Also, \(D_{i,j} = \emptyset\) (the empty set) for \(i\leq 0\) and/or \(j\leq 0\), and \(S_{i,j} = d(p_i,q_j)\). The final distance between \(P\) and \(Q\) is defined as
\begin{equation} D_{DTW}(P,Q) = D(L_P,L_Q) \end{equation}
where \(D\) is the cumulative distance matrix defined above for \(P\) and \(Q\), and \(D(L_P,L_Q)\) indicates the indexing of the bottom right corner of the matrix \(D\), all other values in the matrix are ignored.
A Worked DTW Example
Most DTW tutorials assume that you are aligning two 1-dimensional vectors, but in machine learning problems we almost always deal with multi-dimensional sequences, e.g. audio is represented as MFCCs which have 12 dimensions, so our sequence is actually an N x 12 matrix. DTW can handle this sort of situation just fine, but I think it confuses newcomers, so the example below aligns 2 sequences with 3-dimensional elements.
Our matrices \(P\) and \(Q\) will be defined as
Sequence \(P\) of length 4:

$$ P = \begin{bmatrix} 1 & 5 & 6 \\ 4 & 6 & -3\\ 3 & 3 & 1 \\ 5 & 3 & -1 \end{bmatrix} $$

Sequence \(Q\) of length 5:

$$ Q = \begin{bmatrix} 2 & 3 & 2 \\ 5 & 2 & -3\\ 5 & 4 & 6 \\ 0 & 4 & 1 \\ 2 & 0 & -1 \end{bmatrix} $$
These matrices aren't that similar, but we will find the distance between them anyway. For this example we have \(L_P=4\) and \(L_Q=5\). Our first step is to build the matrix \(S\) of pairwise distances between all the rows.
Let \(p_i\) (for \(i=1,\ldots,4\)) and \(q_j\) (for \(j=1,\ldots,5\)) be the row vectors of \(P\) and \(Q\). To compute the distance between row 1 of \(P\) (\(p_1 = [1,5,6]\)) and row 1 of \(Q\) (\(q_1 = [2,3,2]\)):
\begin{align} d(p_1,q_1) &= 1 - \dfrac{p_1 q_1^T}{\sqrt{p_1 p_1^T q_1 q_1^T}}\\ &= 1 - \dfrac{29}{\sqrt{62 \times 17}}\\ &= 1 - 0.8933 = 0.1067 \end{align}
In the same way, the rest of the matrix \(S\) can be computed between all possible pairs of rows (all other combinations of \(i\) and \(j\)).
$$ S = \begin{bmatrix} 0.1067 & 1.0618 & 0.1171 & 0.1991 & 1.2272\\ 0.3789 & 0.1484 & 0.6206 & 0.3479 & 0.3701\\ 0.0541 & 0.3301 & 0.1372 & 0.2767 & 0.4870\\ 0.3031 & 0.0677 & 0.4029 & 0.5490 & 0.1685\\ \end{bmatrix} $$
The matrix \(S\) is used to compute the cumulative distance matrix \(D\) using equation \ref{eqn_dtw_D}.
\begin{equation} D_{11} = \min(D_{01},D_{10},D_{00}) + S_{11} = S_{11} = 0.1067 \end{equation}
since \(D_{01}\),\(D_{10}\) and \(D_{00}\) do not exist and are considered empty.
In the same way \(D_{21} = D_{11} + S_{21} = 0.4856\), \(D_{12} = D_{11} + S_{12} = 1.1685\) and \(D_{22} = \min(D_{12},D_{21},D_{11}) + S_{22} = 0.1067 + 0.1484 = 0.2552\). The complete matrix \(D\) is calculated to be:
$$ D = \begin{bmatrix} 0.1067 & 1.1685 & 1.2856 & 1.4847 & 2.7119\\ 0.4856 & 0.2551 & 0.8757 & 1.2236 & 1.5937\\ 0.5397 & 0.5852 & 0.3923 & 0.6690 & 1.1560\\ 0.8428 & 0.6074 & 0.7952 & 0.9413 & 0.8375\\ \end{bmatrix} $$
It is concluded that the distance \(D_{DTW}\) between \(P\) and \(Q\) is \(D_{DTW}(P,Q) = D(L_P,L_Q) = D(4,5) = 0.8375\).
Using DTW on ISOLET
In the previous section we went through how to compute the distance between two sequences. We're now going to use the distance function \(D_{DTW}\) that we defined to build a nearest neighbour classifier for the utterances in ISOLET. The basic steps are, for each test utterance, to compute the distance between it and every train utterance. We identify the closest train utterance and then use the corresponding class as the class of the test utterance. That's it!
We have to implement the functions defined above in python. First we have to get a list of all the files in ISOLET and extract MFCC features for each utterance; see the source file for that stuff. Below I've shown an implementation of the DTW algorithm in numpy, but this runs really slowly, so I rewrote it to use numba just-in-time compilation, which is 500 times (!!) faster. See the code for the implementation; the basic trick is not to use any numpy or scipy calls in the loops, just pure python.
# this is the dtw distance implemented directly
import numpy as np

def dtw_dist(p, q):
    # pairwise cosine distances between the rows of p and q (the matrix S in the text)
    ep = np.sqrt(np.sum(np.square(p), axis=1))
    eq = np.sqrt(np.sum(np.square(q), axis=1))
    S = 1 - np.dot(p, q.T) / np.outer(ep, eq)  # work out S all at once
    D = np.zeros_like(S)  # cumulative distance matrix
    Lp = np.shape(p)[0]
    Lq = np.shape(q)[0]
    for i in range(Lp):
        for j in range(Lq):
            if i == 0 and j == 0: D[i, j] = S[i, j]
            elif i == 0: D[i, j] = D[i, j-1] + S[i, j]
            elif j == 0: D[i, j] = D[i-1, j] + S[i, j]
            else: D[i, j] = np.min([D[i-1, j], D[i, j-1], D[i-1, j-1]]) + S[i, j]
    return D[-1, -1]  # return the bottom right hand corner distance
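The numba version isn't reproduced in full here, but a sketch of the idea (decorate a pure-python nested-loop implementation with @njit; the name dtw_dist_fast is mine) might look like the following. The quick check at the end reproduces the worked example's distance of roughly 0.8375.

import numpy as np
from numba import njit

@njit
def dtw_dist_fast(p, q):
    # same algorithm as dtw_dist, but with plain python loops that numba can compile
    Lp, Lq, N = p.shape[0], q.shape[0], p.shape[1]
    S = np.zeros((Lp, Lq))
    for i in range(Lp):
        for j in range(Lq):
            dot = 0.0; pp = 0.0; qq = 0.0
            for k in range(N):
                dot += p[i, k] * q[j, k]
                pp += p[i, k] * p[i, k]
                qq += q[j, k] * q[j, k]
            S[i, j] = 1.0 - dot / np.sqrt(pp * qq)  # cosine distance between rows
    D = np.zeros((Lp, Lq))
    for i in range(Lp):
        for j in range(Lq):
            if i == 0 and j == 0: D[i, j] = S[i, j]
            elif i == 0: D[i, j] = D[i, j-1] + S[i, j]
            elif j == 0: D[i, j] = D[i-1, j] + S[i, j]
            else: D[i, j] = min(D[i-1, j], D[i, j-1], D[i-1, j-1]) + S[i, j]
    return D[Lp-1, Lq-1]

# sanity check against the worked example above
P = np.array([[1, 5, 6], [4, 6, -3], [3, 3, 1], [5, 3, -1]], dtype=np.float64)
Q = np.array([[2, 3, 2], [5, 2, -3], [5, 4, 6], [0, 4, 1], [2, 0, -1]], dtype=np.float64)
print(dtw_dist_fast(P, Q))  # approximately 0.8375

The nearest neighbour experiment below can use either implementation.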
# features_test, features_train are lists of numpy arrays containing MFCCs.
# label_test and label_train are just python lists of class labels (0='A', 1='B' etc.)
correct = 0
total = 0
for i in range(len(features_test)):
    best_dist = 10**10
    best_label = -1
    for j in range(len(features_train)):
        dist_ij = dtw_dist(features_test[i], features_train[j])
        if dist_ij < best_dist:
            best_dist = dist_ij
            best_label = label_train[j]
    if best_label == label_test[i]: correct += 1
    total += 1
print('accuracy:', correct/total)
Using the above code I could achieve 79.35% accuracy using 26 filterbank, 13 cepstrum MFCC. By adding delta features and twiddling with some other parameters I got this up to 84.4% accuracy. This is not great; the scores listed by the UCI repository have 96% as the best score (using a non-DTW method). The example above uses 1-nearest neighbour, so what if we increase the number of neighbours? For 3-NN we get 85.06%, for 5-NN we get 85.58%, for 11-NN we get 88.52%, for 50-NN we get 88.84% and for 100-NN we get 88.91%. So a small increase in accuracy. In the next section we will look at incorporating these distances into an SVM kernel.
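Before moving on, here is a minimal sketch of the k-NN variant mentioned above (my code, not the post's; ties are broken arbitrarily by Counter). It reuses the dtw_dist function defined earlier:

from collections import Counter
import numpy as np

def knn_predict(test_feat, features_train, label_train, k=11):
    # distances from one test utterance to every training utterance
    dists = np.array([dtw_dist(test_feat, tr) for tr in features_train])
    nearest = np.argsort(dists)[:k]          # indices of the k closest training utterances
    votes = Counter(label_train[j] for j in nearest)
    return votes.most_common(1)[0][0]        # majority vote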
Incorporating DTW as SVM Kernel
See here for a good non-mathematical guide to SVM. The SVM algorithm uses a kernel matrix as the basis of its classifications. A kernel function is valid only if it is symmetric, i.e. \(K(\mathbf{x},\mathbf{y}) = K(\mathbf{y},\mathbf{x})\), and the resulting kernel matrix is positive semi-definite.
Common kernel functions include linear, polynomial and Radial Basis Function (RBF). For these kernels \(\mathbf{x}\) and \(\mathbf{y}\) must have the same dimensionality to be computable. To create a kernel function that can utilise variable length inputs, we can use \(D_{DTW}\), the distance between sequences that we defined in the previous section.
\begin{equation} K_{DTW}(\mathbf{x},\mathbf{y}) = \exp \left( \dfrac{-D_{DTW}(\mathbf{x},\mathbf{y})^2}{\gamma^2} \right) \end{equation}
One of the problems with using the DTW distance is that it is not a true distance metric like the Euclidean distance. The DTW distance does not satisfy the triangle inequality, and positive semi-definiteness of the kernel cannot be proven as simple counterexamples can be found. Nevertheless kernels built like this show good results.
To apply this we just have to do grid search on the validation set to find good parameters \(C\) and \(\gamma\) for this particular problem.
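One way to wire this up (a sketch under my own naming, not the post's code) is scikit-learn's SVC with a precomputed kernel: build Gram matrices from the pairwise DTW distances, then grid-search \(C\) and \(\gamma\) on a validation split.

import numpy as np
from sklearn.svm import SVC

def dtw_gram(feats_a, feats_b, gamma):
    # K[i, j] = exp(-D_DTW(a_i, b_j)^2 / gamma^2)
    D = np.array([[dtw_dist(a, b) for b in feats_b] for a in feats_a])
    return np.exp(-np.square(D) / gamma**2)

gamma, C = 1.0, 10.0  # placeholder values; tune both by grid search
K_train = dtw_gram(features_train, features_train, gamma)   # (n_train, n_train)
K_test = dtw_gram(features_test, features_train, gamma)     # (n_test, n_train)

clf = SVC(C=C, kernel='precomputed')
clf.fit(K_train, label_train)
print('accuracy:', clf.score(K_test, label_test))

Since the DTW kernel isn't guaranteed to be positive semi-definite, SVC may occasionally behave suboptimally, but as noted above kernels built this way still tend to work well in practice.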
Microsatellite instability test using peptide nucleic acid probe-mediated melting point analysis: a comparison study
Mi Jang 1,
Yujin Kwon 1,2,
Hoguen Kim 1,2,
Hyunki Kim 1,
Byung Soh Min 3,
Yehyun Park 4,
Tae Il Kim 4,
Sung Pil Hong 5 &
Won Kyu Kim ORCID: orcid.org/0000-0001-5531-9500 1,2
BMC Cancer volume 18, Article number: 1218 (2018)
Analysis of high microsatellite instability (MSI-H) phenotype in colorectal carcinoma (CRC) is important for evaluating prognosis and choosing a proper adjuvant therapy. Although the conventional MSI analysis methods such as polymerase chain reaction (PCR) fragment analysis and immunohistochemistry (IHC) show high specificity and sensitivity, there are substantial barriers to their use.
In this study, we analyzed the MSI detection performance of three molecular tests and IHC. For the molecular tests, we included a recently developed peptide nucleic acid probe (PNA)-mediated real-time PCR-based method using five quasi-monomorphic mononucleotide repeat markers (PNA method) and two conventional PCR fragment analysis methods using NCI markers (NCI method) or five quasi-monomorphic mononucleotide repeat markers (MNR method). IHC analysis was performed with four mismatch repair proteins. The performance of each method was validated in 166 CRC patient samples, which consisted of 76 MSI-H and 90 microsatellite stable (MSS) CRCs previously diagnosed by NCI method.
Of the 166 CRCs, 76 MSI-H and 90 MSS CRCs were determined by PNA method. On the other hand, 75 MSI-H and 91 MSS CRCs were commonly determined by IHC and MNR methods. Based on the originally diagnosed MSI status, PNA showed 100% sensitivity and 100% specificity while IHC and MNR showed 98.68% sensitivity and 100% specificity. When we analyzed the maximum sensitivity of MNR and PNA method, which used the same five markers, PNA method could detect alterations in all five mononucleotide repeat markers in samples containing down to 5% MSI-H DNAs, whereas MNR required at least 20% MSI-H DNAs to achieve the same performance.
Based on these findings, we suggest that PNA method can be used as a practical laboratory test for the diagnosis of MSI.
The molecular pathogenesis of colorectal carcinoma (CRC) is well understood in comparison with other cancers. The development of a broad range of CRCs can be explained by the multistep carcinogenesis model and high microsatellite instability (MSI-H) resulting from deficiencies of the mismatch repair (MMR) gene set, which consists of MSH2, MLH1, MSH6, and PMS2. The MSI-H phenotype is found in both hereditary non-polyposis colorectal cancer (HNPCC) with germline mutation in the MMR gene set (3%) and sporadic CRCs with CpG island methylator phenotype in the MMR gene set (12%), which together account for approximately 15% of all CRCs [1]. Regardless of the origin (hereditary or sporadic) or type of mutation, MSI-H quantitatively or qualitatively alters the expression of numerous genes harboring nucleotide repeats, such as transforming growth factor-β receptor 2, TCF-4, BAX, MLH3, and RAD50, which may contribute to the development of CRCs, increased neoantigen production, and increased sensitivity to immunotherapy [2,3,4,5].
In addition to CRCs, MSI-H has frequently been reported in other types of cancers including endometrial and gastric cancer and is expected to play direct or indirect roles in the development of these cancers. A recent comprehensive MSI screening study showed that the degree of MSI positively correlates with the survival of patients with various cancers [6]. Thus, precise and rapid detection of MSI status has become more crucial for both research and clinical practice. There are several laboratory tests for determining MSI status, including polymerase chain reaction (PCR)-based analysis of MSI markers and immunohistochemistry (IHC) staining of MMR proteins [7]. However, conventional PCR-based MSI interrogation requires complicated steps and additional equipment and shows low sensitivity for samples with a small proportion of tumor cells. On the other hand, IHC analysis requires pathologists, and interpretation criteria can be subjective and affected by technical factors. Therefore, a more accurate and simple MSI test strategy is needed.
We aimed to evaluate the performance of various MSI detection methods: a recently developed peptide nucleic acid (PNA) probe-mediated real-time PCR-based MSI test using five quasi-monomorphic mononucleotide repeat markers (PNA method); two conventional PCR fragment analysis methods, one using the two mononucleotide and three dinucleotide repeat markers proposed by the National Cancer Institute (NCI method) [8] and one using the same mononucleotide repeat (MNR) markers as the PNA method (MNR method); and the IHC method. Each MSI detection method was validated in 166 CRCs and paired normal specimens. In this study, we provide practical data for proper selection and application of MSI detection methods.
Cell lines and patient tissue samples
For testing the sensitivity of PNA (U-TOP™ MSI Detection Kit) and MNR (Promega MSI Analysis System Kit), HeLa (microsatellite stable, MSS) and SNU-638 (MSI-H) cell lines were used. HeLa cells and SNU-638 cells were cultured with DMEM and RPMI1640 medium, respectively. Each medium was supplemented with 10% FBS (Invitrogen Life Technologies, Carlsbad, CA, USA) and 1% penicillin/streptomycin.
Formalin-fixed paraffin-embedded (FFPE) tissue samples were collected from 2263 patients with CRCs who visited Severance Hospital between January 2005 and December 2015. A total of 166 CRCs (76 diagnosed as MSI-H and 90 diagnosed as MSS) and matched non-neoplastic colon mucosal tissues were randomly selected for this study. MSI status was previously analyzed using the NCI method. The specimens were obtained from the archives of the Department of Pathology, Yonsei University, Seoul, Korea and the Liver Cancer Specimen Bank of the National Research Resource Bank Program of the Korean Science and Engineering Foundation, Ministry of Science and Technology.
PNA probe-mediated real-time PCR sensing for detection of MSI status
We tested the performance of the PNA method in detecting MSI status in colon cancer samples using genomic DNA samples (gDNAs) extracted from FFPE CRCs and matched normal tissues. gDNA was isolated according to the manufacturer's instructions (QIAamp DNA FFPE Tissue Kit, Qiagen, Venlo, Netherlands; Maxwell® 16 FFPE Purification Kit for DNA, Promega). Both quality and quantity of the extracted gDNA were evaluated using a Nanodrop spectrophotometer (Thermo, Waltham, MA, USA) and subsequent gel electrophoresis. In the PNA method, the wild-type PNA probe hybridizes perfectly with the wild-type allele, whereas partial or mismatched hybridization lowers the melting temperature of the PNA probe relative to that of a perfectly matched probe. At the denaturation temperature, the PNA probe undergoes fluorescence quenching by random coiling, and this quenching temperature was analyzed on a real-time PCR machine [9, 10].
To assess MSI status with the PNA method, the U-TOP™ MSI Detection Kit was purchased from SeaSun Biomaterials (Daejeon, Korea). This commercially available product employs five MSI marker genes (NR21, NR24, NR27, BAT25, and BAT26). A 20-μl mixture composed of gDNA sample (3 μl), 2 × qPCR Premix (10 μl), and dual-labeled (fluorescence and quencher) PNA probes for NR21, NR24, and BAT26 (MSI Marker A; 7 μl) was prepared, as well as another 20-μl mixture composed of gDNA sample (3 μl), 2 × qPCR Premix (10 μl) and dual-labeled PNA probes for BAT25 and NR27 (MSI Marker B; 7 μl). The qPCR premix contained dNTP and DNA Taq polymerase, as well as Uracil DNA Glycosylase (UDG). Each PNA probe was fluorescently labeled with Texas-Red, Hexachloro-fluorescein, or Fluorescein amidite. Two individual real-time PCR reactions were performed for each normal and cancer sample using a CFX96 PCR machine (Bio-Rad, Hercules, CA, USA). PCR reaction steps consisted of amplification and subsequent melting point analysis. The amplification condition was 50 °C for 5 min, 95 °C for 10 min, and 60 cycles of 95 °C for 30 s, 65 °C for 30 s, 55 °C for 30 s, and 57 °C for 45 s. The initial incubation at 50 °C for 5 min was required to activate Uracil DNA Glycosylase and prevent possible carryover contamination. In the 60 cycles of PCR, four different temperatures and respective optimal times were required. The main purpose of these conditions was to ensure specific binding of the PNA probes to their target MSI markers. In detail, 95 °C for 30 s was for denaturation of template DNAs, 65 °C for 30 s was for binding of PNA probes to their target MSI markers, 55 °C for 30 s was the annealing temperature of the primers, and 57 °C for 45 s was for elongation by the polymerase. The melting point analysis condition was 10 min at 95 °C for denaturation and touchdown PCR (90 °C to 45 °C, decreasing 1 °C per cycle). Fluorescence was measured for 10 s at each cycle of touchdown PCR. Obtained melting peaks were analyzed to detect alterations in the five MSI marker genes. A CRC sample was considered to be unstable in an MSI marker gene when a > 3 °C melting temperature difference between the CRC and the normal sample was observed. For the analysis of the minimal base pair alteration that the PNA method can detect, we used artificially synthesized oligo targets containing either deletions or insertions (Macrogen, Seoul, Korea).
PCR fragment analysis
For the NCI method, five microsatellite loci (BAT-25, BAT-26, D2S123, D5S346, and D17S250) recommended by the 1997 NCI-sponsored MSI workshop were amplified in a single multiplex PCR reaction. PCR products were analyzed by capillary electrophoresis. For interpretation, instability at more than one locus was defined as MSI-H, instability at a single locus was defined as low MSI (MSI-L), and no instability at any locus was defined as MSS. For the MNR method, amplification of five MSI markers (NR21, NR24, BAT26, BAT25, and NR27) was performed using gDNAs extracted from CRCs and matched normal samples. PCR reactions were performed with the MSI Analysis System Kit (Promega, Madison, WI, USA) according to the MSI Analysis System Version 1.2 protocol. Amplified PCR products were purified using a PCR clean-up kit (Macherey-Nagel, Düren, Germany), and the size of PCR amplicons was analyzed using an ABI PRISM 3100 Genetic Analyzer (Applied Biosystems, Foster City, CA, USA). Based on data obtained from the sequencer, MSI status was determined by independent experts (Macrogen).
Paraffin-embedded tissue blocks were cut into 4-μm sections. IHC analysis was performed using a Ventana XT automated stainer (Ventana Corporation, Tucson, AZ, USA) with antibodies against the following: MutL homolog 1 (MLH1, diluted 1:50, BD Biosciences, San Jose, CA, USA), MutS homolog 2 (MSH2, diluted 1:200, BD Biosciences), MutS homolog 6 (MSH6, diluted 1:100, Cell Marque, Rocklin, CA, USA), and PMS1 homolog 2 (PMS2, diluted 1:40, Cell Marque). Positive internal controls including stromal cells and lymphoid cells were confirmed, and the percentage of nuclear expression was measured.
To calculate the diagnostic sensitivity and specificity of each MSI test, McNemar's tests were performed. The sensitivity (%) of each MSI analysis method was calculated as follows: 100 × true positives/(true positives + false negatives), where true positives and false negatives were defined according to MSI status originally diagnosed by NCI method. The specificity (%) of PNA method was calculated as follows: 100 × true negatives/(false positives + true negatives), where true negatives and false positives were defined according to MSI status originally diagnosed by NCI method.
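As a worked example of these definitions: a method that detects 75 of the 76 originally diagnosed MSI-H cases has sensitivity 100 × 75/(75 + 1) ≈ 98.68%, which is the figure reported below for the MNR and IHC methods.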
To determine the sample number required for this study, we performed a non-inferiority test at a significance level of 0.05 and a statistical power of 80%. The average positive predictive value (PPV) and negative predictive value (NPV) were obtained from 10 reference articles [11,12,13,14,15,16,17,18,19,20], in which methods using microsatellite instability testing or real-time PCR were compared to immunohistochemistry for detecting mutations of MLH1, MSH2, MSH6, and PMS2 or other human genes. The average PPV and NPV were 91.1 and 93.6%, respectively, and the differences from the lower margins of the 95% confidence intervals (CI; δ, the non-inferiority margin) were −5.49% and −4.3%, respectively. Based on the reference articles, the required numbers of MSI-H and MSS samples were calculated using the equation below, considering a 10% dropout rate.
$$ N=\frac{\left(Z_{\alpha/2}+Z_{\beta}\right)^2\,P\left(1-P\right)}{\left(\delta-\left|P-P_0\right|\right)^2} $$
\(P\) = average PPV and NPV values from the reference articles; \(P_0\) = expected PPV and NPV for this study (equivalence); \(Z_{\alpha/2} = 1.96\); \(Z_{\beta} = 0.842\).
Based on the calculation above, we came to the conclusion that more than 68 MSI-H and 89 MSS CRC samples were required for this study, and performed MSI analysis in 76 MSI-H and 90 MSS CRC samples.
To analyze clinicopathological parameters, statistical analyses were performed using SPSS software, version 21.0.0.0 for Windows (IBM., Armonk, NY, USA). Mann-Whitney tests, Student's t-tests, Fisher's exact tests, or χ2 tests were used depending on the purpose; P-values < 0.05 were considered statistically significant.
Determination of MSI status by three conventional MSI detection methods
We randomly collected 166 cases from 2263 CRCs that had undergone MSI status analysis with various methods from January 2005 to December 2015. The NCI analysis results of the 166 cases were also collected. Seventy-six cases were previously diagnosed as MSI-H and 90 cases as MSS or MSI-L.
To test the performance of conventional MSI detection methods, we conducted MNR method with five quasi-monomorphic mononucleotide markers (Promega MSI Analysis System Kit) and IHC with four MMR proteins (MLH1, MSH2, MSH6, and PMS2). MSI analysis results determined by each method are summarized in Additional file 1: Table S1. Among the 76 MSI-H cases, 74 cases showed MSI-H in all three conventional MSI tests. Case no. 1 was diagnosed as MSI-H by NCI and IHC but was diagnosed as MSI-L by MNR. Case no. 20 was diagnosed as MSI-H by NCI and MNR, but no loss of protein expression was detected by IHC (Additional file 1: Table S1 and Additional file 2: Figure S1).
Diagnostic sensitivity and specificity of three MSI test methods
To assess the clinical sensitivity and specificity of each MSI test, McNemar's tests were performed by comparing the MSI analysis results of IHC, MNR, and PNA methods with the MSI status originally diagnosed by NCI method in the 166 CRCs (Table 1). The clinical sensitivity and specificity of PNA method were 100% (95% confidence interval (CI): 95.2–100%) and 100% (95% CI: 95.9–100%), respectively. On the other hand, the clinical sensitivity and specificity of MNR and IHC methods were 98.68% (95% CI: 92.9–99.8%) and 100% (95% CI: 95.9–100%), respectively.
Table 1 MSI analysis results of NCI, PNA, MNR, and IHC methods for 166 CRCs
Next, we evaluated the maximum sensitivity of the MNR and PNA methods, which used the same five MSI markers. To do so, mixed samples of gDNAs extracted from HeLa cells (microsatellite stable, MSS) and SNU-638 cells (MSI-H) were used, and MNR and PNA analyses were performed with different proportions (0, 1, 5, 10, 20, 40, and 100%) of MSI-H variant. MNR analysis showed that SNU-638 cells harbored 7-, 8-, 8-, 12-, and 10-base deletion mutations in NR21, NR24, BAT25, BAT26, and NR27, respectively. PNA method was capable of detecting alterations in all five MSI markers in mixed samples containing as little as 5% MSI-H variant, whereas MNR method required a sample containing at least 20% MSI-H variant to detect alterations in all five MSI markers. PNA method was further capable of detecting 1% MSI-H variant in the NR21 and BAT25 markers (Fig. 1 and Additional file 3: Table S2). The repeatedly performed experiments using PNA method provided consistent and reproducible results, irrespective of the type of MSI marker and the proportion of MSI variants. To evaluate the qualitative performance of PNA method, we determined the minimum alteration in each MSI marker that PNA method can detect, using artificially synthesized oligo targets containing one- or two-base deletion mutations and one- or two-base insertion mutations. The samples used in this experiment were composed of 100% MSI variant. Oligo targets containing insertion mutations were included because mononucleotide microsatellites often exhibit insertion mutations. The analysis result showed that PNA method clearly detected deletions as small as two bases and insertions as small as one base in the five MSI markers (Additional file 2: Figure S2). Then, we performed PNA analysis using titrated samples with MSI variants containing a two-base deletion or one-base insertion to evaluate the quantitative performance of PNA method. The analysis results showed that PNA method clearly detected a two-base deletion or one-base insertion of the five MSI markers in samples containing more than 5% MSI-H variants (Fig. 2).
Maximum sensitivity evaluation of PNA and MNR method. The maximum sensitivity of PNA and MNR methods was evaluated using mixed samples of genomic DNA samples obtained from HeLa (MSS) and SNU-638 (MSI-H) cells. a PNA method was capable of detecting alteration in all five MSI marker genes in samples containing down to 5% MSI-H variant. b MNR method required at least 20% MSI-H variant in samples to detect alteration in all five MSI marker genes
Evaluation of maximum sensitivity of PNA method using oligo targets containing minimal base pair alteration. a and b PNA analysis was performed using oligo samples containing different portions of MSI variants with two-base deletion or one-base insertion. PNA method clearly detected two-base deletion and one-base insertion in all five MSI markers in samples containing 5% or more MSI-H variant
Determination of MSI status by the PNA probe-mediated melting point analysis
MSI status was determined by PNA method, which was a recently developed real-time PCR based method. The results of PNA method showed sharp melting curves highly specific to each cancer sample as compared with paired normal samples (Fig. 3, left panel). MNR method also showed readable size differences between cancer and paired normal samples (Fig. 3, right panel). Samples with alterations in more than one MSI marker were determined as MSI-H, whereas samples with an alteration in a single MSI marker or no alteration were determined as MSI-L or MSS, respectively (Fig. 3 and Additional file 2: Figure S3). MSI-L and MSS were grouped together for statistical analysis based on a previous report of no significant clinicopathological or molecular differences between MSI-L and MSS CRCs [21]. MSI analysis using PNA method indicated that 76 samples were MSI-H and the remaining 90 were MSS, which was the same result as that of NCI analysis. A comparison of instability in each MSI marker between PNA and MNR method showed significant difference in NR24, with PNA having higher detection rates (Additional file 3: Table S3). We also note that one case (Case no. 1) was determined as MSI-L by MNR but was determined as MSI-H by NCI, IHC, and PNA methods (Additional file 1: Table S1).
Analysis of MSI status in 166 CRCs and matched normal samples using PNA (left panel) and MNR method (right panel). Representative MSI analysis results of CRC samples determined as MSI-H or MSS are shown
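To make the published decision rules concrete, here is a minimal, hypothetical sketch in Python: a marker is called unstable when the tumor-versus-normal melting temperature difference exceeds 3 °C, and a sample with two or more unstable markers is MSI-H, with exactly one is MSI-L, and with none is MSS. This is only an illustration of the criteria described above, not the kit's analysis software, and the function name, variable names, and example temperatures are invented.

MARKERS = ('NR21', 'NR24', 'NR27', 'BAT25', 'BAT26')

def classify_msi(tm_normal, tm_tumor, threshold=3.0):
    # Classify MSI status from per-marker melting temperatures (degrees Celsius).
    unstable = [m for m in MARKERS if abs(tm_normal[m] - tm_tumor[m]) > threshold]
    if len(unstable) >= 2:
        return 'MSI-H', unstable
    elif len(unstable) == 1:
        return 'MSI-L', unstable
    return 'MSS', unstable

# Example with made-up melting temperatures:
normal = {'NR21': 71.0, 'NR24': 69.5, 'NR27': 68.0, 'BAT25': 72.0, 'BAT26': 70.5}
tumor = {'NR21': 66.5, 'NR24': 69.0, 'NR27': 63.5, 'BAT25': 67.0, 'BAT26': 70.0}
print(classify_msi(normal, tumor))  # ('MSI-H', ['NR21', 'NR27', 'BAT25'])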
Clinicopathologic findings of sporadic and hereditary MSI-H CRCs
We observed that MSI-H and MSS CRCs showed distinctive clinicopathologic characteristics, as previously reported [2]. Compared with MSS CRCs, MSI-H CRCs were more frequently found in the right colon and showed larger and exophytic features. Elevated pre-operative carcinoembryonic antigen (CEA) level and advanced T stage were noted in MSS CRCs. Mucinous adenocarcinoma and signet ring cell carcinoma were observed more often in MSI-H CRCs than in MSS CRCs. Patients with MSI-H CRCs were younger on average than patients with MSS CRCs (Additional file 3: Table S4).
Among the 166 CRCs, 155 cases did not satisfy Amsterdam II and Revised Bethesda criteria, which allowed us to clinically define these cases as sporadic CRCs. Eleven cases showed clinical features of HNPCC as suggested by Amsterdam II or Revised Bethesda criteria. Analysis of these 11 clinically diagnosed HNPCC cases by NCI, MNR, IHC, and PNA methods showed that two cases were determined as MSS in all four tests, indicating that these two cases can be considered as familial colorectal cancer type X. In the remaining nine cases, MSI-H was determined by all four tests and mutations in MLH1 or MSH2 were detected (data not shown), which led us to classify these nine cases as Lynch syndrome. Consequently, nine and 67 cases were classified as MSI-H CRCs associated with Lynch syndrome and sporadic MSI-H CRCs, respectively. Whereas most clinicopathologic characteristics of the nine cases with Lynch syndrome and 67 sporadic MSI-H CRC cases were similar, some parameters showed significant differences (Additional file 3: Table S5). Patients in the Lynch syndrome group were younger on average than patients in the sporadic CRC group. Also, mean tumor size was smaller in the Lynch syndrome group, perhaps because the features of hereditary CRCs, such as family history and early symptoms, may lead to early health check-ups and the detection of smaller tumors. There were no differences in other parameters such as MMR protein expression status, tumor location, pre-operative CEA level, gross type, histologic diagnosis, T stage, lymphovascular invasion, mucin formation, Crohn-like reaction, or tumor budding.
Diagnostic value of IHC using antibodies against four MMR proteins to determine MSI status
Next, we performed IHC analysis of four MMR proteins in the 166 CRCs to determine MSI status. MSI-H was defined as the loss of expression of at least one MMR protein in more than 95% of tumor cells. By this criterion, we detected MSI-H in 75 out of 76 samples (98.7%). The percentage of nuclear expression of MMR proteins was also measured in all cases, and MSI status was further analyzed according to various cut-off values of loss of MMR protein expression (Additional file 3: Table S6). When we defined MMR deficit as a complete loss of nuclear staining, we observed 84.21% concordance with the originally diagnosed MSI status using two antibodies against MLH1 and MSH2, 73.68% concordance with two antibodies against PMS2 and MSH6, and 90.79% concordance with all four antibodies against MLH1, MSH2, PMS2, and MSH6. When we defined MMR deficit as < 1% nuclear staining, we observed 84.21% concordance with MLH1 and MSH2 antibodies, 89.47% concordance with PMS2 and MSH6 antibodies, and 96.05% with all four antibodies. When we defined MMR deficit as < 5% nuclear staining, we observed 90.79% concordance with MLH1 and MSH2 antibodies and 98.68% concordance with PMS2 and MSH6 antibodies or all four antibodies. Overall, therefore, a 95% loss criterion (i.e., < 5% residual nuclear staining) was the best match to the original MSI diagnosis results.
The determination of MSI status is important because CRCs with MSI-H show distinguishing clinicopathologic characteristics and require optimized treatment [22, 23]. IHC for MMR proteins can effectively identify CRCs with a MSI-H subtype and provides indirect information about the affected MMR pathway. IHC is a relatively quick and simple assay for determining MSI status by evaluating the protein expression of four MMR genes: MLH1, MSH2, MSH6, and PMS2. MLH1 forms a functional complex with PMS2, and MSH2 forms a functional complex with MSH6. Because MLH1 is responsible for the stability of PMS2, the combined loss of PMS2 and MLH1 suggests that MLH1 is defective. Similarly, the combined loss of MSH2 and MSH6 suggests a defect within MSH2. Our finding that the co-loss of MLH1/PMS2 or MSH2/MSH6 predominantly occurred in MSI-H CRC samples further supports a relationship among MMR proteins.
We also observed variable MMR expression patterns such as loss of a single MMR marker, loss of MLH1 together with MSH2 and MSH6, loss of MSH6 together with MLH1 and PMS2, and loss of all four markers. Exclusive loss of PMS2 expression could be explained by a PMS2 or MLH1 mutation resulting in intact expression of MLH1 with abnormal function [24]. Loss of MSH6 together with MLH1 and PMS2 could occur because of a somatic mutation in MSH6 combined with MLH1 hypermethylation [25]. Rarely, a null pattern has reportedly been caused by a germline MSH2 mutation together with somatic MLH1 hypermethylation [26]. The interpretation of these rare MMR expression patterns is challenging, which limits the application of IHC analysis for MSI detection. Moreover, the interpretation of IHC results can be limited by the cut-off value. Although some studies suggest 5% or 10% cut-off values [27, 28], there is no consensus on the minimal percentage of nuclear staining that should be considered as intact expression. Because slight discordance between IHC and MSI molecular tests is rather natural [29], our results suggest that > 5% nuclear staining was the best match to MSI test results. The challenging interpretation of IHC results could be due to somatic missense mutations in sporadic CRCs that can reduce IHC staining without affecting MMR protein expression and thus not cause pathogenesis [30]. Tissue fixation status, somatic mosaicism, or other gene defects that also cause a MSI-H phenotype also could affect the interpretation of IHC results [28]. Indeed, there was one case (Case no. 20), determined as MSI-H by other molecular tests, that showed expression of all four MMR proteins. Therefore, we suggest that additional molecular tests of MSI status should be performed in cases with decreased expression of one or more MMR genes and/or clinicopathologic features related to the MSI-H subtype.
Several methods using amplicon melting analysis have been suggested for genotyping and mutation scanning. Early techniques required a fluorescently labeled primer and were limited to the detection of mutations residing in the melting domain of the labeled primer. Taking advantage of a double-stranded DNA dye, Wittwer et al. reported another amplicon melting analysis method that did not require any labeled primers. This method was not limited by the requirement that sequence variants have to be in the same melting domain [31]. Likewise, real-time PCR based methods have been developed for the analysis of various DNA alterations. Here, we evaluated the performance of three MSI detection molecular tests and found that PNA method can be used as a time- and cost-efficient molecular test for MSI diagnosis. Instead of using a set of five marker genes (two mononucleotide repeats and three dinucleotide repeats) recommended by the NCI, PNA method employs a panel of five marker genes containing quasi-monomorphic mononucleotide repeats proposed by Buhard et al. [32]. Many previous studies have shown that quasi-monomorphic mononucleotide repeats are far more sensitive than dinucleotide repeats in detecting MSI [33,34,35]. PNA method adopts a PNA-based real-time PCR sensing strategy that can be performed by a conventional real-time PCR machine and requires only a small amount of sample. We demonstrate that PNA method was capable of detecting a very small proportion of a mutant gene variant (5%) in a mixed sample of wild-type and mutant gDNAs, which complies with the College of American Pathologists guideline that molecular tests should be capable of detecting mutation in specimens with > 5% tumor cell population [36]. PNA probes enable strong amplification of mutant and weak amplification of wild type allele in a sample containing wild type and mutant alleles, which guarantees sensitive detection of mutant variants without an internal positive control. Since most clinical samples contain both normal and mutant variants, PNA method offers broader applicability than previous melting point analysis methods. As shown in the MNR analysis results of NR24 and BAT25 in samples containing 20% MSI-H variants (Fig. 1b), interpretation of data from MNR could be highly confusing and subjective. This might be due to artifacts in capillary electrophoresis that appear as smaller, split, and stutter peaks. Challenging interpretation of MNR analysis results likely reduces data reproducibility. On the other hand, PNA method provides data consisting of clear and sharp melting peaks detected at a specific melting temperature, which facilitates interpretation of data for researchers. The PNA method also has some drawbacks. The procedure for PNA method takes longer than a general real-time PCR protocol for accuracy reasons (requiring ~ 4.5 h), and PNA method cannot distinguish homozygotes from heterozygotes, which might not be a critical disadvantage for this method due to using quasi-monomorphic markers. In terms of cost-efficiency, the cost for running PNA method is about three-fifths of that for PCR fragment analysis-based MSI tests (MNR and NCI methods). Moreover, PNA method can be performed in a general real-time PCR machine, which costs only about one-fourth as much as the sequencer required for PCR fragment analysis. Overall, PNA method has some advantages over MNR and NCI methods.
The determination of MSI status is critical for the intense lifelong screening of patients with Lynch syndrome and the appropriate treatment of patients with sporadic CRCs. Amsterdam criteria (revised to Amsterdam II criteria in 1998) and revised Bethesda criteria are used to identify HNPCCs, which could lead to the misdiagnosis of some CRC patients with Lynch syndrome. Therefore, it is highly recommended that every diagnosed CRC patient undergo Lynch syndrome screening by IHC and/or molecular tests. However, Lynch syndrome screening for every colon cancer patient is not cost-effective. We therefore propose two possible screening algorithms, both of which employ screening for MSI using IHC for PMS2 and MSH6 based on our finding that the detection rate of MSI using these two markers was the same as that using all four markers when the cut-off value was ≤5% nuclear expression (Additional file 3: Table S6). In both algorithms, CRC patients are divided into two groups depending on clinical criteria such as Amsterdam II or Revised Bethesda criteria. In the first algorithm, MSI screening is initially performed by IHC using PMS2 and MSH6 markers, which reduces the cost by half compared to applying IHC analysis with all four MMR markers. Next, a cost- and time-efficient tool, such as PNA, is applied to determine MSI presence in CRCs, which also reduces the cost to about three-fifths and the time by half compared to using conventional molecular tests (Additional file 2: Figure S4). In the second algorithm, MSI screening is initially performed using PNA and subsequent IHC analysis is performed to identify suspected gene(s), which could be as time- and cost-efficient as the first algorithm (Additional file 2: Figure S5). However, a molecular test for MSI analysis should be carefully selected, since test results from independent methods, even ones that use the same MSI markers, sometimes show discrepancies. In our cohort, there was a sample (Case no. 1) that was determined as MSI-L by MNR method, but as MSI-H by PNA method. In this case, comparing two results is possible, but deciding which is correct can be difficult.
In this study, we have evaluated the MSI detection performance of a recently developed PNA method, conventional molecular tests, and immunohistochemistry. Based on our findings, we suggest that PNA method could be used as a simple alternative to existing molecular tests.
CRC:
Colorectal carcinoma
FFPE:
Formalin-fixed paraffin-embedded
HNPCC:
Hereditary non-polyposis colorectal cancer
IHC:
Immunohistochemistry
MMR:
Mismatch repair
MSI-H:
High microsatellite instability
MSI-L:
Low microsatellite instability
MSS:
Microsatellite stable
PNA:
Peptide nucleic acid
Vilar E, Gruber SB. Microsatellite instability in colorectal cancer-the stable evidence. Nat Rev Clin Oncol. 2010;7(3):153–62.
Boland CR, Goel A. Microsatellite instability in colorectal cancer. Gastroenterology. 2010;138(6):2073–2087 e2073.
Kim WK, Park M, Kim YJ, Shin N, Kim HK, You KT, Kim H. Identification and selective degradation of neopeptide-containing truncated mutant proteins in the tumors with high microsatellite instability. Clin Cancer Res. 2013;19(13):3369–82.
Asaoka Y, Ijichi H, Koike K. PD-1 blockade in tumors with mismatch-repair deficiency. N Engl J Med. 2015;373(20):1979.
Michot JM, Bigenwald C, Champiat S, Collins M, Carbonnel F, Postel-Vinay S, Berdelou A, Varga A, Bahleda R, Hollebecque A, et al. Immune-related adverse events with immune checkpoint blockade: a comprehensive review. Eur J Cancer. 2016;54:139–48.
Hause RJ, Pritchard CC, Shendure J, Salipante SJ. Classification and characterization of microsatellite instability across 18 cancer types. Nat Med. 2016;22(11):1342–50.
Zhang X, Li J. Era of universal testing of microsatellite instability in colorectal cancer. World J Gastrointest Oncol. 2013;5(2):12–9.
Boland CR, Thibodeau SN, Hamilton SR, Sidransky D, Eshleman JR, Burt RW, Meltzer SJ, Rodriguez-Bigas MA, Fodde R, Ranzani GN, et al. A National Cancer Institute workshop on microsatellite instability for cancer detection and familial predisposition: development of international criteria for the determination of microsatellite instability in colorectal cancer. Cancer Res. 1998;58(22):5248–57.
Hur D, Kim MS, Song M, Jung J, Park H. Detection of genetic variation using dual-labeled peptide nucleic acid (PNA) probe-based melting point analysis. Biol Proced Online. 2015;17:14.
Kim YT, Kim JW, Kim SK, Joe GH, Hong IS. Simultaneous genotyping of multiple somatic mutations by using a clamping PNA and PNA detection probes. Chembiochem. 2015;16(2):209–13.
Shia J. Immunohistochemistry versus microsatellite instability testing for screening colorectal cancer patients at risk for hereditary nonpolyposis colorectal cancer syndrome. Part I. the utility of immunohistochemistry. J Mol Diagn. 2008;10(4):293–300.
Lindor NM, Burgart LJ, Leontovich O, Goldberg RM, Cunningham JM, Sargent DJ, Walsh-Vockley C, Petersen GM, Walsh MD, Leggett BA, et al. Immunohistochemistry versus microsatellite instability testing in phenotyping colorectal tumors. J Clin Oncol. 2002;20(4):1043–8.
Brevet M, Arcila M, Ladanyi M. Assessment of EGFR mutation status in lung adenocarcinoma by immunohistochemistry using antibodies specific to the two major forms of mutant EGFR. J Mol Diagn. 2010;12(2):169–76.
Park S, Wang HY, Kim S, Ahn S, Lee D, Cho Y, Park KH, Jung D, Kim SI, Lee H. Quantitative RT-PCR assay of HER2 mRNA expression in formalin-fixed and paraffin-embedded breast cancer tissues. Int J Clin Exp Pathol. 2014;7(10):6752–9.
Wang J, Cai Y, Dong Y, Nong J, Zhou L, Liu G, Su D, Li X, Wu S, Chen X, et al. Clinical characteristics and outcomes of patients with primary lung adenocarcinoma harboring ALK rearrangements detected by FISH, IHC, and RT-PCR. PLoS One. 2014;9(7):e101551.
Angulo B, Conde E, Suárez-Gauthier A, Plaza C, Martínez R, Redondo P, Izquierdo E, Rubio-Viqueira B, Paz-Ares L, Hidalgo M, et al. A comparison of EGFR mutation testing methods in lung carcinoma: direct sequencing, real-time PCR and immunohistochemistry. PLoS One. 2012;7(8):e43842.
Millson A, Suli A, Hartung L, Kunitake S, Bennett A, Nordberg MC, Hanna W, Wittwer CT, Seth A, Lyon E. Comparison of two quantitative polymerase chain reaction methods for detecting HER2/neu amplification. J Mol Diagn. 2003;5(3):184–90.
Li Y, Pan Y, Wang R, Sun Y, Hu H, Shen X, Lu Y, Shen L, Zhu X, Chen H. ALK-rearranged lung cancer in Chinese: a comprehensive assessment of clinicopathology, IHC, FISH and RT-PCR. PLoS One. 2013;8(7):e69016.
Cronin M, Pho M, Dutta D, Stephans JC, Shak S, Kiefer MC, Esteban JM, Baker JB. Measurement of gene expression in archival paraffin-embedded tissues: development and performance of a 92-gene reverse transcriptase-polymerase chain reaction assay. Am J Pathol. 2004;164(1):35–42.
Long GV, Wilmott JS, Capper D, Preusser M, Zhang YE, Thompson JF, Kefford RF, von Deimling A, Scolyer RA. Immunohistochemistry is highly sensitive and specific for the detection of V600E BRAF mutation in melanoma. Am J Surg Pathol. 2013;37(1):61–5.
Pawlik TM, Raut CP, Rodriguez-Bigas MA. Colorectal carcinogenesis: MSI-H versus MSI-L. Dis Markers. 2004;20(4–5):199–206.
Kawakami H, Zaanan A, Sinicrope FA. Microsatellite instability testing and its role in the management of colorectal cancer. Curr Treat Options in Oncol. 2015;16(7):30.
Gatalica Z, Vranic S, Xiu J, Swensen J, Reddy S. High microsatellite instability (MSI-H) colorectal carcinoma: a brief review of predictive biomarkers in the era of personalized medicine. Familial Cancer. 2016;15(3):405–12.
Rosty C, Clendenning M, Walsh MD, Eriksen SV, Southey MC, Winship IM, Macrae FA, Boussioutas A, Poplawski NK, Parry S, et al. Germline mutations in PMS2 and MLH1 in individuals with solitary loss of PMS2 expression in colorectal carcinomas from the Colon Cancer family registry cohort. BMJ Open. 2016;6(2):e010293.
Shia J, Zhang L, Shike M, Guo M, Stadler Z, Xiong X, Tang LH, Vakiani E, Katabi N, Wang H, et al. Secondary mutation in a coding mononucleotide tract in MSH6 causes loss of immunoexpression of MSH6 in colorectal carcinomas with MLH1/PMS2 deficiency. Modern Pathol. 2013;26(1):131–8.
Hagen CE, Lefferts J, Hornick JL, Srivastava A. "Null pattern" of immunoreactivity in a lynch syndrome-associated colon cancer due to germline MSH2 mutation and somatic MLH1 hypermethylation. Am J Surg Pathol. 2011;35(12):1902–5.
Pai RK, Pai RK. A practical approach to the evaluation of gastrointestinal tract carcinomas for lynch syndrome. Am J Surg Pathol. 2016;40(4):e17–34.
Chen W, Swanson BJ, Frankel WL. Molecular genetics of microsatellite-unstable colorectal cancer for pathologists. Diagn Pathol. 2017;12(1):24.
Bartley AN, Luthra R, Saraiya DS, Urbauer DL, Broaddus RR. Identification of cancer patients with lynch syndrome: clinically significant discordances and problems in tissue-based mismatch repair testing. Cancer Prev Res (Phila). 2012;5(2):320–7.
Richman S. Deficient mismatch repair: read all about it (review). Int J Oncol. 2015;47(4):1189–202.
Wittwer CT, Reed GH, Gundry CN, Vandersteen JG, Pryor RJ. High-resolution genotyping by amplicon melting analysis using LCGreen. Clin Chem. 2003;49(6 Pt 1):853–60.
Buhard O, Suraweera N, Lectard A, Duval A, Hamelin R. Quasimonomorphic mononucleotide repeats for high-level microsatellite instability analysis. Dis Markers. 2004;20(4–5):251–7.
Suraweera N, Duval A, Reperant M, Vaury C, Furlan D, Leroy K, Seruca R, Iacopetta B, Hamelin R. Evaluation of tumor microsatellite instability using five quasimonomorphic mononucleotide repeats and pentaplex PCR. Gastroenterology. 2002;123(6):1804–11.
Bacher JW, Flanagan LA, Smalley RL, Nassif NA, Burgart LJ, Halberg RB, Megid WM, Thibodeau SN. Development of a fluorescent multiplex assay for detection of MSI-high tumors. Dis Markers. 2004;20(4–5):237–50.
Murphy KM, Zhang S, Geiger T, Hafez MJ, Bacher J, Berg KD, Eshleman JR. Comparison of the microsatellite instability analysis system and the Bethesda panel for the determination of microsatellite instability in colorectal cancers. J Mol Diagn. 2006;8(3):305–11.
Sepulveda AR, Hamilton SR, Allegra CJ, Grody W, Cushman-Vokoun AM, Funkhouser WK, Kopetz SE, Lieu C, Lindor NM, Minsky BD, et al. Molecular biomarkers for the evaluation of colorectal Cancer: guideline from the American Society for Clinical Pathology, College of American Pathologists, Association for Molecular Pathology, and the American Society of Clinical Oncology. J Clin Oncol. 2017;35(13):1453–86.
This research was supported by the Convergence Technology R&D Program (grant number: S2392537) funded by the Small and Medium Business Administration (SMBA, Korea) and a grant of Korea Health Technology R&D Project through the Korea Health Industry Development Institute (KHIDI), funded by the Ministry of Health & Welfare, Republic of Korea (grant number: HI16C0257).
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
Department of Pathology, Yonsei University College of Medicine, 50 Yonsei-ro, Seodaemun-gu, Seoul, 120-752, South Korea
Mi Jang, Yujin Kwon, Hoguen Kim, Hyunki Kim & Won Kyu Kim
Brain Korea 21 PLUS Projects for Medical Science, Yonsei University College of Medicine, 50 Yonsei-ro, Seodaemun-gu, Seoul, 120-752, South Korea
Yujin Kwon, Hoguen Kim & Won Kyu Kim
Department of Surgery, Yonsei University College of Medicine, Seoul, 120-752, South Korea
Byung Soh Min
Department of Internal Medicine, Institute of Gastroenterology, Yonsei University College of Medicine, Seoul, 120-752, South Korea
Yehyun Park & Tae Il Kim
Department of Surgery and Cancer, Imperial College London, London, W120NN, UK
Sung Pil Hong
Conception and design: WK and MJ. Development of methodology: WK and MJ. Data acquisition: WK, MJ and YK. Data analysis and interpretation: SH, YK, HyK, BM, YP, WK and MJ. Manuscript writing and review: YK, SH, MJ, WK and HoK. Administrative, technical, or material support: SH, TK, WK and HoK Study supervision: WK. All authors read and approved the final manuscript.
Correspondence to Won Kyu Kim.
This study was approved by the Institutional Review Board of Yonsei University of College of Medicine (IRB-1-2017-0012). Written informed consent was obtained from all patients.
No personal information is included in the publication; thus no dedicated approval was required.
Table S1. MSI status as determined by NCI, MNR, IHC, and PNA methods for 166 CRCs. (PDF 514 kb)
Figure S1. Case no. 20 was diagnosed as MSI-H by NCI, MNR, and PNA methods but no loss of nuclear expression was detected using IHC for MMR proteins. Figure S2. Determination of minimal base alteration that can be detected by PNA method. (a and b) PNA analysis was performed using artificially synthesized MSI variants containing − 1 or − 2 deletion mutations and + 1 or + 2 insertion mutations. Figure S3. Representative MSI analysis results of CRC samples determined as MSI-L by PNA (left panel) and MNR method (right panel). Figure S4. Type 1 algorithm for MSI screening of sporadic and hereditary CRC patients by IHC and subsequent molecular tests. Figure S5. Type 2 algorithm for MSI screening of sporadic and hereditary CRC patients by molecular tests and subsequent IHC. (PDF 447 kb)
Table S2. Sensitivity of MNR and PNA method. Table S3. MSI-H detection rate for each marker used in PNA and MNR method. Table S4. Clinicopathologic characteristics of 76 MSI-H CRC patients and 90 MSS CRC patients. Table S5. Clinicopathologic characteristics of patients with MSI-H CRCs associated with sporadic conditions or Lynch syndrome. Table S6. IHC-mediated MSI-H detection rate depending on the combination of markers and variable cut-off values for loss of MMR protein expression. (PDF 342 kb)
Jang, M., Kwon, Y., Kim, H. et al. Microsatellite instability test using peptide nucleic acid probe-mediated melting point analysis: a comparison study. BMC Cancer 18, 1218 (2018). https://doi.org/10.1186/s12885-018-5127-6
Microsatellite instability
Real-time polymerase chain reaction
Peptide nucleic acid probe and immunohistochemistry
Posted 2020-11-13 Updated 2021-11-27 Analysis / Functional Analysis
The Big Three Pt. 6 - Closed Graph Theorem with Applications
(Before everything: elementary background in topology and vector spaces, in particular Banach spaces, is assumed.)
A surprising result of Banach spaces
We can define several relations between two norms. Suppose we have a topological vector space \(X\) and two norms \(\lVert \cdot \rVert_1\) and \(\lVert \cdot \rVert_2\). One says \(\lVert \cdot \rVert_1\) is weaker than \(\lVert \cdot \rVert_2\) if there is \(K>0\) such that \(\lVert x \rVert_1 \leq K \lVert x \rVert_2\) for all \(x \in X\). Two norms are equivalent if each is weaker than the other (trivially this is an equivalence relation). The idea of stronger and weaker norms is related to the idea of "finer" and "coarser" topologies in the setting of topological spaces.
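For a concrete example, on \(\mathbb{R}^n\) the norms \(\lVert x \rVert_1 = \sum_i |x_i|\) and \(\lVert x \rVert_\infty = \max_i |x_i|\) are equivalent, since \(\lVert x \rVert_\infty \leq \lVert x \rVert_1 \leq n \lVert x \rVert_\infty\) for all \(x\).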
So what about convergence? Unsurprisingly the relation carries over to limits, which can be verified with an elementary \(\epsilon\)-\(N\) argument. Suppose \(\lVert\cdot\rVert_2\) is weaker than \(\lVert\cdot\rVert_1\), so that \(\lVert x \rVert_2 \leq K \lVert x \rVert_1\) for all \(x\), and suppose \(\lVert x_n - x \rVert_1 \to 0\) as \(n \to \infty\). Then \[ \lVert x_n - x \rVert_2 \leq K \lVert x_n-x \rVert_1 < K\varepsilon \]
for all sufficiently large \(n\). Hence \(\lVert x_n - x \rVert_2 \to 0\) as well: convergence in the stronger norm implies convergence in the weaker one. But what about the converse? We now introduce a weaker relation between norms.
(Definition) Two norms \(\lVert \cdot \rVert_1\) and \(\lVert \cdot \rVert_2\) of a topological vector space are compatible if given that \(\lVert x_n - x \rVert_1 \to 0\) and \(\lVert x_n - y \rVert_2 \to 0\) as \(n \to \infty\), we have \(x=y\).
By the uniqueness of limit, we see if two norms are equivalent, then they are compatible. And surprisingly, with the help of the closed graph theorem we will discuss in this post, we have
(Theorem 1) If \(\lVert \cdot \rVert_1\) and \(\lVert \cdot \rVert_2\) are compatible, and both \((X,\lVert\cdot\rVert_1)\) and \((X,\lVert\cdot\rVert_2)\) are Banach, then \(\lVert\cdot\rVert_1\) and \(\lVert\cdot\rVert_2\) are equivalent.
This result looks natural but is not easy to prove, since there is no obvious way to build a bridge between the limits and a general inequality. Before the proof, we need to introduce some terminology.
(Definition) For \(f:X \to Y\), the graph of \(f\) is defined by \[ G(f)=\{(x,f(x)) \in X \times Y:x \in X\}. \]
If both \(X\) and \(Y\) are topological spaces, and the topology of \(X \times Y\) is the usual one, that is, the smallest topology that contains all sets \(U \times V\) where \(U\) and \(V\) are open in \(X\) and \(Y\) respectively, and if \(f: X \to Y\) is continuous, it is natural to expect \(G(f)\) to be closed. For example, by taking \(f(x)=x\) and \(X=Y=\mathbb{R}\), one would expect the diagonal line of the plane to be closed.
(Definition) The topological vector space \((X,\tau)\) is an \(F\)-space if \(\tau\) is induced by a complete invariant metric \(d\). Here invariant means that \(d(x+z,y+z)=d(x,y)\) for all \(x,y,z \in X\).
A Banach space is easily verified to be an \(F\)-space by defining \(d(x,y)=\lVert x-y \rVert\).
(Open mapping theorem) See this post
By definition of closed set, we have a practical criterion on whether \(G(f)\) is closed.
(Proposition 1) \(G(f)\) is closed if and only if, for any sequence \((x_n)\) such that the limits \[ x=\lim_{n \to \infty}x_n \quad \text{ and }\quad y=\lim_{n \to \infty}f(x_n) \] exist, we have \(y=f(x)\).
In this case, we say \(f\) is closed. For continuous functions, things are trivial.
(Proposition 2) If \(X\) and \(Y\) are two topological spaces and \(Y\) is Hausdorff, and \(f:X \to Y\) is continuous, then \(G(f)\) is closed.
Proof. Let \(G^c\) be the complement of \(G(f)\) with respect to \(X \times Y\). Fix \((x_0,y_0) \in G^c\), so that \(y_0 \neq f(x_0)\). By the Hausdorff property of \(Y\), there exist open subsets \(U \subset Y\) and \(V \subset Y\) such that \(y_0 \in U\) and \(f(x_0) \in V\) and \(U \cap V = \varnothing\). Since \(f\) is continuous, \(W=f^{-1}(V)\) is open in \(X\). We have obtained an open neighborhood \(W \times U\) containing \((x_0,y_0)\) which has empty intersection with \(G(f)\). That is, every point of \(G^c\) has an open neighborhood contained in \(G^c\), hence is an interior point. Therefore \(G^c\) is open, which is to say that \(G(f)\) is closed. \(\square\)
[Figure: closed-graph]
REMARKS. For \(X \times Y=\mathbb{R} \times \mathbb{R}\), we have a simple visualization. For \(\varepsilon>0\), there exists some \(\delta\) such that \(|f(x)-f(x_0)|<\varepsilon\) whenever \(|x-x_0|<\delta\). For \(y_0 \neq f(x_0)\), pick \(\varepsilon\) such that \(0<\varepsilon<\frac{1}{2}|f(x_0)-y_0|\), we have two boxes (\(CDEF\) and \(GHJI\) on the picture), namely \[ B_1=\{(x,y):x_0-\delta<x<x_0+\delta,f(x_0)-\varepsilon<y<f(x_0)+\varepsilon\} \] and \[ B_2=\{(x,y):x_0-\delta<x<x_0+\delta,y_0-\varepsilon<y<y_0+\varepsilon\}. \] In this case, \(B_2\) will not intersect the graph of \(f\), hence \((x_0,y_0)\) is an interior point of \(G^c\).
The Hausdorff property of \(Y\) is not removable. To see this, since \(X\) has no restriction, it suffices to take a look at \(X \times X\). Let \(f\) be the identity map (which is continuous); its graph \[ G(f)=\{(x,x):x \in X\} \] is the diagonal. Suppose \(X\) is not Hausdorff. By definition, there exist distinct \(x\) and \(y\) such that every neighborhood of \(x\) intersects every neighborhood of \(y\). The point \((x,y)\) lies in \(G^c\), but every basic neighborhood \(U \times V\) of \((x,y)\) contains a point \((z,z)\) with \(z \in U \cap V\), i.e. a point of \(G(f)\). So \((x,y)\) is not an interior point of \(G^c\), hence \(G^c\) is not open and \(G(f)\) is not closed.
Also, as an immediate consequence, every affine algebraic variety in \(\mathbb{C}^n\) and \(\mathbb{R}^n\) is closed with respect to the Euclidean topology. Further, we obtain the Zariski topology \(\mathcal{Z}\) by declaring the complement of every affine algebraic variety to be open, i.e. if \(V\) is an affine algebraic variety, then \(V^c \in \mathcal{Z}\). It's worth noting that \(\mathcal{Z}\) is not Hausdorff (for example, on \(\mathbb{C}^1\) every nonempty Zariski-open set is cofinite, so any two nonempty open sets intersect) and in fact much coarser than the Euclidean topology, although an affine algebraic variety is closed in both the Zariski topology and the Euclidean topology.
The closed graph theorem
After we have proved this theorem, we are able to prove the theorem about compatible norms. We shall assume that both \(X\) and \(Y\) are \(F\)-spaces, since the norm plays no critical role here. This offers a greater variety but shall not be considered as an abuse of abstraction.
(The Closed Graph Theorem) Suppose
\(X\) and \(Y\) are \(F\)-spaces,
\(f:X \to Y\) is linear,
\(G(f)\) is closed in \(X \times Y\).
Then \(f\) is continuous.
In short, the closed graph theorem gives a sufficient condition to claim the continuity of \(f\) (keep in mind, linearity does not imply continuity). If \(f:X \to Y\) is continuous, then \(G(f)\) is closed; if \(G(f)\) is closed and \(f\) is linear, then \(f\) is continuous.
Proof. First of all we should make \(X \times Y\) an \(F\)-space by assigning addition, scalar multiplication and a metric. Addition and scalar multiplication are defined componentwise in the nature of things: \[ \alpha(x_1,y_1)+\beta(x_2,y_2)=(\alpha x_1+\beta x_2,\alpha y_1 + \beta y_2). \] The metric can be defined without extra effort: \[ d((x_1,y_1),(x_2,y_2))=d_X(x_1,x_2)+d_Y(y_1,y_2). \] Then it can be verified that \(X \times Y\) is a topological vector space whose topology is induced by the complete translation-invariant metric \(d\), i.e. an \(F\)-space. (A quick check of the invariance is given below; it's recommended to verify the remaining details yourself.)
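Indeed, the invariance of \(d\) follows directly from the invariance of \(d_X\) and \(d_Y\): \[ \begin{aligned} d\big((x_1,y_1)+(z,w),\,(x_2,y_2)+(z,w)\big) &= d_X(x_1+z,x_2+z)+d_Y(y_1+w,y_2+w) \\ &= d_X(x_1,x_2)+d_Y(y_1,y_2) \\ &= d\big((x_1,y_1),(x_2,y_2)\big). \end{aligned} \] Completeness follows since a sequence \(\big((x_n,y_n)\big)\) is Cauchy for \(d\) exactly when \((x_n)\) and \((y_n)\) are Cauchy for \(d_X\) and \(d_Y\) respectively.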
Since \(f\) is linear, the graph \(G(f)\) is a subspace of \(X \times Y\). Next we quote an elementary result from point-set topology: a subset of a complete metric space is closed if and only if it is complete. By the translation-invariance of \(d\), we see that \(G(f)\), being closed, is an \(F\)-space as well. Let \(p_1: X \times Y \to X\) and \(p_2: X \times Y \to Y\) be the natural projections respectively (for example, \(p_1(x,y)=x\)). Our proof is done by verifying the properties of \(p_1\) and \(p_2\) on \(G(f)\).
For simplicity one can simply define \(p_1\) on \(G(f)\) instead of the whole space \(X \times Y\), but we make it a global projection on purpose to emphasize the difference between global properties and local properties. One can also write \(p_1|_{G(f)}\) to dodge confusion.
Claim 1. \(p_1\) (with restriction on \(G(f)\)) defines an isomorphism between \(G(f)\) and \(X\).
For \(x \in X\), we see \(p_1(x,f(x)) = x\) (surjectivity). If \(p_1(x,f(x))=0\), then \(x=0\) and therefore \((x,f(x))=(0,0)\), hence the restriction of \(p_1\) to \(G(f)\) has trivial kernel (injectivity). Further, it's trivial that \(p_1\) is linear.
Claim 2. \(p_1\) is continuous on \(G(f)\).
Suppose \((x_n,f(x_n)) \to (x,f(x))\) in \(G(f)\). Since \(d_X(x_n,x) \leq d\big((x_n,f(x_n)),(x,f(x))\big) \to 0\), we have \(p_1(x_n,f(x_n))=x_n \to x=p_1(x,f(x))\). The continuity of \(p_1\) on \(G(f)\) is proved.
Claim 3. \(p_1\) is a homeomorphism with restriction on \(G(f)\).
We already know that \(G(f)\) is an \(F\)-space, and so is \(X\). The image \(p_1(G(f))=X\) is of the second category in itself by the Baire category theorem, since \(X\) is a complete metric space; moreover \(p_1\) is continuous, linear and one-to-one on \(G(f)\). By the open mapping theorem, \(p_1\) is an open mapping on \(G(f)\), hence a homeomorphism (a continuous open bijection).
Claim 4. \(p_2\) is continuous.
This follows in the same way as the proof of claim 2, and is even easier since there is no need to deal with \(f\).
Now things are immediate once one realises that \(f=p_2 \circ p_1|_{G(f)}^{-1}\), which implies that \(f\) is continuous. \(\square\)
Before we go for theorem 1 at the beginning, we drop an application on Hilbert spaces.
Let \(T\) be a bounded operator on the Hilbert space \(L_2([0,1])\) such that whenever \(\phi \in L_2([0,1])\) is a continuous function, so is \(T\phi\). Then the restriction of \(T\) to \(C([0,1])\) is a bounded operator on \(C([0,1])\).
For details please check this.
Now we go for the identification of norms. Define \[ \begin{aligned} f:(X,\lVert\cdot\rVert_1) &\to (X,\lVert\cdot\rVert_2) \\ x &\mapsto x \end{aligned} \] i.e. the identity map between two Banach spaces (hence \(F\)-spaces). Then \(f\) is linear. We need to prove that \(G(f)\) is closed. Suppose \[ \lim_{n \to \infty}\lVert x_n -x \rVert_1=0 \quad\text{and}\quad \lim_{n \to \infty}\lVert f(x_n)-y \rVert_2=\lim_{n \to \infty}\lVert x_n -y\rVert_2=0. \] Since the two norms are compatible, we get \(x=y=f(x)\), hence \(G(f)\) is closed. By the closed graph theorem, \(f\) is continuous, hence bounded, and we have some \(K\) such that \[ \lVert x \rVert_2 =\lVert f(x) \rVert_2 \leq K \lVert x \rVert_1. \] By defining \[ \begin{aligned} g:(X,\lVert\cdot\rVert_2) &\to (X,\lVert\cdot\rVert_1) \\ x &\mapsto x \end{aligned} \] we see \(g\) is continuous as well, hence we have some \(K'\) such that \[ \lVert x \rVert_1 =\lVert g(x) \rVert_1 \leq K'\lVert x \rVert_2. \] Hence each norm is weaker than the other, i.e. the two norms are equivalent.
Since there is no strong reason to write more posts on this topic, i.e. the three fundamental theorems of linear functional analysis, I think it's time to make a list of the series. It's been around half a year.
The Big Three Pt. 1 - Baire Category Theorem Explained
The Big Three Pt. 2 - The Banach-Steinhaus Theorem
The Big Three Pt. 3 - The Open Mapping Theorem (Banach Space)
The Big Three Pt. 4 - The Open Mapping Theorem (F-Space)
The Big Three Pt. 5 - The Hahn-Banach Theorem (Dominated Extension)
Walter Rudin, Functional Analysis
Peter Lax, Functional Analysis
Jesús Gil de Lamadrid, Some Simple Applications of the Closed Graph Theorem
About this post
The Hahn-Banach theorem has been a central tool of functional analysis and therefore enjoys a wide variety of forms, many of which have numerous uses in other fields of mathematics. Therefore it's not possible to cover all of them. In this post we are covering two 'abstract enough' results, which are sometimes called the dominated extension theorems. Both of them will be discussed in a real vector space with no topology endowed, which allows the results to be applied to any topological vector space.
Another interesting thing is, we will be using the axiom of choice, or whichever equivalent form you prefer, for example Zorn's lemma or the well-ordering principle. Before everything, we need to examine more properties of vector spaces.
Vector space
It's obvious that every complex vector space is also a real vector space. Suppose \(X\) is a complex vector space, and we shall give the definition of real-linear and complex-linear functionals.
An additive functional \(\Lambda\) on \(X\) is called real-linear (complex-linear) if \(\Lambda(\alpha x)=\alpha\Lambda(x)\) for every \(x \in X\) and for every real (complex) scalar \(\alpha\).
For *-linear functionals, we have two important but easy theorems.
If \(u\) is the real part of a complex-linear functional \(f\) on \(X\), then \(u\) is real-linear and \[ f(x)=u(x)-iu(ix) \quad (x \in X). \]
Proof. For complex \(f(x)=u(x)+iv(x)\), it suffices to express \(v(x)\) in terms of \(u\). Since \[ if(x)=iu(x)-v(x), \] we see \(\Im(f(x))=v(x)=-\Re(if(x))\). Therefore \[ f(x)=u(x)-i\Re(if(x))=u(x)-i\Re(f(ix)) \] but \(\Re(f(ix))=u(ix)\), so we get \[ f(x)=u(x)-iu(ix). \] To show that \(u(x)\) is real-linear, note that \[ f(x+y)=u(x+y)+iv(x+y)=f(x)+f(y)=u(x)+u(y)+i(v(x)+v(y)). \] Therefore \(u(x)+u(y)=u(x+y)\). A similar argument applies to real scalars \(\alpha\). \(\square\)
Conversely, we are able to generate a complex-linear functional by a real one.
If \(u\) is a real-linear functional, then \(f(x)=u(x)-iu(ix)\) is a complex-linear functional
Proof. Direct computation. \(\square\)
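For completeness, the key step of that computation, using only the real-linearity of \(u\), is \[ f(ix) = u(ix)-iu(i\cdot ix) = u(ix)-iu(-x) = u(ix)+iu(x) = i\big(u(x)-iu(ix)\big)=if(x), \] and together with additivity and real homogeneity (both inherited from \(u\)) this gives complex-linearity.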
Suppose now \(X\) is a complex topological vector space, we see a complex-linear functional on \(X\) is continuous if and only if its real part is continuous. Every continuous real-linear \(u: X \to \mathbb{R}\) is the real part of a unique complex-linear continuous functional \(f\).
Sublinear, seminorm
A sublinear functional is 'almost' linear and also 'almost' a norm. Explicitly, we say \(p: X \to \mathbb{R}\) is a sublinear functional when it satisfies \[ \begin{aligned} p(x+y) &\leq p(x)+p(y) \\ p(tx) &= tp(x) \\ \end{aligned} \] for all \(x,y \in X\) and all \(t \geq 0\). As one can see, if \(X\) is normable, then \(p(x)=\lVert x \rVert\) is a sublinear functional. One should not confuse this with a semilinear functional, where no inequality is involved. Another thing worth noting is that \(p\) is not required to be nonnegative.
A seminorm on a vector space \(X\) is a real-valued function \(p\) on \(X\) such that \[ \begin{aligned} p(x+y) &\leq p(x)+p(y) \\ p(\alpha x)&=|\alpha|p(x) \end{aligned} \] for all \(x,y \in X\) and scalar \(\alpha\).
Obviously a seminorm is also a sublinear functional. For the connection between norm and seminorm, one shall note that \(p\) is a norm if and only if it satisfies \(p(x) \neq 0\) if \(x \neq 0\).
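For instance, on \(\mathbb{R}^2\) the function \(p(x_1,x_2)=|x_1|\) is a seminorm that is not a norm, since \(p(0,1)=0\) while \((0,1)\neq 0\).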
Dominated extension theorems
These are the results that will be covered in this post. Generally speaking, we are able to extend a functional defined on a subspace to the whole space as long as it is dominated by a sublinear functional. This is reminiscent of the dominated convergence theorem, which states that if a convergent sequence of measurable functions is dominated by an integrable function, then the convergence also holds under the integral sign.
(Hahn-Banach) Suppose
\(M\) is a subspace of a real vector space \(X\),
\(f: M \to \mathbb{R}\) is linear and \(f(x) \leq p(x)\) on \(M\) where \(p\) is a sublinear functional on \(X\)
Then there exists a linear \(\Lambda: X \to \mathbb{R}\) such that \[ \Lambda(x)=f(x) \] for all \(x \in M\) and \[ -p(-x) \leq \Lambda(x) \leq p(x) \] for all \(x \in X\).
Step 1 - Extending the function by one dimension
With that being said, if \(f(x)\) is dominated by a sublinear functional, then we are able to extend this functional to the whole space with a relatively proper range.
Proof. If \(M=X\) we have nothing to do. So suppose now \(M\) is a nontrivial proper subspace of \(X\). Choose \(x_1 \in X-M\) and define \[ M_1=\{x+tx_1:x \in M,t \in \mathbb{R}\}. \] It is easy to verify that \(M_1\) is again a vector space (warning again: no topology is endowed). Now we will use the properties of sublinear functionals.
Since \[ f(x)+f(y)=f(x+y) \leq p(x+y) \leq p(x-x_1)+p(x_1+y) \] for all \(x,y \in M\), we have \[ f(x)-p(x-x_1) \leq p(x_1+y) -f(y). \] Let \[ \alpha=\sup_{x}\{f(x)-p(x-x_1):x \in M\}. \] By definition, we naturally get \[ f(x)-\alpha \leq p(x-x_1) \] and \[ f(y)+\alpha \leq p(x_1+y). \] Define \(f_1\) on \(M_1\) by \[ f_1(x+tx_1)=f(x)+t\alpha. \] So when \(x +tx_1 \in M\), we have \(t=0\), and therefore \(f_1=f\).
To show that \(f_1 \leq p\) on \(M_1\), note that for \(t>0\), we have \[ f(x/t)-\alpha \leq p(x/t-x_1), \] which implies \[ f(x)-t\alpha=f_1(x-tx_1)\leq p(x-tx_1). \] Similarly, \[ f(y/t)+\alpha \leq p(y/t+x_1), \] and therefore \[ f(y)+t\alpha=f_1(y+tx_1) \leq p(y+tx_1). \] Hence \(f_1 \leq p\).
Step 2 - An application of Zorn's lemma
Side note: Why Zorn's lemma
It seems that we can never stop using step 1 to extend \(M\) to a larger space, but we have to extend. (If \(X\) is a finite dimensional space, then this is merely a linear algebra problem.) This meets exactly what William Timothy Gowers said in his blog post:
If you are building a mathematical object in stages and find that (i) you have not finished even after infinitely many stages, and (ii) there seems to be nothing to stop you continuing to build, then Zorn's lemma may well be able to help you.
-- How to use Zorn's lemma
And we will show that, as W. T. Gowers said,
If the resulting partial order satisfies the chain condition and if a maximal element must be a structure of the kind one is trying to build, then the proof is complete.
To apply Zorn's lemma, we need to construct a partially ordered set. Let \(\mathscr{P}\) be the collection of all ordered pairs \((M',f')\) where \(M'\) is a subspace of \(X\) containing \(M\) and \(f'\) is a linear functional on \(M'\) that extends \(f\) and satisfies \(f' \leq p\) on \(M'\). For example we have \[ (M,f) , (M_1,f_1) \in \mathscr{P}. \] The partial order \(\leq\) is defined as follows. By \((M',f') \leq (M'',f'')\), we mean \(M' \subset M''\) and \(f' = f''\) on \(M'\). Obviously this is a partial order (you should be able to check this).
Suppose now \(\mathcal{F}\) is a chain (totally ordered subset of \(\mathscr{P}\)). We claim that \(\mathcal{F}\) has an upper bound (which is required by Zorn's lemma). Let \[ M_0=\bigcup_{(M',f') \in \mathcal{F}}M' \] and \[ f_0(y)=f'(y) \] whenever \((M',f') \in \mathcal{F}\) and \(y \in M'\). It is easy to verify that \((M_0,f_0)\) is the upper bound we are looking for. But \(\mathcal{F}\) is arbitrary, therefore by Zorn's lemma, there exists a maximal element \((M^\ast,f^\ast)\) in \(\mathscr{P}\). If \(M^\ast \neq X\), then according to step 1 we are able to extend \(M^\ast\), which contradicts the maximality of \(M^\ast\). So \(\Lambda\) is defined to be \(f^\ast\). By the linearity of \(\Lambda\) and the bound \(\Lambda \leq p\), we see \[ -p(-x) \leq -\Lambda(-x)=\Lambda(x). \] The theorem is proved. \(\square\)
How this proof is constructed
This is a classic application of Zorn's lemma (or the well-ordering principle, or the Hausdorff maximality theorem). First, we showed that we are able to extend \(M\) and \(f\) by one dimension. But since we do not know the dimension or other properties of \(X\), it is not easy to control an extension process that finally 'converges' to \((X,\Lambda)\). However, Zorn's lemma saves us from this random exploration: whatever happens, the maximal element is there, and we take it to finish the proof.
Generalisation onto the complex field
Since an inequality appears in the theorem above, we need a more careful treatment when passing to the complex field.
(Bohnenblust-Sobczyk-Soukhomlinoff) Suppose \(M\) is a subspace of a vector space \(X\), \(p\) is a seminorm on \(X\), and \(f\) is a linear functional on \(M\) such that \[ |f(x)| \leq p(x) \] for all \(x \in M\). Then \(f\) extends to a linear functional \(\Lambda\) on \(X\) satisfying \[ |\Lambda (x)| \leq p(x) \] for all \(x \in X\).
Proof. If the scalar field is \(\mathbb{R}\), then we are done, since \(p(-x)=p(x)\) in this case (can you see why?). So we assume the scalar field is \(\mathbb{C}\).
Put \(u = \Re f\). By the dominated extension theorem, there is some real-linear functional \(U\) such that \(U(x)=u(x)\) on \(M\) and \(U \leq p\) on \(X\). Define \[ \Lambda(x)=U(x)-iU(ix); \] then \(\Lambda\) is complex-linear and \(\Lambda(x)=f(x)\) on \(M\).
To show that \(|\Lambda(x)| \leq p(x)\) for \(x \neq 0\), take \(\alpha=\frac{|\Lambda(x)|}{\Lambda(x)}\), so that \(|\alpha|=1\) and \(\Lambda(\alpha x)=|\Lambda(x)|\) is real. Then \[ |\Lambda(x)|=\Lambda(\alpha{x})=U(\alpha{x})\leq p(\alpha x)=p(x) \] since \(p(\alpha{x})=|\alpha|p(x)=p(x)\). \(\square\)
Extending Hahn-Banach theorem under linear transform
To end this post, we state a beautiful and useful extension of the Hahn-Banach theorem, which is done by R. P. Agnew and A. P. Morse.
(Agnew-Morse) Let \(X\) denote a real vector space and \(\mathcal{A}\) be a collection of linear maps \(A_\alpha: X \to X\) that commute, or namely \[ A_\alpha A_\beta=A_\beta A_\alpha \] for all \(A_\alpha,A_\beta \in \mathcal{A}\). Let \(p\) be a sublinear functional such that \[ p(A_\alpha{x})=p(x) \] for all \(A_\alpha \in \mathcal{A}\). Let \(Y\) be a subspace of \(X\) on which a linear functional \(f\) is defined such that
\(f(y) \leq p(y)\) for all \(y \in Y\).
For each mapping \(A\) and \(y \in Y\), we have \(Ay \in Y\).
Under the hypothesis of 2, we have \(f(Ay)=f(y)\).
Then \(f\) can be extended to \(X\) by \(\Lambda\) so that \(-p(-x) \leq \Lambda(x) \leq p(x)\) for all \(x \in X\), and \[ \Lambda(A_\alpha{x})=\Lambda(x). \]
To prove this theorem, we need to construct a sublinear functional that dominates \(f\). For the whole proof, see Functional Analysis by Peter Lax.
References / Further Readings
Walter Rudin, Functional Analysis.
Peter Lax, Functional Analysis.
William Timothy Gowers, How to use Zorn's lemma.
The Open Mapping Theorem
We are finally going to prove the open mapping theorem for \(F\)-spaces. In this version, only a complete invariant metric is required, so it contains the Banach space version as a special case.
(Theorem 0) Suppose we have the following conditions:
\(X\) is an \(F\)-space,
\(Y\) is a topological vector space,
\(\Lambda: X \to Y\) is continuous and linear, and
\(\Lambda(X)\) is of the second category in \(Y\).
Then \(\Lambda\) is an open mapping.
Proof. Let \(B\) be a neighborhood of \(0\) in \(X\). Let \(d\) be an invariant metric on \(X\) that is compatible with the \(F\)-topology of \(X\). Define a sequence of balls by \[ B_n=\{x:d(x,0) < \frac{r}{2^n}\} \] where \(r\) is picked in such a way that \(B_0 \subset B\). To show that \(\Lambda\) is an open mapping, we need to prove that there exists some neighborhood \(W\) of \(0\) in \(Y\) such that \[ W \subset \Lambda(B). \] To do this however, we need an auxiliary set. In fact, we will show that there exists some \(W\) such that \[ W \subset \overline{\Lambda(B_1)} \subset \Lambda(B). \] We need to prove the inclusions one by one.
The first inclusion requires BCT. Since \(B_2 -B_2 \subset B_1\), and \(Y\) is a topological vector space, we get \[ \overline{\Lambda(B_2)}-\overline{\Lambda(B_2)} \subset \overline{\Lambda(B_2)-\Lambda(B_2)} \subset \overline{\Lambda(B_1)}. \] Since \[ \Lambda(X)=\bigcup_{k=1}^{\infty}k\Lambda(B_2), \] according to BCT, at least one \(k\Lambda(B_2)\) is of the second category in \(Y\). But scalar multiplication \(y\mapsto ky\) is a homeomorphism of \(Y\) onto \(Y\), so \(\Lambda(B_2)\) itself is of the second category. Therefore \(\overline{\Lambda(B_2)}\) has nonempty interior, which implies that there exists some open neighborhood \(W\) of \(0\) in \(Y\) such that \(W \subset \overline{\Lambda(B_1)}\). By replacing the index, it is easy to see this holds for all \(n\). That is, for \(n \geq 1\), there exists some neighborhood \(W_n\) of \(0\) in \(Y\) such that \(W_n \subset \overline{\Lambda(B_n)}\).
The second inclusion requires the completeness of \(X\). Fix \(y_1 \in \overline{\Lambda(B_1)}\); we will show that \(y_1 \in \Lambda(B)\). We pick \(y_n\) inductively. Assume \(y_n\) has been chosen in \(\overline{\Lambda(B_n)}\). As stated before, there exists some neighborhood \(W_{n+1}\) of \(0\) in \(Y\) such that \(W_{n+1} \subset \overline{\Lambda(B_{n+1})}\). Hence \[ (y_n-W_{n+1}) \cap \Lambda(B_n) \neq \varnothing. \] Therefore there exists some \(x_n \in B_n\) such that \[ \Lambda x_n \in y_n - W_{n+1}. \] Put \(y_{n+1}=y_n-\Lambda x_n\); then \(y_{n+1} \in W_{n+1} \subset \overline{\Lambda(B_{n+1})}\). Therefore we are able to pick \(y_n\) in this way for all \(n \geq 1\).
Since \(d(x_n,0)<\frac{r}{2^n}\) for all \(n \geq 1\), the partial sums \(z_n=\sum_{k=1}^{n}x_k\) form a Cauchy sequence and hence converge to some \(z \in X\), because \(X\) is an \(F\)-space. Notice that we also have \[ \begin{aligned} d(z,0)& \leq d(x_1,0)+d(x_2,0)+\cdots \\ & < \frac{r}{2}+\frac{r}{4}+\cdots \\ & = r, \end{aligned} \] so \(z \in B_0 \subset B\).
By the continuity of \(\Lambda\), we see \(\lim_{n \to \infty}y_n = 0\). Notice we also have \[ \sum_{k=1}^{n} \Lambda x_k = \sum_{k=1}^{n}(y_k-y_{k+1})=y_1-y_{n+1} \to y_1 \quad (n \to \infty), \] we see \(y_1 = \Lambda z \in \Lambda(B)\).
The whole theorem is now proved, that is, \(\Lambda\) is an open mapping. \(\square\)
You may think the following relation comes from nowhere: \[ (y_n - W_{n+1}) \cap \Lambda(B_{n}) \neq \varnothing. \] But it does not. We need to review some point-set topology definitions. Notice that \(y_n\) lies in the closure of \(\Lambda(B_n)\), and \(y_n-W_{n+1}\) is an open neighborhood of \(y_n\). If \((y_n - W_{n+1}) \cap \Lambda(B_{n})\) were empty, then \(y_n\) could not belong to \(\overline{\Lambda(B_n)}\).
The geometric series \[ \frac{\varepsilon}{2}+\frac{\varepsilon}{4}+\cdots+\frac{\varepsilon}{2^n}+\cdots=\varepsilon \] is widely used whenever such sums are involved. It is a good idea to keep this technique in mind.
Corollaries
The formal proofs will not be written down here, but they are quite easy to carry out.
(Corollary 0) \(\Lambda(X)=Y\).
This is an immediate consequence of the fact that \(\Lambda\) is open. Since \(X\) is open in itself, \(\Lambda(X)\) is an open subspace of \(Y\). But the only open subspace of \(Y\) is \(Y\) itself.
(Corollary 1) \(Y\) is a \(F\)-space as well.
If you have already seen the commutative diagram induced by the quotient space (put \(N=\ker\Lambda\)), you know that the induced map \(f\) is open and continuous. Treating the spaces as groups, by corollary 0 and the first isomorphism theorem, we have \[ X/\ker\Lambda \simeq \Lambda(X)=Y. \] Therefore \(f\) is an isomorphism, hence one-to-one, and therefore a homeomorphism as well. In this post we showed that \(X/\ker{\Lambda}\) is an \(F\)-space, therefore \(Y\) has to be an \(F\)-space as well. (We are using the fact that \(\ker{\Lambda}\) is a closed set. But why is it closed?)
(Corollary 2) If \(\Lambda\) is a continuous linear mapping of an \(F\)-space \(X\) onto an \(F\)-space \(Y\), then \(\Lambda\) is open.
This is a direct application of BCT and open mapping theorem. Notice that \(Y\) is now of the second category.
(Corollary 3) If the linear map \(\Lambda\) in Corollary 2 is injective, then \(\Lambda^{-1}:Y \to X\) is continuous.
This comes from corollary 2 directly since \(\Lambda\) is open.
(Corollary 4) If \(X\) and \(Y\) are Banach spaces, and if \(\Lambda: X \to Y\) is a continuous linear bijective map, then there exist positive real numbers \(a\) and \(b\) such that \[ a \lVert x \rVert \leq \lVert \Lambda{x} \rVert \leq b\lVert x \rVert \] for every \(x \in X\).
This comes from corollary 3 directly since both \(\Lambda\) and \(\Lambda^{-1}\) are bounded as they are continuous.
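Explicitly, one admissible choice of the constants (this is just corollary 3 unwound, not an extra assumption) is \(b=\lVert\Lambda\rVert\) and \(a=1/\lVert\Lambda^{-1}\rVert\), since \[ \lVert \Lambda{x}\rVert \leq \lVert\Lambda\rVert\,\lVert x\rVert \quad\text{and}\quad \lVert x \rVert = \lVert \Lambda^{-1}\Lambda{x}\rVert \leq \lVert \Lambda^{-1}\rVert\,\lVert \Lambda{x}\rVert. \]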
(Corollary 5) If \(\tau_1 \subset \tau_2\) are vector topologies on a vector space \(X\) and if both \((X,\tau_1)\) and \((X,\tau_2)\) are \(F\)-spaces, then \(\tau_1 = \tau_2\).
This is obtained by applying corollary 3 to the identity mapping \(\iota:(X,\tau_2) \to (X,\tau_1)\).
(Corollary 6) If \(\lVert \cdot \rVert_1\) and \(\lVert \cdot \rVert_2\) are two norms in a vector space \(X\) such that
\(\lVert\cdot\rVert_1 \leq K\lVert\cdot\rVert_2\) for some constant \(K>0\).
\((X,\lVert\cdot\rVert_1)\) and \((X,\lVert\cdot\rVert_2)\) are Banach spaces.
Then \(\lVert\cdot\rVert_1\) and \(\lVert\cdot\rVert_2\) are equivalent.
This is merely a more restrictive version of corollary 5.
What is open mapping
An open map is a function between two topological spaces that maps open sets to open sets. Precisely speaking, a function \(f: X \to Y\) is open if for any open set \(U \subset X\), \(f(U)\) is open in \(Y\). Likewise, a closed map is a function mapping closed sets to closed sets.
You may think that open/closed maps are just alternative names for continuous functions, but they are not. The definition of an open/closed mapping is quite different from continuity. Here are some simple examples.
\(f(x)=\sin{x}\) defined on \(\mathbb{R}\) is not open, though it is continuous. This can be verified by considering \((0,2\pi)\): we have \(f((0,2\pi))=[-1,1]\), which is not open.
The projection \(\pi: \mathbb{R}^2 \to \mathbb{R}\) defined by \((x,y) \mapsto x\) is open. Indeed, it maps an open ball onto an open interval on \(x\) axis.
The inclusion map \(\varphi: \mathbb{R} \to \mathbb{R}^2\) given by \(x \mapsto (x,0)\), however, is not open. The image of an open interval is a segment in the plane, which is neither open nor closed (though it is locally closed).
Under what conditions will a continuous linear map between two TVS be an open mapping? We give an answer in this blog post: the open mapping theorem provides a sufficient condition for a continuous linear map to be open.
Open Mapping Theorem
Let \(X,Y\) be Banach spaces and \(T: X \to Y\) a surjective bounded linear map. Then \(T\) is an open mapping.
The open balls in \(X\) and \(Y\) are defined respectively by \[ B_r^X=\{x \in X:\lVert x \rVert<r\}\quad\text{and}\quad B_r^Y=\{y \in Y:\lVert y \rVert<r\}. \] All we need to do is show that there exists some \(r>0\) such that \[ B_r^Y \subset T(B_1^X), \] since every open set in \(X\) or \(Y\) can be expressed as a union of open balls. A ball in \(X\) centered at \(x \in X\) with radius \(r\) can be expressed as \(x+B_r^X\). After that, it becomes obvious that \(T\) maps open sets to open sets.
First we have \[ X=\bigcup_{n=1}^{\infty}B_n^{X}. \] The surjectivity of \(T\) ensures that \[ Y=\bigcup_{n=1}^{\infty}T(B_n^X). \] Since \(Y\) is Banach, or simply a complete metric space, by the Baire category theorem there must be some \(n_0 \in \mathbb{N}\) such that \(\overline{T(B_{n_0}^{X})}\) has nonempty interior. If not, i.e. if \(T(B_n^{X})\) were nowhere dense for all \(n \in \mathbb{N}\), then \(Y\) would be of the first category, a contradiction.
Since \(y \mapsto ny\) is a homeomorphism of \(Y\) onto \(Y\) and \(T(B_n^X)=nT(B_1^X)\), we see that in fact \(T(B_1^X)\) is not nowhere dense. Therefore, there exist some \(y_0 \in \overline{T(B_1^{X})}\) and some \(\varepsilon>0\) such that \[ y_0+B_\varepsilon^Y \subset \overline{T(B_1^X)}; \] the open set on the left-hand side is a neighborhood of \(y_0\) contained in the interior of \(\overline{T(B_1^X)}\).
On the other hand, we claim \[ \overline{T(B_1^X)} - y_0 \subset \overline{T(B_2^X)}. \] We prove it as follows. Pick any \(y \in \overline{T(B_1^X)}\); we show that \(y-y_0 \in \overline{T(B_2^X)}\). For \(y_0\), there exists a sequence \((y_n)\) with \(\lVert y_n \rVert <1\) for all \(n\) such that \(Ty_n \to y_0\). Also we are able to find a sequence \((x_n)\) with \(\lVert x_n \rVert <1\) for all \(n\) such that \(Tx_n \to y\). Notice that we also have \[ y-y_0=\lim_{n \to \infty}T(x_n-y_n), \] and since \[ \lVert x_n -y_n \rVert \leq \lVert x_n \rVert+\lVert y_n \rVert <2, \] we see \(T(x_n-y_n) \in T(B_2^X)\) for all \(n\), and it follows that \[ y-y_0 \in \overline{T(B_2^X)}. \] Combining all these relations, we get \[ B_\varepsilon^Y \subset \overline{T(B_2^X)}. \] Since \(T\) is linear, we see \[ 2B_{\varepsilon/2}^{Y} \subset \overline{T(2B_1^X)}=2\overline{T(B_1^X)}. \] By induction we get \[ B_{\varepsilon/2^n}^Y \subset \overline{T(B_{1/2^{n-1}}^X)} \] for all \(n \geq 1\).
We shall show, however, that \[ B_{\varepsilon/4}^Y \subset T(B_1^X). \] For any \(u \in B_{\varepsilon/4}^Y\), we have \(u \in \overline{T(B_{1/2}^X)}\). There exists some \(x_1 \in B_{1/2}^{X}\) such that \[ \lVert u-Tx_1 \rVert < \frac{\varepsilon}{8}. \] This implies that \(u-Tx_1 \in B_{\varepsilon/8}^Y\). In the same fashion, we are able to pick \(x_n\) in such a way that \[ \lVert u-Tx_1-Tx_2-\cdots-Tx_n \rVert < \frac{\varepsilon}{2^{n+2}} \] where \(\lVert x_n \rVert<2^{-n}\). Now let \(z_n=\sum_{k=1}^{n}x_k\); we show that \((z_n)\) is Cauchy. For \(m<n\), we have \[ \lVert z_n - z_m \rVert =\left\Vert\sum_{k=m+1}^nx_k \right\Vert \leq \sum_{k=m+1}^{n}\lVert x_k\rVert < \frac{1}{2^{m}}. \] Since \(X\) is Banach, there exists some \(z \in X\) such that \(z_n \to z\). Further we have \[ \lVert z\rVert = \lim_{n \to \infty}\lVert z_n \rVert \leq \sum_{k=1}^{\infty}\lVert x_k \rVert < 1, \] therefore \(z \in B_1^X\). Since \(T\) is bounded, and therefore continuous, we get \(T(z)=u\). To summarize, for every \(u \in B_{\varepsilon/4}^Y\), we have some \(z \in B_{1}^X\) such that \(T(z)=u\), which implies \(T(B_1^X) \supset B_{\varepsilon/4}^Y\).
Let \(U \subset X\) be open, we want to show that \(T(U)\) is also open. Take \(y \in T(U)\), then \(y=T(x)\) with \(x \in U\). Since \(U\) is open, there exists some \(\varepsilon>0\) such that \(B_{\varepsilon}^{X}+x \subset U\). By the linearity of \(T\), we obtain \(B_{r\varepsilon}^Y \subset T(B_{\varepsilon}^X)\) for some small \(r\). Using the linearity of \(T\) again, we obtain \[ B_{r\varepsilon}^Y + y \subset T(B_{\varepsilon}^X+x) \subset T(U) \] which shows that \(T(U)\) is open, therefore \(T\) is an open mapping.
One has to notice that the completeness of \(X\) and \(Y\) has been used more than once. For example, the existence of \(z\) depends on the fact that Cauchy sequences converge in \(X\). Also, the surjectivity of \(T\) cannot be omitted; can you see why?
There are some different ways to state this theorem.
Let \(U\) and \(V\) be the open unit balls of the Banach spaces \(X\) and \(Y\). To every surjective bounded linear map \(T: X \to Y\), there corresponds a \(\delta>0\) such that
\[ T(U) \supset \delta{V}. \]
Equivalently: to every \(y\) with \(\lVert y \rVert < \delta\), there corresponds an \(x\) with \(\lVert x \rVert<1\) such that \(T(x)=y\).
You may also realize that we have used a lot of basic definitions of topology. For example, we checked the openness of \(T(U)\) by using neighborhood. The set \(\overline{T(B_1^X)}\) should also remind you of limit point.
The difference between an open mapping and a continuous mapping can be seen via the topologies of the two topological vector spaces. Suppose \(f: X \to Y\). Then \(f\) is open if for any \(U \in \tau_X\) we have \(f(U) \in \tau_Y\), where \(\tau_X\) and \(\tau_Y\) are the topologies of \(X\) and \(Y\), respectively. But this has nothing to do with continuity: by continuity we mean that for any \(V \in \tau_Y\), we have \(f^{-1}(V) \in \tau_X\).
Fortunately, this theorem can be generalized to \(F\)-spaces, which will be demonstrated in the following blog post of the series. A space \(X\) is an \(F\)-space if its topology \(\tau\) is induced by a complete invariant metric \(d\). Still, completeness plays a critical role.
About this blog post
People call the Banach-Steinhaus theorem the first of the 'big three' theorems that sit at the foundation of linear functional analysis. None of them can do without the Baire category theorem.
This blog post presents the Banach-Steinhaus theorem at different levels of abstraction. Recall that we have \[ \text{TVS} \supset \text{Metrizable TVS} \supset \text{F-space} \supset \text{Fréchet space}\supset\text{Banach space} \supset \text{Hilbert space} \] First, there will be a simple version for Banach spaces, which is perhaps the most frequently used, and you will see why it is referred to as the uniform boundedness principle. After that, there will be a much more general version for TVS, where no metrization of the space is assumed.
Also, it will be a good chance to get a better view of sets of the first and second category in the sense of Baire.
Equicontinuity
For metric spaces, equicontinuity is defined as follows. Let \((X,d_X)\) and \((Y,d_Y)\) be two metric spaces.
Let \(\Lambda\) be a collection of functions from \(X\) to \(Y\). We have three different levels of equicontinuity.
Equicontinuous at a point. For \(x_0 \in X\), if for every \(\varepsilon>0\) there exists a \(\delta>0\) such that \(d_Y(Lx_0,Lx)<\varepsilon\) for all \(L \in \Lambda\) and all \(x\) with \(d_X(x_0,x)<\delta\) (that is, the continuity estimate holds uniformly over \(\Lambda\) on the ball centered at \(x_0\) with radius \(\delta\)).
Pointwise equicontinuous. \(\Lambda\) is equicontinuous at each point of \(X\).
Uniformly equicontinuous. For every \(\varepsilon>0\), there exists a \(\delta>0\) such that \(d_Y(Lx,Ly)<\varepsilon\) for all \(L \in \Lambda\) and all \(x,y \in X\) such that \(d_X(x,y) < \delta\).
Indeed, if \(\Lambda\) contains only one element, namely \(L\), then these notions reduce to the ordinary continuity and uniform continuity of \(L\).
But for the Banach-Steinhaus theorem, we need a few more restrictions. In fact, \(X\) and \(Y\) should be Banach spaces, and \(\Lambda\) should contain linear maps only. In this setting, for \(L \in \Lambda\), the following three conditions are equivalent.
\(L\) is bounded.
\(L\) is continuous.
\(L\) is continuous at one point of \(X\).
For topological vector spaces, where only topology and linear structure are taken into consideration, things get different. Since no metrization is considered, we have to state it in the language of topology.
Suppose \(X\) and \(Y\) are TVS and \(\Lambda\) is a collection of linear functions from \(X\) to \(Y\). \(\Lambda\) is equicontinuous if for every neighborhood \(N\) of \(0\) in \(Y\), there corresponds a neighborhood \(V\) of \(0\) in \(X\) such that \(L(V) \subset N\) for all \(L \in \Lambda\).
Indeed, for TVS, the three conditions above have their counterparts for each \(L \in \Lambda\). With that being said, an equicontinuous collection has the boundedness property in a uniform manner. That is why the Banach-Steinhaus theorem is always referred to as the uniform boundedness principle.
The Banach-Steinhaus theorem, a sufficient condition for being equicontinuous
Banach space version
Suppose \(X\) is a Banach space, \(Y\) is a normed linear space, and \(\Lambda\) is a collection of bounded linear transformations of \(X\) into \(Y\). We have two equivalent statements:
1. (The Resonance Theorem) If \(\sup\limits_{L \in \Lambda}\left\Vert{L}\right\Vert=\infty\), then there exists some \(x \in X\) such that \(\sup\limits_{L \in \Lambda}\left\Vert{Lx}\right\Vert=\infty\). (In fact, such \(x\) form a dense \(G_\delta\).)
2. (The Uniform Boundedness Principle) If \(\sup\limits_{L \in \Lambda}\left\Vert{Lx}\right\Vert<\infty\) for all \(x \in X\), then we have \(\lVert L \rVert \leq M\) for all \(L \in \Lambda\) and some \(M<\infty\).
(A summary of 1 and 2) Either there exists an \(M<\infty\) such that \(\lVert L \rVert \leq M\) for all \(L \in \Lambda\), or \(\sup\limits_{L\in\Lambda}\lVert Lx \rVert = \infty\) for all \(x\) belonging to some dense \(G_\delta\) in \(X\).
Though it would be easier to deduce this from the TVS version, it is still a good idea to give a direct proof here that does not rely on the TVS machinery. The equicontinuity of \(\Lambda\) will be shown in a later section.
An elementary proof of the Resonance theorem
First, we offer an elementary proof in which the hardest part is the Cauchy sequence.
(Lemma) For any \(x \in X\) and \(r >0\), we have \[ \sup_{y\in B(x,r)}\lVert Ly \rVert \geq \lVert L \rVert r \] where \(B(x,r)=\{y \in X:\lVert x-y \rVert < r\}\).
(Proof of the lemma)
For \(t \in X\) we have a simple relation \[ \begin{aligned} \max(\lVert{L(x+t)}\rVert,\lVert{L(x-t)}\rVert)&=\frac{1}{2}(\lVert{L(x+t)}\rVert+\lVert{L(x-t)}\rVert)+\frac{1}{2}\left\vert\lVert{L(x+t)}\rVert-\lVert{L(x-t)}\rVert\right\vert \\ &\geq \frac{1}{2}(\lVert{L(x+t)}\rVert+\lVert{L(x-t)}\rVert) \\ &\geq \frac{1}{2}\lVert{L(2t)}\rVert=\lVert Lt \rVert \end{aligned} \] If we have \(t \in B(0,r)\), then \(x+t,x-t\in{B(x,r)}\). And the desired inequality follows by taking the supremum over \(t \in B(0,r)\). (If you find trouble understanding this, take a look at the definition of \(\lVert L \rVert\).)
Suppose now \(\sup\limits_{L \in \Lambda}\left\Vert{L}\right\Vert=\infty\). Pick a sequence of linear transformations in \(\Lambda\), say \((L_n)_{n=1}^{\infty}\), such that \(\lVert L_n \rVert \geq 4^n\). Pick \(x_0 \in X\), and for \(n \geq 1\), pick \(x_n\) inductively.
Set \(r_n=3^{-n}\). With \(x_{n-1}\) picked, \(x_n \in B(x_{n-1},r_n)\) is picked in such a way that \[ \lVert L_n x_n \rVert \geq \frac{2}{3}\lVert L_n \rVert r_n \] (it is easy to validate this choice by reaching a contradiction). Also, it is easy to check that \((x_n)_{n=1}^{\infty}\) is Cauchy. Since \(X\) is complete, \((x_n)\) converges to some \(x \in X\). Further we have \[ \begin{aligned} \lVert x-x_n \rVert &\leq \sum_{k=n}^{\infty}\lVert x_k - x_{k+1}\rVert \\ &\leq\frac{1}{2\cdot 3^n}. \end{aligned} \] Therefore we have \[ \begin{aligned} \lVert L_n x \rVert &=\lVert L_n[x_n-(x_n-x)] \rVert \\ &\geq \lVert L_nx_n \rVert - \lVert L_n(x_n-x) \rVert \\ &\geq \frac{2}{3}\lVert{L_n}\rVert{3}^{-n}-\lVert{L_n}\rVert\lVert{x_n-x}\rVert\\ &\geq \frac{1}{6}\lVert{L_n}\rVert{3}^{-n} \\ & \geq \frac{1}{6}\left(\frac{4}{3}\right)^n \to\infty \end{aligned} \]
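Two of the 'easy' checks above can be sketched as follows (nothing new is assumed; this only spells out the estimates). If no admissible \(x_n\) existed, then every \(y \in B(x_{n-1},r_n)\) would satisfy \(\lVert L_n y \rVert < \frac{2}{3}\lVert L_n \rVert r_n\), so \[ \sup_{y \in B(x_{n-1},r_n)}\lVert L_n y\rVert \leq \frac{2}{3}\lVert L_n\rVert r_n < \lVert L_n \rVert r_n, \] contradicting the lemma. The Cauchy property follows from the choice of radii: for \(m>n\), \[ \lVert x_m - x_n \rVert \leq \sum_{k=n+1}^{m} \lVert x_k - x_{k-1} \rVert < \sum_{k=n+1}^{\infty} 3^{-k} = \frac{1}{2\cdot 3^{n}}, \] which also gives the bound on \(\lVert x-x_n\rVert\) used above.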
A topology-based proof
The previous proof is easy to understand but it's not easy to see the topological properties of the set formed by such \(x\). Thus we are offering a topology-based proof which enables us to get a topology view.
Put \[ \varphi(x)=\sup_{L \in \Lambda}\lVert Lx \rVert \] and let \[ V_n=\{x:\varphi(x)>n\}; \] we claim that each \(V_n\) is open. Indeed, we have to show that \(x \mapsto \lVert Lx \rVert\) is continuous. It suffices to show that \(\lVert\cdot\rVert\) defined on \(Y\) is continuous. This follows immediately from the triangle inequality, since for \(x,y \in Y\) we have \[ \lVert x \rVert \leq \lVert x-y \rVert + \lVert y \rVert, \] which implies \[ \lVert x \rVert - \lVert y \rVert \leq \lVert x-y \rVert, \] and by interchanging \(x\) and \(y\) we get \[ |\lVert x \rVert - \lVert y \rVert | \leq \lVert x-y \rVert. \] Thus \(x \mapsto \lVert Lx \rVert\) is continuous since it is a composition of \(\lVert\cdot\rVert\) and \(L\). Hence \(\varphi\), being a supremum of continuous functions, is lower semicontinuous, which forces each \(V_n\) to be open.
If every \(V_n\) is dense in \(X\) (consider \(\sup\lVert L \rVert=\infty\)), then by BCT, \(B=\bigcap_{n=1}^{\infty} V_n\) is dense in \(X\). Since each \(V_n\) is open, \(B\) is a dense \(G_\delta\). Again by the definition of \(B\), we have \(\varphi(x)=\infty\) for all \(x \in B\).
If one of these sets, namely \(V_N\), fails to be dense in \(X\), then there exist an \(x_0 \in X - V_N\) and an \(r>0\) such that for \(x \in B(0,r)\) we have \(x_0+x \notin V_N\), which is equivalent to \[ \varphi(x+x_0) \leq N; \] considering the definition of \(\varphi\), we also have \[ \lVert L(x+x_0) \rVert \leq N \] for all \(L \in \Lambda\). Since \(x=(x+x_0)-x_0\), we also have \[ \lVert Lx \rVert \leq \lVert L(x+x_0) \rVert+\lVert Lx_0 \rVert \leq 2N. \] Dividing both sides by \(r\), we get \[ \left\lVert L\frac{x}{r}\right\rVert \leq \frac{2N}{r}, \] therefore \(\lVert L \rVert \leq M=\frac{2N}{r}\), as was to be shown. Again, this follows from the definition of \(\lVert L \rVert\).
Topological vector space version
Suppose \(X\) and \(Y\) are topological vector spaces, \(\Lambda\) is a collection of continuous linear mapping from \(X\) into \(Y\), and \(B\) is the set of all \(x \in X\) whose orbits \[ \Lambda(x)=\{Lx:L\in\Lambda\} \] are bounded in \(Y\). For this \(B\), we have:
If \(B\) is of the second category, then \(\Lambda\) is equicontinuous.
A proof using properties of TVS
Pick balanced neighborhoods \(W\) and \(U\) of the origin in \(Y\) such that \(\overline{U} + \overline{U} \subset W\). The balanced neighborhood exists since every neighborhood of \(0\) contains a balanced one.
Put \[ E=\bigcap_{L \in \Lambda}L^{-1}(\overline{U}). \] If \(x \in B\), then \(\Lambda(x)\) is bounded, which means that for the neighborhood \(U\) there exists some \(n\) such that \(\Lambda(x) \subset nU\) (be aware that no metric is introduced; this is the definition of boundedness in a topological vector space). Therefore we have \(x \in nE\). Consequently, \[ B\subset \bigcup_{n=1}^{\infty}nE. \] If no \(nE\) were of the second category, then \(B\) would be of the first category; since \(B\) is assumed to be of the second category, there exists at least one \(n\) such that \(nE\) is of the second category. Since \(x \mapsto nx\) is a homeomorphism of \(X\) onto \(X\), \(E\) is of the second category as well. But \(E\) is closed since each \(L\) is continuous. Therefore \(E\) has an interior point \(x\). In this case, \(x-E\) contains a neighborhood \(V\) of \(0\) in \(X\), and \[ L(V) \subset Lx-L(E) \subset \overline{U} - \overline{U} \subset W. \] This proves that \(\Lambda\) is equicontinuous.
Equicontinuity and uniform boundedness
We will show that \(B=X\). But before that, we need another lemma, which states the connection between equicontinuity and uniform boundedness.
(Lemma) Suppose \(X\) and \(Y\) are TVS, \(\Gamma\) is an equicontinuous collection of linear mappings from \(X\) to \(Y\), and \(E\) is a bounded subset of \(X\). Then \(Y\) has a bounded subset \(F\) such that \(T(E) \subset F\) for every \(T \in \Gamma\).
(Proof of the lemma) We will show that the set \[ F=\bigcup_{T \in \Gamma}T(E) \] is bounded. Let \(W\) be a neighborhood of the origin in \(Y\). By the definition of equicontinuity, there is a neighborhood \(V\) of the origin in \(X\) such that \(T(V) \subset W\) for all \(T \in \Gamma\). Since \(E\) is bounded, there exists some \(t>0\) such that \(E \subset tV\). For such \(t\), by the linearity of \(T\), we have \[ T(E) \subset T(tV)=tT(V) \subset tW. \] Therefore \(F \subset tW\), and \(F\) is bounded.
Thus \(\Lambda\) is uniformly bounded. Picking \(E=\{x\}\) in the lemma, we also see \(\Lambda(x)\) is bounded in \(Y\) for every \(x\). Thus \(B=X\).
A special case when \(X\) is a \(F\)-space or Banach space
\(X\) is an \(F\)-space if its topology \(\tau\) is induced by a complete invariant metric \(d\). By BCT, \(X\) is of the second category. If we already have \(B=X\), in which case \(B\) is of the second category, then by the Banach-Steinhaus theorem, \(\Lambda\) is equicontinuous. Formally speaking, we have:
If \(\Lambda\) is a collection of continuous linear mappings from an \(F\)-space \(X\) into a topological vector space \(Y\), and if the sets \[ \Lambda(x)=\{Lx:L\in\Lambda\} \] are bounded in \(Y\) for every \(x \in X\), then \(\Lambda\) is equicontinuous.
Notice that all Banach spaces are \(F\)-spaces. Therefore we can restate the Uniform Boundedness Principle in Banach space with equicontinuity.
Suppose \(X\) is a Banach space, \(Y\) is a normed linear space, and \(\Lambda\) is a collection of bounded linear transformations of \(X\) into \(Y\). We have:
(The Uniform Boundedness Principle) If \(\sup\limits_{L \in {\Lambda}}\left\Vert{Lx}\right\Vert<\infty\) for all \(x \in X\), then we have \(\|L\| \le M\) for all \(L \in {\Lambda}\) and some \(M<\infty\). Further, \(\Lambda\) is equicontinuous.
Applications to Fourier series
Surprisingly enough, the Banach-Steinhaus theorem can be used to do Fourier analysis. An important example follows.
There is a periodic continuous function \(f\) on \([0,1]\) such that the Fourier series \[ \sum_{n\in\mathbb{Z}}\hat{f}(n)e^{2\pi inx} \] of \(f\) diverges at \(0\). \(\hat{f}(n)\) is defined by \[ \hat{f}(n)=\int_{0}^{1}e^{-2\pi inx}f(x)dx \]
Notice that \(f \mapsto \hat{f}\) is linear, and the divergence of the series at \(0\) can be studied through \[ \sum_{n\in\mathbb{Z}}\hat{f}(n)e^{2\pi in\cdot0}=\sum_{n\in\mathbb{Z}}\hat{f}(n). \] To invoke the Banach-Steinhaus theorem, the family of linear functionals is defined by \[ \lambda_N(f)=\sum_{|n| \leq N}\hat{f}(n). \] It can be proved that \[ \lVert \lambda_N \rVert=\int_0^1\left\vert\sum_{|n| \leq N}e^{-2\pi inx}\right\vert dx, \] which goes to infinity as \(N \to \infty\). The existence of some \(f\) with \[ \sup_{N}|\lambda_N(f)|=+\infty \] follows from the resonance theorem. Further, we also know that such \(f\) form a dense \(G_\delta\) subset of the Banach space of periodic continuous functions on \([0,1]\).
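A standard way to see why \(\lVert \lambda_N \rVert \to \infty\), sketched here for convenience (this is the classical Dirichlet kernel estimate, not something specific to this post): the kernel appearing in the integral has the closed form \[ D_N(x)=\sum_{|n|\leq N}e^{-2\pi i n x}=\frac{\sin\bigl((2N+1)\pi x\bigr)}{\sin(\pi x)}, \] and its \(L^1\) norm on \([0,1]\) grows like \(\log N\); more precisely, \(\int_0^1|D_N(x)|\,dx \geq c\log N\) for some constant \(c>0\), so the norms \(\lVert\lambda_N\rVert\) are unbounded.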
arXiv:1005.1585v2
W. Rudin, Real and Complex Analysis
W. Rudin, Functional Analysis
About the 'Big Three'
There are three theorems about Banach spaces that occur frequently in the crux of functional analysis, which are called the 'big three':
The Hahn-Banach Theorem
The Banach-Steinhaus Theorem
The Open Mapping Theorem
This series of blog posts is intended to offer a self-contained, reader-friendly explanation with richer details. Some background in basic analysis and topology is required.
First and second category
The term 'category' is due to Baire, who developed the category theorem afterwards. Let \(X\) be a topological space. A set \(E \subset X\) is said to be nowhere dense if \(\overline{E}\) has empty interior, i.e. \(\text{int}(\overline{E})= \varnothing\).
There are some easy examples of nowhere dense sets. For example, suppose \(X=\mathbb{R}\), equipped with the usual topology. Then \(\mathbb{N}\) is nowhere dense in \(\mathbb{R}\) while \(\mathbb{Q}\) is not. This is clear since \(\overline{\mathbb{N}}=\mathbb{N}\), which has empty interior, while \(\overline{\mathbb{Q}}=\mathbb{R}\), which is open and equals its own interior. The category of a set is defined using nowhere dense sets. In fact,
A set \(S\) is of the first category if \(S\) is a countable union of nowhere dense sets.
A set \(T\) is of the second category if \(T\) is not of the first category.
Baire category theorem (BCT)
In this blog post, we consider two cases: BCT in complete metric space and in locally compact Hausdorff space. These two cases have nontrivial intersection but they are not equal. There are some complete metric spaces that are not locally compact Hausdorff.
Some classic topological spaces, for example \(\mathbb{R}^n\), are both complete metric spaces and locally compact Hausdorff spaces. If a locally compact Hausdorff space happens to be a topological vector space, then it has finite dimension. Also, recall that a topological vector space is required to be Hausdorff.
By a Baire space we mean a topological space \(X\) such that the intersection of every countable collection of dense open subsets of \(X\) is also dense in \(X\).
The Baire category theorem states that
(BCT 1) Every complete metric space is a Baire space.
(BCT 2) Every locally compact Hausdorff space is a Baire space.
By taking complements in the definition, we can see that every Baire space is not of the first category.
Indeed, suppose \(\{X_n\}\) is a sequence of dense open subsets of a Baire space \(X\); then \(X_0=\cap_n X_n\) is also dense in \(X\). Notice that \(X_0^{c} = \cup_n X_n^c\), where each \(X_n^c\) is nowhere dense, so \(X_0^c\) is a countable union of nowhere dense sets, i.e. of the first category.
Proving BCT 1 and BCT 2 via Choquet game
Let \(X\) be the given complete metric space or locally compact Hausdorff space, and \(\{V_n\}\) a countable collection of dense open subsets of \(X\). Pick an arbitrary nonempty open subset of \(X\), namely \(A_0\) (this is possible due to the topology defined on \(X\)). To prove that \(\cap_n V_n\) is dense, we have to show that \(A_0 \cap \left(\cap_n V_n\right) \neq \varnothing\). This follows from the definition of denseness. Typically we have
A subset \(A\) of \(X\) is dense if and only if \(A \cap U \neq \varnothing\) for all nonempty open subsets \(U\) of \(X\).
We pick a sequence of nonempty open sets \(\{A_n\}\) inductively. With \(A_{n-1}\) picked, and since \(V_n\) is open and dense in \(X\), the intersection \(V_n \cap A_{n-1}\) is nonempty and open. \(A_n\) can be chosen such that \[ \overline{A}_n \subset V_n \cap A_{n-1}. \] For BCT 1, \(A_n\) can be chosen to be open balls with radius \(< \frac{1}{n}\); for BCT 2, \(A_n\) can be chosen such that its closure is compact. Define \[ C = \bigcap_{n=1}^{\infty}\overline{A}_n. \] Now, if \(X\) is a locally compact Hausdorff space, then due to the compactness, \(C\) is not empty, and we have \[ \begin{cases} C \subset A_0 \\ C \subset V_n \quad(n \in \mathbb{N}) \end{cases} \] which shows that \(A_0 \cap \left(\cap_n V_n\right) \neq \varnothing\). BCT 2 is proved.
For BCT 1, we cannot follow this argument, since it is not ensured that \(X\) has the Heine-Borel property, for example when \(X\) is an infinite dimensional Hilbert space (this is also a reason why BCT 1 and BCT 2 are not equivalent). The only tool remaining is the Cauchy sequence. But how and where?
For any \(\varepsilon > 0\), we have some \(N\) such that \(\frac{1}{N} < \varepsilon\). For all \(m>n>N\), we have \(A_m \subset A_n\subset A_N\), therefore the centers of the balls \(A_n\) form a Cauchy sequence, converging to some point that lies in every \(\overline{A}_n\), which implies that \(C \neq \varnothing\). BCT 1 follows.
Applications of BCT
BCT will be used directly in the big three; it can be considered their common origin. But there are many other applications in different branches of mathematics. The applications shown below all follow the same pattern: if the statement did not hold, we would have a Baire space of the first category, which is impossible.
\(\mathbb{R}\) is uncountable
Suppose \(\mathbb{R}\) is countable; then we have \[ \mathbb{R}=\bigcup_{n=1}^{\infty}\{x_n\} \] where each \(x_n\) is a real number. But each singleton \(\{x_n\}\) is nowhere dense, therefore \(\mathbb{R}\) is of the first category. A contradiction.
Suppose that \(f\) is an entire function such that every power series expansion \[ f(z)=\sum_{n=0}^{\infty}c_n(z-a)^n \] has at least one coefficient equal to \(0\); then \(f\) is a polynomial (there exists an \(N\) such that \(c_n=0\) for all \(n>N\)).
You can find the proof here. We are using the fact that \(\mathbb{C}\) is complete.
An infinite dimensional Banach space \(B\) has no countable basis
Assume that \(B\) has a countable basis \(\{x_1,x_2,\cdots\}\) and define \[ B_n=\text{span}\{x_1,x_2,\cdots,x_n\}. \] It can be shown that each \(B_n\) is nowhere dense, yet \(B=\cup_n B_n\), so \(B\) would be of the first category. A contradiction, since \(B\) is a complete metric space.
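The nowhere density of \(B_n\) can be sketched as follows (a short argument included here for completeness). Each \(B_n\) is a finite dimensional subspace, hence closed. If \(B_n\) contained some ball \(B(x,r)\), then it would contain \(B(0,r)=B(x,r)-x\), and by scaling it would contain every point of \(B\), contradicting the assumption that \(B\) is infinite dimensional. Therefore \(\overline{B_n}=B_n\) has empty interior.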
CMOS image sensor - Angular response 3D
In this example, the angular response of a CMOS image sensor is characterized through optical simulations using the FDTD solver and electrical simulations using the CHARGE solver. Key results from the simulations include the spatial field profiles, transmission and optical efficiency vs. angle, quantum efficiency vs. angle. The effect of microlens shift is also considered.
Understand the simulation workflow and key results
Characterization of CMOS image sensors generally requires both optical and electrical simulations to account for the absorption, scattering, and diffraction from sub-wavelength features as well as the electrical transport of generated charge. In this example, optical simulations provide information about the field profile, transmission, and optical efficiency. The effects of injection angle and the microlens shift are also considered. Steps 1-3 demonstrate a few example tasks with increasing complexity (single simulation, angle/polarization sweep and angle/polarization/microlens position sweep). The generated charge data from the optical simulations (step 2) is combined with the weighting function from an independent electrical simulation (step 4) for further calculation of the quantum efficiency and the crosstalk in terms of the injection angle (step 5).
Note that the definition of the "pixel" can differ depending on the application areas. The optical simulations in this example contain a periodic array of Red/Green/Blue/Green unit cells. Throughout this example, we will refer to each of the R/G/B/G regions as a "pixel", meaning there are 4 pixels in a unit cell, as shown in the figure below.
As this example requires many sweeps, we limited ourselves to single frequency simulations to reduce the overall simulation time. But the approaches in the optical simulations are applicable to broadband simulations.
Step 1: Initial simulation
Obtain the field profile, transmission and optical efficiency of each pixel when the sensor is illuminated by a planewave at a fixed angle. The main purpose of this step is to ensure the simulation is set up correctly and to allow the user to manually explore the results, before running the full angular response parameter sweep in the latter steps.
Step 2: Angular response
Calculate the optical efficiency and the electron-hole pair generation rates as a function of injection angle. In this example, the generation rate results are averaged in the y-direction and saved in a 2D format so that it can be used in step 5 to calculate the quantum efficiency of the device.
Step 3: Effect of microlens shift
Obtain a 2D map of the optical efficiency as a function of the injection angle and microlens shift. This sweep demonstrates an interesting way to optimize the optical performance of the device. In the interest of keeping the example short, we don't record the generation rate data in this step (although we could).
Step 4: Weighting function
Run the CHARGE solver to get the impulse response (Green's function) of the system to an electron-hole pair at arbitrary positions in the substrate. From this, we calculate a spatially-varying weighting function which represents the probability of an electron-hole pair generated at any point in space to be collected by the contact for a specific pixel (green in this example). 2D CHARGE simulations are used to reduce the simulation time, but it is possible to extend the simulation methodology to 3D.
Step 5: Quantum efficiency and crosstalk
The weighting function (Step 4) is multiplied by the generation rate data (Step 2) and integrated to yield the internal quantum efficiency (IQE) and crosstalk. This approach based on the Green's function is very efficient in calculating the IQE for arbitrary optical generation rate profiles since it requires, on the electrical side, only a single simulation for the weighting function calculation. For the sake of keeping the simulation time short, we ran a 2D CHARGE simulation to obtain a 2D weighting function and used the generation rate data that was averaged in the y-direction.
Run and results
Instructions for running the model and discussion of key results
Open the simulation file (CMOS_image_sensor_angular_response.fsp)
Run the script file (CMOS_image_sensor_angular_response_initial.lsf) to run the simulation and visualize some of the representative results shown below.
Field profile
The "field_XZ" and "field_YZ" frequency monitors record the fields at the cross sections of the red-green pixels and the green-blue pixels, respectively. As the source is currently set to emit at 550 nm (green), a high transmission is observed at the green pixel due to the different wavelength-selective filters on different regions.
The movies of propagating fields at the same positions as the frequency monitors can be found in the folder where the simulation file is saved. They clearly show that the injected light is selectively transmitted through the green filter and eventually dissipated by absorption in the underlying silicon layer.
The "pixel_transmission" analysis group records the normal component of the Poynting vector, \(P_{z}(x,y)\), on the top surface of the Si layer. To calculate the power absorbed in each pixel, (optical efficiency), we can choose to integrate Pz only over the depletion region of the pixel. The easiest way to integrate \(P_{z}\) over an arbitrary region is to use a spatial filter in the shape of the depletion region and multiply it to the \(P_{z}\) . The spatial filter is optional and can be disabled in the Analysis->Script tab. The following figures show the unfiltered \(P_{z}\) , the depletion regions and the \(P_{z}\) in the depletion regions. The shape of each depletion region is currently set to a 1x1um square with a rounded corner. See integrating the poynting vector for more information.
Optical efficiency
Optical efficiency is defined as the fraction of the power incident in the pixel that is absorbed in the depletion region of the pixel:
$$\text{Optical efficiency (OE)} =\frac{\text{Absorbed power}}{\text{Source power}}$$
By integrating \(P_{z}\) over the entire surface of the Si layer and normalizing it by the injected power, we find that about 38% of the power is transmitted into the Si layer. The combined efficiency of the two green pixels accounts for about 33%, while the efficiency of the red and blue pixels is about 0.5% each.
Power into Si layer: 0.372
Power through red pixel: 0.00459
Power through 2 green pixels: 0.328
Power through blue pixel: 0.00459
The simulation file comes with a pre-run "convergence" sweep object, which records the optical efficiencies vs. mesh accuracy. The figure below shows that there are relatively small differences in the results for different mesh accuracies, with the mesh accuracy of "1" giving a reasonably close result to the one for mesh accuracy "6". Using a coarse mesh for initial simulations is strongly recommended due to the large time savings it provides. The default mesh accuracy in this example is "2". However, a mesh accuracy of "1" is used in Step 3 due to the large number of simulations required. Like all simulations, thorough convergence testing is required to ensure the results are accurate.
Open the script file (CMOS_image_sensor_angular_response_sweep_angle.lsf) and set the value of the "Analysis_only" parameter - "1" to visualize the pre-run sweep results or "0" to run the sweep and visualize the results.
Run the script file.
The "angle sweep" consists of 14 sweep points – 7 for the source angle and 2 for the polarization. To reduce the simulation time, unnecessary simulation objects like movie monitors and index monitors were disabled.
The optical efficiency vs. source angle for each pixel is shown below. Note that we averaged the efficiencies for the 0 and 90 degree polarizations to obtain the response for unpolarized light. Note that a factor of \(\cos(\theta)\) was also applied to correct for the varying amount of incident power per pixel as a function of angle. Even with an ideal pixel design, it would not be possible to have an optical efficiency as a function of angle that does better than \(\cos(\theta)\). The OE for green has its maximum at normal incidence and is reduced at larger incidence angles. The angular response simulations also provide a measure of optical crosstalk: some light is absorbed in the red or blue pixels under green illumination (or vice versa).
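A minimal sketch of that post-processing step (the variable names and values below are illustrative placeholders, not taken from the sweep script): the unpolarized response is the average of the two polarization results, and each curve is multiplied by \(\cos(\theta)\) to account for the reduced incident power per pixel at oblique incidence.

```python
import numpy as np

# Placeholder sweep results (7 source angles x 2 polarizations); in practice
# these numbers come from the "angle sweep" object in the .fsp project.
theta_deg = np.array([0., 5., 10., 15., 20., 25., 30.])
oe_pol0  = np.array([0.34, 0.33, 0.31, 0.28, 0.24, 0.20, 0.16])
oe_pol90 = np.array([0.32, 0.31, 0.30, 0.27, 0.23, 0.19, 0.15])

# Unpolarized response = average of the two orthogonal polarizations
oe_unpol = 0.5 * (oe_pol0 + oe_pol90)

# Correct for the cos(theta) reduction of incident power per pixel
oe_corrected = oe_unpol * np.cos(np.deg2rad(theta_deg))
print(oe_corrected)
```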
Generation rate
The generation rate data from "CW_generation_gb" analysis group is used in the electrical simulation in step 4. Once the sweep is completed, 14 data files for the generation rate of the green/blue pixels will be created under the folder named "sweepangle". File names are automatically assigned by the model setup script based on the polarization angle and the source angle. The figure below shows the generation rate for an unpolarized light (550 nm) in the green/blue pixels. Note that the CW generation analysis group is currently set to average the generation rate in the "y" direction and produce an averaged 2d map of it, \(G_{L}(x,z)\). This is to make it compatible with the 2D CHARGE simulations in step 4. For the sake of saving the simulation time, we are using 2D CHARGE simulations in this example. However, it is possible to do full 3D simulations in CHARGE and hence use the full 3D generation rate from FDTD.
Step 3: Microlens shift
Open the script file (CMOS_image_sensor_angular_response_microlens_shift.lsf) and set the value of the "Analysis_only" parameter - "1" to visualize the pre-run sweep results or "0" to run the sweep and visualize the results.
The "microlens shift" sweep consists of a total of 462 sweep points – 21 for microlens shift, 7 for angle theta and 2 for polarization angle. Considering the huge number of sweeps, we are using the mesh accuracy of "1" for this example to reduce the simulation time.
The optical efficiency in terms of the angle and the lens shift is shown below for each pixel. Note that the script applies a factor of \(\cos(\theta)\) to correct for the varying amount of incident power per pixel at different injection angles. From the result for green, it can be seen that a shift of about 37 nm/degree, marked by the dotted black line, gives the maximum optical efficiency for a given incidence angle. For example, if the light is incident dominantly at 15 degrees, the lens needs to be shifted by about 555 nm for maximum efficiency.
Note: The generation rate data from Step 2 is a prerequisite to step 4.
Open the simulation file (CMOS_image_sensor_greens_function.ldev) and run the simulation
Run the script file (CMOS_image_sensor_weighting_function.lsf)
Silicon has negligible thermal recombination at moderate illumination strengths. Being an indirect band gap semiconductor, its radiative recombination (generation of photons by recombination of electron-hole pairs) is also negligible. As a result, the photo-generated charges in the substrate will be mostly collected by the contacts of the different pixels. In this step, we use the point generation source in the CHARGE simulation to determine the weighting function \(W(x,y,z)\) of the device. \(W(x,y,z)\) is the probability that a charge generated at that position will be collected by a specific contact. This approach is based on the Green's function, \(G(x,y,z)\), and is equivalent to knowing the carrier density \(n,p\) at each contact in response to an impulse generation rate source located at a specific location.
The carrier densities are calculated during the charge transport simulation. To determine the full \(G(x,y,z)\), the location of the impulse source is swept over all positions \(r\) in the simulation domain. This operation is performed internally when the "map current collection probability" option is enabled for one or more contacts. When the simulation is complete, the weighting functions are stored as the result "W" for each carrier type at each enabled contact and are exported as data files for further analysis in step 5. The figure below, showing the weighting function \(W(x,z)\) for the green pixel, indicates that the collection probability for the green pixel is very high when the charge is generated near the green contact (top left). However, it also shows that some of the charges generated in the blue pixel region (\(x>0\)) have a non-zero probability of being collected by the green contact. This suggests that there is some electrical crosstalk between the neighbouring pixels.
Step 5: IQE and crosstalk
Open the simulation file (CMOS_image_sensor_greens_function.ldev)
Run the script file (CMOS_image_sensor_angular_response_iqe.lsf)
In this step, we will be calculating the quantum efficiency (QE) of the green pixel and the green/blue crosstalk based on the Green's function approach. We do not need to run any additional simulations at this stage as we already have all the required generation rate and weighting function data saved from steps 2 and 4, respectively. The definitions of the related quantities are as follows:
$$\text{Internal quantum efficiency (IQE)} = \frac{\text{Charges collected by the green pixel}}{\text{Total charges generated by absorption}}$$
$$\text{External quantum efficiency (EQE)} = \text{IQE}\times \text{OE (Optical efficiency)}$$
$$\text{Green/blue crosstalk} = \frac{\text{Charges collected by the blue pixel}}{\text{Total charges generated by absorption}}$$
Quantum efficiency and crosstalk
The script sequentially loads the 14 generation rate data files saved from the angle sweep in step 2 and multiplies each with the weighting function for green. The following figures show \(G_{L}(x,z)\), \(W_{green}(x,z)\) and \(G_{L}(x,z)W_{green}(x,z)\) for unpolarized light at normal incidence.
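In terms of arrays, the quantities defined above reduce to weighted integrals of the generation rate. The sketch below uses placeholder arrays for \(G_{L}(x,z)\) and the two weighting functions rather than the exported data files:

```python
import numpy as np

# Placeholder data on a common (x, z) grid; in the example these arrays are
# loaded from the files written in steps 2 and 4.
x = np.linspace(-1.0e-6, 1.0e-6, 121)
z = np.linspace(-3.0e-6, 0.0, 151)
X, Z = np.meshgrid(x, z, indexing="ij")
G_L     = np.exp(Z / 1.0e-6) * (np.abs(X) < 0.9e-6)    # generation rate (placeholder)
W_green = np.clip(0.5 - X / 2.0e-6, 0.0, 1.0)          # collection probability, green
W_blue  = 1.0 - W_green                                 # collection probability, blue

def integrate(F):
    # integrate F(x, z) over the 2D cross section
    return np.trapz(np.trapz(F, z, axis=1), x)

G_total   = integrate(G_L)
iqe_green = integrate(G_L * W_green) / G_total          # internal quantum efficiency
xtalk_gb  = integrate(G_L * W_blue) / G_total           # green/blue crosstalk
oe_green  = 0.33                                        # optical efficiency from the sweep
print("IQE:", iqe_green, "EQE:", iqe_green * oe_green, "crosstalk:", xtalk_gb)
```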
By integrating the \(G_{L}(x,z)W_{green}(x,z)\) and normalizing it with the total generation rate, we obtain the IQE for the green pixel. Repeating the same procedure with the weighting function for the blue pixel, \(W_{blue}(x,z)\), yields the green/blue crosstalk. The IQE has its maximum value of about 80 % and decreases at larger source angle. The trend is in agreement with the increased green/blue crosstalk at larger angle. The maximum EQE is about 26%. When interpreting this data, it's important to remember that \(G_{L}(x,z)\) is for green light illumination.
Important model settings
Description of important objects and settings used in this model
Parametrization
All the structures in this example are constructed using the setup script in the "image sensor" structure group. However, some of the key design parameters such as pixel size and the microlens shift are set in the setup tab of the "model", which then will update the associated parameters in the "image sensor" as well as in other simulation objects dependent on any of those parameters.
"CW generation" analysis groups
The generation rate group consists of a 3D frequency monitor. It measures the absorbed power in the Si layer, then calculates the generation rate assuming that a single electron-hole pair is created per absorbed photon.
Source propagation axis: The script in the "CW generation" group assumes the source is injecting in the y-direction for 2D and the z-direction for 3D simulations.
Averaging of generation rate: Even though the raw data for the generation rate is obtained in 3D, we average it in the y-direction and save a 2D generation data set for later use. This is because we are running a 2D simulation in CHARGE to save simulation time in this demonstrative example.
x/y/z spans: The spans of the generation rate objects are set by the setup script in the "model." The z-min of the objects might need to be adjusted to capture most of the absorbed light penetrating deeper into the substrate. The z-max of the objects needs to be at least one mesh cell away from the Si surface for an accurate absorption calculation. You might need to do some convergence testing to decide on the appropriate z-span.
Filenames: The name of the generation rate file name is automatically set by the "polarization angle" of the source and the "count" in the setup variables tab of the "model." The "count" is paired with the parameter "theta" in the "sweep angle" object. At each sweep point of "sweep angle", the "count" value in "model" is updated and consequently the "export filename" in the CW generation group.
Unpolarized light
To obtain the transmission and generation rate results for unpolarized light, we need to run two simulations (one for x-polarized and another for y-polarized light) and average them. For that reason, a nested sweep named "source polarization" is included in the "angle sweep" and the "microlens shift" sweep objects.
For initial simulations, it can make sense to ignore polarization effects and choose only one polarization, S or P. Eventually, it makes sense to correctly calculate the incoherent response using both polarizations but a great deal of initial testing and optimization can be done with only one polarization.
"map current collection probability"
To calculate the current collection probability (=weighting function) for a given electrical contact, select the "map current collection probability" option in the steady-state contact configuration. In this simulation, the collection probability mapping is enabled for each of the simulation contacts located in the PD(Photodiode) n-wells. These contacts are biased to mimic the potential of a depletion region. The p++ surface diffusion and the substrate are electrically grounded.
Use a coarse mesh initially
We strongly recommend starting with a coarse mesh, using a mesh accuracy setting of 1 or 2. It is far easier to set up, test, or optimize a simulation that takes a few seconds to minutes than one that takes several hours. Only once all other problems are resolved and you are getting good results should you try a mesh accuracy setting of 3 or more, guided by some convergence testing.
X,Y override in Si
Due to the high index of silicon (n > 4), the automatic meshing algorithm will use a very small mesh everywhere in the Si region. This small mesh will make the simulation significantly slower than it would otherwise be.
From Snell's law, we can show that the in-plane wave vector is conserved (\(k_{x, glass} = k_{x, Si}\)).
In other words, the in-plane wavevector (kx or ky) in the Silicon can never be larger than the in-plane wavevector in the glass. Therefore, it is possible to use a coarser mesh in the x and y directions without reducing the accuracy. To force a larger mesh size in the X and Y directions, add a mesh override region over the Si layer. Set the override region to treat this area as if it has an index of 1.5 (glass) in the X and Y directions. A small mesh is required in the Z direction, so the override should not be applied in the Z direction.
Strictly speaking, this technique is only valid if there are no scattering structures within the Si: scattering structures on or near the surface of the Si can generate light propagating at all angles in the Si. However, the above approximation is generally very good even when there are some scattering structures on the Si surface. We recommend testing the convergence if there are concerns about the amount of steep-angle scattering that may be generated in the Si.
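To illustrate the size of the saving, the sketch below compares the transverse mesh the automatic algorithm would pick in Si with the coarser mesh allowed by in-plane wavevector conservation (the numbers are illustrative, not the example's actual mesh settings):

```python
# Minimal sketch: estimate of the x/y mesh size allowed by the override region.
# Because kx, ky in Si cannot exceed kx, ky in the glass, the transverse sampling
# only needs to resolve lambda/n_glass instead of lambda/n_Si.
wavelength = 550e-9
points_per_wavelength = 6            # roughly mesh accuracy 1
n_si, n_glass = 4.1, 1.5

dx_auto = wavelength / (n_si * points_per_wavelength)         # automatic mesh in Si
dx_override = wavelength / (n_glass * points_per_wavelength)  # coarser x/y mesh with the override
print(f"auto: {dx_auto * 1e9:.1f} nm, override: {dx_override * 1e9:.1f} nm")
# The z direction keeps the fine mesh (no override applied in z).
```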
Use PEC for metal objects
In most image sensor simulations, metallic objects behave like perfect metals (100% reflection, no absorption). Rather than using a detailed material model for the metal (which will require a smaller mesh), simply use the Perfect Electrical Conductor (PEC) material model. The PEC material model does not require a small mesh, which makes your simulations faster.
Use conformal meshing (Conformal variant 1)
The conformal mesh can provide much more accurate results at larger mesh sizes and make it possible to run simulations much faster. If PEC is used for the metals, it is a good idea to switch to the "Conformal variant 1" setting, which applies the conformal mesh algorithm to the PEC as well as to the dielectric materials and the Si (please see Mesh refinement options for more details). The figure on the right shows that the optical efficiency of the green pixel is almost unchanged when going from a mesh accuracy of 1 (lambda/dx=6) to a mesh accuracy of 6 (lambda/dx=26). When using staircase meshing, the convergence is slower and a smaller mesh size is required for the same accuracy.
Please note that the conformal mesh can generate more numerical instability, especially if many highly dispersive materials are used that require large numbers of coefficients to fit. If these instabilities do occur, they can normally be controlled by making changes to the fit settings for certain materials. Please contact Lumerical support for advice if necessary.
Structures in optical and electrical simulations
You do not need to include exactly the same structures and simulation volume in the optical and electrical simulations. Structures such as the microlens, color filters, and metal shields do not affect the electrical response of the device and should be removed from the electrical simulations to avoid an unnecessarily large simulation region. Likewise, some electrical parts such as the substrate contact do not need to be included in the optical simulation if they have a negligible effect on the optical response of the device.
Nwell contacts
This contact does not exist in a realistic design, but it is a key element for calculating the weighting function. In the simulation, it plays a dual role:
set the electrostatic potential of the n-well equivalent to its depleted state after reset
collect photo-generated charge carriers that are captured in the n-well
When calculating the collection probability weighting function, we assume that the electrostatic potential is unperturbed. In normal device operation, charge carriers trapped in this region will be transferred to the drain (via the TX gate) and will accumulate on the source-follower gate. An accurate determination of the potential of the depleted n-well can be obtained by inducing a channel between the drain (biased to VDD) and the n-well with the TX gate turned ON, which can be simulated at steady state. If users would like a quick way to test the simulation behavior, they can first run a simulation without the n-well contact to inspect its electrostatic potential, and then add a small offset, e.g., 0.5 V, to this simulated potential to represent the bias at the n-well. This contact should be small in size and placed at the minimum of the potential in the n-well (small enough that the potential is close to uniform over the equivalent of the contact's surface). This usually corresponds to the peak location of the doping region.
Sub contact
This is a simulation contact. In a realistic design, the silicon substrate should be grounded, and therefore we need to assign a contact to the substrate. In the CHARGE solver, contacts have to be assigned to a metal-type material; aluminum is chosen arbitrarily, and the electrical contact boundary condition "force Ohmic" is enabled. The contact should be placed sufficiently far from all depletion regions so that it does not change the potential there (i.e., the equilibrium condition should be established before the contact surface is reached), but it generally must be placed deep enough in the substrate to provide adequate depth for the absorption of light (check the optical generation rate profile).
PD contacts
This simulation contact is disabled by default. If necessary, the PD objects and contacts can be enabled and used as a troubleshooting step. There are some p++ regions without a reference voltage near the surface of the silicon, and the solver may become numerically unstable if such a region is left floating. This contact can provide a reference voltage to the region for simulation purposes. It should be placed on top of the p++ region; its size should not have a major effect on the simulation results.
Transfer gate and drain
These are not simulated in this example. The current example is simplified, and its focus is to show the simulation methodology and how the Green's function approach can be used to efficiently calculate quantities like IQE, EQE, and crosstalk. In a more sophisticated simulation setup, experienced users may want to include these elements, but this will certainly increase the complexity of the simulation.
Isolation trenches
These are common features to reduce electrical leakage current, formed by etching a Shallow Trench Isolation (STI) in the substrate and filling it with oxide. Their presence and location will depend on the specific design and fabrication process.
Updating the model with your parameters
Instructions for updating the model based on your device parameters
Customizing structures
The "image sensor" structure group contains only some representative parts of typical CMOS image sensors. When modifying structures, it is recommended that you retain the "user properties" of the "image sensor" object and make sure the links between the key design parameters (pixel size, microlens shift, etc.) and the associated structures are not broken. Otherwise, you might need to write all the relevant scripts from scratch.
Importing AFM data for the lens
The "image sensor" structure group has an option "use_AFM_data" to demonstrate how AFM data could be used to define parts of the sensor. Set its value to "1" to import the AFM data for the microlens surface, and to "0" to define the surface based on the polynomial and conic formulations. The script assumes you have your surface data (savedata - Script command) saved in the same directory as your .fsp file and uses the importsurface2 command to create the surface. You can also import surface data in .txt format using the importsurface command. The "cmos_image_sensor_angular_response_microlens_AFM_make.lsf" script can be used to generate example surface data. Please note that using AFM data will make your fsp files much larger, and you will likely want to keep a low setting for the "rendering detail" to avoid slow display of the structures.
Source wavelength
The wavelength of the source is currently set to 550 nm (green). If you want to see the response of the green/blue pixels to blue light, simply change the source wavelength to blue. However, to obtain the response of the red/green pixels to red light, you need some additional modifications to the simulation settings:
Enable the associated generation rate group, "CW_generation_rg"
Add the "::model::CW_generation_rg::Igen" to the "Result" of the "sweep angle" object
Update the filenames for the weighting functions in the script files for steps 4 and 5 with the correct pixel names.
Source intensity
The default intensity of the light source is set to 1 W/m^2 in the "model." Its value might need to be updated to account for the actual intensity of the light source under consideration.
Spatial filter for depletion region
The depletion region is where the generated charges are swept by the strong electric field and contribute to the photocurrent. The shape of the actual depletion region can differ depending on the contact design and doping profiles. In the FDTD simulation, we assumed that light absorbed within the depletion region may contribute to the photocurrent and therefore used a spatial filter in the analysis script of the "pixel transmission" to mimic the depletion region when calculating the transmission for each pixel. It is currently set to a 1 × 1 \({\mu m}^2\) region with rounded corners, but you might need to modify it to better match the depletion region in your device. This is an optional setting.
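A hypothetical implementation of such a rounded-corner filter (not the script distributed with the example) could look like the following sketch:

```python
# Minimal sketch: a 1 um x 1 um spatial filter with rounded corners, used to weight
# the monitor data so that only light absorbed in the (approximate) depletion region
# counts toward the per-pixel transmission.
import numpy as np

def rounded_rect_mask(x, y, width=1e-6, height=1e-6, radius=0.1e-6):
    """Return 1 inside a rounded rectangle centered at the origin, 0 outside."""
    X, Y = np.meshgrid(x, y, indexing="ij")
    dx = np.maximum(np.abs(X) - (width / 2 - radius), 0.0)
    dy = np.maximum(np.abs(Y) - (height / 2 - radius), 0.0)
    return (np.hypot(dx, dy) <= radius).astype(float)

x = np.linspace(-1.5e-6, 1.5e-6, 301)
y = np.linspace(-1.5e-6, 1.5e-6, 301)
mask = rounded_rect_mask(x, y)
# filtered_power = np.sum(power_density * mask)   # power_density: data from the monitor
```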
Taking the model further
Information and tips for users that want to further customize the model
Broadband simulations
While the optical simulations in steps 1–3 were run at a single frequency, you can also use the same file for broadband (multi-frequency) simulations. However, there are some additional points to consider when running broadband simulations; please visit here for detailed information. Note that the generation rate analysis group returns results that are averaged over wavelength when a broadband source is used.
3D CHARGE simulations
While the electrical part of this example was based on 2D CHARGE simulations, the Green's function approach works for the 3D CHARGE simulations as well. Please note that extending the current example to 3D CHARGE simulations requires extensive changes to the generation rate script, the CHARGE simulation file and related analysis scripts.
Point spread function (PSF)
Together with OE, PSF is also a frequently used metric to characterize the optical properties of CMOS image sensor. Broadly speaking, it is a measure of spatial crosstalk — i.e., how much light is detected in neighboring pixels when a specific pixel is fully illuminated. Please visit here for further information and example files.
Response for arbitrary illumination
The approach in the above PSF example uses an array of "thin lens" Gaussian beams to mimic a source uniformly illuminating a specific pixel. This requires the simulation to have PML boundaries in all directions and a large simulation region. Additionally, you need to run separate simulations to obtain results for different objective lenses. Fortunately, there is a much faster and more efficient approach available, which uses an incoherent sum of results for plane-wave illumination to reconstruct the response for any arbitrary illumination (including an arbitrary objective lens). For further information about this plane-wave-based approach, please see [2].
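Conceptually, the reconstruction is a weighted incoherent sum over the angle-swept plane-wave results; a minimal sketch with placeholder angles and weights is given below (the actual weighting for a given objective lens is described in [2]):

```python
# Minimal sketch: response under an arbitrary (incoherent) illumination cone,
# reconstructed from the plane-wave angle sweep.
import numpy as np

angles_deg = np.array([0, 5, 10, 15, 20])            # swept incidence angles
OE_green = np.array([0.52, 0.51, 0.49, 0.45, 0.40])  # optical efficiency per angle (from the sweep)

weights = np.cos(np.radians(angles_deg))             # placeholder pupil weighting of the lens
weights /= weights.sum()

OE_lens = np.sum(weights * OE_green)                 # response for the assumed illumination
print(OE_lens)
```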
Additional documentation, examples and training material
F. Hirigoyen, A. Crocherie, J. M. Vaillant, and Y. Cazaux, "FDTD-based optical simulations methodology for CMOS image sensors pixels architecture and process optimization" Proc. SPIE 6816, 681609 (2008)
J. Vaillant, A. Crocherie, F. Hirigoyen, A. Cadien, and J. Pond, "Uniform illumination and rigorous electromagnetic simulations applied to CMOS image sensors," Opt. Express 15, 5494-5503 (2007)
Crocherie et al., "Three-dimensional broadband FDTD optical simulations of CMOS image sensor", Optical Design and Engineering III, Proc. of SPIE, 7100, 71002J (2008)
Wang, Xinyang, "Noise in Sub-Micron CMOS Image Sensors", Ph.D. Thesis, Delft University of Technology
W. Gazeley and D. McGrath, "Quantum Efficiency Simulation Using Transport Equations," International Image Sensor Workshop, R06 (2011)
Optical simulation methodology
Electrical simulation methodology
Point spread function
Green's function IQE method
Related Ansys Innovation Courses
CMOS - Optical simulation methodology
CMOS - Angular response 2D
CMOS - Point spread function (PSF)
CMOS - Photoelectric conversion
Lumerical scripting language - By category | CommonCrawl |
Addressing the deprived: need and access of sexual reproductive health services to street adolescents in Ethiopia. The case of Nekemte town: mixed methods study
Abdo Abazinab Ababor1,
Desalegn Wirtu Tesso2 &
Melese Chego Cheme2 ORCID: orcid.org/0000-0002-0237-4028
BMC Research Notes volume 12, Article number: 827 (2019) Cite this article
Globally, a research knowledge gap exists in the sexual reproductive health (SRH) services of street adolescents. The intensity of the problem is high in settings like Ethiopia, where there is limited access to and integration of services. This study aimed at exploring risky sexual behaviors, needs, and barriers to SRH services among street adolescents in Nekemte town. A community-based cross-sectional study design with mixed approaches was used on a sample of 219 street adolescents. Supplementary qualitative data from 24 in-depth interviews were collected from street adolescents and SRH service providers. Time-location sampling, or venue sampling based on venue-day-time (VDT) units, was used for the quantitative study. Quantitative data were analyzed with SPSS version 24.0.
About 93% of street adolescents reported difficulty in accessing contraceptives. Behavioral change and sustainable access to SRH services are lacking among street adolescents. The knowledge gap is more evident in the early adolescent (10–13 years) period than in the other age groups. In general, street adolescents are deprived of access to SRH services. Mobile and flexible access to contraceptives should be designed targeting street adolescents.
Adolescence is the transition period from childhood to adulthood (10 to 19 years), marked by a continuum of physical, mental, behavioral, emotional, and social changes. Street life is a global phenomenon and is increasing in urban areas of developing countries like Ethiopia. Adolescents on the streets are neglected in policy and action to access SRH services [1, 2].
Street adolescents are pushed to street life by complex factors: socioeconomic, political, cultural, and global opportunities and challenges. The intensity of the problem is high in settings like Ethiopia [3,4,5]. Systemic reviews and studies in Africa show that street children are exposed to street life, mainly due to poverty and abuse. There are more girls on the street than boys [6, 7, 22].
Despite global support and attention, there has been historical neglect of adolescent SRH that exposed the adolescents to risky sexual practice and early parenthood in developing countries [8,9,10,11,12,13]. As evidence shows, poor adolescents, including street adolescents, are exposed to marginalization in social, political, and economic aspects and face discrimination, stigmatizations, isolations, and violations [14, 15].
Globally, about 13,000,000 women aged 15–19 years give birth annually. Pregnancy and childbirth-related deaths are the primary cause of death among women in this age group. Furthermore, neonatal mortality among teenage mothers is also higher than among adult mothers [2, 13, 16, 17]. In Sub-Saharan Africa, children and adolescents are among the groups at high risk of and vulnerable to sexual abuse, violence, and HIV/AIDS, and there is a downgrading of perceived vulnerability to HIV/AIDS among adolescents [18,19,20,21,22,23,24,25].
Qualitative and quantitative studies also indicate a low level of knowledge among street children on SRH issues, including HIV/AIDS, and different forms of violence and abuse against street children, especially girls. As indicated in the studies, street adolescents engage in sexual activities early, and in most cases they have multiple sexual partners [26,27,28,29,30].
In developing countries, a research knowledge gap exists in addressing access to SRH services, particularly for adolescents, despite the increase in demand and challenges. The specific objectives of this study were to explore risky sexual behaviors, SRH needs, and barriers to SRH service utilization among adolescents (10–19 years) in Nekemte town [3, 31]. The conceptual framework of the study is shown below (Fig. 1).
Conceptual framework of the study, developed by the researchers through a review of the literature (modified March 17, 2018)
Study setting and study population
The study was conducted from April to June 2018 in Nekemte town, East Wollega Zone, Ethiopia. The town had an estimated total population of 115,741 in 2016 [32]. The town attracts many adolescents and youths from different urban and rural areas of Ethiopia. The source population for this study was street adolescents aged 10–19 years who live or work on the streets of Nekemte town.
A community-based cross-sectional study design with mixed methods was used. The study is primarily quantitative, and the qualitative approach is used to accompany the quantitative findings.
Sample size determination
The sample size for the quantitative data was determined using a single finite population proportion formula. The assumptions for the sample size (n) calculation were: the proportion (P) of street adolescents facing barriers to SRH services in Addis Ababa city, Ethiopia, of 84.7%; a 95% confidence level; a non-response rate of 10%; a standard error (d) of 5%; and a two-sided Z score of 1.96 at the 95% confidence interval.
$$\therefore n = \frac{\left(Z_{\alpha/2}\right)^{2} P\left(1-P\right)}{d^{2}} = 235.07$$
Adding a 10% non-response allowance to the initial sample size (n), the total number of street adolescents enrolled was 235. For the supplementary qualitative data, 20 in-depth interviews were initially proposed, and 24 in-depth interviews were ultimately conducted with street adolescents and different SRH service providers.
A sampling of street adolescents for the quantitative study was conducted by time-location sampling or venue sampling technique, which is recommended for enrolling research participants in special situations like street adolescents in which there is no clear formal structure of the study population to apply the usual probability sampling techniques. The units of the sampling were venue day time units (VDT). Venue day time unit implies the period of some time in a particular venue in a day.
This sampling technique is a probability sampling approach provided that the units are well studied and identified before actual sampling. Individuals included in day time units have an equal chance to be selected in the study. Accordingly, before data collection, some street adolescents and local administrative, religious, and community members were interviewed to identify the day time units and make arrangements. In the study area, 88 venues/locations were identified, and the average number of street children identified in each place for 2 h (VDT) was 12. Based on these the total venues divided by the average number of street adolescents in each VDT gives the number of locations, venues for the study, which are 20 VDTs. Finally, the Street adolescents found in selected VDT were interviewed, for qualitative data participants were purposely selected from street adolescents and reproductive health service providers. The service providers for in-depth interviews were selected based on the information obtained from the Family Guidance Association of Ethiopia Nekemte area coordinator and other key informants about service providers.
Data collection procedure
Quantitative data were collected through interviews guided by structured questionnaires, after 2 days of training for data collectors. For the qualitative data, in-depth interviews were conducted by trained master's-level data collectors in the local language, and notes were taken manually with maximum effort.
Data processing and analysis
Quantitative data were analyzed using SPSS version 24.0. Descriptive statistics such as frequency, percentage, mean, and standard deviation were used to describe the findings, and summary findings were presented in textual, tabular, and graphic form. Qualitative data from the in-depth interviews were analyzed using a thematic analysis approach after being translated into English from the local languages; the responses were coded and sorted to identify themes. The results were presented by triangulating them with, and accompanying, the quantitative findings. Direct quotes from the respondents were also used.
Result and discussion
Socio-demographic and economic characteristics of street adolescents
A total of 219 street adolescents were interviewed, giving a 93% response rate. The age range of the adolescents included in this study was 10 to 19 years, with a mean age of 16.82 (SD ± 1.73) years (Table 1).
Table 1 Socio-demographic and socioeconomic characteristics of street adolescents in Nekemte town, April–June 2018
Substance use behaviors
Different types of substances are used by the street adolescents. About one-third of the participants use multiple kinds of substances at a time (Table 2).
Table 2 Substance/drug use among street adolescents in Nekemte town, September 2018
Practice towards SRH issues among adolescents
About 185 (84.5%) knew the means of prevention of HIV/STI. Among those, 3 (1.4%), 174 (79.5%), 34 (15.5%), and 3 (1.4%) mentioned abstinence, use of a condom, remaining faithful to a partner, and avoiding casual sex, respectively. About 127 (58%) of them reported that they knew at least one means of preventing pregnancy. Oral pills, condoms, and injections were the most recognized contraceptive methods, reported by 95%, 43%, and 93% of the interviewees, respectively. From the in-depth interviews, awareness of unintended pregnancy, STI/HIV, and their transmission and prevention methods is fair, especially among older adolescents. There is considerable concern for early adolescents (10–13 years), as this group seems to have low decision-making power and comprehension of life matters. It cannot be said that street adolescents do not know about SRH issues.
The majority of street adolescents, 160 (73.1%), had ever practiced sexual intercourse: 49 (63%) of boys and 111 (79%) of girls. The overall mean age at first sexual initiation was 15.18 years (16.25 years for boys and 14.44 years for girls). The main reasons for sexual activity among sexually active street adolescents were peer pressure, 81 (50.6%), and exchange for money, 79 (49.2%). Among those sexually active, the first sexual partner included a casual partner, 57 (26%); a steady boy/girlfriend, 73 (33.3%); and a commercial sex worker, 16 (7.3%). Sexually active street children were also asked about their sexual experience within the last 12 months. Of those who were currently sexually active, 134 (83.75%) reported that they had sexual intercourse with two or more partners, with a mean number of sexual partners of 2.85 (SD ± 0.4), while only 20 (12.5%) had a single sexual partner. Of the sexually active street adolescents, 105 (79.04%) had ever used modern contraceptives, of which condoms, 83 (79.04%); pills, 13 (12.3%); and injectables, 13 (8.12%) were reported to be the most frequently used methods. The reasons mentioned for not using contraceptives included (multiple responses considered): lack of adequate knowledge, 28 (51%); unplanned sex, 31 (56%); contraceptives being too far away to obtain (not accessible), 32 (58%); and having infrequent sex, 19 (35%).
Among the street girls who participated in this study, more than half, 80 (57%), reported that they had ever been pregnant, and all 80 (57% of female respondents) said that the pregnancies were unwanted. Accidental pregnancy, 45 (56.25%); unavailability of contraceptives, 29 (36.25%); and failure of contraceptives, 3 (3.75%) were mentioned as the main reasons for the occurrence of the unwanted pregnancies.
In an in-depth interview, a service provider from the FGAE Nekemte branch stated: "Trading sex is found to be a serious and growing problem in Nekemte town for street children, particularly amongst street girls. Besides education and service provision, the basic thing is social and financial support".
Access to SRH service
Among those not seeking SRH care, about 93% reported difficulty in getting contraceptives/condoms. The reasons mentioned for the difficulty were lack of money (89%), distance to services (86.5%), inconvenient distribution places (67%), expense (7.3%), and provider disapproval (15%).
The in-depth interviews indicate that, although street adolescents use condoms on occasion, it is difficult for them to maintain consistent condom use, as there are barriers at different levels. A 17-year-old male says, "Even though they (street adolescents) have enough knowledge on pregnancy, STI/HIV, they don't stay in safe sexual practice because of peer pressure, negligence, lack of hope, substance abuse, and lack of continuous education on this issue."
They have also reported that in terms of access to SRH services, they are disconnected from the existing service stream. A 17-year-old male child suggests, "….When we visit health facilities, they don't cooperate well. The workers are usually busy, and you don't get adequate service for your needs. Most of the street children's attention is dominated by another issue like cloth, food, and shelter, and they also seek care rarely."
Program coordinator of FGAE Nekemte branch states, "The reality of providing such service (facility-based service) is that many street children clients will not attend arranged appointments and may disengage entirely for periods. In these stages, the clients can be most vulnerable, and despite not looking for it, they are often most in need of a sexual health service. It is, therefore, crucial to offering outreach and support to access sexual health services for those children who are most vulnerable and at risk." The government health facility director also says, "Street adolescents had very limited access to reproductive health services. The main reasons were location of the reproductive health facilities and the service providers. The facilities are mostly in residential areas, and the street children mostly operate from Central Business area, therefore crucial to offer outreach and support to access sexual health services".
The main barriers to accessing local SRH services among Nekemte town street adolescents are lack of information on the services available to street adolescents, the behavior of the adolescents themselves, and the inaccessibility of adolescent-friendly services. The findings imply that street adolescents are highly deprived and need particularly focused intervention. Provision of mobile, peer-based, and adolescent-friendly services at facilities and in the community should be a focus, and rigorous qualitative and quantitative studies with larger sample sizes should be conducted to identify the root causes.
The cross-sectional nature of the study does not allow cause-effect relationships to be established, and statistical inference is not reported. Social desirability might have affected the responses of the street adolescents. In addition, the study could have included the broader views of the community, schools, religious institutions, and other relevant bodies. Considering these issues in further research will be helpful.
Relevant data are available from the corresponding author on reasonable request.
SRH:
sexual reproductive health
VDT:
venue day time unit
STI:
sexually transmitted illnesses
ICT:
information communication technology
CSW:
commercial sex workers
FGAE:
Family Guidance Association of Ethiopia
Borise S, et al. Adolescent sexual and reproductive health toolkit for humanitarian settings, a companion to the inter-agency field manual on reproductive health in humanitarian settings. 2009. https://pt.scribd.com/document/376195634/UNFPA-ASRHtoolkit-english-pdf.
Boakye-Boaten A. An examination of the phenomenon of street children communities in Accra (Ghana). 2006.
Chandramouli V, Svanemyr J, Amin A, et al. Twenty years after international conference on population and development: where are we with adolescent sexual and reproductive health and rights ? J Adolesc Health. 2015;56(1):S1–6. https://0-doi-org.brum.beds.ac.uk/10.1016/j.jadohealth.2014.09.015.
Kamanu R, Nganga PZ, Muttunga J. Determinants of sexual and reproductive health among street adolescents in Dagoretti District of Nairobi. p. 1–12.
Society T. Sexual and reproductive health care: a position paper of the society for adolescent health and medicine. J Adolesc Health. 2014;54(4):491–6. https://0-doi-org.brum.beds.ac.uk/10.1016/j.jadohealth.2014.01.010.
Cumber SN, Tsoka-gwegweni JM. The health profile of street children in Africa: a literature review. J Public Health Afr. 2015;6:85–90.
Yizengaw SS, Gebiresilus AG. Triggering factors, risky behaviors, and resilience of street children in Gondar City, North West Ethiopia. 2014;2(4).
Morris JL, Rushwan H. Adolescent sexual and reproductive health: the global challenges. Int J Gynecol Obstet. 2015. https://0-doi-org.brum.beds.ac.uk/10.1016/j.ijgo.2015.02.006.
Salam RA, Faqqah A, Sajjad N, et al. Improving adolescent sexual and reproductive health : a systematic review of potential interventions. J Adolesc Health. 2016;59(4 Suppl):S11–28. https://0-doi-org.brum.beds.ac.uk/10.1016/j.jadohealth.2016.05.022.
Warenius L. Sexual and reproductive health services for young people in Kenya and Zambia providers' attitudes and young people's needs and experiences. Stockholm: Department of Public Health Sciences, Division of International Health (IHCAR); 2008.
Shaw D. Access to sexual and reproductive health for young people: bridging the disconnect between rights and reality. Int J Gynecol Obstet. 2009;106:132–6.
Woan J, Lin J, Auerswald C. The health status of street children and youth in low- and middle-income countries: a systematic review of the literature. J Adolesc Health. 2013;53(3):314–321.e12. https://0-doi-org.brum.beds.ac.uk/10.1016/j.jadohealth.2013.03.013.
Rushwan H. Adolescent sexual and reproductive health initiative: what do we know about adolescents? Literature review, FIGO.
PAHO. Reaching poor adolescents in situations of vulnerability with sexual and reproductive health. Washington, DC; 2013. https://www.paho.org/hq/index.php?option=com_docman&view=download.
Chase E, Aggleton P. Meeting the sexual health needs of young people living on the street. In: Ingham R, Aggleton P, editors. Promoting young people's sexual health: international perspectives. London: Routledge; 2006. p. 81–97.
Igras SM, Macieira M, Murphy E, Lundgren R. Investing in very young adolescents' sexual and reproductive health. Glob Public Health. 2014;9(5):555–69. https://0-doi-org.brum.beds.ac.uk/10.1080/17441692.2014.908230.
Chandramouli V, Camacho AV, Michaud P. WHO guidelines on preventing early pregnancy and poor reproductive outcomes among adolescents in developing countries. J Adolesc Health. 2013;52(5):517–22. https://0-doi-org.brum.beds.ac.uk/10.1016/j.jadohealth.2013.03.002.
Karki S, et al. Risks and vulnerability to HIV, STIs and AIDS among street children in Nepal: public health approach. Post Doctoral thesis, University of Huddersfield; 2013. http://eprints.hud.ac.uk/id/eprint/21282/.
Habtamu D, Adamu A. Assessment of sexual and reproductive health status of street children in Addis Ababa. 2013;(May 2014).
East, Central, and Southern African Health Community, child sexual abuse in sub-Saharan Africa. Literature review; 2011. https://www.svri.org/sites/default/files.
Kumi-Kyereme A, Awusabo-Asare K, Biddlecom A. Adolescents' sexual and reproductive health: qualitative evidence from Ghana; 2007. https://www.guttmacher.org/pubs/2007/08/31/or30.pdf.
Bam K. Scenario of adolescent sexual and reproductive health with opportunities for information communication and technology use in selected South Asian Countries. iMedPub J. Access to ICT and Challenges of ASRH program. 2015;1–7.
Cumber S. Pattern and practice of psychoactive substance abuse and risky behaviors among street children in Cameroon. 2016;10(3).
Wittenberg J, et al. Protecting the next generation in Malawi: new evidence on adolescent sexual and reproductive health needs. New York: Guttmacher Institute; 2007.
Publications MJ, Berhane T, Assefa B, Birhan N. Reproductive health behavior of street youth and associated factors in Gondar City, Northwest Ethiopia. Int J Med Biomed Res. 2014;3(1):28–37.
Uddin MJ, Sarma H, Wahed T, et al. The vulnerability of Bangladeshi street-children to HIV/AIDS: a qualitative study. BMC Public Health. 2014;14:1151. https://0-doi-org.brum.beds.ac.uk/10.1186/1471-2458-14-1151.
Shaikh BT, Rahim ST. Assessing knowledge, exploring needs: a reproductive health survey of adolescents and young adults in Pakistan. Eur J Contracept Reprod Health Care. 2006;11(2):132–7. https://0-doi-org.brum.beds.ac.uk/10.1080/13625180500389463.
Wachira J, Kamanda A, Embleton L, Naanyu V, Winston S, Ayuku D. Initiation to street life: a qualitative examination of the physical, social, and psychological practices in becoming an accepted member of the street youth community in Western Kenya. BMC Public Health. 2015. https://0-doi-org.brum.beds.ac.uk/10.1186/s12889-015-1942-8.
Habtemariam K. Knowledge, attitude and practice of modern contraceptives among street girls of bole sub-city, Addis Ababa. http://www.localhost:80/xmlui/handle/123456789/1976.
Kayembe PK, Mapatano MA, Fatuma AB, et al. Knowledge of HIV, sexual behaviors and correlates of risky sex among street children in Kinshasa, Democratic Republic Of Congo, East African. J Public Health. 2008;5(3):186–92.
Hughes J, McCauley AP. Improving the fit: adolescents' needs and future programs for sexual and reproductive health in developing countries. Stud Family Plan. 1998;29(2):233–245. https://0-doi-org.brum.beds.ac.uk/10.2307/172161. https://0-www-jstor-org.brum.beds.ac.uk/stable/172161.
Population estimation of Nekemte for 2016, based on data on List of cities and towns in Ethiopia, Wikipedia, the free encyclopedia. https://en.wikipedia.org/wiki/Nekemte.
Our thanks go to the Department of Public Health and postgraduate office of the Institute of Health Science of Wollega University and administrative office of Nekemte town for their unreserved provision of information and support throughout the process of this study. Our appreciation also extends to the street adolescents who have participated in the study.
No fund was received for this study.
Health Office, Buno Bedele Zone, Chora District, Bedele, Oromia, Ethiopia
Abdo Abazinab Ababor
Wollega University, Institute of Health sciences, Nekemte, Ethiopia
Desalegn Wirtu Tesso
& Melese Chego Cheme
AAA generated the research question, developed the proposal, supervised the data collection process, analyzed the data, and prepared the research report. DWT was the main (senior) advisor for the research proposal development and data analysis process. MCC was a co-advisor for the research proposal development and data analysis process, prepared the manuscript, and was the corresponding author. All authors read and approved the manuscript.
Correspondence to Melese Chego Cheme.
Ethical clearance was obtained from Wollega University. Written consent to participate in the research was obtained from the adolescents aged above 15 years and from the guardians or parents of street adolescents aged 15 and below. All collected information is kept confidential. The adolescents were included in the study voluntarily, with no forceful inclusion. They were also told that they could withdraw from the study during the interview if they felt it was not fair to them. As the research included only interviews, it posed no physical or emotional threat to the street adolescents.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
Ababor, A.A., Tesso, D.W. & Cheme, M.C. Addressing the deprived: need and access of sexual reproductive health services to street adolescents in Ethiopia. The case of Nekemte town: mixed methods study. BMC Res Notes 12, 827 (2019) doi:10.1186/s13104-019-4850-7
DOI: https://0-doi-org.brum.beds.ac.uk/10.1186/s13104-019-4850-7
Street adolescents
Nekemte | CommonCrawl |
eLight
To image, or not to image: class-specific diffractive cameras with all-optical erasure of undesired objects
Bijie Bai1,2,3 na1,
Yi Luo1,2,3 na1,
Tianyi Gan1,3,
Jingtian Hu1,2,3,
Yuhang Li1,2,3,
Yifan Zhao1,3,
Deniz Mengu1,2,3,
Mona Jarrahi1,3 &
Aydogan Ozcan1,2,3 ORCID: orcid.org/0000-0002-0717-683X
eLight volume 2, Article number: 14 (2022) Cite this article
Privacy protection is a growing concern in the digital era, with machine vision techniques widely used throughout public and private settings. Existing methods address this growing problem by, e.g., encrypting camera images or obscuring/blurring the imaged information through digital algorithms. Here, we demonstrate a camera design that performs class-specific imaging of target objects with instantaneous all-optical erasure of other classes of objects. This diffractive camera consists of transmissive surfaces structured using deep learning to perform selective imaging of target classes of objects positioned at its input field-of-view. After their fabrication, the thin diffractive layers collectively perform optical mode filtering to accurately form images of the objects that belong to a target data class or group of classes, while instantaneously erasing objects of the other data classes at the output field-of-view. Using the same framework, we also demonstrate the design of class-specific permutation and class-specific linear transformation cameras, where the objects of a target data class are pixel-wise permuted or linearly transformed following an arbitrarily selected transformation matrix for all-optical class-specific encryption, while the other classes of objects are irreversibly erased from the output image. The success of class-specific diffractive cameras was experimentally demonstrated using terahertz (THz) waves and 3D-printed diffractive layers that selectively imaged only one class of the MNIST handwritten digit dataset, all-optically erasing the other handwritten digits. This diffractive camera design can be scaled to different parts of the electromagnetic spectrum, including, e.g., the visible and infrared wavelengths, to provide transformative opportunities for privacy-preserving digital cameras and task-specific data-efficient imaging.
Digital cameras and computer vision techniques are ubiquitous in modern society. Over the past few decades, computer vision-assisted applications have been adapted massively in a wide range of fields [1,2,3], such as video surveillance [4, 5], autonomous driving assistance [6, 7], medical imaging [8], facial recognition, and body motion tracking [9, 10]. With the comprehensive deployment of digital cameras in workspaces and public areas, a growing concern for privacy has emerged due to the tremendous amount of image data being collected continuously [11,12,13,14]. Some commonly used methods address this concern by applying post-processing algorithms to conceal sensitive information from the acquired images [15]. Following the computer vision-aided detection of the sensitive content, traditional image redaction algorithms, such as image blurring [16, 17], encryption [18, 19], and image inpainting [20, 21] are performed to secure private information such as human faces, plate numbers, or background objects. In recent years, deep learning techniques have further strengthened these algorithmic privacy preservation methods in terms of their robustness and speed [22,23,24]. Despite the success of these software-based privacy protection techniques, there exists an intrinsic risk of raw data exposure given the fact that the subsequent image processing is executed after the raw data recording/digitization and transmission, especially when the required digital processing is performed on a remote device, e.g., a cloud-based server.
Another set of solutions to such privacy concerns can be implemented at the hardware/board level, in which the data processing happens right after the digital quantization of an image, but before its transmission. Such solutions protect privacy by performing in-situ image modifications using camera-integrated online processing modules. For instance, by embedding a digital signal processor (DSP) or Trusted Platform Module (TPM) into a smart camera, the sensitive information can be encrypted or deidentified [25,26,27]. These camera integration solutions provide an additional layer of protection against potential attacks during the data transmission stage; however, they do not completely resolve privacy concerns as the original information is already captured digitally, and adversarial attacks can happen right after the camera's digital quantization.
Implementing these image redaction algorithms or embedded DSPs for privacy protection also creates some environmental impact as a compromise. To support the computation/processing of massive amounts of visual data being generated every day [28], i.e., billions of images and millions of hours of videos, the demand for digital computing power and data storage space rapidly increases, posing a major challenge for sustainability [29,30,31,32].
Intervening into the light propagation and image formation stage and passively enforcing privacy before the image digitization can potentially provide more desired solutions to both of these challenges outlined earlier. For example, some of the existing works use customized optics or sensor read-out circuits to modify the image formation models, so that the sensor only captures low-resolution images of the scene and, therefore, the identifying information can be concealed [33,34,35]. Such methods sacrifice the image quality of the entire sample field-of-view (FOV) for privacy preservation, and therefore, a delicate balance between the final image quality and privacy preservation exists; a change in this balance for different objects can jeopardize imaging performance or privacy. Furthermore, degrading the image quality of the entire FOV limits the applicable downstream tasks to low-resolution operations such as human pose estimation. In fact, sacrificing the entire image quality can be unacceptable under some circumstances such as e.g., in autonomous driving. Additionally, since these methods establish a blurred or low-resolution pixel-to-pixel mapping between the input scene and the output image, the original information of the samples can be potentially retrieved via digital inverse models, using e.g., blind image deconvolution or estimation of the inherent point-spread function.
Here, we present a new camera design using diffractive computing, which images the target types/classes of objects with high fidelity, while all-optically and instantaneously erasing other types of objects at its output (Fig. 1). This computational camera processes the optical modes that carry the sample information using successive diffractive layers optimized through deep learning by minimizing a training loss function customized for class-specific imaging. After the training phase, these diffractive layers are fabricated and assembled together in 3D, forming a computational imager between an input FOV and an output plane. This camera design is not based on a standard point-spread function, and instead the 3D-assembled diffractive layers collectively act as an optical mode filter that is statistically optimized to pass through the major modes of the target classes of objects, while filtering and scattering out the major representative modes of the other classes of objects (learned through the data-driven training process). As a result, when passing through the diffractive camera, the input objects from the target classes form clear images at the output plane, while the other classes of input objects are all-optically erased, forming non-informative patterns similar to background noise, with lower light intensity. Since all the spatial information of non-target object classes is instantaneously erased through light diffraction within a thin diffractive volume, their direct or low-resolution images are never recorded at the image plane, and this feature can be used to reduce the image storage and transmission load of the camera. Except for the illumination light, this object class-specific camera design does not utilize external computing power and is entirely based on passive transmissive layers, providing a highly power-efficient solution to task-specific and privacy-preserving imaging.
Object class-specific imaging using a diffractive camera. a Illustration of a three-layer diffractive camera trained to perform object class-specific imaging with instantaneous all-optical erasure of the other classes of objects at its output FOV. b The experimental setup for the diffractive camera testing using coherent THz illumination
We experimentally demonstrated the success of this new class-specific camera design using THz radiation and 3D-printed diffractive layers that were assembled together (Fig. 1) to specifically and selectively image only one data class of the MNIST handwritten digit database [36], while all-optically rejecting the images of all the other handwritten digits at its output FOV. Despite the random variations observed in handwritten digits (from human to human), our analysis revealed that any arbitrary handwritten digit/class or group of digits could be selected as the target, preserving the same all-optical rejection/erasure capability for the remaining classes of handwritten digits. Besides handwritten digits, we also showed that the same framework can be generalized to class-specific imaging and erasure of more complicated objects, such as some fashion products [37]. Additionally, we demonstrated class-specific imaging of input FOVs with multiple objects simultaneously present, where only the objects that belong to the target class were imaged at the output plane, while the rest were all-optically erased. Furthermore, this class-specific camera design was shown to be robust to variations in the input illumination intensity and the position of the input objects. Apart from direct imaging of the target objects from specific data classes, we further demonstrated that this diffractive imaging framework can be used to design class-specific permutation and class-specific linear transformation cameras that output pixel-wise permuted or linearly transformed images (following an arbitrarily selected image transformation matrix) of the target class of objects, while all-optically erasing other types of objects at the output FOV—performing class-specific encryption all-optically.
The teachings of this diffractive camera design can inspire future imaging systems that consume orders of magnitude less computing and transmission power as well as less data storage, helping with our global need for task-specific, data-efficient and privacy-aware modern imaging systems.
Class-specific imaging using diffractive cameras
We first numerically demonstrate the class-specific camera design using the MNIST handwritten digit dataset, to selectively image handwritten digit '2' (the object class of interest) while instantaneously erasing the other handwritten digits. As illustrated in Fig. 2a, a three-layer diffractive imager with phase-only modulation layers was trained under an illumination wavelength of \(\lambda\). Each diffractive layer contains 120 \(\times\) 120 trainable transmission phase coefficients (i.e., diffractive features/neurons), each with a size of ~ 0.53\(\lambda\). The axial distance between the input/sample plane and the first diffractive layer, between any two consecutive diffractive layers, and between the last diffractive layer and the output plane were all set to ~ 26.7\(\lambda\). The phase modulation values of the diffractive neurons at each transmissive layer were iteratively updated using a stochastic gradient-descent-based algorithm to minimize a customized loss function, enabling object class-specific imaging. For the data class of interest, the training loss terms included the normalized mean square error (NMSE) and the negative Pearson Correlation Coefficient (PCC) [38] between the output image and the input, aiming to optimize the image fidelity at the output plane for the correct class of objects. For all the other classes of objects (to be all-optically erased), we penalized the statistical similarity between the output image and the input object (see "Methods" section for details). This well-balanced training loss function enabled the output images from the non-target classes of objects (i.e., the handwritten digits 0, 1, 3–9) to be all-optically erased at the output FOV, forming speckle-like background patterns with lower average intensity, whereas all the input objects of the target data class (i.e., handwritten examples of digit 2) formed high-quality images at the output plane. The resulting diffractive layers that are learned through this data-driven training process are reported in Fig. 2b, which collectively function as a spatial mode filter that is data class-specific.
Design schematic and blind testing results of the class-specific diffractive camera. a The physical layout of the three-layer diffractive camera design. b Phase modulation patterns of the converged diffractive layers of the camera. c The blind testing results of the diffractive camera. The output images were normalized using the same constant for visualization
After its training, we numerically tested this diffractive camera design using 10,000 MNIST test digits, which were not used during the training process. Figure 2c reports some examples of the blind testing output of the trained diffractive imager and the corresponding input objects. These results demonstrate that the diffractive camera learned to selectively image the input objects that belong to the target data class, even if they have statistically diverse styles due to the varying nature of human handwriting. As desired, the diffractive camera generates unrecognizable noise-like patterns for the input objects from all the other data classes, all-optically erasing their information at its output plane. Stated differently, the image formation is intervened at the coherent wave propagation stage for the undesired data classes, where the characteristic optical modes that statistically represent the input objects of these non-target data classes are scattered out of the output FOV of our diffractive camera.
Importantly, this diffractive camera is not based on a standard point-spread function-based pixel-to-pixel mapping between the input and output FOVs, and therefore, it does not automatically result in signals within the output FOV for the transmitting input pixels that statistically overlap with the objects from the target data class. For example, the handwritten digits '3' and '8' in Fig. 2c were completely erased at the output FOV, regardless of the considerable amount of common (transmitting) pixels that they statistically share with the handwritten digit '2'. Instead of developing a spatially-invariant point-spread function, our designed diffractive camera statistically learned the characteristic optical modes possessed by different training examples, to converge as an optical mode filter, where the main modes that represent the target class of objects can pass through with minimum distortion of their relative phase and amplitude profiles, whereas the spatial information carried by the characteristic optical modes of the other data classes were scattered out. The deep learning-based optimization using the training images/examples is the key for the diffractive camera to statistically learn which optical modes must be filtered out and which group of modes needs to pass through the diffractive layers so that the output images accurately represent the spatial features of the input objects for the correct data class. As detailed in "Methods" section, the training loss function and its penalty terms for the target data class and the other classes are crucial for achieving this performance.
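To make the structure of this loss concrete, a minimal PyTorch-style sketch is shown below. It reflects only the qualitative description given here (NMSE plus negative PCC for the target class, and a penalty on output-input similarity for the other classes); the exact implementation, weights, and normalization used by the authors are described in the "Methods" section and may differ:

```python
# Minimal sketch (assumed form) of a class-specific training loss for the diffractive camera.
import torch

def pcc(a, b, eps=1e-8):
    """Pearson correlation coefficient per image; a, b have shape (batch, H, W)."""
    a = a - a.mean(dim=(-2, -1), keepdim=True)
    b = b - b.mean(dim=(-2, -1), keepdim=True)
    num = (a * b).sum(dim=(-2, -1))
    den = torch.sqrt((a ** 2).sum(dim=(-2, -1)) * (b ** 2).sum(dim=(-2, -1))) + eps
    return num / den

def class_specific_loss(output, input_img, is_target_class, alpha=1.0, beta=1.0):
    """is_target_class: boolean tensor of shape (batch,)."""
    nmse = ((output - input_img) ** 2).mean(dim=(-2, -1)) / \
           (input_img ** 2).mean(dim=(-2, -1)).clamp_min(1e-8)
    corr = pcc(output, input_img)
    loss_target = nmse - alpha * corr   # image the target class faithfully
    loss_other = beta * corr.abs()      # erase: penalize any resemblance to the input
    return torch.where(is_target_class, loss_target, loss_other).mean()
```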
In addition to these results summarized in Fig. 2, the same class-specific imaging system can also be adapted to selectively image input objects of other data classes by simply re-dividing the training image dataset into desired/target vs. unwanted classes of objects. To demonstrate this, we show different diffractive camera designs in Additional file 1: Fig. S1, where the same class-specific performance was achieved for the selective imaging of e.g., handwritten test objects from digits '5' or '7', while all-optically erasing the other data classes at the output FOV. Even more remarkable, the diffractive camera design can also be optimized to selectively image a desired group of data classes, while still rejecting the objects of the other data classes. For example, Additional file 1: Fig. S1 reports a diffractive camera that successfully imaged handwritten test objects belonging to digits '2', '5', and '7' (defining the target group of data classes), while erasing all the other handwritten digits all-optically. Stated differently, the diffractive camera was in this case optimized to selectively image three different data classes in the same design, while successfully filtering out the remaining data classes at its output FOV (see Additional file 1: Fig. S1).
To further demonstrate the success of the presented class-specific diffractive camera design for processing more complicated objects, we extended it to specifically image only one class of fashion products [37] (i.e., trousers). As shown in Additional file 1: Fig. S2, a seven-layer diffractive camera was designed to achieve class-specific imaging of trousers within the Fashion MNIST dataset [37], while all-optically erasing/rejecting four other classes of the fashion products (i.e., dresses, sandals, sneakers, and bags). These results, summarized in Additional file 1: Fig. S2, further demonstrate the successful generalization of our class-specific diffractive imaging approach to more complex objects.
Next, we evaluated the diffractive camera's performance with respect to the number of transmissive layers in its design (see Fig. 3 and Additional file 1: Fig. S1). Except for the number of diffractive layers, all the other hyperparameters of these camera designs were kept the same as before, for both the training and testing procedures. The patterns of the converged diffractive layers of each camera design are illustrated in Additional file 1: Fig. S3. The comparison of the class-specific imaging performance of these diffractive cameras with different numbers of trainable transmissive layers can be found in Fig. 3. Improved fidelity of the output images corresponding to the objects from the target data class can be observed as the number of diffractive layers increases, exhibiting higher image contrast, closely matching the input object features (Fig. 3a). At the same time, for the input objects from the non-target data classes, all the three diffractive camera designs generated unrecognizable noise-like patterns, all-optically erasing their information at the output. The same depth advantage can also be observed when another digit or a group of digits were selected as the target data classes. In Additional file 1: Fig. S1, we compare the diffractive camera designs with three, five, and seven successive layers and demonstrate that deeper diffractive camera designs with more layers imaged the target classes of objects with higher fidelity and contrast compared to those with fewer diffractive layers.
Performance advantages of deeper diffractive cameras. a Comparison of the output images using diffractive camera designs with three, four, and five layers. The output images at each row were normalized using the same constant for visualization. b Quantitative comparison of the three diffractive camera designs. The left panel compares the average PCC values calculated using input objects from the target data class only (i.e., 1032 different handwritten digits). The middle panel compares the average absolute PCC values calculated using input objects from the other data classes (i.e., 8968 different handwritten digits). The right panel plots the average output intensity ratio (\(R\)) of the target to non-target data classes
We also quantified the blind testing performance of each diffractive camera design by calculating the average PCC value between the output images and the ground truth (i.e., input objects); see Fig. 3b. For this quantitative analysis, the MNIST testing dataset was first divided into target class objects (\({n}_{1}=\) 1032 handwritten test objects for digit '2') and non-target class objects (\({n}_{2}=\) 8968 handwritten test objects for all the other digits), and the average PCC value was calculated separately for each object group. For the target data class of interest, a higher PCC value indicates improved imaging fidelity. For the other, non-target data classes, however, the absolute PCC values were used as an "erasure figure-of-merit", since PCC values close to either 1 or −1 can indicate interpretable image information, which is undesirable for object erasure. Therefore, the average PCC values of the target class objects (\({n}_{1}\)) and the average absolute PCC values of the non-target classes of objects (\({n}_{2}\)) are presented in the first two charts in Fig. 3b. The depth advantage of the class-specific diffractive camera designs is clearly demonstrated in these results, where a deeper diffractive imager with e.g., five transmissive layers achieved (1) a better output image fidelity and a higher average PCC value for imaging the target class of objects, and (2) an improved all-optical erasure of the undesired objects (with a lower absolute PCC value) for the non-target data classes as shown in Fig. 3b.
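For reference, the evaluation protocol described above can be sketched as follows. This is a minimal NumPy illustration (not the authors' released code), assuming `outputs`, `ground_truths`, and `labels` hold the simulated output intensities, the input objects, and their MNIST classes, respectively.

```python
import numpy as np

def pcc(a, b):
    """Pearson correlation coefficient between two images."""
    a, b = a.ravel() - a.mean(), b.ravel() - b.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2)))

def class_specific_metrics(outputs, ground_truths, labels, target_class=2):
    # average PCC for the target class (imaging fidelity)
    target = [pcc(o, g) for o, g, y in zip(outputs, ground_truths, labels) if y == target_class]
    # average |PCC| for all other classes (erasure figure-of-merit)
    others = [abs(pcc(o, g)) for o, g, y in zip(outputs, ground_truths, labels) if y != target_class]
    return np.mean(target), np.mean(others)
```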
In addition to these, a deeper diffractive camera also creates a stronger signal intensity separation between the output images of the target and non-target data classes. To quantify this signal-to-noise ratio advantage at the output FOV, we defined the average output intensity ratio (\(R\)) of the target to non-target data classes as:
$$\begin{array}{c}R=\frac{\frac{1}{{n}_{1}}{\sum }_{i=1}^{{n}_{1}}{\overline{O} }_{i}^{+}}{\frac{1}{{n}_{2}}{\sum }_{i=1}^{{n}_{2}}{\overline{O} }_{i}^{-}}\end{array}$$
where the numerator is the average output intensity of \({n}_{1}=\) 1032 test objects from the target data class (denoted as \({\overline{O} }_{i}^{+}\)), and the denominator is the average output intensity of \({n}_{2}=\) 8968 test objects from all the other data classes (denoted as \({\overline{O} }_{i}^{-}\)). The \(R\) values of the three-, four-, and five-layer diffractive camera designs were found to be 1.354, 1.464, and 1.532, respectively, as summarized in Fig. 3b. These quantitative results once again confirm that a deeper diffractive camera with more trainable layers exhibits a better performance in its class-specific imaging task and achieves an improved signal-to-noise ratio at its output.
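The output intensity ratio \(R\) defined above can be computed, for example, as in the following short sketch (an assumed implementation; `outputs` is an (N, H, W) array of output intensities and `is_target` a boolean mask over the test set).

```python
import numpy as np

def intensity_ratio(outputs, is_target):
    # mean output intensity of each test object, i.e., the per-image averages in the numerator/denominator
    mean_per_image = outputs.reshape(len(outputs), -1).mean(axis=1)
    return mean_per_image[is_target].mean() / mean_per_image[~is_target].mean()
```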
Note that a class-specific diffractive camera trained with the standard grayscale MNIST images retains its designed functionality even when the input objects face varying illumination conditions. To demonstrate this, we first blindly tested the five-layer diffractive camera design reported in Fig. 3a under varying levels of intensity (from low to high intensity and eventually saturated, where the grayscale features of the input objects became binary). As reported in Additional file 2: Movie S1, the diffractive camera selectively images the input objects from the target class and robustly erases the information of the non-target classes of input objects, regardless of the intensity, even when the objects became saturated and structurally deformed from their grayscale features. We further blindly tested the same five-layer diffractive camera design reported in Fig. 3a with the input objects illuminated under spatially non-uniform intensity distributions, deviating from the training features. As shown in Additional file 3: Movie S2, the class-specific diffractive camera still worked as designed under non-uniform input illumination intensities, demonstrating its effectiveness and robustness in handling complex scenarios with varying lighting conditions. These input distortions, highlighted in Additional file 2: Movie S1 and Additional file 3: Movie S2, were never seen/used during the training phase, and they illustrate the generalization performance of our diffractive camera design as an optical mode filter performing class-specific imaging.
Simultaneous imaging of multiple objects from different data classes
In a more general scenario, multiple objects of different classes can be present in the same input FOV. To exemplify such an imaging scenario, the input FOV of the diffractive camera was divided into 3 \(\times\) 3 subregions, and a random handwritten digit/object could appear in each subregion (see e.g., Fig. 4). Based on this larger FOV with multiple input objects, a three-layer and a five-layer diffractive camera were separately designed to selectively image the whole input plane, all-optically erasing all the presented objects from the non-target data classes (Fig. 4a). The design parameters of these diffractive cameras were the same as the cameras reported in the previous subsection, except that each diffractive layer was expanded from 120 \(\times\) 120 to 300 \(\times\) 300 diffractive pixels to accommodate the increased input FOV. During the training phase, 48,000 MNIST handwritten digits appeared randomly at each subregion, and the handwritten digit '2' was selected as our target data class to be specifically imaged. The diffractive layers of the converged camera designs are shown in Fig. 4b for the three-layer diffractive camera and in Fig. 4c for the five-layer diffractive camera.
Simultaneous imaging of multiple objects of different data classes using a diffractive camera. a Schematic and the blind testing results of a three-layer diffractive camera and a five-layer diffractive camera. The output images in each row were normalized using the same constant for visualization. b Phase modulation patterns of the converged diffractive layers for the three-layer diffractive camera design. c Phase modulation patterns of the converged diffractive layers for the five-layer diffractive camera design
During the blind testing phase of each of these diffractive cameras, the input test objects were randomly generated using combinations of the 10,000 MNIST test digits (not included in the training). Our imaging results reported in Fig. 4a reveal that these diffractive camera designs can selectively image the handwritten test objects from the target data class, while all-optically erasing the other objects from the remaining digits in the same FOV, regardless of which subregion they are located in. It is also demonstrated that, compared with the three-layer design, the deeper diffractive camera with five trained layers generated output images with improved fidelity and higher contrast for the target class of objects, as shown in Fig. 4a. At the same time, this deeper diffractive camera achieved stronger suppression of the objects from the non-target data classes, generating lower output intensities for these undesired objects.
Class-specific camera design with random object displacements over a large input field-of-view
In consideration of different imaging scenarios, where the target objects can appear at arbitrary spatial locations within a large input FOV, we further demonstrated a class-specific camera design that selectively images the input objects from a given data class within a large FOV. As the space-bandwidth product at the input (SBPi) and the output (SBPo) planes increased in this case, we used a deeper architecture with more diffractive neurons, since in general the number of trainable diffractive features in a given design needs to scale proportional to SBPi × SBPo [39, 40]. Therefore, we used seven diffractive layers, each with 300 \(\times\) 300 diffractive neurons/pixels. During the training phase, 48,000 MNIST handwritten digits were randomly placed within the input FOV of the camera, one by one, and the handwritten digit '2' was selected to be specifically imaged at the corresponding location on the output/image plane while the input objects from the other classes were to be erased (see the "Methods" section). As demonstrated in Additional file 4: Movie S3, test objects from the target data class (the handwritten digit '2') can be faithfully imaged regardless of their varying locations, while the objects from the other data classes were all-optically erased, only yielding noisy images at the output plane. This deeper diffractive camera exhibits class-specific imaging over a larger input FOV regardless of the random displacements of the input objects. The blind testing performance shown in Additional file 4: Movie S3 can be further improved with wider and deeper diffractive camera architectures with more trainable features to better cope with the increased space-bandwidth product at the input and output fields-of-view.
Class-specific permutation camera design
Apart from directly imaging the objects from a target data class, a class-specific diffractive camera can also be designed to output pixel-wise permuted images of target objects, while all-optically erasing other types of objects. To demonstrate this class-specific image permutation as a form of all-optical encryption, we designed a five-layer diffractive permutation camera, which takes MNIST handwritten digits as its input and performs an all-optical permutation only on the target data class (e.g., handwritten digit '2'). The corresponding inverse permutation operation can be sequentially applied on the pixel-wise permuted output images to recover the original handwritten digits, '2'. The other handwritten digits, however, will be all-optically erased, with noise-like features appearing at the output FOV, before and after the inverse permutation operation (Fig. 5a). Stated differently, the all-optical permutation of this diffractive camera operates on a specific data class, whereas the rest of the objects from other data classes are irreversibly lost/erased at the output FOV.
Class-specific permutation camera. a Illustration of a five-layer diffractive camera trained to perform class-specific permutation operation (denoted as \({\varvec{P}}\)) with instantaneous all-optical erasure of the other classes of objects at its output FOV. Application of an inverse permutation (\({{\varvec{P}}}^{-1}\)) to the output images recovers the original objects of the target data class, whereas the rest of the objects from other data classes are irreversibly lost/erased at the output FOV. The output images were normalized using the same constant for visualization. b Phase modulation patterns of the converged diffractive layers of the class-specific permutation camera
To design this class-specific permutation camera, a random permutation matrix \({\varvec{P}}\) was first generated (Fig. 5), which describes a unique one-to-one mapping of each image pixel at the input FOV to a new location/pixel at the output FOV. This randomly selected, desired permutation matrix \({\varvec{P}}\) was applied to each input image \(G\) and the resulting permuted image \(({\varvec{P}}G)\) was used as the ground truth throughout the training process of the permutation camera. The training loss function remained the same as in the previous five-layer diffractive design reported in Fig. 3a; however, instead of calculating the loss using the output and the input (\(G\)) images, this class-specific permutation camera design was optimized by minimizing the loss calculated using the output images and the permuted input images (\({\varvec{P}}G\)). The converged diffractive layers of this class-specific permutation camera are presented in Fig. 5b.
During the blind testing phase, the designed class-specific permutation camera was tested with 10,000 MNIST digits, never used in the training phase. As demonstrated in Fig. 5a, this permutation camera learned to selectively permute the input objects that belong to the target class (i.e., the handwritten digit '2'), generating output intensity patterns that closely resemble \({\varvec{P}}G\). This class-specific all-optical permutation operation performed by the diffractive camera resulted in uninterpretable patterns of the target objects at the output FOV, which cannot be decoded without the knowledge of the permutation matrix, \({\varvec{P}}\). On the other hand, for the input objects that belong to other data classes, the same permutation camera design generated noise-like, low-intensity patterns that do not match the permuted images (\({\varvec{P}}G\)). In fact, by applying the inverse permutation (\({{\varvec{P}}}^{-1}\)) operation on the output images of the diffractive camera, the original digits of interest from the target data class can be faithfully reconstructed, whereas all the other classes of objects ended up in noise-like patterns (see the last column of Fig. 5a), which illustrates the success of this class-specific permutation camera.
Class-specific linear transformation camera design
As a more general and even more challenging case of the class-specific permutation camera reported in the previous section, here we report the design of a class-specific linear transformation camera (Fig. 6), which performs an arbitrarily selected invertible linear transformation at its output FOV for a desired class of objects, while all-optically erasing the other classes of objects, i.e., they cannot be retrieved even if the inverse linear transformation were to be applied at the output of the camera. To achieve this goal, we designed a seven-layer linear transformation diffractive camera, which takes MNIST handwritten digits as its input and performs an all-optical linear transformation only on the target data class (which was selected as the handwritten digit '2'). During its blind testing phase, the designed class-specific linear transformation camera was tested with 10,000 MNIST digits, never used in the training phase. After the linear transformation operation performed all-optically through the diffractive camera, the output images become uninterpretable, i.e., become encrypted (unless one has the "key", i.e., the inverse transformation matrix). The corresponding inverse linear transformation, i.e., the key, can be subsequently applied to the transformed output images of the target class of objects to recover the original handwritten input digits, '2'. Similar to the class-specific permutation camera design (shown in Fig. 5), the other handwritten digits are all-optically erased, with noise-like features appearing at the output FOV, which cannot be retrieved back even after the inverse linear transformation (see Fig. 6). Stated differently, the all-optical linear transformation (i.e., the "lock" or the encryption) of this diffractive camera only operates on the objects of a specific data class (where the key would be able to bring the images of the objects back through an inverse linear transformation), whereas the rest of the objects from the other data classes are irreversibly lost/erased at the output FOV even if one has access to the correct key (Fig. 6).
Class-specific linear transformation camera. a Illustration of a seven-layer diffractive camera trained to perform a class-specific linear transformation (denoted as \({\varvec{T}}\)) with instantaneous all-optical erasure of the other classes of objects at its output FOV. This class-specific all-optical linear transformation operation performed by the diffractive camera results in uninterpretable patterns of the target objects at the output FOV, which cannot be decoded without the knowledge of the transformation matrix, \({\varvec{T}}\), or its inverse. By applying the inverse linear transformation (\({{\varvec{T}}}^{-1}\)) on the output images of the diffractive camera, the original images of interest from the target data class can be faithfully reconstructed. On the other hand, the input objects from the other data classes end up in noise-like patterns both before and after applying the inverse linear transformation, demonstrating the success of this class-specific linear transformation camera design. The output images were normalized using the same constant for visualization. b Phase modulation patterns of the converged diffractive layers of the class-specific linear transformation camera
Experimental demonstration of a class-specific diffractive camera
We experimentally demonstrated the proof of concept of a class-specific diffractive camera by fabricating and assembling the diffractive layers using a 3D printer and testing it with a continuous wave source at \(\lambda =\) 0.75 mm (Fig. 7a). For these experiments, we trained a three-layer diffractive camera design using the same configuration as the system reported in Fig. 2, with the following changes: (1) the diffractive camera was "vaccinated" during its training phase against potential experimental misalignments [41], by introducing random displacements to the diffractive layers during the iterative training and optimization process (Fig. 7b, see the "Methods" section for details); (2) the handwritten MNIST objects were down-sampled to 15 \(\times\) 15 pixels to form the 3D-fabricated input objects; (3) an additional image contrast-related penalty term was added to the training loss function to enhance the contrast of the output images from the target data class, which further improved the signal-to-noise ratio of the diffractive camera design. The resulting diffractive layers, including the pictures of the 3D-printed camera, are shown in Fig. 7b, c.
Experimental setup for object class-specific imaging using a diffractive camera. a Schematic of the experimental setup using THz illumination. b Schematic of the misalignment resilient training of the diffractive camera and the converged phase patterns. c Photographs of the 3D printed and assembled diffractive system
To blindly test the 3D-assembled diffractive camera (Fig. 7c), 12 different MNIST handwritten digits, including three digits from the target data class (digit '2') and nine digits from the other data classes were used as the input test objects of the diffractive camera. The output FOV of the diffractive camera (36 \(\times\) 36 mm2) was scanned using a THz detector forming the output images. The experimental imaging results of our 3D-printed diffractive camera are demonstrated in Fig. 8, together with the input test objects and the corresponding numerical simulation results for each input object. The experimental results show a high degree of agreement with the numerically expected results based on the optical forward model of our diffractive camera, and we observe that the test objects from the target data class were imaged well, while the other non-target test objects were completely erased at the output FOV of the camera. The success of these proof-of-concept experimental results further confirms the feasibility of our class-specific diffractive camera design.
Experimental results of object class-specific imaging using a 3D-printed diffractive camera
We reported a diffractive camera design that performs class-specific imaging of target objects while instantaneously erasing other objects all-optically, which might inspire energy-efficient, task-specific, and secure solutions for privacy-preserving imaging. Unlike conventional privacy-preserving imaging methods that rely on post-processing of images after their digitization, our diffractive camera design enforces privacy protection by selectively erasing the information of the non-target objects during light propagation, which reduces the risk of recording sensitive raw image data.
To make this diffractive camera design even more resilient against potential adversarial attacks, one can monitor the illumination intensity as well as the output signal intensity and accordingly trigger the camera recording only when the output signal intensity is above a certain threshold. Based on the intensity separation that is created by the class-specific imaging performance of our diffractive camera, an intensity threshold can be determined at the output image sensor to trigger image capture only when a sufficient number of photons are received, which would eliminate the recording of any digital signature corresponding to non-target objects at the input FOV. Such an intensity threshold-based recording for class-specific imaging also eliminates unnecessary storage and transmission of image data by only digitizing the target information of interest from the desired data classes.
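As a rough illustration of this idea, the snippet below sketches such an intensity-threshold trigger; the `sensor.read_frame()` interface and the threshold itself are hypothetical placeholders and not part of the reported system.

```python
def capture_if_target(sensor, threshold):
    frame = sensor.read_frame()      # raw intensity measured at the output FOV (assumed interface)
    if frame.mean() >= threshold:    # enough photons received -> likely a target-class object
        return frame                 # digitize/store the image only in this case
    return None                      # non-target objects never produce a recorded frame
```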
In addition to securing the information of the undesired objects by all-optically erasing them at the output FOV, the class-specific permutation and class-specific linear transformation camera designs reported in Figs. 5, 6 can further perform all-optical image encryption for the desired classes of objects, providing an additional layer of data security. Through the data-driven training process, the class-specific permutation camera learns to apply a randomly selected permutation operation on the target class of input objects, which can only be inverted with the knowledge of the inverse permutation operation; this class-specific permutation camera can be used to further secure the confidentiality of the images of the target data class.
Compared to the traditional digital processing-based methods, the presented diffractive camera design has the advantages of speed and resource savings since the entire non-target object erasure process is performed as the input light diffracts through a thin camera volume at the speed of light. The functionality of this diffractive camera can be enabled on demand by turning on the coherent illumination source, without the need for any additional digital computing units or an external power supply, which makes it especially beneficial for power-limited and continuously working remote systems.
It is important to emphasize that the presented diffractive camera system does not possess a traditional, spatially-invariant point-spread function. A trained diffractive camera system performs a learned, complex-valued linear transformation between the input and output fields that statistically represents the coherent imaging of the input objects from the target data class. Through the data-driven training process using examples of the input objects, this complex-valued linear transformation performed by the diffractive camera converged into an optical mode filter that, by and large, preserves the phase and amplitude distributions of the propagating modes that characteristically represent the objects of the target data class. Because of the additional penalty terms that are used to all-optically erase the undesired data classes, the same complex-valued linear transformation also acts as a modal filter, scattering out the characteristic modes that statistically represent the other types of objects that do not belong to the target data class. Therefore, each class-specific diffractive camera design results from this data-driven learning process through training examples, optimized via error backpropagation and deep learning.
Also, note that the experimental proof of concept for our diffractive camera was demonstrated using a spatially-coherent and monochromatic THz illumination source, whereas the most commonly used imaging systems in the modern digital world are designed for visible and near-infrared wavelengths, using broadband and incoherent (or partially-coherent) light. With the recent advancements in state-of-the-art nanofabrication techniques such as electron-beam lithography [42] and two-photon polymerization [43], diffractive camera designs can be scaled down to micro-scale, in proportion to the illumination wavelength in the visible spectrum, without altering their design and functionality. Furthermore, it has been demonstrated that diffractive systems can be optimized using deep learning methods to all-optically process broadband signals [44]. Therefore, nano-fabricated, compact diffractive cameras that can work in the visible and IR parts of the spectrum using partially-coherent broadband radiation from e.g., light-emitting diodes (LEDs) or an array of laser diodes would be feasible in the near future.
Forward-propagation model of a diffractive camera
For a diffractive camera with N diffractive layers, the forward propagation of the optical field can be modeled as a sequence of (1) free-space propagation between the lth and (l + 1)th layers (\(l=0, 1, 2, \dots , N\)), and (2) the modulation of the optical field by the lth diffractive layer (\(l=1, 2, \dots , N)\), where the 0th layer denotes the input/object plane and the (N + 1)th layer denotes the output/image plane. The free-space propagation of the complex field is modeled following the angular spectrum approach [45]. The optical field \({u}^{l}\left(x, y\right)\) right after the lth layer after being propagated for a distance of \(d\) can be written as [46]:
$$\begin{array}{c}{\mathbb{P}}_{\mathbf{d}}{ u}^{l}\left(x,y\right)={\mathcal{F}}^{-1}\left\{\mathcal{F}\left\{{u}^{l}\left(x,y\right)\right\}H({f}_{x},{f}_{y};d)\right\}\end{array}$$
where \({\mathbb{P}}_{\mathbf{d}}\) represents the free-space propagation operator, \(\mathcal{F}\) and \({\mathcal{F}}^{-1}\) are the two-dimensional Fourier transform and the inverse Fourier transform operations, and \(H({f}_{x}, {f}_{y};d)\) is the transfer function of free space:
$$H\left({f}_{x},{f}_{y};d\right)=\left\{\begin{array}{ll}\exp\left\{jkd\sqrt{1-{\left(\frac{2\pi {f}_{x}}{k}\right)}^{2}-{\left(\frac{2\pi {f}_{y}}{k}\right)}^{2}}\right\}, & {f}_{x}^{2}+{f}_{y}^{2}<\frac{1}{{\lambda }^{2}}\\ 0, & {f}_{x}^{2}+{f}_{y}^{2}\ge \frac{1}{{\lambda }^{2}}\end{array}\right.$$
where \(j=\sqrt{-1}\), \(k= \frac{2\pi }{\lambda }\) and \(\lambda\) is the wavelength of the illumination light. \({f}_{x}\) and \({f}_{y}\) are the spatial frequencies along the \(x\) and \(y\) directions, respectively.
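For illustration, the angular spectrum propagation step defined above can be implemented along the following lines; this is a minimal NumPy sketch under the stated monochromatic, scalar-field assumptions, not the authors' code. The grid spacing `dx`, wavelength `wl`, and propagation distance `d` are free parameters here.

```python
import numpy as np

def propagate(u, d, wl, dx):
    """Free-space propagation of a complex field u (square grid) over distance d."""
    n = u.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                      # spatial frequencies f_x, f_y
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    k = 2 * np.pi / wl
    # note (2*pi*f/k)^2 = (wl*f)^2; keep only propagating frequencies (f_x^2 + f_y^2 < 1/wl^2)
    arg = 1 - (wl * FX) ** 2 - (wl * FY) ** 2
    H = np.where(arg > 0, np.exp(1j * k * d * np.sqrt(np.maximum(arg, 0))), 0)
    return np.fft.ifft2(np.fft.fft2(u) * H)
```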
We consider only the phase modulation of the transmitted field at each layer, where the transmittance coefficient \({t}^{l}\) of the lth diffractive layer can be written as:
$$\begin{array}{c}{t}^{l}\left(x,y\right)=\exp\left\{j{\phi }^{l}\left(x,y\right)\right\}\end{array}$$
where \({\phi }^{l}\left(x,y\right)\) denotes the phase modulation of the trainable diffractive neuron located at \(\left(x,y\right)\) position of the lth diffractive layer. Based on these definitions, the complex optical field at the output plane of a diffractive camera can be expressed as:
$$\begin{array}{c}o\left(x,y\right)={\mathbb{P}}_{{\mathbf{d}}_{{\varvec{N}},{\varvec{N}}+1}}\left(\prod_{l=N}^{1}{t}^{l}\left(x,y\right)\cdot {\mathbb{P}}_{{\mathbf{d}}_{{\varvec{l}}-1,\boldsymbol{ }\boldsymbol{ }{\varvec{l}}}}\right)g(x,y)\end{array}$$
where \({d}_{l-1,l}\) represents the axial distance between the (l − 1)th and the lth layers, and \(g\left(x,y\right)\) is the input optical field, given by the amplitude of the input objects (handwritten digits) used in this work.
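Combining the propagation operator with the phase-only layer modulation, the full forward model above can be sketched as follows, reusing the `propagate` helper from the previous snippet; again, this is an assumed implementation rather than the released code.

```python
import numpy as np

def diffractive_camera_forward(g, phase_layers, d, wl, dx):
    """g: complex input field; phase_layers: list of 2D trainable phase maps phi^l."""
    u = propagate(g, d, wl, dx)          # input/object plane -> first diffractive layer
    for phi in phase_layers:
        u = u * np.exp(1j * phi)         # phase-only modulation t^l(x, y)
        u = propagate(u, d, wl, dx)      # propagation to the next layer / output plane
    return np.abs(u) ** 2                # output intensity O(x, y)
```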
Training loss function
The reported diffractive camera systems were optimized by minimizing the loss functions that were calculated using the intensities of the input and output images. The input and output intensities \(G\) and \(O\), respectively, can be written as:
$$\begin{array}{c}G\left(x,y\right)={\left|g\left(x,y\right)\right|}^{2}\end{array}$$
$$\begin{array}{c}O\left(x,y\right)={\left|o\left(x,y\right)\right|}^{2}\end{array}$$
The loss function, calculated using a batch of training input objects \({\varvec{G}}\) with the corresponding output images \({\varvec{O}}\) can be defined as:
$$\begin{array}{c}Loss\left({\varvec{O}},{\varvec{G}}\right)=Los{s}_{+}\left({{\varvec{O}}}^{+}, {{\varvec{G}}}^{+}\right)+ Los{s}_{-}\left({{\varvec{O}}}^{-},\boldsymbol{ }{{\varvec{G}}}^{-},{G}_{k}^{+}\right)\end{array}$$
where \({{\varvec{O}}}^{+},{{\varvec{G}}}^{+}\) represent the output and input images from the target data class (i.e., desired object class), and \({{\varvec{O}}}^{-},{{\varvec{G}}}^{-}\) represent the output and input images from the other data classes (to be all-optically erased), respectively.
The \(Los{s}_{+}\) term is designed to reduce the NMSE and enhance the correlation between the output image \({O}^{+}\) of any target-class input object and its ground truth \({G}^{+}\), so that the diffractive camera learns to faithfully reconstruct the objects from the target data class, i.e.,
$$\begin{array}{c}Los{s}_{+}\left({O}^{+},{G}^{+}\right)={\alpha }_{1}\times NMSE\left({O}^{+}, { G}^{+}\right)+ {\alpha }_{2}\times \left(1-\mathrm{PCC}\left({O}^{+}, {G}^{+}\right)\right)\end{array}$$
where \({\alpha }_{1}\) and \({\alpha }_{2}\) are constants and NMSE is defined as:
$$\begin{array}{c}NMSE\left({O}^{+},{G}^{+}\right)=\frac{1}{MN}\sum_{m,n}{\left(\frac{{O}_{m,n}^{+}}{\mathrm{max}({O}^{+})}-{G}_{m,n}^{+}\right)}^{2}\end{array}$$
\(m\) and \(n\) are the pixel indices of the images, and \(MN\) represents the total number of pixels in each image. The output image \({O}^{+}\) was normalized by its maximum pixel value, \(\mathrm{max}({O}^{+})\). The PCC value between any two images \(A\) and \(B\) is calculated using [38]:
$$\begin{array}{c}PCC(A,B)=\frac{\sum \left(A-\overline{A }\right)\left(B-\overline{B }\right)}{\sqrt{\sum {\left(A-\overline{A }\right)}^{2}\sum {\left(B-\overline{B }\right)}^{2}}}\end{array}$$
The term \(\left(1-\mathrm{PCC}\left({O}^{+}, {G}^{+}\right)\right)\) was used in \(Los{s}_{+}\) in order to maximize the correlation between \({O}^{+}\) and \({G}^{+}\), as well as to ensure a non-negative loss value since the PCC value of any two images is always between − 1 and 1.
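For concreteness, the NMSE and PCC terms above could be written in PyTorch roughly as follows; these batched definitions (for images of shape (B, H, W)) are an assumption standing in for the authors' implementation.

```python
import torch

def nmse(o, g):
    o_norm = o / o.amax(dim=(-2, -1), keepdim=True)        # normalize output by max(O+)
    return ((o_norm - g) ** 2).mean(dim=(-2, -1))

def pcc(a, b):
    a = a - a.mean(dim=(-2, -1), keepdim=True)
    b = b - b.mean(dim=(-2, -1), keepdim=True)
    num = (a * b).sum(dim=(-2, -1))
    den = torch.sqrt((a ** 2).sum(dim=(-2, -1)) * (b ** 2).sum(dim=(-2, -1)))
    return num / den

def loss_plus(o, g, a1=1.0, a2=3.0):
    # NMSE + (1 - PCC) weighted as in the equation above
    return (a1 * nmse(o, g) + a2 * (1.0 - pcc(o, g))).mean()
```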
Different from \(Los{s}_{+}\), the \(Los{s}_{-}\) function is designed to reduce (1) the absolute correlation between the output \({O}^{-}\) and its corresponding input \({G}^{-}\), (2) the absolute correlation between \({O}^{-}\) and an arbitrary object \({G}_{k}^{+}\) from the target class, and (3) the correlation between \({O}^{-}\) and itself shifted by a few pixels, \({O}_{\mathrm{sft}}^{-}\), which can be formulated as:
$$\begin{array}{c}Los{s}_{-}\left({O}^{-},{G}^{-},{G}_{k}^{+}\right)={\beta }_{1}\times \left|\mathrm{PCC}\left({O}^{-}, {G}^{-}\right)\right|+{\beta }_{2}\times \left|\mathrm{PCC}\left({O}^{-}, {G}_{k}^{+}\right)\right|+ {\beta }_{3}\times PCC\left({O}^{-}, {O}_{\mathrm{sft}}^{-}\right)\end{array}$$
where \({\beta }_{1}\), \({\beta }_{2}\) and \({\beta }_{3}\) are constants. Here the \({G}_{k}^{+}\) refers to an image of an object from the target data class in the training set, which was randomly selected for every training batch, and the subscript \(k\) refers to a random index. In other words, within each training batch, the \(\mathrm{PCC}\left({O}^{-}, {G}_{k}^{+}\right)\) was calculated using the output image from the non-target data class and a random ground truth image from the target class. By adding such a loss term, we prevent the diffractive camera from converging to a solution where all the output images look like the target object. The \({O}_{\mathrm{sft}}^{-}\) was obtained using:
$$\begin{array}{c}{O}_{\mathrm{sft}}^{-} \left(x,y\right)={O}^{-}\left(x-{s}_{x},y-{s}_{y}\right)\end{array}$$
where \({s}_{x}={s}_{y}=5\) denote the number of pixels that \({O}^{-}\) is shifted in each direction. Intuitively, a natural image will maintain a high correlation with itself, shifted by a small amount, while an image of random noise will not. By minimizing \(\mathrm{PCC}\left({O}^{-}, {O}_{\mathrm{sft}}^{-}\right)\), we forced the diffractive camera to generate uninterpretable noise-like output patterns for input objects that do not belong to the target data class.
The coefficients \(\left({\alpha }_{1}, {\alpha }_{2}, {\beta }_{1},{\beta }_{2},{\beta }_{3}\right)\) in the two loss functions were empirically set to (1, 3, 6, 3, 2).
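A corresponding sketch of \(Los{s}_{-}\), reusing the `pcc` helper above, is given below. Note that `torch.roll` is used here as a simple stand-in for the 5-pixel shift defining \({O}_{\mathrm{sft}}^{-}\); the exact shift/padding handling in the original implementation may differ.

```python
import torch

def loss_minus(o_minus, g_minus, g_plus_k, b1=6.0, b2=3.0, b3=2.0, shift=5):
    # O_sft: output shifted by a few pixels in each lateral direction
    o_shifted = torch.roll(o_minus, shifts=(shift, shift), dims=(-2, -1))
    term = (b1 * pcc(o_minus, g_minus).abs()        # decorrelate output from its own input
            + b2 * pcc(o_minus, g_plus_k).abs()     # avoid outputs that mimic a target-class object
            + b3 * pcc(o_minus, o_shifted))         # penalize self-similarity -> noise-like outputs
    return term.mean()
```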
Digital implementation and training scheme
The diffractive camera models reported in this work were trained with the standard MNIST handwritten digit dataset under \(\lambda =0.75\; \mathrm{mm}\) illumination. Each diffractive layer has a pixel/neuron size of 0.4 mm, which only modulates the phase of the transmitted optical field. The axial distance between the input plane and the first diffractive layer, the distances between any two successive diffractive layers, and the distance between the last diffractive layer and the output plane are set to 20 mm, i.e., \({d}_{l-1,l}=20\;\mathrm{ mm }\,(l=1,2,\dots , N+1)\). For the diffractive camera models that take a single MNIST image as its input (e.g., reported in Figs. 2, 3), each diffractive layer contains 120 \(\times\) 120 diffractive pixels. During the training, each 28 \(\times\) 28 MNIST raw image was first linearly upscaled to 90 \(\times\) 90 pixels. Next, the upscaled training dataset was augmented with random image transformations, including a random rotation by an angle within \([-10^\circ , +10^\circ ]\), a random scaling by a factor within [0.9, 1.1], and a random shift in each lateral direction by an amount of \([-2.13\lambda , +2.13\lambda ]\).
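One possible way to reproduce this augmentation pipeline with torchvision is sketched below; the conversion of the ±2.13\(\lambda\) lateral shift to roughly ±4 pixels assumes the 0.4 mm neuron size stated above, and the interpolation settings are assumptions of this sketch.

```python
import torchvision.transforms as T

augment = T.Compose([
    T.Resize((90, 90)),                               # linear upscaling of the 28x28 MNIST images
    T.RandomAffine(degrees=10,                        # random rotation within [-10, +10] degrees
                   scale=(0.9, 1.1),                  # random scaling factor
                   translate=(4 / 90, 4 / 90)),       # ~±2.13*lambda shift, i.e., ~±4 pixels here
])
```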
For the diffractive camera model reported in Fig. 4 that takes multiplexed objects as its input, each diffractive layer contains 300 \(\times\) 300 diffractive pixels. The MNIST training digits were first upscaled to 90 \(\times\) 90 pixels and then randomly transformed with \([-10^\circ , +10^\circ ]\) angular rotation, [0.9, 1.1] scaling, and \([-2.13\lambda , +2.13\lambda ]\) translation. Nine different handwritten digits were randomly selected and arranged into 3 \(\times\) 3 grids, generating a multiplexed input image with 270 \(\times\) 270 pixels for the diffractive camera training.
For the diffractive permutation camera reported in Fig. 5, each diffractive layer contains 120 \(\times\) 120 diffractive pixels. The design parameters of this class-specific permutation camera were kept the same as the five-layer diffractive camera reported in Fig. 3a, except that the handwritten digits were down-sampled to 15 \(\times\) 15 pixels considering that the required computational training resources for the permutation operation increase quadratically with the total number of input image pixels. The MNIST training digits were augmented using the same random transformations as described above. The 2D permutation matrix \({\varvec{P}}\) was generated by randomly shuffling the rows of a 225 \(\times\) 225 identity matrix. The inverse of \({\varvec{P}}\) was obtained by using the transpose operation, i.e., \({{\varvec{P}}}^{-1}={{\varvec{P}}}^{{\varvec{T}}}\). The training loss terms for the class-specific permutation camera remained the same as described in Eqs. (8), (9), and (12), except that the permuted input images (\({\varvec{P}}G\)) were used as the ground truth, i.e.,
$$\begin{array}{c}Los{s}_{\mathrm{Permutation}}\left(O,{\varvec{P}}G\right)\,=\,Los{s}_{+}\left({O}^{+}, {{\varvec{P}}G}^{+}\right)+ Los{s}_{-}\left({O}^{-},\boldsymbol{ }{{\varvec{P}}G}^{-},{{\varvec{P}}G}_{k}^{+}\right)\end{array}$$
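The permutation matrix construction and its inversion described above can be sketched as follows (an assumed NumPy implementation operating on flattened 15 \(\times\) 15 images).

```python
import numpy as np

rng = np.random.default_rng(0)
P = np.eye(225)[rng.permutation(225)]     # shuffle the rows of a 225x225 identity matrix
P_inv = P.T                               # the inverse of a permutation matrix is its transpose

def permute_image(g):                     # g: 15x15 input image -> permuted ground truth PG
    return (P @ g.reshape(-1)).reshape(15, 15)

def unpermute_image(o):                   # apply P^{-1} to a (permuted) camera output
    return (P_inv @ o.reshape(-1)).reshape(15, 15)
```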
For the seven-layer diffractive linear transformation camera reported in Fig. 6, each diffractive layer contains 300 \(\times\) 300 diffractive neurons, and the axial distance between any two consecutive planes was set to 45 mm (i.e., \({d}_{l-1,l}=45\) mm, for \(l=1, 2, \dots , N+1)\). The 2D linear transformation matrix \({\varvec{T}}\) was generated by randomly creating an invertible matrix with each row having 20 non-zero random entries, normalized so that each row sums to 1 (for conserving energy); see Fig. 6 for the selected \({\varvec{T}}\). The invertibility of \({\varvec{T}}\) was validated by calculating its determinant. During the training, the loss functions were applied to the diffractive camera output and the ground truth after the inverse linear transformation, i.e., \({{\varvec{T}}}^{-1}O\) and \({{\varvec{T}}}^{-1}({\varvec{T}}G)\). The other details of the training loss terms for the class-specific linear transformation camera remained the same as described in Eqs. (8), (9), and (12).
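A hedged sketch of how such a sparse, invertible, row-normalized transformation \({\varvec{T}}\) could be generated is given below; the matrix dimension `n` and the retry-until-invertible loop are illustrative assumptions, as the text does not specify these details.

```python
import numpy as np

def random_sparse_transform(n=225, nnz=20, seed=0):
    rng = np.random.default_rng(seed)
    while True:
        T = np.zeros((n, n))
        for i in range(n):
            cols = rng.choice(n, size=nnz, replace=False)   # 20 non-zero entries per row
            T[i, cols] = rng.random(nnz)
        T /= T.sum(axis=1, keepdims=True)                   # each row sums to 1 (energy conservation)
        if abs(np.linalg.det(T)) > 1e-12:                   # validate invertibility via determinant
            return T, np.linalg.inv(T)
```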
The diffractive camera trained with the Fashion MNIST dataset (reported in Additional file 1: Fig. S2) contains seven diffractive layers, each with 300 \(\times\) 300 pixels/neurons. The axial distance between any two consecutive planes was set to 45 mm (i.e., \({d}_{l-1,l}=45\) mm, for \(l=\mathrm{1,2},\dots , N+1)\). During the training, each Fashion MNIST raw image was linearly upsampled to 90 \(\times\) 90 pixels and then augmented with random transformations of \([-10^\circ , +10^\circ ]\) angular rotation, [0.9, 1.1] physical scaling, and \([-2.13\lambda , +2.13\lambda ]\) lateral translation. The loss functions used for training remained the same as described in Eqs. (8), (9), and (12).
The spatial displacement-agnostic diffractive camera design with the larger input FOV (reported in Additional file 4: Movie S3) contains seven diffractive layers, each with 300 \(\times\) 300 pixels/neurons. The axial distance between any two consecutive planes was set to 45 mm (i.e., \({d}_{l-1,l}=45\) mm, for \(l=\mathrm{1,2},\dots , N+1)\). During the training, each MNIST raw image was linearly upsampled to 90 × 90 pixels and then randomly placed within a larger input FOV of 140 × 140 pixels. The loss functions were the same as described in Eqs. (8), (9), and (12). During the blind testing shown in Additional file 4: Movie S3, the input objects were distributed within a FOV of 120 × 120 pixels.
The MNIST handwritten digit dataset was divided into training, validation, and testing datasets without any overlap, with each set containing 48,000, 12,000, and 10,000 images, respectively. For the diffractive camera trained with the Fashion MNIST dataset, five different classes (i.e., trousers, dresses, sandals, sneakers, and bags) were selected for the training, validation, and testing, with each set containing 24,000, 6000, and 5000 images without overlap, respectively.
The diffractive camera models reported in this paper were trained using the Adam optimizer [47] with a learning rate of 0.03. The batch size used for all training runs was 60. All models were trained and tested using PyTorch 1.11 with a GeForce RTX 3090 graphical processing unit (NVIDIA Inc.). The typical training time for a three-layer diffractive camera (e.g., in Fig. 2) is ~ 21 h for 1000 epochs.
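A skeleton training loop consistent with these hyperparameters is sketched below; `model` (a differentiable diffractive forward model) and `class_specific_loss` (combining \(Los{s}_{+}\) and \(Los{s}_{-}\) as defined above) are placeholders for the authors' actual components, not released code.

```python
import torch

def train(model, train_loader, epochs=1000, lr=0.03):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for images, labels in train_loader:          # batches of 60 augmented MNIST images
            outputs = model(images)                  # simulated output intensities
            loss = class_specific_loss(outputs, images, labels)
            optimizer.zero_grad()
            loss.backward()                          # error backpropagation through the optical model
            optimizer.step()
```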
For the experimentally validated diffractive camera design shown in Fig. 7, an additional contrast loss \({\mathrm{L}}_{\mathrm{c}}\) was added to \(Los{s}_{+}\), i.e.,
$$\begin{array}{c}Los{s}_{+}\left({O}^{+},{G}^{+}\right)\,=\,{\alpha }_{1}\times NMSE\left({O}^{+}, { G}^{+}\right)+ {\alpha }_{2}\times \left(1-\mathrm{PCC}\left({O}^{+}, {G}^{+}\right)\right)+{\alpha }_{3}\times {\mathrm{L}}_{\mathrm{c}}\left({O}^{+}, {G}^{+}\right)\end{array}$$
The coefficients \(\left({\alpha }_{1}, {\alpha }_{2}, {\alpha }_{3}\right)\) were empirically set to (1, 3, 5) and \({\mathrm{L}}_{\mathrm{c}}\) is defined as:
$$\begin{array}{c}{\mathrm{L}}_{\mathrm{c}}\left({O}^{+}, {G}^{+}\right)=\frac{\sum \left({O}^{+}\cdot \left(1-\widehat{{G}^{+}}\right)\right)}{\sum \left({O}^{+}\cdot \widehat{{G}^{+}}\right)+\varepsilon }\end{array}$$
where \(\varepsilon ={10}^{-6}\) was added to the denominator to avoid division by zero. \(\widehat{{G}^{+}}\) is a binary mask indicating the transmissive regions of the input object \({G}^{+}\), which is defined as:
$$\widehat{{G}^{+}}\left(m,n\right)=\left\{\begin{array}{ll}1, & {G}^{+}\left(m,n\right)>0.5\\ 0, & \mathrm{otherwise}\end{array}\right.$$
By adding this image contrast-related training loss term, the output images of the target objects exhibit enhanced contrast, which is especially helpful under non-ideal experimental conditions.
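A possible PyTorch form of the \({\mathrm{L}}_{\mathrm{c}}\) term and mask defined above is sketched here (an assumed implementation, consistent with the batched loss snippets earlier).

```python
import torch

def contrast_loss(o_plus, g_plus, eps=1e-6):
    mask = (g_plus > 0.5).float()                          # binary mask of transmissive object pixels
    outside = (o_plus * (1.0 - mask)).sum(dim=(-2, -1))    # output energy leaking outside the object
    inside = (o_plus * mask).sum(dim=(-2, -1))             # output energy on the object pixels
    return (outside / (inside + eps)).mean()
```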
In addition, the MNIST training images were first linearly downsampled to 15 × 15 pixels and then upscaled to 90 × 90 pixels using nearest-neighbor interpolation. Then, the resulting input objects were augmented using the same parameters as described before and were fed into the diffractive camera for training. Each diffractive layer had 120 × 120 trainable diffractive neurons.
To overcome the challenges posed by the fabrication inaccuracies and mechanical misalignments during the experimental validation of the diffractive camera, we vaccinated our diffractive model during the training by deliberately introducing random displacements to the diffractive layers [41]. During the training process, a 3D displacement \({\varvec{D}}= \left({D}_{x},{ D}_{y},{ D}_{z}\right)\) was randomly added to each diffractive layer following the uniform \({\text{(U)}}\) random distribution:
$$\begin{array}{c}{D}_{x} \sim {\text{U}}\left(-{\Delta }_{x, tr}, {\Delta }_{x, tr}\right)\end{array}$$
$$\begin{array}{c}{D}_{y} \sim {\text{U}}\left(-{\Delta }_{y, tr}, {\Delta }_{y,tr}\right)\end{array}$$
$$\begin{array}{c}{D}_{z} \sim {\text{U}}\left(-{\Delta }_{z, tr}, {\Delta }_{z,tr}\right)\end{array}$$
where \({D}_{x}\) and \({D}_{y}\) denote the random lateral displacement of a diffractive layer in \(x\) and \(y\) directions, respectively. \({D}_{z}\) denotes the random displacement added to the axial distances between any two consecutive diffractive layers. \({\Delta }_{*, tr}\) represents the maximum amount of shift allowed along the corresponding axis, which was set as \({\Delta }_{x,tr}={\Delta }_{y,tr}=\) 0.4 mm (~ 0.53\(\lambda\)), and \({\Delta }_{z, tr}=\) 1.5 mm (2\(\lambda\)) throughout the training process. \({D}_{x},{ D}_{y}\), and \({D}_{z}\) of each diffractive layer were independently sampled from the given uniform random distributions. The diffractive camera model used for the experimental validation was trained for 50 epochs.
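The per-layer displacement sampling described above can be sketched as follows (an assumed implementation using the stated training ranges; the sampled offsets would be applied to the optical forward model at every training iteration).

```python
import numpy as np

def sample_displacement(rng, dx_max=0.4, dz_max=1.5):   # maximum lateral/axial shifts in mm, as stated
    return (rng.uniform(-dx_max, dx_max),               # D_x
            rng.uniform(-dx_max, dx_max),               # D_y
            rng.uniform(-dz_max, dz_max))               # D_z

rng = np.random.default_rng(0)
displacements = [sample_displacement(rng) for _ in range(3)]   # one independent draw per diffractive layer
```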
Experimental THz imaging setup
We validated the fabricated diffractive camera design using a THz continuous wave scanning system. The phase values of the diffractive layers were first converted into height maps using the refractive index of the 3D printer material. Then, the layers were printed using a 3D printer (Pr 110, CADworks3D). A layer holder that sets the positions of the input plane, output plane, and each diffractive layer was also 3D printed (Objet30 Pro, Stratasys) and assembled with the printed layers. The test objects were 3D printed (Objet30 Pro, Stratasys) and coated with aluminum foil to define the transmission areas.
The experimental setup is illustrated in Fig. 7a. The THz source used in the experiment was a WR2.2 modular amplifier/multiplier chain (AMC) with a compatible diagonal horn antenna (Virginia Diode Inc.). The input of the AMC was a 10 dBm RF signal at 11.1111 GHz (\(f_{\mathrm{RF}1}\)), and after being multiplied 36 times, the output radiation was at 0.4 THz. The AMC was also modulated with a 1 kHz square wave for lock-in detection. The output plane of the diffractive camera was scanned with a 1 mm step size using a single-pixel Mixer/AMC (Virginia Diode Inc.) detector mounted on an XY positioning stage that was built by combining two linear motorized stages (Thorlabs NRT100). A 10 dBm RF signal at 11.083 GHz (\(f_{\mathrm{RF}2}\)) was sent to the detector as a local oscillator to down-convert the signal to 1 GHz. The down-converted signal was amplified by a low-noise amplifier (Mini-Circuits ZRL-1150-LN+) and filtered by a 1 GHz (± 10 MHz) bandpass filter (KL Electronics 3C40-1000/T10-O/O). Then the signal passed through a tunable attenuator (HP 8495B) for linear calibration and a low-noise power detector (Mini-Circuits ZX47-60) for absolute power detection. The detector output was measured by a lock-in amplifier (Stanford Research SR830) with the 1 kHz square wave used as the reference signal. The lock-in amplifier readings were then calibrated into a linear scale. A digital 2 × 2 binning was applied to each measurement of the intensity field to match the training feature size used in the design phase.
All the data and methods that support this work are present in the main text and the Additional files. The deep learning models in this work employ standard libraries and scripts that are publicly available in PyTorch. The MNIST handwritten digits database is available online at: http://yann.lecun.com/exdb/mnist/.
J. Scharcanski, Bringing vision-based measurements into our daily life: a grand challenge for computer vision systems. Front. ICT 3, 3 (2016)
X. Feng, Y. Jiang, X. Yang, M. Du, X. Li, Computer vision algorithms and hardware implementations: a survey. Integration 69, 309–320 (2019)
M. Al-Faris, J. Chiverton, D. Ndzi, A.I. Ahmed, A review on computer vision-based methods for human action recognition. J. Imaging 6, 46 (2020)
X. Wang, Intelligent multi-camera video surveillance: a review. Pattern Recogn. Lett. 34, 3–19 (2013)
N. Haering, P.L. Venetianer, A. Lipton, The evolution of video surveillance: an overview. Mach. Vis. Appl. 19, 279–290 (2008)
E. D. Dickmanns, The development of machine vision for road vehicles in the last decade. in IEEE Intelligent Vehicle Symposium, 2002, vol. 1 (2002), p. 268–281.
J. Janai, F. Güney, A. Behl, A. Geiger, Computer vision for autonomous vehicles: problems, datasets and state of the art. CGV 12, 1–308 (2020)
A. Esteva et al., Deep learning-enabled medical computer vision. NPJ Digit. Med. 4, 1–9 (2021)
M. Tistarelli, M. Bicego, E. Grosso, Dynamic face recognition: from human to machine vision. Image Vis. Comput. 27, 222–232 (2009)
T.B. Moeslund, E. Granum, A survey of computer vision-based human motion capture. Comput. Vis. Image Underst. 81, 231–268 (2001)
G. Singh, G. Bhardwaj, S.V. Singh, V. Garg, Biometric identification system: security and privacy concern, in Artificial intelligence for a sustainable industry 4.0. ed. by S. Awasthi, C.M. Travieso-González, G. Sanyal, D. Kumar Singh (Springer International Publishing, Berlin, 2021), pp. 245–264. https://doi.org/10.1007/978-3-030-77070-9_15
A. Acquisti, L. Brandimarte, G. Loewenstein, Privacy and human behavior in the age of information. Science 347, 509–514 (2015)
A. Acquisti, L. Brandimarte, J. Hancock, How privacy's past may shape its future. Science 375, 270–272 (2022)
W.N. Price, I.G. Cohen, Privacy in the age of medical big data. Nat. Med. 25, 37–43 (2019)
J.R. Padilla-López, A.A. Chaaraoui, F. Flórez-Revuelta, Visual privacy protection methods: a survey. Expert Syst. Appl. 42, 4177–4195 (2015)
C. Neustaedter, S. Greenberg, M. Boyle, Blur filtration fails to preserve privacy for home-based video conferencing. ACM Trans. Comput.-Hum. Interact. 13, 1–36 (2006)
A. Frome, et al., Large-scale privacy protection in Google Street View. in 2009 IEEE 12th International Conference on Computer Vision (2009), pp. 2373–2380. https://doi.org/10.1109/ICCV.2009.5459413.
F. Dufaux, T. Ebrahimi, Scrambling for privacy protection in video surveillance systems. IEEE Trans. Circuits Syst. Video Technol. 18, 1168–1174 (2008)
W. Zeng, S. Lei, Efficient frequency domain selective scrambling of digital video. IEEE Trans. Multimed. 5, 118–129 (2003)
A. Criminisi, P. Perez, K. Toyama, Region filling and object removal by exemplar-based image inpainting. IEEE Trans. Image Process. 13, 1200–1212 (2004)
K. Inai, M. Pålsson, V. Frinken, Y. Feng, S. Uchida, Selective concealment of characters for privacy protection. in 2014 22nd International Conference on Pattern Recognition (2014), p. 333–338. https://doi.org/10.1109/ICPR.2014.66.
R. Uittenbogaard et al., Privacy protection in street-view panoramas using depth and multi-view imagery. In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2019), pp. 10573–10582. https://doi.org/10.1109/CVPR.2019.01083.
K. Brkic, I. Sikiric, T. Hrkac, Z. Kalafatic, I know that person: generative full body and face de-identification of people in images. in 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) (2017), pp. 1319–1328. https://doi.org/10.1109/CVPRW.2017.173.
F. Pittaluga, S. Koppal, A. Chakrabarti, Learning privacy preserving encodings through adversarial training. in 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), (IEEE, 2019), pp. 791–799. https://doi.org/10.1109/WACV.2019.00089.
A. Chattopadhyay, T. E. Boult, PrivacyCam: a privacy preserving camera using uCLinux on the Blackfin DSP. in 2007 IEEE Conference on Computer Vision and Pattern Recognition (2007), pp. 1–8, https://doi.org/10.1109/CVPR.2007.383413.
T. Winkler, B. Rinner, TrustCAM: security and privacy-protection for an embedded smart camera based on trusted computing. in 2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance (2010), pp. 593–600, https://doi.org/10.1109/AVSS.2010.38.
Mrityunjay, P. J. Narayanan, The de-identification camera. in 2011 Third National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (2011), pp. 192–195. https://doi.org/10.1109/NCVPRIPG.2011.48.
53 Important statistics about how much data is created every day. Financesonline.com. (2021). https://financesonline.com/how-much-data-is-created-every-day/.
P. Dhar, The carbon impact of artificial intelligence. Nature Machine Intelligence 2, 423–425 (2020)
S. Thakur, A. Chaurasia, Towards Green Cloud Computing: Impact of carbon footprint on environment. in 2016 6th International Conference—Cloud System and Big Data Engineering (Confluence), (2016), pp. 209–213. https://doi.org/10.1109/CONFLUENCE.2016.7508115.
L. Belkhir, A. Elmeligi, Assessing ICT global emissions footprint: trends to 2040 & recommendations. J. Clean. Prod. 177, 448–463 (2018)
M. Durante, Computational power: the impact of ICT on law, society and knowledge (Routledge, London, 2021). https://doi.org/10.4324/9781003098683
F. Pittaluga, S. J. Koppal, Privacy preserving optics for miniature vision sensors. in 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (IEEE, 2015), pp. 314–324. https://doi.org/10.1109/CVPR.2015.7298628.
F. Pittaluga, A. Zivkovic, S. J. Koppal, Sensor-level privacy for thermal cameras. in 2016 IEEE International Conference on Computational Photography (ICCP) (2016), pp. 1–12. https://doi.org/10.1109/ICCPHOT.2016.7492877.
C. Hinojosa, J. C. Niebles, H. Arguello, Learning privacy-preserving Optics for Human Pose Estimation. in 2021 IEEE/CVF International Conference on Computer Vision (ICCV) (IEEE, 2021), pp. 2553–2562. https://doi.org/10.1109/ICCV48922.2021.00257.
Y. LeCun, et al., Handwritten Digit Recognition With A Back-Propagation Network. in Advances in Neural Information Processing Systems vol. 2, (Morgan-Kaufmann, 1989).
H. Xiao, K. Rasul, R. Vollgraf, Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. (2017). https://doi.org/10.48550/arXiv.1708.07747.
J. Benesty, J. Chen, Y. Huang, I. Cohen, Pearson correlation coefficient. in Noise reduction in speech processing (Springer Berlin Heidelberg, 2009).
O. Kulce, D. Mengu, Y. Rivenson, A. Ozcan, All-optical information-processing capacity of diffractive surfaces. Light Sci. Appl. 10, 25 (2021)
O. Kulce, D. Mengu, Y. Rivenson, A. Ozcan, All-optical synthesis of an arbitrary linear transformation using diffractive surfaces. Light Sci. Appl. 10, 196 (2021)
D. Mengu et al., Misalignment resilient diffractive optical networks. Nanophotonics 9, 4207–4219 (2020)
C. Vieu et al., Electron beam lithography: resolution limits and applications. Appl. Surf. Sci. 164, 111–117 (2000)
X. Zhou, Y. Hou, J. Lin, A review on the processing accuracy of two-photon polymerization. AIP Adv. 5, 030701 (2015)
Y. Luo et al., Design of task-specific optical systems using broadband diffractive neural networks. Light Sci. Appl. 8, 112 (2019)
X. Lin et al., All-optical machine learning using diffractive deep neural networks. Science 361, 1004–1008 (2018)
A. Ozcan, E. McLeod, Lensless imaging and sensing. Annu. Rev. Biomed. Eng. 18, 77–102 (2016)
D. P. Kingma, J. Ba, Adam: a method for stochastic optimization. arXiv:1412.6980 [cs] (2017).
The authors acknowledge the assistance of Dr. GyeoRe Han (UCLA) on 3D printing.
The Ozcan Research Group at UCLA acknowledges the support of ONR (Grant # N00014-22-1-2016). The Jarrahi Research Group at UCLA acknowledges the support of the Department of Energy (Grant # DE-SC0016925).
Bijie Bai and Yi Luo contributed equally to this work
Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
Bijie Bai, Yi Luo, Tianyi Gan, Jingtian Hu, Yuhang Li, Yifan Zhao, Deniz Mengu, Mona Jarrahi & Aydogan Ozcan
Bioengineering Department, University of California, Los Angeles, 90095, USA
Bijie Bai, Yi Luo, Jingtian Hu, Yuhang Li, Deniz Mengu & Aydogan Ozcan
California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
AO conceived the research and initiated the project. BB, YL, and DM developed the numerical simulation codes. BB, YL, TG, JH, YL, and YZ performed the fabrication of the diffractive system and conducted the experiments. All the authors participated in the analysis and discussion of the results. BB, YL, TG, JH, and AO prepared the manuscript and all authors contributed to the manuscript. AO and MJ supervised the project. All authors read and approved the final manuscript.
Correspondence to Aydogan Ozcan.
AO and MJ serve as Editors of the journal; no other author reported any competing interests.
Additional file 1: Figure S1. Blind testing results of diffractive camera designs that selectively image different data classes. Figure S2. Blind testing results of a seven-layer diffractive camera design that selectively images trousers in the Fashion MNIST dataset, while all-optically erasing 4 other types of fashion objects (i.e., dresses, sandals, sneakers, and bags). Figure S3. Converged diffractive layers for the diffractive camera designs with different numbers of diffractive layers.
Additional file 2: Movie S1. Blind testing results of a five-layer diffractive camera design (reported in the main text Fig. 3) with input objects at different intensity levels.
Additional file 3: Movie S2. Blind testing results of a five-layer diffractive camera design (reported in the main text Fig. 3) with input objects modulated by 50% transmission filters applied at different sub-regions of the input field-of-view.
Additional file 4: Movie S3. Blind testing results of a seven-layer diffractive camera design with input objects continuously shifted throughout a large input field-of-view.
Bai, B., Luo, Y., Gan, T. et al. To image, or not to image: class-specific diffractive cameras with all-optical erasure of undesired objects. eLight 2, 14 (2022). https://doi.org/10.1186/s43593-022-00021-3
Revised: 16 July 2022 | CommonCrawl |
On the Cycle Augmentation Problem: Hardness and Approximation Algorithms
Approximation and Online Algorithms (WAOA 2019)
Special Issue on Approximation and Online Algorithms (2019)
Waldo Gálvez1,
Fabrizio Grandoni1,
Afrouz Jabal Ameli (ORCID: orcid.org/0000-0001-5620-9039)1 &
Krzysztof Sornat2
Theory of Computing Systems volume 65, pages 985–1008 (2021)
In the k-Connectivity Augmentation Problem we are given a k-edge-connected graph and a set of additional edges called links. Our goal is to find a set of links of minimum size whose addition to the graph makes it (k + 1)-edge-connected. There is an approximation preserving reduction from the mentioned problem to the case k = 1 (a.k.a. the Tree Augmentation Problem or TAP) or k = 2 (a.k.a. the Cactus Augmentation Problem or CacAP). While several better-than-2 approximation algorithms are known for TAP, for CacAP this barrier was breached only recently (and hence for k-Connectivity Augmentation in general). As a first step towards better approximation algorithms for CacAP, we consider the special case where the input cactus consists of a single cycle, the Cycle Augmentation Problem (CycAP). This apparently simple special case retains part of the hardness of the general case. In particular, we are able to show that it is APX-hard. In this paper we present a combinatorial \(\left (\frac {3}{2}+\varepsilon \right )\)-approximation for CycAP, for any constant ε > 0. We also present an LP formulation with a matching integrality gap: this might be useful to address the general case of the problem.
The basic goal of Survivable Network Design is to construct low cost networks that provide connectivity guarantees between pre-specified sets of nodes even after the failure of a few edges/nodes (in the following we will focus on the edge failure case). This has many applications, e.g., in transportation and telecommunication networks.
A relevant subclass of these problems is given by Network Augmentation problems. Here the goal is to augment a given graph G = (V,E) by adding extra edges taken from a given set L (links), so as to satisfy given (edge-)connectivity requirements. Several such problems are NP-hard, and in most cases the best known approximation factor is 2 due to Jain [19].
In this paper we focus on the following k-Connectivity Augmentation Problem (k-CAP). Given a k-(edge)-connected undirected graph G = (V,E) and a collection L of extra edges (links), the goal is to find a subset \(A\subseteq L\) with minimum size, such that \(G^{\prime }=(V,E\cup A)\) is (k + 1)-connected. (We recall that G = (V,E) is k-connected if for every set of edges \(F\subseteq E\), |F|≤ k − 1, the graph \(G^{\prime }=(V,E\setminus F)\) is connected.) Dinitz et al. [10] presented an approximation preserving reduction from this problem to the case k = 1 for odd k, and k = 2 for even k. This motivates a deeper understanding of the latter two special cases.
The case k = 1 is also known as the Tree Augmentation Problem (TAP). The reason for this name is that any 2-edge-connected component of the input graph G can be contracted, hence leading to a tree. For this problem several better-than-2 approximation algorithms are known [1, 7, 11, 12, 17, 24, 28]. The case k = 2 is also known as the Cactus Augmentation Problem (CacAP) since, similarly to the previous case, the input graph can be assumed to be a cactus [10]. Recall that a cactus G is a connected undirected graph in which every edge belongs to exactly one cycle. For technical reasons in this paper we also consider cycles of length 2. However, here the best-known approximation factor was 2 [19] for a long time and only recently this was improved to 1.91 (implying the same for k-CAP in general).
For all the mentioned problems it makes sense to consider the weighted version, where links have non-negative integral weights, and the goal is to find a minimum weight (rather than minimum cardinality) subset of links A with the desired properties. In particular we will speak about Weighted TAP (WTAP) and Weighted CacAP (WCacAP). Here the best-known approximation factor is 2 in both cases [19]. Moreover, improving on that approximation factor for WTAP is considered as a major open problem in the area. We also notice that we can turn a WTAP instance into an equivalent WCacAP instance by replacing each edge with two parallel edges. Hence, approximating WCacAP is not any easier than approximating WTAP (and the same holds for the corresponding unweighted versions).
As mentioned before, CacAP contains TAP as a special case when all the cycles in the cactus have length 2 (formed by a pair of parallel edges). Hence, in order to make progress on CacAP, it makes sense to consider the somehow complementary case where the input cactus consists of a single cycle of n nodes. We call the corresponding subproblem the Cycle Augmentation Problem (CycAP), and its weighted version Weighted CycAP (WCycAP). To the best of our knowledge, these special cases were not studied before. However, as we will see, they still retain part of the difficulties of the general cactus case. In more detail, we achieve the following main results:
Approximation Algorithms
We present better-than-2 approximation algorithms for this problem. In particular, we present a simple \(\frac {5}{3}\)-approximation, and a slightly more complex (3/2 + ε)-approximation for any constant ε > 0. Notice that the latter approximation factor is not far from the best known approximation factor for TAP which is equal to 1.458 [17]. Our algorithms are purely combinatorial, and they consist of two main phases. In the first phase, we greedily add some links to the solution under construction and contract them. At the end of this phase we achieve an instance of CacAP that can be solved exactly in polynomial time. In particular, for the \(\frac {5}{3}\)-approximation this reduces to computing a spanning tree, while for the (3/2 + ε)-approximation we use an FPT algorithm parameterized by a proper notion of maximum length of a link.
Hardness of Approximation
We are able to show that WCycAP is as hard to approximate as WCacAP. Therefore, improving on a 2-approximation for WCycAP would imply a major breakthrough in the area (in particular, it would imply the same for WTAP). This also justifies a more careful investigation of CycAP. In our opinion it is a priori not so obvious that CycAP is even NP-hard. Indeed, the special case of TAP (and even of WTAP) where the input graph is a path can be solved exactly in polynomial time. The case of an input cycle might closely remind the path case. Here we show that this intuition is not correct: we prove that CycAP is NP-hard and even APX-hard via a simple but non-trivial adaptation of the proofs in [15, 23]. In particular, we need one extra step in the reduction where we turn an intermediate CacAP instance into a CycAP one while maintaining certain properties of the optimal solution.
LP Gaps
The recent literature on TAP approximation [1, 12, 17] shows that finding strong LP relaxations for the problem can be very helpful to design improved approximation algorithms. In the same spirit, we tried to address the problem of finding LP relaxations for CycAP with small integrality gap. For both TAP and CacAP (hence CycAP) one can define a natural and simple standard cut LP (more details later). While for TAP it was recently shown that the standard cut LP has integrality gap smaller than 2 [29], interestingly for CycAP (hence for CacAP) the standard cut LP has integrality gap 2. Here we present a stronger LP that, for any ε > 0, has integrality gap at most \(\frac {3}{2}+\varepsilon \) (hence matching the approximation ratio of our algorithm). In our opinion this could be useful for future work on CacAP approximation.
As mentioned before, the best known result in terms of polynomial time approximation algorithms for k-CAP is a 1.91-approximation proposed by Byrka et al. [2]. However, if the set of links is equal to V × V, the problem can be solved optimally in polynomial time [33]. More recently, this problem has been studied in the framework of Fixed-Parameter Tractability: Végh and Marx [27] proved that this problem is in FPT when parameterized by the size of the optimal solution, and later the running time of their algorithm was further improved [3].
Tree Augmentation has been extensively studied over the past few decades. It was first shown that WTAP is NP-hard by Frederickson and Jájá [15], then that TAP is NP-hard by Cheriyan et al. [6], and later that TAP is APX-hard by Kortsarz et al. [23]. For WTAP, the best-known approximation guarantee is 2 and was first established by Frederickson and Jájá [15]. Their algorithm was later simplified by Khuller and Thurimella [21]. A 2-approximation can also be achieved by various other techniques developed later on, including a primal-dual approach [16] and iterative rounding [19]. Improvements on the factor 2 have only been obtained for restricted cases, including bounded diameter trees [8] and bounded weights [1, 12, 17, 29].
Regarding TAP, the first algorithm beating the approximation guarantee of 2 is due to Nagamochi [28], achieving an approximation factor of 1.815 + ε. This factor was subsequently improved to 1.8 [11] and to 1.5 [24]. These results are combinatorial in nature, but LP-based results have been achieved as well. As an example, recently Nutov [29] showed that the standard cut LP for TAP has an integrality gap of at most 28/15 while a lower bound of 3/2 was known [7]. An LP-based \(\left (\frac {5}{3} + \varepsilon \right )\)-approximation was given by Adjiashvili [1] and then refined by Fiorini et al. [12] to obtain a \(\left (\frac {3}{2} + \varepsilon \right )\)-approximation (see also [4, 5, 26]). Both results are obtained by adding a proper family of extra constraints to the standard cut LP. Recently, Grandoni et al. [17] achieved a 1.458 approximation for TAP, which is smaller than the integrality gap of the standard cut LP.
The rest of this paper is organized as follows. In Section 2 we give some preliminary definitions and results. The approximation algorithms, LP-gaps and hardness of approximation results are discussed in Sections 3, 4 and 5 respectively.
For a set X and element y, we use the shortcut X ∖ y for X ∖{y}, and similarly for other set operations.
Given a graph G = (V,E), we let V (G) = V and E(G) = E. Recall that in WCacAP we are given a cactus G = (V,E), a set of links \(L \subseteq \binom {V}{2}\) and a non-negative weight function \(c:L \to \mathbb {R}_{\geq 0}\). The task is to compute a subset of links \(A \subseteq L\) such that the graph (V,E ∪ A) is 3-edge-connected while minimizing \(c(A):={\sum }_{\ell \in A} c(\ell )\). The special case where G is a cycle is called WCycAP, and the unweighted versions of the above problems are called CacAP and CycAP respectively. By n we will denote the number of nodes of the considered instance of the problem.
Notice that, given an instance (G,L) of CacAP, we can check in polynomial time whether the graph (V (G),E(G) ∪ L) is 3-edge-connected by exhaustively checking whether the removal of any pair of elements from E(G) ∪ L disconnects the graph. Hence we will assume throughout this work that the instance always admits a feasible solution.
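For concreteness, a minimal sketch of this exhaustive check, assuming the members of E(G) ∪ L are given as a single list of node pairs (parallel pairs allowed) and using the networkx library for the connectivity test, could look as follows; the function name is only illustrative:

    import itertools
    import networkx as nx

    def is_three_edge_connected(nodes, elements):
        # 'elements' lists the members of E(G) and L as (u, v) pairs; parallel
        # pairs are allowed, so we index them by position rather than by endpoints.
        for i, j in itertools.combinations(range(len(elements)), 2):
            H = nx.MultiGraph()
            H.add_nodes_from(nodes)
            H.add_edges_from(e for k, e in enumerate(elements) if k not in (i, j))
            if not nx.is_connected(H):
                return False
        return True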
Observation 1
The 2-edge cuts of a cactus G are identified by pairs \(S=\{e,e^{\prime }\}\) of distinct edges belonging to the same cycle, and consist of the node sets (U,V ∖ U) of the two connected components obtained by removing S from G. A necessary and sufficient condition for a subset of links A to be a feasible solution for WCacAP is that, for any such cut S, there is at least one link ℓ = (u,v) ∈ A such that u ∈ U and v ∈ V ∖ U (in which case we say that ℓ satisfies the \(\{e,e^{\prime }\}\)-cut).
Note that in the case of CycAP, Observation 1 implies that any feasible solution must be an edge cover as 2-edge cuts defined by neighboring edges of the cycle must be satisfied. Given a 2-edge cut \(S=\{e,e^{\prime }\}\), let LS be the subset of links satisfying S. The standard cut LP for CycAP is as follows:
$$\begin{array}{rll} \min & \displaystyle\sum\limits_{\ell \in L}{x_{\ell}}& \text{(standard cut LP)}\\ s.t. & \displaystyle\sum\limits_{\ell\in L_{S}}{x_{\ell}} \ge 1 & \forall S: S \text{ is a 2-edge cut}\\ & 0\leq x_{\ell}\leq 1 & \forall \ell \in L \end{array}$$
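As an illustration, the standard cut LP of a concrete CycAP instance can be generated and solved directly. The sketch below assumes the nodes are labeled \(0,\dots ,n-1\) along the cycle, with edge i joining nodes i and i + 1 (mod n) and links given as pairs of labels, and relies on scipy's LP solver; it is only meant to make the constraint structure explicit:

    import itertools
    import numpy as np
    from scipy.optimize import linprog

    def standard_cut_lp(n, links):
        # Removing edges i < j cuts off the arc of nodes {i+1, ..., j}; a link
        # satisfies the corresponding cut iff exactly one of its endpoints lies
        # on that arc.
        rows = []
        for i, j in itertools.combinations(range(n), 2):
            arc = set(range(i + 1, j + 1))
            rows.append([-1.0 if ((u in arc) != (v in arc)) else 0.0
                         for (u, v) in links])
        res = linprog(c=np.ones(len(links)),
                      A_ub=np.array(rows), b_ub=-np.ones(len(rows)),
                      bounds=[(0.0, 1.0)] * len(links), method="highs")
        return res.fun, res.x

For instance, on a cycle of length 6 with one parallel link per edge (the integrality gap example of Section 4), standard_cut_lp(6, [(i, (i + 1) % 6) for i in range(6)]) returns a fractional optimum of 3 = n/2, whereas the integral optimum is n − 1 = 5.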
Now we proceed to define a standard building block for our algorithms, the contraction of a link.
Contracting a subset of nodes W consists of the following operations: (i) remove the nodes in W and all edges/links incident to them; (ii) add a new node w and, for each original edge/link of type (y,x), x ∈ W,y∉W, add the edge/link (y,w) (of the same weight for the case of links). Note that we do not create loops this way but may introduce parallel links. We say that (y,w) is the image of (y,x) and (y,x) is the preimage of (y,w).
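A minimal sketch of this operation on an edge/link list representation may help; it simply relabels every endpoint in W to a fresh node w and drops pairs that would become loops (the function name and representation are only illustrative):

    def contract_node_set(pairs, W, w):
        # 'pairs' is a list of edges or links given as (x, y) tuples.  Endpoints
        # in W are replaced by the fresh node w; pairs with both endpoints in W
        # disappear (no loops are created), while parallel pairs are kept.
        contracted = []
        for (x, y) in pairs:
            x2 = w if x in W else x
            y2 = w if y in W else y
            if x2 != y2:
                contracted.append((x2, y2))
        return contracted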
We will sometimes slightly abuse notation and use the same label to denote a link and its image: the meaning will be clear from the context.
For a link ℓ = (u,v), we define a sequence w0,…,wq of boundary nodes B(ℓ) as follows. Consider a simple path from u to v in the cactus, and let C1,C2,…,Cq be the ordered sequence of cycles visited by this path (possibly q = 1). Note that a path visits a cycle iff it includes an edge from the cycle. We define wi, i = 1,…,q − 1 as the unique common node between Ci and Ci+ 1, and set w0 = u and wq = v.
Contracting a link ℓ is the operation of contracting its boundary nodes B(ℓ). We denote by G|ℓ the graph obtained by this operation. Contracting a set of links A is the operation of contracting any ℓ ∈ A, and then continuing recursively on G|ℓ and on the image of A ∖ ℓ until A becomes empty.
Note that contracting a link in a cactus yields again a cactus. We will extensively use the following standard fact.
Lemma 1
Let (G,L) be a CacAP instance, \(A\subseteq L\), and ℓ ∈ A. Then A is a feasible solution for (G,L) iff the image of A ∖ ℓ is a feasible solution for (G|ℓ,L ∖ ℓ).
We require some further notation before proving the lemma. The internal projections S(ℓ) of ℓ are the links \((w_{i},w_{i+1})\), \(i = 0,\dots ,q-1\). In terms of feasibility, ℓ and S(ℓ) are equivalent, as the following proposition states.
Let (G,L) be a CacAP instance and ℓ ∈ L. Then ℓ satisfies precisely the same 2-edge cuts as S(ℓ).
Let B(ℓ) = (w0,…,wq) and C1,…,Cq be the corresponding sequence of cycles visited by a simple path between the endpoints of ℓ. Notice that pairs (wi,wi+ 1), i = 0,…,q − 1, subdivide each Ci into two paths next denoted as \(C^{\prime }_{i}\) and \(C^{\prime \prime }_{i}\). Trivially ℓ satisfies only cuts belonging to the cycles C1,…,Cq, and the same holds for S(ℓ). Consider any pair (e1,e2) belonging to some Ci. Link ℓ satisfies the corresponding cut if and only if precisely one such edge ej belongs to \(C^{\prime }_{i}\). The same holds for (wi,wi+ 1), hence for S(ℓ). □
In order to prove Lemma 1, let us first consider the simpler case where G is a cycle.
Let (G = (V,E),L) be a CycAP instance, \(A\subseteq L\), and ℓ = (u,v) ∈ A. Then A is a feasible solution for (G,L) iff the image of A ∖ ℓ is a feasible solution for the CacAP instance (G|ℓ,L ∖ ℓ).
Let C1 and C2 be the two cycles in G|ℓ, with common node w.
Suppose first that the image of A ∖ ℓ is a feasible solution for (G|ℓ,L ∖ ℓ). Consider a pair of edges {e1,e2} belonging to a common cycle Ci, and the corresponding cut \((S^{\prime },S^{\prime \prime })\) in G|ℓ with \(w \in S^{\prime \prime }\). There must be a link \(\ell ^{\prime }\in A\setminus \ell \) satisfying this cut in G|ℓ. The preimage of \(\ell ^{\prime }\) has one endpoint in \(S^{\prime }\) and the other in \(V\setminus S^{\prime } = (S^{\prime \prime } \setminus \{w\}) \cup \{u,v\}\), hence it satisfies the {e1,e2}-cut in G. The remaining pairs of edges {e1,e2} of G satisfy e1 ∈ C1 and e2 ∈ C2, modulo symmetries. Those cuts are satisfied by ℓ in G.
Suppose now that A is feasible for (G,L). Consider a pair of edges {e1,e2} belonging to a common cycle Ci. Let \((S^{\prime },S^{\prime \prime })\) be the corresponding cut in G|ℓ with \(w \in S^{\prime \prime }\). Since ℓ does not satisfy that cut in G, this means that there is some other link \(\ell ^{\prime }\in A\setminus \ell \) satisfying it. The image of \(\ell ^{\prime }\) has one endpoint in \(S^{\prime }\) and the other in \(S^{\prime \prime }\), hence it satisfies the {e1,e2}-cut. □
Now we can proceed with the proof of Lemma 1.
Proof of Lemma 1
By Proposition 1, we obtain an equivalent statement of the lemma by replacing A with the set S(A) of the internal projections of links in A and replacing ℓ with its internal projection S(ℓ).
Let B(ℓ) = (w0,…,wq) and C1,…,Cq be the corresponding sequence of cycles visited by a simple path between the endpoints of ℓ. Consider any cycle C not in the above list. Then trivially any pair of edges in C is covered by links in S(A) ∖ S(ℓ). Therefore it is sufficient to consider pairs of edges e1,e2 belonging to the same cycle Ci. Let ℓi = (wi,wi+ 1) be the internal projection of ℓ with both endpoints in Ci, and define similarly Si(A) w.r.t. S(A). Then it is sufficient to show that Si(A) is a feasible solution for the CycAP instance induced by Ci if and only if Si(A) ∖ ℓi is a feasible solution for the CycAP instance induced by Ci|ℓi, which follows from Lemma 2. □
Approximation Algorithms for Cycle Augmentation
In this section we present improved approximation algorithms for CycAP. We start with a simple \(\frac {5}{3}\)-approximation to illustrate the main ideas, and then present a slightly more complex \(\left (\frac {3}{2}+\varepsilon \right )\)-approximation. The approach we will follow in both cases is as follows: in a first phase we iteratively add a properly chosen subset of a few links to the solution under construction, and then contract them. Notice that, after the first contraction, the cycle structure may be lost and we obtain a CacAP instance instead. These choices are designed so that, at the end of the first phase, the remaining CacAP instance can be solved efficiently, which is done in a second phase with an ad-hoc algorithm.
A \(\frac {5}{3}\)-Approximation
We next describe a simple greedy algorithm that provides a \(\frac {5}{3}\)-approximation for CycAP, that we refer to as crossing-first algorithm. In order to present the algorithm clearly, we need the following definitions.
A link ℓ = (u,v) of a CacAP instance is internal if both its endpoints belong to a common cycle, and external otherwise.
Given a CacAP instance, a pair of internal links {(u1,v1),(u2,v2)} of a cycle C is crossing if they are node disjoint and deleting u2 and v2 disconnects u1 from v1 in C.
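On a single cycle, whether two internal links cross can be read off from the cyclic positions of their endpoints: two node-disjoint chords cross exactly when their endpoints interleave along the cycle order. A small sketch of this test, assuming pos maps each node to its index along the cycle, is:

    def are_crossing(l1, l2, pos):
        # Node-disjoint chords of a cycle cross iff their endpoints interleave
        # along the cyclic order.
        a, b = sorted((pos[l1[0]], pos[l1[1]]))
        c, d = sorted((pos[l2[0]], pos[l2[1]]))
        if {a, b} & {c, d}:
            return False  # the pair must be node disjoint
        return (a < c < b < d) or (c < a < d < b)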
The kinds of links that we want to add in the first stage of the algorithm are external links plus crossing pairs of links. In more detail, the algorithm has two main stages. The first stage consists of a set of rounds, where in each round we first check if there exists an external link ℓ, in which case we add it to our solution, contract it and proceed to the next round. Otherwise, if there exists a pair of (internal) crossing links \(\ell ^{\prime }\) and \(\ell ^{\prime \prime }\), we add them to our solution, contract them and proceed to the next round. If neither of the two cases above applies, we are left with a CacAP instance with neither external links nor crossing pairs of links, which we address in the second stage of the algorithm. As the following lemma states, in the second stage we can efficiently compute the optimal solution.
Consider an instance (G = (V,E),L) of CacAP. If there are no external links and no crossing pairs of links, then every minimal solution has size exactly |V |− 1 and induces a spanning tree over V.
We prove the first part of the claim by induction on n = |V |. The base case n = 2 is trivial since in this case the instance is just a cycle consisting of two parallel edges and any link must be incident to the two nodes of G (hence defining a feasible solution). For the inductive case, assume the claim is true up to instances having n − 1 nodes, and consider an instance of the problem defined by a cactus G having n nodes with optimal solution OPT. If G is not a cycle of length n, then it is defined by a set of cycles of length at most n − 1 where every link is internal, so we can apply the inductive hypothesis to each cycle independently. If G is a cycle of n nodes, then let ℓ = (u,v) ∈OPT. Contracting ℓ leads to a CacAP instance on two cycles C1 and C2 sharing a common node w, with |V (C1)| + |V (C2)| = n. Let \(\text {OPT}^{\prime }\) be the optimal solution for the new instance. By Lemma 1, \(|\text {OPT}|=|\text {OPT}^{\prime }|+1\). Observe that any remaining link \(\ell ^{\prime }\) must have both endpoints in the same Ci (otherwise ℓ and \(\ell ^{\prime }\) would be crossing). Thus by the inductive hypothesis the optimum solution for the problem induced by Ci has size |V (Ci)|− 1. It then follows that \(|\text {OPT}^{\prime }|=|V(C_{1})|-1+|V(C_{2})|-1=n-2\). Hence |OPT| = n − 1 as desired.
For the second part of the claim, it is sufficient to show that a minimal solution does not induce a cycle. By contradiction, consider a minimal solution containing a simple cycle \(L^{\prime }\), and consider now a solution where we remove precisely one arbitrary link ℓ = (u,v) from \(L^{\prime }\). Consider any pair of edges e1,e2 belonging to the same cycle such that ℓ satisfies the {e1,e2}-cut. Since \(L^{\prime }\setminus \ell \) induces a simple u-v path, then some \(\ell ^{\prime }\in L^{\prime }\setminus \ell \) must satisfy the cut. Thus \(L^{\prime }\setminus \ell \) is a feasible solution, contradicting the minimality of \(L^{\prime }\). □
Now we proceed to prove the approximation guarantee of the algorithm.
The crossing-first algorithm is a \(\frac {5}{3}\)-approximation for CycAP.
Let OPT be the optimal solution and APX the computed solution. Let also \(n^{\prime \prime }\) be the number of nodes remaining at the end of the first stage, and \(\text {APX}^{\prime }\) (resp. \(\text {APX}^{\prime \prime }\)) be the set of links added to the solution during the first (resp. second) stage. Since contracting an external link decreases the number of nodes by at least 2 and contracting any pair of crossing links decreases the number of nodes by at least 3, we have that \(|\text {APX}^{\prime }|\leq \frac {2}{3}(n-n^{\prime \prime })\).
By Lemma 3, \(|\text {APX}^{\prime \prime }|=n^{\prime \prime }-1\), and hence \(|\text {APX}|\leq \frac {2}{3}(n-n^{\prime \prime })+n^{\prime \prime }-1=\frac {2n+n^{\prime \prime }-3}{3}\). On the other hand, since any feasible solution must be an edge cover, we have that |OPT|≥ n/2. Observe also that \(|\text {OPT}|\ge n^{\prime \prime }-1\) since by Lemma 1 contracting links cannot increase the cost of the optimum solution. Thus \(|\text {OPT}|\geq \max \limits \{n/2,n^{\prime \prime }-1\}\). We can conclude that \(\frac {|\text {APX}|}{|\text {OPT}|}\le \frac {(2n+n^{\prime \prime }-3)/3}{\max \limits \{n/2,n^{\prime \prime }-1\}}\leq \frac {5}{3}\), being \(n^{\prime \prime }-1 = n/2\) the worst case. □
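To see the last step concretely, at the point where the two lower bounds on |OPT| coincide, i.e. \(n^{\prime \prime } = n/2+1\), the bound above evaluates to

$$\frac{|\text{APX}|}{|\text{OPT}|}\le \frac{(2n+n/2+1-3)/3}{n/2}=\frac{5n-4}{3n}=\frac{5}{3}-\frac{4}{3n}\le\frac{5}{3},$$

and moving \(n^{\prime \prime }\) away from this value in either direction only decreases the ratio.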
We complement this result with an asymptotically matching lower bound.
The approximation ratio of the crossing-first algorithm is not better than \(\frac {5}{3}\).
Consider the following construction: for each k ≥ 2 consider an instance (Gk,Lk) of CycAP defined by a cycle of n = 6k nodes (assume that the cycle is defined by the order of the nodes \(v_{1},v_{2}, \dots , v_{6k}\)) and the following set of links (see Fig. 1 (Left)):
\((v_{1}, v_{\frac {n}{2}+1}) \in L_{k}\);

For each \(i=1,\dots , \frac {n}{2}-1\), \((v_{i+1},v_{n+1-i}) \in L_{k}\);

For each \(i=1,\dots , \frac {n}{6}\), \((v_{3(i-1)+1},v_{3(i-1)+3}) \in L_{k}\) and \((v_{3(i-1)+2},v_{3(i-1)+4}) \in L_{k}\).

Left: Instance (G2,L2) from the lower bound construction in Lemma 4. Red links define an optimal solution. Right: If the algorithm in the first phase picks and contracts the crossing links \(\{(v_{1},v_{3}),(v_{2},v_{4})\}\), this is the obtained CacAP instance
Notice that the first and second set of links define a feasible solution of size \(\frac {n}{2}\), hence being optimal: if we remove any two edges of the cycle, then we are either satisfying the corresponding cut via \((v_{1}, v_{\frac {n}{2}+1})\), or one side of the partition is contained in either \(\{v_{2}, \dots , v_{\frac {n}{2}}\}\) or in \(\{v_{\frac {n}{2}+2}, \dots , v_{n}\}\) but the links selected form a matching between those sets.
We will now prove that there exists a sequence of choices performed by our algorithm that outputs a solution of size \(\frac {5n}{6}-1\), which implies that the approximation ratio is at least \(\frac {5}{3} - \frac {2}{n}\) and this value approaches \(\frac {5}{3}\) as k goes to infinity. Notice first that the pair of links \(\{(v_{1},v_{3}), (v_{2},v_{4})\} \subseteq L_{k}\) is crossing, and hence the algorithm can include them in the solution in the first round (and finish the round). Furthermore, after these links are contracted no link becomes external as the new cactus instance consists of a cycle of length n − 3, and also the links with endpoints vn,vn− 1 and vn− 2 are not part of any pair of crossing links (see Fig. 1 (Right)). If we now iteratively pick all the pairs of crossing links \(\{(v_{3(i-1)+1},v_{3(i-1)+3}), (v_{3(i-1)+2},v_{3(i-1)+4})\} \subseteq L_{k}\), \(i=2,\dots , \frac {n}{6}\), after \(\frac {n}{6}\) rounds we end up with a cycle of length \(\frac {n}{2}\) without crossing links, and the algorithm must now take the remaining \(\frac {n}{2}-1\) links to complete the solution. Thus, the size of the computed solution is \(2\cdot \frac {n}{6} + \frac {n}{2}-1 = \frac {5}{6}n - 1\), proving the claim. □
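The family of instances above is straightforward to generate programmatically; the following sketch (with nodes 1-indexed as in the construction, and an illustrative function name) simply lists the three groups of links:

    def crossing_first_lower_bound(k):
        # Cycle on n = 6k nodes v_1, ..., v_n; returns n and the link set L_k.
        n = 6 * k
        links = [(1, n // 2 + 1)]
        links += [(i + 1, n + 1 - i) for i in range(1, n // 2)]
        for i in range(1, n // 6 + 1):
            links.append((3 * (i - 1) + 1, 3 * (i - 1) + 3))
            links.append((3 * (i - 1) + 2, 3 * (i - 1) + 4))
        return n, links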
A \(\left (\frac {3}{2}+\varepsilon \right )\)-Approximation
The family of instances from Lemma 4 suggests that "short" crossing pairs of links, although being locally profitable, may enforce the algorithm to take expensive decisions in the end. In this section we present a more involved \(\left (\frac {3}{2}+\varepsilon \right )\)-approximation for CycAP that tries to avoid this kind of situation. Like in the previous algorithm, there is a certain kind of links that we want to iteratively add to our solution in a first phase, and in this case such links correspond to external links and long links, which are defined as follows.
The length of an internal link (u,v) is the length of the shortest path between u and v in the corresponding cycle. For a given parameter 0 < ε < 1, an internal link is called long if its length is at least \(\frac {1}{\varepsilon }\), and short otherwise.
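With nodes labeled \(v_{0},\dots ,v_{n-1}\) along the cycle, the length of an internal link and the resulting long/short classification can be computed as follows (a minimal sketch, where pos maps nodes to cycle positions and eps is the parameter ε):

    def link_length(u, v, pos, n):
        # Length of the shorter of the two arcs between u and v on the cycle.
        d = (pos[u] - pos[v]) % n
        return min(d, n - d)

    def is_long(link, pos, n, eps):
        return link_length(link[0], link[1], pos, n) >= 1.0 / eps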
Our algorithm consists of the following two main phases. In the first phase, we iteratively check if there exists a long (internal) link ℓ. Otherwise, we check if there exists an external link ℓ. In both cases, we add ℓ to the solution under construction and contract it. Observe that contracting links does not create new long links, hence we will first select a set Llong of long links, and then a set Lext of external links. After exhausting the previous choices, we move to the second phase. Here we are left with an instance where all links are short and internal, so we can independently solve the sub-instance induced by each cycle. We refer to this algorithm as long-first. This second stage can be solved efficiently, due to the lack of long links, by means of the following lemma (which, in particular, implies that CycAP is FPT with parameter \(h_{\max }\)).
Given a CycAP instance, there exists an algorithm that returns the optimal solution in time \(\text {poly}(n)\cdot 2^{O(h_{\max \limits }^{2})}\), where \(h_{\max \limits }\) is the maximum length among the links.
Let Lshort be the collection of links obtained in the second stage. The final solution is Llong ∪ Lext ∪ Lshort.
The long-first algorithm is a \((\frac {3}{2}+\varepsilon )\)-approximation algorithm for CycAP.
The running time of the algorithm is upper-bounded by \(\text {poly}(n) 2^{O(1/\varepsilon ^{2})}\). Consider next the approximation factor. Note first that |Llong|≤ εn. Indeed, contracting a long link always increases the number of cycles in the cactus by one without decreasing the number of edges, and all these cycles always have size at least 1/ε, so there are at most εn of them. Similarly to Theorem 2, we have that |OPT|≥|Lshort| and \(|\text {OPT}|\ge \frac {n}{2}\).
If \(|L_{\text {long}}|+|L_{\text {ext}}|+|L_{\text {short}}|\leq \frac {(3+2\varepsilon )n}{4}\) then we already have a \(\left (\frac {3}{2}+\varepsilon \right )\)-approximation as \(|\text {OPT}|\geq \frac {n}{2}\). Otherwise, since the contraction of each external link reduces the number of nodes by at least 2 and the contraction of any other link reduces the number of nodes by at least 1, we have that |Llong| + 2|Lext| + |Lshort|≤ n. So \(|L_{\text {ext}}|\leq n-\frac {(3+2\varepsilon )n}{4}=\frac {(1-2\varepsilon )n}{4}\) and hence \(|L_{\text {ext}}|+|L_{\text {long}}|\leq \frac {n+2\varepsilon n}{4}\le \left (\frac {1}{2} + \varepsilon \right )|\text {OPT}|\). Since |OPT|≥|Lshort|, we have that in this case the size of the solution is also at most \((\frac {3}{2}+\varepsilon )|\text {OPT}|\), concluding the proof. □
By replacing ε with \(1/\sqrt {\log n}\) in the above construction, we can obtain a slightly improved approximation factor of 3/2 + o(1) which still runs in polynomial time.
It remains to prove Lemma 5. To do this, we need some more notations. Given a link ℓ = (u,v), we say that the edges of the shortest path between u and v in the cycle are covered by ℓ (in case of multiple shortest paths we choose the one going from u to v in counter-clockwise order along the cycle). Given an edge e of the cycle, we define the cut-neighborhood of e, namely \(\mathcal {N}(e)\), as the \(2h_{\max \limits }-1\) edges that are closest to e, e included. We also define \(\mathcal {N}_{L}(e)\) as the set of links in L covering at least one edge from \(\mathcal {N}(e)\).
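Both notions are easy to compute once the cycle edges are indexed by their counter-clockwise endpoint, i.e. edge t joins \(v_{t}\) and \(v_{t+1}\) (indices modulo n). The following sketch, with the same tie-breaking rule as in the definition above, is only meant to make the bookkeeping concrete:

    def covered_edges(u, v, pos, n):
        # Edges of the shortest path between u and v along the cycle; on a tie
        # we keep the counter-clockwise path from u, as in the text.
        i, j = pos[u], pos[v]
        ccw = [(i + t) % n for t in range((j - i) % n)]   # edges i, ..., j-1
        cw = [(j + t) % n for t in range((i - j) % n)]    # edges j, ..., i-1
        return set(ccw) if len(ccw) <= len(cw) else set(cw)

    def cut_neighborhood(e, n, h_max):
        # The 2*h_max - 1 edges closest to edge e, e included.
        return {(e + d) % n for d in range(-(h_max - 1), h_max)}

    def links_near(e, links, pos, n, h_max):
        # N_L(e): links covering at least one edge of the cut-neighborhood of e.
        nbhd = cut_neighborhood(e, n, h_max)
        return [l for l in links if covered_edges(l[0], l[1], pos, n) & nbhd]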
Notice that in any feasible solution to a CycAP instance, at most one edge of the cycle is not covered: if it is not the case, then the cut defined by two uncovered edges is not satisfied as any link satisfying the cut would cover one of these two edges. We can use this observation to characterize the feasibility of a solution in terms of the cut-neighborhoods.
Consider a CycAP instance and let A be a set of links such that every edge of the cycle is covered by some link in A. A is feasible iff for each edge e, all the \(\{e,e^{\prime }\}\)-cuts, where \(e^{\prime }\in \mathcal {N}(e)\), are satisfied.
If A is feasible then the required properties are clearly satisfied since every cut is satisfied. On the other hand, suppose that A satisfies that every edge is covered by some link in A and the \(\{e,e^{\prime }\}\)-cuts are satisfied for every edge e and \(e^{\prime }\in \mathcal {N}(e)\). Consider a pair of edges \(\{e,e^{\prime }\}\) such that \(e^{\prime }\notin \mathcal {N}(e)\). By definition of \(\mathcal {N}(e)\) there is no link in A covering both edges at the same time, and as e is covered by some link, this link satisfies the \(\{e,e^{\prime }\}\)-cut. This implies that A is feasible as every cut is satisfied. □
This lemma is useful as it implies that, given an edge e and a set of links S, we can optimally complete S in order to satisfy every \(\{e,e^{\prime }\}\)-cut in time \(2^{O(h_{\max \limits }^{2})}\) just by guessing the subset of links from \(\mathcal {N}_{L}(e)\) that must be added, which are \(O(h_{\max \limits }^{2})\) only. Now we proceed to present the proof.
Let us assume that we deal with instances of CycAP such that there exists an optimal solution where every edge is covered by some link. If it is not the case, as there may be only one uncovered edge, we can guess this edge and contract it; this leads to an equivalent instance of the problem where we can require that the optimum solution covers all the edges. We say that an edge e is satisfied by a set of links A if it is covered by some link in A and furthermore every \(\{e,e^{\prime }\}\)-cut is satisfied by A. In particular A is a feasible solution for the problem iff it satisfies all the edges.
We next design a dynamic programming algorithm to compute a minimum cardinality feasible solution. Let us name the nodes \(v_{1},v_{2},\dots ,v_{n}\) in counter-clockwise order starting from some arbitrary node \(v_{1}\), and let the edges be \(e_{i} = (v_{i},v_{i+1})\) for each \(i=1,\dots ,n\) (assuming \(v_{n+1} = v_{1}\)).
For each edge ei and \(S\subseteq \mathcal {N}_{L}(e_{i})\), we define a cell T[i][S] which will correspond to a set \(S^{\prime }\) of links of smallest cardinality such that for each \(j\in \{1,\dots ,i\}\), ej is satisfied by \(S^{\prime }\), subject to \(S\subseteq S^{\prime }\). It is then sufficient to return T[n][∅].
We initialize the table by computing T[1][S] for each set \(S\subseteq \mathcal {N}_{L}(e_{1})\), which can be done by guessing how to complete S in order to satisfy e1 with links from \(\mathcal {N}_{L}(e_{1})\). Then, for each i ≥ 2 and \(S\subseteq \mathcal {N}_{L}(e_{i})\), in order to fill the cell T[i][S], we consider all the possible subsets \(A\subseteq \mathcal {N}_{L}(e_{i})\) such that \(S(A):=T[i-1][(S\cup A)\cap \mathcal {N}_{L}(e_{i-1})] \cup (S\cup A)\) satisfies ei. Among them we select a set A∗ that minimizes |S(A)|, and we set T[i][S] = S(A∗) (see Fig. 2 for a sketch).
Depiction of an iteration of the DP from Lemma 5, where we are currently at edge ei. Left: Green links correspond to S and at this point we must decide which extra links to add to S in order to satisfy the edges \(e_{1},\dots ,e_{i}\). Right: This computation is done by looking at a proper previous cell in the table (orange links) which contains S and satisfies \(e_{1},\dots ,e_{i-1}\), and then add the extra required links A∗ (red links) in order to satisfy ei too
The correctness of the computation follows by a simple induction on i. The table can be filled in total time \(\text {poly}(n) \cdot 2^{O(h_{\max \limits }^{2})}\), plus an extra factor n from the initial guessing of an uncovered edge (that is contracted). □
We complement Theorem 3 with an asymptotically matching lower bound.
The approximation ratio of the long-first algorithm is at least \(\frac {3}{2}\).
Consider the following construction: for each \(k> \frac {1}{2\varepsilon }\) consider an instance (Gk,Lk) of CycAP defined by a cycle of n = 4k nodes (assume that the cycle is defined by the order of the nodes \(v_{1},v_{2}, \dots , v_{4k}\)) and the following set of links (see Fig. 3 (Left)):
For each \(i=1,\dots , \frac {n}{4}-1\), \((v_{i+1},v_{\frac {n}{2}+1-i}) \in L_{k}\).

Left: Instance (G4,L4) from the lower bound construction in Lemma 7. An optimal solution is defined by red links. Right: If the algorithm picks first the thick red link (which is long) and then the links which become external (blue links and \((v_{1},v_{9})\)) we obtain this subinstance without crossing pairs of links
As argued in Lemma 4, the first and second set of links define an optimal solution of size \(\frac {n}{2}\). We will now prove that there exists a sequence of choices performed by our algorithm that outputs a solution of size \(\frac {3n}{4}-1\), which implies that the approximation ratio is at least \(\frac {3}{2} - \frac {2}{n}\) and this value approaches \(\frac {3}{2}\) as k goes to infinity. Notice first that the link \((v_{\frac {n}{4}+1}, v_{\frac {3n}{4}+1})\in L_{k}\) has length \(2k>\frac {1}{\varepsilon }\) and hence it is long so the first stage of the algorithm can include it in the solution. After doing that, the second and third set of links become external and thus the algorithm will include them in the solution. Once all these links are included and contracted, we get a cactus consisting of two cycles of \(\frac {n}{4}\) nodes each and without crossing links (see Fig. 3 (Right)). Hence, the algorithm must pick all the remaining links to complete the solution. The size then of this solution is \(\frac {n}{4} + 1 + 2\left (\frac {n}{4}-1\right ) = \frac {3n}{4}-1\). □
LP Relaxations for CycAP
We start by lower-bounding the integrality gap of the standard cut LP for CycAP.
The standard cut LP for CycAP has integrality gap at least 2.
Consider a cycle of size k and, for each edge, a parallel link. The optimum integral solution has size k − 1, while setting each variable to \(\frac {1}{2}\) gives a feasible fractional solution of cost \(\frac {k}{2}\). □
This shows that the standard cut LP is not strong enough even for instances without crossing nor long links, cases that we can handle optimally via combinatorial algorithms. We next present a stronger LP that exploits a more general set of constraints.
Let (G = (V,E),L) be a CycAP instance and \(S\subseteq E\). We define the S-reduced instance (GS,LS) as follows: We contract the edges of E ∖ S, obtaining a cycle with |S| edges which defines GS, and the set of links LS will correspond to the images of L. Notice that there is a one-to-one relation between LS and the links in L which satisfy some cut defined by a pair of edges from S. We denote by OPTS the optimal solution for the instance (GS,LS) (for |S|≤ 1, we simply set \(\text {OPT}_{S}=\emptyset \)). The following lemma characterizes the feasibility of a solution.
Given an instance (G,L) of CycAP, a solution \(A\subseteq L\) is feasible iff for every \(S\subseteq E\) it holds that |A ∩ LS|≥|OPTS|.
Suppose that there exists \(S\subseteq E\) such that |A ∩ LS| < |OPTS|. This means that A ∩ LS is not a feasible solution for (GS,LS) and hence there exist two edges ei,ej ∈ S such that no link in A ∩ LS satisfies the {ei,ej}-cut. As the remaining links in A ∖ LS also do not satisfy the cut by definition, this cut remains unsatisfied in the original instance, implying that A is not feasible.
On the other hand, suppose that A satisfies the claimed property for every set S. If we consider just sets S consisting of two edges this is exactly the characterization of feasibility shown in Observation 1, implying that A is feasible. □
This implies that we can add the constraint \({\sum }_{\ell \in L_{S}}{x_{\ell }} \ge |\text {OPT}_{S}|\) for every \(S\subseteq E\). Unfortunately, there is an exponential number of such constraints and most of them require computing |OPTS| for large instances. However, if we restrict ourselves to sets of edges having constant size, we get an LP formulation with polynomially many constraints that can be written in polynomial time. We call this LP the k-edge-cut LP for a given constant \(k\in \mathbb {N}\), which is similar in spirit to the bundle-LP for TAP introduced by Adjiashvili [1].
$$\begin{array}{rll} \min & \sum\limits_{\ell \in L}{x_{\ell}}& (k\text{-edge-cut LP}) \\ s.t. & \sum\limits_{\ell\in L_{S}}{x_{\ell}} \ge |\text{OPT}_{S}| & \forall S\subseteq E, |S|\le k \\ & 0\leq x_{\ell}\leq 1 & \forall \ell \in L \end{array}$$
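Writing down the constraint for a given set S only requires deciding which links belong to \(L_{S}\): a link survives the contraction of E ∖ S precisely when its endpoints fall into different arcs of the cycle delimited by the edges of S, and \(|\text {OPT}_{S}|\) itself can be computed by brute force since |S|≤ k is a constant. A small sketch of the membership test, using the same 0-indexed labeling as in the earlier sketches, is:

    import bisect

    def arc_of(node, S_sorted):
        # Cutting the cycle at the edges in S splits the nodes into arcs; node i
        # lies on the arc starting right after the largest cut edge of index < i
        # (wrapping around to the last cut edge for the first arc).
        t = bisect.bisect_left(S_sorted, node)
        return S_sorted[t - 1] if t > 0 else S_sorted[-1]

    def links_in_L_S(S, links):
        # A link belongs to L_S iff its endpoints lie in different arcs, i.e.
        # its image in the S-reduced instance is not a loop.
        S_sorted = sorted(S)
        return [(u, v) for (u, v) in links
                if arc_of(u, S_sorted) != arc_of(v, S_sorted)]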
Notice that for k = 2 this is exactly the standard cut LP. Now we will prove some properties of this relaxation and bound its integrality gap.
Lemma 10
Given ε > 0, for \(k=\frac {1}{\varepsilon ^{2}}\) the k-edge-cut LP restricted to instances with links of length at most \(\frac {1}{\varepsilon }\) has integrality gap at most (1 + 2ε).
We will assume w.l.o.g. that the set of links L contains every possible link of length 1. If it is not the case, let us include them obtaining a new set of links \(L^{\prime } \supseteq L\). The optimal LP value can only decrease while the size of the optimal solution cannot decrease, implying that the integrality gap can only increase due to this operation. To see this last fact, assume by contradiction that there exists a solution \(\text {OPT}^{\prime }\) for the new instance having strictly smaller size than OPT. Consider now a solution S consisting of \(\text {OPT}^{\prime }\cap L\) plus a minimal set of links from L that makes S feasible (this is possible since the instance admits a feasible solution). If we in parallel iteratively contract the common links in S and \(\text {OPT}^{\prime }\) we arrive to the same CacAP instance, but now the remaining links from \(\text {OPT}^{\prime }\) have length 1 and the contraction of each of them reduces the number of nodes in the instance by exactly one node while the contraction of the remaining links in S reduces the number of nodes by at least 1. Thus \(|S|\le |\text {OPT}^{\prime }|\) which is not possible since \(S\subseteq L\).
Let X = (xℓ)ℓ∈L be an optimal solution for the k-edge-cut LP. We will construct an integral feasible solution of size at most \((1+\varepsilon ) {\sum }_{\ell \in L}{x_{\ell }}\). To do so, we will partition the cycle into disjoint intervals as follows: We will first define an interval of size k (which we will call a long interval) and then an interval of size \(\frac {1}{\varepsilon }\) (which we will call a short interval), and then continue with this procedure until it is not possible to continue. If in the end there are at most \(\frac {1}{\varepsilon }\) edges we define a last short interval consisting of these remaining edges, otherwise we define a short interval consisting of the last \(\frac {1}{\varepsilon }\) edges and a long interval consisting of the remaining edges (which will have size at most k). The number of short intervals is upper bounded by \(1+\left \lfloor \frac {n}{1/\varepsilon ^{2}+1/\varepsilon }\right \rfloor \leq 1+\frac {\varepsilon ^{2} n}{1+\varepsilon }\leq \varepsilon ^{2} n\) assuming w.l.o.g. that n is lower bounded by a large enough constant.
Notice that \({\sum }_{\ell \in L}{x_{\ell }} \ge n/2\) by a simple averaging argument over the n constraints corresponding to all the pairs of consecutive edges: every link appears in exactly two such constraints and the right-hand side of each constraint is 1. Since the total number of links of length 1 having both endpoints in a short interval is at most \(\varepsilon ^{2} n \cdot \frac {1}{\varepsilon } = \varepsilon n \le 2 \varepsilon {\sum }_{\ell \in L}{x_{\ell }}\), we can add them to our solution at a negligible cost.
Consider now the set of long intervals \(S_{1}, S_{2}, \dots , S_{T}\). Notice that no link has endpoints in different long intervals, and hence the LP constraints associated to such intervals do not share common variables. This implies that \({\sum }_{\ell \in L}{x_{\ell }} \ge {\sum }_{i=1}^{T}{|\text {OPT}_{S_{i}}|}\). Our feasible solution will consist of all the links of length 1 with both endpoints in a short interval plus the optimal solutions \(\text {OPT}_{S_{i}}\) for each long interval Si. As argued before, the size of this solution is at most \((1+2\varepsilon ) {\sum }_{\ell \in L}{x_{\ell }}\) and the feasibility of the solution follows since every \(\{e,e^{\prime }\}\)-cut where e is in a short interval is satisfied by a link of length 1, while the remaining cuts are satisfied by the links computed optimally. □
Given ε > 0, for \(k=\frac {1}{\varepsilon ^{2}}\) the k-edge-cut formulation has integrality gap at most (1 + 4ε) restricted to instances without crossing pairs of links.
Let X = (xℓ)ℓ∈L be an optimal solution for the k-edge-cut LP. Suppose that the instance does not contain links of length at least \(\frac {1}{\varepsilon }\), then we can conclude the claim thanks to Lemma 10. Otherwise, we will pick any link of length at least \(\frac {1}{\varepsilon }\) and contract it, obtaining a CacAP instance consisting of two cycles without external links (as there are no crossing links), both of size at least \(\frac {1}{\varepsilon }\). If any cycle still contains some long link, we iterate this procedure. Let Llong be the set of long links we picked during this procedure and \(C_{1}, C_{2}, \dots , C_{T}\) be the set of cycles at the end. By the same argument as in Theorem 3, we have that \(|L_{\text {long}}|\le \varepsilon n \le 2\varepsilon {\sum }_{\ell \in L}{x_{\ell }}\).
Applying Lemma 10 to each cycle, we obtain a feasible solution of size at most \((1+2\varepsilon ){\sum }_{i=1}^{T}{\text {OPT}_{\text {LP}_{i}}} + |L_{\text {long}}|\), where LPi is the k-edge-cut LP defined by each cycle Ci and its internal links. As there are no external links, the sum of the previous LP solutions is the optimal solution for the following LP:
$$ \begin{array}{rlr} \min & \sum\limits_{\ell \in L\setminus L_{\text{long}}}{x_{\ell}}& \\ s.t. & \sum\limits_{\ell\in L_{S}}{x_{\ell}} \ge |\text{OPT}_{S}| & \forall i\in\{1,\dots,T\}, \forall S\subseteq E(C_{i}), |S|\le \frac{1}{\varepsilon^{2}} \\ & 0\leq x_{\ell}\leq 1 & \forall \ell \in L\setminus L_{\text{long}} \end{array} $$
The set of constraints of this LP is a subset of the constraints of the original LP as links in Llong do not appear in these constraints and the set of variables is a subset of the original one. Thus we have \({\sum }_{i=1}^{T}{\text {OPT}_{\text {LP}_{i}}}\le {\sum }_{\ell \in L}{x_{\ell }}\), and then we can conclude that the constructed solution has size at most \((1+4\varepsilon ){\sum }_{\ell \in L}{x_{\ell }}\). □
Following the proof of Theorem 3 plus the previous results we can get the following bound on the integrality gap for general instances of CycAP.
Corollary 1
For any ε > 0, the integrality gap of the k-edge-cut LP for \(k=\frac {1}{\varepsilon ^{2}}\) is at most \(\frac {3}{2}+O(\varepsilon )\).
Let X = (xℓ)ℓ∈L be an optimal solution for the k-edge cut LP and consider the output of the \(\left (\frac {3}{2}+\varepsilon \right )\)-approximation from Section 3.2 decomposed into Llong, Lext and Lshort as in the proof of Theorem 3. As argued before, we know that \({\sum }_{\ell \in L}{x_{\ell }}\ge \frac {n}{2}\) and analogously to the proof of Lemma 11 we have that \(|L_{\text {short}}|\le (1+2\varepsilon ){\sum }_{\ell \in L}{x_{\ell }}\). Hence essentially the same analysis as in Theorem 3 provides the same bound of 3/2 + O(ε) up to an extra (1 + ε) factor. □
In the following two sections we discuss the hardness of approximation for WCycAP and CycAP, respectively.
Hardness of Approximation for WCycAP
We now provide an approximation preserving reduction from WCacAP to WCycAP. Note that finding a better-than-2-approximation for WCacAP is at least as hard as finding such an approximation for WTAP, a big open problem in the area. Therefore our reduction shows that achieving a similar result for WCycAP is a very hard task as well.
Given an instance A of WCacAP, it is possible to construct in polynomial time an instance B of WCycAP (whose only possibly new weight value is 0) such that any feasible solution to A can be mapped in polynomial time into a feasible solution to B of the same cost and vice versa.
To prove Theorem 4 we make use of the "inverse" of the contraction of a link, which we call an expansion: Consider a WCacAP instance with a node v with degree greater than 2. An expansion of v will consist of taking two cycles containing v and replacing them by the Eulerian tour that traverses them starting from v. Every node appears exactly once except for v which appears twice, for which we create two copies: v1 the starting node and v2 the intermediate one. The links originally incident to v are replaced by links of the same cost incident to v1, and we also add a link of cost zero between v1 and v2 (see Fig. 4 for an example). The two main properties of this procedure are that: (1) the contraction of a link created by an expansion brings back the graph to the original state and (2) v is replaced by v1 and v2, which have degree \(\deg (v)-2\) and 2, respectively.
Depiction of an expansion applied to node v in the left graph considering the cycles to the left and right of v. Dashed edges correspond to links and the highlighted link in the middle graph corresponds to the extra link of cost zero added by the expansion. The right graph is the final WCycAP instance formed by another expansion on the middle graph
Proof of Theorem 4
At a high level our proof works as follows. We will build in polynomial time a chain of WCacAP instances \((G_{1},L_{1}),\dots ,(G_{k},L_{k})\), with the following properties: (i) \((G_{1},L_{1})\) is the input instance and \(G_{k}\) is a cycle; (ii) \((G_{i+1},L_{i+1})\), \(i = 1,\dots ,k-1\), is obtained from \((G_{i},L_{i})\) via precisely one expansion (so \(G_{i+1}\) contains precisely one cycle less than \(G_{i}\), and precisely one new link \(\ell_{i+1}\) of cost zero); (iii) a feasible solution to \((G_{i},L_{i})\), \(i = 1,\dots ,k-1\), can be turned in polynomial time into a feasible solution to \((G_{i+1},L_{i+1})\) of the same cost and vice versa. The above properties together trivially imply the claim.
Given \((G_{i},L_{i})\), we proceed as follows. Consider any node \(v \in G_{i}\) of degree at least 4, and let \(C_{1}\) and \(C_{2}\) be any two cycles incident to v (which must exist). We apply an expansion to node v w.r.t. \(C_{1}\) and \(C_{2}\), hence creating a new link \(\ell_{i+1}\) of cost zero. Properties (i) and (ii) follow immediately by construction. Observe that \((G_{i},L_{i})\) can be obtained from \((G_{i+1},L_{i+1})\) by contracting \(\ell_{i+1}\). Hence property (iii) follows directly from Lemma 1. In more detail, given a feasible solution \(A_{i+1}\) to \((G_{i+1},L_{i+1})\), we first add \(\ell_{i+1}\) to \(A_{i+1}\) (which keeps the solution feasible and does not change its cost). By Lemma 1, \(A_{i} := A_{i+1} \setminus \ell_{i+1}\) is a feasible solution to \((G_{i},L_{i})\) of the same cost. Vice versa, given a feasible solution \(A_{i}\) to \((G_{i},L_{i})\), \(A_{i+1} := A_{i} \cup \ell_{i+1}\) is a feasible solution to \((G_{i+1},L_{i+1})\) of the same cost. □
Hardness of Approximation for CycAP
In this section we prove that CycAP is APX-hard via a reduction from a restricted case of 3-Dimensional Matching (3DM). In the general version of 3DM we are given three disjoint sets W,X and Y having equal cardinality p and a set of m hyperedges \(H\subseteq W\times X \times Y\). A (3D) matching is a subset \(M\subseteq H\) such that each element of W ∪ X ∪ Y belongs to at most one hyperedge in M, and this matching is perfect if |M| = p. Notice that in a perfect matching M each element of W ∪ X ∪ Y belongs to precisely one hyperedge. The goal is to determine whether a perfect matching exists. We will consider the special case 3DM-K, \(K\in \mathbb {N}\), where we add the constraint that each element from W ∪ X ∪ Y appears in at most K hyperedges. The following result will help us to conclude our final claim.
Theorem 5 (Petrank [30])
For some fixed ε0 > 0, it is NP-hard to distinguish whether an instance of 3DM-5 with |W| = |X| = |Y | = q has a perfect matching (of size q) or every matching has size at most (1 − ε0)q.
The proof of the following theorem is similar in spirit to the proof of NP-hardness for WTAP due to Frederickson and JáJá [15] and the extension presented by Kortsarz et al. [23]. In the first reduction the authors start from an instance A of 3DM with 3p nodes and m hyperedges, and build a WTAP instance B such that: A has a feasible solution (with p hyperedges) iff B has a feasible solution with p + m links. By duplicating the edges in B, one obtains a CacAP instance C with exactly the same property over some cactus G. Our main idea is to turn C into an instance D of CycAP by constructing an Euler tour \(G^{\prime }\) out of G and shortcutting some nodes. However, we need to carefully choose the ordering in the Euler tour in order to preserve a mapping between the feasible solutions of C and D. By following the refined approach from the second reduction, we will show that it is hard to distinguish solutions with a gap depending on the maximum degree in the instance and then use Theorem 5 to conclude the following result.
For some fixed ε > 0, it is NP-hard to approximate CycAP within a factor 1 + ε.
Construction of the Instance
Let \(H\subseteq W\times X\times Y\) be an instance of 3DM with |H| = m, \(W=\{w_{1},\dots ,w_{p}\}, X=\{x_{1},\dots ,x_{p}\}\) and \(Y=\{y_{1},\dots ,y_{p}\}\). We will define an instance (G = (V,E),L) of CycAP where nodes are placed on the cycle in the order as they appear below in counterclockwise direction (see Fig. 5 for a depiction of the instance):
For each node xi ∈ X we define a node xi;
For each node yi ∈ Y we define a node yi;
Let H(wi) denote the hyperedges in H containing wi ∈ W. For each hyperedge h ∈ H(wi) we define two nodes, namely hX and hY (hyperedge nodes). These nodes are added to the cycle in the following order. For each \(i \in \{1, \dots , p\}\), we add first nodes hX corresponding to hyperedges in H(wi) (in some arbitrary order) and then the corresponding nodes hY respecting the same order used before. We will denote the first set of nodes by HX(wi), and the second set by HY(wi).
The set of links L is defined as follows:
For each hyperedge h ∈ H we add the link (hX,hY);
For each hyperedge h ∈ H and a node x ∈ X, we add the link (hX,x) iff x ∈ h;
For each hyperedge h ∈ H and a node y ∈ Y, we add the link (hY,y) iff y ∈ h.
Example of the construction in Theorem 6. Red links correspond to hyperedge h(2) = (w1,x2,y1) and green links join the copies of the hyperedges
If the 3DM instance H contains a 3D matching M with p hyperedges then the CycAP instance (G,L) constructed as above admits a solution A of size p + m.
Suppose that H contains a 3D matching M of size p. We build a solution A to (G,L) as follows: For each hyperedge h = (w,x,y) ∈ M we add to A the links (hX,x) and (hY,y). Also, for each hyperedge h ∈ H ∖ M we add the link (hX,hY) to A. Observe that the total number of links in A is 2p + (m − p) = p + m.
Let us show that A is a feasible solution. By Observation 1, it is sufficient to consider any pair of edges {e1,e2}, and show that there exists some link ℓ ∈ A satisfying the corresponding {e1,e2}-cut. Let us denote by \(S^{\prime }\) and \(S^{\prime \prime }\) the sets of nodes induced by the cut. Let HX (resp., HY) be the collection of nodes of type hX (resp., hY). We make the following case distinction: Suppose first that e1 is incident to two nodes in X or e1 = (xp,y1) (the case e1 being incident to two nodes in Y is symmetric). We distinguish the following 3 subcases depending on e2:
Suppose e2 is incident to at least one node in X ∪ Y. Then one of the sets in the cut, say \(S^{\prime \prime }\), contains all the hyperedge nodes while \(S^{\prime }\) contains at least one node z ∈ X ∪ Y. By construction each node in X (resp. Y) is adjacent to some node in HX (resp., in HY). Thus this cut is satisfied.
Suppose e2 is not incident to any node in HY(wp). Then one of the sets in the cut, say \(S^{\prime }\), contains completely Y, while \(S^{\prime \prime }\) contains HY(wp). By construction, for h = (wp,x,y) ∈ M, ℓ = (hY,y) ∈ A, hence this cut is satisfied.
Suppose e2 is incident to some node in HY(wp). Then one of the sets in the cut, say \(S^{\prime \prime }\), contains HX while the other set contains at least one node x from X. Again by construction, for h = (w,x,y) ∈ M, ℓ = (hX,x) ∈ A. Hence this cut is satisfied.
Suppose on the other hand that e1 and e2 are incident to at least one hyperedge node. Notice that one of the sets in the cut, say \(S^{\prime }\), contains X ∪ Y. We distinguish the following 2 subcases:
If \(S^{\prime \prime }\) contains entirely HX(w) or HY(w) for some w ∈ W, then for h = (w,x,y) ∈ M, (hX,x) or (hY,y) is contained in A and the cut is satisfied.
In the remaining case we prove that the following claim holds: There exists a hyperedge h such that \(h_{X}\in S^{\prime }\) and \(h_{Y}\in S^{\prime \prime }\).
Suppose by contradiction that for every hyperedge h both hX and hY belong to the same side of the considered cut. Let wi be such that either HX(wi) or HY(wi) has non-empty intersection with both sides of the cut. Note that such wi must exist, otherwise there would exist wj such that \(S^{\prime \prime }\) contains either HX(wj) or HY(wj) completely which was already covered by the previous case. Assume w.l.o.g. that \(H_{X}(w_{i}) = \{{h_{X}^{1}},\dots ,{h_{X}^{q}}\}\) is the considered set with elements sorted in counterclockwise direction. Since \({h_{X}^{1}}\) and \({h_{Y}^{1}}\) are on the same side of the partition and HX(wi) is not fully contained in any side of the partition, it must hold that one set of the partition is properly contained in HX(wi). Then any node inside that set has its copy on the other side of the partition. This is in contradiction with the assumption.
Let h be a hyperedge as in the previous claim. We are either adding to the solution the link that joins both copies of h (i.e. the case when h∉M) and the proof is finished, or we are adding the two links joining the two copies of h to elements in X and Y (i.e. the case when h ∈ M). Since X ∪ Y is contained in \(S^{\prime }\) and both copies of h are on different sides of the partition, one of the links satisfies the cut. □
For any z ∈ W ∪ X ∪ Y in a 3DM instance, let \(\deg (z)\) be the number of hyperedges in H containing z. Let also Δ denote the maximum degree of the instance, i.e., \(\varDelta =\max \limits _{z\in W\cup X\cup Y}{\deg (z)}\). By following an approach analogous to the one from Kortsarz et al. [23], we can show that the reduction is gap-preserving, as made precise by the following lemma.
If the CycAP instance (G,L) constructed as above admits a solution A with |A|≤ (1 + ε)(p + m), then the 3DM instance H contains a 3D matching M with |M|≥ p − (2 + 10Δ)(p + m)ε.
Let A be a feasible solution to (G,L) with |A|≤ (1 + ε)(p + m). Note that G contains 2(p + m) nodes and the links must form an edge cover (otherwise the resulting graph would not be 3-edge-connected). Call a node permissible if it is adjacent to exactly one link in A and impermissible otherwise. Let Vperm and Vimperm be the set of permissible and impermissible nodes respectively. We will first prove that the number of impermissible nodes is upper bounded by 2ε(p + m). In fact, if \(\deg _{A}(v)\) denotes the number of links in A incident to v, we have that
$$2|A| = \sum\limits_{v\in V}{\deg_{A}(v)} = \sum\limits_{v\in V_{\text{perm}}}{\deg_{A}(v)} + \sum\limits_{v\in V_{\text{imperm}}}{\deg_{A}(v)} \ge |V_{\text{perm}}| + 2|V_{\text{imperm}}|$$
where the last inequality comes from the fact that impermissible nodes are adjacent to at least two links. Since |A|≤ (1 + ε)(p + m), and |Vperm| + |Vimperm| = 2(p + m), we can conclude the claim.
We will now compute a set \(M^{\prime }\) which is almost a matching. We initialize \(M^{\prime }=\emptyset \) and then, iteratively for \(j=1,\dots , p\), we try to add an hyperedge to \(M^{\prime }\) as follows: if xj is permissible, then it is adjacent to one node \(h_{x}^{(j)} \in H_{X}\) (let us assume \(h_{x}^{(j)}\in H_{X}(w_{i})\)); if both \(h_{x}^{(j)}\) and its copy \(h_{y}^{(j)}\in H_{Y}(w_{i})\) are permissible, then \(h_{y}^{(j)}\) is adjacent to one node yk. If yk is permissible, then we add (wi,xj,yk) to \(M^{\prime }\). Notice that hyperedges added by this procedure are indeed in H by construction. Our claim is that \(|M^{\prime }|\ge p-2\varDelta (p+m)\varepsilon \). Actually, if xj, \(h_{x}^{(j)}\) or \(h_{y}^{(j)}\) are impermissible, then only one iteration fails (the one indexed by j). If yk is impermissible then it can cause at most Δ iterations to fail, since it can be connected to at most Δ nodes in HY. If we denote by ny the number of impermissible nodes yk involved in the procedure, then the number of iterations that fail is at most (2ε(p + m) − ny) + nyΔ. Since ny ≤ 2ε(p + m) (the total number of impermissible nodes), the number of iterations that fail is at most 2Δ(p + m)ε, proving the claim.
By construction, hyperedges in \(M^{\prime }\) have different elements from X and Y but elements from W might be repeated. Thus, for every wi belonging to more than one hyperedge in \(M^{\prime }\), we remove from \(M^{\prime }\) all but one of such hyperedges, obtaining \(M^{\prime \prime }\) which is now a matching. Let \(\mu =p-|M^{\prime \prime }|\) be the number of vertices wi not appearing in any hyperedge of \(M^{\prime }\) (equivalently of \(M^{\prime \prime }\)). Since \(|M^{\prime }|-|M^{\prime \prime }| \le p-|M^{\prime \prime }|=\mu \), we can find a lower bound on the size of \(M^{\prime \prime }\) by bounding above μ. We indeed claim that μ ≤ (2 + 8Δ)(p + m)ε.
Let \(L^{\prime }\) be the links in L of the form \((x_{j},h_{X}^{(j)})\) and \((y_{k},h_{Y}^{(j)})\) where \(h_{x}^{(j)}\) corresponds to a hyperedge \((w_{i}, x_{j}, y_{k})\in M^{\prime }\) and \(h_{y}^{(j)}\) corresponds to its copy. We have that \(|L^{\prime }| = 2|M^{\prime }| \ge 2p-4\varDelta (p+m)\varepsilon \), hence
$$|A\setminus L^{\prime}| \le (1+\varepsilon)(p+m) - 2p+4\varDelta(p+m)\varepsilon = m-p+(1+4\varDelta)(p+m)\varepsilon.$$
Consider on the other hand the μ nodes wi which are not intersected by hyperedges in \(M^{\prime }\). Since A is a feasible solution, for each such wi there must be a link in A connecting HX(wi) ∪ HY(wi) and X ∪ Y, because otherwise we could disconnect HX(wi) ∪ HY(wi) from the rest of the graph by removing the two edges in the boundary of HX(wi) ∪ HY(wi), contradicting the feasibility of A. Notice that these μ links are part of \(A\setminus L^{\prime }\). Furthermore, since A is an edge cover, the remaining 2m − 2p − μ nodes in HX ∪ HY untouched by \(L^{\prime }\) plus the μ aforementioned links must be incident to some link in A, implying that
$$|A\setminus L^{\prime}| \ge \mu + \frac{2m-2p-\mu}{2} = m-p+\frac{\mu}{2}.$$
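For clarity, chaining this lower bound with the upper bound on \(|A\setminus L^{\prime }|\) derived above gives
$$m-p+\frac{\mu}{2} \le |A\setminus L^{\prime}| \le m-p+(1+4\varDelta)(p+m)\varepsilon .$$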
Rearranging, we get that μ ≤ (2 + 8Δ)(p + m)ε, and hence we conclude that the size of \(M^{\prime \prime }\) is at least
$$|M^{\prime}|-\mu \ge p-2\varDelta(p+m)\varepsilon-(2+8\varDelta)(p+m)\varepsilon = p-(2+ 10\varDelta)(p+m)\varepsilon,$$
completing the proof. □
We can now use Lemmas 12 and 13 together with Theorem 5 to conclude the proof of Theorem 6. Notice that in 3DM-5, since Δ = 5, we have that m = |H|≤ 5|W| = 5p.
We will show that our reduction presented above is gap-preserving. Specifically, we will show that if H is an instance of 3DM-5 and (G,L) is the corresponding CycAP instance, then
If H admits a matching of size p, then (G,L) admits a feasible solution of size p + m;
If H does not admit a matching of size at least p(1 − ε0), then (G,L) does not admit a feasible solution of size at most \((p+m)(1+\frac {\varepsilon _{0}}{312})\).
The first statement follows directly from Lemma 12, while the second is the contrapositive of Lemma 13 when setting \(\varepsilon =\frac {\varepsilon _{0}}{312}\), as in this case we have that p − (2 + 10Δ)(5p + p)ε = p(1 − 312ε) = p(1 − ε0). □
This lemma implies that CycAP is FPT with parameter \(h_{\max \limits }\).
For |S|≤ 1, we simply set OPTS = ∅.
Adjiashvili, D.: Beating approximation factor two for weighted tree augmentation with bounded costs. ACM Transactions on Algorithms, 1549–6325 (2018)
Byrka, J., Grandoni, F., Jabal Ameli, A.: Breaching the 2-Approximation barrier for connectivity augmentation: a reduction to steiner tree. ACM Symposium on Theory of Computing (STOC), 815–825 (2020)
Basavaraju, M., Fomin, F. V., Golovach, P., Misra, P., Ramanujan, M. S., Saurabh, S.: Parameterized algorithms to preserve connectivity. ICALP 2014, 800–811 (2014)
Cheriyan, J., Gao, Z.: Approximating (Unweighted) Tree Augmentation via Lift-and-Project, Part I: Stemless TAP. Algorithmica 80, 530–559 (2018)
Cheriyan, J., Gao, Z.: Approximating (Unweighted) Tree Augmentation via Lift-and-Project, Part II. Algorithmica 80, 608–651 (2018)
Cheriyan, J., Jordán, T., Ravi, R.: On 2-Coverings and 2-Packings of laminar families. ESA 1999, 510–520 (1999)
Cheriyan, J., Karloff, H., Khandekar, R., Könemann, J.: On the integrality ratio for tree augmentation. Oper. Res. Lett. 36, 399–401 (2008)
Cohen, N., Nutov, Z.: A (1 + ln2)-approximation algorithm for minimum-cost 2-edge-connectivity augmentation of trees with constant radius. Theor. Comput. Sci., 67–74 (2013)
Cormen, T. H., Leiserson, C. E., Rivest, R. L., Stein, C.: Introduction to algorithms. Third edition (2009)
Dinitz, E., Karzanov, A., Lomonosov, M.: On the structure of the system of minimum edge cuts of a graph. Studies in Discrete Optimization, 290–306 (1976)
Even, G., Feldman, J., Kortsarz, G., Nutov, Z.: A 1.8 approximation algorithm for augmenting edge-connectivity of a graph from 1 to 2. ACM Trans. Algorithms 5, 21:1–21:17 (2009)
Fiorini, S., Groß, M., Könemann, J., Sanità, L.: Approximating weighted tree augmentation via Chvátal-Gomory cuts. SODA 2018, 817–831 (2018)
Frank, A.: Augmenting graphs to meet edge-connectivity requirements. SIAM J. Discrete Math. 5, 25–53 (1992)
Frank, A.: Connections in combinatorial optimization. Number 38 in Oxford lecture series in mathematics and its applications (2011)
Frederickson, G. N., JáJá, J.: Approximation algorithms for several graph augmentation problems. SIAM J. Comput. 10, 270–283 (1981)
Goemans, M. X., Goldberg, A. V., Plotkin, S., Shmoys, D. B., Tardos, É., Williamson, D.P.: Improved approximation algorithms for network design problems. SODA 1994, 223–232 (1994)
Grandoni, F., Kalaitzis, C., Zenklusen, R.: Improved approximation for tree augmentation: Saving by rewiring. STOC 2018, 632–645 (2018)
Hsu, T.: On four-connecting a triconnected graph. J. Algorithms 35, 202–234 (2000)
Jain, K.: A factor 2 approximation algorithm for the generalized Steiner network problem. Combinatorica 21, 39–60 (2001)
Jordan, T.: On the optimal vertex-connectivity augmentation. J. Combin. Theory 35, 202–234 (1995)
Khuller, S., Thurimella, R.: Approximation algorithms for graph augmentation. J. Algorithms 14, 214–225 (1993)
Knuth, D. E.: Postscript about NP-hard problems. SIGACT News 6, 15–16 (1974)
Kortsarz, G., Krauthgamer, R., Lee, J. R.: Hardness of approximation for Vertex-Connectivity network design problems. SIAM J. Comput. 33, 704–720 (2004)
Kortsarz, G., Nutov, Z.: A Simplified 1.5-Approximation Algorithm for Augmenting Edge-Connectivity of a Graph from 1 to 2. ACM Trans. Algorithms 12, 23:1–23:20 (2015)
Kortsarz, G., Nutov, Z.: Approximating minimum cost connectivity problems. Handbook on Approximation Algorithms and Metaheuristics 35, 202–234 (2007)
Kortsarz, G., Nutov, Z.: LP-relaxations for tree augmentation. APPROX/RANDOM 2016, 13:1–13:16 (2016)
Marx, D., Végh, L.: Fixed-parameter algorithms for minimum-cost edge-connectivity augmentation. ACM Trans. Algorithms 11 (4), 27:1–27:24 (2015)
Nagamochi, H.: An approximation for finding a smallest 2-Edge-Connected subgraph containing a specified spanning tree. Discrete Appl. Math. 126, 83–113 (2003)
Nutov, Z.: On the tree augmentation problem. ESA 2017, 61:1–61:14 (2017)
Petrank, E.: The hardness of approximation: Gap location. Comput. Complex. 4, 133–157 (1994)
Shmoys, D. B.: The design of approximation algorithms. Cambridge (2011)
Végh, L.: Augmenting undirected Node-Connectivity by one. SIAM Journal of Discrete Mathematics, 695–718 (2011)
Watanabe, T., Nakamura, A.: Edge-Connectivity Augmentation problems. J. Comput. Syst. Sci. 35, 96–144 (1987)
Williamson, D. P., Shmoys, D. B.: Algorithms and Complexity. Elsevier, Amsterdam (2011)
Open Access funding provided by SUPSI - University of Applied Sciences and Arts of Southern Switzerland
IDSIA, Lugano, Switzerland
Waldo Gálvez, Fabrizio Grandoni & Afrouz Jabal Ameli
University of Wrocław, Wrocław, Poland
Krzysztof Sornat
Correspondence to Afrouz Jabal Ameli.
This article belongs to the Topical Collection: Special Issue on Approximation and Online Algorithms (2019)
Guest Editors: Evripidis Bampis and Nicole Megow
Partially supported by the SNSF Grant 200021_159697/1, the SNSF Excellence Grant 200020B_182865/1, the National Science Centre, Poland, grant numbers 2015/17/N/ST6/03684, 2015/18/E/ST6/00456 and 2018/28/T/ST6/00366. K. Sornat was also supported by the Foundation for Polish Science (FNP) within the START programme.
Gálvez, W., Grandoni, F., Jabal Ameli, A. et al. On the Cycle Augmentation Problem: Hardness and Approximation Algorithms. Theory Comput Syst 65, 985–1008 (2021). https://doi.org/10.1007/s00224-020-10025-6
Issue Date: August 2021
Connectivity augmentation
Cactus augmentation
Cycle augmentation | CommonCrawl |
Methyltransferase-like 3 gene (METTL3) expression and prognostic impact in acute myeloid leukemia patients
Reham Mohamed Nagy1,
Amal Abd El Hamid Mohamed1,
Rasha Abd El-Rahman El-Gamal1,
Shereen Abdel Monem Ibrahim1 &
Shaimaa Abdelmalik Pessar ORCID: orcid.org/0000-0002-8947-23241
DNA methylation is involved in the pathogenesis of acute myeloid leukemia (AML). N6-methyladenosine (m6A) modification of mRNA, mediated by methyltransferase-like 3 (METTL3), is one of the well-identified mRNA modifications associated with the pathogenesis of AML. High levels of METTL3 mRNA are detected in AML cells, making METTL3 a potential therapeutic target in AML. This preliminary study aimed to measure the METTL3 mRNA expression level in de novo AML patients and to correlate it with clinicopathological, laboratory and prognostic markers. METTL3 expression was analyzed by quantitative reverse transcription polymerase chain reaction in 40 newly diagnosed AML adults and was re-measured in the 2nd month of chemotherapy. Patients were followed up for periods of up to 6 months post-induction therapy.
METTL3 expression was found to be significantly upregulated in AML patients compared to control subjects (p < 0.001). METTL3 was significantly overexpressed in non-responders compared to responders (p < 0.001). A cutoff value was assigned for normalized METTL3 values to categorize AML patients according to response to therapy. A statistically significant association was observed between a high pretreatment normalized METTL3 gene level and failure to attain complete remission at the 2nd, 4th and 6th month following therapy (p = 0.01, 0.02 and 0.003, respectively). However, no significant correlation was found between the pretreatment normalized METTL3 gene level and event free survival or clinicopathological prognostic factors.
METTL3 is overexpressed in AML patients and is associated with an adverse prognostic effect and failure to attain hematological remission within 6 months post-induction therapy.
Acute myeloid leukemia (AML) is one of the most prevalent hematological malignancies in adults. It occurs due to clonal expansion of undifferentiated myeloid precursors in the bone marrow, which leads to a defect in normal hematopoiesis [1].
The pathogenesis and clinical outcome of AML are based on different genetic mutations and epigenetic dysregulations [2]. Epigenetic modifiers refer to regulators of gene expression that do not alter the DNA coding sequence. While almost 70–90% of the genomic DNA is estimated to be transcribed, less than 2% of the genomic DNA is translated into proteins [3]. This implicates the leading role of non-coding RNAs (ncRNAs) in human cell development and survival. Epigenetic dysregulations actively participate in the pathogenesis of major hematological malignancies, including AML [4]. DNA methylation and histone tail modifications are major epigenetic mechanisms that regulate the physiologic process of cell differentiation; however, their functional aberrations lead to silencing of critical genes and the development of AML [5]. METTL3, the m6A-forming enzyme, is one of the most prevalent regulators of mRNA, and the modification it deposits occurs regularly in approximately 20% of human cellular mRNAs [6]. METTL3 stimulates translation of certain mRNAs, including epidermal growth factor receptor (EGFR) and the Hippo pathway effector TAZ, in human tumor cells; METTL3 associates with the reporter mRNA and promotes translation in the cytoplasm. Hence, depletion of METTL3 inhibits translation of the affected genes [7]. m6A is an internal RNA modification in both coding and non-coding RNAs that is catalyzed by the METTL3–METTL14 methyltransferase complex [8]. The specific role of these enzymes in leukemia is still largely unknown. However, compared with normal hematopoietic cells, high levels of METTL3 mRNA and protein are detected in AML cells, which is related to abnormal cell differentiation and the development of myeloid hematological malignancies. Furthermore, depletion of METTL3 induces cell differentiation and apoptosis and delays progression of leukemia [6, 8]. This effect is primarily mediated by phosphorylated AKT, whose levels are increased as a consequence of METTL3 depletion [9, 10]. As a result, a new rationale has emerged that targets the writers, erasers and readers of the m6A modification, thus representing a potential targeted therapy for several malignancies, including AML. Inhibition of the 2-oxoglutarate (2OG) and iron-dependent oxygenases (e.g., ALKBH5 and FTO), which belong to the 2OG-dependent nucleic acid oxygenase (NAOX) family and mediate demethylation of the m6A modification on RNA, has been discussed as a promising targeted therapy for AML [11].
This work represents a preliminary study that measures the METTL3 mRNA expression level by quantitative reverse transcription polymerase chain reaction (qRT-PCR) in newly diagnosed adult AML patients and non-leukemic control subjects. It aims to investigate the role of METTL3 in AML pathogenesis and to correlate its levels with clinicopathological, laboratory and prognostic markers.
A prospective, case–control study was carried out on 40 newly diagnosed AML patients recruited during the period from May 2019 to May 2020. In addition, 20 age- and sex-matched adult controls, free from any hematological or solid malignancy, were also included. The diagnosis of AML was established according to the updated 2016 World Health Organization (WHO) AML diagnostic criteria, based on morphology, immunophenotyping (IPT) and cytogenetic analysis.
A written informed consent was obtained from all enrolled patients. The approval of study was taken from the institutional Ethics Committee of Ain Shams University with approval No. FWA 000017585.
Two milliliters of venous blood were collected under complete aseptic conditions from each patient into a dipotassium ethylene diamine tetra-acetic acid (K2-EDTA) tube at a concentration of 1.2 mg/ml, for complete blood count (CBC) testing and preparation of Leishman-stained films. Four milliliters of bone marrow (BM) were aspirated and divided as follows: the initial 0.5 ml for Leishman-stained smears, 1 ml into a heparin tube for cytogenetic fluorescence in situ hybridization (FISH) analysis, and 3 ml into two EDTA tubes for IPT and quantitative reverse transcription polymerase chain reaction (qRT-PCR). These tests were performed initially for the diagnosis of the patients and at scheduled intervals for follow-up. For optimal results, blood samples were processed within 2–3 h of collection and no stored samples were used.
All patients were subjected to full history taking, clinical examination, CBC testing using Sysmex XN-1000 (Sysmex Europe, GmbH) with examination of Leishman-stained peripheral blood (PB) films, BM aspiration with examination of Leishman-stained smears. IPT (for patients only) was carried on BM blasts/blast equivalent cells using a standard panel of monoclonal antibodies by 6-color Navios flow cytometer (Coulter, Electronics, Hialeah, FL, USA). Conventional karyotyping and FISH were performed in selected cases. qRT-PCR was done for all enrolled subjects (40 AML patients before starting chemotherapy and 20 controls) to detect METTL3 mRNA expression level using; TaqMan Gene Expression assays (FAM-MGB) Hs00219820_m1 METTL3.
AML patients received an induction therapy regimen consisting of Adriamycin (25 mg/m2, days 1–3) and cytarabine (100 mg/m2, every 12 h, days 1–7); however, patients with PML/RARA-positive acute promyelocytic leukemia (APL) received the PETHEMA LPA 2005 protocol instead. The patients were followed up at day 28 and at the 2nd, 4th and 6th month post-induction, and resistant cases received re-induction with FLAG-Adria chemotherapy (fludarabine, high-dose cytarabine, filgrastim and Adriamycin). The morphological remission of AML patients was assessed based on their BM blast counts at day 28 and at the 2nd, 4th and 6th month (till either the end of the study or the last contact with them). Accordingly, patients were classified as responders (BM blasts ≤ 5% at day 28 of chemotherapy) or non-responders (BM blasts > 5% at day 28). Moreover, during follow-up, between the 2nd and 4th month of chemotherapy, the METTL3 gene expression level was reanalyzed in 15 patients who exhibited higher gene expression before starting treatment, to assess the effect of therapy on expression levels.
METTL3 expression level measurement by qRT-PCR
Total ribonucleic acid (RNA) was isolated by using the "QIAamp RNA Blood Mini Kit" (Qiagen, Hilden, Germany) following manufacturer instructions. A specialized high-salt buffering system allows RNA species longer than 200 bases to bind to the QIAamp membrane. During the QIAamp procedure for purification of RNA, erythrocytes are selectively lysed and leukocytes are recovered by centrifugation. The leukocytes are then lysed using highly denaturing conditions that immediately inactivate RNases, allowing the isolation of intact RNA. Homogenization of the lysate is done by a brief centrifugation through a QIAshredder spin column. Ethanol is added to adjust binding conditions and the sample is applied to the QIAamp spin column. RNA is bound to the silica membrane during a brief centrifugation step. Contaminants are washed away, and total RNA is eluted in 30 µl or more of RNase-free water for direct use.
mRNA of METTL3 was reversibly transcribed into complementary deoxyribonucleic acid (cDNA) using QuantiTect® Reverse Transcription Kit (Qiagen, Hilden, Germany). METTL3 gene expression level was amplified from mRNA using METTL3 TaqMan™ Gene Expression Assay, Thermo Fisher, cat. No: (4331182) with primer sequence (Forward 5′-CAAGCTGCACTTCAGACGAA-3′, Reverse5′ GCTTGGCGTGTGGTCTTT-3′) and Beta-Actin as housekeeping gene, which serves as internal control for cDNA quality.
As duplex real-time PCR requires the simultaneous detection of different fluorescent reporter dyes whose fluorescence spectra exhibit minimal spectral overlap, the two reporter dyes labeling the 5′ ends of the probes were FAM™ (for the METTL3 target gene, 14q11.2) and VIC (for the Beta-actin reference gene). Both TaqMan probes were labeled with a non-fluorescent quencher dye at the 3′ end.
Real-time PCR was performed with real-time cycler (Applied Biosystems StepOne; Applied Biosystems by Life Technologies™, USA). The PCR reaction mix was prepared for a final volume of 20 μl per well reaction volume using the following: 4 μl cDNA, 10 μl 2 × TaqMan Gene expression master mix, 1 μl of 20 × TaqMan Gene expression primer assay, and de-ionized water up to 5 μl for each sample cup.
qRT-PCR was performed at 50 °C for 2 min, 95 °C for 10 min, followed by 40 cycles at 95 °C for 15 s and 60 °C for 1 min. Results were analyzed, and the differences of expression level for the target gene (METTL3) were calculated using the ΔΔCT method for relative quantitation.
$$\Delta CT(\text{sample}) = CT_{METTL3}(\text{sample}) - CT_{\beta\text{-actin}}(\text{sample})$$
$$\Delta CT(\text{control}) = CT_{METTL3}(\text{control}) - CT_{\beta\text{-actin}}(\text{control})$$
Next, the ΔΔCT value for each sample was determined as:
$$\Delta\Delta CT = \Delta CT(\text{sample}) - \Delta CT(\text{control}).$$
Finally, the normalized level of target gene expression was calculated using the formula \(2^{-\Delta\Delta CT}\) (Fig. 1).
AML cases with high normalized METTL3 expression (high \(2^{-\Delta\Delta CT}\) value): the left side showing FAM™ (METTL3 target gene CT), the right side showing VIC (Beta-actin reference gene CT)
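As an illustration of the relative quantitation described above, the short Python sketch below computes a normalized expression value from raw CT readings. The numerical CT values are hypothetical and chosen only to show the arithmetic; they are not taken from the study data.

```python
def relative_expression(ct_target_sample, ct_ref_sample,
                        ct_target_control, ct_ref_control):
    """Livak 2^-ddCT method: normalized METTL3 expression of a sample
    relative to the control (calibrator)."""
    d_ct_sample = ct_target_sample - ct_ref_sample      # dCT(sample)
    d_ct_control = ct_target_control - ct_ref_control   # dCT(control)
    dd_ct = d_ct_sample - d_ct_control                  # ddCT
    return 2.0 ** (-dd_ct)                              # fold change vs control

# Hypothetical CT values (METTL3 vs Beta-actin) for one patient and one control
print(relative_expression(24.1, 18.3, 27.6, 18.9))      # values > 1 indicate overexpression
```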
The data were analyzed using the Statistical Package for the Social Sciences, version 20.0 (SPSS Inc., Chicago, Illinois, USA). Quantitative data with parametric distribution were expressed as mean ± standard deviation (SD), while those with nonparametric distribution were expressed as median and interquartile range (IQR). Qualitative data were expressed as frequency and percentage. For quantitative variables, the independent t test was used in cases of two independent groups with normally distributed data, while the Mann–Whitney U test was used in cases of two independent groups with non-normally distributed data. The Chi-square (χ²) test of significance was used to compare proportions between qualitative parameters. Spearman's rank correlation coefficient (rs) was used to assess the degree of association between two sets of variables if one or both were skewed. Kaplan–Meier survival analysis was performed to examine the distribution of time-to-event variables. A ROC curve was constructed to evaluate the prognostic performance of METTL3 gene expression and assign a cutoff value that would best distinguish between different groups. The confidence interval was set to 95% and the accepted margin of error was set to 5%. Accordingly, p values were interpreted as follows: p ≤ 0.05 significant, p ≤ 0.001 highly significant and p > 0.05 non-significant.
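As a brief illustration of the tests listed above, the following Python sketch shows how the main comparisons could be run with SciPy. The input arrays and the 2×2 table are invented placeholders, not the study data, and SciPy is assumed here only as a convenient tool.

```python
import numpy as np
from scipy import stats

# Hypothetical normalized METTL3 levels (2^-ddCT) in two independent groups
responders = np.array([2.3, 1.9, 2.6, 2.1, 2.5])
non_responders = np.array([9.6, 7.7, 14.6, 11.2, 8.4])

# Mann-Whitney U test for two groups with non-normally distributed data
u_stat, p_mw = stats.mannwhitneyu(responders, non_responders, alternative="two-sided")

# Chi-square test on a 2x2 table of qualitative outcomes
# (rows: high/low expression, columns: CR failed / CR maintained)
chi2, p_chi2, dof, _ = stats.chi2_contingency(np.array([[8, 6], [1, 14]]))

# Spearman rank correlation between expression level and BM blast percentage
rho, p_sp = stats.spearmanr(non_responders, [35, 28, 60, 42, 30])

print(p_mw, p_chi2, rho, p_sp)
```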
Our study included 40 newly diagnosed AML patients (22 males and 18 females with mean age of 41.93 ± 17.45 years and male/female ratio 1.2). Out of the 40 patients, 32 (80%) had hepatosplenomegaly (HSM). Twenty age- and sex-matched non-leukemic control subjects were also included in all laboratory assessments. Laboratory characteristics of the studied patients are illustrated in Table 1.
Table 1 Laboratory characteristics of the studied patients
Morphological remission was assessed for all patients at day 28 to evaluate early response to chemotherapy. Moreover, follow-up of BM blasts was done for available AML patients at the 2nd, 4th and 6th month of chemotherapy to verify maintenance of complete remission (CR) (Table 2).
Table 2 Assessment of morphological remission and maintenance of complete remission (CR)
At baseline, the pretreatment median METTL3 gene expression level was significantly higher in the patient group compared with the control group (6.95 vs 1.18; p < 0.001) (Table 3).
Table 3 Comparison between AML cases vs. control group regarding pretreatment METTL3 gene expression level at diagnosis
To unveil the impact of METTL3 gene expression on prognosis, AML patients were divided into 2 prognostic groups depending on prognostic factors adopted by the American Cancer Society (2021 guidelines), namely total leucocyte count (TLC), age and cytogenetic abnormalities.
Most of the AML patients in our study exhibited good prognostic criteria; 32/40 (80%) were younger than 60 years and had a TLC < 100 × 10⁹/L. Regarding cytogenetic abnormalities, two of 19 studied cases were positive for t(8;21), five of ten studied cases were positive for t(15;17) and one of four cases was positive for inv(16). Regarding the bad prognostic criteria, 8/40 (20%) were older than 60 years of age, of whom 5 patients also had a high TLC (> 100 × 10⁹/L); unfavorable cytogenetics (11q23 and t(9;22)) were not detected in any case. METTL3 gene expression was studied in each group; however, no statistically significant difference in METTL3 gene expression level was found between the two prognostic groups.
AML patients were classified into responders 15/40 (37.5%) and non-responders 25/40 (62.5%) according to morphological assessment of BM at day 28 post-therapy (Table 4). To evaluate the impact of METTL3 gene expression on achievement of hematological remission, METTL3 gene expression was studied among these 2 subgroups and responders revealed low normalized METTL3 gene expression level (median 2.28; IQR 1.87–2.58), while non-responders exhibited higher gene expression level (median 9.58; IQR 7.7–14.6), and this difference was statistically highly significant (p < 0.001) (Table 4).
Table 4 Comparison between the two AML subgroups (responders and non-responders) according to their pretreatment METTL3 gene expression level
The prognostic performance of METTL3 gene expression was assessed using ROC curve analysis to obtain the best cutoff for predicting a poor outcome (failure of remission or death). A cutoff value of ≥ 4, with an area under the curve (AUC) of 0.980, was found to have a sensitivity of 95.8%, specificity of 87.5%, PPV of 92%, NPV of 93.3% and a diagnostic accuracy of 98% (Fig. 2).
Receiver operating characteristic (ROC) curve used to detect the best cutoff value of METTL3 gene expression level predicting response to chemotherapy
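The cutoff selection can be sketched with scikit-learn as below. The outcome labels and expression values are hypothetical, and Youden's J statistic is used here as one common criterion for choosing the operating point; the article does not state which criterion was actually applied.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical data: 1 = poor outcome (failure of remission or death), 0 = good outcome
outcome = np.array([0, 0, 0, 0, 1, 1, 1, 1, 1, 1])
mettl3 = np.array([1.2, 2.3, 2.6, 3.1, 4.2, 6.9, 7.7, 9.6, 11.0, 14.6])

fpr, tpr, thresholds = roc_curve(outcome, mettl3)
auc = roc_auc_score(outcome, mettl3)

j = tpr - fpr                       # Youden's J = sensitivity + specificity - 1
best = int(np.argmax(j))
print(f"AUC = {auc:.3f}, cutoff ~ {thresholds[best]:.2f}, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```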
The AML subgroups with high versus low normalized METTL3 gene expression were compared regarding demographic and clinical data and hematological parameters at diagnosis; the findings are illustrated in Table 5.
Table 5 Comparison between AML subgroups regarding demographic, clinical data and hematological parameters at diagnosis
Pretreatment METTL3 gene expression level was statistically significantly higher in AML cases with hepatosplenomegaly (HSM) (p = 0.014). There was no statistically significant difference in METTL3 gene expression level regarding laboratory parameters.
The pretreatment METTL3-based AML subgroups were compared regarding hematological remission, as evidenced by BM blast percentage, in follow-up samples from day 28 of chemotherapy till the end of the 6th month of chemotherapy; the results are shown in Table 6. A statistically significant association was observed between a high pretreatment normalized METTL3 gene expression level and failure to maintain CR at the 2nd, 4th and 6th month of follow-up (p = 0.01, 0.02 and 0.003, respectively). Also, among cases with a high gene expression level, one case died at day 28 and another two cases died at the 2nd month (Table 6 and Fig. 3).
Table 6 Comparison between pretreatment METTL3-based AML subgroups regarding BM blasts in follow-up samples from day 28 of chemotherapy till the end of the 6th month of chemotherapy
Bar chart showing difference in BM blast counts during follow-up stages in both METTL3-based AML subtypes
Among the 29 cases who could be followed till the end of the 6th month of chemotherapy, 8/14 patients who expressed a high pretreatment level of the METTL3 gene failed to achieve hematological remission (blast counts > 5%) by the end of the 6th month of chemotherapy, compared to only 1/15 patients who expressed a low level of METTL3 (p = 0.003).
Within the short-term follow-up, between the 2nd and 4th month of chemotherapy, the METTL3 gene expression level was reassessed (post-treatment) in 15 cases who expressed higher pretreatment METTL3 gene levels. The initial mean value of the pretreatment gene levels of those 15 patients was 14 and became 24.62 post-treatment (range 0.36–131.59). Post-treatment METTL3 gene level was elevated in 60% of the evaluated cases (9/15) with a mean value of 28.02, compared to 6/15 (40%) patients who revealed a decline in post-treatment METTL3 gene expression level with mean value 11.02.
The frequency of maintaining CR based on BM blast percentage in follow-up samples was assessed for both groups (Table 7, Fig. 4) and statistically significant association was found between the elevated normalized METTL3 gene expression level post-treatment and failure to maintain CR at 2nd month, 4th month and 6th month chemotherapy (p = 0.048, 0.015, 0.015, respectively). Moreover, post-treatment METTL3 gene expression level was positively correlated to BM blast percentage at 2nd month, 4th month and 6th month of chemotherapy, and this correlation was statistically highly significant (p ≤ 0.001, 0.016, 0.002, respectively) (Table 8, Fig. 5).
Table 7 Comparison between patients with elevated post-treatment METTL3 gene expression versus patients with lowered post-treatment gene levels regarding BM blast counts in follow-up samples
Bar chart showing the follow-up response in both AML subgroups with elevated and reduced gene expression level with treatment
Table 8 Correlation of METTL3 gene expression level to blast count during follow-up stages for assessment of early response to chemotherapy and maintenance of CR
Scatter plot between METTL3 gene expression level during follow-up and 2nd, 4th and 6th month BM blast counts %
A Kaplan–Meier survival curve was drawn to evaluate the impact of higher METTL3 expression on event free survival (EFS), yet the curve could not establish statistical significance due to the small number of deaths (only 3 cases). Interestingly, the curve revealed that all deaths occurred within the subgroup of AML patients with a high METTL3 expression level.
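A sketch of this survival comparison using the Python lifelines package is given below. The follow-up times and event indicators are invented for illustration only, and, as noted above, with so few deaths a log-rank comparison would be underpowered.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical event-free survival data in months (event = 1 means death/relapse observed)
time_high = np.array([1, 2, 2, 4, 6, 6, 6])    # high METTL3 subgroup
event_high = np.array([1, 1, 1, 0, 0, 0, 0])
time_low = np.array([6, 6, 6, 6, 6, 6])        # low METTL3 subgroup (all censored)
event_low = np.zeros(len(time_low), dtype=int)

kmf = KaplanMeierFitter()
kmf.fit(time_high, event_observed=event_high, label="high METTL3")
print(kmf.survival_function_)                  # estimated EFS curve for the high-expression group

result = logrank_test(time_high, time_low,
                      event_observed_A=event_high, event_observed_B=event_low)
print(result.p_value)
```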
In the past few years, the molecular characterization of AML has aroused researchers' interest in a better understanding of pathogenesis, prognosis prediction, treatment stratification, development of new targeted therapies and assistance in minimal residual disease (MRD) detection.
An increasing number of studies have focused on the crucial role of mRNA modifiers in the progression of leukemia. METTL3-mediated m6A is one of these mRNA modifications that have been associated with the pathogenesis of AML [2]. Methylation of mRNA nucleotides by METTL3 occurs regularly in approximately 20% of human cellular mRNAs. However, alteration of this methylation process in AML cells has been associated with abnormally high levels of METTL3 mRNA and protein and leads to marked alteration in cell differentiation and the development of myeloid hematological malignancies [6].
In this work, we aimed to measure METTL3 mRNA expression level by qRT-PCR in newly diagnosed AML cases and study this level in relation to clinical, laboratory and prognostic markers.
Forty newly diagnosed AML patients were enrolled in our study along with 20 age- and sex-matched control subjects. We found that METTL3 is significantly overexpressed in AML patients, with a median value of 6.95 compared to a median of 1.18 in the control group (p < 0.001). Some recent studies have demonstrated similar results, where overexpression of METTL3 was shown in AML cells compared to normal Hematopoietic Stem/Progenitor Cells (HSPCs) [6, 12].
Similarly, another study was done by Vu et al. [13] to assess the alteration of METTL3 mRNA expression in leukemia, in which they compared METTL3 mRNA expression in AML samples to other cancers based on The Cancer Genome Atlas database. They found that METTL3 mRNA expression is significantly higher in AML than in other cancer types (p < 0.00001). They further assessed the relative abundance of METTL3 in myeloid leukemia and examined both METTL3 mRNA and protein levels in multiple leukemia cell lines in comparison with primary HSPC cord blood-derived CD34+ cells. They found that both METTL3 mRNA and METTL3 protein were more abundant in AML cell lines. However, there was no significant difference in METTL3 expression across multiple subtypes of AML in the BloodSpot database [14].
In our study, we divided the 40 AML patients into 2 prognostic subgroups according to the American Cancer Society (2021) guidelines. We found that most of our AML patients exhibited good prognostic criteria. However, we did not find a significant association between any prognostic criterion and the METTL3 gene expression level.
In the same context, a cohort study was carried out on 191 AML patients and detected mutations of m6A regulatory genes in 2.6% (5/191) and variation in gene copy number in 10.5% (20/191) of patients. The authors studied whether mutations and copy number variations (CNVs) of m6A regulatory genes were associated with clinical and molecular features (older age > 60 years, white blood cell count above the median of 15,200/mm³, unfavorable cytogenetic risk and mutations of DNMT3A and TP53). They observed that mutations and/or CNVs of METTL3, METTL14, YTHDF1, YTHDF2, FTO and ALKBH5 as a group were significantly associated with poorer cytogenetic risk in AML (p < 0.0001). Additionally, they detected a marked increase in TP53 mutations (p < 0.0001). However, these mutations and/or CNVs were not associated with older age (> 60 years) or a white blood cell count above the median [15].
The pretreatment METTL3 gene expression level was significantly higher in AML cases with HSM (p = 0.014). There was no statistically significant relation between METTL3 gene expression level and any other parameter (age, gender, initial TLC, hemoglobin, platelet count or initial BM blast count).
Similarly, a study of METTL3 and METTL14 expression in 37 ALL patients investigated the relation between METTL3 and METTL14 expression and clinical features. The authors did not find an association between the expression levels of METTL3 and METTL14 and gender, age, initial TLC or blast percentage, indicating that these two genes may not be associated with tumor burden [16]. The association between METTL3 expression level and clinical data in AML patients needs further studies to be evaluated.
Another recent study of METTL3 expression level in solid tumors, carried out on 340 patients with oral squamous cell carcinoma, reported that a higher METTL3 expression level was significantly positively associated with advanced tumor stage, advanced clinical stage and lymph node metastasis, but no differences in other features, such as sex and age, were observed [17].
Our study revealed that no significant association was found between the higher METTL3 expression levels and specific AML subtypes. The same finding was reported by Vu et al. [13] during their study of METTL3 mRNA and protein levels in multiple leukemia cell lines. They found that METTL3 mRNA was more abundant in AML cell lines with no significant difference in METTL3 expression across multiple subtypes of AML.
As regards the cytogenetic abnormalities in our study, they were not statistically related to the METTL3 gene expression level. Contrarily, a recent study of the molecular function of m6A RNA methylation in cancer reported that METTL3 and METTL14 are highly expressed in AML cells carrying t(11q23), t(15;17) or t(8;21) and are down-regulated during myeloid differentiation [18]. This controversy may be attributed to the small number of cases exhibiting these abnormalities in our study.
Another study by Weng et al. [10] found that METTL14 is highly expressed in normal HSCs and AML and is down-regulated during myeloid differentiation. In particular, METTL14 was found to be overexpressed in AML cells carrying 11q23 alterations, t(15;17) or t(8;21). Analysis of the Cancer Genome Atlas (TCGA) data revealed that AML blast cells expressed higher mRNA levels of both METTL3 and METTL14 than most cancer types, and genetic alteration of those genes has significantly correlated with poorer prognosis [19].
The associations between m6A and genetic alterations in AML were studied by Paris et al. [9] who reported that m6A promotes the translation of c-MYC, BCL2 and PTEN mRNAs in the human AML cell line. Moreover, loss of METTL3 leads to increased levels of phosphorylated AKT, which contributes to the differentiation-promoting effects of METTL3 depletion. Overall, these results provide a rationale for the therapeutic targeting of METTL3 in myeloid leukemia. In the same context, the molecular function of WTAP, which is a novel oncogenic protein in myeloid leukemia that acts as regulatory subunit of the m6A methylation complex, was evaluated by a group of researchers. Their results revealed a lack of association between WTAP levels and particular cytogenetic abnormalities, but a significant correlation was detected between some specific molecular mutations such as NPM1 and FLT3-ITD, and WTAP expression [20]. WTAP is commonly upregulated in myeloid leukemia, but this upregulation alone is not enough to induce cell proliferation in the absence of a functioning METTL3 [21].
The patients enrolled in our preliminary study were de novo AML, with a mean age of 41.9 years and median TLC 36.05 × 109/L with the absence of unfavorable cytogenetic abnormalities (11q23 & t(9;22)). These data can collectively predict a good response to chemotherapy based on the updates of independent prognostic factors in AML [22]. However, during the initial stages of the follow-up, 23/40 (57.5%) failed to achieve CR with induction therapy and by the end of the 6th month chemotherapy, 9/29 (31.1%) failed to maintain their CR. This may suggest the adverse prognostic role of METTL3 expression in AML.
In the current study, the initial response to chemotherapy was assessed morphologically at day 28 post-therapy and accordingly, based on the response to chemotherapy at day 28 post-therapy, AML patients were classified into responders (15/40; 37.5%) and non-responders (25/40; 62.5%). To evaluate the impact of METTL3 gene expression on achievement of hematological remission, METTL3 gene expression was studied in these 2 subgroups and surprisingly, responders revealed low normalized METTL3 gene expression level (median 2.28; IQR 1.87–2.58) while non-responders exhibited higher median gene expression (median 9.58; IQR 7.7–14.6).
ROC curve analysis was used to evaluate the ability of the METTL3 gene expression level at diagnosis to anticipate the response of AML patients to chemotherapy. A cutoff value of 4 was selected as the discriminating point, with a sensitivity of 95.8%, specificity of 87.5%, PPV of 92%, NPV of 93.3% and a diagnostic accuracy of 98%. This cutoff value indicates that AML patients with high gene expression levels respond poorly to chemotherapy at day 28, whereas patients with low gene expression levels show a good response to induction therapy.
Our study detected a significant association between a higher pretreatment level of METTL3 gene expression in AML cases at the time of diagnosis and failure to maintain CR at the 2nd, 4th and 6th month of follow-up (p = 0.01, 0.02 and 0.003, respectively). On monitoring patients with a high gene expression level, we found that one case died at day 28 and another two cases died at the 2nd month. Out of the 14 patients who expressed a higher METTL3 level at diagnosis and could be followed till the end of the 6th month of chemotherapy, eight patients (57.1%) did not achieve hematological remission and showed persistently high blast counts, compared to only 1/15 (6.67%) patients who expressed a low level of METTL3.
In the present study, we reassessed the METTL3 gene expression level (between the 2nd and 4th month post-treatment) in 15 cases who expressed higher pretreatment gene levels. We intended to monitor the short-term effect of chemotherapy on the gene expression levels and determine possible correlations with patients' outcome. In comparison with the pretreatment levels, the gene level was found to increase after chemotherapy in 9/15 patients (60%). This elevation was associated with failure to maintain CR at the 2nd, 4th and 6th month of chemotherapy (p = 0.048, 0.015, 0.015, respectively). Moreover, 7/9 (77.8%) of AML cases with an elevated gene level post-treatment failed to maintain hematological remission till the end of the 6th month; in contrast, none of the six AML cases with a reduced gene expression level failed to maintain hematological remission. No related data from analogous studies are available in the literature. Nevertheless, these findings of gene level monitoring comply with the poor prognostic effect obtained when analyzing the pretreatment gene levels. In addition, it is important to emphasize that, despite the administration of chemotherapeutics, this gene apparently retains the ability to increase and exert its dismal prognostic effect.
Similarly, the poor prognosis of AML cases with m6A mutations was reported in a cohort study carried out on 191 AML patients, in which the authors found that mutation of any of the genes encoding m6A regulatory enzymes was associated with worse OS (p = 0.007) and EFS (p < 0.0001). Inferior OS and EFS were also evident in patients who had mutations and/or CNVs of these genes [15].
Recently, a therapeutic trial was done on small molecules that act as selective inhibitors of METTL3 in AML. Their anti-tumor effects were evaluated in patient-derived xenotransplantation experiments as well as transplantation experiments using an MLL-AF9-driven primary murine AML model. They reported that daily dosing of 30 mg/kg significantly inhibited AML expansion and reduced spleen weight compared to control, indicating that inhibition of METTL3 in vivo leads to strong anti-tumor effects in physiologically and clinically relevant models of AML [23].
Collectively, these studies highlight the prognostic role of METTL3 in malignant hematopoietic cells and will encourage further epigenetic studies of targeted therapies in AML. These upcoming studies will reveal new insights into the molecular mechanisms regulating normal and malignant hematopoiesis and offer better opportunities for AML patients to improve their clinical outcomes.
In conclusion, our results demonstrate an association between a high pretreatment gene expression level and a poor response to chemotherapy. In addition, patients with a further increase in gene expression during the course of the disease were more likely to fail to maintain hematological remission. Accordingly, an adverse prognostic impact of METTL3 expression on the outcome of adult AML patients can be concluded. However, since the small number of patients and the short follow-up time are two main limitations of this study, we strongly recommend larger studies with longer follow-up periods to verify the proposed role of METTL3 gene expression in the pathogenesis and prognosis of AML.
ALKBH5:
Alkylated DNA repair protein alkB homolog 5
AML:
Acute myeloid leukemia
BCL2:
B-cell lymphoma 2
BM:
Bone marrow
CBC:
Complete blood count
cDNA:
Complementary deoxyribonucleic acid
c-MYC:
Cellular myelocytomatosis
CR:
Complete remission
CT:
Cycle threshold
DNMT3A:
DNA (cytosine-5)-methyltransferase 3A
EFS:
Event free survival
EGFR:
Epidermal growth factor receptor
FAB:
French–American–British classification
FISH:
Fluorescence in situ hybridization
FLT3:
Fms-like tyrosine kinase-3
FTO:
Fat mass and obesity-associated protein
HGB:
Hemoglobin
HSM:
Hepatosplenomegaly
IPT:
Immunophenotyping
IQR:
Interquartile range
k2-EDTA:
Dipotassium ethylene diamine tetra-acetic acid
m6A:
N6-methyladenosine
METTL3:
Methyltransferase-like 3
METTL14:
Methyltransferase-like 14
mRNA:
Messenger ribonucleic acid
NAOX:
Nucleic acid oxygenase
ncRNAs:
Noncoding RNAs
NPM1:
Nucleophosmin-1
2OG:
2-Oxoglutarate
OS:
Overall survival
PLTs:
Platelets
PTEN:
Phosphatase and tensin homolog
P value:
Probability value
qRT-PCR:
Quantitative reverse transcription polymerase chain reaction
TLC:
Total leucocyte count
TP53:
Tumor protein P53
WTAP:
Wilms tumor 1-associating protein
YTHDF1:
YTH N6-methyladenosine RNA binding protein1
Döhner H, Weisdorf DJ, Bloomfield CD (2015) Acute myeloid leukemia. N Engl J Med 373:1136–1152
Han SH, Choe J (2020) Diverse molecular functions of m6A mRNA modification in cancer. Exp Mol Med 52:738–749
Hajjari M, Salavaty A (2015) HOTAIR: an oncogenic long non-coding RNA in different cancers. Cancer Biol Med 12(1):1–9
Liu Y, Cheng Z, Pang Y, Cui L, Qian T, Quan L, Zhao H, Shi J, Ke X, Fu L (2019) Role of microRNAs, circRNAs and long noncoding RNAs in acute myeloid. J Hematol Oncol 12(1):51
Wouters BJ, Delwel R (2016) Epigenetics and approaches to targeted epigenetic therapy in acute myeloid leukemia. Blood 127(1):42–52
Zeng C, Huang W, Li Y, Weng H (2020) Roles of METTL3 in cancer: mechanisms and therapeutic targeting. J Hematol Oncol 13:117
Wang X, Lu Z, Gomez A, Hon GC, Yue Y, Han D, Fu Y, Parisien M, Dai Q, Jia G et al (2014) N6-methyladenosine-dependent regulation of messenger RNA stability. Nature 505:117–120
Martin GH, Park CY (2018) Meddling with METTLs in normal and leukemia stem cells. Cell Stem Cell 22(2):139–141
Paris J, Morgan M, Campos J, Spencer GJ, Shmakova A, Ivanova I, Mapperley C, Lawson H, Wotherspoon DA, Sepulveda C et al (2019) Targeting the RNA m6A reader YTHDF2 selectively compromises cancer stem cells in acute myeloid Leukemia. Cell Stem Cell 25:137–148
Weng H, Huang H, Wu H, Qin X, Zhao BS, Dong L (2018) METTL14 inhibits hematopoietic stem/progenitor differentiation and promotes leukemogenesis via mRNA m(6)A modification. Cell Stem Cell 22:191–205
Niu Y, Wan A, Lin Z, Lu X, Wan G (2018) N6-methyladenosine modification: a novel pharmacological target for anti-cancer drug development. Acta Pharm Sin B 8:833–843
Deng X, Su R, Weng H, Huang H, Li Z, Chen J (2018) RNA N6-methyladenosine modification in cancers: current status and perspectives. Cell Res 28(5):507–517
Vu LP, Pickering BF, Cheng Y, Zaccara S, Nguyen D, Minuesa G, Chou T, Chow A, Saletore Y, MacKay M, Schulman J, Famulare C, Patel M, Klimek VM, Garrett-Bakelman FE, Melnick A, Carroll M, Mason CE, Jaffrey SR, Kharas MG (2017) The N6-methyladenosine (m6A)-forming enzyme METTL3 controls myeloid differentiation of normal hematopoietic and leukemia cells. Nat Med 23(11):1369–1376
Bagger FO, Sasivarevic D, Sohi SH, Laursen LG, Pundhir S, Sønderby CK, Winther O, Rapin N, Porse BT (2016) BloodSpot: a database of gene expression profiles and transcriptional programs for healthy and malignant haematopoiesis. Nucleic Acids Res 44:D917–D924
Kwok CT, Marshall AD, Rasko JE, Wong JJ (2017) Genetic alterations of m6A regulators predict poorer survival in acute myeloid leukemia. J Hematol Oncol 10(1):39
Sun C, Chang L, Liu C, Chen X, Zhu X (2019) The study of METTL3 and METTL14 expressions in childhood ETV6/RUNX1-positive acute lymphoblastic leukemia. Mol Genet Genom Med 7(10):e00933
Liu L, Wu Y, Li Q, Liang J, He Q, Zhao L, Chen J, Cheng M, Huang Z, Ren H, Chen J, Peng L, Gao F, Chen D, Wang A (2020) METTL3 promotes tumorigenesis and metastasis through BMI1 m6A methylation in oral squamous cell carcinoma. Mol Ther 28(10):2177–2190
Pan Y, Ma P, Liu Y, Li W, Shu Y (2018) Multiple functions of m6A RNA methylation in cancer. J Hematol Oncol 11(1):48
Barbieri I, Tzelepis K, Pandolfini L, Shi J, Millán-Zambrano G, Robson SC, Aspris D, Migliori V, Bannister AJ, Han N, Braekeleer ED, Ponstingl H, Hendrick A, Vakoc CR, Vassiliou GS, Kouzarides T (2017) Promoter-bound METTL3 maintains myeloid leukaemia by m6A-dependent translation control. Nature 552(7683):126–131
Bansal H, Yihua Q, Iyer SP, Ganapathy S, Proia D, Penalva LO, Uren PJ, Suresh U, Carew JS, Karnad AB et al (2014) WTAP is a novel oncogenic protein in acute myeloid leukemia. Leukemia 28:1171–1174
Sorci M, Ianniello Z, Cruciani S, Larivera S, Ginistrelli LC, Capuano E, Marchioni M, Fazi F, Fatica A (2018) METTL3 regulates WTAP protein homeostasis. Cell Death Dis 9:796
Sekeres MA, Guyatt G, Abel G, Alibhai S, Altman JK, Buckstein R, Choe H, Desai P, Erba H, Hourigan CS, LeBlanc TW, Litzow M, MacEachern J, Michaelis LC, Mukherjee S, O'Dwyer K, Rosko A, Stone R, Agarwal A, Colunga-Lozano LE, Brignardello-Petersen R (2020) American Society of Hematology 2020 guidelines for treating newly diagnosed acute myeloid leukemia in older adults. Blood Adv 4(15):3528–3549
Tzelepis K, Braekeleer ED, Yankova E, Rak J, Aspris D, Domingues AF, Fosbeary R, Hendrick A, Leggate D, Ofir-Rosenfeld Y, Sapetschnig A, Pina C, Albertella M, Blackaby W, Rausch O, Vassiliou GS, Kouzarides T (2019) Pharmacological inhibition of the RNA m6a writer METTL3 as a novel therapeutic strategy for acute myeloid leukemia. Blood 134:403–403
This research did not receive any specific grant from funding agencies in the public, commercial or not-for-profit sectors.
Clinical Pathology Department, Faculty of Medicine, Ain Shams University, Abasseya, Cairo, 11566, Egypt
Reham Mohamed Nagy, Amal Abd El Hamid Mohamed, Rasha Abd El-Rahman El-Gamal, Shereen Abdel Monem Ibrahim & Shaimaa Abdelmalik Pessar
All authors contributed to data interpretation and manuscript writing. AM conceptualized, designed the study and supervised laboratory analysis. RE, SP and SI contributed to study design and data interpretation. SP contributed to the conceptualization and the writing of the drafted manuscript. RN selected cases, collected clinical data and performed technical work. All authors read and approved the final manuscript.
Correspondence to Shaimaa Abdelmalik Pessar.
A written informed consent was obtained from all enrolled patients. The approval of study was taken from the institutional Ethics Committee of Ain Shams University with approval No. FWA 000017585 and was in accordance with the Declaration of Helsinki.
Nagy, R.M., Mohamed, A.A.E.H., El-Gamal, R.A.ER. et al. Methyltransferase-like 3 gene (METTL3) expression and prognostic impact in acute myeloid leukemia patients. Egypt J Med Hum Genet 23, 34 (2022). https://doi.org/10.1186/s43042-022-00242-8
METTL3 | CommonCrawl |
Optimal treatment and stochastic modeling of heterogeneous tumors
Hamidreza Badri1 &
Kevin Leder1
In this work we review past articles that have mathematically studied cancer heterogeneity and the impact of this heterogeneity on the structure of optimal therapy. We look at past works on modeling how heterogeneous tumors respond to radiotherapy, and take a particularly close look at how the optimal radiotherapy schedule is modified by the presence of heterogeneity. In addition, we review past works on the study of optimal chemotherapy when dealing with heterogeneous tumors.
Reviewers: This article was reviewed by Thomas McDonald, David Axelrod, and Leonid Hanin.
In recent years there have been many exciting studies that have observed the high levels of diversity present within tumors, (e.g., [42, 74, 82]). In addition to this genomic diversity it is possible for intra-tumor diversity to show up through cell cycle asynchrony or variability in microenviroment. This intra-tumor diversity has the potential to alter the evolutionary trajectory of the tumor cell population under therapy. An important question this raises is how we design optimal treatment strategies when dealing with heterogeneous populations. For example, if we have multiple therapies available there might be tumor subpopulations that respond better to certain therapies. The question then becomes how we optimally administer the various therapies. Addressing this question requires the use of mathematical models to understand the heterogeneity present, and in addition the development of optimization techniques to treat heterogeneous tumors.
In this work, we review past literature that has studied the question of optimal treatment of heterogeneous tumors, as well as stochastic modeling of heterogeneous tumors. The primary focus of the review is on the structure of optimal radiotherapy fractionation schedules when incorporating intra-tumor heterogeneity. A reason for focusing on the radiotherapy setting is that, in simple models of radiotherapy there are well established results for the structure of optimal radiotherapy schedules; see e.g. Badri et al. [5]. Therefore, it is possible to investigate the changes in the optimal schedule as a result of incorporating tumor heterogeneity.
We also review past literature on stochastic modeling and stochastic optimization for the treatment of heterogeneous tumors with chemotherapy or targeted therapy. In this section we look at works that considered stochastic models of heterogeneity in response to therapy, looking at works on both stochastic optimization and stochastic analysis.
Modeling and optimization in radiotherapy
In this section we will review previous works that studied mathematical modeling and optimization of radiotherapy for heterogeneous tumor cell populations using the Linear-Quadratic (LQ) model and its various extensions based on timing effects, the cell cycle, hypoxia and cancer stem cells.
Background on the linear-quadratic model
The LQ equation is widely used to describe the effects of ionizing radiation on normal and neoplastic tissue (for a review see [73]). The basic model states that the fraction of cells that survives a radiation dose of d Gy is given by exp(−αd − βd²), where the radiosensitivity parameters, α and β, account for non-repairable lesions to DNA and the lethal mis-repair events occurring in the repair process of DNA double strand breaks (DSB), respectively [49, 71]. The initial model has been extended to include the four 'Rs' of radiobiology: repopulation of the tumor cells during the treatment period by surviving tumor cells, reoxygenation of hypoxic cells, repair of radiation-induced damage between fractions and redistribution of cells in the cell cycle [96]. These four phenomena are often extended by a fifth 'R', intrinsic radiosensitivity, defined as the considerable variability between different cell types [85]. These are important determinants of local tumor control after fractionated irradiation, and significantly change the optimal fractionation schemes. In this section, we review several studies that model tumor heterogeneity in the radiation fractionation problem and discuss how the conventional optimal fractionation protocols change when considering intra-tumor heterogeneity.
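As a small numerical illustration of the basic LQ survival expression, the sketch below evaluates the overall surviving fraction of a fractionated schedule. The parameter values are generic textbook-style numbers, not taken from any of the cited studies, and the calculation ignores the four/five 'Rs' discussed above.

```python
import numpy as np

def surviving_fraction(d, alpha, beta):
    """LQ model: fraction of cells surviving a single acute dose d (Gy)."""
    return np.exp(-alpha * d - beta * d ** 2)

alpha, beta = 0.3, 0.03             # Gy^-1 and Gy^-2, i.e. alpha/beta = 10 Gy
schedule = [2.0] * 30               # a conventional 30 x 2 Gy schedule
sf_total = np.prod([surviving_fraction(d, alpha, beta) for d in schedule])
print(sf_total)                     # overall surviving fraction after all fractions
```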
Despite a significant history of predicting dose–response curves with the LQ model [13], there is a significant amount of debate as to whether the LQ model is appropriate for measuring high dose-per-fraction effects in stereotactic high-dose radiotherapy (e.g., see [53, 81]). The application of the LQ model is thought to underestimate tumor control at high doses (larger than 10 Gy). Several models have been proposed for improving the prediction of high-dose survival curves, e.g. see the models developed by Hanin [51, 52] and Hanin and Zaider [53] or the review by Brown et al. [15], which discusses the validity of the LQ model for high-dose irradiation of tumors in detail. Since the LQ model is the most widely used model for quantitative predictions of dose/fractionation dependencies in radiotherapy and most models for heterogeneous tumors have been developed based on the same principal structure as the LQ model, we will mainly focus on the LQ model and its extensions in this study.
There are two widely used approaches for delivering radiotherapy: fractionated and continuous radiation. Assuming a sufficiently large inter-fraction time in fractionated radiation, the damage induced in a cell by an acute dose of radiation either causes cell death or is completely repaired before the next exposure. Therefore this model leads to memoryless kinetics that can be captured using Markov processes. However, this is not the case for continuous irradiation, where a longer biological memory of the irradiated cells is stored. See the work of Hanin et al. [57] and the experimental studies cited therein for a more detailed discussion of how the processes of damage repair/misrepair, cell proliferation and cycling can be modeled by a non-Markovian model. The remainder of this work will largely focus on models of fractionated radiotherapy.
An important problem in radiotherapy is to find the best total treatment size and division of the total dose into fractional doses that maximally reduces tumor size while imposing the least amount of damage on surrounding normal tissues (called organs-at-risk or OAR). This problem can be cast as an optimization question and is commonly referred to as the 'fractionation problem'. A critical constraint to enforce when locating an optimal fractionated schedule is a sufficiently low level of normal tissue toxicity. In order to properly model normal tissue damage, two simultaneous constraints should be imposed: toxicity on early-responding tissue, such as skin, and health effects on late-responding tissue, such as neurons. Usually the concept of biologically equivalent dose (BED), originally motivated by the LQ model, is implemented in clinical practice to measure the biological damage caused by a radiation fractionation scheme in a specified structure. More specifically, the BED for a fractionation regimen with N treatment fractions in which radiation dose d_i is administered in fraction i (i = 1,…, N) is given by
$$ BED = \sum_{i=1}^{N} d_i\left(1+\frac{d_i}{[\alpha/\beta]}\right) $$
where [α/β] is a tissue-specific radio-sensitivity parameter. The normal tissue toxicity constraints in the radiotherapy fractionation problem are mathematically modeled by insisting that BED levels for the various OAR stay within prescribed levels [5] or by keeping the total number of functional proliferating normal cells above a required threshold [54]. These constraints can be satisfied by keeping the total dose, fractional dose or dose rate in continuous irradiation within acceptable levels.
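The BED formula is easy to evaluate directly. The sketch below compares a conventional and a hypo-fractionated schedule delivering the same physical dose, using illustrative α/β values (10 Gy for tumor, 3 Gy for a late-responding tissue) and ignoring sparing factors; the numbers are for illustration only.

```python
def bed(doses, alpha_beta):
    """Biologically effective dose of a fractionation scheme under the LQ model."""
    return sum(d * (1.0 + d / alpha_beta) for d in doses)

conventional = [2.0] * 30   # 30 fractions of 2 Gy (60 Gy total)
hypofract = [4.0] * 15      # 15 fractions of 4 Gy (60 Gy total)

for name, sched in [("30 x 2 Gy", conventional), ("15 x 4 Gy", hypofract)]:
    print(name,
          "tumor BED (alpha/beta = 10 Gy):", bed(sched, 10.0),
          "late-tissue BED (alpha/beta = 3 Gy):", bed(sched, 3.0))
```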
Two possible solutions to the fractionation problem are hyper-fractionated and hypo-fractionated schedules. In hyper-fractionated schedules small fraction sizes are delivered over a long period of time, whereas in hypo-fractionated schedules large fraction sizes are administered during a short period of radiation delivery. If we maximize tumor control probability (TCP) at the conclusion of treatment, it has been observed that whether hyper- or hypo-fractionation is optimal depends on the radio-sensitivity parameters of the normal and cancerous tissue [5, 70, 92]. More specifically, if the tumor α/β ratio is smaller than the effective α/β ratio for all normal tissues (defined as (α_i/β_i)/γ_i, where α_i/β_i and γ_i denote the radio-sensitivity parameter and sparing factor, respectively, of the ith OAR), then a single-dosage solution (hypo-fractionated schedule) is optimal, whereas a multiple-dosage solution with equal doses (uniform schedule) is optimal otherwise (hyper-fractionated schedule) [5, 6].
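A minimal sketch of this decision rule is given below; the OAR radiosensitivities and sparing factors are hypothetical and serve only to illustrate the comparison stated above.

```python
# Illustrative sketch of the hypo- vs hyper-fractionation criterion described above.
# All parameter values are hypothetical.

tumor_alpha_beta = 10.0          # tumor [alpha/beta] in Gy

# Each OAR is described by its own alpha/beta ratio and sparing factor gamma.
oars = {
    "late_responding_oar": {"alpha_beta": 2.0, "gamma": 0.4},
    "early_responding_oar": {"alpha_beta": 10.0, "gamma": 0.7},
}

# Effective alpha/beta of an OAR, (alpha_i/beta_i)/gamma_i.
effective = {name: p["alpha_beta"] / p["gamma"] for name, p in oars.items()}

if all(tumor_alpha_beta < v for v in effective.values()):
    print("single-dose (hypo-fractionated) schedule is optimal")
else:
    print("equal-dose (uniform / hyper-fractionated) schedule is optimal")
```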
These results are based on the assumption that irradiated cell survival curves are explained by the LQ model, so that TCP is invariant under rearrangement of the fractional doses. However, considering more complicated models [55], or different objectives such as minimizing metastatic risk [7, 8] instead of maximizing TCP, may result in the optimality of non-standard fractionation schedules. These schedules are formed using the front-loading principle: administering the maximum possible dose as soon as possible. Moreover, as a result of these emerging alternative models and objectives, other factors, such as the time point at which the performance criterion is evaluated, may play an important role in the structure of optimal schedules; see, e.g., Zaider and Hanin [102] and Badri et al. [7].
Intra-tumor heterogeneity
The uncertainties in radiotherapy treatment can be categorized into two groups: inter-patient variability and intra-tumor heterogeneity. Inter-patient variability stems from heterogeneity in patient-specific variables such as the sensitivity of normal tissues and tumor to radiation (the α/β ratio), the growth rate of the tumor, or the healing kinetics of normal tissues. Several studies have addressed these uncertainties using different techniques. Badri et al. [6] proposed a stochastic optimization formulation to incorporate inter-patient variability in tumor and normal tissue radiosensitivity parameters (α and β) and the sparing factor of the OAR into the scheduling optimization problem. Hanin and Zaider [54] developed a mechanistic approach that models post-irradiation normal tissue toxicity while accounting for inter-patient variation of kinetic parameters. On the other hand, to improve the efficacy of radiation therapy, it is necessary to study the role of intra-tumor heterogeneity, since it significantly changes the tumor response curves [34, 58]. The range of cell sensitivity arises from inherent genetic and epigenetic differences among the tumor cells and from temporal variations caused by asynchronous cell cycle phases and variable microenvironmental conditions during therapy. The focus of the present work is to review studies that model intra-tumor heterogeneity and, where possible, to present novel optimization problems that arise from these models.
In [56], Hanin et al. studied the effect of radiosensitivity variation among cancer cells on optimal radiotherapy fractionation schemes. They used a criterion developed by Rachev and Yakovlev that considers the difference between weighted survival probabilities for normal and neoplastic cells, where tumor cell radiosensitivity is treated as a random variable with a known distribution function [76]. For several special cases the exact optimal fractionation is obtained, and an iterative approximation methodology is designed for cases where the exact optimal fractionation schedule cannot be computed.
Several studies have suggested that intra-tumor heterogeneity accounts for the variability observed in radiobiological parameters and in TCP versus dose [58]. Zagars et al. categorized the cells in a tumor into three main subclasses: the radio-sensitive cells, which are controllable with radiotherapy; the radio-resistant cells, which are not susceptible to damage from therapeutic radiation; and the stochastic fraction, which includes those cells with tumor control probability between 1 and 99% [101]. The population TCP versus total delivered dose curve, the so-called TCP/D curve, can be modeled as a weighted sum of individual TCP/D curves, where the weights are estimated from the relative frequencies of the different types of tumor cells in the population. It was observed (see Fig. 1) that intra-tumor heterogeneity flattens the tumor dose-response curves [90, 101].
Fig. 1 Relationship between TCP and number of 2.0 Gy fractions for different tumor population variabilities, based on the model developed by Zagars et al. [101]. The fraction of surviving cells is assumed to be normally distributed; the standard deviation of the normal distribution measures the heterogeneity of the tumor cell population
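The flattening effect illustrated in Fig. 1 can be reproduced with a simple weighted-mixture calculation. The sketch below assumes a Poisson-type TCP for each homogeneous subpopulation under the LQ model; the subpopulation weights, radiosensitivities and clonogen numbers are hypothetical and chosen only for illustration.

```python
import numpy as np

# Sketch: population TCP/D as a weighted sum of subpopulation TCP curves.
# Subpopulation weights, radiosensitivities and clonogen numbers are hypothetical.

def tcp_single(n_fractions, d, alpha, beta, n_clonogens):
    """Poisson TCP for a homogeneous subpopulation under the LQ model."""
    surviving = n_clonogens * np.exp(-n_fractions * (alpha * d + beta * d ** 2))
    return np.exp(-surviving)

fractions = np.arange(0, 41)           # number of 2.0 Gy fractions
subpops = [
    {"weight": 0.5, "alpha": 0.35, "beta": 0.035, "N": 1e7},   # radio-sensitive
    {"weight": 0.3, "alpha": 0.25, "beta": 0.025, "N": 1e7},   # "stochastic" fraction
    {"weight": 0.2, "alpha": 0.15, "beta": 0.015, "N": 1e7},   # radio-resistant
]

population_tcp = sum(
    s["weight"] * tcp_single(fractions, 2.0, s["alpha"], s["beta"], s["N"])
    for s in subpops
)
# Plotting population_tcp against `fractions` gives a flatter dose-response curve
# than any of the homogeneous subpopulation curves, as in Fig. 1.
print(population_tcp[::10])
```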
Timing and 4R's effects on tumor heterogeneity
The fraction of cells surviving a dose of radiation depends not only on the dose and the tumor radio-sensitivity parameters, but typically also on the time course of dose delivery [88]. Timing affects cell killing for several reasons, including DNA repair and misrepair, tumor repopulation, redistribution and reoxygenation [48, 49]. The basic LQ model typically assumes that the tumor radio-sensitivity parameters (α and β) and the repopulation rate are constant over the time course of radiotherapy. This implies that the simple version of the LQ equation, exp(−αd − βd²), fails to capture the dynamics of reoxygenation and repopulation throughout the course of treatment. Mathematical models of ionizing radiation therapy applied to multicellular populations whose cells have time-dependent radiosensitivity have been studied widely [17, 60]. However, in some cases, such as heterogeneity in cell sensitivity and proliferation rate under fractionated irradiation with sufficiently many fractions or under protracted continuous radiation, it is possible to consider only the homogeneous subpopulations of the most resistant and/or fastest growing cells. This is because slowly growing and sensitive subpopulations usually die out soon after the commencement of therapy, and it is therefore sufficient to design the therapy to target the fast-growing and resistant populations. As an example, see the mathematical model developed in [54], which describes the number of proliferating and non-proliferating normal cells as a function of time post-treatment and incorporates selection of the fastest-growing subpopulation to capture the tissue damage at the conclusion of therapy and the subsequent healing kinetics.
Hlatky et al. [60] studied the variable response of tumor cells to ionizing radiation by modeling the resensitization process, which includes redistribution and reoxygenation. According to the resensitization process, after a dose is delivered a large fraction of the damage occurs among the radiosensitive cells, resulting in a decreased average radiosensitivity of the surviving population. However, these changes are reversible, and the remaining subpopulation is driven into more radiosensitive states as time passes [14, 60]. Considering a smooth function n(α, t) for the absolute number of cells that have sensitivity α at time t, the fluctuating diversity of a population of fixed size can be written using a Kolmogorov forward equation as (see [60] for more details)
$$ \frac{\partial n(\alpha, t)}{\partial t} = -\left(\alpha \dot{D} - \frac{1}{2}\kappa u^2\right) n + k \frac{\partial}{\partial \alpha}\left((\alpha - \alpha_0)\, n + \sigma^2 \frac{\partial n}{\partial \alpha}\right) $$
where \( \dot{D} \) is the dose rate, u denotes the average number of DSB per cell, \( \frac{1}{2}\kappa u^2 \) is the average rate at which binary misrepairs remove DSB through lethal rearrangements, k is the rate at which cells change their radiation sensitivity, and α_0 and σ² represent the mean and variance of the random variable α, respectively. Equation (1) represents a standard Ornstein-Uhlenbeck process for a cell population of fixed size undergoing "convection" and "diffusion" in a radiation sensitivity space parametrized by α and centered on α_0. Note that in the case of a homogeneous tumor, σ = 0, Eq. (1) becomes the deterministic model developed by Sachs et al. in [80], which adds the enzymatic modification of the immediate damage through a Markov process to the basic LQ model. Considering the tumor population in the long term, it was shown that the solution of Eq. (1) gives the surprisingly simple result
$$ N(\infty) = N(0)\, \exp\left(-\alpha_0 D + \left(\tfrac{1}{2}\sigma^2 G(kT) - \beta G(\lambda T)\right) D^2\right) $$
where N(t) denotes the total population at time t, D is the total radiation dose delivered over the period (0, T), and G is the Lea-Catcheside function [60]. Equation (2) can be viewed as the elementary LQ model with α replaced by its average α_0 and β replaced by a modified value. The results of their analysis support the hypothesis that the therapeutic paradigm of low dose rate or fractionated radiation can help overcome radioresistance in hypoxic tumors [91, 97]. This is because a large fractionation interval (the parameter T in (2)) allows the tumor population to complete the reoxygenation process, so that the radio-resistance of the tumor population due to its oxygenation status is minimized; this is reflected in a smaller coefficient of D² in Eq. (2). One year later, Brenner et al. developed a parsimonious model that includes the resensitization effect in the LQ model. In the extended model, designated LQR, survival after a dose d is written as
$$ \exp\left(-\alpha d - \left(\beta - \tfrac{1}{2}\sigma^2\right) d^2\right) $$
where the term \( \frac{1}{2}\sigma^2 d^2 \) accounts for cellular diversity and is determined by the variance of the cell kill by one-track action of radiation, i.e. of the parameter α [14]. The cell survival values given by the Brenner et al. model (Eq. (3)) are plotted in Fig. 2 for σ² = 0, 0.01 and 0.09, corresponding to cell populations with no, low and high diversity, respectively. Comparing the effect of cellular diversity for tumors with different values of α/β, we observe a more pronounced effect for tumors with a large α/β, e.g. 10 for head and neck cancer (Fig. 2b), than for tumors with a small α/β, e.g. 3 for prostate cancer (Fig. 2a).
Fig. 2 Cell survival curves illustrating the effect of tumor heterogeneity on the surviving fraction of cells after a single dose of radiation, based on Eq. (3). (a) α = 0.3 and β = 0.1; (b) α = 0.3 and β = 0.03
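The curves in Fig. 2 can be reproduced directly from Eq. (3). The sketch below evaluates the LQR surviving fraction for the three values of σ² quoted above; it is an illustration of the formula rather than a reimplementation of [14].

```python
import numpy as np

# Sketch: LQR surviving fraction, Eq. (3), for populations of differing diversity.
# The (alpha, beta) pairs mirror the two panels of Fig. 2.

def lqr_survival(d, alpha, beta, sigma2):
    """Surviving fraction after an acute dose d under the LQR model."""
    return np.exp(-alpha * d - (beta - 0.5 * sigma2) * d ** 2)

doses = np.linspace(0.0, 10.0, 101)
for sigma2 in (0.0, 0.01, 0.09):       # no, low and high diversity
    s_a = lqr_survival(doses, alpha=0.3, beta=0.1,  sigma2=sigma2)   # as in Fig. 2a
    s_b = lqr_survival(doses, alpha=0.3, beta=0.03, sigma2=sigma2)   # as in Fig. 2b
    print(f"sigma^2 = {sigma2}: S(8 Gy) = {s_a[80]:.2e} (a), {s_b[80]:.2e} (b)")
```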
Optimization of radiotherapy treatment within the Hlatky model, which includes the time dependence of sublethal damage repair, has been studied by Yang and Xing [99]. It was observed that incorporating these effects into the LQ model may give rise to optimal non-uniform fractionation schedules in which the fractional doses at the beginning and end of each treatment week are significantly larger than the others. Furthermore, the hyper-fractionated schedule was observed to give only an insignificant advantage over hypo-fractionation or a standard regimen.
Another reason that radiotherapy cell killing depends on timing, and not just on total dose, is the mitotic cycle. Tumor cells respond differently to radiation in different phases of the cell cycle [11]; e.g., cells in the G_0 phase (quiescent cells) are less radio-sensitive than proliferating cells in the G_1, S, G_2 and M phases [23, 73]. Therefore, for tumors with asynchronous cells, increasing the radiation delivery time, T, increases tumor radiosensitivity. Intuitively, the radiation first kills the cells in the more sensitive phases, after which the radioresistant cells, e.g. those in the G_0 phase, have time to reach more sensitive phases. Also, due to cell arrest in the most sensitive phases of the cell cycle, protracted radiation promotes synchronization. Chen et al. studied the effect of cell cycle redistribution on population resensitization while ignoring the quadratic misrepair of radiation damage, β [17]. They used a McKendrick-von Foerster equation, adjusted for first-track radiation cell kill, to model the age-dependent cell dynamics as
$$ \frac{\partial n(a,t)}{\partial t} = -\frac{\partial n(a,t)}{\partial a} - \dot{D}\,\alpha(a)\, n(a,t) - g(a)\, n(a,t) $$
where n(a, t)da is the number of cells in the age range (a, a + da) at time t, α(a) is the tumor radiosensitivity at age a, and g(a) is the mitosis rate at age a. They observed that a population resensitization effect occurs as the duration T of irradiation is increased from essentially zero to short but finite times. They concluded that the population resensitization is proportional to T² and to \( \exp\left(-\alpha(a) D\right)\left(D\frac{d\alpha}{da}\right)^2 \), and that the resensitization occurs when T is small and the cell population is in a stable age distribution before irradiation; in that case it occurs regardless of how the radiation cell kill function α(a) depends on age. Hahnfeldt and Hlatky generalized the model proposed by Chen et al. beyond constant-dose-rate irradiation and small T in more explicit terms [48]. Using the same equation (4), they showed mathematically that the variation of resensitization with time due to redistribution is not monotonic but damped oscillatory. They found that spreading a dose of d Gy over a longer period of time, in any way, is more desirable and results in a higher TCP than delivering an acute dose of equal magnitude, and proved that this result holds regardless of the age-dependent sensitivity and mitosis rate functions chosen.
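For readers who want to experiment with the age-structured dynamics in Eq. (4), a crude first-order upwind discretization is sketched below. The age-dependent sensitivity α(a) and mitosis rate g(a) are hypothetical, and the renewal boundary condition at a = 0 (newborn cells produced by mitosis) is omitted for brevity, so the sketch only tracks depletion of the initial cohort.

```python
import numpy as np

# Crude first-order upwind discretization of the age-structured model in Eq. (4).
# alpha(a) and g(a) below are hypothetical choices.

A, T = 1.0, 0.5                     # maximal age and irradiation duration (arbitrary units)
na, nt = 200, 2000
da, dt = A / na, T / nt
ages = np.linspace(0.0, A, na)

dose_rate = 2.0                                   # constant dose rate D-dot
alpha = 0.2 + 0.3 * np.sin(np.pi * ages) ** 2     # hypothetical age-dependent sensitivity
g = 1.0 * (ages > 0.9 * A)                        # hypothetical mitosis rate g(a)

n = np.ones(na)                                   # initial age density n(a, 0)
for _ in range(nt):
    aging = -np.diff(n, prepend=n[0]) / da        # upwind approximation of -dn/da
    n = n + dt * (aging - dose_rate * alpha * n - g * n)

print("remaining fraction after time T:", n.sum() * da)
```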
In [23], Dawson and Hillen considered extensions of the TCP model developed by Zaider and Minerbo [103] to include quiescent states and cell cycle dynamics. The model is based on a birth-death process and generalizes the Zaider and Minerbo TCP formula by including cell cycle effects: the cell population is split into two compartments, an active phase (G_1, S, G_2, M) and a quiescent phase (G_0), with transitions between the two compartments during radiotherapy. If the clonogenic cells do not enter a G_0 phase, the model applies equally well to a splitting into S, G_2, M versus G_1 phases. The key assumption is that actively proliferating cancer cells are much more susceptible to radiation damage than quiescent cells. The basic model states that the expected numbers of cells in the active and quiescent compartments, N_A and N_Q, satisfy the system of differential equations
$$ \frac{\partial N_A}{\partial t} = -\mu N_A + \nu N_Q - \lambda_A(t) N_A - h_A(t) N_A, \qquad \frac{\partial N_Q}{\partial t} = 2\mu N_A - \nu N_Q - \lambda_Q(t) N_Q - h_Q(t) N_Q $$
where μ is the rate of active cell division, ν describes the transition from the quiescent compartment into the cell cycle, λ_A(t) and λ_Q(t) are the death rates of the two cell types at time t, and h_A(t) and h_Q(t) are the radiation-induced death rates in the two compartments. Note that since active cells are more radiosensitive, we have h_A(t) > h_Q(t). The original model of Dawson and Hillen has been extended to describe more complex systems and models with more compartments [25, 32, 59, 68]. Analysis of the Dawson and Hillen active-quiescent radiation model and its comparison to the LQ model indicates that a larger α/β ratio corresponds to a fast cell cycle and the presence of a significant quiescent compartment, while a smaller ratio is associated with a slow cell cycle [23]. These comparisons were performed under the LQ model assumptions, which allowed the authors to relate the proliferation and transition rates, μ and ν in Eq. (5), to the α and β parameters of the LQ model. We can therefore conclude that for tumor populations with a substantial quiescent compartment, which corresponds to a large α/β ratio, hyper-fractionated schedules provide a better TCP than hypo-fractionated schedules (see [70] or Badri et al. [5]). Analyses of this type, i.e. the inclusion of the cell cycle and of the diversity of cellular radiosensitivity of a tumor in the optimization of radiation dosing schedules, are a natural future direction for cell cycle modeling of TCP.
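A minimal numerical sketch of the active-quiescent system in Eq. (5) is given below; the rate constants and the pulsed radiation hazard functions (with h_A > h_Q) are hypothetical and serve only to illustrate how the two compartments can be integrated.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch of the active-quiescent model in Eq. (5). All rates are hypothetical;
# h_A > h_Q encodes the assumption that active cells are more radiosensitive.

mu, nu = 0.04, 0.02              # division and quiescent-to-active transition rates (1/h)
lam_A = lam_Q = 0.001            # background death rates (1/h)

def h_A(t):                      # radiation-induced death rate, active compartment
    return 2.0 if t % 24.0 < 0.1 else 0.0     # short pulse once per "day"

def h_Q(t):                      # radiation-induced death rate, quiescent compartment
    return 0.4 if t % 24.0 < 0.1 else 0.0

def rhs(t, y):
    NA, NQ = y
    dNA = -mu * NA + nu * NQ - lam_A * NA - h_A(t) * NA
    dNQ = 2 * mu * NA - nu * NQ - lam_Q * NQ - h_Q(t) * NQ
    return [dNA, dNQ]

sol = solve_ivp(rhs, (0.0, 24.0 * 5), [1e6, 1e6], max_step=0.05)
print("active, quiescent cells after 5 days:", sol.y[0, -1], sol.y[1, -1])
```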
Hypoxia plays a significant role in the reduced response to radiation [45, 78]. Specifically, a cell in the tumor may experience changes in radiosensitivity due to a change in the tumor microenvironment, e.g., a decrease in oxygen levels to a hypoxic state. As a tumor shrinks and a significant proportion of cells are killed, the radius of the tumor cord shrinks; diffusion-limited hypoxia decreases, and necrotic or hypoxic regions become smaller and may finally disappear. Consequently, nutritional deprivation no longer drives cell death, and the net repopulation rate increases as the tumor shrinks [40]. This idea has been used to incorporate volume-dependent sensitivity and repopulation effects in the LQ model [12, 16, 79, 94]. Several experiments provide evidence that radio-sensitivity and growth rate in tumor spheroids decrease as the distance from the nutrient supply increases [21, 87, 89]. Hence a simple way to model this phenomenon is to assume that the tumor cell sensitivity to radiation, α and β, and the tumor net repopulation rate, γ, depend on the radial distance r of the cell from the center of the tumor and on the current tumor radius R [94]. If we assume that all of these parameters take fixed, well-oxygenated values at the tumor surface (i.e. α = α_0, β = β_0 and γ = γ_0 at r = R) and decrease linearly as r decreases, the radio-sensitivity parameters and tumor growth rate as a function of r ∊ [0, R] for R < r_0 are given by (see Fig. 3 and [94] for more details)
Fig. 3 Tumor geometry in the mathematical model of [94]. Tumor cells are insensitive to radiation in the hypoxic core and die at rate γ_N per day
$$ \alpha(r,R) = \alpha_0 - \frac{\alpha_0}{r_0}(R-r), \qquad \beta(r,R) = \beta_0 - \frac{\beta_0}{r_0}(R-r), \qquad \gamma(r,R) = \gamma_0 - \frac{\gamma_0}{r_0}(R-r) $$
and for r ∊ (R − r_0, R] and R > r_0 as
$$ \alpha(r,R) = \frac{\alpha_0}{r_0}(r-R+r_0), \qquad \beta(r,R) = \frac{\beta_0}{r_0}(r-R+r_0), \qquad \gamma(r,R) = \gamma_0 - \frac{\gamma_0}{r_0}(r-R+r_0) $$
As discussed by the authors, the linearity assumptions in Eqs. (6) and (7) may not be compatible with the physics of oxygen diffusion and were chosen for their parsimony and computational feasibility. The actual situation in vitro and in vivo is significantly more complex, e.g. the oxygen enhancement ratio depends on the fraction size [49], so a more complicated model would be required to describe tumor radiosensitivity as a function of radial location. Eqs. (6) and (7) are also based on the assumption that the net rate of spontaneous cell death decreases as the tumor shrinks, which applies to most types of tumors (e.g., well-differentiated squamous cell cancers) and is consistent with experimental results [87, 89].
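The piecewise-linear radial profiles in Eqs. (6) and (7) can be written down directly; the sketch below does so for hypothetical values of α_0, β_0, γ_0 and r_0. The behavior in the hypoxic core (r ≤ R − r_0) is not specified by Eqs. (6) and (7), so the core values used here are placeholders motivated by the caption of Fig. 3.

```python
import numpy as np

# Sketch of the piecewise-linear radial profiles in Eqs. (6) and (7).
# alpha_0, beta_0, gamma_0 and r_0 are hypothetical surface (well-oxygenated) values.

alpha0, beta0, gamma0, r0 = 0.3, 0.03, 0.1, 0.2    # r0 in cm

def radial_profiles(r, R):
    """Return (alpha, beta, gamma) at radial distance r inside a tumor of radius R."""
    if R < r0:                                     # Eq. (6): small tumor, no hypoxic core
        scale = 1.0 - (R - r) / r0
        return alpha0 * scale, beta0 * scale, gamma0 * scale
    elif r > R - r0:                               # Eq. (7): outer rim of a larger tumor
        s = (r - R + r0) / r0
        return alpha0 * s, beta0 * s, gamma0 - gamma0 * s
    else:
        # Hypoxic core: not covered by Eqs. (6)-(7); per Fig. 3 these cells are
        # radio-insensitive and die at some rate gamma_N (placeholder value here).
        return 0.0, 0.0, -0.05

R = 0.5
for r in np.linspace(0.0, R, 6):
    print(f"r = {r:.2f}:", radial_profiles(r, R))
```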
If we denote the tumor radius and the number of tumor cells at time t by R_t and n_t, respectively, and assume a density of θ cells per unit volume in the spherical tumor, then \( n_t = \frac{4}{3}\theta \pi R_t^3 \). Using the LQ formulation adjusted for exponential tumor growth [49], the expected change in the number of tumor cells after a dose of size d is (see [94] for more details)
$$ \dot{n}_t = n_t\left[\gamma(R_t) - \alpha(R_t)\, d_t - 2\sqrt{\beta(R_t)}\, d_t \int_0^t \sqrt{\beta(R_t)}\, d_s\, e^{-\mu(t-s)}\, ds\right] $$
where μ is the tumor repair rate. Substituting Eqs. (6) and (7) and using \( \dot{n}_t = 4\theta \pi R_t^2 \dot{R}_t \), we can write Eq. (8) in terms of \( \dot{R}_t \) and R_t, which forms the basis of the optimal control problem. Wein et al. proposed a dynamic programming approach to solve this problem numerically. The resulting optimal protocols suggest non-standard, time-varying schedules with irregular time intervals between fractions, administering larger fractions before longer breaks, such as afternoon sessions or Fridays, and smaller fractions before shorter breaks, such as morning sessions [94]. Wein et al. proposed two main reasons for this behavior. First, the large fractions make up for tumor repopulation during overnight or weekend breaks. Second, the tumor is smaller at the end of the week, i.e. on Fridays, and smaller tumors are more sensitive to radiation. They also observed that as the tumor shrinks during therapy, it is optimal to increase the doses on Friday afternoons. Based on their model, as the tumor shrinks, α(R)/β(R) becomes smaller, which leads to the optimality of hypo-fractionated schedules.
The existence of cellular heterogeneity in solid tumors may originate from a number of sources, including hypoxia, cell cycle asynchrony, infiltration of normal cells, vascular structures and stroma into the tumor, and the hierarchical structure of the cell populations from which cancers arise. The cancer stem cell (CSC) model of tumorigenesis has received significant attention in recent years. A CSC is a tumor cell that has the ability to self-renew and to generate the differentiated progeny which make up the bulk of a tumor [77]. The existence of CSCs has been identified in different cancers such as acute myeloid leukemia [26], breast cancer [1] and brain tumors [84]. The definition of CSC implies that an anticancer therapy can control a tumor, i.e. achieve permanent local tumor control, only if all CSCs are eradicated. Therefore it is possible that removal of CSCs is the crucial determinant in curing cancer and eradicating tumor cells [10].
The concept of CSCs has profound clinical implications. In particular, CSCs in solid tumors are more resistant to anti-cancer treatments, such as radiotherapy [9, 50, 75, 98]. Mathematical modeling that integrates this complexity has been used to analyze and predict the evolutionary dynamics of heterogeneous tumor populations arising from the hierarchical nature of the cell populations. A dual-compartment linear-quadratic model (DLQ) is usually used to study the hierarchical intrinsic heterogeneity of tumors [67, 93]. The DLQ model assumes that a solid tumor contains two cell populations, CSCs and differentiated cancer cells (DCC), with the CSCs forming the minor subpopulation. CSCs are able to produce both CSCs and DCCs and are taken to be the more radio-resistant subpopulation (i.e. they have lower values of α and β). The radiation response model is constructed as
$$ S(d) = F \exp\left(-\alpha_s d - \beta_s d^2\right) + (1-F) \exp\left(-\alpha_d d - \beta_d d^2\right) $$
where S(d) is the fraction of surviving cells after an acute dose d of radiation, F is the fraction of CSCs among all cells, and (α_s, β_s) and (α_d, β_d) are the radiosensitivity parameters of the CSCs and DCCs, respectively. The interplay between CSCs and DCCs can be modeled using the ODE system (10) introduced in Hillen et al. [59]:
$$ \frac{\partial N_s}{\partial t} = (2p-1)\,\mu_s\, k(N(t))\, N_s(t), \qquad \frac{\partial N_d}{\partial t} = 2(1-p)\,\mu_s\, k(N(t))\, N_s(t) + \mu_d\, k(N(t))\, N_d(t) - a_v\, N_d(t) $$
where N_s(t) and N_d(t) are the volume fractions of CSCs and DCCs, respectively, N(t) = N_s(t) + N_d(t) is the total tumor volume normalized between 0 and 1, p is the probability of symmetric CSC division, and μ_s, μ_d and a_v are the CSC growth, DCC growth and DCC apoptosis rates, respectively. The function k(N(t)), defined as max{1 − N(t)σ, 0} for some σ ≥ 1, keeps the total volume fraction below 1. In [4], Bachman and Hillen used the ODEs in (10) to show that the differentiation therapy proposed by Youssefpour et al. [100], defined as a combination of radiotherapy and chemotherapy in which the chemotherapeutic agent pushes CSCs into the differentiation stage, can have large beneficial effects in head and neck cancer, brain cancers and breast cancer, increasing treatment success and reducing side effects.
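A minimal numerical sketch of the CSC/DCC system in Eq. (10) is given below; the parameter values are hypothetical and the volume constraint is simplified, for illustration, to k(N) = max(1 − N, 0). The closing comment indicates how a radiation fraction would be applied through Eq. (9).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch of the CSC/DCC system in Eq. (10). Parameter values are hypothetical and the
# volume constraint is simplified to k(N) = max(1 - N, 0).

p, mu_s, mu_d, a_v = 0.6, 0.2, 0.4, 0.1    # symmetric-division prob., growth and death rates

def k(N):
    return max(1.0 - N, 0.0)

def rhs(t, y):
    Ns, Nd = y
    N = Ns + Nd
    dNs = (2 * p - 1) * mu_s * k(N) * Ns
    dNd = 2 * (1 - p) * mu_s * k(N) * Ns + mu_d * k(N) * Nd - a_v * Nd
    return [dNs, dNd]

sol = solve_ivp(rhs, (0.0, 200.0), [0.01, 0.05], max_step=0.5)
Ns, Nd = sol.y[:, -1]
print(f"final CSC volume fraction F = {Ns / (Ns + Nd):.3f}")

# A radiation fraction of dose d would be applied as an instantaneous event, multiplying
# Ns and Nd by exp(-alpha_s*d - beta_s*d**2) and exp(-alpha_d*d - beta_d*d**2),
# i.e. the two terms of Eq. (9).
```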
Leder et al. developed a model to study reversible phenotypic interconversions between CSCs and DCCs in glioblastoma (GBM), i.e. the possibility that radiation induces DCCs to dedifferentiate into CSCs [67]. They assumed that the increased radiosensitivity of the DCCs can be expressed relative to the CSC radioresistance through a parameter ρ ∊ (0, 1], i.e. α_s = ρα_d and β_s = ρβ_d. This simplifying assumption enabled the authors to characterize the sensitivity of CSCs to radiation by a single parameter, ρ. The model is described in Fig. 4. It stipulates that t hours after the previous dose of radiation, the fraction of DCCs capable of reverting to CSCs is given by \( \gamma(t) = \gamma_0 e^{-(t-a_0)^2/a_1^2} \) (with γ(t) = γ_0 for the first dose of radiation), for some constants γ_0, a_0 and a_1, and that the fraction of surviving cells can be computed from the LQ model. They predicted several optimized radiation strategies that substantially enhanced survival in experimental studies using a mouse model of glioblastoma. The resulting optimized schedules are non-uniform, delivering larger fractions at the beginning and toward the end of the therapy. In follow-up work, Badri et al. used the Leder model to find fractionated schedules that optimize survival while maintaining acceptable levels of toxicity in early- and late-responding tissues [5]. They derived a closed-form solution to the problem and proved that the optimization can be split into two separate tasks that can be tackled independently: the first optimizes the dose per fraction and the total dose, and the second optimizes the inter-fraction intervals between radiation doses. It was observed that the normal tissue sparing factors and radiosensitivities, together with the magnitude of the tumor α/β ratio, determine the optimal radiation scheme, i.e. for low (high) values of the tumor α/β ratio, the hypo-fractionated (hyper-fractionated) schedule is optimal. For the time-dependent model, the optimal inter-fraction intervals depend only on the time dynamics of the dedifferentiation process and the treatment duration; in particular, the optimal inter-fraction intervals equal the dose spacing that leads to the maximal amount of cell reversion to the stem-like state, i.e. a_0.
Fig. 4 Mathematical model described in [67]
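The dose-spacing observation above can be illustrated directly from the reversion function γ(t); in the sketch below the constants γ_0, a_0 and a_1 are hypothetical, and the maximizing inter-fraction interval recovers a_0 as stated.

```python
import numpy as np

# Sketch of the reversion function in the dedifferentiation model of [67];
# gamma_0, a_0 and a_1 are hypothetical values of the constants defined above.

gamma0, a0, a1 = 0.2, 6.0, 2.0        # a_0, a_1 in hours

def reversion_fraction(t):
    """Fraction of DCCs capable of reverting to CSCs, t hours after the previous dose."""
    return gamma0 * np.exp(-((t - a0) ** 2) / a1 ** 2)

spacings = np.linspace(1.0, 24.0, 231)
best = spacings[np.argmax(reversion_fraction(spacings))]
print(f"reversion is maximal for an inter-fraction interval of {best:.1f} h (= a_0)")
```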
Several stochastic and cellular automaton models have been used in more complicated, simulation-based studies of complete tumor cell kinetics during radiation therapy. Gao et al. used an integrated experimental and cellular Potts model to simulate glioblastoma population growth and response to irradiation [41]. They found that, in order to maintain the tumor population following radiotherapy, surviving glioma CSCs in vitro increase their rate of self-renewal, i.e. the fraction of CSCs in the population increases after radiation. By comparing the responses to acute and fractionated irradiation, the authors observed that the relative increase in the fraction of CSCs in the tumor population after fractionated treatment cannot be explained merely by the radioresistance of CSCs. This simulation-based model suggests that repeated exposure to radiation might increase the symmetric division rate of CSCs, which may eventually lead to accelerated repopulation of CSCs. A series of in vivo 4D simulation models for GBM explores tumor growth dynamics and response to radiation, considering vasculature, oxygen supply and radiosensitivity [2, 27, 28]. These works clustered cells into dynamic classes based on the mean durations of the various cell cycle phases and used a linear-quadratic model to describe the number of cells killed. They associated p53 mutations with increased radioresistance and poor clinical outcome for patients with GBM, as suggested by Haas-Kogan et al. [46]. Evaluating the response to treatment for different fractionation regimens revealed that hyper-fractionated schedules may lead to an improvement in local tumor control compared to standard schedules.
In this section we will review previous works that studied stochastic modeling and optimization of chemotherapy for heterogeneous tumor cell populations.
Optimization models
There is a vast literature on the mathematical modeling and optimization of the delivery of chemotherapy; see, e.g., the three review papers Shi et al. [83], Swan [86] and Kimmel and Swiernak [64], or the textbook by Martin and Teo [69]. In this literature, optimization problems are formulated to optimally achieve a desired patient outcome subject to various constraints. Several works in these reviews follow an optimal control approach, e.g., Swan mentions several problems of this form [86]. Specifically, these works assume that the cancer cell population satisfies a differential equation that depends on the drug concentration, e.g.,
$$ \frac{\partial x}{\partial t}=x\left[f(x)-h(u)\right], $$
where x is the cancer cell population size and u is the drug concentration level, and f and h are arbitrary functions that represent density dependence and drug-induced cell kill, respectively. A cost function is then specified, e.g.
$$ J(x,u) = \int_0^{t} \left[\omega(x(s)) + \rho\, u(s)^2\right] ds, $$
and the goal is then to use optimal control methodology to numerically identify the optimal drug concentration profile u. There is a wide range of work on models of this kind, and we refer the reader to the reviews of Shi et al. [83], Swan [86] and Kimmel and Swiernak [64] for further examples.
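As an illustration of this formulation, the sketch below discretizes the drug concentration profile on a time grid and minimizes the resulting cost with a generic optimizer (direct transcription with forward-Euler dynamics). The functions f, h and ω and all parameter values are hypothetical; this is a numerical caricature of the approach, not a method taken from the cited reviews.

```python
import numpy as np
from scipy.optimize import minimize

# Crude direct-transcription sketch of the optimal control problem above: the drug
# concentration u(t) is discretized on a grid, the state x follows dx/dt = x[f(x) - h(u)]
# by forward Euler, and J = integral of [omega(x) + rho*u^2] is minimized.
# f, h, omega and all parameter values are hypothetical.

T, n_steps = 10.0, 50
dt = T / n_steps
rho, x0 = 0.05, 1.0

def f(x):                    # density-dependent net growth (logistic-type)
    return 0.5 * (1.0 - x / 2.0)

def h(u):                    # drug-induced kill rate
    return 0.8 * u

def cost(u):
    x, J = x0, 0.0
    for ui in u:
        J += dt * (x + rho * ui ** 2)            # omega(x) = x
        x += dt * x * (f(x) - h(ui))
    return J

u0 = np.full(n_steps, 0.5)
res = minimize(cost, u0, bounds=[(0.0, 1.0)] * n_steps, method="L-BFGS-B")
print("optimal cost:", res.fun)
```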
Given the large amount of literature on this topic, we focus in the remainder of this section on works related to optimization of stochastic models of the treatment process for heterogeneous tumors with resistant subtypes.
Optimization of stochastic models
The majority of stochastic models of tumor response to chemotherapy have been based on the continuous-time binary multi-type branching process framework (see e.g. [3]). In this framework there are m possible cell types, and all cells of a given type behave in a statistically identical fashion, independently of all other cells present. In particular, a cell of type i waits an exponentially distributed amount of time with mean 1/a_i before a birth/death/mutation event. At this event the type-i cell produces offspring of type (j_1, ..., j_m) with probability p^(i)(j_1, ..., j_m), where j_1 + ... + j_m ∊ {0, 2} (see Fig. 5). The multi-type branching process is specified by the vector \( \vec{a} = [a_1, \dots, a_m] \) and the vector-valued mapping
Fig. 5 In panel (a) we show an event where a type-j cell replicates without mutation, in panel (b) a type-j cell has a single mutated offspring, a type-k cell, and in panel (c) a type-j cell dies
$$ P(j_1, \dots, j_m) = \begin{pmatrix} p^{(1)}(j_1, \dots, j_m) \\ \vdots \\ p^{(m)}(j_1, \dots, j_m) \end{pmatrix} $$
The long-term behavior of a multi-type branching process is easily deduced from this information. In particular, one forms the mean matrix M = (m_ij), where m_ij is the expected number of type-j offspring that a type-i cell will produce. If the maximal eigenvalue of M is less than or equal to one, then the branching process is guaranteed to go extinct, while if it is greater than one, the branching process either goes extinct or its size diverges to infinity. Understanding the long-term behavior of the branching process is therefore straightforward. When studying the problem of drug resistance in cancer, however, one is often interested in the behavior of the process over a long but finite time interval, and it is then not sufficient to simply look at the maximal eigenvalue of M; for examples of other techniques that can be used, see e.g. Durrett and Moseley [30], Iwasa et al. [61], Haeno et al. [47], or Durrett et al. [31].
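The mean-matrix criterion can be illustrated with a small two-type example; in the sketch below the offspring probabilities are hypothetical, with type 1 representing drug-sensitive cells that occasionally produce a resistant type-2 daughter.

```python
import numpy as np

# Two-type illustration of the mean-matrix criterion. Offspring probabilities are
# hypothetical: type 1 is drug-sensitive (subcritical under therapy) and occasionally
# produces a resistant type-2 daughter; type 2 is supercritical.

p_death  = np.array([0.55, 0.45])     # event is a cell death
p_birth  = np.array([0.44, 0.55])     # event is a faithful division (two same-type daughters)
p_mutate = np.array([0.01, 0.00])     # event is a division into one type-1 and one type-2 cell

# Mean matrix M[i, j]: expected number of type-(j+1) offspring per type-(i+1) event.
M = np.array([
    [2 * p_birth[0] + p_mutate[0], p_mutate[0]],
    [0.0,                          2 * p_birth[1]],
])

rho = max(abs(np.linalg.eigvals(M)))
print("maximal eigenvalue of M:", rho)
print("extinction is certain" if rho <= 1 else "escape (outgrowth of resistance) is possible")
```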
When modeling drug resistance in chemotherapy, a standard approach is to assume that initially most cells are of type 1, which is sensitive to some first-line therapy. During treatment with this first-line therapy the type-1 cells decrease; however, they may mutate to a different type of cell that can grow under the first-line therapy. This cell type may decline under a second-line therapy, but it may in turn mutate to a type resistant to both therapies. In this model the question then becomes how to administer the various therapies so that the risk of total treatment failure (no more viable drugs) is minimized.
Seminal work in this field was done by Coldman and Goldie in several papers, e.g., Goldie and Coldman [43], Goldie et al. [44] and Coldman and Goldie [19]. We will focus on Coldman and Goldie [19], since it generalizes the previous works. It is assumed that there are n available treatments T_1, ..., T_n, and 2^n different cell types, each type specified by the subset of therapies to which its cells are resistant. Specifically, R_{i_1, ..., i_m}(t) is the number of cells at time t that are resistant to the therapies T_{i_1}, ..., T_{i_m} and sensitive to all other therapies; the cell type R_0 is sensitive to all therapies. In the absence of therapy it is assumed that all cells follow a pure birth process with birth rate λ per cell. During cell division events, mutations may occur and cells can acquire resistance to new types of drugs. Chemotherapy is modeled as an instantaneous probabilistic reduction of the population of all sensitive cells according to a log cell kill rule. The authors then derive formulas for the probability that cells resistant to the therapies evolve within a finite time horizon. Coldman and Goldie consider the case of two therapies and three distinct resistant cell types in depth. In particular, let P_12(t) be the probability that no cells with resistance to both therapies evolve by time t. Under symmetry assumptions on the efficacy of the two therapies and on the behavior of the two singly resistant mutants, Coldman and Goldie [19] establish that alternating the therapies maximizes P_12(t). Day computationally investigated relaxations of the symmetry assumptions and found that some non-alternating schedules could outperform the alternating schedule in that scenario [24]. In particular, Day proposed a 'worst drug first' rule; this rule was investigated in further depth by Katouli and Komarova, who considered a wide range of possible cyclic therapies [62]. In later works, Murray and Coldman [72] and Coldman and Murray [20] extended the original model of Coldman and Goldie [19] to allow for toxicity constraints on normal tissues and simultaneous administration of multiple drugs, and included the possibility of inter-patient heterogeneity. In [18], Chen et al. further investigated the effects of asymmetry in the efficacy of the two possible therapies and derived general conditions for identifying optimal drug administration sequences. One potential shortfall of the Goldie and Coldman model is that the tumor cell populations grow exponentially, ignoring possible effects of resource depletion. Chapter 9 of the monograph by Martin and Teo [69] develops a deterministic model that allows for logistic and Gompertz growth of the tumor population. In this model there are four types of cells and two therapies, and the authors develop an algorithm that searches for the schedule of therapies that maximizes the time until treatment failure; note, however, that this algorithm only identifies local optima.
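To give a flavor of how such sequencing questions can be explored numerically, the sketch below is a deliberately simplified Monte Carlo caricature in the spirit of the Coldman-Goldie setup (deterministic growth between cycles, Poisson resistance mutations, instantaneous log-kill of sensitive populations). It is not a reimplementation of [19], and all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def run(schedule, n_cycles=10, growth=4.0, mu=1e-5, kill=1e-2, n0=1e7):
    """One hypothetical treatment course; returns True if no doubly resistant clone arose."""
    pops = {"S": n0, "R1": 0.0, "R2": 0.0, "R12": 0.0}
    for cycle in range(n_cycles):
        # net growth between treatment cycles
        for key in pops:
            pops[key] *= growth
        # resistance mutations acquired by sensitive and singly resistant cells
        pops["R1"] += rng.poisson(mu * pops["S"])
        pops["R2"] += rng.poisson(mu * pops["S"])
        pops["R12"] += rng.poisson(mu * (pops["R1"] + pops["R2"]))
        # instantaneous log-kill of every population sensitive to the administered drug
        drug = schedule[cycle % len(schedule)]
        for key in pops:
            if drug not in key:            # e.g. drug "1" spares R1 and R12
                pops[key] *= kill
    return pops["R12"] < 1.0

def failure_prob(schedule, n_runs=500):
    return 1.0 - np.mean([run(schedule) for _ in range(n_runs)])

print("alternating T1/T2       :", failure_prob(["1", "2"]))
print("T1 for 5 cycles, then T2:", failure_prob(["1"] * 5 + ["2"] * 5))
```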
Despite the large amount of work done in this field, significant challenges remain. In particular, previous works have considered optimal schedules with only a small number of resistant types and potential therapies. Going forward, an important extension will be to develop methodologies that allow the optimization of administration schedules for larger numbers of therapies and resistant types. Another possible extension is to study the minimization of the probability of resistance in more complex stochastic models. Until now the stochastic models have all had essentially exponential growth, which is known to be inconsistent with tumor growth curves. An exciting challenge for the future is to minimize resistance probabilities in stochastic models that include density dependence.
Stochastic analysis
There has been a large volume of work on the stochastic modeling of cancer evolution; see, e.g., the monographs by Kimmel and Axelrod [63] and Durrett [29]. Given this large body of work, we will focus on stochastic models for the evolution of resistance under therapy. In Komarova and Wodarz [66] and Komarova [65], Komarova and Wodarz extended the model of Coldman and Goldie by replacing the pure birth process with therapy-induced cell kill events by a multi-type binary branching process. Here the types represent the therapies to which the cells are resistant, and cells mutate to give rise to daughter cells with new types of resistance. In [36], Foo and Michor also consider a multi-type binary branching process, but they allow for time-inhomogeneous birth and death rates and identify dosing schedules that minimize the risk of resistance subject to toxicity constraints. A follow-up work [37] generalized this model to allow for arbitrary concentration curves and incorporated pharmacokinetic effects. Fla et al. constructed a stochastic model for the evolution of normal blood stem cells, wild-type leukemic stem cells, and mutated drug-resistant leukemic stem cells [33]. A novel feature of this model is that it is a stochastic model that incorporates competition; the authors derived the Fokker-Planck equations governing the probability mass functions of the stochastic model and analyzed the possible equilibria of the system. In a series of works, Foo and Leder [35, 38] studied a branching process model for the evolution of a heterogeneous cancer population undergoing therapy. In particular, denote the drug-sensitive cell population at time t by Z_0(t) and the drug-resistant cell population by Z_1(t), with Z_0(0) = n and Z_1(0) = 0. The sensitive cell population is modeled as a subcritical binary branching process that produces resistant cells at rate μ, and each resistant cell initiates a super-critical branching process with random net growth rate. In these works the properties of the cancer cell population are investigated at the 'crossover time':
$$ \xi = \min\left\{t > 0 : Z_1(t) > Z_0(t)\right\}. $$
In particular, Foo and Leder [35] study the relationship between ξ and the extinction time of the sensitive cell process, while Foo et al. [38] study the diversity properties of the resistant cell population at the time ξ. There are several standard metrics for the diversity of a population, e.g. the number of distinct species present, Simpson's Index (the probability that two randomly chosen cells are genomically identical), and Shannon's Index (related to Shannon's entropy, see e.g. [22]); Foo et al. [38] consider all three of these diversity measures. Lastly, the work of Foo et al. [39] establishes a central limit theorem for ξ in the limit as the initial population Z_0(0) goes to infinity, and identifies the effect of the random fitness distribution on the large-n behavior of the crossover time ξ.
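For concreteness, the three diversity measures mentioned above can be computed from a vector of clone sizes as in the sketch below; the clone sizes are hypothetical.

```python
import numpy as np

# Diversity measures for a hypothetical resistant population at the crossover time.
clone_sizes = np.array([500, 300, 150, 40, 10])       # hypothetical clone sizes
p = clone_sizes / clone_sizes.sum()

species_richness = len(clone_sizes)                    # number of distinct clones
simpson_index = float(np.sum(p ** 2))                  # prob. two random cells are from the same clone
shannon_index = float(-np.sum(p * np.log(p)))          # Shannon entropy of the clone distribution

print(species_richness, simpson_index, shannon_index)
```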
Many open problems remain in the area of stochastic models of cancer cells undergoing therapy. An interesting extension would be to investigate the treatment process when spatially explicit models (such as [95]) are used.
Viewing tumors as an evolving population of cells has proven to be a useful tool in the study of cancer. Anti-cancer therapy clearly has the potential to impact the evolutionary trajectory of the tumor cell population, and the behavior of this evolution is particularly interesting in the context of diverse tumor cell populations. For example, one might expect that therapy will select for therapy-resistant cells, thus leaving a more difficult-to-treat tumor. In order to achieve the best possible therapeutic results it thus seems necessary to create treatment strategies that take into account the diversity present within a tumor and the evolutionary changes the tumor might undergo during therapy.
There has clearly been a significant amount of work done in the field of cancer therapy optimization. However, many exciting problems remain to be investigated. For example, there are few theoretical results about the structure of optimal radiotherapy schedules for heterogeneous populations. In the chemotherapy setting there are no suitable optimization methods for dealing with large amounts of heterogeneity, i.e., large numbers of distinct cell types. There are also several interesting open problems in the stochastic modeling and optimization framework; in particular, more work needs to be done in this area that incorporates cellular competition.
Perhaps the biggest challenge in the field of designing optimal cancer therapies is bringing these optimized therapeutic schedules into the clinic. While there have been successes in the laboratory setting, e.g., Leder et al. [67] and Gao et al. [41], successes in a clinical setting are quite rare.
CSC, cancer stem cell; DCC, differentiated cancer cells; DLQ, dual-compartment linear-quadratic; DSB, double strand breaks; LQ, linear-quadratic; OAR, organs-at-risk; TCP, tumor control probability
Reviewer's report 1 Thomas McDonald, Biostatistics and Computational Biology, Dana-Farber Cancer Institute
Reviewer comments:
The review provides a general overview of modeling therapy of tumors. It separates into radiotherapy and chemotherapy, discussing historical and more recent models of each and the impact of heterogeneity that affects tumor response. The authors do a good job discussing radiotherapy beginning with the Linear-Quadratic model before moving into the various extensions to account for the four 'Rs'. The section on Chemotherapy covers a wide range of work from the Coldman and Goldie models up to modern methods used and includes a discussion of the next necessary steps and issues to tackle. Ultimately, this review provides a useful recap of the work done in mathematical modeling of radiotherapy and chemotherapy.
Reviewer recommendations to authors:
Major: The main suggestion is to include a few more pictures of some of the processes mentioned. The radiotherapy models could be illustrated with curves and example plots of tumor response curves showing the impact of heterogeneity as modeled in some of the articles cited.
The second part on chemotherapy seems lacking in the detail that the radiotherapy section got, and it may deserve a little more time or mathematical description of the work. The focus of the work is clearly radiotherapy, but explaining some of the chemotherapy models in a little more depth or describing a quick background of branching processes and their use would make the review more complete. A more careful proofreading is necessary. There are minor grammatical errors scattered throughout. An incomplete list is given below.
The first section on radiotherapy may be separated into subsections since the authors jumped between models abruptly.
Author's response: Thank you for your careful reading of the manuscript and helpful suggestions, we have addressed these comments.
Reviewer's report 2 David Axelrod, Rutgers University
Summary: Recommendation status: Endorse publication as a Review. Reviewers report: Summary of some mathematical modeling to optimize radiotherapy and chemotherapy, with brief mention of open problems, but little indication of whether or not the modeling has had a clinical impact, and if not why not. Not comprehensive or original, although useful as an entrance to the literature.
Reviewer's report 3 (Author's response included in italics) Leonid Hanin, Idaho State University
Summary: The authors attempted to review a huge research field (mathematical models of radiation therapy/chemotherapy and stochastic models of tumor cell populations) through a prism of optimal cancer treatment schedules and intra-tumor heterogeneity. From among many hundreds of relevant articles and dozens of books published on these subjects the authors selected a relatively small fraction for their discussion. The review is somewhat sketchy and oftentimes superficial. In my opinion, it does not delve deep enough into biological, clinical and mathematical issues. The overall picture of the field does not come through clear enough as well. I believe the review is missing some general guiding ideas that would make the discussion of the subject coherent and captivating from methodological and historical standpoints.
Reviewer recommendations to authors: The article provides a brief overview of the following areas of biomathematical research: cancer radiotherapy, chemotherapy and stochastic modeling of cancer cell populations. The umbrella topic that gives a common thread to the reviewed modeling approaches and results is the effects of heterogeneity of cancer cell populations on optimal radiotherapy or chemotherapy schedules. I believe the review is too sketchy, incomplete and lacking technical details to do justice to these extensive and important areas of research. Specifically, the following important questions have not been addressed in sufficient depth and detail:
(1) What are the biological and mathematical assumptions underlying the quoted studies?
(2) What are the sources of heterogeneity (spatial, oxygenation level, radiosensitivity, cell cycle phases, variation in kinetic parameters, inter-patient variation, etc.) accounted for and disregarded in any particular study? Without this, results of the cited research can be neither fully appreciated nor compared.
(3) What are the types of radiation involved (fractionated, continuous with constant dose rate, brachytherapy, etc.)?
(4) Are the results and conclusions theoretical or numerical and, in the latter case, how were model parameters selected?
(5) What is the basis for various equations discussed in the text?
Author's response: Thank you so much for your comments, we have included more discussion on the underlying assumptions on the models, equations, various types of radiation, and classification of the heterogeneity sources in radiotherapy
1. Due to selection effects some of the heterogeneity issues seem irrelevant in the case of fractionated irradiation with sufficiently many fractions, protracted continuous radiation and chemotherapy. For example, this is the case for heterogeneity with respect to cell sensitivity and proliferation rate, for sensitive and slowly growing tumor subpopulation will disappear soon after the start of treatment, so it seems feasible to deal from the outset with the homogeneous subpopulations of the most resistant and/or fastest growing cells. See e.g. [1] in the list of references below for the discussion of selection of the fastest growing subpopulation. This fairly obvious but important consideration provides the missing evolutionary context to the discussion of heterogeneity.
Author's response: Thank you so much for your valuable comment. We have added a paragraph in the paper to explain this matter.
2. The article disregarded a profound difference between fractionated and continuous radiation. While the former leads to memoryless kinetic models that can be described using Markov processes, the latter brings about long biological memory (due to the arrest of irradiated cells in the most radiosensitive phases of the cell cycle and non-markovian kinetics of radiation damage accumulation and repair/misrepair), see e.g. [2] and experimental studies quoted therein.
Author's response: Thank you so much for pointing out this shortcoming. We have added a paragraph to address this issue.
3. As it was briefly mentioned in the article, clinically relevant approaches to radiotherapy optimization must involve constraints accounting for damage to normal tissue. However, no details were provided and no results reviewed. Modeling normal tissue complication probability (NTCP) leads to many mathematical and biomedical challenges including heterogeneity issues [1].
Author's response: We appreciate reviewer comment. The focus of the present work is to review the studies that properly model the intra-tumor heterogeneity. However to add some discussion about this important topic, we have added a paragraph that explains the difference between inter-patient and intra-tumor heterogeneity. We also cited reference [1] to provide some additional sources for covering this important topic briefly.
4. The article deals with the linear-quadratic (LQ) model of irradiated cell survival and its extensions. This model is based on a fairly sophisticated mechanistic description of the kinetics of sublesion generation, repair/misrepair and pairwise interaction that produces lethal lesions (typically chromosomal aberrations). However, converting this formalism into cell survival probability is based on a highly unrealistic assumption that the distribution of the number of lesions is Poisson. Although this and other critical flaws of the LQ model have been uncovered about four decades ago, see [3] for a more recent discussion, LQ model is still in business. This is especially surprising given that alternatives have been proposed, see e.g. [3] where a parsimonious model based on rigorous microdosimetric analysis and overcoming many flaws of the LQ model was introduced. Another fundamental problem of the LQ model is that it is inapplicable to large doses (>10 Gy) [4]. For example, it was shown in [3] that for such doses LQ model underestimates cell survival (compared to the more realistic model developed in [3]) by several orders of magnitude! Insisting on the LQ model confines researchers to a mathematical abstraction that in many cases has little to do with clinical reality.
Author's response: We appreciate reviewer comments on this shortcoming. We have added a paragraph in the paper to explain this shortcoming and present our reasoning for using LQ model.
5. Discussion of optimal radiation schedules is overly ad hoc. Addressing the question as to whether some general principles are true in a given biological/modeling setting would bring much needed structure and clarity. For example, is TCP invariant under rearrangement of fractional doses? Does it satisfy the front loading principle (i.e. "hit the tumor as hard and as early as possible") true? Is the uniform radiation schedule optimal? For a discussion of these and other general principles, see [5-7].
Author's response: Thank you so much, we have added a few sentences to explain this topic briefly.
6. The article says nothing about the time point at which TCP is evaluated. As discussed in [8], its selection is consequential.
Author's response: We have added a few sentences to explain its importance.
7. Among the many Rs of radiation biology repopulation is probably the most important, yet its discussion in the article is scarce thus missing many aspects of the subject at hand. In the case of fractionated radiation, the TCP in the repopulation setting was computed in closed form in [9, 10] under arbitrary time-dependent birth and spontaneous death rates, arbitrary time post-treatment, arbitrary radiation schedule and arbitrary dose–response function. Moreover, computed in these two works was not only TCP but the entire distribution of the number of surviving clonogenic cancer cells.
Author's response: We have added these two papers to our review study.
7. Discussion of optimization problems never mentioned constraints on the total dose, fractional doses and dose rates. What are they and where did they come from?
Author's response: Thank you. We have added a paragraph to explain this important matter in detail.
Technical Comments
1. P. 4, lines 22-27. The basic LQ model contains also the time-dependent Lea-Catcheside dose-protraction factor that accounts for the temporal pattern of radiation dose delivery and depends on the rate of sublethal damage repair. Models accounting for repopulation and reoxygenation do not have to be of LQ type.
Author's response: Thank you. We agree that the Lea-Catcheside model is time dependent; however, we were referring to the basic equation of the LQ model, exp(−αd − βd²). We have added this equation to the sentence to clarify it.
2. What does the variable n in Eqs (1) and (4) represent: the absolute or relative number of cells?
Author's response: Thank you so much for bringing this subtle point. We modified the definition of n for both equations, which is absolute in Eq (1) and density in Eq (4).
3. In Eq. (1), is the distribution of radiosensitivity fixed or changes in time?
Author's response: Eq (1) represents a standard Ornstein-Uhlenbeck process for a cell population of fixed size undergoing "convection" and "diffusion" in a "radiation sensitivity space" parametrized by α and centered on α_0. We added a few words to point this out in the text.
4. It follows from Eq. (2) that for sufficiently large sigma the number of cancer cells will eventually exceed N(0). How could this happen in the absence of cell proliferation? Also, for sigma = 0 the formula does not coincide with the LQ model. Finally, what is the meaning of T? The two questions about large sigma and sigma = 0 relate to Eq. (3) as well.
Author's response: Thank you so much for your comment, there were two typos in these two equations which are fixed. Also we defined the parameter T in the text.
5. P. 5, lines 24-26. This observation is unclear.
Author's response: We have added two references to support this statement and also few sentences to clarify it.
6. P. 5, line 60 and p. 6, line 4. Due to cell arrest in the most sensitive phases of cell cycle, protracted radiation promotes synchronization.
Author's response: Thank you, we have added a sentence to the article about this topic.
7. P. 6, Eq. (4). What is g(a)? Also, n(a, t) on line 22 should be n(a, t) da.
Author's response: The function g(a) models the mitosis rate at age a. We have added this to the text and changed n(a,t) to n(a,t)da.
8. P. 7, line 24. Shouldn't t be an argument of function h rather than A and Q?
Author's response: Thank you so much, we have addressed this issue.
9. P. 7, lines 29-41. This whole paragraph is obscure. What are the assumptions here and how does the argument work?
Author's response: We have added a few sentences to explain the comparison of the active-quiescent model by Dawson and Hillen and the LQ model.
10. P. 8, Eqs (6) and (7). Are the linearity assumptions compatible with the physics of oxygen diffusion? Also, does this imply that the rate of spontaneous cell death is decreasing with time as found in [11, 12]?
Author's response: Thank you so much for your comment. We agree with your point about the linearity assumption. We have added a few sentences after Eqs. (6) and (7) to explain these shortcomings. Also, as mentioned in the original article [Wein et al. 2000], based on Eqs. (6) and (7) the tumor's net repopulation rate increases as the tumor shrinks, which is consistent with the experimental evidence showing that the growth fraction and sensitivity in solid tumors decrease as the distance from the nutrient supply increases; therefore, based on Eqs. (6) and (7), the tumor death rate decreases with time.
11. P. 9, line 16. Studied in Hanin et al. 1993 were the effects of radiosensitivity variation among cancer cells without any spatial considerations. A more detailed discussion was presented in the book [13].
Author's response: Thank you so much for pointing this out. The spatial term has been removed.
12. P. 9, line 39. The paper Dick 1997 deals with acute myeloid leukemia that does not form tumors.
Author's response: Thank you so much for pointing this out. We removed this paper in that sentence.
13. P. 10, line 17. s(d) should be S(d).
Author's response: Corrected.
14. P. 10, line 34. "…total volume of tumor with respect to a desired volume…" What do you mean?
Author's response: The function N(t) is the total tumor volume normalized between 0 and 1, equal to N_s(t) + N_d(t). We have modified this in the text.
15. P. 10, lines 55-56. Was such dedifferentiation observed and what is its mechanism?
Author's response: This topic is discussed in the referenced manuscript.
16. P. 10, line 57. Beta depends on many kinetic parameters accounting for damage production, repair, misrepair and pairwise interaction, see [4]. Therefore, the stated proportionality does not seem likely and, in any case, requires discussion of the underlying assumptions.
Author's response: This is a simplifying assumption that enables us to characterize the radiosensitivity of CSCs by a single parameter. We have added a few sentences to clarify this.
17. P. 15, line 34. What are the Simpson's and Shannon's indices?
Author's response: Definitions were added.
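For reference, the standard definitions of these two diversity indices are given below; whether the manuscript uses this exact normalisation or logarithm base is not stated here. With $p_i$ the fraction of the population belonging to subpopulation $i$ out of $S$ subpopulations,
$$ \text{Simpson's index: } D = \sum_{i=1}^{S} p_i^2, \qquad \text{Shannon's index: } H = -\sum_{i=1}^{S} p_i \ln p_i. $$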
1. Hanin L and Zaider M (2013). A mechanistic description of radiation-induced damage to normal tissue and its healing kinetics. Phys Med Biol 58: 825-839.
2. Hanin LG, Hyrien O, Bedford J and Yakovlev AY (2006). A comprehensive stochastic model of irradiated cell populations in culture. J Theor Biol 239(4): 401-416.
3. Hanin L and Zaider M (2010). Cell-survival probability at large doses: an alternative to the linear-quadratic model. Phys Med Biol 55: 4687-4702.
4. Sachs RK, Hahnfeld P and Brenner DJ (1997). The link between low-LET dose-response relations and the underlying kinetics of damage production/ repair/misrepair. Int J Radiat Biol 72: 351–74.
5. Sachs RK, Heidenreich WF, Brenner DJ (1996). Dose timing in tumor radiotherapy: considerations of cell number stochasticity. Math Biosci 138: 131-146.
6. Fakir H, Hlatky L, Li H and Sachs R (2013). Repopulation of interacting tumor cells during fractionated radiotherapy: stochastic modeling of the tumor control probability. Med Phys 40(12):121716.
7. Hanin L and Zaider M (2014). Optimal schedules of fractionated radiation therapy by way of the greedy principle: biologically-based adaptive boosting, Phys Med Biol 59: 4085-4098.
8. Zaider M and Hanin L (2011). Tumor Control Probability in radiation treatment. Med Phys 38 (2): 574-583.
9. Hanin LG (2001). Iterated birth and death process as a model of radiation cell survival. Math Biosci 169(1): 89-107.
10. Hanin LG (2004). A stochastic model of tumor response to fractionated radiation: limit theorems and rate of convergence. Math Biosci 191: 1–17.
11. Jones B and Dale RG (1995). Cell loss factors and the linear-quadratic model. Radiother Oncol 37:136-139.
12. Ang KK, Thames HD, Jones SD, Jiang G-L, Milas L and Peters LJ (1988). Proliferation kinetics of a murine fibrosarcoma during fractionated irradiation. Radiat Res 116: 327-336.
13. Hanin LG, Pavlova LV and Yakovlev AY (1994). Biomathematical Problems in Optimization of Cancer Radiotherapy. CRC Press, Boca Raton, FL.
Al-Hajj M, Wicha MS, Benito-Hernandez A, Morrison SJ, Clarke MF. Prospective identification of tumorigenic breast cancer cells. Proc Natl Acad Sci. 2003;100(7):3983–8.
Antipas VP, Stamatakos GS, Uzunoglu NK, Dionysiou DD, Dale RG. A spatio-temporal simulation model of the response of solid tumors to radiotherapy in vivo: parametric validation concerning oxygen enhancement ratio and cell cycle duration. Phys Med Biol. 2004;49(8):1485.
Athreya K, Ney P. Branching processes. Dover Books on Mathematics Series. Mineola: Dover Publications; 2004.
Bachman JW, Hillen T. Mathematical optimization of the combination of radiation and differentiation therapies for cancer. Front Oncol. 2013;3:52.
Badri H, Pitter K, Holland E, Michor F, Leder K. Optimization of radiation dosing schedules for proneural glioblastoma. Journal of Mathematical Biology. 2016;72(5):1–36
Badri H, Watanabe Y, Leder K. Optimal radiotherapy dose schedules under parametric uncertainty. Phys Med Biol. 2015;61(1):338.
Badri H, Ramakrishnan J, Leder K. Minimizing metastatic risk in radiotherapy fractionation schedules. Phys Med Biol. 2015;60(22):N405.
Badri H, Salari E, Watanabe Y, Leder K. Optimizing chemoradiotherapy to target multi-site metastatic disease and tumor growth. 2016. http://arxiv.org/pdf/1603.00349.pdf. Accessed June 2016
Bao S, Wu Q, McLendon RE, Hao Y, Shi Q, Hjelmeland AB, Dewhirst MW, Bigner DD, Rich JN. Glioma stem cells promote radioresistance by preferential activation of the DNA damage response. Nature. 2006;444(7120):756–60.
Baumann M, Krause M, Thames H, Trott K, Zips D. Cancer stem cells and radiotherapy. Int J Radiat Biol. 2009;85(5):391–402.
Bernhard EJ, Maity A, Muschel RJ, McKenna WG. Effects of ionizing radiation on cell cycle progression. Radiat Environ Biophys. 1995;34(2):79–83.
Bouchat V, Nuttens VE, Michiels C, Masereel B, Feron O, Gallez B, Vander Borght T, Lucas S. Radioimmunotherapy with radioactive nanoparticles: biological doses and treatment efficiency for vascularized tumors with or without a central hypoxic area. Med Phys. 2010;37(4):1826–39.
Brenner D. The linear-quadratic model is an appropriate methodology for determining isoeffective doses at large doses per fraction. Seminars Radiation Oncology. 2008;18:234–9.
Brenner DJ, Hlatky LR, Hahnfeldt PJ, Hall EJ, Sachs RK. A convenient extension of the linear-quadratic model to include redistribution and reoxygenation. Int J Radiat Oncol Biol Phys. 1995;32(2):379–90.
Brown JM, Carlson DJ, Brenner DJ. The tumor radiobiology of SRS and SBRT: are more than the 5 Rs involved? Int J Radiat Oncol Biol Phys. 2014;88(2):254–62.
Buffa FM, West C, Byrne K, Moore JV, Nahum AE. Radiation response and cure rate of human colon adenocarcinoma spheroids of different size: the significance of hypoxia on tumor control modelling. Int J Radiat Oncol Biol Phys. 2001;49(4):1109–18.
Chen PL, Brenner DJ, Sachs RK. Ionizing radiation damage to cells: effects of cell cycle redistribution. Math Biosci. 1995;126(2):147–70.
Chen JH, Kuo YH, Luh HP. Optimal policies of non-cross-resistant chemotherapy on Goldie and Coldmans cancer model. Math Biosci. 2013;245:282–98.
Coldman AJ, Goldie JH. A model for the resistance of tumor cells to cancer chemotherapeutic agents. Math Biosci. 1983;65:291–307.
Coldman AJ, Murray JM. Optimal control for a stochastic model of cancer chemotherapy. Math Biosci. 2000;168:187–200.
Conger AD, Ziskin MC. Growth of mammalian multicellular tumor spheroids. Cancer Res. 1983;43(2):556–60.
Cover TM, Thomas JA. Elements of information theory. New York, USA: John Wiley & Sons; 2012.
Dawson A, Hillen T. Derivation of the tumor control probability (TCP) from a cell cycle model. Computational and Mathematical Methods in Medicine. 2006;7(2-3):121–41.
Day RS. Treatment sequencing, asymmetry, and uncertainty: protocol strategies for combination chemotherapy. Cancer Res. 1986;46:3876–85.
Dhawan A, Kaveh K, Kohandel M, Sivaloganathan S. Stochastic model for tumor control probability: effects of cell cycle and (a)symmetric proliferation. arXiv preprint arXiv:1312.7556. 2013.
Dick D. Human acute myeloid leukemia is organized as a hierarchy that originates from a primitive hematopoietic cell. Nature Med. 1997;3:730–7.
Dionysiou DD, Stamatakos GS. Applying a 4d multiscale in vivo tumor growth model to the exploration of radiotherapy scheduling: the effects of weekend treatment gaps and p53 gene status on the response of fast growing solid tumors. Cancer Informat. 2006;2:113.
Dionysiou DD, Stamatakos GS, Uzunoglu NK, Nikita KS, Marioli A. A four-dimensional simulation model of tumor response to radiotherapy in vivo: parametric validation considering radiosensitivity, genetic profile and fractionation. J Theor Biol. 2004;230(1):1–20.
Durrett R. Branching process models of cancer. Cham, Switzerland: Springer; 2015.
Durrett R, Moseley S. Evolution of resistance and progression to disease during clonal expansion of cancer. Theor Popul Biol. 2010;77(1):42–8.
Durrett R, Foo J, Leder K, Mayberry J, Michor F. Evolutionary dynamics of tumor progression with random fitness values. Theor Popul Biol. 2010;78:54–66.
Enderling H, Chaplain MA, Hahnfeldt P. Quantitative modeling of tumor dynamics and radiotherapy. Acta Biotheor. 2010;58(4):341–53.
Fla T, Rupp F, Woywod C. Deterministic and stochastic dynamics of chronic myelogenous leukaemia stem cells subject to hill-function-like signaling. In: Recent Trends in Dynamical Systems, pages 221-263. Basel, Switzerland: Springer; 2013.
Fletcher GH. Textbook of radiotherapy. Philadelphia, USA: Lea & Febiger; 1973
Foo J, Leder K. Dynamics of cancer recurrence. Ann Appl Probab. 2013;23:1437–68.
Foo J, Michor F. Evolution of resistance to targeted anti-cancer therapies during continuous and pulsed administration strategies. PLoS Comput Biol. 2009;5(11):e1000557.
Foo J, Michor F. Evolution of resistance to anti-cancer therapy during general dosing schedules. J Theor Biol. 2010;263(2):179–88.
Foo J, Leder K, Mumenthaler S. Cancer as a moving target: understanding the composition and rebound growth kinetics of recurrent tumors. Evol Appl. 2013;6:54–69.
Foo J, Leder K, Zhu J. Escape times for branching processes with random mutational fitness effects. Stochastic Processes and Their Applications. 2014;124:3661–97.
Fowler JF. The phantom of tumor treatment-continually rapid proliferation unmasked. Radiother Oncol. 1991;22(3):156–8.
Gao X, McDonald JT, Hlatky L, Enderling H. Acute and fractionated irradiation differentially modulate glioma stem cell division kinetics. Cancer Res. 2013;73(5):1481–90.
Gerlinger M, Horswell S, Larkin J, Rowan AJ, Salm MP, Varela I, Fisher R, McGranahan N, Matthews N, Santos CR, et al. Genomic architecture and evolution of clear cell renal cell carcinomas defined by multi-region sequencing. Nat Genet. 2014;46(3):225–33.
Goldie JH, Coldman AJ. A mathematical model relating the drug sensitivity of tumors to their spontaneous mutation rate. Cancer Treat Rep. 1979;63:1727–33.
Goldie JH, Coldman AJ, Gudauskas GA. A rationale for the use of alternating noncross resistant chemotherapy. Cancer Treat Rep. 1982;66:439–49.
Gray LH, Conger AD, Ebert M, Hornsey S, Scott O. The concentration of oxygen dissolved in tissues at the time of irradiation as a factor in radiotherapy. Br J Radiol. 1953;26(312):638–48.
Haas-Kogan DA, Yount G, Haas M, Levi D, Kogan SS, Hu L, Vidair C, Deen DF, Dewey WC, Israel MA. p53-dependent G1 arrest and p53-independent apoptosis in influence the radiobiologic response of glioblastoma. Int J Radiat Oncol Biol Phys. 1996;36(1):95–103.
Haeno H, Iwasa Y, Michor F. The evolution of two mutations during clonal expansion. Genetics. 2007;177(4):2209–21.
Hahnfeldt P, Hlatky L. Resensitization due to redistribution of cells in the phases of the cell cycle during arbitrary radiation protocols. Radiat Res. 1996;145(2):134–43.
Hall E, Giaccia A. Radiobiology for the radiologist. Philadelphia, USA: Wolters Kluwer Health; 2006
Hambardzumyan D, Becher OJ, Rosenblum M, Pandolfi PP, Manova-Todorova K, Holland EC. PI3K pathway regulates survival of cancer stem cells residing in the perivascular niche following radiation in medulloblastoma in vivo. Genes Dev. 2008;22(4):436–48.
Hanin LG. Iterated birth and death process as a model of radiation cell survival. Math Biosci. 2001;169(1):89–107.
Hanin LG. A stochastic model of tumor response to fractionated radiation: limit theorems and rate of convergence. Math Biosci. 2004;191:1–17.
Hanin LG, Zaider M. Cell-survival probability at large doses: an alternative to the linear-quadratic model. Phys Med Biol. 2010;55:4687–702.
Hanin LG, Zaider M. A mechanistic description of radiation-induced damage to normal tissue and its healing kinetics. Phys Med Biol. 2013;58(4):825–39.
Hanin LG, Zaider M. Optimal schedules of fractionated radiation therapy by way of the greedy principle: biologically-based adaptive boosting. Phys Med Biol. 2014;59:4085–98.
Hanin L, Rachev S, Yakovlev AY. On the Optimal Control of Cancer Radiotherapy for Non-Homogeneous Cell Populations. Advances in Applied Probability. 1993;25(1):1–23. doi: 10.2307/1427493.
Hanin LG, Hyrien O, Bedford J, Yakovlev AY. A comprehensive stochastic model of irradiated cell populations in culture. J Theor Biol. 2006;239(4):401–16.
Hendry JH, Moore JV. Is the steepness of dose-incidence curves for tumor control or complications due to variation before, or as a result of, irradiation? Br J Radiol. 1984;57(683):1045–6.
Hillen T, de Vries G, Gong J, Finlay C. From cell population models to tumor control probability: including cell cycle effects. Acta Oncol. 2010;49(8):1315–23.
Hlatky LR, Hahnfeldt P, Sachs RK. Influence of time-dependent stochastic heterogeneity on the radiation response of a cell population. Math Biosci. 1994;122(2):201–20.
Iwasa Y, Nowak MA, Michor F. Evolution of resistance during clonal expansion. Genetics. 2006;172(4):2557–66.
Katouli AA, Komarova NL. The worst drug rule revisited: mathematical modeling cyclic cancer treatments. Bull Math Biol. 2011;73:549–84.
Kimmel M, Axelrod D. Branching processes in biology. 2nd ed. Springer-Verlag; 2015
Kimmel M, Swiernak A. Control theory approach to cancer chemotherapy: benefiting from phase dependence and overcoming drug resistance. Lect Notes Math. 2006;1872:185–221.
Komarova N. Stochastic modeling of drug resistance in cancer. J Theor Biol. 2006;239:351–66.
Komarova N, Wodarz D. Drug resistance in cancer: principles of emergence and prevention. Proc Natl Acad Sci U S A. 2005;102(27):9714–9.
Leder K, Pitter K, LaPlant Q, Hambardzumyan D, Ross B, Chan T, Holland E, Michor F. Mathematical modeling of PDGF-driven glioblastoma reveals optimized radiation dosing schedules. Cell. 2014;156(3):603–16.
Maler A, Lutscher F. Cell-cycle times and the tumor control probability. Mathematical Medicine and Biology, 2009. doi:10.1093/imammb/dqp024.
Martin R, Teo KL. Optimal control of drug administration in cancer chemotherapy. New Jersey, USA: World Scientific; 1994
Mizuta M, Takao S, Date H, Kishimotoi N, Sutherland K, Onimaru R, Shirato H. A mathematical study to select fractionation regimen based on physical dose distribution and the linear-quadratic model. Int J Radiat Oncol Biol Phys. 2012;84(3):829–33.
Munro TR, Gilbert CW. The relation between tumor lethal doses and the radiosensitivity of tumor cells. Br J Radiol. 1961;34(400):246–51.
Murray JM, Coldman AJ. The effect of heterogeneity on optimal regimens in cancer chemotherapy. Math Biosci. 2003;185:73–87.
O'Rourke SFC, McAneney H, Hillen T. Linear quadratic and tumor control probability modelling in external beam radiotherapy. J Math Biol. 2009;58(4-5):799–817.
Patel AP, Tirosh I, Trombetta JJ, Shalek AK, Gillespie SM, Wakimoto H, Cahill DP, Nahed BV, Curry WT, Martuza RL, et al. Single-cell RNA-seq highlights intratumoral heterogeneity in primary glioblastoma. Science. 2014;344(6190):1396–401.
Phillips TM, McBride WH, Pajonk F. The response of CD24-/low/CD44+ breast cancer-initiating cells to radiation. J Natl Cancer Inst. 2006;98(24):1777–85.
Rachev ST, Yakovlev AY. Theoretical bounds for the tumor treatment efficacy. Syst Anal Model Simul. 1988;5(1):37–42.
Reya T, Morrison SJ, Clarke MF, Weissman IL. Stem cells, cancer, and cancer stem cells. Nature. 2001;414(6859):105–11.
Rockwell S, Dobrucki IT, Kim EY, Marrison ST, Vu VT. Hypoxia and radiation therapy: past history, ongoing research, and future promise. Current Molecular Medicine. 2009;9(4):442.
Ruggieri R. Hypofractionation in non-small cell lung cancer (NSCLC): suggestions from modelling both acute and chronic hypoxia. Phys Med Biol. 2004;49(20):4811.
Sachs RK, Chen PL, Hahnfeldt PJ, Hlatky LR. DNA damage caused by ionizing radiation. Math Biosci. 1992;112(2):271–303.
Sachs RK, Hahnfeldt PJ, Brenner DJ. The link between low-LET dose-response relations and the underlying kinetics of damage production/ repair/misrepair. Int J Radiat Biol. 1997;72:351–74.
Shah SP, Morin RD, Khattra J, Prentice L, Pugh T, Burleigh A, Delaney A, Gelmon K, Guliany R, Senz J, et al. Mutational evolution in a lobular breast tumor profiled at single nucleotide resolution. Nature. 2009;461(7265):809–13.
Shi J, Alagoz O, Erenay F, Su Q. A survey of optimization models on cancer chemotherapy treatment planning. Ann Oper Res. 2014;221:331–56.
Singh SK, Clarke ID, Terasaki M, Bonn VE, Hawkins C, Squire J, Dirks PB. Identification of a cancer stem cell in human brain tumors. Cancer Res. 2003;63(18):5821–8.
Steel GG, McMillan TJ, Peacock JH. The 5Rs of radiobiology. Int J Radiat Biol. 1989;56(6):1045–8.
Swan GW. Role of optimal control theory in cancer chemotherapy. Math Biosci. 1990;101:237–84.
Tannock IF. The relation between cell proliferation and the vascular system in a transplanted mouse mammary tumor. Br J Cancer. 1968;22(2):258.
Thames HD, Hendry JH. Fractionation in radiotherapy. London: Taylor and Francis; 1987.
Thomlinson RH, Gray LH. The histological structure of some human lung cancers and the possible implications for radiotherapy. Br J Cancer. 1955;9(4):539.
Tucker SL, Thames HD. The effect of patient-to-patient variability on the accuracy of predictive assays of tumor response to radiotherapy: a theoretical evaluation. Int J Radiat Oncol Biol Phys. 1989;17(1):145–57.
Turesson I. Radiobiological aspects of continuous low dose-rate irradiation and fractionated high dose-rate irradiation. Radiother Oncol. 1990;19(1):1–15.
Unkelbach J, Craft D, Salari E, Ramakrishnan J, Bortfeld T. The dependence of optimal fractionation schemes on the spatial dose distribution. Phys Med Biol. 2013;58(1):159.
Yu VY, Nguyen D, Pajonk F, Kupelian P, Kaprealian T, Selch M, Low DA, Sheng K. Incorporating cancer stem cells in radiation therapy treatment response modeling and the implication in glioblastoma multiforme treatment resistance. Int J Radiat Oncol Biol Phys. 2015;91(4):866–75.
Wein LM, Cohen JE, Wu JT. Dynamic optimization of a linear-quadratic model with incomplete repair and volume-dependent sensitivity and repopulation. Int J Radiat Oncol Biol Phys. 2000;47(4):1073–83.
Williams T, Bjerknes R. Stochastic model for abnormal clone spread through epithelial basal layer. Nature. 1972;236:19–21.
Withers H. Four R's of radiotherapy. Adv Biol. 1975;5:241–7.
Withers HR. Biological basis of radiation therapy for cancer. Lancet. 1992;339(8786):156–9.
Woodward WA, Chen MS, Behbod F, Alfaro MP, Buchholz TA, Rosen JM. Wnt/β-catenin mediates radiation resistance of mouse mammary progenitor cells. Proc Natl Acad Sci. 2007;104(2):618–23.
Yang Y, Xing L. Optimization of radiotherapy dose-time fractionation with consideration of tumor specific biology. Med Phys. 2005;32:3666.
Youssefpour H, Li X, Lander AD, Lowengrub JS. Multispecies model of cell lineages and feedback control in solid tumors. J Theor Biol. 2012;304:39–59.
Zagars GK, Schultheiss TE, Peters LJ. Inter-tumor heterogeneity and radiation dose-control curves. Radiother Oncol. 1987;8(4):353–61.
Zaider M, Hanin LG. Tumor Control Probability in radiation treatment. Med Phys. 2011;38(2):574–83.
Zaider M, Minerbo GN. Tumor control probability: a formulation applicable to any temporal protocol of dose delivery. Phys Med Biol. 2000;45(2):279.
Both HB and KL were supported by NSF grant CMMI- 1362236.
HB and KL wrote the manuscript. Both authors read and approved the manuscript.
Department of Industrial and Systems Engineering, University of Minnesota, Minneapolis, MN, 55455, USA
Hamidreza Badri & Kevin Leder
Correspondence to Kevin Leder.
Badri, H., Leder, K. Optimal treatment and stochastic modeling of heterogeneous tumors. Biol Direct 11, 40 (2016). https://doi.org/10.1186/s13062-016-0142-5
DOI: https://doi.org/10.1186/s13062-016-0142-5
Weakly-Ionized Air Plasma Theory
For the plasma technologies here considered (aerodynamic flow control, MHD power generation, energy bypass in pulse detonation engines), the airflow is expected to be ionized artificially through electron beams, microwaves, or strong applied electric fields. Since the cost of ionization for the air molecules is rather large, only a small fraction of the gas can be ionized in order to keep the power requirements to a reasonable level. This is why the plasma can be considered weakly-ionized. Similarly to strongly-ionized plasmas, weakly-ionized plasmas require the simultaneous solution of the mass, momentum, and energy equations for the neutrals and the charged species, as well as of the Maxwell equations for the electric and magnetic fields. However, because of the low ionization fraction of weakly-ionized plasmas, the electrical conductivity is expected to be quite small and the plasma becomes collision-dominated. Under such conditions, the governing equations take on a very different formulation from those describing strongly-ionized plasmas. A brief outline of the chemical model, of the charged species transport equations, of the neutrals transport equations and of the electric field potential equation applicable to weakly-ionized air is here given.
The degree of ionization of the air plasma as well as its chemical composition can be predicted using a finite-rate nonequilibrium 8-species, 28-reaction model as outlined below in Table 1. The model [2] is especially suited to air plasmas ionized by electron beams. In addition to chemical reactions related to electron-beam ionization (see reactions 7a and 7b), the model also includes chemical reactions related to Townsend ionization (specifically reactions 1a and 1b).
Townsend ionization consists of an electron accelerated by an electric field impacting the nitrogen or oxygen molecules and releasing in the process a new electron and a positive ion. This chemical reaction is the physical phenomenon that is at the origin of sparks and lightning bolts and that occurs in a weakly-ionized plasma whenever the electric field reaches very high values. It needs to be included in the chemical model when solving plasma aerodynamics in order to predict correctly the voltage drop within the cathode sheaths. Cathode sheaths are thin regions near the cathodes where the electric field is particularly high due to the current being mostly ionic.
Charged Species Transport Equations
The mass-conservation transport equations for the charged species must contain chemical source terms to account for ion and electron creation and destruction as well as other chemical reactions taking place in air: $$ \frac{\partial}{\partial t} \rho_k + \sum_{j} \frac{\partial }{\partial x_j} \rho_k \boldsymbol{V}_j^k = W_k $$ with $\rho_k$ the density of the $k$th species, $\boldsymbol{V}^k$ the velocity of the species under consideration including both drift and diffusion, and $W_k$ the chemical source terms. The chemical source terms are determined from the chemical reactions taking place in weakly-ionized air (see Table 1 above). The charged species velocity can be obtained from the momentum equation assuming negligible ion and electron inertia compared to the collision forces as follows (see Ref. [1] for details): $$ \boldsymbol{V}^{k}_i = \boldsymbol{V}^{\rm n}_i + \sum_{j=1}^3 s_k \tilde{\mu}^k_{ij} \left( \boldsymbol{E} + \boldsymbol{V}^{\rm n} \times \boldsymbol{B} \right)_j - \sum_{j=1}^3 \frac{\tilde{\mu}^{k}_{ij}}{|C_k| N_k} \frac{\partial P_k}{\partial x_j} $$ where $\boldsymbol{E}$ is the electric field, $\boldsymbol{V}^{\rm n}$ is the neutrals velocity including drift and diffusion, $N_k$ is the number density, $P_k$ is the partial pressure of species $k$, $s_k$ the sign of species $k$ (equal to +1 for the positive ions and to -1 for the electrons and negative ions), $C_k$ the charge of species $k$ (equal to $-e$ for the electrons, $+e$ for the positive ions, $-e$ for the negative ions, etc), and where $\tilde{\mu}$ is the tensor mobility equal to: $$ \tilde{\mu}^k \equiv\frac{\mu_k}{1+\mu_k^2|\boldsymbol{B}|^2} \left[\begin{array}{ccc} 1+\mu_k^2 \boldsymbol{B}_1^2 & \mu_k^2\boldsymbol{B}_1\boldsymbol{B}_2+s_k \mu_k \boldsymbol{B}_3 & \mu_k^2\boldsymbol{B}_1\boldsymbol{B}_3-s_k \mu_k \boldsymbol{B}_2 \\ \mu_k^2\boldsymbol{B}_1\boldsymbol{B}_2-s_k\mu_k\boldsymbol{B}_3 & 1+\mu_k^2\boldsymbol{B}_2^2 & \mu_k^2\boldsymbol{B}_2\boldsymbol{B}_3+s_k\mu_k\boldsymbol{B}_1 \\ \mu_k^2\boldsymbol{B}_1\boldsymbol{B}_3+s_k\mu_k\boldsymbol{B}_2 & \mu_k^2 \boldsymbol{B}_2\boldsymbol{B}_3-s_k\mu_k\boldsymbol{B}_1 & 1+\mu_k^2\boldsymbol{B}_3^2 \end{array}\right] $$ where $\mu_k$ is the mobility of species $k$ and $\boldsymbol{B}$ the magnetic field vector.
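The tensor mobility above is simply the scalar mobility corrected for the magnetic field; a minimal sketch of how it could be assembled numerically is given below (the function name, interface, and example values are illustrative and not taken from Ref. [1]).

```python
import numpy as np

def mobility_tensor(mu_k, s_k, B):
    """Tensor mobility of charged species k in a magnetic field.

    mu_k : scalar mobility of species k  [m^2/(V s)]
    s_k  : sign of the species charge (+1 positive ions, -1 electrons/negative ions)
    B    : magnetic field vector, shape (3,)  [T]
    """
    B = np.asarray(B, dtype=float)
    # Skew-symmetric cross-product matrix of B
    Bx = np.array([[0.0, -B[2], B[1]],
                   [B[2], 0.0, -B[0]],
                   [-B[1], B[0], 0.0]])
    # I + mu^2 B B^T - s mu [Bx] reproduces the 3x3 matrix written above
    M = np.eye(3) + mu_k**2 * np.outer(B, B) - s_k * mu_k * Bx
    return mu_k / (1.0 + mu_k**2 * B.dot(B)) * M

# Example: electron mobility tensor in a 0.5 T field along the third axis
mu_tilde_e = mobility_tensor(mu_k=0.4, s_k=-1, B=[0.0, 0.0, 0.5])
```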
Neutrals Mass Conservation Equation
The mass-conservation transport equations for the neutral molecules must contain chemical source terms to account for ion creation and destruction as well as other chemical reactions taking place in air: $$ \frac{\partial}{\partial t} \rho_k + \sum_{j} \frac{\partial }{\partial x_j} \rho_k \boldsymbol{V}^{\rm n}_j - \underbrace{\sum_j \frac{\partial}{\partial x_j}\left(\nu_k \frac{\partial w_k}{\partial x_j} \right)}_{\textrm{diffusion terms}} ={W_k} $$ with $\boldsymbol{V}^{\rm n}_j$ the bulk velocity of the neutrals, $w_k$ the mass fraction and $\nu_k$ the diffusion coefficient. The diffusion terms are here limited to the diffusion of the neutral species within each other and neglect the diffusion of the neutrals within the charged species. Such is an excellent approximation as long as the plasma remains weakly-ionized (i.e., the ionization fraction should remain less than $10^{-4}$ or so).
Total Momentum Conservation Equation
The total momentum equation for the plasma is obtained by adding the momentum equations for the neutrals and the charged species: $$ \frac{\partial}{\partial t} \rho \boldsymbol{V}^{\rm n}_i + \sum_j \frac{\partial }{\partial x_j} \rho \boldsymbol{V}^{\rm n}_j \boldsymbol{V}^{\rm n}_i + \frac{\partial P}{\partial x_i} = \underbrace{\sum_j \frac{\partial \tau_{ji}}{\partial x_j}}_\textrm{viscous force} + \underbrace{\rho_{\rm c} \boldsymbol{E}_i }_{\rm EHD~force} + \underbrace{ \left(\boldsymbol{J} \times \boldsymbol{B}\right)_i}_{\rm MHD~force} $$ with $P$ the total pressure of the gas including the electron and ion partial pressures, $\tau_{ji}$ the shear stress tensor, and $\rho_{\rm c}$ and $\boldsymbol{J}$ the net charge density and current density defined as: $$ \rho_{\rm c} \equiv \sum_k N_k C_k $$ $$ \boldsymbol{J}_i\equiv \sum_k C_k N_k \boldsymbol{V}^k_i $$ Also known as the Lorentz force, the MHD force occurs as a result of the magnetic field acting on the charges in motion, and can hence only take place when a current is flowing within the gas. On the other hand, the EHD force occurs as a result of the electric field acting on a non-neutral region of the plasma. The momentum imparted to the charged particles by the MHD and EHD forces is then transferred to the bulk of the gas through collisions between the charged particles and the neutrals.
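As a quick illustration of the EHD and MHD source terms, the net charge density, current density, and the two body forces defined above can be evaluated as in the following sketch; array shapes and any numerical values are illustrative assumptions.

```python
import numpy as np

def body_forces(E, B, C, N, V):
    """EHD and MHD (Lorentz) body forces per unit volume of plasma.

    E : electric field vector, shape (3,)               [V/m]
    B : magnetic field vector, shape (3,)               [T]
    C : charge of each charged species, shape (K,)      [C]
    N : number density of each charged species, (K,)    [1/m^3]
    V : species velocities (drift + diffusion), (K, 3)  [m/s]
    """
    E, B = np.asarray(E, float), np.asarray(B, float)
    C, N, V = np.asarray(C, float), np.asarray(N, float), np.asarray(V, float)
    rho_c = np.sum(C * N)                          # net charge density  [C/m^3]
    J = np.sum((C * N)[:, None] * V, axis=0)       # current density     [A/m^2]
    f_ehd = rho_c * E                              # EHD force  rho_c * E
    f_mhd = np.cross(J, B)                         # MHD (Lorentz) force  J x B
    return f_ehd, f_mhd
```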
Vibrational Energy Conservation Equation
A particularity of the relatively-low-temperature weakly-ionized plasmas is the nonequilibrium of the electron temperature and vibrational temperature with respect to the translational temperature of the neutrals. An example of the large degree of thermal nonequilibrium near an electrode can be seen below in Fig. 1.
Figure 1. The large degree of thermal nonequilibrium typical of cold plasmas can here be seen through the temperature, vibrational temperature, and electron temperature near an electrode as computed by Parent et al. [2].
Because the nitrogen vibrational energy relaxation distance can reach several centimeters or even meters at the low translational temperatures typical of weakly-ionized air plasmas, it is necessary to solve a separate equation accounting for the transport of the nitrogen vibrational energy: [38,39] $$ \frac{\partial}{\partial t} \rho_{\rm N_2} e_{\rm v} + \sum_j \frac{\partial }{\partial x_j} \left( \rho_{\rm N_2} \boldsymbol{V}^{\rm n}_j e_{\rm v} -e_{\rm v} \nu_{\rm N_2} \frac{\partial w_{\rm N_2}}{\partial x_j} +q^{\rm v}_j \right)\\ = \eta_{\rm v} \underbrace{ Q_{\rm J}^{\rm e} }_{\begin{array}{l} \rm Joule\\ \rm Heating \end{array} } + {\frac{\rho_{\rm N_2}}{\tau_{\rm vt}}}\left( e_{\rm v}^0 -e_{\rm v} \right) + W_{\rm N_2} e_{\rm v} $$ where $e_{\rm v}$ is the nitrogen vibrational energy, $e_{\rm v}^0$ is the nitrogen vibrational energy that would be obtained should $T_{\rm v}=T$, $q^{\rm v}$ the vibrational energy heat flux, and $Q_{\rm J}^{\rm e}$ the Joule heating due to the electron velocity being different from the velocity of the bulk of the plasma. The fraction of the Joule heating consumed in the excitation of the vibration levels of the nitrogen molecule, $\eta_{\rm v}$, is obtained from the electron temperature as shown below in Fig. 2.
Figure 2. Fraction of energy consumed in the excitation of the vibration levels of the nitrogen molecule as a function of the electron temperature. From Ref. [36] and Ch. 21 of Ref. [37].
The nitrogen vibrational energy is hence seen to be highly dependent on the Joule heating especially when the electron temperature is in the range 7000-30,000 K. Because weakly-ionized air plasmas often exhibit an electron temperature in that range, a large amount of the Joule heating typically gets deposited in the form of nitrogen vibrational energy. Because the relaxation time $\tau_{\rm vt}$ of the nitrogen vibrational temperature is quite long in air in typical flight conditions [38,39], there is not enough time for most of the Joule heating to be transferred from the vibrational energy modes to the translational energy modes. Then, the heating does not result in a significant decrease of the gas density. This is a desirable feature when solving MHD generator flowfields since it can limit the negative effects of large density gradients on the generator performance. However, this may not be a desirable feature when trying to perform aerodynamic flow control through heat deposition because the latter performs satisfactorily only if a density gradient is created by the heating process (which would occur only if the Joule heating is converted to translational energy of the neutrals).
Electron Energy Conservation Equation
Because the electron temperature is in significant non-equilibrium with the neutrals temperature ($T_{\rm e}$ is typically 10-100 times higher than $T$), it is necessary to solve an additional transport equation for the electron energy. The electron energy transport equation (as outlined in Ref. [62]) can be derived from the first law of thermo applied to the electron fluid and substituting the pressure gradient from the momentum equation for the electron species shown above. We thus obtain: $$ \frac{\partial }{\partial t} \left( \rho_{\rm e} e_{\rm e} \right) + \sum_{i} \frac{\partial }{\partial x_i} \left(\rho_{\rm e} h_{\rm e} \boldsymbol{V}_i^{\rm e} \right) + \sum_{i} \frac{\partial q_i^{\rm e}}{\partial x_i} = W_{\rm e} e_{\rm e}+C_{\rm e} N_{\rm e}\boldsymbol{V}^{\rm e} \cdot \boldsymbol{E} - \frac{3 e P_{\rm e} \zeta_{\rm e}}{2 m_{\rm e} \mu_{\rm e}} - Q_{\rm ei} $$ where $\rho_{\rm e}$ is the electron density, $e_{\rm e}$ the electron translational energy, $h_{\rm e}$ the electron enthalpy, $q_i^{\rm e}$ the electron heat flux, $\zeta_{\rm e}$ the electron energy loss function, $P_{\rm e}$ the electron partial pressure, $m_{\rm e}$ the mass of one electron, $\mu_{\rm e}$ the electron mobility, $e$ the elementary charge, and $Q_{\rm ei}$ the energy the electrons lose per unit time per unit volume in creating new electrons through Townsend ionization. In the latter, the kinetic energy of the electrons does not appear, which is a consequence of the electron momentum equation not including the inertia terms, a valid assumption as long as the plasma remains weakly-ionized. It is noted that this does not necessarily entail that the change in kinetic energy of the electrons is negligible compared to the change in internal energy. In fact, the kinetic energy of the electrons is not negligible within cathode sheaths even when the plasma is weakly-ionized. But, including the kinetic energy terms would not improve the accuracy of the simulation in this case. In fact, including them would result in increased physical error because when combined with the momentum equation in which the inertia terms are neglected, the electron energy equation would not satisfy the first law of thermo. Because the inertia terms in the momentum equation are neglected, the kinetic energy terms should also be neglected for the energy transport equation to satisfy the first law.
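The third term on the right-hand side, which represents the electron energy lost to the heavy particles through collisions, depends only on quantities already defined above; a small helper evaluating it could look as follows (the function name and any input values are placeholders, not taken from the references).

```python
E_CHARGE = 1.602176634e-19   # elementary charge [C]
M_E = 9.1093837015e-31       # electron mass [kg]

def electron_collisional_loss(P_e, zeta_e, mu_e):
    """Electron energy loss rate 3*e*P_e*zeta_e / (2*m_e*mu_e)  [W/m^3].

    P_e    : electron partial pressure        [Pa]
    zeta_e : electron energy loss function    [-]
    mu_e   : electron mobility                [m^2/(V s)]
    """
    return 3.0 * E_CHARGE * P_e * zeta_e / (2.0 * M_E * mu_e)
```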
Total Energy Conservation Equation
The translational temperature of the neutrals and ions can be determined through the total energy transport equation which can be derived by summing the energy equations for each species as obtained from the first law of thermo and then making some simplifications applicable to a weakly-ionized plasma. The following is thus obtained: $$ \begin{array}{l}\displaystyle \frac{\partial }{\partial t}\left(\rho_{\rm N_2} e_{\rm v}+\sum_k \rho_k (e_k+h_k^\circ)+\frac{1}{2}\rho|\boldsymbol{V}^{\rm n}|^2 \right) \\ \displaystyle + \sum_{j} \frac{\partial }{\partial x_j} \left(\rho_{\rm N_2} \boldsymbol{V}_j^{\rm N_2} e_{\rm v} + \sum_{k} \rho_k \boldsymbol{V}^k_j (h_k+h_k^\circ)+\frac{1}{2}\rho \boldsymbol{V}^{\rm n}_j|\boldsymbol{V}^{\rm n}|^2 \right)\\ \displaystyle = -\sum_{i} \frac{\partial q_i}{\partial x_i} +\sum_{i} \sum_{j} \frac{\partial }{\partial x_j} \tau_{ji} \boldsymbol{V}_i^{\rm n} + \boldsymbol{E}\cdot\boldsymbol{J} + Q_{\rm b} \end{array} $$ where $Q_{\rm b}$ corresponds to the energy deposited to the gas by an external ionizer (such as electron beams, microwaves, laser beams, etc.), where $q_i$ is the total heat flux from the charged species and the neutrals, where $h_k$ is the species enthalpy and where $h_k^\circ$ is the species heat of formation. The species energy and enthalpy contain the translational, rotational, vibrational, and electronic energies at equilibrium. For all heavy species (the heavy species here refer to all ions and neutrals but do not include electrons) except nitrogen the translational, rotational, vibrational, and electronic energies are assumed to be at equilibrium at the temperature $T$; for nitrogen, the vibrational energy is determined from a separate transport equation, as outlined previously; for the electrons, the translational energy is determined from the electron energy transport equation.
Electric Field Potential Equation
In the momentum and energy equations outlined in the previous section, the electric and magnetic fields appeared as part of the MHD force, the EHD force, or the Joule heating. The electric and magnetic fields must hence be determined simultaneously to the fluid flow equations to close the system of equations. This can be done by solving the Maxwell equations. The Maxwell equations are particularly complex to solve as they involve the solution of 3 transport equations for the magnetic field and 3 other transport equations for the electric field. However, they can be reduced to simpler form by making some assumptions applicable to a weakly-ionized plasma. Indeed, because of the low ionization fraction of weakly-ionized plasmas, the electrical conductivity is expected to be quite small, leading in turn to a very small magnetic Reynolds number. At a small magnetic Reynolds number, the induced magnetic field can be assumed to be negligible whether or not an external magnetic field (originating from a permanent or electro magnet) is applied. Then, the partial differential equations solving for the induced magnetic fluxes do not need to be solved. We can simplify further the physical model by assuming steady-state of the electromagnetic fields with respect to the fluid flow (the so-called "electrostatic" assumption). The assumption of a steady-state for the electromagnetic fields is an excellent one for a weakly-ionized plasma even when solving unsteady fluid flows, since the flow speed and sound speed are considerably less than the electromagnetic wave speed (the speed of light). At steady-state, the curl of the electric field would be zero, and an electric field potential would exist. Then, the 3 transport equations for the electric field can be dropped in favour of one equation: the electric field potential equation. For a quasi-neutral weakly-ionized plasma, the electric field potential equation can thus be obtained from Gauss's law as follows: $$ \sum_{j=1}^3 \frac{\partial^2 \phi}{\partial x_j^2} = -\frac{1}{\epsilon_0} \sum_k C_k N_k $$ from which the electric field can be obtained as $ \boldsymbol{E}_j = - {\partial \phi}/{\partial x_j}$.
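A minimal two-dimensional sketch of how the resulting potential equation could be solved numerically is shown below; the grid, boundary conditions, and charge distribution are purely illustrative and not tied to any configuration discussed in the references.

```python
import numpy as np

eps0 = 8.854e-12                 # vacuum permittivity [F/m]
nx, ny, h = 101, 101, 1e-4       # illustrative grid size and spacing [m]
phi = np.zeros((nx, ny))         # potential, Dirichlet boundaries held at 0 V
rho_c = np.zeros((nx, ny))       # net charge density sum_k C_k N_k  [C/m^3]
rho_c[nx // 2, ny // 2] = 1e-6   # hypothetical charged spot

# Jacobi iteration for  d2(phi)/dx2 + d2(phi)/dy2 = -rho_c/eps0
for _ in range(5000):
    phi[1:-1, 1:-1] = 0.25 * (phi[2:, 1:-1] + phi[:-2, 1:-1] +
                              phi[1:-1, 2:] + phi[1:-1, :-2] +
                              h**2 * rho_c[1:-1, 1:-1] / eps0)

# Electric field E_j = -d(phi)/dx_j via centred differences
Ex, Ey = np.gradient(-phi, h, h)
```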
Limitations of the Physical Model
The physical model outlined above is hence valid both in the non-neutral sheaths and the quasi-neutral regions of weakly-ionized plasmas, and can accurately predict physical phenomena such as ambipolar diffusion, ambipolar drift, cathode sheaths, dielectric sheaths, unsteady effects in which the displacement current is significant, etc. Nonetheless, it is noted that the physical model considered herein makes several assumptions: (i) the induced magnetic field is assumed negligible, (ii) the drag force due to collisions between charged species is negligible compared to the one originating from collisions between charged species and neutrals, and (iii) the forces due to inertia change are assumed small compared to the forces due to collisions. The mathematical expressions for the latter forces as well as the justification for neglecting them when simulating weakly-ionized plasmas can be found in Ref. [1]. Finally, it is cautioned that, because the electric field is obtained from Gauss's law, the physical model outlined in this section cannot be used to tackle problems where the electric field is a significant function of a time-varying magnetic field, such as in inductively coupled plasmas or microwave induced plasmas. In those cases, the electric field would cease to be a potential field and would need to be determined through the full or simplified Maxwell equations. More details on when Gauss's law can and cannot be used to determine the electric field can be found in Refs. [62,63].
[1] B Parent, MN Shneider, and SO Macheret, "Generalized Ohm's Law and Potential Equation in Computational Weakly-Ionized Plasmadynamics," Journal of Computational Physics, Vol. 230, No. 4, 2011, pp. 1439–1453.
[2] B Parent, SO Macheret, MN Shneider, and N Harada, "Numerical Study of an Electron-Beam-Confined Faraday Accelerator," Journal of Propulsion and Power, Vol. 23, No. 5, 2007, pp. 1023–1032.
[30] B Parent, SO Macheret, and MN Shneider, "Electron and Ion Transport Equations in Computational Weakly-Ionized Plasmadynamics," Journal of Computational Physics, Vol. 259, 2014, pp. 51–69.
[31] NL Aleksandrov, EM Bazelyan, IV Kochetov, and NA Dyatko, "The Ionization Kinetics and Electric Field in the Leader Channel in Long Air Gaps," Journal of Physics D Applied Physics, Vol. 30, 1997, pp. 1616–1624.
[32] A Kossyi, AY Kostinsky, AA Matveyev, and VP Silakov, "Kinetic Scheme of the Non-Equilibrium Discharge in Nitrogen-Oxygen Mixtures," Plasma Sources Science and Technology, Vol. 1, 1992, pp. 207–220.
[33] EM Bazelyan and YP Raizer, Spark Discharge, CRC, Boca Raton, Florida, 1997.
[34] YI Bychkov, YD Korolev, and GA Mesyats, Inzhektsionnaia Gazovaia Elektronika, Nauka, Novosibirsk, Russia, 1982, (Injection Gaseous Electronics, in Russian).
[35] OE Krivonosova, SA Losev, VP Nalivayko, YK Mukoseev, and OP Shatalov, Khimiia Plazmy [Plasma Chemistry], edited by B. M. Smirnov, Vol. 14, Energoatomizdat, Moscow, Russia, 1987, p. 3.
[36] NL Aleksandrov, FI Vysikailo, RS Islamov, IV Kochetov, AP Napartovich, and VG Pevgov, "Electron Distribution Function in 4:1 N2-O2 Mixture," High Temperature, Vol. 19, No. 1, 1981, pp. 17–21.
[37] IS Grigoriev and EZ Meilikhov, Handbook of Physical Quantities, CRC, Boca Raton, Florida, 1997.
[38] SO Macheret, MN Shneider, and RB Miles, "Electron-Beam-Generated Plasmas in Hypersonic Magnetohydrodynamic Channels," AIAA Journal, Vol. 39, No. 6, 2001, pp. 1127–1138.
[39] SO Macheret, L Martinelli, and RB Miles, "Shock Wave Propagation and Structure in Non-Uniform Gases and Plasmas," 1999, AIAA Paper 99-0598.
[62] YP Raizer, Gas Discharge Physics, Springer-Verlag, Berlin, Germany, 1991.
[63] YP Raizer, MN Shneider, and NA Yatsenko, Radio-Frequency Capacitive Discharges, CRC Press, U.S.A., 1995. | CommonCrawl |
Marine Chemistry
2021, 40(9): 1-2.
Denitrification-nitrification process in permeable coastal sediments: An investigation on the effect of salinity and nitrate availability using flow-through reactors
Shan Jiang, Mark Kavanagh, Juan Severino Pino Ibánhez, Carlos Rocha
Permeable coastal sediments act as a reactive node in the littoral zone, transforming nutrients via a wide range of biogeochemical reactions. Reaction rates are controlled by abiotic factors, e.g., salinity, temperature or solute concentration. Here, a series of incubation experiments, using flow-through reactors, were conducted to simulate the biogeochemical cycling of nitrate (${\rm NO}_3^-$) and phosphorus (P) in permeable sediments under different ${\rm NO}_3^-$ availability conditions (factor I) along a salinity gradient (admixture of river and seawater, factor II). In an oligotrophic scenario, i.e., unamended ${\rm NO}_3^-$ concentrations in both river and seawater, sediments acted as a permanent net source of ${\rm NO}_3^-$ to the water column. The peak production rate occurred at an intermediate salinity (20). Increasing ${\rm NO}_3^-$ availability in river water significantly enhanced net ${\rm NO}_3^-$ removal rates within the salinity range of 0 to 30, likely via the denitrification pathway based on the sediment microbiota composition. In this scenario, the most active removal was obtained at salinity of 10. When both river and seawater were spiked with ${\rm NO}_3^-$, the highest removal rate switched to the highest salinity (36). It suggests the salinity preference of the ${\rm NO}_3^-$ removal pathway by local denitrifiers (e.g., Bacillus and Paracoccus) and that ${\rm NO}_3^-$ removal in coastal sediments can be significantly constrained by the dilution related ${\rm NO}_3^-$ availability. Compared with the obtained variation for ${\rm NO}_3^-$ reactions, permeable sediments acted as a sink of soluble reactive P in all treatments, regardless of salinity and ${\rm NO}_3^-$ input concentrations, indicating a possibility of P-deficiency for coastal water from the intensive cycling in permeable sediments. Furthermore, the net production of dissolved organic carbon (DOC) in all treatments was positively correlated with the measured ${\rm NO}_3^-$ reaction rates, indicating that the DOC supply may not be the key factor for ${\rm NO}_3^-$ removal rates due to the consumption by intensive aerobic respiration. Considering the intensive production of recalcitrant carbon solutes, the active denitrification was assumed to be supported by sedimentary organic matter.
Estimating submarine groundwater discharge at a subtropical river estuary along the Beibu Gulf, China
Xilong Wang, Kaijun Su, Juan Du, Linwei Li, Yanling Lao, Guizhen Ning, Li Bin
In certain regions, submarine groundwater discharge (SGD) into the ocean plays a significant role in coastal material fluxes and their biogeochemical cycle; therefore, the impact of SGD on the ecosystem cannot be ignored. In this study, SGD was estimated using naturally occurring radium isotopes (²²³Ra and ²²⁴Ra) in a subtropical estuary along the Beibu Gulf, China. The results showed that the Ra activities of submarine groundwater were approximately 10 times higher than those of surface water. By assuming a steady state and using an Ra mass balance model, the SGD flux in May 2018 was estimated to be 5.98×10⁶ m³/d and 3.60×10⁶ m³/d based on ²²⁴Ra and ²²³Ra, respectively. At the same time, the activities of Ra isotopes fluctuated within a tidal cycle; that is, a lower activity was observed at high tide and a higher activity was seen at low tide. Based on these variations, the average tidal pumping fluxes of SGD were 1.15×10⁶ m³/d and 2.44×10⁶ m³/d with ²²⁴Ra and ²²³Ra, respectively. Tidal-driven SGD accounts for 24%–51% of the total SGD. Therefore, tidal pumping is an important driving force of the SGD in the Dafengjiang River (DFJR) Estuary. Furthermore, the SGD of the DFJR Estuary in the coastal zone contributes significantly to the seawater composition of the Beibu Gulf and the material exchange between land and sea.
Pore-water geochemistry in methane-seep sediments of the Makran accretionary wedge off Pakistan: Possible link to subsurface methane hydrate
Xianrong Zhang, Jianming Gong, Zhilei Sun, Jing Liao, Bin Zhai, Libo Wang, Xilin Zhang, Cuiling Xu, Wei Geng
Cold seeps are pervasive along the continental margin worldwide, and are recognized as hotspots for elemental cycling pathway on Earth. In this study, analyses of pore water geochemical compositions of one ~400 cm piston core (S3) and the application of a mass balance model are conducted to assess methane-associated biogeochemical reactions and uncover the relationship of methane in shallow sediment with gas hydrate reservoir at the Makran accretionary wedge off Pakistan. The results revealed that approximately 77% of sulfate is consumed by the predominant biogeochemical process of anaerobic oxidation of methane. However, the estimated sulfate-methane interface depth is ~400 cm below sea floor with the methane diffusive flux of 0.039 mol/(m²·a), suggesting the activity of methane seepage. Based on the δ¹³C-DIC mass balance model combined with the contribution proportion of different dissolved inorganic carbon sources, this study calculated the δ¹³C of the exogenous methane to be −57.9‰, indicating that the exogenous methane may be a mixture source, including thermogenic and biogenic methane. The study of pore water geochemistry at Makran accretionary wedge off Pakistan may have considerable implications for understanding the specific details on the dynamics of methane in cold seeps and provide important evidence for the potential occurrence of subsurface gas hydrate in this area.
Tectonic unit divisions based on block tectonics theory in the South China Sea and its adjacent areas
Zhengxin Yin, Zhourong Cai, Cheng Zhang, Xiaofeng Huang, Qianru Huang, Liang Chen
Identifying distinct tectonic units is key to understanding the geotectonic framework and distribution law of oil and gas resources. The South China Sea and its adjacent areas have undergone complex tectonic evolution processes, and the division of tectonic units is controversial. Guided by block tectonics theory, this study divides the South China Sea and its adjacent areas into several distinct tectonic units relying on known boundary markers such as sutures (ophiolite belts), subduction-collision zones, orogenic belts, and deep faults. This work suggests that the study area is occupied by nine stable blocks (West Burma Block, Sibumasu Block, Lanping-Simao Block, Indochina Block, Yangtze Block, Cathaysian Block, Qiongnan Block, Nansha Block, and Northwest Sulu Block), two suture zones (Majiang suture zone and Southeast Yangtze suture zone), two accretionary zones (Sarawak-Sulu accretionary zone and East Sulawesi accretionary zone), one subduction-collision zone (Rakhine-Java-Timor subduction-collision zone), one ramp zone (Philippine islands ramp zone), and six small oceanic marginal sea basins (South China Sea Basin, Sulu Sea Basin, Sulawesi Sea Basin, Banda Sea Basin, Makassar Basin, and Andaman Sea Basin). This division reflects the tectonic activities, crustal structural properties, and evolutionary records of each evaluated tectonic unit. It is of great theoretical and practical importance to understand the tectonic framework to support the exploration of oil and gas resources in the South China Sea and its adjacent areas.
Copper and zinc isotope variations in ferromanganese crusts and their isotopic fractionation mechanism
Lianhua He, Jihua Liu, Hui Zhang, Jingjing Gao, Aimei Zhu, Ying Zhang
Ferromanganese (Fe-Mn) crusts are potential archives of the Cu and Zn isotope compositions of seawater through time. In this study, the Cu and Zn isotopes of the top surface of 28 Fe-Mn crusts and 2 Fe-Mn nodules were analysed by MC-ICP-MS using combined sample-standard bracketing for mass bias correction. The Zn isotope compositions of the top surface of Fe-Mn crusts are in the range of 0.71‰ to 1.08‰, with a mean δ⁶⁶Zn value of 0.94‰±0.21‰ (2SD, n=28). The δ⁶⁵Cu values of the top surface of Fe-Mn crusts range from 0.33‰ to 0.73‰, with a mean value of 0.58‰±0.20‰ (2SD, n=28). The Cu isotope compositions of Fe-Mn crusts are isotopically lighter than that of dissolved Cu in deep seawater (0.58‰ vs. 0.9‰). In contrast, the δ⁶⁶Zn values of Fe-Mn crusts appear to be isotopically heavy compared to deep seawater (0.94‰±0.21‰ vs. 0.51‰±0.14‰). The isotope fractionation between Fe-Mn crusts and seawater is attributed to equilibrium partitioning between the sorption to crusts and the organic-ligand-bound Cu and Zn in seawater. The Cu and Zn isotopes in the top surface of Fe-Mn crusts are not a direct reflection of the Cu and Zn isotopes, but a function of Cu and Zn isotopes in modern seawater. This study proposes that Fe-Mn crusts have the potential to be archives for paleoceanography through Cu and Zn isotope analysis.
Assessment of the exploitable biomass of thread herring (Opisthonema spp.) in northwestern Mexico
Marcelino Ruiz-Domínguez, Casimiro Quiñonez-Velázquez, Dana Isela Arizmendi-Rodriguez, Víctor Manuel Gómez-Muñoz, Manuel Otilio Nevárez-Martínez
In recent years, the small pelagic fishery on the Pacific northwest coast of Mexico has significantly increased fishing pressure on thread herring Opisthonema spp. This fishery is regulated using a precautionary approach (acceptable biological catch (ABC) and minimum catch size). However, due to fishing dynamics, fish aggregation habits and increased fishing mortality, periodic biomass assessments are necessary to estimate ABC and assess the resource status. The Catch-MSY approach was used to analyze historical series of thread herring catches off western Baja California Sur (BCS, 1981–2018) and in the Gulf of California (GC, 1972–2018) to estimate exploitable biomass and target reference points in order to obtain catch quotas. According to the results, in the GC, the maximum biomass was reached in 1972 (at the beginning of the fishery) and the minimum in 2015; the estimated exploitable biomass for 2019 was 42.2×10⁴ t; and the maximum sustainable yield (MSY) was 15.4×10⁴ t. On the western BCS coast, the maximum biomass was reached in 1981 (at the beginning of the fishery) and the minimum in 2017; the estimated exploitable biomass for 2019 was 3.2×10⁴ t; and the MSY was 1.2×10⁴ t. Both stocks showed a decrease in biomass over the past years and are currently near the point of full exploitation. The results suggest that the Catch-MSY method is suitable for obtaining annual biomass estimates, in order to establish an ABC, to know the current state of the resource, and to avoid compromising the potential recovery of the stocks.
Estimating genetic parameters with molecular relatedness and pedigree reconstruction for growth traits in early mixed breeding of juvenile turbot
Song Sun, Weiji Wang, Yulong Hu, Sheng Luan, Ding Lyu, Jie Kong
An introduced turbot population was used to establish families and to estimate genetic parameters of the offspring. However, there is a lack of pedigree information, and common environmental effects can be introduced when each full-sib family is raised in a single tank. Therefore, in the genetic evaluation, SSRs (simple sequence repeats) were used to reconstruct the pedigree and to calculate molecular relatedness between individuals, and the early mixed-family culture model was used to remove the impact of the common environmental effects. After 100 d of early mixed culture, twenty SSRs were used to cluster 20 families and to calculate paired molecular relationships (n=880). Additive genetic matrices were constructed using molecular relatedness (MR) and pedigree reconstruction (PR) and were then applied to the same animal model to estimate genetic parameters. Based on PR, the heritabilities for body weight and body length were 0.214±0.124 and 0.117±0.141, and based on MR they were 0.101±0.031 and 0.102±0.034, respectively. Cross validation showed that the accuracies of the estimated breeding values based on MR (body weight and body length of 0.717±0.045 and 0.629±0.141, respectively) were higher than those of PR (body weight and body length of 0.692±0.052 and 0.615±0.060, respectively). The MR method ensures the availability of all genotyped selection candidates, thereby improving the accuracy of the breeding value estimation.
The mitochondrial genome of Chaeturichthys stigmatias provides novel insight into the interspecific difference with Amblychaeturichthys hexanema
Jian Zheng, Bingjie Chen, Tianxiang Gao, Na Song
Chaeturichthys stigmatias and Amblychaeturichthys hexanema belong to the family Gobiidae; they are offshore warm-water fish species widely distributed in the western Pacific Ocean. In this study, the mitochondrial cytochrome c oxidase subunit I (COI) sequences and 12S ribosomal RNA (12S rRNA) sequences were used to analyze the interspecific differences between the two species. The phylogenetic analysis showed that the interspecific distance was significantly higher than the intraspecific genetic distance. The Neighbor-Joining tree showed two separate clusters, with no shared haplotypes. The mitochondrial genome sequence of C. stigmatias was also reported. This genome was 17 134 bp in size, with a high A+T content of 55.9%. The phylogenetic analysis based on the tandem 13 protein-coding gene nucleotide sequences indicated that C. stigmatias shows a close relationship with A. hexanema. This study provides basic genetic data for the two species and will help construct the phylogeny of the Gobiidae.
Effects of rising temperature on growth and energy budget of juvenile Eogammarus possjeticus (Amphipoda: Anisogammaridae)
Suyan Xue, Yuze Mao, Jiaqi Li, Jianguang Fang, Fazhen Zhao
The growth and energy budget of the juvenile marine amphipod Eogammarus possjeticus at different temperatures (20°C, 24°C, 26°C, 28°C, 30°C, 32°C and 34°C) were investigated in this study. The results showed that the cumulative mortality rate increased significantly with rising temperature (p<0.01), and exceeded 50% after 24 h when the temperature was above 30°C. With the temperature increasing from 20°C to 26°C, the ingestion rate and absorption rate increased, but they decreased significantly above 28°C (p<0.01), indicating a decline in feeding ability at high temperatures. The specific growth rate increased with rising temperature, but decreased significantly (p<0.01) after reaching its maximum value at 24°C. Similarly, the oxygen consumption and ammonia excretion rates also first increased and then decreased. However, the O:N ratio decreased first and then increased with rising temperature, indicating that the energy demand of E. possjeticus juveniles shifted from the metabolism of carbohydrate and lipid to that of protein. In the energy distribution of the amphipods, the proportion of each energy component differed. With rising temperature, the proportion of ingested gross energy deposited for growth showed a decreasing trend, while the proportions lost to respiration, ammonia excretion, and feces showed an increasing trend. It seemed that rising temperature increased the metabolism and energy consumption of the amphipods while decreasing the energy used for growth, which may be an important reason for the slow growth and small body size of the amphipods during the summer high-temperature period.
Two new species of the genus Nassarius (Gastropoda: Nassariidae) from the South China Sea
Suping Zhang, Shuqian Zhang, Haitao Li
Two species of Nassarius Duméril, 1805 from the South China Sea are described and illustrated. The specimens are in the Nassariidae collection of the Marine Biological Museum of Chinese Academy of Sciences, Qingdao. Nassarius concavus sp. nov., from the sandy bottom at a depth of 180 m, resembles Nassarius glabrus Zhang and Zhang, 2014 in general shell morphology, but differs from the latter in having a smaller, more slender adult shell without axial ribs on the upper teleoconch whorls. Nassarius nanshaensis sp. nov., from the Nansha Islands at a depth of 56–147 m, is similar to Nassarius maxiutongi Zhang, Zhang and Li, 2019 in the shell sculpture, but differs in having a more slender shell with a higher spire, and fewer cusps on the rachidian tooth (9–11 vs. 13–17).
Simultaneous nitrification and denitrification conducted by Halomonas venusta MA-ZP17-13 under low temperature conditions
Guizhen Li, Qiliang Lai, Guangshan Wei, Peisheng Yan, Zongze Shao
2021, 40(9): 94-104. doi: 10.1007/s13131-021-1897-9
Nitrification is a key step in the global nitrogen cycle. Compared with autotrophic nitrification, heterotrophic nitrification remains poorly understood. In this study, Halomonas venusta MA-ZP17-13, isolated from seawater in shrimp (Penaeus vannamei) aquaculture, was shown to carry out simultaneous nitrification and denitrification. With an initial ammonium concentration of 100 mg/L, the maximum ammonium-nitrogen removal rate reached 98.7% under the optimal conditions of a C/N ratio of 5.95, pH 8.93, and 2.33% NaCl. The corresponding average removal rate was 1.37 mg/(L·h) (as nitrogen) over 3 d at 11.2°C. Whole-genome sequencing and analysis identified nitrification- and denitrification-related genes, including ammonia monooxygenase, nitrate reductase, nitrite reductase, nitric oxide dioxygenase and nitric oxide synthase; no gene encoding hydroxylamine oxidase was identified, implying a novel nitrification pathway from hydroxylamine to nitrate. These results indicate that the heterotrophic bacterium H. venusta MA-ZP17-13 can undertake simultaneous nitrification and denitrification at low temperature and has potential for NH4+-N/NH3-N removal in marine aquaculture systems.
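The removal figures reduce to simple arithmetic. The sketch below assumes removal was measured over the reported 3 d (72 h) window and uses hypothetical start and end concentrations chosen to be consistent with the values above.

```python
def ammonium_removal(initial_mg_l, final_mg_l, hours):
    """Return (percent removed, average volumetric removal rate in mg/(L·h))."""
    removed = initial_mg_l - final_mg_l
    return 100.0 * removed / initial_mg_l, removed / hours

# Hypothetical: 100 mg/L reduced to 1.3 mg/L over 3 d (72 h)
pct, rate = ammonium_removal(100.0, 1.3, 72)
print(f"{pct:.1f}% removed, {rate:.2f} mg/(L·h)")  # 98.7% removed, 1.37 mg/(L·h)
```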
Dynamic simulation of tropical coral reef ecosystem being disturbed by multiple situations
Geng Wang, Rui Dong, Huimin Xu, Dewen Ding
In view of the rapid degradation of coral reef ecosystems, establishing models helps in understanding the degradation mechanism of coral reef ecosystems and predicting the development of coral reef communities. Based on the characteristics of the complex tropical coral reef ecosystems of China, the coral reef functional group is taken as the core level variable; combined with the multiple feedback effects between coral reef functional groups and environmental changes, the study presents a coral reef ecosystem dynamics model with hermatypic corals at its core. Based on the simulation of the assumed initial values and the internal feedback of the system, the results show that in the basic simulation (relatively healthy conditions), the coverage of live corals and coral reefs generally decreased first and then increased, rising by 4.67% and 6.38%, respectively, between 2010 and 2050. Based on the calibrated model and the current situation of the studied area, the multi-factor disturbance effects on coral reef communities were explored by setting up three scenarios involving fishing policy, terrestrial sedimentation, and inorganic nitrogen emissions. Under single-factor disturbance, fishing policy exerts the most direct impact on community decline, with an obvious succession phenomenon; terrestrial sedimentation has a faster and more integrated effect on community decline; and the effect of inorganic nitrogen emissions on community decline is relatively slow. Under double- or multi-factor disturbance, the superimposed disturbances aggravate the multi-source feedback effects on coral reef community development, accelerate the rate of community decay, and make the development trajectory more complicated and diverse. This approach provides a scientific and feasible method for simulating long-term damage to coral reef communities and exploring the development laws and adaptive management of coral reef ecosystems. In the future, it can be further applied to the ecological restoration and decision-making of coral reefs.
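A toy sketch of the system-dynamics idea, not the authors' model: live coral cover grows logistically and is reduced by a scenario-dependent disturbance pressure that can stand in for fishing, sedimentation, or nutrient loading. All parameter values are hypothetical.

```python
# Toy system-dynamics sketch (hypothetical parameters, not the calibrated model).
def simulate_coral(cover0=30.0, years=40, growth=0.08, capacity=80.0, disturbance=0.0):
    """Return yearly live-coral cover (%) under logistic growth minus a disturbance loss."""
    cover = cover0
    trajectory = [cover]
    for _ in range(years):
        recruitment = growth * cover * (1.0 - cover / capacity)  # logistic growth term
        loss = disturbance * cover                                # scenario pressure
        cover = max(cover + recruitment - loss, 0.0)
        trajectory.append(cover)
    return trajectory

baseline = simulate_coral()                   # relative-health scenario
stressed = simulate_coral(disturbance=0.05)   # single-factor disturbance scenario
print(baseline[-1], stressed[-1])
```

Superimposing several loss terms in the same loop is the simplest way to see why combined disturbances accelerate decline faster than any single factor.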
Responses of macrobenthic communities to patchy distributions of heavy metals and petroleum hydrocarbons in sediments: A study in China's Zhoushan Archipelago
Yanbin Tang, Qiang Liu, Yibo Liao, Konglin Zhou, Lu Shou
This study conducted four cruises during 2014–2017 to investigate relationships between macrobenthic communities and sediment contamination in the sea area around the Zhoushan Archipelago. Fourteen sites were categorized into three groups: high total heavy metal contamination content (HHMC), high total petroleum hydrocarbon content (HTPH), and low ratio of heavy metal contamination content to total petroleum hydrocarbon content (HMC/TPH) areas. Four main taxa of macrofauna (polychaetes, bivalves, gastropods, and crustaceans) were found to respond to environmental factors differently. While tolerant polychaetes were the least affected by environmental factors, bivalves were threatened by heavy metal pollution in the sediment. Additionally, body-size frequency distributions demonstrated that macrofauna in the low HMC/TPH areas were less disturbed by contamination than those in the HHMC and HTPH areas; this result reflects the presence of sensitive species, whereas tolerant species are usually small-bodied organisms. Overall, this study confirmed the hypothesis that the contamination levels of small-scale patches are indicated by the condition of macrobenthic communities.
A survey of integral representation theory
by Irving Reiner
Maurice Auslander and Oscar Goldman, Maximal orders, Trans. Amer. Math. Soc. 97 (1960), 1–24. MR 117252, DOI 10.1090/S0002-9947-1960-0117252-7
Maurice Auslander and Oscar Goldman, The Brauer group of a commutative ring, Trans. Amer. Math. Soc. 97 (1960), 367–409. MR 121392, DOI 10.1090/S0002-9947-1960-0121392-6
Gorô Azumaya, Corrections and supplementaries to my paper concerning Krull-Remak-Schmidt's theorem, Nagoya Math. J. 1 (1950), 117–124. MR 37832, DOI 10.1017/S002776300002290X
4. D. Ballew, The module index, projective modules and invertible ideals, Ph.D. Thesis, University of Illinois, Urbana, Ill., 1969.
David W. Ballew, The module index and invertible ideals, Trans. Amer. Math. Soc. 148 (1970), 171–184. MR 255589, DOI 10.1090/S0002-9947-1970-0255589-8
B. Banaschewski, Integral group rings of finite groups, Canad. Math. Bull. 10 (1967), 635–642. MR 232864, DOI 10.4153/CMB-1967-061-0
P. M. Gudivok and L. F. Barannik, Projective representations of finite groups over rings, Dopovīdī Akad. Nauk Ukraïn. RSR Ser. A 1968 (1968), 294–297 (Ukrainian, with Russian and English summaries). MR 0228597
L. F. Barannik and P. M. Gudivok, Indecomposable projective representations of finite groups, Dopovīdī Akad. Nauk Ukraïn. RSR Ser. A 1969 (1969), 391–393, 472 (Ukrainian, with English and Russian summaries). MR 0276370
Hyman Bass, Finitistic dimension and a homological generalization of semi-primary rings, Trans. Amer. Math. Soc. 95 (1960), 466–488. MR 157984, DOI 10.1090/S0002-9947-1960-0157984-8
Hyman Bass, Projective modules over algebras, Ann. of Math. (2) 73 (1961), 532–542. MR 177012, DOI 10.2307/1970315
Hyman Bass, Torsion free and projective modules, Trans. Amer. Math. Soc. 102 (1962), 319–327. MR 140542, DOI 10.1090/S0002-9947-1962-0140542-0
Hyman Bass, On the ubiquity of Gorenstein rings, Math. Z. 82 (1963), 8–28. MR 153708, DOI 10.1007/BF01112819
H. Bass, $K$-theory and stable algebra, Inst. Hautes Études Sci. Publ. Math. 22 (1964), 5–60. MR 174604, DOI 10.1007/BF02684689
Hyman Bass, The Dirichlet unit theorem, induced characters, and Whitehead groups of finite groups, Topology 4 (1965), 391–410. MR 193120, DOI 10.1016/0040-9383(66)90036-X
14. H. Bass, Algebraic K-theory, Math. Lecture Note Series, Benjamin, New York, 1968.
Edward A. Bender, Classes of matrices and quadratic fields, Linear Algebra Appl. 1 (1968), 195–201. MR 230741, DOI 10.1016/0024-3795(68)90003-7
S. D. Berman, On certain properties of integral group rings, Doklady Akad. Nauk SSSR (N.S.) 91 (1953), 7–9 (Russian). MR 0056603
S. D. Berman, On isomorphism of the centers of group rings of $p$-groups, Doklady Akad. Nauk SSSR (N.S.) 91 (1953), 185–187 (Russian). MR 0056604
S. D. Berman, On a necessary condition for isomorphism of integral group rings, Dopovidi Akad. Nauk Ukrain. RSR 1953 (1953), 313–316 (Ukrainian, with Russian summary). MR 0059909
S. D. Berman, On the equation $x^m=1$ in an integral group ring, Ukrain. Mat. Ž. 7 (1955), 253–261 (Russian). MR 0077521
S. D. Berman, On certain properties of group rings over the field of rational numbers, Užgorod. Gos. Univ. Naučn. Zap. Him. Fiz. Mat. 12 (1955), 88–110 (Russian). MR 0097451
21. S. D. Berman, On automorphisms of the center of an integral group ring, Užgorod. Gos. Univ. Naučn. Zap. Him. Fiz. Mat. (1960), no. 3, 55. (Russian)
S. D. Berman, Integral representations of finite groups, Dokl. Akad. Nauk SSSR 152 (1963), 1286–1287 (Russian). MR 0154910
S. D. Berman, On the theory of integral representations of finite groups, Dokl. Akad. Nauk SSSR 157 (1964), 506–508 (Russian). MR 0165017
S. D. Berman, Integral representations of a cyclic group containing two irreducible rational components, In Memoriam: N. G. Čebotarev (Russian), Izdat. Kazan. Univ., Kazan, 1964, pp. 18–29 (Russian). MR 0195958
S. D. Berman, On integral monomial representations of finite groups, Uspehi Mat. Nauk 20 (1965), no. 4 (124), 133–134 (Russian). MR 0195959
S. D. Berman, Representations of finite groups over an arbitrary field and over rings of integers, Izv. Akad. Nauk SSSR Ser. Mat. 30 (1966), 69–132 (Russian). MR 0197582
S. D. Berman and P. M. Gudivok, Integral representations of finite groups, Dokl. Akad. Nauk SSSR 145 (1962), 1199–1201 (Russian). MR 0139664
S. D. Berman and P. M. Gudivok, Indecomposable representations of finite groups over the ring of $p$-adic integers, Izv. Akad. Nauk SSSR Ser. Mat. 28 (1964), 875–910 (Russian). MR 0166273
S. D. Berman and A. I. Lihtman, On integral representations of finite nilpotent groups, Uspehi Mat. Nauk 20 (1965), 186–188 (Russian). MR 0207859
S. D. Berman and A. R. Rossa, Integral group-rings of finite and periodic groups, Algebra and Math. Logic: Studies in Algebra (Russian), Izdat. Kiev. Univ., Kiev, 1966, pp. 44–53 (Russian, with English summary). MR 0209367
31a. A. Bialnicki-Birula, On the equivalence of integral representations of finite groups, Proc. Amer. Math. Soc. (to appear).
Z. I. Borevič and D. K. Faddeev, Theory of homology in groups. I, Vestnik Leningrad. Univ. 11 (1956), no. 7, 3–39 (Russian). MR 0080088
Z. I. Borevič and D. K. Faddeev, Integral representations of quadratic rings, Vestnik Leningrad. Univ. 15 (1960), no. 19, 52–64 (Russian, with English summary). MR 0153707
Z. I. Borevič and D. K. Faddeev, Representations of orders with cyclic index, Trudy Mat. Inst. Steklov 80 (1965), 51–65 (Russian). MR 0205980
Z. I. Borevič and D. K. Faddeev, A remark on orders with a cyclic index, Dokl. Akad. Nauk SSSR 164 (1965), 727–728 (Russian). MR 0190187
36. N. Bourbaki, Algèbre commutative, Actualités Sci. Indust., no. 1293, Hermann, Paris, 1961. MR 30 #2027.
A. A. Bovdi, Periodic normal subgroups of the multiplicative group of a group ring, Sibirsk. Mat. Ž. 9 (1968), 495–498 (Russian). MR 0227268
38. J. O. Brooks, Classification of representation modules over quadratic orders, Ph.D. Thesis, University of Michigan, Ann Arbor, Mich., 1964.
Armand Brumer, Structure of hereditary orders, Bull. Amer. Math. Soc. 69 (1963), 721–724. MR 152565, DOI 10.1090/S0002-9904-1963-11002-2
Henri Cartan and Samuel Eilenberg, Homological algebra, Princeton University Press, Princeton, N. J., 1956. MR 0077480
41. C. Chevalley, L'arithmétique dans les algèbres de matrices, Actualités Sci. Indust., no. 323, Hermann, Paris, 1936.
James A. Cohn and Donald Livingstone, On groups of order $p^{3}$, Canadian J. Math. 15 (1963), 622–624. MR 153739, DOI 10.4153/CJM-1963-063-1
James A. Cohn and Donald Livingstone, On the structure of group algebras. I, Canadian J. Math. 17 (1965), 583–593. MR 179266, DOI 10.4153/CJM-1965-058-2
D. B. Coleman, Idempotents in group rings, Proc. Amer. Math. Soc. 17 (1966), 962. MR 193158, DOI 10.1090/S0002-9939-1966-0193158-3
S. B. Conlon, Structure in representation algebras, J. Algebra 5 (1967), 274–279. MR 202860, DOI 10.1016/0021-8693(67)90040-3
S. B. Conlon, Relative components of representations, J. Algebra 8 (1968), 478–501. MR 223427, DOI 10.1016/0021-8693(68)90056-2
S. B. Conlon, Decompositions induced from the Burnside algebra, J. Algebra 10 (1968), 102–122. MR 237664, DOI 10.1016/0021-8693(68)90107-5
S. B. Conlon, Monomial representations under integral similarity, J. Algebra 13 (1969), 496–508. MR 252527, DOI 10.1016/0021-8693(69)90111-2
Ian G. Connell, On the group ring, Canadian J. Math. 15 (1963), 650–685. MR 153705, DOI 10.4153/CJM-1963-067-0
Charles W. Curtis and Irving Reiner, Representation theory of finite groups and associative algebras, Pure and Applied Mathematics, Vol. XI, Interscience Publishers (a division of John Wiley & Sons, Inc.), New York-London, 1962. MR 0144979
E. C. Dade, Rings, in which no fixed power of ideal classes becomes invertible. Note to the preceding paper of Dade, Taussky and Zassenhaus, Math. Ann. 148 (1962), 65–66. MR 140545, DOI 10.1007/BF01438390
E. C. Dade, Some indecomposable group representations, Ann. of Math. (2) 77 (1963), 406–412. MR 144981, DOI 10.2307/1970222
E. C. Dade, The maximal finite groups of $4\times 4$ integral matrices, Illinois J. Math. 9 (1965), 99–122. MR 170958
E. C. Dade, D. W. Robinson, O. Taussky, and M. Ward, Divisors of recurrent sequences, J. Reine Angew. Math. 214(215) (1964), 180–183. MR 161875, DOI 10.1515/crll.1964.214-215.180
E. C. Dade and O. Taussky, Some new results connected with matrices of rational integers, Proc. Sympos. Pure Math., Vol. VIII, Amer. Math. Soc., Providence, R.I., 1965, pp. 78–88. MR 0184924
E. C. Dade and O. Taussky, On the different in orders in an algebraic number field and special units connected with it, Acta Arith. 9 (1964), 47–51. MR 166183, DOI 10.4064/aa-9-1-47-51
E. C. Dade, O. Taussky, and H. Zassenhaus, On the semigroup of ideal classes in an order of an algebraic number field, Bull. Amer. Math. Soc. 67 (1961), 305–308. MR 136597, DOI 10.1090/S0002-9904-1961-10594-6
E. C. Dade, O. Taussky, and H. Zassenhaus, On the theory of orders, in paricular on the semigroup of ideal classes and genera of an order in an algebraic number field, Math. Ann. 148 (1962), 31–64. MR 140544, DOI 10.1007/BF01438389
54. K. deLeeuw, Some applications of cohomology to algebraic number theory and group representations (unpublished).
Frank R. DeMeyer, The trace map and separable algebras, Osaka Math. J. 3 (1966), 7–11. MR 228542
Max Deuring, Algebren, Ergebnisse der Mathematik und ihrer Grenzgebiete, Band 41, Springer-Verlag, Berlin-New York, 1968 (German). Zweite, korrigierte auflage. MR 0228526, DOI 10.1007/978-3-642-85533-7
Fritz-Erdmann Diederichsen, Über die Ausreduktion ganzzahliger Gruppendarstellungen bei arithmetischer Äquivalenz, Abh. Math. Sem. Hansischen Univ. 13 (1940), 357–412 (German). MR 2133, DOI 10.1007/BF02940768
Andreas Dress, On the Krull-Schmidt theorem for integral group representations of rank $1$, Michigan Math. J. 17 (1970), 273–277. MR 263933
Andreas Dress, An intertwining number theorem for integral representations and applications, Math. Z. 116 (1970), 153–165. MR 267011, DOI 10.1007/BF01109959
Andreas Dress, On the decomposition of modules, Bull. Amer. Math. Soc. 75 (1969), 984–986. MR 244227, DOI 10.1090/S0002-9904-1969-12326-8
Andreas Dress, On integral representations, Bull. Amer. Math. Soc. 75 (1969), 1031–1034. MR 249466, DOI 10.1090/S0002-9904-1969-12349-9
Andreas Dress, The ring of monomial representations. I. Structure theory, J. Algebra 18 (1971), 137–157. MR 274607, DOI 10.1016/0021-8693(71)90132-3
Andreas Dress, On relative Grothendieck rings, Bull. Amer. Math. Soc. 75 (1969), 955–958. MR 244401, DOI 10.1090/S0002-9904-1969-12311-6
V. S. Drobotenko, Integral representations of primary abelian groups, Algebra and Math. Logic: Studies in Algebra (Russian), Izdat. Kiev. Univ., Kiev, 1966, pp. 111–121 (Russian, with English summary). MR 0204536
V. S. Drobotenko, È. S. Drobotenko, Z. P. Žilinskaja, and E. Ja. Pogoriljak, Representations of the cyclic group of prime order $p$ over the ring of residue classes $\textrm {mod}\, p^{s}$, Ukrain. Mat. Ž. 17 (1965), no. 5, 28–42 (Russian). MR 0188304
67. V. S. Drobotenko and A. I. Lihtman, Representations of finite groups over the ring of residue classes mod p, Dokl. Užgorod Univ. 3 (1960), 63. (Russian)
P. M. Gudivok, V. S. Drobotenko, and A. I. Lihtman, On representations of finite groups over the ring of residue classes modulo $m$, Ukrain. Mat. Ž. 16 (1964), 82–89 (Russian). MR 0167538
V. S. Drobotenko and V. P. Rud′ko, Representations of a cyclic group by groups of automorphisms of a certain class of modules, Dopovīdī Akad. Nauk Ukraïn. RSR Ser. A 1968 (1968), 302–304 (Ukrainian, with Russian and English summaries). MR 0227288
Ju. A. Drozd, Representations of cubic $Z$-rings, Dokl. Akad. Nauk SSSR 174 (1967), 16–18 (Russian). MR 0215824
Ju. A. Drozd, The distribution of maximal sublattices, Mat. Zametki 6 (1969), 19–24 (Russian). MR 252434
Ju. A. Drozd, Adèles and integral representations, Izv. Akad. Nauk SSSR Ser. Mat. 33 (1969), 1080–1088 (Russian). MR 0255595
Ju. A. Drozd and V. V. Kiričenko, Representation of rings in a second order matrix algebra, Ukrain. Mat. Ž. 19 (1967), no. 3, 107–112 (Russian). MR 0210746
Ju. A. Drozd and V. V. Kiričenko, Hereditary orders, Ukrain. Mat. Ž. 20 (1968), 246–248 (Russian). MR 0254095
Ju. A. Drozd, V. V. Kiričenko, and A. V. Roĭter, Hereditary and Bass orders, Izv. Akad. Nauk SSSR Ser. Mat. 31 (1967), 1415–1436 (Russian). MR 0219527
Ju. A. Drozd and A. V. Roĭter, Commutative rings with a finite number of indecomposable integral representations, Izv. Akad. Nauk SSSR Ser. Mat. 31 (1967), 783–798 (Russian). MR 0220716
Ju. A. Drozd and V. M. Turčin, The number of modules of representations in genus for integral matrix rings of the second order, Mat. Zametki 2 (1967), 133–138 (Russian). MR 229679
Klaus W. Roggenkamp and Verena Huber-Dyson, Lattices over orders. I, Lecture Notes in Mathematics, Vol. 115, Springer-Verlag, Berlin-New York, 1970. MR 0283013
M. Eichler, Über die Idealklassenzahl total definiter Quaternionenalgebren, Math. Z. 43 (1938), no. 1, 102–109 (German). MR 1545717, DOI 10.1007/BF01181088
M. Eichler, Über die Idealklassenzahl hyperkomplexer Systeme, Math. Z. 43 (1938), no. 1, 481–494 (German). MR 1545733, DOI 10.1007/BF01181104
D. K. Faddeev, On the semigroup of genera in the theory of integral representations, Izv. Akad. Nauk SSSR Ser. Mat. 28 (1964), 475–478 (Russian). MR 0161885
D. K. Faddeev, An introduction to the multiplicative theory of modules of integral representations, Trudy Mat. Inst. Steklov 80 (1965), 145–182 (Russian). MR 0206048
D. K. Faddeev, On the theory of cubic $Z$-rings, Trudy Mat. Inst. Steklov. 80 (1965), 183–187 (Russian). MR 0195887
D. K. Faddeev, On the equivalence of systems of integral matrices, Izv. Akad. Nauk SSSR Ser. Mat. 30 (1966), 449–454 (Russian). MR 0194432
D. K. Faddeev, The number of classes of exact ideals for $Z$-rings, Mat. Zametki 1 (1967), 625–632 (Russian). MR 214617
Robert Fossum, The Noetherian different of projective orders, J. Reine Angew. Math. 224 (1966), 207–218. MR 222067, DOI 10.1515/crll.1966.224.207
Robert M. Fossum, Maximal orders over Krull domains, J. Algebra 10 (1968), 321–332. MR 233809, DOI 10.1016/0021-8693(68)90083-5
Albrecht Fröhlich, Ideals in an extension field as modules over the algebraic integers in a finite number field, Math. Z. 74 (1960), 29–38. MR 0113877, DOI 10.1007/BF01180470
A. Fröhlich, The module structure of Kummer extensions over Dedekind domains, J. Reine Angew. Math. 209 (1962), 39–53. MR 160777, DOI 10.1515/crll.1962.209.39
A. Fröhlich, Invariants for modules over commutative separable orders, Quart. J. Math. Oxford Ser. (2) 16 (1965), 193–232. MR 210697, DOI 10.1093/qmath/16.3.193
A. Fröhlich, Resolvents, discriminants, and trace invariants, J. Algebra 4 (1966), 173–198. MR 207684, DOI 10.1016/0021-8693(66)90038-X
Italo Giorgiutti, Modules projectifs sur les algèbres de groupes finis, C. R. Acad. Sci. Paris 250 (1960), 1419–1420 (French). MR 124379
J. A. Green, On the indecomposable representations of a finite group, Math. Z. 70 (1958/59), 430–445. MR 131454, DOI 10.1007/BF01558601
J. A. Green, Blocks of modular representations, Math. Z. 79 (1962), 100–115. MR 141717, DOI 10.1007/BF01193108
J. A. Green, The modular representation algebra of a finite group, Illinois J. Math. 6 (1962), 607–619. MR 141709, DOI 10.1215/ijm/1255632708
J. A. Green, A transfer theorem for modular representations, J. Algebra 1 (1964), 73–84. MR 162843, DOI 10.1016/0021-8693(64)90009-2
K. W. Gruenberg, Residual properties of infinite soluble groups, Proc. London Math. Soc. (3) 7 (1957), 29–62. MR 87652, DOI 10.1112/plms/s3-7.1.29
95. P. M. Gudivok, Integral representations of a finite group with a noncyclic Sylow p-subgroup, Uspehi Mat. Nauk 16 (1961), 229-230.
96. P. M. Gudivok, Integral representations of groups of type (p, p), Dokl. Užgorod Univ. Ser. Phys.-Mat. Nauk (1962), no. 5, 73.
97. P. M. Gudivok, On p-adic integral representations of finite groups, Dokl. Užgorod Univ. Ser. Phys.-Mat. Nauk (1962), no. 5, 81-82.
P. M. Gudivok, Representations of finite groups over certain local rings, Dopovidi Akad. Nauk Ukraïn. RSR 1964 (1964), 173–176 (Ukrainian, with Russian and English summaries). MR 0166274
P. M. Gudivok, Representations of finite groups over quadratic rings, Dokl. Akad. Nauk SSSR 159 (1964), 1210–1213 (Russian). MR 0169931
P. M. Gudivok, Representations of finite groups over local number rings, Dopovīdī Akad. Nauk Ukraïn. RSR 1966 (1966), 979–981 (Ukrainian, with Russian and English summaries). MR 0201525
P. M. Gudivok, Representations of finite groups over number rings, Izv. Akad. Nauk SSSR Ser. Mat. 31 (1967), 799–834 (Russian). MR 0218468
P. M. Gudivok and V. P. Rud′ko, On $p$-adic integer-valued representations of a cyclic $p$-group, Dopovīdī Akad. Nauk Ukraïn. RSR 1966 (1966), 1111–1113 (Ukrainian, with Russian and English summaries). MR 0201527
T. A. Hannula, The integral representation ring $a(R_{k}G)$, Trans. Amer. Math. Soc. 133 (1968), 553–559. MR 241548, DOI 10.1090/S0002-9947-1968-0241548-9
Manabu Harada, Hereditary orders, Trans. Amer. Math. Soc. 107 (1963), 273–290. MR 151489, DOI 10.1090/S0002-9947-1963-0151489-9
Manabu Harada, Structure of hereditary orders over local rings, J. Math. Osaka City Univ. 14 (1963), 1–22. MR 168619
Manabu Harada, Hereditary orders in generalized quaternions $D_{\tau }$, J. Math. Osaka City Univ. 14 (1963), 71–81. MR 168620
Akira Hattori, Rank element of a projective module, Nagoya Math. J. 25 (1965), 113–120. MR 175950, DOI 10.1017/S002776300001148X
Akira Hattori, Semisimple algebras over a commutative ring, J. Math. Soc. Japan 15 (1963), 404–419. MR 158903, DOI 10.2969/jmsj/01540404
Alex Heller, On group representations over a valuation ring, Proc. Nat. Acad. Sci. U.S.A. 47 (1961), 1194–1197. MR 125163, DOI 10.1073/pnas.47.8.1194
Alex Heller, Some exact sequences in algebraic $K$-theory, Topology 4 (1965), 389–408. MR 179229, DOI 10.1016/0040-9383(65)90004-2
A. Heller and I. Reiner, Indecomposable representations, Illinois J. Math. 5 (1961), 314–323. MR 122890, DOI 10.1215/ijm/1255629829
A. Heller and I. Reiner, Representations of cyclic groups in rings of integers. I, Ann. of Math. (2) 76 (1962), 73–92. MR 140575, DOI 10.2307/1970266
A. Heller and I. Reiner, On groups with finitely many indecomposable integral representations, Bull. Amer. Math. Soc. 68 (1962), 210–212. MR 137773, DOI 10.1090/S0002-9904-1962-10751-4
A. Heller and I. Reiner, Grothendieck groups of orders in semisimple algebras, Trans. Amer. Math. Soc. 112 (1964), 344–355. MR 161889, DOI 10.1090/S0002-9947-1964-0161889-X
A. Heller and I. Reiner, Grothendieck groups of integral group rings, Illinois J. Math. 9 (1965), 349–360. MR 175935, DOI 10.1215/ijm/1256067896
D. G. Higman, Indecomposable representations at characteristic $p$, Duke Math. J. 21 (1954), 377–381. MR 67896, DOI 10.1215/S0012-7094-54-02138-9
D. G. Higman, Induced and produced modules, Canadian J. Math. 7 (1955), 490–508. MR 87671, DOI 10.4153/CJM-1955-052-4
D. G. Higman, On orders in separable algebras, Canadian J. Math. 7 (1955), 509–515. MR 88486, DOI 10.4153/CJM-1955-053-1
D. G. Higman, Relative cohomology, Canadian J. Math. 9 (1957), 19–34. MR 83486, DOI 10.4153/CJM-1957-004-4
D. G. Higman, On isomorphisms of orders, Michigan Math. J. 6 (1959), 255–257. MR 109174, DOI 10.1307/mmj/1028998231
D. G. Higman, On representations of orders over Dedekind domains, Canadian J. Math. 12 (1960), 107–125. MR 109175, DOI 10.4153/CJM-1960-010-1
D. G. Higman and J. E. McLaughlin, Finiteness of class numbers of representations of algebras over function fields, Michigan Math. J. 6 (1959), 401–404. MR 109151, DOI 10.1307/mmj/1028998288
Graham. Higman, The units of group-rings, Proc. London Math. Soc. (2) 46 (1940), 231–248. MR 2137, DOI 10.1112/plms/s2-46.1.231
Roger Holvoet, Sur l'isomorphie d'algèbres de groupes, Bull. Soc. Math. Belg. 20 (1968), 264–282 (French). MR 240219
124a. D. A. Jackson, On a problem in the theory of integral group rings, Ph.D. thesis, Oxford University, Oxford, 1967.
D. A. Jackson, The groups of units of the integral group rings of finite metabelian and finite nilpotent groups, Quart. J. Math. Oxford Ser. (2) 20 (1969), 319–331. MR 249521, DOI 10.1093/qmath/20.1.319
H. Jacobinski, Über die Hauptordnung eines Körpers als Gruppenmodul, J. Reine Angew. Math. 213 (1963/64), 151–164 (German). MR 163901, DOI 10.1515/crll.1964.213.151
H. Jacobinski, On extensions of lattices, Michigan Math. J. 13 (1966), 471–475. MR 204538, DOI 10.1307/mmj/1028999605
H. Jacobinski, Sur les ordres commutatifs avec un nombre fini de réseaux indécomposables, Acta Math. 118 (1967), 1–31 (French). MR 212001, DOI 10.1007/BF02392474
H. Jacobinski, Über die Geschlechter von Gittern über Ordnungen, J. Reine Angew. Math. 230 (1968), 29–39 (German). MR 229676, DOI 10.1515/crll.1968.230.29
H. Jacobinski, Genera and decompositions of lattices over orders, Acta Math. 121 (1968), 1–29. MR 251063, DOI 10.1007/BF02391907
H. Jacobinski, On embedding of lattices belonging to the same genus, Proc. Amer. Math. Soc. 24 (1970), 134–136. MR 251072, DOI 10.1090/S0002-9939-1970-0251072-X
Nathan Jacobson, The Theory of Rings, American Mathematical Society Mathematical Surveys, Vol. II, American Mathematical Society, New York, 1943. MR 0008601, DOI 10.1090/surv/002
N. Jacobson, Representation theory for Jordan rings, Proceedings of the International Congress of Mathematicians, Cambridge, Mass., 1950, vol. 2, Amer. Math. Soc., Providence, R.I., 1952, pp. 37–43. MR 0044505
W. E. Jenner, Block ideals and arithmetics of algebras, Compositio Math. 11 (1953), 187–203. MR 62723
W. E. Jenner, On the class number of non-maximal orders in ${\mathfrak {p}}$-adic division algebras, Math. Scand. 4 (1956), 125–128. MR 81270, DOI 10.7146/math.scand.a-10461
Alfredo Jones, Groups with a finite number of indecomposable integral representations, Michigan Math. J. 10 (1963), 257–261. MR 153737
Alfredo Jones, Integral representations of the direct product of groups, Canadian J. Math. 15 (1963), 625–630. MR 154927, DOI 10.4153/CJM-1963-064-9
Alfredo Jones, On representations of finite groups over valuation rings, Illinois J. Math. 9 (1965), 297–303. MR 175981
Irving Kaplansky, Elementary divisors and modules, Trans. Amer. Math. Soc. 66 (1949), 464–491. MR 31470, DOI 10.1090/S0002-9947-1949-0031470-3
Irving Kaplansky, Modules over Dedekind rings and valuation rings, Trans. Amer. Math. Soc. 72 (1952), 327–340. MR 46349, DOI 10.1090/S0002-9947-1952-0046349-0
Irving Kaplansky, Submodules of quaternion algebras, Proc. London Math. Soc. (3) 19 (1969), 219–232. MR 240142, DOI 10.1112/plms/s3-19.2.219
V. V. Kiričenko, Orders whose representations are all completely reducible, Mat. Zametki 2 (1967), 139–144 (Russian). MR 219528
142. D. I. Knee, The indecomposable integral representations of finite cyclic groups, Ph.D. Thesis, M.I.T., Cambridge, Mass., 1962.
Martin Kneser, Einige Bemerkungen über ganzzahlige Darstellungen endlicher Gruppen, Arch. Math. (Basel) 17 (1966), 377–379 (German). MR 201526, DOI 10.1007/BF01899614
S. A. Krugljak, Precise ideals of integer matrix-rings of the second order, Ukrain. Mat. Ž. 18 (1966), no. 3, 58–64 (Russian). MR 0199229
S. A. Krugljak, The Grothendieck group, Ukrain. Mat. Ž. 18 (1966), no. 5, 100–105 (Russian). MR 0200305
Tsit-yuen Lam, Induction theorems for Grothendieck groups and Whitehead groups of finite groups, Ann. Sci. École Norm. Sup. (4) 1 (1968), 91–148. MR 231890, DOI 10.24033/asens.1161
Richard G. Larson, Group rings over Dedekind domains, J. Algebra 5 (1967), 358–361. MR 209368, DOI 10.1016/0021-8693(67)90045-2
Claiborne G. Latimer and C. C. MacDuffee, A correspondence between classes of ideals and classes of matrices, Ann. of Math. (2) 34 (1933), no. 2, 313–316. MR 1503108, DOI 10.2307/1968204
149. W. J. Leahey, The classification of the indecomposable integral representations of the dihedral group of order 2p, Ph.D. Thesis, M.I.T., Cambridge, Mass., 1962.
Myrna Pike Lee, Integral representations of dihedral groups of order $2p$, Trans. Amer. Math. Soc. 110 (1964), 213–231. MR 156896, DOI 10.1090/S0002-9947-1964-0156896-7
Heinrich-Wolfgang Leopoldt, Über die Hauptordnung der ganzen Elemente eines abelschen Zahlkörpers, J. Reine Angew. Math. 201 (1959), 119–149 (German). MR 108479, DOI 10.1515/crll.1959.201.119
Lawrence S. Levy, Decomposing pairs of modules, Trans. Amer. Math. Soc. 122 (1966), 64–80. MR 194467, DOI 10.1090/S0002-9947-1966-0194467-9
George W. Mackey, On induced representations of groups, Amer. J. Math. 73 (1951), 576–592. MR 42420, DOI 10.2307/2372309
Jean-Marie Maranda, On $\mathfrak {B}$-adic integral representations of finite groups, Canad. J. Math. 5 (1953), 344–355. MR 56605, DOI 10.4153/cjm-1953-040-2
J.-M. Maranda, On the equivalence of representations of finite groups by groups of automorphisms of modules over Dedekind rings, Canadian J. Math. 7 (1955), 516–526. MR 88498, DOI 10.4153/CJM-1955-054-9
Jacques Martinet, Sur l'arithmétique des extensions galoisiennes à groupe de Galois diédral d'ordre $2p$, Ann. Inst. Fourier (Grenoble) 19 (1969), no. fasc. 1, 1–80, ix (French, with English summary). MR 262210, DOI 10.5802/aif.307
A. Matuljauskas, Integral representations of a fourth-order cyclic group, Litovsk. Mat. Sb. 2 (1962), no. 1, 75–82 (Russian, with Lithuanian and German summaries). MR 0148768
A. Matuljauskas, Integral representations of the cyclic group of order six, Litovsk. Mat. Sb. 2 (1962), no. 2, 149–157 (Russian, with Lithuanian and German summaries). MR 0155902
A. Matuljauskas, On the number of indecomposable representations of the group $Z_{8}$, Litovsk. Mat. Sb. 3 (1963), no. 1, 181–188 (Russian, with Lithuanian and German summaries). MR 0165018
A. Matuljauskas and M. Matuljauskene, On integral representations of a group of type $(3,\,3)$, Litovsk. Mat. Sb. 4 (1964), 229–233 (Russian, with Lithuanian and German summaries). MR 0167540
Warren May, Commutative group algebras, Trans. Amer. Math. Soc. 136 (1969), 139–149. MR 233903, DOI 10.1090/S0002-9947-1969-0233903-9
G. O. Michler, Structure of semi-perfect hereditary Noetherian rings, J. Algebra 13 (1969), 327–344. MR 246918, DOI 10.1016/0021-8693(69)90078-7
Tadasi Nakayama, A theorem on modules of trivial cohomology over a finite group, Proc. Japan Acad. 32 (1956), 373–376. MR 80098
Tadasi Nakayama, On modules of trivial cohomology over a finite group, Illinois J. Math. 1 (1957), 36–43. MR 84014
Tadasi Nakayama, On modules of trivial cohomology over a finite group. II. Finitely generated modules, Nagoya Math. J. 12 (1957), 171–176. MR 98125, DOI 10.1017/S0027763000022030
L. A. Nazarova, Unimodular representations of the four group, Dokl. Akad. Nauk SSSR 140 (1961), 1101–1014 (Russian). MR 0130916
L. A. Nazarova, Unimodular representations of the alternating group of degree four, Ukrain. Mat. Ž. 15 (1963), 437–444 (Russian). MR 0158926
L. A. Nazarova, Representations of a tetrad, Izv. Akad. Nauk SSSR Ser. Mat. 31 (1967), 1361–1378 (Russian). MR 0223352
L. A. Nazarova and A. V. Roĭter, Integral representations of a symmetric group of third degree, Ukrain. Mat. Ž. 14 (1962), 271–288 (Russian, with English summary). MR 0148767
168. L. A. Nazarova and A. V. Roĭter, On irreducible representations of p-groups over Z, Ukrain. Mat. Ž. 18 (1966), no 1, 119-124. (Russian) MR 34 #254.
L. A. Nazarova and A. V. Roĭter, Integral $p$-adic representations and representations over a ring of residue classes, Ukrain. Mat. Ž. 19 (1967), no. 2, 125–126 (Russian). MR 0209369
L. A. Nazarova and A. V. Roĭter, A sharpening of a theorem of Bass, Dokl. Akad. Nauk SSSR 176 (1967), 266–268 (Russian). MR 0225810
L. A. Nazarova and A. V. Roĭter, Finitely generated modules over a dyad of two local Dedekind rings, and finite groups which possess an abelian normal divisor of index $p$, Izv. Akad. Nauk SSSR Ser. Mat. 33 (1969), 65–89 (Russian). MR 0260859
Morris Newman and Olga Taussky, Classes of positive definite unimodular circulants, Canadian J. Math. 9 (1957), 71–73. MR 82947, DOI 10.4153/CJM-1957-010-5
M. Newman and Olga Taussky, On a generalization of the normal basis in abelian algebraic number fields, Comm. Pure Appl. Math. 9 (1956), 85–91. MR 75985, DOI 10.1002/cpa.3160090106
R. J. Nunke, Modules of extensions over Dedekind rings, Illinois J. Math. 3 (1959), 222–241. MR 102538, DOI 10.1215/ijm/1255455124
Tadao Obayashi, On the Grothendieck ring of an abelian $p$-group, Nagoya Math. J. 26 (1966), 101–113. MR 225847, DOI 10.1017/S0027763000011661
176. J. Oppenheim, Integral representations of cyclic groups of squarefree order, Ph.D. Thesis, University of Illinois, Urbana, Ill., 1962.
D. S. Passman, Nil ideals in group rings, Michigan Math. J. 9 (1962), 375–384. MR 144930, DOI 10.1307/mmj/1028998773
D. S. Passman, Isomorphic groups and group rings, Pacific J. Math. 15 (1965), 561–583. MR 193160, DOI 10.2140/pjm.1965.15.561
Lena Chang Pu, Integral representations of non-abelian groups of order $pq$, Michigan Math. J. 12 (1965), 231–246. MR 178063
T. Ralley, Decomposition of products of modular representations, J. London Math. Soc. 44 (1969), 480–484. MR 240220, DOI 10.1112/jlms/s1-44.1.480
Irving Reiner, Maschke modules over Dedekind rings, Canadian J. Math. 8 (1956), 329–334. MR 78969, DOI 10.4153/CJM-1956-037-3
Irving Reiner, Integral representations of cyclic groups of prime order, Proc. Amer. Math. Soc. 8 (1957), 142–146. MR 83493, DOI 10.1090/S0002-9939-1957-0083493-6
Irving Reiner, On the class number of representations of an order, Canadian J. Math. 11 (1959), 660–672. MR 108513, DOI 10.4153/CJM-1959-061-5
Irving Reiner, The nonuniqueness of irreducible constituents of integral group representations, Proc. Amer. Math. Soc. 11 (1960), 655–658. MR 122891, DOI 10.1090/S0002-9939-1960-0122891-9
Irving Reiner, Behavior of integral group representations under ground ring extension, Illinois J. Math. 4 (1960), 640–651. MR 121407
Irving Reiner, The Krull-Schmidt theorem for integral group representations, Bull. Amer. Math. Soc. 67 (1961), 365–367. MR 138689, DOI 10.1090/S0002-9904-1961-10619-8
Irving Reiner, Indecomposable representations of non-cyclic groups, Michigan Math. J. 9 (1962), 187–191. MR 140576
Irving Reiner, Failure of the Krull-Schmidt theorem for integral representations, Michigan Math. J. 9 (1962), 225–231. MR 144942
Irving Reiner, Extensions of irreducible modules, Michigan Math. J. 10 (1963), 273–276. MR 155874
Irving Reiner, The integral representation ring of a finite group, Michigan Math. J. 12 (1965), 11–22. MR 172937
Irving Reiner, Nilpotent elements in rings of integral representations, Proc. Amer. Math. Soc. 17 (1966), 270–274. MR 188306, DOI 10.1090/S0002-9939-1966-0188306-5
Irving Reiner, Integral represetation algebras, Trans. Amer. Math. Soc. 124 (1966), 111–121. MR 202863, DOI 10.1090/S0002-9947-1966-0202863-6
Irving Reiner, Relations between integral and modular representations, Michigan Math. J. 13 (1966), 357–372. MR 222188
Irving Reiner, Module extensions and blocks, J. Algebra 5 (1967), 157–163. MR 213452, DOI 10.1016/0021-8693(67)90032-4
Irving Reiner, Representation rings, Michigan Math. J. 14 (1967), 385–391. MR 218469
I. Raĭner, The action of an involution in $\tilde K^0(ZG)$, Mat. Zametki 3 (1968), 523–527 (Russian). MR 229696
197. I. Reiner, A survey of integral representation theory, Proc. Algebra Sympos., University of Kentucky (Lexington, 1968), pp. 8-14.
198. I. Reiner, Maximal orders, Mimeograph Notes, University of Illinois, Urbana, Ill., 1969.
I. Reiner and H. Zassenhaus, Equivalence of representations under extensions of local ground rings, Illinois J. Math. 5 (1961), 409–411. MR 126468, DOI 10.1215/ijm/1255630885
Dock Sang Rim, Modules over finite groups, Ann. of Math. (2) 69 (1959), 700–712. MR 104721, DOI 10.2307/1970033
Dock Sang Rim, On projective class groups, Trans. Amer. Math. Soc. 98 (1961), 459–467. MR 124378, DOI 10.1090/S0002-9947-1961-0124378-1
Klaus W. Roggenkamp, Gruppenringe von unendlichem Darstellungstyp, Math. Z. 96 (1967), 393–398 (German). MR 206123, DOI 10.1007/BF01117098
Klaus W. Roggenkamp, Darstellungen endlicher Gruppen in Polynomringen, Math. Z. 96 (1967), 399–407 (German). MR 206124, DOI 10.1007/BF01117099
Klaus W. Roggenkamp, Grothendieck groups of hereditary orders, J. Reine Angew. Math. 235 (1969), 29–40. MR 254101, DOI 10.1515/crll.1969.235.29
Klaus W. Roggenkamp, On the irreducible lattices of orders, Canadian J. Math. 21 (1969), 970–976. MR 248247, DOI 10.4153/CJM-1969-106-x
206. K. W. Roggenkamp, Das Krull-Schmidt Theorem für projektive Gitter in Ordnungen über lokalen Ringen, Math. Seminar (Giessen, 1969).
K. W. Roggenkamp, Projective modules over clean orders, Compositio Math. 21 (1969), 185–194. MR 248170
K. W. Roggenkamp, A necessary and sufficient condition for orders in direct sums of complete skewfields to have only finitely many nonisomorphic indecomposable integral representations, Bull. Amer. Math. Soc. 76 (1970), 130–134. MR 284466, DOI 10.1090/S0002-9904-1970-12398-9
Klaus W. Roggenkamp, Projective homomorphisms and extensions of lattices, J. Reine Angew. Math. 246 (1971), 41–45. MR 274485, DOI 10.1515/crll.1971.246.41
A. V. Roĭter, On the representations of the cyclic group of fourth order by integral matrices, Vestnik Leningrad. Univ. 15 (1960), no. 19, 65–74 (Russian, with English summary). MR 0124418
A. V. Roĭter, Categories with division and integral representations, Soviet Math. Dokl. 4 (1963), 1621–1623. MR 0194494
A. V. Roĭter, On a category of representations, Ukrain. Mat. Ž. 15 (1963), 448–452 (Russian). MR 0159856
A. V. Roĭter, Integer-valued representations belonging to one genus, Izv. Akad. Nauk SSSR Ser. Mat. 30 (1966), 1315–1324 (Russian). MR 0213391
A. V. Roĭter, Divisibility in the category of representations over a complete local Dedekind ring, Ukrain. Mat. Ž. 17 (1965), no. 4, 124–129 (Russian). MR 0197534
A. V. Roĭter, $E$-systems of representations, Ukrain. Mat. Ž. 17 (1965), no. 2, 88–96 (Russian). MR 0190206
A. V. Roĭter, An analog of the theorem of Bass for modules of representations of noncommutative orders, Dokl. Akad. Nauk SSSR 168 (1966), 1261–1264 (Russian). MR 0202772
A. V. Roĭter, Unboundedness of the dimensions of the indecomposable representations of an algebra which has infinitely many indecomposable representations, Izv. Akad. Nauk SSSR Ser. Mat. 32 (1968), 1275–1282 (Russian). MR 0238893
A. V. Roĭter, On the theory of integral representations of rings, Mat. Zametki 3 (1968), 361–366 (Russian). MR 231859
Joseph J. Rotman, Notes on homological algebras, Van Nostrand Reinhold Mathematical Studies, No. 26, Van Nostrand Reinhold Co., New York-Toronto, Ont.-London, 1970. MR 0409590
V. P. Rud′ko, Tensor algebra of integral representations of a cyclic group of order $p^{2}$, Dopovīdī Akad. Nauk Ukraïn. RSR Ser. A 1967 (1967), 35–39 (Ukrainian, with Russian and English summaries). MR 0209370
A. I. Saksonov, On group rings of finite $p$-groups over certain integral domains, Dokl. Akad. Nauk BSSR 11 (1967), 204–207 (Russian). MR 0209372
A. I. Saksonov, Group-algebras of finite groups over a number field, Dokl. Akad. Nauk BSSR 11 (1967), 302–305 (Russian). MR 0210795
O. F. G. Schilling, The Theory of Valuations, Mathematical Surveys, No. 4, American Mathematical Society, New York, N. Y., 1950. MR 0043776, DOI 10.1090/surv/004
Hans Schneider and Julian Weissglass, Group rings, semigroup rings and their radicals, J. Algebra 5 (1967), 1–15. MR 213453, DOI 10.1016/0021-8693(67)90021-X
Sudarshan K. Sehgal, On the isomorphism of integral group rings. I, Canadian J. Math. 21 (1969), 410–413. MR 255706, DOI 10.4153/CJM-1969-044-9
C. S. Seshadri, Triviality of vector bundles over the affine space $K^{2}$, Proc. Nat. Acad. Sci. U.S.A. 44 (1958), 456–458. MR 102527, DOI 10.1073/pnas.44.5.456
C. S. Seshadri, Algebraic vector bundles over the product of an affine curve and the affine line, Proc. Amer. Math. Soc. 10 (1959), 670–673. MR 164972, DOI 10.1090/S0002-9939-1959-0164972-1
Michael Singer, Invertible powers of ideals over orders in commutative separable algebras, Proc. Cambridge Philos. Soc. 67 (1970), 237–242. MR 252378, DOI 10.1017/s0305004100045503
D. L. Stancl, Multiplication in Grothendieck rings of integral group rings, J. Algebra 7 (1967), 77–90. MR 223428, DOI 10.1016/0021-8693(67)90068-3
228. E. Steinitz, Rechteckige Systeme und Moduln in algebraischen Zahlenkörpern. I, II, Math. Ann. 71 (1911), 328-354; 72 (1912), 297-345.
Jan Rustom Strooker, Faithfully projective modules and clean algebras, J. J. Groen & Zoon, N.V., Leiden, 1965. Dissertation, University of Utrecht, Utrecht, 1965. MR 0217115
Richard G. Swan, Projective modules over finite groups, Bull. Amer. Math. Soc. 65 (1959), 365–367. MR 114842, DOI 10.1090/S0002-9904-1959-10376-1
Richard G. Swan, The $p$-period of a finite group, Illinois J. Math. 4 (1960), 341–346. MR 122856
Richard G. Swan, Induced representations and projective modules, Ann. of Math. (2) 71 (1960), 552–578. MR 138688, DOI 10.2307/1969944
Richard G. Swan, Projective modules over group rings and maximal orders, Ann. of Math. (2) 76 (1962), 55–61. MR 139635, DOI 10.2307/1970264
Richard G. Swan, The Grothendieck ring of a finite group, Topology 2 (1963), 85–110. MR 153722, DOI 10.1016/0040-9383(63)90025-9
R. G. Swan, Algebraic $K$-theory, Lecture Notes in Mathematics, No. 76, Springer-Verlag, Berlin-New York, 1968. MR 0245634, DOI 10.1007/BFb0080281
Richard G. Swan, Invariant rational functions and a problem of Steenrod, Invent. Math. 7 (1969), 148–158. MR 244215, DOI 10.1007/BF01389798
Richard G. Swan, The number of generators of a module, Math. Z. 102 (1967), 318–322. MR 218347, DOI 10.1007/BF01110912
Shuichi Takahashi, Arithmetic of group representations, Tohoku Math. J. (2) 11 (1959), 216–246. MR 109848, DOI 10.2748/tmj/1178244583
Shuichi Takahashi, A characterization of group rings as a special class of Hopf algebras, Canad. Math. Bull. 8 (1965), 465–475. MR 184988, DOI 10.4153/CMB-1965-033-5
Olga Taussky, On a theorem of Latimer and MacDuffee, Canad. J. Math. 1 (1949), 300–302. MR 30491, DOI 10.4153/cjm-1949-026-1
Olga Taussky, Classes of matrices and quadratic fields, Pacific J. Math. 1 (1951), 127–132. MR 43064, DOI 10.2140/pjm.1951.1.127
Olga Taussky, Classes of matrices and quadratic fields. II, J. London Math. Soc. 27 (1952), 237–239. MR 46335, DOI 10.1112/jlms/s1-27.2.237
Olga Taussky, Unimodular integral circulants, Math. Z. 63 (1955), 286–289. MR 72890, DOI 10.1007/BF01187938
Olga Taussky, On matrix classes corresponding to an ideal and its inverse, Illinois J. Math. 1 (1957), 108–113. MR 94326
Olga Taussky, Matrices of rational integers, Bull. Amer. Math. Soc. 66 (1960), 327–345. MR 120237, DOI 10.1090/S0002-9904-1960-10439-9
Olga Taussky, Ideal matrices. I, Arch. Math. 13 (1962), 275–282. MR 150165, DOI 10.1007/BF01650074
Olga Taussky, Ideal matrices. II, Math. Ann. 150 (1963), 218–225. MR 156862, DOI 10.1007/BF01396991
Olga Taussky, On the similarity transformation between an integral matrix with irreducible characteristic polynomial and its transpose, Math. Ann. 166 (1966), 60–63. MR 199206, DOI 10.1007/BF01361438
Olga Taussky, The discriminant matrices of an algebraic number field, J. London Math. Soc. 43 (1968), 152–154. MR 228473, DOI 10.1112/jlms/s1-43.1.152
Olga Taussky and John Todd, Matrices with finite period, Proc. Edinburgh Math. Soc. (2) 6 (1940), 128–134. MR 2829, DOI 10.1017/s0013091500024627
Olga Taussky and John Todd, Matrices of finite period, Proc. Roy. Irish Acad. Sect. A 46 (1941), 113–121. MR 0003607
Olga Taussky and Hans Zassenhaus, On the similarity transformation between a matrix and its transpose, Pacific J. Math. 9 (1959), 893–896. MR 108500, DOI 10.2140/pjm.1959.9.893
John G. Thompson, Vertices and sources, J. Algebra 6 (1967), 1–6. MR 207863, DOI 10.1016/0021-8693(67)90009-9
254. A. Troy, Integral representations of cyclic groups of order p, Ph.D. Thesis, University of Illinois, Urbana, Ill., 1961.
Kôji Uchida, Remarks on Grothendieck rings, Tohoku Math. J. (2) 19 (1967), 341–348. MR 227253, DOI 10.2748/tmj/1178243284
S. Ullom, Normal bases in Galois extensions of number fields, Nagoya Math. J. 34 (1969), 153–167. MR 240082, DOI 10.1017/S0027763000024521
S. Ullom, Galois cohomology of ambiguous ideals, J. Number Theory 1 (1969), 11–15. MR 237473, DOI 10.1016/0022-314X(69)90022-5
Yutaka Watanabe, The Dedekind different and the homological different, Osaka Math. J. 4 (1967), 227–231. MR 227210
André Weil, Basic number theory, Die Grundlehren der mathematischen Wissenschaften, Band 144, Springer-Verlag New York, Inc., New York, 1967. MR 0234930, DOI 10.1007/978-3-662-00046-5
261. A. R. Whitcomb, The group ring problem, Ph.D. thesis, University of Chicago, Chicago, Ill., 1968.
Oscar Zariski and Pierre Samuel, Commutative algebra, Volume I, The University Series in Higher Mathematics, D. Van Nostrand Co., Inc., Princeton, New Jersey, 1958. With the cooperation of I. S. Cohen. MR 0090581
263. H. Zassenhaus, Neuer Beweis der Endlichkeit der Klassenzahl bei unimodularer Aquivalenz endlicher ganzzahliger Substitutionsgruppen, Abh. Math. Sem. Univ. Hamburg 12 (1938), 276-288.
Hans Zassenhaus, Über die Äquivalenz ganzzahliger Darstellungen, Nachr. Akad. Wiss. Göttingen Math.-Phys. Kl. II 1967 (1967), 167–193 (German). MR 230759
Janice Zemanek, On the semisimplicity of integral representation rings, Bull. Amer. Math. Soc. 76 (1970), 778–779. MR 269757, DOI 10.1090/S0002-9904-1970-12547-2
Journal: Bull. Amer. Math. Soc. 76 (1970), 159-227
MSC (1970): Primary 1075, 1548, 2080, 1640; Secondary 1069, 1620
nurse practitioner clinical decision making and evidence based practice
A similar net was used by Tranter (1962) off northwest Australia and off Java. It is possible that the intermittent halts in the upwelling process actually sustain greater levels of production. On that expedition, numbers of "metazoa" were counted from 4 litres of water taken at the surface, at 50 m and at 100 m (Hentschel, 1933). The quantity ml/m² (or g/m², wet weight) is converted to carbon by the factor 1/17.85 (Cushing, 1958). In fact a number of different designs have been used (see Reid's table 2), but the differences are not great enough to generate differences in the quantities caught.

Plankton come in two varieties: zooplankton and phytoplankton. Plankton is composed of the phytoplankton (the plants of the sea) and the zooplankton, which are typically the tiny animals found near the surface in aquatic environments. Hatcheries often find it difficult to reliably produce enough algae for their zooplankton cultures. Most zooplankton, and some benthic animals, reproduce continuously. There are unfortunately no observations of zooplankton made in the Canary Current or in the Benguela Current except those made on the Meteor Expedition. So must zooplankton, which feed on the phytoplankton. From 2004 to 2009, CMarZ carried out more than 80 cruises and collected samples from every ocean basin. If the sample is concentrated, dilute it with tap water and mix thoroughly. Therefore the Pacific and Indian Ocean nets appear to sample the zooplankton populations adequately, excluding larger forms like euphausiids, which were not caught by Hentschel's water bottles. An individual also undergoes a biochemical turnover during its lifetime, so that upon completing a mean lifespan it will have assimilated several times its final mass. Zooplankton are heterotrophic (other-feeding), meaning they cannot produce their own food and must instead consume other plants or animals. The bluegill are in turn eaten by bass. Binders and stabilizers have the potential to produce a sustainable, totally aquatic, totally organic, quality fish feed. The main difference between phytoplankton and zooplankton is that phytoplankton are photosynthetic, microscopic organisms living in rivers, lakes, fresh water, and streams, whereas zooplankton are small aquatic animals that also live in water bodies but cannot make their own food and depend on phytoplankton. Some cyanobacteria can fix nitrogen and produce toxins. Zooplankton reach maturity quickly and live short but productive lives. To analyze the production of populations with continuous reproduction, it is necessary to use methods that do not require complete evaluation of cohort differences. So a proportion of the zooplankton population escaped through the meshes and a further proportion evaded capture. So a new outburst is possible until the zooplankton has been regenerated.
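The unit conversions mentioned above can be made explicit. The sketch below assumes a haul depth in metres and applies the Cushing (1958) factor of 1/17.85; the catch figures are hypothetical.

```python
def per_volume_to_per_area(ml_per_1000m3, haul_depth_m):
    """Convert a catch expressed per 1000 m³ to ml/m² over the hauled layer."""
    return ml_per_1000m3 * haul_depth_m / 1000.0

def displacement_volume_to_carbon(ml_per_m2, factor=17.85):
    """Convert displacement volume (ml/m², ~ g/m² wet weight) to carbon (gC/m²)."""
    return ml_per_m2 / factor

# Hypothetical haul: 120 ml/1000 m³ over a 200 m water column
ml_m2 = per_volume_to_per_area(120.0, 200.0)     # 24 ml/m²
print(ml_m2, displacement_volume_to_carbon(ml_m2))  # ~1.34 gC/m²
```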
Zooplankton (from the Greek for "drifting animal") is a collective term for a wide range of aquatic animal plankton with little or no swimming ability, which mostly drift along with the surrounding currents. They are the animal component of the planktonic community ("zoo" comes from the Greek for animal) and are found in large bodies of water, including oceans and freshwater systems. They range in size from zooflagellates a few micrometres long and microscopic protozoans to large jellyfish and animals of several metres in dimension, and include everything from single-celled Radiolaria to the eggs or larvae of herrings, crabs and lobsters. Crustaceans such as water fleas (Daphnia), cyclops and copepods are typical representatives of the zooplankton found in samples, and krill may be the most well-known type, forming a major component of the diet of humpback, right and blue whales. Plankton comprise two main groups: holoplankton, permanent members of the plankton (such as radiolarians, dinoflagellates, foraminifera, amphipods, krill, copepods and salps) that spend their entire lives in the pelagic environment, and meroplankton, temporary members that include the eggs and larval stages of many benthic invertebrates and fish. Plankton have evolved many different ways to keep afloat. Zooplankton may reproduce sexually or asexually, depending upon the species; asexual reproduction is more common among holoplankton and can be accomplished through cell division, in which one cell divides in half to produce two cells, and so on. Zooplankton can reproduce rapidly, and populations can increase by about 30 percent a day under favourable conditions. They may also carry epibionts such as diatoms, green algae, ciliates and bacteria. In many ways, plankton rule the oceans.
Zooplankton are an essential component of aquatic food chains, regulating algal and microbial productivity through grazing and transferring primary productivity to fish and other consumers (Dejen et al.). Phytoplankton produce their own food by capturing the energy of the sun in a process called photosynthesis, so they need to stay near the well-lit top layer of the water; most zooplankton eat phytoplankton and are, in turn, eaten by larger animals (or by each other). Eventually, the whole zooplankton community becomes the base of a food web stretching from the smallest fish to the largest whale. Fish produce high numbers of eggs which are often released into the open water column; fish larvae are part of the zooplankton and eat smaller plankton, while fish eggs carry their own food supply, and both eggs and larvae are themselves eaten by larger animals. Corals also eat zooplankton: at night the polyps come out of their skeletons to feed, stretching their long, stinging tentacles to capture animals floating by. During the daylight hours zooplankton generally drift in deeper waters to avoid predators; this behaviour protects them from visually hunting predators while the phytoplankton continue to produce food in the sunlit surface layer. Plankton "blooms" are common throughout the world's oceans and can be composed of phytoplankton, zooplankton, or gelatinous zooplankton, depending on the environmental conditions. Because zooplankton eat algae, it has been proposed that algal blooms (such as those that occasionally turn Spring Lake and Muskegon Lake green, as if green paint had been spilled) may be controlled by increasing zooplankton grazing. This method is called "biomanipulation" and is usually done by reducing predation on zooplankton by planktivorous fish, either by directly removing these fish or by adding a fish predator such as pike. Zooplankton are also valuable in aquaculture: they feed efficiently on algae, so culture systems are not fouled and contaminating organisms do not proliferate, and ponds rich in zooplankton ("foggy white water", comprising mainly zooplankton, clay particles and detritus) provide an ideal environment and natural feed for fry or juvenile prawns; the more food produced from fertilization, the more fish can be grown. Freshwater zooplankton species can be mass-produced in the laboratory before transfer to one-tonne fiberglass tanks filled with filtered tap water, and purpose-built harvesting vessels store the graded, concentrated zooplankton in wells in their floaters for unloading by pumping; such a boat can be operated by one person and is powered by an outboard motor and an auxiliary petrol engine that drives the pumps and hydraulic rams.
Estimates of secondary production in upwelling areas are based on the use of one standard net or its analogues. Many hauls for zooplankton have been made in the Pacific and Indian Oceans: the metre nets were hauled from about 200 m to the surface (ranging from 100 m to 300 m on some occasions) at about 1 m/sec, with mesh sizes of about 0.25 to 0.31 mm, and the catches were expressed as ml/1000 m³ displacement volume. The first step is to determine the depth from which the nets were hauled, so that ml/1000 m³ can be converted to ml/m² in a specified layer; fortunately the authors quoted give the depths of sampling in the upwelling areas specified. In the Indian Ocean, Wooster, Schaefer and Robinson (1967) give samples from a 0–200 m water column taken with the Indian Ocean standard net, which is like that used in the Pacific, with a mesh size of 0.33 mm and a mouth opening of 1 m² (Currie, 1963); this net was hauled off Somalia and southwest Arabia in strong winds, so an average wire angle of 45° has been assumed (from my own observations on R.R.S. DISCOVERY). The Russians working in the Indian Ocean used a large Juday net (0.5 m², mesh 0.26 mm) and expressed their results as mg/1000 m³ wet weight, essentially the same form of expression as used in the American and Australian work. Tranter (1963) has shown that the Juday net catches more than the Indian Ocean net, but it is not stated whether the added living material was nauplii or algae. Frontier (1963) made some observations in the Guinea Current off Abidjan with a Hensen net (0.33 m²; 0.41 mm mesh). Loss by escape has not been measured, although it has been shown for some larger animals (Fleminger and Clutter, 1965; McGowan and Fraundorf, 1966), so it is possible that euphausiids everywhere are improperly sampled; perhaps the assumptions that the loss of nauplii is balanced by the gain of algae, and that the proportions escaping are small, are roughly justified. To count the animals and estimate their volumes from an array of nets, each sampling a band of sizes properly, would take so much time that the results could not have been obtained as quickly for such extensive areas; with the present state of zooplankton sampling, the nets used probably represent the best compromise for estimates of zooplankton displacement volume over extensive areas, and the very meticulous methods used by Hentschel on the Meteor Expedition tend to confirm the results from the CALCOFI/POFI/IIOE nets.
The production of zooplankton is given by the average standing crop (ml/m²) multiplied by the number of generations during the upwelling season. Kamshilov's (1951) formula converting length to weight has been used for copepodites (0.6–1.0 mm) and nauplii (0.08–0.1 mm). Marshall and Orr (1955) give the duration of copepodite stages as a proportion of the duration of copepodite stage I, and McLaren (1965) gives the duration of some copepodite stages at different temperatures, so a relation between stage duration and temperature was constructed; on the basis of Marshall and Orr's (1955) data on maturation and hatching, the duration of a full generation could be worked out at different temperatures. In all the upwellings examined, temperature observations at the surface are available from the sources given in section 6.6. Only in a few cases can the observations be averaged adequately through a season, but they are well enough established to estimate generation time. The estimated generation time has been arbitrarily lengthened by one third, to take account of the intermittent character of upwelling due to variation in the wind stress; the estimate of one third was taken from the seasonal picture of intermittent upwellings off Southern California (California, Department of Fish and Game, 1953). The same procedure should really be applied to the algae, but a short halt of a week or so only reduces the rate of increase of algal production, whereas a week's halt in upwelling may cause the failure of a local brood of nauplii, and when upwelling returns it would take half a generation for a new brood to get under way. Where the upwelling is resumed, the algae continue to produce, but the grazing restraint is no longer there, so a new outburst is possible until the zooplankton has been regenerated; the rate of increase of algal production is reduced and the zooplankton production is perhaps destroyed. The survival of animals across the upwelling area depends on the algal production. An upwelling area might be 800 km in length by 200 km in width, with a current moving towards the equator at 20 km/d (Wooster and Reid, 1963); very roughly, it takes one zooplankton generation for water and zooplankton to be drifted across the length of an upwelling area, so there is an input of zooplankton from the poleward end and it is generated from the coast by upwelling, and we may consider the generation of zooplankton to be autonomous within the upwelling area.
Although there is a large quantity of information on secondary production in the California Current, observations in other upwelling areas are sufficient only to establish an average, but not a seasonal trend, as for the radiocarbon measurements. The zooplankton distributions off California (Thrailkill, 1956, 1957, 1959, 1961, 1963) reflect to some extent the transient structures of an upwelling region, possibly because the animals are vulnerable to food lack in the periods between upwellings. The most important upwelling areas in terms of secondary production are those in the Peru Current and the Benguela Current, where production is of the order of 3–5 × 10⁶ tons C/yr (using column N1); the production in other upwelling areas for which there is enough information is around 1–2 × 10⁶ tons C/yr, and the southwest Arabian upwelling, the Domes off Costa Rica and Java and the Indonesian area appear to be areas of medium zooplankton production. These estimates do not differ much from those of Heinrich (1961). So far as it goes, this arrangement of high production in the Peru and Benguela Currents and medium production in other areas corresponds to the order given for the radiocarbon measurements; however, for many areas for which radiocarbon observations are available there are not adequate observations of zooplankton, there is not enough information to draw any conclusion for the Canary Current, and the estimate for the Guinea Current is based on very few observations. The available data in the different areas are not good enough to support or deny this speculation. This procedure gives results which are comparable with those from other upwelling areas.
Zooplankton production can also be modelled from population data, as in the exercise from Limnological Analyses whose objective is to construct a realistic, logical model of zooplankton production; the model can then be used to estimate the production of a sample zooplankton species over a given time interval, a procedure demonstrated with Daphnia (requirements: formalin solution (40 percent formaldehyde), volumetric pipette, counting chamber, microscope). A number of models of zooplankton production have been developed, but they fall into two general classes: (1) direct models based on time-dependent parameters of the zooplankton species [e.g., Edmondson and Winberg (1971) and Rigler and Downing (1984)], and (2) indirect models based on inferred rates of zooplankton filtering, assimilation, and consumption by fish and other predators [e.g., Winberg (1971)]; assimilation (A) is the difference between ingestion and egestion (A = C - F). Both discrete time-interval and instantaneous models are used to estimate production. As a population changes by addition and growth over a given time interval, a demographic turnover occurs [see reviews of Edmondson (1974) and Rigler and Downing (1984)]; few, if any, of the individuals present at the peak of an exponentially developing population were alive at the beginning of the exponential phase. It is assumed that the animals have a uniform age distribution, which is not always the case; this implies that when a stage lasts two days, one-half of the individuals will pass from that stage on one day and the other half on the following day, i.e., the number of individuals in a given stage will be inversely proportional to the duration of that stage. By this means, the future population size of the species can be predicted on the basis of its present population structure and size, observed size-specific production of eggs, and certain assumed or known information on biomass and survival. Production, in the context of a population, then, is growth and is only one factor in the material or energy budget for the whole population (Edmondson, 1974):
$$\frac{\Delta N}{\Delta t} = \textrm{birth} + \textrm{growth} - \textrm{mortality}, \qquad N_t = N_0 + \textrm{birth} + \textrm{growth} - \textrm{mortality}$$
Since it is necessary to measure recruitment or birth rate, and with continuous birth and death one cannot identify distinct cohorts, production ($P$) is summed over the stages present:
$$P = \frac{N_1\,\Delta w_1}{T_1} + \frac{N_2\,\Delta w_2}{T_2} + \frac{N_3\,\Delta w_3}{T_3} + \ldots + \frac{N_n\,\Delta w_n}{T_n}$$
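The stage-wise sum above can be evaluated directly once stage abundances, individual weight increments and stage durations are known. A minimal illustrative sketch (not taken from any of the manuals cited above; all numbers are hypothetical):

```python
# Discrete-time production model: P = sum_i N_i * dw_i / T_i
#   N_i  = number of individuals currently in stage i
#   dw_i = mean individual weight increment over stage i (ug dry mass)
#   T_i  = duration of stage i (days)
stages = [
    ("nauplii",           1200, 0.08, 2.0),   # hypothetical values
    ("early copepodites",  600, 0.50, 3.0),
    ("late copepodites",   250, 1.20, 4.0),
]

production = sum(n * dw / t for _, n, dw, t in stages)   # ug dry mass per day
print(f"Estimated production: {production:.1f} ug dry mass per day")
```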
High dietary quality of non-toxic cyanobacteria for a benthic grazer and its implications for the control of cyanobacterial biofilms
Sophie Groendahl1 and
Patrick Fink1, 2
Accepted: 9 May 2017
Mass occurrences of cyanobacteria frequently cause detrimental effects to the functioning of aquatic ecosystems. Consequently, attempts have been made to control cyanobacterial blooms through naturally co-occurring herbivores. Control of cyanobacteria through herbivores often appears to be constrained by their low dietary quality rather than by the possession of toxins, since even non-toxic cyanobacteria are hardly consumed by many herbivores. It was thus hypothesized that the consumption of non-toxic cyanobacteria may be improved when they are complemented with other high-quality prey. We conducted a laboratory experiment in which we fed the herbivorous freshwater gastropod Lymnaea stagnalis single non-toxic cyanobacterial and unialgal diets or a mixed diet to test whether diet-mixing may enable these herbivores to control non-toxic cyanobacterial mass abundances.
The treatments in which L. stagnalis was fed non-toxic cyanobacteria or the mixed diet provided significantly higher shell and soft-body growth rates than the average of all single algal diets, but not than the single non-toxic cyanobacterial diets. However, the increase in growth provided by the non-toxic cyanobacterial diets could not be related to typical determinants of dietary quality such as toxicity, nutrient stoichiometry or essential fatty acid content.
These results strongly contradict previous research, which describes non-toxic cyanobacteria as a low-quality food resource for freshwater herbivores in general. Our findings thus have strong implications for gastropod-cyanobacteria relationships and suggest that freshwater gastropods may be able to control mass occurrences of benthic non-toxic cyanobacteria, which are frequently observed in eutrophied water bodies worldwide.
Balanced diet hypothesis
Compensatory feeding
Benthic algae
Lymnaea stagnalis
Cyanobacteria are a common component in the diets of herbivores in freshwater ecosystems. Cyanobacteria often occur in eutrophied water bodies and represent a low-quality food source for consumer species owing to a variety of factors, such as the possession of toxins [1, 2], feeding inhibitors [3], unsuitable morphology [1] and the lack of essential dietary lipids [4]. In particular, their low amounts of the essential omega-3 and omega-6 polyunsaturated fatty acids (PUFA) [5] and of sterols strongly constrain the fitness of herbivores on cyanobacterial diets [4, 6]. Within a cyanobacterial species, individual strains can be toxic or non-toxic. Although less frequently studied, non-toxic cyanobacteria are known to reduce the growth [7] and reproduction [8] of cladocerans and copepods to a similar extent as toxin-bearing cyanobacteria. Surveys conducted in different parts of the world showed that up to 75% of cyanobacterial blooms can be non-toxic [9–11]. Non-toxic cyanobacteria may consequently impact ecosystems, trophic cascades and geochemical cycles [12]. Interestingly, in a meta-analysis by Wilson et al. [1], it was found that cyanobacterial toxins were actually less important with respect to their negative effects on consumer fitness than cyanobacterial cell morphology. Wilson et al. [1] therefore concluded that the role of cyanobacterial toxins in the determination of food quality may be less important than widely assumed and suggested that future research should focus more on nutritional deficiencies, morphology, and the toxicity of undescribed cyanobacterial compounds as mediators of the poor food quality of cyanobacteria.
Due to the multiple threats that cyanobacteria may pose to ecosystems, various attempts have been made to control cyanobacterial mass abundances ('blooms') through herbivory [13, 14]. However, this requires that the herbivores are able to efficiently utilize cyanobacteria as a food resource. While pure cyanobacteria may be a low quality food resource, a mixed diet with cyanobacterial and eukaryotic components may be easier to assimilate for many herbivores. For example, a severe sterol limitation of the planktonic herbivore Daphnia is assumed to occur only if cyanobacteria make up more than 80% of phytoplankton biomass [15]. Moreover, the growth of Dreissena polymorpha was strongly reduced while feeding upon a pure cyanobacterial diet deficient in polyunsaturated fatty acids (PUFAs) in comparison to a mixed diet rich in PUFAs [16]. By consuming a mixed diet, herbivores may obtain all nutrients required for growth. This is called the balanced diet hypothesis [17, 18] and has been described for numerous herbivores for example insects [19], snails [20] and fish [21]. Diet mixing may enable grazers to feed upon cyanobacteria without any significant decrease in fitness, and thus reduce cyanobacterial blooms. In a study by DeMott and Müller-Navarra [22], Daphnia feeding upon non-toxic cyanobacteria did not display any increase in growth, but when supplemented with a green alga a significant increase in growth was observed. Additionally, rotifers were found to grow better on a diet consisting of a mixture between green algae and cyanobacteria than on either single algal diet [23]. Herbivores may also compensate for a low quality diet through compensatory feeding [24–26]. This is a strategy in which herbivores increase their consumption rate as dietary nutrient concentrations decrease in order to maintain a sufficient uptake of the limiting nutrient(s). Compensatory feeding may thereby increase the fitness of herbivores, but it may also be associated with costs [27]. With respect to macronutrients, herbivores maintain a rather strict homeostasis [28, 29], for instance they need to maintain their body's elemental composition by excreting excess nutrients. This requires energy and results in a reduction of fitness [30, 31]. Moreover, when consuming food in higher quantities, the dosage of potential toxins in the diet can increase [32].
Effects of cyanobacterial diets on freshwater gastropods have rarely been studied. While feeding upon benthic biofilms gastropods rarely only encounter cyanobacteria, but mixtures of various microalgae, bacteria and protozoa embedded in a mucopolysaccharide matrix [33]. Gastropods can represent up to 60% of the total biomass of macroinvertebrates in freshwater ecosystems [34] and they play a key role in the top-down control of benthic primary production. For instance, Lymnaea stagnalis (L.), is a benthic herbivore [34] and important grazer in freshwater habitats [35]. It is often found in small eutrophic water bodies [36] where cyanobacteria are extremely common. It is thus plausible to assume that L. stagnalis has evolved strategies to cope with cyanobacterial presence in its diet. L. stagnalis detects its food via semiochemicals [37], but prey selection on the level of individual food items (e.g. algal cells) is not possible due to the rather unspecific ingestion mode via the gastropod radula [35]. Moreover, it is a common model organism in experimental ecology [37–39].
The aim of this study was to test whether gastropods may be able to feed upon non-toxic cyanobacteria without any significant decrease in fitness through the benefits of diet-mixing. Further, we aimed to investigate which factors are responsible for the typically observed low food quality of non-toxic cyanobacteria to freshwater herbivores [40, 41]. We thus hypothesized (1) that a pure diet consisting of non-toxic cyanobacteria will decrease the fitness of freshwater gastropods, and that (2) diet-mixing allows these gastropods to feed upon non-toxic cyanobacteria without any significant decrease in fitness. To test our hypotheses, we conducted a laboratory experiment using juveniles of the great pond snail L. stagnalis which we fed with either single, non-toxic cyanobacterial and algal diets or a mixture of all (six) primary producer species. The aquatic primary producers chosen for this experiment belonged to the three most important groups of organisms in freshwater biofilms: cyanobacteria, chlorophytes and diatoms.
We randomly selected six species of primary producers, two chlorophytes (Aphanochaete repens and Klebsormidium flaccidum), two cyanobacteria (Cylindrospermum sp. and Lyngbya halophila), and two diatoms (Navicula sp. and Nitzschia communis) from the Culture Collection of Algae at Cologne (CCAC, see Table 1). The size and structure of the primary producer cells were determined by microscopy to test whether the morphology of the cells may impact the fitness of L. stagnalis (Table 1). All six primary producer strains were cultivated under continuous aeration in 8 L of cyanophyceae medium [42] for cyanobacteria and chlorophytes or in diatom medium [43] for diatoms, respectively. All cultures were kept in a climatized chamber at 20 ± 1 °C at a light intensity of 150 µmol photons s−1 m−2 as described elsewhere [44]. After one (chlorophytes and cyanobacteria) or two months (for diatoms) of exponential growth, the primary producers were harvested by centrifugation at 4500×g and the resulting pellets were freeze-dried [44]. Juvenile L. stagnalis, originating from a pond in Appeldorn, Germany, were raised in aquaria in a climatized chamber at 18 ± 1 °C with a light–dark period of 16:8 h and fed ad libitum with Tetra Wafer Mix™ fish food pellets (Tetra, Melle, Germany) prior to the experiment [44].
Table 1. The six benthic primary producers used in the experiment, together with their origin/strain, cell shape and average biovolume (µm³):
- Aphanochaete repens (chlorophyte); origin/strain: CCAC/M2227
- Klebsormidium flaccidum (chlorophyte); origin/strain: CCAC/2007 B
- Cylindrospermum sp. (cyanobacterium); origin: CCAC
- Lyngbya halophila (cyanobacterium); origin: CCAC; cell shape: cylinder with two half spheres
- Navicula sp. (diatom); origin: CCAC; cell shape: prism on elliptic base
- Nitzschia communis (diatom); origin: CCAC
The cell-specific biovolumes were calculated on the basis of the geometric shapes according to [45]. The cell morphology of the primary producer species was estimated because it may impact the ingestion of the cells by herbivores.
Growth experiment
A total of 64 juvenile L. stagnalis with a shell height (defined as the distance from the apex to the lower edge of the aperture) of 2.0 ± 0.2 mm were selected. Out of these, eight had their shells removed under a dissecting microscope and were subsequently freeze-dried for the determination of their initial soft body dry mass (dm) using a microbalance (Mettler UTM2, Giessen, Germany). The experiment took place in a climatized chamber at 20 ± 1 °C. The snails were individually kept in square polyethylene containers (length = 11 cm) with each 100 ml aged and aerated tap water and fed on a daily basis. The primary producers were rehydrated in 1 ml water each and then added through a hollow glass cylinder (d = 2.3 cm, h = 2.5 cm) that was submerged halfway in the water in the center of the snails' container. After approximately 1 h when the primary producers had sedimented to the bottom of the container, the glass cylinders were removed. This yielded one clear resource patch and prevented selective feeding by the snails in the mixed primary producer treatment. Water was exchanged daily and the containers were replaced with clean ones every other day. The experiment consisted of seven treatments with eight replicates each, we thus used 56 containers, each containing one L. stagnalis individual. The seven treatments consisted of snails fed with a mixture of all six primary producer species or with one of the six primary producer species. To ensure that the snails could feed ad libitum, the amount of added food was increased during the 33 days experimental duration using previously estimated shell size specific ingestion rates [44]. The shell height of the snails was measured every three days as described above. After 33 days, the soft body dry mass of the remaining 54 snails (single incidents of death had occurred in the treatments with A. repens and K. flaccidum as diet organism during the experiment) was determined as described above. Since the juvenile growth of L. stagnalis can be assumed to be exponential [44], the somatic growth rate of the snails was estimated using the following equation:
$$ g = \frac{\ln\left(m_{end}\right) - \ln\left(m_{start}\right)}{\textrm{time}\ \left[d\right]} $$
where $m_{start}$ is the initial dry mass of the snails, $m_{end}$ is the dry mass of the snails at the end of the experiment (day 33), and time is measured in days (d), yielding the somatic growth rate ($g$).
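As a quick illustration of the equation, the growth rate can be computed directly from the two dry-mass measurements; the numbers below are hypothetical and only show the calculation:

```python
import math

def somatic_growth_rate(m_start, m_end, days):
    """Exponential somatic growth rate g = (ln(m_end) - ln(m_start)) / t."""
    return (math.log(m_end) - math.log(m_start)) / days

# Hypothetical soft-body dry masses (mg) at the start and the end of a 33-day experiment
g = somatic_growth_rate(m_start=0.35, m_end=4.2, days=33)
print(f"g = {g:.3f} per day")
```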
Ingestion rate
On the final day of the experiment, the snails' ingestion rates were determined in the same setup used to determine the somatic growth rates of the snails. Additionally, three control units per dietary treatment were set up without snails. After 19 h, the snails and their fecal pellets were separately removed from the containers and the remaining primary producers were filtered onto pre-combusted glass fiber filters (GF/F, d = 25 mm, VWR GmbH, Darmstadt, Germany) and dried at 60 °C for 24 h. The amount of ingested food was determined by subtracting the primary producer dry mass remaining in the snails' containers after 19 h from the primary producer dry mass in the consumer-free controls.
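The ingestion rate described above is simply the difference between the food remaining in the consumer-free controls and the food remaining in a container with a snail. A minimal sketch of that bookkeeping, with made-up numbers, assuming the rate is expressed per unit snail dry mass and per day:

```python
# Ingestion over a 19 h incubation (all values hypothetical, in mg dry mass)
control_dm = [5.1, 5.0, 4.9]      # food left in the three consumer-free control units
remaining_dm = 3.2                # food left in a container that held a snail
snail_dm = 4.0                    # soft-body dry mass of that snail
hours = 19.0

ingested = sum(control_dm) / len(control_dm) - remaining_dm      # mg food consumed
mass_specific_rate = ingested / snail_dm / (hours / 24)          # mg food (mg snail)^-1 d^-1
print(round(mass_specific_rate, 3))
```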
To determine the C:N ratios of the primary producers, approximately 1 mg of each freeze-dried primary producer culture, including a mixture of all six primary producers in equal dry mass were packed into tin capsules (HekaTech, Wegberg, Germany) and subsequently analyzed using a Thermo Flash EA 2000 elemental Analyser (Schwerte, Germany). For the analysis of particulate phosphorus, approximately 1 mg of freeze-dried primary producers were transferred into a solution of potassium peroxodisulphate and 1.5% sodium hydroxide. The solution was subsequently autoclaved for 1 h at 120 °C and the soluble reactive phosphorus was analyzed using the molybdate–ascorbic acid method [46]. Both analyses were replicated fivefold per food treatment. The same method was applied to separately determine the molar C:N:P ratios of the 54 snails. When the mass of individual gastropod samples did not reach the required 1 mg dry mass, samples of several individuals from the same treatment were pooled randomly to reach sufficient sample masses.
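The elemental data are reported as molar C:N, C:P and N:P ratios, which requires converting the measured element masses to moles. A short sketch of that conversion; the mass fractions used here are illustrative, not values from the study:

```python
# Convert mass fractions (% of dry mass) of C, N and P into molar ratios.
M_C, M_N, M_P = 12.011, 14.007, 30.974   # atomic masses, g mol^-1

def molar_ratios(pct_c, pct_n, pct_p):
    c, n, p = pct_c / M_C, pct_n / M_N, pct_p / M_P   # mol per 100 g dry mass
    return c / n, c / p, n / p

cn, cp, np_ratio = molar_ratios(pct_c=45.0, pct_n=7.0, pct_p=0.6)  # hypothetical values
print(f"C:N = {cn:.1f}, C:P = {cp:.1f}, N:P = {np_ratio:.1f}")
```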
Fatty acid analysis
Cultures of all primary producers were harvested in the exponential growth phase and filtered in triplicate onto precombusted GF/F filters to assess their fatty acid contents. Filters were placed into 5 mL CH2Cl2/MeOH (2:1 v/v) to extract total lipids. The samples were incubated overnight at 4 °C, whereafter 10 µg of methyl heptadecanoate (C17:0 ME) and 5 µg methyl tricosanoate (C23:0 ME) were added as internal standards. Subsequently, the samples were homogenized in an ultrasonic bath for 1 min and then centrifuged at 4500×g for 5 min. Afterwards, the supernatants were dried at 40 °C under a gentle stream of nitrogen. Hydrolysis of lipids and subsequent methylation of fatty acids were achieved by adding 5 mL of 3 N methanolic HCl (Supelco) to the sample and then incubating the sample for 20 min at 70 °C to yield fatty acid methyl esters (FAMEs). The FAMEs were extracted using 6 mL isohexane. The hexane phase was again dried at 40 °C under a gentle stream of nitrogen. Finally, all samples were dissolved in 100 µL isohexane. The samples were then subjected to gas chromatographic analyses on a 6890 N GC System (Agilent Technologies, Waldbronn, Germany) equipped with a DB-225 capillary column (30 m, 0.25 mm i.d., 0.25 µm film thickness, J&W Scientific, Folsom, CA, USA) and a flame ionization detector (FID). The conditions of the GC were as follows: injector and FID temperatures were set to 220 °C; the initial oven temperature was 60 °C for 1 min, followed by a 2 min temperature ramp to 180 °C, then the temperature was increased to 200 °C over a time period of 12.9 min followed by a final 20.6 min temperature increase to 220 °C; helium (5.0 purity) with a flow rate of 1.5 ml min−1 was used as carrier gas. FAMEs were identified by comparison of retention times with those of reference compounds and quantified using the internal standard and previously established calibration functions for each individual FAME. For more details we refer the reader to [47].
Toxin analysis
The two cyanobacterial species used in the experiment (L. halophila and Cylindrospermum sp.) are known to sometimes contain the toxins cylindrospermopsin and lyngbyatoxin-a. We therefore used high-resolution LC–MS to screen for the cyanobacterial toxins cylindrospermopsin and lyngbyatoxin-a in the cyanobacterial cultures to ensure that the toxins were not produced by the cyanobacterial strains. To screen the two cyanobacteria for the toxins cylindrospermopsin and lyngbyatoxin-a, a crude extract from the freeze-dried samples (500 mg dry mass) of the two cyanobacteria (Cylindrospermum sp. and L. halophila) were prepared using 50 mL of 100% methanol (HPLC Grade, VWR). The extracts were incubated for 1 h on a rotary shaker and subsequently centrifuged at 4500×g for 5 min. The supernatant was then evaporated and the residue dissolved in 5 mL of 100% methanol. Three 5 µL subsamples each were analyzed on an Accela Ultra high pressure liquid chromatography (UPLC) system (Thermo Fisher) coupled with an exactive orbitrap mass spectrometer (Thermo Fisher) with electrospray ionization (ESI) in positive and negative ionization mode. As stationary phase, a Nucleosil C18 column (2 × 125 mm length, pore size 100 Å, particle size 3 μm; Macherey–Nagel, Düren, Germany) was used with a gradient of acetonitrile (ACN) and ultrapure water, each containing 0.05% trifluoroacetic acid (TFA). The column temperature was set to 30 °C and the flow rate to 300 µL/min. The solvent gradient started with 0 min: 38% ACN; 2 min: 40% ACN; 12 min: 50% ACN; 12.5 min: 100% ACN; 15 min: 100% ACN; 15.5 min: 38% ACN; 17 min: 38% ACN. Mass spectrometry was carried out at 1 scan s−1 from 150 to 1500 Da in positive and from 120 to 1500 Da in negative ionization mode, respectively (spray voltage 4.5 kV pos./4.3 kV neg. capillary temperature 325 °C, sheath and aux gas nitrogen set to 40 pos./35 neg. and 15 pos./5 neg. respectively). For the identification of potential cyanobacterial toxins, we used extracted ion chromatograms for the respective specific masses of the different compounds (Cylindrospermopsin pos. 416.12345 Da/neg. 414.10889 Da, Lyngbyatoxin-a pos. 438.31150 Da/neg. 436.29695 Da) granting a maximum mass deviation of 3 parts per million (ppm). Electrospray ionization resulted in adduct ions with one positive/one negative charge for the two compounds. We used the Xcalibur software package (Thermo Fisher) for qualitative analysis.
The gastropods' shell height increase over time was analysed via repeated measures ANOVA in R (v. 3.3.1) [48], followed by Bonferroni correction. All other statistical tests were performed in SigmaPlot (v. 11, SysStat). The data were checked for normal distribution using the Shapiro–Wilk test and for homoscedasticity using Levene's test. When the data fulfilled the criteria for a parametric test, one-way ANOVAs were performed, followed by Tukey's HSD. When the data did not fulfil the criteria for an ANOVA, a Kruskal–Wallis test was performed followed by Dunn's post hoc test. Linear regressions were performed to test for relationships between the algal C:N ratio and the ingestion rate of the snails, the algal C:N ratio and the somatic growth rates of the snails, the molar C:N, C:P, N:P ratios of the algae and the molar C:N, C:P, N:P ratios of the snails, and the biovolume of the algae and the somatic growth rate of the snails. An exponential growth, single, 2 parameter regression was performed to test for the relationship between snail dry mass and shell height.
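The test-selection logic described above (normality and homoscedasticity checks, then either ANOVA with Tukey's HSD or Kruskal-Wallis with Dunn's test) can be sketched roughly as follows. This uses SciPy rather than the R and SigmaPlot routines actually employed, and the data are invented:

```python
# Rough sketch of the univariate decision path; `groups` would hold e.g. the
# somatic growth rates per diet treatment (toy data below).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
groups = [rng.normal(loc, 0.02, size=8) for loc in (0.10, 0.12, 0.15)]

normal = all(stats.shapiro(g).pvalue > 0.05 for g in groups)   # Shapiro-Wilk per group
equal_var = stats.levene(*groups).pvalue > 0.05                # Levene's test

if normal and equal_var:
    stat, p = stats.f_oneway(*groups)   # one-way ANOVA; follow up with Tukey's HSD
    test = "one-way ANOVA"
else:
    stat, p = stats.kruskal(*groups)    # Kruskal-Wallis; follow up with Dunn's test
    test = "Kruskal-Wallis"
print(test, round(stat, 2), round(p, 4))
```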
Lymnaea stagnalis fed the cyanobacteria or a mixed diet grew faster than on any of the pure eukaryotic algae, except for the comparison with K. flaccidum (repeated measures ANOVA, F = 26.6, df = 6, P < 0.001, Fig. 1a). The somatic growth rate of L. stagnalis was significantly higher in the treatments where L. halophila and the mixed treatment was offered compared to the treatments in which L. stagnalis was provided with diatoms (Fig. 1b).
Increase of shell height (a) over time and somatic growth rate (b) of L. stagnalis. The snails were fed a diet consisting of single primary producer species or a mixture of all six species (Mixture) in equal biomass (mean + 1 SE; N = 7–8). The dashed line indicates the average of all single algal treatments; means which were found to be significantly different in Tukey post hoc comparisons are labelled with different letters
The molar C:N ratio (one-way ANOVA, F = 100.00, df = 6, P < 0.001, Fig. 2a) and the molar C:P ratio (one-way ANOVA, F = 59.16, df = 6, P < 0.001, Fig. 2c) of the cyanobacteria were significantly lower than those of the algae. The N:P ratio of L. halophila was significantly lower than the N:P ratio of A. repens and N. communis (Kruskal–Wallis, H = 31.24, df = 6, P < 0.001, Fig. 2e). The C:N ratios of the snails varied significantly between the diet treatments (one-way ANOVA, F = 14.01, df = 5, P < 0.001, Fig. 2b). L. stagnalis feeding upon L. halophila had a significantly lower C:N ratio compared to all other treatments (Fig. 2b), except for A. repens and N. communis. While C:N ratios were lower in L. stagnalis compared to their diets (Fig. 2a, b), the C:N:P ratios of the snails and their dietary organisms were not significantly correlated (see Additional file 1).
Molar C:N:P ratios of the primary producers and of L. stagnalis. C:N:P ratios of single primary producer species or of the mixed diet (a, c, e, mean + 1 SE; N = 5) and of L. stagnalis feeding upon the single and mixed diets (b, d, f, mean + 1 SE; N = 1–6). Means which were found to be significantly different in Tukey post hoc comparisons are labelled with different letters
The food consumption of L. stagnalis was significantly higher when offered Navicula sp. compared to the treatment in which the snails fed on Cylindrospermum sp. (Kruskal–Wallis, H = 19.88, df = 6, P = 0.003, Fig. 3). However, no significant differences of the ingestion rates between any other treatments were found (Fig. 3). Moreover, the mean ingestion rate of the snails in each treatment increased linearly with the dietary C:N ratio (y = −0.0241 + (0.00683×), R2 = 0.61, df = 6, P = 0.039, N = 7; Fig. 4a), but the mean somatic growth rate and the dietary C:N ratio did not correlate (linear regression, x = 8.458 − (8.825y), R2 = 0.18, df = 6, P = 0.34, N = 7; Fig. 4b). The mean C:N ratios of the snails in each treatment were lower than the mean C:N ratios of any of the diets (N = 7; Fig. 4c). Additionally, no significant relationships between somatic growth rate and the dietary organisms' cell sizes were found (see Additional file 2).
Mass-specific ingestion rates of L. stagnalis. Ingestion rates (mean + 1 SE; N = 7–8) were determined on the last day of the growth experiment. Means which were found to be significantly different in Tukey post hoc comparisons are labelled with different letters
Relationships of primary producer C:N ratio with various parameters of L. stagnalis. The average C:N ratio of the algae (red for diatoms and green for chlorophytes), cyanobacteria (blue) and the mixed diet (grey) in each treatment (a, b, N = 7) is plotted versus the average ingestion rate of L. stagnalis in the respective treatment (a, N = 7), the average snail somatic growth rate in the respective treatment (b, N = 7), or the C:N ratio of the algae and cyanobacteria in each treatment (c, mean ± 1 SE; N = 7) versus ingestion rate of L. stagnalis in the respective treatment (c, mean ± 1 SE; N = 7)
The fatty acid concentrations of the primary producers differed: A. repens and Cylindrospermum sp. were particularly rich in α-linolenic acid (C 18:3 n − 3), whereas N. communis was rich in eicosapentaenoic acid (C20:5 n − 3, see Additional file 3). A. repens and the cyanobacteria contained the highest absolute amounts of palmitic acid (C 16:0, see Additional file 3), while A. repens and N. communis contained the highest total amounts of fatty acids and PUFAs (see Additional file 4).
We did not detect the cyanobacterial toxin cylindrospermopsin in the cyanobacterium Cylindrospermum sp. nor lyngbyatoxin-a in L. halophila via high-resolution LC–MS.
Contrary to our hypothesis, a mixed diet containing non-toxic cyanobacteria did not provide a higher growth rate for L. stagnalis compared to pure non-toxic cyanobacterial diets. Surprisingly, the single non-toxic cyanobacteria provided growth rates higher than or equal to the diet consisting of mixed pro- and eukaryotic primary producers. Non-toxic cyanobacteria—at least the two strains investigated here—may therefore be considered a high quality resource for L. stagnalis. Negative effects of non-toxic cyanobacteria on the fitness of animals have frequently been reported [7, 8]. However, a screening of cyanobacterial strains demonstrated that some cyanobacterial strains can have a high nutritional value [49]. The importance of the supply ratio or stoichiometry of carbon (C), nitrogen (N) and phosphorus (P) is well studied, as the balance or imbalance in the molar C:N:P supply ratio has been linked to herbivore growth [50, 51], fecundity [52, 53], developmental times [52] and survival rates [52]. While nitrogen is mainly needed for protein synthesis [29], phosphorus is required for the synthesis of phospholipids and nucleic acids [54]. We found that the cyanobacterial species exhibited slightly lower C:N and C:P ratios than the green algae. On the other hand, the overall differences in nutrient stoichiometry between the diet organisms were not particularly pronounced and the nutrient ratios were typically below those observed for L. stagnalis, which makes a direct nitrogen or phosphorus limitation of snail growth on the green algal diets unlikely. Furthermore, we did not find a significant correlation between the C:N ratio of the algae and the somatic growth rate of the snails, suggesting that the snails were not nitrogen limited. Also, the C:P ratios and the C:N ratios of the snails and the primary producers did not correlate.
The ingestion rate of the snails increased linearly with the C:N ratio of the primary producers, suggesting that compensatory feeding occurred. However, the somatic growth rate of the snails did not correlate with the C:N ratio of the primary producers. It is possible that compensatory feeding by L. stagnalis mitigated a reduction in growth that nitrogen limitation would otherwise have caused in our experiment. In a previous study, it was found that the freshwater snail Radix ovata displayed compensatory feeding on low-nutrient diets [24]. This dampened the differences in fitness compared to the treatments in which R. ovata was fed a nutrient-rich diet.
The lack of essential fatty acids and sterols in cyanobacteria is frequently held responsible for the reduction in growth of pelagic herbivorous zooplankton [4, 15], but also of filter-feeding clams Corbicula [40] and mussels Dreissena [55]. Even though the diatom diets in our experiment contained much more PUFAs than the cyanobacteria, L. stagnalis grew better on both cyanobacteria compared to any of the diatoms. This supports previous findings that lymnaeid gastropods appear to be less susceptible to dietary PUFA limitations than other freshwater invertebrates [56].
Differences in morphology [1, 57, 58] and cell size [44] can strongly influence the ingestion of algae and cyanobacteria by herbivores. We found that snails feeding upon the two filamentous cyanobacterial species grew best. Similar results have been found for the lymnaeid species Radix peregra, which ingests filamentous green algae better than diatoms [58]. Grazing by Lymnaea elodes, for instance, increased the abundance of small coccoid cells of green algae and cyanobacteria at the expense of larger diatoms [59], suggesting a preference for larger algal cells. However, we could not find any clear correlation between algal cell size and the somatic growth rate of L. stagnalis in this study.
If the fitness of L. stagnalis is increased by the consumption of non-toxic cyanobacteria rather than green algae and diatoms, L. stagnalis might have the ability to decrease non-toxic cyanobacterial abundances. In fact, it has been found that snails have the potential to reduce cyanobacterial blooms. In a study by Armitage and Fong [60], primary producers were subjected to nutrient enrichment, which led to an increase in cyanobacterial blooms by up to 200%. When snails were allowed to feed upon the primary producers, only cyanobacteria decreased in biomass [60]. The snails did not avoid consumption of the toxic cyanobacteria, which indicated that they perceived the cyanobacteria as a suitable food resource, even though cyanobacteria could be linked to an increased mortality of the snails [60]. Toxic cyanobacteria are likely to influence competitive interactions among consumer species, favoring the most tolerant ones [61]. Previous studies found that crustacean zooplankton of the genus Daphnia are able to locally adapt to environments where cyanobacteria occur in high abundances [62–64]. Similarly, physiological adaptations of freshwater mussels to cyanobacterial toxins have been found [65]. As L. stagnalis often occurs in eutrophicated water bodies [36], habitats where cyanobacteria are known to occur, it is possible that—similar to zooplankton and mussels—Lymnaea have evolved to coexist with cyanobacteria.
We fed L. stagnalis with single primary producer species or a mixture of all species. Due to time constraints, we did not investigate the effects of all possible diet combinations on the fitness of L. stagnalis; however, the experimental set-up still provided insights into gastropod-cyanobacteria relationships.
Efforts have been made to control cyanobacterial abundances by manipulating the biomass of herbivores and thereby increasing the top–down grazing pressure on cyanobacteria. However, most studies conducted used toxic cyanobacterial species; therefore, the knowledge of non-toxic cyanobacteria-grazer relationships remains limited. We hypothesized that the growth rate of L. stagnalis would be significantly reduced when feeding upon non-toxic cyanobacteria; however, we found the opposite pattern. Non-toxic cyanobacteria and the mixed diet provided the best growth rates for the snails. L. stagnalis might thus be a good biological control agent for non-toxic cyanobacterial mass occurrences. These results hence have considerable repercussions for how the dietary quality of non-toxic cyanobacteria for gastropods is perceived.
SG and PF designed the research. SG conducted the laboratory work, performed the statistical analyses and drafted the manuscript. PF contributed to manuscript writing. Both authors read and approved the final manuscript.
We thank Sofia Albrecht for help with the experiment, Barbara Melkonian at the Culture Collection of Algae at the University of Cologne (CCAC) for providing us with the cyanobacterial and algal cultures, Katja Preuss for assistance with the GC analysis of fatty acids and Christian Burberg for help with the LC–MS screening for cyanobacterial toxins.
The datasets analyzed during the current study are available from the corresponding author on reasonable request.
We confirm that we comply with the guidelines for the use of animal behavior for research and teaching (Animal Behaviour 2012. 83: 301–309).
This study was supported by the Deutsche Forschungsgemeinschaft (DFG), Grant FI 1548/5-1 to PF.
12898_2017_130_MOESM1_ESM.tiff Additional file 1. Relationship between the C:N:P ratios (mean ± SE) of the primary producers and L. stagnalis. Nonsignificant linear regressions for C:N (A, y = 3.006 + (0.335 x), R2 = 0.27, df = 6, P = 0.23), C:P (B, y = 134.678 - (0.0388 x), R2 < 0.005, df = 6, P = 0.90), and N:P (C, y = 27.849 - (0.342 x), R2 = 0.26, df = 6, P = 0.25).
12898_2017_130_MOESM2_ESM.tiff Additional file 2. Relationship between primary producer biovolume and somatic growth rate of L. stagnalis. Not statistically significant linear regression, y = 0.176 - (0.0000260 x), R2 = 0.63, df = 5, P = 0.06.
12898_2017_130_MOESM3_ESM.docx Additional file 3. A table of the fatty acid composition of the primary producers used as a food resource for L. stagnalis. Values given are means ± 1 SE of N = 3 replicates analyzed via gas chromatography of fatty acid methyl esters (n.d. = not detected), the standard errors are given in parentheses.
12898_2017_130_MOESM4_ESM.docx Additional file 4. Total fatty acid and polyunsaturated fatty acid (PUFA) concentration of the primary producers. Values given are means ± 1 SE of N = 3 replicates analyzed via gas chromatography of fatty acid methyl esters, the standard errors are given in parentheses.
Cologne Biocenter, Workgroup Aquatic Chemical Ecology, University of Cologne, Zuelpicher Strasse 47b, 50674 Koeln, Germany
Present address: Institute for Zoomorphology and Cell Biology, Heinrich-Heine University of Duesseldorf, Universitaetsstrasse 1, 40225 Duesseldorf, Germany
Wilson AE, Sarnelle O, Tillmanns AR. Effects of cyanobacterial toxicity and morphology on the population growth of freshwater zooplankton: meta-analyses of laboratory experiments. Limnol Oceanogr. 2006;51(4):1915–24.View ArticleGoogle Scholar
Carmichael WW. Health effects of toxin-producing cyanobacteria: "The CyanoHABs". Hum Ecol Risk Assess. 2001;7(5):1393–407.View ArticleGoogle Scholar
Schwarzenberger A, Zitt A, Kroth P, Mueller S, Von Elert E. Gene expression and activity of digestive proteases in Daphnia: effects of cyanobacterial protease inhibitors. BMC Physiol. 2010;10:6.View ArticlePubMedPubMed CentralGoogle Scholar
Martin-Creuzburg D, von Elert E. Good food versus bad food: the role of sterols and polyunsaturated fatty acids in determining growth and reproduction of Daphnia magna. Aquat Ecol. 2009;43(4):943–50.View ArticleGoogle Scholar
Sargent J, Bell J, Bell M, Henderson R, Tocher D. Requirement criteria for essential fatty acids. J Appl Ichthyol. 1995;11(3–4):183–98.View ArticleGoogle Scholar
Wacker A, Becher P, von Elert E. Food quality effects of unsaturated fatty acids on larvae of the zebra mussel Dreissena polymorpha. Limnol Oceanogr. 2002;47(4):1242–8.View ArticleGoogle Scholar
Ferrão-Filho AS, Azevedo SM, DeMott WR. Effects of toxic and non-toxic cyanobacteria on the life history of tropical and temperate cladocerans. Freshw Biol. 2000;45(1):1–19.View ArticleGoogle Scholar
Koski M, Engström J, Viitasalo M. Reproduction and survival of the calanoid copepod Eurytemora affinis fed with toxic and non-toxic cyanobacteria. Mar Ecol Prog Ser. 1999;186:187–97.View ArticleGoogle Scholar
Lindholm T, Vesterkvist P, Spoof L, Lundberg-Niinistö C, Meriluoto J. Microcystin occurrence in lakes in Åland, SW Finland. Hydrobiologia. 2003;505(1):129–38.View ArticleGoogle Scholar
Sivonen K. Cyanobacterial toxins and toxin production. Phycologia. 1996;35(6S):12–24.View ArticleGoogle Scholar
Vezie C, Brient L, Sivonen K, Bertru G, Lefeuvre JC, Salkinoja-Salonen M. Variation of microcystin content of cyanobacterial blooms and isolated strains in Lake Grand-Lieu (France). Microb Ecol. 1998;35(2):126–35.View ArticlePubMedGoogle Scholar
Sukenik A, Quesada A, Salmaso N. Global expansion of toxic and non-toxic cyanobacteria: effect on ecosystem functioning. Biodivers Conserv. 2015;24(4):889–908.View ArticleGoogle Scholar
Xie P, Liu J. Practical success of biomanipulation using filter-feeding fish to control cyanobacteria blooms: a synthesis of decades of research and application in a subtropical hypereutrophic lake. Sci World J. 2001;1:337–56.View ArticleGoogle Scholar
Matveev V, Matveeva L, Jones GJ. Study of the ability of Daphnia carinata King to control phytoplankton and resist cyanobacterial toxicity: implications for biomanipulation in Australia. Mar Freshw Res. 1994;45(5):889–904.View ArticleGoogle Scholar
von Elert E, Martin-Creuzburg D, Le Coz JR. Absence of sterols constrains carbon transfer between cyanobacteria and a freshwater herbivore (Daphnia galeata). Proc R Soc Lond B: Biol Sci. 2003;270(1520):1209–14.View ArticleGoogle Scholar
Wacker A, von Elert E. Strong influences of larval diet history on subsequent post–settlement growth in the freshwater mollusc Dreissena polymorpha. Proc R Soc Lond B: Biol Sci. 2002;269(1505):2113–9.View ArticleGoogle Scholar
Pulliam HR. Diet optimization with nutrient constraints. Am Nat. 1975;109(970):765–8.View ArticleGoogle Scholar
Westoby M. What are the biological bases of varied diets? Am Nat. 1978;112(985):627–31.View ArticleGoogle Scholar
Unsicker SB, Oswald A, Koehler G, Weisser WW. Complementarity effects through dietary mixing enhance the performance of a generalist insect herbivore. Oecologia. 2008;156(2):313–24.View ArticlePubMedPubMed CentralGoogle Scholar
Watanabe JM. Food preference, food quality and diets of three herbivorous gastropods (Trochidae: Tegula) in a temperate kelp forest habitat. Oecologia. 1984;62(1):47–52.View ArticlePubMedGoogle Scholar
Lobel PS, Ogden JC. Foraging by the herbivorous parrotfish Sparisoma radians. Mar Biol. 1981;64(2):173–83.View ArticleGoogle Scholar
DeMott W, Müller-Navarra D. The importance of highly unsaturated fatty acids in zooplankton nutrition: evidence from experiments with Daphnia, a cyanobacterium and lipid emulsions. Freshw Biol. 1997;38(3):649–64.View ArticleGoogle Scholar
Alva-Martínez AF, Fernández R, Sarma S, Nandini S. Effect of mixed toxic diets (Microcystis and Chlorella) on the rotifers Brachionus calyciflorus and Brachionus havanaensis cultured alone and together. Limnol-Ecol Manag Inland Waters. 2009;39(4):302–5.View ArticleGoogle Scholar
Fink P, Von Elert E. Physiological responses to stoichiometric constraints: nutrient limitation and compensatory feeding in a freshwater snail. Oikos. 2006;115(3):484–94.View ArticleGoogle Scholar
Cruz-Rivera E, Hay ME. Can quantity replace quality? Food choice, compensatory feeding, and fitness of marine mesograzers. Ecology. 2000;81(1):201–19.View ArticleGoogle Scholar
Berner D, Blanckenhorn WU, Körner C. Grasshoppers cope with low host plant quality by compensatory feeding and food selection: N limitation challenged. Oikos. 2005;111(3):525–33.View ArticleGoogle Scholar
Zehnder CB, Hunter MD. More is not necessarily better: the impact of limiting and excessive nutrients on herbivore population growth rates. Ecol Entomol. 2009;34(4):535–43.View ArticleGoogle Scholar
Persson J, Fink P, Goto A, Hood JM, Jonas J, Kato S. To be or not to be what you eat: regulation of stoichiometric homeostasis among autotrophs and heterotrophs. Oikos. 2010;119(5):741–51.View ArticleGoogle Scholar
Sterner RW, Elser JJ. Ecological stoichiometry: the biology of elements from molecules to the biosphere. Princeton: Princeton University Press; 2002.Google Scholar
Darchambeau F, Faerøvig PJ, Hessen DO. How Daphnia copes with excess carbon in its food. Oecologia. 2003;136(3):336–46.View ArticlePubMedGoogle Scholar
Suzuki-Ohno Y, Kawata M, Urabe J. Optimal feeding under stoichiometric constraints: a model of compensatory feeding with functional response. Oikos. 2012;121(4):569–78.View ArticleGoogle Scholar
Slansky F, Wheeler G. Caterpillars' compensatory feeding response to diluted nutrients leads to toxic allelochemical dose. Entomol Exp Appl. 1992;65(2):171–86.View ArticleGoogle Scholar
Anderson M. Variations in biofilms colonizing artificial surfaces: seasonal effects and effects of grazers. J Mar Biol Assoc UK. 1995;75(03):705–14.View ArticleGoogle Scholar
Habdija I, Lajtner J, Belinic I. The contribution of gastropod biomass in macrobenthic communities of a karstic river. Int Rev Gesamt Hydrobiol. 1995;80(1):103–10.View ArticleGoogle Scholar
Dillon R. The ecology of freshwater molluscs, vol. 1. 1st ed. Cambridge: Cambridge University Press; 2000.View ArticleGoogle Scholar
Clarke AH. Gastropods as indicators of trophic lake stages. Nautilus. 1979;94:4.Google Scholar
Moelzner J, Fink P. The smell of good food: volatile infochemicals as resource quality indicators. J Anim Ecol. 2014;83(5):1007–14.View ArticlePubMedGoogle Scholar
Bakker ES, Dobrescu I, Straile D, Holmgren M. Testing the stress gradient hypothesis in herbivore communities: facilitation peaks at intermediate nutrient levels. Ecology. 2013;94(8):1776–84.View ArticlePubMedGoogle Scholar
Dalesman S, Rundle SD, Bilton DT, Cotton PA. Phylogenetic relatedness and ecological interactions determine antipredator behavior. Ecology. 2007;88(10):2462–7.View ArticlePubMedGoogle Scholar
Basen T, Martin-Creuzburg D, Rothhaupt KO. Role of essential lipids in determining food quality for the invasive freshwater clam Corbicula fluminea. J N Am Benthol Soc. 2011;30(3):653–64.View ArticleGoogle Scholar
von Elert E. Determination of limiting polyunsaturated fatty acids in Daphnia galeata using a new method to enrich food algae with single fatty acids. Limnol Oceanogr. 2002;47(6):1764–73.View ArticleGoogle Scholar
von Elert E, Jüttner F. Phosphorus limitation and not light controls the extracellular release of allelopathic compounds by Trichormus doliolum (cyanobacteria). Limnol Oceanogr. 1997;42(8):1796–802.View ArticleGoogle Scholar
Wendel T. Lipoxygenase-katalysierte VOC-Bildung: Untersuchungen an Seewasser und Laborkulturen von Diatomeen. Shaker; 1994.Google Scholar
Groendahl S, Fink P. The effect of diet mixing on a nonselective herbivore. PLoS ONE. 2016;11(7):e0158924.View ArticlePubMedPubMed CentralGoogle Scholar
Hillebrand H, Dürselen CD, Kirschtel D, Pollingher U, Zohary T. Biovolume calculation for pelagic and benthic microalgae. J Phycol. 1999;35(2):403–24.View ArticleGoogle Scholar
Greenberg A, Trussel R, Clesceri L. Standard method for the examination of water and wastewater. Washington: American Public Health Association (APHA); 1985.Google Scholar
Fink P. Invasion of quality: high amounts of essential fatty acids in the invasive Ponto-Caspian mysid Limnomysis benedeni. J Plankton Res. 2013: fbt029.Google Scholar
Team RC. R: a language and environment for statistical computing. Vienna; 2013.Google Scholar
Becker E, Venkataraman L. Production and utilization of the blue-green alga Spirulina in India. Biomass. 1984;4(2):105–25.View ArticleGoogle Scholar
Hessen DO. Nutrient element limitation of zooplankton production. Am Nat. 1992;149:799–814.View ArticleGoogle Scholar
Elser JJ, Fagan WF, Denno RF, Dobberfuhl DR, Folarin A, Huberty A, Interlandi S, Kilham SS, McCauley E, Schulz KL, et al. Nutritional constraints in terrestrial and freshwater food webs. Nature. 2000;408(6812):578–80.View ArticlePubMedGoogle Scholar
Huberty AF, Denno RF. Consequences of nitrogen and phosphorus limitation for the performance of two planthoppers with divergent life-history strategies. Oecologia. 2006;149(3):444–55.View ArticlePubMedGoogle Scholar
Kilham S, Kreeger D, Goulden C, Lynn S. Effects of algal food quality on fecundity and population growth rates of Daphnia. Freshw Biol. 1997;38(3):639–47.View ArticleGoogle Scholar
Weider LJ, Elser JJ, Crease TJ, Mateos M, Cotner JB, Markow TA. The functional significance of ribosomal (r) DNA variation: impacts on the evolutionary ecology of organisms. Annu Rev Ecol Evol Syst. 2005;36:219–42.View ArticleGoogle Scholar
Wacker A, Von Elert E. Food quality controls reproduction of the zebra mussel (Dreissena polymorpha). Oecologia. 2003;135(3):332–8.View ArticlePubMedGoogle Scholar
Fink P. Food quality and food choice in freshwater gastropods: field and laboratory investigations on a key component of littoral food webs. Berlin: Logos-Verlag; 2006.Google Scholar
Lodge D. Selective grazing on periphyton: a determinant of freshwater gastropod microdistributions. Freshw Biol. 1986;16(6):831–41.View ArticleGoogle Scholar
Calow P. Studies on the natural diet of Lymnaea pereger obtusa (Kobelt) and its possible ecological implications. J Molluscan Stud. 1970;39(2–3):203–15.Google Scholar
Cuker BE. Grazing and nutrient interactions in controlling the activity and composition of the epilithic algal community of an arctic lake. Limnol Oceanogr. 1983;28(1):133–41.View ArticleGoogle Scholar
Armitage AR, Fong P. Upward cascading effects of nutrients: shifts in a benthic microalgal community and a negative herbivore response. Oecologia. 2004;139(4):560–7.View ArticlePubMedGoogle Scholar
Gerard C, Poullain V, Lance E, Acou A, Brient L, Carpentier A. Influence of toxic cyanobacteria on community structure and microcystin accumulation of freshwater molluscs. Environ Pollut. 2009;157(2):609–17.View ArticlePubMedGoogle Scholar
Schwarzenberger A, D'Hondt S, Vyverman W, von Elert E. Seasonal succession of cyanobacterial protease inhibitors and Daphnia magna genotypes in a eutrophic Swedish lake. Aquat Sci. 2013;75(3):433–45.View ArticleGoogle Scholar
Gustafsson S, Rengefors K, Hansson L-A. Increased consumer fitness following transfer of toxin tolerance to offspring via maternal effects. Ecology. 2005;86(10):2561–7.View ArticleGoogle Scholar
Hairston N Jr, Holtmeier C, Lampert W, Weider L, Post D, Fischer J, Caceres C, Fox J, Gaedke U. Natural selection for grazer resistance to toxic cyanobacteria: evolution of phenotypic plasticity? Evolution. 2001;55(11):2203–14.View ArticlePubMedGoogle Scholar
Burmester V, Nimptsch J, Wiegand C. Adaptation of freshwater mussels to cyanobacterial toxins: Response of the biotransformation and antioxidant enzymes. Ecotoxicol Environ Saf. 2012;78:296–309.View ArticlePubMedGoogle Scholar | CommonCrawl |
Cornerstones
Non-Parametric Tests
Wilcoxon Rank Sum Test
The Wilcoxon Rank Sum test is a non-parametric hypothesis test where the null hypothesis is that there is no difference in the populations (i.e., they have equal medians).
This test does assume that the two samples are independent, and both $n_1$ and $n_2$ are at least $10$. It should not be used if either of these assumptions are not met.
The test involves first ranking the data in both samples, taken together. Each data element is given a rank, $1$ through $n_1 + n_2$, from lowest to highest -- with ties resolved by ranking tied elements arbitrarily at first, and then replacing rankings of tied elements with the average rank of those tied elements.
So for example, ranking the data below $$\begin{array}{l|cccccccc} \textrm{Sample A} & 12 & 15 & 17 & 18 & 18 & 20 & 23 & 24\\\hline \textrm{Sample B} & 14 & 15 & 18 & 20 & 20 & 20 & 24 & 25\\ \end{array}$$ results in the following ranks $$\begin{array}{ccc} \textrm{value} & \textrm{initial rank} & \textrm{final rank}\\\hline 12 & 1 & 1\\ 14 & 2 & 2\\ 15 & 3 & 3.5\\ 15 & 4 & 3.5\\ 17 & 5 & 5\\ 18 & 6 & 7\\ 18 & 7 & 7\\ 18 & 8 & 7\\ 20 & 9 & 10.5\\ 20 & 10 & 10.5\\ 20 & 11 & 10.5\\ 20 & 12 & 10.5\\ 23 & 13 & 13\\ 24 & 14 & 14.5\\ 24 & 15 & 14.5\\ 25 & 16 & 16\\ \end{array}$$
Suppose $n_1$ denotes the size of the smaller sample and $n_2$ denotes the size of the other sample. Now define the following: $$\mu_R = \frac{n_1(n_1+n_2+1)}{2} \quad \textrm{ and } \quad \sigma_R = \sqrt{\frac{n_1 n_2 (n_1 + n_2 + 1)}{12}}$$ If $R$ is the sum of the ranks associated with elements from the sample of size $n_1$, then $$z = \frac{R - \mu_R}{\sigma_R}$$ is a test statistic that approximately follows a standard normal distribution.
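The whole computation is easy to carry out in code. The sketch below is an addition to the text, not part of the original page: it reproduces the example above, computes the tie-averaged ranks, the rank sum $R$ for Sample A, and the $z$ statistic. The helper name average_ranks and the variable names are illustrative choices; SciPy's scipy.stats.ranksums performs an equivalent test.

# Minimal sketch (added for illustration): Wilcoxon rank-sum z statistic
# for the two samples tabulated above.
from math import sqrt

def average_ranks(values):
    """Rank all values from lowest to highest, giving tied values the
    average of the ranks they occupy."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over the run of values tied with position i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # ranks are 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

sample_a = [12, 15, 17, 18, 18, 20, 23, 24]
sample_b = [14, 15, 18, 20, 20, 20, 24, 25]

n1, n2 = len(sample_a), len(sample_b)      # here both samples have size 8
ranks = average_ranks(sample_a + sample_b)
R = sum(ranks[:n1])                        # rank sum for Sample A (61.5)

mu_R = n1 * (n1 + n2 + 1) / 2
sigma_R = sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
z = (R - mu_R) / sigma_R
print(R, mu_R, sigma_R, z)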
Kruskal-Wallis Test (the H Test)
The Kruskal-Wallis Test (named after William Kruskal and W. Allen Wallis) can be used to test the claim (a null hypothesis) that there is no difference in the populations (i.e., they have equal medians) when there are 3 or more independent samples, provided they meet the additional assumption that the sample sizes are all at least 5.
To perform the test, we first rank all of the samples together, and then add the ranks associated with each sample.
Letting $R_i$ be the sum of the ranks for sample $i$, of size $n_i$, $N$ be the sum of all sample sizes $n_i$, and $k$ be the number of samples, the following test statistic
$$H = \frac{12}{N(N+1)}\left[\sum_{i=1}^k \frac{R^2_i}{n_i} \right] - 3(N+1)$$ approximately follows a $\chi^2$ distribution with $k-1$ degrees of freedom.
This is a right-tailed test.
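As a quick illustration (again an addition, using made-up sample data), the statistic can be computed directly from the pooled ranks. The sample values below are hypothetical; scipy.stats.rankdata handles the tie-averaging and scipy.stats.kruskal performs the whole test, but the arithmetic here follows the formula above so each piece is visible.

# Minimal sketch (added): Kruskal-Wallis H statistic for three hypothetical samples.
import numpy as np
from scipy.stats import rankdata, chi2

samples = [np.array([27, 31, 29, 35, 33]),          # made-up data,
           np.array([22, 25, 28, 24, 26, 23]),      # at least 5 observations
           np.array([30, 36, 34, 38, 32])]          # per sample

pooled = np.concatenate(samples)
ranks = rankdata(pooled)                 # ranks 1..N with ties averaged
N, k = len(pooled), len(samples)

# sum of ranks for each sample
splits = np.cumsum([len(s) for s in samples])[:-1]
rank_sums = [r.sum() for r in np.split(ranks, splits)]

H = 12 / (N * (N + 1)) * sum(R ** 2 / len(s) for R, s in zip(rank_sums, samples)) - 3 * (N + 1)
p_value = chi2.sf(H, df=k - 1)           # right-tailed test
print(H, p_value)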
To see why this test statistic takes the form it does, consider the following:
Recall that a $\chi^2$-distribution is the distribution of a sum of the squares of independent standard normal random variables.
Under the presumption that the sample sizes $n_i$ are not too small (remember, we required $n_i \ge 5$ for each sample), the average ranks $\overline{R_i} = R_i/n_i$ will be approximately jointly normally distributed.
(Note, we have relaxed our typical requirement that $n \ge 30$ down to $n \ge 5$ as the associated population is uniform.)
To make $H$ a sum of squares of standard normal random variables, we use $z$-scores for each observed average rank in a natural way:
$$H \approx \sum_{i=1}^k \left( \frac{\textrm{observed average rank} - \textrm{expected average rank}}{\displaystyle{\left(\frac{\textrm{standard deviation of ranks}}{\sqrt{n_i}}\right) }} \right)^2$$
Given the null hypothesis that there is no difference between the populations with regard to their medians, we can expect the ranks $1$ to $N$ seen in the samples to be distributed uniformly. Recalling that the expected value and variance of such a uniform distribution $X$ are given by $$E(X) = \frac{N+1}{2} \quad \quad \textrm{ and } \quad \quad Var(X) = (SD(X))^2 = \frac{N^2-1}{12}$$ we make the following substitutions:
$$H \approx \sum_{i=1}^k \frac{n_i \left[\overline{R_i} - \frac{N+1}{2} \right]^2}{\frac{N^2 - 1}{12}}$$
Multiplying by a factor of $(N-1)/N$ to correct for bias (much like Bessel's correction), we have:
$$H = \frac{N-1}{N}\sum_{i=1}^k \frac{n_i \left[\overline{R_i} - \frac{N+1}{2} \right]^2}{\frac{N^2 - 1}{12}}$$
From here, we just use algebra to rewrite $H$ in a form more convenient for calculation:
$$\begin{array}{rcl} H &=& \displaystyle{\frac{N-1}{N}\sum_{i=1}^k \frac{n_i \left[\frac{R_i}{n_i} - \frac{N+1}{2} \right]^2}{\frac{N^2 - 1}{12}}}\\\\ &=& \displaystyle{\frac{12}{N^2-1} \cdot \frac{N-1}{N} \cdot \sum_{i=1}^k \left[ n_i \left( \frac{R_i^2}{n_i^2} - \frac{R_i}{n_i}(N+1) + \frac{(N+1)^2}{4}\right) \right]}\\\\ &=& \displaystyle{\frac{12}{N(N+1)} \cdot \sum_{i=1}^k \left(\frac{R_i^2}{n_i} - R_i (N+1) + \frac{(N+1)^2}{4} n_i \right)}\\\\ &=& \displaystyle{\frac{12}{N(N+1)} \cdot \left[ \sum_{i=1}^k \frac{R_i^2}{n_i} - \sum_{i=1}^k R_i(N+1) + \sum_{i=1}^k\frac{(N+1)^2}{4}n_i \right]}\\\\ &=& \displaystyle{\frac{12}{N(N+1)} \cdot \left[ \sum_{i=1}^k \frac{R_i^2}{n_i} - (N+1)\sum_{i=1}^k R_i + \frac{(N+1)^2}{4}\sum_{i=1}^kn_i \right]}\\\\ &=& \displaystyle{\frac{12}{N(N+1)} \cdot \left[ \sum_{i=1}^k \frac{R_i^2}{n_i} - (N+1) \cdot \frac{N(N+1)}{2} + \frac{(N+1)^2}{4} \cdot N \right]}\\\\ &=& \displaystyle{\frac{12}{N(N+1)} \cdot \left[ \sum_{i=1}^k \frac{R_i^2}{n_i} \right] - 6(N+1) + 3(N+1)}\\\\ &=& \displaystyle{\frac{12}{N(N+1)} \cdot \left[ \sum_{i=1}^k \frac{R_i^2}{n_i} \right] - 3(N+1)}\\\\ \end{array}$$
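To confirm that the final computational form really is the same quantity as the bias-corrected "average rank" form, here is a quick numerical check (an added sketch, not part of the original page). The rank sums 50, 22 and 64 are the ones produced by the made-up samples in the previous snippet (sizes 5, 6 and 5, so $N = 16$ and the rank sums total $N(N+1)/2 = 136$).

# Added sketch: the two forms of H derived above agree numerically.
n = [5, 6, 5]
R = [50.0, 22.0, 64.0]      # rank sums from the previous snippet; they total N(N+1)/2
N = sum(n)

# definitional form, with the (N-1)/N bias-correction factor
H_def = (N - 1) / N * sum(
    n_i * (R_i / n_i - (N + 1) / 2) ** 2 for n_i, R_i in zip(n, R)
) / ((N ** 2 - 1) / 12)

# computational form
H_comp = 12 / (N * (N + 1)) * sum(R_i ** 2 / n_i for n_i, R_i in zip(n, R)) - 3 * (N + 1)

print(H_def, H_comp)        # identical values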
July 2019, 24(7): 3439-3451. doi: 10.3934/dcdsb.2018327
Complex dynamics in a discrete-time size-structured chemostat model with inhibitory kinetics
Dan Zhang (a, b), Xiaochun Cai (c), and Lin Wang (d, *)
(a) School of Mathematics and Computational Science, Xiangtan University, Xiangtan, 411105, Hunan, China
(b) School of Computer Science and Network Security, Dongguan University of Technology, Dongguan, 523808, Guangdong, China
(c) College of Finance and Statistics, Hunan University, Changsha, 410079, Hunan, China
(d) Department of Mathematics and Statistics, University of New Brunswick, Fredericton, NB, Canada
* Corresponding author: Lin Wang
Received: May 2018. Revised: August 2018. Published: January 2019.
Fund Project: The work of DZ was partially supported by the National Natural Science Foundation of China (No. 11501193) and the China Postdoctoral Fund (No. 2015M582335). LW was partially supported by a Discovery Grant from the Natural Sciences and Engineering Research Council of Canada (NSERC).
An inhibitory uptake function is incorporated into the discrete, size-structured nonlinear chemostat model developed by Arino et al. (Journal of Mathematical Biology, 45 (2002)). Different from the model with a monotonically increasing uptake function, we show that the inhibitory kinetics can induce very complex dynamics including stable equilibria, cycles and chaos (via the period-doubling cascade). In particular, when the nutrient concentration in the input feed to the chemostat $ S^0 $ is larger than the upper break-even concentration value $ \mu $, the model exhibits three types of bistability, allowing a stable equilibrium to coexist with another stable equilibrium, a stable cycle, or a chaotic attractor.
Keywords: Chemostat model, inhibitory kinetics, discrete time, chaos, bistability.
Mathematics Subject Classification: Primary: 39A30, 92D25.
Citation: Dan Zhang, Xiaochun Cai, Lin Wang. Complex dynamics in a discrete-time size-structured chemostat model with inhibitory kinetics. Discrete & Continuous Dynamical Systems - B, 2019, 24 (7) : 3439-3451. doi: 10.3934/dcdsb.2018327
J. F. Andrews, A mathematical model for the continuous culture of microorganisms utilizing inhibitory substrates, Biotech. Bioeng., 10 (1968), 707-723.
J. Arino, J.-L. Gouze and A. Sciandra, A discrete, size-structured model of phytoplankton growth in the chemostat, J. Math. Biol., 45 (2002), 313-336. doi: 10.1007/s002850200160.
R. A. Armstrong and R. McGehee, Competitive exclusion, Am. Nat., 115 (1980), 151-170. doi: 10.1086/283553.
L. Becks, F. M. Hilker, H. Malchow, K. Jürgens and H. Arndt, Experimental demonstration of chaos in a microbial food web, Nature, 435 (2005), 1226-1229.
B. Boon and H. Laudeuout, Kinetics of nitrite oxidation by Nitrobacter winogradskyi, Biochem. J., 85 (1962), 440-447.
A. W. Bush and A. E. Cook, The effect of time delay and growth rate inhibition in the bacterial treatment of wastewater, J. Theor. Biol., 63 (1976), 385-395.
G. J. Butler and G. S. K. Wolkowicz, A mathematical model of the chemostat with a general class of functions describing nutrient uptake, SIAM J. Appl. Math., 45 (1985), 138-151. doi: 10.1137/0145006.
E. P. Cohen and H. Eagle, A simplified chemostat for the growth of mammalian cells: characteristics of cell growth in continuous culture, J. Exp. Med., 113 (1961), 467-474.
J. M. Cushing, A competition model for size-structured species, SIAM J. Appl. Math., 49 (1989), 838-858. doi: 10.1137/0149049.
J. M. Cushing, An Introduction to Structured Population Dynamics, Regional Conference Series in Applied Mathematics 71, SIAM, Philadelphia, PA, 1998. doi: 10.1137/1.9781611970005.
D. E. Dykhuizen and A. M. Dean, Evolution of specialists in an experimental microcosm, Genetics, 167 (2005), 2015-2026.
T. B. K. Gage, F. M. Williams and J. B. Horton, Division synchrony and the dynamics of microbial populations: A size-specific model, Theor. Pop. Bio., 26 (1984), 296-314. doi: 10.1016/0040-5809(84)90035-2.
M. Golubitsky, E. B. Keeler and M. Rothschild, Convergence of the age-structure: Applications of the projective metric, Theor. Pop. Bio., 7 (1975), 84-93. doi: 10.1016/0040-5809(75)90007-6.
S. R. Hansen and S. P. Hubbell, Single nutrient microbial competition: Agreement between experimental and theoretical forecast outcomes, Science, 207 (1980), 1491-1493.
S. B. Hsu, S. P. Hubbell and P. Waltman, A mathematical theory for single nutrient competition in continuous cultures of micro-organisms, SIAM J. Appl. Math., 32 (1977), 366-383. doi: 10.1137/0132030.
L. Jones and S. P. Ellner, Effects of rapid prey evolution on predator-prey cycles, J. Math. Biol., 55 (2007), 541-573. doi: 10.1007/s00285-007-0094-6.
J. L. Jost, J. F. Drake, A. G. Fredrickson and H. M. Tsuchiya, Interactions of Tetrahymena pyriformis, Escherichia coli, Azotobacter vinelandii and glucose in a minimal medium, J. Bacteriol., 113 (1973), 834-841.
J. A. J. Metz and O. Diekmann, The Dynamics of Physiologically Structured Populations, Lecture Notes in Biomath. 68, Springer-Verlag, New York, 1986. doi: 10.1007/978-3-662-13159-6.
S. Pavlou and I. G. Kevrekidis, Microbial predation in a periodically operated chemostat: A global study of the interaction between natural and externally imposed frequencies, Math. Biosci., 108 (1992), 1-55. doi: 10.1016/0025-5564(92)90002-E.
E. Senior, A. T. Bull and J. H. Slater, Enzyme evolution in a microbial community growing on the herbicide Dalapon, Nature, 263 (1976), 476-479.
H. L. Smith, A discrete, size-structured model of microbial growth and competition in the chemostat, J. Math. Biol., 34 (1996), 734-754. doi: 10.1007/BF00161517.
H. L. Smith and X.-Q. Zhao, Competitive exclusion in a discrete-time, size-structured chemostat model, Discrete Contin. Dyn. Syst. Ser. B, 1 (2001), 183-191. doi: 10.3934/dcdsb.2001.1.183.
L. Wang and G. S. K. Wolkowicz, A delayed chemostat model with general delayed response functions and differential removal rates, J. Math. Anal. Appl., 321 (2006), 452-468. doi: 10.1016/j.jmaa.2005.08.014.
H. A. Wichman, J. Millstein and J. J. Bull, Adaptive molecular evolution for 13,000 phage generations: A possible arms race, Genetics, 170 (2005), 19-31.
L. M. Wick, H. Weilenmann and T. Egli, The apparent clock-like evolution of Escherichia coli in glucose-limited chemostats is reproducible at large but not at small population sizes and can be explained with Monod kinetics, Microbiology (Reading, Engl.), 148 (2002), 2889-2902.
G. S. K. Wolkowicz and Z. Lu, Global dynamics of a mathematical model of competition in the chemostat: general response functions and differential death rates, SIAM J. Appl. Math., 52 (1992), 222-233. doi: 10.1137/0152012.
J. Wu, H. Nie and G. S. K. Wolkowicz, The effect of inhibitor on the plasmid-bearing and plasmid-free model in the unstirred chemostat, SIAM J. Math. Anal., 38 (2007), 1860-1885. doi: 10.1137/050627514.
H. Xia, G. S. K. Wolkowicz and L. Wang, Transient oscillations induced by delayed growth response in the chemostat, J. Math. Biol., 50 (2005), 489-530. doi: 10.1007/s00285-004-0311-5.
Figure 1. Bifurcation diagram of the limiting system (7). Here $ f(s) = \frac{as}{1+0.1s+0.01s^2} $, $ E = 0.1 $, $ S^{0} = 100 $, $ U_{0} = 70 $, with $ a\in[0.049, 0.059] $; thus $ aS^{0}>1 $ and $ S^{0}> \mu $. A cascade of period-doublings to chaos occurs as $ a $ increases
Figure 2. Left: a numerical solution of the limiting system (7) with $ U_0 = 60 $; Right: a numerical solution of system (11) with $ (U_0, S_0) = (60,6) $. Here $ f(s) = \frac{0.05s}{1+0.1s+0.01s^2} $, $ E = 0.1 $, $ S^{0} = 100 $
Figure 3. Numerical solutions of system (11) with $ f(s) = \frac{0.05s}{1+0.1s+0.01s^2} $, $ E = 0.1 $, $ S^{0} = 100 $. Left: $ (U_t,S_t)\to E_0 = (0, S^0) $ as $ t\to\infty $, initial condition $ (U_0, S_0) = (10,6) $ was used; Right: $ (U_t, S_t) \to E_1 = (S^0-\lambda, \lambda) $ as $ t\to\infty $, initial condition was $ (U_0, S_0) = (80,6) $
Figure 4. Numerical solutions of system (11) with $ f(s) = \frac{0.054s}{1+0.1s+0.01s^2} $, $ E = 0.1 $, $ S^{0} = 100 $. Left: $ (U_t,S_t)\to (0, S^0) $ as $ t\to\infty $, initial condition $ (U_0, S_0) = (10,6) $ was used; Right: $ (U_t, S_t) $ approaches a stable $ 2-cycle $, initial condition was $ (U_0, S_0) = (80,6) $
Figure 5. Numerical solutions of system (11) with $ f(s) = \frac{0.059s}{1+0.1s+0.01s^2} $, $ E = 0.1 $, $ S^{0} = 100 $. Left: $ (U_t,S_t)\to (0, S^0) $ as $ t\to\infty $, initial condition $ (U_0, S_0) = (10,6) $ was used; Right: $ (U_t, S_t) $ approaches a chaotic attractor, initial condition was $ (U_0, S_0) = (80,6) $
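A bifurcation diagram like Figure 1 is built by a standard recipe: choose a grid of $a$ values, iterate the map at each one, discard a transient, and plot the surviving iterates. The sketch below is an addition, not from the article, and it does not reproduce the paper's size-structured recursion (7)/(11), which is not restated here; the iterated map step() is only a stand-in (the logistic map, a classic example that also period-doubles to chaos), while f is the Andrews-type inhibitory uptake quoted in the figure captions. The helper names and the parameter range of the stand-in map are illustrative choices; replacing step() with the paper's limiting system (7) would be needed to reproduce Figure 1.

# Illustrative sketch (added): the generic recipe behind a bifurcation diagram.
import numpy as np
import matplotlib.pyplot as plt

def f(s, a, b=0.1, c=0.01):
    """Andrews/Haldane-type inhibitory uptake a*s/(1 + b*s + c*s^2), as in the
    captions above.  With b = 0.1, c = 0.01 it is non-monotone and peaks at s = 10."""
    return a * s / (1.0 + b * s + c * s * s)

print(np.round(f(np.linspace(0, 60, 7), a=0.05), 3))   # rises then falls: inhibitory kinetics

def step(x, a):
    """Stand-in one-dimensional map (logistic); NOT the paper's recursion."""
    return a * x * (1.0 - x)

def bifurcation_diagram(step, a_values, x0=0.5, n_transient=500, n_keep=200):
    """For each parameter value, iterate the map, discard a transient,
    and record the remaining iterates (an approximation of the attractor)."""
    pts_a, pts_x = [], []
    for a in a_values:
        x = x0
        for _ in range(n_transient):
            x = step(x, a)
        for _ in range(n_keep):
            x = step(x, a)
            pts_a.append(a)
            pts_x.append(x)
    return np.array(pts_a), np.array(pts_x)

a_values = np.linspace(2.5, 4.0, 600)
A, X = bifurcation_diagram(step, a_values)
plt.plot(A, X, ",k")
plt.xlabel("a")
plt.ylabel("attractor of the stand-in map")
plt.show()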
Phase Portrait Calculator
A phase portrait is a geometric representation of the trajectories of a dynamical system in the phase plane: each set of initial conditions corresponds to a curve or a point, and the set of all trajectories taken together is the phase portrait. It is a graphical tool for visualizing how the solutions of a given system of differential equations behave in the long run, and it is often used, for example, to plot particle position against momentum in a certain direction (a phase-space distribution).

For a homogeneous linear autonomous system, the type of phase portrait depends on the matrix coefficients through the eigenvalues, or equivalently through the trace and determinant. If both eigenvalues are real and positive, the origin of the linearization is a source; if both are real and negative, every solution decays to the origin (for example, dx/dt = [[-2, 0], [0, -2]] x has general solution x(t) = e^{-2t}(C_1, C_2), so all solutions decay at the same exponential rate). A fixed point whose eigenvalues have positive real part is unstable. An eigenvector for a real eigenvalue corresponds to a solution that stays on the ray from the origin in the direction of that eigenvector. When the eigenvalues are purely imaginary, as for the system x' = -y, y' = x with characteristic equation λ^2 + 1 = 0 and roots s_{1,2} = ± i, the trajectories are closed orbits around the origin.

For a nonlinear system x' = f(x), first find the fixed points by solving f(x) = 0, then classify them and determine their stability, when possible, by linearizing. For instance, for a system with Jacobian matrix J = [[3, 2y], [1, cos(y)]], the Jacobian at (0, 0) is [[3, 0], [1, 1]], whose eigenvalues are λ_1 = 1 and λ_2 = 3; both are positive, so the origin of the linearization is a source. For the system dy/dt = y, dx/dt = -sin(x) - y, the critical points are (nπ, 0) with n an integer. For Hamiltonian systems, all orbits in the phase plane are simply level curves of the Hamiltonian. The same ideas apply to a single autonomous first-order equation such as dy/dx = y^2 (y + 4)(y - 2), whose critical points and phase line give a one-dimensional phase portrait, and to systems written in polar form, such as r' = -r, θ' = 1 / ln r.

Damping can also be read directly from a phase portrait. When the amplitude of oscillation is small (β_max < 1), π(β_max^2 - 1) < 0 corresponds to negative damping and the trajectories spiral outward; when the amplitude is large (β_max > 1), the damping is positive and the trajectories spiral inward.

For a much more sophisticated phase plane plotter, see the MATLAB plotter written by John C. Polking of Rice.
• When the amplitude of oscillations is small (βmax < 1), we have π(β2 max − 1) < 0 ∞ negative damping Thus trajectories spiral outward: θ θ • But when the amplitude of oscillations is large (βmax > 1), π(β2 max − 1) > 0 ∞ positive damping The trajectories spiral inward: θ θ 30. Pour les articles homonymes, voir Portrait (homonymie). Complex power (S) calculation from voltage (V) and current (I) AC power calculator. Plot your graphs in an easy way. I can't seem to find any tutorials online. (c) Show that (15) has no equilibrium points for k > 0. What is a throw. 7 Nonlinear Systems 360 The advantages of using technology such as graphics calculators and computer algebra systems in. A phase portrait is a plot of multiple phase curves corresponding to different initial conditions in the same phase plane (Tabor 1989, p. An online exposure calculator. For the purposes of this calculator for the benefit of simplicity, any amount of ETH can be used in the calculation. " The Moon Phases Calendar and Calculator can tell you approximately what time of day * and where you can see your favorite moon phases! For example, to see the full moon as it sets on the. Enabling on-line discussions. Proceeding as the above subsection, we plot in Figure 11 different phase portraits: Figure 11(a), Figure 11(c) and Figure 11(e) represent our experimental results drawn in the planes (, ), (, ) and (, ) respectively, while Figure 11 (b), Figure 11 (d) and Figure 11 (f) are their numerical corresponding. We first consider the oceans as a single reservoir of a mass m of carbon, with an incoming flux j i and an outgoing (burial) flux j out. Classify the xed points and determine their stability, when possible. Graph functions, plot points, visualize algebraic equations, add sliders, animate graphs, and more. Use of eigenvalues and eigenvectors. Phase portraits are, for example, often used to plot particle position versus momentum in a certain direction, a phase space distribution. Show Instructions In general, you can skip the multiplication sign, so `5x` is equivalent to `5*x`. Calculate the Axial forces for Truss, Roof and Rafters. Fortunately today's modern scientific calculators have built in mathematical functions (check your book) that allows for the easy conversion of rectangular to polar form, ( R → P ) and back from polar. Main text Recent technological innovations that allow for assaying multiple modes of cell states at single-cell resolution are creating opportunities for more detailed biophysical modeling of the molecular biology of cells. Andronov in. China's trade surplus with the United States stood at $30. The phase portrait of any quadratic system having. 3 Earthquake Induced Vibrations of Multistory Buildings: 9. Phase Plane Analysis. 25 \quad\mbox{and}\quad \omega^2 =1 , \) we ask Mathematica to provide a phase portrait for the pendulum equation with resistance: 2. Steve Brunton 6,237 views. mohr circle calculation for a three dimensional state of stress, mohr 3D - Granit Engineering. An easy to use Aspect Ratio Calculator. Review of Newtonian, Lagrangian, and Hamiltonian dynamics. A "picture" of "all" the phase trajectories in phase space is often referred to as a phase portrait of the dynamical system. The calculator will generate a step by step explanation for. Newegg's Power Supply Calculator (or PSU Calculator) helps you quickly find all the compatible power supplies for your current or future PC build. 
Learn the common processes in Chemical Engineering industry and process variables and how to calculate them. View the Epson Projection Distance Calculators and Display Size Calculator for optimal Projection Distance Calculators. In this section we will solve systems of two linear differential equations in which the eigenvalues are real repeated (double in this case) numbers. The Mathematica package MasterTwoallows the automated calculation of all one- and two-loop Feynman integrals reducable to scalar integrals independent of external momenta and depending on up. co/L4F8Ei6o4O. Both basic theory and applications are taught. dx yxy dt dy xxy dt =− + − =−+ (1). Portrait Professional Studio 10 Crack is a painting program which will not require any artistic skill. Figure 1 below indicates how your phase portrait will look in Word. It is possible to choose between algebraic and polar form of complex numbers. Classify equilibrium points. | CommonCrawl |
The ZX calculus is a language for surface code lattice surgery
Niel de Beaudrap1 and Dominic Horsman2
1Department of Computer Science, University of Oxford, Parks Road, Oxford, OX1 3QD
2Department of Physics, Durham University, South Road, Durham, DH1 1LE
A leading choice of error correction for scalable quantum computing is the surface code with lattice surgery. The basic lattice surgery operations, the merging and splitting of logical qubits, act non-unitarily on the logical states and are not easily captured by standard circuit notation. This raises the question of how best to design, verify, and optimise protocols that use lattice surgery, in particular in architectures with complex resource management issues. In this paper we demonstrate that the operations of the ZX calculus --- a form of quantum diagrammatic reasoning based on bialgebras --- match exactly the operations of lattice surgery. Red and green ``spider'' nodes match rough and smooth merges and splits, and follow the axioms of a dagger special associative Frobenius algebra. Some lattice surgery operations require non-trivial correction operations, which are captured natively in the use of the ZX calculus in the form of ensembles of diagrams. We give a first taste of the power of the calculus as a language for lattice surgery by considering two operations (T gates and producing a CNOT) and show how ZX diagram re-write rules give lattice surgery procedures for these operations that are novel, efficient, and highly configurable.
@article{deBeaudrap2020zxcalculusis, doi = {10.22331/q-2020-01-09-218}, url = {https://doi.org/10.22331/q-2020-01-09-218}, title = {The {ZX} calculus is a language for surface code lattice surgery}, author = {de Beaudrap, Niel and Horsman, Dominic}, journal = {{Quantum}}, issn = {2521-327X}, publisher = {{Verein zur F{\"{o}}rderung des Open Access Publizierens in den Quantenwissenschaften}}, volume = {4}, pages = {218}, month = jan, year = {2020} }
Efficient distributed discovery of bidirectional order dependencies
Regular Paper
Sebastian Schmidl (ORCID: orcid.org/0000-0002-6597-9089) and
Thorsten Papenbrock (ORCID: orcid.org/0000-0002-4019-8221)
The VLDB Journal volume 31, pages 49–74 (2022)
Bidirectional order dependencies (bODs) capture order relationships between lists of attributes in a relational table. They can express that, for example, sorting books by publication date in ascending order also sorts them by age in descending order. The knowledge about order relationships is useful for many data management tasks, such as query optimization, data cleaning, or consistency checking. Because the bODs of a specific dataset are usually not explicitly given, they need to be discovered. The discovery of all minimal bODs (in set-based canonical form) is a task with exponential complexity in the number of attributes, though, which is why existing bOD discovery algorithms cannot process datasets of practically relevant size in a reasonable time. In this paper, we propose the distributed bOD discovery algorithm DISTOD, whose execution time scales with the available hardware. DISTOD is a scalable, robust, and elastic bOD discovery approach that combines efficient pruning techniques for bOD candidates in set-based canonical form with a novel, reactive, and distributed search strategy. Our evaluation on various datasets shows that DISTOD outperforms both single-threaded and distributed state-of-the-art bOD discovery algorithms by up to orders of magnitude; it can, in particular, process much larger datasets.
Distributed discovery of order dependencies
Order is a fundamental concept in relational data because every attribute can be used to sort the records of a relation. Some sortings represent the natural ordering of attribute values by their domain (e. g., timestamps, salaries, or heights) and, hence, express meaningful statistical metadata; other sortings serve technical purposes, such as data compression (e. g., via run-length encoding [1]), index optimization (e. g., for sorted indexes [3]), or query optimization (e. g., when picking join strategies [23]).
Because a relational instance can follow only one sorting at a time, dependencies between different orders help to find optimal sortings; they also reveal meaningful correlations between attribute domains.
Table 1 flight dataset excerpt. \(\bot \) denotes null values
An order dependency (OD) expresses an order relationship between lists of attributes in a relational table. More specifically, an OD \(\varvec{\mathsf {X}} \mapsto \varvec{\mathsf {Y}}\) specifies that when we order the tuples of a relational table based on the left-hand side attribute list \(\varvec{\mathsf {X}}\), then the tuples are also ordered by the right-hand side attribute list \(\varvec{\mathsf {Y}}\). The tuple order is lexicographical w. r. t. the attribute values selected by \(\varvec{\mathsf {X}}\) and \(\varvec{\mathsf {Y}}\), respectively. This means that ties in the order implied by the first attribute in the list are resolved by the next attribute in the list (and so forth). This resembles the ordering produced by the ORDER BY-clause in SQL. A bidirectional order dependency (bOD), such as \([\mathsf {A}{{\,\mathrm{\uparrow }\,}}\mathsf {B}{{\,\mathrm{\downarrow }\,}}] \mapsto [\mathsf {C}{{\,\mathrm{\uparrow }\,}}]\), lets us define the order direction of the individual attributes involved in the bOD; in this example: \(\mathsf {A}\) in ascending order with ties resolved by \(\mathsf {B}\) in descending order sorts \(\mathsf {C}\) in ascending order. ODs are closely related to functional dependencies (FDs), which have been extensively studied in research [16], but due to their consideration of order, ODs subsume FDs [28].
The dataset shown in Table 1, for example, fulfills the bOD \([\texttt {ADelay}] \mapsto [\texttt {ADGrp}]\). In other words, when we sort the tuples by the ADelay attribute, then they are also ordered by the ADGrp attribute. Note that the inverse bOD \([\texttt {ADGrp}] \mapsto [\texttt {ADelay}]\) does not hold, because the value in attribute ADGrp of the tuple \(t_5\) is greater than or equal to the value in ADGrp of tuple \(t_9\), but \(t_5\)'s value in ADelay is smaller than \(t_9\)'s value in ADelay.
With order dependencies, we know how the ordering of tuples based on certain attributes translates to the ordering based on other attributes. This knowledge can be used in various situations: During query planning, for example, ODs help to derive additional orders that enable further optimizations, such as eliminating costly sort operations or selecting better join strategies [28]. In database design, ODs can be used to, for example, replace dense implementations of secondary indexes with sparse implementations if we know that the tuple ordering by the secondary index's attributes is determined by the ordering of the primary key attributes [6]. For consistency maintenance and data cleaning, ODs can be considered as integrity constraints (ICs). Like all other ICs, semantically meaningful ODs can describe business rules so that any violation of an OD indicates an error in the dataset. In this way, ODs can guide automatic data cleaning [11].
Although certain ODs can be obtained manually, this process is very time-consuming and difficult. ODs are naturally expressed using lists of attributes, which leads to a search space that grows factorially with the number of attributes in a dataset. Fortunately, with the polynomial mapping of ODs to a set-based canonical form, which was presented by Szlichta et al. [25], we can construct a search space for ODs that grows only exponentially with the number of attributes. The search space is still too large for manual exploration, but it is small enough for automatic OD discovery algorithms. An example of a set-based OD is , which is valid in the flight dataset (see Sect. 9). The OD specifies that for flights flown by the same aircraft on the same route and day for the same flight time, the delay at the destination airport monotonically increases over the day. We define set-based ODs in more detail in Sect. 3.2.
To automate the discovery of ODs, researchers have proposed different order dependency [5, 15, 26] and bidirectional order dependency [25] discovery algorithms. Depending on the OD representation, these approaches have a factorial [5, 15] or exponential [25, 26] worst-case complexity in the number of attributes. Despite various clever pruning strategies, none of the existing OD discovery algorithms can process datasets of practically relevant size in a feasible time. The FASTOD-BID algorithm, for example, takes almost 5 h on the 700 KiB letter dataset with 17 attributes and 20K records, and it exceeds 58 GB of memory on the 70 MiB flight dataset with 21 attributes and 500K records (see Table 2 in Sect. 9).
To overcome existing algorithmic limitations, we propose DISTOD, a scalable (in the number of cores and computers), robust (against limited memory), elastic (computers can be added and removed at runtime), and applicable (i. e., equipped with semantic pruning strategies) bOD discovery algorithm.
DISTOD pursues a novel, reactive bOD search strategy that allows it to distribute both the discovery process and the validation of bODs on multiple machines in a compute cluster without a need for global parallelization barriers. The algorithm discovers all minimal bODs by deliberately centralizing the candidate generation and pruning; to maximize the efficiency and scalability of the discovery process, it dynamically parallelizes and distributes all other parts of the discovery via reactive programming strategies. DISTOD is based on the canonical representation of set-based bODs from Szlichta et al. [25], which allows it to traverse a relatively small set-containment lattice and to benefit from the known pruning rules.
The motivation for this research project is the observation that most distributed data profiling algorithms, including [14, 21, 22, 33], are built on top of dataflow-based distributed computing frameworks, such as Apache Spark [31] or Apache Flink [30]. These frameworks force the discovery algorithms into batch processing, which is an unsuitable paradigm for all known dependency discovery approaches, because they rely on dynamic pruning and dynamic candidate generation techniques. To implement the dynamic parts of the discovery, the algorithms split the search into multiple runs of batch processes and utilize the synchronization barriers in between the distributed runs for pruning and dynamic search decisions. For this reason, their performance implicitly suffers from idle times due to the synchronization barriers, unnecessary re-partitioning of data, and the inability to make dynamic search decisions within batch runs. We, therefore, advocate the use of a reactive computing paradigm, i. e., actor programming [8], for the implementation of distributed data profiling algorithms. At the cost of harder programming, we can thereby find superior search strategies, minimize idle times, avoid certain redundant work, optimize resource utilization, and support elasticity.
In this paper, we first introduce related work about the discovery of ODs (Sect. 2) and the formal foundations on the set-based canonical form for bODs (Sect. 3). We then make the following contributions:
Distributed, reactive search strategy We introduce a distributed, reactive bOD search strategy that breaks the strictly level-wise search approach of FASTOD-BID [25] up into fine-grained tasks that represent constant and order compatible bOD candidates separately. A reactive resolution strategy for intra-task dependencies allows a synchronization barrier-free work distribution for the candidate validation tasks (Sects. 4 to 6).
Parallel candidate generation We present a centralized, but highly parallel candidate generation algorithm. The algorithm guarantees that all minimal bODs are generated while traversing the candidate lattice (Sect. 5).
Revised validation algorithm We use a new index data structure, which we call inverted sorted partition, to improve the efficiency of the order compatible bOD validation algorithm from [25] (Sect. 6).
Hybrid, dynamic partition generation We contribute a hybrid and dynamic generation algorithm for stripped partitions that either uses a recursive partition generation scheme or a direct partition product to generate a stripped partition on-demand (Sect. 7).
Effective memory management We present a dynamic memory management strategy that caches intermediate and temporary results for as long as possible; freeing them as soon as memory runs short (Sect. 7).
Elasticity and semantic pruning We equip DISTOD with elasticity properties (Sect. 8.1) and semantic pruning strategies (Sect. 8.2) to enable the discovery of bODs in datasets of practically relevant size.
Evaluation We evaluate the runtime, memory usage, and scalability of DISTOD on various datasets and show that it outperforms both the single-threaded algorithm FASTOD-BID and the distributed algorithm DIST-FASTOD-BID by up to orders of magnitude (Sect. 9).
In 1982, Ginsburg and Hull were the first to consider the ordering of records w.r.t. different lists of attributes in a relation as a kind of dependency [7]. Their work introduced point-wise orders with a complete set of inference rules and shows that the inference-problem in this formalism is co-NP-complete. In 2012, Szlichta et al. formally defined order dependencies as a dependency between lists of attributes such that if the relation is ordered by the values of the first attribute list, it is also ordered by the values of the second attribute list [29]. Like SQL ORDER BY operators, the formalism uses a lexicographical ordering of tuples by the attribute lists. Szlichta et al. also introduced a set of axioms and the proof that ODs properly subsume FDs. The list-based formalization was adopted by many following works on OD profiling [5, 15, 26]. Later, bidirectional order dependencies (bODs)—a combination of ascending and descending orders of attributes—have been introduced [28]. The authors of [29] and [28] show that the inference problem for both ODs and bODs is co-NP-complete. In this work, we discover bODs using the set-based formalism as defined in [28]. We now discuss existing OD and bOD discovery algorithms.
Order dependency discovery The first automatic OD discovery algorithm, called ORDER, was proposed by Langer and Naumann [15]. It traverses a list-containment lattice of OD candidates to find (all) valid, minimal dependencies in a given dataset. The algorithm has a factorial worst-case complexity in the number of attributes and is sound, but incomplete, as confirmed in [25, 26].
Inspired by [4] and [18], Jin et al. proposed a hybrid OD discovery approach that discovers ODs by alternately comparing records on a sample of the dataset and validating candidates on the entire dataset [12]. Their approach can discover ODs as well as bODs. The authors show that their algorithm discovers the same set of ODs as ORDER; hence, the algorithm also produces incomplete results.
FASTOD proposed by Szlichta et al. is the first algorithm that discovers complete sets of minimal ODs [26]. By mapping ODs to a new set-based canonical representation, the algorithm has only exponential worst-case complexity in the number of attributes and linear complexity in the number of tuples. The authors also provide effective inference rules for the new OD representation. With the algorithm FASTOD-BID, the same authors later expanded their discovery approach to bODs [25]. They show that discovering bidirectional ODs does not take significantly longer than discovering unidirectional ODs. We base our algorithm on the same set-based canonical representation of bODs and the corresponding definition of minimality to also benefit from the reduced search space size and efficient pruning rules.
With OCDDISCOVER, Consonni et al. took another approach to the OD discovery task by exploiting order compatibility dependencies (OCDs) [5]. An OCD is a special form of OD in which two lists of attributes order one another if they are concatenated [29]. Unfortunately, OCDDISCOVER uses an incorrect definition of minimality and, therefore, prunes the search space too aggressively; consequently, the results are incomplete [27].
Distributed order dependency discovery Because FASTOD-BID is the only complete and correct OD algorithm, not much research exists on distributed OD discovery. In [22], Saxena et al. proposed common map-reduce style primitives (based on Apache Spark) into which they could break down any existing data profiling algorithm. In this way, they presented distributed versions of different dependency discovery algorithms including FASTOD-BID—we call this implementation DIST-FASTOD-BID. Performance-wise, all these algorithms suffer from non-optimal resource utilization because batch-oriented algorithms frequently re-partition the data and contain (many) hard synchronization barriers when used for dynamic discovery algorithms. They also do not support elasticity, i. e., they struggle with flexible cluster sizes, where nodes enter and leave at runtime. For these reasons, we use the reactive actor-programming model for distribution and parallelization, which leads to a fundamentally different algorithm design. Our approach waives hard synchronization barriers, reactively optimizes the load balancing, and reduces data communication costs.
In this paper, we use the following notational conventions:
\(\mathrm {R}\) denotes a relation and \(\mathrm {r}\) a specific instance of \(\mathrm {R}\).
\(\mathsf {A}\) and \(\mathsf {B}\), \(\mathsf {C}\), etc., denote single attributes from \(\mathrm {R}\).
t and s denote tuples of a relational instance \(\mathrm {r}\).
\(t_{\mathsf {A}}\) denotes the value of an attribute \(\mathsf {A}\) in a tuple t.
\(X\) and \(Y\), etc., are sets of attributes and \(\mathsf {X}_i\) the ith element of \(X\) with \(0 \le i < \vert X \vert \). We use \(W_i\) to indicate subsets with \(\vert W_i \vert = \vert X \vert - 1\) and \(Z_i\) to indicate supersets with \(\vert Z_i \vert = \vert X \vert + 1\) for a given attribute set \(X\).
\(\varvec{\mathsf {X}}\) and \(\varvec{\mathsf {Y}}\), \(\varvec{\mathsf {Z}}\), etc., are lists of attributes and \(\varvec{\mathsf {X}}_i\) the \(i^\text {th}\) element of \(\varvec{\mathsf {X}}\) with \(0 \le i < \vert \varvec{\mathsf {X}} \vert \). \([\ ]\) is the empty list and \([\mathsf {A}\mid \varvec{\mathsf {X}}]\) denotes a list with head \(\mathsf {A}\) and tail \(\varvec{\mathsf {X}}\). Lists and sets with the same name reference the same attributes, i. e., set \(X\), contains all distinct elements from list \(\varvec{\mathsf {X}}\).
In this section, we first formally define bODs, then we recap the set-based canonical form for bODs [25], and finally, we describe the core concepts of actor programming, which we use to dynamically distribute the discovery process in a cluster.
Order dependencies
Following the definitions for bidirectional order dependencies given in [28], we first define order specifications, which specify how to sort the tuples of a dataset based on multiple attributes w. r. t. different order directions (ascending or descending). It corresponds to the ORDER BY-clause in SQL and produces a lexicographical ordering.
An ordering based on an attribute \(\mathsf {A} \in \mathrm {R}\) can be either ascending or descending. To indicate the order direction of a marked attribute \(\overline{\mathsf {A}}\), we use \(\mathsf {A}{{\,\mathrm{\uparrow }\,}}\) for ascending and \(\mathsf {A}{{\,\mathrm{\downarrow }\,}}\) for descending. An order specification is a list of marked attributes denoted as \(\overline{\varvec{\mathsf {X}}}\) with \(\varvec{\mathsf {X}}_i \in \mathrm {R}\). For attributes without an explicit order direction, we implicitly assume an ascending (\({{\,\mathrm{\uparrow }\,}}\)) order.
Using order specifications, we can now introduce bidirectional order dependencies [28].
A bidirectional order dependency (bOD) is a statement of the form \(\overline{\varvec{\mathsf {X}}} \mapsto \overline{\varvec{\mathsf {Y}}}\) (read: \(\overline{\varvec{\mathsf {X}}}\) orders \(\overline{\varvec{\mathsf {Y}}}\)) specifying that ordering a relation \(\mathrm {r}\) by order specification \(\overline{\varvec{\mathsf {X}}}\) also orders \(\mathrm {r}\) by order specification \(\overline{\varvec{\mathsf {Y}}}\), where \(X \subset \mathrm {R}\) and \(Y \subset \mathrm {R}\). We use the notation \(\overline{\varvec{\mathsf {X}}} \leftrightarrow \overline{\varvec{\mathsf {Y}}}\) (read: \(\overline{\varvec{\mathsf {X}}}\) and \(\overline{\varvec{\mathsf {Y}}}\) are order equivalent) if \(\overline{\varvec{\mathsf {X}}} \mapsto \overline{\varvec{\mathsf {Y}}}\) and \(\overline{\varvec{\mathsf {Y}}} \mapsto \overline{\varvec{\mathsf {X}}}\). For \(\overline{\varvec{\mathsf {XY}}} \leftrightarrow \overline{\varvec{\mathsf {YX}}}\), the two order specifications \(\overline{\varvec{\mathsf {X}}}\) and \(\overline{\varvec{\mathsf {Y}}}\) are order compatible and we write \(\overline{\varvec{\mathsf {X}}} \sim \overline{\varvec{\mathsf {Y}}}\). Table \(\mathrm {r}\) over \(\mathrm {R}\) satisfies a bOD \(\overline{\varvec{\mathsf {X}}} \mapsto \overline{\varvec{\mathsf {Y}}}\) if \(\forall s, t \in \mathrm {r}: s \preceq _{\overline{\varvec{\mathsf {X}}}} t \Rightarrow s \preceq _{\overline{\varvec{\mathsf {Y}}}} t\). The lexicographical order operator \(\preceq _{\overline{\varvec{\mathsf {Z}}}}\) for an order specification \(\overline{\varvec{\mathsf {Z}}}\) and the tuples \(s, t \in \mathrm {r}\) is defined as:
$$\begin{aligned} s \preceq _{\overline{\varvec{\mathsf {Z}}}} t = {\left\{ \begin{array}{ll} \overline{\varvec{\mathsf {Z}}} = [\ ] \\ \overline{\varvec{\mathsf {Z}}} = [\mathsf {A}{{\,\mathrm{\uparrow }\,}}\mid \overline{\varvec{\mathsf {T}}}] \wedge s_\mathsf {A} < t_\mathsf {A} \\ \overline{\varvec{\mathsf {Z}}} = [\mathsf {A}{{\,\mathrm{\downarrow }\,}}\mid \overline{\varvec{\mathsf {T}}}] \wedge s_\mathsf {A} > t_\mathsf {A} \\ \overline{\varvec{\mathsf {Z}}} = \left( [\mathsf {A}{{\,\mathrm{\uparrow }\,}}\mid \overline{\varvec{\mathsf {T}}}] \vee [\mathsf {A}{{\,\mathrm{\downarrow }\,}}\mid \overline{\varvec{\mathsf {T}}}] \right) \wedge s_\mathsf {A} = t_\mathsf {A} \wedge s \preceq _{\overline{\varvec{\mathsf {T}}}} t \\ \end{array}\right. } \end{aligned}$$
It is \(s \prec _{\overline{\varvec{\mathsf {Z}}}} t\) if \(s \preceq _{\overline{\varvec{\mathsf {Z}}}} t\) but \(t \npreceq _{\overline{\varvec{\mathsf {Z}}}} s\).
The lexicographical order operator \(\preceq _{\overline{\varvec{\mathsf {Z}}}}\) defines a weak total order over a set of tuples. We assume that numbers are ordered numerically, strings are ordered lexicographically, and dates are ordered chronologically.
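To make Definition 2 concrete, the following Scala sketch (our own illustration, not code from the paper; the names LexOrderSketch and lexCompare as well as the integer-valued attributes are assumptions) implements the lexicographical comparison for an order specification.

object LexOrderSketch {
  sealed trait Direction
  case object Asc extends Direction
  case object Desc extends Direction

  // An order specification: a list of attributes, each marked with a direction.
  type OrderSpec = List[(String, Direction)]
  // A tuple, modeled as a map from attribute name to an integer value.
  type Record = Map[String, Int]

  // Returns a value < 0 if s strictly precedes t, 0 if both tuples are tied on
  // all marked attributes, and a value > 0 if t strictly precedes s.
  def lexCompare(spec: OrderSpec, s: Record, t: Record): Int = spec match {
    case Nil => 0
    case (attr, dir) :: tail =>
      val cmp = dir match {
        case Asc  => s(attr).compareTo(t(attr))
        case Desc => t(attr).compareTo(s(attr))
      }
      if (cmp != 0) cmp else lexCompare(tail, s, t) // resolve ties with the tail
  }

  def main(args: Array[String]): Unit = {
    val spec: OrderSpec = List(("A", Asc), ("B", Desc))
    val s = Map("A" -> 1, "B" -> 9)
    val t = Map("A" -> 1, "B" -> 3)
    println(lexCompare(spec, s, t)) // < 0: s precedes t because B is descending
  }
}

Checking a bOD then amounts to verifying, for all tuple pairs, that precedence under the left-hand side order specification implies precedence under the right-hand side one.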
If \(\overline{\varvec{\mathsf {X}}} \mapsto \overline{\varvec{\mathsf {Y}}}\), then any ordering of tuples for any table \(\mathrm {r}\) that satisfies \(\overline{\varvec{\mathsf {X}}}\) also satisfies \(\overline{\varvec{\mathsf {Y}}}\). Considering our example in Table 1, the following bODs hold, among others: \([\texttt {ADelay}] \mapsto [\texttt {ADGrp}]\) and \([\texttt {Code}{{\,\mathrm{\uparrow }\,}}] \mapsto [\texttt {Month}{{\,\mathrm{\downarrow }\,}}]\). Note that these bODs do hold in our example table and not necessarily in general.
Set-based canonical bODs
The search space of bODs in list-based form \(\overline{\varvec{\mathsf {X}}} \mapsto \overline{\varvec{\mathsf {Y}}}\) (see Definition 2) grows factorially with the number of attributes [15]. Despite clever candidate pruning rules, this growth defines the complexity of all bOD discovery algorithms that use the list-based bOD formalization. Hence, we now introduce (and later use) a set-based bOD formalization, as defined in [25]. Set-based bODs span much smaller set-containment candidate lattices similar to, e. g., FD discovery algorithms like TANE [10], that grow only exponentially with the number of attributes and, therefore, make an efficient discovery feasible. For space reasons, we do not repeat the mapping between set-based and list-based bODs, all proofs, and the axioms for set-based bODs and refer to [25] for details.
First, we introduce equivalence classes and partitions consistent with [9] and [25]:
An equivalence class w. r. t. a given attribute set is denoted as \(\mathscr {E}(t_X)\). It groups tuples s and t together if their projection on \(X\) is equal: \(\mathscr {E}(t_X) = \{s \in \mathrm {r} \mid s_{X} = t_{X} \}\) with \(X \subseteq \mathrm {R}\). The attribute set \(X\) is called context.
This means that all tuples in an equivalence class \(\mathscr {E}(t_X)\) have the same value (or value combination) in \(X\). Partitions group equivalence classes by a common attribute set:
A partition \(\varPi _X\) is a set of disjoint equivalence classes with the same set of attributes: \(\varPi _X = \{\mathscr {E}(t_X) \mid t \in \mathrm {r} \}\).
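The following Scala sketch (again our own illustration; Record and partition are assumed names) shows how such a partition can be computed by grouping tuple ids by their projection on the attribute set \(X\).

object PartitionSketch {
  // A tuple, modeled as a map from attribute name to value.
  type Record = Map[String, String]

  // Computes the partition of Definition 4: the tuple ids of the relation,
  // grouped into equivalence classes by their projection on the attribute set X.
  def partition(relation: IndexedSeq[Record], x: Set[String]): Set[Set[Int]] = {
    val attrs = x.toList.sorted // fixed projection order for the grouping key
    relation.indices
      .groupBy(i => attrs.map(a => relation(i)(a)))
      .values.map(_.toSet).toSet
  }

  def main(args: Array[String]): Unit = {
    val r = IndexedSeq(
      Map("Month" -> "1", "ADGrp" -> "0"),
      Map("Month" -> "1", "ADGrp" -> "0"),
      Map("Month" -> "1", "ADGrp" -> "1"))
    println(partition(r, Set("ADGrp"))) // two classes: {0, 1} and {2}
  }
}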
From our example dataset in Table 1, we can extract, for example, the partition . With equivalence classes and partitions, we now define the two set-based canonical forms for bODs [25, Definition 9]: constant bODs and order compatible bODs.
Constant bODs
A constant bOD is a marked attribute \(\overline{\mathsf {A}}\) that is constant within each equivalence class w. r. t. the set of attributes in the context \(X\). It is denoted as \(X: [\ ] \mapsto \overline{\mathsf {A}}\). It can be mapped to the list-based bODs \(\overline{\varvec{\mathsf {X'}}} \mapsto \overline{\varvec{\mathsf {X'}}\mathsf {A}}\) for all permutations \(\overline{\varvec{\mathsf {X'}}}\) of \(\overline{\varvec{\mathsf {X}}}\).
In our example dataset, i. a., the following constant bODs are valid: \(\{\}: [\ ] \mapsto \texttt {Month}\) and \(\{\texttt {ADelay}\}: [\ ] \mapsto \texttt {ADGrp}\). Constant bODs directly represent FDs [25]. They can be violated only by so-called splits.
A split w. r. t. a constant bOD \(X: [\ ] \mapsto \overline{\mathsf {A}}\) is a pair of tuples s and t such that both tuples are part of the same equivalence class \(\mathscr {E}(t_X)\) but \(s_\mathsf {A} \ne t_\mathsf {A}\).
The bOD \(\{\texttt {ADGrp}\}: [\ ] \mapsto \texttt {ADelay}\) is not valid in our example dataset because it is invalidated by at least one split (e. g., tuple \(t_1\) and \(t_5\)).
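The split condition translates directly into a validation routine for constant bOD candidates. The sketch below (illustrative only, reusing the partition representation from the previous sketch) reports a candidate \(X: [\ ] \mapsto \overline{\mathsf {A}}\) as valid iff no equivalence class of \(\varPi _X\) contains two different values in \(\mathsf {A}\).

object SplitCheckSketch {
  type Record = Map[String, String]

  // A constant bOD X: [] maps-to A is valid iff no split exists, i.e., every
  // equivalence class of the partition over X is constant in attribute A.
  def isConstant(relation: IndexedSeq[Record],
                 piX: Set[Set[Int]],
                 a: String): Boolean =
    piX.forall(eqClass => eqClass.map(i => relation(i)(a)).size == 1)
}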
Order compatible bODs
An order compatible bOD is denoted as \(X: \overline{\mathsf {A}} \sim \overline{\mathsf {B}}\) and states that two marked attributes \(\overline{\mathsf {A}}\) and \(\overline{\mathsf {B}}\) are order compatible within each equivalence class w. r. t. the set of attributes in the context \(X\). It can be mapped to a list-based bOD \(\overline{\varvec{\mathsf {X'}}\mathsf {A}} \sim \overline{\varvec{\mathsf {X'}}\mathsf {B}}\) for any permutation \(\overline{\varvec{\mathsf {X'}}}\) of \(\overline{\varvec{\mathsf {X}}}\).
A valid order compatible bOD of our example dataset is \(\{\ \}: \texttt {Fips}{{\,\mathrm{\uparrow }\,}}\sim \texttt {State}{{\,\mathrm{\uparrow }\,}}\). It tells us that when we order the dataset by Fips it is also ordered by State. Order compatible bODs are violated by so-called swaps:
A swap w. r. t. an order compatible bOD \(X: \overline{\mathsf {A}} \sim \overline{\mathsf {B}}\) is a pair of tuples s and t such that both tuples are part of the same equivalence class \(\mathscr {E}(t_X)\) and \(s \prec _{\overline{\mathsf {A}}} t\) but \(t \prec _{\overline{\mathsf {B}}} s\).
The order compatible bOD \(\{\ \}: \texttt {Fips}{{\,\mathrm{\uparrow }\,}}\sim \texttt {State}{{\,\mathrm{\downarrow }\,}}\), for example, is not valid in Table 1, because, i. a., \(t_9\) and \(t_6\) form a swap. Discovery algorithms for bODs in set-based form use both splits and swaps to validate constant and order compatible bOD candidates.
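Analogously, the swap condition yields a validation routine for order compatible bOD candidates. The sketch below (illustrative only, covering ascending directions) groups the tuples of each equivalence class by their \(\mathsf {A}\)-value and checks that the \(\mathsf {B}\)-values never decrease when moving to a strictly larger \(\mathsf {A}\)-value.

object SwapCheckSketch {
  type Record = Map[String, Int]

  // An order compatible bOD X: A-asc ~ B-asc is valid iff no equivalence class
  // contains two tuples s, t with s.A < t.A but s.B > t.B (a swap).
  def isOrderCompatible(relation: IndexedSeq[Record],
                        piX: Set[Set[Int]],
                        a: String, b: String): Boolean =
    piX.forall { eqClass =>
      // group the B-values by A-value and order the groups by A
      val groups = eqClass.toSeq
        .map(i => (relation(i)(a), relation(i)(b)))
        .groupBy(_._1).toSeq.sortBy(_._1)
        .map { case (_, pairs) => pairs.map(_._2) }
      // no swap: the largest B of a group must not exceed the smallest B of any
      // group with a larger A-value (checking neighbouring groups suffices)
      groups.zip(groups.drop(1)).forall { case (prev, next) => prev.max <= next.min }
    }
}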
Mapping from list to set-based form List-based bODs can be mapped to a set of set-based bODs in polynomial time. This mapping is based on the fact that \(\overline{\varvec{\mathsf {X}}} \mapsto \overline{\varvec{\mathsf {Y}}}\) is valid only if \(\overline{\varvec{\mathsf {X}}} \mapsto \overline{\varvec{\mathsf {X}}\varvec{\mathsf {Y}}}\) and \(\overline{\varvec{\mathsf {X}}} \sim \overline{\varvec{\mathsf {Y}}}\) are valid as well [25, Theorem 2]. \(\overline{\varvec{\mathsf {X}}} \mapsto \overline{\varvec{\mathsf {XY}}}\) ensures that there are no splits and \(\overline{\varvec{\mathsf {X}}} \sim \overline{\varvec{\mathsf {Y}}}\) ensures that there are no swaps that falsify the bOD. As we have seen in Definitions 6 and 8, the two set-based canonical forms for bODs enforce the same constraints.
A list-based bOD \(\overline{\varvec{\mathsf {X}}} \mapsto \overline{\varvec{\mathsf {Y}}}\) is valid iff the two following statements are true: First, \(\forall \overline{\mathsf {A}} \in \overline{\varvec{\mathsf {Y}}}\) the set-based bOD \(X: [\ ] \mapsto \overline{\mathsf {A}}\) is valid and, second, the set-based bOD is valid.
Actor programming model
The actor model is a reactive programming paradigm for concurrent, parallel and distributed applications [8]. It helps to avoid blocking behavior via isolation and asynchronous message passing. The core primitive in this model is the actor, an object with strictly private state and behavior. Actors are dynamically scheduled on threads by the actor runtime and, hence, can execute tasks in parallel. They communicate within and across process boundaries via asynchronous messages, which are immutable objects with arbitrary, but serializable content. Incoming messages to an actor are buffered in the actor's mailbox and then processed sequentially, so all parallelization happens between actors but not within one actor.
The strong isolation of actors and their lock-free, reactive concurrency model supports the development of highly scalable, but still dynamic algorithms [32], which is needed for search tasks, such as dependency discovery. Batch processing frameworks for distributed computing, such as Apache Spark [31] or Apache Flink [30], impose stricter workflows that sacrifice algorithmic flexibility to ease the implementation. Therefore, we implement our algorithm with the Akka toolkit [24] for actor programming.
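As a concrete illustration of this paradigm, the following minimal sketch uses the classic Akka actor API in Scala (our own example, not DISTOD code; the message types and the trivial validation logic are placeholders): a worker actor takes jobs from its mailbox one at a time and replies asynchronously.

import akka.actor.{Actor, ActorSystem, Props}

final case class ValidationJob(candidateId: Int)
final case class ValidationResult(candidateId: Int, isValid: Boolean)

// Each actor processes its mailbox sequentially; parallelism arises from
// running many such actors, potentially on different cluster nodes.
class Worker extends Actor {
  def receive: Receive = {
    case ValidationJob(id) =>
      val valid = id % 2 == 0                  // placeholder for the real check
      sender() ! ValidationResult(id, valid)   // asynchronous reply
  }
}

object ActorSketch extends App {
  val system = ActorSystem("distod-sketch")
  val worker = system.actorOf(Props[Worker](), "worker")
  worker ! ValidationJob(42)                   // fire-and-forget message passing
}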
Efficient distributed bOD discovery
In this section, we give an overview of our scalable, robust, and elastic bOD discovery algorithm DISTOD. DISTOD is executed on a cluster of network-connected compute machines (nodes). The algorithm assumes an asynchronous, switched network model, in which messages can be arbitrarily dropped, delayed, and reordered. We now first introduce the DISTOD algorithm and, then, describe its architecture.
DISTOD algorithm
DISTOD is a discovery algorithm that reactively traverses a bOD candidate lattice, which is built over all possible attribute sets, in a breadth-first manner. Figure 1 shows a snapshot of such a set lattice for the attributes \(\mathsf {A}, \mathsf {B}, \mathsf {C}, \mathsf {D}\), where some nodes have already been processed (bold nodes) and others have been pruned (dashed nodes). DISTOD starts the search with singleton sets of attributes and progresses to ever-larger sets of attributes in the lattice. When processing node \(X\), it checks the following bODs: constant bODs of the form \(X {\setminus } \{\mathsf {A}\}: [\ ] \mapsto \mathsf {A}{{\,\mathrm{\uparrow }\,}}\) and order compatible bODs of the forms \(X {\setminus } \{\mathsf {A}, \mathsf {B}\}: \mathsf {A}{{\,\mathrm{\uparrow }\,}}\sim \mathsf {B}{{\,\mathrm{\uparrow }\,}}\) and \(X {\setminus } \{\mathsf {A}, \mathsf {B}\}: \mathsf {A}{{\,\mathrm{\uparrow }\,}}\sim \mathsf {B}{{\,\mathrm{\downarrow }\,}}\), where \(\mathsf {A}, \mathsf {B} \in X\) and \(\mathsf {A} \ne \mathsf {B}\). Following FASTOD-BID's bottom-up search strategy [25], DISTOD can use the same minimality definitions and pruning rules that guarantee that only minimal and valid bODs are added to the result set (see Sect. 5). DISTOD produces exactly the same results as FASTOD-BID.
Fig. 1 Snapshot of a set lattice for the attributes \(\mathsf {A}, \mathsf {B}, \mathsf {C}, \mathsf {D}\). Bold nodes have already been processed by DISTOD, thin nodes still await processing, and dashed nodes have been pruned from the lattice
Although both DISTOD and FASTOD-BID use the same formalisms and pruning rules, DISTOD does not generate the candidate lattice level-wise, but instead uses a task-based approach that interleaves candidate generation, validation, and pruning. Hence, there are no synchronization barriers between the three steps and the algorithm can use the available resources in its distributed environment more effectively. Conceptually, DISTOD still follows the high-level steps proposed by FASTOD-BID, but interleaved: (i) initialization and generation of the initial candidates, (ii) candidate validation, (iii) node pruning, and (iv) generation of the next candidates. The steps (ii) to (iv) repeat until all candidates have been processed. Step (ii) is explained in Sect. 6, while steps (i), (iii), and (iv) are the subject of Sect. 5. Interleaving the four main algorithm steps means that they may occur concurrently for different nodes in the lattice. We do not strictly enforce that a level \(l_i\) has to be completed before the next level \(l_{i+1}\) is started. In our task-based approach, each node (attribute set \(X\)) in the candidate lattice represents a task, whose candidates have to be generated, validated, and pruned. DISTOD works on these tasks in parallel and a set of rules ensures that only minimal and non-pruned candidates are checked. Hence, a snapshot of the candidate lattice of a running instance of DISTOD might look like Fig. 1, where some higher-level nodes have already been processed while some lower-level nodes still need to be finished.
A single central master actor is responsible for maintaining a consistent view on the candidate lattice, performing minimality checks, and executing pruning decisions. Once the master has generated a candidate, the candidate can be validated independently of other candidates, which allows us to distribute these checks to different compute nodes. We describe the candidate generation in detail in Sect. 5 and the validation of candidates in Sect. 6. Section 7 explains how data are managed in DISTOD.
DISTOD architecture
The DISTOD system consists of a cluster with a single leader node and various follower nodes. The leader node is the initial connection point for the cluster setup and it is responsible for managing the cluster. The single leader hosts the master component that is responsible for the generation of minimal candidates and all pruning decisions. The leader distributes validation jobs to the follower nodes, which in turn send the results back to the leader. We assume that all input data physically resides on the leader node. On algorithm startup, the leader automatically replicates the input data to the other nodes in the system; during the discovery, it writes the final results to the leader node's disk. Each node of the DISTOD cluster is started individually either immediately or later at runtime if more compute power is needed. A common seed node configuration ensures that all nodes find each other to form the DISTOD cluster. Hence, the start of DISTOD is not synchronized across nodes and the algorithm accepts follower nodes for validation tasks until the leader ends the discovery. This reactive startup strategy enables elasticity (see Sect. 8.1) and improves resource usage because the candidate processing begins as soon as possible, i. e., it does not wait until all nodes are ready.
Following the actor programming model, DISTOD consists of different actors that communicate using message-passing. Each node in the DISTOD cluster runs a set of actors which are grouped into five modules (see Fig. 2):
Master module The master module comprises the central coordination actors of DISTOD. They are tasked with input, output, and state management as well as candidate generation. The master module is available only on the leader node.
Worker module The worker module contains the actors that validate bOD candidates and send the validation results back to the master module. The actors in this module can be spawned on all nodes.
Partition management module The partition management module hosts the actors storing the partitions (see Definition 4 on Page 5), which are used to validate bOD candidates. This module is available on all cluster nodes.
Leader module The leader module contains actors controlling the shutdown procedure and the replication of the initial partitions. It is available on the leader node only.
Follower module The follower module contains puppet actors for the shutdown procedure and the partition replication. They are directly controlled by the corresponding actors in the leader module and steer the local parts of both processes on the follower nodes. They are placed only on the follower nodes.
The leader node hosts the actors from the master, partition management, and leader module; the follower nodes host the actors from the worker, partition management, and follower module. In this passive leader setup, the leader node does not host actors from the worker module and, therefore, does not perform expensive bOD candidate validations. The active leader setup, in contrast, hosts the worker module also on the leader node so that the leader can contribute spare resources to candidate validations; the active leader is also required for stand-alone executions on one node without follower nodes. In this setup, the master module actors are run on separate high-priority threads to ensure that the leader node remains reactive and can answer requests from the other nodes despite the hosting of worker actors. All our experiments use the active leader setup, but we recommend passive leader for particularly wide datasets with many bODs.
Fig. 2 DISTOD's architecture consisting of multiple actors grouped into logical modules. Multiple instances of the same actor type are indicated with indices i, j, and k, where i and j control the parallelism and \(k \in [0 \dots \texttt {\#nodes}-1]\). Unidirectional arrows indicate unidirectional communication, i. e., asynchronous message sends, and bidirectional arrows indicate bidirectional communication, usually as request-response pairs
The actors of DISTOD are depicted with rounded corners in Fig. 2. The algorithm uses the master-worker pattern to work on the validation tasks in parallel. The Master module is responsible for creating and maintaining the candidate lattice. It generates the bOD candidates, creates validation jobs, and distributes them to the Worker actors via the dynamic work pulling pattern, which ensures a balanced load distribution. Because all modifications pass through the Master actor, it maintains a consistent view on the candidate lattice. It also performs all pruning decisions and maintains the job queue. The MasterHelper actors support the Master actor by performing parallelizable tasks, such as the candidate generation or the job-to-Worker dispatching. Section 5 describes how DISTOD ensures minimality and consistent pruning of bODs.
The Worker actors validate the bOD candidates, which is the most time-consuming part of the discovery. All Workers are supervised by a local WorkerMgr actor, which ensures that the system always operates at full capacity. The Workers emit valid bODs to the local RCProxy so that they can immediately request a new validation job from the Master without waiting for the result transmission. The RCProxy collects valid bODs from multiple Workers in a batch before reliably sending them to the single ResultCollector actor, which is responsible for formatting the results in a human-readable format and writing them to a file. Every batch from an RCProxy is immediately and asynchronously flushed to disk. This means that DISTOD outputs valid bODs progressively to a file on disk while the algorithm is still running. In this way, DISTOD can be stopped early if the result set is already satisfactory.
The validation of bODs is performed using partitions (see Definition 4) of the original input dataset. Section 6 describes this approach in detail. At the start of the algorithm, the DataReader actor reads the input dataset, parses it, and uses multiple Partitioner actors to create the initial partitions, which are then sent to the PartitionMgr actor. The PartitionMgr stores the initial partitions and caches intermediate partitions. All requests for partitions from the local Workers are sent to the PartitionMgr. If a requested partition is available in the cache, it is directly served to the Worker; otherwise, a partition generation job is sent to one of the PartitionGen actors. They perform the partition generation as described in Sect. 6 and return the partition to the PartitionMgr, which inserts the partition into the cache and forwards it to the requesting Worker(s).
If DISTOD runs on multiple nodes, some additional actors are needed for the cluster management. Figure 2 depicts them in the gray leader and follower modules. Since each node requires the initial partitions, we replicate the initial partitions from the leader's PartitionMgr to the PartitionMgr actors on all follower nodes via a side-channel implemented by the temporary PartitionRepl actors on the follower nodes and the corresponding PMEndpoint actors on the leader node (cf. Sect. 7.1). In addition to the partition replication, the algorithm also ensures that all nodes of the DISTOD cluster shut down cleanly at the end of the algorithm and that all results are flushed to disk beforehand. This is handled by a coordinated shutdown protocol implemented using the ShutdownCoord actor on the leader node and the Executioner actors on the follower nodes. The ShutdownCoord actor implements a registry for follower nodes and drives the shutdown process.
Candidate generation
Like all existing algorithms, i. a., [5, 10, 15, 25, 26], DISTOD uses a shared lattice data structure that tracks the results of the candidate validations to guarantee complete and correct results. This data structure is also used to check the minimality constraints during candidate generation. Since distributing this data structure causes a significant communication, i. e., synchronization, overhead that cannot be compensated by gains in parallelization, DISTOD sticks to a non-distributed, centralized candidate tracking and generation approach. A central component on the leader node, the Master, watches over intermediate results and ensures completeness and correctness of the algorithm. It generates bOD candidates, checks the candidates' minimality, and performs the candidate pruning because these three parts of the discovery algorithm rely on information about other nodes in the set lattice. All other parts of the algorithm can be executed independently of each other and, hence, they are distributed to the compute nodes. Intermediate results and pruning data are sent to the Master, which integrates them into its encapsulated state and considers them for the pruning decisions.
In this section, we first define minimality for bODs (Sect. 5.1) and then discuss how DISTOD ensures that only minimal bOD candidates are generated (Sect. 5.2). Afterward, we explain DISTOD's candidate generation algorithm (Sect. 5.3).
Trivial and minimal bODs
Like other dependency discovery algorithms, DISTOD outputs only minimal, non-trivial bODs. Non-minimal and trivial bODs can easily be inferred from the result set using the axioms for set-based bODs [25, Figure 5]. For triviality and minimality, we adopt the definition of FASTOD-BID [25] so that we can use the same highly effective pruning rules:
Definition 10
A constant bOD \(X: [\ ] \mapsto \overline{\mathsf {A}}\) is trivial iff \(\mathsf {A} \in X\). An order compatible bOD \(X: \overline{\mathsf {A}} \sim \overline{\mathsf {B}}\) is trivial iff \(\mathsf {A} \in X\), \(\mathsf {B} \in X\), or \(\overline{\mathsf {A}} = \overline{\mathsf {B}}\).
A constant bOD \(X: [\ ] \mapsto \overline{\mathsf {A}}\) is minimal iff it is not trivial and there is no context \(Y \subset X\), such that \(Y: [\ ] \mapsto \overline{\mathsf {A}}\) holds in the instance \(\mathrm {r}\). An order compatible bOD \(X: \overline{\mathsf {A}} \sim \overline{\mathsf {B}}\) is minimal iff it is not trivial and there is no context \(Y \subset X\), such that \(Y: \overline{\mathsf {A}} \sim \overline{\mathsf {B}}\), \(X: [\ ] \mapsto \overline{\mathsf {A}}\), or \(X: [\ ] \mapsto \overline{\mathsf {B}}\) hold in \(\mathrm {r}\).
As [25] shows, we can further reduce the number of bODs that we have to consider in our discovery algorithm by eliminating bODs with similar semantics. Constant bODs of the form \(X: [\ ] \mapsto \mathsf {A}{{\,\mathrm{\uparrow }\,}}\) and \(X: [\ ] \mapsto \mathsf {A}{{\,\mathrm{\downarrow }\,}}\) are semantically equivalent (cf. [25, Reverse-I in Figure 5]). Thus, we consider only constant bODs of the form \(X: [\ ] \mapsto \mathsf {A}{{\,\mathrm{\uparrow }\,}}\). Order compatible bODs of the form \(X: \mathsf {A}{{\,\mathrm{\uparrow }\,}}\sim \mathsf {B}{{\,\mathrm{\uparrow }\,}}\) and \(X: \mathsf {A}{{\,\mathrm{\uparrow }\,}}\sim \mathsf {B}{{\,\mathrm{\downarrow }\,}}\) eliminate \(X: \mathsf {A}{{\,\mathrm{\downarrow }\,}}\sim \mathsf {B}{{\,\mathrm{\downarrow }\,}}\) and \(X: \mathsf {A}{{\,\mathrm{\downarrow }\,}}\sim \mathsf {B}{{\,\mathrm{\uparrow }\,}}\), respectively, by Reverse-II [25, Figure 5]. Thus, we consider only order compatible bODs of the form \(X: \mathsf {A}{{\,\mathrm{\uparrow }\,}}\sim \mathsf {B}{{\,\mathrm{\uparrow }\,}}\) and \(X: \mathsf {A}{{\,\mathrm{\uparrow }\,}}\sim \mathsf {B}{{\,\mathrm{\downarrow }\,}}\). In summary, we use the following minimality pruning rules (a compact check is sketched after the list):
All relevant bOD candidates have the form \(X: [\ ] \mapsto \mathsf {A}{{\,\mathrm{\uparrow }\,}}\), \(X: \mathsf {A}{{\,\mathrm{\uparrow }\,}}\sim \mathsf {B}{{\,\mathrm{\uparrow }\,}}\), or \(X: \mathsf {A}{{\,\mathrm{\uparrow }\,}}\sim \mathsf {B}{{\,\mathrm{\downarrow }\,}}\).
A constant bOD candidate \(X: [\ ] \mapsto \mathsf {A}{{\,\mathrm{\uparrow }\,}}\) is not minimal if
it is trivial (\(\mathsf {A} \in X\)) or
there is a valid bOD \(Y: [\ ] \mapsto \mathsf {A}{{\,\mathrm{\uparrow }\,}}\), where \(Y \subset X\).
An order compatible bOD candidate \(X: \overline{\mathsf {E}} \sim \overline{\mathsf {F}}\) with \((\overline{\mathsf {E}}, \overline{\mathsf {F}})\) = \((\mathsf {A}{{\,\mathrm{\uparrow }\,}}, \mathsf {B}{{\,\mathrm{\uparrow }\,}})\) or \((\overline{\mathsf {E}}, \overline{\mathsf {F}})\) = \((\mathsf {A}{{\,\mathrm{\uparrow }\,}}, \mathsf {B}{{\,\mathrm{\downarrow }\,}})\) is not minimal if
it is trivial (\(\mathsf {A} \in X\), \(\mathsf {B} \in X\), or \(\mathsf {A} = \mathsf {B}\)) or
there is a valid bOD \(Y: \overline{\mathsf {E}} \sim \overline{\mathsf {F}}\), where \(Y \subset X\), or
there is a valid bOD \(X: [\ ] \mapsto \mathsf {A}{{\,\mathrm{\uparrow }\,}}\) or
there is a valid bOD \(X: [\ ] \mapsto \mathsf {B}{{\,\mathrm{\uparrow }\,}}\).
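The following Scala sketch spells out Rules 2 and 3 as plain predicates. The attribute-set representation and the validity lookup functions are illustrative assumptions; DISTOD does not enumerate subsets like this but enforces the rules incrementally via its candidate sets, as explained next.

```scala
// Minimality pruning rules as naive predicates (illustrative sketch only).
type Attr = String
final case class OcPair(a: Attr, b: Attr, bAscending: Boolean) // A↑ ~ B↑ (true) or A↑ ~ B↓ (false)

def constantCandidateIsMinimal(
    context: Set[Attr], a: Attr,
    validConstant: (Set[Attr], Attr) => Boolean): Boolean =
  !context.contains(a) &&                                  // Rule 2a: not trivial
  context.subsets().filter(_ != context)
    .forall(y => !validConstant(y, a))                     // Rule 2b: no smaller valid context

def orderCompatibleCandidateIsMinimal(
    context: Set[Attr], p: OcPair,
    validConstant: (Set[Attr], Attr) => Boolean,
    validOrderCompatible: (Set[Attr], OcPair) => Boolean): Boolean =
  !context.contains(p.a) && !context.contains(p.b) && p.a != p.b && // Rule 3a: not trivial
  context.subsets().filter(_ != context)
    .forall(y => !validOrderCompatible(y, p)) &&                    // Rule 3b: no smaller valid context
  !validConstant(context, p.a) &&                                   // Rule 3c
  !validConstant(context, p.b)                                      // Rule 3d
```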
Similar to TANE and just like FASTOD-BID, our candidate generation tracks all dependency candidates that either have not been tested yet or are known to be non-valid; only such candidates can still lead to minimal bODs. For each node \(X\) in the candidate lattice, the candidate state \(\mathscr {S}\) stores the untested/non-valid constant bOD candidates in the candidate set \(\mathscr {S}(X).C_c\) and the untested/non-valid order compatible bOD candidates in the candidate set \(\mathscr {S}(X).C_o\). An untested set of candidates may still result in valid minimal bODs. Removing valid bODs from the sets after their validation enforces all pruning rules listed above. The sets then fulfill the following definitions:
The candidate set \(\mathscr {S}(X).C_c\) corresponds to the constant bOD candidate set defined in [10, Lemma 3.3] and [25, Definition 10].
If \(\mathsf {A} \in \mathscr {S}(X).C_c\) for a specific node \(X\), then there was no valid constant bOD \(Y: [\ ] \mapsto \mathsf {A}{{\,\mathrm{\uparrow }\,}}\) for any \(Y \subset X\). Therefore, we can find minimal constant bODs by considering only candidates of the form \(X {\setminus } \{\mathsf {A}\}: [\ ] \mapsto \mathsf {A}{{\,\mathrm{\uparrow }\,}}\), where \(\mathsf {A} \in X\) (Rule 2a) and \(\mathsf {A} \in \mathscr {S}(X).C_c\) (Rule 2b). The same technique is used for order compatible bOD candidates:
\(\mathscr {S}(X).C_o\) = \(\{(\overline{\mathsf {E}}, \overline{\mathsf {F}})\ \vert \ (\overline{\mathsf {E}}, \overline{\mathsf {F}}) = (\mathsf {A}{{\,\mathrm{\uparrow }\,}}, \mathsf {B}{{\,\mathrm{\uparrow }\,}})\) or \((\mathsf {A}{{\,\mathrm{\uparrow }\,}}, \mathsf {B}{{\,\mathrm{\downarrow }\,}})\), \((\mathsf {A}, \mathsf {B}) \in X^2\), \(\mathsf {A} \ne \mathsf {B}\), and neither \(X {\setminus } \{\mathsf {A}, \mathsf {B}\}: [\ ] \mapsto \mathsf {A}{{\,\mathrm{\uparrow }\,}}\) nor \(X {\setminus } \{\mathsf {A}, \mathsf {B}\}: [\ ] \mapsto \mathsf {B}{{\,\mathrm{\uparrow }\,}}\) holds\(\}\) [25, Definition 11].
If \((\overline{\mathsf {E}}, \overline{\mathsf {F}}) \in \mathscr {S}(X).C_o\) for a specific node \(X\), with either \((\overline{\mathsf {E}}, \overline{\mathsf {F}})\) = \((\mathsf {A}{{\,\mathrm{\uparrow }\,}}, \mathsf {B}{{\,\mathrm{\uparrow }\,}})\) or \((\overline{\mathsf {E}}, \overline{\mathsf {F}})\) = \((\mathsf {A}{{\,\mathrm{\uparrow }\,}}, \mathsf {B}{{\,\mathrm{\downarrow }\,}})\), where \(\mathsf {A} \in X\) and \(\mathsf {B} \in X\) (Rule 3a), then there was no valid order compatible bOD \(Y: \overline{\mathsf {E}} \sim \overline{\mathsf {F}}\) for any context \(Y \subset X\) (Rule 3b) and neither \(X {\setminus } \{\mathsf {A}, \mathsf {B}\}: [\ ] \mapsto \mathsf {A}{{\,\mathrm{\uparrow }\,}}\) (Rule 3c) nor \(X {\setminus } \{\mathsf {A}, \mathsf {B}\}: [\ ] \mapsto \mathsf {B}{{\,\mathrm{\uparrow }\,}}\) (Rule 3d) holds.
In summary, storing only non-valid dependency candidates in each node and removing valid ones automatically enforces all pruning rules listed above. All three minimality pruning rules eventually lead to [25, Lemma 12] (we call it the "node pruning" rule), which states that if for a node \(X\) with \(\vert X \vert \ge 2\) both candidate sets \(\mathscr {S}(X).C_c\) and \(\mathscr {S}(X).C_o\) are empty, all succeeding nodes' candidate sets \(\mathscr {S}(Z).C_c\) and \(\mathscr {S}(Z).C_o\) with \(Z \supset X\) will be empty as well. This means that we can ignore all candidates of the succeeding nodes of a node for which both candidate sets are empty. DISTOD prunes nodes from the candidate lattice by storing a flag \(\mathscr {S}(X).p\) that indicates whether a node \(X\) should be considered or not. If a node \(X\) is pruned, all its successors in the candidate lattice are pruned as well; DISTOD does not generate bOD candidates for pruned nodes or their successors.
Generating minimal bOD candidates
Fig. 3 Inter- and intra-node dependencies for the subtasks of a node \(X\) and its two predecessor nodes \(W_0\) and \(W_1\), where \(\vert X \vert = 2\). The arrow on an edge points to the subtask that depends on the subtask at the source of the edge
In contrast to FASTOD-BID, DISTOD decouples the generation of candidates and their validation. The central Master component on the leader node drives the traversal of the candidate lattice. It performs the candidate generation, which includes the minimality checks and the pruning of nodes. The bOD candidates are encapsulated into jobs that are sent to Worker actors via a work pulling pattern. The distributed Worker actors then perform the validation of the minimal bOD candidates and send the results back to the Master.
DISTOD uses a task-based approach to bOD discovery. Each node in the candidate lattice (attribute set \(X\)) represents a task that is divided into five subtasks:
generation of minimal constant bOD candidates
generation of minimal order compatible bOD candidates
validation of minimal constant bOD candidates
validation of minimal order compatible bOD candidates
node pruning
Not all subtasks of a node can be executed concurrently because the subtasks depend on results of other subtasks of the node or its predecessors. Figure 3 depicts the inter- and intra-node dependencies for the subtasks of a node \(X\) with two predecessor nodes \(W_0\) and \(W_1\), where \(\vert X \vert = 2\). Because Gen-tasks simply depend on all predecessor nodes' Val-tasks, Fig. 3 easily generalizes to \(\vert X \vert > 2\).
The generation of the minimal constant bOD candidates of node \(X\) requires checking Rule 2b, i. e., it can be performed only after the constant bOD candidates of all predecessor nodes have been validated; the edges from the Val \(\mathscr {S}(W_i).C_c\) subtasks to the Gen \(\mathscr {S}(X).C_c\) subtask in Fig. 3 indicate this dependency. To guarantee the minimality of the order compatible candidates \(\mathscr {S}(X).C_o\) of node \(X\), the Master module has to follow Rule 3b, Rule 3c, and Rule 3d. Rule 3b requires the results of the order compatible bOD validations of all \(W_i\), which is expressed by the edges from Val \(\mathscr {S}(W_i).C_o\) to Gen \(\mathscr {S}(X).C_o\). Rule 3c and Rule 3d require the results of the constant bOD validations of all \(W_i\), indicated by the edges from Val \(\mathscr {S}(W_i).C_c\) to Gen \(\mathscr {S}(X).C_o\). \(\mathscr {S}(X).C_o\) can thus be generated as soon as both constant and order compatible bOD candidates of all predecessor nodes \(W_i\) have been validated. The validations of constant bODs and order compatible bODs are independent of each other and can be performed concurrently as soon as the respective candidates are fully generated. If both validation checks for node \(X\) are finished, the Master module can check \(X\)'s candidate states \(\mathscr {S}(X).C_c\) and \(\mathscr {S}(X).C_o\) to decide if the node and all its successors can be pruned from the lattice (node pruning).
In contrast to FASTOD-BID [25], where the generation of constant and order compatible bOD candidates happens after the node pruning subtasks of all previous level's nodes are done, DISTOD's candidate generation steps, i. e., the Gen \(\mathscr {S}(X).C_c\) and Gen \(\mathscr {S}(X).C_o\) boxes in Fig. 3, do not depend on the node pruning step. In this way, DISTOD proactively generates constant bOD validation jobs, some of which may be pruned once the order compatible validations of the predecessors are done. This does not violate the minimality of the discovered bODs, because such a job cannot contain valid candidates anyway. We deliberately interlace the candidate generation subtasks and validate their candidates independently from each other in the candidate validation steps, because this removes synchronization barriers and allows for a more fine-grained work distribution, improving the resource utilization on all nodes. DISTOD distributes the candidate validations as encapsulated jobs, i. e., the Val \(\mathscr {S}(X).C_c\) and Val \(\mathscr {S}(X).C_o\) boxes in Fig. 3, which are pulled and processed by the Worker actors on the follower nodes. Node pruning is handled downstream.
Candidate generation algorithm
The candidate generation is handled by the Master module on the leader node (cf. Fig. 2). For better performance, the Master module is parallelized and consists of two types of actors: a single Master and a pool of MasterHelpers.
Lattice structure To ensure consistency, the Master is the sole actor that can manipulate the candidate state \(\mathscr {S}\) and the validation job queue \(\varvec{\mathsf {Q}}\). The MasterHelper actors have read-only access to the candidate state \(\mathscr {S}\), perform the actual generation of new bOD candidates and check the minimality pruning rules. This allows us to parallelize the candidate generation, which reduces the load on the central Master. For each node \(X\) in the candidate lattice, we store an entry \(\{C_c, C_o, f_c, f_o, p, i_c, i_o\}\) in the candidate state \(\mathscr {S}\): The entry's constant and order compatible bOD candidate sets \(\mathscr {S}(X).C_c\) and \(\mathscr {S}(X).C_o\) serve to track untested and invalid candidates as described in Sect. 5.2. The two flags \(\mathscr {S}(X).f_c\) and \(\mathscr {S}(X).f_o\) indicate whether or not \(\mathscr {S}(X).C_c\) and \(\mathscr {S}(X).C_o\) have already been validated by Workers. The flag \(\mathscr {S}(X).p\) indicates whether the node as a whole is pruned. Lastly, the two counters \(\mathscr {S}(X).i_c\) and \(\mathscr {S}(X).i_o\) track the necessary preconditions for the candidate generation in node \(X\), i. e., the number of predecessor nodes \(W_i\), for which the constant (\(\mathscr {S}(W_i).f_c =\) true) and order compatible (\(\mathscr {S}(W_i).f_o =\) true) bOD validations have already been performed.
These counters allow the MasterHelpers to enforce the dependencies shown in Fig. 3, because they trigger the generation of candidates only once all preconditions are met, which is when \(\mathscr {S}(X).i_c = \vert X \vert \) and \(\mathscr {S}(X).i_o = \vert X \vert \), respectively. Every successful validation triggers \(i_c\) and \(i_o\) counter increments in all dependent nodes of \(X\) and with them the check whether a node is ready to generate either \(C_c\) or \(C_o\) candidates. If \(\mathscr {S}(X).i_c = \vert X \vert \), a MasterHelper reactively checks Rule 2b to initiate the generation of minimal constant bOD candidates for node \(X\) (Algorithm 1 Line 1f.); if \(\mathscr {S}(X).i_o = \vert X \vert \) and \(\mathscr {S}(X).i_c = \vert X \vert \), a MasterHelper checks Rule 3b, Rule 3c, and Rule 3d to generate minimal order compatible bOD candidates for node \(X\) (Algorithm 1 Line 4ff.). Although DISTOD checks whether candidates can be generated once per counter increment, the rule testing and the actual generation of minimal bOD candidates are done only once per node. The job queue \(\varvec{\mathsf {Q}}\) of the Master actor tracks the encapsulated validation jobs (the Val \(\mathscr {S}(X).C_c\) and Val \(\mathscr {S}(X).C_o\) boxes in Fig. 3) for the Worker actors.
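The per-node entry and the counter-driven readiness checks can be summarized in a few lines of Scala. The field names mirror the text above; the concrete types and method names are illustrative, not DISTOD's actual implementation.

```scala
// Per-node candidate state entry {C_c, C_o, f_c, f_o, p, i_c, i_o} (illustrative types).
final case class NodeState(
    var cc: Set[String],            // untested/non-valid constant bOD candidates      C_c
    var co: Set[(String, String)],  // untested/non-valid order compatible pairs       C_o
    var fc: Boolean = false,        // constant candidates already validated?          f_c
    var fo: Boolean = false,        // order compatible candidates already validated?  f_o
    var pruned: Boolean = false,    // node pruning flag                               p
    var ic: Int = 0,                // predecessors with validated constant set        i_c
    var io: Int = 0                 // predecessors with validated order comp. set     i_o
)

// Readiness checks derived from the precondition counters (cf. Fig. 3): generating
// C_c needs all |X| predecessors' constant validations; generating C_o additionally
// needs their order compatible validations.
def readyToGenerateConstant(x: Set[String], s: NodeState): Boolean =
  !s.pruned && s.ic == x.size
def readyToGenerateOrderCompatible(x: Set[String], s: NodeState): Boolean =
  !s.pruned && s.ic == x.size && s.io == x.size
```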
Lattice initialization The state initialization is performed by the Master actor right after reading the input dataset. The state of the sole level \(l_0\) node is initialized by setting \(C_c\) = \(\mathrm {R}\), \(C_o\) = \(\emptyset \), and \(f_c\) = \(f_o\) = true (no validations to perform). Level \(l_1\) is the first level that contains bOD candidates. For all \(\mathsf {A} \in \mathrm {R}\), the Master sets \(\mathscr {S}(\{\mathsf {A}\}).i_c\) = \(\mathscr {S}(\{\mathsf {A}\}).i_o\) = 1, \(\mathscr {S}(\{\mathsf {A}\}).C_c\) = \(\mathrm {R}\), \(\mathscr {S}(\{\mathsf {A}\}).C_o\) = \(\emptyset \), \(\mathscr {S}(\{\mathsf {A}\}).f_o\) = true, and adds the initial validation jobs to \(\varvec{\mathsf {Q}}\), which effectively starts the discovery. Because \(l_1\) includes only single-attribute nodes and, hence, no order compatible bOD candidates, the initialization also has to set the precondition counters \(\mathscr {S}(X).i_o\) for all nodes in level \(l_2\) to 2.
Validation job dispatching DISTOD uses work pulling to distribute validation jobs, which means that Worker actors, once they have finished a job, immediately request a new job from the Master. For each request, the Master dequeues a validation job \((X, C_c)\) (or \((X, C_o)\)) from \(\varvec{\mathsf {Q}}\), removes trivial constant bOD candidates from \(C_c\), and dispatches the job to the Worker for validation (Sect. 6). We cannot remove trivial candidates from \(\mathscr {S}(X).C_c\) directly, because they might be required to generate candidates of succeeding nodes.
If \(\varvec{\mathsf {Q}}\) is empty, the Master bookmarks the requesting Worker. As soon as new jobs are put into \(\varvec{\mathsf {Q}}\), bookmarked Workers are served with jobs again. Once all Workers are idle and \(\varvec{\mathsf {Q}}\) is empty, there are no more nodes with minimal bOD candidates in the lattice and DISTOD is finished.
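A stripped-down Scala sketch of this work-pulling dispatch is given below; the queue handling and the termination condition follow the description above, while the names and the send callback are illustrative.

```scala
import scala.collection.mutable

// Illustrative validation job: a lattice node and whether constant or
// order compatible candidates are to be checked.
final case class ValidationJob(node: Set[String], constant: Boolean)

class MasterDispatch[WorkerRef] {
  private val queue = mutable.Queue.empty[ValidationJob]
  private val idleWorkers = mutable.Queue.empty[WorkerRef]    // bookmarked Workers

  def onWorkerRequestsJob(worker: WorkerRef)(send: (WorkerRef, ValidationJob) => Unit): Unit =
    if (queue.nonEmpty) send(worker, queue.dequeue())
    else idleWorkers += worker                                // bookmark until new jobs arrive

  def onNewJobs(jobs: Seq[ValidationJob])(send: (WorkerRef, ValidationJob) => Unit): Unit = {
    queue ++= jobs
    while (idleWorkers.nonEmpty && queue.nonEmpty)            // serve bookmarked Workers first
      send(idleWorkers.dequeue(), queue.dequeue())
  }

  // All Workers idle and no jobs left: no node with minimal candidates remains.
  def discoveryFinished(totalWorkers: Int): Boolean =
    queue.isEmpty && idleWorkers.size == totalWorkers
}
```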
Validation result processing The Worker actors send the validation results for a candidate set \(C_c\) (or \(C_o\), respectively) back to the Master actor (via the MasterHelpers). The Master then updates the corresponding node's state \(\mathscr {S}(X)\) by setting \(f_c\) (or \(f_o\)) to true and removing all pruned candidates from \(C_c\) (or \(C_o\)). Once both validations have been performed (\(f_c = f_o = \textsc {true}\)), the Master checks if the node can be pruned from the candidate lattice (node pruning). If this is not the case, it updates the precondition counters of all successor nodes of \(X\). For this, the Master iterates over all nodes \(Z_i = X\cup \mathsf {S}_i\) with \(\mathsf {S}_i \in \mathrm {R} \setminus X\) that have not been pruned yet and increments the precondition counter \(\mathscr {S}(Z_i).i_c\) (or \(\mathscr {S}(Z_i).i_o\)). Non-existing node states \(\mathscr {S}(Z_i)\) are dynamically created during this step and added to the lattice. After the counters of all successor nodes have been updated, the Master generates candidate generation jobs for all non-pruned successor nodes \(Z_i\). These candidate generation jobs are sent to the MasterHelper actors in a round-robin fashion and are processed concurrently using Algorithm 1. Because the MasterHelpers cannot modify \(\mathscr {S}\) and \(\varvec{\mathsf {Q}}\) directly, they send the newly generated candidates and the new jobs back to the Master actor, which integrates them into \(\mathscr {S}\) and \(\varvec{\mathsf {Q}}\).
Node pruning If a node \(X\) is prunable (\(\mathscr {S}(X).C_c = \emptyset \wedge \mathscr {S}(X).C_o = \emptyset \)), the Master actor marks the node \(X\) and all existing successors \(Z_i\) as pruned by setting their flag p = true. Then, it removes all related validation jobs from \(\varvec{\mathsf {Q}}\).
Candidate validation
DISTOD validates bOD candidates similar to FASTOD-BID [25] using data partitions. More specifically, it uses stripped partitions to validate constant bODs and a combination of stripped and sorted partitions to validate order compatible bODs. In contrast to FASTOD-BID, though, DISTOD uses a slightly optimized algorithm to validate order compatible bODs and faces an additional challenge in partition management, because the candidate validations are distributed across different nodes in the cluster.
In this section, we first introduce sorted and stripped partitions. We then explain how we validate constant and order compatible bOD candidates.
Sorted and stripped partitions
Partitions \(\varPi _X\) are sets of equivalence classes w. r. t. a context \(X\) (see Sect. 3.2 Definition 4). Similar to FASTOD-BID, DISTOD does not directly use these full partitions for the candidate validation checks, because they take a lot of memory and lack information about the order of the tuples in the dataset required to check order compatible bODs. Instead, DISTOD uses two variations of these partitions: sorted partitions, which capture the order of the tuples and, thus, enable the validation of order compatible bODs, and stripped partitions, which remove implicit information and, in this way, reduce the memory footprint needed to store partitions.
Sorted partitions Sorted partitions are necessary for the validation of order compatible bODs because they preserve the ordering information of the input dataset.
A sorted partition, denoted as \(\widehat{\varPi }_X\), is a list of equivalence classes sorted by the ordering imposed on the tuples by \(X\) [25].
DISTOD's order compatible bOD validation algorithm performs only a single operation on sorted partitions: it looks up the positions of two given tuples to determine their order (see Sect. 6.2.2). We propose a reversed mapping of the sorted partitions, called inverted sorted partition \(\varGamma _X\), to represent sorted partitions in DISTOD. Inverted sorted partitions allow us to look up the position of a tuple identifier in a sorted partition in constant time.
An inverted sorted partition \(\varGamma _X\) is a mapping from tuple identifiers to the positions of their equivalence classes in a sorted partition \(\widehat{\varPi }_X\).
To give an example, consider a single attribute \(\mathsf {A}\) of Table 1: its sorted partition \(\widehat{\varPi }_{\mathsf {A}}\) lists the equivalence classes of \(\varPi _{\mathsf {A}}\) in the order of \(\mathsf {A}\)'s values, and the corresponding inverted sorted partition \(\varGamma _{\mathsf {A}}\) maps every tuple identifier to the position of its equivalence class in \(\widehat{\varPi }_{\mathsf {A}}\).
DISTOD's validation checks require only inverted sorted partitions for the singleton attribute sets, where \(\vert X \vert = 1\) (see Sect. 6.2.2). This means that DISTOD can compute the inverted sorted partitions for each \(\mathsf {A} \in \mathrm {R}\) directly from the individual columns of the dataset. After the inverted sorted partitions have been generated, we work with tuple identifiers only. This has the advantage that we can discard the attribute type information and all concrete values, which saves memory and, because the computations effectively deal with integers only, makes the operations on partitions fast and simple.
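As an illustration of how such a structure can be derived from a raw column, the following Scala sketch builds an inverted sorted partition; the representation (tuple identifier = array index, a Map for \(\varGamma \)) is an assumption of this sketch.

```scala
// Build the inverted sorted partition Γ_A of a single attribute from its column
// of values; the tuple identifier is the index into the column.
def invertedSortedPartition[T: Ordering](column: IndexedSeq[T]): Map[Int, Int] = {
  val classesInOrder: Seq[Seq[Int]] =
    column.indices
      .groupBy(t => column(t))   // equivalence classes over the attribute
      .toSeq
      .sortBy(_._1)              // order the classes by the attribute's values
      .map(_._2)
  // map every tuple identifier to the position of its equivalence class
  classesInOrder.zipWithIndex
    .flatMap { case (cls, pos) => cls.map(t => t -> pos) }
    .toMap
}

// Example: the column Vector("c", "a", "a", "b") yields the sorted classes
// [{1, 2}, {3}, {0}] and Γ = Map(1 -> 0, 2 -> 0, 3 -> 1, 0 -> 2).
```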
Stripped partitions DISTOD requires the sorted partitions only for the single attributes \(\mathsf {A} \in \mathrm {R}\) (level \(l_1\) of the candidate lattice); for attribute sets with \(\vert X \vert > 1\) (higher levels), it replaces sorted partitions with the smaller stripped partitions [5, 10, 12, 22, 25, 26] (also known as position list indexes [2, 15, 19, 20]).
Stripped partitions are partitions where singleton equivalence classes with \(\vert \mathscr {E}(t_X) \vert = 1\) are removed [10]. We denote them with \(\varPi ^*_X\).
Coming back to our example, a partition is transformed into its stripped partition by removing all singleton equivalence classes. For the proposed bOD validation algorithms (see Sect. 6.2), using stripped instead of full partitions is sufficient because it also guarantees correctness [25].
DISTOD uses two different approaches to generate stripped partitions: one for attribute sets of level \(l_1\) and another one for attribute sets of deeper levels. Stripped partitions for all single attribute sets (level \(l_1\)) are generated from the inverted sorted partitions by converting them back to sorted partitions and simply removing all singleton equivalence classes.
The partitions for larger attribute sets \(X\), where \(\vert X \vert \ge 2\), can efficiently be computed from the partitions of two of their subsets using the product of refined partitions. The partition product is "the least refined partition \(\varPi _{X\cup Y}\) that refines both \(\varPi _X\) and \(\varPi _Y\)" [10]. The stripped partition \(\varPi ^*_{\mathsf {AB}}\) in level \(l_2\), for example, is computed as the stripped partition product of \(\varPi ^*_{\mathsf {A}}\) and \(\varPi ^*_{\mathsf {B}}\): \(\varPi ^*_{\mathsf {AB}} = \varPi ^*_{\mathsf {A}} \cdot \varPi ^*_{\mathsf {B}}\). The stripped partitions of any two different subsets of \(X\) of size \(\vert X \vert - 1\) suffice as operands for the stripped partition product. This fits well with our small-to-large search strategy and gives us flexibility in choosing the operands for the stripped partition product.
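A compact Scala sketch of the stripped partition product is shown below. It follows the classic TANE-style procedure; the representation of a stripped partition as a list of tuple-identifier sets is an assumption of this sketch, not DISTOD's internal data structure.

```scala
// Stripped partition product Π*_{X∪Y} = Π*_X · Π*_Y (illustrative representation).
type StrippedPartition = List[Set[Int]]

def product(px: StrippedPartition, py: StrippedPartition): StrippedPartition = {
  // map every tuple to the index of its class in Π*_X (tuples in singleton classes are absent)
  val classOfX: Map[Int, Int] =
    px.zipWithIndex.flatMap { case (cls, i) => cls.map(_ -> i) }.toMap
  py.flatMap { clsY =>
    clsY
      .groupBy(classOfX.get)   // intersect the class of Π*_Y with the classes of Π*_X
      .collect { case (Some(_), tuples) if tuples.size > 1 => tuples } // strip new singletons
  }
}
```

Counting the classes and elements of the result during this scan directly yields \(\vert \varPi ^*_{X \cup Y} \vert \) and \(\Vert \varPi ^*_{X \cup Y} \Vert \), which the error-measure check in Sect. 6.2.1 relies on.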
Validation algorithm
As discussed in Sect. 5, DISTOD generates only minimal and non-pruned bOD candidates. For each node \(X\) in the candidate lattice, the algorithm generates constant bOD candidates of the form \(X {\setminus } \{\mathsf {A}\}: [\ ] \mapsto \mathsf {A}{{\,\mathrm{\uparrow }\,}}\) for all \(\mathsf {A} \in X\) and order compatible bOD candidates of the forms \(X {\setminus } \{\mathsf {A}, \mathsf {B}\}: \mathsf {A}{{\,\mathrm{\uparrow }\,}}\sim \mathsf {B}{{\,\mathrm{\uparrow }\,}}\) and \(X {\setminus } \{\mathsf {A}, \mathsf {B}\}: \mathsf {A}{{\,\mathrm{\uparrow }\,}}\sim \mathsf {B}{{\,\mathrm{\downarrow }\,}}\) for all \(\mathsf {A}, \mathsf {B} \in X\), where \(\mathsf {A} \ne \mathsf {B}\). For the validation, the constant bOD candidates and the order compatible bOD candidates of a node \(X\) are grouped together. Each group is distributed as a validation job to one of the Worker actors. The constant bOD candidates are validated using a partition refinement check on stripped partitions (see Sect. 6.2.1) and the order compatible bOD candidates are validated by comparing the tuple orderings imposed by the bOD's first attribute (\(\mathsf {A}\)) and its second attribute (\(\mathsf {B}\)), as represented in their sorted partitions (see Sect. 6.2.2).
Validating constant bODs
Constant bODs of the form \(X: [\ ] \mapsto \mathsf {A}{{\,\mathrm{\uparrow }\,}}\) resemble FDs. Hence, they can be validated using a partition refinement check [10, 25]. A partition \(\varPi \) refines another partition \(\varPi '\) if the equivalence classes in \(\varPi \) are all subsets of any of the equivalence classes in \(\varPi '\). The partition refinement check on stripped partitions can efficiently be computed using the popular error measure \(e(Y) = (\Vert \varPi ^*_Y \Vert - \vert \varPi ^*_Y \vert ) / \vert r \vert \) from the TANE algorithm [10], where \(\vert \varPi ^*_Y \vert \) is the number of equivalence classes of the stripped partition and \(\Vert \varPi ^*_Y \Vert \) is the sum of the sizes of all equivalence classes in \(\varPi ^*_Y\). Partition \(\varPi _X\) refines partition \(\varPi _Y\) if and only if \(e(X) = e(X \cup Y)\).
A constant bOD candidate of the form \(X {\setminus } \{\mathsf {A}\}: [\ ] \mapsto \mathsf {A}{{\,\mathrm{\uparrow }\,}}\) is valid if the stripped partition \(\varPi ^*_{X {\setminus } \{\mathsf {A}\}}\) refines \(\varPi ^*_{\mathsf {A}}\) (no split; see Definition 6). We check this condition using the error measure: If \(e(X {\setminus } \{\mathsf {A}\}) = e(X)\), the bOD is valid; if \(e(X {\setminus } \{\mathsf {A}\}) \ne e(X)\), the bOD is invalid. Because computing the error measure on demand would require a scan over the stripped partition, i. e., a scan for each constant bOD candidate check, DISTOD stores for each stripped partition \(\varPi ^*_X\) its number of equivalence classes \(\vert \varPi ^*_X \vert \) and its number of elements \(\Vert \varPi ^*_X \Vert \). The divisor \(\vert r \vert \) is constant for all checks and can, thus, be removed. The algorithm calculates \(\vert \varPi ^*_X \vert \) and \(\Vert \varPi ^*_X \Vert \) during the generation of the respective stripped partition because it has to scan the partition product during this operation anyway. In this way, the candidate check consists of only three operations: two subtractions to compute the errors \(e(X {\setminus } \{\mathsf {A}\})\) and \(e(X)\), respectively, and a comparison of the two error values, i. e., \(e(X {\setminus } \{\mathsf {A}\}) = e(X)\).
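Under the assumption that \(\vert \varPi ^* \vert \) and \(\Vert \varPi ^* \Vert \) are stored alongside every stripped partition, the check reduces to a few integer operations, as the following Scala sketch shows (names are illustrative).

```scala
// Cached statistics of a stripped partition: |Π*| and ‖Π*‖.
final case class PartitionStats(numClasses: Int, numElements: Int) {
  def error: Int = numElements - numClasses   // e(Y) * |r|; the constant divisor |r| is omitted
}

// Constant bOD X\{A}: [] -> A↑ holds iff adding A does not split any equivalence class,
// i.e., the error values of the context and of the full node agree.
def constantBodHolds(context: PartitionStats, contextWithA: PartitionStats): Boolean =
  context.error == contextWithA.error
```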
Validating order compatible bODs
Order compatible bOD candidates of the forms \(X: \mathsf {A}{{\,\mathrm{\uparrow }\,}}\sim \mathsf {B}{{\,\mathrm{\uparrow }\,}}\) and \(X: \mathsf {A}{{\,\mathrm{\uparrow }\,}}\sim \mathsf {B}{{\,\mathrm{\downarrow }\,}}\) are also validated using partitions. For this, we slightly change the validation algorithm from FASTOD-BID [25] to improve its efficiency: To verify that there is no swap over the attributes \(\mathsf {A}\) and \(\mathsf {B}\), FASTOD-BID's validation algorithm scans over the (large) sorted partition \(\widehat{\varPi }_{\mathsf {A}}\) and over the (small) stripped context partition \(\varPi ^*_X\); our changed version scans only the (small) stripped context partition \(\varPi ^*_X\) twice. Sorted partitions always contain all tuples of the input dataset, while stripped partitions get smaller for larger attribute sets due to the partition refinement, i. e., the number of singleton equivalence classes in \(\varPi _X\) grows with the number of attributes in \(X\) and stripped partitions \(\varPi ^*_X\) omit these classes. Figure 4 illustrates the rapid size reduction of stripped partitions over the levels of seven datasets.
Fig. 4 Average relative size of stripped partitions \(\left( \frac{\Vert \varPi ^*_X \Vert }{\vert \mathrm {r} \vert }\right) \) per level \(l_i\) (\(\vert X \vert = i\)) for different datasets
Algorithm 2 shows DISTOD's steps to validate an order compatible bOD candidate. It first sorts the equivalence classes of the stripped context partition \(\varPi ^*_X\) by the first attribute \(\mathsf {A}\) of the bOD (Line 1) and then compares this order to the order imposed by the second attribute \(\mathsf {B}\) (Line 3-16). The algorithm checks the candidates \(X: \mathsf {A}{{\,\mathrm{\uparrow }\,}}\sim \mathsf {B}{{\,\mathrm{\uparrow }\,}}\) and \(X: \mathsf {A}{{\,\mathrm{\uparrow }\,}}\sim \mathsf {B}{{\,\mathrm{\downarrow }\,}}\) simultaneously. If it finds no swap in the data, the order compatible bOD \(X: \mathsf {A}{{\,\mathrm{\uparrow }\,}}\sim \mathsf {B}{{\,\mathrm{\uparrow }\,}}\) is valid (Line 17f). Analogously, if it finds no reverse swap in the data, the order compatible bOD \(X: \mathsf {A}{{\,\mathrm{\uparrow }\,}}\sim \mathsf {B}{{\,\mathrm{\downarrow }\,}}\) is valid (Line 17f). If neither a swap nor a reverse swap is found, both order compatible bOD forms are valid.
In Line 1 of Algorithm 2, we call Algorithm 3 to sort the tuples in the equivalence classes of the stripped context partition by the first attribute \(\mathsf {A}\). Algorithm 3 first iterates over all equivalence classes \(\mathscr {E}\) of the context partition (Line 2). For each \(\mathscr {E}\), it creates a new temporary list to store the sorted equivalence classes (Line 3). The algorithm then iterates over all tuples of the equivalence class \(\mathscr {E}\) and retrieves their positions when being sorted by attribute \(\mathsf {A}\) using the inverted sorted partition \(\varGamma _{\mathsf {A}}\) (Line 4f). We store the tuple identifiers in the sorted map \(\mathscr {M}\) with their position \(pos_t\) as the key. \(\mathscr {M}\) stores key-value pairs and allows us to traverse the values in key order. DISTOD uses this property to add the tuple sets in sorted order to their new sorted equivalence class \(\varvec{\mathsf {E}}\) in Line 8f. Afterward, it stores the sorted equivalence class in the output set \(\gamma \) (Line 10) and clears \(\mathscr {M}\) to process the next equivalence class \(\mathscr {E}\) of the context partition (Line 11). In the example of Table 1, sorting the equivalence classes of a stripped context partition by the attribute \(\texttt {ADGrp}\) with Algorithm 3 arranges the tuples within every class according to \(\varGamma _{\texttt {ADGrp}}\), grouping tuples with equal \(\texttt {ADGrp}\) values into the same sorted sub-class.
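The following Scala sketch condenses Algorithms 2 and 3 into a single function. It checks for swaps and reverse swaps in one pass per equivalence class; the representation (\(\varGamma \) as a Map, classes as sets of tuple identifiers) is an assumption of this sketch, and the real Algorithm 2 additionally short-circuits once both a swap and a reverse swap have been found.

```scala
// Check X: A↑ ~ B↑ and X: A↑ ~ B↓ at once on the stripped context partition.
// Returns (ascendingValid, descendingValid).
def checkOrderCompatibility(
    contextClasses: List[Set[Int]],   // stripped context partition Π*_X
    gammaA: Map[Int, Int],            // inverted sorted partition Γ_A
    gammaB: Map[Int, Int]             // inverted sorted partition Γ_B
): (Boolean, Boolean) = {
  var swapFound = false               // falsifies A↑ ~ B↑
  var reverseSwapFound = false        // falsifies A↑ ~ B↓
  for (cls <- contextClasses) {
    // sort the class by A and group tuples with equal A positions (Algorithm 3)
    val sortedByA: List[Set[Int]] =
      cls.groupBy(gammaA).toList.sortBy(_._1).map(_._2)
    var maxB = Int.MinValue
    var minB = Int.MaxValue
    for (subClass <- sortedByA) {
      val bPositions = subClass.map(gammaB)
      if (bPositions.min < maxB) swapFound = true        // later in A, but earlier in B
      if (bPositions.max > minB) reverseSwapFound = true // later in A and later in B
      maxB = math.max(maxB, bPositions.max)
      minB = math.min(minB, bPositions.min)
    }
  }
  (!swapFound, !reverseSwapFound)
}
```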
Distributed partition management
DISTOD manages data in different parts of the system. The Master actor on the leader node keeps track of all checked and unchecked bOD candidates and the search progress. This includes the status of specific bOD candidates, pending tasks, and pruning information. The Master actor also stores work queues to track waiting and pending validation jobs. It sends out the candidates as validation jobs to the nodes in the cluster and receives the intermediate results and pruning information back to integrate them into its state. The ResultCollector actor on the leader node manages the results of the discovery algorithm; more specifically, it receives all valid bODs from all nodes in the DISTOD cluster and writes them to disk for persistence. It is also responsible for removing duplicate results, which could occur if a follower node is removed from the DISTOD cluster and its unfinished tasks are dispatched to another node. Managing the candidate data and search state is easy because it is centralized on the leader node. The handling of the sorted and stripped partitions, however, is a challenge because the partitions are required on different follower nodes to perform the candidate validations and the set of stripped partitions grows exponentially while the algorithm generates ever-more partitions for further candidate validations. For this reason, we focus on distributed partition management in this section.
Partition handling
DISTOD distributes the bOD candidate validations as jobs to the Worker actors across the nodes in the cluster on a "first come, first served" basis. This helps to facilitate elasticity (see Sect. 8.1) because every node in the cluster can receive any validation job and we do not have to adapt our distribution strategy when nodes join or leave the cluster. However, different validation candidates require different partitions and, due to the dynamic distribution, we do not know a priori which node will receive which validation candidates. Therefore, DISTOD manages the partitions for each node individually and locally. Each DISTOD node stores and generates its own partitions, and no node has to hold all partitions because only the partitions relevant to the locally performed checks are generated. Based on an initial set of partitions, which is replicated across the nodes in the cluster, each node can generate all other stripped partitions on demand.
The initial partitions are generated directly from the input dataset. They include the inverted sorted partitions \(\varGamma _{\mathsf {A}}\) for each \(\mathsf {A} \in \mathrm {R}\) and the stripped partition of the empty attribute set. In DISTOD, only the leader node reads the input dataset and, thus, the leader node creates the initial partitions (i. e., its DataReader and Partitioner actors do). After a follower node connects to the leader node, it replicates the initial partitions once and generates all other stripped partitions locally on demand.
Depending on the characteristics of the input dataset, the initial partitions can get quite large. If we sent these amounts of data over the default message channel, they might hinder or delay other messages from being sent and received. This could include time-critical messages, such as heartbeats, or other important messages, such as cluster membership updates and gossip. To prevent such message collisions, DISTOD uses message side-channels between all nodes in the DISTOD cluster for any large message transfers. A side-channel handles the streaming of the chunked initial partitions over a separate, low-priority, back-pressured communication channel.
The partition management and partition generation are implemented in the partition management module. Each node in the cluster has its own PartitionMgr actor, which stores the inverted sorted partitions for each \(\mathsf {A} \in \mathrm {R}\), the \(l_1\) stripped partitions, and the stripped partition for the empty candidate set. The PartitionMgr also serves as a cache for the temporary stripped partitions \(\varPi ^*_X\) for \(\vert X \vert \ge 2\). We cache the intermediate stripped partitions in the PartitionMgr because they are used to generate stripped partitions for larger attribute sets and different candidate validation checks can rely on the same partitions. The local Workers can start requesting partitions from their local PartitionMgr as soon as all initial partitions have been replicated; missing stripped partitions are generated from the existing ones on demand.
All partitions stored in the PartitionMgr are immutable and, thus, Workers can safely access the same partition concurrently. If a Worker has to manipulate a partition, e. g., to sort its equivalence classes, it uses a private working copy of the stripped partition.
On-demand stripped partition generation
DISTOD uses two alternative strategies for the local generation of stripped partitions: recursive generation and direct partition product. This section introduces both strategies.
Recursive partition generation
The node-local and on-demand generation of stripped partitions and DISTOD's distribution of the candidate validation checks to different nodes entail an irregular generation of stripped partitions: It is possible that a Worker receives a task for which it requires a stripped partition whose predecessor partitions have not all been generated by the local PartitionMgr. In this case, the requested partition cannot be computed as the partition product of two of its immediate subsets. This effect is amplified by the regular partition cleanup of the PartitionMgr and by partition eviction processes in the case of memory shortage (cf. Sect. 7.3). We overcome the issue of missing subset partitions by recursively computing partitions in a way that makes the best use of already available intermediate partitions.
For each partition request that cannot be served by the partition cache, the PartitionMgr recursively generates a chain of partition generation jobs for the PartitionGen actors. This job chain records the order of the partition generation jobs and the particular inputs for each job in the chain. A job chain is sent to a single PartitionGen actor, which processes it from the beginning to the end. The PartitionGen actor temporarily stores the generated partitions and can use them as input for the next partition generation job. If a newly generated partition should be stored in the partition cache, the PartitionGen actor sends the partition to its local PartitionMgr, which also forwards the partition to the requesting Worker.
Algorithm 4 shows the recursive function that creates a chain of partition generation jobs. It takes the target attribute set \(X\) and all stored partitions \(\mathscr {P}\) as input and returns a list of partition generation jobs \(\varvec{\mathsf {J}}\). A partition generation job (e. g., in Line 10) consists of the attribute set of the target partition to generate, the two input partitions for the partition product, and a flag \(s\) that tells the PartitionGen actor whether this partition should be stored or not. We store all partitions up to a depth of three. This is a compromise between not storing predecessors and storing all intermediate partitions. By storing all intermediate partitions, partition generations could be computed much faster, but the cache size would quickly outgrow any main memory capacity due to its exponential growth. By storing only the target partition, on the other hand, we would use a lot less memory, but we would have to recompute common intermediate partitions for the following partition requests as well.
The two input partitions of a partition generation job are either the stripped partitions \(\varPi ^*_X\) themselves or the identifiers \(X\) of the partitions. If only the identifiers are specified, the PartitionGen actor looks up the stripped partitions in its temporary partition state and uses the looked up partitions as input for the partition product. The partition generation job chain ensures that all necessary stripped partitions are computed before they are used as input for the partition product and that no partition is generated multiple times by the same PartitionGen actor (see Line 2 in Algorithm 4).
Algorithm 4 generates a minimal and deterministic number of jobs. This is because the algorithm consistently chooses the partition product factors from the candidate set's predecessors in a left-oriented way, utilizing the cached partitions optimally. For each recursion step, the predecessors of the current candidate set are sorted lexicographically (see Line 5 in Algorithm 4). If no predecessors are available (see Line 7), the algorithm generates the first two predecessor partitions recursively for the partition product; if only one predecessor is available (see Line 11), this predecessor partition is taken and the first non-available predecessor partition is generated recursively; if more predecessor partitions exist (see Line 15), the algorithm takes the first two predecessor partitions. Any following partition generation run also uses this left-orientation and, in this way, automatically re-uses the previously generated partitions. This cuts down the number of generation steps and, thus, reduces the time spent generating partitions.
Direct partition product
The recursive generation of stripped partitions is, in general, the fastest way of computing new partitions because the operation re-uses existing stripped partitions that usually become smaller for larger attribute sets. It also caches intermediate results to accelerate later partition retrieval and generation operations. However, if DISTOD's heap memory usage exceeds a certain threshold, the algorithm's partition eviction mechanism (see Sect. 7.3) removes all intermediate stripped partitions from the partition cache of the PartitionMgr so that later partitions have to be recursively generated from the partitions in level \(l_1\). This is very costly because, without intermediate partitions, the number of partition generation jobs needed for the recursive generation grows exponentially with the level at which a partition is requested. In addition, storing the intermediate partitions again would lead to repeated memory exhaustion. For these reasons, we dynamically switch to a different partition generation method, the direct partition product, whenever the chain of recursive partition generation jobs becomes too long. This strategy computes the partition product for a stripped partition not only from its immediate predecessors, but also from other subsets. In our case, we use the single attribute set partitions from level \(l_1\) because they are always available.
As an example, consider the request for the partition \(\varPi ^*_{\mathsf {ABC}}\) and an empty partition cache. Instead of computing the partition product recursively via \(\varPi ^*_{\mathsf {ABC}} = \varPi ^*_{\mathsf {AB}} \cdot \varPi ^*_{\mathsf {AC}}\), the direct partition product uses the persistent \(l_1\) stripped partitions to directly, i. e., without intermediate results, compute the requested stripped partition: \(\varPi ^*_{\mathsf {ABC}} = \varPi ^*_{\mathsf {A}} \cdot \varPi ^*_{\mathsf {B}} \cdot \varPi ^*_{\mathsf {C}}\). In this example, the recursive partition product would compute three partition products in three jobs with two intermediate partitions, namely \(\varPi ^*_{\mathsf {AB}}\) and \(\varPi ^*_{\mathsf {AC}}\). The direct partition product would compute only two partition products, but within one job, with no intermediate partitions.
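Assuming the pairwise `product` function and the `StrippedPartition` type from the sketch at the end of Sect. 6.1 (both assumptions of these sketches, not DISTOD's actual API), the direct strategy simply folds the always-available \(l_1\) partitions of the requested attributes:

```scala
// Direct partition product: fold the l1 partitions of all requested attributes
// left to right; no intermediate partition is cached.
def directProduct(attrs: Seq[String],
                  l1Partitions: Map[String, StrippedPartition]): StrippedPartition =
  attrs.map(l1Partitions).reduce(product)
```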
Because the recursive generation of stripped partitions is much faster if the memory can hold the intermediate partitions and the partition product chains are not too long, DISTOD uses this strategy by default. The algorithm dynamically switches to the direct partition product if it has to generate a lot of intermediate stripped partitions. The threshold at which DISTOD switches from the recursive to the direct strategy is exposed as a parameter with a default value of 15.
Dealing with limited memory
The main memory resources in a cluster are usually limited. Hence, DISTOD has to effectively manage its memory consumption: If DISTOD allocates memory too aggressively, i. e., up to the limit, the Java garbage collector takes up most of the processing time slowing down the actual algorithm; if it exceeds the memory limit, the discovery fails and terminates. Working close to the memory limit cannot be prevented on large datasets, which is why we need to handle DISTOD's memory consumption carefully.
For the majority of datasets and especially for long ones, the partitions of the PartitionMgr take up most of the memory. As the discovery progresses, the recursive partition generation expands the size of the partition cache exponentially by adding ever more stripped partitions to it. Because any required partitions in level \(l_i\), where \(i \ge 2\), can always be computed on-demand from the stripped partitions in level \(l_1\), we can optimize the time that intermediate stripped partitions are kept in the cache. For this, we propose two strategies: periodic partition cleanup and partition eviction.
Periodic partition cleanup
Most stripped partitions are needed for only a short period of time, e. g., before their successors make them obsolete. Only the initial partitions need to be preserved to enable the validation of order compatible bOD candidates and the regeneration of all other stripped partitions. The intermediate stripped partitions in deeper levels can be deleted when they are not needed anymore. However, the point in time at which a partition is no longer needed is hard to predict because a single partition might be involved in different bOD candidate validations and only the Master actor on the leader node knows which candidates have already been processed and which candidates are next in the work queue. For this reason, our PartitionMgr tracks the number of accesses for each partition to estimate the relevance of any locally cached partition. It then periodically removes not-recently-used partitions from its partition cache.
The periodic partition cleanup protocol is run by each PartitionMgr in the cluster: Every PartitionMgr tracks the accesses to its cached stripped partitions. The scheduler of the local actor system then periodically sends a tick message to the PartitionMgr. When the PartitionMgr receives a tick message, it removes all partitions from the partition cache that have not been accessed since the last tick was received and resets its internal access statistics.
In this protocol, the tick interval defines the minimum lifetime of a partition: the generation of a partition is always triggered by an access, so a newly created partition counts as accessed in the interval of its creation and survives at least until the following tick. Therefore, short intervals cause a more aggressive removal behavior and long intervals keep partitions in the cache for longer. Overall, the tick frequency trades off memory consumption and runtime because the removal of stripped partitions slows DISTOD down. For maximum performance, the periodic partition cleanup can be turned off completely, but this increases the algorithm's memory usage significantly. We propose a default partition cleanup interval of 40 s, which proved to be a good compromise between runtime and memory consumption in our experiments; the interval can be configured with a parameter.
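A compact Scala sketch of such a not-recently-used cleanup is shown below. The class and method names are illustrative, the tick would be driven by the actor system's scheduler as described above, and in DISTOD only the intermediate stripped partitions would live in such a cache while the initial partitions are kept separately.

```scala
import scala.collection.mutable

// Partition cache with periodic not-recently-used cleanup (illustrative sketch).
class PartitionCache[K, V] {
  private val cache = mutable.Map.empty[K, V]
  private val accessedSinceLastTick = mutable.Set.empty[K]

  def get(key: K): Option[V] = {
    accessedSinceLastTick += key          // record the access for this interval
    cache.get(key)
  }

  def put(key: K, partition: V): Unit = {
    cache.put(key, partition)
    accessedSinceLastTick += key          // creation counts as the first access
  }

  // Called on every tick message (default interval: 40 s): drop everything that
  // has not been accessed since the last tick and reset the access statistics.
  def onTick(): Unit = {
    cache.filterInPlace((key, partition) => accessedSinceLastTick.contains(key))
    accessedSinceLastTick.clear()
  }
}
```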
Partition eviction
The periodic partition cleanup is a valuable technique to control the memory consumption for normal, non-critical discovery periods. Due to the dynamic nature of the discovery process, DISTOD's memory consumption is bursty at times and the periodic partition cleanups might not be able to remove enough partitions from the cache in critical situations. For this reason, we propose a second protocol, called partition eviction, that tries to prevent out-of-memory situations by carefully monitoring the memory consumption.
The heap size monitoring is implemented by a dedicated SystemMonitor actor on each node in the DISTOD cluster. This actor makes partition eviction decisions for its host independently of other SystemMonitors on remote nodes. This means that one node reaching its individual heap memory limit does not impact the performance or memory usage of other nodes.
The SystemMonitor monitors the local memory usage of DISTOD and compares it to a certain threshold. If the local memory usage exceeds the threshold, the SystemMonitor instructs the local PartitionMgr to remove all intermediate partitions from the partition cache. In this way, DISTOD frees all expendable memory at the cost of (re-)calculating all later requested partitions from scratch. Every such later generated partition is again stored in the PartitionMgr's partition cache to be utilized for further partition generations and validations. This re-populates the cache with relevant partitions. If DISTOD then hits the heap threshold again, the partition eviction is triggered once more. Because the partition eviction costs a lot of performance, the protocol is triggered only if it is inevitable. We recommend a partition eviction threshold of 90% so that the eviction process has enough scope for action.
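The heap check itself only needs the standard JVM runtime API, as the following sketch illustrates; the threshold handling and the call into the PartitionMgr are illustrative simplifications.

```scala
// Heap monitoring sketch for the SystemMonitor's eviction decision.
object HeapMonitorSketch {
  val evictionThreshold: Double = 0.90   // recommended default: 90 % of the maximum heap

  def heapUsageRatio(): Double = {
    val rt = Runtime.getRuntime
    val used = rt.totalMemory() - rt.freeMemory()
    used.toDouble / rt.maxMemory()
  }

  // Called periodically; evictAll would instruct the local PartitionMgr to drop
  // all intermediate partitions from its cache.
  def checkAndEvict(evictAll: () => Unit): Unit =
    if (heapUsageRatio() > evictionThreshold) evictAll()
}
```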
Complexity control
Despite DISTOD's novel search strategy, the exponentially growing search space still limits the applicability of the algorithm: On the one hand, the candidate checks may take unexpectedly long although DISTOD aggressively parallelizes and distributes them; on the other hand, the search space might exhaust the memory of the leader node. As countermeasures, DISTOD supports elasticity, i. e., it can incorporate additional compute nodes at runtime if the discovery runs unexpectedly long, and it supports semantic pruning to narrow down the search space and, in this way, reduce both the required memory and the runtime. In this section, we briefly describe these two features.
Elastic bOD discovery
DISTOD's runtime is hard to predict. It depends not only on the size of the input dataset but also on the dataset's structure as well as the number and placement of valid bODs in the search space, because these factors also determine the effectiveness of the pruning strategies and the number of candidate validations. Therefore, we designed DISTOD in such a way that we can dynamically add follower nodes to, or remove existing follower nodes from, a running DISTOD cluster. The idea is to increase DISTOD's capabilities and speed up the processing by elastically adding nodes to the cluster on demand. Removing nodes frees up the compute resources of the cluster for other tasks without impacting the correctness or completeness of the discovered bOD sets.
Adding follower nodes Because all nodes in the DISTOD cluster are started individually, the procedures of starting the initial DISTOD cluster or starting a new node are the same. To connect to an already running DISTOD cluster, a freshly started follower node requires only the address of one of the nodes in the cluster. It then joins the cluster by (i) connecting to the specified seed node, (ii) retrieving the addresses of all other nodes from it, (iii) fetching the initial partitions to its local PartitionMgr actor, (iv) connecting its local RCProxy to the ResultCollector actor on the leader node, and finally (v) registering its local Workers at the cluster's Master actor. DISTOD treats all connected follower nodes in the same way.
Removing follower nodes Any follower node can be removed from the DISTOD cluster by gracefully shutting it down. Only the leader node cannot be removed from the cluster, because it holds the central candidate state and orchestrates the discovery process. The shutdown of a single node is handled by the same coordinated shutdown protocol as the cluster shutdown, but only the node-local parts are executed. The termination procedure is executed by the local Executioner actor. It supervises the termination and makes sure that the following steps are executed in the correct order: (i) Stop local Workers and abort their jobs at the Master actor. The Master re-enqueues the jobs into the work queue so that they are eventually dispatched to the Workers of another node. (ii) Flush buffered results of the local RCProxy to the ResultCollector on the leader node. (iii) Leave the DISTOD cluster. (iv) Stop all remaining actors and cleanly terminate the Java Virtual Machine (JVM).
Semantic pruning strategies
In this section, we adapt two semantic pruning strategies for our distributed bOD discovery algorithm: interestingness pruning [25] and size limitation [19]. Both strategies discard candidates with certain characteristics that mark them as less practically relevant than other candidates. In this way, we reduce the result size at the cost of losing completeness of the discovered bOD sets. Therefore, both semantic pruning strategies are implemented as optional features that can be turned on and off.
Interestingness pruning The interestingness pruning strategy calculates a score for each bOD candidate and compares it to a threshold. If the score is too low, the candidate is not interesting enough and it is pruned. The interestingness score is defined as \(\sum _{\mathscr {E}(t_X) \in \varPi _X} {\vert \mathscr {E}(t_X) \vert }^2\) for a bOD with the context \(X\) and indicates coverage and succinctness of a bOD candidate [25].
To facilitate interestingness pruning in DISTOD's distributed setting, we calculate and use the interestingness score directly before validating a bOD candidate in the Worker actors. The interestingness pruning decision for a single bOD candidate is independent of other candidates because it involves only the calculation of the interestingness score. Thus, the calculation and testing can be distributed similarly to the candidate validations. This allows us to parallelize the score calculation, making use of the already distributed partitions.
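To illustrate, the score computation itself is a short pass over the equivalence classes of the candidate's context partition; representing a partition simply as a sequence of tuple-id sets is our simplification and not DISTOD's internal data structure.

  // Interestingness score of a bOD candidate with context X: the sum of the squared
  // sizes of the equivalence classes in the partition over X.
  def interestingness(partitionOverContext: Seq[Set[Int]]): Long =
    partitionOverContext.map(cls => cls.size.toLong * cls.size).sum

  // A Worker would evaluate this check directly before validating the candidate and
  // skip the validation if the candidate is not interesting enough.
  def isInterestingEnough(partitionOverContext: Seq[Set[Int]], threshold: Long): Boolean =
    interestingness(partitionOverContext) >= threshold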
Size limitation Limiting the maximum size of the dependency candidates to some fixed value is a technique that can reduce the size of the search space significantly, improving both runtime and memory consumption. A size limitation has also been argued to be semantically meaningful because large dependencies are statistically more likely to appear by chance [19]. We, therefore, allow the user to restrict the number of attributes that can be involved in a bOD (the size of a bOD) to a certain threshold. This threshold then directly corresponds to the maximum depth of a node in the candidate lattice (e. g., bOD candidates in level \(l_4\) have exactly four attributes). To implement the size limitation pruning strategy in DISTOD, we simply design the Master actor to not generate bOD candidates of levels deeper than a specified size limit.
In this section, we evaluate DISTOD's performance in different settings and on various datasets. We compare its runtime with all existing complete OD discovery algorithms, which are FASTOD-BID [25] and its distributed variant DIST-FASTOD-BID [22]. Note that ORDER [15], its hybrid variant [12], and OCDDISCOVER [5] produce incomplete results and are, therefore, not comparable to our approach. We published the source code for DISTOD, additional technical documentation, and the datasets for our evaluation on our repeatability website (Footnote 2). The source code for FASTOD-BID (Footnote 3) and DIST-FASTOD-BID (Footnote 4) is publicly available on GitHub.
After the performance evaluation (Sect. 9.2), we evaluate DISTOD's scalability w. r. t. the number of CPU cores of a single node (Sect. 9.3), the number of nodes in the cluster (Sect. 9.4), the number of tuples in a dataset (Sect. 9.5), and the number of attributes in a dataset (Sect. 9.6). To evaluate the robustness of DISTOD, we measure the runtime of our algorithm with different memory limits (Sect. 9.7). Our final experiments demonstrate the impact of the partition caching on DISTOD's runtime (Sect. 9.8).
Hardware We perform our experiments on a cluster with twelve bare-metal nodes. The machines are equipped with an Intel Xeon E5-2630 CPU at 2.2 GHz (boost to 3.1 GHz) with 10 cores and hyper-threading. Eight nodes have 64 GB of main memory and four nodes have 32 GB of main memory. All nodes run an AdoptOpenJDK version 11.0.8 64-bit server JVM and Spark 2.4.4 on Ubuntu 18.04 LTS. We run our base experiments (see Table 2) three times and report the average runtime and the relative standard deviation (RSD).
Memory restriction Java's performance does not scale linearly with the used heap size, i. e., using a smaller heap might reduce not only the memory consumption but also the execution time of an algorithm. We observed this behavior in our experiments: on the letter-sub.csv dataset, for example, a single DISTOD node with 31 GB of memory was about 20% faster than the same DISTOD node with 58 GB of memory (\(\sim 40\) min compared to \(\sim 49\) min). This is because Java uses a JVM-internal performance optimization called compressed ordinary object pointers (OOPs) when the heap size is smaller than 32 GB. This reduces the size of object pointers to 32 bits instead of 64 bits, even on 64-bit architectures. As a consequence, less memory is used by the Java process so that the processor cache usage, as well as the memory bandwidth usage, is improved. This speeds up the algorithm execution significantly. For more details about compressed OOPs, we refer to Oracle's Java documentation (Footnote 5). For this reason, if not stated otherwise, we limit the Java heap size for all experiments, all nodes, and all algorithms: The leader nodes, i. e., DISTOD's leader node and DIST-FASTOD-BID's driver process, limit their heap to 31 GB; the follower nodes, i. e., DISTOD's followers and DIST-FASTOD-BID's executors, limit their heap to 28 GB, which leaves 4 GB for stacks and the operating system.
Data characteristics For our experiments, we use several synthetic and real-world datasets from different domains, most of which have previously been used to evaluate FD and OD discovery algorithms. The datasets can be found on our repeatability website (see Footnote 2). We list all relevant details about the datasets in Table 2.
The implementation of DIST-FASTOD-BID does not support data types other than integers, but our datasets contain strings, dates, and decimals. For this reason, we had to preprocess all datasets to run DIST-FASTOD-BID on them: We removed all headers and substituted all values with their hash value so that each value is mapped to an integer representation. This transformation keeps all FDs intact, but may change bODs. Datasets with the suffix -sub in their name have been transformed using this method. Although DISTOD can handle NULL values, text strings, decimal numbers, and date values, the transformed datasets do not contain any of these and consist of integer values only. DISTOD follows the NULLS FIRST principle and infers the data type of a column during input parsing.
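The value substitution can be sketched as follows; using hashCode as the integer surrogate is our assumption for illustration (any hash-based mapping has the same effect). The sketch also makes explicit why bODs may change: hashing preserves equality of values, and therefore all FDs, but not the ordering of the original values.

  // Sketch of the preprocessing applied to the "-sub" datasets: drop the header row and
  // replace every value by an integer surrogate. Equal values map to equal integers,
  // so all FDs are preserved; the original value ordering is not, so bODs may change.
  def toIntegerSurrogates(rowsWithHeader: Seq[Seq[String]]): Seq[Seq[Int]] =
    rowsWithHeader.drop(1).map(row => row.map(_.hashCode))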
Varying the datasets
In this first experiment, we compare the runtime of DISTOD, FASTOD-BID and DIST-FASTOD-BID in their most powerful configuration on various datasets. In Table 2, we report the measured runtimes in seconds and list the number of valid constant bODs (reported as #FDs) and the number of valid order compatible bODs (reported as #bODs). For the number of valid bODs, we count the actual results that have been written to disk. Note that all three algorithms produce the same results in all experiments. Furthermore, the total (incoming and outgoing) average network traffic for the leader node of DISTOD varies between 162 kB/s and 11 MB/s for the different datasets. The peak total activity varies between 445 kB/s and 16 MB/s including the initial dataset replication phase. This is significantly below the usual maximum network bandwidth; hence, DISTOD's performance is not bound by network.
The experiment uses the following rules: We execute the single-threaded algorithm FASTOD-BID on a single node of our compute cluster. Both the Akka cluster of DISTOD and the Spark cluster of DIST-FASTOD-BID are configured over the same twelve machines. The Spark master runs on the same machine as the driver process and we configured Spark to put one executor with 20 cores on each of the remaining eleven nodes. For DISTOD, we use an active leader configuration, where the leader node spawns 10 workers and each of the eleven follower nodes spawns 20 workers; we set the partition cleanup interval to 40 s (cf. Sect. 7.3) and turn all semantic pruning strategies off (cf. Sect. 8.2). Whenever an execution hits the memory limit of 31 GB, we increase the heap limit to 58 GB. In all these cases but one (the letter-sub.csv dataset for FASTOD-BID), increasing the memory limit did not enable the algorithms to process the dataset.
Table 2 Runtimes in seconds of FASTOD-BID, DIST-FASTOD-BID, and DISTOD on different datasets in their most powerful configuration finding the complete set of minimal bODs (semantic pruning turned off)
Table 2 shows that DISTOD is an order of magnitude faster than FASTOD-BID for datasets with a lot of rows. On the adult-sub.csv dataset with 15 columns and over 30k rows, DISTOD finishes in slightly under 1 min while FASTOD-BID takes almost 1 h to complete. A similar observation can be made for the letter-sub.csv and the imdb-sub.csv datasets. DISTOD can finish the task for letter-sub.csv over \(60\times \) and for imdb-sub.csv nearly \(20\times \) faster than FASTOD-BID. For very small datasets with under 1k rows, FASTOD-BID is slightly faster than DISTOD. This is expected because DISTOD deals with the overhead of multiple parallel components and cluster management. Due to the active leader configuration and the reactive start procedure, DISTOD can start processing the dataset very early on, even before all follower nodes have connected to the leader. This allows DISTOD to process even small datasets very fast without the need to wait for a complete cluster startup and shutdown (e. g., for abalone-sub.csv or chess-sub.csv).
Compared to DIST-FASTOD-BID, DISTOD is at least 4\(\times \) faster on all tested datasets. On short and wide datasets, such as bridges-sub.csv or hepatitis-sub.csv, DISTOD is even an order of magnitude faster than DIST-FASTOD-BID. This shows that DISTOD gains its performance not only from scaling with the number of rows but also from scaling with the number of columns. On the small datasets in Table 2, DIST-FASTOD-BID is an order of magnitude slower than both FASTOD-BID and DISTOD. This is due to the synchronized cluster startup and shutdown procedure of the Spark implementation, which causes a significant runtime overhead.
DISTOD is the only approach that is able to process the dblp-sub.csv, the tpch-sub.csv, the ncvoter-sub.csv, and the horse-sub.csv datasets within our time and memory constraints. FASTOD-BID cannot process any of the four datasets because it hits the memory limit even when it uses 58 GB of heap memory. DIST-FASTOD-BID cannot process dblp-sub.csv and tpch-sub.csv, because its executors hit the memory limit in level three of the candidate lattice for both datasets. The ncvoter-sub.csv and horse-sub.csv datasets cannot be processed by DIST-FASTOD-BID, because it hits the time limit of 24 h. While DIST-FASTOD-BID did not finish level nine of the candidate lattice within the time limit for the ncvoter-sub.csv dataset, DISTOD explored all 15 levels of the candidate lattice in nearly 10 h, validating more than 736k bOD candidates. Similarly for the horse-sub.csv dataset: While DIST-FASTOD-BID cannot finish processing level eight of the candidate lattice within the time limit, DISTOD explored all 18 levels within 7 h, validating over 95m bOD candidates.
In summary, the experiment demonstrates that DISTOD competes well with FASTOD-BID on datasets with low numbers of bODs. It outperforms FASTOD-BID on harder datasets by a factor that is roughly proportional to the number of machines in the cluster, demonstrating that it distributes the workload effectively; its reactive and dynamic search strategy, in particular, distributes the workload significantly better than DIST-FASTOD-BID's MapReduce-style distribution approach. Our novel partition caching strategies and the distributed setting also enable DISTOD to process much larger datasets before running into memory limits.
Scaling the cores
In our second experiment, we evaluate DISTOD's scalability with the number of cores in one node. Because the performance of a system cannot be judged based on its scalability behavior alone, i. e., good scalability can simply be the result of an improper implementation, McSherry et al. introduced a metric, called configuration that outperforms a single thread (COST), that puts the scalability of a system in relation to the performance of a competent single-threaded implementation [17]. The COST of a parallel/distributed system is the hardware configuration required to outperform the single-threaded variant. To judge the performance of DISTOD, the following scalability experiments, therefore, also evaluate COST.
To evaluate the COST of DISTOD, we compare its runtime with different hardware configurations to the efficient single-threaded bOD discovery algorithm FASTOD-BID. We perform the experiments on a single node of our cluster and scale the number of cores from 1 to 20 (with DISTOD's parameter max-parallelism). Technically, we restrict the parallelism of the various parallel components of DISTOD by limiting DISTOD's actor system to a specific number of execution threads. We use two datasets to evaluate our COST metric: hepatitis-sub.csv as an example for a wide but short dataset and adult-sub.csv as an example for a narrow but long dataset.
Figure 5a shows the runtimes of DISTOD and FASTOD-BID for the hepatitis-sub.csv dataset in seconds. Since the hepatitis-sub.csv dataset is very short, there is not a big potential for parallelizing the candidate validations. Each validation is finished very fast and dispatching the validation jobs to different actors may introduce additional overhead. Despite that, DISTOD is able to outperform FASTOD-BID with a parallelism of six or more. Thus, DISTOD's COST for the hepatitis-sub.csv dataset is a single node with six cores. DISTOD's elastic task distribution strategy introduces only a low overhead and the parallelized candidate generation step improves its scalability even for short datasets.
Fig. 5 Scaling experiments
Figure 5b shows the runtimes of DISTOD and FASTOD-BID for the adult-sub.csv dataset in seconds. The adult-sub.csv dataset with 15 columns is narrower than the hepatitis-sub.csv dataset with 20 columns, but it has more than \(200\times \) more rows. As expected, DISTOD scales very well on this dataset and can outperform FASTOD-BID already with a parallelism of two. DISTOD with a parallelism of three is already twice as fast as the single-threaded algorithm FASTOD-BID.
Scaling the nodes
DISTOD is a distributed algorithm that scales not only vertically, by utilizing all available cores of a single machine, but also horizontally, by forming a cluster of multiple compute nodes. In Fig. 5g, we compare the runtimes of DISTOD, FASTOD-BID and DIST-FASTOD-BID when scaling them horizontally. Because FASTOD-BID is a single-threaded bOD discovery algorithm, its runtime is constant and serves as a reference. Note that we report the runtime of the approaches in seconds on a log axis. We ran the experiment on the adult-sub.csv dataset with 15 columns and 32,561 rows.
As the measurements in Fig. 5g show, DISTOD is 4\(\times \) faster than DIST-FASTOD-BID and 14\(\times \) faster than FASTOD-BID on a single node. On all twelve nodes, DISTOD is still more than 4\(\times \) faster than DIST-FASTOD-BID, but its lead has shrunk because the algorithm reaches the maximum parallelization for the parallelizable part of the bOD discovery on this specific dataset. Note that DISTOD uses the active leader configuration with ten Worker actors on the leader node while DIST-FASTOD-BID's Spark driver process utilizes only a single core on the leader node. Another observation that we make on this adult-sub.csv dataset, and on many other datasets as well, is that DISTOD on only one node is already faster than DIST-FASTOD-BID on all twelve nodes.
Scaling the rows
To perform the experiments on DISTOD's scalability in the number of rows \(\vert \mathrm {r} \vert \), we use two long datasets with a mid-range number of columns. Figure 5d plots DISTOD's runtime in seconds when scaling the number of rows of the ncvoter-sub.csv dataset from 100k to about 1m rows, and Fig. 5e plots the runtime of DISTOD for the flight-long.csv dataset with 50k to 500k rows.
The measurements in Fig. 5d show a tendency toward linear runtime growth and the measurements in Fig. 5e show almost perfectly linear scalability with the number of rows. This is because the computation time is dominated by the generation of partitions and the validation of bOD candidates, which are both linear in \(\vert \mathrm {r} \vert \). The deviation from a perfectly linear growth is due to the increasing number of bOD candidates: Additional records in the input data invalidate ever more candidates, which leads to a growth in the number of candidates that need to be validated. If fewer bOD candidates are invalidated, the valid bODs are shorter and, hence, detected early in the lower lattice levels; our pruning strategies can, then, prune a lot of the candidates in higher levels from the search space. If many invalid candidates occupy the lower lattice levels, DISTOD cannot prune these candidates but needs to validate them.
In Fig. 5d, we see that DISTOD scales about linearly with the number of rows if the number of candidates does not change significantly (400k to 900k rows) and it scales worse than linearly if the number of candidates grows more strongly (100k to 400k and 900k to 1m rows).
In Fig. 5e, we see that DISTOD scales almost perfectly linearly with the number of rows because the increase in candidates is small and even flattens out in the end.
Scaling the columns
To evaluate DISTOD's ability to scale with the number of columns in a dataset, we perform an experiment on our widest dataset, which is plista-sub.csv with 63 columns. In the experiment, we scale the number of columns from five to 60 in increments of five by taking random projections of the plista-sub.csv dataset. Figure 5f shows that the runtime of DISTOD grows exponentially with the number of columns. This is expected because the number of bOD candidates (and minimal bODs) in the set containment lattice grows exponentially with the number of columns in the worst case. The increasing number of bOD candidates depicted in Fig. 5f confirms this theoretical complexity. The candidate lattice for the plista-sub.csv dataset with 40 or more columns outgrows DISTOD's memory limit on our leader node because DISTOD is not able to free up any more memory without giving up the completeness or minimality of the results; in practice, the algorithm then sacrifices completeness by terminating early without finding all minimal bODs. For this reason, we report the runtimes only up to 35 columns.
As the result size grows exponentially with the number of columns in a dataset, DISTOD's runtime and memory consumption grow exponentially as well. To overcome this limitation, we introduced two semantic pruning strategies in Sect. 8.2: interestingness pruning and size limitation. Both reduce the number of valid bODs by restricting the search space to interesting bODs only. This improves the performance of DISTOD by orders of magnitude and allows it to mine larger datasets. By enabling the interestingness pruning, DISTOD can mine the plista-sub.csv dataset with 45 columns in 64 s (60 interesting bODs) and the entire dataset with 63 columns in 4.5 min (98 interesting bODs). Limiting the size of the bODs to a maximum of 6 columns achieves a similar result: For the plista-sub.csv dataset with 45 columns, DISTOD takes 89 s (532 bODs) and for the entire dataset with 63 columns, it takes just under 5 min (809 bODs).
Memory consumption
Current bOD discovery algorithms demand a lot of memory to store intermediate data structures. For DISTOD, this includes the candidate state and the job queue on the leader node and the partitions on all other nodes. In the following experiment, we therefore compare DISTOD's performance under limited memory, and its memory consumption, with our baseline algorithms FASTOD-BID and DIST-FASTOD-BID. To measure the memory consumption, we limit the available memory in logarithmic steps starting from 28 GB. We stop reducing the memory limit when the algorithm experiences memory issues for the first time. We execute the single-threaded algorithm FASTOD-BID on a single node of our cluster. DISTOD and DIST-FASTOD-BID utilize all nodes of the cluster. For DISTOD, we limit the available memory of the leader node as well as the memory of all follower nodes to the same value. We still use the active leader configuration, where the leader node spawns ten local Worker actors. The experiment runs DISTOD in two configurations: one with a 40 s partition cleanup interval, which we also used in all previous experiments, and one with a more aggressive interval of 5 s (see Sect. 7.3). A shorter interval causes DISTOD to free up memory more quickly, but it also influences DISTOD's runtime negatively. For DIST-FASTOD-BID, we gradually reduce the available memory for the Spark driver process as well as for all executors.
Figure 5c shows the runtimes of DISTOD, DIST-FASTOD-BID, and FASTOD-BID on the letter-sub.csv dataset when we reduce the available memory from 28 GB to 256 MB. Because FASTOD-BID already hit the memory limit (denoted with ML) with 28 GB of memory, we included its runtime with 58 GB. FASTOD-BID uses a lot of memory because its level-wise search strategy stores all partitions of the current and the next level while generating a level. In addition, it also stores the sorted partitions for all attributes of the dataset and the current intermediate candidate states. The partitions of the previous level are not freed up before the transition from one level to the next is completed. With the 58 GB of memory, FASTOD-BID takes more than 4.5 h to process the letter-sub.csv dataset. As a reference, DISTOD finishes the discovery on a single node with only 28 GB of memory within 38 min; this is \(7 \times \) faster than FASTOD-BID while using only half of the memory.
DIST-FASTOD-BID can process the letter-sub.csv dataset with at least 1 GB of available main memory. If we limit the memory to 512 MB or less, then the Spark executors fail, which is marked in Fig. 5c as reaching the memory limit (ML). However, the diminishing memory capacity already becomes noticeable at 2 GB, as DIST-FASTOD-BID's runtime starts to increase because the Spark framework spends extra cycles on data management.
Our algorithm DISTOD is able to process the letter-sub.csv dataset with a pessimistic partition cleanup interval of 5 s and only 512 MB of available memory. Even with only 512 MB of memory and the 5 s partition cleanup interval, DISTOD is still \(1.4 \times \) faster than DIST-FASTOD-BID with 1 GB of memory. For the experiment with 512 MB, DISTOD uses most of its memory and triggers partition evictions, i. e., it frequently frees up memory by removing all stored stripped partitions from the cache. This allows DISTOD to continue processing with less memory but at an increased processing time, which grows from 888 s (1024 MB) to 1817 s (512 MB). However, for the experiment with only 256 MB of memory, the candidate states outgrow the memory limit (ML) on the master node. In this case, freeing up stripped partitions does not help anymore and the algorithm becomes output bound. To still process the dataset, we recommend enabling the semantic pruning mechanisms of DISTOD.
Figure 5c also shows the runtimes of DISTOD when we keep the partition cleanup interval at 40 s. With a limit of 28 GB of memory, DISTOD takes 613 s to process the letter-sub.csv dataset with a 5 s partition cleanup interval but only 247 s with a 40 s partition cleanup interval (see Table 2 in Sect. 9.2). This shows that a small partition cleanup interval negatively impacts DISTOD's runtime when the available memory is adequately sized. A smaller partition cleanup interval allows DISTOD to efficiently run with lower memory bounds though. In the case of a 40 s partition cleanup interval, DISTOD's memory consumption peaks higher due to the less frequent partition cleanups, which causes DISTOD to hit the memory limit sooner than with a 5 s partition cleanup interval, namely already at 512 MB. As Fig. 5c also shows, DISTOD with a 40 s partition cleanup interval is slower than with a 5 s interval when using lower memory limits, such as 2 GB or 1 GB, because the JVM's garbage collector already starts fighting for memory, which is more costly and less effective than DISTOD's own memory management. Thus, for environments with limited memory, a small partition cleanup interval is preferable. It allows DISTOD to process the dataset more efficiently and, thus, faster.
Partition caching
In this section, we study the impact of DISTOD's partition caching mechanism (see Sect. 7.1) on the runtime for various datasets. For this, we measured the runtime of DISTOD with partition caching enabled and disabled in milliseconds and report the results in Table 3. For the experiment, DISTOD uses all twelve nodes of our testing cluster.
Table 3 Runtimes of DISTOD in milliseconds when partition caching is off or on. Column Diff reports the runtime increase or decrease when partition caching is on w. r. t. the runtime when partition caching is off
DISTOD with partition caching enabled is on average 26% faster than with partition caching disabled. For the datasets adult-sub.csv and letter-sub.csv, partition caching decreases DISTOD's runtime even by 55% and 70% respectively. The runtime increase with caching on the iris-sub.csv dataset is due to the overall small runtime on this dataset and the runtime fluctuations in the startup process that impact particularly such small measurements. If partition caching is disabled, DISTOD computes all stripped partitions from the initial partitions using the direct partition product (see Sect. 7.2.2) and does not cache stripped partitions in the PartitionMgr actor. The direct partition product is slower than computing a stripped partition from two of its predecessors and, thus, increases DISTOD's runtime. Since DISTOD works on constant and order compatible bOD candidate validations in parallel and checks from different validation jobs may require the same stripped partition, DISTOD may even compute stripped partitions multiple times on each node. If partition caching is enabled, DISTOD's PartitionMgr makes sure that Workers on the same node can reuse existing stripped partitions and that DISTOD can benefit from the faster recursive generation of stripped partitions (see Sect. 7.2.1). In summary, the experiment showed that the cached partitions might be superfluous in some discovery runs, but they can increase DISTOD's performance significantly in other runs.
In this paper, we proposed DISTOD, a novel, scalable, robust, and elastic bOD discovery algorithm that uses the actor programming model to distribute the discovery and the validation of bODs to multiple machines in a compute cluster. DISTOD discovers all minimal bODs w. r. t. the minimality definition of [25] in set-based canonical form with an exponential worst-case runtime complexity in the number of attributes and a linear complexity in the number of tuples. In our evaluation, DISTOD outperformed both the single-threaded bOD discovery algorithm FASTOD-BID [25] and the distributed algorithm DIST-FASTOD-BID [22] by orders of magnitude. The superior performance is the result of DISTOD's optimized search strategy and its improved validation techniques, which are both enabled by the reactive, actor-based distribution approach. With DISTOD's elasticity property and the semantic pruning strategies, we can discover bODs in datasets of practically relevant size, such as the plista-sub.csv dataset with 63 columns and 1k rows, which can now be mined in under 5 min.
Topics for future work are investigating strategies that can reduce the memory consumption and growth of the candidate states in the central data structure on the leader node because this is the memory limiting factor in DISTOD at the moment; adopting hybrid data profiling approaches, i. a., the ideas of [4, 12, 18], into a distributed bOD discovery algorithm; or enhancing our approach to the discovery of approximate bODs [13].
Footnote 1: If the G1 garbage collector is used, we recommend running it with \(\texttt{-XX:G1ReservePercent} = (1 - \texttt{heap-eviction-threshold})\); this defaults to 10%.
Footnote 2: https://hpi.de/naumann/projects/repeatability/algorithms/distod.html.
Footnote 3: https://git.io/fastodbid (Accessed 2020-08-26).
Footnote 4: https://git.io/dist-fastodbid (Accessed 2020-08-26).
Footnote 5: https://docs.oracle.com/javase/7/docs/technotes/guides/vm/performance-enhancements-7.html#compressedOop (Accessed 2020-02-14).
Abadi, D., Madden, S., Ferreira, M.: Integrating compression and execution in column-oriented database systems. In: Proceedings of the International Conference on Management of Data (SIGMOD), pp. 671–682 (2006)
Abedjan, Z., Golab, L., Naumann, F., Papenbrock, T.: Data Profiling. Morgan & Claypool Publishers. ISBN: 978-1-68173-447-7 (2018)
Bayer, R., McCreight, E.: Organization and maintenance of large ordered indices. In: Proceedings of the Workshop on Data Description (SIGFIDET, Now SIGMOD), pp. 107–141 (1970)
Bleifuß, T., Kruse, S., Naumann, F.: Efficient denial constraint discovery with hydra. Proc. VLDB Endow. 11(3), 311–323 (2017)
Consonni, C., Montresor, A., Sottovia, P., Velegrakis, Y.: Discovering order dependencies through order compatibility. In: Proceedings of the International Conference on Extending Database Technology (EDBT), pp. 409–420 (2019)
Dong, J., Hull, R.: Applying approximate order dependency to reduce indexing space. In: Proceedings of the International Conference on Management of Data (SIGMOD), pp. 119–127 (1982)
Ginsburg, S., Hull, R.: Order dependency in the relational model. Theor. Comput. Sci. 26(1), 149–195 (1983)
Hewitt, C., Bishop, P., Steiger, R.: A universal modular ACTOR formalism for artificial intelligence. In: Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), pp. 235–245 (1973)
Huhtala, Y., Kärkkäinen, J., Porkka, P., Toivonen, H.: Efficient discovery of functional and approximate dependencies using partitions. In: Proceedings of the International Conference on Data Engineering (ICDE), pp. 392–401 (1998)
Huhtala, Y., Kärkkäinen, J., Porkka, P., Toivonen, H.: Tane: an efficient algorithm for discovering functional and approximate dependencies. Comput. J. 42(2), 100–111 (1999)
Ilyas, I.F., Chu, X.: Trends in cleaning relational data: consistency and deduplication. Found. Trends Databases 5(4), 281–393 (2015)
Jin, Y., Zhu, L., Tan, Z.: Efficient bidirectional order dependency discovery. In: Proceedings of the International Conference on Data Engineering (ICDE), pp. 61–73 (2020)
Karegar, R., Godfrey, P., Golab, L., Kargar, M., Srivastava, D., Szlichta, J.: Efficient discovery of approximate order dependencies. arXiv: 2101.02174 [cs] (2021)
Kruse, S., Papenbrock, T., Naumann, F.: Scaling out the discovery of inclusion dependencies. In: Proceedings of the Conference on Datenbanksysteme in Business, Technologie Und Web (BTW), pp. 445–454 (2015)
Langer, P., Naumann, F.: Efficient order dependency detection. VLDB J 25(2), 223–241 (2016)
Liu, J., Li, J., Liu, C., Chen, Y.: Discover dependencies from data—a review. IEEE Trans. Knowl. Data Eng. 24(2), 251–264 (2012)
McSherry, F., Isard, M., Murray, D.G.: Scalability! But at what cost? In: Proceedings of the USENIX Conference on Hot Topics in Operating Systems (HotOS), p. 14 (2015)
Papenbrock, T., Naumann, F.: A hybrid approach for efficient unique column combination discovery. In: Proceedings of the Conference Datenbanksysteme in Business, Technologie Und Web (BTW), pp. 195–204 (2017)
Papenbrock, T., Naumann, F.: A hybrid approach to functional dependency discovery. In: Proceedings of the International Conference on Management of Data (SIGMOD), pp. 821–833 (2016)
Papenbrock, T., Ehrlich, J., Marten, J., Neubert, T., Rudolph, J.-P.: Functional dependency discovery: an experimental evaluation of seven algorithms. Proc. VLDB Endow. 8(10), 12 (2015)
Saxena, H., Golab, L., Ilyas, I.F.: Distributed discovery of functional dependencies. In: Proceedings of the International Conference on Data Engineering (ICDE), pp. 1590–1593 (2019)
Saxena, H., Golab, L., Ilyas, I.F.: Distributed implementations of dependency discovery algorithms. Proc. VLDB Endow. 12(11), 1624–1636 (2019)
Selinger, P.G., Astrahan, M.M., Chamberlin, D.D., Lorie, R.A., Price, T.G.: Access path selection in a relational database management system. In: Proceedings of the International Conference on Management of Data (SIGMOD), pp. 23–34 (1979)
Lightbend Inc.: Akka: build powerful reactive, concurrent, and distributed applications more easily. Version 2.6.3 (2020)
Szlichta, J., Godfrey, P., Golab, L., Kargar, M., Srivastava, D.: Effective and complete discovery of bidirectional order dependencies via set-based axioms. VLDB J. 27(4), 573–591 (2018)
Szlichta, J., Godfrey, P., Golab, L., Kargar, M., Srivastava, D.: Effective and complete discovery of order dependencies via set-based axiomatization. Proc. VLDB Endow. 10(7), 721–732 (2017)
Szlichta, J., Godfrey, P., Golab, L., Kargar, M., Srivastava, D.: Erratum for discovering order dependencies through order compatibility (EDBT 2019). In: Proceedings of the International Conference on Extending Database Technology (EDBT), pp. 659–663 (2020)
Szlichta, J., Godfrey, P., Gryz, J., Zuzarte, C.: Expressiveness and complexity of order dependencies. Proc. VLDB Endow. 6(14), 1858–1869 (2013)
Szlichta, J., Godfrey, P., Gryz, J.: Fundamentals of order dependencies. Proc. VLDB Endow. 5(11), 1220–1231 (2012)
The Apache Software Foundation: Apache Flink - Stateful Computations over Data Streams (2019). Retrieved 08/03/2020 from https://flink.apache.org/
The Apache Software Foundation: Apache Spark—unified analytics engine for big data (2018). Retrieved 08/03/2020 from https://spark.apache.org/
Vernon, V.: Reactive Messaging Patterns with the Actor Model: Applications and Integration in Scala and Akka. Addison-Wesley Professional. ISBN: 978-0-13-384690-4 (2015)
Zhu, G., Wang, Q., Tang, Q., Rong, G., Yuan, C., Huang, Y.: Efficient and Scalable Functional Dependency Discovery on Distributed Data-Parallel Platforms. IEEE Trans. Parallel Distrib. Syst. 30(12), 2663–2676 (2019)
We sincerely thank Lukasz Golab and Jarek Szlichta for their support and advice, all authors for generously publishing or sharing their code, and the reviewers for the numerous constructive comments.
Hasso Plattner Institute, University of Potsdam, Prof.-Dr.-Helmert-Str. 2-3, 14482, Potsdam, Germany
Sebastian Schmidl & Thorsten Papenbrock
Correspondence to Sebastian Schmidl.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Schmidl, S., Papenbrock, T. Efficient distributed discovery of bidirectional order dependencies. The VLDB Journal 31, 49–74 (2022). https://doi.org/10.1007/s00778-021-00683-4
Revised: 16 April 2021
Bidirectional order dependencies
Actor programming
Data profiling
Dependency discovery
Special Relativity - two beams of light in opposite directions
I just want to first say that I'm aware I am asking a question due to my own confusion and ignorance and not because of anything to do with special relativity. I hope that's alright.
What I'm confused about is whether two beams of light moving in opposite directions away from each other have a relative speed of $c$ or $2c$.
The thing is, we can always say it's $c$, but then when we look at the relative distance travelled for the time elapsed, the result is $2c$.
Can anyone explain what is going on?
Response to Alfred Centauri 22/08/2016: I question your reasoning for two reasons. Firstly, two photons moving in opposite directions was one of Einstein's examples of the counter-intuitive implications. So what you're saying wasn't around then, and Einstein himself was misconceived. In fact, you can find instances of the opposite-direction light beam example as recently as 10 years ago. Phillip Green in his popular science book uses that very example in his introduction. So the question is whether this is a formal adjustment to SR, backed by a publication and a reasonable consensus. Is it?
The second reason has to do with your reasoning. You say it is not a legitimate measurement because there is no inertial frame in which one or the other photon is at rest. That at-rest criterion, as you define the constraints, is equally applicable to two beams coming toward each other. In fact, almost every problem setup has no way to assume that one of the two frames is stationary. The way it's handled, I believe, is mathematically by the simple addition of the equal and opposite motion. This is also implicit in the Lorentz transformations, I think.
The reason this can be done, I think, is that all inertial frames are assumed to be equivalent... they are part of a global reference frame.
A further matter derives from the fact that the relative motion of two frames can be represented as tangential and radial components. If what you are saying is correct, then one of the two components cannot be measured in almost all cases.
I definitely want to make clear I am a novice in the theory, and you probably are not. It's possible this is all misconception, except for the historic part of my response which can easily be confirmed. Please do let me know if you can as I am trying to learn and need my mistakes corrected, especially the long running ones.
special-relativity speed-of-light velocity inertial-frames observers
Lucy Meadow
What I'm confused about is whether two beams of light moving in opposite directions away from each other, have a relative speed of C, or 2C.
The thing is, we can always say it's C
In fact, we can't say that at all.
The two beams of light (or better, two oppositely directed photons) do not have a relative speed at all for the simple fact that there is no inertial coordinate system in which either photon is at rest.
(from here on, by coordinate system, I mean inertial coordinate system)
The distance between the two photons increases at the rate of $2c$ but this isn't the speed of an object but, rather, the rate of increase in distance between two objects as observed in a coordinate system; neither object has speed greater than $c$ in any coordinate system.
The relative speed of two objects is the speed of one of the objects as observed in the coordinate system in which the other object is at rest. Since such a coordinate system does not exist for either photon, the relative speed of two photons isn't defined.
Yes, according to the relativistic velocity addition formula, one might think that the relative velocity is $c$.
$$u' = \frac{u + v}{1 + \frac{uv}{c^2}} = \frac{2c}{1 + \frac{c^2}{c^2}} = c$$
However, this is a conceptual error. The (1D) velocity $u$ is the velocity of an object as observed in the unprimed coordinate system while the velocity $v$ is the velocity of the origin of the unprimed coordinate system in the primed coordinate system.
But there is no inertial coordinate system with speed $c$ in another coordinate system so we cannot validly set $v = c$.
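For contrast, the same formula applied to two sub-luminal speeds always yields a result below $c$; the value $0.99c$ below is chosen purely for illustration:

$$u' = \frac{0.99c + 0.99c}{1 + \frac{(0.99c)(0.99c)}{c^2}} = \frac{1.98c}{1.9801} \approx 0.99995c$$

So two objects receding from a common origin at $0.99c$ each would each measure the other receding at roughly $0.99995c$, still less than $c$; the edge case above arises only because neither photon defines a valid rest frame.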
thanks for this detailed answer. Between you and the others I think I've been straightened out. Hey thanks everyone!! – Lucy Meadow Aug 19 '15 at 16:21
hi Alfred Centauri you said this "The relative speed of two objects is the speed of one of the objects as observed in the coordinate system in which the other object is at rest. Since such a coordinate system does not exist for either photon, the relative speed of two photons isn't defined." ---->>>> isn't the same true in the case of two photons moving directly toward each other? Yet that's one of the scenarios typically addressed in a 'first introduction' for complete novices? Or is that wrong? – Lucy Meadow Sep 5 '15 at 23:15
What a cop-out. You could have considered two objects moving away at any speed you like, say $v$ and $u$ then looked at what happens when $v+u>c$ - this is in the spirit of OP's Q. – Alec Teal Dec 21 '18 at 13:25
I think this answer is profoundly unsatisfying. It just throws away the question because it hits an edge case. There's a real conceptual question that could have been answered here, if the OP's $c$ was replaced with, say, $0.99 c$. – knzhou Mar 9 '19 at 22:55
@knzhou, it's been a while since I posted this answer but doesn't the OP specifically ask about two oppositely directed beams of light? Why would I consider changing the speed of one of the beams to 0.99 $c$? – Alfred Centauri Mar 10 '19 at 4:10
In any observer's frame, a light ray propagates with $c$. Thus for any observer the distance between two light pulses which propagate from one point in opposite directions grows as $2c \cdot t$. There is no reason to believe that the relative distance grows as $c \cdot t$. One could mistakenly arrive at this answer if one thinks one could go into a photon's rest frame, which is not possible.
That the "relative velocity" is larger than $c$ is not a problem, as you cannot use this setup to send information faster than light. The appearance of "velocities" higher than $c$ happens easily. Assume you send a continuous ray of light at a certain angle to the sky. If you now change the angle with an angular velocity $\dot \theta$, the spot will move with $v=\dot \theta\cdot d$ at a distance $d$ (say we have a planet with a solid surface at that distance). If the distance is very large, this can easily become larger than $c$.
faddeev
faddeev....the reason I think what I think is because I'm confused :() I don't really understand your answer. Could you make it a bit longer? – Lucy Meadow Aug 17 '15 at 21:49
But if the relative speed is 2c, then the speed limit isn't C ????? – Lucy Meadow Aug 17 '15 at 21:50
I hope the answer is a bit clearer now. :) – faddeev Aug 17 '15 at 22:08
@LucyMeadow, relative speed isn't what you evidently think it is. See my answer below or this link: en.wikipedia.org/wiki/Relative_velocity – Alfred Centauri Aug 17 '15 at 23:40
+1, despite there being 12 answers, this is essentially the only one that actually answers the question, correctly. – knzhou Mar 9 '19 at 23:27
Let's attack the confusion from the other side.
People say light always travels at speed $c.$
Light always travels at speed $c$ relative to an inertially moving observer. (Or to an inertial frame that is momentarily moving with your observer.)
People say that nothing can go faster than $c$
Nothing can go faster than speed $c$ relative to an inertially moving observer. (Or to an inertial frame that is momentarily moving with your observer.)
The distance between light moving in two directions grows at speed $c$
Each beam of light is moving at speed $c$ relative to the person on the ground. So the first rule is fine. And if someone could go at speed $c$ then we might have a problem, but no one ever claimed observers can move at speed $c$ relative to another observer.
So we can actually conclude that inertial frames can't move at speed $c$ relative to an inertial frame and so therefore neither can observers.
Your result demonstrates that not only is light speed a maximum speed for anything, it is a speed that observers cannot achieve; they must always go less than $c.$
Timaeus
sure but how is the inertial frame usually established? I thought it was by applying an equal opposite motion to one of them, with the effect of loading it onto the other. Also see my question edit, where I mention the two photons moving in opposite directions was one of Einstein's own examples. – Lucy Meadow Aug 22 '16 at 12:30
@LucyMeadow It is either established as a frame in which a massive particle experiencing no force has a constant velocity. Or as a frame in which the laws of physics hold. Another different approach is that you postulate mathematical frames, and use them to make mathematical models and compare the testable predictions of the mathematical models to observations. To be honest, I'm not even sure what you are confused about or where you have a problem. – Timaeus Aug 22 '16 at 20:59
Lucy, might I add, why not just ask this question with a beam of light moving at speed c in one direction, and a train with an observer moving with 0.5c in the opposite direction? Why not ask, wouldn't the observer observe 1.5c? – Žarko Tomičić Nov 11 '17 at 7:48
I know I'm late to the party and I'll probably just confuse things further, but it seemed to me that the other answers glossed over some important points.
The difficulty for the specific case given is that, from the perspective of the photon or some impossible hypothetical ship traveling along with it, time has stopped and lengths are infinitely contracted in the direction of travel. The concept of distance and relative velocity between the two photons is often written off as completely nonsensical but the other interpretation is that there is no distance or time separating them. Not from their perspective, anyway.
Massive particles moving away from a common origin at greater than 0.5c but less than c would sidestep that particular singularity. Length contraction and time dilation still account for why the apparent relative velocity never exceeds c from their perspectives.
It's worth remembering that in GR, it is space itself that's contracting along the path. It's not just that the particle or ship moving at relativistic speeds is being flattened relative to some hypothetical static euclidean space.
People tend to shy away from thinking about this problem but it has real world consequences in quantum entanglement. It's directly applicable with entangled photons but with massive entangled particles moving away from each other slower than c, you have to remember they are coupled by bosons moving at c. That means there's still an infinitely contracted path between them even though that's not the path the two massive particles have taken. Einstein may not have liked "spooky action at a distance" but it's been periodically pointed out in various published papers over the last century that it should actually be expected in GR.
Jason S. ClaryJason S. Clary
An experimentalist's answer:
We have the pi0. The pi0 can be at rest and when it decays it goes into two gammas, conserving energy and momentum. In the laboratory frame, here is a pi0, a laboratory case for your question:
This comes from a famous event, the omega- event which was predicted before discovery.
All the analysis in particle physics is based on Lorentz transformations. If we transform these two photons to the center of mass of the pi0, each is moving with velocity c away from the center of mass. Since there is no other fixed reference, as the photons have no rest frame, there is no other velocity that can be defined for the two photons from the pi0 decay except in the center of mass.
From this I conclude that, at the photon level, a stable point in space (versus which the velocity of each photon can be determined) exists only if you find the center of mass of the four-vector which is the sum of the two photon four-vectors. Photons cannot be checked against each other because they do not have a center of mass frame.
Light is built up by zillions of photons so the argument should mathematically hold for electromagnetic waves in general as Lorentz transformations are rigorous.
anna v
The watertight way to describe things in special relativity is to describe your system, ask a question, and figure out whether that question has a physical answer or not.
Describing the system
You can describe the time/position of one light ray with a line $(t, ct)$. The first number is measured in seconds and is the time, and the second coordinate is measured in meters and is the position, say to your left or right. $c$ here is the speed of light.
At the same time, a light ray is moving in the opposite direction. Its position can be described as $(t,-ct)$.
Asking a question of the system
In your case, it's true that the separation between the two rays of light, at time $t$, is equal to $ct-(-ct)=2ct$. The distance between them increases at a rate of $2c$.
Figuring out whether that is a physical statement or not
You could make a mistake and say, "that means that in some reference frame, I will see light moving away from me at twice the speed of light." That's incorrect! In classical mechanics, and in regular Newtonian every-day life, the question of "what is the relative speed of these two objects" and "can I go into that reference frame and actually observe that relative speed" are equivalent. In special relativity, they are not!
You'll always see both rays of light moving away from you at the speed of light -- no more, no less.
"You'll always see both rays of light moving away from you at the speed of light -- no more, no less." ----> Hi NeuroFuzzy, I see that's correct. But what does the photon see....how fast is the other photon going. – Lucy Meadow Sep 5 '15 at 23:05
@LucyMeadow What you have to ask yourself is: what physical question are you asking? What experiment will you set up? No massive object (like a camera) can travel at the speed of light, so you can't travel along a light wave and take a snapshot. There is no frame in which light is still, so it doesn't make sense to ask "what does the photon see". – user12029 Sep 6 '15 at 1:03
Hi NeuroFuzzy....I understand the point you are making and see it is correct. But where I'm confused is what the difference is, between the two divergent photons, and the opposite scenario of two convergent photons. Is one OK to apply relativity while the other not? It's just the two convergent photon scenario is - I thought anyway - one of the typical illustrations offered to absolute beginners. Or is that wrong. I'm sorry for being so obtuse. – Lucy Meadow Sep 15 '15 at 12:29
@LucyMeadow no problem! Relativity applies in both cases. In both cases the coordinate distance (position one minus position two) changes at a rate twice the speed of light. However, in both cases, there is no frame of reference in which a single photon is travelling faster than the speed of light. (The photon has no frame of reference, so you can't use that.) – user12029 Sep 15 '15 at 15:33
The relative speed of 2 objects is not itself the speed of a 'thing'; therefore, the speed of light principle is not violated by saying that 2 photons moving in opposite directions move away from each other at a relative speed of 2C.
And if planet earth were to move toward light that has not yet reached us from a distant star, the light and the earth would be heading toward each other at a speed greater than C. But that is merely a relative speed: a relative speed is not a speed of a thing.
Lawrence
Suppose points A B and C are moving relative to one another. Consider observation point A to be relatively stationary with B moving away from A at close to the speed of light. Next suppose that C is moving in that same direction away from B at close to the speed of light [A >> B >> C]. An observer on A can see B and measure that B is moving away at close to the speed of light, but since C is moving away from A at well over the speed of light, no light from C will ever reach A, so from observation point A, C is invisible and seems to not exist.
James Mathison
Why wouldn't A be able to observe C? – Kyle Kanos Jan 24 '19 at 10:59
No, C is not moving away from A at well over the speed of light. See the relativistic velocity addition formula, given in Alfred Centauri's answer. – PM 2Ring Sep 9 '19 at 6:47
One could always conjure up a velocity that's much higher than even 2c! So, right now, I have two envelopes in front of me. Say that on the white envelope, a particle is moving relative to it left at c, or -c. On the yellow envelope, we have +c. So, the velocity of particle(yellow) relative to particle(white) is c-(-c), which is 2c.
Now, there's a reason I've conjured up the envelopes. What if, now, we started moving the envelopes away from one another? Assume the envelopes are simplified to being one-dimensional. Now we have a two-dimensional problem if we want to objectively quantify the velocity between these two particles in this two-dimensional plane. So, we add up the velocities in quadrature.
$v = \sqrt{v_y^2 + v_x^2}$
So, even if we say (theoretically) these envelopes can move apart from objective space at speed c, what if these envelopes were moving away from each other, like before, at $v = c - (-c) = 2c$? Then, of course, you might think our limit is
$v = \sqrt{(2c)^2+(2c)^2} = \sqrt{8c^2} = \pm 2.828c$
Let's add a third dimension, where the envelopes are moving from each other horizontally, simultaneously. So, the magnitude of v would be, using the same rules as before,
$v = \sqrt{(2c)^2+(2c)^2+(2c)^2} = \sqrt{12c^2} = \pm 3.46c$
Well, there's no reason to stop. sqrt(x) goes to infinity, and multiplying c to sqrt(x) doesn't stop anything - rather, it simply increases the speed the vector trends towards infinity as we keep adding dimensions in quadrature!
I think that the relevance of c is that you can move faster than c from a coordinate system, but not THE coordinate system. At the end of the day, I think we're calculating the magnitudes of vectors that have absolutely nothing to do with c. It's about as meaningful as saying, the length of your genitalia is measured from tip to crotch. What if we add the distance from crotch to tip to tip to crotch?
Jacob Albrecht
Deep impact? Is mercury in dab (Limanda limanda) a marker for dumped munition? Results from munition dump site Kolberger Heide (Baltic Sea)
Ulrike Kammann (ORCID: 0000-0002-3738-148X), Marc-Oliver Aust, Maike Siegmund, Nicole Schmidt, Katharina Straumer & Thomas Lang
Environmental Monitoring and Assessment volume 193, Article number: 788 (2021)
Dumped munitions contain various harmful substances which can affect marine biota such as fish. One of them is mercury (Hg), a component of the common explosive primer Hg fulminate. There is still a lack of knowledge on whether dumped munitions affect Hg concentrations in the Baltic Sea environment. This study aims to answer the question whether dab caught at the dump site Kolberger Heide show elevated Hg concentrations attributable to munition sources, and whether Hg in fish is a usable marker for munition exposure. To this end, a total of 251 individual dab (Limanda limanda) were analysed, including 99 fish from the dump site. In fish from the Kolberger Heide, no elevated Hg concentrations were found compared to reference sites once age-dependent bioaccumulation of mercury was considered. We therefore conclude that Hg in fish is not a suitable indicator of exposure to munition dumping, e.g. in the frame of possible future monitoring studies, as Hg exposure originating from dumped munition is only a small contributor to the overall Hg exposure of fish.
Dumped munition in the sea is a global problem, and its management is a challenge for the future. After World War II, about 300,000 t of conventional munitions were dumped in German coastal waters of the Baltic Sea within eight official munition dump sites, often close to the coast (Beck et al., 2018). One of these sites is Kolberger Heide (KH), an area in Kiel Bay about 2 km from the beach and from the shipping route, and at the same time a historical disposal site for German and British ordnance from World War II. Approximately 30,000 t of munition, including torpedoes, moored mines, ground mines, aerial bombs, and depth charges, were originally dumped in the area (Kampmeier et al., 2020). More than 6000 mines with a combined weight of over 1600 t represent the biggest part of the dumped munition in KH (Kampmeier et al., 2020). In recent investigations, Kampmeier et al. (2020) discovered more than 1000 objects in KH using repeated high-resolution multibeam and underwater video surveys.
Munitions contain various harmful substances which can affect marine biota such as fish. One of them is mercury (Hg), which is included either as elemental Hg or as Hg fulminate (a common explosive primer) and thus may act as a local source of Hg in the dumping areas (Beldowski et al., 2019). It has been shown that explosive material released from dumped munition in KH can be found in the direct vicinity of the munition in water and in different biota (Beck et al., 2018, 2019; Gledhill et al., 2019; Maser & Strehse, 2021; Strehse & Maser, 2020; Strehse et al., 2017) as well as in fish (Koske et al., 2020b). Besides explosives, compounds related to chemical warfare agents can also leak from dumped munition and have already been detected in fish from the Baltic Sea (Niemikoski et al., 2020). It is therefore likely that, besides explosives and chemical warfare agents, Hg from the munition is also released into the environment. Beldowski et al. (2019) confirmed this by detecting increased concentrations of Hg in sediments from KH. The same authors detected Hg fulminate (Hg(CNO)2) in sediments, confirming that KH is a local point source of Hg originating from dumped munition. Uścinowicz et al. (2011) also observed high Hg concentrations in Baltic Sea sediments from specific munitions dumpsites. On the other hand, Kampmeier et al. (2020) stated that mainly unfused munitions, which are not likely to contain Hg fulminate, were dumped in KH. But at least part of the munition must still have been armed, as explosion accidents happened during the dumping work (Kampmeier et al., 2020). Therefore, it can be questioned whether dumped munition caused marked Hg contamination at KH.
Hg exists as inorganic Hg and as organic Hg (primarily methylmercury); it is ubiquitous in the marine environment and at the same time is considered one of the most toxic elements on the planet. Hg is released from natural and anthropogenic sources (Clarkson & Magos, 2006). Direct atmospheric deposition of Hg is regarded as a major source of contamination of the seas (Driscoll et al., 2013), and half of the emitted anthropogenic Hg has accumulated in the oceans and marine sediments (Zhang et al., 2015). In the environment, Hg can induce a variety of adverse effects in fish at physiological, histological, biochemical, enzymatic, and genetic levels (Morcillo et al., 2017), partly at environmentally realistic concentrations. Lang et al. (2017) reported higher disease prevalences in fish with enhanced Hg levels in the North Sea. Dab (Limanda limanda) are suitable organisms for environmental screening due to their benthic lifestyle; they are geographically widespread and at the same time considered to be relatively stationary. The species has been used as a bioindicator in many studies, e.g. on heavy metals (Lang et al., 2017) or organic contaminants (Kammann, 2007; Kammann et al., 2017).
The potential environmental threat related to dumped munition has gained attention in international monitoring: the European Marine Strategy Framework Directive (MSFD) aims at establishing a good environmental status of European marine waters by 2020. Under Descriptor 8, the MSFD names munition disposal sites as a source of contamination and pollution (Law et al., 2010). Monitoring is the prerequisite for predicting contamination rates and the accumulation of toxic substances in biota, and thus for risk assessment, which may lead to remediation of munition dumpsites in the future.
Munition might be a relevant source of Hg for bottom-dwelling fish like the dab. However, there is a knowledge gap as to whether Hg from dumped munition enhances the exposure of fish and thereby might affect them, possibly together with other contaminants from dumped munition. Therefore, the present study aims to answer the following questions:
Are dab from the munition dump site KH more highly contaminated with Hg than dab from reference areas?
Can the exposure of individual fish to munition (indicated by explosives in bile) be related to Hg concentration at munition dump site?
Is Hg contamination of fish a suitable marker for munition exposure?
All study sites are located in the western Baltic Sea. KH is a 1260 ha restricted munition dumpsite containing approximately 30,000 t of dumped conventional munition (Gledhill et al., 2019). Two reference sites were used for comparison: Stoller Ground (SG), located 10 km west of KH, and B01, about 25 km northeast of KH. According to the AMUCAD database (North.io GmbH, 2019), no actual munition contamination is documented at SG. B01 is located close to the Fehmarn Belt, the latter being contaminated by munition as hundreds of ground mines were dropped there during the war (Böttcher et al., 2011). The locations of the sampling sites are shown in Fig. 1; geographical coordinates are given in Table 2.
Sampling sites of dab in the western Baltic Sea: Kolberger Heide dumpsite (KH) and reference sites, B01, and Stoller Ground (SG) close to the German coastline (Sources of the basemap: Esri, Garmin, USGS, NPS)
Dab (Limanda limanda) were collected during two cruises of RV Clupea (CLU314 and CLU326) in August 2017 and 2018, respectively, by gillnet fishery at the edges of KH and by bottom trawling in SG (TV-300 bottom trawl, 15–20 min towing time at 3–4 knots). Additionally, dab were collected in B01 during one cruise of RV Walther Herwig III (WH408) in September 2017 by bottom trawling (140 ft. bottom trawl, 60 min towing time at 3–4 knots). Detailed cruise information is provided in Table 2. Live dab were randomly sorted from the catches and kept alive in tanks with running seawater at ambient water temperature prior to dissection. Fish were weighed, the total length was measured, the sex was visually determined, and the animals were anaesthetized by a blow on the head, followed by decapitation. The skin was partly removed, and a portion of muscle fillet of each individual fish was collected with a ceramic knife, transferred to precleaned plastic tubes (rinsed with 6.5% nitric acid and ultrapure water), and stored frozen at −20 °C until further processing. Subsequent analyses were carried out under clean lab conditions of ISO class 7.
Condition factor and age determination
Biometric data were used to determine Fulton's condition factor (CF = weight [g] * 100 / length [cm]^3) as an indicator of the general fish health status. Otoliths were removed for subsequent age determination according to Maier (1906) and Bohl (1957). All biometric data characterizing the fish under analysis are presented in Table 2.
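As a quick illustration of the formula, the condition factor can be computed directly from length and weight; a minimal Python sketch with hypothetical values (not measurements from this study):

```python
def fulton_cf(weight_g, length_cm):
    """Fulton's condition factor: CF = weight [g] * 100 / length [cm]^3."""
    return weight_g * 100.0 / length_cm**3

# Hypothetical dab of 24.5 cm and 150 g: CF close to 1, as in Table 2
print(round(fulton_cf(weight_g=150.0, length_cm=24.5), 2))  # ~1.02
```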
Nitric acid (69%, ultrapure quality) and certified Hg standard solutions in 0.5 M nitric acid were purchased from Carl Roth, Karlsruhe, Germany. Ultrapure water was obtained from a Purelab Flex 3 device (Elga Veolia; High Wycombe, UK).
Hg measurement and quality assurance
For sample preparation, portions of muscle samples were freeze-dried using a lyophilizer (LD 1–2, Christ, Osterrode, Germany) and subsequently homogenized using an agate mortar or an Ultra-Turrax tube drive dispenser (IKA, Staufen, Germany) equipped with glass grinders to obtain a dry sample powder suitable for Hg analysis. Total Hg was determined by atomic absorption spectrometry using a Direct Mercury Analyzer (DMA-80, MLS, Leutkirchen, Germany). Known amounts (20–30 mg) of each sample were weighed into the boat containers (precleaned with nitric acid) of the DMA-80. Direct analysis for total Hg content was performed using a 10-level calibration with standards in 0.5 M nitric acid. The accuracy of the procedure was determined by analysis of Certified Reference Material (DORM-3 and DORM-4, both fish protein homogenates) obtained from the National Research Council (NRC) in Canada, which was taken through the same analytical procedure as the samples. Details on the reference materials are given in Table 1. All samples were analysed in triplicate. External quality assurance was done by successful participation in laboratory proficiency tests (z-score 0.7) conducted by QUASIMEME (www.wepal.nl) designed for marine environmental analytics. The limit of detection (LD) and the limit of quantification (LQ) were calculated from a standard curve according to DIN 32645 (DIN, 1994) with a confidence level of 99%. Considering the sample preparation, an LD of 0.080 µg/kg wet weight (w. w.) and an LQ of 0.230 µg/kg w. w. were determined for Hg. No values below these limits were found in any sample under investigation. Recovery of the method was 100.3% and precision was 8.20%. All analytical results are presented in Table 2.
Table 1 Values of Hg in Certified Reference Materials DORM-3 and DORM-4. Measured means and standard deviations (SD) obtained accompanying sample analysis
Table 2 Cruise information, biometric and contaminant data of dab caught at sites B01, Stoller Ground (SG) and Kolberger Heide (KH). Location is given as latitude and longitude for a rectangle. Total length [cm], condition factor (CF), age [y], mercury (Hg) [µg/kg] in fish muscle related to wet weight (w. w.) and explosive compound 4-aminodinitrotoluene (4-ADNT) [ng/mL] are expressed as mean values with minima and maxima in brackets. Number of individuals per sex is given. *The limit of detection (LOD) of 4-ADNT is 2.8 ng/ml. 4-ADNT results and LOD are taken from Koske et al. (2020b).
Statistical analyses were carried out using Statistica Version 12.5 (Statsoft Europe, Hamburg, Germany). The correlation between the concentration of Hg in muscle and the age of the fish, as well as between 4-ADNT in bile and Hg in muscle of fish from KH, was tested using linear regression as well as Spearman rank correlation. The principal component analysis (PCA) was performed using varimax rotation. An ANOVA (95% confidence level, 0.05 significance threshold) was conducted to investigate the influence of age and site on Hg concentrations.
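For readers who want to reproduce this kind of workflow, a minimal Python sketch is given below. It is not the original analysis (which was carried out in Statistica), the data generated here are purely hypothetical, and scikit-learn's PCA does not apply the varimax rotation used in the study:

```python
import numpy as np
import pandas as pd
from scipy import stats
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical data set with one row per fish; not the published data.
rng = np.random.default_rng(0)
n = 100
age = rng.integers(1, 7, n)
df = pd.DataFrame({
    "age": age,
    "hg": -17.5 + 16.2 * age + rng.normal(0, 15, n),  # toy bioaccumulation pattern
    "adnt": rng.exponential(3.0, n),
    "cf": rng.normal(1.0, 0.05, n),
    "site": rng.integers(0, 3, n),
})

# Linear regression and Spearman rank correlation of Hg against age
slope, intercept, r, p, se = stats.linregress(df["age"], df["hg"])
rho, p_spearman = stats.spearmanr(df["age"], df["hg"])

# One-way ANOVA: does Hg differ between sites?
groups = [g["hg"].values for _, g in df.groupby("site")]
f_stat, p_anova = stats.f_oneway(*groups)

# PCA on standardised variables (varimax rotation would need an extra package)
X = StandardScaler().fit_transform(df[["hg", "adnt", "cf", "age", "site"]])
pca = PCA(n_components=2).fit(X)

print(slope, r, rho, p_anova, pca.explained_variance_ratio_)
```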
Fish from three study sites in the western Baltic Sea (Fig. 1) were included in this study. A total of 251 dab were examined and individual muscle samples were analysed for Hg. Comparing biometric data, the dab from the munition dumpsite KH were the largest and oldest fish, while fish from the reference sites SG and B01 were smaller and younger (Table 2). At every study site, more female than male dab were caught. The mean values of the condition factor (CF) at all study sites were in the range of 0.98 to 1.02 (Table 2) with no significant differences between the sites. Therefore, CF is not likely to mirror any negative influence at a single site.
Samples from the munition dump site KH and from two reference sites in the vicinity of Kiel Bight (B01, SG) were analysed for total Hg. All samples exhibited Hg concentrations above the LOQ. The maximum concentration of Hg measured in a single sample was 173.90 µg/kg w. w. in a fish from KH. The highest mean concentration of Hg, 66.74 µg/kg w. w., was also found in KH. The minimum single Hg concentration of 6.64 µg/kg w. w. was determined in a sample from SG, and the lowest mean concentration of 39.34 µg/kg w. w. was likewise calculated for SG. Table 2 also lists the Hg concentrations together with the concentrations of the explosive compound 4-aminodinitrotoluene (4-ADNT) measured in the bile of the same fish, the latter already published in Koske et al. (2020b).
Figure 2 shows the relation between age of the fish and the concentrations of Hg in the three sites under investigation:
Relation between Hg concentration [µg/kg ww] in muscle and age [years] of the dabs separated by sampling site Kolberger Heide (KH, blue), Stoller Ground (SG, red) and B01 (green) in the western Baltic Sea. Given are mean correlation functions (linear, solid lines) and 95% prognosis bands (dashed lines) for each site
$$Hg\,[\mu g/kg] = -26.397 + 18.394 \cdot age\,[y] \quad (r=0.7628) \quad \text{for site KH,} \tag{1}$$

$$Hg\,[\mu g/kg] = -7.415 + 12.942 \cdot age\,[y] \quad (r=0.7408) \quad \text{for site SG, and} \tag{2}$$

$$Hg\,[\mu g/kg] = -6.118 + 13.524 \cdot age\,[y] \quad (r=0.7157) \quad \text{for site B01.} \tag{3}$$
Hg bioaccumulation in dab for all sites under investigation is described by:
$$Hg\,[\mu g/kg] = -17.503 + 16.217 \cdot age\,[y] \quad (r=0.7902). \tag{4}$$
Correlation analysis between Hg in muscle tissue and 4-ADNT in bile of the same fish at site KH did not reveal any significant relationship, as expressed by a non-significant linear correlation (p > 0.05) and a low correlation coefficient of r = 0.189. Spearman rank correlation led to comparable results (results not shown).
ANOVA results on possible site and/or age effects on Hg revealed a significant relation (p < 0.001) between age and Hg but no significant relation between site and Hg.
The PCA in Fig. 3 explains 75.52% of the variance with the first two factors. Factor 1 explains 53.34% of the variance and refers mainly to age and Hg. Factor 2 explains 22.18% of the total variance and is dominated by 4-ADNT and CF. However, the variable site shows weaker relations to both of the first two factors. An overview on factor loadings is given in Table 3.
Principal component analysis of dab from three sites in the western Baltic Sea, variable projection on factors 1 and 2: Hg mercury in muscle tissue [µg/kg ww]; 4-ADNT 4-aminodinitrotoluene in bile [ng/ml] according to Koske et al. (2020b); site = KH, SG or B01 (compare Fig. 1); age [years]; CF condition factor
Table 3 Loadings for the first two factors (F1, F2 with variance levels) of a principal component analysis of dab from three sites in the western Baltic Sea. Hg, mercury in muscle tissue [µg/kg fresh weight]; 4-ADNT, 4-aminodinitrotoluene in bile [ng/ml] according to Koske et al. (2020b); site = KH, SG or B01 (compare Fig. 1); age [years]; CF, condition factor. Factor loadings above/below ± 0.5 are marked in bold
The present study aims to analyse Hg concentrations in muscle tissue of dab from a munition dump site compared to reference sites in the western Baltic Sea. It also aims to reveal possible relationships between Hg concentrations and exposure to munition, as well as to consider biological parameters such as the age of the fish. This contributes to the overall question of whether Hg in fish can act as a marker for munition exposure in future monitoring studies. The range of Hg contamination in dab muscle tissue reported in Table 2 is in good accordance with the contamination range covered by former studies (Baeyens et al., 2003; Lang et al., 2017; HELCOM, 2018). Lang et al. (2015) reported Hg concentrations in muscle tissue of dab ranging from 7 to 373 µg/kg w. w. (mean 52 µg/kg w. w.).
At first sight, the results displayed in Table 2, with the maximum mean concentration of Hg in samples from KH, suggest a generally higher Hg contamination in KH than in the reference sites. However, it is known that Hg in dab is mainly present as methylmercury (94%, Lang et al., 2017), which bioaccumulates in fish (Donadt et al., 2021). A detailed look at the Hg concentrations and the age of the fish, shown in Fig. 2, reveals a clear correlation between Hg and age for all stations (Eq. (4)). This illustrates that age-dependent bioaccumulation of Hg takes place in dab from all stations and that bioaccumulation follows comparable functions, with overlapping 95% prognosis bands in Fig. 2. Therefore, the age-Hg relations can be regarded as similar for all sites, so that the typical Hg concentration of a 2-year-old dab is about 19 and that of a 4-year-old dab about 47 µg/kg w. w. (calculated from equation (4), shown in Fig. 2) in the southern Baltic Sea. The higher Hg concentration in fish from KH (Table 2) is therefore mainly caused by their higher age and not by a higher contamination level of, e.g., the sediments. This leads to the question whether the earlier findings of elevated Hg concentrations in KH sediments (Beldowski et al., 2019) could explain enhanced Hg concentrations in fish, regarding (1) their concentration relative to diffuse sources and (2) the bioavailability of Hg fulminate for fish. It has to be taken into account, however, that the highest individual Hg levels were reported in fish from KH.
Koske et al. (2020b) reported, for the same individuals used in the present study, that only about half of the fish tested positive for explosives (4-aminodinitrotoluene in Table 3) and therefore had proven contact with dumped munitions. However, no indication could be found in the present study that fish testing positive for explosive compounds tend to have higher Hg levels. This might be explained by the different time scales of exposure: a metabolite detected in bile reflects days, whereas bioaccumulation of Hg in muscle reflects months to years. It is also possible that exposure to explosives and to Hg takes place in parallel in KH, but that Hg contamination originating from Hg fulminate has either a low bioavailability or a low concentration compared to other diffuse Hg sources and is therefore hard to detect.
The PCA in Fig. 3 illustrates the relations in the data set and confirms that the variables age and Hg are related, as expected from Fig. 2 (displayed close together in the projection and covered by the same factor, Table 3). The PCA also shows a weaker relation between the variables 4-ADNT and site, as described by Koske et al. (2020b) (variables inversely correlated on the same diagonal in the projection in Fig. 3). There is no clear relation between Hg and 4-ADNT because they were mainly covered by different factors (Table 3). The results of the correlation analysis (Fig. 2) as well as of the PCA (Fig. 3) are supported by ANOVA, showing a significant relation (p < 0.001) between age and Hg but not between site and Hg. All three statistical methods point in the same direction: dumped munition at KH is not likely to be a Hg source for fish.
Hg is a core indicator in the environmental monitoring programme for the Baltic Sea, and HELCOM (2018) reported in its second holistic assessment that concentrations of Hg in fish muscle exceeded the threshold level of 20 µg/kg w. w. in almost all monitored regions, indicating that good status has not been reached for the Baltic Sea. This is in accordance with our findings and underlines the importance of Hg measurements in the marine environment and the need for knowledge about local sources of Hg when interpreting monitoring results, especially if they are related to dumped munition. The German environmental ministers decided in 2019 to initiate a process to set up a screening in munition dump sites and reference areas to assess the possible environmental impact on the ecosystem originating from dumped munitions in German marine waters (UMK, 2019). Hg is discussed as one of the indicators for munition exposure, besides explosives such as 4-ADNT. KH is likely to be included in this future screening study because it is a well-studied region (Kampmeier et al., 2020; Koske et al., 2020a, b; Strehse et al., 2017). For the investigations described above, or for later monitoring of dumped munition, e.g. under the MSFD, suitable indicators in fish have to be selected. As outlined before, Hg does not seem to be a suitable indicator for monitoring of dumped munition at KH.
We conclude that the elevated Hg levels present in dump site sediments (Beldowski et al., 2019) do not significantly influence the Hg contamination of fish living there. Therefore, Hg in fish is not a suitable indicator of exposure to dumped munition at KH. We hypothesize that Hg from diffuse sources may overlay the additional input at dump sites. However, Hg exposure originating from dumped munition cannot be excluded in general as a local contamination source for fish and may contribute to the overall exposure monitored, e.g. under MSFD D8.
Beck, A. J., Gledhill, M., Schlosser, C., Stamer, B., Böttcher, C., Sternheim, J., Greinert, J., & Achterberg, E. P. (2018). Spread, behavior, and ecosystem consequences of conventional munitions compounds in coastal marine waters. Frontiers in Marine Science, 5, 141. https://doi.org/10.3389/fmars.2018.00141
Beck, A. J., van der Lee, E. M., Eggert, A., Stamer, B., Gledhill, M., Schlosser, C., & Achterberg, E. P. (2019). In situ measurements of explosive compound dissolution fluxes from exposed munition material in the Baltic Sea. Environmental Science & Technology, 53(10), 5652–5660. https://doi.org/10.1021/acs.est.8b06974
Baeyens, W., Leermakers, M., Papina, T., Saprykin, A., Brion, N., Noyen, J., De Gieter, M., Elskens, M., & Goeyens, L. (2003). Bioconcentration and biomagnification of mercury and methylmercury in North Sea and Scheldt estuary fish. Archives of Environmental Contamination and Toxicology, 45(4), 498–508. https://doi.org/10.1007/s00244-003-2136-4
Bełdowski, J., Szubska, M., Siedlewicz, G., Korejwo, E., Grabowski, M., Bełdowska, M., Kwasigroch, U., Fabisiak, J., Łońska, E., Szala, M., & Pempkowiak, J. (2019). Sea-dumped ammunition as a possible source of mercury to the Baltic Sea sediments. Science of the Total Environment, 674, 363–373. https://doi.org/10.1016/j.scitotenv.2019.04.058
Bohl, H. (1957). Die Biologie der Kliesche (Limanda Limanda L.) in der Nordsee. Ber. Dtsch. Wiss. Komm. Meeresforsch., 15, 1–57.
Böttcher, C., Knobloch, T., Rühl, N.-P., Sternheim, J., Wichert, U., & Wöhler, J. (2011). Munitionsbelastung der Deutschen Meeresgewässer - Bestandsaufnahme und Empfehlungen, Meeresumwelt Aktuell Nord- und Ostsee. Retrieved March 21 2021 from https://www.blmp-online.de/PDF/Indikatorberichte/2011_03_sd.pdf
Clarkson, T. W., & Magos, L. (2006). The toxicology of mercury and its chemical compounds. Critical Reviews in Toxicology, 36, 609–662. https://doi.org/10.1080/10408440600845619
DIN. (1994). Deutsches Institut für Normung e.V. (DIN) 32645, Nachweis-, Erfassungs- und Bestimmungsgrenze. Berlin Beuth Verlag, Berlin
Donadt, C., Cooke, C. A., Graydon, J. A., & Poesch, M. S. (2021). Mercury bioaccumulation in stream fish from an agriculturally-dominated watershed. Chemosphere, 262, 128059. https://doi.org/10.1016/j.chemosphere.2020.128059
Driscoll, C. T., Mason, R. P., Chan, H. M., Jacob, D. J., & Pirrone, N. (2013). Mercury as a global pollutant: Sources, pathways, and effects. Environmental Science & Technology., 47(10), 4967–4983. https://doi.org/10.1021/es305071v
Gledhill, M., Beck, A. J., Stamer, B., Schlosser, C., & Achterberg, E. P. (2019). Quantification of munition compounds in the marine environment by solid phase extraction – ultra high performance liquid chromatography with detection by electrospray ionisation – mass spectrometry. Talanta, 200, 366–372. https://doi.org/10.1016/j.talanta.2019.03.050
HELCOM. (2018). State of the Baltic Sea – Second HELCOM holistic assessment 2011–2016. Baltic Sea Environment, Proceedings 155. retrieved June 2 2021 from http://stateofthebalticsea.helcom.fi/wp-content/uploads/2018/07/HELCOM_State-of-the-Baltic-Sea_Second-HELCOM-holistic-assessment-2011-2016.pdf
Kammann, U., Akcha, F., Budzinski, H., Burgeot, T., Gubbins, M. J., Lang, T., Le Menach, K., Vethaak, A. D., & Hylland, K. (2017). PAH metabolites in fish bile: From the Seine Estuary to Iceland. Marine Environment Research, 124, 41–45. https://doi.org/10.1016/j.marenvres.2016.02.014
Kammann, U. (2007). PAH metabolites in bile fluids of dab (Limanda limanda) and flounder (Platichthys flesus): Spatial distribution and seasonal changes. Environmental Science and Pollution Research, 14, 102–108. https://doi.org/10.1065/espr2006.05.308
Kampmeier, M., van der Lee, E. M., Wichert, U., & Greinert, J. (2020). Exploration of the munition dumpsite Kolberger Heide in Kiel Bay, Germany: Example for a standardised hydroacoustic and optic monitoring approach. Continental Shelf Research, 198, 104108. https://doi.org/10.1016/j.csr.2020.104108
Koske, D., Goldenstein, N., Rosenberger, T., Machulik, U., Hanel, R., & Kammann, U. (2020a). Dumped munitions: New insights into the metabolization of 2,4,6-trinitrotoluene in Baltic flatfish. Marine Environment Research, 160, 104992. https://doi.org/10.1016/j.marenvres.2020.104992
Koske, D., Straumer, K., Goldenstein, N. I., Hanel, R., Lang, T., & Kammann, U. (2020b). First evidence of explosives and their degradation products in dab (Limanda limanda L.) from a munition dumpsite in the Baltic Sea. Marine Pollution Bulletin, 155:111131. https://doi.org/10.1016/j.marpolbul.2020.111131
Lang, T., Kruse, R., Haarich, M., & Wosniok, W. (2017). Mercury species in dab (Limanda limanda) from the North Sea, Baltic Sea and Icelandic waters in relation to host-specific variables. Marine Environment Research, 124, 32–40. https://doi.org/10.1016/j.marenvres.2016.03.001
Law, R., Hanke, G., Angelidis, M. O., Batty, J., Bignert, A., Dachs, J., Davies, I., et al. (2010). Marine Strategy Framework Directive: Task group 8: Report contaminants and pollution effects. Joint Research Centre, European Commission. https://doi.org/10.2788/85887
Maser, E., & Strehse, J. S. (2021). Can seafood from marine sites of dumped World War relicts be eaten? Archives of Toxicology. https://doi.org/10.1007/s00204-021-03045-9
Maier, H. N. (1906). Beiträge zur Altersbestimmung der Fische: Allgemeines; die Altersbestimmung nach den Otolithen bei Scholle und Kabeljau. Littmann.
Morcillo, P., Esteban, M. A., & Cuesta, A. (2017). Mercury and its toxic effects on fish. AIMS Environmental Science, 4(3), 386–402. https://doi.org/10.3934/environsci.2017.3.386
Niemikoski, H., Straumer, K., Ahvo, A., Turja, R., Brenner, M., Rautanen, T., Lang, T., Lehtonen, K. K., & Vanninen, P. (2020). Detection of chemical warfare agent related phenylarsenic compounds and multibiomarker responses in cod (Gadus morhua) from munition dumpsites. Marine Environment Research. https://doi.org/10.1016/j.marenvres.2020.105160
North.io GmbH. (2019). Amucad.
Strehse, J. S., Appel, D., Geist, C., Martin, H.-J., & Maser, E. (2017). Biomonitoring of 2,4,6- trinitrotoluene and degradation products in the marine environment with transplanted blue mussels (M. edulis). Toxicology, 390, 117–123. https://doi.org/10.1016/j.tox.2017.09.004
Strehse, J. S., & Maser, E. (2020). Marine bivalves as bioindicators for environmental pollutants with focus on dumped munitions in the sea: A review. Marine Environment Research, 158, 105006. https://doi.org/10.1016/j.marenvres.2020.105006
UMK. (2019). German environment minister conference (UMK), press release of 93. UMK, 15. November 2019/BUE15, retrieved March 21 2021 from https://www.umweltministerkonferenz.de/Mitglieder-UMK-Mitglieder.html.html/Aktuelles-Box.html?newsID=230
Uścinowicz, S., Szefer, P., & Sokołowski, K. (2011). Trace metals in the Baltic Sea sediments. In: Uścinowicz, S. (Ed.), Geochemistry of the Baltic Sea Surface Sediments 2011, p. 356 (Warszawa).
Zhang, Y. X., Jaegle, L., Thompson, L., & Streets, D. G. (2015). Six centuries of changing oceanic mercury. Glob. Biogeochem. Cycles, 28, 1251–1261. https://doi.org/10.1002/2014GB004939
Open Access funding enabled and organized by Projekt DEAL. This study was funded by the European Union, Baltic Sea Region Programme, DAIMON (www.daimonproject.com). Its content is the sole responsibility of the authors and can in no way be taken to reflect the views of the European Union. Partial financial support was received under Regulation (EU) No 508/2014 from the European Parliament and of the Council on the European Maritime and Fisheries Fund.
Thünen Institute of Fisheries Ecology, Herwigstraße 31, Bremerhaven, 27572, Germany
Ulrike Kammann, Marc-Oliver Aust, Maike Siegmund, Nicole Schmidt, Katharina Straumer & Thomas Lang
CRediT authorship contribution statement, UK: conceptualization, data curation, formal analysis, writing—original draft, writing—review and editing, project administration, visualization. M-OA: writing—original draft, writing—review and editing, visualization. MS: resources, methodology, validation, investigation. NS: methodology, validation. KS: resources, investigation, writing—review and editing. TL: conceptualization, resources, project administration, funding acquisition. All authors approved the final version of the manuscript submitted for publication.
Correspondence to Ulrike Kammann.
Kammann, U., Aust, MO., Siegmund, M. et al. Deep impact? Is mercury in dab (Limanda limanda) a marker for dumped munition? Results from munition dump site Kolberger Heide (Baltic Sea). Environ Monit Assess 193, 788 (2021). https://doi.org/10.1007/s10661-021-09564-3 | CommonCrawl |
Impressive common misleading interpretations in statistics to make students aware of
Statistics are used everywhere; politicians, companies, etc. argue with the help of statistics. Since calculations are needed for the interpretation of statistics, such things should be taught in mathematics in school.
What are the most impressive common misleading interpretations of statistics that students should be aware of?
undergraduate-education secondary-education statistics mathematics-in-daily-life
Markus Klein
$\begingroup$ See also What are common statistical sins? on Cross Validated. $\endgroup$ – Nick Stauner Apr 6 '14 at 6:59
$\begingroup$ several books on this; see e.g. How to Lie with Statistics by Huff $\endgroup$ – vzn Apr 7 '14 at 4:00
$\begingroup$ I see this question is tagged secondary education, but it applies equally to tertiary education. $\endgroup$ – J W Apr 7 '14 at 7:47
$\begingroup$ I protect this for now; please feel free to contact me if you prefer it undone. $\endgroup$ – quid♦ Apr 9 '14 at 14:22
$\begingroup$ I am not allowed to answer, but someone should mention the Will Rogers phenomenon; often, if you have two sets of numbers, you can increase the averages of both just by moving things from one to the other. This has relevance in medicine. Alternatively, open any newspaper and you can find an avalanche of bad statistics. For example, a recent news story claims that it's "shocking" that the number of women over-50 in the UK giving birth has doubled in four years... to 154. $\endgroup$ – Flounderer Apr 10 '14 at 22:49
Here are two well known examples:
If someone tests positive for a rare disease (say its prevalence is 1 out of 100,000) with a test that has a 1% false positive rate, it is tempting to say that we are 99% sure they have that disease. This isn't true if you go through the numbers; they probably don't have that disease and are a false positive. (Bayes)
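Working the numbers through Bayes' theorem makes the point; a minimal Python sketch (the test's sensitivity is not stated above, so it is assumed to be 100% here):

```python
prevalence = 1 / 100_000     # 1 in 100,000 people have the disease
false_positive_rate = 0.01   # 1% of healthy people still test positive
sensitivity = 1.0            # assumption: the test never misses a true case

p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
p_disease_given_positive = sensitivity * prevalence / p_positive
print(p_disease_given_positive)  # ~0.001, i.e. about 0.1%, nowhere near 99%
```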
If you look at a list of cancer rates by county, you see that counties with the lowest rates of cancer tend to have a much lower population than average. Students will speculate all sorts of reasons for this - "healthier country living" etc. But you can also look at the counties with the highest cancer rates. You find they too are the least populated. If you show that to students first they will have all sorts of reasons why that makes sense. But what is really going on is that the standard error is larger for smaller samples. Standardized test results from schools show the same effect. I heard the Gates Foundation invested millions in small high performing schools before realizing that this effect was in play.
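A minimal simulation sketch (hypothetical county sizes, identical true cancer rate everywhere) reproduces the effect: the most extreme observed rates, high and low, both come from the smallest counties:

```python
import numpy as np

rng = np.random.default_rng(1)
true_rate = 0.02                                  # the same underlying rate in every county
populations = rng.integers(500, 200_000, 3000)    # hypothetical county sizes
cases = rng.binomial(populations, true_rate)
observed_rate = cases / populations

order = np.argsort(observed_rate)
print("mean population, 30 lowest-rate counties: ", populations[order[:30]].mean())
print("mean population, 30 highest-rate counties:", populations[order[-30:]].mean())
print("mean population, all counties:            ", populations.mean())
# Both extremes are dominated by small counties: larger standard error, not healthier living.
```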
Here is a great article with a much better explanation of the first error and a lot of other examples of statistical confusion: http://web.mit.edu/5.95/readings/gigerenzer.pdf
EDIT: I recently discovered that p-values are a lot more subtle than I had thought. The Wikipedia page for p-values lists 6 common misconceptions, most of which I had. It references this article on twelve common misconceptions about p-values.
Psychologists Tversky and Kahneman have studied various misconceptions in statistics. Here is one of their better known papers. They found even trained statisticians often ignore base rates when calculating probabilities and engage in a version of the gamblers fallacy, expecting small samples to have the same standard error as large ones.
Noah
$\begingroup$ Putting the numbers for #1 here for others: the probability that someone has the disease and tests positively is $\frac{99}{100} \frac{1}{100000} = 0.0000099$. The probability that somoene doesn't have the disease and test positively is $\frac{1}{100} \frac{99999}{100000} = .0099999$. So, if someone tests positively, the probability that they actually have the disease is $.0989 \%$ $\endgroup$ – MCT Apr 6 '14 at 16:45
$\begingroup$ @MichaelT, these are wrong numbers. Noah said nothing about sensitivity of the test (fraction of times it is correct for those who have the disease), so you cannot compute the first probability at all here. Realistically, most cheap tests are designed to have a rather lousy sensitivity but high specificity: if somebody has a disease, the test MUST find it; more sophisticated tests may follow, but the initial screener should be designed in such a way as to never miss the case, at the expense of producing a lot of false positives. $\endgroup$ – StasK Apr 7 '14 at 14:06
Anscombe's quartet is pretty good:
All four of these sets have almost identical mean and variance for both x and y coordinates, correlation, and best-fit linear regression. But they're obviously very different!
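A quick numerical check, using the standard published quartet values (transcribed here, so worth verifying against a trusted source); a minimal Python sketch:

```python
import numpy as np

# Anscombe's quartet, transcribed from the standard published values.
x123 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
quartet = {
    "I":   (x123, [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]),
    "II":  (x123, [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]),
    "III": (x123, [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73]),
    "IV":  ([8]*7 + [19] + [8]*3,
            [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89]),
}

for name, (x, y) in quartet.items():
    x, y = np.array(x, float), np.array(y, float)
    slope, intercept = np.polyfit(x, y, 1)
    r = np.corrcoef(x, y)[0, 1]
    print(f"{name}: mean_x={x.mean():.2f} var_x={x.var(ddof=1):.2f} "
          f"mean_y={y.mean():.2f} var_y={y.var(ddof=1):.2f} r={r:.3f} "
          f"fit: y = {intercept:.2f} + {slope:.3f}x")
# All four sets print nearly identical summaries, yet their scatter plots look completely different.
```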
Venge
$\begingroup$ There are a number of misconceptions and counterexamples mentioned in this Cross Validated thread Datasets constructed for a purpose similar to that of Anscombe's quartet $\endgroup$ – Glen_b Apr 18 '14 at 3:39
A book I remember has the title "the egg-laying dog". The titular dog enters a room where we placed 10 sausages and 10 eggs. After a while the dog leaves the room, and we observe that the percentage of eggs relative to the sausages has increased, so we conclude that the dog must have produced eggs.
It's easy to spot the mistake in the above example, because the image of a dog laying eggs is absurd. However, consider the following case: a few decades ago a new medicine against heart diseases was developed. It worked well. However, 10 years later someone observed, that the rate of people dying of cancer was much higher among those who have been treated with the new medicine, in fact, the rate of cancer increased by a significant margin. Mass hysteria ensues: the new medicine causes cancer! Bans are being issued, companies are sued, etc. until a better look at the statistics showed that the situation was exactly the same as in the case of the egg-laying dog: people are not immortal, and sooner or later they tend to die of something. As fewer people died of heart diseases, they died of other causes years or decades later, and cancer, being a leading cause of death especially among older people, was one of them. The new medicine was not causing cancer at all, it just decreased the rate of another disease.
Other interesting, commonly occurring examples:
Regression toward the mean: people, especially bosses, tend to think that scolding people when they perform badly improves their effectiveness and complimenting them when they perform well decreases it. It's easy to see the problem. Take a 6-sided die and start throwing it. Every time you throw a 1, scold the die for giving such a bad result. Observe that after your scolding, in over 80% of the cases, the result of the next throw was better. Was it because of the scolding? (A quick simulation after this list makes the effect explicit.)
Improper scaling in graphs. A common election tactic, you put two bar graphs next to each other, one very low for your opponent and one very high for yourself. What people tend to miss, is that the values don't start from zero. In fact you created 26857 new jobs, while your opponent only 26819. Not a big difference, but if you start the graph from 26800, it seems quite large.
While not strictly statistics-related, it's worth to mention the "fallacy fallacy". If someone uses a fallacy to prove or defend a statement, this fact alone is not a proof that the statement is wrong.
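The scolding example above is easy to check with a quick simulation; a minimal Python sketch:

```python
import random

random.seed(0)
rolls = [random.randint(1, 6) for _ in range(100_000)]
followups = [rolls[i + 1] for i in range(len(rolls) - 1) if rolls[i] == 1]
improved = sum(r > 1 for r in followups) / len(followups)
print(improved)  # ~0.83: after a "scolded" roll of 1, the next roll is better about 5/6 of the time
```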
vsz
$\begingroup$ This is an incredibly high-quality first post to a stackexchange site. I see you're not new to the party, but welcome to this site anyway! $\endgroup$ – Chris Cunningham♦ Apr 9 '14 at 14:47
$\begingroup$ great "scolding" example $\endgroup$ – Rolazaro Azeveires Feb 9 '17 at 1:15
Sally Clark (http://en.wikipedia.org/wiki/Sally_Clark) was convicted in the UK of murdering both her infant sons, when in fact it is much more likely that they died of natural causes. The case against her was largely based on invalid statistical reasoning. The Royal Statistical Society made a statement about at at the time, which begins as follows:
In the recent highly-publicised case of R v. Sally Clark, a medical expert witness drew on published studies to obtain a figure for the frequency of sudden infant death syndrome (SIDS, or "cot death") in families having some of the characteristics of the defendant's family. He went on to square this figure to obtain a value of 1 in 73 million for the frequency of two cases of SIDS in such a family. This approach is, in general, statistically invalid. It would only be valid if SIDS cases arose independently within families, an assumption that would need to be justified empirically. Not only was no such empirical justification provided in the case, but there are very strong a priori reasons for supposing that the assumption will be false. There may well be unknown genetic or environmental factors that predispose families to SIDS, so that a second case within the family becomes much more likely.
After more than three years in prison Sally Clark was released following a second appeal, but she died of alcohol poisoning a few years later. This is a very sad but instructive story.
Simpson's paradox: see http://en.wikipedia.org/wiki/Simpson%27s_paradox.
To summarize the Berkeley Admissions example: in 1973, 43% of men applying to graduate school at Berkeley were admitted, but only 35% of women. But, broken down across the six departments, women either did better than men, or the difference was not significant. The paradoxical result appeared because women were more likely than men to apply to the most competitive departments.
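A minimal Python sketch with made-up admission numbers (not the actual 1973 Berkeley figures) shows how the reversal can happen:

```python
# Two hypothetical departments; within each, women are admitted at a higher rate.
#              (applicants, admitted)
men   = {"easy": (800, 480), "hard": (200, 40)}    # 60% and 20% admitted
women = {"easy": (200, 130), "hard": (800, 200)}   # 65% and 25% admitted

for dept in ("easy", "hard"):
    m_app, m_adm = men[dept]
    w_app, w_adm = women[dept]
    print(dept, "men:", m_adm / m_app, "women:", w_adm / w_app)

m_app_total = sum(app for app, _ in men.values())
m_adm_total = sum(adm for _, adm in men.values())
w_app_total = sum(app for app, _ in women.values())
w_adm_total = sum(adm for _, adm in women.values())
print("overall men:  ", m_adm_total / m_app_total)   # 0.52
print("overall women:", w_adm_total / w_app_total)   # 0.33
# Women do better in every department yet worse overall, because they mostly
# apply to the department that admits few people of either sex.
```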
Benoît Kloeckner's answer mentions some other problems that arise from averaging percentages.
Mark Wildon
$\begingroup$ I wondered if this one was already mentioned. I have seen it used often to explain declines in, e.g., SAT scores. For example, see ets.org/research/policy_research_reports/publications/report/…. $\endgroup$ – Benjamin Dickman Apr 6 '14 at 9:28
$\begingroup$ While this is a paradox to be sure, the example is not without statistical meaning. Berkeley admitted a smaller proportion of applicants to programs which attracted more female interest. Was the "competitiveness" inevitable or a consequence of bias? It may be a useful measurement of something, with proper care. $\endgroup$ – Potatoswatter Apr 7 '14 at 6:56
$\begingroup$ An excellent illustration of the politically fashionable but logically invalid use of statistics to "prove" discrimination. $\endgroup$ – user807 Apr 9 '14 at 15:31
$\begingroup$ An attractive interactive visualization of Simpson's Paradox can be found at vudlab.com/simpsons $\endgroup$ – Mark S. Feb 28 '15 at 22:55
Percentages are a source of many, many, many common mistakes.
One that is very common is believing that percentages can be added. An example: one of our presidents increased its salary by 172%; the next president decreased the presidential salary by 30%. It was commented that compared to the salary before the raise, it was still a 142% increase.
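A one-line check shows why the 142% figure is wrong: successive percentage changes multiply rather than add,

$$(1 + 1.72)\times(1 - 0.30) = 2.72 \times 0.70 = 1.904,$$

so relative to the salary before the raise the net effect is a 90.4% increase, not 142%.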
Another one is to confuse 300% of a price and a raise by 300%. One of our former ministers made this mistake on twitter (while in office), sticking by it when corrected.
Another mistake, which is not really about percentage, is to invert roles in ratios when considering the correlation between two characters. E.g.: "30% of convicted criminals have purple hairs" is sometimes translated into " 30% of purple haired people are convicted criminals".
Benoît Kloeckner
$\begingroup$ An even worse example with percentages. Quote of a danish politician some years ago, addressing Folketinget (apart from being translated, I have paraphrased a bit since I could not find the exact quote again, and the numbers might have been slightly different): "30% of danish men and 35% of danish women use the libraries. Now, usually we need to be careful when adding percentages, but in this case it is OK ["tør jeg godt"]. So this means that 65% of danes use the libraries, which is not so bad". The next speaker started with "Now, we are of course not here to tech each other math, but..." $\endgroup$ – Tobias Kildetoft Apr 8 '14 at 7:31
$\begingroup$ @TobiasKildetoft out of curiosity: who is that a quote of? $\endgroup$ – Therkel Jun 18 '16 at 15:18
$\begingroup$ @Therkel I do not remember, unfortunately. $\endgroup$ – Tobias Kildetoft Jun 18 '16 at 15:22
$\begingroup$ @Therkel: da.wikiquote.org/wiki/Aase_D._Madsen $\endgroup$ – Hans Lundmark Jul 12 '17 at 9:41
Multiple hypothesis testing is a common one.
Let's say you run a study where you try to link some genetic marker to cancer rates. You look at perhaps 80 different genes and see if any of them have a correlation with occurrence of cancer. Lo and behold, one does! With p-value = 0.03! You conclude that there is a strong correlation (and seek to prove causation since you know the difference).
Seems pretty reasonable, but here's an analogous example: Let's say you want to find out if any of 80 people have the ability to predict the future. You ask each of them to predict 6 coin flips. And one predicts all 6 correctly! The p-value of this occurrence is about 0.016, again comfortably below 0.05! The stats seem to reject the null hypothesis that "John Doe cannot predict the future." But obviously this individual just got lucky. The null hypothesis you actually care about is "none of the 80 people can predict the future", and your individual p-values don't tell you anything about this.
There are a number of ways to adjust p-values because of this. The simplest (but not very powerful) of these is Bonferroni correction.
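Putting numbers on the coin-flip example above (a minimal Python sketch; six correct guesses corresponds to a single-person p-value of $1/64 \approx 0.016$):

```python
n_people, n_flips, alpha = 80, 6, 0.05

p_single = 0.5 ** n_flips                        # p-value for one person: 1/64 ~ 0.016
p_at_least_one = 1 - (1 - p_single) ** n_people  # chance that somebody passes by pure luck
print(p_single, p_at_least_one)                  # ~0.016, ~0.72

# Bonferroni correction: require p < alpha / number_of_tests for each individual test.
print(alpha / n_people)                          # 0.000625; a p-value of 0.016 no longer impresses
```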
This is perhaps one of the most common violations of statistics in published academic papers.
Joe K
$\begingroup$ xkcd.com/882 $\endgroup$ – naught101 Apr 10 '14 at 6:24
$\begingroup$ It is also very hard if not impossible to detect on review. $\endgroup$ – Richard Mar 24 '15 at 20:33
Just two (now three, see below), to whet the appetite. Stating the mistakes:
"Correlation implies Causation": it doesn't. The finding of statistical correlation between two variables may strengthen a pre-existing theoretical/logical argument of existing causal links. But it may also reflect the existence of an underlying third variable that affects both and thus creates the correlation. When correlation is unexpectedly found, it indicates the possible existence of hitherto unknown causal links and should initiate a deeper (and not necessarily statistical) investigation -but it does not imply causation from the outset.
"If we take a larger and larger sample from a population, its distribution will tend to become normal (Gaussian) no matter what it is initially": it won't. The Central Limit Theorem, the misreading of which is the cause of this mistake, refers to the distribution of standardized sums of random variables as their number grows, not to the distribution of a collection of random variables. Alternative statement of the mistake: "Everything has a bell-shaped distribution" -let alone the fact that the normal distribution does not always look so bona fide bell-shaped.
The research paper Students' misconceptions of statistical inference: A review of the empirical evidence from research on statistics education from Ana Elisa Castro Sotos et al., reviewing research papers on the matter can be downloaded from
ftp://ftp.soc.uoc.gr/Psycho/Zampetakis/%D3%F4%E1%F4%E9%F3%F4%E9%EA%DE%20%C9/Useful%20Papers/statistical%20terms.pdf.
ADDENDUM April 8 2014
I am adding a third one, which is really dangerous, since it relates in a more general sense to reasoning and inference, not necessarily statistical inference.
3."My sample is representative of the population": it isn't. Ok, it may be, but you need to try hard to achieve that (or to be lucky), so don't take it as a given. It may look an "ordinary" day to you, with nothing special, but this does not mean that it is a representative day. So counting during just this one day, the numbers of red and of blue cars passing outside your window, won't give you a reliable estimate of the average number of red and blue cars, or of the average proportion of red and blue cars, or of the probability that the cars will be red or they will be blue per day... This is sometimes called, sarcastically, "the law of small numbers" (but the Poisson distribution is sometimes also called that), and it points out the pitfalls of doing any kind of inference based on too little information, persuading yourself that this information is nevertheless "representative" of the whole picture, and so it suffices to reach valid conclusions. People do it all the time, even when statistics do not appear to be involved. Fundamentally, it has to do with the difficulty we have of understanding and accepting the phenomenon of random variability: it does not need a reason to occur, it just occurs (at least given our current state of knowledge).
Alecos Papadopoulos
$\begingroup$ Too often I see "XYZ linked with ABC disease" in the media, often making it seem a causation. $\endgroup$ – Ramchandra Apte Apr 6 '14 at 8:10
$\begingroup$ @RamchandraApte Reminds me of this comic: phdcomics.com/comics.php?n=1174 $\endgroup$ – Markus Klein Apr 6 '14 at 9:10
$\begingroup$ @MarkusKlein Very amusing comic! $\endgroup$ – Alecos Papadopoulos Apr 6 '14 at 10:15
$\begingroup$ My favorite example for correlation doesn't imply causation: Ice cream consumption causes drowning accidents! The more ice cream is consumed in a month, the more drowning accidents happen. They must be linked! No, they are both independently linked to the weather. When the weather is hot, more ice cream is consumed and more people go swimming and drown. Both are influenced by the same third variable, but one doesn't influence the other. $\endgroup$ – Philipp Apr 6 '14 at 19:37
$\begingroup$ Can't help: xkcd.com/552 ... $\endgroup$ – mbork Apr 6 '14 at 19:52
Sometimes extreme sample bias. Here is an example (numbers made up, but realistic): In some country with a population of 100 million people, every year 100 people are bitten by poisonous snakes and 50 of these die. Every year 50 people are given treatment against snake bites, and 10 of these die (40 die without getting treatment).
Your chance of dying from snake bite if you are not given treatment is 40 in 100 million, or 1 in 2.5 million. Your chance of dying from snake bite if you are given treatment is one in five. Clearly you should strongly refuse snake bite treatment.
Here the error is quite obvious, but there are medical situations where something similar happens. For some medical condition, there are two medications. One is more effective but may increase your blood pressure. The other is slightly less effective but won't increase your blood pressure. If your doctor sees that you have high blood pressure, you will be given the second medication to avoid a risk of the blood pressure getting too high. Now if you examine the statistics, people given the second medication will statistically end up with higher blood pressure, and totally wrong conclusions can be drawn.
Unfortunately, it is in German, but the book Angewandte Statistik: Eine Einführung für Wirtschaftswissenschaftler und Informatiker by Kröpfl, Peschek and Schneider contains many typical mistakes that you can make.
My favorite example is that you can show a strong geographic correlation in Germany between the number of stork nests and the number of newborn children.
András Bátkai
$\begingroup$ Since you've mentioned a German book, there is also a German webpage www.unstatistik.de which deals with very specific examples from current newspaper storys (That's how the question came up, since I was looking for more common issues than on that site). $\endgroup$ – Markus Klein Apr 5 '14 at 20:36
If you succumb to the temptation of ejecting, say, a 5-sigma outlier from an n=10 sample taken from what you believe to be a normally distributed source, then you are discarding 50% of the sample's information content. Not so harmless.
EDIT: I'll give it a go:
Low-probability events carry more information (a.k.a. surprisal) than high-probability events. E.g. "The building is on fire." carries more information than "The building is not on fire." This "information" is quantified by information theory, and appreciated by inductive inference. Solomonoff inductive inference is rooted in Bayesian statistics and is the optimal procedure for drawing strong-as-possible conclusions from available data, which is what frequentist statistical inference (as I understand its aims, vaguely) also tries to do, albeit less efficiently. Since information is a commodity valued by (Solomonoff) inductive inference, and since frequentist statistical inference seems to be a largely parallel pursuit of the same aims under a different theoretical framework, we would expect information to be a valuable commodity to statistical inference in general.
To get back to the point: Outliers are unlikely events (according to anyone tempted to dismiss them as fluke events, anyway) and therefore carry the most information within the sample (to that person) and therefore should be seen by that person as the most valuable members of a sample. The desire to "clean the data" and get rid of them is diametrically misguided (unless an explanation has been provided for them, in which case they would no longer carry much information anyway since under that explanation they are no longer such low-probability events).
Museful
$\begingroup$ Welcome to the site! Unfortunately I (and many other users of this site, and many people who teach introductory statistics) will be unable to parse that wikipedia page very easily. If you expanded your answer into something that was usable in a classroom setting, I think it would be extremely well-received! $\endgroup$ – Chris Cunningham♦ Apr 8 '14 at 21:40
If a coin is biased to land heads with probability $p$ and $(a,b)$ is a $95\%$ confidence interval for $p$ then $p$ is in $(a,b)$ with probability $95\%$.
Added in edit - While it is often argued that the difference is only philosophical, this distinction is of huge practical importance because the latter phrase is usually interpreted as if $p$ were the random variable, while actually $(a,b)$ is the only random variable if we are in a non-Bayesian setting. In a Bayesian setting, then $p$ is a random variable but the probability that it belongs to the confidence interval depends on the prior. To makes things clearer, let us adapt the first item of Noah's answer to this case.
Imagine the coin is taken randomly uniformly from a jar known to contain coins for which $p=5\%$ and coins for which $p=95\%$ (and assume the coins auto-destruct after being flipped, otherwise we can flip the same coin a number of times; for a more realistic framework, see Noah's answer).
Then, if we know the two possible values (but ignore the distribution of the two kind of coins in the jar), a natural confidence interval $I$ (which I recall is far from being unique) is to take $I=\{0.95\}$ if we observe heads and $I=\{0.05\}$ if we observe tails. We can here replace these singletons by small intervals around the same values if we so decide.
This $I$ is random, as it should: it depend on the random outcome of the experiment. Let us show that this indeed gives a $95\%$ confidence interval. Consider all possible situations and their probabilities, denoting by $a$ the proportion of $p=0.95$ coins:
we drew a $p=0.95$ coin and got heads (odds: $0.95\, a$),
we drew a $p=0.05$ coin and got heads (odds: $0.05\,(1-a)$),
we drew a $p=0.95$ coin and got tails (odds: $0.05\, a$),
we drew a $p=0.05$ coin and got tails (odds: $0.95\,(1-a)$).
Precisely in the first and last case will $I$ contain $p$, and this sums up to a probability of $95\%$: our design for $I$ indeed ensured that in $95\%$ of the cases, the experiment would lead us to choose a $I$ that contains $p$.
Let us go further: if the outcome is heads, we take $I=\{0.95\}$ and the a posteriori probability that $I$ contains the actual value of $p$ is $$ \frac{0.95\, a}{0.95\, a+0.05\,(1-a)}=\frac{0.95\, a}{0.90\, a+0.05}$$ which can be anywhere between $0$ and $1$.
For example, assume that $a=1/1000$. If heads turns up, the (conditional) probability that $I$ contains $p$ is less than $2\%$: the overwhelming prior makes that a "heads" outcome is much more likely to result from a little of bad luck with a $p=0.05$ coin than from a very rare $p=0.95$ coin.
This example might seem artificial, but when a scientist assumes $p$ is in her or his computed confidence interval with $95\%$ probability, as if $p$ were random, it may lead to false interpretations, notably in the presence of bias in his or her experiments. I thus prefer to say "$I$ lands around $p$ with $95\%$ probability" to make more explicit the randomness of the confidence interval.
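For readers who want to check the numbers, here is a minimal Monte Carlo sketch in Python. The parameters (such as $a = 1/1000$) follow the example above; everything else (variable names, number of simulated experiments) is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

a = 1 / 1000        # proportion of p = 0.95 coins in the jar (from the example above)
n = 2_000_000       # number of simulated draw-and-flip experiments

p = np.where(rng.random(n) < a, 0.95, 0.05)    # true bias of each drawn coin
heads = rng.random(n) < p                      # outcome of the single flip

interval = np.where(heads, 0.95, 0.05)         # I = {0.95} on heads, {0.05} on tails
covered = interval == p                        # does I contain the true p?

print("overall coverage:     ", covered.mean())          # ~0.95 by construction
print("coverage given heads: ", covered[heads].mean())   # ~0.02 for a = 1/1000
print("coverage given tails: ", covered[~heads].mean())  # ~0.999
```

The overall coverage matches the $95\%$ guarantee, while the conditional coverage after observing heads collapses to roughly $2\%$, exactly as the formula above predicts.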
Benoît Kloeckner
$\begingroup$ Even some scientists/doctors have this misconception, as I recently discovered. $\endgroup$ – JAB Apr 8 '14 at 16:59
$\begingroup$ They're really the same thing, it's just a difference of philosophy, not results, and scientists and doctors are all about results. If you create multiple 95% CIs of a known p, then you will notice that p will be within about 95% of them. Hence, p is in (a,b) with probability 95%. $\endgroup$ – Łukasz Wiklendt Apr 10 '14 at 12:30
$\begingroup$ Can you explain this more fully, and how it could lead to a problem? $\endgroup$ – Richard Mar 24 '15 at 21:02
$\begingroup$ @Richard: The 'correct' interpretation is that if you construct many 95% CI's, then 95% of them will contain the true p. The 'incorrect' interpretation, which is the content of Mark_Wildon's answer, is that if you construct a single 95% CI, then with probability 95%, the true p is in your CI. The difference is subtle and arguably important. Lukasz's argument seems to be that the latter interpretation might be 'incorrect' strictly speaking, but in practical terms, there is no difference. $\endgroup$ – Kenny LJ Jun 3 '15 at 14:48
$\begingroup$ @BenoîtKloeckner: Thank you, your new explanation is very clear. To excuse myself (somewhat), I understood your original experiment design to always report the same confidence interval. Of course this doesn't make much sense in practice. For this interpretation, if $a = 1/1000$ then always reporting $\{0.05\}$ gives a CI containing $p$ with probability $999/1000$, so $\{0.05\}$ is certainly a 95% CI. $\endgroup$ – Mark Wildon Jan 3 '17 at 14:51
Problems with interpreting $P$-values are discussed by Regina Nuzzo, "Statistical Errors," Nature 504 (13 February 2014), 150-152. [link]
"Most scientists" [sic] interpret a $P$-value as the probability of being wrong in concluding the null hypothesis is false.
The $P$-value is often valued more highly than the size of the effect.
"$P$-hacking": "Trying multiple things until you get the desired result."
$\begingroup$ Relevant XKCD comic. $\endgroup$ – hlovdal Apr 7 '14 at 21:41
I think a very common example is newspapers writing something like
The economic performance decreases: Last year's economic growth was 3%, now it is only 2.2%.
My interpretation: although they probably do not know what a derivative is, they mix up a decrease of the first derivative (= economic growth) with a decrease of the function itself (= economic performance).
$\begingroup$ At least here journalists have heard of derivatives (they even talk about the "second derivative" to mean indirect effects). I'm pretty sure they have no clue what they are about... $\endgroup$ – vonbrand Apr 6 '14 at 2:29
$\begingroup$ @vonbrand No, the comment was from me, as an explanation of how to think about it even though they normally don't know about derivatives. $\endgroup$ – Markus Klein Apr 6 '14 at 14:21
$\begingroup$ +1 - I'm not sure how common it is, but here is another example of that error in this blog post. $\endgroup$ – Andy W Apr 6 '14 at 19:20
$\begingroup$ I was under the impression that relative economic growth IS a measure of economic performance; even if absolute growth for one year is greater than for the previous, if the relative growth from one year to the next is not it counts as a decrease in economic performance as the previous growth is assumed to encourage future growth to some degree. (This does not consider whether or not such a measure is accurate or sustainable in the long term; that would be a separate, economically-focused topic for discussion.) $\endgroup$ – JAB Apr 8 '14 at 16:58
$\begingroup$ Newspaper journalists often make mistakes, but I am not sure if this is one such instance. If you assume that 3% GDP growth is the norm and how things should be, then 2.2% GDP growth this year may be considered disappointing. One can thus say that "economic performance decreases". As an example, suppose China in 2015 had GDP growth of only 2.2%. Then there would, justifiably, be plenty of headlines about how China's economy is crashing. Most would view it as a catastrophe, even though you might argue that, on average, the Chinese person is still better off than in 2014. $\endgroup$ – Kenny LJ Jun 3 '15 at 14:53
Sampling issues occur frequently in opinion polls. Errors were the result of bias. Nonresponse bias has been mentioned lately as well as other factors concerning the sampling of potential voters predicting the behavior of actual voters. See Perils of Polling in Election '08, the Pew Research Center.
Michael E2
Folks often think that an event having probability zero and being impossible are the same thing.
ncr
$\begingroup$ Care to clarify the difference? $\endgroup$ – JTP - Apologise to Monica Jun 14 '14 at 3:54
$\begingroup$ See the answers to this question. $\endgroup$ – ncr Jun 14 '14 at 4:34
$\begingroup$ A typical situation is that one of infinitely many things must happen, but none of them has a non-zero probability. If you hit a dartboard, every single of the infinitely many points on the board has zero probability to be hit, but one of them must be hit. $\endgroup$ – gnasher729 Jun 16 '14 at 10:40
The book The tiger that isn't is what you're looking for.
Martín-Blas Pérez Pinilla
$\begingroup$ Would you mind adding an example from this book? $\endgroup$ – András Bátkai Apr 7 '14 at 10:12
$\begingroup$ The random but nonrandom-looking clusters: books.google.es/… to books.google.es/…. $\endgroup$ – Martín-Blas Pérez Pinilla Apr 7 '14 at 10:27
Years ago, there was a news story that coffee caused cancer. It was a great opportunity for me to quickly tell everyone I ran into that I'd bet a year's pay this was a strong, but false, correlation. It was pretty obvious to me that, for whatever reason, the coffee-drinking population had a higher smoking rate than the non-coffee drinkers. It took some time, but that confound turned out to be exactly what had created the initial conclusion.
Similarly, TV watching has been correlated with a sedentary lifestyle. A fair correlation, but that shouldn't discourage one from mounting a TV on the wall in front of their treadmill.
JTP - Apologise to Monica
Related to the answer of Mark, it is also a widely believed misconception (even among MDs) that if an AIDS test has 99% sensitivity, then someone testing positive is ill with 99% probability. Actually, the resulting probability can be extremely low... This is true for all illnesses which are relatively rare in our society, see http://uhavax.hartford.edu/bugl/treat.htm#notions
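The underlying arithmetic is Bayes' theorem. A minimal sketch with illustrative numbers (the prevalence and specificity below are assumptions for the sake of the example, not figures from the linked page):

```python
def p_ill_given_positive(sensitivity, specificity, prevalence):
    """P(ill | positive test) from Bayes' theorem."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# 99% sensitivity and 99% specificity, but only 0.1% of the population is ill:
print(p_ill_given_positive(0.99, 0.99, 0.001))  # ~0.09, i.e. about 9%, not 99%
```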
$\begingroup$ @Noah already covered this. $\endgroup$ – Nick Stauner Apr 6 '14 at 7:10
This fallacy is probably less well-known than others: large samples always mean better confidence. This turns out to be false in the presence of even the slightest bias.
Imagine an experiment to determine if a subject can read another's mind. The experimenter picks a card in a random (uniform) deck of blue and red cards, looks at it, and the subject guesses the color. Then one determines the probability to achieve at least the observed success rate if the subject had no supernatural power (i.e. guess chance $50\%$) and determines a $p$ value. Now, this experiment is subject to very small bias (the experimenter might lead the subject by slightly different reactions depending on the color, even unconsciously). If the raw guess chance is indeed $50\%$ but the bias improves it to $50.01\%$, then using a huge enough sample the observed guess rate will approach $50.01\%$ sufficiently to exclude the $50\%$ hypothesis with large confidence.
Here the sample size needed would be really big (of the order of hundreds of millions, since the spread decreases as the square root of the sample size), but with a more important bias even smaller samples will give the illusion of great confidence. However small the bias is, and however stringent the confidence level, there is a sample size above which this fallacy of large numbers kicks in.
The take away is that when confronted with a study that uses a huge sample size and has a tiny $p$-value, one should be concerned about the effect size. A small effect size might mean the $p$-value is driven by the bias and the large sample rather than by an intrinsic effect.
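A short simulation of the mind-reading example under the stated assumptions (a true guess rate of 50% nudged to 50.01% by bias); the sample sizes are illustrative. With a large enough sample, a one-sided test of the 50% hypothesis returns a tiny p-value even though the effect size remains negligible:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
p_true = 0.5001                      # 50% guessing plus a tiny experimenter bias

for n in (10_000, 1_000_000, 400_000_000):
    successes = rng.binomial(n, p_true)
    p_hat = successes / n
    z = (p_hat - 0.5) / np.sqrt(0.25 / n)   # one-sided z-test of H0: p = 0.5
    print(f"n = {n:>11,}   observed rate = {p_hat:.5f}   p-value = {norm.sf(z):.3g}")
```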
Theta rhythmicity governs human behavior and hippocampal signals during memory-dependent tasks
Marije ter Wal, Juan Linde-Domingo, Julia Lifanov, Frédéric Roux, Luca D. Kolibius, Stephanie Gollwitzer, Johannes Lang, Hajo Hamer, David Rollings, Vijay Sawlani, Ramesh Chelvarajah, Bernhard Staresina, Simon Hanslmayr & Maria Wimber
Nature Communications volume 12, Article number: 7048 (2021)
Memory formation and reinstatement are thought to lock to the hippocampal theta rhythm, predicting that encoding and retrieval processes appear rhythmic themselves. Here, we show that rhythmicity can be observed in behavioral responses from memory tasks, where participants indicate, using button presses, the timing of encoding and recall of cue-object associative memories. We find no evidence for rhythmicity in button presses for visual tasks using the same stimuli, or for questions about already retrieved objects. The oscillations for correctly remembered trials center in the slow theta frequency range (1-5 Hz). Using intracranial EEG recordings, we show that the memory task induces temporally extended phase consistency in hippocampal local field potentials at slow theta frequencies, but significantly more for remembered than forgotten trials, providing a potential mechanistic underpinning for the theta oscillations found in behavioral responses.
In everyday life, our brains receive a virtually never-ending stream of information that needs to be stored for future reference or requires integrating with pre-existing knowledge. The hippocampus is the hub where encoding and retrieval of information is coordinated (for reviews see refs. 1,2,3). Information streams within hippocampus and between hippocampus and cortex are thought to be orchestrated by the phase of the theta rhythm4,5,6. Here, we ask whether theta oscillations clock responses during memory tasks, producing rhythmicity in behavior.
During memory formation, information processed by cortical regions is sent to the hippocampus and presumably encoded in the form of a sparse, associative index. Conversely, during retrieval, cues trigger the completion of existing patterns encoded in hippocampus, eliciting reinstatement of the memory in associated cortical regions. Both memory encoding and retrieval have been associated with changes in oscillatory patterns in hippocampal local field potentials (LFPs). The LFP of rodents is dominated by oscillations in the 4–8 Hz theta frequency band, while a broader low-frequency band is apparent in humans, with frequencies in intracranial recordings often peaking between 1 and 5 Hz during memory tasks7,8,9,10. Several studies have shown that encoding of later-remembered items is accompanied by higher theta power compared to later-forgotten items9,11,12,13, but see ref. 14. Similarly, phase–amplitude coupling between theta and gamma oscillations increases during successful encoding12,15,16. Finally, spiking activity of hippocampal neurons was reported to lock to the LFP at theta frequencies17 specifically during successful encoding18.
During memory retrieval, theta power increases in cortical areas that are involved in reinstatement19 and synchronization between these areas and hippocampus increased at theta frequencies20,21,22,23,24. Intriguingly, recall signals in hippocampus precede reinstatement in the cortex by about one theta cycle, suggesting hippocampus and cortex communicate within theta "windows" during memory recall3. In recent human studies, reinstatement of remembered associations was found to be theta-rhythmic25, and remembered spatial goal locations were represented at different phases of the theta rhythm26,27.
Theoretical work in the memory domain proposes that destructive interference between new information entering hippocampus and stored, reactivated information is reduced by locking to opposing theta phases28. Indeed, strengthening of synaptic connections (long-term potentiation) is more likely to occur around the trough of theta cycles29,30, while synaptic depression is more pronounced at the peak30. In line with these findings, rodent work suggests that communication of new information from cortex to hippocampus predominantly occurs around the theta trough, while retrieval-related spiking activity in hippocampus is observed around the theta peak31,32,33,34. Intracranial recordings from epilepsy patients suggest similar network dynamics, with entorhinal cortex and hippocampus synchronizing their theta phase during encoding, while hippocampus locked to the downstream subiculum during retrieval35. Furthermore, optogenetically suppressing neural activity during task-irrelevant phases of the theta oscillation improves performance36, demonstrating that the theta phase has a functional link to memory performance.
Consistent locking of encoding and retrieval processes to the theta rhythm predicts that these processes appear as rhythmic. Rhythmicity might therefore be visible in behavioral markers that depend on long-term memory. To our knowledge, no work has tested for such rhythmicity in memory-dependent tasks. However, recent studies on attentional scanning in both monkeys and humans suggest that oscillatory activity can manifest in behavioral performance, reflecting periodic switches in attended locations37,38,39,40,41,42.
Here, we ask whether the presumed clocking of neural memory processes by the theta rhythm translates into an observable oscillatory modulation of behavior. We analyze responses from hundreds of participants completing a memory task, in which they press buttons to indicate the exact time points at which they formed or recalled associative memories. We find significant oscillations in both encoding and retrieval responses, with peak frequencies in the lower theta frequency band (1–5 Hz8,9). No oscillatory signatures are observed in button presses from task phases that do not depend on memory. Moreover, incorrect trials do not lock to the rhythm identified for correct trials. To underpin our behavioral findings with a neural mechanism, we analyze hippocampal LFPs recorded in epilepsy patients. These exhibit temporally extended phase locking in the low theta range during memory-dependent task phases, for correct but not incorrect trials. Finally, we show that encoding and retrieval trials show maximal phase alignment at opposite phases of the theta rhythm. Together, our results demonstrate that theta-rhythmicity of memory processing can be detected in human behavior and direct hippocampal recordings.
Button presses indicate the timing of memory-dependent and -independent processing
In this study we asked whether signatures of hippocampal rhythms can be found in behavioral responses during memory encoding and retrieval. We analyzed the data from 226 participants who performed associative memory tasks, consisting of multiple blocks with encoding, distractor, and retrieval phases (Fig. 1A). During encoding phases, participants viewed a cue (verb or scene image), followed by a stimulus (photo or drawing of an object). They pressed a button when they made an association between cue and stimulus, providing us with an estimate of the timing of memory formation (Encoding button press). During retrieval phases, cues were shown in random order and participants were asked to remember the associated objects. Participants in group 1 (n = 71) indicated the moment they remembered the object by pressing a button (Retrieval button press) and then answered one or two catch questions (e.g., "animate or inanimate?") about the already reinstated object (Catch-after-retrieval button press). Participants in group 2 (n = 155) were shown the catch question before the cue appeared. This group mentally reinstated the object and pressed the button as soon as they were able to answer the question (Catch-with-retrieval button press), indicating the time of subjective memory retrieval in this group. Each participant memorized between 64 and 128 cue–object pairs. Objects, cues, and catch questions varied between experiments; for details see "Methods" and Supplementary Table 1.
Fig. 1: Button presses indicate the timing of memory-dependent and -independent processing.
A Structure of the memory task. Two groups of participants (groups 1 and 2; n = 226) completed blocks consisting of an encoding phase (top row) in which they associated cues ("spin", "cut") to objects, a distractor phase (not shown), and one of two versions of a retrieval phase (bottom rows), in which they answered catch questions about remembered objects ("animate or inanimate?", "photo or drawing?"). B Structure of the visual task. A separate group of participants (group 3; n = 95) answered questions about objects on the screen, using the same questions and stimuli as the memory task. C Number of participants that were included in further analyses, after exclusion of participants with a high number of incorrect and/or timed-out trials (Supplementary Fig. 1A). Note that participants in group 1 contributed button presses to three task phases (Encoding, Retrieval, and Catch-after-retrieval), group 2 contributed to 2 (Encoding and Catch-with-retrieval), and group 3 to 1 task phase (Visual). The example stimuli shown in A and B were taken from the BOSS database70.
In order to separate memory processes from perceptual task elements, a separate group (group 3; n = 95) performed visual control tasks using the same questions and objects (Fig. 1B). Participants were shown a question (e.g., "animate or inanimate") followed by an object, and they answered the question by pressing a button (Visual button press). Note that the button presses from the visual task do not depend on episodic memory, as they pertain to objects that are constantly visible. Answers to the catch question for memory group 1 (Catch-after-retrieval button press) are also not expected to rely on hippocampal memory retrieval, since they are asked after objects are reinstated. Answering these questions is, however, likely to rely on maintenance of the retrieved object in working memory. Throughout, we use the term memory-dependent to mean relying on hippocampus-dependent associative memory.
We analyzed performance of each participant based on the catch questions. Participants who performed at chance level (binomial test) were excluded from further analyses (n = 12 for memory task; n = 0 for visual task, Supplementary Fig. 1A). In addition, 32 (1) participants with sufficient performance had a low trial count for the encoding (retrieval) phase due to trial time-outs, and hence were excluded for encoding (retrieval) phase analyses (Supplementary Fig. 1A). In general, participants responded well within the allotted response times (Supplementary Fig. 1C and Supplementary Table 3). The included participants (Fig. 1C) had an average performance of 84.0% (range 56.3–100%) for the memory groups and 96.3% (range 78.2–100%) for the visual group (Supplementary Fig. 1B). For the number of responses and reaction times per task phase see Supplementary Fig. 1 and Supplementary Table 3.
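As an illustration of this exclusion criterion, the sketch below assumes two-alternative catch questions (so chance performance is 50%) and a conventional one-sided alpha of 0.05; the exact threshold and alternative hypothesis used in the study are not spelled out here, so these details are assumptions.

```python
from scipy.stats import binomtest

def above_chance(n_correct, n_trials, alpha=0.05):
    """Keep a participant only if catch-question accuracy beats 50% chance."""
    result = binomtest(n_correct, n_trials, p=0.5, alternative="greater")
    return result.pvalue < alpha

print(above_chance(45, 64))   # 70.3% correct on 64 trials -> included
print(above_chance(33, 64))   # 51.6% correct              -> excluded
```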
Oscillatory patterns can be detected in behavioral responses using the O-score
The button presses from the memory tasks provided us with estimates of when participants formed and reinstated memories on each trial. Figure 2A shows the button presses from all retrieval trials of one participant, as well as the smoothed response density across trials. We asked whether the response densities showed oscillations, as suggested by the trend-removed trace in Fig. 2A (right), and whether these patterns differed between memory-dependent and -independent task phases.
Fig. 2: Oscillatory patterns can be detected in behavioral responses using the O-score.
A Timing of retrieval button presses for an example participant. Each circle is one button press, with correct trials in green and incorrect trials in purple. Convolving the correct trials with a Gaussian kernel (left panel, solid green line) reveals an overall trend in response density (left panel, gray line) as well as an oscillatory modulation (right panel, dashed green line). This oscillation was well fitted by a sine wave at the frequency identified by the Oscillation score procedure (light blue, arbitrary amplitude). More examples are given in Supplementary Fig. 2. B Step-by-step representation of the Oscillation score method, for the participant from panel A. O scores were computed for the original data (dark green) and 500 reference datasets with the same overall response trend (gray) following the procedure from ref. 43, summarized in the blue box. For parameter validation see Supplementary Fig. 3, and for further details see the main text and "Methods". Source data are provided as a Source Data file.
To address this we used the Oscillation score43. This procedure identifies the dominant frequency in the response time stamps, and provides a normalized amplitude at this frequency: the O-score. In brief, after removal of early and late outliers (Fig. 2B, step I), we computed the O-score following the procedure in ref. 43 (Fig. 2B, blue box): The auto-correlation histogram (ACH) is computed for the button presses from correct trials and smoothed with a Gaussian kernel (σ = 2 ms) to reduce noise. The central peak of the ACH is removed. All remaining positive lags are Fourier transformed, and the frequency with the highest magnitude is found within a frequency range of interest (adjusted per participant based on the signal length (lower bound) and number of responses (upper bound), with a minimum of 0.5 Hz and maximum of 40 Hz). The O-score is computed by dividing the peak magnitude by the average of the entire spectrum.
The O-score indicates how much the spectral peak stands out, but does not take into account the overall response structure (gray trend curves in Fig. 2) and the limited number of data points, which could introduce a frequency bias. To account for this, we fitted a trend curve (Gamma distribution) for each participant and generated 500 random series of button presses based on this structure, with the same number of data points as the original dataset (Fig. 2B, II, see "Methods" for details). We computed the O-score at the peak frequency from the intact data for each randomization, and Z-transformed the original O-score against the 500 reference O-scores (Fig. 2B, III). This allowed us to statistically assess the strength of the behavioral oscillation for each participant and task phase, and perform second-level statistical assessments across participants.
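A condensed Python sketch of the core O-score computation (the steps in the blue box of Fig. 2B) is given below. The bin size, kernel width, and the way the central ACH peak is blanked are simplifications of the procedure in ref. 43, and the trend-matched surrogate and Z-scoring step described above is omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def o_score(press_times, f_lo=0.5, f_hi=40.0, bin_s=0.001, sigma_s=0.002):
    """Oscillation score and peak frequency for a set of button-press times (seconds)."""
    press_times = np.asarray(press_times, dtype=float)

    # 1. Auto-correlation histogram (ACH) of all pairwise time differences.
    diffs = (press_times[:, None] - press_times[None, :]).ravel()
    diffs = diffs[diffs != 0]                                  # drop self-pairs
    max_lag = np.abs(diffs).max()
    edges = np.arange(-max_lag - bin_s, max_lag + 2 * bin_s, bin_s)
    ach, _ = np.histogram(diffs, bins=edges)
    ach = ach.astype(float)

    # 2. Smooth with a Gaussian kernel and blank the central peak (simplified here).
    ach = gaussian_filter1d(ach, sigma=sigma_s / bin_s)
    center = np.argmin(np.abs(edges[:-1] + bin_s / 2))        # bin closest to lag 0
    half = int(round(0.01 / bin_s))                            # blank +/- 10 ms around lag 0
    ach[center - half:center + half + 1] = 0
    ach -= ach.mean()                                          # suppress the DC component

    # 3. Spectrum of the non-negative lags; find the peak within the band of interest.
    spectrum = np.abs(np.fft.rfft(ach[center:]))
    freqs = np.fft.rfftfreq(len(ach[center:]), d=bin_s)
    band = (freqs >= f_lo) & (freqs <= f_hi)

    peak_freq = freqs[band][np.argmax(spectrum[band])]
    score = spectrum[band].max() / spectrum.mean()             # peak over mean magnitude
    return score, peak_freq
```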
We validated the performance of the O-score and Z-scoring methods using simulated data that mimicked the characteristics of the behavioral dataset (Supplementary Note 2 and Supplementary Fig. 14). This provided several important validations: (1) when no or very weak oscillations were present in the simulated data, the O-score was never significant at the population level; (2) when the O-score reached significance for our simulated populations, the O-score identified the correct frequency; and (3) the O-score procedure performed well for different task phases, despite differences in participant count, number of responses, or average reaction time.
Behavioral responses oscillate at theta frequencies for memory-dependent task phases
Significant O-scores were observed for encoding and retrieval button presses from both versions of the memory task (Fig. 3A), specifically Encoding (t(181) = 6.20, p < 0.001); Retrieval (t(68) = 4.58, p < 0.001); and Catch-with-retrieval (t(143) = 5.08, p < 0.001; all Bonferroni-corrected for five comparisons; effect sizes in Supplementary Table 5). Additionally, the proportion of participants with significant O-scores was high (Fig. 3B): 74.7% for Encoding, 69.6% for Retrieval, and 76.4% for Catch-with-retrieval. On the other hand, no evidence for a behavioral oscillation was found for memory-independent task phases, with non-significant O-scores for the catch questions after reinstatement and the visual task (Catch-after-retrieval: t(69) = 1.69, p = 0.240; Visual: t(94) = −4.10, p = 1.00; Bonferroni-corrected for five comparisons; the t-value captures deviation from the reference-defined threshold; hence, both non-significant and negative t-values signify lack of evidence for oscillations). Note that the Catch-after-retrieval data were obtained from the participants in memory task group 1, while the Visual task was recorded in an independent group of participants (see Fig. 1C). The proportion of participants with significant O-scores was lower than for memory-dependent phases: 64.3% for Catch-after-retrieval and 37.9% for the Visual task. Raw O-scores showed a similar pattern across task phases (Supplementary Fig. 4A).
Fig. 3: Behavioral responses oscillate at theta frequencies for memory-dependent task phases.
A Scatter plot of O-scores (Z-scored) per task phase, where each circle is one participant, and box plots representing the 5, 25, 50, 75, and 95% bounds of the O-score distribution across participants. The dashed line gives the significance threshold for single participants (α = 0.05, one-tailed, Z-distribution). The outcome of a second-level t-test is given above each task phase, and the comparison between memory-dependent and -independent task phases is given at the top (linear mixed model, see the main text and "Methods"). See Supplementary Fig. 4 for raw O-scores and additional statistics and Supplementary Fig. 5 for analyses excluding participants for whom the response density trend could not be fitted; n.s. not significant; *: 0.05 ≥ p > 0.01; **: 0.01 ≥ p > 0.001; ***: p ≤ 0.001 (for exact values see the main text), Bonferroni-corrected for five comparisons. B Proportion of participants with significant (i.e. above the Z-threshold in A; blue bars) and non-significant O-score (white bars). C Histograms of peak frequencies per task phase, for participants with significant O-scores, as a fraction of the total number of participants. The yellow outline indicates the 1–5 Hz frequency band. Participant numbers can be found in Fig. 1C. Source data are provided as a Source Data file.
To test whether memory-dependent task phases had significantly higher O-scores than memory-independent phases, we fitted a linear mixed-effects model to the Z-scored O-scores. Fixed terms in this model were memory dependence and length of the time series, which varied substantially between task phases (Supplementary Fig. 1C); we included the intercept per subject as random effect, to address potential dependencies due to participants of the memory task contributing 2 or 3 data points. We found strong support for an effect of memory dependency on O-score, with significantly higher Z-scores for memory-dependent than memory-independent task phases (Fig. 3A; coefficient = 0.28; 95% CI: 0.19–0.36; t(556) = 6.55; p < 0.001). This was unaffected by time series length (coefficient = 0.0022; 95% CI: −0.0035 to 0.0080; t(556) = 0.768; p = 0.443). Post hoc (paired) t-tests confirmed these trends within memory groups 1 and 2, and demonstrated that the visual task had significantly lower O-scores than all other task phases (Supplementary Fig. 4B–D and Supplementary Table 4, effect sizes in Supplementary Table 5). These trends were qualitatively similar within all included experiments (see Fig. 4E) and were hence not driven by a single experimental setup or stimulus set.
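For readers wishing to run an analogous comparison on their own data, a minimal sketch with statsmodels is shown below; the column names and input file are placeholders, and the original analysis may have used different software or coding of the factors.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical table: one row per participant x task phase, with columns
#   z_oscore          Z-scored oscillation score
#   memory_dependent  1 for Encoding / Retrieval / Catch-with-retrieval, else 0
#   series_length     length of the response time series (s)
#   participant       participant identifier (used as the random intercept)
df = pd.read_csv("oscores_by_taskphase.csv")

model = smf.mixedlm("z_oscore ~ memory_dependent + series_length",
                    data=df, groups=df["participant"])
print(model.fit().summary())
```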
The high O-scores we found for memory-dependent task phases are a strong indication of rhythmicity of behavior. Interestingly, the peak frequencies of significant O-scores from memory-dependent task phases were non-uniformly distributed (Kolmogorov–Smirnov test for uniformity; Encoding: D* = 0.395, p < 0.001; Retrieval: D* = 0.381, p < 0.001; Catch-with-Retrieval: D* = 0.236, p < 0.001; corrected for five comparisons). Most participants showed peak frequencies (Fig. 3C) between 1 and 5 Hz or harmonics of this range. These frequencies align with the low theta band identified in human hippocampal recordings during memory tasks8,9. Conversely, peak frequencies were broadly distributed for Catch-after-retrieval and the Visual task (Kolmogorov–Smirnov test for uniformity; Catch-after-Retrieval: D* = 0.158, p = 0.957; Visual: D* = 0.0894, p = 1.00; corrected for five comparisons). To directly test for a difference between the frequencies of memory-dependent and -independent task phases, we fitted a linear mixed model to the frequencies of significant O-scores, with memory dependency and time series length as fixed effects, and participant as random effect. This revealed that frequencies for memory-dependent task phases were significantly lower than for memory-independent tasks (coefficient = −5.09; 95% CI: −7.52 to −2.67; t(371) = 6.55; p < 0.001, post hoc tests in Supplementary Table 6). There was a small effect of time series length on frequency (coefficient = −0.202; 95% CI: −0.358 to −0.0461; t(371) = −2.55; p = 0.011), with higher frequencies for memory-independence corresponding to shorter time series. To ensure that the results were not amplified by the lower-frequency limit in the O-score procedure, set to 1/3 of the time series length, we loosened this bound to twice the time series length and recomputed the O-scores. This produced similar results (Supplementary Fig. 6; Catch-after-retrieval: t(69) = 0.908, p = 0.917; Visual: t(94) = −6.20, p = 1.00; Bonferroni-corrected for five comparisons), reaffirming that the identified difference between memory-dependent and -independent task phases is not caused by differences in response times.
Reaction times of incorrect trials are not locked to the behavioral oscillation
The O-scores reported in Fig. 3 were based on correct trials only. Due to a low number of incorrect trials, it was not possible to establish whether incorrect trials show oscillatory modulation. However, we were able to test whether incorrect trials locked to the oscillation of the correct trials (correcting for fitting bias, see below) for every participant with a significant O-score. The instantaneous phase of the oscillation was determined by smoothing and filtering the correct response trace around the participant's peak frequency (example in Fig. 2A, solid green line) and performing a Hilbert transform. We then determined the phases at which the incorrect button presses occurred (Fig. 4, purple lines). Similarly, we found the phase of each correct response relative to all other correct trials, by recomputing the instantaneous phase without the trial of interest, avoiding circularity (Fig. 4, green). As expected, for all memory-dependent task phases, correct trials more often occurred around the peak of the oscillation (V-test for non-uniformity around 0°; Encoding: V (11398) = 1861.1, p < 0.001; Retrieval: V (8036) = 1174.3, p < 0.001; Catch-with-retrieval: V (23815) = 2847.5, p < 0.001; Bonferroni-corrected for three comparisons; Note that the high trial count can inflate test results). On the other hand, phase distributions were uniform for incorrect trials (Encoding: V (1270) = 44.1, p = 0.120; Retrieval: V (954) = 11.2, p = 0.911; Catch-with-retrieval: V (5328)=73.8, p = 0.229; Bonferroni-corrected for three comparisons). This suggests that incorrect responses did not lock to the rhythm of the correct trials, while correct responses were locked to the oscillation from other correct trials, pointing to the behavioral relevance of the identified oscillation. We did not perform this analysis for memory-independent task phases, as we found no evidence for oscillations.
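A sketch of the phase-assignment step described above (response density from correct trials, band-pass around the participant's peak frequency, Hilbert transform) is given below; the sampling rate, kernel width, and filter band are illustrative choices, not the exact parameters used in the study.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import butter, filtfilt, hilbert

def press_phases(correct_times, query_times, peak_freq, fs=1000.0, sigma_s=0.05):
    """Phase (radians) of `query_times` relative to the oscillation in `correct_times`."""
    t_max = max(np.max(correct_times), np.max(query_times)) + 1.0
    t = np.arange(0.0, t_max, 1.0 / fs)

    # Response density of correct trials: delta train smoothed with a Gaussian kernel.
    density = np.zeros_like(t)
    density[np.searchsorted(t, correct_times)] += 1.0
    density = gaussian_filter1d(density, sigma=sigma_s * fs)

    # Band-pass around the participant's peak frequency, then Hilbert transform.
    b, a = butter(3, [0.5 * peak_freq, 1.5 * peak_freq], btype="band", fs=fs)
    phase = np.angle(hilbert(filtfilt(b, a, density)))

    return phase[np.searchsorted(t, query_times)]   # 0 = oscillation peak, +/- pi = trough
```

A V-test for non-uniformity around phase 0 can then be applied to the returned phases; for correct trials, the density would be recomputed without the trial of interest, as described above.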
We next directly tested the phase modulation of correct versus incorrect trials, accounting for potential biases caused by differences in trial count and procedure. We shuffled correct and incorrect trial labels 500 times per participant (i.e. keeping the original trial counts and response times), and computed the V-statistics of shuffled-correct and shuffled-incorrect trials as described previously. The difference in phase modulation between real-correct and real-incorrect responses was significantly higher than expected based on the difference between the shuffled-correct and -incorrect trials (Encoding: p < 0.002; Retrieval: p < 0.002; Catch-with-retrieval: p < 0.002).
For participants with at least 10 incorrect trials we also compensated for trial number biases by subsampling the number of correct trials to the number of incorrect trials (repeated 100 times), and recomputing the phases of both the selected correct and the incorrect trials relative to the remaining correct trials (Fig. 4, right panels). This procedure also demonstrated significantly higher phase modulation for correct than for incorrect trials for each of the memory-dependent task phases (two-tailed paired t-test; Encoding: t(51) = 4.07, p < 0.001, 95% CI: <0.0001–0.14; Retrieval: t(38) = 5.41, p < 0.001, 95% CI: <0.0001–0.024; Catch-with-retrieval: t(101) = 7.25; p < 0.001; 95% CI: <0.0001; Bonferroni-corrected for three comparisons). In conclusion, all comparisons show that correct responses are substantially more phase-locked to each other than to incorrect trials. Note that we cannot rule out that incorrect trials lock to each other at a different frequency. Combining these findings with our previous analyses, our data suggest that correct trials show substantial behavioral oscillations, but that incorrect trials do not lock to this oscillation.
Fig. 4: Reaction times of incorrect trials are not locked to the behavioral oscillation.
Phase distributions of incorrect responses relative to all correct responses (purple) and of correct responses relative to all other correct responses (green) for the task phases with significant O-scores: Encoding (A), Retrieval (B), and Catch-with-retrieval (C). The left panels show deviations from uniform phase distributions across all participants with significant O-scores. Here, 0 radians is defined as the peak of the oscillation, and ±π as the trough. Statistics for correct and incorrect trials individually were obtained with a V-test for non-uniformity of the distribution around phase 0, and a permutation test was used to compare correct with incorrect distributions to a trial label-shuffled reference distribution (500 permutations, see "Methods"). Right panels show V-statistics for correct and incorrect trials of participants with at least 10 incorrect trials (each gray line is one participant), after down-sampling the number of correct trials to the number of incorrect trials. Shown are the mean V-statistics per participant (dots) and the distribution of V-statistics across all participants (box plots indicating the 5, 25, 50, 75, and 95% bounds of the distributions), which were compared with a two-tailed paired t-test. Phase distributions corresponding to these down-sampled datasets can be found in Supplementary Fig. 7. n.s. not significant; *: 0.05 ≥ p > 0.01; **: 0.01 ≥ p > 0.001; ***: p ≤ 0.001 (for exact values see the main text), Bonferroni-corrected for three comparisons. Source data are provided as a Source Data file.
Increased phase locking of hippocampal LFPs during encoding and retrieval
The data reported so far indicate that across trials, memory-relevant behavioral responses fall onto a consistent phase of a theta oscillation. The presence of such an oscillation, determined on the basis of one response per trial, implies phase consistency across trials in the neural oscillations in hippocampus presumed to underlie memory formation and reinstatement, as previously shown by Kota et al.44 and Fell et al.45. We hypothesized that this phase consistency, induced by events in the trial, persists until the participant successfully encodes or retrieves the memory (expected to slightly precede the button press).
To test these predictions, we recorded hippocampal LFPs in 10 epilepsy patients undergoing seizure monitoring using intracranial EEG. These patients performed the same memory task as healthy participants, and their behavioral data are included in the previous results. We recorded from 42 Behnke–Fried micro-electrodes located in hippocampus (Fig. 5A), which ensures a truly local hippocampal signal, minimizing influence of volume conduction from neighboring cortical regions and connections. We wavelet-transformed the LFPs and computed the pairwise phase consistency (PPC; ref. 46) across trials for every frequency and time point. The PPC quantifies how similar the LFP phases are across trials. We performed this analysis separately for cue-, stimulus- and response-locked data and for correct (Fig. 5B) and incorrect trials (Fig. 5C).
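The pairwise phase consistency has a closed form that avoids looping over all trial pairs. A minimal sketch, assuming the per-trial phases at a given frequency and time point have already been extracted from the wavelet transform:

```python
import numpy as np

def ppc(phases):
    """Pairwise phase consistency (Vinck et al., 2010) across trials.

    `phases`: 1-D array of per-trial phases (radians) at one time-frequency point.
    """
    n = len(phases)
    # The sum of cos(theta_i - theta_j) over all ordered pairs equals |sum(exp(i*theta))|^2;
    # subtracting the n diagonal terms and dividing by n*(n-1) gives the mean over pairs.
    resultant_sq = np.abs(np.sum(np.exp(1j * phases))) ** 2
    return (resultant_sq - n) / (n * (n - 1))
```

Applying this function to every frequency and time point of the wavelet phases yields a PPC map of the kind shown in Fig. 5.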
Fig. 5: Increased phase locking of hippocampal local field potentials during encoding and retrieval.
A Locations of iEEG electrode bundles (n = 45, of which n = 42 included in further analyses) across all 10 participants, color-coded to indicate five regions of interest (yellow: amygdala; red: anterior hippocampus (HPC); purple: middle HPC; blue: posterior HPC; green: parahippocampal cortex). B, C Pairwise phase consistency (PPC, color-coded, second-level t-score) between correct (B) and incorrect (C) trials, locked to cue/stimulus onset or response of encoding (left column), retrieval (middle), and catch trials (right). Significant changes from baseline (α = 0.05, permutation test against time-shuffled trials) are indicated separately for increases (red) and decreases (blue). Black outlines indicate significant differences between correct and incorrect trials (α = 0.05, permutation test against shuffled trial labels, see also Supplementary Fig. 8A). For raw PPC values see Supplementary Fig. 8B and for PPCs per patient see Supplementary Fig. 9. D Response rate for correct (green) and incorrect (purple) trials, in the time windows and task phases corresponding to B and C. Source data are provided as a Source Data file. Number of data points specified in B also apply to C and D.
In line with our predictions, PPC across correct trials significantly increased after stimulus onset for encoding, and after cue onset for retrieval trials (α = 0.05; cluster-based permutation test against 100 time-shuffled datasets47). The clusters of significantly increased PPC (red outlines in Fig. 5B) covered a range of frequencies shortly after cue/stimulus onset, but extended in time in a frequency band between 2 and 3 Hz. This pattern was seen along the long axis of the hippocampus (Supplementary Fig. 10B) and in both hemispheres (Supplementary Fig. 10C), and was also observed for individual patients (Supplementary Fig. 9), resulting in a high consensus across patients (Supplementary Fig. 10A). The PPC peak frequencies and -values generally aligned with the frequencies and O-scores found in the behavioral data of these patients (see Supplementary Fig. 11). This lower theta cluster lasted up to the response (Fig. 5D), and resulted in a significant response-locked PPC cluster for retrieval (encoding showed increased but non-significant response-locked PPC). PPC increases were also visible in the raw data (Supplementary Fig. 8B) and appeared as theta oscillations in event-related potentials (Supplementary Fig. 8D), confirming that these effects were not caused by changes in baseline. Qualitatively similar PPC increases were found in recordings from hippocampal macro contacts in the same patients (Supplementary Note 1 and Supplementary Fig. 13). Increases in phase consistency were accompanied by increased power during retrieval, but not encoding and catch questions (see Supplementary Fig. 8C), in line with44, suggesting that amplitude and phase were modulated independently.
In line with the behavioral data, we found no significant increases in PPC for incorrect trials, neither for encoding nor retrieval. When comparing the PPC for correct and incorrect trials within electrodes (cluster-based permutation test against 100 trial-shuffled reference data sets), we found that the PPC increase after cue/stimulus onset in the 2–3 Hz frequency band was significantly stronger for correct than for incorrect trials (α = 0.05; black outlines in Fig. 5B, C). The intracranial recordings therefore support our hypothesis that task events induce temporally extended theta phase consistency in hippocampus across correct, but not incorrect trials. By showing that behavioral responses indicating the timing of completed memory encoding and retrieval were preceded by consistent hippocampal theta phases, these findings suggest a potential mechanism for our behavioral findings.
Encoding and retrieval occur at different phases of the theta rhythm
The PPC analyses in Fig. 5 demonstrate that theta phases are consistent across trials during both encoding and retrieval phases of the memory task. The identification of phase consistency allows us to ask whether the dominant phases of encoding and retrieval trials differ, which is a prominent suggestion in the computational literature28. To this end, we identified the time point and frequency at which PPC was maximal for both stimulus- and response-locked trials during encoding, and cue-locked and response-locked trials for retrieval, for each patient. We then computed the phase differences between encoding and retrieval at the corresponding frequencies and time points for every electrode. Indeed, phase differences between encoding and retrieval trials were non-uniformly distributed around 250.7 ± 14.1° for cue/stimulus-locked trials (Rayleigh's Z = 30.9; p < 0.001) and differed on average 162.4 ± 30.6° for response-locked trials (Rayleigh's Z = 7.35; p = 0.001). Both analyses provided support for a half-cycle difference between encoding and retrieval (V-test around 180°; cue/stimulus locked: V = 33.1, p = 0.005; response-locked: V = 46.7, p = 0.001; n = 326). We tested for inflation of these statistics due to the high channel count (by comparing the V-statistics against 500 time-shuffled datasets) and conclude that phase opposition for the response-locked trials was unlikely to be obtained by chance (p = 0.042), while for stimulus/cue-locked data (p = 0.126), the observed V-statistic could, in part, be inflated by channel count or a phase bias, for example, due to asymmetry in the theta cycles48.
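The V-test used here can be written out directly from its definition. A minimal sketch testing the encoding–retrieval phase differences for concentration around 180°, using the usual large-sample normal approximation for the p-value (the exact preprocessing of the phases is described in the Methods):

```python
import numpy as np
from scipy.stats import norm

def v_test(angles, mu=np.pi):
    """V-test for non-uniformity of circular data around the angle `mu` (radians)."""
    n = len(angles)
    mean_cos, mean_sin = np.mean(np.cos(angles)), np.mean(np.sin(angles))
    r = np.hypot(mean_cos, mean_sin)              # mean resultant length
    mean_angle = np.arctan2(mean_sin, mean_cos)
    v = n * r * np.cos(mean_angle - mu)           # V statistic
    u = v * np.sqrt(2.0 / n)                      # approximately standard normal under H0
    return v, norm.sf(u)                          # one-sided p-value
```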
To test whether phase opposition generalized beyond the time and frequency with the highest PPC, we computed response-locked event-related potentials for each hippocampal electrode bundle, and filtered these in the theta band (Fig. 6C). We compared the phase of encoding and retrieval ERPs using V-tests in 200 ms sliding windows. After FDR-correction, 55.8% of tested windows supported phase opposition between encoding and retrieval, which is unlikely to be produced by chance. These results further confirm the PPC analyses in Fig. 5 by demonstrating extended phase concentration in the period leading up to and around the button presses, and show that the dominant phases for encoding and retrieval are approximately 180° apart. Together, these results support both theoretical and empirical findings from previous studies that encoding and retrieval processes occur at different phases of the hippocampal theta rhythm, and generalize these findings to LFP recordings from the human hippocampus.
Fig. 6: Encoding and retrieval occur at different phases of the theta rhythm.
A, B Circular histogram of phase differences between encoding and retrieval at the time and frequency of maximal PPC (A) following stimulus (encoding) and cue onset (retrieval), and (B) before response, across all hippocampal channels (n = 326), with the mean direction in dark blue. Light blue bars give the mean and black whiskers the standard deviation across electrode bundles (n = 42). V-tests assessed non-uniformity around 180°, and were compared against 500 time-shuffled datasets (permutation test). C Event-related potentials (ERPs) for encoding (red) and retrieval (yellow) locked to the patient's response. Solid lines give the mean; shaded areas the SEM. Gray dots indicate time windows with significant phase opposition between encoding and retrieval (V-test in 200 ms sliding windows spaced 10 ms apart; FDR-corrected with q = 0.05). Red and yellow dots are the time points of peak PPC for individual patients used in Fig. 6B. Source data are provided as a Source Data file. n.s. not significant; *: 0.05 ≥ p > 0.01; **: 0.01 ≥ p > 0.001; ***: p ≤ 0.001 (for exact values see the main text).
In this study we demonstrated that oscillations can be detected in behavioral responses from associative memory tasks. Using the Oscillation score43, we showed that button presses that indicate the timing of memory encoding and retrieval were rhythmically modulated, i.e. periodically more or less likely to occur, predominantly in the 1–5 Hz frequency band. We found no evidence for behavioral oscillations for memory-independent task phases. Button presses from forgotten trials did not lock to the oscillation of remembered trials, a distinction that was echoed by hippocampal LFP recordings from 10 epilepsy patients: phase consistency across trials significantly increased in the slow theta range during the encoding and retrieval of later remembered, but not forgotten associations. Finally, phase consistency during encoding and retrieval peaked at opposite phases of the theta cycle, aligning with earlier work suggesting that encoding- and retrieval-related information flows are orchestrated by the phase of the hippocampal theta rhythm. Our data show that these hippocampal mechanisms influence the timing of overt human behavior.
In our study, we relied on button presses that explicitly marked the timing of memory formation and recall. Though these responses are subjective and rely on multiple neural processes, our results allow us to exclude several alternative explanations. Firstly, the behavioral oscillations cannot be explained by rhythmicity in visual processing, as the Encoding and Visual task phases shared identical visual inputs. Secondly, a behavioral oscillation was detectable when memory reinstatement was combined with a catch question (Catch-with-retrieval), but not when the catch question was asked 3 s after reinstatement (Catch-after-retrieval), suggesting that (1) the observed oscillation did not result from motor processes and (2) the lack of oscillations in memory-independent phases cannot be attributed to the nature or content of the catch questions. The data also show that rhythmic clocking is not universal within memory tasks: correct but not incorrect trials showed locking to a theta oscillation, and this result was mirrored in electrophysiology.
We did not observe significant oscillations in behavior for processes that we a priori marked as memory-independent, namely answering catch questions after reinstatement and the visual task. These task phases also did not contain an attentional selection element and did not rely on memory-guided visual search. These cognitive processes were previously linked to theta rhythmic modulation of behavior37,38,39,40,41,42 and saccadic eye movements49,50,51, respectively. We did however find increased PPC in hippocampal signals after catch questions appeared on screen. O-scores for the corresponding Catch-after-retrieval task phase, although not significant, were higher than for the visual task. Possible explanations are that retrieval-induced oscillations extend in time, or that catch questions induce a second, weaker reinstatement of the memory, leading to behavioral oscillations that are too weak to detect robustly. Alternatively, the oscillations observed for the catch questions could result from maintaining the retrieved object in working memory. Working memory has been proposed to be mediated by theta-nested gamma bursts52,53, synchronizing a network of cortical memory areas19,54,55,56,57, as well as the hippocampus58 (for reviews see refs. 59,60). The micro-electrode recordings presented here do not allow us to distinguish between retrieval- or maintenance-related theta oscillations, and further work is needed to understand if the behavioral oscillations reported here are specific to long-term memory.
The behavioral theta oscillations for memory-dependent task phases, together with increased PPC in hippocampal LFPs across trials, suggest that events in the memory task (i.e. cue/stimulus onset) induce consistent phase resets in the hippocampal theta rhythm. Our findings suggest this phase reset is most pronounced in the slow 1–5 Hz theta band. Several human intracranial EEG studies have reported prominent slow theta oscillations during episodic memory tasks7,8,9,10, while the higher 4-8 Hz frequency band typically observed in rodents seems to be linked to movement or spatial processing7. LFP phase resets and phase locking after task events have been reported for the slow theta band in memory paradigms16,18,61,62, and phase consistency directly preceding11 and following44,45 stimulus presentation has been shown to predict memory performance. In line with our finding, a recent study44 reported a dissociation between theta power and phase consistency, with power decreasing during encoding, but increasing during retrieval, while phase consistency increased for both processes. Like in rodents, human hippocampal neurons lock their firing to theta oscillations shortly before and during the encoding of later-recognized but not later-forgotten images, for both slow and fast theta bands18. In line with our findings, theta phases were found to differ between encoding and retrieval62, although the effects were limited to an early time window after stimulus presentation, and theta frequencies below 4 Hz were not included in that study. Our intracranial EEG results extend previous findings by demonstrating that post-stimulus phase consistency and encoding-retrieval phase consistency and opposition extend in time in a narrow frequency band, providing a potential neurophysiological mechanism for the theta-clocked behavior.
Our behavioral results align closely with the PPC analyses in terms of dominant frequency and subsequent memory effect; the presence of both LFP phase consistency and behavioral oscillations for correct trials, but absence of both during incorrect trials, suggest a link between the hippocampal rhythms and behavior. Further work is needed to establish how oscillations in hippocampal processes translate to oscillations in behavioral responses. In principle, it is sufficient for hippocampal output to cortical areas to fluctuate rhythmically, i.e., for encoding and recall signals from hippocampus to occur more frequently at certain time windows. Such fluctuations will then be maintained, though at a delay, in subsequent processing steps that lead up to the motor response, without the necessity for cortical areas to show theta oscillations themselves. Alternatively, behavioral oscillations could arise from theta rhythms in cortical areas that are entrained to or induced by the hippocampal theta rhythm. Coherence with the hippocampus at theta frequencies has been demonstrated for entorhinal35, parietal63, and frontal cortices26 during memory tasks, but it remains to be determined whether hippocampal–cortical theta coherence underlies the behavioral oscillations reported here. Along with optogenetic techniques in rodents36,64, transcranial magnetic stimulation over lateral parietal cortex in humans might provide a promising way of establishing causality between hippocampal theta and behavior, since it was shown to improve memory performance65 and hippocampal–cortical coherence particularly when stimulating in theta-bursts66. If theta-frequency TMS can enhance memory performance by boosting or entraining theta oscillations, this approach could potentially establish a direct link between hippocampal and behavioral oscillations in healthy humans.
Phase coding is a powerful candidate neural mechanism for optimizing specificity and sensitivity on the one hand, and flexibility on the other. Outside the memory domain, rhythmic switching of visual attention has been demonstrated at theta frequencies38,39,40,41. In memory tasks, potentially interfering mnemonic information has been shown to recur at different phases12,26 of the hippocampal theta rhythm. Items kept in working memory are thought to be represented in gamma cycles separated in the theta/alpha phase52,55. Visual stimulation at relevant theta/alpha phases, but not opposite phases, boosted working memory performance67. Our findings support the notion that not only sensory inputs are sampled periodically by attention, but that internal, mnemonic information is sampled rhythmically as well. Empirical evidence is thus accumulating in both humans and other animals for a powerful role of phase coding and sampling in cognitive processes.
Detecting oscillations in sparse behavioral data is not a trivial task, particularly in memory paradigms that rely on one-shot learning, like the task presented here. The trial counts for these tasks are limited by the number of unique trials participants can perform, which ultimately limits the detectability of oscillations. We showed that, despite these limitations, the O-score method43, a method to detect oscillations in spike trains, and our Z-scoring approach were sensitive enough to detect oscillations in behavioral data. Based on simulated datasets, we identified that the sensitivity of the O-score method improved with a higher density of the responses. Interestingly, in our dataset response density was lowest for the encoding task phase, which produced significant O-scores despite the expected reduced sensitivity, suggesting these oscillations are of substantial amplitude. In addition, the simulations showed the O-score method maintained good selectivity in all tested conditions, i.e., did not produce spurious results for weak or absent oscillations, and identified the correct frequency when O-scores were significant. In summary, the O-score method was both sensitive and selective to oscillations for all task phases. It is important to note, however, that a reliable analysis of oscillations in sparse data requires repeated measurements, either in the form of repeated trials43 or across a large number of participants, like we have done here.
Our results suggest that theta-rhythmicity of memory encoding and retrieval processes can not only be found in neural correlates but also has a clear behavioral signature: the likelihood that a memory is formed or recalled rhythmically fluctuates within a trial, at a slow theta frequency, resulting in rhythmicity of button presses relying on these processes. Our findings suggest that behavior can be a relatively straightforward, yet powerful way to assess rhythmicity of neural memory processes, an approach that can potentially be extended to many other cognitive domains. Together, our behavioral data and hippocampal LFP recordings point to an important mechanistic role for lasting phase consistency in the hippocampal theta rhythm during memory-dependent processing.
A total of 216 healthy participants took part in behavioral, EEG, and fMRI/EEG studies using the memory tasks described in the next section. A group of 10 epilepsy patients also performed a very similar memory task; more details about this group are given in the section "iEEG recordings: patients and recording setup". A separate group of 95 healthy participants completed the visual tasks. All healthy participants volunteered to participate in the studies and were compensated for their time through a cash payment (£6–8 per hour) or the University's course credit system. All participants gave written informed consent before starting the study. None of the healthy participants reported a history of neurological or psychiatric disorders, and all had normal or corrected-to-normal vision. Participants only took part in one version of the task; for example, participants in the behavioral visual task could not take part in the memory EEG study. Only the behavioral data are presented here. A subset of the behavioral data (visual experiments 1 and 2, and memory experiments 5 and 6), as well as the EEG data from experiment 10 (see Supplementary Table 1), were previously reported in ref. 68, while data from experiment 9 were previously reported in ref. 69. All studies with healthy participants took place in facilities of the University of Birmingham, and the participants were recruited through the university's research participation scheme. All studies were approved by the Science, Technology, Engineering and Mathematics Ethical Review Committee of the University of Birmingham. Demographic information for each of the participant groups is available in Supplementary Table 1.
Task versions
In this manuscript we present behavioral and intracranial EEG data recorded during a series of visual and memory experiments. The experiments were originally designed to address the following question: is perceptual information about a stimulus analyzed earlier or later than semantic information, and is this processing order similar when viewing a stimulus compared with reinstating the same stimulus from memory? Data from five experiments (experiments 1, 2, 5, 6, and 10, see Supplementary Table 1) and the analyses addressing the original research question have previously been reported in ref. 68. Data from experiment 9 (see Supplementary Table 1) were previously reported in ref. 69. In the present manuscript, we analyze the button presses for perceptual and semantic questions together. We also include the behavioral data from an additional eight follow-up experiments that took place after the collection of the initial datasets.
The experiments can be divided into three main categories (Fig. 1): memory reaction time experiments; electrophysiology memory experiments; and visual reaction time experiments. We give a general description of each category of experiments below, as well as specific differences between experiments within each category. The numbers of participants per task version and their demographic information are given in Fig. 1 and Supplementary Table 1. The characteristics of each of the 13 task versions are summarized in Supplementary Table 2.
Groups 1 and 2: Memory experiments
In the memory experiments, participants first learned associations between cues and objects and later, after a distractor task, memories were reinstated in a cued recall phase, described in more detail below. Participants learned a total of 128 associations, divided into blocks of between four and eight trials. Each block consisted of an encoding phase, a distractor phase, and a retrieval phase. Cues consisted of action verbs (e.g., spin, decorate, hold, …) for all experiments except experiment 12 (details below).
In general, the memory tasks were set up as follows: Each encoding trial started with the presentation of a fixation cross for between 500 and 1500 ms to jitter the onset of the trial. The cue then appeared in the center of the screen for 2 s. After presentation of a fixation cross for 0.5–1.5 s, the stimulus (stimuli in experiment 6) appeared. Participants were asked to indicate when they made the association between cue and stimulus by pressing a button (encoding button press). The stimulus remained on the screen for 7 s. After the encoding phase, the participants performed a distractor task in which they judged whether numbers presented on the screen were odd or even. The distractor task lasted 60 s, after which the retrieval phase started. In the retrieval phase the participants were presented with the same cues as during encoding, though in a randomly different order, and were asked to recall the associated objects. They then answered either a perceptual or a semantic question about the reinstated object. The trial timed out if the participant did not answer within 10 s. Trials were separated by a fixation cross shown for 500–1500 ms.
The structure of the retrieval phases differed slightly between experiments. We therefore make a further distinction within the memory experiments: the electrophysiology experiments (group 1; experiments 10–13) and the behavioral experiments (group 2; experiments 5–9).
For group 1, we aimed to separate the reinstatement processes from the formulation of the answer to the catch question. To this end, participants were asked to indicate, through a button press, when they had a clear image of the associated object in mind. The trial timed out if the participant did not press the button within 10 s. They then kept the image in mind for 3 s, during which time the screen was blank. Finally, the answer options for the catch question appeared on the screen, after which the participants responded as quickly as possible. Participants had 3 s to respond. As a result, the retrieval trials of the electrophysiology experiments produced two button presses: a retrieval button press and a catch-after-retrieval button press. These button presses are analyzed separately. Only the reinstatement button press is considered memory-dependent, because the catch question appears at a time point when the object has supposedly already been fully retrieved.
For group 2, the answer options were shown on the screen for 3 s before the retrieval cue appeared. The catch-with-retrieval button presses obtained for the memory reaction time experiments can therefore be assumed to represent the time point when sufficient information has been retrieved about the object to answer the catch question.
The number of times we asked participants to retrieve associations was varied between behavioral experiments in group 2. In experiments 5, 7, and 8, every object was probed twice, and participants answered both the perceptual and semantic question for each object in random order. In experiment 9, every object was reinstated six times in the retrieval phase of the block, and twice during a delayed retest 2 days later. The data from the delayed test are not included here due to poor performance (average performance 49.6%, only 16 out of 52 participants performed above chance). In experiment 6, participants learned associations of triplets instead of pairs, consisting of cue, object, and scene image. During the retrieval phase of this experiment, each object was probed only once, and in addition to the perceptual and semantic questions participants were asked a question about the background image (indoor or outdoor?), such that each question was answered on 1/3 of the trials.
The group 1 memory task was used for EEG recordings (experiment 10), combined EEG/fMRI recordings (experiment 11), and intracranial EEG recordings in epilepsy patients (experiments 12 and 13, also see section "iEEG recordings: patients and recording setup"). Several small adjustments were made to the task to accommodate electrophysiology. First, to minimize the duration of the testing sessions, the duration of the distractor phase was reduced to 20 s in the EEG/fMRI experiment, while in the EEG and iEEG task versions, every object was reinstated only once. To compensate for the corresponding drop in the number of catch questions, participants answered both perceptual and semantic catch questions on every trial, one after the other, in random order. The doubling of the number of catch questions per trial was introduced after the first three EEG participants and three iEEG patients were recorded. In addition, the first three iEEG patients learned pairs of background scenes and objects, instead of verb–object pairs, with the background scenes functioning as cues during the retrieval phase (experiment 12). These patients only learned a total of 64 pairs. The reaction time data from these three patients did not differ from those of the other seven patients (see Supplementary Fig. 1) and no qualitative differences were found in the PPC analysis (PPC per patient is shown in Supplementary Fig. 9A). During encoding trials, the background and object appeared on the screen at the same time. As a result, the encoding trials of these three participants do not have a separate cue period.
We made two further modifications to the task for the iEEG recordings in all epilepsy patients. First, the task was made fully self-paced, such that the length of verb presentation and the period needed to associate cue and object were determined by the patient on each trial. The patients pressed a button when they were ready to move on. Second, to avoid loss of attention/motivation and/or to accommodate medical procedures, visitors, and rest periods, the task was divided into two or three sessions, recorded at different times or on different days. Data from different sessions were pooled and analyzed together. Details of the electrophysiological recordings included in this manuscript can be found in the section "iEEG recordings: patients and recording setup".
Group 3: Visual reaction time experiments
In the visual experiments, participants were shown a series of stimuli on the screen and were asked either a perceptual or a semantic question about each stimulus. The stimuli and questions used in the visual experiments were identical to those in the memory experiments. To obtain accurate estimates of the reaction times, the answer options were shown for 3 s prior to stimulus presentation. Stimuli were presented in the center of the screen. Each trial was preceded by a fixation cross for a random duration of between 500 and 1500 ms, so the onset of the trial could not be predicted. As in memory group 2, participants were instructed to answer as quickly as possible.
In experiments 1, 3, and 4, all 128 stimuli were shown twice, once followed by a perceptual and once by a semantic question (in random order), so both questions were answered for every object. In experiment 2, in which the object images were shown with a background, all stimuli were presented only once, followed by one of three questions: perceptual, semantic, or contextual, with the latter referring to the background (indoor or outdoor). All button presses were included here. We refer to ref. 68 for analyses comparing the different catch questions.
Stimulus sets
Across the experiments, three different stimulus sets were used, referred to as Standard, Shape, and Size. Supplementary Table 2 specifies for each experiment which stimulus set was used. Each stimulus set consisted of 128 emotionally neutral, everyday objects. Each object fell into one of two perceptual categories and one of two semantic categories. Participants were instructed about the perceptual and semantic categorizations before onset of the study and were shown examples that were not included in the remainder of the study. In the Standard stimulus set, used in most experiments, the semantic dimension divided the objects into animate and inanimate objects, while in the Shape and Size stimulus sets, used in experiments 3, 4, 7, and 8, objects were categorized as natural or man-made. Furthermore, three different perceptual dimensions were used across the tasks. In the Standard stimulus set, half of the stimuli were colored photographs and the other half were black-and-white drawings. In the Shape and Size stimulus sets only colored photographs were used. Instead, stimuli were categorized as either long or round objects (Shape stimulus set, exp. 3 and 7), or stimuli were presented as large or small pictures on the screen (Size stimulus set, exp. 4 and 8). Stimuli were selected from the BOSS database70 or other royalty-free online sources.
Stimulus presentation and pace
The task presentation was performed using MATLAB 2015a-2018a (The Mathworks Inc.), with Psychophysics Toolbox Version 3 (Releases between January 2017 and April 2019; https://github.com/Psychtoolbox-3/Psychtoolbox-3). With the exception of the fMRI/EEG and iEEG experiments, all experiments took place in dedicated testing rooms at the University of Birmingham, with the participants seated at a desk and watching a computer screen. The computer screens had a refresh rate of 60 Hz. A standard keyboard was used to record the responses. For the fMRI experiment, stimuli were projected onto a screen behind the scanner with a refresh rate of 60 Hz and viewed through a mirror. Participants answered using NATA response boxes. The iEEG experiment was presented and responses were recorded using a laptop (Toshiba Tecra W50) with a screen refresh rate of 60 Hz.
For encoding blocks, trials took on average 9.8 s to complete, resulting in an average of 0.31 visual events per second (cue onset, stimulus onset, and stimulus offset). For group 1, retrieval trials took on average 9.0 s for trials with one catch question (0.45 events per second) and 15.2 s for trials with two catch questions (0.39 events per second), while for group 2, retrieval trials took on average 6.8 s (0.44 events per second). Visual task trials took on average 5.8 s, resulting in on average 0.52 events per second. The event rates were below the lower frequency bound of the O-score analysis for the behavioral oscillation (minimum of 0.5 Hz, see "RT analysis: O-score and statistics per participant") and below the 1 Hz lower bound of the PPC analyses, while the screen refresh rate was higher than the upper frequency bound for both the O-score (maximum of 40 Hz) and PPC (12 Hz) procedures. Visual events are therefore unlikely to be the cause of the behavioral oscillations or PPC effects reported here.
Assessment of performance and exclusion of participants
Prior to reaction time analyses, the performance of each of the participants was analyzed based on their accuracy in answering the catch questions. Answers to catch questions were considered incorrect when participants chose the wrong answer, when they indicated they had forgotten the answer (memory tasks only), or when they did not answer on time (healthy participants only). The data of a participant were only included in the analysis of a task phase if the following two requirements were met: (1) catch question accuracy across trials exceeding chance level and (2) a minimum of 10 correct button presses per participant in the task phase of interest. The first criterion was assessed using a one-sided binomial test against a guessing rate of 50% with α = 0.05. The second criterion was needed because some participants repeatedly failed to provide encoding (reinstatement) button presses before trial time-out, leaving too few trials to run further analyses for the Encoding (Retrieval) phase, despite sufficient performance when answering the catch questions. The inclusion criteria were set a priori. The number of participants included in each of the task phases is shown in Fig. 1 and the number of excluded participants can be found in Supplementary Fig. 1A.
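As a concrete illustration, the two criteria could be checked with a few lines of MATLAB; this is a sketch, not the authors' code, and the variable names (nCorrectCatch, nCatch, nPresses) only stand in for the per-participant counts described above.

```matlab
% Sketch of the two a priori inclusion criteria for one participant and task phase.
% nCorrectCatch / nCatch : catch questions answered correctly / answered in total
% nPresses               : correct button presses in the task phase of interest
pChance = 1 - binocdf(nCorrectCatch - 1, nCatch, 0.5);   % one-sided binomial test vs 50% guessing
include = (pChance < 0.05) && (nPresses >= 10);          % both criteria must be met
```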
Of the participants who performed the memory tasks, 28 participants answered two catch questions per retrieval trial, while the remaining 198 answered one. To bring the analyses of the 28 participants with 2 catch questions per trial in line with the data from the other 198 participants, we considered a 2-catch trial to be correctly reinstated if one or both catch questions were answered correctly (on average, across 28 subjects: one catch question correct: 12.5% of trials; two catch questions correct: 76.0% of trials, see Supplementary Fig. 1D).
RT analysis: O-score and statistics per participant
To assess the presence and strength of oscillations in behavioral responses we used the Oscillation score (O-score, Fig. 2B), a method that was developed to analyze oscillations in spike trains43. Like spikes, the button presses we study here are discrete, all-or-nothing events, and can be summarized as trains of button presses across trials. The O-score method identifies the dominant frequency in those trains and produces a normalized measure of the strength of the oscillation that can be compared across conditions.
The O-score method does not make assumptions about the source underlying the discrete events and can therefore be applied to button presses in a similar way as to spikes, even when the button presses arise from different trials. We did, however, add an additional processing step before computing the O-score, to compensate for the fact that behavioral responses, unlike spikes, have no baseline rate (e.g., they cannot occur before cue/stimulus onset). Extremely early and late responses therefore have to be considered outliers. We removed these outliers prior to O-score computation by removing the first and last 5% of the button press trace of each participant, i.e., maintaining the middle 90% of the button presses. Supplementary Figure 3B shows that reducing the fraction of button presses included in the analyses affected the ability to identify oscillations, but did not affect the differences found between the task phases.
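A minimal sketch of this trimming step, assuming rt holds all correct response times (in seconds) of one participant for one task phase; the variable names are illustrative.

```matlab
% Keep the middle 90% of the button presses (drop the earliest and latest 5%)
% before the O-score computation; a sketch, not the authors' code.
rt     = sort(rt(:));
n      = numel(rt);
rtTrim = rt(floor(0.05*n) + 1 : ceil(0.95*n));
```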
The button presses from correctly answered trials that remained after outlier removal entered the O-score computation. We made two modifications to the procedure described in ref. 43 to match the characteristics of our dataset. The O-score procedure and our modifications are described below.
The O-score analysis requires the experimenter to define a frequency range of interest. We a priori defined a wide frequency range of interest between \(f_{\min}^{\mathrm{init}}\) = 0.5 Hz and \(f_{\max}^{\mathrm{init}}\) = 40 Hz, as we did not want to limit the analyses to a specific frequency band, yet did not expect to detect frequencies in the higher gamma frequency band. Given the wide variety in the number of responses and response times, we checked for every participant and task phase whether these pre-set frequency bounds were valid. Following ref. 43, we increased the lower bound to \(1/c_{\min}\) of the width of the response distribution (in seconds) of the participant, with \(c_{\min}\) = 3, such that at least three cycles of the lowest detectable frequency were present in the data. We reduced the upper bound to the average response rate (button presses per second) if the participant did not have enough button presses to resolve the upper frequency limit. The O-score was then computed through the following series of steps:
Step 1: As described in ref. 43, we computed the autocorrelation histogram (ACH) of the button presses with a time bin size of 1 ms (\(f_s\) = 1000 Hz).
Step 2: The ACH was smoothed with a Gaussian kernel with a standard deviation \(\sigma_{\mathrm{fast}}\) of 2 ms. As estimated in ref. 43, this smoothing kernel attenuated frequencies up to 67 Hz by less than 3 dB, allowing us to detect frequencies in the entire frequency range of interest.
Step 3: We identified the width of the peak in the ACH using the method described in ref. 43. However, to avoid the introduction of low frequencies by replacing the peak, we opted to only use positive lags beyond the detected ACH peak for further steps, as the peak-replacement approach would not allow us to detect frequencies toward the lower bound of our frequency range of interest. To identify the peak, we smoothed the ACH with a Gaussian kernel with \(\sigma_{\mathrm{slow}}\) of 8 ms, resulting in the smoothed ACH trace \(A_{\mathrm{slow}}(l)\), with \(l\) the lag. We then identified the left boundary lag of the central peak \(l_{\mathrm{left}}\) by
$$l_{\mathrm{left}} = l \,\Big|\, \Delta A_{\mathrm{slow}}(l)\,\frac{2\,l_{\max} + 1}{A_{\mathrm{slow}}(0)} \le \tan\left(\frac{10\,\pi}{180}\right)$$
where \(l_{\max}\) is the highest lag included in the ACH and \(A_{\mathrm{slow}}(0)\) is the value of the peak of the ACH (i.e., at lag 0).
Step 4: The remaining part of the ACH was subsequently truncated/zero padded to size \(w\), where
$$w = 2^{\left\lfloor \max\left(\log_2\left(2\,c_{\min}\,\frac{f_s}{f_{\min}}\right),\ \log_2\left(\frac{f_s}{2}\right)\right)\right\rfloor + 1}$$
We then applied a Hanning taper and computed the Fourier transform.
Step 5: We identified the frequency with the highest power within the participant-adjusted frequency bounds, as well as the average magnitude of the spectrum between 0 and \(f_s/2\) Hz. The O-score was then computed as
$$O = \frac{M_{\mathrm{peak}}}{M_{\mathrm{avg}}}$$
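To make the procedure concrete, the following MATLAB sketch strings the bound adjustment and steps 1–5 together for a single participant and task phase. It is a minimal illustration under our reading of the method, not the authors' code: rtTrim is assumed to hold the trimmed correct response times (in seconds), the "width of the response distribution" is taken as the span of those times, and the peak-boundary detection is simplified to the first lag at which the heavily smoothed ACH has flattened.

```matlab
% Minimal sketch of the O-score computation (frequency bounds plus steps 1-5).
fs   = 1000;  cMin = 3;                               % 1-ms ACH bins, >= cMin cycles required

% Participant-adjusted frequency bounds (our reading of the adjustment above)
width = max(rtTrim) - min(rtTrim);                    % width of the response distribution (s)
fMin  = max(0.5, cMin / width);
fMax  = min(40, numel(rtTrim) / width);               % capped at the average response rate

% Step 1: autocorrelation histogram (ACH) of the button-press train
lags  = round((rtTrim(:) - rtTrim(:)') * fs);         % pairwise lags in samples
lags  = lags(lags >= 0);                              % non-negative lags only
ach   = accumarray(lags + 1, 1)';                     % counts per 1-ms lag bin

% Step 2: light smoothing, sigma_fast = 2 ms
gF   = exp(-(-10:10).^2 / 8);    gF = gF / sum(gF);
achF = conv(ach, gF, 'same');

% Step 3: drop the central peak; heavier smoothing (sigma_slow = 8 ms) and a
% simplified slope criterion locate its left boundary
gS    = exp(-(-40:40).^2 / 128); gS = gS / sum(gS);
achS  = conv(ach, gS, 'same');
lMax  = numel(achS) - 1;
slope = abs(diff(achS)) * (2*lMax + 1) / achS(1);
lLeft = find(slope <= tan(10*pi/180), 1, 'first');
if isempty(lLeft), lLeft = 1; end
achF  = achF(lLeft+1:end);

% Step 4: truncate/zero-pad to length w, Hanning taper, Fourier transform
w    = 2^(floor(max(log2(2*cMin*fs/fMin), log2(fs/2))) + 1);
x    = zeros(1, w);  n = min(w, numel(achF));  x(1:n) = achF(1:n);
spec = abs(fft(x .* hanning(w)'));

% Step 5: O-score = peak magnitude within the bounds / mean magnitude up to fs/2
f    = (0:w-1) * fs / w;
band = f >= fMin & f <= fMax;
[Mpeak, iPeak] = max(spec(band));
fb   = f(band);  fPeak = fb(iPeak);                   % dominant frequency (Hz)
Oscore = Mpeak / mean(spec(f <= fs/2));
```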
In their paper, Muresan and colleagues propose a method to estimate the confidence interval of the O-score, allowing for a statistical assessment at the single-cell level. However, this approach requires multiple repeated recordings, which are not available for the data presented here, nor do the datasets contain enough data points to create independent folds. Instead, we opted to generate a participant-specific reference distribution of O-scores for the identified frequency, to which we could compare the observed O-score. To this end, we randomly generated 500 time series for each participant, matching the trial count and overall response density function of the participant's original button presses. First, a gamma probability function \(r_{\mathrm{gamma}}(t)\) was fitted to the participant's response distribution using the fitdist function from the Statistics and Machine Learning Toolbox (v11.3) for MATLAB 2018a (The Mathworks Inc.; for an example see the gray lines in Fig. 2), and scaled to the number of responses of the participant. We then generated 500 Poisson time series, with the probability of a response in a time step \(\Delta t\) = 0.5 ms given by
$$P_{\mathrm{resp}}(t \to t + \Delta t) = r_{\mathrm{gamma}}(t)\,\Delta t$$
If a gamma distribution could not be fitted (as assessed through a \(\chi^2\) goodness-of-fit test with α = 0.05), the participant's button presses were instead randomly redistributed in time, with the new time per button press uniformly drawn from a window defined by one period of the participant's peak frequency and centered around the time of the original button press. Redistributing within one period of the identified oscillation ensured that this oscillation frequency of interest was not maintained in the reference dataset, while minimizing changes to the overall response distribution.
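A sketch of the surrogate generation follows, assuming rtTrim as above; it covers only the gamma-based branch (the uniform redistribution fallback is omitted) and the names are ours.

```matlab
% Generate 500 reference button-press trains whose rate follows a gamma fit to
% the observed response-time distribution (inhomogeneous Poisson draws in 0.5-ms steps).
nRef   = 500;  dt = 0.0005;
pd     = fitdist(rtTrim(:), 'Gamma');                 % Statistics and Machine Learning Toolbox
t      = 0:dt:1.2*max(rtTrim);
rate   = numel(rtTrim) * pdf(pd, t);                  % expected presses per second over time
refRTs = cell(nRef, 1);
for r = 1:nRef
    refRTs{r} = t(rand(size(t)) < rate * dt);         % P(press in [t, t+dt)) = r_gamma(t)*dt
end
```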
O-scores were then computed for each of the resulting reference traces, but instead of finding the peak, the power at the peak frequency of the observed O-score was used. This approach controls for any frequency bias that could arise due to the length of the time series and/or the number of data points included in the analysis. To compare the observed O-score to the reference O-scores, we first log-transformed all O-score values. This log-transformation was needed because the O-score is a bounded measure (it cannot take values below 0) and the O-score distribution is therefore right-skewed when O-score values are low, leading to an underestimation of the standard deviation of the reference distribution. The log-transformed reference O-scores were then used to perform a one-tailed Z-test for the observed O-score at α = 0.05, establishing the significance of the oscillation at the single-participant level. For a validation that 500 reference O-scores were sufficient to produce a stable outcome for the Z-scoring, we refer to Supplementary Fig. 3A. Second-level t-scores were subsequently computed based on the Z-scored O-scores for each task phase and tested with α = 0.01 (one-tailed, Bonferroni-corrected for 5 task phases). These Z-scored oscillation scores can be assumed to represent the strength of the behavioral oscillation, and they form the basis of many of our statistical comparisons.
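The single-participant test then reduces to a Z-test on the log-transformed scores; in this sketch, refOscores is assumed to hold the 500 reference O-scores evaluated at the observed peak frequency.

```matlab
% One-tailed Z-test of the observed O-score against the log-transformed
% reference distribution (alpha = 0.05); illustrative names, not the authors' code.
zO   = (log(Oscore) - mean(log(refOscores))) / std(log(refOscores));
pVal = 1 - normcdf(zO);
sig  = pVal < 0.05;
```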
To test the O-scores of the memory-dependent task phases (Encoding, Retrieval, and Catch-with-retrieval) against those of the memory-independent task phases (Catch-after-retrieval and Visual), we fitted a linear mixed model to the Z-scored O-scores, with memory dependence and the length of the time series used for O-score computation as fixed effects, and an intercept per participant as random effect. We included the length of the time series, computed as the difference (in seconds) between the last and the first RT used in the O-score analysis, because there was a substantial difference in response times between the task phases, with patterns across phases similar to those of the O-scores (see Supplementary Fig. 1). We included participants as random effects to compensate for the difference in the number of data points contributed by memory task participants (3 data points from group 1, 2 data points from group 2) compared to visual task participants (1 data point from group 3), and to account for dependencies in the data. We fitted an identical linear mixed model to the peak frequencies corresponding to significant O-scores. The linear mixed models were fitted using the fitlme function from the Statistics and Machine Learning Toolbox (v11.3) for MATLAB 2018a (The Mathworks Inc.).
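The mixed model corresponds to a fitlme call along the following lines; the table and variable names (zO, memDep, tsLength, subj) are illustrative and not taken from the authors' code.

```matlab
% Linear mixed model on the Z-scored O-scores: memory dependence and time series
% length as fixed effects, participant as random intercept (sketch).
tbl = table(zScores(:), memDep(:), tsLength(:), categorical(subj(:)), ...
            'VariableNames', {'zO', 'memDep', 'tsLength', 'subj'});
lme = fitlme(tbl, 'zO ~ memDep + tsLength + (1|subj)');
disp(lme.Coefficients)
```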
The performance of the modified O-score method and the Z-scoring procedure was tested on a simulated dataset in which the amplitude and frequency of the oscillation in the simulated button presses were varied. Methods and results of these simulations are given in Supplementary Note 2.
RT analysis: phase of response
For the task phases with significant second-level O-scores, i.e., Encoding, Retrieval, and Catch-with-retrieval, we analyzed the phases at which individual button presses occurred in the behavioral oscillation identified by the O-score analysis. We performed this analysis for both correctly and incorrectly remembered trials. As this analysis relied on the frequency identified by the O-score analysis, only participants with significant O-scores were included.
To identify the phases of the button presses, we first established a continuous reference trace that captured the behavioral oscillation. This was achieved by convolving the button presses with a Gaussian kernel, with \(\sigma_{\mathrm{freq}} = f_{\mathrm{peak}}/8\). The resulting continuous trace was then band-pass filtered with a second-order Butterworth filter with a 1 Hz wide pass band centered on the participant's peak frequency identified by the O-score. The filtered trace was then Hilbert transformed and the instantaneous phase was computed, resulting in a phase of 0 rad for the peak of the behavioral oscillation. Finally, for each button press, the corresponding phase of the reference trace was determined and stored for further analyses.
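A sketch of this phase read-out for one participant is given below. Names are ours, the kernel width is one possible reading of \(\sigma_{\mathrm{freq}} = f_{\mathrm{peak}}/8\) (interpreted as a spectral standard deviation), and the 1-Hz-wide Butterworth pass band follows the text; it is not the authors' implementation.

```matlab
% Phase-of-response sketch: convolve the press train with a Gaussian, band-pass
% around the participant's peak frequency, Hilbert-transform, and read out the
% phase at each press time. fs = 1000 Hz; rtTrim and fPeak as above.
fs    = 1000;
sig   = zeros(1, round(max(rtTrim)*fs) + 1);
sig(round(rtTrim*fs) + 1) = 1;                              % press train
sigS  = (1 / (2*pi*(fPeak/8))) * fs;                        % kernel sigma in samples (our reading)
xk    = -ceil(4*sigS):ceil(4*sigS);
k     = exp(-xk.^2 / (2*sigS^2));  k = k / sum(k);
ref   = conv(sig, k, 'same');                               % continuous reference trace
[b, a] = butter(2, [max(fPeak-0.5, 0.05), fPeak+0.5] / (fs/2), 'bandpass');
phi   = angle(hilbert(filtfilt(b, a, ref)));                % 0 rad at the oscillation peak
pressPhases = phi(round(rtTrim*fs) + 1);
```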
We used two complementary approaches to compare the phase locking of correct versus incorrect trials: across participants, allowing us to include correct and incorrect trials from all participants with significant O-scores, even when the number of incorrect button presses was low; and within participants, comparing the phase distributions of correct and incorrect trials for participants with 10 or more incorrect trials. These approaches are described in more detail below.
With the across-participant analysis we aimed to address the following questions: (1) are correct and incorrect trials phase-locked to the behavioral oscillation found for the correct trials, and (2) are correct trials locked to this oscillation more strongly than incorrect trials? For these analyses, to find the phases of incorrect trials, we compared the timing of the incorrect button presses to the phase trace determined on the correct trials only. To determine the phases of the correct trials, to avoid circularity, we instead used a leave-one-out approach; for each correct button press, a phase trace was established based on all other correct trials. We then performed a V-test (implementation: CircStat toolbox 2012a71; https://github.com/circstat/circstat-matlab) to assess non-uniformity of the phase distributions around the peak of the behavioral oscillation (i.e., around phase 0 rad), providing an answer to the first question. To address the second question, i.e., whether correct phase distributions were modulated more strongly than incorrect phase distributions, we had to compensate for the trial count differences as well as the methodological differences in determining the phase distributions for correct and incorrect trials. To this end, we defined the permutation test statistic:
$$V_{\mathrm{diff}} = V_{\mathrm{correct}} - V_{\mathrm{incorrect}}$$
with \(V\) being the test statistic from the V-test for non-uniformity around phase 0. For each participant with a significant O-score, we then randomly shuffled the labels of the correct and incorrect trials, and computed the \(V_{\mathrm{diff}}\) statistic across participants for the label-shuffled trials in the same way as described for the observed labels. We repeated this shuffling procedure 100 times and counted the number of times \(V_{\mathrm{diff}}^{\mathrm{observed}}\) was smaller than \(V_{\mathrm{diff}}^{\mathrm{shuffled}}\). This procedure hence resulted in a p value that estimated the likelihood that the observed difference in phase modulation between correct and incorrect trials was produced by chance.
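The permutation test can be sketched as follows for a single pooled set of phases; in the actual analysis the labels are shuffled within each participant and \(V_{\mathrm{diff}}\) is recomputed across participants, which this simplified version glosses over. circ_vtest is from the CircStat toolbox (ref. 71); phC and phI are assumed column vectors of press phases for correct and incorrect trials.

```matlab
% Label-shuffling test of the difference in phase modulation (V statistics)
% around phase 0; simplified, pooled sketch of the across-participant test.
[~, vC]  = circ_vtest(phC, 0);
[~, vI]  = circ_vtest(phI, 0);
vDiffObs = vC - vI;
allPh    = [phC; phI];  nC = numel(phC);  nPerm = 100;
vDiffPerm = zeros(nPerm, 1);
for p = 1:nPerm
    idx = randperm(numel(allPh));
    [~, v1] = circ_vtest(allPh(idx(1:nC)), 0);
    [~, v2] = circ_vtest(allPh(idx(nC+1:end)), 0);
    vDiffPerm(p) = v1 - v2;
end
pVal = mean(vDiffPerm >= vDiffObs);   % chance of a shuffled difference at least as large
```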
For participants with sufficient (10 or more) incorrect trials, we performed an additional analysis to compare the phase modulation of correct and incorrect trials. For these participants, the correct trials were randomly subsampled to match the number of incorrect trials. The phases of the incorrect trials and the subsampled correct trials were then determined based on the phase trace of the remaining correct trials and V-tests were performed for both subsampled correct and incorrect phase distributions. The V-statistics for correct trials were then compared to those for incorrect trials using paired t-tests. The subsampling procedure was repeated 100 times.
iEEG recordings: patients and recording setup
We recorded intracranial EEG from 10 epilepsy patients while they were admitted to hospital for assessment for focus resection surgery; 7 patients were recorded in the Queen Elizabeth Hospital Birmingham (Birmingham, UK) and 3 patients in the Universitätsklinikum Erlangen (Erlangen, Germany). For an 11th patient, task recording was aborted due to poor performance. All patients were recruited by the clinical team, were informed about the study and gave written informed consent before their stay in hospital. Ethical approval was granted by the National Health Service Health Research Authority (15/WM/2019), the Research Governance & Ethics Committee from the University of Birmingham, and the Ethik-Kommission der Friedrich-Alexander Universität Erlangen-Nürnberg (142_12 B).
As part of their routine clinical care, the patients were implanted with intracranial depth electrodes targeting the medial temporal lobe, as well as other brain areas. Patients gave written informed consent for the implantation of between two and eight Behnke-Fried electrodes with microwire bundles (AdTech Medical Instrument Corporation, USA) in the medial temporal lobe (see Fig. 1 for electrode placement and Supplementary Table 7 for electrode numbers per patient). Only data from the hippocampal electrodes are presented here. Implantation schemes were determined by the clinical team and were based solely on clinical requirements. Each microwire bundle contained eight high-impedance wires and one low-impedance wire, which was used as reference in most patients (see Supplementary Table 7 for patient-specific references). Data were recorded using an ATLAS recording setup (Neuralynx Inc., USA) consisting of CHET-10-A pre-amplifiers and a Digital Lynx NX amplifier, running Cheetah software version 1.1.0. Data were filtered using analog filters with cut-off frequencies at 0.1 and 9000 Hz (40 Hz for patient 01) and sampled at 32,000 Hz in Birmingham and 32,768 Hz in Erlangen. All data were stored on the CaStLeS storage facility of the University of Birmingham72.
For each patient both pre- and post-surgical T1-weighted MRI images were acquired. The pre- and post-surgical scans were co-registered and normalized to MNI space using SPM12 (https://www.fil.ion.ucl.ac.uk/spm/). The locations of the tip of the macro-electrodes were determined through visual inspection using MRIcron (v1.0.20190902; https://people.cas.sc.edu/rorden/mricron/index.html) and electrodes were assigned one of the following anatomical labels: amygdala, anterior, middle or posterior hippocampus, or parahippocampal gyrus. The locations and labels were visualized using ModelGUI (release 1.0.30; http://www.modelgui.org) and are shown in Fig. 5A.
The patients performed the memory task described in section "Task versions" on a laptop computer (Toshiba Tecra W50), while seated in their hospital bed or on a chair next to their bed. The three patients who were recorded in Erlangen, Germany, performed the task in German. Patients completed between 64 and 128 full trials, divided over between 1 and 3 recording sessions (see Supplementary Table 8). Of the 10 patients, 3 patients performed a version of the task that used scene images as cue (see "Task versions"), while the other patients were presented with verbs as cues. For the image cue task version, the cue was shown at the same time as the object; hence, the encoding data of three patients had no separate cue phase.
iEEG analysis: LFP data preprocessing
Raw microwire data were loaded into MATLAB 2018a using the MatlabImportExport scripts (version 6.0.0: https://neuralynx.com/software/category/matlab-netcom-utilities) provided by Neuralynx Inc. The data were subsequently zero-phase filtered with a third-order FIR high-pass filter with a cut-off frequency of 0.5 Hz and a sixth-order FIR low-pass filter with a cut-off frequency of 200 Hz using FieldTrip (v20190615 (ref. 73); https://github.com/fieldtrip/fieldtrip). A Notch filter with a stopband of 0.5 Hz wide at −3 dB was used to remove 50 Hz line noise and its harmonics. The data were down-sampled to 1000 Hz and divided into encoding and retrieval trials.
All data were visually inspected and channels/time points that contained electrical artefacts or epileptic activity were removed. Trials that had more than 20% of time points marked as artefactual were rejected in their entirety. In an additional preprocessing step, the data of patient 03 were re-referenced against the mean of the channels in each microwire bundle. This was done to bring the data from this patient, whose data were originally recorded against ground, more in line with the referencing schemes of the other patients, which were recorded against a local reference wire (see Supplementary Table 7 for reference information per patient).
iEEG analysis: wavelet transform, pairwise phase consistency and cluster statistics
The pre-processed microwire recordings were wavelet transformed using a complex Morlet wavelet with a bandwidth parameter of 4. We used the cwt implementation from the Wavelet Toolbox (v5.0) for MATLAB 2018a (The Mathworks Inc., USA) to compute the wavelet transform. The wavelet was scaled to cover a frequency range between 1 and 12 Hz in 43 pseudo-logarithmic steps and convolved with the data in time steps of 10 ms.
To obtain the power plots in Supplementary Fig. 8C, we extracted the absolute value of the wavelet coefficients and assessed power changes per frequency against a −2 to −0.5 s pre-cue baseline using a two-sided t-test for every time point. We averaged the resulting t-maps across the wires within each bundle, as they shared a common low-impedance reference. The bundle averages were then used to compute a second-level t-score across the bundles of all participants. We performed second-level analyses at the level of bundles because correlations between signals from two bundles from the same patient were low and did not differ from correlations between signals from two bundles from two different patients, suggesting that bundle was the main source of variance (see Supplementary Fig. 10A). The p values resulting from the second-level analysis were entered into a Benjamini–Hochberg false discovery rate (FDR) correction procedure with q = 0.05 to correct for multiple comparisons, and the t-score map was masked at α = 0.05.
The phases obtained for every frequency and time point in the trial using the complex wavelet transform were used to compute the pairwise phase consistency (PPC46) across trials for each time–frequency pixel and for each microwire. The PPC was calculated for correct and incorrect trials separately. The PPC values were then non-parametrically tested relative to their pre-cue baseline, defined as the period from 2 to 0.5 s prior to cue onset, using a Mann–Whitney U-test. We opted for a non-parametric test due to the strong left-skew of the PPC data. As for the power analyses, the resulting approximated Z-values were averaged across the microwires in a bundle, and the averages were used to compute a second-level t-score across all bundles from all patients.
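For reference, the PPC for one wire, time point, and frequency can be computed from the trial phases with the closed form below, which is algebraically equivalent to averaging cos(θi − θj) over all trial pairs46; the variable name is illustrative.

```matlab
% Pairwise phase consistency across N trials (sketch); 'phases' is an N x 1
% vector of wavelet phases in radians for one wire/time/frequency pixel.
N   = numel(phases);
S   = sum(exp(1i * phases));
ppc = (abs(S)^2 - N) / (N * (N - 1));
```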
We then detected time–frequency clusters of significant PPC through the following steps. First, the t-scored PPC values were thresholded at α = 0.05 with \(\mathrm{df} = N_{\mathrm{bundles}} - 1\), resulting in a binary image with 0 = non-significant and 1 = significant. This binary image was entered into an 8-connected component labeling algorithm to identify clusters of significant PPC values.
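A sketch of this first-pass cluster detection, assuming tMap is the second-level t-map (frequency × time) and nBundles the number of bundles; bwconncomp (Image Processing Toolbox) stands in for whichever connected-component routine was actually used.

```matlab
% Threshold the t-map at alpha = 0.05, label 8-connected clusters, and compute
% the sum-of-t cluster statistic (positive direction shown; the negative
% direction is handled analogously). Sketch with illustrative names.
tCrit       = tinv(1 - 0.05, nBundles - 1);
mask        = tMap > tCrit;                            % binary image of significant pixels
cc          = bwconncomp(mask, 8);                     % 8-connected clusters
clusterStat = cellfun(@(ix) sum(tMap(ix)), cc.PixelIdxList);
```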
As we used a fixed threshold to identify the clusters, it is possible for clusters to be made up of two or more merged peaks. This merging of peaks artificially inflates the cluster's size. To avoid this, we tested whether each cluster contained more than one peak, and if so, split the cluster. To this end, for every cluster, we iteratively increased the significance threshold towards 90% of the highest value in the cluster, in 5% increments, and reran the cluster detection method described in the previous paragraph. We required any resulting subclusters to be at least 5% of the size of the original cluster, to overcome noise in the data. If no subclusters were found, the threshold was increased further. On the other hand, if all identified subclusters were smaller than 5% of the original cluster, we concluded that the cluster could not be split. If subclusters of sufficient size were detected, these were stored. For all pixels that were part of the original cluster but were not a member of any of the new subclusters, we computed the weighted Euclidean distance to all subclusters and assigned them to the closest subcluster. For each resulting (sub)cluster we then computed a cluster statistic defined as the sum of all t-scores from all pixels in the cluster.
We took a non-parametric approach to assess the statistics at the cluster level. To this end, we went back to the wavelet transforms and, at a random time point in each trial, divided the trial in two parts. We then concatenated the first part of the trial to the end of the second part. This procedure, suggested in ref. 74, left all characteristics of the dataset intact, with the exception of the temporal structure of the phase. We computed the PPC across these time-shuffled trials, Z-scored against baseline, computed the second level t-score, identified clusters of significant t-scores, and computed the cluster scores as described in the previous paragraph. We repeated this procedure 100 times and we stored the highest cluster score for each repetition, resulting in a reference distribution of maximum cluster scores. We then non-parametrically compared the cluster scores from the intact data to the reference distribution, with α = 0.05. We performed the time-shuffle analysis independently for positive and negative changes in PPC and for correct and incorrect trials separately.
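The time-shuffling itself amounts to cutting each trial at a random sample and swapping the two segments, for example:

```matlab
% Time-shuffle one trial (1 x nSamples vector): cut at a random point and
% concatenate the first part after the second, preserving all signal
% characteristics except the temporal structure of the phase (sketch).
cut       = randi(numel(trial) - 1);
trialShuf = [trial(cut+1:end), trial(1:cut)];
```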
Finally, we also compared the PPCs from correct and incorrect trials to each other directly. We used a similar approach as described above, with two important differences: (1) the second-level analysis was now performed on the pairwise difference between correct and incorrect PPCs from the same bundle and (2) we shuffled correct and incorrect trials (as opposed to time points) to obtain the reference distribution.
Phase differences and event-related potentials
We used two different approaches to assess whether phases between encoding and retrieval trials differed. First, we tested whether encoding and retrieval phases differed at the moment of peak PPC, i.e., where the effect of phase resets was optimal and trials were most phase-aligned. To this end, we detected the highest PPC value for every participant and stored the average phase for every electrode at the corresponding time and frequency. We then computed the phase difference per electrode by subtracting the retrieval phases from the encoding phases. This procedure was performed on both the cue- (for retrieval) or stimulus- (for encoding) locked data and on the response-locked data. We subsequently performed V-tests for non-uniformity around 180° (CircStat toolbox 2012a (ref. 71); https://github.com/circstat/circstat-matlab) to assess whether the phases of encoding and retrieval were opposite at peak PPC. We compared the V-statistics to V-statistics computed using the same approach in 500 time-shuffled datasets (see previous section).
For the second approach we computed event-related potentials (ERPs) to test for phase opposition in time windows leading up to the response. To obtain the ERPs, we first Z-scored the raw data per electrode by subtracting the mean and dividing by the standard deviation of all trials and time points. We then tested whether all electrodes had the same sign. This step was essential because recordings from different layers of the hippocampus can have opposing polarities. In microwire recordings there is no control over the placement of the electrode, nor is it possible to determine this placement based on scans; hence, potential sign flips have to be detected in the data before averaging data from different electrodes. We detected the sign by identifying the highest deflection in the trial-average of every electrode in the 1 s time interval after cue onset during encoding. If this deflection was negative, the data from the electrode were flipped. Note that the time interval we used for sign testing was not included in the ERP analysis in Fig. 6. We then averaged the trials of all wires within a microwire bundle, separating correct and incorrect trials. The data resulting from this step are shown in Supplementary Fig. 8D. We then filtered the averaged data in the theta-frequency band (1–5 Hz; data in Fig. 6C) and identified the instantaneous phase using the Hilbert transform. We subtracted the instantaneous phases of the retrieval trials from those of the encoding trials for each bundle, yielding the instantaneous phase difference. The phase differences were collected in windows of 200 ms (i.e., 1 cycle at the 5 Hz upper bound of the theta band) spaced 10 ms apart, and we tested whether the phase differences in each window were non-uniformly distributed around 180° using a V-test (CircStat toolbox 2012a (ref. 71); https://github.com/circstat/circstat-matlab). We used a Benjamini–Hochberg false discovery rate correction procedure75 with q = 0.05 to account for repeated tests across the time windows.
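A sketch of the windowed phase-opposition test for one bundle follows; the Butterworth band-pass is an assumption (the text specifies only the 1–5 Hz band), erpEnc and erpRet stand for the sign-corrected, trial-averaged traces, and circ_vtest is from the CircStat toolbox (ref. 71).

```matlab
% Encoding-retrieval phase opposition: theta-filter both ERPs, take Hilbert
% phases, and test 200-ms windows (10-ms steps) for clustering around pi.
fs      = 1000;
[b, a]  = butter(2, [1 5]/(fs/2), 'bandpass');         % filter choice is our assumption
phDiff  = angle(hilbert(filtfilt(b, a, erpEnc))) - angle(hilbert(filtfilt(b, a, erpRet)));
winLen  = round(0.2*fs);  step = round(0.01*fs);
starts  = 1:step:numel(phDiff) - winLen + 1;
pWin    = arrayfun(@(s) circ_vtest(phDiff(s:s+winLen-1)', pi), starts);
% pWin is subsequently corrected across windows with Benjamini-Hochberg (q = 0.05)
```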
Further information on experimental design is available in the Nature Research Reporting Summary linked to this paper.
All behavioral data underlying the results in this study, as well as, for the iEEG dataset, the PPC values for correct and incorrect trials, including the time- and trial-shuffled PPCs, have been deposited in the following FigShare repository: https://doi.org/10.6084/m9.figshare.c.5192567 (ref. 76). Behavioral data from experiments 1, 2, 5, 6, and 10 (see Supplementary Table 1) were previously reported in ref. 68. Data from experiment 9 (see Supplementary Table 1) were previously reported in ref. 69. Intermediate processing steps and other derived iEEG data will be made available upon reasonable request. The raw iEEG data and patient-specific electrode locations are protected and are not available due to data privacy laws. Source data are provided with this paper.
Custom MATLAB functions and scripts used to produce the results presented in this study are publicly available via GitHub: https://github.com/marijeterwal/behavioral-oscillations and FigShare: https://doi.org/10.6084/m9.figshare.13213769 (ref. 77).
Eichenbaum, H. A cortical–hippocampal system for declarative memory. Nat. Rev. Neurosci. 1, 41–50 (2000).
O'Reilly, R. C. & Norman, K. A. Hippocampal and neocortical contributions to memory: advances in the complementary learning systems framework. Trends Cogn. Sci. 6, 505–510 (2002).
Staresina, B. P. & Wimber, M. A neural chronometry of memory recall. Trends Cogn. Sci. 23, 1071–1085 (2019).
Colgin, L. L. Rhythms of the hippocampal network. Nat. Rev. Neurosci. 17, 239–249 (2016).
Düzel, E., Penny, W. D. & Burgess, N. Brain oscillations and memory. Curr. Opin. Neurobiol. 20, 245–257 (2010).
Hasselmo, M. E. & Stern, C. E. Theta rhythm and the encoding and retrieval of space and time. Neuroimage 85, 656–666 (2014).
Goyal, A. et al. Functionally distinct high and low theta oscillations in the human hippocampus. Nat. Commun. 11, 2469 (2020).
Jacobs, J. Hippocampal theta oscillations are slower in humans than in rodents: Implications for models of spatial navigation and memory. Philos. Trans. R. Soc. B Biol. Sci. 369, 20130304 (2014).
Lega, B. C., Jacobs, J. & Kahana, M. Human hippocampal theta oscillations and the formation of episodic memories. Hippocampus 22, 748–761 (2012).
Griffiths, B. J. et al. Directional coupling of slow and fast hippocampal gamma with neocortical alpha/beta oscillations in human episodic memory. Proc. Natl Acad. Sci. USA 116, 21834–21842 (2019).
Fell, J. et al. Medial temporal theta/alpha power enhancement precedes successful memory encoding: evidence based on intracranial EEG. J. Neurosci. 31, 5392–5397 (2011).
Staudigl, T. & Hanslmayr, S. Theta oscillations at encoding mediate the context-dependent nature of human episodic memory. Curr. Biol. 23, 1101–1106 (2013).
Kahana, M. J., Sekuler, R., Caplan, J. B., Kirschen, M. & Madsen, J. R. Human theta oscillations exhibit task dependence during virtual maze navigation. Nature 399, 781–784 (1999).
Herweg, N. A., Solomon, E. A. & Kahana, M. J. Theta oscillations in human memory. Trends Cogn. Sci. 24, 208–227 (2020).
Lega, B., Burke, J., Jacobs, J. & Kahana, M. J. Slow-theta-to-gamma phase-amplitude coupling in human hippocampus supports the formation of new episodic memories. Cereb. Cortex 26, 268–278 (2016).
Mormann, F. et al. Phase/amplitude reset and theta-gamma interaction in the human medial temporal lobe during a continuous word recognition memory task. Hippocampus 15, 890–900 (2005).
Jacobs, J., Kahana, M. J., Ekstrom, A. D. & Fried, I. Brain oscillations control timing of single-neuron activity in humans. J. Neurosci. 27, 3839–3844 (2007).
Rutishauser, U., Ross, I. B., Mamelak, A. N. & Schuman, E. M. Human memory strength is predicted by theta-frequency phase-locking of single neurons. Nature 464, 903–907 (2010).
Jacobs, J., Hwang, G., Curran, T. & Kahana, M. J. EEG oscillations and recognition memory: theta correlates of memory retrieval and decision making. Neuroimage 32, 978–987 (2006).
Herweg, N. A. et al. Theta-alpha oscillations bind the hippocampus, prefrontal cortex, and striatum during recollection: evidence from simultaneous EEG-fMRI. J. Neurosci. 36, 3579–3587 (2016).
Fujisawa, S. & Buzsáki, G. A 4 Hz oscillation adaptively synchronizes prefrontal, VTA, and hippocampal activities. Neuron 72, 153–165 (2011).
Benchenane, K. et al. Coherent theta oscillations and reorganization of spike timing in the hippocampal- prefrontal network upon learning. Neuron 66, 921–936 (2010).
Anderson, K. L., Rajagovindan, R., Ghacibeh, G. A., Meador, K. J. & Ding, M. Theta oscillations mediate interaction between prefrontal cortex and medial temporal lobe in human memory. Cereb. Cortex 20, 1604–1612 (2010).
Watrous, A. J., Tandon, N., Conner, C. R., Pieters, T. & Ekstrom, A. D. Frequency-specific network connectivity increases underlie accurate spatiotemporal memory retrieval. Nat. Neurosci. 16, 349–356 (2013).
Kerren, C., Linde-Domingo, J., Hanslmayr, S. & Wimber, M. An optimal oscillatory phase for pattern reactivation during memory retrieval. Curr. Biol. 28, 3383–3392 (2018). E6.
Kunz, L. et al. Hippocampal theta phases organize the reactivation of large-scale electrophysiological representations during goal-directed navigation. Sci. Adv. 5, eaav8192 (2019).
Watrous, A. J., Miller, J., Qasim, S. E., Fried, I. & Jacobs, J. Phase-tuned neuronal firing encodes human contextual representations for navigational goals. Elife 7, 1–16 (2018).
Hasselmo, M. E., Bodelón, C. & Wyble, B. P. A proposed function for hippocampal theta rhythm: separate phases of encoding and retrieval enhance reversal of prior learning. Neural Comput. 14, 793–817 (2002).
Pavlides, C., Greenstein, Y. J., Grudman, M. & Winson, J. Long-term potentiation in the dentate gyrus is induced preferentially on the positive phase of θ-rhythm. Brain Res. 439, 383–387 (1988).
Hyman, J. M., Wyble, B. P., Goyal, V., Rossi, C. A. & Hasselmo, M. E. Stimulation in hippocampal region CA1 in behaving rats yields long-term potentiation when delivered to the peak of theta and long-term depression when delivered to the trough. J. Neurosci. 23, 11725–11731 (2003).
Colgin, L. L. et al. Frequency of gamma oscillations routes flow of information in the hippocampus. Nature 462, 353–357 (2009).
Amemiya, S. & Redish, A. D. Hippocampal theta-gamma coupling reflects state-dependent information processing in decision making. Cell Rep. 22, 3328–3338 (2018).
Fernández-Ruiz, A. et al. Entorhinal-CA3 dual-input control of spike timing in the hippocampus by theta-gamma coupling. Neuron 93, 1213–1226. e5 (2017).
Lopes-dos-Santos, V. et al. Parsing hippocampal theta oscillations by nested spectral components during spatial exploration and memory-guided behavior. Neuron 940–952, https://doi.org/10.1016/j.neuron.2018.09.031 (2018).
Solomon, E. A. et al. Dynamic theta networks in the human medial temporal lobe support episodic memory. Curr. Biol. 29, 1100–1111. e4 (2019).
Siegle, J. H. & Wilson, M. A. Enhancement of encoding and retrieval functions through theta phase-specific manipulation of hippocampus. Elife 3, 1–18 (2014).
Fiebelkorn, I. C., Saalmann, Y. B. & Kastner, S. Rhythmic sampling within and between objects despite sustained attention at a cued location. Curr. Biol. 23, 2553–2558 (2013).
Fiebelkorn, I. C., Pinsk, M. A. & Kastner, S. A dynamic interplay within the frontoparietal network underlies rhythmic spatial attention. Neuron 99, 842–853. e8 (2018).
Helfrich, R. F. et al. Neural mechanisms of sustained attention are rhythmic. Neuron 99, 854–865. e5 (2018).
Busch, N. A. & VanRullen, R. Spontaneous EEG oscillations reveal periodic sampling of visual attention. Proc. Natl Acad. Sci. USA 107, 16048–16053 (2010).
Landau, A. N. & Fries, P. Attention samples stimuli rhythmically. Curr. Biol. 22, 1000–1004 (2012).
VanRullen, R. Perceptual cycles. Trends Cogn. Sci. 20, 723–735 (2016).
Muresan, R. C., Jurjut, O. F., Moca, V. V., Singer, W. & Nikolic, D. The Oscillation Score: an efficient method for estimating oscillation strength in neuronal activity. J. Neurophysiol. 99, 1333–1353 (2008).
Kota, S., Rugg, M. D. & Lega, B. C. Hippocampal theta oscillations support successful associative memory formation. J. Neurosci. 40, 9507–9518 (2020).
Fell, J., Ludowig, E., Rosburg, T., Axmacher, N. & Elger, C. E. Phase-locking within human mediotemporal lobe predicts memory formation. Neuroimage 43, 410–419 (2008).
Vinck, M., van Wingerden, M., Womelsdorf, T., Fries, P. & Pennartz, C. M. A. The pairwise phase consistency: a bias-free measure of rhythmic neuronal synchronization. Neuroimage 51, 112–122 (2010).
Maris, E. & Oostenveld, R. Nonparametric statistical testing of EEG- and MEG-data. J. Neurosci. Methods 164, 177–190 (2007).
Cole, S. R. & Voytek, B. Brain oscillations and the importance of waveform shape. Trends Cogn. Sci. 21, 137–149 (2017).
Jutras, M. J., Fries, P. & Buffalo, E. A. Oscillatory activity in the monkey hippocampus during visual exploration and memory formation. Proc. Natl Acad. Sci. USA 110, 13144–13149 (2013).
Hoffman, K. L. et al. Saccades during visual exploration align hippocampal 3–8 Hz rhythms in human and non-human primates. Front. Syst. Neurosci. 7, 1–10 (2013).
Kragel, J. E. et al. Hippocampal theta coordinates memory processing during visual exploration. Elife 9, e52108 (2020).
Lisman, J. E. & Jensen, O. The θ-γ neural code. Neuron 77, 1002–1016 (2013).
Lisman, J. E. & Idiart, M. A. P. Storage of 7 ± 2 short-term memories in oscillatory subcycles. Science 267, 1512–1515 (1995).
Fuentemilla, L., Penny, W. D., Cashdollar, N., Bunzeck, N. & Düzel, E. Theta-coupled periodic replay in working memory. Curr. Biol. 20, 606–612 (2010).
Bahramisharif, A., Jensen, O., Jacobs, J. & Lisman, J. Serial representation of items during working memory maintenance at letter-selective cortical sites. PLoS Biol. 16, e2003805 (2018).
Vilberg, K. L. & Rugg, M. D. The neural correlates of recollection: transient versus sustained fMRI effects. J. Neurosci. 32, 15679–15687 (2012).
Raghavachari, S. et al. Gating of human theta oscillations by a working memory task. J. Neurosci. 21, 3175–3183 (2001).
Axmacher, N. et al. Cross-frequency coupling supports multi-item working memory in the human hippocampus. Proc. Natl Acad. Sci. USA 107, 3228–3233 (2010).
Hsieh, L. T. & Ranganath, C. Frontal midline theta oscillations during working memory maintenance and episodic encoding and retrieval. Neuroimage 85, 721–729 (2014).
Roux, F. & Uhlhaas, P. J. Working memory and neural oscillations: alpha-gamma versus theta-gamma codes for distinct WM information? Trends Cogn. Sci. 18, 16–25 (2014).
Haque, R. U., Wittig, X. J. H., Damera, S. R., Inati, X. S. K. & Zaghloul, K. A. Cortical low-frequency power and progressive phase synchrony precede successful memory encoding. J. Neurosci. 35, 13577–13586 (2015).
Rizzuto, D. S., Madsen, J. R., Bromfield, E. B., Schulze-Bonhage, A. & Kahana, M. J. Human neocortical oscillations exhibit theta phase differences between encoding and retrieval. Neuroimage 31, 1352–1358 (2006).
Hebscher, M., Meltzer, J. A. & Gilboa, A. A causal role for the precuneus in network-wide theta and gamma oscillatory activity during complex memory retrieval. Elife 8, 1–20 (2019).
McNaughton, N., Ruan, M. & Woodnorth, M.-A. Restoring theta-like rhythmicity in rats restores initial learning in the Morris water maze. Hippocampus 16, 1102–1110 (2006).
Wang, J. X. et al. Targeted enhancement of cortical-hippocampal brain networks and associative memory. Science 345, 1054–1057 (2014).
Hermiller, M. S., Chen, Y. F., Parrish, T. B. & Voss, J. L. Evidence for immediate enhancement of hippocampal memory encoding by network-targeted theta-burst stimulation during concurrent fMRI. J. Neurosci. 40, 7155–7168 (2020).
Ten Oever, S., Weerd, P. D. & Sack, A. T. Phase-dependent amplification of working memory content and performance. Nat. Commun. 11, 1832 (2020).
Linde-Domingo, J., Treder, M. S., Kerrén, C. & Wimber, M. Evidence that neural information flow is reversed between object perception and object reconstruction from memory. Nat. Commun. 10, 179 (2019).
Lifanov, J., Linde-Domingo, J. & Wimber, M. Feature-specific reaction times reveal a semanticisation of memories over time and with repeated remembering. Nat. Commun. 12, 3177 (2021).
Brodeur, M. B., Dionne-Dostie, E., Montreuil, T. & Lepage, M. The bank of standardized stimuli (BOSS), a new set of 480 normative photos of objects to be used as visual stimuli in cognitive research. PLoS ONE 5, e10773 (2010).
Berens, P. CircStat: A MATLAB toolbox for circular statistics. J. Stat. Softw. 31, 1–21 (2009).
Thompson, S. J., Thompson, S. E. M. & Cazier, J.-B. CaStLeS (Compute and Storage for the Life Sciences): a collection of compute and storage resources for supporting research at the University of Birmingham.Zenodo. https://doi.org/10.5281/ZENODO.3250616 (2019).
Oostenveld, R., Fries, P., Maris, E. & Schoffelen, J. M. FieldTrip: Open source software for advanced analysis of MEG, EEG, and invasive electrophysiological data. Comput. Intell. Neurosci. 2011, 156869 (2011).
Cohen, M. X. Analyzing Neural Time Series Data—Theory and Practice (MIT Press, 2014).
Benjamini, Y. & Hochberg, Y. Controlling the false discovery rate: a practical and powerful approach to multiple testing. J. R. Stat. Soc. B 57, 289–300 (1995).
MathSciNet MATH Google Scholar
ter Wal, M. et al. Data for: theta rhythmicity governs the timing of behavioural and hippocampal responses in humans specifically during memory-dependent tasks. figshare. Collect. https://doi.org/10.6084/m9.figshare.c.5192567 (2020).
ter Wal, M. et al. Behavioral-oscillations. GitHub Repos. https://doi.org/10.6084/m9.figshare.13213769 (2020).
This work was funded by starting grant ERC-2016-STG-715714 (STREAM) of the European Research Council to M.W., consolidator grant ERC-2015-647954 (Code4Memory) awarded to S.H., and a Wellcome Trust/Royal Society Sir Henry Dale Fellowship (10762/Z/15/Z) awarded to B.S. We thank Sophie Watson, Wing Tse, Jonathan Burton-Barr, Emma Sutton, Thomas Faherty, Alexandru-Andrei Moise, Laura De Herde, Britanny Lowe, Jessica Davies, and James Lloyd-Cox for their help with collecting the behavioral data and Andrew Reid, Gernot Kreiselmeyer, and Rüdiger Hopfengärtner for technical support. We are grateful to all participants for donating their time, and in particular thank the patients, their families, and the hospital staff for accommodating our work.
School of Psychology & Centre for Human Brain Health, University of Birmingham, Edgbaston, B15 2TT, Birmingham, UK
Marije ter Wal, Juan Linde-Domingo, Julia Lifanov, Frédéric Roux, Luca D. Kolibius, Bernhard Staresina, Simon Hanslmayr & Maria Wimber
Max Planck Institute for Human Development, 14195, Berlin, Germany
Juan Linde-Domingo
Centre for Cognitive Neuroimaging, School of Psychology and Neuroscience, University of Glasgow, G12 8QB, Glasgow, UK
Luca D. Kolibius, Simon Hanslmayr & Maria Wimber
Universitätsklinikum Erlangen, 91054, Erlangen, Germany
Stephanie Gollwitzer, Johannes Lang & Hajo Hamer
Complex Epilepsy and Surgery Service, Queen Elizabeth Hospital Birmingham, Edgbaston, B15 2GW, Birmingham, UK
David Rollings, Vijay Sawlani & Ramesh Chelvarajah
Department of Experimental Psychology, University of Oxford, OX2 6GG, Oxford, UK
Bernhard Staresina
J.L.-D., J. Lifanov, and M.W. designed the experiments, and M.t.W., J.L.-D., J. Lifanov, F.R., L.D.K., S.G., J. Lang, H.H., D.R., V.S., R.C., B.S., S.H., and M.W. were involved in data collection. M.t.W. performed the data analysis and M.t.W. and M.W. wrote the manuscript. All authors provided feedback on the manuscript.
Correspondence to Marije ter Wal or Maria Wimber.
Peer review information Nature Communications thanks the anonymous reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.
ter Wal, M., Linde-Domingo, J., Lifanov, J. et al. Theta rhythmicity governs human behavior and hippocampal signals during memory-dependent tasks. Nat Commun 12, 7048 (2021). https://doi.org/10.1038/s41467-021-27323-3
Received: 08 March 2021
For other uses, see Asterisk (disambiguation) and * (disambiguation).
Not to be confused with Asterix.
Asterisks used to illustrate a section break in Alice's Adventures in Wonderland.
An asterisk (*; Late Latin: asteriscus, from Greek: ἀστερίσκος, asteriskos, "little star")[1] is a typographical symbol or glyph. It is so called because it resembles a conventional image of a star. Computer scientists and mathematicians often call it a star (as, for example, in the A* search algorithm or C*-algebra). In English, an asterisk is usually five-pointed in sans-serif typefaces, six-pointed in serif typefaces, and six- or eight-pointed when handwritten. It can be used for censorship. It is also used on the internet to mark a correction to one's spelling, in which case it appears before or after the corrected word.
The asterisk derives from the need of printers of family trees in feudal times for a symbol to indicate date of birth. The original shape was seven-armed, each arm like a teardrop shooting from the center.
In computer science, the asterisk is commonly used as a wildcard character, or to denote pointers, repetition, or multiplication.
The asterisk is used to call out a footnote, especially when there is only one on the page. Less commonly, multiple asterisks are used to denote different footnotes on a page (i.e., *, **, ***). Typically, an asterisk is positioned after a word or phrase and preceding its accompanying footnote.
Three spaced asterisks centered on a page may represent a jump to a different scene, thought, or section.
A group of three asterisks arranged in a triangular formation ⁂ is called an asterism.
One or more asterisks may be used as censorship over all or part of a word.
Asterisks are sometimes used as an alternative to typographical bullets to indicate items of a list.
Asterisks can be used in textual media to represent *emphasis* when bold or italic text is not available (e.g., Twitter, text messaging).
Asterisks may denote corrections to misspelling or misstatements in previous electronic messages, particularly when replacement or retraction of a previous writing is not possible, especially with "instant messaging" but also with other "immediate delivery" types of textual messages such as SMS. Usually this takes the form of a message consisting solely of the corrected text, with an asterisk; etiquette varies on whether the asterisk should precede or follow such a correction. Example:
I had breakfast this mroning
morning*
I prefer serial
*cereal
I like toast
don't*
I asked for bread
no bread*
Bounding asterisks can also serve as "a kind of self-describing stage direction", as linguist Ben Zimmer has put it. For example, in "Another school shooting *sigh*," the writer uses *sigh* to express disappointment (but does not necessarily literally sigh).[2]
In linguistics, an asterisk is placed before a word or phrase to indicate that it is not used, or there are no records of it being in use. This is used in several ways depending on what is being discussed.
In historical linguistics, the asterisk marks words or phrases that are not directly recorded in texts or other media, and that are therefore reconstructed on the basis of other linguistic material (see also comparative method).
In the following example, the Proto-Germanic word ainlif is a reconstructed form.
*ainlif → endleofan → eleven
A double asterisk indicates a form that would be expected according to a rule, but is not actually found. That is, it indicates a reconstructed form that is not found or used, and in place of which another form is found in actual usage:
For the plural, **kubar would be expected, but separate masculine plural akābir أكابر and feminine plural kubrayāt كبريات are found as irregular forms.
Generative linguistics
In generative linguistics, especially syntax, an asterisk in front of a word or phrase indicates that the word or phrase is not used because it is ungrammatical.
wake her up / *wake up her (in Standard American English)
An asterisk before a parenthesis indicates that the lack of the word or phrase inside is ungrammatical, while an asterisk after the opening bracket of the parenthesis indicates that the existence of the word or phrase inside is ungrammatical.
go *(to) the station - Here, "go the station" would be ungrammatical.
go (*to) home - Here, "go to home" would be ungrammatical.
Since a word marked with an asterisk could mean either "unattested" or "impossible", it is important in some contexts to distinguish these meanings. In general, authors retain asterisks for "unattested", and prefix ˣ, **, or a superscript "?" for the latter meaning.
In musical notation the sign 𝆯 (the pedal-up mark) indicates when the sustain pedal of the piano should be lifted.
In liturgical music, an asterisk is often used to denote a deliberate pause.
In computer science, the asterisk is used in regular expressions to denote zero or more repetitions of a pattern; this use is also known as the Kleene star or Kleene closure after Stephen Kleene.
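As a minimal illustration of the Kleene star, consider the following Python sketch (the pattern and test strings are invented for this example):
import re
# In "ab*c", the star applies to the preceding element: zero or more 'b's.
pattern = re.compile(r"ab*c")
print(bool(pattern.fullmatch("ac")))     # True: zero 'b's
print(bool(pattern.fullmatch("abbbc")))  # True: three 'b's
print(bool(pattern.fullmatch("abdc")))   # False: 'd' breaks the match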
In the Unified Modeling Language, the asterisk is used to denote zero to many classes.
In some command line interfaces, such as the Unix shell and Microsoft's CMD, the asterisk is the wildcard character and stands for any string of characters. This is also known as a wildcard symbol. A common use of the wildcard is in searching for files on a computer. For instance, if a user wished to find a document called Document 1, search terms such as Doc* and D*ment* would return this file. Document* would also return any file that begins with Document.
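Python's fnmatch module implements the same shell-style wildcard semantics, so the file-search behaviour described above can be sketched as follows (the file names are hypothetical):
from fnmatch import fnmatch
filenames = ["Document 1.txt", "Design document.pdf", "notes.txt"]
# "*" matches any run of characters, so "Doc*" and "D*ment*" both match "Document 1.txt".
for name in filenames:
    print(name, fnmatch(name, "Doc*"), fnmatch(name, "D*ment*"))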
In some graphical user interfaces an asterisk is pre- or appended to the current working document name shown in a window's title bar to indicate that unsaved changes exist. In Windows versions before XP the asterisk was also used as a mask to hide passwords being entered into a text box; later this was changed to a bullet.
In Commodore (and related) filesystems, an asterisk appearing next to a filename in a directory listing denotes an improperly closed file, commonly called a "splat file."
In travel industry Global Distribution Systems, the asterisk is the display command to retrieve all or part of a Passenger Name Record.
In HTML web forms, an asterisk can be used to denote required fields.
Chat Room etiquette calls on one asterisk to correct a misspelled word that has already been submitted. For example, one could post lck, then follow it with luck* to correct himself, or if it's someone else that notices the mistake, they would use *luck.
Enclosing a phrase between two asterisks is used to denote an action the user is "performing", e.g. *pulls out a paper*, although this usage is also common on forums, and less so on most chat rooms due to /me or similar commands. Hyphens (-action-) and double colons (::action::) as well as the operator /me are also used for similar purposes.
Adding machines and printing calculators
Some international models of adding machines and printing calculators use the asterisk to denote the total, i.e. the terminal sum or difference of an addition or subtraction sequence. The asterisk appears on the printout and sometimes on the keyboard, where the total key may be marked with an asterisk or with a capital T.
Many programming languages and calculators use the asterisk as a symbol for multiplication. It also has a number of special meanings in specific languages, for instance:
In some programming languages such as the C, C++, and Go programming languages, the asterisk is used to dereference or to declare a pointer variable.
In the Common Lisp programming language, the names of global variables are conventionally set off with asterisks, *LIKE-THIS*.
In the Ada, Fortran, Perl, Python, Ruby programming languages, in some dialects of the Pascal programming language, and many others, a double asterisk is used to signify exponentiation: 5**3 is 5*5*5 or 125.
In the Perl programming language, the asterisk is used to refer to the typeglob of all variables with a given name.
In the programming languages Ruby and Python, * has two specific uses. First, the unary * operator applied to a list object inside a function call will expand that list into the arguments of the function call. Second, a parameter preceded by * in the parameter list for a function will result in any extra positional parameters being aggregated into a tuple (Python) or array (Ruby), and likewise in Python a parameter preceded by ** will result in any extra keyword parameters being aggregated into a dictionary (see the Python sketch after this list).
In the APL language, the asterisk represents the exponential and exponentiation functions.
In IBM Job Control Language, the asterisk has various functions, including in-stream data in the DD statement, the default print stream as SYSOUT=*, and as a self-reference in place of a procedure step name to refer to the same procedure step where it appears.
In Haskell, the asterisk denotes the kind of well-formed, fully applied types, i.e. the kind of concrete (nullary) type constructors.
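The Python uses above can be summarised in a short, self-contained sketch (the function name summarize is invented for illustration):
# Single asterisk: multiplication; double asterisk: exponentiation.
product = 6 * 7        # 42
cube = 5 ** 3          # 125
# * and ** in a parameter list collect extra arguments;
# unary * in a call unpacks a list into positional arguments.
def summarize(first, *rest, **options):
    return first, rest, options
args = [1, 2, 3]
print(summarize(*args, verbose=True))   # (1, (2, 3), {'verbose': True})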
Comments in computing
Main article: block comments
In the B programming language and languages that borrow syntax from it, like C, PHP, Java, or C#, comments (parts of the code not intended to be compiled into the program) are marked by an asterisk combined with the slash:
/* Here is a comment.
The compiler will ignore it. */
Some Pascal-like programming languages, for example, Object Pascal, Modula-2, Modula-3, and Oberon, as well as several other languages including ML, Mathematica, AppleScript, OCaml, Standard ML, and Maple, use an asterisk combined with a parenthesis:
(* This is a comment.
The compiler will ignore it. *)
CSS, while not strictly a programming language, also uses the slash-star comment format.
/* This ought to make the text more readable for far-sighted people */
body { font-size: 24pt; }
The asterisk has many uses in mathematics. The following list highlights some common uses and is not exhaustive.
An arbitrary point in some set. Seen, for example, when computing Riemann sums or when contracting a simply connected group to the singleton set { ∗ }.
as a unary operator, denoted in prefix notation
The Hodge dual operator on vector spaces, $*: A^k \rightarrow A^{n-k}$.
as a unary operator, written as a subscript
The pushforward (differential) of a smooth map f between two smooth manifolds, denoted f∗.
And more generally the application of any covariant functor, where no doubt exists over which functor is meant.
as a unary operator, written as a superscript
The complex conjugate of a complex number (the more common notation is $\bar{z}$).[3]
The conjugate transpose, Hermitian transpose, or adjoint matrix of a matrix.
Hermitian adjoint.
The multiplicative group of a ring, especially when the ring is a field. E.g. $\mathbb{C}^* = \mathbb{C} \setminus \{0\}$.
The dual space of a vector space V, denoted V*.
The combination of an indexed collection of objects into one example, e.g. the combination of all the cohomology groups Hk(X) into the cohomology ring H*(X).
In statistics, z* and t* are given critical points for z-distributions and t-distributions, respectively.
as a binary operator, in infix notation
A notation for an arbitrary binary operator.
The free product of two groups.
f ∗ g is a convolution of f with g.
The asterisk is used in all branches of mathematics to designate a correspondence between two quantities denoted by the same letter – one with the asterisk and one without.
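A few of the notations above, written out explicitly (an illustrative selection in LaTeX, not an exhaustive catalogue):
% Conjugate transpose (Hermitian adjoint) of a matrix A
(A^{*})_{ij} = \overline{A_{ji}}
% Convolution of f with g
(f * g)(t) = \int_{-\infty}^{\infty} f(\tau)\, g(t-\tau)\, d\tau
% Dual space of a vector space V over a field F
V^{*} = \{\varphi : V \to F \mid \varphi \text{ linear}\}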
Mathematical typography
In fine mathematical typography, the Unicode character U+2217 ∗ ASTERISK OPERATOR (in HTML, the entity &lowast;) is available. This character also appeared in the position of the regular asterisk in the PostScript symbol character set in the Symbol font included with Windows and Macintosh operating systems and with many printers. It should be used in fine typography for a large asterisk that lines up with the other mathematical operators.
In fluid mechanics, an asterisk in superscript is sometimes used to mean a property at sonic speed.[4]
Statistical results
In many scientific publications, the asterisk is employed as a shorthand to denote the statistical significance of results when testing hypotheses. When the likelihood that a result occurred by chance alone is below a certain level, one or more asterisks are displayed. Popular significance levels are <0.05 (*), <0.01 (**), and <0.001 (***).
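This convention is easy to mechanise; a tiny illustrative Python helper is shown below (the thresholds follow the levels quoted above, and the function name is invented):
def significance_stars(p):
    # Map a p-value to the conventional star annotation.
    if p < 0.001:
        return "***"
    if p < 0.01:
        return "**"
    if p < 0.05:
        return "*"
    return "n.s."  # not significant
print(significance_stars(0.004))   # '**'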
In human genetics, * is used to denote that someone is a member of a haplogroup and not any of its subclades (see * (haplogroup)).
On a Touch-Tone telephone keypad, the asterisk (called star, or less commonly, palm or sextile)[5] is one of the two special keys (the other is the number sign (pound sign or hash or, less commonly, octothorp[5] or square)), and is found to the left of the zero. They are used to navigate menus in Touch-Tone systems such as Voice mail, or in Vertical service codes.
In economics, the use of an asterisk after a letter indicating a variable such as price, output, or employment indicates that the variable is at its optimal level (that which is achieved in a perfect market situation). For instance, p* is the price level p when output y is at its corresponding optimal level of y*.
Also in international economics asterisks are commonly used to denote economic variables in a foreign country. So, for example, "p" is the price of the home good and "p*" is the price of the foreign good, etc.
In the GCSE and A-Level examinations in the United Kingdom and the PSLE in Singapore, A* ("A-star") is a special top grade that is distinguished from grade A.
In the Hong Kong Diploma of Secondary Education (HKDSE) examination in Hong Kong, 5** (5-star-star) and 5* (5-star) are two special top grades that are distinguished from Level 5. Level 5** is the highest level a candidate can attain in HKDSE.
Certain categories of character types in role-playing games are called splats, and the game supplements describing them are called splatbooks. This usage originated with the shorthand "*book" for this type of supplement to various World of Darkness games, such as Clanbook: Ventrue (for Vampire: The Masquerade) or Tribebook: Black Furies (for Werewolf: The Apocalypse), and this usage has spread to other games with similar character-type supplements. For example, Dungeons & Dragons Third Edition has had several lines of splatbooks: the "X & Y" series including Sword & Fist and Tome & Blood prior to the "3.5" revision, the "Complete X" series including Complete Warrior and Complete Divine, and the "Races of X" series including Races of Stone and Races of the Wild.
In many MUDs and MOOs, as well as "male", "female", and other more esoteric genders, there is a gender called "splat", which uses an asterisk to replace the letters that differ in standard English gender pronouns. For example, h* is used rather than him or her. Also, asterisks are used to signify doing an action, for example, "*action*"
Game show producer Mark Goodson used a six-pointed asterisk as his trademark. It is featured prominently on many set pieces from The Price Is Right.
Scrabble players put an asterisk after a word to indicate that an illegal play was made.[6]
Competitive sports and games
In colloquial usage, an asterisk is used to indicate that a record is somehow tainted by circumstances, which are putatively explained in a footnote referenced by the asterisk.[7] This usage arose after the 1961 baseball season in which Roger Maris of the New York Yankees broke Babe Ruth's 34-year-old single-season home run record. Because Ruth had amassed 60 home runs in a season with only 154 games, compared to Maris's 61 over 162 games, baseball commissioner Ford Frick announced that Maris' accomplishment would be recorded in the record books with an explanation (often referred to as "an asterisk" in the retelling). In fact, Major League Baseball had no official record book at the time, but the stigma remained with Maris for many years, and the concept of a real or figurative asterisk denoting less-than-official records has become widely used in sports and other competitive endeavors. A 2001 TV movie about Maris' record-breaking season was called 61* (pronounced sixty-one asterisk) in reference to the controversy.
In recent years, the asterisk has come into use on baseball scorecards to denote a "great defensive play."[8]
In February 2011 the United States Olympic Committee and the Ad Council launched an anti-steroid campaign called "Play Asterisk Free"[9] aimed at teens. The campaign, whose logo uses a heavy asterisk, first launched in 2008 under the name Don't Be An Asterisk.[10]
In cricket, it signifies a total number of runs scored by a batsman without losing his wicket, e.g. 107* means '107 not out'. When written before a player's name on a scorecard, it indicates the captain of the team.
It is also used on television when giving a career statistic during a match. For example, 47* in a number of matches column means that the current game is the player's 47th.
Fans critical of Barry Bonds, who has been accused of using performance-enhancing drugs during his baseball career, invoked the asterisk notion during the 2007 season, as he approached and later broke Hank Aaron's career home run record.[11] Opposing fans would often hold up signs bearing asterisks whenever Bonds came up to bat. After Bonds hit his record-breaking 756th home run on August 7, 2007, fashion designer and entrepreneur Marc Ecko purchased the home run ball from the fan who caught it, and ran a poll on his Web site to determine its fate. On September 26, Ecko revealed on NBC's Today show that the ball will be branded with an asterisk and donated to the Baseball Hall of Fame. The ball, marked with a die-cut asterisk, was finally delivered to the hall on July 2, 2008 after Marc Ecko unconditionally donated the artifact rather than loaning it to the hall as originally intended.
Asterisks (or other symbols) are commonly used in advertisements to refer readers to special terms/conditions for a certain statement, commonly placed below the statement in question. For example: an advertisement for a sale may have an asterisk after the word "sale" with the date of the sale at the bottom of the advertisement, similar to the way footnotes are used.
In the Geneva Bible and the King James Bible, an asterisk is used to indicate a marginal comment or scripture reference.
In the Leeser Bible, an asterisk is used to mark off the seven subdivisions of the weekly Torah portion. It is also used to mark the few verses to be repeated by the reader of the Haftara.
In American printings of the Book of Common Prayer, an asterisk is used to divide a verse of a Psalm in two portions for responsive reading. British printings use a spaced colon (" : ") for the same purpose.
Main article: Wordfilter
It is used as censorship over all or part of a word.
The Unicode standard distinguishes the asterisk (U+002A) from several visually similar characters.[12] The symbols are compared below (how they display depends on the available fonts).
Asterisk * · Asterisk Operator ∗ · Heavy Asterisk ✱ · Small Asterisk ﹡ · Full Width Asterisk * · Open Centre Asterisk ✲
Low Asterisk ⁎ · Arabic star ٭ · East Asian reference mark ※ · Teardrop-Spoked Asterisk ✻ · Sixteen Pointed Asterisk ✺
Name Unicode Decimal UTF-8 HTML Displayed
Asterisk U+002A * 2A *
Combining Asterisk Below U+0359 ͙ CD 99 ͙
Arabic Five Pointed Star U+066D ٭ D9 AD ٭
East Asian Reference Mark U+203B ※ E2 80 BB ※
Flower Punctuation Mark U+2055 ⁕ E2 81 95 ⁕
Asterism U+2042 ⁂ E2 81 82 ⁂
Low Asterisk U+204E ⁎ E2 81 8E ⁎
Two Asterisks Aligned Vertically U+2051 ⁑ E2 81 91 ⁑
Combining Asterisk Above U+20F0 ⃰ E2 83 B0 ⃰
Asterisk Operator U+2217 ∗ E2 88 97 ∗ ∗
Circled Asterisk Operator U+229B ⊛ E2 8A 9B ⊛
Four Teardrop-Spoked Asterisk U+2722 ✢ E2 9C A2 ✢
Four Balloon-Spoked Asterisk U+2723 ✣ E2 9C A3 ✣
Heavy Four Balloon-Spoked Asterisk U+2724 ✤ E2 9C A4 ✤
Four Club-Spoked Asterisk U+2725 ✥ E2 9C A5 ✥
Heavy Asterisk U+2731 ✱ E2 9C B1 ✱
Open Centre Asterisk U+2732 ✲ E2 9C B2 ✲
Eight Spoked Asterisk U+2733 ✳ E2 9C B3 ✳
Sixteen Pointed Asterisk U+273A ✺ E2 9C BA ✺
Teardrop-Spoked Asterisk U+273B ✻ E2 9C BB ✻
Open Centre Teardrop-Spoked Asterisk U+273C ✼ E2 9C BC ✼
Heavy Teardrop-Spoked Asterisk U+273D ✽ E2 9C BD ✽
Heavy Teardrop-Spoked Pinwheel Asterisk U+2743 ❃ E2 9D 83 ❃
Balloon-Spoked Asterisk U+2749 ❉ E2 9D 89 ❉
Eight Teardrop-Spoked Propeller Asterisk U+274A ❊ E2 9D 8A ❊
Heavy Eight Teardrop-Spoked Propeller Asterisk U+274B ❋ E2 9D 8B ❋
Squared Asterisk U+29C6 ⧆ E2 A7 86 ⧆
Equals With Asterisk U+2A6E ⩮ E2 A9 AE ⩮
Slavonic Asterisk U+A673 ꙳ EA 99 B3 ꙳
Small Asterisk U+FE61 ﹡ EF B9 A1 ﹡
Full Width Asterisk U+FF0A * EF BC 8A *
Music Symbol Pedal Up Mark U+1D1AF 𝆯 F0 9D 86 AF 𝆯
Tag Asterisk U+E002A 󠀪 F3 A0 80 AA
Light Five Spoked Asterisk U+1F7AF 🞯 F0 9F 9E AF 🞯
Medium Five Spoked Asterisk U+1F7B0 🞰 F0 9F 9E B0 🞰
Bold Five Spoked Asterisk U+1F7B1 🞱 F0 9F 9E B1 🞱
Heavy Five Spoked Asterisk U+1F7B2 🞲 F0 9F 9E B2 🞲
Very Heavy Five Spoked Asterisk U+1F7B3 🞳 F0 9F 9E B3 🞳
Extremely Heavy Five Spoked Asterisk U+1F7B4 🞴 F0 9F 9E B4 🞴
Light Six Spoked Asterisk U+1F7B5 🞵 F0 9F 9E B5 🞵
Medium Six Spoked Asterisk U+1F7B6 🞶 F0 9F 9E B6 🞶
Bold Six Spoked Asterisk U+1F7B7 🞷 F0 9F 9E B7 🞷
Heavy Six Spoked Asterisk U+1F7B8 🞸 F0 9F 9E B8 🞸
Very Heavy Six Spoked Asterisk U+1F7B9 🞹 F0 9F 9E B9 🞹
Extremely Heavy Six Spoked Asterisk U+1F7BA 🞺 F0 9F 9E BA 🞺
Light Eight Spoked Asterisk U+1F7BB 🞻 F0 9F 9E BB 🞻
Medium Eight Spoked Asterisk U+1F7BC 🞼 F0 9F 9E BC 🞼
Bold Eight Spoked Asterisk U+1F7BD 🞽 F0 9F 9E BD 🞽
Heavy Eight Spoked Asterisk U+1F7BE 🞾 F0 9F 9E BE 🞾
Very Heavy Eight Spoked Asterisk U+1F7BF 🞿 F0 9F 9E BF 🞿
Asterism (typography)
Reference mark
Star (glyph)
1. ἀστερίσκος, Henry George Liddell, Robert Scott, A Greek-English Lexicon, on Perseus.
2. Zimmer, Ben. "The cyberpragmatics of bounding asterisks". Language Log, University of Pennsylvania. Retrieved 24 August 2013.
3. "Complex Conjugate". Wolfram MathWorld.
4. White, F. M. Fluid Mechanics, Fourth Ed. WCB McGraw Hill.
5. US 3920926.
6. "Scrabble Glossary". Tucson Scrabble Club. Retrieved 2012-02-06.
7. Allen Barra (2007-05-27). "An Asterisk is very real, even when it's not". New York Times.
8. Baseball Almanac: Scoring Baseball, Advanced Symbols.
9. Facebook.com.
10. Adcouncil.org, Ad Council, August 8, 2008.
11. Michael Wilbon (2004-12-04). "Tarnished records deserve an Asterisk". Washington Post. p. D10.
12. "Detailed descriptions of the characters (The ISO Latin 1 character repertoire)". 2006-09-20. Retrieved 2015-03-23.
This page is based on the copyrighted Wikipedia article Asterisk; it is used under the Creative Commons Attribution-ShareAlike 3.0 Unported License (CC-BY-SA). You may redistribute it, verbatim or modified, providing that you comply with the terms of the CC-BY-SA.
A framework for detecting unfolding emergencies using humans as sensors
Marco Avvenuti, Mario G. C. A. Cimino, Stefano Cresci (ORCID: orcid.org/0000-0003-0170-2445), Andrea Marchetti & Maurizio Tesconi
The advent of online social networks (OSNs) paired with the ubiquitous proliferation of smartphones have enabled social sensing systems. In the last few years, the aptitude of humans to spontaneously collect and timely share context information has been exploited for emergency detection and crisis management. Apart from event-specific features, these systems share technical approaches and architectural solutions to address the issues with capturing, filtering and extracting meaningful information from data posted to OSNs by networks of human sensors. This paper proposes a conceptual and architectural framework for the design of emergency detection systems based on the "human as a sensor" (HaaS) paradigm. An ontology for the HaaS paradigm in the context of emergency detection is defined. Then, a modular architecture, independent of a specific emergency type, is designed. The proposed architecture is demonstrated by an implemented application for detecting earthquakes via Twitter. Validation and experimental results based on messages posted during earthquakes occurred in Italy are reported.
Established public safety systems are based on centralized emergency detection approaches, often relying on expensive infrastructures of physical sensors which may not be available everywhere. The proliferation of handheld devices, equipped with a large number of sensors and communication capabilities, can significantly extend, or possibly substitute, conventional sensing by enabling the collection of data through networks of humans. Novel paradigms such as crowd-, urban- or citizen-sensing have been coined to describe how information can be sourced from the average individual in a coordinated way. Data gathering can be either participatory or opportunistic, depending on whether the user intentionally contributes to the acquisition campaign (possibly receiving an incentive), or she simply acts as the bearer of a sensing device from which data is transparently collected by some situation-aware system (Sheth 2009; Kapadia et al. 2009; Cimino et al. 2012).
In this scenario, the advent of online social network (OSN) platforms such as Twitter, Weibo and Instagram, which have grown into primary hubs for public expression and interaction, has added facilities for ubiquitous and real-time data-sharing (Demirbas et al. 2010). These unprecedented sensing and sharing opportunities have enabled situations where individuals not only play the role of sensor operators, but also act as data sources themselves. In fact, humans have a great aptitude for processing and filtering observations from their surroundings and, with communication facilities at hand, for readily sharing the information they collect (Srivastava et al. 2012). This spontaneous behavior has driven a new and challenging research field, called "social sensing" (Aggarwal and Abdelzaher 2013), which investigates how human-sourced data, modeled by the "human as a sensor" (HaaS) paradigm (Wang et al. 2014), can be gathered and used to gain situational awareness and to nowcast events (Lampos and Cristianini 2012) in domains such as health, transportation, energy, social and political crises, and even warfare. Among the advantages of social sensing are the natural tendency of OSN users to promptly convey information about their context (Liang et al. 2013; Cresci et al. 2015b) and the fact that proactively posted messages, especially those witnessing emergency situations, are likely to be free of pressure or influence (Zhou et al. 2012). The clearest case is Twitter, where users are encouraged to make their messages (tweets) publicly available by default and where, due to the 140-character length limit, they are pushed to share more topic-specific content.
Given this picture, it is not surprising that OSNs, and Twitter in particular, have drawn the attention of designers of decision support systems for emergency management, and that during recent disasters, such as the Tōhoku earthquake and tsunami (Japan, 2011), Hurricane Sandy (Central and North America, 2012) and the Himalayan earthquake (Nepal, 2015), civil protection agencies turned to the Web and to OSN data to help track stricken locations, assess the damage and coordinate the rescue efforts. Based on the observation that an unfolding emergency is likely to give rise to a burst of alerting messages, which may be used to detect the event early, followed by more reflective messages, whose content may be used to understand its consequences, several systems have focused on the collection and analysis of messages shared in areas affected by disasters (Hughes and Palen 2009; Bagrow et al. 2011; Adam et al. 2012; Gao et al. 2014; Avvenuti et al. 2014a). However, such information is often unstructured, heterogeneous and fragmented over a large number of messages in such a way that it cannot be used directly. It is therefore mandatory to turn that messy data into a small number of clear and concise messages for emergency responders (Cresci et al. 2015b). Challenging issues highlighted and faced by pioneer systems include the real-time acquisition of unstructured data not specifically targeted to the system (data is often free text without structure or codified semantics) (Goolsby 2010), the extraction of critical data overwhelmed by a flood of meaningless chatter, the identification of the most stricken areas in the aftermath of an emergency (Cresci et al. 2015c; Sakai and Tamura 2015), and security and privacy issues, including the lack of guarantee that human sensors correctly deliver information about specific facts at specific times (Rosi et al. 2011).
Despite these common findings, an analysis of the state of the art in the field of social sensing-based emergency management systems highlights a multitude of domain-specific, unstructured and heterogeneous solutions. In fact, in the literature the design of monolithic and vertical ad-hoc solutions still prevails over architectural approaches addressing modularity, generality and flexibility (Imran et al. 2015). This paper presents a framework for detecting emergent crisis events using humans as sensors. According to the framework, different emergency types (e.g., seismic, hydrological, meteorological) can be detected by configuring a software architecture whose re-usable components can adapt to the different contents and patterns of messages posted to the OSN while the event unfolds. The contribution of the paper is both conceptual and practical. For the purpose of deepening and sharing the understanding of the properties and relationships of data provided by human sensors, we have defined a terminology and an ontology for the HaaS paradigm in the context of emergency detection. From the practical point of view, we have designed a domain-independent, architectural and modular framework that encompasses the vast majority of systems proposed to date. The effectiveness of the proposed architecture in solving common problems, such as data capturing, data filtering and emergency event detection, has been demonstrated by a proof-of-concept implementation involving earthquake detection via Twitter. The application has been validated using datasets of tweets collected during earthquakes that occurred in Italy.
In this section, we outline the most relevant works in the field, discussing the main differences with our approach as well as the main similarities, in order to point out the works that inspired our architectural model. Thus, this section corroborates our approach under the more general umbrella of the HaaS paradigm for emergency management.
Several initiatives, both in scientific and in application environments, have been developed in the last few years with the aim of exploiting information available on social media during emergencies. Works proposed in the literature either describe working systems employing solutions for some of the fundamental challenges of emergency management, or focus on a single specific challenge and thoroughly study it. The systems surveyed in this section present different degrees of maturity. Some have been deployed and tested in real-life scenarios, while others remain under development (Imran et al. 2015). The vast majority of these systems share goals or functionalities with the framework we are proposing and can be mapped, totally or in part, on the architecture subsequently defined. Among the proposed systems some approaches are tailored to suit requirements of a specific kind of emergency and are therefore domain-specific. Overall, many of the surveyed works present shortcomings regarding their reusability.
The works presented in Bartoli et al. (2015) and Foresti et al. (2015) describe novel emergency management platforms for smart public safety and situational awareness. The proposed solutions exploit both wireless sensor networks and social media to support decision-makers during crises. In Bartoli et al. (2015) a high-level framework is proposed which includes subsystems designed for the acquisition and the analysis of heterogeneous data. The subsystems working on social media data perform the data acquisition and data analysis tasks and can be directly mapped to the corresponding components of our architecture. In this framework data acquisition from social media has a marginal impact since it is activated only after the detection of an emergency. Thus Bartoli et al. (2015) only marginally deals with the challenges related to the acquisition and handling of a big stream of social media data. An example of an application scenario for the system is also proposed for hydrological risks such as floods and landslides. The ASyEM system (Foresti et al. 2015) focuses on data acquisition and data fusion. Authors introduce an offline methodology for the extraction of emergency-specific terms which are subsequently used by the online system to gather relevant messages from social media sources. The detection of an emergency is performed by means of a neural tree network previously trained during the offline phase. Both Bartoli et al. (2015) and Foresti et al. (2015) lack a data filtering component. Similarly to Foresti et al. (2015), the work discussed in Salfinger et al. (2015) employs data fusion techniques in a system designed to increase situational awareness during emergencies. Authors propose a high-level architecture for an adaptive framework exploiting both traditionally sensed data as well as social media data.
Among the various kinds of emergencies, seismic events are those which have been investigated the most in the last few years. Earthquake emergency management is worth studying not only because of the serious threat seismic events pose to communities and infrastructures, but also because the detailed earthquake characterization obtainable from seismographic networks can be exploited as a baseline for novel social media-based emergency management systems and leveraged to achieve better results in terms of responsiveness and situational awareness. The opportunities granted by the application of the HaaS paradigm to earthquake detection and response were first envisioned in works such as Earle (2010), Allen (2012), and Crooks et al. (2013).
The study described in Sakaki et al. (2010, 2013) is among the first works proposing techniques for emergency management based on social media data. The authors investigate the design and development of a social alert detection and earthquake reporting system. The detection of an event is performed by means of a Bayesian statistical model. The authors carried out experiments to assess the quality of the detections and their responsiveness. Detection results are evaluated only by means of the Recall metric (the ratio of correctly detected earthquakes to the total number of earthquakes that occurred), and the system was able to timely detect 67.9 % (53 out of 78) of the earthquakes with JMA (Japan Meteorological Agency) scale 2 or more which occurred over 2 months. It is worth noting that the JMA scale cannot be directly mapped onto the worldwide-adopted Richter magnitude scale used in Table 1 to evaluate our system. The approach proposed in Sakaki et al. (2010, 2013) is tested on both earthquakes and tornadoes, and the achieved results seem convincing towards the employment of this solution for other large-scale emergencies as well. However, the work only focuses on the event detection task, without dealing with the definition of a full working system. Moreover, data acquisition is performed by means of the Twitter Search API, which accesses only a portion of the amount of tweets produced. While this limitation can be negligible for large-scale events, it can impair the system's ability to detect events felt by a small number of social sensors, thus limiting the reusability of this system for small-scale emergencies such as landslips, traffic jams, car accidents, etc.
US Geological Survey (USGS) efforts towards the development of an earthquake detection system based solely on Twitter data are described in Earle et al. (2012). The solution is evaluated with different settings according to the sensitivity of the event detection module. However, even in its best configuration, the system could only detect 48 globally distributed earthquakes out of the 5175 earthquakes that occurred during the same time window. Also this system acquires data via the Twitter Search API, thus suffering from the same limitations described above. Basic data filtering concerns are taken into account and relevant messages are selected with a heuristic approach. Event detection is performed by a STA/LTA (short-term average/long-term average) algorithm. Although representing an interesting demonstration of the possibility to perform emergency event detection via social media, this system has a few shortcomings which severely limit its performance. The deeper level of analysis supported in our proposed architecture and performed in our implementation allows us to outperform USGS's system. Overall, we believe the main reasons for our better performance lie in the adoption of more sophisticated filtering techniques (i.e. machine learning classifiers instead of heuristics) and a more powerful event detection algorithm (i.e. a burst detection algorithm instead of a STA/LTA). USGS kept on working on the project and recently announced the official employment of a Twitter earthquake detection system named TED (Tweet Earthquake Dispatch). As claimed by USGS, such a detection system proved more responsive than those based on seismographs in regions where the number of seismographic stations is low.
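For reference, a STA/LTA trigger of the kind mentioned above can be sketched in a few lines of Python; this is a generic illustration, not the USGS implementation, and the window lengths and threshold are arbitrary assumptions:
def sta_lta_trigger(counts, sta_len=5, lta_len=60, threshold=3.0):
    # counts: per-minute numbers of candidate messages.
    # Fire when the short-term average exceeds the long-term average
    # by at least the given ratio.
    alerts = []
    for t in range(lta_len, len(counts)):
        sta = sum(counts[t - sta_len:t]) / sta_len
        lta = sum(counts[t - lta_len:t]) / lta_len
        if lta > 0 and sta / lta >= threshold:
            alerts.append(t)
    return alerts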
In Avvenuti et al. (2014a, b, 2015) is described the development of the Earthquake Alert and Report System (EARS). EARS is a real-time platform designed for the detection and the assessment of the consequences of earthquakes from social media data. The proposed solution employs data mining and natural language processing techniques to enhance situational awareness after seismic events. Although the proposed system is domain-specific and employed only in the field of earthquake emergency management, the discussion in Avvenuti et al. (2014b) addresses issues common to all social media emergency management systems. Preliminary results of the works proposed in Sakaki et al. (2010, 2013); Earle et al. 2012) and Avvenuti et al. (2014a, b, 2015) are overall encouraging, especially in relation to the responsiveness of the detections. In the present work we built on the key features of these systems in order to design a solution applicable to a broad range of emergencies.
Situational awareness during emergencies is the goal of the work described in Yin et al. (2012). The Emergency Situation Awareness (ESA) platform operates over the Twitter stream by comparing terms used in recent tweets with those of a baseline. The baseline has been generated in an offline phase and represents a statistical model of the terms used during a fixed time window of several months. ESA raises alerts for every term which appears in recent tweets significantly more often than in the baseline. The drawback of this approach is that the baseline does not account for topic seasonality. Moreover, ESA performs no data filtering and does not employ keywords for data acquisition, and therefore many of the generated alerts are of little interest. ESA represents, however, one of the first domain-independent approaches to the problem of emergency management from social media. The core of the general ESA platform has been later expanded with ad-hoc filters and tailored to perform event detection in the earthquake (Robinson et al. 2013) and wildfire (Power et al. 2013) domains. Other works have instead investigated the exploitation of social sensors for the detection of traffic jams (D'Andrea et al. 2015).
Crowdsourced crisis mapping from Twitter data is the goal of the systems proposed in Middleton et al. (2014), Cresci et al. (2015c). Crisis mapping concerns with the capturing, processing and display of data during a crisis with the goal of increasing situational awareness. Following an approach adopted in other previously reviewed works, these systems are composed of both offline and real-time (online) subsystems. The offline subsystems calculate baseline statistics during a historical period when no disasters occurred. Among the real-time subsystems Middleton et al. (2014) also includes a data filtering component which, similarly to Earle et al. (2012), applies heuristic rules to select relevant tweets. On the contrary, Cresci et al. (2015c) uses machine learning techniques to filter and analyze data.
Lastly, the study in Imran et al. (2015) presents a survey on computational techniques for social media data processing during emergencies and can be considered as a further reference for works in the fields of social media emergency management, crisis informatics and crisis mapping.
Core concepts and functionalities
An ontological view of the HaaS paradigm for emergency management
Our conceptual framework is intended to operate in a broad class of domains. For this reason it should evolve from an explicit formal specification of terms and of relationships among them. This way, experts are supported with shared understanding of their domains of interest. A good specification serves as a basis to communicate in development, to guarantee consistency, to minimize misunderstanding and missed information, to overcome barriers to the acquisition of specifications, to reuse and analyze domain knowledge, and to separate it from operational knowledge. Among the suitable formalisms, ontologies are structured vocabularies with definitions of basic concepts and relations among them. Ontologies have interesting properties that can be formally verified, such as completeness, correctness, consistency, and unambiguity (Siegemund et al. 2011).
In this section we introduce the terminology of the "human as a sensor" (HaaS) paradigm via an ontology diagram. In Fig. 1 base concepts are enclosed in gray ovals and connected by properties, represented by black directed edges. The fundamental property is on the right: Decision System detects Emergency. This property cannot be directly sensed (i.e., instantiated) by the system, and is therefore represented as an abstract property, shown by a dotted edge. Indeed, the overall decision system is aimed at indirectly detecting emergencies by means of information provided by sensors. As the system should be scalable in terms of types of emergency, different specific emergencies have been considered. In the figure, Seismic, Hydrological, Meteorological, and Terrorist are examples of specialized concepts, shown as white ovals and connected by white directed edges to the base concept.
A Decision System is owned by a Public Safety Agency, and exploits both Artificial and Social Detection Systems. The former is a conventional system based on physical sensors: an Artificial Detection System analyzes Observations, which are provided by Artificial Sensors, i.e., a type of specialized Sensor. Another type of specialized sensor is the human Sense, which is interpreted by Humans. Here, the concept Human acts as a Sensor can then be derived as a specialized human. Indeed, both Human and Sensor are in the Territory, where an Emergency occurs and its Effects are measured by Sensors. Unlike an artificial sensor, a Human as a Sensor is able to directly perceive an emergency and owns a Terminal to deliver Messages to an Online Social Network. For this reason, they can raise alerts via an Online Social Network. Location is a structural property of a terminal. Specialized examples of Online Social Networks are Twitter, Weibo, and Instagram.
Use cases of the HaaS paradigm for emergency management
In the context of online detection, a structural property of a message is the timestamp. Other properties are content-based and must be recognized as specialized types: a Trusted Message, i.e., a message which is not sent for malicious, disruptive or abusive purposes (Mendoza et al. 2010; Castillo et al. 2011); a Primary Message, i.e., a message sent by a user who is actually present at the referred event and can directly describe it (Kumar et al. 2013; Morstatter et al. 2014); an Emergency Message, i.e., a message reporting an actual social emergency and not, for instance, reporting a personal problem via an idiom made of emergency words (Avvenuti et al. 2014a). If all these properties are available in a single message, that message can be considered an instance of a further specialized concept, the Ongoing Emergency Message, which is a message reporting an ongoing emergency. In addition, an Ongoing Emergency Message must have another property: being temporally close to another message of the same typology. This way, the Social Detection System recognizes a number of temporally close messages. Thus, the detection of an actual social emergency encompasses many messages, differently arranged in time depending on the type of emergency.
Managing a Social Detection System requires interaction between different external agents (people or systems), represented in Fig. 2 as UML use cases. Here, interacting agents are called actors and are represented by the "stick man" icon, whereas functionalities available to actors are represented by an oval shape. An actor can communicate with the system through an association to a functionality, represented as a link. Use cases have been related to other use cases by the extend and include relationships, which allow one to extend a use case and to specify a piece of the use case in a modular way, respectively. A relationship is represented as a dashed directed arrow, whose direction denotes dependency.
More specifically, for a given emergency type (e.g., earthquake, flooding, or their subtypes) the Decision System asks the Social Detection System (hereafter called System for the sake of brevity) to be prepared to get alerts of that emergency type. This functionality includes the activation of the content-based filtering of messages, which is in charge of providing, among the messages captured from the Online Social Network actor (e.g., Twitter), only those containing information related to the unfolding emergency situation. We call this use case the online process.
Emergency-specific knowledge of the content of messages is thus necessary to extend the System's capability in recognising multiple emergency types. Such a knowledge can be extracted from a message corpus, a large and structured set of messages (electronically stored and processed), used for statistical analysis and hypothesis testing, checking occurrences or validating filtering within a specific emergency type. Extracted knowledge can be encoded as: (1) terms that are frequently contained in the target messages, established via statistical methods; (2) features extracted from a training set of target messages, established via machine learning methods; (3) parameters of collections of messages related to the same emergency event, established via statistical methods.
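As a minimal sketch of option (1), emergency-specific terms can be obtained by comparing word frequencies in an annotated emergency corpus against a background corpus; the corpora, the smoothing and the thresholds below are illustrative assumptions, not the system's actual method:
from collections import Counter

def emergency_terms(emergency_msgs, background_msgs, min_ratio=5.0, min_count=10):
    # Tokenize very naively and count word occurrences in both corpora.
    emerg = Counter(w for m in emergency_msgs for w in m.lower().split())
    backg = Counter(w for m in background_msgs for w in m.lower().split())
    total_e, total_b = sum(emerg.values()), sum(backg.values())
    terms = []
    for word, count in emerg.items():
        if count < min_count:
            continue
        p_e = count / total_e
        p_b = (backg[word] + 1) / (total_b + 1)   # add-one smoothing
        if p_e / p_b >= min_ratio:
            terms.append(word)
    return terms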
Thus, when a new emergency type has to be managed, the content-based filtering of messages functionality must first be extended with emergency-specific knowledge provided by the configure filters functionality. This process is managed by the actor responsible for the System's maintenance and configuration, the Social Network Analyst. Configuring filters includes creating training sets and extracting terms from a corpus. Building a corpus includes annotating it, in collaboration with a number of Annotators. We call the configure filters use case the offline process.
The "human as a sensor" (HaaS) paradigm for emergency management so far determined has been used as a reference for designing an efficient, flexible and scalable software architecture. The analysis conducted in the previous section, as well as the findings reported in previous works, highlighted the fundamental challenges related to processing social media data for the detection of unfolding emergency situations (Imran et al. 2015). Such challenges comprehend: (1) data capturing, (2) data filtering and (3) emergency event detection. The challenge related to data capturing lies in gathering, among the sheer amount of social media messages, the most complete and specific set of messages for the detection of a given type of emergency. However, not all collected messages are actually related to an unfolding emergency, hence the need of a data filtering step to further reduce the noise among collected messages and retain only the relevant ones. Finally, techniques are needed in order to analyze relevant messages and infer the occurrence of an emergency event. The general framework for emergency management that we are proposing efficiently deals with all these aspects.
In this section the system logic is represented by a number of components and actors. A component represents a modular piece of logic whose external behavior can be concisely described to offer a platform-independent view. Each component may be developed in any programming language and by using one or more classes or procedures, since its internal algorithmic implementation is not detailed. Indeed, each component in the model can be replaced by another component supporting the same interfaces, thus providing modularity. Each actor represents a role played by a user interacting with the system components. Subsequently, a behavioral description of the system within its life cycle is also provided by means of a sequence of exchange messages between actors and components.
Static view of the logical architecture
Figure 3 shows a UML static view of the system, made up of components and their interfaces. Here, a component is represented by a box, with provided and required interfaces represented by the "lollipop" and "socket" icons, respectively. Actors are represented by the "stick man" icon. Components that are external to the design are colored in dark gray. Some specific types of components or subsystems, such as repository, storage, knowledge base, and web, are characterized by a special icon or shape. The usage of a component by an actor or by another component is represented by the socket icon or by the dashed arrow, respectively. The architecture is focused on the social detection system, i.e., on the HaaS input channel. The Human as a Sensor actor is represented on the bottom left as an actor using the Terminal subsystem to deliver messages to the Online Social Network subsystem. The Online Social Network subsystem feeds the main data flow carried out in the online mode of operation, i.e., the detection process. In the figure, the components involved in the online process are arranged in a stack of components, enclosed in a dotted box, where the Online Social Network is at the bottom.
More specifically, the Emergency Message Capturing component accesses the Online Social Network's global stream of data, via a streaming API, to collect emergency messages. The messages are captured according to the Emergency-specific Terms provided by the knowledge base, and then pushed to the Emergency Messages repository, which acts as a buffer with respect to the large data stream provided by the Online Social Network. The Primary Messages Selection component takes data from this buffer and provides only primary messages to the Trusted Messages Selection component, which, in turn, provides only trusted messages to the next component. The semantics of both primary and trusted is compliant with the HaaS ontology. The latter component employs a statically defined Trusted Message Model, which is the same for all types of emergencies. Both components implement fast and coarse-grained filtering to avoid congestion due to the large number of messages.
The logical architecture of a decision support system for emergency management based on social sensing
Communication diagram of the online process in a decision support system for emergency management based on social sensing
The next filtering component is the Ongoing Emergency Messages Selection, which is fed by the Trusted Messages Selection component and implements the namesake concept of the HaaS ontology. This component carries out a fine-grained filtering, employing an Ongoing Emergency Message Model knowledge base. The outgoing messages are subsequently sent to the Emergency Event Detection component, which is able to detect an actual collective emergency. Since each type of emergency needs a different parameterization, this component is based on the Emergency-specific Parameters knowledge base configured by the Social Network Analyst. The detected event is then geolocated by the Emergency Geolocation component. Finally, the geolocated emergency is provided to the Analysis System, which is able to interoperate with a Decision System of a Public Safety Agency.
In the offline mode of operation, the setting of parametric models and knowledge bases for each type of emergency is covered. This offline process is managed by the Social Network Analyst (on the bottom right) with the help of some Annotators.
More specifically, given a new type of emergency, the web is first accessed to find, via Emergency Management Agency and News Archives, some historical examples of the same type of emergency. Subsequently, an Emergency-specific corpus of messages is created via the Corpus Building component, accessing the Online Social Network via a historical search API managed by the Historical Messages Capturing component.
Emergency-specific terms are then created by means of the Offline Terms Building component, which uses both the corpus and a Static Message Baseline component. A baseline represents common terms in online social networks, which hamper filtering and do not provide relevant information. For this reason, such terms are removed from messages.
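As an illustration, the sketch below ranks candidate emergency-specific terms by comparing their relative frequency in the emergency corpus against a baseline of generic messages. The function name, the tokenization and the ratio-based scoring are assumptions introduced here for clarity; the paper only states that baseline terms are removed.

```python
from collections import Counter
import re

def extract_emergency_terms(corpus_messages, baseline_messages, top_k=50):
    """Rank terms frequent in the emergency corpus but rare in the baseline (illustrative only)."""
    def term_freqs(messages):
        counts = Counter()
        for text in messages:
            # Simple lowercase tokenization; accented characters kept for Italian text.
            counts.update(re.findall(r"[a-zà-ù]+", text.lower()))
        total = sum(counts.values()) or 1
        return {term: n / total for term, n in counts.items()}

    corpus_tf = term_freqs(corpus_messages)
    baseline_tf = term_freqs(baseline_messages)
    # Score each corpus term by how much more frequent it is than in the baseline.
    scored = {t: f / (baseline_tf.get(t, 0.0) + 1e-6) for t, f in corpus_tf.items()}
    return [t for t, _ in sorted(scored.items(), key=lambda kv: kv[1], reverse=True)[:top_k]]
```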
Subsequently, an Emergency-specific Training Set is created by selecting and annotating messages in the corpus, via an Annotation Tool. The training set is finally used to train the Ongoing Emergency Message Model via the Machine Learning Classifier that exploits a set of features defined on the message corpus itself.
The next subsection provides a dynamic view of the above logical architecture.
Dynamic view of the logical architecture
In this subsection we focus on the sequence of steps performed by the diverse components in both online and offline processes. Figure 4 shows the online process, via a UML communication diagram. Here, interacting components are connected by temporary links. Messages among components are shown as labeled arrows attached to links. Each message has a sequence number, name and arguments. A message may be asynchronous or synchronous. On an asynchronous call, the execution of the sender continues immediately after the call is issued, and the processing of the message is made by the receiver concurrently with the execution of the sender. On a synchronous call, the execution of the sender is blocked during the execution of the invoked procedure. When the receiver has carried out the procedure, it returns the generated values to the sender, which is awakened and allowed to continue execution. In a communication diagram, synchronous messages are shown with filled arrow head, whereas asynchronous messages have an open arrow head. A return message is denoted by a dashed open arrow head.
Communication diagram of the offline process in a decision support system for emergency management based on social sensing
Let us suppose that the offline process (as described later in Fig. 5) was previously performed so that the system is ready to use for a given type of emergency. The online process evolves as follows: (1) the Decision System makes the getAlerts call to the Analysis System component, providing the emergencyType as a parameter (e.g., "earthquake", "flooding"); (2–4) the Analysis System makes the beginDetection, beginSelection and beginCapturing calls to the Emergency Event Detection, Ongoing Emergency Messages Selection, and Emergency Messages Capturing components, respectively, providing the emergencyType as a parameter; (5) the Emergency Messages Capturing component makes the beginStreaming call to the Online Social Network component, providing the emergencyTerms as a parameter. The latter call is synchronous, so as to avoid losing data from the Online Social Network's stream. The sixth step is made of a number of substeps iteratively carried out for each message delivered by the Online Social Network; for this purpose, the whole step for a given message is referred to as 6.*, whereas the single substeps are referred to as 6.*.1, 6.*.2, and so on.
Each emergency message delivered by the Online Social Network to the Emergency Messages Capturing component (6.*.1), is then delivered to the Primary Messages Selection component (6.*.2), which checks whether the message is primary or not (6.*.3). If the message is primary, it is delivered to the Trusted Messages Selection component (6.*.4), which checks whether the message is trusted or not (6.*.5). If the message is trusted, it is delivered to the Ongoing Emergency Messages Selection component (6.*.6), which, in turn, checks whether the message refers to an ongoing emergency or not (6.*.7). If the message refers to an ongoing emergency, it is delivered to the Emergency Event Detection component (6.*.8), which according to an arbitrary detection algorithm (i.e., a message-burst detection algorithm), checks whether to trigger the detection of an event or not (6.*.9). When an event occurs, it is received (7) and geolocated (8) by the Emergency Geolocation component, and the Analysis System is finally notified with an alert (9) by the Emergency Geolocation component itself.
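The per-message cascade of substeps 6.*.2–6.*.9 can be summarized by a single dispatch function. The sketch below is a hypothetical Python rendering; the `components` bundle and its method names (`is_primary`, `is_trusted`, `refers_to_ongoing_emergency`, `update`, `geolocate`, `alert`) are illustrative assumptions, not interfaces defined by the architecture.

```python
def process_message(message, components):
    """Minimal sketch of the online filtering cascade applied to each delivered message."""
    if not components.primary_selection.is_primary(message):        # step 6.*.3
        return
    if not components.trusted_selection.is_trusted(message):        # step 6.*.5
        return
    if not components.ongoing_selection.refers_to_ongoing_emergency(message):  # step 6.*.7
        return
    event = components.event_detection.update(message)               # step 6.*.9 (e.g., burst detector)
    if event is not None:
        located = components.geolocation.geolocate(event)            # step 8
        components.analysis_system.alert(located)                    # step 9
```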
The offline process, described in Fig. 5, is aimed at providing the Emergency Messages Capturing component with Emergency-specific Terms, as well as training the Machine Learning Classifier component for a new type of emergency. At the beginning, the Social Network Analyst is provided with some occurrences of the new type of emergency via historical archives. The analyst then needs to build collections of messages related to such occurrences.
In the first step the Social Network Analyst configures the Corpus Building component (1) with some parameters derived from the archives and purposely targeted on each specific occurrence (e.g., date and location of the emergency). Then, the Social Network Analyst asks the Corpus Building component to build the corpus (2). This is made through two substeps: the Corpus Building component asks the Historical Messages Capturing component to capture messages with the above parameters (2.1), and the Historical Messages Capturing component gets message blocks from the Online Social Network component (2.2), by using a historical search API. Message blocks are then returned and collected to build the corpus (2.3–2.4).
The Social Network Analyst, by using the returned corpus and a baseline of messages from the OSN, asks the Offline Terms Building component to extract Emergency-specific Terms (3), which are then deployed to a knowledge base (3.1). The analyst also enables the annotation campaign of the corpus (4) by enrolling a number of annotators (4.*). At the end of the annotation campaign (4.2), the Social Network Analyst creates the training set of messages (4.3). The training set is then used by the Social Network Analyst to train the Machine Learning Classifier component (5) by exploiting the annotated corpus and a set of features defined on the corpus itself. At the end of the training, an Ongoing Emergency Message Model is created (5.1).
The model created so far will be used by the Ongoing Emergency Messages Selection component during the online process. The Trusted Messages Selection and the Primary Messages Selection components are ready to use for any type of emergency, and thus require neither training nor configuration procedures.
Finally, the Emergency Messages Capturing component will employ the Emergency-specific Terms created at the third step of the offline process to extract emergency messages from the Online Social Network during the online process.
This section describes an implementation of the logical architecture proposed in the previous section, by means of a prototypical application in the domain of Seismic emergencies. Such application implements the components involved in the online process (i.e., with reference to Fig. 3, those arranged in a stack on top of Online Social Network and enclosed in a dotted, light grey box) to act as a Twitter-based earthquake detector.
Emergency Messages Capturing
The Emergency Messages Capturing component is in charge of gathering messages potentially related to an emergency. As the overall online process relies on data collected at this stage, this component plays a crucial role within the framework. As shown in Fig. 3, Emergency Messages Capturing interfaces directly to the Online Social Networking platform, provided by Twitter, and exploits the Emergency-specific Terms knowledge base, which is generated and updated by the offline process. This knowledge base contains the keywords used by the Emergency Messages Capturing component to query the Twitter platform in order to capture earthquake-related messages (e.g., for Seismic emergencies in Italy, it contains the two Italian terms "terremoto" (earthquake) and "scossa" (tremor)).
Among the methods provided by Twitter for data capturing, the implemented system exploits the Streaming APIFootnote 5 to open a persistent connection with a stream of tweets. The Streaming API gives access to a global stream of messages, optionally filtered by search keywords. In contrast with the Search API used in the systems described in Sakaki et al. (2010, 2013), Earle et al. (2012), Yin et al. (2012), Robinson et al. (2013), which gives access only to a subset of all the tweets produced, the Streaming API potentially makes it possible to capture all the tweets matching the search criteria. To guarantee the robustness and reliability of the system we also implemented additional mechanisms that manage rate limits and generic connection problems in the use of the Streaming API. Such mechanisms include the adoption of a backup streaming connection to avoid loss of data in case of a sudden disconnection from the primary stream, and mechanisms to perform automatic reconnection upon disconnecting from a stream. Twitter rate limits for the Streaming APIFootnote 6 are set so as to deliver, at any given time, at most 1 % of the total worldwide Twitter traffic per streaming connection. However, our system never suffered from such a limitation over a 2-month-long experiment, during which the collected tweets never generated traffic exceeding the 1 % threshold. Applications exploiting Twitter's Streaming API should also guarantee rapid processing of delivered messages. Clients which are unable to process messages fast enough will be automatically disconnected by Twitter. This situation is commonly referred to as Falling Behind. Following Twitter's guidelines, in our implementation we decoupled the data capturing and analysis phases by rapidly storing messages in a NoSQL MongoDBFootnote 7 database. Such messages are later properly formatted and copied into a relational MySQL database for further processing.
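A minimal sketch of this capture-and-buffer pattern is shown below, assuming a hypothetical `stream` client that yields tweet dictionaries already filtered by the emergency-specific terms; the backup-stream and automatic-reconnection mechanisms described above are omitted. Only the MongoDB buffering step uses a real library call (`pymongo`).

```python
from pymongo import MongoClient

def capture_stream(stream, emergency_terms, db_uri="mongodb://localhost:27017"):
    """Decouple capture from analysis by writing raw tweets to a MongoDB buffer.

    `stream` is a hypothetical wrapper around Twitter's Streaming API exposing a
    `filter(track=...)` generator; database and collection names are placeholders.
    """
    buffer = MongoClient(db_uri)["social_sensing"]["emergency_messages"]
    for tweet in stream.filter(track=emergency_terms):
        # Store immediately and return to the stream, so the client never "falls behind".
        buffer.insert_one(tweet)
```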
It should be noted that not all the messages gathered in this first step are actually related to an unfolding seismic event. In fact, some messages can be misleading for the event detection task and must be filtered out as noise (Earle et al. 2012). For example, their contents could be maliciously fictitious, convey reported news, or refer to past or future events. This motivates the filtering components required by the architecture and described in the following.
Primary Messages Selection
The Primary Messages Selection component is the first filtering module in the proposed architecture and is therefore fed with the whole stream of messages gathered by the Emergency Messages Capturing component. Due to the potentially large volume of messages to be processed at this stage, this component performs a fast, coarse-grained filtering of incoming messages by applying heuristic rules to select firsthand tweets sent by eyewitness users who are actually present at the referred event and can directly describe it (Kumar et al. 2013; Morstatter et al. 2014).
Studying the characteristics of the messages shared on Twitter in the aftermath of seismic events led us to the observation that genuine reports of earthquakes do not follow any information diffusion model and are not influenced by other reports. However, this scenario rapidly evolves over time as the news of the earthquake spreads over the different media, so that subsequent reports are increasingly influenced by other news. Thus, we concluded that the best results for the event detection task could be achieved by considering only spontaneous and independent messages. The Primary Messages Selection component therefore discards retweet messages, reply messages and messages shared by accounts belonging to a blacklist of 345 Twitter profiles that publish official information about recent emergencies. We are aware that the heuristics exploited by the Primary Messages Selection component might not be enough to discard all derivative messages. Nonetheless, they represent a computationally efficient way of filtering out the vast majority of useless messages. Furthermore, the modular architectural solution we propose is particularly suitable for being extended with alternative approaches and algorithmic solutions to this task.
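The heuristics of the Primary Messages Selection component can be expressed as a short predicate over the tweet payload. Field names in the sketch below follow the Twitter v1.1 JSON format ("retweeted_status", "in_reply_to_status_id", "user"); the blacklist is assumed to be a set of lowercase screen names.

```python
def is_primary(tweet, blacklist):
    """Coarse-grained heuristics selecting firsthand (primary) messages."""
    if "retweeted_status" in tweet:                           # discard retweets
        return False
    if tweet.get("in_reply_to_status_id") is not None:        # discard replies
        return False
    if tweet["user"]["screen_name"].lower() in blacklist:     # discard blacklisted official accounts
        return False
    return True
```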
A burst of messages registered after a moderate earthquake
Trusted Messages Selection
Another possible weakness of all social mining systems lies in their vulnerability to intentional attacks performed by malicious users (Mendoza et al. 2010; Castillo et al. 2011). In our application, security concerns can arise if groups of people collude to generate fictitious tweets referring to an earthquake. The online Trusted Messages Selection component exploits the Trusted Message Model to select trusted, reliable messages. Several existing classifiers can be exploited for this task, such as the ones proposed in Chu et al. (2012) and Amleshwaram et al. (2013). In our implementation we employ a domain-independent machine learning classifier trained to distinguish between "fake" and "real" accounts (Cresci et al. 2014, 2015a). The classifier has been trained on a set of 3900 equally distributed fake and real accounts and was able to correctly classify more than 95 % of the accounts of the training set. In the online mode of operation, the Trusted Messages Selection component exploits the trained model and the Weka tool (Hall et al. 2009) to infer the class (fake, real) a user who posted a message belongs to. The Trusted Messages Selection component performs this operation for every message it receives from the Primary Messages Selection component. Messages posted by fake users are automatically discarded by the system. In addition, users repeatedly triggering false detections are added to the same account blacklist exploited by the Primary Messages Selection component. To further protect the system from harmful attacks, we consider only a single message per user, and messages from different users but with the same contents are considered only once. While we understand that these solutions do not fully address the problem of malicious attacks, we are confident that our efforts represent a first response to security concerns in social mining systems. In fact, the adopted solutions require potential attackers to put considerable effort into the creation of plausible accounts. The employment of the solutions proposed in Chu et al. (2012) and Amleshwaram et al. (2013) for the classification of "automated" versus "non-automated" accounts might represent another way of tackling this problem and stands as promising ground for future work.
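Beyond the fake/real account classifier, the anti-collusion safeguards (one message per user, duplicate contents counted once) can be sketched as a small stateful guard; the class and method names below are illustrative assumptions, not the authors' code.

```python
class DuplicateGuard:
    """Keep at most one message per user and per distinct text (anti-collusion step)."""
    def __init__(self):
        self.seen_users = set()
        self.seen_texts = set()

    def accept(self, tweet):
        user = tweet["user"]["id"]
        text = tweet["text"].strip().lower()
        if user in self.seen_users or text in self.seen_texts:
            return False
        self.seen_users.add(user)
        self.seen_texts.add(text)
        return True
```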
Ongoing Emergency Messages Selection
To further enforce the Primary, Trusted and Emergency message properties, the Ongoing Emergency Messages Selection component performs a fine-grained filtering by means of the Ongoing Emergency Message Model, a machine learning classifier which has been trained in the offline process. Again, we exploited Weka to train and generate the classifier. The Emergency-specific Training Set for earthquakes is composed of more than 1400 tweets divided into two balanced sets of messages: tweets related and tweets not related to a seismic event in progress. During the offline phase, tweets of the training set were manually classified by the Annotators using the ad-hoc Annotation Tool web interfaceFootnote 8. Our analysis of the messages reporting earthquakes has highlighted a few interesting characteristics that help distinguish between tweets related and tweets not related to an unfolding seismic event. Tweets referring to an earthquake are generally very short, they contain less punctuation than normal tweets and often contain slang or offensive words. This is because people reporting an earthquake are usually scared by the event, and the content of the messages they write tends to reflect this emotion. Instead, tweets referring to official news of an earthquake or talking about a past earthquake present a longer, more structured message. Tweets not related to a recent earthquake also include a higher number of mentions and URLs than spontaneous earthquake reports. Thus, we defined the following set of features that takes into account the results of the previous analysis: (1) character count; (2) word count; (3) punctuation count; (4) URL count; (5) mention count; (6) slang/offensive word count. Notably, some of the features that we defined for this task are also supported by the findings of recent related works (Imran et al. 2013; Gupta et al. 2013).
Training the classifier with this set of features produced correct classifications in more than 90 % of the tweets of the Emergency-specific Training Set. The classifier was obtained using the decision tree J48, corresponding to the Java implementation of the C4.5 algorithm (Quinlan 1993) with a tenfold cross validation. In the online mode of operation, the prediction is performed by invoking the classifier every time a message is delivered to the Ongoing Emergency Messages Selection component. As Weka generally needs less than a second to predict the class of a new tweet by means of our decision tree model, it is feasible to use the fine-grained classifier filter at this stage of the system since most of the noisy messages have already been filtered out by previous components.
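For illustration, the six features can be computed directly from the tweet text as in the sketch below; the slang/offensive word list is a placeholder, since the lexicon actually used is not reported here, and the feature extraction in the deployed system is performed through Weka rather than Python.

```python
import re

# Hypothetical placeholder lexicon; the authors' actual list is not published here.
SLANG_WORDS = {"omg", "wtf", "cavolo", "madonna"}

def tweet_features(text):
    """Compute the six surface features used by the Ongoing Emergency Message classifier."""
    words = text.split()
    return {
        "char_count": len(text),
        "word_count": len(words),
        "punctuation_count": len(re.findall(r"[.,;:!?]", text)),
        "url_count": len(re.findall(r"https?://\S+", text)),
        "mention_count": len(re.findall(r"@\w+", text)),
        "slang_count": sum(1 for w in words if w.lower().strip(".,!?") in SLANG_WORDS),
    }
```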
Emergency event detection
The detection of a seismic event is triggered by an exceptional growth in the frequency of messages that have passed the filtering phases. In our system, we adopt a novel event detection approach which is based on a burst detection algorithm. A burst is defined as a large number of occurrences of a phenomenon within a short time window (Zhang and Shasha 2006). Burst detection techniques are commonly applied to various fields such as the detection of topics in data streams. Our system triggers the detection of a seismic event when it identifies a burst of Ongoing Emergency Messages. Figure 6 displays a rug plot of the arrival times of Ongoing Emergency Messages, as well as a histogram showing their frequency per minute, during a magnitude 3.4 earthquake that occurred at 15:47:49 on August 9, 2014, in the Tuscany regional district. After the occurrence time of the earthquake, denoted by the red vertical dashed line, a large burst of tweets was recorded by our system.
Table 1 Earthquake detection validation
The works of Kleinberg (2003) and Ebina et al. (2011) discuss various burst detection algorithms. Our Emergency Event Detection component implements the hierarchical algorithm proposed in Ebina et al. (2011), since it is computationally light and can adapt well to both big and small bursts. An efficient algorithm is necessary because of the real-time nature of our system, and the ability to detect both big and small bursts fits well with the need for a flexible, scalable and reusable system.
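As a simplified stand-in for the hierarchical burst detector of Ebina et al. (2011), the sketch below triggers a detection when the number of ongoing-emergency messages within a sliding time window exceeds a fixed threshold. Unlike the real algorithm, it does not adapt to bursts of different sizes, and the window and threshold values are arbitrary assumptions.

```python
from collections import deque

class SlidingWindowBurstDetector:
    """Fixed-threshold burst detector over message arrival times (Unix seconds)."""
    def __init__(self, window_seconds=120, threshold=10):
        self.window = window_seconds
        self.threshold = threshold
        self.timestamps = deque()

    def update(self, timestamp):
        """Add one message arrival; return True when the current window constitutes a burst."""
        self.timestamps.append(timestamp)
        # Drop arrivals that fall outside the sliding window.
        while self.timestamps and timestamp - self.timestamps[0] > self.window:
            self.timestamps.popleft()
        return len(self.timestamps) >= self.threshold
```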
The validation of the proposed Social Detection System has been carried out exploiting official data released by the National Institute of Geophysics and VolcanologyFootnote 9 (INGV), the authority responsible for monitoring seismic events in Italy. INGV uses different channels, including a dedicated Twitter accountFootnote 10, to distribute detailed information about seismic events having magnitude 2 or more, which have been detected by their seismographic network. To validate the proposed architecture, we cross-checked all the events detected by the prototypical application described in the previous section against the official reports released by INGV. This approach allowed us to validate our system with stronger metrics than the ones used in similar works, such as Sakaki et al. (2010, 2013), Earle et al. (2012) and Yin et al. (2012), Robinson et al. (2013). Specifically, the majority of social media emergency management systems have been validated with a focus on correct detections. However, the problem of false detections is often understated, despite being a critical factor in emergency management (Middleton et al. 2014). Therefore, we classified earthquake detection results as follows:
True Positives (TP) events detected by our system and confirmed by INGV;
False Positives (FP) events detected by our system, but not confirmed by INGV;
False Negatives (FN) events reported by INGV but not detected by our system.
True Negatives (TN) are widely used in information retrieval and classification tasks together with TP, FP and FN. However, in our scenario TN are not applicable, as it would mean counting the number of earthquakes that did not happen and that our system did not detect. In addition, we also computed the following standard metrics:
Precision, ratio of correctly detected events among the total number of detected events:
$$\textit{Precision}=\frac{TP}{TP+FP}$$
Recall (a.k.a. Sensitivity), ratio of correctly detected events among the total number of occurred events:
$$\textit{Recall}=\frac{TP}{TP+FN}$$
F-Measure, harmonic mean of Precision and Recall:
$$\textit{F-Measure}=2*\frac{Precision*Recall}{Precision+Recall}$$
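Given the TP, FP and FN counts of Table 1, the three metrics above can be computed directly, for example:

```python
def detection_metrics(tp, fp, fn):
    """Compute Precision, Recall and F-Measure from detection counts (TN is undefined here)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f_measure = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f_measure
```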
We were not able to compute other well-known metrics such as Specificity, Accuracy and Matthews Correlation Coefficient since they rely on the True Negatives (TN) count. The employed metrics are nonetheless exhaustive and allow a thorough validation of detection results. Table 1 summarizes event detection validation against earthquakes registered by INGV over a 66-day time window from 2013-07-19 to 2013-09-23. The number of earthquakes reported in Table 1 refers only to real earthquakes detected by INGV and therefore corresponds to the sum of TP and FN. FP instead represents false detections by our system.
We first evaluated the Social Detection System against all the earthquakes having a magnitude greater than 2.0, registered by INGV within the given time window. Results show that the detection of earthquakes with magnitude lower than 3 is a very challenging task. This is because the majority of these earthquakes are only detected by seismographic stations and not by people. For events with a magnitude equal to or greater than 3.5, results show a good performance of the system, as demonstrated by the encouraging values of F-Measure: 78.26 % for magnitude >3.5, 83.33 % for magnitude >4 and 100 % for magnitude >4.5. This is especially significant given that seismic events of a magnitude around 3 are considered "light" earthquakes and are generally perceived only by a very small number of social sensors.
The majority (68 %) of the earthquakes that occurred during the 66-day validation time window were extremely light and did not generate any report on Twitter. A detection system based solely on tweets is obviously incapable of detecting such events and this is reflected by the high number of False Negatives (FN) and by the low Recall for earthquakes with magnitude lower than 3.
In the emergency management scenario, light seismic events only detected by seismographic stations clearly do not pose any threat to communities and infrastructures, and the earthquakes of interest are those actually felt by the population at large. Therefore we re-validated the system against those earthquakes that generated at least one report on Twitter. Results for this experiment are displayed in the bottom half of Table 1 and show an overall improvement in the system's performance. It is worth noting that the proposed Social Detection System achieves flawless results (Precision, Recall and F-Measure = 100 %) for earthquakes of magnitude 4.0 or more and still performs very well on earthquakes with a magnitude in the region of 3.5 (Precision = 75 %, Recall = 100 % and F-Measure = 85.71 %).
System responsiveness validation. Distribution of detection delays versus INGV notification delays
Figure 7 characterizes the system's responsiveness by means of boxplot and scatterplot distributions of the detection delays of our system compared to the notification delays of INGV official reports. The detection delays of our Social Detection System are computed as the difference between the occurrence timestamp of an earthquake and the timestamp of the corresponding detection triggered by the Emergency Event Detection component. INGV notification delays are computed as the difference between the occurrence timestamp of an earthquake and the timestamp of the corresponding official report released by INGV. The detection delays reported in Fig. 7 have been computed considering only True Positive detections.
INGV official reports are the timeliest publicly available source of information about earthquakes in Italy. Nevertheless, INGV notification delays are considerably higher than the detection delays of our system. In Fig. 7 this is evident from the large gap between the spreads (boxes) of the two distributions. The earthquake detection responsiveness of our system is even more valuable since early reports of severe earthquakes might be of interest not only to emergency responders, but also to breaking news agencies looking for fresh information to publish, as well as to insurance companies and financial advisors.
Among all the detections performed by our system, 87 % occurred within 5 minutes of the earthquake and 43 % occurred within 2 minutes. These results are promising, especially considering that the proposed framework is adaptable to other emergency scenarios where automatic detection equipment, playing the role of seismographs for seismic events, might not be available. Being able to automatically detect a considerable percentage of emergency situations within minutes of the event would surely benefit emergency responders.
Conclusions and future work
In this paper we have discussed how the HaaS paradigm can be exploited for emergency detection. Core concepts, major roles and functionalities have been specified to operate in a broad class of emergencies. The design of architectural components reusable for many types of events, and possibly adaptive with respect to the different characteristics of each type, has been detailed. Related works have been discussed via the proposed architectural model, to systematize the available solutions under our modular and platform-independent conceptual framework. The implementation of an actual Twitter-based earthquake detector has then been presented, to show the effectiveness of our approach. Furthermore, a real-world case of application has been discussed and analyzed, revealing the most interesting properties of our approach. In addition, the architecture has been validated with more comprehensive metrics than those used in the existing literature.
As future work, to better assess the system over its whole life cycle, it should be cross-validated on other real-world scenarios, involving emergencies of different types and sizes. Afterwards, the next key investigation activities along this line of research should be to employ real-time data provided by bursts of messages as a mine of information for situational awareness and damage assessment. Specifically, qualitative analyses of relevant messages can be performed to increase the overall situational awareness in the aftermath of an emergency. Qualitative analyses of the textual content of messages can be performed via natural language processing techniques and might lead to time-evolving term-clouds, highlighting those textual bits which convey critical and actionable information. In parallel, analyses of the multimedia content of messages can be carried out by means of image filtering and image clustering techniques. However, despite providing valuable insights into the unfolding scenario, the output of qualitative analyses still requires interpretation by domain experts. In contrast, quantitative analyses could provide unambiguous outputs which might prove even more valuable to decision-makers and emergency responders. Specifically, for seismic events, a quantitative approach to the estimation of the impact of an earthquake can be performed by training statistical regression models to estimate earthquake intensity from the characteristics of social media reports.
In the future we look forward to addressing these issues by extending our modular framework to include components performing analyses aimed at increasing situational awareness and capable of providing early damage assessments.
http://earthquake.usgs.gov/learn/topics/mag_vs_int.php.
https://dev.twitter.com/rest/reference/get/search/tweets.
http://www.livescience.com/45385-earthquake-alerts-from-twitter.html.
https://blog.twitter.com/2015/usgs-twitter-data-earthquake-detection.
https://dev.twitter.com/streaming/overview.
https://dev.twitter.com/streaming/overview/messages-types#limit_notices.
http://www.mongodb.org/.
http://wafi.iit.cnr.it/sosnlp/sosnlp/annotation_tool.
http://www.ingv.it/en/.
https://twitter.com/ingvterremoti.
Adam NR, Shafiq B, Staffin R (2012) Spatial computing and social media in the context of disaster management. IEEE Intell Syst 27(6):90–96
Aggarwal CC, Abdelzaher T (2013) Social sensing. In: Aggarwal CC (ed) Managing and mining sensor data, 1st edn. Springer, New York, pp 237–297
Allen RM (2012) Transforming earthquake detection? Science 335(6066):297–298
Amleshwaram AA, Reddy N, Yadav S, Gu G, Yang C (2013) Cats: characterizing automation of twitter spammers. In: Fifth international conference on communication systems and networks (COMSNETS), 2013, pp 1–10. IEEE
Avvenuti M, Cresci S, La Polla MN, Marchetti A, Tesconi M (2014a) Earthquake emergency management by social sensing. In: IEEE international conference on pervasive computing and communications workshops (PERCOM Workshops), 2014, pp 587–592. IEEE
Avvenuti M, Cresci S, Marchetti A, Meletti C, Tesconi M (2014b) EARS (Earthquake Alert and Report System): a real time decision support system for earthquake crisis management. In: Proceedings of the 20th ACM SIGKDD international conference on knowledge discovery and data mining, pp 1749–1758. ACM
Avvenuti M, Del Vigna F, Cresci S, Marchetti A, Tesconi M (2015) Pulling information from social media in the aftermath of unpredictable disasters. In: 2nd international conference on information and communication technologies for disaster management (ICT-DM), 2015. IEEE
Bagrow JP, Wang D, Barabasi A-L (2011) Collective response of human populations to large-scale emergencies. PloS one 6(3):17680
Bartoli G, Fantacci R, Gei F, Marabissi D, Micciullo L (2015) A novel emergency management platform for smart public safety. Int J Commun Syst 28(5):928–943
Castillo C, Mendoza M, Poblete B (2011) Information credibility on twitter. In: Proceedings of the 20th international conference on world wide web, pp 675–684. ACM
Chu Z, Gianvecchio S, Wang H, Jajodia S (2012) Detecting automation of twitter accounts: are you a human, bot, or cyborg? IEEE Trans Dependable Secure Comput 9(6):811–824
Cimino MG, Lazzerini B, Marcelloni F, Ciaramella A (2012) An adaptive rule-based approach for managing situation-awareness. Exp Syst Appl 39(12):10796–10811
Cresci S, Di Pietro R, Petrocchi M, Spognardi A, Tesconi M (2015a) Fame for sale: efficient detection of fake Twitter followers. Decis Support Syst 80:56–71
Cresci S, Tesconi M, Cimino A, Dell'Orletta F (2015b) A linguistically-driven approach to cross-event damage assessment of natural disasters from social media messages. In: Proceedings of the 24th international conference on world wide web companion, pp 1195–1200. International World Wide Web Conferences Steering Committee
Cresci S, Cimino A, Dell'Orletta F, Tesconi M (2015c) Crisis mapping during natural disasters via text analysis of social media messages. In: Web Information Systems Engineering-WISE 2015, pp 250–258. Springer
Cresci S, Petrocchi M, Spognardi A, Tesconi M, Di Pietro R (2014) A criticism to society (as seen by twitter analytics). In: IEEE 34th international conference on distributed computing systems workshops (ICDCSW), 2014, pp 194–200. IEEE
Crooks A, Croitoru A, Stefanidis A, Radzikowski J (2013) # Earthquake: Twitter as a distributed sensor system. Trans GIS 17(1):124–147
Demirbas M, Bayir MA, Akcora CG, Yilmaz YS, Ferhatosmanoglu H (2010) Crowd-sourced sensing and collaboration using twitter. In: IEEE international symposium on a world of wireless mobile and multimedia networks (WoWMoM), 2010, pp 1–9. IEEE
D'Andrea E, Ducange P, Lazzerini B, Marcelloni F (2015) Real-time detection of traffic from twitter stream analysis. IEEE Trans Intell Transp Syst 16(4):2269–2283
Earle P (2010) Earthquake twitter. Nat Geosci 3(4):221–222
Earle PS, Bowden DC, Guy M (2012) Twitter earthquake detection: earthquake monitoring in a social world. Ann Geophys 54(6):708–715
Ebina R, Nakamura K, Oyanagi S (2011) A real-time burst detection method. In: 23rd IEEE international conference on tools with artificial intelligence (ICTAI), 2011, pp 1040–1046. IEEE
Foresti GL, Farinosi M, Vernier M (2015) Situational awareness in smart environments: socio-mobile and sensor data fusion for emergency response to disasters. J Ambient Intell Humaniz Comput 6(2):239–257
Gao L, Song C, Gao Z, Barabási A-L, Bagrow JP, Wang D (2014) Quantifying information flow during emergencies. Sci Rep 4:3997. doi:10.1038/srep03997
Goolsby R (2010) Social media as crisis platform: the future of community maps/crisis maps. ACM Trans Intell Syst Technol (TIST) 1(1):7
Gupta A, Lamba H, Kumaraguru P, Joshi A (2013) Faking sandy: characterizing and identifying fake images on twitter during hurricane sandy. In: Proceedings of the 22nd international conference on world wide web companion, pp 729–736. International World Wide Web Conferences Steering Committee
Hall M, Frank E, Holmes G, Pfahringer B, Reutemann P, Witten IH (2009) The weka data mining software: an update. ACM SIGKDD Explor Newsl 11(1):10–18
Hughes AL, Palen L (2009) Twitter adoption and use in mass convergence and emergency events. Int J Emerg Manag 6(3):248–260
Imran M, Castillo C, Diaz F, Vieweg S (2015) Processing social media messages in mass emergency: a survey. ACM Comput Surv (CSUR) 47(4):67
Imran M, Elbassuoni SM, Castillo C, Diaz F, Meier P (2013) Extracting information nuggets from disaster-related messages in social media. In: Proceedings of ISCRAM, Baden-Baden, Germany
Kapadia A, Kotz D, Triandopoulos N (2009) Opportunistic sensing: security challenges for the new paradigm. In: Communication systems and networks and workshops, 2009. COMSNETS 2009. First International, pp 1–10. IEEE
Kleinberg J (2003) Bursty and hierarchical structure in streams. Data Min Knowl Discov 7(4):373–397
Kumar S, Morstatter F, Zafarani R, Liu H (2013) Whom should i follow?: identifying relevant users during crises. In: Proceedings of the 24th ACM conference on hypertext and social media, pp 139–147. ACM
Lampos V, Cristianini N (2012) Nowcasting events from the social web with statistical learning. ACM Trans Intell Syst Technol (TIST) 3(4):72
Liang Y, Caverlee J, Mander J (2013) Text vs. images: on the viability of social media to assess earthquake damage. In: Proceedings of the 22nd international conference on world wide web companion, pp 1003–1006. International World Wide Web Conferences Steering Committee
Mendoza M, Poblete B, Castillo C (2010) Twitter under crisis: can we trust what we rt? In: Proceedings of the first workshop on social media analytics, pp 71–79. ACM
Middleton SE, Middleton L, Modafferi S (2014) Real-time crisis mapping of natural disasters using social media. IEEE Intell Syst 29(2):9–17
Morstatter F, Lubold N, Pon-Barry H, Pfeffer J, Liu H (2014) Finding eyewitness tweets during crises. In: Proceedings of the ACL 2014 workshop on language technologies and computational social science, p 23. ACL
Power R, Robinson B, Ratcliffe D (2013) Finding fires with twitter. In: Australasian language technology association workshop, p 80
Quinlan JR (1993) C4.5: Programs for machine learning, vol 1. Morgan kaufmann, San Francisco
Robinson B, Power R, Cameron M (2013) A sensitive twitter earthquake detector. In: Proceedings of the 22nd international conference on world wide web companion, pp. 999–1002. International World Wide Web Conferences Steering Committee
Rosi A, Mamei M, Zambonelli F, Dobson S, Stevenson G, Ye J (2011) Social sensors and pervasive services: approaches and perspectives. In: 2011 IEEE international conference on pervasive computing and communications workshops (PERCOM Workshops), pp 525–530. IEEE
Sakai T, Tamura K (2015) Real-time analysis application for identifying bursty local areas related to emergency topics. SpringerPlus 4(1):1–17
Sakaki T, Okazaki M, Matsuo Y (2013) Tweet analysis for real-time event detection and earthquake reporting system development. IEEE Trans Knowl Data Eng 25(4):919–931
Sakaki T, Okazaki M, Matsuo Y (2010) Earthquake shakes twitter users: real-time event detection by social sensors. In: Proceedings of the 19th international conference on world wide web, pp. 851–860. ACM
Salfinger A, Retschitzegger W, Schwinger W, et al (2015) crowdSA–towards adaptive and situation-driven crowd-sensing for disaster situation awareness. In: IEEE international inter-disciplinary conference on cognitive methods in situation awareness and decision support (CogSIMA), 2015, pp 14–20. IEEE
Sheth A (2009) Citizen sensing, social signals, and enriching human experience. IEEE Internet Comput 13(4):87–92
Siegemund K, Thomas EJ, Zhao Y, Pan J, Assmann U (2011) Towards ontology-driven requirements engineering. In: Workshop on semantic web enabled software engineering at 10th international semantic web conference (ISWC)
Srivastava M, Abdelzaher T, Szymanski B (2012) Human-centric sensing. Philos Trans R Soc A Math Phys Eng Sci 370(1958):176–197
Wang D, Amin MT, Li S, Abdelzaher T, Kaplan L, Gu S, Pan C, Liu H, Aggarwal CC, Ganti R et al (2014) Using humans as sensors: an estimation-theoretic perspective. In: Proceedings of the 13th international symposium on information processing in sensor networks, pp 35–46. IEEE Press
Yin J, Lampert A, Cameron M, Robinson B, Power R (2012) Using social media to enhance emergency situation awareness. IEEE Intell Syst 27(6):52–59
Zhang X, Shasha D (2006) Better burst detection. In: Proceedings of the 22nd international conference on data engineering, 2006. ICDE'06, pp 146–146. IEEE
Zhou A, Qian W, Ma H (2012) Social media data analysis for revealing collective behaviors. In: Proceedings of the 18th ACM SIGKDD international conference on knowledge discovery and data mining, pp 1402–1402. ACM
MA, MGCAC, SC, AM, and MT are all responsible for the concept of the paper, the results presented and the writing. All authors have read and approved the manuscript.
This research was partially supported by the .it domain registration authority (Registro .it) funded project SoS - Social Sensing (http://socialsensing.it/en).
Department of Information Engineering, University of Pisa, Largo L. Lazzarino 1, 56122, Pisa, Italy
Marco Avvenuti & Mario G. C. A. Cimino
Bell Labs, Alcatel-Lucent, Route de Villejust, 91620, Nozay, Paris, France
Stefano Cresci
Institute of Informatics and Telematics (IIT), National Research Council (CNR), Via G. Moruzzi 1, 56124, Pisa, Italy
Stefano Cresci, Andrea Marchetti & Maurizio Tesconi
Correspondence to Stefano Cresci.
Avvenuti, M., Cimino, M.G.C.A., Cresci, S. et al. A framework for detecting unfolding emergencies using humans as sensors. SpringerPlus 5, 43 (2016). https://doi.org/10.1186/s40064-016-1674-y
Social sensing
Social media mining
Event detection | CommonCrawl |
SMITE: an R/Bioconductor package that identifies network modules by integrating genomic and epigenomic information
N. Ari Wijetunga1,
Andrew D. Johnston1,
Ryo Maekawa2,
Fabien Delahaye1,3,
Netha Ulahannan1,4,
Kami Kim4,5,6 &
John M. Greally1
BMC Bioinformatics volume 18, Article number: 41 (2017)
The molecular assays that test gene expression, transcriptional, and epigenetic regulation are increasingly diverse and numerous. The information generated by each type of assay individually gives an insight into the state of the cells tested. What should be possible is to add the information derived from separate, complementary assays to gain higher-confidence insights into cellular states. At present, the analysis of multi-dimensional, massive genome-wide data requires an initial pruning step to create manageable subsets of observations that are then used for integration, which decreases the sizes of the intersecting data sets and the potential for biological insights. Our Significance-based Modules Integrating the Transcriptome and Epigenome (SMITE) approach was developed to integrate transcriptional and epigenetic regulatory data without a loss of resolution.
SMITE combines p-values by accounting for the correlation between non-independent values within data sets, allowing genes and gene modules in an interaction network to be assigned significance values. The contribution of each type of genomic data can be weighted, permitting integration of individually under-powered data sets, increasing the overall ability to detect effects within modules of genes. We apply SMITE to a complex genomic data set including the epigenomic and transcriptomic effects of Toxoplasma gondii infection on human host cells and demonstrate that SMITE is able to identify novel subnetworks of dysregulated genes. Additionally, we show that SMITE outperforms Functional Epigenetic Modules (FEM), the current paradigm of using the spin-glass algorithm to integrate gene expression and epigenetic data.
SMITE represents a flexible, scalable tool that allows integration of transcriptional and epigenetic regulatory data from genome-wide assays to boost confidence in finding gene modules reflecting altered cellular states.
In genomics research, the dimensionality of assayed data has increased far beyond the pace of analytical tool development, with data sets likely to continue to increase in size and complexity [1, 2]. We appreciate that gene expression is regulated through a number of interacting mechanisms that include epigenetic processes such as DNA methylation. DNA methylation can also reflect the local binding of transcription factors [3], which are capable of influencing local chromatin structure [4] and post-translational modifications of histones [5]. Furthermore, transcription can induce DNA methylation [6], and DNA methylation can itself influence transcription factor binding [7–11]. While these observations indicate complex interactions between regulators of genomic organization, they also suggest that multiple types of events observed at the same locus increase confidence that regulatory activity is genuinely occurring at that locus. Current methods to explore multiple coincident processes using integrated analysis introduce bias by pruning data sets, either by focusing only on a subset of loci with the most significant effects, or by requiring pairwise comparisons of data sets with progressively smaller intersections. Furthermore, integrative methods like Functional Epigenetic Modules (FEM) [12] score genes within a network and identify subnetworks, referred to as modules, but they lack an implemented method to support further functional interpretation, an essential outcome of a genomics experiment [13]. Therefore, there is a need for a flexible method integrating genomic assay data into a single score that can be used to identify functionally important pathways for further study.
Here we describe an intuitive gene scoring system that combines transcriptional and epigenetic regulatory data sets, an approach we call Significance-based Modules Integrating the Transcriptome and Epigenome (SMITE). The novelty of SMITE lies in the use of mathematical principles and sampling techniques to simplify multiple complex genome-level signals into a single set of interpretable results. We use SMITE to identify novel gene modules in a large, high-dimensional epigenetic and transcriptomic data set, and we show that SMITE offers improved detection, characterization, and visualization of functional modules within a gene network compared to existing methods. Overall, SMITE provides a useful and intuitive answer to the most important question in integrative genomics: what can we learn from integrating multiple sources of high-resolution information instead of considering each source separately?
Toxoplasma gondii (T. gondii) human foreskin fibroblast data set
To benchmark SMITE and demonstrate implemented features, we obtained a large multifaceted genomics data set from a controlled experiment studying the transcriptional regulatory effects on human foreskin fibroblasts (HFF) following infection by T. gondii. Further description of the experimental methods used to produce the data set and results are available in Additional file 1, including alignment to a combined human/Toxoplasma genome assembly (Additional file 1: Figure S1).
Required inputs to SMITE
SMITE provides a pipeline that results in annotated functional modules (Fig. 1). It requires the following inputs: 1) a gene annotation bed file, 2) an interaction network, and 3) data sets of effects and statistical test significance from at least one gene expression and/or epigenomic profile(s). In addition, users can include an unlimited number of previously identified genomic intervals of interest (e.g. Chromatin Immunoprecipitation (ChIP)-seq peaks, enhancers, Additional file 1: Table S1). Notably, the software relies on p-values without specifying the source statistical test, so it is necessary for users to ensure appropriate sample sizes, data quality, and statistical testing of the original experiments.
Summary of SMITE. The flowchart details the pipeline through which SMITE takes p-values, associates them with genomic intervals, and scores genes. The steps and input required to discover significant modules are shown as well as the downstream functions that SMITE provides for module interpretation
Motivation for SMITE
In functional genomics experiments, after performing genomic assays on two or more groups, one generally uses a statistical test to estimate an effect for up to millions of genomic loci (e.g. genome-wide DNA methylation analysis). These estimates are then compared to their standard errors to derive test statistics, T, and p-values, p, where p is defined as the probability that T is greater than a threshold from a statistical distribution, t, such that P(T ≥ t) = p. These test statistics and corresponding p-values are used to reject a null hypothesis (i.e. no difference between study groups). While a p-value does not represent the probability that a hypothesis is true, in practice, each p-value does correspond to a researcher's relative prioritization of a gene or genomic region within a ranked list [14]. An observed p, which increases in significance as it approaches zero, is proportional, ∝, to a new heuristic that is maximized as 1-p approaches 1, and this heuristic is the probability, P, that a gene or genomic region is prioritized by a researcher for further analysis:
$$ P(T < t) = 1 - p \propto P(\text{gene or genomic region is prioritized}) $$
Therefore, in application, p-values are generally reinterpreted beyond their intended purpose, and in this capacity they contribute to new heuristics that are used as the primary criteria for prioritization. While the functional interpretation of significant hypothesis tests from gene expression experiments is straightforward (e.g. genes are significantly upregulated or significantly downregulated), to understand specific functional genomic contexts we must interpret multiple p-values as contributing evidence. For example, DNA modifications like DNA methylation and DNA hydroxymethylation are typically measured at the single base pair level, whereas functional genomic contexts are represented by genomic intervals that vary in size, like gene promoters. This necessitates a method of combining multiple p-values overlapping the same genomic interval, while also accounting for their likely interdependence. Therefore, these genomic intervals can contribute to a single heuristic that can be used to score their associated genes. Since the relationships between genomic intervals and their associated genes are complex, a flexible approach is needed to allow user input for optimal weighting of genomic contexts depending on a particular experiment.
There are several p-value combination methods used in meta-analyses. Because these methods assume independence of experiments, SMITE includes a preprocessing step using Monte Carlo methods (MCMs) to account for non-zero correlations when combining dependent p-values [15]. This novel approach implementing MCMs assesses the average strength of the correlations and determines a new distribution of combined p-values. Subsequently, p-values are recursively combined until every node (gene) in a specific interaction network is associated with a single score that in turn reflects a researcher's intuitive belief that the node has sufficient evidence to be prioritized for further analysis.
Combining p-values in SMITE
Given K experiments with K hypotheses, \(H_{1 \dots K}\), test statistics and corresponding p-values, \(p_{1 \dots K}\), are calculated so that each p-value reflects the probability of observing a particular test statistic or more extreme values; the p-value itself is, however, a random variable that follows a uniform distribution, \(U(0,1)\) [16]. P-value combination methods attempt to characterize the joint distributions of two or more of these random variables. If the p-values are not independent from one another, then there is a covariance/correlation matrix that needs to be incorporated into the analysis in order to maintain statistical validity. Rather than focus on the statistical distributions of combined p-values, which can be complex and difficult to calculate, and which risk over-interpreting p-values, SMITE uses MCMs, like bootstrapping, to randomly sample a particular set of values from an unknown distribution and to estimate the characteristics of the new combined distribution. SMITE employs these sampling methods before combining large correlated p-value data sets. SMITE offers several methods for combining p-values including Stouffer's Z-score method [17] (the default procedure), Sidak's adjustment [18], Fisher's method [19], and binomial testing. More detail about the available methods is provided in Additional file 1.
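As an illustration of the combination methods named above, the sketch below implements Stouffer's Z-score method (with optional weights) and Fisher's method for a vector of independent p-values. SMITE itself is an R/Bioconductor package; this Python/SciPy rendering is only a minimal sketch of the same formulas, not the package's code.

```python
import numpy as np
from scipy import stats

def stouffer(pvalues, weights=None):
    """Stouffer's Z-score combination (SMITE's default), assuming independent one-sided p-values."""
    z = stats.norm.isf(np.asarray(pvalues, dtype=float))     # convert p-values to z-scores
    w = np.ones_like(z) if weights is None else np.asarray(weights, dtype=float)
    combined_z = np.sum(w * z) / np.sqrt(np.sum(w ** 2))
    return stats.norm.sf(combined_z)                         # combined p-value

def fisher(pvalues):
    """Fisher's method: -2 * sum(log p) is chi-squared with 2K degrees of freedom under independence."""
    p = np.asarray(pvalues, dtype=float)
    statistic = -2.0 * np.sum(np.log(p))
    return stats.chi2.sf(statistic, df=2 * len(p))
```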
In the idealized scenario, the application of p-value combination methods is trivial because of the independence of each epigenetic signal; however, modifications like DNA methylation are thought to be highly correlated over short distances [20], with methods like BumpHunting exploiting this local correlation to define differentially methylated regions [21]. For this reason, SMITE estimates the average correlation between the dependent p-values as a function of distance. For each gene \(G_i\), for \(i = 1, 2, \dots, I\), we first find the J genomic intervals \(R_{ij}\), for \(j = 1, 2, \dots, J\), related to \(G_i\) (e.g. a specific gene's promoter and body). Then, we determine the \(N_{ij}\) overlapping p-values, \(p_{ijk}\) for \(k = 1, \dots, N_{ij}\), for each genomic interval. Next, we convert the p-values to a standard normal distribution with the transformation \(Z_{ijk} = \Phi^{-1}(1 - p_{ijk}/2)\), where \(\Phi\) is the standard normal cumulative distribution function (CDF). Rather than incorrectly assuming that the p-values are independent, we chose to use a non-parametric MCM approach to estimate correlation coefficients for modifications that overlap the same interval, \(R_{ij}\).
We estimate a correlation matrix using the physical distance between loci associated with p-values, and thus we control for a background level of spatial correlation. To estimate this matrix, we find, for each significant p-value within a type of interval \(R_{\cdot j}\), the distances to the closest upstream and downstream p-values. As HELP-tagging [22] and Illumina HumanMethylation450 BeadChip array [23] data have ~2 million data points and ~450,000 probes, respectively, these distances were binned into 500 bins, resulting in bins as small as a single base pair for the smallest distances, where we expect the largest correlations. We randomly sampled within bins with replacement and found the Pearson correlation between the transformed p-values. This process was repeated 500 times and the average correlation was associated with the bin. The results from a correlation matrix using DNA methylation from the T. gondii HFF data set indicate, as expected, that the estimated correlation is generally higher between p-values close to one another, and that it tends to decrease with distance (Fig. 2). Even when these correlations are small, it is inappropriate to ignore them completely, and this calculation is necessary to account for the background interdependence of effects.
Monte Carlo simulation of correlation matrix for DNA methylation. The average Pearson correlations as a function of distance separating adjacent effects for DNA methylation in the T. gondii HFF data set. As expected, there is general decrease in the correlation of DNA methylation values as the distance between assayed sites increases
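The distance-binned bootstrap just described might be sketched as follows. This is a simplified Python/NumPy stand-in, not SMITE's implementation: adjacent assayed sites are paired, pairs are binned by their separation distance, and the Pearson correlation within each bin is bootstrapped; binning by distance quantiles and the particular bin and replicate counts are illustrative assumptions.

```python
import numpy as np

def correlation_by_distance(positions, z_values, n_bins=500, n_boot=500, rng=None):
    """Estimate the average correlation of transformed p-values as a function of genomic distance."""
    rng = np.random.default_rng() if rng is None else rng
    order = np.argsort(positions)
    pos = np.asarray(positions, dtype=float)[order]
    z = np.asarray(z_values, dtype=float)[order]
    dist = np.diff(pos)                              # distance between adjacent sites
    pairs = np.column_stack([z[:-1], z[1:]])         # z-scores of adjacent sites
    edges = np.quantile(dist, np.linspace(0.0, 1.0, n_bins))
    bins = np.digitize(dist, edges)
    avg_corr = {}
    for b in np.unique(bins):
        members = pairs[bins == b]
        if len(members) < 3:
            continue
        corrs = []
        for _ in range(n_boot):
            # Resample pairs within the bin with replacement (bootstrap).
            sample = members[rng.integers(0, len(members), size=len(members))]
            corrs.append(np.corrcoef(sample[:, 0], sample[:, 1])[0, 1])
        avg_corr[int(b)] = float(np.nanmean(corrs))
    return avg_corr
```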
Having determined a correlation matrix, \( {\varSigma}_{ij} \), that is symmetric positive definite, we can determine an upper triangular matrix with positive diagonal entries, C ij , using the Cholesky decomposition, so that \( {\varSigma}_{ij} \) = C ij T C ij , and this decomposition can be used to adjust the previously transformed p-values Z ijk = Φ −1 (1 − p ijk /2), such that [16]:
$$ {C_{ij}}^{-1}{\varPhi}^{-1}\left(1-\frac{p_{ijk}}{2}\right)={Z}_{ijk}^{*} $$
Through this method the correlated Z m and Z n for m ≠ n are now approximately independent and can be combined as independent experiments. The Cholesky decomposition is discussed in greater detail in Additional file 1. Additionally, SMITE employs MCMs to estimate the distribution of the combined statistics so that the new p-values can be thought of as completely new heuristics indicating confidence in a particular p-value, \( {Z}_{ijk}^{*} \).
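A sketch of this whitening step in base R, assuming 'Sigma' is the estimated correlation matrix for one interval and 'p' its vector of p-values; chol() returns the upper-triangular factor, and the inverse of its transpose is applied so that the resulting scores have approximately identity covariance.

decorrelate <- function(p, Sigma) {
  Z <- qnorm(1 - p / 2)       # Z_ijk = Phi^-1(1 - p_ijk / 2)
  C <- chol(Sigma)            # upper-triangular factor, Sigma = t(C) %*% C
  solve(t(C), Z)              # approximately independent Z* scores
}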
An aggregated score, R ij , is calculated using the weighted Stouffer's method:
$$ {R}_{ij}=\frac{{\displaystyle {\sum}_{k=1}^N}{w}_{ijk}{Z}_{ijk}^{*}}{\sqrt{{\displaystyle {\sum}_{k=1}^N}{w}_{ijk}^2}}\sim N\left(0,1\right) $$
where w ijk represents optional weights such as distance from the gene transcription start site (TSS) [24]. An analysis in which no weights w ijk are used is shown in Additional file 1: Figure S2; an R2 of 0.99 between final scores with and without weighting, together with nearly identical final modules and annotations (Additional file 2: Tables S14–S15), indicates that SMITE is robust to the choice of w ijk . In a high-resolution epigenomic assay like HELP-tagging, it is possible to have as many as ~2000 data points (p-values) associated with a large region like a gene body. Because aggregated scores increase as the number of p-values within a genomic interval increases, SMITE implements a quantile-permutation adjustment, whereby a specific R ij is compared to 100 distributions of randomly sampled R' ij scores from the same N ij quantile. We estimate p* ij , the proportion of sampled R' ij scores at or more extreme than the observed R ij , and \( \overline{p} \) * ij , the average of the proportions from random samples. Finally, we consider R ij = Φ −1 (1− \( \overline{p} \) * ij ) with an effect direction (e.g. less or more DNA methylation) derived from the p-value effect sizes. The improvement obtained by controlling for the number of combined p-values can be seen by comparing the combined significance before and after adjustment (Fig. 3).
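A toy sketch of the weighted Stouffer aggregation and the quantile-permutation adjustment; the variable names are illustrative, not SMITE internals, and 'R_same_quantile' stands for scores drawn from intervals with a similar number of combined p-values.

weighted_stouffer <- function(z_star, w = rep(1, length(z_star))) {
  sum(w * z_star) / sqrt(sum(w^2))
}

adjust_for_n <- function(R_obs, R_same_quantile, n_perm = 100) {
  props <- replicate(n_perm, {
    draw <- sample(R_same_quantile, replace = TRUE)   # one resampled score distribution
    mean(draw >= R_obs)                               # proportion at or beyond the observed score
  })
  p_bar <- max(mean(props), 1e-6)                     # guard against a zero proportion
  qnorm(1 - p_bar)                                    # adjusted score, back on the Z scale
}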
The effect of adjustment by the total number of combined P-values. In this example taken from the T. gondii HFF data set, the negative natural log of the significance of the combined p-value is plotted against the number of p-values that were combined for each value. The increasing trend is visible before adjustment (left) and is no longer present after adjustment (right)
Normalization of aggregated p-value-derived scores
We found that despite each component score R ij being normalized for the number of combined p-values, a slight difference in the distribution of one component can drive downstream scores and bias module detection. To resolve this potential limitation, we implement a normalization step that results in more comparable component scores, R ij , for all genes (i in 1,2,…I). There are two methods available for normalizing scores depending on the distribution of the combined p-values, and both represent monotonic transformations preserving the order of the scores. The first available method is a logit transform of the p-values, followed by rescaling to a common scale and then recovering the adjusted p-value. This method has minimal effects on the actual data, but it successfully improves the overall distribution and comparability of the different types of data (Fig. 4). The second available method is a variation on Box-Cox transformations where an iterative process identifies an optimal power transformation of the data.
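A sketch of the first (logit-based) option, assuming a vector 'p' of combined p-values; the epsilon guard is added here only to keep the logit finite.

logit_rescale <- function(p, eps = 1e-12) {
  p <- pmin(pmax(p, eps), 1 - eps)    # keep values strictly inside (0, 1)
  l <- log(p / (1 - p))               # logit transform
  l <- as.numeric(scale(l))           # rescale to a common scale
  1 / (1 + exp(-l))                   # back-transform to adjusted p-values
}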
Normalization of combined p-value scores. The densities of the scores/p-values for the T. gondii HFF data set are plotted using the SMITE functions to compare each of the annotated contexts to determine if normalization is necessary (left). After normalizing the values by logit transformation, rescaling, and back-transformation, the densities of the normalized p-values are shown (right)
The comparison of Rij (e.g. the gene expression scores compared to the gene promoter DNA methylation scores for the same gene) can provide useful information about the overall observed trends. Here, we show a comparison of the gene promoter scores with gene body scores for DNA methylation and DNA hydroxymethylation in the T. gondii HFF data set, and we can see that hypo-hydroxymethylated gene bodies are associated with hypo-hydroxymethylated promoters (Fig. 5).
Epigenetic modifications at promoters compared with gene bodies. Using the SMITE functions, we show a comparison of the component scores (the –ln (p-value) version of the Score) and the effect direction for gene promoters and gene bodies in the T. gondii HFF data set. For DNA methylation (left), there is not a large relationship between scores and directions of scores between promoters and bodies, whereas for DNA hydroxymethylation (right) there is a concordance of loss of hydroxymethylation in promoters and gene bodies
Final score derivation for downstream analysis
Finally, we derive a single score for each gene, G i , using the Stouffer method again, with optional weights w. j for each R. j reflecting the researcher's main analysis goals (e.g. increased weighting for gene expression and DNA methylation at gene promoters), and a directionality coefficient B. j reflecting a researcher's a priori understanding of the relationship between each R. j (e.g. increased DNA methylation at a gene promoter is correlated with decreased gene expression [25]). Because the combined score represents linear combinations of weights and transformed p-values, we again use MCMs by bootstrapping to determine a new adjusted p-value for each gene, p i . Scores for each gene are then calculated using Fisher's method as G i = −2ln(p i ), which has an approximate Chi-square distribution with 2 degrees of freedom. High scoring genes can be used for other analyses such as Gene Set Enrichment Analysis [26] and network-based approaches.
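A rough sketch of this final scoring step for a single gene, under the assumption (for illustration only) that the bootstrap null is built by resampling component scores from a supplied background pool; 'R', 'w', 'B' and 'background' are hypothetical inputs, not SMITE objects.

gene_score <- function(R, w, B, background, n_boot = 1000) {
  obs <- sum(w * B * R) / sqrt(sum(w^2))                 # directional weighted Stouffer
  boot <- replicate(n_boot, {
    draw <- sample(background, length(R), replace = TRUE)
    sum(w * B * draw) / sqrt(sum(w^2))
  })
  p_i <- max(mean(boot >= obs), 1 / n_boot)              # bootstrap-adjusted p-value
  -2 * log(p_i)                                          # G_i = -2 ln(p_i), ~ Chi-square, 2 df
}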
To explore the impact of weight choice for each R. j on downstream analysis, we fixed the weight values w. j for all but one R. j , varied the remaining individual weight w. m , and for each variation we extracted the highest scoring genes using a sampling approach with replacement to determine the background score distribution. This analysis allowed us to assess how individual genes' scores varied with weight choice, and to what extent the overall high scoring geneset was altered (Additional file 1: Figure S3). As expected, we observe that as the relative weighting increases, the effect of each R. m on the overall identified geneset is greater; however, roughly 50% of the identified genes remain constant, likely depending mostly on the other R. j for j ≠ m for their overall scores. For each R. m , as w. m increases, a different subset of genes emerges that likely depends on R. m (i.e. there are associated significant p-values). Ultimately, we believe this flexibility in identified genes is a strength of the technique, as it allows the researcher to identify a subset of genes that is robust to weight choice, but also allows for overall gene sets that differ depending on the R. j of interest.
Module identification within SMITE
In SMITE, modules are identified by inputting scores into a spin-glass algorithm as in Epimods [27] or the Heinz algorithm [28] as in BioNet [29]. The spin-glass algorithm in network analysis was initially suggested by Reichardt and Bornholdt [30], who sought a method of defining subsets of nodes within a network that were more densely interconnected, suggesting that these represented a joint spin state, or community. They proposed that the relative density of the connections, called modularity, could be compared to modularity under a null distribution to derive significant communities within a larger network. The spin-glass algorithm, which depends on a single parameter [31], has been shown to be an effective method for finding modules as long as its parameter is set below 0.6, and in fact, it was shown that fixing this parameter at 0.5 results in an optimal number of genes within a module [27]. Thus, SMITE also uses a 0.5 parameter for running the spin-glass algorithm. Alternatively, the Heinz algorithm uses a linear programming approach called branch-and-cut, where connections between nodes are converted to two directed edges and trimmed until a single optimal subnetwork is identified. Thus in practice, the Heinz algorithm produces a larger summary subnetwork of genes that typically encompasses the separate modules found using the spin-glass algorithm.
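For illustration, the spin-glass step can be reproduced with the cluster_spinglass() function of the igraph R package (SMITE's own wrapper and its network input are not shown here); the Zachary karate-club graph is used only as a small connected stand-in for a gene interaction network.

library(igraph)

g <- make_graph("Zachary")                   # toy connected network
comm <- cluster_spinglass(g, gamma = 0.5)    # spin-glass modules, parameter fixed at 0.5
sizes(comm)                                  # number of nodes per module
membership(comm)                             # module assignment of each node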
Whereas other subnetwork identification algorithms define significance on the basis of observed subnetwork modularity (i.e. connectivity), SMITE allows modules to have both connectivity significance and an additional associated statistical significance related to the sum of the individual node scores within a module. Because our scores are derived from p-values, we employ Fisher's method mentioned above to assess the overall module significance, which should follow a Chi-square distribution with 2k degrees of freedom, where k is the number of genes within a module (see Additional file 1: Supplementary methods). Therefore, this significance can be used to rank and filter modules.
Integrative analysis increases study power
SMITE increases the power of analysis at four levels: (1) by analyzing combined genomic signals from multi-level genomics experiments and avoiding the inflated type I error that characterizes pairwise comparisons of genomic signals; (2) by combining incomplete data sets so that having one missing signal will not eliminate a gene from analysis; (3) by allowing prioritization of the most important signals and genomic contexts (a subjective criterion dependent on research goals) for further downstream analysis; and (4) by implementing methods to analyze groups of genes within networks or pathways together. We have therefore designed SMITE to aid in the interpretation of integrated data that were given rigorous statistical treatment during upstream analysis. In the setting of underpowered, preliminary research, SMITE is better used as an exploratory tool to help target downstream analysis and plan further experiments.
SMITE identifies novel dysregulated functional modules in T. gondii-infected human cells
In Additional file 1: Table S2, we show two sets of criteria that we used to score the T. gondii HFF data, called the reduced (SMITE-R) and full (SMITE-F) models, that illustrate how a researcher can use SMITE with varied weighting to identify different gene modules. The SMITE-R model only includes gene expression and gene promoter DNA methylation, whereas the SMITE-F model also includes enhancer (active and poised) and gene body DNA methylation and hydroxymethylation. We were primarily interested in transcriptional regulatory alterations at enhancers (histone H3 lysine 4 monomethylation, H3K4me1) and how those relate to functional annotations, so in SMITE-F, enhancer-defining marks were weighted highest, followed by gene expression, gene promoters, and gene bodies. As mentioned previously, we expect that DNA methylation should have a negative correlation with gene expression at gene promoters [32], and a positive correlation with gene expression at gene bodies [33–35]. In contrast, for the purpose of this demonstration, we do not assume any known relationship between DNA methylation at enhancers or for DNA hydroxymethylation at any genomic feature. For both the reduced and full models, we ran the spin-glass and the Heinz algorithms. For the spin-glass algorithm, we requested modules that had at least 8 genes but no more than 100 genes. For the Heinz algorithm, we input a subset of high scoring genes identified by randomly sampling the scores to find the background distribution. The R code that we used is shown in Additional file 1: Appendix 1, and the list of genes within the summary network generated by the Heinz algorithm is shown in Additional file 2: Table S11.
The effect of SMITE-R and SMITE-F model choices on the overall scores is shown in Additional file 1: Figure S4. Through the spin-glass algorithm, both SMITE-R and SMITE-F identified 13 modules representing 528 and 510 genes, respectively (Additional file 2: Tables S5–S6), with an overlap of only 94 genes. Notably, four and two of the 13 modules for SMITE-R and SMITE-F, respectively, showed enrichment for infection-related and inflammation-related annotations, as would be expected for infection of a host cell by an intracellular pathogen. In addition, we find that metabolism-related modules are generally dysregulated, in five and four of the 13 modules for SMITE-R and SMITE-F, respectively, suggesting that host cell metabolism may be altered after infection. For SMITE-R, two modules were enriched for cell cycle- and apoptosis-related effects, confirming prior observations regarding T. gondii infection in host cells [36–39]. In Fig. 6 we show one cell cycle-related functional module identified by SMITE-R that also indicates altered MAPK signaling, a previously implicated feature in toxoplasmosis of mice [40, 41] and humans [42]. While it has been demonstrated that T. gondii infection of human cells induces host cell cycle arrest at G2 [37, 38], the identified module indicates that T. gondii may accomplish this through combined epigenetic dysregulation at promoters and transcriptomic dysregulation. In SMITE-F, three modules strongly implicate chromatin remodeling, epigenetic regulation of gene expression, and detection of pathogen DNA in the cytosol, and in Fig. 7, we show an identified module with multiple epigenetic events at genes' active and poised enhancers. Results from the Heinz algorithm are concordant in showing many cell cycle-related pathways for the reduced model and additional altered cell signaling pathways in the full model (Additional file 1: Figure S5, Additional file 2: Tables S12–S13). Therefore, SMITE analysis suggests that T. gondii infection remodels the epigenome of the infected host and alters host gene expression, impacting host gene networks that regulate metabolism, intracellular signaling, and cell cycle progression, and these findings are part of a manuscript in preparation (Ulahannan et al.). To ensure robustness of results, we performed the analysis twice more, and despite using random sampling procedures at multiple points within SMITE, we obtained the same modules and module significance each time, indicating that SMITE results are highly reproducible. More detail about each module is given in Additional file 2: Tables S5–S6 and Tables S7–S9.
SMITE-identified module implicating cell cycle and MAPK pathways. SMITE allows visualization of the relationship between each component score and the overall node score. This functional module is enriched in human genes that regulate cell cycle by altering cell survival and apoptosis consistent with the known property of T. gondii infection of human cells to induce host cell cycle arrest at G2. The module shows MAPK4 as a highly scoring gene (intense red coloring) centered within the network
SMITE-identified module implicating chromatin regulation. The module centered around histones and their regulators is plotted in a circular layout in two modes, with (left) and without (right) component score details. We can see that many of these genes were implicated because of their component scores for gene expression and events occurring at enhancers
SMITE improves integrative genomics methods
We identified FEM and BioNet as computationally efficient methods to identify gene modules, and we designed SMITE to improve the gene scoring functionality of these technologies. While SMITE can serve as a wrapper for module-identifying functions of FEM and BioNet, there are several major shortcomings of these approaches, which we have addressed with SMITE. Although these improvements preclude a direct head-to-head comparison of SMITE to other methods, a discussion of these improvements illustrates the novel aspects of SMITE as compared to state-of-the-art technology.
Both SMITE and BioNet use p-values as an input, while FEM usually employs t-statistics that have been averaged over a region near the transcription start site (TSS). By averaging t-statistics over a region directly adjacent to the TSS, FEM does not preserve the biology of epigenetic processes like DNA methylation that may occur far from the TSS and may not occur equally throughout a region. Though FEM is not limited to t-tests, the algorithm assumes sample normality and uses scaling of the relationship between DNA methylation and expression by the ratio of the t-statistic variances – a technique that is optimal for combining t-tests. Therefore, FEM is only functionally optimal for analyzing t-tests, which is often inappropriate in genomics considering data distributions and the necessary adjustments for confounders such as experimental batch effects [43]. Thus, the p-value is a more versatile input because it can be derived from different statistical methods depending on each individual experiment.
FEM can only integrate one epigenetic modification, usually DNA methylation, with gene expression. If a researcher wanted to compare multiple types of epigenetic data with expression and with each other, it would necessitate either pairwise comparisons between each epigenetic dataset and expression, which would hinder the overall study interpretation, or manual selection of a single p-value for each gene, which would bias the findings. Though BioNet allows several p-values to be associated with a gene so that more than one epigenetic modification could be integrated, it does not have an implemented method to arrive at a single summary statistic or p-value for the epigenetic modifications, again requiring manual curating of the input data. To address these major shortcomings, SMITE uses a statistically sophisticated aggregation and normalization algorithm that allows the user to input p-values and multiple genomic intervals, thus allowing simultaneous comparison of many types of data including, but not limited to, DNA methylation, DNA hydroxymethylation, and ChIP-seq peak data.
BioNet does not incorporate the effect direction in its scoring method, and FEM incorrectly assumes that the epigenetic modification statistic will always have an inverse relationship with gene expression, which oversimplifies the complexity of gene expression regulation. To address this limitation, SMITE is novel in allowing the user to adjust the directionality of an epigenetic modification's relationship with gene expression in a genomic context-dependent manner.
In addition, FEM has a very specific input structure that requires rows of the DNA methylation data, expression data, and graph objects to have matching Entrez gene ids. Unfortunately, this may not be straightforward to assemble and will negatively select genes that only have partial data (e.g. having only gene expression or only DNA methylation) or are not part of an interaction network. Functionally, each FEM analysis becomes centered around the nodes that are still available in a specific interaction network instead of centered around high scoring genes regardless of missing data. BioNet employs non-parametric order statistics that ignore missing data. Because SMITE uses a combined p-value for each node, it does not specifically require a high scoring node to have complete data. SMITE is also not limited by gene annotation (e.g. Entrez, Refseq) as a consistent set of identifiers is used. Thus, SMITE allows for missing data and flexibility of gene annotation.
Finally, FEM and BioNet rely on ranking genes based on the sum of their DNA methylation and gene expression statistics and a combined p-value, respectively. In contrast, SMITE is novel in allowing users to input a prioritization of genomic contexts relative to one another so that the identified functional modules reflect the researcher's goals or intuition. Therefore, the findings in SMITE are more robust for novel pathway discovery and exploratory analysis.
Comparison of modules detected using SMITE and FEM
Though SMITE and FEM are not directly comparable, having shown that SMITE can identify functionally important modules within the T. gondii HFF data set, we aimed to demonstrate that SMITE-identified modules are not the same as those identified by FEM. Additionally, because the spin-glass algorithm can identify several modules compared to a single module in BioNet, a comparison of the multiple identified modules between SMITE and FEM allows more resolution. To compare SMITE and FEM, we used the criteria defined in the FEM vignette to associate genes with DNA methylation. We calculated t-statistics with four degrees of freedom for gene expression and DNA methylation analysis, and we associated DNA methylation with genes by: 1) taking the average of all effects within 200 bp from a gene transcription start site (TSS), 2) if no effects were found, taking the average of effects over the first exon, and 3) if no effects were found, taking the average over 1500 bp around the TSS. The R code that we used to run FEM is shown in Additional file 1: Appendix 2. The high-scoring genes identified by the three models (SMITE-F, SMITE-R and FEM) are listed in Additional file 2: Table S3. We compare the FEM model with the SMITE-R model, which is directly comparable because it equally weights gene expression and promoter DNA methylation and in opposite directions, and the SMITE-F model, which incorporates additional information regarding gene enhancers.
We used the DoFEM.bi function in FEM with the default settings provided in the FEM package vignette. Using FEM we identified 7 modules that have between 8 and 100 genes (Additional file 2: Tables S4 and S7). In summary, FEM implicated 175 genes, only 8 of which overlapped those identified with the reduced SMITE model and 23 of which overlapped those identified by SMITE-F (Fig. 8a). Therefore, since SMITE-F identified modules represent combined gene expression and DNA methylation and DNA hydroxymethylation at enhancers, and the SMITE-R and FEM-identified modules only focus on DNA methylation at gene promoters and expression, the techniques appear to identify largely different modules and genes. Additionally, the SMITE-R and FEM models appear to also identify mutually exclusive genes. Though FEM does not have an implemented method to examine further pathway annotations, we annotated its modules using GoSeq and compared enriched pathways. In Additional file 2: Table S10, it is apparent that all three models enrich for metabolism, signal transduction, and the immune system to some extent; however, while the FEM and SMITE-R models enrich for cell cycle regulation, only the SMITE models indicate transcriptional regulatory processes.
SMITE comparison with FEM. a An Euler diagram showing that no genes were found by all three models: FEM, SMITE-R, and SMITE-F. SMITE-F and SMITE-R overlap much more than either do with FEM. b A comparison of the densities of all scores compared to genes identified within modules by SMITE-F (left), SMITE-R (middle), and FEM (right), indicating that there is a statistically significant enrichment for high scoring genes using SMITE even when using the reduced model
We then examined how each technique was able to enrich for high scoring nodes within identified functional modules. In Fig. 8b we compare the density of all scores compared to the density of scores for genes within modules for FEM, SMITE-R and SMITE-F. SMITE-R and SMITE-F have a statistically different distribution (simulated Kolmogorov-Smirnov (KS) test p = 0.00001 and p = 0.00001, respectively) of enriched genes compared to all scored genes whereas FEM contains the equivalent of a random sampling of scored genes (Kolmogorov-Smirnov test p = 0.2153). The derivation of the KS-test significance for these tests is shown in Additional file 1: Figure S6.
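The density comparison itself can be reproduced with a standard two-sample Kolmogorov-Smirnov test in base R (the simulated KS p-values reported above additionally use resampling, which is omitted in this sketch; the score vectors below are stand-ins only).

all_scores    <- rnorm(5000)               # stand-in for the scores of all genes
module_scores <- rnorm(300, mean = 0.8)    # stand-in for the scores of module genes
ks.test(module_scores, all_scores)         # two-sample KS test of the distributions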
Finally, in Additional file 1: Figure S7 we show the relationship between high scoring genes and the number of p-values associated with those genes. Because the FEM input involved averaging p-values in discrete regions around the TSS, the highest scoring genes in FEM tend to be biased by having more associated p-values when compared to high scoring genes in the full-SMITE model (KS test p < 10−12).
The limitations of FEM make it impossible to perform a head-to-head comparison with SMITE to identify simulated effects occurring at putative enhancers and incorporating DNA hydroxymethylation. Nevertheless, assuming the existence of true functional modules that represent interconnected genes that are dysregulated by common epigenetic mechanisms within a pathway, SMITE enriches for genes that are high scoring and is, therefore, very sensitive and specific. In contrast, FEM modules will tend to have many low scoring nodes, which may indicate that FEM is not as sensitive, or there may be many more false positives within FEM modules. FEM genes are also biased by having a higher number of associated p-values. Therefore, we conclude that the heuristic used to prioritize genes in SMITE employs a robust algorithm that integrates multi-level genomics findings and can identify novel functional modules that are both focused and meaningful.
Current genomic experiments are underpowered to detect genomic events comprehensively within a network, and a functional module identified by SMITE is implicated by the cumulative evidence of varied input data over all of its members. Modules implicate potentially important network members for which there may be no statistically significant evidence. Thus, SMITE is a discovery platform to integrate multi-level genomic observations that represents a significant improvement over existing integrative genomics approaches. Through SMITE, researchers can increase study power to find a single set of interpretable results integrating epigenomic and transcriptomic data sets.
FEM: Functional Epigenetic Modules
HFF: Human foreskin fibroblasts
MCM: Monte Carlo method
SMITE: Significance-based Modules Integrating the Transcriptome and Epigenome
TSS: Transcription start site
Chin L, Hahn WC, Getz G, Meyerson M. Making sense of cancer genomic data. Genes Dev. 2011;25:534–55.
Koestler DC, Jones MJ, Kobor MS. The era of integrative genomics: more data or better methods? Epigenomics. 2014;6:463–7.
Feldmann A, Ivanek R, Murr R, Gaidatzis D, Burger L, Schübeler D. Transcription factor occupancy can mediate active turnover of DNA methylation at regulatory regions. PLoS Genet. 2013;9:e1003994.
Wang J, Zhuang J, Iyer S, Lin X, Whitfield TW, Greven MC, Pierce BG, Dong X, Kundaje A, Cheng Y, Rando OJ, Birney E, Myers RM, Noble WS, Snyder M, Weng Z. Sequence features and chromatin structure around the genomic regions bound by 119 human transcription factors. Genome Res. 2012;22:1798–812.
Benveniste D, Sonntag HJ, Sanguinetti G, Sproul D. Transcription factor binding predicts histone modifications in human cell lines. Proc Natl Acad Sci U S A. 2014;111:13367–72.
Zilberman D, Gehring M, Tran RK, Ballinger T, Henikoff S. Genome-wide analysis of Arabidopsis thaliana DNA methylation uncovers an interdependence between methylation and transcription. Nat Genet. 2007;39:61–9.
Hark AT, Schoenherr CJ, Katz DJ, Ingram RS, Levorse JM, Tilghman SM. CTCF mediates methylation-sensitive enhancer-blocking activity at the H19/Igf2 locus. Nature. 2000;405:486–9.
Luu PL, Schöler HR, Araúzo-Bravo MJ. Disclosing the crosstalk among DNA methylation, transcription factors, and histone marks in human pluripotent cells through discovery of DNA methylation motifs. Genome Res. 2013;23:2013–29.
Hu S, Wan J, Su Y, Song Q, Zeng Y, Nguyen HN, Shin J, Cox E, Rho HS, Woodard C, Xia S, Liu S, Lyu H, Ming GL, Wade H, Song H, Qian J, Zhu H. DNA methylation presents distinct binding sites for human transcription factors. Elife. 2013;2:e00726.
Kim J, Kollhoff A, Bergmann A, Stubbs L. Methylation-sensitive binding of transcription factor YY1 to an insulator sequence within the paternally expressed imprinted gene, Peg3. Hum Mol Genet. 2003;12:233–45.
Domcke S, Bardet AF, Adrian Ginno P, Hartl D, Burger L, Schübeler D. Competition between DNA methylation and transcription factors determines binding of NRF1. Nature. 2015;528:575–9.
Jiao Y, Widschwendter M, Teschendorff AE. A systems-level integrative framework for genome-wide DNA methylation and gene expression data identifies differential gene expression modules under epigenetic control. Bioinformatics. 2014;30:2360–6.
Ramanan VK, Shen L, Moore JH, Saykin AJ. Pathway analysis of genomic data: concepts, methods, and prospects for future development. Trends Genet. 2012;28:323–32.
Mukherjee SN, Skykacek P, Roberts SJ, Gurr SJ. Gene ranking using bootstrapped p-values. SIGKDD Explorations. 2003;5:16–22.
Alves G, Yu YK. Accuracy evaluation of the unified P-value from combining correlated p-values. PLoS One. 2014;9:e91225.
Zaykin DV, Zhivotovsky LA, Westfall PH, Weir BS. Truncated product method for combining p-values. Genet Epidemiol. 2002;22:170–85.
Stouffer S, DeVinney LN, Suchman E. The American Soldier, Adjustment During Army Life. Princeton: Princeton University Press; 1949.
Sidak Z. Rectangular confidence regions for the means of multivariate normal distributions. J Am Stat Assoc. 1967;62:626–33.
Fisher RA. Statistical methods for research workers. 4th ed. Edinburgh: Oliver and Boyd; 1932.
Vanderkraats ND, Hiken JF, Decker KF, Edwards JR. Discovering high-resolution patterns of differential DNA methylation that correlate with gene expression changes. Nucleic Acids Res. 2013;41:6816–27.
Jaffe AE, Murakami P, Lee H, Leek JT, Fallin MD, Feinberg AP, Irizarry RA. Bump hunting to identify differentially methylated regions in epigenetic epidemiology studies. Int J Epidemiol. 2012;41:200–9.
Suzuki M, Jing Q, Lia D, Pascual M, McLellan A, Greally JM. Optimized design and data analysis of tag-based cytosine methylation assays. Genome Biol. 2010;11:R36.
Bibikova M, Barnes B, Tsan C, Ho V, Klotzle B, Le JM, Delano D, Zhang L, Schroth GP, Gunderson KL, Fan JB, Shen R. High density DNA methylation array with single CpG site resolution. Genomics. 2011;98:288–95.
Lipták T. On the combination of independent tests. Magyar Tud Akad Mat Kutato Int Közl. 1958;3:171–97.
Jones PA. The DNA methylation paradox. Trends Genet. 1999;15:34–7.
Subramanian A, Tamayo P, Mootha VK, Mukherjee S, Ebert BL, Gillette MA, Paulovich A, Pomeroy SL, Golub TR, Lander ES, Mesirov JP. Gene set enrichment analysis: a knowledge-based approach for interpreting genome-wide expression profiles. Proc Natl Acad Sci U S A. 2005;102:15545–50.
West J, Beck S, Wang X, Teschendorff AE. An integrative network algorithm identifies age-associated differential methylation interactome hotspots targeting stem-cell differentiation pathways. Sci Rep. 2013;3:1630.
Dittrich MT, Klau GW, Rosenwald A, Dandekar T, Müller T. Identifying functional modules in protein-protein interaction networks: an integrated exact approach. Bioinformatics. 2008;24:i223–31.
Beisser D, Klau GW, Dandekar T, Müller T, Dittrich MT. BioNet: an R-Package for the functional analysis of biological networks. Bioinformatics. 2010;26:1129–30.
Reichardt J, Bornholdt S. Statistical mechanics of community detection. Phys Rev E Stat Nonlin Soft Matter Phys. 2006;74:016110.
Yang Z, Algesheimer R, Tessone CJ. A comparative analysis of community detection algorithms on artificial networks. Sci Rep. 2016;6:30750.
Deaton AM, Bird A. CpG islands and the regulation of transcription. Genes Dev. 2011;25:1010–22.
Hellman A, Chess A. Gene body-specific methylation on the active X chromosome. Science. 2007;315:1141–3.
Ball MP, Li JB, Gao Y, Lee JH, LeProust EM, Park IH, Xie B, Daley GQ, Church GM. Targeted and genome-scale strategies reveal gene-body methylation signatures in human cells. Nat Biotechnol. 2009;27:361–8.
Suzuki M, Oda M, Ramos MP, Pascual M, Lau K, Stasiek E, Agyiri F, Thompson RF, Glass JL, Jing Q, Sandstrom R, Fazzari MJ, Hansen RS, Stamatoyannopoulos JA, McLellan AS, Greally JM. Late-replicating heterochromatin is characterized by decreased cytosine methylation in the human genome. Genome Res. 2011;21:1833–40.
Bougdour A, Durandau E, Brenier-Pinchart MP, Ortet P, Barakat M, Kieffer S, Curt-Varesano A, Curt-Bertini RL, Bastien O, Coute Y, Pelloux H, Hakimi MA. Host cell subversion by Toxoplasma GRA16, an exported dense granule protein that targets the host cell nucleus and alters gene expression. Cell Host Microbe. 2013;13:489–500.
Molestina RE, El-Guendy N, Sinai AP. Infection with Toxoplasma gondii results in dysregulation of the host cell cycle. Cell Microbiol. 2008;10:1153–65.
Brunet J, Pfaff AW, Abidi A, Unoki M, Nakamura Y, Guinard M, Klein JP, Candolfi E, Mousli M. Toxoplasma gondii exploits UHRF1 and induces host cell cycle arrest at G2 to enable its proliferation. Cell Microbiol. 2008;10:908–20.
Blader IJ, Koshy AA. Toxoplasma gondii development of its replicative niche: in its host cell and beyond. Eukaryot Cell. 2014;13:965–76.
Kim L, Butcher BA, Denkers EY. Toxoplasma gondii interferes with lipopolysaccharide-induced mitogen-activated protein kinase activation by mechanisms distinct from endotoxin tolerance. J Immunol. 2004;172:3003–10.
Valère A, Garnotel R, Villena I, Guenounou M, Pinon JM, Aubert D. Activation of the cellular mitogen-activated protein kinase pathways ERK, P38 and JNK during Toxoplasma gondii invasion. Parasite. 2003;10:59–64.
Braun L, Brenier-Pinchart MP, Yogavel M, Curt-Varesano A, Curt-Bertini RL, Hussain T, Kieffer-Jaquinod S, Coute Y, Pelloux H, Tardieux I, Sharma A, Belrhali H, Bougdour A, Hakimi MA. A Toxoplasma dense granule protein, GRA24, modulates the early immune response to infection by promoting a direct and sustained host p38 MAPK activation. J Exp Med. 2013;210:2071–86.
Thomas JG, Olson JM, Tapscott SJ, Zhao LP. An efficient and robust statistical modeling approach to discover differentially expressed genes using genomic expression profiles. Genome Res. 2001;11:1227–36.
Parts of this manuscript were published in partial fulfillment of the requirements for a PhD in Biomedical Sciences at Einstein (NAW). The personnel of Einstein's Center for Epigenomics are thanked for their contributions.
SMITE development was supported in part by the MSTP at Albert Einstein College of Medicine (Einstein) (to NAW, AJ; NIH T32 GM007288) and R01AI087625 (to KK).
The T. gondii data analysed in the current study are available under the GEO Accession number: GSE79612.
SMITE is available through the Bioconductor web site at https://www.bioconductor.org/packages/release/bioc/html/SMITE.html. It requires R (version ≥3.3.0), an open-source statistical environment available through the Comprehensive R Archive Network (CRAN) at http://cran.r-project.org, and SMITE runs on Linux, Mac OS and MS-Windows. Further package details, installation instructions, and a comprehensive package vignette are available through Bioconductor.
NAW: Co-developed the SMITE approach with RM, one of the authors of manuscript. AJ: Developed the SMITE package for Bioconductor with NAW, assisted NAW with revisions to SMITE approach, one of the authors of manuscript. RM: Co-developed the SMITE approach with NAW. FD: Involved in development and testing of SMITE approach. NU: Developed the Toxoplasma data and helped to test these using SMITE. KK: Led the Toxoplasma data development, contributed to directing SMITE functions. JMG: Oversaw project, coordinated application to Toxoplasma data, one of the authors of manuscript. All authors read and approved the final manuscript.
Department of Genetics, Albert Einstein College of Medicine, 1301 Morris Park Avenue, Bronx, NY, 10461, USA
N. Ari Wijetunga, Andrew D. Johnston, Fabien Delahaye, Netha Ulahannan & John M. Greally
Division of Obstetrics and Gynecology, Yamaguchi University, 677-1 Yoshida, Yamaguchi Prefecture, 753-8511, Japan
Ryo Maekawa
Department of Obstetrics, Gynecology and Women's Health, Albert Einstein College of Medicine, 1301 Morris Park Avenue, Bronx, NY, 10461, USA
Fabien Delahaye
Department of Microbiology and Immunology, Albert Einstein College of Medicine, 1301 Morris Park Avenue, Bronx, NY, 10461, USA
Netha Ulahannan & Kami Kim
Department of Pathology, Albert Einstein College of Medicine, 1301 Morris Park Avenue, Bronx, NY, 10461, USA
Kami Kim
Department of Medicine, Albert Einstein College of Medicine, 1301 Morris Park Avenue, Bronx, NY, 10461, USA
N. Ari Wijetunga
Andrew D. Johnston
Netha Ulahannan
John M. Greally
Correspondence to John M. Greally.
Supplementary Methods. Figure S1 Proportions of RNA-seq reads from T. gondii-infected HFFs aligning to a composite hg19/Toxoplasma genome. Figure S2 Comparison of distance weighting effect on gene scores. Figure S3 Representation of simulations demonstrating the effects on high scoring genes of variation of weightings. Figure S4 Comparison of gene scores with reduced and full SMITE models. Figure S5 Examples of modules generated by full and reduced SMITE models. Figure S6 KS test results comparing SMITE and FEM module genes and a random sampling of 10,000 genes. Figure S7 Comparison of the performance of the full SMITE model with the FEM model. Table S1 Criteria for defining genomic contexts in HFFs. Table S2 Weighting criteria used for SMITE analysis of the T. gondii HFF dataset. Appendix 1 R code for analyzing T. gondii HFF dataset with SMITE. Appendix 2 R code for analyzing T. gondii HFF dataset with FEM. Supplementary references (PDF 5642 kb)
Supplementary Tables. Table S3 Gene symbol and score of the high scoring genes using three different methods: SMITE full model, SMITE reduced model, and FEM. Table S4 Modules discovered using FEM and genes composing the modules with their DNA methylation, expression, and overall statistics. Table S5 Modules discovered using the reduced model of SMITE (SMITE-R) with spin-glass. Table S6 Modules discovered using the full model of SMITE (SMITE-F) with spin-glass. Table S7 Pathways associated with the genes composing the modules discovered by FEM. Table S8 Pathways associated with the genes composing the modules discovered by the reduced model of SMITE (SMITE-R) using spin-glass. Table S9 Pathways associated with the genes composing the modules discovered by the full model of SMITE (SMITE-F) using spin-glass. Table S10 Quantifying the number of times pathways were found to be associated the modules discovered by either FEM, the reduced model of SMITE (SMITE-R) using spin-glass, or the full model of SMITE(SMITE-F) using spin-glass. Table S11 Genes composing the "summary network" found by either the reduced (SMITE-R) or full (SMITE-F) SMITE models using the Heinz algorithm. Table S12 Pathways associated with the genes composing the "summary network" discovered by the reduced model of SMITE(SMITE-R) using the Heinz algorithm. Table S13 Pathways associated with the genes composing the "summary network" discovered by the full model of SMITE (SMITE-F) using the Heinz algorithm. Table S14 Genes composing the "modules" found using no weights instead of weighting by distance. Table S15 Pathways associated with the genes in the modules identified without using distance weighting. (XLSX 269 kb)
Wijetunga, N.A., Johnston, A.D., Maekawa, R. et al. SMITE: an R/Bioconductor package that identifies network modules by integrating genomic and epigenomic information. BMC Bioinformatics 18, 41 (2017). https://doi.org/10.1186/s12859-017-1477-3
Epigenetic
Interaction network
Networks analysis
Dr Dave Shaw
Research student
First measurement of the nu(mu) charged-current cross section on a water target without pions in the final state
Brailsford, D., Dealtry, T., Finch, A. J., Knox, A., Kormos, L. L., Lamont, I., Lawe, M., Nowak, J., O'Keeffe, H. M., Ratoff, P. N., Shaw, D. & T2K Collaboration, 8/01/2018, In: Physical Review D. 97, 1, 16 p., 012001.
Measurement of neutrino and antineutrino oscillations by the T2K experiment including a new additional sample of $ν_e$ interactions at the far detector
T2K Collaboration, 21/11/2017, In: Physical Review D. 96, 9, 49 p., 092006.
Measurement of $\barν_μ$ and $ν_μ$ charged current inclusive cross sections and their ratio with the T2K off-axis near detector
T2K Collaboration, 1/09/2017, In: Physical Review D. 96, 5, 15 p., 052001.
Updated T2K measurements of muon neutrino and antineutrino disappearance using 1.5 x10 21 protons on target
T2K Collaboration, 31/07/2017, In: Physical Review D. 96, 1, 9 p., 011102(R).
Search for Lorentz and CPT violation using sidereal time dependence of neutrino flavor transitions over a short baseline
T2K Collaboration, 29/06/2017, In: Physical Review D.
First combined analysis of neutrino and antineutrino oscillations at T2K
T2K Collaboration, 14/04/2017, In: Physical review letters. 118, 15, 9 p., 151801.
First measurement of the muon neutrino charged current single pion production cross section on water with the T2K near detector
T2K Collaboration, 01/2017, In: Physical Review D. 95, 1, 012010.
Measurement of coherent $π^{+}$ production in low energy neutrino-Carbon scattering
T2K Collaboration, 4/11/2016, In: Physical Review D. 117, 19, 7 p., 192501.
Proposal for an Extended Run of T2K to $20\times10^{21}$ POT
T2K Collaboration, 14/09/2016, In: arxiv.org.
Sensitivity of the T2K accelerator-based neutrino experiment with an Extended run to 20×1021 POT
4D Structural root architecture modeling from digital twins by X-Ray Computed Tomography
Monica Herrero-Huerta ORCID: orcid.org/0000-0002-4134-557X1,
Valerian Meline2,
Anjali S. Iyer-Pascuzzi2,
Augusto M. Souza1,
Mitchell R. Tuinstra1 &
Yang Yang1
Plant Methods volume 17, Article number: 123 (2021)
Breakthrough imaging technologies may help overcome the plant phenotyping bottleneck in marker-assisted breeding and genetic mapping. In this context, X-Ray CT (computed tomography) technology can accurately obtain the digital twin of root system architecture (RSA), but computational methods to quantify RSA traits and analyze their changes over time are limited. RSA traits strongly affect agricultural productivity. We develop a spatial–temporal root architectural modeling method based on 4D data from X-ray CT. This novel approach is optimized for high-throughput phenotyping considering the cost-effective time to process the data and the accuracy and robustness of the results. Significant root architectural traits, including root elongation rate, number, length, growth angle, height, diameter, branching map, and volume of axial and lateral roots, are extracted from the model based on the digital twin. Our pipeline is divided into two major steps: (i) first, we compute the curve-skeleton based on a constrained Laplacian smoothing algorithm. This skeletal structure determines the registration of the roots over time; (ii) subsequently, the RSA is robustly modeled by a cylindrical fitting to spatially quantify several traits. The experiment was carried out at the Ag Alumni Seed Phenotyping Facility (AAPF) at Purdue University in West Lafayette (IN, USA).
Roots from three samples of tomato plants at two different times and three samples of corn plants at three different times were scanned. Regarding the first step, PCA of the skeleton is able to accurately and robustly register roots over time. From the second step, several traits were computed. Two of them were accurately validated using the root digital twin as ground truth against the cylindrical model: number of branches (RRMSE better than 9%) and volume, reaching a coefficient of determination (R2) of 0.84 and a P < 0.001.
The experimental results support the viability of the developed methodology, which can scale to a comprehensive analysis for high-throughput root phenotyping.
Plant roots are critical for water and nutrient uptake from soils [1, 2]. Roots can form complex networks composed of different types and ages of roots [3]. The spatial arrangement of the root system is called Root System Architecture (RSA). Considering that RSA can affect crop performance, selecting crops based on specific RSA could lead to improved agricultural productivity [4]. However, our understanding of RSA development in soil is limited by the complexity of root phenotyping in situ [5, 6]. Because of the opaque nature of soil, progress made in non-destructive root phenotyping has been limited to systems such as the rhizotron, which acquires two-dimensional images of roots growing in transparent enclosures.
The plant science community urgently requires advanced approaches for the characterization of RSA using novel image-based technologies [7] to quantify the 3D dynamics of RSA [8, 9]. Three tomographic techniques are currently available for non-destructive 3D phenotyping: X-ray Computed Tomography (CT), Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET). Recent technological innovations in scan resolution and the throughput of image processing made X-ray CT the current state-of-the-art technology for non-destructive root phenotyping in soil [10]. Generally speaking, a regular X-ray CT system has a source and a detector. The source is responsible for passing the X-ray beams through a sample, which absorbs a portion of these beams, while the detector records this attenuated signal as two-dimensional projections. The attenuation is based on the material properties and electron density; thus, the internal structure of the scanned sample becomes visible by contrasting the different elements inside, depending on how much X-ray they absorb based on their chemical composition and characteristics [11]. Further, a 3D reconstruction of the sample material can be generated based on the 2D projections by scanning it at different positions [12].
A shape descriptor highly recommended in plant science is the curve-skeleton. It is able to describe the hierarchies and extent of branching plant networks [13]. Methods for skeleton extraction are primarily grouped into volumetric and geometric approaches, depending on whether they compute an interior or only a surface representation. As a common drawback, volumetric approaches potentially lose details and have numerical instability caused by inappropriate discretization resolution [14, 15]. In contrast, geometric methods approximate the medial surface by extracting the internal edges and faces. Medial axis skeleton and Reeb-graph-based methods are a couple of examples that are established using geometric principles. In 3D space, the medial axis usually fails when planar regions occur.
There are likewise two categories of 3D modeling methods. The first one includes voxel approaches, where volumetric models are constructed by partitioning the point cloud into voxels. The capability of these methods to model irregular surfaces is limited. The other category comprises parametric surface methods. The circular cylinder is the most dominant shape-fitting approach because it balances simplicity and realistic modeling [16].
Analysis of root models derived from X-ray CT images allows quantification of root growth over time and in response to external stresses, but there are several major challenges associated with these data. These include root segmentation and 3D modeling, which involve extracting the root digital twin from X-ray radiographs, and computing root architecture measurements from the resulting models. The RootTine protocol was designed to segment the root in a faster and automated way so that it can be implemented in high-throughput (HTP) systems [17]. However, this method only computes the root length as a phenotyping trait by medial axis-based skeletonization processes. RootForce [18] is one of the latest developments in semi-automatic segmentation, also based on RootTine. An initial phase is required to tune its parameters on a few samples. Once these parameters are adapted to the pot, soil and root system, the same set of parameters can be used for a complete time-series experiment. For this reason, RootForce is described as especially designed for high-throughput time series of X-ray CT data. It is able to extract more traits, for instance root volume and root growth angles, by Reeb graph-based skeletonization. RooTrack is another tool for not only root segmentation but also visual object tracking by identifying boundaries in image cross-sections. The main advantage is detecting and differentiating multiple roots from different plants in the same image. Still, this methodology is not yet applicable to HTP or automated procedures [19]. These tools mainly tackle root segmentation from X-ray data as the primary challenge. The focus of this paper is to model temporal digital twins of roots to quantify traits as well as to record the topological and hierarchical branching structure, after the segmentation from the soil is already done. To the best of our knowledge, no research has been done to parametrize root surfaces by geometric primitives or even to label their different branches using a volumetric model. In our methodology, the temporal analysis of roots is solved through skeleton extraction, while the spatial quantification is performed by a shape-fitting approach.
In this paper, we propose a spatial–temporal root architectural model from digital twins obtained by X-ray CT (computed tomography). Values of essential root traits were extracted as phenotypic data to quantitatively assist growth analyses and RSA description. The proposed methodology consists of two phases. In the first, we compute a curve-skeleton as a powerful descriptor for analyzing root system networks. We use a constrained Laplacian smoothing algorithm which operates directly on the mesh domain, followed by a connectivity surgery and embedding refinement process. As a result, this skeletal structure controls the registration process in temporal series. Secondly, the root system is robustly reconstructed by generating a flexible cylinder model. This non-linear optimization problem is solved by an iterative nonlinear least-squares solution. The full pipeline is optimized for quantifying accurate and robust results, allowing high-throughput root phenotyping using X-ray CT systems.
The 3D digital twin of the root system is obtained by X-ray CT. This technology allows us to non-destructively, comprehensively and accurately monitor the exact same plant root even at different points in time under controlled conditions. Our system scans pots with photon energies in the 225 keV range, and is able to scan pots 20 cm in height in less than 7 min. The resulting voxel size is set at 200 μm. The Focus-Detector distance is 800 mm. Both X-ray detector and X-ray tube are fixed within the system. A pot rotation stage allows 360° for the measurement. A vertical translation axis optionally extends the vertical field of view. Table 1 summarizes the rest of the technical specifications of the system. The system manufacturer is Fraunhofer IIS (Fraunhofer Development Center X-ray Technology, Germany).
Table 1 Technical specifications of the X-ray CT system
The experiment was performed at the Ag Alumni Seeds Phenotyping Facility (AAPF) at Purdue University in West Lafayette (IN, USA). In this facility, plants are transported in standard carriers to the X-ray CT system from the loading position by a mechanical conveyor belt. During the summer of 2019, root systems from three tomato plants at two different times and three corn plants at three different times were scanned. The pots were circular with a 180 mm diameter and 200 mm height for tomato and 400 mm height for corn. The pot medium is sifted sphagnum peat moss with moisture below 20% relative humidity. Table 2 summarizes the main characteristics of the digital twins of the roots used in this study (additional file as Data S1: dataset), whereas Fig. 1 shows their visualizations.
Table 2 Root digital twin dataset
Visualization of the root samples from tomato (a) and corn (b) used in this study
In this study, we developed an approach that can be used to enable high-throughput root phenotyping tasks. It includes a 4D structural root architectural modeling from digital twins. These digital twins were acquired by X-ray CT and segmented using the RootForce tool [18]. The RootForce approach is based on Frangi's vesselness method [20], extended for the semi-automatic segmentation of roots. Beforehand, a thresholding is applied to select a range of attenuation coefficients according to the type of soil and plants used in the experiment. Then, the Hessian-based Frangi vesselness filter is used for small root detection while larger roots are detected based on their 3D homogeneity using a 3D Gaussian filter. The small and large vessel structures are then merged using upper and lower merging thresholds. Here, the value range of the attenuation coefficient was 0.07 to 0.19 with root diameters of 0.4, 0.5, 0.6, 1.0 and 1.2 mm. The upper and lower thresholds of the merging parameters were respectively 25 and 1000 for the corn roots and respectively 100 and 1000 for the tomato roots. A size filter was used to eliminate unconnected fragments with a minimum volume of 25 mm3 for the corn roots and 50 mm3 for the tomato roots. The minimum root diameter that can be segmented with RootForce is about 2.5 voxels in diameter. Here, using a resolution of 200 μm cubic voxel size for the reconstruction, the minimum detectable root diameter is approximately 0.5 mm. Once the segmentation process is done, we apply our modeling approach. It consists of two clearly differentiated phases: the computation of the curve-skeleton, which serves for the registration of temporal series, and the RSA cylindrical model of the digital twin for spatial analysis. Figure 2 summarizes the workflow to follow.
Workflow of the methodology proposed: from the digital twin of the root, first we extract the curve-skeleton to register temporal series of the same root and secondly, we spatially model the RSA by a flexible cylinder fitting
Skeletonization
Basically, the curve-skeleton is a structure that extracts the volume and topological characteristics of the model. We select a robust skeleton extraction method via Laplacian-based contraction [14, 15] based on the characteristics of the model: the algorithm works directly on the mesh, without a resampled volumetric representation. By this means, it is pose-insensitive and invariant to global rotation. As a potential limitation of this skeleton algorithm, it only works for closed mesh models with manifold connectivity since the Laplacian contraction algorithm operates on every individual vertex. In order to close the mesh, we follow the procedure already explained by [21], which incorporates several automatic and sequential tasks: (i) filling of holes through algorithms based on radial basis function interpolators [22]; (ii) repairing of meshing gaps by threshold distance algorithms [23]; (iii) removing of topological noise, allowing the mesh to be re-triangulated locally [24]; (iv) removing of topological and geometric noise by anti-aliased Laplacian filters [25]. Once the mesh is closed, the skeleton extraction is applied. Firstly, the method contracts the mesh geometry into a zero-volume skeletal shape. Details and noise are removed by applying an iterative Laplacian smoothing that tightly moves all the vertices along their curvature normal directions. After each iteration, a connectivity process is carried out, removing all the collapsed faces from the degenerated mesh until no triangles exist. The key of this step is to sensibly control the contraction procedure so that it leads to a collapsed mesh with sufficient skeletal nodes to maintain an acceptable correspondence between the skeleton and the original geometry. As a consequence, the contraction does not alter the mesh connectivity and retains the key features, guaranteeing that the skeleton is homotopic to the original mesh. Next, we describe a process that moves each skeletal node to the center of mass of its local mesh region in order to refine the skeleton's geometric embedding.
This skeletal structure drives the registration process in temporal series. Thus, we can automatically perform a growth analysis of the RSA, quantified by the elongation rate as a trait. To register temporal series, Principal Component Analysis (PCA) is performed [26]. In general, the principal components are eigenvectors of the data's covariance matrix. More specifically, this statistical analysis uses the first and second moments of the curve-skeleton, resulting in three orthogonal vectors centered on its center of gravity. The PCA summarizes the distribution of the lines along the three dimensions and models the principal directions and magnitudes of the curve-skeleton distribution around the center of gravity. Thereby, the registration of temporal series is carried out by overlapping the principal component axes. The elongation rate is measured in the first principal direction.
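A minimal sketch of this PCA-based alignment, assuming 'skel_t1' and 'skel_t2' are n x 3 matrices of skeleton node coordinates at two time points (illustrative only, not the authors' implementation):

pca_align <- function(xyz) {
  centered <- scale(xyz, center = TRUE, scale = FALSE)   # centre on the centre of gravity
  centered %*% prcomp(centered)$rotation                 # express nodes in the PCA frame
}

skel_t1 <- matrix(rnorm(300), ncol = 3)                  # toy stand-ins
skel_t2 <- matrix(rnorm(360), ncol = 3)
a1 <- pca_align(skel_t1)
a2 <- pca_align(skel_t2)
# Elongation along the first principal direction between the two time points:
diff(range(a2[, 1])) - diff(range(a1[, 1]))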
RSA model
We use a group of geometric primitives to model the surface and topology of the root. The circular cylinder is the simplest primitive. For natural entities such as trees, circular fitting is the most robust primitive in the sense of a well-bounded volumetric modelling error, even with noise and gaps in the data, compared with more complex primitives, which are more sensitive to data quality [16]. Thereby, our modeling is based on circular cylinder fitting as an optimal parametrization to provide significant traits of the RSA, such as diameters, specific surfaces and volumes of the main root and ramifications. We use the approach of [27], in which point clouds of individual trees acquired by TLS (Terrestrial LiDAR Scanner) are modeled by a cylindrical parametrization. This process is scale independent because only neighbor relations and relative sizes are needed. To apply this approach, the 3D mesh of the root digital twin is transformed into a regularized point cloud [28]. For that, randomly sampled points over the mesh are extracted by fixing a desired density (5 points/mm2), and a restored point cloud is obtained. Subsequently, we apply dart-throwing Poisson disk sampling to the point cloud to make the points more uniform by culling those that are close to a randomly selected point [29]. In this step, a threshold of 1 mm on the Euclidean distance between points is set. These values are chosen according to the level of detail required in the final cylindrical model and the scanner's accuracy for these specific samples. After this process, a significant reduction in the number of points is achieved because the Poisson subsampling approach considers the local point distribution, retaining the key elements of the structure.
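A small sketch of the dart-throwing culling described above, assuming the mesh has already been sampled into an (N, 3) point array; the 1 mm radius comes from the text, everything else is illustrative.

import numpy as np
from scipy.spatial import cKDTree

def poisson_disk_subsample(points, radius=1.0, seed=0):
    """Dart throwing: keep random points, cull everything within `radius` of them."""
    rng = np.random.default_rng(seed)
    tree = cKDTree(points)
    alive = np.ones(len(points), dtype=bool)
    kept = []
    for idx in rng.permutation(len(points)):
        if not alive[idx]:
            continue
        kept.append(idx)
        # every point closer than `radius` to the kept point is discarded
        for neighbor in tree.query_ball_point(points[idx], r=radius):
            alive[neighbor] = False
    return points[kept]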
Once the regularized point cloud is achieved, the cylinder fitting is applied. The process has two consecutive phases: first, the point cloud is segmented into the main root and its ramifications, and secondly, the surface and volume of the segments are robustly fitted with geometric primitives, specifically cylinders. This non-linear optimization problem is solved by an iterative nonlinear least-squares solution. The topological distribution of the RSA is also recorded. Mathematically, the model is built by a local approach in which the point cloud is covered with small sets corresponding to connected surface patches on the root surface. In that way, the RSA and size properties, such as volume and branch size distributions, can be approximated. The method uses a cover set approach [27], where the point cloud is partitioned into small sets that correspond to small patches on the surface of the model. These sets form the smallest unit we use to segment the point cloud into the main root and individual branches. The generation process produces a Voronoi partition of the point cloud so that the cell size is controlled. The cover set value is calculated by an iterative approach, where the final value varies from 0.75 to 3 cm.
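The sketch below shows one way such a cylinder fit can be posed as a nonlinear least-squares problem; the parametrization (a point on the axis, two angles for the axis direction, and a radius) is a common choice and not necessarily the exact formulation used in [27].

import numpy as np
from scipy.optimize import least_squares

def _axis_from_angles(theta, phi):
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

def cylinder_residuals(params, points):
    """Distance of each point to the cylinder surface."""
    cx, cy, cz, theta, phi, radius = params
    rel = points - np.array([cx, cy, cz])
    axis = _axis_from_angles(theta, phi)
    axial = rel @ axis
    radial = np.linalg.norm(rel - np.outer(axial, axis), axis=1)
    return radial - radius

def fit_cylinder(points):
    x0 = [*points.mean(axis=0), 0.0, 0.0, 1.0]      # crude initial guess
    lower = [-np.inf] * 5 + [0.0]                   # keep the radius non-negative
    result = least_squares(cylinder_residuals, x0, args=(points,), bounds=(lower, np.inf))
    return result.x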
Experimental results
All the experimental results reported below were obtained on a 3.6-GHz desktop computer with an Intel Core i7 CPU and 32 GB RAM. First, the digital twin of the root obtained by X-ray CT must be closed and repaired before our approach can be applied, as Sect. 2.2.1 explains. Once the mesh is closed, the skeleton extraction and the RSA model pipelines are run. The code for the RSA model saves (i) general values of the entire root, such as total volume, height, length, number and order of branches, and the mean and maximum diameter of the crown; (ii) a branching map of the root that includes the topological relation of each ramification; (iii) the volume, length, angle, height, azimuth and zenith of each branch; and (iv) the length, diameter, angle and coordinates of all the cylinders that belong to each branch. Figures 3 and 4 show both results for a tomato and a corn root sample. In the zoom window, we can appreciate the complexity and accuracy of the model. In our RSA model, each branch is labeled in a unique color and quantified. This is a brand-new solution that is able to quantify branching patterns, which are critical for biologists to understand water and nutrient uptake. As an additional file, we provide a video that shows the segmented root, the skeleton and the RSA model, for tomato and for corn (Additional file 2: Video S2: 4D Structural Root Architecture Modeling).
Tomato root sample with a zoom window: digital twin by X-ray CT system (a), curve-skeleton extraction based on a constrained Laplacian smoothing algorithm, where the mesh is in orange and the skeleton is in red (b), and the RSA model based on a flexible cylinder fitting, where each ramification is in a different color (c)
Corn root sample with a zoom window: digital twin by X-ray CT system (a), curve-skeleton extraction based on a constrained Laplacian smoothing algorithm, where the mesh is in orange and the skeleton is in red (b), and the RSA model based on a flexible cylinder fitting, where each branch is in a different color (c)
From the RSA model, different traits are extracted. Table 3 summarizes the general values of the entire roots.
Table 3 General values of the RSA model for each sample (volume, volume of the main root, total length, length of the main root, number of branches, maximum order of ramifications and maximum and mean crown diameter)
Validation results and discussion
The volume of each digital twin of the root is measured with the CloudCompare software [30], which computes the volume within the solid mesh. Moreover, the number of branches in each digital twin is estimated by visual analysis. Table 4 shows several metrics comparing these two parameters between the digital twin and the cylindrical model of each root. In particular, the root mean square error (RMSE), the relative RMSE (RRMSE), the average systematic error (ASE), and the mean percent standard error (MPSE) were calculated as follows:
$$\text{RMSE} = \sqrt{\frac{\sum_{i=1}^{n}\left(y_{\text{model}}^{i} - y_{\text{dig twin}}^{i}\right)^{2}}{n}}$$
$$\text{RRMSE} = 100 \cdot \frac{\text{RMSE}}{\bar{y}_{\text{dig twin}}}$$
$$\text{ASE} = \frac{100}{n}\sum_{i=1}^{n}\left(y_{\text{model}}^{i} - y_{\text{dig twin}}^{i}\right)\big/\,y_{\text{dig twin}}^{i}$$
$$\text{MPSE} = \frac{100}{n}\sum_{i=1}^{n}\left|\left(y_{\text{model}}^{i} - y_{\text{dig twin}}^{i}\right)\big/\,y_{\text{dig twin}}^{i}\right|$$
Table 4 Statistic metrics of number of branches and volume where RMSE is the root mean square error, RRMSE is the relative RMSE, ASE is the average systematic error, and MPSE is the mean percent standard error
where $y_{\text{model}}^{i}$ is the parameter estimated from the model for the $i$th scan, $y_{\text{dig twin}}^{i}$ is the measured parameter from the digital twin for the $i$th scan, $\bar{y}_{\text{dig twin}}$ is the mean of the measured parameter over the digital twins, and $n$ is the number of scans.
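These four metrics are straightforward to compute; a small helper (array names chosen only for this illustration):

import numpy as np

def error_metrics(y_model, y_dig_twin):
    """RMSE, relative RMSE, average systematic error, mean percent standard error."""
    y_model = np.asarray(y_model, dtype=float)
    y_dig_twin = np.asarray(y_dig_twin, dtype=float)
    diff = y_model - y_dig_twin
    rmse = np.sqrt(np.mean(diff ** 2))
    rrmse = 100.0 * rmse / y_dig_twin.mean()
    ase = 100.0 * np.mean(diff / y_dig_twin)
    mpse = 100.0 * np.mean(np.abs(diff / y_dig_twin))
    return rmse, rrmse, ase, mpse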
From the results of this table, we can see that our model mostly detects branches in excess for both tomato and corn, although underestimation of the number of branches occurred more often for tomato than for corn. The absolute volume discrepancies are larger for corn, whereas the relative volume error is larger for tomato; in both cases the volume is always underestimated. The errors in the number of detected branches could have been caused by segmentation problems. Figure 5a represents a part of the RSA model from the ID 121 tomato scan. Each detected branch is shown in a distinct color. We can see that losing the tracking of a branch can generate new false branches; this issue is highlighted with a red circle in the figure. Another common type of error is the volume discrepancy between the digital twin and the RSA model, mainly generated when the shape of the branch is not cylindrical and when the diameter of a segmented branch does not decrease along its length, a topological property that is used in the branch segmentation of the model. Figure 5b represents a part of the ID 223 corn scan, where the digital twin is represented by points and the model by polyhedrons in the same color. The shape of the branches can generate errors when fitting cylindrical solids.
Errors in the number of detected branches due to loss of tracking in the segmentation process; branches are shown in different colors, with a red circle marking the issue (a). Volume discrepancies between the digital twin, represented by dense points, and the RSA model, represented by polyhedrons in similar colors (b)
The relative volume of the RSA model is compared against the relative volume of the digital twin measured with the CloudCompare software [30]. We split the digital twins into 10, 20, 30, 40, 50, 60, 70, 80 and 90% of the total volume (starting from the top) and ran the model on these parts to evaluate its performance, which reached an R² of 0.82 for tomato and 0.74 for corn with P < 0.001, as Fig. 6 displays. When the tomato and corn measurements are pooled, the R² improves to 0.83.
Volume correlation between the RSA model and the digital twin for each scan sample: tomato (a) and corn (b)
Furthermore, this methodology is able to temporally analyse the root dynamics through a registration process based on a PCA of the skeleton of the root mesh. Figure 7a shows the same tomato root sample registered at two different times (July 2nd and 18th, 2019), a span of 16 days. The elongation rate is mapped in Fig. 7b, where the maximum value is 2.58 cm on the upper-right ramification. Figure 8 illustrates the same temporal sample, where the convex hull is computed individually (Fig. 8a, b) as well as its variation in time (Fig. 8c). The convex hull volume is 229.87 cm3 for Fig. 8a and 519.76 cm3 for Fig. 8b. At this point, it is worth noting that the PCA results are affected by the segmentation process: the better the segmentation, the more accurate the PCA results.
Tomato root sample in July 2nd and 18th, 2019, registered by a PCA of the skeleton (a) and the elongation rate mapped in 3D (b)
Mesh of the tomato root sample from July 2nd (a) and 18th (b), together with its convex hull. Both convex hulls registered and superimposed in solid and transparent faces (c)
Table 5 recaps the maximum and mean value of the elongation rate for the temporal series of each sample and the convex hull volume reached by each root.
Table 5 Values of the volume of the convex hull and maximum and mean elongation
To sum up, the developed pipeline aims to automatically extract phenotypic data of the RSA from digital twins obtained by non-invasive X-ray CT. This pipeline is able to analyze both spatial and temporal root dynamics. As potential advantages, we find this methodology fully automatic, fast, precise and sufficiently robust to provide scalability for high-throughput root phenotyping.
Determining the contribution of structural root traits to crop performance is vital to overcome climate change, environmental degradation and food insecurity. In addition, structural root traits that are accurately extracted from X-ray data will enhance our understanding of the relationship between the plant phenome and plant function in ecosystems, which is the end goal of functional phenomics [31]. Moreover, this computationally low-cost workflow will potentially increase the usability of imaging technologies for high-throughput phenotyping regarding genetic mapping and phenotypic selection in breeding programs.
The dataset supporting the conclusions of this article is included within the article (Additional file 1: Data S1: dataset and Additional file 2: Video S2: 4D Structural Root Architecture Modeling).
Postma JA, Kuppe C, Owen MR, Mellor N, Griffiths M, Bennett MJ, et al. OpenSimRoot: widening the scope and application of root architectural models. New Phytol. 2017;215(3):1274–86.
Seethepalli A, Guo H, Liu X, Griffiths M, Almtarfi H, Li Z, et al. Rhizovision crown: an integrated hardware and software platform for root crown phenotyping. Plant Phenomics. 2020;2020:1–15.
Morris EC, Griffiths M, Golebiowska A, Mairhofer S, Burr-Hersey J, Goh T, et al. Shaping 3D root system architecture. Curr Biol. 2017;27(17):R919–30.
Nord EA, Lynch JP. Plant phenology: a critical controller of soil resource acquisition. J Exp Bot. 2009;60(7):1927–37.
Tracy SR, Nagel KA, Postma JA, Fassbender H, Wasson A, Watt M. Crop improvement from phenotyping roots: highlights reveal expanding opportunities. Trends Plant Sci. 2020;25(1):105–18.
Bucksch A, Burridge J, York LM, Das A, Nord E, Weitz JS, Lynch JP. Image-based high-throughput field phenotyping of crop roots. Plant Physiol. 2014;166(2):470–86.
Dowd T, McInturf S, Li M, Topp CN. Rated-M for mesocosm: allowing the multimodal analysis of mature root systems in 3D. Emerg Top Life Sci. 2021;5(2):249.
Liu S, Barrow CS, Hanlon M, Lynch JN, Bucksch A. DIRT/3D: 3D root phenotyping for field-grown maize (Zea mays). Plant Physiol. 2021;187(2):739–57. https://doi.org/10.1093/plphys/kiab311.
Jiang N, Floro E, Bray AL, Laws B, Duncan KE, Topp CN. Three-dimensional time-lapse analysis reveals multiscale relationships in maize root systems with contrasting architectures. Plant Cell. 2019;31(8):1708–22.
Atkinson JA, Pound MP, Bennett MJ, Wells DM. Uncovering the hidden half of plants using new advances in root phenotyping. Curr Opin Biotechnol. 2019;55:1–8.
Flavel RJ, Guppy CN, Tighe M, Watt M, McNeill A, Young IM. Non-destructive quantification of cereal roots in soil using high-resolution X-ray tomography. J Exp Bot. 2012;63(7):2503–11.
Metzner R, Eggert A, Dusschoten D, Pflugfelder D, Gerth S, Schurr U, Uhlmann N, Jahnke S. Direct comparison of MIR and X-ray CT technologies for 3D imaging of root systems in soil: potential challenges for root trait quantification. Plant Methods. 2015;11:17. https://doi.org/10.1186/s13007-015-0060-z.
Bucksch A. A practical introduction to skeletons for the plant sciences. Appl Plant Sci. 2014;2(8):1400005.
Au OKC, Tai CL, Chu HK, Cohen-Or D, Lee TY. Skeleton extraction by mesh contraction. ACM Trans Graph (TOG). 2008;27(3):1–10.
Cao J, Tagliasacchi A, Olson M, Zhang H, Su Z. Point cloud skeletons via laplacian based contraction. In: 2010 shape modeling international conference. IEEE; 2010. p. 187–197.
Markku Å, Raumonen P, Kaasalainen M, Casella E. Analysis of geometric primitives in quantitative structure models of tree stems. Remote Sens. 2015;7(4):4581–603.
Gao W, Schlüter S, Blaser SRGA, et al. A shape-based method for automatic and rapid segmentation of roots in soil from X-ray computed tomography images: Rootine. Plant Soil. 2019;441:643–55. https://doi.org/10.1007/s11104-019-04053-6.
Gerth S, Claußen J, Eggert A, Wörlein N, Waininger M, Wittenberg T, Uhlmann N. Semiautomated 3D root segmentation and evaluation based on x-ray CT imagery. Plant Phenomics. 2021;2021:1–13.
Mairhofer S, Johnson J, Sturrock CJ, et al. Visual tracking for the recovery of multiple interacting plant root systems from X-ray μCT images. Mach Vis Appl. 2016;27:721–34. https://doi.org/10.1007/s00138-015-0733-7.
Frangi AF, Niessen WJ, Vincken KL, Viergever MA. Multiscale vessel enhancement filtering. In: International conference on medical image computing and computer-assisted intervention. Berlin, Heidelberg: Springer; 1998. p. 130–137.
Herrero-Huerta M, González-Aguilera D, Rodriguez-Gonzalvez P, Hernández-López D. Vineyard yield estimation by automatic 3D bunch modelling in field conditions. Comput Electron Agric. 2015;110:17–26.
Branch D, Benetti S, Kasen D, Baron E, Jeffery DJ, Hatano K, et al. Direct analysis of spectra of type Ib supernovae. Astrophys J. 2002;566(2):1005.
Ju T. Robust repair of polygonal models. ACM Trans Graph (TOG). 2004;23(3):888–95.
Guskov I, Wood ZJ. Topological noise removal. In: 2001 graphics interface proceedings. Ottawa, Canada, 19; 2001
Fan YZ, Tam BS, Zhou J. Maximizing spectral radius of unoriented Laplacian matrix over bicyclic graphs of a given order. Linear Multilinear Algebra. 2008;56(4):381–97.
Russ T, Boehnen C, Peters T. 3D face recognition using 3D Alignment for PCA. In: IEEE Computer Society conference on computer vision and pattern recognition (CVPR'06), New York, NY, USA; 2006. p. 1391–1398. Doi: https://doi.org/10.1109/CVPR.2006.13.
Raumonen P, Kaasalainen M, Åkerblom M, Kaasalainen S, Kaartinen H, Vastaranta M, et al. Fast automatic precision tree models from terrestrial laser scanner data. Remote Sens. 2013;5(2):491–520.
Herrero-Huerta M, Bucksch A, Puttonen E, Rainey KM. Canopy roughness: a new phenotypic trait to estimate above-ground biomass from unmanned aerial system. Plant Phenomics. 2020;2020:1–10.
Chambers B. Performing poisson sampling of point clouds using dart throwing, 2013, June 2020, https://pdal.io/tutorial/sampling/index.html
CloudCompare (version 2.10) [GPL software]. 2021. www.cloudcompare.org. Accessed 05 Mar 2021
York LM. Functional phenomics: an emerging field integrating high-throughput phenotyping, physiology, and bioinformatics. J Exp Bot. 2019;70(2):379–86.
The authors would like to thank Chris Hoagland for his collaboration during the experimental phase of this research.
V.M. and A.I. were funded by Purdue University, the Foundation for Food and Agriculture Research (New Innovator Award), and Hatch Funds (#IND011293).
Institute for Plant Sciences, College of Agriculture, Purdue University, West Lafayette, IN, USA
Monica Herrero-Huerta, Augusto M. Souza, Mitchell R. Tuinstra & Yang Yang
Department of Botany and Plant Pathology, Purdue University, West Lafayette, IN, USA
Valerian Meline & Anjali S. Iyer-Pascuzzi
MH conceived the idea, developed the data analysis pipelines and software, performed the data analysis and visualization, and wrote the manuscript; VM, AI and AS contributed to writing the manuscript. VM, AI and YY contributed to the method development and data analysis; MT and YY supervised the research and edited the manuscript. All authors have read and agreed to the published version of the manuscript.
Correspondence to Monica Herrero-Huerta.
Additional file 2. 4D Structural Root Architecture Modeling.
Dataset.
Herrero-Huerta, M., Meline, V., Iyer-Pascuzzi, A.S. et al. 4D Structural root architecture modeling from digital twins by X-Ray Computed Tomography. Plant Methods 17, 123 (2021). https://doi.org/10.1186/s13007-021-00819-1
Keywords: Proximal sensing, Root system architecture (RSA), X-ray CT (computed tomography)
How to prove that $K =\lim \limits_{n \to \infty}\left( \prod \limits_{k=1}^{n}a_k\right)^{1/n}\approx2.6854520010$?
I was going through a list of important mathematical constants when I saw Khinchin's constant.
It said that :
If a real number $r$ is written as a simple continued fraction :
$$r=a_0+\dfrac{1}{a_1+\dfrac{1}{a_2+\dfrac{1}{a_3+\dots}}}$$, where $a_k$ are natural numbers $\forall \,\,k$, then $\lim \limits_{n \to \infty} GM(a_1,a_2,\dots,a_n )= \left(\lim \limits_{n \to \infty} \prod \limits_{k=1}^{n}a_k\right)^{1/n}$ exists and is a constant $K \approx 2.6854520010$, except for a set of measure $0$.
The first obvious question is why the value $a_0$ is not included in the geometric mean. I tried playing around with the terms and juggling them, but was unable to compute the limit. Also, is it necessary that $r$ can be written in the form of a continued fraction?
Thanks in Advance ! :-)
calculus limits continued-fractions constants
$\begingroup$ Wow, that'd be pretty fascinating, if true! $\endgroup$ – Shalop Feb 21 '17 at 6:18
$\begingroup$ $a_0$ is not included in the Geometric mean because its expected value is infinity. $\endgroup$ – guest Feb 21 '17 at 6:21
$\begingroup$ In the title you have, correctly, an exponent $1/n$, but in the body, you don't. Whether $a_0$ is included has no effect on the limit, except including it would mess things up if it were zero, so it's easier to leave it out. Every real number can be written as a continued fraction. The theorem is not easy to prove, the limit is not easy to compute except in some cases where it doesn't equal $K$ (but that's a set of measure zero). $\endgroup$ – Gerry Myerson Feb 21 '17 at 6:25
$\begingroup$ $a_0$ is not included because it is $0$ on a set of positive measure. $\endgroup$ – Jonas Meyer Feb 21 '17 at 6:41
$\begingroup$ @Jonas, see Bailey, David H.; Borwein, Jonathan M.; Crandall, Richard E.; On the Khintchine constant, Math. Comp. 66 (1997), no. 217, 417–431, MR1377659 (97c:11119), freely available on the American Math Society website, especially section 4. $\endgroup$ – Gerry Myerson Feb 28 '17 at 2:22
This answer won't shed much light on the theorem or its proof, but is aimed to answer your specific questions about the context of the statement. Petch Puttichai points out in a comment that there is a proof sketch on the Wikipedia page for Khinchin's constant.
The number $a_0$ is the floor of $r$. When $0\leq r<1$, $a_0=0$. If $a_0$ were included in the geometric mean, it would make it zero on the interval $[0,1)$, which has positive measure, and it would have no effect when $r\geq 1$ because $\lim\limits_{n\to\infty}c^{1/n}=1$ if $c>0$. (And if $r<0$ you would have to worry about taking $n^\text{th}$ roots of a negative number.)
Every real number $r$ has a simple continued fraction expansion with natural number $a_k$s for $k\geq 1$ ($a_0$ might be $0$ or a negative integer). It is a finite expansion if $r$ is rational, but the set of rational numbers has measure $0$, so they can be ignored here. Otherwise it is infinite, and you can compute coefficients by repeated subtracting, taking the reciprocal, and taking the floor.
$ \begin{align*} a_0&=\lfloor r\rfloor,\\ a_1&=\left\lfloor \dfrac{1}{r-a_0}\right\rfloor,\\ a_2&=\left\lfloor\dfrac{1}{\dfrac{1}{r-a_0}-a_1} \right\rfloor,\\ a_3&=\left\lfloor\dfrac{1}{\dfrac{1}{\dfrac{1}{r-a_0}-a_1}-a_2}\right\rfloor, \end{align*} $
and so on. For example, take $r=\pi$: Then $r = 3.14...$, so $a_0=3$. Then $\dfrac{1}{r-3}= 7.06...$, so $a_1=7$. Then $\dfrac{1}{7.06... - 7}= 15.99...$, so $a_2=15$. One more: $\dfrac{1}{15.99... -15} = 1.003...$, so $a_3=1$. This gives a sequence of approximations of $\pi$ starting with $3$, $3+\frac17=\frac{22}{7}$, $3+\frac{1}{7+\frac1{15}} = \frac{333}{106}$, and $3+\frac{1}{7+\frac{1}{15+\frac{1}{1}}}=\frac{355}{113}$, but continuing on in an infinite sequence converging to $\pi$.
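This floor-and-reciprocal recursion is easy to run; here is a short sketch that works in exact rational arithmetic on the double-precision value of $\pi$ (so the first dozen coefficients are reliable, after which they describe the float rather than $\pi$ itself):

from fractions import Fraction
import math

def cf_coefficients(x, n):
    """First n simple-continued-fraction coefficients of (the exact rational value of) x."""
    x = Fraction(x)
    coeffs = []
    for _ in range(n):
        a = x.numerator // x.denominator     # floor
        coeffs.append(a)
        x -= a
        if x == 0:
            break
        x = 1 / x
    return coeffs

coeffs = cf_coefficients(math.pi, 12)
print(coeffs)      # [3, 7, 15, 1, 292, 1, 1, 1, 2, 1, 3, 1]
# geometric mean of a_1..a_n; it approaches Khinchin's constant only slowly
print(math.prod(coeffs[1:]) ** (1.0 / (len(coeffs) - 1)))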
The Wikipedia article on continued fractions summarizes many results about them, including results that imply the sequence of "convergents" always converges to the number in question.
As for the result in question here: I'm not credible on this topic, and your question first brought it to my attention, but commenters and links indicate the following:
"The theorem is not easy to prove, the limit is not easy to compute except in some cases where it doesn't equal $K$ (but that's a set of measure zero)." - Gerry Myerson
"Although almost all numbers satisfy this property, it has not been proven for any real number not specifically constructed for the purpose." -Wikipedia
Jonas Meyer
January 2016, 21(1): 313-335. doi: 10.3934/dcdsb.2016.21.313
Dynamics of harmful algae with seasonal temperature variations in the cove-main lake
Feng-Bin Wang 1, Sze-Bi Hsu 2 and Wendi Wang 3
Department of Natural Science in the Center for General Education, Chang Gung University, Kwei-Shan, Taoyuan 333
Department of Mathematics, National Tsing Hua University, Hsinchu 300
School of Mathematics and Statistics, Southwest University, Chongqing, 400715
Received April 2015 Revised July 2015 Published November 2015
In this paper, we investigate two-vessel gradostat models describing the dynamics of harmful algae with seasonal temperature variations, in which one vessel represents a small cove connected to a larger lake. We first define the basic reproduction number for the model system, and then show that the trivial periodic state is globally asymptotically stable and algae are washed out eventually if the basic reproduction number is less than unity, while there exists at least one positive periodic state and algal blooms occur when it is greater than unity. There are several types of dissolved-toxin production, related to the algal growth rate and to nutrient limitation, respectively. For the system with a specific toxin production, the global attractivity of the positive periodic steady-state solution can be established. Numerical simulations of the basic reproduction number show that seasonality plays an important role in the persistence of harmful algae.
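The threshold stated here can be checked numerically in the standard way for periodic compartmental systems: linearize about the trivial (algae-free) periodic state, integrate the fundamental matrix over one period, and compare the spectral radius of the resulting monodromy matrix with unity. The sketch below illustrates that generic recipe on a made-up two-vessel linear system with sinusoidal seasonal forcing; the coefficients and the toy model are placeholders, not the cove-main lake model of this paper.

import numpy as np
from scipy.integrate import solve_ivp

T = 365.0                                   # period of the seasonal forcing (days)

def A(t):
    """Hypothetical linearized growth/exchange matrix for two connected vessels."""
    growth = 0.04 * (1.0 + 0.8 * np.cos(2 * np.pi * t / T))
    return np.array([[growth - 0.05, 0.01],
                     [0.02, growth - 0.06]])

def rhs(t, phi_flat):
    phi = phi_flat.reshape(2, 2)
    return (A(t) @ phi).ravel()

# integrate the fundamental matrix over one period, starting from the identity
sol = solve_ivp(rhs, (0.0, T), np.eye(2).ravel(), rtol=1e-9, atol=1e-12)
monodromy = sol.y[:, -1].reshape(2, 2)
rho = max(abs(np.linalg.eigvals(monodromy)))
print("spectral radius:", rho, "->", "persistence" if rho > 1 else "washout")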
Keywords: threshold dynamics, global attractivity, harmful algae, seasonal variations.
Mathematics Subject Classification: Primary: 34C12, 34D20; Secondary: 34D2.
Citation: Feng-Bin Wang, Sze-Bi Hsu, Wendi Wang. Dynamics of harmful algae with seasonal temperature variations in the cove-main lake. Discrete & Continuous Dynamical Systems - B, 2016, 21 (1) : 313-335. doi: 10.3934/dcdsb.2016.21.313
Exact analytical matrix inversion of sparse 100x100 matrices in C++
I need to invert a matrix. Of course, I'm not the first person in this situation, and I know that there's a wealth of powerful libraries out there, of which I only know a couple.
That being said, there is a twist: I know that matrix inversion is (in general) numerically unstable, and I need a solution that is absolutely failsafe, if need be at the cost of large amounts of computing time.
The matrices in question will be relatively moderate in size (not exceeding 100x100 entries), and it's really important to get it right. If the computation takes a couple of minutes on a regular desktop computer, that is completely acceptable. The reason for this is that (a) the matrix inversion will only happen once in the execution of the program, and (b) the analytic solution will only be used as an optional replacement for the already implemented numeric one (mainly for running cross-checks).
I do not think that it matters for the issue at hand, but the matrices will be relatively sparse (10-50% of entries different from zero), and relatively well-behaved (entries in the range 0.001-1000 expected, but nearly identical rows are highly probable).
The solution will be used in C++ code that will be distributed freely, so it is necessary to use either an open (BSD, LGPL, ...) library that can be distributed alongside what I'm doing to a small audience of scientific users (or is ideally already installed on most linux systems), or to implement something from scratch. In that case, a clean, short and less convoluted implementation is to be given preference over an optimized one.
I'm happy about:
pointers to existing libraries that I could use for this
names or outlines of algorithms that could be useful
additional ideas or considerations that will help me pursue this endeavour (i.e. will I need to define a custom arbitrary-precision-float type?)
For illustration purposes, I have added an 18x18 matrix and its inverse obtained using an unsatisfactory numeric inversion.
Here's the input matrix:
$\begin{matrix} 0 & 0 & 0 & 0 & 0 & 0.01513 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0.003782 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0.003782 & 0.003782 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0.003782 \\ 0 & 0 & 0 & 0.0625 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0.0625 & 0 & 0.003782 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0.01538 & 0 \\ 0 & 0 & 0 & 0.0625 & 0.003782 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0.01537 & 0 & 0 \\ 0 & 0 & 0.003782 & 0 & 0 & 0.003782 & 0 & 0 & 0.003782 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0.003782 & 0 & 0 & 0.003782 & 0 & 0 & 0.003782 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0.003782 & 0.003782 & 0 & 0.003782 & 0.003782 & 0 & 0.003782 & 0.003782 & 0 & 0 & 0.003782 & 0 & 0 & 0.003782 & 0 & 0 & 0.003782 \\ 0.0625 & 0 & 0 & 0.0625 & 0 & 0 & 0.0625 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0.0625 & 0 & 0.003782 & 0.0625 & 0 & 0.003782 & 0.0625 & 0 & 0.003782 & 0 & 0.01537 & 0 & 0 & 0.01537 & 0 & 0 & 0.01538 & 0 \\ 0.0625 & 0.003782 & 0 & 0.0625 & 0.003782 & 0 & 0.0625 & 0.003782 & 0 & 0.01537 & 0 & 0 & 0.01537 & 0 & 0 & 0.01537 & 0 & 0 \\ 0 & 0 & 0.003782 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0.01513 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0.003782 & 0.003782 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0.003782 & 0 & 0 & 0 \\ 0.25 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0.0625 & 0 & 0.003782 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0.01537 & 0 & 0 & 0 & 0 \\ 0.25 & 0.01513 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0.0615 & 0 & 0 & 0 & 0 & 0 \\ \end{matrix}$
Here's the numeric inverse:
$\begin{matrix} -0 & -0 & -0 & 0 & -0 & 0 & -0 & -0 & -0 & 0 & 0 & -0 & 0 & 0 & -0 & 4 & 0 & 0 \\ -0 & -0 & -0 & 0 & -0 & 0 & -0 & -0 & -0 & 0 & 0 & -0 & 0 & 66.1 & -0 & 0 & 0 & 0 \\ -0 & -0 & -0 & 0 & -0 & 0 & -0 & -0 & -0 & 0 & 0 & -0 & 264.4 & 0 & 0 & 0 & 0 & 0 \\ -0 & 0 & -0 & 16 & -0 & 0 & -0 & 2.935e^{-17} & -0 & -1.805e^{-15} & 0 & 0 & -2.037e^{-31} & -7.338e^{-18} & -1.907e^{-31} & 4.513e^{-16} & 3.944e^{-31} & 0 \\ -0 & 264.4 & -0 & 0 & -0 & 0 & -0 & -0 & -0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 66.1 & 0 & -0 & 0 & -0 & 0 & -0 & -0 & -0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & -0 & -16 & -0 & 0 & -0 & 0 & -0 & 16 & 0 & 0 & 1.835e^{-15} & -4.441e^{-16} & 1.718e^{-15} & -4 & -3.553e^{-15} & 0 \\ 0 & -264.4 & -0 & 0 & -0 & 0 & -0 & 264.4 & -0 & 0 & 0 & 0 & 0 & -66.1 & 0 & 0 & 0 & 0 \\ -66.1 & 0 & -0 & 0 & -0 & 0 & 264.4 & 0 & -0 & 0 & 0 & 0 & -264.4 & 0 & 0 & 0 & 0 & 0 \\ 0 & 65.04 & -0 & 65.04 & -0 & -65.04 & 0 & -65.04 & -0 & -65.04 & 0 & 65.04 & -7.338e^{-15} & 16.26 & -6.872e^{-15} & 16.26 & 1.421e^{-14} & -16.26 \\ 16.26 & 0 & -0 & 65.04 & -65.04 & 0 & -65.04 & 0 & -0 & -65.04 & 65.04 & 0 & 65.04 & 0 & 0 & 16.26 & -65.04 & 0 \\ 66.1 & 264.4 & -264.4 & 0 & -0 & 0 & -264.4 & -264.4 & 264.4 & 0 & 0 & 0 & 264.4 & 66.1 & -264.4 & 0 & 0 & 0 \\ 0 & 0 & -0 & 0 & -0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -16.26 & 0 & -16.26 & 0 & 16.26 \\ 0 & 0 & -0 & 0 & -0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -65.04 & -7.105e^{-15} & 1.421e^{-14} & -16.26 & 65.04 & 3.553e^{-15} \\ 0 & 0 & -0 & 0 & -0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -264.4 & -66.1 & 264.4 & 0 & 0 & 0 \\ 0 & -65.04 & -0 & -65.04 & -0 & 65.04 & 0 & -1.193e^{-16} & 0 & 7.338e^{-15} & 0 & 0 & 8.28e^{-31} & 2.983e^{-17} & 7.498e^{-31} & -1.835e^{-15} & -1.578e^{-30} & 0 \\ -16.26 & 0 & -0 & -65.04 & 65.04 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ -66.1 & -264.4 & 264.4 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{matrix}$
matrix computer-arithmetic inverse
David Ketcheson
carsten
$\begingroup$ Define "absolutely failsafe". Computers represent real numbers with finite precision approximations and therefore cannot give you an exact inverse in almost any case. How much error are you willing to tolerate? $\endgroup$ – Bill Barth Jul 1 '15 at 23:48
$\begingroup$ It is my understanding that for small matrices, symbolic calculations are also possible (e.g. like WolframAlpha does it). Of course, I don't know their algorithmics, but whatever they do should suffice. If you are looking for a numeric quantification, I would say that the worst tolerable deviation for extremely malformed input should be within a permill of the "true" value. Please note that this relative specification of uncertainty intentionally implies that 0-values should come out as 0. $\endgroup$ – carsten Jul 2 '15 at 0:01
$\begingroup$ If you are happy with an error of one part in one thousand, you should be fine with a 64-bit representation and almost any LU factorization with pivoting, assuming your matrix is square and has full rank. There are pathological cases where this could break down, but they aren't that likely. How nearly identical are these potential rows? $\endgroup$ – Bill Barth Jul 2 '15 at 0:09
$\begingroup$ The condition number of your posted matrix is only about 322 meaning you will only lose about two digits of precision during your matrix inversion. For double precision values this still leaves you with about 14 accurate digits. Once again: are you sure this is really necessary? If you really want to validate your results why not just check $M^{-1}M$? $\endgroup$ – Doug Lipinski Jul 2 '15 at 13:09
$\begingroup$ I personally agree that standard numeric precision is probably sufficient for almost all cases. However, I have been specifically asked to add this as a cross-check for users who don't trust the numerics... :-) $\endgroup$ – carsten Jul 3 '15 at 3:52
This may be more of a comment than an answer, but I can't comment.
Yes on arbitrary precision. No on symbolic inverse, because at some point, you have to numerically evaluate it, and it is likely to be extremely numerically unstable. Even for 6 by 6 matrix, the symbolic inverse is starting to get pretty ungainly.
Which leaves the main item. Why do you "need" the matrix inverse? Quite frequently, there is a numerically stable and accurate (and possibly faster) calculation which avoids the need to invert a matrix when a matrix inverse is "needed".
$\begingroup$ In the case at hand, the actual inversion of the matrix is unfortunately inevitable. The entries of the inverse matrix play the role of prefactors for a linear combination of algebraic expressions. The purpose of this to allow continuously transforming statistical distributions into one another, roughly speaking. $\endgroup$ – carsten Jul 3 '15 at 3:48
$\begingroup$ I don't understand your answer well enough to know whether actual inversion is necessary. As I wrote before, quite frequently the inverse is not needed when when someone believes it is needed. If you write out in detail how the inverse is used, we may be able to assess whether the inverse is truly needed, or whether there might be a better alternative to accomplish your end goal. Are you sure you can't accomplish your goal using linear equation solves? $\endgroup$ – Mark L. Stone Jul 3 '15 at 6:38
There are a number of symbolic algebra packages that will compute the exact matrix inverse; e.g. Sage, Mathematica, Maple, or Maxima. This becomes computationally expensive for large matrices, but there is a chance that a sparse $100 \times 100$ matrix would be within reach. I wouldn't rule it out without trying, since it's so easy to try.
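As a concrete illustration of how easy this is to try, SymPy returns the exact inverse when the entries are supplied as exact rationals (a tiny example with made-up entries, not the matrix from the question):

from sympy import Matrix, Rational, eye

# entries entered as exact rationals, so no floating-point rounding occurs
A = Matrix([[Rational(1, 16), 0, Rational(3782, 1000000)],
            [0, Rational(1513, 100000), 0],
            [Rational(1, 4), 0, Rational(1, 16)]])
A_inv = A.inv()              # exact inverse with rational entries
assert A_inv * A == eye(3)
print(A_inv)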
David Ketcheson
$\begingroup$ I'm specifically looking for a piece of c++ code or c++-linkable library to do this. $\endgroup$ – carsten Jul 3 '15 at 20:24
$\begingroup$ So, you don't need to invert one matrix, but to have the capability of inverting matrices. That's not clear in your question. If you can try a CAS, you can later generate C++ code. Otherwise, you can try something like GiNaC or Yacas. $\endgroup$ – nicoguaro♦ Jul 3 '15 at 21:20
You may want to check division-free Gaussian elimination as described, e.g., in this document titled "A simplified fraction-free integer Gauss elimination algorithm".
I have been told that these approaches are precise until you eventually evaluate the solution (up to loss of significance, which, however, you can monitor). They rather fail because of overflow, which you can handle using appropriate data types.
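If a full CAS dependency is unwanted, the same goal, elimination with no rounding error at all, can also be reached with exact rational arithmetic. The sketch below is a plain Gauss-Jordan inverse over Python's Fraction type, so it is exact but it is not the integer-preserving, fraction-free scheme of the linked paper.

from fractions import Fraction

def exact_inverse(A):
    """Gauss-Jordan inversion in exact rational arithmetic (with partial pivoting)."""
    n = len(A)
    # augmented matrix [A | I], all entries as Fractions
    M = [[Fraction(A[i][j]) for j in range(n)] +
         [Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda k: abs(M[k][col]))
        if M[pivot][col] == 0:
            raise ValueError("matrix is singular")
        M[col], M[pivot] = M[pivot], M[col]
        inv_p = 1 / M[col][col]
        M[col] = [x * inv_p for x in M[col]]
        for row in range(n):
            if row != col and M[row][col] != 0:
                factor = M[row][col]
                M[row] = [a - factor * b for a, b in zip(M[row], M[col])]
    return [line[n:] for line in M]

print(exact_inverse([[Fraction(1, 16), 0], [Fraction(1, 4), Fraction(1, 16)]]))
# [[16, 0], [-64, 16]]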
Special and General Relativity
B Why do things in free fall accelerate?
Thread starter Z3kr0m
jbriggs444
Science Advisor
Homework Helper
Ibix said:
In stronger spacetime curvature you need more force to stay at the same altitude
This is the second time I've seen this picture presented in this thread. I believe it to be an incorrect over-simplification.
Spacetime curvature is essentially equivalent to tidal gravity -- the rate at which gravitational acceleration changes with position. If you want to compare an acceleration here against a state of rest over there, what matters is, roughly speaking, the integral of curvature over a path.
The local acceleration of gravity is independent of the local spacetime curvature.
My mistaken impression has been corrected.
Nugatory
jbriggs444 said:
If you want to compare an acceleration here against a state of rest over there, what matters is, roughly speaking, the integral of curvature over a path.
But here we are comparing the deviation between the geodesic worldline of an object in freefall and the non-geodesic worldline of an object hovering with constant Schwarzschild ##r## coordinate at the point where the two worldlines intersect. That's a measure of the force required to hold the hovering object on its worldline, and it is indeed greater in a more strongly curved spacetime.
Ibix
The proper acceleration of an observer hovering at constant Schwarzschild ##r## (with ##G=c=1##) is $$a=\frac{M}{r^2\sqrt{1-2M/r}}$$ The Kretschmann curvature invariant, which is what I was thinking of as "the strength of the curvature", is ##K=R_{ijkl}R^{ijkl}=48M^2/r^6##. You can eliminate ##r## between those two equations and see that the acceleration needed to hover behaves as ##K^{1/3}## for small ##K## and as ##K^{1/4}## for large ##K##. So the proper acceleration needed to hover increases in more strongly curved spacetime.
Intuitively, something like this must be true. Else, why would you feel heavier closer to a gravitating mass?
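The elimination of ##r## is easy to reproduce symbolically; the short SymPy sketch below checks only the small-curvature end of the relation (the ##K^{1/3}## scaling):

import sympy as sp

M, r, K = sp.symbols('M r K', positive=True)
a = M / (r**2 * sp.sqrt(1 - 2*M/r))             # proper acceleration of a hovering observer
r_of_K = (48 * M**2 / K) ** sp.Rational(1, 6)   # inverting K = 48 M^2 / r^6
a_of_K = sp.simplify(a.subs(r, r_of_K))

# far from the mass (K -> 0) the hovering acceleration scales as K**(1/3):
# the ratio below tends to a finite, nonzero constant
print(sp.limit(a_of_K / K**sp.Rational(1, 3), K, 0))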
GR-curved space-time implies that the geodesics are not the same as they would be without GR. Without GR, we would think that an object held stationary at a constant altitude was following a geodesic, ##A_{nonGR}##. But we know that the true GR-geodesic, ##B_{GR}##, curves away from that. The increasing separation between ##A_{nonGR}## and ##B_{GR}## appears as "acceleration" to one who does not take GR into account. So an object that is left to follow the true (GR) geodesic, ##B_{GR}##, will appear to be accelerating. That fact requires no calculations at all. The details of how much "acceleration" there is and in which direction requires calculations.
I don't know. I am more or less onboard with your "mistaken" impression. At each point on the earth the local spacetime is approximately flat and yet you are accelerating upwards. So locally I don't think that curvature explains the acceleration. The acceleration is (IMO) explained by real forces. The floor pushes up on your feet, the ground pushes up on the floor, the bedrock pushes up on the ground ... Every part of the surface of the earth is accelerating upward due to unbalanced real forces acting on it.
What curvature explains is why every point on that surface can accelerate away from each other without the distance changing.
Z3kr0m
Yesterday I was at a presentation by Kip Thorne about gravitational waves, and he said that the acceleration is due to the changes of time flow (or gradient or something), so now I am confused about this phenomenon again. Can somebody explain it to me, please?
PeroK
Z3kr0m said:
You might have to ask Kip what he meant.
... he said that the acceleration is due to the changes of time flow (or gradient or something),...
He probably meant that the gradient of the gravitational time dilation is related to the strength of the gravitational field.
See the images here:
10. Curved Spacetime | UCLA Physics & Astronomy
demoweb.physics.ucla.edu
This might also help:
So the proper acceleration needed to hover increases in more strongly curved spacetime.
Is this true in general, for example inside a spherical mass? Just because "stronger attractive gravity" coincides with "stronger tidal gravity" for the exterior Schwarzschild solution, doesn't mean it's a general relation or even causation.
PeterDonis
Dale said:
locally I don't think that curvature explains the acceleration. The acceleration is (IMO) explained by real forces.
The acceleration itself is explained by real forces, yes; curvature doesn't give you any proper acceleration.
But the magnitude of the proper acceleration that is required to follow a stationary worldline does depend on the curvature; for example, @Ibix gave a relationship between the proper acceleration of a stationary worldline in Schwarzschild spacetime and the Kretschmann curvature invariant.
As @A.T. has indicated, this statement is too strong; it is true for the particular curved spacetime you give as an example (with the caveat that the concept of "hovering" only makes sense outside the horizon), but not for a general curved spacetime. The obvious counterexample, as @A.T. noted, is the interior of a spherically symmetric mass; the proper acceleration goes to zero at the center, but the curvature is not zero there, and is in fact (I believe) larger there than anywhere else.
pervect
PeroK said:
You can get it directly from the metric. If you are at rest relative to the Earth, then you can calculate your "proper" acceleration, which requires a force to sustain.
The formula may not make a lot of sense without a fair amount of background knowledge of tensors, but I'll try to give a summary. It may not make any sense at all unless one knows what a tensor is. Informally, a rank 0 tensor is just a number, a scalar, that all observers agree on, a rank 1 tensor is a vector, and a rank 2 tensor is rather like a matrix. There are some conditions on how tensors transform that I won't get into.
Tensors have components, which are numbers. Components are influenced by the coordinates chosen, the tensor itself is regarded as an entity that represents a physical phenomenon, independent of any choice of coordinates. This is slightly oversimplifed, one actually needs to choose the coordinates and additionally a set of basis vectors, but I will assume that one is using what is called a "coordinate basis", in which case knowing the coordinates also specifies the basis vectors.
A rank 0 tensor has 1 component, a rank 1 tensor has 4 components, and a rank 2 tensor has 16 components.
General relativity and special relativity have significantly different paradigms. In General relativity, the acceleration of a body in free fall is always zero. In Newtonian mechanics, the acceleration of a freely falling body is nonzero, and is due to "gravity", which is regarded as a force.
The trajectory of a body can be specified by knowing the path that the body takes through space-time. One can specify the path by writing ##x^i(\tau)##, where ##x^i## are the coordinates of the body, and ##\tau## is proper time.
##x^i## represents the position of the body, both in space and in time.
It's important to know here what proper time, ##\tau## is. That's the sort of time that a clock (a wristwatch) on the body measures, as opposed to the value of the time coordinate, which in GR has the status of a label that one attaches to an event that is more or less arbitrary. Proper time is a rank 0 tensor, because it's a number that everyone agrees on, regardless of coordinates. The time coordinate of a body is not a tensor, because it depends on the coordinate choices, and tensors are geometric entities that are independent of coordinate choices.
Then the four-velocity ##u^i## is a vector, whose components are given by ##u^i = \partial x^i / \partial \tau##, the partial derivative of the position ##x^i## with respect to proper time.
The acceleration 4-vector can be calculated from ##u^i## by another sort of derivative, the covariant derivative. This would be, in the tensor notation of General relativity
$$a^b = u^a \nabla_a u^b$$
The symbol ##\nabla_a## represents taking the covariant derivative. The covariant derivative of the 4-velocity ##u^a## is a rank 2 tensor, with 16 components. ##u^a## has four components; its covariant derivative has 16. Contracting (another tensor operation) this rank 2 tensor with the 4-velocity gives one back a 4-vector.
This is very terse, I haven't really explained what the covariant derivative about, but it's rather like a gradient operation, as you might guess from the symobl.
So, the result of this formula gives the acceleration 4-vector, ##a^b##. The magnitude of this four-vector is the number that represents the magnitude of the proper acceleration, and it can be computed from the 4-acceleration and the metric tensor
$$A^2 = g_{ab} a^a a^b$$
here ##A## is the magnitude of the proper acceleration, so ##A^2## is the squared magnitude of the acceleration, while ##a^i## is the acceleration 4-vector we calculated previously.
Because of the use of tensors, A is a rank 0 tensor, which means that it's defined in a manner that's independent of coordinate choices. It may take some knowledge of tensors to appreciate fully how that is possible.
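For a concrete case, this whole chain (Christoffel symbols, ##a^b = u^a \nabla_a u^b##, and the magnitude ##A##) can be ground out with SymPy for a static observer in the Schwarzschild metric. This is only a sketch; the static 4-velocity has just a time component, so the partial-derivative part of the covariant derivative drops out and only the ##\Gamma##-term survives.

import sympy as sp

t, r, th, ph, M = sp.symbols('t r theta phi M', positive=True)
x = [t, r, th, ph]
f = 1 - 2*M/r
g = sp.diag(-f, 1/f, r**2, r**2 * sp.sin(th)**2)     # Schwarzschild metric, signature (-+++)
ginv = g.inv()
n = 4

# Christoffel symbols Gamma^l_{ij} computed from the metric
Gamma = [[[sp.simplify(sum(ginv[l, s] * (sp.diff(g[s, i], x[j]) + sp.diff(g[s, j], x[i])
                                         - sp.diff(g[i, j], x[s])) for s in range(n)) / 2)
           for j in range(n)] for i in range(n)] for l in range(n)]

# static observer: u^mu = (1/sqrt(f), 0, 0, 0), normalized so that g_{mu nu} u^mu u^nu = -1
u = [1 / sp.sqrt(f), 0, 0, 0]

# u is time-independent, so a^mu = Gamma^mu_{nu sigma} u^nu u^sigma
acc = [sp.simplify(sum(Gamma[mu][nu][sig] * u[nu] * u[sig]
                       for nu in range(n) for sig in range(n))) for mu in range(n)]
A = sp.sqrt(sum(g[mu, nu] * acc[mu] * acc[nu] for mu in range(n) for nu in range(n)))
print(sp.simplify(acc[1]))   # radial component: M/r**2
print(sp.simplify(A))        # magnitude, equivalent to M/(r**2*sqrt(1 - 2*M/r))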
PeterDonis said:
As @A.T. has indicated, this statement is too strong; it is true for the particular curved spacetime you give as an example (with the caveat that the concept of "hovering" only makes sense outside the horizon), but not for a general curved spacetime.
Agreed - in a non-static spacetime, "hovering" doesn't even make sense, I think.
The obvious counterexample, as @A.T. noted, is the interior of a spherically symmetric mass; the proper acceleration goes to zero at the center, but the curvature is not zero there, and is in fact (I believe) larger there than anywhere else.
You and @A.T. are correct as usual. I considered the case of a constant density sphere of radius ##R##. The Kretschmann scalar is discontinuous at the surface, jumping from ##12R_s^2/R^6## to ##15R_s^2/R^6## presumably because the stress-energy tensor is discontinuous here, then rises smoothly to the centre of the sphere. The general expression for ##K(r)## inside the sphere is messy and not particularly interesting - Maxima code is in the spoiler tags if you want to see.
Spoiler: Maxima code
load(ctensor);
/* Lazy way of defining constant density interior Schwarzschild metric - */
/* set up for exterior Schwarzschild and edit. Note that this is the */
/* metric for a sphere of radius rg and Schwarzschild radius Rs. */
ct_coordsys(exteriorschwarzschild);
lg[1,1]:-(sqrt(1-Rs*r^2/rg^3)-3*sqrt(1-Rs/rg))^2/4;
lg[2,2]:(1-Rs*r^2/rg^3)^(-1);
/* Derive the Kretschmann scalar */
cmetric(false);
christof(false);
riemann(false);
lriemann(false);
uriemann(false);
rinvariant();
/* What is the curvature invariant at the surface? */
ksurface:ratsimp(substitute(rg,r,kinvariant));
/* What is the curvature invariant at the centre? */
substitute(0,r,ratsimp(kinvariant/ksurface));
substitute(alpha*rg,Rs,%);
kcentre:ratsimp(%);
/* What is the curvature through the interior? Define */
/* alpha=Rs/rg and rho=r/rg for convenience. */
substitute(alpha*rg,Rs,ratsimp(kinvariant/ksurface));
substitute(rho*rg,r,%);
krho:ratsimp(%);
plot2d([substitute(0.1,alpha,krho),
substitute(0.02,alpha,krho),
substitute(0.01,alpha,krho)],
[rho,0,1],
[legend,"R=10Rs",
"R=50Rs",
"R=100Rs"],
[xlabel,"r/R"],
[ylabel,"K(r)/K(R)"]);
That said, I find myself thinking that the proper acceleration of a hovering observer in a static spacetime ought to have some relationship to the local curvature. I think we can define "hovering" in a coordinate-free way as a worldline that is the integral curve of the timelike Killing vector. And the proper acceleration is some kind of measure of how hard you have to work to follow that path, which feels to me like it ought to be related to how non-flat spacetime is where the hovering observer is.
But writing a generic static spherically symmetric metric, ##g_{\mu\nu}=\mathrm{diag}(g_{tt}(r),g_{rr}(r),r^2,r^2\sin^2\theta)##, leads to a simple expression for the modulus of the proper acceleration$$a=\frac{dg_{tt}/dr}{2\sqrt{g_{rr}}g_{tt}}$$but a much more complicated one for the Kretschmann scalar which includes ##g_{rr}##, ##g_{tt}##, and derivatives of both. So apparently my intuition is wrong - perhaps because the acceleration seems to depend on the spatial variation of the metric coefficients related to the timelike direction while curvature takes into account the full Riemann tensor?
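As a sanity check on that expression, here is a small sympy sketch (again not from the thread) that rederives it from ##a = \sqrt{g_{rr}}\,\Gamma^r{}_{tt}\,u^t u^t##, treating ##g_{tt}## as the positive magnitude of the time-time component:

import sympy as sp

r = sp.symbols('r', positive=True)
gtt = sp.Function('g_tt')(r)   # magnitude of the time-time metric component
grr = sp.Function('g_rr')(r)

# Only the ingredients that enter the hovering observer's acceleration are needed:
# Gamma^r_tt = (1/2) g^{rr} d(g_tt)/dr  and  (u^t)^2 = 1/g_tt
Gamma_r_tt = sp.Rational(1, 2) * (1 / grr) * sp.diff(gtt, r)
ut_sq = 1 / gtt

a_r = Gamma_r_tt * ut_sq            # radial component of the 4-acceleration
A = sp.sqrt(grr) * a_r              # magnitude sqrt(g_rr (a^r)^2)

target = sp.diff(gtt, r) / (2 * sp.sqrt(grr) * gtt)
print(sp.simplify(A - target))      # 0, so the two expressions agree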
DrGreg
But even in flat spacetime Rindler observers "hover" (from their point of view) above a Rindler horizon, with ever-increasing proper acceleration (diverging to ##\infty##) the closer they are to the horizon. They are each following the flow of a Killing field but the spacetime curvature is zero.
writing a generic static spherically symmetric metric, ##g_{\mu\nu}=\mathrm{diag}(g_{tt}(r),g_{rr}(r),r^2,r^2\sin^2\theta)##, leads to a simple expression for the modulus of the proper acceleration
$$a=\frac{dg_{rr}/dr}{2\sqrt{g_{rr}}g_{tt}}$$
Shouldn't it be ##d g_{tt} / dr## in the numerator?
$$a = \sqrt{g_{rr}} a^r = \sqrt{g_{rr}} \Gamma^r{}_{tt} u^t u^t = \sqrt{g_{rr}} \frac{1}{2} g^{rr} \frac{d g_{tt}}{d r} \frac{1}{g_{tt}} = \frac{1}{2 \sqrt{g_{rr}} g_{tt}} \frac{d g_{tt}}{d r}$$
Yes. Thanks - typo now corrected above.
typo now corrected above
I think you corrected the wrong thing--you put ##\sqrt{g_{tt}} g_{tt}## in the denominator, instead of ##d g_{tt} / dr## in the numerator.
I'd just spotted that. Re-corrected.
Karl Coryat
Back to the original discussion: Surely, one reason why people have trouble with GR basics is confusion over the word "accelerate." Most people think of acceleration as a perceptible change in velocity, so it's counterintuitive to imagine that a person sitting in a chair on Earth is accelerating upward. My question: Is it correct to say that ordinary velocity change is a special case of acceleration (i.e., that acceleration doesn't necessarily mean a change in meters-per-second), or is it more correct to say that a person sitting in a chair is experiencing a velocity change — like a person on a merry-go-round — only that the seated person's acceleration is a change in their velocity through spacetime? If that's the case, are there units for expressing one's velocity through spacetime at any given instant?
Karl Coryat said:
There is an analogy with circular motion where the acceleration is toward the center of the circle but there is no change in the distance to the center.
Also, velocity is frame dependent. If you accelerate for a time in flat spacetime then the net result is not a change in absolute velocity, but a change in your inertial reference frame.
In the case of sitting in a chair, therefore, the upward acceleration does not imply an absolute motion in that direction. But it does imply a continuous change of local inertial reference frame.
Is it correct to say that ordinary velocity change is a special case of acceleration (i.e., that acceleration doesn't necessarily mean a change in meters-per-second), or is it more correct to say that a person sitting in a chair is experiencing a velocity change
The problem is defining acceleration as a perceptible change in velocity without saying velocity with respect to what. For a free-falling observer (who feels no force) the person in the chair is clearly accelerating upwards (and, in fact, feels an upwards force from the chair).
Newton regards the person in the chair as "at rest" (or moving inertially, more precisely). Einstein regards the free-faller as moving inertially. So the person in the chair is genuinely accelerating - they can feel it, and inertial observers can see it.
Surely, one reason why people have trouble with GR basics is confusion over the word "accelerate."
The only way around this is to always be explicit about whether one means "coordinate acceleration" (change in velocity) or "proper acceleration" (measured by an accelerometer).
Back to the original discussion: Surely, one reason why people have trouble with GR basics is confusion over the word "accelerate." Most people think of acceleration as a perceptible change in velocity, so it's counterintuitive to imagine that a person sitting in a chair on Earth is accelerating upward.
If a person wants to understand GR basics, he may just have to learn something about geodesics in spacetime and understand that acceleration is a deviation from a geodesic in spacetime.
Experimental Methods
Nano Express
A 29Si, 1H, and 13C Solid-State NMR Study on the Surface Species of Various Depolymerized Organosiloxanes at Silica Surface
Iryna S. Protsak1, 2, 3,
Yevhenii M. Morozov2, 4,
Wen Dong1, 3,
Zichun Le2,
Dong Zhang5 and
Ian M. Henderson6
Nanoscale Research Letters 2019, 14:160
Received: 7 January 2019
Three poly(organosiloxanes) (hydromethyl-, dimethyl-, and epoxymethylsiloxane) of different chain lengths and pendant groups and their mixtures of dimethyl (DMC) or diethyl carbonates (DEC) were applied in the modification of fumed silica nanoparticles (FSNs). The resulting modified silicas were studied in depth using 29Si, 1H, and 13C solid-state NMR spectroscopy, elemental analysis, and nitrogen adsorption-desorption (BET) analysis. The obtained results reveal that the type of grafting, grafting density, and structure of the grafted species at the silica surface depend strongly on the length of organosiloxane polymer and on the nature of the "green" additive, DMC or DEC. The spectral changes observed by solid-state NMR spectroscopy suggest that the major products of the reaction of various organosiloxanes and their DMC or DEC mixtures with the surface are D (RR'Si(O0.5)2) and T (RSi(O0.5)3) organosiloxane units. It was found that shorter methylhydro (PMHS) and dimethylsiloxane (PDMS) and their mixtures with DMC or DEC form a denser coverage at the silica surface since SBET diminution is larger and grafting density is higher than the longest epoxymethylsiloxane (CPDMS) used for FSNs modification. Additionally, for FSNs modified with short organosiloxane PMHS/DEC and also medium organosiloxane PDMS/DMC, the dense coverage formation is accompanied by a greater reduction of isolated silanols, as shown by solid-state 29Si NMR spectroscopy, in contrast to reactions with neat organosiloxanes. The surface coverage at FSNs with the longest siloxane (CPDMS) greatly improves with the addition of DMC or DEC. The data on grafting density suggest that molecules in the attached layers of FSNs modified with short PMHS and its mixture of DMC or DEC and medium PDMS and its mixture of DMC form a "vertical" orientation of the grafted methylhydrosiloxane and dimethylsiloxane chains, in contrast to the reaction with PDMS/DEC and epoxide methylsiloxane in the presence of DMC or DEC, which indicates a "horizontal" chain orientation of the grafted methyl and epoxysiloxane molecules. This study highlights the major role of solid-state NMR spectroscopy for comprehensive characterization of solid surfaces.
Dialkyl carbonates
1H solid-state NMR spectroscopy
29Si solid-state NMR spectroscopy
13C solid-state NMR spectroscopy
Surface modification
Fumed nanosilica
Bonding density
Hydrophobized fumed silica nanoparticles (FSNs) are of interest from a practical point of view because these materials can be better fillers for nonpolar or weakly polar polymers, or more appropriate hydrophobic materials for other practical applications, than unmodified hydrophilic silica [1–4]. Functionalization of FSNs can be performed using various traditional types of modifying agents such as alkoxy-, halo-, and aminosilanes and organosilazanes [3–8]. However, due to the high reactivity and moisture sensitivity of these modifying agents, purification is often critical for such hydrolyzable precursors. Organosiloxanes with methyl-terminated groups provide a viable and environmentally benign alternative for the chemical functionalization of oxides, taking into account three aspects of their structure that set them apart from carbon-based polymers: the bond lengths of Si–O and Si–C (1.63 and 1.90 Å) in organosiloxane are longer than the C–C (1.53 Å) bonds of most polymers; the Si–O–Si bond angle (143°) is significantly greater than the C–C–C bond angles (109°) in the main chain of carbon-based polymers; and the differences in Pauling electronegativity values between silicon (1.8) and oxygen (3.5) and between silicon (1.8) and carbon (2.5) impart ionic character to both the Si–O backbone bonds (51% ionic) and the Si–C bonds (12% ionic). These three structural differences allow rotational and vibrational degrees of freedom in organosiloxane that are not available to carbon-based polymers and are the basis for unusual and unique properties: high thermal stability; excellent dielectric properties; and resistance to oxygen, water, and UV irradiation, and so on [5, 8–11]. Linear organosiloxanes are generally not considered to be reactive with inorganic oxide surfaces, and an enormous research effort has been made over the last 50 years to develop other silicon-containing reagents with reactive functional groups [12]. One likely way to increase the reactivity of a silicone polymer is partial depolymerization of high-molecular-weight poly(organosiloxanes), followed by grafting of the resulting oligomers (with terminal alkoxy groups) onto silica surfaces. Complete depolymerization of poly(dimethylsiloxanes) can be achieved by treatment of siloxanes with toxic agents such as various amines [13, 14]; by thermal degradation (300–400 °C); or by treatment with sulfuric acid, thionyl chloride, and mixtures of alkali (NaOH, KOH) or alcohols (methanol, ethanol) [15–18]. In our previous work, we found that dimethyl carbonate, an environmentally friendly reagent [19, 20] that meets the requirements of green chemistry, promotes partial depolymerization of organosiloxanes, making the resultant oligomers candidates for surface functionalization [21]. However, no systematic characterization of the surface species of various depolymerized organosiloxanes on a silica surface has been performed. Useful but limited information on the bonded species of silylated silica surfaces can be obtained through zeta potential measurements, infrared spectroscopy, and scanning and transmission electron microscopy. One of the problems often met with these methods concerns the difficulty of identifying different OH and Si–O bonds. More specific information can be obtained by high-resolution 13C and 29Si cross-polarization magic-angle spinning NMR (CP-MAS NMR) and 1H MAS NMR. Indeed, only the use of the abovementioned methods allows a full characterization of the surface species on silylated silica.
Some solid-state NMR studies have been already performed on gel and fumed silicas modified with different alkoxysilanes [22–28], mesoporous silica modified with cetyltrimethylammonium bromide [29], and 3-metacryloxypropyltrimethoxysilane (MPS) deposited in various solvents onto porous silica [30].
Therefore, the aim of this work is to study the surface species of various organosiloxanes and their mixtures with dimethyl or diethyl carbonate at a fumed silica surface depending on the polymer chain length of siloxane used as a modifying agent and on the nature of dimethyl or diethyl carbonate applied as an initiator for partial organosiloxane deoligomerization.
Chemical Reagents
For preparation of the modified silica surfaces, poly(methylhydrosiloxane) (code name PMHS, linear, –CH3 terminated, viscosity of ca. 3 cSt at 25 °C), poly(dimethylsiloxane) (code name PDMS, linear, –CH3 terminated, viscosity of ca. 100 cSt at 25 °C), and poly[dimethylsiloxane-co-(2-(3,4-epoxycyclohexyl)ethyl)methylsiloxane] (code name CPDMS, linear, –CH3 terminated, viscosity of ca. 3,300 cSt at 25 °C) were purchased from Sigma Aldrich, USA. Commercial dimethyl carbonate (DMC), diethyl carbonate (DEC), and fumed silica (SiO2, SBET = 278 m2/g) were purchased from Aladdin Reagents, China. The purity of the reagents, as reported by the manufacturers, was ≥ 99.0 %. The reagents were used as received.
Modification of Fumed Silica Surfaces
Organosiloxanes were chosen as non-toxic and environmentally benign modifying reagents with high carbon content. FSNs were applied as a matrix for modification because of the high regularity hydroxyl groups on the surface and good dispersibility. In addition, the main advantage of these FSNs over larger monodisperse particles is the fact that they provide a large surface area and thus high sensitivity for solid-state NMR. The modification of the fumed silica surface was performed with PMHS, PDMS, and CPDMS at 180–200 °C for 2 h with or without addition of DMC or DEC, which does not contribute to the weight of modified silica due to the reaction mechanism in gaseous (nitrogen) dispersion media (i.e., without a solvent). The amount of modifier agent was determined to be 17 wt% of silica weight. The modification process was performed in a glass reactor with a stirrer with a rotational speed of 350–500 rpm. The modifying agent was added by means of aerosol-nozzle spray. The samples were subsequently cooled to room temperature after the synthesis.
The content of grafted organic groups in the synthesized samples was measured several times with a vario MACRO cube analyzer (Elementar, Germany), and average values of the carbon content and relative deviations were calculated (Table 1). The anchored layer was oxidized to produce H2O and CO2 during heating of the samples in an oxygen flow at 750 °C.
Table 1 Carbon content, bonding density, and surface area of grafted neat organosiloxanes and their mixtures with DMC or DEC at the SiO2 surface. Columns: carbon content (wt%); bonding density of [Si(CH3)(R1)O] units (groups/nm2), where R1 = CH3, H, or CH2CH2C6H9; and SBET (m2/g). Samples: SiO2 (A–300), SiO2/PMHS (carbon content 2.42 ± 0.08 wt%), SiO2/PMHS+DMC, SiO2/PMHS+DEC, SiO2/PDMS, SiO2/PDMS+DMC, SiO2/PDMS+DEC, SiO2/CPDMS, SiO2/CPDMS+DMC, and SiO2/CPDMS+DEC.
The bonding density of the attached layers was calculated using the formula [11, 12]
$$ \rho = \frac{6\times 10^{5}\,(\%C)}{\left[1200\times n_c - M_w\times (\%C)\right]}\cdot\frac{1}{S_{\mathrm{BET}}}, $$
where Mw is the molecular weight of the grafted group, %C is the carbon weight percentage of the modified silica, SBET is the surface area of the original silica (m2/g), and nc is the number of carbon atoms in the grafted group in each silicone used for modification. Equation 1 gives the number of [–Si(RR1)O–] repeat units per 1 nm2 of the surface (ρ), where R is a methyl group (CH3) and R1 is a hydro (H), methyl (CH3), or epoxy(cyclohexylethyl) group (CH2CH2C6H9).
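As a quick illustration of Eq. 1, the short Python sketch below evaluates the bonding density for a hypothetical sample; the function name and the 4 wt% carbon content are invented for the example, while the repeat-unit parameters correspond to a [–Si(CH3)2O–] unit and the SBET value to the unmodified silica used here:

# Minimal sketch of Eq. 1 (not from the paper's own software):
# rho = 6e5 * %C / ((1200 * n_c - M_w * %C) * S_BET), in groups/nm^2
def bonding_density(carbon_wt_percent, n_c, m_w, s_bet):
    """carbon_wt_percent: carbon content of the modified silica, wt%
    n_c: number of carbon atoms in the grafted repeat unit
    m_w: molecular weight of the grafted group, g/mol
    s_bet: surface area of the original silica, m^2/g"""
    return 6e5 * carbon_wt_percent / ((1200 * n_c - m_w * carbon_wt_percent) * s_bet)

# Hypothetical example: a dimethylsiloxane repeat unit (n_c = 2, M_w ~ 74 g/mol)
# on the original silica (S_BET = 278 m^2/g) with an assumed 4 wt% carbon content
print(round(bonding_density(4.0, 2, 74.0, 278.0), 1))  # ~4.1 groups/nm^2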
29Si, 1H, and 13C CP/MAS NMR Measurements
Solid-state 1H MAS NMR spectra were recorded on a Bruker Avance 400 III HD spectrometer (Bruker, USA, magnetic field strength of 9.3947 T) at resonance frequency of 79.49 MHz. The powder samples were placed in a pencil-type zirconia rotor of 4.0 mm o.d. The spectra were obtained at a spinning speed of 10 kHz, with a recycle delay of 1 s. The adamantane was used as the reference of 1H chemical shift.
Solid-state 29Si CP/MAS NMR spectra were recorded on a Bruker Avance 400 III HD spectrometer (Bruker, USA, magnetic field strength of 9.3947 T) at resonance frequency of 79.49 MHz for 29Si using the cross-polarization (CP), magic-angle spinning (MAS), and a high-power 1H decoupling. The powder samples were placed in a pencil-type zirconia rotor of 4.0 mm o.d. The spectra were obtained at a spinning speed of 8 kHz (4 μs 90° pulses), a 8-ms CP pulse, and a recycle delay of 4 s. The Si signal of tetramethylsilane (TMS) at 0 ppm was used as the reference of 29Si chemical shift.
Solid-state 13C CP/MAS NMR spectra were recorded on a spectrometer (Bruker, USA, with a magnetic field strength of 9.3947 T) at a resonance frequency of 100.61 MHz for 13C using the cross-polarization (CP), magic-angle spinning (MAS), and a high-power 1H decoupling. The powder samples were placed in a pencil-type zirconia rotor of 4.0 mm o.d. The spectra were obtained at a spinning speed of 5 kHz (4 μs 90° pulses), a 2-ms CP pulse, and a recycle delay of 4 s. The carbonyl signal of glycine at 176.03 ppm was used as the reference of 13C chemical shift.
1H Liquid NMR Spectroscopy
1H NMR spectra of each initial organosiloxane (PMHS (Additional file 1: Figure S1), PDMS (Additional file 1: Figure S2), CPDMS (Additional file 1: Figure S3); see Additional file 1) were recorded at 90 MHz with an Anasazi Eft–90 spectrometer (Anasazi Instruments, USA). Each polymer was dissolved in deuterated chloroform CDCl3, and the resulting solution was analyzed by 1H NMR spectroscopy.
BET Measurements
To analyze the surface area (SBET, m2/g) of the silicas, the samples were degassed at 150 °C for 300 min. Low-temperature (77.4 K) nitrogen adsorption–desorption isotherms were recorded using a Micromeritics ASAP 2420 adsorption analyzer (Micromeritics Instrument Corp., USA). The specific surface area (Table 1, SBET) was calculated according to the standard BET method.
The 1H MAS NMR spectrum of neat fumed silica (Fig. 1 a) shows three main contributing peaks, at 1.1, 3.5, and 4.8 ppm. The peak at 1.1 ppm is assigned to isolated silanols at the SiO2 surface. Note that they were not detected directly at their usual position around 1.8 ppm in the spectrum of neat silica, but it is known that protons in isolated silanol groups also produce lines between 0.5 and 1.5 ppm. Similar spectral features were also observed in previous studies [29]. Chemical-shift lines between 3.5 and 5.0 ppm are assigned to weakly bound, relatively mobile water and hydrogen-bonded silanols (Figs. 1 and 2) [26, 27, 29].
1H MAS NMR spectra of (a) neat fumed silica, (b) modified fumed silica with PMHS, modified with (c) mixtures of PMHS and dimethyl carbonate and (d) mixtures of PMHS and diethyl carbonate
Schematic representation of (a) single silanol groups and (b, c) possible structures involving the silanol groups and physisorbed water at the silica surface. The values of chemical shift are assigned to these structures according to refs [26, 27, 29]
The intense resonance in the 3.5–5.0 ppm range has been studied widely by different researchers. Liu and Maciel [26], for example, by using CRAMPS observed a peak at 4.1 ppm in humidified fumed silica (Cab–O–Sil) which they reported as intermediate between that of liquid water protons (4.9 ppm) and that of the physisorbed water peak (3.5 ppm). According to their studies, a resonance at 3.5 ppm assigned to physisorbed water could easily be desorbed by evacuation at 25 °C. Moreover, evacuation at 100 or 225 °C led to further decrease in the intensity of this resonance, and it was attributed to "rapidly exchanging weakly hydrogen bonded hydroxyls, including those of both water and silanols" [25, 29]. On the other hand, the 1H MAS NMR investigation of silicas by Turov et al. [31, 32] reported the chemical shift of water at around 5 ppm at 25 °C. Several other studies of silicas have also attributed the resonances at 4.5–5.0 ppm to water on strongly hydrated surfaces and chemical shift near 3 ppm to water on significantly dehydrated surfaces, as reported by Turov et al. [32].
The 1H MAS NMR spectra of silicas modified with neat poly(methylhydro)siloxane (Fig. 1, curve b) and with its mixtures with DMC or DEC (curves c and d) were similar to each other and displayed a broad peak (centered at 0.0 ppm) which confirms the grafting of alkylsiloxane species. All the spectra of the presented samples show a reduction in the intensity of adsorbed water and hydrogen-bonded silanols (3.5–5.0 ppm) and do not show the presence of isolated silanols (1.1 ppm), confirming that the silicas were well modified. The peak around 4.7 ppm can also be assigned to the proton of the Si-H group, which was attached to the SiO2 surface along with alkyls during SiO2 functionalization. The presence of the adsorbed water in the modified samples can be explained by the fact that water molecules are much smaller than the cross-section of organosiloxane. Water can therefore penetrate the narrow nanovoids in the contact zones between adjacent nanoparticles in the aggregates, but polymer macromolecules cannot penetrate these voids. Nevertheless, the 1H MAS NMR spectra of modified silicas provide much less structural information than the 29Si CP/MAS NMR spectra of these composites, in which it is possible to see unique resonances of different grafted species [33]. The nomenclature used to define siloxane surface species grafted at the silica surface incorporates the letters M, D, T, and Q for the organosiloxane units, which represent R3SiO0.5, R2Si(O0.5)2, RSi(O0.5)3, and Si(O0.5)4 units, respectively, where R represents aliphatic and/or aromatic substituents or H [34]. The CP/MAS 29Si NMR spectrum of unreacted silica (Fig. 3 a) shows three signals with resolved peaks at − 91, − 100, and − 109 ppm. These peaks are assigned to silicon atoms in the silanediol groups, silanol groups, and silicon-oxygen tetrahedra of the SiO2 framework, respectively (Fig. 4a–c and Table 2), or, in other words, to silicon-oxygen tetrahedra Q2, Q3, and Q4, where the superscript indicates the number of siloxane bonds [34]. The assignment was made on the basis of the small difference between 29Si chemical shifts in solids and the corresponding shifts in a liquid, and data on soluble silicates were used for identification. The formation of an additional siloxane bond has been found to lead to an upfield signal shift of about 9 ppm [28].
29Si CP/MAS NMR spectra of (a) neat fumed silica, (b) modified fumed silica with neat PMHS, modified with (c) mixtures of PMHS and dimethyl carbonate and (d) mixtures of PMHS and diethyl carbonate
Various grafted PHMS species (a–f)
Table 2 29Si chemical shifts (δ, ppm) of grafted neat and depolymerized organosiloxane species at the SiO2 surface. Entries include: Si(OH)2(O–)2 at − 91 ppm; Si(OH)(O–)3 at − 100 ppm; Si(O–)4; Si(CH3)(H)(O–); (≡SiO)2SiR (R = attached polymer chain); Si(CH3)(R)(O–)2 in D4 (R = CH3, C2H4C6H8) at − 19.5 ppm; Si(CH3)(O–)2 in linear MD4M; Si(CH3)2(O–)2 (D1); and Si(CH3)(C2H4C6H8) (D2).
As can be seen in Fig. 3, after silica surface modification with the low-viscosity poly(methylhydrosiloxane) and DEC (curve d), a significant decrease in the Q3 and Q2 signals is accompanied by an increase in the intensity of the Q4 signal. Additionally, a signal at − 35 ppm appears, and this can be identified with the methylhydrosiloxane species (D1) (Fig. 4d and Table 2). This implies that a reaction has occurred between the silica surface and the PMHS/DEC mixture.
The surface of SiO2/PMHS+DEC also shows a high grafting density of 5.0 groups/nm2 (Table 1) and a larger diminution of SBET (Table 1) as compared with SiO2 modified with neat PMHS, suggesting a closely packed methylhydrosiloxane network. Close values of grafting densities have been reported for the self-assembled monolayers (SAMs) of C18H37SiH3, C18H37SiCl3, and C18H37P(O)(OH)2 on metals and metal oxides [35–37] and C18H37SH on gold [38]. The silicas modified with neat PMHS and with its mixture with DMC show an even slightly higher grafting density, around 5.5–6.0 groups/nm2 (Table 1). Nevertheless, the appearance of the chemical shifts at − 35, − 58, and − 68 ppm of the D1, T2, and T3 units (Fig. 4d–f) for SiO2/PMHS (Fig. 3 b), and of only the D1 unit for SiO2/PMHS+DMC (Fig. 3 c), is not accompanied by a significant reduction of the peak which corresponds to free silanols (− 100 ppm, Q3), as is the case for SiO2/PMHS+DEC (Fig. 3 d). The 13C CP/MAS NMR spectra of these modified FSNs (Fig. 5) show one prominent peak at about 43–46 ppm due to siloxane alkyl chains grafted at their SiO2 surfaces. The sharp peak in the CP/MAS 13C NMR spectrum of SiO2/PMHS+DEC (Fig. 5 c) indicates well-ordered surface structures at the SiO2 surface. On the contrary, in the CP/MAS 13C NMR spectra of SiO2/PMHS (Fig. 5 a) and SiO2/PMHS+DMC (Fig. 5 b), the signals are relatively broad, indicating a restricted mobility of the functional groups attached to the siloxane framework. Additionally, a higher relative intensity of this signal at 43–46 ppm for SiO2/PMHS+DEC may suggest a greater number of attached surface species at the SiO2 surface as compared with SiO2/PMHS and SiO2/PMHS+DMC.
13C CP/MAS NMR spectra of (a) modified fumed silica with neat PMHS, modified with (b) mixtures of PMHS and dimethyl carbonate and (c) mixtures of PMHS and diethyl carbonate
The abovementioned differences between the modified silicas could be explained by several factors: (1) the type of organosiloxane bonding (physical or chemical) with the SiO2 surface and (2) changes in the length of the initial organosiloxane and its fragments after reactions with the alkyl carbonate (as shorter polymer fragments react more intensively with silica surface sites due to the lower steric hindrance of side polymer chains). It is therefore more likely that neat PMHS and the PMHS/DMC mixture adsorb at the SiO2 surface through the formation of adsorption complexes, by the binding of hydrogen in the surface silanol group with the siloxane oxygen of the organosiloxane, while the chemical reaction between the SiO2 surface and PMHS/DEC could proceed through the formation of a chemical bond by electrophilic substitution of the proton in the silanol group (see Scheme 1 below). The latter explains the significant reduction of the free silanol peak at − 100 ppm (Q3) for SiO2/PMHS+DEC (Fig. 3 d). The fact that the resonance of isolated silanols (− 100 ppm) is significantly decreased for SiO2/PMHS+DEC in comparison to unreacted SiO2 but does not disappear completely (even with close packing of methylhydrosiloxane groups of 4.0 groups/nm2) indicates that some of the OH groups were inaccessible to the modifier reagent. These silanols could be located inside SiO2 nanoparticles. Note that these intra-particle silanols and water molecules can be removed upon heating at 550–700 °C, and only a very small amount of residual silanols remains upon heating even at 1000 °C [11]. The existence of intracrystalline hydroxyl groups is typical for layered silicates [28]. According to Iler [1], their formation is possible in an aerosil structure by the aggregation of SiO2 primary particles with a size of 1–2 nm into a finite globule with a diameter of 10–20 nm. In addition, one cannot rule out the possibility of internal hydroxyl group formation in the course of diffusion of water molecules into the SiO2 globules. On the other hand, unreacted silanols play an important role in the stabilization of alkylsilane layers at the SiO2 surface, as considered by other researchers [11, 35–43]. In the opinion of the authors of [11], grafted silane layers form a closely packed monolayer film with an ordered amorphous structure with a significant number of uncoupled silanols that interact with neighboring Si–OH groups via hydrogen bonding, while the alkyl chains (not shown in Fig. 6) are directed perpendicular to the plane of the siloxane network (Fig. 6a). The presence of uncoupled silanols provides enough space for the alkyl chains grafted at SiO2 after the modification, as the maximal length of the Si–OH·····HO–Si sequence of bonds is ≈ 0.6 nm, which is notably larger than the Van der Waals diameter of the alkyl chains (≈ 0.46 nm). In the absence of uncoupled silanols, the attached monolayers form a hexagonal array (Fig. 6b) where Si atoms are connected via the siloxane network [39]. However, as was reported by Helmy et al. [11], such a structure is too constrained by steric repulsion between the grafted alkyl chains, as the maximum length of the Si–O–Si bond is 0.32 nm, which is very much smaller than the Van der Waals diameter of the alkyl chain (0.46 nm).
Attack by methoxysilane of silica silanol group
The amorphous-like structure (a) consists of the molecules bonded via Si-O-Si and Si-OH·····HO-Si bonds, proposed in [11] and (b) the crystalline-like structure has "extended" Si-O-Si bonds, proposed in [11, 38–41]
29Si CP/MAS NMR spectra of SiO2 modified with organosiloxane of medium chain length (PDMS) and its mixture with DMC or DEC are shown in Fig. 7. The chemical shifts, which appeared at − 19 and − 23 ppm for all the samples (Table 2), are assigned to D1 and D2 units in cyclotetrasiloxane and dimethylsiloxane species in linear MD4M, respectively (Fig. 8 a, b). Notice that the shift of dimethylsiloxane species (− 23 ppm) for silicas modified with PDMS (Fig. 7) is shifted to higher frequency ranges in comparison to SiO2 modified with PMHS (− 35 ppm, Fig. 3), which is explained by the fact that hydrides appear at relatively low frequency compared with their alkyl analogs [34]. The abovementioned resonances result from "capping" of the silica surfaces with modifier agent which is in a good agreement with earlier reports [5]. The sites denoted as T2 and T3, observed around − 58 and − 68 ppm for SiO2/PDMS and SiO2/PDMS+DMC (Fig. 7 b, c) are assigned to (≡SiO)2SiR and (≡SiO)3SiR functionalities (Fig. 8 c, d) where R represents the attached polymer chain. The presence of D as well as T sites for these samples indicates that functionalization of the SiO2 surfaces has occurred. Note that the appearance of the chemical shifts of the D and T units for SiO2 modified with mixture of PDMS/DMC (Fig. 7 c) is accompanied by a significant decrease in the resonances of free and geminal silanols and also SBET value (167 m2/g, Table 1), which may suggest that the reaction of the SiO2 with depolymerized PDMS occurred through the chemical bonding, as for SiO2/PMHS+DEC (Fig. 3 d). The surface also shows the highest grafting density (ρmax) – 7.4 groups/nm2 and the lowest surface area in comparison with other silicas presented in this work (Table 1). The ρmax value obtained for SiO2/PDMS+DMC is similar to those reported for the best quality monolayers derived from chloro- and aminosilanes [35, 44]. The only difference is that the modification of SiO2 with mixture of PDMS/DMC occurs with noncorrosive reagents, thus providing a cleaner and less hazardous environment than amino- and chlorosilanes. The high value of the grafting density for this surface indicates the formation of closely packed grafted organic layers. On the contrary, the resonances of free and geminal silanols are not shown to be greatly diminished in intensity for the surfaces, which were obtained by SiO2 modification with neat PDMS (Fig. 7 b) and its mixture of DEC (Fig. 7 d) but showing grafting density 7.2 and 2.5 groups/nm2. This could be explained by the partial adsorption of the modifier reagent at the SiO2 surface as was mentioned in the previous section. 29Si CP/MAS NMR data are in a good agreement with the BET data (Table 1), as surface areas for both samples are higher compared with SiO2/PDMS+DMC, confirming the smaller degree of chemisorption of the modifier agents at the SiO2 surfaces. In addition, the 13C CP/MAS NMR data (Fig. 9) are in excellent agreement with the data on grafting density (Table 1) and 29Si CP/MAS NMR data, and the relative intensities of the signals which correspond to organosiloxane chains (44–50 ppm, Fig. 9) attached to the SiO2 surface are higher for SiO2/PDMS+DMC (curve b) and SiO2/PDMS (curve a) as compared with SiO2/PDMS+DEC (curve c). All the signals in the 13C CP/MAS NMR spectra (Fig. 9) are relatively sharp, indicating well-ordered surface structures on the silica surface.
29Si CP/MAS NMR spectra of (a) neat fumed silica, (b) modified fumed silica with PDMS, modified with (c) mixtures of PDMS and dimethyl carbonate and (d) mixtures of PDMS and diethyl carbonate
Various grafted PDMS species (a–d)
13C CP/MAS NMR spectra of (a) modified fumed silica with neat PDMS, modified with (b) mixtures of PDMS and dimethyl carbonate and (c) mixtures of PDMS and diethyl carbonate
The denser coverage for SiO2/PMHS+DEC (discussed above) and SiO2/PDMS+DMC in comparison to other samples presented here can be explained also by the presence of additional reactive centers at the SiO2 surface, the attached methoxy groups (–OCH3 or OR), which can be formed by the reaction of DMC or DEC with the SiO2 surface (see Scheme 2 below).
The reaction of DMC or DEC with the SiO2 surface
1H MAS NMR spectra of FSNs modified with mixtures of PDMS/DMC or PDMS/DEC (Fig. 10 c, d) show the disappearance of the peaks of free and hydrogen-bonded silanols (δ = 1.1 ppm and δ = 4.8 ppm) as well as of adsorbed water (δ = 3.5 ppm). The presence of grafted siloxane species is confirmed by the emergence of a chemical shift at 0.0 ppm for all the samples (Fig. 10 b–d). In spite of the presence of methylsiloxane grafting for SiO2/PDMS (Fig. 10 b), its surface still contains free and hydrogen-bonded silanols and adsorbed water, which is in very good agreement with the 29Si CP/MAS NMR data (Fig. 7 b). The values of the bonding density (Table 1) of the attached layers of FSNs modified with short PMHS and its mixtures with DMC or DEC, as well as with medium PDMS and its mixture with DMC, suggest a "vertical" orientation of the grafted dimethylsiloxane and methylhydrosiloxane molecules stabilized by lateral Si–O–Si bonds and Van der Waals interactions between the grafted alkyl chains [43–46], while FSNs modified with the PDMS/DEC mixture show a grafting density of 1.9 groups/nm2, suggesting a "horizontal" orientation of the grafted dimethylsiloxane molecules [8, 35].
1H MAS NMR spectra of (a) neat fumed silica, (b) modified fumed silica with PDMS, modified with (c) mixtures of PDMS and dimethyl carbonate and (d) mixtures of PDMS and diethyl carbonate
Overall, from the solid-state NMR data obtained, it is evident that the addition of DMC to the modifying mixture has a significant effect on the chemical interaction of organosiloxane of a medium length of polymer chain (PDMS) used for modification at the silica surface, while DEC addition has practically no influence on the chemical interaction of SiO2 with PDMS. In contrast, the DEC has a great effect on the chemical interaction of short organosiloxane (PMHS) used for modification at the SiO2 surface, while DMC has minimal impact on the chemical interaction of SiO2 with PMHS.
As can be seen from the 29Si CP/MAS NMR spectrum of SiO2 modified with the longest polymer, poly[dimethylsiloxane-co-(2-(3,4-epoxycyclohexyl)ethyl)methylsiloxane] (CPDMS) (Fig. 11 b), the resonances of grafted methyl-epoxy species around − 23 and − 19 ppm are barely detectable, which implies a mostly inert nature of this polymer in relation to the SiO2 surface. The 29Si CP/MAS NMR spectra of silicas modified with the longest organosiloxane in the presence of the additives DMC or DEC (Fig. 11 c, d) show peaks of grafted siloxane species at − 22, − 21, and − 19 ppm (Table 2), which are assigned to a mixture of D2 and D1 units in linear MD4M siloxane (Fig. 12b) and a D1 unit in cyclotetrasiloxane (Fig. 12a), respectively. The grafting density for these surfaces is not as high as for the surfaces modified with the short (PMHS) and medium (PDMS) siloxanes, amounting to 0.4 and 0.7 groups/nm2 (Table 1), values that are closer to a "horizontal" chain orientation at the SiO2 surfaces. However, these values are three to five times higher than for SiO2 modified with the neat polymer CPDMS (ρmax = 0.1 group/nm2, Table 1), and the data are in good agreement with the BET values (Table 1), which are lower for these samples than for the SiO2/CPDMS one. The somewhat lower reactivity of neat CPDMS in relation to the SiO2 surface could be attributed to steric hindrance caused by the long polymer chain units and the epoxide groups present in this polymer, as long-chain organosiloxanes can form a helix structure [46, 47] due to the corresponding rotations around the Si–O bonds, which greatly limits the number of organosiloxane segments capable of interacting with active silica sites. On the other hand, in concentrated solutions of organosiloxane in hexane, for example, the fraction of unfolded molecules increases [46], and this results in an increase in the density of contacts between the siloxane molecules and the SiO2 surface OH groups. Taking this into account, the use of an alkyl carbonate is beneficial, as under its influence the organosiloxane might change its structure, which in turn promotes better accessibility of the formed polymer fragments to the silica surface silanols and thus higher polymer adsorption at the SiO2 surface. The peak broadening for SiO2/CPDMS+DMC (Fig. 11 c) and SiO2/CPDMS+DEC (Fig. 11 d) is due to a different steric orientation of the closely adjacent methyl and epoxy groups [34].
29Si CP/MAS NMR spectra of (a) neat fumed silica, (b) modified fumed silica with CPDMS, modified with (c) mixtures of CPDMS and dimethyl carbonate and (d) mixtures of CPDMS and diethyl carbonate
Various grafted CPDMS species (a, b)
According to 1H MAS NMR, the spectra of the silicas which were modified by the methyl-epoxy siloxane in the presence of DMC (Fig. 13 c) or DEC (Fig. 13 d) are nearly identical and in excellent agreement with the 29Si CP/MAS NMR data (Fig. 11 c, d). Grafted methyl-epoxy siloxane on the silica surfaces of both samples resulted in shifts at 0.0 ppm and 3.2 ppm. The chemical shift at 3.2 ppm confirms the presence of the characteristic methyl/epoxy groups for all the samples. In contrast, the resonance at 0.0 ppm for SiO2 modified with neat CPDMS (Fig. 13 b) is hardly detectable, which, in accordance with the grafting density data (Table 1), demonstrates the small amount of long CPDMS units grafted at the SiO2 surface. Additionally, the 13C CP/MAS NMR data (Fig. 14) support this conclusion because only a very faint peak due to the alkyl groups at 44–50 ppm can be observed for the resulting samples. Note that this signal in the 13C CP/MAS NMR spectra of these samples (Fig. 14) is broad, indicating a restricted mobility of the functional groups attached to the siloxane framework, as discussed above.
1H MAS NMR spectra of (a) neat fumed silica, (b) modified fumed silica with neat CPDMS, modified with (c) mixtures of CPDMS and dimethyl carbonate and (d) mixtures of CPDMS and diethyl carbonate
13C CP/MAS NMR spectra of (a) modified fumed silica with neat CPDMS, modified with (b) mixtures of CPDMS and dimethyl carbonate and (c) mixtures of CPDMS and diethyl carbonate
Note that, in general, the grafting density of the presented surfaces decreases as the size of the polymer used for surface functionalization increases. Similar results were also presented for silicas functionalized by different bis-fluoroalkyl disiloxanes [12]. This can be due to steric hindrance from the long polymer chains of the macromolecule, as discussed above.
An in-depth solid-state NMR study of FSNs functionalized with organosiloxanes of various polymer chain lengths and their mixtures with DMC or DEC is presented. For better analysis of the effects of polymer chain length, the organosiloxanes studied here are much longer, and differ more strongly in viscosity and pendant groups, than the organosiloxanes studied before [12, 35, 48–54]. The obtained results reveal that the structure of the grafted species, the type of grafting, and the grafting density at the SiO2 surface depend strongly on the length of the organosiloxane polymer and on the nature of the "green" additive, DMC or DEC. Spectral changes observed by solid-state NMR spectroscopy suggest that the major products of the reaction of the various organosiloxanes and their DMC or DEC mixtures with the FSNs were D (RR'Si(O0.5)2) and T (RSi(O0.5)3) organosiloxane units. The appearance of grafted siloxane units at the SiO2/PMHS+DEC and SiO2/PDMS+DMC surfaces is accompanied by a significant reduction of the Q3 signals, while for the neat organosiloxanes and some of their mixtures with alkyl carbonates used for SiO2 modification, a reduction of Q3 is hardly observable. Small amounts of residual silanols (hardly accessible to the modifier reagents used) and physisorbed water remain in all the samples of modified silicas (note that the crude silica was not preheated at high temperatures).
Addition of DMC to the modifying mixture facilitates the chemical reaction between the medium (PDMS) or long (CPDMS) polymer and the SiO2 surface. Diethyl carbonate addition somewhat worsens the chemical reaction between the medium organosiloxane (PDMS) and the SiO2 surface but greatly facilitates the reaction when organosiloxanes with short (PMHS) or long (CPDMS) polymer chains are applied for FSNs modification. Thus, from the technological point of view, for FSNs modification with a short organosiloxane it is reasonable to use DEC; with a medium organosiloxane, the application of DMC is necessary; and with a long organosiloxane, it is beneficial to use either DMC or DEC.
The data for CP/MAS NMR, BET, and chemical analysis suggest the "vertical" orientation of grafted organosiloxane chains when short and medium polymer or its mixture with DMC (ρ = 7.2–7.4 groups/nm2) are applied for FSNs modification. The reaction of FSNs with medium and long polymer and its mixture with DEC (PDMS/DEC or CPDMS/DEC) leads to the formation of the "horizontal" chains at the surface (ρ = 0.1–2.5 groups/nm2). The findings open new ways for the preparation of similar materials of the same quality using different substrates such as various silicas—silica gels, porous silicas, and precipitated silica. The comparison of the influence of substrate nature on poly(organosiloxane)/alkyl carbonate modification is of undoubted interest for future study.
%C: Carbon weight percentage
CP/MAS NMR: Cross-polarization magic-angle spinning nuclear magnetic resonance
CPDMS: Poly[dimethylsiloxane-co-(2-(3,4-epoxycyclohexyl)ethyl)methylsiloxane]
DEC: Diethyl carbonate
DMC: Dimethyl carbonate
FSNs: Fumed silica nanoparticles
PDMS: Poly(dimethylsiloxane)
PMHS: Poly(methylhydrosiloxane)
SBET: Specific surface area
SiO2: Silica (silicon dioxide)
δ: Chemical shift
ρ: Bonding density
This research was supported by the Special Funding of the 'Belt and Road' International Cooperation of Zhejiang Province under grant 2015C04005 and China Postdoctoral Science Foundation grant Z741020001. Partly, this research was supported by the Center for Integrated Nanotechnologies, an Office of the Science User Facility operated for the US Department of Energy (DOE), Office of Science by Los Alamos National Laboratory (Contract DE-AC52-06NA25396), and Sandia National Laboratories (Contract DE-NA-0003525).
The datasets supporting the conclusions of this work are included within the article. Any raw data generated and/or analyzed in the present study are available from the corresponding author on request.
ISP, YMM, IMH, and DZ conceived and designed the experiments; ISP and YMM performed all the experiments; ISP, IMH, and YMM analyzed and interpreted the data; ISP wrote the manuscript; and ZL and WD contributed reagents/materials/analysis tools. All the authors revised and approved the final version of the manuscript.
Additional file 1: Figure S1. 90 MHz 1H NMR spectrum of neat PMHS. Figure S2. 90 MHz 1H NMR spectrum of neat PDMS; the inset shows the methyl group shifts of parent PDMS. Figure S3. 90 MHz 1H NMR spectrum of neat CPDMS; the inset shows the methyl group shifts of parent CPDMS. (DOCX 1498 kb)
College of Environment, Zhejiang University of Technology, Hangzhou, 310014, China
College of Science, Zhejiang University of Technology, Hangzhou, 310023, China
Key Laboratory of Microbial Technology for Industrial Pollution Control of Zhejiang Province, Hangzhou, 310014, China
Institute for Information Recording of NAS of Ukraine, Kiev, 03113, Ukraine
College of Materials Science & Engineering, Zhejiang University of Technology, Hangzhou, 310014, China
Omphalos Bioscience, LLC, Albuquerque, New Mexico 87110, USA
Iler RK (1979) The chemistry of silica: solubility, polymerization, colloid and surface properties, and biochemistry. 1st ed. Wiley, New York
Bergna HE, Roberts WO (2005) Colloidal silica: fundamentals and applications. 1st ed. Taylor and Francis, Boca Raton
Vansant EF, van der Voort P, Vrancken KC (1997) Characterization and chemical modification of the silica surface. 1st ed. Elsevier, Amsterdam
Bluemel J (1995) Reactions of ethoxysilanes with silica: a solid-state NMR study. JACS 117:2112–2113 https://doi.org/10.1021/ja00112a033
Litvinov VM, Barthel H, Weis J (2002) Structure of a PDMS layer grafted onto a silica surface studied by means of DSC and solid-state NMR. Macromolecules 35:4356–4364 https://doi.org/10.1021/ma0119124
Park SE, Prasetyanto EA (2010) Morphosynthesis and catalysis by organofunctionalized mesoporous materials. In: Wyman EB, Skief MC (eds) Organosilanes, Properties, Performance, and Applications. UK ed. Nova Science Publishers, New York, pp 101–131
Daoud WA, Xin JH, Tao X (2006) Synthesis and characterization of hydrophobic silica nanocomposites. Appl Surf Sci 252:5368–5371 https://doi.org/10.1016/j.apsusc.2005.12.020
Bernardoni F, Kouba M, Fadeev AY (2008) Effect of curvature on the packing and ordering of organosilane monolayers supported on solids. Chem Mater 20:382–387 https://doi.org/10.1021/cm070842y
Chojnowski J, Rubinsztajn S, Fortuniak W, Kurjata J (2008) Synthesis of highly branched alkoxysiloxane−dimethylsiloxane copolymers by nonhydrolytic dehydrocarbon polycondensation catalyzed by tris(pentafluorophenyl)borane. Macromolecules 41:7352–7358 https://doi.org/10.1021/ma801130y
Gun'ko VM, Pakhlov EM, Goncharuk OV, Andriyko LS, Marynin AI, Ukrainets AI, Charmas B, Skubiszewska-Zięba J, Blitz JP (2017) Influence of hydrophobization of fumed oxides on interactions with polar and nonpolar adsorbates. Appl Surf Sci 423:855–868 https://doi.org/10.1016/j.apsusc.2017.06.207
Helmy R, Wenslow RW, Fadeev AY (2004) Reaction of organosilicon hydrides with solid surfaces: an example of surface-catalyzed self-assembly. JACS 126:7595–7600 https://doi.org/10.1021/ja0498336
Graffius G, Bernardoni F, Fadeev AY (2014) Covalent functionalization of silica surface using "inert" poly(dimethylsiloxanes). Langmuir 30:14797–14807 https://doi.org/10.1021/la5031763
Chang CL, Lee HS, Chen CK (1999) Aminolysis of cured siloxane polymers. Polym Degrad Stab 65:1–4. https://doi.org/10.1016/S0141-3910(98)00099-00098
Hsiao YC, Hill LW, Pappas SP (1975) Reversible amine solubilization of cured siloxane polymers. J Appl Polym Sci 19:2817–2820 https://doi.org/10.1002/app.1975.070191017
Clarson SJ, Semlyen JA (1986) Studies of cyclic and linear poly(dimethylsiloxanes): 21 high temperature thermal behavior. Polymer 27:91–95 https://doi.org/10.1016/0032-3861(86)90360-5
Thomas TH, Kendrick TC (1969) Thermal Analysis of Polydimethylsiloxanes. I. Thermal Degradation in Controlled Atmospheres. J Polym Sci B 7:537–549 https://doi.org/10.1002/pol.1969.160070308
Brook MA, Zhao S, Liu L, Chen Y (2012) Surface etching of silicone elastomers by depolymerization. Can J Chem 90:153–160 https://doi.org/10.1139/v11-145
Chang CL, Lee HSJ, Chen CK (2005) Nucleophilic cleavage of crosslinked polysiloxanes to cyclic siloxane monomers: mild catalysis by a designed polar solvent system. J Polym Res 12:433–438 https://doi.org/10.1007/s10965-004-1871-1
Selva M, Fabrisa M, Perosa A (2011) Decarboxylation of dialkyl carbonates to dialkyl ethers over alkali metal-exchanged faujasites. Green Chem 13:863–872 https://doi.org/10.1039/C0GC00536C
Ono Y (1996) Dimethyl carbonate for environmentally benign reactions. Pure Appl Chem 68:367–375. https://doi.org/10.1016/S0920-5861(96)00130-00137
Protsak I, Henderson IM, Tertykh V, Dong W, Le Z (2018) Cleavage of organosiloxanes with dimethyl carbonate: a mild approach to graft-to-surface modification. Langmuir 34:9719–9730 https://doi.org/10.1021/acs.langmuir.8b01580
Gun'ko VM, Turov VV (2013) Nuclear magnetic resonance studies of interfacial phenomena. 1st ed. CRC Press/Taylor & Francis Group, Boca Raton
Spataro G, Champouret Y, Florian P, Coppel Y, Kahn ML (2018) Multinuclear solid-state NMR study: a powerful tool for understanding the structure of ZnO hybrid nanoparticles. Phys Chem Chem Phys 20:12413–12421 https://doi.org/10.1039/C8CP01096J
Kobayashi T, Singappuli-Arachchige D, Slowing II, Pruski M (2018) Spatial distribution of organic functional groups supported on mesoporous silica nanoparticles (2): a study by 1H triple-quantum fast-MAS solid-state NMR. Phys Chem Chem Phys 20:22203–22209 https://doi.org/10.1039/C8CP04425B
Bronnimann CE, Zeigler RC, Maciel GE (1988) Proton NMR study of dehydration of the silica gel surface. JACS 110:2023–2026 https://doi.org/10.1021/ja00215a001
Liu CC, Maciel GE (1996) The fumed silica surface: a study by NMR. JACS 118:5103–5119 https://doi.org/10.1021/ja954120w
Maciel GE, Sindorf DW (1980) Silicon-29 NMR study of the surface of silica gel by cross polarization and magic-angle spinning. JACS 102:7606–7607 https://doi.org/10.1021/ja00545a056
Brei VV (1994) 29Si solid-state NMR study of the surface structure of aerosil silica. J Chem Soc Faraday Trans 90:2961–2964 https://doi.org/10.1039/FT9949002961
Trebosc J, Wiench JW, Huh S, Lin VSY, Pruski M (2005) Solid-state NMR study of MCM-41-type mesoporous silica nanoparticles. JACS 127:3057–3068 https://doi.org/10.1021/ja043567e
De Haan JW, Van Den Bogaert HM, Ponjeé JJ, Van de Ven LJM (1986) Characterization of modified silica powders by Fourier transform infrared spectroscopy and cross-polarization magic angle spinning NMR. J Colloid Interface Sci 110:591–600 https://doi.org/10.1016/0021-9797(86)90411-X
Turov VV, Chodorowski S, Leboda R, Skubiszewska-Zie J, Brei VV (1999) Thermogravimetric and 1H NMR spectroscopy studies of water on silicalites. Colloids Surf A Physicochem Eng Asp 158:363–373 https://doi.org/10.1016/S0927-7757(99)00180-6
Turov VV, Leboda R (1999) Application of 1H NMR spectroscopy method for determination of characteristics of thin layers of water adsorbed on the surface of dispersed and porous adsorbents. Adv Colloid Interface Sci 79:173–211 https://doi.org/10.1016/S0001-8686(97)00036-5
Williams EA (1984) Recent advances in silicon-29 NMR spectroscopy. Annu Rep NMR Spectrosc 15:235–289 https://doi.org/10.1016/S0066-4103(08)60209-4
Engelhardt G, Jancke H (1981) Structure investigation of organosilicon polymers by silicon-29 NMR. Polym Bull 5:577–584 https://doi.org/10.1007/BF00255295
Fadeev AY, Kazakevich YV (2002) Covalently attached monolayers of oligo(dimethylsiloxane)s on silica: a siloxane chemistry approach for surface modification. Langmuir 18:2665–2672 https://doi.org/10.1021/la011491j
Wasserman SR, Whitesides GM, Tidswell IM, Ocko BM, Pershan PS, Axe JD (1989) The structure of self-assembled monolayers of alkylsiloxanes on silicon: a comparison of results from ellipsometry and low-angle x-ray reflectivity. JACS 111:5852–5861 https://doi.org/10.1021/ja00197a054
Gao W, Dickinson L, Grozinger C, Morin FG, Reven L (1996) Self-assembled monolayers of alkylphosphonic acids on metal oxides. Langmuir 12:6429–6435 https://doi.org/10.1021/la9607621
Bain CD, Troughton EB, Tao YT, Evall J, Whitesides GM, Nuzzo RG (1989) Formation of monolayer films by the spontaneous assembly of organic thiols from solution onto gold. JACS 111:321–335 https://doi.org/10.1021/ja00183a049
Stevens MJ (1999) Thoughts on the structure of alkylsilane monolayers. Langmuir 15:2773–2778 https://doi.org/10.1021/la981064e
Rye RR (1997) Transition temperatures for n-alkyltrichlorosilane monolayers. Langmuir 13:2588–2590 https://doi.org/10.1021/la960934u
Kessel CR (1991) Formation and characterization of a highly ordered and well-anchored alkylsilane monolayer on mica by self-assembly. Langmuir 7:532–538 https://doi.org/10.1021/la00051a020
Parikh AN, Allara DL, Azouz IB, Rondelez F (1998) An intrinsic relationship between molecular structure in self-assembled n-alkylsiloxane monolayers and deposition temperature. J Phys Chem 31:7577–7590 https://doi.org/10.1021/j100082a031
Fadeev AY (2010) Hydrophobic monolayer surfaces: synthesis and wettability. In: Somasundaran P (ed) Encyclopedia for Surface and Colloid Science. 2nd ed. Taylor & Francis, New York, p 2854
Fadeev AY, McCarthy TJ (2000) Self-assembly is not the only reaction possible between alkyltrichlorosilanes and surfaces: monomolecular and oligomeric covalently attached layers of dichloro- and trichloroalkylsilanes on silicon. Langmuir 16:7268–7274 https://doi.org/10.1021/la000471z
Brzoska JB, Azouz IB, Rondelez F (1994) Silanization of solid substrates: a step toward reproducibility. Langmuir 10:4367–4373 https://doi.org/10.1021/la00023a072
Lipatov YS, Sergeeva LM, Kondor R, Slutzkin D (1974) Adsorption of polymers, 1st edn. Wiley, New York
Gun'ko VM, Borysenko MV, Pissis P, Spanoudaki A, Shinyashiki N, Sulim IY, Kulik TV, Palyanytsy BB (2007) Polydimethylsiloxane at the interfaces of fumed silica and zirconia/fumed silica. Appl Surf Sci 253:7143–7156 https://doi.org/10.1016/j.apsusc.2007.02.185
Krumpfer JW, Fadeev AY (2006) Displacement reactions of covalently attached organosilicon monolayers on Si. Langmuir 22:8271–8272 https://doi.org/10.1021/la060969m
Kazakevich YV, Fadeev AY (2002) Adsorption characterization of oligo(dimethylsiloxane)-modified silicas: an example of highly hydrophobic surfaces with non-aliphatic architecture. Langmuir 18:3117–3122 https://doi.org/10.1021/la011490r
Li YF, Xia YX, Xu DP, Li GL (1981) Surface reaction of particulate silica with polydimethylsiloxanes. J Polym Sci 19:3069–3079 https://doi.org/10.1002/pol.1981.170191204
Guba GY, Bogillo VI, Chuiko AA (1993) Kinetics and mechanism of the reaction of organosiloxanes with the surface of pyrogenic silica. Theor Exp Chem 28:146–150 https://doi.org/10.1007/BF0057392
Barthel H, Nikitina E (2004) INS and IR study of intermolecular interactions at the fumed silica-polydimethylsiloxane interphase, Part 3. Silica-siloxane adsorption complexes. Silicon Chem 1:261–279 https://doi.org/10.1023/B:SILC.0000018353.32350.c9
Smith JS, Borodin O, Smith GD, Kober EM (2007) A molecular dynamics simulation and quantum chemistry study of poly(dimethylsiloxane)-silica nanoparticle interactions. J Polym Sci B 45:1599–1615 https://doi.org/10.1002/polb.21119
Sulym IY, Borysenko MV, Goncharuk OV, Terpilowski K, Sternik D, Chibowski E, Gun'ko VM (2011) Structural and hydrophobic–hydrophilic properties of nanosilica/zirconia alone and with adsorbed PDMS. Appl Surf Sci 258:270–277 https://doi.org/10.1016/j.apsusc.2011.08.045
Problems in Mathematics
Quiz 4: Inverse Matrix/ Nonsingular Matrix Satisfying a Relation
Problem 289
(a) Find the inverse matrix of
\[A=\begin{bmatrix}
1 & 0 & 1 \\
1 &0 &0 \\
2 & 1 & 1
\end{bmatrix}\] if it exists. If you think there is no inverse matrix of $A$, then give a reason.
(b) Find a nonsingular $2\times 2$ matrix $A$ such that
\[A^3=A^2B-3A^2,\] where
\[B=\begin{bmatrix}
4 & 1\\
2& 6
\end{bmatrix}.\] Verify that the matrix $A$ you obtained is actually a nonsingular matrix.
(The Ohio State University, Linear Algebra Midterm Exam Problem)
(a) Find the inverse matrix if it exists.
Find a nonsingular $2\times 2$ matrix $A$ such that $A^3=A^2B-3A^2$.
Midterm 1 problems and solutions
List of Quiz Problems of Linear Algebra (Math 2568) at OSU in Spring 2017
We consider the augmented matrix $[A\mid I]$, where $I$ is the $3\times 3$ identity matrix, and apply elementary row operations as follows. We have
\begin{align*}
\left[\begin{array}{rrr|rrr}
1 & 0 & 1 & 1 & 0 & 0 \\
1 & 0 & 0 & 0 & 1 & 0 \\
2 & 1 & 1 & 0 & 0 & 1
\end{array} \right] \xrightarrow{\substack{R_2-R_1\\R_3-2R_1}}
\left[\begin{array}{rrr|rrr}
1 & 0 & 1 & 1 & 0 & 0 \\
0 & 0 & -1 & -1 & 1 & 0 \\
0 & 1 & -1 & -2 & 0 & 1
\end{array} \right] \xrightarrow{R_2 \leftrightarrow R_3}\\[10pt]
\left[\begin{array}{rrr|rrr}
1 & 0 & 1 & 1 & 0 & 0 \\
0 & 1 & -1 & -2 & 0 & 1 \\
0 & 0 & -1 & -1 & 1 & 0
\end{array} \right] \xrightarrow{-R_3}
\left[\begin{array}{rrr|rrr}
1 & 0 & 1 & 1 & 0 & 0 \\
0 & 1 & -1 & -2 & 0 & 1 \\
0 & 0 & 1 & 1 & -1 & 0
\end{array} \right]\\[10pt]
\xrightarrow{\substack{R_1-R_3\\R_2+R_3}}
\left[\begin{array}{rrr|rrr}
1 & 0 & 0 & 0 & 1 & 0 \\
0 & 1 & 0 & -1 & -1 & 1 \\
0 & 0 & 1 & 1 & -1 & 0
\end{array} \right].
\end{align*}
Therefore, we could reduce the matrix $A$ into the identity matrix ($A$ is row equivalent to $I$), and thus $A$ is invertible; the inverse matrix is given by the right half of the last augmented matrix.
Hence the inverse matrix is
\[A^{-1}=\begin{bmatrix}
0 & 1 & 0 \\
-1 & -1 & 1 \\
1 & -1 & 0
\end{bmatrix}.\]
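The row reduction above can be double-checked numerically. The following short Python sketch (added here purely as a verification aid, not part of the original quiz solution) uses NumPy to confirm the inverse:

```python
# Numerical check of part (a): recompute the inverse and compare it with the
# matrix obtained by Gauss-Jordan elimination above.
import numpy as np

A = np.array([[1, 0, 1],
              [1, 0, 0],
              [2, 1, 1]], dtype=float)

A_inv = np.linalg.inv(A)                    # numerical inverse
expected = np.array([[ 0,  1, 0],
                     [-1, -1, 1],
                     [ 1, -1, 0]], dtype=float)

print(np.allclose(A_inv, expected))         # True
print(np.allclose(A @ A_inv, np.eye(3)))    # A A^{-1} = I
```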
Suppose that $A$ is a nonsingular matrix such that $A^3=A^2B-3A^2$.
Since $A$ is nonsingular, it is invertible and thus the inverse matrix $A^{-1}$ exists.
Then we have
\begin{align*}
A&=A^{-2}A^3\\
&=A^{-2}(A^2B-3A^2)\\
&=A^{-2}A^2B-3A^{-2}A^2\\
&=IB-3I\\
&=B-3I\\
&=\begin{bmatrix}
4 & 1\\
2& 6
\end{bmatrix}-3\begin{bmatrix}
1 & 0\\
0& 1
\end{bmatrix}
=\begin{bmatrix}
1 & 1\\
2& 3
\end{bmatrix}.
\end{align*}
To prove that this matrix is nonsingular we calculate the determinant of $A$.
(Recall that if the determinant of a matrix is nonzero, then it is invertible, hence nonsingular.)
The determinant of $2\times 2$ matrix $A$ is
\[\det(A)=1\cdot 3-1\cdot 2=1\neq 0.\]
Since the determinant of $A$ is nonzero, we conclude that $A$ is nonsingular.
Thus, the nonsingular matrix $A$ satisfying $A^3=A^2B-3A^2$ must be
\[A=\begin{bmatrix}
1 & 1\\
2& 3
\end{bmatrix}.\]
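Again, a quick numerical check (an addition, not part of the exam solution) confirms that $A=B-3I$ is nonsingular and satisfies the given relation:

```python
# Verify that A = B - 3I is nonsingular and that A^3 = A^2 B - 3 A^2.
import numpy as np

B = np.array([[4, 1],
              [2, 6]], dtype=float)
A = B - 3 * np.eye(2)                    # A = [[1, 1], [2, 3]]

lhs = np.linalg.matrix_power(A, 3)
rhs = A @ A @ B - 3 * (A @ A)

print(np.round(np.linalg.det(A), 10))    # 1.0, so A is nonsingular
print(np.allclose(lhs, rhs))             # True
```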
These are Quiz 4 problems for Math 2568 (Introduction to Linear Algebra) at OSU in Spring 2017.
(Update:2/13/2017) The second problem is one of the midterm exam 1 problems at OSU in Spring 2017.
The following list is the problems and solutions/proofs of midterm exam 1 of linear algebra at the Ohio State University in Spring 2017.
Problem 1 and its solution: Possibilities for the solution set of a system of linear equations
Problem 2 and its solution: The vector form of the general solution of a system
Problem 3 and its solution: Matrix operations (transpose and inverse matrices)
Problem 4 and its solution: Linear combination
Problem 5 and its solution: Inverse matrix
Problem 6 and its solution (The current page): Nonsingular matrix satisfying a relation
Problem 7 and its solution: Solve a system by the inverse matrix
Problem 8 and its solution: A proof problem about nonsingular matrix
There were 13 weekly quizzes. Here is the list of links to the quiz problems and solutions.
Quiz 1. Gauss-Jordan elimination / homogeneous system.
Quiz 2. The vector form for the general solution / Transpose matrices.
Quiz 3. Condition that vectors are linearly dependent/ orthogonal vectors are linearly independent
Quiz 4. Inverse matrix/ Nonsingular matrix satisfying a relation
Quiz 5. Example and non-example of subspaces in 3-dimensional space
Quiz 6. Determine vectors in null space, range / Find a basis of null space
Quiz 7. Find a basis of the range, rank, and nullity of a matrix
Quiz 8. Determine subsets are subspaces: functions taking integer values / set of skew-symmetric matrices
Quiz 9. Find a basis of the subspace spanned by four matrices
Quiz 10. Find orthogonal basis / Find value of linear transformation
Quiz 11. Find eigenvalues and eigenvectors/ Properties of determinants
Quiz 12. Find eigenvalues and their algebraic and geometric multiplicities
Quiz 13 (Part 1). Diagonalize a matrix.
Quiz 13 (Part 2). Find eigenvalues and eigenvectors of a special matrix
It's important to understand the graphs and charts often used in statistics based questions before we explain the core concepts tested on the SAT Math section.
1. Histograms
Like a line plot, a histogram shows the frequency of data. Instead of marking the data items with Xs, however, a histogram shows them as bars on a graph.
EXAMPLE: This histogram shows the heights of trees on each street in my town. In this case, the frequency is the number of trees, and the characteristic is the height of the trees.
From the graph, we see:
There are 3 trees whose heights are between 30 and 35 feet
There are 8 trees whose heights are between 41 and 45 feet etc.
2. Scatterplot
A scatter plot is a type of graph that shows the relationship between two sets of data. Scatter plots graph data as ORDERED PAIRS (this is simply a pair of numbers but the order in which they appear together matters).
After a test, Ms. Phinney asked her students how many hours they studied. She recorded their answers, along with their test scores. Create a scatter plot of hours studied and test scores.
To show Tammy's data, mark the point whose horizontal value is 4.5 and whose vertical value is 90.
By graphing the data on a scatter plot, we can see if there is a relationship between the number of hours studied and test scores. The scores generally go up as the hours of studying go up, so this shows that there IS a relationship between test scores and studying. We can draw a line on the graph that roughly describes the relationship between the number of hours studied and test scores. This line is known as the LINE OF BEST FIT. As you can see, none of the points lie on the line of best fit, but that's okay! This is because the line of best fit is the best line that describes the relationship of ALL the points on the graph
Scatter plots show three types of relationships, called CORRELATIONS:
POSITIVE CORRELATION: As one set of values increases, the other set increases as well (but not necessarily every value).
EXAMPLE: As the population increases, so does the number of primary schools
NEGATIVE CORRELATION: As one set of values increases, the other set decreases (but not necessarily every value).
EXAMPLE: As the price of peaches goes down, the number of peaches sold goes up.
NO CORRELATION: The values have no relationship.
EXAMPLE: A person's IQ is not related to his/her shoe size, so there is no correlation.
Apart from these graphs, other chart types like pie charts, box plots, cumulative frequency graphs, etc., can also appear. They are rare, and therefore we won't describe each one of them in detail. If they do show up on the test, they are straightforward to answer. A couple of examples are below.
What is the area of the pie chart that is represented by those who chose hamburgers as their favorite food?
A. 97.2°
B. 97.4°
C. 98.4°
D. 98.8°
Solution: We know that 27% of those in the survey chose hamburgers as their favorite food. This corresponds to 27% of the area of the circle, which means that the central angle of the circle accounts for 27% of the measure of the sum of all central angles in a circle (360°).
In other words, we are looking for the angle that makes up 0.27 of the entire circle.
0.27 x 360°
= 97.2°
The correct answer is A.
At what pH is the enzyme activity at its maximum?
Solution: Enzyme activity is at its maximum in the earliest stages of the experiment at about time = 0. The pH at this time is about 1.5. The closest correct answer is A.
STATISTICAL MEASURES
Statistics has been a notoriously feared SAT topic for many test takers. Much of that intimidating reputation comes from a lack of understanding of the operations used to calculate statistical measures.
The most used statistical measures are described below:
1. The Mean (also called the average)
The mean is a calculated central value of a set of numbers. To calculate the mean, add all of the numbers, then divide the sum by the number of items.
If x is the average (arithmetic mean) of m and 9, y is the average of 2m and 15, and z is the average of 3m and 18, what is the average of x,y, and z in terms of m?
A. m + 6
B. m + 7
C. 2m + 14
D. 3m + 21
Solution: There are a lot of variables in this equation, but don't let them confuse you. We already know that the average of two numbers is the sum of those two numbers divided by 2. That means that:
\(x=\frac{m+9}{2}\)
\(y=\frac{2m+15}{2}\)
\(z=\frac{3m+18}{2}\)
Now we need to find the average of x, y, and z. Substituting the expressions above for x, y, and z gives us:
\(\frac{m+9+2m+15+3m+18}{3\times 2}\)
\(\frac{6m+42}{6}\)
m + 7
The correct answer is B.
The mean score of 8 players in a basketball game was 14.5 points. If the highest individual score is removed, the mean score of the remaining 7 players become 12 points. What was the highest score?
Solution: If the mean score of 8 players is 14.5, then the total of those 8 scores is 14.5 x 8 = 116. If the mean of 7 scores is 12, then the total of those 7 scores is 12 x 7 = 84.
Since the set of 7 scores was created by removing the highest score from the set of 8 scores, the difference between the total of all 8 scores and the set of 7 scores is equal to the removed score.
116 - 84 = 32
The correct answer is C.
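Both mean calculations above are easy to sanity-check with a few lines of Python. The sketch below is only an illustration; the value m = 10 is an arbitrary test value, not part of the problem.

```python
# Example 1: the average of x, y, and z equals m + 7 for any value of m.
m = 10                              # arbitrary test value
x = (m + 9) / 2
y = (2 * m + 15) / 2
z = (3 * m + 18) / 2
print((x + y + z) / 3, m + 7)       # both print 17.0

# Example 2: mean of 8 scores is 14.5, mean of the remaining 7 is 12.
total_8 = 14.5 * 8                  # 116.0
total_7 = 12 * 7                    # 84
print(total_8 - total_7)            # 32.0 -> the highest score
```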
2. Weighted Averages
The basic formula for averages applies only to sets of data consisting of individual values, all of which are equally weighted (i.e., none of the values "counts" toward the average any more than any other value does). When you consider sets in which some data are more heavily weighted than other data - whether weighted by percent, frequencies, ratios, or fractions - you need to use special techniques for WEIGHTED AVERAGES.
A weighted average of only two values will fall closer to whichever value is weighted more heavily. For instance, if a drink is made by mixing 2 shots of a liquor containing 15% alcohol with 3 shots of a liquor containing 20% alcohol, then the alcohol content of the mixed drink will be closer to 20% than to 15%.
EXAMPLE: A mixture of "lean" ground beef (10% fat) and "super-lean" ground beef (3% fat) has a total fat content of 8%. What is the ratio of "lean" ground beef to "superlean" ground beef?
Fortunately, you do not need to complete any complicated formulas here. Instead, you need to look at the difference between the fat contents of "lean" and "super lean" ground beef and the fat content of the final mixture.
"Lean" ground beef has a fat content that is 2% higher than the fat content of the final mixture. You can say that "lean" ground beef has a +2 differential.
Similarly, "super lean" ground beef has a fat content that is 5% lower than the fat content of the final mixture. You can say that "super lean" ground beef has a -5 differential. You need to make these differentials cancel out, so you should multiply both differentials by different numbers so that the positive will cancel out with the negative. If you were to set up an equation, it would look something like this:
x(+2) + y(-5) = 0
Now all you have to do is pick values for x and y. If x = 5 and y = 2, the equation will be true (10 + (-10) = 0). That means that for every 5 parts "lean" ground beef, you have 2 parts "super-lean" ground beef. The ratio is 5:2.
This relationship holds whenever two groups are averaged together. Suppose that A and B are averaged together. If they are in a ratio of a:b, then you can multiply the differential of A by a, and it will cancel out with the differential of B times b.
EXAMPLE: A group consists of men and women in a ratio of 2:3. If the men have an average age of 50, and the average age of the group is 56, you can easily figure out the average age of the women in the group. Men have a -6 differential, and there are 2 of them for every 3 women. If the women's differential is w, then:
2 x (-6) + 3 x (w) = 0
-12 + 3w = 0
w = 4
Women have a +4 differential. The average age of the women in the group is 56 + 4 = 60 years old.
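The differential bookkeeping can be checked numerically. The following sketch (illustrative only) recomputes the women's average and verifies that the weighted average of the group comes back to 56:

```python
# Men average 50, men:women ratio is 2:3, group average is 56.
men_avg, group_avg = 50, 56
men, women = 2, 3                  # ratio 2:3

# differentials must cancel: men*(men_avg - group_avg) + women*w_diff = 0
w_diff = -men * (men_avg - group_avg) / women
women_avg = group_avg + w_diff
print(women_avg)                   # 60.0

# check that the weighted average comes back to 56
print((men * men_avg + women * women_avg) / (men + women))  # 56.0
```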
3. Median
A median is the middle number of a data set when all of the items are written in order, from least to greatest (or greatest to least).
The following are the steps to calculate the median of a list:
Arrange the list in ascending or descending order
If there are an odd number of items on the list, the middle item equals the median
\(Median\:(odd\: number\: of\: items)=\left(\frac{n+1}{2}\right)^{th}\: term\)
If there are an even number of items on the list, then the median is the average of the two middle numbers
\(Median\:(even\: number\: of\: items)=\frac{\left(\frac{n}{2}\right)^{th}\: term+\left(\frac{n}{2}+1\right)^{th}\: term}{2}\)
In the 7-element set { 3, 5, 7, 9, 13, 15, 17 }, the median is 9
In the 8-element set { 3, 5, 7, 9, 13, 15, 17, 17 }, the median is 11 (the average of the fourth and fifth entries, 9 and 13).
When the number of items on the list is even, the median can equal a number not on the list.
An absurdly large number, far away from the rest of the set (such as the 312 in the set {1, 3, 3, 3, 3, 3, 74, 89, 312} used later in this section), has zero effect on the median, although it would have a big effect on the mean
As long as the middle numbers stay in the middle, changes to the values of the outer numbers have no effect on the median; by contrast, changing any number in the set changes the mean.
A sociologist chose 300 students at random from each of two schools and asked each student how many siblings he or she has. The results are shown in the table below:
There are a total of 2400 students at Lincoln School and 3,300 students at Washington School. What is the median number of siblings for all the students surveyed?
Solution: There were a total of 600 data points collected (300 from each school), which means the median will be the average of the 300th and 301st numbers.
Fortunately, there's a way to solve the problem without having to write out 600 numbers! You can put the numbers into groups based on the information you're given in the chart.
For each "number of siblings" value, add the number of respondents from each of the two schools together. For example, 120 students from Lincoln School and 140 students from Washington School said they had no siblings, and 120 + 140 = 260. So a total of 260 students have 0 siblings. Do this for each of the sibling values.
260 students have 0 siblings
190 students have 1 sibling
90 students have 2 siblings
20 students have 4 siblings.
Both the 300th and the 301st values will come in the second category (190 students have 1 sibling). The correct answer is B.
A survey was taken regarding the value of homes in a county, and it was found that the mean home value was $165,000, and the median home value was $125,000. Which of the following could explain the difference between the mean and median home values in the country?
A. The homes have values that are close to each other
B. There are a few homes that are valued much less than the rest
C. There are a few homes that are valued much more than the rest
D. Many of the homes have values between $125,000 and $165,000
Solution: The mean and median of a set of data are equal when the data has a perfectly symmetrical distribution (such as a normal distribution). If the mean and median aren't equal to each other, that means the data isn't symmetrical and that there are outliers.
When there are outliers in the data, the mean will be pulled in the outliers' direction (either smaller or larger) while the median remains the same. In this problem, the mean is larger than the median. That means the outliers are several homes that are significantly more expensive than the rest, since these outliers push the mean to be larger without affecting the median.
Therefore, the correct answer is C, There are a few homes that are valued much more than the rest.
Median of sets containing unknown values
Unlike the arithmetic mean, the median of a set depends only on the one or two values in the middle of the ordered set. Therefore, you may be able to determine a specific value for the median of a set even if one or more unknowns are present.
For instance, consider the unordered set {x, 2, 5, 11, 11, 12, 33}. No matter whether x is less than 11, equal to 11, or greater than 11, the median of the resulting set will be 11. (Try substituting different values of x to see why the median does not change.)
By contrast, the median of the unordered set {x, 2, 5, 11, 12, 12, 33} depends on x. If x is 11 or less, the median is 11. If x is between 11 and 12, the median is x. Finally, if x is 12 or more, the median is 12.
EXAMPLE: Sarah spends 2 hours on Tuesday, Thursday, and Saturday practicing her violin. On Monday she practices for 90 minutes, and on Friday she practices for 1 hour. What is the median time she spends practicing her violin in a given week?
STEP 1: Convert all the given time values into the same unit of measure.
Monday = 90 mins (1.5 hours)
Tuesday = 2 hours
Thursday = 2 hours
Friday = 1 hour
Saturday = 2 hours
STEP 2: Organize the times in ascending order: 1, 1.5, 2, 2, 2
Since there is an odd number of data entries, 5, in the set, the median will be the 3rd item in the list. Therefore, the median is 2.
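For reference, here is a small Python helper (an illustration, not something the SAT requires) that applies the odd/even median rule to the sets used in this section:

```python
# Generic median helper mirroring the odd/even rule above.
def median(values):
    s = sorted(values)
    n = len(s)
    mid = n // 2
    if n % 2 == 1:                      # odd: middle item
        return s[mid]
    return (s[mid - 1] + s[mid]) / 2    # even: average of the two middle items

print(median([2, 2, 2, 1.5, 1]))             # 2 (Sarah's practice times, in hours)
print(median([3, 5, 7, 9, 13, 15, 17]))      # 9
print(median([3, 5, 7, 9, 13, 15, 17, 17]))  # 11.0
```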
In Billy's 5th grade class, the test scores for the final math exam were 87, 54, 77, 92, 95, 91, x,
and 90. What is the value of x if the median of the test scores is 82?
STEP 1: Reorder the data set in ascending order: 54, 77, 87, 90, 91, 92, 95, x
STEP 2: Since the set has an even number of data entries, the median will be the average of
the two middle numbers.
Therefore, the median will be the average of the 4th and 5th terms.
\(The\: median\: is\: \frac{90+91}{2}=90.5\neq 82\)
STEP 3: Now, let's reorder our data set with the x on the other end of the set: x, 54, 77, 87, 90,91, 92, 95
The median will still be the average of the 4th and 5th terms, but those terms will be different
Now we know that the x-value will need to be one of the middle numbers that the average is being taken from; therefore, we can rewrite the set as follows: 54, 77, 87, x, 90, 91, 92, 95
\(The\: median\: is\: \frac{x+90}{2}=82\)
x + 90 = 164, so x = 74
4. Mode
The MODE represents the value or values in a data set that is/are repeated the most. It is possible for a data set to have one, multiple, or no modes.
There is no mode in the data set {1, 2, 3, 4, 5} since each number appears an equal number of times
In the data set {2, 4, 5, 2, 3, 1, 2, 2}, 2 is the mode, as it is seen four times
In the data set {1, 1, 2, 2, 3, 4}, there are two modes, 1 and 2, since each is seen twice in the set
EXAMPLE: Given the set {72, 75, 85, 90, 90, x}, what is the value of x if a mode of the set is 90 and its median is 85?
Solution: There are three possible places x can be located:
{72, 75, 85, 90, 90, x}
{x ,72, 75, 85, 90, 90}
{72, 75, x, 85, 90, 90}
The median of each scenario is:
\(\{72, 75, 85, 90, 90, x\}:\: \frac{85+90}{2}=87.5\neq 85\)
\(\{x, 72, 75, 85, 90, 90\}:\: \frac{75+85}{2}=80\neq 85\)
\(\{72, 75, x, 85, 90, 90\}:\: \frac{x+85}{2}=85\), therefore x = 85
Now, write down the set with x as 85: {72, 75, 85, 85, 90, 90}. In this set, the modes are both 85 and 90. Therefore the correct answer for x is 85.
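A quick way to find modes programmatically is with a frequency count. The helper below is illustrative only and mirrors the definition above, returning an empty list when every value appears equally often:

```python
# Finding the mode(s) with collections.Counter.
from collections import Counter

def modes(values):
    counts = Counter(values)
    top = max(counts.values())
    if top == 1:                        # every value appears once: no mode
        return []
    return [v for v, c in counts.items() if c == top]

print(modes([1, 2, 3, 4, 5]))           # []   -> no mode
print(modes([2, 4, 5, 2, 3, 1, 2, 2]))  # [2]
print(modes([72, 75, 85, 85, 90, 90]))  # [85, 90]
```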
5. Range
The SAT loves this statistical measure because it's so simple. The RANGE is the difference between the maximum value and the minimum value.
In the set {3, 5, 7, 9, 13, 15, 17}, the range = 17 - 3 = 14
In the set {1, 3, 3, 3, 3, 3, 74, 89, 312}, the range = 312 - 1 = 311
For the range, the only thing that matters is the top and bottom values in the set.
The smaller the range of a set, the closer its data entries are to one another. If the range is large, then either there is more space between the data entries or there are outliers in the data.
EXAMPLE: What is the value of x if the range is 12: {2, 3, x, 5, 12}
Solution: Find the range of the set, assuming x lies in the middle of the set.
Range = 12 − 2 = 10 ≠ 12
This tells us that x must be either greater than 12 or less than 2.
POSSIBILITY 1
{x, 2, 3, 5, 12}
Range = 12 − x = 12, so x = 0
POSSIBILITY 2
{2, 3, 5, 12, x}
Range = x − 2 = 12, so x = 14
In order for the range of this particular data set to be 12, x must be 0 or 14.
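A brute-force check (purely illustrative; the search range of integers from −20 to 29 is an arbitrary choice) confirms that 0 and 14 are the only integer values of x giving a range of 12:

```python
# Find every integer x for which {2, 3, x, 5, 12} has a range of 12.
solutions = []
for x in range(-20, 30):
    data = [2, 3, x, 5, 12]
    if max(data) - min(data) == 12:
        solutions.append(x)
print(solutions)                        # [0, 14]
```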
6. Standard Deviation
The mean and median both give "average" or "representative" values for a set, but they do not tell us the whole story. It is possible for two sets to have the same average but to differ widely in how spread out their values are. To describe the spread, or variation, of the data in a set, you use a different measure: STANDARD DEVIATION.
STANDARD DEVIATION (SD) indicates how far from the average (mean) the data points typically fall.
A small SD indicates that a set is clustered closely around the average (mean) value
A large SD indicates that the set is spread out widely, with some points appearing far from the mean.
EXAMPLE: Consider the sets {5, 5, 5, 5}, {2, 4, 6, 8}, and {0, 0, 10, 10}. These sets all have the same mean value of 5. You can see at a glance, though, that the sets are very different, and the differences are reflected in their SDs. The first set has an SD of zero (no spread at all), the second set has a moderate SD, and the third set has a large SD.
You might be asking where a value like √5 would come from as the technical SD of the second set: the deviations of {2, 4, 6, 8} from their mean of 5 are −3, −1, 1 and 3, the average of their squares is (9 + 1 + 1 + 9)/4 = 5, and the SD is the square root of that average, √5 ≈ 2.24.
The good news is that you do not need to know - it is very unlikely that an SAT problem will ask you to calculate an exact SD. If you just pay attention to what the average spread is doing, you'll be able to answer all SAT standard deviation problems, which involve either
Changes in the SD when a set is transformed, or
Comparisons of the SDs of two or more sets.
Just remember that the more spread out the numbers, the larger the SD.
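The behaviour described here is easy to see numerically. The sketch below (an illustration using Python's statistics module and the three sets from the earlier example) computes population SDs and shows that adding 7 leaves the SD unchanged while multiplying by 7 scales it by 7:

```python
# Population standard deviations of the three example sets, plus the effect of
# shifting every point by 7 versus scaling every point by 7.
import statistics

sets = [[5, 5, 5, 5], [2, 4, 6, 8], [0, 0, 10, 10]]
for s in sets:
    print(s, round(statistics.pstdev(s), 3))   # 0.0, 2.236, 5.0

base = [2, 4, 6, 8]
shifted = [x + 7 for x in base]                # gaps unchanged
scaled = [7 * x for x in base]                 # gaps 7 times larger
print(statistics.pstdev(shifted))              # same as pstdev(base)
print(statistics.pstdev(scaled))               # 7 times pstdev(base)
```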
If you come across a problem on the test that focuses on changes in the SD, ask yourself whether the changes move the data closer to the mean, farther from the mean, or neither. If you see a problem requiring comparisons, ask yourself which set is more spread out from its mean.
Below are some sample problems to help illustrate standard deviation:
Which set has the greater standard deviation: {1, 2, 3, 4, 5} or {440, 442, 443, 444, 445}?
If each data point in a set is increased by 7, does the set's standard deviation increase, decrease, or remain constant?
If each data point in a set is increased by a factor of 7, does the set's standard deviation increase, decrease, or remain constant? (Assume that the set consists of different numbers)
Solution:
The second set has the greater SD. One way to understand this is to observe that the gaps between its numbers are, on average, slightly bigger than the gaps in the first set. Only the spread matters. The numbers in the second set are much more "consistent" in a sense - they are all within about 1% of each other, while the largest numbers in the first set are several times the smallest ones. However, this "percent variation" idea is irrelevant to the SD.
The SD will not change. "Increased by 7" means that the number 7 is added to each data point in the set. This transformation will not affect any of the gaps between the data points, and thus it will not affect how far the data points are from the mean.
The SD will increase. "Increased by a factor of 7" means that each data point is multiplied by 7. This transformation will make all the gaps between points 7 times as big as they originally were. Thus, each point will fall 7 times as far from the mean. The SD will increase by a factor of 7.
Set S has a mean of 10 and a standard deviation of 1.5. We are going to add two additional numbers to Set S. Which pair of numbers would decrease the standard deviation the most?
A. {2, 10}
B. {10, 18}
C. {7, 13}
D. {9, 11}
Solution: This is a very tricky problem. The starting list has a mean of 10 and a standard deviation of 1.5.
A. These two numbers don't have a mean of 10, so adding them will change the mean; what's more, one number is "far away," which will wildly decrease the mean, increasing the deviations from the mean for almost every number on the list, and therefore increasing the standard deviation.
B. These choices don't have a mean of 10, so adding them will change the mean. One number is "far away," which will wildly increase the mean, increasing the deviations from the mean for almost every number on the list, and therefore increasing the standard deviation.
C. These options are centered on 10, so adding them will not change the mean. Both of these are a distance of 3 units from the mean, and this is larger than the standard deviation, so the size of the typical deviation from the mean will increase.
D. This is the correct answer. These are centered on 10, so adding them will not change the mean. Both are a distance of 1 unit from the mean, and this is less than the standard deviation, so the size of the typical deviation from the mean will decrease.
Set Q consists of the following five numbers: Q = {5, 8, 13, 21, 34}. Which of the following sets has the same standard deviation as Set Q?
I. {35, 38, 43, 51, 64}
II. {10, 16, 26, 42, 68}
III. {46, 59, 67, 72, 75}
A. I only
B. I & II
C. I & III
D. I, II, & III
Solution: Notice that Set I is just every number in Q plus 30. When you add the same number to every number in a set, you simply shift it up without changing the spacing, so this doesn't change the standard deviation at all. Set I has the same standard deviation as Q.
Notice that Set II is just every number in Q multiplied by 2. Multiplying by a number does change the spacing, so this does change the standard deviation. Set II does not have the same standard deviation as Q.
Set III is very tricky and probably is at the outer limit of what the SAT could ever ask you to consider. The spacing between the numbers in Set III, from right to left, is the same as the spacing between the numbers in Q from left to right.
The correct combination is I and III, so the answer is C.
Consider the following sets:
L = {3, 4, 5, 5, 6, 7}
M = {2, 2, 2, 8, 8, 8}
N = {15, 15, 15, 15, 15, 15}
Rank those three sets from least standard deviation to greatest standard deviation.
A. L, M, N
B. M, L, N
C. M, N, L
D. N, L, M
Solution: OK, first of all, set N has six numbers that are all the same. When all the members of a set are identical, the standard deviation is zero, which is the smallest possible standard deviation. So, automatically, N must have the lowest. Right away, we can eliminate options A, B and C. Only D remains. The correct answer is D.
7. Error
Error is a way to describe the accuracy of a data set. When dealing with error in statistics, it is important to pay special attention to the wording that differentiates between the 'EXPECTED VALUE' and the 'ACTUAL VALUE'.
\(Error\:\%=\frac{Actual\: value-Expected\: value}{Expected\: value}\times 100\)
EXAMPLE: A researcher studies a certain species of fish. He finds that the size of the fish population is limited by the size of the lakes in which they live, and derives an equation to model the expected population size, P, based on surface area, A, of the lake in square feet:
P = 5 + 0.83A
If the researcher finds that a particular lake has a surface area of \(342ft^{2}\) and a population of 310 fish, what is the percent error from the predicted value?
Solution: The actual value is 310 fish, given in the question. The expected value can be calculated using the researcher's equation to predict the expected fish population in a lake with surface area \(342ft^{2}\).
P = 5 + 0.83(342)
P = 288.86
The theoretical value is 288.86 and the actual value is 310 fish. Therefore,
\(Error\:\%=\frac{Actual\: value-Expected\: value}{Expected\: value}\times 100=\frac{310-288.86}{288.86}\times 100=7.32\%\)
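The same calculation in a couple of lines of Python (illustrative only):

```python
# Percent error for the fish-population example.
def percent_error(actual, expected):
    return (actual - expected) / expected * 100

area = 342
expected_population = 5 + 0.83 * area       # model prediction: 288.86
actual_population = 310
print(round(percent_error(actual_population, expected_population), 2))  # 7.32
```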
SAMPLING AND MODELING
One of the important jobs of a statistician is to make predictions. For example, the Indian Government may want to find out the average age of a person in India. It is impossible for the government to survey each and every individual in the country. Therefore, it hires statisticians to take random samples of residents and make predictions based on the data they collect. The SAT will often test you on the relationship between such sample data and the predictions you can or cannot make about the entire population.
Manchester United Football Club chose 1,000 of their fans at random and asked each fan how many jerseys he or she has. The results are shown in the table below.
There are a total of 16 million Manchester United fans. Based on the survey data, what is the expected total number of fans who own 3 jerseys?
A. 2 million
B. 4 million
C. 6 million
D. 8 million
Solution: Using the sample data, we can estimate that the total number of fans who own 3 jerseys is:
16 million x 250/1000 = 4 million
In order to accurately predict, the SAT usually tests on the following key concepts:
1. Line of Best Fit
A statistician is trying to figure out the relationship between the number of fans football clubs have, and the number of tournaments football clubs have won. Therefore, he collects information regarding the number of fans, and the number of tournaments won for 100 football clubs in Europe. He then plots those points on a scatterplot graph.
The line of best fit refers to a line through that scatter plot that best expresses the relationship between those points: Number of fans of a football club vs. the number of tournaments won. The line of best fit can be used to figure out if there is a relationship between those two points.
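A line of best fit is usually computed by least squares. The sketch below is only an illustration: the fan and tournament numbers are made up, not real club statistics, and np.polyfit simply returns the slope and intercept of the least-squares line:

```python
# Fit a line of best fit (least squares) to made-up fan/tournament data.
import numpy as np

fans = np.array([2, 5, 8, 12, 20, 35, 50, 75])           # millions of fans (made up)
tournaments = np.array([1, 3, 4, 7, 11, 18, 26, 38])      # trophies won (made up)

slope, intercept = np.polyfit(fans, tournaments, deg=1)   # degree-1 polynomial fit
print(round(slope, 3), round(intercept, 3))

# predicted tournaments for a club with 30 million fans
print(slope * 30 + intercept)
```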
Example 11 & 12
The scatterplot shows the number of pollinating flowers for 20 different aged Southern Magnolia plants. The line of best fit is also shown.
11. Which of the following is the best interpretation of the slope of the line of best fit in the context of this problem?
A. The predicted increase in the age of the Southern Magnolia, in years, for every increase of a pollinating flower.
B. The predicted increase in the number of pollinating flowers for every year increase in the age of the Southern Magnolia.
C. The Southern Magnolia predicted age in years when it has 0 pollinating flowers.
D. The Southern Magnolia predicted number of pollinating flowers when it was just born (Age of 0).
12. Which of the following is the best interpretation of the y-intercept of the line of best fit in the context of this problem?
B. The predicted increase in number of pollinating flowers for every year increase in the age of the Southern Magnolia.
11. As we learned in the coordinate geometry chapter, slope is the increase in y (age of the Southern Magnolia) for each increase in x (number of pollinating flowers). The only difference now is that it's a predicted increase. The correct answer is A.
12. The y-intercept is the value of y (age of the Southern Magnolia) when x (number of pollinating flowers) is 0. Therefore, the correct answer is C.
2. Margin of Error
The margin of error refers to the room for error we give to an estimate. For example, we could say that England has 6 million Manchester United fans with a margin of error of 1 million. This means that the number of Manchester United fans living in the country of England is between 5 million and 7 million.
The margin of error primarily depends on the following two factors:
Sample Size - This is common sense. The more the number of people in England we survey, the more accurate (lower margin of error) our predictions are going to be.
Variability of the Data - We should only select people from England to check if they are Manchester United fans or not. If we select random people from Europe, our predictions will not be very accurate, and our margin of error increases.
A real estate agent randomly surveyed 100 apartments for sale in San Francisco, California and found that the average price of each apartment was $800,000. Another real estate agent intends to replicate the survey and will attempt to get a smaller margin of error. Which of the following samples will most likely result in a smaller margin of error for the mean price of an apartment in San Francisco, California?
A. 50 randomly selected apartments in San Francisco
B. 50 randomly selected apartments in all of California
C. 100 randomly selected apartments in San Francisco
D. 100 randomly selected apartments in all of California
Solution: As discussed above, the larger the sample size, the lower the margin of error. Therefore, we can eliminate A or B because the sample size is actually smaller. The second rule discussed above is the variability of the data. Since we want to figure out the average selling price of a house in San Francisco, it is better to get the sample data from San Francisco only and not the other cities in California. Therefore, the correct answer is C.
3. Confidence Interval
Now, this is where most AP Guru students go bonkers. It's fine if you do not know what confidence intervals are - most students do not. You'll never need to calculate one, and the SAT questions that refer to confidence intervals are very easy. All you need to understand is what a confidence interval is.
A confidence interval tells you how sure you are of predicting any statistical measure (like mean or standard deviation) for a population whose sample you're measuring. For example, you have a 95% confidence interval that the average age of a French citizen is between 34 and 38 years. The higher the confidence, the more likely the average age will be within the interval.
A 95% confidence interval means that if the same experiment were repeated again and again, each time with 100 random individuals from France, about 95% of the intervals computed from those samples would contain the true average age
Environmentalists are testing pH levels in a forest that is being harmed by acid rain. They analyzed water samples from 40 rainfalls in the past year and found that the mean pH of the water samples has a 95% confidence interval of 3.2 to 3.8. Which of the following conclusions is the most appropriate based on the confidence interval?
A. 95% of all the forest rainfalls in the past year have pH between 3.2 and 3.8
B. 95% of all the forest rainfalls in the past decade have a pH between 3.2 and 3.8
C. It is plausible that the true mean pH of all the forest rainfalls in the past year is between 3.2 and 3.8
D. It is plausible that the true mean pH of all the forest rainfalls in the past decade is between 3.2 and 3.8
Solution: A confidence interval does NOT say anything about the rainfalls themselves. You cannot say that any one rainfall has a 95% chance of having a pH between 3.2 and 3.8, and you cannot say that 95% of all the forest rainfalls in the past year had a pH between 3.2 and 3.8.
So, in the example above, we can be quite confident that the true mean pH of all the forest rainfalls in the past year is between 3.2 and 3.8. The correct answer is C. The answer is not (D) because we cannot draw conclusions about the past decade when the samples were gathered from the past year.
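For the curious, a rough 95% confidence interval for a mean can be computed with the normal approximation (mean ± 1.96 standard errors). The sketch below is illustrative only: the 40 pH values are simulated, not the study's actual data:

```python
# Rough 95% confidence interval for a mean, normal approximation.
import random
import statistics

random.seed(0)
samples = [random.gauss(3.5, 0.9) for _ in range(40)]    # 40 simulated rainfall pH values

mean = statistics.mean(samples)
sem = statistics.stdev(samples) / len(samples) ** 0.5    # standard error of the mean
low, high = mean - 1.96 * sem, mean + 1.96 * sem         # ~95% interval
print(round(low, 2), round(high, 2))
```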
4. Causation vs Correlation
The difference between correlation and causation is where the SAT stumps a lot of students. For example, just because students who complete more mock tests get better scores on the SAT doesn't mean that completing more mock tests causes an improvement in SAT scores. Rather, completing mock tests is associated with an improvement in SAT scores.
Perhaps students who attempt more mock tests have more discipline, or they have more demanding parents who make them study harder. But due to the way the experiment was designed, we can't tell what the underlying factor is.
Researchers conducted an experiment to determine whether eating junk food increased body weight. They randomly selected 500 people who eat processed packaged food at least once a week, and 500 people who do not eat processed packaged food at all. After tracking the people's weight for a year, the researchers found that the people who eat processed packaged food at least once a week had experienced weight gain significantly higher than the people who do not eat processed packaged food at all. Based on the design and results of the study, which of the following is an appropriate conclusion?
A. Eating processed packaged food least once a week is likely to increase body weight.
B. Eating processed packaged food three times a week increases body weight more than eating processed packaged food just once a week.
C. Any person who starts eating processed packaged food at least once a week will increase his or her body weight.
D. There is a positive association between eating processed packaged food and increased weight gain.
Solution: This question deals with a classic case of correlation vs. causation. Just because people who eat processed packaged food had a higher body weight doesn't mean that eating processed packaged food causes an increase in body weight.
Therefore, answer A is wrong because it implies causation. Answer B is also wrong because it not only implies causation but also that the frequency of eating processed packaged food matters, something that wasn't tracked in the experiment. Answer C is wrong because it suggests a completely certain outcome. Even if eating processed packaged food DID increase body weight, not every single person who starts eating processed packaged food will increase his body weight. Any conclusion drawn from sample data is a generalization and should not be regarded as truth for every individual.
The correct answer is D. There is a positive association between eating processed packaged food and an increase in body weight.
5. Random Sample
One last important thing to note in this chapter is that the sample has to be random. If a sample is not random, we will not be able to make reliable predictions.
For example, let's say we are trying to determine an association between bargoers and football fans. If researchers picked 100 individuals who live close to a football stadium, then the sample is not random. Those people could be football fans because they live in close proximity to a stadium.
What should the researchers have done differently? The answer is random selection. They should instead randomly select 100 people from all walks of life and from different geographic locations. The more diverse the group, the more accurate the data will be. Of course, conducting this type of experiment can be extremely difficult, which is why proving causation can be such a monumental task.
How do I show that $\ln(2) = \sum \limits_{n=1}^\infty \frac{1}{n2^n}$?
My task is this:
Show that $$\ln(2) = \sum \limits_{n=1}^\infty \frac{1}{n2^n}$$
My work so far:
If we approximate $\ln(x)$ around $x = 1$, we get:
$\ln(x) = (x-1) - \frac{(x-1)^2}{2} + \frac{(x-1)^3}{3} - \frac{(x-1)^4}{4} + ...$
Substituting $x = 2$ then gives us:
$\ln(2) = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \frac{1}{5} + ...$
No surprise there; we should always get an alternating series for $\ln(x)$ when doing the Taylor expansion around $x=1$. By using the Euler transform, which is shown in the middle of this page on the natural logarithm, one can obtain the wanted result, but how do you derive it? I need someone to actually show and explain in detail how one starts from the left side of the equation and ends up on the other.
convergence taylor-expansion
Well, your series is expanded in powers of $\frac{1}{2}$, and your $\ln x$ series is expanded in powers of $-(x-1)$ (noting that the signs of the coefficients alternate) - just choose a suitable $x$ to make these match up. – πr8 May 9 '16 at 9:55
Hint: $\ln(\frac{x}{x-1})=-\ln(\frac{x-1}{x})=-\ln(1-\frac{1}{x})$ – MrYouMath May 9 '16 at 9:57
See also: math.stackexchange.com/questions/1153499/… – Martin Sleziak Aug 30 '16 at 23:13
Note that \begin{align} \frac{1}{1-x}&=1+x+x^2+\cdots\qquad(-1<x<1). \end{align} Thus \begin{align} -\ln(1-x)&=\int\frac{1}{1-x}{\rm d}x\\ &=x+\frac{1}{2}x^2+\frac{1}{3}x^3+\cdots\\ &=\sum_{n=1}^\infty\frac{1}{n}x^n. \end{align} By taking $x=\frac{1}{2}$ into the above equation, we have $$\sum_{n=1}^\infty\frac{1}{n2^n}=-\ln\left(\frac{1}{2}\right)=\ln(2).$$
Solumilkyu
Hint. One may write
$$ \begin{align} \sum \limits_{n=1}^\infty \frac{1}{n2^n}&=\sum \limits_{n=1}^\infty \int_0^{1/2}x^{n-1}dx \\\\&=\int_0^{1/2}\sum \limits_{n=1}^\infty x^{n-1}\:dx \\\\&=\int_0^{1/2}\frac1{1-x}\:dx \\\\&=\left[-\ln(1-x)\right]_0^{1/2} \\\\&=\ln 2 \end{align} $$
as wanted.
Similarly, one gets $$ \sum \limits_{n=1}^\infty \frac{t^n}{n}=-\ln(1-t),\quad |t|<1. $$
Olivier Oloa
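As a quick numerical cross-check of both answers (an addition, not part of the original thread), the partial sums of the series indeed approach $\ln 2$:

```python
# Partial sums of sum_{n>=1} 1/(n 2^n) converge to ln(2).
import math

partial = 0.0
for n in range(1, 31):
    partial += 1 / (n * 2 ** n)

print(partial)                              # 0.6931471805...
print(math.log(2))                          # 0.6931471805599453
print(abs(partial - math.log(2)) < 1e-9)    # True
```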
Networks, Epidemics and Collective Behavior
XIX The birth of statistical mechanics
XIX.1 Meanwhile, in the social sciences
XX The century of Big Science
XX.1 More is Different
XX.2 From lattices to networks
XXI The Information Age
2 Statistical mechanics of networks
2.1 Brief introduction to graph theory
2.1.1 Topological properties of networks
2.1.2 Important degree distributions
2.1.3 Multilayer graphs
2.2 The problem of null models
2.2.1 Microcanonical models
2.2.2 Canonical models
2.3 Exponential random graphs
2.4 Randomizing real networks
2.4.1 Undirected binary networks
2.4.2 Undirected weighted networks
2.4.3 Fermionic and bosonic graphs
2.5 Anomalies in transportation networks
2.5.1 Null models for undirected weighted networks
2.5.2 The worldwide air transportation network
2.5.3 Other transportation networks
2.6 Generating data-driven contact networks
2.6.1 Theoretical framework
2.6.2 Data description
2.6.3 Age contact networks
3 The law of mass action: animals collide
3.1 A basic assumption: homogeneous mixing
3.1.1 Introducing the age compartment
3.1.2 Changing demographics
3.2 The basic reproduction number
3.2.1 Measuring \(\mathbf{R_0}\)
3.2.2 The effective reproduction number
3.2.3 Measurability of the epidemic reproduction number
3.3 The epidemic threshold fades out
3.3.1 The decade of viruses
3.3.2 The generating function approach
3.3.3 Directionality reduces the epidemic threshold in directed multiplex networks
3.4 Age and network structures
4 Diving into the anthill
4.1 Online discussion boards
4.1.1 Description of Forocoches
4.1.2 Introduction to inhomogeneous Poisson processes
4.1.3 Fitting Hawkes processes
4.1.4 The dynamics of the board
4.2 The dynamics of a crowd controlled game
4.2.1 Description of the event
4.2.2 The ledge
4.2.3 The politics of the crowd
4.2.4 The challenges of digital crowds
5.1 Future work
Networks, Epidemics and Collective Behavior: from Physics to Data Science
Chapter 3 The law of mass action: animals collide
Infectious diseases have been an unpleasant companion of humankind for millions of years. Yet, crowd epidemic diseases could have only emerged within the past 11,000 years, following the rise of agriculture. The ability to maintain large and dense human populations, as well as the close contact with domestic animals, allowed the most deadly diseases to be sustained unlike when human populations were sparse [135].
Perhaps the best-documented epidemic outbreak in ancient times is the plague of Athens (430-427 BCE), which caused the death of Pericles and killed around 30% of Athens' population [136]. The fact that some diseases were contagious was probably well known way before that. For instance, it has been claimed that in the 14th century BCE the Hittites sent rams infected with tularemia to their enemies to weaken them [137], and there is evidence of quarantine-like isolation of leprous individuals in the Biblical book of Leviticus. Yet, it was thought that diseases were caused by miasma or "bad air" for over 2,000 years. It was not until the end of the XIX century that it was finally discovered that microorganisms were the cause of diseases [138].
The advent of modern epidemiology is usually attributed to John Snow, who in the mid XIX century traced back the origin of a cholera epidemic in the city of London [139]. However, mathematical methods were not firmly introduced until the beginning of the XX century. Already in 1906 Hamer showed that "an epidemic outbreak could come to an end despite the existence of large numbers of susceptible persons in the population, merely on a mechanical theory of numbers and density" [142]. It was, however, thanks to the works of Ross, Kermack and McKendrick that a mechanistic theory of epidemics was finally developed, by analogy with the law of mass action. In particular, it was McKendrick who gave the title to this chapter when he said in a lecture in 1912: "consider a type of epidemic which is spread by simple contact from human being to human being […] The rate at which this epidemic will spread depends obviously on the number of infected animals, and also on the number of animals that remain to be infected - in other words the occurrence of a new infection depends on a collision between an infected and uninfected animal" [143].
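McKendrick's "collision" picture is exactly the mass-action assumption behind the SIR-type models discussed later in this chapter: new infections occur at a rate proportional to the product of susceptible and infected individuals. The following minimal Euler-integration sketch (an illustration with made-up parameter values, not a model fitted to any data) shows Hamer's observation in action: the outbreak dies out while part of the population is still susceptible.

```python
# Minimal mass-action (SIR) sketch: new infections are proportional to S*I.
# Parameter values are illustrative only.
beta, gamma = 0.3, 0.1          # transmission and recovery rates (per day)
S, I, R = 0.999, 0.001, 0.0     # fractions of the population
dt, days = 0.1, 300

for step in range(int(days / dt)):
    new_inf = beta * S * I * dt  # "collisions" between susceptible and infected
    new_rec = gamma * I * dt
    S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec

print(round(S, 3), round(I, 6), round(R, 3))
# The epidemic ends (I ~ 0) even though a fraction of susceptibles was never infected.
```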
The next 50 years were mostly devoted to establishing the mathematical foundations of epidemiology. The problem was that this process was driven mostly by mathematicians and statisticians, who were more interested in the theoretical implications of the models than in their application to data [144]. This situation changed during the 1980s when Anderson and May, coming from backgrounds in zoology and ecology respectively, started to collaborate with biologists and mathematicians, bridging the gap between data and theory [62]. During the 1990s graphs were introduced in epidemiological models, challenging the classical assumption of homogeneous mixing (which we will discuss in section 3.1) and bringing physicists into the field, attracted by the similarity of some concepts with phase transitions in non-equilibrium systems [145].
The latest developments in epidemic modeling are based on incorporating more and more data. For instance, the Global Epidemic and Mobility (GLEaM) framework incorporates demographic data of the whole world together with short-range and long-range mobility data as the basis of its epidemic model, allowing for the simulation of worldwide pandemics [146]. Similarly, to properly study the spreading of the Zika virus it is necessary to take into account the dynamics of mosquitoes, temperature, demographics and mobility, coupled to the disease dynamics of the virus itself [147]. Multiple sources of data are also being used in the study of vaccination, either to devise efficient administration strategies, including economic considerations [148], or to properly understand how vaccines actually work [149]. Moreover, of particular interest nowadays is the interplay between processes that have been studied in depth in complex systems, such as game theory, behavior diffusion and epidemic spreading.
Herd immunity, a term coined by Topley and Wilson in 1923, albeit with the completely opposite meaning to the current one [150], refers to the fact that it is possible to have a population in which diseases cannot spread even if only a fraction of the individuals are immune to them. First calculated theoretically in 1970 by Smith [151], it has been the subject of great interest, as it makes it possible to protect an entire population even if some of its members cannot be administered a vaccine due to their medical conditions [152]. Unfortunately, the great successes achieved by vaccination are now endangered by people who refuse to vaccinate their children, which also affects those children who cannot be vaccinated but should have been protected by herd immunity [153].
For instance, measles requires 95% of the population to be vaccinated for herd immunity to work. This was achieved in the U.S. by the end of the past century, and the country was declared measles free in 2000. Similarly, the UK was declared measles free in 2017. Yet, the World Health Organization (WHO) removed this status from the UK in August 2019 [154], and the U.S. is facing a similar fate as, so far in 2019, it has reported the greatest number of cases since 1992 [155]. Both phenomena have been attributed to anti-vaccine groups, whose behavior can be studied from the point of view of game theory. But there are more ingredients at play. In particular, if the risk of infection is regarded as low, maybe thanks to herd immunity, the motivation to become vaccinated can decrease. This behavior can then spread among adults, a process that will clearly be coupled with the disease dynamics. Thus, a holistic view of the whole problem is needed, something that can only be done under the lens of complex systems [156].
In this context, rather than extending a mathematical formalism that is already well established, we pushed forward our knowledge about disease dynamics by adding data and revisiting some of the assumptions classically made either for simplicity or for lack of information. For this reason, rather than giving a complete mathematical introduction and then visiting each contribution, we will organize them in a way that roughly follows the historical development of mathematical epidemiology, explaining in each section the basic ideas and then showing how we challenged those assumptions.
We will begin in section 3.1 with the most basic approach to disease dynamics. That is, humans are gathered in closed populations in which every individual can contact every other, very much like particles colliding in a box. This simple premise, known as homogeneous mixing, can be slightly improved by considering individuals to be part of smaller groups, with correspondingly different patterns of interaction. This is the classical approach to introducing the age structure of the population, for which experimental data exist. We will, however, go one step further and analyze the problem of projecting these data into the future, taking into account the demographic evolution of society. This part of the thesis will thus be based on the publication:
Arregui, S., Aleta, A., Sanz, J. and Moreno, Y., Projecting social contact matrices to different demographic structures, PLoS Comput. Biol. 14:e1006638, 2018
Our next step will be to introduce, in section 3.2, one of the cornerstone quantities of modern epidemiology, the basic reproduction number, \(R_0\). We will revisit its original definition and challenge it using data-driven population models, demonstrating that some of the assumptions that have been made since its conception are not entirely correct. This corresponds to the work:
Liu, Q.-H., Ajelli, M., Aleta, A., Merler, S., Moreno, Y. and Vespignani, A., Measurability of the epidemic reproduction number in data-driven contact networks, Proc. Natl. Acad. Sci. U.S.A., 115:12680-12685, 2018
Then, in section 3.3 we will finally introduce networks into the picture. We will show some of the counter-intuitive consequences of this and, again, challenge some of the most basic assumptions. In particular, disease dynamics are often implemented on single-layer undirected networks, but we will show that directionality can play a crucial role in the dynamics, with particular emphasis on multilayer networks. We will follow the article
Wang, X., Aleta, A., Lu, D., Moreno, Y., Directionality reduces the impact of epidemics in multilayer networks, New. J. Phys., 21:093026, 2019
of which I am first co-author.
We will finish this chapter in section 3.4 by analyzing the age-contact networks that we generated in section 2.6, chapter 2. The objective of this part will be to show the different approaches that can be followed depending on the available data and their impact on the outcome of the dynamics. This will be based on the work
Aleta, A., Ferraz de Arruda, G. and Moreno, Y., Data-driven contact structures: From homogeneous mixing to multilayer networks, PLoS Comput. Biol. 16(7):e1008035, 2020
The starting point of this discussion is going to be precisely the introduction of the 1927 paper by Kermack and McKendrick that is regarded as the starting point of modern epidemiological models [157]. Even though over 90 years have passed, any text written today about the subject would start roughly in the same way:
"The problem may be summarised as follows: One (or more) infected person is introduced into a community of individuals, more or less susceptible to the disease in question. The disease spreads from the affected to the unaffected by contact infection. Each infected person runs through the course of his sickness, and finally is removed from the number of those who are sick, by recovery or by death. The chances of recovery or death vary from day to day during the course of his illness. The chances that the affected may convey infection to the unaffected are likewise dependent upon the stage of the sickness. As the epidemic spreads, the number of unaffected members of the community becomes reduced. Since the course of an epidemic is short compared with the life of an individual, the population may be considered as remaining constant, except in as far as it is modified by deaths due to the epidemic disease itself. In the course of time the epidemic may come to an end. […] [This] discussion will be limited to the case in which all members of the community are initially equally susceptible to the disease, and it will be further assumed that complete immunity is conferred by a single infection."
For the sake of clarity we can summarize some of the implicit assumptions in the previous paragraph, plus some more that were introduced in other parts of the paper, as [158]:
1. The disease is directly transmitted from host to host.
2. The disease ends in either complete immunity or death.
3. Contacts occur according to the law of mass action.
4. Individuals are only distinguishable by their health status.
5. The population is closed.
6. The population is large enough to be described with a deterministic approach.
In this section we will explore the effect of relaxing assumptions 3 and 4. Note that these two assumptions can be regarded as approximations made when sufficient data about the whereabouts of the population are not available. Now, however, we have much more data at hand than they did, and thus in section 3.2 we will be able to completely remove assumption 4. Similarly, in sections 3.3 and 3.4 we will suppress assumptions 2 and 3. Besides, except for this introduction, throughout the chapter we will disregard assumption 6, but we will always respect assumptions 1 and 5.
With modern terminology, models in which individuals are only distinguishable by their health status are known as compartmental models. In these models, it is supposed that each individual belongs to one and only one compartment (class, in Kermack and McKendrick's terms). Compartments are a tool to encapsulate the complexity of infections in a simple way. Hence, an individual that is completely free from the disease but can be infected is said to be in the susceptible state (\(S\)), one that can spread the disease is said to be infected (\(I\)), and one that can neither be infected nor infect is said to be removed (\(R\)), either because they are immune or because they are dead. This classification is known as the \(SIR\) model. This framework, however, is quite flexible and it is possible to incorporate as many compartments as needed, depending on the disease under study, reaching hundreds of compartments in the most sophisticated models [159]. In particular, in section 3.1.2, we will introduce the exposed (\(E\)) state to classify individuals that have been infected but are not yet infectious.
The six assumptions, after some algebra, lead to the original equation proposed by Kermack and McKendrick (albeit with slightly updated notation),
\[\begin{equation} \frac{\text{d}S(t)}{\text{d}t} = S(t) \int_0^\infty A(\tau) \frac{\text{d}S(t-\tau)}{\text{d}t} \text{d}\tau\,, \tag{3.1} \end{equation}\]
where \(S(t)\) denotes the number of individuals in the susceptible compartment - henceforth number of susceptibles - at time \(t\) and \(A(\tau)\) is the expected infectivity of an individual that became infected \(\tau\) units of time ago [157], [158].
To obtain \(A(\tau)\), we define \(\phi(\tau)\) as the rate of infectivity of an individual that has been infected for a time \(\tau\). Similarly, we define \(\psi(\tau)\) as the rate of removal, either by immunization or death. Let us denote by \(v(t,\tau)\) the number of individuals that are infected at time \(t\) and have been infected for a period of length \(\tau\). If we divide time into separate intervals \(\Delta t\), such that the infection takes place only at the instant of passing from one interval to the next, the following relation holds:
\[\begin{equation} \begin{split} v(t,\tau) & = v(t-\Delta t,\tau-\Delta t)(1-\psi(\tau-\Delta t)) \\ & = v(t-2\Delta t,\tau-2\Delta t)(1-\psi(\tau-\Delta t))(1-\psi(\tau-2\Delta t)) \\ & = v(t-\tau,0) B(\tau)\,, \end{split} \tag{3.2} \end{equation}\]
so that, if \(\Delta t\) is small enough,
\[\begin{equation} \begin{split} B(\tau) & = (1-\psi(\tau-\Delta t))(1-\psi(\tau-2\Delta t))\ldots(1-\psi(0)) \\ & \approx e^{-\psi(\tau-\Delta t)} e^{-\psi(\tau-2\Delta t)} \ldots e^{-\psi(0)} \\ & \approx e^{-\int_0^\tau \psi(a) \text{d}a}\,. \end{split} \tag{3.3} \end{equation}\]
The expected infectivity of an individual who became infected \(\tau\) units of time ago is therefore
\[\begin{equation} A(\tau) = \phi(\tau) B(\tau) = \phi(\tau) e^{-\int_0^\tau \psi(a) \text{d} a}\,, \tag{3.4} \end{equation}\]
which defines the original shape of the Kermack and McKendrick model.
However, in the literature it is common to present as the Kermack-McKendrick model the special case analyzed in their paper, in which both the infectivity and removal rates are constant. Indeed, if we set \(\phi(\tau) = \beta\) and \(\psi(\tau)=\mu\),
\[\begin{equation} A(\tau) = \beta e^{-\int_0^\tau \mu \text{d}{a}} = \beta e^{-\mu\tau}\,, \tag{3.5} \end{equation}\]
and defining the number of infected individuals at time \(t\) as
\[\begin{equation} I(t) \equiv -\frac{1}{\beta} \int_0^\infty A(\tau) \frac{\text{d}S(t-\tau)}{\text{d}t} \text{d}\tau \,, \tag{3.6} \end{equation}\]
equation (3.1) reads
\[\begin{equation} \frac{\text{d}S(t)}{\text{d}t} = - \beta I(t) S(t)\,. \tag{3.7} \end{equation}\]
If we now differentiate expression (3.6) using Leibniz's rule,
\[\begin{equation} \begin{split} \frac{\text{d}I(t)}{\text{d}t} & = -\frac{\text{d}S(t)}{\text{d}t} - \int_{-\infty}^t \frac{\partial}{\partial t} e^{-\mu(t-\tau)} \frac{\text{d}S(\tau)}{\text{d}t} \text{d}\tau \\ & = - \frac{\text{d}S(t)}{\text{d}t} + \mu \int_0^\infty e^{-\mu \tau} \frac{\text{d}S(t-\tau)}{\text{d}t} \text{d} \tau \\ & = \beta I(t) S(t) - \mu I(t)\,, \end{split} \tag{3.8} \end{equation}\]
together with the fact that the population, \(N\), is closed, \(S(t) + I(t) + R(t) = N\), we obtain the system of equations
\[\begin{equation} \left\{\begin{array}{l} \frac{\text{d}S(t)}{\text{d}t} = - \beta I(t) S(t) \\ \frac{\text{d}I(t)}{\text{d}t} = \beta I(t) S(t) - \mu I(t) \\ \frac{\text{d}R(t)}{\text{d}t} = \mu I(t) \end{array}\right. \tag{3.9} \end{equation}\]
which is the model that is usually introduced as the Kermack-McKendrick model, even though we have seen that their original contribution was much more general [160].
Equation (3.9) is also often used to introduce epidemic models in the literature, as it constitutes one of the most basic models. Since we are considering that every individual can contact every other, this model is also known as the homogeneous mixing model [145], [161]. However, it should be noted that sometimes a slightly different version of this set of equations is presented. Indeed, if we define the fraction of susceptible individuals in the population as \(s(t)\equiv S(t)/N\), and similarly for the other compartments, the expression for the evolution of infected individuals is
\[\begin{equation} \frac{\text{d}i(t)}{\text{d}t} = \beta N i(t) s(t) - \mu i(t)\,. \tag{3.10} \end{equation}\]
Hence, the larger the population, the faster the spreading. This is known as the density dependent approach. However, we can formulate a very similar model in which we define the infectivity rate as \(\phi(\tau) = \beta/N\), so that
\[\begin{equation} \frac{\text{d}i(t)}{\text{d}t} = \beta i(t) s(t) - \mu i(t)\, \tag{3.11} \end{equation}\]
is independent of \(N\). This latter approach is called frequency dependent and is probably the most common one in the literature on epidemic processes on networks. Both approaches are valid; which one is appropriate depends on the specific disease being modeled, see [162] for a deeper discussion of this matter.
Despite the simplicity of this model, it provides two very powerful insights about disease dynamics. The first one is related to the reasons that account for the termination of an epidemic. Until the publication of this model, the most accepted explanations in medical circles were that an epidemic stopped either because all susceptible individuals had been removed or because during the course of the epidemic the virulence of the organism causing the disease decreased gradually [163]. Yet, this model shows that with a fixed virulence (\(\beta\)) it is possible to reach states in which the epidemic fades out even if there are still susceptible individuals. Although some analytical approximations can be used to show this behavior (the model has no closed-form solution), for our purposes it suffices to present a numerical solution, figure 3.1A.
Figure 3.1: Basic results of the homogeneous mixing model. In panel A the evolution of the set of equations (3.9) as a function of time with \(\beta=0.16\) and \(\mu=0.10\) is shown. It is possible to reach a disease free state with a fraction of susceptible individuals larger than 0. In panel B the total fraction of recovered individuals in equilibrium conditions as a function of \(\beta/\mu\) is shown. For simplicity the frequency dependent approach has been used so that the threshold is 1.
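As an illustration, the following minimal sketch (in Python, assuming numpy and scipy are available) integrates the frequency-dependent version of equations (3.9) with the parameters used in figure 3.1A; it is meant only to reproduce the qualitative behavior shown in the figure, not the exact curves.

```python
# Sketch: numerical solution of the frequency-dependent SIR model (eqs. (3.9)/(3.11))
# with the parameters of figure 3.1A (beta = 0.16, mu = 0.10).
import numpy as np
from scipy.integrate import odeint

def sir(y, t, beta, mu):
    s, i, r = y                       # fractions of the population
    return [-beta * i * s, beta * i * s - mu * i, mu * i]

beta, mu = 0.16, 0.10
t = np.linspace(0, 365, 2000)
y0 = [0.999, 0.001, 0.0]              # almost fully susceptible population
s, i, r = odeint(sir, y0, t, args=(beta, mu)).T

# The epidemic dies out while a finite fraction of susceptibles remains.
print(f"s(inf) ~ {s[-1]:.3f}, r(inf) ~ {r[-1]:.3f}")
```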
At this point a clarification might be in order. In the introduction we said that Hamer had already shown in 1906 that it was possible for an epidemic to end despite the existence of a large number of susceptible persons in the population. The difference is that Hamer's proposal was based on data about measles, while this model is formulated without any specific disease in mind. Indeed, although clearly influenced by Hamer's and Ross' works, one of the great achievements of Kermack and McKendrick was to establish a formulation based only on mechanistic principles, regardless of the specific properties of the disease. Nonetheless, the most important result of this model has not been discussed yet: the epidemic threshold.
Suppose that in a completely susceptible population we introduce a tiny number of infected individuals, so that \(s(t=0) \equiv s_0 = 1 - \epsilon\) and \(i(t=0) \equiv i_0 = \epsilon\) with \(\epsilon\rightarrow 0\). If we linearize equation (3.10) around this point, we have
\[\begin{equation} \frac{\text{d}i(t)}{\text{d}t} \approx \beta N i_0 s_0 - \mu i_0\,, \tag{3.12} \end{equation}\]
which only grows if \(\beta N - \mu > 0\). Hence, there exists a minimum susceptible population at the initial state below which an epidemic cannot take place, the epidemic threshold:
\[\begin{equation} N > N_c \equiv \frac{\mu}{\beta}\,. \tag{3.13} \end{equation}\]
Note that the formulation of this threshold can vary slightly according to the characteristics of the model. For instance, in the frequency dependent approach, equation (3.11), the epidemic threshold is defined by
\[\begin{equation} \frac{\beta}{\mu} > 1\,, \tag{3.14} \end{equation}\]
which is independent of \(N\). The existence of this threshold is numerically demonstrated in figure 3.1B, where the final fraction of recovered individuals is shown as a function of the ratio \(\beta/\mu\). Regardless of the specific shape of the condition, the message is that it is possible to explain why an epidemic might not spread in a population of fully susceptible individuals. Moreover, it also provides a mechanism to fight diseases before they spread. Indeed, in equation (3.13) we have simply considered that \(S_0 = N\), but if we were able to immunize a fraction of the population so that \(S_0 < \mu/\beta\), then the epidemic could not take place. In other words, we would have conferred on the population the herd immunity discussed at the beginning of the chapter.
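For instance, with the parameters of figure 3.1 (\(\beta = 0.16\), \(\mu = 0.10\)) and the frequency-dependent approach, an outbreak requires \(s_0 > \mu/\beta = 0.625\); immunizing slightly more than 37.5% of the population would therefore be enough to prevent it.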
Since the establishment of epidemiology as a science, a lot of attention has been devoted to the study of measles, as its recurring patterns puzzled physicians and mathematicians alike. The distinguishing characteristic of measles epidemics is their very regular temporal pattern, with periodic outbreaks of the disease, as shown in figure 3.2. As this disease affects especially children and also confers permanent immunity on those who have suffered it, analyzing the time evolution over large time scales to obtain these patterns required the inclusion of age in the models. Nevertheless, with the basic model that we have analyzed we can already propose a plausible explanation for this behavior. Indeed, we know that if the number of susceptibles in the population is below a given threshold, the epidemic cannot take place. Thus, it seems reasonable to think that once the epidemic fades out, there is a period in which there are not enough susceptibles for it to appear again. Yet, when new children are born, the number of susceptibles will increase, possibly going above the threshold and therefore allowing a new outbreak.
A similar explanation was already proposed by Soper in 1929 [164], although it only matched the observations qualitatively, not quantitatively. It was Bartlett who, in 1957, finally provided a quantitative explanation of the phenomenon [165]. Besides the details that we have already discussed, his proposal added a new factor that we have not mentioned yet. He argued that the problem with previous models was that they were deterministic, an approximation that is only valid in very large populations. However, it was observed that the periodicity of measles depended on the size of the city, being especially marked in small towns. In physical terms, we would say that there were finite size effects, tearing down assumption 6 (see section 3.1). Thus, he proposed to use a stochastic model for which he could not obtain a closed-form solution, so he had to resort to an "electronic computer". Nowadays the use of stochastic computational simulations is much more common than the deterministic approach. The reasons why this approach is more favorable are beyond the scope of this thesis (see, for instance, [140], [166]–[168] for a discussion), but we will take this opportunity to say that in the following sections we will mostly work with stochastic simulations rather than deterministic approaches. Before concluding the discussion about Bartlett's paper, it is worth highlighting that it was presented during a meeting of the Royal Statistical Society, in 1956, after which a discussion followed. In said discussion, Norman T. J. Bailey said "One of the signs of the times is the use of an electronic computer to handle the Monte Carlo experiments. Provided they are not made an excuse for avoiding difficult mathematics, I think there is a great scope for such computers in biometrical work". And indeed there was, as 25 years later Mr. Bailey was appointed Professor of Medical Informatics [169].
Figure 3.2: Measles epidemics in New York from 1906 to 1948. This figure represents the number of reported cases of measles in the city of New York from 1906 to 1948 with a biweekly resolution. There are some gaps due to missing reports. Data obtained from [170].
Returning to our discussion, it is not surprising, then, that McKendrick already introduced age in his models in 1926 [171], one year before the publication of the full model that we have already explored. However, to introduce age we will use a slightly more modern formulation that will simplify the analysis. In particular, we need to revisit assumption 4, i.e., individuals are only distinguishable by their health status.
Let us now allow individuals to be identified both by their health status and by their age. Hence, we have to add more compartments to the model, one for each combination of age group and health status. In other words, rather than having three compartments, \(S,I,R\), we now have three times the number of age brackets considered, i.e. \(S_a, I_a, R_a\), where \(a\) denotes the age bracket the individuals belong to (see section 2.6 for the definition of age bracket). Moreover, we will suppose that the disease dynamics are much faster than the demographic evolution of the population. The only thing left is to decide how individuals move from one compartment to another:
For the rate of infectivity, we will define an auxiliary expression that will facilitate the discussion. By inspection of equation (3.9), we can define the force of infection [163] as
\[\begin{equation} \lambda(t) \equiv \phi(\tau) I(t) = \beta I(t)\,, \tag{3.15} \end{equation}\]
which does not depend on any characteristic of the individual. Hence, we can simply incorporate age by modifying the force of infection so that
\[\begin{equation} \lambda(t,a) = \sum_{a'} \phi(\tau,a,a') I_{a'}(t)\,. \tag{3.16} \end{equation}\]
This way, both the age of the individual that is getting infected (\(a\)) and the age of all other individuals (\(\sum_{a'}\)) are taken into account. Furthermore, we can separate \(\phi(\tau,a,a')\) into two components: one accounting for the rate of contacts between individuals of age \(a\) and \(a'\) and another one accounting for the likelihood that such contacts lead to an infection. Hence,
\[\begin{equation} \phi(\tau,a,a') \equiv C(a,a') \beta(a,a')\,. \tag{3.17} \end{equation}\]
Recalling section 2.6, the term \(C(a,a')\) can be obtained from the contact surveys that we have already studied. On the other hand, we will suppose that the likelihood of infection is independent of the age so that \(\beta(a,a') = \beta\).
For the rate of recovery, we will assume that it is independent of the age of the individual, i.e. \(\mu(a) = \mu\).
Under these assumptions, the homogeneous mixing model with age dependent contacts reads \[\begin{equation} \left\{\begin{array}{l} \frac{\text{d}S_a(t)}{\text{d}t} = - \sum_{a'}\beta C(a,a') I_{a'}(t) S_a(t) \\ \frac{\text{d}I_a(t)}{\text{d}t} = \sum_{a'}\beta C(a,a') I_{a'}(t) S_a(t) - \mu I_a(t) \\ \frac{\text{d}R_a(t)}{\text{d}t} = \mu I_a(t) \end{array}\right. \tag{3.18} \end{equation}\]
Despite its simplicity, this model is still widely used today, especially in the context of metapopulations [132]. Moreover, as with all compartmental models, it is straightforward to extend it to include more complex dynamics. For instance, we can add the exposed state, so that individuals that get infected remain in a latent state for a certain amount of time before showing symptoms and being able to infect others. This model, known as the SEIR model, can be used to describe influenza dynamics [173]
\[\begin{equation} \left\{\begin{array}{l} \frac{\text{d}S_a(t)}{\text{d}t} = - \sum_{a'}\beta C(a,a') I_{a'}(t) S_a(t) \\ \frac{\text{d}E_a(t)}{\text{d}t} = \sum_{a'}\beta C(a,a') I_{a'}(t) S_a(t) - \sigma E_a(t)\\ \frac{\text{d}I_a(t)}{\text{d}t} = \sigma E_a(t) - \mu I_a(t) \\ \frac{\text{d}R_a(t)}{\text{d}t} = \mu I_a(t) \end{array}\right. \,. \tag{3.19} \end{equation}\]
The new parameter, \(\sigma\), accounts for the rate at which an individual moves from the latent state to the infectious state, in the same fashion as \(\mu\) does for the transition from \(I\) to \(R\). This model will be the focus of the last part of this section.
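As a minimal sketch of how equations (3.19) could be integrated numerically (in Python, assuming numpy and scipy), consider the following example; the contact matrix, age-group populations and rate values are illustrative placeholders rather than real data.

```python
# Sketch: age-structured SEIR dynamics of eq. (3.19) with placeholder values.
import numpy as np
from scipy.integrate import odeint

n_ages = 3
C = np.array([[10., 3., 1.],           # hypothetical contacts between age groups a and a'
              [ 3., 8., 2.],
              [ 1., 2., 5.]])
beta, sigma, mu = 0.002, 1 / 1.1, 1 / 3.0

def seir(y, t):
    S, E, I, R = y.reshape(4, n_ages)
    foi = beta * C.dot(I)               # force of infection per age group, eq. (3.16)
    return np.concatenate([-foi * S,
                           foi * S - sigma * E,
                           sigma * E - mu * I,
                           mu * I])

N_a = np.array([300., 500., 200.])      # individuals per age bracket
y0 = np.concatenate([N_a - 1, np.zeros(n_ages), np.ones(n_ages), np.zeros(n_ages)])
t = np.linspace(0, 120, 1000)
sol = odeint(seir, y0, t)               # columns: S_a, E_a, I_a, R_a over time
```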
I call myself a Social Atom - a small speck on the surface of society.
"Memoirs of a social atom", William E. Adams
As we discussed earlier, for a long period of time the developments in mathematical epidemiology were disconnected from data, at least until Anderson and May arrived in the field in the late 1980s. It is not so surprising, then, that even though age was incorporated into models from the beginning of the discipline, we had to wait until the late 1990s to get experimental data on age mixing patterns.
The first attempt to quantify the mixing behavior responsible for infections transmitted by respiratory droplets or close contact (which are the ones best suited to be studied with homogeneous mixing models) was the pioneering work by Edmunds et al. in 1997 [174]. Their results, however, can hardly be extrapolated as they only analyzed a population consisting of 62 individuals coming from two British universities. The first large-scale experiment to measure these patterns was conducted by Mossong et al. in 2008 [131]. In their study, they measured the age-dependent contact rates in eight European countries (Belgium, Finland, Germany, Great Britain, Italy, Luxembourg, Netherlands and Poland), as part of the European project Polymod, using contact diaries. In the next years other authors followed the route opened by Mossong et al. and measured the age-dependent social contacts of countries such as China [175], France [176], Japan [177], Kenya [178], Russia [179], Uganda [180] or Zimbabwe [181], as well as the Special Administrative Region of Hong Kong [182], greatly expanding the available empirical data.
These experiments provide us with the key ingredient required for the introduction of age compartments into the models: the age contact matrix, \(C\). There are, however, a couple of ways of defining this matrix that are equivalent under certain transformations. We define the matrix in extensive scale, \(C\), as the one in which each element \(C_{i,j}\) contains the total number of contacts between two age groups \(i\) and \(j\). It is trivial to see that, given this definition, there must be reciprocity in the system, i.e.,
\[\begin{equation} C_{i,j} = C_{j,i}\,. \tag{3.20} \end{equation}\]
A similar definition can be obtained if instead of accounting for all the contacts between two groups we want to capture the average number of contacts that a single individual of group \(i\) will have with individuals in group \(j\):
\[\begin{equation} M_{i,j} = \frac{C_{i,j}}{N_i}\,, \tag{3.21} \end{equation}\]
where \(N_i\) is the number of individuals in group \(i\). We call the matrix in this form the intensive scale. This is the usual format in which this matrix is given. In this case, reciprocity is fulfilled if
\[\begin{equation} M_{i,j} N_i = M_{j,i} N_j\,. \tag{3.22} \end{equation}\]
This last expression raises an interesting question. The reciprocity relation depends on the population in each age bracket, \(N_i\). Thus, if the matrix \(M\) was measured in year \(y\), we have that \(M_{i,j}(y) N_i(y) = M_{j,i}(y) N_j(y)\). However, if we want to use this matrix in a different year, that is, with a different demographic structure due to the inherent evolution of the population, reciprocity will no longer be fulfilled, i.e.
\[\begin{equation} M_{i,j}(y) N_i(y') \neq M_{j,i}(y) N_j(y')\,, \tag{3.23} \end{equation}\]
unless the population has remained unchanged. This is a major problem because there are diseases whose temporal dynamics are comparable to those of the demographic evolution. For instance, tuberculosis is a disease in which age is particularly important and whose incubation period ranges from 1 to 30 years [183]. Hence, to properly forecast the evolution of tuberculosis in a population it is strictly necessary to somehow project these age-contact matrices into the future [134]. Even for diseases with much shorter dynamics, such as influenza, this is a relevant problem because, given how costly these experiments are, it is impractical to repeat them every few years to obtain updated matrices. As a consequence, if we simply want to study the impact of influenza this year, more than 10 years after the work by Mossong et al., we need to devise a way to properly update them.
Figure 3.3 exemplifies, for Poland and Zimbabwe, the error we would make if we did not adapt \(M\) and blindly used it with demographic structures that are different from the original. We define the reciprocity error as
\[\begin{equation} E = \frac{\sum_{i,j>i} |C_{i,j}-C_{j,i}|}{0.5\cdot \sum_{i,j} C_{i,j}} = \frac{\sum_{i,j>i} | M_{i,j} N_i - M_{j,i} N_j |}{0.5 \cdot \sum_{i,j} M_{i,j} N_i}\,, \tag{3.24} \end{equation}\]
to quantify the fraction of links that are not reciprocal. The two countries under consideration have very different demographic patterns, both in the past and in the future, and yet we can see that the error is quite large in both of them.
Figure 3.3: Reciprocity error as a function of time in Poland and Zimbabwe. For each country, in the top plots the demographic structures of 1950 and 2050 are compared to the one existing when the contact matrices were measured. In the bottom plot the reciprocity error as a function of time is shown. For the matrix to be correct in different years the error should be 0, but that only happens in the year when the data was collected.
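The following sketch (in Python, assuming numpy) shows how the reciprocity error of equation (3.24) could be computed; the matrix and population vectors are small hypothetical examples, not the actual survey data.

```python
# Sketch: reciprocity error of eq. (3.24) for an intensive-scale matrix M evaluated
# against different age structures. All values below are illustrative placeholders.
import numpy as np

def reciprocity_error(M, N):
    C = M * N[:, None]                           # extensive scale: C_ij = M_ij * N_i
    num = np.sum(np.abs(np.triu(C - C.T, k=1)))  # sum over i, j > i of |C_ij - C_ji|
    return num / (0.5 * C.sum())

M = np.array([[5.0, 2.0, 1.0],                   # hypothetical matrix measured in year y
              [4.0, 6.0, 2.0],
              [4.0, 4.0, 3.0]])
N_y      = np.array([200., 100., 50.])           # age structure in the measurement year
N_future = np.array([150., 120., 90.])           # a different (projected) age structure

print(reciprocity_error(M, N_y))                 # ~0: reciprocity holds in year y
print(reciprocity_error(M, N_future))            # > 0: reciprocity is broken
```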
The problem is that both \(C_{i,j}\) and \(M_{i,j}\) implicitly contain information about the demographic structure of the population at the time they were measured. To solve this problem, we define the intrinsic connectivity matrix as
\[\begin{equation} \Gamma_{i,j} = M_{i,j} \frac{N}{N_j}\,. \tag{3.25} \end{equation}\]
This matrix corresponds, except for a global factor, to the contact pattern in a "rectangular" demography (a population structure where all age groups have the same density). Hence, it does not have any information about the demographic structure of the population.
In figure 3.4A we show the intrinsic connectivity matrices for each of the 16 regions enumerated previously. Interestingly, the contact patterns are quite different from region to region. To facilitate the comparison, in figure 3.4B we plot the fraction of connectivity that corresponds to young individuals (less than 20 years old) as a function of the assortativity of each matrix as defined by Newman [73] (this quantity is an adaptation of the Pearson correlation coefficient, so that it is equal to 1 if individuals tend to contact those who are like them, -1 in the opposite case and 0 if the pattern is completely uncorrelated). We can see that regions with similar demographic structures and culture tend to cluster together, although it is not possible to disentangle the precise cause leading to one pattern or the other.
Figure 3.4: Age contact matrices from 16 regions. A) Intrinsic connectivity matrix, \(\Gamma_{i,j}\), of each region. There is no standard definition of age brackets, and thus each study used its own. For comparison purposes, we have adapted the data to 15 age brackets: \([0,5),[5,10),\ldots,[65,70),+70\). B) Proportion of connectivity corresponding to individuals younger than 20 versus the assortativity coefficient of each matrix.
With this matrix we can now easily compute \(M\) at any other time, as long as we know the demographic structure of the population at that time:
\[\begin{equation} M_{i,j} (y') = \Gamma_{i,j} \frac{N_j(y')}{N(y')} = M_{i,j}(y) \frac{N(y)N_j(y')}{N_j(y)N(y')}\,. \tag{3.26} \end{equation}\]
In our case, we will obtain this data from the UN population division database, which contains information of both the past demographic structures and their projections to 2050 for the whole world [184].
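A minimal sketch of the projection defined by equations (3.25) and (3.26) is shown below (Python, assuming numpy); the matrix and population vectors are again illustrative placeholders.

```python
# Sketch: intrinsic connectivity matrix and demographic projection, eqs. (3.25)-(3.26).
import numpy as np

M_y  = np.array([[5.0, 2.0, 1.0],
                 [4.0, 6.0, 2.0],
                 [4.0, 4.0, 3.0]])            # hypothetical matrix measured in year y
N_y  = np.array([200., 100., 50.])            # age-group populations in year y
N_yp = np.array([150., 120., 90.])            # projected populations in year y'

Gamma = M_y * N_y.sum() / N_y[None, :]        # eq. (3.25): Gamma_ij = M_ij N / N_j
M_yp  = Gamma * N_yp[None, :] / N_yp.sum()    # eq. (3.26): M_ij(y') = Gamma_ij N_j(y') / N(y')

C_yp = M_yp * N_yp[:, None]                   # extensive scale in year y'
print(np.allclose(C_yp, C_yp.T))              # True: reciprocity is restored in year y'
```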
We conclude this section by addressing how this correction impacts disease modeling. To this end, we simulate the spreading of an influenza-like disease both with and without corrections to the matrix. We choose influenza because it is a short-cycle disease, so that we can assume that the population structure is constant during each simulated outbreak. Besides, it can be effectively modeled using the SEIR model presented in section 3.1.1.
To parameterize the model we use the values of an influenza outbreak that took place in Belgium in the 2008/2009 season [132]. Thus, individuals can catch the disease with transmissibility rate \(\beta\) per contact with an infectious individual. The value of \(\beta\) is determined in each simulation so that the basic reproduction number is equal to \(2.12\), using the next generation approach (this procedure will be explained in more detail in section 3.2) [132], [185]. Once infected, individuals remain in a latent state for \(\sigma^{-1} = 1.1\) days on average. Then, they become infectious for \(\mu^{-1} = 3\) days on average, the period during which they can transmit the infection to susceptible individuals. After that, they recover and become immune to the disease. We use a discrete and stochastic model with the population divided into 15 age classes, whose mixing is given by the age contact matrix \(M\). To sum up:
The probability that an individual belonging to age group \(i\) gets infected is
\[\begin{equation} p_{S\rightarrow E} = \beta \sum_j \frac{M_{i,j}}{N_j} I_j\,. \tag{3.27} \end{equation}\]
Once in the latent state, the probability of entering the infected state is
\[\begin{equation} p_{E\rightarrow I} = \sigma\,. \tag{3.28} \end{equation}\]
Finally, an infected individual will recover with probability
\[\begin{equation} p_{I\rightarrow R} = \mu\,. \tag{3.29} \end{equation}\]
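A minimal sketch of one update step of this discrete stochastic model is given below (Python, assuming numpy). The contact matrix, population sizes and number of age groups are placeholders (the actual model uses 15 age classes and the measured matrix \(M\)), and the infection probability of equation (3.27) is simply clipped to \([0,1]\).

```python
# Sketch: one time step of the discrete stochastic age-structured SEIR model of
# eqs. (3.27)-(3.29). All values below are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(1)
n_ages = 3
M = np.array([[5.0, 2.0, 1.0],
              [4.0, 6.0, 2.0],
              [4.0, 4.0, 3.0]])                 # intensive-scale contact matrix
N = np.array([200, 100, 50])                    # individuals per age bracket
beta, sigma, mu = 0.05, 1 / 1.1, 1 / 3.0

S, E = N - 1, np.zeros(n_ages, dtype=int)
I, R = np.ones(n_ages, dtype=int), np.zeros(n_ages, dtype=int)

def step(S, E, I, R):
    p_inf = np.minimum(1.0, beta * (M / N[None, :]) @ I)   # eq. (3.27), clipped to [0, 1]
    new_E = rng.binomial(S, p_inf)
    new_I = rng.binomial(E, sigma)                          # eq. (3.28)
    new_R = rng.binomial(I, mu)                             # eq. (3.29)
    return S - new_E, E + new_E - new_I, I + new_I - new_R, R + new_R

for _ in range(100):
    S, E, I, R = step(S, E, I, R)
```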
Under these conditions, we compute the predicted size of the epidemic, i.e., \(R(t\rightarrow \infty)\), in the years 2000 and 2050. In figure 3.5 we present the results. In particular, in A we show the difference between the predicted size of the epidemic in 2050 and the one in 2000 using the same \(M\) matrix in both years. In almost all countries the final size of the epidemic is smaller in 2050, except for China and the African countries, where it increases. In B we repeat the analysis using the adapted values of \(M(y')\) obtained with (3.26). In this case, in general, the situation is reversed. Most countries have larger epidemics, except for the African ones. Moreover, in countries such as China and Japan the difference is quite large, close to 20%.
Figure 3.5: Predictions of influenza incidence in 2050 with demographic corrections. In both plots the black horizontal line starts at the median age of each region in the year 2000 and ends with a bullet point with the predicted value in 2050. Color bars denote the relative variation of incidence over the same period. In A the predictions are computed using the original contact matrices collected from the surveys. In B the proposed demographic corrections are applied to the matrices.
Summarizing, to create more realistic models we need to incorporate empirically measured data. However, blindly using data without considering whether they can be applied to the specific system we are studying is not adequate. In the particular case of social mixing matrices, we have seen that, even if we keep studying the same country, just moving a few years away from the moment in which the experiment took place dramatically affects the reciprocity of the contacts. This, in turn, leads to important differences in the global incidence of influenza-like diseases, as we have shown in our analysis of the SEIR model. Moreover, since intrinsic connectivity patterns differ across countries, it is possible that this quantity also evolves in time. Indeed, if we believe that the intrinsic pattern is a consequence of the culture of the country, it seems logical to think that an evolving culture will also have evolving intrinsic connectivity patterns. Although predicting how society will change in the future is currently impossible, this should be taken into account as a limitation in any forecast for which heterogeneity in social mixing is a key element.
One of the cornerstones of modern epidemiology is the basic reproduction number, \(R_0\), defined as the expected number of individuals infected by a single infected person during her entire infectious period in a population that is entirely susceptible. From this definition, it is clear that if \(R_0<1\), each infected individual will produce, on average, less than one infection. Therefore, the disease will not be able to sustain itself in the population. Conversely, if \(R_0>1\) the disease will be able to propagate to a macroscopic fraction of the population. Hence, this simple dimensionless quantity informs us about three key aspects of a disease: (1) whether the disease will be able to invade the population, at least initially; (2) which control measures, and at what magnitude, would be most effective, i.e., which ones will reduce \(R_0\) below 1; and (3) the risk of an epidemic for emerging infectious diseases [186].
Interestingly, despite its importance, this quantity did not originate in epidemiology. The concept of \(R_0\), and its notation, was formalized by Dublin and Lotka in 1925, in the context of demography [188]. The similarity of the concept in both fields is obvious: in one it measures the number of new infections per infected individual, while in the other it measures the number of births per female. Yet, in epidemiology the concept was mostly unknown until Anderson and May popularized it 60 years later at the Dahlem conference [189] (see [190] for a nice historical discussion on why it took so long for this concept to mature in epidemiology).
It might be enlightening to introduce the mathematical definition from the point of view of demography. Consider a large population. Let \(F_d(a)\) be the survival function, i.e., the probability for a new-born individual to survive at least to age \(a\), and let \(b(a)\) denote the average number of offspring that an individual will produce per unit of time at age \(a\). The function \(n(a) \equiv b(a) F_d(a)\) is called the reproduction function. Hence, the expected future offspring of a new-born individual, \(R_0\), is [191]
\[\begin{equation} R^{demo}_0 \equiv \int_0^\infty n(a) \text{d} a = \int_0^\infty b(a) F_d(a) \text{d}a\,. \tag{3.30} \end{equation}\]
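As a small numerical illustration of equation (3.30) (Python, assuming numpy), one can evaluate \(R^{demo}_0\) for a made-up reproduction function; the survival function and birth rate below are purely hypothetical.

```python
# Sketch: numerical evaluation of the demographic R0 of eq. (3.30) with made-up inputs.
import numpy as np

a = np.linspace(0, 100, 10001)                  # age in years
da = a[1] - a[0]
F_d = np.exp(-a / 70.0)                         # hypothetical survival function
b = np.where((a >= 20) & (a < 40), 0.1, 0.0)    # hypothetical births per year at age a

R0_demo = np.sum(b * F_d * da)                  # expected lifetime offspring per newborn
print(R0_demo)
```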
The translation of this definition to epidemiology is straightforward. First, note that the reproduction function at age \(a\) is equivalent to the expected infectivity of an individual who was infected \(\tau\) units of time ago, \(A(\tau)\) (see equation (3.4)). There is, however, one crucial difference. While in demography it is possible to "create" new individuals regardless of the size of the rest of the population, in epidemiology the creation of new individuals depends both on the infectivity and on the amount of susceptible individuals in the population. Hence,
\[\begin{equation} R_0(\eta) \equiv \int_\Omega S(\xi) \int_0^\infty A(\tau,\xi,\eta)\text{d}\tau \text{d}\xi\,. \tag{3.31} \end{equation}\]
This rather cryptic expression is the most general definition of this quantity [192], although we will see simpler ones in a moment. The expression should be read as follows: the value of \(R_0\) for individuals in an infectious state \(\eta\) is obtained by summing, over all susceptible states \(\xi\) in the domain \(\Omega\), the number of individuals in state \(\xi\) times the total infectivity towards state \(\xi\) of an individual in state \(\eta\) who was infected \(\tau\) units of time ago.
In the particular case of the SIR model under the density dependent approach, equation (3.9), the basic reproduction number is simply
\[\begin{equation} \begin{split} R_0 & = S_0 \int_0^\infty A(\tau) \text{d}\tau = S_0 \int_0^\infty \beta e^{-\mu \tau} \text{d}\tau \\ & = \frac{\beta S_0}{\mu}\,, \end{split} \tag{3.32} \end{equation}\]
where \(S_0\) is the number of susceptible individuals at the beginning of the infection, which in the absence of immunized individuals is equal to \(N\). Recall that the linear stability analysis of the SIR model (3.12) yields,
\[\begin{equation} i(t) = i_0 e^{(\beta N- \mu)t}\,, \tag{3.33} \end{equation}\]
which only grows if \(\beta N > \mu\). In other words, \(R_0\) defines precisely the epidemic threshold that we found in the previous section, i.e.,
\[\begin{equation} R_0 = \frac{\beta N}{\mu} > 1\,, \tag{3.34} \end{equation}\]
as heuristically discussed at the beginning of this section. In the frequency dependent approach, which is more common in the network science literature as we shall see in section 3.3, the basic reproduction number is
\[\begin{equation} R_0 = \frac{\beta}{\mu} > 1\,. \tag{3.35} \end{equation}\]
To obtain an explicit expression for \(R_0\) in more complex compartmental models, an alternative to linear stability analysis is the next generation matrix approach, proposed by Diekmann in 1990 [192] and further elaborated by van den Driessche and Watmough in 2002 [185]. Briefly, the idea is to study the stability of the disease free state, \(x_0\). To do so, we restrict the model to those compartments with infected individuals and separate the evolution due to new individuals getting infected, \(\mathcal{F}\), from the transitions due to any other reason, \(\mathcal{V}\),
\[\begin{equation} \frac{\text{d}x_i(t)}{\text{d}t} = \mathcal{F}_i(x) - \mathcal{V}_i(x)\,, \tag{3.36} \end{equation}\]
where \(x = (x_1,\ldots,x_m)\) denotes the \(m\) infected states in the model. If we now define
\[\begin{equation} F = \left[\frac{\partial \mathcal{F}_i(x_0)}{\partial x_j}\right] \text{~~and~~} V = \left[\frac{\partial \mathcal{V}_i(x_0)}{\partial x_j}\right]\,, \tag{3.37} \end{equation}\]
the next generation matrix is \(FV^{-1}\) and the basic reproduction number can be obtained as
\[\begin{equation} R_0 = \rho(FV^{-1})\, \tag{3.38} \end{equation}\]
where \(\rho\) denotes the spectral radius [193]. In particular, for the model considered in section 3.1.2, the next generation matrix reads
\[\begin{equation} K_{i,j} = \frac{\beta}{\mu} \frac{M_{i,j}}{N_j}\,. \tag{3.39} \end{equation}\]
This expression can be used to ensure that, regardless of the values of \(M_{i,j}\) and \(N_j\), the starting point of the dynamics is the same. For this reason, when we wanted to address the differences in incidence resulting from the changing demographics, we fitted \(\beta\) so that the spectral radius of (3.39) was always \(R_0 = 2.12\).
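A minimal sketch of this calibration (Python, assuming numpy) is shown below; the contact matrix and populations are placeholders, and we simply use the fact that the spectral radius of (3.39) scales linearly with \(\beta\).

```python
# Sketch: choosing beta so that the spectral radius of the next generation matrix,
# eq. (3.39), equals a target R0. M and N are illustrative placeholders.
import numpy as np

M = np.array([[5.0, 2.0, 1.0],
              [4.0, 6.0, 2.0],
              [4.0, 4.0, 3.0]])
N = np.array([200., 100., 50.])
mu, R0_target = 1 / 3.0, 2.12

K_unit = (1.0 / mu) * M / N[None, :]             # next generation matrix with beta = 1
rho = np.max(np.abs(np.linalg.eigvals(K_unit)))  # spectral radius
beta = R0_target / rho                           # R0(K) = beta * rho(K_unit)
print(beta)
```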
It is worth pointing out that \(R_0\) clearly depends on the model we choose for the dynamics. As a consequence, even though its epidemiological definition is completely model-independent (the number of secondary infections per infected individual in a fully susceptible population), its mathematical formulation is not unique. Ideally, if in a disease outbreak we knew who infected whom, we would be able to obtain the exact value of \(R_0\). In reality, however, this information is seldom available. Hence, to compute it, one often relies on aggregated quantities, such as \(\beta\) and \(\mu\) in equation (3.35). The problem is that if we assume that a disease can be modeled within a specific framework, we cannot directly compare the value obtained for \(R_0\) with the ones measured for other diseases unless the exact same model has been used to obtain them. This is one of the observations that motivated our work, which we will describe in section 3.2.3.
Measuring \(R_0\) is not an easy task, especially in the case of emerging diseases for which fast forecasts are required. An accurate estimation of its value is crucial for planning the control of an infection, but usually the only available information about the transmissibility of a new infectious disease is restricted to the daily count of new cases. Fortunately, it is possible, under certain conditions, to obtain an expression for \(R_0\) as a function of those data.
Following [194], we will assume that at the beginning of a disease outbreak the growth of the number of infected individuals is exponential. Hence, the number of newly infected individuals at time \(t\) will be equal to the number of newly infected individuals \(\tau\) time units earlier, multiplied by the exponential growth factor,
\[\begin{equation} \frac{\text{d}S(t)}{\text{d}t} = \frac{\text{d}S(t-\tau)}{\text{d}t} e^{r\tau}\, \tag{3.40} \end{equation}\]
where \(r\) denotes the growth rate. Inserting this expression in (3.1) with \(t\rightarrow 0\),
\[\begin{equation} \frac{\text{d}S(t)}{\text{d}t} = S(t=0) \int_0^\infty A(\tau) \frac{\text{d}S(t)}{\text{d}t} e^{-r\tau} \text{d}\tau \Rightarrow 1 = S_0 \int_0^\infty A(\tau) e^{-r\tau} \text{d}\tau \tag{3.41} \end{equation}\]
At this point it might be enlightening to return once again to the demographic analogy. In equation (3.30) we saw that the total number of offspring of a person could be obtained by integrating \(n(a)\) (the rate of reproduction at age \(a\)) over the whole lifespan of the individual. Thus, we can define the distribution of the age a person has when she has a child as
\[\begin{equation} g'(a) = \frac{n(a)}{\int_0^\infty n(a) \text{d} a } = \frac{n(a)}{R_0^{demo}}\,. \tag{3.42} \end{equation}\]
If we take the "age" of an infection to be the time since the infection, we can define an analogous quantity in epidemiology,
\[\begin{equation} g(\tau) = \frac{S_0 A(\tau)}{R_0}\,, \tag{3.43} \end{equation}\]
called the generation interval distribution. This distribution is the probability distribution function of the time from the infection of an individual to the infection of a secondary case by that individual. Going back to (3.41), we now have
\[\begin{equation} \frac{1}{R_0} = \int_0^\infty g(\tau) e^{-r\tau} \text{d}\tau\,. \tag{3.44} \end{equation}\]
According to this last expression, the shape of the generation interval distribution determines the relation between the basic reproduction number \(R_0\) and the growth rate \(r\). In all the models explored so far, we assumed that both the rate of infection \(\beta\) and the rate of leaving the infectious stage \(\mu\) were constant. Hence, it follows that the duration of a generation interval follows an exponential distribution with mean \(T_g = 1/\mu\). Under these assumptions the basic reproduction number is then
\[\begin{equation} \begin{split} R_0 & = \left( \int_0^\infty \mu e^{-\mu\tau} e^{-r\tau} \text{d} \tau\right)^{-1} = \left( \frac{\mu}{r+\mu} \right)^{-1} \\ & = 1 + rT_g \,. \end{split} \tag{3.45} \end{equation}\]
This relation between the growth rate and the generation time was already proposed by Dietz in 1976 [195], although only in the specific case of the SIR model. Equation (3.44), however, allows for the calculation of \(R_0\) in more complex scenarios, such as a non-constant \(\mu\) [194]. Despite its limitations, this expression is widely used in the literature due to its simplicity. Indeed, \(T_g\) is often taken to be simply the inverse of the recovery rate, which is relatively easy to measure. Thus, \(r\) can be obtained by fitting a straight line to the cumulative number of infections as a function of time on a logarithmic scale, see (3.40).
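A minimal sketch of this procedure (Python, assuming numpy) is shown below; the incidence series is synthetic and the generation time is an assumed value, so the numbers are purely illustrative.

```python
# Sketch: estimating R0 from the early exponential growth phase via eq. (3.45),
# R0 = 1 + r * T_g. The case counts below are synthetic, for illustration only.
import numpy as np

T_g = 3.0                                  # assumed mean generation time (days)
t = np.arange(15)                          # early phase of the outbreak (days)
cases = 10 * np.exp(0.12 * t)              # synthetic cumulative case counts

r, _ = np.polyfit(t, np.log(cases), 1)     # slope of log(cases) vs time = growth rate r
R0 = 1 + r * T_g
print(f"r = {r:.3f} per day, R0 = {R0:.2f}")
```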
There are, however, several problems with this procedure. First, we stated that the exponential growth is valid during the early phase of an outbreak, but there is no way to know, in general, how long that phase lasts. As a consequence, when one fits a straight line to the data, some heuristics have to be used to determine which points to use. Moreover, if the dynamics are very fast there might be only a few valid points. Given the stochasticity of the process, this can lead to poor estimates of the growth rate.
Besides, there are some caveats regarding the exponential growth assumption. For instance, it has been observed that for some diseases, such as HIV/AIDS, the early growth is sub-exponential [196]. Likelihood-based methods that do not require early exponential growth were thus proposed [197], [198]. But even for diseases in which it might be a good approximation, there is the problem of susceptible depletion. Indeed, if the population were infinite, each infected individual would always be able to reach an unlimited number of susceptibles. This is not true in real situations, preventing the exponential growth from being sustained for too long. Hence, methods that account for this depletion during the initial phase also had to be developed [199], [200].
It should be clear by now that, despite the widespread use of this parameter, it is far from being perfectly understood, especially in the presence of real-world data. Yet, we can go one step further and generalize the definition of \(R_0\) to the effective reproduction number, \(R(t)\).
The effective reproduction number, \(R(t)\), is defined as the average number of secondary cases generated by an infectious individual at time \(t\). Hence, we are relaxing the hypothesis of a fully susceptible population that we gave at the beginning of section 3.2.
This parameter is obviously better suited for studying the impact of protection measures taken after the detection of an epidemic, as it can be defined at any time. If \(R(t)<1\), it seems reasonable to say that the epidemic is in decline and may be regarded as being under control at time \(t\). Furthermore, in section 3.1.1 we saw that diseases such as measles have periodic outbreaks and also convey immunity to those who have suffered it. Thus, when a new outbreak starts the population is not completely susceptible, invalidating one of the conditions in the definition of \(R_0\) [201].
To provide a mathematical definition of \(R(t)\) [202], we can revisit equation (3.32) and define
\[\begin{equation} R(t) = S(t) \int_0^\infty A(\tau)\text{d}\tau\,, \tag{3.46} \end{equation}\]
which leads to
\[\begin{equation} R(t) = \frac{S(t)}{S_0} R_0\,. \tag{3.47} \end{equation}\]
According to this expression, in a closed population the value of \(R(t)\) should monotonically decrease. As expected, this has been observed in computational simulations [201], even in the case of sub-exponential growth [203]. However, this is not always the case if one tries to obtain \(R(t)\) from real data [204]. In particular, Wallinga and Teunis studied the severe acute respiratory syndrome (SARS) epidemic of 2003 and observed several local maxima in the evolution of the effective reproduction number, which they attributed to "super-spread events" in which certain individuals infected unusually large numbers of secondary cases [205]. This was also found in other diseases, signaling that the whole complexity of real systems cannot be completely captured with simple homogeneous models [206]. For this reason, the next section will be devoted to our contribution to the study of the effect that more heterogeneous population structures have on the reproduction numbers.
Of course, there had been plenty of diseases, long before humans had been around. But humans had definitely created Pestilence. They had a genius for crowding together, for poking around in jungles, for setting the midden so handily next to the well. Pestilence was, therefore, part human, with all that this entailed.
The fundamental role that households play in the spreading of epidemics has been acknowledged for a long time. Early estimations of influenza spreading already showed that the probability of getting infected from someone living in one's household is quite different from the probability of getting infected from someone in the community. Moreover, it was shown that children were twice as likely as adults to get the infection from the community, signaling that the places children visit and the heterogeneity of household members themselves are fundamental to the disease dynamics [207]. In a more recent study, data from a real epidemic in a small semi-rural community were analyzed, with schools added explicitly into the picture. As expected, it was observed that their role is key in the spreading of the disease. But, more interestingly, the authors calculated a reproduction number for each population structure and found it to be smaller than or of the order of 1, meaning that for an outbreak to be sustained a complex interplay between those structures must take place [208].
In order to introduce the concept of households into the models analyzed so far, we need to revisit once again the assumption of full homogeneity. Most theoretical approaches in this line, since the seminal work of Ball et al. in 1997 [209], have focused on what are known as models with two levels of mixing. In these models, a local homogeneous mixing in small environments (such as households) is set over a background homogeneous mixing of the whole population. This can be further extended by adding other types of local interactions, such as schools or workplaces. An individual can thus belong at the same time to two or more local groups. For this reason, they are also known as overlapping group models [210]. This allows for the definition of several basic reproduction numbers, one for the community and the rest for local interactions, which in turn can be used to devise efficient vaccination strategies [211], [212]. Other studies have also proposed that the generation time can differ between households and the community [213].
However, theoretical studies have been mostly focused on the early phase of the epidemics because it is more mathematically tractable. Yet, we have seen in the previous section that \(R(t)\) can provide very important insights to understand the dynamics of real diseases. For this reason, statistical methods have been developed to analyze \(R(t)\) [214], [215]. Unfortunately, for these methods, disentangling the role that each structure of the system plays is challenging due to the lack of microscale data on human contact patterns for large populations. Note also that due to the typically small size of households, stochastic effects are highly important.
In this work our objective is to shed some light on the mechanisms behind disease dynamics in heterogeneous populations. To do so, we study the evolution of \(R(t)\) and \(T_g\) with data-driven stochastic micro-simulations of an influenza-like outbreak on a highly detailed synthetic population. The term "micro" refers to the fact that we will keep track of each individual in the population, allowing us to reconstruct the entire transmission chain. The great advantage of this method is that it allows for the computation of \(R(t)\) directly from its epidemiological definition, without requiring any mathematical approximation.
Our synthetic population is composed of 500,000 agents, representing a subset of the Italian population. This population model, developed by Fumanelli et al. [216], divides the system into the four settings where influenza transmission occurs, namely households, schools, workplaces and the general community [217]. Henceforth we will refer to these settings as layers, given the similarity of this construction to the multilayer networks we saw in section 2.1.3; a visualization of the model is shown in figure 3.6A. The household layer is composed of \(n_H\) disconnected components, each one representing one household. The number of individuals inside each household, as well as their age, is determined by sampling from the actual Italian household size distribution. Then, by sampling from the multinomial distribution of schooling and employment rates by age, each individual might also be assigned to a school or workplace. Both the number and size of workplaces and schools are also sampled from the actual Italian distributions. As in the household layer, each of the \(n_S\) schools and \(n_W\) workplaces is disconnected from the rest in its respective layer. Lastly, all individuals are allowed to interact with each other in the community layer, encapsulating the background global interaction. To highlight the heterogeneity of the system, figure 3.6B shows the size distribution of the places each individual belongs to. Note that while most households contain 2-3 individuals and most schools are close to 1,000 students, workplaces cover a much wider range of sizes.
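To make the construction of the synthetic population more concrete, the following is a minimal sketch of the sampling procedure. All distributions and rates used here (household sizes, schooling ages, employment rate, numbers of schools and workplaces) are illustrative placeholders, not the actual Italian data used in the model.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10_000  # illustrative size; the actual model uses 500,000 agents

# Placeholder household size distribution (the model samples the real Italian one)
size_values, size_probs = [1, 2, 3, 4, 5], [0.25, 0.30, 0.25, 0.15, 0.05]

household_of = np.empty(N, dtype=int)
assigned, h = 0, 0
while assigned < N:
    size = rng.choice(size_values, p=size_probs)
    members = min(size, N - assigned)
    household_of[assigned:assigned + members] = h
    assigned += members
    h += 1

# Illustrative age assignment and school/workplace membership by age group
age = rng.integers(0, 90, size=N)
n_schools, n_workplaces = 10, 300                              # placeholder counts
is_student = (age >= 6) & (age < 19)                           # assumed schooling ages
is_worker = (age >= 19) & (age < 65) & (rng.random(N) < 0.6)   # assumed employment rate
school_of = np.where(is_student, rng.integers(0, n_schools, N), -1)
work_of = np.where(is_worker, rng.integers(0, n_workplaces, N), -1)

print(f"{h} households, {is_student.sum()} students, {is_worker.sum()} workers")
```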
Figure 3.6: Model structure of a synthetic population organized in schools, households and workplaces. A) Visualization of the overlapping system, with individuals being able to interact locally in multiple contexts. B) Distribution of the structure size each individual belongs to. C) Illustration of the transmission process with an example of how to calculate the reproduction number and generation interval.
The influenza-like transmission dynamics are defined through the susceptible, infected, removed (SIR) compartmental model that we have been analyzing under diverse assumptions. We simulate the transmission dynamics using a stochastic process for each individual, keeping track of where she contracted the disease, whom she is in contact with, and so on. In order to resemble an influenza-like disease, the local spreading power in each layer is calibrated in such a way that the fraction of cases in the four layers is in agreement with literature values (namely, 30% of all influenza infections are linked to transmission occurring in the household setting, 18% in schools, 19% in workplaces and 33% in the community [218]). Hence, the probability that individual \(j\) infects \(i\) in layer \(l\) is
\[\begin{equation} \beta = w_l\,, \tag{3.48} \end{equation}\]
as long as \(j\) is infected, \(i\) is susceptible and both belong to the same component in layer \(l\). Moreover, we set the values of \(w_l\) such that the basic reproduction number is \(R_0=1.3\) [219]. Finally, the removal probability, \(\mu\), is set so that the removal time is 3 days [220].
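As an illustration of how the per-layer transmission rule (3.48) and the recovery probability \(\mu\) can be turned into a stochastic update, the sketch below implements one synchronous time step for a single layer whose components are stored as lists of members. The weight `w_l` and the bookkeeping structures are illustrative; the actual model iterates this kind of step over the four layers with the calibrated weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer_step(state, places, w_l, mu, t, infected_by, infection_time, layer_name):
    """One time step of the SIR rule of eq. (3.48) in a single layer.

    state: list with "S", "I" or "R" per individual.
    places: dict mapping each component of the layer (household, school, ...) to its members.
    Returns the newly infected individuals, recording who infected whom, where and when."""
    newly_infected = []
    for members in places.values():
        infectors = [i for i in members if state[i] == "I"]
        for j in infectors:
            for i in members:
                if state[i] == "S" and rng.random() < w_l:
                    state[i] = "I"
                    infected_by[i] = (j, layer_name)
                    infection_time[i] = t
                    newly_infected.append(i)
    # Recovery: only individuals already infectious at the start of the step can recover
    for i, s in enumerate(state):
        if s == "I" and i not in newly_infected and rng.random() < mu:
            state[i] = "R"
    return newly_infected
```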
The epidemic starts with a fully susceptible population in which we set just one individual as infected. Thanks to the microscopic detail of the model, we can thus compute the basic reproduction number directly as the number of infections that said first individual produces before recovery. Its counterpart over time, the effective reproduction number, \(R(t)\), is measured using the average number of secondary cases generated by an infectious individual at time \(t\). Similarly, we also define the effective reproduction number in layer \(l\), \(R_l(t)\), as the average number of secondary infections generated by a typical infectious individual in layer \(l\):
\[\begin{equation} R_l(t) = \frac{\sum_{i\in \mathcal{I}(t)} D_l(i)}{|\mathcal{I}(t)|}\,, \tag{3.49} \end{equation}\]
where \(\mathcal{I}(t)\) represents the set of infectious individuals that acquired the infection at time \(t\) and \(D_l(i)\) the number of infections generated by infectious node \(i\) in layer \(l\) with \(l \in L=\{H,S,W,C\}\). With this expression we can obtain the overall reproductive number as
\[\begin{equation} R(t) = \sum_{l\in L} R_l(t)\,. \tag{3.50} \end{equation}\]
The generation time \(Tg\) is defined as the average time interval between the infection time of infectors and their infectees. Hence, analogously to the reproduction number, we define the generation time in layer \(l\) as
\[\begin{equation} {Tg}_l = \frac{\sum_{i\in \mathcal{I}(t)} \sum_{j \in \mathcal{I}'_l(i)} (\tau(j)-t)}{\sum_{i \in \mathcal{I}(t)} D_l(i)}\,, \tag{3.51} \end{equation}\]
where \(\mathcal{I}'_l(i)\) denotes the set of individuals that \(i\) infected in layer \(l\) and \(\tau(j)\) is the time when node \(j\) acquired the infection. Therefore, the overall generation time \(Tg(t)\) reads
\[\begin{equation} Tg(t) = \frac{\sum_{l\in L}\sum_{i \in \mathcal{I}(t)} \sum_{j \in \mathcal{I}'_l(i)} (\tau(j)-t)}{\sum_{l\in L}\sum_{i\in \mathcal{I}(t)} D_l(i)}\,. \tag{3.52} \end{equation}\]
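With the full transmission chain recorded, equations (3.49)-(3.52) can be evaluated directly. A minimal sketch, assuming the chain is stored as a list of `(infector, infectee, layer, infection_time_of_infectee)` tuples and that `infection_time` maps every individual to the time she acquired the infection:

```python
from collections import defaultdict

def indicators_by_cohort(chain, infection_time, layers=("H", "S", "W", "C")):
    """Compute R_l(t), R(t) and Tg(t), following eqs. (3.49)-(3.52)."""
    secondary = defaultdict(int)    # (t, l) -> number of secondary cases D_l
    intervals = defaultdict(float)  # (t, l) -> sum of generation intervals tau(j) - t
    cohort = defaultdict(set)       # t -> set I(t) of individuals infected at time t
    for i, t in infection_time.items():
        cohort[t].add(i)
    for infector, infectee, layer, tau in chain:
        t = infection_time[infector]
        secondary[(t, layer)] += 1
        intervals[(t, layer)] += tau - t

    R, R_layer, Tg = {}, {}, {}
    for t, members in cohort.items():
        R_layer[t] = {l: secondary[(t, l)] / len(members) for l in layers}  # eq. (3.49)
        R[t] = sum(R_layer[t].values())                                     # eq. (3.50)
        total = sum(secondary[(t, l)] for l in layers)
        Tg[t] = (sum(intervals[(t, l)] for l in layers) / total
                 if total > 0 else float("nan"))                            # eq. (3.52)
    return R, R_layer, Tg
```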
A schematic illustration of the transmission dynamics is shown in figure 3.6C. There, individual 1 gets infected at \(t=t_0\), while individuals 2 and 3 are still susceptible. During the course of her disease individual 1 infects individual 2 at \(t=t_1\) and individual 3 at \(t=t_2\) before finally recovering at \(t=t_3\). Thus, her reproduction number is equal to 2 and her generation time is \(1.5\), supposing that \(t_{i+1} - t_i = 1\). In fact, in our simulations we will set \(\Delta t = 1\) day. Due to the stochasticity of the process, each realization might result in an outbreak of different length. Hence, the time evolution of each simulation is aligned so that the peak of the epidemic is exactly at \(t=0\). The results for the reproduction number and generation time are shown in figure 3.7.
Figure 3.7: Fundamental epidemiological indicators. A) Top: mean \(R(t)\) of data-driven model (solid line) compared to the solution under a completely homogeneous population (dashed line). The colored area shows the density distribution of \(R(t)\) values obtained in single realizations of the model. Bottom: the reproductive number is broken down in the four layers. B) As A but for the generation time. In all cases the simulations have been aligned at the peak of the epidemic.
We find that \(R(t)\) increases over time in the early phase of the epidemic, starting from \(R_0 = 1.3\) to a peak of about \(2.5\) (figure 3.7A). In contrast, in the homogeneous model (dashed line), which lacks the typical structures of human populations, \(R(t)\) is nearly constant in the early epidemic phase and then rapidly declines before the epidemic peak (\(t=0\)), as predicted by the classical theory. The non-constant phase of \(R(t)\) implies that \(R_0\) loses its meaning as a fundamental indicator in favor of \(R(t)\). In figure 3.7B we show an analogous analysis of the measured generation time in the data-driven model. In this case, we find that \(Tg\) is considerably shorter than the infectious period (\(3\) days), with a more marked shortening near the epidemic peak. Once again, in the homogeneous model (dashed line) the behavior predicted by the classical theory is recovered.
A closer look at the transmission process in each layer helps to understand the origin of the deviations from classical theory. Specifically, we see that \(R(t)\) tends to peak in the workplace layer, and to some extent also in the school layer. In the community layer, on the other hand, the behavior is much closer to what is expected in a classical homogeneous population. We also find that \(Tg\) is remarkably shorter in the household layer than in all other layers. This could simply be due to a depletion of susceptibles. To illustrate this, suppose that an infected individual in a household of size 3 infects one of the other two. Then, during the next time step both will compete to infect the last susceptible, something that does not happen in large populations. This leads to a shorter generation time because, once the household runs out of susceptibles, later transmissions are simply impossible, even if the individual is still infectious. This evidence calls for considering within-household competition effects when analyzing empirical data on the generation time.
Figure 3.8: Attack rate as a function of site size. A) Fraction of individuals belonging to each place that contracted the disease, not necessarily in said setting. B) Solid line: average size of places in which there is at least one new infection in each time step, broken down in three layers. Dashed line: expected size if there is at least one infection in every place.
To further understand the reasons for the diverse trends observed in each layer, in figure 3.8 we analyze the effect that the size of the components has on the dynamics. In figure 3.8A we study the attack rate (final fraction of removed individuals, i.e., individuals that suffered the infection at some point) as a function of the site size, distinguishing the three layers. The results indicate that the spreading is much more important in large places, but we know that they are scarce (see figure 3.6B). Hence, it seems that the initial growth of the epidemic might stop once the big components have been mostly infected. This is corroborated in figure 3.8B, where the average size of places with at least one infection is shown. The situation is thus clear. In the classical model it is assumed not only that the whole population is initially susceptible, but also that it is in contact with the first infected individual from the very beginning. In heterogeneous populations, however, the first infected individual has only a handful of local contacts, diminishing her infectious power. Then, as the epidemic progresses more and more susceptibles enter into play, increasing the number of individuals that can be infected. Yet, sooner or later the components will run out of susceptibles, even if there is still a large fraction available in the rest of the system. This, in turn, leads to a more abrupt descent than what is expected in the classical approximation.
These results clearly highlight how the heterogeneity of human interactions (i.e., clustering in households, schools and workplaces) alters the standard results of fundamental epidemiological indicators, such as the reproduction number and the generation time. Furthermore, they call into question the measurability of \(R_0\) in realistic populations, as well as its adequacy as an approximate descriptor of the epidemic dynamics. Lastly, our study suggests that epidemic inflection points, often ascribed to behavioral changes or control strategies, could also be explained by the natural contact structure of the population. Hopefully, this analysis will open the path to developing better theoretical frameworks, in which some of the most fundamental assumptions of epidemiology have to be revisited.
In epidemiology attention has historically been restricted to biological factors. We began this chapter stating that individuals were just indistinguishable particles interacting according to the mass action law. However, throughout the following sections, we have shown that when this oversimplification is relaxed many interesting phenomena arise. In this section we shall go one step further and completely remove what we called assumptions 3 and 4: mass action and indistinguishability. To distinguish individuals, we will assign to each of them an index, \(i\). Then, we will allow individuals to spread the disease only to those with whom they have some kind of contact (e.g. they are friends), which we will encode in links. In other words, we are finally going to introduce networks into the picture.
It is rather difficult to establish the origin of what we may call disease spreading on networks or network epidemiology. Probably, one of the earliest attempts is the work by Cochran, in 1936 [221], in which he studied the propagation of a disease in a small plantation of tomatoes. Although his work might be better described as statistical analysis, the reason to consider it one of the precursors of the spreading on networks is that, as Kermack and McKendrick had done roughly 10 years before, his assumptions were all mechanistic rather than based on the knowledge of the particular problem. This is clearly seen in how he introduced the model: "We suppose that in the first [day] each plant in the field has an equal and independent chance \(p\) of becoming infected, and that in the second [day] a plant which is next to a diseased plant has a probability \(s\) of being infected by it, healthy plants which are not next to a diseased plant remaining healthy". In modern terminology, the plants were arranged in a lattice structure and could infect their first neighbors with probability \(s\). The assumptions are particularly strong because he knew that the disease was propagated by an insect, but decided to create a very general model.
During the next couple of decades, lattice systems were quite popular in physics, geography and ecology. Then, in the 1960s the interest in studying the spatial spread of epidemics started to grow (see the introduction of [222] for a nice overview) with three main approaches. In the first, the agents that could be infected were set in the center of a certain tessellation of space and could only infect/be infected by their neighbors. This is the closest approach to the modern study of epidemics on networks, but it was not so popular, as it was mainly used to study plant systems [223]. The most popular approach was to distribute individuals continuously in space with some chosen density. This led to the study of diffusion processes, focusing on the interplay between velocity and density [224]. The third approach was based on what we briefly defined in section 3.1.1 as metapopulations. Recall that in a metapopulation individuals are arranged in a set of sub-populations. Within each sub-population homogeneous mixing is usually applied, and individuals are allowed to migrate from one sub-population to another. Hence, the idea was to simulate the fact that people (or animals) live in a certain place where they can contract the disease, and then travel to a different area and spread it to its inhabitants [168]. Note, however, that none of these methods takes into account any sociological factors of the population.
At the beginning of the 1980s some results pointing in the direction of introducing more complex networks started to appear. In particular, von Bahr and Martin-Löf in 1980 [225] and Grassberger in 1983 [226] (in the context of percolation) showed that the classical SIR model on a homogeneous population could be related to the ER graph model that we saw in section 2.1.2. Indeed, suppose that we have a set of individuals under the homogeneous mixing approach and simulate an epidemic. Next, if an individual \(i\) infects another individual \(j\), we establish a link between them. If the probability of this event is really low, the epidemic will not take place. Conversely, for large probabilities most nodes will be randomly connected. This is precisely how we defined the ER model, with the only difference being that the probability \(p\) of establishing a link will be a function of both the probability of infecting someone, \(\beta\), and that of recovering, \(\mu\). Note, however, that they did not implement disease dynamics on a network, but rather extracted the network from the results of the dynamics.
Roughly at the same time, gonorrhea rates rose precipitously. To understand the dynamics of this venereal disease, it was recognized that some sort of nonrandom mixing of the population had to be incorporated into the models. The first attempts were based on separating the contact process and the spreading process [227]. Indeed, going back to the definition of the SIR model, we defined the rate of infectivity (3.5) as \(\phi(\tau) = \beta/N\) (under the frequency approach). We can simply write \(\beta = c\,\beta'\), where \(c\) is the contact rate between individuals and \(\beta'\) the probability of spreading the disease given that a contact has taken place. For simplicity, we can remove the apostrophe and simply write \(\phi(\tau) = c \beta/N\). With this definition the epidemic threshold would read
\[\begin{equation} \frac{c \beta}{\mu} > 1 \Rightarrow \frac{\beta}{\mu} > \frac{1}{c}\,. \tag{3.53} \end{equation}\]
This expression gives us a very powerful insight about how to combat diseases. Supposing that \(\beta\) and \(\mu\) are fixed, as they mostly depend on the characteristics of the pathogen, the best way to prevent an epidemic is to reduce the number of contacts as much as possible. A similar result was obtained in the context of vector-borne diseases, in which the number of vectors (e.g. mosquitoes) plays the role of the number of contacts [228].
The next step was the introduction of mixing matrices, the same approach that we followed to incorporate age into the SIR model in section 3.1.1. Recall that the idea was to divide the population into smaller groups according to some characteristics (e.g. gender or age) and establish some rules governing the interaction between those groups encoded in a contact matrix (hence the name of mixing matrices). Typically, both the group definitions and the mixing function were very simple. In the context of venereal diseases, the most common characteristic used to form groups was activity level. This approach was popularized by Hethcote and Yorke in 1984 [229] in their modeling of gonorrhea dynamics using the core group: a group of highly sexually active individuals who are efficient transmitters and who interact with a much larger noncore group. They showed that with less than 2% of the population in the core group, this model led to 60% of the infections being caused directly by core members. Yet, the world of epidemiology was about to be shaken by a new virus that would defy all these assumptions: HIV.
The emergence of HIV in the early 1980s forced scientists to pay even more attention to the role of selective mixing. In 1986, in one of the earliest attempts to model this disease [230], Anderson and May summarized the challenges that sexually transmitted diseases (STDs) presented in contrast to other more common infectious diseases such as measles:
For STDs only sexually active individuals need to be considered as candidates of the transmission process, in contrast to simple "mass action" transmission models.
The carrier phenomenon, in which certain individuals harbor asymptomatic infection, is important for many STDs.
Many STDs induce little or no acquired immunity. In the case of HIV, the situation is probably more complex since persistence without symptoms might be lifelong.
The transmission of most STDs is characterized by a high degree of heterogeneity generated by great variability in sexual habits among individuals within a given community.
They concluded their introduction with a sentence that we would like to highlight, although its full meaning will not be understood until the end of this section: "This set of characteristics - virtual absence of a threshold density of hosts for disease-agent persistence, long-lived carriers of infection, absence of lasting immunity, and a great heterogeneity in transmission - give rise to infectious diseases that are well adapted to persist in small low-density aggregations of people".
Clearly, the homogeneous mixing approach was not valid anymore and heterogeneity had to enter into play. However, there was a huge problem: they did not have any data. Up to that point, epidemiologists had focused on the biological factors of diseases, completely ignoring the heterogeneity of human interactions. Yet, the importance that such heterogeneity had in HIV transmission spurred a series of studies that would finally shed some light on the contact patterns of human populations. In particular, during the first years of the HIV epidemic, it was observed that homosexual males accounted for 70-80% of the known cases, and thus most efforts were devoted to studying said community. The earliest studies found that the distribution of the number of sexual partners had a high mean, but also a very large variance.
This observation led them to the formulation of a model that could account for this huge heterogeneity (a variance much larger than the mean). They focused on a closed population of homosexual males and divided it into sub-groups of size \(N_i\), whose members on average had \(i\) new sexual partners per unit time. Under these assumptions, the SIR model reads
\[\begin{equation} \left\{\begin{array}{l} \frac{\text{d}S_i(t)}{\text{d}t} = - i \lambda S_i \\ \frac{\text{d}I_i(t)}{\text{d}t} = i \lambda S_i - \mu I_i \\ \frac{\text{d}R_i(t)}{\text{d}t} = \mu I_i(t) \end{array}\right.\,, \tag{3.54} \end{equation}\]
where the infection probability per partner, \(\lambda\), was given by
\[\begin{equation} \lambda = \beta \frac{\sum_i i I_i}{\sum_i i N_i}\,. \tag{3.55} \end{equation}\]
At first glance it might seem that the model has not changed that much, as it is just the standard SIR model with \(i\) contacts. However, considering that the population is divided into several groups with heterogeneous contact patterns leads to a result that 15 years later would become one of the cornerstones of network science. According to Anderson and May, in this model the early rate of exponential growth, \(\Lambda\), is defined as
\[\begin{equation} \Lambda = \beta \frac{\langle i^2 \rangle}{\langle i \rangle} - \mu\,. \tag{3.56} \end{equation}\]
Hence, the epidemic only grows if
\[\begin{equation} \frac{\beta}{\mu} > \frac{\langle i \rangle}{\langle i^2\rangle}\,, \tag{3.57} \end{equation}\]
which we can arrange to look like equation (3.53),
\[\begin{equation} \frac{\beta}{\mu} > \frac{1}{c'} ~~~\text{ with } c' = \frac{\langle i^2\rangle}{\langle i \rangle}\,. \tag{3.58} \end{equation}\]
Thus, for epidemiological purposes, the effective value of the average number of new partners per unit time is not the mean of the distribution, but rather it is the ratio of the mean square to the mean. In other words, this result reflects the disproportionate role played by individuals in the most active groups, who are both more likely to acquire infection and more likely to spread it [231].
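A quick numerical illustration of why the effective number of partners \(\langle i^2\rangle/\langle i \rangle\) can be far larger than the naive mean \(\langle i \rangle\); the heavy-tailed distribution sampled below is an arbitrary illustrative choice, not the empirical partner distribution:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative heavy-tailed distribution of new partners per unit time
partners = np.floor(rng.pareto(a=1.5, size=100_000)) + 1

mean = partners.mean()
effective = np.mean(partners ** 2) / mean  # ratio <i^2>/<i> of eq. (3.58)
print(f"<i> = {mean:.1f}   <i^2>/<i> = {effective:.1f}")
# The effective value greatly exceeds the naive mean, reflecting the
# disproportionate role of the most active individuals.
```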
In parallel, Boltz, Blancard and Krüger started to develop in 1986 a series of complex computational models that allowed them to introduce many more factors into the dynamics [232]. They argued that the principal weaknesses of the standard epidemiological models when applied to HIV infection were:
They describe, through mass action dynamics, permanent potential contact between all members of the groups involved.
The behavior is uniform over each group. To account for nonuniform behavior, a subdivision into sub-groups has to be done at the price of a higher dimensionality of the system of differential equations.
They cannot directly take into account partners.
Time delays, age dependencies and time dependent rates are not easily incorporated.
They do not represent the true contact structure of the population.
Hence, they proposed to use "random graphs and discrete time stochastic processes to model the epidemic dynamics of sexually transmitted diseases". This is one of the earliest (if not the first) attempts to clearly study disease dynamics on networks.
Their models were highly detailed. For instance, they considered eleven groups of individuals: homosexual males, bisexual males, heterosexual males, heterosexual females, heterosexual females having contact with bisexual males, male intravenous drug users, female intravenous drug users, prostitutes, prostitutes who are also intravenous drug users and hemophiliacs (clearly way beyond the simple model of only homosexuals that was being studied analytically in those days). Even more, they could track the behavior of single individuals. But this level of detail also posed the problem of acquiring a huge amount of data to parameterize the models, data that, admittedly, they did not have at that time. In any case, this represented a huge step forward in the direction of highly detailed computational models, such as the one we presented in section 3.2.3.
Yet, data was about to arrive. Already in 1985 Klovdahl [233], inspired by the networks that sociologists had been studying since the 1930s, analyzed the information provided by a small sample of 40 patients with AIDS and reconstructed their social network. He proposed that a "network approach" could be a key element to understand the nature and spread of the disease. Some years later, in 1994 [234], together with some collaborators, he designed a larger experiment to obtain the social network of a small city in Colorado. Their approach was to initially target individuals with a higher risk of contracting HIV, such as prostitute women and injecting drug users, and trace as many contacts (of any kind) as they could. Their results showed that a lot of people were much closer to HIV infection than expected, implying that after a small change (e.g. a reduction in condom use) the infection could quickly reach individuals who were not directly connected with people infected with HIV. To demonstrate the importance of a network perspective in epidemiology they gave a very simple example: suppose that one individual is infected with HIV and reports sexual relationships with two people and a social relation with another one. Commonly, health professionals would only worry about the first two, disregarding the latter. However, if that third individual happens to be highly central in the network (i.e., having a large degree or betweenness, using the terminology of chapter 2), an eventual infection could lead to an "explosion" of disease. Hence, their claim was that under a network perspective the distance to the disease and the centrality of individuals are as important as the actual infection status.
Clearly the concept of networks was starting to gain momentum in epidemiology, although it was not always clear what was part of the epidemic model and which elements came just from the topology of the network (see [56] and the references therein). A noteworthy exception is the work by Andersson in 1997 [235] in which he studied an epidemic process on a random graph, this time from an analytical point of view, and concluded that the basic reproduction number was
\[\begin{equation} R_0 = p_1 \left(\frac{\langle X^2 \rangle}{\langle X \rangle} -1\right)\,, \tag{3.59} \end{equation}\]
where \(X\) denotes the number of links a certain node has. Thus, there is a clear dependency on the topology of the network, regardless of its specific shape. The similarity with the expression that Anderson and May obtained 10 years before (equation (3.57)) is clear; the only difference is the \(-1\) factor (and the fact that he set \(\mu=1\)). The reasons why these two expressions are so similar, and why this expression contains a \(-1\), will become clear in a moment. But first, we need to go back in time a little bit to have a look at what was happening in the emerging field of cybersecurity.
On November 3, 1983, the first computer virus was conceived as an experiment to be presented at a weekly seminar on computer security. A virus, in this context, is a computer program that can "infect" other programs by modifying them to include a possibly evolved copy of itself. Thus, if the programs communicate with other computers, the viruses can spread through the whole network. During this decade, the internet started to grow, connecting more and more computers through a huge network. Hence, viruses posed a clear threat [236]. Soon after, in 1988, Murray proposed that the propagation of computer viruses could be studied using epidemiology tools [237]. Then, in 1991, Kephart and White tried to apply epidemic models to study the propagation of viruses, but quickly realized that the homogeneous mixing approach was not suited for computers, as they were connected through networks [238]. Hence, they applied the model on a random directed graph and showed that the probability of having an epidemic depended on the connectivity of the network. When connectivity was high, they recovered the classical results of Kermack and McKendrick obtained under the homogeneous mixing approach. Conversely, when connectivity was really low the probability quickly decreased. However, they were unable to show this mathematically and had to rely on simulations (we can hypothesize that if they had been aware of the results found for venereal diseases they might have been able to do it, because their observation is essentially explained by equation (3.53), but it seems highly unlikely that computer scientists were interested in such a specific sub-field back then). Yet, there was something odd. During the 1990s Kephart and collaborators collected virus statistics from a population of several hundred thousand PCs. They observed that a huge number of viruses survived for a really long time, but also that their spreading was quite slow and reached only a small fraction of the population, contrary to the exponential growth predicted by epidemiology (see section 3.2.1) [239]. The only possibility, according to the theory, was that the infection and recovery parameters associated with each virus were really close to the epidemic threshold, yielding low growth but still some persistence. Yet, this regularity seemed highly unlikely, and they advocated for further accounting for the network patterns as a possible reason behind this discrepancy.
With this context, we can now finally understand why disease dynamics has been one of the main areas of research in network science and, at the same time, why networks became so successful in a relatively short period of time. In 1999 Albert et al. measured the topology of the World Wide Web, considering each document as a node and its hyperlinks to other documents as links. Surprisingly, the degree distribution of such a network did not follow the Poisson distribution of random graphs but rather a power law distribution [240]. In section 2.1.2 we defined scale-free networks as those networks whose degree distribution follows a power law. Moreover, we showed that if the exponent of the distribution is in the interval \(\gamma \in (2,3)\), the average of the distribution is finite but the second moment diverges as the system size tends to infinity.
In 2001 Pastor-Satorras and Vespignani studied the behavior of an SIS model on a network, in an attempt to answer the questions posed by Kephart [241]. The SIS model, which we have not discussed yet, is simply the SIR model in which, once an individual is cured, she is sent back to the susceptible compartment instead of moving into the removed one. Hence, the equation describing the process is
\[\begin{equation} \frac{\text{d}I(t)}{\text{d}t} = \frac{\beta}{N} S(t) I(t) - \mu I(t)\,, \tag{3.60} \end{equation}\]
with \(S(t) + I(t) = N\). However, as they wanted to apply it on a network they had to keep track of the state of each node, yielding a system of \(N\) coupled equations,
\[\begin{equation} \frac{\text{d}p_i(t)}{\text{d}t} = \beta [1-p_i(t)] \sum_j a_{ij}p_j(t) - \mu p_i(t)\,, \tag{3.61} \end{equation}\]
where \(p_i(t)\) denotes the probability that node \(i\) is infected at time \(t\). This system does not have a closed form solution and, besides, it depends on the adjacency matrix of the network (the term \(a_{ij}\)). Hence, they followed a mean-field approach and supposed that the behavior of nodes with the same degree \(k\) would be similar. Under this assumption, the system is reduced to
\[\begin{equation} \frac{\text{d}\rho_k(t)}{\text{d}t} = \beta [1-\rho_k(t)] k \Theta - \mu \rho_k(t)\,, \tag{3.62} \end{equation}\]
where \(\rho_k\) is the density of infected individuals with degree \(k\) and \(\Theta\) is the probability that any given link points to an infected node.
The stationary solution (i.e. \(d_t \rho_k(t) = 0\)) yields
\[\begin{equation} \rho_k = \frac{k \beta \Theta}{\mu+k\beta \Theta}\,, \tag{3.63} \end{equation}\]
denoting that the higher the node connectivity, the higher the probability of being infected. Now, in the absence of correlations the probability that a randomly chosen link in the network is attached to a node with \(s\) links is proportional to \(sP(s)\), where \(P(s)\) denotes the probability of having degree \(s\). Hence,
\[\begin{equation} \Theta = \sum_k \frac{k P(k) \rho_k}{\sum_s s P(s)}\,. \tag{3.64} \end{equation}\]
Combining (3.63) and (3.64),
\[\begin{equation} \Theta = \sum_k \frac{k P(k)}{\sum_s s P(s)} \cdot \frac{ k \beta \Theta}{\mu+k\beta \Theta}\,. \tag{3.65} \end{equation}\]
Besides the trivial solution \(\rho_k=0\), \(\rho_k>0\) will be a solution of the system as long as
\[\begin{equation} \frac{\beta}{\mu} \frac{\langle k^2\rangle}{\langle k \rangle} > 1\,, \tag{3.66} \end{equation}\]
which can be identified as the basic reproduction number of the system. Better still, we can put all parameters relating to the disease on one side and all the ones coming from the network on the other, so that
\[\begin{equation} R_0 = \frac{\beta}{\mu} > \frac{\langle k \rangle}{\langle k^2\rangle}\,. \tag{3.67} \end{equation}\]
Hence, the epidemic threshold is not 1 anymore. Instead, it is a function of the connectivity of the network. But remember that the purpose of the model was to study disease propagation on computer networks which, according to Albert et al., were not only scale-free but also had an exponent of \(\gamma = 2.45\) [240]. Thus, in the internet \(\langle k^2 \rangle \rightarrow \infty\), implying that equation (3.67) is actually
\[\begin{equation} R_0 > \frac{\langle k \rangle}{\langle k^2 \rangle} \rightarrow 0\,. \tag{3.68} \end{equation}\]
In other words, the epidemic threshold fades out. Moreover, two months later Liljeros et al. showed that the network of human sexual contacts was also scale free [242].
This result is the answer to all the questions that have arisen throughout this section. First, it explains why so many computer viruses were able to persist without growing exponentially. Indeed, if the epidemic threshold had been 1, they would all have had to be really close to 1. However, as it tends to 0, they can be anywhere between 0 and 1, be able to infect a macroscopic fraction of the population and at the same time do it slowly. It is also worth highlighting that, from a very different approach, Anderson and May had obtained essentially the same result (3.57). The reason is simply that in the mean-field approximation we have neglected the connections and only considered groups of nodes with \(k\) neighbors, which is equivalent to Anderson and May's groups of individuals with \(i\) new sexual partners per unit time. Even more, if we add the fact that the network of sexual contacts is scale free, we finally understand the reason behind the virtual absence of a threshold they were talking about in the 1980s. Furthermore, this result refutes the hypothesis of the core-noncore gonorrhea model. In order to have two distinct groups, the sexual contact network should have a binomial degree distribution, but it does not. Nevertheless, the idea was not completely wrong, as the role of the core is played by the nodes with large degree, the hubs of the network.
Figure 3.9: Epidemic threshold and topology. Total fraction of recovered individuals in equilibrium conditions as a function of \(\beta/\mu\). In the homogeneous mixing approach the epidemic threshold is 1. When the SIR model is implemented on a random network with \(\langle k \rangle = 3\) the epidemic threshold is \(1/3\) (3.71). Conversely, in a SF network with \(\langle k \rangle = 3\) and \(\langle k^2 \rangle = 113\) the epidemic threshold is \(0.03\). Note that the size of the network is \(N=10^4\), with a maximum degree of \(k_\text{max} = \sqrt{N}\) to avoid correlations (see chapter 2). Hence, the threshold does not vanish completely, as it is supposed to be 0 only in the limit \(N\rightarrow \infty\).
Furthermore, note that to obtain expression (3.67) the only property of the network that we have used is that it is uncorrelated. Hence, it can be used to study any network that we desire. In particular, in the case of random graphs the degree distribution is Poisson, implying that \(\langle k^2 \rangle = \langle k \rangle^2 + \langle k \rangle\). Thus, for those networks the epidemic threshold simplifies to
\[\begin{equation} R_0 = \frac{\beta}{\mu} > \frac{1}{\langle k \rangle + 1}\,, \tag{3.69} \end{equation}\]
which is the result obtained by the earliest studies of gonorrhea propagation (3.53) (except for the \(+1\), but its role will be elucidated in a moment). In other words, we can say that the problem in those models is that they implicitly assumed a random contact network when they should have used a scale free network instead. In figure 3.9 we show a comparison of the final size of the epidemic as a function of \(R_0\) between ER networks, SF networks and the homogeneous mixing approach.
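The qualitative difference shown in figure 3.9 can be reproduced directly from degree sequences. A minimal sketch computing the uncorrelated-network SIR threshold (3.71) for a Poisson (ER-like) and a power-law (SF-like) degree sequence; the sequences below are illustrative samples with a structural cutoff, so the numbers will not exactly match the values quoted in the figure caption:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 10_000
k_max = int(np.sqrt(N))  # structural cutoff to avoid degree correlations

# ER-like degree sequence: Poisson with <k> = 3
k_er = rng.poisson(3, size=N)

# SF-like degree sequence: p(k) ~ k^(-2.5) with k in [1, k_max]
k_values = np.arange(1, k_max + 1)
p_k = k_values ** -2.5
k_sf = rng.choice(k_values, size=N, p=p_k / p_k.sum())

def sir_threshold(k):
    """Epidemic threshold of the SIR model on an uncorrelated network, eq. (3.71)."""
    return k.mean() / (np.mean(k ** 2.0) - k.mean())

print(f"ER: <k> = {k_er.mean():.2f}  threshold = {sir_threshold(k_er):.3f}")
print(f"SF: <k> = {k_sf.mean():.2f}  threshold = {sir_threshold(k_sf):.3f}")
```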
Note that most of the ideas had been around for years, but did not get that much attention. We could argue that what really made the difference in the case of the work by Pastor-Satorras and Vespignani was the use of data. Indeed, many theoretical approaches can be proposed, but without experimental data it is not possible to really gauge their importance. Thus, once again, this highlights how incorporating more data into already existing models can make the difference.
To conclude, we should address why the equation obtained by Anderson and May for the SIR model did not have a \(-1\) factor (3.57), while the one from Andersson, also for the SIR model, did (3.59), and the one from Pastor-Satorras and Vespignani for the SIS model did not (3.67). The first observation is that in the SIS model on a network, it is possible that if \(i\) infects \(j\), then \(i\) recovers and \(j\) infects \(i\). This cannot happen in the SIR model, so for a node of degree \(k\) only \(k-1\) links can transmit the disease,
\[\begin{equation} \Theta = \sum_k \frac{(k-1)P(k)\rho_k}{\sum_s s P(s)}\,, \tag{3.70} \end{equation}\]
leading to the threshold [243]
\[\begin{equation} R_0^\text{SIR} > \frac{\langle k \rangle}{\langle k^2\rangle - \langle k \rangle}\,. \tag{3.71} \end{equation}\]
This explains the discrepancy between (3.59) and (3.67). Lastly, note that in the case of Anderson and May they did not consider a fixed structure, such as a network, but rather assumed that at each time step every individual would seek \(i\) new sexual partners. Thus, the correction accounting for where the disease came from does not apply to their model.
There are multiple techniques that can be used to solve the system (3.61) and obtain the value of the epidemic threshold (see [145], [243] for a review). In this section we will describe an approach introduced by Newman in 2002, inspired by percolation [244] and based on the use of generating functions, that is especially suited to analyze directed networks. This is the methodology that we will use in section 3.3.3 to study disease propagation in directed multiplex networks.
Although for the moment we are only interested in undirected networks, we will introduce the methodology considering that the networks might have both directed and undirected links [245], as this will be the approach used in section 3.3.3. Hence, suppose that the probability that a random node of the network has \(j\) incoming links, \(l\) outgoing links and \(m\) undirected links is \(p_{jlm}\). The generating function of the distribution will be
\[\begin{equation} G(x,y,z) = \sum_{j=0}^\infty \sum_{l=0}^\infty \sum_{m=0}^\infty p_{jlm} x^j y^l z^m\,. \tag{3.72} \end{equation}\]
As long as \(p_{jlm}\) is normalized this function has the property \(G(1,1,1) = 1\) and
\[\begin{equation} \langle k_d \rangle = \frac{\text{d}G(1,1,1)}{\text{d}x} \equiv G^{(1,0,0)}(1,1,1)\,, \tag{3.73} \end{equation}\]
where \(\langle k_d \rangle\) is the average number of incoming links in the network. As for any incoming link there has to be also an outgoing link,
\[\begin{equation} \langle k_d \rangle = \frac{\text{d}G(1,1,1)}{\text{d}y} \equiv G^{(0,1,0)}(1,1,1) = G^{(1,0,0)}(1,1,1)\,. \tag{3.74} \end{equation}\]
Lastly, for the undirected links
\[\begin{equation} \langle k_u \rangle = \frac{\text{d}G(1,1,1)}{\text{d}z} \equiv G^{(0,0,1)}(1,1,1)\,. \tag{3.75} \end{equation}\]
A related quantity that will be needed for the derivation is the generating function of the excess degree distribution, which is the degree distribution of nodes reached by following a randomly chosen link, without considering the link along which we arrived. Note that if we choose a node at random its degree is distributed according to \(p_k\); however, if we follow a link, the probability of reaching a node is proportional to its number of links, \(k p_k\). Thus, the generating function obtained by following a directed link is
\[\begin{equation} H_d(x,y,z) = \frac{\sum_{jlm} j\, p_{jlm}\, x^{j-1} y^l z^m}{\sum_{jlm} j\, p_{jlm}} = \frac{1}{\langle k_d \rangle} G^{(1,0,0)}(x,y,z)\,, \tag{3.76} \end{equation}\]
similarly if we follow a directed link in the reverse direction,
\[\begin{equation} H_r(x,y,z) = \frac{\sum_{jlm} l x^{j} y^{l-1} z^m}{\sum_{jkm} l p_{jlm}} = \frac{1}{\langle k_d \rangle} G^{(0,1,0)}(x,y,z)\,, \tag{3.77} \end{equation}\]
and lastly if we follow an undirected link,
\[\begin{equation} H_u(x,y,z) = \frac{\sum_{jlm} m x^{j} y^l z^{m-1}}{\sum_{jkm} m p_{jlm}} = \frac{1}{\langle k_u \rangle} G^{(0,0,1)}(x,y,z)\,. \tag{3.78} \end{equation}\]
The next step is to take into account that the disease will not be transmitted through all the links. Indeed, we define the probability of a link "being infected" in the sense that node \(i\) transmits the disease to \(j\) using that link as \(T\) (regardless of it being directed or undirected). Hence, the probability of a node having exactly \(a\) of the \(j\) links emerging from it infected is given by the binomial distribution \(\binom{j}{a} T^a(1-T)^{j-a}\). Under these assumptions, the generating function is modified so that
\[\begin{equation} \begin{split} G(x,y,z;T) & = \sum_{jlm} p_{jlm} \left[ \sum_{a=0}^j \dbinom{j}{a} (Tx)^a (1-T)^{j-a} \sum_{b=0}^l \dbinom{l}{b} (Ty)^b (1-T)^{l-b}\right. \\ & \left. ~~~~~~~~~~~~~~~\sum_{c=0}^m \dbinom{m}{c} (Tz)^c(1-T)^{m-c}\right] \\ & = \sum_{jlm} p_{jlm} (1-T+Tx)^j (1-T+Ty)^l (1-T+Tz)^m \\ & = G(1+(x-1)T,1+(y-1)T,1+(z-1)T)\,. \end{split} \tag{3.79} \end{equation}\]
Analogously, the generating functions for the distribution of infected links of a node reached by following randomly chosen links are:
\[\begin{equation} \begin{split} H_d(x,y,z;T) & = H_d(1+(x-1)T,1+(y-1)T,1+(z-1)T) \\ H_r(x,y,z;T) & = H_r(1+(x-1)T,1+(y-1)T,1+(z-1)T) \\ H_u(x,y,z;T) & = H_u(1+(x-1)T,1+(y-1)T,1+(z-1)T)\,. \end{split} \tag{3.80} \end{equation}\]
The fundamental quantity that we want to obtain is the number \(s\) of nodes contained in an outbreak that begins at a randomly selected node. Let \(g(w;T)\) be the generating function for the probability that a randomly chosen node belongs to a group of infected nodes of a given size:
\[\begin{equation} g(w;T) = \sum_s P_s(T) w^s\,. \tag{3.81} \end{equation}\]
To solve it, we also need to evaluate the probability that a randomly chosen link leads to a node belonging to a group of infected nodes of given size. The generating function of the distribution reads
\[\begin{equation} h_d(w;T) = \sum_t P_t(T) w^t\,. \tag{3.82} \end{equation}\]
Figure 3.10: Scheme of the generating function approach. Left: The generating function of the excess degree, \(H\), gives the distribution of links (directed and undirected) of a node reached by following a random link. Right: As the infection starts in a node, the generating function of the node's degree, \(G\), has to be used. Hence, \(G(H(x))\) gives the distribution of links in the first layer, \(G(H(H(x)))\) in the second layer, etc.
This expression satisfies a condition of the form
\[\begin{equation} h_d(w;T) = w H_d(1,h_d(w;T),h_u(w;T))\,. \tag{3.83} \end{equation}\]
Similarly, in the case of undirected links,
\[\begin{equation} h_u(w;T) = w H_u(1,h_d(w;T),h_u(w;T))\,. \tag{3.84} \end{equation}\]
With the expressions (3.80) and these two last equations, we have completely defined the distribution of \(t\). It follows (see figure 3.10) that if the disease starts at a randomly chosen node the distribution is
\[\begin{equation} g(w;T) = w G(1,h_d(w;T),h_u(w;T))\,, \tag{3.85} \end{equation}\]
yielding an average size of outbreaks of
\[\begin{equation} \langle s \rangle = \sum_s s P_s(T) = \left.\frac{\text{d}g(w;T)}{\text{d}w} \right|_{w=1}\,. \tag{3.86} \end{equation}\]
Performing the derivatives and setting \(w=1\) we obtain
\[\begin{equation} \begin{split} g' & = 1 + G^{(0,1,0)} h_d' + G^{(0,0,1)} h_u'\\ h_d' & = 1 + H_d^{(0,1,0)} h_d' + H_d^{(0,0,1)} h_u' \\ h_u' & = 1 + H_u^{(0,1,0)} h_d' + H_u^{(0,0,1)} h_u'\,, \end{split} \tag{3.87} \end{equation}\]
where we have dropped the arguments of the functions for readability. Inserting these equations in (3.86) we obtain
\[\begin{equation} \begin{split} \langle s \rangle = 1 & + \frac{G^{(0,1,0)} \left(1-H^{(0,1,0)}_d + H_u^{(0,1,0)}\right)}{\left(1-H_d^{(0,1,0)}\right)\left(1-H_u^{(0,0,1)}\right) - H_u^{(0,1,0)}H_d^{(0,0,1)}} \\ & + \frac{G^{(0,0,1)} \left(1-H^{(0,0,1)}_d + H_u^{(0,0,1)}\right)}{\left(1-H_d^{(0,1,0)}\right)\left(1-H_u^{(0,0,1)}\right) - H_u^{(0,1,0)}H_d^{(0,0,1)}} \,. \end{split} \tag{3.88} \end{equation}\]
Note that this expression diverges if
\[\begin{equation} \left(1-H_d^{(0,1,0)}\right)\left(1-H_u^{(0,0,1)}\right) - H_u^{(0,1,0)}H_d^{(0,0,1)} = 0\,. \tag{3.89} \end{equation}\]
In other words, equation (3.89) sets the condition for the epidemic threshold. The last step is to note that
\[\begin{equation} G^{(1,0,0)} (1,1,1;T) = T G^{(1,0,0)}(1,1,1)\,, \tag{3.90} \end{equation}\]
and similarly for the rest of equations. Hence, equation (3.89) reads
\[\begin{equation} \left(1-T H_d^{(0,1,0)}\right)\left(1-T H_u^{(0,0,1)}\right) - T^2 H_u^{(0,1,0)}H_d^{(0,0,1)} = 0\,, \tag{3.91} \end{equation}\]
where now the arguments of the functions are \((1,1,1)\) instead of \((1,1,1;T)\).
In the particular case of undirected networks this expression further simplifies to
\[\begin{equation} 1 - T H_u^{(0,0,1)} = 0 \Rightarrow T = \frac{1}{H_u^{(0,0,1)}}\,. \tag{3.92} \end{equation}\]
To rewrite this expression in a more familiar format we can calculate the explicit dependency of \(H_u^{(0,0,1)}\) as a function of the network topology:
\[\begin{equation} \begin{split} H_u^{(0,0,1)}(1,1,1) & = \frac{1}{\langle k \rangle}\frac{\text{d}}{\text{d}z} G^{(0,0,1)} (1,1,1) = \frac{1}{\langle k \rangle} \frac{\text{d}^2}{\text{d}z^2} \left.\sum_m p_m z^m \right|_{z=1}\\ & = \frac{1}{\langle k \rangle} \sum_m m(m-1) p_m = \frac{1}{\langle k \rangle} \left(\langle k^2 \rangle - \langle k \rangle \right) \,. \end{split} \tag{3.93} \end{equation}\]
Therefore, the epidemic threshold is
\[\begin{equation} T = \frac{\langle k \rangle}{\langle k^2 \rangle - \langle k \rangle}\,. \tag{3.94} \end{equation}\]
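The chain of derivatives leading to (3.92)-(3.94) can also be checked symbolically. A small sketch, assuming an illustrative Poisson degree distribution for an undirected network, so that \(G\) depends only on \(z\):

```python
import sympy as sp

z, k_avg = sp.symbols("z k_avg", positive=True)

# Generating function of a Poisson degree distribution (illustrative choice)
G = sp.exp(k_avg * (z - 1))

# Excess-degree generating function obtained by following an undirected link, eq. (3.78);
# for this distribution <k_u> = k_avg
H_u = sp.diff(G, z) / k_avg

# H_u^{(0,0,1)}(1) equals the second factorial moment over the mean, eq. (3.93)
H_u_prime = sp.simplify(sp.diff(H_u, z).subs(z, 1))

# Epidemic threshold T = 1 / H_u^{(0,0,1)}, eqs. (3.92) and (3.94)
T_c = sp.simplify(1 / H_u_prime)
print(H_u_prime, T_c)  # prints k_avg and 1/k_avg, consistent with <k^2> - <k> = <k>^2
```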
And the "cosmological principles" were, I fear, dogmas that should not have been proposed.
As we saw in chapter 2, network science constitutes a whole field of research on its own. Therefore, any advance in the understanding of networks in general might also have its applications in the study of disease spreading on networks. In particular, we can investigate the dynamics of diseases on the multilayer networks we introduced in section 2.1.3 [246]. One option can be to have the same network pattern in all layers but different dynamics on each of them, such as modeling the spreading of two interacting diseases in the same population [247] or the interplay between information and disease spreading that we discussed in the introduction [248]. On the other hand, we can have the same dynamics in all layers but diverse interaction patterns in each of them, in a similar fashion as our model of section 3.2.3.
In this work we will focus on the latter, i.e., the same dynamics in the whole system but different networks in the layers. Even more, we will consider that the networks can have directed links, something that is usually disregarded in epidemic models (note that adding direction to links implies that more data is necessary than just knowing that there is a relationship between two agents). Some relevant examples of the importance of directionality in this context are the case of meerkats, in which transmission varies between groomers and groomees [249], and even the transmission of HIV that we have briefly discussed, as male-to-female transmission is 2.3 times greater than female-to-male transmission [117]. Similarly, when addressing the problem of diseases that can be transmitted among different species, it is important to account for the fact that they might be able to spread from one type of host to the other, but not the other way around. For instance, the bubonic plague can be endemic in rodent populations and spread to humans under certain conditions. If it evolves to the pneumonic form, it may then spread from human to human [250]. Analogously, Andes virus spreads within rodent populations, but it can be transmitted to humans and then spread via person-to-person contacts [251].
Recall that in multilayer networks there are two types of links: intralayer (those contained within layers) and interlayer (those connecting nodes set in different layers). Our objective is to understand how the epidemic threshold is influenced by the directionality of both intralayer and interlayer links. In particular, we will consider multiplex networks composed of two layers with either homogeneous or heterogeneous degree distributions in the layers (i.e., ER or SF networks). Besides, we will analyze several combinations of directionality: (i) Directed layer - Undirected interlinks - Directed layer (DUD); (ii) Directed layer - Directed interlinks - Directed layer (DDD); and (iii) Undirected layer - Directed interlinks - Undirected layer (UDU). For the sake of comparison, we will also include the standard scenario, namely, (iv) Undirected layer - Undirected interlinks - Undirected layer (UUU). We will implement a susceptible-infected-susceptible (SIS) model on these networks and study the evolution of the epidemic threshold as a function of the directionality and the coupling strength between layers. In addition, we will derive analytically the epidemic threshold using generating functions (see section 3.3.2) to obtain theoretical insights on the underlying mechanisms driving the dynamics of these systems.
First, we implement stochastic SIS dynamics on the two-layer multiplex networks. Note that as there are two types of links, we can associate a different spreading probability to each of them: the interlayer spreading probability, \(\gamma\), and the intralayer spreading probability, \(\beta\) [252]. Accordingly, a node can transmit the disease with probability \(\beta\) to those susceptible neighbors contained in the same layer and with probability \(\gamma\) to those set in the other layer. As a consequence, the epidemic threshold will depend on both parameters. Thus, henceforth we will define the epidemic threshold as \(\beta_c\) and explore its value as a function of \(\gamma\) (note that previously we defined the epidemic threshold as the ratio \(\beta/\mu\), but in this case we will keep the value of \(\mu\) fixed for simplicity).
In the simulations, all the nodes are initially susceptible. The spreading starts when one node is set to the infectious state. Then, at each time step, each infected node spreads the disease through each of its links with probability \(\beta\) if the link is contained in a layer and with probability \(\gamma\) if the link connects nodes in different layers. Besides, each infected node recovers with probability \(\mu\) at each time step. The simulation runs until a stationary state for the number of infected individuals is reached.
To determine the epidemic threshold we fix the value of \(\gamma\) and run the simulation over multiple values of \(\beta\), repeating the simulation \(10^3\) times for each of those values. The minimum value of \(\beta\) at which, on average, the number of infected individuals in the steady state is greater than one determines the value of the epidemic threshold. This procedure is then repeated for several values of \(\gamma\) to obtain the dependency of \(\beta_c\) on the spreading across layers. Lastly, this dependency is evaluated for \(100\) realizations of each network considered in the study and their \(\beta_c(\gamma)\) curves are averaged.
For the cases in which the interlinks are directed, we need to add another parameter to the model. If all the links were to point in the same direction, the epidemic threshold would be trivially the one of the source layer and thus the multiplex structure would play no role. For this reason, for each directed link connecting layers \(u\) and \(v\) we set the directionality to be \(u\rightarrow v\) with probability \(p\) and \(u\leftarrow v\) with probability \((1-p)\). Consequently, in networks with directed interlinks the epidemic threshold will be given as a function of this probability \(p\).
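A compact sketch of the simulation and threshold-scan procedure just described, for a toy UUU multiplex (two ER layers coupled by undirected interlinks). Sizes, the parameter grid and the number of repetitions are illustrative placeholders, far smaller than those used to produce figure 3.11:

```python
import numpy as np

rng = np.random.default_rng(4)

def er_edges(n, k_avg, offset):
    """Edge list of an undirected ER layer whose global node indices start at offset."""
    p = k_avg / (n - 1)
    return [(offset + i, offset + j)
            for i in range(n) for j in range(i + 1, n) if rng.random() < p]

def sis_multiplex(n, k_avg, beta, gamma, mu, steps=200):
    """Number of infected nodes in the steady state of a two-layer UUU multiplex SIS model."""
    intra = er_edges(n, k_avg, 0) + er_edges(n, k_avg, n)  # intralayer links
    inter = [(i, i + n) for i in range(n)]                 # undirected interlinks
    state = np.zeros(2 * n, dtype=bool)
    state[rng.integers(2 * n)] = True                      # single initial infected node
    for _ in range(steps):
        new_state = state.copy()
        for links, prob in ((intra, beta), (inter, gamma)):
            for i, j in links:                             # spreading along each link
                if state[i] and rng.random() < prob:
                    new_state[j] = True
                if state[j] and rng.random() < prob:
                    new_state[i] = True
        recover = state & (rng.random(2 * n) < mu)         # only already-infected nodes recover
        new_state[recover] = False
        state = new_state
        if not state.any():
            break
    return state.sum()

# Scan beta for a fixed interlayer probability gamma (illustrative grid)
n, k_avg, mu, gamma = 300, 6, 0.1, 0.2
for beta in (0.005, 0.01, 0.02, 0.04):
    runs = [sis_multiplex(n, k_avg, beta, gamma, mu) for _ in range(10)]
    print(f"beta = {beta:.3f}   mean infected in steady state = {np.mean(runs):.1f}")
```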
Figure 3.11: Epidemic threshold in directed multilayer networks according to SIS simulations. Several configurations of networks are considered: A) ER networks with undirected interlinks; B) SF networks with undirected interlinks; C) ER networks with directed interlinks; D) SF networks with directed interlinks. In all cases \(\mu=0.1\), the number of nodes is \(N=2\cdot 10^4\), and for each directionality configuration there are two sets of networks with different average degree, as shown in the legend. In the networks with directed interlinks \(p=0.5\).
The results, figure 3.11, signal that the consequences of changing the directionality of some links are completely different for SF and ER networks. In particular, in figure 3.11A, we can see that for networks with \(\langle k \rangle = 6\) the epidemic threshold is very similar in both the UUU and DUD configurations. This effect is again seen for denser networks, \(\langle k \rangle = 12\), implying that it is the directionality of the interlinks, and not that of the links contained within layers, that is the main driver of the epidemic in these networks. On the other hand, in figure 3.11B we can see that this behavior is not replicated in SF networks. Certainly, there is a large difference between the curves of the UUU and DUD configurations, implying that the directionality of intralinks is much more important in this type of network. A similar pattern is observed in figures 3.11C and 3.11D, in which the interlinks are directed. Moreover, in all the cases considered the epidemic threshold is always lower for those configurations with undirected links within the layers, compared to those in which those links are directed, given the same interlink directionality.
To get further insights into the mechanisms driving this behavior we proceed to compute analytically the epidemic threshold. We introduced the generating function of the network, equation (3.72), saying that it accounts for the probability of having \(j\) incoming links, \(l\) outgoing links and \(m\) undirected links. However, in this case the directionality of links within the layers is always the same (we do not mix directed and undirected links). Hence, we can use \(j\) as an indicator for directed links when we have directed intralinks, or we can regard it as the number of undirected links otherwise. This frees \(m\) to be used for the interlinks. In other words, the generating function will now be \(G(x,z)\) if the network has the shape UXU and \(G(x,y,z)\) if it is DXD, with \(z\) representing the links connecting different layers.
Figure 3.12: Scheme of the generating function on multilayer networks. Recursive relation of generating functions for the size distribution of outbreaks by following a link in layer 1, \(h_1\), from 1 to 2, \(h_{12}\), in layer 2, \(h_2\), and from 2 to 1, \(h_{21}\).
Analogously, the definition of the generating function for the excess distribution (3.76) does not change. The first difference is encountered when we want to obtain the probability of a link being infected. In the previous case, we set said probability equal to \(T\) in all links, but now we have \(\beta\) for links within layers and \(\gamma\) for links across layers. Thus, we keep \(T\) as the probability of a link within layers being infected, and denote the probability of the other set of links being infected as \(T_{uv}\). With these definitions equation (3.79) now reads
\[\begin{equation} G(x,y,z;T,T_{uv}) = G(1+(x-1)T,1+(y-1)T,1+(z-1)T_{uv})\,. \tag{3.95} \end{equation}\]
Next, we introduced the generating function used to calculate the probability that a randomly chosen link belongs to the group of infected nodes. We distinguished \(h_d\) and \(h_u\) if the links were directed or undirected respectively. In this case, as the directionality is the same, what we need to define is \(h_1\) if the link is in layer \(1\), \(h_2\) if it is in layer 2, \(h_{12}\) if it is a link going from layer 1 to layer 2 and \(h_{21}\) if it is going from layer 2 to layer 1. The recursive relations in this case (see figure 3.12) read
\[\begin{equation} \begin{split} h_1(w;T,T_{uv}) & = wH_1(1,h_1(w;T,T_{uv}),h_{12}(w;T,T_{uv});T,T_{uv}) \\ h_2(w;T,T_{uv}) & = wH_2(1,h_2(w;T,T_{uv}),h_{21}(w;T,T_{uv});T,T_{uv}) \\ h_{12}(w;T,T_{uv}) & = wH_{12}(1,h_2(w;T,T_{uv}),h_{21}(w;T,T_{uv});T,T_{uv}) \\ h_{21}(w;T,T_{uv}) & = wH_{21}(1,h_1(w;T,T_{uv}),h_{12}(w;T,T_{uv});T,T_{uv}) \,. \end{split} \tag{3.96} \end{equation}\]
Then, the generating function for the distribution of the size of an outbreak starting in a randomly chosen node in layer 1 is
\[\begin{equation} g(w;T,T_{uv}) = wG(1,h_1(w;T,T_{uv}),h_{12}(w;T,T_{uv});T,T_{uv})\,. \tag{3.97} \end{equation}\]
This leads to the expression for the average size of an outbreak:
\[\begin{equation} \begin{split} \langle s \rangle & = \sum_s s P_s(T) = \left.\frac{\text{d}g(w;T,T_{uv})}{\text{d}w}\right|_{w=1} \\ & = 1 + G^{(0,1,0)}h_1' + G^{(0,0,1)}h_{12}'\,. \end{split} \tag{3.98} \end{equation}\]
As in the previous case, this expression diverges when the denominator obtained after solving for the derivatives \(h_1'\) and \(h_{12}'\) is equal to 0. Hence, after some algebra, the condition that establishes the epidemic threshold reads
\[\begin{equation} \begin{split} 0 = & \left[\left(1-H_1^{(0,1,0)}\right) \left(1-H_{12}^{(0,0,1)}H_{21}^{(0,0,1)}\right)-H_1^{(0,0,1)}H_{12}^{(0,0,1)}H_{21}^{(0,1,0)}\right] \\ & \cdot \left[\left(1-H_2^{(0,1,0)}\right)\left(1-H_{12}^{(0,0,1)}H_{21}^{(0,0,1)}\right) - H_2^{(0,0,1)}H_{21}^{(0,0,1)}H_{12}^{(0,1,0)}\right] \\ & - H_1^{(0,0,1)}H_2^{(0,0,1)} H_{12}^{(0,1,0)} H_{21}^{(0,1,0)}\,. \end{split} \tag{3.99} \end{equation}\]
Note that this expression works for all the configurations we are considering in this work, given that we choose the proper values of \(H_x\). For instance, for the DUD configuration we have
\[\begin{equation} \begin{split} & H_1^{(0,1,0)} = H_2^{(0,1,0)} = T\langle k \rangle \\ & H_1^{(0,0,1)} = H_2^{(0,0,1)} = T \langle k \rangle \\ & H_{12}^{(0,1,0)} = H_{21}^{(0,1,0)} = T_{uv} \\ & H_{12}^{(0,0,1)} = H_{21}^{(0,0,1)} = T_{uv}\, \end{split} \tag{3.100} \end{equation}\]
yielding
\[\begin{equation} \tag{ER-DUD} T_c = \frac{1-T_{uv}}{\langle k \rangle}\,. \end{equation}\]
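As a sanity check, the step from the general condition (3.99) to this result can be reproduced symbolically. The sketch below (my own illustration, using sympy) substitutes the values of (3.100) into (3.99) and solves for \(T\); the smaller of the two roots is the ER-DUD threshold quoted above.

```python
import sympy as sp

T, Tuv, k = sp.symbols('T T_uv k')

# Values of eq. (3.100) for the DUD configuration
H1_010 = H2_010 = T * k
H1_001 = H2_001 = T * k
H12_010 = H21_010 = Tuv
H12_001 = H21_001 = Tuv

# General threshold condition, eq. (3.99)
condition = (
    ((1 - H1_010) * (1 - H12_001 * H21_001) - H1_001 * H12_001 * H21_010)
    * ((1 - H2_010) * (1 - H12_001 * H21_001) - H2_001 * H21_001 * H12_010)
    - H1_001 * H2_001 * H12_010 * H21_010
)

roots = sp.solve(sp.Eq(condition, 0), T)
print([sp.simplify(r) for r in roots])  # (1 - T_uv)/k and (1 + T_uv)/k; the smaller is T_c
```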
Similarly, we can obtain the epidemic threshold for the rest of the configurations:
\[\begin{equation} \tag{ER-UUU} T_c = \frac{1-T_{uv}}{\langle k \rangle + 1 - T_{uv}}\,, \end{equation}\]
\[\begin{equation} \tag{ER-DDD} T_c = \frac{2}{\langle k \rangle ( 2 + m + \sqrt{m(m+8)})}\,, \end{equation}\]
where \(m = p(1-p) T_{uv}^2\),
\[\begin{equation} \tag{ER-UDU} T_c = \frac{2(1+\langle k \rangle ) + m' - \sqrt{m'(4+8\langle k \rangle +m')}}{2((1+\langle k \rangle)^2 -m' \langle k \rangle)}\,, \end{equation}\]
with \(m'=\langle k \rangle p (1-p) T_{uv}^2\). These results were simplified thanks to the property \(\langle k^2\rangle = \langle k \rangle^2 + \langle k \rangle\) of Poisson distributions. For the case of SF networks, on the other hand, we cannot apply this simplification and thus some expressions will depend on both moments of the distribution:
\[\begin{equation} \tag{DUD-SF} T_c = \frac{1-T_{uv}}{\langle k \rangle}\,, \end{equation}\]
\[\begin{equation} \tag{UUU-SF} T_c = \frac{\langle k \rangle (1-T_{uv})}{\langle k^2\rangle (1-T_{uv}) + \langle k \rangle^2 T_{uv}}\,, \end{equation}\]
\[\begin{equation} \tag{DDD-SF} T_c = \frac{2}{\langle k \rangle (2+m+\sqrt{m(m+8)})}\,, \end{equation}\]
where \(m = p(1-p) T_{uv}^2\), and lastly
\[\begin{equation} \tag{UDU-SF} T_c = \frac{2\langle k^2\rangle \langle k \rangle + \langle k \rangle^2\left(\langle k \rangle m - \sqrt{m(4\langle k^2\rangle + \langle k \rangle^2(4+m))}\right)}{2(\langle k ^2\rangle^2 - \langle k \rangle^4m)}\,. \end{equation}\]
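For completeness, these closed-form expressions are straightforward to evaluate numerically. The sketch below (my own illustration, not the code used to produce figure 3.13) computes the ER thresholds as a function of \(T_{uv}\) for \(\langle k\rangle = 6\) and \(p = 0.5\); the SF expressions can be evaluated the same way once \(\langle k\rangle\) and \(\langle k^2\rangle\) are specified.

```python
import numpy as np

def tc_er_dud(k, Tuv):
    return (1 - Tuv) / k

def tc_er_uuu(k, Tuv):
    return (1 - Tuv) / (k + 1 - Tuv)

def tc_er_ddd(k, Tuv, p=0.5):
    m = p * (1 - p) * Tuv**2
    return 2 / (k * (2 + m + np.sqrt(m * (m + 8))))

def tc_er_udu(k, Tuv, p=0.5):
    mp = k * p * (1 - p) * Tuv**2
    return (2 * (1 + k) + mp - np.sqrt(mp * (4 + 8 * k + mp))) / (2 * ((1 + k)**2 - mp * k))

k = 6.0
for Tuv in (0.1, 0.5, 0.9):
    print(f"T_uv = {Tuv}:",
          tc_er_dud(k, Tuv), tc_er_uuu(k, Tuv), tc_er_ddd(k, Tuv), tc_er_udu(k, Tuv))
```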
These expressions closely match the results obtained in the simulations, figure 3.13. Again, we can observe that the value of the epidemic threshold of the DUD configuration in SF networks tends to the value of the UUU configuration for large values of the spreading probability across layers, mimicking the behavior of ER networks. Hence, in general, we can conclude that the directionality (or lack thereof) of the interlinks is the main driver of the epidemic spreading process. The exception is the limit of small spreading from layer to layer, since in this scenario the directionality of the intralinks makes SF networks much more resilient. Altogether, the conclusion is that directionality reduces the impact of disease spreading in multilayer systems.
Figure 3.13: Epidemic threshold in directed multilayer networks, simulations vs. theory. A) Comparison between the results of the stochastic simulations (points) and the theoretical predictions (lines) for the ER set of networks. B) As A) but for SF networks.
It is worth pointing out that these results are not only relevant for the situations we have described in this chapter so far. One particularly interesting and open challenge is to quantify the effects that the interplay between different social networks could have on spreading dynamics. The theoretical framework developed here is particularly suited to study this and similar challenges related to the spreading of information in social networks. On the one hand, social links are not always reciprocal [253], especially in online systems in which a user is not necessarily followed by her followings. On the other hand, disease-like models have been widely used to study information dissemination [246], [254]. For this reason, we have analyzed the dependence of the epidemic threshold on the inter-spreading rate in a real social network composed of two layers, figure 3.14A. The first layer of the multilayer system is made up of the directed set of interactions of a subset of users of the now defunct FriendFeed platform, whereas the second layer is defined by the directed set of interactions of those same users in Twitter. Even though this multiplex network corresponds to a DUD configuration, we have also explored the other configurations that we have studied. Note that, in contrast to the synthetic networks used so far, in this network the layers have different average degree. In particular, the FriendFeed layer has 4,768 nodes and 29,501 directed links, resulting in an average out-degree of 6.19, while the Twitter layer is composed of 4,786 nodes and 40,168 directed links, with an average out-degree of 8.42. Nevertheless, their degree distributions are both heavy tailed, resembling the power-law structure of SF networks, although the maximum degree in the FriendFeed network is much larger than in the Twitter network [255].
Figure 3.14: Epidemic threshold in a multilayer social system. Epidemic threshold obtained from simulations in a multiplex network composed by users of two different social platforms: FriendFeed and Twitter. The original network, panel A, has directed intralinks and undirected interlinks, corresponding to a DUD configuration. Nevertheless, to explore the effects of directionality in a real scenario, the four discussed configurations are considered in panel B. For those configurations with directed interlinks we set \(p=0.5\).
The results, figure 3.14B, confirm the findings obtained for synthetic networks. In particular, configurations with some directionality are always more resilient against the spreading. Consequently, information travels much more easily in undirected systems than in directed ones. This is particularly worrisome given that, even though Twitter can be modeled as a directed network, social networks such as Facebook and WhatsApp should be modeled using undirected configurations and, recently, these two platforms were identified as being among the main sources of misinformation spreading [256].
In summary, we have seen the importance that networks have in shaping disease dynamics. Hence, as more data becomes available, our network models should be improved in order to better account for the real mechanisms behind such dynamics. To this end, in this section we have developed a framework that allows studying disease-like processes in multilayer networks with, possibly, directed links. This represents an important step towards the characterization of diffusion processes in interdependent systems. Our results show that directionality has a positive impact on the system's resistance to disease propagation. Furthermore, we have seen that the way in which interconnected social networks are coupled can determine their ability to spread information. Hence, the insights obtained in this work can be applied to a plethora of systems and show that more emphasis should be placed on studying the role of interlinks and directionality in diffusion processes.
The problems that we have studied in this chapter have shown us that, when we no longer treat humans as particles, the dynamics of an epidemic can change dramatically. Note that the mass action approximation was just a handy tool to overcome either the scarcity of data in the past or the analytical intractability of some formulations. However, nowadays we have both enough data and computational power to introduce many more details into the dynamics.
It is now time to return to where we left off at the end of chapter 2. In section 2.6.3 we introduced a mathematical framework that allowed us to create networks in which both the degree distribution and the age contact patterns of the population could be taken into account. With all the information we have gathered about disease dynamics, we can finally analyze the implications of this choice.
In the following, we will consider four different scenarios depending on the data that one may have at her disposal:
Homogeneous mixing with \(\langle C \rangle\): suppose that the only data available is the average number of contacts an individual may have. In this case, we would be in the same situation as in the studies of gonorrhea that we presented in section 3.3. According to equation (3.53), the epidemic threshold in this situation is
\[\begin{equation} \frac{\beta}{\mu} = \frac{1}{\langle C \rangle}\,. \end{equation}\]
Homogeneous mixing with age contact patterns: if data about the mixing patterns of the population is available, we can improve the model by creating multiple homogeneous mixing groups, one for each age bracket, as we discussed in section 3.1.1. Note that this formulation is similar to the one that Anderson and May introduced for studying interacting groups with different activity patterns, equation (3.54). Hence, the epidemic threshold, according to equation (3.57), should be
\[\begin{equation} \frac{\beta}{\mu} = \frac{\langle C \rangle}{\langle C^2 \rangle}\,. \end{equation}\]
Network information: if we have only information about the network structure, the epidemic threshold is given by (3.71),
\[\begin{equation} \frac{\beta}{\mu} = \frac{\langle k \rangle}{\langle k^2\rangle - \langle k \rangle}\,. \end{equation}\]
Network and age information: if we are able to obtain information about both the network structure and the age mixing patterns of the population, we can build the network of interactions using the techniques introduced in section 2.6.3.
Figure 3.15: Phase diagram for different amounts of data. We compare the total number of recovered individuals as a function of the ratio \(\beta/\mu\) for four models, each one determined by the amount of data used. In the homogeneous scenario the only information is \(\langle C\rangle = 6.18\) which yields an epidemic threshold of approximately \(0.16\). Next, we extract the information about the age mixing patterns of Belgium in 2005 from the POLYMOD study [131] and weight the matrix so that the average number of contacts is also \(6.18\). This, in turn, produces \(\langle C^2\rangle = 40.37\), yielding an epidemic threshold slightly over \(0.15\). To model the network structure we have assumed that the degree distribution follows a power law with \(\langle k \rangle = 6.18\) and \(\langle k^2\rangle = 102.27\), resulting in a threshold of \(0.06\). Lastly, we have combined this network distribution with the data from Belgium to build the age contact network.
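As a quick check (my own back-of-the-envelope computation), the three closed-form thresholds quoted in the caption follow directly from the moments given there:

```python
C1, C2 = 6.18, 40.37     # <C> and <C^2> from the weighted Belgian mixing matrix
k1, k2 = 6.18, 102.27    # <k> and <k^2> of the assumed power-law degree distribution

print(1 / C1)            # homogeneous mixing:                ~0.162
print(C1 / C2)           # homogeneous mixing with age data:  ~0.153
print(k1 / (k2 - k1))    # network information, eq. (3.71):   ~0.064
```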
To properly compare the four situations, we set the same average number of contacts in all of them, which we denote by \(\langle C\rangle\) when there is no network information and by \(\langle k \rangle\) when there is. Next, we perform numerical simulations of these scenarios, using the SIR model introduced so far (with the appropriate modifications depending on the amount of data available), to compare the evolution of the epidemic size as a function of the ratio \(\beta/\mu\), figure 3.15.
We can clearly see the effect that the heterogeneity of the network introduces into the system. For the homogeneous scenario and the homogeneous scenario with information about the mixing patterns, the epidemic threshold is almost the same. However, when we introduce a network with a power-law degree distribution, the heterogeneity in the contacts is much larger, yielding a smaller epidemic threshold. Thus, it seems that from this point of view it is more important to be able to collect data about the contact structure of the population than about the age mixing patterns.
Figure 3.16: Comparison of attack rate per age group. In A we compare the number of recovered individuals in each age bracket in the model with age and network structure against the model with information about the network structure only, so that positive values imply that the attack rate was larger in the model with more information, and vice versa. In B the comparison is done between the model with all the information and the homogeneous model with age mixing patterns. Note that, to compare similar situations, a value of \(\beta=0.21\) has been chosen, as it is the value at which the three dynamics intersect in figure 3.15.
On the other hand, note that there are many diseases which can affect an individual differently depending on her age. This effect was particularly important, for instance, in the 2009 H1N1 pandemic [257]. Furthermore, age is one of the factors used to classify people into risk groups, which in turn are the main targets of vaccination campaigns. Hence, even if, according to figure 3.15, knowledge of the age structure does not seem too important, once we look closer into the dynamics of the system the situation changes.
To illustrate this, in figure 3.16 we compare the attack rate of each group in the different models. In particular, in panel A we show the relative change in the number of recovered individuals per age group between the model with all the information and the model that only considers the network structure. As we can see, the complete model has higher attack rates among teenagers, while having much lower attack rates among the elderly. Thus, even if the total attack rate in both situations is the same, if the decision to administer a vaccine were based only on the network structure we would be making a big mistake.
Conversely, adding the network information to the age mixing matrices has a smaller effect, indicating that from this point of view it is more important to collect data about the age mixing patterns of the population than about the network structure. Note that the network structure in these models is not extracted from data. Hence, the differences that we can see in panel B do not have any specific interpretation other than that the power-law distribution we chose is responsible for them.
To summarize, it is clear that the more data we have, the better, as long as we use it properly and fully understand it. Yet, if for some reason we need to choose which data we need to collect, it is important to know the final application, as it is not straightforward to say which information is more valuable. In particular, we have seen that the network structure has dramatic effects on both the epidemic threshold and the incidence of the epidemic. But, on the other hand, having information about the mixing patterns of the population might be more valuable in some situations, such as determining risk groups for vaccination.
A noteworthy exception is the work by Daniel Bernoulli in 1766, although he was simply far ahead of his time. Furthermore, some authors claim that the credit for creating the first modern disease transmission model should go to Pyotr Dimitrievich Enko. Unfortunately, even though he published his work as early as 1889, he wrote it in Russian and thus it went largely unnoticed by the majority of the scientific community until it was translated in the middle of the 20th century (see [140], [141] for a nice introduction to the early days of mathematical epidemiology).↩︎
A metapopulation is a set of populations that are spatially separated but can exchange individuals. Within each subpopulation any plausible disease dynamics can be implemented, although usually the homogeneous mixing approach is used [172].↩︎
In the original paper, at a certain point the authors expand a quantity using Taylor's theorem and introduce the notation \(R_n = \int_0^\infty a^n p(a) m(a) \text{d}a\). Then, they determine the ratio between the total births in two successive generations to be \(\int_0^\infty p(a) m(a) \text{d}a\), which is equal to \(R_0\) according to the previous definition. Thus, the subscript \(0\) historically would represent the \(0\)-th moment of the distribution. In modern literature, however, the subscript is interpreted as referring to the "very beginning" of the epidemic [140]. Note also that \(R_0\) is usually pronounced R naught in Britain and R zero in the U.S. [187].↩︎
Although the notation \(R_0\) is well established, there is no agreement on what to call this quantity. Exchanging the word reproduction for reproductive is common in the literature, as is using rate or ratio instead of number. Although the differences are minimal, note that rate is clearly wrong, as it is not a rate but a (dimensionless) ratio [191].↩︎
Even though we have not yet discussed how to introduce networks into epidemic modeling, note that a homogeneous population is equivalent to a complete network in which every node is connected to every other node. Hence, these overlapping structures can also be regarded as a multiplex network in which each node can be in more than one layer.↩︎
Actually, the shape of the distribution was the same as the ones we saw in figure 2.10 when we studied age contact patterns. The high heterogeneity of human interactions is clearly not restricted to sexual activity.↩︎
Despite the great leap forward that his proposal represented, one could argue that he might have been a bit naive when he concluded the paper saying: "All this having been said, I am sanguine. God is still in his heaven. The environment is generally benign. The community is resilient. Most individuals are acceptably polite, orderly and well behaved. On the list of vulnerabilities in our complex society, this one is distinguished primarily by its novelty. Unlike some of the more intractable ones, this one will yield to good will. In the face of genuine evil intent, I prefer it to plastic explosives in power plants."↩︎ | CommonCrawl |
How vaccines change the spread of Covid
by Markus Deserno | Aug 10, 2021 | Science Vignettes
The arrival of several highly effective vaccines has dramatically changed the landscape of the Covid-19 pandemic. Remarkably, though, a considerable fraction of people are hesitant to get immunized—for reasons that would be fitting material for a different blog post, but which right now I will sidestep. What interests me here, instead, is how this hesitancy affects the ongoing pandemic—especially in the light of common statements we hear very often in this context. For instance, Dr. Rochelle Walensky, director of the CDC, has warned that Covid-19 is becoming a "pandemic of the unvaccinated." Conversely, Emily Miller, who briefly served as the FDA's assistant commissioner for media affairs, claimed that "[one's personal] decision not to get vaccinated does not affect anyone else's health." I will try to explain why I consider the first statement misleading and the second one flat-out wrong.
But I don't just want to argue by waving my hands. Being a physicist, I'm itching to do a little bit of mathematical modeling, since this helps me to understand the world better. However, I want to preface everything that follows by emphasizing that I am not an epidemiologist. You have every right to be skeptical of what I am about to write and instead consult a real epidemiologist. That being said, the type of modeling I plan to roll out is actually pretty standard. It's also very simple—which makes it easy to understand what's actually being modeled, but which also means that quite a lot of finer points are being passed over in the first round. This often makes people nervous ("How can you ignore all that???"), but physicists like playing around with simple models. The justification for this is that if one understands the simple cases (and their limitations!) reasonably well, one has a good basis for thinking about what all the real world complications actually do, and when they matter. In the same spirit, it's apt to remember the great aphorism generally attributed to British statistician George Box: "All models are wrong, but some are useful."
My proposed model is a minor extension of the good old SIR model—to which you can get an excellent introduction in the YouTube video of Oxford mathematician Tom Crawford or in this somewhat more technical article on Wikipedia. In the name "SIR" the letters "S", "I", and "R" stand for individuals Susceptible for catching a disease, those Infected with it, and those that have Recovered (and are then assumed to no longer being able to catch it). The basic idea is that susceptible people have some probability of catching the disease when they encounter infected people, and that infected people recover after some typical time. That's basically it. For this scenario, one can write a set of three differential equations that describe the time evolution of these three groups of people, and they look like this:
$$\begin{align}
\dot{S} &= -b\frac{S I}{N} \\
\dot{I} &= +b\frac{S I}{N} \,-\, kI \\
\dot{R} &= +kI \ .
\end{align}$$
Here, the dot over the letters means "rate of change," the parameter "$b$" is the rate at which susceptible people catch the disease from infected ones, and the parameter "$k$" is the rate with which infected people recover. The variable "$N$" describes the total number of people, and one can easily show, by adding all equations, that $N=S+I+R$ is constant (because $\dot{N}=\dot{S}+\dot{I}+\dot{R}=0$). The ratio $R_0=b/k$ plays a very important role in the spread of the pandemic: if it is bigger than 1, then in an initially healthy population the disease will spread (initially) exponentially, until so many people have been infected, and others have recovered, that the virus cannot find new "victims," and then the epidemic peters out. Observe that this model entirely avoids thinking about where people live (dense city? rural countryside?), how they actually meet one another (the "social networks of interactions"), the potentially outsized importance of rare events ("superspreaders"), and many other complications. All of this is ignored—for better or worse.
I will now modify this model in a very simple way by subdividing all three populations into two classes: vaccinated and unvaccinated ones, which I will label with a subscript "${\rm v}$" and "${\rm u}$", respectively. The important point is that the rates of catching the disease from someone who is sick—the $b$ values—depend on whether that person is vaccinated, since in that case they will catch the disease less readily. To keep things simple, I will assume that it doesn't matter whether you catch it from a sick vaccinated or a sick unvaccinated individual: once someone is sick, I take them to be equally contagious. (I'm being quite uncharitable to the vaccine here!) Also, I will assume that the recovery time is the same with or without vaccine (again, I'm being very pessimistic).
With these assumptions, an extended set of equations now looks as follows:
$$\begin{align}
\dot{S}_{\rm v} &= -b_{\rm v}\frac{S_{\rm v}}{N}(I_{\rm v}+I_{\rm u}) \\
\dot{S}_{\rm u} &= -b_{\rm u}\frac{S_{\rm u}}{N}(I_{\rm v}+I_{\rm u}) \\
\dot{I}_{\rm v} &= +b_{\rm v}\frac{S_{\rm v}}{N}(I_{\rm v}+I_{\rm u}) \,-\, k I_{\rm v} \\
\dot{I}_{\rm u} &= +b_{\rm u}\frac{S_{\rm u}}{N}(I_{\rm v}+I_{\rm u}) \,-\, k I_{\rm u} \\
\dot{R}_{\rm v} &= +k I_{\rm v} \\
\dot{R}_{\rm u} &= +k I_{\rm u} \ .
\end{align}$$
Observe that it is the sum of vaccinated and unvaccinated infected individuals, $I_{\rm v}+I_{\rm u}$, which turns susceptible people into infected ones (albeit at different rates). This implies that unvaccinated infected individuals affect vaccinated susceptible ones (and vice versa, but that's typically less frequent). For this reason the pandemic isn't just happening among the unvaccinated. It spreads between the two classes. (Physicists would say they are "coupled".) More specifically, we will see that the majority of vaccinated people who get sick do so because of unvaccinated sick individuals.
In order to make quantitative predictions, we now need to put in some numbers. I will consider the case of the United States, where $N\approx 330$ million people, of which a fraction $f$ is fully vaccinated. Currently, that fraction is pretty close to 50%. I will also assume that the recovery rate is $k=0.07$, which leads to $1/k\approx 14$, meaning, that it takes about 2 weeks to get over Covid. Evidently, there's a distribution, and there's the very significant complication of "long Covid", which I ignore (and which does not affect the dynamics of the epidemic). The more difficult thing to estimate is $b$, since this amounts to saying how contagious the disease is. I will use $b_{\rm u}=0.15$, which corresponds to $R_0=b_{\rm u}/k\approx 2.1$ for the unvaccinated. This is close to what we thought to be the infectivity of the original Covid-19 strain, but the Delta variant is believed to be about twice as infective. On the other hand, this simple model doesn't account for masks, for quarantining, and for the fact that (most) sick people will likely try to minimize exposure to others for much of the time they are likely infectious. I'm accounting for all of this by assuming a smaller $R_0$ value, but this is a bit ad hoc. My expectation is that I'm likely underestimating Delta's ability to spread.
What about the getting-sick-rate for vaccinated people? Taking an average vaccine efficacy of 90%, I will assume that the rate $b_{\rm v}$ is ten times smaller than that for the unvaccinated, or $b_{\rm v}=0.015$. Notice that the corresponding $R_0$ value is then about 0.21, well below 1, showing that if only everyone was vaccinated, the pandemic would quickly wane.
I finally need some numbers for sick people. Taking the current daily case rates, it seems reasonable that at present (i.e., August 10, 2021) about 1 million people are infected, of which maybe 90% are unvaccinated. This means I will assume that right now, "at $t=0$", we have $I_{\rm v}(0)=100,000$ and $I_{\rm u}(0)=900,000$ (these details actually do not matter a lot for our conclusions). This gives us all we need to predict how the pandemic evolves—at least within the model we just wrote down. All we need to do is (numerically) solve the coupled differential equations.
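For readers who want to play along at home, here is a minimal sketch of how these equations can be integrated numerically (Python with scipy; the function and variable names are my own, and the parameter values are the ones quoted above).

```python
import numpy as np
from scipy.integrate import solve_ivp

N  = 330e6          # total US population
f  = 0.5            # vaccination rate
k  = 0.07           # recovery rate (1/k ~ 14 days)
bu = 0.15           # infection rate for the unvaccinated
bv = 0.015          # infection rate for the vaccinated (90% efficacy)

def rhs(t, y):
    Sv, Su, Iv, Iu, Rv, Ru = y
    pressure = (Iv + Iu) / N
    return [-bv * Sv * pressure,
            -bu * Su * pressure,
            +bv * Sv * pressure - k * Iv,
            +bu * Su * pressure - k * Iu,
            +k * Iv,
            +k * Iu]

Iv0, Iu0 = 1e5, 9e5                        # infected today, t = 0
y0 = [f * N - Iv0, (1 - f) * N - Iu0,      # susceptibles; prior immunity ignored here
      Iv0, Iu0, 0.0, 0.0]

t = np.linspace(0, 1000, 2001)             # days
sol = solve_ivp(rhs, (t[0], t[-1]), y0, t_eval=t, rtol=1e-8)

Sv, Su, Iv, Iu, Rv, Ru = sol.y
print("peak simultaneous unvaccinated infections:", Iu.max())
print("cumulative cases (unvaccinated, vaccinated):", Ru[-1], Rv[-1])
```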
Let us start with the situation as it is right now. We have a vaccination rate of 50%, and let's assume it stays there. (My hope is to convince people to get vaccinated, so I want to first show what happens when no one changes their mind; I'll then discuss what happens at higher overall vaccination rates.) How will cases evolve? The following graph plots the number of infected people as a function of time. The solid curve represents the unvaccinated people, the dashed curve the vaccinated ones:
Figure 1: Infected people as a function of time, assuming a vaccination rate of $f=50\%$.
At a casual glance, this doesn't look too bad—the cases don't really grow exponentially, they will ultimately become only about three times as high as they are now, and they will then drop down again. The vaccinated case numbers also grow, by a bit more than a factor of 3.
But notice that the wave, which builds into winter and then wanes in the next Spring, evolves really slowly. It takes almost a year until case numbers have come to the same values we have now! We'll have at least a year more of the pandemic with us, and about 2 years until it's mostly over! I don't think any of us can currently stomach this outlook. Observe also that the very existence of a (small) peak in the vaccinated population is a sure sign that they are being made sick by the unvaccinated ones. (Indeed, the unvaccinated peak precedes the vaccinated one by about 2 weeks: the former seems to "drive" the latter.) This is understandable, because the $R_0$ value among the vaccinated is much smaller than 1, and if that were what drives the disease spread among the vaccinated, then their case numbers should quickly decline, not actually grow.
Besides current cases, we also want to know what happens to the total case numbers. How many unvaccinated people will ultimately get sick? And how many vaccinated ones? Here's the answer in this second graph:
Figure 2: Total (=cumulative) case numbers, assuming a vaccination rate of $f=50\%$.
We see that ultimately about 55 million unvaccinated people are predicted to catch the virus, plus an additional 6.5 million vaccinated ones. Those are very sizable numbers! So far, throughout the entire pandemic, about 36 million cases were counted in the US. We may hope it doesn't get this bad, but it seems unlikely that political leaders are willing to enact lockdowns and strict mask mandates again, which might rob us of emergency measures to combat this wave. Moreover, recall that I have been rather conservative with the $R_0$ value of the virus. This is a real threat.
The number of cases among vaccinated people looks maybe unexpectedly large. We started with 10 times fewer sick vaccinated people, and it mostly stayed at this ratio. Why did that proportion not drop? Does the vaccine not work? Well, it does—10% was, after all, its "failure rate." But more importantly, the key is that the tremendous growth of infected unvaccinated people puts the vaccinated ones at risk as well. Remember that it's the sum $I_{\rm v}+I_{\rm u}$ which drives overall new infections, and since among those two terms $I_{\rm u}$ is clearly the bigger one, we conclude that the great majority of vaccinated people who get sick do so because they get infected by unvaccinated people. In that sense, it is clearly not just a personal choice whether someone gets vaccinated or not. Letting the virus burn so uninhibitedly through the population also puts those at risk who took the precautions. Less so, but still.
In fact, we could look at a (very hypothetical) scenario where the two populations truly "decouple": vaccinated people only encounter other vaccinated people, unvaccinated people only encounter other unvaccinated ones. This turns the epidemic truly into two "non-interacting" classes of people. With the values for $b$, $k$, etc. we used so far, we would then find that the number of vaccinated infected people very rapidly declines. By the time they reach their maximum of $I_{\rm v}\approx 340,000$ in the actual "mixed" epidemic, they have dropped to 5 (!) in the decoupled one (and we'd have 50 times fewer cases in total). Conversely, the unvaccinated reach a staggering maximum of 30 million simultaneous infections after just 2 months, and ultimately a total of 140 million cases—almost the entire unvaccinated population. Clearly, the presence of vaccinated people massively protects the unvaccinated already now.
Let us now dream a bit! What would instead happen if we had higher vaccination rates—something entirely within our grasp, since the vaccines are abundantly available. Consider therefore a vaccination rate of 70%. In this case, the pandemic plays out very differently. Here's a plot showing simultaneously the total infections (red) and the accumulated number of cases (blue). Again, solid curves are the unvaccinated people, while dashed curves are the vaccinated ones:
Figure 3: Number of infected people (red) and total case count (blue) for a vaccination rate of $f=70\%$. The solid curves are the unvaccinated individuals, the dashed curves are the vaccinated ones.
Three drastic changes are noticeable: First, the number of infected people at any given time goes down right from the start. No new wave! Second, the total number of cases is drastically cut down: instead of getting about 55 million unvaccinated new cases, we only get about 3.7 million—almost 15 times less! And third, the pandemic seems to be over significantly faster: after less than a year the infections have dropped down to very small numbers. We can put the pandemic behind us much earlier! In fact, if we dream even more optimistically, about a vaccination rate of 85%, then things look as follows:
Notice the even shorter time scale (I'm using a horizontal axis that is 5 times shorter!), and the further reduction in overall cases, even though the latter is not as dramatic as the step from 50% to 70%.
Another thing we should talk about is the number of fatal cases. People who recover from the disease no longer infect others. But that would of course also happen if they don't recover but if they instead die. Our model is quite agnostic about this, and some people indeed translate the "R" in the SIR model into the more general term "removed." For the pandemic modeling it doesn't matter what happens with these people, but of course friends and family care a lot. To estimate the casualties, we must know how many cases will be fatal. Last year's data in both the US and the UK indicated that, as a simple rule of thumb, about 1 in 60 people who received a positive Covid test died from the disease (about 3 weeks later). Hence, dividing the cases by 60 gives the expected fatalities—very roughly. (How Covid hits different age groups differently clearly is something worth incorporating in a refined calculation.) Even though this 1-in-60 rule was observed for the original strain of the virus (for a population that was unvaccinated), I will in the following assume that it remains true for the Delta variant and the currently unvaccinated people. (This variant is often described as more deadly, but we have also figured out better ways for how to treat Covid-19, and average patients are now younger.) However, for vaccinated people we need a further correction: the protections from the vaccine against severe Covid and death are substantially better than the efficacy of 90%—maybe 99%. I will therefore assume that the fatality ratio gets reduced by an additional factor of 10. With this in mind, we can then project the expected number of deaths from the number of recovered people, both in the vaccinated and the unvaccinated group, even resolved by nationwide vaccination rate. Here are these (very tentative!) numbers—displayed in different plots between the unvaccinated and vaccinated people, because they are so different:
Figure 5: Expected total deaths among the unvaccinated, as a function of vaccination rate $f$
Figure 6: Expected total deaths among the vaccinated, as a function of vaccination rate $f$
Observe the difference in the vertical scales! The bars for the vaccinated are approximately a factor of 80 smaller! Currently, at a vaccination rate of $f=50\%$, we're heading for about 900,000 more deaths among the unvaccinated! This is more than the number of people who so far have died from Covid-19 in the US, which is about 600,000. This sounds huge and suspicious. But recall also that I have made actually quite gentle assumptions about the virus. If we let a virus as deadly and contagious as the Delta variant of SARS-CoV-2 burn through an unvaccinated population, we will see many deaths.
But notice that most of these deaths are entirely preventable! Pushing the vaccination rate higher than 50% will have immediate and quite massive positive effects. At 70% the deaths among the unvaccinated have dropped almost 15-fold, to about 62,000. And while this is still a sad number, it is not nearly as terrifying as 900,000. After all, we saved about 800,000 lives! It really pays to push vaccinations up, and the biggest effect will be seen in the avoidance of deaths among the unvaccinated. But notice that deaths also decay in the vaccinated group, even though this group becomes bigger as we increase $f$ and hence contains more potential victims of the disease. The reason is again absolutely clear: the overwhelming number of infections (and hence deaths) in the vaccinated group is caused by unvaccinated sick individuals, and cutting their number drastically down also makes the vaccinated people safer.
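As a rough check on the scale of these numbers, here is the back-of-the-envelope arithmetic implied by the rules of thumb above (my own illustration, using the cumulative case numbers projected earlier for $f=50\%$):

```python
cases_u = 55e6      # projected cumulative unvaccinated cases
cases_v = 6.5e6     # projected cumulative vaccinated cases

deaths_u = cases_u / 60          # ~1 in 60 positive cases fatal
deaths_v = cases_v / (60 * 10)   # extra ~10-fold protection against death

print(round(deaths_u), round(deaths_v))   # roughly 917,000 and 11,000
```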
Let us also briefly consider how much longer we have to live with this pandemic—at least according to our very simple model. As a measure for "pandemic end" we can for instance (somewhat arbitrarily) say that the pandemic is "over" when the total number of infections has dropped to 0.01% of the total population in the US. The precise answer will depend a bit on what we choose here, but not actually all that much, since towards the rear end of the pandemic cases will decay exponentially, and then it doesn't much matter when exactly we declare the number is small enough.
Using this convention—here is our model's prediction how much longer the pandemic will be with us, as a function again of vaccination rate (taken actually all the way from $f=0\%$ to $f=100\%$):
Figure 7: Estimated time until the pandemic has basically petered out (meaning $I_{\rm u}+I_{\rm v}<N/10,000$), as a function of vaccination rate $f$.
There's a lot to unpack here. At the current vaccination rate of 50%, the pandemic will be with us for about 2 more years. But increasing vaccinations helps a lot. We could be done with this by Christmas if we all pull together and get the jab! Furthermore, notice that this graph has a maximum at around $f=55\%$. Why? The answer is that at very low vaccination rates we don't really contain the pandemic, but we can stretch it out in time and thereby lower both current and cumulative case numbers. That's what's come to be known as "flattening the curve." But beyond a critical vaccination rate we don't just flatten the curve. We crush the pandemic, because the growth rate is no longer positive. The $R_0$ value averaged over the entire population drops below 1, and then we're actually reversing growth. This is visible both in the number of cases, but also in the expected duration of the pandemic. It is therefore infuriating how close we seem to be right now to crushing it, but vaccination numbers are stalling just as we're getting there.
As a last question, let us investigate what happens when we diligently ramp up vaccinations, but then an even nastier variant comes along that reduces how well the vaccine works. Right now, I assumed that $b_{\rm v}=b_{\rm u}\times(1-\phi)$, where $\phi$ is the efficacy of the vaccine. Our chosen value $\phi=90\%$ then implies an infection rate among the vaccinated that is 10 times lower than among the unvaccinated. But when the efficacy drops, the infection rate among the vaccinated increases. What would this do to the pandemic?
Let us consider a situation where we have actually managed to reach an $f=70\%$ vaccination rate, but then the vaccine's efficacy drops. The following bar chart shows what fraction of the vaccinated (blue) and unvaccinated (red) population would ultimately catch the disease, as a function of efficacy $\phi$.
Figure 8: Fraction of vaccinated (blue) and unvaccinated (red) people that would ultimately catch the virus if the vaccine efficacy has the value $\phi$, assuming an $f=70\%$ vaccination rate.
We see that, unsurprisingly, a lower efficacy leads to a larger fraction of sick people—among both the vaccinated and the unvaccinated. The unvaccinated are of course still more likely to catch the virus, but what's surprising is that the effect on them seems to be even more dramatic. Observe that there seems to exist a "critical" efficacy, $\phi_{\rm c}$, somewhere around 75%, below which there is a more pronounced uptick in cases. What sets this value? The critical point where the pandemic changes between growing and shrinking is where $\dot{I}_{\rm v}(0)+\dot{I}_{\rm u}(0)=0$. This is equivalent to the condition that the population-averaged $R_0$ value becomes 1:
$$\begin{equation}\frac{b_{\rm u}(1-\phi)f + b_{\rm u}(1-f)}{k} = 1 \ , \end{equation}$$
which can be solved for the critical efficacy. Remembering that $R_0=b_{\rm u}/k$, we get
$$\begin{equation}\phi_{\rm c} = \frac{1}{f}\Big(1-\frac{1}{R_0}\Big) \ . \end{equation}$$
For our numbers this gives $\phi_{\rm c}\approx 76\%$, quite close to the point below which the red and blue bars pick up more rapidly.
These considerations show that at a higher overall vaccination rate the vaccine's efficacy does not need to be so high in order to keep a large outbreak at bay. But there are limits! Even at 100% vaccination rate $\phi_{\rm c}=1-1/R_0$, which in our case is about 53%. This shows why we are so lucky that the Covid-19 vaccines are so good. The efficacy of the typical flu vaccines many of us get each year is lamentably low: between 40% and 60%. At that rate, we could not hold Covid-19 at bay! (That is because Covid-19 is more infectious than the typical seasonal flu, which has an $R_0$ value around 1.3. This much lower value gives a critical efficacy $\phi_{\rm c}\approx 23\%$, so that a 50% effective vaccine is actually sufficient.)
Conversely, if the vaccination rate drops below the (same!) critical value $1-1/R_0$, then $\phi_{\rm c}$ reaches 1, and we are always in the regime where an epidemic outbreak cannot be avoided, even with perfect vaccines. Notice that this is exactly where we seem to be right now! Even with a 90% effective vaccine, we cannot prevent a wave if only half the population took it—just as we have found above! This is what all the discussions about vaccine-induced herd immunity are about, and why it is so important to push the vaccination rate up. At 70%, as we have just calculated, $\phi_{\rm c}\approx 76\%$, and since the actual efficacy of the vaccine is better than that (as best as we know), we can avoid (sadly: could have avoided) a wave. Recall, however, that this threshold is dependent on the $R_0$ value of the virus—for which I made assumptions on which reasonable people may well disagree. In any case, more infectious variants will push the threshold up and will make it increasingly hard to fight Covid-19. Notice also that this threshold is not an all-or-nothing transition but a "rounded crossover," which is one of many reasons why it is so hard to give a clear answer to the question of where exactly herd immunity lies.
(Incidentally: this reasoning also explains the maximum in the pandemic-duration plot from above (Figure 7): at $R_0=0.15/0.07\approx 2.14$ and $\phi=90\%$, the critical vaccination rate above which we begin to crush the pandemic is $f_{\rm c}=(1-1/R_0)/\phi\approx 59\%$, which is very close to the observed peak.)
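These threshold relations are easy to evaluate directly; here is a short sketch (my own illustration, using the parameter values assumed in this post):

```python
R0  = 0.15 / 0.07   # ~2.14, reproduction number among the unvaccinated
phi = 0.90          # assumed vaccine efficacy against infection

def critical_efficacy(f, R0):
    """Minimum efficacy that keeps the population-averaged R0 below 1."""
    return (1.0 / f) * (1.0 - 1.0 / R0)

def critical_vaccination_rate(phi, R0):
    """Minimum vaccination rate needed for a vaccine of efficacy phi."""
    return (1.0 - 1.0 / R0) / phi

print(critical_efficacy(0.70, R0))         # ~0.76
print(critical_efficacy(1.00, R0))         # ~0.53
print(critical_vaccination_rate(phi, R0))  # ~0.59
```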
This is probably more than enough technical stuff for a single blog post. Let me leave you with two more general thoughts.
First, I want to emphasize the obvious fact that not all unvaccinated people choose to stay unvaccinated. Kids under the age of 12 cannot yet get the vaccine! Since in the US this group is about 50 million people strong, or about 15% of the total population, we cannot get to a vaccination rate higher than 85% without, say, an emergency use authorization for younger children. I suspect this will likely come at some point in the (hopefully not too distant) future, but even before then, there is a substantial gap between the current 50% and the already achievable 85%. As I have tried to argue with this very simple model, there is a huge difference in cases, deaths, and the overall length of the pandemic between our current state and what is possible if we all just work together. It would be great if more of us would understand this, if we could all see ourselves as being part of a big connected community in which our actions define our joint future.
Second, nothing I said should distract from the fact that the pandemic is only truly over when it is over worldwide. New variants can always upend everything—and they will if we just keep letting the virus burn. My point was to show how much better we can do here, and how much vaccine hesitancy, often defended in the name of personal freedom and liberty, will contribute to massive needless suffering.
I hope that the illustrations I have tried to offer with the modified SIR model, even though simplified and idealized, help to get this key point across, which I believe to be quite robust against many of the complications one could undoubtedly add.
Markus Deserno is a professor in the Department of Physics at Carnegie Mellon University. His field of study is theoretical and computational biophysics, with a focus on lipid membranes.
The costs of disability in Australia: a hybrid panel-data examination
Binh Vu, Rasheda Khanam, Maisha Rahman & Son Nghiem
Health Economics Review volume 10, Article number: 6 (2020)
Over four million people in Australia have some form of disability, of whom 2.1 million are of working age. This paper estimates the costs of disability in Australia using the standard-of-living approach. This approach defines the cost of disability as additional income required for people with a disability to achieve a similar living standard to those without a disability. We analyse data from the Household, Income and Labour Dynamics in Australia (HILDA) Survey using a hybrid panel data model. To the best of our knowledge, this is the first study to examine the costs of disability in Australia using a high quality, large, nationally-representative longitudinal data set.
This study estimates the costs of disability in Australia by using the Standard of Living (SoL) and a dynamic model approach. It examines the dynamics of disability and income by using lagged disability and income status. The study also controls for unobserved individual heterogeneity and endogeneity of income. The longitudinal specification in this study allows us to separate short- and long-run costs of disability using a hybrid panel data regression approach.
Our results show that people with a disability need to increase their adult-equivalent disposable income by 50% (in the short-run) to achieve the same standard of living as those without a disability. This figure varies considerably according to the severity of the disability, ranging from 19% for people without work-related limitations to 102% for people with severe limitations. Further, the average cost of disability in the long-run is higher and it is 63% of the adult-equivalent disposable income.
Firstly, our results show that with the same level of income, the living standard is lower in households with people with a disability compared to households without members with a disability. This indicates a strong relationship between poverty and disability. However, current poverty measures do not take into account disability, therefore, they fail to consider substantial differences in poverty rates between people with and without a disability. Secondly, the estimated costs reflected in this study do consider foregone income due to disability. Therefore, policymakers should seriously consider adopting disability-adjusted poverty and inequality measurements. Thirdly, increasing the income (e.g. through government payments) or providing subsidised services for people with a disability may increase their financial satisfaction, leading to an improved living standard. The results of this study can serve as a baseline for the evaluation of the National Disability Insurance Scheme (NDIS).
About four million people in Australia have some form of disability, of whom 2.1 million are of working age [1]. There is a consensus that people with a disability need additional income to achieve a similar living standard to those without a disability. An Australian study by [2] showed that the costs of disability in households with at least one family member with a disability was 37% of the disposable income (i.e. people with a disability need to increase disposable income by 37% to have the same living standard as those without a disability). However, the cost estimated by [2] is based on an outdated data set: the 1998-1999 Household Expenditure Survey. Further, [2]'s cross-sectional study was unable to control for potential confounders. In addition, since 2007, Australia's current health landscape has changed significantly and the nation is currently undergoing a major reform in disability care with the introduction of the National Disability Insurance Scheme (NDIS). The NDIS aims to support all Australians below 65 years of age with a permanent and significant disability to achieve greater independence, community involvement, employment and improved wellbeing [3]. The scheme was piloted in several trial sites around Australia from July 2013 and was rolled out gradually to the rest of Australia from July 2016. It will be in full operation in 2020. Thus, there is an urgent need to estimate the costs of disability in Australia using a contemporary data set. We fill this gap in knowledge by estimating the costs of disability in Australia using recent longitudinal data (2001-2016) from the Household, Income and Labour Dynamics in Australia (HILDA) Survey.
We contributed to the existing literature in the following ways. Firstly, we provided up-to-date estimates of the costs of disability in Australia using a large, nationally-representative longitudinal data set. Secondly, use of longitudinal data allowed us to control for confounders such as previous disability and income status which affect current disability and thus current living standard and disability costs. The added benefit of using previous disability and income status was the opportunity to examine long-run costs of disability, which was not possible in cross-sectional studies (e.g.,[4]; [2]). Thus we were able to distinguish between short-run (contemporaneous) and long-run (lagged) costs of disability. Finally, we were able to control for individual-specific unobserved effects by using a hybrid panel regression model, which mitigated the bias caused by unobserved effects. Results from a base model showed that people with a disability need to increase adult-equivalent disposable income by 50% to achieve a similar living standard as those without a disability. However, the cost varied with the level of functional limitations caused by the disability, ranging from 19% for those with no work-related limitations to 102% for those with severe limitations. Note that the cost of disability in this study was estimated implicitly rather than explicitly, similar to most cost-of-illness studies. Thus, indirect costs such as loss of productivity due to disability were also implicitly included in the additional income required to make the standard of living of people with a disability similar to that of those without a disability.
There are several approaches to measure the costs of disability, and each approach has its own advantages and disadvantages [5, 6]. One approach uses the receipt of a disability payment as a proxy for the costs of disability. An implicit assumption in this approach is that disability payments perfectly represent disability costs, which can be questionable as there may be other hidden costs which cannot be represented through receipts. A second approach is based on expert opinions on the costs of disability. The main difficulty with this approach is that disability is a complex concept and cost estimates from experts or people with a disability may vary considerably. Revealed preference is the third approach to estimate the costs of disability. This involves estimating the consumption pattern of people with a disability and matching that of individuals without a disability. However, this approach is based on the assumption that both groups were given alternatives to make their consumption decisions. This assumption may not hold in practice as people with a disability often face comparatively fewer choices. The final approach is referred to as the "standard of living (SoL)" approach. This consists of indirectly estimating the disability costs as the amount of additional income needed to make the living standard of people with a disability similar to that of people without a disability. We use the SoL approach because of its relevance to the available data and its increasing popularity in the literature (for a recent review, see [7]).
We focus on reviewing the most relevant studies using the SoL approach to estimate the costs of disability within developed countries. A study by [4] was one of the earliest studies deploying this approach to estimate disability costs in the United Kingdom using the 1996/1997 Family Resources Survey (FRS) and the British Household Panel Survey (BHPS). They found that the extra costs of disability varied considerably with the data sets, choice of living standard indicators, and household structure. For example, the cost of disability for households that had members with a disability was 14% of mean income when analysing FRS data with a dummy variable capturing whether the household had "any savings" used as a proxy for standard of living. The estimate increased to 50% when the BHPS data were analysed, and a categorical variable on the self-reported "financial situation" of the household was selected to represent the living standard. Morciano et al. [6] updated these analyses to the 2007/2008 wave of the FRS data and took into account the latent nature of disability and living standard. They focused on estimating the costs of disability among people over the pension age (65 for men and 60 for women) in households with a single person or a couple. They used a series of variables to construct the indicators for living standard (e.g. ability to repair or keep the home in decent conditions, affordability of holidays, hobbies and leisure activities) and different indicators of disability (e.g. difficulties with mobility, communication and memory). Their results showed that disability costs predicted by linear, log-linear and log-quadratic models were 55%, 65% and 62% of net weekly income, respectively.
Cullinan et al. [8] examined both the short-run and long-run economic costs of disability using the Irish survey data of the 1995–2001 period. They found that for people with a severe level of disability, the short-run costs (30% of weekly income) were higher than the long-run costs (23.6%). However, for those with a lower level of disability the short-run costs (17.5%) were lower than the long-run costs (20.3%). Both the short- and long-run disability costs became statistically insignificant when controlling for unobservable characteristics. Anton et al. [9] compared the cost of disability between 31 EU countries using the SOL approach, with living standard indicators being "subjective well-being" and "asset ownership". They found strong positive correlations between disability costs and GDP per capita. Their estimated disability costs ranged from 17% to 99% for subjective wellbeing and from 16% to 155% for asset ownership.
In Australia, quantitative research on disability costs is limited. The only available Australian study [2] found that households with at least one family member with a disability need an increase in disposable income of 37% to achieve a level of living standard similar to that of families living without a disability. The author also found that the income gap increased with the level of disability, reaching 40% to 49% of income for those with a severe restriction. However, the cross-sectional nature of the data set used in the study did not allow for examining transient effects or controlling for unobserved individual-specific characteristics.
In summary, although numerous studies have investigated the cost of disability in developed countries, no previous study has used nationally representative longitudinal data to estimate the costs of disability in Australia. In addition, the only available Australian study [2] is now outdated. Therefore, the current study will add significantly to the existing literature. Further, a more precise and detailed estimate of the costs of disability in Australia in the present period will provide a critical baseline for accurate future evaluations of the National Disability Insurance Scheme (NDIS), which will be fully implemented in 2020.
The costs of disability in this study are calculated using the SoL approach within a dynamic model, similar to [8]. The SoL approach estimates the additional income required by people with a disability to achieve a living standard similar to that of people without a disability. All other factors remaining constant, people with a disability have a lower living standard at the same level of income, or require a higher income to maintain the same living standard, than those without a disability. This is because physical and mental disabilities often reduce productive capability, resulting in a poorer ability to work to gain income or a narrower range of potential occupations. In addition, having a disability incurs costs associated with medication, functional adaptation and health care. As illustrated in Fig. 1, for any given income Y, the living standard of people with a disability (point C) is lower than that of people without a disability (point A). To maintain the same standard of living (S∗) as people with no disability, an additional amount of income (i.e. the "compensating income variation" – CIV) is needed to shift the position of people with a disability from point C to point B.
The relationship between living standard, income and disability
Empirically, the cost of disability under the standard of living approach is specified as
$$ S_{it}=\beta_{0}+\beta_{1}Y_{it}+\beta_{2}D_{it}+\gamma X_{it}+(\alpha_{i}+\epsilon_{it}) $$
where Sit represents the standard of living of individual i at time period t; Y is the logarithm of inflation-adjusted disposable income per adult-equivalent; D is the disability status, X is a vector of individual, household and neighbourhood characteristics; and the composite error term consists of individual-specific unobserved characteristics (αi) and random noise (εit).
The additional amount of income (i.e., the "compensating income variation" – CIV) needed to make the living standard of people with a disability (S(CIV+Y,D=1)) equal to that of people without a disability (S(Y,D=0)) can be obtained by substituting the respective values of income and disability status into Eq. (1):
$$ (CIV+Y)\times\beta_{1}+\beta_{2}=Y\times\beta_{1} $$
and thus, the percentage income gap due to disability is \(\frac {-\beta _{2}}{\beta _{1}}\).
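To make this ratio concrete, here is a brief worked illustration of our own, using for concreteness the within-parameter values reported later in the Results (β1 ≈ 0.46 for log income and β2 ≈ −0.09 for a disability with no work-related limitation):

$$ \frac{-\beta_{2}}{\beta_{1}}=\frac{0.09}{0.46}\approx 0.19, $$

that is, an income roughly 19% higher would be needed to offset the living-standard effect of a disability with no work-related limitation, which matches the contemporaneous cost reported in Table 3.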
Due to the presence of individual-specific unobserved characteristics αi, the composite error term may be correlated with other observable covariates. Thus, applying standard regression to Eq. (1) may produce biased estimates. A random-effect estimator assumes that individual-specific unobserved characteristics (αi) follow a normal distribution with zero mean and non-zero variance, and, critically, are uncorrelated with observable covariates. Alternatively, a fixed-effect estimator eliminates the time-invariant unobserved individual-specific characteristics αi by subtracting individual means from the outcome and covariates, as follows:
$$ {}S_{it}-\bar{S_{i}}=\beta_{w}([X_{it}-\bar{X_{i}}]+[Y_{it}-\bar{Y_{i}}]+[D_{it}-\bar{D_{i}}])+(\epsilon_{it}-\bar{\epsilon_{i}}) $$
where βw gives the within (or fixed) effects of the covariates on the outcome variable (i.e. how within-individual variations in covariates affect within-individual changes in the outcome). However, this approach cannot yield estimates of the effects of observable time-invariant characteristics such as gender and ethnicity. For categorical outcomes, a fixed-effect estimator will also eliminate all observations from individuals who report the same standard of living over time.
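As a minimal sketch of the within transformation behind Eq. (3), the snippet below demeans a toy panel by individual and fits OLS on the deviations, which recovers the fixed-effect (within) point estimates. This is not the authors' code; the toy data and column names are assumptions made only for illustration.

```python
import pandas as pd
import statsmodels.api as sm

# Toy long-format panel: one row per individual-year (names are assumptions).
df = pd.DataFrame({
    "id":           [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "satisfaction": [6, 7, 6, 5, 5, 6, 8, 7, 8],
    "log_income":   [10.8, 10.9, 11.0, 10.2, 10.3, 10.4, 11.1, 11.0, 11.2],
    "disabled":     [0, 0, 1, 1, 1, 0, 0, 0, 0],
})

cols = ["satisfaction", "log_income", "disabled"]
# Within transformation: deviations from each individual's own means.
within = df[cols] - df.groupby("id")[cols].transform("mean")

# OLS on the demeaned data gives the within (fixed-effect) point estimates beta_w;
# reported standard errors would still need a degrees-of-freedom correction.
fe_fit = sm.OLS(within["satisfaction"], within[["log_income", "disabled"]]).fit()
print(fe_fit.params)
```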
From Eq. (3), Sit can be expressed as
$$ \begin{aligned} S_{it}&=\bar{S_{i}}+\beta_{w}([X_{it}-\bar{X_{i}}]+[Y_{it}-\bar{Y_{i}}]+[D_{it}-\bar{D_{i}}])+(\epsilon_{it}-\bar{\epsilon_{i}})\\ &=\bar{\beta}(\bar{X_{i}}+\bar{Y_{i}}+\bar{D_{i}})+\bar{\alpha_{i}}+\bar{\epsilon_{i}}+\beta_{w}([X_{it}-\bar{X_{i}}]+[Y_{it}-\bar{Y_{i}}]+[D_{it}-\bar{D_{i}}])+(\epsilon_{it}-\bar{\epsilon_{i}})\\ &=\beta_{w}([X_{it}-\bar{X_{i}}]+[Y_{it}-\bar{Y_{i}}]+[D_{it}-\bar{D_{i}}])+\bar{\beta}(\bar{X_{i}}+\bar{Y_{i}}+\bar{D_{i}})+\bar{\alpha_{i}}+\epsilon_{it} \end{aligned} $$
Mundlak [10] proposed a correlated random-effect estimator where the time-invariant individual unobserved characteristics (αi) are allowed to be correlated with the time-average of potentially endogenous observable covariates:
$$ \alpha_{i}=\gamma(\bar{X_{i}}+\bar{Y_{i}}+\bar{D_{i}})+\varepsilon_{i} $$
where εi is random noise. Since the individual unobserved effects are time-invariant (\(\alpha _{i}=\bar {\alpha _{i}}\)), the value of αi in Eq. 5 can be used to replace \(\bar {\alpha _{i}}\) in Eq. 4, which yields the hybrid estimator proposed by [11]:
$$ {}{\begin{aligned}S_{it}=\beta_{w}([X_{it}-\bar{X_{i}}]+[Y_{it}-\bar{Y_{i}}]+[D_{it}-\bar{D_{i}}])+\beta_{b}(\bar{X_{i}}+\bar{Y_{i}}+\bar{D_{i}})+\epsilon_{it} \end{aligned}} $$
where βb, which combines \(\bar {\beta }\) from Eq. 4 and γ from Eq. 5, represents the between effects. Equation 6 can be estimated using an ordered logit random-effects estimator, with the standard of living indicator being an ordinal ranking of financial satisfaction. This specification allows a Hausman-like test to be conducted by using a Wald test for the equality of the within- and between-effects parameters (βw=βb). The test can also be performed with robust standard error estimators and does not depend on the positive definiteness of covariance matrices [12]. The static specification in Eq. 6, however, does not reflect the fact that income and disability in the previous period can affect the standard of living in the current period. Thus, we also include the lagged values of disability status and income to specify this dynamic relationship. The advantage of this specification is that we are able to separate the contemporaneous disability costs (calculated using current-period parameters) from the long-run costs (estimated using lagged parameters). Note that the outcome of interest in this study is the cost of disability, i.e. the ratio \(\frac {-\beta _{2}}{\beta _{1}}\) in Eq. 1. Thus, the term dynamic in this study refers to the inter-temporal relationship between disability status and income, rather than the inclusion of a lagged dependent variable in the model as in a traditional dynamic specification.
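The sketch below illustrates, on simulated data, how the hybrid (within-between) specification of Eq. 6 can be laid out: each time-varying covariate is split into its individual mean (between component) and its deviation from that mean (within component). The paper estimates a random-effects ordered logit (cf. the xthybrid command discussed in [12]); here a linear mixed model from statsmodels is used only to keep the illustration short, and all variable names and simulated values are assumptions rather than the study's data or code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate a small panel with individual effects correlated with income.
rng = np.random.default_rng(0)
n_id, n_t = 200, 5
ids = np.repeat(np.arange(n_id), n_t)
alpha = rng.normal(0, 1, n_id)[ids]                      # individual-specific effects
log_income = rng.normal(10.5, 0.5, n_id * n_t) + 0.3 * alpha
disabled = rng.binomial(1, 0.25, n_id * n_t)
satisfaction = 2 + 0.5 * log_income - 0.4 * disabled + alpha + rng.normal(0, 1, n_id * n_t)
df = pd.DataFrame({"id": ids, "satisfaction": satisfaction,
                   "log_income": log_income, "disabled": disabled})

# Split each covariate into between (individual mean) and within (deviation) parts.
for v in ["log_income", "disabled"]:
    df[v + "_mean"] = df.groupby("id")[v].transform("mean")
    df[v + "_dev"] = df[v] - df[v + "_mean"]

hybrid = smf.mixedlm("satisfaction ~ log_income_dev + disabled_dev + "
                     "log_income_mean + disabled_mean",
                     data=df, groups=df["id"]).fit()
print(hybrid.summary())

# Implied cost of disability (Eq. 1), using the within coefficients:
print(-hybrid.params["disabled_dev"] / hybrid.params["log_income_dev"])
```

A Hausman-like check then compares the within and between coefficients of each covariate; their equality would support a standard random-effects specification.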
Data source and variable selection
The data used in this study come from the first 16 waves of the Household, Income and Labour Dynamics in Australia (HILDA) Survey, a nationally representative longitudinal study of Australian households. The annual survey began in 2001 and collected a wide range of information on relationships, child care, employment, income, health and wellbeing from all household members aged 15 years and older [13]. The HILDA Survey applied a multi-stage sampling approach to select the sample. In the first stage, 488 Census Collection Districts, each consisting of about 200-250 households, were selected by State and metropolitan status. In the second stage, 22-23 dwellings were selected from each Census Collection District. Finally, up to three households were selected from each dwelling. The main method of data collection was face-to-face interviews, but a small proportion of telephone interviews was also conducted for members who moved to locations outside the areas covered by interviewers. The survey attained a reasonably high response rate of 66% at the household level and 61% at the individual level [14]. The HILDA Survey followed participating households over time and extended the sample with new participants: children of participating households who turned 15 years of age; people who began sharing a residence with participating households; and people who married or had children with participating household members. The wave-on-wave retention rates in the HILDA Survey were remarkably high, at around 95%.
Variable selection
A wide range of variables has been used to measure the standard of living in the literature, including subjective wellbeing [15] and self-reported financial situation [6]. We chose financial satisfaction, rather than overall subjective wellbeing, as the indicator of living standard. Financial satisfaction is a more appropriate indicator because, if income is sufficient to support the additional needs of a person living with a disability, then their standard of living will be similar to that of a person living without a disability. Overall subjective wellbeing, by contrast, depends on factors such as the psychosocial status of an individual and is thus an unreliable indicator of living standard. Furthermore, additional income may not be able to restore the overall subjective wellbeing of people with a disability, but it could help them to achieve a level of financial satisfaction similar to that of those without a disability. In addition, financial satisfaction is a practical indicator because results based on it can easily be translated into measurable policy targets, such as the optimal amount of financial support needed for people with a disability. We therefore selected financial satisfaction, measured on a scale from 0 for "totally dissatisfied" to 10 for "totally satisfied", as the proxy for SoL. As a sensitivity test, we also approximated living standards via a dummy variable capturing whether the household can mobilise $2,000-$3,000 from savings. One could argue that a different choice of savings level would better represent financial stability, but data limitations did not allow us to explore this further.
Disability status was measured through the question "Do you have any long-term health condition, impairment or disability that restricts you in your everyday activities, and has lasted or is likely to last, for 6 months or more?" To measure the severity of a disability, we also considered responses to the question: "Could you pick a number between 0 and 10 to indicate how much your condition[s] limit[s] the amount of work you can do?", where an answer of 0 means "not at all" and an answer of 10 means "unable to do any work". For ease of interpretation, we recoded responses into three categories: no limitation (score of 0); moderate limitation (scores of 1-6); and severe limitation (scores of 7-10). We assumed "no limitation" for people who reported having a disability but whose response to the severity question was missing. Model covariates included age, gender, ethnicity, education level and employment of the respondent, household size, household income, type of tenure, and region of residence. The annual income variable was adjusted for inflation using the consumer price index at 2016 prices. We converted income to adult-equivalent income using the modified OECD equivalence scale, which allocates a coefficient of 1 to the first adult, 0.5 to each of the remaining adults and 0.3 to each child under 15 [16]. The study sample included all individuals in the HILDA Survey with no missing data on the selected variables.
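The two variable constructions described above can be written down compactly; the sketch below is our own illustration (the function and argument names are not from the survey documentation) of the modified OECD equivalence scale and the severity recoding.

```python
# Modified OECD scale: 1 for the first adult, 0.5 per additional adult, 0.3 per child under 15 [16].
def oecd_modified_scale(n_adults: int, n_children_under_15: int) -> float:
    return 1.0 + 0.5 * max(n_adults - 1, 0) + 0.3 * n_children_under_15

def equivalised_income(household_income: float, n_adults: int, n_children_under_15: int) -> float:
    return household_income / oecd_modified_scale(n_adults, n_children_under_15)

# Recode the 0-10 work-limitation score; missing scores are treated as "no limitation".
def severity_category(work_limitation_score) -> str:
    if work_limitation_score is None or work_limitation_score == 0:
        return "no limitation"
    return "moderate limitation" if work_limitation_score <= 6 else "severe limitation"

print(equivalised_income(90_000, n_adults=2, n_children_under_15=1))  # 90,000 / 1.8 = 50,000.0
print(severity_category(7))                                           # "severe limitation"
```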
Table 1 shows the prevalence of disability at various levels of severity over time and the associated standard of living. On average, 27.3% of the surveyed individuals had a disability or long-term health condition, which is comparable with the Irish figure of 28% [8] but considerably higher than the 18.5% reported by the Survey of Disability, Ageing and Carers (SDAC) in 2009 [17]. One possible reason for the difference is that the definition of disability in this study is broader: "having any long-term health condition, impairment or disability". However, the proportion of people whose disability caused a limitation was lower when severity was taken into account. The percentages of people with a disability causing a moderate limitation and a severe limitation to work were 11.6% and 6.1%, respectively, a total of 17.7%. The remaining 9.6% are people with long-term health conditions or who report having a disability but face no limitation to work.
Table 1 Disability status and standard of living over time
The standard of living (proxied by financial satisfaction) decreased with increasing disability severity. For example, the average living standards for people with a disability with no limitation, some limitation and severe limitation to work were 6.4, 6.1 and 5.6, respectively. Moreover, the living standard of people without a disability was substantially higher than that of people with a disability. This pattern was consistent from Wave 1 to Wave 16. While there was no clear trend in the prevalence of disability, the living standard improved slightly over time across all disability severity levels.
Table 2 shows significant differences between people with and without a disability in a range of variables used in the models, with the exception of the gender of household heads.
Table 2 Descriptive statistics
People without a disability were better off, with an average adult-equivalent disposable income of $53,892 per year, which was 27.5% higher than the figure of $42,266 for people with a disability. On average, people with a disability lived in smaller households, were more likely to be of Aboriginal and Torres Strait Islander descent, had lower educational attainment, were more likely to live in socially rented/subsidised properties, and lived in more disadvantaged areas, as proxied by quintiles of the Socio-Economic Index for Areas (SEIFA). People with a disability also had lower levels of satisfaction with life, and the magnitude of the difference was substantial (8.05 versus 7.56). By contrast, the differences in financial satisfaction (6.6 versus 6.1) and in the probability of being able to mobilise $2,000-$3,000 from savings (71% versus 68%) were smaller. This suggests that using the level of satisfaction with life as a proxy for the standard of living may result in higher estimates of the costs of disability.
We first estimated the costs of disability in a pooled model, where disability at all levels of severity (proxied by the limitation to work) was estimated together. The Hausman-like specification test rejected the null hypothesis that the between and within parameters are equal (χ2(4)=251, p = 0.00), indicating that the within parameters are preferred; we therefore focus on reporting and discussing results based on these parameters. Table 3 shows that the magnitude of the disability parameters (in absolute value), and hence the estimated disability costs, increase with the level of severity. For example, contemporaneous costs, estimated as the ratio of the disability and income parameters in the current period, increased from 19% (i.e., \(\frac {0.09}{0.46}\)) for those with no work-related limitation, to 71% for those with some limitation and 102% for those with severe limitation (Table 3). The long-term disability costs, estimated as the ratio of the lagged disability parameter to the lagged income parameter, increased at a slower pace, from 37% for those with no limitations to 94% for those with severe limitations. As expected, the estimated cost of disability using the disability indicator that disregards the severity of limitations lies between the estimates based on the different severity levels.
Table 3 Costs of disability by severity: pooled model (dependent variable: satisfaction with the financial situation)
In particular, the contemporaneous and long-term costs of having a disability were 50% and 63% of adult-equivalent annual income, respectively. These estimates are higher than those reported by [2]. This difference is likely to arise because Saunders applied a standard regression model that did not account for unobserved individual-specific characteristics. For comparison, we applied a standard ordered logit regression to Eq. 1, as in [2] (instead of the hybrid ordered logit estimator in Eq. 6), and found that the additional costs for households with people with a disability were 37% of their equivalised disposable income, the same as the finding of [2]. This comparison suggests that estimates that cannot exploit panel data may understate the costs of disability.
Sensitivity test
As a sensitivity test, we used the saving capacity of the household as a proxy for the living standard. Findings were similar: the disability costs increased with the level of severity (Table 4). However, the magnitudes of the cost estimates were smaller than in the main models based on financial satisfaction. For example, the contemporaneous estimates of the additional cost ranged from 13% for people with a disability and no work-related limitation to 71% for those with severe work-related limitations. The long-run disability cost estimates using the lagged parameters were more substantial, ranging from 62% for those without a work-related limitation to 118% for those with severe limitations. Similarly, the cost estimates that disregard the severity of disability were 31% in the short term and 77% in the long term. We also performed an analysis using the overall level of satisfaction with life as a proxy for standard of living. As expected, the cost estimates using this approach were much higher (people with a disability would need to increase their disposable income by 300% to have the same level of overall satisfaction as those without a disability).
Table 4 Costs of disability by severity: pooled model, using savings to represent the standard of living
The results of this study have several implications. Firstly, our results show that, at the same level of income, the living standard is lower in households with people with a disability than in households without members with a disability. This finding indicates a strong relationship between poverty and disability. However, current poverty measures do not take disability into account and therefore fail to capture substantial differences in poverty rates between people with and without a disability. Also, the income used in this study included all sources of income, including current disability support, suggesting that the current level of government support for people with disabilities is not sufficient. Secondly, the estimated costs reflected in this study do consider foregone income due to disability. Therefore, policymakers should seriously consider adopting disability-adjusted poverty and inequality measurements. Thirdly, increasing the income (e.g. through government payments) or providing subsidised services for people with a disability may increase their financial satisfaction, leading to an improved living standard. Therefore, policymakers should also consider increased spending for people with a disability. Fourthly, as the National Disability Insurance Scheme (NDIS) focuses only on people with severe disability, policies addressing job support, workplace support and employability enhancement are needed to improve the income of people with less severe disabilities. Finally, the results of this study can serve as a baseline for the evaluation of the NDIS. Future research replicating our approach after the nationwide rollout of the NDIS in 2019 is needed.
The current study has several limitations. Firstly, for our sensitivity test, we approximated living standards using a dummy variable capturing whether the household can mobilise $2,000-$3,000 from savings. A different choice of savings level might better represent financial stability; however, we could not explore this further due to data limitations. Secondly, we assumed "no limitation" for people who reported having a disability but whose response to the severity question was missing; this assumption may not completely reflect the severity of their disability. Finally, this study did not estimate the cost of disability for different socioeconomic groups, despite controlling for ethnicity and socioeconomic status in the regressions. Our estimates also did not identify contributors to income disparities by disability status, which could be investigated using a decomposition approach [18, 19].
This paper has indirectly estimated the costs of disability in Australia by applying a SoL approach to a large, contemporary, national panel data set. In this paper we were able to: (1) investigate the dynamics of disability and income by using lagged disability and income status, (2) control for unobserved individual heterogeneity and endogeneity of income, and (3) distinguish between short- and long-run disability costs using a hybrid panel data regression approach. We found that the average short-run cost of having any disability in Australia was 50% of disposable adult-equivalent annual income. This figure varied considerably with the severity of disability, ranging from 19% for people without work-related limitations to 102% for people with severe limitations. The average long-run cost of disability was higher, at 63% of adult-equivalent disposable income, and was distributed more evenly across severity levels, ranging from 37% for people with no work-related limitations to 94% for people with severe limitations. These results were sensitive to the choice of proxies for standard of living. Highly subjective measures such as overall life satisfaction inflated the cost estimates and are therefore not recommended. Further, estimates that use cross-sectional data and ignore unobserved individual-specific characteristics (e.g. the previous Australian estimates by Saunders [2]) may underestimate the costs of disability.
The results of this study have several implications. Firstly, our results show that, at the same level of income, the living standard is lower in households with people with a disability than in households without members with a disability. This indicates a strong relationship between poverty and disability. However, current poverty measures do not take disability into account and therefore fail to capture substantial differences in poverty rates between people with and without a disability. Secondly, the estimated costs reflected in this study do consider forgone income due to disability. Therefore, policy makers should consider adopting disability-adjusted poverty and inequality measurements. Thirdly, increasing the income (e.g. through government payments) or providing subsidised services for people with a disability may increase their financial satisfaction, leading to an improved living standard. Thus, policy makers need to consider increased spending for people with a disability. Further, the results of this study can serve as a baseline for the evaluation of the National Disability Insurance Scheme (NDIS). Therefore, future research replicating our approach after the nationwide rollout of the NDIS in 2019 is needed.
The data are publicly available.
BHPS:
British household panel survey
CIV:
Compensating income variation
FRS:
Family resources survey
HILDA:
Household, income and labour dynamics in Australia survey
NDIS:
National disability insurance scheme
OECD:
Organization for economic cooperation and development
SDAC:
Survey of disability, ageing and carers
SEIFA:
Socio-economic index for areas
SoL:
Standard of living
ABS. Disability, ageing and carers, Australia. Technical report. Aust Bur Stat. 2015.
Saunders P. The costs of disability and the incidence of poverty. Aust J Soc Issues. 2007; 42(4):461–80.
NDIA. The national disability insurance scheme. Technical report, The National Disability Insurance Agency. 2018. https://www.ndis.gov.au/about-us/what-ndis. Accessed 15 June 2018.
Zaidi A, Burchardt T. Comparing incomes when needs differ: Equivalization for the extra costs of disability in the UK. Rev Income Wealth. 2005; 51(1):89–114.
Tibble M. Review of Existing Research on the Extra Costs of Disability. Leeds: Corporate Document Services; 2005.
Morciano M, Hancock R, Pudney S. Disability costs and equivalence scales in the older population in Great Britain. Rev Income Wealth. 2015; 61(3):494–514.
Mitra S, Palmer M, Kim H, Mont D, Groce N. Extra costs of living with a disability: A review and agenda for future research. Disabil Health. 2017; 10:458–84.
Cullinan J, Gannon B, Lyons S. Estimating the extra cost of living for people with disabilities. Health Econ. 2011; 20(5):582–99.
Anton J-I, Brana F-J, de Bustillo RM. An analysis of the cost of disability across Europe using the standard of living approach. SERIEs. 2016; 7(3):281–306.
Mundlak Y. On the pooling of time series and cross section data. Econometrica J Econ Soc. 1978; 46:69–85.
Allison PD. Fixed Effects Regression Models, vol. 160. Thousand Oaks: SAGE Publications; 2009.
Schunck R, Perales F. Within-and between-cluster effects in generalized linear mixed models: A discussion of approaches and the xthybrid command. Stata J. 2017; 17(1):89–115.
Summerfield M, Bevitt A, Freidin S, Hahn M, La N, Macalalad N, O'Shea M, Watson N, Wilkins R, Wooden M. Hilda user manual - release 16. Melbourne Institute of Applied Economic and Social Research. Melbourne: University of Melbourne; 2017.
Wooden M, Watson N. The hilda survey and its contribution to economic and social research (so far). Econ Rec. 2007; 83(261):208–31.
Groot W, Van Den Brink HM. The compensating income variation of cardiovascular disease. Health Econ. 2006; 15(10):1143–8.
OECD. OECD Framework for Statistics on the Distribution of Household Income, Consumption and Wealth. Paris: OECD Publishing; 2013.
Australian Bureau of Statistics. Disability in Australia, cat. no. 4446.0. Technical report. 2009. http://www.abs.gov.au/ausstats/abs@.nsf/Lookup/4446.0main+features22009. Accessed 15 June 2018.
Coveney M, Garcia-Gomez P, Van Doorslaer E, Van Ourti T. Health disparities by income in Spain before and after the economic crisis. Health Econ. 2016; 25:141–58.
Coveney M, Garcia-Gomez P, Van Doorslaer E, Van Ourti T. Thank goodness for stickiness: Unravelling the evolution of income-related health inequalities before and after the Great Recession in Europe. J Health Econ. 2019:102259.
We thank Prof Roger Lawrey for his administrative support with a working version of the paper. We also thank the Melbourne Institute for providing access to the HILDA data.
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
University of Southern Queensland, Toowoomba, Australia
Binh Vu & Rasheda Khanam
Institute of Research and Development, Duy Tan University, Da Nang, 550000, Vietnam
Binh Vu
Royal Brisbane & Women's Hospital, Brisbane, Australia
Maisha Rahman
Centre for Applied Health Economics, 170 Kessels Road Sir Samuel Griffith Centre (N78) 1.11, Queensland, Nathan QLD 4111, Australia
Son Nghiem
Rasheda Khanam
We wish to confirm that all the authors contributed to this study. All authors read and approved the final manuscript.
Correspondence to Son Nghiem.
We confirm that our study is original, has not been published previously, and is not under consideration for publication elsewhere.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
Vu, B., Khanam, R., Rahman, M. et al. The costs of disability in Australia: a hybrid panel-data examination. Health Econ Rev 10, 6 (2020). https://doi.org/10.1186/s13561-020-00264-1
Received: 18 June 2019
Cost of disability
Panel data
Hybrid estimator
Solar Neutrino Experiments: New Physics? [PDF]
John N. Bahcall
Physics , 1993,
Abstract: Physics beyond the simplest version of the standard electroweak model is required to reconcile the results of the chlorine and the Kamiokande solar neutrino experiments. None of the 1000 solar models in a full Monte Carlo simulation is consistent with the results of the chlorine or the Kamiokande experiments. Even if the solar models are forced artificially to have a ${}^8 B$ neutrino flux in agreement with the Kamiokande experiment, none of the fudged models agrees with the chlorine observations. This comparison shows that consistency of the chlorine and Kamiokande experiments requires some physical process that changes the shape of the ${}^8 B$ neutrino energy spectrum. The GALLEX and SAGE experiments, which currently have large statistical uncertainties, differ from the predictions of the standard solar model by $2 \sigma$ and $3 \sigma$, respectively. The possibility that the neutrino experiments are incorrect is briefly discussed.
Do Solar Neutrino Experiments Imply New Physics? [PDF]
John N. Bahcall,H. A. Bethe
Physics , 1992, DOI: 10.1103/PhysRevD.47.1298
Abstract: None of the 1000 solar models in a full Monte Carlo simulation is consistent with the results of the chlorine or the Kamiokande experiments. Even if the solar models are forced artificially to have a ${}^8 B$ neutrino flux in agreement with the Kamiokande experiment, none of the fudged models agrees with the chlorine observations. The GALLEX and SAGE experiments, which currently have large statistical uncertainties, differ from the predictions of the standard solar model by $2 \sigma$ and $3 \sigma$, respectively.
Solar neutrinos and neutrino physics [PDF]
Michele Maltoni,Alexei Yu. Smirnov
Abstract: Solar neutrino studies triggered and largely motivated the major developments in neutrino physics in the last 50 years. Theory of neutrino propagation in different media with matter and fields has been elaborated. It includes oscillations in vacuum and matter, resonance flavor conversion and resonance oscillations, spin and spin-flavor precession, etc. LMA MSW has been established as the true solution of the solar neutrino problem. Parameters theta12 and Delta_m21^2 have been measured; theta13 extracted from the solar data is in agreement with results from reactor experiments. Solar neutrino studies provide a sensitive way to test theory of neutrino oscillations and conversion. Characterized by long baseline, huge fluxes and low energies they are a powerful set-up to search for new physics beyond the standard 3nu paradigm: new neutrino states, sterile neutrinos, non-standard neutrino interactions, effects of violation of fundamental symmetries, new dynamics of neutrino propagation, probes of space and time. These searches allow us to get stringent, and in some cases unique bounds on new physics. We summarize the results on physics of propagation, neutrino properties and physics beyond the standard model obtained from studies of solar neutrinos.
Neutrino Physics & The Solar Neutrino Problem [PDF]
Andrew John Lowe
Abstract: A literature review of neutrino physics and the solar neutrino problem.
A road map to solar neutrino fluxes, neutrino oscillation parameters, and tests for new physics [PDF]
John N. Bahcall,Carlos Pena-Garay
Physics , 2003, DOI: 10.1088/1126-6708/2003/11/004
Abstract: We analyze all available solar and related reactor neutrino experiments, as well as simulated future 7Be, p-p, pep, and ^8B solar neutrino experiments. We treat all solar neutrino fluxes as free parameters subject to the condition that the total luminosity represented by the neutrinos equals the observed solar luminosity (the `luminosity constraint'). Existing experiments show that the p-p solar neutrino flux is 1.02 +- 0.02 (1 sigma) times the flux predicted by the BP00 standard solar model; the 7Be neutrino flux is 0.93^{+0.25}_{-0.63} the predicted flux; and the ^8B flux is 1.01 +- 0.04 the predicted flux. The neutrino oscillation parameters are: Delta m^2 = 7.3^{+0.4}_{-0.6}\times 10^{-5} eV^2 and tan^2 theta_{12} = 0.41 +- 0.04. We evaluate how accurate future experiments must be to determine more precisely neutrino oscillation parameters and solar neutrino fluxes, and to elucidate the transition from vacuum-dominated to matter-dominated oscillations at low energies. A future 7Be nu-e scattering experiment accurate to +- 10 % can reduce the uncertainty in the experimentally determined 7Be neutrino flux by a factor of four and the uncertainty in the p-p neutrino flux by a factor of 2.5 (to +- 0.8 %). A future p-p experiment must be accurate to better than +- 3 % to shrink the uncertainty in tan^2 theta_{12} by more than 15 %. The idea that the Sun shines because of nuclear fusion reactions can be tested accurately by comparing the observed photon luminosity of the Sun with the luminosity inferred from measurements of solar neutrino fluxes. Based upon quantitative analyses of present and simulated future experiments, we answer the question: Why perform low-energy solar neutrino experiments?
Advancements in solar neutrino physics
Vito Antonelli,Lino Miramonti
High Energy Physics - Phenomenology , 2013, DOI: 10.1142/S0218301313300099
Abstract: We review the results of solar neutrino physics, with particular attention to the data obtained and the analyses performed in the last decades, which were determinant to solve the solar neutrino problem (SNP), proving that neutrinos are massive and oscillating particles and contributing to refine the solar models. We also discuss the perspectives of the presently running experiments in this sector and of the ones planned for the near future and the impact they can have on elementary particle physics and astrophysics.
Neutrino physics and the mirror world: how exact parity symmetry explains the solar neutrino deficit, the atmospheric neutrino anomaly and the LSND experiment [PDF]
R. Foot,R. R. Volkas
Abstract: Evidence for $\bar \nu_{\mu} \rightarrow \bar \nu_e$ oscillations has been reported at LAMPF using the LSND detector. Further evidence for neutrino mixing comes from the solar neutrino deficit and the atmospheric neutrino anomaly. All of these anomalies require new physics. We show that all of these anomalies can be explained if the standard model is enlarged so that an unbroken parity symmetry can be defined. This explanation holds independently of the actual model for neutrino masses. Thus, we argue that parity symmetry is not only a beautiful candidate for a symmetry beyond the standard model, but it can also explain the known neutrino physics anomalies.
The Nuclear Physics of Solar and Supernova Neutrino Detection [PDF]
W. C. Haxton
Abstract: This talk provides a basic introduction for students interested in the responses of detectors to solar, supernova, and other low-energy neutrino sources. Some of the nuclear physics is then applied in a discussion of nucleosynthesis within a Type II supernova, including the r-process and the neutrino process.
Standard Physics Solution To The Solar Neutrino Problem? [PDF]
Arnon Dar
Abstract: The $^8$B solar neutrino flux predicted by the standard solar model (SSM) is consistent within the theoretical and experimental uncertainties with that observed at Kamiokande. The Gallium and Chlorine solar neutrino experiments, however, seem to imply that the $^7$Be solar neutrino flux is strongly suppressed compared with that predicted by the SSM. If the $^7$Be solar neutrino flux is suppressed, it can still be due to astrophysical effects not included in the simplistic SSM. Such effects include short term fluctuations or periodic variation of the temperature in the solar core, rotational mixing of $^3$He in the solar core, and dense plasma effects which may strongly enhance p-capture by $^7$Be relative to e-capture. The new generation of solar observations which already look non-stop deep into the sun, like Superkamiokande through neutrinos, and SOHO and GONG through acoustic waves, may point at the correct solution. Only Superkamiokande and/or future solar neutrino experiments, such as SNO, BOREXINO and HELLAZ, will be able to find out whether the solar neutrino problem is caused by neutrino properties beyond the minimal standard electroweak model or whether it is just a problem of the too simplistic standard solar model.
Temporal dynamics of the fecal microbiota in veal calves in a 6-month field trial
Méril Massot ORCID: orcid.org/0000-0002-6611-214X1,
Marisa Haenni2,
Thu Thuy Nguyen1,
Jean-Yves Madec2,
France Mentré1,3 &
Erick Denamur1,4
Little is known about maturation of calves' gut microbiome in veal farms, in which animals are confined under intensive-farming conditions and the administration of collective antibiotic treatment in feed is common. We conducted a field study on 45 calves starting seven days after their arrival in three veal farms. We collected monthly fecal samples over six months and performed 16S rRNA gene sequencing and quantitative PCR of Escherichia coli to follow the dynamics of their microbiota, including that of their commensal E. coli populations. We used mixed-effect models to characterize the dynamics of α-diversity indices and numbers of E. coli, and searched for an effect of collective antibiotic treatments on the estimated parameters. On two farms, we also searched for associations between recommended daily doses of milk powder and bacterial abundance.
There was high heterogeneity between calves' microbiota upon their arrival at the farms, followed by an increase in similarity starting from the first month. From the second month, 16 genera were detected at each sampling in all calves, representing 67.5% (± 9.9) of their microbiota. The Shannon diversity index showed a two-phase increase, with an inflection at the end of the first month. Calves receiving antibiotics had a lower intercept estimate for the Shannon index (− 0.17, CI95% [− 0.27; − 0.06], p = 0.003) and a smaller number of E. coli per gram of feces during treatment and in the 15 days following it (− 0.37 log10 (E. coli/g), CI95% [− 0.66; − 0.08], p = 0.01) than unexposed calves. There were moderate to strong positive associations between the dose of milk powder and the relative abundances of the genera Megasphaera, Enterococcus, Dialister and Mitsuokella, and the number of E. coli (rs ≥ 0.40; Bonferroni-corrected p < 0.05).
This observational study shows early convergence of the developing microbiota between veal calves and associations between the dose of milk powder and members of their microbiota. It suggests that administration of collective antibiotic treatment results in a reduction of microbial diversity and size of the E. coli population and highlights the need for additional work to fully understand the impact of antibiotic treatment in the veal industry.
The veal-calf industry is an intensive farming system that produces meat from milk-fed calves. Europe is one of the largest producers of veal in the world, producing approximately six million head yearly, mostly in France, the Netherlands, Italy, and Belgium. Male dairy calves are commonly used to produce veal. They are purchased from dairy farms by "integrators", companies that are involved in all stages of the production process [1]. They are collected at 2 weeks of age, batched, and placed without delay in dedicated closed buildings for approximately 6 months. Calves are mainly fed milk replacers, and a small amount of solid feed is introduced during the first weeks for the welfare of the animals. They are reared indoors, often in high-animal-density buildings, which increases the risk of pathogens spreading among the batch. Collective antibiotic treatment is frequent, particularly at the start of the fattening period, when calves coming from different dairy farms are grouped together. In France, veal calves typically receive more than eight antibiotic treatments during the fattening process, mainly by the oral route [2, 3]. Collective treatment represents almost 96% of the administered treatments, and 88% of the prescriptions are made in advance of an anticipated outbreak of disease, when some animals in the batch develop symptoms of infection [2,3,4]. A pervasive effect of antibiotic treatment can be the collapse of gut bacterial populations, which results in a loss of fecal microbial diversity, as shown in cattle [5, 6].
Because of the extensive use of antibiotics, combined with specific dietary practices and intensive farming practices, maintaining a proper gut microbiota balance during calf development is a key challenge in the veal-calf industry. To our knowledge, only a few attempts have been made to investigate the effect of antibiotics on the gut microbiota composition of pre-weaned dairy calves in commercial farms [7, 8], and no study has focused on calves reared in veal farms.
Based on a field trial, we report the temporal dynamics of bacterial communities in veal calves highly exposed to antibiotics from an early age, with an additional focus on the commensal E. coli population. Although E. coli is one of the most common pathogens that cause diarrhea in calves, it is also found as a commensal in the gut of healthy calves [9, 10]. Few studies have focused on commensal E. coli populations in pre-weaned calves [11]. We hypothesized that the developmental trajectory of the microbiota is influenced by the use of antibiotics during growth, resulting in the development of a community with lower diversity and persistent shifts in taxonomic composition. We addressed this question by performing 16S rRNA gene sequencing and quantitative PCR (qPCR) of Escherichia on 312 rectal swabs collected from 45 veal calves distributed across three French veal farms over 6 months.
Animal sampling and follow-up
We collected fecal samples from 15 calves on each of three veal farms partnered with different integrators in Brittany, France, during fattening. Calves were sampled by rectal swabbing at days 7 and 21 and then monthly for 5 months until the departure of the batch to the slaughterhouse. The calves were 14 days old when they arrived on the farms and were mainly fed milk replacer, reconstituted from milk powder and hot water, throughout the fattening period. Calves stayed for 161 days on farms A and B and 147 days on farm C. Collective antibiotic treatments were recorded by the three farms throughout the fattening period (Fig. 1). Antibiotics were always used at therapeutic doses and administered orally in the feed. All calves received antibiotics more than once, and calves from farms A and C received several consecutive antibiotic treatments during the first month (Fig. 1).
Scheme of sampling dates for each farm. Sampling points for farm A, farm B, and farm C are represented in the upper panel, middle panel, and lower panel, respectively. "N" indicates the number of calves studied on each farm. The days of sampling are indicated by grey dots. Of note, one calf on farm C died during the fattening period and was excluded from the study. Antibiotic treatments are indicated by bold dark lines or back triangles, and the names of the antibiotics are given in the legend
All calves from farms A and B included in the study were followed until the end of the fattening period. One calf from farm C died during fattening and was excluded from the study. For the 44 remaining calves, there were no missing samples during the follow-up and thus downstream analyses were performed on 308 samples.
After sampling, swabs were placed immediately in portable coolers with ice packs. Swabs were shipped within 24 h to the antimicrobial resistance and virulence laboratory of the French Agency for Food, Environmental and Occupational Health & Safety (ANSES) in Lyon, France, and stored at − 80 °C. After genomic DNA extraction from the swabs, we characterized the fecal microbiota by 16S rRNA gene amplicon sequencing (V4 region) using Illumina MiSeq technology. E. coli was quantified by qPCR targeting the 16S rRNA gene sequence specific to the Escherichia genus.
16S amplicon sequencing
After processing reads using the mothur pipeline, 34,153,188 quality amplicons were generated, with an average of 111,248 ± 63,002 per sample (Additional file 1: Fig. S1a). One sample had mostly poor-quality reads and a very small number of amplicons and was thus excluded from the downstream 16S amplicon sequencing analyses. The minimum number of operational taxonomic units (OTUs) detected in a sample was 119 and the maximum 1302 (Additional file 1: Fig. S1b). The rarefaction threshold was set to 47,000 sequences (Additional file 1: Fig. S1c). Seven samples were below this threshold and excluded from the α- and β-diversity analyses.
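For readers unfamiliar with rarefaction, the sketch below illustrates the underlying idea of subsampling each sample's reads without replacement down to a common depth (47,000 sequences here, matching the study's threshold); it is a toy illustration with simulated counts, not the mothur commands actually used in the analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
otu_counts = rng.poisson(80, size=800)     # simulated OTU count vector for one sample
depth = 47_000                             # rarefaction threshold used in the study

if otu_counts.sum() >= depth:              # samples below the threshold are excluded
    reads = np.repeat(np.arange(otu_counts.size), otu_counts)   # one entry per read, labelled by OTU
    subsample = rng.choice(reads, size=depth, replace=False)    # subsample without replacement
    rarefied = np.bincount(subsample, minlength=otu_counts.size)
    print(rarefied.sum(), (rarefied > 0).sum())                 # depth and number of observed OTUs
```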
Weighted and unweighted Unifrac distances
The similarity of microbiota composition among calves was tracked using β-diversity measures, which represent the dissimilarity between samples. The weighted Unifrac distances between calves were highest on day 7, with an overall mean of 0.45 (± 0.10) (Fig. 2a). The weighted Unifrac distances started to decrease at the next sampling, the overall mean reaching 0.33 (± 0.06) on day 21 (Fig. 2b). Weighted Unifrac distances remained low until the end of fattening, with a mean of 0.31 (± 0.08) on the last day (Fig. 2c). The time of sampling had a significant effect on the calf microbiome composition (p = 0.001, permutational multivariate analysis of variance (PERMANOVA) on weighted Unifrac distances). It explained 15.5% of the between-sample variation in calves, indicating extensive sharing of the microbial community among calves at a given time of the fattening period. The microbiota of individuals belonging to the same farm were no more similar than those of individuals from different farms (p = 0.4, PERMANOVA on weighted Unifrac distances). Despite the use of different antibiotic molecules and different times of collective antibiotic treatments between farms, farm affiliation explained only 4.0% of the between-sample variation (Fig. 2).
Heatmaps of the weighted Unifrac distances at the first month and last month of fattening. Heatmaps of the β-diversity weighted Unifrac distances matrix are shown for the (a) first sampling (day 7), (b) second sampling (day 21), and (c) last sampling (day 161 for farms A and B and day 147 for farm C). Each square represents a pairwise distance between two calves. The pale yellow squares indicate low Unifrac distances, whereas dark red squares indicate high Unifrac distances. The calves are ordered according to the farms in both the lines and columns. The calf's distance from itself is represented by the white square on the main diagonal. The means ± standard deviations for each sampling and farm are shown in the lower triangles
We obtained similar results for the unweighted Unifrac distances (Additional file 2: Fig. S2, p = 0.001 and p = 0.5 for the time of sampling and farm in the PERMANOVA test, respectively). The farm affiliation explained 2.9% of the between-sample variation.
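As an illustration of the test used above, the sketch below runs a PERMANOVA on a simulated distance matrix with scikit-bio; the matrix, sample labels and group sizes are invented for the example and do not reproduce the study's actual pipeline.

```python
import numpy as np
from skbio import DistanceMatrix
from skbio.stats.distance import permanova

rng = np.random.default_rng(2)
ids = [f"calf{i}_{t}" for t in ("d7", "d21") for i in range(6)]
grouping = [s.split("_")[1] for s in ids]            # sampling time of each sample

# Simulate two clouds of samples and build a symmetric distance matrix from them.
points = np.vstack([rng.normal(0, 1, (6, 3)), rng.normal(2, 1, (6, 3))])
dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
dm = DistanceMatrix(dist, ids)

result = permanova(dm, grouping=grouping, permutations=999)
print(result["test statistic"], result["p-value"])
```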
The weighted Unifrac distances between consecutive samplings showed a trend over time, with a downward trajectory for almost all calves (Additional file 3: Fig. S3). The mean intra-calf weighted Unifrac distances between the first and second samplings and between the second and third samplings were 0.40 (± 0.10) and 0.33 (± 0.08), respectively. The mean intra-calf weighted Unifrac distance between the second-to-last and last samplings was 0.25 (± 0.07). These results suggest that the magnitude of changes tended to decrease on a monthly scale.
Taxonomic composition of the microbiota
We investigated the taxonomic composition at different levels, from phyla to OTUs. We detected 19 phyla in the samples. Among them, only Firmicutes, Bacteroidetes, Actinobacteria, and Proteobacteria were present at a relative abundance above 1% in all samples. The phyla Firmicutes and Bacteroidetes were dominant on all farms throughout the period of the study. Their overall mean relative abundances were 47.7% (± 10.2) and 40.3% (± 11.3), respectively. The overall mean relative abundance of Actinobacteria was 5.2% (± 7.2) and that of Proteobacteria 3.7% (± 4.3). On farm A, Firmicutes was predominant at the beginning and then decreased slightly towards relative abundances similar to those of Bacteroidetes (Fig. 3).
Mean relative abundance of the four main phyla over time in each farm. The mean relative abundance of the four main phyla on farm A, farm B, and farm C are represented in panels (a), (b), and (c), respectively. The mean relative abundances ± standard deviations of the data are represented by the bars
We detected 349 genera, of which 50 (14%) were found to be one of the five most abundant taxa in at least one sample (Fig. 4, Additional file 4: Fig. S4). On day 7, the mean cumulative relative abundance of the five most abundant taxa was 73.9% (± 12.3) (Fig. 4a). On day 21, the mean cumulative relative abundance of the five most abundant taxa was 59.0% (± 7.4) and it was 60.2% (± 8.4) on the last day (Fig. 4b and c).
Individual microbiota composition at the genus level at the first and last month of fattening. Relative abundance of the five most abundant taxa at the genus level for all calves for the (a) first sampling (day 7), (b) second sampling (day 21), and (c) last sampling (day 161 for farms A and B and day 147 for farm C). Other detected taxa are depicted in white. Calf IDs are given at the top of the panels and are ordered according to farm. The color scale of the dots beneath the bar graphs represents the distribution of the Shannon index values. Grey dots indicate samples for which no index was computed because the number of sequences was lower than the rarefaction threshold. The color key refers to the phylum of each taxon, and each palette was built to maximize the distinctiveness between shades
From the second month, 16 taxa at the genus level were detected in all calves at each sampling until the end of fattening. They were Alloprevotella, Bacteroides, Bifidobacterium, Blautia, Dorea, Faecalibacterium, Lactobacillus, Olsenella, Parabacteroides, Prevotella, Pseudoflavonifractor, Ruminococcus2, unclassified taxa from Clostridiales, Erysipelotrichaceae, Lachnospiraceae, and Ruminococcaceae. The overall mean cumulative relative abundance of these taxa was 67.5% (± 9.9), and was similar across samplings throughout the fattening period. On farm C, the mean was equal to 74.1% (± 5.5), 72.8% (± 5.5), and 70.0% (± 6.5) on days 35, 91 and 147, respectively. On farms A and B, the mean was 64.6% (± 10.0), 64.6% (± 8.8), and 65.5% (± 7.7) on days 49, 106, and 161, respectively.
For each calf, the proportion of OTUs not previously detected was higher than the proportion of OTUs detected in the previous sample (Fig. 5). The proportion of OTUs detected in the previous sample varied between 7.2 and 48.6%, whereas the proportion of OTUs that had never been detected before varied between 51.4 and 92.8%. These results suggest that the temporal dynamics of the calf microbiota is driven by the replacement of autochthonous OTUs by the newly detected ones. The proportion of newly detected OTUs tended to decrease over time, concomitantly with an increase in the proportion of OTUs detected in two consecutive samples (Fig. 5). Only 50 OTUs simultaneously persisted in more than 97% of calves between consecutive samples. An OTU from the genus Faecalibacterium and an unclassified OTU from the Ruminococcaceae family persisted from the first to last month in more than 97% of calves (Additional file 5: Table S1). No OTU was newly detected or lost simultaneously by more than 50% of the calves (Additional file 5: Table S1).
Mean proportion of OTUs relative to those detected in previous samples of the same calf. The mean proportion of OTUs for farm A, farm B, and farm C are represented in panels (a), (b), and (c), respectively. The mean proportions ± standard deviations of the data are represented by the bars
No over-abundance or depletion of taxa was detected at the phylum (Fig. 3) or genus level (Fig. 4a and b) in samples of calves under antibiotic treatment, or of those that had received antibiotics in the 15 days before sampling, relative to calves not exposed during the same period. For example, at day 21, although calves on farm A had received antibiotics for 20 days while calves on farm B had not, 154 genera were detected on both farms. They represented 86.5 and 85.6% of the genera detected that day on farms A and B, respectively. At day 21, 157 genera were detected on both farms B and C. They represented 87.2 and 86.7% of the genera detected that day on farms B and C, respectively, despite the fact that calves on farm C had received antibiotics for 14 days. Sixteen taxa at the genus level were detected only on farm B, and all were found at a relative abundance of < 1% (Chlamydophila, Snodgrassella, unclassified taxa from Rhodobacteraceae, Tissierella, Clostridium_XI, Comamonas, Basfia, Janibacter, Anaerosporobacter, Pseudoscardovia, unclassified taxa from Deltaproteobacteria, Brevinema, Brucella, unclassified taxa from Gammaproteobacteria, unclassified taxa from Desulfovibrionaceae, Oligosphaera).
Shannon index and the number of observed OTUs
We evaluated the temporal dynamics of microbiota diversity in each sample by examining two α-diversity metrics, the Shannon index and the number of observed OTUs. The Shannon index showed an increasing trend over time, with an overall mean of 3.05 (± 0.82) on day 7 and 4.23 (± 0.59) at the end of fattening. The temporal dynamics of microbiota diversity were best described by a two-slope model (Likelihood Ratio Test (LRT) between the two candidate models, p < 10^-15, Fig. 6a, Additional file 6: Table S2). The coefficient of the first slope was higher than that of the second, suggesting a large increase in diversity during the first month, followed by a lowering of the rate of increase, and even stagnation on farm A. The estimates for the intercept and second slope for farms B and C were significantly higher than those for farm A. There was a similar temporal trend and a significant farm effect on the number of observed OTUs (Additional file 7: Fig. S5a, Additional file 6: Table S3).
Dynamics of the mean observed and predicted Shannon index for each farm. The predicted dynamics of the Shannon index, without and with the antibiotic-treatment effect, in the final model are shown in panels (a) and (b), respectively. The mean values ± standard deviations of the observed data for each farm are represented by the dashed bars. Model-predicted profiles and their 95% confidence bands are represented by the solid lines and bands, respectively. Antibiotic treatments during sampling or within 15 days before sampling are colored coded by farm and indicated above the x-axis in panel (b)
A variable representing antibiotic treatment in the 15 days prior to sampling (whether still ongoing or not at the time of sampling) was added to the two-slope model and tested for significance for both indices. There was a significant effect of antibiotics on the Shannon index intercept, with an estimated decrease of 0.17 (CI95% [− 0.27; − 0.06], p = 0.003, Additional file 6: Table S2, Fig. 6b), indicating an antibiotic-induced reduction of bacterial diversity during exposure. The effect of antibiotic exposure on the number of observed OTUs was also significant and in the same direction, with an estimated decrease of 82.7 OTUs (CI95% [− 115.8; − 49.6], p = 1 × 10^-6, Additional file 6: Table S3, Additional file 7: Fig. S5b).
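A compact way to picture the model just described is a broken-stick (two-slope) linear mixed model with a random intercept per calf and an indicator for antibiotic exposure in the previous 15 days. The sketch below fits such a model on simulated data with statsmodels; the knot position (day 30, roughly the end of the first month), the sample size and all simulated values are assumptions, not the study's estimation code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
days = np.array([7, 21, 49, 77, 106, 134, 161])
rows = []
for calf in range(40):
    for d in days:
        atb15 = int(d <= 35 and calf % 2 == 0)        # exposed to antibiotics in the previous 15 days
        shannon = (3.0 + 0.03 * min(d, 30) + 0.004 * max(d - 30, 0)
                   - 0.17 * atb15 + rng.normal(0, 0.3))
        rows.append({"calf": calf, "day": d, "atb15": atb15, "shannon": shannon})
df = pd.DataFrame(rows)

df["slope1"] = df["day"].clip(upper=30)               # time accrued before the knot (day 30)
df["slope2"] = (df["day"] - 30).clip(lower=0)         # time accrued after the knot

two_slope = smf.mixedlm("shannon ~ slope1 + slope2 + atb15",
                        data=df, groups=df["calf"]).fit()
print(two_slope.summary())
```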
Absolute number of E. coli per gram of feces
We quantified the absolute number of Escherichia/g by qPCR, which can be considered a fair proxy of the absolute number of E. coli in calves' feces (see Methods). E. coli is the main facultative anaerobic bacterium in the large intestine and a marker of dysbiosis [12]. We first compared these absolute numbers to the relative abundance of the Escherichia genus estimated by 16S rRNA gene sequencing. The number of E. coli estimated by qPCR was strongly and positively associated with the relative abundance of the Escherichia genus (Spearman's correlation, rs = 0.80, p < 10⁻¹⁵, Fig. 7a), confirming the strong relationship between the two variables. The temporal dynamics of the number of E. coli/g were similar for the three farms, with an overall mean of 7.81 log10(E. coli/g) (± 0.67) on day 7. During the second month, a transient but marked increase (approximately 2 log10) occurred on all three farms (Fig. 7b). The dynamics of the number of E. coli/g were best described by a quartic function of time (LRT between the cubic and quartic models, p = 0.01). The final model gave the same estimates for the parameters of farms A and C, suggesting the presence of an additional factor shaping the number of E. coli/g on these farms (Fig. 7b and Additional file 6: Table S4).
Absolute quantification of the Escherichia coli population by qPCR. a Relative abundance of the Escherichia genus as a function of the number of E. coli/g estimated by qPCR. Each point represents a sample. Points on the x-axis represent samples for which no 16S rRNA gene sequence of the Escherichia genus was detected by sequencing. b Dynamics of the mean observed and predicted number of E. coli/g for each farm in the final model without the antibiotic-treatment effect. The mean values ± standard deviations of the observed data for each farm at each sampling time are represented by the dashed bars. The model-predicted profiles and their 95% confidence bands are represented by the solid lines and bands, respectively. The predicted profiles of farms A and C are overlapping, which is why the predicted profile of farm A does not appear. c Dynamics of the mean observed and predicted number of E. coli/g for each farm in the final model with the antibiotic-treatment effect. Antibiotic treatments during sampling or within 15 days before sampling are color-coded by farm and indicated above the x-axis. d Temporal dynamics of the recommended dose of milk powder per kilo of live weight for farms B and C
As for the α-diversity indices, a variable representing antibiotic treatment in the 15 days prior to sampling (whether or not treatment was still ongoing at the time of sampling) was added to the quartic model and tested for significance. There was a significant reduction in the E. coli population in calves treated in the previous 15 days relative to those that were not, with an estimated decrease of 0.37 log10(E. coli/g) (CI95% [−0.66; −0.08], p = 0.01, Additional file 6: Table S4, Fig. 7c).
Association between the estimated dose of milk powder given in two farms and bacterial abundance
On farms B and C, for which the information was available, we explored associations between the relative abundance of genera and the daily dose of milk powder recommended by the integrator. Calves were almost exclusively fed milk replacer, which is reconstituted from milk powder with hot water, throughout the fattening period. Their diet was also supplemented with a small amount of solid feed from the first weeks. We conducted 349 Spearman rank correlation tests and found 17 taxa at the genus level for which the Spearman correlation coefficient was positive and significantly different from zero (Table 1, Additional file 8: Table S5). The genus with the strongest correlation was Megasphaera, with a moderate to strong association (rs = 0.60, Bonferroni-adjusted p < 1 × 10⁻¹⁵, Table 1, Additional file 9: Fig. S6). The genera with the next highest positive correlation coefficients were Enterococcus, Dialister, and Mitsuokella, indicating a moderate association with the dose of milk powder (rs = 0.44, 0.42, and 0.41, and Bonferroni-adjusted p = 2 × 10⁻⁸, 3 × 10⁻⁷, and 6 × 10⁻⁷, respectively, Table 1, Additional file 9: Fig. S6). The genus Escherichia was also positively associated with milk powder, but to a lesser extent (rs = 0.28, Bonferroni-adjusted p = 0.02, Table 1).
Table 1 Correlation between the dose of milk powder and fecal microbiota on farms B and C
As E. coli is a β-galactosidase-positive species, meaning it is able to cleave lactose into monosaccharides, we compared the estimated dynamic profiles of the absolute number of E. coli/g and the dose of milk powder recommended by the integrators on farms B and C. The estimated profiles were very close for both farms, particularly during the second month, with the peak of the dose of milk powder superimposed over that of the number of E. coli/g (Fig. 7d). We searched for an association between the estimated daily doses of milk powder and the farm-predicted numbers of E. coli/g on farms B and C using Spearman's correlation test. We found a strong, significant positive association between the farm-predicted numbers of E. coli/g and the estimated dose of milk powder (rs = 0.77, p < 1 × 10⁻¹⁵).
We characterized the dynamics of the fecal microbiota of calves from two weeks to six months of age on three commercial veal farms representative of the three main French integrators and of management practices in the veal industry in France. The calves were mainly fed milk replacers throughout the follow-up and received several collective antibiotic treatments at therapeutic doses, most administered in the first weeks of fattening (Fig. 1). We performed 16S rRNA gene sequencing to study the composition of the microbiota and qPCR of the Escherichia genus as a proxy of E. coli to quantify its commensal populations. We estimated the daily dose of milk powder recommended by the integrator for two farms to search for an association with the relative abundance of the detected genera. The most striking results of this study are (i) the convergence of the fecal microbiota composition among calves, which began during the first month of life, along with an increase in α-diversity, (ii) a decrease in microbiota diversity and the size of the E. coli population during or within the 15 days following an antibiotic treatment relative to non-exposed calves of the same age (reduction of the Shannon index by 0.17 and the number of E. coli/g of feces by 0.37 log10 (E. coli / g)), and (iii) a significant association between the estimated daily dose of milk powder and the relative abundance of four genera (Megasphaera, Enterococcus, Dialister, Mitsuokella) and the predicted farm profiles of the number of E. coli/g from our model.
The development of the microbiota of these calves was characterized by the dominance of a small number of genera mainly from the Firmicutes and Bacteroidetes phyla (Fig. 3). These phyla remained dominant as the calves aged (Fig. 4), with a simultaneous increase in microbiota diversity (Fig. 6 and Additional file 7: Fig. S5a). These developmental features have already been described for calves with the same characteristics (age, sex, and breed) fed milk replacers [10], as well as for females of the same breed in Canada [10, 13], the USA [14, 15], and Japan [16] and for dairy calves of a different breed in Austria [17]. These shared findings suggest that the fecal microbiota of calves undergoes a predictable age-dependent trend that is common to distinct calf populations.
The high heterogeneity of the microbiota composition at day 7 on the farms (corresponding to 3 weeks of age) is likely attributable to the distinct origins of the calves, as they came from different dairy farms. Prior studies have noted the important influence of exposure to the bacterial communities of both the dam and the environment on the composition of the microbiota throughout the gut of the newborn calf [18, 19]. Transport from dairy farms to veal farms could also be responsible for the high heterogeneity between calves at the beginning of fattening. Transport has been reported to have a disruptive effect on the gut microbiota of young beef cattle 5 days after their transport to the feedlot [6]. The composition of milk replacers may have been a source of variability between farms during fattening. A difference in the relative abundance of Bifidobacterium spp. and Faecalibacterium prausnitzii was found between one-month-old dairy calves fed milk replacers containing different levels of protein and fat, although this difference was transitory [20].
As early as the end of the first month after arrival on the farms, bacterial succession (Fig. 5) gradually increased the similarity of the microbiota composition among the calves on all farms. This increase was due both to greater sharing of the same bacterial members and to the homogenization of their relative abundances, as shown by the unweighted and weighted Unifrac distances (Additional file 2: Fig. S2 and Fig. 2, respectively). This convergent pattern occurred in the absence of environmental or dietary changes, such as weaning, as the calves were reared in dedicated closed buildings and drank milk replacer throughout fattening. This suggests that the influence of environmental and dietary factors on such convergence was probably limited and highlights the likely role of host physiology. Convergence related to age has also been observed in the ruminal microbiota of calves between one day and two years of age [21] and in both the ruminal and fecal microbiota of dairy heifers receiving different diets before weaning [22]. These studies suggest that such changes in composition are not restricted to the lower part of the gut and are not strongly driven by diet. An intriguing possibility is that such convergent stabilization of the microbiota composition over time (Fig. 2, Additional file 2: Fig. S2) may be linked to age-dependent shifts of the gut mucosal immune system, as the expression of Toll-like receptors in both the rumen and colon has been shown to change as calves age [23]. Constraints imposed by the gut environment and autochthonous microbiota on allochthonous bacterial settlement may become less permissive, resulting in more specific requirements as the calves age.
The microbiota of calves on farms where collective antibiotic treatments were given in the previous 15 days or during sampling underwent a reduction in diversity and in the number of E. coli relative to calves of the same age that had not been exposed during the same period (Fig. 6b, Additional file 7: Fig. S5, Fig. 7c). We pooled the effects of both long-term and short-term antibiotic treatment and of molecules with different spectra to focus on the common disruptive effects of antibiotics on microbial ecosystems. As all calves of the same farm received the same antibiotic treatment at the same time, the design of our study did not allow the analysis of the specific effects of each molecule. As the treatments were collective, we did not have negative controls for calves receiving antibiotics from the same farm. Hence, the results could have been biased by farm-specific variables, which could have modulated the relative abundance of certain genera. To our knowledge, no study has yet explored the impact of antibiotic treatments on the maturation of gut microbiota in veal calves. Moreover, the literature on the effect of each antibiotic class on the hindgut microbiota of preweaned dairy calves is scarce. As far as we know, research on this topic has focused on a few classes: macrolides [7, 24], tetracycline [7], amphenicol [7], fluoroquinolones [24], and polypeptides [25]. These studies, with the exception of that by Xie et al. [25], all focused on the effects of a single injection of the antibiotic, which differs from the treatment plans and route of administration (oral) in our study. Xie et al. reported dysbiosis in neonatal dairy calves receiving bacitracin for 10 days in the milk replacer, characterized by an increase in the abundance of Escherichia and Enterococcus, along with a decrease in the abundance of Dorea, Collinsella, Eubacterium, Faecalibacterium, Papillibacter, Peptostreptococcus, Prevotella, and Roseburia, relative to non-exposed calves of the same age [25]. The modalities of treatment in this study were close to those for the calves of farms A and C just after their arrival on the farms. On farm A, they received a combination of antibiotics in their feed for several days, including colistin, another polypeptide. Their finding concerning the genus Escherichia is contrary to the negative effect of antibiotics that we observed. This may be explained by the difference in spectrum between bacitracin and the antibiotics used on the veal farms, as bacitracin has a narrow spectrum, targeting Gram-positive bacteria, whereas E. coli is sensitive to the antibiotics used during the extended treatments (colistin, sulfonamides, tetracycline, trimethoprim). It may also be partially explained by the age of the calves, as the microbiota changes markedly between birth and the first month of life [16, 26].
Antibiotic-induced loss of diversity has already been reported in young beef cattle [6], as well as pre-weaned calves [7], and was often found to be associated with the depletion of beneficial bacteria and/or the increase of opportunistic pathogens [7, 25]. Antibiotic-induced dysbiosis is also observed when pre-weaned calves are fed low doses of antibiotic molecules [27].
The effect of the antibiotic treatments was small relative to the longitudinal changes (Additional file 6: Tables S2, S3 and S4), consistent with findings in pre-weaned dairy calves receiving enrofloxacin and tulathromycin metaphylactic treatment [24] and beef cattle receiving oxytetracycline or tulathromycin injection [6]. These findings can be explained by the existence of a natural resistome, independent of any antibiotic treatment, carried by certain abundant families in the fecal microbiota of pre-weaned calves, as recently shown [28]. Genera of these antibiotic-resistance-gene-carrying families, such as Anaerostipes, Blautia, and Roseburia (Lachnospiraceae family), Enterococcus (Enterococcaceae family), Faecalibacterium and Pseudoflavonifractor (Ruminococcaceae family), Bacteroides (Bacteroidaceae family), and Streptococcus (Streptococcaceae family), were found to be dominant in the feces of the veal calves. Members of the Enterobacteriaceae family, such as E. coli, were also found to be a major reservoir of antibiotic-resistance genes within the microbiota resistome.
Two recent studies reported that the resistome of the fecal microbiota in pre-weaned dairy calves is composed of resistance-conferring genes against tetracycline, sulfonamides, trimethoprim, β-lactams, and macrolides [8, 28]. As the antibiotics used to treat the calves in our study belonged to these classes, these results suggest the existence of a natural resilience of fecal bacterial communities to collective antibiotic treatments on veal farms. Neither of these studies reported the presence in microbial communities of the mcr-1 gene, which confers resistance to colistin, another molecule used to treat the calves, although another study detected the gene in the commensal E. coli of veal calves [3]. Moreover, it has been shown that this natural resistome is shaped by the bacterial phylogeny of the fecal microbiome and decreases as the calves age. One of the main drivers of this decrease was the decline in abundance of the Enterobacteriaceae family, in which 90% of the members were classified as E. coli [28]. It is well known that the commensal E. coli populations of veal calves harbor high levels of antibiotic-resistance genes [3, 29] and that they are diverse. Antibiotic treatment may have promoted an increase in the number of specific pre-existing E. coli strains at the beginning of fattening, as the extended treatments during the first month did not result in a marked depletion of the E. coli population.
Among other findings, we found a possible link between E. coli population dynamics and the use of milk replacer, which is reconstituted from dried milk powder and is rich in lactose (Fig. 7d) [30]. Lactose allows the growth of the vast majority of E. coli strains [31]. It has been shown that the lag time and generation time of E. coli strains, which depend on metabolic efficiency and are crucial for gut colonization and persistence [32, 33], are influenced by the type and abundance of the nutrients available in the habitat [34]. Other components of milk replacers may also have influenced E. coli population dynamics, such as vitamin D, as the absence of the vitamin D receptor in the intestinal epithelium of mice has been associated with increased E. coli loads [35]. Our findings are consistent with those of another study in which the fecal microbiota of Simmental calves was followed during their first 3 months of life, although only six calves were included and the sampling was sparse [17]. The relative abundance of the genus Escherichia was found to be maximal during the milk-feeding period and to decrease before weaning.
Lactose, which is one of the main components of milk powder (approximately 45% of the dry weight), is hydrolyzed to the monosaccharides glucose and galactose by bacteria that synthesize the β-galactosidase enzyme. We therefore looked for the presence of this enzyme in available annotated genomes from the NCBI genome database of the Megasphaera, Enterococcus, Dialister, and Mitsuokella genera. The sequence of E. coli LacZ β-galactosidase was also compared to the protein sequences found in members of these genera using the blastp program [36, 37]. A β-galactosidase gene sequence was found in the genome of Mitsuokella multacida, which was isolated from human feces [38], and several species of the Enterococcus genus, which have been isolated from cattle (E. faecalis, E. faecium, E. hirae, E. thailandicus, E. malodoratus, E. devriesei, E. casseliflavus, E. italicus) [39,40,41,42] (data not shown). The concomitant fluctuations of this lactose-rich source and the relative abundance of the genera Enterococcus and Mitsuokella strongly suggest a direct role of host diet on members of the fecal microbiota. We did not find an annotated β-galactosidase gene in the 24 Megasphaera or the 18 Dialister annotated genomes available on NCBI [37]. Nor did bacteria of these two genera carry a protein similar to the E. coli LacZ β-galactosidase (data not shown). The discrepancy between the associations found for the genera Megasphaera and Dialister and the absence of the β-galactosidase enzyme sequence in the genomes of known members of these genera could be explained by the utilization of another nutrient present in the milk powder by the members of these taxa. Another explanation for this discrepancy could be the limited redundancy of carbon source use among members of these genera, coupled with the small number of genomes from these genera in the NCBI database.
Our study had several limitations. First, we followed the fecal microbiota of calves reared in commercial veal farms. Thus, the calves were not randomly assigned to the different farms (or to antibiotic treatments), and neither the environmental nor the dietary variables could be controlled as they would be in a randomized trial. Nevertheless, one of the aims of this study was to characterize the fecal microbiota of calves reared following common veal-farm practices. These three farms, in which field studies had already been conducted [3], are representative of management practices in the veal-calf industry in France. Second, the calves were only sampled after spending 7 days on the farms. Thus, no information on their microbiota composition before antibiotic treatment was available. They were also sampled on a monthly basis, whereas high-frequency sampling has been recommended in early-life microbiome studies in infants [43]. Furthermore, sampling was performed independently of antibiotic treatment. Hence, certain short- and mid-term age- and antibiotic-associated changes may have been missed. Third, although we tracked the dynamics of the microbiota using 16S rRNA gene sequencing and E. coli qPCR, we only focused on specific features of this complex ecosystem and may have missed specific patterns at other levels. For example, we had no information concerning the dynamics of fecal bacterial loads, which have been shown to vary in newborn calves [9, 44, 45] and to be associated with microbiota composition [46].
Nevertheless, veal calves, as studied here, have attributes relevant to the exploration of the microbiota maturation process. First, batches are composed of male calves of the same age and breed (usually Holsteins). Thus, they are highly genetically and physiologically homogeneous. Second, they share the same living environment and diet, which are not subject to major changes, as they are reared in dedicated closed buildings in which the conditions are stable and controlled to optimize their growth. Moreover, they do not experience any drastic changes in their diet, as it remains predominantly composed of milk replacers during the 6 months of fattening. Third, the systematic administration of antibiotics at therapeutic doses to all members of the batch is common practice to prevent the spread of infectious diseases [2]. As healthy young subjects that share similar controlled conditions over a long period of time and experience common antibiotic exposure, veal calves represent a unique opportunity to disentangle the factors that drive microbiota assembly under real-life conditions.
This observational study conducted on calves reared under intensive-farming conditions shows (i) early convergence of the developing fecal microbiota among farms and (ii) a significant association between the estimated daily doses of milk powder and the relative abundance of certain genera and the predicted farm profiles of the number of E. coli/g. This study also suggests that the administration of collective antibiotic treatment results in a limited reduction of diversity and size of the E. coli population and highlights the need for additional studies to fully understand the impact of antibiotic treatment in the context of the veal industry. To our knowledge, this is the first field study to follow the microbiota composition and size of the commensal E. coli population of veal calves throughout the fattening period.
Animal handling and sampling
We collected fecal samples from veal calves during a cohort study dedicated to monitoring the excretion of extended-spectrum β-lactamase (ESBL)-producing E. coli, which has been shown to be frequent in veal calves [3, 47]. The fecal excretion of ESBL-producing E. coli was followed in 45 veal calves distributed among three French veal farms (named A, B, and C) located in the region of Brittany [3], within a 100-km radius around Rennes. The three farmers raised calves in partnership with different integrators, which were the main veal-calf producers in France. We streaked swabs on selective ChromID ESBL agar (bioMérieux, Marcy l'Etoile, France) and classified calves as "ESBL-producing E. coli high-level excretor", "low-level excretor", or "ESBL-producing E. coli-free" based on the number of colonies that grew after 24 h at 37 °C (> 100 colonies = high-level excretion, < 100 colonies = low-level excretion, no colonies = no excretion) [3]. This cohort also provided an opportunity to follow the dynamics of the fecal microbiota of veal calves under real-life conditions through the collection of additional fecal samples. Characterization of the ESBL-producing E. coli will be published elsewhere.
Sampling began upon the arrival of batches of new calves in October and November 2015. A batch was defined in the study as a group of calves entering the farm at the same time and reared together until slaughter (Additional file 10: Fig. S7). Seven days after arrival, 15 calves were randomly selected from 50 on each farm and included in the study: five with high levels of ESBL-producing E. coli excretion, five with low-level excretion, and five with no excretion. The 15 calves selected per farm were then sampled by rectal swabbing bimonthly for ESBL-producing E. coli excretion follow-up until departure of the batch to the slaughterhouse. To study the dynamics of the fecal microbiota, additional samples were collected at days 7 and 21 on the farms, then monthly for 5 months, for a total of seven samples per calf (Fig. 1). Swabs were placed immediately in portable coolers with ice packs, shipped to the ANSES lab in Lyon, France, and stored at − 80 °C.
The calves were 14 days old when they arrived at the farms and were mainly fed milk replacer, which is reconstituted from cow milk powder with hot water, throughout fattening. Their diet was also supplemented with a small amount of solid feed from the first weeks. The daily quantity of milk powder was divided by the mean weight of a male Holstein calf of the corresponding age each day (assuming that the calves arrived at the farms at 14 days of age), which enabled us to estimate a proxy for the dose of milk powder consumed on these farms throughout the fattening period. Thus, the integrators' recommended doses of milk powder, of which approximately 45% consists of lactose, were estimated for farms B and C.
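The calculation of this proxy is simple arithmetic; the following R sketch illustrates it. It is not the authors' code, and the data frame feeding_plan, its columns, and the reference growth-curve function holstein_weight_kg are assumptions introduced purely for illustration.

# Illustrative only: deriving the milk-powder dose proxy (g of powder per kg of live weight).
# 'feeding_plan' is a hypothetical data frame with one row per fattening day and columns
# day (days since arrival) and powder_g (integrator-recommended g of milk powder that day);
# 'holstein_weight_kg' is an assumed function returning the mean weight (kg) of a male
# Holstein calf at a given age in days.
age_at_arrival <- 14                                      # calves arrive at about 14 days of age
feeding_plan$age_days  <- feeding_plan$day + age_at_arrival
feeding_plan$weight_kg <- holstein_weight_kg(feeding_plan$age_days)

# Proxy used in the analysis: daily dose of milk powder per kg of live weight
feeding_plan$dose_g_per_kg <- feeding_plan$powder_g / feeding_plan$weight_kg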
Collective antibiotic treatments were recorded by the three farms throughout fattening (Fig. 1). Antibiotics were always used at therapeutic doses and administered orally in water or milk replacer. All treatments upon entry to the fattening farms were set up to prevent gastrointestinal disorders, whereas treatments during the course of fattening were used to treat respiratory diseases. All calves received antibiotics more than once and calves from farms A and C received several consecutive antibiotic treatments during the first month (Fig. 1). On farm A, calves received a 10-day course of colistin and sulfonamides (first day to day 10), followed by another 10-day course of tetracycline (day 11 to day 20). Later during fattening, they received a one-day treatment of doxycycline on day 53 and a one-day treatment of amoxicillin on day 135. On farm B, calves received a one-day treatment of doxycycline and erythromycin on day 26, a one-day treatment of tetracycline on day 90, and a one-day treatment of amoxicillin on day 101. On farm C, calves received a six-day course of sulfonamides and trimethoprim (from day 3 to day 8) and a seven-day course of tetracycline (day 10 to day 16). They also received a five-day course of doxycycline (day 20 to day 24) and four-day courses of spiramycin (day 25 to day 28) and tetracycline (day 80 to day 83).
DNA extraction, 16S rRNA gene sequencing, and Escherichia-specific quantitative PCR
Genomic DNA was extracted from rectal swabs using the DNeasy PowerSoil kit (QIAGEN, Venlo, Netherlands). The cotton tips of frozen swabs were broken off directly into bead tubes. The tubes were incubated at 70 °C for 10 min, as previously described [48]. The remaining steps were performed according to the manufacturer's instructions, with an additional overnight incubation step with elution buffer at 4 °C. Extracted DNA was stored at − 20 °C. The V4 region of the 16S rRNA gene from each sample was amplified using the primers 515fB (GTGYCAGCMGCCGCGGTAA) and 806rB (GGACTACNVGGGTWTCTAAT), modified to contain a barcode sequence between the primer and Illumina adaptor sequences, as previously described [49, 50]. Dual-barcoded libraries were sequenced on an Illumina MiSeq machine (MiSeq Reagent Kit V3, 600 cycles) according to the manufacturer's specifications to generate paired-end reads of 300 bases in length.
E. coli populations were quantified by qPCR targeting the 16S rRNA gene sequence specific to the Escherichia genus. The Escherichia genus is a good proxy of E. coli in cattle, as Escherichia cryptic clades and other species of the genus represent < 4% in non-human mammal feces [51]. We verified this assumption by plating three randomly selected swabs from distinct calves of our study on Drigalski plates. We determined the species of 50 colonies per plate using MALDI-TOF and Clermont typing PCR [52]. All colonies were confirmed to be E. coli. The sequences of the forward (CATGCCGCGTGTATGAAGAA) and reverse (CGGGTAACGTCAATGAGCAAA) primers and probe (FAM-TATTAACTTTACTCCCTTCCTCCCCGCTGAA-TAMRA) were obtained from [53]. Each DNA sample (approximately 20 ng) was added to 30 μl PCR mixture containing 15 μl TaqMan Universal PCR master mix II 2X (Applied Biosystems, Life Technologies, Carlsbad, California, USA), 300 nM of each primer, 100 nM fluorescent probe, and bovine serum albumin at a final concentration of 0.1 μg/μl (New England BioLabs, Evry, France), as previously described [54]. A standard curve was generated using known amounts of DNA of the archetypal ED1a E. coli strain for each experiment. Products were detected using an Applied Biosystems Prism 7500 instrument.
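Absolute quantification from qPCR typically relies on a standard curve relating the cycle threshold (Ct) to a known amount of target DNA. The R sketch below shows the general approach only; the example Ct values, object names, and per-reaction bookkeeping are assumptions, not the study's actual data or code.

# Hedged sketch of absolute quantification from qPCR cycle thresholds (Ct).
# A standard curve is fitted on serial dilutions of DNA from the reference strain (here ED1a),
# then used to convert sample Ct values into numbers of E. coli per gram of feces.
standards <- data.frame(
  log10_copies = c(7, 6, 5, 4, 3),                 # known log10 target amounts per reaction (illustrative)
  ct           = c(14.2, 17.6, 21.1, 24.5, 28.0)   # example Ct values (illustrative)
)

# Linear standard curve: Ct decreases linearly with the log10 of the target amount
std_curve <- lm(log10_copies ~ ct, data = standards)

# 'samples' is a hypothetical data frame with a 'ct' column and a column giving the grams of
# feces represented in each reaction; convert Ct to log10 copies, then to copies per gram
samples$log10_copies <- predict(std_curve, newdata = samples)
samples$ecoli_per_g  <- 10^samples$log10_copies / samples$g_feces_per_reaction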
Processing of 16S rRNA gene sequences
The quality of the reads was verified using FastQC [55], and the reads were processed using mothur (version 1.35.1) [56, 57]. Contigs were generated by assembling forward and reverse reads. Low-quality contigs were discarded if the total length was outside the range of 289 to 292 bases, if there were more than five ambiguous bases ("N"), or if homopolymer runs exceeded five bases. After a clean-up step, sequences were aligned to those of the SILVA reference database (February 2017, release 128) [58]. OTU assignment was made after clustering the sequences with a similarity cutoff of 97%. Singletons, duplicates, and triplicates were discarded. The taxonomy of each detected OTU was obtained using the RDP quality-controlled, aligned, and annotated Bacterial and Archaeal 16S rRNA gene sequence database [59]. Chimeric sequences were removed after de novo chimera detection using the VSEARCH tool, version 2.3.4 [60]. Sequences flagged as chloroplasts, mitochondria, or eukaryotes were discarded from the dataset. We assessed the taxonomic composition of the microbiota at the phylum and genus levels, focusing on the five most abundant genera in each sample. For each calf, the OTUs detected in a sample were classified according to their detection in the previous sample to determine the monthly degree of change of the calf microbiota at the OTU level. For each time point, we determined the proportion of OTUs that had not previously been detected and the proportion of OTUs that had not been detected in the previous sample. This analysis was performed to identify concurrent patterns of acquisition and persistence of OTUs among calves.
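The classification of OTUs by their detection in the previous sample amounts to a presence/absence comparison between consecutive samples of the same calf. A minimal R sketch of this idea is given below; it is not the authors' pipeline, and the presence/absence matrix otu (samples in rows, OTUs in columns) and the metadata frame meta (columns calf and day) are assumed objects.

# For each calf, order its samples by day and, for each consecutive pair, compute
# the proportion of OTUs newly detected and the proportion no longer detected.
turnover <- do.call(rbind, lapply(split(seq_len(nrow(otu)), meta$calf), function(rows) {
  rows <- rows[order(meta$day[rows])]
  do.call(rbind, lapply(seq_along(rows)[-1], function(k) {
    prev <- otu[rows[k - 1], ] > 0
    curr <- otu[rows[k], ] > 0
    data.frame(
      calf      = meta$calf[rows[k]],
      day       = meta$day[rows[k]],
      prop_new  = sum(curr & !prev) / sum(curr),   # OTUs not detected in the previous sample
      prop_lost = sum(prev & !curr) / sum(prev)    # OTUs from the previous sample no longer detected
    )
  }))
}))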
A rarefaction step was performed before computation of the α- and β-diversity metrics. Several candidate rarefaction thresholds were assessed against several criteria: the number of samples below the threshold, the number of samples for which the sampling effort would capture more than 25% or more than 50% of the total number of sequences, and the proportion of sequences sampled in the sample with the largest number of sequences. The threshold was set to 47,000 sequences, which was the best compromise between these four criteria. Then, α-diversity and β-diversity metrics were computed from the rarefied samples. α-diversity is defined as the ecological diversity within samples, whereas β-diversity is defined as the dissimilarity between samples. The Shannon diversity index and the number of observed OTUs were computed as α-diversity metrics. Unweighted and raw-weighted Unifrac distances were computed as β-diversity metrics [61]. For each calf, distances were computed between consecutive samples, and the distances between all calves were computed at the first, second, and last sampling.
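The rarefaction and α-diversity computations were performed in mothur; an equivalent sketch in R with the vegan package, assuming a hypothetical matrix of raw OTU counts otu_counts (samples in rows, OTUs in columns), could look like this.

# Hedged illustration of the rarefaction and α-diversity step (not the original mothur workflow).
library(vegan)

set.seed(42)                                           # rrarefy subsamples at random
threshold <- 47000                                     # rarefaction depth chosen in the study
keep      <- rowSums(otu_counts) >= threshold          # drop samples below the threshold
rarefied  <- rrarefy(otu_counts[keep, ], sample = threshold)

alpha_div <- data.frame(
  shannon       = diversity(rarefied, index = "shannon"),  # Shannon index per sample
  observed_otus = specnumber(rarefied)                     # number of observed OTUs per sample
)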
In summary, the available data for each sample were (i) the relative abundance of taxa at the phylum and genus levels, (ii) the α-diversity metrics (the Shannon index and the number of observed OTUs), (iii) the weighted and unweighted Unifrac distances to the previous and next samples from the same calf, and (iv) the absolute number of E. coli per gram of feces. In addition, Unifrac distances between calves were computed at the first, second, and last samplings.
Comparing community structures between and within calves over time
We performed PERMANOVA tests [62] using weighted and unweighted Unifrac distance matrices to evaluate the effects of time and farm on the calves' microbiota. Tests were performed using 1000 permutations. We constrained the permutations within each calf to account for repeated measures. The weighted and unweighted Unifrac distances between calves at the first, second, and last samplings were represented in heatmaps. Moreover, the Unifrac distances between consecutive samples for each calf were represented by spaghetti plots to assess the temporal variability of each calf's microbiota composition.
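A sketch of such a PERMANOVA call with vegan's adonis function, restricting permutations within calves, is shown below. The distance object unifrac_w and the metadata frame meta (columns day, farm, calf) are assumed objects, not the authors' code; recent vegan versions supersede adonis with adonis2.

# Illustrative PERMANOVA on a weighted Unifrac distance matrix with repeated measures.
library(vegan)

permanova_w <- adonis(unifrac_w ~ day + farm,
                      data         = meta,
                      permutations = 1000,
                      strata       = meta$calf)   # permute samples only within each calf
permanova_w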
Temporal modelling of α-diversity indices
We built linear mixed-effects models to study the temporal dynamics of microbiota diversity. One-slope and two-slope models were tested. For the two-slope model, the break was set at day 21 for all farms after visual inspection of the raw data. The equation of the two-slope model is shown below:
$$ y_{i,j,k} = \theta_{0i,k} + \theta_{1i,k} \times t_{1i,j,k} + \theta_{2i,k} \times t_{2i,j,k} + \varepsilon_{i,j,k} $$
where yi,j,k is the observed Shannon index or number of observed OTUs on the jth day for calf i from farm k; θ0i,k, θ1i,k, and θ2i,k are the intercept, first slope, and second slope, respectively; t1i,j,k and t2i,j,k represent the time up to and including day 21 and the time after day 21, respectively; and εi,j,k is the residual error.
The farm effect was introduced for each parameter and the calves set as random effects. The LRT was used to compare the fit between the candidate models. For each parameter of the final model, farms were grouped when the effects were not significantly different. We assumed that the random effects and residual errors were independent and had a normal distribution, with a mean of 0. Evaluation of the final model was conducted using basic goodness-of-fit plots.
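As an illustration, the two-slope (broken-stick) model for the Shannon index could be fitted with nlme as follows. The data frame div and its columns (shannon, day, farm, calf) are assumptions introduced for the sketch, not the authors' code.

# Hedged sketch of the one-slope and two-slope mixed models for the Shannon index.
library(nlme)

div$t1 <- pmin(div$day, 21)       # time up to and including day 21
div$t2 <- pmax(div$day - 21, 0)   # time after day 21

one_slope <- lme(shannon ~ day * farm,
                 random = ~ 1 + day | calf,
                 data = div, method = "ML")
two_slope <- lme(shannon ~ (t1 + t2) * farm,
                 random = ~ 1 + t1 + t2 | calf,
                 data = div, method = "ML")

anova(one_slope, two_slope)       # likelihood ratio test between the candidate models (both fitted by ML)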
Temporal modelling of the absolute number of E. coli per gram of feces
First, we performed Spearman's correlation between the number of bacteria from the Escherichia genus, estimated by quantitative PCR, and the relative abundance of the Escherichia genus to assess the consistency of the two techniques. Second, we built linear mixed-effects models to study the temporal dynamics of the absolute number of E. coli and tested polynomial functions of time. The farm effect was introduced for each parameter and the calves set as random effects. The equation of the quartic model is shown below:
$$ y_{i,j,k} = \theta_{0i,k} + \theta_{1i,k} \times t_{i,j} + \theta_{2i,k} \times t_{i,j}^{2} + \theta_{3i,k} \times t_{i,j}^{3} + \theta_{4i,k} \times t_{i,j}^{4} + \varepsilon_{i,j,k} $$
where yi,j,k is the observed number of E. coli/g for calf i from farm k on its jth day; θ0i,k, θ1i,k, θ2i,k, θ3i,k, and θ4i,k are the coefficients of each term of the polynomial function of time; ti,j is the jth day of calf i; and εi,j,k is the residual error.
The number of random effects was reduced by a backward approach using the Akaike Information Criterion (AIC), starting with the random effect of the coefficient with the highest degree. Selection of the final model and grouping by the farm effect were performed as for the α-diversity indices. We also made the same assumptions of normality and independence and evaluated the final model using basic goodness-of-fit plots.
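A hedged sketch of the quartic model fit and the AIC-based backward reduction of the random effects is given below; the data frame ecoli and its columns (log10_ecoli, day, farm, calf) are assumed names, not the authors' code.

# Illustrative quartic polynomial mixed model for log10(E. coli/g) with farm effects on each
# coefficient and calf-level random effects.
library(nlme)

full <- lme(log10_ecoli ~ (day + I(day^2) + I(day^3) + I(day^4)) * farm,
            random  = ~ 1 + day + I(day^2) + I(day^3) + I(day^4) | calf,
            data    = ecoli, method = "ML",
            control = lmeControl(opt = "optim"))

# Backward reduction of the random-effects structure: drop the highest-degree term first,
# keep the structure with the lower AIC, then continue with the next term.
reduced <- update(full, random = ~ 1 + day + I(day^2) + I(day^3) | calf)
AIC(full, reduced)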
Determination of the influence of antibiotic treatment on temporal predictions of α-diversity indices and the absolute number of E. coli
Alterations of the fecal microbiota, such as the selective depletion of bacterial populations and the reduction of ecological diversity, are generally regarded as markers of gut microbiota dysbiosis following antibiotic treatment [63,64,65]. We investigated such effects on the temporal dynamics of the calves' fecal microbiota by considering that dysbiosis linked to antibiotic treatment could have occurred in any sample for which a collective antibiotic treatment had been given within 15 days before sampling, whether or not the treatment was still ongoing at the time of sampling (Fig. 1).
The influence of antibiotic treatment on the intercepts of the α-diversity index models and of the absolute number of E. coli model was tested by introducing a covariate indicating treatment within the 15 days before sampling. Samples were collected in such a time window on two dates at farm A (days 7 and 21), one date at farm B (day 106), and four dates at farm C (days 7, 21, 35, and 91, Fig. 1).
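Continuing the two-slope sketch above, the antibiotic covariate can be added to the fixed effects as a binary indicator. The object abx_windows, listing the farm-specific sampling days falling within 15 days of a collective treatment, is hypothetical.

# Flag samples collected during, or within 15 days after the start of, a collective treatment
div$abx15 <- as.integer(paste(div$farm, div$day) %in%
                        paste(abx_windows$farm, abx_windows$day))

# Same two-slope structure as before, with the antibiotic indicator shifting the intercept
two_slope_abx <- lme(shannon ~ (t1 + t2) * farm + abx15,
                     random = ~ 1 + t1 + t2 | calf,
                     data = div, method = "ML")

anova(two_slope, two_slope_abx)   # significance of the antibiotic-treatment effect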
Exploring associations between the abundance of genera and the dose of milk powder
A link between the relative abundance of genera and the dose of milk powder was explored for farms B and C. The Spearman correlation test was used to look for positive associations between the dose of milk powder and the relative abundance of genera. As multiple tests were performed, the p-values were adjusted using the Bonferroni correction. We searched for the presence of the β-galactosidase enzyme, which hydrolyzes lactose to the monosaccharides glucose and galactose, in the available annotated genomes in the NCBI genome and protein databases for genera found to have a significant moderate to strong positive association with the dose of milk powder (rs > 0.4) [37]. We also blasted the protein sequence of the E. coli LacZ β-galactosidase enzyme against the NCBI protein database using the blastp program [36, 37].
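The genus-level correlation screen can be illustrated as follows. The matrix genus_ra (samples × genera relative abundances, restricted to farms B and C) and the vector dose (matching estimated dose of milk powder per sample) are assumed objects, not the authors' code.

# Spearman correlation between each genus' relative abundance and the milk-powder dose,
# with Bonferroni correction across all tests.
res <- data.frame(genus = colnames(genus_ra), rho = NA_real_, p = NA_real_)

for (g in seq_along(res$genus)) {
  ct         <- suppressWarnings(cor.test(genus_ra[, g], dose, method = "spearman"))
  res$rho[g] <- unname(ct$estimate)
  res$p[g]   <- ct$p.value
}

res$p_bonf <- p.adjust(res$p, method = "bonferroni")
subset(res, rho > 0 & p_bonf < 0.05)   # genera positively associated with the dose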
As E. coli is a lactose-fermenting species, we searched for an association between the estimated daily doses of milk powder and the predicted number of E. coli/g for farms B and C using Spearman's correlation test. The farm-predicted numbers of E. coli/g were obtained by adding the estimates of the farm parameters to the intercept in our final model.
Means are presented with standard deviations. All statistical analyses were performed using R software (version 3.1.0) [66]. Mixed-effects models were built using nlme [67], and PERMANOVA tests were performed using the "adonis" function in the vegan package [68].
Sequencing reads were deposited as entire raw data in the European Nucleotide Archive repository (ENA) under the BioProject ID PRJEB33072, separately for each sample. The code used for processing the 16S rRNA gene sequences was originally developed by Kozich et al. [57] and is available at https://www.mothur.org/wiki/MiSeq_SOP (accessed January 2017). The unrarefied OTU table and the corresponding taxonomic classification analyzed during the current study are available from the corresponding author upon reasonable request. The Escherichia-specific qPCR dataset has been included as Additional file 11: Table S6.
AIC:
Akaike Information Criterion
ANSES:
French Agency for Food, Environmental and Occupational Health & Safety
ESBL:
Extended spectrum β-lactamase
PERMANOVA:
Permutational multivariate analysis of variance
qPCR:
Quantitative polymerase chain reaction
rRNA:
Ribosomal ribonucleic acid
Sans P, de Fontguyon G. Veal calf industry economics. Rev Médecine Vét. 2009;160:420–4.
Jarrige N, Cazeau G, Morignat E, Chanteperdrix M, Gay E. Quantitative and qualitative analysis of antimicrobial usage in white veal calves in France. Prev Vet Med. 2017;144:158–66.
Gay E, Bour M, Cazeau G, Jarrige N, Martineau C, Madec J-Y, et al. Antimicrobial usages and antimicrobial resistance in commensal Escherichia coli from veal calves in France: evolution during the fattening process. Front Microbiol. 2019;10:792.
Pardon B, Catry B, Dewulf J, Persoons D, Hostens M, De Bleecker K, et al. Prospective study on quantitative and qualitative antimicrobial and anti-inflammatory drug use in white veal calves. J Antimicrob Chemother. 2012;67:1027–38.
Ji S, Jiang T, Yan H, Guo C, Liu J, Su H, et al. Ecological restoration of antibiotic-disturbed gastrointestinal microbiota in foregut and hindgut of cows. Front Cell Infect Microbiol. 2018;8:79.
Holman DB, Yang W, Alexander TW. Antibiotic treatment in feedlot cattle: a longitudinal study of the effect of oxytetracycline and tulathromycin on the fecal and nasopharyngeal microbiota. Microbiome. 2019;7:86.
Oultram J, Phipps E, Teixeira AGV, Foditsch C, Bicalho ML, Machado VS, et al. Effects of antibiotics (oxytetracycline, florfenicol or tulathromycin) on neonatal calves' faecal microbial diversity. Vet Rec. 2015;177:598.
Haley BJ, Kim S-W, Salaheen S, Hovingh E, Van Kessel JAS. Differences in the microbial community and resistome structures of feces from preweaned calves and lactating dairy cows in commercial dairy herds. Foodborne Pathog Dis. 2020;17:494–503.
Smith HW, Crabb WE. The fæcal bacterial flora of animals and man: its development in the young. J Pathol Bacteriol. 1961;82:53–66.
Meale SJ, Li S, Azevedo P, Derakhshani H, Plaizier JC, Khafipour E, et al. Development of ruminal and fecal microbiomes are affected by weaning but not weaning strategy in dairy calves. Front Microbiol. 2016;7:582.
Kolenda R, Burdukiewicz M, Schierack P. A systematic review and meta-analysis of the epidemiology of pathogenic Escherichia coli of calves and the role of calves as reservoirs for human pathogenic E. coli. Front Cell Infect Microbiol. 2015;5:23.
Litvak Y, Byndloss MX, Tsolis RM, Bäumler AJ. Dysbiotic Proteobacteria expansion: a microbial signature of epithelial dysfunction. Curr Opin Microbiol. 2017;39:1–6.
Meale SJ, Li SC, Azevedo P, Derakhshani H, DeVries TJ, Plaizier JC, et al. Weaning age influences the severity of gastrointestinal microbiome shifts in dairy calves. Sci Rep. 2017;7:198.
Oikonomou G, Teixeira AGV, Foditsch C, Bicalho ML, Machado VS, Bicalho RC. Fecal microbial diversity in pre-weaned dairy calves as described by pyrosequencing of metagenomic 16S rDNA. Associations of Faecalibacterium species with health and growth. PLoS One. 2013;8(4):e63157.
Wickramasinghe HKJP, Anast S-ES, Serão NVL, Appuhamy JADRN. Beginning to offer drinking water at birth increases the species richness and the abundance of Faecalibacterium and Bifidobacterium in the gut of preweaned dairy calves. J Dairy Sci. 2020;103(5):4262–74.
Uyeno Y, Sekiguchi Y, Kamagata Y. rRNA-based analysis to monitor succession of faecal bacterial communities in Holstein calves. Lett Appl Microbiol. 2010;51:570–7.
Klein-Jöbstl D, Schornsteiner E, Mann E, Wagner M, Drillich M, Schmitz-Esser S. Pyrosequencing reveals diverse fecal microbiota in Simmental calves during early development. Front Microbiol. 2014;5:622.
Malmuthuge N, Griebel PJ, Guan LL. The gut microbiome and its potential role in the development and function of newborn calf gastrointestinal tract. Front Vet Sci. 2015;2:36.
Yeoman CJ, Ishaq SL, Bichi E, Olivo SK, Lowe J, Aldridge BM. Biogeographical differences in the influence of maternal microbial sources on the early successional development of the bovine neonatal gastrointestinal tract. Sci Rep. 2018;8:1–14.
Badman J, Daly K, Kelly J, Moran AW, Cameron J, Watson I, et al. The effect of milk replacer composition on the intestinal microbiota of pre-ruminant dairy calves. Front Vet Sci. 2019;6:371.
Jami E, Israel A, Kotser A, Mizrahi I. Exploring the bovine rumen bacterial community from birth to adulthood. ISME J. 2013;7:1069–79.
Dill-McFarland KA, Breaker JD, Suen G. Microbial succession in the gastrointestinal tract of dairy cows from 2 weeks to first lactation. Sci Rep. 2017;7:40864.
Malmuthuge N, Li M, Fries P, Griebel PJ, Guan LL. Regional and age dependent changes in gene expression of toll-like receptors and key antimicrobial defence molecules throughout the gastrointestinal tract of dairy calves. Vet Immunol Immunopathol. 2012;146:18–26.
Foditsch C, Pereira RVV, Siler JD, Altier C, Warnick LD. Effects of treatment with enrofloxacin or tulathromycin on fecal microbiota composition and genetic function of dairy calves. PLoS One. 2019;14:e0219635.
Xie G, Duff GC, Hall LW, Allen JD, Burrows CD, Bernal-Rigoli JC, et al. Alteration of digestive tract microbiome in neonatal Holstein bull calves by bacitracin methylene disalicylate treatment and scours. J Anim Sci. 2013;91:4984–90.
Song Y, Malmuthuge N, Steele MA, Guan LL. Shift of hindgut microbiota and microbial short chain fatty acids profiles in dairy calves from birth to pre-weaning. FEMS Microbiol Ecol. 2018;94(3):10.1093/femsec/fix179.
Van Vleck PR, Lima S, Siler JD, Foditsch C, Warnick LD, Bicalho RC. Ingestion of milk containing very low concentration of antimicrobials: longitudinal effect on fecal microbiota composition in preweaned calves. PLoS One. 2016;11:e0147525.
Liu J, Taft D, Gomez M, Johnson D, Treiber M, Lemay D, et al. The fecal resistome of dairy cattle is associated with diet during nursing. Nat Commun. 2019;10(1):4406.
Bosman AB, Wagenaar JA, Wagenaar J, Stegeman A, Vernooij H, Mevius D. Quantifying antimicrobial resistance at veal calf farms. PLoS One. 2012;7:e44831.
Pantophlet AJ, Gilbert MS, van den Borne JJGC, Gerrits WJJ, Roelofsen H, Priebe MG, et al. Lactose in milk replacer can partly be replaced by glucose, fructose, or glycerol without affecting insulin sensitivity in veal calves. J Dairy Sci. 2016;99:3072–80.
Sabarly V, Bouvet O, Glodt J, Clermont O, Skurnik D, Diancourt L, et al. The decoupling between genetic structure and metabolic phenotypes in Escherichia coli leads to continuous phenotypic diversity. J Evol Biol. 2011;24:1559–71.
Ozawa A, Freter R. Ecological mechanism controlling growth of Escherichia coli in continuous flow cultures and in the mouse intestine. J Infect Dis. 1964;114:235–42.
Fabich AJ, Leatham MP, Grissom JE, Wiley G, Lai H, Najar F, et al. Genotype and phenotypes of an intestine-adapted Escherichia coli K-12 mutant selected by animal passage for superior colonization. Infect Immun. 2011;79:2430–9.
Sabarly V, Aubron C, Glodt J, Balliau T, Langella O, Chevret D, et al. Interactions between genotype and environment drive the metabolic phenotype within Escherichia coli isolates. Environ Microbiol. 2016;18:100–17.
Wu S, Zhang Y, Lu R, Xia Y, Zhou D, Petrof EO, et al. Intestinal epithelial vitamin D receptor deletion leads to defective autophagy in colitis. Gut. 2015;64:1082–94.
Altschul SF, Gish W, Miller W, Myers EW, Lipman DJ. Basic local alignment search tool. J Mol Biol. 1990;215:403–10.
NCBI Resource Coordinators. Database resources of the National Center for Biotechnology Information. Nucleic Acids Res. 2018;46(D1):D8–D13.
Parks D, Chuvochina M, Waite D, Rinke C, Skarshewski A, Chaumeil P-A, et al. A standardized bacterial taxonomy based on genome phylogeny substantially revises the tree of life. Nat Biotechnol. 2018;36(10):996–1004.
Beukers AG, Zaheer R, Goji N, Amoako KK, Chaves AV, Ward MP, et al. Comparative genomics of Enterococcus spp isolated from bovine feces. BMC Microbiol. 2017;17:52.
Fortina MG, Ricci G, Mora D, Manachini PL. Molecular analysis of artisanal Italian cheeses reveals Enterococcus italicus sp. nov. Int J Syst Evol Microbiol. 2004;54:1717–21.
Švec P, Vancanneyt M, Koort J, Naser SM, Hoste B, Vihavainen E, et al. Enterococcus devriesei sp. nov., associated with animal sources. Int J Syst Evol Microbiol. 2005;55:2479–84.
Collins MD, Jones D, Farrow JAE, Kilpper-Balz R, Schleifer KH. Enterococcus avium nom. Rev., comb. nov.; E. casseliflavus nom. Rev., comb. nov.; E. durans nom. Rev., comb. nov.; E. gallinarum comb. nov.; and E. malodoratus sp. nov. Int J Syst Evol Microbiol. 1984;34:220–3.
Trosvik P, Stenseth NC, Rudi K. Convergent temporal dynamics of the human infant gut microbiota. ISME J. 2010;4:151–8.
Smith HW. The development of the bacterial flora of the faeces of animals and man: the changes that occur during ageing. J Appl Bacteriol. 1961;24:235–41.
Alipour MJ, Jalanka J, Pessa-Morikawa T, Kokkonen T, Satokari R, Hynönen U, et al. The composition of the perinatal intestinal microbiota in cattle. Sci Rep. 2018;8:10437.
Vandeputte D, Kathagen G, D'hoe K, Vieira-Silva S, Valles-Colomer M, Sabino J, et al. Quantitative microbiome profiling links gut community variation to microbial load. Nature. 2017;551(7681):507–11.
Hordijk J, Wagenaar JA, van de Giessen A, Dierikx C, van Essen-Zandbergen A, Veldman K, et al. Increasing prevalence and diversity of ESBL/AmpC-type β-lactamase genes in Escherichia coli isolated from veal calves from 1997 to 2010. J Antimicrob Chemother. 2013;68:1970–3.
Costello EK, Lauber CL, Hamady M, Fierer N, Gordon JI, Knight R. Bacterial community variation in human body habitats across space and time. Science. 2009;326(5960):1694–7.
Smati M, Clermont O, Bleibtreu A, Fourreau F, David A, Daubié A-S, et al. Quantitative analysis of commensal Escherichia coli populations reveals host-specific enterotypes at the intra-species level. MicrobiologyOpen. 2015;4:604–15.
Clermont O, Christenson JK, Denamur E, Gordon DM. The Clermont Escherichia coli phylo-typing method revisited: improvement of specificity and detection of new phylo-groups. Environ Microbiol Rep. 2013;5:58–65.
Penders J, Thijs C, Vink C, Stelma FF, Snijders B, Kummeling I, et al. Factors influencing the composition of the intestinal microbiota in early infancy. Pediatrics. 2006;118:511–21.
Smati M, Clermont O, Le Gal F, Schichmanoff O, Jauréguy F, Eddi A, et al. Real-time PCR for quantitative analysis of human commensal Escherichia coli populations reveals a high frequency of subdominant phylogroups. Appl Environ Microbiol. 2013;79:5005–12.
Andrews S. FastQC: a quality control tool for high throughput sequence data. 2010. Available online at: http://www.bioinformatics.babraham.ac.uk/projects/fastqc.
Schloss PD, Westcott SL, Ryabin T, Hall JR, Hartmann M, Hollister EB, et al. Introducing mothur: open-source, platform-independent, community-supported software for describing and comparing microbial communities. Appl Environ Microbiol. 2009;75:7537–41.
Kozich JJ, Westcott SL, Baxter NT, Highlander SK, Schloss PD. Development of a dual-index sequencing strategy and curation pipeline for analyzing amplicon sequence data on the MiSeq Illumina sequencing platform. Appl Environ Microbiol. 2013;79:5112–20.
Pruesse E, Quast C, Knittel K, Fuchs BM, Ludwig W, Peplies J, et al. SILVA: a comprehensive online resource for quality checked and aligned ribosomal RNA sequence data compatible with ARB. Nucleic Acids Res. 2007;35:7188–96.
Cole JR, Wang Q, Cardenas E, Fish J, Chai B, Farris RJ, et al. The ribosomal database project: improved alignments and new tools for rRNA analysis. Nucleic Acids Res. 2009;37:D141–5.
Rognes T, Flouri T, Nichols B, Quince C, Mahé F. VSEARCH: a versatile open source tool for metagenomics. PeerJ. 2016;4:e2584.
Lozupone C, Knight R. UniFrac: a new phylogenetic method for comparing microbial communities. Appl Environ Microbiol. 2005;71:8228–35.
Anderson MJ. A new method for non-parametric multivariate analysis of variance. Austral Ecol. 2001;26:32–46.
Palleja A, Mikkelsen KH, Forslund SK, Kashani A, Allin KH, Nielsen T, et al. Recovery of gut microbiota of healthy adults following antibiotic exposure. Nat Microbiol. 2018;3:1255–65.
Korpela K, Salonen A, Virta LJ, Kekkonen RA, Forslund K, Bork P, et al. Intestinal microbiome is related to lifetime antibiotic use in Finnish pre-school children. Nat Commun. 2016;7:10410.
Burdet C, Nguyen TT, Saint-Lu N, Sayah-Jeanne S, Hugon P, Sablier-Gallis F, et al. Change in bacterial diversity of fecal microbiota drives mortality in a hamster model of antibiotic-induced Clostridium difficile colitis. Antimicrob Agents Chemother. 2018;4:S382.
R Development Core Team. R: a language and environment for statistical computing. Vienna: R Foundation for statistical computing; 2005.
Pinheiro J, Bates D, DebRoy S, Sarkar D, R Core Team. Nlme: linear and nonlinear mixed effects models. R package version 3.1–137; 2018.
Oksanen J, Blanchet FG, Friendly M, Kindt R, Legendre P, McGlinn D, et al. Vegan: community ecology package. 2019.
We are grateful to Antoine Bridier-Nahmias, Hervé Perdry, and David Gordon for helpful discussions and Mélanie Magnan, Caroline Wybraniec, and Gilles Collin for technical assistance.
This work was supported in part by grants from La Fondation pour la Recherche Médicale to M.M. (grant number FDM20150633309) and E.D. (équipe FRM 2016, grant number DEQ20161136698), and from the European Union's Horizon 2020 Research and Innovation Programme under Grant Agreement No. 773830 (Project ARDIG, EJP One Health) and INTERBEV (Protocol N° SECU-15-31). The funding organizations were not involved in the study design, or collection, analysis, or interpretation of the data, or writing of the manuscript.
Université de Paris, IAME, INSERM, Site Xavier Bichat, 16 rue Henri Huchard, F-75018, Paris, France
Méril Massot, Thu Thuy Nguyen, France Mentré & Erick Denamur
Unité Antibiorésistance et Virulence Bactériennes, Université de Lyon - ANSES, Laboratoire de Lyon, Lyon, France
Marisa Haenni & Jean-Yves Madec
AP-HP, Hôpital Bichat-Claude Bernard, Département d'Epidémiologie, Biostatistiques et Recherche Clinique, F-75018, Paris, France
France Mentré
AP-HP, Hôpital Bichat-Claude Bernard, Laboratoire de Génétique Moléculaire, F-75018, Paris, France
Erick Denamur
MH, JYM, FM, and ED conceived and designed the study. MH and JYM collected the samples. MM performed the laboratory assays and carried out the bioinformatics analyses. MM, TTN, and FM carried out the statistical analyses of the data. MM generated the figures. MM and ED wrote the manuscript. MH, TTN, JYM, and FM revised and edited the draft. The authors read and approved the final manuscript.
Correspondence to Méril Massot.
This study was declared to the CNIL, the French office responsible for protecting personal data, supporting innovation, and preserving individual liberties. No further ethical approval was needed, since this study did not involve any experimentation on animals (only rectal swabs were sampled) and since we did not collect or register any personal opinions of the participants.
Additional file 1 Fig. S1.
Sequence and OTU distributions after bioinformatics processing. (a) Distribution of the number of 16S rRNA gene V4 region sequences in samples after quality filtering. (b) Distribution of the number of OTUs in samples after clustering sequences with a similarity cutoff of 97%. The inner lines in the boxplots represent the median, the edges show the first and third quartiles, and the whiskers extend to the 5th and 95th percentiles in (a) and (b). (c) Rarefaction curves for 16S rRNA gene V4 region sequences. Each curve corresponds to a sample. The red vertical line represents the chosen rarefaction threshold.
Additional file 2 Fig. S2.
Heatmaps of the β-diversity unweighted Unifrac distance matrices for the (a) first sampling (day 7), (b) second sampling (day 21), and (c) last sampling (day 161 for farms A and B and day 147 for farm C). Yellow squares indicate low Unifrac distances, whereas dark red squares indicate high Unifrac distances. Calves are ordered according to farms in both lines and columns. The means ± standard deviations for each sampling on each farm are shown in the lower triangles.
Additional file 3 Fig. S3.
Observed intra-calf β-diversity weighted Unifrac distances between consecutive samplings for (a) farm A, (b) farm B, and (c) farm C. The dots indicate the Unifrac distances between consecutive samples from the same calf.
Additional file 4 Fig. S4.
Relative abundance of the five most abundant taxa at the genus level for all calves throughout the fattening period. For each panel, the first and second days represent the sampling date for farm C and farms A and B, respectively. Relative abundance of the five most abundant taxa are given for (a) days 35 and 49, (b) days 63 and 77, (c) days 91 and 106, and (d) days 119 and 133. Other detected taxa are depicted by the white bars. Calf IDs are provided at the top of the panels and are ordered according to farms. The color scale of the dots beneath the bar graphs represents the distribution of the Shannon index values. The color key refers to the phylum of each taxon, and each palette was built to maximize the distinctiveness between shades.
Additional file 5 Table S1.
OTUs detected in or absent from previous samples and shared by calves over time. The sampling, ranges of proportions of calves, and OTU taxonomies are presented in each layer. The lists of OTUs consist of (a) OTUs simultaneously detected in two consecutive samples in more than 25% of the calves, (b) OTUs that were not previously detected and that simultaneously appeared in more than 25% of the calves, and (c) OTUs that were simultaneously lost by more than 25% of the calves.
Additional file 6 Tables S2, S3, and S4.
Tables S2, S3, and S4 contain the estimated parameters for the final models of the Shannon index, the number of observed OTUs, and the absolute number of E. coli/g, respectively.
Dynamics of the mean observed and predicted number of observed OTUs for each farm. Predicted dynamics of the number of observed OTUs, without and with the antibiotic-treatment effect, in the final model are represented in panels (a) and (b), respectively. The mean values ± standard deviations of the observed data for each farm are represented by the dashed bars. Model-predicted profiles and their 95% confidence bands are represented by the solid lines and bands, respectively. Antibiotic treatments during sampling or within 15 days before sampling are color-coded by farm and indicated above the x-axis in panel (b).
List of the Spearman's rank correlation coefficients between genera found in calves' feces and the dose of milk powder estimated for farms B and C. The genera with a significant positive correlation and a Bonferroni corrected p-value < 0.05 are highlighted in green.
Relative abundance of the genera Megasphaera, Enterococcus, Dialister, and Mitsuokella as a function of the dose of milk powder. Each point represents a sample. These four genera had the highest significant positive correlation with the estimated dose of milk powder in farms B and C. Values on the x-axis correspond to samples in which the corresponding genus was not detected by 16S rRNA gene sequencing.
Additional file 10 Fig. S7.
Veal calves on fattening farms (a) on the first day, corresponding to 14 days of age, and (b) at 115 days of age, during the third month of fattening.
Additional file 11 Table S6. Absolute number of Escherichia coli per gram of feces in all samples, estimated by Escherichia-specific quantitative PCR.
Massot, M., Haenni, M., Nguyen, T.T. et al. Temporal dynamics of the fecal microbiota in veal calves in a 6-month field trial. anim microbiome 2, 32 (2020). https://doi.org/10.1186/s42523-020-00052-6
Veal calves
Fecal microbiota development
May 2011, 5(2): 341-353. doi: 10.3934/ipi.2011.5.341
3D coded aperture imaging, ill-posedness and link with incomplete data radon transform
Jean-François Crouzet 1,
Institut de Mathématiques et de Modélisation de Montpellier, CNRS UMR 5149 Place Eugène Bataillon, 34095 Montpellier, France
Received June 2010 Revised December 2010 Published May 2011
Coded aperture imaging is an inexpensive imaging process encountered in many fields of research, such as optics, medical imaging, and astronomy, and it has led to several good results for two-dimensional reconstruction methods. However, the three-dimensional reconstruction problem remains severely ill-posed and has not yet yielded satisfactory outcomes.
In the present study, we illustrate why the data are too poor to allow a good inversion in the 3D case. In the context of far-field imaging, an inversion formula is derived when the detector screen can be widely translated. This reformulates the 3D inversion problem of coded aperture imaging in terms of the classical Radon transform. In the sequel, we examine this reconstruction formula more closely and show that it is equivalent to solving the limited-angle Radon transform problem with very restricted data.
We thus deduce that the performance of any numerical reconstruction will remain limited, essentially because of the physical nature of the coding process, except when very strong a priori knowledge of the 3D source is available.
Keywords: Radon Transform, Coded Aperture Imaging, Ill-posed problems.
Mathematics Subject Classification: Primary: 44A12, 65R10, 65R30; Secondary: 92C5.
Citation: Jean-François Crouzet. 3D coded aperture imaging, ill-posedness and link with incomplete data radon transform. Inverse Problems & Imaging, 2011, 5 (2) : 341-353. doi: 10.3934/ipi.2011.5.341
Hexokinase inhibition using D-Mannoheptulose enhances oncolytic newcastle disease virus-mediated killing of breast cancer cells
Ahmed Ghdhban Al-Ziaydi1,
Ahmed Majeed Al-Shammari ORCID: orcid.org/0000-0002-2699-15142,
Mohammed I. Hamzah3,
Haider Sabah Kadhim4 &
Majid Sakhi Jabir5
Cancer Cell International volume 20, Article number: 420 (2020)
Most cancer cells exhibit increased glycolysis and use this metabolic pathway for cell growth and proliferation. Targeting cancer cell metabolism is a promising strategy for inhibiting cancer cell progression. We used D-Mannoheptulose, a specific hexokinase inhibitor, to inhibit glycolysis and thereby enhance the anti-tumor effect of Newcastle disease virus.
Human breast cancer cells were treated with NDV and/or the hexokinase inhibitor. The study included cell viability and apoptosis assays and measurements of hexokinase activity, pyruvate, ATP, and acidity levels. The combination index was measured to determine the synergism of NDV and the hexokinase inhibitor.
The results showed synergistic cytotoxicity of the combination therapy against breast cancer cells but no cytotoxic effect against normal cells. The effect was accompanied by apoptotic cell death, hexokinase downregulation, and reduction of the glycolysis products pyruvate and ATP and of acidity.
The combination treatment safely and significantly inhibited tumor cell proliferation compared to the monotherapies, suggesting a novel strategy for anti-breast cancer therapy through glycolysis inhibition by hexokinase downregulation.
Cancer cells generally rely on aerobic glycolysis because hypoxia, mitochondrial dysfunction, and malignant transformation make them dependent on the glycolytic pathway for ATP generation [1]. This phenomenon is described as the Warburg effect, whereby cancer cells require large amounts of glucose to support their metabolic functions and produce energy [2]. Cancer cells mainly produce energy by increasing the rate of glycolysis to up to 200 times that of the normal cells of origin; this increase is followed by lactate fermentation in the cytosol regardless of an abundant oxygen supply [3]. Glucose uptake in normal tissue is lower than in cancer cells, and this difference can be exploited as a target for cancer therapy [4]. Breast cancer stem cells depend on fermentative glycolysis, which makes them sensitive to glycolysis inhibitors [5].
The first and rate-limiting step of glycolysis, controlled by the hexokinase (HK) enzyme, is the phosphorylation of glucose to glucose-6-phosphate (Glu-6-P), which is later used to generate two ATP molecules [6]. Four major hexokinases are expressed in mammalian tissues, designated HK1, HK2, HK3, and HK4 [7]. HK1 and HK2 are the main hexokinases important for cell survival. Adult mammalian tissues mostly express the HK1 isoform, whereas HK2 is expressed abundantly in only a few adult tissues such as cardiac muscle, skeletal muscle, and adipose tissue [6]. HK2 is expressed in many types of cancer and promotes their growth through an increased glycolytic flux [8]. Brown et al. [9] found that breast cancers were HK2-positive in 79% of studied tumors. HK2 status in breast cancer tissue sections was significantly related to poor prognosis and relapse of breast cancers [10]. Moreover, tamoxifen-resistant MCF-7 breast cancer cells showed upregulation of HK2 and mTOR accompanied by enhanced glycolysis, making HK2 a possible target for overcoming resistance to tamoxifen [11] and to paclitaxel [12]. Furthermore, HK2 overexpression in ovarian cancer cells induces cisplatin resistance [13]. Therefore, targeting hexokinase-2 blocks glucose metabolism in cancer cells, which may inhibit their proliferation with minimal reported side effects [14]. The non-metabolizable glucose analog D-Mannoheptulose (MH) inhibits hexokinase, the first enzyme of glycolysis, with an anticancer effect [15, 16], leading to a block of cellular energy production [17]. MH accumulates in avocado leaves and occasionally in avocado fruit [18]. Dakubo [19] described mannoheptulose as an HK II inhibitor, along with several other agents including lonidamine, 3-BrPA, 2-deoxyglucose, and 5-thioglucose.
On the other hand, cancer remains a very difficult disease to treat, so effective and selective treatment approaches that can destroy cancer cells, such as oncolytic virotherapy, are needed [20]. Newcastle disease virus (NDV) is one of the first oncolytic viruses discovered, dating to the late 20th century [21]. The AMHA1 NDV is an avirulent, attenuated strain of avian paramyxovirus, an enveloped, non-segmented, negative-sense RNA virus [22]. AMHA1 possesses onco-tropic characteristics [23]. NDV induces apoptosis via caspase-dependent and caspase-independent pathways [24, 25]. In a proteomic study, Deng et al. [26] found that NDV downregulates phosphoglycerate kinase (PGK) expression in NDV-infected chicken peripheral blood mononuclear cells (PBMCs); PGK is a glycolytic enzyme, which suggests that NDV may restrict the glycolytic pathway of infected cells. It has been reported that interfering with cancer cell metabolism through glycolysis inhibition may enhance oncolytic virotherapy activity [27]. A recent study showed that blocking glycolysis with 2-deoxyglucose (2DG), or restricting the amount of glucose, enhances the activity of oncolytic adenoviruses in permissive and poorly permissive cancer cells [28]. Furthermore, pyruvate dehydrogenase kinase inhibition was shown to improve the oncolytic anti-tumor efficacy of reovirus in several cancer types [29]. We previously reported that 2DG enhances oncolytic NDV against breast cancer cells through glycolysis inhibition by downregulation of glyceraldehyde-3-phosphate dehydrogenase (GAPDH) [30]. Here, we investigate the use of D-Mannoheptulose, a hexokinase inhibitor, to increase the sensitivity of breast cancer cells to oncolytic NDV, and we analyze glycolysis products to examine whether NDV-mediated oncolysis correlates with this metabolic disturbance.
NDV propagation
NDV (Iraqi AMHA1 strain) was provided by the Experimental Therapy Department/Iraqi Center of Cancer and Medical Genetics Research (ICCMGR), Mustansiriyah University, Baghdad, Iraq. A stock of attenuated NDV was propagated in embryonated chicken eggs (Al-Kindi Company, Baghdad, Iraq), harvested from allantoic fluid, and then purified from debris through centrifugation (3000 rpm, 30 min at 4 °C). NDV was quantified through a hemagglutination test, aliquoted, and stored at − 80 °C. Viral titers were determined based on 50% tissue culture infective dose titration on Vero cells following standard procedure [31].
The Cell Bank Unit, Experimental Therapy Department, ICCMGR, Baghdad, Iraq, provided the estrogen- and progesterone-receptor-negative AMJ13 human breast cancer cell line [32], the estrogen- and progesterone-receptor-positive MCF-7 human breast cancer cell line, and the normal rat embryo fibroblast (REF) cell line. AMJ13 and REF cells were cultured in RPMI-1640 medium, whereas MCF-7 cells were cultured in MEM (US Biological, USA); media were supplemented with 10% (v/v) fetal bovine serum (FBS) (Capricorn-Scientific, Germany) and 1% (v/v) penicillin–streptomycin (Capricorn-Scientific, Germany), and cells were incubated in a humidified atmosphere of 5% CO2 at 37 °C. Exponentially growing cells were used for experiments.
Cytotoxicity assay
Cells were seeded at a density of 1 × 10⁴ cells/well in a 96-well microplate and incubated at 37 °C for 72 h until monolayer confluence was achieved, as observed under an inverted microscope. Cytotoxicity was investigated using the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay. The cells were exposed to a range of diluted concentrations of MH (13.125, 26.25, 52.5, 105, 210, 840, and 1680 μg/ml) (Santa Cruz Biotechnology, USA) and to NDV over a range of multiplicities of infection (MOI 0.1, 0.2, 0.4, 0.8, 1.6, 3.2, 6.4, and 12.8) for IC50 determination. After 72 h, each well received 50 μl of MTT dye solution (2 mg/ml) and was incubated for 3 h, then solubilized with 100 μl of dimethyl sulfoxide (DMSO), and the plates were incubated for a further 15 min. The optical density values of treated and untreated cells were measured at 492 nm with an ELISA plate reader [32]. For morphological examination, exactly 200 µl of cell suspension was seeded in 96-well microtitration plates at a density of 1 × 10⁴ cells/ml and incubated for 72 h at 37 °C; the medium was then removed, and NDV and MH were added. The plates were stained with 50 µl of crystal violet and incubated at 37 °C for 15 min; finally, the stain was removed by gentle washing with tap water. The cells were observed under an inverted microscope at 40× magnification and photographed with a digital camera. The endpoint parameter for each cell line was the inhibition rate of cell growth (cytotoxicity %), which was calculated as follows:
$$\text{Cytotoxicity } \% = \frac{\text{OD}_{\text{control}} - \text{OD}_{\text{sample}}}{\text{OD}_{\text{control}}} \times 100,$$
where OD control is the mean optical density of the untreated wells, and OD sample is the optical density of the treated wells [23].
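For readers who want to reproduce this calculation from raw plate-reader output, a minimal sketch is given below (Python/NumPy is used here purely for illustration; the study itself used an ELISA plate reader and spreadsheet software, and the absorbance values shown are hypothetical).

```python
import numpy as np

def cytotoxicity_percent(od_control, od_sample):
    """Cytotoxicity % = (mean OD_control - OD_sample) / mean OD_control * 100."""
    od_control = np.asarray(od_control, dtype=float)
    od_sample = np.asarray(od_sample, dtype=float)
    return (od_control.mean() - od_sample) / od_control.mean() * 100.0

# Hypothetical OD492 readings from triplicate wells
untreated = [0.82, 0.79, 0.85]   # control wells
treated   = [0.41, 0.44, 0.39]   # wells treated with MH or NDV
print(cytotoxicity_percent(untreated, treated))  # per-well cytotoxicity %
```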
Combined cytotoxicity assays and Chou–Talalay analysis
The doses for this experiment were selected from the IC50 determination in the previous cytotoxicity assay; we took concentrations around the IC50 values of NDV and MH in the cancer cells. The AMJ13, MCF-7, and REF cell lines were seeded at a density of 1 × 10⁴ cells/well into 96-well plates and incubated overnight. NDV was added first at MOI 0.3, 1, and 2, followed by MH at the indicated concentrations (62.5, 125, and 250 μg/ml) through serial dilution for the growth inhibition test. Growth inhibition was measured after 72 h of infection by MTT assay, as described earlier, and the assay was performed in triplicate. NDV and MH were studied at nonconstant ratios to determine synergism. The Chou–Talalay combination index (CI) was calculated using CompuSyn software (CombuSyn Inc., Paramus, NJ, USA) to analyze the combination of NDV and MH; the unfixed ratios of NDV and MH and mutually exclusive equations were used to determine the CI. CI values of 0.9–1.1, < 0.9, and > 1.1 indicate an additive effect, synergism, and antagonism, respectively [33].
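The authors computed the CI with CompuSyn; purely as an illustration of the arithmetic underlying the Chou–Talalay method (a median-effect fit for each agent, then the CI for mutually exclusive drugs at a nonconstant ratio), a hedged sketch follows. It is not the CompuSyn implementation, and the fraction-affected values are hypothetical.

```python
import numpy as np

def median_effect_fit(doses, fa):
    """Fit log(fa/fu) = m*log(D) - m*log(Dm); return slope m and median-effect dose Dm."""
    doses, fa = np.asarray(doses, float), np.asarray(fa, float)
    x, y = np.log10(doses), np.log10(fa / (1.0 - fa))
    m, b = np.polyfit(x, y, 1)
    return m, 10 ** (-b / m)

def combination_index(d1, d2, fa_combo, fit1, fit2):
    """CI = d1/Dx1 + d2/Dx2 for mutually exclusive drugs (Chou-Talalay)."""
    def dose_for_effect(fa, m, Dm):
        return Dm * (fa / (1.0 - fa)) ** (1.0 / m)
    return d1 / dose_for_effect(fa_combo, *fit1) + d2 / dose_for_effect(fa_combo, *fit2)

# Hypothetical single-agent dose-effect data (fa = fraction affected)
fit_mh  = median_effect_fit([62.5, 125, 250], [0.30, 0.45, 0.60])   # MH, ug/ml
fit_ndv = median_effect_fit([0.3, 1.0, 2.0],  [0.25, 0.45, 0.62])   # NDV, MOI
ci = combination_index(62.5, 2.0, 0.70, fit_mh, fit_ndv)
print(f"CI = {ci:.2f} ->", "synergism" if ci < 0.9 else "additive" if ci <= 1.1 else "antagonism")
```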
Assessment of apoptosis (propidium iodide/acridine orange assay)
Propidium iodide/acridine orange (PI/AO) dual staining was used to measure the apoptotic rates in infected and control breast cancer cells. The cells were seeded at a density of 7000 cells/well in a 96-well plate the night before treatment and then treated with 62.5 µg/ml MH and NDV at MOI 2 in a 37 °C incubator for 72 h before PI/AO staining. Then, 1 ml of cell suspension was used for conventional PI/AO staining (10 μl AO + 10 μl PI + 1 ml PBS); exactly 50 μl of the stain mixture (AO/PI) was added to the tested wells, which were allowed to stand for 30 s at room temperature (RT). The dye was then discarded, and photographs were taken directly under a Leica fluorescence microscope [34].
Cell treatment for glycolysis pathway evaluation
The AMJ13, MCF-7, and REF cell lines were seeded in 96-well cell culture plates at a density of 1 × 10⁴ cells/ml and incubated overnight. After confluence, the cells were exposed to 62.5 µg/ml MH and NDV at MOI 2, alone or in combination, and compared with the control (untreated cells) [35]. After 72 h, cell lysates were collected, normalized, and stored at −86 °C until use.
HK activity assay
HK activity levels were measured in treated and untreated cell samples through a colorimetric method using a Hexokinase Assay Kit (ElabScience, USA) per the manufacturer's recommendation. The detection principle of the kit is as follows: glucose is converted to glucose-6-phosphate by hexokinase; glucose-6-phosphate is then oxidized by glucose-6-phosphate dehydrogenase (G6PD) to produce NADH, which reduces a colorless substrate to a colored solution, so HK activity can be calculated by determining the absorbance at 340 nm.
Pyruvate assay
Pyruvate levels were measured using a colorimetric assay kit according to the manufacturer's instructions (ElabScience, USA). The detection principle is that pyruvic acid reacts with a chromogenic reagent to form a reddish-brown solution, and the color intensity is directly related to the pyruvate content. The OD values of each sample were measured at 505 nm using a spectrophotometer.
ATP assay
ATP was measured through a colorimetric method using an ATP assay kit (ElabScience, USA). Cell culture samples were prepared as follows: treated and untreated cell samples were collected and centrifuged at 1000–1500 r/min for 10 min. The supernatant was removed, and the cell sediment was retained (approximately 10⁶ cells/ml). The cell suspension was prepared with 0.3–0.5 ml of boiled double-distilled water, allowed to stand in a boiling water bath for 10 min, blended, and finally extracted for 1 min. The sample was centrifuged at 3500 r/min for 10 min, and the supernatant was obtained for detection. The OD values of each tube were measured at 636 nm, following the manufacturer's recommendation.
PH measurements
The cancer cells were seeded at a density of 1 × 10⁴ cells/well in a 96-well microplate and incubated at 37 °C for 72 h until monolayer confluence was achieved, as observed under an inverted microscope. The cells were exposed to MH (62.5 μg/ml) and NDV (MOI 2) and incubated at 37 °C for 72 h. The medium supernatant was collected, and the pH values were measured using a pH meter and litmus paper and compared with those of the control [36].
All results are presented as mean ± SD or mean ± SEM. Unpaired t-tests and statistical analyses were performed with Excel version 10 and GraphPad Prism version 7 (USA). CompuSyn software was used to compare the differences between groups under different conditions. The level of significance was set at P < 0.05.
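As an illustration of the unpaired comparison used throughout (the authors used GraphPad Prism; SciPy is shown here only as an equivalent sketch, and the replicate values are hypothetical):

```python
from scipy import stats

# Hypothetical hexokinase activity replicates (arbitrary units)
combination = [2.1, 2.4, 1.9]
ndv_alone   = [4.8, 5.1, 4.6]

t, p = stats.ttest_ind(combination, ndv_alone)  # unpaired two-sample t-test
print(f"t = {t:.2f}, p = {p:.4f}, significant at 0.05: {p < 0.05}")
```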
Cytotoxicity of NDV and MH against breast cancer and normal cell lines
The MTT cytotoxicity assay was used to evaluate the effect of different concentrations of MH, and of NDV over a range of MOIs, against cancer and normal cells (Fig. 1). There was no noticeable cytotoxicity (CT%) of MH against normal REF cells, with CT% ranging from 1.67 to 24.72% at the higher concentrations, whereas cytotoxicity against breast cancer cells was higher, ranging from 27.29 to 58.64% for AMJ13 cells and from 26.26 to 60.49% for MCF-7 cells after MH treatment (Fig. 1a–c). NDV virotherapy did not induce a cytotoxic effect against normal embryonic REF cells (Fig. 1f). Breast cancer cells were more sensitive to NDV virotherapy, with CT% ranging from 24.69 to 64.26% for AMJ13 and from 23.95 to 62.02% for MCF-7 cells after NDV treatment (Fig. 1d–f). The cytotoxicity assay analysis yielded IC50 values for MH of 486.9 µg/ml (REF), 124.7 µg/ml (AMJ13), and 122.6 µg/ml (MCF-7), and IC50 values for NDV of MOI 57.5 (REF), 1.648 (AMJ13), and 1.561 (MCF-7). Therefore, we chose doses around the IC50 values of MH and NDV for the combination study: MOI 0.3, 1, and 2 for NDV and 62.5, 125, and 250 μg/ml for MH.
MH and oncolytic AMHA1 NDV are cytotoxic against human AMJ13 and MCF-7 breast cancer cells, but not cytotoxic to normal embryonic REF cells. The cells were treated with (a–c) D-Mannoheptulose (MH) (13.125, 26.25, 52.5, 105, 210, 840, and 1680 μg/ml) or (d–f) NDV (MOI 0.1, 0.2, 0.4, 0.8, 1.6, 3.2, 6.4, and 12.8) for 72 h. Cytotoxicity was investigated using the MTT assay, yielding IC50 values for MH of 486.9 µg/ml (REF), 124.7 µg/ml (AMJ13), and 122.6 µg/ml (MCF-7), and IC50 values for NDV of MOI 57.5 (REF), 1.648 (AMJ13), and 1.561 (MCF-7). All data shown are mean ± SEM from three independent experiments
Combination cytotoxicity assays and Chou–Talalay analysis of cell lines
To investigate the effects of oncolytic NDV and MH combination therapy, we examined the cytotoxicity of NDV (MOI 0.3, 1, and 2) combined with MH (62.5, 125, and 250 μg/ml). Synergism was observed at all combined doses against both breast cancer cell lines (AMJ13 and MCF-7) (Fig. 2a, b), whereas no synergistic relationships were detected among treatments against the non-cancerous REF cell line (Fig. 2e).
A combination of NDV and MH showed superior anticancer activity in comparison to the monotherapies in both AMJ13 and MCF-7 breast cancer cells. However, there was no enhanced toxicity against non-cancerous REF cells. (a, c, and e) AMJ13, MCF-7, and REF cells were treated with NDV (MOI 0.3, 1, and 2) and with MH (62.5, 125, and 250 μg/ml), and cell viability was then measured by MTT assay. (b, d, and f) Normalized isobolograms of the nonconstant combination ratios determined by the Chou-Talalay method, where the CI value quantitatively defines synergism (CI < 0.9), additive effect (CI = 0.9–1.1), and antagonism (CI > 1.1). All data shown are mean ± SEM (*P < 0.05 compared to mono-treatments) from three different experiments
The CI was estimated from the dose–effect data of the single and combined treatments using the CompuSyn isobologram. CI < 1 indicates synergism; CI = 1–1.1 indicates an additive effect; and CI > 1.1 indicates antagonism. The AMJ13 cell line had CI < 1 at eight combination points, indicating a synergistic interaction between NDV and MH, and an additive effect was seen at one combination point (Table 1A; Fig. 2a, b). For the MCF-7 cell line, the combination points indicated a synergistic interaction between NDV and MH at all points (Table 1B; Fig. 2c, d). In the normal REF cell line, all combination points had CI > 1, indicating antagonism or an additive effect; this can be neglected because no tested concentration produced a killing effect of 50% or more (Table 1C; Fig. 2e, f).
Table 1 Cytotoxicity of NDV and MH combination or alone against AMJ13, MCF-7, and REF cells
Apoptosis and morphological changes of breast cancer and normal cell lines
The morphological changes exhibited by treated cells after 72 h of treatment were attributed to an intense cytopathic effect of the combination therapy. In the AO/PI assay, as seen under the fluorescence microscope, untreated cells appeared green (viable cells), whereas apoptotic cells treated with MH and NDV appeared yellow or orange (dead cells) (Fig. 3). Morphological changes and apoptosis were more intense in cells treated with combined NDV and MH than in those treated with NDV or MH alone, indicating that the synergism between the inhibitor and the virus enhanced the morphological changes and the percentage of apoptosis in cancer cells (Fig. 3j, k). We noticed weaker, less intense morphological changes and apoptosis in the normal REF cell line (Fig. 3m) than in the breast cancer cell lines treated with MH, NDV, or their combination.
Investigation of the ability of MH-NDV combined therapy to induce apoptosis in treated cells using acridine orange and propidium iodide. (a–d) AMJ13 cancer cells show that MH-NDV induces apoptosis, as evidenced by red-stained cells; untreated control cells emit green fluorescence. (e–h) MCF-7 cells show that the number of apoptotic cells is higher with the combination treatment. (i–l) There was no effect of MH-NDV combination therapy against REF cells. (j, k) MH-NDV-treated AMJ13 and MCF-7 breast cancer cells showed significantly more apoptotic cell death than the monotherapies and the untreated control cells. (m) No significant changes under any treatment modality. Values represent the mean ± SD. *P < 0.05, **P < 0.01, ***P < 0.001, and ****P < 0.0001. The magnification of all images was 400×
Hexokinase activity assay analysis indicates strong HK enzyme activity reduction in both breast cancer cell lines treated by MH-NDV combination therapy but not in normal embryonic cells
In the current experiment, we quantified and analyzed HK enzyme activity in both breast cancer cell lines and the normal embryonic cells. HK activity was evaluated by comparing treated and untreated cells at 72 h (Fig. 4a). We identified a significant reduction in HK enzyme activity under all treatment modalities in the cancer cells but not in the normal cells. The results showed that HK enzyme activity was significantly more reduced in MH-NDV combination-treated breast cancer cells than with the single treatment modalities.
MH–NDV combination efficiently inhibits glycolysis products in the treated breast cancer cells, but not in normal cells, in comparison to the monotherapies. (a) MH–NDV combined therapy induces a significant decrease in hexokinase activity in the AMJ13 and MCF-7 cancer cell lines, whereas there is no significant reduction in normal REF cells. (b, c) Pyruvate and ATP levels in cancer cells were significantly reduced by MH-NDV combination therapy in comparison to both monotherapies and untreated control cells, whereas the reduction in normal REF cells was not significant. (d) Measurements of pH levels in AMJ13 and MCF-7 breast cancer cells indicate that the supernatant of MH-NDV-treated cells was significantly more alkaline than that of both monotherapies, whereas untreated control cells showed acidity, as did the control and treated non-cancerous REF cells. All data shown are mean ± SEM (*P < 0.05) from three different experiments, analyzed by unpaired t-test; a star on the MH or NDV column means it is significant compared to the other monotherapy, a star on the Com column means it is significant compared to the monotherapies (MH and NDV), and a star on the control column means it is significant compared to MH, NDV, and Com
MH–NDV combination efficiently inhibits glycolysis products in the treated breast cancer cells but not in normal cells
To further investigate the mechanism by which MH–NDV inhibits breast cancer cell proliferation, we examined whether MH–NDV efficiently decreases the cancer cell glycolysis products pyruvate and ATP and the acidity (representing lactic acid) compared to the monotherapies. Remarkably, MH–NDV induced a significant reduction in the levels of pyruvate, ATP, and acidity in both breast cancer cell lines but not in normal embryonic REF cells (Fig. 4b–d).
Malignant cells use glycolysis as their main source of energy. The in vitro results of this study showed that increasing the concentration of MH and the MOI of NDV increased cytotoxicity and enhanced the antiproliferative effect against breast cancer cell lines but not against normal cells. The MH-NDV combination, in turn, was found to reduce HK activity, pyruvate, ATP, and acidity levels. Our findings are similar to previous results showing that NDV in combination with 2-deoxyglucose may inhibit glycolysis as the main pathway to induce a breast cancer-killing effect [30]. Moreover, inhibiting glycolysis with 3-bromopyruvate and 2-deoxyglucose results in mitochondrial pathway-induced apoptosis in cancer cells [37]. We found that MH and NDV had a significant effect on breast cancer cells but a non-significant effect on normal cells, and the combination of NDV and MH had the strongest effect on the cancer cell lines in comparison to the monotherapies. CI values revealed high synergism between MH and NDV (CI < 1) in both MCF-7 and AMJ13 cell lines, whereas all values for the REF cell line showed no synergism, with negligible cytotoxicity, since no tested dose produced a death percentage above 50%.
AO/PI assay observations revealed that the combination therapy was the best inducer of apoptosis, which is compatible with our previous results showing that NDV induces apoptosis in virus-treated cells [38]. Furthermore, glycolysis inhibitors have been found to enhance apoptosis and mitochondrial damage [38]. Apoptosis is essential for the prevention of tumor formation and cancer growth [39]. Our results support the aim of our study.
In addition, our results demonstrated that MH-NDV inhibits the glycolytic pathway in the MCF-7 and AMJ13 cell lines by suppressing HK activity more than the specific inhibitor alone and more than NDV treatment alone. Wang et al. [40] described HK2 targeting as modulating the Warburg effect to stimulate cancer cell apoptosis. HK inhibition can cause ATP depletion, thus resulting in an insufficient energy supply for cancer cell mitosis, proliferation, and invasion, as previously described [39]. HK activity in the treated breast cancer cell lines was lower than that in control cells, whereas only a non-significant reduction in HK activity was noticed in the normal REF cell line.
Patra et al. [14] discovered that deletion of the HK2 gene in adult mice does not considerably disturb normal tissues. Moreover, MH is a specific inhibitor of HK II [19], and normal cells rely more on HK I [41], which explains why REF cells showed only a mild, non-significant reduction in HK activity. Several studies have described the role of HK2 as essential in tumor initiation and development in breast cancer; therefore, HK2 deletion as a cancer treatment has therapeutic value with no adverse physiological side effects [14, 42], and this effect prevents cancer cell proliferation. Thus, targeting this key enzyme with inhibitors blocks glycolysis and thereby suppresses cancer cell proliferation [43]. We noticed a significant reduction in pyruvate concentration after treatment with the combination in the breast cancer cell lines, and a non-significant reduction in the normal REF cell line, after 72 h of treatment. Given that pyruvate concentration depends on the action of HK in the glycolysis pathway [44], the decrease in HK activity resulted in a deficiency in pyruvate concentration, as confirmed by our experiment.
In line with the previous experiments, we noticed that the MH-NDV combination suppresses the generation of intracellular ATP: the ATP concentrations in AMJ13 and MCF-7 cell lines treated with MH and NDV were significantly lower than those of the control. Reducing ATP levels effectively inhibits energy metabolism in MCF-7 and MDA-MB-231 cells. When glycolysis is inhibited, lactate production is completely terminated, and the intracellular ATP concentration abruptly decreases [45]. These phenomena are consistent with the results of our study. Our results also provide further evidence that glycolysis is increased in transformed (cancer) cells in comparison with normal cells, as reported before [46].
The pH values of the supernatants of treated breast cancer cell lines were higher than those of the control, whereas a non-significant effect was detected in the normal REF cell line after treatment with MH and NDV. Acidity decreased because the lactate concentration decreased as a result of the reduced pyruvate concentration, which inhibited the growth and proliferation of cancer cells. It has been shown that lactate deficiency reduces the acidity of the cell environment [38]. This effect is favorable for preventing cancer cell proliferation because tumors and cancer cells grow in acidic environments.
Our results revealed that MH, NDV, and their combination can inhibit the growth and proliferation of breast cancer cell lines by increasing cytotoxicity through inhibition of the glycolysis pathway and induction of apoptosis. These effects were attributed to specific reductions in HK activity by the MH-NDV combination, leading to decreases in pyruvate, ATP, and micro-environmental acidity. In addition, our results showed that the combined MH and NDV treatment had potent cytotoxic effects against breast cancer cell lines but not against the normal REF cell line, which supports its safety. This treatment enhanced cancer cell apoptosis as a result of glycolysis inhibition. Our investigation demonstrated the effects of MH and NDV and the strong synergism between these compounds against cancer cells in vitro. This strategy has potential applications as an effective cancer treatment, and our observations will provide new insight into the development of therapeutic strategies for breast cancer and other types of cancer in the near future.
Data are available on request from the authors: the data that support the findings of this study are available from the corresponding author upon reasonable request.
Pelicano H, Martin DS, Xu RH, Huang P. Glycolysis inhibition for anticancer treatment. Oncogene. 2006;25(34):4633–46.
Warburg O. On the origin of cancer cells. Science. 1956;123(3191):309–14.
Alfarouk KO, Verduzco D, Rauch C, Muddathir AK, Adil HB, Elhassan GO, et al. Glycolysis, tumor metabolism, cancer growth and dissemination. A new pH-based etiopathogenic perspective and therapeutic approach to an old cancer question. Oncoscience. 2014;1(12):777.
Aft RL, Zhang FW, Gius D. Evaluation of 2-deoxy-d-glucose as a chemotherapeutic agent: mechanism of cell death. Br J Cancer. 2002;87(7):805–12.
Ciavardelli D, Rossi C, Barcaroli D, Volpe S, Consalvo A, Zucchelli M, et al. Breast cancer stem cells rely on fermentative glycolysis and are sensitive to 2-deoxyglucose treatment. Cell Death Dis. 2014;5(7):e1336.
Wilson JE. Isozymes of mammalian hexokinase: structure, subcellular localization and metabolic function. J Exp Biol. 2003;206(12):2049–57.
Robey RB, Hay N. Mitochondrial hexokinases, novel mediators of the antiapoptotic effects of growth factors and Akt. Oncogene. 2006;25(34):4683–96.
Wolf A, Agnihotri S, Micallef J, Mukherjee J, Sabha N, Cairns R, et al. Hexokinase 2 is a key mediator of aerobic glycolysis and promotes tumor growth in human glioblastoma multiforme. J Exp Med. 2011;208(2):313–26.
Brown RS, Goodman TM, Zasadny KR, Greenson JK, Wahl RL. Expression of hexokinase II and Glut-1 in untreated human breast cancer. Nuclear Med Biol. 2002;29(4):443–53.
Sato-Tadano A, Suzuki T, Amari M, Takagi K, Miki Y, Tamaki K, et al. Hexokinase II in breast carcinoma: a potent prognostic factor associated with hypoxia-inducible factor-1α and Ki-67. Cancer Sci. 2013;104(10):1380–8.
Liu X, Miao W, Huang M, Li L, Dai X, Wang Y. Elevated Hexokinase II expression confers acquired resistance to 4-Hydroxytamoxifen in breast cancer cells. Mol Cell Proteomics. 2019;18(11):2273–84.
Yang T, Ren C, Qiao P, Han X, Wang L, Lv S, et al. PIM2-mediated phosphorylation of hexokinase 2 is critical for tumor growth and paclitaxel resistance in breast cancer. Oncogene. 2018;37(45):5997–6009.
Zhang X-Y, Zhang M, Cong Q, Zhang M-X, Zhang M-Y, Lu Y-Y, et al. Hexokinase 2 confers resistance to cisplatin in ovarian cancer cells by enhancing cisplatin-induced autophagy. Int J Biochem Cell Biol. 2018;95:9–16.
Patra KC, Wang Q, Bhaskar PT, Miller L, Wang Z, Wheaton W, et al. Hexokinase 2 is required for tumor initiation and maintenance and its systemic deletion is therapeutic in mouse models of cancer. Cancer Cell. 2013;24(2):213–28.
Xu LZ, Weber IT, Harrison RW, Gidh-Jain M, Pilkis SJ. Sugar specificity of human.beta.-cell Glucokinase: correlation of molecular models with kinetic measurements. Biochemistry. 1995;34(18):6083–92.
Board M, Colquhoun A, Newsholme EA. High Km glucose-phosphorylating (glucokinase) activities in a range of tumor cell lines and inhibition of rates of tumor growth by the specific enzyme inhibitor mannoheptulose. Cancer Res. 1995;55(15):3278–85.
Jordan S, Tung N, Casanova-Acebes M, Chang C, Cantoni C, Zhang D, et al. Dietary Intake regulates the circulating inflammatory monocyte pool. Cell. 2019;178(5):1102.
Nordal A, Benson A. Isolation of mannoheptulose and identification of its phosphate in Avocado Leaves1. J Am Chem Soc. 1954;76(20):5054–5.
Dakubo GD. Mitochondrial genetics and cancer: Springer Science & Business Media; 2010.
Kirn D, Martuza RL, Zwiebel J. Replication-selective virotherapy for cancer: biological principles, risk management and future directions. Nat Med. 2001;7(7):781–7.
Cassel WA, Garrett RE. Newcastle disease virus as an Antineoplastic Agent. Cancer. 1965;18:863–8.
Al-Shammari AM, Al-Nassrawei HA, Kadhim AM. Isolation and sero-diagnosis of newcastle disease virus infection in human and chicken poultry flocks in three cities of middle Euphrates. Kufa J Veterinary Med Sci. 2014;5(1):16–21.
Al-Shammari AM, Rameez H, Al-Taee MF. Newcastle disease virus, rituximab, and doxorubicin combination as anti-hematological malignancy therapy. Oncolytic Virotherapy. 2016;5:27–34.
Al-Shammari AM, Humadi TJ, Al-Taee EH, Al-Atabi SM, Yaseen NY. Oncolytic newcastle disease virus iraqi virulent strain induce apoptosis in vitro through intrinsic pathway and association of both intrinsic and extrinsic pathways in vivo. Mol Therapy. 2015;23(S1):S173–4.
Mohammed MS, Al-Taee MF, Al-Shammari AM. Caspase dependent and independent anti- hematological malignancy activity of AMHA1 attenuated newcastle disease virus. Int J Mol Cell Med. 2019;8(3):211.
Deng X, Cong Y, Yin R, Yang G, Ding C, Yu S, et al. Proteomic analysis of chicken peripheral blood mononuclear cells after infection by Newcastle disease virus. J Veterinary Sci. 2014;15(4):511–7.
Kennedy BE, Sadek M, Gujar SA. Targeted metabolic reprogramming to improve the efficacy of oncolytic virus therapy. Mol Ther. 2020;28:1417–21.
Dyer A, Schoeps B, Frost S, Jakeman P, Scott EM, Freedman J, et al. Antagonism of glycolysis and reductive carboxylation of glutamine potentiates activity of oncolytic adenoviruses in cancer cells. Cancer Res. 2019;79(2):331–45.
Kennedy BE, Murphy JP, Clements DR, Konda P, Holay N, Kim Y, et al. Inhibition of pyruvate dehydrogenase kinase enhances the antitumor efficacy of oncolytic reovirus. Cancer Res. 2019;79(15):3824–36.
Al-Shammari AM, Abdullah AH, Allami ZM, Yaseen NY. 2-Deoxyglucose and newcastle disease virus synergize to kill breast cancer cells by inhibition of glycolysis pathway through Glyceraldehyde3-Phosphate downregulation. Front Mol Biosci. 2019;6:90.
Salih RH, Odisho SM, Al-Shammari AM, Ibrahim OMS. Antiviral effects of olea europaea leaves extract and interferon-beta on gene expression of newcastle disease virus. Adv Anim Vet Sci. 2017;5(11):436–45.
Al-Shammari A, Salman M, Saihood Y, Yaseen N, Raed K, Shaker H, et al. In vitro synergistic enhancement of newcastle disease virus to 5-fluorouracil cytotoxicity against tumor cells. Biomedicines. 2016;4(1):3.
Chou T-C. Drug combination studies and their synergy quantification using the Chou-Talalay method. Cancer Res. 2010;70(2):440–6.
Ali Z, Jabir M, Al- Shammari A. Gold nanoparticles inhibiting proliferation of human breast cancer cell line. Res J Biotechnol. 2019;14(S1):79–82.
Jabur AR, Al-Hassani ES, Al-Shammari AM, Najim MA, Hassan AA, Ahmed AA. Evaluation of stem cells' growth on electrospun polycaprolactone (PCL) scaffolds used for soft tissue applications. Energy Procedia. 2017;119:61–71.
TeSlaa T, Teitell MA. Techniques to monitor glycolysis. Methods Enzymol. 2014;542:91–114.
Danos M, Taylor WA, Hatch GM. Mitochondrial monolysocardiolipin acyltransferase is elevated in the surviving population of H9c2 cardiac myoblast cells exposed to 2-deoxyglucose-induced apoptosis. Biochem Cell Biol. 2008;86(1):11–20.
Arora R, Schmitt D, Karanam B, Tan M, Yates C, Dean-Colomb W. Inhibition of the Warburg effect with a natural compound reveals a novel measurement for determining the metastatic potential of breast cancers. Oncotarget. 2015;6(2):662.
Coore H, Randle P. Inhibition of glucose phosphorylation by mannoheptulose. Biochem J. 1964;91(1):56.
Wang L, Wang J, Xiong H, Wu F, Lan T, Zhang Y, et al. Co-targeting hexokinase 2-mediated Warburg effect and ULK1-dependent autophagy suppresses tumor growth of PTEN- and TP53-deficiency-driven castration-resistant prostate cancer. EBioMedicine. 2016;7:50–61.
Xu S, Catapang A, Braas D, Stiles L, Doh HM, Lee JT, et al. A precision therapeutic strategy for hexokinase 1-null, hexokinase 2-positive cancers. Cancer & Metabol. 2018;6(1):7.
Targeting Hexokinase 2 May Block cancer glucose metabolism. Cancer Discov. 2013;3(10):F25-OF.
Mathupala S, Ko Ya, Pedersen PL. Hexokinase II: cancer's double-edged sword acting as both facilitator and gatekeeper of malignancy when bound to mitochondria. Oncogene. 2006;25(34):4777.
Ding Y, Liu Z, Desai S, Zhao Y, Liu H, Pannell LK, et al. Receptor tyrosine kinase ErbB2 translocates into mitochondria and regulates cellular metabolism. Nat Commun. 2012;3:1271.
Gonin-Giraud S, Mathieu A, Diocou S, Tomkowiak M, Delorme G, Marvel J. Decreased glycolytic metabolism contributes to but is not the inducer of apoptosis following IL-3-starvation. Cell Death Differ. 2002;9(10):1147.
Mikirova NA, Casciari J, Gonzalez MJ, Miranda-Massari JR, Riordan N, Duconge J. Bioenergetics of human cancer cells and normal cells during proliferation and differentiation. Cancer Ther OncolInt J. 2017;3:1–8.
The authors would like to thank the Department of Experimental Therapy, Iraqi Center for Cancer and Medical Genetic Research, Mustansiriyah University for their support during the work.
The authors declare that no specific funding was provided for the work.
Department of Medical Chemistry, College of Medicine, University of Al-Qadisiyah, Al Diwaniyah, Iraq
Ahmed Ghdhban Al-Ziaydi
Experimental Therapy, Iraqi Center for Cancer and Medical Genetics Research, Mustansiriyah University, Baghdad, Iraq
Ahmed Majeed Al-Shammari
College of Medicine, University of Al-Nahrain, Baghdad, Iraq
Mohammed I. Hamzah
Department of Microbiology, College of Medicine, Al-Nahrain University, Baghdad, Iraq
Haider Sabah Kadhim
Division of Biotechnology, Department of Applied Science, University of Technology, Baghdad, Iraq
Majid Sakhi Jabir
AMA, HSK and MIH designed the experiments. AGA, AMA, and MSJ conducted experiments. AGA, AMA, MIH wrote the manuscript. AGA, AMA, MIH and HSK approved the drafts. All authors read and approved the final manuscript.
Correspondence to Ahmed Majeed Al-Shammari.
The authors declare that they have no competing financial interests.
Al-Ziaydi, A.G., Al-Shammari, A.M., Hamzah, M.I. et al. Hexokinase inhibition using D-Mannoheptulose enhances oncolytic newcastle disease virus-mediated killing of breast cancer cells. Cancer Cell Int 20, 420 (2020). https://doi.org/10.1186/s12935-020-01514-2
Anticancer therapy
Warburg effect
Hexokinase inhibitor
Pyruvate
Minimizing a cost function using an iterative search-for-a-minimum method
I want to estimate the parameters $\hat{\theta}$ of a model using an iterative search for the minimum of a cost function. The cost function is defined as follows:
$$ V_N(\hat{\theta}) = \frac{1}{N}\sum_{i=1}^N(y(t_k)-\hat{y}(t_k|\theta))^2 $$
where $\ y $ is the output of the system and $\ \hat{y} $ is the estimated output of the system. The system is described by the following differential equations:
$$ \dot{h}(t) = -\theta\sqrt{2g}\sqrt{h(t)}+u(t) $$
$$ \hat{y}(t|\theta) = \theta\sqrt{2g}\sqrt{h(t)} $$
where $\ u $ is the input to the system. Suppose that data for both input and output of the system are collected and available. The equation for updating the estimate for the unknown parameters $\ \theta $ is:
$$ \hat{\theta}_{i+1} = \hat{\theta}_i-μ_i[\frac{d^2}{d\theta^2}(V_N(\hat{\theta}))]^{-1}\frac{d}{d\theta}(V_N(\hat{\theta})) $$ where $\ μ_i $ is a step length determined so that: $\ V_N(\hat{\theta}_{i+1}) < V_N(\hat{\theta}_i) $. The derivatives of the cost function are:
$$ \frac{d^2}{d\theta^2}V_N(\hat{\theta}) = \frac{1}{N}\sum_{i=1}^N\left(\frac{d}{d\hat{\theta}}\hat{y}(t_i|\hat{\theta})\right)\left(\frac{d}{d\hat{\theta}}\hat{y}(t_i|\hat{\theta})\right)^T-\frac{1}{N}\sum_{i=1}^N\frac{d^2}{d\hat{\theta}^2}\hat{y}(t_i|\hat{\theta})\left(y(t_i)-\hat{y}(t_i|\hat{\theta})\right) $$
which by neglecting the second sum comes down to the Gauss-Newton Method. Considering this the whole problem is solved by finding the way to compute:
$$ ψ(t,\hat{\theta})=\frac{d}{d\hat{\theta}}\hat{y}(t|\hat{\theta}) $$
By working out the math, the following differential equations are obtained:
$$ z(t,\hat{\theta}) = \frac{d}{d\hat{\theta}}x(t,\hat{\theta}) $$ $$ \frac{d}{dt}z(t,\hat{\theta}) = -\frac{\hat{\theta}\sqrt{2g}}{2\sqrt{x(t,\hat{\theta})}}z(t,\hat{\theta})-\sqrt{2gx(t,\hat{\theta})} $$
$$ ψ^T(t,\hat{\theta}) = \frac{d}{d\hat{\theta}}\hat{y}(t,\hat{\theta}) = \frac{\hat{\theta}\sqrt{2g}}{2\sqrt{x(t,\hat{\theta})}}z(t,\hat{\theta})+\sqrt{2gx(t,\hat{\theta})} $$
In order to calculate $\ \frac{d}{d\hat{\theta}}{\hat{y}(t,\hat{\theta})} $ we need to first compute $\ x(t,\hat{\theta}) $ and $\ z(t,\hat{\theta}) $ since $\ g $ is the gravity constant. Suppose that we have an initial guess for the value of $\ \hat{\theta} $, now $\ x(t,\hat{\theta}) $ is obtained by the differential equation $\ \dot{x}(t,\hat{\theta}) $ since we also have the value of the input data $\ u $ and some initial condition for $\ x(t,\hat{\theta}) $. My question is how to compute $\ z(t,\hat{\theta}) $ since there are two equations which give as output $\ z(t,\hat{\theta}) $ ?
Is the sequence in which the equations should be computed the following:
Compute $\ x(t,\theta) $
Compute $\ z(t,\theta) $
Compute $\ \frac{d}{d\theta}\hat{y}(t,\theta) $
Compute $\ \frac{d^2}{d\theta^2}V_N(\theta) $
Compute $\ \frac{d}{d\theta}V_N(\theta) $
Update estimation of $\ \hat{\theta} $
Would one of MATLAB's ODE solvers do the work in one go?
optimization nonlinear-equations iterative-method differential-equations modeling
Teo Protoulis
I don't see how two equations give $z$ as output. Nevertheless, your sequence of computations looks reasonable, except I would combine steps one and two into:
Simultaneously solve for $x$ and the sensitivities $z$. This is an extended ODE system that you could throw at a built-in MATLAB ODE solver:
$$ \begin{bmatrix} x'(t,\hat{\theta}) \\ z'(t,\hat{\theta}) \end{bmatrix} = \begin{bmatrix} -\theta\sqrt{2g}\sqrt{x(t)}+u(t) \\ -\frac{\hat{\theta}\sqrt{2g}}{2\sqrt{x(t,\hat{\theta})}}z(t,\hat{\theta})-\sqrt{2gx(t,\hat{\theta})} \end{bmatrix} $$
If your error $V$ used a continuous $L_2$ norm instead of a discrete $\ell_2$ norm, the integral and its sensitivities could be computed as the solution to an ODE, extending the system even more. Maybe this would be preferable.
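To make "do the work in one bunch" concrete, here is a minimal sketch of the idea in Python/SciPy (shown only because a runnable example is easy to give here; in MATLAB, `ode45` with the same augmented right-hand side plays the identical role). The input signal, initial level, synthetic data, and starting guess below are all made up for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

g = 9.81
u = lambda t: 0.5 + 0.1 * np.sin(0.2 * t)          # hypothetical input signal

def augmented_rhs(t, s, theta):
    """s = [x, z]: the state x(t, theta) and its sensitivity z = dx/dtheta."""
    x, z = s
    sq = np.sqrt(max(x, 1e-9))                      # guard against x -> 0
    dx = -theta * np.sqrt(2 * g) * sq + u(t)
    dz = -theta * np.sqrt(2 * g) / (2 * sq) * z - np.sqrt(2 * g) * sq
    return [dx, dz]

def predict(theta, t_data, x0=1.0):
    """Integrate the augmented system; return y_hat and psi = dy_hat/dtheta."""
    sol = solve_ivp(augmented_rhs, (t_data[0], t_data[-1]), [x0, 0.0],
                    t_eval=t_data, args=(theta,), rtol=1e-8, atol=1e-10)
    x, z = sol.y
    sq = np.sqrt(x)
    y_hat = theta * np.sqrt(2 * g) * sq
    psi = np.sqrt(2 * g) * sq + theta * np.sqrt(2 * g) / (2 * sq) * z
    return y_hat, psi

# Synthetic data generated with a "true" theta, then a damped Gauss-Newton loop
t_data = np.linspace(0.0, 20.0, 50)
y_data, _ = predict(0.07, t_data)                   # pretend measurements
theta = 0.02                                        # initial guess
for _ in range(20):
    y_hat, psi = predict(theta, t_data)
    r = y_data - y_hat
    grad = -2.0 / r.size * np.sum(psi * r)          # dV_N/dtheta
    hess = 2.0 / r.size * np.sum(psi * psi)         # Gauss-Newton approximation
    step, mu = grad / hess, 1.0
    for _ in range(30):                             # backtracking so V_N decreases
        if np.sum((y_data - predict(theta - mu * step, t_data)[0])**2) < np.sum(r**2):
            break
        mu /= 2.0
    theta -= mu * step
print("estimated theta:", theta)
```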
Since you're using MATLAB, you may be interested in the MATLODE package, which provides several types of time integration schemes. Its tangent linear and adjoint model integration can compute sensitivities with respect to model parameters as well as initial conditions. Note its discrete adjoint integrators may be of interest for efficiently computing the application of $\frac{d}{d \widehat{\theta}} \widehat{y}^T$ with a vector as needed in the Hessian approximation.
I also thought initially of moving into continuous time for computational purposes. So, you're saying it would be better for the cost function to be defined in continuous $L_2$?
– Teo Protoulis
It would allow you to consolidate everything into a single ODE solve and probably be more accurate at measuring the error between $y$ and $\hat{y}$. There are probably advantages to leaving it as a discrete $\ell_2$. Maybe less storage?
I believe less storage for sure, but considering the fact that a good estimate can be found in not so many iterations, you are right; I will go for accuracy. Thanks for the help!
I have been thinking that it might be easier if one first changes the variables in the differential equation. That way one can bypass the function $h(t)$ and deal with fewer functions. Since $$\hat{y}(t) = \hat{y}_{\theta}(t) = \hat{y}(t \,|\, \theta) = \theta \sqrt{2g} \sqrt{h(t)}$$ change the dependent variable $$\hat{y} = \theta \sqrt{2g} \sqrt{h}$$ whose inverse is $$h = \frac{1}{2g\,\theta^2} \, \hat{y}^2$$ Consequently \begin{align} \frac{dh}{dt} = \frac{1}{g\,\theta^2} \, \hat{y} \, \frac{d\hat{y}}{dt} = -\theta \sqrt{2g} \sqrt{h} + u(t) = -\hat{y} + u(t) \end{align} so the differential equation after the change of variables becomes $$\frac{d\hat{y}}{dt} = g\,\theta^2 \frac{u(t)}{\hat{y}} - g\,\theta^2$$ The first variation of $\hat{y}$ with respect to $\theta$, i.e. the first derivative of $\hat{y}$ with respect to $\theta$, satisfies the linear equation $$\frac{d}{dt}\,\partial_{\theta}\hat{y} = -g\,\theta^2 \frac{u(t)}{\hat{y}^2}\, \partial_{\theta}\hat{y} + 2g\,\theta \frac{u(t)}{\hat{y}} - 2g\,\theta$$ Thus, as a whole, the functions $\big(\hat{y}(t\,|\,\theta),\, \partial_{\theta}\hat{y}(t\,|\,\theta)\big)$ satisfy the system of ODEs \begin{align} &\frac{d\hat{y}}{dt} = g\,\theta^2 \frac{u(t)}{\hat{y}} - g\,\theta^2 \\ &\frac{d}{dt}\,\partial_{\theta}\hat{y} = -g\,\theta^2 \frac{u(t)}{\hat{y}^2}\, \partial_{\theta}\hat{y} + 2g\,\theta \frac{u(t)}{\hat{y}} - 2g\,\theta \end{align} The gradient of your functional $$V_N(\theta) = \frac{1}{N}\sum_{k=1}^{N} \Big(\hat{y}(t_k\,|\,\theta) - y(t_k)\Big)^2$$ should be $$\partial_{\theta}V_N(\theta) = \frac{2}{N}\sum_{k=1}^{N} \Big(\hat{y}(t_k\,|\,\theta) - y(t_k)\Big)\, \partial_{\theta}\hat{y}(t_k\,|\,\theta)$$ I do not know how important Newton's method is to you (it is probably faster than what I am going to propose), but if you do not feel like taking second derivatives of $\hat{y}$ with respect to $\theta$, you can first try the slower gradient descent method: $$\theta_{i+1} = \theta_{i} - \mu_i \, \partial_{\theta}V_N(\theta_i)$$ Then the parameter fitting algorithm should be like this:
You start with given data points $y(t_1), \, y(t_2), \, ..., y(t_N)$. Take a reasonable guess $\theta_1$ for the parameter. Assume that at step $n$, you have calculated an approximate parameter $\theta_n$:
Step 1: Use an ODE solver, like Runge-Kutta or something like that, to generate solutions of the initial value problem \begin{align} &\frac{d\hat{y}}{dt} = g \,\theta_n^2 \, \frac{u(t)}{\hat{y}} \, - \, g \,\theta_n^2 \\ &\frac{d}{dt}\,\partial_{\theta}\hat{y} = - \, g \,\theta_n^2 \, \frac{u(t)}{\hat{y}^2}\, \partial_{\theta}\hat{y} \, + \,2\, g \,\theta_n \, \frac{u(t)}{\hat{y}} \, - \,2\, g \,\theta_n \\ &\hat{y}(t_1) = y(t_1)\\ &\partial_{\theta}\hat{y}(t_1) = 0 \end{align} and obtain the sequence of solution data $$\Big\{ \, \big(\,\hat{y}(t_k \, | \, \theta_n),\,\, \partial_{\theta}\hat{y}(t_k \, | \, \theta_n)\, \big) \,\,\, \big| \,\,\, k = 1, 2, 3, ..., N \,\, \Big\}$$
Step 2: The sequence of solution data from step 1 allows us to calculate the cost function $$V_N(\theta_n) =\frac{1}{N}\sum_{k=1}^{N} \Big(\,\hat{y}(t_k \, | \, \theta_n) \, - \,y(t_k) \,\Big)^2$$
Step 3: If $V_{N}(\theta_n) < \varepsilon$, stop the algorithm and take as a solution $\theta = \theta_n$. Otherwise, if the error is $V_N(\theta_n) \geq \varepsilon$, calculate the gradient of $V_N$, using the solution data from step 1: $$\partial_{\theta}V_N(\theta_n) =\frac{2}{N}\sum_{k=1}^{N} \Big(\,\hat{y}(t_k \, | \, \theta_n) \, - \,y(t_k) \,\Big)\, \partial_{\theta}\hat{y}(t_k \, | \, \theta_n)$$
Step 4: Using the gradient calculated in step 3, update the $\theta$ parameter: $$\theta_{n+1} = \theta_{n} - \mu_n \, \partial_{\theta}V_N(\theta_n)$$
Step 5: Go back to step 1 with the newly calculated parameter $\theta_{n+1}$.
Even if you really want Newton's method, you can add the extra differential equations for the second derivatives of $\hat{y}(t \, | \, \theta)$ with respect to $\theta$ in step 1 and add the calculation of the Hessian of the cost function $V_{N}(\theta)$ in step 3.
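For concreteness, here is a minimal sketch of steps 1–5 in Python (rather than MATLAB), assuming a constant inflow $u(t)=u_0$ and synthetic measurement data; the values of $u_0$, the step size, and the "true" parameter are purely illustrative and not taken from the original question.

# Sketch of the gradient-descent fitting loop, under the assumptions above.
import numpy as np
from scipy.integrate import solve_ivp

g, u0 = 9.81, 0.05                          # assumed constant inflow u(t) = u0

def rhs(t, z, theta):
    """Augmented system for (y_hat, d y_hat / d theta)."""
    yh, s = z
    dy = g * theta**2 * u0 / yh - g * theta**2
    ds = -g * theta**2 * u0 / yh**2 * s + 2 * g * theta * u0 / yh - 2 * g * theta
    return [dy, ds]

def simulate(theta, t_k, y1):
    sol = solve_ivp(rhs, (t_k[0], t_k[-1]), [y1, 0.0],
                    t_eval=t_k, args=(theta,), rtol=1e-8, atol=1e-10)
    return sol.y[0], sol.y[1]               # y_hat(t_k) and its sensitivity

t_k = np.linspace(0.1, 10.0, 50)
y_data, _ = simulate(0.3, t_k, y1=0.5)      # synthetic data from a "true" theta

theta, mu = 0.5, 0.2                        # initial guess and illustrative step size
for _ in range(500):
    y_hat, sens = simulate(theta, t_k, y1=y_data[0])
    resid = y_hat - y_data
    V = np.mean(resid**2)                   # step 2: cost V_N(theta)
    if V < 1e-10:                           # step 3: stopping test
        break
    grad = 2.0 * np.mean(resid * sens)      # step 3: gradient of V_N
    theta -= mu * grad                      # step 4: parameter update
print(theta)

Replacing the fixed step size with a line search, or extending the augmented system with second derivatives as suggested above, would give the Newton variant.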
Futurologist
$\begingroup$ What a great answer! Thank you very much, would really like to try both approaches! $\endgroup$
Solving this system of equations numerically
Formulation of the least-squares parameter estimation problem
Online Parameter Estimation using steepest descent
Numerical Solution to Rayleigh Plesset Equation in Python
How to properly compute weights for Weighted Least Squares (WLS)?
Parameters estimation with fewer variables than parameters
Convergence of $\sum(n^3\sin^2n)^{-1}$
I saw a while ago in a book by Clifford Pickover that whether $\displaystyle \sum_{n=1}^\infty\frac1{n^3\sin^2 n}$ converges is open.
I would think that the question of its convergence is really about the density in $\mathbb N$ of the sequence of numerators of the standard convergent approximations to $\pi$ (which, in itself, seems like an interesting question). Naively, the point is that if $n$ is "close" to a whole multiple of $\pi$, then $1/(n^3\sin^2n)$ is "close" to $\frac1{\pi^2 n}$.
[Numerically there is some evidence that only some of these values of $n$ affect the overall behavior of the series. For example, letting $S(k)=\sum_{n=1}^{k}\frac1{n^3\sin^2n}$, one sees that $S(k)$ does not change much in the interval, say, $[50,354]$, with $S(354)<5$. However, $S(355)$ is close to $30$, and note that $355$ is very close to $113\pi$. On the other hand, $S(k)$ does not change much from that point until $k=100000$, where I stopped looking.]
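For readers who want to reproduce this observation, a direct computation of the partial sums (a small sketch using ordinary double-precision arithmetic) shows the jump at $n=355$:

# Partial sums S(k) of 1/(n^3 sin^2 n); S changes little until n = 355,
# which is very close to 113*pi.
import math

S = 0.0
for n in range(1, 100001):
    S += 1.0 / (n**3 * math.sin(n)**2)
    if n in (354, 355, 1000, 100000):
        print(n, S)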
I imagine there is a large body of work within which the question of the convergence of this series would fall naturally, and I would be interested in knowing something about it. Sadly, I'm terribly ignorant in these matters. Even knowing where to look for some information on approximations of $\pi$ by rationals, or an ad hoc approach just tailored to this specific series would be interesting as well.
nt.number-theory sequences-and-series
Andrés E. Caicedo
$\begingroup$ If we replace $n^3$ with $n^2$, what happens? (This isn't a rhetorical question; I'm honestly wondering if anyone knows the answer.) More generally, consider sums $F(a,b) = \sum_{n=1}^\infty 1/(n^a |\sin n|^b)$. Clearly $F(a,b)$ increases with $a$ and decreases with $b$, but when is $F(a,b)$ known to be either finite or infinite? $\endgroup$ – Michael Lugo May 14 '10 at 13:17
$\begingroup$ The convergence of this general form is related to the irrationality measure of $\pi$, that is the infimum of exponents $k$ such that $|\pi-a/b|<1/b^k$ has only finitely many integer solutions. (For $|\sin n|$ to be small, $n$ must be close to an integer multiple $m\pi$ of $\pi$ and then $|\sin n|\sim m|\pi-n/m|$.) Results are known (see for instance planetmath.org/encyclopedia/IrrationalityMeasure.html) and these will yield explicit values of $a$, $b$ for which the series converges, but the proofs are delicate and don't yield the best expected result. $\endgroup$ – Robin Chapman May 14 '10 at 15:26
$\begingroup$ $n^3\sin^2 n<10^{-n}$ — do infinitely many $n$ satisfy this inequality? This is a harder question. $\endgroup$ – Takahiro Waki Jul 24 '16 at 0:59
As Robin Chapman mentions in his comment, the difficulty of investigating the convergence of $$ \sum_{n=1}^\infty\frac1{n^3\sin^2n} $$ is due to lack of knowledge about the behavior of $|n\sin n|$ as $n\to\infty$, while the latter is related to rational approximations to $\pi$ as follows.
Neglecting the terms of the sum for which $n|\sin n|\ge n^\varepsilon$ ($\varepsilon>0$ is arbitrary), as they all contribute only to the `convergent part' of the sum, the question is equivalent to the one for the series $$ \sum_{n:n|\sin n|< n^\varepsilon}\frac1{n^3\sin^2n}. \qquad(1) $$

For any such $n$, let $q=q(n)$ minimize the distance $|\pi q-n|$, so that $|\pi q-n|\le\pi/2$. Then $$ \sin|\pi q-n|=|\sin n|< \frac1{n^{1-\varepsilon}}, $$ so that $|\pi q-n|\le C_1/n^{1-\varepsilon}$ for some absolute constant $C_1$ (here we use that $\sin x\sim x$ as $x\to0$). Therefore, $$ \biggl|\pi-\frac nq\biggr|<\frac{C_1}{qn^{1-\varepsilon}}, $$ equivalently $$ \biggl|\pi-\frac nq\biggr|<\frac{C_2}{n^{2-\varepsilon}} \quad\text{or}\quad \biggl|\pi-\frac nq\biggr|<\frac{C_2'}{q^{2-\varepsilon}} $$ (because $n/q\approx\pi$) for all $n$ participating in the sum (1). It is now clear that the convergence of the sum (1) depends on how often we have $$ \biggl|\pi-\frac nq\biggr|<\frac{C_2'}{q^{2-\varepsilon}} $$ and how small the quantity is in these cases. (Note that it follows from Dirichlet's theorem that an even stronger inequality, $$ \biggl|\pi-\frac nq\biggr|<\frac1{q^2}, $$ happens for infinitely many pairs $n$ and $q$.)

The series (1) converges if and only if $$ \sum_{n:|\pi-n/q|< C_2n^{-2+\varepsilon}}\frac1{n^5|\pi-n/q|^2} $$ converges. We can replace the summation by summing over $q$ (again, for each term $\pi q\approx n$) and then sum the result over all $q$, because the terms corresponding to $|\pi-n/q|\ge C_2n^{-2+\varepsilon}$ do not influence the convergence: $$ \sum_{q=1}^\infty\frac1{q^5|\pi-n/q|^2} =\sum_{q=1}^\infty\frac1{q^3(\pi q-n)^2} \qquad(2) $$ where $n=n(q)$ is now chosen to minimize $|\pi-n/q|$.
Summarizing, the original series converges if and only if the series in (2) converges.
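A direct numerical look at the series in (2) (a sketch in double precision, which is adequate for the range of $q$ used here) makes the structure visible: almost every $q$ contributes a negligible term, and the noticeable spikes come from $q=113$, its small multiples, and a few very small $q$.

# Terms 1/(q^3 (pi*q - n)^2) of series (2) with n = round(pi*q).
import math

total, terms = 0.0, []
for q in range(1, 200001):
    n = round(math.pi * q)
    term = 1.0 / (q**3 * (math.pi * q - n)**2)
    total += term
    terms.append((term, q, n))
print("partial sum up to q = 200000:", total)
for term, q, n in sorted(terms, reverse=True)[:5]:
    print(f"q = {q:6d}, n = {n:7d}, term = {term:.3f}")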
It is already an interesting question of what can be said about the convergence of (2) if we replace $\pi$ by other constant $\alpha$, for example by a "generic irrationality". The series $$ \sum_{q=1}^\infty\frac1{q^3(\alpha q-n)^2} $$ for a real quadratic irrationality $\alpha$ converges because the best approximations are $C_3/q^2\le|\alpha-n/q|\le C_4/q^2$, and they are achieved on the convergents $n/q$ with $q$ increasing geometrically. A more delicate question seems to be for $\alpha=e$, because one third of its convergents satisfies $$ C_3\frac{\log\log q}{q^2\log q}<\biggl|e-\frac pq\biggr|< C_4\frac{\log\log q}{q^2\log q} $$ (see, e.g., [C.S.Davis, Bull. Austral. Math. Soc. 20 (1979) 407--410]). The number $e$, quadratic irrationalities, and even algebraic numbers are `generic' in the sense that their irrationality exponent is known to be 2. What about $\pi$?
The irrationality exponent $\mu=\mu(\alpha)$ of a real irrational number $\alpha$ is defined as the infimum of exponents $\gamma$ such that the inequality $|\alpha-n/q|\le|q|^{-\gamma}$ has only finitely many solutions in $(n,q)\in\Bbb Z^2$ with $q\ne0$. (So, Dirichlet's theorem implies that $\mu(\alpha)\ge2$. At the same time, from metric number theory we know that it is 2 for almost all real irrationals.) Assume that $\mu(\pi)>5/2$; then there are infinitely many solutions to the inequality $$ \biggl|\pi-\frac nq\biggr|<\frac{C_5}{q^{5/2}}, $$ hence infinitely many terms in (2) are bounded below by $1/C_5^2$, so that the series diverges (and (1) does as well). Although the general belief is that $\mu(\pi)=2$, the best known result of V. Salikhov (see this answer by Gerry and my comment) only asserts that $\mu(\pi)<7.6064\dots$.
I hope that this explains the problem of determining the behavior of the series in question.
Wadim Zudilin
$\begingroup$ So am I right in reading this as saying that the series is expected to converge? $\endgroup$ – Lev Borisov Jan 30 '14 at 3:44
$\begingroup$ Lev, yes, this is the expectation. $\endgroup$ – Wadim Zudilin Feb 5 '14 at 11:12
$\begingroup$ why don't you write $|\pi q - n(q)|$ in (2) ? it would be clearer $\endgroup$ – reuns Jul 12 '16 at 6:41
Wadim Zudilin's answer is further extended in http://arxiv.org/abs/1104.5100 (Max A. Alekseyev, On convergence of the Flint Hills series).
Zurab Silagadze
There is an even bigger reduction that can be done:
Theorem: The Flint Hills series converges if and only if the series $$ \sum_{n = 1}^\infty \frac{1}{q_n^3 (q_n\pi - p_n)^2} \qquad{(1)} $$ converges, where $(p_n/q_n)_1^\infty$ is the sequence of convergents of $\pi$.
Proof: Let $$ S = \sum_{q = 1}^\infty \frac{1}{q^3 (q\pi - p)^2}, \qquad{(2)} $$ where $p\in\mathbb N$ is chosen to minimize $|q\pi - p|$. As Wadim Zudilin argued, the Flint Hills series converges if and only if $S$ converges. Now consider the unimodular lattice $\Lambda = \{(q,q\pi - p) : p,q\in\mathbb Z\}$. We can rewrite $S$ as $$ S = \sum_{\substack{(q,r)\in\Lambda^* \\ q > 0 \\ -1/2 < r < 1/2}} \frac{1}{q^3 |r|^2}\cdot $$ Here $\Lambda^* = \Lambda\setminus\{\mathbf 0\}$. Next, using the identity $$ \frac{1}{q^3 r^2} = \int_{s > q} \int_{t > r} \frac{\partial}{\partial s}\frac{\partial}{\partial t}\frac{1}{s^3 t^2} \;dt\;ds = \int_{s > 1} \int_{t > 0} \frac{\partial}{\partial s}\frac{\partial}{\partial t}\frac{1}{s^3 t^2} [s > q][t > r] \;dt\;ds $$ we get $$ S = \int_{s > 1} \int_{t > 0} \frac{\partial}{\partial s}\frac{\partial}{\partial t}\frac{1}{s^3 t^2} \sum_{\substack{(q,r)\in\Lambda^* \\ q > 0 \\ -1/2 < r < 1/2}} [s > q][t > |r|] \;dt\;ds\\ = \int_{s > 1} \int_{t > 0} \frac{\partial}{\partial s}\frac{\partial}{\partial t}\frac{1}{s^3 t^2} \#\big\{(q,r)\in\Lambda^* : 0 < q < s,\; |r| < \min(t,1/2)\big\} \;dt\;ds. \qquad{(3)} $$ We can bound the integrand in two different ways, depending on whether or not $$ N_{s,t} := \#\big\{(q,r)\in\Lambda^* : 0 < q < s,\; |r| < \min(t,1/2)\big\} \leq \max(0,3st - 1/2). \qquad{(4)} $$ If (4) holds, then it can be used to bound the entire integral; I leave it to the reader to verify that the resulting integral converges. So let us consider the cases where (4) fails.
Fix $s > 1$ and $t > 0$, and let $D_{s,t} = (-s,s)\times(-t,t)$. If $D_{s,t}$ contains two linearly independent elements of $\Lambda$, then $D$ contains a fundamental domain for $\Lambda$, say $F$; we have $$ 1 + 2N_{s,t} = \#(\Lambda\cap D_{s,t}) = \sum_{\mathbf x\in\Lambda\cap D_{s,t}} m(\mathbf x + F) = m\left(\bigcup_{\mathbf x\in\Lambda\cap D_{s,t}}(\mathbf x + F)\right) \leq m(2D_{s,t}) = 4st, $$ which implies that (4) holds. Similarly, if $D_{s,t}\cap\Lambda = \{\mathbf 0\}$, then (4) holds.
So if we assume that (4) fails for some pair $(s,t)$, then we have $D_{s,t}\cap\Lambda = D_{s,t}\cap \mathbb Z\mathbf x$ for some $\mathbf x = (q,r)\in D_{s,t}\cap\Lambda^*$. It follows that $$ \max(1,3st - 1/2) \leq N_{s,t} = \left\lfloor \min\left(\frac sq,\frac t{|r|}\right)\right\rfloor \leq \min\left(\frac sq,\frac t{|r|}\right) $$ and thus $$ \frac{st}{q|r|} \geq \min\left(\frac sq,\frac t{|r|}\right)^2 \geq \max(1,3st - 1/2)^2 \geq \max(1,3st - 1/2) \geq 2st, $$ so $q|r| = q|q\pi - p| \leq 1/2$. A well-known theorem now implies that $p/q$ is a convergent of $\pi$, i.e. $(q,p) = (q_n,p_n)$ for some $n\in\mathbb N$. So if we let $$ \Lambda_c = \{(k q_n, k(q_n\pi - p_n)) : n,k\in\mathbb N\} $$ then $$ N_{s,t} = \#(\Lambda_c\cap D_{s,t}). $$ In other words, the only points which are contributing to the integrand of (3) are points which come from $\Lambda_c$. Reversing the argument of (3) now gives $$ S \leq C + \sum_{\substack{(q,r)\in\Lambda_c \\ q > 0 \\ -1/2 < r < 1/2}} \frac{1}{q^3 |r|^2}, $$ where $C < \infty$ is a constant describing an upper bound on the contribution to the integral (3) of pairs $(s,t)$ satisfying (4). Thus, $$ S \leq C + \sum_{n = 1}^\infty \sum_{k = 1}^\infty \frac{1}{(k q_n)^3 (k(q_n \pi - p_n))^2} = C+\zeta(5)\sum_{n = 1}^\infty \frac{1}{q_n^3 (q_n \pi - p_n)^2}\cdot $$ It follows that (1) converges if and only if (2) converges.
If this proof was too technical to follow, I'll try to summarize the main ideas: First of all, any rational number $p/q$ which is not a convergent of $\pi$ must satisfy $q|q\pi - p| > 1/2$ (this is a well-known fact). By itself this fact isn't enough to guarantee that the terms coming from non-convergents won't make the series (2) diverge, since you end up comparing it with the harmonic series, which (just barely) diverges. But that's just the crudest possible bound: most rationals $p/q$ will satisfy $q|q\pi - p| \gg 1$. Since (2) involves a summation over all $q$, there will be a lot of "averaging", and so the "spikes" which occur when $q|q\pi - p|$ is small will be washed out in the long run. In order to formalize this you need to talk about lattices and fundamental domains - basically, the idea is that the number of intersection points of a lattice with a convex centrally symmetric region is about the same as the area of the region except for certain exceptional cases; these exceptional cases turn out to correspond to the convergents of $\pi$.
Corollary: If the exponent of irrationality of $\pi$ is strictly less than $5/2$, then the Flint Hills series converges. Proof: If $\mu(\pi) < 5/2$, then there exists $\varepsilon > 0$ such that for all but finitely many $n$, we have $$ |q_n \pi - p_n| \geq q_n^{-3/2 + \varepsilon}. $$ This gives the following upper bound for (1): $$ \sum_{n = 1}^\infty \frac{1}{q_n^3 (q_n^{-3/2 + \varepsilon})^2} = \sum_{n = 1}^\infty \frac{1}{q_n^{2\varepsilon}}\cdot $$ But since the sequence $(q_n)_1^\infty$ must grow at least exponentially fast, this series converges.
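To see the reduction in the Theorem at work numerically, one can evaluate the terms of (1) at the first well-known convergents of $\pi$ (a sketch; double precision is accurate enough for denominators of this size). Only $355/113$ produces a large term in this range; the open question is whether some far-out convergent does the same.

# Terms of series (1) at the first convergents p_n/q_n of pi.
import math

convergents = [(3, 1), (22, 7), (333, 106), (355, 113),
               (103993, 33102), (104348, 33215), (208341, 66317)]
for p, q in convergents:
    err = q * math.pi - p                  # q_n*pi - p_n
    term = 1.0 / (q**3 * err**2)           # term of series (1)
    print(f"p/q = {p}/{q:<6d}   q*pi - p = {err: .3e}   term = {term:.4g}")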
David Simmons
Not especially famous, long-open problems which anyone can understand
Does pi contain 1000 consecutive zeroes (in base 10)?
Signed variant of the Flint Hills series
Is $\liminf |n\sin n| = 0$ as $n$ tends to infinity?
Proving convergence of sum over $\mathbb{Z}^n$
International Journal of Aeronautical and Space Sciences
The Korean Society for Aeronautical and Space Sciences (KSAS)
2093-274X (pISSN)
Machinery > Aircraft System
Machinery > Space Launch Vehicle
The International Journal of Aeronautical and Space Sciences (IJASS) encourages submission of papers addressing all aspects of aerospace science and technology, which include acoustics, aerodynamics and fluid mechanics, aerospace telecommunications, airworthiness and maintenance, avionics, combustion and propulsion, flight dynamics, guidance and control, flight simulation and operations, low carbon manufacturing, nano application, plasmas and lasers, research instrumentation and facilities, space exploration, structural dynamics and aeroelasticity, structures and materials, and thermomechanics and reacting flows. The journal also addresses the ground and flight tests of aerospace systems including aircraft, airship, helicopter, microelectromechanical system, missile, satellite, rocket, and UAV. The Korean Society for Aeronautical & Space Sciences (KSAS) is the largest and most prestigious society for aerospace engineers and scientists in Korea and would also like to maintain its highest standing in English-language journal publication by soliciting high-quality research papers. As of 2012, IJASS is indexed by SCIE, SCOPUS and KCI. Starting from 2012 vol 13 (1), IJASS has been accepted for coverage in Science Citation Index Expanded (SCIE). The Korean Society for Aeronautical & Space Sciences (KSAS) is pleased to announce that the publications of our journal, IJASS (International Journal of Aeronautical and Space Sciences), have been selected for coverage in Thomson Reuters products and services.
http://ijass.org KSCI KCI SCOPUS
Finite Volume Analysis of a Supersonic Non-Equilibrium Flow Around an Axisymmetric Blunt Body
Haoui, R. 59
https://doi.org/10.5139/IJASS.2010.11.2.059
The aim of this work is to analyze high temperature flows around an axisymmetric blunt body taking into account chemical and vibrational non-equilibrium state for air mixture species. For this purpose, a finite volume methodology is employed to determine the supersonic flow parameters around the axisymmetric blunt body. This allows the capture of a shock wave before a blunt body placed in supersonic free stream. The numerical technique uses the flux vector splitting method of Van Leer. Here, adequate time stepping parameters, along with Courant, Friedrich, Lewis coefficient and mesh size level are selected to ensure numerical convergence, sought with an order of $10^{-8}$.
Experimental Framework for Controller Design of a Rotorcraft Unmanned Aerial Vehicle Using Multi-Camera System
Oh, Hyon-Dong;Won, Dae-Yeon;Huh, Sung-Sik;Shim, David Hyun-Chul;Tahk, Min-Jea 69
This paper describes the experimental framework for the control system design and validation of a rotorcraft unmanned aerial vehicle (UAV). Our approach follows the general procedure of nonlinear modeling, linear controller design, nonlinear simulation and flight test but uses an indoor-installed multi-camera system, which can provide full 6-degree of freedom (DOF) navigation information with high accuracy, to overcome the limitation of an outdoor flight experiment. In addition, a 3-DOF flying mill is used for the performance validation of the attitude control, which considers the characteristics of the multi-rotor type rotorcraft UAV. Our framework is applied to the design and mathematical modeling of the control system for a quad-rotor UAV, which was selected as the test-bed vehicle, and the controller design using the classical proportional-integral-derivative control method is explained. The experimental results showed that the proposed approach can be viewed as a successful tool in developing the controller of new rotorcraft UAVs with reduced cost and time.
Attitude Estimation for Satellite Fault Tolerant System Using Federated Unscented Kalman Filter
Bae, Jong-Hee;Kim, You-Dan 80
We propose a spacecraft attitude estimation algorithm using a federated unscented Kalman filter. For nonlinear spacecraft systems, the unscented Kalman filter provides better performance than the extended Kalman filter. Also, the decentralized scheme in the federated configuration makes a robust system because a sensor fault can be easily detected and isolated by the fault detection and isolation algorithm through a sensitivity factor. Using the proposed algorithm, the spacecraft can continuously perform a given mission despite navigation sensor faults. Numerical simulation is performed to verify the performance of the proposed attitude estimation algorithm.
Satellite Attitude Control with a Modified Iterative Learning Law for the Decrease in the Effectiveness of the Actuator
Lee, Ho-Jin;Kim, You-Dan;Kim, Hee-Seob 87
A fault tolerant satellite attitude control scheme with a modified iterative learning law is proposed for dealing with actuator faults. The actuator fault is modeled to reflect the degradation of actuation effectiveness, and the solar array-induced disturbance is considered as an external disturbance. To estimate the magnitudes of the actuator fault and the external disturbance, a modified iterative learning law using only the information associated with the state error is applied. Stability analysis is performed to obtain the gain matrices of the modified iterative learning law using the Lyapunov theorem. The proposed fault tolerant control scheme is applied to the rest-to-rest maneuver of a large satellite system, and numerical simulations are performed to verify the performance of the proposed scheme.
Unmanned Aerial Vehicle Recovery Using a Simultaneous Localization and Mapping Algorithm without the Aid of Global Positioning System
Lee, Chang-Hun;Tahk, Min-Jea 98
This paper deals with a new method of unmanned aerial vehicle (UAV) recovery when a UAV fails to get a global positioning system (GPS) signal at an unprepared site. The proposed method is based on the simultaneous localization and mapping (SLAM) algorithm. It is a process by which a vehicle can build a map of an unknown environment and simultaneously use this map to determine its position. Extensive research on SLAM algorithms proves that the error in the map reaches a lower limit, which is a function of the error that existed when the first observation was made. For this reason, the proposed method can help an inertial navigation system to prevent its error of divergence with regard to the vehicle position. In other words, it is possible that a UAV can navigate with reasonable positional accuracy in an unknown environment without the aid of GPS. This is the main idea of the present paper. Especially, this paper focuses on path planning that maximizes the discussed ability of a SLAM algorithm. In this work, a SLAM algorithm based on extended Kalman filter is used. For simplicity's sake, a blimp-type of UAV model is discussed and three-dimensional pointed-shape landmarks are considered. Finally, the proposed method is evaluated by a number of simulations.
Performance Analysis of Pursuit-Evasion Game-Based Guidance Laws
Kim, Young-Sam;Kim, Tae-Hun;Tahk, Min-Jea 110
https://doi.org/10.5139/IJASS.2010.11.2.0110
We propose guidance laws based on a pursuit-evasion game. The game solutions are obtained from a pursuit-evasion game solver developed by the authors. We introduce a direct method to solve planar pursuit-evasion games with control variable constraints in which the game solution is sought by iteration of the update and correction steps. The initial value of the game solution is used for guidance of the evader and the pursuer, and then the pursuit-evasion game is solved again at the next time step. In this respect, the proposed guidance laws are similar to the approach of model predictive control. The proposed guidance method is compared to proportional navigation guidance for a pursuit-evasion scenario in which the evader always tries to maximize the capture time. The capture sets of the two guidance methods are demonstrated.
Earliest Intercept Geometry Guidance to Improve Mid-Course Guidance in Area Air-Defence
Shin, Hyo-Sang;Tahk, Min-Jea;Tsourdos, A.;White, B.A. 118
This paper describes a mid-course guidance strategy based on the earliest intercept geometry (EIG) guidance. An analytical solution and performance validation will be addressed for generalized mid-course guidance problems in area air-defence in order to improve reachability and performance. The EIG is generated for a wide range of possible manoeuvres of the challenging missile based on the guidance algorithm using differential geometry concepts. The main idea is that a mid-course guidance law can defend the area as long as it assures that the depending area and objects are always within the defended area defined by EIG. The velocity of Intercept Point in EIG is analytically derived to control the Intercept Geometry and the defended area. The proposed method can be applied in deciding a missile launch window and launch point for the launch phase.
Conceptual Design of a Multi-Rotor Unmanned Aerial Vehicle based on an Axiomatic Design
Yoo, Dong-Wan;Won, Dae-Yeon;Tahk, Min-Jea 126
This paper presents the conceptual design of a multi-rotor unmanned aerial vehicle (UAV) based on an axiomatic design. In most aerial vehicle design approaches, design configurations are affected by past and current design tendencies as well as an engineer's preferences. In order to design a systematic design framework and provide fruitful design configurations for a new type of rotorcraft, the axiomatic design theory is applied to the conceptual design process. Axiomatic design is a design methodology of a system that uses two design axioms by applying matrix methods to systematically analyze the transformation of customer needs into functional requirements (FRs), design parameters (DPs), and process variables. This paper deals with two conceptual rotary wing UAV designs, and the evaluations of tri-rotor and quad-rotor UAVs with proposed axiomatic approach. In this design methodology, design configurations are mainly affected by the selection of FRs, constraints, and DPs.
Three-Axis Autopilot Design for a High Angle-Of-Attack Missile Using Mixed H2/H∞ Control
Won, Dae-Yeon;Tahk, Min-Jea;Kim, Yoon-Hwan 131
https://doi.org/10.5139/JASS.2010.11.2.131
We report on the design of a three-axis missile autopilot using multi-objective control synthesis via linear matrix inequality techniques. This autopilot design guarantees $H_2/H_{\infty}$ performance criteria for a set of finite linear models. These models are linearized at different aerodynamic roll angle conditions over the flight envelope to capture uncertainties that occur in the high-angle-of-attack regime. Simulation results are presented for different aerodynamic roll angle variations and show that the performance of the controller is very satisfactory.
Waypoint Planning Algorithm Using Cost Functions for Surveillance
Lim, Seung-Han;Bang, Hyo-Choong 136
This paper presents an algorithm for planning waypoints for the operation of a surveillance mission using cooperative unmanned aerial vehicles (UAVs) in a given map. This algorithm is rather simple and intuitive; therefore, this algorithm is easily applied to actual scenarios as well as easily handled by operators. It is assumed that UAVs do not possess complete information about targets; therefore, kinematics, intelligence, and so forth of the targets are not considered when the algorithm is in operation. This assumption is reasonable since the algorithm is solely focused on a surveillance mission. Various parameters are introduced to make the algorithm flexible and adjustable. They are related to various cost functions, which is the main idea of this algorithm. These cost functions consist of certainty of map, waypoints of co-worker UAVs, their own current positions, and a level of interest. Each cost function is formed by simple and intuitive equations, and features are handled using the aforementioned parameters.
Comprehensive Code Validation on Airloads and Aeroelastic Responses of the HART II Rotor
You, Young-Hyun;Park, Jae-Sang;Jung, Sung-Nam;Kim, Do-Hyung 145
In this work, the comprehensive structural dynamics codes including DYMORE and CAMRAD II are used to validate the higher harmonic control aeroacoustic rotor test (HART) II data in descending flight condition. A total of 16 finite elements along with 17 aerodynamic panels are used for the CAMRAD II analysis; whereas, in the DYMORE analysis, 10 finite elements with 31 equally-spaced aerodynamic panels are utilized. To improve the prediction capability of the DYMORE analysis, the finite state dynamic inflow model is upgraded with a free vortex wake model comprised of near shed wake and trailed tip vortices. The predicted results on aerodynamic loads and blade motions are correlated with the HART II measurement data for the baseline, minimum noise and minimum vibration cases. It is found that an improvement of solution, especially for blade vortex interaction airloads, is achieved with the free wake method employed in the DYMORE analysis. Overall, fair to good correlation is achieved for the test cases considered in this study.
Revista de la Real Academia de Ciencias Exactas, Físicas y Naturales. Serie A. Matemáticas
On some Grüss' type inequalities for the complex integral
Silvestru Sever Dragomir
Original Paper
Assume that f and g are continuous on \(\gamma \), where \(\gamma \subset {\mathbb {C}}\) is a piecewise smooth path parametrized by \(z\left( t\right) \), \(t\in \left[ a,b\right] \), from \(z\left( a\right) =u\) to \(z\left( b\right) =w\) with \(w\ne u\), and the complex Čebyšev functional is defined by
$$\begin{aligned} {\mathcal {D}}_{\gamma }\left( f,g\right) :=\frac{1}{w-u}\int _{\gamma }f\left( z\right) g\left( z\right) dz-\frac{1}{w-u}\int _{\gamma }f\left( z\right) dz \frac{1}{w-u}\int _{\gamma }g\left( z\right) dz. \end{aligned}$$
In this paper we establish some bounds for the magnitude of the functional \( {\mathcal {D}}_{\gamma }\left( f,g\right) \) under various assumptions for the functions f and g and provide a complex version for the well known Grüss inequality.
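As a purely illustrative aside (not part of the paper), the functional \( {\mathcal {D}}_{\gamma }\left( f,g\right) \) is straightforward to approximate numerically; the sketch below evaluates it for the sample choices \(f(z)=z\), \(g(z)=z^2\) on the straight segment from \(u=0\) to \(w=1+i\).

# Numerical sketch of the complex Cebysev functional on a straight segment,
# using a simple trapezoidal rule; f, g, u, w are illustrative choices.
import numpy as np

u, w = 0.0 + 0.0j, 1.0 + 1.0j
t = np.linspace(0.0, 1.0, 20001)
z = u + t * (w - u)                  # parametrization z(t) of the segment
dz = w - u                           # z'(t) is constant on a segment

def path_integral(vals):
    # integral over gamma of vals dz = integral over [0,1] of vals(z(t)) z'(t) dt
    return np.sum(0.5 * (vals[1:] + vals[:-1]) * dz * np.diff(t))

mean = lambda vals: path_integral(vals) / (w - u)
f, g = z, z**2
D = mean(f * g) - mean(f) * mean(g)
print(D)                             # compare with the exact value computed by hand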
Complex integral · Continuous functions · Holomorphic functions · Grüss inequality
Mathematics Subject Classification
26D15 26D10 30A10 30A86
The author would like to thank the anonymous referee for valuable suggestions that have been implemented in the final version of the paper.
© The Royal Academy of Sciences, Madrid 2019
1.Mathematics, College of Engineering and ScienceVictoria UniversityMelbourneAustralia
2.DST-NRF Centre of Excellence in the Mathematical and Statistical Sciences, School of Computer Science and Applied MathematicsUniversity of the WitwatersrandJohannesburgSouth Africa
Dragomir, S.S. RACSAM (2019). https://doi.org/10.1007/s13398-019-00712-6
Received 03 February 2019
DOI https://doi.org/10.1007/s13398-019-00712-6
Publisher Name Springer International Publishing
The recommendation algorithm model of score preference and project type
The algorithm of fusing score preference and project type
Experimental design and analysis
Future works
Recommendation algorithm based on user score probability and project type
Chunxue Wu1,
Jing Wu1,
Chong Luo1,
Qunhui Wu2,
Cong Liu1,
Yan Wu3 and
Fan Yang4
Accepted: 27 February 2019
The interaction and sharing of data among network users have caused network information to expand rapidly, and "information overload" has become a difficult problem for everyone. Recommendation-based information filtering technology can mine the needs and preferences of users from their historical behavior, historical data, and social networks, and filter useful resources for users from the accumulated information according to those needs and preferences. Collaborative filtering is one of the core technologies in recommendation systems and is also the most widely used and most effective recommendation algorithm. In this paper, we study the accuracy and data sparsity problems of recommendation algorithms. On the basis of the conventional algorithm, we incorporate the user score probability and take the commodity type into consideration when calculating similarity. The algorithm based on user score probability and project type (UPCF) is proposed, and an experimental data set from a recommendation system is used to validate the algorithm and analyze the results. The experimental results show that the UPCF algorithm alleviates the sparsity of data to a certain extent and performs better than the conventional algorithms.
Score probability
Similarity calculation
The recommendation algorithm is a very important tool to help users deal with information overload in the era of big data [1]. In the scoring matrix, the scoring behavior and scoring value of the user are the basis on which the recommendation algorithm recommends products. In the era of information explosion, because the number of commodities is so large, the user can only score the few projects he or she prefers. This results in sparse and incomplete scoring data in the user-product scoring matrix, which can make it impossible to find similar neighbors of the target user. If there are no similar neighbors, the recommendation algorithm cannot recommend products to the user, or the products recommended to the user are inappropriate.
The primary cause of the sparse data in the scoring matrix of the recommendation system is that the user does not take the initiative to score the commodities. Therefore, the distribution of scores in the scoring matrix is not random, but depends on the user's subjective choice. Traditional recommendation algorithms assume that users randomly choose commodities to score. They also assume that a high score on a commodity indicates that the user likes the product and a low score indicates that the user does not like it. In [2], it is shown that this hypothesis of the traditional recommendation algorithm is inaccurate and does not accord with the reality of the massive-information era, because the conventional algorithms ignore the information carried by the user's subjective behavior.
In the big data era of information explosion, the number of commodities is very large. Users can only access a small number of commodities, and then choose the types they prefer from that small number of products to score. This results in sparse scoring data in the user-commodity scoring matrix, which affects the accuracy of recommendation. The fact that users choose products and score them is an implicit embodiment of user interest preference.
On the basis of the conventional algorithm, this paper integrates the users' subjective behavior of scoring commodities and puts forward an algorithm that combines score preference and project type. Compared with the traditional assumption that users randomly choose products to score, the improved recommendation algorithm (UPCF) integrates score preference and project type and makes full use of the user's subjective behavior of choosing and scoring commodities. The two-step predictive recommendation algorithm proposed in [3] and the probabilistic latent semantic recommendation algorithm based on autonomous prediction both present similar ideas to this chapter. There are two differences between the improved algorithm in this paper and them. First, the method used to calculate the probability of the user scoring a product is different. Second, the UPCF algorithm takes the product type into consideration in the similarity calculation.
The main contents of this paper are as follows:
For the accuracy problem, this paper integrates the project type into the calculation of similarity. Combining the similarity calculated from the scoring matrix with the similarity obtained from the commodity type makes the similarity calculation more accurate.
For the problem of data sparsity, the fundamental reason for data sparsity is that users do not take the initiative to score the project. This paper calculates the user score probability by analyzing the user's historical scoring behavior and the type of the commodity. According to the score probability and commodity type, the similarity S2 of two users is calculated. The similarity S1 is calculated from the score matrix. The combination of the two similarities overcomes the problem that user similarity cannot be calculated when the data are sparse.
2 The recommendation algorithm model of score preference and project type
2.1 User behavior information
The core idea of the recommendation algorithm is to obtain the information implied in the user's behavior, identify the user's behavior, use the collective wisdom [4] to match the user, and recommend the product to the user.
The traditional recommendation algorithm only pays attention to the value of the product scored by the user [5, 6], ignoring the user's implicit information in the behavior of scoring the commodities. The traditional algorithms indicate that the user randomly selects some of the commodities and scores them according to the degree of preference for the commodities. A high score shows that the user likes this commodity, and the low score shows that the user does not like the commodities. In the era of online shopping information explosion, a user's shopping behavior is based on their own needs and preferences [7–10]. The user will score the goods according to the quality of commodities, customer service attitude, logistics speed, and other factors. If the user is not interested in a commodity, he will not buy it. That is to say, giving a commodity a low score can only indicate that the user is not satisfied with this product. In the traditional algorithm, this dissatisfaction is spread to the same type of commodity, making the system think that the user's preference for similar products is reduced and affecting the system's recommendation for similar products. Therefore, it is a subjective behavior of the user to select a product and score it and this behavior is an invisible embodiment of the user's interest preference [11–13].
Regarding users' behavior, this paper differs from the traditional algorithms in several respects:
Different views on scoring behavior. The traditional algorithm considers that the user's scoring behavior is random. In this paper, it is considered that the scoring behavior is the implicit embodiment of the user's interest preference, and the user will only score the commodities that they are interested in.
Different reasons for the sparsity of scoring data [14–17]. The traditional algorithms think that the users' scoring behavior is random. This paper believes that users will only choose the products that they are interested in and score them, which means that the user's subjective choice results in the lack of data.
Different views on the level of scoring value. The traditional algorithms consider that the user likes the commodity if he gives it a high score, and the user does not like it to give the commodity a low score. This paper believes that as long as the user gives a score, regardless of the level of the score value, the user has a preference for this kind of commodity.
For example, as shown in Fig. 1, user A loves to watch action movies, but does not love science fiction and comedy. As the number of movies on the video site is very large, user A will choose his favorite movies to watch. First of all, user A will choose an action movie on the video site and then score it after watching it. Because he does not like comedy and science fiction films, user A does not evaluate such films. User A finds a movie called "IP MAN" [18, 19] in the action movie category. After watching it, user A feels that the clarity of the picture is not good, so he gives the film a low score. If user A did not like action movies, he would not watch this category of movies at all. The traditional collaborative filtering approach [20, 21] considers that a low score means the user does not like this type of film, but the fact is that if a user does not like a type of film, he will not pay attention to it, let alone watch and score it.
Example of user behavior. User A loves to watch action movies, but does not love science fiction and comedy movies. User A finds a movie called "IP MAN" in the action movie category. After watching it, user A feels that the clarity of the picture is not good, so he gives the film a low score. If user A does not like the action movie, he will not see this category of movies. The traditional collaborative filter considers that as long as the user gives a low score that users do not like this type of film, but the fact is that if the user does not like a film, users will not pay attention to this type, let alone watch and score
2.2 Recommendation algorithm model of user score preference and project type
This paper holds that the score value cannot indicate whether the user likes this kind of commodity; a low score only indicates that the user is not satisfied with the current product. Users pay attention to and want to consume the products that they are interested in. Users will give a high score to products that they are interested in and satisfied with, and a low score to products that they are interested in but not satisfied with. Therefore, regardless of whether the user gives a product a high or low score, the act of scoring the product indicates that the user is interested in this type of product.
Based on the above viewpoints and the implicit expression of the user score behavior [22, 23], this paper designs a recommendation algorithm model based on user score preference and project type. By analyzing the user score behavior, our algorithm obtains the user's preference for the commodity. According to the preference of the user, we can predict the probability of the user to score the target commodities. The recommendation system combines the similarity calculated from the score value with the similarity calculated from the user score probability and the project type to make the similarity between the users more accurate. The similarity calculation of recommendation algorithm framework based on score preference and the project type is shown in Fig. 2.
Similarity calculation of an improved algorithm model. The behavior that a user gives a score to a commodity is called the user rating behavior. The score value that the user gives to the commodity is called the user rating value. The frame calculates the score probability P based on the user rating behavior and calculates the similarity S2 according to the score probability P and the project type. Then, we calculate the similarity S1 according to the user rating value. Finally, we combine the two similarity values S1 and S2 to obtain the ultimate similarity S
As shown in Fig. 2, the behavior that a user gives a score to a commodity is called the user rating behavior. The score value that the user gives to the commodity is called the user rating value. The frame calculates the score probability P based on the user rating behavior and calculates the similarity S2 according to the score probability P and the project type. Then, we calculate the similarity S1 according to the user rating value. Finally, we combine the two similarity values S1 and S2 to obtain the ultimate similarity S [24, 25].
On the basis of the conventional algorithm based on neighborhood, the ideas of the user behavior in Fig. 1 and the improved similarity of Fig. 2 are integrated. The improved algorithm framework of this chapter is obtained, which is called the recommendation algorithm framework of scoring preference and project type. This framework makes full use of the user's preference information and calculates the user interest in a kind of commodity, that is, the probability of scoring. We will get the scoring probability and improved similarity by calculating [26]. The process of the model of the score preference and the project type is shown in Fig. 3.
Recommendation algorithm model of integrating score preference and project type. On the basis of the conventional algorithm based on neighborhood, the ideas of the user behavior in Fig. 1 and the improved similarity of Fig. 2 are integrated. The improved algorithm framework of this chapter is obtained, which is called the recommendation algorithm framework of scoring preference and project type. The core idea of the model of score preference and project types in Fig. 3: Firstly, we calculate the similarity S1 based on the matrix. Then, we calculate the possibility Pro of the user to score the product according to preference information implied in the user rating behavior. The second similarity S2 is calculated according to Pro and the type of the product. Due to the different weights of the two similarities, the final similarity of the user S is obtained by combining the two similarities S1 and S2
The recommendation algorithm based on the user score preference and project type combines the probability of the user to score the commodity with the user's prediction value to the product. Some studies have shown that making full use of the user's behavior of scoring the product can effectively improve the recommendation accuracy of recommendation algorithm. The model takes full advantage of the user's scoring behavior, excavates the user's implied hobbies, and predicts the product type that the user may be interested in.
The core idea of the model based on the user score preference and project types is shown in Fig. 3: Firstly, we calculate the similarity S1 based on the matrix. Then, we calculate the possibility Pro of the user to score the product according to the preference information implied in the user rating behavior. The second similarity S2 is calculated according to Pro and the type of the product. Due to the different weights of the two similarities, the final similarity of the user S is obtained by combining the two similarities S1 and S2 [27, 28]. For all users, the similarity between any two forms an M×M similarity matrix (M is the number of users).
For example, in order to calculate the similarity of user A and user B, the algorithm reads the scoring information from the data set, gets the score matrix of 943×1682 (943 represents 943 users, and 1682 represents 1682 commodities), and calculates the Sim1(A,B) by score matrix and Pearson's formula. Then, a 943×18 score count matrix and a 943×18 score probability matrix are created (943 represents 943 users, and 18 represents of 18 types). Next, the algorithm traverses the score matrix, records the number of user score for each commodity into the 943×18 score matrix. The algorithm also traverses the score count matrix, calculates the probability of user score for each commodity, and records it into the scoring probability matrix. We obtain the second similarity S2(A,B) through probability matrix and Pearson's formula and get the final S(A,B) by combining S1(A, B) and S2(A, B). By calculating the similarity by the above way, we can obtain a 943×943 similarity matrix.
The UPCF algorithm takes full advantage of the user's behavior and the type of information of the product to score the product, which is the main difference between the UPCF algorithm and the traditional recommendation algorithm. The two-step prediction recommendation algorithm proposed in [3] and the probabilistic latent semantic recommendation algorithm based on autonomous prediction proposed in [13, 29] make full use of the user's behavior information to score the product. The difference between them is that the UPCF algorithm uses a different approach when calculating the score probability and considers the type of information of the project when calculating the similarity.
In order to verify the effectiveness of the framework, this paper combines IBCF with the framework in Fig. 2 to propose an algorithm of fusing score preferences and project types. The next section will detail the UPCF algorithm.
3 The algorithm of fusing score preference and project type
This section takes the user's subjective scoring behavior into consideration on the basis of the traditional recommendation algorithm based on neighborhood, proposing the algorithm of fusion score preference and item type. UPCF is short for collaborative filtering recommendation algorithm based on user score probability and project type.
3.1 Prediction of user score probability
The user's scoring preferences can also be used to calculate the user's score probability. The scoring value of all commodities in the score matrix can be regarded as an n-dimensional score vector, as follows:
$$ P(U)=\left({I}_1,{I}_2\dots \dots {I}_n\right) $$
If the value is not 0, the user has scored the commodity; otherwise, there is no score on the commodities. Traverse the target user's n-dimensional score vector, count the type of commodities and the number of times each commodity is scored, and put the statistic results into the list. Each item in the list is an <i, n> binary relationship group, where i is the commodity type, and n is the number of times the commodities have been scored. We predict the users' interest in this type of product according to the number of users scoring a certain type in the list, that is, predicting the probability of users to score the commodity. If the target commodity type is j, N(j) represents the scoring number of u on the j type and M is the total number of the user to score commodities. The user score probability is calculated as follows:
$$ \Pr \left(u,j\right)=N(j)/M $$
The specific implementation of the score probability prediction is shown in the following pseudo code:
Prediction of scoring probability
Proba(int[][] grade)
Create a new 943×18 matrix pro
Traverse the grade
If(grade[a][m]!=0)
Get the type k of the movie m
pro[a][k]+1
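A vectorized sketch of the same counting-and-normalization step (assuming NumPy arrays shaped like the ml-100K data used later: a 943×1682 score matrix and a 1682×18 0/1 genre matrix) is given below.

# Pr(u, j) = N(j) / M: count the user's ratings per genre, then divide by the
# user's total number of ratings.
import numpy as np

def score_probability(ratings, genres):
    # ratings: users x movies (0 = unrated); genres: movies x 18 genre flags
    counts = (ratings > 0).astype(float) @ genres              # users x 18
    totals = (ratings > 0).sum(axis=1, keepdims=True).astype(float)
    return np.divide(counts, totals,
                     out=np.zeros_like(counts), where=totals > 0)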
3.2 Improvements in similarity calculations
Cosine similarity and the Pearson correlation coefficient [30–32] are the most common ways to calculate similarity in conventional collaborative filtering algorithms. Regardless of the method, the calculation is based only on the scores of the commodities rated in common by the users and ignores the commodity type [33]. This section incorporates the commodity type and the scoring probability of the user on the basis of the traditional similarity calculation method and adjusts the ratio of the two similarities. The improved similarity formula is as follows:
$$ \mathrm{Sim}\left({U}_i,{U}_j\right)=\beta S\left({U}_i,{U}_j\right)+\left(1-\beta \right){S}_{\mathrm{sort}}\left({U}_i,{U}_j\right) $$
where Sim(Ui, Uj) is the final similarity, S(Ui, Uj) is the similarity calculated from the user's score values, and Ssort(Ui, Uj) is the similarity calculated from the commodity type and the user's scoring probability. The formula to calculate Ssort(Ui, Uj) is as follows:
$$ {S}_{\mathrm{sort}}\left({U}_i,{U}_j\right)=\frac{\sum \limits_{k\in L\left({U}_i\right)\cap L\left({U}_j\right)}\left({P}_{U_ik}-{\overline{P}}_{U_i}\right)\left({P}_{U_jk}-{\overline{P}}_{U_j}\right)}{\sqrt{\sum \limits_{k\in L\left({U}_i\right)\cap L\left({U}_j\right)}{\left({P}_{U_ik}-{\overline{P}}_{U_i}\right)}^2}\sqrt{\sum \limits_{k\in L\left({U}_i\right)\cap L\left({U}_j\right)}{\left({P}_{U_jk}-{\overline{P}}_{U_j}\right)}^2}} $$
where L(Ui) is a type collection of commodities that are scored by Ui. L(Uj) is a type collection of commodities that are scored by Uj. \( {P}_{U_ik} \) is the scoring probability of Ui for the k type, and \( {\overline{P}}_{U_i} \) is the average of the scoring probability of Ui for all the types. The k type is one of the intersection types scored by Ui and Uj.
The selection of the nearest neighbor, the scoring prediction, and the scoring criteria have been described in detail in the previous section and are not repeated here.
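A minimal sketch of the blended similarity defined above, reusing the score-probability matrix from the sketch in Section 3.1, is shown below; the weight β = 0.6 is illustrative only and not a value fixed by the paper.

# Sketch of Sim(U_i, U_j) = beta*S + (1-beta)*S_sort, where S is Pearson on
# co-rated items and S_sort is Pearson on the genre score probabilities.
import numpy as np

def pearson(x, y):
    xc, yc = x - x.mean(), y - y.mean()
    denom = np.sqrt((xc**2).sum() * (yc**2).sum())
    return float(xc @ yc / denom) if denom > 0 else 0.0

def upcf_similarity(ratings, prob, i, j, beta=0.6):
    co_rated = (ratings[i] > 0) & (ratings[j] > 0)           # items rated by both
    s = pearson(ratings[i][co_rated], ratings[j][co_rated]) if co_rated.sum() >= 2 else 0.0
    common = (prob[i] > 0) & (prob[j] > 0)                   # genres rated by both
    s_sort = pearson(prob[i][common], prob[j][common]) if common.sum() >= 2 else 0.0
    return beta * s + (1 - beta) * s_sort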
3.3 The selection of the nearest neighbor
There are two conditions for choosing the nearest neighbors of the target user [34–37]. First, the selected neighbor must be highly similar to the target user. Second, the selected neighbor must have scored the target commodity.
When selecting the best neighbors, a threshold needs to be set in order to prevent weakly similar individuals from affecting the final result of collaborative filtering. Only a neighbor who has scored the target product and whose similarity is greater than the threshold can become a neighbor of the target user. The selection of neighbors is as follows:
$$ \mathrm{KN}\left({U}_m\right)=\left\{{U}_n\;\middle|\;\mathrm{Sim}\left({U}_m,{U}_n\right)>\sigma,\ \mathrm{Score}\left({U}_n,I\right)\ne 0,\ m\ne n\right\} $$
where KN(Um) is the neighbor list of the user Um and σ is the threshold, which can be set to the average similarity of all the users who are similar to the user Um. The specific implementation of neighbor selection is as follows:
The selection of the nearest neighbor
FindNeighbor(int i, int k, int[][] grade, int[][] similar)
    // i is the target user, k is the commodity, grade is the score matrix, and similar is the similarity matrix
    For each other user j
        If (grade[j][k] != 0 && similar[i][j] != 0)
            List_N[j] = similar[i][j]   // List_N is the neighbor list
    Sort(List_N)   // sort the list of neighbors by similarity
    Choose the N neighbors we need
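A hedged Python sketch of the neighbour selection step; the function signature mirrors the pseudo code above, while the explicit threshold argument sigma follows the selection rule in the formula earlier in this section.

```python
def find_neighbors(i, k, grade, similar, sigma, n_neighbors):
    """Return up to n_neighbors users most similar to user i who scored item k.

    grade[j][k] != 0   -> user j has scored the target commodity k;
    similar[i][j]      -> precomputed similarity Sim(Ui, Uj);
    sigma              -> similarity threshold from the selection rule above.
    """
    candidates = [(similar[i][j], j)
                  for j in range(len(grade))
                  if j != i and grade[j][k] != 0 and similar[i][j] > sigma]
    candidates.sort(reverse=True)           # most similar neighbours first
    return [j for _, j in candidates[:n_neighbors]]
```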
3.4 Calculation of prediction score
In the calculation of the predicted score, the traditional collaborative filtering algorithm only considers the similarity [38, 39] between the neighbor and the target user and the neighbor's score for the predicted item. However, each user has different scoring criteria. For example, some users give three points to show that they like a product, while other users need to give five points to express the same meaning. In order to solve this problem, this algorithm takes the average value of each user's scores into account to compensate for the differences between users. The user's predicted rating for item v is
$$ \mathrm{Score}\left({U}_i,v\right)=\left({\overline{r}}_{ui}+\frac{\sum \limits_{U_j\in \mathrm{KN}\left({U}_i\right)}\mathrm{Sim}\left({U}_i,{U}_j\right)\left({r}_{U_jv}-{\overline{r}}_{uj}\right)}{\sum \limits_{U_j\in \mathrm{KN}\left({U}_i\right)}\mathrm{Sim}\left({U}_i,{U}_j\right)}\right)f $$
where \( f=\exp \left\{-1+\alpha \left({\overline{r}}_{ui}-{\overline{r}}_{uj}\right)\right\} \), exp denotes the exponential function with base e, \( {\overline{r}}_{ui} \) is the average score of Ui, \( {\overline{r}}_{uj} \) is the average score of Uj, \( {r}_{u_jv} \) is the score of Uj for v, α is the attenuation factor, and Score(Ui, v) is the prediction score of Ui for v.
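A sketch of the prediction step under stated assumptions: user_mean[j] is each user's average observed score, the value of the attenuation factor alpha is a placeholder, and the r-bar of Uj inside f is interpreted here as the mean score of the selected neighbours, since the formula does not pin this down.

```python
from math import exp

def predict_score(i, v, neighbors, grade, similar, user_mean, alpha=0.5):
    """Predicted score of user Ui for item v, following the formula above."""
    if not neighbors:
        return user_mean[i]
    num = sum(similar[i][j] * (grade[j][v] - user_mean[j]) for j in neighbors)
    den = sum(similar[i][j] for j in neighbors)
    base = user_mean[i] + (num / den if den else 0.0)
    # scoring-criteria correction f = exp{-1 + alpha * (r_bar_ui - r_bar_uj)}
    mean_nb = sum(user_mean[j] for j in neighbors) / len(neighbors)
    f = exp(-1.0 + alpha * (user_mean[i] - mean_nb))
    return base * f
```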
3.5 Evaluation indicators
The evaluation indicators of a recommendation system can be divided into accuracy indicators and indicators other than accuracy [40, 41]. The accuracy considered in this paper mainly refers to the accuracy of the prediction score. This kind of indicator judges accuracy by comparing the difference between the predicted score and the real score. The most commonly used indicator is the MAE (mean absolute error), where test is the test set, |test| is its size, ruv is the prediction of user U's score for item V, and \( {r}_{uv}^{\mathrm{test}} \) is the real score of U for V in the test set.
MAE is calculated as follows:
$$ \mathrm{MAE}=\frac{\sum \limits_{\left(U,V\right)\in \mathrm{test}}\left|{r}_{uv}-{r}_{uv}^{\mathrm{test}}\right|}{\left|\mathrm{test}\right|} $$
MAE is easy to understand and to calculate, but it has a shortcoming: inaccurate predictions on low-score products contribute to the MAE just as much as errors on high-score products. The RMSE (root mean square error) is an evaluation indicator related to the MAE; RMSE is calculated as follows:
$$ \mathrm{RMSE}=\sqrt{\frac{\sum \limits_{\left(u,v\right)\in \mathrm{test}}{\left|{r}_{uv}-{r}_{uv}^{\mathrm{test}}\right|}^2}{\left|\mathrm{test}\right|}} $$
In RMSE, each absolute error is squared, so larger errors are penalized more heavily.
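For completeness, a small helper computing both indicators from matched lists of predicted and real test-set scores (an illustrative sketch, not taken from the paper):

```python
from math import sqrt

def mae_rmse(predicted, actual):
    """MAE and RMSE over a test set of (prediction, real score) pairs."""
    errors = [abs(p - a) for p, a in zip(predicted, actual)]
    mae = sum(errors) / len(errors)
    rmse = sqrt(sum(e * e for e in errors) / len(errors))
    return mae, rmse
```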
4 Experimental design and analysis
4.1 Data sources
In order to verify the validity of the collaborative filtering recommendation algorithm based on score preference and project type, the experiments were carried out on the MovieLens data set provided by GroupLens. The MovieLens data set is collected by the GroupLens research group of the University of Minnesota [42–44] and is available in three different versions. This chapter selects the ml-100K data set for the experiments. The data set contains 943 users and 1682 movies, giving a 943×1682 score matrix. Each score is 0 or a positive integer between 1 and 5; a score of 0 means that the user did not score the product, and a higher score indicates a stronger preference of the user for the commodity. Ninety percent of the data set was randomly selected as the training set, and the rest was used for testing.
4.2 Experimental design
The experimental data samples are taken from the MovieLens data set [45–47] provided by GroupLens; we select the ml-100K version. The traditional collaborative filtering algorithm based on the nearest neighbor, the GSCF algorithm, and the UPCF algorithm are run on the data sets train1/test1 and train2/test2, and the differences between the MAE and RMSE values of the three algorithms are then compared and analyzed [48–50]. Based on this analysis of the recommendation results, the specific steps of the experimental design are as follows:
The first step, the division of the data set: the data set is divided into two parts in the proportion 9:1, completely at random. The larger part is called the training set train, and the smaller part is called the test set test. The ml-100K data set is divided several times, and we obtain the training sets train1, train2, train3 and so on, as well as the corresponding test sets test1, test2, test3 and so on (a minimal sketch of such a random split is given after these steps).
The second step, the prediction of scoring probability: the probability of the user's score for each type is calculated according to Section 3.1, and a score probability matrix of 943×18 is obtained. The 943 rows represent 943 users, and the 18 columns represent 18 types of movies.
The third step, the similarity calculation: the similarity S1 is calculated by the score matrix, and the similarity S2 is calculated by the score probability. We will get a similarity matrix by combining the two similarities; the similarity matrix S is as follows:
\( S=\left[\begin{array}{cccc}{S}_{11}& {S}_{12}& \dots & {S}_{1m}\\ {}{S}_{21}& \dots & \dots & \dots \\ {}\dots & \dots & \dots & \dots \\ {}{S}_{m1}& \dots & \dots & {S}_{mm}\end{array}\right] \)
Sm1 is the similarity between user m and user 1 in the matrix.
The fourth step is to choose the nearest neighbor according to the similarity.
The fifth step is to calculate the scoring error MAE, RMSE.
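The first step above can be sketched as follows; the record format (user, movie, score) and the fixed random seed are assumptions for illustration, and calling the function with different seeds yields train1/test1, train2/test2, and so on.

```python
import random

def split_ratings(ratings, train_fraction=0.9, seed=0):
    """Randomly split a list of (user, movie, score) records into train/test (9:1)."""
    rng = random.Random(seed)
    shuffled = list(ratings)     # copy so the original list is left untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

# train1, test1 = split_ratings(all_ratings, seed=1)
# train2, test2 = split_ratings(all_ratings, seed=2)
```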
4.3 Experimental results and analysis
In this section, the proposed UPCF algorithm is compared experimentally with the traditional user-based recommendation algorithm UBCF, the traditional item-based algorithm IBCF, and the recommendation algorithm GSCF based on graph structure and project type. We compare the influence of the number of neighbors on the MAE and RMSE of these four recommendation algorithms and, for the same number of neighbors, the differences between their MAE and RMSE values. The number of neighbors is varied from 10 to 80, and the MAE and RMSE performance of the four algorithms (UBCF, IBCF, GSCF, UPCF) is examined on the data sets train1/test1 and train2/test2. For the data sets train1 and test1, the MAE values of the algorithms are compared in Table 1.
Table 1. Comparison of MAE values for different numbers of neighbors (columns: number of neighbors, UBCF, IBCF, GSCF, UPCF; numerical entries not reproduced here)
In Table 1, UBCF is based on the user's algorithm, IBCF is a conventional item-based algorithm, GSCF is an algorithm based on graph structure and project type, and UPCF is an algorithm based on user score preference and project type. When the number of neighbors is the same, the MAE value of the improved algorithm UPCF is the smallest, that is, the prediction error is the smallest and the recommendation performance is the best. When the number of neighbors is different, the MAE value of the four algorithms decreases first and then increases with the growth of the neighbors. It shows that the MAE is affected by the neighbors, in other words, the performance is affected by the neighbors. In order to make the comparison of the four algorithms more obvious, this chapter draws the MAE values into line graphs, as shown in Fig. 4.
Fig. 4. Comparison of the MAE values of the four algorithms on the data set train1
We take the number of neighbors as variables; with the increase of the number of neighbors, the MAE values of the four algorithms are reduced first and then flattened. This shows that the number of neighbors has a certain impact on the scoring error, when the number of neighbors is enough, this effect gradually weakened. In the case of the same number of neighbors, the MAE value of the algorithm UPCF is lower than the algorithm GSCF, which is lower than the UBCF and IBCF. This chapter shows that the error between the real score and the predicted score of the improved algorithm is the lowest, and the prediction of the user is more accurate. On the data sets train1 and test1, the RMSE values of the four recommendation algorithms are compared as shown in Table 2.
Table 2. Comparison of RMSE values for the four algorithms (numerical entries not reproduced here)
As can be seen from Table 2, the RMSE value of the algorithm UPCF is always smaller than the other three recommendation algorithms (under the same number of neighbors). As the number of neighbors increases, the RMSE value becomes smaller first and then bigger. When the number of neighbors is about 40, the value of RMSE tends to be the smallest, that is, the error of the prediction score is the smallest.
According to the data in Table 2, the RMSE values of the four contrast algorithms are plotted as histograms as shown in Fig. 5.
Fig. 5. The comparison of RMSE values between the four algorithms on the data set train1
As can be seen from Fig. 5, with the increase of the number of nearest neighbors, the RMSE (root mean square error) value of the four algorithms is gradually reduced and then tends to be gentle. In the case of the same number of neighbors, the RMSE value of the UPCF algorithm has been lower than the other three kinds of recommendation algorithms, that is, the error between the real score and the prediction score of UPCF is the smallest, and the prediction is more accurate.
In order to exclude the impact of the data set on the results of the algorithm, the following experiments will be performed on second data sets train2 and test2 generated randomly.
In the data sets of train2 and test2, with the nearest neighbor as a variable, MAE and RMSE are the evaluation criteria to analyze and compare these four recommendation algorithms.
Figure 6 shows the comparison of the MAE values. With the increase in the number of nearest neighbors, the MAE value of the three algorithms decreases first and then increases. When the number of nearest neighbors is about 40, the MAE value is the smallest, which shows that the collaborative filtering algorithm is affected by the number of nearest neighbors. For the same number of nearest neighbors, the MAE of the UPCF algorithm is the smallest, that is, its predicted scores are closest to the users' real values and it provides the best recommendation results. This fully illustrates the importance of the user's subjective scoring behavior.
Fig. 6. The comparison of MAE values between the four algorithms on the data set train2
Figure 7 shows the comparison of the RMSE values. With the increase in the number of nearest neighbors, the RMSE value decreases first and then increases. When the number of nearest neighbors is about 40, the RMSE value is the smallest. For the same number of nearest neighbors, the RMSE value of the UPCF algorithm is the smallest, that is, the error between the real score and the predicted score is the smallest, and the performance is the best.
Fig. 7. The comparison of RMSE values between the four algorithms on the data set train2
4.4 Comparative analysis of algorithms
The recommendation algorithm GSCF based on graph structure and item type is based on the conventional algorithm, which makes full use of the indirect neighbor and commodity type when computing similarity.
The UPCF algorithm, by contrast, regards the user's scoring of a product as subjective behavior that expresses an implicit user preference. Making full use of this scoring behavior addresses the root cause of data sparsity, namely that users leave most products unrated. In order to better analyze and compare the two improved algorithms in this paper, we compare the MAE and RMSE values of the two algorithms.
Figure 8 compares the MAE values (mean absolute error) of the UPCF and GSCF algorithms on the data set train1 as a line chart. It can be seen from the graph that the MAE value of the two algorithms decreases first, then increases, and finally levels off. With the number of nearest neighbors as the variable, when the number of nearest neighbors is about 40, the MAE values of the two algorithms are the smallest, that is, the error is the smallest. For the same number of neighbors, the MAE value of the UPCF algorithm is lower than that of the GSCF algorithm. A smaller MAE value means that the scoring error of UPCF is lower than that of GSCF, that is, the recommendations of the UPCF algorithm are more accurate than those of the GSCF algorithm. This is because UPCF analyzes the root cause of data sparsity and makes full use of the implicit information in user rating behavior, namely the user's implicit preference.
Fig. 8. Comparison of MAE values of the two improved algorithms on the data set train1
Figure 9 is the RMSE (RMS error) contrast histogram between the algorithm GSCF and algorithm UPCF. The left is the algorithm GSCF, and the right is the algorithm UPCF. With the near neighbor number as the variable, the RMSE values of the two kinds of recommendation algorithms decrease first and then increase, and finally tend to be gentle. When the near neighbor number reaches about 40, the RMSE value of the two algorithms is the smallest, that is, the algorithm has the least error and the highest precision. In the case of the same number of near neighbors, the RMSE value of the algorithm UPCF is lower than that of the algorithm GSCF, that is, the error is smaller.
Fig. 9. Comparison of RMSE values of the two improved algorithms on the data set train1
The experimental results show that both the MAE and RMSE values of the UPCF algorithm are lower than those of the GSCF algorithm, that is, the error between the real and predicted scores is lower, so the UPCF recommendation algorithm performs better than the GSCF algorithm.
The primary cause of sparse data is that users do not take the initiative to score commodities. Because commodity information is continuously updated, a user has neither the ability nor the energy to purchase and rate every commodity. The subjective behavior of selecting a product and scoring it is an implicit expression of the user's interest preferences: users only choose products they are interested in, and if they are satisfied with a commodity they give it a high score. On the basis of the conventional algorithm, this paper integrates the user's subjective scoring behavior and makes full use of the implicit information it contains. The improved algorithm first calculates the user's scoring probability for each product type and then incorporates the commodity type and this scoring probability into the traditional similarity calculation. Compared with the traditional neighbor-based recommendation algorithm, the GSCF algorithm, and the UBCF algorithm, the MAE and RMSE values of the UPCF algorithm are the lowest for the same number of neighbors, which fully illustrates the usefulness of the user preference information implied by the user's subjective scoring behavior.
This paper presents the UPCF algorithm. The project type is added to the traditional collaborative filtering recommendation algorithm to alleviate the cold-start and score-sparsity problems. The primary reason for the data sparsity problem is that users do not score the commodities, which is a subjective behavior of the users. Based on the GSCF algorithm, this paper combines the user's willingness to score commodities, and an algorithm based on user score probability and project type is proposed. The differences between UPCF and the traditional algorithm are analyzed, and a recommendation system data set is used for experimental verification and data analysis. The experimental results show that the algorithm based on user score probability and project type alleviates the problem of data sparsity and performs better than the conventional algorithm.
6 Future works
Research on collaborative filtering recommendation technology is already fairly mature, but there is still room for improvement in recommendation accuracy and user experience. The improved algorithm proposed in this paper only addresses data sparsity. In order to address other shortcomings of the traditional recommendation algorithm, future work will mainly focus on the following:
The use of social networks to solve the cold-start problem: the social information and displayed information (circle of friends, QQ space) of social network users can be used to supplement and improve the user behavior information available to the recommendation algorithm, so that user preferences can be predicted more accurately and the performance of the recommendation algorithm is enhanced.
The use of time sequence to solve the problem of user interest drift: because a user's interests change over time, time is added to the recommendation algorithm to study the impact of this objective factor on recommendation accuracy.
GSCF: A collaborative filtering recommendation algorithm based on the graph structure
IBCF: A collaborative filtering recommendation algorithm based on item
MSE: Mean square error
RMSE: Root mean square error
UBCF: A collaborative filtering recommendation algorithm based on user
UPCF: A collaborative filtering recommendation algorithm based on user score probability and project type
The authors would like to thank all anonymous reviewers for their insightful comments and constructive suggestions, which helped improve the quality of this paper. This research was supported by the National Key Research and Development Program of China (No. 2018YFC0810204), the National Natural Science Foundation of China (No. 61502220), the Shanghai Science and Technology Innovation Action Plan Project (16111107502, 17511107203), and the Shanghai key lab of modern optical system. In this paper, Fan Yang is the corresponding author.
CHUNXUE WU received the Ph.D. degree in Control Theory and Control Engineering from the China University of Mining and Technology, Beijing, China, in 2006. He is a Professor with the Computer Science and Engineering and Software Engineering Division, School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, China. His research interests include wireless sensor networks, distributed and embedded systems, wireless and mobile systems, and networked control systems.
JING WU is a graduate student of computer technology at the School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, China. His research interests include the Internet of Things, embedded system development, and deep learning.
CHONG LUO (1991-) is a postgraduate student of computer technology, School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, China. His research interests include network communication, big data, and machine learning.
QUNHUI WU is a system integration engineer at Shanghai Haolong Environmental Technology Co., Ltd. He graduated from Xi'an Jiaotong University in computer science and technology in 2013. At present, his main research directions are computer system integration and computer control systems.
CONG LIU received the Ph.D. degree in computer application from the East China Normal University, Shanghai, China, in 2013. He is currently a Lecturer with the Department of Computer Science and Engineering, School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, China. His research interests include Evolutionary Computation, Machine Learning, and Image Processing.
YAN WU is currently a postdoctoral associate at the School of Public and Environmental Affairs, Indiana University Bloomington. He obtained his PhD degree at Southern Illinois University Carbondale, with concentrations in environmental chemistry and ecotoxicology. His research involves elucidating the environmental fate of contaminants using chemical and computational techniques, predicting their associated effects on wildlife and public health, and data processing and analysis in environment-related fields.
FAN YANG is an Associate Prof in School of Information, Zhongnan University of Economics and law. She received her PhD degree in School of Computer Science, Wuhan University, China, 2007, and M.S. degree in Dept. of Computer Engineering, Hubei University, China, 2004. Now Dr. Yang is doing some research on wireless communication security. Her research interest includes security analysis and improvements for Block Chain related technology.
This research was supported by the National Key Research and Development Program of China (No. 2018YFC0810204), National Natural Science Foundation of China (No.61502220), Shanghai Science and Technology Innovation Action Plan Project (16111107502, 17511107203) and Shanghai key lab of modern optical system.
The idea arose from the discussion between CW and JW. CLuo, QW, and YW performed the experiments. JW helped in finalizing the solution and amending the manuscript. CLiu and QW completed the writing and formatting of the paper. FY took charge of all the works for the submission of the paper. All authors read and approved the final manuscript.
School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
Shanghai Haolong Environmental Science and Technology Co., Ltd, Shanghai, 201110, China
Public and Environmental Affairs, Indiana University Bloomington, Bloomington, IN 47405, USA
School of Information and Safety Engineering, Zhongnan University of Economics and Law, Wuhan, 430073, China
1. L. Jianguo, Z. Tao, W. Binghong, Research progress of personalized recommendation system. Adv. Nat. Sci. 19(1), 1–15 (2009)
2. D. Ailin, Z. Yangyong, S. Bole, Collaborative filtering recommendation algorithm based on item score prediction. Softw. J. (9), 1621–1628 (2003)
3. Z. Xiangyu, Study on top-N collaborative filtering recommendation technology (Beijing Institute of Technology, Beijing, 2014)
4. J. Xiao, M. Luo, J.M. Chen, J.J. Li, An item based collaborative filtering system combined with genetic algorithms using rating behavior, in Advanced Intelligent Computing Theories and Applications. ICIC 2015. Lecture Notes in Computer Science, vol 9227, ed. by D.S. Huang, K. Han (Springer, Cham, 2015), pp. 453–460
5. L. Shijia, Research and application of collaborative filtering algorithm based on coupling similarity (Zhejiang University, Hangzhou, 2016)
6. F. Bo, C. Jiujun, Among multiple users similarity collaborative filtering algorithm. Comput. Sci. 39(1), 23–26 (2012)
7. W. Jing, Y. Jian, An optimized item-based collaborative filtering recommendation algorithm. Mini Comput. Syst. 31(12), 2337–2342 (2010)
8. C. Yanping, W. Sai, Hybrid collaborative filtering algorithm based on user item. Comput. Technol. Dev. 24(12), 88–91 (2014)
9. L. Qihua, Z. Liyi, Research progress on the diversity of personalized recommendation system. Libr. Inf. Work 57(20), 127–135 (2013)
10. Z. Tao, Ten challenges of personalized recommendation technology. Programmers 6, 107–111 (2012)
11. J. Xu, Personalized recommendation algorithm based on comments and ratings (Zhejiang University, Hangzhou, 2013)
12. Y. Ting, Personalized recommendation based on collaborative filtering (Beijing Institute of Technology, Beijing, 2015)
13. C.X. Jia, R.R. Liu, Improve the algorithmic performance of collaborative filtering by using the interevent time distribution of human behaviors. Physica A 436, 236–245 (2015)
14. M. Elahi, F. Ricci, N. Rubens, A survey of active learning in collaborative filtering recommender systems. Comput. Sci. Rev. 20(C), 29–50 (2016)
15. W. Kong, Research on the key issues of collaborative filtering recommendation system (Huazhong Normal University, Wuhan, 2013)
16. L. Qiang, Research on key algorithms in collaborative filtering recommendation system (Zhejiang University, Hangzhou, 2013)
17. H. Liu, Z. Hu, A. Mian, et al., A new user similarity model to improve the accuracy of collaborative filtering. Knowl.-Based Syst. 56(3), 156–166 (2014)
18. J. Lee, M. Sun, G. Lebanon, A comparative study of collaborative filtering algorithms. arXiv preprint arXiv:1205.3193 (2012)
19. Y. Zeng, C.J. Sreenan, N. Xiong, L.T. Yang, J.H. Park, Connectivity and coverage maintenance in wireless sensor networks. J. Supercomput. 52(1), 23–46 (2010)
20. C.L. Liao, S.J. Lee, A clustering based approach to improving the efficiency of collaborative filtering recommendation. Electron. Commer. Res. Appl. 18, 1–9 (2016)
21. X. Peiyong, Research on collaborative filtering algorithm in personalized recommendation technology (Ocean University of China, Qingdao, 2011)
22. H. Chuangguang, Y. Jian, W. Jing, et al., Uncertain of the nearest neighbor collaborative filtering recommendation algorithm. J. Comput. 33(8), 1369–1377 (2010)
23. N. Xiong, A.V. Vasilakos, L.T. Yang, C. Wang, R. Kannan, C. Chang, Y. Pan, A novel self-tuning feedback controller for active queue management supporting TCP flows. Inf. Sci. 180(11), 2249–2263 (2010)
24. G. Shenhua, A collaborative filtering algorithm based on singular value decomposition and temporal weight. Comput. Appl. Softw. 27(6), 256–259 (2010)
25. X. Yang, J. Yu, T. Ergen, et al., The collaborative filtering model combined singularity and diffusion process. Softw. J. (8), 1868–1884 (2013)
26. Z. Qinqin, L. Kai, W. Bin, SPCF: a memory based collaborative filtering recommendation algorithm. J. Comput. Sci. 36(3), 671–676 (2013)
27. Y. Ar, E. Bostanci, A genetic algorithm solution to the collaborative filtering problem. Expert Syst. Appl. 61, 122–128 (2016)
28. C. Zhou, S. Huang, N. Xiong, S.H. Yang, H. Li, Y. Qin, X. Li, Design and analysis of multimodel-based anomaly intrusion detection systems in industrial process automation. IEEE Trans. Syst. Man Cybern. Syst. 45(10), 1345–1360 (2017)
29. R. He, N. Xiong, L.T. Yang, J.H. Park, Using multi-modal semantic association rules to fuse keywords and visual features automatically for web image retrieval. Inf. Fusion 12(3), 223–230 (2011)
30. Z. Wang, T. Li, N. Xiong, Y. Pan, A novel dynamic network data replication scheme based on historical access record and proactive deletion. J. Supercomput. 62(1), 227–250 (2012)
31. N. Xiong, J.W.A.V. Vasilakos, Y.R. Yang, A. Rindos, Y. Zhou, W.Z. Song, A self-tuning failure detection scheme for cloud computing service, in 2012 IEEE 26th International Parallel & Distributed Processing Symposium (IPDPS) (2012)
32. J. Yin, W. Lo, S. Deng, Y. Li, Z. Wu, N. Xiong, Colbar: a collaborative location-based regularization framework for QoS prediction. Inf. Sci. 265, 68–84 (2014)
33. C. Wu, J. Yuan, B. Shi, Stability of initialization response of fractional oscillators. J. Vibroengineering 18(6), 4148–4154 (2016)
34. G. Li, Z. Zhang, L. Wang, et al., One-class collaborative filtering based on rating prediction and ranking prediction. Knowl.-Based Syst. 124, 46–54 (2017)
35. B. Lin, W. Guo, N. Xiong, G. Chen, A.V. Vasilakos, H. Zhang, A pretreatment workflow scheduling approach for big data applications in multi-cloud environments. IEEE Trans. Netw. Serv. Manag. 13(3), 581–594 (2016)
36. Z. Wan, N. Xiong, N. Ghani, A.V. Vasilakos, L. Zhou, Adaptive unequal protection for wireless video transmission over IEEE 802.11e networks. Multimed. Tools Appl. 72(1), 541–571 (2014)
37. Y. Sang, H. Shen, Y. Tan, N. Xiong, Efficient protocols for privacy preserving matching against distributed datasets, in International Conference on Information and Communications Security (2006), pp. 210–227
38. R. Han, Y. Gao, C. Wu, An effective multi-objective optimization algorithm for spectrum allocations in the cognitive-radio-based internet of things. IEEE Access 6, 12858–12867 (2018)
39. H. Zheng, W. Guo, N. Xiong, A kernel-based compressive sensing approach for mobile data gathering in wireless sensor network systems. IEEE Trans. Syst. Man Cybern. Syst. 8(99), 1–13 (2017). https://doi.org/10.1109/TSMC.2017.2734886
40. Z. Yuxiao, L. Linyuan, Review of evaluation index of recommendation system. J. Univ. Electron. Sci. Technol. China 41(2), 163–175 (2012)
41. L. Jianguo, Z. Tao, G. Qiang, et al., Review of evaluation methods for personalized recommendation systems. Soc. Syst. Complex. Sci. 6(3), 1–10 (2009)
42. H. Shanshan, Study on the key issues of collaborative filtering recommendation algorithm (Shandong University, Ji'nan, 2016)
43. L. Qingwen, Study on the recommendation algorithm based on collaborative filtering (University of Science & Technology China, Hefei, 2013)
44. Q. Liu, E. Chen, H. Xiong, et al., Enhancing collaborative filtering by user interest expansion via personalized ranking. IEEE Trans. Syst. Man Cybern. B Cybern. 42(1), 218–233 (2012)
45. Y. Zhou, D. Zhang, N. Xiong, Post-cloud computing paradigms: a survey and comparison. Tsinghua Sci. Technol. 22(6), 714–732 (2017)
46. X. Liu, S. Zhao, A. Liu, N. Xiong, A.V. Vasilakos, Knowledge-aware proactive nodes selection approach for energy management in internet of things. Future Generation Computer Systems (2017). https://doi.org/10.1016/j.future.2017.07.022
47. N. Xiong, A.V. Vasilakos, L.T. Yang, L. Song, Y. Pan, R. Kannan, Y. Li, Comparative analysis of quality of service and memory usage for adaptive failure detectors in healthcare systems. IEEE J. Sel. Areas Commun. 27(4), 495–509 (2009)
48. C. Lin, N. Xiong, J.H. Park, T. Kim, Dynamic power management in new architecture of wireless sensor networks. Int. J. Commun. Syst. 22(6), 671–693 (2010)
49. J. Li, N. Xiong, J.H. Park, C. Liu, M.A. Shihua, S.E. Cho, Intelligent model design of cluster supply chain with horizontal cooperation. J. Intell. Manuf. 23(4), 917–931 (2012)
50. W. Fang, Y. Li, H. Zhang, N. Xiong, J. Lai, A.V. Vasilakos, On the throughput-energy tradeoff for data transmission between cloud and mobile devices. Inf. Sci. 283, 79–93 (2014)
TIME SERIES MODELLING OF EPIDEMICS: LEADING INDICATORS, CONTROL GROUPS AND POLICY ASSESSMENT
Published online by Cambridge University Press: 30 September 2021
Andrew Harvey*
Faculty of Economics, University of Cambridge, Cambridge, United Kingdom
*Corresponding author. Email: [email protected]
This article shows how new time series models can be used to track the progress of an epidemic, forecast key variables and evaluate the effects of policies. The univariate framework of Harvey and Kattuman (2020, Harvard Data Science Review, Special Issue 1—COVID-19, https://hdsr.mitpress.mit.edu/pub/ozgjx0yn) is extended to model the relationship between two or more series and the role of common trends is discussed. Data on daily deaths from COVID-19 in Italy and the UK provides an example of leading indicators when there is a balanced growth. When growth is not balanced, the model can be extended by including a non-stationary component in one of the series. The viability of this model is investigated by examining the relationship between new cases and deaths in the Florida second wave of summer 2020. The balanced growth framework is then used as the basis for policy evaluation by showing how some variables can serve as control groups for a target variable. This approach is used to investigate the consequences of Sweden's soft lockdown coronavirus policy in the spring of 2020.
Keywords: balanced growth; COVID-19; Gompertz curve; Kalman filter; stochastic trend. JEL codes: C22, C32
National Institute Economic Review , Volume 257: ECONOMIC CONTRIBUTIONS TO INFECTION CONTROL , Summer 2021 , pp. 83 - 100
DOI: https://doi.org/10.1017/nie.2021.21
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
© The Author(s), 2021. Published by Cambridge University Press on behalf of National Institute Economic Review
The aim of this article is to show how time series models can be used to track the progress of an epidemic, forecast key variables and evaluate the effects of policies. Developing effective techniques to accomplish these tasks is of some importance, because, as documented by Ioannidis et al. (2020), the performance of many of the methods used to forecast the current COVID-19 epidemic has not been impressive. The new models draw much of their inspiration from time series econometrics. However, the characteristics of time series for epidemics are different from those of most time series in economics and these differences need to be taken into account.
Harvey and Kattuman (2020)—hereafter HK—developed a class of univariate time series models for predicting future values of a variable which when cumulated is subject to an unknown saturation level. In these models, the logarithm of the growth rate of the cumulated series depends on a time trend. Allowing this trend to be time-varying introduces flexibility which, in the context of an epidemic, enables the effects of changes in policy and population behaviour to be tracked. Nowcasts and forecasts of the variables of interest, such as the daily number of cases, its growth rate and the instantaneous reproduction number, $ {R}_t, $ can be made. Estimation of the models is by maximum likelihood (ML) and goodness of fit can be assessed by standard statistical test procedures.
Time series models can also be used to address other questions by exploring relationships between different series. One application concerns how the time path of an epidemic in a country which suffers an outbreak before another can be used as a leading indicator. The rationale for modelling the logarithm of the growth rate (of the cumulated series) comes from the properties of a Gompertz growth curve and when two such curves follow the same time path, but one lags the other, the trends in the series on the logarithms of the growth rate are a constant distance apart. This suggests that when the trends are stochastic, the same will be true. This situation, known as balanced growth, arises in macroeconomics and is a special case of what econometricians call co-integration; see, for example, Stock and Watson (1988). The situation is illustrated by showing how the time path of deaths in the UK in the first few months of the coronavirus epidemic follows the time path of deaths in Italy 2 weeks earlier.
The requirement that two series exhibit balanced growth, while highly desirable, is not necessary for one to be a good leading indicator of the other. The need for additional flexibility is explored with data from the 'second wave' of coronavirus in Florida in the early part of the summer of 2020 where it is shown how daily new cases can potentially offer improved forecasts of deaths in 2–3 weeks' time. The forecasts are based on a bivariate unobserved component time series model that combines the dynamic information in the two series by a common trend specified as an integrated random walk (IRW) but includes an independent random walk (RW) component for new cases.
Time series modelling of an intervention can be used to assess the impact of a policy. This was done in HK in connection with the UK lockdown of March 2020. Here, an attempt is made to answer the question 'What if lockdown had been imposed a week earlier?' The impact of lockdowns is explored further by developing the ideas associated with balanced growth to try to estimate the number of coronavirus deaths in Sweden had a more stringent lockdown been imposed. The methodology draws on the study of control groups in time series by Harvey and Thiele (2021). It is argued that the fact that death rates in Sweden were roughly 10 times those in neighbouring countries could be misleading; the growth paths of the UK and Italy provide more relevant information. A comparison is made with studies based on the synthetic control (SC) method of Abadie et al. (2010, 2015).
2. Growth curves and time series models
This section sets out the basic model in which the logarithm of the growth rate of the cumulated series consists of a stochastic trend plus an irregular term. It is then shown how the framework may be extended to model the relationship between two series.
2.1. Dynamic trend models
The observational model uses data on the time series of the cumulated total of confirmed cases or deaths, $ {Y}_t, $ $ t=0,1,\dots, T, $ and the daily change, $ {y}_t=\Delta {Y}_t={Y}_t-{Y}_{t-1}. $ HK show how the theory of generalised logistic growth curves suggests models for $ \ln \kern0.5em {y}_t $ and $ \ln \kern0.5em {g}_t $ , where $ {g}_t={y}_t/{Y}_{t-1} $ or $ \Delta \ln \kern0.5em {Y}_t. $ For the special case of the Gompertz growth curve:
(1) $$ \ln \kern0.5em {y}_t=\ln \kern0.5em {Y}_{t-1}+\delta -\gamma t+{\varepsilon}_t,\kern0.5em \gamma >0,\kern1em t=1,\dots, T, $$
(2) $$ \ln \kern0.5em {g}_t=\delta -\gamma t+{\varepsilon}_t,\kern1em t=1,\dots, T, $$
where $ {\varepsilon}_t $ is a random disturbance term.
A stochastic, or time-varying, trend may be introduced into (2), to give the dynamic trend model:
(3) $$ \ln \kern0.5em {g}_t={\delta}_t+{\varepsilon}_t,\kern1em {\varepsilon}_t\sim NID\left(0,{\sigma}_{\varepsilon}^2\right),\kern2em t=1,\dots, T, $$
(4) $$ {\displaystyle \begin{array}{lllll}{\delta}_t& =& {\delta}_{t-1}-{\gamma}_{t-1}+{\eta}_t,& \kern1em & {\eta}_t\sim NID\left(0,{\sigma}_{\eta}^2\right),\\ {}{\gamma}_t& =& {\gamma}_{t-1}+{\zeta}_t,& \kern1em & {\zeta}_t\sim NID\left(0,{\sigma}_{\zeta}^2\right),\end{array}} $$
and the normally distributed irregular, level and slope disturbances, $ {\varepsilon}_t, $ $ {\eta}_t $ and $ {\zeta}_t $ , respectively, are mutually independent. When $ {\sigma}_{\zeta}^2 $ is positive, but $ {\sigma}_{\eta}^2=0, $ the trend is an IRW. HK found an IRW trend to be particularly useful for tracking an epidemic and it will be adopted in the applications here. The speed with which a trend adapts to a change depends on the signal-noise ratio, which for the IRW is $ {q}_{\zeta }={\sigma}_{\zeta}^2/{\sigma}_{\varepsilon}^2; $ when $ {q}_{\zeta }=0 $ the trend is deterministic, as in (2).
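To make the recursion concrete, here is a minimal simulation sketch of equations (3) and (4) with an IRW trend (so the level disturbance is switched off); the parameter values are illustrative and are not estimates from the paper.

```python
import random

def simulate_dynamic_trend(T=100, delta0=-2.0, gamma0=0.05,
                           sigma_eps=0.10, sigma_zeta=0.005, seed=1):
    """Simulate ln g_t = delta_t + eps_t with
    delta_t = delta_{t-1} - gamma_{t-1}   (IRW: sigma_eta = 0)
    gamma_t = gamma_{t-1} + zeta_t."""
    rng = random.Random(seed)
    delta, gamma = delta0, gamma0
    ln_g = []
    for _ in range(T):
        ln_g.append(delta + rng.gauss(0.0, sigma_eps))
        delta -= gamma                         # level updated by the slope
        gamma += rng.gauss(0.0, sigma_zeta)    # slope follows a random walk
    return ln_g
```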
Allowing $ {\gamma}_t $ to change over time means that the progress of the epidemic is no longer tied to the proportion of the population infected as it would be if $ {Y}_t $ followed a deterministic growth curve. Instead the model adapts to movements brought about by changes in behaviour and policies. If $ {\gamma}_t $ falls to zero, the growth in $ {Y}_t $ becomes exponential while a negative $ {\gamma}_t $ means that the growth rate is increasing. This flexibility also allows the model to deal with second waves, where infections start to increase sharply after having fallen to a relatively low level. The Florida example of Section 4.2 shows how the model deals successfully with a second wave and Harvey et al. (2021) report accurate forecasts for the second UK wave of early 2021. A modification of the model, which is currently under investigation, is to re-initialize the cumulative total at the low point before a new wave begins. The way in which the cumulative total enters the model is important because a key feature of the dynamic Gompertz model is its ability to detect upcoming turning points and to make forecasts that show a downward movement even before a peak has been reached; see, for example, the forecasts made for Germany in HK.
Additional components, such as day of the week effects, can be added to (3). These may be deterministic or stochastic. Explanatory variables, including interventions, can also be included, as may stationary components. Thus (3) could become:
(5) $$ \ln \kern0.5em {g}_t={\delta}_t+{\theta}_t+{\mu}_t+{\mathbf{x}}_t^{\prime}\boldsymbol{\beta} +{\varepsilon}_t,\kern2em t=1,\dots, T, $$
where $ {\theta}_t $ is a stochastic daily component, modelled as in Harvey (1989, pp. 43–4), $ {\mu}_t $ is a stationary autoregressive process, $ {\mathbf{x}}_t $ is a vector of explanatory variables and $ \boldsymbol{\beta} $ is a corresponding vector of parameters. Possible candidates for explanatory variables include stringency indices for governmental policies, as in Hale et al. (2021). All these models can be handled using techniques based on state space models and the Kalman filter (KF); see Durbin and Koopman (2012). Here, the STAMP package of Koopman et al. (2021) is used. Estimation of the unknown parameters is by ML. Diagnostic tests for normality and residual serial correlation are based on the one-step ahead prediction errors, $ {v}_t=\ln \kern0.5em {g}_t-{\delta}_{t|t-1},t=3,\dots, T. $
The KF outputs the estimates and forecasts of the state vector $ {\left({\delta}_t,{\gamma}_t\right)}^{\prime }. $ Estimates at time $ t $ conditional on information up to and including time $ t $ are denoted $ {\left({\delta}_{t|t},{\gamma}_{t|t}\right)}^{\prime } $ , while predictions $ j $ steps ahead are $ {\left({\delta}_{t+j|t},{\gamma}_{t+j|t}\right)}^{\prime }. $ The smoother, which estimates the state at time $ t $ based on all $ T $ observations in the series, is denoted $ {\left({\delta}_{t|T},{\gamma}_{t|T}\right)}^{\prime } $ .
Remark 1. When the observations are small, a negative binomial distribution for $ {y}_t, $ conditional on past observations, may be appropriate. HK show how the model may be modified to deal with this possibility. However, the numbers in the applications here are big enough to allow $ {y}_t $ to be treated as conditionally lognormal and hence for the conditional distribution of $ \ln \kern0.5em {g}_t $ to be considered normal.
2.2. Forecasts
The forecasts of the trend in future values of $ \ln \kern0.5em {g}_t $ in the dynamic Gompertz model are given by $ {\delta}_{T+\mathrm{\ell}\mid T}={\delta}_{T\mid T}-{\gamma}_{T\mid T}\mathrm{\ell}, $ $ \mathrm{\ell}=1,2,\dots, $ where $ {\delta}_{T\mid T} $ and $ {\gamma}_{T\mid T} $ are the KF estimates of $ {\delta}_T $ and $ {\gamma}_T $ at the end of the sample. Forecasts of the trend in the daily observations are obtained from a recursion for the trend in their cumulative total, $ {Y}_t $ , namely,
(6) $$ {\mu}_{T+\mathrm{\ell}\mid T}={\mu}_{T+\mathrm{\ell}-1\mid T}\left(1+{g}_{T+\mathrm{\ell}\mid T}\right),\kern2em \mathrm{\ell}=1,2,\dots, \kern1em $$
where $ {g}_{T+\mathrm{\ell}\mid T}=\exp {\delta}_{T+\mathrm{\ell}\mid T} $ and $ {\mu}_{T\mid T}={Y}_T $ . The trend in the daily figures is then,
(7) $$ {\mu}_{y,T+\mathrm{\ell}\mid T}={g}_{T+\mathrm{\ell}\mid T}{\mu}_{T+\mathrm{\ell}-1\mid T}\kern0.5em ,\kern2em \mathrm{\ell}=1,2,\dots $$
Daily effects can be added to $ {\delta}_t. $ In this case, forecasts of the observations themselves, that is $ {\hat{y}}_{T+\mathrm{\ell}\mid T} $ and $ {\hat{Y}}_{T+\mathrm{\ell}\mid T}, $ are given by adding the filtered value of the daily component to the trend component, $ {\delta}_{T+\mathrm{\ell}\mid T} $ .
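A sketch of the recursions (6) and (7), assuming the filtered end-of-sample estimates of the level and slope and the cumulative total $ {Y}_T $ are already available (for example from a Kalman filter run); daily effects are ignored here.

```python
from math import exp

def forecast_daily_trend(delta_T, gamma_T, Y_T, horizon):
    """Forecast the trend in the daily observations ell = 1, ..., horizon ahead."""
    mu = Y_T                                   # mu_{T|T} = Y_T
    daily = []
    for ell in range(1, horizon + 1):
        g = exp(delta_T - gamma_T * ell)       # g_{T+ell|T} = exp(delta_{T+ell|T})
        daily.append(g * mu)                   # (7): mu_{y,T+ell|T}
        mu *= 1.0 + g                          # (6): mu_{T+ell|T}
    return daily
```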
Unlike most other forecasting methods, the dynamic Gompertz model yields prediction intervals. The way in which they are constructed is set out in Section 2.4 of Harvey et al. (2021) and examples are given.
2.3. Forecasting and nowcasting R
Harvey and Kattuman (2021) use filtered estimates of $ {g}_{y,t}, $ given by $ {g}_{y,t\mid t}={g}_{t\mid t}-{\gamma}_{t\mid t}, $ to track the progress of an epidemic. A corresponding estimator of the instantaneous reproduction number, $ {R}_t, $ can be constructed in a number of ways, as in Wallinga and Lipsitch (2007). The most practical for COVID-19 are:
(8) $$ {\tilde{R}}_{t,\tau }=1+\tau {g}_{y,t\mid t}\kern1.5em \mathrm{and}\kern1em {\tilde{R}}_{t,\tau}^e=\exp \left(\tau {g}_{y,t\mid t}\right), $$
where $ \tau $ is the generation interval, which is the number of days that must elapse before an infected person can transmit the disease; setting $ \tau =4 $ is a good choice. Harvey and Kattuman (2021) provide more details and show how forecasts of $ {R}_{t,\tau } $, with associated prediction intervals, can be made. Harvey et al. (2021) illustrate the implementation of this approach in the NIESR tracker.
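The two estimators in (8) are easy to compute once the filtered growth rate is available; the sketch below is illustrative. With the value quoted in Section 4.1 for the UK, g = -0.058 and tau = 4, the linear estimator gives 1 + 4 x (-0.058), approximately 0.77.

```python
from math import exp

def reproduction_numbers(g_y, tau=4):
    """Estimators in (8): R = 1 + tau * g  and  R = exp(tau * g)."""
    return 1.0 + tau * g_y, exp(tau * g_y)

# Example: reproduction_numbers(-0.058) -> (0.768, 0.793...)
```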
2.4. Panel data
The extended dynamic Gompertz model of (5) can be used as the basis for handling panel data. When there are $ N $ cross-sectional units,
(9) $$ \ln \kern0.5em {g}_{it}={\delta}_{it}+{\mathbf{z}}_{it}^{\prime }{\boldsymbol{\alpha}}_i+{\mathbf{x}}_{it}^{\prime}\boldsymbol{\beta} +{\varepsilon}_{it},\kern1.5em i=1,\dots, N,\kern1.5em t=1,\dots, T, $$
where $ {\delta}_{it}^{\prime }s $ are stochastic trend components and $ {\mathbf{z}}_{it} $ and $ {\mathbf{x}}_{it} $ are vectors of explanatory variables, with $ {\boldsymbol{\alpha}}_i, $ $ i=1,\dots, N, $ and $ \boldsymbol{\beta} $ denoting associated coefficients. It may be necessary to add autoregressive and day of the week components. Either way, we can pre-filter with the univariate filter, as in Harvey (1989, Sect. 3.4.2) and then, if the components are assumed to be mutually independent, use the transformed observations to estimate a standard panel data model. This procedure can be iterated to convergence.
Further generalisations would let the stochastic trends depend on common factors.
3. Comparing different growth curves
The Gompertz growth curve lies behind the notion of setting up time series models in which the logarithm of the growth rate of the cumulative total of a variable follows a trend. It is therefore able to provide insight on how to formulate and interpret models linking several series.
The Gompertz growth curve is:
(10) $$ \mu (t)=\overline{\mu}\exp \left(-\alpha {e}^{-\gamma t}\right),\kern2em \alpha, \gamma >0,\kern2em -\infty <t<\infty, $$
where $ \gamma $ is a growth rate parameter, $ \overline{\mu} $ is the upper bound or saturation level ( $ \mu (t)\kern1.em \to \kern1.5em \overline{\mu} $ as $ t\to \infty \Big) $ and $ \alpha $ reflects initial conditions. The associated incidence curve is:
$$ d\mu (t)/ dt={\mu}^{\prime }(t)=\gamma \alpha \mu (t)\exp \left(-\gamma t\right), $$
with a peak at $ t={\gamma}^{-1}\ln \kern0.5em \alpha . $ Figure 1 shows an incidence curve with a peak at $ t=19.97 $ , together with the same curve shifted to the right so the peak is at $ 30.71. $ A curve above the right hand curve is also shown; this is higher because the value of $ \overline{\mu} $ is 1400 rather than 1000 as it is for the other two curves. In all cases $ \gamma =0.15, $ but for the left hand curve $ \alpha $ is 20 whereas for the right hand curves it is 100.
Figure 1. (Colour online) Gompertz incidence curves, $ {\mu}^{\prime }(t), $ with $ \gamma =0.15, $ $ {\alpha}_1=20 $ for the left hand curve and $ {\alpha}_2=100 $ for the right hand curves; the value of $ \overline{\mu} $ in the upper curve is 1400 as opposed to 1000 as in the lower curve
Although the right hand curves in figure 1 clearly lag the left hand one, it is not immediately evident how to model the relationship. However, the logarithms of the growth rates of $ \mu (t) $ are:
(11) $$ \ln \kern0.5em g(t)=\delta -\gamma t,\kern2em t\kern0.5em \ge \kern0.5em 0, $$
where $ \delta = $ $ \ln \kern0.5em \alpha \gamma; $ compare (2). Figure 2 shows the two lines for $ \ln \kern0.5em g(t) $ running in parallel. The distance between them depends on the intercepts, $ \delta, $ which in turn depend on the initialization parameter, $ \alpha . $ The height of the incidence curve, which depends on the saturation level, $ \overline{\mu}, $ is irrelevant; as a result, the lines corresponding to the two right-hand incidence curves in figure 1 are identical. This is important because it means that small populations can be compared with big ones: size does not matter.
Figure 2. (Colour online) Logarithms of the growth rates for incidence curves in figure 1; $ \gamma =0.15, $ $ {\alpha}_1=20 $ and $ {\alpha}_2=100 $ (upper line)
When two lines are parallel, the upper line lags the lower one by:
(12) $$ k=\frac{\delta_2-{\delta}_1}{\gamma }=\frac{\ln \kern0.5em {\alpha}_2-\ln \kern0.5em {\alpha}_1}{\gamma }, $$
where $ {\delta}_1 $ and $ {\delta}_2 $ are the intercepts of the lower and upper lines, respectively and $ {\alpha}_1 $ and $ {\alpha}_2 $ are the corresponding initial conditions. In figure 2, the lag is $ k=10.73. $ When the $ {\gamma}^{\prime }s $ are different, the epidemic progresses at different speeds. The lines for $ \ln \kern0.5em g(t) $ are no longer parallel and the time lag is no longer constant.
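A small numerical check of (10)–(12) using the parameter values in figures 1 and 2 (a sketch only; no plotting):

```python
from math import exp, log

def gompertz_incidence(t, mu_bar, alpha, gamma):
    """mu'(t) = gamma * alpha * mu(t) * exp(-gamma * t), with mu(t) from (10)."""
    mu = mu_bar * exp(-alpha * exp(-gamma * t))
    return gamma * alpha * mu * exp(-gamma * t)

gamma, alpha1, alpha2 = 0.15, 20.0, 100.0
peak1 = log(alpha1) / gamma                    # peak of the left-hand curve
lag = (log(alpha2) - log(alpha1)) / gamma      # equation (12)
print(round(peak1, 2), round(lag, 2))          # 19.97 and 10.73, as in the text
```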
4. A statistical model for leading indicators
Now consider observational models of the form (2) for two time series which are on the same growth path because $ {\gamma}_1={\gamma}_2 $ but the first series leads the second by $ k $ time periods. The observations run from $ t=1 $ to $ T $ but when the first series is lagged by $ k $ time periods, $ \ln \kern0.5em {g}_{1,t-k} $ runs from $ t=k+1 $ to $ T+k. $ Subtracting the first series from the second gives:
(13) $$ \ln \kern0.5em {g}_{2t}=\delta +\ln \kern0.5em {g}_{1,t-k}+{\varepsilon}_t,\kern0.5em $$
where $ \delta =\ln \kern0.5em \left({\alpha}_2/{\alpha}_1\right) $ and the disturbance term is $ {\varepsilon}_t={\varepsilon}_{2t}-{\varepsilon}_{1,t-k} $ . The equation takes the same form when the trends are stochastic, so long as there is balanced growth. The disturbance, $ {\varepsilon}_t, $ can be replaced by any stationary process.
When the two series are not on the same growth path, there is no longer a value of $ k $ for which the contrast in (13) is stationary. The stationarity test of Kwiatkowski et al. (1992)—the KPSS test—can be used to test for this possibility.
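Given two cumulative series, the contrast in (13) can be computed directly; the sketch below assumes strictly increasing cumulative totals and leaves any formal stationarity testing (for example, the KPSS test) to a statistics package.

```python
from math import log

def log_growth_rates(Y):
    """ln g_t = ln((Y_t - Y_{t-1}) / Y_{t-1}); assumes Y is strictly increasing."""
    return [log((Y[t] - Y[t - 1]) / Y[t - 1]) for t in range(1, len(Y))]

def contrast(ldl_target, ldl_leader, k):
    """ln g_{2,t} - ln g_{1,t-k}: roughly constant under balanced growth."""
    stop = min(len(ldl_target), len(ldl_leader) + k)
    return [ldl_target[t] - ldl_leader[t - k] for t in range(k, stop)]
```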
A bivariate time series model combines the dynamic information in the target series with that in the leading indicator. It is set up by lagging the observations on the leading indicator so that they are aligned with the target. Hence, defining $ {g}_{1,t}^{(k)}={g}_{1,t-k} $ for $ t=k+1,..,T+k $ gives:
(14) $$ {\displaystyle \begin{array}{l}\ln \kern0.5em {g}_{1,t}^{(k)}={\delta}_t+{\psi}_t+{\varepsilon}_{1t},\kern1.5em t=k+1,.\dots, T+k,\\ {}\ln \kern0.5em {g}_{2t}=\overline{\delta}+{\delta}_t+{\varepsilon}_{2t},\kern5.5em t=k+1,.\dots, T.\end{array}} $$
The $ k $ future values of $ \ln \kern0.5em {g}_{2,T+j}, $ $ j=1,..,k $ are treated as missing observations. The trend, $ {\delta}_t, $ is an IRW that is designed to capture the growth path of the target series. Its initial level has been (arbitrarily) assigned to the first series; hence the need for a constant term, $ \overline{\delta}, $ in the second. The role of the other stochastic component, $ {\psi}_t, $ is to allow for deviations of the leading indicator from the balanced growth path. A convenient specification for it is the first-order autoregressive process,
$$ {\psi}_t={\phi \psi}_{t-1}+{\zeta}_t,\kern1em {\zeta}_t\sim NID\left(0,{\sigma}_{\zeta}^2\right),\kern1em t=k+1,\dots, T+k. $$
All disturbances in (14), including $ {\varepsilon}_{1t} $ and $ {\varepsilon}_{2t}, $ are Gaussian and assumed to be mutually as well as serially independent. Only a single lag is present. More lags could be included, but the aim is to find a viable leading indicator for movements in the trend rather than to estimate a distributed lag for the observations. Estimation is by state space methods. As new observations become available, nowcasts and forecasts are updated by the KF.
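To make the structure of (14) concrete, the sketch below simulates it: an integrated random walk trend shared by both series, an AR(1) deviation for the indicator, and Gaussian measurement noise. All variances and the lead k are assumed values for illustration; the sketch does not reproduce the Kalman filter estimation itself.

```python
import numpy as np

rng = np.random.default_rng(1)
T, k = 100, 14                      # sample size and assumed lead
phi = 0.9                           # AR(1) coefficient for psi_t; phi = 1 gives the RW case
sig_slope, sig_zeta = 0.002, 0.02   # disturbance s.d. of the IRW slope and of zeta_t
sig_eps1, sig_eps2 = 0.05, 0.05     # measurement noise in the two equations
delta_bar = -0.08                   # constant offset in the target equation

# Integrated random walk trend: the (negative) slope of delta_t is itself a random walk.
gamma_t = 0.03 + np.cumsum(rng.normal(0, sig_slope, T + k))
delta_t = np.log(0.1) - np.cumsum(gamma_t)

# AR(1) deviation of the leading indicator from the balanced growth path.
psi = np.zeros(T + k)
for s in range(1, T + k):
    psi[s] = phi * psi[s - 1] + rng.normal(0, sig_zeta)

# Observation equations of (14); the target is observed only for the first T periods,
# so its last k values would be treated as missing when the model is estimated.
ldl_indicator = delta_t + psi + rng.normal(0, sig_eps1, T + k)
ldl_target = delta_bar + delta_t[:T] + rng.normal(0, sig_eps2, T)
```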
When $ \left|\phi \right|<1 $ , the series are co-integrated with balanced growth. In the absence of balanced growth, the suggestion is to let $ {\psi}_t $ be a RW, by setting $ \phi =1 $ . The value of $ k $ is then based on experimentation and prior information about what might constitute a reasonable lag. The hope is that the RW specification for $ {\psi}_t $ enables its movements to be separated from those in the IRW trend.
The filtered estimates, $ {g}_{t\mid t} $ and $ {\gamma}_{t\mid t}, $ for the target series give the nowcast of $ {g}_{y,t} $ at time $ t=T $ and the forecast at $ t=T+k $ . Forecasts can also be made beyond $ t=T+k, $ but without the benefit of corresponding values of the leading indicator. The KF and smoother implicitly weight observations in both series in order to compute nowcasts and forecasts for the target.
4.1. Italy and the UK
Figure 3 shows the daily deaths in Italy and the UK from 2 March to 20 June 2020; after that the numbers for Italy start to become small. The figures are for when the deaths were recorded rather than when they occurred. Series based on date of death would not have the daily pattern but were difficult to obtain at that time. Data sources are given in Appendix A.
Figure 3. (Colour online) Daily deaths from COVID-19 in Italy and UK in 2020
Italy clearly leads the UK but the relationship is captured more precisely in figure 4 which shows the logarithms of the growth rates of total deaths (LDL). The UK numbers are small at the beginning of March and so there are missing observations. A lag of 14 days is not inconsistent with prior information and it has the attraction of lining up the days of the week in the two countries. Other lags were tried but 14 minimised the gap between the two countries. Figure 5 shows the LDL series with Italy lagged by 14 days together with the contrast between the two countries obtained by subtracting Italy from the UK. The contrast series appears to be stationary with a mean close to zero; without the lag for Italy the values at the end of March and the beginning of April tend to be higher than the others, reflecting the later UK lockdown. Estimating a regression model with daily dummy variables removed most of the serial correlation and gave a mean of $ \tilde{\delta}=-0.083, $ with a SE of $ 0.035. $ The diagnostic statistics wereFootnote 5: $ r(1)=-0.06, $ $ Q(14)=13.40, $ $ BS=1.85 $ and $ H=1.24. $
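The LDL transformation and the lagged contrast are straightforward to compute from the daily series; the sketch below shows one way to do so with pandas, using a tiny synthetic series rather than the recorded deaths.

```python
import numpy as np
import pandas as pd

def ldl(daily: pd.Series) -> pd.Series:
    """LDL transformation: ln g_t, where g_t = y_t / Y_{t-1} is the growth
    rate of the cumulative total."""
    Y = daily.cumsum()
    return np.log(daily / Y.shift(1))

# Tiny synthetic example; the paper uses recorded daily deaths for Italy and the UK.
daily = pd.Series([3.0, 5.0, 8.0, 12.0, 18.0, 25.0, 33.0, 40.0, 46.0, 50.0])
print(ldl(daily).round(3))

# With date-indexed series ldl_uk and ldl_italy, the contrast of figure 5 would be
# ldl_uk - ldl_italy.shift(14), i.e. Italy lagged by 14 days.
```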
Figure 4. (Colour online) Logarithms of the growth rates (LDL) of total deaths in UK and Italy
Figure 5. (Colour online) LDL series from 16 March to 20 June with Italy lagged by 14 days together with the contrast LDLUK–LDLItaly
Fitting a bivariate time series model of the form (14), starting on 16 March and finishing on 5 July, gave a slowly changing trend that was close to being deterministic. The $ {\delta}_{1t} $ term was excluded but a daily component was included. The estimate of the daily growth rate of UK deaths 14 days beyond the final observation on 20 June was $ {g}_{y,T+k\mid T}=-0.058, $ giving a forecast of $ {\tilde{R}}_{T+k,4}=0.77 $ .
4.2. Deaths and new cases in Florida
Daily cases of COVID-19 in the U.S. state of Florida peaked in early April. There was then a decline following a lockdown during April. After April restrictions were eased and there was a levelling out in May, followed by a sharp rise in June. This second wave poses a challenge for a model in which new cases are used as a leading indicator for deaths. The model deals with the second wave by allowing $ {\gamma}_{t\mid t} $ to become negative; estimates of $ {R}_t $ can still be obtained from $ {g}_{y,t\mid t}, $ as in (8).
Aside from the model having to deal with a situation where new cases and deaths rise and fall, there is the problem that the basis on which new cases are recorded changes over time. At the beginning of the pandemic, new cases in many countries were primarily hospital admissions, but over time testing became more widespread. In the case of Florida, there was an increase in testing in May, although the growth rate in tests was roughly constant from the end of May onwards. This suggests that the growth rate of confirmed new cases may still be a good indicator of the path of new infections; see Harvey and Kattuman (Reference Harvey and Kattuman2021).
The observations, particularly deaths, have a strong weekly pattern. A clearer impression of the underlying trend is given by figure 6 which shows a 7-day moving average of the logarithms of the growth rates of total new cases and deaths from 29 March to 19 July 2020 inclusive. It is apparent that new cases are leading, but the relationship between deaths and new cases is not completely stable over time, partly because of an increase in the growth rate of testing in May and partly because of other factors, such as better hospital treatment. The inclusion of the $ {\psi}_t $ component in model (14) offers a way of dealing with the discrepancy. Nevertheless, despite the instability, it seems clear that new cases peak some time before the end of the sample whereas deaths appear to be at their peak, something confirmed by later observations.
Figure 6. (Colour online) Seven-day moving average of LDL deaths in Florida, new cases and new cases lagged 18 days (dotted line) from 29 March till 19 July 2020
The lag in (14) is chosen so as to get maximum benefit for new cases as a leading indicator. It is not trying to model the distribution of days from infection to death although the choice of $ k $ may be roughly aligned with the mean time to death. After some experimentation it was decided to fix the lag at 18. The LDL for new cases shifted in this way, and shown in figure 6, tracks deaths quite well.
The model, including day of the week variables, was fitted to the Florida data from 29 March till 19 July, with the new cases shifted forward 18 days so as to end on 6 August; thus $ k=18 $ . Specifying $ {\psi}_t $ as a first-order autoregressive process in (14) gave an estimated $ \phi $ of 0.998, so a RW seems appropriate. The smoothed estimates of this RW component are shown in figure 7. The lower graph is the smoothed estimates of the day of the week component in deaths; note that the bivariate model is able to give estimates for the period after 19 July when there are no observations on deaths. The size and variability of the daily component in deaths is much bigger than it is for new cases, with the very high variation coinciding with the period when the numbers of deaths are relatively small. Similarly the prediction error variance of 0.115 for new cases was less than half the 0.253 obtained for deaths. Little serial correlation remained in the residuals for deaths: the Box–Ljung Q-statistic for the first 18 residual autocorrelations was 8.16, while the corresponding figure for cases was a little higherFootnote 6 at 25.01. The signal-noise ratio was estimated as 0.00037, so the trend changes relatively slowly but is still able to adapt to changes in direction.
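The Box–Ljung diagnostics quoted above can be computed for any residual series with statsmodels; the sketch below uses simulated residuals and mirrors the degrees-of-freedom adjustment mentioned in footnote 6.

```python
import numpy as np
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(2)
residuals = rng.normal(size=113)   # stand-in for the model's one-step-ahead errors

# Q-statistic on the first 18 residual autocorrelations; model_df = 6 discounts the
# six estimated parameters, so the reference chi-square has 12 degrees of freedom.
print(acorr_ljungbox(residuals, lags=[18], model_df=6))
```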
Figure 7. (Colour online) Smoothed estimates of the RW component in Florida new cases, shifted forward by 18 days and the associated daily component in deaths
Figure 8 shows the forecasts of the logarithm of the growth rate of deaths, obtained by using the leading indicator, together with the actual observations from 20 July to 4 August. The smooth dotted line is the trend in LDL deaths. As can be seen, the model foresees the turning point. By contrast, the growth rate of LDL deaths on 19 July is still positive, and estimating a univariate model up to this point gave forecasts continuing on an upward path, overshooting the actual observations.
Figure 8. (Colour online) Forecasts (dots) and trend (smooth dots) of the logarithm of the growth rate of deaths, obtained by using the leading indicator, together with the actual observations from 20 July to 4 August; observations before 20 July (LDLFlDeath) shown by thick line
5. Policy interventions and control groups
This section shows how the time series models can be used to assess the effects of policy. The first example uses univariate time series modelling to investigate the timing of the UK lockdown in the spring of 2020. The second illustrates how the balanced growth framework provides the basis for policy evaluation by showing how some variables can be used as control groups for a target variable. The consequences of Sweden's soft lockdown coronavirus policy in the early part of 2020 are assessed and a comparison is made with studies based on the method of SC.
5.1. What if the March 2020 lockdown in the UK had been a week earlier?
The UK went into full lockdown on 23 March. Can we estimate how many deaths could have been saved if it had been a week earlier?
A slope intervention in (2) enables the effect of a policy which changes $ \gamma $ to be evaluated. Thus,
(15) $$ \ln \kern0.5em {g}_t=\delta -\gamma t-\beta {tw}_t+{\varepsilon}_t,\kern1em t=1,\dots, T, $$
where $ {w}_t $ are intervention dummy variables. When the full effect is realised, the slope on the time trend will have moved from $ \gamma $ to $ \gamma +\beta . $ A positive $ \beta $ lowers the growth rate, $ {g}_t, $ the peak of the incidence curve and the final level. The intervention dummies can be constructed from a logistic cumulative distribution function, giving a response curve:
$$ W(t)=1/\Big(1+\exp \left(-\gamma \left(t-{t}^{\mathrm{med}}\right)\right)\Big),\kern2em 0< W(t)< 1, $$
where $ {t}^{\mathrm{med}} $ is the median. With $ {t}^L $ and $ {t}^U $ denoting the beginning and end of the time span during which the response to the intervention occurs, the dummy variables are defined as $ {w}_t=0 $ for $ t<{t}^L, $ $ {w}_t=W(t) $ for $ t={t}^L, {t}^L+1,..,{t}^U $ and $ {w}_t=1 $ for $ t={t}^U+1,..,T. $ HK fitted the static model in (15) to new cases in the UK, with an intervention starting on 26 March and ending on 12 April, using data from the beginning of March up to 29 April. The result was an estimate of $ \beta $ equal to $ 0.020 $ $ (0.004) $ and an estimate of $ \gamma $ also equal to $ 0.020. $ The overall effectFootnote 7 is a new slope of $ 0.041 $ . The trend, with the intervention included, is shown by the dashed line in figure 9.
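A sketch of how the dummy variables might be generated is given below; the window end-points, response rate and median (taken here as the midpoint of the window) are assumed values for illustration, not the fitted ones.

```python
import numpy as np

def logistic_dummies(T, t_lower, t_upper, rate):
    """Intervention dummies w_t: zero before t_lower, the logistic response W(t)
    on [t_lower, t_upper], and one thereafter (t in integer periods)."""
    t = np.arange(1, T + 1, dtype=float)
    t_med = 0.5 * (t_lower + t_upper)    # median response date, assumed midpoint here
    W = 1.0 / (1.0 + np.exp(-rate * (t - t_med)))
    return np.where(t < t_lower, 0.0, np.where(t > t_upper, 1.0, W))

# Illustrative call: an 18-period response window with an assumed rate of 0.2.
w = logistic_dummies(T=60, t_lower=26, t_upper=43, rate=0.2)
print(w.round(2))
```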
Figure 9. (Colour online) Estimates of logarithm of growth rate of total cases in UK with a logistic intervention and a daily effect
The effect of implementing lockdown restrictions a week earlier can be estimated by shifting the intervention response forward by 1 week so it starts on 19 March, rather than on 26 March. The adjusted trend in the logarithm of the growth rate is then:
(16) $$ \ln \kern0.5em {g}_t^{\ast }=\delta -\gamma t-\beta {tw}_{t+7},\kern1em t=1,\dots, T. $$
Once the effect of the intervention has worked itself through, the new slope is the same as before, as can be seen in the solid line in figure 9.
The predicted final total is:
$$ \overline{\mu}\simeq {\mu}_T\exp \left(\left(\exp {\delta}_{T\mid T}\right)/{\gamma}_{T\mid T}\right), $$
where $ T $ is 12 April. For the actual data, $ {\mu}_T $ can be approximated by $ {Y}_T, $ but for the early lockdown scenario, $ {\mu}_T $ will be smaller because the growth rate falls earlier. This implies that the level on 18 March is multiplied by $ \exp \left(\sum {g}_t^{\ast}\right), $ where the summation is over the period from 19 March to 12 April. To ensure comparability, the actual level on 12 April is best estimated in the same way, rather than by $ {Y}_T $ . Thus an estimate of the ratio of the total number of cases for a hypothetical early lockdown to the actual total is given by:
$$ \frac{\mathrm{Hypothetical}}{\mathrm{Actual}}=\frac{\exp \left(\sum {g}_j^{\ast}\right)}{\exp \left(\sum {g}_j\right)}=\frac{\exp \left(\sum \exp \left(\delta -\gamma t-\beta {tw}_{t+7}\right)\right)}{\exp \left(\sum \exp \left(\delta -\gamma t-\beta {tw}_t\right)\right)}. $$
This ratio is $ 0.551 $ implying that the number of infections, as measured by data on daily coronavirus hospital admissions, could have been almost halved by an earlier lockdown. If a constant proportion of those admitted die, the implication is that deaths in the initial phase of the epidemic (up to the end of June) could have been almost halved by an earlier lockdown.Footnote 8 This conclusion is not too different from ones obtained by other methods. For example, the BBC reported on 10 June that Professor Neil Ferguson of Imperial College told a committee of MPs: 'Had we introduced lockdown measures a week earlier, we would have reduced the final death toll by at least a half'.
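The calculation behind this ratio is deterministic once the fitted parameters are known; the sketch below codes the displayed formula with placeholder values for delta, gamma, beta, the dummies and the summation window.

```python
import numpy as np

# Placeholder parameter values in the spirit of (15)-(16); not the fitted estimates.
delta, gamma, beta = -3.0, 0.020, 0.020
T = 60
t = np.arange(1, T + 1, dtype=float)

# Crude stand-in for the fitted logistic dummies: zero before period 26, one by period 43.
w = np.clip((t - 26.0) / 17.0, 0.0, 1.0)
# Dummies for a lockdown one week earlier, i.e. w_{t+7} in (16); pad the end with 1.
w_early = np.concatenate([w[7:], np.full(7, w[-1])])

g_actual = np.exp(delta - gamma * t - beta * t * w)       # g_t
g_hypo = np.exp(delta - gamma * t - beta * t * w_early)   # g*_t

window = (t >= 19) & (t <= 43)                            # stand-in for 19 March-12 April
ratio = np.exp(g_hypo[window].sum()) / np.exp(g_actual[window].sum())
print(f"hypothetical / actual: {ratio:.3f}")
```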
5.2. Fewer deaths in Sweden with a full lockdown?
Sweden did not opt for the full lockdown that other European countries imposed in March. Restrictions were minimal: the government recommended frequent handwashing, working from home, self-isolation for those who felt ill or were over 70 and social distancingFootnote 9; see, for example, Kamerlin and Kasson (Reference Kamerlin and Kasson2020). Did this policy lead to the number of deaths being significantly higher than it might have been under a full lockdown? To answer this question we need to determine the growth path that Sweden would most likely have followed under a hard lockdown.
The analysis is based on daily deaths in Sweden, UK and Italy (lagged 14 days) from 18 March to 22 July; by the end of July numbers had become small. A comparison of actual and potential growth paths is best carried out with the logarithms of growth rates of the cumulative total for the reasons discussed earlier. Although Sweden is much smaller than the UK and Italy, there is no need to take deaths per 100,000 because it follows from the discussion in Section 2.3 that standardising in this way leaves the growth rate, $ {g}_t, $ unchanged. Because the day of the week effect is very strong, particularly in the UK, the logarithms of growth rates were smoothed with a 7-day moving average, centred on the fourth day. The graph in figure 10 shows that Sweden initially fell at the same rate as the UK and Italy but then started to divergeFootnote 10 around 24 April, about a month after the UK lockdown began on 23 March.
Figure 10. (Colour online) Seven day moving averages of the logarithms of the growth rate (LDL) from 18 March to 22 July
If Sweden had kept on the same growth path as the UK and Italy there would have been fewer deaths. An estimate of the number of deaths under this alternative scenario is given by reference to the forecasting equations in Section 2.2. Let $ t=m $ denote the date of divergence and let $ {\hat{\delta}}_t $ denote the values of $ {\delta}_t $ estimated for the lockdown growth path using the data on UK and Italy. Since the moving averages are quite smooth, $ {\hat{\delta}}_t $ was constructed as a simple average of the two countries, rather than by restricted least squares (RLS) asFootnote 11 in Harvey and Thiele (Reference Harvey and Thiele2021). Then,
(17) $$ {\hat{\mu}}_{m+j}={\hat{\mu}}_{m+j-1}\left(1+{\hat{g}}_{m+j}\right)\simeq {\hat{\mu}}_{m+j-1}\exp {\hat{\delta}}_{m+j},\kern2em j=1,2,..,T-m.\kern1.5em $$
The initial value is $ {\hat{\mu}}_m={Y}_m, $ or a weighted average around that point. Solving the recursion gives:
(18) $$ {\hat{Y}}_T={\hat{\mu}}_T={Y}_m\prod \limits_{j=1}^{T-m}\left(1+{\hat{g}}_{m+j}\right)\simeq {Y}_m\exp \sum_{j=1}^{T-m}{\hat{\delta}}_{m+j}, $$
as the estimated total number of deaths, up to time $ T, $ under the lockdown scenario. The estimated number of deaths after time $ m $ is $ {\hat{Y}}_T-{Y}_m $ while the actual is $ {Y}_T-{Y}_m. $ Here $ T $ is 22 July; the number of deaths after that is relatively small.
The total on 24 April was 2236 and using formula (18) gives an estimate of 4062 for 22 July as opposed to an actual figure of 5722, a difference of 1660. The sensitivity to the initial value can be gauged by noting that the estimates using the totals 2 days before and 2 days after 24 April are 3808 and 4378, respectively.
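Formula (18) itself is a one-line cumulative product; the sketch below shows the computation, with the control-group growth rates replaced by an assumed, slowly declining sequence rather than the UK–Italy average used in the paper.

```python
import numpy as np

def counterfactual_total(Y_m, g_hat):
    """Equation (18): cumulate the control-group growth rates from the divergence
    date onwards, starting from the actual total Y_m on that date."""
    return Y_m * np.prod(1.0 + np.asarray(g_hat, dtype=float))

# 2236 deaths on 24 April; 89 days to 22 July; placeholder growth-rate path.
g_hat = 0.02 * np.exp(-0.03 * np.arange(89))
print(round(counterfactual_total(2236, g_hat)))
```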
One way of reducing the dependence on the starting value is to estimate the underlying total for Sweden using formula (18) with $ {\hat{g}}_{m+j} $ replaced by the actual Swedish values. This gave a total of 5657. The ratio of $ {\hat{Y}}_T $ for the lockdown control group to that of Sweden, with both totals expressed as ratios to the 24 April total $ {Y}_m, $ is $ 1.816/2.530=0.718. $ For $ {\hat{Y}}_T-{Y}_m $ it is $ 0.816/1.530=0.533. $ This implies that the actual increase from 24 April, which was 3486, could have been 1902. The first method gave $ 4062-2236=1826. $ The overall conclusion is that, between 24 April and 22 July, there were perhaps 40–45 per cent more deaths than there might have been under a more stringent lockdown of the kind implemented in the UK and Italy.
It is worth noting that although Sweden may have had more deaths under its soft lockdown, this does not mean a higher death rate than countries which had a hard lockdown. On 4 September, the figures for deaths per one million for Sweden were 577 as against 611 for the UK and 587 for Italy. The rates for Denmark, Norway and Finland were 108, 49 and 61, respectively, but this should not lead one to infer that the soft Swedish lockdown resulted in a death rate of perhaps ten times what it might have been.
The number of deaths in Denmark is too small to allow a full analysis based on the logarithms of growth rates. The variability is high and after mid-May there are often days when no deaths occur. Numbers in Norway and Finland are lower still. However, up to the end of April the logarithm of the growth rate for Denmark is informative. Figure 11 shows the logarithms of the growth rates for Sweden, Italy, UK and Denmark. Denmark is on a similar growth path to that of the other countries but it is lower than the UK because coronavirus may have arrived earlier and lockdown was imposed on 13 March; the gap is consistent with Denmark leading the UK by about a week. During this period deaths in Denmark were much lower than in Sweden even though they were on the same growth path until close to the end of April. This difference therefore seems to be for reasons not directly connected to the policies of the two countries on lockdown.
Figure 11. (Colour online) Seven-day moving averages of the logarithms of the growth rate from 18 March to 30 April
On 30 April, 2714 deaths had been recorded in Sweden as against 443 in Denmark, a ratio of 6.13. On 24 April, the figures were 2236 and 394, a ratio of 5.68. (But bear in mind that the population of Sweden is 1.76 times that of Denmark so in per capita terms the ratio is closer to three.) On 22 July, the ratio of Swedish to Danish deaths had risen to 9.36. However, the ratio of the lockdown estimate of 4062 to the 611 Danish deaths is only 6.64 which is not far from the ratio at the end of April. Thus the estimate of the number of deaths obtained using the control group seems quite plausible. The conclusion is that for reasons unconnected with lockdown policy the death rate per head in Sweden was about three and a half times that in Denmark. The less stringent lockdown then raised this ratio to nearly five and a half.
5.3. Synthetic control
A number of researchers have analysed the Swedish experience using the method of SC. The recent paper by Cho (Reference Cho2020) is a careful and thoughtful analysis, containing a number of references to earlier papers on the topic. Cho uses daily infection case data per million people to construct a SC variable for Sweden using observations from 29 February to 24 March. The countries and their SC weights were: Finland (0.49), Greece (0.24), Norway (0.22), Denmark (0.03) and Estonia (0.02). The choice of these countries, with the exception of Greece, is not unexpected.Footnote 12 Cho concludes that, for the 75 post-lockdown days, from 25 March until early June, synthetic Sweden is 75 per cent lower than actual Sweden. The SC method cannot be applied directly to deaths because, as noted above, the numbers for the key control group candidates are too small, so Cho goes on to examine excess deaths by combining the analysis of new cases with weekly data on excess mortality. He concludes that excess deaths were about 25 per cent less in synthetic Sweden as compared with actual Sweden. What is striking is that in the balanced growth analysis the reduction in deaths is quite close, at 29 per cent, and converting to excess deaths might end up with a figure that is closer still.
Cho, in common with other SC researchers like Born et al. (Reference Born, Dietrich and Müller2020), uses raw case numbers, standardised for population. However, the logarithm of the growth rate could also be used and, since this gives better behaved time series, it would be interesting to see whether it yields the same SC group.
Overall the balanced growth approach is simpler, more transparent and arguably more convincing. Harvey and Thiele (Reference Harvey and Thiele2021) reach the same conclusion in their analysis of the seminal SC applications of Abadie et al. (Reference Abadie, Diamond and Hainmueller2010, Reference Abadie, Diamond and Hainmueller2015).
6. Conclusions
The aim of this article has been to provide a methodological framework for the statistical analysis of the relationship between time series of the kind that are relevant for tracking and forecasting epidemics and analysing the effects of policy. The examples illustrate how the methods may be applied in practice, although a degree of caution is needed in interpreting the results because of data revisions and different definitions of what constitutes a COVID-19 death.
The growth path of an epidemic is best captured by fitting a stochastic trend to the logarithm of the growth rate of the cumulated series. When two series are on a balanced growth path, the difference between them is stationary. The relationship between deaths from coronavirus in the UK and Italy in the first half of 2020 is a good example of balanced growth, with deaths in Italy 14 days earlier providing a leading indicator for deaths in the UK. A bivariate state space model takes full account of the dynamics in both series and, by extracting the common underlying trend, yields estimates of the daily growth rate of an epidemic and the associated value of $ {R}_t. $
The balanced growth model was extended by including a RW component. This allows the growth path of the leading indicator to deviate from the growth path of the target series. A model of this kind linking deaths to new cases in Florida was estimated for the period covering the second wave in early summer 2020. The forecasts made for deaths while they were still rising are remarkably successful in picking up the subsequent downward movement.
Policy evaluation can be carried out by using some series as control groups for others. A common trend or, better still, balanced growth is the key ingredient. The Swedish policy response to coronavirus provides an example of the methodology. It is shown that the average of the growth paths of deaths in the UK and Italy yields a suitable control group for deaths in Sweden. The Swedish growth path is initially the same as those of the UK and Italy but it begins to diverge towards the end of April. The difference in the growth paths then enables the implications of the Swedish soft lockdown policy to be assessed. The analysis suggests that, between 24 April and 22 July, there were perhaps 40–45 per cent more deaths than there might have been under a more stringent lockdown of the kind implemented in the UK and Italy.
Comments and suggestions from Leopoldo Catania, Stanley Cho, Jagjit Chadha, Radu Cristea, Paul Kattuman, Michael Höhle, Christopher Howe, Peter Kasson, Jonas Knecht, Rutger-Jan Lange, Daniel Mackay, Franco Peracchi, Jerome Simons, Craig Thamotheram, Herman van Dijk and a referee are gratefully acknowledged; of course they bear no responsibility for opinions expressed or mistakes made. Some of the ideas were presented at a Keynote talk at the (virtual) 40th International Symposium on Forecasting in October 2020 and at NIESR conferences in November 2020 and February 2021. The work was carried out as part of the University of Cambridge Keynes Fund project Persistence and Forecasting in Climate and Environmental Econometrics.
A. Appendix A. Data Sources
The data for European countries were obtained from the European Centre for Disease Prevention and Control (ECDC) website, https://www.ecdc.europa.eu/en/publications-data/download-todays-data-geographic-distribution-covid-19-cases-worldwide. For Florida the source was: https://covidtracking.com/data. The data were obtained at the end of August and the beginning of September. Data can be subject to revisions. For example, the UK definition of deaths was changed in August to include only people who had a laboratory-confirmed positive COVID-19 test and had died within 28 days of the date the test result was reported. Before that it included anybody who had ever tested positive for COVID-19 no matter how long before the actual death.
Case-fatality statistics in Italy are based on defining COVID-19-related deaths as those occurring in patients who test positive for SARS-CoV-2 via RTPCR, independently of pre-existing diseases that may have caused death. This method may have resulted in overestimation; see Onder (Reference Onder2020).
B. Appendix B. Transformations
The $ \ln \kern0.5em {g}_t $ transformation (LDL) is crucial in giving a series that stabilises the variability of the observations around the trend. The behaviour of $ \ln \kern0.5em {y}_t $ is similar when $ {Y}_t $ is large, but it may be quite different at the start of an epidemic. Other leading examples of statistical forecasting methods are based on different transformations. For example, Doornik et al. (Reference Doornik, Castle and Hendry2020) model $ {g}_t $ directly, but a comparison of $ {g}_t $ with its logarithm shows it to be much less stable in that its variability changes with the level. Figure A.1 shows these transformations for data on cases of coronavirus in Florida from 29 March to 19 July, as used in the leading indicator study in Section 4.2. Figure 4 reinforces the case for $ \ln \kern0.5em {g}_t $ by showing its downward trend and stability for UK and Italian deaths in the initial phase of the epidemic during the spring of 2020.
Figure A.1. (Colour online) Daily cases of coronavirus in Florida from 29 March to 19 July (top left hand graph), and its logarithm, $ \ln {y}_t, $ (top right hand), together with the growth rate of the cumulative total (lower left hand) and its logarithm, $ \ln {g}_t $ .
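For completeness, the four transformations compared in figure A.1 can be generated as follows; the daily-cases series is a small placeholder, not the Florida data.

```python
import numpy as np
import pandas as pd

# Placeholder daily new cases (stand-in for the Florida series).
y = pd.Series([120, 150, 180, 220, 260, 310, 380, 450, 520, 600], dtype=float)

Y = y.cumsum()              # cumulative total Y_t
g = y / Y.shift(1)          # growth rate of the cumulative total, g_t
print(pd.DataFrame({
    "y_t": y,               # daily cases
    "ln y_t": np.log(y),    # logarithm of daily cases
    "g_t": g,               # growth rate
    "ln g_t (LDL)": np.log(g),
}))
```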
1 The application of classical time series methods to data on epidemics is relatively undeveloped. Most of the emphasis has been on building 'semi-mechanistic' models to simulate the path of an epidemic under different assumptions about behaviour and policies; see Avery et al. (Reference Avery, Bossert, Clark, Ellison and Ellison2020).
2 The models have been used as the basis for the NIESR COVID-19 tracker since the early part of 2021; see Harvey et al. (Reference Harvey and Kattuman2021) and https://www.niesr.ac.uk/latest-weekly-covid-19-tracker
3 Appendix B illustrates the importance of the $ \ln \kern0.5em {g}_t $ transformation for new cases in Florida.
4 If the first $ k $ observations on the second series are reliable they could be used by treating the first $ k $ values of the first series as missing.
5 r(1) is the autocorrelation at lag one, Q(P) is Box–Ljung statistic with P autocorrelations, BS is the Bowman–Shenton normality statistic and H is a heteroscedasticity statistic constructed as the ratio of the sum of squares in the last third of the sample to the sum of squares in the first third.
6 The suggestion in the STAMP manual is to test against a chi-square variable with allowance made for the loss in degrees of freedom due to estimated parameters which here is six. Thus, the chi-square may be taken to have 12 degrees of freedom.
7 When the slope was allowed to be stochastic, the estimate of $ \beta $ was reduced to $ 0.014 $ $ (0.006) $ , but with such a small sample size, a stochastic slope risks some confounding with the intervention variable.
8 It should be stressed that these findings relate specifically to the effect of the full lockdown of March 2020. A full lockdown imposed now is unlikely to have the same impact because the environment is different in that social distancing restrictions are in place, behaviour has changed and the risk to care homes is better understood.
9 Carl Bildt, a Former Prime Minister, was quoted as saying 'Swedes, especially of the older generation, have a genetic disposition to social distancing anyway'.
10 The growth path of deaths in the UK and Italy differs somewhat from the growth path of new cases. The growth rate of $ \ln \kern0.5em {g}_t $ for new cases, that is $ {\gamma}_t, $ drops significantly within a little over 2 weeks from the start of lockdown; HK estimate the UK fall by fitting intervention variables. A corresponding sharp drop in $ {\gamma}_t $ is less evident in the deaths data. The divergence of Sweden from Italy and the UK is more a consequence of the Swedish $ {\gamma}_t $ increasing, rather than the $ {\gamma}_t^{\prime}\mathrm{s} $ falling for the other countries.
11 The general methodology is to select a set of controls from a donor pool by using the KPSS test to determine which series are on a balanced growth path with the target. The control group weighting is then determined by RLS. The complication here is that, when there is an intervention, balanced growth may require lagging some of the series.
12 In an earlier study, Born et al. (Reference Born, Dietrich and Müller2020) selected a somewhat different group, namely the Netherlands (0.39), Denmark (0.26), Finland (0.19), Norway (0.15) and Portugal (0.01).
Abadie, A., Diamond, A. and Hainmueller, J. (2010), 'Synthetic control methods for comparative case studies: Estimating the effect of California's tobacco control program', Journal of the American Statistical Association, 105, pp. 493–505.
Abadie, A., Diamond, A. and Hainmueller, J. (2015), 'Comparative politics and the synthetic control method', American Journal of Political Science, 59, pp. 495–510.
Avery, C., Bossert, W., Clark, A., Ellison, G. and Ellison, S.F. (2020), 'An economist's guide to epidemiology models of infectious diseases', Journal of Economic Perspectives, 34, pp. 79–104.
Born, B., Dietrich, A.M. and Müller, G.J. (2020), 'Do lockdowns work? A counterfactual for Sweden', Covid Economics, 16, pp. 1–22.
Cho, S.-W. (2020), 'Quantifying the impact of nonpharmaceutical interventions during the COVID-19 outbreak: The case of Sweden', Econometrics Journal, 23, pp. 323–44. http://dx.doi.org/10.1093/ectj/utaa025.
Doornik, J.A., Castle, J.L. and Hendry, D.F. (2020), 'Short-term forecasting of the coronavirus pandemic', International Journal of Forecasting. https://doi.org/10.1016/j.ijforecast.2020.09.003.
Durbin, J. and Koopman, S.J. (2012), Time Series Analysis by State Space Methods, Oxford: Oxford University Press.
Hale, T., Angrist, N., Goldszmidt, R., Kira, B., Petherick, A., Phillips, T., Webster, S., Cameron-Blake, E., Hallas, L., Majumdar, S. and Tatlow, H. (2021), 'A global panel database of pandemic policies (Oxford COVID-19 Government Response Tracker)', Nature Human Behaviour, 5, pp. 529–38.
Harvey, A. and Kattuman, P. (2020), 'Time series models based on growth curves with applications to forecasting coronavirus', Harvard Data Science Review, Special Issue 1 - COVID-19, available online at https://hdsr.mitpress.mit.edu/pub/ozgjx0yn.
Harvey, A. and Kattuman, P. (2021), 'A farewell to R: Time series models for tracking and forecasting epidemics', Journal of the Royal Society Interface, forthcoming.
Harvey, A., Kattuman, P. and Thamotheram, C. (2021), 'Tracking the mutant: Forecasting and nowcasting COVID-19 in the UK in 2021', National Institute Economic Review, 256, pp. 110–26. https://doi.org/10.1017/nie.2021.12.
Harvey, A.C. (1989), Forecasting, Structural Time Series Models and the Kalman Filter, Cambridge: Cambridge University Press.
Harvey, A.C. and Thiele, S. (2021), 'Co-integration and control: Assessing the impact of events using time series data', Journal of Applied Econometrics, 36, pp. 71–85.
Ioannidis, J.P.A., Cripps, S. and Tanner, M.A. (2020), 'Forecasting for COVID-19 has failed', International Journal of Forecasting. https://doi.org/10.1016/j.ijforecast.2020.08.004.
Kamerlin, S.C.L. and Kasson, P.M. (2020), 'Managing coronavirus disease 2019 spread with voluntary public health measures: Sweden as a case study for pandemic control', Clinical Infectious Diseases, 71, pp. 3174–81. https://doi.org/10.1093/cid/ciaa864.
Koopman, S.J., Lit, R. and Harvey, A.C. (2021), STAMP 9.00. Structural Time Series Analyser, Modeller and Predictor, London: Timberlake Consultants.
Kwiatkowski, D., Phillips, P.C.B., Schmidt, P. and Shin, Y. (1992), 'Testing the null hypothesis of stationarity against the alternative of a unit root: How sure are we that economic time series have a unit root?', Journal of Econometrics, 44, pp. 159–78.
Onder, G. (2020), 'Case-fatality rate and characteristics of patients dying in relation to COVID-19 in Italy', Journal of the American Medical Association, 323, pp. 1775–6.
Stock, J. and Watson, M. (1988), 'Testing for common trends', Journal of the American Statistical Association, 83, pp. 1097–107.
Wallinga, J. and Lipsitch, M. (2007), 'How generation intervals shape the relationship between growth rates and reproductive numbers', Proceedings of the Royal Society B, 274, pp. 599–604.
Andrew Harvey
DOI: https://doi.org/10.1017/nie.2021.21
On G-invariant solutions of a singular biharmonic elliptic system involving multiple critical exponents in \(R^{N}\)
Zhiying Deng, Dong Xu and Yisheng Huang
Boundary Value Problems, volume 2018, Article number: 53 (2018)
In this work, a biharmonic elliptic system is investigated in \(\mathbb{R}^{N}\), which involves singular potentials and multiple critical exponents. By the Rellich inequality and the symmetric criticality principle, the existence and multiplicity of G-invariant solutions to the system are established. To the best of our knowledge, our results are new even in the scalar cases.
In this article, we study the singular fourth-order elliptic problem:
$$ \textstyle\begin{cases} \Delta^{2} u=\mu\frac{u}{ \vert x \vert ^{4}}+Q(x) \sum_{i=1}^{m}\frac{\varsigma_{i}\alpha_{i}}{2^{\ast\ast}} \vert u \vert ^{\alpha_{i}-2}u \vert v \vert ^{\beta_{i}}+\sigma h(x) \vert u \vert ^{q-2}u, &\text{in } \mathbb{R}^{N},\\ \Delta^{2} v=\mu\frac{v}{ \vert x \vert ^{4}}+Q(x)\sum_{i=1}^{m} \frac{\varsigma_{i}\beta_{i}}{2^{\ast\ast}} \vert u \vert ^{\alpha_{i}} \vert v \vert ^{\beta_{i}-2}v+\sigma h(x) \vert v \vert ^{q-2}v, &\text{in } \mathbb{R}^{N},\\ \int_{\mathbb{R}^{N}} ( \vert \Delta u \vert ^{2}+ \vert \Delta v \vert ^{2} )\,dx< +\infty \quad\text{and}\quad u, v\not\equiv0, &\text{in } \mathbb{R}^{N}, \end{cases} $$
where \(\Delta^{2}\) denotes the biharmonic operator, \(N\geq5\), \(\sigma\geq0\), \(\mu\in[0, \overline{\mu})\) with \(\overline{\mu}\triangleq\frac{1}{16}N^{2}(N-4)^{2}\), \(q\in(1, 2)\), \(\varsigma_{i}\in(0, +\infty)\), and \(\alpha_{i}\), \(\beta_{i}>1\) satisfy \(\alpha_{i}+\beta_{i}=2^{\ast\ast}\ (i=1, \ldots, m; 1\leq m\in\mathbb{N})\), \(2^{\ast\ast} \triangleq\frac{2N}{N-4}\) is the critical Sobolev exponent; \(Q(x)\) and \(h(x)\) are G-invariant functions such that \(Q(x)\in\mathscr {C}(\mathbb{R}^{N})\cap L^{\infty}(\mathbb{R}^{N})\) and \(h(x)\in L^{\theta}(\mathbb{R}^{N})\) with \(\theta\triangleq 2^{\ast\ast}/(2^{\ast\ast}-q)\) (see Sect. 2 for details).
By now there have been a large number of papers concerning the existence, nonexistence and qualitative properties of nontrivial solutions to critical elliptic problems of second order. Without any claim of completeness, we would like to mention some of them [1–4]. In most of these papers, the authors deal with elliptic problems involving singular potentials and critical exponents. For instance, Deng and Jin in [4] handled the following singular equation:
$$ -\Delta u=\mu\frac{u}{ \vert x \vert ^{2}}+Q(x) \vert x \vert ^{-s}u^{2^{\ast}(s)-1} \quad\text{and}\quad u>0 \quad\text{in } \mathbb{R}^{N}, $$
where \(N> 2\), \(\mu\in[0, \frac{1}{4}(N-2)^{2})\), \(s\in[0, 2)\), \(2^{\ast}(s)=\frac{2(N-s)}{N-2}\), and \(2^{\ast}(0)=2^{\ast}\triangleq\frac{2N}{N-2}\), and Q is G-invariant with respect to a subgroup G of \(O(\mathbb{N})\). By applying analytic techniques and critical point theory, several results on the existence and multiplicity of G-invariant solutions to (1.2) were obtained. Subsequently, Waliullah [5] extended the results in [4] to the weighted polyharmonic elliptic equations. In particular, Waliullah considered the following semilinear partial differential equation:
$$ (-\Delta)^{k} u=Q(x) \vert u \vert ^{2_{(k)}^{\ast}-2}u \quad\text{in } \mathbb{R}^{N}, $$
where \(k>1\), \(N>2k\), \(2_{(k)}^{\ast}=\frac{2N}{N-2k}\), and Q is G-invariant. By employing the minimizing sequence and the concentration–compactness method, the author attained the existence of nontrivial G-invariant solution to (1.3). Borrowing ideas from [4, 5], Deng and Huang [6–8] recently established a few valuable results for the scalar elliptic problems in a bounded G-invariant domain. Moreover, let us also mention that when \(\mu=0\) and the right-hand side nonlinearity term \(\vert x \vert ^{-s}u^{2^{\ast}(s)-1}\) in (1.2) is substituted by \(u^{q-1}\) with \(1< q\leq2^{\ast}\), there have been a variety of remarkable results on G-invariant solutions in [9–11]. Furthermore, for other results about this aspect, see [12] with singular Lane–Emden–Fowler equations, [13] with singular p-Laplacian equations, [14] with biharmonic operators and [15] with \(p(x)\)-biharmonic operators [16], and monograph [17] with generalized Lane–Emden–Fowler equations or Gierer–Meinhardt systems involving singular nonlinearity.
For the systems of singular elliptic equations involving critical exponents, a wide range of works concerning the solutions structures have been presented in recent years. For example, Cai and Kang [18] studied the following elliptic system with multiple critical terms:
$$ \textstyle\begin{cases} \mathcal{L}_{\mu}u=\frac{\varsigma_{1}\alpha_{1}}{2^{\ast}} \vert u \vert ^{\alpha_{1}-2}u \vert v \vert ^{\beta_{1}}+ \frac{\varsigma_{2}\alpha_{2}}{2^{\ast}} \vert u \vert ^{\alpha _{2}-2}u \vert v \vert ^{\beta_{2}} +a_{1} \vert u \vert ^{q_{1}-2}u+a_{2}v, &\text{in } \Omega,\\ \mathcal{L}_{\mu}v=\frac{\varsigma_{1}\beta_{1}}{2^{\ast}} \vert u \vert ^{\alpha_{1}} \vert v \vert ^{\beta_{1}-2}v +\frac{\varsigma_{2}\beta_{2}}{2^{\ast}} \vert u \vert ^{\alpha_{2}} \vert v \vert ^{\beta_{2}-2}v +a_{2}u+a_{3} \vert v \vert ^{q_{2}-2}v, &\text{in } \Omega,\\ u=v=0, &\text{on } \partial\Omega, \end{cases} $$
where \(N\geq3\), \(\Omega\subset\mathbb{R}^{N}\) is a smooth bounded domain such that \(0\in\Omega\), \(\mathcal{L}_{\mu}=-\Delta-\mu \vert x \vert ^{-2}\), \(\mu<\frac{1}{4}(N-2)^{2}\), \(a_{j}\in\mathbb{R}\ (j=1, 2, 3)\), \(\varsigma_{i}\in(0, +\infty)\), \(q_{i}\in[2, 2^{\ast})\), and \(\alpha_{i}\), \(\beta_{i}>1\) fulfill \(\alpha_{i}+\beta_{i}=2^{\ast}\ (i=1, 2)\). By a variational minimax method combined with a delicate analysis of Palais–Smale sequences, the authors proved the existence of positive solutions to (1.4). Very recently, Nyamoradi and Hsu [19] investigated the following quasilinear elliptic system involving multiple critical exponents:
$$ \textstyle\begin{cases} -\operatorname{div}( \vert x \vert ^{-ap} \vert \nabla u \vert ^{p-2}\nabla u)=\sum_{i=1}^{m}\frac{\varsigma_{i}\alpha_{i} \vert u \vert ^{\alpha_{i}-2}u \vert v \vert ^{\beta_{i}}}{p^{\ast}(a, b) \vert x \vert ^{bp^{\ast}(a, b)}} +\sum_{i=1}^{m}\frac{ \lambda_{i}f_{i}(x) }{ \vert x \vert ^{\beta}} \vert u \vert ^{q-2}u, &\text{in } \Omega,\\ -\operatorname{div}( \vert x \vert ^{-ap} \vert \nabla v \vert ^{p-2}\nabla v)=\sum_{i=1}^{m}\frac{ \varsigma_{i}\beta_{i} \vert u \vert ^{\alpha_{i}} \vert v \vert ^{\beta_{i}-2}v}{p^{\ast}(a, b) \vert x \vert ^{bp^{\ast}(a, b)}} +\sum_{i=1}^{m}\frac{ \mu_{i}f_{i}(x) }{ \vert x \vert ^{\beta}} \vert v \vert ^{q-2}v, &\text{in } \Omega,\\ u=v=0, &\text{on } \partial\Omega, \end{cases} $$
where \(0\in\Omega\) is a smooth bounded domain in \(\mathbb{R}^{N}\), \(1< p< N\), \(0\leq a<\frac{N-p}{p}\), \(a\leq b< a+1\), \(0<\varsigma_{i}, \lambda_{i}, \mu_{i}<+\infty\), \(\alpha_{i}\), \(\beta_{i}>1\), \(\alpha_{i}+\beta_{i}=p^{\ast}(a, b)=\frac{Np}{N-p(a+1-b)}\) for \(i=1,\ldots, m\). By employing the analytic techniques of Nehari manifold, the authors established the existence and multiplicity of positive solutions to (1.5) under certain appropriate hypotheses on the parameters q, β, \(\lambda_{i}\), \(\mu_{i}\) and the weighted functions \(f_{i}(x)\ (i=1, \ldots, m)\). Other results relating to second-order elliptic systems can be found in [20–23] and the references therein. For the systems of fourth-order elliptic equations, we would like to refer the reader to the papers [24–26] for the elliptic problems related to nonlinearities with critical growth.
Nevertheless, elliptic systems involving the G-invariant solutions have seldom been studied; we only find a handful of results in [27–30]. To the best of our knowledge, there are few results on G-invariant solutions for the singular fourth-order elliptic problem (1.1) even in the scalar cases \(\sigma=0\), \(0<\mu<\overline{\mu}\), \(m=1\), and \(u=v\). Therefore, it is necessary for us to investigate (1.1) thoroughly. Let \(\overline{Q}>0\) be a constant. This work is dedicated to seeking the G-invariant solutions for both the cases of \(\sigma=0\), \(Q(x)\not\equiv\overline{Q}\) and \(\sigma>0\), \(Q(x)\equiv\overline{Q}\) in (1.1). Our arguments are mainly based upon the symmetric criticality principle due to Palais [31] and variational methods.
The rest of this article is organized as follows. The variational framework and the main results of this paper are presented in Sect. 2. The proofs of G-invariant solutions for the cases \(\sigma=0\) and \(Q(x)\not\equiv\overline{Q}\) are detailed in Sect. 3, while the multiplicity results for the cases \(\sigma>0\) and \(Q(x)\equiv\overline{Q}\) are proved in Sect. 4.
Preliminaries and main results
Let \(\mathscr{D}^{2, 2}(\mathbb{R}^{N})\) denote the completion of \(\mathscr{C}_{0}^{\infty}(\mathbb{R}^{N})\) under the norm \((\int_{\mathbb{R}^{N}} \vert \Delta u \vert ^{2}\,dx)^{1/2}\), associated with the inner product given by \(\langle u, \varphi\rangle=\int_{\mathbb{R}^{N}}\Delta u\Delta\varphi \,dx\). Recall the well-known Rellich inequality [32]
$$ \int_{\mathbb{R}^{N}} \vert \Delta u \vert ^{2}\,dx\geq \overline{\mu} \int_{\mathbb{R}^{N}}\frac{u^{2}}{ \vert x \vert ^{4}}\,dx, \quad\forall u\in \mathscr{D}^{2, 2} \bigl(\mathbb{R}^{N} \bigr), $$
where \(N\geq5\), \(\overline{\mu}= \frac{1}{16}N^{2}(N-4)^{2}\). We now employ the following norm in \(\mathscr{D}^{2, 2}(\mathbb{R}^{N})\):
$$\Vert u \Vert _{\mu}\triangleq \biggl[ \int_{\mathbb {R}^{N}} \bigl( \vert \Delta u \vert ^{2}-\mu \vert x \vert ^{-4}u^{2} \bigr)\,dx \biggr]^{\frac{1}{2}},\quad 0\leq\mu< \overline{\mu}. $$
Thanks to the Rellich inequality (2.1), we find that the above norm \(\Vert \cdot \Vert _{\mu}\) is equivalent to the usual norm \((\int_{\mathbb{R}^{N}} \vert \Delta\cdot \vert ^{2}\,dx)^{1/2}\). Besides, we define the product space \((\mathscr{D}^{2, 2}(\mathbb{R}^{N}))^{2}\) endowed with the norm
$$ \bigl\Vert (u, v) \bigr\Vert _{\mu}= \bigl( \Vert u \Vert _{\mu }^{2}+ \Vert v \Vert _{\mu}^{2} \bigr)^{\frac{1}{2}}, \quad\forall(u, v)\in \bigl(\mathscr{D}^{2, 2} \bigl( \mathbb{R}^{N} \bigr) \bigr)^{2}. $$
As usual, we denote by G any closed subgroup of \(O(\mathbb{N})\), the group of orthogonal linear transformations. Let \(G_{x}=\{gx; g\in G\}\) be the orbit of \(x\in\mathbb{R}^{N}\); let \(\vert G_{x} \vert \) denote the number of elements in \(G_{x}\), with \(\vert G_{0} \vert = \vert G_{\infty} \vert =1\). Denote \(\vert G \vert =\inf_{x\in\mathbb{R}^{N}\backslash\{0\}} \vert G_{x} \vert \). Note that \(\vert G \vert \) may be +∞. We call Ω a G-invariant subset of \(\mathbb{R}^{N}\) if \(x\in\Omega\) implies \(gx\in\Omega\) for all \(g\in G\). A function \(f: \mathbb{R}^{N}\mapsto\mathbb{R}\) is called G-invariant if \(f(gx)=f(x)\) for every \(g\in G\) and \(x\in\mathbb{R}^{N}\). In particular, an \(O(\mathbb{N})\)-invariant function is called radial.
The natural functional space to frame the analysis of (1.1) by variational methods is the Hilbert space \((\mathscr{D}_{ G}^{2, 2}(\mathbb{R}^{N}))^{2}\), which is the subspace of \((\mathscr{D}^{2, 2}(\mathbb{R}^{N}))^{2}\) consisting of all G-invariant functions. This work is devoted to the study of the following systems:
$$\bigl(\mathscr{P}^{Q}_{\sigma} \bigr) \textstyle\begin{cases} \Delta^{2} u=\mu\frac{u}{ \vert x \vert ^{4}}+Q(x) \sum_{i=1}^{m}\frac{\varsigma_{i}\alpha_{i}}{2^{\ast\ast}} \vert u \vert ^{\alpha_{i}-2}u \vert v \vert ^{\beta_{i}}+\sigma h(x) \vert u \vert ^{q-2}u, &\text{in } \mathbb{R}^{N},\\ \Delta^{2} v=\mu\frac{v}{ \vert x \vert ^{4}}+Q(x)\sum_{i=1}^{m} \frac{\varsigma_{i}\beta_{i}}{2^{\ast\ast}} \vert u \vert ^{\alpha_{i}} \vert v \vert ^{\beta_{i}-2}v+\sigma h(x) \vert v \vert ^{q-2}v, &\text{in } \mathbb{R}^{N},\\ (u, v)\in (\mathscr{D}_{G}^{2, 2}(\mathbb{R}^{N}) )^{2}\quad\text{and}\quad u, v\not\equiv 0, &\text{in } \mathbb{R}^{N}. \end{cases} $$
To clearly describe the results of this paper, several notations should be presented:
$$\begin{aligned} & \mathcal{A}_{\mu}\triangleq\inf_{u\in\mathscr{D}^{2, 2}(\mathbb{R}^{N})\backslash\{0\}} \frac{\int_{\mathbb{R}^{N}} ( \vert \Delta u \vert ^{2}-\mu\frac{u^{2}}{ \vert x \vert ^{4}} )\,dx}{ (\int_{\mathbb{R}^{N}} \vert u \vert ^{2^{\ast\ast}}\,dx ) ^{\frac{2}{2^{\ast\ast}}}}, \end{aligned}$$
$$\begin{aligned} & y_{\epsilon}(x)\triangleq C\epsilon^{-\Lambda_{0}}U_{\mu} \biggl(\frac{ \vert x \vert }{\epsilon} \biggr), \end{aligned}$$
where \(\epsilon>0\), \(\Lambda_{0}\triangleq\frac{N-4}{2}\), and the constant \(C=C(N, \mu)>0\), depending only on N and μ. From [26, 33], we mention that \(y_{\epsilon}(x)\) satisfies the following equations:
$$ \int_{\mathbb{R}^{N}} \biggl( \vert \Delta y_{\epsilon} \vert ^{2}-\mu\frac{y_{\epsilon}^{2}}{ \vert x \vert ^{4}} \biggr)\,dx=1 $$
$$\int_{\mathbb{R}^{N}}y_{\epsilon}^{2^{\ast\ast}-1}\varphi \,dx= \mathcal{A}_{\mu}^{-\frac{2^{\ast\ast}}{2}} \int_{\mathbb {R}^{N}} \biggl(\Delta y_{\epsilon}\Delta\varphi-\mu \frac{y_{\epsilon}\varphi }{ \vert x \vert ^{4}} \biggr)\,dx $$
for all \(\varphi\in\mathscr{D}^{2, 2}(\mathbb{R}^{N})\). Hence, we obtain (let \(\varphi=y_{\epsilon}\))
$$ \int_{\mathbb{R}^{N}}y_{\epsilon}^{2^{\ast\ast}} \,dx= \mathcal{A}_{\mu}^{-\frac{2^{\ast\ast}}{2}}. $$
According to [26, Lemma 2.1] and [33, Theorem 2], we remark that the function \(U_{\mu}(x)\) in (2.4) is positive, radially symmetric, radially decreasing, and solves
$$\textstyle\begin{cases} \Delta^{2}u=\mu\frac{u}{ \vert x \vert ^{4}}+ u^{2^{\ast\ast}-1}, &\text{in } \mathbb{R}^{N}\backslash\{0\}, \\ u\in\mathscr{D}^{2, 2}(\mathbb{R}^{N})\quad \text{and}\quad u>0, &\text{in } \mathbb{R}^{N}\backslash\{0\}. \end{cases} $$
By setting \(r= \vert x \vert \), there holds that
$$\begin{aligned} &U_{\mu}(r)=O_{1} \bigl(r^{-l_{1}(\mu)} \bigr), \quad\text{as } r\rightarrow0, \end{aligned}$$
$$\begin{aligned} &U_{\mu}(r)=O_{1} \bigl(r^{-l_{2}(\mu)} \bigr),\qquad U_{\mu}^{\prime}(r)=O_{1} \bigl(r^{-l_{2}(\mu)-1} \bigr),\quad \text{as } r\rightarrow+\infty, \end{aligned}$$
where \(O_{1}(r^{t})\) \((r\rightarrow r_{0})\) means that there exist constants \(C_{1}\), \(C_{2}>0\) such that \(C_{1}r^{t}\leq O_{1}(r^{t})\leq C_{2}r^{t}\) as \(r\rightarrow r_{0}\), \(l_{1}(\mu)\triangleq\Lambda_{0}\vartheta(\mu)\), \(l_{2}(\mu)\triangleq\Lambda_{0}(2-\vartheta(\mu))\), \(\Lambda_{0}=\frac{N-4}{2}\), and \(\vartheta(\mu): [0, \overline{\mu}]\mapsto[0, 1]\) is defined as
$$\vartheta(\mu)\triangleq 1-\frac{\sqrt{N^{2}-4N+8-4\sqrt{(N-2)^{2}+\mu}}}{N-4}. $$
This implies \(\vartheta(0)=0\), \(\vartheta(\overline{\mu})=1\) and
$$ 0\leq l_{1}(\mu)< \Lambda_{0}< l_{2}( \mu)\leq2\Lambda_{0},\quad \forall\mu\in[0, \overline{\mu}). $$
Moreover, there exist positive constants \(C_{3}=C_{3}(N, \mu)\) and \(C_{4}=C_{4}(N, \mu)\) such that
$$ 0< C_{3}\leq U_{\mu}(x) \bigl( \vert x \vert ^{\frac{l_{1}(\mu)}{\Lambda_{0}}} + \vert x \vert ^{\frac{l_{2}(\mu)}{\Lambda_{0}}} \bigr)^{\Lambda_{0}} \leq C_{4}, \quad\forall x\in\mathbb{R}^{N}\backslash\{0\}. $$
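The exponents \(l_{1}(\mu)\), \(l_{2}(\mu)\) and the ordering (2.10) are easy to verify numerically; the sketch below evaluates \(\vartheta(\mu)\) for a few values of μ, taking N = 5 as an example.

```python
import numpy as np

def exponents(N, mu):
    """Return (theta(mu), l1(mu), l2(mu)) appearing in the asymptotics (2.8)-(2.9)."""
    Lambda0 = (N - 4) / 2.0
    theta = 1.0 - np.sqrt(N**2 - 4*N + 8 - 4*np.sqrt((N - 2)**2 + mu)) / (N - 4)
    return theta, Lambda0 * theta, Lambda0 * (2.0 - theta)

N = 5
Lambda0 = (N - 4) / 2.0
mu_bar = N**2 * (N - 4)**2 / 16.0
for mu in (0.0, 0.5 * mu_bar, 0.99 * mu_bar):
    theta, l1, l2 = exponents(N, mu)
    assert 0.0 <= l1 < Lambda0 < l2 <= 2.0 * Lambda0   # the ordering (2.10)
    print(f"mu = {mu:.4f}: theta = {theta:.4f}, l1 = {l1:.4f}, l2 = {l2:.4f}")
```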
The following hypotheses are needed.
(q.1)
\(Q(x)\) is G-invariant.
(q.2)
\(Q(x)\in\mathscr{C}(\mathbb{R}^{N})\cap L^{\infty}(\mathbb{R}^{N})\), and \(Q_{+}(x)\not\equiv0\), where \(Q_{+}(x)=\max\{0, Q(x)\}\).
(h.1)
\(h(x)\) is G-invariant.
(h.2)
\(h(x)\) is a nonnegative function in \(\mathbb{R}^{N}\) such that
$$0< \Vert h \Vert _{\theta}\triangleq \biggl( \int_{\mathbb{R}^{N}} h^{\theta}(x)\,dx \biggr)^{\frac{1}{\theta}}< +\infty \quad\text{with } \theta=\frac{2^{\ast\ast}}{2^{\ast\ast}-q}. $$
The main results of this work can be stated in the following.
Theorem 2.1
Assume that (q.1) and (q.2) hold. If
$$ \int_{\mathbb{R}^{N}}Q(x)y_{\epsilon}^{2^{\ast\ast}}\,dx\geq \max \bigl\{ \vert G \vert ^{\frac{2-2^{\ast\ast}}{2}} \mathcal{A}_{0}^{-\frac{2^{\ast\ast}}{2}} \Vert Q_{+} \Vert _{\infty}, \mathcal{A}_{\mu}^{-\frac{2^{\ast\ast}}{2}}Q_{+}(0), \mathcal{A}_{\mu}^{-\frac{2^{\ast\ast}}{2}}Q_{+}(\infty) \bigr\} >0 $$
for certain \(\epsilon>0\), where \(Q_{+}(\infty)=\limsup_{ \vert x \vert \rightarrow\infty}Q_{+}(x)\), then problem \((\mathscr{P}^{Q}_{0})\) possesses at least one nontrivial solution in \((\mathscr{D}_{G}^{2, 2}(\mathbb{R}^{N}))^{2}\).
Corollary 2.1
Assume that (q.1) and (q.2) hold. Then we have the following statements.
Problem \((\mathscr{P}^{Q}_{0})\) admits at least one nontrivial solution if
$$Q(0)>0, \quad Q(0)\geq\max \bigl\{ \vert G \vert ^{\frac{2-2^{\ast\ast}}{2}} ( \mathcal{A}_{0}/\mathcal{A}_{\mu} )^{-\frac{2^{\ast\ast}}{2}} \Vert Q_{+} \Vert _{\infty}, Q_{+}(\infty) \bigr\} , $$
and either (i) \(Q(x)\geq Q(0)+\xi_{0} \vert x \vert ^{2^{\ast\ast}(l_{2}(\mu)-\Lambda_{0})}\) for some \(\xi_{0}>0\) and \(\vert x \vert \) small, or (ii) \(\vert Q(x)-Q(0) \vert \leq \xi_{1} \vert x \vert ^{\varsigma}\) for some constants \(\xi_{1}>0\), \(\varsigma>2^{\ast\ast}(l_{2}(\mu)-\Lambda_{0})>0\) and \(\vert x \vert \) small and
$$ \int_{\mathbb{R}^{N}} \bigl(Q(x)-Q(0) \bigr) \vert x \vert ^{-2^{\ast\ast}l_{2}(\mu)}\,dx>0. $$
Problem \((\mathscr{P}^{Q}_{0})\) has at least one nontrivial solution if \(\lim_{ \vert x \vert \rightarrow\infty}Q(x)=Q(\infty)\) exists and is positive,
$$Q(\infty)\geq\max \bigl\{ \vert G \vert ^{\frac{2-2^{\ast\ast}}{2}} ( \mathcal{A}_{0}/\mathcal{A}_{\mu} )^{-\frac{2^{\ast\ast}}{2}} \Vert Q_{+} \Vert _{\infty}, Q_{+}(0) \bigr\} , $$
and either (i) \(Q(x)\geq Q(\infty)+\xi_{2} \vert x \vert ^{-2^{\ast\ast}(\Lambda_{0}-l_{1}(\mu))}\) for certain \(\xi_{2}>0\) and large \(\vert x \vert \), or (ii) \(\vert Q(x)-Q(\infty) \vert \leq\xi_{3} \vert x \vert ^{-\kappa}\) for some constants \(\xi_{3}>0\), \(\kappa>2^{\ast\ast}(\Lambda_{0}-l_{1}(\mu))>0\) and large \(\vert x \vert \) and
$$ \int_{\mathbb{R}^{N}} \bigl(Q(x)-Q(\infty) \bigr) \vert x \vert ^{-2^{\ast\ast}l_{1}(\mu)}\,dx>0. $$
If \(Q(x)\geq Q(\infty)=Q(0)>0\) on \(\mathbb{R}^{N}\) and
$$Q(\infty)=Q(0)\geq \vert G \vert ^{\frac{2-2^{\ast\ast}}{2}} (\mathcal{A}_{0}/ \mathcal{A}_{\mu} )^{-\frac{2^{\ast\ast}}{2}} \Vert Q_{+} \Vert _{\infty}, $$
then problem \((\mathscr{P}^{Q}_{0})\) possesses at least one nontrivial solution.
Remark 2.1
Conditions (q.1) and (q.2) are essentially introduced in [9]. According to (q.2), we only presume that \(Q(x)\) is bounded and continuous on \(\mathbb{R}^{N}\). Hence, the above results do not require the continuity of \(Q(x)\) at infinity.
Assume that \(\vert G \vert =+\infty\) and \(Q_{+}(0)=Q_{+}(\infty)=0\). Then there exist infinitely many G-invariant solutions to problem \((\mathscr{P}^{Q}_{0})\).
If Q is a radial function such that \(Q_{+}(0)=Q_{+}(\infty)=0\), then there exist infinitely many radial solutions to problem \((\mathscr{P}^{Q}_{0})\).
Let \(\overline{Q}>0\) be a constant. Assume that \(Q(x)\equiv\overline{Q}\) and (h.1), (h.2) hold. Then there exists \(\sigma^{\ast}>0\) such that, for any \(\sigma\in(0, \sigma^{\ast})\), problem \((\mathscr{P}^{\overline{Q}}_{\sigma})\) possesses at least two nontrivial solutions in \((\mathscr{D}_{ G}^{2, 2}(\mathbb{R}^{N}))^{2}\).
The main results of this paper extend and complement those of [4, 5, 26, 29, 30]. Even in the scalar cases \(\sigma=0\), \(0<\mu<\overline{\mu}\), \(m=1\), and \(u=v\), the above results in the whole space are new.
Throughout this paper, we denote various positive constants as \(C_{i}\ (i=1, 2, \ldots)\) or C. The dual space of \((\mathscr {D}_{G}^{2, 2}(\mathbb{R}^{N}))^{2}\) (\((\mathscr{D}^{2, 2}(\mathbb{R}^{N}))^{2}\), resp.) is denoted by \((\mathscr{D}_{ G}^{-2, 2}(\mathbb{R}^{N}))^{2}\) (\((\mathscr{D}^{-2, 2}(\mathbb{R}^{N}))^{2}\), resp.). The ball of center x and radius r is denoted by \(B_{r}(x)\). \(o_{n}(1)\) is a generic infinitesimal value as \(n\rightarrow\infty\). For any \(\epsilon>0\), \(t\in\mathbb{R}\), \(O(\epsilon^{t})\) denotes the quantity satisfying \(\vert O(\epsilon^{t}) \vert /\epsilon^{t}\leq C\), and \(O_{1}(\epsilon^{t})\) \((\epsilon\rightarrow\epsilon_{0})\) means that there exist constants \(C_{1}\), \(C_{2}>0\) such that \(C_{1}\epsilon^{t}\leq O_{1}(\epsilon^{t})\leq C_{2}\epsilon^{t}\) as \(\epsilon\rightarrow\epsilon_{0}\). In a Banach space X, we denote by '→' and '⇀' strong and weak convergence, respectively. A functional \(\mathcal{F}\in \mathscr{C}^{1}(X, \mathbb{R})\) is called to satisfy the \((PS)_{c}\) condition if each sequence \(\{w_{n}\}\) in X satisfying \(\mathcal{F}(w_{n})\rightarrow c\) in \(\mathbb{R}\), \(\mathcal{F}^{\prime}(w_{n})\rightarrow0\) in \(X^{\ast}\) contains a strongly convergent subsequence.
Existence and multiplicity results for problem \((\mathscr {P}^{Q}_{0})\)
The energy functional corresponding to problem \((\mathscr{P}^{Q}_{0})\) is defined on \((\mathscr{D}_{G}^{2, 2}(\mathbb{R}^{N}))^{2}\) by
$$ \mathcal{F}(u, v)=\frac{1}{2} \bigl\Vert (u, v) \bigr\Vert _{\mu}^{2} -\frac{1}{2^{\ast\ast}} \int_{\mathbb{R}^{N}} Q(x)\sum_{i=1}^{m} \varsigma_{i} \vert u \vert ^{\alpha_{i}} \vert v \vert ^{\beta_{i}}\,dx. $$
It follows from (q.2) and the Rellich inequality (2.1) that \(\mathcal{F}\) is a well-defined \(\mathscr{C}^{1}\) functional on \((\mathscr{D}_{G}^{2, 2}(\mathbb{R}^{N}))^{2}\). Then the critical points of \(\mathcal{F}\) correspond to weak solutions of problem \((\mathscr{P}^{Q}_{0})\). According to the principle of symmetric criticality (see Lemma 3.1), any critical point of \(\mathcal{F}\) in \((\mathscr{D}_{G}^{2, 2}(\mathbb{R}^{N}))^{2}\) is also a solution of \((\mathscr{P}^{Q}_{0})\) in \((\mathscr {D}^{2, 2}(\mathbb{R}^{N}))^{2}\). This means that \((u, v)\in (\mathscr{D}_{G}^{2, 2}(\mathbb{R}^{N}))^{2}\) satisfies \((\mathscr{P}^{Q}_{0})\) if and only if, for any \((\varphi_{1}, \varphi_{2})\in(\mathscr{D}^{2, 2}(\mathbb{R}^{N}))^{2}\),
$$\begin{aligned} & \bigl\langle \mathcal{F}^{\prime}(u, v), ( \varphi_{1}, \varphi_{2}) \bigr\rangle \\ &\quad= \int_{\mathbb{R}^{N}} \biggl(\Delta u\Delta\varphi_{1}+\Delta v \Delta\varphi_{2}-\mu\frac{u\varphi_{1} +v\varphi_{2}}{ \vert x \vert ^{4}} \biggr)\,dx \\ &\qquad-\frac{1}{2^{\ast\ast}} \int_{\mathbb{R}^{N}}Q(x) \Biggl(\varphi_{1}\sum _{i=1}^{m} \varsigma_{i} \alpha_{i} \vert u \vert ^{\alpha_{i}-2}u \vert v \vert ^{\beta_{i}} +\varphi_{2}\sum_{i=1}^{m} \varsigma_{i}\beta_{i} \vert u \vert ^{\alpha_{i}} \vert v \vert ^{\beta_{i}-2}v \Biggr)\,dx=0. \end{aligned}$$
Lemma 3.1
If \(Q(x)\) is a G-invariant function, then \(\mathcal{F}^{\prime}(u, v)=0\) in \((\mathscr{D}_{G}^{-2, 2}(\mathbb{R}^{N}))^{2}\) implies \(\mathcal{F}^{\prime}(u, v)=0\) in \((\mathscr{D}^{-2, 2}(\mathbb{R}^{N}))^{2}\).
The proof is similar to that of [9, Lemma 1] and is omitted here. □
For \(\mu\in[0, \overline{\mu})\), \(\varsigma_{i}\in(0, +\infty)\), \(\alpha_{i}\), \(\beta_{i}>1\), and \(\alpha_{i}+\beta_{i}=2^{\ast\ast}\ (i=1, \ldots, m)\), we define
$$\begin{aligned} &\mathcal{A}_{\mu, m}\triangleq\inf_{(u, v)\in (\mathscr{D}^{2, 2}(\mathbb{R}^{N})\backslash\{0\})^{2}} \frac{\int_{\mathbb{R}^{N}} ( \vert \Delta u \vert ^{2}+ \vert \Delta v \vert ^{2}-\mu \frac{u^{2}+v^{2}}{ \vert x \vert ^{4}} )\,dx}{ (\int_{\mathbb{R}^{N}}\sum_{i=1}^{m}\varsigma_{i} \vert u \vert ^{\alpha_{i}} \vert v \vert ^{\beta_{i}}\,dx )^{\frac{2}{2^{\ast\ast}}}}, \end{aligned}$$
$$\begin{aligned} & \mathscr{B}(\tau)\triangleq\frac{1+\tau^{2}}{ (\sum_{i=1}^{m}\varsigma_{i}\tau^{\beta_{i}} )^{\frac{2}{2^{\ast\ast}}}},\quad \tau\geq0, \end{aligned}$$
$$\begin{aligned} &\mathscr{B}(\tau_{\min})\triangleq\min _{\tau\geq 0}\mathscr{B}(\tau)>0, \end{aligned}$$
where \(\tau_{\min}>0\) is a minimal point of \(\mathscr{B}(\tau)\) and hence a root of the equation
$$ \sum_{i=1}^{m} \varsigma_{i}\tau^{\beta_{i}-1} \bigl(\alpha_{i}\tau ^{2}-\beta_{i} \bigr) =0,\quad \tau\geq0. $$
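For the reader's convenience, we sketch the elementary computation behind this equation (this remark is only a clarification and is not needed in what follows). Writing \(S(\tau)=\sum_{i=1}^{m}\varsigma_{i}\tau^{\beta_{i}}\), a direct differentiation gives
$$ \mathscr{B}^{\prime}(\tau) =\frac{2\tau S(\tau)-\frac{2}{2^{\ast\ast}} (1+\tau^{2} )S^{\prime}(\tau)}{S(\tau)^{1+\frac{2}{2^{\ast\ast}}}}, $$
so \(\mathscr{B}^{\prime}(\tau)=0\) is equivalent to \(2^{\ast\ast}\tau S(\tau)=(1+\tau^{2})S^{\prime}(\tau)\). Since \(\alpha_{i}+\beta_{i}=2^{\ast\ast}\), the left-hand side equals \(\sum_{i=1}^{m}\varsigma_{i}(\alpha_{i}+\beta_{i})\tau^{\beta_{i}+1}\), and subtracting the right-hand side \(\sum_{i=1}^{m}\varsigma_{i}\beta_{i}(\tau^{\beta_{i}-1}+\tau^{\beta_{i}+1})\) yields exactly the displayed equation \(\sum_{i=1}^{m}\varsigma_{i}\tau^{\beta_{i}-1}(\alpha_{i}\tau^{2}-\beta_{i})=0\).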
Lemma 3.2
Let \(y_{\epsilon}(x)\) be the minimizer of \(\mathcal{A}_{\mu}\) defined in (2.4), \(\mu\in[0, \overline{\mu})\), \(\varsigma_{i}\in(0, +\infty)\), \(\alpha_{i}\), \(\beta_{i}>1\), and \(\alpha_{i}+\beta_{i}=2^{\ast\ast}\ (i=1, \ldots, m)\). Then we have the following statements.
(i) \(\mathcal{A}_{\mu, m}=\mathscr{B}(\tau_{\min})\mathcal{A}_{\mu}\);
(ii) \(\mathcal{A}_{\mu, m}\) has the minimizer \((y_{\epsilon}(x), \tau_{\min}y_{\epsilon}(x))\) for all \(\epsilon>0\).
The proof is identical to that of [19, Theorem 2.2] (see also [21, Theorem 5]) and is therefore omitted here. □
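As an illustration of statement (i) (this observation is ours and is not needed later), the inequality \(\mathcal{A}_{\mu, m}\leq\mathscr{B}(\tau_{\min})\mathcal{A}_{\mu}\) already follows by restricting the infimum defining \(\mathcal{A}_{\mu, m}\) to couples of the form \((u, \tau u)\) with \(u\in\mathscr{D}^{2, 2}(\mathbb{R}^{N})\backslash\{0\}\) and \(\tau>0\): since \(\sum_{i=1}^{m}\varsigma_{i} \vert u \vert ^{\alpha_{i}} \vert \tau u \vert ^{\beta_{i}} =S(\tau) \vert u \vert ^{2^{\ast\ast}}\) with \(S(\tau)=\sum_{i=1}^{m}\varsigma_{i}\tau^{\beta_{i}}\), we have
$$ \frac{ (1+\tau^{2} )\int_{\mathbb{R}^{N}} ( \vert \Delta u \vert ^{2}-\mu\frac{u^{2}}{ \vert x \vert ^{4}} )\,dx}{ (S(\tau)\int_{\mathbb{R}^{N}} \vert u \vert ^{2^{\ast\ast}}\,dx )^{\frac{2}{2^{\ast\ast}}}} =\mathscr{B}(\tau)\, \frac{\int_{\mathbb{R}^{N}} ( \vert \Delta u \vert ^{2}-\mu\frac{u^{2}}{ \vert x \vert ^{4}} )\,dx}{ (\int_{\mathbb{R}^{N}} \vert u \vert ^{2^{\ast\ast}}\,dx )^{\frac{2}{2^{\ast\ast}}}}, $$
and taking the infimum first over u (which, by (2.4), gives \(\mathcal{A}_{\mu}\)) and then over \(\tau>0\) yields \(\mathcal{A}_{\mu, m}\leq\mathscr{B}(\tau_{\min})\mathcal{A}_{\mu}\). The reverse inequality, which completes (i), requires the full argument of [19, Theorem 2.2].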
To find conditions under which the Palais–Smale condition holds, we need the following concentration compactness principle due to Lions [34].
Lemma 3.3
Let \(\{(u_{n}, v_{n})\}\) be a weakly convergent sequence to \((u, v)\) in \((\mathscr{D}_{G}^{2, 2}(\mathbb{R}^{N}))^{2}\) such that \(\vert \Delta u_{n} \vert ^{2}\rightharpoonup\eta^{(1)}\), \(\vert \Delta v_{n} \vert ^{2}\rightharpoonup\eta^{(2)}\), \(\vert u_{n} \vert ^{\alpha_{i}} \vert v_{n} \vert ^{\beta_{i}} \rightharpoonup\nu^{(i)}\ (i=1, \ldots, m)\), \(\vert x \vert ^{-4} \vert u_{n} \vert ^{2}\rightharpoonup\gamma^{(1)}\), and \(\vert x \vert ^{-4} \vert v_{n} \vert ^{2}\rightharpoonup\gamma^{(2)}\) in the sense of measures. Then there exists some at most countable set \(\mathscr{J}\), \(\{\eta_{j}^{(1)}\geq 0\}_{j\in\mathscr{J}\cup\{0\}}\), \(\{\eta_{j}^{(2)}\geq 0\}_{j\in\mathscr{J}\cup\{0\}}\), \(\{\nu_{j}^{(i)}\geq 0\}_{j\in\mathscr{J}\cup\{0\}}\), \(\gamma_{0}^{(1)}\geq0\), \(\gamma_{0}^{(2)}\geq0\), \(\{x_{j}\}_{j\in\mathscr{J}}\subset\mathbb{R}^{N}\backslash\{0\}\) such that
(a) \(\eta^{(1)}\geq \vert \Delta u \vert ^{2}+ \sum_{j\in\mathscr {J}}\eta_{j}^{(1)}\delta_{x_{j}}+\eta_{0}^{(1)}\delta_{0}\), \(\eta^{(2)}\geq \vert \Delta v \vert ^{2}+ \sum_{j\in\mathscr {J}}\eta_{j}^{(2)}\delta_{x_{j}}+\eta_{0}^{(2)}\delta_{0}\),
(b) \(\nu^{(i)}= \vert u \vert ^{\alpha_{i}} \vert v \vert ^{\beta_{i}}+ \sum_{j\in\mathscr {J}}\nu_{j}^{(i)}\delta_{x_{j}}+\nu_{0}^{(i)}\delta_{0}\), \(i=1, \ldots, m\),
(c) \(\gamma^{(1)}= \vert x \vert ^{-4} \vert u \vert ^{2}+\gamma_{0}^{(1)}\delta_{0}\), \(\gamma^{(2)}= \vert x \vert ^{-4} \vert v \vert ^{2}+\gamma_{0}^{(2)}\delta_{0}\),
(d) \(\mathcal{A}_{0, m} (\sum_{i=1}^{m}\varsigma_{i}\nu_{j}^{(i)} )^{\frac{2}{2^{\ast\ast}}}\leq\eta_{j}^{(1)}+\eta_{j}^{(2)}\),
(e) \(\mathcal{A}_{\mu, m} (\sum_{i=1}^{m}\varsigma_{i}\nu_{0}^{(i)} )^{\frac{2}{2^{\ast\ast}}}\leq\eta_{0}^{(1)}+\eta_{0}^{(2)} -\mu(\gamma_{0}^{(1)}+\gamma_{0}^{(2)})\),
where \(\delta_{x_{j}}\), \(j\in\mathscr{J}\cup \{0 \}\), is a Dirac mass of 1 concentrated at \(x_{j}\in\mathbb{R}^{N}\).
To establish the existence results for problem \((\mathscr{P}^{Q}_{0})\), we need the following local \((PS)_{c}\) condition, which is indispensable for the proof of Theorem 2.1.
Lemma 3.4
Assume that (q.1) and (q.2) hold. Then the \((PS)_{c}\) condition in \((\mathscr{D}_{G}^{2, 2}(\mathbb{R}^{N}))^{2}\) holds for \(\mathcal{F}\) if
$$ c< c_{0}^{\ast}\triangleq\frac{2}{N} \min \bigl\{ \vert G \vert \mathcal{A}_{0, m}^{\frac{N}{4}} \Vert Q_{+} \Vert _{\infty}^{1-\frac{N}{4}}, \mathcal{A}_{\mu, m}^{\frac{N}{4}}Q_{+}(0)^{1-\frac{N}{4}}, \mathcal{A}_{\mu, m}^{\frac{N}{4}}Q_{+}(\infty)^{1-\frac{N}{4}} \bigr\} . $$
We follow closely the arguments in [9, Proposition 2]. It is trivial to check that the \((PS)_{c}\) sequence \(\{(u_{n}, v_{n})\}\) of \(\mathcal{F}\) is bounded in \((\mathscr{D}_{G}^{2, 2}(\mathbb{R}^{N}))^{2}\). Then we may assume that \((u_{n}, v_{n})\rightharpoonup(u, v)\) in \((\mathscr{D}_{G}^{2, 2}(\mathbb{R}^{N}))^{2}\). In view of Lemma 3.3, there exist measures \(\eta^{(1)}\), \(\eta^{(2)}\), \(\nu^{(i)}\ (i=1, \ldots, m)\), \(\gamma^{(1)}\), and \(\gamma^{(2)}\) such that relations (a)–(e) of this lemma hold. We begin by considering the concentration at the point \(x_{j}\in\mathbb{R}^{N}\backslash\{0\}\), \(j\in\mathscr{J}\). For \(\epsilon>0\) small, we define the cut-off function \(\psi_{x_{j}}^{\epsilon}(x)\in\mathscr {C}_{0}^{\infty}(\mathbb{R}^{N})\) such that \(0\leq\psi_{x_{j}}^{\epsilon}(x)\leq1\), \(\psi_{x_{j}}^{\epsilon}(x)=1\) in \(B_{\epsilon}(x_{j})\), \(\psi_{x_{j}}^{\epsilon}(x)=0\) on \(\mathbb{R}^{N}\backslash B_{2\epsilon}(x_{j})\), \(\vert \nabla\psi_{x_{j}}^{\epsilon} \vert \leq 2/\epsilon\), and \(\vert \Delta\psi_{x_{j}}^{\epsilon} \vert \leq 2/\epsilon^{2}\) on \(\mathbb{R}^{N}\). Then, by Lemma 3.1, \(\lim_{n\rightarrow\infty}\langle\mathcal{F}^{\prime}(u_{n}, v_{n}), (u_{n}\psi_{x_{j}}^{\epsilon}, v_{n}\psi_{x_{j}}^{\epsilon})\rangle=0\); hence, combining (3.2), the Hölder inequality, and the Sobolev inequality, we derive
$$\begin{aligned} & \int_{\mathbb{R}^{N}} \psi_{x_{j}}^{\epsilon} \Biggl\{ d \eta^{(1)}+d\eta^{(2)} -\mu \bigl(d\gamma^{(1)}+d \gamma^{(2)} \bigr) -Q(x) \sum_{i=1}^{m} \frac{\varsigma_{i}}{2^{\ast\ast}} (\alpha_{i}+\beta_{i} )\,d \nu^{(i)} \Biggr\} \\ &\quad\leq\overline{\lim_{n\rightarrow\infty}} \int_{\mathbb{R}^{N}} \bigl\{ 2 \bigl\vert \Delta u_{n} \bigl\langle \nabla u_{n}, \nabla\psi_{x_{j}}^{\epsilon} \bigr\rangle +\Delta v_{n} \bigl\langle \nabla v_{n}, \nabla \psi_{x_{j}}^{\epsilon} \bigr\rangle \bigr\vert + \bigl\vert (u_{n}\Delta u_{n}+v_{n}\Delta v_{n} )\Delta\psi_{x_{j}}^{\epsilon} \bigr\vert \bigr\} \,dx \\ &\quad\leq\sup_{n\geq 1} \biggl( \int_{\mathbb{R}^{N}} \vert \Delta u_{n} \vert ^{2}\,dx \biggr)^{\frac{1}{2}} \biggl[2\overline{\lim _{n\rightarrow \infty}} \biggl( \int_{\mathbb{R}^{N}} \vert \nabla u_{n} \vert ^{2} \bigl\vert \nabla\psi_{x_{j}}^{\epsilon} \bigr\vert ^{2}\,dx \biggr)^{\frac{1}{2}} \\ &\qquad{} +\overline{\lim _{n\rightarrow\infty}} \biggl( \int_{\mathbb{R}^{N}} \vert u_{n} \vert ^{2} \bigl\vert \Delta\psi_{x_{j}} ^{\epsilon} \bigr\vert ^{2}\,dx \biggr)^{\frac{1}{2}} \biggr] \\ &\qquad{} +\sup_{n\geq1} \biggl( \int_{\mathbb{R}^{N}} \vert \Delta v_{n} \vert ^{2}\,dx \biggr)^{\frac{1}{2}} \biggl[2\overline{\lim _{n\rightarrow \infty}} \biggl( \int_{\mathbb{R}^{N}} \vert \nabla v_{n} \vert ^{2} \bigl\vert \nabla\psi_{x_{j}}^{\epsilon} \bigr\vert ^{2}\,dx \biggr)^{\frac{1}{2}} \\ &\qquad{} +\overline{\lim _{n\rightarrow\infty}} \biggl( \int_{\mathbb{R}^{N}} \vert v_{n} \vert ^{2} \bigl\vert \Delta\psi_{x_{j}} ^{\epsilon} \bigr\vert ^{2}\,dx \biggr)^{\frac{1}{2}} \biggr] \\ &\quad\leq C \biggl\{ \biggl( \int_{\mathbb{R}^{N}} \vert \nabla u \vert ^{2} \bigl\vert \nabla\psi_{x_{j}}^{\epsilon} \bigr\vert ^{2}\,dx \biggr)^{\frac {1}{2}}+ \biggl( \int_{\mathbb{R}^{N}} \vert u \vert ^{2} \bigl\vert \Delta \psi_{x_{j}}^{\epsilon} \bigr\vert ^{2}\,dx \biggr)^{\frac {1}{2}}+ \biggl( \int_{\mathbb{R}^{N}} \vert v \vert ^{2} \bigl\vert \Delta \psi_{x_{j}}^{\epsilon} \bigr\vert ^{2}\,dx \biggr)^{\frac{1}{2}} \\ &\qquad{} + \biggl( \int_{\mathbb{R}^{N}} \vert \nabla v \vert ^{2} \bigl\vert \nabla\psi_{x_{j}}^{\epsilon} \bigr\vert ^{2}\,dx \biggr)^{\frac{1}{2}} \biggr\} \leq C \biggl\{ \biggl( \int_{B_{2\epsilon}(x_{j})} \vert \nabla u \vert ^{\frac{2N}{N-2}}\,dx \biggr)^{\frac{N-2}{2N}} \biggl( \int_{\mathbb{R}^{N}} \bigl\vert \nabla\psi_{x_{j}}^{\epsilon} \bigr\vert ^{N}\,dx \biggr)^{\frac{1}{N}} \\ &\qquad{} + \biggl( \int_{B_{2\epsilon}(x_{j})} \vert u \vert ^{2^{\ast\ast}}\,dx \biggr)^{\frac{1}{2^{\ast\ast}}} \biggl( \int _{\mathbb{R}^{N}} \bigl\vert \Delta\psi_{x_{j}}^{\epsilon} \bigr\vert ^{\frac{N}{2}} \biggr) ^{\frac{2}{N}} \\ &\qquad{}+ \biggl( \int_{B_{2\epsilon}(x_{j})} \vert v \vert ^{2^{\ast\ast}}\,dx \biggr)^{\frac{1}{2^{\ast\ast}}} \biggl( \int _{\mathbb{R}^{N}} \bigl\vert \Delta\psi_{x_{j}}^{\epsilon} \bigr\vert ^{\frac{N}{2}} \biggr)^{\frac{2}{N}} \\ &\qquad{} + \biggl( \int_{B_{2\epsilon}(x_{j})} \vert \nabla v \vert ^{\frac{2N}{N-2}}\,dx \biggr)^{\frac{N-2}{2N}} \biggl( \int_{\mathbb{R}^{N}} \bigl\vert \nabla\psi_{x_{j}}^{\epsilon} \bigr\vert ^{N}\,dx \biggr)^{\frac{1}{N}} \biggr\} \leq C \biggl\{ \biggl( \int_{B_{2\epsilon}(x_{j})} \vert \nabla u \vert ^{\frac{2N}{N-2}}\,dx \biggr)^{\frac{N-2}{2N}} \\ &\qquad{} + \biggl( \int_{B_{2\epsilon}(x_{j})} \vert \Delta u \vert ^{2}\,dx \biggr)^{\frac{1}{2}}+ \biggl( \int_{B_{2\epsilon}(x_{j})} \vert \Delta v \vert ^{2}\,dx \biggr)^{\frac{1}{2}}+ \biggl( \int_{B_{2\epsilon}(x_{j})} \vert \nabla v \vert 
^{\frac{2N}{N-2}}\,dx \biggr)^{\frac{N-2}{2N}} \biggr\} . \end{aligned}$$
As \(\epsilon\rightarrow0\), it follows from (3.8) and Lemma 3.3 that
$$ Q(x_{j})\sum_{i=1}^{m} \varsigma_{i}\nu_{j}^{(i)}\geq \eta_{j}^{(1)}+\eta_{j}^{(2)}. $$
This means that the concentration of the measures \(\nu^{(i)}\ (i=1, \ldots, m)\) cannot occur at points where \(Q(x_{j})\leq0\). By virtue of (3.9) and (d) of Lemma 3.3, we conclude that either (i) \(\nu_{j}^{(i)}=0\ (i=1, \ldots, m)\) or (ii) \(\sum_{i=1}^{m}\varsigma_{i}\nu_{j}^{(i)}\geq (\mathcal{A}_{0, m}/ \Vert Q_{+} \Vert _{\infty})^{N/4}\). Let us now study the possibility of concentration at \(x=0\) and at ∞. By an argument similar to that for \(x_{j}\in \mathbb{R}^{N}\backslash\{0\}\), we find \(\eta_{0}^{(1)}+\eta_{0}^{(2)} -\mu(\gamma_{0}^{(1)}+\gamma_{0}^{(2)}) -Q(0)\sum_{i=1}^{m}\varsigma_{i}\nu_{0}^{(i)} \leq0\). Together with (e) of Lemma 3.3, it follows that either (iii) \(\nu_{0}^{(i)} =0\ (i=1, \ldots, m)\) or (iv) \(\sum_{i=1}^{m}\varsigma_{i}\nu_{0}^{(i)}\geq (\mathcal{A}_{\mu, m}/Q_{+}(0))^{N/4}\). To discuss the concentration at infinity of the sequence \(\{(u_{n}, v_{n})\}\), we define the following quantities:
(1) \(\eta_{\infty}^{(1)}=\lim_{R\rightarrow\infty} \overline{\lim_{n\rightarrow\infty}}\int_{ \vert x \vert >R} \vert \Delta u_{n} \vert ^{2}\,dx\), \(\eta_{\infty}^{(2)}=\lim_{R\rightarrow\infty} \overline{\lim_{n\rightarrow\infty}}\int_{ \vert x \vert >R} \vert \Delta v_{n} \vert ^{2}\,dx\),
(2) \(\nu_{\infty}^{(i)}= \lim_{R\rightarrow\infty} \overline{\lim_{n\rightarrow\infty}}\int_{ \vert x \vert >R} \vert u_{n} \vert ^{\alpha_{i}} \vert v_{n} \vert ^{\beta_{i}}\,dx\), \(i=1, \ldots, m\),
(3) \(\gamma_{\infty}^{(1)}=\lim_{R\rightarrow\infty} \overline{\lim_{n\rightarrow\infty}}\int_{ \vert x \vert >R} \vert x \vert ^{-4} \vert u_{n} \vert ^{2}\,dx\), \(\gamma_{\infty}^{(2)}=\lim_{R\rightarrow\infty} \overline{\lim_{n\rightarrow\infty}}\int_{ \vert x \vert >R} \vert x \vert ^{-4} \vert v_{n} \vert ^{2}\,dx\).
It is obvious that \(\eta_{\infty}^{(1)}\), \(\eta_{\infty}^{(2)}\), \(\nu_{\infty}^{(i)}\ (i=1, \ldots, m)\), \(\gamma_{\infty}^{(1)}\), and \(\gamma_{\infty}^{(2)}\) defined by (1)–(3) exist and are finite. For \(R>1\), let \(\psi_{R}(x)\in\mathscr {C}^{\infty}(\mathbb{R}^{N})\) be a function such that \(0\leq\psi_{R}(x)\leq1\), \(\psi_{R}(x)=1\) for \(\vert x \vert >R+1\), \(\psi_{R}(x)=0\) for \(\vert x \vert < R\), \(\vert \nabla\psi_{R} \vert \leq2/R\), and \(\vert \Delta\psi_{R} \vert \leq2/R^{2}\). Because the sequence \(\{(u_{n}\psi_{R}, v_{n}\psi_{R})\}\) is bounded in \((\mathscr{D}_{G}^{2, 2}(\mathbb{R}^{N}))^{2}\), we deduce from (3.2) and the fact that \(\alpha_{i}+\beta_{i}=2^{\ast\ast}\ (i=1, \ldots, m)\) that
$$\begin{aligned} 0={}&\lim_{n\rightarrow\infty} \bigl\langle \mathcal{F}^{\prime}(u_{n}, v_{n}), (u_{n}\psi_{R}, v_{n} \psi_{R}) \bigr\rangle \\ ={}&\lim_{n\rightarrow\infty} \int_{\mathbb{R}^{N}} \Biggl\{ \Biggl( \vert \Delta u_{n} \vert ^{2}+ \vert \Delta v_{n} \vert ^{2}- \mu\frac{ \vert u_{n} \vert ^{2} + \vert v_{n} \vert ^{2}}{ \vert x \vert ^{4}}-Q(x)\sum_{i=1}^{m} \varsigma_{i} \vert u_{n} \vert ^{\alpha_{i}} \vert v_{n} \vert ^{\beta_{i}} \Biggr)\psi_{R} \\ &{} + \bigl(2\Delta u_{n}\langle\nabla u_{n}, \nabla \psi_{R}\rangle+u_{n}\Delta u_{n}\Delta \psi_{R}+2\Delta v_{n}\langle\nabla v_{n}, \nabla\psi_{R}\rangle+v_{n}\Delta v_{n}\Delta \psi_{R} \bigr) \Biggr\} \,dx. \end{aligned}$$
Furthermore, by utilizing the Hölder inequality and the Sobolev inequality, we obtain
$$\begin{aligned} &\lim_{R\rightarrow\infty} \overline{\lim_{n\rightarrow\infty}} \int_{\mathbb{R}^{N}} \bigl(2 \bigl\vert \Delta u_{n}\langle \nabla u_{n}, \nabla \psi_{R}\rangle \bigr\vert + \vert u_{n}\Delta u_{n}\Delta\psi_{R} \vert \bigr)\,dx \\ &\quad\leq\lim_{R\rightarrow\infty} \overline{\lim_{n\rightarrow\infty}} \biggl( \int_{\mathbb {R}^{N}} \vert \Delta u_{n} \vert ^{2}\,dx \biggr)^{\frac{1}{2}} \biggl[2 \biggl( \int_{\mathbb{R}^{N}} \vert \nabla u_{n} \vert ^{2} \vert \nabla\psi_{R} \vert ^{2}\,dx \biggr)^{\frac{1}{2}}\\ &\qquad{}+ \biggl( \int _{\mathbb{R}^{N}} \vert u_{n} \vert ^{2} \vert \Delta\psi_{R} \vert ^{2}\,dx \biggr)^{\frac{1}{2}} \biggr] \\ &\quad\leq C\lim_{R\rightarrow \infty} \biggl\{ \biggl( \int_{R< \vert x \vert < R+1} \vert \nabla u \vert ^{2} \vert \nabla\psi_{R} \vert ^{2}\,dx \biggr)^{\frac{1}{2}}+ \biggl( \int_{R< \vert x \vert < R+1} \vert u \vert ^{2} \vert \Delta \psi_{R} \vert ^{2}\,dx \biggr)^{\frac{1}{2}} \biggr\} \\ &\quad\leq C\lim_{R\rightarrow \infty} \biggl\{ \biggl( \int_{R< \vert x \vert < R+1} \vert \nabla u \vert ^{\frac{2N}{N-2}}\,dx \biggr)^{\frac{N-2}{2N}}+ \biggl( \int_{R< \vert x \vert < R+1} \vert \Delta u \vert ^{2}\,dx \biggr)^{\frac{1}{2}} \biggr\} =0. \end{aligned}$$
Similarly, we have \(\lim_{R\rightarrow \infty}\overline{\lim_{n\rightarrow\infty}}\int_{\mathbb{R}^{N}} (2 \vert \Delta v_{n}\langle\nabla v_{n}, \nabla \psi_{R}\rangle \vert + \vert v_{n}\Delta v_{n}\Delta\psi_{R} \vert )\,dx=0\). Consequently, it follows from (3.10) and definitions (1)–(3) of the quantities \(\eta_{\infty}^{(1)}\), \(\eta_{\infty}^{(2)}\), \(\nu_{\infty}^{(i)}\ (i=1, \ldots, m)\), \(\gamma_{\infty}^{(1)}\), and \(\gamma_{\infty}^{(2)}\) that
$$ Q_{+}(\infty)\sum_{i=1}^{m} \varsigma_{i}\nu_{\infty}^{(i)} \geq \eta_{\infty}^{(1)}+\eta_{\infty}^{(2)} -\mu \bigl( \gamma_{\infty}^{(1)}+\gamma_{\infty}^{(2)} \bigr). $$
Moreover, in view of (3.3), we find \(\mathcal{A}_{\mu, m}(\sum_{i=1}^{m} \varsigma_{i}\nu_{\infty}^{(i)})^{\frac{2}{2^{\ast\ast}}} \leq\eta_{\infty}^{(1)}+\eta_{\infty}^{(2)} -\mu(\gamma_{\infty}^{(1)}+\gamma_{\infty}^{(2)})\). This, combined with (3.11), implies that either (v) \(\nu_{\infty}^{(i)}=0\ (i=1, \ldots, m)\) or (vi) \(\sum_{i=1}^{m}\varsigma_{i}\nu_{\infty}^{(i)} \geq(\mathcal{A}_{\mu, m}/Q_{+}(\infty))^{N/4}\). In the following, we claim that (ii), (iv), and (vi) cannot occur. For every continuous nonnegative function ψ such that \(0\leq\psi(x)\leq 1\) on \(\mathbb{R}^{N}\), we find
$$\begin{aligned} c&=\lim_{n\rightarrow\infty} \biggl(\mathcal{F}(u_{n}, v_{n}) -\frac{1}{2^{\ast\ast}} \bigl\langle \mathcal{F}^{\prime}(u_{n}, v_{n}), (u_{n}, v_{n}) \bigr\rangle \biggr) \\ &=\frac{2}{N} \lim_{n\rightarrow\infty} \int_{\mathbb{R}^{N}} \biggl( \vert \Delta u_{n} \vert ^{2}+ \vert \Delta v_{n} \vert ^{2}-\mu \frac{ \vert u_{n} \vert ^{2}+ \vert v_{n} \vert ^{2}}{ \vert x \vert ^{4}} \biggr)\,dx \\ &\geq\frac{2}{N}\overline{\lim_{n\rightarrow\infty}} \int_{\mathbb{R}^{N}} \biggl( \vert \Delta u_{n} \vert ^{2}+ \vert \Delta v_{n} \vert ^{2}-\mu \frac{ \vert u_{n} \vert ^{2}+ \vert v_{n} \vert ^{2}}{ \vert x \vert ^{4}} \biggr)\psi(x)\,dx. \end{aligned}$$
Note that the measures \(\nu^{(i)}\ (i=1, \ldots, m)\) are bounded and G-invariant. This means that if (ii) holds, then the set \(\mathscr{J}\) must be finite. Moreover, if \(x_{j}\neq0\) is a singular point of \(\nu^{(i)}\ (i=1, \ldots, m)\), so is \(gx_{j}\) for each \(g\in G\), and the mass of \(\nu^{(i)}\ (i=1, \ldots, m)\) concentrated at \(gx_{j}\) is the same for every \(g\in G\). Assuming that (ii) occurs for some \(j\in\mathscr{J}\) with \(x_{j}\neq0\), we choose ψ with compact support so that \(\psi(gx_{j})=1\) for every \(g\in G\), and we derive
$$\begin{aligned} c&\geq\frac{2}{N} \vert G \vert \bigl(\eta_{j}^{(1)}+ \eta_{j}^{(2)} \bigr)\geq \frac{2}{N} \vert G \vert \mathcal{A}_{0, m} \Biggl(\sum_{i=1}^{m} \varsigma_{i}\nu_{j}^{(i)} \Biggr) ^{\frac{2}{2^{\ast\ast}}} \\ &\geq\frac{2}{N} \vert G \vert \mathcal{A}_{0, m} \bigl( \mathcal{A}_{0, m}/ \Vert Q_{+} \Vert _{\infty} \bigr)^{\frac{2}{2^{\ast \ast}-2}} =\frac{2}{N} \vert G \vert \mathcal{A}_{0, m}^{\frac{N}{4}} \Vert Q_{+} \Vert _{\infty}^{1-\frac{N}{4}}, \end{aligned}$$
which is impossible. Similarly, assuming that (iv) holds for \(x=0\), we take ψ with compact support so that \(\psi(0)=1\), and we have
$$\begin{aligned} c&\geq\frac{2}{N} \bigl(\eta_{0}^{(1)}+ \eta_{0}^{(2)} -\mu\gamma_{0}^{(1)}-\mu \gamma_{0}^{(2)} \bigr)\geq \frac{2}{N} \mathcal{A}_{\mu, m} \Biggl(\sum_{i=1}^{m} \varsigma_{i}\nu_{0}^{(i)} \Biggr) ^{\frac{2}{2^{\ast\ast}}} \\ &\geq\frac{2}{N}\mathcal{A}_{\mu, m} \bigl(\mathcal{A}_{\mu, m}/Q_{+}(0) \bigr)^{\frac{2}{2^{\ast\ast}-2}} =\frac{2}{N}\mathcal{A}_{\mu, m}^{\frac{N}{4}}Q_{+}(0)^{1-\frac{N}{4}}, \end{aligned}$$
a contradiction to (3.7). Finally, if (vi) occurs, we choose \(\psi=\psi_{R}\) to obtain
$$\begin{aligned} c&\geq\frac{2}{N} \bigl(\eta_{\infty}^{(1)}+ \eta_{\infty}^{(2)} -\mu\gamma_{\infty}^{(1)}-\mu \gamma_{\infty}^{(2)} \bigr)\geq \frac{2}{N} \mathcal{A}_{\mu, m} \Biggl(\sum_{i=1}^{m} \varsigma_{i}\nu_{\infty}^{(i)} \Biggr) ^{\frac{2}{2^{\ast\ast}}} \\ &\geq\frac{2}{N}\mathcal{A}_{\mu, m} \bigl(\mathcal{A}_{\mu, m}/Q_{+}( \infty) \bigr)^{\frac{2}{2^{\ast\ast}-2}} =\frac{2}{N}\mathcal{A}_{\mu, m}^{\frac{N}{4}}Q_{+}( \infty)^{1-\frac{N}{4}}, \end{aligned}$$
which contradicts (3.7). Hence, \(\nu_{j}^{(i)}=0\ (i=1, \ldots, m)\) for all \(j\in\mathscr{J}\cup\{0, \infty\}\), and this yields
$$\lim_{n\rightarrow\infty} \int_{\mathbb{R}^{N}} \sum_{i=1}^{m} \varsigma_{i} \vert u_{n} \vert ^{\alpha_{i}} \vert v_{n} \vert ^{\beta_{i}}\,dx = \int_{\mathbb{R}^{N}}\sum_{i=1}^{m} \varsigma_{i} \vert u \vert ^{\alpha _{i}} \vert v \vert ^{\beta_{i}} \,dx. $$
Finally, taking into account \(\lim_{n\rightarrow\infty}\langle \mathcal{F}^{\prime}(u_{n}, v_{n})-\mathcal{F}^{\prime}(u, v), (u_{n}-u, v_{n}-v)\rangle=0\), we naturally deduce \((u_{n}, v_{n})\rightarrow(u, v)\) as \(n\rightarrow\infty\) in \((\mathscr {D}^{2, 2}(\mathbb{R}^{N}))^{2}\). □
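We also record, for the reader's convenience, the elementary exponent identities used repeatedly in the proof above and in what follows; here we use that \(2^{\ast\ast}=\frac{2N}{N-4}\), as in the earlier sections:
$$ \frac{2}{2^{\ast\ast}-2}=\frac{N-4}{4},\qquad \frac{2^{\ast\ast}}{2^{\ast\ast}-2}=\frac{N}{4},\qquad \frac{1}{2}-\frac{1}{2^{\ast\ast}}=\frac{2}{N}, $$
so that, for instance, \(\mathcal{A}_{0, m} (\mathcal{A}_{0, m}/ \Vert Q_{+} \Vert _{\infty} )^{\frac{2}{2^{\ast\ast}-2}} =\mathcal{A}_{0, m}^{\frac{N}{4}} \Vert Q_{+} \Vert _{\infty}^{1-\frac{N}{4}}\).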
Thanks to Lemma 3.4, we immediately obtain the following result.
Corollary 3.1
If \(\vert G \vert =+\infty\) and \(Q_{+}(0)=Q_{+}(\infty)=0\), then the functional \(\mathcal{F}\) satisfies the \((PS)_{c}\) condition for every \(c\in\mathbb{R}\).
Proof of Theorem 2.1
Let \(y_{\epsilon}\) be the extremal function satisfying (2.4)–(2.10). We now choose \(\epsilon>0\) such that (2.11) is fulfilled. It is clear from (q.2), (3.1), and (3.2) that there exist constants \(\alpha_{0}>0\) and \(\rho>0\) such that \(\mathcal{F}(u, v)\geq\alpha_{0}\) for all \(\Vert (u, v) \Vert _{\mu}=\rho\). Moreover, if we set \(u=y_{\epsilon}\), \(v=\tau_{\min}y_{\epsilon}\), and
$$\begin{aligned} \Phi(t)={}&\mathcal{F}(ty_{\epsilon}, t\tau_{\min}y_{\epsilon})= \frac{t^{2}}{2} \bigl(1+\tau_{\min}^{2} \bigr) \int_{\mathbb{R}^{N}} \biggl( \vert \Delta y_{\epsilon} \vert ^{2}-\mu\frac{y_{\epsilon}^{2}}{ \vert x \vert ^{4}} \biggr)\,dx\\ &{} -\frac{t^{2^{\ast\ast}}}{2^{\ast\ast}} \sum _{i=1}^{m}\varsigma_{i} \tau_{\min}^{\beta_{i}} \int_{\mathbb{R}^{N}} Q(x)y_{\epsilon}^{2^{\ast\ast}}\,dx \end{aligned}$$
with \(t\geq0\), then \(\max_{t\geq0}\Phi(t)\) is attained for some finite \(\overline{t}>0\) with \(\Phi^{\prime}(\overline{t})=0\). This yields
$$ \max_{t\geq0}\Phi(t)=\mathcal{F}( \overline{t}y_{\epsilon}, \overline{t}\tau_{\min}y_{\epsilon})= \frac{2}{N} \biggl\{ \frac{ (1+\tau_{\min}^{2} ) \int_{\mathbb{R}^{N}} ( \vert \Delta y_{\epsilon} \vert ^{2}-\mu\frac{y_{\epsilon}^{2}}{ \vert x \vert ^{4}} )\,dx}{ (\sum_{i=1}^{m}\varsigma_{i}\tau_{\min}^{\beta_{i}} \int_{\mathbb{R}^{N}}Q(x)y_{\epsilon}^{2^{\ast\ast}}\,dx ) ^{\frac{2}{2^{\ast\ast}}}} \biggr\} ^{\frac{2^{\ast\ast}}{2^{\ast\ast}-2}}. $$
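For clarity, we note the elementary computation behind the preceding display (a routine step which we spell out here). Setting \(A=(1+\tau_{\min}^{2})\int_{\mathbb{R}^{N}}( \vert \Delta y_{\epsilon} \vert ^{2}-\mu\frac{y_{\epsilon}^{2}}{ \vert x \vert ^{4}})\,dx\) and \(B=\sum_{i=1}^{m}\varsigma_{i}\tau_{\min}^{\beta_{i}}\int_{\mathbb{R}^{N}}Q(x)y_{\epsilon}^{2^{\ast\ast}}\,dx\) (assuming \(B>0\), as is implicit in the display), we have \(\Phi(t)=\frac{t^{2}}{2}A-\frac{t^{2^{\ast\ast}}}{2^{\ast\ast}}B\), so \(\Phi^{\prime}(\overline{t})=\overline{t}A-\overline{t}^{\,2^{\ast\ast}-1}B=0\) gives \(\overline{t}=(A/B)^{\frac{1}{2^{\ast\ast}-2}}\) and
$$ \Phi(\overline{t})= \biggl(\frac{1}{2}-\frac{1}{2^{\ast\ast}} \biggr)A \biggl(\frac{A}{B} \biggr)^{\frac{2}{2^{\ast\ast}-2}} =\frac{2}{N} \biggl(\frac{A}{B^{\frac{2}{2^{\ast\ast}}}} \biggr)^{\frac{2^{\ast\ast}}{2^{\ast\ast}-2}}, $$
which is precisely the right-hand side of the preceding display.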
Besides, because \(\mathcal{F}(ty_{\epsilon}, t\tau_{\min}y_{\epsilon})\rightarrow-\infty\) as \(t\rightarrow+\infty\), there exists \(t_{0}>0\) such that \(\Vert (t_{0}y_{\epsilon}, t_{0}\tau_{\min}y_{\epsilon }) \Vert _{\mu}>\rho\) and \(\mathcal{F}(t_{0}y_{\epsilon}, t_{0}\tau_{\min}y_{\epsilon})<0\). Now, we define
$$ c_{0}=\inf_{\gamma\in\Gamma}\max _{t\in[0, 1]}\mathcal{F} \bigl(\gamma(t) \bigr), $$
where \(\Gamma=\{\gamma\in\mathscr{C}([0, 1], (\mathscr{D}_{G}^{2, 2}(\mathbb{R}^{N}))^{2}); \gamma(0)=(0, 0), \mathcal{F}(\gamma(1))<0, \Vert \gamma(1) \Vert _{\mu }>\rho\}\). It follows directly from (2.5), (2.11), (3.4), (3.5), (3.7), (3.12), (3.13), and Lemma 3.2 that
$$\begin{aligned} c_{0}&\leq\mathcal{F}(\overline{t}y_{\epsilon}, \overline{t} \tau_{\min}y_{\epsilon})=\frac{2}{N} \biggl\{ \frac{ (1+\tau_{\min}^{2} ) \int_{\mathbb{R}^{N}} ( \vert \Delta y_{\epsilon} \vert ^{2}-\mu\frac{y_{\epsilon}^{2}}{ \vert x \vert ^{4}} )\,dx}{ (\sum_{i=1}^{m}\varsigma_{i}\tau_{\min}^{\beta_{i}} \int_{\mathbb{R}^{N}}Q(x)y_{\epsilon}^{2^{\ast\ast}}\,dx ) ^{\frac{2}{2^{\ast\ast}}}} \biggr\} ^{\frac{2^{\ast\ast}}{2^{\ast\ast}-2}} \\ &\leq\frac{2}{N} \biggl\{ \frac{\mathscr{B}(\tau_{\min})\int_{\mathbb{R}^{N}} ( \vert \Delta y_{\epsilon} \vert ^{2}-\mu\frac{y_{\epsilon}^{2}}{ \vert x \vert ^{4}} )\,dx}{ (\max \{ \vert G \vert ^{\frac{2-2^{\ast\ast}}{2}} \mathcal{A}_{0}^{-\frac{2^{\ast\ast}}{2}} \Vert Q_{+} \Vert _{\infty}, \mathcal{A}_{\mu}^{-\frac{2^{\ast\ast}}{2}}Q_{+}(0), \mathcal{A}_{\mu}^{-\frac{2^{\ast\ast}}{2}}Q_{+}(\infty) \} ) ^{\frac{2}{2^{\ast\ast}}}} \biggr\} ^{\frac{2^{\ast\ast}}{2^{\ast\ast}-2}} \\ &=\frac{2}{N}\min \bigl\{ \vert G \vert \mathcal{A}_{0, m}^{\frac{N}{4}} \Vert Q_{+} \Vert _{\infty}^{1-\frac{N}{4}}, \mathcal{A}_{\mu, m}^{\frac{N}{4}}Q_{+}(0)^{1-\frac{N}{4}}, \mathcal{A}_{\mu, m}^{\frac{N}{4}}Q_{+}(\infty)^{1-\frac{N}{4}} \bigr\} =c_{0}^{\ast}. \end{aligned}$$
If \(c_{0}< c_{0}^{\ast}\), then the \((PS)_{c}\) condition holds by Lemma 3.4. Thus we arrive at the conclusion by the mountain pass theorem in [35]. If \(c_{0}=c_{0}^{\ast}\), then \(\gamma(t)=(tt_{0}y_{\epsilon}, tt_{0}\tau_{\min}y_{\epsilon})\), with \(0\leq t\leq1\), is a path in Γ such that \(\max_{t\in [0, 1]}\mathcal{F}(\gamma(t))=c_{0}\). Hence, either \((\overline{t}y_{\epsilon}, \overline{t}\tau_{\min}y_{\epsilon})\) is a critical point of \(\mathcal{F}\) and we are done, or γ can be deformed to a path \(\widetilde{\gamma}\in\Gamma\) with \(\max_{t\in[0, 1]}\mathcal{F}(\widetilde{\gamma}(t))< c_{0}\) and we have a contradiction. Thus we conclude from Lemma 3.1 that there exists a nontrivial G-invariant solution \((u_{0}, v_{0})\in(\mathscr{D}_{G}^{2, 2}(\mathbb{R}^{N})\backslash\{0\})^{2}\) to problem \((\mathscr {P}_{0}^{Q})\) and the results follow. □
Proof of Corollary 2.1
In view of (2.6) and Theorem 2.1, it is sufficient to prove that
$$ \int_{\mathbb{R}^{N}} \bigl(Q(x)-\widetilde{Q} \bigr) U_{\mu}^{2^{\ast\ast}} \biggl(\frac{ \vert x \vert }{\epsilon} \biggr)\,dx\geq0 $$
for some \(\epsilon>0\), where \(\widetilde{Q}=\max\{ \vert G \vert ^{\frac{2-2^{\ast\ast}}{2}} (\mathcal{A}_{0}/\mathcal{A}_{\mu})^{-\frac{2^{\ast\ast}}{2}} \Vert Q_{+} \Vert _{\infty}, Q_{+}(0), Q_{+}(\infty)\}\).
Part (1), case (i). By virtue of (3.14), we need to show that
$$ \epsilon^{-2^{\ast\ast}l_{2}(\mu)} \int_{\mathbb{R}^{N}} \bigl(Q(x)-Q(0) \bigr) U_{\mu}^{2^{\ast\ast}} \biggl(\frac{ \vert x \vert }{\epsilon} \biggr)\,dx\geq0 $$
for certain \(\epsilon>0\). By the hypothesis, we choose \(\varrho_{0}>0\) so that \(Q(x)\geq Q(0)+\xi_{0} \vert x \vert ^{2^{\ast\ast}(l_{2}(\mu)-\Lambda_{0})}\) for \(\vert x \vert \leq\varrho_{0}\). It follows from \(2^{\ast\ast}\Lambda_{0}=N\) and (2.8) that
$$\begin{aligned} &\epsilon^{-2^{\ast\ast}l_{2}(\mu)} \int_{ \vert x \vert \leq\varrho_{0}} \bigl(Q(x)-Q(0) \bigr) U_{\mu}^{2^{\ast\ast}} \biggl(\frac{ \vert x \vert }{\epsilon} \biggr)\,dx \\ &\quad\geq\xi_{0} \int_{ \vert x \vert \leq\varrho_{0}}\epsilon^{-2^{\ast\ast }l_{2}(\mu)} \vert x \vert ^{2^{\ast\ast}(l_{2}(\mu)-\Lambda_{0})} U_{\mu}^{2^{\ast\ast}} \biggl(\frac{ \vert x \vert }{\epsilon} \biggr)\,dx \\ &\quad=\xi_{0} \int_{ \vert x \vert \leq\varrho_{0}} \biggl[ \biggl(\frac{ \vert x \vert }{\epsilon } \biggr) ^{l_{2}(\mu)}U_{\mu} \biggl(\frac{ \vert x \vert }{\epsilon} \biggr) \biggr]^{2^{\ast \ast}} \vert x \vert ^{-N}\,dx\rightarrow+\infty \end{aligned}$$
as \(\epsilon\rightarrow0\). On the other hand, for any \(\epsilon>0\), we deduce from (2.8), (2.9), and the fact that \(2^{\ast\ast}l_{2}(\mu)>N\) that
$$\begin{aligned} &\biggl\vert \epsilon^{-2^{\ast\ast}l_{2}(\mu)} \int_{ \vert x \vert > \varrho_{0}} \bigl(Q(x)-Q(0) \bigr) U_{\mu}^{2^{\ast\ast}} \biggl(\frac{ \vert x \vert }{\epsilon} \biggr)\,dx \biggr\vert \\ &\quad\leq \int_{ \vert x \vert > \varrho_{0}}\frac{ \vert Q(x)-Q(0) \vert }{ \vert x \vert ^{2^{\ast\ast}l_{2}(\mu)}} \biggl[ \biggl(\frac{ \vert x \vert }{\epsilon} \biggr) ^{l_{2}(\mu)}U_{\mu} \biggl(\frac{ \vert x \vert }{\epsilon} \biggr) \biggr]^{2^{\ast \ast}}\,dx \\ &\quad \leq C \int_{ \vert x \vert > \varrho_{0}}\frac{1}{ \vert x \vert ^{2^{\ast\ast}l_{2}(\mu)}}\,dx\leq \overline{C}_{1} \end{aligned}$$
for some constant \(\overline{C}_{1}>0\) independent of ϵ. Combining (3.16) and (3.17), we obtain (3.15) for ϵ sufficiently small.
Part (1), case (ii). By the hypothesis, we choose \(\varrho_{1}>0\) so that \(\vert Q(x)-Q(0) \vert \leq\xi_{1} \vert x \vert ^{\varsigma}\) for \(\vert x \vert \leq\varrho_{1}\). Taking into account \(\varsigma>2^{\ast\ast}(l_{2}(\mu)-\Lambda_{0})>0\), \(N-1+\varsigma-2^{\ast\ast}l_{2}(\mu)>-1\) and \(N-1-2^{\ast\ast}l_{2}(\mu)<-1\), we derive
$$\begin{aligned} &\epsilon^{-2^{\ast\ast}l_{2}(\mu)} \int_{\mathbb{R}^{N}} \bigl\vert Q(x)-Q(0) \bigr\vert U_{\mu}^{2^{\ast\ast}} \biggl(\frac{ \vert x \vert }{\epsilon} \biggr)\,dx \\ &\quad= \int_{\mathbb{R}^{N}}\frac{ \vert Q(x)-Q(0) \vert }{ \vert x \vert ^{2^{\ast\ast }l_{2}(\mu)}} \biggl[ \biggl(\frac{ \vert x \vert }{\epsilon} \biggr) ^{l_{2}(\mu)}U_{\mu} \biggl(\frac{ \vert x \vert }{\epsilon} \biggr) \biggr]^{2^{\ast \ast}}\,dx \\ &\quad\leq C \int_{\mathbb{R}^{N}}\frac{ \vert Q(x)-Q(0) \vert }{ \vert x \vert ^{2^{\ast\ast }l_{2}(\mu)}}\,dx \\ &\quad\leq C \biggl(\xi_{1} \int_{ \vert x \vert \leq\varrho_{1}} \vert x \vert ^{\varsigma-2^{\ast\ast}l_{2}(\mu)}\,dx + \int_{ \vert x \vert >\varrho_{1}} \bigl\vert Q(x)-Q(0) \bigr\vert \vert x \vert ^{-2^{\ast\ast}l_{2}(\mu )}\,dx \biggr) \\ &\quad\leq C \biggl( \int_{0}^{\varrho_{1}} r^{N-1+\varsigma-2^{\ast\ast}l_{2}(\mu)}\,dr + \int_{\varrho_{1}}^{+\infty}r^{N-1-2^{\ast\ast}l_{2}(\mu)}\,dr \biggr) < +\infty. \end{aligned}$$
Thus, by (2.8), (2.12), and the Lebesgue dominated convergence theorem, we have
$$\begin{aligned} & \lim_{\epsilon\rightarrow0} \int_{\mathbb{R}^{N}}\epsilon^{-2^{\ast\ast}l_{2}(\mu)} \bigl(Q(x)-Q(0) \bigr) U_{\mu}^{2^{\ast\ast}} \biggl(\frac{ \vert x \vert }{\epsilon} \biggr)\,dx \\ &\quad =\lim_{\epsilon\rightarrow 0} \int_{\mathbb{R}^{N}} \bigl(Q(x)-Q(0) \bigr) \vert x \vert ^{-2^{\ast\ast }l_{2}(\mu)} \biggl[ \biggl(\frac{ \vert x \vert }{\epsilon} \biggr) ^{l_{2}(\mu)}U_{\mu} \biggl(\frac{ \vert x \vert }{\epsilon} \biggr) \biggr]^{2^{\ast \ast}}\,dx \\ &\quad =C \int_{\mathbb{R}^{N}} \bigl(Q(x)-Q(0) \bigr) \vert x \vert ^{-2^{\ast\ast }l_{2}(\mu)}\,dx>0. \end{aligned}$$
Hence (3.15) holds for ϵ small enough.
Part (2), case (i). According to (3.14), we need to prove that
$$ \epsilon^{-2^{\ast\ast}l_{1}(\mu)} \int_{\mathbb{R}^{N}} \bigl(Q(x)-Q(\infty) \bigr) U_{\mu}^{2^{\ast\ast}} \biggl(\frac{ \vert x \vert }{\epsilon} \biggr)\,dx\geq0 $$
for certain \(\epsilon>0\). By the assumption, we take \(\varrho_{2}>0\) such that \(Q(x)\geq Q(\infty)+ \xi_{2} \vert x \vert ^{-2^{\ast\ast}(\Lambda_{0}-l_{1}(\mu))}\) for all \(\vert x \vert \geq\varrho_{2}\). It follows from (2.7) that
$$\begin{aligned} &\epsilon^{-2^{\ast\ast}l_{1}(\mu)} \int_{ \vert x \vert \geq \varrho_{2}} \bigl(Q(x)-Q(\infty) \bigr) U_{\mu}^{2^{\ast\ast}} \biggl(\frac{ \vert x \vert }{\epsilon} \biggr)\,dx \\ &\quad = \int_{ \vert x \vert \geq \varrho_{2}} \bigl(Q(x)-Q(\infty) \bigr) \vert x \vert ^{-2^{\ast\ast}l_{1}(\mu)} \biggl[ \biggl(\frac{ \vert x \vert }{\epsilon} \biggr) ^{l_{1}(\mu)}U_{\mu} \biggl(\frac{ \vert x \vert }{\epsilon} \biggr) \biggr]^{2^{\ast \ast}}\,dx \\ &\quad \geq\xi_{2} \int_{ \vert x \vert \geq\varrho_{2}} \vert x \vert ^{-N} \biggl[ \biggl( \frac{ \vert x \vert }{\epsilon} \biggr) ^{l_{1}(\mu)}U_{\mu} \biggl( \frac{ \vert x \vert }{\epsilon} \biggr) \biggr] ^{2^{\ast\ast}}\,dx\rightarrow+\infty \end{aligned}$$
as \(\epsilon\rightarrow+\infty\). On the other hand, for any \(\epsilon>0\), we conclude from (2.7), (q.2), and the fact that \(N-1-2^{\ast\ast}l_{1}(\mu)>-1\) that
$$\begin{aligned} &\biggl\vert \int_{ \vert x \vert \leq \varrho_{2}}\epsilon^{-2^{\ast\ast}l_{1}(\mu)} \bigl(Q(x)-Q(\infty ) \bigr) U_{\mu}^{2^{\ast\ast}} \biggl(\frac{ \vert x \vert }{\epsilon} \biggr)\,dx \biggr\vert \\ &\quad \leq \int_{ \vert x \vert \leq \varrho_{2}}\frac{ \vert Q(x)-Q(\infty) \vert }{ \vert x \vert ^{2^{\ast\ast}l_{1}(\mu)}} \biggl[ \biggl(\frac{ \vert x \vert }{\epsilon} \biggr) ^{l_{1}(\mu)}U_{\mu} \biggl(\frac{ \vert x \vert }{\epsilon} \biggr) \biggr]^{2^{\ast \ast}}\,dx \\ &\quad\leq C \int_{ \vert x \vert \leq \varrho_{2}}\frac{ \vert Q(x)-Q(\infty) \vert }{ \vert x \vert ^{2^{\ast\ast}l_{1}(\mu )}}\,dx\leq C \int_{0}^{\varrho_{2}}r^{N-1-2^{\ast\ast}l_{1}(\mu)}\,dr\leq \overline{C}_{2} \end{aligned}$$
for some constant \(\overline{C}_{2}>0\) independent of \(\epsilon>0\). By putting these two estimates together, we obtain (3.18) for \(\epsilon>0\) large enough.
Part (2), case (ii). By the assumption, we take \(\varrho_{3}>0\) such that \(\vert Q(x)-Q(\infty) \vert \leq \xi_{3} \vert x \vert ^{-\kappa}\) for all \(\vert x \vert \geq\varrho_{3}\). Taking into account \(\kappa>2^{\ast\ast}(\Lambda_{0}-l_{1}(\mu))>0\), \(N-1-\kappa-2^{\ast\ast}l_{1}(\mu)<-1\) and \(N-1-2^{\ast\ast}l_{1}(\mu)>-1\), we find
$$\begin{aligned} &\epsilon^{-2^{\ast\ast}l_{1}(\mu)} \int_{\mathbb{R}^{N}} \bigl\vert Q(x)-Q(\infty) \bigr\vert U_{\mu}^{2^{\ast\ast}} \biggl(\frac{ \vert x \vert }{\epsilon} \biggr)\,dx \\ &\quad = \int_{\mathbb{R}^{N}}\frac{ \vert Q(x)-Q(\infty) \vert }{ \vert x \vert ^{2^{\ast\ast }l_{1}(\mu)}} \biggl[ \biggl(\frac{ \vert x \vert }{\epsilon} \biggr) ^{l_{1}(\mu)}U_{\mu} \biggl(\frac{ \vert x \vert }{\epsilon} \biggr) \biggr]^{2^{\ast \ast}}\,dx\\ &\quad \leq C \int_{\mathbb{R}^{N}}\frac{ \vert Q(x)-Q(\infty) \vert }{ \vert x \vert ^{2^{\ast\ast }l_{1}(\mu)}}\,dx \\ &\quad \leq C \biggl(\xi_{3} \int_{ \vert x \vert \geq \varrho_{3}} \vert x \vert ^{-\kappa-2^{\ast\ast}l_{1}(\mu)}\,dx + \int_{ \vert x \vert \leq \varrho_{3}} \bigl\vert Q(x)-Q(\infty) \bigr\vert \vert x \vert ^{-2^{\ast\ast}l_{1}(\mu)}\,dx \biggr) \\ &\quad \leq C \biggl( \int_{ \varrho_{3}}^{+\infty}r^{N-1-\kappa-2^{\ast\ast}l_{1}(\mu)}\,dr + \int_{0}^{\varrho_{3}}r^{N-1-2^{\ast\ast}l_{1}(\mu)}\,dr \biggr)< +\infty. \end{aligned}$$
Therefore, by (2.7), (2.13), and the Lebesgue dominated convergence theorem, we obtain
$$\begin{aligned} & \lim_{\epsilon\rightarrow+\infty} \int_{\mathbb{R}^{N}}\epsilon^{-2^{\ast\ast}l_{1}(\mu)} \bigl(Q(x)-Q(\infty) \bigr) U_{\mu}^{2^{\ast\ast}} \biggl(\frac{ \vert x \vert }{\epsilon} \biggr)\,dx \\ &\quad =\lim_{\epsilon\rightarrow +\infty} \int_{\mathbb{R}^{N}}\frac{Q(x)-Q(\infty)}{ \vert x \vert ^{2^{\ast \ast}l_{1}(\mu)}} \biggl[ \biggl(\frac{ \vert x \vert }{\epsilon} \biggr) ^{l_{1}(\mu)}U_{\mu} \biggl(\frac{ \vert x \vert }{\epsilon} \biggr) \biggr]^{2^{\ast \ast}}\,dx \\ &\quad =C \int_{\mathbb{R}^{N}} \bigl(Q(x)-Q(\infty) \bigr) \vert x \vert ^{-2^{\ast\ast }l_{1}(\mu)}\,dx>0. \end{aligned}$$
Thus (3.18) holds for \(\epsilon>0\) large enough. Arguing as above, we find that part (3) follows. □
To prove Theorem 2.2, we need the following symmetric mountain pass theorem (see [36] or [37, Theorem 9.12]).
Lemma 3.5
Let X be an infinite dimensional Banach space, and let \(\mathcal{F}\in\mathscr{C}^{1}(X, \mathbb{R})\) be an even functional satisfying the \((PS)_{c}\) condition for each c and \(\mathcal{F}(0)=0\). Furthermore, one supposes that:
there exist constants \(\widetilde{\alpha}>0\) and \(\rho>0\) such that \(\mathcal{F}(w)\geq\widetilde{\alpha}\) for all \(\Vert w \Vert =\rho\);
there exists an increasing sequence of subspaces \(\{X_{k}\}\) of X, with \(\dim X_{k}=k\), such that for every k one can find a constant \(R_{k}>0\) such that \(\mathcal{F}(w)\leq0\) for all \(w\in X_{k}\) with \(\Vert w \Vert \geq R_{k}\).
Then \(\mathcal{F}\) possesses a sequence of critical values \(\{c_{k}\}\) tending to ∞ as \(k\rightarrow\infty\).
Proof of Theorem 2.2
We follow closely the arguments in [9, Theorem 3] (see also [38, Theorem 3]). By virtue of Lemma 3.5 with \(X=(\mathscr{D}_{G}^{2, 2}(\mathbb{R}^{N}))^{2}\) and \(w=(u, v)\in X\), we easily see from (q.2), (2.2), (3.1), and (3.3) that
$$\mathcal{F}(u, v)\geq\frac{1}{2} \bigl\Vert (u, v) \bigr\Vert _{\mu}^{2}-\frac{1}{2^{\ast\ast}} \Vert Q \Vert _{\infty} \mathcal{A}_{\mu, m}^{-\frac{2^{\ast\ast}}{2}} \bigl\Vert (u, v) \bigr\Vert _{\mu}^{2^{\ast\ast}}. $$
Thanks to \(2^{\ast\ast}>2\), there exist constants \(\widetilde{\alpha}>0\) and \(\rho>0\) such that \(\mathcal{F}(u, v)\geq\widetilde{\alpha}\) for any \((u, v)\) with \(\Vert (u, v) \Vert _{\mu}=\rho\). To find an appropriate sequence of finite dimensional subspaces of \((\mathscr{D}_{G}^{2, 2}(\mathbb{R}^{N}))^{2}\), we set \(\Omega=\{x\in\mathbb{R}^{N}; Q(x)>0\}\). The set Ω is G-invariant, and we can define \((\mathscr{D}_{G}^{2, 2}(\Omega))^{2}\), which is the subspace of G-invariant functions of \((\mathscr{D}^{2, 2}(\Omega))^{2}\). Extending functions in \((\mathscr{D}_{G}^{2, 2}(\Omega))^{2}\) by 0 outside Ω, we can presume that \((\mathscr{D}_{ G}^{2, 2}(\Omega))^{2}\subset(\mathscr{D}_{G}^{2, 2}(\mathbb{R}^{N}))^{2}\). Let \(\{X_{k}\}\) be an increasing sequence of subspaces of \((\mathscr{D}_{G}^{2, 2}(\Omega))^{2}\) with \(\dim X_{k}=k\) for every k. As in [38, Theorem 3], we define \(\varphi_{1, k}\), …, \(\varphi_{k, k}\in\mathscr {C}_{0}^{\infty}(\mathbb{R}^{N})\) such that \(0\leq\varphi_{i, k}\leq1\), \(\operatorname{supp}(\varphi_{i, k})\cap\operatorname{supp}(\varphi_{j, k})=\emptyset\), \(i\neq j\), and
$$\bigl\vert \operatorname{supp}(\varphi_{i, k})\cap\Omega \bigr\vert >0,\qquad \bigl\vert \operatorname{supp}(\varphi_{j, k})\cap\Omega \bigr\vert >0, \quad\forall i, j\in\{1, \ldots, k\}. $$
Taking \(e_{i, k}=(a\varphi_{i, k}, b\varphi_{i, k})\in X_{k}\), \(i=1, \ldots, k\), and \(X_{k}=\operatorname{span}\{e_{1, k}, \ldots, e_{k, k}\}\), where a and b are two positive constants, we conclude from the construction of \(X_{k}\) that \(\dim X_{k}=k\) for every k. Therefore, there exists a constant \(\epsilon(k)>0\) such that
$$\begin{aligned} &\int_{\Omega}Q(x) \bigl(\varsigma_{1} \vert \tilde{u} \vert ^{\alpha_{1}} \vert \tilde{v} \vert ^{\beta_{1}} + \cdots+\varsigma_{m} \vert \tilde{u} \vert ^{\alpha_{m}} \vert \tilde{v} \vert ^{\beta _{m}} \bigr)\,dx \\ &\quad= \int_{\Omega}Q(x) \Biggl( \varsigma_{1} \Biggl\vert \sum_{i=1}^{k}at_{i, k} \varphi_{i, k} \Biggr\vert ^{\alpha_{1}} \Biggl\vert \sum_{i=1}^{k}bt_{i, k} \varphi_{i, k} \Biggr\vert ^{\beta_{1}}+\cdots \\ &\qquad{}+ \varsigma_{m} \Biggl\vert \sum_{i=1}^{k}at_{i, k} \varphi_{i, k} \Biggr\vert ^{\alpha_{m}} \Biggl\vert \sum _{i=1}^{k}bt_{i, k} \varphi_{i, k} \Biggr\vert ^{\beta_{m}} \Biggr)\,dx \geq\epsilon(k) \end{aligned}$$
for all \((\tilde{u}, \tilde{v})=\sum_{i=1}^{k}t_{i, k}e_{i, k}\in X_{k}\), with \(\Vert (\tilde{u}, \tilde{v}) \Vert _{\mu}=1\). Hence, if \((u, v)\in X_{k}\backslash\{(0, 0)\}\), then we write \((u, v)=t(\tilde{u}, \tilde{v})\), with \(t= \Vert (u, v) \Vert _{\mu}\) and \(\Vert (\tilde{u}, \tilde{v}) \Vert _{\mu}=1\). Therefore, we derive
$$\mathcal{F}(u, v)=\frac{1}{2}t^{2} -\frac{1}{2^{\ast\ast}}t^{2^{\ast\ast}} \int_{\Omega}Q(x) \sum_{i=1}^{m} \varsigma_{i} \vert \tilde{u} \vert ^{\alpha_{i}} \vert \tilde {v} \vert ^{\beta_{i}} \,dx\leq\frac{1}{2}t^{2} - \frac{\epsilon(k)}{2^{\ast\ast}}t^{2^{\ast\ast}}\leq0 $$
for \(t>0\) sufficiently large. By Corollary 3.1 and Lemma 3.5, we conclude that there exists a sequence of critical values \(c_{k}\rightarrow\infty\) as \(k\rightarrow\infty\) and the results follow. □
Because \(Q(x)\) is radial, we know that the corresponding group \(G=O(N)\) and \(\vert G \vert =+\infty\). By Corollary 3.1, \(\mathcal{F}\) satisfies the \((PS)_{c}\) condition for every \(c\in\mathbb{R}\). Therefore, we deduce from Theorem 2.2 that the results follow. □
Multiplicity results for problem \((\mathscr{P}_{\sigma }^{\overline{Q}})\)
The purpose of this section is to investigate problem \((\mathscr {P}_{\sigma}^{\overline{Q}})\) and prove Theorem 2.3; here we always presume that \(\sigma>0\) and \(Q(x)\equiv\overline{Q}>0\) is a constant. The corresponding energy functional of problem \((\mathscr{P}_{\sigma}^{\overline{Q}})\) is defined on \((\mathscr{D}_{G}^{2, 2}(\mathbb{R}^{N}))^{2}\) by
$$ \mathscr{E}_{\sigma}(u, v)=\frac{1}{2} \bigl\Vert (u, v) \bigr\Vert _{\mu}^{2} -\frac{\overline{Q}}{2^{\ast\ast}} \int_{\mathbb{R}^{N}} \sum_{i=1}^{m} \varsigma_{i} \vert u \vert ^{\alpha_{i}} \vert v \vert ^{\beta_{i}}\,dx -\frac{\sigma}{q} \int_{\mathbb{R}^{N}}h(x) \bigl( \vert u \vert ^{q} + \vert v \vert ^{q} \bigr)\,dx, $$
where \(1< q<2\). In view of (h.2), (2.3), and the Hölder inequality, we find
$$\begin{aligned} & \int_{\mathbb{R}^{N}}h(x) \bigl( \vert u \vert ^{q} + \vert v \vert ^{q} \bigr)\,dx \\ &\quad \leq \biggl( \int_{\mathbb{R}^{N}} h^{\theta}(x)\,dx \biggr)^{\frac{1}{\theta}} \biggl\{ \biggl( \int_{\mathbb{R}^{N}} \vert u \vert ^{2^{\ast\ast}}\,dx \biggr)^{\frac{q}{2^{\ast\ast}}} + \biggl( \int_{\mathbb{R}^{N}} \vert v \vert ^{2^{\ast\ast}}\,dx \biggr)^{\frac{q}{2^{\ast\ast}}} \biggr\} \\ &\quad\leq\mathcal{A}_{\mu}^{-\frac{q}{2}} \Vert h \Vert _{\theta} \bigl( \Vert u \Vert _{\mu}^{q}+ \Vert v \Vert _{\mu}^{q} \bigr)\leq C \Vert h \Vert _{\theta} \bigl\Vert (u, v) \bigr\Vert _{\mu}^{q}. \end{aligned}$$
It follows from (4.1) and (4.2) that \(\mathscr{E}_{\sigma}\in\mathscr{C}^{1}((\mathscr{D}_{G}^{2, 2}(\mathbb{R}^{N}))^{2}, \mathbb{R})\) and there exists a one-to-one correspondence between the weak solutions of \((\mathscr {P}_{\sigma}^{\overline{Q}})\) and the critical points of \(\mathscr{E}_{\sigma}\). We now observe that an analogue of the symmetric criticality principle in Lemma 3.1 clearly holds. Consequently, the weak solutions of problem \((\mathscr {P}_{\sigma}^{\overline{Q}})\) are exactly the critical points of the functional \(\mathscr{E}_{\sigma}\).
Lemma 4.1
Assume that (h.1) and (h.2) hold. Then there exists a positive constant M depending only on N, q, \(\mathcal{A}_{\mu}\), and \(\Vert h \Vert _{\theta}\), such that any bounded sequence \(\{(u_{n}, v_{n})\}\subset(\mathscr{D}_{G}^{2, 2}(\mathbb{R}^{N}))^{2}\) satisfying
$$\begin{aligned} \begin{aligned} &\mathscr{E}_{\sigma}(u_{n}, v_{n}) \rightarrow c< \frac{2}{N} \overline{Q}^{1-\frac{N}{4}} \mathcal{A}_{\mu, m}^{\frac{N}{4}}-M \sigma^{\frac{2}{2-q}}, \\ & \mathscr{E}_{\sigma}^{\prime}(u_{n}, v_{n})\rightarrow0\quad (n\rightarrow\infty) \end{aligned} \end{aligned}$$
contains a convergent subsequence.
By the hypothesis, \(\{(u_{n}, v_{n})\}\) is bounded in \((\mathscr{D}_{G}^{2, 2}(\mathbb{R}^{N}))^{2}\). Hence we obtain a subsequence, still denoted by \(\{(u_{n}, v_{n})\}\), satisfying \((u_{n}, v_{n})\rightharpoonup(u, v)\) in \((\mathscr{D}_{G}^{2, 2}(\mathbb{R}^{N}))^{2}\), \((u_{n}, v_{n})\rightarrow(u, v)\) a.e. in \(\mathbb{R}^{N}\) and \((u_{n}, v_{n})\rightarrow(u, v)\) in \((L_{\operatorname{loc}}^{r}(\mathbb{R}^{N}))^{2}\) for all \(r\in[1, 2^{\ast\ast})\). By virtue of (h.2), the Hölder inequality, and the Lebesgue dominated convergence theorem, we derive
$$ \lim_{n\rightarrow \infty} \int_{\mathbb{R}^{N}}h(x) \bigl( \vert u_{n} \vert ^{q} + \vert v_{n} \vert ^{q} \bigr)\,dx = \int_{\mathbb{R}^{N}}h(x) \bigl( \vert u \vert ^{q} + \vert v \vert ^{q} \bigr)\,dx. $$
Applying the standard argument, we easily check from (4.4) that \((u, v)\) is a critical point of \(\mathscr{E}_{\sigma}\). Further, in view of (h.2), (4.1), (4.2), and the Hölder inequality, by direct calculation, we obtain
$$\begin{aligned} \mathscr{E}_{\sigma}(u, v)&=\mathscr{E}_{\sigma}(u, v)-\frac{1}{2^{\ast\ast}} \bigl\langle \mathscr{E}_{\sigma}^{\prime}(u, v), (u, v) \bigr\rangle \\ &=\frac{2}{N} \bigl\Vert (u, v) \bigr\Vert _{\mu}^{2}- \frac{\sigma}{2^{\ast\ast}q} \bigl(2^{\ast\ast}-q \bigr) \int_{\mathbb{R}^{N}}h(x) \bigl( \vert u \vert ^{q} + \vert v \vert ^{q} \bigr)\,dx \\ &\geq\frac{2}{N} \bigl( \Vert u \Vert _{\mu}^{2}+ \Vert v \Vert _{\mu}^{2} \bigr) -\frac{\sigma}{2^{\ast\ast}q} \bigl(2^{\ast\ast}-q \bigr) \mathcal{A}_{\mu}^{-\frac{q}{2}} \Vert h \Vert _{\theta} \bigl( \Vert u \Vert _{\mu}^{q}+ \Vert v \Vert _{\mu}^{q} \bigr) \\ &\geq-(2-q) \biggl(\frac{qN}{4} \biggr)^{\frac{q}{2-q}} \biggl( \frac{2^{\ast\ast}-q}{2^{\ast\ast}q}\mathcal{A}_{\mu }^{-\frac{q}{2}} \Vert h \Vert _{\theta} \biggr)^{\frac{2}{2-q}} \sigma ^{\frac{2}{2-q}} \triangleq-M \sigma^{\frac{2}{2-q}}, \end{aligned}$$
where \(M=(2-q)(\frac{qN}{4})^{\frac{q}{2-q}} (\frac{2^{\ast\ast}-q}{2^{\ast\ast}q}\mathcal{A}_{\mu}^{-\frac{q}{2}} \Vert h \Vert _{\theta})^{\frac{2}{2-q}}\) is a positive constant. We now set \(\widetilde{u}_{n}=u_{n}-u\) and \(\widetilde{v}_{n}=v_{n}-v\). Then, by the Brezis–Lieb lemma [39] and arguing as in [40, Lemma 2.1], we have
$$\begin{aligned} &\bigl\Vert (\widetilde{u}_{n}, \widetilde{v}_{n}) \bigr\Vert _{\mu }^{2}= \bigl\Vert (u_{n}, v_{n}) \bigr\Vert _{\mu}^{2}- \bigl\Vert (u, v) \bigr\Vert _{\mu }^{2}+o_{n}(1), \end{aligned}$$
$$\begin{aligned} & \int_{\mathbb{R}^{N}} \vert \widetilde{u}_{n} \vert ^{\alpha_{i}} \vert \widetilde{v}_{n} \vert ^{\beta_{i}}\,dx = \int_{\mathbb{R}^{N}} \vert u_{n} \vert ^{\alpha_{i}} \vert v_{n} \vert ^{\beta_{i}}\,dx- \int_{\mathbb{R}^{N}} \vert u \vert ^{\alpha_{i}} \vert v \vert ^{\beta_{i}}\,dx+o_{n}(1),\quad i=1, \ldots, m. \end{aligned}$$
Taking into account \(\mathscr{E}_{\sigma}(u_{n}, v_{n})=c+o_{n}(1)\) and \(\mathscr{E}_{\sigma}^{\prime}(u_{n}, v_{n})=o_{n}(1)\), we conclude from (4.1), (4.4), (4.6), and (4.7) that
$$\begin{aligned} c+o_{n}(1)={}&\mathscr{E}_{\sigma}(u_{n}, v_{n})=\mathscr{E}_{\sigma}(u, v)+ \frac{1}{2} \bigl\Vert (\widetilde{u}_{n}, \widetilde{v}_{n}) \bigr\Vert _{\mu}^{2} \\ &{} -\frac{\overline{Q}}{2^{\ast\ast}} \int_{\mathbb{R}^{N}} \sum_{i=1}^{m} \varsigma_{i} \vert \widetilde{u}_{n} \vert ^{\alpha_{i}} \vert \widetilde{v}_{n} \vert ^{\beta_{i}}\,dx+o_{n}(1) \end{aligned}$$
$$ \bigl\Vert (\widetilde{u}_{n}, \widetilde{v}_{n}) \bigr\Vert _{\mu}^{2} - \overline{Q} \int_{\mathbb{R}^{N}}\sum_{i=1}^{m} \varsigma_{i} \vert \widetilde{u}_{n} \vert ^{\alpha_{i}} \vert \widetilde{v}_{n} \vert ^{\beta_{i}}\,dx=o_{n}(1). $$
As a result, for a subsequence \(\{(\widetilde{u}_{n}, \widetilde{v}_{n})\}\), we find
$$\bigl\Vert (\widetilde{u}_{n}, \widetilde{v}_{n}) \bigr\Vert _{\mu}^{2} \rightarrow\overline{\xi}\geq0,\qquad \overline{Q} \int_{\mathbb{R}^{N}} \sum_{i=1}^{m} \varsigma_{i} \vert \widetilde{u}_{n} \vert ^{\alpha_{i}} \vert \widetilde{v}_{n} \vert ^{\beta _{i}}\,dx \rightarrow \overline{\xi} $$
as \(n\rightarrow\infty\). It follows from (3.3) that \(\mathcal{A}_{\mu, m}(\overline{\xi}/\overline{Q})^{\frac{2}{2^{\ast\ast}}}\leq \overline{\xi}\). This yields either \(\overline{\xi}=0\) or \(\overline{\xi}\geq\overline{Q}^{1-\frac{N}{4}}\mathcal{A}_{\mu, m}^{\frac{N}{4}}\). If \(\overline{\xi}\geq \overline{Q}^{1-\frac{N}{4}}\mathcal{A}_{\mu, m}^{\frac{N}{4}}\), then we deduce from (4.5), (4.8), and (4.9) that
$$c=\mathscr{E}_{\sigma}(u, v)+ \biggl(\frac{1}{2}- \frac{1}{2^{\ast\ast}} \biggr)\overline{\xi} \geq\frac{2}{N} \overline{Q}^{1-\frac{N}{4}}\mathcal{A}_{\mu, m}^{\frac{N}{4}}-M \sigma^{\frac{2}{2-q}}, $$
which contradicts (4.3). Therefore, we obtain \(\Vert (\widetilde{u}_{n}, \widetilde{v}_{n}) \Vert _{\mu }^{2}\rightarrow 0\) as \(n\rightarrow+\infty\), and hence, \((u_{n}, v_{n})\rightarrow (u, v)\) in \((\mathscr{D}_{G}^{2, 2}(\mathbb{R}^{N}))^{2}\). The conclusion of this lemma follows. □
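We also indicate, for completeness, how the explicit constant M above arises; this elementary minimization is merely our expansion of the routine step in (4.5). Writing \(K=\frac{2^{\ast\ast}-q}{2^{\ast\ast}q}\mathcal{A}_{\mu}^{-\frac{q}{2}} \Vert h \Vert _{\theta}\) and applying the estimate separately to \(s= \Vert u \Vert _{\mu}\) and \(s= \Vert v \Vert _{\mu}\), one minimizes \(g(s)=\frac{2}{N}s^{2}-K\sigma s^{q}\) over \(s\geq0\):
$$ g^{\prime}(s)=\frac{4}{N}s-qK\sigma s^{q-1}=0 \quad\Longrightarrow\quad s_{\ast}= \biggl(\frac{qNK\sigma}{4} \biggr)^{\frac{1}{2-q}},\qquad g(s_{\ast})=-\frac{2-q}{2} \biggl(\frac{qN}{4} \biggr)^{\frac{q}{2-q}}(K\sigma)^{\frac{2}{2-q}}. $$
Summing the two contributions gives the lower bound \(-M\sigma^{\frac{2}{2-q}}\) with \(M=(2-q)(\frac{qN}{4})^{\frac{q}{2-q}}K^{\frac{2}{2-q}}\), which is the constant stated above.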
Lemma 4.2
Assume that (h.1) and (h.2) hold. Then there exists \(\sigma_{1}^{\ast}>0\) such that for any \(\sigma\in(0, \sigma_{1}^{\ast})\) the following geometric conditions for \(\mathscr{E}_{\sigma}(u, v)\) hold:
(i) \(\mathscr{E}_{\sigma}(0, 0)=0\); there exist constants \(\overline{\alpha}>0\) and \(\rho>0\) such that \(\mathscr{E}_{\sigma}(u, v)\geq\overline{\alpha}\) for all \(\Vert (u, v) \Vert _{\mu}=\rho\);
(ii) there exists \((e_{u}, e_{v})\in (\mathscr{D}_{G}^{2, 2}(\mathbb{R}^{N}))^{2}\) such that \(\Vert (e_{u}, e_{v}) \Vert _{\mu}>\rho\) and \(\mathscr{E}_{\sigma}(e_{u}, e_{v})< 0\).
In view of (h.2), (3.3), (4.1), (4.2), and the Hölder inequality, by direct computation, we derive
$$\begin{aligned} \mathscr{E}_{\sigma}(u, v)&\geq\frac{1}{2} \bigl\Vert (u, v) \bigr\Vert _{\mu}^{2}-\frac{\overline{Q}}{2^{\ast\ast }} \mathcal{A}_{\mu, m}^{-\frac{2^{\ast\ast}}{2}} \bigl\Vert (u, v) \bigr\Vert _{\mu}^{2^{\ast\ast}}-\frac{\sigma}{q}C \Vert h \Vert _{\theta} \bigl\Vert (u, v) \bigr\Vert _{\mu}^{q} \\ &\geq \biggl(\frac{1}{2}-\varsigma_{0} \biggr) \bigl\Vert (u, v) \bigr\Vert _{\mu}^{2}-\frac{\overline{Q}}{2^{\ast\ast }} \mathcal{A}_{\mu, m}^{-\frac{2^{\ast\ast}}{2}} \bigl\Vert (u, v) \bigr\Vert _{\mu}^{2^{\ast\ast}}-C(\varsigma_{0})\sigma^{\frac {2}{2-q}} \end{aligned}$$
for any \(\varsigma_{0}\in(0, \frac{1}{2})\), where \(C(\varsigma_{0})=(\frac{2}{q}-1)\varsigma_{0} [C \Vert h \Vert _{\theta}/(2\varsigma_{0})]^{2/(2-q)}\) is a positive constant. It follows from the last inequality in (4.10) that there exist constants \(\overline{\alpha}>0\), \(\rho>0\), and \(\sigma_{1}^{\ast}>0\) such that \(\mathscr{E}_{\sigma}(u, v)\geq\overline{\alpha}>0\) for all \(\Vert (u, v) \Vert _{\mu}=\rho\), \(\varsigma_{0}\in(0, \frac{1}{2})\) and \(\sigma\in(0, \sigma_{1}^{\ast})\). This yields (i).
On the other hand, taking into account \(\int_{\mathbb{R}^{N}}h(x)( \vert u \vert ^{q}+ \vert v \vert ^{q})\,dx\geq0\), we deduce from (4.1) that there exists \((\check{u}, \check{v})\in(\mathscr{D}_{G}^{2, 2}(\mathbb{R}^{N})\backslash\{0\})^{2}\) such that \(\mathscr {E}_{\sigma}(t\check{u}, t\check{v})\rightarrow-\infty\) as \(t\rightarrow+\infty\). Therefore, we can choose \((e_{u}, e_{v})=(T\check{u}, T\check{v})\) (\(T>0\) large enough) such that \(\Vert (e_{u}, e_{v}) \Vert _{\mu}>\rho\) and \(\mathscr {E}_{\sigma}(e_{u}, e_{v})< 0\). Thus (ii) follows. □
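In the same spirit, we point out how the constant \(C(\varsigma_{0})\) in (4.10) can be obtained (again a routine step that we spell out only for convenience): for every \(t\geq0\) and \(\varsigma_{0}>0\),
$$ \frac{\sigma}{q}C \Vert h \Vert _{\theta}t^{q}\leq\varsigma_{0}t^{2} + \biggl(\frac{2}{q}-1 \biggr)\varsigma_{0} \biggl[\frac{C \Vert h \Vert _{\theta}\sigma}{2\varsigma_{0}} \biggr]^{\frac{2}{2-q}}, $$
which follows by maximizing \(t\mapsto\frac{\sigma}{q}C \Vert h \Vert _{\theta}t^{q}-\varsigma_{0}t^{2}\) at \(t_{\ast}=(\sigma C \Vert h \Vert _{\theta}/(2\varsigma_{0}))^{\frac{1}{2-q}}\). Applied with \(t= \Vert (u, v) \Vert _{\mu}\), this is exactly the passage from the first to the second line of (4.10).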
Lemma 4.3
Assume that (h.1) and (h.2) hold. Then there exists \(\sigma_{2}^{\ast}>0\) such that
$$ \sup_{t\geq0}\mathscr{E}_{\sigma} (ty_{\epsilon}, t\tau_{\min}y_{\epsilon} ) < \frac{2}{N} \overline{Q}^{1-\frac{N}{4}}\mathcal{A}_{\mu, m}^{\frac{N}{4}}-M \sigma^{\frac{2}{2-q}} $$
for any \(\sigma\in(0, \sigma_{2}^{\ast})\) and small \(\epsilon>0\), where \(M>0\) is given in Lemma 4.1 and \(\tau_{\min}>0\) satisfies (3.4)–(3.6).
Similar to the proof in Alves [41, Theorem 3], we define the functions
$$\begin{aligned} \Psi(t)={}&\mathscr{E}_{\sigma} (ty_{\epsilon}, t \tau_{\min}y_{\epsilon} )= \frac{t^{2}}{2} \bigl(1+ \tau_{\min}^{2} \bigr) \int_{\mathbb {R}^{N}} \biggl( \vert \Delta y_{\epsilon} \vert ^{2}-\mu\frac{y_{\epsilon}^{2}}{ \vert x \vert ^{4}} \biggr)\,dx \\ &{}-\frac{t^{2^{\ast\ast}}}{2^{\ast\ast}} \Biggl(\sum_{i=1}^{m} \varsigma_{i} \tau_{\min}^{\beta_{i}} \Biggr)\overline{Q} \int_{\mathbb{R}^{N}}y_{\epsilon}^{2^{\ast\ast}}\,dx- \frac{ \sigma}{q}t^{q} \bigl(1+\tau_{\min}^{q} \bigr) \int_{\mathbb{R}^{N}}h(x) y_{\epsilon}^{q}\,dx \end{aligned}$$
$$ \widetilde{\Psi}(t)=\frac{t^{2}}{2} \bigl(1+ \tau_{\min}^{2} \bigr) \int_{\mathbb{R}^{N}} \biggl( \vert \Delta y_{\epsilon} \vert ^{2}-\mu\frac{y_{\epsilon}^{2}}{ \vert x \vert ^{4}} \biggr)\,dx -\frac{t^{2^{\ast\ast}}}{2^{\ast\ast}} \Biggl(\sum _{i=1}^{m}\varsigma_{i} \tau_{\min}^{\beta_{i}} \Biggr)\overline{Q} \int_{\mathbb{R}^{N}}y_{\epsilon}^{2^{\ast\ast}}\,dx $$
with \(t\geq0\). Note that \(\widetilde{\Psi}(0)=0\), \(\widetilde{\Psi}(t)>0\) for \(t\rightarrow0^{+}\), and \(\lim_{t\rightarrow+\infty}\widetilde{\Psi}(t)=-\infty\). Hence, \(\sup_{t\geq0}\widetilde{\Psi}(t)\) can be achieved at some finite \(t_{\epsilon}^{0}>0\) at which \(\widetilde{\Psi}^{\prime}(t)\) becomes zero. In view of (2.5), (2.6), (3.4)–(3.6), (4.13), and Lemma 3.2, by simple arithmetic, we derive
$$\begin{aligned} \sup_{t\geq 0}\widetilde{\Psi}(t)=\widetilde{\Psi} \bigl(t_{\epsilon}^{0} \bigr)={}& \biggl(\frac{1}{2}- \frac{1}{2^{\ast\ast}} \biggr) \biggl\{ \frac{ (1+\tau_{\min}^{2} ) \int_{\mathbb{R}^{N}} ( \vert \Delta y_{\epsilon} \vert ^{2}-\mu\frac{y_{\epsilon}^{2}}{ \vert x \vert ^{4}} )\,dx}{ [ (\sum_{i=1}^{m}\varsigma_{i} \tau_{\min}^{\beta_{i}} )\overline{Q} \int_{\mathbb{R}^{N}}y_{\epsilon}^{2^{\ast\ast}}\,dx ]^{\frac {2}{2^{\ast\ast}}}} \biggr\} ^{\frac{2^{\ast\ast}}{2^{\ast\ast}-2}} \\ ={}&\frac{2}{N}\overline{Q}^{1-\frac{N}{4}} \bigl(\mathscr{B}( \tau_{\min})\mathcal{A}_{\mu} \bigr) ^{\frac{N}{4}}= \frac{2}{N}\overline{Q}^{1-\frac{N}{4}}\mathcal {A}_{\mu, m}^{\frac{N}{4}}. \end{aligned}$$
Let \(\overline{\sigma}>0\) be such that
$$\frac{2}{N}\overline{Q}^{1-\frac{N}{4}}\mathcal{A}_{\mu, m}^{\frac{N}{4}}-M \sigma^{\frac{2}{2-q}}>0,\quad \forall\sigma\in(0, \overline{\sigma}). $$
On the one hand, by virtue of (h.1), (h.2), (2.5), and (4.12), we conclude that
$$\Psi(t)=\mathscr{E}_{\sigma} (ty_{\epsilon}, t\tau_{\min}y_{\epsilon} )\leq\frac{t^{2}}{2} \bigl(1+\tau _{\min}^{2} \bigr),\quad \forall t\geq0, \sigma>0, $$
and there exists \(T_{0}\in(0, 1)\) independent of ϵ such that
$$ \sup_{0\leq t\leq T_{0}}\Psi(t)\leq\frac{T_{0}^{2}}{2} \bigl(1+\tau_{\min}^{2} \bigr) < \frac{2}{N} \overline{Q}^{1-\frac{N}{4}}\mathcal{A}_{\mu, m}^{\frac{N}{4}}-M \sigma^{\frac{2}{2-q}},\quad \forall\sigma\in(0, \overline{\sigma}). $$
On the other hand, it follows from (4.12), (4.13), and (4.14) that
$$\begin{aligned} \sup_{t\geq T_{0}}\Psi(t)&\leq\sup _{t\geq 0} \widetilde{\Psi}(t)-\frac{\sigma}{q}T_{0}^{q} \bigl(1+ \tau_{\min }^{q} \bigr) \int_{\mathbb{R}^{N}} h(x)y_{\epsilon}^{q}\,dx \\ &=\frac{2}{N}\overline{Q}^{1-\frac{N}{4}}\mathcal{A}_{\mu, m}^{\frac{N}{4}}- \frac{\sigma}{q}T_{0}^{q} \bigl(1+\tau_{\min }^{q} \bigr) \int_{\mathbb{R}^{N}}h(x)y_{\epsilon}^{q}\,dx. \end{aligned}$$
Now, taking \(\sigma>0\) such that \(-\frac{\sigma}{q}T_{0}^{q}(1+\tau_{\min}^{q}) \int_{\mathbb{R}^{N}}h(x)y_{\epsilon}^{q}\,dx <-M\sigma^{\frac{2}{2-q}}\), namely
$$0< \sigma< \biggl[\frac{T_{0}^{q}}{qM} \bigl(1+\tau_{\min}^{q} \bigr) \int_{\mathbb{R}^{N}}h(x) y_{\epsilon}^{q}\,dx \biggr]^{\frac{2-q}{q}}\triangleq\widetilde{\sigma}, $$
we find from (4.16) that
$$ \sup_{t\geq T_{0}}\Psi(t)< \frac{2}{N} \overline{Q}^{1-\frac{N}{4}}\mathcal {A}_{\mu, m}^{\frac{N}{4}}-M \sigma^{\frac{2}{2-q}}, \quad\forall\sigma\in(0, \widetilde{\sigma}). $$
Choosing \(\sigma_{2}^{\ast}=\min\{\overline{\sigma},\widetilde{\sigma}\}\), we deduce from (4.15) and (4.17) that
$$\sup_{t\geq0}\Psi(t) < \frac{2}{N}\overline{Q}^{1-\frac{N}{4}} \mathcal{A}_{\mu, m}^{\frac{N}{4}}-M\sigma^{\frac{2}{2-q}}, \quad\forall\sigma \in \bigl(0, \sigma_{2}^{\ast} \bigr), $$
which implies (4.11). Hence the results of this lemma follow. □
Proof of Theorem 2.3
Let \(\rho>0\) and \(\sigma^{\ast}=\min\{\sigma_{1}^{\ast}, \sigma_{2}^{\ast}\}\) be as in the proofs of Lemmas 4.2 and 4.3, and let \(0<\sigma<\sigma^{\ast}\). We define
$$c_{1}\triangleq\inf_{\overline{\mathcal{B}} _{\rho}(0)}\mathscr{E}_{\sigma}(u, v), $$
where \(\overline{\mathcal{B}}_{\rho}(0)=\{(u, v)\in(\mathscr{D}_{G}^{2, 2}(\mathbb{R}^{N}))^{2}; \Vert (u, v) \Vert _{\mu}\leq\rho\}\). It is easy to see that the metric space \(\overline{\mathcal{B}}_{\rho}(0)\) is complete. According to the Ekeland variational principle [42], we deduce that there exists a sequence \(\{(u_{n}, v_{n})\}\subset\overline{\mathcal{B}}_{\rho}(0)\) such that \(\mathscr{E}_{\sigma}(u_{n}, v_{n})\rightarrow c_{1}\) and \(\mathscr{E}_{\sigma}^{\prime}(u_{n}, v_{n})\rightarrow0\) as \(n\rightarrow\infty\).
Let \(\varphi_{0}\), \(\psi_{0}\in\mathscr {C}_{0}^{\infty}(\mathbb{R}^{N})\) be the G-invariant functions such that \(\varphi_{0}\), \(\psi_{0}> 0\). It follows from (h.1) and (h.2) that \(\int_{\mathbb{R}^{N}}h(x)( \varphi_{0}^{q}+\psi_{0}^{q})\,dx>0\). In view of \(1< q<2<2^{\ast\ast}\), we find that there exists \(\tilde{t}_{0}=\tilde{t}_{0}(\varphi_{0}, \psi_{0})>0\) sufficiently small such that
$$\begin{aligned} \mathscr{E}_{\sigma}(\tilde{t}_{0}\varphi_{0}, \tilde{t}_{0}\psi_{0})={}&\frac{\tilde{t}_{0}^{2}}{2} \bigl\Vert ( \varphi_{0}, \psi_{0}) \bigr\Vert _{\mu}^{2}\\ &{}- \frac{\overline{Q}}{2^{\ast\ast}} \tilde{t}_{0}^{2^{\ast\ast}} \int_{\mathbb{R}^{N}}\sum_{i=1}^{m} \varsigma_{i}\varphi_{0}^{\alpha_{i}} \psi_{0}^{\beta_{i}}\,dx-\frac{\sigma}{q}\tilde{t}_{0}^{q} \int _{\mathbb{R}^{N}}h(x) \bigl( \varphi_{0}^{q}+ \psi_{0}^{q} \bigr)\,dx< 0. \end{aligned}$$
This yields
$$c_{1}< 0< \frac{2}{N}\overline{Q}^{1-\frac{N}{4}} \mathcal{A}_{\mu, m}^{\frac{N}{4}}-M\sigma^{\frac{2}{2-q}}, \quad \forall\sigma \in \bigl(0, \sigma^{\ast} \bigr). $$
By virtue of Lemma 4.1, \(\mathscr{E}_{\sigma}\) admits a nontrivial critical point \((u_{1}, v_{1})\) with \(\mathscr{E}_{\sigma}(u_{1}, v_{1})=c_{1}<0\). Applying the principle of symmetric criticality, we obtain that \((u_{1}, v_{1})\) is a nontrivial G-invariant solution of problem \((\mathscr{P}_{\sigma}^{\overline{Q}})\).
Furthermore, we now define
$$c_{2}\triangleq\inf_{\gamma\in\Gamma}\max_{t\in[0, 1]} \mathscr{E}_{\sigma} \bigl(\gamma(t) \bigr), $$
where \(\Gamma=\{\gamma\in\mathscr{C}([0, 1], (\mathscr{D}_{ G}^{2, 2}(\mathbb{R}^{N}))^{2}); \gamma(0)=(0, 0), \gamma(1)=(e_{u}, e_{v})\}\). It follows from Lemmas 4.2 and 4.3 that
$$0< \overline{\alpha}\leq c_{2}< \frac{2}{N}\overline{Q}^{1-\frac{N}{4}} \mathcal{A}_{\mu, m}^{\frac{N}{4}}-M\sigma^{\frac{2}{2-q}},\quad \forall\sigma \in \bigl(0, \sigma^{\ast} \bigr). $$
This, combined with the mountain pass theorem, implies that \(c_{2}\) is another nonzero critical value of \(\mathscr{E}_{\sigma}\). Similar to the above arguments, problem \((\mathscr{P}_{\sigma}^{\overline{Q}})\) possesses another nontrivial G-invariant solution \((u_{2}, v_{2})\) with \(\mathscr{E}_{\sigma}(u_{2}, v_{2})=c_{2}>0\). □
Brezis, H., Nirenberg, L.: Positive solutions of nonlinear elliptic equations involving critical Sobolev exponents. Commun. Pure Appl. Math. 36, 437–477 (1983)
Cerami, G., Zhong, X.-X., Zou, W.-M.: On some nonlinear elliptic PDEs with Sobolev–Hardy critical exponents and a Li–Lin open problem. Calc. Var. Partial Differ. Equ. 54, 1793–1829 (2015)
Bhakta, M., Santra, S.: On singular equations with critical and supercritical exponents. J. Differ. Equ. 263, 2886–2953 (2017)
Deng, Y.-B., Jin, L.-Y.: On symmetric solutions of a singular elliptic equation with critical Sobolev–Hardy exponent. J. Math. Anal. Appl. 329, 603–616 (2007)
Waliullah, S.: Minimizers and symmetric minimizers for problems with critical Sobolev exponent. Topol. Methods Nonlinear Anal. 34, 291–326 (2009)
Deng, Z.-Y., Huang, Y.-S.: Existence and multiplicity of symmetric solutions for semilinear elliptic equations with singular potentials and critical Hardy–Sobolev exponents. J. Math. Anal. Appl. 393, 273–284 (2012)
Deng, Z.-Y., Huang, Y.-S.: Existence and multiplicity of symmetric solutions for the weighted critical quasilinear problems. Appl. Math. Comput. 219, 4836–4846 (2013)
Deng, Z.-Y., Huang, Y.-S.: On positive G-symmetric solutions of a weighted quasilinear elliptic equation with critical Hardy–Sobolev exponent. Acta Math. Sci. 34B, 1619–1633 (2014)
Bianchi, G., Chabrowski, J., Szulkin, A.: On symmetric solutions of an elliptic equations with a nonlinearity involving critical Sobolev exponent. Nonlinear Anal. 25, 41–59 (1995)
Bartsch, T., Willem, M.: Infinitely Many Non-radial Solutions of an Euclidean Scalar Field Equation. Mathematisches Institut, Universitat, Heidelberg (1992)
Chabrowski, J.: On the existence of G-symmetric entire solutions for semilinear elliptic equations. Rend. Circ. Mat. Palermo 41, 413–440 (1992)
Ghergu, M., Rădulescu, V.: Multi-parameter bifurcation and asymptotics for the singular Lane–Emden–Fowler equation with a convection term. Proc. R. Soc. Edinb. A 135A, 61–83 (2005)
Maultsby, B.: Uniqueness of solutions to singular p-Laplacian equations with subcritical nonlinearity. Adv. Nonlinear Anal. 6, 37–59 (2017)
Demarque, R., Miyagaki, O.: Radial solutions of inhomogeneous fourth order elliptic equations and weighted Sobolev embeddings. Adv. Nonlinear Anal. 4, 135–151 (2015)
Kong, L.: Multiple solutions for fourth order elliptic problems with \(p(x)\)-biharmonic operators. Opusc. Math. 36, 253–264 (2016)
Ghergu, M., Rădulescu, V.: Singular elliptic problems with lack of compactness. Ann. Mat. Pura Appl. 185, 63–79 (2006)
Ghergu, M., Rădulescu, V.: Singular Elliptic Problems: Bifurcation and Asymptotic Analysis. Oxford Lecture Series in Mathematics and Its Applications, vol. 37. Oxford University Press, Oxford (2008)
Cai, M.-J., Kang, D.-S.: Elliptic systems involving multiple strongly coupled critical terms. Appl. Math. Lett. 25, 417–422 (2012)
Nyamoradi, N., Hsu, T.-S.: Existence of multiple positive solutions for semilinear elliptic systems involving m critical Hardy–Sobolev exponents and m sign-changing weight function. Acta Math. Sci. 34B, 483–500 (2014)
Cortázar, C., Elgueta, M., Garcia-Melián, J.: Analysis of an elliptic system with infinitely many solutions. Adv. Nonlinear Anal. 6, 1–12 (2017)
Alves, C.O., de Morais Filho, D.C., Souto, M.A.S.: On systems of elliptic equations involving subcritical or critical Sobolev exponents. Nonlinear Anal. 42, 771–787 (2000)
Kang, D.-S.: Positive minimizers of the best constants and solutions to coupled critical quasilinear systems. J. Differ. Equ. 260, 133–148 (2016)
Benrhouma, M.: On a singular elliptic system with quadratic growth in the gradient. J. Math. Anal. Appl. 448, 1120–1146 (2017)
Lü, D.-F.: Multiple solutions for a class of biharmonic elliptic systems with Sobolev critical exponent. Nonlinear Anal. 74, 6371–6382 (2011)
Alvarez-Caudevilla, P., Colorado, E., Galaktionov, V.A.: Existence of solutions for a system of coupled nonlinear stationary bi-harmonic Schrödinger equations. Nonlinear Anal., Real World Appl. 23, 78–93 (2015)
Kang, D.-S., Xiong, P.: Ground state solutions to biharmonic equations involving critical nonlinearities and multiple singular potentials. Appl. Math. Lett. 66, 9–15 (2017)
Deng, Z.-Y., Huang, Y.-S.: Existence of symmetric solutions for singular semilinear elliptic systems with critical Hardy–Sobolev exponents. Nonlinear Anal., Real World Appl. 14, 613–625 (2013)
Kang, D.-S., Yang, F.: Elliptic systems involving multiple critical nonlinearities and symmetric multi-polar potentials. Sci. China Math. 57, 1011–1024 (2014)
Deng, Z.-Y., Huang, Y.-S.: Symmetric solutions for a class of singular biharmonic elliptic systems involving critical exponents. Appl. Math. Comput. 264, 323–334 (2015)
Deng, Z.-Y., Huang, Y.-S.: Multiple symmetric results for a class of biharmonic elliptic systems with critical homogeneous nonlinearity in \(\mathbb{R}^{N}\). Acta Math. Sci. 37B, 1665–1684 (2017)
Palais, R.: The principle of symmetric criticality. Commun. Math. Phys. 69, 19–30 (1979)
Rellich, F.: Perturbation Theory of Eigenvalue Problems. Courant Institute of Mathematical Sciences, New York University, New York (1954)
D'Ambrosio, L., Jannelli, E.: Nonlinear critical problems for the biharmonic operator with Hardy potential. Calc. Var. Partial Differ. Equ. 54, 365–396 (2015)
Lions, P.L.: The concentration-compactness principle in the calculus of variations. The limit case, part I. Rev. Mat. Iberoam. 1(1), 145–201 (1985); part II. Rev. Mat. Iberoam. 1(2), 45–121 (1985)
Ambrosetti, A., Rabinowitz, P.H.: Dual variational methods in critical point theory and applications. J. Funct. Anal. 14, 349–381 (1973)
Pucci, P., Rădulescu, V.: The impact of the mountain pass theory in nonlinear analysis: a mathematical survey. Boll. Unione Mat. Ital. (9) 3, 543–582 (2010)
Rabinowitz, P.H.: Minimax Methods in Critical Point Theory with Applications to Differential Equations. CBMS Reg. Conf. Ser. Math. Amer. Math. Soc., Providence (1986)
Nyamoradi, N.: Solutions of the quasilinear elliptic systems with combined critical Sobolev–Hardy terms. Ukr. Math. J. 67, 891–915 (2015)
Brezis, H., Lieb, E.: A relation between pointwise convergence of functions and convergence of functionals. Proc. Am. Math. Soc. 88, 486–490 (1983)
Han, P.-G.: The effect of the domain topology on the number of positive solutions of some elliptic systems involving critical Sobolev exponents. Houst. J. Math. 32, 1241–1257 (2006)
Alves, C.O.: Multiple positive solutions for equations involving critical exponent in \(\mathbb{R}^{N}\). Electron. J. Differ. Equ. 1997, 13 (1997)
Ekeland, I.: Nonconvex minimization problems. Bull. Am. Math. Soc. 3, 443–474 (1979)
ZD and DX are supported by the National Natural Science Foundation of China (Nos. 11471235; 11601052) and Chongqing Research Program of Basic Research and Frontier Technology (No. cstc2017jcyjBX0037). YH is supported by the National Natural Science Foundation of China (No. 11471235).
Data sharing not applicable to this article as no datasets were generated or analyzed during the current study.
Key Lab of Intelligent Analysis and Decision on Complex Systems, Chongqing University of Posts and Telecommunications, Chongqing, P.R. China
Zhiying Deng & Dong Xu
Department of Mathematics, Soochow University, Suzhou, P.R. China
Yisheng Huang
Zhiying Deng
Dong Xu
The authors declare that this study was completed independently. All authors read and approved the final manuscript.
Correspondence to Zhiying Deng.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Deng, Z., Xu, D. & Huang, Y. On G-invariant solutions of a singular biharmonic elliptic system involving multiple critical exponents in \(R^{N}\). Bound Value Probl 2018, 53 (2018). https://doi.org/10.1186/s13661-018-0971-5
35J48
G-invariant solution
Rellich inequality
Symmetric criticality principle
Biharmonic elliptic system | CommonCrawl |
Integration of a recent infection testing algorithm into HIV surveillance in Ireland: improving HIV knowledge to target prevention
E. Robinson, J. Moran, K. O'Donnell, J. Hassan, H. Tuite, O. Ennis, F. Cooney, E. Nugent, L. Preston, S. O'Dea, S. Doyle, S. Keating, J. Connell, C. De Gascun, D. Igoe
Journal: Epidemiology & Infection / Volume 147 / 2019
Recent infection testing algorithms (RITA) for HIV combine serological assays with epidemiological data to determine likely recent infections, indicators of ongoing transmission. In 2016, we integrated RITA into national HIV surveillance in Ireland to better inform HIV prevention interventions. We determined the avidity index (AI) of new HIV diagnoses and linked the results with data captured in the national infectious disease reporting system. RITA classified a diagnosis as recent based on an AI < 1.5, unless epidemiological criteria (CD4 count <200 cells/mm³; viral load <400 copies/ml; the presence of AIDS-defining illness; prior antiretroviral therapy use) indicated a potential false-recent result. Of 508 diagnoses in 2016, we linked 448 (88.1%) to an avidity test result. RITA classified 12.5% of diagnoses as recent, with the highest proportion (26.3%) amongst people who inject drugs. On multivariable logistic regression, recent infection was more likely with a concurrent sexually transmitted infection (aOR 2.59; 95% CI 1.04–6.45). Data were incomplete for at least one RITA criterion in 48% of cases. The study demonstrated the feasibility of integrating RITA into routine surveillance and showed some ongoing HIV transmission. To improve the interpretation of RITA, further efforts are required to improve completeness of the required epidemiological data.
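The classification rule just described is essentially a short decision procedure. A minimal Python sketch of how such a rule could be encoded is given below; the thresholds (AI < 1.5, CD4 < 200 cells/mm³, viral load < 400 copies/ml) are taken from the abstract, while the function name, field names and example values are hypothetical and not part of the original study.

def classify_rita(avidity_index, cd4_count=None, viral_load=None,
                  aids_defining_illness=False, prior_art_use=False):
    """Classify one HIV diagnosis as 'recent' or 'long-standing' (RITA-style rule)."""
    if avidity_index is None:
        return "unclassifiable"
    if avidity_index >= 1.5:
        return "long-standing"
    # Low avidity suggests a recent infection, unless any epidemiological
    # criterion points to a potential false-recent result.
    false_recent = (
        (cd4_count is not None and cd4_count < 200)
        or (viral_load is not None and viral_load < 400)
        or aids_defining_illness
        or prior_art_use
    )
    return "long-standing" if false_recent else "recent"

print(classify_rita(avidity_index=1.2, cd4_count=350, viral_load=15000))  # recent
print(classify_rita(avidity_index=1.2, prior_art_use=True))               # long-standing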
Commissioning of FLAG: A phased array feed for the GBT
K. M. Rajwade, N. M. Pingel, R. A. Black, M. Ruzindana, M. Burnett, B. Jeffs, K. Warnick, D. J. Pisano, D. R. Lorimer, R. M. Prestage, L. Hawkins, J. Ray, P. Marganian, T. Chamberlin, J. Ford, W. Shillue, D. A. Roshi
Journal: Proceedings of the International Astronomical Union / Volume 13 / Issue S337 / September 2017
Print publication: September 2017
Phased Array Feed (PAF) technology is the next major advancement in radio astronomy in terms of combining high sensitivity and large field of view. The Focal L-band Array for the Green Bank Telescope (FLAG) is one of the most sensitive PAFs developed so far. It consists of 19 dual-polarization elements mounted on a prime focus dewar resulting in seven beams on the sky. Its unprecedented system temperature of ~17 K will lead to a 3 fold increase in pulsar survey speeds as compared to contemporary single pixel feeds. Early science observations were conducted in a recently concluded commissioning phase of the FLAG where we clearly demonstrated its science capabilities. We observed a selection of normal and millisecond pulsars and detected giant pulses from PSR B1937+21.
X-rays from the mode-switching PSR B0943+10
S. Mereghetti, L. Kuiper, A. Tiengo, J. Hessels, W. Hermsen, K. Stovall, A. Possenti, J. Rankin, P. Esposito, R. Turolla, D. Mitra, G. Wright, B. Stappers, A. Horneffer, S. Oslowski, M. Serylak, J.-M. Griessmeier, M. Rigoselli
Published online by Cambridge University Press: 04 June 2018, pp. 62-65
New simultaneous X-ray and radio observations of the archetypal mode-switching pulsar PSR B0943+10 have been carried out with XMM-Newton and the LOFAR, LWA and Arecibo radio telescopes in November 2014. They allowed us to better constrain the X-ray spectral and variability properties of this pulsar and to detect, for the first time, the X-ray pulsations also during the X-ray-fainter mode. The combined timing and spectral analysis indicates that unpulsed non-thermal emission, likely of magnetospheric origin, and pulsed thermal emission from a small polar cap are present during both radio modes and vary in a correlated way.
Evolution of the low-frequency pulse profile of PSR B2217+47
D. Michilli, J. W. T. Hessels, J. Y. Donner, J.-M. Grießmeier, M. Serylak, B. Shaw, B. W. Stappers, J. P. W. Verbiest, A. T. Deller, L. N. Driessen, D. R. Stinebring, L. Bondonneau, M. Geyer, M. Hoeft, A. Karastergiou, M. Kramer, S. Osłowski, M. Pilia, S. Sanidas, P. Weltevrede
An evolution of the low-frequency pulse profile of PSR B2217+47 is observed during a six-year observing campaign with the LOFAR telescope at 150 MHz. The evolution is manifested as a new component in the profile trailing the main peak. The leading part of the profile, including a newly-observed weak component, is steady during the campaign. The transient component is not visible in simultaneous observations at 1500 MHz using the Lovell telescope, implying a chromatic effect. A variation in the dispersion measure of the source is detected in the same timespan. Precession of the pulsar and changes in the magnetosphere are investigated to explain the profile evolution. However, the listed properties favour a model based on turbulence in the interstellar medium (ISM). This interpretation is confirmed by a strong correlation between the intensity of the transient component and main peak in single pulses. Since PSR B2217+47 is the fourth brightest pulsar visible to LOFAR, we speculate that ISM-induced pulse profile evolution might be relatively common but subtle and that SKA-Low will detect many similar examples. In this scenario, similar studies of pulse profile evolution could be used in parallel with scintillation arcs to characterize the properties of the ISM.
Utilizing High-temperature Atomic-resolution STEM and EELS to Determine Reconstructed Surface Structure of Complex Oxide
Weizong Xu, Preston C. Bowes, Everett D. Grimley, Douglas L. Irving, James M. LeBeau
Journal: Microscopy and Microanalysis / Volume 23 / Issue S1 / July 2017
Published online by Cambridge University Press: 04 August 2017, pp. 1596-1597
ON A SINGULAR INITIAL-VALUE PROBLEM FOR THE NAVIER–STOKES EQUATIONS
Incompressible viscous fluids
L. E. Fraenkel, M. D. Preston
Journal: Mathematika / Volume 61 / Issue 2 / May 2015
This paper presents a recent result for the problem introduced eleven years ago by Fraenkel and McLeod [A diffusing vortex circle in a viscous fluid. In IUTAM Symposium on Asymptotics, Singularities and Homogenisation in Problems of Mechanics, Kluwer (2003), 489–500], but described only briefly there. We shall prove the following, as far as space allows. The vorticity $\omega$ of a diffusing vortex circle in a viscous fluid has, for small values of a non-dimensional time, a second approximation $\omega_{A}+\omega_{1}$ that, although formulated for a fixed, finite Reynolds number $\lambda$ and exact for $\lambda=0$ (then $\omega=\omega_{A}$), tends to a smooth limiting function as $\lambda\uparrow\infty$. In §§1 and 2 the necessary background and apparatus are described; §3 outlines the new result and its proof.
Effects of elevating colonic propionate on liver fat content in overweight adults with non-alcoholic fatty liver disease: a pilot study
E. S. Chambers, A. Viardot, A. Psichas, D. J. Morrison, K. G. Murphy, S. E. K. Zac-Varghese, K. MacDougall, T. Preston, M. C. Tedford, J. D. Bell, E. L. Thomas, S. Mt-Isa, D. Ashby, W. S. Dhillo, S. R. Bloom, W. G. Morley, S. Clegg, G. Frost
Journal: Proceedings of the Nutrition Society / Volume 74 / Issue OCE1 / 2015
Published online by Cambridge University Press: 15 April 2015, E30
Print publication: 2015
Targeted delivery of propionate to the human colon prevents body weight and intra-abdominal adipose tissue gain in overweight adults
Published online by Cambridge University Press: 20 May 2014, E22
Effects of dark chocolate and cocoa consumption on endothelial function and arterial stiffness in overweight adults
Sheila G. West, Molly D. McIntyre, Matthew J. Piotrowski, Nathalie Poupin, Debra L. Miller, Amy G. Preston, Paul Wagner, Lisa F. Groves, Ann C. Skulas-Ray
Journal: British Journal of Nutrition / Volume 111 / Issue 4 / 28 February 2014
Print publication: 28 February 2014
The consumption of cocoa and dark chocolate is associated with a lower risk of CVD, and improvements in endothelial function may mediate this relationship. Less is known about the effects of cocoa/chocolate on the augmentation index (AI), a measure of vascular stiffness and vascular tone in the peripheral arterioles. We enrolled thirty middle-aged, overweight adults in a randomised, placebo-controlled, 4-week, cross-over study. During the active treatment (cocoa) period, the participants consumed 37 g/d of dark chocolate and a sugar-free cocoa beverage (total cocoa = 22 g/d, total flavanols (TF) = 814 mg/d). Colour-matched controls included a low-flavanol chocolate bar and a cocoa-free beverage with no added sugar (TF = 3 mg/d). Treatments were matched for total fat, saturated fat, carbohydrates and protein. The cocoa treatment significantly increased the basal diameter and peak diameter of the brachial artery by 6 % (+2 mm) and basal blood flow volume by 22 %. Substantial decreases in the AI, a measure of arterial stiffness, were observed in only women. Flow-mediated dilation and the reactive hyperaemia index remained unchanged. The consumption of cocoa had no effect on fasting blood measures, while the control treatment increased fasting insulin concentration and insulin resistance (P= 0·01). Fasting blood pressure (BP) remained unchanged, although the acute consumption of cocoa increased resting BP by 4 mmHg. In summary, the high-flavanol cocoa and dark chocolate treatment was associated with enhanced vasodilation in both conduit and resistance arteries and was accompanied by significant reductions in arterial stiffness in women.
The Commensal Real-Time ASKAP Fast-Transients (CRAFT) Survey
Jean-Pierre Macquart, M. Bailes, N. D. R. Bhat, G. C. Bower, J. D. Bunton, S. Chatterjee, T. Colegate, J. M. Cordes, L. D'Addario, A. Deller, R. Dodson, R. Fender, K. Haines, P. Hall, C. Harris, A. Hotan, S. Johnston, D. L. Jones, M. Keith, J. Y. Koay, T. J. W. Lazio, W. Majid, T. Murphy, R. Navarro, C. Phillips, P. Quinn, R. A. Preston, B. Stansby, I. Stairs, B. Stappers, L. Staveley-Smith, S. Tingay, D. Thompson, W. van Straten, K. Wagstaff, M. Warren, R. Wayth, L. Wen, The CRAFT Collaboration
Journal: Publications of the Astronomical Society of Australia / Volume 27 / Issue 3 / 2010
We are developing a purely commensal survey experiment for fast (<5 s) transient radio sources. Short-timescale transients are associated with the most energetic and brightest single events in the Universe. Our objective is to cover the enormous volume of transients parameter space made available by ASKAP, with an unprecedented combination of sensitivity and field of view. Fast timescale transients open new vistas on the physics of high brightness temperature emission, extreme states of matter and the physics of strong gravitational fields. In addition, the detection of extragalactic objects affords us an entirely new and extremely sensitive probe on the huge reservoir of baryons present in the IGM. We outline here our approach to the considerable challenge involved in detecting fast transients, particularly the development of hardware fast enough to dedisperse and search the ASKAP data stream at or near real-time rates. Through CRAFT, ASKAP will provide the testbed of many of the key technologies and survey modes proposed for high time resolution science with the SKA.
By Pierre Amarenco, Adrià Arboix, Marcel Arnold, Robert W. Baloh, John Bamford, Jason J. S. Barton, Claudio L. Bassetti, Christopher F. Bladin, Julien Bogousslavsky, Julian Bösel, Marie-Germaine Bousser, Thomas Brandt, John C. M. Brust, Erica C. S. Camargo, Louis R. Caplan, Emmanuel Carrera, Carlo W. Cereda, Seemant Chaturvedi, Claudia Chaves, Chin-Sang Chung, Isabelle Crassard, Hans Christoph Diener, Marianne Dieterich, Ralf Dittrich, Geoffrey A. Donnan, Paul Eslinger, Conrado J. Estol, Edward Feldmann, José M. Ferro, Joseph Ghika, Daniel Hanley, Ahamad Hassan, Cathy Helgason, Argye E. Hillis, Marc Hommel, Carlos S. Kase, Julia Kejda-Scharler, Jong S. Kim, Rainer Kollmar, Joshua Kornbluth, Sandeep Kumar, Emre Kumral, Hyung Lee, Didier Leys, Eric Logigian, Mauro Manconi, Elisabeth B. Marsh, Randolph S. Marshall, Isabel P. Martins, Josep Lluís Martí-Vilalta, Heinrich P. Mattle, Jérome Mawet, Mikael Mazighi, Patrik Michel, Jay Preston Mohr, Thierry Moulin, Sandra Narayanan, Kwang-Yeol Park, Florence Pasquier, Charles Pierrot-Deseilligny, Nils Petersen, Raymond Reichwein, E. Bernd Ringelstein, Gabriel J. E. Rinkel, Elliott D. Ross, Arnaud Saj, Martin A. Samuels, Jeremy D. Schmahmann, Stefan Schwab, Florian Stögbauer, Mathias Sturzenegger, Laurent Tatu, Pariwat Thaisetthawatkul, Dagmar Timmann, Jan van Gijn, Ana Verdelho, Francois Vingerhoets, Patrik Vuilleumier, Fabrice Vuillier, Eelco F. M. Wijdicks, Shirley H. Wray, Wendy C. Ziai
Edited by Louis R. Caplan, Jan van Gijn
Book: Stroke Syndromes, 3ed
Print publication: 12 July 2012, pp vii-x
Unstable Richtmyer–Meshkov growth of solid and liquid metals in vacuum
W. T. Buttler, D. M. Oró, D. L. Preston, K. O. Mikaelian, F. J. Cherne, R. S. Hixson, F. G. Mariam, C. Morris, J. B. Stone, G. Terrones, D. Tupa
Journal: Journal of Fluid Mechanics / Volume 703 / 25 July 2012
Print publication: 25 July 2012
We present experimental results supporting physics-based ejecta model development, where our main assumption is that ejecta form as a special limiting case of a Richtmyer–Meshkov (RM) instability at a metal–vacuum interface. From this assumption, we test established theory of unstable spike and bubble growth rates, rates that link to the wavelength and amplitudes of surface perturbations. We evaluate the rate theory through novel application of modern laser Doppler velocimetry (LDV) techniques, where we coincidentally measure bubble and spike velocities from explosively shocked solid and liquid metals with a single LDV probe. We also explore the relationship of ejecta formation from a solid material to the plastic flow stress it experiences at high-strain rates ( ) and high strains (700 %) as the fundamental link to the onset of ejecta formation. Our experimental observations allow us to approximate the strength of Cu at high strains and strain rates, revealing a unique diagnostic method for use at these extreme conditions.
The size of airborne dust particles precipitating bronchospasm in house dust sensitive children
R. P. Clark, T. D. Preston, D. C. Gordon-Nesbitt, S. Malka, L. Sinclair
Journal: Epidemiology & Infection / Volume 77 / Issue 3 / December 1976
Published online by Cambridge University Press: 15 May 2009, pp. 321-325
We have assessed the effect of house-cleaning procedures on changes in airborne dust and bacteria counts and correlated these with respiratory function tests in 14 children with bronchial asthma who were known to have developed attacks at home, and who had positive skin tests to house dust and the house-dust mite.
We have demonstrated that after cleaning procedures a positive and statistically significant correlation exists between the increase in the numbers of small particles, 2 μm and less in diameter, in the environment, and reduction in mean peak flow. This indicates that particles of this size penetrate the bronchial tree and are the causative factor in the genesis of bronchospasm.
Stalkers and harassers of royalty: the role of mental illness and motivation
D. V. James, P. E. Mullen, M. T. Pathé, J. R. Meloy, L. F. Preston, B. Darnley, F. R. Farnham
Journal: Psychological Medicine / Volume 39 / Issue 9 / September 2009
Published online by Cambridge University Press: 01 April 2009, pp. 1479-1490
Public figures are at increased risk of attracting unwanted attention in the form of intrusions, stalking and, occasionally, attack. Whereas the potential threat to the British Royal Family from terrorists and organized groups is clearly defined, there is a dearth of knowledge about that from individual harassers and stalkers. This paper reports findings from the first systematic study of this group.
A retrospective study was conducted of a randomly selected stratified sample (n=275) of 8001 files compiled by the Metropolitan Police Service's Royalty Protection Unit over 15 years on inappropriate communications or approaches to members of the British Royal Family. Cases were split into behavioural types. Evidence of major mental illness was recorded from the files. Cases were classified according to a motivational typology. An analysis was undertaken of associations between motivation, type of behaviour and mental illness.
Of the study sample, 83.6% were suffering from serious mental illness. Different forms of behaviour were associated with different patterns of symptomatology. Cases could be separated into eight motivational groups, which also showed significant differences in mental state. Marked differences in the intrusiveness of behaviour were found between motivational groups.
The high prevalence of mental illness indicates the relevance of psychiatric intervention. This would serve the health interests of psychotic individuals and alleviate protection concerns without the necessity of attempting large numbers of individual risk predictions. The finding that some motivations are more likely to drive intrusive behaviours than others may help focus both health and protection interventions.
DIVISION X / COMMISSION 40 / WORKING GROUP GLOBAL VERY LONG BASELINE INTERFEROMETRY
Jonathan D. Romney, Richard T. Schilizzi, Simon T. Garrington, Leonid I. Gurvits, Xiaoyu Hong, David L. Jauncey, Hideyuki Kobayashi, Richard Porcas, Robert A. Preston, Christopher John Salter, Arpad Szomoru, Masato Tsuboi, James S. Ulvestad, Alan R. Whitney
Journal: Proceedings of the International Astronomical Union / Volume 4 / Issue T27A / December 2008
This triennium began with an action to re-create the Terms of Reference for the Working Group Global VLBI (WG-GV). These had been lost over the years since the Group was established in 1990. Fortunately, the personal archive of one long-term member yielded a copy of the original memorandum by R. D. Ekers, which was found to coincide quite well with current practice and areas of interest. New Terms of Reference, based on modern conditions, were drafted and accepted by both IAU and URSI.
Proceedings of the 129th Semon Club, 27 May 2005, the Otolaryngology Department, Guy's and St Thomas' NHS Foundation Trust, London, UK
E Chevretton, L Michaels, R Preston, D Gillett
Journal: The Journal of Laryngology & Otology / Volume 120 / Issue 7 / July 2006
Published online by Cambridge University Press: 30 May 2006, pp. 1-6
In-Situ Investigation of Surface Stoichiometry During InGaN and GaN Growth by Plasma-Assisted Molecular Beam Epitaxy Using RHEED-TRAXS
Randy Preston Tompkins, Brenda L. VanMil, Kyoungnae Lee, Eric D. Schires, Yewhee Chye, David Lederman, Thomas H. Myers
Journal: MRS Online Proceedings Library Archive / Volume 892 / 2005
Published online by Cambridge University Press: 01 February 2011, 0892-FF04-06
Reflection high-energy electron diffraction total-reflection-angle x-ray spectroscopy (RHEED-TRAXS) uses high-energy electrons from RHEED to excite x-ray fluorescence. Monitoring characteristic x-rays of selected elements thus allows study of surface coverage of materials. In this study, surface coverage of Ga and In during growth of GaN and InGaN was probed using this technique. Evolution of the surface layer of Ga on GaN during growth and deposition of Ga on static GaN at room temperature were studied. RHEED-TRAXS measurements were performed during growth of InGaN by measuring the ratio of the In Lα to Ga Kα intensity. A significant surface coverage of In was observed at all temperatures investigated regardless of actual In incorporation.
Pearson-Readhead Survey from Space
R.A. Preston, M.L. Lister, S. J. Tingay, B.G. Piner, D.W. Murphy, D. L. Jones, D.L. Meier, T.J. Pearson, A. C. S. Readhead, H. Hirabayashi, H. Kobayashi, M. Inoue
Journal: Symposium - International Astronomical Union / Volume 205 / 2001
We are using the VSOP space VLBI mission to observe a complete sample of Pearson-Readhead survey sources at 4.8 GHz to determine core brightness temperatures and pc-scale jet properties. To date we have imaged 27 of the 31 objects in our sample. Our preliminary results show that the majority of objects contain strong core components that remain unresolved on baselines of 30,000 km. The brightness temperatures of several cores significantly exceed 10¹² K, which is indicative of highly relativistically beamed emission. We also find that core brightness temperature is correlated with intraday variability in compact AGNs.
Measuring the Properties of the Gravitational Lens PKS 1830–211
J. E. J. Lovell, P. M. McCulloch, S. P. Ellingsen, C. J. Phillips, J. E. Reynolds, D. L. Jauncey, M. W. Sinclair, W. E. Wilson, A. K. Tzioumis, E. A. King, R. G. Gough, R. A. Preston, D. L. Jones, P. R. Backus
Journal: International Astronomical Union Colloquium / Volume 164 / 1998
PKS 1830–211 is the strongest known radio gravitational lens by almost an order of magnitude and has the potential to provide a measurement of H₀, provided the lensing system can be parameterized. Attempts to identify optical counterparts, to measure redshifts, have so far proved unsuccessful and this has led to radio and millimetre spectral line observations. We present our discovery of an absorption system at z = 0.19. A brief description is also made of our ATCA observations to measure the lensing time delay for this source.
The Astronomical Low-Frequency Array (ALFA)
D. L. Jones, K. W. Weiler, R. J. Allen, M. M. Desch, W. C. Erickson, M. L. Kaiser, N. E. Kassim, T. B. H. Kuiper, M. J. Mahoney, K. A. Marsh, R. A. Perley, R. A. Preston, R. G. Stone
The ALFA mission is designed to map the entire sky at frequencies between approximately 0.3 and 30 MHz with angular resolution limited by interstellar and interplanetary scattering. Most of this region of the spectrum is inaccessible from the ground because of absorption and refraction by the Earth's ionosphere. A wide range of astrophysical questions concerning solar system, galactic, and extragalactic objects could be answered with high resolution images at low frequencies, where absorption effects and coherent emission processes become important and the synchrotron lifetimes of electrons are comparable to the age of the universe. | CommonCrawl |
The circRNA circADAMTS6 promotes progression of ESCC and correlates with prognosis
Jing Bu1, Lina Gu1, Xin Liu1, Xixi Nan1, Xiangmei Zhang1, Lingjiao Meng1, Yang Zheng1, Fei Liu1, Jiali Li1, Ziyi Li1, Meixiang Sang1,2 & Baoen Shan1,2
Scientific Reports volume 12, Article number: 13757 (2022)
Circular RNAs (circRNAs) are a type of noncoding RNA that play a vital role in the occurrence and development of esophageal squamous cell carcinoma (ESCC). However, the role of the novel circADAMTS6 in ESCC remains unknown. We assessed circADAMTS6 expression in ESCC tissues and cells, and the relationship between circADAMTS6 expression and overall survival of ESCC patients. Functional experiments in vitro and a xenograft assay in vivo were applied to explore the functions and mechanisms of circADAMTS6 in ESCC. The results showed that up-regulation of circADAMTS6 was associated with poor overall survival and may act as an independent risk factor for ESCC prognosis. Knockdown of circADAMTS6 significantly inhibited the proliferation, migration and invasion of ESCC cells and the growth of xenograft tumors in vivo. Induced AGR2 expression was able to rescue the loss of function caused by si-circADAMTS6 in KYSE150 cells. CircADAMTS6 may act as an oncogene by activating AGR2 and the Hippo signaling pathway coactivator YAP in ESCC.
Esophageal cancer is the sixth leading cause of cancer-related mortality worldwide and poses a serious threat to human health1. It has a high incidence in China, especially in Shanxi and Hebei Provinces. As the main pathological type of esophageal cancer in China, ESCC accounts for more than 90% of all esophageal cancers2,3,4. Despite rapid advances in treatment, including neoadjuvant chemotherapy and immunotherapy, the prognosis of ESCC patients remains poor, with a 5-year overall survival (OS) rate of less than 20%5. Consequently, there is a pressing need to investigate the pathogenesis of ESCC and to identify early screening indicators.
As a group of endogenous noncoding RNAs, circRNAs have a stable, covalently closed loop structure6,7, which makes them more resistant to RNA exonucleases than linear RNAs. In addition, the tissue-specific expression of circRNAs and their stable presence in saliva, plasma and other peripheral tissues make them potential prognostic markers8,9,10. Growing evidence suggests that circRNAs are abnormally expressed in various diseases, including ESCC, and may act as tumor suppressor genes or oncogenes during the occurrence and development of cancer11,12. Our previous studies showed that ciRS-7 was over-expressed in ESCC tissue and accelerated the proliferation, migration and invasion of ESCC cells by regulating the miR-7/KLF4 axis to activate the NF-κB p65 signaling pathway13. Cao et al. found that up-regulation of circRNA-100876 promoted ESCC cell invasion, migration and epithelial-mesenchymal transition (EMT), and was associated with poor prognosis14. Similarly, circUBAP2 acts as an oncogene by regulating the miR-422a/Rab10 axis and may be a predictive marker for the prognosis of ESCC15. These findings reveal the vital role of circRNAs in ESCC. However, many valuable circRNAs related to ESCC remain to be identified and characterized.
Anterior Gradient Homolog 2 (AGR2) is a member of the protein disulfide isomerase (PDI) family and is overexpressed in ESCC, lung cancer, breast cancer and other cancers16. Studies have demonstrated that AGR2 promotes tumor growth by inducing dephosphorylation of Yes-associated protein (YAP) in lung adenocarcinoma17. In addition, AGR2 overexpression promoted cell proliferation and migration and inhibited TNF-induced intestinal epithelial barrier damage by activating YAP18. However, little is known about the relationship between AGR2 and circRNAs in ESCC.
In our research, hsa_circ_0072688 (also called circADAMTS6, derived from the ADAMTS6 gene) was identified and found to be markedly expressed in ESCC tissues and cells. qRT-PCR results showed that knockdown of circADAMTS6 significantly reduced the proliferation, migration and invasion of KYSE150 and KYSE30 cells. Based on recent investigations indicating the contribution of AGR2 to cancer progression, we aimed to confirm the regulatory effect of circADAMTS6 on AGR2 expression. Mechanistically, circADAMTS6 positively regulates the expression of AGR2 to accelerate the proliferation and invasion of KYSE150 cells by activating the Hippo signaling pathway co-activator YAP. In conclusion, our study demonstrated that circADAMTS6 may act as an oncogene and serve as a tumor marker to promote early diagnosis and treatment of ESCC.
The features of circADAMTS6
CircADAMTS6 is located at chr5:6,474,730–6,476,977 and originates from exons 2–7 of the ADAMTS6 gene by back-splicing (Fig. 1A). The back-splice junction site of circADAMTS6 was amplified using the divergent primers and verified by Sanger sequencing. In addition, divergent and convergent primers were designed to amplify the circular and linear transcripts of ADAMTS6 in cDNA and gDNA, respectively. GAPDH served as a linear RNA control. As expected, the PCR results showed that the circular form was amplified by divergent primers only in cDNA and not in gDNA, while the linear form was amplified by convergent primers in both cDNA and gDNA (Fig. 1B). Furthermore, we employed a highly processive 3' to 5' exoribonuclease (RNase R enzyme)19 to explore the characteristics of circADAMTS6 in ESCC cells and tissues. The results showed that circADAMTS6 was resistant to RNase R treatment, unlike the linear control gene GAPDH (Fig. 1C), indicating that the ADAMTS6 gene produces a circular RNA form that is independent of ADAMTS6 mRNA.
The characteristics of circADAMTS6 in ESCC cells. (A) circADAMTS6 is formed by back-splicing of exons 2–7 of the ADAMTS6 gene. (B) The existence of circADAMTS6 in ESCC cells and tissue was confirmed. CircADAMTS6 was amplified by divergent primers only in cDNA, not in gDNA. GAPDH served as a linear RNA control. Divergent and convergent primers are indicated by arrowheads pointing in opposite or the same directions, respectively. (C) RT-PCR was used to assess the expression levels of circADAMTS6 and the linear mRNA upon RNase R treatment in ESCC cells and tissues.
Knockdown of circADAMTS6 exhibits an anticancer effect on ESCC in vitro and in vivo
Initially, an siRNA that specifically silences the back-splicing region of circADAMTS6 was constructed to assess the potential biological functions of circADAMTS6 in ESCC. CircADAMTS6 expression levels (Fig. 2A) were dramatically reduced after transfection with circADAMTS6 siRNA (si-1, si-2 and si-3) compared with negative control siRNA (si-NC). We selected si-2 to explore the biological functions of circADAMTS6 in KYSE150 and KYSE30 cells. CCK-8, colony formation, wound-healing and transwell assays were performed to evaluate the proliferation, migration and invasion of ESCC cells in vitro, and a nude mouse xenograft model was used to assess tumor formation in vivo. CircADAMTS6 knockdown remarkably restrained the proliferation of ESCC cells in the CCK-8 and colony formation assays (Fig. 2B,C) and in the xenograft nude mouse model (Fig. 2D). Furthermore, the migration and invasion capacities of ESCC cells were significantly reduced after transfection with si-2, as shown by the wound-healing and transwell assays (Fig. 2E,F). Together, these observations support that circADAMTS6 may be a critical factor in the progression of ESCC.
Knockdown of circADAMTS6 inhibits the proliferation, migration and invasion of ESCC cells and tumor formation in vivo. (A) Relative expression level of circADAMTS6 after transfection with si-NC and si-circADAMTS6 in KYSE150 cells. (B, C) Cell proliferation was assessed by CCK-8 and colony formation assays after down-regulation of circADAMTS6. (D) Representative photographs and tumor growth curves of the subcutaneous xenograft tumor model developed from KYSE150 cells with si-NC and si-circADAMTS6 (n = 6). The tumor volumes were calculated according to the formula (L × W²)/2. (E, F) Knockdown of circADAMTS6 repressed migration and invasion of ESCC cells. (*P < 0.05, **P < 0.01, ***P < 0.001).
CircADAMTS6 is remarkably upregulated in ESCC tissues
To analyze circADAMTS6 expression in ESCC tissues, fluorescence in situ hybridization (FISH) was performed using the circADAMTS6 probe. Tissues from 114 ESCC patients with clinicopathological and follow-up data were obtained on tissue microarrays (TMAs). CircADAMTS6-positive cells were stained red and the nuclei were stained blue with DAPI. Representative images of H&E and FISH staining of circADAMTS6 expression in ESCC tissues and adjacent normal tissues are displayed (Fig. 3). The results indicated that circADAMTS6 was notably overexpressed in most ESCC tissues (n = 114; 81/114), in contrast to adjacent normal tissues (n = 66; 16/66).
Different expression levels of circADAMTS6 in ESCC tissues and adjacent normal tissues. Compared with adjacent normal tissues, increased expression of circADAMTS6 was observed in ESCC tissues in our TMA-FISH results.
CircADAMTS6 is related to clinicopathological characteristics and OS in ESCC patients
The clinicopathological characteristics of the ESCC patients in the tissue microarrays are displayed in Supplementary Table 1. The positive expression rates of circADAMTS6 in ESCC tissues and adjacent normal tissues were 71.1% (81/114) and 24.2% (16/66), respectively, indicating that circADAMTS6 was more frequently expressed in ESCC tissues. The association between circADAMTS6 levels and the clinicopathological characteristics of ESCC was investigated (Table 1). High circADAMTS6 expression was closely related to advanced clinical stage, poor pathological grade, large tumor size and lymph node metastasis (P < 0.05), but not correlated with gender or age (P > 0.05). Multivariate analysis indicated that patients with higher circADAMTS6 expression had shorter overall survival than those with lower circADAMTS6 expression, suggesting that circADAMTS6 is an independent prognostic factor in ESCC (Fig. 4A). These results provide a theoretical basis for promoting circADAMTS6 as a prognostic marker of ESCC.
Table 1 Univariate and multivariable analyses of prognostic factors in ESCC for overall survival.
(A) Kaplan–Meier survival analysis indicated that high circADAMTS6 expression (P < 0.001) was significantly associated with shorter overall survival in ESCC cases. (B) CircADAMTS6 and AGR2 expression were measured by qRT-PCR after transfection of si-circADAMTS6 and co-transfection of AGR2 and circADAMTS6. (C) Western blot analysis revealed that AGR2 and YAP proteins were significantly decreased after knockdown of circADAMTS6, while pYAP protein was significantly increased. (D–G) Rescue assays showed that transfection of AGR2 reversed the suppression of proliferation (D, E), migration and invasion (F, G) caused by circADAMTS6 silencing in KYSE150 cells. (H) Western blot validated that the effects on pYAP, YAP and AGR2 proteins caused by si-circADAMTS6 could be rescued by transfection of AGR2.
CircADAMTS6 promotes the progression of ESCC by regulating AGR2 and activating the expression level of the Hippo signaling pathway co-activator YAP
To explore whether AGR2 is regulated by circADAMTS6, the expression of circADAMTS6 and AGR2 was detected after transfection of si-circADAMTS6 in KYSE150 cells. qRT-PCR results showed that, after transfection of si-circADAMTS6, the expression of AGR2 mRNA was markedly lower than that in the control group (P < 0.05) (Fig. 4B). Consistently, Western blot analysis revealed that AGR2 and YAP proteins were significantly decreased after knockdown of circADAMTS6 (P < 0.05), while phosphorylated Yes-associated protein (pYAP) was significantly increased (P < 0.05) (Fig. 4C).
We further explored the relationship between AGR2 and circADAMTS6 by testing whether AGR2 overexpression could rescue the effects of circADAMTS6 knockdown in KYSE150 cells. Transfection of the AGR2 overexpression plasmid significantly increased the expression level of AGR2 in KYSE150 cells (Fig. S5). In rescue experiments, AGR2 overexpression recovered the effects of circADAMTS6 knockdown on the proliferation (Fig. 4D,E), migration and invasion (Fig. 4F,G) of KYSE150 cells. In addition, we measured the protein levels of YAP, pYAP and AGR2 by Western blotting (Fig. 4H). The results indicated that circADAMTS6 knockdown significantly decreased YAP and AGR2 levels and increased the pYAP level. As expected, co-transfection of circADAMTS6 and AGR2 neutralized the effects of circADAMTS6 knockdown on YAP, pYAP and AGR2 expression.
These results demonstrate that circADAMTS6 positively regulates AGR2 expression to promote the progression of ESCC by activating the Hippo signaling pathway coactivator YAP.
Increasing evidence indicates that circRNAs play vital roles in various physiological and pathological processes, suggesting that they may serve as potential diagnostic and predictive biomarkers in numerous diseases, including ESCC. CircRNAs interact with tumor-related miRNAs, proteases or signaling pathways and exert important regulatory effects on the occurrence and development of multiple tumors20,21,22. Traditional tumor markers are not sufficiently sensitive and effective for early ESCC detection; most ESCC patients are diagnosed at an advanced stage, and the overall 5-year survival rate is relatively poor23,24,25. Although the characteristics and functions of many circRNAs have been studied in depth in recent years, the mechanism of circADAMTS6 in ESCC metastasis and prognosis remains unknown. Our research aims to investigate the potential role of the novel circADAMTS6 in the development of ESCC and elucidate its underlying mechanism.
As a novel circRNA, circADAMTS6 was first reported to inhibit apoptosis in human chondrocytes by sponging miR-431-5p26. Subsequently, circADAMTS6 was shown to participate in IL-1β-induced human chondrocyte dysfunction through competition with miR-324-5p and the PI3K/AKT/mTOR signaling pathway27. In this study, we identified for the first time that circADAMTS6 is notably highly expressed in ESCC cells and tissues. In terms of its function and mechanism, circADAMTS6 promoted the migration, proliferation and invasion of ESCC cells through the AGR2-YAP axis. Furthermore, high expression of circADAMTS6 was closely associated with pathological grade, tumor size, lymph node metastasis and poor prognosis. Correspondingly, silencing circADAMTS6 suppressed tumor growth in our xenograft mouse models. These results demonstrate the vital role of circADAMTS6, which may provide a potential therapeutic target for ESCC. As a widely used prognostic marker, the TNM classification is also the most universal tumor staging system in the world28,29. We therefore speculate that combining the TNM classification with molecular markers may predict the prognosis of ESCC patients more accurately, which is consistent with the findings of other researchers30.
As an evolutionarily conserved pathway, Hippo/YAP signal transduction plays a vital role in controlling organ size by regulating cell proliferation, apoptosis and metastasis in various diseases. Non-phosphorylated YAP mediates transcriptional functions in cells, while phosphorylated YAP is retained in the cytoplasm and cannot perform transcriptional functions. Overexpression and nuclear localization of YAP have been detected in ESCC, lung cancer and breast cancer, and are associated with a poor prognosis. AGR2 is an oncogene involved in cell proliferation, invasion and tumour progression via microRNAs, circRNAs and several pathways. It targets and regulates YAP and amphiregulin (AREG) to promote tumor growth in lung adenocarcinoma31. Recently, AGR2 was described to promote tumor metastasis via activation of the mTOR/AKT signaling pathway32. Notably, high AGR2 expression levels were found in esophageal squamous tissue and ESCC cell lines and were correlated with a worse prognosis in ESCC patients33. These findings point the way for further study of the mechanisms of circRNAs in ESCC.
In conclusion, circADAMTS6 may act as an oncogene and serve as an independent molecular marker to promote early diagnosis and treatment of ESCC. However, this article does not include a circADAMTS6 overexpression experiment, because the efficiency of circADAMTS6 formation is too low and the amplification fold of circADAMTS6 was much lower than that of the linear transcript. More work remains to be done to elucidate other mechanisms.
The Ethics Committee of the Fourth Affiliated Hospital, Hebei Medical University approved the study protocol and waived the requirement for informed consent (approval number: YB M-05–01). Our research was conducted in accordance with the Declaration of Helsinki. All animal experiments were approved by the Animal Care and Use Committee of the Fourth Hospital of Hebei Medical University and were carried out in accordance with the guidelines. The study is reported in accordance with ARRIVE guidelines (www.arriveguidelines.org).
Patients and clinical tissue samples
The ESCC tissue microarrays (TMAs: No. EsoS180Su08-XT18-031) were purchased from Shanghai Outdo Biotech Co., Ltd. (Shanghai, China), comprising 114 cancer tissues and 66 adjacent normal tissues. None of the patients had received any treatment before surgery or had been diagnosed with other cancers. All patients underwent surgery based on their clinical examinations combined with pathological diagnosis.
According to the hospital's standard follow-up system, the patients were followed up and evaluated every 6 months after convalescence. The deadline for follow-up evaluations was July 31, 2015. All participants completed 0–107 months of follow-up (average: 31.70 months). Survival time was defined as the time from the date of operation to the date of death or the follow-up deadline.
The clinical features of the 114 patients, such as age, gender, histological tumor type, pathological grade, primary tumor size and lymph node metastasis, were obtained from their medical records. Metastasis status was assessed from the pathologic stage of the disease according to the seventh edition of the American Joint Committee on Cancer (AJCC) staging system.
Cell cultures and transfection
Esophageal cancer cell lines (KYSE150, KYSE30, KYSE170 and TE1) were provided by the Scientific Research Center of the Fourth Hospital of Hebei Medical University (Hebei, China), and cultured in RPMI1640 (GIBCO, USA) supplemented with 10% fetal bovine serum (GIBCO, USA), 100 U/ml penicillin and 100 µg/ml streptomycin at 37 °C in humidified air containing 5% carbon dioxide.
KYSE150 and KYSE30 cells were grown in six-well plates to 70–80% confluence and transfected with circADAMTS6 siRNA (Geneseed Biotech Co., Guangzhou, China) (CTAGGTTAAAAAATGGCCA) or si-NC (negative control siRNA) using Lipofectamine 2000 (Invitrogen, USA). The expression of circADAMTS6 was measured by real-time RT-PCR 48 h after transfection.
DNA and RNA extraction, and RNase R treatment
Genomic DNA was extracted from ESCC cells using a simplified proteinase K (Merck, Germany) digestion method. Total RNA was extracted from cells after transfection using TRIzol Reagent (Invitrogen, USA) according to the manufacturer's instructions. RNase R treatment was performed with RNase R (3 μ/mg) for 15 min at 37 °C.
Nucleic acid electrophoresis and quantitative real-time PCR
cDNA served as the template for nucleic acid electrophoresis and quantitative real-time RT-PCR. All primer sequences used in our experiments are listed in Supplementary Table 3. The small nuclear RNA U6 and glyceraldehyde-3-phosphate dehydrogenase (GAPDH) were used as internal controls for circular RNA and linear RNA, respectively. According to the manufacturer's instructions, cDNA was synthesized from total RNA using the GoScript™ Reverse Transcription System (Promega, USA). Quantitative real-time PCR was performed with GoTaq® qPCR Master Mix (Promega, USA) on an ABI QuantStudio™ 6 Flex instrument. PCR products were separated on 2% agarose gels and imaged by UV irradiation (Fig. 1B). The 2^(−ΔΔCt) method was applied to compute the fold changes in target gene expression.
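As a worked illustration of the 2^(−ΔΔCt) calculation mentioned above, the short Python sketch below computes a fold change from cycle-threshold (Ct) values; the numbers are invented purely for illustration and are not data from this study.

def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    # Normalize the target gene to the reference gene (e.g. GAPDH) in each group,
    # then compare the treated group with the control group (Livak 2^-ddCt method).
    delta_ct_treated = ct_target_treated - ct_ref_treated
    delta_ct_control = ct_target_control - ct_ref_control
    delta_delta_ct = delta_ct_treated - delta_ct_control
    return 2 ** (-delta_delta_ct)

# Hypothetical Ct values for circADAMTS6 (target) and GAPDH (reference)
# in si-circADAMTS6-transfected vs si-NC cells:
print(fold_change(28.0, 18.0, 26.0, 18.0))  # 0.25, i.e. ~4-fold knockdown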
RNA fluorescence in situ hybridization (FISH)
RNA FISH was applied to explore circADAMTS6 expression in 114 ESCC tissues and adjacent normal tissues. Cy5-labelled circADAMTS6 probe (5' biotin -ATGGCCATTTTTTAACCTAGTA- 3' biotin) was designed and synthesized directly by Guangzhou Geneseed Biotech Co., Ltd. The FISH experiment was performed according to the manufacturer's instructions. Images were obtained by using a fluorescence microscope (Carl Zeiss, Oberkochen, Germany) at room temperature.
Wound-healing assay
Cells were diluted to a density of 3 × 10⁵ cells per well, seeded in 6-well plates and incubated for 48 h. After overnight incubation, the cell monolayer was scratched with a 20 μL pipette tip and washed 3 times with PBS. Photos of the wound were taken under the microscope at 0 and 24 h after scratching.
Cell proliferation assay
The proliferation of the KYSE150 and KYSE30 cell lines was evaluated using the CCK-8 assay. Cells (5 × 10³ cells per well) in the logarithmic growth phase were seeded in 96-well plates and incubated at 37 °C. After 0 h, 24 h, 48 h, 72 h and 96 h, 10 μl of CCK-8 reagent (Dojindo, Japan) was added to each well and incubated for 1 h. Finally, the absorbance of each well was measured at 450 nm.
Colony formation assay
Cells were diluted to a density of 1 × 10³ cells per well, seeded into 6-well plates and incubated at 37 °C with 5% CO2. After 7 days, cell colonies were stained with Giemsa and observed under the microscope.
Transwell migration and matrigel invasion assay
The migration and invasion capacities of ESCC cells were assessed using transwell chambers precoated with or without Matrigel. Briefly, 5 × 10⁴ cells were placed in the upper chamber in serum-free RPMI1640 medium, whereas the lower chamber contained medium with 10% FBS. Following overnight incubation, the migrated and invaded cells were stained with Giemsa dye. The numbers of stained cells were counted under a microscope (Leica, Germany) in five randomly selected fields.
Animal in vivo assays
Four-week-old female BALB/c nude mice were purchased from SPF Biotechnology Co., Ltd (Beijing, China) and randomly divided into two groups (6 mice per group). To evaluate the function of circADAMTS6 in vivo, a xenograft nude mouse model was established by subcutaneously injecting KYSE150 cells (5 × 10⁶) into the right flank of each mouse. After tumor formation, 5 nmol si-RNA or si-NC was injected intratumorally into the two groups twice per week for three weeks. Tumor size was monitored weekly and volume was calculated according to the formula:
$$\text{volume} = (\text{Width}^{2} \times \text{Length})/2.$$
Six weeks later, all mice were sacrificed and tumor were harvested.
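The volume formula above is simple enough to check by hand; the following tiny Python helper (with made-up measurements) applies it.

def tumor_volume(length_mm, width_mm):
    # volume = (width^2 x length) / 2, as in the formula above
    return (width_mm ** 2 * length_mm) / 2.0

print(tumor_volume(length_mm=12.0, width_mm=8.0))  # 384.0 (mm^3)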
Calculations in this study were carried out using SPSS Statistics 22.0 software (SPSS Inc., Chicago, IL, USA). The chi-square test was employed to assess the association between circADAMTS6 expression levels and the clinicopathological features of ESCC patients. Survival outcomes were analyzed with the Kaplan–Meier method and the log-rank test, and potential prognostic factors for overall survival were evaluated with the Cox proportional hazards regression model. The results are expressed as the mean ± S.D. and were analyzed using Student's t-test. All statistical tests were two-sided, and P < 0.05 was considered statistically significant.
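The survival analyses described here were run in SPSS; purely as an illustration, a roughly equivalent workflow could be scripted with the Python lifelines package as sketched below. The column names and toy data are hypothetical and do not come from the study.

import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Toy follow-up data: months of follow-up, death indicator (1 = event observed),
# circADAMTS6 expression group and clinical stage (all values invented).
df = pd.DataFrame({
    "months":    [12, 31, 45, 8, 60, 27, 15, 50],
    "death":     [1, 1, 0, 1, 0, 1, 0, 1],
    "circ_high": [1, 1, 0, 1, 0, 0, 1, 0],
    "stage":     [3, 2, 1, 3, 1, 2, 3, 1],
})
high, low = df[df.circ_high == 1], df[df.circ_high == 0]

# Kaplan-Meier estimate for the high-expression group
kmf = KaplanMeierFitter()
kmf.fit(high["months"], event_observed=high["death"], label="circADAMTS6 high")

# Log-rank test between expression groups
result = logrank_test(high["months"], low["months"],
                      event_observed_A=high["death"], event_observed_B=low["death"])
print(result.p_value)

# Cox proportional hazards model for multivariable analysis
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="death")
print(cph.summary[["exp(coef)", "p"]])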
All data, models, and code generated or used during the study appear in the submitted article.
Sung, H. et al. Global cancer statistics 2020: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J. Clin. 71, 209–249 (2021).
Cohen, D. J. & Ajani, J. An expert opinion on esophageal cancer therapy. Expert Opin. Pharmacother. 12, 225–239 (2011).
Bi, Y. et al. EP300 as an oncogene correlates with poor prognosis in esophageal squamous carcinoma. J. Cancer 10, 5413–5426 (2019).
Visaggi, P. et al. Modern diagnosis of early esophageal cancer: From blood biomarkers to advanced endoscopy and artificial intelligence. Cancers 13, 3162 (2021).
Osugi, H., Narumiya, K. & Kudou, K. Supracarinal dissection of the oesophagus and lymphadenectomy by MIE. J. Thorac. Dis. 9, S741–S750 (2017).
Wan, B., Hu, H., Wang, R., Liu, W. & Chen, D. Therapeutic potential of circular RNAs in osteosarcoma. Front. Oncol. 10, 370 (2020).
Jia, E. et al. Transcriptomic profiling of circular RNA in different brain regions of Parkinson's disease in a mouse model. Int. J. Mol. Sci. 21, 3006 (2020).
Li, Y. et al. Circular RNA is enriched and stable in exosomes: A promising biomarker for cancer diagnosis. Cell Res. 25, 981–984 (2015).
Bahn, J. H. et al. The landscape of microRNA, Piwi-interacting RNA, and circular RNA in human saliva. Clin. Chem. 61, 221–230 (2015).
Meng, S. et al. CircRNA: Functions and properties of a novel potential biomarker for cancer. Mol. Cancer 16, 94 (2017).
Zhang, H. D. et al. Circular RNA hsa_circ_0052112 promotes cell migration and invasion by acting as sponge for miR-125a-5p in breast cancer. Biomed. Pharmacother. 107, 1342–1353 (2018).
Yu, C. et al. Circular RNA cMras inhibits lung adenocarcinoma progression via modulating miR-567/PTPRG regulatory pathway. Cell Prolif. 52, e12610 (2019).
Sang, M. et al. Circular RNA ciRS-7 accelerates ESCC progression through acting as a miR-876-5p sponge to enhance MAGE-A family expression. Cancer Lett. 426, 37–46 (2018).
Cao, S., Chen, G., Yan, L., Li, L. & Huang, X. Contribution of dysregulated circRNA_100876 to proliferation and metastasis of esophageal squamous cell carcinoma. OncoTargets Ther. 11, 7385–7394 (2018).
Wu, Y., Zhi, L., Zhao, Y., Yang, L. & Cai, F. Knockdown of circular RNA UBAP2 inhibits the malignant behaviours of esophageal squamous cell carcinoma by microRNA-422a/Rab10 axis. Clin. Exp. Pharmacol. Physiol. 47, 1283–1290 (2020).
Dong, A., Gupta, A., Pai, R. K., Tun, M. & Lowe, A. W. The human adenocarcinoma-associated gene, AGR2, induces expression of amphiregulin through Hippo pathway co-activator YAP1 activation. J. Biol. Chem. 286, 18301–18310 (2011).
Sommerova, L. et al. ZEB1/miR-200c/AGR2: A new regulatory loop modulating the epithelial-mesenchymal transition in lung adenocarcinomas. Cancers (Basel) 12, 1614 (2020).
Ye, X., Wu, J., Li, J. & Wang, H. Anterior gradient protein 2 promotes mucosal repair in pediatric ulcerative colitis. Biomed. Res. Int. 2021, 6483860 (2021).
Vincent, H. A. & Deutscher, M. P. Insights into how RNase R degrades structured RNA: Analysis of the nuclease domain. J. Mol. Biol. 387, 570–583 (2009).
Hansen, T. B. et al. Natural RNA circles function as efficient microRNA sponges. Nature 495, 384–388 (2013).
Piwecka, M. et al. Loss of a mammalian circular RNA locus causes miRNA deregulation and affects brain function. Science 357, 6357 (2017).
Conn, S. J. et al. The RNA binding protein quaking regulates formation of circRNAs. Cell 160, 1125–1134 (2015).
Hu, X. et al. circGSK3β promotes metastasis in esophageal squamous cell carcinoma by augmenting β-catenin signaling. Mol. Cancer 18, 160 (2019).
Jia, W. et al. Coexpression of periostin and EGFR in patients with esophageal squamous cell carcinoma and their prognostic significance. OncoTargets Ther. 9, 5133–5142 (2016).
Zhang, H., Li, H., Ma, Q., Yang, F. Y. & Diao, T. Y. Predicting malignant transformation of esophageal squamous cell lesions by combined biomarkers in an endoscopic screening program. World J. Gastroenterol. 22, 8770–8778 (2016).
Fu, Q. et al. CircADAMTS6/miR-431-5p axis regulate interleukin-1β induced chondrocyte apoptosis. J. Gene Med. 23, e3304 (2021).
Shen, L. et al. Regulation of circADAMTS6-miR-324-5p-PIK3R3 ceRNA pathway may be a novel mechanism of IL-1β-induced osteoarthritic chondrocytes. J. Bone Miner. Metab. 40, 389–401 (2022).
Sobin, L. H. & Fleming, I. D. TNM classification of malignant tumors, fifth edition, Union Internationale Contre le Cancer and the American Joint Committee on Cancer. Cancer 80, 1803–1804 (1997).
Sobin, L. H., Hermanek, P. & Hutter, R. V. TNM classification of malignant tumors. A comparison between the new (1987) and the old editions. Cancer 61, 2310–2314 (1988).
Takeno, S. et al. Assessment of clinical outcome in patients with esophageal squamous cell carcinoma using TNM classification score and molecular biological classification. Ann. Surg. Oncol. 14, 1431–1438 (2007).
Maarouf, A. et al. Anterior gradient protein 2 is a marker of tumor aggressiveness in breast cancer and favors chemotherapy-induced senescence escape. Int. J. Oncol. 60, 5 (2022).
Takabatake, K. et al. Anterior gradient 2 regulates cancer progression in TP53-wild-type esophageal squamous cell carcinoma. Oncol. Rep. 46, 1–11 (2021).
The authors declare that no funds, grants, or other support were received during the preparation of this manuscript. The authors have no relevant financial or non-financial interests to disclose.
These authors contributed equally: Jing Bu, Lina Gu, and Xin Liu.
Department of Research Center, The Fourth Hospital of Hebei Medical University, Shijiazhuang, Hebei, People's Republic of China
Jing Bu, Lina Gu, Xin Liu, Xixi Nan, Xiangmei Zhang, Lingjiao Meng, Yang Zheng, Fei Liu, Jiali Li, Ziyi Li, Meixiang Sang & Baoen Shan
Tumor Research Institute, The Fourth Hospital of Hebei Medical University, 050017, Shijiazhuang, Hebei, People's Republic of China
Meixiang Sang & Baoen Shan
All authors contributed to the study conception and design. Material preparation, data collection and analysis were performed by J.B., L.G. and X.L. The first draft of the manuscript was written by J.B. and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.
Correspondence to Meixiang Sang or Baoen Shan.
Supplementary Information.
Bu, J., Gu, L., Liu, X. et al. The circRNA circADAMTS6 promotes progression of ESCC and correlates with prognosis. Sci Rep 12, 13757 (2022). https://doi.org/10.1038/s41598-022-17450-2
Scientific Reports (Sci Rep) ISSN 2045-2322 (online) | CommonCrawl |
August 2019, 13(3): 457-475. doi: 10.3934/amc.2019029
A subspace code of size $ \bf{333} $ in the setting of a binary $ \bf{q} $-analog of the Fano plane
Daniel Heinlein 1, Michael Kiermaier 2, Sascha Kurz 2, and Alfred Wassermann 2
Department of Communications and Networking, Aalto University, FI-00076 Aalto, Finland
Mathematisches Institut, Universität Bayreuth, D-95440 Bayreuth, Germany
Received July 2018 Published April 2019
Fund Project: The work was supported by the ICT COST Action IC1104 and grants KU 2430/3-1, WA 1666/9-1 – "Integer Linear Programming Models for Subspace Codes and Finite Geometry" – from the German Research Foundation
We show that there is a binary subspace code of constant dimension 3 in ambient dimension 7, having minimum subspace distance 4 and cardinality 333, i.e., $ 333 \le A_2(7, 4;3) $, which improves the previous best known lower bound of 329. Moreover, if a code with these parameters has at least 333 elements, its automorphism group is in one of 31 conjugacy classes.
This is achieved by a more general technique for an exhaustive search in a finite group that does not depend on the enumeration of all subgroups.
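For a constant-dimension code with dimension 3 and minimum subspace distance 4, any two codewords may intersect in at most a 1-dimensional subspace, since d_S(U, V) = dim U + dim V − 2 dim(U ∩ V). As a rough illustration only (not code from the paper), the sketch below computes the subspace distance of two subspaces of GF(2)^7 given by binary generator matrices:

```python
import numpy as np

def rank_gf2(M):
    """Rank of a 0/1 matrix over GF(2) by Gaussian elimination."""
    M = np.array(M, dtype=np.uint8) % 2
    n_rows, n_cols = M.shape
    r = 0
    for c in range(n_cols):
        pivot = next((i for i in range(r, n_rows) if M[i, c]), None)
        if pivot is None:
            continue
        M[[r, pivot]] = M[[pivot, r]]       # move the pivot row into position
        for i in range(n_rows):
            if i != r and M[i, c]:
                M[i] ^= M[r]                # eliminate the column over GF(2)
        r += 1
        if r == n_rows:
            break
    return r

def subspace_distance(U, V):
    """d_S(U, V) = dim U + dim V - 2 dim(U ∩ V) = 2 dim(U + V) - dim U - dim V."""
    dim_u, dim_v = rank_gf2(U), rank_gf2(V)
    dim_sum = rank_gf2(np.vstack([U, V]))   # rank of stacked generators = dim(U + V)
    return 2 * dim_sum - dim_u - dim_v

# Two 3-dimensional subspaces of GF(2)^7, given by the rows of their generator matrices.
U = [[1,0,0,0,0,0,0], [0,1,0,0,0,0,0], [0,0,1,0,0,0,0]]
V = [[1,0,0,0,0,0,0], [0,0,0,1,0,0,0], [0,0,0,0,1,0,0]]
print(subspace_distance(U, V))  # dim(U ∩ V) = 1, so the distance is 3 + 3 - 2*1 = 4
```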
Keywords: Finite groups, finite projective spaces, constant dimension codes, subspace codes, subspace distance, combinatorics, computer search.
Mathematics Subject Classification: Primary: 51E20; Secondary: 05B07, 11T71, 94B25.
Citation: Daniel Heinlein, Michael Kiermaier, Sascha Kurz, Alfred Wassermann. A subspace code of size $ \bf{333} $ in the setting of a binary $ \bf{q} $-analog of the Fano plane. Advances in Mathematics of Communications, 2019, 13 (3) : 457-475. doi: 10.3934/amc.2019029
| CommonCrawl |
Brief history of Korean national forest inventory and academic usage
Park, Byung Bae;Han, Si Ho;Rahman, Afroja;Choi, Byeong Am;Im, Young Suk;Bang, Hong Seok;So, Soon Jin;Koo, Kyung Mo;Park, Dae Yeon;Kim, Se Bin;Shin, Man Yong 299
https://doi.org/10.7744/kjoas.20160032
The National Forest Inventory (NFI) is important for providing fundamental data for basic forest planning and for establishing forest policies aimed at implementing sustainable forest management. The purpose of this study is to present the development of Korea's NFI, including its legal basis, sampling design, and measured variables, and to review the usage of NFI data. The survey methods and forestry statistics of the United States, Canada, Japan, China, and European countries were briefly compared. A total of 140 publications utilizing NFI data between 2008 and 2015 were categorized into 15 subjects. Korea has conducted the NFI six times since 1971, but only the $6^{th}$ NFI is comparable with the fifth, the previous NFI, because the permanent sampling plots have been shared between the two periods. The Korean Forestry Statistics contains only half as many variables as those of countries with advanced forestry sectors. More research is needed to improve the consistent measurement of diverse variables through the implementation of advanced technologies. The additional Forest Health Monitoring data collected since the $6^{th}$ NFI must be subject to quality control, as they will be an essential part of the inventories for tracking chronological changes in forest health.
Salmonellosis in swine: Clinical perspectives
Shim, Minkyung;Hong, Sanghyun;Seok, Min-Jae;Kim, Hyeun Bum 320
Salmonella is one of the most important food-borne zoonotic pathogens, causing acute or chronic digestive diseases such as enteritis. The acute form of enteritis is common in young pigs of 2 - 4 months of age. The main symptoms include high fever ($41-42^{\circ}C$), loss of appetite, and increased mortality within 2 - 4 days of onset of the disease. It is often the cause of increasing mortality, decreasing growth rate and reducing feed efficiency of piglets. In the case of chronic enteritis in pigs, the main symptom is weight loss due to the continuing severe diarrhea. Salmonella enterica serovar Typhimurium and Salmonella enterica serovar Choleraesuis are typical pig adapted serotypes, which cause one of four major syndromes: enteric fever, enterocolitis/diarrhea, bacteremia and chronic asymptomatic carriage. These syndromes cause a huge economic burden to swine industry by reducing production. Therefore, it is necessary that swine industries should strive to decrease Salmonellosis in pigs in order to reduce economic losses. There are several measures, such as vaccination to prevent salmonellosis, that are implemented differently from country to country. For the treatment of Salmonella, ongoing antibiotic treatment is needed. However constant doses of antibiotics can be a problem because of antibiotic resistance. Therefore, the focus should be made more on prevention than treatment. In this review, we addressed the basic information about Salmonella, route of infection, clinical symptoms, and prevention of Salmonellosis.
Effect of different transplanting and harvest times on yield and quality of pigmented rice cultivars in the Yeongnam plain area
Kim, Sang-Yeol;Han, Sang-Ik;Oh, Seong-Hwan;Seo, Jong-Ho;Yi, Hwi-Jong;Hwang, Jung-Dong;Choi, Won-Yeong;Oh, Myung-Kyu 330
The effect of transplanting and harvest timing was evaluated for the production of high quality pigmented rice in the Yeongnam plain area. Rice was transplanted on June $2^{nd}$ and $14^{th}$ and harvested between 35 - 55 days after panicle heading at 5 - day intervals. Three black- and 3 red-pigmented rice cultivars (such as early cultivar : Josengheugchal, Jeogjinju; medium cultivar : Heugseol, Hongjinju; and mid-late cultivar : Sintoheugmi, Geongganghongmi) were studied. Yield components like spikelet number, ripened grain ratio, and 1,000 - grain weight of the black- and red-pigmented rice cultivars were similar for both the June 2 and June 14 transplantings but panicle number per $m^2$ was higher for the June 14 transplanting than for June 2. This contributed to a higher brown rice yield for the June 14 transplanting, by 6 - 19% for black-pigmented rice, and by 10 - 21% for red-pigmented rice than the yield for the June 2 transplanting. Total anthocyanin and polyphenol productions of the pigmented rice were also higher in the June 14 transplanting than that in the June 2 transplanting due to high brown rice yield. Based on the combined pigmented brown rice yield, we concluded that the optimal harvest timing would be 40 - 45 days after panicle heading (DAH) for the black-pigmented rice and 45 - 50 DAH for the red-pigmented rice. This study suggests that optimum transplanting and harvest timings play an important role for production of high quality pigmented rice in the Yeongnam plain area.
Effects of ethylene treatment on postharvest quality in kiwi fruit
Lim, Byung-Seon;Lee, Jin-Su;Park, Hee-Ju;Oh, Soh-Young;Chun, Jong-Pil 340
The kiwi fruit (Actinidia deliciosa cv. 'Hayward') should be ripened at any step during postharvest handling before consumer consumption. This is essential for freshly harvested kiwi fruit. But, this requires correct temperatures and ethylene concentrations. More testing of a newly developed ethylene generator using charcoal for commercial purposes is needed. This study was conducted to investigate the optimum storage temperatures and the effect of ethylene on the postharvest quality of kiwi fruit. Three different ethylene concentrations of 10, 50, and $100{\mu}L{\cdot}L^{-1}$ were used on fresh kiwi fruit stored at different temperatures of 10, 15, and $20^{\circ}C$. The quality changes of the fruits were assessed by sensory evaluation and by measuring firmness, soluble solids content, titratable acidity, and ethylene production. Higher storage temperatures and ethylene concentrations softened the kiwi fruit quickly and led to the rapid loss of acidity while soluble solid contents of fruit increased to a significant extent during the same storage period. Similarly, the firmness of ethylene-treated fruits stored at 20 and $15^{\circ}C$ dramatically decreased in the experiment while treated fruits stored at $10^{\circ}C$ decreased only slightly. Quality characteristics of kiwi fruits stored at 15 and $20^{\circ}C$ were better than those of fruits at $10^{\circ}C$. With regards to the effect of temperature, fruits stored at lower temperatures took a longer time to ripen and retained their quality longer. The newly developed ethylene generator maintained the ethylene concentration in the 5 kg box at $40-400{\mu}L{\cdot}L^{-1}$. The ethylene generator could also be used to soften persimmons.
Influence of plant surface spray adhesion of dinotefuran and thiodicarb on control of apple leafminer
Kim, Young-Shin;Kim, Kwang-Soo;Jin, Na-Young;Yu, Yong-Man;Youn, Young-Nam;Lim, Chi-Hwan 346
This study was conducted to obtain the correlation between the plant surface spray adhesion amount of pesticides and the pest control effect. The linearity of the standard curves of dinotefuran and thiodicarb was $R^2=0.9999$, and recovery was between 70% to 120% which was satisfactory for insecticide residue analyses. The pest control effect was evaluated by assessing the number of apple leafminers (Phyllonorycter ringoniella, Gracillariidae, Lepidoptera) captured by sex pheromone traps from late June to late September in 2015. For the adhesion amount, dinotefuran recovered from trap A and B, respectively were $47{\mu}g/50cm^2$ and $23{\mu}g/50cm^2$, which can be characterized as a very low adhesion amount in comparison to the average adhesion amount of $81{\mu}g/50cm^2$ in the field. In case of thiodicarb, $691{\mu}g/50cm^2$ and $71{\mu}g/50cm^2$ were recovered from trap A and B, respectively, and the average amount in the field is $325{\mu}g/50cm^2$. These results showed close correlation with the insect population captured by trap A and B. The numbers of insects captured by trap A and B between the end of July and late August were similar. After spraying thiodicarb on August 28, the number of apple leafminers captured by trap B is bigger than that of trap A. It appears that pest occurrence tended to be high at low adhesion amounts of the active ingredient. Therefore, in order to obtain an optimal control effect, it is suggested that uniform application of insecticides is critical instead of relying on the amount of insecticide applied in the field.
Statistically estimated storage potential of organic carbon by its association with clay content for Korean upland subsoil
Han, Kyung-Hwa;Zhang, Yong-Seon;Jung, Kang-Ho;Cho, Hee-Rae;Seo, Mi-Jin;Sonn, Yeon-Kyu 353
Soil organic carbon (SOC) retention has gradually gotten attention due to the need for mitigation of increased atmospheric carbon dioxide and the simultaneous increase in crop productivity. We estimated the statistical maximum value of soil organic carbon (SOC) fixed by clay content using the Korean detailed soil map database. Clay content is a major factor determining SOC of subsoil because it influences the vertical mobility and adsorption capacity of dissolved organic matter. We selected 1,912 soil data of B and C horizons from 13 soil series, Sangju, Jigog, Jungdong, Bonryang, Anryong, Banho, Baegsan, Daegog, Yeongog, Bugog, Weongog, Gopyeong, and Bancheon, mainly distributed in Korean upland. The ranges of SOC and clay content were $0-40g\;kg^{-1}$ and 0 - 60%, respectively. Soils having more than 25% clay content had much lower SOC in subsoil than topsoil, probably due to low vertical mobility of dissolved organic carbon. The statistical analysis of SOC storage potential of upland subsoil, performed using 90%, 95%, and 99% maximum values in cumulative SOC frequency distribution in a range of clay content, revealed that these results could be applicable to soils with 1% - 25% of clay content. The 90% SOC maximum values, closest to the inflection point, at 5%, 10%, 15%, and 25% of clay contents were $7g\;kg^{-1}$, $10g\;kg^{-1}$, $12g\;kg^{-1}$, and $13g\;kg^{-1}$, respectively. We expect that the statistical analysis of SOC maximum values for different clay contents could contribute to quantifying the soil carbon sink capacity of Korean upland soils.
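As a rough sketch of the quantile-by-clay-class idea described in this abstract (not the authors' own analysis code), the following Python/pandas snippet computes upper quantiles of SOC within clay-content classes; the column names and the synthetic demonstration data are assumptions:

```python
import numpy as np
import pandas as pd

def soc_upper_quantiles(df, bin_edges=(0, 5, 10, 15, 25), qs=(0.90, 0.95, 0.99)):
    """Upper quantiles of SOC within clay-content classes.
    df is assumed to have columns 'clay_pct' and 'soc_g_kg' for B/C-horizon samples."""
    labels = [f"{a}-{b}%" for a, b in zip(bin_edges[:-1], bin_edges[1:])]
    clay_class = pd.cut(df["clay_pct"], bins=list(bin_edges), labels=labels)
    return df.groupby(clay_class, observed=True)["soc_g_kg"].quantile(list(qs)).unstack()

# Synthetic demonstration data (the study used 1,912 samples; values here are random).
rng = np.random.default_rng(0)
demo = pd.DataFrame({"clay_pct": rng.uniform(1.0, 25.0, 500),
                     "soc_g_kg": rng.gamma(2.0, 3.0, 500)})
print(soc_upper_quantiles(demo))
```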
Establishment scheme for official standards of liquid swine manure fertilizer
Lee, Dong Sung;Lee, Jae-Bong;Lee, Myoung-Yun;Joo, Ri-Na;Lee, Kyo-Suk;Min, Se-Won;Hong, Byeong-Deok;Chung, Doug-Young 360
A more efficient use of nutrients can benefit both farmers and water quality. To propose an establishment scheme for official standards for liquid fertilizer from swine manure slurry, we evaluated previous and present data related to swine manure as well as analyzed 101 swine manure samples collected from 28 public livestock recycling centers throughout the nation. From these investigations, we found that the official standards for byproduct fertilizers set by the Rural Development Administration (RDA), especially for a liquid swine manure fertilizer, should be revised due to nutrient content requirements having to meet at least 0.3% content for the sum of nitrogen, phosphorus, and potassium. Otherwise, most of the swine manure cannot be utilized as a liquid fertilizer because the result of the 101 samples' analysis showed fewer than 28% of them met the minimum standard of ${\geq}0.3%$ content for the sum of nitrogen, phosphorus, and potassium, while the contents of heavy metals as indicators of toxicity met the standard requirements. Therefore, it is suggested that official standards for byproduct fertilizers set by RDA should be revised as follows: no limit for nutrient contents and addition of chloride as homogeneity. Also, NaCl should be changed to Na because NaCl cannot be analyzed by instrument.
Analysis of environmental effects affecting reproductive traits of primiparous and multiparous Hanwoo
Eum, Seung-Hoon;Park, Hu-Rak;Seo, Jakyeom;Cho, Seong-Keun;Kim, Byeong-Woo 369
Improving the reproductive traits of Hanwoo might decrease their production cost. Therefore, this study was conducted to investigate the effects of environmental factors [registration grade (basic, pedigree, or advanced), birth year, birth season, parity, delivery year, and delivery season] on various reproductive traits (age at 1st service, age at 1st conception, age at 1st calving, days at 1st service postpartum, non-pregnant period, calving interval, gestation length, and number of services per conception) in Hanwoo (12,219 primiparous and 10,471 multiparous head). All data were acquired from areas of Gyeongnam province surveyed from 2007 to 2015. All environmental factors significantly influenced (p < 0.01) the reproductive traits of primiparous cows, but not all environmental factors influenced those of multiparous cows. Primiparous cows registered as advanced grade showed significantly lower age at 1st service (by 15.36 days), age at 1st conception (by 8.66 days), and age at 1st calving (by 8.77 days) (p < 0.01) than those registered as basic grade. Age at 1st service, age at 1st conception, and age at 1st calving were not significantly related to birth year in primiparous cows. As delivery years advanced from 2005 to 2012, all durations associated with reproductive traits tended to become shorter. Days at 1st service postpartum, non-pregnant period, and calving interval tended to shorten as parity increased. Days at 1st service postpartum, days open, calving interval, and gestation length were shorter in multiparous cows that calved in winter than in those that calved in summer. Registration grade was not significantly associated with reproductive traits in multiparous Hanwoo.
Changes in ruminal fermentation and blood metabolism in steers fed low protein TMR with protein fraction-enriched feeds
Choi, Chang Weon 379
Four ruminally cannulated Holstein steers (BW $482.9{\pm}8.10kg$), fed low protein TMR (CP 11.7%) as a basal diet, were used to investigate changes in rumen fermentation and blood metabolism according to protein fraction, cornell net carbohydrates and protein system (CNCPS), and enriched feeds. The steers, arranged in a $4{\times}4$ Latin square design, consumed TMR only (control), TMR supplemented with rapeseed meal (AB1), soybean meal (B2), and perilla meal (B3C), respectively. The protein feeds were substituted for 23.0% of CP in TMR. Ruminal pH, ammonia-N, and volatile fatty acids (VFA) in rumen digesta, sampled through ruminal cannula at 1 h-interval after the morning feeding, were analyzed. For plasma metabolites analysis, blood was sampled via the jugular vein after the rumen digesta sampling. Different N fraction-enriched protein feeds did not affect (p > 0.05) mean ruminal pH except AB1 being numerically lower 1 - 3 h post-feeding than the other groups. Mean ammonia-N was statistically (p < 0.05) higher for AB1 than for the other groups, but VFA did not differ among the groups. Blood urea nitrogen was statistically (p < 0.05) higher for B2 than for the other groups, which was rather unclear due to relatively low ruminal ammonia-N. This indicates that additional studies on relationships between dietary N fractions and ruminant metabolism according to different levels of CP in a basal diet should be required.
Fermentative characteristics of wheat bran direct-fed microbes inoculated with starter culture
Kim, Jo Eun;Kim, Ki Hyun;Kim, Kwang-Sik;Kim, Young Hwa;Kim, Dong Woon;Park, Jun-Cheol;Kim, Sam-Chul;Seol, Kuk-Hwan 387
This study was conducted to determine the fermentative characteristics of wheat bran inoculated with a starter culture of direct-fed microbes as a microbial wheat bran (DMWB) feed additive. Wheat bran was prepared with 1% (w/w, 0.5% Lactobacillus plantarum and 0.5% of Saccharomyces cerevisiae) starter culture treatment (TW) or without starter culture as a control (CW). Those were fermented under anaerobic conditions at $30^{\circ}C$ incubation for 3 days. Samples were taken at 0, 1, 2, and 3 days to analyze chemical composition, microbial growth, pH, and organic acid content. Chemical composition was not significantly different between CW and TW (p > 0.05). In TW, the number of lactic acid bacteria and yeast increased during the 3 days of fermentation (p < 0.05) and the population of lactic acid bacteria was significantly higher than in CW (p < 0.05). After 3 days, the number of yeast in TW was $7.50{\pm}0.07log\;CFU/g$, however, no yeast was detected in CW (p < 0.05). The pH values of both wheat bran samples decreased during the 3 days of fermentation (p < 0.05), and TW showed significantly lower pH than CW after 3 days of fermentation (p < 0.05). Contents of lactic acid and acetic acid increased significantly at 3rd day of fermentation in TW. However, no organic acids were generated in CW during testing period. These results suggest that 3 days of fermentation at $37^{\circ}C$ incubation after the inoculation wheat bran with starter culture makes it possible to produce a direct-feed with a high population of lactic acid bacteria at more than $10^{11}CFU/g$.
Impact of environmental factors on milk β-hydroxybutyric acid and acetone levels in Holstein cattle associated with production traits
Ranaraja, Umanthi;Cho, Kwang Hyun;Park, Mi Na;Choi, Tae Jung;Kim, Si Dong;Lee, Jisu;Kim, Hyun Seong;Do, Chang Hee 394
The objective of this study was to estimate the environmental factors affecting milk ${\beta}$-hydroxybutyric acid (BHBA) and acetone (Ac) concentrations in Holstein cattle. A total of 264,221 test-day records collected from the Korea Animal Improvement Association (KAIA) during the period of 2012 to 2014 were used in this study. Analysis of variance (ANOVA) was performed to determine the factors significantly affecting ketone body concentrations. Parameters considered in the model were season of test, season of calving, parity, lactation stage, and milk collecting time (AM and PM). According to the ANOVA, the $R^2$ for milk BHBA and Ac were 0.5226 and 0.4961, respectively. 'Season of test' showed a considerable influence on ketone body concentration. Least square (LS) means for milk BHBA concentrations was the lowest ($39.04{\mu}M$) in winter while it increased up to $62.91{\mu}M$ in summer. But Ac concentration did not significantly change along with 'season of test'. The means of milk BHBA and Ac concentrations were high at first lactation stage, low around second lactation stage, and then gradually increased. Cows milked in the morning had lower mean BHBA and Ac concentrations ($48.49{\mu}M$ and $121.69{\mu}M$, respectively) in comparison to those milked in the evening ($53.46{\mu}M$ and $130.42{\mu}M$, respectively). The LS means of BHBA and Ac slightly increased over parities. These results suggest that proper maintenance of milk collection, herd management programs, and evaluation of ketone body levels in milk should be considered for the efficient management of resistance to ketosis.
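The fixed-effects analysis of variance described above was presumably fitted in a standard statistics package; purely as an illustrative sketch, the following Python/statsmodels code fits a comparable model. The data frame and its column names (bhba, test_season, calving_season, parity, stage, milking_time) are hypothetical stand-ins, not the KAIA variable names:

```python
import statsmodels.api as sm
from statsmodels.formula.api import ols

def ketone_anova(df):
    """Fixed-effects ANOVA for milk BHBA with the factors named in the abstract."""
    model = ols("bhba ~ C(test_season) + C(calving_season) + C(parity) + C(stage)"
                " + C(milking_time)", data=df).fit()
    anova_table = sm.stats.anova_lm(model, typ=2)
    return anova_table, model.rsquared   # rsquared plays the role of the reported R^2

# anova_table, r2 = ketone_anova(records)   # 'records' would hold the test-day data
```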
Synergistic effect of co-inoculation with phosphate-solubilizing bacteria
Park, Jin-Hee;Lee, Heon-Hak;Han, Chang-Hoon;Yoo, Jeoung-Ah;Yoon, Min-Ho 401
The synergistic effect on phosphate solubilization of single- and co-inoculation of two phosphate solubilizing bacteria, Burkholderia anthina PSB-15 and Enterobacter aerogenes PSB-16, was assessed in liquid medium and green gram plants. Co-inoculation of two strains was found to release the highest content of soluble phosphorus ($519{\mu}g\;mL^{-1}$) into the medium, followed by single inoculation of Burkholderia strain ($492{\mu}g\;mL^{-1}$) and Enterobacter strain ($483{\mu}g\;mL^{-1}$). However, there was no significant difference between single inoculation of bacterial strain and co-inoculation of two bacterial strains in terms of phosphorous release. The highest pH reduction, organic acid production, and glucose consumption were observed in the culture medium co-inoculated with PSB-15 and PSB-16 strains rather than that of single inoculation. Based on the plant growth promotion bioassay, co-inoculated mung bean seedlings recorded 9% and 8% higher shoot and root growth, respectively, compared to the control. Therefore, in conclusion, co-inoculation of the strains B. anthina and E. aerogenes displayed better performance in stimulating plant growth than inoculation of each strain alone. However, considering the short assessment period of the present study, we recommend engaging in further work under field conditions in order to test the suitability of these strains as bio-inoculants.
Antibacterial activity of supernatant obtained from Weissella koreensis and Lactobacillus sakei on the growth of pathogenic bacteria
Im, Hana;Moon, Joon-Kwan;Kim, Woan-Sub 415
This study was carried out to obtain basic data for the industrial use of Weissella koreensis and Lactobacillus sakei. The antibacterial activity of supernatants obtained from W. koreensis and L. sakei were tested against pathogenic bacteria such as Escherichia coli KCCM 11234, Salmonella enteritidis KCCM 3313, Salmonella enteritidis KCCM 12021, Salmonella typhimurium KCCM 40253, and Salmonella typhimurium KCCM 15. The supernatant of L. sakei showed antibacterial activity against E. coli KCCM 11234, S. enteritidis KCCM 12021, and S. typhimurium KCCM 15, while the supernatant of W. koreensis showed antibacterial activity against E. coli KCCM 11234 and S. enteritidis KCCM 12021. The effect of pH changes and heat treatment on antibacterial activity of the supernatants was examined using the sensitive pathogenic bacteria (E. coli KCCM 11234, S. enteritidis KCCM 12021 and S. typhimurium KCCM 15). Antibacterial activity against sensitive pathogenic bacteria was maintained under heat treatment at all temperatures, but there was no antibacterial activity associated with pH modification. Furthermore, it was confirmed that the antibacterial activity of the supernatants obtained from W. koreensis and L. sakei was a result of organic acids including, lactic, acetic, phosphoric, succinic, pyroglutamic, citric, malic, and formic acids. Therefore, the present study showed that the organic acids produced by L. sakei and W. koreensis exhibited a strong antibacterial activity against pathogenic bacteria. Moreover, in the food industry, these organic acids have the potential to inhibit the growth of pathogenic bacteria and improve the quality of stored food.
Chemical properties of liquid swine manure for fermentation step in public livestock recycling center
The nutrients in livestock manure produced during fermentation processes in public livestock recycling centers are used as fertilizers. However, the large amounts of swine manure produced in intensive livestock farms can be a nonpoint source of pollution. In this experiment, we investigated the chemical properties, inorganic components, and heavy metal contents in 101 samples of liquid swine manure collected from 28 public livestock recycling centers throughout the nation. Results showed that the average pH of the samples was alkaline (pH range 5.18 to 9.54), and their maximum EC was $53.2dS\;m^{-1}$. The amounts of total nitrogen and total phosphorus were in the range of 1000 - 2000 and $200-800mg\;L^{-1}$ while potassium, which constituted 47% of the total inorganic ions recovered from the liquid swine manure, amounted to $1500mg\;L^{-1}$. The most distinctive heavy metals recovered from the liquid swine manure were copper and zinc although the amounts of both heavy metals were much lesser than those of the standards as livestock liquid fertilizer set by the Rural Development Administration. On the other hand, the amount of nitrogen decreased rapidly with an increasing fermentation period from immature to mature, assumed to be lost as volatile compounds, such as ammonia, which are the major odor components during the fermentation process.
Analysis of mechanical properties of agricultural products for development of a multipurpose vegetable cutting machine
Park, Jeong Gil;Jung, Hyun Mo;Kang, Bum Seok;Mun, Seong Kyu;Lee, Seung Hun;Lee, Seung Hyun 432
The consumption of pre-treated vegetables (including fresh-cut vegetables) that are washed, peeled, and trimmed has been significantly increased because of their easy use for cooking. Vegetable cutting machines have been widely utilized for producing fresh-cut vegetables or agricultural products of different sizes; however, the design standard is not established for specific types of agricultural products. Therefore, this study was conducted to determine mechanical properties (compressive and shear forces) of targeted agricultural products (radish, carrot, squash, cucumber, shiitake mushroom, and sweet potato) for developing a multipurpose vegetable cutting machine. According to ASAE standard (s368.3), compressive and shear forces of targeted agricultural products were measured by using a custom built UTM (universal testing machine). Shape type of samples and speed ranges (5 - 15 mm/min) of loading rate on bioyield and shear points varied depending on the targeted agricultural product. The range of averaged bioyield points of targeted agricultural products were between 7.89 and 146.98 N. On the other hand, their averaged shear points ranged from 22.50 to 53.47 N. Results clearly showed that the bioyield and shear points of targeted agricultural products were thoroughly affected by their components. As measuring compressive and shear forces of a variety of agricultural products, it will be feasible to calculate blade cutting force for designing multipurpose vegetable cutting machine.
Analysis of the effects of operating point of tractor engine on fatigue life of PTO gear using simulation
Lee, Pa-Ul;Chung, Sun-Ok;Choi, Chang-Hyun;Park, Young-Jun;Kim, Yong-Joo 441
Agricultural tractors are designed using empirical methods due to the difficulty of measuring precise load cycles under various working conditions and soil types. In particular, the power take-off (PTO) gear directly drives various tractor implements. Therefore, alternative design methods using gear design software are needed for the optimal design of tractors. The objective of this study is to simulate the fatigue life of the PTO gear according to the operating point of the tractor engine. The PTO gear was made of SCr415 alloy steel with carburizing and quenching treatments. The fatigue life of the PTO gear was simulated using the bending and contact stresses corresponding to the torque at each load level. The PTO gear simulation was conducted with the commercial gear analysis software KISSsoft. Bending and contact stresses were calculated by ISO 6336:2006 Methods A and B. Fatigue life was calculated using Miner's cumulative damage law. The total fatigue life of a tractor was estimated at 3,420 hours; thus, 3,420 hours of fatigue life were used in the simulation of the tractor PTO gear. The main simulation results showed that the fatigue life of the PTO gear was infinite at maximum engine power, whereas the minimum fatigue life of the PTO gear was 19.61 hours at 70% of the maximum engine power. The fatigue life of the PTO gear changed according to the tractor load. Therefore, tractor work data are needed for the optimal design of the PTO gear.
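Miner's cumulative damage law, referenced above, sums the damage fractions n_i/N_i over the load levels and predicts failure when the sum reaches 1. The sketch below illustrates the rule with an assumed S-N curve and a made-up duty cycle; none of the stress levels, cycle counts, or curve constants come from the paper:

```python
def cycles_to_failure(stress_mpa, sigma_ref=900.0, exponent=8.0, n_ref=3.0e6):
    """Illustrative S-N curve N = n_ref * (sigma_ref / stress)^exponent (assumed constants)."""
    return n_ref * (sigma_ref / stress_mpa) ** exponent

def miner_damage(load_blocks):
    """Miner's rule: D = sum(n_i / N_i); failure is predicted when D reaches 1."""
    return sum(n_i / cycles_to_failure(s_i) for s_i, n_i in load_blocks)

# Assumed tooth-root stress levels (MPa) and cycle counts accumulated over one 3,420-hour life.
blocks = [(450.0, 2.0e6), (550.0, 8.0e5), (650.0, 1.5e5)]
D = miner_damage(blocks)
print(f"damage per life D = {D:.4f}, predicted life = {3420.0 / D:,.0f} h")
```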
Development of a hydraulic power transmission system for the 3-point hitch of 50-kW narrow tractors
Chung, Sun-Ok;Kim, Yong-Joo;Choi, Moon-Chan;Lee, Kyu-Ho;Ha, Jong-Kyou;Kang, Tae-Kyoung;Kim, Young-Keun 450
High performance small and mid-sized tractors are required for dryland and orchard operations. A power transmission system is the most important issue for the design of high performance tractors. Many operations, such as loading and lifting, use hydraulic power. In the present study, a hydraulic power transmission system for the 3-point hitch of a 50 kW narrow tractor was developed and its performance was evaluated. First, major components were designed based on target design parameters. Target operations were spraying, weeding, and transportation. Main design parameters were determined through mathematical calculation and computer simulation. The capacity of the hydraulic cylinder was calculated taking the lifting force required for the weight of the implements into consideration. Then, a prototype was fabricated. Major components were the lifting valve, hydraulic cylinder, and 3-point hitch. Finally, performance was evaluated through laboratory tests. Tests were conducted using load weights, lift arm sensor, and lift arm height from the ground. Test results showed that the lifting force was in the range of 23.5 - 29.4 kN. This force was greater than lifting forces of competing foreign tractors by 3.9 - 4.9 kN. These results satisfied the design target value of 20.6 kN, determined by survey of advanced foreign products. The prototype will be commercialized after revision based on various field tests. Improvement of reliability should be also achieved.
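As a back-of-the-envelope illustration of sizing a lift cylinder (not the authors' design calculation), the theoretical push force of a hydraulic cylinder is F = p·A. The bore diameter and system pressure below are assumed values, and the force actually available at the lift arms is further reduced by the 3-point-hitch linkage geometry, which is consistent with the measured lifting forces of 23.5 - 29.4 kN being well below the cylinder force:

```python
import math

def cylinder_push_force(bore_m, pressure_pa):
    """Theoretical push force on the cap side of a hydraulic cylinder: F = p * A."""
    return pressure_pa * math.pi * (bore_m / 2.0) ** 2

# Assumed values, not from the paper: 70 mm bore at 18 MPa system pressure.
force_kn = cylinder_push_force(0.070, 18.0e6) / 1000.0
print(f"{force_kn:.1f} kN at the cylinder")  # ~69 kN before the hitch linkage ratio
```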
A study on the purchase behavior of Chinese consumers about environment-friendly agricultural products
Kim, Sounghun;Ryu, In-Hwan;Lee, Ki-Young 459
In Korea, the market size for environment-friendly agricultural products has reached a plateau, even though many Korean consumers still show a high level of preference for environment-friendly agricultural products. In order to solve this problem, some Korean farmers and marketers are starting to try to export their products to many countries, including China. China, in particular, is becoming one of the fastest rising market for Korean environment-friendly agricultural products, after the signing of the Free Trade Agreement with China. However, little research has been done or reported about the purchase behaviors of Chinese consumers. The purpose of this paper is to analyze the environment-friendly agricultural product purchase behavior (especially, mandarin orange and muskmelon) of consumers in the Chinese market and to present some useful implications for Korean farmers and marketers. Through survey in China (especially, Beijing and Shanghai) and frequency analysis, this study made the following findings: first, Chinese consumers show a very strong concern for environment-friendly agricultural products. Second, many Chinese consumers usually buy environment-friendly agricultural products more than two times per month. Third, Chinese consumers give more value to freshness and food-safety than taste when they make decisions on buying environment-friendly mandarin orange and muskmelon. These can have some implications for the exportation of environment-friendly agricultural products.
An analysis of ex-post assessment on Korea-Chile Free Trade Agreement with respect to the agricultural sector
Han, Suk-Ho 468
As the existing FTAs' implementations are being accelerated, ex-post assessments, such as tariff schedules and agricultural trade analyses results, have been emerging as important national issues for the agricultural sector. Korea-Chile FTA is the first FTA in Korea, and more than ten years have passed since April 2004. It will be necessary to measure the impacts of the agreement on the domestic agricultural industry by analyzing concessions made on traded items of farm products on prices, agricultural trade, and so on. The purpose of this study is to prepare for the request for ex-post assessments on the agricultural sector by trade negotiation procedural law. Additionally, by providing policy direction for agricultural policy segments requiring amendments and supplements through an ex-post assessment, we can more objectively evaluate the conflicting arguments between the agricultural and non-agricultural sectors. Current evaluation methods about ex-post impact assessment of FTA are generally comparison analysis on the change of trade balance before and after FTA implementation. However, this simple comparison analysis cannot be said to pure FTA effects and objective, tightening economic impact assessment of the FTA because of all combined situations such as effects of exchange rates, international macroeconomic changes, climate change, and the occurrence of pests. This research attempts to use dynamic analysis as its ex-post assessment methodology and is expected to contribute to future policy evaluation.
Analysis of the economic value of the production of lily bulbs in Korea
Jang, Hyundong;Kim, Sounghun 481
Lily, which is one of Korea's main flower exports, is one of the most important agricultural product in the country. Korean lily farmers have difficulty earning more profit from producing lilies, because of the high cost of lily bulbs. Most lily bulbs used in Korea are imported from the Netherlands. Thus, the Korean government has kept trying to supply more and better Korean lily bulbs. However, many experts have questioned the efficiency and economic value of the Korean lily production system. The purpose of this paper is to analyze the economic value of the production of lily bulbs in Korea. Especially, this study evaluates the economic value of the production systems of Korean lily bulbs and compares the results from several cases. The results of the present study presents some useful findings, as follows: first, two Korean production areas (Gangneung and Jeju) show a positive economic value but one Korean production area (Taean) presents problems causing a negative economic value. Second, the Korean production area in Vietnam currently has trouble in the view of economic value but will likely overcome that problem. Third, the production area in the Netherlands shows the best economic value. Thus, Korean lily bulb producers need to benchmark that system.
Economic effect analysis of flame retardant aluminum screen development
Park, Bum-Soon;Han, Chung-Soo;Kang, Tae-Hwan;Lee, Hee-Sook 496
The purpose of this study was to investigate the economic effects of a flame retardant aluminum screen developed by a company. Economic effects were analyzed in terms of micro- and macro-economic aspects. In the macro-economic aspect, economic effects were analyzed under the assumptions that the total import volume of flame retardant aluminum screen was approximately $50m^2$ in 2015 and that possible import substitution rates were 100%, 80%, and 60%. Results showed economic values of 2.25 billion won (100% import substitution rate), 1.8 billion won (80% import substitution rate), and 1.35 billion won (60% import substitution rate). If existing farms that had been using imported flame retardant aluminum screen replaced it with the flame-retardant aluminum screen developed in this study at rates of 100%, 80%, and 60%, the farms could save 750 million won, 60 million won, and 45 million won, respectively. Furthermore, the social cost savings from fire prevention could be 1.184 billion won. In the micro-economic aspect, if a farm with a typical-size ($1,000m^2$) greenhouse growing red pepper wanted to install flame retardant aluminum screen instead of generic aluminum screen, the farm would only pay an additional cost of 720,000 won. In comparison, if the farm chose fire insurance instead of flame-retardant aluminum screen, the farm would pay 21,000,000 won for fire insurance. The above results show that the economic effect of the flame retardant aluminum screen developed by the company is very favorable compared to the imported one. | CommonCrawl |
January 2001, 7(1): 219-235. doi: 10.3934/dcds.2001.7.219
The exact rate of approximation in Ulam's method
Christopher Bose 1, and Rua Murray 2,
Department of Mathematics and Statistics, University of Victoria, P.O. Box 3045, Victoria, BC, Canada V8W 3P4, Canada
Department of Mathematics, University of Waikato, Private Bag 3105, Hamilton, New Zealand
Received July 2000 Revised November 2000 Published November 2000
This paper investigates the exact rate of convergence in Ulam's method: a well-known discretization scheme for approximating the invariant density of an absolutely continuous invariant probability measure for piecewise expanding interval maps. It is shown by example that the rate is no better than $O(\frac{\log n}{n})$, where $n$ is the number of cells in the discretization. The result is in agreement with upper estimates previously established in a number of general settings, and shows that the conjectured rate of $O(\frac{1}{n})$ cannot be obtained, even for extremely regular maps.
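For readers unfamiliar with Ulam's method, the following Python sketch (not taken from the paper) builds the n-cell Ulam matrix for an interval map by Monte Carlo sampling and extracts the approximate invariant density as the left eigenvector for eigenvalue 1; the doubling map is used as a test case where the exact density is identically 1:

```python
import numpy as np

def ulam_matrix(T, n, samples_per_cell=1000, rng=None):
    """Row-stochastic Ulam matrix: P[i, j] approximates m(B_i ∩ T^{-1} B_j) / m(B_i)."""
    rng = np.random.default_rng() if rng is None else rng
    edges = np.linspace(0.0, 1.0, n + 1)
    P = np.zeros((n, n))
    for i in range(n):
        x = rng.uniform(edges[i], edges[i + 1], samples_per_cell)
        j = np.minimum((T(x) * n).astype(int), n - 1)   # cell index of the image points
        P[i] = np.bincount(j, minlength=n) / samples_per_cell
    return P

def invariant_density(P):
    """Approximate invariant density: left eigenvector of P for the eigenvalue closest to 1."""
    n = P.shape[0]
    vals, vecs = np.linalg.eig(P.T)
    v = np.abs(np.real(vecs[:, np.argmin(np.abs(vals - 1.0))]))
    return v / v.sum() * n                              # piecewise-constant density values

doubling = lambda x: (2.0 * x) % 1.0   # Lebesgue measure is invariant for this map
P = ulam_matrix(doubling, n=64)
print(invariant_density(P)[:8])        # entries should all be close to 1
```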
Keywords: $L^1$ Error Bounds, Perron-Frobenius Operator, Interval Maps, Approximation of Absolutely Continuous Invariant Measures.
Mathematics Subject Classification: 28D05, 41A25, 41A4.
Citation: Christopher Bose, Rua Murray. The exact rate of approximation in Ulam's method. Discrete & Continuous Dynamical Systems - A, 2001, 7 (1) : 219-235. doi: 10.3934/dcds.2001.7.219
June 2014, Volume 56, Issue 2, pp 333–373
Group efforts when performance is determined by the "best shot"
Stefano Barbieri
David A. Malueg
First Online: 23 November 2013
We investigate the private provision of a public good whose level is determined by the maximum effort made by a group member. Costs of effort are either commonly known or privately known. For symmetric perfect-information games, any number of players may be active and we characterize the unique (mixed-strategy) equilibrium in which active contributors use the same strategy. Increasing the number of active players leads to stochastically lower individual efforts and level of the public good. When information is private, the symmetric equilibrium is in pure strategies. Increasing the number of players yields a pointwise reduction in the equilibrium contribution strategy but an increase in equilibrium payoffs. Comparative statics with respect to costs and levels of risk aversion are derived. Finally, whether information is public or private, equilibria are inefficient—we provide mechanisms that improve efficiency.
Best-shot public good · Privately provided public good · Volunteer's dilemma
The authors are grateful to Subhasish Chowdhury and to two anonymous referees of this Journal for their comments on the earlier versions of this paper.
JEL Classification
D61 D82 H41
Appendix 1: Proofs
Proof of Lemma 1
As explained in the text, the support of \(F\) has infimum \({\underline{x}}= 0\) and supremum \(\bar{x}= x^{\text {sa}}\). It only remains to show that the support of \(F\) has no gaps. Suppose to the contrary that there exists an interval \((x^l,x^h)\), with \(0 \le x^l< x^h \le \bar{x}\), in which no efforts fall, and \(x^l\) and \(x^h\) are in the support of \(F\). Let \(v^*\) denote an active player's payoff in the semi-symmetric equilibrium where active players use cdf \(F\). In equilibrium, it must be that \(V(x) = v^*\) a.e.-\(F\), and because \(V\) is continuous, it follows that \(V(x) =v^*\) on the support of \(F\). Therefore, \(V(x^l) = V(x^h)\), which in turn implies
$$\begin{aligned} 0&= V(x^h) - V(x^l) \\&= \left( - cx^h + v(\bar{x}) - \int \limits _{x^h}^{\bar{x}} (F(y))^{m-1} v'(y)\, \hbox {d}y\right) \\&\qquad - \left( - cx^l + v(\bar{x}) - \int \limits _{x^l}^{\bar{x}} (F(y))^{m-1} v'(y)\, \hbox {d}y\right) \\&= -(x^h - x^l)c + \int \limits _{x^l}^{x^h} (F(y))^{m-1} v'(y)\, \hbox {d}y \\&< -(x^h - x^l)c + (x^h - x^l)(F(x^l))^{m-1} v'(x^l), \end{aligned}$$
where the inequality follows because \(F\) is constant on \([x^l, x^h]\) and \(v'\) is strictly decreasing. The extremes of the previous equations yield
$$\begin{aligned} c < (F(x^l))^{m-1} v'(x^l). \end{aligned}$$
For any \(x \in (x^l, x^h)\), we have
$$\begin{aligned} V(x) = -c x + v(x) (F(x^l))^{m-1} + \int \limits _{x^h}^{\bar{x}} v(y)\, \hbox {d}(F(y))^{m-1}, \end{aligned}$$
at which points
$$\begin{aligned} V'(x) = -c + v'(x) (F(x^l))^{m-1}. \end{aligned}$$
$$\begin{aligned} 0 \ge \lim _{x \downarrow x^l} V'(x) = -c + v'(x^l) (F(x^l))^{m-1}, \end{aligned}$$
where the inequality follows because, otherwise, since \(V\) is continuous, an effort slightly greater than \(x^l\) would yield a payoff strictly exceeding the payoff in \((x^l - \varepsilon , x^l )\), for sufficiently small \(\varepsilon > 0\), contradicting the assumption that \(F\) is an equilibrium strategy. But then (18) contradicts (16). Hence, it must be that there are no gaps in the support of \(F\).\(\square \)
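To make the structure of such a semi-symmetric mixed equilibrium concrete, the short check below uses the illustrative specification \(v(x)=2\sqrt{x}\) with \(m\) active players of common cost \(c\). The candidate cdf \(F(x)=(c\sqrt{x})^{1/(m-1)}\) on \([0,1/c^{2}]\) is inferred from the indifference condition \(c=(F(x))^{m-1}v'(x)\) implicit in the proof, so it should be read as an assumption-laden sketch rather than a formula quoted from the paper.

```python
import numpy as np

# Assumed specification: v(x) = 2*sqrt(x), m active players, common cost c.
# Candidate equilibrium cdf inferred from c = F(x)**(m-1) * v'(x):
#   F(x) = (c*sqrt(x))**(1/(m-1))   on [0, 1/c**2].
c, m = 1.0, 3
v = lambda x: 2.0 * np.sqrt(x)

rng = np.random.default_rng(1)
# simulate the other m-1 players by inverting F: u = F(x)  =>  x = (u**(m-1) / c)**2
u = rng.uniform(size=(200_000, m - 1))
others = (u ** (m - 1) / c) ** 2

def expected_payoff(x):
    best = np.maximum(x, others.max(axis=1))   # best shot of the group
    return -c * x + v(best).mean()

for x in [0.0, 0.1, 0.4, 0.7, 1.0 / c ** 2]:
    print(f"effort {x:4.2f}: payoff ~ {expected_payoff(x):.3f}")
# the payoffs are (up to simulation noise) constant on the support, as required
```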
Proof that the \(k\)-step strategies of Example 3 form an equilibrium. To see that \(x_{1,k'}\) is optimal against player \(2\)'s strategy, note first that, over the interval \((x_{2,k'}, x_{2, k'+1})\),
$$\begin{aligned} x_{1,k'} \in \arg \! \max _g \, \left( \sum _{j=1}^{k'} p_{2,j} \right) v(g)+ \sum _{j=k'+1}^{k} v(x_{2,j}) p_{2,j}-c_1 g, \end{aligned}$$
since, for \(v(x)=2 \sqrt{x},\)
$$\begin{aligned} c_1&= \left( \sum _{j=1}^{k'} p_{2,j}\right) v'(g) \\&= \frac{1}{\sqrt{g}}\left( \frac{1}{2k-1} + (k'-1)\frac{2}{2k-1}\right) \\&= \frac{1}{\sqrt{g}} \, \frac{2k'-1}{2k-1}. \end{aligned}$$
We next show that player 1's utility of taking effort \(x_{1,k'}\) is the same as for effort \(x_{1,k'-1},\) for \(k' \in \{2,\ldots ,k\}\). It is easily verified that
$$\begin{aligned} \sum _{j=1}^{k'} p_{2,j} = \frac{1}{2k-1}+(k'-1)\frac{2}{2k-1} = \frac{2k'-1}{2k-1} = c_1 \sqrt{x_{1,k'}}. \end{aligned}$$
Therefore, for \(k' \in \{2,\ldots ,k\}\), player 1's payoff at effort \(x_{1,k'}\) is
$$\begin{aligned}&\left( \sum _{j=1}^{k'} p_{2,j}\right) v(x_{1,k'}) + \sum _{j=k'+1}^{k} v(x_{2,j}) p_{2,j}-c_1 x_{1,k'}\\&\qquad = c_1 \sqrt{x_{1,k'}} \big (2 \sqrt{x_{1, k'}}\big ) + \sum _{j=k'+1}^{k} v(x_{2,j}) p_{2,j}-c_1 x_{1,k'}\\&\qquad = c_1 x_{1,k'}+ \sum _{j=k'+1}^{k} v(x_{2,j}) p_{2,j} \\&\qquad = c_1 x_{1, k'-1} + \sum _{j=k'}^{k} v(x_{2,j}) p_{2,j} \\&\quad \qquad + \big [ c_1(x_{1, k'}-x_{1, k'-1}) - v(x_{2, k'}) p_{2, k'} \big ]\\&\qquad = c_1 x_{1, k'-1} + \sum _{j=k'}^{k} v(x_{2,j}) p_{2,j} \\&\quad \qquad + \underbrace{\left[ \frac{(2k'-1)^2-(2k'-3)^2}{c_1(2k-1)^2} - 2\left( \frac{2k'-2}{2k-1}\, \frac{1}{c_1}\right) \frac{2}{2k-1}\right] }_{= 0}\\&\qquad = \left( \sum _{j=1}^{k'-1} p_{2,j}\right) v(x_{1,k'-1})+ \sum _{j=k'}^{k} v(x_{2,j}) p_{2,j}-c_1 x_{1,k'-1}, \end{aligned}$$
establishing the payoff equality and thus concluding the proof that \(x_{1,k'}\) is optimal against player \(2\)'s strategy.
Similar calculations hold for player \(2\). The final consideration is whether player 2 has a profitable deviation to \(x^{\text {sa}}_2\). This is surely not the case if \(x^{\text {sa}}_2 \le x^{\text {sa}}_1\). However, if \(x^{\text {sa}}_2 > x^{\text {sa}}_1\) (i.e., \(c_2 < c_1\)), it must be verified that player 2 cannot profit from deviating to \(x^{\text {sa}}_2\). Under the proposed equilibrium strategies, player 2's expected payoff when choosing \(x_{2,1} = 0\) is
$$\begin{aligned} \sum _{k' = 1}^k p_{1,k'} v\left( x_{1,k'}\right) = \frac{2c_1+c_2\left( -1 + \frac{1}{(2k-1)^2}\right) }{c_1^2}, \end{aligned}$$
while the payoff from effort \(x^{\text {sa}}_2 = 1/c_2^2\) is \(1/c_2\). Therefore, the proposed equilibrium strategies indeed form an equilibrium if
$$\begin{aligned} 0&\le \frac{2c_1+c_2\left( -1 + \frac{1}{(2k-1)^2}\right) }{c_1^2} - \frac{1}{c_2}\\&= \frac{\big [2k c_2 - (2k-1)c_1\big ]\big [(2k-1)c_1-2(k-1)c_2 \big ]}{(2k-1)^2 c_1^2 c_2}, \end{aligned}$$
a condition satisfied if and only if
$$\begin{aligned} \frac{2k}{2k-1} c_2 \ge c_1 \ge \frac{2(k-1)}{2k-1} c_2. \end{aligned}$$
If \(c_1 = \frac{2(k-1)}{2k-1} c_2\), then the proposed equilibrium strategies have \(p_{1,k} = 0\), in which case player 1 does not truly have \(x^{\text {sa}}_1\) in the support of his strategy. Therefore, the proposed strategies form an equilibrium in which players have \(k\) elements in the supports of their strategies if and only if
$$\begin{aligned} \frac{2k}{2k-1} c_2 \ge c_1 > \frac{2(k-1)}{2k-1} c_2. \end{aligned}$$
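The indifference argument can also be checked numerically. In the sketch below, the support points and mixing probabilities are reconstructed from the identities displayed in the proof (\(\sum _{j\le k'} p_{2,j}=(2k'-1)/(2k-1)=c_1\sqrt{x_{1,k'}}\) and \(\sqrt{x_{2,k'}}=(2k'-2)/((2k-1)c_1)\)); these reconstructed forms are an inference on my part, and the script only verifies that player 1 is indifferent across his \(k\) support points.

```python
import numpy as np

def player1_payoffs(k=3, c1=1.0):
    """Player 1's expected payoff at each of his k candidate support points,
    with v(x) = 2*sqrt(x); all entries should coincide (indifference)."""
    v = lambda x: 2.0 * np.sqrt(x)
    # player 2 puts weight 1/(2k-1) on the lowest point and 2/(2k-1) on each other point
    p2 = np.array([1.0] + [2.0] * (k - 1)) / (2 * k - 1)
    # support points inferred from the identities in the proof
    x1 = np.array([((2 * j - 1) / ((2 * k - 1) * c1)) ** 2 for j in range(1, k + 1)])
    x2 = np.array([((2 * j - 2) / ((2 * k - 1) * c1)) ** 2 for j in range(1, k + 1)])
    return [-c1 * x + float(np.sum(p2 * v(np.maximum(x, x2)))) for x in x1]

print(player1_payoffs(k=3, c1=1.0))   # e.g. [1.0, 1.0, 1.0]
```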
Proof of Proposition 2
First, we complete the discussion preceding Proposition 2 and establish that \(g\) in (12) is indeed a symmetric equilibrium strategy. If all players but player 1 use strategy \(g\) in (12), then player 1 can implement any level of group effort \(g(c')\) in the interval \([0, g(\underline{c})]\) by acting as a type \(c'\). From (9), we see
$$\begin{aligned} \frac{\partial V(c',c)}{\partial c'}&= g'(c') \left[ - c + (1-F^M(c'))v'(g(c')) \right] \\&= g'(c') \left[ -c + c'\right] . \qquad \qquad \qquad \qquad \qquad \qquad \mathrm{(by\; (10))} \end{aligned}$$
Because \(g\) is decreasing, it follows that the above derivative is positive for \(c'< c\) and negative for \(c' \in (c, c^*).\) Therefore, player 1 with cost \(c\) cannot do better acting as some other type using \(g\). It only remains to verify he cannot do better by choosing some level of effort above \(g(\underline{c})\). Choosing \(\gamma \ge g(\underline{c})\) yields player 1 the payoff \(v(\gamma ) - c \gamma \), for which \(\frac{\partial }{\partial \gamma }(v(\gamma ) - c \gamma ) = v'(\gamma ) - c \le v'(\gamma ) - \underline{c}\le 0\) for any \(\gamma > g(\underline{c})\). Consequently, no type wishes to exert effort greater than \(g(\underline{c})\). Thus, we have established that \(g\) in (12) describes a symmetric equilibrium.
Next, we show it is the unique symmetric equilibrium. Let \(g_s\) be any symmetric equilibrium strategy. By the usual incentive compatibility arguments, \(g_s\) is weakly decreasing. Now, note that \(g_s(\bar{c})=0,\) since otherwise type \(\bar{c}\) of one player could reduce his effort and save on his costs without changing the expected level of provision of the public good. Also, \(g_s(\underline{c})>0,\) which implies that the no-effort strategy is not a symmetric equilibrium. To see this, suppose to the contrary that \(g_s(c)=0\) for all \(c \in (\underline{c},\bar{c}).\) The condition \(v'(0)>\underline{c}\) then guarantees a strictly profitable deviation to a positive effort level by type \(\underline{c}.\)
We now establish that \(g_s\) is continuous on \((\underline{c},\bar{c})\). Suppose to the contrary that a discontinuity at \(\tilde{c} \in (\underline{c},\bar{c})\) exists, so that \(\lim _{c \uparrow \tilde{c}} g_s(c) = g_s^h>g_s^l=\lim _{c \downarrow \tilde{c}} g_s(c).\) Thus, for any effort \(\gamma \in (g_s^l,g_s^h),\) the utility of type \(c\) is
$$\begin{aligned} V^{nc}(c,\gamma )=-c \gamma + (1-F^{M}(\tilde{c}))v(\gamma )+\int \limits _{\underline{c}}^{\tilde{c}}v(g_s(y))f^{M}(y)\,\hbox {d}y. \end{aligned}$$
Note also that \(V^{nc}\) is continuous for \(\gamma \in [g_s^l,g_s^h],\) even if there exists a mass of probability in the distribution of efforts at \(g_s^l\) or \(g_s^h\). This is because the identity of the player with the "best shot" does not affect the prize, which is a pure public good, nor the payment, which is always required. Hence, since \(g_s\) is an equilibrium, the following necessary condition must hold:
$$\begin{aligned} \frac{\partial V^{nc}}{\partial \gamma } (\tilde{c},g_s^l) \le 0 \le \frac{\partial V^{nc}}{\partial \gamma } (\tilde{c},g_s^h), \end{aligned}$$
reflecting the fact that \(\tilde{c}\) does not want to exert effort greater than \(g_s^l\) or less than \(g_s^h\). But
$$\begin{aligned} \frac{\partial V^{nc}}{\partial \gamma } (\tilde{c},g_s^l) \!=\! -\tilde{c}\!+\!(1-F^{M}(\tilde{c}))v'(g_s^l)>-\tilde{c}\!+\!(1-F^{M}(\tilde{c}))v'(g_s^h)\!=\!\frac{\partial V^{nc}}{\partial \gamma } (\tilde{c},g_s^h), \end{aligned}$$
because \(v'\) is decreasing, thus contradicting (19).
Using continuity of \(g_s\), the previously derived bounds, and the fact that \(g_s\) must obey equation (10) when \(g_s\) is strictly decreasing, we now rule out flat spots at an effort level \(\gamma \in (0, g_s(\underline{c})).\) Again, proceeding by contradiction, if such a flat spot exists, then
$$\begin{aligned} c^{\gamma } \equiv \sup \{c:g_s(c)>\gamma \} < \inf \{c: g_s(c)<\gamma \} \equiv c_{\gamma }. \end{aligned}$$
By continuity of \(g_s\), there exist strictly decreasing segments of \(g_s\) with range \((\gamma -\varepsilon ,\gamma )\) and \((\gamma , \gamma +\varepsilon ).\) Taking limits in (10), this implies \(\frac{c^{\gamma }}{1-F^{M}(c^{\gamma })}=v'(\gamma )=\frac{c_{\gamma }}{1-F^{M}(c_{\gamma })},\) which contradicts \(c^{\gamma }<c_{\gamma },\) since the function \(\frac{c}{1-F^{M}(c)}\) is strictly increasing in \(c\).
We now rule out a flat spot at an effort level \(\gamma =g_s(\underline{c}).\) To the contrary, suppose such a flat spot exists. The above discussion implies the existence of a strictly decreasing segment of \(g_s\) with range \((\gamma -\varepsilon ,\gamma )\). Hence, again letting \( c_{\gamma } = \inf \{c: g(c)<\gamma \}\), we have \(\underline{c}< c_{\gamma }\) and \(v'(\gamma )=\frac{c_{\gamma }}{1-F^{M}(c_{\gamma })}\) by (10), implying \(v'(\gamma )>c_{\gamma }\). For any effort \(\gamma ' > \gamma \), the utility of type \(c\) is \(-c \gamma ' + v(\gamma ').\) Hence, type \(c_{\gamma }\) has a profitable deviation to an effort level marginally higher than \(\gamma ,\) since \(v'(\gamma )>c_{\gamma }\).
Finally, we rule out a flat spot at the effort level of zero, unless the flat spot occurs for types larger than \(c^*.\) By contradiction, let there be \(\tilde{c}<c^*\) such that \(g_s(c)=0\) for all \(c>\tilde{c}.\) The previous discussion implies the existence of a strictly decreasing segment of \(g_s\) with range \((0,\varepsilon )\). Hence, Eq. (10) implies \(v'(0)=\frac{\tilde{c}}{1-F^{M}(\tilde{c})}.\) If \(v'(0)=+ \infty \) this is impossible. If \(v'(0)< \infty \), the definition of \(c^*\) and the fact that \(\frac{c}{1-F^{M}(c)}\) is strictly increasing in \(c\) imply \(\tilde{c}=c^*,\) and again we have a contradiction.
Therefore, the only possibility for a symmetric equilibrium is for \(g_s\) to equal the formulation in (12).\(\square \)
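For intuition about the shape of this symmetric equilibrium, the sketch below evaluates the strategy for an illustrative specification of my own choosing (costs uniform on \([0.1,1]\), \(n=3\), and \(v(x)=2\sqrt{x}\), so that \(x^{\text {sa}}(c)=1/c^{2}\)); the closed form \(g_n(c)=x^{\text {sa}}\bigl (c/(1-F(c))^{n-1}\bigr )\) used here is the one noted later in this appendix, in the proof of part (i).

```python
import numpy as np

# Illustrative (assumed) parameters: costs uniform on [0.1, 1], n = 3 players,
# v(x) = 2*sqrt(x), hence v'(x) = 1/sqrt(x) and x_sa(c) = 1/c**2.
c_lo, c_hi, n = 0.1, 1.0, 3

F = lambda c: (c - c_lo) / (c_hi - c_lo)            # uniform cdf
x_sa = lambda c: 1.0 / c ** 2                       # stand-alone effort: v'(x) = c
g = lambda c: x_sa(c / (1.0 - F(c)) ** (n - 1))     # symmetric equilibrium strategy

for c in np.linspace(c_lo, 0.9, 5):
    print(f"c = {c:.3f}:  stand-alone {x_sa(c):8.2f}   equilibrium {g(c):8.4f}")
# equilibrium effort falls far below the stand-alone level as soon as the cost
# rises above its minimum, reflecting the free riding discussed in the text
```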
Recall that here we consider two continuous distributions \(F_{1}\) and \(F_{2}\) on \([\underline{c},\bar{c}]\) and we assume \(F_2\) SOSD \(F_1\): \(\int \nolimits _{\underline{c}}^{y} \left( F_{1}(c )-F_{2}(c)\right) \hbox {d}c \ge 0\) for all \(y \ge \underline{c}\). The following lemma proves very useful in the rest of the analysis.
Lemma 3
(SOSD implications) Suppose \(F_2\) SOSD \(F_1\), and let \(W(c)\) be a nonnegative decreasing function. Then
$$\begin{aligned} \int \limits _{\underline{c}}^{y}(F_{2}(c)-F_{1}(c) )W(c) \, \hbox {d}c \le 0. \end{aligned}$$
Define \(\Delta (c) \equiv F_2(c) - F_1(c)\) and
$$\begin{aligned} \bar{\Delta }(y) \equiv \int \limits _{\underline{c}}^y \Delta (c)\, \hbox {d}c. \end{aligned}$$
$$\begin{aligned} \int \limits _{\underline{c}}^y \Delta (c) W(c)\, \hbox {d}c&= \left. \bar{\Delta }(c) W(c)\right| _{\underline{c}}^y - \int \limits _{\underline{c}}^y \bar{\Delta }(c) W'(c)\,\hbox {d}c\\&= \underbrace{\bar{\Delta }(y)}_{{\le 0\;\mathrm{by\; SOSD }}} \underbrace{W(y)}_{\ge 0} - \int \limits _{\underline{c}}^y \underbrace{\bar{\Delta }(c)}_{\le 0} \underbrace{W'(c)}_{\le 0}\,\hbox {d}c\\&\le 0, \end{aligned}$$
where the second equality uses \(\bar{\Delta }(\underline{c}) = 0\).\(\square \)
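A quick numerical illustration of Lemma 3, with distributions and a weight chosen purely for illustration: \(F_1\) is uniform on \([0,1]\), \(F_2\) is uniform on \([0.25,0.75]\) (a mean-preserving contraction of \(F_1\), hence \(F_2\) SOSD \(F_1\)), and \(W(c)=e^{-c}\) is nonnegative and decreasing.

```python
import numpy as np

grid = np.linspace(0.0, 1.0, 10001)
dc = grid[1] - grid[0]

F1 = grid.copy()                               # cdf of Uniform[0, 1]
F2 = np.clip(2.0 * (grid - 0.25), 0.0, 1.0)    # cdf of Uniform[0.25, 0.75]
W = np.exp(-grid)                              # nonnegative, decreasing weight

# partial integrals int_0^y (F2 - F1) * W dc for every y on the grid
partials = np.cumsum((F2 - F1) * W) * dc
print("largest partial integral:", partials.max())    # numerically <= 0
print("integral over the whole support:", partials[-1])
```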
We now establish two results about \(F^{M}_{1}\) and \(F^{M}_{2}\). First, note that, since
$$\begin{aligned} F^{M}_{i}(c)=1-(1-F_{i}(c) )^{n-1}, \qquad i=1,2, \end{aligned}$$
we have that
$$\begin{aligned} F^{M}_{1}(c) \ge F^{M}_{2}(c) \iff F_{1}(c) \ge F_{2}(c). \end{aligned}$$
Thus, whether we consider the pair \(F_1,F_2\) or the pair \(F^{M}_{1},F^{M}_{2}\), all intersection points and all directions of the intersections are the same. Our second result is that
$$\begin{aligned} \int \limits _{\underline{c}}^{y}F_{1}^{M}(c)\hbox {d}c \ge \int \limits _{\underline{c}}^{y}F_{2}^{M}(c)\hbox {d}c. \end{aligned}$$
To see this, note that (20) is equivalent to
$$\begin{aligned} \int \limits _{\underline{c}}^{y}\left( (1-F_{1}\left( c\right) )^{n-1}-\left( 1-F_{2}(c)\right) ^{n-1}\right) \hbox {d}c \le 0, \end{aligned}$$
or, decomposing the difference of powers,
$$\begin{aligned} \int \limits _{\underline{c}}^{y}(F_{2}(c)-F_{1}(c) )P(c)\hbox {d}c \le 0, \end{aligned}$$
$$\begin{aligned} P(c) = \sum _{j=1}^{n-1} \left( 1-F_1(c)\right) ^{n-1-j} \left( 1-F_2(c)\right) ^{j-1} \end{aligned}$$
is positive and decreasing in \(c\). Therefore, Lemma 3 applies and (21) is true, which in turn establishes (20).
Denote now with \(n_\mathrm{int}\) the number of strict intersections between \(F_2\) and \(F_1\), not including those that may occur at \(\underline{c}\) and \(\bar{c}\). To reflect SOSD of \(F_2\) over \(F_1\), \(F_2\) must overtake \(F_1\) at the first strict intersection point. We label this point as \(c^{1}.\)
It then follows that \(F_2\) overtakes \(F_1\) at all odd-numbered strict intersection points. The situation we envision is depicted in Fig. 1. (\(n_\mathrm{int}\) is odd there, but this is not essential.)
Fig. 1 Two cdfs ordered by SOSD, crossing thrice
We conduct the proof for the case \(n_\mathrm{int}=3\), as illustrated in Fig. 1, but the reasoning easily generalizes to any \(n_\mathrm{int}.\) (Footnote 25, infra, outlines the simple changes in the proof that are required when \(n_\mathrm{int}>3\)). The conclusion we want to prove is that for any \(y\), we have
$$\begin{aligned} \int \limits _{\underline{c}}^{y}\left( g(c|F_{1})-g(c|F_{2})\right) \hbox {d}c \le 0. \end{aligned}$$
Denote as Case 1 the situation in which \(y\) is such that \(F_{1}(y)>F_{2}(y).\) Case 2 then occurs when \(F_{1}(y) \le F_{2}(y).\) We explicitly demonstrate our conclusion only for \(y>c^{3}\), a sub-case of Case 1. This is because, as the following proof makes clear, other values of \(y\) make establishing (22) easier. (We briefly comment on Case 2 at the end of this proof.)
We begin by establishing that \(\frac{\partial ^{2}\psi }{\partial z^{2}}\ge 0\) implies \(\frac{\partial ^{2}\psi }{\partial z\partial c}\ge 0.\) Indeed, through the definition of \(\psi \), we obtain
$$\begin{aligned} \frac{\partial ^{2}\psi (c,z) }{ \partial z^{2}} \cdot v''(\psi (c,z))=\frac{c}{(1-z)^3} \left( 2- \frac{v'(\psi (c,z))}{v''(\psi (c,z))} \frac{v'''(\psi (c,z))}{v''(\psi (c,z))}\right) , \end{aligned}$$
$$\begin{aligned} \frac{\partial ^{2}\psi (c,z) }{\partial z \,\partial c} \cdot v''(\psi (c,z))=\frac{1}{(1-z)^2} \left( 1- \frac{v'(\psi (c,z))}{v''(\psi (c,z))} \frac{v'''(\psi (c,z))}{v''(\psi (c,z))}\right) . \end{aligned}$$
$$\begin{aligned} \frac{\partial ^{2}\psi (c,z) }{\partial z^{2}} \cdot v''(\psi (c,z)) \cdot \frac{(1-z)^3}{c} \ge \frac{\partial ^{2}\psi (c,z) }{\left( \partial z\right) \left( \partial c\right) } \cdot v''(\psi (c,z)) \cdot (1-z)^2, \end{aligned}$$
so, since \(z \le 1\) and \(v'' \le 0\), if \(\frac{\partial ^{2}\psi }{\partial z^{2}}\ge 0\), then \(\frac{\partial ^{2}\psi }{\partial z\partial c}\ge 0.\) Having established the signs of the partial derivatives, we now proceed to prove the lemma. The integral of interest in (22) can be rewritten as
$$\begin{aligned} \int \limits _{\underline{c}}^{y}\left( g(c|F_{1})-g(c|F_{2})\right) \hbox {d}c&= \int \limits _{\underline{c}}^{c^{1 }}\left( \psi \left( c,F^{M}_{1}(c)\right) -\psi \left( c,F^{M}_{2}(c)\right) \right) \hbox {d}c \nonumber \\&\qquad +\int \limits _{c^{1}}^{c^{2}}\left( \psi \left( c,F^{M}_{1}(c)\right) -\psi \left( c,F^{M}_{2}(c)\right) \right) \hbox {d}c \nonumber \\&\qquad +\int \limits _{c^{2}}^{c^{3}}\left( \psi \left( c,F^{M}_{1}(c)\right) -\psi \left( c,F^{M}_{2}(c)\right) \right) \hbox {d}c \nonumber \\&\qquad +\int \limits _{c^{3 }}^{y}\left( \psi \left( c,F^{M}_{1}(c)\right) -\psi \left( c,F^{M}_{2}(c)\right) \right) \hbox {d}c.\quad \qquad \end{aligned}$$
Consider the term \( \psi \left( c,F^{M}_{1}(c)\right) -\psi \left( c,F^{M}_{2}(c)\right) \) in the first integral on the right-hand side of (24). Since \(c \le c^{1}\), we have \( F^{M}_{2}\left( c\right) \le F^{M}_{1}\left( c\right) \), so, using \(\frac{\partial ^{2}\psi }{\partial z^{2}}\ge 0\) and \(w\le F^{M}_{1}\left( c\right) \le F^{M}_{1}\left( c^{1}\right) \), we obtain
$$\begin{aligned} \psi \left( c,F^{M}_{1}(c)\right) -\psi \left( c,F^{M}_{2}(c)\right)&= \int \limits _{F^{M}_{2}\left( c\right) }^{F^{M}_{1}\left( c\right) }\frac{\partial \psi }{ \partial z}\left( c,w\right) \hbox {d}w\\&\le \frac{\partial \psi }{\partial z}\left( c,F^{M}_{1}\left( c^{1}\right) \right) \left( F^{M}_{1}\left( c\right) -F^{M}_{2}\Big (c\Big )\right) . \end{aligned}$$
Now, using \(\frac{\partial ^{2}\psi }{\partial z\partial c}\ge 0\), we see that the first term on the right-hand side of (24) can be bounded above as
$$\begin{aligned}&\int \limits _{\underline{c}}^{c^{1 }}\left( \psi \left( c,F^{M}_{1}(c)\right) \right. \left. -\psi \left( c,F^{M}_{2}(c)\right) \right) \hbox {d}c\nonumber \\&\quad \le \frac{\partial \psi }{\partial z}\left( c^{1},F^{M}_{1}\left( c^{1}\right) \right) \int \limits _{\underline{c}}^{c^{1 }}\left( F^{M}_{1}\left( c\right) -F^{M}_{2}\Big (c\Big )\right) \hbox {d}c. \end{aligned}$$
Consider now the second integral on the right-hand side of (24). Since \(c^{1} \le c \le c^{2}\), we have \(F^{M}_{2}\left( c\right) >F^{M}_{1}\left( c\right) \), so
$$\begin{aligned} \psi \left( c,F^{M}_{1}(c)\right)&- \psi \left( c,F^{M}_{2}(c)\right) \\&= \int \limits _{F^{M}_{2}\left( c\right) }^{F^{M}_{1}\left( c\right) }\frac{\partial \psi }{ \partial z}\left( c,w\right) \hbox {d}w=\int \limits _{F^{M}_{1}\left( c\right) }^{F^{M}_{2}\left( c\right) }\left( -\frac{\partial \psi }{\partial z}\left( c,w\right) \right) \hbox {d}w. \end{aligned}$$
Since \(\left( -\frac{\partial ^{2}\psi }{\partial z^{2}}\right) \le 0\) and \(w\ge F^{M}_{1}\left( c\right) \ge F^{M}_{1}\left( c^{1}\right) ,\) we have
$$\begin{aligned} \int \limits _{F^{M}_{1}\left( c\right) }^{F^{M}_{2}\left( c\right) }\left( -\frac{\partial \psi }{\partial z}\left( c,w\right) \right) \hbox {d}w\le \left( -\frac{\partial \psi }{\partial z}\left( c,F^{M}_{1}\left( c^{1}\right) \right) \right) \left( F^{M}_{2}\left( c\right) -F^{M}_{1}\left( c\right) \right) , \end{aligned}$$
and since \(\left( -\frac{\partial ^{2}\psi }{\partial z\partial c}\right) \le 0\) and \(c \ge c^{1}\), we obtain
$$\begin{aligned} \int \limits _{F^{M}_{1}\left( c\right) }^{F^{M}_{2}\left( c\right) }\left( -\frac{\partial \psi }{\partial z}\left( c,w\right) \right) \hbox {d}w&\le \left( -\frac{\partial \psi }{\partial z}\left( c^{1 },F^{M}_{1}\left( c^{1 }\right) \right) \right) \left( F^{M}_{2}\left( c\right) -F^{M}_{1}\left( c\right) \right) \\&= \left( \frac{\partial \psi }{\partial z}\left( c^{1 },F^{M}_{1}\left( c^{1}\right) \right) \right) \left( F^{M}_{1}\left( c\right) -F^{M}_{2}\left( c\right) \right) . \end{aligned}$$
All in all, therefore, the second term on the right-hand side of (24) can be bounded above as
$$\begin{aligned}&\int \limits _{c^{1 }}^{c^{2 }}\left( \psi \left( c,F^{M}_{1}(c)\right) \right. - \left. \psi \left( c,F^{M}_{2}(c)\right) \right) \hbox {d}c\nonumber \\&\quad \le \frac{\partial \psi }{\partial z}\left( c^{1},F^{M}_{1}\left( c^{1}\right) \right) \int \limits _{c^{1 }}^{c^{2}}\left( F^{M}_{1}\left( c\right) -F^{M}_{2}\left( c\right) \right) \hbox {d}c. \end{aligned}$$
Therefore, from (25) and (26), we see the sum of the first and second terms on the right-hand side of (24) is bounded above by
$$\begin{aligned} \frac{\partial \psi }{\partial z}\left( c^{1},F^{M}_{1}\left( c^{1}\right) \right) \int \limits _{\underline{c}}^{c^{2}}\left( F^{M}_{1}\left( c\right) -F^{M}_{2}\left( c\right) \right) \hbox {d}c. \end{aligned}$$
Proceeding in a similar fashion, the sum of the third and fourth terms on the right-hand side of (24) is bounded above by
$$\begin{aligned} \frac{\partial \psi }{\partial z}\left( c^{3},F^{M}_{1}\left( c^{3}\right) \right) \int \limits _{c^{2}}^{y}\left( F^{M}_{1}\left( c\right) -F^{M}_{2}\left( c\right) \right) \hbox {d}c. \end{aligned}$$
Note now that, by (20), the integral in (27) is nonnegative. Therefore, using \(\frac{\partial ^{2}\psi }{\partial z^{2}}\ge 0\) and \(\frac{\partial ^{2}\psi }{\partial z\partial c}\ge 0\), the value in (27) is in turn bounded above by (see Footnote 25)
$$\begin{aligned} \frac{\partial \psi }{\partial z}\left( c^{3},F^{M}_{1}\left( c^{3}\right) \right) \int \limits _{\underline{c}}^{c^{2}}\left( F^{M}_{1}\left( c\right) -F^{M}_{2}\left( c\right) \right) \hbox {d}c. \end{aligned}$$
Thus, using the above displayed value and Eqs. (24)–(28), the integral in (22) is bounded above as
$$\begin{aligned} \int \limits _{\underline{c}}^{y}\left( g(c|F_{1})-g(c|F_{2})\right) \hbox {d}c\le \left( \frac{ \partial \psi }{\partial z}\left( c^{3},F^{M}_{1}\left( c^{3}\right) \right) \right) \int \limits _{\underline{c}}^{y}\left( F^{M}_{1}\left( c\right) -F^{M}_{2}\left( c\right) \right) \hbox {d}c. \end{aligned}$$
Note that the first term in the right-hand side of the above is negative, by concavity of \(v\). And the second term is positive, by (20). Hence,
$$\begin{aligned} \int \limits _{\underline{c}}^{y}g(c|F_{1})\hbox {d}c\le \int \limits _{\underline{c}}^{y}g(c|F_{2})\hbox {d}c, \end{aligned}$$
as we wanted to show to establish (22).
We now briefly discuss Case 2. With reference to Fig. 1, take \(y \in [c_2,c_3].\) (The argument easily adapts to other values of \(y\) belonging to Case 2.) The integral of interest in (22) can be rewritten as
$$\begin{aligned} \int \limits _{\underline{c}}^{y}\left( g(c|F_{1})-g(c|F_{2})\right) \hbox {d}c&= \int \limits _{\underline{c}}^{c^{1 }}\left( \psi \left( c,F^{M}_{1}(c)\right) -\psi \left( c,F^{M}_{2}(c)\right) \right) \hbox {d}c \\&\qquad +\int \limits _{c^{1}}^{c^{2}}\left( \psi \left( c,F^{M}_{1}(c)\right) -\psi \left( c,F^{M}_{2}(c)\right) \right) \hbox {d}c \\&\qquad +\int \limits _{c^{2}}^{y}\left( \psi \left( c,F^{M}_{1}(c)\right) -\psi \left( c,F^{M}_{2}(c)\right) \right) \hbox {d}c \\&\le \frac{\partial \psi }{\partial z}\left( c^{1},F^{M}_{1}\left( c^{1}\right) \right) \int \limits _{\underline{c}}^{c^{2}}\left( F^{M}_{1}\left( c\right) -F^{M}_{2}\left( c\right) \right) \hbox {d}c \\&\qquad +\int \limits _{c^{2}}^{y}\left( \psi \left( c,F^{M}_{1}(c)\right) -\psi \left( c,F^{M}_{2}(c)\right) \right) \hbox {d}c \\&\le \int \limits _{c^{2}}^{y}\left( \psi \left( c,F^{M}_{1}(c)\right) -\psi \left( c,F^{M}_{2}(c)\right) \right) \hbox {d}c, \end{aligned}$$
where the first inequality follows from (27), and the second from (20) and \(\frac{\partial \psi }{\partial z} \le 0\). Therefore, using the facts that \(\frac{\partial \psi }{\partial z} \le 0\) and \(F^{M}_{1}(c) > F^{M}_{2}(c)\) for \(c \in [c_2, y]\) (by \(y \in [c_2,c_3]\)), we have again established (22).\(\square \)
Incentive compatibility. Type \(c\)'s utility when announcing \(c^{a}<\bar{c}\) is
$$\begin{aligned}&U(c^{a}, c) \equiv E_{T}+\int \limits _{\underline{c}}^{c^{a}} v(x^{\text {sa}}(y)) f^M(y) \,\hbox {d}y\\&\qquad + (1-F^M(c^a)) \big [ v(x^{\text {sa}}(c^{a}))-c x^{\text {sa}}(c^{a}) \big ] - T(c^{a}). \end{aligned}$$
We now have
$$\begin{aligned} \frac{\partial U(c^{a}, c) }{\partial c^{a}}&= v(x^{\text {sa}}(c^a))f^M(c^a) - f^M(c^a) \left( v(x^{\text {sa}}(c^a)) - c x^{\text {sa}}(c^a)\right) \\&\quad + (1-F^M(c^a)) (v'(x^{\text {sa}}(c^a))-c)\frac{\hbox {d}x^{\text {sa}}(c^a)}{\hbox {d}c^a} - T'(c^a)\\&= f^M(c^a)cx^{\text {sa}}(c^a) + (1-F^M(c^a))(c^a - c) \frac{\hbox {d}x^{\text {sa}}(c^a)}{\hbox {d}c^a} \\&\quad - c^a x^{\text {sa}}(c^a)f^M(c^a)\\&= (c- c^{a}) \left[ x^{\text {sa}}(c^{a}) f^M(c^{a}) - (1-F^M(c^a))\frac{\hbox {d}x^{\text {sa}}(c^a)}{\hbox {d}c^a} \right] , \end{aligned}$$
where the second equality follows from \(v'(x^{\text {sa}}(c^a)) = c^a\) and from differentiating (15). Therefore, because \(\hbox {d}x^{\text {sa}}(y)/\hbox {d}y \le 0, U\) is strictly quasi-concave with respect to \(c^a\), which guarantees incentive compatibility.
Interim Pareto dominance. We now turn to the comparison of interim utilities. For a player with cost \(c\), let \(U^A(c) \equiv U(c, c)\) denote the interim utility under the alternative mechanism. Under truth-telling, for type \(c \in (\underline{c}, \bar{c})\), the difference in interim utility between this alternative mechanism and the original game is
$$\begin{aligned} U^A(c) - V^*(c)&= E_{T}+\int \limits _{\underline{c}}^{c} \left[ v(x^{\text {sa}}(y)) - y x^{\text {sa}}(y) \right] f^M(y) \,\hbox {d}y \\&\qquad +(1-F^M(c)) \left[ v(x^{\text {sa}}(c))-c x^{\text {sa}}(c) \right] \\&\qquad - \Big \{ \int \limits _{\underline{c}}^{c} \left[ v\left( g_n(y)\right) - c g_n(c) \right] f^M(y) \,\hbox {d}y \\&\qquad +(1-F^M(c)) \left[ v\left( g_n(c)\right) -c g_n(c) \right] \Big \} \\&> E_{T}+\int \limits _{\underline{c}}^{c} \left[ v(x^{\text {sa}}(y)) - y x^{\text {sa}}(y) \right] f^M(y) \,\hbox {d}y \\&\qquad - \int \limits _{\underline{c}}^{c} \left[ v\left( g_n(y)\right) - c g_n(c) \right] f^M(y) \,\hbox {d}y \\ \!&> \! E_{T}\!+\!\int \limits _{\underline{c}}^{c} \underbrace{v'(x^{\text {sa}}(y))}_{\!=\! y} \left( x^{\text {sa}}(y)\!-\!g_n(y)\right) f^M(y) \, \hbox {d}y \!-\! \int \limits _{\underline{c}}^{c} y x^{\text {sa}}(y) f^M(y)\, \hbox {d}y \\&= E_{T}-\int \limits _{\underline{c}}^{c} y g_n(y) f^M(y)\, \hbox {d}y, \end{aligned}$$
where the first inequality follows because the amount \(x^{\text {sa}}(c)\) is the stand-alone effort of type \(c\) and the second follows because \(v\) is strictly concave. Consequently, for all \(c \in (\underline{c}, \bar{c})\),
$$\begin{aligned} U^A(c) - V^*(c)&> E_{T}-\int \limits _{\underline{c}}^{c} y g_n(y) f^M(y)\, \hbox {d}y \\&\ge E_{T}-\int \limits _{\underline{c}}^{\bar{c}} y g_n(y) f^M(y)\, \hbox {d}y. \end{aligned}$$
We recalculate \(E_{T}\) as follows:
$$\begin{aligned} E_{T}&= \int \limits _{\underline{c}}^{\bar{c}} T(c)\, dF^M(c) = \int \limits _{\underline{c}}^{\bar{c}} \int \limits _{\underline{c}}^{c} x^{\text {sa}}(y) y f^M(y)\, \hbox {d}y\, \, dF^M(c) \\&= \int \limits _{\underline{c}}^{\bar{c}} \int \limits _{y}^{\bar{c}} dF^M(c)\, x^{\text {sa}}(y) y f^M(y)\, \hbox {d}y = \int \limits _{\underline{c}}^{\bar{c}} y x^{\text {sa}}(y) (1-F(y)) f^M(y)\, \hbox {d}y. \end{aligned}$$
With this last expression for \(E_{T}\), for any \(c \in (\underline{c}, \bar{c})\), we have
$$\begin{aligned} U^A(c) - V^*(c) >\int \limits _{\underline{c}}^{\bar{c}} y\left[ x^{\text {sa}}(y) (1-F(y)) - g_n(y) \right] f^M(y) \,\hbox {d}y. \end{aligned}$$
Because \(U^A\) and \(V^*\) are continuous, the right-hand side of (29) is a lower bound for the utility improvement under the alternative mechanism, for all \(c \in [\underline{c}, \bar{c}]\). We next establish that each of the conditions (i), (ii), and (iii) ensures the integrand in (29) is strictly positive, from which it follows that the allocation under the alternative mechanism interim Pareto dominates the unique symmetric equilibrium in the original game.
We first prove part (i). Observe that \(g_k(y) = x^{\text {sa}}\left( \frac{y}{(1-F(y))^{k-1}}\right) \) for any \(k \ge 1\). Fix \(n \ge 2\). Because \(g_n(y) \le g_2(y)\) for all \(y\), we have \( x^{\text {sa}}(y) (1-F(y)) - g_n(y) \ge x^{\text {sa}}(y) (1-F(y)) - g_2(y) , \) so to show (29) is strictly positive, it suffices to show
$$\begin{aligned} x^{\text {sa}}(y) (1-F(y)) - g_2(y) = x^{\text {sa}}(y) (1-F(y)) - x^{\text {sa}}\left( \frac{y}{1-F(y)}\right) \ge 0 , \end{aligned}$$
with strict inequality on \(y \in (\underline{c}, c_2^*)\). On \((c_2^*, \bar{c}]\), (30) is necessarily nonnegative as here \(g_2(y) = 0\). By assumption of part (i), \(s x^{\text {sa}}(s)\) is decreasing in \(s\), so for any \(y \in (\underline{c}, c^*_2]\) we have \(y < y/(1-F(y))\), implying
$$\begin{aligned} y x^{\text {sa}}(y) > \frac{y}{1-F(y)} \ x^{\text {sa}}\left( \frac{y}{1-F(y)}\right) , \end{aligned}$$
which in turn implies \(x^{\text {sa}}(y) (1-F(y)) > x^{\text {sa}}\left( \frac{y}{1-F(y)}\right) \), that is, (30) is satisfied.
Next, consider part (ii). We show that for all \(n\) sufficiently large,
$$\begin{aligned} x^{\text {sa}}(y) (1-F(y)) - g_n(y) \ge 0, \quad {\mathrm{with\; strict\; inequality\; on}}\; (\underline{c}, c^*_1). \end{aligned}$$
First, we consider a neighborhood of \(\underline{c}\). We see at \(y = \underline{c}\) both \(x^{\text {sa}}(y) (1-F(y))\) and \(g_n(y)\) equal \(x^{\text {sa}}(\underline{c}) >0\), where the inequality follows from \(v'(0) > \underline{c}\). Furthermore,
$$\begin{aligned} \left. \frac{d\left[ x^{\text {sa}}(y) (1-F(y))\right] }{\hbox {d}y} \right| _{y=\underline{c}} = \frac{\hbox {d}x^{\text {sa}}(\underline{c})}{\hbox {d}y}-f(\underline{c}) x^{\text {sa}}(\underline{c}) \end{aligned}$$
$$\begin{aligned} g_n'(\underline{c}) = \frac{\hbox {d}x^{\text {sa}}(\underline{c})}{\hbox {d}y} [1 + (n-1) \underline{c}f(\underline{c})]. \end{aligned}$$
Because \(\hbox {d}x^{\text {sa}}(\underline{c})/\hbox {d}y\) is negative and finite, \(x^{\text {sa}}(\underline{c})\) is finite, and \(f(\underline{c}) > 0\), it now follows from (32) and (33) that there exists \(n'\) such that the expression in (33) is strictly less than that in (32) for all \(n \ge n'\). Consequently, there exists \(\varepsilon > 0\) such that
$$\begin{aligned} x^{\text {sa}}(y) (1-F(y)) - g_{n'}(y) > 0 \quad \forall y \in (\underline{c}, \underline{c}+ \varepsilon ). \end{aligned}$$
Because \(v'(0) < \infty \), it follows from (11) that \((c^*_k)_k\) is a strictly decreasing sequence with limit \(\underline{c}\). So fix \(n'' \ge n'\) such that \(c^*_{n''} < \underline{c}+ \varepsilon \). Then
$$\begin{aligned} x^{\text {sa}}(y) (1-F(y)) - g_{n''}(y) = x^{\text {sa}}(y) (1-F(y)) > 0 \quad \forall y \in [\underline{c}+\varepsilon , c^*_1). \end{aligned}$$
Because the strategies \((g_k)_k\) are pointwise weakly decreasing in \(n\), inequalities (34) and (35) establish the validity of (31) for all \(n \ge n''\).
Finally, consider part (iii). Here too, we show that for all \(n\) sufficiently large, (31) is satisfied. Repeating the arguments of part (ii), we conclude there exists \(n' \ge 3\) and \(\varepsilon > 0\) such that
$$\begin{aligned} x^{\text {sa}}(y) (1-F(y)) - g_{n'}(y) > 0 \quad \forall y \in (\underline{c}, \underline{c}+ \varepsilon ). \end{aligned}$$
Next, we consider a neighborhood of \(\bar{c}\). (In part (iii), \(c^*_k = \bar{c}\) for all \(k\).) Both \(x^{\text {sa}}(y) (1-F(y)) \) and \(g_{n'}(y)\) go to 0 as \(y \rightarrow \bar{c}\), and, as we show, of these, \(g_{n'}(y)\) goes to 0 much faster. In particular,
$$\begin{aligned} 0&\le \limsup _{y \rightarrow \bar{c}} \frac{g_{n'}(y)}{x^{\text {sa}}(y)(1-F(y))} = \limsup _{y \rightarrow \bar{c}} \frac{x^{\text {sa}}\left( \frac{y}{(1-F(y))^{n'-1}} \right) }{x^{\text {sa}}(y)(1-F(y))} \\&= \limsup _{y \rightarrow \bar{c}} \frac{(1-F(y))^{n'-1}}{y} \times \frac{y}{(1-F(y))^{n'-1}} \times \frac{x^{\text {sa}}\left( \frac{y}{(1-F(y))^{n'-1}}\right) }{x^{\text {sa}}(y)(1-F(y))}\\&= \limsup _{y \rightarrow \bar{c}} \left( \frac{(1-F(y))^{n'-2}}{x^{\text {sa}}(y) y} \right) \underbrace{\left( \frac{y}{(1-F(y))^{n'-1}}\right) x^{\text {sa}}\left( \frac{y}{(1-F(y))^{n'-1}}\right) }_{\mathrm{by\; assumption\; bounded\; as\;}y \rightarrow \bar{c}}\; \\&= 0 \end{aligned}$$
because \(x^{\text {sa}}(\bar{c}) > 0\) (since \(v'(0) = \infty \) and \(\bar{c}< \infty \)) and \(n' > 2\). Therefore, there exists \(\delta > 0\) such that
$$\begin{aligned} x^{\text {sa}}(y) (1-F(y)) - g_{n'}(y) > 0 \qquad \forall y \in [\bar{c}- \delta , \bar{c}). \end{aligned}$$
Because \(\lim _k g_k(y) = 0\) for any \(y \in (\underline{c}, \bar{c}]\), we can choose \(n'' \ge n'\) be such that \(g_{n''}(\underline{c}+\varepsilon ) < x^{\text {sa}}(\bar{c}-\delta )(1-F(\bar{c}-\delta ))\), which, because equilibrium strategies are nonincreasing, in turn implies
$$\begin{aligned} x^{\text {sa}}(y)(1-F(y)) - g_{n''}(y) > 0 \qquad \forall y \in [\underline{c}+\varepsilon , \bar{c}- \delta ]. \end{aligned}$$
We conclude from (36), (37), and (38) that (31) is satisfied for \(n = n''\), and again because \((g_k)_k\) is pointwise decreasing with respect to \(k\), (31) is satisfied for all \(n \ge n''\).\(\square \)
Appendix 2: Equilibrium with asymmetric private information
In this appendix, we analyze the two-player, independent-cost game in which players' marginal costs of effort are drawn from different distributions. In particular, we let \(c_i\) be distributed on \([\underline{c},\bar{c}]\) according to \(F_i,\) with continuous density \(f_i\), for \(i=1,2.\)
We look for equilibrium strategies \(g_1\) and \(g_2\). Our characterization begins for effort levels \(\gamma \) at which both \(g_1\) and \(g_2\) are strictly decreasing, so that they admit inverse functions \(\phi _1\) and \(\phi _2\). If player 1 with cost \(c_1\) takes effort \(\gamma \), then his payoff is
$$\begin{aligned} V_1(\gamma , c_1) = - c_1 \gamma + (1- F_2 (\phi _2(\gamma )))v(\gamma ) + \int \limits _{\underline{c}}^{\phi _2(\gamma )} v(g(y)) f_2(y) \,\hbox {d}y, \end{aligned}$$
and the first-order condition with respect to \(\gamma \) yields \( c_1=v'(\gamma )(1- F_2 (\phi _2(\gamma ))). \) Proceeding similarly for player \(2\), and noting that, for \(i=1,2\), \(c_i=\phi _i (\gamma ),\) we obtain the following system of two equations in the two unknowns \(\phi _1 (\gamma )\) and \(\phi _2 (\gamma ):\)
$$\begin{aligned} {\left\{ \begin{array}{ll} \phi _1 (\gamma )=v'(\gamma )(1- F_2 (\phi _2(\gamma )))\\ \phi _2 (\gamma )=v'(\gamma )(1- F_1 (\phi _1(\gamma ))). \end{array}\right. } \end{aligned}$$
The above system is analogous to condition (10) for the symmetric case. However, further complications arise when the system (40) fails to produce two functions \(\phi _1 (\gamma )\) and \(\phi _2 (\gamma )\) that are strictly decreasing throughout. The following example illustrates the adjustments necessary to identify equilibrium in such a circumstance.
Example 5 \(F_1(c_1)=\frac{10}{9} \left( c_1-\frac{1}{10}\right) \) and \(F_2(c_2)=\frac{100}{99}\left( (c_2 )^2-(\frac{1}{10})^2\right) \) on \([\frac{1}{10}, 1]\), with \(v(x) = 2\sqrt{x}\).
Player 2 has stochastically higher costs than player 1, so we might expect player 2 will be more likely to free ride. Note that \(1-F_1(c_1)=\frac{10}{9} \left( 1-c_1\right) \) and \(1-F_2(c_2)=\frac{100}{99}\left( 1-(c_2 )^2\right) .\) Solving the system (40), we find
$$\begin{aligned} \phi _1^H (\gamma ) = \frac{20000+9 \sqrt{\gamma }(-891 \gamma +\sqrt{4000000-3960000 \sqrt{\gamma }+793881 \gamma ^2})}{20000}, \; \end{aligned}$$
$$\begin{aligned} \phi _2^H (\gamma ) = \frac{891 \gamma -\sqrt{4000000-3960000 \sqrt{\gamma }+793881 \gamma ^2}}{2000}. \end{aligned}$$
Figure 2 plots \(\phi _1^H\) and \(\phi _2^H\). Because equilibrium strategies must be nonincreasing in cost, these functions are potentially relevant only where they are both nonincreasing (as functions of \(\gamma \)). Therefore, we expect that for cost sufficiently high, player 2 will drop out. Now, suppose \(\hat{\gamma }\) is the smallest positive effort of player 2, and this happens at cost \(\hat{c}_2\). From Fig. 2, we see when player 1 has cost less than \(\hat{c}_1\), he will face the standard problem and his optimal effort is found via (40), but with cost above \(\hat{c}_1\), player 1 must consider the possibility that player 2 exerts zero effort. This leads to a reformulation of player 1's objective function when he has cost larger than \(\hat{c}_1\). Now, the equilibrium dropout effort level for player 2 is found where he is just indifferent between efforts \(\hat{\gamma }\) and 0, having accounted for the recalculation of player 1's strategy for costs above \(\hat{c}_1\). As shown below, this indifference requires that the equilibrium dropout level \(\hat{\gamma }\) solve
$$\begin{aligned}&-\phi _2^H (\hat{\gamma }) \hat{\gamma }+\frac{10}{9}\left( 1-\phi _1^H (\hat{\gamma })\right) 2 \sqrt{\hat{\gamma }} \nonumber \\&\quad = -2\times \frac{100}{99}\left( 1-\left( \phi _2^H (\hat{\gamma }) \right) ^2\right) \frac{10}{9}{\mathrm {log}}\!\left( \, \phi _1^H (\hat{\gamma })\, \right) . \end{aligned}$$
It may be verified numerically that \(\hat{\gamma } \approx 11.21\). Also, to facilitate the description of equilibrium, let \(\hat{c}_1= \phi _1^H (\hat{\gamma }) \approx 0.28\) and let \(\hat{c}_2 = \phi _2^H (\hat{\gamma }) \approx 0.24\). Following the discussion of equilibrium, we demonstrate that the following functions describe an equilibrium:
$$\begin{aligned} g_1(c_1) = {\left\{ \begin{array}{ll} (\phi _1^H)^{-1} (c_1)&{}\quad {\mathrm{if}}\; c_1 \le \hat{c}_1, \\ \left( \frac{99}{100}\times \frac{1-(\hat{c}_2)^2}{c_1}\right) ^2 &{}\quad {\mathrm{if}}\; c_1 > \hat{c}_1; \end{array}\right. } \end{aligned}$$
$$\begin{aligned} g_2(c_2) = {\left\{ \begin{array}{ll} (\phi _2^H)^{-1}(c_2) &{}\quad {\mathrm{if}}\; c_2 \le \hat{c}_2, \\ 0 &{}\quad {\mathrm{if}}\; c_2 > \hat{c}_2, \end{array}\right. } \end{aligned}$$
where here \((\phi _2^H)^{-1}\) derives from the portion of \(\phi _2^H\) where \(\phi _2^H\) is a strictly decreasing function.
Fig. 2 Possible effort functions in Example 5
The behavior described by \(g_1\) and \(g_2\) above shows that the player with the worse (higher) cost distribution—player \(2\)—drops out for sufficiently large realizations of his cost, that is, when \(c_2 >\hat{c}_2\). Therefore, for effort levels smaller than \(\hat{\gamma }=g_2(\hat{c}_2)=g_1(\hat{c}_1)\), type \(c_1 > \hat{c}_1\) bears the full cost of provision and chooses his best effort taking into account that he will be the "best shot" if \(c_2 >\hat{c}_2,\) which occurs with probability \(1-F_2 (\hat{c}_2)=\frac{100}{99}(1-(\hat{c}_2)^2)\). In contrast, when \(c_1 \le \hat{c}_1\) and \(c_2 \le \hat{c}_2\), both players provide effort according to condition (40), which has solutions \(\phi _1^H\) and \(\phi _2^H\). Finally, we also see that \(g_2(c) < g_1(c)\) for all \(c\), in accord with the initial expectation that the higher-cost player engages in more free riding.
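The numerical values quoted for Example 5 can be reproduced with a short root-finding script that solves the indifference condition (41) using the closed forms for \(\phi _1^H\) and \(\phi _2^H\) given above; the bracketing interval passed to the solver is my own choice.

```python
import numpy as np
from scipy.optimize import brentq

def _root_term(g):
    return np.sqrt(4_000_000 - 3_960_000 * np.sqrt(g) + 793_881 * g ** 2)

def phi1(g):   # phi_1^H from the example
    return (20_000 + 9 * np.sqrt(g) * (-891 * g + _root_term(g))) / 20_000

def phi2(g):   # phi_2^H from the example
    return (891 * g - _root_term(g)) / 2_000

def indifference(g):
    # left-hand side minus right-hand side of condition (41)
    lhs = -phi2(g) * g + (10 / 9) * (1 - phi1(g)) * 2 * np.sqrt(g)
    rhs = -2 * (100 / 99) * (1 - phi2(g) ** 2) * (10 / 9) * np.log(phi1(g))
    return lhs - rhs

gamma_hat = brentq(indifference, 5.0, 20.0)         # bracket chosen by inspection
print(gamma_hat, phi1(gamma_hat), phi2(gamma_hat))  # approx. 11.21, 0.28, 0.24
```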
Proof that the strategies \(g_1\) and \(g_2\) of Example 5 form an equilibrium. We begin by establishing that \(g_1\) is optimal given \(g_2\). First note that for efforts \(\gamma \) larger than \(\hat{\gamma }\), the appropriate expression for payoffs is \(V(\gamma ,c_1)\) in (39), with derivative
$$\begin{aligned} \frac{\partial V(\gamma ,c_1)}{\partial \gamma }=-c_1 + v'(\gamma )(1-F_2 (\phi _2 (\gamma ))). \end{aligned}$$
Hence, for \(c_1 \le \hat{c}_1, g_1\) solves the necessary first-order condition for optimality, since \(\phi _1^H\) solves (40). For \(c_1 > \hat{c}_1, g_1\) prescribes efforts smaller than \(\hat{\gamma }\). In this case, the appropriate expression for payoffs is
$$\begin{aligned} \tilde{V}(\gamma ,c_1)=-c_1 \gamma + \int \limits _{\underline{c}}^{\hat{c}_2} v(g_2(c_2))f_2(c_2)\hbox {d}c_2+(1-F_2 (\hat{c}_2))v(\gamma ), \end{aligned}$$
with derivative
$$\begin{aligned} \frac{\partial \tilde{V}(\gamma ,c_1)}{\partial \gamma }=-c_1 + v'(\gamma )(1-F_2 (\hat{c}_2)). \end{aligned}$$
As can easily be checked by substituting the functional forms for \(v\) and \(F_2\) in this example, for this range of costs and efforts \(g_1\) also solves the necessary condition for optimality, which now reads as
$$\begin{aligned} c_1=v'(g_1(c_1))(1-F_2 (\hat{c}_2)). \end{aligned}$$
We now establish that these necessary conditions are also sufficient. Take \(c_1<\hat{c}_1\) and consider a deviation to \(\gamma ^D < \hat{\gamma }\). The difference in utility brought about by the deviation is then
$$\begin{aligned} \tilde{V}(\gamma ^D,c_1) - V(g_1 (c_1),c_1)=- \left[ ~\int \limits _{\gamma ^D}^{\hat{\gamma }} \frac{\partial \tilde{V}}{\partial \gamma }(y,c_1) \, \hbox {d}y + \int \limits _{\hat{\gamma }}^{g_1 (c_1)} \frac{\partial V}{\partial \gamma }(y,c_1) \, \hbox {d}y \right] . \end{aligned}$$
Now, for \(y>\hat{\gamma },\) we have
$$\begin{aligned} \frac{\partial V}{\partial \gamma }(y,c_1) =-c_1 + v'(y)(1-F_2 (\phi _2 (y)))= -c_1 + \phi _1^H (y), \end{aligned}$$
by (40) and the formulation of \(g_1\) in this example. And since \(y<g_1(c_1),\) we obtain \(\partial V / \partial \gamma >0.\) Similarly, for \(y<\hat{\gamma },\) we have
$$\begin{aligned} \frac{\partial \tilde{V}}{\partial \gamma }(y,c_1)&= -c_1 + v'(y)(1-F_2 (\hat{c}_2)) \\&= -c_1 + \phi _1 (y)\quad {\mathrm{(by\; (42))}} \\&> 0, \quad (\mathrm{since}\; y<\hat{\gamma } < g_1(c_1)). \end{aligned}$$
Therefore, \(\tilde{V}(\gamma ^D,c_1) - V(g_1 (c_1),c_1)<0\) and the deviation is strictly unprofitable. Similar considerations establish the optimality of \(g_1 (c_1)\) for \(c_1 > \hat{c}_1\) and for deviations to \(\gamma ^D> \hat{\gamma }\).
Now, turning our attention to player 2, one can establish the optimality of \(g_2 (c_2)\) for \(c_2 < \hat{c}_2\) as above; differences in the proof arise only for \(c_2 \ge \hat{c}_2.\) Type \(\hat{c}_2\) must be indifferent between efforts \(0\) and \(\hat{\gamma }.\) Thus, the condition to be satisfied is
$$\begin{aligned} -\hat{c}_2 \hat{\gamma }+(1-F_1 (\hat{c}_1)) v(\hat{\gamma }) +\int \limits _{\underline{c}}^{\hat{c}_1} v(g_1(c_1))f_1(c_1) \, \hbox {d}c_1= \int \limits _{\underline{c}}^{\bar{c}} v(g_1(c_1))f_1(c_1) \, \hbox {d}c_1, \end{aligned}$$
which, for \(v=2\sqrt{x}\) and our \(g_1\), reduces to (41). Note also that if \(\hat{c}_2\) weakly prefers \(0\) to \(\hat{\gamma },\) then all types \(c_2>\hat{c}_2\) strictly prefer to exert no effort. And this is true not just for effort \(\hat{\gamma }\), but for any effort larger than zero. Thus, the proof is complete if we can show that \(\hat{c}_2\) weakly prefers \(0\) to \(\gamma \in (g_1(\bar{c}),\hat{\gamma }).\) The utility of such an effort is
$$\begin{aligned} V^D(\gamma ,\hat{c}_2)=-\hat{c}_2 \gamma + \int \limits _{\underline{c}}^{\phi _1(\gamma )} v(g_1(c_1)) \, dF_1+(1-F_1 (\phi _1(\gamma )))v(\gamma ), \end{aligned}$$
with derivative with respect to \(\gamma \) equal to
$$\begin{aligned} -\hat{c}_2 + (1-F_1 (\phi _1(\gamma )))v'(\gamma ). \end{aligned}$$
We now use this derivative to show that \(V^D\) has no interior maximum. Substituting the functional forms for this example, the above derivative is equal to
$$\begin{aligned}&-\hat{c}_2 + \frac{10}{9}\left( 1-\frac{100}{99}\frac{1-(\hat{c}_2)^2}{\sqrt{\gamma }} \right) \frac{1}{\sqrt{\gamma }}\\&\quad =\frac{-1000+1000(\hat{c}_2)^2+990\sqrt{\gamma }-891(\hat{c}_2) \gamma }{891 \gamma }\\&\quad =\frac{-(\hat{c}_2)}{\gamma }\left( \sqrt{\gamma }-\frac{5(11-\sqrt{11}\sqrt{11-40 \hat{c}_2+40(\hat{c}_2)^3})}{99 \hat{c}_2}\right) \\&\qquad \times \left( \sqrt{\gamma }-\frac{5(11+\sqrt{11}\sqrt{11-40 \hat{c}_2+40(\hat{c}_2)^3})}{99 \hat{c}_2}\right) \\&\quad =\frac{-(\hat{c}_2)}{\gamma }\left( \sqrt{\gamma }-\frac{5(11-\sqrt{11}\sqrt{11-40 \hat{c}_2+40(\hat{c}_2)^3})}{99 \hat{c}_2}\right) \left( \sqrt{\gamma }- \sqrt{\hat{\gamma }}\right) , \end{aligned}$$
where the last equality uses \(\hat{c}_2 = \phi _2^H (\hat{\gamma })\). Hence, \(V^D(\gamma ,\hat{c}_2)\) has no interior maximum for \(\gamma \in (g_1(\bar{c}),\hat{\gamma }):\) the only critical point is
$$\begin{aligned} \gamma =\left( \frac{5(11-\sqrt{11}\sqrt{11-40 \hat{c}_2+40(\hat{c}_2)^3})}{99 \hat{c}_2}\right) ^2, \end{aligned}$$
and it is a minimum. Now, note that \(\gamma = g_1(\bar{c})\) is clearly worse than taking no effort. Indeed, player 2 cannot benefit from this deviation because the public good is produced in exactly the same quantity as if 2's effort were zero, but now player 2 pays the cost of a strictly positive effort. Finally, note that the utility of effort \(\hat{\gamma }\) is the same obtained with zero effort, by the indifference condition (41). Thus, type \(\hat{c}_2\) does not want to deviate and the proof is complete.
1. Department of Economics, 206 Tilton Hall, Tulane University, New Orleans, USA
2. Department of Economics, 3136 Sproul Hall, University of California, Riverside, USA
Barbieri, S. & Malueg, D.A. Econ Theory (2014) 56: 333. https://doi.org/10.1007/s00199-013-0787-6 | CommonCrawl |
Adopting the Hirschman–Herfindahl Index to estimate the financial sustainability of Vietnamese public universities
Trung Thanh Le1,
Thuy Linh Nguyen2,
Minh Thong Trinh ORCID: orcid.org/0000-0001-8984-56133,4,
Mai Huong Nguyen5,
Minh Phuong Thi Nguyen1,6 &
Hiep-Hung Pham ORCID: orcid.org/0000-0003-3300-77704,7
Humanities and Social Sciences Communications, volume 8, Article number: 247 (2021)
Over several decades, the Vietnamese government has increasingly cut its investment in the public higher education system and has also introduced a cost-sharing mechanism. Under this scheme, Vietnamese public universities have been seeking other sources of revenue. Despite the bold emphasis on the need for revenue diversification in higher education in Vietnam, there is little empirical evidence of the status quo of Vietnamese public higher education finance. The purpose of this paper was to fill this research gap by using the Hirschman–Herfindahl Index to estimate the degree of financial diversity in 51 public universities in Vietnam between 2015 and 2017. Our findings revealed that all institutions in this study were unsustainable due to their weak financial diversity. Suggestions for policy makers and university leaders that may enhance financial sustainability include the adoption of performance-based financial allocations and the implementation of capacity-building programs for universities with regard to fund-raising and entrepreneurship skills.
Previous decades have shown a steady shift in higher education finance policies across the world, from fully free higher education to cost-sharing systems (Woodhall, 1988; Heller and Rogers, 2006; Finney, 2014; Pham and Vu, 2019). University costs are now shared by a variety of stakeholders including governments, students, parents, industries, and donors. Higher education has become a quasi-market and universities are tending to behave like private firms (Marginson, 2013). Universities are now more active in seeking sources of revenue other than public funding. Thus, the degree of financial diversification reflects the health or sustainability of higher education institutions (De Dominicis et al., 2011; Garland, 2020).
Vietnam is not immune to the trend described above. Since the first cost-sharing scheme was introduced in 1997 (Vietnamese Government, 1997), the Vietnamese government has implemented a number of direct and indirect cost-sharing-linked regulations and programs, including allowing universities to levy tuition fees from students (Vietnamese Government, 2010); launching a national student loan scheme (Prime Minister, 2007); encouraging donations/philanthropy (Ministry of Education and Training, 2018) and technology transfers (Vuong, 2018; Vietnamese Government, 2018).
An emerging theme that has arisen in public discourse is the question of the financial sustainability or financial health of the Vietnamese public higher education system. For instance, talking to local media, the Minister of Education, Phung Xuan Nha, warned that Vietnamese higher education institutions that are over-dependent on tuition fees and state subsidies may face many risks (Phung, 2019). In the same vein, Bui (2019) argued that successfully accessing revenue other than public subsidies has become a crucial part of the sustainable development of higher education institutions in Vietnam.
Despite the emphasis on the need for revenue diversification in current higher education practice in Vietnam, there is little empirical evidence of the status quo of Vietnamese public higher education financing. Thus, the purpose of this paper was to fill the above research gap by using the Hirschman–Herfindahl Index (HHI) to estimate the degree of financial diversity of 51 public universities in Vietnam between 2015 and 2017. The HHI is a well-established measurement, which has been widely employed to measure the degree of diversification of financial revenue of organizations, ranging from the private sector to the public sector (Suyderhoud, 1994), the non-profit sector and higher education (Carroll and Stater, 2009; Calabrese, 2012; Mayer et al., 2014). Using the HHI, we expected to obtain a preliminary picture of the financial health or financial sustainability of public higher education in Vietnam. Our study was similar to others that also aimed to compute the degree of financial health or financial sustainability in higher education in other countries, such as England (see Garland, 2020) and other European countries (De Dominicis et al., 2011).
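As a concrete illustration of the metric used in this study, the snippet below computes the HHI in its standard form, the sum of squared revenue shares; the revenue categories and figures are invented for illustration only and are not drawn from the study's data, and the exact normalization applied in the paper may differ.

```python
def hhi(revenues):
    """Hirschman-Herfindahl Index of revenue concentration: sum of squared shares.
    Ranges from 1/k (revenue spread evenly over k sources) up to 1 (a single source)."""
    total = sum(revenues)
    shares = [r / total for r in revenues]
    return sum(s ** 2 for s in shares)

# hypothetical university: state subsidy, tuition fees, research contracts, donations
revenues = [40.0, 140.0, 15.0, 5.0]   # illustrative figures only
print(f"HHI = {hhi(revenues):.3f}")   # higher values indicate heavier reliance on few sources
```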
The paper is organized as follows. In the next section, the literature review, the need for diversifying revenue sources for public higher education institutions, from elite to massive higher education in Vietnam, governmental expenditure on higher education, and cost-sharing policies in higher education are discussed. Subsequently, a brief description of the present study, including an introduction to the HHI and data collection methods is provided. The "Results" section presents our estimations, using the HHI in a range of scenarios. The paper ends with a discussion and conclusions.
The need to diversify revenue sources for public higher education institutions
Traditionally, public universities received full financial support from the government, and students undertook their higher education free of charge. However, a gradual decrease in government funding for public higher education has occurred worldwide over the past few decades (Tandberg, 2010; Joaquim and Cerdeira, 2020). This change means that cost-sharing policies or the need for public universities to seek other sources of income, such as tuition fees, donations, and knowledge transfer services, have become necessary for public universities to maintain their operations (Ayalew, 2013; Yip et al., 2020). The concept of cost-sharing has appeared in the higher education reform agendas of many countries, including European nations and the US (Clark, 1998; Etzkowitz et al., 2000) and developing/emerging countries such as Jordan (Kanaan et al., 2011). Compared to public universities in developed countries, public universities in developing/emerging countries are under higher pressure to engage in cost-sharing, partly because developing and emerging governments are facing more budget constraints than their counterparts in the developed world (World Bank, 2000).
From elite to massive higher education in Vietnam
The Vietnamese education system witnessed dramatic growth in the late 1980s, along with Doi Moi (Renovation) (Pham and Vuong, 2019). In 2018, Vietnam had 236 higher education institutions, of which 171 were public universities and 65 were private institutions (Ministry of Education and Training, 2019). In comparison, in 1987, the elite Vietnamese higher education system had only 101 public universities and no private ones. The number of enrolled students also rose dramatically from 1987 to 2018. In 2018, there were 2,162,106 university students in Vietnam (Ministry of Education and Training, 2019), a 16-fold increase over 1987, when there were only 133,000 students (Pham, 2011). With regard to the Gross Enrollment Ratio for higher education, the figure for Vietnam increased significantly, from 9.5% in 1999 to 28.5% in 2017 (UNESCO, 2020). According to Trow's (2008) classification of higher education systems, higher education in Vietnam, by the end of the 2010s, was categorized as a massive system.
According to Vuong et al. (2019), Vietnam's higher education system followed the former Soviet model in which most universities focused on teaching functions rather than research. Contrary to the former model, the current approach regards both teaching and research as indispensable functions of universities (Trinh et al., 2020). However, most universities are still teaching-oriented institutions; only a few, such as the Vietnam National University-Hanoi, the Vietnam National University-Ho Chi Minh City, and the Hanoi University of Science and Technology, claim to be research-oriented.
Governmental expenditure on higher education
With regard to higher education in particular and education in general, the Vietnamese government has increased its expenditure continuously over the past decade in terms of absolute numbers (Pham and Vuong, 2019). However, this increase has not kept pace with the ongoing massification of higher education. According to recent statistics (see Ministry of Education and Training, 2019; World Bank, 2019), the governmental expenditure per student in Vietnam as a percentage of GDP per capita ranged from 21.3% to 30.5%. Due to public budget constraints, it is anticipated that the Vietnamese government will not be able to raise its current expenditure to further enhance the governmental expenditure per student as a percentage of GDP per capita.
Cost-sharing policies in higher education
From the fully subsidized financing system that applied in the early 1990s, the Vietnamese government now has a cost-sharing mechanism for public universities. According to Pham and Vu (2019), this shift may stem from the massification of higher education as mentioned above. We outline below some key cost-sharing policies in public higher education in Vietnam that have been introduced since the 1990s.
The first agenda for cost-sharing schemes
The initiation of cost-sharing in education has played a key role in the education policy agenda in Vietnam since the 1990s. However, Vietnamese authorities and policy-makers seldom recognize or use the term "cost-sharing" (chia sẻ chi phí in Vietnamese) in legal and official documents. Instead, "socialization" (xã hội hóa in Vietnamese) is used as a euphemistic term to refer to cost-sharing (Pham and Vu, 2019). The "socialization" term was first officially mentioned in the government's Resolution No. 90/CP in 1997 (Vietnamese Government, 1997). Under the "socialization" scheme, public universities in Vietnam must rely not only on financial allocation from the government but also on other sources such as tuition fees, donations and knowledge transfer. A plausible explanation of the use of the term "socialization" (xã hội hóa) rather than "cost-sharing" (chia sẻ chi phí) lies in Vietnam's socialist-oriented market economy. Thus, the Vietnamese government developed the new term "socialization" rather than adopting "cost-sharing", which originated in capitalist economies.
The tuition fee scheme for public higher education
Over the past 30 years, following the scheme of "socialization", higher education in Vietnam has transitioned from a fully free system to a cost-sharing one in which student fees have become a significant source of university income. By the academic year of 1998–1999, a public university student was required to pay a monthly tuition fee ranging from VND 50,000 to VND 180,000 (US$ 3.60 to US$ 12.95 in 1998), depending on his/her major. Since then, the tuition cap regulated by the government has increased gradually year by year. By the academic year of 2019–2020, the tuition fee at public universities in Vietnam ranged from VND 1,850,000 to VND 4,500,000 (US$ 80.43 to US$ 195.65 in 2019) per month, depending on the student's major (Prime Minister, 2015).
The National Student Loan program
Similar to many countries across the world, such as the USA and Australia (Salmi, 2001), the Vietnamese government also introduced a loan program for students to ensure accessibility and equality for underprivileged students faced with tuition fees. Following its introduction in 2007, underprivileged students in Vietnam could access the loan program to cover their study costs (Prime Minister, 2007). At its launch in 2007, the program allowed students to borrow up to VND 800,000 (equal to US$ 49.91 in 2007) per month at a discounted interest rate of only 6% per year (Prime Minister, 2007). The loan limit has been increased gradually since then to keep pace with the inflation rate. By 1 December 2019, the monthly loan limit was VND 2,500,000 (US$ 107.87 in 2019). Between 2007 and 2016, total loans in the program reached over VND 56 trillion and over 3.3 million students had been granted assistance (The State Bank of Vietnam, 2016).
The social and charity funds
Charity funding is also an important source of income in Western countries, which helps to fund both higher education institutions and students (Wharton et al., 2016). In Vietnam, universities are also allowed to receive social and charity funds. Decree No. 30/2012/NĐ-CP (Vietnam Government, 2012) is regarded as the first comprehensive policy that aimed to mobilize donations from society for the higher education sector as well as other public service sectors (Pham and Vu, 2019). This decree includes notable features on tax deductions and student rights when receiving additional state allocations outside of self-fundraising. Although we do not have any statistics available on Vietnamese universities' revenues stemming from social and charity funds, anecdotal evidence shows that this type of revenue only contributes a very small share of the total (Pham and Vu, 2019). According to the Institute for Studies of Society, Economy and Environment (2015), Vietnamese people appear to favor donating their money for religious purposes rather than educational purposes.
Knowledge transfer
Like many other governments, including Israel (World Intellectual Property Organization, 2012) and China (Fuller, 2019), the Vietnamese government also has policies focusing on transferring knowledge and technology to society and the private sector with the aim of diversifying the sources of revenue for higher education institutions. Notable policies include a waiver on corporate income tax for investment in research and development (Vietnamese National Assembly, 2013), and permission for public universities to use their properties to contribute capital to joint ventures (Vietnamese Government, 2006). However, the efficiency of the above incentive policies has so far been limited, as the financial contribution from knowledge transfer still represents only a modest share of the overall revenue of higher education institutions in Vietnam (Nguyen, 2017).
Self-paid students at public universities
Given the increasing constraints on public budgets, along with the continuing demands for university degrees from the young population, by the mid-2010s, the Vietnamese government officially allowed public universities to open a second track to enroll fully self-paid students, i.e., students who pay tuition fees covering the whole instruction cost (Ministry of Education and Training, 2014). Pham and Vu (2019) called this policy "Dual Fee Track System in Public Institutions", which is, to a greater or lesser extent, similar to the dual fee track system described by Johnstone (2004). To further this policy, in 2014, the Vietnamese government also allowed public universities to opt for only "second track" programs (Vietnamese Government, 2014). Under this policy, 23 selected public universities agreed to stop receiving recurrent subsidies from the government; instead, they offer only self-paid programs and, in return, are granted more autonomy in various aspects of their functioning, including academic, organizational, staffing and financial matters. These 23 public universities are described as "autonomous" or "finance-autonomous" higher education institutions.
The present study
The Hirschman–Herfindahl Index
The origin of the HHI and its application to estimate the financial sustainability of universities
The HHI was first introduced to measure the concentration or inequality of trade and industry (Rhoades, 1995). The index has also been used to calculate the revenue concentration, revenue diversification, and financial sustainability of organizations in different sectors (e.g., see Chikoto et al., 2016). These studies regard an organization with a high level of revenue concentration or a low level of revenue diversification as having a low level of financial sustainability. In contrast, organizations with high levels of financial sustainability tend to exhibit low levels of revenue concentration or high levels of revenue diversification.
In the field of higher education, several authors have used the HHI to evaluate the revenue concentration, revenue diversification, or financial sustainability of universities in different contexts, including Europe (Kasperski and Holland, 2013), the US (Webb, 2015), and Malaysia (Nik Ahmad et al., 2019). In this study, we followed these studies, using HHI to evaluate the financial sustainability of universities in Vietnam.
The formula of the HHI
Initially, the HHI was used by the US Department of Justice to measure the market concentration or competition (U.S. Department of Justice, 2010). Specifically, the HHI is calculated as the sum of the squares of the market shares of each firm participating in a certain market.
The HHI ranges between 0 and 10,000. A market would be classified as diverse and highly competitive if its HHI is <1500. In contrast, a market would be categorized as highly concentrated if its HHI is more than 2500. Between the two ends of the spectrum, a market for which the HHI is between 1500 and 2500 is classified as moderately competitive. For instance, if five firms are participating in a market with shares of 30%, 20%, 15%, 17% and 18% respectively, then the HHI score is $30^2 + 20^2 + 15^2 + 17^2 + 18^2 = 2138$. Thus, the market is categorized as moderately competitive.
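As a minimal illustration (an added sketch, not taken from the original study), the arithmetic of the example above can be checked in a few lines of Python:

# Raw HHI for the five-firm market example; shares are percentages summing to 100.
shares = [30, 20, 15, 17, 18]
hhi = sum(s ** 2 for s in shares)
print(hhi)  # 2138

# Classification bands described above (U.S. Department of Justice guidelines).
if hhi < 1500:
    label = "diverse / highly competitive"
elif hhi <= 2500:
    label = "moderately competitive"
else:
    label = "highly concentrated"
print(label)  # moderately competitive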
The HHI is also used to estimate the degree of financial diversity or financial sustainability of an organization. Instead of using market shares as in the above example, in calculating financial diversity, the shares of different income sources for a certain organization would be used (Chikoto et al., 2016; Garland, 2020). Specifically, the level of financial diversity is estimated using the following formula:
$$\mathop {\sum}\nolimits_{i = 1}^{N} {\left( {r_i/R} \right)^2},$$
where N is the number of income sources, $r_i$ is the income from the ith source, and R is the total income (revenue) from all sources. In calculating financial diversity/sustainability, some authors such as Garland (2020) standardized the final HHI score from a range of [0–10,000] to [0–1].
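A minimal Python sketch of this standardized (0–1) version of the formula follows; the revenue figures below are hypothetical and serve only as an illustration:

# Normalized HHI: squared shares of total revenue, already on the [0, 1] scale.
def hhi(revenues):
    total = sum(revenues)
    return sum((r / total) ** 2 for r in revenues)

income = {
    "state allocation for instruction": 40.0,  # hypothetical amounts
    "tuition and fees": 50.0,
    "research and development": 7.0,
    "borrowing and donations": 3.0,
}
print(round(hhi(income.values()), 3))  # about 0.416 for this hypothetical split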
In this study, we followed this standardized approach. Table 1 represents the categorization of HHI in two respects; market concentration/competition and financial diversity/sustainability.
Table 1 Classification of market concentration and financial diversity according to the value of HHI.
Aggregation and disaggregation forms of HHI
The differences in terms of income inflows may result in different values in the HHI (Chikoto et al., 2016). For instance, in the non-profit sector, donations may be aggregated into a single income inflow or disaggregated into two sub-inflows: individual and institutional donations. Similarly, government funding may be aggregated into a single flow or disaggregated into three sub-inflows: federal, state, and local (Kerlin, 2006).
In the context of this study, we classified the income of public universities in Vietnam into four flows: (1) State allocation for instruction; (2) Tuition and fees; (3) Research and Development; and (4) Borrowing and Donations. These four flows may be further divided into 12 sub-inflows: (1.1) State recurrent subsidies (for instruction); (1.2) Earmarked non-recurrent allocation; (1.3) Capital investment; (2.1) Tuition and fees from domestic students; (2.2) Tuition and fees from international students; (3.1) State recurrent subsidies (for research and development); (3.2) State projects; (3.3) Non-state projects; (3.4) Technology transfer and service; (4.1) State borrowing; (4.2) Non-state borrowing; and (4.3) Donation. Figure 1 represents the four flows of income of public universities in Vietnam and their 12 sub-inflows, accordingly.
Four flows and 12 sub-inflows of Public Higher Education Institutions in Vietnam.
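The effect of the level of granularity can be illustrated with a short Python sketch (not part of the original study); the sub-inflow amounts below are hypothetical and only serve to show that a disaggregated HHI is never larger than its aggregated counterpart:

def hhi(values):
    total = sum(values)
    return sum((v / total) ** 2 for v in values)

# Hypothetical amounts for the 12 sub-inflows listed above (same units throughout).
sub_inflows = {
    "1.1 state recurrent subsidies (instruction)": 24.0,
    "1.2 earmarked non-recurrent allocation": 5.0,
    "1.3 capital investment": 4.0,
    "2.1 tuition and fees, domestic students": 52.0,
    "2.2 tuition and fees, international students": 1.0,
    "3.1 state recurrent subsidies (R&D)": 3.0,
    "3.2 state projects": 2.0,
    "3.3 non-state projects": 1.0,
    "3.4 technology transfer and service": 2.0,
    "4.1 state borrowing": 2.0,
    "4.2 non-state borrowing": 1.0,
    "4.3 donation": 3.0,
}

# Group the 12 sub-inflows into the four flows of Fig. 1.
flows = {
    "state allocation for instruction": ["1.1", "1.2", "1.3"],
    "tuition and fees": ["2.1", "2.2"],
    "research and development": ["3.1", "3.2", "3.3", "3.4"],
    "borrowing and donations": ["4.1", "4.2", "4.3"],
}
aggregated = [sum(v for k, v in sub_inflows.items() if k.split()[0] in prefixes)
              for prefixes in flows.values()]

print(round(hhi(sub_inflows.values()), 3))  # about 0.335 (disaggregated)
print(round(hhi(aggregated), 3))            # about 0.400 (aggregated)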
Prior studies have often been based on secondary data to compute the HHIs. For instance, Garland (2020) used the Higher Education Statistics Agency (HESA)'s database to estimate HHIs for 102 universities in England. In the same vein, using the American National Center for Charitable Statistics (NCCS), Chikoto et al. (2016) computed HHIs to explore the financial volatility and growth results of non-profit organizations.
Since Vietnam does not yet have an aggregate database similar to HESA's or NCCS's, we had to collect data directly from the Vietnamese universities. Data collection started in April 2019 and finished in September 2019. We sent emails to a convenience sample of 100 public universities to ask them to assign a manager (normally, the head of the finance/accounting department or the deputy rector in charge of finance/accounting), who could provide us with data on their different sources of revenue in the most recent years (see Appendix 1). As the data collection started in April 2019, most universities did not yet have data available for 2018; they therefore provided us with data for 2017 and the two preceding years, i.e., 2015 and 2016.
Sample profile
Five months after the survey was first delivered, we had received responses from 51 universities, which implied a response rate of 51%. Table 2 illustrates the profiles of these 51 surveyed universities.
Table 2 Descriptive profile of 51 surveyed universities.
The surveyed universities were located across all three regions of Vietnam. More than half of the studied universities were located in Northern Vietnam, with 27 institutions, accounting for 53% of the total. This was followed by 16 universities in Southern Vietnam (31%). This result was reasonable as most Vietnamese higher education institutions are located in the Red River Delta and the Southeast region (Le, 2017). The numbers of research-oriented universities and teaching-oriented universities were almost the same, at 26 (51%) and 25 (49%), respectively.
Shares of four income inflows and 12 income sub-inflows in 2015–2017
Appendix 1 provides the descriptive statistics for the income flows (and sub-inflows) of the 51 surveyed Vietnamese universities, including means, standard deviations (SDs) and respective percentages. As evident in Appendix 1, tuition fees, especially those from domestic students, have played an increasing role, forming the majority of total income for the surveyed universities. Following tuition fees, state allocations for instruction also contributed a significant share to the overall income of the surveyed universities. Specifically, this inflow ranged from 30.02% (2017) to 35.98% (2015). A closer look at the three sub-inflows of the state allocation for instruction costs shows that the state recurrent subsidy (ranging from 22.30% to 25.32%) was more important than the two other sub-inflows, i.e., earmarked non-recurrent allocations (ranging from 4.44% to 5.60%) and capital investment (ranging from 2.79% to 5.06%). Compared to state allocations for instruction costs and tuition and fees, the income inflows from research and development (and its four sub-inflows) and borrowing and donations (and its three sub-inflows) played relatively modest roles. Specifically, research and development only ranged from 7.09% to 8.8% of the total income of the 51 surveyed universities. Figure 2 shows the shares of different flows and sub-flows of income of the 51 surveyed universities in this study between 2015 and 2017. For the details of our data, please refer to Appendix 2.
Shares of financial income of 51 Vietnamese universities 2015–2017: aggregation and disaggregation approaches.
Estimation of HHI
Based on the data provided in Appendix 1, we estimated HHIs using two different approaches: aggregation and disaggregation. Specifically, the average HHI of 51 surveyed universities between 2015 and 2017 was 0.559 (SD = 0.018) when the aggregation approach was used. The respective figure for the disaggregation approach was 0.479 (SD = 0.022).
A closer look at the detailed estimation provides a comprehensive picture of the financial diversity/sustainability of different universities. Thus, Fig. 3 illustrates the distribution of average HHI scores in the 51 Vietnamese universities between 2015 and 2017 according to the two approaches: aggregated and disaggregated. As shown in Fig. 3, under both the aggregation and the disaggregation approach, we did not find any university with an HHI lower than 0.25. This implies that all surveyed universities in this study should be categorized as having weak financial diversity or sustainability (all HHIs > 0.25). We found three universities with extremely weak financial diversity or sustainability (HHIs using the aggregated and disaggregated approaches were both higher than 0.75).
Distribution of average HHI scores in 51 Vietnamese universities 2015–2017.
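The banding applied above can be expressed as a small helper function; this is an illustrative sketch only, using the cut-off values of 0.25 and 0.75 mentioned in the text:

def diversity_band(hhi_score):
    if hhi_score > 0.75:
        return "extremely weak financial diversity"
    if hhi_score > 0.25:
        return "weak financial diversity"
    return "moderate or strong financial diversity"

# Roughly the two sample means reported above, plus a hypothetical extreme case.
for score in (0.48, 0.56, 0.80):
    print(score, "->", diversity_band(score))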
HHIs were also estimated according to different types of universities. In this study, we categorized universities according to their age (over or under 50 years old), total enrollment (over or under 10,000 students on average between 2015 and 2017), orientation (teaching-oriented or research-oriented), and location (Northern or Central and Southern). Data on universities' ages and locations were collected from universities' websites. Data on universities' total enrollment were provided by the universities' representatives along with other data regarding their financial flows. Data on university orientation were extracted from a recent report of top Vietnamese universities in terms of research performance (Thuy Nga, 2020): those included in this list were identified as research-oriented and those outside this list were identified as teaching-oriented. Table 3 provides the HHIs in both aggregated and disaggregated approaches according to university type: over 50-year-old vs. under 50-year-old; over 10,000 students vs. under 10,000 students; teaching-oriented vs. research-oriented; and Northern vs. Central and Southern. As shown in Table 3, in general, there are no significant differences between different types of universities with regard to HHI, either in aggregation or disaggregation approaches (all p-values > 0.05, except the p-value pertaining to the HHI aggregation approach between teaching-oriented and research-oriented universities).
Table 3 Estimation of HHIs according to university type.
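A sketch of how such group comparisons could be run is given below. The excerpt reports p-values but does not name the statistical test, so an independent-samples t-test is assumed here, and the HHI scores are hypothetical:

from scipy import stats

teaching_oriented = [0.55, 0.61, 0.58, 0.49, 0.63]  # hypothetical HHI scores
research_oriented = [0.52, 0.47, 0.57, 0.50, 0.54]

t_stat, p_value = stats.ttest_ind(teaching_oriented, research_oriented)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # here p > 0.05, i.e., no significant difference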
In a cost-sharing context, higher education institutions across the world must seek additional sources of revenue to compensate for budget deficits. The more diverse the sources from which a university can obtain revenue, the more sustainable its financial health. Vietnam is not immune to this trend. In this study, we followed some recent authors (e.g., De Dominicis et al., 2011; Garland, 2020; Schiller and Liefner, 2007), using the HHI as a proxy for the level of financial sustainability of 51 public universities in Vietnam between 2015 and 2017. Our computations indicated that, regardless of the approach used to compute the HHI (i.e., aggregation or disaggregation), Vietnamese universities are confronting an alarming situation in terms of financial health. Specifically, our findings indicate that all Vietnamese public universities included in this study showed weak financial diversity. On average, the value of the HHI between 2015 and 2017, in aggregation form, was 0.56, whereas the respective figure in disaggregation form was 0.48. Both values are significantly higher than 0.25, which is the cut-off point between two levels of financial diversity: weak and moderate. These findings also show that the financial sustainability of Vietnamese universities appears to be much lower than in other countries. For instance, calculations involving 200 research-based European universities yielded an average HHI value of 0.3 (De Dominicis et al., 2011). A similar computation with 102 public and non-specialist universities in England yielded an average HHI value of 0.42 (Garland, 2020).
A closer look at the detailed results may provide a plausible explanation for the current state of financial health of Vietnamese universities. Specifically, it appears that Vietnamese universities are over-relying on tuition fees from students, while the financial contribution from the government is modest and other sources (such as technology transfer and service or donations) are even more insignificant (see Appendix 1 and Fig. 2).
To understand this phenomenon, we must take into consideration the actual context of Vietnam.
First, the large share of tuition fees and the small share of governmental allocations in total university revenues may be interpreted as follows: In 2005, due to budget constraints, the Vietnamese government introduced a new financial subsidy mechanism based on historical data rather than a student enrollment quota. In parallel, due to the increasing demands of the younger generation for higher education, public universities started to open their doors to enroll more students (see Nguyen, 2020). Government subsidies to public universities have continued to increase over the years but the growth of revenue from tuition fees has outpaced them. Thus, the government subsidy share is shrinking while the share of tuition fees is growing.
Second, the small contribution from research and development revenue is understandable as it reflects the chronically peripheral role of research in Vietnamese universities (Q. H. Vuong, 2019; Vuong et al., 2021). Traditionally, the academic sector in Vietnam followed the former Soviet model with two separated types of institutions: research institutes and teaching universities (Nguyen, 2020; Trinh et al., 2020). While the former specifically focused on research activities, the latter oversaw teaching at the post-secondary level and prepared future personnel for state organizations. Although the current legislation (Vietnamese National Assembly, 2012, 2018) has declared teaching, research, and knowledge transfer as the three missions of universities, research and development still receive less attention from universities than teaching. Vietnam still has relatively few universities that can be categorized as research-based (Nguyen, 2020).
Third, akin to research and development, borrowing and donations contribute an almost insignificant share of the total revenue of Vietnamese public universities. This may be interpreted as follows. Due to their low level of autonomy, Vietnamese universities still behave as state-dependent agencies rather than independent agencies (Nguyen et al., 2020). Thus, they are not willing to borrow to invest in further development. Even those that do intend to borrow would face several challenges, since, according to the current legislation, public universities in Vietnam are not the owners of their buildings. Thus, they do not have collateral for loans as do many other universities across the world, such as those in Austria, Spain or Italy (see Pruvot and Estermann, 2017).
The low contribution of donations to total revenue is reasonable since Vietnamese culture does not appear to regard education as a high priority for donations and charitable activities (Vuong et al., 2018; Pham and Vu, 2019). A recent survey by the Institute for Studies of Society, Economy and Environment (2015) revealed that among 1197 surveyed participants, only 24% answered that they donated money for educational purposes. The figures for other purposes, such as poverty relief, disaster relief, or supporting people with disabilities were 80%, 67%, and 26%, respectively.
The results of this study have several implications: the first is academic, while the others are for practical purposes.
First, this study confirms the usefulness of the HHI as a proxy for evaluating financial sustainability in higher education, with a new sample from Vietnam. While previous endeavors mostly selected developed countries such as the US (Webb, 2015), England (Garland, 2020), or continental European countries (Kasperski and Holland, 2013), no prior studies, with the exception of one on Malaysia (Nik Ahmad et al., 2019), have selected developing or emerging countries as samples to compute the HHI and evaluate financial sustainability in higher education.
Second, as the financial health of all public universities is at risk, the Vietnamese government should be advised to reform its financial allocation mechanism for public higher education to enhance its effectiveness and efficiency. The current mechanism is mostly historically based, with 24.25% of the total revenue coming from state recurrent subsidy sub-inflows, and only 9.10% coming from state competitive funding (earmarked non-recurrent allocation: 4.99% and capital investment: 4.11%). Further adjustments that promote performance-based financial allocation and reduce ineffective and inefficient historically based mechanisms are suggested (see World Bank, 2020).
Third, given that the current governmental expenditure on higher education in Vietnam is, in fact, not disproportionately low compared to other developing countries, as discussed at the outset, the Vietnamese government may opt for an investment strategy similar to China's (see Projects 985 and 211 in Yip et al., 2020); that is, focusing specifically on a very small number of highly qualified public universities that can meet world-class levels. For other universities, a more cost-sharing-linked mechanism may be adopted to encourage them to seek new sources of funding and ensure their financial sustainability. For instance, as shown earlier, the contribution of revenue from international students to the overall revenue of Vietnamese universities is still very limited, so Vietnamese universities may regard international students as a potential source of income in the future. Pham et al. (2021) presented some evidence that Vietnamese universities and colleges have had success in recruiting international students in recent years. Strategies adopted by these institutions may offer lessons for other universities.
Fourth, given the low contribution from non-traditional sources of funding such as donations or knowledge transfer to overall income, we suggest that Vietnamese universities launch initiatives to build fund-raising capacity, entrepreneurship skills and business mindsets among their faculty and staff (Ho et al., 2019). Previous studies indicated that proactive attitudes and capacities of faculty and staff are prerequisite conditions for higher education renovation (e.g., Q.-H. Vuong, 2019), including financial diversification through donations and knowledge transfer (Lockett and Wright, 2005; Speck, 2010).
Anecdotal evidence has shown some good practices undertaken by some Vietnamese universities (see Thanh, 2020). More radically, we suggest Vietnamese universities and higher education policymakers adopt the concept of the "business of science" in all their research activities. Specifically, Vietnamese universities must regard their "research and research-based activities and content [as] (commercial) products", such as "science communication, science journalism, data collection, data analysis and software development" (Ho et al., 2019, p. 168). High-profile "business of science" activities from developed countries, such as state-funded research in the US and the UK (Nature Materials, 2006) or biotech startups in the US (Pisano, 2010), may provide examples for Vietnamese universities.
Last but not least, from a university perspective, diversifying revenue is a must, which may, in turn, result in a more sustainable financial scenario. However, it would be unrealistic for one university to enhance all non-traditional revenues simultaneously (Jaafar et al., 2021). A strategy that focuses on only one or two new sources of revenue would be more feasible and appropriate.
Limitations and suggestions for future studies
This study has several limitations (Vuong, 2020). First, given the unavailability of secondary data, we had to conduct our own primary survey with Vietnamese universities. Therefore, our sample consisted of only 51 institutions, which is much smaller than in other studies in other countries that also used the HHI to estimate the financial sustainability of higher education institutions (e.g. Garland, 2020; Kasperski and Holland, 2013; Webb, 2015). Further studies with larger samples are suggested to enhance the validity of the research. Second, the data obtained in this study only covered public universities and ignored private universities. Private higher education has become an indispensable component of Vietnam's higher education system (Chau et al., 2020). Future studies should take private universities into consideration. Third, as the current study only provides a descriptive picture of the degrees of financial sustainability of Vietnamese universities, it would be worth extending it by investigating the antecedents and/or consequences of such degrees of sustainability, using inferential approaches such as panel data analysis (Frees, 2004) or Bayesian analysis (Vuong et al., 2020).
Some specific datasets are included in the manuscript as Appendices. The other datasets are not publicly available as they form part of the authors' ongoing research. They are available on reasonable request.
Ayalew SA (2013) Financing higher education in Ethiopia: analysis of cost-sharing policy and its implementation. Higher Educ Policy 26(1):109–126. https://doi.org/10.1057/hep.2012.21
Bui, V. H. (2019) Huy động nguồn lực tài chính cho giáo dục đại học công lập ở Việt Nam [Mobilizing financial resources for public higher education in Vietnam]. Tap Chi Tai Chinh. http://tapchitaichinh.vn/nghien-cuu-trao-doi/huy-dong-nguon-luc-tai-chinh-cho-giao-duc-dai-hoc-cong-lap-o-viet-nam-302114.html
Calabrese TD (2012) The accumulation of nonprofit profits: a dynamic analysis. Nonprofit Volunt Sect Q 41(2):300–324. https://doi.org/10.1177/0899764011404080
Carroll DA, Stater KJ (2009) Revenue diversification in nonprofit organizations: does it lead to financial stability? J Public Adm Res Theory 19(4):947–966. https://doi.org/10.1093/jopart/mun025
Chau Q, Nguyen CH, Nguyen T-T (2020) The emergence of private higher education in a communist state: the case of Vietnam'. Stud High Educ 1–16 https://doi.org/10.1080/03075079.2020.1817890
Chikoto GL, Ling Q, Neely DG (2016) The adoption and use of the Hirschman–Herfindahl Index in nonprofit research: does revenue diversification measurement matter? VOLUNTAS 27(3):1425–1447. https://doi.org/10.1007/s11266-015-9562-6
Clark BR (1998) Creating entrepreneurial universities: organizational pathways of transformation. Elsevier
De Dominicis L, Susana Elena P, Ana Fernández Z (2011) European University Funding and Financial Autonomy: a study on the degree of diversification of University Budget and the Share of Competitive Funding. JRC scientific and technical reports. https://rio.jrc.ec.europa.eu/sites/default/files/files_imported/report/EW%20horizontal/European%20university%20funding%20and%20financial%20autonomy_2011_EU.pdf
Etzkowitz H et al. (2000) The future of the university and the university of the future: evolution of ivory tower to entrepreneurial paradigm'. Res Policy 29(2):313–330. https://doi.org/10.1016/S0048-7333(99)00069-4
Finney J (2014) Why the finance model for public higher education is broken and must be fixed, Penn Wharton University of Pennsylvania—Public Policy Initiative. https://vtechworks.lib.vt.edu/bitstream/handle/10919/84053/WhyFinanceModelMustbeFixed.pdf?sequence=1&isAllowed=y
Frees EW (2004) Longitudinal and panel data. Cambridge University Press
Fuller DB (2019) Technology transfer in China, Oxford bibliographies. Oxford University Press
Garland M (2020) How vulnerable are you? Assessing the financial health of England's universities. Perspectives 24(2):43–52. https://doi.org/10.1080/13603108.2019.1689374
Heller DE, Rogers KR (2006) Shifting the burden: public and private financing of higher education in the United States and implications for Europe. Tert Educ Manag 12(2):91–117. https://doi.org/10.1007/s11233-006-0001-5
Ho M-T et al. (2019) The emerging business of science in Vietnam. In: Vuong Q-H, Tran T (eds) The Vietnamese social sciences at a fork in the road. Sciendo, pp. 163–177
Institute for Studies of Society Economy and Environment (2015) People's awareness of charitable activities and fundraising capabilities of Vietnamese non-governmental organizations. Institute for Studies of Society Economy and Environment http://isee.org.vn/wp-content/uploads/2018/11/nhan-thuc-cua-nguoi-dan-ve-hoat-dong-tu-thien-va-kha-nang-gay-quy-cua-cac-to-chuc-phi-chinh-phu-viet-nam..pdf
Jaafar JA, Latiff ARA, Daud ZM, Osman MNH (2021) Does revenue diversification strategy affect the financial sustainability of Malaysian Public Universities? A panel data analysis, Higher Education Policy. https://doi.org/10.1057/s41307-021-00247-9
Joaquim JA, Cerdeira L (2020) Financial accessibility in cost-sharing policies in Higher Education in Mozambique. Int J Res-GRANTHAALAYAH 8(9). https://doi.org/10.29121/granthaalayah.v8.i9.2020.1403
Johnstone DB (2004) The economics and politics of cost sharing in higher education: comparative perspectives. Econ Educ Rev 23(4):403–410. https://doi.org/10.1016/j.econedurev.2003.09.004
Kanaan TH, Al-Salamat MN, Hanania MD (2011) Political economy of cost-sharing in higher education: the case of Jordan. PROSPECTS 41(1):23–45. https://doi.org/10.1007/s11125-011-9179-5
Kasperski S, Holland DS (2013) Income diversification and risk for fishermen, Proc Natl Acad Sci USA 110(6), 2076–2081. https://doi.org/10.1073/pnas.1212278110
Kerlin JA (2006) U.S.-based international NGOs and federal government foreign assistance: out of alignment? In: Boris E, Steuerle E (eds) Nonprofits and government: collaboration and conflict. The Urban Institute Press, Washington, pp. 373–398
Le V (2017) 'Những con số "biết nói" về giáo dục đại học Việt Nam [The numbers "tell" about Vietnam's higher education]', Vietnamnet. https://vietnamnet.vn/vn/giao-duc/tuyen-sinh/nhung-con-so-biet-noi-ve-giao-duc-dai-hoc-viet-nam-389870.html
Lockett A, Wright M (2005) Resources, capabilities, risk capital and the creation of university spin-out companies. Res Policy 34(7):1043–1057. https://doi.org/10.1016/j.respol.2005.05.006
Marginson S (2013) The impossibility of capitalist markets in higher education. J Educ Policy 28(3):353–370. https://doi.org/10.1080/02680939.2012.747109
Mayer WJ et al. (2014) The impact of revenue diversification on expected revenue and volatility for nonprofit organizations. Nonprofit Volunt Sect Q 43(2):374–392. https://doi.org/10.1177/0899764012464696
Ministry of Education and Training (2014) Thông tư 23/2014 về đào tạo chương trình chất lượng cao trình độ đại học [Circular No. 23/2014 on advanced program training in university]. https://thuvienphapluat.vn/van-ban/Bo-may-hanh-chinh/Thong-tu-23-2014-TT-BGDDT-Quy-dinh-dao-tao-chat-luong-cao-trinh-do-dai-hoc-240505.aspx
Ministry of Education and Training (2018) Thông Tư 16/2018/TT-BGDĐT Quy Định Về Tài Trợ Cho Các Cơ Sở Giáo Dục Thuộc Hệ Thống Giáo Dục Quốc Dân [Circular 16/2018/tt-bgdđt Prescribing sponsorship for educational institutions in the National Education System]. https://thuvienphapluat.vn/van-ban/Giao-duc/Thong-tu-16-2018-TT-BGDDT-quy-dinh-ve-tai-tro-cho-co-so-giao-duc-thuoc-he-thong-giao-duc-quoc-dan-393562.aspx
Ministry of Education and Training (2019) The 5-year Review report on implementation of Party Central Committee's Resolution 29 on Comprehensive Renovation of Education in 2013–2018. Hanoi
Nature Materials (2006) The business of science. Nat Mater 5(12):921. https://doi.org/10.1038/nmat1796
Nguyen HTL (2020) A review of University Research Development in Vietnam from 1986 to 2019. In: Phan LH, Doan BN (eds) Higher education in market-oriented socialist Vietnam. Springer International Publishing, Cham, pp. 63–86
Nguyen TH, Ly TMC, Tran MD et al. (2020) Nghiên cứu đề xuất các giải pháp đẩy mạnh quốc tế hóa giáo dục Việt Nam [Research and make the recommendations to promote internationalization of Vietnamese education system].http://chuongtrinhkhgd.moet.gov.vn/content/dauthaudautucong/Lists/DuAn/Attachments/74/018.%20Bao%20cao%20tom%20tat%20QTHGD%20-%2013-1-2021.pdf
Nguyen TL (2020) Tăng cường các nguồn lực tài chính cho phát triển các trường đại học công lập ở Việt Nam [Improving financial resources for the development of public universities in Vietnam]. Vietnam National University, Hanoi, Unpublished document
Nguyen TTH et al. (2020) The adoption of international publishing within Vietnamese academia from 1986 to 2020: a review. Learned Publishing
Nguyen VP (2017) Các trường ĐH: Nguồn thu từ NCKH và chuyển giao CN còn khiêm tốn [Universities: income from scientific research and technology transfer is still modest]. Giao duc Online. https://www.giaoduc.edu.vn/cac-truong-dh-nguon-thu-tu-nckh-va-chuyen-giao-cn-con
Nik Ahmad NN, Siraj SA, Ismail S (2019) Revenue diversification in public higher learning institutions: an exploratory Malaysian study. J Appl Res Higher Educ 11(3):379–397. https://doi.org/10.1108/JARHE-04-2018-0057
Pham H-H (2011) VIETNAM: Young academic talent not keen to return. University World News. https://www.universityworldnews.com/post.php?story=20110722201850123 Accessed 6 Aug 2020
Pham H-H, Vu H-M (2019) 'Financing Vietnamese higher education: from a wholly government-subsidized to a cost-sharing mechanism'. In: Nguyen N-T, Tran L-T (eds) Reforming Vietnamese higher education. education in the Asia-Pacific region: issues, concerns and prospects. Springer, Singapore, pp. 75–90
Pham MC, Vuong Q-H (2019) Vietnam's economy: ups and downs and breakthroughs [Kinh tế Việt Nam: Thăng trầm và đột phá]. Truth National Political Publishing House, Hanoi
Pham H-H, Vuong Q-H, Dong T-K-T, Nguyen T-T, Ho M-T, Vuong T-T, Nguyen M-H (2021) The southern world as a destination of international students: an analysis of 50 tertiary education institutions in Vietnam. J Contemp Eastern Asia 20(1):24–43. https://doi.org/10.17477/JCEA.2021.20.1.024
Phung XN (2019) Nguồn thu đại học không thể trông mãi vào học phí [University's revenues cannot always count on tuition fees]. https://tuoitre.vn/nguon-thu-dai-hoc-khong-the-trong-mai-vao-hoc-phi-20190724093811336.htm
Pisano GP (2010) The evolution of science-based business: innovating how we innovate'. Ind Corp Change 19(2):465–482. https://doi.org/10.1093/icc/dtq013
Prime Minister (2007) Decision No. 157/2007/QD-TTg on credit loans for pupils and students [Quyết định 157/2007/QĐ-TTg về tín dụng đối với học sinh, sinh viên]. http://vanban.chinhphu.vn/portal/page/portal/chinhphu/hethongvanban?class_id=1&mode=detail&document_id=41275
Prime Minister (2015) Decree no. 86/2015/ND-CP. On mechanism for collection and management of tuition fees applicable to educational institutions in the national education system and policies on tuition fee exemption and reduction and financial support from academic year 2015–2016 to 2020–2021 [Nghị định số 86/2015/NĐ-CP: Quy định về cơ chế thu, quản lý học phí đối với cơ sở giáo dục thuộc hệ thống giáo dục quốc dân và chính sách miễn, giảm học phí, hỗ trợ chi phí học tập từ năm học 2015–2016 đến năm học 2020–2021]. http://vanban.chinhphu.vn/portal/page/portal/chinhphu/hethongvanban?class_id=1&_page=1&mode=detail&document_id=181665
Pruvot EB, Estermann T (2017) University Autonomy in Europe III: the Scorecard 2017. European University Association. https://eua.eu/resources/publications/350:university-autonomyin-europe-iii-the-scorecard-2017.html
Rhoades SA (1995) Market share inequality, the HHI, and other measures of the firm-composition of a market. Rev Ind Organ 10(6):657–674. https://doi.org/10.1007/BF01024300
Salmi J (2001) Student loans: The World Bank experience. Int High Educ (22). https://doi.org/10.6017/ihe.2001.22.6912
Schiller D, Liefner I (2007) Higher education funding reform and university–industry links in developing countries: the case of Thailand. High Educ 54(4):543–556. https://doi.org/10.1007/s10734-006-9011-y
Speck BW (2010) The growing role of private giving in financing the modern university New Dir High Educ 2010 (149):7–16. https://doi.org/10.1002/he.376
Suyderhoud JP (1994) State-local revenue diversification, balance, and fiscal performance. Public Financ Rev 22(2):168–194. https://doi.org/10.1177/109114219402200202
Tandberg DA (2010) Politics, interest groups and state funding of public higher education. Res High Educ 51(5):416–450. https://doi.org/10.1007/s11162-010-9164-5.
Thanh H (2020) Raising awareness of new and innovative startups among the staff and lecturers [Nâng cao nhận thức về khởi nghiệp đổi mới sáng tạo trong đội ngũ cán bộ, giảng viên]. https://vnu.edu.vn/btdhqghn/?C1654/N22528/Nang-cao-nhan-thuc-ve-khoi-nghiep-doi-moi-sang-tao-trong-doi-ngu-can-bo,-giang-vien.htm
The State Bank of Vietnam (2016) More than 3.3 million turns of poor pupils and students have access to preferential credit loans [Hơn 3,3 triệu lượt học sinh, sinh viên nghèo được vay vốn tín dụng ưu đãi]. https://sbv.gov.vn/webcenter/portal/m/menu/trangchu/ttsk/ttsk_chitiet;jsessionid=hjjZp1SGzJxLCySqyb8Sr9TynDJ1Gtf0PFD7d2ZyhlqnCTP0y0yg!-1636599546!-1385226589?centerWidth=100%25&dDocName=SBV244664&leftWidth=0%25&rightWidth=0%25&showFooter=false&showHeader=
Thuy Nga (2020) 30 leading research institutions in Vietnam [30 cơ sở đại học dẫn đầu về nghiên cứu tại VN], Vietnamnet. https://vietnamnet.vn/vn/giao-duc/khoa-hoc/30-truong-dh-dan-dau-ve-cac-chi-so-nghien-cuu-tai-viet-nam-nam-2019-605526.html
Trinh T et al. (2020) Factors impacting international‐indexed publishing among Vietnamese educational researchers. Learn Publ 33(4):419–429. https://doi.org/10.1002/leap.1323
Trow M (2008) Reflections on the transition from Elite to Mass to Universal Access: forms and phases of higher education in modern societies since WWII'. In: Forest J & Altbach P (eds), International handbook of higher education. pp. 243–280, Springer, Dordrecht
U.S. Department of Justice (2010) Horizontal merger guidelines. U.S. Department of Justice https://www.justice.gov/atr/horizontal-merger-guidelines-08192010#5c
UNESCO (2020) Gross enrolment ratio for tertiary education. UNESCO http://tcg.uis.unesco.org/4-3-2-gross-enrolment-ratio-for-tertiary-education/
Vietnam Government (2012) Decree No. 30/2012/ND-CP: the organization and operation of Social Funds and Charity Funds [Nghị định 30/2010/NĐ-CP về tổ chức, hoạt động của Quỹ xã hội, Quỹ từ thiện]. Thuvienphapluat https://thuvienphapluat.vn/van-ban/van-hoa-xa-hoi/Nghi-dinh-30-2012-ND-CP-to-chuc-hoat-dong-quy-xa-hoi-tu-thien-137920.aspx
Vietnamese Government (1997) Resolution No. 90/CP/1997 on the direction and policy of socialization of educational medical an cultural activities [Nghị quyết 90/CP/1997 Về Phương hướng và chủ trương xã hội hoá các hoạt động Giáo dục,Y tế, Văn hoá]. Thuvienphapluat. https://thuvienphapluat.vn/van-ban/giao-duc/Nghi-quyet-90-CP-phuong-huong-va-chu-truong-xa-hoi-hoa-cac-hoat-dong-giao-duc-y-te-van-hoa-40903.aspx
Vietnamese Government (2006) Decree No. 43/2006/ND-CP on providing for the right to autonomy and self-responsibility for task performance, organizational apparatus, payroll and finance of public non-business units [Nghị định 43/2006/NĐ-CP Quy định quyền tự chủ, tự chịu trách nhiệm về thực hiện nhiệm vụ, tổ chức bộ máy, biên chế và tài chính đối với đơn vị sự nghiệp công lập]. Vietnamese Government https://thuvienphapluat.vn/van-ban/Bo-may-hanh-chinh/Nghi-dinh-43-2006-ND-CP-quyen-tu-chu-tu-chiu-trach-nhiem-thuc-hien-nhiem-vu-to-chuc-bo-may-bien-che-tai-chinh-doi-voi-don-vi-su-nghiep-cong-lap-11313.aspx
Vietnamese Government (2010) Decree No. 49/2010/ND-CP on reduction and exemption of tuition fees, support for learning cost, collection and use of tuition applicable to educational institutions belonging to national education system from school year 2010–2011 to 2014–2015 [Nghị định 49/2010/NĐ-CP Quy Định Về Miễn, Giảm Học Phí, Hỗ Trợ Chi Phí Học Tập Và Cơ Chế Thu, Sử Dụng Học Phí Đối Với Cơ Sở Giáo Dục Thuộc Hệ Thống Giáo Dục Quốc Dân Từ Năm Học 2010–2011 Đến Năm Học 2014–2015]. Vietnamese Government https://vanbanphapluat.co/decree-no-49-2010-nd-cp-reduction-and-exemption-of-tuition-fees
Vietnamese Government (2014) Resolution 77/NQ-CP on piloting new mechanisms of operation with public higher education institutions for the period of 2014-2017 [Nghị quyết 77/NQ-CP về thí đIểm đổI mớI cơ chế hoạt động đốI vớI các cơ sở giáo dục đạI học công lập giai đoạn 2014–2017]. Vietnamese Government https://thuvienphapluat.vn/van-ban/giao-duc/Nghi-quyet-77-NQ-CP-2014-thi-diem-doi-moi-co-che-hoat-dong-co-so-giao-duc-dai-hoc-cong-lap-2014-2017-254531.aspx
Vietnamese Government (2018) Decree No. 76/2018/nd-cp dated May 15, 2018 providing guidelines of the Law on Technology Transfer [Nghị định 76/2018/NĐ-CP Quy Định Chi Tiết Và Hướng Dẫn Thi Hành Một Số Điều Của Luật Chuyển Giao Công Nghệ]. Vietnamese Government https://thuvienphapluat.vn/van-ban/cong-nghe-thong-tin/Nghi-dinh-76-2018-ND-CP-huong-dan-Luat-Chuyen-giao-cong-nghe-380225.aspx
Vietnamese National Assembly (2012) Law on Higher Education [Luật Giáo dục Đại học]. Vietnamese National Assembly https://thuvienphapluat.vn/van-ban/giao-duc/Luat-Giao-duc-dai-hoc-2012-142762.aspx
Vietnamese National Assembly (2013) Law on the Amendments to the Law on Enterprise Income Tax [Luật sửa đổi, bổ sung một số điều của Luật thuế thu nhập doanh nghiệp]. Vietnamese National Assembly https://thuvienphapluat.vn/van-ban/doanh-nghiep/Luat-thue-thu-nhap-doanh-nghiep-sua-doi-2013-197250.aspx
Vietnamese National Assembly (2018) Law on Amendments to the Law on Higher Education [Luật sửa đổi, bổ sung một số điều của Luật Giáo dục Đại Học]. Vietnamese National Assembly https://luatvietnam.vn/giao-duc/luat-giao-duc-dai-hoc-sua-doi-nam-2018-169346-d1.html
Vuong QH (2019) The harsh world of publishing in emerging regions and implications for editors and publishers: the case of Vietnam. Learn Publ 32(4):314–324. https://doi.org/10.1002/leap.1255
Vuong QH et al. (2019) Effects of work environment and collaboration on research productivity in Vietnamese social sciences: evidence from 2008 to 2017 scopus data. Stud High Educ 44(12):2132–2147. https://doi.org/10.1080/03075079.2018.1479845
Vuong QH (2018) The (ir)rational consideration of the cost of science in transition economies. Nat Hum Behav 2(1):5. https://doi.org/10.1038/s41562-017-0281-4
Vuong Q-H (2019) Breaking barriers in publishing demands a proactive attitude. Nat Hum Behav 3(10):1034. https://doi.org/10.1038/s41562-019-0667-6
Vuong Q-H (2020) Reform retractions to make them more transparent. Nature 582(7811):149. https://doi.org/10.1038/d41586-020-01694-x
Vuong Q-H et al. (2018) Cultural additivity: behavioural insights from the interaction of Confucianism, Buddhism and Taoism in folktales. Palgrave Commun 4(1):143. https://doi.org/10.1057/s41599-018-0189-2
Vuong Q-H et al. (2020) Bayesian analysis for social data: a step-by-step protocol and interpretation. MethodsX 7:100924. https://doi.org/10.1016/j.mex.2020.100924.
Vuong Q-H et al. (2021) Mirror, mirror on the wall: is economics the fairest of them all? An investigation into the social sciences and humanities in Vietnam. Res Eval https://doi.org/10.1093/reseval/rvaa036
Webb J (2015) A path to sustainability: how revenue diversification helps colleges and universities survive tough economic conditions. J Int Interdiscip Bus Res 2(7):69–97 https://core.ac.uk/download/pdf/214515334.pdf
Wharton R, Kail A, Curvers S (2016) How can charities work best in the school system? A discussion paper. https://www.thinknpc.org/wp-content/uploads/2018/07/School-Report_How-can-charities-work-best-in-the-school-system_April16.pdf
Woodhall M (1988) Designing a student loan programme for a developing country: the relevance of international experience. Econ Educ Rev 7(1):153–161. https://doi.org/10.1016/0272-7757(88)90079-9
World Bank (2000) Higher education in developing countries: peril and promise. World Bank, Washington. https://documents.worldbank.org/en/publication/documents-reports/documentdetail/345111467989458740/higher-education-in-developing-countries-peril-and-promise
World Bank (2019) GDP per capita. World Bank https://data.worldbank.org/indicator/NY.GDP.PCAP.CD
World Bank (2020) Improving the performance of higher education in Vietnam: strategic priorities and policy options (English). World Bank http://documents1.worldbank.org/curated/en/347431588175259657/pdf/Improving-the-Performance-of-Higher-Education-in-Vietnam-Strategic-Priorities-and-Policy-Options.pdf
World Intellectual Property Organization (2012) Technology transfer in countries in transition: policy and recommendations. World Intellectual Property Organization https://www.wipo.int/publications/en/details.jsp?id=4118&plang=EN
Yip PSF et al. (2020) Is there gender bias in research grant success in social sciences?: Hong Kong as a case study. Humanit Soc Sci Commun 7(1):173. https://doi.org/10.1057/s41599-020-00656-y
This work is the output of the project, entitled "Some critical financial measures to develop higher education in Vietnam, period 2021–2030, vision 2035", no. ĐTĐL.XH-07/19, funded by the Ministry of Science and Technology of Vietnam. The authors sincerely thank the Ministry for this support.
University of Economics and Business VNU, Hanoi, Vietnam
Trung Thanh Le & Minh Phuong Thi Nguyen
Vietnam Ministry of Finance, Hanoi, Vietnam
Thuy Linh Nguyen
University of Strathclyde, Glasgow, Scotland, UK
Minh Thong Trinh
EdLab Asia Educational Research and Development Centre, Hanoi, Vietnam
Minh Thong Trinh & Hiep-Hung Pham
Vietnam Ministry of Science and Technology, Hanoi, Vietnam
Mai Huong Nguyen
Department of Planning and Finance, VNU, Hanoi, Vietnam
Minh Phuong Thi Nguyen
Phu Xuan University, Hue, Vietnam
Hiep-Hung Pham
Trung Thanh Le
Correspondence to Hiep-Hung Pham.
Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Le, T.T., Nguyen, T.L., Trinh, M.T. et al. Adopting the Hirschman–Herfindahl Index to estimate the financial sustainability of Vietnamese public universities. Humanit Soc Sci Commun 8, 247 (2021). https://doi.org/10.1057/s41599-021-00927-2
Does $\left(n^2 \sin n\right)$ have a convergent subsequence?
I'm wrestling with the following:
Question: For what values of $\alpha > 0$ does the sequence $\left(n^\alpha \sin n\right)$ have a convergent subsequence?
(The special case $\alpha = 2$ in the title happened to arise in my work.) In a continuous setting this would be a very simple question since $x^\alpha \sin x$ achieves every value infinitely often for (positive) $x \in \mathbb{R}$, but I feel ill-equipped for this situation -- I have an eye on the Bolzano-Weierstrass theorem and not much else.
I have shown the answer is affirmative for $\alpha \leq 1$. Here is my idea.
Proof when $0 < \alpha \leq 1$: Define $x_n = n^\alpha \sin n$ for all $n \in \mathbb{N}$. We will find a bounded subsequence $(y_n)$ of $(x_n)$ so the Bolzano-Weierstrass theorem applies. Let $n \geq 1$ be arbitrary; then by Dirichlet's approximation theorem there are $p_n, q_n \in \mathbb{N}$ satisfying $$q_n \leq n \,\,\,\,\,\,\,\,\,\, \text{and} \,\,\,\,\,\,\,\,\,\, |q_n \pi - p_n| < \frac{1}{n}.$$ Take $y_n = x_{p_n} = (p_n)^\alpha \sin p_n$ for this index $n$. Then $$\begin{eqnarray}|y_n| &=& (p_n)^\alpha \left|\sin(q_n \pi - p_n)\right|\\ &<& (p_n)^\alpha \left(\frac{1}{n}\right)\\ &<& \left(q_n \pi + \frac{1}{n}\right)^\alpha \left(\frac{1}{n}\right)\\ &\leq& \left(n \pi + \frac{1}{n}\right)^\alpha \left(\frac{1}{n}\right)\\ &\leq& \left(n \pi + \frac{1}{n}\right) \left(\frac{1}{n}\right)\\ &\leq& \pi + 1\end{eqnarray}$$ so the sequence $(y_n)$ has a convergent subsequence $(z_n)$ by the Bolzano-Weierstrass theorem. But $(z_n)$ is in turn a subsequence of $(x_n)$, which proves the claim.
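A quick numerical illustration of this idea (not part of the proof): taking a few well-known continued-fraction convergents $p/q$ of $\pi$ and evaluating $p^\alpha |\sin p|$ in Python,

import math

# Numerators/denominators of some well-known convergents of pi.
convergents = [(22, 7), (333, 106), (355, 113), (103993, 33102), (104348, 33215)]

for alpha in (0.5, 1.0, 2.0):
    values = [p ** alpha * abs(math.sin(p)) for p, q in convergents]
    print(alpha, [round(v, 3) for v in values])

# For alpha <= 1 the values stay of moderate size (consistent with the pi + 1 bound above);
# for alpha = 2 some of them become very large, which is exactly the regime asked about.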
Unfortunately, my strategy breaks down when $\alpha > 1$ (where I replace $\alpha$ by $1$ in the chain of inequalities). So in this case, can a convergent subsequence be found some other way or not? (I have a suspicion as to the answer, but I don't want to bias people based on heuristics.)
sequences-and-series analysis special-functions
Vandermonde
This seems very closely related to the irrationality measure of $\pi$. Fix some exponent $\mu$, and suppose that there are infinitely many $p_n, q_n$ with
$$ \left| \pi - \frac{p_n}{q_n} \right| < \frac{1}{(q_n)^\mu} \, .\,\,\, (*) $$
Then essentially the same argument you gave shows that the subsequence $\left\{x_{p_n}\right\}$ is bounded for any $\alpha \leq \mu-1$.
Conversely, suppose we have a bounded subsequence $\left\{x_{p_n}\right\}$ of $n^\alpha \sin n$, so $(p_n)^\alpha |\sin p_n| < K$ for some fixed $K$ and all $n$. Choose $q_n$ so that $|p_n - q_n \pi| < \frac{\pi}{2}$. Then $$|p_n - q_n \pi| < \frac{\pi}{2} |\sin (p_n-q_n \pi)| < \frac{K \pi}{2(p_n)^{\alpha}} \, ,$$
so $(*)$ holds infinitely often for any $\mu < \alpha + 1$.
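To spell out the last step (with a constant $C$ introduced here, not in the original argument): dividing by $q_n$ and using $p_n > 2 q_n$ for all large $n$ (since $p_n/q_n \to \pi$),
$$ \left| \pi - \frac{p_n}{q_n} \right| = \frac{|p_n - q_n \pi|}{q_n} < \frac{K \pi}{2\, q_n (p_n)^{\alpha}} \le \frac{C}{(q_n)^{\alpha + 1}} \, , $$
and since $\mu < \alpha + 1$, the right-hand side is eventually smaller than $(q_n)^{-\mu}$.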
According to the Mathworld page I linked to above, all we know about the irrationality measure of $\pi$ is that it's between $2$ and $7.6063$, so your specific problem (which requires that we compare it to $3$) is unsolved.
EDIT: I can't find an online version of the 2008 paper by Salikhov that proves the 7.6063 bound, but here's a pdf of Hata's earlier paper that shows $\mu(\pi) < 8.0161$, and here's a related MathOverflow question (which also has a link to Hata's paper).
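(Added for intuition: one can compute the empirical exponent $\mu$ attained by each convergent of $\pi$ using exact rational arithmetic. The 30-digit value of $\pi$ and the partial quotients below are typed in by hand, and finitely many convergents of course say nothing about the true value of $\mu(\pi)$.)

from fractions import Fraction
from math import log

PI = Fraction("3.141592653589793238462643383279")   # pi to 30 decimal places
cf = [3, 7, 15, 1, 292, 1, 1, 1, 2, 1, 3, 1, 14]    # partial quotients of pi

def convergents(quotients):
    p0, p1, q0, q1 = 1, quotients[0], 0, 1
    yield p1, q1
    for a in quotients[1:]:
        p0, p1 = p1, a * p1 + p0
        q0, q1 = q1, a * q1 + q0
        yield p1, q1

for p, q in convergents(cf):
    if q == 1:
        continue                      # skip 3/1, where log q = 0
    err = abs(PI - Fraction(p, q))
    # the exponent mu with |pi - p/q| = q**(-mu)
    print(q, -log(err) / log(q))

The exponents hover a little above $2$, apart from the especially good approximations $22/7$ and $355/113$, which is one reason $\mu(\pi)=2$ is widely conjectured.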
Micah
$\begingroup$ Interesting -- I unwittingly stumbled upon an unsolved problem. At least now I know the keyword to find out more (or maybe not to spend too much time chasing this question, but I'm going to read the paper). Thanks for your info. $\endgroup$
– Vandermonde
$\begingroup$ I got the impulse to look at these references again. I still think it is really cool and creative to quantify irrationality by well-approximation like that; I regarded rationality as a qualitative, binary, 'algebraic' property and left it at that, and would never have thought to do something like this. It's a similar kind of feeling to when you're used to statements being sharply true or false and someone introduces a really rich generalisation with the notion of probabilities between 0 and 1. $\endgroup$
The chemical nature of hydrogen peroxide is:
Oxidising and reducing agent in acidic medium, but not in basic medium.
Oxidising and reducing agent in both acidic and basic medium
Reducing agent in basic medium, but not in acidic medium
Oxidising agent in acidic medium, but not in basic medium.
Correct Option: 2,
$\mathrm{H}_{2} \mathrm{O}_{2}$ act as oxidising agent and reducing agent in acidic medium as well as basic medium.
$\mathrm{H}_{2} \mathrm{O}_{2}$ acts as an oxidant:
$\mathrm{H}_{2} \mathrm{O}_{2}+2 \mathrm{H}^{\oplus}+2 \mathrm{e}^{\Theta} \rightarrow 2 \mathrm{H}_{2} \mathrm{O}$ (In acidic medium)
$\mathrm{H}_{2} \mathrm{O}_{2}+2 \mathrm{e}^{\ominus} \rightarrow 2 \mathrm{OH}^{\ominus}$ (In basic medium)
$\mathrm{H}_{2} \mathrm{O}_{2}$ acts as a reductant:
$\mathrm{H}_{2} \mathrm{O}_{2} \rightarrow 2 \mathrm{H}^{+}+\mathrm{O}_{2}+2 \mathrm{e}^{\Theta}$ (In acidic medium)
$\mathrm{H}_{2} \mathrm{O}_{2}+2 \mathrm{OH}^{\Theta} \rightarrow 2 \mathrm{H}_{2} \mathrm{O}+\mathrm{O}_{2}+2 \mathrm{e}^{\ominus}$ (In basic medium)
What is it about modern set theory that prevents us from defining the set of all sets which are not members of themselves?
We can clearly define a set of sets. I feel intuitively like we ought to be able to define sets which do contain themselves; the set of all sets which contain sets as elements, for instance. Does that set produce a contradiction?
I do not have a very firm grasp on what constitutes a set versus what constitutes a class. I understand that all sets are classes, but that there exist classes which are not sets, and this apparently resolves Russell's paradox, but I don't think I see exactly how it does so. Can classes not contain classes? Can a class contain itself? Can a set?
set-theory
crf
$\begingroup$ The short answer is the axiom of foundation: en.wikipedia.org/wiki/Axiom_of_foundation. But I suspect you will get a very good answer shortly, so I only suggest the above link as a first reading. $\endgroup$ – M Turgeon Aug 14 '12 at 22:03
$\begingroup$ @MTurgeon This question does not involve the axiom of foundation. If the OP wanted a set $x$ such that $x \in x$, then the axiom of foundation would be need to show that no such set exists. $\endgroup$ – William Aug 14 '12 at 22:05
$\begingroup$ I think he referred to Russell's paradox. Bertrand is unfortunately Russell's first name and Joseph's last name. $\endgroup$ – Tunococ Aug 14 '12 at 22:08
$\begingroup$ @William My knowledge of set theory is very poor. But it seems to me that you just proved in your comment that the axiom of foundation is relevant to the OPs question. $\endgroup$ – M Turgeon Aug 14 '12 at 22:08
$\begingroup$ Call the set containing precisely all sets that are not members of themselves $S$. Thus we have $$A\in S\iff A\not\in A$$ for every set $A$. That is, a set does not contain itself if and only if it is a member of $S$. Then it follows that $S\in S\iff S\not\in S$ (because $S$ is a set!), a clear impossibility. This does not follow if $S$ is not a set, because we cannot substitute $A:=S$ when $S$ is not a set. (Intuitively, a (proper) class is a collection too large to be a set. An actual set theorist should be able to describe the situation better.) $\endgroup$ – anon Aug 14 '12 at 22:10
Russell's paradox
In Zermelo set theory, the proof of the titular question is straightforward:
Assume there is such a set. Call it $R$.
Fact: $x \notin x$ if and only if $x \in R$. This is the defining property of $R$.
Assume $R \in R$.
By the fact, this means $R \notin R$.
Contradiction!
Therefore $R \notin R$.
By the fact, this means $R \in R$.
Therefore no such set exists.
There is an immediate corollary: there is no set of all sets.
Assume there is a set of all sets. Call it S.
There is a subset $R \subseteq S$ containing exactly those sets $x$ for which $x \notin x$
Therefore, there is no set of all sets.
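The argument above is short enough to formalize. Below is a minimal sketch in Lean 4 (added for illustration, not part of the original answer), stated for an arbitrary binary relation mem standing in for $\in$ on an arbitrary type; nothing beyond core Lean is assumed, and the theorem name is mine.

-- No "Russell set" R can satisfy: for every x, x is in R iff x is not in x.
theorem no_russell_set {U : Type} (mem : U → U → Prop) :
    ¬ ∃ R : U, ∀ x : U, mem x R ↔ ¬ mem x x :=
  fun ⟨R, hR⟩ =>
    -- the defining property of R, specialised to R itself
    have h : mem R R ↔ ¬ mem R R := hR R
    -- assuming R ∈ R leads to a contradiction, so R ∉ R ...
    have hnot : ¬ mem R R := fun hmem => (h.mp hmem) hmem
    -- ... but then the defining property forces R ∈ R; contradiction
    hnot (h.mpr hnot)

Nothing here uses any set-theoretic axioms; as another answer below notes, the contradiction comes from the biconditional alone.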
Rationale for Zermelo set theory
One of the most important features of a set theory is having tools to actually construct sets. Cantor's 'naive' set theory had the most powerful rule of all: if you could name any property $P$, then there was a set of all sets that have property $P$. This let you construct any set you could imagine! Unfortunately, it lets you construct the set of Russell's paradox, and thus Cantor's set theory is self-contradictory.
Zermelo took a more modest approach*: he looked for a more conservative collection of constructions that sufficed for mathematics, but isn't so strong as to create any of the known paradoxical sets. Fraenkel added another useful construction, and gave us the axiom of foundation which simplifies technical arguments.
Among the constructions of Zermelo set theory is the restricted form of Cantor's "comprehension principle": if we have any property $P$ and a set $S$, then we can form the subset of $S$ of things satisfying property $P$.
The axiom of restricted comprehension is exactly the property of a universe of sets that is needed to make the argument in the opening section.
*: I do not know if this is historically accurate. Really, I'm espousing an a posteriori observation about it.
Set-builder notation is very useful notation to denote sets. Recall that each of the following notations define sets in ZFC:
$$ \{ x \in S \mid P(x) \} \qquad \qquad \{ f(x) \mid x \in S \} \qquad \qquad \{ a, b \} $$
where $a,b,S$ are all sets, $P$ is a unary predicate whose domain includes $S$, and $f$ is a function whose domain includes $S$.
The same notation turns out to be quite useful to define predicates. For example, predicate
P(x) = "x contains the empty set"
is easily notated as
$$ P = \{ x \mid \emptyset \in x \} $$
and the assertion that $x$ satisfies the predicate $P$ can be written as
$$ x \in P. $$
This notation, formally, has nothing to do with sets: it is alternative notation for logic. When we do this, we call a predicate a "class".
The way you manipulate logic in the form of classes is so strikingly similar to the way you manipulate sets that this unified notation is extremely useful.
To answer a question you had, the only objects are still sets. The only thing that can be a member of a set is a set. The only thing that can be a member of a class is a set. Classes can't be members of anything, because they aren't objects: they're logic. (at least, if we stick to first-order logic....)
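A toy illustration of the point that a class is just a predicate (added here; the names are mine and purely illustrative), with Python frozensets standing in for hereditarily finite sets:

empty = frozenset()

def P(x):
    # the "class" of all sets that contain the empty set: a test, not a collected object
    return empty in x

a = frozenset({empty})               # the set {emptyset}
b = frozenset({frozenset({empty})})  # the set {{emptyset}}

print(P(a))   # True:  the empty set is a member of {emptyset}
print(P(b))   # False: the empty set is not a member of {{emptyset}}

Asserting that $x$ satisfies the predicate is just function application; at no point do we need the collection of all such $x$ to exist as a single object.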
It can be technically awkward when you have to pay attention to what is a set and what is a class, especially if you want to reason in a 'stripped down' version of formal logic.
So, Von Neumann, Bernays, and Gödel invented (NBG) set theory*. The objects of NBG set theory are classes. It might be a little confusing to use the same word as we did for the alternative view of logic above; however in practice it's not a problem.
NBG set theory includes a class called $\mathbf{Set}$. $V$ is another commonly used name for this class. There is a theorem/axiom that says if $x \in y$, then $x \in \mathbf{Set}$.
NBG can also be presented (and usually is, I think) as a theory with two sorts: a sort of sets and a sort of classes. Only sets may be elements of things. But for any set there is a class that has the same elements, and it is reasonable to conflate the two.
*: Again, this is not meant to be a historically accurate presentation.
Universes
Another approach to dealing with classes is a Grothendieck universe. However, using them requires assuming a large cardinal axiom.
A Grothendieck universe is, briefly, a set $U$ with the property that the elements of $U$ collectively have good enough properties to be justifiably called a 'universe of sets'. We call the elements of $U$ "small sets". The things we would normally call classes are all subsets of $U$.
In this way (other than having had to assume a large cardinal axiom) we don't have to do much that is special -- everything we are talking about is a set. We just occasionally have to take note of which sets are "small" and which are not.
Hurkyl
$\begingroup$ "This notation, formally, has nothing to do with sets: it is alternative notation for logic. When we do this, we call a predicate a 'class'." Ohhhhhhhhhh. That helps a lot. Thank you! $\endgroup$ – crf Aug 18 '12 at 1:42
$\begingroup$ What's the point/benefit of building the set pointed in Russell's paradox in a non-paradoxical way (considering it's possible)? What change would it make for mathematics? $\endgroup$ – Billy Rubina Jun 23 '13 at 5:55
Since the other users have answered the question in the context of well-founded set theory, let me say a few words about other set theories.
Before we can really answer this question, we must first think about what a 'set' is in the first place. Intuitively, a set is something that has members and which is wholly determined by what its members are. This is codified in the axiom of extensionality:
Extensionality. If $X$ and $Y$ are sets, and for all $z$, $z \in X \iff z \in Y$, then $X = Y$.
Notice, however, that the quantifier "for all $z$" is unbounded – that is, there is no restriction on the type of $z$. Let us noncommittally fix a universe of discourse $\mathbf{U}$ and say that $z$ is required to be in $\mathbf{U}$. So, can a set be a member of another set? Well, that depends – are there any sets in $\mathbf{U}$? If not, then obviously a set cannot be a member of any set. This is rather unacceptable for doing modern mathematics, so we must rectify this somehow.
Russell's own solution to his paradox was to introduce the notion of a type. (What I describe here is the unramified type theory TST, not the type theory of Principia Mathematica.) We start with some basic type $\mathbf{U}_0$ – say, the natural numbers. We define sets whose members are of type $\mathbf{U}_0$ – and this is a new type $\mathbf{U}_1$. We repeat this procedure infinitely, forming at each stage the type $\mathbf{U}_{n+1}$ corresponding to sets of things of type $\mathbf{U}_n$. Thus, we get sets whose members are other sets; on the other hand, it is clear that the universal 'set' does not exist in this ontology: if $X$ is a set, then it is of type $\mathbf{U}_{n+1}$ for some natural number $n$, and its members must be of type $\mathbf{U}_n$ – so in particular, $X \notin X$. We could even entirely banish the formula "$X \in X$" because there is no possible assignment of types that makes it a well-formed formula!
Unfortunately, we have had to introduce infinitely many types of sets, and it seems rather complicated to keep track of all these types in practice. Modern set theory resolves this by taking $\mathbf{U}_0$ to be the empty type and collapsing all the higher types into a single type $\mathbf{U}$. Thus, everything in the universe of discourse is a set (but that does not mean all sets are in $\mathbf{U}$!), and it makes sense to ask whether $x \in y$ for any $x$ and $y$ in $\mathbf{U}$. In particular, the once-banished formula $x \in x$ is well-formed again – so again we have to find some other solution to Russell's paradox.
Digging a little bit deeper, we discover that one of the assumptions of the paradox is the naïve axiom of comprehension: that is, whenever $\varphi (x)$ is a well-formed formula, then there exists a set $\{ x : \varphi (x) \}$ in $\mathbf{U}$ whose members are precisely those $x$ in $\mathbf{U}$ for which $\varphi (x)$ is satisfied. As such, we must be more careful about the sets we assume are in $\mathbf{U}$. This is where the set–class distinction comes from: in the usual parlance, 'set' refers to sets that are in $\mathbf{U}$, and 'class' refers to sets whose members are in $\mathbf{U}$ but are not necessarily in $\mathbf{U}$ themselves. To avoid confusion, I will say $\mathbf{U}$-set for the former.
So what should we assume instead of the naïve axiom of comprehension? Quine's New Foundations (NF) offers one option:
Stratified comprehension. Let us say a well-formed formula $\varphi (x)$ is stratified if there is a way to assign types to all the variables appearing in $\varphi (x)$ so that whenever $y \in z$ appears in $\varphi (x)$, $y$ is of type $\mathbf{U}_n$ and $z$ is of type $\mathbf{U}_{n+1}$, and whenever $y = z$ appears in $\varphi (x)$, both $y$ and $z$ are of type $\mathbf{U}_n$. Then, whenever $\varphi (x)$ is a stratified formula, the class $\{ x : \varphi(x) \}$ is a $\mathbf{U}$-set.
Roughly speaking, any set that exists under TST also exists under NF. In particular, the class $\{ x : x = x \}$ is a $\mathbf{U}$-set under NF – so NF admits a universal set. On the other hand, the paradoxical class $\{ x : x \notin x \}$ does not exist in NF, because $x \notin x$ is not a stratified formula. Now, the relative consistency of NF is not well-understood, but the related theory NFU (obtained by allowing $\mathbf{U}_0$ to be non-empty) is known to be consistent relative to ZF set theory. Thus, if we believe ZF is consistent, then we should also believe that there is a consistent set theory in which the universal set exists – in particular, the universal set does not produce a contradiction on its own.
Having mentioned it, I suppose I should also say how comprehension is handled in ZF. We have the following axiom:
Separation. For any $\mathbf{U}$-set $X$, if $\varphi (x)$ is any well-formed formula, the class $\{ x \in X : \varphi (x) \}$ is a $\mathbf{U}$-set.
Obviously, in the presence of a universal set, the axiom of separation is equivalent to the naïve axiom of comprehension, so we had better do something about that.
Regularity. Any non-empty $\mathbf{U}$-set $X$ has a member $Y$ such that any member of $Y$ is not a member of $X$. (Equivalently, $X \cap Y = \emptyset$.)
In particular, there is no universal set. It is tempting to call say that the membership relation $\in$ is well-founded on $\mathbf{U}$, but there is a subtlety here: only $\mathbf{U}$-sets are guaranteed to have a $\in$-minimal member. There are still other problems to fix, however – so far, there are no axioms that guarantee our universe $\mathbf{U}$ is non-empty! But that is a story for another day.
Finally, we should discuss formal class–set theories such as von Neumann–Bernays–Gödel (NBG) or Morse–Kelley (MK). In these theories, the universe of discourse $\mathbf{U}$ consists of 'classes', and a 'set' is defined to be a class that is a member of some class. To avoid confusion, let us say $\mathbf{V}$-class for the former and $\mathbf{V}$-set for the latter. A proper $\mathbf{V}$-class is a $\mathbf{V}$-class that is not also a $\mathbf{V}$-set.
We have a class comprehension axiom governing the formation of $\mathbf{V}$-classes:
Bounded class comprehension. If $\varphi (x)$ is a well-formed formula that does not have any bound variables ranging over $\mathbf{V}$-classes, then the class $\{ x : x \text{ is a } \mathbf{V} \text{-set and } \varphi (x) \}$ is a $\mathbf{V}$-class.
Full class comprehension. If $\varphi (x)$ is any well-formed formula, then the class $\{ x : x \text{ is a } \mathbf{V} \text{-set and } \varphi (x) \}$ is a $\mathbf{V}$-class.
NBG uses the bounded class comprehension axiom, while MK uses the full class comprehension axiom. Either way, we are guaranteed the existence of the $\mathbf{V}$-class $$\mathbf{V} = \{ x : x \text{ is a } \mathbf{V} \text{-set and } x = x \}$$ which contains all $\mathbf{V}$-sets. But is $\mathbf{V}$ itself a $\mathbf{V}$-set? To answer that we need an axiom telling us which $\mathbf{V}$-classes are $\mathbf{V}$-sets.
Limitation of size. Let us say that a bijection is a $\mathbf{U}$-bijection if its graph exists in $\mathbf{U}$, i.e. if it can be defined by a $\mathbf{V}$-class function. A $\mathbf{V}$-class $X$ is a $\mathbf{V}$-set if and only if there does not exist a $\mathbf{U}$-bijection between $X$ and $\mathbf{V}$.
In particular, $\mathbf{V}$ must be a proper $\mathbf{V}$-class. Note that by definition a proper $\mathbf{V}$-class cannot contain itself. Unfortunately, this doesn't answer the question of whether a $\mathbf{V}$-set can be contained in itself. In NBG and MK, this question is settled by the regularity axiom applied to classes:
Class regularity. Any non-empty $\mathbf{V}$-class $X$ has a member $Y$ such that any member of $Y$ is not a member of $X$.
Thus, no $\mathbf{V}$-set can contain itself – at least in NBG or MK.
Zhen Lin
I don't entirely understand, "We can clearly define a set of sets. I feel intuitively like we ought to define sets which do contain themselves"
You can define the collection of all sets. It is defined by the formula $x = x$. $V = \{x : x = x\}$. Similarly, you can define the collection of all sets that are not members of themselves. The formula that does this is $x \notin x$. Let $A = \{x : x \notin x\}$. In any structure in the language of set theory, these would correspond to definable classes.
However, both of these are not sets. You are interested in the fact that $A$ is not a set. Suppose that $A$ is a set. You have that $A \in A$ or $A \notin A$. If $A \in A$, then $A$ does not satisfy the formula defining $A$; hence $A \notin A$. Contradiction. Now suppose that $A \notin A$. Then $A$ satisfies the defining formula for $A$. So $A \in A$. Contradiction. Since neither can occur, you must have that $A$ is not a set.
Russell's paradox is resolved by limiting the comprehension axiom. Instead of all definable classes being sets, the axiom of specification asserts that the intersection of any definable class with a set is a set.
William
$\begingroup$ Do you mean "$x \in x$" where you have "$x = x$"? $\endgroup$ – Austin Mohr Aug 15 '12 at 1:41
$\begingroup$ @AustinMohr If you referring to the second sentence of the second paragraph, it should be $x = x$ since I am trying to define the entire universe $V$ here. $\endgroup$ – William Aug 15 '12 at 1:54
$\begingroup$ @AustinMohr He's just using a formula (x=x) that is trivially satisfied by every set, so the 'collection' of all that do must be the whole universe. $\endgroup$ – Quinn Culver Aug 15 '12 at 5:10
$\begingroup$ I'm also interested in the set $B=\{x:x\in x\}$, and whether it leads directly to a contradiction. I suppose it does since if $A$ is a set then $x\notin B$ is a perfectly good formula and we're back to Russell. But then—what are A and B? Are they "classes"? Is there a thing that prevents us from talking about the class which contains all and only those classes which do not contain themselves? I'm sure I've got some great reading ahead in the other answers that will answer these questions though... $\endgroup$ – crf Aug 15 '12 at 7:35
There are two intertwined issues here: first, (Bertrand) Russell's paradox of the "set" of all sets which don't contain themselves, which the question title asks about. This is resolved by the restriction of Frege's naive comprehension axiom to permit only subsets of pre-existing sets.
Second, your question body seems more concerned with the problem of sets which contain themselves. The axiom of foundation AF (or regularity), which bans such sets, actually is not the resolution of Russell's paradox: if we keep naive comprehension and add AF, the set $S$ of all sets which don't contain themselves will simply become the set of all sets. So it's apparent that $S \in S,$ while AF requires $S \notin S$, so naive set theory with just AF is inconsistent.
So AF isn't sufficient for fixing Frege's set theory. It isn't necessary, either: if Zermelo-Fraenkel set theory is inconsistent with AF removed, then ZF itself is inconsistent. This leaves the way open for alternatives. Aczel did the canonical work on this with his anti-foundation axiom AFA. Loosely speaking, this defines sets as things which can be broken down into sets, rather than built up from sets as with AF. The upshot is that sets exist which contain themselves, while the theory maintains the same consistency strength as ZF. So, the answer to your question whether a set can contain itself is "no" in standard set theory, but "yes" in general.
Kevin Carlson
There's a lot of interplay between set theory and first-order logic. A system of set theory is a list of first-order axioms (which 'live outside' the system of set theory!), so for example, naive set theory might have axioms like:
1) if two sets $x$ and $y$ have the same elements they are the same set (the axiom of extensionality). In first-order language, this might be written as $(\forall x)(\forall y)\left((\forall z)\,((z \in x) \iff (z\in y)) \iff (x=y)\right)$
2) for any first order formula with one free variable there is a set of objects which satisfies that formula (naive comprehension).
However, this leads to Russell's paradox, so we instead restrict the sets we consider to well-founded sets, that is, sets for which $x \neq \{x\}$ (or sets which are not members of themselves). This leads to Zermelo-Fraenkel set theory, and the important thing here is that we restrict the axiom scheme of comprehension so that we cannot ask for a set of all things which satisfy a first-order formula.
(it is interesting to note that originally Zermelo tried to code $0$ as $\emptyset$, 1 as $\{\emptyset\}$, 2 as $\{\{\emptyset\}\}$ etc etc, and $\omega$ as $\{ \dots\{\emptyset\}\dots\}$, which would have violated foundation)
We can, however, consider a class of all sets which satisfy a first-order formula. So, for instance, take the formula $\varphi$ with one free variable $x$ stating $(x=x)$; this is satisfied by all sets, so the class corresponding to $\varphi$ is the class of all sets. Note that this is not a set, as we cannot reach it by the axioms of ZFC - it lives 'outside ZFC' as it were. So, to recap, a class is the collection of sets satisfying a first-order formula (this is not circular!), not necessarily constructible inside our set theory.
Kris
$\begingroup$ Okay so, we do outlaw sets which contain themselves as elements? Is this justified only because it prevents us from defining a set which contains all and only sets of which it is a member? Actually, how is that even prevented by this restriction? $\endgroup$ – crf Aug 14 '12 at 22:13
$\begingroup$ @crf The axiom of foundation implies that there are no sets which contain themselves as elements. This by itself does not prevent $\{x : x \notin x\}$ from being a set; it is the axiom of specification that prevents you from proving that the above is a set. $\endgroup$ – William Aug 14 '12 at 22:30
$\begingroup$ Yes, I should probably have added that what stops Russell's paradox in ZFC (or how ZFC tries to stop Russell's paradox) is restriction of comprehension to replacement and separation. $\endgroup$ – Kris Aug 15 '12 at 3:09
$\begingroup$ @William so the axiom of foundation basically says $\{x:x\in x\}$ is not a set. You mention we need the axiom of specification in order to show that $\{x:x\notin x\}$ is not a set. Why do we need both axioms? If we outlaw $\{x:x\notin x\}$ axiomatically, doesn't that prevent contradictions arising out of $\{x:x\in x\}$? $\endgroup$ – crf Aug 15 '12 at 7:41
$\begingroup$ @crf Your statement isn't quite true; the axiom of foundation says that it is not the case that $x\in x$ for any set $x$, but that's different than saying that the collection $\{x:x\in x\}$ isn't a set. What AF says is that that collection is empty - that no set satisfies the condition - but the empty collection is in fact a perfectly good set - it's the empty set $\emptyset$. $\endgroup$ – Steven Stadnicki Aug 15 '12 at 18:07
The difference between sets and classes is subtle. It arises because in the naive approach to set theory, where you don't define what a set is nor axiomatize what you can and can't do with them (and instead just pretend everybody agrees on what a set is and just hope all is well) leads to the famous Russell's Paradox: A set $R$ that satisfies the contradictory assertion $R\in R$ and $R\notin R$.
There are many ways to deal with that problem in the world of axiomatic set theory. Since it is only certain sets that lead to such a contradiction one way out of the paradox is to exclude these pathological entities. But, that means you can no longer just form a set by the familiar construction of conjuring it by collecting all those things that satisfy some nice formula. The reason is that you don't a priori know if that entity that will be defined is one of those pathological things you cast out.
So what happens is that you must be more careful about the axioms of how sets can be manipulated, and you find yourself calling 'sets' fewer entities than you (at least think you) can define. Those things that you can define but cast away as not sets are called proper classes. The rest are sets (and classes). It is not always easy to determine if something that looks harmless is a proper class or not. For instance, the collection of all singleton sets is a proper class.
This is just one way of dealing with Russell's Paradox. There are others within classical axiomatic set theory, also ones that don't introduce a distinction between sets and classes (i.e., everything in the model is a set (but not everything you can imagine will be in the model)). Another way to address the paradox is by reconsidering our problem with the (what I will now call 'apparent') paradox: $R\in R$ and $R\notin R$. Namely, in classical logic it is well known that from the provability of a contradiction, such as Russell's Paradox, every assertion, such as "I am the Queen of England", can be proven. However, one might object that such an implication is not material leading one to consider a different kind of logic than the classical one. In the same vein, one might actually not be too worried by the apparent paradoxical nature of a statement of the form $P \wedge \neg P$, if that contradiction is not material to what interests you most. This approach will lead to paraconsistent logic.
Ittay Weiss
Disclaimer: I am very far from any level of expertise on logic or set theory, so what I've written below should be taken as my philosophical (or metamathematical), rather than mathematical answer.
I think the fundamental source of confusion is that, mathematically, it is in fact NOT clear what sets are or why they are. Sure, there are a number of axiomatizations (which tell you how sets ought to relate to one another), but an axiomatization can come only after an intuition about what the objects in question ought to be. For example, Euclidean geometry has multiple axiomatizations as well, and it cannot be said that one is a TRUE axiomatization. Rather, they all satisfy certain basic properties which we find intuitively should hold, i.e. intuition ought to exist before axiomatization. On the other hand, intuition can be refined by an attempt to axiomatize it (e.g. we really CANNOT omit the parallel postulate, etc.)
So a good question to ask yourself is what, if any, is the intuition for what a set ought to be? An even better question is to ask yourself what questions, if any, should the notion of a set allow us to answer (it is clear that geometry allows us to ask (and often answer) intuitively geometric questions, and that algebra allows us to ask (and sometimes answer) intuitively numeric questions. What kind of questions do we want sets to answer?
What do we use sets for? We use sets, naively, as notation. In common mathematical practice, sets are just a shorthand for collecting elements that satisfy a certain property (e.g. the "set" U of all sets, the "set" NO of all sets not containing themselves). How would you use these? Well, you would roughly use these to pick out elements, e.g. if I want a set that does not contain itself, instead of writing that all the time, I would just pick an element of the "set" NO. Would I ever consider the whole set? Maybe if I wanted to do some comparisons regarding which set had more stuff in it (was of larger cardinality), maybe for something else.
EDITED: What is the problem with Russell's paradox (in ZFC)? The problem is the following: suppose that S is a set. Then let V be the set of elements of S which do not contain themselves (this relies on the ability to construct subsets of sets satisfying any formula, i.e. restricted comprehension). Then if V is in V, we get the genuine contradiction that V is not in V. Hence V is not in V. Since it is not an element of itself, then it would be in V if it were in S, but it's not in V, so it's not in S.
Hence, for any set S there exists a set V that is not an element of itself, and that is also not an element of S (as a direct consequence of the restricted comprehension axiom allowing us to make a set $V=\{s\in S\colon\phi(s)\}$ where $\phi$ is any formula). END EDIT.
But notice that I can still talk about elements of S that do not contain themselves. In general I can still talk about sets that do not contain themselves (non-sets obviously don't contain themselves). I can take one of them, and prove things about it. What I cannot do is talk about ALL of them, at the same time (not that sets in ZFC have that many properties, other than cardinality, and maybe ordinality).
What about classes then? People talk about "the proper class" of the set of all sets, what is that? Kenneth Kunen wrote in The Foundations of Set Theory (I suspect with tongue firmly in cheek) that
"Formally, proper classes do not exist, and expressions involving them must be thought of as abbreviations for expressions not involving them."
What this means is that the set of all sets (or the set of all sets not containing themselves) are proper classes in the sense that there exists a formula "x contains itself" or "x does not contain itself", and that we can surely talk about elements of the proper class, i.e. sets that satisfy or do not satisfy the formula, but that proper classes by themselves are NOT mathematical objects: they are only notational convenience. (it is sometimes easier to say x in NO rather than x is a set that does not contain itself).
But wait, didn't I say that that's how we use sets (as notational convenience)? Yes, that's how we use sets naively. But notation is not a priori a mathematical object, it is an object of social convention (even if between mathematicians). What is the distinction between sets and proper classes then, if they fulfill the same function in daily practice? The difference is that sets CAN be made into mathematical objects (what is a mathematical object? It seems currently that it is an object that can be specified in a formal language which mirrors the intuitive properties we want, e.g. subsets and power sets, and which has no contradictions as far as we know) by any of the axiomatizations we have (and according to the different axiomatizations, we get seemingly different features for sets), while proper classes CANNOT be made into mathematical objects.
Vladimir Sotirov
$\begingroup$ Vladimir wrote: "What is the problem with Russel's paradox (in ZFC)? The problem is the following: suppose that $S$ is a set. Then let $V$ be the set of subsets of $S$ which do not contain themselves." This is not the form that Russell's Paradox takes. RP does not talk about subsets. There is in fact no problem with this particular construction. You are correct that $V$ is neither an element of itself nor $S$. But I don't think you will be able to prove that $V$ is an element of itself or $S$ to obtain a contradiction. $\endgroup$ – Dan Christensen Aug 17 '12 at 5:07
$\begingroup$ Vladimir wrote: "$V$ is not a subset of $S$." Do you mean $V$ is not an element of $S$? $\endgroup$ – Dan Christensen Aug 17 '12 at 5:23
$\begingroup$ Oops, I screwed up. Thanks for pointing out my mistake, Dan. The correction is that V should be defined as the set of elements of S that do not contain themselves, not subsets (if s in S is not a set, then vacuously s is not contained in itself and hence is in V). $\endgroup$ – Vladimir Sotirov Aug 17 '12 at 19:49
crf wrote:
"I understand that all sets are classes, but that there exist classes which are not sets, and this apparently resolves Russell's paradox...."
You don't need classes to resolve Russell's paradox. The key is that, for any formula P, you cannot automatically assume the existence of $\{x | P(x)\}$. If $P(x)$ is $x\notin x$, we arrive at Russell's Paradox. If $P(x)$ is $x\in x$, however, you don't necessarily run into any problems.
So, you can ban the use of certain formulas, hoping that the ban will cover all possibilities that lead to a contradiction. My preference (see http://www.dcproof.com ) is not to assume a priori the existence of any sets, not even the empty set. In such a system, you cannot prove the existence of any sets, problematic or otherwise. You can, of course, postulate the existence of a set in such a system, and construct other sets from it, e.g. subsets, or power sets as permitted.
Dan Christensen
Surprisingly, Russell's paradox occurs even in the absence of any axioms! We don't even need Extensionality - so it's not a quirk of modern set theory. To see this, suppose we have a two-place relation $\in$ defined on a domain of discourse whose elements will be called 'sets'. Now assume for a contradiction that for some 'paradoxical' set $R$ it holds that $$\forall x(x \in R \leftrightarrow x \notin x).$$
In words, this reads, 'For all sets $x$ it holds that $x$ is an element of $R$ if and only if $x$ is not an element of itself.' We therefore conclude:
$$R \in R \leftrightarrow R \notin R$$
Contradiction! So no such $R$ exists. But the point is, this argument works irrespective of your axioms.
goblin
$\begingroup$ The point of the answers is not that $R$ cannot exist (which, as you say, is straightforward), but rather that we need a workaround, so we can still have a theory of sets rich enough to do something with it but not so unwieldy that would allow us to conclude that $R$ exists (and therefore be inconsistent). What the answers are trying to do is to explain how the standard restrictions of Zermelo's, Quine's, etc, (attempt to) achieve this balance. $\endgroup$ – Andrés E. Caicedo Jun 23 '13 at 5:44
$\begingroup$ @Andres, I guess our readings of the original question are different. My reading is: which combination of axioms rules out the existence of $R$? $\endgroup$ – goblin Jun 23 '13 at 6:13
Validity of naively computing the de Broglie wavelength of a macroscopic object
Many introductory quantum mechanics textbooks include simple exercises on computing the de Broglie wavelength of macroscopic objects, often contrasting the results with that of a proton, etc.
For instance, this example, taken from a textbook:
Calculate the de Broglie wavelength for
(a) a proton of kinetic energy 70 MeV
(b) a 100 g bullet moving at 900 m/s
The pedagogical motivation behind these questions is obvious: when the de Broglie wavelength is small compared to the size of an object, the wave behaviour of the object is undetectable.
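(For concreteness, here is a small Python sketch of the naive computation for both parts of the exercise, added for illustration; the proton is treated relativistically, the bullet as a single point mass, and the constants are typed in by hand.)

import math

h  = 6.62607015e-34        # Planck constant, J s
c  = 2.99792458e8          # speed of light, m/s
mp = 1.67262192369e-27     # proton mass, kg
eV = 1.602176634e-19       # J per eV

# (a) proton with kinetic energy T = 70 MeV: (pc)^2 = T^2 + 2 T (m c^2)
T  = 70e6 * eV
pc = math.sqrt(T ** 2 + 2 * T * mp * c ** 2)
print("proton :", h * c / pc, "m")          # about 3.4e-15 m, a few femtometres

# (b) 100 g bullet at 900 m/s: p = m v
print("bullet :", h / (0.100 * 900), "m")   # about 7.4e-36 m

The bullet's value is some twenty orders of magnitude below the size of an atomic nucleus, which is exactly the pedagogical point; the question below is whether that number means anything at all.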
But what is the validity of the actual approach of applying the formula naively? Of course, a 100g bullet is not a fundamental object of 100g but a lattice of some $10^{23}$ atoms bound by the electromagnetic force. But is the naive answer close to the actual one (i.e. within an order of magnitude or two)? How does one even calculate the de Broglie wavelength for a many body system accurately?
quantum-mechanics waves
Mark Allen
The de Broglie wavelength formula is valid for a non-fundamental (many-body) object. The reason is that for a translation invariant system of interacting particles, the center of mass dynamics can be separated from the internal dynamics. Consequently, the solution of the Schrödinger equation can be obtained by separation of variables and the center of mass component of the wave function just satisfies a free Schrödinger equation (with the total mass parameter). Here are some details:
Consider a many body nonrelativistic system whose dynamics is governed by the Hamiltonian:
$\hat{H} = \hat{K} + \hat{V} = \sum_i \frac{\hat{p}_i^2}{2m_i} + V(x_i)$
($K$ is the kinetic and $V$ the potential energies respectively). In the translation invariant case, the potential $V$ is a function of relative displacements of the individual bodies and not on their absolute values. In this case, the center of mass dynamics can be separated from the relative motion since the kinetic term can be written as:
$ \hat{K} = \frac{\hat{P}^2}{2M} + \hat{K'} $
Where $P$ is the total momentum and $M$ is the total mass. $K'$ is the reduced kinetic term. In the case of a two-body problem, for example the hydrogen atom, $K'$ has a nice formula in terms of the reduced mass; for a larger number of particles the formula for $K'$ is less nice, but the essential point is that it depends on the relative momenta only.
For this type of Hamiltonian (with no external forces), the Schrödinger equation can be solved by separation of variables:
$\psi(x_i) = \Psi(X) \psi'(\rho_i)$
Where $X$ is the center of mass coordinate, and $\rho_i$ is a collection of the relative coordinates.
After the separation of variables, the center of mass wave function satisfies the free Schrödinger equation:
$ -\frac{\hbar^2}{2M}\nabla_X^2\Psi(X) = E \Psi(X) $
Whose solution (corresponding to the energy $E = \frac{p^2}{2M}$) has the form:
$\Psi(X) \sim exp(i \frac{p X}{\hbar})$
from which the de Broglie wave length can be read
$ \lambda = \frac{2 \pi \hbar}{p}$
David Bar Moshe
$\begingroup$ This is practically contradictory to John's answer except if one interprets it as "a wavelength" for an extended body. Won't the wavelengths of the constituents extend spatially by far over this wavelength? Where does decoherence come in? $\endgroup$ – anna v Mar 20 '13 at 11:41
$\begingroup$ @Anna What I tried to emphasize is that the de Broglie wavelength is a property of a free degree of freedom. The internal state of the composite system is not free and will not be characterized with its constituents de Broglie wavelengths but rather with its bound state energies for example its vibrational modes. $\endgroup$ – David Bar Moshe Mar 20 '13 at 13:51
$\begingroup$ @Anna cont. When conditions are provided such that the system stays in a single (ground state) internal state, it is possible to approximate it by its center of mass quantum dynamics governed by its de Broglie wavelength, for example in the case of the buckyball and other large molecule interferometry. $\endgroup$ – David Bar Moshe Mar 20 '13 at 13:52
$\begingroup$ Anna cont. It is true that in the composite case the decoherence conditions are harder to achieve, because not only can this system lose coherence by being "kicked" by interacting particles but also it can lose coherence by random excitations of its internal degrees of freedom for example when the thermal excitations exceed its vibrational energies. $\endgroup$ – David Bar Moshe Mar 20 '13 at 13:52
$\begingroup$ @John Rennie You are correct, but I don't know if it is due to the present limitations in knowledge or technology or there is a fundamental theoretical limitation of decoherence. In the first case it might be possible in the future to increase the experimental bounds where interference is observable. $\endgroup$ – David Bar Moshe Mar 20 '13 at 15:50
If you've read about optical diffraction experiments like the Young's slits, you may have noticed they all refer to coherent light. This is the requirement that all the light in the experiment is in phase. If you aren't using coherent light you won't observe any diffraction because different bits of the light will diffract differently and the diffraction pattern is washed out.
Exactly the same applies to observing the wavelike behaviour of quantum objects. If you're diffracting electrons this isn't a problem, but if you're trying to diffract a bullet you require all parts of the bullet to be coherent. In principle you could prepare a bullet in a coherent state, but even if you could manage this the bullet would immediately decohere due to interactions with its environment. This process is known as quantum decoherence. I've linked a Wikipedia article but be warned that the article isn't well written for non-nerds. If you want to know more you'd be better off Googling for popular science articles on decoherence.
Anyhow, as you obviously suspected from the way you've worded your question, because of decoherence it doesn't make sense to talk about a single de Broglie wavelength for macroscopic objects like bullets. As far as I know, the largest object ever to show quantum behaviour is an oscillator built by Andrew Cleland's group at Santa Barbara. This was around 50 - 100 microns in size, which is actually pretty big. However this is something of a special case and took enormous care to build. A more realistic upper limit is a buckyball, which is around a nanometre in size.
John Rennie
$\begingroup$ It's also interesting to note that when you can isolate a large-ish object like a buckyball sufficiently from its environment to prevent environmental decoherence, then interference can still be measured even if the object's size is larger than its de Broglie wavelength--the page here on buckyball interference experiments says "The de Broglie wave length is thus ~ 400 times smaller than the size of the particle". $\endgroup$ – Hypnosifl Dec 22 '14 at 23:40
$\begingroup$ I think the reason this works is because, as noted on this page, the separation between fringes is equal to the de Broglie wavelength times D/d, where D is the distance from the slits to the screen, and d is the spacing between slits--typically D is much larger than d, so the spacing between fringes will be larger than the de Broglie wavelength by the same factor. $\endgroup$ – Hypnosifl Dec 22 '14 at 23:41
$\begingroup$ sorry to come into this so late - but is not the issue to have all bullets coherent with each other rather than to have a bullet coherent with itself? $\endgroup$ – tom Mar 23 '15 at 10:19
$\begingroup$ @tom: well, a single photon will diffract therefore so should a single bullet. $\endgroup$ – John Rennie Mar 23 '15 at 16:09
$\begingroup$ Coherent light isn't required (though it helps). Here's a double-slit image from sunlight (via). $\endgroup$ – rob♦ Jul 20 '17 at 12:07
"But is the naive answer close to the actual one"
There is no "actual one" - the de Broglie formula defines the de Broglie wavelength for a body of mass $m$. It is useful for the description of microscopic particles that are observed to exhibit diffraction phenomena, and the values for electrons seem to be in agreement with experimentally measured diffraction patterns.
100 g bodies were not observed to exhibit diffraction phenomena. According to the de Broglie formula, their wavelength is so short that diffraction of the wave would be unobservable, so the formula was not disproven. But it is not useful either.
Ján Lalinský
I just spent a few hours researching this question, and it seems to me that:
It's important to understand what is meant by a particle having a wavelength. This great link has more information (http://electron6.phys.utk.edu/phys250/modules/module%202/matter_waves.htm) It emphatically does not mean that if we were to somehow isolate the particle and set it moving with some velocity v, it would move sinusoidally in space as a function of time. The key idea is that although deBroglie's relation about wavelength applies both to photons and other particles, it means something different in both cases! For the photon, it makes pretty intuitive sense - it refers to the EM wave that is the photon. What about other particles?
The wavelength refers to the particle's wavefunction, which, under the statistical interpretation, tells you the probability you would find the particle at a certain position x (http://hyperphysics.phy-astr.gsu.edu/hbase/uncer.html#c5). The previous link also makes sense of what it means to talk about the wavelength of a particle (if the wavelength isn't sinusoidal - basically, you take some sort of average).
Does a macroscopic object, like the aforementioned bullet, have a wavelength? I believe so, since we can take all the individual particles and consider the interactions between all the particles in the bullet (giving us entanglements and whatnot). We can imagine the bullet having some complicated wavefunction that describes the probability of it being found somewhere. This is the point on which I'm unsure, but the bigger point is that it doesn't really matter, since the object does not live in isolation from the environment. The idea is that it decoheres by interaction with the environment (a great description here: www.ipod.org.uk/reality/reality_decoherence.asp), which basically means that the environment acts differently on each part of the bullet, so even if the bullet might have originally acted like a quantum system, it no longer does after a measurement (i.e. when we see it). And so it doesn't make sense now to talk about the wavelength of the bullet since we really need to consider it as part of a bigger system, the bullet plus the environment.
$\begingroup$ But your final argument applies to small and elementary particles as well. I am not a downvoter, though. $\endgroup$ – Alchimista Apr 12 '18 at 12:18
Is there a true many-body green's function for interacting systems?
I've recently been trying to compute the Green's function for a non-interacting system of fermions. Since this is a site for mathematicians, for context, let me provide the following definition:
Definition: A noninteracting system of fermions is a quantum dynamical system along with the following data:
A single-particle Hilbert space $\mathfrak h$, for which the full Hilbert space of the dynamical system is the exterior algebra $\Lambda(\mathfrak h):=\oplus_{n\geq 0}\,\Lambda^n(\mathfrak h)$.
A non-interacting Hamiltonian $H$, which, for some basis $f_1,\cdots f_{\dim \mathfrak h}$ of the single particle Hilbert space reads as $H=\sum_{ij}A_{ij}c^\dagger(f_i)c(f_j).$
I've been puzzled by how physicists go about computing the Green's function of a non-interacting Hamiltonian $H$. To see what I mean, here is a theorem:
Theorem (Classification of Non-interacting Systems): For $(\Lambda(\mathfrak h), A_{ij})$ a general non-interacting system of fermions, the time-evolution of any $k$-particle state factors in the following manner: \begin{align*} e^{itH}(g_1\wedge \cdots \wedge g_k)=e^{it\mathcal H}g_1\wedge \cdots \wedge e^{it\mathcal H}g_k \end{align*} Where $\mathcal H= \sum_{ij}A_{ij}\,\left|f_i\right>\left<f^j\right|$ is a single-particle Hamiltonian, and the raised index indicates dualization. In other words, a "non-interacting system of identical fermions" always factors into a set of identical, non-interacting, single-particle systems, where the single-particle dynamics has the replacements \begin{align*} c^*(f_i)\,c(f_j)\mapsto \left|f_i\right>\left<f_j\right|.\\ \end{align*}
Enough background. When physicists say, "The Green's function of this non-interacting Hamiltonian $H$", I would think that they mean $$G:=\frac{1}{\frac{i}{\hbar}H-\partial_t},~~~~~~ H=\sum_{ij}A_{ij}c^\dagger(f_i)c(f_j). $$ However, they really mean the Green's function of the associated single-particle Hamiltonian: $$G':=\frac{1}{\frac{i}{\hbar}\mathcal H-\partial_t},~~~~~~\mathcal H= \sum_{ij}A_{ij}\,\left|f_i\right>\left<f^j\right|.$$ However, this does not generalize straightforwardly to interacting systems, and therefore, I am actually curious: is there a nice formula for $G$ in terms of $G'$? Naively, using the direct-sum decomposition $\Lambda(\mathfrak h)=\oplus_{k\geq 0}\,\Lambda^k(\mathfrak h)$, we get $$\frac{1}{\frac{i}{\hbar}H-\partial_t}=\bigoplus_{k\geq 0} \frac{1}{\frac{i}{\hbar}H_k-\partial_t}$$ So this reduces to computing the Green's function of the $k$-particle Hamiltonian: $(\frac{i}{\hbar}H_k-\partial_t)^{-1}$. However, this is as far as I can get on my own.
ap.analysis-of-pdes differential-equations quantum-mechanics quantum-field-theory
David Roberts
$\begingroup$ isn't this just the difference between "first" and "second" quantization? --- en.wikipedia.org/wiki/Second_quantization $\endgroup$ – Carlo Beenakker Feb 22 '16 at 7:24
$\begingroup$ Sure, but then how does one find the "second-quantized" Green's function? $\endgroup$ – David Roberts Feb 22 '16 at 7:47
You ask for a relation between the Green's function of the single-particle Hamiltonian and the Green's function of the many-particle Hamiltonian, in the case of non-interacting particles (fermions or bosons). Let me try to explain that the retarded Green's functions are identical.
• For the single-particle Hamiltonian $H(x)=-\partial_x^2+V(x)$ the retarded Green's function is defined by $$i\partial_t G(x,x';t,t')=H(x)G(x,x';t,t'),\;\;t>t',$$ with the condition that $G(x,x';t,t')\equiv 0$ for $t<t'$ and $$\lim_{t\downarrow t'}G(x,x';t,t')=-i\delta(x'-x).$$
• The retarded many-particle Green's function is defined as the ground state expectation value $\langle\cdots\rangle$ of the (anti-)commutator of field operators $\hat\psi(x,t)$, $${\cal G}(x,x';t,t')=-i\langle\hat\psi(x,t)\hat\psi^\dagger(x',t')\pm\hat\psi^\dagger(x',t')\hat\psi(x,t)\rangle\theta(t-t').$$ The function $\theta(t)$ is the unit step function, the $+$ sign is for fermions and the $-$ sign for bosons. The field operator satisfies the operator equation $$i\partial_t\hat\psi(x,t)=[\hat\psi(x,t),\hat{\cal H}],$$ with $\hat{\cal H}$ the many-particle Hamiltonian operator and $[\cdot,\cdot]$ the commutator. Note also the equal-time (anti-commutation) relation $$\hat\psi(x,t)\hat\psi^\dagger(x',t)\pm\hat\psi^\dagger(x',t)\hat\psi(x,t)=\delta(x-x').$$
• Now let us compare the two functions $G(x,x';t,t')$ and ${\cal G}(x,x';t,t')$. Both vanish for $t<t'$ and both satisfy the delta-function limit when $t\downarrow t'$. But in general the function ${\cal G}$ is not the Green's function of any differential equation, unlike $G$.
However, for non-interacting particles we have $$[\hat\psi(x,t),\hat{\cal H}]=(-\partial_x^2+V(x))\hat\psi(x,t)=H(x)\hat\psi(x,t),$$ hence ${\cal G}$ satisfies the same differential equation $$i\partial_t {\cal G}(x,x';t,t')=H(x){\cal G}(x,x';t,t'),$$ as $G$ and we conclude that they are the very same function.
I hope this clarifies what physicists mean when they speak of the "many-particle Green's function". The name suggests otherwise, but in general this function is not the Green's function of any differential equation. (Its equation of motion is nonlinear when the particles interact.) The reason why physicists call ${\cal G}$ a Green's function is because it reduces to the Green's function $G$ of the Schrödinger equation in the absence of interactions.
All of this is for the retarded Green's function. The time-ordered Green's function or thermal Green's function are different quantities, that do not reduce to the single-particle Green's function even in the absence of interactions.
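(A brute-force numerical check of this statement on a toy system, added for illustration and not taken from the answer: two fermionic modes with a 2x2 single-particle matrix $A$, the 4-dimensional Fock space built by a Jordan-Wigner construction, and the anticommutator $\{\hat\psi_i(t),\hat\psi_j^\dagger(0)\}$ compared against the single-particle propagator $e^{-iAt}$. It assumes numpy and scipy; the matrix $A$ is arbitrary.)

import numpy as np
from scipy.linalg import expm

A = np.array([[0.5, 0.3],
              [0.3, -0.2]])           # hermitian single-particle Hamiltonian

a  = np.array([[0., 1.], [0., 0.]])   # one-mode annihilation operator
Z  = np.diag([1., -1.])
I2 = np.eye(2)
c  = [np.kron(a, I2), np.kron(Z, a)]  # Jordan-Wigner fermion operators c_1, c_2

H = sum(A[i, j] * c[i].conj().T @ c[j] for i in range(2) for j in range(2))

t  = 1.7
U  = expm(-1j * H * t)                # many-body time evolution
ct = [U.conj().T @ ci @ U for ci in c]   # Heisenberg-picture c_i(t)

# {c_i(t), c_j^dagger(0)} is proportional to the identity; extract the scalar by trace/4
G_many = np.array([[np.trace(ct[i] @ c[j].conj().T + c[j].conj().T @ ct[i]) / 4
                    for j in range(2)] for i in range(2)])
G_single = expm(-1j * A * t)

print(np.allclose(G_many, G_single))  # True

With an interaction term added to $H$ the anticommutator is no longer proportional to the identity, which is precisely the sense in which ${\cal G}$ stops being the Green's function of any linear differential equation.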
Carlo Beenakker
In http://arxiv.org/abs/1602.07793, I compute the honest (in the sense of inverting a differential operator) Green's function of the time-dependent Schrodinger equation, for a system of non-interacting particles.
In short, the full Green's function is a simple extrapolation of the single-particle Green's function, and may be computed rather easily in the non-interacting case (though no one is interested in such an object, for some reason).
answered Apr 3 '16 at 6:20
September 2021, 26(9): 5047-5066. doi: 10.3934/dcdsb.2020332
Stochastic modelling and analysis of harvesting model: Application to "summer fishing moratorium" by intermittent control
Xiaoling Zou 1,, and Yuting Zheng 2,
Department of Mathematics, Harbin Institute of Technology(Weihai), Weihai 264209, China
Department of Basic Course, Xingtai Polytechnic College, Xingtai 054000, China
* Corresponding author: Xiaoling Zou
Received November 2019 Revised June 2020 Published September 2021 Early access November 2020
As is well known, the "summer fishing moratorium" is an internationally recognized fishery management measure, which can protect fish stocks and promote the balance of marine ecology. In this paper, "intermittent control" is used to simulate this management strategy, which is the first such attempt in theoretical analysis, and the intermittence fits the moratorium naturally. As an application, a stochastic two-prey one-predator Lotka-Volterra model with intermittent capture is considered. The modeling ideas and analytical techniques in this paper can also be applied to other stochastic models. In order to deal with intermittent capture in a stochastic model, a new time-averaged objective function is proposed. Moreover, the corresponding optimal harvesting strategies are obtained by using the equivalent method (equivalence between time average and expectation). Theoretical results show that intermittent capture can affect the optimal harvesting effort, but it cannot change the corresponding optimal time-averaged yield, which accords with observations. Finally, the results are illustrated by practical examples of marine fisheries and numerical simulations.
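To make the idea of intermittent capture concrete, the following is a minimal sketch, not the paper's two-prey one-predator system: a one-dimensional stochastic logistic population harvested only during alternating "fishing-on" windows, simulated with the Euler–Maruyama scheme, from which a time-averaged yield is computed. All parameter values and the seasonal on/off pattern are illustrative assumptions.

```python
import numpy as np

# Minimal illustration of "intermittent capture": a stochastic logistic population
# dX = X*(r - a*X)*dt + sigma*X*dW, harvested with effort E only while fishing is on.
rng = np.random.default_rng(0)
r, a, sigma = 0.8, 0.4, 0.1          # growth rate, competition, noise intensity (assumed)
E = 0.3                               # harvesting effort during fishing windows (assumed)
T, dt = 200.0, 0.01
season = 1.0                          # length of one fishing / moratorium phase

x = 1.0
yield_integral = 0.0
for k in range(int(T / dt)):
    t = k * dt
    fishing_on = (int(t / season) % 2 == 0)   # alternate fishing and moratorium phases
    h = E * x if fishing_on else 0.0          # catch rate only when fishing is allowed
    dW = rng.normal(0.0, np.sqrt(dt))
    x += x * (r - a * x) * dt - h * dt + sigma * x * dW
    x = max(x, 1e-8)                          # keep the sample path positive numerically
    yield_integral += h * dt

print("time-averaged yield:", yield_integral / T)
```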
Keywords: summer fishing moratorium, intermittent control, asymptotically stable in distribution, optimal harvesting, equivalent method.
Mathematics Subject Classification: Primary: 34F05, 93E03; Secondary: 60H10.
Citation: Xiaoling Zou, Yuting Zheng. Stochastic modelling and analysis of harvesting model: Application to "summer fishing moratorium" by intermittent control. Discrete & Continuous Dynamical Systems - B, 2021, 26 (9) : 5047-5066. doi: 10.3934/dcdsb.2020332
Figure 1. Numerical simulations for sample paths
Figure 2. Numerical simulations for time average
Figure 3. The effects of intermittent control in one-dimensional situation
Transcriptomics- and metabolomics-based integration analyses revealed the potential pharmacological effects and functional pattern of in vivo Radix Paeoniae Alba administration
Sining Wang1 na1,
Huihua Chen1 na1,
Yufan Zheng2 na1,
Zhenyu Li3,
Baiping Cui2,
Pei Zhao4,
Jiali Zheng1,
Rong Lu1 &
Ning Sun2
Radix Paeoniae Alba (RPA) and other natural medicines have remarkable curative effects and are widely used in traditional Chinese medicine (TCM). However, due to their multi-component and multi-target characteristics, it is difficult to study the detailed pharmacological mechanisms of these natural medicines in vivo. Therefore, their real effects on organisms are still uncertain.
RPA was selected as the research object, and the present study was designed to explore the complex mechanisms of RPA in vivo by integrating and interpreting transcriptomics-based RNA-seq data and metabolomics-based NMR spectra after RPA administration in mice. A variety of dimension-reduction algorithms and classifier models were applied to the processing of the high-throughput data.
Among the serum metabolites, the contents of PC and glucose were significantly increased, while the contents of various amino acids, lipids and their metabolites were significantly decreased in mice after RPA administration. Based on the Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) databases, differential analysis showed that the liver was the site where RPA exerted a significant effect, which confirmed the rationality of "meridian tropism" in TCM theory. In addition, RPA played a role in lipid metabolism by regulating genes encoding enzymes of the glycerolipid metabolism pathway, such as 1-acyl-sn-glycerol-3-phosphate acyltransferase (Agpat), phosphatidate phosphatase (Lpin), phospholipid phosphatase (Plpp) and endothelial lipase (Lipg). We also found that RPA regulates several substance addiction pathways in the brain, such as the cocaine addiction pathway, and the related targets were predicted based on sequencing data from a pathological model in the GEO database. The overall effective pattern of RPA was intuitively presented as a multidimensional radar map through a self-designed model, which indicated that the liver and brain were the organs mainly regulated by RPA, in contrast to the traditional meridian tropism theory.
Overall, this study expanded the potential applications of RPA and provided possible targets and directions for further mechanistic study; meanwhile, it also established, for the first time, a multidimensional evaluation model to represent the overall effective pattern of TCM. In the future, such studies based on high-throughput datasets can be used to interpret the theories of TCM and to provide a valuable research model and clinical medication reference for TCM researchers and doctors.
Radix Paeoniae Alba (RPA) is the dried root of the Chinese herbaceous peony buttercup plant, which is widely used in the treatment of liver diseases and emotion-related diseases in traditional Chinese medicine (TCM). In TCM theory, RPA is thought to have an effect of "nourishing blood, regulating menstruation, retaining "Yin", stopping sweat, smoothing liver and relieving pain" according to Pharmacopoeia of the People's Republic of China (Commission, 2015). According to modern pharmacological studies, paeoniflorin (PF), the main active ingredient in RPA, plays a role in the nervous and immune systems. PF significantly attenuated inflammatory pain by protecting neural progenitor cells and PC12 cells from oxidative stress damage through the ROS/PKC δ/NF-κB pathway and the PI3K/Akt-1 pathway [1,2,3]. PF also decreased caspase–3 activity and downregulated p–p38 MAPK expression in Alzheimer's disease (AD) mice [4]. The anti-inflammatory effect also allowed PF to reduce cerebral infarct and neurological deficits in rats with ischemia–reperfusion injury, suggesting that PF might be used for the treatment of stroke [5]. In addition, PF inhibited the activities and protein expression levels of inducible nitric oxide synthase, diminished IL-8 production, and thus exerted cardioprotective and hepatoprotective effects [6, 7].
Previous studies of RPA have typically used single components such as PF as the main research subject. However, changes in the organism caused by the whole herb are often different from those caused by a single component within the herb. The therapeutic efficacy of RPA has been confirmed by various clinical trials. However, due to the complex composition of RPA and the limitations of research techniques, the pattern of its complex effects and the underlying microscopic changes have not been well interpreted. An unbiased, hypothesis-free method to analyze the effects of natural medicines based on high-throughput data generated by combinations of different omics approaches is needed. With the development of "omics" and related technologies, their systematic strategies have proven highly consistent with the "holistic view" in the theoretical system of TCM and have gradually been accepted by researchers. In addition, network pharmacology can clarify the synergistic effects of multicomponent, multitarget drugs, so it is also a suitable method to evaluate the efficacy and reveal the functional mechanisms of natural drugs [8].
Transcriptomics is one of the earliest omics technologies, which analyzes the changes in gene transcription caused by environmental or drug stimuli on an overall level. Metabolomics can evaluate the organism's response to conditional disturbances and yield biomarkers by identifying endogenous molecular metabolites that are quantitatively changed. The combined application of transcriptomics and metabolomics can systematically depict the complex relationship between phenotypes and mechanisms. Although the mapping relationship between the metabolome and transcriptome is not direct in the information transmission sequence of the central dogma [9], with increasing examples of their combined application, the analytical methods have become increasingly mature and reasonable. The integration analysis method based on prior knowledge can intuitively generate valuable insights [10], while the integration method based on metabolism-transcription pathways via KEGG and other public databases can reveal the functional relationships among the targets more conveniently [11]. In addition, parametric models can also be constructed based on rate distortion criteria [12] or weighted gene correlation network analysis (WGCNA) [13]. These methods can overcome the limitations of established information and make the integration of omics data more efficient.
The aim of this study was to clarify the effect of RPA as a whole medical entity, rather than its single components, on living organisms, and to further compare the results with the meridian tropism theory of TCM. Through the design of an in-laboratory model, a method that could present the action pattern of traditional Chinese natural medicines was established, which aimed to interpret TCM theory within a scientific paradigm and to provide a valuable research model and clinical medication reference for TCM researchers and doctors. In addition, this method is more aligned than previous studies with the trend of modern biomedicine towards systematization, which expands the content of TCM theory and provides a new perspective for future relevant research.
Quantitative high-performance liquid chromatography (HPLC) analysis of RPA
The contents of RPA in the sample preparations (180103, Kangqiao Traditional Chinese Medicine Co., Ltd., Shanghai) were determined by HPLC. The standard substance was PF (YZ-110736, National Institute for the Control of Pharmaceutical and Biological Products, Beijing). The chromatographic column was a QS-C18 Plus (4.6 mm × 50 mm, Puning Analysis Technology Co., Ltd., Shanghai). The mobile phase was acetonitrile-0.1% H3PO4 in gradient mode at a flow rate of 1.0 mL min−1, with a split ratio of 14:86, and the observation wavelength was 230 nm.
Our research involved the utilization of laboratory animals under the supervision of the Fudan University Institutional Animal Care and Use Committee. The animals used were specific pathogen-free male C57BL/6 J mice (Slake Laboratory Animal Co. Ltd., Shanghai), which were bred for up to 8 weeks (weighing 20 ± 1.5 g) to adapt to the environment before the experiment. All animals were maintained in a room with regulated temperature (20 ± 2 °C) and relative humidity (40–70%). An artificial 12/12-h light/dark cycle was maintained, with lights turned on at 08:00 a.m.
Preparation of RPA extract
RPA decoction pieces were soaked for 30 min at 50 °C in 6 volumes of water and then heated twice in a condensation reflux device, 1 h each time. The obtained decoction was made into a freeze-dried powder, and the powder extraction rate was calculated, which was 17.68% in this study. The dosage of RPA was calculated with the Meeh-Rubner formula coefficient k according to the common human adult clinical dose. The average administered dosage was 3.9 g crude drug per kg body weight per day.
Transcriptome sequencing (RNA-seq)
Experimental mice were randomly divided into the control group and the RPA group (n = 3). After 7 days of intragastric administration, the vital organs (heart, liver, spleen, lung, kidney, brain and adrenal glands) were harvested for total RNA extraction. Transcriptome libraries were prepared with NGS Multiplex Oligos for Illumina (ExCell Biotech Co., Ltd.) according to the manufacturer's instructions. After the libraries were amplified and quality controlled, they were run on an Illumina HiSeqX10 platform with a paired-end 150 bp sequencing strategy. Fragments per kilobase of exon per million mapped reads (FPKM) were used to compare gene expression differences between samples, and the software OmicsBean (http://www.omicsbean.com:88/) was used to identify and analyze the differentially expressed genes (DEGs), with fold change > 1.5 and FDR < 0.05 used as the criteria for significant differences between the two groups, as described in previous studies [14].
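As an illustration of the DEG criteria above, the following is a minimal, hypothetical sketch of the filtering step; the input file and its column names (gene, fpkm_con, fpkm_rpa, fdr) are assumptions rather than the actual OmicsBean output format.

```python
import pandas as pd

# Hypothetical DEG filter: fold change > 1.5 (or < 1/1.5) and FDR < 0.05.
df = pd.read_csv("liver_expression.csv")                      # assumed per-gene table
df["fold_change"] = (df["fpkm_rpa"] + 1e-9) / (df["fpkm_con"] + 1e-9)
up = df[(df["fold_change"] > 1.5) & (df["fdr"] < 0.05)]       # upregulated in RPA group
down = df[(df["fold_change"] < 1 / 1.5) & (df["fdr"] < 0.05)] # downregulated in RPA group
print(len(up), "upregulated,", len(down), "downregulated genes")
```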
Untargeted metabolomics analysis (NMR)
Mice were grouped and treated in the same way as those for the transcriptome analysis (n = 7). At the end of the experiment, all animals were fasted for 12 h with free access to water and sacrificed following isoflurane anesthesia. Serum samples were obtained for each group using a standard protocol (hemolytic samples were removed). Each serum sample (30 μL) was mixed with 30 μL of phosphate buffer (45 mM, pH 7.43), transferred into an NMR tube and used directly for detection. The NMR spectroscopy analysis methods followed a previously published paper [15]. Data analysis was performed with the software package SIMCA-P+ (V14.0, Umetrics, Sweden) and a MATLAB script (MATLAB V7.1, Mathworks Inc., USA). Feature extraction analysis and OPLS-DA were carried out. All models were further tested with a t-test for the significance of intergroup differences (with p < 0.05 as the significance level). MetaboAnalyst 4.0 (https://www.metaboanalyst.ca) was used for pathway enrichment analysis [16].
Network pharmacology
The Traditional Chinese Medicine Systems Pharmacology Database and Analysis Platform (TCMSP, http://lsp.nwu.edu.cn/index.ph) was used to select the active ingredients of RPA by combining oral bioavailability (OB > 30%) and the drug-likeness index (DL > 0.18). Corresponding targets were also identified by searching the TCMSP. Target symbol names were obtained through an in-house Perl script based on the UniProt database. The conversion between target symbols and Entrez IDs and the enrichment analysis were performed using R packages ("org.Hs.eg.db" and "pathview").
A multidimensional algorithm model was established based on multiorgan transcriptomics data
PCA was used to reduce the dimension of the 7 transcriptome datasets, and the first principal component (PC01) was selected as the representative, with its contribution rate calculated. Next, the magnitude of change in the RPA group relative to the CON group was calculated through the formula:
$$ y_{i} = \frac{\overline{PC01}_{i}^{\,\mathrm{RPA}} - \overline{PC01}_{i}^{\,\mathrm{CON}}}{PC01_{i,\max} - PC01_{i,\min}} $$

where \( y_{i} \) is the amplification of the \(i\)-th organ in the RPA group, \( \overline{PC01}_{i}^{\,\mathrm{RPA}} \) and \( \overline{PC01}_{i}^{\,\mathrm{CON}} \) are the average PC01 of the \(i\)-th organ in the RPA and CON groups, respectively, and \( PC01_{i,\max} \) and \( PC01_{i,\min} \) are the maximum and minimum PC01 over both groups.
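The following is a minimal Python sketch of this step for a single organ, assuming a (samples × genes) FPKM matrix with the first three rows from the CON group and the last three from the RPA group; the file name and the log transform are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

# Project all samples of one organ onto PC01 and compute the normalized shift y_i.
expr = np.loadtxt("liver_fpkm_matrix.txt")           # assumed 6 x n_genes FPKM matrix
pca = PCA(n_components=1)
pc01 = pca.fit_transform(np.log1p(expr)).ravel()     # PC01 score per sample
print("PC01 contribution rate:", pca.explained_variance_ratio_[0])

con, rpa = pc01[:3], pc01[3:]                        # CON samples first, RPA samples last
y_i = (rpa.mean() - con.mean()) / (pc01.max() - pc01.min())
print("normalized amplification y_i:", y_i)
```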
In the third step, \( y_{i} \) was given a weight according to the number of DEGs in each organ. Fuzzy set theory was used to design the weights: four trapezoidal membership functions were designed as follows (and shown in Additional file 1: Fig. S1), representing the "least", "less", "more" and "most" conditions, whose weight values were set to 0.3, 0.5, 0.7 and 0.9, respectively. The four membership degrees of the DEG counts in the different organs were then calculated, the function index corresponding to the maximum membership value was determined by the maximum-operator rule, and it was converted to the corresponding weight value (a minimal sketch of this step is given after the membership functions below). The weight values corresponding to the seven transcriptomes are shown in Additional file 2: Table S1.
$$ \mu_{1}(x)=\begin{cases}1 & x<300\\ \dfrac{400-x}{100} & 300\le x\le 400\\ 0 & x>400\end{cases} $$

$$ \mu_{2}(x)=\begin{cases}0 & x<300\\ \dfrac{x-300}{100} & 300\le x\le 400\\ 1 & 400\le x\le 550\\ \dfrac{650-x}{100} & 550\le x\le 650\\ 0 & x>650\end{cases} $$

$$ \mu_{4}(x)=\begin{cases}0 & x<800\\ \dfrac{x-800}{100} & 800\le x\le 900\\ 1 & x>900\end{cases} $$
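Below is a minimal, hypothetical Python sketch of this weighting step, using the breakpoints and weight values stated above; \(\mu_{3}\) is omitted because its definition is not reproduced in the text.

```python
# Evaluate the trapezoidal membership functions at a DEG count and pick the weight
# whose membership is maximal (maximum-operator rule). mu3 ("more", weight 0.7) is
# omitted here because its breakpoints are not given above.
def mu1(x):
    return 1.0 if x < 300 else (400 - x) / 100 if x <= 400 else 0.0

def mu2(x):
    if x < 300: return 0.0
    if x <= 400: return (x - 300) / 100
    if x <= 550: return 1.0
    if x <= 650: return (650 - x) / 100
    return 0.0

def mu4(x):
    return 0.0 if x < 800 else (x - 800) / 100 if x <= 900 else 1.0

WEIGHTS = {"mu1": 0.3, "mu2": 0.5, "mu4": 0.9}       # "least", "less", "most"

def organ_weight(n_degs):
    memberships = {"mu1": mu1(n_degs), "mu2": mu2(n_degs), "mu4": mu4(n_degs)}
    best = max(memberships, key=memberships.get)      # maximum-operator rule
    return WEIGHTS[best]

print(organ_weight(920))   # e.g. a liver-sized DEG count maps to weight 0.9
```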
RPA caused changes in serum metabolites in mice
To ensure the safety and effectiveness of RPA, quality control by HPLC was carried out for the drug used in the experiment, and the test results were in line with the descriptions in Pharmacopoeia of the People's Republic of China [17]. Reports and typical fingerprints of the standard and test samples are displayed in Additional file 3: Fig. S2 and Additional file 4: Table S2. Next, the experiment was carried out according to the workflow shown in Fig. 1. After the NMR serum metabolomics experiments, of which the representative spectra and primary signals are displayed in Additional file 5: Fig. S3, feature extraction analyses were conducted with various dimension-reduction methods to explore the rationality of the model. The first three panels in Fig. 2A revealed a surface that could completely separate the two groups of data after rotation. This finding suggested that the choice of dimension-reduction algorithm must be combined with a classifier. We therefore classified the data with the Back-Propagation Neural Network (BPNN), Support Vector Machine (SVM), Random Forest (RF), Naive Bayesian (NB) and k-Nearest Neighbor (kNN) approaches at this stage (Additional file 6: Table S3). The recognition accuracies were calculated for each dimension-reduction algorithm under the different classification algorithms, and again for each classification algorithm under the different dimension-reduction algorithms (Tables 1, 2). It can be concluded from the results that the dimensionality-reduced data obtained by the PCA algorithm could distinguish the samples of the control (CON) group from those of the RPA group most effectively. The average recognition time of a single sample with the SVM classification algorithm was far less than that with the other algorithms. Therefore, it was feasible and effective to use serum metabolites to distinguish the samples of the CON and RPA groups. If the number of samples increased gradually, the recognition accuracy could be further improved.
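A minimal sketch of this screening step is given below, assuming a binned NMR feature matrix saved as nmr_bins.npy with 7 CON samples followed by 6 RPA samples; the BPNN is omitted and scikit-learn defaults stand in for the actual tuned models.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Reduce the NMR bins with PCA, then compare classifiers by cross-validated accuracy.
X = np.load("nmr_bins.npy")                     # assumed 13 samples x spectral bins
y = np.array([0] * 7 + [1] * 6)                 # 0 = CON, 1 = RPA

X_red = PCA(n_components=3).fit_transform(X)
classifiers = {
    "SVM": SVC(kernel="rbf"),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "NB": GaussianNB(),
    "kNN": KNeighborsClassifier(n_neighbors=3),
}
for name, clf in classifiers.items():
    acc = cross_val_score(clf, X_red, y, cv=3).mean()
    print(f"{name}: mean CV accuracy = {acc:.2f}")
```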
Overview of the experimental design. a Referring to Pharmacopoeia of the People's Republic of China for quality control by HPLC, RPA was prepared as a freeze-dried powder for in vivo administration in mice. b Vital organs were harvested for transcriptomics analyses, and serum was collected for metabolomics analyses (n = 3 samples, intragastric administration of RPA for 1 week)
RPA administration induced changes in the serum metabolism profile in mice. a Feature extraction analyses based on six data dimension-reduction algorithms (n = 7 CON group and 6 RPA group samples). b ROC curves based on the CV performance by the SVM classifier, the default is the ROC curves from all models averaged from all CV runs, and the 95% confidence interval can be computed in this case. c Model of 20 features that were selected as screening criteria, AUC = 0.94. d Correlation analysis network among all identified serum metabolites. e Differential metabolite analysis model between groups based on OPLS-DA. f Pathways enriched by differential metabolites-based KEGG; the redder the color is, the more significant the result. p < 0.05. PCA principal component analysis, t-SNE T-distributed stochastic neighbor embedding, ISOMAP isometric mapping, LLE locally linear embedding, WT wavelet transform, ROC receiver operating characteristic, CV cross-validation, AUC area under the curve
Table 1 The average recognition accuracy of each dimension reduction algorithm under different classification algorithms
Table 2 The average recognition accuracy of each classification algorithm under different dimensionality reduction algorithms
Since SVM showed a good effect on the classification of small samples, we chose the SVM classifier to calculate the receiver operating characteristic (ROC) curve (Fig. 2b). Metabolites with high frequency in the model (Fig. 2c, 20 features, AUC = 0.94), such as glycerophosphocholine (GPC), glucose, acetoacetate, and a variety of unsaturated fatty acids, can be selected as serum biomarkers to show the effect of RPA. Afterward, the Metscape tool [18] was used to perform a correlation analysis, and the results indicated that phosphorylcholine (PC) and N-acetylated protein (NAG) played a significant role in the correlation network. Among them, PC had a significant negative correlation with various amino acids, including valine, isoleucine, leucine, and some lipids, while NAG had a significant positive correlation with various small molecular fatty acids in the network (Fig. 2d).
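Below is a minimal, hypothetical sketch of the cross-validated ROC/AUC evaluation with the SVM classifier (cf. Fig. 2b); the input file, label layout, and fold count are assumptions, and the feature-selection step behind the 20-feature panel is not reproduced.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC

# Pool out-of-fold SVM scores across stratified folds and compute a single AUC.
X = np.load("nmr_bins.npy")                     # assumed serum NMR features (13 samples)
y = np.array([0] * 7 + [1] * 6)                 # 0 = CON, 1 = RPA
X_red = PCA(n_components=3).fit_transform(X)

scores, truth = [], []
for train, test in StratifiedKFold(n_splits=3, shuffle=True, random_state=0).split(X_red, y):
    clf = SVC(kernel="rbf", probability=True).fit(X_red[train], y[train])
    scores.extend(clf.predict_proba(X_red[test])[:, 1])
    truth.extend(y[test])
print("cross-validated AUC:", roc_auc_score(truth, scores))
```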
To identify the significantly different metabolites induced by RPA administration, OPLS-DA was conducted on the metabolomics data, and the model parameters were obtained as R2X = 0.915, R2Y = 1, and Q2 = 0.773, which indicated that the model was stable and reliable and had high predictive ability (Fig. 2e). Variable importance in projection (VIP) > 1.0 was used as the screening standard for differential metabolites, and 16 statistically significant differential metabolites were finally determined (Table 3), including mainly cholines (PC and GPC), amino acids (leucine and valine), lipids [CH2C=C, C=CCH2C=C, R-CHI, R-CH3, CH2CH2COO, CH2COO and triglycerides (TGs)], ketone bodies and carbohydrate metabolites (acetoacetate, glucose, pyruvate and lactate) and glycoproteins (NAG), which were the representative significantly changed metabolites after RPA administration in mice. Among them, the contents of PC were significantly increased, while the contents of various lipids and their metabolites (acetoacetic acid) were significantly decreased, which may be due to the role of choline in regulating lipid metabolism [19]. In addition, amino acids were significantly reduced, whereas glucose was significantly increased. Metabolites such as pyruvate and lactic acid were also significantly reduced in mice after RPA administration. These results suggested that RPA participated in regulating glucose, amino acid, lipid, and other energy metabolism processes in the body. Based on the KEGG database, enrichment analysis was performed on the differential metabolites to demonstrate the topological properties. We found that 18 pathways were mainly involved (Additional file 7: Table S4), and the results were largely consistent with an analysis based on the Small Molecule Pathway Database (SMPDB) (Additional file 8: Fig. S4A). Integrating the p values across pathways, the results showed that valine, leucine and isoleucine biosynthesis and degradation; butanoate metabolism; pyruvate metabolism; glycolysis or gluconeogenesis; glycerolipid metabolism; glycerophospholipid metabolism; and the synthesis and degradation of ketone bodies changed significantly (Fig. 2f). These results showed that RPA administration significantly affected the above metabolic pathways.
Table 3 List of significant differential metabolites from NMR
Liver is the major organ affected by RPA administration
To study the regulation of gene expression by RPA, a comprehensive transcriptomics analysis was conducted on several vital organs, and the results showed that RPA could cause different degrees of change in the transcripts of each organ (Fig. 3a). RPA is considered a main medicine for regulating the liver meridian in TCM theory, and extensive clinical evidence of RPA application in liver function regulation and liver disease treatment has been consistently observed. Our transcriptomics results indeed confirmed this traditional medical experience. Compared with that of the other organs, the liver's response to RPA administration was the most significant in terms of both the number of DEGs and the significantly enriched pathways (Fig. 3b). The liver is also the main site of energy metabolism, which was reflected in the metabolomics results obtained in this study. We next systematically analyzed the liver transcriptome. The PCA result suggested that there was a good separation between the groups (Fig. 3c), with 456 genes upregulated and 464 genes downregulated (Fig. 3d). Next, pathway enrichment analysis was carried out for the DEGs. According to the KEGG database, pathways were clustered into classes (subcategories) such as metabolism (carbohydrate metabolism, lipid metabolism, and nucleotide metabolism), environmental information processing (signal transduction and signaling molecules and interaction), cellular processes (transport and catabolism, cell growth and death, and cellular community), organismal systems (immune system, endocrine system, development, and environmental adaptation) and human diseases (cancer, neurodegenerative diseases, substance dependence, and infectious diseases: bacterial, viral and parasitic) (Additional file 9: Table S5).
Integrated analyses of the liver revealed that it was the site where RPA exerted a major effect. a RPA induced a number of upregulated genes (red) and downregulated genes (blue) in various vital organs (p ˂ 0.05, cutoff = 1.5). b Number of pathways enriched with DEGs in each organ, based on KEGG. c Three-dimensional PCA revealed the overall intergroup separation of the liver transcriptomes. d Volcano plot showing the upregulated genes (red) and downregulated genes (blue) in liver tissue. e Venn diagram showing the overlapping pathways enriched with DEGs from our results and previously identified targets of RPA in the TCMSP. f Venn diagram showing the overlapping pathways enriched with DEGs by GSEA. Detailed result diagrams are shown in additional files
RPA plays a series of roles in TCM theory, including collecting "yin" in the liver, nourishing the liver blood, softening the liver body, and relieving acute liver diseases. However, these descriptions are often difficult to understand in terms of modern medicine. Therefore, we conducted a network pharmacology analysis of RPA administration to interpret its in vivo efficacy. Thirteen main ingredients of RPA were obtained from the TCMSP (Additional file 10: Table S6), which involved 61 targets and 84 pathways enriched by the R package [20]. Nineteen pathways overlapped with the pathways enriched in the liver transcriptome after RPA administration (Fig. 3e and Additional file 8: Fig. S4B), which could be considered the stable part of the pharmacological effects of RPA and also indirectly supports the rationality of our study.
Since the RPA-induced pathway changes were most extensive in the liver, Gene Set Enrichment Analysis (GSEA) was conducted on the liver transcriptome to identify the most representative pathways (Additional file 11: Table S7). When the DEG pathway-enrichment results were overlapped with those of GSEA, 8 pathways, including leukocyte transendothelial migration, natural killer cell-mediated cytotoxicity, hematopoietic cell lineage, leishmaniasis, prion diseases, cell adhesion molecules, lysosome and glycerolipid metabolism, showed significant differences and consistent changes under both analysis methods (Fig. 3f, Additional file 12: Fig. S5 and Additional file 13: Table S8). When combined with the metabolomics results, it was found that the glycerolipid metabolism pathway was enriched in both results (Additional file 8: Fig. S4C). In this pathway, several important genes, such as Agpat1, Agpat2, Lpin1, Lpin2, Plpp1 and Lipg, were regulated to varying extents, resulting in an overall decrease in lysophospholipid acyltransferase, phosphatidate phosphatase, and endothelial lipase. Subsequently, TGs and other fatty acids among the serum metabolites were reduced, suggesting that RPA played an important regulatory role in lipid metabolism (Fig. 4 and Additional file 14: Fig. S6).
A Reconstructed pathway map of glycerolipid metabolism containing key genes and metabolites that were regulated. The level of TGs among serum metabolites decreased significantly, as represented in the histogram. DEGs differentially expressed genes, TG triglyceride
RPA regulated the central nervous system
Among the differential metabolites after RPA administration, the contents of GPC and PC were simultaneously increased. Glycerophospholipid metabolism (the upstream pathway of glycerolipid metabolism, Fig. 4), in which PC and GPC are enriched, was also regulated in the liver transcriptomics results, further confirming the role of RPA in liver lipid metabolism. Interestingly, choline molecules are not only important components of lipid metabolism in the liver [21] but also key substances for brain function and information transmission [22]. In our metabolomics results, PC, the upstream substance of citicoline (via choline-phosphate cytidylyltransferase [23]), was significantly elevated (Table 3). Of note, citicoline has been clinically used in the treatment of cerebral ischemic diseases such as stroke or vascular cognitive impairment [24,25,26]. PC is also a source of choline, which can be converted into acetylcholine by choline O-acetyltransferase and phosphocholine phosphatase [27]. We therefore next analyzed the brain transcriptome. PCA presented a pattern similar to that of the liver, indicating that RPA administration effectively separated the transcriptome data of the RPA group from those of the CON group (Fig. 5a). There were 300 upregulated genes and 326 downregulated genes (Fig. 5b) in the brain after RPA administration, which were enriched in 18 pathways (Fig. 5c). Among them, the most significantly changed pathway was the neuroactive ligand-receptor interaction pathway, which is related to a variety of nerve activities (Additional file 15: Fig. S7, mmu04080). Genes related to serotonin receptors, neuropeptide receptors, neuroregulatory peptide receptors, and nucleotide receptors in the pathway were upregulated, while genes related to acetylcholine receptors, dopamine receptors, and lysophosphatidic acid receptors were downregulated (Fig. 5d). Interestingly, in the enrichment results for the disease class, various substance addiction pathways, such as those for morphine, nicotine, cocaine, etc., were found. Such pharmacological effects of RPA on substance addiction have never been reported previously (Additional file 16: Fig. S8). To explore the targets of RPA in substance addiction, the cocaine addiction pathway was taken as a model, and the transcriptome sequencing datasets (GSE108836) of the associated disease model were obtained by searching the Gene Expression Omnibus (GEO) database. A Venn analysis was conducted on the DEGs of the model group and our RPA group. Twelve DEGs (Myct1, Gm21860, Ninj2, Fam183b, Lars2, Alkbh1, Fgf5, Frmd7, Tm6sf2, Wnt6, Batf3, and Clca1) were found to be regulated inversely among the overlapping genes, which may be possible targets of RPA for disrupting the cocaine addiction process (Fig. 5e).
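A minimal, hypothetical sketch of this cross-dataset comparison is given below; the file names and column layout are assumptions, and "inversely regulated" is taken to mean opposite signs of the log2 fold change in the two datasets.

```python
import pandas as pd

# Overlap brain DEGs from this study with DEGs from the GEO cocaine-addiction model
# (GSE108836) and keep genes regulated in opposite directions.
rpa = pd.read_csv("brain_degs_rpa.csv")          # assumed columns: gene, log2fc
model = pd.read_csv("gse108836_degs.csv")        # assumed columns: gene, log2fc

merged = rpa.merge(model, on="gene", suffixes=("_rpa", "_model"))
inverse = merged[merged["log2fc_rpa"] * merged["log2fc_model"] < 0]
print(len(inverse), "inversely regulated genes:", sorted(inverse["gene"]))
```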
RPA had a significant effect on the transcriptome of brain tissue. a Three-dimensional PCA revealed the overall intergroup separation of the brain transcriptomes. b Volcano plot showing the upregulated genes (red) and downregulated genes (blue) in brain tissue. c KEGG enrichment bubble plot of DEGs in the brain transcriptomes (pathway p < 0.05). d Target genes were significantly upregulated (red) and downregulated (blue) by RPA in the neuroactive ligand-receptor interaction pathway. e Venn diagram showing the potential targets of RPA on the cocaine addiction model based on RNA-seq data obtained from the GEO database
A multidimensional algorithm model of the overall effective pattern for RPA administration in vivo
To evaluate the effect pattern of RPA from the perspective of systemic changes caused by in vivo drug administration, a multidimensional algorithm model was established based on the multiorgan transcriptomics data and applied to the analysis. The higher the contribution rate of PC01, the more representative it was of the entire transcriptome sample in the PCA (Additional file 17: Table S9). The results showed that most PC01 contribution rates were above 70% and the highest was above 95%, indicating that the PC01 of each transcriptome can effectively represent the whole sample. The score of each organ was obtained by multiplying \( y_{i} \) by the corresponding weight value. Finally, the 7 values obtained were drawn into a multidimensional radar map, which intuitively presents the overall effective pattern of RPA administration in vivo (Fig. 6). As previously mentioned, the liver and brain are the main sites where RPA exerted an effect. In addition, the intensity of the adrenal transcriptome response to RPA was also significant in the radar map. Some studies have found that PF can increase serotonin (5-HT) and 5-hydroxyindoleacetic acid in the prefrontal cortex and hippocampus in post-traumatic stress disorder, and the serum levels of corticosterone, corticotropin-releasing hormone, and adrenocorticotropic hormone were also reversed by PF [28]. This finding suggests that RPA may exert an effect on mental diseases by regulating the HPA axis.
A multidimensional algorithm model was established based on the multiorgan transcriptomics data
At present, a number of studies have reported therapeutic effects of RPA and its main components on lipid metabolism-related diseases. For example, RPA can reverse the abnormal serum lipid profiles induced by alcohol in conjunction with a high-fat diet [29]. In addition, RPA can reduce the serum TG, malondialdehyde, leptin and TNF-α levels elevated by ovariectomy, thereby improving the lipid metabolism disorder and inhibiting obesity [30]. These results suggest that RPA can be used in the clinical treatment of steatohepatitis, as well as of the obesity caused by the decrease in estrogen levels in women during and after the perimenopausal period, although the relevant mechanism is still unclear.
In this study, we found that RPA had the most significant effect on the transcriptome of the liver, especially the glycerolipid metabolism pathway, and this effect may be linked to the metabolism of glucose, lipids and amino acids in the body. To a certain extent, this result is consistent with the statement that "RPA goes to the liver meridian" in the traditional theory. However, the method established in this study cannot simply be mapped onto the theory of meridian tropism: the liver and brain were the main sites where RPA exerted an effect, which differs from the traditional functional characterization of RPA as "acting on the liver meridian and spleen meridian", since there is no "brain meridian" in TCM theory. In fact, a new interpretation of and supplement to the medicinal profile of RPA is proposed here based on these modern research methods. The response intensity of the spleen transcriptome to RPA was not prominent in the whole model, which may be because the functions of the spleen in modern medicine and those of the "spleen" in TCM theory are quite different.
It should still be noted that although organs in TCM theory cannot be completely equated with those in modern medicine, the two show a high degree of consistency for the functions of the liver. For example, in TCM the liver is believed to have two main functions, namely "controlling conveyance and dispersion" and "storing blood". The former is related to digestive function, fluid metabolism and reproductive function, while the latter regulates blood volume and prevents abnormal haemorrhage. In modern medicine, the liver is also an important digestive and metabolic organ that excretes bile, stores glycogen, and participates in the regulation of nutrient synthesis and degradation. In addition, the liver participates in water metabolism, hormone inactivation, and the synthesis of plasma albumin, plasma globulin, and coagulation factors. Since the definition of organs in TCM is based more on physiological functions, RPA is generally considered an herb of the liver meridian with a series of effects such as "collecting liver Yin, nourishing the liver blood, and softening the liver body".
Previous research showed a negative correlation between mature brain-derived neurotrophic factor in the parietal cortex and in the liver, which indicated that there is a liver-brain axis in psychiatric disorders [31]. In our results, RPA appeared to have great potential value in the treatment of mental diseases because, when the data were analyzed in combination with metabolomics, choline seemed to mediate the crosstalk between the liver and brain; moreover, choline can regulate the production of IL-1β and IL-18 by macrophages and affect acute and chronic inflammation models [32]. These findings suggest that RPA may also influence immune activity by regulating the metabolic processes of choline.
In addition, immune-related activities were involved in the differential gene expression analysis of multiple organs, such as the Toll-like receptor signaling pathway (lung and adrenal gland), complement and coagulation cascades (heart and liver), the chemokine signaling pathway (liver and spleen), and hematopoietic cell lineage (liver, spleen, lung and brain), which may be the reason why RPA can effectively treat a variety of infectious diseases caused by bacteria, viruses and parasites, such as Staphylococcus aureus infection, influenza A and leishmaniasis (Additional file 9: Table S5). These conclusions, to a certain extent, also provide an interpretation of the TCM theory that RPA can "clear heat and relieve pain". The combined analysis suggests that TCMs have the advantage of multi-circuit comprehensive therapeutic action, that is, multiple metabolic pathways in the body are regulated simultaneously [33].
There are many potential mechanisms of the pharmacological efficacy of RPA that are worth discussing. For instance, RPA has the effect of "dispersing stagnated liver 'qi' and relieving depression" in TCM theory. Although depression-related pathways were not directly enriched in our results, a deficit in GABAergic transmission in neural circuits is known to be causal for depression and, conversely, an enhancement of GABA transmission has antidepressant effects [34]; in our data, the genes encoding GABA receptors, such as Gabrb1, Gabre and Gabrq, were significantly upregulated, and the GABAergic synapse pathway was directly enriched. Unexplored links like this need to be further investigated in the future.
In this study, transcriptomics and metabolomics were integrated to provide an unbiased, hypothesis-free analysis of the effective profile of in vivo administration of RPA, a frequently used natural medicine in TCM. First, the changes in serum metabolites were evaluated, and we found that RPA had certain effects on energy metabolism. By using high-throughput sequencing technology to profile the transcriptomes of the organs, we found that the liver is the organ with the most obvious response to RPA administration, which is highly consistent with the theory that "RPA goes to the liver meridian" in TCM. Combined with serum metabolomics, we further found that RPA regulates lipid metabolism by regulating the expression of enzymes in the glycerolipid metabolism pathway and inducing a decrease in downstream lipid metabolites. In addition, RPA also exerted an important influence on brain tissue, and for the first time, we unexpectedly found that RPA is involved in regulating the processes of various substance addiction diseases. To clearly visualize this collaborative regulation pattern, a computational model was designed that took transcriptomics data as evaluation elements, and the overall functional characteristics of RPA were innovatively expressed in a radar map.
This study provided a valuable reference pattern to interpret the theories of TCM, expanded the potential application of RPA, and provided possible targets and directions for further mechanism study.
The datasets generated for this study can be found in the Sequence Read Archive (SRA) database with accession number PRJNA587068 from the National Center for Biotechnology Information, USA.
RPA: Radix Paeoniae Alba
NMR: Proton nuclear magnetic resonance spectroscopy
GO: Gene Ontology
PF: Paeoniflorin
HPLC: High-performance liquid chromatography
DEGs: Differentially expressed genes
OPLS-DA: Orthogonal partial least squares discriminant analysis
VIP: Variable importance in projection
TCMSP: Traditional Chinese Medicine Systems Pharmacology Database and Analysis Platform
PCA: Principal component analysis
SVM: Support vector machine
GPC: Glycerophosphocholine
PC: Phosphorylcholine
NAG: N-acetylated protein
TG: Triglyceride
GSEA: Gene Set Enrichment Analysis
GEO: Gene Expression Omnibus
Wu YM, Jin R, Yang L, Zhang J, Yang Q, Guo YY, et al. Phosphatidylinositol 3 kinase/protein kinase B is responsible for the protection of paeoniflorin upon H(2)O(2)-induced neural progenitor cell injury. Neuroscience. 2013;240:54–62. https://doi.org/10.1016/j.neuroscience.2013.02.037.
Dong H, Li R, Yu C, Xu T, Zhang X, Dong M. Paeoniflorin inhibition of 6-hydroxydopamine-induced apoptosis in PC12 cells via suppressing reactive oxygen species-mediated PKCdelta/NF-kappaB pathway. Neuroscience. 2015;285:70–80. https://doi.org/10.1016/j.neuroscience.2014.11.008.
Hu B, Xu G, Zhang X, Xu L, Zhou H, Ma Z, et al. Paeoniflorin attenuates inflammatory pain by inhibiting microglial activation and Akt-NF-kappaB signaling in the central nervous system. Cell Physiol Biochem. 2018;47(2):842–50. https://doi.org/10.1159/000490076.
Gu X, Cai Z, Cai M, Liu K, Liu D, Zhang Q, et al. Protective effect of paeoniflorin on inflammation and apoptosis in the cerebral cortex of a transgenic mouse model of Alzheimer's disease. MOL MED REP. 2016;13(3):2247–52. https://doi.org/10.3892/mmr.2016.4805.
Tang NY, Liu CH, Hsieh CT, Hsieh CL. The anti-inflammatory effect of paeoniflorin on cerebral infarction induced by ischemia-reperfusion injury in Sprague-Dawley rats. Am J Chin Med. 2010;38(1):51–64. https://doi.org/10.1142/s0192415x10007786.
Chen C, Du P, Wang J. Paeoniflorin ameliorates acute myocardial infarction of rats by inhibiting inflammation and inducible nitric oxide synthase signaling pathways. MOL MED REP. 2015;12(3):3937–43. https://doi.org/10.3892/mmr.2015.3870.
Gong WG, Lin JL, Niu QX, Wang HM, Zhou YC, Chen SY, et al. Paeoniflorin diminishes ConA-induced IL-8 production in primary human hepatic sinusoidal endothelial cells in the involvement of ERK1/2 and Akt phosphorylation. Int J Biochem Cell Biol. 2015;62:93–100. https://doi.org/10.1016/j.biocel.2015.02.017.
Kibble M, Saarinen N, Tang J, Wennerberg K, Makela S, Aittokallio T. Network pharmacology applications to map the unexplored target space and therapeutic potential of natural products. Nat Prod Rep. 2015;32(8):1249–66. https://doi.org/10.1039/c5np00005j.
Gibney ER, Nolan CM. Epigenetics and gene expression. Heredity (Edinb). 2010;105(1):4–13. https://doi.org/10.1038/hdy.2010.54.
Knolhoff AM, Nautiyal KM, Nemes P, Kalachikov S, Morozova I, Silver R, et al. Combining small-volume metabolomic and transcriptomic approaches for assessing brain chemistry. Anal Chem. 2013;85(6):3136–43. https://doi.org/10.1021/ac3032959.
Luo X, Yu H, Song Y, Sun T. Integration of metabolomic and transcriptomic data reveals metabolic pathway alteration in breast cancer and impact of related signature on survival. J Cell Physiol. 2019;234(8):13021–31. https://doi.org/10.1002/jcp.27973.
Jauhiainen A, Nerman O, Michailidis G, Jornsten R. Transcriptional and metabolic data integration and modeling for identification of active pathways. BIOSTATISTICS. 2012;13(4):748–61. https://doi.org/10.1093/biostatistics/kxs016.
Kelly RS, Croteau-Chonka DC, Dahlin A, Mirzakhani H, Wu AC, Wan ES et al. Integration of metabolomic and transcriptomic networks in pregnant women reveals biological pathways and predictive signatures associated with preeclampsia. Metabolomics. 2017;13(1). https://doi.org/10.1007/s11306-016-1149-8.
Cui B, Zheng Y, Zhou X, Zhu J, Zhuang J, Liang Q, et al. Repair of adult mammalian heart after damages by oral intake of Gu Ben Pei Yuan San. Front Physiol. 2019;10:607. https://doi.org/10.3389/fphys.2019.00607.
Li H, An Y, Zhang L, Lei H, Zhang L, Wang Y, et al. Combined NMR and GC-MS analyses revealed dynamic metabolic changes associated with the carrageenan-induced rat pleurisy. J Proteome Res. 2013;12(12):5520–34. https://doi.org/10.1021/pr400440d.
Chong J, Soufan O, Li C, Caraus I, Li S, Bourque G et al. MetaboAnalyst 4.0: towards more transparent and integrative metabolomics analysis. Nucleic Acids Res. 2018;46(W1):W486–94. https://doi.org/10.1093/nar/gky310.
Commission CP. Pharmacopoeia of the People's Republic of China (2015 edition A). Beijing: China Medical Science and Technology Press; 2015.
Basu S, Duren W, Evans CR, Burant CF, Michailidis G, Karnovsky A. Sparse network modeling and metscape-based visualization methods for the analysis of large-scale metabolomics data. Bioinformatics. 2017;33(10):1545–53. https://doi.org/10.1093/bioinformatics/btx012.
Chandler TL, White HM. Choline and methionine differentially alter methyl carbon metabolism in bovine neonatal hepatocytes. PLoS ONE. 2017;12(2):e171080. https://doi.org/10.1371/journal.pone.0171080.
Yu G, Wang LG, Han Y, He QY. clusterProfiler: an R package for comparing biological themes among gene clusters. OMICS. 2012;16(5):284–7. https://doi.org/10.1089/omi.2011.0118.
Carvajal S, Perramon M, Oro D, Casals E, Fernandez-Varo G, Casals G, et al. Cerium oxide nanoparticles display antilipogenic effect in rats with non-alcoholic fatty liver disease. Sci Rep. 2019;9(1):12848. https://doi.org/10.1038/s41598-019-49262-2.
Blusztajn JK, Slack BE, Mellott TJ. Neuroprotective actions of dietary choline. Nutrients. 2017;9:8. https://doi.org/10.3390/nu9080815.
Kalmar GB, Kay RJ, LaChance AC, Cornell RB. Primary structure and expression of a human CTP:phosphocholine cytidylyltransferase. Biochim Biophys Acta. 1994;1219(2):328–34. https://doi.org/10.1016/0167-4781(94)90056-6.
Nakazaki E, Yabuki Y, Izumi H, Shinoda Y, Watanabe F, Hishida Y, et al. Combined citicoline and docosahexaenoic acid treatment improves cognitive dysfunction following transient brain ischemia. J Pharmacol Sci. 2019;139(4):319–24. https://doi.org/10.1016/j.jphs.2019.02.003.
Belova LA, Mashin VV, Dudikov EM, Belov DV, Krupennikov AA. A multicenter observation study of the efficacy of cortexin and recognan (citicoline) in the treatment of cognitive impairments in chronic cerebrovascular pathology. Zh Nevrol Psikhiatr Im S S Korsakova. 2019;119(2):35–8. https://doi.org/10.17116/jnevro201911902135.
Secades JJ, Alvarez-Sabin J, Castillo J, Diez-Tejedor E, Martinez-Vila E, Rios J, et al. Citicoline for acute ischemic stroke: a systematic review and formal meta-analysis of randomized, double-blind, and placebo-controlled trials. J Stroke Cerebrovasc Dis. 2016;25(8):1984–96. https://doi.org/10.1016/j.jstrokecerebrovasdis.2016.04.010.
Roberts SJ, Stewart AJ, Sadler PJ, Farquharson C. Human PHOSPHO1 exhibits high specific phosphoethanolamine and phosphocholine phosphatase activities. Biochem J. 2004;382(Pt 1):59–65. https://doi.org/10.1042/BJ20040511.
Qiu ZK, He JL, Liu X, Zeng J, Xiao W, Fan QH, et al. Anxiolytic-like effects of paeoniflorin in an animal model of post traumatic stress disorder. Metab Brain Dis. 2018;33(4):1175–85. https://doi.org/10.1007/s11011-018-0216-4.
Su-Hong C, Qi C, Bo L, Jian-Li G, Jie S, Gui-Yuan L. Antihypertensive effect of Radix Paeoniae Alba in spontaneously hypertensive rats and excessive alcohol intake and high fat diet induced hypertensive rats. Evid-Based Compl Alt. 2015;2015:1–8. https://doi.org/10.1155/2015/731237.
Zhu JE. A study of suppressive effects on obesity induced by Radix Paeomiae Alba. Master thesis: Lanzhou University: 2007.05.
Yang B, Ren Q, Zhang JC, Chen QX, Hashimoto K. Altered expression of BDNF, BDNF pro-peptide and their precursor proBDNF in brain and liver tissues from psychiatric disorders: rethinking the brain-liver axis. Transl Psychiatry. 2017;7(5):e1128. https://doi.org/10.1038/tp.2017.95.
Sanchez-Lopez E, Zhong Z, Stubelius A, Sweeney SR, Booshehri LM, Antonucci L, et al. Choline uptake and metabolism modulate macrophage IL-1beta and IL-18 production. Cell Metab. 2019;29(6):1350–62. https://doi.org/10.1016/j.cmet.2019.03.011.
Yang M, Chen JL, Xu LW, Ji G. Navigating traditional chinese medicine network pharmacology and computational tools. Evid Based Complement Alternat Med. 2013;2013:731969. https://doi.org/10.1155/2013/731969.
Luscher B, Mohler H. Brexanolone, a neurosteroid antidepressant, vindicates the GABAergic deficit hypothesis of depression and may foster resilience. F1000Res. 2019;8:1. https://doi.org/10.12688/f1000research.18758.1.
We thank the School of Life Sciences, Fudan University, for their assistance with the NMR metabolomics analyses.
This work was supported by grants from the Shanghai Municipal Commission of Health and Family Planning (No. ZY (2018-2020)-CCCX-2001-01), the National Key R&D Program of China (2018YFC2000202), and the Haiju program of the National Children's Medical Center (EK1125180102).
Sining Wang, Huihua Chen and Yufan Zheng contributed equally to this work.
Department of Pathology, School of Basic Medical Sciences, Shanghai University of Traditional Chinese Medicine, 1200 CaiLun Ave, Pudong, 201203, Shanghai, China
Sining Wang, Huihua Chen, Jiali Zheng & Rong Lu
Department of Physiology and Pathophysiology, School of Basic Medical Sciences, Fudan University, 130 DongAn Ave, Xuhui, 200032, Shanghai, China
Yufan Zheng, Baiping Cui & Ning Sun
College of Information and Computer Engineering, Northeast Forestry University, Harbin, China
Zhenyu Li
Public Laboratory Platform, School of Basic Medical Science, Shanghai University of Traditional Chinese Medicine, Shanghai, China
Pei Zhao
Conceived and supervised the experiments: NS and RL. Performed the experiments: SnW, YfZ and HhC. Analyzed the data: SnW, YfZ and ZyL. Technical support: ZyL and BpC. Contributed reagents, materials and tools: PZ and HhC. Wrote the paper: SnW, YfZ. All authors read and approved the final manuscript.
Correspondence to Rong Lu or Ning Sun.
Our research involved the utilization of laboratory animals under the supervision of the Fudan University Institutional Animal Care and Use Committee.
Additional file 1: Fig. S1
Fuzzification of transcriptome DEGs into 4 terms.
Additional file 2: Table S1
The number of DEGs and corresponding weight values.
Additional file 3: Typical fingerprints of RPA quality control by HPLC.
Additional file 4: Result report of HPLC analysis.
Additional file 5: Typical 600 MHz 1H NMR spectra of serum from CON and RPA groups. The dotted region was vertically expanded 32 times in the spectra. TMAO: trimethylamine N-oxide; TG: triglycerides; GPC: glycerophosphocholine; PC: phosphorylcholine; DMG: dimethylglycine; OAG: O-acetylated glycoproteins; NAG: N-acetylated glycoproteins; 3-HB: 3-hydroxybutyrate.
Additional file 6: Recognition accuracy and time of different dimensionality reduction algorithms combined with different classification algorithms.
Additional file 7: Differential metabolites pathway enrichment.
Additional file 8: Fig. S4 (A)
Enrichment analysis of differential metabolites based on SMPDB. (B) Enriched pathways of RPA targets from the TCMSP overlapped with the pathways of liver transcriptomics enrichment caused by RPA. (C) Enriched pathways of DEGs overlapped with GSEA based on KEGG. The pathway in the red box was also enriched, according to metabolomics.
Additional file 9: Liver transcriptomics differential genes pathway enrichment.
Additional file 10: Table S6
Main active ingredients in RPA.
Additional file 11: GSEA results of the liver transcriptome based on the KEGG database.
Additional file 12: Fig. S5
GSEA results showing the pathways that overlap.
Additional file 13: Pathways obtained from the Venn result of liver GSEA and DEG analysis.
Additional file 14: Levels of differential lipid metabolites in the serum NMR metabolic spectra, p < 0.05.
Additional file 15: Neuroactive ligand-receptor interaction pathway (mmu04080, in KEGG); the red boxes are genes upregulated by RPA, and the blue boxes are genes downregulated by RPA.
Additional file 16: PPI analysis network of substance addiction-related disease pathways of brain transcriptomics.
Additional file 17: The first principal component contribution rate of each dimension.
Wang, S., Chen, H., Zheng, Y. et al. Transcriptomics- and metabolomics-based integration analyses revealed the potential pharmacological effects and functional pattern of in vivo Radix Paeoniae Alba administration. Chin Med 15, 52 (2020). https://doi.org/10.1186/s13020-020-00330-0
Lower Bounds on Matrix Factorization Ranks via Noncommutative Polynomial Optimization
Sander Gribling, David de Laat & Monique Laurent
Foundations of Computational Mathematics, volume 19, pages 1013–1070 (2019)
We use techniques from (tracial noncommutative) polynomial optimization to formulate hierarchies of semidefinite programming lower bounds on matrix factorization ranks. In particular, we consider the nonnegative rank, the positive semidefinite rank, and their symmetric analogs: the completely positive rank and the completely positive semidefinite rank. We study convergence properties of our hierarchies, compare them extensively to known lower bounds, and provide some (numerical) examples.
Matrix Factorization Ranks
A factorization of a matrix \(A \in \mathbb {R}^{m \times n}\) over a sequence \(\{K^d\}_{d\in \mathbb {N}}\) of cones that are each equipped with an inner product \(\langle \cdot ,\cdot \rangle \) is a decomposition of the form \(A=(\langle X_i,Y_j\rangle )\) with \(X_i, Y_j \in K^d\) for all \((i,j)\in [m]\times [n]\), for some integer \(d\in \mathbb {N}\). Following [34], the smallest integer d for which such a factorization exists is called the cone factorization rank of A over \(\{K^d\}\).
The cones \(K^d\) we use in this paper are the nonnegative orthant \(\mathbb {R}^d_+\) with the usual inner product and the cone \(\mathrm {S}^d_+\) (resp., \(\mathrm {H}^d_+\)) of \(d\times d\) real symmetric (resp., Hermitian) positive semidefinite matrices with the trace inner product \(\langle X, Y \rangle = \mathrm {Tr}(X^\textsf {T}Y)\) (resp., \(\langle X, Y \rangle = \mathrm {Tr}(X^* Y)\)). We obtain the nonnegative rank, denoted \({{\,\mathrm{rank}\,}}_+(A)\), which uses the cones \(K^d=\mathbb {R}^d_+\), and the positive semidefinite rank, denoted \(\hbox {psd-rank}_\mathbb {K}(A)\), which uses the cones \(K^d=\mathrm {S}^d_+\) for \(\mathbb {K}= \mathbb {R}\) and \(K^d=\mathrm {H}^d_+\) for \(\mathbb {K}=\mathbb {C}\). Both the nonnegative rank and the positive semidefinite rank are defined whenever A is entrywise nonnegative.
The study of the nonnegative rank is largely motivated by the groundbreaking work of Yannakakis [78], who showed that the linear extension complexity of a polytope P is given by the nonnegative rank of its slack matrix. The linear extension complexity of P is the smallest integer d for which P can be obtained as the linear image of an affine section of the nonnegative orthant \(\mathbb {R}^d_+\). The slack matrix of P is given by the matrix \((b_i-a_i^\mathsf{T}v)_{v\in V,i\in I}\), where \(P= \text {conv}(V)\) and \(P= \{x: a_i^\mathsf{T}x\le b_i\ (i\in I)\}\) are the point and hyperplane representations of P. Analogously, the semidefinite extension complexity of P is the smallest d such that P is the linear image of an affine section of the cone \(\mathrm {S}^d_+\) and it is given by the (real) positive semidefinite rank of its slack matrix [34].
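To make the role of the slack matrix concrete, here is a minimal sketch (assuming only numpy; the polytope and variable names are illustrative choices, not taken from the paper) that computes the slack matrix of the unit square from its vertex and hyperplane representations.

```python
import numpy as np

# Illustrative example: the unit square [0, 1]^2.
# V-representation: vertices v; H-representation: a_i^T x <= b_i.
V = np.array([[0, 0], [1, 0], [1, 1], [0, 1]])          # rows are the vertices v
A_ineq = np.array([[-1, 0], [1, 0], [0, -1], [0, 1]])   # rows are the a_i^T
b = np.array([0, 1, 0, 1])

# Slack matrix: entry (v, i) equals b_i - a_i^T v; it is entrywise nonnegative.
S = b[None, :] - V @ A_ineq.T
print(S)
```

The nonnegative rank of this slack matrix is then, by Yannakakis' theorem, the linear extension complexity of the square.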
The motivation to study the linear and semidefinite extension complexities is that polytopes with small extension complexity admit efficient algorithms for linear optimization. Well-known examples include spanning tree polytopes [54] and permutahedra [32], which have polynomial linear extension complexity, and the stable set polytope of perfect graphs, which has polynomial semidefinite extension complexity [40] (see, e.g., the surveys [18, 25]). The above connection to the nonnegative rank and to the positive semidefinite rank of the slack matrix can be used to show that a polytope does not admit a small extended formulation. Recently, this connection was used to show that the linear extension complexities of the traveling salesman, cut, and stable set polytopes are exponential in the number of nodes [29], and this result was extended to their semidefinite extension complexities in [51]. Surprisingly, the linear extension complexity of the matching polytope is also exponential [66], even though linear optimization over this set is polynomial time solvable [23]. It is an open question whether the semidefinite extension complexity of the matching polytope is exponential.
Besides this link to extension complexity, the nonnegative rank also finds applications in probability theory and in communication complexity, and the positive semidefinite rank has applications in quantum information theory and in quantum communication complexity (see, e.g., [24, 29, 42, 55]).
For square symmetric matrices (\(m=n\)), we are also interested in symmetric analogs of the above matrix factorization ranks, where we require the same factors for the rows and columns (i.e., \(X_i = Y_i\) for all \(i\in [n]\)). The symmetric analog of the nonnegative rank is the completely positive rank, denoted \(\hbox {cp-rank}(A)\), which uses the cones \(K^d = \mathbb {R}_+^d\), and the symmetric analog of the positive semidefinite rank is the completely positive semidefinite rank, denoted \({{\,\mathrm{cpsd-rank}\,}}_\mathbb {K}(A)\), which uses the cones \(K^d=\mathrm {S}^d_+\) if \(\mathbb {K}=\mathbb {R}\) and \(K^d=\mathrm {H}^d_+\) if \(\mathbb {K}=\mathbb {C}\). These symmetric factorization ranks are not always well defined since not every symmetric nonnegative matrix admits a symmetric factorization by nonnegative vectors or positive semidefinite matrices. The symmetric matrices for which these parameters are well defined form convex cones known as the completely positive cone, denoted \(\hbox {CP}^n\), and the completely positive semidefinite cone, denoted \(\mathrm {CS}_{+}^n\). We have the inclusions \(\hbox {CP}^n \subseteq \mathrm {CS}_{+}^n \subseteq \mathrm {S}_+^n\), which are known to be strict for \(n\ge 5\). For details on these cones see [6, 17, 50] and references therein.
Motivation for the cones \(\hbox {CP}^n\) and \(\mathrm {CS}_{+}^n\) comes in particular from their use to model classical and quantum information optimization problems. For instance, graph parameters such as the stability number and the chromatic number can be written as linear optimization problems over the completely positive cone [45], and the same holds, more generally, for quadratic problems with mixed binary variables [13]. The \(\hbox {cp-rank}\) is widely studied in the linear algebra community; see, e.g., [6, 10, 68, 69].
The completely positive semidefinite cone was first studied in [50] to describe quantum analogs of the stability number and of the chromatic number of a graph. This was later extended to general graph homomorphisms in [72] and to graph isomorphism in [2]. In addition, as shown in [53, 72], there is a close connection between the completely positive semidefinite cone and the set of quantum correlations. This also gives a relation between the completely positive semidefinite rank and the minimal entanglement dimension necessary to realize a quantum correlation. This connection has been used in [38, 62, 63] to construct matrices whose completely positive semidefinite rank is exponentially large in the matrix size. For the special case of synchronous quantum correlations, the minimum entanglement dimension is directly given by the completely positive semidefinite rank of a certain matrix (see [37]).
The following inequalities hold for the nonnegative rank and the positive semidefinite rank: we have
$$\begin{aligned} \hbox {psd-rank}_\mathbb {C}(A)\le \hbox {psd-rank}_\mathbb {R}(A) \le {{\,\mathrm{rank}\,}}_+(A) \le \mathrm {min}\{m,n\} \end{aligned}$$
for any \(m\times n\) nonnegative matrix A and \(\hbox {cp-rank}(A)\le \left( {\begin{array}{c}n+1\\ 2\end{array}}\right) \) for any \(n\times n\) completely positive matrix A. However, the situation for the cpsd-rank is very different. Exploiting the connection between the completely positive semidefinite cone and quantum correlations it follows from results in [73] that the cone \(\mathrm {CS}_{+}^n\) is not closed for \(n\ge 1942\). The results in [22] show that this already holds for \(n\ge 10\). As a consequence, there does not exist an upper bound on the \(\hbox {cpsd-rank}\) as a function of the matrix size. For small matrix sizes, very little is known. It is an open problem whether \(\mathrm {CS}_{+}^5\) is closed, and we do not even know how to construct a \(5 \times 5\) matrix whose cpsd-rank exceeds 5.
The \({{\,\mathrm{rank}\,}}_+\), \(\hbox {cp-rank}\), and \(\text {psd-rank}\) are known to be computable; this follows using results from [65] since upper bounds exist on these factorization ranks that depend only on the matrix size, see [5] for a proof for the case of the \(\hbox {cp-rank}\). But computing the nonnegative rank is NP-hard [76]. In fact, determining the \({{\,\mathrm{rank}\,}}_+\) and \(\hbox {psd-rank}\) of a matrix are both equivalent to the existential theory of the reals [70, 71]. For the cp-rank and the cpsd-rank, no such results are known, but there is no reason to assume they are any easier. In fact, it is not even clear whether the cpsd-rank is computable in general.
To obtain upper bounds on the factorization rank of a given matrix, one can employ heuristics that try to construct small factorizations. Many such heuristics exist for the nonnegative rank (see the overview [30] and references therein), factorization algorithms exist for completely positive matrices (see the recent paper [39], also [20] for structured completely positive matrices), and algorithms to compute positive semidefinite factorizations are presented in the recent work [75]. In this paper, we want to compute lower bounds on matrix factorization ranks, which we achieve by employing a relaxation approach based on (noncommutative) polynomial optimization.
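For illustration, a simple upper-bound heuristic of this kind for the nonnegative rank can be sketched as follows; it assumes numpy and scikit-learn's NMF, the function name and tolerance are illustrative, and since NMF only finds local optima a failure to reach the tolerance does not certify that no factorization of that size exists.

```python
import numpy as np
from sklearn.decomposition import NMF

def nonneg_rank_upper_bound(A, tol=1e-6):
    """Heuristic upper bound on rank_+(A): the smallest inner dimension d for
    which the (local) NMF solver finds A ~= W @ H with nonnegative factors,
    up to the given Frobenius-norm tolerance."""
    m, n = A.shape
    for d in range(1, min(m, n) + 1):
        model = NMF(n_components=d, init="nndsvda", max_iter=2000, tol=1e-10)
        W = model.fit_transform(A)
        H = model.components_
        if np.linalg.norm(A - W @ H) <= tol:
            return d
    return min(m, n)  # rank_+(A) <= min(m, n) always holds

# Example: a nonnegative rank-1 matrix should be recognized at d = 1.
A = np.outer([1.0, 2.0, 3.0], [1.0, 4.0])
print(nonneg_rank_upper_bound(A))
```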
Contributions and Connections to Existing Bounds
In this work, we provide a unified approach to obtain lower bounds on the four matrix factorization ranks mentioned above, based on tools from (noncommutative) polynomial optimization.
We sketch the main ideas of our approach in Sect. 1.4 below, after having introduced some necessary notation and preliminaries about (noncommutative) polynomials in Sect. 1.3. We then indicate in Sect. 1.5 how our approach relates to the more classical use of polynomial optimization dealing with the minimization of polynomials over basic closed semialgebraic sets. The main body of the paper consists of four sections each dealing with one of the four matrix factorization ranks. We start with presenting our approach for the completely positive semidefinite rank and then explain how to adapt this to the other ranks.
For our results, we need several technical tools about linear forms on spaces of polynomials, both in the commutative and noncommutative setting. To ease the readability of the paper, we group these technical tools in Appendix A. Moreover, we provide full proofs, so that our paper is self-contained. In addition, some of the proofs might differ from the customary ones in the literature since our treatment in this paper is consistently on the 'moment' side rather than using real algebraic results about sums of squares.
In Sect. 2, we introduce our approach for the completely positive semidefinite rank. We start by defining a hierarchy of lower bounds
$$\begin{aligned} {\xi _{1}^{\mathrm {cpsd}}}(A) \le {\xi _{2}^{\mathrm {cpsd}}}(A) \le \ldots \le {\xi _{t}^{\mathrm {cpsd}}}(A)\le \ldots \le {{\,\mathrm{cpsd-rank}\,}}_\mathbb {C}(A), \end{aligned}$$
where \({\xi _{t}^{\mathrm {cpsd}}}(A)\), for \(t \in \mathbb {N}\), is given as the optimal value of a semidefinite program whose size increases with t. Not much is known about lower bounds for the cpsd-rank in the literature. The inequality \(\sqrt{{{\,\mathrm{rank}\,}}(A)} \le {{\,\mathrm{cpsd-rank}\,}}_\mathbb {C}(A)\) is known, which follows by viewing a Hermitian \(d\times d\) matrix as a \(d^2\)-dimensional real vector, and an analytic lower bound is given in [62]. We show that the new parameter \({\xi _{1}^{\mathrm {cpsd}}}(A)\) is at least as good as this analytic lower bound and we give a small example where a strengthening of \({\xi _{2}^{\mathrm {cpsd}}}(A)\) is strictly better than both above-mentioned generic lower bounds. Currently, we lack evidence that the lower bounds \({\xi _{t}^{\mathrm {cpsd}}}(A)\) can be larger than, for example, the matrix size, but this could be because small matrices with large cpsd-rank are hard to construct or might even not exist. We also introduce several ideas leading to strengthenings of the basic bounds \({\xi _{t}^{\mathrm {cpsd}}}(A)\).
We then adapt these ideas to the other three matrix factorization ranks discussed above, where for each of them we obtain analogous hierarchies of bounds.
For the nonnegative rank and the completely positive rank, much more is known about lower bounds. The best-known generic lower bounds are due to Fawzi and Parrilo [26, 27]. In [27], the parameters \(\tau _+(A)\) and \(\tau _{\mathrm {cp}}(A)\) are defined, which, respectively, lower bound the nonnegative rank and the \(\hbox {cp-rank}\), along with their computable semidefinite programming relaxations \(\tau _\mathrm {+}^\mathrm {sos}(A)\) and \(\tau _\mathrm {cp}^\mathrm {sos}(A)\). In [27] it is also shown that \(\tau _+(A)\) is at least as good as certain norm-based lower bounds. In particular, \(\tau _+(\cdot )\) is at least as good as the \(\ell _\infty \) norm-based lower bound, which was used by Rothvoß [66] to show that the matching polytope has exponential linear extension complexity. In [26] it is shown that for the Frobenius norm, the square of the norm-based bound is still a lower bound on the nonnegative rank, but it is not known how this lower bound compares to \(\tau _+(\cdot )\).
Fawzi and Parrilo [27] use the atomicity of the nonnegative and completely positive ranks to derive the parameters \(\tau _+(A)\) and \(\tau _{\mathrm {cp}}(A)\); i.e., they use the fact that the nonnegative rank (cp-rank) of A is equal to the smallest d for which A can be written as the sum of d nonnegative (positive semidefinite) rank one matrices. As the \(\hbox {psd-rank}\) and \(\hbox {cpsd-rank}\) are not known to admit atomic formulations, the techniques from [27] do not extend directly to these factorization ranks. However, our approach via polynomial optimization captures these factorization ranks as well.
In Sects. 3 and 4, we construct semidefinite programming hierarchies of lower bounds \({\xi _{t}^{\mathrm {cp}}}(A)\) and \({\xi _{t}^{\mathrm {+}}}(A)\) on \(\hbox {cp-rank}(A)\) and \({{\,\mathrm{rank}\,}}_+(A)\). We show that the bounds \({\xi _{t}^{\mathrm {+}}}(A)\) converge to \(\tau _+(A)\) as \(t \rightarrow \infty \). The basic hierarchy \(\{{\xi _{t}^{\mathrm {cp}}}(A)\}\) for the cp-rank does not converge to \(\tau _{\mathrm {cp}}(A)\) in general, but we provide two types of additional constraints that can be added to the program defining \({\xi _{t}^{\mathrm {cp}}}(A)\) to ensure convergence to \(\tau _{\mathrm {cp}}(A)\). First, we show how a generalization of the tensor constraints that are used in the definition of the parameter \(\tau _{\mathrm {cp}}^\mathrm {sos}(A)\) can be used for this, and we also give a more efficient (using smaller matrix blocks) description of these constraints. This strengthening of \({\xi _{2}^{\mathrm {cp}}}(A)\) is then at least as strong as \(\tau _{\mathrm {cp}}^\mathrm {sos}(A)\), but requires matrix variables of roughly half the size. Alternatively, we show that for every \(\varepsilon >0\) there is a finite number of additional linear constraints that can be added to the basic hierarchy \(\{{\xi _{t}^{\mathrm {cp}}}(A)\}\) so that the limit of the sequence of the resulting strengthened lower bounds is at least \(\tau _{\mathrm {cp}}(A)-\varepsilon \). We give numerical results on small matrices studied in the literature, which show that \({\xi _{3}^{\mathrm {+}}}(A)\) can improve over \(\tau _{+}^\mathrm {sos}(A)\).
Finally, in Sect. 5, we derive a hierarchy \(\{{\xi _{t}^{\mathrm {psd}}}(A)\}\) of lower bounds on the psd-rank. We compare the new bounds \({\xi _{t}^{\mathrm {psd}}}(A)\) to a bound from [52], and we provide some numerical examples illustrating their performance.
We provide two implementations of all the lower bounds introduced in this paper, at the arXiv submission of this paper. One implementation uses Matlab and the CVX package [36], and the other one uses Julia [8]. The implementations support various semidefinite programming solvers, for our numerical examples we used Mosek [56].
Preliminaries
In order to explain our basic approach in the next section, we first need to introduce some notation. We denote the set of all words in the symbols \(x_1,\ldots ,x_n\) by \(\langle \mathbf{x}\rangle = \langle x_1, \ldots , x_n \rangle \), where the empty word is denoted by 1. This is a semigroup with involution, where the binary operation is concatenation, and the involution of a word \(w\in \langle \mathbf{x}\rangle \) is the word \(w^*\) obtained by reversing the order of the symbols in w. The \(*\)-algebra of all real linear combinations of these words is denoted by \(\mathbb {R}\langle \mathbf{x} \rangle \), and its elements are called noncommutative polynomials. The involution extends to \(\mathbb {R}\langle \mathbf{x}\rangle \) by linearity. A polynomial \(p\in \mathbb {R}\langle \mathbf{x}\rangle \) is called symmetric if \(p^*=p\) and \(\mathrm {Sym} \, \mathbb {R}\langle \mathbf{x}\rangle \) denotes the set of symmetric polynomials. The degree of a word \(w\in \langle \mathbf{x}\rangle \) is the number of symbols composing it, denoted as |w| or \(\deg (w)\), and the degree of a polynomial \(p=\sum _wp_ww\in \mathbb {R}\langle \mathbf{x}\rangle \) is the maximum degree of a word w with \(p_w\ne 0\). Given \(t\in \mathbb {N}\cup \{\infty \}\), we let \(\langle \mathbf{x} \rangle _t\) be the set of words w of degree \(|w| \le t\), so that \(\langle \mathbf{x} \rangle _\infty =\langle \mathbf{x}\rangle \), and \(\mathbb {R}\langle \mathbf{x} \rangle _t\) is the real vector space of noncommutative polynomials p of degree \(\mathrm {deg}(p) \le t\). Given \(t \in \mathbb {N}\), we let \(\langle \mathbf{x} \rangle _{=t}\) be the set of words of degree exactly equal to t.
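As a small illustration of this bookkeeping, the following sketch (standard library only; the tuple encoding is an illustrative choice) enumerates the words in \(\langle \mathbf{x} \rangle _t\) and implements the involution by reversal.

```python
from itertools import product

def words(n, t):
    """All words of degree <= t in the symbols x_1, ..., x_n, encoded as
    tuples of 0-based symbol indices; the empty tuple is the word 1."""
    return [w for d in range(t + 1) for w in product(range(n), repeat=d)]

def star(w):
    """The involution w -> w*: reverse the order of the symbols."""
    return tuple(reversed(w))

# For n = 2 and t = 2 there are 1 + 2 + 4 = 7 words.
print(len(words(2, 2)))   # 7
print(star((0, 1)))       # (1, 0), i.e. (x_1 x_2)* = x_2 x_1
```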
For a set \(S\subseteq \mathrm {Sym} \,\mathbb {R}\langle \mathbf{x}\rangle \) and \(t\in \mathbb {N}\cup \{\infty \}\), the truncated quadratic module at degree 2t associated to S is defined as the cone generated by all polynomials \(p^*g p \in \mathbb {R}\langle \mathbf{x}\rangle _{2t}\) with \(g\in S\cup \{1\}\):
$$\begin{aligned} {{\mathscr {M}}}_{2t}(S)=\mathrm {cone}\Big \{p^*gp: p\in \mathbb {R}\langle \mathbf{x}\rangle , \ g\in S\cup \{1\},\ \deg (p^*gp)\le 2t\Big \}. \end{aligned}$$
Likewise, for a set \(T \subseteq \mathbb {R}\langle \mathbf{x}\rangle \), we can define the truncated ideal at degree 2t, denoted by \(\mathscr {I}_{2t}(T)\), as the vector space spanned by all polynomials \(p h \in \mathbb {R}\langle \mathbf{x}\rangle _{2t}\) with \(h \in T\):
$$\begin{aligned} {\mathscr {I}}_{2t}(T) = \mathrm {span}\big \{ ph : p \in \mathbb {R}\langle \mathbf {x}\rangle , \, h \in T, \, \mathrm {deg}(ph) \le 2t \big \}. \end{aligned}$$
We say that \({{\mathscr {M}}}(S) + {{\mathscr {I}}}(T)\) is Archimedean when there exists a scalar \(R>0\) such that
$$\begin{aligned} R-\sum _{i=1}^n x_i^2\in {\mathscr {M}}(S)+ {\mathscr {I}}(T). \end{aligned}$$
Throughout, we are interested in the space \(\mathbb {R}\langle \mathbf{x} \rangle _t^*\) of real-valued linear functionals on \(\mathbb {R}\langle \mathbf{x} \rangle _t\). We list some basic definitions: A linear functional \(L \in \mathbb {R}\langle \mathbf{x} \rangle _t^*\) is symmetric if \(L(w) = L(w^*)\) for all \(w \in \langle \mathbf{x} \rangle _t\) and tracial if \(L(ww') = L(w'w)\) for all \(w,w' \in \langle \mathbf{x} \rangle _t\). A linear functional \(L \in \mathbb {R}\langle \mathbf{x} \rangle _{2t}^*\) is said to be positive if \(L(p^*p) \ge 0\) for all \(p \in \mathbb {R}\langle \mathbf{x} \rangle _t\). Many properties of a linear functional \(L \in \mathbb {R}\langle \mathbf{x}\rangle _{2t}^*\) can be expressed as properties of its associated moment matrix (also known as its Hankel matrix). For \(L \in \mathbb {R}\langle \mathbf{x}\rangle _{2t}^*\) we define its associated moment matrix, which has rows and columns indexed by words in \(\langle \mathbf{x}\rangle _t\), by
$$\begin{aligned} M_t(L)_{w,w'} = L(w^* w') \quad \text {for} \quad w,w' \in \langle \mathbf{x}\rangle _t, \end{aligned}$$
and as usual we set \(M(L) = M_\infty (L)\). It then follows that L is symmetric if and only if \(M_t(L)\) is symmetric, and L is positive if and only if \(M_t(L)\) is positive semidefinite. In fact, one can even express nonnegativity of a linear form \(L\in \mathbb {R}\langle \mathbf{x}\rangle _{2t}^*\) on \({{\mathscr {M}}}_{2t}(S)\) in terms of certain associated positive semidefinite moment matrices. For this, given a polynomial \(g\in \mathbb {R}\langle \mathbf{x}\rangle \), define the linear form \(gL \in \mathbb {R}\langle \mathbf{x}\rangle _{2t-\deg (g)}^*\) by \((gL)(p)=L(gp)\). Then, we have
$$\begin{aligned} L(p^*gp)\ge 0 \text { for all } p\in \mathbb {R}\langle \mathbf{x}\rangle _{t-d_g} \iff M_{t-d_g}(gL)\succeq 0, \quad (d_g = \lceil \deg (g)/2\rceil ), \end{aligned}$$
and thus \(L\ge 0\) on \({{\mathscr {M}}}_{2t}(S)\) if and only if \(M_{t-d_{g}}(gL) \succeq 0\) for all \(g\in S \cup \{1\}\). Also, the condition \(L=0\) on \({{\mathscr {I}}}_{2t}(T)\) corresponds to linear equalities on the entries of \(M_t(L)\).
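The following sketch (assuming numpy; all names are illustrative) builds the truncated moment matrix \(M_t(L)\) from the values of L on words and checks positivity for a simple example, namely evaluation at a point of \(\mathbb {R}^n\), which corresponds to trace evaluation at a tuple of \(1\times 1\) matrices.

```python
import numpy as np
from itertools import product

def words(n, t):
    return [w for d in range(t + 1) for w in product(range(n), repeat=d)]

def moment_matrix(L, n, t):
    """M_t(L) with rows/columns indexed by the words of degree <= t;
    the involution of a word is its reversal."""
    idx = words(n, t)
    return np.array([[L(tuple(reversed(w)) + v) for v in idx] for w in idx])

# Example functional: evaluation at a point a in R^n, i.e. L(w) = prod_k a[w_k]
# (trace evaluation at 1x1 matrices). Its moment matrix is PSD of rank 1.
a = np.array([0.5, 2.0])
L = lambda w: float(np.prod([a[i] for i in w]))
M = moment_matrix(L, n=2, t=2)
print(np.linalg.eigvalsh(M).min() >= -1e-9, np.linalg.matrix_rank(M))  # True 1
```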
The moment matrix also allows us to define a property called flatness. For \(t \in \mathbb {N}\), a linear functional \(L \in \mathbb {R}\langle \mathbf{x}\rangle _{2t}^*\) is called \(\delta \)-flat if the rank of \(M_t(L)\) is equal to that of its principal submatrix indexed by the words in \(\langle \mathbf{x}\rangle _{t-\delta }\), that is,
$$\begin{aligned} {{\,\mathrm{rank}\,}}(M_t(L))={{\,\mathrm{rank}\,}}(M_{t-\delta }(L)). \end{aligned}$$
We call L flat if it is \(\delta \)-flat for some \(\delta \ge 1\). When \(t=\infty \), L is said to be flat when \(\mathrm {rank}(M(L))<\infty \), which is equivalent to \({{\,\mathrm{rank}\,}}(M(L))={{\,\mathrm{rank}\,}}(M_s(L))\) for some \(s\in \mathbb {N}\).
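Numerically, \(\delta \)-flatness amounts to comparing the rank of \(M_t(L)\) with that of a leading principal submatrix; a minimal sketch follows (numpy; the rank-one matrix used as input is an illustrative stand-in for a moment matrix, built from the point-evaluation example above).

```python
import numpy as np

def is_delta_flat(M, n_low_words, tol=1e-9):
    """Check rank(M_t(L)) == rank(M_{t-delta}(L)), assuming the first
    n_low_words rows/columns of M are indexed by the words of degree <= t - delta."""
    return (np.linalg.matrix_rank(M, tol=tol)
            == np.linalg.matrix_rank(M[:n_low_words, :n_low_words], tol=tol))

# Rank-one stand-in: moments L(w) of the point evaluation at a = (0.5, 2.0),
# for the 7 words of degree <= 2 in two symbols, ordered by degree.
m = np.array([1.0, 0.5, 2.0, 0.25, 1.0, 1.0, 4.0])
M = np.outer(m, m)                        # here M[w, w'] = L(w) L(w') = L(w* w')
print(is_delta_flat(M, n_low_words=3))    # True: rank M_2(L) = rank M_1(L) = 1
```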
A key example of a flat symmetric tracial positive linear functional on \(\mathbb {R}\langle \mathbf{x}\rangle \) is given by the trace evaluation at a given matrix tuple \(\mathbf {X}= (X_1,\ldots ,X_n) \in (\mathrm {H}^d)^n\):
$$\begin{aligned} p \mapsto \mathrm {Tr}(p(\mathbf {X})). \end{aligned}$$
Here, \(p(\mathbf {X})\) denotes the matrix obtained by substituting \(x_i\) by \(X_i\) in p, and throughout \(\mathrm {Tr}(\cdot )\) denotes the usual matrix trace, which satisfies \(\mathrm {Tr}(I) = d\) where I is the identity matrix in \(\mathrm {H}^d\). We mention in passing that we use \(\mathrm {tr}(\cdot )\) to denote the normalized matrix trace, which satisfies \(\mathrm {tr}(I) = 1\) for \(I \in \mathrm {H}^d\). Throughout, we use \(L_\mathbf {X}\) to denote the real part of the above functional, that is, \(L_\mathbf {X}\) denotes the linear form on \(\mathbb {R}\langle \mathbf{x}\rangle \) defined by
$$\begin{aligned} L_{\mathbf{X}}(p) = \mathrm {Re}( \mathrm {Tr}(p(X_1,\ldots ,X_n))) \quad \text {for} \quad p \in \mathbb {R}\langle \mathbf{x}\rangle . \end{aligned}$$
Observe that \(L_\mathbf {X}\) too is a symmetric tracial positive linear functional on \(\mathbb {R}\langle \mathbf{x}\rangle \). Moreover, \(L_\mathbf {X}\) is nonnegative on \({{\mathscr {M}}}(S)\) if the matrix tuple \(\mathbf {X}\) is taken from the matrix positivity domain \({\mathscr {D}}(S)\) associated to the finite set \(S \subseteq \mathrm {Sym} \, \mathbb {R}\langle \mathbf{x}\rangle \), defined as
$$\begin{aligned} {\mathscr {D}}(S)=\bigcup _{d\ge 1} \Big \{\mathbf {X}=(X_1,\ldots ,X_n)\in (\mathrm {H}^d)^n: g(\mathbf {X})\succeq 0 \text { for } g\in S\Big \}. \end{aligned}$$
Similarly, the linear functional \(L_\mathbf {X}\) is zero on \({{\mathscr {I}}}(T)\) if the matrix tuple \(\mathbf {X}\) is taken from the matrix variety \(\mathscr {V}(T)\) associated to the finite set \(T \subseteq \mathrm {Sym} \, \mathbb {R}\langle \mathbf{x}\rangle \), defined as
$$\begin{aligned} {\mathscr {V}}(T) = \bigcup _{d\ge 1} \big \{\mathbf {X}\in (\mathrm {H}^d)^n : h(\mathbf {X}) = 0 \text { for all } h \in T\big \}. \end{aligned}$$
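A minimal sketch of the trace-evaluation functional at a matrix tuple (numpy only; names and the random data are illustrative) is given below; the last line checks the tracial property \(L_\mathbf {X}(ww')=L_\mathbf {X}(w'w)\) on a pair of words.

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_herm_psd(d):
    """A random d x d Hermitian positive semidefinite matrix."""
    B = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return B @ B.conj().T

def L_X(word, X):
    """L_X(w) = Re Tr(w(X)) for a word w given as a tuple of symbol indices."""
    d = X[0].shape[0]
    P = np.eye(d, dtype=complex)
    for i in word:
        P = P @ X[i]
    return np.trace(P).real

X = [rand_herm_psd(3) for _ in range(2)]
w, v = (0, 1), (1, 1, 0)
print(np.isclose(L_X(w + v, X), L_X(v + w, X)))   # True: L_X is tracial
```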
To discuss convergence properties of our lower bounds for matrix factorization ranks, we will need to consider infinite dimensional analogs of matrix algebras, namely \(C^*\)-algebras admitting a tracial state. Let us introduce some basic notions we need about \(C^*\)-algebras; see, e.g., [9] for details. For our purposes, we define a \(C^*\)-algebra to be a norm closed \(*\)-subalgebra of the complex algebra \({{\mathscr {B}}}({\mathscr {H}})\) of bounded operators on a complex Hilbert space \({\mathscr {H}}\). In particular, we have \(\Vert a^*a\Vert = \Vert a\Vert ^2\) for all elements a in the algebra. Such an algebra \({{\mathscr {A}}}\) is said to be unital if it contains the identity operator (denoted 1). For instance, any full complex matrix algebra \(\mathbb {C}^{d\times d}\) is a unital \(C^*\)-algebra. Moreover, by a fundamental result of Artin-Wedderburn, any finite dimensional \(C^*\)-algebra (as a vector space) is \(*\)-isomorphic to a direct sum \(\bigoplus _{m=1}^M \mathbb {C}^{d_m\times d_m}\) of full complex matrix algebras [3, 77]. In particular, any finite dimensional \(C^*\)-algebra is unital.
An element b in a \(C^*\)-algebra \({{\mathscr {A}}}\) is called positive, denoted \(b\succeq 0\), if it is of the form \(b=a^*a\) for some \(a\in {{\mathscr {A}}}\). For finite sets \(S \subseteq \mathrm {Sym} \,\mathbb {R}\langle \mathbf{x}\rangle \) and \(T \subseteq \mathbb {R}\langle \mathbf{x}\rangle \), the \(C^*\)-algebraic analogs of the matrix positivity domain and matrix variety are the sets
$$\begin{aligned} {\mathscr {D}}_{{\mathscr {A}}}(S)&= \big \{\mathbf{X}=(X_1,\ldots ,X_n) \in \mathscr {A}^n : X_i^* = X_i \text { for } i \in [n], \, g(\mathbf{X}) \succeq 0 \text { for } g \in S \big \},\\ {\mathscr {V}}_{{\mathscr {A}}}(T)&= \big \{\mathbf{X}=(X_1,\ldots ,X_n) \in \mathscr {A}^n : X_i^* = X_i \text { for } i \in [n], \, h(\mathbf{X}) = 0 \text { for } h \in T \big \}. \end{aligned}$$
A state \(\tau \) on a unital \(C^*\)-algebra \({{\mathscr {A}}}\) is a linear form on \({{\mathscr {A}}}\) that is positive, i.e., \(\tau (a^*a)\ge 0\) for all \(a\in {{\mathscr {A}}}\), and satisfies \(\tau (1)=1\). Since \({{\mathscr {A}}}\) is a complex algebra, every state \(\tau \) is Hermitian: \(\tau (a) = \tau (a^*)\) for all \(a \in {{\mathscr {A}}}\). We say that a state is tracial if \(\tau (ab) = \tau (ba)\) for all \(a,b \in {\mathscr {A}}\) and faithful if \(\tau (a^*a)=0\) implies \(a=0\). A useful fact is that on a full matrix algebra \(\mathbb {C}^{d\times d}\) the normalized matrix trace is the unique tracial state (see, e.g., [15]). Now, given a tuple \(\mathbf {X}=(X_1,\ldots ,X_n)\in {{\mathscr {A}}}^n\) in a \(C^*\)-algebra \({{\mathscr {A}}}\) with tracial state \(\tau \), the second key example of a symmetric tracial positive linear functional on \(\mathbb {R}\langle \mathbf{x}\rangle \) is given by the trace evaluation map, which we again denote by \(L_\mathbf {X}\) and is defined by
$$\begin{aligned} L_\mathbf {X}(p)=\tau (p(X_1,\ldots ,X_n)) \quad \text {for all} \quad p\in \mathbb {R}\langle \mathbf{x}\rangle . \end{aligned}$$
Basic Approach
To explain the basic idea of how we obtain lower bounds for matrix factorization ranks, we consider the case of the completely positive semidefinite rank. Given a minimal factorization \(A=(\mathrm {Tr}(X_iX_j))\), with \(d={{\,\mathrm{cpsd-rank}\,}}_\mathbb {C}(A)\) and \(\mathbf {X}=(X_1,\ldots ,X_n)\) in \((\mathrm {H}_+^d)^n\), consider the linear form \(L_{\mathbf{X}}\) on \(\mathbb {R}\langle \mathbf{x}\rangle \) as defined in (5).
Then, we have \(A=(L_{\mathbf {X}}(x_ix_j))\) and \({{\,\mathrm{cpsd-rank}\,}}_\mathbb {C}(A) = d=L_{\mathbf {X}}(1)\). To obtain lower bounds on \({{\,\mathrm{cpsd-rank}\,}}_\mathbb {C}(A)\), we minimize L(1) over a set of linear functionals L that satisfy certain computationally tractable properties of \(L_{\mathbf{X}}\). Note that this idea of minimizing L(1) has recently been used in the works [59, 74] in the commutative setting to derive a hierarchy of lower bounds converging to the nuclear norm of a symmetric tensor.
The above linear functional \(L_{\mathbf{X}}\) is symmetric and tracial. Moreover, it satisfies some positivity conditions, since we have \(L_{\mathbf{X}}(q) \ge 0\) whenever \(q(\mathbf{X})\) is positive semidefinite. It follows that \(L_{\mathbf{X}}(p^*p) \ge 0\) for all \(p\in \mathbb {R}\langle \mathbf{x}\rangle \) and, as we explain later, \(L_{\mathbf{X}}\) satisfies the localizing conditions \(L_{\mathbf{X}}(p^*(\sqrt{A_{ii}} x_i - x_i^2)p) \ge 0\) for all p and i. Truncating the linear form yields the following hierarchy of lower bounds:
$$\begin{aligned} {\xi _{t}^{\mathrm {cpsd}}}(A) = \mathrm {min} \Big \{ L(1) : \;&L \in \mathbb {R}\langle x_1,\ldots ,x_n \rangle _{2t}^* \text { tracial and symmetric},\\&L(x_ix_j) = A_{ij} \quad \text {for} \quad i,j \in [n],\\&L \ge 0\quad \text {on} \quad {\mathscr {M}}_{2t}\big ( \{\sqrt{A_{11}} x_1-x_1^2, \ldots ,\sqrt{A_{nn}} x_n-x_n^2 \}\big )\Big \}. \end{aligned}$$
The bound \({\xi _{t}^{\mathrm {cpsd}}}(A)\) is computationally tractable (for small t). Indeed, as was explained in Sect. 1.3, the localizing constraint "\(L\ge 0\) on \({\mathscr {M}}_{2t}(S)\)" can be enforced by requiring certain matrices, whose entries are determined by L, to be positive semidefinite. This makes the problem defining \({\xi _{t}^{\mathrm {cpsd}}}(A)\) into a semidefinite program. The localizing conditions ensure the Archimedean property of the quadratic module, which makes it possible to show certain convergence properties of the bounds \({\xi _{t}^{\mathrm {cpsd}}}(A)\).
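For concreteness, here is an independent minimal sketch of the first level \(t=1\) (the paper's own implementations use Matlab/CVX and Julia; this sketch assumes Python with numpy and cvxpy, and all names are illustrative). At level 1 the only moment variables are L(1) and the \(L(x_i)\); the entries \(L(x_ix_j)\) are pinned to \(A_{ij}\), and the localizing constraints reduce to \(\sqrt{A_{ii}}\,L(x_i)-L(x_i^2)\ge 0\).

```python
import numpy as np
import cvxpy as cp

def xi1_cpsd(A):
    """Level t = 1 of the moment SDP: minimize L(1) subject to M_1(L) being PSD,
    L(x_i x_j) = A_ij, and the degree-0 localizing constraints
    sqrt(A_ii) L(x_i) - L(x_i^2) >= 0."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    # M is the moment matrix M_1(L), indexed by the words 1, x_1, ..., x_n.
    M = cp.Variable((n + 1, n + 1), symmetric=True)
    constraints = [M >> 0, M[1:, 1:] == A]
    constraints += [np.sqrt(A[i, i]) * M[0, i + 1] - A[i, i] >= 0 for i in range(n)]
    prob = cp.Problem(cp.Minimize(M[0, 0]), constraints)
    prob.solve()
    return prob.value

# Example: for A = I_n the level-1 bound evaluates to n, which matches
# cpsd-rank(I_n) = n (the factors must have pairwise orthogonal ranges).
print(round(xi1_cpsd(np.eye(3)), 4))   # 3.0
```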
The above approach extends naturally to the other matrix factorization ranks, using the following two basic ideas. First, since the cp-rank and the nonnegative rank deal with factorizations by diagonal matrices, we use linear functionals acting on classical commutative polynomials. Second, the asymmetric factorization ranks (psd-rank and nonnegative rank) can be seen as analogs of the symmetric ranks in the partial matrix setting, where we know only the values of L on the quadratic monomials corresponding to entries in the off-diagonal blocks (this will require scaling of the factors in order to be able to define localizing constraints ensuring the Archimedean property). A main advantage of our approach is that it applies to all four matrix factorization ranks, after easy suitable adaptations.
Connection to Polynomial Optimization
In classical polynomial optimization, the problem is to find the global minimum of a commutative polynomial f over a semialgebraic set of the form
$$\begin{aligned} D(S) = \{x \in \mathbb {R}^n : g(x) \ge 0 \text { for } g \in S\}, \end{aligned}$$
where \(S \subseteq \mathbb {R}[\mathbf {x}] = \mathbb {R}[x_1,\ldots ,x_n]\) is a finite set of polynomials. Tracial polynomial optimization is a noncommutative analog, where the problem is to minimize the normalized trace \(\mathrm {tr}(f(\mathbf {X}))\) of a symmetric polynomial f over a matrix positivity domain \({\mathscr {D}}(S)\) where \(S \subseteq \mathrm {Sym} \, \mathbb {R}\langle \mathbf{x}\rangle \) is a finite set of symmetric polynomials. Notice that the distinguishing feature here is the dimension independence: the optimization is over all possible matrix sizes. Perhaps counterintuitively, in this paper, we use techniques similar to those used for the tracial polynomial optimization problem to compute lower bounds on factorization dimensions.
For classical polynomial optimization Lasserre [46] and Parrilo [60] have proposed hierarchies of semidefinite programming relaxations based on the theory of moments and the dual theory of sums of squares polynomials. These can be used to compute successively better lower bounds converging to the global minimum (under the Archimedean condition). This approach has been used in a wide range of applications and there is an extensive literature (see, e.g., [1, 47, 49]). Most relevant to this work, it is used in [48] to design conic approximations of the completely positive cone and in [58] to check membership in the completely positive cone. This approach has also been extended to the noncommutative setting, first to the eigenvalue optimization problem [57, 61] (which will not play a role in this paper), and later to tracial optimization [14, 43].
For our paper, the moment formulation of the lower bounds is most relevant: For all \(t \in \mathbb {N}\cup \{\infty \}\), we can define the bounds
$$\begin{aligned} f_t&=\mathrm {inf}_{}\big \{L(f) : L\in \mathbb {R}[\mathbf{x}]_{2t}^*,\, L(1)=1,\, L\ge 0 \text { on } {{\mathscr {M}}}_{2t}(S)\big \}, \\ f_t^\mathrm {tr}&=\mathrm {inf}_{}\big \{L(f) : L\in \mathbb {R}\langle \mathbf{x}\rangle _{2t}^* \text { tracial and symmetric},\, L(1)=1,\, L \ge 0 \text { on } {{\mathscr {M}}}_{2t}(S)\big \}, \end{aligned}$$
where \(f_t\) (resp., \(f_t^\mathrm {tr}\)) lower bounds the (tracial) polynomial optimization problem.
The connection between the parameters \({\xi _{t}^{\mathrm {cpsd}}}(A)\) and \(f_t^\mathrm {tr}\) is now clear: in the former we do not have the normalization property "\(L(1)=1\)" but we do have the additional affine constraints "\(L(x_i x_j) = A_{ij}\)". This close relation to (tracial) polynomial optimization allows us to use that theory to understand the convergence properties of our bounds. Since throughout the paper we use (proof) techniques from (tracial) polynomial optimization, we will state the main convergence results we need, with full proofs, in Appendix A. Moreover, we give all proofs from the "moment side", which is most relevant to our treatment. Below we give a short summary of the convergence results for the hierarchies \(\{f_t\}\) and \(\{f_t^\mathrm {tr}\}\) that are relevant to our paper. We refer to Appendix A.3 for details.
Under the condition that \({{\mathscr {M}}}(S)\) is Archimedean, we have asymptotic convergence: \(f_t \rightarrow f_\infty \) and \(f_t^\mathrm {tr} \rightarrow f_\infty ^\mathrm {tr}\) as \(t \rightarrow \infty \). In the commutative setting, one can moreover show that \(f_\infty \) is equal to the global minimum of f over the set D(S). However, in the noncommutative setting, the parameter \(f_\infty ^\mathrm {tr}\) is in general not equal to the minimum of \(\mathrm {tr}(f(\mathbf {X}))\) over \(\mathbf {X}\in {\mathscr {D}}(S)\). Instead, we need to consider the \(C^*\)-algebraic version of the tracial polynomial optimization problem: one can show that
$$\begin{aligned} f_\infty ^\mathrm {tr}= \mathrm {inf} \big \{ \tau (f(\mathbf{X})) : \mathbf{X} \in {\mathscr {D}}_{{\mathscr {A}}}(S), \, {\mathscr {A}} \text { is a unital }C^*\text { -algebra with tracial state } \tau \big \}. \end{aligned}$$
An important additional convergence result holds under flatness. If the program defining the bound \(f_t\) (resp., \(f_t^\mathrm {tr}\)) admits a sufficiently flat optimal solution, then equality holds: \(f_t = f_\infty \) (resp., \(f_t^\mathrm {tr} = f_\infty ^\mathrm {tr}\)). Moreover, in this case, the parameter \(f_t^\mathrm {tr}\) is equal to the minimum value of \(\mathrm {tr}(f(\mathbf {X}))\) over the matrix positivity domain \({\mathscr {D}}(S)\).
Lower Bounds on the Completely Positive Semidefinite Rank
Let A be a completely positive semidefinite \(n \times n\) matrix. For \(t \in \mathbb {N}\cup \{\infty \}\), we consider the following semidefinite program, which, as we see below, lower bounds the complex completely positive semidefinite rank of A:
$$\begin{aligned} {\xi _{t}^{\mathrm {cpsd}}}(A) = \mathrm {min} \big \{ L(1) : \;&L \in \mathbb {R}\langle x_1,\ldots ,x_n \rangle _{2t}^* \text { tracial and symmetric},\\&L(x_ix_j) = A_{ij} \quad \text {for} \quad i,j \in [n],\\&L \ge 0\quad \text {on} \quad {\mathscr {M}}_{2t}(S_A^{\mathrm {cpsd}}) \big \}, \end{aligned}$$
where we set
$$\begin{aligned} S_A^{\mathrm {cpsd}}= \big \{\sqrt{A_{11}} x_1 - x_1^2, \ldots , \sqrt{A_{nn}} x_n - x_n^2\big \}. \end{aligned}$$
Additionally, define the parameter \({\xi _{*}^{\mathrm {cpsd}}}(A)\), obtained by adding the rank constraint \({{\,\mathrm{rank}\,}}(M(L)) < \infty \) to the program defining \({\xi _{\infty }^{\mathrm {cpsd}}}(A)\), where we consider the infimum instead of the minimum since we do not know whether the infimum is always attained. (In Proposition 1 we show the infimum is attained in \({\xi _{t}^{\mathrm {cpsd}}}(A)\) for \(t\in \mathbb {N}\cup \{\infty \}\)). This gives a hierarchy of monotone nondecreasing lower bounds on the completely positive semidefinite rank:
$$\begin{aligned} {\xi _{1}^{\mathrm {cpsd}}}(A) \le \ldots \le {\xi _{t}^{\mathrm {cpsd}}}(A)\le \ldots \le {\xi _{\infty }^{\mathrm {cpsd}}}(A) \le {\xi _{*}^{\mathrm {cpsd}}}(A)\le {{\,\mathrm{cpsd-rank_{\mathbb {C}}}\,}}(A). \end{aligned}$$
The inequality \({\xi _{\infty }^{\mathrm {cpsd}}}(A)\le {\xi _{*}^{\mathrm {cpsd}}}(A)\) is clear and monotonicity as well: If L is feasible for \({\xi _{k}^{\mathrm {cpsd}}}(A)\) with \(t \le k \le \infty \), then its restriction to \(\mathbb {R}\langle \mathbf{x}\rangle _{2t}\) is feasible for \({\xi _{t}^{\mathrm {cpsd}}}(A)\).
The following notion of localizing polynomials will be useful. A set \(S\subseteq \mathbb {R}\langle \mathbf{x}\rangle \) is said to be localizing at a matrix tuple \(\mathbf {X}\) if \(\mathbf {X}\in {\mathscr {D}}(S)\) (i.e., \(g(\mathbf {X})\succeq 0\) for all \(g\in S\)) and we say that S is localizing for A if S is localizing at some factorization \(\mathbf {X}\in (\mathrm {H}_+^d)^n\) of A with \(d={{\,\mathrm{cpsd-rank_{\mathbb {C}}}\,}}(A)\). The set \(S_A^{\mathrm {cpsd}}\) as defined in (7) is localizing for A, and, in fact, it is localizing at any factorization \(\mathbf {X}\) of A by Hermitian positive semidefinite matrices. Indeed, since
$$\begin{aligned} A_{ii}={{\,\mathrm{Tr}\,}}(X_i^2)\ge \lambda _{\mathrm {max}}(X_i^2) = \lambda _{\mathrm {max}} (X_i)^2 \end{aligned}$$
we have \(\sqrt{A_{ii}} X_i - X_i^2 \succeq 0\) for all \(i\in [n]\).
We can now use this to show the inequality \({\xi _{*}^{\mathrm {cpsd}}}(A) \le {{\,\mathrm{cpsd-rank_{\mathbb {C}}}\,}}(A)\). For this set \(d = {{\,\mathrm{cpsd-rank_{\mathbb {C}}}\,}}(A)\), let \(\mathbf {X}\in (\mathrm {H}_+^d)^n\) be a Gram factorization of A, and consider the linear form \(L_\mathbf {X}\in \mathbb {R}\langle \mathbf {x}\rangle ^*\) defined by
$$\begin{aligned} L_\mathbf {X}(p) = \mathrm {Re}(\mathrm {Tr}(p(\mathbf {X}))) \quad \text {for all} \quad p \in \mathbb {R}\langle \mathbf {x}\rangle . \end{aligned}$$
By construction \(L_\mathbf {X}\) is symmetric and tracial, and we have \(A=(L(x_ix_j))\). Moreover, since the set of polynomials \(S_A^{\mathrm {cpsd}}\) is localizing for A, the linear form \(L_\mathbf {X}\) is nonnegative on \({\mathscr {M}}(S_A^{\mathrm {cpsd}})\). Finally, we have \({{\,\mathrm{rank}\,}}(M(L_\mathbf {X}))<\infty \), since the algebra generated by \(X_1, \ldots , X_n\) is finite dimensional. Hence, \(L_\mathbf {X}\) is feasible for \({\xi _{*}^{\mathrm {cpsd}}}(A)\) with \(L_\mathbf {X}(1)=d\), which shows \({\xi _{*}^{\mathrm {cpsd}}}(A) \le {{\,\mathrm{cpsd-rank_{\mathbb {C}}}\,}}(A)\).
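The feasibility argument above is easy to check numerically; the sketch below (numpy; random data, purely illustrative) builds a Gram factorization \(A=(\mathrm {Tr}(X_iX_j))\) from random Hermitian positive semidefinite matrices and verifies that \(\sqrt{A_{ii}}X_i-X_i^2\succeq 0\) for all i, so that \(\mathbf {X}\in {\mathscr {D}}(S_A^{\mathrm {cpsd}})\) and \(L_\mathbf {X}\) is feasible with value \(L_\mathbf {X}(1)=d\).

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 4, 3

# Random Hermitian PSD factors X_1, ..., X_n and the matrix A = (Tr(X_i X_j)).
X = []
for _ in range(n):
    B = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    X.append(B @ B.conj().T)
A = np.array([[np.trace(Xi @ Xj).real for Xj in X] for Xi in X])

# Localizing property: sqrt(A_ii) X_i - X_i^2 is PSD for every i.
loc_ok = all(np.linalg.eigvalsh(np.sqrt(A[i, i]) * X[i] - X[i] @ X[i]).min() >= -1e-8
             for i in range(n))
print(loc_ok, d)   # True, and L_X(1) = Tr(I_d) = d is the value achieved
```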
The inclusions in (8) below show the quadratic module \({{\mathscr {M}}}(S_A^{\mathrm {cpsd}})\) is Archimedean (recall the definition in (3)). Moreover, although there are other possible choices for the localizing polynomials to use in \(S_A^{\mathrm {cpsd}}\), these inclusions also show that the choice made in (7) leads to the largest truncated quadratic module and thus to the best bound. For any scalar \(c > 0\), we have the inclusions
$$\begin{aligned} {{\mathscr {M}}}_{2t}(x,c-x) \subseteq {{\mathscr {M}}}_{2t}(x,c^2-x^2) \subseteq {{\mathscr {M}}}_{2t}(cx-x^2) \subseteq {{\mathscr {M}}}_{2t+2}(x,c-x), \end{aligned}$$
which hold in light of the following identities:
$$\begin{aligned} c-x&= \big ((c-x)^2 + c^2-x^2\big )/(2c), \end{aligned}$$
$$\begin{aligned} c^2 - x^2&= (c-x)^2 + 2(cx - x^2), \end{aligned}$$
$$\begin{aligned} cx - x^2&= \big ((c-x) x (c-x) + x(c-x)x\big )/c, \end{aligned}$$
$$\begin{aligned} x&= \big ( (cx - x^2) + x^2\big )/c. \end{aligned}$$
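These four identities are straightforward to verify; since they involve only the single variable x, a symbolic check with ordinary commuting symbols already suffices. A small sketch (assuming sympy) follows.

```python
import sympy as sp

x, c = sp.symbols("x c")   # only one variable occurs, so a commutative check suffices

identities = [
    ((c - x)**2 + (c**2 - x**2)) / (2*c) - (c - x),         # (9)
    (c - x)**2 + 2*(c*x - x**2) - (c**2 - x**2),            # (10)
    ((c - x)*x*(c - x) + x*(c - x)*x) / c - (c*x - x**2),   # (11)
    ((c*x - x**2) + x**2) / c - x,                          # (12)
]
print([sp.simplify(e) == 0 for e in identities])   # [True, True, True, True]
```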
In the rest of this section, we investigate properties of the hierarchy \(\{{\xi _{t}^{\mathrm {cpsd}}}(A)\}\) as well as some variations on it. We discuss convergence properties, asymptotically and under flatness, and we give another formulation for the parameter \({\xi _{*}^{\mathrm {cpsd}}}(A)\). Moreover, as the inequality \({\xi _{*}^{\mathrm {cpsd}}}(A) \le {{\,\mathrm{cpsd-rank_{\mathbb {C}}}\,}}(A)\) is typically strict, we present an approach to strengthen the bounds in order to go beyond \({\xi _{*}^{\mathrm {cpsd}}}(A)\). Then, we propose some techniques to simplify the computation of the bounds, and we illustrate the behavior of the bounds on some examples.
The Parameters \({\xi _{\infty }^{\mathrm {cpsd}}}(A)\) and \({\xi _{*}^{\mathrm {cpsd}}}(A)\)
In this section, we consider convergence properties of the hierarchy \({\xi _{t}^{\mathrm {cpsd}}}(\cdot )\), both asymptotically and under flatness. We also give equivalent reformulations of the limiting parameters \({\xi _{\infty }^{\mathrm {cpsd}}}(A)\) and \({\xi _{*}^{\mathrm {cpsd}}}(A)\) in terms of \(C^*\)-algebras with a tracial state, which we will use in Sects. 2.3–2.4 to show properties of these parameters.
Proposition 1
Let \(A \in \mathrm {CS}_{+}^n\). For \(t \in \mathbb {N}\cup \{\infty \}\) the optimum in \({\xi _{t}^{\mathrm {cpsd}}}(A)\) is attained, and
$$\begin{aligned} \lim _{t \rightarrow \infty } {\xi _{t}^{\mathrm {cpsd}}}(A) = {\xi _{\infty }^{\mathrm {cpsd}}}(A). \end{aligned}$$
Moreover, \({\xi _{\infty }^{\mathrm {cpsd}}}(A)\) is equal to the smallest scalar \(\alpha \ge 0\) for which there exists a unital \(C^*\)-algebra \({{\mathscr {A}}}\) with tracial state \(\tau \) and \((X_1,\ldots ,X_n) \in {\mathscr {D}}_{{\mathscr {A}}}(S_A^{\mathrm {cpsd}})\) such that \(A = \alpha \cdot (\tau (X_iX_j))\).
Proof
The sequence \(({\xi _{t}^{\mathrm {cpsd}}}(A))_t\) is monotonically nondecreasing and upper bounded by \({\xi _{\infty }^{\mathrm {cpsd}}}(A) <\infty \), which implies its limit exists and is at most \({\xi _{\infty }^{\mathrm {cpsd}}}(A)\).
As \({\xi _{t}^{\mathrm {cpsd}}}(A)\le {\xi _{\infty }^{\mathrm {cpsd}}}(A)\), we may add the redundant constraint \(L(1) \le {\xi _{\infty }^{\mathrm {cpsd}}}(A)\) to the problem \({\xi _{t}^{\mathrm {cpsd}}}(A)\) for every \(t \in \mathbb {N}\). By (10), we have \(\mathrm {Tr}(A) -\sum _ix_i^2 \in {{\mathscr {M}}}_2(S_A^{\mathrm {cpsd}})\). Hence, using the result of Lemma 13, the feasible region of \({\xi _{t}^{\mathrm {cpsd}}}(A)\) is compact, and thus, it has an optimal solution \(L_t\). Again by Lemma 13, the sequence \((L_t)\) has a pointwise converging subsequence with limit \(L \in \mathbb {R}\langle \mathbf {x}\rangle ^*\). This pointwise limit L is symmetric, tracial, satisfies \((L(x_ix_j)) = A\), and is nonnegative on \({\mathscr {M}}(S_A^{\mathrm {cpsd}})\). Hence, L is feasible for \({\xi _{\infty }^{\mathrm {cpsd}}}(A)\). This implies that L is optimal for \({\xi _{\infty }^{\mathrm {cpsd}}}(A)\) and we have \(\lim _{t \rightarrow \infty } {\xi _{t}^{\mathrm {cpsd}}}(A) = {\xi _{\infty }^{\mathrm {cpsd}}}(A)\).
The reformulation of \({\xi _{\infty }^{\mathrm {cpsd}}}(A)\) in terms of \(C^*\)-algebras with a tracial state follows directly using Theorem 1. \(\square \)
Next, we give some equivalent reformulations for the parameter \({\xi _{*}^{\mathrm {cpsd}}}(A)\), which follow as a direct application of Theorem 2. In general, we do not know whether the infimum in \({\xi _{*}^{\mathrm {cpsd}}}(A)\) is attained. However, as a direct application of Corollary 1, we see that this infimum is attained if there is an integer \(t \in \mathbb {N}\) for which \({\xi _{t}^{\mathrm {cpsd}}}(A)\) admits a flat optimal solution.
Proposition 2
Let \(A \in \mathrm {CS}_{+}^n\). The parameter \({\xi _{*}^{\mathrm {cpsd}}}(A)\) is given by the infimum of L(1) taken over all conic combinations L of trace evaluations at elements in \({\mathscr {D}}_{{\mathscr {A}}}(S_A^{\mathrm {cpsd}})\) for which \(A=(L(x_ix_j))\). The parameter \({\xi _{*}^{\mathrm {cpsd}}}(A)\) is also equal to the infimum over all \(\alpha \ge 0\) for which there exist a finite dimensional \(C^*\)-algebra \({{\mathscr {A}}}\) with tracial state \(\tau \) and \((X_1,\ldots ,X_n) \in {\mathscr {D}}_{{\mathscr {A}}}(S_A^{\mathrm {cpsd}})\) such that \(A = \alpha \cdot (\tau (X_iX_j))\).
In addition, if \({\xi _{t}^{\mathrm {cpsd}}}(A)\) admits a flat optimal solution, then \({\xi _{t}^{\mathrm {cpsd}}}(A)= {\xi _{*}^{\mathrm {cpsd}}}(A)\).
Next we show a formulation for \({\xi _{*}^{\mathrm {cpsd}}}(A)\) in terms of factorization by block-diagonal matrices, which helps explain why the inequality \({\xi _{*}^{\mathrm {cpsd}}}(A) \le {{\,\mathrm{cpsd-rank}\,}}_\mathbb {C}(A)\) is typically strict. Here \(\Vert \cdot \Vert \) is the operator norm, so that \(\Vert X\Vert = \lambda _\mathrm {max}(X)\) for \(X \succeq 0\).
Proposition 3
For \(A \in \mathrm {CS}_{+}^n\) we have
$$\begin{aligned} {\xi _{*}^{\mathrm {cpsd}}}(A) = \mathrm {inf} \Bigg \{ \sum _{m=1}^M d_m \cdot \underset{i \in [n]}{\mathrm {max}} \frac{\Vert X^m_{i}\Vert ^2}{A_{ii}} : \;&M \in \mathbb {N},\, d_1,\ldots ,d_M \in \mathbb {N}, \nonumber \\&X_i^m \in \mathrm {H}_+^{d_m} \text { for } i \in [n], m \in [M],\nonumber \\&A = \mathrm {Gram}\big (\oplus _{m=1}^M X_1^m,\ldots ,\oplus _{m = 1}^M X_n^m\big )\Bigg \}. \end{aligned}$$
Note that using matrices from \(\mathrm {S}_+^{d_m}\) instead of \(\mathrm {H}_+^{d_m}\) does not change the optimal value.
Proof
The proof uses the formulation of \({\xi _{*}^{\mathrm {cpsd}}}(A)\) in terms of conic combinations of trace evaluations at matrix tuples in \({\mathscr {D}}(S_A^{\mathrm {cpsd}})\) as given in Proposition 2. We first show the inequality \(\beta \le {\xi _{*}^{\mathrm {cpsd}}}(A)\), where \(\beta \) denotes the optimal value of the program in (13).
For this, assume \(L\in \mathbb {R}\langle \mathbf{x}\rangle ^*\) is a conic combination of trace evaluations at elements of \({{\mathscr {D}}}(S_A^{\mathrm {cpsd}})\) such that \(A=(L(x_ix_j))\). We will construct a feasible solution for (13) with objective value L(1). The linear functional L can be written as
$$\begin{aligned} L=\sum _{m=1}^M \lambda _m L_{\mathbf Y^m}, \text { where } \lambda _m > 0 \text { and } \mathbf Y^m=(Y^m_1,\ldots ,Y^m_n) \in {{\mathscr {D}}}(S_A^{\mathrm {cpsd}}) \text { for } m \in [M]. \end{aligned}$$
Let \(d_m\) denote the size of the matrices \(Y_1^m, \ldots , Y_n^m\), so that \(L(1)=\sum _m \lambda _m d_m\). Since \(\mathbf Y^m \in {{\mathscr {D}}}(S_A^{\mathrm {cpsd}})\), we have \(Y^m_i \succeq 0\) and \(A_{ii}I-(Y^m_i)^2\succeq 0\) by identities (10) and (12). This implies \(\Vert Y^m_i\Vert ^2 \le A_{ii}\) for all \(i\in [n]\) and \(m \in [M]\). Define \(\mathbf X^m = \sqrt{\lambda _m} \, \mathbf Y^m\). Then, \(L(x_ix_j)= \sum _m {{\,\mathrm{Tr}\,}}(X^m_iX^m_j)\), so that the matrices \(\oplus _m X^m_1,\ldots ,\oplus _m X^m_n\) form a Gram decomposition of A. This gives a feasible solution to (13) with value
$$\begin{aligned} \sum _{m=1}^M d_m \cdot \underset{i\in [n]}{\mathrm {max}}\frac{\Vert X^m_i\Vert ^2}{A_{ii}} =\sum _{m=1}^M d_m \lambda _m \, \underset{i\in [n]}{\mathrm {max}}\frac{\Vert Y^m_i\Vert ^2}{A_{ii}} \le \sum _{m=1}^M d_m\lambda _m =L(1), \end{aligned}$$
which shows \(\beta \le L(1)\), and hence \(\beta \le {\xi _{*}^{\mathrm {cpsd}}}(A)\).
For the other direction, we assume
$$\begin{aligned} A = \mathrm {Gram}\big (\oplus _{m=1}^M X^m_1,\ldots ,\oplus _{m=1}^M X^m_n\big ), \quad X^m_1,\ldots ,X^m_n \in \mathrm {S}^{d_m}_+ \ \text { for } m \in [M]. \end{aligned}$$
Set \(\lambda _m = \mathrm {max}_{i\in [n]} {\Vert X^m_i\Vert ^2/ A_{ii}}\), and define the linear form L by
$$\begin{aligned} L= \sum _{m=1}^M \lambda _m L_{\mathbf Y^m}, \quad \text {where} \quad \mathbf Y^m = \mathbf {X}^m / \sqrt{\lambda _m} \quad \text {for all} \quad m \in [M]. \end{aligned}$$
We have \(L(1)=\sum _m \lambda _m d_m\) and \(A=(L(x_ix_j))\), and thus it suffices to show that each matrix tuple \(\mathbf Y^m\) belongs to \({{\mathscr {D}}}(S_A^{\mathrm {cpsd}})\). For this we observe that \(\lambda _mA_{ii}\ge \Vert X^m_i\Vert ^2\). Therefore \(\lambda _m A_{ii} I \succeq (X_i^m)^2\), and thus \(A_{ii} I \succeq (Y_i^m)^2\), which implies \(\sqrt{A_{ii}} Y_i^m - (Y_i^m)^2 \succeq 0\). This shows \({\xi _{*}^{\mathrm {cpsd}}}(A) \le L(1)=\sum _m \lambda _m d_m\), and thus \({\xi _{*}^{\mathrm {cpsd}}}(A) \le \beta \). \(\square \)
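To illustrate Proposition 3, the sketch below (numpy; the two-block data are purely illustrative) forms \(A=\mathrm {Gram}(\oplus _m X_1^m,\ldots ,\oplus _m X_n^m)\) from given blocks and evaluates the objective \(\sum _m d_m\,\mathrm {max}_i\Vert X_i^m\Vert ^2/A_{ii}\) of (13), which upper bounds \({\xi _{*}^{\mathrm {cpsd}}}(A)\).

```python
import numpy as np

def block_objective(blocks):
    """blocks[m] = [X_1^m, ..., X_n^m], PSD matrices of common size d_m.
    Returns A = Gram of the direct sums and the objective of (13):
    sum_m d_m * max_i ||X_i^m||^2 / A_ii (operator norm)."""
    n = len(blocks[0])
    A = sum(np.array([[np.trace(Xm[i] @ Xm[j]) for j in range(n)] for i in range(n)])
            for Xm in blocks)
    val = sum(Xm[0].shape[0] * max(np.linalg.norm(Xm[i], ord=2) ** 2 / A[i, i]
                                   for i in range(n))
              for Xm in blocks)
    return A, val

# Toy two-block factorization with n = 2 (one 1x1 block and one 2x2 block).
blocks = [[np.array([[2.0]]), np.array([[1.0]])],
          [np.eye(2), np.diag([0.5, 0.0])]]
A, val = block_objective(blocks)
print(A)     # the matrix that the direct sums Gram-represent
print(val)   # an upper bound on xi_*^cpsd(A), by (13)
```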
We can say a bit more when the matrix A lies on an extreme ray of the cone \(\mathrm {CS}_{+}^n\). In the formulation from Proposition 3, it suffices to restrict the minimization over factorizations of A involving only one block. However, we know very little about the extreme rays of \(\mathrm {CS}_{+}^n\), also in view of the recent result that the cone is not closed for large n [22, 73].
Proposition 4
If A lies on an extreme ray of the cone \(\mathrm {CS}_{+}^n\), then
$$\begin{aligned} {\xi _{*}^{\mathrm {cpsd}}}(A) = {\text {inf}} \left\{ d \cdot \underset{i \in [n]}{\mathrm {max}} \frac{\Vert X_{i}\Vert ^2}{A_{ii}} : d \in \mathbb {N}, X_1,\ldots ,X_n \in \mathrm {H}_+^{d}, \, A = \mathrm {Gram}\big (X_1, \ldots , X_n \big )\right\} . \end{aligned}$$
Moreover, if \(\oplus _{m=1}^M X^m_1,\ldots ,\oplus _{m=1}^M X^m_n\) is a Gram decomposition of A providing an optimal solution to (13) and some block \(X^m_i\) has rank 1, then \({\xi _{*}^{\mathrm {cpsd}}}(A)={{\,\mathrm{cpsd-rank_{\mathbb {C}}}\,}}(A)\).
Proof
Let \(\beta \) be the infimum in Proposition 4. The inequality \({\xi _{*}^{\mathrm {cpsd}}}(A) \le \beta \) follows from the reformulation of \({\xi _{*}^{\mathrm {cpsd}}}(A)\) in Proposition 3. To show the reverse inequality we consider a solution \( \oplus _{m=1}^M X^m_1,\ldots ,\oplus _{m=1}^M X^m_n \) to (13), and set \(\lambda _m= \mathrm {max}_i\Vert X^m_i\Vert ^2/A_{ii}\). We will show \(\beta \le \sum _m d_m\lambda _m\). For this define the matrices \( A_m={{\,\mathrm{Gram}\,}}(X^m_1,\cdots ,X^m_n), \) so that \(A=\sum _m A_m\). As A lies on an extreme ray of \(\mathrm {CS}_{+}^n\), we must have \(A_m = \alpha _m A\) for some \(\alpha _m>0\) with \(\sum _m\alpha _m=1\). Hence, since
$$\begin{aligned} A=A_m/\alpha _m={{\,\mathrm{Gram}\,}}(X^m_1/\sqrt{\alpha _m}, \cdots , X^m_n/\sqrt{\alpha _m}), \end{aligned}$$
we have \(\beta \le d_m\lambda _m/\alpha _m\) for all \(m\in [M]\). It suffices now to use \(\sum _m \alpha _m=1\) to see that \(\mathrm {min}_m d_m\lambda _m/\alpha _m \le \sum _m d_m\lambda _m\). So we have shown \(\beta \le \mathrm {min}_m d_m\lambda _m/\alpha _m \le \sum _m d_m\lambda _m.\) This implies \(\beta \le {\xi _{*}^{\mathrm {cpsd}}}(A)\), and thus equality holds.
Assume now that \(\oplus _{m=1}^M X^m_1,\ldots ,\oplus _{m=1}^M X^m_n\) is optimal to (13) and that there is a block \(X_i^m\) of rank 1. By Proposition 3 we have \(\sum _m d_m\lambda _m= {\xi _{*}^{\mathrm {cpsd}}}(A)\). From the argument just made above it follows that
$$\begin{aligned} {\xi _{*}^{\mathrm {cpsd}}}(A)= \mathrm {min}_m d_m\lambda _m/\alpha _m =\sum _m d_m \lambda _m. \end{aligned}$$
As \(\sum _m \alpha _m=1\) this implies \(d_m\lambda _m/\alpha _m =\mathrm {min}_m d_m\lambda _m/\alpha _m\) for all m; that is, all terms \(d_m\lambda _m/\alpha _m\) take the same value \({\xi _{*}^{\mathrm {cpsd}}}(A)\). By assumption, there exist some \(m\in [M]\) and \(i\in [n]\) for which \(X^m_i\) has rank 1. Then \(\Vert X^m_i\Vert ^2=\langle X^m_i,X^m_i\rangle \), which gives \(\lambda _m =\alpha _m\), and thus \({\xi _{*}^{\mathrm {cpsd}}}(A) = d_m\). On the other hand, \({{\,\mathrm{cpsd-rank_{\mathbb {C}}}\,}}(A)\le d_m\) since \((X^m_i/\sqrt{\alpha _m})_i\) forms a Gram decomposition of A, so equality \({\xi _{*}^{\mathrm {cpsd}}}(A)=d_m={{\,\mathrm{cpsd-rank_{\mathbb {C}}}\,}}(A)\) holds. \(\square \)
Additional Localizing Constraints to Improve on \({\xi _{*}^{\mathrm {cpsd}}}(A)\)
In order to strengthen the bounds, we may require nonnegativity over a (truncated) quadratic module generated by a larger set of localizing polynomials for A. The following lemma gives one such approach.
Lemma 1
Let \(A \in \mathrm {CS}_{+}^n\). For \(v\in \mathbb {R}^n\) and \(g_v= v^\textsf {T}Av -\big (\sum _{i=1}^n v_ix_i\big )^2\), the set \(\{g_v\}\) is localizing for every Gram factorization of A by Hermitian positive semidefinite matrices (in particular, \(\{g_v\}\) is localizing for A).
If \(X_1,\ldots ,X_n\) is a Gram decomposition of A by Hermitian positive semidefinite matrices, then
$$\begin{aligned} v^\textsf {T}Av= {{\,\mathrm{Tr}\,}}\left( \Big (\sum _{i=1}^n v_iX_i\Big )^2\right) \ge \lambda _{\mathrm {max}}\left( \Big (\sum _{i=1}^n v_iX_i\Big )^2\right) , \end{aligned}$$
hence \(v^\textsf {T}AvI-(\sum _{i=1}^nv_iX_i)^2\succeq 0\). \(\square \)
Given a set \(V\subseteq \mathbb {R}^n\), we consider the larger set
$$\begin{aligned} S_{A,V}^{\mathrm {cpsd}}= S_A^{\mathrm {cpsd}}\cup \{g_v: v\in V\} \end{aligned}$$
of localizing polynomials for A. For \(t \in \mathbb {N}\cup \{\infty ,*\}\), denote by \({\xi _{t,V}^{\mathrm {cpsd}}}(A)\) the parameter obtained by replacing in \({\xi _{t}^{\mathrm {cpsd}}}(A)\) the nonnegativity constraint on \({{\mathscr {M}}}_{2t}(S_A^{\mathrm {cpsd}})\) by nonnegativity on the larger set \({{\mathscr {M}}}_{2t}(S_{A,V}^{\mathrm {cpsd}})\). We have \({\xi _{t,\emptyset }^{\mathrm {cpsd}}}(A)={\xi _{t}^{\mathrm {cpsd}}}(A)\) and
$$\begin{aligned} {\xi _{t}^{\mathrm {cpsd}}}(A)\le {\xi _{t,V}^{\mathrm {cpsd}}}(A)\le {{\,\mathrm{cpsd-rank_{\mathbb {C}}}\,}}(A) \quad \text {for all} \quad V \subseteq \mathbb {R}^n. \end{aligned}$$
By scaling invariance, we can add the above constraints for all \(v \in \mathbb {R}^n\) by setting V to be the unit sphere \(\mathbb {S}^{n-1}\). Since \(\mathbb {S}^{n-1}\) is a compact metric space, there exists a sequence \(V_1 \subseteq V_2 \subseteq \ldots \subseteq \mathbb {S}^{n-1}\) of finite subsets such that \(\bigcup _{k\ge 1} V_k\) is dense in \(\mathbb {S}^{n-1}\). Each of the parameters \({\xi _{t,V_k}^{\mathrm {cpsd}}}(A)\) involves finitely many localizing constraints, and, as we now show, they converge to the parameter \({\xi _{t,\mathbb {S}^{n-1}}^{\mathrm {cpsd}}}(A)\).
Consider a matrix \(A\in \mathrm {CS}_{+}^n\). For \(t \in \{\infty , *\}\), we have
$$\begin{aligned} \lim _{k \rightarrow \infty } {\xi _{t,V_k}^{\mathrm {cpsd}}}(A) = {\xi _{t,\mathbb {S}^{n-1}}^{\mathrm {cpsd}}}(A). \end{aligned}$$
Let \(\varepsilon > 0\). Since \(\bigcup _k V_k\) is dense in \(\mathbb {S}^{n-1}\), there is an integer \(k\ge 1\) so that for every \(u \in \mathbb {S}^{n-1}\) there exists a vector \(v \in V_k\) satisfying
$$\begin{aligned} \Vert u-v\Vert _1 \le \frac{\varepsilon \lambda _\mathrm {min}(A)}{4 \sqrt{n} \, \mathrm {max}_i A_{ii}} \quad \text {and} \quad \Vert u-v\Vert _2 \le \frac{\varepsilon \lambda _\mathrm {min}(A)}{4\mathrm {Tr}(A^2)^{1/2}}. \end{aligned}$$
The above Propositions 1 and 2 have natural analogs for the programs \({\xi _{t,V}^{\mathrm {cpsd}}}(A)\). These show that for \(t = \infty \) (\(t = *\)) the parameter \({\xi _{t,V_k}^{\mathrm {cpsd}}}(A)\) is the infimum over all \(\alpha \ge 0\) for which there exist a (finite dimensional) unital \(C^*\)-algebra \({{\mathscr {A}}}\) with tracial state \(\tau \) and \(\mathbf {X}\in {\mathscr {D}}_{{\mathscr {A}}}(S_{A,V_k}^{\mathrm {cpsd}})\) such that \(A = \alpha \cdot (\tau (X_iX_j))\).
Below we will show that \(\mathbf{X}' = \sqrt{1-\varepsilon } \mathbf{X} \in {\mathscr {D}}_{{\mathscr {A}}}(S_{A,\mathbb {S}^{n-1}}^{\mathrm {cpsd}})\). This implies that the linear form \(L \in \mathbb {R}\langle \mathbf {x}\rangle ^*\) defined by \(L(p) = \alpha /(1-\varepsilon ) \tau (p(\mathbf{X'}))\) is feasible for \({\xi _{t,\mathbb {S}^{n-1}}^{\mathrm {cpsd}}}(A)\) with objective value \(L(1) = \alpha /(1-\varepsilon )\). This shows
$$\begin{aligned} {\xi _{t,\mathbb {S}^{n-1}}^{\mathrm {cpsd}}}(A) \le {1\over 1-\varepsilon }\ {\xi _{t,V_k}^{\mathrm {cpsd}}} (A) \le {1\over 1-\varepsilon }\ \lim _{k\rightarrow \infty } {\xi _{t,V_k}^{\mathrm {cpsd}}}(A). \end{aligned}$$
Since \(\varepsilon >0\) was arbitrary, letting \(\varepsilon \) tend to 0 completes the proof.
We now show \(\mathbf{X}' = \sqrt{1-\varepsilon } \mathbf{X} \in {\mathscr {D}}_{{\mathscr {A}}}(S_{A,\mathbb {S}^{n-1}}^{\mathrm {cpsd}})\). For this consider the map
$$\begin{aligned} f_{\mathbf{X}} :\mathbb {S}^{n-1} \rightarrow \mathbb {R}, \, v \mapsto \Big \Vert \sum _{i=1}^n v_i X_i\Big \Vert ^2, \end{aligned}$$
where \(\Vert \cdot \Vert \) denotes the \(C^*\)-algebra norm of \({\mathscr {A}}\). For \(\alpha \ge 0\) and \(a\in {{\mathscr {A}}}\) with \(a^*=a\), we have \(\alpha \ge \Vert a\Vert \) if and only if \(\alpha ^2-a^2\succeq 0\) in \({{\mathscr {A}}}\) (equivalently, \(\alpha \pm a\succeq 0\) in \({{\mathscr {A}}}\)); in particular, \(v^\textsf {T}Av \ge f_{\mathbf{X}}(v)\) if and only if \(v^\textsf {T}Av - \big (\sum _{i=1}^n v_iX_i\big )^2\succeq 0\). Since \(\mathbf {X}\in {\mathscr {D}}_{{\mathscr {A}}}(S_{A,V_k}^{\mathrm {cpsd}})\) we have \(v^\textsf {T}A v - f_{\mathbf{X}}(v) \ge 0\) for all \(v \in V_k\), and hence
$$\begin{aligned} v^\textsf {T}A v - f_{\mathbf{X}'}(v)= & {} v^\textsf {T}A v \left( 1 - (1-\varepsilon ) \frac{f_{\mathbf{X}}(v)}{v^\textsf {T}A v}\right) \ge v^\textsf {T}A v \big ( 1 - (1-\varepsilon ) \big ) \\= & {} \varepsilon v^\textsf {T}A v \ge \varepsilon \lambda _\mathrm {min}(A). \end{aligned}$$
Let \(u \in \mathbb {S}^{n-1}\) and let \(v \in V_k\) be such that (14) holds. Using Cauchy-Schwarz we have
$$\begin{aligned} | u^\textsf {T}A u - v^\textsf {T}A v |&= | (u-v)^\textsf {T}A (u + v)| = |\langle A, (u-v) (u+v)^\textsf {T}\rangle | \\&\le \sqrt{\mathrm {Tr}(A^2)} \sqrt{\mathrm {Tr}((u+v) (u-v)^\textsf {T}(u-v) (u+v)^\textsf {T})}\\&\le \sqrt{\mathrm {Tr}(A^2)} \Vert u-v\Vert _2 \Vert u+v\Vert _2 \le 2\sqrt{\mathrm {Tr}(A^2)} \Vert u-v\Vert _2\\&\le 2\sqrt{\mathrm {Tr}(A^2)} \frac{\varepsilon \lambda _\mathrm {min}(A)}{4\sqrt{\mathrm {Tr}(A^2)}}= \frac{\varepsilon \lambda _\mathrm {min}(A)}{2}. \end{aligned}$$
Since \(\sqrt{A_{ii}} X_i - X_i^2\) is positive in \({\mathscr {A}}\), we have that \(\sqrt{A_{ii}} -X_i\) is positive in \({\mathscr {A}}\) by (9) and (10), which implies \(\Vert X_i\Vert \le \sqrt{A_{ii}}\). By the reverse triangle inequality, we then have
$$\begin{aligned} |f_{\mathbf{X'}}(u) - f_{\mathbf{X'}}(v)|&= \left| \big \Vert \sum _{i=1}^n u_i X_i'\big \Vert - \big \Vert \sum _{i=1}^n v_i X_i'\big \Vert \right| \left( \big \Vert \sum _{i=1}^n u_i X_i'\big \Vert + \big \Vert \sum _{i=1}^n v_i X_i'\big \Vert \right) \\&\le \left\| \sum _{i=1}^n (v_i - u_i) X_i'\right\| 2\sqrt{n} \, \mathrm {max}_i \sqrt{A_{ii}} \\&\le \left( \sum _{i=1}^n |v_i - u_i| \Vert X_i'\Vert \right) 2\sqrt{n} \, \mathrm {max}_i \sqrt{A_{ii}}\\&\le \Vert u-v\Vert _1 2 \sqrt{n} \, \mathrm {max}_i A_{ii}\\&\le \frac{\varepsilon \lambda _\mathrm {min}(A)}{4 \sqrt{n} \, \mathrm {max}_i A_{ii}} 2\sqrt{n}\, \mathrm {max}_i A_{ii}= \frac{\varepsilon \lambda _\mathrm {min}(A)}{2}. \end{aligned}$$
Combining the above inequalities we obtain that \(u^\textsf {T}A u - f_{\mathbf{X}'}(u) \ge 0\) for all \(u \in \mathbb {S}^{n-1}\), and hence \(u^\textsf {T}A u - \big (\sum _{i=1}^n u_i X_i'\big )^2\) is positive in \({\mathscr {A}}\). Thus, we have \(\mathbf {X}' \in {\mathscr {D}}_{{\mathscr {A}}}(S_{A,\mathbb {S}^{n-1}}^{\mathrm {cpsd}})\). \(\square \)
We now discuss two examples where the bounds \({\xi _{*,V}^{\mathrm {cpsd}}}(A)\) go beyond \({\xi _{*}^{\mathrm {cpsd}}}(A)\).
Consider the matrix
$$\begin{aligned} A = \begin{pmatrix} 1 &{}1/2\\ 1/2 &{} 1 \end{pmatrix}= {{\,\mathrm{Gram}\,}}\ \left( \begin{pmatrix} 1 &{} 0 \\ 0 &{} 0 \end{pmatrix}, \begin{pmatrix} 1/2 &{} 1/2 \\ 1/2 &{} 1/2\end{pmatrix} \right) , \end{aligned}$$
with \({{\,\mathrm{cpsd-rank}\,}}_\mathbb {C}(A) = 2\). We can also write \(A = \mathrm {Gram}(Y_1, Y_2)\), where
$$\begin{aligned} Y_1 = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 &{} 0 &{} 0\\ 0 &{} 1 &{} 0\\ 0 &{} 0 &{}0 \end{pmatrix}, \quad Y_2 = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 &{} 0 &{} 0\\ 0 &{} 0 &{} 0\\ 0 &{} 0 &{} 1 \end{pmatrix}. \end{aligned}$$
With \(X_i= \sqrt{2} \ Y_i\) we have \(I - X_i^2 \succeq 0\) for \(i=1,2\). Hence the linear form \(L = L_\mathbf {X}/2\) is feasible for \({\xi _{*}^{\mathrm {cpsd}}}(A)\), which shows that \({\xi _{*}^{\mathrm {cpsd}}}(A) \le L(1) = 3/2\). In fact, this form L gives an optimal flat solution to \({\xi _{2}^{\mathrm {cpsd}}}(A)\), as we can check using a semidefinite programming solver, so \({\xi _{*}^{\mathrm {cpsd}}}(A) = 3/2\). In passing, we observe that \({\xi _{1}^{\mathrm {cpsd}}}(A) = 4/3\), which coincides with the analytic lower bound (18) (see also Lemma 6 below).
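The level-one program is small enough to write out explicitly. The following is a minimal sketch (ours, not code from the paper) in Python with cvxpy, where the moment matrix \(M_1(L)\) is indexed by the monomials \(1, x_1, x_2\); it reproduces the value \({\xi _{1}^{\mathrm {cpsd}}}(A) = 4/3\) mentioned above (level \(t=2\) requires a larger moment matrix and is not shown).

```python
import cvxpy as cp
import numpy as np

# Minimal sketch (ours) of the level-1 relaxation xi_1^cpsd for the 2x2 matrix above.
A = np.array([[1.0, 0.5], [0.5, 1.0]])
n = A.shape[0]

# Moment matrix M_1(L), indexed by 1, x_1, ..., x_n:
# M[0, 0] = L(1), M[0, i+1] = L(x_i), M[i+1, j+1] = L(x_i x_j).
M = cp.Variable((n + 1, n + 1), symmetric=True)

constraints = [M >> 0, M[1:, 1:] == A]       # M_1(L) psd and L(x_i x_j) = A_ij
for i in range(n):
    # Localizing constraints L(sqrt(A_ii) x_i - x_i^2) >= 0, using L(x_i^2) = A_ii.
    constraints.append(np.sqrt(A[i, i]) * M[0, i + 1] - A[i, i] >= 0)

prob = cp.Problem(cp.Minimize(M[0, 0]), constraints)
prob.solve()
print(prob.value)   # approx. 1.3333 = 4/3
```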
For \(e = (1,1) \in \mathbb {R}^2\) and \(V = \{e\}\), this form L is not feasible for \({\xi _{*,V}^{\mathrm {cpsd}}}(A)\), because for the polynomial \(p = 1-3 x_1 - 3x_2\) we have \(L(p^*g_ep) = -9/2 < 0\). This means that the localizing constraint \(L(p^*g_ep)\ge 0\) is not redundant: For \(t\ge 2\) it cuts off part of the feasibility region of \({\xi _{t}^{\mathrm {cpsd}}}(A)\). Indeed, using a semidefinite programming solver, we find an optimal flat solution of \({\xi _{3,V}^{\mathrm {cpsd}}}(A)\) with objective value \((5-\sqrt{3})/2\approx 1.633\), hence
$$\begin{aligned} {\xi _{*,V}^{\mathrm {cpsd}}}(A) = (5-\sqrt{3})/2 > 3/2 = {\xi _{*}^{\mathrm {cpsd}}}(A). \end{aligned}$$
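The value \(L(p^*g_ep) = -9/2\) claimed above is easy to reproduce numerically; the following small check (ours) evaluates \(L = L_{\mathbf X}/2\) at the explicit matrices \(X_i = \sqrt{2}\, Y_i\).

```python
import numpy as np

# Numerical check (ours) of L(p* g_e p) = -9/2 for L = L_X / 2.
X1 = np.diag([1.0, 1.0, 0.0])                 # X_1 = sqrt(2) * Y_1
X2 = np.diag([1.0, 0.0, 1.0])                 # X_2 = sqrt(2) * Y_2

pX = np.eye(3) - 3 * X1 - 3 * X2              # p = 1 - 3 x_1 - 3 x_2 evaluated at (X_1, X_2)
geX = 3 * np.eye(3) - (X1 + X2) @ (X1 + X2)   # g_e = e^T A e - (x_1 + x_2)^2, with e^T A e = 3

print(0.5 * np.trace(pX @ geX @ pX))          # -4.5 = -9/2 < 0
```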
Consider the symmetric circulant matrices
$$\begin{aligned} M(\alpha ) = \begin{pmatrix} 1 &{} \alpha &{} 0 &{} 0 &{} \alpha \\ \alpha &{} 1 &{} \alpha &{} 0 &{} 0 \\ 0 &{} \alpha &{} 1 &{} \alpha &{} 0 \\ 0 &{} 0 &{} \alpha &{} 1 &{} \alpha \\ \alpha &{} 0 &{} 0 &{} \alpha &{} 1 \end{pmatrix}\quad \text { for } \quad \alpha \in \mathbb {R}. \end{aligned}$$
For \(0\le \alpha \le 1/2\), we have \(M(\alpha ) \in \mathrm {CS}_{+}^5\) with \({{\,\mathrm{cpsd-rank_{\mathbb {C}}}\,}}(M(\alpha )) \le 5\). To see this, we set \(\beta =(1+\sqrt{1-4\alpha ^2})/2\) and observe that the matrices
$$\begin{aligned} X_i = \mathrm {Diag}(\sqrt{\beta } \, e_i + \sqrt{1-\beta }\, e_{i+1}) \in \mathrm {S}^5_+, \quad i\in [5], \quad (\text {with }e_6 := e_1), \end{aligned}$$
form a factorization of \(M(\alpha )\). As \(M(\alpha )\) is supported by a cycle, we have \(M(\alpha )\in \mathrm {CS}_{+}^5\) if and only if \(M(\alpha )\in \hbox {CP}^5\) [50]. Thus, \(M(\alpha ) \in \mathrm {CS}_{+}^5\) if and only if \(0 \le \alpha \le 1/2\).
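This factorization is easy to verify numerically; the following small check (ours) uses \(\alpha = 0.3\).

```python
import numpy as np

# Numerical check (ours) that the diagonal matrices X_i above form a Gram factorization of M(alpha).
alpha = 0.3
beta = (1 + np.sqrt(1 - 4 * alpha ** 2)) / 2

X = []
for i in range(5):
    v = np.zeros(5)
    v[i] = np.sqrt(beta)
    v[(i + 1) % 5] = np.sqrt(1 - beta)        # with e_6 := e_1
    X.append(np.diag(v))

G = np.array([[np.trace(X[i] @ X[j]) for j in range(5)] for i in range(5)])

M_alpha = np.eye(5)
for i in range(5):
    M_alpha[i, (i + 1) % 5] = M_alpha[(i + 1) % 5, i] = alpha

print(np.allclose(G, M_alpha))                # True
```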
By using its formulation in Proposition 3, we can use the above factorization to derive the inequality \({\xi _{*}^{\mathrm {cpsd}}}(M(1/2))\le 5/2\). However, using a semidefinite programming solver, we see that
$$\begin{aligned} {\xi _{2,V}^{\mathrm {cpsd}}}(M(1/2)) = 5, \end{aligned}$$
where V is the set containing the vector \((1,-1,1,-1,1)\) and its cyclic shifts. Hence, the bound \({\xi _{2,V}^{\mathrm {cpsd}}}(M(1/2))\) is tight: It certifies \({{\,\mathrm{cpsd-rank}\,}}_\mathbb {C}(M(1/2))=5\), while the other known bounds, the rank bound \(\sqrt{\mathrm {rank}(M(1/2))}\) and the analytic bound (18), only give \({{\,\mathrm{cpsd-rank}\,}}_\mathbb {C}(M(1/2)) \ge 3\).
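For comparison, the two weaker bounds for \(M(1/2)\) are easy to evaluate; the following small check is ours.

```python
import numpy as np

# The previously known lower bounds for M(1/2): sqrt(rank) and the analytic bound (18).
M = np.eye(5)
for i in range(5):
    M[i, (i + 1) % 5] = M[(i + 1) % 5, i] = 0.5

rank_bound = np.sqrt(np.linalg.matrix_rank(M))               # sqrt(5) ~ 2.236
analytic_bound = np.sum(np.sqrt(np.diag(M))) ** 2 / M.sum()  # 25 / 10 = 2.5

print(np.ceil(rank_bound), np.ceil(analytic_bound))          # 3.0 3.0
```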
We now observe that there exist \(0<\varepsilon ,\delta <1/2\) such that \(\hbox {cpsd-rank}_\mathbb {C}(M(\alpha )) = 5\) for all \( \alpha \in [0,\varepsilon ] \cup [\delta ,1/2]\). Indeed, this follows from the fact that \({\xi _{1}^{\mathrm {cpsd}}}(M(0)) = 5\) (by Lemma 6), the above result that \({\xi _{2,V}^{\mathrm {cpsd}}}(M(1/2)) = 5\), and the lower semicontinuity of \(\alpha \mapsto {\xi _{2,V}^{\mathrm {cpsd}}}(M(\alpha ))\), which is shown in Lemma 7 below.
As the matrices \(M(\alpha )\) are nonsingular, the above factorization shows that their cp-rank is equal to 5 for all \(\alpha \in [0,1/2]\); whether they all have \(\hbox {cpsd-rank}\) equal to 5 is not known.
Boosting the Bounds
In this section, we propose some additional constraints that can be added to strengthen the bounds \({\xi _{t,V}^{\mathrm {cpsd}}}(A)\) for finite t. These constraints may shrink the feasibility region of \({\xi _{t,V}^{\mathrm {cpsd}}}(A)\) for \(t \in \mathbb {N}\), but they are redundant for \(t\in \{\infty ,*\}\). The latter is shown using the reformulation of the parameters \({\xi _{\infty ,V}^{\mathrm {cpsd}}}(A)\) and \({\xi _{*,V}^{\mathrm {cpsd}}}(A)\) in terms of \(C^*\)-algebras.
We first mention how to construct localizing constraints of "bilinear type", inspired by the work of Berta, Fawzi and Scholz [7]. Note that, as with the localizing constraints, these bilinear constraints can be modeled as semidefinite constraints.
Let \(A\in \mathrm {CS}_{+}^n\), \(t \in \mathbb {N}\cup \{\infty , *\}\), and let \(\{g,g'\}\) be localizing for A. If we add the constraints
$$\begin{aligned} L(p^*gpg')\ge 0 \quad \text {for} \quad p\in \mathbb {R}\langle \mathbf{x}\rangle \quad \text {with} \quad \deg (p^*gpg')\le 2t \end{aligned}$$
to \({\xi _{t,V}^{\mathrm {cpsd}}}(A)\), then we still get a lower bound on \({{\,\mathrm{cpsd-rank_{\mathbb {C}}}\,}}(A)\). However, the constraints (16) are redundant for \({\xi _{\infty ,V}^{\mathrm {cpsd}}}(A)\) and \({\xi _{*,V}^{\mathrm {cpsd}}}(A)\) when \(g,g' \in {{\mathscr {M}}}(S_{A,V}^{\mathrm {cpsd}})\).
Let \(\mathbf {X}\in (\mathrm {H}^d_+)^n\) be a Gram decomposition of A, and let \(L =L_\mathbf {X}\) be the real part of the trace evaluation at \(\mathbf {X}\). Then, \(p(\mathbf {X})^* g(\mathbf {X}) p(\mathbf {X})\succeq 0\) and \(g'(\mathbf {X})\succeq 0\), and thus
$$\begin{aligned} L(p^*gpg') =\text {Re}( {{\,\mathrm{Tr}\,}}( p(\mathbf {X})^* g(\mathbf {X}) p(\mathbf {X}) g'(\mathbf {X})))\ge 0. \end{aligned}$$
So by adding the constraints (16) we still get a lower bound on \({{\,\mathrm{cpsd-rank_{\mathbb {C}}}\,}}(A)\).
To show that the constraints (16) are redundant for \({\xi _{t,V}^{\mathrm {cpsd}}}(A)\) and \({\xi _{*,V}^{\mathrm {cpsd}}}(A)\) when \(g,g'\in {{\mathscr {M}}}(S_{A,V}^{\mathrm {cpsd}})\), we let \(t\in \{\infty ,*\}\) and assume L is feasible for \({\xi _{t,V}^{\mathrm {cpsd}}}(A)\). By Theorem 1 there exist a unital \(C^*\)-algebra \({{\mathscr {A}}}\) with tracial state \(\tau \) and \(\mathbf {X}\in {\mathscr {D}}(S_{A,V}^{\mathrm {cpsd}})\) such that \(L(p)=L(1) \tau (p(\mathbf {X}))\) for all \(p\in \mathbb {R}\langle \mathbf{x}\rangle \). Since \(g,g' \in {{\mathscr {M}}}(S_{A,V}^{\mathrm {cpsd}})\) we know that \(g(\mathbf {X}), g'(\mathbf {X})\) are positive elements in \({{\mathscr {A}}}\), so \(g(\mathbf {X}) = a^* a\) and \(g'(\mathbf {X}) = b^* b\) for some \(a,b \in {{\mathscr {A}}}\). Then, we have
$$\begin{aligned} L(p^* g pg')&= L(1) \, \tau (p^*(\mathbf {X}) \, g(\mathbf {X}) \, p(\mathbf {X}) \, g'(\mathbf {X}) ) \\&= L(1) \, \tau (p^*(\mathbf {X}) \, a^* a \, p(\mathbf {X}) \, b^* b) \\&= L(1) \, \tau ((a \, p(\mathbf {X}) \, b^*)^* a \, p(\mathbf {X}) \, b^*) \ge 0, \end{aligned}$$
where we use that \(\tau \) is a positive tracial state on \({{\mathscr {A}}}\). \(\square \)
Second, we show how to use zero entries in A and vectors in the kernel of A to enforce new constraints on \({\xi _{t,V}^{\mathrm {cpsd}}}(A)\).
Let \(A\in \mathrm {CS}_{+}^n\) and \(t \in \mathbb {N}\cup \{\infty , *\}\). If we add the constraint
$$\begin{aligned} L=0 \quad \text { on } \quad {\mathscr {I}}_{2t}\left( \big \{\sum _{i=1}^nv_ix_i: v\in \ker A\big \} \cup \big \{x_ix_j: A_{ij}=0 \big \} \right) \end{aligned}$$
to \({\xi _{t,V}^{\mathrm {cpsd}}}(A)\), then we still get a lower bound on \({{\,\mathrm{cpsd-rank_{\mathbb {C}}}\,}}(A)\). Moreover, these constraints are redundant for \({\xi _{\infty ,V}^{\mathrm {cpsd}}}(A)\) and \({\xi _{*,V}^{\mathrm {cpsd}}}(A)\).
Let \(\mathbf {X}\in (\mathrm {H}^d_+)^n\) be a Gram factorization of A and let \(L_\mathbf {X}\) be as in (5). If \(Av=0\), then \(0=v^\textsf {T}Av = {{\,\mathrm{Tr}\,}}((\sum _{i=1}^n v_iX_i)^2)\) and thus \(\sum _{i=1}^nv_iX_i=0\). Hence \(L_\mathbf {X}((\sum _{i=1}^nv_ix_i)p)=\mathrm {Re}({{\,\mathrm{Tr}\,}}((\sum _{i=1}^nv_iX_i)p(\mathbf {X})))=0\). If \(A_{ij}=0\), then \({{\,\mathrm{Tr}\,}}(X_iX_j)=0\), which implies \(X_iX_j=0\), since \(X_i\) and \(X_j\) are positive semidefinite. Hence, \(L_\mathbf {X}(x_ix_jp)=\text {Re}({{\,\mathrm{Tr}\,}}(X_iX_jp(\mathbf {X})))=0\). Therefore, adding the constraints (17) still lower bounds \({{\,\mathrm{cpsd-rank_{\mathbb {C}}}\,}}(A)\).
As in the proof of the previous lemma, if \(t \in \{\infty ,*\}\) and L is feasible for \({\xi _{t,V}^{\mathrm {cpsd}}}(A)\) then, by Theorem 1, there exist a unital \(C^*\)-algebra \({{\mathscr {A}}}\) with tracial state \(\tau \) and \(\mathbf {X}\) in \({\mathscr {D}}(S_{A,V}^{\mathrm {cpsd}})\) such that \(L(p)=L(1) \tau (p(\mathbf {X}))\) for all \(p\in \mathbb {R}\langle \mathbf{x}\rangle \). Moreover, by Lemma 12, we may assume \(\tau \) to be faithful. For a vector v in the kernel of A, we have \(0 = v^\textsf {T}A v = L((\sum _i v_i x_i)^2) = L(1) \tau ( (\sum _i v_i X_i)^2)\), and hence, since \(\tau \) is faithful, \(\sum _i v_i X_i = 0\) in \({{\mathscr {A}}}\). It follows that \(L(p (\sum _i v_i x_i)) = L(1) \tau (p(\mathbf {X}) \, 0) = 0\) for all \(p \in \mathbb {R}\langle \mathbf{x}\rangle \). Analogously, if \(A_{ij}=0\), then \(L(x_ix_j)=0\) implies \(\tau (X_iX_j)=0\) and thus \(X_iX_j=0\), since \(X_i, X_j\) are positive in \({{\mathscr {A}}}\) and \(\tau \) is faithful. This implies \(L(p x_i x_j) = 0\) for all \(p \in \mathbb {R}\langle \mathbf{x}\rangle \). This shows that the constraints (17) are redundant. \(\square \)
Note that the constraints \(L(p \, (\sum _{i=1}^nv_ix_i))=0\) for \(p\in \mathbb {R}\langle \mathbf{x}\rangle _t,\) which are implied by (17), are in fact redundant: if \(v \in \ker (A)\), then the vector obtained by extending v with zeros belongs to \(\ker (M_t(L))\), since \(M_t(L)\succeq 0\). Also, for an implementation of \({\xi _{t}^{\mathrm {cpsd}}}(A)\) with the additional constraints (17), it is more efficient to index the moment matrices with a basis for \(\mathbb {R}\langle \mathbf{x}\rangle _{t}\) modulo the ideal \({\mathscr {I}}_t\big (\{ \sum _i v_i x_i: v \in \ker (A)\} \cup \{x_i x_j : A_{ij} = 0\}\big )\).
Additional Properties of the Bounds
Here, we list some additional properties of the parameters \({\xi _{t}^{\mathrm {cpsd}}}(A)\) for \(t \in \mathbb {N}\cup \{\infty , *\}\). First we state some properties for which the proofs are immediate and thus omitted.
Suppose \(A\in \mathrm {CS}_{+}^n\) and \(t \in \mathbb {N}\cup \{\infty ,*\}\).
(1) If P is a permutation matrix, then \({\xi _{t}^{\mathrm {cpsd}}}(A) = {\xi _{t}^{\mathrm {cpsd}}}(P^\textsf {T}A P)\).
(2) If B is a principal submatrix of A, then \({\xi _{t}^{\mathrm {cpsd}}}(B) \le {\xi _{t}^{\mathrm {cpsd}}}(A)\).
(3) If D is a positive definite diagonal matrix, then \({\xi _{t}^{\mathrm {cpsd}}}(A) = {\xi _{t}^{\mathrm {cpsd}}}(D A D).\)
We also have the following direct sum property, where the equality follows using the \(C^*\)-algebra reformulations as given in Propositions 1 and 2.
If \(A \in \mathrm {CS}_{+}^n\) and \(B \in \mathrm {CS}_{+}^m\), then \({\xi _{t}^{\mathrm {cpsd}}}(A\oplus B) \le {\xi _{t}^{\mathrm {cpsd}}}(A) + {\xi _{t}^{\mathrm {cpsd}}}(B)\), where equality holds for \(t \in \{\infty , *\}\).
To prove the inequality, we take \(L_A\) and \(L_B\) feasible for \({\xi _{t}^{\mathrm {cpsd}}}(A)\) and \({\xi _{t}^{\mathrm {cpsd}}}(B)\), and construct a feasible L for \({\xi _{t}^{\mathrm {cpsd}}}(A\oplus B)\) by \(L(p(\mathbf{x}, \mathbf{y})) = L_A(p(\mathbf{x}, \mathbf{0})) + L_B(p(\mathbf{0}, \mathbf{y}))\).
Now we show equality for \(t = \infty \) (\(t=*\)). By Proposition 1 (Proposition 2), \({\xi _{t}^{\mathrm {cpsd}}}(A\oplus B)\) is equal to the infimum over all \(\alpha \ge 0\) for which there exists a (finite dimensional) unital \(C^*\)-algebra \({{\mathscr {A}}}\) with tracial state \(\tau \) and \((\mathbf {X}, \mathbf{Y}) \in {{\mathscr {D}}}_{{\mathscr {A}}}(S_{A\oplus B}^{\mathrm {cpsd}})\) such that \(A = \alpha \cdot (\tau (X_iX_j))\), \(B = \alpha \cdot (\tau (Y_iY_j))\) and \((\tau (X_iY_j))=0\). This implies \(\mathbf {X}\in {{\mathscr {D}}}_{{\mathscr {A}}}(S_A^{\mathrm {cpsd}})\) and \(\mathbf Y\in {{\mathscr {D}}}_{{\mathscr {A}}}(S_B^{\mathrm {cpsd}})\). Let \(P_A\) be the projection onto the space \(\sum _i \mathrm {Im}(X_i)\) and define the linear form \(L_A \in \mathbb {R}\langle \mathbf {x}\rangle ^*\) by \(L_A(p) = \alpha \cdot \tau (p(\mathbf {X}) P_A)\). It follows that \(L_A\) is nonnegative on \({\mathscr {M}}(S_A^{\mathrm {cpsd}})\), and
$$\begin{aligned} L_A(x_ix_j) = \alpha \, \tau (X_iX_jP_A) = \alpha \, \tau (X_iX_j) = A_{ij}, \end{aligned}$$
so \(L_A\) is feasible for \({\xi _{\infty }^{\mathrm {cpsd}}}(A)\) with \(L_A(1)=\alpha \tau (P_A)\). In the same way, we consider the projection \(P_B\) onto the space \(\sum _j \mathrm {Im}(Y_j)\) and define a feasible solution \(L_B\) for \({\xi _{t}^{\mathrm {cpsd}}}(B)\) with \(L_B(1)=\alpha \tau (P_B)\). By Lemma 12 we may assume \(\tau \) to be faithful, so that positivity of \(X_i\) and \(Y_j\) together with \(\tau (X_iY_j) = 0\) implies \(X_iY_j = 0\) for all i and j, and thus \(\sum _i \mathrm {Im}(X_i) \perp \sum _j \mathrm {Im}(Y_j)\). This implies \(I \succeq P_A + P_B\) and thus \(\tau (P_A+P_B)\le \tau (1)=1\). We have
$$\begin{aligned} L_A(1) + L_B(1) = \alpha \, \tau (P_A) + \alpha \tau (P_B) \le \alpha \, \tau (1) = \alpha , \end{aligned}$$
so \({\xi _{t}^{\mathrm {cpsd}}}(A)+{\xi _{t}^{\mathrm {cpsd}}}(B) \le L_A(1)+L_B(1)\le \alpha \), completing the proof. \(\square \)
Note that the \(\hbox {cpsd-rank}\) of a matrix satisfies the same properties as those mentioned in the above two lemmas, where the inequality in Lemma 5 is always an equality: \(\hbox {cpsd-rank}_\mathbb {C}(A~\oplus ~B)=\hbox {cpsd-rank}_\mathbb {C}(A)+\hbox {cpsd-rank}_\mathbb {C}(B)\) [38, 62].
The following lemma shows that the first level of our hierarchy is at least as good as the analytic lower bound (18) on the cpsd-rank derived in [62, Theorem 10].
For any non-zero matrix \(A \in \mathrm {CS}_{+}^n\), we have
$$\begin{aligned} {\xi _{1}^{\mathrm {cpsd}}}(A) \ge \frac{\left( \sum _{i=1}^n \sqrt{A_{ii}}\right) ^2}{\sum _{i,j=1}^n A_{ij}}. \end{aligned}$$
Let L be feasible for \({\xi _{1}^{\mathrm {cpsd}}}(A)\). Since L is nonnegative on \({{\mathscr {M}}}_{2}(S_A^{\mathrm {cpsd}})\), it follows that \(L(\sqrt{A_{ii}}x_i-x_i^2)\ge 0\), implying \(\sqrt{A_{ii}} L(x_i)\ge L(x_i^2)=A_{ii}\) and thus \(L(x_i)\ge \sqrt{A_{ii}}\). Moreover, the matrix \(M_1(L)\) is positive semidefinite. By taking the Schur complement with respect to its upper left corner (indexed by 1), it follows that the matrix \(L(1)\cdot A- (L(x_i)L(x_j))\) is positive semidefinite. Hence, the sum of its entries is nonnegative, which gives \(L(1)(\sum _{i,j}A_{ij})\ge (\sum _i L(x_i))^2\ge (\sum _i \sqrt{A_{ii}})^2\) and shows the desired inequality. \(\square \)
As an application of Lemma 6, the first bound \({\xi _{1}^{\mathrm {cpsd}}}\) is exact for the \(k\times k\) identity matrix: \({\xi _{1}^{\mathrm {cpsd}}}(I_k)={{\,\mathrm{cpsd-rank_{\mathbb {C}}}\,}}(I_k)=k\). Moreover, by combining this with Lemma 4, it follows that \({\xi _{1}^{\mathrm {cpsd}}}(A)~\ge ~k\) if A contains a diagonal positive definite \(k\times k\) principal submatrix. A slightly more involved example is given by the \(5 \times 5\) circulant matrix A whose entries are given by \(A_{ij} = \cos ((i-j)4\pi /5)^2\) (\(i,j \in [5]\)); this matrix was used in [25] to show a separation between the completely positive semidefinite cone and the completely positive cone, and it was shown that \(\hbox {cpsd-rank}_\mathbb {C}(A) =2\). The analytic lower bound of [62] also evaluates to 2, hence Lemma 6 shows that our bound is tight on this example.
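As a quick numerical illustration (ours) of this evaluation of the analytic bound on the circulant matrix just described:

```python
import numpy as np

# Analytic lower bound (sum_i sqrt(A_ii))^2 / (sum_ij A_ij) for A_ij = cos((i-j) 4*pi/5)^2.
A = np.array([[np.cos((i - j) * 4 * np.pi / 5) ** 2 for j in range(5)]
              for i in range(5)])

bound = np.sum(np.sqrt(np.diag(A))) ** 2 / A.sum()
print(bound)   # 2.0 (up to rounding), matching cpsd-rank_C(A) = 2
```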
We now examine further analytic properties of the parameters \({\xi _{t}^{\mathrm {cpsd}}}(\cdot )\). For each \(r \in \mathbb {N}\), the set of matrices \(A\in \mathrm {CS}_{+}^n\) with \({{\,\mathrm{cpsd-rank_{\mathbb {C}}}\,}}(A) \le r\) is closed, which shows that the function \(A \mapsto \hbox {cpsd-rank}_\mathbb {C}(A)\) is lower semicontinuous. We now show that the functions \(A \mapsto {\xi _{t}^{\mathrm {cpsd}}}(A)\) have the same property. The other bounds defined in this paper are also lower semicontinuous, with a similar proof.
For every \(t \in \mathbb {N}\cup \{\infty \}\) and \(V \subseteq \mathbb {R}^n\), the function
$$\begin{aligned} \mathrm {S}^n \rightarrow \mathbb {R}\cup \{\infty \}, \, A \mapsto {\xi _{t,V}^{\mathrm {cpsd}}}(A) \end{aligned}$$
is lower semicontinuous.
It suffices to show the result for \(t\in \mathbb {N}\), because \({\xi _{\infty ,V}^{\mathrm {cpsd}}}(A)=\mathrm {sup}_t\, {\xi _{t,V}^{\mathrm {cpsd}}}(A)\), and the pointwise supremum of lower semicontinuous functions is lower semicontinuous. We show that the level sets \(\{A \in \mathrm {S}^n: {\xi _{t,V}^{\mathrm {cpsd}}}(A) \le r\}\) are closed. For this, we consider a sequence \((A_k)_{k\in \mathbb {N}}\) in \(\mathrm {S}^n\) converging to \(A \in \mathrm {S}^n\) such that \({\xi _{t,V}^{\mathrm {cpsd}}}(A_k) \le r\) for all k. We show that \({\xi _{t,V}^{\mathrm {cpsd}}}(A) \le r\). Let \(L_k\in \mathbb {R}\langle \mathbf{x}\rangle _{2t}^*\) be an optimal solution to \({\xi _{t,V}^{\mathrm {cpsd}}}(A_k)\). As \(L_k(1) \le r\) for all k, it follows from Lemma 13 that there is a pointwise converging subsequence of \((L_k)_k\), still denoted \((L_k)_k\) for simplicity, that has a limit \(L\in \mathbb {R}\langle \mathbf{x}\rangle _{2t}^*\) with \(L(1)\le r\). To complete the proof we show that L is feasible for \({\xi _{t,V}^{\mathrm {cpsd}}}(A)\). By the pointwise convergence of \(L_k\) to L, for every \(\varepsilon >0\), \(p \in \mathbb {R}\langle \mathbf{x}\rangle \), and \(i \in [n]\), there exists a \(K \in \mathbb {N}\) such that for all \(k \ge K\) we have
$$\begin{aligned} |L(p^* x_i p) - L_k(p^* x_i p) |&< \mathrm {min}\left\{ 1,\frac{\varepsilon }{\sqrt{A_{ii}}}\right\} , \qquad |L(p^* x_i^2 p) - L_k(p^* x_i^2 p)|< \varepsilon , \\ |\sqrt{A_{ii}} - \sqrt{(A_k)_{ii}}|&< \frac{\varepsilon }{L(p^* x_i p) + 1}. \end{aligned}$$
Hence, we have
$$\begin{aligned} L(p^*(\sqrt{A_{ii}} x_i - x_i^2) p)&= \sqrt{A_{ii}} \Big (L(p^* x_i p) - L_k(p^* x_i p) + L_k (p^* x_i p) \Big ) \\&\quad - \Big ( L(p^* x_i^2 p) -L_k(p^* x_i^2 p ) + L_k(p^* x_i^2p)\Big ) \\&\ge -2 \varepsilon + \sqrt{A_{ii}} \, L_k (p^* x_i p) - L_k(p^* x_i^2p) \\&\ge -3 \varepsilon + \sqrt{(A_k)_{ii}} \, L_k (p^* x_i p) - L_k(p^* x_i^2p) \\&= -3 \varepsilon + L_k(p^*(\sqrt{(A_k)_{ii}} \, x_i - x_i^2) p) \ge -3 \varepsilon , \end{aligned}$$
where in the second inequality we use that \(0 \le L_k(p^* x_i p) \le L(p^* x_i p) + 1\). Letting \(\varepsilon \rightarrow 0\) gives \(L(p^*(\sqrt{A_{ii}}x_i-x_i^2)p)\ge 0\).
Similarly, one can show \(L(p^*(v^\textsf {T}Av - (\sum _i v_i x_i)^2) p) \ge 0\) for \(v \in V\), \(p \in \mathbb {R}\langle \mathbf{x}\rangle \).
\(\square \)
If we restrict to completely positive semidefinite matrices with an all-ones diagonal, that is, to \(\mathrm {CS}_{+}^n \cap \mathrm {E}_n\), we can show an even stronger property. Here, \(\mathrm {E}_n\) is the elliptope, which is the set of \(n \times n\) positive semidefinite matrices with an all-ones diagonal.
For every \(t \in \mathbb {N}\cup \{\infty \}\), the function
$$\begin{aligned} \mathrm {CS}_{+}^n \cap \mathrm {E}_n \rightarrow \mathbb {R},\, A \mapsto {\xi _{t}^{\mathrm {cpsd}}}(A) \end{aligned}$$
is convex, and hence continuous on the interior of its domain.
Let \(A,B\in \mathrm {CS}_{+}^n\cap \mathrm {E}_n\) and \(0<\lambda <1\). Let \(L_A\) and \(L_B\) be optimal solutions for \({\xi _{t}^{\mathrm {cpsd}}}(A)\) and \({\xi _{t}^{\mathrm {cpsd}}}(B)\). Since the diagonals of A and B are the same, we have \(S_A^{\mathrm {cpsd}}=S_B^{\mathrm {cpsd}}\). So the linear functional \(L=\lambda L_A+(1-\lambda )L_B\) is feasible for \({\xi _{t}^{\mathrm {cpsd}}}(\lambda A+(1-\lambda )B)\), hence \( {\xi _{t}^{\mathrm {cpsd}}}(\lambda A+(1-\lambda )B)\le \lambda L_A(1)+(1-\lambda )L_B(1) = \lambda {\xi _{t}^{\mathrm {cpsd}}}(A)+ (1-\lambda ){\xi _{t}^{\mathrm {cpsd}}}(B). \) \(\square \)
In this example, we show that for \(t \ge 1\), the function
$$\begin{aligned} \mathrm {CS}_{+}^n \rightarrow \mathbb {R}, \, A \mapsto {\xi _{t}^{\mathrm {cpsd}}}(A) \end{aligned}$$
is not continuous. For this, we consider the matrices
$$\begin{aligned} A_k = \begin{pmatrix} 1/k &{} 0 \\ 0 &{} 1 \end{pmatrix}\in \mathrm {CS}_{+}^2, \end{aligned}$$
with \({{\,\mathrm{cpsd-rank}\,}}_\mathbb {C}(A_k) = 2\) for all \(k\ge 1\). As \(A_k\) is diagonal positive definite, we have \({\xi _{t}^{\mathrm {cpsd}}}(A_k) = 2\) for all \(t,k\ge 1\), while \({\xi _{t}^{\mathrm {cpsd}}}(\lim _{k \rightarrow \infty } A_k) = 1\). This argument extends to \(\mathrm {CS}_{+}^n\) with \(n > 2\). This example also shows that the first level of the hierarchy \({\xi _{1}^{\mathrm {cpsd}}}(\cdot )\) can be strictly better than the analytic lower bound (18) of [62].
In this example, we determine \({\xi _{t}^{\mathrm {cpsd}}}(A)\) for all \(t \ge 1\) and \(A \in \mathrm {CS}_{+}^2\). In view of Lemma 4(3), we only need to find \({\xi _{t}^{\mathrm {cpsd}}}(A(\alpha ))\) for \(0 \le \alpha \le 1\), where \( A(\alpha )= \bigl ({\begin{matrix} 1 &{} \alpha \\ \alpha &{} 1\end{matrix}}\bigr ). \)
The first bound \({\xi _{1}^{\mathrm {cpsd}}}(A(\alpha ))\) is equal to the analytic bound \(2/(\alpha +1)\) from (18): the inequality \({\xi _{1}^{\mathrm {cpsd}}}(A(\alpha )) \ge 2/(\alpha +1)\) is Lemma 6, and the reverse inequality follows from the fact that the linear form L given by \(L(x_i x_j) = A(\alpha )_{ij}\), \(L(x_1)=L(x_2)=1\) and \(L(1)=2/(\alpha +1)\) is feasible for \({\xi _{1}^{\mathrm {cpsd}}}(A(\alpha ))\).
For \(t \ge 2\), we show \({\xi _{t}^{\mathrm {cpsd}}}(A(\alpha )) = 2-\alpha \). By the above, this is true for \(\alpha = 0\) and \(\alpha = 1\), and in Example 1 we show \({\xi _{t}^{\mathrm {cpsd}}}(A(1/2)) =3/2\) for \(t\ge 2\). The claim then follows since the function \(\alpha \mapsto {\xi _{t}^{\mathrm {cpsd}}}(A(\alpha ))\) is convex by Lemma 8: a convex function on [0, 1] that takes the values 2, 3/2 and 1 at \(\alpha = 0, 1/2, 1\) must coincide with the linear function \(2-\alpha \) on the whole interval.
Lower Bounds on the Completely Positive Rank
The best current approach for lower bounding the completely positive rank of a matrix is due to Fawzi and Parrilo [27]. Their approach relies on the atomicity of the completely positive rank, that is, the fact that \(\hbox {cp-rank}(A)\) is the smallest r for which A admits an atomic decomposition \(A=\sum _{k=1}^r v_k v_k^\textsf {T}\) with nonnegative vectors \(v_k\). In particular, if \(\hbox {cp-rank}(A)=r\), then A / r can be written as a convex combination of r rank one positive semidefinite matrices \(v_k v_k^\textsf {T}\) that satisfy \(0 \le v_k v_k^\textsf {T}\le A\) and \(v_k v_k^\textsf {T}\preceq A\). Based on this observation, Fawzi and Parrilo define the parameter
$$\begin{aligned} \tau _\mathrm {cp}(A) \!=\! \mathrm {min}\Big \{ \alpha : \alpha \!\ge \!0,\, A \in \alpha \cdot \mathrm {conv} \big \{ R \in \mathrm {S}^n : 0 \!\le \!R \le A, \,R \preceq A,\, {{\,\mathrm{rank}\,}}(R) \le \! 1\big \}\Big \}, \end{aligned}$$
as lower bound for \(\hbox {cp-rank}(A)\). They also define the semidefinite programming parameter
$$\begin{aligned} \tau _{\mathrm {cp}}^{\mathrm {sos}}(A) = \mathrm {min} \big \{ \alpha : \;&\alpha \in \mathbb {R}, \, X \in \mathrm {S}^{n^2},\\&\begin{pmatrix} \alpha &{} \text {vec}(A)^\textsf {T}\\ \text {vec}(A) &{} X \end{pmatrix} \succeq 0,\\&X_{(i,j),(i,j)} \le A_{ij}^2 \quad \text {for} \quad 1 \le i,j \le n, \\&X_{(i,j),(k,l)} = X_{(i,l),(k,j)} \quad \text {for} \quad 1 \le i< k \le n, \; 1 \le j < l \le n,\\&X \preceq A \otimes A\big \}, \end{aligned}$$
as an efficiently computable relaxation of \(\tau _\mathrm {cp}(A)\), and they show \({{\,\mathrm{rank}\,}}(A) \le \tau _{\mathrm {cp}}^{\mathrm {sos}}(A)\). Therefore, we have
$$\begin{aligned} {{\,\mathrm{rank}\,}}(A) \le \tau _{\mathrm {cp}}^{\mathrm {sos}}(A) \le \tau _\mathrm {cp}(A)\le \hbox {cp-rank}(A). \end{aligned}$$
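To make the structure of this SDP concrete, here is a minimal sketch (ours, not code from [27]) in Python with cvxpy. The function name and the slack variable S, which we use to encode the constraint \(X \preceq A \otimes A\), are our own choices.

```python
import cvxpy as cp
import numpy as np

def tau_cp_sos(A):
    # Sketch (ours) of the SDP defining tau_cp^sos displayed above.
    n = A.shape[0]
    N = n * n
    idx = lambda i, j: i * n + j                       # position of (i, j) in vec(A)

    M = cp.Variable((N + 1, N + 1), symmetric=True)    # [[alpha, vec(A)^T], [vec(A), X]]
    S = cp.Variable((N, N), symmetric=True)            # psd slack encoding X <= A (x) A
    X = M[1:, 1:]

    constraints = [M >> 0, S >> 0,
                   M[0, 1:] == A.flatten(),            # off-diagonal block equals vec(A)
                   X + S == np.kron(A, A)]             # X + S = A (x) A with S psd
    for i in range(n):
        for j in range(n):
            constraints.append(X[idx(i, j), idx(i, j)] <= A[i, j] ** 2)
            for k in range(i + 1, n):
                for l in range(j + 1, n):
                    constraints.append(X[idx(i, j), idx(k, l)] == X[idx(i, l), idx(k, j)])

    cp.Problem(cp.Minimize(M[0, 0]), constraints).solve()
    return M[0, 0].value
```

For instance, since \({{\,\mathrm{rank}\,}}(I_3) = \hbox {cp-rank}(I_3) = 3\), the chain (22) forces \(\tau _{\mathrm {cp}}^{\mathrm {sos}}(I_3) = 3\), and tau_cp_sos(np.eye(3)) should return a value close to 3.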
Instead of the atomic point of view, here we take the matrix factorization perspective, which allows us to obtain bounds by adapting the techniques from Sect. 2 to the commutative setting. Indeed, we may view a factorization \(A =(a_i^\mathsf{T}a_j)\) by nonnegative vectors as a factorization by diagonal (and thus pairwise commuting) positive semidefinite matrices.
Before presenting the details of our hierarchy of lower bounds, we mention some of our results in order to make the link to the parameters \(\tau _{\mathrm {cp}}^{\mathrm {sos}}(A)\) and \( \tau _\mathrm {cp}(A)\). The direct analog of \(\{{\xi _{t}^{\mathrm {cpsd}}}(A)\}\) in the commutative setting leads to a hierarchy that does not converge to \(\tau _{\mathrm {cp}}(A)\), but we provide two approaches to strengthen it that do converge to \(\tau _{\mathrm {cp}}(A)\). The first approach is based on a generalization of the tensor constraints in \(\tau _{\mathrm {cp}}^{\mathrm {sos}}(A)\). We also provide a computationally more efficient version of these tensor constraints, leading to a hierarchy whose second level is at least as good as \(\tau _{\mathrm {cp}}^\mathrm {sos}(A)\) while being defined by a smaller semidefinite program. The second approach relies on adding localizing constraints for vectors in the unit sphere as in Sect. 2.2.
The following hierarchy is a commutative analog of the hierarchy from Sect. 2, where we may now add the localizing polynomials \(A_{ij}-x_ix_j\) for the pairs \(1 \le i < j \le n\), which was not possible in the noncommutative setting of the completely positive semidefinite rank. For each \(t \in \mathbb {N}\cup \{\infty \}\), we consider the semidefinite program
$$\begin{aligned} {\xi _{t}^{\mathrm {cp}}}(A) = \mathrm {min} \big \{ L(1) : \;&L \in \mathbb {R}[x_1,\ldots ,x_n]_{2t}^*,\\&L(x_ix_j) = A_{ij} \quad \text {for} \quad i,j \in [n],\\&L \ge 0 \quad \text {on} \quad {\mathscr {M}}_{2t}(S_A^{\mathrm {cp}}) \big \}, \end{aligned}$$
where
$$\begin{aligned} S_A^{\mathrm {cp}}= \big \{\sqrt{A_{ii}}x_i - x_i^2 : i \in [n]\big \} \cup \big \{A_{ij} - x_i x_j : 1 \le i < j \le n\big \}. \end{aligned}$$
We additionally define \({\xi _{*}^{\mathrm {cp}}}(A)\) by adding the constraint \({{\,\mathrm{rank}\,}}(M(L)) < \infty \) to \({\xi _{\infty }^{\mathrm {cp}}}(A)\). We also consider the strengthening \({\xi _{t,\dagger }^{\mathrm {cp}}}(A)\), where we add to \({\xi _{t}^{\mathrm {cp}}}(A)\) the positivity constraints
$$\begin{aligned} L(gu) \ge 0 \quad \text {for} \quad g \in \{1\} \cup S_A^{\mathrm {cp}}\quad \text {and} \quad u \in [\mathbf{x}]_{2t-\deg (g)} \end{aligned}$$
and the tensor constraints
$$\begin{aligned} (L((ww')^c))_{w,w' \in \langle \mathbf {x}\rangle _{=l}} \preceq A^{\otimes l} \quad \text {for all integers } \quad 2 \le l \le t, \end{aligned}$$
which generalize the case \(l=2\) used in the relaxation \(\tau _\mathrm {cp}^\mathrm {sos}(A)\). Here, for a word \(w \in \langle \mathbf {x}\rangle \), we denote by \(w^c\) the corresponding (commutative) monomial in \([\mathbf {x}]\). The tensor constraints (20) involve matrices indexed by the noncommutative words of length exactly l. In Sect. 3.4, we show a more economical way to rewrite these constraints as \( (L(mm'))_{m,m' \in [\mathbf {x}]_{=l}} \preceq Q_l A^{\otimes l} Q_l^\textsf {T}, \) thus involving smaller matrices indexed by commutative words of degree l.
Note that, as before, we can strengthen the bounds by adding other localizing polynomials to the set \(S_A^{\mathrm {cp}}\). In particular, we can follow the approach of Sect. 2.2. Another possibility is to add localizing constraints specific to the commutative setting: we can add each monomial \(u \in [\mathbf{x}]\) to \(S_A^{\mathrm {cp}}\) (see Sect. 3.5.2 for an example).
The bounds \({\xi _{t}^{\mathrm {cp}}}(A)\) and \({\xi _{t,\dagger }^{\mathrm {cp}}}(A)\) are monotonically nondecreasing in t, and they are invariant under simultaneously permuting the rows and columns of A and under scaling a row and column of A by a positive number. In Propositions 6 and 7, we show
$$\begin{aligned} \tau _{\mathrm {cp}}^{\mathrm {sos}}(A)\le {\xi _{t,\dagger }^{\mathrm {cp}}}(A)\le \tau _{\mathrm {cp}}(A) \quad \text {for} \quad t \ge 2, \end{aligned}$$
and in Proposition 10, we show the equality \({\xi _{*,\dagger }^{\mathrm {cp}}}(A) = \tau _{\mathrm {cp}}(A)\).
Comparison to \(\tau _\mathrm {cp}^\mathrm {sos}(A)\)
We first show that the semidefinite programs defining \({\xi _{t,\dagger }^{\mathrm {cp}}}(A)\) are valid relaxations for the completely positive rank. More precisely, we show that they lower bound \(\tau _{\mathrm {cp}}(A)\).
For \(A \in \hbox {CP}^n\) and \(t \in \mathbb {N}\cup \{\infty ,*\}\), we have \({\xi _{t,\dagger }^{\mathrm {cp}}}(A) \le \tau _{\mathrm {cp}}(A)\).
It suffices to show the inequality for \(t=*\). For this, consider a decomposition \(A=\alpha \, \sum _{k=1}^r \lambda _k R_k\), where \(\alpha \ge 1\), \(\lambda _k>0\), \(\sum _{k=1}^r \lambda _k = 1\), \(0\le R_k\le A\), \(R_k\preceq A\), and \({{\,\mathrm{rank}\,}}R_k= 1\). There are nonnegative vectors \(v_k\) such that \(R_k=v_k v_k^\textsf {T}\). Define the linear map \(L\in \mathbb {R}[\mathbf{x}]^*\) by \(L=\alpha \sum _{k=1}^r \lambda _k L_{v_k}\), where \(L_{v_k}\) is the evaluation at \(v_k\) mapping any polynomial \(p\in \mathbb {R}[\mathbf{x}]\) to \(p(v_k)\).
The equality \((L(x_ix_j))=A\) follows from the identity \(A=\alpha \sum _{k=1}^r \lambda _k R_k\). The constraints \( L((\sqrt{A_{ii}} x_i - x_i^2) p^2) \ge 0 \) follow because
$$\begin{aligned} L_{v_k}\big ((\sqrt{A_{ii}} x_i - x_i^2) p^2\big ) = (\sqrt{A_{ii}} (v_k)_i - (v_k)_i^2) p(v_k)^2 \ge 0, \end{aligned}$$
where we use that \((v_k)_i \ge 0\) and \((v_k)_i^2 = (R_k)_{ii} \le A_{ii}\) imply \((v_k)_i^2 \le (v_k)_i \sqrt{A_{ii}}\). The constraints \( L((A_{ij} - x_ix_j) p^2) \ge 0 \) and
$$\begin{aligned} L(gu) \ge 0 \quad \text {for} \quad g \in \{1\} \cup S_A^{\mathrm {cp}}\quad \text {and} \quad u \in [\mathbf{x}] \end{aligned}$$
follow in a similar way.
It remains to be shown that \(X_l \preceq A^{\otimes l}\) for all l, where we set \(X_l = (L(uv))_{u,v\in \langle \mathbf{x}\rangle _{=l}}\). Note that \(X_1=A\). We adapt the argument used in [27] to show \(X_l \preceq A^{\otimes l}\) using induction on \(l \ge 2\). Suppose \(A^{\otimes (l-1)}\succeq X_{l-1}\). Combining \(A-R_k\succeq 0\) and \(R_k\succeq 0\) gives \((A-R_k)\otimes R_k^{\otimes (l-1)}\succeq 0\) and thus \(A\otimes R_k^{\otimes (l-1)}\succeq R_k^{\otimes l}\) for each k. Scale by factor \(\alpha \lambda _k\) and sum over k to get
$$\begin{aligned} A\otimes X_{l-1}=\sum _k \alpha \lambda _k A\otimes R_k^{\otimes (l-1)} \succeq \sum _k \alpha \lambda _k R_k^{\otimes l}= X_l. \end{aligned}$$
Finally, combining with \(A^{\otimes (l-1)}-X_{l-1}\succeq 0\) and \(A\succeq 0\), we obtain
$$\begin{aligned} A^{\otimes l} =A\otimes (A^{\otimes (l-1)}-X_{l-1})+ A\otimes X_{l-1} \succeq A\otimes X_{l-1}\succeq X_l. \end{aligned}$$
Now we show that the new parameter \({\xi _{2,\dagger }^{\mathrm {cp}}}(A)\) is at least as good as \(\tau _\mathrm {cp}^\mathrm {sos}(A)\). Later in Sect. 3.5.1, we will give an example where the inequality is strict.
For \(A \in \hbox {CP}^n\) we have \( \tau _{\mathrm {cp}}^{\mathrm {sos}}(A) \le {\xi _{2,\dagger }^{\mathrm {cp}}}(A). \)
Let L be feasible for \({\xi _{2,\dagger }^{\mathrm {cp}}}(A)\). We will construct a feasible solution to the program defining \(\tau _{\mathrm {cp}}^{\mathrm {sos}}(A)\) with objective value L(1), which implies \(\tau _{\mathrm {cp}}^{\mathrm {sos}}(A)\le L(1)\) and thus the desired inequality. For this set \(\alpha = L(1)\) and define the symmetric \(n^2 \times n^2\) matrix X by \( X_{(i,j),(k,l)} =L(x_ix_jx_kx_l)\) for \(i,j,k,l \in [n]\). Then, the matrix
$$\begin{aligned} M:=\begin{pmatrix} \alpha &{} \text {vec}(A)^\textsf {T}\\ \text {vec}(A) &{} X \end{pmatrix} \end{aligned}$$
is positive semidefinite. This follows because M is obtained from the principal submatrix of \(M_2(L)\) indexed by the monomials 1 and \(x_ix_j\) (\(1\le i\le j\le n\)) by duplicating, for \(1 \le i < j \le n\), the row/column indexed by \(x_i x_j\) to obtain the row/column indexed by \(x_j x_i\); such duplication preserves positive semidefiniteness.
We have \(L((A_{ij} - x_ix_j)x_ix_j) \ge 0\) for all i, j: For \(i \ne j\) this follows using the constraint \(L((A_{ij} - x_ix_j)u) \ge 0\) with \(u = x_ix_j\) (from (19)), and for \(i = j\) this follows from
$$\begin{aligned} L((A_{ii} -x_i^2) x_i^2) = L\left( \Big ((\sqrt{A_{ii}} - x_i)^2 + 2 (\sqrt{A_{ii}} x_i - x_i^2)\Big )\, x_i^2\right) \ge 0, \end{aligned}$$
which holds because of (10), the constraint \(L(p^2) \ge 0\) for \(\deg (p)\le 2\) (applied to \(p=\sqrt{A_{ii}}\,x_i - x_i^2\), using \((\sqrt{A_{ii}} - x_i)^2x_i^2 = (\sqrt{A_{ii}}\,x_i - x_i^2)^2\)), and the constraint \(L((\sqrt{A_{ii}} x_i - x_i^2)\,x_i^2) \ge 0\) from (19). Using \(L(x_ix_j) = A_{ij}\), we get \( X_{(i,j),(i,j)} = L(x_i^2x_j^2) \le A_{ij}^2. \) We also have \( X_{(i,j),(k,l)} = L(x_ix_jx_kx_l) = L(x_ix_lx_kx_j) = X_{(i,l),(k,j)}, \) and the constraint \((L(uv))_{u,v \in \langle \mathbf {x}\rangle _{=2}} \preceq A^{\otimes 2}\) implies \(X \preceq A \otimes A\). \(\square \)
Convergence of the Basic Hierarchy
We first summarize convergence properties of the hierarchy \({\xi _{t}^{\mathrm {cp}}}(A)\). Note that, unlike in Sect. 2, where we can only claim the inequality \({\xi _{\infty }^{\mathrm {cpsd}}}(A)\le {\xi _{*}^{\mathrm {cpsd}}}(A)\), here we can show the equality \({\xi _{\infty }^{\mathrm {cp}}}(A) = {\xi _{*}^{\mathrm {cp}}}(A)\). This is because we can use Theorem 7, which allows certain truncated linear functionals to be represented by finite atomic measures.
Let \(A \in \hbox {CP}^n\). For every \(t \in \mathbb {N}\cup \{\infty , *\}\) the optimum in \({\xi _{t}^{\mathrm {cp}}}(A)\) is attained, and \({\xi _{t}^{\mathrm {cp}}}(A) \rightarrow {\xi _{\infty }^{\mathrm {cp}}}(A) = {\xi _{*}^{\mathrm {cp}}}(A)\) as \(t\rightarrow \infty \). If \({\xi _{t}^{\mathrm {cp}}}(A)\) admits a flat optimal solution, then \({\xi _{t}^{\mathrm {cp}}}(A) = {\xi _{\infty }^{\mathrm {cp}}}(A)\). Moreover, \({\xi _{\infty }^{\mathrm {cp}}}(A) = {\xi _{*}^{\mathrm {cp}}}(A)\) is the minimum value of L(1) taken over all conic combinations \(L\) of evaluations at elements of \(D(S_A^{\mathrm {cp}})\) satisfying \(A = (L(x_ix_j))\).
We may assume \(A\ne 0\). Since \(\sqrt{A_{ii}} x_i -x_i^2 \in S_A^{\mathrm {cp}}\) for all i, using (10) we obtain that \(\mathrm {Tr}(A) -\sum _i x_i^2 \in {{\mathscr {M}}}_2(S_A^{\mathrm {cp}})\). By adapting the proof of Proposition 1 to the commutative setting, we see that the optimum in \({\xi _{t}^{\mathrm {cp}}}(A)\) is attained for \(t \in \mathbb {N}\cup \{\infty \}\), and \({\xi _{t}^{\mathrm {cp}}}(A) \rightarrow {\xi _{\infty }^{\mathrm {cp}}}(A)\) as \(t\rightarrow \infty \).
We now show the inequality \({\xi _{*}^{\mathrm {cp}}}(A)\le {\xi _{\infty }^{\mathrm {cp}}}(A)\), which implies that equality holds. For this, let L be optimal for \({\xi _{\infty }^{\mathrm {cp}}}(A)\). By Theorem 7, the restriction of L to \(\mathbb {R}[\mathbf {x}]_2\) extends to a conic combination of evaluations at points in \(D(S_A^{\mathrm {cp}})\). It follows that this extension is feasible for \({\xi _{*}^{\mathrm {cp}}}(A)\) with the same objective value. This shows that \({\xi _{*}^{\mathrm {cp}}}(A)\le {\xi _{\infty }^{\mathrm {cp}}}(A)\), that the optimum in \({\xi _{*}^{\mathrm {cp}}}(A)\) is attained, and that \({\xi _{*}^{\mathrm {cp}}}(A)\) is the minimum of L(1) over all conic combinations \(L\) of evaluations at elements of \(D(S_A^{\mathrm {cp}})\) such that \(A = (L(x_ix_j))\). Finally, by Theorem 6, we have \({\xi _{t}^{\mathrm {cp}}}(A) = {\xi _{\infty }^{\mathrm {cp}}}(A)\) if \({\xi _{t}^{\mathrm {cp}}}(A)\) admits a flat optimal solution. \(\square \)
Next, we give a reformulation for the parameter \({\xi _{*}^{\mathrm {cp}}}(A)\), which is similar to the formulation of \(\tau _\mathrm {cp}(A)\), although it lacks the constraint \(R \preceq A\) which is present in \(\tau _\mathrm {cp}(A)\).
$$\begin{aligned} {\xi _{*}^{\mathrm {cp}}}(A) = \mathrm {min}\Big \{ \alpha : \alpha \ge 0,\, A \in \alpha \cdot \mathrm {conv} \big \{ R \in \mathrm {S}^n : 0 \le R \le A, \, {{\,\mathrm{rank}\,}}(R) \le 1\big \}\Big \}. \end{aligned}$$
This follows directly from the reformulation of \({\xi _{*}^{\mathrm {cp}}}(A)\) in Proposition 8 in terms of conic evaluations at points in \(D(S_A^{\mathrm {cp}})\) after observing that, for \(v \in \mathbb {R}^n\), we have \(v \in D(S_A^{\mathrm {cp}})\) if and only if the matrix \(R = vv^\textsf {T}\) satisfies \(0 \le R \le A\). \(\square \)
Additional Constraints and Convergence to \(\tau _\mathrm {cp}(A)\)
The reformulation of the parameter \({\xi _{*}^{\mathrm {cp}}}(A)\) in Proposition 9 differs from \(\tau _\mathrm {cp}(A)\) in that the constraint \(R\preceq A\) is missing. In order to have a hierarchy converging to \(\tau _\mathrm {cp}(A)\), we need to add constraints to enforce that L can be decomposed as a conic combination of evaluation maps at nonnegative vectors v satisfying \(vv^\mathsf{T}\preceq A\). Here, we present two ways to achieve this goal. First, we show that the tensor constraints (20) suffice in the sense that \({\xi _{*,\dagger }^{\mathrm {cp}}}(A) =\tau _{\mathrm {cp}}(A)\) (note that the constraints (19) are not needed for this result). However, because of the special form of the tensor constraints we do not know whether \({\xi _{t,\dagger }^{\mathrm {cp}}}(A)\) admitting a flat optimal solution implies \({\xi _{t,\dagger }^{\mathrm {cp}}}(A) = {\xi _{*,\dagger }^{\mathrm {cp}}}(A)\), and we do not know whether \({\xi _{\infty ,\dagger }^{\mathrm {cp}}}(A) = {\xi _{*,\dagger }^{\mathrm {cp}}}(A)\). Second, we adapt the approach of adding additional localizing constraints from Sect. 2.2 to the commutative setting, where we do show \({\xi _{\infty ,\mathbb {S}^{n-1}}^{\mathrm {cp}}}(A) = {\xi _{*,\mathbb {S}^{n-1}}^{\mathrm {cp}}}(A) = \tau _{\mathrm {cp}}(A)\). This yields a doubly indexed sequence of semidefinite programs whose optimal values converge to \(\tau _{\mathrm {cp}}(A)\).
Let \(A \in \hbox {CP}^n\). For every \(t \in \mathbb {N}\cup \{\infty \}\), the optimum in \({\xi _{t,\dagger }^{\mathrm {cp}}}(A)\) is attained. We have \({\xi _{t,\dagger }^{\mathrm {cp}}}(A) \rightarrow {\xi _{\infty ,\dagger }^{\mathrm {cp}}}(A)\) as \(t\rightarrow \infty \) and \({\xi _{*,\dagger }^{\mathrm {cp}}}(A) =\tau _{\mathrm {cp}}(A)\).
The attainment of the optima in \({\xi _{t,\dagger }^{\mathrm {cp}}}(A)\) for \(t \in \mathbb {N}\cup \{ \infty \}\) and the convergence of \({\xi _{t,\dagger }^{\mathrm {cp}}}(A)\) to \({\xi _{\infty ,\dagger }^{\mathrm {cp}}}(A)\) can be shown in the same way as the analog statements for \({\xi _{t}^{\mathrm {cp}}}(A)\) in Proposition 8.
We have seen the inequality \({\xi _{*,\dagger }^{\mathrm {cp}}}(A) \le \tau _{\mathrm {cp}}(A)\) in Proposition 6. Now we show the reverse inequality. Let L be feasible for \({\xi _{*,\dagger }^{\mathrm {cp}}}(A)\). We will show that L is feasible for \(\tau _{\mathrm {cp}}(A)\), which implies \(\tau _{\mathrm {cp}}(A)\le L(1)\) and thus \(\tau _{\mathrm {cp}}(A)\le {\xi _{*,\dagger }^{\mathrm {cp}}}(A)\).
By Proposition 7 and the fact that \({{\,\mathrm{rank}\,}}(A) \le \tau _{\mathrm {cp}}^{\mathrm {sos}}(A)\), we have \(L(1) > 0\) (where we assume \(A\ne 0\)). By Theorem 5, we may write
$$\begin{aligned} L= L(1) \sum _{k=1}^K \lambda _k L_{v_k}, \end{aligned}$$
where \(\lambda _k>0\), \(\sum _k \lambda _k =1\), and \(L_{v_k}\) is an evaluation map at a point \(v_k \in D(S_A^{\mathrm {cp}})\). We define the matrices \(R_k = v_k v_k^\textsf {T}\), so that \(A = L(1) \sum _{k=1}^K \lambda _k R_k\). The matrices \(R_k\) satisfy \(0 \le R_k \le A\) since \(v_k \in D(S_A^{\mathrm {cp}})\). Clearly also \(R_k \succeq 0\). It remains to show that \(R_k \preceq A\). For this we use the tensor constraints (20). Using that L is a conic combination of evaluation maps, we may rewrite these constraints as
$$\begin{aligned} L(1) \sum _{k=1}^K \lambda _k R_k^{\otimes l} \preceq A^{\otimes l}, \end{aligned}$$
from which it follows that \(L(1) \lambda _k R_k^{\otimes l} \preceq A^{\otimes l}\) for all \(k\in [K]\). Therefore, for all \(k\in [K]\) and all vectors v with \(v^\mathsf{T}R_kv>0\), we have
$$\begin{aligned} L(1) \lambda _k \le \left( \frac{v^\textsf {T}A v}{v^\textsf {T}R_kv}\right) ^l \quad \text {for all} \quad l \in \mathbb {N}. \end{aligned}$$
Suppose there is a k such that \(R_k \not \preceq A\). Then there exists a v such that \(v^\textsf {T}R_k v > v^\textsf {T}A v\). As \((v^\textsf {T}A v) / (v^\textsf {T}R_kv) < 1\), letting l tend to \(\infty \) we obtain \(L(1)\lambda _k=0\), reaching a contradiction. It follows that \(R_k \preceq A\) for all \(k \in [K]\). \(\square \)
The second approach for reaching \(\tau _\mathrm {cp}(A)\) is based on using the extra localizing constraints from Sect. 2.2. For a subset \(V\subseteq \mathbb {S}^{n-1}\), define \({\xi _{t,V}^{\mathrm {cp}}}(A)\) by replacing the truncated quadratic module \({\mathscr {M}}_{2t}(S_A^{\mathrm {cp}})\) in \({\xi _{t}^{\mathrm {cp}}}(A)\) by \({\mathscr {M}}_{2t}(S_{A,V}^{\mathrm {cp}})\), where
$$\begin{aligned} S_{A,V}^{\mathrm {cp}}= S_A^{\mathrm {cp}}\cup \left\{ v^\textsf {T}Av-\Big (\sum _{i=1}^n v_ix_i\Big )^2 : v\in V \right\} . \end{aligned}$$
Proposition 5 can be adapted to the completely positive setting, so that we have a sequence of finite subsets \(V_1 \subseteq V_2 \subseteq \ldots \subseteq \mathbb {S}^{n-1}\) with \( {\xi _{*,V_k}^{\mathrm {cp}}}(A) \rightarrow {\xi _{*,\mathbb {S}^{n-1}}^{\mathrm {cp}}}(A) \) as \(k\rightarrow \infty \). Proposition 8 still holds when adding extra localizing constraints, so that for any \(k\ge 1\) we have
$$\begin{aligned} \lim _{t \rightarrow \infty } {\xi _{t,V_k}^{\mathrm {cp}}}(A) = {\xi _{*,V_k}^{\mathrm {cp}}}(A). \end{aligned}$$
Combined with Proposition 11, this shows that we have a doubly indexed sequence \({\xi _{t,V_k}^{\mathrm {cp}}}(A)\) of semidefinite programs that converges to \(\tau _\mathrm {cp}(A)\) as \(t \rightarrow \infty \) and \(k \rightarrow \infty \).
For \(A \in \hbox {CP}^n\) we have \({\xi _{*,\mathbb {S}^{n-1}}^{\mathrm {cp}}}(A) = \tau _{\mathrm {cp}}(A)\).
The proof is the same as the proof of Proposition 9, with the following additional observation: Given a vector \(u \in \mathbb {R}^n\), we have \(u \in D(S_{A,\mathbb {S}^{n-1}}^{\mathrm {cp}})\) only if \(uu^\textsf {T}\preceq A\). The latter follows from the additional localizing constraints: for each \(v \in \mathbb {R}^n\) we have
$$\begin{aligned} 0 \le v^\textsf {T}A v - \Big (\sum _i v_i u_i \Big )^2 = v^{\textsf {T}} ( A - uu^\textsf {T}) v. \end{aligned}$$
More Efficient Tensor Constraints
Here, we show that for any integer \(l\ge 2\) the constraint \(A^{\otimes l} -(L((ww')^c))_{w,w'\in \langle \mathbf{x}\rangle _{=l}}\succeq 0\), used in the definition of \({\xi _{t,\dagger }^{\mathrm {cp}}}(A)\), can be reformulated in a more economical way using matrices indexed by commutative monomials in \([\mathbf{x}]_{=l}\) instead of noncommutative words in \(\langle \mathbf{x}\rangle _{=l}\). For this we exploit the symmetry in the matrices \(A^{\otimes l}\) and \((L((ww')^c))_{w,w'\in \langle \mathbf{x}\rangle _{=l}}\) for \(L \in \mathbb {R}[\mathbf {x}]_{2l}^*\). Recall that for a word \(w \in \langle \mathbf {x}\rangle \), we let \(w^c\) denote the corresponding (commutative) monomial in \([\mathbf {x}]\).
Define the matrix \(Q_l \in \mathbb {R}^{[\mathbf{x}]_{=l} \times \langle \mathbf{x}\rangle _{=l}}\) by
$$\begin{aligned} (Q_l)_{m,w} = {\left\{ \begin{array}{ll} 1/d_m &{} \text { if } w^c = m,\\ 0 &{} \text { otherwise,} \end{array}\right. } \end{aligned}$$
where, for \(m = x_1^{\alpha _1} \cdots x_n^{\alpha _n} \in [\mathbf {x}]_{=l}\), we define the multinomial coefficient
$$\begin{aligned} d_m = \big |\big \{w\in \langle \mathbf{x}\rangle _{=l}: w^c = m\big \}\big | = \frac{l!}{\alpha _1! \cdots \alpha _n!}. \end{aligned}$$
For \(L \in \mathbb {R}[\mathbf{x}]_{2l}^*\) we have
$$\begin{aligned} Q_l (L((ww')^c))_{w,w'\in \langle \mathbf{x}\rangle _{=l}} Q_l^\textsf {T}= (L(mm'))_{m,m'\in [\mathbf{x}]_{=l}}. \end{aligned}$$
For \(m,m'\in [\mathbf{x}]_{=l}\), the \((m,m')\)-entry of the left-hand side is equal to
$$\begin{aligned} \sum _{w,w'\in \langle \mathbf{x}\rangle _{=l}} Q_{mw}Q_{m'w'}L((ww')^c)&= \sum _{\underset{w^c = m}{w \in \langle \mathbf{x}\rangle _{=l}}} \sum _{\underset{(w')^c = m'}{w' \in \langle \mathbf{x}\rangle _{=l}}} \frac{L((ww')^c)}{d_md_{m'}} = L(mm'). \end{aligned}$$
The symmetric group \(S_l\) acts on \(\langle \mathbf {x}\rangle _{=l}\) by \((x_{i_1} \cdots x_{i_l})^\sigma = x_{i_{\sigma (1)}} \cdots x_{i_{\sigma (l)}}\) for \(\sigma \in S_l\). Let
$$\begin{aligned} P = \frac{1}{l!} \sum _{\sigma \in S_l} P_\sigma , \end{aligned}$$
where, for any \(\sigma \in S_l\), \(P_\sigma \in \mathbb {R}^{\langle \mathbf{x}\rangle _{=l} \times \langle \mathbf{x}\rangle _{=l}}\) is the permutation matrix defined by
$$\begin{aligned} (P_\sigma )_{w,w'} = {\left\{ \begin{array}{ll} 1 &{} \text {if } w^\sigma = w',\\ 0 &{} \text {otherwise}.\end{array}\right. } \end{aligned}$$
A matrix \(M \in \mathbb {R}^{\langle \mathbf{x}\rangle _{=l} \times \langle \mathbf{x}\rangle _{=l}}\) is said to be \(S_l\)-invariant if \(P_\sigma M = M P_\sigma \) for all \(\sigma \in S_l\).
Lemma 10
If \(M \in \mathbb {R}^{\langle \mathbf{x}\rangle _{=l} \times \langle \mathbf{x}\rangle _{=l}}\) is symmetric and \(S_l\)-invariant, then
$$\begin{aligned} M\succeq 0 \quad \Longleftrightarrow \quad Q_l M Q_l^\textsf {T}\succeq 0. \end{aligned}$$
The implication \(M \succeq 0 \Longrightarrow Q_l M Q_l^\textsf {T}\succeq 0\) is immediate. For the other implication, we need a preliminary fact. Consider the diagonal matrix \(D \in \mathbb {R}^{[\mathbf{x}]_{=l}\times [\mathbf{x}]_{=l}}\) with \(D_{mm}= d_m\) for \(m \in [\mathbf{x}]_{=l}\). We claim that \(Q_l^\textsf {T}D Q_l = P\), the matrix in (24). Indeed, for any \(w,w'\in \langle \mathbf{x}\rangle _{=l}\), we have
$$\begin{aligned} (Q_l^\textsf {T}D Q_l)_{ww'}&= \sum _{m\in [\mathbf{x}]_{=l}} (Q_l)_{mw}(Q_l)_{mw'}D_{mm} = {\left\{ \begin{array}{ll} 1/d_m &{} \text {if } w^c = (w')^c=m,\\ 0 &{} \text {otherwise}\end{array}\right. }\\&= \frac{|\{\sigma \in S_l: w^\sigma =w'\}|}{l!} = P_{ww'}. \end{aligned}$$
Suppose \(Q_l M Q_l^\textsf {T}\succeq 0\), and let \(\lambda \) be an eigenvalue of M with eigenvector z. Since \(MP=PM\), we may assume \(Pz=z\), for otherwise we can replace z by Pz, which is still an eigenvector of M with eigenvalue \(\lambda \). We may also assume z to be a unit vector. Then, \(\lambda \ge 0\) can be shown using the identity \(Q_l^\textsf {T}D Q_l=P\) as follows:
$$\begin{aligned} \lambda \!=\! z^\textsf {T}M z \!=\! z^\textsf {T}P M P z \!=\! z^\textsf {T}(Q_l^\textsf {T}D Q_l) M(Q_l^\textsf {T}D Q_l)z = (D Q_l z)^\textsf {T}(Q_l M Q_l^\textsf {T}) D Q_l z \ge 0. \end{aligned}$$
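As a small numerical sanity check (ours) of the identity \(Q_l^\textsf {T}D Q_l = P\) used in this proof, the following sketch builds \(Q_l\), D and P for \(n = 3\) and \(l = 2\).

```python
import itertools
from math import factorial
import numpy as np

# Check (ours) of Q_l^T D Q_l = P for n = 3, l = 2.
n, l = 3, 2
words = list(itertools.product(range(n), repeat=l))      # noncommutative words of length l
monos = sorted(set(tuple(sorted(w)) for w in words))     # commutative monomials of degree l
d = {m: sum(1 for w in words if tuple(sorted(w)) == m) for m in monos}   # multinomial d_m

Q = np.array([[1.0 / d[m] if tuple(sorted(w)) == m else 0.0 for w in words] for m in monos])
D = np.diag([float(d[m]) for m in monos])

P = np.zeros((len(words), len(words)))
for sigma in itertools.permutations(range(l)):
    for a, w in enumerate(words):
        w_sigma = tuple(w[sigma[k]] for k in range(l))   # the word w^sigma
        P[a, words.index(w_sigma)] += 1.0 / factorial(l)

print(np.allclose(Q.T @ D @ Q, P))                       # True
```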
We can now derive our symmetry reduction result:
$$\begin{aligned} A^{\otimes l}-(L((ww')^c))_{w,w'\in \langle \mathbf{x}\rangle _{=l}}\succeq 0 \quad \Longleftrightarrow \quad Q_l A^{\otimes l}Q_l^\textsf {T}- (L(mm'))_{m,m'\in [\mathbf{x}]_{=l}}\succeq 0. \end{aligned}$$
For any \(w,w'\in \langle \mathbf{x}\rangle _{=l}\), we have \((P_\sigma A^{\otimes l} P_\sigma ^\textsf {T})_{w,w'} = A^{\otimes l}_{w^\sigma , (w')^\sigma } = A^{\otimes l}_{w,w'}\) and
$$\begin{aligned} (P_\sigma (L((uu')^c))_{u,u'\in \langle \mathbf{x}\rangle _{=l}} P_\sigma ^*)_{w,w'} = L((w^\sigma (w')^\sigma )^c) = L((ww')^c). \end{aligned}$$
This shows that the matrix \(A^{\otimes l}-(L((ww')^c))_{w,w'\in \langle \mathbf{x}\rangle _{=l}}\) is \(S_l\)-invariant. Hence, the claimed result follows by using Lemmas 9 and 10. \(\square \)
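For a small concrete illustration of the matrices \(Q_l\), D and P appearing in this argument, take \(n=2\) and \(l=2\), with the words in \(\langle \mathbf{x}\rangle _{=2}\) ordered as \(x_1x_1, x_1x_2, x_2x_1, x_2x_2\) and the monomials in \([\mathbf{x}]_{=2}\) ordered as \(x_1^2, x_1x_2, x_2^2\). Then \(d_{x_1^2}=d_{x_2^2}=1\), \(d_{x_1x_2}=2\), and
$$\begin{aligned} Q_2 = \begin{pmatrix} 1 &{} 0 &{} 0 &{} 0 \\ 0 &{} 1/2 &{} 1/2 &{} 0 \\ 0 &{} 0 &{} 0 &{} 1 \end{pmatrix}, \quad D = \mathrm {Diag}(1,2,1), \quad P = \begin{pmatrix} 1 &{} 0 &{} 0 &{} 0 \\ 0 &{} 1/2 &{} 1/2 &{} 0 \\ 0 &{} 1/2 &{} 1/2 &{} 0 \\ 0 &{} 0 &{} 0 &{} 1 \end{pmatrix}, \end{aligned}$$
and one checks directly that \(Q_2^\textsf {T}D Q_2 = P\), as claimed above.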
Computational Examples
Bipartite Matrices
Consider the \((p+q)\times (p+q)\) matrices
$$\begin{aligned} P(a,b) = \begin{pmatrix} (a+q) I_p &{} J_{p,q} \\ J_{q,p} &{} (b+p) I_q \end{pmatrix}, \quad a,b \in \mathbb {R}_+, \end{aligned}$$
where \(J_{p,q}\) denotes the all-ones matrix of size \(p \times q\). We have \(P(a,b)=P(0,0)+D\) for some nonnegative diagonal matrix D. As can be easily verified, P(0, 0) is completely positive with \(\hbox {cp-rank}(P(0,0))=pq\), so P(a, b) is completely positive with \(pq \le \hbox {cp-rank}(P(a,b)) \le pq + p + q\).
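For instance, one explicit completely positive factorization of P(0, 0) into pq nonnegative rank-one terms is
$$\begin{aligned} P(0,0) = \sum _{i=1}^{p}\sum _{j=1}^{q} (e_i + e_{p+j})(e_i + e_{p+j})^\textsf {T}, \end{aligned}$$
where \(e_1,\ldots ,e_{p+q}\) denote the standard unit vectors of \(\mathbb {R}^{p+q}\): for fixed i the q terms contribute \(q\, e_ie_i^\textsf {T}\) to the block \(qI_p\), for fixed j the p terms contribute \(p\, e_{p+j}e_{p+j}^\textsf {T}\) to the block \(pI_q\), and the cross terms add up to the all-ones off-diagonal blocks.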
For \(p=2\) and \(q=3\), we have \(\hbox {cp-rank}(P(a,b))=6\) for all \(a,b \ge 0\), which follows from the fact that \(5 \times 5\) completely positive matrices with at least one zero entry have \(\hbox {cp-rank}\) at most 6; see [6, Theorem 3.12]. Fawzi and Parrilo [27] show that \(\tau _{\text {cp}}^{\mathrm {sos}}(P(0,0)) = 6\), and give a subregion of \([0,1]^2\) where \(5< \tau _{\text {cp}}^{\mathrm {sos}}(P(a,b)) < 6\). The next lemma shows the bound \({\xi _{2,\dagger }^{\mathrm {cp}}}(P(a,b))\) is tight for all \(a,b \ge 0\) and therefore strictly improves on \(\tau _{\mathrm {cp}}^{\mathrm {sos}}\) in this region.
For \(a,b \ge 0\) we have \({\xi _{2,\dagger }^{\mathrm {cp}}}(P(a,b)) \ge pq\).
Let L be feasible for \({\xi _{2,\dagger }^{\mathrm {cp}}}(P(a,b))\) and let
$$\begin{aligned} B = \begin{pmatrix} \alpha &{} c^\textsf {T}\\ c &{} X \end{pmatrix} \end{aligned}$$
be the principal submatrix of \(M_2(L)\) where the rows and columns are indexed by
$$\begin{aligned} \{1\} \cup \{x_ix_j : 1 \le i \le p, \, p+1 \le j \le p+q\}. \end{aligned}$$
It follows that c is the all-ones vector \(c = \mathbf {1}\). Moreover, if \(P(a,b)_{ij} = 0\) for some \(i\ne j\), then the constraints \(L(x_ix_ju) \ge 0\) and \(L((P(a,b)_{ij} - x_ix_j)u) \ge 0\) imply \(L(x_i x_j u) = 0\) for all \(u \in [\mathbf {x}]_2\). Hence, \(X_{x_ix_j,x_kx_l} = L(x_i x_j x_k x_l) = 0\) whenever \(x_ix_j \ne x_k x_l\). It follows that X is a diagonal matrix. We write
$$\begin{aligned} B = \begin{pmatrix} \alpha &{} \mathbf {1}^\textsf {T}\\ \mathbf {1} &{} \mathrm {Diag}(z_1, \ldots , z_{pq}) \end{pmatrix}. \end{aligned}$$
Since \(\begin{pmatrix} 1 &{} - \mathbf {1}^\textsf {T}\\ -\mathbf {1} &{} J \end{pmatrix} \succeq 0\) we have
$$\begin{aligned} 0 \le \mathrm {Tr}\left( \begin{pmatrix} \alpha &{} \mathbf {1}^\textsf {T}\\ \mathbf {1} &{} \mathrm {Diag}(z_1, \ldots , z_{pq}) \end{pmatrix} \begin{pmatrix} 1 &{} - \mathbf {1}^\textsf {T}\\ -\mathbf {1} &{} J \end{pmatrix}\right) = \alpha - 2 pq + \sum _{k = 1}^{pq} z_k. \end{aligned}$$
Finally, by the constraints \(L((P(a,b)_{ij} - x_i x_j) u) \ge 0\) (with \(i \in [p], j \in p+[q]\) and \(u = x_i x_j\)) and \(L(x_i x_j) = P(a,b)_{ij}\) we obtain \(z_k \le 1\) for all \(k \in [pq]\). Combined with the above inequality, it follows that
$$\begin{aligned} L(1) = \alpha \ge 2pq - \sum _{k=1}^{pq} z_k \ge pq, \end{aligned}$$
and hence \({\xi _{2,\dagger }^{\mathrm {cp}}}(P(a,b)) \ge pq\). \(\square \)
Examples Related to the DJL-Conjecture
The Drew–Johnson–Loewy conjecture [21] states that the maximal \(\hbox {cp-rank}\) of an \(n~\times ~n\) completely positive matrix is equal to \(\lfloor n^2/4 \rfloor \). Recently, this conjecture has been disproven for \(n=7,8,9,10,11\) in [10] and for all \(n \ge 12\) in [11] (interestingly, it remains open for \(n=6\)). Here, we study our bounds on the examples of [10]. Although our bounds are not tight for the \(\hbox {cp-rank}\), they are non-trivial and as such may be of interest for future comparisons. For numerical stability reasons, we have evaluated our bounds on scaled versions of the matrices from [10], so that the diagonal entries become equal to 1. In Table 1 the matrices \(\tilde{M}_7\), \(\tilde{M}_8\) and \(\tilde{M}_9\) correspond to the matrices \(\tilde{M}\) in Examples 1–3 of [10], and \(M_7\), \(M_{11}\) correspond to the matrices M in Examples 1 and 4. The column \({\xi _{2,\dagger }^{\mathrm {cp}}}(\cdot ) + x_i x_j\) corresponds to the bound \({\xi _{2,\dagger }^{\mathrm {cp}}}(\cdot )\) where we replace \(S_A^{\mathrm {cp}}\) by \(S_A^{\mathrm {cp}}\cup \{ x_i x_j : 1 \le i < j \le n\}\).
Table 1 Examples from [10] with various bounds on their cp-rank
Lower Bounds on the Nonnegative Rank
In this section, we adapt the techniques for the cp-rank from Sect. 3 to the asymmetric setting of the nonnegative rank. We now view a factorization \(A = (a_i^\textsf {T}b_j)_{i \in [m], j \in [n]}\) by nonnegative vectors as a factorization by positive semidefinite diagonal matrices, by writing \(A_{ij} = {{\,\mathrm{Tr}\,}}(X_i X_{m+j})\), with \(X_i =\mathrm{Diag}(a_i)\) and \(X_{m+j} = \mathrm{Diag}(b_j)\). Note that we can view this as a "partial matrix" setting, where for the symmetric matrix \(({{\,\mathrm{Tr}\,}}(X_iX_k))_{i,k\in [m+n]}\) of size \(m+n\), only the off-diagonal entries at the positions \((i,m+j)\) for \(i\in [m], j\in [n]\) are specified.
This asymmetry requires rescaling the factors in order to get upper bounds on their maximal eigenvalues, which is needed to ensure the Archimedean property for the selected localizing polynomials. For this we use the well-known fact that for any \(A \in \mathbb {R}_+^{m \times n}\) there exists a factorization \(A=({{\,\mathrm{Tr}\,}}(X_iX_{m+j}))\) by diagonal nonnegative matrices of size \({{\,\mathrm{rank}\,}}_+(A)\), such that
$$\begin{aligned} \lambda _\mathrm {max}(X_i), \lambda _\mathrm {max}(X_{m+j}) \le \sqrt{A_\mathrm {max}} \quad \text {for all} \quad i \in [m], j \in [n], \end{aligned}$$
where \(A_\mathrm {max}:= \mathrm {max}_{i,j} A_{ij}\). To see this, observe that for any rank one matrix \(R = u v^\textsf {T}\) with \(0 \le R \le A\), one may assume \(0 \le u_i, v_j \le \sqrt{A_\mathrm {max}}\) for all i, j (indeed, for \(R\ne 0\) one may rescale \((u,v)\mapsto (tu,v/t)\) so that \(\mathrm {max}_i u_i = \mathrm {max}_j v_j =: c\), and then \(c^2 = u_{i^*}v_{j^*} = R_{i^*j^*} \le A_\mathrm {max}\) for indices \(i^*,j^*\) attaining the maxima). Hence, the set
$$\begin{aligned} S_A^{+}= \big \{\sqrt{A_\mathrm {max}}x_i - x_i^2 : i \in [m+n]\big \} \cup \big \{A_{ij} - x_i x_{m+j} : i \in [m], j \in [n] \big \} \end{aligned}$$
is localizing for A; that is, there exists a minimal factorization \(\mathbf {X}\) of A with \(\mathbf {X}\in {\mathscr {D}}(S_A^+)\).
Given \(A\in \mathbb {R}^{m\times n}_{\ge 0}\), for each \(t \in \mathbb {N}\cup \{\infty \}\) we consider the semidefinite program
$$\begin{aligned} {\xi _{t}^{\mathrm {+}}}(A) = \mathrm {min} \big \{ L(1) : \;&L \in \mathbb {R}[x_1,\ldots ,x_{m+n}]_{2t}^*,\\&L(x_ix_{m+j}) = A_{ij} \quad \text {for} \quad i \in [m], j \in [n],\\&L \ge 0 \quad \text {on} \quad {\mathscr {M}}_{2t}(S_A^{+}) \big \}. \end{aligned}$$
Moreover, define \({\xi _{*}^{\mathrm {+}}}(A)\) by adding the constraint \({{\,\mathrm{rank}\,}}(M(L)) < \infty \) to the program defining \({\xi _{\infty }^{\mathrm {+}}}(A)\). It is easy to check that \({\xi _{t}^{\mathrm {+}}}(A)\le {\xi _{\infty }^{\mathrm {+}}}(A)\le {\xi _{*}^{\mathrm {+}}}(A)\le {{\,\mathrm{rank}\,}}_+(A)\) for \(t \in \mathbb {N}\).
Denote by \({\xi _{t,\dagger }^{\mathrm {+}}}(A)\) the strengthening of \({\xi _{t}^{\mathrm {+}}}(A)\) where we add the positivity constraints
$$\begin{aligned} L(gu) \ge 0 \quad \text {for} \quad g \in \{1\} \cup S_A^{+}\quad \text {and} \quad u \in [\mathbf{x}]_{2t-\deg (g)}. \end{aligned}$$
Note that these extra constraints can help for finite t, but are redundant for \(t \in \{\infty , *\}\).
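To make the lowest-order bound concrete, the following is a minimal sketch of \({\xi _{1,\dagger }^{\mathrm {+}}}(A)\) in Python, using the CVXPY modeling package and any installed SDP solver. At order \(t=1\), the condition \(L\ge 0\) on \({\mathscr {M}}_{2}(S_A^{+})\) amounts to \(M_1(L)\succeq 0\) together with the scalar constraints \(L(g)\ge 0\) for \(g\in S_A^{+}\), and the extra positivity constraints amount to entrywise nonnegativity of \(M_1(L)\). The function name and the indexing of the moment matrix below are our own choices, not notation from the text.

import numpy as np
import cvxpy as cp

def xi_1_dagger_plus(A):
    # Sketch of the order-1 bound xi_{1,dagger}^+(A) for a nonnegative matrix A.
    m, n = A.shape
    N = m + n
    s = np.sqrt(A.max())
    # Order-1 moment matrix M_1(L), indexed by the monomials 1, x_1, ..., x_{m+n}:
    # M[0,0] = L(1), M[0,1+i] = L(x_i), M[1+i,1+k] = L(x_i x_k).
    M = cp.Variable((1 + N, 1 + N), symmetric=True)
    cons = [M >> 0,   # L >= 0 on squares p*p with deg(p) <= 1
            M >= 0]   # dagger constraints: L(u) >= 0 for all monomials u of degree <= 2
    # L(x_i x_{m+j}) = A_{ij}
    for i in range(m):
        for j in range(n):
            cons.append(M[1 + i, 1 + m + j] == A[i, j])
    # scalar localizing constraints L(sqrt(A_max) x_i - x_i^2) >= 0;
    # the constraints L(A_{ij} - x_i x_{m+j}) >= 0 hold with equality by the lines above.
    for i in range(N):
        cons.append(s * M[0, 1 + i] - M[1 + i, 1 + i] >= 0)
    prob = cp.Problem(cp.Minimize(M[0, 0]), cons)
    prob.solve()
    return prob.value

# Example usage (hypothetical data): xi_1_dagger_plus(np.array([[1.0, 1.0], [1.0, 0.5]]))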
Comparison to Other Bounds
As in the previous section, we compare our bounds to the bounds by Fawzi and Parrilo [27]. They introduce the following parameter \(\tau _+(A)\) as analog of the bound \(\tau _\mathrm {cp}(A)\) for the nonnegative rank:
$$\begin{aligned} \tau _+(A) = \mathrm {min}\Big \{ \alpha : \alpha \ge 0,\,A \in \alpha \cdot \mathrm {conv} \big \{ R \in \mathbb {R}^{m \times n}: 0 \le R \le A, \, {{\,\mathrm{rank}\,}}(R) \le 1\big \}\Big \}, \end{aligned}$$
and the analog \(\tau _+^\mathrm {sos}(A)\) of the bound \(\tau _{\mathrm {cp}}^{\mathrm {sos}}(A)\) for the nonnegative rank:
$$\begin{aligned} \tau _{+}^{\mathrm {sos}}(A) = \mathrm {inf} \big \{ \alpha : \;&X \in \mathbb {R}^{mn \times mn}, \, \alpha \in \mathbb {R},\\&\begin{pmatrix} \alpha &{} \text {vec}(A)^\textsf {T}\\ \text {vec}(A) &{} X \end{pmatrix} \succeq 0, \\&X_{(i,j),(i,j)} \le A_{ij}^2 \quad \text {for} \quad 1 \le i \le m, 1 \le j \le n, \\&X_{(i,j),(k,l)} = X_{(i,l),(k,j)} \quad \text {for} \quad 1 \le i< k \le m, \; 1 \le j < l \le n \big \}. \end{aligned}$$
First, we give the analog of Proposition 8, whose proof we omit since it is very similar.
Proposition 13
Let \(A \in \mathbb {R}_+^{m \times n}\). For every \(t \in \mathbb {N}\cup \{\infty , *\}\) the optimum in \({\xi _{t}^{\mathrm {+}}}(A)\) is attained, and \({\xi _{t}^{\mathrm {+}}}(A) \rightarrow {\xi _{\infty }^{\mathrm {+}}}(A) = {\xi _{*}^{\mathrm {+}}}(A)\) as \(t\rightarrow \infty \). If \({\xi _{t}^{\mathrm {+}}}(A)\) admits a flat optimal solution, then \({\xi _{t}^{\mathrm {+}}}(A) = {\xi _{*}^{\mathrm {+}}}(A)\). Moreover, \({\xi _{\infty }^{\mathrm {+}}}(A) = {\xi _{*}^{\mathrm {+}}}(A)\) is the minimum of L(1) over all conic combinations \(L\) of trace evaluations at elements of \(D(S_A^{+})\) satisfying \(A = (L(x_ix_{m+j}))\).
Now we observe that the parameters \({\xi _{\infty }^{\mathrm {+}}}(A)\) and \({\xi _{*}^{\mathrm {+}}}(A)\) coincide with \(\tau _+(A)\), so that we have a sequence of semidefinite programs converging to \(\tau _+(A)\).
For any \(A \in \mathbb {R}_{\ge 0}^{m \times n}\), we have \({\xi _{\infty }^{\mathrm {+}}}(A) = {\xi _{*}^{\mathrm {+}}}(A) = \tau _+(A).\)
The discussion at the beginning of Sect. 4 shows that for any rank one matrix R satisfying \(0 \le R \le A\) we may assume that \(R=uv^\textsf {T}\) with \((u,v)\in \mathbb {R}^m_+ \times \mathbb {R}^n_+\) and \( u_i,v_j \le \sqrt{A_{\mathrm {max}}}\) for \(i\in [m],j\in [n]\). Hence, \(\tau _+(A)\) can be written as:
$$\begin{aligned} \mathrm {min}\Big \{\alpha : \alpha \!\ge \! 0,\, A \!\in \!\alpha&\cdot \mathrm {conv} \big \{ uv^\textsf {T}:u \in \Big [0, \sqrt{A_\mathrm {max}}\Big ]^m, v \in \Big [0, \sqrt{A_\mathrm {max}}\Big ]^n,\, uv^\textsf {T}\le A \big \} \Big \} \\&=\mathrm {min}\Big \{ \alpha : \alpha \ge 0,\, A \in \alpha \cdot \mathrm {conv}\big \{uv^\textsf {T}: (u,v) \in D(S_A^{+})\big \} \Big \}. \end{aligned}$$
The equality \({\xi _{\infty }^{\mathrm {+}}}(A) = {\xi _{*}^{\mathrm {+}}}(A)=\tau _+(A)\) now follows from the reformulation of \({\xi _{*}^{\mathrm {+}}}(A)\) in Proposition 13 in terms of conic evaluations, after noting that for (u, v) in \( \mathbb {R}^m\times \mathbb {R}^n\) we have \((u,v)\in D(S_A^{+})\) if and only if the matrix \(R=uv^\textsf {T}\) satisfies \(0\le R\le A\). \(\square \)
Analogously to the case of the completely positive rank, we have the following proposition. The proof is similar to that of Proposition 4.2, considering now for M the principal submatrix of \(M_2(L)\) indexed by the monomials 1 and \(x_ix_{m+j}\) for \(i\in [m]\) and \(j\in [n]\).
If A is a nonnegative matrix, then \({\xi _{2,\dagger }^{\mathrm {+}}}(A) \ge \tau _{+}^{\mathrm {sos}}(A)\).
In the remainder of this section, we recall how \(\tau _+(A)\) and \(\tau _{+}^{\mathrm {sos}}(A)\) compare to other bounds in the literature. These bounds can be divided into two categories: combinatorial lower bounds and norm-based lower bounds. A diagram in [27] summarizes how \(\tau _+^{\mathrm {sos}}(A)\) and \(\tau _+(A)\) relate to the combinatorial lower bounds (the diagram is not reproduced here).
Here \(\mathrm {RG}(A)\) is the rectangular graph, with \(V = \{(i,j)\in [m]\times [n]: A_{ij} > 0\}\) as vertex set and \(E = \{ ((i,j),(k,l)): A_{il} A_{kj}= 0\}\) as edge set. The coloring number of \(\mathrm {RG}(A)\) coincides with the well-known rectangle covering number (also denoted \({{\,\mathrm{rank}\,}}_B(A)\)), which was used, e.g., in [29] to show that the extension complexity of the correlation polytope is exponential. The clique number of \(\mathrm {RG}(A)\) is also known as the fooling set number (see, e.g., [28]). Observe that the above combinatorial lower bounds only depend on the sparsity pattern of the matrix A, and that they are all equal to one for a strictly positive matrix.
Fawzi and Parrilo [27] have furthermore shown that the bound \(\tau _+(A)\) is at least as good as norm-based lower bounds:
$$\begin{aligned} \tau _+(A) = \underset{\begin{array}{c} {\mathscr {N}} \text { monotone and} \\ \text { positively homogeneous} \end{array}}{\mathrm {sup}} \frac{{\mathscr {N}}^*(A)}{{\mathscr {N}}(A)}. \end{aligned}$$
Here, a function \({\mathscr {N}}: \mathbb {R}^{m \times n}_{+} \rightarrow \mathbb {R}_{+}\) is positively homogeneous if \({\mathscr {N}}(\lambda A) = \lambda {\mathscr {N}}(A)\) for all \(\lambda \ge 0\) and monotone if \({\mathscr {N}}(A) \le {\mathscr {N}}(B)\) for \(A \le B\), and \({\mathscr {N}}^*(A)\) is defined as
$$\begin{aligned} {\mathscr {N}}^*(A) = \mathrm {max}\{ L(A) :&\ L:\mathbb {R}^{m\times n}\rightarrow \mathbb {R} \text{ linear } \text{ and } L(X) \le 1 \text { for all } X \in \mathbb {R}^{m \times n}_{+} \\&\text { with } {{\,\mathrm{rank}\,}}(X) \le 1 \text { and } {\mathscr {N}}(X) \le 1\}. \end{aligned}$$
These bounds are called norm-based since norms often provide valid functions \({\mathscr {N}}\). For example, when \({\mathscr {N}}\) is the \(\ell _\infty \)-norm, Rothvoß [66] used the corresponding lower bound to show that the matching polytope has exponential extension complexity.
When \({\mathscr {N}}\) is the Frobenius norm: \({\mathscr {N}}(A) = (\sum _{i,j} A_{ij}^2)^{1/2}\), the parameter \({\mathscr {N}}^*(A)\) is known as the nonnegative nuclear norm. In [26] it is denoted by \(\nu _+(A)\), shown to satisfy \({{\,\mathrm{rank}\,}}_+(A)\ge \left( \nu _+(A)/||A||_F\right) ^2\), and reformulated as
$$\begin{aligned} \nu _+(A)&= \mathrm {min}\left\{ \sum _i \lambda _i : A = \sum _{i} \lambda _i u_i v_i^\textsf {T}, \, (\lambda _i,u_i,v_i) \in \mathbb {R}^{1+m+n}_{+}, \, ||u_i||_2 = ||v_i||_2 = 1 \right\} \qquad (26)\\&= \mathrm {max}\big \{ \langle A, W \rangle : W \in \mathbb {R}^{m \times n}, \, \bigl ({\begin{matrix} I &{} -W \\ -W^\textsf {T}&{} I\end{matrix}}\bigr ) \text { is copositive} \big \}, \qquad (27) \end{aligned}$$
where the cone of copositive matrices is the dual of the cone of completely positive matrices. Fawzi and Parrilo [26] use the copositive formulation (27) to provide bounds \(\nu _+^{[k]}(A)\) (\(k\ge 0\)), based on inner approximations of the copositive cone from [60], which converge to \(\nu _+(A)\) from below. We now observe that by Theorem 7 the atomic formulation of \(\nu _+(A)\) from (26) can be seen as a moment optimization problem:
$$\begin{aligned} \nu _+(A) = \mathrm {min}\int _{V(S)} 1 \, {\hbox {d}} \mu (x) \quad \text {s.t.} \quad A_{ij} = \int _{V(S)} x_i x_{m+j} \, {\hbox {d}} \mu (x) \quad \text {for}\quad i\in [m], j\in [n]. \end{aligned}$$
Here, the optimization variable \(\mu \) is required to be a Borel measure on the variety V(S), where
$$\begin{aligned} S=\textstyle {\left\{ \sum _{i=1}^mx_i^2-1, \ \sum _{j=1}^n x_{m+j}^2-1\right\} }. \end{aligned}$$
(The same observation is made in [74] for the real nuclear norm of a symmetric 3-tensor and in [59] for symmetric odd-dimensional tensors.) For \(t \in \mathbb {N}\cup \{\infty \}\), let \(\mu _t(A)\) denote the parameter defined analogously to \({\xi _{t}^{\mathrm {+}}}(A)\), where we replace the condition \(L\ge 0\) on \({\mathscr {M}}_{2t}(S_A^+)\) by \(L\ge 0\) on \({\mathscr {M}}_{2t}(\{x_1,\ldots , x_{m+n}\})\) and \(L=0\) on \({\mathscr {I}}_{2t}(S)\), and let \(\mu _*(A)\) be obtained by adding the constraint \({{\,\mathrm{rank}\,}}(M(L)) < \infty \) to \(\mu _\infty (A)\). We have \(\mu _t(A) \rightarrow \mu _\infty (A) = \mu _*(A) = \nu _+(A)\) by Theorem 7 and (a non-normalized analog of) Theorem 8. One can show that \(\mu _1(A)\) with the additional constraints \(L(u) \ge 0\) for all \(u \in [\mathbf{x}]_2\), is at least as good as \(\nu _+^{[0]}(A)\). It is not clear how the hierarchies \(\mu _t(A)\) and \(\nu _+^{[k]}(A)\) compare in general.
We illustrate the performance of our approach by comparing our lower bounds \({\xi _{2,\dagger }^{\mathrm {+}}}\) and \({\xi _{3,\dagger }^{\mathrm {+}}}\) to the lower bounds \(\tau _+\) and \(\tau _+^{\mathrm {sos}}\) on the two examples considered in [27].
All Nonnegative \(2 \times 2\) Matrices
For \(A(\alpha ) = \bigl ({\begin{matrix} 1 &{} 1 \\ 1 &{} \alpha \end{matrix}}\bigr )\), Fawzi and Parrilo [27] show that
$$\begin{aligned} \tau _+(A(\alpha )) = 2-\alpha \quad \text {and} \quad \tau _+^{\mathrm {sos}}(A(\alpha )) = \frac{2}{1+\alpha } \quad \text {for all} \quad 0 \le \alpha \le 1. \end{aligned}$$
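As a quick check of these formulas at the endpoints: for \(\alpha =1\) the matrix \(A(1)\) is the all-ones matrix, so \({{\,\mathrm{rank}\,}}_+(A(1))=1\) and indeed \(\tau _+(A(1))=\tau _+^{\mathrm {sos}}(A(1))=1\); for \(\alpha =0\) we have \({{\,\mathrm{rank}\,}}_+(A(0))=2\) and both bounds equal 2, so both are tight there; and for \(0<\alpha <1\) we have \((2-\alpha )(1+\alpha )=2+\alpha (1-\alpha )>2\), so \(\tau _+^{\mathrm {sos}}(A(\alpha ))<\tau _+(A(\alpha ))\).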
Since the parameters \(\tau _+(A)\) and \(\tau _+^{\mathrm {sos}}(A)\) are invariant under scaling and permuting rows and columns of A, one can use the identity
$$\begin{aligned} \begin{pmatrix} 1 &{} 1 \\ 1 &{} \alpha \end{pmatrix} = \begin{pmatrix} 1 &{} 0 \\ 0 &{} \alpha \end{pmatrix}\begin{pmatrix} 1 &{} 1 \\ 1 &{} 1/\alpha \end{pmatrix}\begin{pmatrix} 0 &{} 1 \\ 1 &{} 0 \end{pmatrix} \end{aligned}$$
to see that this describes the parameters for all nonnegative \(2 \times 2\) matrices. Using a semidefinite programming solver for \(\alpha = k/100\), \(k \in [100]\), we find that \({\xi _{2}^{\mathrm {+}}}(A(\alpha ))\) coincides with \(\tau _+(A(\alpha ))\).
The Nested Rectangles Problem
In this section, we consider the nested rectangles problem as described in [27, Section 2.7.2] (see also [55]), which asks for which a, b there exists a triangle T such that \(R(a,b) \subseteq T \subseteq P\), where \(R(a,b) = [-a,a] \times [-b,b]\) and \(P = [-1,1]^2\).
The nonnegative rank relates not only to the extension complexity of a polytope [78], but also to extended formulations of nested pairs [12, 31]. An extended formulation of a pair of polytopes \(P_1\subseteq P_2 \subseteq \mathbb {R}^d\) is a (possibly) higher dimensional polytope K whose projection \(\pi (K)\) is nested between \(P_1\) and \(P_2\). If \(K= \{(x,y): Ex+Fy = g,\, y \in \mathbb {R}^k_+\}\) and \(\pi (K)= \{ x \in \mathbb {R}^d : \exists \, y \in \mathbb {R}_+^k, \, (x,y) \in K\}\), then k is the size of the extended formulation, and the smallest such k is called the extension complexity of the pair \((P_1, P_2)\). It is known (cf. [12, Theorem 1]) that the extension complexity of the pair \((P_1,P_2)\), where
$$\begin{aligned} P_1 = \mathrm {conv}(\{v_1, \ldots , v_n\}) \quad \text {and} \quad P_2 = \left\{ x : a_i^\textsf {T}x \le b_i \text { for } i \in [m]\right\} , \end{aligned}$$
is equal to the nonnegative rank of the generalized slack matrix \(S_{P_1,P_2} \in \mathbb {R}^{n \times m}\), defined by
$$\begin{aligned} (S_{P_1,P_2})_{ij} = b_j - a_j^\textsf {T}v_i \quad \text {for} \quad i\in [n], j\in [m]. \end{aligned}$$
Any nonnegative matrix is the slack matrix of some nested pair of polytopes [34, Lemma 4.1] (see also [31]).
Applying this to the pair (R(a, b), P), one immediately sees that there exists a polytope K with at most three facets whose projection \(T = \pi (K)\subseteq \mathbb {R}^2\) satisfies \(R(a,b) \subseteq T \subseteq P\) if and only if the pair (R(a, b), P) admits an extended formulation of size 3. For \(a,b>0\), the polytope T has to be two-dimensional, and therefore K has to be at least two-dimensional as well; it follows that K and T have to be triangles. Hence, there exists a triangle T such that \(R(a,b) \subseteq T \subseteq P\) if and only if the nonnegative rank of the slack matrix \(S(a,b) := S_{R(a,b),P}\) is equal to 3. One can verify that
$$\begin{aligned} S(a,b) = \begin{pmatrix} 1-a &{} 1+a &{} 1-b &{} 1+b \\ 1+a &{} 1-a &{} 1-b &{} 1+b \\ 1+a &{} 1-a &{} 1+b &{}1-b \\ 1-a &{} 1+a &{} 1+b &{} 1-b \end{pmatrix}. \end{aligned}$$
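For instance, writing the facet inequalities of \(P=[-1,1]^2\) as \(x_1\le 1\), \(-x_1\le 1\), \(x_2\le 1\), \(-x_2\le 1\), and ordering the vertices of R(a, b) as \((a,b), (-a,b), (-a,-b), (a,-b)\), the vertex (a, b) has slacks \(1-a\), \(1+a\), \(1-b\), \(1+b\) with respect to these four inequalities, which gives the first row of S(a, b); the remaining rows arise from the other three vertices in the same way.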
Such a triangle exists if and only if \((1+a)(1+b) \le 2\) (see [27, Proposition 4] for a proof sketch). To test the quality of their bound, Fawzi and Parrilo [27] compute \(\tau _+^{\mathrm {sos}}(S(a,b))\) for different values of a and b. In doing so, they determine the region where \(\tau _+^{\mathrm {sos}}(S(a,b))>3\). We do the same for the bounds \({\xi _{1,\dagger }^{\mathrm {+}}}(S(a,b)), {\xi _{2,\dagger }^{\mathrm {+}}}(S(a,b))\) and \({\xi _{3,\dagger }^{\mathrm {+}}}(S(a,b))\), see Fig. 1. The results show that \({\xi _{2,\dagger }^{\mathrm {+}}}(S(a,b))\) strictly improves upon the bound \(\tau _+^{\mathrm {sos}}(S(a,b))\), and that \({\xi _{3,\dagger }^{\mathrm {+}}}(S(a,b))\) is again a strict improvement over \({\xi _{2,\dagger }^{\mathrm {+}}}(S(a,b))\).
Fig. 1 The colored region corresponds to \({{\,\mathrm{rank}\,}}_+(S(a,b)) = 4\). The top right region (black) corresponds to \({\xi _{1,\dagger }^{\mathrm {+}}}(S(a,b)) >3\), the two top right regions (black and red) together correspond to \(\tau _+^\mathrm {sos}(S(a,b)) > 3\), the three top right regions (black, red and yellow) to \({\xi _{2,\dagger }^{\mathrm {+}}}(S(a,b))>3\), and the four top right regions (black, red, yellow, and green) to \({\xi _{3,\dagger }^{\mathrm {+}}}(S(a,b))>3\) (Color figure online)
Lower Bounds on the Positive Semidefinite Rank
The positive semidefinite rank can be seen as an asymmetric version of the completely positive semidefinite rank. Hence, as was the case in the previous section for the nonnegative rank, we need to select suitable factors in a minimal factorization in order to be able to bound their maximum eigenvalues and obtain a localizing set of polynomials leading to an Archimedean quadratic module.
For this we can follow, e.g., the approach in [52, Lemma 5] to rescale a factorization and claim that, for any \(A \in \mathbb {R}^{m\times n}_+\) with psd-rank\(_\mathbb {C}(A) = d\), there exists a factorization \(A =( \langle X_i, X_{m+j}\rangle )\) by matrices \(X_1, \ldots , X_{m+n} \in \mathrm {H}_{+}^d\) such that \(\sum _{i=1}^m X_i = I\) and \(\mathrm {Tr}(X_{m+j}) = \sum _i A_{ij}\) for all \(j\in [n]\). Indeed, starting from any factorization \(X_i,X_{m+j}\) in \(\mathrm {H}^d_+\) of A, we may replace \(X_i\) by \(X^{-1/2}X_iX^{-1/2}\) and \(X_{m+j}\) by \(X^{1/2}X_{m+j}X^{1/2}\), where \(X:=\sum _{i=1}^m X_i\) is positive definite (by minimality of d). This argument shows that the set of polynomials
$$\begin{aligned} S_A^{\mathrm {psd}}= \left\{ x_i - x_i^2 : i \in [m]\right\} \cup \left\{ \Big (\sum _{i=1}^m A_{ij}\Big ) x_{m+j} - x_{m+j}^2 : j \in [n] \right\} \end{aligned}$$
is localizing for A; that is, there is at least one minimal factorization \(\mathbf {X}\) of A such that \(g(\mathbf {X})\succeq 0\) for all polynomials \(g\in S_A^{\mathrm {psd}}\). Moreover, for the same minimal factorization \(\mathbf {X}\) of A, we have \(p(\mathbf {X}) (1-\sum _{i=1}^m X_i) = 0\) for all \(p \in \mathbb {R}\langle \mathbf{x}\rangle \).
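To spell out why these polynomials are localizing for the rescaled factorization: for \(i\in [m]\) we have \(0\preceq X_i\preceq \sum _{k=1}^m X_k = I\), so the eigenvalues of \(X_i\) lie in [0, 1] and hence \(X_i - X_i^2\succeq 0\); and for \(j\in [n]\) we have \(\lambda _\mathrm {max}(X_{m+j})\le \mathrm {Tr}(X_{m+j}) = \sum _i A_{ij}\) since \(X_{m+j}\succeq 0\), so \(\big (\sum _i A_{ij}\big ) X_{m+j} - X_{m+j}^2\succeq 0\).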
Given \(A\in \mathbb {R}^{m\times n}_{+}\), for each \(t \in \mathbb {N}\cup \{\infty \}\) we consider the semidefinite program
$$\begin{aligned} {\xi _{t}^{\mathrm {psd}}}(A) = \mathrm {min} \big \{ L(1) : \;&L \in \mathbb {R}\langle x_1,\ldots ,x_{m+n}\rangle _{2t}^*,\\&L(x_ix_{m+j}) = A_{ij} \quad \text {for} \quad i \in [m], j \in [n],\\&L \ge 0 \quad \text {on} \quad {\mathscr {M}}_{2t}(S_A^{\mathrm {psd}}), \\&L = 0 \quad \text {on} \quad {\mathscr {I}}_{2t}(1-\textstyle {\sum _{i=1}^m x_i}) \big \}. \end{aligned}$$
We additionally define \({\xi _{*}^{\mathrm {psd}}}(A)\) by adding the constraint \({{\,\mathrm{rank}\,}}(M(L)) < \infty \) to the program defining \({\xi _{\infty }^{\mathrm {psd}}}(A)\) (and considering the infimum instead of the minimum, since we do not know if the infimum is attained in \({\xi _{*}^{\mathrm {psd}}}(A)\)). By the above discussion, it follows that the parameter \({\xi _{*}^{\mathrm {psd}}}(A)\) is a lower bound on psd-rank\(_\mathbb {C}(A)\) and we have
$$\begin{aligned} {\xi _{1}^{\mathrm {psd}}}(A)\le \ldots \le {\xi _{t}^{\mathrm {psd}}}(A)\le \ldots \le {\xi _{\infty }^{\mathrm {psd}}}(A)\le {\xi _{*}^{\mathrm {psd}}}(A)\le \hbox {psd-rank}_\mathbb {C}(A). \end{aligned}$$
Note that, in contrast to the previous bounds, the parameter \({\xi _{t}^{\mathrm {psd}}}(A)\) is not invariant under rescaling the rows of A or under taking the transpose of A (see Sect. 5.2.2).
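Analogously to the earlier sketch for \({\xi _{1,\dagger }^{\mathrm {+}}}\), the following is a minimal sketch of the lowest-order bound \({\xi _{1}^{\mathrm {psd}}}(A)\), again in Python with CVXPY. We assume, as elsewhere for these hierarchies, that L may be taken symmetric and tracial, so at order \(t=1\) the truncated noncommutative moment matrix (indexed by the words \(1, x_1, \ldots , x_{m+n}\)) can be modeled as an ordinary symmetric matrix variable, and the ideal constraints coming from \(1-\sum _{i=1}^m x_i\) truncate to the linear equations indicated in the comments. Again, the function name and indexing are our own choices.

import numpy as np
import cvxpy as cp

def xi_1_psd(A):
    # Sketch of the order-1 bound xi_1^psd(A) for a nonnegative m x n matrix A.
    m, n = A.shape
    N = m + n
    col = A.sum(axis=0)                               # column sums  sum_i A_{ij}
    M = cp.Variable((1 + N, 1 + N), symmetric=True)   # order-1 moment matrix M_1(L)
    cons = [M >> 0]
    # L(x_i x_{m+j}) = A_{ij}
    for i in range(m):
        for j in range(n):
            cons.append(M[1 + i, 1 + m + j] == A[i, j])
    # localizing constraints from S_A^psd:
    for i in range(m):
        cons.append(M[0, 1 + i] - M[1 + i, 1 + i] >= 0)                         # L(x_i - x_i^2) >= 0
    for j in range(n):
        cons.append(col[j] * M[0, 1 + m + j] - M[1 + m + j, 1 + m + j] >= 0)    # L((sum_i A_ij) x_{m+j} - x_{m+j}^2) >= 0
    # ideal constraints from 1 - sum_{i in [m]} x_i, truncated at degree 2:
    cons.append(M[0, 0] - sum(M[0, 1 + i] for i in range(m)) == 0)              # L(1 - sum_i x_i) = 0
    for k in range(N):
        cons.append(M[0, 1 + k] - sum(M[1 + i, 1 + k] for i in range(m)) == 0)  # L(x_k (1 - sum_i x_i)) = 0
    prob = cp.Problem(cp.Minimize(M[0, 0]), cons)
    prob.solve()
    return prob.value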
It follows from the construction of \(S_A^{\mathrm {psd}}\) and Eq. (10) that the quadratic module \({{\mathscr {M}}}(S_A^{\mathrm {psd}})\) is Archimedean, and hence the following analog of Proposition 1 can be shown.
Let \(A \in \mathbb {R}^{m \times n}_+\). For each \(t \in \mathbb {N}\cup \{\infty \}\), the optimum in \({\xi _{t}^{\mathrm {psd}}}(A)\) is attained, and we have
$$\begin{aligned} \lim _{t \rightarrow \infty } {\xi _{t}^{\mathrm {psd}}}(A) = {\xi _{\infty }^{\mathrm {psd}}}(A). \end{aligned}$$
Moreover, \({\xi _{\infty }^{\mathrm {psd}}}(A)\) is equal to the infimum over all \(\alpha \ge 0\) for which there exists a unital \(C^*\)-algebra \({{\mathscr {A}}}\) with tracial state \(\tau \) and \(\mathbf {X}\in {\mathscr {D}}_{{\mathscr {A}}} (S_A^{\mathrm {psd}}) \cap {\mathscr {V}}_\mathscr {A}(1-\textstyle {\sum _{i=1}^m x_i})\) such that \(A = \alpha \cdot (\tau (X_iX_{m+j}))_{i\in [m],j\in [n]}\).
In [52], the following bound on the complex positive semidefinite rank was derived:
$$\begin{aligned} \hbox {psd-rank}_\mathbb {C}(A) \ge \sum _{i=1}^m \mathrm {max}_{j \in [n]} \frac{ A_{ij}}{\sum _i A_{ij}}. \qquad (28) \end{aligned}$$
If a feasible linear form L to \({\xi _{t}^{\mathrm {psd}}}(A)\) satisfies the inequalities \(L(x_i( \sum _i A_{ij} - x_{m+j}))~\ge ~0\) for all \(i \in [m], j \in [n]\), then L(1) is at least the above lower bound. Indeed, the inequalities give
$$\begin{aligned} L(x_i) \ge \mathrm {max}_{j \in [n]} \, \frac{L(x_i x_{m+j})}{\sum _i A_{ij}} = \mathrm {max}_{j \in [n]} \, \frac{A_{ij}}{ \sum _i A_{ij}}. \end{aligned}$$
Hence,
$$\begin{aligned} L(1) = \sum _{i = 1}^m L(x_i) \ge \sum _{i=1}^m \mathrm {max}_{j \in [n]} \frac{ A_{ij}}{\sum _i A_{ij}}. \end{aligned}$$
The inequalities \(L(x_i( \sum _i A_{ij} - x_{m+j})) \ge 0\) are easily seen to be valid for trace evaluations at points of \({{\mathscr {D}}}(S_A^{\mathrm {psd}})\). More importantly, as in Lemma 2, these inequalities are satisfied by feasible linear forms to the programs \({\xi _{\infty }^{\mathrm {psd}}}(A)\) and \({\xi _{*}^{\mathrm {psd}}}(A)\). Hence, \({\xi _{\infty }^{\mathrm {psd}}}(A)\) and \({\xi _{*}^{\mathrm {psd}}}(A)\) are at least as good as the lower bound (28).
In [52], two other fidelity based lower bounds on the psd-rank were defined; we do not know how they compare to \({\xi _{t}^{\mathrm {psd}}}(A)\).
Computational Examples
In this section, we apply our bounds to some (small) examples taken from the literature, namely \(3\times 3\) circulant matrices and slack matrices of small polygons.
Nonnegative Circulant Matrices of Size 3
We consider the nonnegative circulant matrices of size 3 which are, up to scaling, of the form
$$\begin{aligned} M(b,c) = \begin{pmatrix} 1 &{} b &{} c \\ c &{} 1 &{} b \\ b &{} c &{} 1 \end{pmatrix} \quad \text {with} \quad b,c \ge 0. \end{aligned}$$
If \(b=1=c\), then \({{\,\mathrm{rank}\,}}(M(b,c)) = \hbox {psd-rank}_\mathbb {R}(M(b,c)) = \hbox {psd-rank}_\mathbb {C}(M(b,c)) = 1\). Otherwise, we have \({{\,\mathrm{rank}\,}}(M(b,c))\ge 2\), which implies \(\hbox {psd-rank}_\mathbb {K}(M(b,c)) \ge 2\) for \(\mathbb {K}\in \{\mathbb {R},\mathbb {C}\}\). In [25, Example 2.7] it is shown that
$$\begin{aligned} \hbox {psd-rank}_\mathbb {R}(M(b,c)) \le 2 \quad \Longleftrightarrow \quad 1 +b^2 +c^2 \le 2(b + c + bc). \end{aligned}$$
Hence, if b and c do not satisfy the above relation then \(\hbox {psd-rank}_\mathbb {R}(M(b,c))=3\).
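For example, along the diagonal \(b=c\) the inequality reads \(1+2b^2\le 4b+2b^2\), that is, \(b\ge 1/4\); hence \(\hbox {psd-rank}_\mathbb {R}(M(b,b))=3\) for \(0\le b<1/4\), while \(\hbox {psd-rank}_\mathbb {R}(M(b,b))\le 2\) for \(b\ge 1/4\).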
Fig. 2 The colored region corresponds to the values (b, c) for which \(\hbox {psd-rank}_\mathbb {R}(M(b,c)) =3\); the outer region (yellow) shows the values of (b, c) for which \({\xi _{2}^{\mathrm {psd}}}(M(b,c))>2\) (Color figure online)
To see how good our lower bounds are for this example, we use a semidefinite programming solver to compute \({\xi _{2}^{\mathrm {psd}}}(M(b,c))\) for \((b,c) \in [0,4]^2\) (with stepsize 0.01). In Fig. 2, we see that the bound \({\xi _{2}^{\mathrm {psd}}}(M(b,c))\) certifies that \(\hbox {psd-rank}_\mathbb {R}(M(b,c)) =\hbox {psd-rank}_\mathbb {C}(M(b,c))=3\) for most values (b, c) where \(\hbox {psd-rank}_\mathbb {R}(M(b,c))=3\).
Slack Matrices of Small Polygons
Here, we consider the slack matrices of two polygons in the plane, where the bounds are sharp (after rounding) and illustrate the dependence on scaling the rows or taking the transpose. We consider the quadrilateral Q with vertices (0, 0), (0, 1), (1, 0), (2, 2), and the regular hexagon H, whose slack matrices are given by
$$\begin{aligned} S_Q = \begin{pmatrix} 0 &{}0 &{}2 &{}2 \\ 1 &{} 0 &{} 0&{} 3 \\ 0 &{} 1 &{} 3 &{} 0 \\ 2 &{}2 &{}0 &{} 0\end{pmatrix}, \qquad S_H = \begin{pmatrix} 0 &{} 1&{} 2&{} 2&{} 1&{} 0 \\ 0&{} 0&{} 1&{}2 &{}2&{} 1 \\ 1 &{} 0 &{} 0 &{} 1 &{} 2 &{} 2 \\ 2&{} 1&{} 0&{} 0&{} 1&{} 2\\ 2&{} 2&{} 1&{} 0&{} 0 &{}1 \\ 1&{} 2&{} 2&{} 1&{} 0 &{}0 \end{pmatrix}. \end{aligned}$$
Our lower bounds on the \(\hbox {psd-rank}_\mathbb {C}\) are not invariant under taking the transpose, indeed numerically we have \({\xi _{2}^{\mathrm {psd}}}(S_Q) \approx 2.266\) and \({\xi _{2}^{\mathrm {psd}}}(S_Q^\textsf {T}) \approx 2.5\). The slack matrix \(S_Q\) has \(\hbox {psd-rank}_\mathbb {R}(S_Q) = 3\) (a corollary of [35, Theorem 4.3]) and therefore both bounds certify \(\hbox {psd-rank}_\mathbb {C}(S_Q) = 3 = \hbox {psd-rank}_\mathbb {R}(S_Q)\).
Secondly, our bounds are not invariant under rescaling the rows of a nonnegative matrix. Numerically we have \({\xi _{2}^{\mathrm {psd}}}(S_H) \approx 1.99\) while \({\xi _{2}^{\mathrm {psd}}}(DS_H) \approx 2.12\), where \(D = \mathrm {Diag}(2,2,1,1,1,1)\). The bound \({\xi _{2}^{\mathrm {psd}}}(DS_H)\) is in fact tight (after rounding) for the complex positive semidefinite rank of \(DS_H\) and hence of \(S_H\): in [33] it is shown that \(\hbox {psd-rank}_\mathbb {C}(S_H) = 3\).
Discussion and Future Work
In this work, we provide a unified approach for the four matrix factorizations obtained by considering (a)symmetric factorizations by nonnegative vectors and positive semidefinite matrices. Our methods can be extended to the nonnegative tensor rank, which is defined as the smallest integer d for which a k-tensor \(A \in \mathbb {R}_+^{n_1 \times \cdots \times n_k}\) can be written as \(A = \sum _{l=1}^d u_{1,l}\otimes \cdots \otimes u_{k,l}\) for nonnegative vectors \(u_{j,l} \in \mathbb {R}_+^{n_j}\). The approach from Sect. 4 for \({{\,\mathrm{rank}\,}}_+\) can be extended to obtain a hierarchy of lower bounds on the nonnegative tensor rank. For instance, if A is a 3-tensor, the analogous bound \({\xi _{t}^{\mathrm {+}}}(A)\) is obtained by minimizing L(1) over \(L\in \mathbb {R}[x_{1},\ldots ,x_{n_1+n_2+n_3}]^*\) such that \(L(x_{i_1}x_{n_1+i_2}x_{n_1+n_2+i_3})=A_{i_1i_2i_3}\) (for \(i_1\in [n_1],i_2\in [n_2],i_3\in [n_3]\)), using as localizing polynomials in \(S_A^+\) the polynomials \(\root 3 \of {A_\mathrm {max}}x_i-x_i^2\) and \(A_{i_1i_2i_3}- x_{i_1}x_{n_1+i_2}x_{n_1+n_2+i_3}\). As in the matrix case one can compare to the bounds \(\tau _+(A)\) and \(\tau _+^{\mathrm {sos}}(A)\) from [27]. One can show \({\xi _{*}^{\mathrm {+}}}(A)=\tau _+(A)\), and one can show \({\xi _{3,\dagger }^{\mathrm {+}}}(A) \ge \tau _+^{\mathrm {sos}}(A)\) after adding the conditions \(L(x_{i_1}x_{n_1+i_2}x_{n_1+n_2+i_3}(A_{i_1i_2i_3}- x_{i_1}x_{n_1+i_2}x_{n_1+n_2+i_3}))\ge 0\) to \({\xi _{3}^{\mathrm {+}}}(A)\).
Testing membership in the completely positive cone and the completely positive semidefinite cone is another important problem, to which our hierarchies can also be applied. It follows from the proof of Proposition 8 that if A is not completely positive then, for some order t, the program \({\xi _{t}^{\mathrm {cp}}}(A)\) is infeasible or its optimum value is larger than the Caratheodory bound on the cp-rank (which is similar to an earlier result in [58]). In the noncommutative setting, the situation is more complicated: If \({\xi _{*}^{\mathrm {cpsd}}}(A)\) is feasible, then \(A\in \mathrm {CS}_{+}\), and if \(A\not \in \mathrm {CS}_{+,\mathrm {vN}}^n\), then \({\xi _{\infty }^{\mathrm {cpsd}}}(A)\) is infeasible (Propositions 1 and 2). Here, \(\mathrm {CS}_{+,\mathrm {vN}}^n\) is the cone defined in [17] consisting of the matrices admitting a factorization in a von Neumann algebra with a trace. By Lemma 12, \(\mathrm {CS}_{+,\mathrm {vN}}^n\) can equivalently be characterized as the set of matrices of the form \(\alpha \, (\tau (a_ia_j))\) for some \(C^*\)-algebra \({\mathscr {A}}\) with tracial state \(\tau \), positive elements \(a_1,\ldots ,a_n\in {\mathscr {A}}\) and \(\alpha \in \mathbb {R}_+\).
Our lower bounds are on the complex version of the (completely) positive semidefinite rank. As far as we are aware, the existing lower bounds (except for the dimension counting rank lower bound) are also on the complex (completely) positive semidefinite rank. It would be interesting to find a lower bound on the real (completely) positive semidefinite rank that can go beyond the complex (completely) positive semidefinite rank.
We conclude with some open questions regarding applications of lower bounds on matrix factorization ranks. First, as was shown in [38, 62, 63], completely positive semidefinite matrices whose \(\hbox {cpsd-rank}_\mathbb {C}\) is larger than their size do exist, but currently we do not know how to construct small examples for which this holds. Hence, a concrete question: Does there exist a \(5 \times 5\) completely positive semidefinite matrix whose \(\hbox {cpsd-rank}_\mathbb {C}\) is at least 6? Second, as we mentioned before, the asymmetric setting corresponds to (semidefinite) extension complexity of polytopes. Rothvoß' result [66] (indirectly) shows that the parameter \({\xi _{\infty }^{\mathrm {+}}}\) is exponential (in the number of nodes of the graph) for the slack matrix of the matching polytope. Can this result also be shown directly using the dual formulation of \({\xi _{\infty }^{\mathrm {+}}}\), that is, by a sum-of-squares certificate? If so, could one extend the argument to the noncommutative setting (which would show a lower bound on the semidefinite extension complexity)?
Here, and throughout the paper, we use \([\mathbf{x}]\) as the commutative analog of \(\langle \mathbf{x}\rangle \).
In fact, one could consider optimization over \({\mathscr {D}}(S)\cap {\mathscr {V}}(T)\) for some finite set \(T \subseteq \mathbb {R}\langle \mathbf{x}\rangle \), the results below still hold in that setting, see Appendix A.
Note that in the commutative setting we could avoid using the variety since \(V(T)=D(\pm T)\). However, in the noncommutative setting, the polynomials in T need not be symmetric in which case the quadratic module \({\mathscr {D}}(\pm T)\) would not be well defined.
M.F. Anjos and J.B. Lasserre. Handbook on Semidefinite, Conic and Polynomial Optimization. International Series in Operations Research & Management Science Series, Springer, 2012.
A. Atserias, L. Mančinska, D. Roberson, R. Šámal, S. Severini, and A. Varvitsiotis. Quantum and non-signalling graph isomorphisms. Journal of Combinatorial Theory, Series B (2018). https://doi.org/10.1016/j.jctb.2018.11.002.
G.P. Barker, L.Q. Eifler, and T.P. Kezlan. A non-commutative spectral theorem, Linear Algebra and its Applications 20(2) (1978), 95–100.
C. Bayer, J. Teichmann. The proof of Tchakaloff's theorem. Proceedings of the American Mathematical Society 134 (2006), 3035–3040.
A. Berman, U.G. Rothblum. A note on the computation of the cp-rank. Linear Algebra and its Applications 419 (2006), 1–7.
A. Berman, N. Shaked-Monderer. Completely Positive Matrices. World Scientific, 2003.
M. Berta, O. Fawzi, V.B. Scholz. Quantum bilinear optimization. SIAM Journal on Optimization 26(3) (2016), 1529–1564.
J. Bezanson, A. Edelman, S. Karpinski, V.B. Shah. Julia: A Fresh Approach to Numerical Computing. SIAM Review 59(1) (2017), 65–98.
B. Blackadar. Operator Algebras: Theory of C*-Algebras and Von Neumann Algebras. Encyclopaedia of Mathematical Sciences, Springer, 2006.
I.M. Bomze, W. Schachinger, R. Ullrich. From seven to eleven: Completely positive matrices with high cp-rank. Linear Algebra and its Applications 459 (2014), 208 – 221.
I.M. Bomze, W. Schachinger, R. Ullrich. New lower bounds and asymptotics for the cp-rank. SIAM Journal on Matrix Analysis and Applications 36 (2015), 20–37.
G. Braun, S. Fiorini, S. Pokutta, D. Steurer. Approximation limits of linear programs (beyond hierarchies). Mathematics of Operations Research 40(3) (2015), 756–772. Appeared earlier in FOCS'12.
S. Burer. On the copositive representation of binary and continuous nonconvex quadratic programs. Mathematical Programming 120(2) (2009), 479–495.
S. Burgdorf, K. Cafuta, I. Klep, J. Povh. The tracial moment problem and trace-optimization of polynomials. Mathematical Programming 137(1) (2013), 557–578.
S. Burgdorf, I. Klep. The truncated tracial moment problem. Journal of Operator Theory 68(1) (2012), 141–163.
S. Burgdorf, I. Klep, J. Povh. Optimization of Polynomials in Non-Commutative Variables. Springer Briefs in Mathematics, Springer, 2016.
S. Burgdorf, M. Laurent, T. Piovesan. On the closure of the completely positive semidefinite cone and linear approximations to quantum colorings. Electronic Journal of Linear Algebra 32 (2017), 15–40.
M. Conforti, G. Cornuéjols, G. Zambelli. Extended formulations in combinatorial optimization. 4OR 8 (2010), 1–48.
R.E. Curto, L.A. Fialkow. Solution of the Truncated Complex Moment Problem for Flat Data. Memoirs of the American Mathematical Society, American Mathematical Society, 1996.
P. Dickinson, M. Dür. Linear-time complete positivity detection and decomposition of sparse matrices. SIAM Journal on Matrix Analysis and Applications 33(3) (2012), 701–720.
J.H. Drew, C.R. Johnson, R. Loewy. Completely positive matrices associated with M-matrices. Linear and Multilinear Algebra 37(4) (1994), 303–310.
K.J. Dykema, V.I. Paulsen, J. Prakash. Non-closure of the set of quantum correlations via graphs, arXiv:1709.05032 (2017).
J. Edmonds. Maximum matching and a polyhedron with \(0,1\) vertices. Journal of Research of the National Bureau of Standards 69 B (1965), 125–130.
Y. Faenza, S. Fiorini, R. Grappe, H. Tiwari. Extended formulations, non-negative factorizations and randomized communication protocols. Mathematical Programming 153(1) (2015), 75–94.
H. Fawzi, J. Gouveia, P.A. Parrilo, R.Z. Robinson, R.R. Thomas. Positive semidefinite rank. Mathematical Programming 153(1) (2015), 133–177.
H. Fawzi, P.A. Parrilo. Lower bounds on nonnegative rank via nonnegative nuclear norms. Mathematical Programming 153(1) (2015), 41–66.
H. Fawzi, P.A. Parrilo. Self-scaled bounds for atomic cone ranks: applications to nonnegative rank and cp-rank. Mathematical Programming 158(1) (2016), 417–465.
S. Fiorini, V. Kaibel, K. Pashkovich, D. Theis. Combinatorial bounds on nonnegative rank and extended formulations. Discrete Mathematics 313(1) (2013), 67–83.
S. Fiorini, S. Massar, S. Pokutta, H.R. Tiwary, R. de Wolf. Exponential lower bounds for polytopes in combinatorial optimization. Journal of the ACM 62(2) (2015), 17:1–17:23. Appeared earlier in STOC'12.
N. Gillis. Introduction to nonnegative matrix factorization. SIAG/OPT Views and News 25(1) (2017), 7–16.
N. Gillis, F. Glineur. On the geometric interpretation of the nonnegative rank. Linear Algebra and its Applications 437(11) (2012), 2685–2712.
M. Goemans. Smallest compact formulation for the permutahedron. Mathematical Programming 153(1) (2015), 5–11.
A.P. Goucha, J. Gouveia, P.M. Silva. On ranks of regular polygons. SIAM Journal on Discrete Mathematics 31(4) (2016), 2612–2625.
J. Gouveia, P.A. Parrilo, R.R. Thomas. Lifts of convex sets and cone factorizations. Mathematics of Operations Research 38(2) (2013), 248–264.
J. Gouveia, R.Z. Robinson, R.R. Thomas. Polytopes of minimum positive semidefinite rank. Discrete & Computational Geometry 50(3) (2013), 679–699.
M. Grant, S. Boyd. CVX: Matlab Software for Disciplined Convex Programming, version 2.1, 2014. http://cvxr.com/cvx
S. Gribling, D. de Laat, M. Laurent. Bounds on entanglement dimensions and quantum graph parameters via noncommutative polynomial optimization. Mathematical Programming Series B 171(1) (2018), 5–42.
S. Gribling, D. de Laat, M. Laurent. Matrices with high completely positive semidefinite rank. Linear Algebra and its Applications 513 (2017), 122 – 148.
P. Groetzner, M. Dür. A factorization method for completely positive matrices. Preprint (2018), http://www.optimization-online.org/DB_HTML/2018/03/6511.html.
M. Grötschel, L. Lovász., A. Schrijver. The ellipsoid method and its consequences in combinatorial optimization. Combinatorica 1(2) (1981), 169–197.
E.K. Haviland. On the Momentum Problem for Distribution Functions in More Than One Dimension. II. American Journal of Mathematics 58(1) (1936), 164–168.
R. Jain, Y. Shi, Z. Wei, S. Zhang. Efficient protocols for generating bipartite classical distributions and quantum states. IEEE Transactions on Information Theory 59(8) (2013), 5171–5178.
I. Klep, J. Povh. Constrained trace-optimization of polynomials in freely noncommuting variables. Journal of Global Optimization 64(2) (2016), 325–348.
I. Klep, M. Schweighofer. Connes' embedding conjecture and sums of hermitian squares. Advances in Mathematics 217(4) (2008), 1816–1837.
E. de Klerk, D.V. Pasechnik. Approximation of the stability number of a graph via copositive programming. SIAM Journal on Optimization 12(4) (2002), 875–892.
J.B. Lasserre. Global optimization with polynomials and the problem of moments. SIAM Journal on Optimization 11(3) (2001), 796–817.
J.B. Lasserre. Moments, Positive Polynomials and Their Applications. Imperial College Press, 2009.
J.B. Lasserre. New approximations for the cone of copositive matrices and its dual. Mathematical Programming 144(1-2) (2014), 265–276.
M. Laurent. Sums of squares, moment matrices and optimization over polynomials. In Emerging Applications of Algebraic Geometry (M. Putinar, S. Sullivant eds.), Springer, 2009, pp. 157–270.
M. Laurent, T. Piovesan. Conic approach to quantum graph parameters using linear optimization over the completely positive semidefinite cone. SIAM Journal on Optimization 25(4) (2015), 2461–2493.
J.R. Lee, P. Raghavendra, D. Steurer. Lower bounds on the size of semidefinite programming relaxations. In Proceedings of the Forty-seventh Annual ACM Symposium on Theory of Computing, STOC'15, 2015, pp. 567–576.
T. Lee, Z. Wei, R. de Wolf. Some upper and lower bounds on psd-rank. Mathematical Programming 162(1) (2017), 495–521.
L. Mančinska, D. Roberson. Note on the correspondence between quantum correlations and the completely positive semidefinite cone. Available at quantuminfo.quantumlah.org/memberpages/laura/corr.pdf (2014).
R.K. Martin. Using separation algorithms to generate mixed integer model reformulations. Operations Research Letters 10(3) (1991), 119–128.
D. Mond, J. Smith, D. van Straten. Stochastic factorizations, sandwiched simplices and the topology of the space of explanations. Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences 459(2039) (2003), 2821–2845.
MOSEK ApS. The MOSEK optimization toolbox for MATLAB manual. Version 8.0.0.81, 2017. URL http://docs.mosek.com/8.0/toolbox.pdf
M. Navascués, S. Pironio, A. Acín. SDP relaxations for non-commutative polynomial optimization. In Handbook on Semidefinite, Conic and Polynomial Optimization (M.F. Anjos, J.B. Lasserre eds.). Springer, 2012, pp. 601–634.
J. Nie. The \({\cal{A}}\)-truncated \(K\)-moment problem. Foundations of Computational Mathematics 14(6) (2014), 1243–1276.
J. Nie. Symmetric tensor nuclear norms. SIAM Journal on Applied Algebra and Geometry 1(1) (2017), 599–625.
P.A. Parrilo. Structured Semidefinite Programs and Semialgebraic Geometry Methods in Robustness and Optimization. PhD thesis, Caltech, 2000.
S. Pironio, M. Navascués, A. Acín. Convergent relaxations of polynomial optimization problems with noncommuting variables. SIAM Journal on Optimization 20(5) (2010), 2157–2180.
A. Prakash, J. Sikora, A. Varvitsiotis, Z. Wei. Completely positive semidefinite rank. Mathematical Programming 171(1–2) (2017), 397–431.
A. Prakash, A. Varvitsiotis. Correlation matrices, Clifford algebras, and completely positive semidefinite rank. Linear and Multilinear Algebra (2018). https://doi.org/10.1080/03081087.2018.1529136.
M. Putinar. Positive polynomials on compact semi-algebraic sets. Indiana University Mathematics Journal 42 (1993), 969–984.
J. Renegar. On the computational complexity and geometry of the first-order theory of the reals. Part I: Introduction. Preliminaries. The geometry of semi-algebraic sets. The decision problem for the existential theory of the reals. Journal of Symbolic Computation 13(3) (1992), 255 – 299.
T. Rothvoss. The matching polytope has exponential extension complexity. In Proceedings of the Forty-sixth Annual ACM Symposium on Theory of Computing, STOC'14, 2014, pp. 263–272.
W. Rudin. Real and complex analysis. Mathematics series. McGraw-Hill, 1987.
N. Shaked-Monderer, A. Berman, I.M. Bomze, F. Jarre, W. Schachinger. New results on the cp-rank and related properties of co(mpletely )positive matrices. Linear and Multilinear Algebra 63(2) (2015), 384–396.
N. Shaked-Monderer, I.M. Bomze, F. Jarre, W. Schachinger. On the cp-rank and minimal cp factorizations of a completely positive matrix. SIAM Journal on Matrix Analysis and Applications 34(2) (2013), 355–368.
Y. Shitov. A universality theorem for nonnegative matrix factorizations. arXiv:1606.09068v2 (2016).
Y. Shitov. The complexity of positive semidefinite matrix factorization. SIAM Journal on Optimization 27(3) (2017), 1898–1909.
J. Sikora, A. Varvitsiotis. Linear conic formulations for two-party correlations and values of nonlocal games. Mathematical Programming 162(1) (2017), 431–463.
W. Slofstra. The set of quantum correlations is not closed. arXiv:1703.08618 (2017).
G. Tang, P. Shah. Guaranteed tensor decomposition: A moment approach. In Proceedings of the 32nd International Conference on International Conference on Machine Learning, ICML'15, 2015, pp. 1491–1500.
A. Vandaele, F. Glineur, N. Gillis. Algorithms for positive semidefinite factorization. Computational Optimization and Applications 71(1) (2018), 193–219.
S.A. Vavasis. On the complexity of nonnegative matrix factorization. SIAM Journal on Optimization 20(3) (2009), 1364–1377.
J.H.M. Wedderburn. Lectures on Matrices. Dover Publications Inc., 1964.
M. Yannakakis. Expressing combinatorial optimization problems by linear programs. Journal of Computer and System Sciences 43(3) (1991), 441 – 466.
The authors would like to thank Sabine Burgdorf for helpful discussions and an anonymous referee for suggestions that helped improve the presentation.
CWI, Amsterdam, The Netherlands: Sander Gribling, David de Laat & Monique Laurent
Tilburg University, Tilburg, The Netherlands: Monique Laurent
Correspondence to Monique Laurent.
The first and second authors are supported by the Netherlands Organization for Scientific Research, Grant number 617.001.351, and the second author by the ERC Consolidator Grant QPROGRESS 615307.
Communicated by Agnes Szanto.
Commutative and Tracial Polynomial Optimization
In this appendix, we discuss known convergence and flatness results for commutative and tracial polynomial optimization. We present these results in such a way that they can be directly used for our hierarchies of lower bounds on matrix factorization ranks. Although the commutative case was developed first, here we treat the commutative and tracial cases together. For the reader's convenience, we provide all proofs by working on the "moment side"; that is, relying on properties of linear functionals rather than using real algebraic results on sums of squares. Tracial optimization is an adaptation of eigenvalue optimization as developed in [61], but here we only discuss the commutative and tracial cases, as these are most relevant to our work.
Flat Extensions and Representations of Linear Forms
The optimization variables in the optimization problems considered in this paper are linear forms on spaces of (noncommutative) polynomials. To study the properties of the bounds obtained through these optimization problems, we need to study properties and representations of (flat) linear forms on polynomial spaces.
In Sect. 1.3, the key examples of symmetric tracial linear functionals on \(\mathbb {R}\langle \mathbf{x}\rangle _{2t}\) are trace evaluations on a (finite dimensional) \(C^*\)-algebra. In this section, we present some results that provide conditions under which, conversely, a symmetric tracial linear map on \(\mathbb {R}\langle \mathbf{x}\rangle _{2t}\) (\(t \in \mathbb {N}\cup \{\infty \}\)) that is nonnegative on \({{\mathscr {M}}}(S)\) and zero on \({\mathscr {I}}(T)\) arises from trace evaluations at elements in the intersection of the \(C^*\)-algebraic analogs of the matrix positivity domain of S and the matrix ideal of T. In Theorems 1 and 2, we consider the case \(t= \infty \) and in Theorem 3 we consider the case \(t \in \mathbb {N}\). Results like these can for instance be used to link the linear forms arising in the limiting optimization problems of our hierarchies to matrix factorization ranks.
The proofs of Theorems 1 and 2 use a classical Gelfand–Naimark–Segal (GNS) construction. In these proofs, it will also be convenient to work with the concept of the null space of a linear functional \(L\in \mathbb {R}\langle \mathbf{x}\rangle _{2t}^*\), which is defined as the vector space
$$\begin{aligned} N_t(L) = \big \{ p \in \mathbb {R}\langle \mathbf {x}\rangle _t : L(qp) = 0 \text { for } q\in \mathbb {R}\langle \mathbf {x}\rangle _t\big \}. \end{aligned}$$
We use the notation \(N(L)=N_\infty (L)\) for the nontruncated null space. Recall that \(M_t(L)\) is the moment matrix associated with L: its rows and columns are indexed by words in \(\langle \mathbf{x}\rangle _t\), and its entries are given by \(M_t(L)_{w,w'} = L(w^* w')\) for \(w,w' \in \langle \mathbf{x}\rangle _t\). The null space of L can therefore be identified with the kernel of \(M_t(L)\): A polynomial \(p=\sum _{w}c_w w\) belongs to \(N_t(L)\) if and only if its coefficient vector \((c_w)\) belongs to the kernel of \(M_t(L)\).
In Sect. 1.3, we defined a linear functional \(L \in \mathbb {R}\langle \mathbf{x}\rangle _{2t}^*\) to be \(\delta \)-flat based on the rank stabilization property (4) of its moment matrix: \({{\,\mathrm{rank}\,}}(M_t(L)) = {{\,\mathrm{rank}\,}}(M_{t-\delta }(L))\). This definition can be reformulated in terms of a decomposition of the corresponding polynomial space using the null space: the form L is \(\delta \)-flat if and only if
$$\begin{aligned} \mathbb {R}\langle \mathbf {x}\rangle _t = \mathbb {R}\langle \mathbf {x}\rangle _{t-\delta } + N_t(L). \end{aligned}$$
Recall that L is said to be flat if it is \(\delta \)-flat for some \(\delta \ge 1\). Finally, in the nontruncated case (\(t=\infty \)), L was called flat if \({{\,\mathrm{rank}\,}}(M(L))<\infty \). We can now see that \({{\,\mathrm{rank}\,}}(M(L))<\infty \) if and only if there exists an integer \(s \in \mathbb {N}\) such that \(\mathbb {R}\langle \mathbf{x}\rangle = \mathbb {R}\langle \mathbf{x}\rangle _s + N(L)\).
Theorem 1 below is implicit in several works (see, e.g., [16, 57]). Here, we assume that \({\mathscr {M}}(S) + {\mathscr {I}}(T)\) is Archimedean, which we recall means that there exists a scalar \(R>0\) such that \(R-\sum _{i=1}^n x_i^2 \in {\mathscr {M}}(S)+{\mathscr {I}}(T)\).
Theorem 1
Let \(S\subseteq \mathrm {Sym}\, \mathbb {R}\langle \mathbf{x}\rangle \) and \(T \subseteq \mathbb {R}\langle \mathbf{x}\rangle \) with \({{\mathscr {M}}}(S)+ {\mathscr {I}}(T)\) Archimedean. Given a linear form \(L\in \mathbb {R}\langle \mathbf{x}\rangle ^*\), the following are equivalent:
(1) L is symmetric, tracial, nonnegative on \({{\mathscr {M}}}(S)\), zero on \({\mathscr {I}}(T)\), and \(L(1) = 1\);
(2) there is a unital \(C^*\)-algebra \({\mathscr {A}}\) with tracial state \(\tau \) and \(\mathbf{X} \in {\mathscr {D}}_{{\mathscr {A}}}(S)\cap {\mathscr {V}}_{{\mathscr {A}}}(T)\) with
$$\begin{aligned} L(p)=\tau (p(\mathbf{X})) \quad \text {for all} \quad p\in \mathbb {R}\langle \mathbf{x}\rangle . \qquad (29) \end{aligned}$$
We first prove the easy direction \((2) \Rightarrow (1)\): We have
$$\begin{aligned} L(p^*) = \tau (p^*(\mathbf {X})) = \tau (p(\mathbf {X})^*) = \overline{\tau (p(\mathbf {X}))} = \overline{L(p)} = L(p), \end{aligned}$$
where we use that \(\tau \) is Hermitian and \(X_i^* = X_i\) for \(i \in [n]\). Moreover, L is tracial since \(\tau \) is tracial. In addition, for \(g \in S \cup \{1\}\) and \(p \in \mathbb {R}\langle \mathbf {x}\rangle \) we have
$$\begin{aligned} L(p^*gp) = \tau (p^*(\mathbf {X}) g(\mathbf {X}) p(\mathbf {X})) = \tau (p(\mathbf {X})^* g(\mathbf {X}) p(\mathbf {X})) \ge 0, \end{aligned}$$
since \(g(\mathbf{X})\) is positive in \({\mathscr {A}}\) as \(\mathbf {X}\in {{\mathscr {D}}}_{{\mathscr {A}}}(S)\) and \(\tau \) is positive. Similarly \(L(hq) = \tau (h(\mathbf {X}) q(\mathbf {X})) = 0\) for all \(h \in T\), since \(\mathbf X\in {\mathscr {V}}_{{\mathscr {A}}}(T)\).
We show \((1) \Rightarrow (2)\) by applying a GNS construction. Consider the quotient vector space \(\mathbb {R}\langle \mathbf{x}\rangle /N(L)\), and denote the class of p in \(\mathbb {R}\langle \mathbf{x}\rangle /N(L)\) by \(\overline{p}\). We can equip this quotient with the inner product \(\langle \overline{p},\overline{q}\rangle =L(p^*q)\) for \(p,q\in \mathbb {R}\langle \mathbf{x}\rangle \), so that the completion \({\mathscr {H}}\) of \(\mathbb {R}\langle \mathbf{x}\rangle /N(L)\) is a separable Hilbert space. As N(L) is a left ideal in \(\mathbb {R}\langle \mathbf{x}\rangle \), the operator
$$\begin{aligned} X_i :\mathbb {R}\langle \mathbf{x}\rangle /N(L) \rightarrow \mathbb {R}\langle \mathbf{x}\rangle /N(L), \, \overline{p} \mapsto \overline{x_ip} \end{aligned}$$
is well defined. We have
$$\begin{aligned} \langle X_i\,\overline{p},\overline{q}\rangle = L((x_ip)^*q) = L(p^*x_iq)=\langle \overline{p},X_i\overline{q}\rangle \quad \text {for all} \quad p,q \in \mathbb {R}\langle \mathbf{x} \rangle , \end{aligned}$$
so the \(X_i\) are self-adjoint. Since \(g \in S \cup \{1\}\) is symmetric and \(\langle \overline{p}, g(\mathbf {X}) \overline{p}\rangle = \langle \overline{p},\overline{gp}\rangle = L(p^* g p)\ge 0\) for all p we have \(g(\mathbf{X}) \succeq 0\). By the Archimedean condition, there exists an \(R > 0\) such that \(R-\sum _{i=1}^nx_i^2\in {\mathscr {M}}(S)+{{\mathscr {I}}}(T)\). Using \(R-x_i^2= (R-\sum _{j=1}^nx_j^2)+\sum _{j\ne i}x_j^2 \in {\mathscr {M}}(S)+{{\mathscr {I}}}(T)\), we get
$$\begin{aligned} \langle X_i\overline{p},X_i\overline{p}\rangle = L(p^*x_i^2p)\le R\cdot L(p^*p)=R\langle \overline{p},\overline{p}\rangle \quad \text {for all} \quad p \in \mathbb {R}\langle \mathbf{x} \rangle . \end{aligned}$$
So each \(X_i\) extends to a bounded self-adjoint operator, also denoted \(X_i\), on the Hilbert space \({\mathscr {H}}\) such that \(g(\mathbf{X})\) is positive for all \(g \in S \cup \{1\}\). Moreover, we have \( \langle \overline{f}, h(\mathbf {X}) \overline{1} \rangle = L(f^* h) = 0\) for all \(f \in \mathbb {R}\langle \mathbf{x}\rangle , h \in T\).
The operators \(X_i \in {\mathscr {B}}({\mathscr {H}})\) extend to self-adjoint operators in \({\mathscr {B}}(\mathbb {C}\otimes _\mathbb {R}{\mathscr {H}})\), where \(\mathbb {C}\otimes _\mathbb {R}{\mathscr {H}}\) is the complexification of \({\mathscr {H}}\). Let \({\mathscr {A}}\) be the unital \(C^*\)-algebra obtained by taking the operator norm closure of \(\mathbb {R}\langle \mathbf {X}\rangle \subseteq {\mathscr {B}}(\mathbb {C}\otimes _\mathbb {R}{\mathscr {H}})\). It follows that \(\mathbf {X}\in {\mathscr {D}}_{{\mathscr {A}}}(S) \cap {\mathscr {V}}_{{\mathscr {A}}}(T)\).
Define the state \(\tau \) on \({\mathscr {A}}\) by \(\tau (a) = \langle \overline{1}, a\overline{1} \rangle \) for \(a \in {\mathscr {A}}\). For all \(p,q \in \mathbb {R}\langle \mathbf{x}\rangle \), we have
$$\begin{aligned} \tau (p(\mathbf {X}) q(\mathbf {X})) = \langle \overline{1}, p(\mathbf {X}) q(\mathbf {X})\overline{1} \rangle = \langle \overline{1}, \overline{pq} \rangle = L(pq), \qquad (31) \end{aligned}$$
so that the restriction of \(\tau \) to \(\mathbb {R}\langle \mathbf {X}\rangle \) is tracial. Since \(\mathbb {R}\langle \mathbf {X}\rangle \) is dense in \({\mathscr {A}}\) in the operator norm, this implies \(\tau \) is tracial.
To conclude the proof, observe that (29) follows from (31) by taking \(q=1\). \(\square \)
The next result can be seen as a finite dimensional analog of the above result, where we do not need \({\mathscr {M}}(S) +{\mathscr {I}}(T)\) to be Archimedean, but instead we assume the rank of M(L) to be finite (i.e., L to be flat). In addition to the Gelfand–Naimark–Segal construction, the proof uses Artin–Wedderburn theory. For the unconstrained case, the proof of this result can be found in [15], and in [16, 43] this result is extended to the constrained case.
For \(S\subseteq \mathrm {Sym}\, \mathbb {R}\langle \mathbf{x}\rangle \), \(T \subseteq \mathbb {R}\langle \mathbf{x}\rangle \), and \(L\in \mathbb {R}\langle \mathbf{x}\rangle ^*\), the following are equivalent:
L is a symmetric, tracial, linear form with \(L(1) =1\) that is nonnegative on \({{\mathscr {M}}}(S)\), zero on \({\mathscr {I}}(T)\), and has \(\mathrm {rank}(M(L)) < \infty \);
there is a finite dimensional \(C^*\)-algebra \({\mathscr {A}}\) with a tracial state \(\tau \), and \(\mathbf{X} \in {\mathscr {D}}_{{\mathscr {A}}}(S) \cap {\mathscr {V}}_{{\mathscr {A}}}(T)\) satisfying equation (29);
L is a convex combination of normalized trace evaluations at points in \({{\mathscr {D}}}(S) \cap {\mathscr {V}}(T)\).
((1) \(\Rightarrow \) (2)) Here, we can follow the proof of Theorem 1, with the extra observation that the condition \({{\,\mathrm{rank}\,}}(M(L))<\infty \) implies that the quotient space \(\mathbb {R}\langle \mathbf{x}\rangle /N(L)\) is finite dimensional. Since \(\mathbb {R}\langle \mathbf{x}\rangle /N(L)\) is finite dimensional the multiplication operators are bounded, and the constructed \(C^*\)-algebra \({\mathscr {A}}\) is finite dimensional.
((2) \(\Rightarrow \) (3)) By Artin–Wedderburn theory, there exists a \(*\)-isomorphism
$$\begin{aligned} \varphi :{\mathscr {A}} \rightarrow \bigoplus _{m=1}^M \mathbb {C}^{d_m \times d_m}\quad \text { for some } \ M\in \mathbb {N},\ d_1,\ldots ,d_M\in \mathbb {N}. \end{aligned}$$
Define the \(*\)-homomorphisms \(\varphi _m :{\mathscr {A}} \rightarrow \mathbb {C}^{d_m \times d_m}\) for \(m\in [M]\) by \(\varphi = \oplus _{m=1}^M \varphi _m\). Then, for each \(m\in [M]\), the map \(\mathbb {C}^{d_m \times d_m} \rightarrow \mathbb {C}\) defined by \(X \mapsto \tau (\varphi _m^{-1}(X))\) is a positive tracial linear form, and hence, it is a nonnegative multiple \(\lambda _m \mathrm {tr}(\cdot )\) of the normalized matrix trace (since, for a full matrix algebra, the normalized trace is the unique tracial state). Then, we have \(\tau (a) = \sum _m \lambda _m\, \mathrm {tr}(\varphi _m(a))\) for all \(a\in {{\mathscr {A}}}\). So \(\tau (\cdot )=\sum _m\lambda _m \mathrm {tr}(\cdot )\) for nonnegative scalars \(\lambda _m\) with \(\sum _m \lambda _m = L(1) = 1\). By defining the matrices \(X_i^m = \varphi _m(X_i)\) for \(m\in [M]\), we get
$$\begin{aligned} L(p) = \tau (p(X_1,\ldots ,X_n)) = \sum _{m=1}^M \lambda _m\, \mathrm {tr}(p(X_1^m, \ldots , X_n^m)) \quad \text { for all }\quad p\in \mathbb {R}\langle \mathbf{x}\rangle . \end{aligned}$$
Since \(\varphi _m\) is a \(*\)-homomorphism, we have \(g(X_1^m,\ldots ,X_n^m) \succeq 0\) for all \(g \in S \cup \{1\}\) and also \(h(X_1^m,\ldots ,X_n^m) = 0\) for all \(h \in T\), which shows \((X_1^m,\ldots ,X_n^m) \in {\mathscr {D}}(S) \cap {\mathscr {V}}(T)\).
((3) \(\Rightarrow \) (1)) If L is a conic combination of trace evaluations at elements from \({\mathscr {D}}(S)\cap {\mathscr {V}}(T)\), then L is symmetric, tracial, nonnegative on \({\mathscr {M}}(S)\), zero on \({\mathscr {I}}(T)\), and satisfies \({{\,\mathrm{rank}\,}}(M(L)) < \infty \) because the moment matrix of any trace evaluation has finite rank. \(\square \)
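The step in the proof of (2) \(\Rightarrow \) (3) above that identifies every tracial state on a full matrix algebra with the normalized trace is easy to check numerically in small dimension. The following sketch is our own illustration (Python/NumPy; the paper contains no code) and works over the reals: it verifies that the only linear functionals on \(d\times d\) matrices vanishing on all commutators of matrix units are the scalar multiples of the trace.

```python
import numpy as np

d = 3

def unit(a, b):
    """Matrix unit E_ab."""
    E = np.zeros((d, d))
    E[a, b] = 1.0
    return E

# A linear functional tau on d x d matrices is represented by a matrix C via tau(A) = <C, A>.
# Traciality means tau vanishes on every commutator E_ab E_cd - E_cd E_ab.
rows = []
for a in range(d):
    for b in range(d):
        for c in range(d):
            for e in range(d):
                comm = unit(a, b) @ unit(c, e) - unit(c, e) @ unit(a, b)
                rows.append(comm.ravel())
A = np.array(rows)

# The tracial functionals form the null space of A; it is one-dimensional, spanned by the trace.
_, s, Vt = np.linalg.svd(A)            # A has d^4 rows and d^2 columns, so len(s) = d^2
print(int(np.sum(s < 1e-10)))          # 1
C = Vt[-1].reshape(d, d)               # right-singular vector for the (near-)zero singular value
print(np.allclose(C / np.trace(C), np.eye(d) / d))   # True: tau is a multiple of tr(.)/d
```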
The previous two theorems were about linear functionals defined on the full space of noncommutative polynomials. The following result claims that a flat linear functional on a truncated polynomial space can be extended to a flat linear functional on the full space of polynomials while preserving the same positivity properties. It is due to Curto and Fialkow [19] in the commutative case and extensions to the noncommutative case can be found in [61] (for eigenvalue optimization) and [15] (for trace optimization).
Let \(1 \le \delta \le t < \infty \), \(S \subseteq \mathrm {Sym} \, \mathbb {R}\langle \mathbf{x}\rangle _{2\delta }\), and \(T \subseteq \mathbb {R}\langle \mathbf{x}\rangle _{2\delta }\). Suppose \(L\in \mathbb {R}\langle \mathbf{x}\rangle _{2t}^*\) is symmetric, tracial, \(\delta \)-flat, nonnegative on \({{\mathscr {M}}}_{2t}(S)\), and zero on \({\mathscr {I}}_{2t}(T)\). Then L extends to a symmetric, tracial, linear form on \(\mathbb {R}\langle \mathbf{x}\rangle \) that is nonnegative on \({\mathscr {M}}(S)\), zero on \({\mathscr {I}}(T)\), and whose moment matrix has finite rank.
Let \(W\subseteq \langle \mathbf{x}\rangle _{t-\delta }\) index a maximum nonsingular submatrix of \(M_{t-\delta }(L)\), and let \(\mathrm {span}(W)\) be the linear space spanned by W. We have the vector space direct sum
$$\begin{aligned} \mathbb {R}\langle \mathbf{x}\rangle _t=\mathrm {span}(W)\oplus N_t(L). \end{aligned}$$
That is, for each \(u \in \langle \mathbf{x}\rangle _t\) there exists a unique \(r_u \in \mathrm {span}(W)\) such that \(u - r_u \in N_t(L)\).
We first construct the (unique) symmetric flat extension \(\hat{L} \in \mathbb {R}\langle \mathbf{x}\rangle _{2t+2}^*\) of L. For this, we set \(\hat{L}(p) = L(p)\) for \(\deg (p) \le 2t\), and we set
$$\begin{aligned} \hat{L}(u^* x_i v) = L(u^* x_i r_v) \quad \text {and} \quad \hat{L}((x_i u)^* x_j v) = L((x_i r_u)^* x_j r_v) \end{aligned}$$
for all \(i,j \in [n]\) and \(u,v \in \langle \mathbf{x}\rangle \) with \(|u|=|v| = t\). One can verify that \(\hat{L}\) is symmetric and satisfies \(x_i (u - r_u) \in N_{t+1}(\hat{L})\) for all \(i \in [n]\) and \(u \in \mathbb {R}\langle \mathbf{x}\rangle _t\), from which it follows that \(\hat{L}\) is 2-flat.
We also have \((u-r_u)x_i \in N_{t+1}(\hat{L})\) for all \(i \in [n]\) and \(u \in \mathbb {R}\langle \mathbf{x}\rangle _t\): Since \(\hat{L}\) is 2-flat, we have \((u-r_u)x_i \in N_{t+1}(\hat{L})\) if and only if \(\hat{L}(p (u-r_u) x_i) = 0\) for all \(p \in \mathbb {R}\langle \mathbf{x}\rangle _{t-1}\). By using \(\deg (x_ip) \le t\), L is tracial, and \(u-r_u \in N_t(L)\), we get \(\hat{L}(p(u-r_u) x_i) = L(p(u-r_u)x_i)=L(x_i p(u-r_u)) = 0\).
By consecutively using \((v-r_v)x_j \in N_{t+1}(\hat{L})\), symmetry of \(\hat{L}\), \(x_i (u-r_u) \in N_{t+1}(\hat{L})\), and again symmetry of \(\hat{L}\), we see that
$$\begin{aligned} \hat{L}((x_i u)^* v x_j) \!=\! \hat{L}((x_i u)^* r_v x_j) \!=\! \hat{L}((r_v x_j)^* x_i u) = \hat{L}((r_v x_j)^* x_i r_u) \!=\! \hat{L}((x_i r_u)^* r_v x_j), \end{aligned}$$
and in an analogous way one can show
$$\begin{aligned} \hat{L}((u x_i)^* x_j v ) = \hat{L}(( r_u x_i)^* x_j r_v ). \end{aligned}$$
We can now show that \(\hat{L}\) is tracial. We do this by showing that \(\hat{L}(w x_j) = \hat{L}(x_j w)\) for all w with \(\deg (w) \le 2t+1\). Notice that when \(\deg (w) \le 2t-1\) the statement follows from the fact that \(\hat{L}\) is an extension of L. Suppose \(w = u^* v\) with \(\deg (u) = t+1\) and \(\deg (v) \le t\). We write \(u = x_i u'\), and we let \(r_{u'},r_v \in \mathbb {R}\langle \mathbf{x}\rangle _{t-1}\) be such that \(u' - r_{u'}, v-r_v \in N_t(L)\). We then have
$$\begin{aligned} \hat{L}(wx_j) = \hat{L}(u^*vx_j)&= \hat{L}((x_i u')^* v x_j) \\&= \hat{L}((x_i r_{u'})^* r_v x_j)&\text { by }~(33) \\&= L((x_i r_{u'})^* r_v x_j)&\text { since } \deg ( x_i r_{u'} r_v x_j) \le 2t \\&= L( (r_{u'} x_j)^* x_i r_v)&\text { since } L \text { is tracial}\\&= \hat{L}( (r_{u'} x_j)^* x_i r_v)&\text { since } \deg ((r_{u'} x_j)^* x_i r_v) \le 2t\\&= \hat{L}((u'x_j)^* x_i v)&\text { by }~(34)\\&= \hat{L}(x_j w). \end{aligned}$$
It follows \(\hat{L}\) is a symmetric tracial flat extension of L, and \({{\,\mathrm{rank}\,}}(M(\hat{L})) = {{\,\mathrm{rank}\,}}(M(L))\).
Next, we iterate the above procedure to extend L to a symmetric tracial linear functional \(\hat{L} \in \mathbb {R}\langle \mathbf{x}\rangle ^*\). It remains to show that \(\hat{L}\) is nonnegative on \({{\mathscr {M}}}(S)\) and zero on \({\mathscr {I}}(T)\). For this, we make two observations:
\({\mathscr {I}}(N_t(L)) \subseteq N(\hat{L})\).
\(\mathbb {R}\langle \mathbf{x}\rangle = \mathrm {span}(W) \oplus {\mathscr {I}}(N_t(L))\).
For (i) we use the (easy to check) fact that \( N_t(L) = \mathrm {span}( \{u-r_u: u \in \langle \mathbf{x}\rangle _t\}). \) Then it suffices to show that \(w(u-r_u)\in N(\hat{L})\) for all \(w\in \langle \mathbf{x}\rangle \), which can be done using induction on |w|. From (i), one easily deduces that \( \mathrm {span}(W)\cap N(\hat{L})=\{0\}\), so we have the direct sum \( \mathrm {span}(W) \oplus {\mathscr {I}}(N_t(L))\). The claim (ii) follows using induction on the length of \(w \in \langle \mathbf{x}\rangle \): The base case \(w \in \langle \mathbf{x}\rangle _t\) follows from (32). Let \(w = x_i v \in \langle \mathbf{x}\rangle \) and assume \(v \in \mathrm {span}(W) \oplus {\mathscr {I}}(N_t(L))\), that is, \(v= r_v + q_v\) where \(r_v \in \mathrm {span}(W)\) and \(q_v \in {\mathscr {I}}(N_t(L))\). We have \(x_i v = x_i r_v + x_i q_v\) so it suffices to show \(x_i r_v, x_i q_v \in \mathrm {span}(W) \oplus {\mathscr {I}}(N_t(L))\). Clearly \(x_i q_v \in {\mathscr {I}}(N_t(L))\), since \(q_v \in {\mathscr {I}}(N_t(L))\). Also, observe that \(x_i r_v \in \mathbb {R}\langle \mathbf{x}\rangle _t\) and therefore \(x_i r_v \in \mathrm {span}(W) \oplus {\mathscr {I}}(N_t(L))\) by (32).
We conclude the proof by showing that \(\hat{L}\) is nonnegative on \({{\mathscr {M}}}(S)\) and zero on \({\mathscr {I}}(T)\). Let \(g \in {{\mathscr {M}}}(S)\), \(h \in {\mathscr {I}}(T)\), and \(p \in \mathbb {R}\langle \mathbf{x}\rangle \). For \(p \in \mathbb {R}\langle \mathbf{x}\rangle \) we extend the definition of \(r_p\) so that \(r_p \in \mathrm {span}(W)\) and \(p -r_p \in {\mathscr {I}}(N_t(L))\), which is possible by (ii). Then,
$$\begin{aligned}&\hat{L}(p^* g p) \overset{\mathrm {(i)}}{=} \hat{L}(p^* g r_p) = \hat{L}(r_p^* g p) \overset{\mathrm {(i)}}{=} \hat{L}(r_p^* g r_p) = L(r_p^* g r_p) \ge 0, \\&\hat{L}(p^* h) = \hat{L}(h^* p) \overset{\mathrm {(i)}}{=} \hat{L}(h^* r_p) = \hat{L}(r_p h) = L(r_p h) = 0, \end{aligned}$$
where we use \(\deg (r_p^*gr_p)\le 2(t-\delta )+2\delta =2t\) and \(\deg (r_ph)\le (t-\delta )+ 2\delta \le 2t\). \(\square \)
Combining Theorems 2 and 3 gives the following result, which shows that a flat linear form can be extended to a conic combination of trace evaluation maps. It was first proven in [43, Proposition 6.1] (and in [15] for the unconstrained case).
Corollary 1
Let \(1 \le \delta \le t < \infty \), \(S \subseteq \mathrm {Sym}\, \mathbb {R}\langle \mathbf{x}\rangle _{2\delta }\), and \(T \subseteq \mathbb {R}\langle \mathbf{x}\rangle _{2\delta }\). If \(L \in \mathbb {R}\langle \mathbf {x}\rangle ^*_{2t}\) is symmetric, tracial, \(\delta \)-flat, nonnegative on \({\mathscr {M}}_{2t}(S)\), and zero on \({\mathscr {I}}_{2t}(T)\), then it extends to a conic combination of trace evaluations at elements of \({\mathscr {D}}(S)\cap {\mathscr {V}}(T)\).
Specialization to the Commutative Setting
The material from Appendix A.1 can be adapted to the commutative setting. Throughout \([\mathbf{x}]\) denotes the set of monomials in \(x_1,\ldots ,x_n\), i.e., the commutative analog of \(\langle \mathbf{x}\rangle \).
The moment matrix \(M_t(L)\) of a linear form \(L\in \mathbb {R}[\mathbf{x}]_{2t}^*\) is now indexed by the monomials in \([\mathbf{x}]_{t}\), where we set \(M_t(L)_{w,w'}=L(ww')\) for \(w,w'\in [\mathbf{x}]_t\). Due to the commutativity of the variables, this matrix is smaller and more entries are now required to be equal. For instance, the \((x_2x_1,x_3x_4)\)-entry of \(M_2(L)\) is equal to its \((x_3x_1,x_2x_4)\)-entry, which does not hold in general in the noncommutative case.
Given \(a \in \mathbb {R}^n\), the evaluation map at a is the linear map \(L_a\in \mathbb {R}[\mathbf{x}]^*\) defined by
$$\begin{aligned} L_a(p)= p(a_1,\ldots ,a_n) \quad \text {for all} \quad p\in \mathbb {R}[\mathbf{x}]. \end{aligned}$$
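As a concrete illustration of these definitions (our own addition in Python/NumPy; not part of the paper), the following sketch builds the truncated moment matrix \(M_t(L_a)\) for a hypothetical point \(a\in \mathbb {R}^4\) and checks the entry identification mentioned above, as well as the fact that evaluation maps yield positive semidefinite moment matrices of rank one.

```python
import numpy as np
from itertools import combinations_with_replacement

def monomials(n, t):
    """All commutative monomials in n variables of degree at most t, as exponent tuples."""
    mons = []
    for deg in range(t + 1):
        for combo in combinations_with_replacement(range(n), deg):
            expo = [0] * n
            for i in combo:
                expo[i] += 1
            mons.append(tuple(expo))
    return mons

n, t = 4, 2
mons = monomials(n, t)

a = np.array([0.5, -1.0, 2.0, 0.3])                   # hypothetical evaluation point
L = lambda expo: float(np.prod(a ** np.array(expo)))  # L_a(x^alpha) = a^alpha

# Moment matrix M_t(L_a) with entries L_a(w w'), indexed by monomials of degree <= t.
M = np.array([[L(tuple(np.add(w, wp))) for wp in mons] for w in mons])

# The (x2 x1, x3 x4) entry equals the (x3 x1, x2 x4) entry: both equal L_a(x1 x2 x3 x4).
i1, j1 = mons.index((1, 1, 0, 0)), mons.index((0, 0, 1, 1))
i2, j2 = mons.index((1, 0, 1, 0)), mons.index((0, 1, 0, 1))
print(np.isclose(M[i1, j1], M[i2, j2]))               # True
print(np.linalg.matrix_rank(M))                       # 1: evaluation maps give rank-one M_t
print(bool(np.linalg.eigvalsh(M).min() >= -1e-9))     # True: M_t(L_a) is positive semidefinite
```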
We can view \(L_a\) as a trace evaluation at scalar matrices. Moreover, we can view a trace evaluation map at a tuple of pairwise commuting matrices as a conic combination of evaluation maps at scalars by simultaneously diagonalizing the matrices.
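The diagonalization argument can also be checked numerically. The sketch below (again Python/NumPy, our own illustration) builds a tuple of commuting symmetric matrices with a common eigenbasis and verifies that the normalized trace evaluation of a polynomial equals the average of the scalar evaluations at the joint eigenvalue tuples.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 4, 2

# Commuting symmetric matrices: a common orthogonal eigenbasis Q and diagonal eigenvalues.
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
eigs = rng.standard_normal((n, d))                    # eigs[i, j]: j-th eigenvalue of X_i
X = [Q @ np.diag(eigs[i]) @ Q.T for i in range(n)]

# A polynomial p(x1, x2) stored as {exponent tuple: coefficient}: x1^2 - 3 x1 x2 + 0.5 x2^3.
p = {(2, 0): 1.0, (1, 1): -3.0, (0, 3): 0.5}

def eval_matrix(p, X):
    val = np.zeros_like(X[0])
    for expo, c in p.items():
        term = np.eye(X[0].shape[0])
        for Xi, e in zip(X, expo):
            term = term @ np.linalg.matrix_power(Xi, e)
        val = val + c * term
    return val

def eval_scalar(p, point):
    return sum(c * np.prod([ai ** e for ai, e in zip(point, expo)]) for expo, c in p.items())

lhs = np.trace(eval_matrix(p, X)) / d                          # normalized trace evaluation L_X(p)
rhs = np.mean([eval_scalar(p, eigs[:, j]) for j in range(d)])  # average of scalar evaluations
print(np.isclose(lhs, rhs))                                    # True: L_X = (1/d) sum_j L_{a_j}
```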
The quadratic module \({\mathscr {M}}(S)\) and the ideal \({\mathscr {I}}(T)\) have immediate specializations to the commutative setting. We recall that in the commutative setting the (scalar) positivity domain and scalar variety of sets \(S,T\subseteq \mathbb {R}[\mathbf{x}]\) are given by
$$\begin{aligned} D(S)= \big \{a \in \mathbb {R}^n: g(a)\ge 0 \text { for } g\in S\big \} \text {, } \quad V(T) = \big \{a \in \mathbb {R}^n: h(a) = 0 \text { for } h \in T\big \}. \end{aligned}$$
We first give the commutative analog of Theorem 1, where we give an additional integral representation in point (3). The equivalence of points (1) and (3) is proved in [64] based on Putinar's Positivstellensatz. Here we give a direct proof on the "moment side" using the Gelfand representation.
Let \(S,T \subseteq \mathbb {R}[\mathbf {x}]\) with \({{\mathscr {M}}}(S) + {\mathscr {I}}(T)\) Archimedean. For \(L\in \mathbb {R}[\mathbf {x}]^*\), the following are equivalent:
L is nonnegative on \({{\mathscr {M}}}(S)\), zero on \({\mathscr {I}}(T)\), and \(L(1) = 1\);
there exists a unital commutative \(C^*\)-algebra \({\mathscr {A}}\) with a state \(\tau \) and \(\mathbf{X} \in {\mathscr {D}}_{{\mathscr {A}}}(S)\cap {\mathscr {V}}_{{\mathscr {A}}}(T)\) such that \(L(p)=\tau (p(\mathbf{X}))\) for all \(p\in \mathbb {R}[\mathbf {x}]\);
there exists a probability measure \(\mu \) on \(D(S) \cap V(T)\) such that
$$\begin{aligned} L(p) = \int _{D(S) \cap V(T)} p(x) \, {\hbox {d}}\mu (x) \quad \text {for all} \quad p\in \mathbb {R}[\mathbf {x}]. \end{aligned}$$
((1) \(\Rightarrow \) (2)) This is the commutative analog of the implication (1) \(\Rightarrow \) (2) in Theorem 1 (observing in addition that the operators \(X_i\) in (30) pairwise commute so that the constructed \(C^*\)-algebra \({{\mathscr {A}}}\) is commutative).
((2) \(\Rightarrow \) (3)) Let \(\widehat{{\mathscr {A}}}\) denote the set of unital \(*\)-homomorphisms \({\mathscr {A}} \rightarrow \mathbb {C}\), known as the spectrum of \({{\mathscr {A}}}\). We equip \(\widehat{{\mathscr {A}}}\) with the weak-\(^*\) topology, so that it is compact as a result of \({\mathscr {A}}\) being unital (see, e.g., [9, II.2.1.4]). The Gelfand representation is the \(*\)-isomorphism
$$\begin{aligned} \varGamma :{\mathscr {A}} \rightarrow {\mathscr {C}}(\widehat{{\mathscr {A}}}), \quad \varGamma (a)(\phi ) = \phi (a) \quad \text {for} \quad a\in {{\mathscr {A}}},\ \phi \in \widehat{{\mathscr {A}}}, \end{aligned}$$
where \({\mathscr {C}}(\widehat{{\mathscr {A}}})\) is the set of complex-valued continuous functions on \(\widehat{{\mathscr {A}}}\). Since \(\varGamma \) is an isomorphism, the state \(\tau \) on \({{\mathscr {A}}}\) induces a state \(\tau '\) on \({\mathscr {C}}(\widehat{{\mathscr {A}}})\) defined by \(\tau '(\varGamma (a))=\tau (a)\) for \(a\in {{\mathscr {A}}}\). By the Riesz representation theorem (see, e.g., [67, Theorem 2.14]) there is a Radon measure \(\nu \) on \(\widehat{{\mathscr {A}}}\) such that
$$\begin{aligned} \tau '(\varGamma (a)) = \int _{\widehat{{\mathscr {A}}}} \varGamma (a)(\phi ) \, {\hbox {d}}\nu (\phi ) \quad \text {for all} \quad a \in {\mathscr {A}}. \end{aligned}$$
We then have
$$\begin{aligned} L(p)&= \tau (p(\mathbf {X})) = \tau '(\varGamma (p(\mathbf {X}))) = \int _{\widehat{{\mathscr {A}}}} \varGamma (p(\mathbf {X}))(\phi ) \, {\hbox {d}}\nu (\phi ) = \int _{\widehat{{\mathscr {A}}}} \phi (p(\mathbf {X})) \, {\hbox {d}}\nu (\phi )\\&= \int _{\widehat{{\mathscr {A}}}} p(\phi (X_1),\ldots ,\phi (X_n)) \, {\hbox {d}}\nu (\phi ) = \int _{\widehat{{\mathscr {A}}}} p(f(\phi )) \, {\hbox {d}}\nu (\phi ) = \int _{\mathbb {R}^n} p(x) \, {\hbox {d}}\mu (x), \end{aligned}$$
where \(f :\widehat{{\mathscr {A}}} \rightarrow \mathbb {R}^n\) is defined by \(\phi \mapsto (\phi (X_1),\ldots ,\phi (X_n)), \) and where \(\mu = f_*\nu \) is the pushforward measure of \(\nu \) by f; that is, \(\mu (B) = \nu (f^{-1}(B))\) for measurable \(B \subseteq \mathbb {R}^n\).
Since \(\mathbf {X}\in {\mathscr {D}}_{{\mathscr {A}}}(S)\), we have \(g(\mathbf {X}) \succeq 0\) for all \(g \in S\), hence \(\varGamma (g(\mathbf {X}))\) is a positive element of \({\mathscr {C}}(\widehat{{\mathscr {A}}})\), implying \( g(\phi (X_1), \ldots , \phi (X_n)) = \phi (g(\mathbf {X})) =\varGamma (g(\mathbf {X}))(\phi ) \ge 0. \) Similarly we see \(h(\phi (X_1), \ldots , \phi (X_n)) = 0\) for all \(h \in T\). So, the range of f is contained in \(D(S) \cap V(T)\), \(\mu \) is a probability measure on \(D(S) \cap V(T)\) since \(L(1)=1\), and we have \(L(p) = \int _{D(S) \cap V(T)} p(x) \, {\hbox {d}}\mu (x)\) for all \(p \in \mathbb {R}[\mathbf {x}]\).
((3) \(\Rightarrow \) (1)) This is immediate. \(\square \)
Note that the more common proof for the implication (1) \(\Rightarrow \) (3) in Theorem 4 relies on Putinar's Positivstellensatz [64]: if L satisfies (1) then \(L(p)\ge 0\) for all polynomials p nonnegative on \(D(S)\cap V(T)\) (since \(p+\varepsilon \in {\mathscr {M}}(S) +{\mathscr {I}}(T)\) for any \(\varepsilon >0\)), and thus L has a representing measure \(\mu \) as in (3) by the Riesz-Haviland theorem [41].
The following is the commutative analog of Theorem 2.
For \(S\subseteq \mathbb {R}[\mathbf{x}]\), \(T \subseteq \mathbb {R}[\mathbf{x}]\), and \(L\in \mathbb {R}[\mathbf{x}]^*\), the following are equivalent:
L is nonnegative on \({{\mathscr {M}}}(S)\), zero on \({\mathscr {I}}(T)\), has \(\mathrm {rank}(M(L)) < \infty \), and \(L(1)=1\);
there is a finite dimensional commutative \(C^*\)-algebra \({\mathscr {A}}\) with a state \(\tau \), and \(\mathbf{X} \in {\mathscr {D}}_{{\mathscr {A}}}(S) \cap {\mathscr {V}}_{{\mathscr {A}}}(T)\) such that \(L(p)=\tau (p(\mathbf{X}))\) for all \(p\in \mathbb {R}[\mathbf{x}]\);
L is a convex combination of evaluations at points in \(D(S) \cap V(T)\).
((1) \(\Rightarrow \) (2)) We indicate how to derive this claim from its noncommutative analog. For this denote the commutative version of \(p \in \mathbb {R}\langle \mathbf{x}\rangle \) by \(p^c \in \mathbb {R}[\mathbf{x}]\). For any \(g\in S\) and \(h \in T\), select symmetric polynomials \(g',h' \in \mathbb {R}\langle \mathbf{x}\rangle \) with \((g')^c = g\) and \((h')^c = h\), and set
$$\begin{aligned} S' = \big \{ g' : g \in S \big \}\subseteq \mathbb {R}\langle \mathbf{x}\rangle \quad \text { and }\quad T' = \big \{ h' : h \in T \big \} \cup \big \{x_ix_j - x_j x_i : i, j \in [n], \, i \ne j\big \}\subseteq \mathbb {R}\langle \mathbf{x}\rangle . \end{aligned}$$
Define the linear form \(L' \in \mathbb {R}\langle \mathbf{x} \rangle ^*\) by \(L'(p) = L(p^c)\) for \(p\in \mathbb {R}\langle \mathbf{x}\rangle \). Then \(L'\) is symmetric, tracial, nonnegative on \({\mathscr {M}}(S')\), zero on \(\mathscr {{{\mathscr {I}}}}(T')\), and satisfies \({{\,\mathrm{rank}\,}}M(L')={{\,\mathrm{rank}\,}}M(L)<\infty \). Following the proof of the implication (1) \(\Rightarrow \) (2) in Theorem 1, we see that the operators \(X_1,\ldots ,X_n\) pairwise commute (since \(\mathbf {X}\in {\mathscr {V}}_{{\mathscr {A}}}(T')\) and \(T'\) contains all \(x_ix_j-x_jx_i\)) and thus the constructed \(C^*\)-algebra \({{\mathscr {A}}}\) is finite dimensional and commutative.
((2) \(\Rightarrow \) (3)) Here, we follow the proof of this implication in Theorem 2 and observe that since \({{\mathscr {A}}}\) is finite dimensional and commutative, it is \(*\)-isomorphic to an algebra of diagonal matrices (\(d_m=1\) for all \(m\in [M]\)), which gives directly the desired result.
((3) \(\Rightarrow \) (1)) is easy. \(\square \)
The next result, due to Curto and Fialkow [19], is the commutative analog of Corollary 1.
Let \(1 \le \delta \le t < \infty \) and \(S,T \subseteq \mathbb {R}[\mathbf {x}]_{2\delta }\). If \(L\in \mathbb {R}[\mathbf{x}]_{2t}^*\) is \(\delta \)-flat, nonnegative on \({\mathscr {M}}_{2t}(S)\), and zero on \({\mathscr {I}}_{2t}(T)\), then L extends to a conic combination of evaluation maps at points in \(D(S) \cap V(T)\).
Here too we derive the result from its noncommutative analog in Corollary 1. As in the above proof for the implication (1) \(\Rightarrow \) (2) in Theorem 5, define the sets \(S',T'\subseteq \mathbb {R}\langle \mathbf{x}\rangle \) and the linear form \(L' \in \mathbb {R}\langle \mathbf{x} \rangle _{2t}^*\) by \(L'(p) = L(p^c)\) for \(p\in \mathbb {R}\langle \mathbf{x}\rangle _{2t}\). Then \(L'\) is symmetric, tracial, nonnegative on \({\mathscr {M}}_{2t}(S')\), zero on \({\mathscr {I}}_{2t}(T')\), and \(\delta \)-flat. By Corollary 1, \(L'\) is a conic combination of trace evaluation maps at elements of \({\mathscr {D}}(S') \cap {\mathscr {V}}(T')\). It suffices now to observe that such a trace evaluation \(L_\mathbf {X}\) is a conic combination of (scalar) evaluations at elements of \(D(S) \cap V(T)\). Indeed, as \(\mathbf {X}\in {\mathscr {V}}(T')\), the matrices \(X_1,\ldots ,X_n\) pairwise commute and thus can be assumed to be diagonal. Since \(\mathbf {X}\in {{\mathscr {D}}}(S') \cap {\mathscr {V}}(T')\), we have \(g'(\mathbf {X})\succeq 0\) for \(g'\in S'\) and \(h'(\mathbf {X}) = 0\) for \(h' \in T'\). This implies \(g((X_1)_{jj},\ldots ,(X_n)_{jj})\ge 0\) and \(h((X_1)_{jj},\ldots ,(X_n)_{jj}) = 0\) for all \(g\in S\), \(h \in T\), and \(j\in [d]\). Thus \(L_\mathbf {X}= \sum _j L_{r_j}\), where \(r_j = ((X_1)_{jj},\ldots ,(X_n)_{jj}) \in D(S) \cap V(T)\). \(\square \)
Unlike in the noncommutative setting, here we also have the following result, which makes it possible to express any linear functional L that is nonnegative on an Archimedean quadratic module as a conic combination of evaluations at points, after restricting L to polynomials of bounded degree.
Let \(S,T \subseteq \mathbb {R}[\mathbf{x}]\) such that \({\mathscr {M}}(S) + {\mathscr {I}}(T)\) is Archimedean. If \(L\in \mathbb {R}[\mathbf{x}]^*\) is nonnegative on \({\mathscr {M}}(S)\) and zero on \({\mathscr {I}}(T)\), then for any integer \(k\in \mathbb {N}\) the restriction of L to \(\mathbb {R}[\mathbf{x}]_{k}\) extends to a conic combination of evaluations at points in \(D(S)\cap V(T)\).
By Theorem 4 there exists a probability measure \(\mu \) on \(D(S) \cap V(T)\) such that
$$\begin{aligned} L(p) = L(1) \int _{D(S) \cap V(T)} p(x) \, {\hbox {d}}\mu (x) \quad \text {for all} \quad p\in \mathbb {R}[\mathbf {x}]. \end{aligned}$$
A general version of Tchakaloff's theorem, as explained in [4], shows that there exist \(r\in \mathbb {N}\), scalars \(\lambda _1,\ldots ,\lambda _r>0\) and points \(x_1,\ldots ,x_r \in D(S) \cap V(T)\) such that
$$\begin{aligned} \int _{D(S) \cap V(T)} p(x) \, {\hbox {d}}\mu (x) = \sum _{i=1}^r \lambda _i p(x_i) \quad \text {for all} \quad p\in \mathbb {R}[\mathbf {x}]_k. \end{aligned}$$
Hence the restriction of L to \(\mathbb {R}[\mathbf {x}]_k\) extends to a conic combination of evaluations at points in \(D(S) \cap V(T)\). \(\square \)
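A familiar special case of such an atomic representation is Gauss–Legendre quadrature: for the (unnormalized) Lebesgue measure on \([-1,1]\), finitely many atoms reproduce all moments up to a prescribed degree. The following sketch (Python/NumPy, our own illustration) verifies this for degree \(k=6\).

```python
import numpy as np

# Gauss-Legendre quadrature: an atomic measure with r points on [-1, 1] reproducing all
# moments of the (unnormalized) Lebesgue measure up to degree 2r - 1.
k = 6                        # degree up to which exactness is required
r = k // 2 + 1               # number of atoms
nodes, weights = np.polynomial.legendre.leggauss(r)

for deg in range(k + 1):
    exact = 0.0 if deg % 2 else 2.0 / (deg + 1)      # integral of x^deg over [-1, 1]
    atomic = float(np.sum(weights * nodes ** deg))
    assert np.isclose(exact, atomic)
print("all moments up to degree", k, "are matched by", r, "atoms")
```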
We briefly discuss here the basic polynomial optimization problems in the commutative and tracial settings. We recall how to design hierarchies of semidefinite programming based bounds, and we give their main convergence properties. The classical commutative polynomial optimization problem asks to minimize a polynomial \(f\in \mathbb {R}[\mathbf{x}]\) over a feasible region of the form D(S) as defined in (35):
$$\begin{aligned} f_{*}= \mathrm {inf}_{a\in D(S)}f(a) = \mathrm {inf}\big \{ f(a) : a \in \mathbb {R}^n, \, g(a)\ge 0 \text { for } g\in S\big \}. \end{aligned}$$
In tracial polynomial optimization, given \(f\in \mathrm {Sym}\, \mathbb {R}\langle \mathbf{x}\rangle \), this is modified to minimizing \(\mathrm {tr}(f(\mathbf{X}))\) over a feasible region of the form \({\mathscr {D}}(S)\) as in (6):
$$\begin{aligned} f_*^\mathrm {tr} = \mathrm {inf}_{\mathbf{X} \in {\mathscr {D}}(S)} \mathrm {tr}(f(\mathbf{X})) = \mathrm {inf}\big \{ \mathrm {tr}(f(\mathbf{X})) : d\in \mathbb {N},\, \mathbf{X} \in (\mathrm {H}^d)^n, \, g(\mathbf{X}) \succeq 0 \text { for } g\in S\big \}, \end{aligned}$$
where the infimum does not change if we replace \(\mathrm {H}^d\) by \(\mathrm {S}^d\). Commutative polynomial optimization is recovered by restricting to \(1 \times 1\) matrices.
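To fix ideas, the sketch below (Python/NumPy, our own illustration, with a hypothetical constraint set \(S=\{1-x_1^2,\,1-x_2^2\}\)) draws a random pair of symmetric matrices, checks membership in \({\mathscr {D}}(S)\), and evaluates the normalized trace objective for the symmetric polynomial \(f=x_1x_2+x_2x_1\); the parameter \(f_*^{\mathrm {tr}}\) is the infimum of such values over all feasible matrix tuples of all sizes.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3

def sym_unit(A):
    """Random symmetric matrix rescaled so that its spectrum lies in [-1, 1]."""
    S = (A + A.T) / 2
    return S / np.linalg.norm(S, 2)

X1 = sym_unit(rng.standard_normal((d, d)))
X2 = sym_unit(rng.standard_normal((d, d)))

# Hypothetical constraint set S = {1 - x1^2, 1 - x2^2}: check X = (X1, X2) lies in D(S).
I = np.eye(d)
feasible = all(np.linalg.eigvalsh(I - Xi @ Xi).min() >= -1e-9 for Xi in (X1, X2))

# Symmetric objective f = x1 x2 + x2 x1, evaluated with the normalized trace.
value = np.trace(X1 @ X2 + X2 @ X1) / d
print(feasible, value)   # one feasible objective value; f_*^tr is the infimum over all d and X
```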
For the commutative case, Lasserre [46] and Parrilo [60] have proposed hierarchies of semidefinite programming relaxations based on sums of squares of polynomials and the dual theory of moments. This approach has been extended to eigenvalue optimization [57, 61] and later to tracial optimization [14, 43]. The starting point in deriving these relaxations is to reformulate the above problems as minimizing L(f) over all normalized trace evaluation maps L at points in \(D(S)\) or \({\mathscr {D}}(S)\), and then to express computationally tractable properties satisfied by such maps L.
For \(S \cup \{f\} \subseteq \mathbb {R}[\mathbf{x}]\) and \(\lceil \deg (f)/2\rceil \le t \le \infty \), recall the (truncated) quadratic module \({\mathscr {M}}_{2t}(S)\)
$$\begin{aligned} {{\mathscr {M}}}_{2t}(S) =\mathrm {cone}\big \{gp^2: p\in \mathbb {R}[\mathbf{x}], \ g\in S\cup \{1\},\ \deg (gp^2)\le 2t\big \}, \end{aligned}$$
which we use to formulate the following semidefinite programming lower bound on \(f_{*}\):
$$\begin{aligned} f_t =\mathrm {inf}_{}\big \{L(f) : L\in \mathbb {R}[\mathbf{x}]_{2t}^*,\, L(1)=1,\, L\ge 0 \text { on } {{\mathscr {M}}}_{2t}(S)\big \}. \end{aligned}$$
For \(t\in \mathbb {N}\) we have \(f_t\le f_\infty \le f_*\).
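For a toy univariate instance, the level \(t=1\) bound \(f_1\) can be written down explicitly as a small semidefinite program. The following sketch uses CVXPY purely as an illustration (it is not part of the paper); the problem is \(\min x^2\) over \(D(S)\) with \(S=\{1-x^2\}\), for which \(f_*=0\) and the relaxation is already exact.

```python
import cvxpy as cp

# Level t = 1 moment relaxation of  min x^2  subject to  1 - x^2 >= 0  (true optimum f_* = 0).
# The moment matrix M_1(L) is indexed by {1, x}; its entries are L(1), L(x), L(x^2).
M = cp.Variable((2, 2), symmetric=True)       # M[0,0] = L(1), M[0,1] = L(x), M[1,1] = L(x^2)
constraints = [
    M >> 0,                   # M_1(L) positive semidefinite
    M[0, 0] == 1,             # normalization L(1) = 1
    M[0, 0] - M[1, 1] >= 0,   # localizing constraint L(g) >= 0 for g = 1 - x^2
]
problem = cp.Problem(cp.Minimize(M[1, 1]), constraints)   # minimize L(f) for f = x^2
problem.solve()
print(problem.value)          # approximately 0, so f_1 already matches f_* for this toy problem
```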
In the same way, for \(S \cup \{f\} \subseteq \mathrm {Sym} \, \mathbb {R}\langle \mathbf{x} \rangle \) and t such that \(\lceil \deg (f)/2\rceil \le t \le \infty \), we have the following semidefinite programming lower bound on \(f_*^\mathrm {tr}\):
$$\begin{aligned} f_t^\mathrm {tr} =\mathrm {inf}_{}\big \{L(f) : L\in \mathbb {R}\langle \mathbf{x}\rangle _{2t}^* \text { tracial and symmetric},\, L(1)=1,\, L \ge 0 \text { on } {{\mathscr {M}}}_{2t}(S)\big \}, \end{aligned}$$
where we now use definition (1) for \({{\mathscr {M}}}_{2t}(S)\).
The next theorem from [46] gives fundamental convergence properties for the commutative case; see also, e.g., [47, 49] for a detailed exposition.
Let \(1 \le \delta \le t < \infty \) and \(S \cup \{f\} \subseteq \mathbb {R}[\mathbf{x}]_{2\delta }\) with \(D(S)\ne \emptyset \).
If \({{\mathscr {M}}}(S)\) is Archimedean, then \(f_t \rightarrow f_\infty \) as \(t \rightarrow \infty \), the optimal values in \(f_\infty \) and \(f_{*}\) are attained, and \(f_\infty = f_{*}\).
If \(f_t\) admits an optimal solution L that is \(\delta \)-flat, then L is a convex combination of evaluation maps at global minimizers of f in \(D(S)\), and \(f_t=f_\infty =f_{*}\).
By repeating the first part of the proof of Theorem 9 in the commutative setting, we see that \(f_t \rightarrow f_\infty \) and that the optimum is attained in \(f_\infty \). Let L be optimal for \(f_\infty \) and let k be greater than \(\mathrm {deg}(f)\) and \(\mathrm {deg}(g)\) for \(g \in S\). By Theorem 7, the restriction of L to \(\mathbb {R}[\mathbf {x}]_k\) extends to a conic combination of evaluations at points in D(S). It follows that this extension is feasible for \(f_*\) with the same objective value, which shows \(f_\infty = f_*\).
This follows in the same way as the proof of Theorem 9(ii) below, where, instead of using Corollary 1, we now use its commutative analog, Theorem 6.
To discuss convergence for the tracial case, we need one more optimization problem:
$$\begin{aligned} f_\mathrm {II_1}^\mathrm {tr} = \mathrm {inf} \big \{ \tau (f(\mathbf{X})) : \mathbf{X} \in {\mathscr {D}}_{{\mathscr {A}}}(S), \, {\mathscr {A}} \text { is a unital }C^*\text {-algebra with tracial state } \tau \big \}. \end{aligned}$$
This problem can be seen as an infinite dimensional analog of \(f_*^{\mathrm {tr}}\): if we restrict to finite dimensional \(C^*\)-algebras in the definition of \(f_{\mathrm {II_1}}^{\mathrm {tr}}\), then we recover the parameter \(f_*^{\mathrm {tr}}\) (use Theorem 2 to see this). Moreover, as we see in Theorem 9(ii) below, equality \(f_*^{\mathrm {tr}} = f_{\mathrm {II_1}}^{\mathrm {tr}}\) holds if some flatness condition is satisfied. Whether \(f_\mathrm {II_1}^\mathrm {tr} = f_*^\mathrm {tr}\) is true in general is related to Connes' embedding conjecture (see [16, 43, 44]).
Above we defined the parameter \(f_\mathrm {II_1}^\mathrm {tr}\) using \(C^*\)-algebras. However, the following lemma shows that we get the same optimal value if we restrict to \({{\mathscr {A}}}\) being a von Neumann algebra of type \(\mathrm {II_1}\) with separable predual, which is the more common way of defining the parameter \(f_\mathrm {II_1}^\mathrm {tr}\) as is done in [43] (and justifies the notation). We omit the proof of this lemma, which relies on a GNS construction and standard algebraic manipulations.
Let \({\mathscr {A}}\) be a \(C^*\)-algebra with tracial state \(\tau \) and \(a_1,\ldots ,a_n \in {\mathscr {A}}\). There exists a von Neumann algebra \({\mathscr {F}}\) of type \(\mathrm {II_1}\) with separable predual, a faithful normal tracial state \(\phi \), and elements \(b_1,\ldots ,b_n \in {\mathscr {F}}\), so that for every \(p \in \mathbb {R}\langle \mathbf{x}\rangle \) we have
$$\begin{aligned}&\tau (p(a_1,\ldots ,a_n)) = \phi (p(b_1,\ldots ,b_n)) \quad \text { and } \\&\quad p(a_1,\ldots , a_n) \text { is positive} \quad \iff \quad p(b_1,\ldots ,b_n) \text { is positive}. \end{aligned}$$
For all \(t \in \mathbb {N}\) we have
$$\begin{aligned} f_t^\mathrm {tr} \le f^{\text {tr}}_{\infty } \le f_\mathrm {II_1}^\mathrm {tr} \le f_\mathrm {*}^\mathrm {tr}, \end{aligned}$$
where the last inequality follows by considering for \({{\mathscr {A}}}\) the full matrix algebra \(\mathbb {C}^{d\times d}\). The next theorem from [43] summarizes convergence properties for these parameters; its proof uses Lemma 13 below.
Let \(1 \le \delta \le t < \infty \) and \(S\cup \{f\}\subseteq \mathrm {Sym}\, \mathbb {R}\langle \mathbf{x}\rangle _{2\delta }\) with \({{\mathscr {D}}}(S)\ne \emptyset \).
If \({{\mathscr {M}}}(S)\) is Archimedean, then \(f_t^{\text {tr}} \rightarrow f_\infty ^\mathrm {tr}\) as \(t \rightarrow \infty \), and the optimal values in \(f^{\text {tr}}_{\infty }\) and \(f_\mathrm {II_1}^\mathrm {tr}\) are attained and equal.
If \(f_t^{\text {tr}}\) has an optimal solution L that is \(\delta \)-flat, then L is a convex combination of normalized trace evaluations at matrix tuples in \({\mathscr {D}}(S)\), and \(f_t^{\text {tr}}=f_\infty ^{\text {tr}}=f_\mathrm {II_1}^\mathrm {tr} =f_*^\mathrm {tr}\).
We first show (i). As \({{\mathscr {M}}}(S)\) is Archimedean, \(R-\sum _{i=1}^nx_i^2\in {{\mathscr {M}}}_{2d}(S)\) for some \(R>0\) and \(d\in \mathbb {N}\). Since the bounds \(f^{\text {tr}}_t\) are monotone nondecreasing in t and upper bounded by \(f^{\text {tr}}_\infty \), the limit \(\lim _{t\rightarrow \infty } f^{\text {tr}}_t\) exists and it is at most \(f^{\text {tr}}_\infty \).
Fix \(\varepsilon >0\). For \(t\in \mathbb {N}\) let \(L_t\) be a feasible solution to the program defining \(f^{\text {tr}}_t\) with value \(L_t(f)\le f^{\text {tr}}_t+\varepsilon \). As \(L_t(1)=1\) for all t we can apply Lemma 13 below and conclude that the sequence \((L_t)_t\) has a convergent subsequence. Let \(L\in \mathbb {R}\langle \mathbf{x}\rangle ^*\) be the pointwise limit. One can easily check that L is feasible for \(f^{\text {tr}}_\infty \). Hence we have \(f^{\text {tr}}_\infty \le L(f)\le \lim _{t\rightarrow \infty } f^{\text {tr}}_t +\varepsilon \le f^{\text {tr}}_\infty +\varepsilon \). Letting \(\varepsilon \rightarrow 0\) we obtain that \(f^{\text {tr}}_\infty =\lim _{t\rightarrow \infty }f^{\text {tr}}_t\) and L is optimal for \(f^{\text {tr}}_\infty \).
Next, since L is symmetric, tracial, and nonnegative on \({{\mathscr {M}}}(S)\), we can apply Theorem 1 to obtain a feasible solution \(({{\mathscr {A}}},\tau ,\mathbf {X})\) to \(f_\mathrm {II_1}^\mathrm {tr}\) satisfying (29) with objective value L(f). This shows \(f^{\text {tr}}_\infty = f_\mathrm {II_1}^{\mathrm {tr}}\) and that the optima are attained in \(f^{\text {tr}}_\infty \) and \(f^{\text {tr}}_\mathrm {II_1}\).
Finally, part (ii) is derived as follows. If L is an optimal solution of \(f^{\text {tr}}_t\) that is \(\delta \)-flat, then, by Corollary 1, it has an extension \(\hat{L}\in \mathbb {R}\langle \mathbf{x}\rangle ^*\) that is a conic combination of trace evaluations at elements of \({\mathscr {D}}(S)\). This shows \(f^{\text {tr}}_* \le \hat{L}(f) = L(f)\), and thus the chain of equalities \(f^{\text {tr}}_t=f^{\text {tr}}_\infty = f^{\text {tr}}_*=f^{\text {tr}}_{\mathrm {II_1}}\) holds. \(\square \)
We conclude with the following technical lemma, based on the Banach-Alaoglu theorem. It is a well-known crucial tool for proving the asymptotic convergence result from Theorem 9(i) and it is used at other places in the paper.
Let \(S \subseteq \mathrm {Sym}\, \mathbb {R}\langle \mathbf{x}\rangle \), \(T \subseteq \mathbb {R}\langle \mathbf{x}\rangle \), and assume \(R-(x_1^2 + \cdots + x_n^2) \in {{\mathscr {M}}}_{2d}(S)+{\mathscr {I}}_{2d}(T)\) for some \(d\in \mathbb {N}\) and \(R>0\). For \(t\in \mathbb {N}\) assume \(L_t \in \mathbb {R}\langle \mathbf {x}\rangle _{2t}^*\) is tracial, nonnegative on \({\mathscr {M}}_{2t}(S)\) and zero on \({{\mathscr {I}}}_{2t}(T)\). Then we have \(|L_t(w)|\le R^{|w|/2} L_t(1)\) for all \(w\in \langle \mathbf{x}\rangle _{2t-2d+2}\). In addition, if \(\mathrm {sup}_t \, L_t(1) < \infty \), then \(\{L_t\}_t\) has a pointwise converging subsequence in \(\mathbb {R}\langle \mathbf {x}\rangle ^*\).
We first use induction on |w| to show that \(L_t(w^*w)\le R^{|w|}L_t(1)\) for all \(w\in \langle \mathbf{x}\rangle _{t-d+1}\). For this, assume \(L_t(w^*w)\le R^{|w|}L_t(1)\) and \(|w|\le t-d\). Then we have
$$\begin{aligned} L_t((x_iw)^*x_iw) =L_t(w^*(x_i^2-R)w)+R \cdot L_t(w^*w) \le R\cdot R^{|w|}L_t(1)=R^{|x_iw|}L_t(1). \end{aligned}$$
For the inequality we use the fact that \(L_t(w^*(x_i^2-R)w)\le 0\) since \(w^*(R-x_i^2)w\) can be written as the sum of a polynomial in \({\mathscr {M}}_{2t}(S)+{\mathscr {I}}_{2t}(T)\) and a sum of commutators of degree at most 2t, which follows using the following identity: \(w^*qhw=ww^*qh+[w^*qh,w].\) Next we write any \(w\in \langle \mathbf{x}\rangle _{2(t-d+1)}\) as \(w=w_1^*w_2\) with \(w_1,w_2\in \langle \mathbf{x}\rangle _{t-d+1}\) and use the positive semidefiniteness of the principal submatrix of \(M_t(L_t)\) indexed by \(\{w_1,w_2\}\) to get
$$\begin{aligned} L_t(w)^2 = L_t(w_1^*w_2)^2\le L_t(w_1^*w_1)L_t(w_2^*w_2) \le R^{|w_1|+|w_2|}L_t(1)^2=R^{|w|}L_t(1)^2. \end{aligned}$$
This shows the first claim.
Suppose \(c:=\mathrm {sup}_t \, L_t(1) < \infty \). For each \(t \in \mathbb {N}\), consider the linear functional \(\hat{L}_t\in \mathbb {R}\langle \mathbf{x}\rangle ^*\) defined by \(\hat{L}_t(w)=L_t(w)\) if \(|w|\le 2t-2d+2\) and \(\hat{L}_t(w)=0\) otherwise. Then the vector \((\hat{L}_t(w)/(c R^{|w|/2}))_{w \in \langle \mathbf{x}\rangle }\) lies in the supremum norm unit ball of \(\mathbb {R}^{\langle \mathbf{x} \rangle }\), which is compact in the weak\(*\) topology by the Banach–Alaoglu theorem. It follows that the sequence \((\hat{L}_t)_t\) has a pointwise converging subsequence and thus the same holds for the sequence \((L_t)_t\). \(\square \)
OpenAccess This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Gribling, S., de Laat, D. & Laurent, M. Lower Bounds on Matrix Factorization Ranks via Noncommutative Polynomial Optimization. Found Comput Math 19, 1013–1070 (2019). https://doi.org/10.1007/s10208-018-09410-y
Keywords: Nonnegative rank, Positive semidefinite rank, Completely positive rank, Completely positive semidefinite rank, Noncommutative polynomial optimization
Comments on the NEMA NU 4-2008 Standard on Performance Measurement of Small Animal Positron Emission Tomographs
Opinion Article
Patrick Hallen, David Schug & Volkmar Schulz
EJNMMI Physics volume 7, Article number: 12 (2020)
The National Electrical Manufacturers Association's (NEMA) NU 4-2008 standard specifies methodology for evaluating the performance of small-animal PET scanners. The standard's goal is to enable comparison of different PET scanners over a wide range of technologies and geometries used. In this work, we discuss whether the NEMA standard meets these goals, and we point out potential flaws and improvements to the standard.

For the evaluation of spatial resolution, the NEMA standard mandates the use of filtered backprojection reconstruction. This reconstruction method can introduce star-like artifacts for detectors with an anisotropic spatial resolution, usually caused by parallax error. These artifacts can then cause a strong dependence of the resulting spatial resolution on the size of the projection window in image space, which is not fully specified in the NEMA standard. If the PET ring has detectors which are perpendicular to a Cartesian axis, then the resolution along this axis will typically improve with larger projection windows.

We show that the standard's equations for the estimation of the random rate for PET systems with intrinsic radioactivity are circular and not satisfiable. However, a modified version can still be used to determine an approximation of the random rates under the assumption of negligible random rates for small activities and a constant scatter fraction. We compare the resulting estimated random rates to random rates obtained using a delayed coincidence window and two methods based on the singles rates. While these methods give similar estimates, the estimation method based on the NEMA equations overestimates the random rates.

In the NEMA standard's protocol for the evaluation of the sensitivity, the standard specifies that a point source be stepped axially through the scanner and that a separate scan be acquired for each source position. Later, in the data analysis section, the standard does not specify clearly how the different scans have to be incorporated into the analysis, which can lead to unclear interpretations of published results.

The standard's definition of the recovery coefficients in the image quality phantom includes the maximum activity in a region of interest, which causes a positive correlation of noise and recovery coefficients. This leads to an unintended trade-off between desired uniformity, which is negatively correlated with variance (i.e., noise), and recovery.

With this work, we want to start a discussion on possible improvements in a next version of the NEMA NU-4 standard.
The National Electrical Manufacturers Association's (NEMA) NU 4-2008 standard on "Performance Measurements of Small Animal Positron Emission Tomography" specifies "standardized methodology for evaluating the performance of positron emission tomographs (PET) designed for animal imaging" [1]. The standard's goal is to enable comparison of the performance of different PET systems over a wide range of technologies and geometries used. Thus, the methods specified in the standard should not artificially favor or disfavor certain choices in scanner geometry and technology and the performance results should indicate the expected performance in real-world applications as closely as possible. Virtually all commercial small-animal PET systems and most research prototype PET systems have published performance evaluations based on the NEMA standard and Goertzen et al. [2] have published a review comparing small-animal PET systems based on the respective NEMA performance publications. These publications are an essential benchmark in the development of new PET systems and an important tool for the purchase decisions of potential buyers.
The NEMA standard specifies four measurements with respective analyses: evaluation of spatial resolution; evaluation of total, true, scattered, random, and noise-equivalent count rates; evaluation of system sensitivity; and quantitative evaluation of image quality in a standardized imaging situation using a hot-rod phantom.
The standard was devised over 10 years ago, so it does not incorporate newer technological developments and paradigm shifts. For instance, the use of data acquisition into sinograms and filtered backprojection reconstruction mandated in the standard was more widespread than it is today. Nowadays, these methods are often implemented only to evaluate the PET performance based on NEMA but never actually used for real-world applications.
In this work, we examine whether the NEMA standard meets its goal of enabling a fair comparison of PET systems, and we point out potential flaws and improvements in the standard. In our opinion, the standard is underspecified in several parts, limiting the comparability of different systems, since the investigators performing the performance evaluations are still free to choose parameters which significantly influence the results. The methods specified for the evaluation of the spatial resolution disadvantage certain system geometries, even though those geometries do not exhibit the same reduction in spatial resolution in real-world applications. The definition of random rates is circular and allows the use of very different alternative methods that generate different results. The chapter on sensitivity is ambiguous, leading to publications using different or even unclear methods for the measurement of sensitivity, which creates ambiguity in the interpretation of the sensitivity of different PET systems.
Where applicable, we demonstrate the claimed issues with simple simulation studies. All discussions in this work should be universally applicable to any PET system. However, it is still helpful and instructive to support the claims in this work with real-world data. This is done using data obtained with the Hyperion II D PET/MRI scanner, which was developed by our group [3]. Using the same data, we have already published a performance evaluation based on the NEMA standard [4].
The goal of this work is to start a discussion on a revised version of the NEMA standard and to provide input for this discussion.
To evaluate the spatial resolution, the NEMA standard mandates the use of point source scans which are reconstructed using filtered backprojection. However, basically all modern PET scanners instead use an iterative maximum likelihood expectation maximization (MLEM) algorithm for reconstruction [4–15], so a scanner's spatial resolution using filtered backprojection is not necessarily indicative of its spatial resolution for applications. While the mandated filtered backprojection is intended to benchmark the detector performance alone, we will demonstrate in the following that it disadvantages certain scanner geometries. Furthermore, the NEMA standard specifies that the spatial resolution must be determined using the projections of the reconstructed point sources inside a window in image space, without strictly specifying the size of this projection window. We will demonstrate that this can lead to an ambiguous spatial resolution which depends on the size of the projection window and allows for artificially enhancing the spatial resolution by choosing a particularly large projection window for certain scanner geometries.
The main disadvantage of filtered backprojection is that it does not include any model of the detector and assumes an ideal, ring-like PET scanner, while the detectors in real-world PET scanners are usually in a block geometry with anisotropic spatial resolutions. Lines of response (LORs) perpendicular to the detector's front face are detected with the highest resolution, while tilted LORs have a parallax error in the detected position, which increases with more tilt of the LORs relative to the detector's front faces as illustrated in Fig. 1. In principle, this effect can be reduced by detectors which are able to determine the depth of interaction (DOI) of the gamma interaction, but in practice most PET systems do not employ detectors with DOI determination [4–6, 11, 12, 14, 16]. Additionally, PET rings have gaps between the detectors, where no LORs are detected at all.
Ring geometry that was used for the simulations and the measurement. The blue bands show the parallax error of LORs, which increases approximately proportionally to the angle φ to the normal of the block detector
These issues with filtered backprojection will lead to artifacts in the reconstructed activity. For instance, each angle where the PET ring has an enhanced spatial resolution creates an excess in reconstructed activity along the line connecting this position with the point source, and each angle with degraded spatial resolution creates a reduction in reconstructed activity along the respective line. Similarly, gaps between the detectors create a lack of reconstructed activity along the corresponding lines.
To understand and demonstrate this behavior, it is instructive to look at these effects in sinogram space. In sinogram space, the enhanced spatial resolution of perpendicular LORs manifests as hot spots or rather peaks at the center of each detector module, as Fig. 2g shows. With increasing distance from the center of the detector module the spatial resolution degrades, blurring the line of the point source in sinogram space. We model this as the convolution of the sinogram of a Gaussian point source and the parallax error of the detector. The parallax error of the detector stack can be modeled as the shape of two triangles, connected at their tips as shown in Fig. 2d. The parallax error is proportional to sinφ, where φ is the angle to the normal of the block detector as defined in Fig. 1. The parallax error shown in Fig. 2d is a small-angle approximation of this.
Visualization of the influence of anisotropic detector resolution on the filtered backprojection and resulting spatial resolution along the two axes. e, h, k The simulation with only gaps. f, i, l The simulation with anisotropic detector resolution and gaps of 10 detector modules. g, j, m A measurement. The simulation with anisotropic detector resolution and the measurement exhibit a star-like artifact in the reconstruction, which leads to a split in spatial resolution along the x and y axes, as shown in the bottom row
In addition to the inherent problems of mandating the use of filtered backprojection in the NEMA standard, the standard additionally mandates projecting the reconstructed three-dimensional activity onto different one-dimensional axes using a projection window. However, the size of the projection window is not fully specified: "The response function is formed by summing all one-dimensional profiles that are parallel to the direction of measurement and within at least two times the FWHM of the orthogonal direction" [1, p. 7]. The first issue is that this definition is circular, since the minimal size of the projection window to determine the FWHM is defined using the FWHM itself. One can easily fix this problem, either by using a sufficiently large projection window in the first place or by reducing the size of the projection window iteratively depending on the FWHM determined in the previous iteration. However, the much bigger problem is that the size of the projection window can strongly influence the resulting spatial resolution. The cause of this is the integration of the star-like artifacts created by the anisotropic spatial resolution, as we demonstrate with the following simulation, shown in Fig. 2.
We created the activity distribution of an ideally reconstructed point source by assuming a rotationally symmetric two-dimensional normal distribution, shown in Fig. 2a. The position of the point source is off-center at a radial offset of 10 mm. To investigate the essence of the effects, we do not include noise in our simulation. From this ideally reconstructed point source, we create a sinogram by forward projection (i.e., by applying a Radon transformation). The resulting sinogram is shown in Fig. 2b.
We include the gaps between the detector stacks in our simulation by creating a sensitivity sinogram, where all bins corresponding to gaps are 0 and bins corresponding to sensitive detector area are 1, shown in Fig. 2c. The simulated geometry is depicted in Fig. 1 and follows the geometry of the Hyperion II D scanner to allow a comparison between simulation and measurement. When we include this model of gaps in our simulation by multiplying the sensitivity sinogram with our point-source sinogram (Fig. 2e) and then performing a filtered backprojection (i.e., an inverse Radon transformation), we get a reconstructed point source with slight artifacts, shown in Fig. 2h. As stated above, the artifacts are a lack of reconstructed activity along the lines connecting the gaps and the point source. When analyzing the spatial resolution of the filtered backprojection with gaps, we observe little influence of the gaps compared to the filtered backprojection of an ideal sinogram without gaps. More importantly, the resulting spatial resolution of 1.2 mm FWHM is stable under changes in the size of the projection window, as shown in Fig. 2k. Thus, with the usually small gaps of PET scanners, gaps between the detectors do not cause severe artifacts and have only a very minor influence on the resulting spatial resolution.
When we additionally include the effect of the anisotropic detector resolution due to parallax errors by convolving the point-source sinogram and the point spread function in Fig. 2d, the resulting filtered backprojection in Fig. 2i exhibits a star-like artifact, i.e., the lines connecting the center of each detector stack and the point source exhibit a visible excess in activity.
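The essence of this simulation is easy to reproduce with generic tools. The sketch below (Python with NumPy, SciPy, and scikit-image; our own illustration, not the code used for Fig. 2) forward-projects a Gaussian point source, applies an angle-dependent radial blur as a crude stand-in for the parallax point spread function, zeroes a few angular bins as a crude stand-in for the detector gaps, and reconstructs with filtered backprojection via iradon. The blur period, gap width, pixel size, and source offset are assumed toy values, and the filter_name argument assumes a recent scikit-image version.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from skimage.transform import radon, iradon

# Ideal point source: 2D Gaussian, 10 mm off-center, on a 0.25 mm/pixel grid (toy values).
size, pix = 257, 0.25
yy, xx = np.mgrid[:size, :size]
cx, cy = size // 2 + round(10 / pix), size // 2
img = np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * (0.5 / pix) ** 2))

theta = np.linspace(0.0, 180.0, 360, endpoint=False)
sino = radon(img, theta=theta, circle=True)           # forward projection (Radon transform)

# Toy parallax model: a radial blur that vanishes at the assumed module-normal angles and is
# maximal in between (the 18 degree period is an arbitrary illustrative choice).
period_deg = 18.0
blur_mm = 1.5 * np.abs(np.sin(np.pi * theta / period_deg))
for k, b in enumerate(blur_mm):
    if b > 0.05:
        sino[:, k] = gaussian_filter1d(sino[:, k], b / pix)

# Toy gap model: a binary sensitivity sinogram that zeroes a few angular bins between modules.
gap = np.abs(np.sin(np.pi * theta / period_deg)) > 0.98
sino[:, gap] = 0.0

rec = iradon(sino, theta=theta, circle=True, filter_name='ramp')   # filtered backprojection
```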
If one of these excesses aligns with one of the Cartesian projection axes, and with the simulated geometry this is the case for the x axis, the projection onto the axis perpendicular to this axis will result in a peaked excess at the maximum of the line profile, as shown in Fig. 3. A scanner's spatial resolution is defined by the FWHM and FWTM of this profile, which depends strongly on the height of the maximum. Therefore, a peaked excess at the maximum will significantly enhance the resulting spatial resolution. For our geometry, this enhancement is only observed for the y axis, because only the x axis has an excess in activity aligned with it, as there are no detector stacks perpendicular to the y axis. This difference between the resolution in x and y is essentially an artifact and basically non-existent in real-world applications using an iterative maximum likelihood expectation maximization (MLEM) reconstruction. More importantly, the extent of this effect depends strongly on the size of the projection window as demonstrated in Fig. 2k. Increasing the size of the projection window enhances the resulting spatial resolution in y (i.e., decreases FWHM and FWTM) while degrading the spatial resolution in x. This makes comparison of the spatial resolution of different PET systems difficult and maybe even impossible, as the NEMA standard neither specifies a clear projection window size nor mandates that the window size used be reported. Thus, most publications do not state the used projection window [5, 7, 14, 16]. Other geometries may not exhibit this behavior at all, favoring or disfavoring systems which have detectors perpendicular to a Cartesian axis.
Line profile of the reconstructed point source projected onto the y axis. The star-like artifact which is aligned with the x axis creates an excess in activity at the peak of the profile, boosting the spatial resolution
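The dependence of the NEMA-style resolution on the projection window can be probed with a few lines of code. The following sketch (Python/NumPy, our own illustration) assumes a reconstructed two-dimensional point-source image such as the rec array from the sketch above, sums columns within windows of increasing half-width around the peak, and reports the FWHM of the resulting y profiles obtained by linear interpolation of the half-maximum crossings.

```python
import numpy as np

def fwhm(profile, pix):
    """FWHM of a 1D profile (in mm) via linear interpolation of the half-maximum crossings."""
    peak = int(np.argmax(profile))
    half = profile[peak] / 2.0
    i = peak
    while i > 0 and profile[i] > half:
        i -= 1
    left = i + (half - profile[i]) / (profile[i + 1] - profile[i])
    j = peak
    while j < len(profile) - 1 and profile[j] > half:
        j += 1
    right = (j - 1) + (profile[j - 1] - half) / (profile[j - 1] - profile[j])
    return (right - left) * pix

def y_resolution_vs_window(rec, pix):
    """Project the point source onto the y axis for windows of growing half-width (in pixels)."""
    _, px = np.unravel_index(int(np.argmax(rec)), rec.shape)
    for hw in (2, 4, 8, 16, 32):
        profile = rec[:, px - hw:px + hw + 1].sum(axis=1)
        print(hw, round(fwhm(profile, pix), 3), "mm FWHM in y")

# y_resolution_vs_window(rec, pix)   # rec and pix taken from the sketch above
```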
The measurement and filtered backprojection reconstruction of point sources with the Hyperion II D scanner shown in Fig. 2g and j look very similar to the simulation which includes parallax error and gaps: The sinogram has the same hot spots at the angles where the lines of response are perpendicular to the detector surface, and the reconstruction exhibits the same star-like artifact. The analysis of the reconstruction yields the same observed difference in spatial resolution between the x and y axes. Additionally, we observe the same strong dependence on the size of the projection window, shown in Fig. 2m.
An extreme example of a scanner geometry affected by this issue would be a box geometry instead of the conventional ring geometry, i.e., a PET scanner with 4 large perpendicular detector modules without DOI capabilities. With such a geometry, the filtered backprojection artifact would have the shape of a cross, with both lines of excessive activity aligned with the x and y axes. Thus, the artifact would enhance the resolution along both the x and y axes by boosting the maximum of both projections. This scenario is not solely hypothetical, as small-animal PET scanners with the described box-like geometry exist, such as PETbox 4 [17]. In the PETbox 4 NEMA NU-4 performance evaluation, the authors state that using FBP was not possible "since a FBP algorithm specific for the PETbox4 system with the unconventional geometry has not been developed" [17, p. 3797].
Other examples of published performance evaluations that omit the filtered backprojection altogether when evaluating the spatial resolution are [8, 18]. This is an indication that these groups do not find the results based on filtered backprojection indicative of the performance of their systems.
Fixing the issues of this method and proposing a better method to evaluate the spatial resolution is challenging. The NEMA standards committee was surely aware of many of these issues, and we believe most of the PET community is aware of issues with filtered backprojection as well. However, so far, none of the performance publications based on NEMA have discussed the issues presented here, so we believe it is worthwhile to state them to start a discussion.
One obvious solution would be to simply not use filtered backprojection and to perform the reconstruction with the default reconstruction method provided with the scanner, which is also used for the evaluation of the image quality phantom and for real-world applications. In modern scanners, this is usually an iterative reconstruction algorithm, e.g., ordered subset expectation maximization [19] or maximum likelihood expectation maximization [20, 21]. However, these algorithms can artificially enhance the spatial resolution of point sources without background activity due to, e.g., the non-negativity constraint or resolution recovery [22–24]. Thus, the reconstruction of a point source would mostly be a benchmark of the reconstruction algorithm and not of the underlying detector performance. We suspect that these arguments were the main reason why the NEMA standards committee chose filtered backprojection instead.
One alternative could be the evaluation of spatial resolution using a Derenzo hot-rod phantom. The standard could specify the geometry of such a phantom, specify the activity and scan time, allow the use of the reconstruction method supplied by the manufacturer, and then define a quantitative analysis method. The Derenzo phantom is already well-established in the community as a benchmark to evaluate the spatial resolution. For instance, several NEMA performance publications already include such a measurement as a benchmark of spatial resolution [5, 7, 12, 15]. However, these results are not easily comparable, as there currently is no standardized quantitative analysis method to determine the spatial resolution from a measurement of a Derenzo phantom. Usually, the spatial resolution is estimated by making a qualitative judgment at which distance the hot rods are still discernible. In principle, such a definition of spatial resolution based on the ability to resolve two close points is very reasonable and commonly used as a definition of spatial or angular resolution for telescopes and microscopes [25, 26]. However, for a quantitative definition of spatial resolution, there must be a standardized limit for the valley-to-peak ratio between two resolvable point sources, i.e., how much the intensity between the two peaks must dip to make them just resolvable. In a new standardized definition of PET spatial resolution, the PET community could follow the commonly used Rayleigh criterion with an intensity dip of 26.5% [27], or standardize a different limit.
For the scan of a Derenzo phantom, such a resolvability criterion would require determining the valley-to-peak ratios of the profile lines over the different regions of the phantom. To include anisotropies in the spatial resolution, the profile lines should be defined over multiple angles, as demonstrated in Fig. 4a. Figure 4b shows the resulting distribution of valley-to-peak ratios for the phantom's 0.9-mm region. We would recommend defining the spatial resolution as the hot-rod distance in the region where at least 90% of the valley-to-peak ratios are below 0.735, i.e., the intensity dips are at least 26.5%, for consistency with the Rayleigh criterion. Alternatively, one could define a limit based on the average valley-to-peak ratio of a region or use a different percentile than the suggested 90%. As shown in Fig. 4b, the region with distances of 0.9 mm has 100% of the valley-to-peak ratios below 0.735. For the 0.8-mm region, over half of the valley-to-peak ratios would be above 0.735 in our measurement. Thus, the resulting spatial resolution would be 0.9 mm.
Evaluation of spatial resolution using a Derenzo phantom. a Reconstruction of Derenzo phantom scan. The labels indicate the diameters and the distance between the rods. The red lines show an example of profile lines which would be used for determination of valley-to-peak ratios to evaluate the spatial resolution. b Distribution of valley-to-peak ratios for the region with a rod distance of 0.9 mm. All ratios are below 0.735, which is marked with a red vertical line
To prevent arbitrary selection of peaks and valleys in a noisy reconstruction, the standard could specify a limit for the allowed deviation from the physical hot-rod distances when selecting the position of peak and valleys in the profiles of the Derenzo region.
To evaluate the influence of radial and axial offsets on the spatial resolution, the standard could specify different radial distances at which the Derenzo phantom should be placed. Similarly, the standard could also specify additional measurements of the rotated phantom to investigate the isotropy of the spatial resolution.
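To make the proposed criterion concrete, the following Python sketch illustrates how the valley-to-peak analysis could be automated. It is our illustration, not part of the NEMA standard or of any scanner software: the function names, the linear interpolation along the profile lines, and the simple local-maximum peak finder are assumptions of this sketch.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def valley_to_peak_ratios(image, profile_endpoints, n_samples=200):
    """Sample profile lines over a Derenzo region (2D array of averaged slices)
    and return the valley-to-peak ratios between adjacent peaks."""
    ratios = []
    for (x0, y0), (x1, y1) in profile_endpoints:
        xs = np.linspace(x0, x1, n_samples)
        ys = np.linspace(y0, y1, n_samples)
        profile = map_coordinates(image, [ys, xs], order=1)  # bilinear interpolation
        # crude local-maximum finder; a real implementation should also constrain
        # the peak positions to the known physical rod spacing, as discussed above
        peaks = [i for i in range(1, n_samples - 1)
                 if profile[i] >= profile[i - 1] and profile[i] >= profile[i + 1]]
        for p0, p1 in zip(peaks[:-1], peaks[1:]):
            valley = profile[p0:p1 + 1].min()
            peak = min(profile[p0], profile[p1])  # the lower peak gives the stricter ratio
            if peak > 0:
                ratios.append(valley / peak)
    return np.asarray(ratios)

def region_resolved(ratios, limit=0.735, percentile=90):
    """Rayleigh-like criterion: resolved if at least `percentile` percent of the
    valley-to-peak ratios are below `limit` (i.e., a dip of at least 26.5%)."""
    return np.percentile(ratios, percentile) < limit
```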
In our opinion, such a method would depend much less on the system's geometry and technology and would provide a much more realistic benchmark, closely mirroring real-world use of the system. As one of the disadvantages, the precision of this method would be limited by the differences in hot-rod distances between the phantom's regions. However, with commonly used Derenzo phantoms, one would achieve a precision in the determination of the spatial resolution of 0.1 mm, which is more than adequate to assess the scanner's viability for intended applications. Another drawback of the Derenzo phantom is that it lacks warm background activity, which could potentially lead to an artificial enhancement of spatial resolution with a high number of reconstruction iterations.
The outlined method is only intended as one possible first suggestion. We believe that developing a robust and objective method to benchmark the spatial resolution is a challenging and important research problem. One advantage of the current evaluation method is its simplicity, which facilitates Monte Carlo simulations and similar research.
As another alternative, Lodge et al. [28] have recently proposed a novel method for the measurement of clinical PET spatial resolution using a homogeneous cylinder phantom at an oblique angle. Another idea would be to use two adjacent point sources in a warm background, similar to the method described in [24].
Scatter fraction, count losses, and random coincidence measurements
The definitions of the randoms rate, scatter rate, and scatter fraction are not satisfiable and thus ill-defined for systems employing detector material containing intrinsic radioactivity, such as LYSO or LSO scintillators, as most modern PET systems do.
To explain this issue, we give a brief summary of the NEMA standard for the measurement of the scatter fraction, count losses, and random coincidence rate in the following. The measurement is specified as a scan of an FDG-filled line source inside a scatter phantom consisting of polyethylene. The rows of the measured sinogram are centered at their maxima and the sum of all rows is calculated. In the resulting radial profile of the phantom scan, the NEMA standard specifies a signal window of 7 mm around the maximum. All event counts outside this signal window are regarded as either scatter or randoms. It is assumed that the sum of scatter and random event counts is at the same level inside the signal window as on the edges of the signal window. The sum of random and scatter event counts is denoted as Cr+s, and the sum of all event counts are denoted as the total event count CTOT.
For systems without intrinsic radioactivity, the scatter fraction is supposed to be determined by assuming that the contribution of the randoms rate to the combined scatter and random counts Cr+s is negligible for measurements at a low activity. Then, the randoms rate is determined from the total event rate RTOT and true event rate Rt.
For systems with intrinsic radioactivity, the sum of random and scatter event counts also includes the random event counts produced by the intrinsic radioactivity and this contribution of the intrinsic randoms rate cannot be neglected at low measured activities [29]. The NEMA standard acknowledges this issue by specifying: "For systems employing detector material containing intrinsic radioactivity, the scatter fraction shall be evaluated by first evaluating the scattered event counting rate (see section 4.4.5 below)." [1, p. 13] Section 4.4.5 gives the following formula for the scattered event counting rate Rs, which already includes the randoms rate Rr [1, p. 14]
$$ R_{s} = R_{\text{TOT}} - R_{t} - R_{r} - R_{\text{int}} \tag{1} $$
The formula for the randoms rate is given in the preceding Section 4.4.4, and it includes the scatter fraction SF
$$ {R_{r}} = R_{\text{TOT}} - \left(\frac{R_{t}}{1 - SF}\right) \tag{2} $$
The scatter fraction SF, which is defined in the mentioned section 4.4.5, in turn includes the scattered count rate
$$ SF = \frac{R_{s}}{R_{t} + R_{s}} \tag{3} $$
These three equations are not satisfiable for Rint>0, as shown in the following proof. We insert the definition of SF (i.e., Eq. 3) into the definition of Rr (i.e., Eq. 2):
$$\begin{array}{@{}rcl@{}} {R_{r}} &= R_{\text{TOT}} - \left(\frac{R_{t}}{1 - \frac{R_{s}}{R_{t} + R_{s}}}\right) \\ &= R_{\text{TOT}} - \left(\frac{R_{t}}{\frac{R_{t} + R_{s} - R_{s}}{R_{t} + R_{s}}}\right) \\ &= R_{\text{TOT}} - R_{t} - R_{s} \end{array} $$
This is inserted into the definition of Rs (i.e., Eq. 1):
$$\begin{array}{@{}rcl@{}} R_{s} &= R_{\text{TOT}} - R_{t} - \left(R_{\text{TOT}} - R_{t} - R_{s}\right) - R_{\text{int}} \\ &= R_{s} - R_{\text{int}} \qquad \text{\lightning~for}\; R_{\text{int}} \neq 0 \end{array} $$
This is a contradiction, because Rint≠0 holds by assumption: the standard specifies these definitions of Rr and Rs precisely for scanners with intrinsic radioactivity.
We can speculate on the intended meaning of the NEMA standard's definitions. One simple explanation is that the term − Rint was simply forgotten in Eq. 2 since subtracting Rint from Rr would remove the contradiction. However, that would still leave the definition circular and would thus require explicit instructions on how to solve this set of equations in practice. One sensible instruction could be to neglect the influence of the randoms rate Rr (i.e., assume Rr=0) in Eq. 1 for measurements at low activities to determine Rs and SF. We can then assume that SF is approximately constant with increasing activity and use SF determined at a low activity to calculate the randoms rates Rr and scatter rates Rs at higher activities.
The NEMA standard specifies the following lower activity threshold: "For scanner employing, radioactive scintillator material, measurements shall be performed until the single event rate is equal to twice intrinsic single event rate" [1, p. 11]. Our scanner has an intrinsic single event rate of 80 kcps and we reach a single event rate of 160 kcps at 430 kBq. Thus, we use this activity to estimate the scatter rate Rs using Eq. 1 while neglecting the randoms rate. This scatter rate is then used with Eq. 3 to determine the scatter fraction SF. This scatter fraction is assumed to be constant with varying activity and we use this with Eq. 2 to determine the randoms rates Rr at different activities. With these randoms rates we can evaluate Eqs. 1 and 3 again to determine the scatter rates and fractions at higher activities without neglecting the randoms rates.
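The following Python sketch spells out this non-contradictory reading of Eqs. 1–3 for systems with intrinsic radioactivity. It is our interpretation, not the wording of the standard; the data structure and the assumption that the true rate Rt is already available from the signal-window analysis are simplifications of this sketch.

```python
from dataclasses import dataclass

@dataclass
class CountRates:
    r_tot: float    # total coincidence rate R_TOT [cps]
    r_true: float   # true coincidence rate R_t [cps], from the signal-window analysis
    r_int: float    # intrinsic randoms rate R_int [cps], from the background scan

def scatter_fraction_low_activity(low: CountRates) -> float:
    """Step 1: at low activity (singles rate ~ twice the intrinsic singles rate),
    neglect R_r in Eq. (1) and evaluate Eq. (3)."""
    r_s = low.r_tot - low.r_true - low.r_int        # Eq. (1) with R_r = 0
    return r_s / (low.r_true + r_s)                 # Eq. (3)

def randoms_and_scatter(acq: CountRates, sf_low: float):
    """Step 2: assume SF is activity-independent and propagate it to any activity."""
    r_r = acq.r_tot - acq.r_true / (1.0 - sf_low)   # Eq. (2)
    r_s = acq.r_tot - acq.r_true - r_r - acq.r_int  # Eq. (1), now including R_r
    sf = r_s / (acq.r_true + r_s)                   # Eq. (3), re-evaluated
    return r_r, r_s, sf
```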
Alternatively, the NEMA standard allows the usage of a randoms rate estimate supplied directly by the scanner. Such estimates usually use one of two techniques: one using a delayed coincidence window (DCW) [30, 31] and one based on the singles rate [30]. The singles rate (SR) method infers the randoms rate Rij between two detector elements i and j from the singles rates Si and Sj using the formula
$$ R^{\text{SR}}_{ij} = 2 \tau S_{i} S_{j} \tag{4} $$
with the time coincidence window τ. However, this method systematically overestimates the randoms rate [32, 33]. Oliver et al. [34] proposed an improved method "Singles Prompt" (SP) which includes corrections based on the coincidence rate (or prompt rate) Pi to account for the contribution of true coincidences and pile-up events:
$$ R^{\text{SP}}_{ij} = \frac{2 \tau e^{- (\lambda + S) \tau}} {(1 - 2 \lambda \tau)^{2}} (S_{i} - e^{(\lambda + S) \tau} P_{i})(S_{j} - e^{(\lambda + S)\tau} P_{j}), \tag{5} $$
where λ is the solution of the equation
$$ 2 \tau \lambda^{2} - \lambda + S - P\,e^{(\lambda + S) \tau} = 0, \tag{6} $$
with the total singles rate \(S = \sum _{i} S_{i}\) and the total prompt rate \(P = \sum _{i} P_{i}\).
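For comparison, the SR and SP estimates of Eqs. 4–6 can be sketched as follows. This is our illustrative implementation, not the code used for the measurements reported below; the function names, the use of scipy's root finder, and the choice of initial guess λ ≈ S − P are assumptions of this sketch.

```python
import numpy as np
from scipy.optimize import fsolve

def randoms_sr(s_i, s_j, tau):
    """Singles-rate (SR) estimate, Eq. (4)."""
    return 2.0 * tau * s_i * s_j

def randoms_sp(singles, prompts, tau, i, j):
    """Singles-prompt (SP) estimate of Oliver et al., Eqs. (5) and (6).
    singles, prompts: arrays of per-element singles rates S_i and prompt rates P_i [cps];
    tau: coincidence time window [s]."""
    S = float(np.sum(singles))   # total singles rate
    P = float(np.sum(prompts))   # total prompt rate

    # Eq. (6): 2*tau*lam^2 - lam + S - P*exp((lam + S)*tau) = 0
    def f(lam):
        return 2.0 * tau * lam**2 - lam + S - P * np.exp((lam + S) * tau)

    lam = fsolve(f, x0=max(S - P, 0.0))[0]   # for small tau, the root lies close to S - P

    pref = 2.0 * tau * np.exp(-(lam + S) * tau) / (1.0 - 2.0 * lam * tau) ** 2
    corr = np.exp((lam + S) * tau)
    return pref * (singles[i] - corr * prompts[i]) * (singles[j] - corr * prompts[j])
```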
We have implemented these methods with the Hyperion II D scanner and can compare them empirically with the modified method the NEMA standard suggests. The NEMA standard specifies a cylindrical signal window of 8 mm around the phantom (i.e., a total diameter of 41 mm) in sinogram space. We applied an equivalent cylindrical signal window, i.e., we only determined the randoms rate for the pairs of detector elements whose lines of response intersect the cylindrical signal window.
Figure 5 shows the total randoms rates as a function of activity inside the scanner for the four different methods: NEMA, DCW, SR, and SP.
Comparison of different methods for the determination of random event rates. NEMA means a method based on the NEMA standard using Eq. 1, DCW uses a delayed coincidence window, SR is based on the singles rate using Eq. 4, and SP incorporates additional corrections using Eq. 5
As expected, the randoms estimate RSR is larger than the randoms estimate RSP: RSR≥RSP. The randoms estimate RDCW is similar to RSP, and the modified NEMA randoms estimate RNEMA is similar to RSR, which is known to be less precise than RDCW and RSP [34].
Oliver et al. [34] showed that randoms estimates RDCW using a delayed coincidence window (DCW) are larger than or equal to the randoms estimates RSP: RSR≥RDCW≥RSP. There are many publications investigating the correctness of these methods, providing evidence from theory, simulations, and measurements. For the NEMA method, on the other hand, we are not aware of any publications investigating the correctness of the method. Additionally, the verbatim definition of the NEMA method for systems with intrinsic radioactivity is contradictory, as shown above. However, we acknowledge the value of allowing a randoms estimation method which is independent of the ability to measure either delayed coincidences or singles rates. Thus, one simple revision to the standard could be to correct the contradictions in the definition, possibly in the way described in this work.
All of these points apply also to the scatter rate Rs defined in Eq. 1 and the noise-equivalent count rate, as the definitions of these observables depend on the randoms rate.
Sensitivity
We think the NEMA standard's protocol for the evaluation of the sensitivity is unclear. Section 5.3 of the NEMA standard specifies to axially step a point source through the scanner. Further, Section 5.3.4 implies that a different scan for each source position should be acquired. In Section 5.4, all of the data analysis is specified for single sinogram slices i. For instance, the sensitivity is defined as
$$ S_{i} = \frac{R_{i} - R_{B,i}}{A_{\text{cal}}} \tag{7} $$
with the counting rate Ri and the background rate RB,i of sinogram slice i. However, the NEMA standard only ever references sinogram slices and never different measurements. We have one measurement per source position, and each of these measurements has many sinogram slices. In other words, there are many measurements for each axial sinogram slice. Whenever the NEMA standard refers to sinogram slices, it remains unclear which measurement to consider. One possible intention could be to calculate the sum of all measurements; however, this is never explicitly stated. This would effectively create a sensitivity measurement with a virtual line source of activity n·A, where n is the number of measurements. Such a line source would be similar to the source distribution specified in the sensitivity protocol of the clinical NEMA NU 2-2012 standard. However, the sensitivity Si is defined by the activity Acal in Eq. 7, not the virtual activity n·A of the combined measurements. Unfortunately, the NEMA standard does not define Acal in this equation; the only definition of Acal is in Section 1.2 as "activity at time Tcal". In conclusion, if this interpretation were the intention of the NEMA standard, multiple required instructions would be missing.
Another possible interpretation could be to take the slice i of the measurement where the point source is located at the center position of the slice. However, this interpretation is not consistent with the formulas given for the total system sensitivity
$$ S_{\text{tot}} = \sum_{\text{all}\,i} S_{i}, \tag{8} $$
which lack a normalization by the total number of slices. With a normalization by the total number of slices, this would effectively be an additional axial signal window around the point source. However, the size of this axial signal window would depend on the scanner's slice thickness, giving an unfair disadvantage to high-resolution scanners. For instance, with a slice thickness of 1 mm, this axial signal window would cut into the point source. Additionally, this interpretation would not be realistic in the context of real-world applications, where the sensitivity is supposed to be an indicator of how many true coincidences one can expect for a given activity inside the scanner's FoV.
In summary, the NEMA standard does not include any instructions on how to analyze the data of the multiple measurements it instructs to take. It only defines the sensitivity of sinogram slices without specifying the relationship of the sinogram slices and measurements with different source positions.
One consistent alternative definition of sensitivity could simply sum all sinogram slices and then divide the total coincidence counts by the acquisition time and activity for each measurement (i.e., source position). The sensitivity profile would consequently be defined as this total sensitivity as a function of the source position. To calculate the mouse- and rat-equivalent sensitivities, one would average this sensitivity profile inside the central 7 cm or 15 cm. Because the NEMA standard specifies a transversal signal window with a width of 20 mm in sinogram space, it would be consistent to apply the same signal window around the point source in axial direction. We believe that this method is already used in multiple performance evaluations based on NEMA [5, 12, 14, 35], although the exact details of the methods are usually not explained.
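A minimal sketch of this alternative reading is given below. The data layout, the background handling, and the use of the axial centre of the scanned positions as the FoV centre are assumptions of this sketch, not part of the standard.

```python
import numpy as np

def sensitivity_profile(measurements, axial_window_mm=20.0):
    """For each source position: sum the (background-corrected) counts of all slices
    within an axial window around the source and divide by acquisition time and activity.

    measurements: list of dicts with keys
        'z_source' [mm], 'z_slices' [mm array], 'counts' [array per slice],
        't_acq' [s], 'activity' [Bq, decay-corrected A_cal]
    """
    positions, sens = [], []
    for m in measurements:
        in_window = np.abs(m['z_slices'] - m['z_source']) <= axial_window_mm / 2.0
        positions.append(m['z_source'])
        sens.append(np.sum(m['counts'][in_window]) / (m['t_acq'] * m['activity']))
    return np.asarray(positions), np.asarray(sens)

def equivalent_sensitivity(positions, sens, length_mm=70.0):
    """Mouse- (70 mm) or rat-equivalent (150 mm) sensitivity: average the profile
    over the central `length_mm` around the axial centre of the scanned positions."""
    central = np.abs(positions - positions.mean()) <= length_mm / 2.0
    return float(np.mean(sens[central]))
```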
Therefore, the ambiguity of the NEMA standard can lead to unclear and incomparable results in performance publications based on NEMA, impeding an objective comparison of different sensitivity results.
For instance, Prasad et al. [13] seem to follow the formulas given by NEMA quite closely, without clearly specifying how the data from the measurements at different source positions are used in the data analysis. The reported sensitivity profile has data points above 1 cps/Bq, i.e., an impossible sensitivity larger than 100% for the central slices. They claim a total absolute sensitivity of 12.74%, which is implausibly large compared to the expected geometric sensitivity of 12.9%. We calculated this ideal geometric sensitivity using their scanner's diameter, axial length, and crystal thicknesses with the simple geometric model explained in [4]. The usual ratio between measured peak sensitivity and geometric sensitivity is between 0.3 and 0.5 [4].
Image quality, accuracy of attenuation, and scatter corrections
The NEMA standard defines several observables for quantitative analysis of the image quality phantom. The uniformity is defined as the relative standard deviation of all voxels in a large cylindrical volume of interest over the uniform region in the image quality phantom. For determination of the recovery coefficients, the image slices along the central 10 mm of the hot rods are averaged. Then, the recovery coefficients are defined as the maximum values in a circular region of interest around the hot rods with different diameters, divided by the mean activity in the volume of interest over the uniform region. The issue with this definition is that the recovery coefficients are correlated with the uniformity: The maximum value of a randomly distributed sample increases with variance, even if the mean value of the distribution is constant. Thus, this definition of the recovery coefficients does not measure the mean recovery in the hot rods, but measures a combination of recovery and variance. With a high variance and a good recovery the recovery coefficients can even reach values larger than 1.
We can demonstrate this behavior in a simple Monte Carlo simulation, where we assume that the reconstructed activity in a voxel follows a normal distribution with the standard deviation given by the uniformity. The simulated geometry is the NEMA image quality phantom. Figure 6 shows the simulated recovery coefficients of the 5-mm rod as a function of the uniformity. The ground truth for the recovery coefficient for the activity in the rod was 0.95. The data analysis follows the NEMA standard, i.e., the recovery coefficient is defined by the maximum activity in the region of interest. The drawn errors are calculated from the errors on the mean of the averaged pixels in the region of interest. The simulation demonstrates that the recovery coefficient is always overestimated compared to the ground truth and increases with increasing variance (i.e., larger uniformity values).
Simulated recovery coefficient of the 5-mm rod as a function of the uniformity. The ground truth for the recovery was 0.95. The simulated recovery coefficients are always larger than the ground truth and increase with increasing variance (i.e., larger uniformity values)
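The simulation can be reproduced with a few lines of Python. The number of voxels in the region of interest and the normalization of the uniform-region mean to exactly 1 are assumptions of this sketch; they affect the magnitude of the bias, not its direction.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulated_recovery(uniformity, true_recovery=0.95, n_roi_voxels=100, n_trials=10000):
    """Draw ROI voxel values from a normal distribution whose mean is the true
    recovery and whose relative standard deviation equals the uniformity, then
    report the NEMA-style recovery coefficient (the ROI maximum)."""
    voxels = rng.normal(loc=true_recovery,
                        scale=uniformity * true_recovery,
                        size=(n_trials, n_roi_voxels))
    rc = voxels.max(axis=1)          # NEMA: recovery coefficient = maximum in the ROI
    return rc.mean(), rc.std()

for u in (0.02, 0.05, 0.10, 0.15):
    mean_rc, std_rc = simulated_recovery(u)
    print(f"uniformity {u:5.1%}: simulated recovery coefficient {mean_rc:.3f} +/- {std_rc:.3f}")
```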
Thus, the NEMA standard's definition of the recovery coefficients hampers an easy comparison of different scanners' recovery performance, because the recovery and uniformity must be compared at the same time. In other words, the same scanner can achieve different recovery performance at different uniformity points. The user can influence the uniformity with parameters such as the amount of filtering during reconstruction. Figure 7 shows measured recovery coefficients as a function of varying uniformity. Each uniformity value corresponds to a different width of a Gaussian kernel used during reconstruction of a scan of the image quality phantom. We used the maximum likelihood expectation maximization reconstruction described in [36]. As predicted by the Monte Carlo simulation, the recovery coefficients are correlated with the relative standard deviation in the uniformity region: Both values decrease with large filter widths, i.e., reduced variance in the image. Of course, it is not unexpected that the recovery decreases with stronger filtering during reconstruction. However, the observed effect is on top of the expected decrease in recovery due to filtering. Using the NEMA standard's observables, improving the uniformity performance will always lead to a loss in observed recovery, regardless of whether the actual true recovery degraded or not. When conducting a NEMA performance evaluation, one has to choose an arbitrary point on the uniformity and recovery curve, resulting in one of many possible results, which are difficult to compare with the results of other scanners.
Measured recovery coefficients as a function of uniformity. The curves of the recovery coefficients correspond to rods with diameters of 5 to 1 mm, from top to bottom. Each uniformity value corresponds to a different filter width used during reconstruction. A larger filter reduces variance and therefore improves uniformity (i.e., decreases the relative standard deviation). The recovery coefficients increase with increasing uniformity values, so overall image quality performance is a trade-off between uniformity and recovery
As another minor issue, the NEMA standard derives the standard deviation of the recovery coefficients from the standard deviations of the line profiles along the axial direction and the standard deviation of the uniform region using Gaussian error propagation. This is not the correct standard deviation of the recovery coefficient, because the standard deviation of the maximum of a random sample is not the standard deviation of the underlying distribution.
Fixing the definition of the recovery coefficients is not trivial. The NEMA standard probably uses the maximum due to the small diameters of the hot rods. For the very small rods, very few, if any, voxels lie clearly in the center of the rods. Alternative definitions using the mean in a volume of interest will therefore be biased by the smaller reconstructed activity in the border regions of the rods. However, with today's high-resolution PET scanners, we believe it would be possible for most scanners to define volumes of interest (VoIs) inside the hot rods and then define the recovery coefficients using the mean reconstructed activity inside the VoIs. Even if these VoIs would partially include the border regions of the rods, it would still at least be a comparable measure of recovery for every scanner. For the larger rods, it should not be a problem to define VoIs which lie well inside the hot rods with a sufficient number of voxels. It is for these larger rods that the current definition of recovery coefficients leads to basically a recovery of 1 or larger for all current scanners, hindering a differentiation of subtle differences in recovery between the scanners.
Another addition to the NEMA standard could be a scan of the image quality phantom at low activities to evaluate the performance of the reconstruction under low statistics, because iterative reconstruction methods usually exhibit bias at low statistics [37, 38].
Another research opportunity would be the development of a new phantom geometry using small hot spheres instead of axial hot rods. Such a geometry would be more similar to hot lesions in rodents and thus provide a benchmark of contrast recovery which is more similar to uptake in rodents. It would also be more directly comparable to the phantom used in the clinical NEMA NU-2 standard [39]. Ideally, such hot spheres would be situated in a warm background, although that would introduce the problem of cold sphere walls [40]. However, manufacturing a practical phantom with millimeter-sized fillable spheres is mechanically challenging.
General points
The NEMA standard does not explicitly mandate the use of the same settings for each measurement. Most scanners offer a multitude of settings for measurements and data processing, such as trigger settings, coincidence and energy window sizes, and quality filters for gamma interactions (e.g., detector scatter rejection [41, 42]). The choice of settings often requires a trade-off between different performance parameters. For example, the sensitivity benefits from wide energy and coincidence windows and no quality filters, while the image quality and spatial resolution benefit from narrow windows and strict quality filters. One could report very misleading performance results by optimizing the settings for each performance measurement separately, thus achieving performance results which are unattainable at the same time in real-world applications.
While following the standard, many performance publications based on NEMA neither state explicitly whether they used the same settings for every measurement nor report all settings used for each measurement. For example, Nagy et al. [5] use wide energy windows for the sensitivity and count rate measurements and a narrow energy window for the measurement of spatial resolution. They do not report any settings for the image quality measurement.
Another issue is the mandated use of sinograms. The data analyses for every measurement except the image quality measurement are described on sinograms. However, most modern scanners store their data in listmode format and might only implement sinogram support to conduct the NEMA measurements. To our knowledge, all NEMA NU-4 measurements published in the last 5 years used listmode files for data acquisition and had to convert the listmode files to sinograms after the measurements [4–15, 43]. Spinks et al. [8] even mention that the calculation of scatter fractions was omitted due to missing sinogram support, so this performance evaluation apparently used only listmode data for the data analysis. The number of scintillator crystals is usually above 30 000 in modern small-animal PET systems, so that full 3D sinograms have a file size of multiple gigabytes even for very short measurements. Listmode files, on the other hand, are usually much smaller, making sinograms comparatively unwieldy.
PET scanners with monolithic scintillator blocks [15, 44] might not have discrete detector bins which correspond naturally to sinogram bins. For instance, such detectors might use continuous regression methods for determining the most likely position of gamma interactions [45].
The data analyses in the NEMA standard could be specified without the use of sinograms, since most of the specified cuts in the sinograms could be specified as cylindrical cuts in the scanner's field of view. The standard could still allow the use of sinograms as one possibility to implement the specified geometric cuts for backwards compatibility.
Eleven years after the publication of the NEMA NU-4 standard, we believe it is time for a revision of the standard. In this work, we have pointed out several flaws in the standard which should be addressed in the next version. Additionally, the new technological developments of the last decade would by themselves warrant discussing an updated version. With this publication, we would like to open this discussion.
National Electrical Manufacturers Association (NEMA). NEMA NU4-2008: performance measurements of small animal positron emission tomographs. Rosslyn: National Electrical Manufacturers Association: 2008.
Goertzen AL, Bao Q, Bergeron M, Blankemeyer E, Blinder S, Cañadas M, Chatziioannou AF, Dinelle K, Elhami E, Jans H-S, Lage E, Lecomte R, Sossi V, Surti S, Tai Y-C, Vaquero JJ, Vicente E, Williams DA, Laforest R. NEMA NU 4-2008 comparison of preclinical PET imaging systems. J Nucl Med. 2012; 53(8):1300–9. https://doi.org/10.2967/jnumed.111.099382. Accessed 09 Jul 2015.
Weissler B, Gebhardt P, Dueppenbecker PM, Wehner J, Schug D, Lerche CW, Goldschmidt B, Salomon A, Verel I, Heijman E, Perkuhn M, Heberling D, Botnar RM, Kiessling F, Schulz V. A digital preclinical PET/MRI insert and initial results. IEEE Trans Med Imaging. 2015; 34(11):2258–70. https://doi.org/10.1109/TMI.2015.2427993.
Hallen P, Schug D, Weissler B, Gebhardt P, Salomon A, Kiessling F, Schulz V. PET performance evaluation of the small-animal Hyperion IID PET/MRI insert based on the NEMA NU-4 standard. Biomed Phys Eng Express. 2018; 4(6):065027. https://doi.org/10.1088/2057-1976/aae6c2. Accessed 23 Jan 2019.
Nagy K, Tóth M, Major P, Patay G, Egri G, Häggkvist J, Varrone A, Farde L, Halldin C, Gulyás B. Performance evaluation of the small-animal nanoScan PET/MRI system. J Nucl Med. 2013; 54(10):1825–32. https://doi.org/10.2967/jnumed.112.119065. Accessed 02 Mar 2016.
Ko GB, Kim KY, Yoon HS, Lee MS, Son J-W, Im H-J, Lee JS. Evaluation of a silicon photomultiplier PET insert for simultaneous PET and MR imaging. Med Phys. 2015; 43(1):72–83. https://doi.org/10.1118/1.4937784. Accessed 07 May 2018.
Omidvari N, Cabello J, Topping G, Schneider FR, Paul S, Schwaiger M, Ziegler SI. PET performance evaluation of MADPET4: a small animal PET insert for a 7 T MRI scanner. Phys Med Biol. 2017; 62(22):8671. https://doi.org/10.1088/1361-6560/aa910d. Accessed 07 May 2018.
Spinks TJ, Karia D, Leach MO, Flux G. Quantitative PET and SPECT performance characteristics of the Albira Trimodal pre-clinical tomograph. Phys Med Biol. 2014; 59(3):715. https://doi.org/10.1088/0031-9155/59/3/715. Accessed 01 Feb 2017.
Miyake KK, Matsumoto K, Inoue M, Nakamoto Y, Kanao S, Oishi T, Kawase S, Kitamura K, Yamakawa Y, Akazawa A, Kobayashi T, Ohi J, Togashi K. Performance evaluation of a new dedicated breast PET scanner using NEMA NU4-2008 Standards. J Nucl Med. 2014; 55(7):1198–203. https://doi.org/10.2967/jnumed.113.131565. Accessed 01 Feb 2017.
Sato K, Shidahara M, Watabe H, Watanuki S, Ishikawa Y, Arakawa Y, Nai YH, Furumoto S, Tashiro M, Shoji T, Yanai K, Gonda K. Performance evaluation of the small-animal PET scanner ClairvivoPET using NEMA NU 4-2008 Standards. Phys Med Biol. 2016; 61(2):696–711. https://doi.org/10.1088/0031-9155/61/2/696.
Mackewn JE, Lerche CW, Weissler B, Sunassee K, de Rosales RTM, Phinikaridou A, Salomon A, Ayres R, Tsoumpas C, Soultanidis GM, Gebhardt P, Schaeffter T, Marsden PK, Schulz V. PET performance evaluation of a pre-clinical SiPM-based MR-compatible PET scanner. IEEE Trans Nucl Sci. 2015; 62(3):784–90. https://doi.org/10.1109/TNS.2015.2392560.
Wong W-H, Li H, Baghaei H, Zhang Y, Ramirez RA, Liu S, Wang C, An S. Engineering and performance (NEMA and Animal) of a lower-cost higher-resolution animal PET/CT scanner using photomultiplier-quadrant-sharing detectors. J Nucl Med. 2012; 53(11):1786–93. https://doi.org/10.2967/jnumed.112.103507. Accessed 04 May 2015.
Prasad R, Ratib O, Zaidi H. NEMA NU-04-based performance characteristics of the LabPET-8TM small animal PET scanner. Phys Med Biol. 2011; 56(20):6649. https://doi.org/10.1088/0031-9155/56/20/009. Accessed 21 Jan 2015.
Szanda I, Mackewn J, Patay G, Major P, Sunassee K, Mullen GE, Nemeth G, Haemisch Y, Blower PJ, Marsden PK. National Electrical Manufacturers Association NU-4 performance evaluation of the PET component of the nanoPET/CT preclinical PET/CT scanner. J Nucl Med. 2011; 52(11):1741–7. https://doi.org/10.2967/jnumed.111.088260. Accessed 05 May 2015.
Krishnamoorthy S, Blankemeyer E, Mollet P, Surti S, Holen RV, Karp JS. Performance evaluation of the MOLECUBES BETA-CUBE—a high spatial resolution and high sensitivity small animal PET scanner utilizing monolithic LYSO scintillation detectors. Phys Med Biol. 2018; 63(15):155013. https://doi.org/10.1088/1361-6560/aacec3. Accessed 13 Aug 2018.
Bao Q, Newport D, Chen M, Stout DB, Chatziioannou AF. Performance evaluation of the Inveon Dedicated PET preclinical tomograph based on the NEMA NU-4 standards. J Nucl Med. 2009; 50(3):401–8. https://doi.org/10.2967/jnumed.108.056374. Accessed 22 Apr 2015.
Gu Z, Taschereau R, Vu NT, Wang H, Prout DL, Silverman RW, Bai B, Stout DB, Phelps ME, Chatziioannou AF. NEMA NU-4 performance evaluation of PETbox4, a high sensitivity dedicated PET preclinical tomograph. Phys Med Biol. 2013; 58(11):3791. https://doi.org/10.1088/0031-9155/58/11/3791. Accessed 27 Jan 2017.
España S, Marcinkowski R, Keereman V, Vandenberghe S, Holen RV. DigiPET: sub-millimeter spatial resolution small-animal PET imaging using thin monolithic scintillators. Phys Med Biol. 2014; 59(13):3405. https://doi.org/10.1088/0031-9155/59/13/3405. Accessed 22 Aug 2014.
Hudson HM, Larkin RS. Accelerated image reconstruction using ordered subsets of projection data. IEEE Trans Med Imaging. 1994; 13(4):601–9. https://doi.org/10.1109/42.363108.
Barrett HH, White T, Parra LC. List-mode likelihood. JOSA A. 1997; 14(11):2914–23. https://doi.org/10.1364/JOSAA.14.002914. Accessed 27 Dec 2019.
Parra L, Barrett HH. List-mode likelihood: EM algorithm and image quality estimation demonstrated on 2-D PET. IEEE Trans Med Imaging. 1998; 17(2):228–35. https://doi.org/10.1109/42.700734.
Yang Y, Tai Y-C, Siegel S, Newport DF, Bai B, Li Q, Leahy RM, Cherry SR. Optimization and performance evaluation of the microPET II scanner for in vivo small-animal imaging. Phys Med Biol. 2004; 49(12):2527–45. https://doi.org/10.1088/0031-9155/49/12/005. Accessed 27 Dec 2019.
Alessio AM, Stearns CW, Tong S, Ross SG, Kohlmyer S, Ganin A, Kinahan PE. Application and evaluation of a measured spatially variant system model for PET image reconstruction. IEEE Trans Med Imaging. 2010; 29(3):938–49. https://doi.org/10.1109/TMI.2010.2040188.
Gong K, Cherry SR, Qi J. On the assessment of spatial resolution of PET systems with iterative image reconstruction. Phys Med Biol. 2016; 61(5):193–202. https://doi.org/10.1088/0031-9155/61/5/N193. Accessed 27 Dec 2019.
Lord Rayleigh FRS. XXXI, Investigations in optics, with special reference to the spectroscope. Lond Edinb Dublin Philos Mag J Sci. 1879; 8(49):261–74. https://doi.org/10.1080/14786447908639684. Accessed 16 Aug 2018.
Born M, Wolf E. Principles of Optics, 7th edn. Cambridge University Press: 1999.
McKechnie TS. Telescope resolution and optical tolerance specifications. In: General theory of light propagation and imaging through the atmosphere, 1st edn. Springer Series in Optical Sciences, volume 1. Heidelberg: Springer: 2016. p. 405–64. https://doi.org/10.1007/978-3-319-18209-4_14.
Lodge MA, Leal JP, Rahmim A, Sunderland JJ, Frey EC. Measuring PET spatial resolution using a cylinder phantom positioned at an oblique angle. J Nucl Med. 2018; 59(11):1768–75. https://doi.org/10.2967/jnumed.118.209593. Accessed 01 Feb 2019.
Efthimiou N, Loudos G, Karakatsanis NA, Panayiotakis GS. Effect of 176lu intrinsic radioactivity on dual head PET system imaging and data acquisition, simulation, and experimental measurements. Med Phys. 2013; 40(11):112505. https://doi.org/10.1118/1.4824694. Accessed 28 Dec 2019.
Hoffman EJ, Huang SC, Phelps ME, Kuhl DE. Quantitation in positron emission computed tomography: 4. Effect of accidental coincidences. J Comput Assist Tomogr. 1981; 5(3):391–400.
Evans RD. The Atomic Nucleus, International series in pure and applied physics. New York: McGraw-Hill; 1955.
Rafecas M, Torres I, Spanoudaki V, McElroy DP, Ziegler SI. Estimating accidental coincidences for pixelated PET detectors and singles list-mode acquisition. Nucl Inst Methods Phys Res Sec A Accelerators Spectrometers Detectors Assoc Equip. 2007; 571(1):285–8. https://doi.org/10.1016/j.nima.2006.10.084. Accessed 08 Mar 2018.
Oliver JF, Rafecas M. Improving the singles rate method for modeling accidental coincidences in high-resolution PET. Phys Med Biol. 2010; 55(22):6951. https://doi.org/10.1088/0031-9155/55/22/022. Accessed 08 Mar 2018.
Oliver JF, Rafecas M. Modelling random coincidences in positron emission tomography by using singles and prompts: a comparison study. PLoS ONE. 2016; 11(9):0162096. https://doi.org/10.1371/journal.pone.0162096. Accessed 14 Sept 2016.
Cañadas M, Embid M, Lage E, Desco M, Vaquero JJ, Pérez JM. NEMA NU 4-2008 Performance measurements of two commercial small-animal PET scanners: ClearPET and rPET-1. IEEE Trans Nucl Sci. 2011; 58(1):58–65. https://doi.org/10.1109/TNS.2010.2072935.
Salomon A, Goldschmidt B, Botnar R, Kiessling F, Schulz V. A self-normalization reconstruction technique for PET scans using the positron emission data. IEEE Trans Med Imaging. 2012; 31(12):2234–40. https://doi.org/10.1109/TMI.2012.2213827.
Van Slambrouck K, Stute S, Comtat C, Sibomana M, van Velden FHP, Boellaard R, Nuyts J. Bias reduction for low-statistics PET: maximum likelihood reconstruction with a modified poisson distribution. IEEE Trans Med Imaging. 2015; 34(1):126–36. https://doi.org/10.1109/TMI.2014.2347810.
Walker MD, Asselin M-C, Julyan PJ, Feldmann M, Talbot PS, Jones T, Matthews JC. Bias in iterative reconstruction of low-statistics PET data: benefits of a resolution model. Phys Med Biol. 2011; 56(4):931–49. https://doi.org/10.1088/0031-9155/56/4/004. Accessed 02 Jan 2020.
National Electrical Manufacturers Association (NEMA). NEMA NU2-2018: performance measurements of positron emission tomographs (PET). 2018.
Hofheinz F, Dittrich S, Pötzsch C, Hoff Jvd. Effects of cold sphere walls in PET phantom measurements on the volume reproducing threshold. Phys Med Biol. 2010; 55(4):1099–113. https://doi.org/10.1088/0031-9155/55/4/013. Accessed 02 Jan 2020.
Hunter WCJ, Barrett HH, Lewellen TK, Miyaoka RS. Multiple-hit parameter estimation in monolithic detectors. IEEE Trans Med Imaging. 2013; 32(2):329–37. https://doi.org/10.1109/TMI.2012.2226908.
Gross-Weege N, Schug D, Hallen P, Schulz V. Maximum likelihood positioning algorithm for high-resolution PET scanners. Med Phys. 2016; 43(6):3049–61. https://doi.org/10.1118/1.4950719. Accessed 09 Nov 2016.
Pajak MZ, Volgyes D, Pimlott SL, Salvador CC, Asensi AS, McKeown C, Waldeck J, Anderson KI. NEMA NU4-2008 Performance evaluation of Albira: a two-ring small-animal PET system using continuous LYSO crystals. Open Med J. 2016; 3(1). https://doi.org/10.2174/1874220301603010012. Accessed 24 Jan 2019.
Sánchez F, Moliner L, Correcher C, González A, Orero A, Carles M, Soriano A, Rodriguez-Alvarez MJ, Medina LA, Mora F, Benlloch JM. Small animal PET scanner based on monolithic LYSO crystals: performance evaluation. Med Phys. 2012; 39(2):643–53. https://doi.org/10.1118/1.3673771. Accessed 24 Jan 2019.
Müller F, Schug D, Hallen P, Grahe J, Schulz V. Gradient tree boosting-based positioning method for monolithic scintillator crystals in positron emission tomography. IEEE Trans Radiat Plasma Med Sci. 2018; 2(5):411–21. https://doi.org/10.1109/TRPMS.2018.2837738.
PH's position was funded by Philips Research Europe.
Department of Physics of Molecular Imaging Systems, Institute for Experimental Molecular Imaging, RWTH Aachen University, Pauwelstraße 19, Aachen, 52074, Germany
Patrick Hallen, David Schug & Volkmar Schulz
Hyperion Hybrid Imaging Systems GmbH, Pauwelstraße 19, Aachen, 52074, Germany
David Schug & Volkmar Schulz
III. Physikalisches Institut B, RWTH Aachen University, Otto-Blumenthal-Straße, Aachen, 52074, Germany
Fraunhofer Institute for Digital Medicine MEVIS, Forckenbeckstrasse 55, Aachen, 52074, Germany
PH discovered the issues described in this work and conceptualized the calculations, simulations, and measurements to demonstrate these issues. He developed the simulations and data analysis and performed the measurements. He wrote the manuscript and created the figures with the exception of Fig. 1. DS created Fig. 1. Both DS and VS contributed substantially to discussions about this work and provided comments on the manuscript. All authors read and approved the final manuscript.
Correspondence to Patrick Hallen.
Hallen, P., Schug, D. & Schulz, V. Comments on the NEMA NU 4-2008 Standard on Performance Measurement of Small Animal Positron Emission Tomographs. EJNMMI Phys 7, 12 (2020). https://doi.org/10.1186/s40658-020-0279-2
The subtraction game
Alice and Bob play a game that starts by Bob picking a secret integer $N\ge100$. Then the game goes through several rounds.
In every round, Alice picks an integer $x\ge3$. Every number can be picked at most once (and hence cannot be picked again in later rounds).
Alice announces $x$ to Bob.
If $x$ divides Bob's current secret value $N$, then Alice wins and the game ends.
If $x$ does not divide Bob's current secret value $N$, then Bob subtracts $x$ from $N$ (and thus replaces his old secret value by $N-x$).
If the secret value ever becomes non-positive, then Bob wins and the game ends.
Question: Can Alice enforce a win? (As usual, we assume that Alice and Bob use optimal strategies.)
mathematics strategy game
Zikato
Gamow
$\begingroup$ What happens if Bob picks 100 and Alice picks 99? Bob will have N = 1 and Alice can't guess lower than 3. $\endgroup$ – Zikato Mar 10 '15 at 8:53
$\begingroup$ whatever Alice guesses, that will be subtracted from 1 and it will give negative answer and Bob wins $\endgroup$ – user9174 Mar 10 '15 at 8:56
$\begingroup$ A few observations: Alice's stategy consists in a list of numbers (Since the only in-game information alice obtains is whether she has won). All we have to do is use CRT to build a number $N$ that doesn't work for any given sequence. (Notice we only have to consider sequences with sum $N-2$ or less. Clarification: We don't have to build a number that doesn't work for any sequence, for every sequence we have to build a number. $\endgroup$ – Jorge Fernández Mar 10 '15 at 14:34
$\begingroup$ @Zikato that would be a loss for Alice. no matter what she guesses (5 for example), she gets a loss at -4. $\endgroup$ – user3453281 Mar 10 '15 at 14:42
$\begingroup$ Alice could have won if x=2 was allowed. The restriction x>=3 makes me feel that she is not going to be able to win. $\endgroup$ – Raziman T V Mar 10 '15 at 20:22
If Alice plays [3, 7, 5, 8, 6, 15, 4, 10, 27, 12, 9, 20, 13, 24, 19, 60, 32, 40, 38, 30, 72, 120]
and Bob's integer is greater than 202
then Alice wins.
Proof: check 203 through 203+120. Check that Alice's strategy covers every residue class (mod 120).
Have fun optimizing 202 down.
I found this sequence using a program and this page. Specifically, Erdos discovered that every integer lies in one of the modular residue classes 0 (mod 3), 0(4), 0(5), 1(6), 1(8), 2(10), 11(12), 1(15), 14(20), 5(24), 8(30), 6(40), 58(60), or 26(120). By inserting numbers between these guesses, Alice can cover all of these residue classes.
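A brute-force check of this claim is straightforward. The snippet below is an illustration added for this write-up, not part of the original answer; it treats running out of picks as a loss for Alice.

```python
def alice_wins(n, picks):
    """Play Alice's fixed pick sequence against Bob's secret value n."""
    for x in picks:
        if n % x == 0:      # x divides the current value: Alice wins
            return True
        n -= x
        if n <= 0:          # value went non-positive: Bob wins
            return False
    return False            # sequence exhausted without a win

picks = [3, 7, 5, 8, 6, 15, 4, 10, 27, 12, 9, 20, 13, 24,
         19, 60, 32, 40, 38, 30, 72, 120]

losses = [n for n in range(203, 2000) if not alice_wins(n, picks)]
print(losses)   # per the answer (and kaine's comment below), no losses are expected above 202
```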
Lopsy
$\begingroup$ Nice one. I tried with 120 and could not find anything. Got one with 144 though, but the limit was way higher than 203. $\endgroup$ – Raziman T V Mar 11 '15 at 13:14
$\begingroup$ While interesting, this doesn't seem to answer the question. If Bob knows this (and he should), he can just pick 202, and Alice loses. Unless there is a single sequence that covers everything n>=100, the answer should be "no" (or at least "probably not, but unproven yet"). $\endgroup$ – Set Big O Mar 11 '15 at 13:15
$\begingroup$ I know that. This is partial progress. Perhaps you can adapt this strategy to make the bound 202 lower. Also, this post may be a good counter to debunk bad 'proofs' that Alice can never force a win. $\endgroup$ – Lopsy Mar 11 '15 at 13:23
$\begingroup$ Nice. Did your program verify that this is the optimal way to cover Erdos's residue classes? $\endgroup$ – Julian Rosen Mar 11 '15 at 16:50
$\begingroup$ For anyone curious the only numbers that don't seem to work for are 154 and 202. $\endgroup$ – kaine Mar 11 '15 at 19:25
Alice has a very simple starting strategy that guarantees a win at least one-third of the time regardless of Bob's strategy.
Randomly choose one of 3, 4, and 5. If she chooses 4 or 5, choose 3 next.
Why this works:
All integers are either $3x$, $3x+1$, or $3x+2$ for some integer $x$. If Bob chooses a multiple of 3, there is a 1 in 3 chance Alice will choose 3 and win right away. If Bob chooses a number that's one more than a multiple of 3, Alice has a 1 in 3 chance of choosing 4 first, so $3x+1-4=3x-3=3(x-1)$, which is a multiple of 3, which is what Alice will choose second. If Bob chooses $3x+2$, then if Alice chooses 5 first, $3x+2-5=3(x-1)$ and Alice will get it when she chooses 3 second. Because neither Bob nor Alice knows in advance which number Alice will choose first, Bob has no way to defend against Alice's strategy.
I believe that there is a strategy under which Alice can always win by choosing a certain series of numbers in order; however, I have not yet found the right series. My reasoning is as follows:
My strategy for trying to determine the series is similar to how the sliding bolt puzzle works. If you look at this in terms of modular arithmetic, we are simply trying to eliminate possible states and force the solution to move to a single state regardless of where it started from.
The way to analyze a particular series of numbers is to look at in terms of the modulo of the least common multiple of all the numbers. For example, let's look at if 2 were allowed, but the only numbers we could use were 2, 3, 4, 6, 8, and 9. The least common multiple of these numbers is 72, so we are interested in X mod 72. Initially, X mod 72 could be any number between 0 and 71. If we choose 2 first and it's not a multiple of 2, then X-2 mod 72 could be any odd number between 1 and 71. If we then choose 3, then X-2 mod 72 could not be 3, 9, etc. so X-5 mod 72 could not be an odd number or 0, 6, 12, etc. There are many more states than in the sliding bolt puzzle, but I feel like there should be a way to whittle away the possibilities until it has been forced into a single state.
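The residue-class bookkeeping described here is easy to automate. The following sketch, written for this post, tracks which residues of Bob's original number, modulo the LCM of the picks, could still survive an entire guess sequence. It uses the hypothetical pick set 2, 3, 4, 6, 8, 9 from the example above and ignores the "value goes non-positive" loss, which is valid for sufficiently large N.

```python
from math import gcd
from functools import reduce

def lcm(a, b):
    return a * b // gcd(a, b)

def surviving_residues(picks):
    """Residue classes (mod lcm of the picks) of Bob's original N that survive
    the whole pick sequence without Alice winning."""
    m = reduce(lcm, picks)
    alive = set(range(m))
    subtracted = 0                          # total amount subtracted so far
    for x in picks:
        # Alice's pick x wins exactly when N - subtracted is divisible by x.
        alive = {r for r in alive if (r - subtracted) % x != 0}
        subtracted += x
    return m, alive

m, alive = surviving_residues([2, 3, 4, 6, 8, 9])   # the (2-allowed) example from the text
print(f"{len(alive)} of {m} residue classes survive")
```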
Rob Watts
$\begingroup$ Does not work if Bob chooses 191. $\endgroup$ – Raziman T V Mar 11 '15 at 0:03
$\begingroup$ @crazyiman Yeah, I realized I made a mistake. I think my strategy will work, but I haven't gone through using the right numbers yet. $\endgroup$ – Rob Watts Mar 11 '15 at 1:48
This is wrong - see Lopsy's answer. The hole in the proof is the claim that Alice removes at most 1/n numbers with a pick of n. Because of the existence of covering systems, she starts off with a lot more information than I thought, and so she can remove more numbers.
It turns out that Erdos is smarter than me...
No, Alice can't force a win, because she doesn't get enough information from each turn to improve her choices.
Any time Alice picks a number n, she removes at most 1/n numbers from the pool of Bob's possible picks (less than that if she picks badly). She can't repeat, so after k picks the best she could have done is to leave $\frac{2}{3}\cdot\frac{3}{4}\cdots\frac{k+1}{k+2}$ of the entries still in the pool.
The log of $\frac{k-1}{k}$ approximates to $-\frac{1}{k}$, so the log of the entire fraction will be more than $-\frac{1}{3} - \frac{1}{4} - \dots - \frac{1}{k+2}$, which is more than $-(\ln(k+2) + 1)$, since it's a subset of the harmonic series. Therefore the fraction of the original numbers that remain is larger than $\frac{1}{e\,(k+2)}$.
But the sum of those k picks must be at least $\frac{k(k+1)}{2}$. Multiplying those two numbers together we get $\frac{k(k+1)}{2e(k+2)}$ which will be larger than 1 for k > 6, and converges on $\frac{k}{2e}$, a steadily increasing number. This means there are at least some numbers which Alice can't eliminate and which will win for Bob.
Therefore, if Bob knows what Alice's numbers will be, he can win - so there can't be a forcing strategy for Alice. She can win (her odds are clearly at least 1/3), but can't guarantee it. The chances of a particular number winning will converge on $\frac{1}{ek}$, so the bigger the number Bob picks, the worse his chances. He'll do best with a randomly chosen number near 100.
Callidus
$\begingroup$ I don't think this argument is correct. If Bob's limit was 10000 instead of 100, there exists a sequence of numbers that guarantees a win for Alice. I think your mistake is multiplying 2/3 * 3/4 …. By smartly picking the numbers, you can do better than that. $\endgroup$ – Raziman T V Mar 11 '15 at 12:16
With the restrictions imposed in the question, it's not possible for Alice to guarantee win.
Using "optimal strategy", Alice must not guess using any number larger than 100 - the reasoning behind this is that if Bob picks the number 100, 101, 102, or 103, and Alice picked a larger number than 100 (and didn't win), Bob would automatically win; in order for Alice's strategy to be "optimal", it must encompass the ability to derive a win no matter the number (including numbers 100 - 103). (In other words, it's not optimal to pick anything higher than 100).
The problem comes when she starts using small numbers. Because Alice cannot repeat numbers, and she doesn't want to go into the negatives, she further limits her usable number set - she can't use anything that's higher than 51 (51 included). If she picks 51 and Bob picked 100, she now has to derive from 49; but what if Bob picked 101? or 102? Now she has to derive from x - 51, and if that just so happened to be a prime number that's higher than what she has left, well, she's screwed (As in it's just not optimal).
There are three ways for Alice to look at this: starting small, starting big (relatively large numbers, e.g., 50 - 75 ish), or going at random.
Because the OP said "assume optimal strategies are used", Alice will not go at random. (Clearly not optimal).
If she starts big, she runs the risk of going under - as in, if she starts big, she could instantly throw herself under the bus and drop into a negative value, thus losing the game. This has the second largest risk out of the 3 approaches, and is not optimal.
If she starts small, eventually Alice runs out of small factors for what's left at the end of the spectrum. Using up all the numbers from 3 to 12 really quickly will destroy her chances at solving the problem if bob's number was relatively big.
eg: Starting at base case of 100, since Alice must allow for the chance of 100 to be chosen.
100 % 3 leaves 97; 97 is instantly a prime number, so this cannot be the optimal strategy.
100 % 4 works; 101 % 4 = 97; prime number, uh oh.
100 % 5 works; 101 % 5 = 96 -> 96 % (lowest of the following factors 1,2,3,4,6,8,12,16,24,32,48,96, which is 3) = works.
We can follow the pattern and every time, at some point you'll end up with a prime number. (If you could suddenly guess the prime number, it wouldn't be optimal - it would only work for specific case).
I personally spent hours coding different variants of patterns to try and devise an optimal strategy, all of which lead to one inevitable result - Eventually, you hit a number that is a prime number that is too high. The moment you use that prime number, at some point in the list of possible numbers (which again I must remind you is infinite), you will hit a new prime number for which you will need another specific use case.
(Note that I also considered and tried variants of subtracting numbers from the prime numbers and then continuing. Keep reading for those algorithms, but at the end of the day you can't achieve 100% winrate.)
Because there are an infinite number of primes (See Euclids proof: https://primes.utm.edu/notes/proofs/infinite/euclids.html), there is no optimal strategy that works for ALL numbers.
Basically, whether Alice wins or not comes down to blind luck.
Best algorithm with no repeats that I could find (to give highest winrate for Alice) (This algorithm could be extended but you will always have those numbers where you miss and go negative...) (13.6% failrate out of 9900 tests (numbers 100 to 10000)) Here, Alice wins 86.4% of the time (which is decent in a universe of infinite numbers). Even when increasing the number of test cases by a factor of 10, the failrate stays about the same.
old - fairly consistent 13.6% failrate:
(20);(3);(9);(10);(12);(6);(4);(5);(17);(8);(11);(12);(13);(15);(17);
edit: new Algorithm, 10% failrate ramping up towards 13.1% at 99900 tests!
(5);(6);(4);(3);(8);(9);(7);(12);(10);(15);(49);(14);
Now, if the conditions were changed to say.. allow a single repeat.... that would be a different story and a different set of calculations.
I did manage to find an algorithm (If we were allowed to repeat the number "3" just one time) that only fails 40 out of 400 tries. Upon further testing, it fails 90 out of 900 tries. 990 fails out of 9900 tries. (10% failrate overall, which IMO in a universe of infinite numbers is pretty good for Alice. She wins 90% of the time) Woah! 3 flipping percent with half the length of the other algorithms from the ability to repeat one number? I wonder what adding more repeats would allow... ;)
The said algorithm if you're inclined to try it:
20, 3, 9, 5, 12, 6, 4, 10, 7, 14, 3
edit: Crayziman's algorithm (95% winrate at 99900 test cases)
3,7,3,13,6,15,4,18,12
edit: Crayziman's 100% algorithm with one repeating number:
3,7,4,14,3,16,6,15,12
TLDR for original question: There will always be those numbers that fail. There is NO 100% winrate strategy for Alice. However, Alice can maximize her winrate by starting small and adding numbers to the end of the algorithm. It'll only increase her chances by a bit, but hey, a bit is better than nothing.
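For reference, a test harness of the kind described in this answer can be written in a few lines; the sequences below are copied from the answer's edits, and the tested range (100–9999) matches the "9900 tests" mentioned. The harness itself, including treating an exhausted sequence as a loss, is an assumption of this sketch.

```python
def alice_wins(n, picks):
    for x in picks:
        if n % x == 0:
            return True
        n -= x
        if n <= 0:
            return False
    return False

def fail_rate(picks, lo=100, hi=10000):
    losses = [n for n in range(lo, hi) if not alice_wins(n, picks)]
    return len(losses) / (hi - lo), losses[:10]

# the sequence with one repeated 3, and the "100%" sequence quoted above
print(fail_rate([20, 3, 9, 5, 12, 6, 4, 10, 7, 14, 3]))
print(fail_rate([3, 7, 4, 14, 3, 16, 6, 15, 12]))
```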
Aify
$\begingroup$ If the number N is gurantee to be non-prime number, could you figure out a 100% algorithm? $\endgroup$ – Alex Mar 10 '15 at 21:35
$\begingroup$ Uh.... I'd have to do a lot more calculations... I can't confirm without further testing. $\endgroup$ – Aify Mar 10 '15 at 21:37
$\begingroup$ If 3 can repeat, 3,7,3,13,6,15,4,18,12 is a 100% win strategy. Also, your explanation works even when x=2 is allowed - but there is a 100% win strategy without repetition if 2 is allowed. So your explanation does not work. $\endgroup$ – Raziman T V Mar 10 '15 at 21:37
$\begingroup$ @crazyiman My explanation doesn't take into account x = 2. The question explicitly says Alice can only guess 3 or higher. The question also says no repeats, the repeating part in my question is extra and irrelevant to my actual answer. If you can find a no repeating, 3+ strategy that is 100% come back and tell me. $\endgroup$ – Aify Mar 10 '15 at 21:39
$\begingroup$ I would not be surprised if there was a solution. Although you are pretty convinced there isn't. I would say that your arguments are not sufficient proof. And if you're right the next question would be: what strategy has the best win chance? $\endgroup$ – Ivo Beckers Mar 10 '15 at 23:17
Alice can enforce a win.
Bob picks a number, which is always of the form 100 + 12*m + n, with m >= 0 and n from 0 to 11. By dividing/subtracting alternately by 3 and 4, Alice can force a win by turn 5 at the latest. As we only divide by 3 and 4, the pattern repeats itself every 12 numbers.
Base Div 3 Div 4 Div 3 Div 4 Div 3
100 97 93 - - -
101 98 94 91 87 -
102 - - - - -
103 100 - - - -
104 101 97 94 90 -
106 103 99 - - -
109 106 102 - - -
110 107 103 100 - -
Nephtyz
$\begingroup$ Every number can be picked at most once. So as far as I understand the question, you can't repeat 3 or 4. $\endgroup$ – EagleV_Attnam Mar 10 '15 at 9:19
$\begingroup$ I did not read this. Did he ninja-edit his post? However, this nullifies my answer. $\endgroup$ – Nephtyz Mar 10 '15 at 9:22
$\begingroup$ Nope, that condition was there from the start. $\endgroup$ – Zikato Mar 10 '15 at 9:36
I might be wrong but:
If we assume that Bob uses an optimal strategy, then that will actually give us information on the number he selects. Below are a few basic optimal strategies and their implications that I can think of.
If I were Bob then the best selection would be a prime number, as it is the only 100% guarantee that it will not be guessed on the first go. From here, however, Alice can assume that the number is odd.
This allows Alice:
To keep the number even by first asking an odd number, which is arbitrary, but let's keep it low to maximise the number of guesses. So let's say 7 for instance (3 and 5 are useful). And then maintaining even guesses.
Alice then needs to:
Make the number divisible by 4. This can be done by using larger numbers to reduce the number. If we know that N is greater than 100, and only every other even number is divisible by 4, we can try 32 first and follow that up with 8. If it fails both times, the original number was not divisible by 4. (I think I'm not 100% on this logic, but I think it is correct; don't be too harsh.) This means that whatever N is now, it is not divisible by 4 but is even, so we try 6 and then 4, which should then work.
Attempted logical proof:
This took a lot of pages in my notebook, but:
Let's write N = 100 + y + z. (Where z is the arbitrary odd number we have removed from the equation in the first step.) As we have decided, one way to win is to force N to become divisible by 4. As 100 is divisible by 4, it makes the sum much easier. Trying to use 3 would fail because 100 is not divisible by 3. We are therefore trying to make y divisible by 4.
If new N is divisible by 32 then we have already won.
If not, then the new N=N-32 is also not div by 32. Otherwise N would have been originally. Therefore we know there exists an m between N-32 and N s.t. m is divisible by 32 although this is not that helpful. We then try 8 (being 32/4).
If N-32 is divisible by 8 then we have already won.
If not, N-40 is not divisible by 8. This can be written as N - (8*5), meaning N is not divisible by 8 (as we know for the same reason that N-32 is not divisible by 32 anyway). This can be expanded to say that N - (4*10) is also not divisible by 4, suggesting that N is not divisible by 4.
This tells us that, if we assume an odd starting number, N-40 is not now divisible by 4. We can therefore attempt 6 (2*3) to move across an even number and make sure that N is divisible by 4.
If N-40 is divisible by 6 then we win.
Otherwise, N-46 must be divisible by 4 as N-46 is even, every other even number is divisible by 4 and N-40 is not divisible by 4 (so N-42 is, N-44 is not and N-46 is). We have forced a victory and only subtracted 4 + 6 + 8 + 32 + 7 = 57 < 100 so we cannot go broke. It doesn't matter how large or small N is to start as even numbers repeat.
It is possible, assuming that Bob starts on a prime number, to force a victory.
However, this strategy relies on a prime (or just odd) number being chosen first, which is only optimal if Bob decides it is important to have a 100% success rate on the first question and goes with a prime number. Bob will therefore know not to start with a prime number every time. This could therefore end up very much like a game of chess.
If we take the optimum strategy as being to choose an even number every time (since this would confuse the opponent by not starting on a prime), then the same steps work without the initial odd number, although that relies on Alice calling Bob's bluff with regard to strategy.
I have yet to generalise this to work for any starting number. I need to discover whether it is odd or even somehow. I will think a bit more on it.
Ryan Durrant
$\begingroup$ Hate to disprove you, but your strategy doesn't work, for example with 139. 139 is prime (so not divisible by 7). 132 is not divisible by 32. 100 is not divisible by 8. 92 is not divisible by 6. 86 is not divisible by 4 $\endgroup$ – Ivo Beckers Mar 10 '15 at 18:34
$\begingroup$ BTW, 139 is the lowest prime it doesn't work for. 115 is the lowest odd number it doesn't work for. $\endgroup$ – Ivo Beckers Mar 10 '15 at 18:38
$\begingroup$ Can you really depend on Bob's behavior like this? For all you know, he has done the exact same analysis you have, and so he will pick a number that works best against Alice's strategy. A working strategy for Alice will have to win against every integer $\geq$ 100. $\endgroup$ – Lopsy Mar 10 '15 at 18:46
$\begingroup$ I already said that Lopsy. It was one scenario. I was trying to find any possible scenario because I initially thought it was impossible. Is worth trying, might have had someone come in with some advice/alterations $\endgroup$ – Ryan Durrant Mar 10 '15 at 19:47
$\begingroup$ If you assume N is odd, then N=2M+1 for integer M. If you can create a situation where either you win, or M is not an integer, you can continue under the assumption that N is even. Then you repeat the process. $\endgroup$ – freekvd Mar 10 '15 at 19:48
Publications Software Presentations
My research interests are in algebraic combinatorics, asymptotic group theory, and computational algebra. I develop efficient algorithms to aid in various isomorphism problems—in particular for finite nilpotent groups, which is a known bottleneck in the Group Isomorphism Problem. I also apply combinatorial tools to understand and compute certain $p$-adic integrals coming from zeta functions of groups and rings and Igusa's zeta function.
The Poincaré-extended ab-index, with Galen Dorpalen-Barry, Christian Stump, submitted.
Motivated by a conjecture concerning Igusa local zeta functions for intersection posets of hyperplane arrangements, we introduce and study the Poincaré-extended ab-index, which generalizes both the ab-index and the Poincaré polynomial. For posets admitting R-labelings, we give a combinatorial description of the coefficients of the extended ab-index, proving their nonnegativity. In the case of intersection posets of hyperplane arrangements, we prove the above conjecture of the second author and Voll as well as another conjecture of the second author and Kühne. In the setting of oriented matroids, we also refine results of Billera--Ehrenborg--Readdy concerning the cd-index of face posets.
Smooth cuboids in group theory, with Mima Stanojkovski, submitted.
A smooth cuboid can be identified with a $3 \times 3$ matrix of linear forms, with coefficients in a field $K$, whose determinant describes a smooth cubic in the projective plane. To each such matrix one can associate a group scheme over $K$. We produce isomorphism invariants of these groups in terms of their adjoint algebras, which also give information on the number of their maximal abelian subgroups. Moreover, we give a characterization of the isomorphism types of the groups in terms of isomorphisms of elliptic curves and also give a description of the automorphism group. We conclude by applying our results to the determination of the automorphism groups and isomorphism testing of finite $p$-groups of class $2$ and exponent $p$ arising in this way.
A spectral theory for transverse tensor operators, with Uriya A. First, James B. Wilson, submitted.
Tensors are multiway arrays of data, and transverse operators are the operators that change the frame of reference. We develop the spectral theory of transverse tensor operators and apply it to problems closely related to classifying quantum states of matter, isomorphism in algebra, clustering in data, and the design of high performance tensor type-systems. We prove the existence and uniqueness of the optimally-compressed tensor product spaces over algebras, called *densors*. This gives structural insights for tensors and improves how we recognize tensors in arbitrary reference frames. Using work of Eisenbud--Sturmfels on binomial ideals, we classify the maximal groups and categories of transverse operators, leading us to general tensor data types and categorical tensor decompositions, amenable to theorems like Jordan--Hölder and Krull--Schmidt. All categorical tensor substructure is detected by transverse operators whose spectra contain a Stanley--Reisner ideal, which can be analyzed with combinatorial and geometrical tools via their simplicial complexes. Underpinning this is a ternary Galois correspondence between tensor spaces, multivariable polynomial ideals, and transverse operators. This correspondence can be computed in polynomial time. We give an implementation in the computer algebra system $\textsf{Magma}$.
On the geometry of flag Hilbert–Poincaré series for matroids, with Lukas Kühne, to appear in Algebr. Comb.
We extend the definition of coarse flag Hilbert–Poincaré series to matroids; these series arise in the context of local Igusa zeta functions associated to hyperplane arrangements. We study these series in the case of oriented matroids by applying geometric and combinatorial tools related to their topes. In this case, we prove that the numerators of these series are coefficient-wise bounded below by the Eulerian polynomial and equality holds if and only if all topes are simplicial. Moreover this yields a sufficient criterion for non-orientability of matroids of arbitrary rank.
Flag Hilbert–Poincaré series of hyperplane arrangements and Igusa zeta functions, with Christopher Voll, to appear in Israel J. Math.
We introduce and study a class of multivariate rational functions associated with hyperplane arrangements, called flag Hilbert–Poincaré series. These series are intimately connected with Igusa local zeta functions of products of linear polynomials, and their motivic and topological relatives. Our main results include a self-reciprocity result for central arrangements defined over fields of characteristic zero. We also prove combinatorial formulae for a specialization of the flag Hilbert–Poincaré series for irreducible Coxeter arrangements of types $\mathsf{A}$, $\mathsf{B}$, and $\mathsf{D}$ in terms of total partitions of the respective types. We show that a different specialization of the flag Hilbert–Poincaré series, which we call the coarse flag Hilbert–Poincaré series, exhibits intriguing nonnegativity features and—in the case of Coxeter arrangements—connections with Eulerian polynomials. For numerous classes and examples of hyperplane arrangements, we determine their (coarse) flag Hilbert–Poincaré series. Some computations were aided by a SageMath package we developed.
Tensor Isomorphism by conjugacy of Lie algebras, with Peter A. Brooksbank, James B. Wilson, J. Algebra, 604 (2022), 790–807.
We introduce an algorithm to decide isomorphism between tensors. The algorithm uses the Lie algebra of derivations of a tensor to compress the space in which the search takes place to a so-called densor space. To make the method practicable we give a polynomial-time algorithm to solve a generalization of module isomorphism for a common class of Lie modules. As a consequence, we show that isomorphism testing is in polynomial time for tensors whose derivation algebras are classical Lie algebras and whose densor spaces are 1-dimensional. The method has been implemented in the Magma computer algebra system.
Compatible filters with isomorphism testing, J. Pure Appl. Algebra, 225 (2021), no. 3, 106528–106555.
Like the lower central series of a nilpotent group, filters generalize the connection between nilpotent groups and graded Lie rings. However, unlike the case with the lower central series, the associated graded Lie ring may share few features with the original group: e.g. the associated Lie ring may be trivial or arbitrarily large. We determine properties of filters such that the Lie ring and group are in bijection. We prove that, under such conditions, every isomorphism between groups is induced by an isomorphism between graded Lie rings.
Exact sequences of inner automorphisms of tensors, with Peter A. Brooksbank, James B. Wilson, J. Algebra, 545 (2020), 43–63.
We produce a long exact sequence of unit groups of associative algebras that behave as automorphisms of tensors in a manner similar to inner automorphisms for associative algebras. Analogues for Lie algebras of derivations of a tensor are also derived. These sequences, which are basis invariants of the tensor, generalize similar ones used for associative and non-associative algebras; they similarly facilitate inductive reasoning about, and calculation of the groups of symmetries of a tensor. The sequences can be used for problems as diverse as understanding algebraic structures to distinguishing entangled states in particle physics.
Enumerating isoclinism classes of semi-extraspecial groups, with Mark L. Lewis, Proc. Edinb. Math. Soc. (2), 63 (2020), no. 2, 426–442.
We enumerate the number of isoclinism classes of semi-extraspecial $p$-groups with derived subgroup of order $p^2$. To do this, we enumerate $\text{GL}(2, p)$-orbits of sets of irreducible, monic polynomials in $\mathbb{F}_p[x]$. Along the way, we also provide a new construction of an infinite family of semi-extraspecial groups as central quotients of Heisenberg groups over local algebras.
Most small $p$-groups have an automorphism of order 2, Arch. Math. (Basel), 108 (2017), no. 3, 225–232.
Let $f(p, n)$ be the number of pairwise nonisomorphic $p$-groups of order $p^n$, and let $g(p, n)$ be the number of groups of order $p^n$ whose automorphism group is a $p$-group. We prove that the limit, as $p$ grows to infinity, of the ratio $g(p, n) / f(p, n)$ equals $1/3$ for $n = 6,7$.
Efficient characteristic refinements for finite groups, J. Symbolic Comput., 80 (2017), part 2, 511–520.
Filters were introduced by J.B. Wilson in 2013 to generalize work of Lazard with associated graded Lie rings. It holds promise in improving isomorphism tests, but the formulas introduced then were impractical for computation. Here, we provide an efficient algorithm for these formulas, and we demonstrate their usefulness on several examples of $p$-groups.
A fast isomorphism test for groups whose Lie algebra has genus 2, with Peter A. Brooksbank, James B. Wilson, J. Algebra, 473 (2017), 545–590.
Motivated by the desire for better isomorphism tests for finite groups, we present a polynomial-time algorithm for deciding isomorphism within a class of $p$-groups that is well-suited to studying local properties of general groups. We also report on the performance of an implementation of the algorithm in the computer algebra system Magma.
Longer nilpotent series for classical unipotent groups, J. Grp. Theory, 18 (2015), no. 4, 569–585.
In studying nilpotent groups, the lower central series and other variations can be used to construct an associated $\mathbb{Z}^+$-graded Lie ring, which is a powerful method to inspect a group. Indeed, the process can be generalized substantially by introducing $\mathbb{N}^d$-graded Lie rings. We compute the adjoint refinements of the lower central series of the unipotent subgroups of the classical Chevalley groups over the field $\mathbb{Z}/p\mathbb{Z}$ of rank $d$. We prove that, for all the classical types, this characteristic filter is a series of length $\Theta(d^2)$ with nearly all factors having $p$-bounded order.
Economical generating sets for the symmetric and alternating groups consisting of cycles of a fixed length, with Scott Annin, J. Algebra Appl., 11 (2012), no. 6, 1250110–1250118.
The symmetric group $S_n$ and the alternating group $A_n$ are groups of permutations on the set $\{0, 1, 2, \ldots , n - 1\}$ whose elements can be represented as products of disjoint cycles (the representation is unique up to the order of the cycles). In this paper, we show that whenever $n \geq k \geq 2$, the collection of all $k$-cycles generates $S_n$ if $k$ is even, and generates $A_n$ if $k$ is odd. Furthermore, we algorithmically construct generating sets for these groups of smallest possible size consisting exclusively of $k$-cycles, thereby strengthening results in [O. Ben-Shimol, The minimal number of cyclic generators of the symmetric and alternating groups, Commun. Algebra 35 (10) (2007) 3034–3037]. In so doing, our results find importance in the context of theoretical computer science, where efficient generating sets play an important role.
${\sf Densor}$, with James B. Wilson, for Magma, version 1.0 (2019).
A Magma package, built on top of TensorSpace, to compute densor subspaces of bilinear maps.
${\sf EGroups}$, for Magma, version 1.0 (2022).
A Magma implementation for algorithms concerning isomorphisms of elliptic $p$-groups.
${\sf ExceptionAlge}$, with James B. Wilson, for Magma, version 1.0 (2019).
A Magma package for exceptional nonassociative algebras. Constructors for composition algebras and Jordan algebras are provided along with standard tools to analyze them.
${\sf Filters}$, for Magma, version 1.0 (2017).
A Magma package for data structures and algorithms for filters for groups.
${\sf HypIgu}$, for SageMath, version 1.1 (2021).
A SageMath package that provides functions to compute the Igusa local zeta function associated with hyperplane arrangements and matroids.
${\sf Sylver}$, with Peter A. Brooksbank, James B. Wilson, for Magma, version 1.0 (2019).
A Magma package to compute various algebras associated to tensors. At the core of the algorithms is a solver for a system of Sylvester-like equations.
${\sf TameGenus}$, with Peter A. Brooksbank, James B. Wilson, for Magma, version 2.0 (2020).
A Magma package to decide isomorphism, construct automorphisms, and assign canonical labels to groups whose Lie algebra has genus 2.
${\sf TensorSpace}$, with Peter A. Brooksbank, James B. Wilson, for Magma, version 2.3 (2020).
A Magma package for data structures, constructors, and low-level algorithms concerning tensors. Functions include constructing tensors from algebraic objects, slicing tensors to create new ones, and applying morphisms from various categories to tensors.
Flag Hilbert--Poincaré series and Igusa zeta functions of hyperplane arrangements, New trends around profinite groups, Bella Vista Relax Hotel, Levico Terme, Italy, September 2021.
Isomorphism via derivations, Groups in Galway 2020, National University of Ireland Galway, Galway Ireland, September 2020.
Isomorphism, derivations, and Lie representations, Groups, representations and applications: new perspectives, Isaac Newton Institute, Cambridge, United Kingdom, February 2020.
Multilinear tools for groups, Groups, representations and applications: computational and algorithmic methods, Isaac Newton Institute, Cambridge, United Kingdom, January 2020.
Computing order zeta functions via resolution of singularities, Buildings, Varieties, and Applications, MPI für Mathematik in den Naturwissenschaften Leipzig, Leipzig, Germany, November 2019.
A Tensor Playground: a demonstration of TensorSpace, Tensors: Algebra-Computation-Applications, University of Colorado Boulder, Boulder, Colorado, June 2019.
Multilinear tools through filters on groups, Groups and Geometries, University of Auckland, Auckland, New Zealand, January 2019.
Refinements and general filters for groups, Logic and Algorithms in Group Theory, Hausdorff Institute for Mathematics, Bonn, Germany, November 2018.
© J. Maglione
What is 1/10 of 8?
What is 1 / 10 of 8 and how to calculate it yourself
1 / 10 of 8 = 0.8
1 / 10 of 8 is 0.8. In this article, we will go through how to calculate 1 / 10 of 8 and how to calculate any fraction of any whole number (integer). This article will show a general formula for solving this equation for positive numbers, but the same rules can be applied for numbers less than zero too!
Here's how we will calculate 1 / 10 of 8:
1: First step in solving 1 / 10 of 8 is understanding your fraction
1 / 10 has two important parts: the numerator (1) and the denominator (10). The numerator is the number above the division line (called the vinculum) and represents the number of parts being taken from the whole. For example, if there were 14 cars in total and 1 was painted red, 1 would be the numerator, the number of parts taken from the total. In our case of 1 / 10, 1 is the numerator. The denominator (10) is located below the vinculum and represents the total number. In the example above, 14 would be the denominator, the total number of cars. For our fraction, 1 is the numerator and 10 is the denominator.
2: Write out your equation of 1 / 10 times 8
When solving for 1 / 10 of a number, students should write the equation as the whole number (8) times 1 / 10. The solution to our problem will always be smaller than 8 because we are going to end up with a fraction of 8.
$$ \frac{ 1 }{ 10 } \times 8 $$
3. Convert your whole number (8) into a fraction (8/1)
To convert any whole number into a fraction, add a 1 into the denominator. Now place 1 / 10 next to the new fraction. This gives us the equation below.
Tip: Always write out your fractions 8 / 1 and 1 / 10. It might seem boring or taxing, but dividing fractions can be confusing. Writing out the conversion simplifies our work.
$$ \frac{ 1 }{ 10 } \times \frac{ 8 }{1} $$
4. Multiply your fractions, starting with the numerators
Once we have set up our fractions 1 / 10 and 8 / 1, we need to multiply the values, starting with the numerators. In this case, we will be multiplying 1 (the numerator of 1 / 10) and 8 (the numerator of our new fraction 8/1). If you need a refresher on multiplying fractions, please see our guide here!
$$ \frac{ 1 }{ 10 } \times \frac{ 8 }{1} = \frac{ 8 }{ 10 } $$
Our new numerator is 8.
Then we need to do the same for our denominators. In this equation, we multiply 10 (denominator of 1 / 10) and 1 (the denominator of our new fraction 8 / 1).
Our new denominator is 10.
5. Divide our new fraction (8 / 10)
After arriving at our new fraction of 8 / 10, our last job is to simplify it using long division. For longer fractions, we recommend that students write this last part down and use left-to-right long division.
$$ \frac{ 8 }{ 10 } = 0.8 $$
Turn 8 into a fraction: 8 / 1
Multiply 8 / 1 by our fraction, 1 / 10
We get 8 / 10 from that
Perform a standard division: 8 divided by 10 = 0.8
Additional way of calculating 1 / 10 of 8
You can also write our fraction, 1 / 10, as a decimal by simply dividing 1 by 10, which is 0.1. If you multiply 0.1 by 8 you will see that you end up with the same answer as above. You may also find it useful to know that if you multiply 0.1 by 100 you get 10.0, which means that our answer of 0.8 is 10.0 percent of 8.
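If you would like a computer to check this kind of calculation, here is a minimal Python sketch (an illustration added here, not part of the method above) that uses the standard library's fractions module:

from fractions import Fraction

fraction = Fraction(1, 10)    # our fraction, 1 / 10
whole = 8                     # our whole number

result = fraction * whole     # same as (1 * 8) / (10 * 1) = 8 / 10
print(result)                 # 4/5, the reduced form of 8 / 10
print(float(result))          # 0.8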
Section 3.5 Random variables
The chance of landing on a single number in the game of roulette is 1/38 and the pay is 35:1. The chance of landing on Red is 18/38 and the pay is 1:1. Which game has the higher expected value? The higher standard deviation of expected winnings? How do we interpret these quantities in this context? If you were to play each game 20 times, what would the distribution of possible outcomes look like? In this section, we define and summarize random variables such as these, and we look at some of their properties.
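Before developing the formal machinery, it may help to preview the answer numerically. The short Python sketch below (an illustration added here, not part of the text) computes the expected value and standard deviation of a $1 bet on a single number and on Red, and simulates 20 plays of each game:

import math, random

def mean_sd(outcomes, probs):
    mu = sum(x * p for x, p in zip(outcomes, probs))
    var = sum((x - mu) ** 2 * p for x, p in zip(outcomes, probs))
    return mu, math.sqrt(var)

# $1 on a single number: win $35 with probability 1/38, otherwise lose the $1
print(mean_sd([35, -1], [1/38, 37/38]))    # about (-0.053, 5.76)

# $1 on Red: win $1 with probability 18/38, otherwise lose the $1
print(mean_sd([1, -1], [18/38, 20/38]))    # about (-0.053, 1.00)

# 20 plays of each game: same average loss, very different spread
single = sum(35 if random.random() < 1/38 else -1 for _ in range(20))
red    = sum(1 if random.random() < 18/38 else -1 for _ in range(20))
print(single, red)

Both bets lose about 5.3 cents per dollar on average; the single-number bet is simply far more volatile, which is what the standard deviation captures.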
Subsection 3.5.1 Learning objectives
Define a probability distribution and what makes a distribution a valid probability distribution.
Summarize a discrete probability distribution graphically using a histogram and verbally with respect to center, spread, and shape.
Calculate and interpret the mean (expected value) and standard deviation of a random variable.
Calculate the mean and standard deviation of a transformed random variable.
Calculate the mean of the sum or difference of random variables.
Calculate the standard deviation of the sum or difference of random variables when those variables are independent.
Subsection 3.5.2 Introduction to expected value
Example 3.5.1.
Two books are assigned for a statistics class: a textbook and its corresponding study guide. The university bookstore determined 20% of enrolled students do not buy either book, 55% buy the textbook only, and 25% buy both books, and these percentages are relatively constant from one term to another. If there are 100 students enrolled, how many books should the bookstore expect to sell to this class?
Around 20 students will not buy either book (0 books total), about 55 will buy one book (55 books total), and approximately 25 will buy two books (totaling 50 books for these 25 students). The bookstore should expect to sell about 105 books for this class.
Checkpoint 3.5.2.
Would you be surprised if the bookstore sold slightly more or less than 105 books? 1
If they sell a little more or a little less, this should not be a surprise. Hopefully Chapter 2 helped make clear that there is natural variability in observed data. For example, if we were to flip a coin 100 times, it would not usually come up heads exactly half the time, but it would probably be close.
The textbook costs $137 and the study guide $33. How much revenue should the bookstore expect from this class of 100 students?
About 55 students will just buy a textbook, providing revenue of
\begin{gather*} $137 \times 55 = $7,535 \end{gather*}
The roughly 25 students who buy both the textbook and the study guide would pay a total of
\begin{gather*} ($137 + $33) \times 25 = $170 \times 25 = $4,250 \end{gather*}
Thus, the bookstore should expect to generate about \($7,535 + $4,250 = $11,785\) from these 100 students for this one class. However, there might be some sampling variability so the actual amount may differ by a little bit.
Figure 3.5.4. Probability distribution for the bookstore's revenue from one student. The triangle represents the average revenue per student.
What is the average revenue per student for this course?
The expected total revenue is $11,785, and there are 100 students. Therefore the expected revenue per student is \($11,785/100 = $117.85\text{.}\)
Subsection 3.5.3 Probability distributions
A probability distribution is a table of all disjoint outcomes and their associated probabilities. Table 3.5.7 shows the probability distribution for the sum of two dice.
Rules for probability distributions.
A probability distribution is a list of the possible outcomes with corresponding probabilities that satisfies three rules:
The outcomes listed must be disjoint.
Each probability must be between 0 and 1.
The probabilities must total 1.
Table 3.5.8 suggests three distributions for household income in the United States. Only one is correct. Which one must it be? What is wrong with the other two? 2
The probabilities of (a) do not sum to 1. The second probability in (b) is negative. This leaves (c), which sure enough satisfies the requirements of a distribution. One of the three was said to be the actual distribution of US household incomes, so it must be (c).
Dice 2 3 4 5 6 7 8 9 10 11 12
Probability \(\frac{1}{36}\) \(\frac{2}{36}\) \(\frac{3}{36}\) \(\frac{4}{36}\) \(\frac{5}{36}\) \(\frac{6}{36}\) \(\frac{5}{36}\) \(\frac{4}{36}\) \(\frac{3}{36}\) \(\frac{2}{36}\) \(\frac{1}{36}\)
Table 3.5.7. Probability distribution for the sum of two dice.
Income range ($1000s) 0-25 25-50 50-100 100+
(a) 0.18 0.39 0.33 0.16
(b) 0.38 -0.27 0.52 0.37
(c) 0.28 0.27 0.29 0.16
Table 3.5.8. Proposed distributions of US household incomes (Checkpoint 3.5.6).
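As an aside, the numerical rules above are easy to check mechanically. The following Python sketch (an illustration, not part of the text) tests each proposed distribution in Table 3.5.8 against the second and third rules; disjointness of the income ranges still has to be checked by reading the table itself:

def looks_like_distribution(probs, tol=1e-9):
    # each probability must be between 0 and 1, and the probabilities must total 1
    return all(0 <= p <= 1 for p in probs) and abs(sum(probs) - 1) < tol

proposals = {
    "(a)": [0.18, 0.39, 0.33, 0.16],
    "(b)": [0.38, -0.27, 0.52, 0.37],
    "(c)": [0.28, 0.27, 0.29, 0.16],
}
for label, probs in proposals.items():
    print(label, looks_like_distribution(probs))
# (a) False -- the probabilities total 1.06
# (b) False -- one probability is negative
# (c) True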
Chapter 2 emphasized the importance of plotting data to provide quick summaries. Probability distributions can also be summarized in a histogram or bar plot. The probability distribution for the sum of two dice is shown in Table 3.5.7 and its histogram is plotted in Figure 3.5.9. The distribution of US household incomes is shown in Figure 3.5.10 as a bar plot. The presence of the 100+ category makes it difficult to represent it with a regular histogram. 3
It is also possible to construct a distribution plot when income is not artificially binned into four groups. Density histograms for continuous distributions are considered in Section 3.6.
Figure 3.5.9. A histogram for the probability distribution of the sum of two dice.
Figure 3.5.10. A bar graph for the probability distribution of US household income. Because it is artificially separated into four unequal bins, this graph fails to show the shape or skew of the distribution.
In these bar plots, the bar heights represent the probabilities of outcomes. If the outcomes are numerical and discrete, it is usually (visually) convenient to make a histogram, as in the case of the sum of two dice. Another example of plotting the bars at their respective locations is shown in Figure 3.5.4.
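The dice distribution itself can also be reconstructed by brute force. Here is a small Python sketch (an illustration, not part of the text) that enumerates all 36 equally likely rolls and recovers the probabilities in Table 3.5.7:

from collections import Counter
from fractions import Fraction

counts = Counter(d1 + d2 for d1 in range(1, 7) for d2 in range(1, 7))
dist = {total: Fraction(n, 36) for total, n in sorted(counts.items())}

for total, prob in dist.items():
    print(total, prob)      # 2 -> 1/36, 3 -> 1/18 (i.e. 2/36), ..., 7 -> 1/6, ..., 12 -> 1/36
print(sum(dist.values()))   # 1, confirming the probabilities total 1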
Subsection 3.5.4 Expectation
We call a variable or process with a numerical outcome a random variable, and we usually represent this random variable with a capital letter such as \(X\text{,}\) \(Y\text{,}\) or \(Z\text{.}\) The amount of money a single student will spend on her statistics books is a random variable, and we represent it by \(X\text{.}\)
Random variable.
A random process or variable with a numerical outcome.
The possible outcomes of \(X\) are labeled with a corresponding lower case letter \(x\) and subscripts. For example, we write \(x_1=$0\text{,}\) \(x_2=$137\text{,}\) and \(x_3=$170\text{,}\) which occur with probabilities \(0.20\text{,}\) \(0.55\text{,}\) and \(0.25\text{.}\) The distribution of \(X\) is summarized in Figure 3.5.4 and Table 3.5.11.
\(i\) 1 2 3 Total
\(x_i\) $0 $137 $170 –
\(P(x_i)\) 0.20 0.55 0.25 1.00
Table 3.5.11. The probability distribution for the random variable \(X\text{,}\) representing the bookstore's revenue from a single student. We use \(P(x_i)\) to represent the probability of \(x_i\text{.}\)
We computed the average outcome of \(X\) as $117.85 in Solution 3.5.5.1. We call this average the expected value of \(X\text{,}\) denoted by \(E(X)\text{.}\) The expected value of a random variable is computed by adding each outcome weighted by its probability:
\begin{align*} E(X) \amp = 0 \cdot P(0) + 137 \cdot P(137) + 170 \cdot P(170)\\ \amp = 0 \cdot 0.20 + 137 \cdot 0.55 + 170 \cdot 0.25 = 117.85 \end{align*}
Expected value of a discrete random variable.
If \(X\) takes outcomes \(x_1\text{,}\) \(x_2\text{,}\) ..., \(x_n\) with probabilities \(P(x_1)\text{,}\) \(P(x_2)\text{,}\) ..., \(P(x_n)\text{,}\) the mean, or expected value, of \(X\) is the sum of each outcome multiplied by its corresponding probability:
\begin{align*} \mu_{\scriptscriptstyle{X}} = E(X) \amp = x_1\cdot P(x_1) + x_2\cdot P(x_2) + \cdots + x_n\cdot P(x_n)\\ \amp = \sum_{i=1}^{n}x_i\cdot P(x_i) \end{align*}
The expected value for a random variable represents the average outcome. For example, \(E(X)=117.85\) represents the average amount the bookstore expects to make from a single student, which we could also write as \(\mu=117.85\text{.}\) While the bookstore will make more than this on some students and less than this on other students, the average of many randomly selected students will be near $117.85.
It is also possible to compute the expected value of a continuous random variable (see Section 3.6). However, it requires a little calculus and we save it for a later class. 4
\(\mu_{\scriptscriptstyle{X}} = \int xf(x)dx\) where \(f(x)\) represents a function for the density curve.
In physics, the expectation holds the same meaning as the center of gravity. The distribution can be represented by a series of weights at each outcome, and the mean represents the balancing point. This is represented in Figure 3.5.4 and Figure 3.5.12. The idea of a center of gravity also expands to continuous probability distributions. Figure 3.5.13 shows a continuous probability distribution balanced atop a wedge placed at the mean.
Figure 3.5.12. A weight system representing the probability distribution for \(X\text{.}\) The string holds the distribution at the mean to keep the system balanced.
Figure 3.5.13. A continuous distribution can also be balanced at its mean.
Subsection 3.5.5 Variability in random variables
Suppose you ran the university bookstore. Besides how much revenue you expect to generate, you might also want to know the volatility (variability) in your revenue.
The variance and standard deviation can be used to describe the variability of a random variable. Subsection 2.2.3 introduced a method for finding the variance and standard deviation for a data set. We first computed deviations from the mean (\(x_i - \mu\)), squared those deviations, and took an average to get the variance. In the case of a random variable, we again compute squared deviations. However, we take their sum weighted by their corresponding probabilities, just like we did for the expectation. This weighted sum of squared deviations equals the variance, and we calculate the standard deviation by taking the square root of the variance, just as we did in Subsection 2.2.3.
Variance and standard deviation of a discrete random variable.
If \(X\) takes outcomes \(x_1\text{,}\) \(x_2\text{,}\) ..., \(x_n\) with probabilities \(P(x_1)\text{,}\) \(P(x_2)\text{,}\) ..., \(P(x_n)\) and expected value \(\mu_{\scriptscriptstyle{X}}=E(X)\text{,}\) then to find the standard deviation of \(X\text{,}\) we first find the variance and then take its square root.
\begin{align*} Var(X) = \sigma^2_x \amp = (x_1-\mu_{\scriptscriptstyle{X}})^2\cdot P(x_1) + (x_2-\mu_{\scriptscriptstyle{X}})^2\cdot P(x_2) + \cdots + (x_n-\mu_{\scriptscriptstyle{X}})^2\cdot P(x_n)\\ \amp = \sum_{i=1}^{n} (x_i - \mu_{\scriptscriptstyle{X}})^2 \cdot P(x_i)\\ SD(X) = \sigma_{\scriptscriptstyle{X}} \amp = \sqrt{ \sum_{i=1}^{n} (x_i - \mu_{\scriptscriptstyle{X}})^2 \cdot P(x_i)} \end{align*}
Just as it is possible to compute the mean of a continuous random variable using calculus, we can also use calculus to compute the variance. 5 However, this topic is beyond the scope of the AP exam.
\(\sigma^2_x = \int (x - \mu_{\scriptscriptstyle{X}})^2f(x)dx\) where \(f(x)\) represents a function for the density curve.
Example 3.5.14.
Compute the expected value, variance, and standard deviation of \(X\text{,}\) the revenue of a single statistics student for the bookstore.
It is useful to construct a table that holds computations for each outcome separately, then add up the results.
\(x_i\) $0 $137 $170
\(P(x_i)\) 0.20 0.55 0.25
\(x_i \cdot P(x_i)\) 0 75.35 42.50 117.85
Thus, the expected value is \(\mu_{X}=117.85\text{,}\) which we computed earlier. The variance can be constructed using a similar table:
\(x_i - \mu_{X}\) -117.85 19.15 52.15
\((x_i-\mu_{X})^2\) 13888.62 366.72 2719.62
\((x_i-\mu_{X})^2\cdot P(x_i)\) 2777.7 201.7 679.9 3659.3
The variance of \(X\) is \(\sigma_{X}^2 = 3659.3\text{,}\) which means the standard deviation is \(\sigma_{X} = \sqrt{3659.3} = $60.49\text{.}\)
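These table computations translate directly into a few lines of code. Here is a minimal Python sketch (an illustration, not part of the text) that reproduces the numbers above:

import math

x = [0, 137, 170]         # bookstore revenue outcomes for one student
p = [0.20, 0.55, 0.25]    # their probabilities

mu  = sum(xi * pi for xi, pi in zip(x, p))                # expected value
var = sum((xi - mu) ** 2 * pi for xi, pi in zip(x, p))    # variance
sd  = math.sqrt(var)                                      # standard deviation

print(mu, var, sd)        # 117.85, about 3659.3, about 60.49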
Checkpoint 3.5.15.
The bookstore also offers a chemistry textbook for $159 and a book supplement for $41. From past experience, they know about 25% of chemistry students just buy the textbook while 60% buy both the textbook and supplement.
What proportion of students don't buy either book? Assume no students buy the supplement without the textbook.
Let \(Y\) represent the revenue from a single student. Write out the probability distribution of \(Y\text{,}\) i.e. a table for each outcome and its associated probability.
Compute the expected revenue from a single chemistry student.
Find the standard deviation to describe the variability associated with the revenue from a single student. 6
(a) \(100% - 25% - 60% = 15%\) of students do not buy any books for the class. Part (b) is represented by the first two lines in the table below. The expectation for part (c) is given as the total on the line \(y_{i}\cdot P(y_{i})\text{.}\) The result of part (d) is the square root of the variance listed as the total on the last line: \(\sigma_{Y}= \sqrt{Var(Y)}=\sqrt{4800}=69.28\)
\(i\) (scenario) 1 (no books) 2 (textbook only) 3 (both) Total
\(y_{i}\) 0.00 159.00 200.00
\(P(y_{i})\) 0.15 0.25 0.60
\(y_{i} \cdot P(y_{i})\) 0.00 39.75 120.00 \(E(Y)=159.75\)
\(y_{i}- \mu_{Y}\) -159.75 -0.75 40.25
\((y_{i}-\mu_{Y})^2\) 25520.06 0.56 1620.06
\((y_{i}-\mu_{Y})^2 \cdot P(y_{i})\) 3828.0 0.1 972.0 \(Var(Y) \approx 4800\)
Subsection 3.5.6 Linear transformations of a random variable
An online store is selling a limited edition t-shirt. The maximum a person is allowed to buy is 3. Let X be a random variable that represents how many of the t-shirts a t-shirt buyer orders. The probability distribution of X is given in the following table.
\(x_i\) 1 2 3
\(P(x_i)\) 0.6 0.3 0.1
Using the methods of the previous section we can find that the mean \(\mu_{\scriptscriptstyle{X}} = 1.5\) and the standard deviation \(\sigma_{\scriptscriptstyle{X}} = 0.67\text{.}\) Suppose that the cost of each t-shirt is $30 and that there is a flat rate $5 shipping fee. The amount of money a t-shirt buyer pays, then, is \(30X + 5\text{,}\) where X is the number of t-shirts ordered. To calculate the mean and standard deviation for the amount of money a t-shirt buyer pays, we could define a new variable \(Y\) as follows:
\begin{gather*} Y = 30X + 5 \end{gather*}
Verify that the distribution of \(Y\) is given by the table below. 7
\(30 \times 1 + 5 = 35\text{;}\) \(30 \times 2 + 5 = 65\text{;}\) \(30 \times 3 + 5 = 95\)
\(y_i\) $35 $65 $95
\(P(y_i)\) 0.6 0.3 0.1
Using this new table, we can compute the mean and standard deviation of the cost for t-shirt orders. However, because Y is a linear transformation of X, we can use the properties from Subsection 2.2.8. Recall that multiplying every X by 30 multiplies both the mean and standard deviation by 30. Adding 5 only adds 5 to the mean, not the standard deviation. Therefore,
\begin{align*} \mu_{30X+5}\amp =E(30X+5) \amp \sigma_{30X+5}\amp =SD(30X+5)\\ \amp = 30\times E(X) + 5 \amp \amp = 30\times SD(X)\\ \amp = 30\times 1.5+5 \amp \amp = 30 \times 0.67\\ \amp = 50.00 \amp \amp = 20.10 \end{align*}
Among t-shirt buyers, the average amount spent is $50.00, with a standard deviation of $20.10.
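These numbers can be checked two ways, either by summarizing the transformed distribution directly or by applying the rules stated below. A minimal Python sketch (an illustration, not part of the text):

import math

p = [0.6, 0.3, 0.1]
x = [1, 2, 3]                        # shirts ordered
y = [30 * xi + 5 for xi in x]        # amount paid: Y = 30X + 5, i.e. [35, 65, 95]

def mean_sd(vals, probs):
    mu = sum(v * q for v, q in zip(vals, probs))
    sd = math.sqrt(sum((v - mu) ** 2 * q for v, q in zip(vals, probs)))
    return mu, sd

mu_x, sd_x = mean_sd(x, p)           # 1.5 and about 0.67
mu_y, sd_y = mean_sd(y, p)           # 50.0 and about 20.1, from the transformed table

print(mu_y, sd_y)
print(30 * mu_x + 5, 30 * sd_x)      # the same answers via the transformation rules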
Linear transformations of a random variable.
If \(X\) is a random variable, then a linear transformation is given by \(aX + b\text{,}\) where \(a\) and \(b\) are some fixed numbers.
\begin{align*} E(aX+b) \amp = a\times E(X) + b \amp SD(aX+b) \amp = \lvert a\rvert \times SD(X) \end{align*}
Subsection 3.5.7 Linear combinations of random variables
So far, we have thought of each variable as being a complete story in and of itself. Sometimes it is more appropriate to use a combination of variables. For instance, the amount of time a person spends commuting to work each week can be broken down into several daily commutes. Similarly, the total gain or loss in a stock portfolio is the sum of the gains and losses in its components.
John travels to work five days a week. We will use \(X_1\) to represent his travel time on Monday, \(X_2\) to represent his travel time on Tuesday, and so on. Write an equation using \(X_1\text{,}\) ..., \(X_5\) that represents his travel time for the week, denoted by \(W\text{.}\)
His total weekly travel time is the sum of the five daily values:
\begin{equation*} W = X_1 + X_2 + X_3 + X_4 + X_5 \end{equation*}
Breaking the weekly travel time \(W\) into pieces provides a framework for understanding each source of randomness and is useful for modeling \(W\text{.}\)
It takes John an average of 18 minutes each day to commute to work. What would you expect his average commute time to be for the week?
We were told that the average (i.e. expected value) of the commute time is 18 minutes per day: \(E(X_i) = 18\text{.}\) To get the expected time for the sum of the five days, we can add up the expected time for each individual day:
\begin{align*} E(W) \amp = E(X_1 + X_2 + X_3 + X_4 + X_5)\\ \amp = E(X_1) + E(X_2) + E(X_3) + E(X_4) + E(X_5)\\ \amp = 18 + 18 + 18 + 18 + 18 = 90\text{ minutes } \end{align*}
The expectation of the total time is equal to the sum of the expected individual times. More generally, the expectation of a sum of random variables is always the sum of the expectation for each random variable.
Elena is selling a TV at a cash auction and also intends to buy a toaster oven in the auction. If \(X\) represents the profit for selling the TV and \(Y\) represents the cost of the toaster oven, write an equation that represents the net change in Elena's cash. 8
She will make \(X\) dollars on the TV but spend \(Y\) dollars on the toaster oven: \(X-Y\text{.}\)
Based on past auctions, Elena figures she should expect to make about $175 on the TV and pay about $23 for the toaster oven. In total, how much should she expect to make or spend? 9
\(E(X-Y) = E(X) - E(Y) = 175 - 23 = $152\text{.}\) She should expect to make about $152.
Would you be surprised if John's weekly commute wasn't exactly 90 minutes or if Elena didn't make exactly $152? Explain. 10
No, since there is probably some variability. For example, the traffic will vary from one day to next, and auction prices will vary depending on the quality of the merchandise and the interest of the attendees.
Two important concepts concerning combinations of random variables have so far been introduced. First, a final value can sometimes be described as the sum of its parts in an equation. Second, intuition suggests that putting the individual average values into this equation gives the average value we would expect in total. This second point needs clarification — it is guaranteed to be true in what are called linear combinations of random variables.
A linear combination of two random variables \(X\) and \(Y\) is a fancy phrase to describe a combination
\begin{equation*} aX + bY \end{equation*}
where \(a\) and \(b\) are some fixed and known numbers. For John's commute time, there were five random variables — one for each work day — and each random variable could be written as having a fixed coefficient of 1:
\begin{equation*} 1X_1 + 1 X_2 + 1 X_3 + 1 X_4 + 1 X_5 \end{equation*}
For Elena's net gain or loss, the \(X\) random variable had a coefficient of +1 and the \(Y\) random variable had a coefficient of -1.
When considering the average of a linear combination of random variables, it is safe to plug in the mean of each random variable and then compute the final result. For a few examples of nonlinear combinations of random variables — cases where we cannot simply plug in the means — see the footnote. 11
If \(X\) and \(Y\) are random variables, consider the following combinations: \(X^{1+Y}\text{,}\) \(X\times Y\text{,}\) \(X/Y\text{.}\) In such cases, plugging in the average value for each random variable and computing the result will not generally lead to an accurate average value for the end result.
Linear combinations of random variables and the average result.
If \(X\) and \(Y\) are random variables, then a linear combination of the random variables is given by \(aX + bY\text{,}\) where \(a\) and \(b\) are some fixed numbers. To compute the average value of a linear combination of random variables, plug in the average of each individual random variable and compute the result:
\begin{gather*} E(aX+bY) = a\times E(X) + b\times E(Y) \end{gather*}
Recall that the expected value is the same as the mean, i.e. \(E(X) = \mu_{X}\text{.}\)
Leonard has invested $6000 in Google Inc. (stock ticker: GOOG) and $2000 in Exxon Mobil Corp. (XOM). If \(X\) represents the change in Google's stock next month and \(Y\) represents the change in Exxon Mobil stock next month, write an equation that describes how much money will be made or lost in Leonard's stocks for the month.
For simplicity, we will suppose \(X\) and \(Y\) are not in percents but are in decimal form (e.g. if Google's stock increases 1%, then \(X=0.01\text{;}\) or if it loses 1%, then \(X=-0.01\)). Then we can write an equation for Leonard's gain as
\begin{gather*} $6000\times X + $2000\times Y \end{gather*}
If we plug in the change in the stock value for \(X\) and \(Y\text{,}\) this equation gives the change in value of Leonard's stock portfolio for the month. A positive value represents a gain, and a negative value represents a loss.
Suppose Google and Exxon Mobil stocks have recently been rising 2.1% and 0.4% per month, respectively. Compute the expected change in Leonard's stock portfolio for next month. 12
\(E($6000 \times X + $2000\times Y) = $6000\times 0.021 + $2000\times 0.004 = $134\text{.}\)
You should have found that Leonard expects a positive gain in Checkpoint 3.5.23. However, would you be surprised if he actually had a loss this month? 13
No. While stocks tend to rise over time, they are often volatile in the short term.
Subsection 3.5.8 Variability in linear combinations of random variables
Quantifying the average outcome from a linear combination of random variables is helpful, but it is also important to have some sense of the uncertainty associated with the total outcome of that combination of random variables. The expected net gain or loss of Leonard's stock portfolio was considered in Checkpoint 3.5.23. However, there was no quantitative discussion of the volatility of this portfolio. For instance, while the average monthly gain might be about $134 according to the data, that gain is not guaranteed. Figure 3.5.25 shows the monthly changes in a portfolio like Leonard's during the 36 months from 2009 to 2011. The gains and losses vary widely, and quantifying these fluctuations is important when investing in stocks.
Figure 3.5.25. The change in a portfolio like Leonard's for the 36 months from 2009 to 2011, where $6000 is in Google's stock and $2000 is in Exxon Mobil's.
Just as we have done in many previous cases, we use the variance and standard deviation to describe the uncertainty associated with Leonard's monthly returns. To do so, the standard deviations and variances of each stock's monthly return will be useful, and these are shown in Table 3.5.26. The stocks' returns are nearly independent.
Mean (\(\bar{x}\)) Standard deviation (\(s\)) Variance (\(s^2\))
GOOG 0.0210 0.0849 0.0072
XOM 0.0038 0.0520 0.0027
Table 3.5.26. The mean, standard deviation, and variance of the GOOG and XOM stocks. These statistics were estimated from historical stock data, so notation used for sample statistics has been used.
We want to describe the uncertainty of Leonard's monthly returns by finding the standard deviation of the return on his combined portfolio. First, we note that the variance of a sum has a nice property: the variance of a sum is the sum of the variances. That is, if X and Y are independent random variables:
\begin{align*} Var(X + Y) \amp = Var(X) + Var(Y) \end{align*}
Because the standard deviation is the square root of the variance, we can rewrite this equation using standard deviations:
\begin{gather*} (SD_{X + Y})^2 = (SD_X)^2 + (SD_Y)^2 \end{gather*}
This equation might remind you of a theorem from geometry: \(c^2 = a^2 + b^2\text{.}\) The equation for the standard deviation of the sum of two independent random variables looks analogous to the Pythagorean Theorem. Just as the Pythagorean Theorem only holds for right triangles, this equation only holds when X and Y are independent. 14
Another word for independent is orthogonal, meaning right angle! When X and Y are dependent, the equation for \(SD_{X+Y}\) becomes analogous to the law of cosines.
Standard deviation of the sum and difference of random variables.
If X and Y are independent random variables:
\begin{gather*} SD_{X + Y} = SD_{X - Y} = \sqrt{(SD_X)^2 + (SD_Y)^2} \end{gather*}
Because \(SD_Y\) = \(SD_{-Y}\text{,}\) the standard deviation of the difference of two variables equals the standard deviation of the sum of two variables. This property holds for more than two variables as well. For example, if X, Y, and Z are independent random variables:
\begin{gather*} SD_{X + Y + Z} = SD_{X - Y - Z} = \sqrt{(SD_X)^2 + (SD_Y)^2 + (SD_Z)^2} \end{gather*}
If we need the standard deviation of a linear combination of independent variables, such as \(aX + bY\text{,}\) we can consider \(aX\) and \(bY\) as two new variables. Recall that multiplying all of the values of a variable by a positive constant multiplies the standard deviation by that constant. Thus, \(SD_{aX}\) = \(a \times SD_X\) and \(SD_{bY}\) = \(b \times SD_Y\text{.}\) It follows that:
\begin{gather*} SD_{aX + bY} = \sqrt{(a \times SD_X)^2 + (b \times SD_Y)^2} \end{gather*}
This equation can be used to compute the standard deviation of Leonard's monthly return. Recall that Leonard has $6,000 in Google stock and $2,000 in Exxon Mobil's stock. From Table 3.5.26, the standard deviation of Google stock is 0.0849 and the standard deviation of Exxon Mobile stock is 0.0520.
\begin{align*} SD_{6000X + 2000Y} \amp = \sqrt{(6000\times SD_X)^2 + (2000\times SD_Y)^2}\\ \amp = \sqrt{(6000\times 0.0849)^2 + (2000\times .0520)^2}\\ \amp = \sqrt{270304} = 520 \end{align*}
The standard deviation of the total is $520. While an average monthly return of $134 on an $8000 investment is nothing to scoff at, the monthly returns are so volatile that Leonard should not expect this income to be very stable.
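The whole portfolio calculation fits in a few lines. Here is a minimal Python sketch (an illustration, not part of the text) using the sample statistics from Table 3.5.26 and assuming the two returns are independent:

import math

mu_goog, sd_goog = 0.0210, 0.0849    # GOOG monthly return: mean and SD
mu_xom,  sd_xom  = 0.0038, 0.0520    # XOM monthly return: mean and SD

a, b = 6000, 2000                    # dollars invested in GOOG and XOM

expected_gain = a * mu_goog + b * mu_xom                       # about $134
sd_gain = math.sqrt((a * sd_goog) ** 2 + (b * sd_xom) ** 2)    # about $520

print(round(expected_gain), round(sd_gain))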
Standard deviation of linear combinations of random variables.
To find the standard deviation of a linear combination of random variables, we first consider \(aX\) and \(bY\) separately. We find the standard deviation of each, and then we apply the equation for the standard deviation of the sum of two variables:
\begin{gather*} SD_{aX + bY} = \sqrt{(a\times SD_X)^2 + (b\times SD_Y)^2} \end{gather*}
This equation is valid as long as the random variables \(X\) and \(Y\) are independent of each other.
Suppose John's daily commute has a standard deviation of 4 minutes. What is the uncertainty in his total commute time for the week?
The expression for John's commute time is
\begin{gather*} X_1 + X_2 + X_3 + X_4 + X_5 \end{gather*}
Each coefficient is 1, so the standard deviation of the total weekly commute time is
\begin{align*} \text{ SD } \amp = \sqrt{(1 \times 4)^2 + (1 \times 4)^2 + (1 \times 4)^2 + (1 \times 4)^2 + (1 \times 4)^2}\\ \amp = \sqrt{5\times (4)^2}\\ \amp = 8.94 \end{align*}
The standard deviation for John's weekly work commute time is about 9 minutes.
The computation in Solution 3.5.27.1 relied on an important assumption: the commute time for each day is independent of the time on other days of that week. Do you think this is valid? Explain. 15
One concern is whether traffic patterns tend to have a weekly cycle (e.g. Fridays may be worse than other days). If that is the case, and John drives, then the assumption is probably not reasonable. However, if John walks to work, then his commute is probably not affected by any weekly traffic cycle.
Consider Elena's two auctions from Checkpoint 3.5.19. Suppose these auctions are approximately independent and the variability in auction prices associated with the TV and toaster oven can be described using standard deviations of $25 and $8. Compute the standard deviation of Elena's net gain. 16
The equation for Elena can be written as \((1) \times X + (-1) \times Y\text{.}\) To find the SD of this new variable we do: \(SD_{(1) \times X+(-1) \times Y}= \sqrt{(1 \times SD_{X})^2+(-1 \times SD_{Y})^2} = \sqrt{(1\times 25)^2+(-1\times 8)^2}=26.25\) The SD is about $26.25.
Consider again Checkpoint 3.5.29. The negative coefficient for \(Y\) in the linear combination was eliminated when we squared the coefficients. This generally holds true: negatives in a linear combination will have no impact on the variability computed for a linear combination, but they do impact the expected value computations.
Subsection 3.5.9 Section summary
A discrete probability distribution can be summarized in a table that consists of all possible outcomes of a random variable and the probabilities of those outcomes. The outcomes must be disjoint, and the sum of the probabilities must equal 1.
A probability distribution can be represented with a histogram and, like the distributions of data that we saw in Chapter 2, can be summarized by its center, spread, and shape.
When given a probability distribution table, we can calculate the mean (expected value) and standard deviation of a random variable using the following formulas.
\begin{align*} E(X) = \mu_{\scriptscriptstyle{X}} \amp = \sum{x_i\cdot P(x_i)} \amp\\ \amp = x_1\cdot P(x_1) + x_2\cdot P(x_2) + \cdots + x_n\cdot P(x_n)\\ Var(X) = \sigma^2_x \amp = \sum(x_i - \mu_{\scriptscriptstyle{X}})^2 \cdot P(x_i)\\ SD(X) = \sigma_{\scriptscriptstyle{X}} \amp =\sqrt{\sum(x_i - \mu_{\scriptscriptstyle{X}})^2 \cdot P(x_i)}\\ \amp =\sqrt{(x_1-\mu_{\scriptscriptstyle{X}})^2\cdot P(x_1) + (x_2-\mu_{\scriptscriptstyle{X}})^2\cdot P(x_2) + \cdots + (x_n-\mu_{\scriptscriptstyle{X}})^2\cdot P(x_n) } \end{align*}
We can think of \(P(x_i)\) as the weight, and each term is weighted by its appropriate amount.
The mean of a probability distribution does not need to be a value in the distribution. It represents the average of many, many repetitions of a random process. The standard deviation represents the typical variation of the outcomes from the mean, when the random process is repeated over and over.
Linear transformations. Adding a constant to every value in a probability distribution adds that value to the mean, but it does not affect the standard deviation. When multiplying every value by a constant, this multiplies the mean by the constant and it multiplies the standard deviation by the absolute value of the constant.
Combining random variables. Let \(X\) and \(Y\) be random variables and let \(a\) and \(b\) be constants.
The expected value of the sum is the sum of the expected values.
\(E(X+Y) = E(X) + E(Y)\)
\(E(aX+bY) = a\times E(X) + b\times E(Y)\)
When X and Y are independent: The standard deviation of a sum or a difference is the square root of the sum of each standard deviation squared.
\(SD(X + Y) = \sqrt{(SD(X))^2 + (SD(Y))^2}\)
\(SD(X - Y) = \sqrt{(SD(X))^2 + (SD(Y))^2}\)
\(SD(aX + bY) = \sqrt{(a\times SD(X))^2 + (b\times SD(Y))^2}\)
The SD properties require that \(X\) and \(Y\) be independent. The expected value properties hold true whether or not \(X\) and \(Y\) are independent.
Exercises 3.5.10 Exercises
1. College smokers.
At a university, 13% of students smoke.
Calculate the expected number of smokers in a random sample of 100 students from this university.
The university gym opens at 9 am on Saturday mornings. One Saturday morning at 8:55 am there are 27 students outside the gym waiting for it to open. Should you use the same approach from part (a) to calculate the expected number of smokers among these 27 students?
(a) 13.
(b) No, these 27 students are not a random sample from the university's student population. For example, it might be argued that the proportion of smokers among students who go to the gym at 9 am on a Saturday morning would be lower than the proportion of smokers in the university as a whole.
2. Ace of clubs wins.
Consider the following card game with a well-shuffled deck of cards. If you draw a red card, you win nothing. If you get a spade, you win $5. For any club, you win $10 plus an extra $20 for the ace of clubs.
Create a probability model for the amount you win at this game. Also, find the expected winnings for a single game and the standard deviation of the winnings.
What is the maximum amount you would be willing to pay to play this game? Explain your reasoning.
3. Hearts win.
In a new card game, you start with a well-shuffled full deck and draw 3 cards without replacement. If you draw 3 hearts, you win $50. If you draw 3 black cards, you win $25. For any other draws, you win nothing.
Create a probability model for the amount you win at this game, and find the expected winnings. Also compute the standard deviation of this distribution.
If the game costs $5 to play, what would be the expected value and standard deviation of the net profit (or loss)? (Hint: profit = winnings \(-\) cost; \(X-5\))
If the game costs $5 to play, should you play this game? Explain.
(a) \(E(X) = 3.59\text{.}\) \(SD(X) = 9.64\text{.}\)
(b) \(E(X) = -1.41\text{.}\) \(SD(X) = 9.64\text{.}\)
(c) No, the expected net profit is negative, so on average you expect to lose money.
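To see where these numbers come from, compute the two winning probabilities with combinations and then apply the expected value and standard deviation formulas from this section. A minimal Python sketch (an illustration, not part of the printed solution):

import math

p_hearts = math.comb(13, 3) / math.comb(52, 3)   # about 0.0129
p_black  = math.comb(26, 3) / math.comb(52, 3)   # about 0.1176
p_other  = 1 - p_hearts - p_black

x = [50, 25, 0]
p = [p_hearts, p_black, p_other]

mu = sum(xi * pi for xi, pi in zip(x, p))
sd = math.sqrt(sum((xi - mu) ** 2 * pi for xi, pi in zip(x, p)))
print(round(mu, 2), round(sd, 2))         # about 3.59 and 9.64

# paying $5 to play shifts the mean down by 5 but leaves the SD unchanged
print(round(mu - 5, 2), round(sd, 2))     # about -1.41 and 9.64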
4. Is it worth it?
Andy is always looking for ways to make money fast. Lately, he has been trying to make money by gambling. Here is the game he is considering playing: The game costs $2 to play. He draws a card from a deck. If he gets a number card (2-10), he wins nothing. For any face card (jack, queen or king), he wins $3. For any ace, he wins $5, and he wins an extra $20 if he draws the ace of clubs.
Create a probability model and find Andy's expected profit per game.
Would you recommend this game to Andy as a good way to make money? Explain.
5. Portfolio return.
A portfolio's value increases by 18% during a financial boom and by 9% during normal times. It decreases by 12% during a recession. What is the expected return on this portfolio if each scenario is equally likely?
5% increase in value.
6. Baggage fees.
An airline charges the following baggage fees: $25 for the first bag and $35 for the second. Suppose 54% of passengers have no checked luggage, 34% have one piece of checked luggage and 12% have two pieces. We suppose a negligible portion of people check more than two bags.
Build a probability model, compute the average revenue per passenger, and compute the corresponding standard deviation.
About how much revenue should the airline expect for a flight of 120 passengers? With what standard deviation? Note any assumptions you make and if you think they are justified.
7. American roulette.
The game of American roulette involves spinning a wheel with 38 slots: 18 red, 18 black, and 2 green. A ball is spun onto the wheel and will eventually land in a slot, where each slot has an equal chance of capturing the ball. Gamblers can place bets on red or black. If the ball lands on their color, they double their money. If it lands on another color, they lose their money. Suppose you bet $1 on red. What's the expected value and standard deviation of your winnings?
\(E = -0.0526\text{.}\) \(SD = 0.9986\text{.}\)
8. European roulette.
The game of European roulette involves spinning a wheel with 37 slots: 18 red, 18 black, and 1 green. A ball is spun onto the wheel and will eventually land in a slot, where each slot has an equal chance of capturing the ball. Gamblers can place bets on red or black. If the ball lands on their color, they double their money. If it lands on another color, they lose their money.
Suppose you play roulette and bet $3 on a single round. What is the expected value and standard deviation of your total winnings?
Suppose you bet $1 in three different rounds. What is the expected value and standard deviation of your total winnings?
How do your answers to parts (a) and (b) compare? What does this say about the riskiness of the two games?