https://planetmath.org/GeneralizedEigenspace | # generalized eigenspace
Let $V$ be a vector space (over a field $k$), $T$ a linear operator on $V$, and $\lambda$ an eigenvalue of $T$. The set $E_{\lambda}$ of all generalized eigenvectors of $T$ corresponding to $\lambda$, together with the zero vector $0$, is called the generalized eigenspace of $T$ corresponding to $\lambda$. In short, the generalized eigenspace of $T$ corresponding to $\lambda$ is the set
$E_{\lambda}:=\{v\in V\mid(T-\lambda I)^{i}(v)=0\textrm{ for some positive integer }i\}.$
Here are some properties of $E_{\lambda}$:
1. $W_{\lambda}\subseteq E_{\lambda}$, where $W_{\lambda}$ is the eigenspace of $T$ corresponding to $\lambda$.
2. $E_{\lambda}$ is a subspace of $V$ and $E_{\lambda}$ is $T$-invariant.
3. If $V$ is finite dimensional, then $\dim(E_{\lambda})$ is the algebraic multiplicity of $\lambda$.
4. $E_{\lambda_{1}}\cap E_{\lambda_{2}}=0$ iff $\lambda_{1}\neq\lambda_{2}$. More generally, $E_{A}\cap E_{B}=0$ iff $A$ and $B$ are disjoint sets of eigenvalues of $T$, where $E_{A}$ (or $E_{B}$) is defined as the sum of all $E_{\lambda}$ with $\lambda\in A$ (or $B$).
5. If $V$ is finite dimensional and $T$ is a linear operator on $V$ such that its characteristic polynomial $p_{T}$ splits (over $k$), then
$V=\bigoplus_{\lambda\in S}E_{\lambda},$
where $S$ is the set of all eigenvalues of $T$.
6. Assume that $T$ and $V$ have the same properties as in (5). By the Jordan canonical form theorem, there exists an ordered basis $\beta$ of $V$ such that $[T]_{\beta}$ is a Jordan canonical form. Furthermore, if we set $\beta_{i}=\beta\cap E_{\lambda_{i}}$, then $[T|_{E_{\lambda_{i}}}]_{\beta_{i}}$, the matrix representation of $T|_{E_{\lambda_{i}}}$ (the restriction of $T$ to $E_{\lambda_{i}}$), is a Jordan canonical form. In other words,
$[T]_{\beta}=\begin{pmatrix}J_{1}&O&\cdots&O\\ O&J_{2}&\cdots&O\\ \vdots&\vdots&\ddots&\vdots\\ O&O&\cdots&J_{n}\end{pmatrix}$
where each $J_{i}=[T|_{E_{\lambda_{i}}}]_{\beta_{i}}$ is a Jordan canonical form, and $O$ is a zero matrix.
7. Conversely, for each $E_{\lambda_{i}}$, there exists an ordered basis $\beta_{i}$ for $E_{\lambda_{i}}$ such that $J_{i}:=[T|_{E_{\lambda_{i}}}]_{\beta_{i}}$ is a Jordan canonical form. As a result, $\beta:=\bigcup_{i=1}^{n}\beta_{i}$, with a linear order extending each $\beta_{i}$ such that $v_{i}<v_{j}$ for $v_{i}\in\beta_{i}$ and $v_{j}\in\beta_{j}$ whenever $i<j$, is an ordered basis for $V$ such that $[T]_{\beta}$ is a Jordan canonical form, being the direct sum of the matrices $J_{i}$.
8. Each $J_{i}$ above can be further decomposed into Jordan blocks, and it turns out that the number of Jordan blocks in each $J_{i}$ is the dimension of $W_{\lambda_{i}}$, the eigenspace of $T$ corresponding to $\lambda_{i}$ (see the numerical sketch below).
More to come…
## References
1. Friedberg, Insel, Spence. Linear Algebra. Prentice-Hall Inc., 1997.
https://web2.0calc.com/questions/congruence-and-residues | # Congruence and Residues
Remove the integers which are congruent to 3 (mod 7) from the following list of five integers, and sum the integers that remain. $$85 \qquad 49,\!479 \qquad -67 \qquad 12,\!000,\!003 \qquad -3$$
May 11, 2020
#1
85 mod 7 = 1 Keep
49,479 mod 7 = 3 Remove
-67 mod 7 = 3 Remove
12,000,003 mod 7 = 1 Keep
-3 mod 7 = 4 Keep
[85 + 12,000,003 - 3] = 12,000,085
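The computation is easy to check in Python (a snippet added here, not part of the original thread); note that Python's % operator already returns the nonnegative residue, so (-3) % 7 == 4 and (-67) % 7 == 3:

nums = [85, 49479, -67, 12000003, -3]
kept = [n for n in nums if n % 7 != 3]   # drop those congruent to 3 (mod 7)
print(kept, sum(kept))                   # [85, 12000003, -3] 12000085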
May 11, 2020
#2
There is more than one answer for this.
for instance:
-3 mod 7 does equal -3 but it also equals 4. Which one do you want?
Maybe it is mod 7 of the result that is wanted. In which case there would be only one answer.
Melody May 11, 2020
https://docs.scipy.org/doc/scipy-1.2.1/reference/generated/scipy.sparse.csgraph.breadth_first_order.html | # scipy.sparse.csgraph.breadth_first_order¶
scipy.sparse.csgraph.breadth_first_order(csgraph, i_start, directed=True, return_predecessors=True)
Return a breadth-first ordering starting with specified node.
Note that a breadth-first order is not unique, but the tree which it generates is unique.
New in version 0.11.0.
Parameters:
csgraph : array_like or sparse matrix
    The N x N compressed sparse graph. The input csgraph will be converted to csr format for the calculation.
i_start : int
    The index of the starting node.
directed : bool, optional
    If True (default), then operate on a directed graph: only move from point i to point j along paths csgraph[i, j]. If False, then find the shortest path on an undirected graph: the algorithm can progress from point i to j along csgraph[i, j] or csgraph[j, i].
return_predecessors : bool, optional
    If True (default), then return the predecessor array (see below).
Returns:
node_array : ndarray, one dimension
    The breadth-first list of nodes, starting with the specified node. The length of node_array is the number of nodes reachable from the specified node.
predecessors : ndarray, one dimension
    Returned only if return_predecessors is True. The length-N list of predecessors of each node in a breadth-first tree. If node i is in the tree, then its parent is given by predecessors[i]. If node i is not in the tree (and for the starting node itself), then predecessors[i] = -9999.
Examples
>>> from scipy.sparse import csr_matrix
>>> from scipy.sparse.csgraph import breadth_first_order
>>> graph = [
... [0, 1, 2, 0],
... [0, 0, 0, 1],
... [2, 0, 0, 3],
... [0, 0, 0, 0]
... ]
>>> graph = csr_matrix(graph)
>>> print(graph)
(0, 1) 1
(0, 2) 2
(1, 3) 1
(2, 0) 2
(2, 3) 3
>>> breadth_first_order(graph, 0)
(array([0, 1, 2, 3], dtype=int32), array([-9999, 0, 0, 1], dtype=int32))
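As a small follow-on (not part of the original page), the predecessors array can be used to recover each reached node's parent in the breadth-first tree; -9999 marks the starting node:

>>> nodes, preds = breadth_first_order(graph, 0)
>>> {int(n): int(preds[n]) for n in nodes if preds[n] != -9999}
{1: 0, 2: 0, 3: 1}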
https://indico.cern.ch/event/656452/contributions/2870999/ | # Quark Matter 2018
May 13 – 19, 2018
Venice, Italy
Europe/Zurich timezone
## A new correlator for the detection and characterization of the Chiral Magnetic Effect
May 15, 2018, 5:00 PM
2h 40m
First floor and third floor (Palazzo del Casinò)
Poster: Chirality, vorticity and polarisation effects
### Speaker
Shuzhe Shi (Indiana University)
### Description
A charge-sensitive in-event correlator ($R(\Delta S)$) is proposed and tested for its efficacy to detect and characterize charge separation associated with the Chiral Magnetic Effect (CME) in heavy ion collisions [1]. For CME-driven charge separation, the correlator gives a concave response relative to the second-order event plane ($\Psi_2$), and a null response relative to the third-order plane ($\Psi_3$), consistent with the correlation (de-correlation) of the $\vec{B}$-field with the $\Psi_2$ ($\Psi_3$) plane. For non-CME backgrounds, the correlator gives responses relative to $\Psi_2$ and $\Psi_3$ that allow a distinction between CME-driven charge separation and non-CME backgrounds. We discuss the $R(\Delta S)$ correlator and present results for its detailed response and sensitivity, to both signal and background, in several reaction models. These include (but are not limited to) a 3+1-dimensional hydrodynamic model [2], the Anomalous Viscous Fluid Dynamics (AVFD) model [3] and the Multi-Phase Transport model (AMPT) [4]. The implications for the use of the $R(\Delta S)$ correlator in the upcoming Isobar Run at RHIC will also be discussed.
[1] N. Magdy, S. Shi, J. Liao, N. Ajitanand, and R. A. Lacey, arXiv:1710.01717
[2] P. Bozek, arXiv:1711.02563
[3] S. Shi, Y. Jiang, E. Lilleskov, J. Liao, arXiv:1711.02496
[4] Z. W. Lin et al., Phys. Rev. C 72, 064901 (2005)
### Primary authors
Mr Niseem Abdelrahman "Magdy" (Stony Brook University), Roy Lacey (Stony Brook University), Jinfeng Liao (Indiana University), Shuzhe Shi (Indiana University)
https://export.arxiv.org/abs/1911.04530v1 | hep-ph
# Title: NLO impact factor for inclusive photon$+$dijet production in $e+A$ DIS at small $x$
Abstract: We compute the next-to-leading order (NLO) impact factor for inclusive photon$+$dijet production in electron-nucleus (e+A) deeply inelastic scattering (DIS) at small $x$. An important ingredient in our computation is the simple structure of "shock wave" fermion and gluon propagators. This allows one to employ standard momentum space Feynman diagram techniques for higher order computations in the Regge limit of fixed $Q^2\gg \Lambda_{\rm QCD}^2$ and $x\rightarrow 0$. Our computations in the Color Glass Condensate (CGC) effective field theory include the resummation of all-twist power corrections $Q_s^2/Q^2$, where $Q_s$ is the saturation scale in the nucleus. We discuss the structure of ultraviolet, collinear and soft divergences in the CGC, and extract the leading logs in $x$; the structure of the corresponding rapidity divergences gives a nontrivial first principles derivation of the JIMWLK renormalization group evolution equation for multiparton lightlike Wilson line correlators. Explicit expressions are given for the $x$-independent $O(\alpha_s)$ contributions that constitute the NLO impact factor. These results, combined with extant results on NLO JIMWLK evolution, provide the ingredients to compute the inclusive photon $+$ dijet cross-section at small $x$ to $O(\alpha_s^3 \ln(x))$. First results for the NLO impact factor in inclusive dijet production are recovered in the soft photon limit. A byproduct of our computation is the LO photon + 3 jet (quark-antiquark-gluon) cross-section.
Comments: 104 pages, 35 figures
Subjects: High Energy Physics - Phenomenology (hep-ph); High Energy Physics - Theory (hep-th); Nuclear Theory (nucl-th)
Cite as: arXiv:1911.04530 [hep-ph] (or arXiv:1911.04530v1 [hep-ph] for this version)
## Submission history
From: Kaushik Roy [view email]
[v1] Mon, 11 Nov 2019 19:23:09 GMT (2004kb,D)
[v2] Tue, 3 Dec 2019 00:59:03 GMT (2005kb,D)
http://mathhelpforum.com/trigonometry/74250-trig-help-verify-answers.html | # Math Help - Trig Help ( verify answers)
1. ## Trig Help ( verify answers)
Convert the angle given in degrees to radian measure in terms of $\pi$: 380°
to 4 sig.figs
1.018
Solve the equation for all nonnegative values of less than . Do by calculator, if needed, and give the answers to three significant digits in the order of increasing.
(not sure what to do)
A satellite is in a circular orbit 225 km above the equator of the earth. How many kilometres must it travel for its longitude to change by 86.3°?
Assume the radius of the earth equals 6400 kilometres.(round to the nearest whole number)
i got 9639.80 =9600
2. Originally Posted by rock candy
Convert the angle given in degrees to radian measure in terms of $\pi$: 380°
Unfortunately, I cannot see any of your attachments.
"6.63 rad" is the NUMERICAL value but the problem asked you give it in terms of $\pi$: Since there are 360 degrees or $2\pi$ radians so the conversion factor is $\frac{2\pi}{360}= \frac{\pi}{180}$ radians per degree. $\frac{\pi}{180}(380)= \frac{380}{180}\pi$. Just do the fraction part.
https://tex.stackexchange.com/questions/334462/how-to-draw-an-offset-border-to-a-tikz-node | # How to draw an offset border to a tikz node?
How do I make an offset border as shown in the attached picture? This looks like it should be doable with tikz pretty easily. I can use the "double" option for the node, but I have no idea how to continue for the attachment of the arrow(s).
In the meantime, I got the following result:
\documentclass{article}
\usepackage{tikz}
\usetikzlibrary{positioning,decorations.markings,arrows}
\begin{document}
\tikzstyle{border_arrow} = [%
decoration={markings,mark=at position 1 with {\arrow[scale=2,line width=.25pt,red!50]{open triangle 60}}}, %
red!50, double distance=4.4pt, shorten >= 9.5pt, shorten <= .45mm, %
line width=.5pt, %
preaction = {decorate}, %
postaction = {draw,line width=4.4pt, white,shorten >= 8.5pt, shorten <= 1pt}]
\tikzstyle{my_node} = [double,double distance=1mm, inner sep=3mm, text width=2.5cm, align=center,line width=.5pt,font=\sffamily\footnotesize\color{white},draw=red!50, fill=red!50]
\begin{tikzpicture}
\node (data1) [my_node] {Commercial Starting Material};
\node [right=of data1] (A) {}; % phantom node to avoid overlapping
\draw[border_arrow] (data1.east|-A.west) -- (A.west);
\node (data2) [right of=A, my_node,xshift=.6cm] {A $\to$ B $\to$ C $\to$ D};
\end{tikzpicture}
\end{document}
If someone has a more simple way to achieve a similar result, I'm glad to hear it.
• Please convert your code snippet to a complete small document (MWE: Minimal Working Example), which will contain all necessary packages and your definitions. As it is, it cannot be compiled. Help us to help you. – Zarko Oct 17 '16 at 14:01
• I changed it to a MWE, thanks for the advice. – es560 Oct 17 '16 at 14:47
I tried to reproduce your provided image:
Of course my code is written from scratch (since your code is not helpful at all ...):
\documentclass[tikz,
border=3mm]{standalone}
\usetikzlibrary{calc,
fit,
positioning,
shapes}
\begin{document}
\begin{tikzpicture}[
node distance = 0mm and 12mm,
databox/.style = {inner sep=3mm, text width=2.4cm, align=center, fill=red,
font=\sffamily\footnotesize\bfseries\color{white}
},
arrowbox/.style = {arrow box, arrow box arrows={east:12mm},
arrow box shaft width=5mm, arrow box head extend=3mm,
draw=red, very thick, inner sep=3mm
}
]
\node (data1) [databox] {Commercial Starting Material};
\node (A1) [arrowbox, fit=(data1)] {};
%
\node (data2) [databox, right=of A1.east]
{A $\to$ B $\to$ C $\to$ D};
\node (A2) [arrowbox, fit=(data2)] {};
\end{tikzpicture}
\end{document}
An alternative solution based on a matrix node which has arrowbox style and contains a databox node.
I thought that a matrix node was always rectangular, but now I know it can take whatever shape we want. The matrix only arranges inner nodes.
Based in Zarko's code:
\documentclass[tikz, border=3mm]{standalone}
\usetikzlibrary{positioning, shapes}
\begin{document}
\begin{tikzpicture}[
node distance = 0mm and 12mm,
databox/.style = {inner sep=3mm, text width=2.4cm,
align=center, fill=red,
font=\sffamily\footnotesize\bfseries\color{white}
},
arrowbox/.style = {arrow box, arrow box arrows={east:12mm},
arrow box shaft width=5mm, arrow box head extend=3mm,
draw=red, very thick, inner sep=3mm,
minimum width=36mm, minimum height=15mm
}
]
\matrix[arrowbox] (A1) {\node[databox] {Commercial Starting Material};\\};
\matrix[arrowbox, right=of A1.east] (A2) {%
\node[databox] {A $\to$ B $\to$ C $\to$ D};\\};
\end{tikzpicture}
\end{document} | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6531389951705933, "perplexity": 10839.626622324233}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027318421.65/warc/CC-MAIN-20190823130046-20190823152046-00487.warc.gz"} |
http://crypto.stackexchange.com/questions/15732/nonlinearity-of-the-j-k-flip-flop | # Nonlinearity of the J-K Flip Flop
In Encryption Schemes for Computer Confidentiality, Pless describes how to use the J-K flip flop as a nonlinear combiner for linear feedback shift registers. This generator was broken because the J-K flip flops are not correlation immune but my question concerns the basic nonlinearity of the J-K function itself: how do you measure its nonlinearity? For instance we know that Boolean functions with maximum nonlinearity are the bent functions, which have the farthest distance to all affine functions. Then we have the linear functions like XOR, so where would the J-K flip flop fit into this spectrum?
There's no way to reduce the J-K flip-flop to a boolean function, much less a 2-input boolean function. The J-K flip-flop is a 3-input block with a clock input in addition to inputs $J$ and $K$. Its output(s) is/are not a boolean function of these inputs, because past output(s) matters to future output(s).
If we want to model a J-K flip-flop using boolean functions, we can do this in several ways, including two identical 3-input functions $Q(I,B,C)=(I\cdot B\cdot C)|\bar B$, where $I$ is $J$ for one function and $K$ for the other, $B$ is the output of the other function, and $C$ is the clock, as per this schematic:
Knowing the initial state $Q$, and the inputs $J$, $K$, $C$, this allows predicting future $Q$, except for some inputs where $C$ changes near-simultaneously with either $J$ or $K$.
The article considers a J-K flip-flop with single output, sharing its clock input (made implicit) with the sources feeding its other two explicit inputs $J$ and $K$. The model given is:
The state (noted $R_n$ in the article) of the output $Q$ of the J-K flip-flop at a certain clock period $n$ is $$R_{n+1}=((J\oplus \bar K)\cdot R_n)\oplus J$$ (as can be deduced from any proper model of the J-K flip-flop, including the above model as two boolean functions). And thus, in a modern exposition, anything using the J-K flip-flop in this clock configuration can be readily expressed in the framework of Non-Linear Feedback Shift Registers.
In the shared clock configuration of the article, the J-K flip-flop's feedback function given above is a 3-input, 1-output boolean function $F(J,K,R)=((J\oplus \bar K)\cdot R)\oplus J$, where the additional input $R$ is the previous output. The truth table for $F$ can be condensed from the normal $2^3$ entries to \begin{align*} J&K&F\\ 0&0&R&\text{ unchanged}\\ 0&1&0&\text{ clear}\\ 1&0&1&\text{ set}\\ 1&1&\bar R&\text{ toggle}\\ \end{align*}
This function $F$ is balanced (that is, half of the $2^3$ outputs are 1), and $F(J,K,R)\oplus R$ is also balanced. $F$ is non-linear w.r.t. each of its inputs (that is, the XOR of the output with any input is not a function of the other inputs).
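These counts are easy to verify exhaustively (a snippet of mine, not from the original answer):

from itertools import product

def F(J, K, R):                 # R_{n+1} = ((J xor not-K) and R_n) xor J
    return ((J ^ K ^ 1) & R) ^ J

rows = list(product((0, 1), repeat=3))
print(sum(F(*r) for r in rows))                  # 4 of 8: F is balanced
print(sum(F(J, K, R) ^ R for J, K, R in rows))   # 4 of 8: F xor R is balanced
print(sum(F(J, K, R) == J for J, K, R in rows))  # 6 of 8: biased towards J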
$F$ is not correlation-immune: the output is biased towards $J$, and towards $\bar K$. This indeed allows an attack in the context of combining LFSRs, but replacing $F$ with any 3-input boolean function can't improve this; enumerating the 256 such functions, we would find some issue with the output (ordered from worst to least):
• severely biased (that's why we want $F$ balanced);
• severely correlated to the previous output (that's why we want $F(J,K,R)\oplus R$ balanced);
• equal within polarity and a clock delay to one of the two inputs, which (being a LFSR) is efficiently predictable using the Berlekamp-Massey algorithm;
• equal within polarity and a clock delay to the XOR of the two inputs, which is efficiently predictable by the same method, because the XOR of the outputs of any two LFSRs is identical to the output of another LFSR of size at most equal to the sum of the sizes of the two LFSRs;
• just as correlated with the inputs as in the J-K flipflop case.
Note: I'm not familiar with the definition of bent and semi-bent functions, and won't attempt to classify where $F$ belongs, but that should be trivial from the definition of bent and semi-bent, whatever that is.
Note: The J-K flip-flop seems to be one of the simplest two-inputs combiner with memory worth consideration in the framework studied by Frederik Armknecht and Matthias Krause in Algebraic Attacks on Combiners with Memory (Crypto 2003).
OK, thanks for the info, this is interesting. According to some papers I've read, bent functions have to have an even number of variables, so according to your analysis, the flip flop can't have the same nonlinearity as the bent functions. I will wait for more answers here, hopefully someone will enlighten us. – William Hird Apr 25 '14 at 8:36
So let me ask this question: if the J-K flip flop was correlation-immune, it could be considered a strong cryptographic combining function? – William Hird Apr 25 '14 at 15:16
@WilliamHird, I think you might have a misconception. It sounds like you want to design a secure stream cipher, and your approach is to try to find a function that in isolation has some combinatorial properties. This has two problems: (1) to design a secure stream cipher, you need to look holistically at the entire design; you can't just pick out one component/function used in the stream cipher and say that "since it has properties X,Y,Z, the cipher is secure"; (2) designing secure ciphers is very hard, and you're unlikely to do better than existing state-of-the-art schemes. – D.W. Apr 25 '14 at 20:16
@DW, Yes I am aware of how hard it is, but I like the challenge. But now that I am retired, it's either design something new or watch "Gilligan's Island" re-runs. Oh, and by the way, if I were you, I wouldn't bet against me ;-) – William Hird Apr 26 '14 at 1:01
@grieu, Nice work, answer accepted. You are the "hardest working man in cryptography"! – William Hird Apr 26 '14 at 1:03
http://math.stackexchange.com/questions/212195/form-of-weakly-continuous-linear-functional | # Form of weakly continuous linear functional
This was originally a problem in Stratila and Zsido's "Lectures on von Neumann algebras" (E.1.2). I've spent so much time working on it, and right now I cannot see how the result can be so simple.
The problem goes like this: Let $\omega$ be a weakly continuous linear functional on $B(\mathscr{H})$. Then there exist two families of mutually orthogonal vectors $\{\xi_1,\ldots,\xi_n\},\ \{\eta_1,\ldots,\eta_n\}$ in $\mathscr{H}$ such that $$\omega(T)=\sum_{i=1}^n\langle T\xi_i,\eta_i\rangle,\quad T\in B(\mathscr{H}),$$$$\|\omega\|=\sum_{i=1}^n\|\xi_i\|\|\eta_i\|.$$
I've tried altering the proof that any weakly continuous linear functional can be written in the above form with no extra assumptions on the vectors, and gotten as far as proving that the $\xi_i$'s can be chosen to be mutually orthogonal (orthonormal, in fact), but that's about it. Does anybody have any suggestions of what to do? I thought about using some facts about compact operators, but seeing as it is not a prerequisite of understanding the section containing the problem, I'm assuming the proof can be elementary (even though it's marked as one of the harder exercises).
http://math.stackexchange.com/questions/100294/question-on-fundamental-weights-and-representations | # Question on fundamental weights and representations
I am a bit confused about the notion of "fundamental weights".
In a complexified setting, I am thinking of my Lie algebra as decomposed into $\mathfrak{g} = \mathfrak{t} \oplus \bigoplus_{\alpha} \mathfrak{g}_{\alpha}$, where the $\mathfrak{g}_{\alpha}$ are the root-spaces. Now given a root $\alpha_j$, one defines its co-root $H_{\alpha_j} \in [\mathfrak{g}_{\alpha_j}, \mathfrak{g}_{-\alpha_j}]$ such that $\alpha_j (H_{\alpha_j}) = 2$.
• Now one seems to define the "fundamental weights" as a set of $\mathrm{rank}(G)$ elements $\omega_i \in t^*$ such that $\omega_i (H_{\alpha_j}) = \delta_{ij}$.
• In the above definition is it necessary that the $\alpha_j$ have to be simple roots? (..i get this feeling when looking at examples..) I guess one can get away by defining the action of the fundamental weights on the co-roots of simple roots only because the co-roots are themselves enough to give a basis for $t^*$ just like the simple-roots. Is that right?
• For the case of $SU(n)$ one chooses the simple root spaces to be the spans of the matrices $E_{ij}$ - which have a $1$ at the $(i,j)$ position and a $0$ everywhere else. If the Cartan subalgebra is spanned by matrices of the form $H_\lambda = diag(\lambda_i)$, then one has the roots $\alpha_{ij}$ defined as, $[H_\lambda,E_{ij}] = \alpha_{ij}(H_\lambda)E_{ij} = (\lambda_i - \lambda_j)E_{ij}$ Now since $\alpha_{ji} = - \alpha_{ij}$, one would search for the co-root $H_{\alpha_{ij}} \in [E_{ij},E_{ji}]$. Hence I would have naively expected that $H_{\alpha_{ij}} = E_{ii} - E_{jj}$ for all pairs of $i<j$.
But why is it that in literature I see the co-roots of $SU(N)$ to be taken as, $H_{\alpha _ {i i+1}} = E_{ii} - E_{i+1,i+1}$? Is this again a question of some standard choice of basis?
• From the above how does it follow that the fundamental weights $\omega_i$ of $SU(N)$ are given as $\omega_i (H_\lambda) = \sum _{k=1} ^{k=i} \lambda_k$ ?
• How is all the above related to the idea that there are $N-1$ fundamental representations of $SU(N)$? And how are they demarcated?
• Dear Anirbit, perhaps you have an interest in illuminating this problem: cartan-matrix-for-an-exotic-type-of-lie-algebra – miss-tery Jan 10 '14 at 20:25
Fundamental weights correspond to fundamental roots (i.e. simple roots). Each choice of simple roots leads to a different choice of fundamental weights. There aren't really any fundamental weights associated with other (non-simple) roots (or at least this terminology isn't standard to my knowledge).
[Note: The rank of $\mathfrak{sl}_N$ (or equivalently $SU(N)$) is $N-1$. I will set $\ell=N-1$.]
Basics: First, a set of simple roots must be chosen (any two systems of simple roots are conjugate under the action of the Weyl group). Say $\{\alpha_1,\dots,\alpha_\ell \}$ is you set of simple roots. Suppose we have also fixed a set of Chevalley generators $\{ E_i, F_i, H_i \;|\; i=1,\dots,\ell \}$ so these are elements such that $H_i \in [\mathfrak{g}_{\alpha_i},\mathfrak{g}_{-\alpha_i}]$ such that $\alpha_i(H_i)=2$ and $[E_i,F_i]=H_i$ where $E_i \in\mathfrak{g}_{\alpha_i}$ and $F_i \in\mathfrak{g}_{-\alpha_i}$. Then $\alpha_j(H_i)=a_{ji}$ = the $i,j$-entry of the Cartan matrix (or the $j,i$-entry of the Cartan matrix, depending on whose convention you are using) so in particular $\alpha_i(H_i)=a_{ii}=2$.
Next, what you have for the fundamental weights is not quite correct. The fundamental weights $\{\omega_1,\dots,\omega_\ell \}$ form a basis for $t^*$ which is dual to the (basis of) simple coroots $\{H_1,\dots,H_\ell\}$. In other words, $\omega_i(H_j)=\delta_{ij}$ (the Kronecker delta: $\delta_{ii}=1$ and $\delta_{ij}=0$ for $i\not=j$). In particular, $\omega_i(H_i)=1$ (not $2$).
Next, take a finite dimensional irreducible $\mathfrak{g}$-module. From the theory we know it is a highest weight module, say $V(\lambda)$ which is the direct sum of weight spaces. These weights are of the form $c_1\omega_1+\cdots+c_\ell\omega_\ell$ where $c_i \in \mathbb{Z}$ (integral linear combinations of fundamental weights). In particular, the roots of $\mathfrak{g}$ along with $0$ (the zero functional) are the weights of the adjoint representation. So roots are integral linear combinations of fundamental weights. Actually, it turns out that $\alpha_i = a_{i1}\omega_1+a_{i2}\omega_2+\cdots+a_{i\ell}\omega_{\ell}$ so the Cartan matrix (or its transpose) is the change of basis matrix from fundamental weights to simple roots. The importance of the fundamental weights is that they form a basis for the lattice of weights of finite dimensional representations of $\mathfrak{g}$.
So $\{H_1,\dots,H_\ell\}$ (simple co-roots) form a basis for $t$. Both $\{\alpha_1,\dots,\alpha_\ell\}$ (simple roots) and $\{\omega_1,\dots,\omega_\ell\}$ (fundamental weights) are bases for $t^*$. The fundamental weight basis is dual to the simple co-root basis. And the Cartan matrix is a change of basis matrix from the simple roots to the fundamental weights.
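For example (a worked check added here, using the standard $A_2$ Cartan matrix $\begin{pmatrix}2&-1\\-1&2\end{pmatrix}$): for $\mathfrak{sl}_3$ one gets $$\alpha_1 = 2\omega_1 - \omega_2, \qquad \alpha_2 = -\omega_1 + 2\omega_2,$$ and inverting, $$\omega_1 = \tfrac{1}{3}(2\alpha_1 + \alpha_2), \qquad \omega_2 = \tfrac{1}{3}(\alpha_1 + 2\alpha_2).$$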
Next, for $\mathfrak{sl}_N$ (the root space decomposition is for the Lie algebra not the Lie group $SU(N)$). While $E_{ij}$ ($i \not= j$) are root vectors, only $E_{i,i+1}$ and $E_{i+1,i}$ are in simple root spaces. In particular, $E_i = E_{i,i+1} \in (\mathfrak{sl}_n)_{\alpha_i}$ (the $\alpha_i$ root space) and $F_i = E_{i+1,i} \in (\mathfrak{sl}_n)_{-\alpha_i}$ (the $-\alpha_i$ root space). Then $H_i = [E_i,F_i] = E_{i,i+1}E_{i+1,i} - E_{i+1,i}E_{i,i+1} = E_{i,i} - E_{i+1,i+1}$ (the simple co-roots). Your other $E_{ii}-E_{jj}$ are co-roots as well just not necessarily simple co-roots.
If $H_\lambda = \mathrm{diag}(\lambda_1,\dots,\lambda_\ell)$, then $H_\lambda=\lambda_1H_1+(\lambda_1+\lambda_2)H_2+\cdots+(\lambda_1+\cdots+\lambda_\ell)H_\ell$. For example: Consider $H_\lambda = \mathrm{diag}(\lambda_1,\lambda_2,\lambda_3)$. Keep in mind that since $H_\lambda \in \mathfrak{sl}_3$ it has trace=0, so $\lambda_3=-\lambda_1-\lambda_2$. Thus $$\begin{bmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{bmatrix} = \begin{bmatrix} \lambda_1 & 0 & 0 \\ 0 & -\lambda_1 & 0 \\ 0 & 0 & 0 \end{bmatrix} + \begin{bmatrix} 0 & 0 & 0 \\ 0 & \lambda_1+\lambda_2 & 0 \\ 0 & 0 & -\lambda_1-\lambda_2 \end{bmatrix}$$
$$= \lambda_1\begin{bmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 0 \end{bmatrix}+(\lambda_1+\lambda_2)\begin{bmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{bmatrix}$$
So in general, $\omega_i(H_\lambda) = \omega_i(\lambda_1H_1+(\lambda_1+\lambda_2)H_2+\cdots+(\lambda_1+\cdots+\lambda_\ell)H_\ell) = \lambda_1+\cdots+\lambda_i$ since $\omega_i(H_i)=1$ and $\omega_i(H_j)=0$ for $i \not= j$.
The $N-1$ fundamental representations of $SU(N)$ are the highest weight representations with highest weights $\omega_1,\dots,\omega_{\ell}$. These are often denoted $V(\omega_1),\dots,V(\omega_\ell)$. All other (finite dimensional) irreducible representations appear as subrepresentations of tensor products of these representations.
Edit: I will try to add a brief account highest weight modules. Here goes...
Let $\mathfrak{g}$ be a finite dimensional semi-simple Lie algebra. Then every finite dimensional $\mathfrak{g}$-module (i.e. representation) is completely reducible (can be written as a finite direct sum of irreducible modules). Then it can be shown that each irreducible module is a highest weight module. So in the end, if we know everything about highest weight modules, then we'll essentially know everything about all modules.
What is a highest weight module? Let $\mathfrak{g}$ be a finite dimensional simple Lie algebra with Cartan subalgebra $\mathfrak{h}$ (Cartan subalgebra = maximal toral subalgebra = your "$t$"). In addition fix a set of simple roots $\{ \alpha_1,\dots,\alpha_\ell\}$ and fundamental weights $\{ \omega_1,\dots,\omega_\ell \}$.
Let $V$ be a $\mathfrak{g}$-module. Then $V$ is a weight module if $V = \oplus_{\mu \in \mathfrak{h}^*} V_\mu$ (the direct sum of weight spaces) where $V_\mu = \{ v\in V \;|\; h \cdot v = \mu(h)v \}$. If $V_\mu \not= \{0\}$, then $V_\mu$ is a weight space and $\mu \in \mathfrak{h}^*$ is called a weight. [Example: If you consider $\mathfrak{g}$ itself as a $\mathfrak{g}$-module, then the weights of the adjoint action are the roots along with the zero functional.] So if $v \not=0$ is in the $\mu$ weight space and $h \in \mathfrak{h}$, then $v$ is an eigenvector for the action of $h$ with eigenvalue $\mu(h)$. Thus $V_\mu$ is the simultaneous eigenspace for the operators given by the action of each $h \in \mathfrak{h}$ with eigenvalues $\mu(h)$.
It can be shown that a finite dimensional irreducible $\mathfrak{g}$-module is a weight module and there exists a unique weight $\lambda \in \mathfrak{h}^*$ such that $\lambda+\alpha_i$ is not a weight for all $i=1,\dots,\ell$. So thinking of $\alpha_i$ as pointing "up" in some sense, $\lambda$ is as high as you can go. It's the highest weight. Next, every weight in the module is of the form $\lambda-(c_1\alpha_1+\cdots+c_\ell\alpha_\ell)$ for some non-negative integers $c_i$ (all weights lie below the highest weight). Also, the structure of an irreducible module is completely determined by its highest weight. So if $V$ and $W$ are irreducible highest weight modules, then $V \cong W$ if and only if $V$ and $W$ have the same highest weight. Moreover, it turns out you can construct (a unique) irreducible highest weight module for any $\lambda \in \mathfrak{h}^*$. We usually call this module something like $V(\lambda)$. However, it turns out that although $V(\lambda)$ is an irreducible highest weight module, it is finite dimensional if and only if $\lambda=c_1\omega_1+\cdots+c_\ell\omega_\ell$ where each $c_i$ is a non-negative integer.
Fix a set of non-negative integers $c_i$. Then suppose we tensor product the highest weight module $V(\omega_i)$ (a fundamental module) $c_i$-times with itself and then tensor all of these together. Then we will have a (reducible) module which contains a copy of the irreducible highest weight module $V(c_1\omega_1+\cdots+c_\ell\omega_\ell)$. Thus the fundamental modules give us a way of constructing all finite dimensional irreducible highest weight modules [although the tensor product will include copies of other irreducible modules in general so we'll have to filter out this unwanted extra stuff.]
Your final question. Given a highest weight for $SU(N)$ (equivalently $\mathfrak{sl}_N$), how does one write down matrices for the action associated with the corresponding highest weight module? That is a non-trivial, quite complicated computation. Even the answer for $SU(3)$ is complicated. So I'm going to pass on that one. :)
-
Thanks a lot for this almost text-book kind of answer! It was awesome. I have corrected some of my typos that you pointed out. I have some further clarifications to ask about what you said - (1) Shouldn't your definition of the Cartan matrix be $\alpha_j (H_i) = a_{ji}$ to be consistent with your convention of saying $\alpha_i = a_{ij}\omega_j$ ? (2) About the simple roots of $SU(N)$, I guess you are choosing them to be the set $\{ E_{i i+1} \}_{i=1} ^{i=n-1}$.. right? – Anirbit Jan 21 '12 at 20:59
(3) I am not very clear about the idea of this "highest weight module". If you could kindly add in a few more lines of explanation like I did not understand what you meant in that line, "..say $V(\lambda)$ which is the direct sum of weight spaces. These weights are of the form $c_1\omega_1+\cdots+c_\ell\omega_\ell$ where $c_i \in \mathbb{Z}$ (integral linear combinations of fundamental weights)..." – Anirbit Jan 21 '12 at 21:04
(4) About the issue of "fundamental representations' of $SU(N)$ I guess I did not make my question very clear. Can you kindly elaborate on as to how does picking a the highest weight say some $\omega_i$ (for $i \in \{ 1,...,n-1\}$) specifies the representation. Like if I pick some $g \in SU(N)$ then how do I write down the matrix for $g$ knowing what the highest weight is - say some $\omega_i$. I know how to do this for $SU(3)$ bcause that can be written in the language of quantum angular momentum but otherwise I don't see anything. – Anirbit Jan 21 '12 at 21:05
@Anirbit Yes. About (1), you are correct. I used one convention one place and another further down :( As for (2), yes and no. $E_{ii+1}$ are elements of simple root spaces. But the simple roots are linear functionals (elements of $t^*$ instead of $\mathfrak{g}$). The $E_{ii+1}$ are root vectors. Root vectors are elements of the algebra whose weights (think of eigenvalues) are roots. So in some sense $E_{i,i+1}$ are basically the eigenvectors to go with the eigenvalues $\alpha_i$. – Bill Cook Jan 22 '12 at 20:13
@Anirbit I don't have time right now, but I'll try to edit the post later to address (3) and (4)...although you'll need a textbook for a real full answer :) – Bill Cook Jan 22 '12 at 20:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9698145985603333, "perplexity": 178.51381942463496}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644065464.19/warc/CC-MAIN-20150827025425-00255-ip-10-171-96-226.ec2.internal.warc.gz"} |
http://interactivepython.org/courselib/static/pythonds/SortSearch/searching.html | Searching
We will now turn our attention to some of the most common problems that arise in computing, those of searching and sorting. In this section we will study searching. We will return to sorting later in the chapter. Searching is the algorithmic process of finding a particular item in a collection of items. A search typically answers either True or False as to whether the item is present. On occasion it may be modified to return where the item is found. For our purposes here, we will simply concern ourselves with the question of membership.
In Python, there is a very easy way to ask whether an item is in a list of items. We use the in operator.
>>> 15 in [3,5,2,4,1]
False
>>> 3 in [3,5,2,4,1]
True
>>>
Even though this is easy to write, an underlying process must be carried out to answer the question. It turns out that there are many different ways to search for the item. What we are interested in here is how these algorithms work and how they compare to one another.
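For instance, the most direct of these approaches, which the next section develops in detail, simply examines the items one at a time (a minimal sketch):

def sequential_search(items, target):
    # Return True if target occurs in items, checking one item at a time
    for item in items:
        if item == target:
            return True
    return False

print(sequential_search([3, 5, 2, 4, 1], 15))   # False
print(sequential_search([3, 5, 2, 4, 1], 3))    # True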
https://arxiv.org/abs/1707.02986 | astro-ph.HE
# Title: A dependence of the tidal disruption event rate on global stellar surface mass density and stellar velocity dispersion
Abstract: The rate of tidal disruption events (TDEs), $R_\text{TDE}$, is predicted to depend on stellar conditions near the super-massive black hole (SMBH), which are on difficult-to-measure sub-parsec scales. We test whether $R_\text{TDE}$ depends on kpc-scale global galaxy properties, which are observable. We concentrate on stellar surface mass density, $\Sigma_{M_\star}$, and velocity dispersion, $\sigma_v$, which correlate with the stellar density and velocity dispersion of the stars around the SMBH. We consider 35 TDE candidates, with and without known X-ray emission. The hosts range from star-forming to quiescent to quiescent with strong Balmer absorption lines. The last (often with post-starburst spectra) are overrepresented in our sample by a factor of $35^{+21}_{-17}$ or $18^{+8}_{-7}$, depending on the strength of the H$\delta$ absorption line. For a subsample of hosts with homogeneous measurements, $\Sigma_{M_\star}=10^9$-$10^{10}~{\rm M_\odot / kpc^2}$, higher on average than for a volume-weighted control sample of Sloan Digital Sky Survey galaxies with similar redshifts and stellar masses. This is because: (1) most of the TDE hosts here are quiescent galaxies, which tend to have higher $\Sigma_{M_\star}$ than the star-forming galaxies that dominate the control, and (2) the star-forming hosts have higher average $\Sigma_{M_\star}$ than the star-forming control. There is also a weak suggestion that TDE hosts have lower $\sigma_v$ than for the quiescent control. Assuming that $R_{\rm TDE}\propto \Sigma_{M_\star}^\alpha \times \sigma_v^\beta$, and applying a statistical model to the TDE hosts and control sample, we estimate $\hat{\alpha}=0.9 \pm 0.2$ and $\hat{\beta}=-1.0 \pm 0.6$. This is broadly consistent with $R_\text{TDE}$ being tied to the dynamical relaxation of stars surrounding the SMBH.
Comments: Accepted for publication in ApJ
Subjects: High Energy Astrophysical Phenomena (astro-ph.HE); Astrophysics of Galaxies (astro-ph.GA)
DOI: 10.3847/1538-4357/aaa3fd
Cite as: arXiv:1707.02986 [astro-ph.HE] (or arXiv:1707.02986v3 [astro-ph.HE] for this version)
## Submission history
From: Or Graur [view email]
[v1] Mon, 10 Jul 2017 18:00:04 GMT (5296kb,D)
[v2] Fri, 4 Aug 2017 21:03:14 GMT (6430kb,D)
[v3] Thu, 21 Dec 2017 14:10:36 GMT (6999kb,D)
https://www.journaltocs.ac.uk/index.php?action=browse&subAction=subjects&publisherID=47&journalID=308&pageb=1&userQueryID=&sort=&local_page=&sorType=&sorCol= | for Journals by Title or ISSN for Articles by Keywords help
Subjects -> COMPUTER SCIENCE (Total: 1985 journals) - ANIMATION AND SIMULATION (29 journals) - ARTIFICIAL INTELLIGENCE (98 journals) - AUTOMATION AND ROBOTICS (98 journals) - CLOUD COMPUTING AND NETWORKS (63 journals) - COMPUTER ARCHITECTURE (9 journals) - COMPUTER ENGINEERING (9 journals) - COMPUTER GAMES (16 journals) - COMPUTER PROGRAMMING (23 journals) - COMPUTER SCIENCE (1153 journals) - COMPUTER SECURITY (45 journals) - DATA BASE MANAGEMENT (13 journals) - DATA MINING (32 journals) - E-BUSINESS (22 journals) - E-LEARNING (27 journals) - ELECTRONIC DATA PROCESSING (21 journals) - IMAGE AND VIDEO PROCESSING (40 journals) - INFORMATION SYSTEMS (104 journals) - INTERNET (92 journals) - SOCIAL WEB (50 journals) - SOFTWARE (33 journals) - THEORY OF COMPUTING (8 journals) COMPUTER SCIENCE (1153 journals) 1 2 3 4 5 6 | Last
Advances in Computational Mathematics [SJR: 1.255] [H-I: 44] [15 followers] Follow Hybrid journal (It can contain Open Access articles) ISSN (Print) 1572-9044 - ISSN (Online) 1019-7168 Published by Springer-Verlag [2355 journals]
• The Galerkin boundary element method for transient Stokes flow
• Authors: Young Ok Choi; Johannes Tausch
Pages: 473 - 493
Abstract: Since the fundamental solution for transient Stokes flow in three dimensions is complicated it is difficult to implement discretization methods for boundary integral formulations. We derive a representation of the Stokeslet and stresslet in terms of incomplete gamma functions and investigate the nature of the singularity of the single- and double layer potentials. Further, we give analytical formulas for the time integration and develop Galerkin schemes with tensor product piecewise polynomial ansatz functions. Numerical results demonstrate optimal convergence rates.
PubDate: 2017-06-01
DOI: 10.1007/s10444-016-9493-9
Issue No: Vol. 43, No. 3 (2017)
• Efficient algorithms for CUR and interpolative matrix decompositions
• Authors: Sergey Voronin; Per-Gunnar Martinsson
Pages: 495 - 516
Abstract: The manuscript describes efficient algorithms for the computation of the CUR and ID decompositions. The methods used are based on simple modifications to the classical truncated pivoted QR decomposition, which means that highly optimized library codes can be utilized for implementation. For certain applications, further acceleration can be attained by incorporating techniques based on randomized projections. Numerical experiments demonstrate advantageous performance compared to existing techniques for computing CUR factorizations.
PubDate: 2017-06-01
DOI: 10.1007/s10444-016-9494-8
Issue No: Vol. 43, No. 3 (2017)
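As an illustration of the classical building block the abstract refers to (a sketch of mine using SciPy's column-pivoted QR, not the authors' code), a rank-k interpolative decomposition A ≈ A[:, cols] @ P can be computed as follows:

import numpy as np
from scipy.linalg import qr, solve_triangular

def interp_decomp(A, k):
    # Column-pivoted QR: A[:, piv] = Q @ R
    Q, R, piv = qr(A, mode='economic', pivoting=True)
    # Express the remaining columns in terms of the first k pivot columns
    T = solve_triangular(R[:k, :k], R[:k, k:], lower=False)
    P = np.empty((k, A.shape[1]))
    P[:, piv] = np.hstack([np.eye(k), T])
    return piv[:k], P

A = np.random.randn(100, 50) @ np.random.randn(50, 80)   # a rank-50 matrix
cols, P = interp_decomp(A, 50)
print(np.linalg.norm(A - A[:, cols] @ P) / np.linalg.norm(A))  # ~1e-14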
• Finite element approximation of a free boundary plasma problem
• Authors: Jintao Cui; Thirupathi Gudi
Pages: 517 - 535
Abstract: In this article, we study a finite element approximation for a model free boundary plasma problem. Using a mixed approach (which resembles an optimal control problem with control constraints), we formulate a weak formulation and study the existence and uniqueness of a solution to the continuous model problem. Using the same setting, we formulate and analyze the discrete problem. We derive optimal order energy norm a priori error estimates proving the convergence of the method. Further, we derive a reliable and efficient a posteriori error estimator for the adaptive mesh refinement algorithm. Finally, we illustrate the theoretical results by some numerical examples.
PubDate: 2017-06-01
DOI: 10.1007/s10444-016-9495-7
Issue No: Vol. 43, No. 3 (2017)
• Complexity of oscillatory integrals on the real line
• Authors: Erich Novak; Mario Ullrich; Henryk Woźniakowski; Shun Zhang
Pages: 537 - 553
Abstract: We analyze univariate oscillatory integrals defined on the real line for functions from the standard Sobolev space $$H^{s}(\mathbb{R})$$ and from the space $$C^{s}(\mathbb{R})$$ with an arbitrary integer s ≥ 1. We find tight upper and lower bounds for the worst case error of optimal algorithms that use n function values. More specifically, we study integrals of the form $$I_{k}^{\varrho} (f) = {\int}_{\mathbb{R}} f(x) \,\mathrm{e}^{-i\,kx} \varrho(x) \, \mathrm{d} x\ \ \ \text{for}\ \ f\in H^{s}(\mathbb{R})\ \ \text{or}\ \ f\in C^{s}(\mathbb{R})$$ with $$k\in {\mathbb{R}}$$ and a smooth density function $$\varrho$$ such as $$\varrho(x) = \frac{1}{\sqrt{2 \pi}} \exp(-x^{2}/2)$$. The optimal error bounds are $${\Theta}((n+\max(1, |k|))^{-s})$$ with the factors in the Θ notation dependent only on s and ϱ.
PubDate: 2017-06-01
DOI: 10.1007/s10444-016-9496-6
Issue No: Vol. 43, No. 3 (2017)
• System identification in dynamical sampling
• Authors: Sui Tang
Pages: 555 - 580
Abstract: We consider the problem of spatiotemporal sampling in a discrete infinite dimensional spatially invariant evolutionary process $$x^{(n)} = A^{n}x$$ to recover an unknown convolution operator $A$ given by a filter $$a \in \ell^{1}(\mathbb{Z})$$ and an unknown initial state $x$ modeled as a vector in $$\ell^{2}(\mathbb{Z})$$. Traditionally, under appropriate hypotheses, any $x$ can be recovered from its samples on $$\mathbb{Z}$$ and $A$ can be recovered by the classical techniques of deconvolution. In this paper, we will exploit the spatiotemporal correlation and propose a new sampling scheme to recover $A$ and $x$ that allows us to sample the evolving states $$x, Ax, \cdots, A^{N-1}x$$ on a sub-lattice of $$\mathbb{Z}$$, and thus achieve a spatiotemporal trade off. The spatiotemporal trade off is motivated by several industrial applications (Lu and Vetterli, 2249–2252, 2009). Specifically, we show that $$\{x(m\mathbb{Z}), Ax(m\mathbb{Z}), \cdots, A^{N-1}x(m\mathbb{Z}): N \geq 2m\}$$ contains enough information to recover a typical "low pass filter" $a$ and $x$ almost surely, thus generalizing the idea of the finite dimensional case in Aldroubi and Krishtal, arXiv:1412.1538 (2014). In particular, we provide an algorithm based on a generalized Prony method for the case when both $a$ and $x$ are of finite impulse response and an upper bound of their support is known. We also perform a perturbation analysis based on the spectral properties of the operator $A$ and initial state $x$, and verify the results by several numerical experiments. Finally, we provide several other numerical techniques to stabilize the proposed method, with some examples to demonstrate the improvement.
PubDate: 2017-06-01
DOI: 10.1007/s10444-016-9497-5
Issue No: Vol. 43, No. 3 (2017)
• Zooming from global to local: a multiscale RBF approach
• Authors: Q. T. Le Gia; I. H. Sloan; H. Wendland
Pages: 581 - 606
Abstract: Because physical phenomena on Earth’s surface occur on many different length scales, it makes sense when seeking an efficient approximation to start with a crude global approximation, and then make a sequence of corrections on finer and finer scales. It also makes sense eventually to seek fine scale features locally, rather than globally. In the present work, we start with a global multiscale radial basis function (RBF) approximation, based on a sequence of point sets with decreasing mesh norm, and a sequence of (spherical) radial basis functions with proportionally decreasing scale centered at the points. We then prove that we can “zoom in” on a region of particular interest, by carrying out further stages of multiscale refinement on a local region. The proof combines multiscale techniques for the sphere from Le Gia, Sloan and Wendland, SIAM J. Numer. Anal. 48 (2010) and Applied Comp. Harm. Anal. 32 (2012), with those for a bounded region in ℝ^d from Wendland, Numer. Math. 116 (2010). The zooming in process can be continued indefinitely, since the condition numbers of matrices at the different scales remain bounded. A numerical example illustrates the process.
PubDate: 2017-06-01
DOI: 10.1007/s10444-016-9498-4
Issue No: Vol. 43, No. 3 (2017)
• On a new property of n-poised and GC_n sets
• Authors: Vahagn Bayramyan; Hakop Hakopian
Pages: 607 - 626
Abstract: In this paper we consider n-poised planar node sets, as well as more special ones, called GC_n sets. For the latter sets each n-fundamental polynomial is a product of n linear factors, as always holds in the univariate case. A line ℓ is called a k-node line for a node set $$\mathcal X$$ if it passes through exactly k nodes. An (n + 1)-node line is called a maximal line. In 1982 M. Gasca and J. I. Maeztu conjectured that every GC_n set necessarily possesses a maximal line. To date, the conjecture has been confirmed for n ≤ 5. It is well-known that any maximal line M of $$\mathcal X$$ is used by each node in $$\mathcal X\setminus M$$, meaning that it is a factor of the fundamental polynomial. In this paper we prove, in particular, that if the Gasca-Maeztu conjecture is true then any n-node line of a GC_n set $$\mathcal {X}$$ is used either by exactly $$\binom {n}{2}$$ nodes or by exactly $$\binom {n-1}{2}$$ nodes. We prove also similar statements concerning n-node or (n − 1)-node lines in more general n-poised sets. This is a new phenomenon in n-poised and GC_n sets. At the end we present a conjecture concerning any k-node line.
PubDate: 2017-06-01
DOI: 10.1007/s10444-016-9499-3
Issue No: Vol. 43, No. 3 (2017)
• Energetic BEM-FEM coupling for the numerical solution of the damped wave
equation
• Authors: A. Aimi; M. Diligenti; C. Guardasoni
Pages: 627 - 651
Abstract: Time-dependent problems modeled by hyperbolic partial differential equations can be reformulated in terms of boundary integral equations and solved via the boundary element method. In this context, the analysis of damping phenomena that occur in many physics and engineering problems is a novelty. Starting from a recently developed energetic space-time weak formulation for the coupling of boundary integral equations and hyperbolic partial differential equations related to wave propagation problems, we consider here an extension for the damped wave equation in layered media. A coupling algorithm is presented, which allows a flexible use of finite element method and boundary element method as local discretization techniques. Stability and convergence, proved by energy arguments, are crucial in guaranteeing accurate solutions for simulations on large time intervals. Several numerical benchmarks, whose numerical results confirm theoretical ones, are illustrated and discussed.
PubDate: 2017-06-01
DOI: 10.1007/s10444-016-9500-1
Issue No: Vol. 43, No. 3 (2017)
• Dynamics of two-cell systems with discrete delays
Pages: 653 - 676
Abstract: We consider a system of delay differential equations (DDEs) representing models containing two cells with time-delayed connections. We investigate global and local stability and the bifurcations of the trivial solution under some generic conditions on the Taylor coefficients of the DDE. Regarding eigenvalues of the connection matrix as bifurcation parameters, we obtain codimension one bifurcations (including pitchfork, transcritical and Hopf bifurcations) and the Takens-Bogdanov bifurcation as a codimension two bifurcation. For application purposes, this is important since one can now identify the possible asymptotic dynamics of the DDE near the bifurcation points by computing quantities which depend explicitly on the Taylor coefficients of the original DDE. Finally, we show that the analytical results agree with numerical simulations.
PubDate: 2017-06-01
DOI: 10.1007/s10444-016-9501-0
Issue No: Vol. 43, No. 3 (2017)
• High-order positivity-preserving hybrid finite-volume-finite-difference
methods for chemotaxis systems
• Authors: Alina Chertock; Yekaterina Epshteyn; Hengrui Hu; Alexander Kurganov
Abstract: Chemotaxis refers to mechanisms by which cellular motion occurs in response to an external stimulus, usually a chemical one. The chemotaxis phenomenon plays an important role in bacteria/cell aggregation and pattern formation mechanisms, as well as in tumor growth. A common property of all chemotaxis systems is their ability to model a concentration phenomenon that mathematically results in rapid growth of solutions in small neighborhoods of concentration points/curves. The solutions may blow up or may exhibit a very singular, spiky behavior. There is consequently a need for accurate and computationally efficient numerical methods for the chemotaxis models. In this work, we develop and study novel high-order hybrid finite-volume-finite-difference schemes for the Patlak-Keller-Segel chemotaxis system and related models. We demonstrate the high accuracy, stability and computational efficiency of the proposed schemes in a number of numerical examples.
PubDate: 2017-07-21
DOI: 10.1007/s10444-017-9545-9
• On the dimension of trivariate spline spaces with the highest order
smoothness on 3D T-meshes
• Authors: Chao Zeng; Jiansong Deng
Abstract: T-meshes are a type of rectangular partitions of planar domains which allow hanging vertices. Because of the special structure of T-meshes, adaptive local refinement is possible for splines defined on this type of meshes, which provides a solution for the defect of NURBS. In this paper, we generalize the definitions to the three-dimensional (3D) case and discuss a fundamental problem – the dimension of trivariate spline spaces on 3D T-meshes. We focus on a special case where splines are C^{d−1} continuous for degree d. The smoothing cofactor method for trivariate splines is explored for this situation. We obtain a general dimension formula and present lower and upper bounds for the dimension. Finally, we introduce a type of 3D T-mesh for which we can give an explicit dimension formula.
PubDate: 2017-07-12
DOI: 10.1007/s10444-017-9551-y
• Uniform and high-order discretization schemes for Sturm–Liouville
problems via Fer streamers
• Authors: Alberto Gil C. P. Ramos
Abstract: The current paper concerns the uniform and high-order discretization of the novel approach to the computation of Sturm–Liouville problems via Fer streamers, put forth in Ramos and Iserles (Numer. Math. 131(3), 541–565, 2015). In particular, the discretization schemes are shown to enjoy large step sizes uniform over the entire eigenvalue range and tight error estimates uniform for every eigenvalue. They are made explicit for global orders 4, 7, and 10. In addition, the present paper provides total error estimates that quantify the interplay between the truncation and the discretization in the approach by Fer streamers.
PubDate: 2017-07-04
DOI: 10.1007/s10444-017-9547-7
• Convergence and quasi-optimality of an adaptive finite element method for
optimal control problems with integral control constraint
• Authors: Haitao Leng; Yanping Chen
Abstract: In this paper we study the convergence of an adaptive finite element method for optimal control problems with integral control constraint. For discretization, we use piecewise constant discretization for the control and continuous piecewise linear discretization for the state and the co-state. The contraction between two consecutive loops is proved. Additionally, we find that the adaptive finite element method has the optimal convergence rate. In the end, we give some examples to support our theoretical analysis.
PubDate: 2017-07-03
DOI: 10.1007/s10444-017-9546-8
• Computationally efficient modular nonlinear filter stabilization for high
Reynolds number flows
• Authors: Aziz Takhirov; Alexander Lozovskiy
Abstract: The nonlinear filter based stabilization proposed in Layton et al. (J. Math. Fluid Mech. 14(2), 325–354, 2012) makes it possible to incorporate an eddy viscosity model into an existing laminar flow code in a modular way. However, the proposed nonlinear filtering step requires the assembly of the associated matrix at each time step and solving a linear system with an indefinite matrix. We propose a computationally efficient version of the filtering step that requires the assembly only once, and the solution of two symmetric, positive definite systems at each time step. We also test a new indicator function based on the entropy viscosity model of Guermond (Int. J. Numer. Meth. Fluids. 57(9), 1153–1170, 2008); Guermond et al. (J. Sci. Comput. 49(1), 35–50, 2011).
PubDate: 2017-06-21
DOI: 10.1007/s10444-017-9544-x
• Convergent expansions of the Bessel functions in terms of elementary
functions
• Authors: José L. López
Abstract: We consider the Bessel functions J_ν(z) and Y_ν(z) for ℜν > −1/2 and ℜz ≥ 0. We derive a convergent expansion of J_ν(z) in terms of the derivatives of $$(\sin z)/z$$, and a convergent expansion of Y_ν(z) in terms of derivatives of $$(1-\cos z)/z$$, derivatives of (1 − e^{−z})/z and Γ(2ν, z). Both expansions hold uniformly in z in any fixed horizontal strip and are accompanied by error bounds. The accuracy of the approximations is illustrated with some numerical experiments.
PubDate: 2017-06-19
DOI: 10.1007/s10444-017-9543-y
• A plane wave method combined with local spectral elements for
nonhomogeneous Helmholtz equation and time-harmonic Maxwell equations
• Authors: Qiya Hu; Long Yuan
Abstract: In this paper we are concerned with plane wave discretizations of nonhomogeneous Helmholtz equation and time-harmonic Maxwell equations. To this end, we design a plane wave method combined with local spectral elements for the discretization of such nonhomogeneous equations. This method contains two steps: we first solve a series of nonhomogeneous local problems on auxiliary smooth subdomains by the spectral element method, and then apply the plane wave method to the discretization of the resulting (locally homogeneous) residue problem on the global solution domain. We derive error estimates of the approximate solutions generated by this method. The numerical results show that the resulting approximate solutions possess high accuracy.
PubDate: 2017-06-09
DOI: 10.1007/s10444-017-9542-z
• Bernstein-Bézier techniques for divergence of polynomial spline
vector fields in ℝ n
• Authors: Tatyana Sorokina
Abstract: Bernstein-Bézier techniques for analyzing polynomial spline fields in n variables and their divergence are developed. Dimension and a minimal determining set for continuous piecewise divergence-free spline fields on the Alfeld split of a simplex in ℝ^n are obtained using the new techniques, as well as the dimension formula for continuous piecewise divergence-free splines on the Alfeld refinement of an arbitrary simplicial partition in ℝ^n.
PubDate: 2017-05-30
DOI: 10.1007/s10444-017-9541-0
• Analysis of the grad-div stabilization for the time-dependent
Navier–Stokes equations with inf-sup stable finite elements
• Authors: Javier de Frutos; Bosco García-Archilla; Volker John; Julia Novo
Abstract: This paper studies inf-sup stable finite element discretizations of the evolutionary Navier–Stokes equations with a grad-div type stabilization. The analysis covers both the case in which the solution is assumed to be smooth and consequently has to satisfy nonlocal compatibility conditions as well as the practically relevant situation in which the nonlocal compatibility conditions are not satisfied. The constants in the error bounds obtained do not depend on negative powers of the viscosity. Taking into account the loss of regularity suffered by the solution of the Navier–Stokes equations at the initial time in the absence of nonlocal compatibility conditions of the data, error bounds of order $$\mathcal O(h^{2})$$ in space are proved. The analysis is optimal for quadratic/linear inf-sup stable pairs of finite elements. Both the continuous-in-time case and the fully discrete scheme with the backward Euler method as time integrator are analyzed.
PubDate: 2017-05-25
DOI: 10.1007/s10444-017-9540-1
• Hermite subdivision on manifolds via parallel transport
• Authors: Caroline Moosmüller
Abstract: We propose a new adaption of linear Hermite subdivision schemes to the manifold setting. Our construction is intrinsic, as it is based solely on geodesics and on the parallel transport operator of the manifold. The resulting nonlinear Hermite subdivision schemes are analyzed with respect to convergence and C^1 smoothness. Similar to previous work on manifold-valued subdivision, this analysis is carried out by proving that a so-called proximity condition is fulfilled. This condition allows one to conclude convergence and smoothness properties of the manifold-valued scheme from its linear counterpart, provided that the input data are dense enough. Therefore the main part of this paper is concerned with showing that our nonlinear Hermite scheme is “close enough”, i.e., in proximity, to the linear scheme it is derived from.
PubDate: 2017-05-16
DOI: 10.1007/s10444-017-9516-1
• A numerical method for solving three-dimensional elliptic interface
problems with triple junction points
• Authors: Liqun Wang; Songming Hou; Liwei Shi
Abstract: Elliptic interface problems with multi-domains have wide applications in engineering and science. However, it is challenging for most existing methods to solve three-dimensional elliptic interface problems with multi-domains due to local geometric complexity, especially for problems with matrix coefficient and sharp-edged interface. There is some recent work in two dimensions for multi-domains and in three dimensions for two domains. However, the extension to three-dimensional multi-domain elliptic interface problems is non-trivial. In this paper, we present an efficient non-traditional finite element method with non-body-fitting grids for three-dimensional elliptic interface problems with multi-domains. Numerical experiments show that this method achieves close to second-order accuracy in the L^∞ norm for piecewise smooth solutions.
PubDate: 2017-05-12
DOI: 10.1007/s10444-017-9539-7
http://msemac.redwoods.edu/~darnold/math50c/matlab/arclength/index.xhtml | ## Arc Length and Functions in Matlab
Consider the parametric equations
$\begin{eqnarray} x&=&2 \cos t\\ y&=&3 \sin t \end{eqnarray}$
on the interval [0, 2π]. To calculate the length of this path, one employs the arc length formula.
$L=\int_0^{2\pi}\sqrt{(dx/dt)^2+(dy/dt)^2}\,dt$
However,
$(dx/dt)^2=(-2\sin t)^2=4\sin^2 t$
and
$(dy/dt)^2=(3\cos t)^2=9\cos^2 t$.
Hence,
$\begin{eqnarray} L&=&\int_0^{2\pi}\sqrt{4\sin^2 t+9\cos^2 t}\,dt\\ L&=&\int_0^{2\pi}\sqrt{4(1-\cos^2 t)+9\cos^2 t}\,dt\\ L&=&\int_0^{2\pi}\sqrt{4+5\cos^2 t}\,dt \end{eqnarray}$
Because this last integral has no antiderivative expressible in closed form in terms of elementary functions, we will need to apply some numerical routine (such as Simpson's rule) to obtain a decimal approximation for the integral. We are going to employ Matlab's quad command for this purpose, but first we must digress and learn how to write functions in Matlab.
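As an aside, here is a minimal sketch of what a hand-rolled composite Simpson's rule might look like for this integrand. This is purely illustrative and is not part of the original activity; the quad command used below is far more robust. It should reproduce the value of approximately 15.8654 that quad delivers later in this activity.
n = 100;                                        % number of subintervals (must be even)
a = 0; b = 2*pi; h = (b-a)/n;
S = sqrt(4+5*cos(a)^2) + sqrt(4+5*cos(b)^2);    % endpoint contributions
for j = 1:n-1
    w = 2 + 2*mod(j,2);                         % interior weights alternate 4, 2, 4, ...
    S = S + w*sqrt(4+5*cos(a+j*h)^2);
end
L = h*S/3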
### Directory Structure
If you work on a computer in PS116, open MyComputer and browse to your Documents folder. There you should create a Math50C folder (no spaces in filenames), then create a subfolder named Matlab in the Math50C folder. In the Matlab folder, create another subfolder named ArcLength (again, no spaces in filenames).
Once you've completed your directory structure as described in the previous paragraph, start Matlab then change the current working directory to the ArcLength folder. The easiest way to do this is to click the three-button icon directly to the right of the navigation edit box on the Matlab toolbar, then browse to the ArcLength folder. Check that your current directory points to the ArcLength folder by reading the contents of the Navigation box or executing the command pwd at the Matlab prompt.
If you are working at home, you want to start the same sort of directory structure.
• On a Mac, in your Documents folder, create a Math50C folder, then a Matlab folder inside the Math50C folder, then an ArcLength folder inside the Matlab folder.
• On a PC running Windows, in your My Documents folder, create a Math50C folder, then a Matlab folder inside the Math50C folder, then an ArcLength folder inside the Matlab folder.
Note that the directory structure described above is only a recommendation. You are perfectly free to create your own names and structure. However, don't fall into the trap of dumping all of your work into a single folder. You will regret such a decision as the number of files begins to grow through the course of the semester.
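If you prefer to work from the command line, the same structure can be created from the Matlab prompt itself. The following is just a sketch (it assumes you start in your Documents folder; adjust to taste):
mkdir('Math50C');   cd('Math50C');
mkdir('Matlab');    cd('Matlab');
mkdir('ArcLength'); cd('ArcLength');
pwd    % should now end in ...Math50C/Matlab/ArcLength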
### Function M-files
Very Important: Change the current directory to point at the ArcLength folder. Check this with the pwd command at the Matlab prompt.
Open Matlab's editor. There are several ways that you can open the editor.
1. You can select File->New->M-file from Matlab's menu.
2. You can click the New M-file icon on Matlab's toobar.
3. You can type edit at the Matlab prompt.
Of these three options, the last is our favorite. When the editor opens, enter the following code:
function y=f(t)
y=sqrt(4+5*cos(t)^2);
The presence of the keyword function determines that this Matlab file is special: it is a function and not a simple script file.
Note the first line of the file has the form:
function output-variable = function_name(input-variable)
We make several observations:
1. The keyword function dictates that this file is a function file, not a script.
2. The function name is f.
3. The input variable is t.
4. The output variable is y.
Because the function name is f, the rules of Matlab require that we save this file as f.m. That is, we must take the function name, append .m, then save. If you click the Save icon on the toolbar of the editor (or select File->Save), you can save the file in the usual manner. By default, Matlab suggests the file should be saved in the current directory (ArcLength) with the name f.m, but it is your responsibility to make sure that you are saving the file as f.m in the ArcLength directory.
Return to the command window. Again, make sure you are in the ArcLength directory with the command pwd. If not, change directories.
Because
$f(t)=\sqrt{4+5\cos^2 t}$,
then
$f(0)=\sqrt{4+5\cos^2(0)}=\sqrt{4+5(1)^2}=\sqrt{9}=3$.
We will now test our function.
>> f(0)
ans =
3
#### Trouble-Shooting
It's possible that you might arrive at a different result. There could be a number of reasons for this:
• Check your code in the editor. Did you type in the function correctly?
• Did you save the file as f.m in the directory ArcLength?
• Is your current directory ArcLength? Check this with the pwd command.
If all is still not well, there are two Matlab commands that can help.
• The command which f will give the path of the function f that Matlab will execute. If this path points to an f.m that is in a directory other than ArcLength, then check that you saved the file f.m to the ArcLength folder and your current directory is ArcLength (use the pwd command).
• The command type f will type the contents of the file f.m to the Matlab command window. If it types a function that is completely different from yours, then you know that Matlab is finding a file f.m on its path in another location. Again, check that you saved the file in the ArcLength folder and your current directory points to the ArcLength folder.
#### Making Your Function Array Smart
If you've made it to this point in the activity, then you know your function returns the correct response if you enter a single value. However, just to make sure, try the following code:
>> t=0
t =
0
>> f(t)
ans =
3
This is the same result as above. If this doesn't work, return to the Trouble-Shooting section and make the appropriate correction. Before continuing, this example must work.
Now, how will our function perform on a vector of values? Create a vector of t-values:
>> t=0:pi/2:2*pi
t =
0 1.5708 3.1416 4.7124 6.2832
The ideal would be that our function would be applied to each entry of this vector. Let's see:
>> f(t)
??? Error using ==> mpower
Matrix must be square.
Error in ==> f at 2
y=sqrt(4+5*cos(t)^2);
Let's analyze the error message:
1. First, the input t is a vector.
2. Matlab's cosine function is "array smart" so cos(t) is a vector, created by taking the cosine of every entry of the vector t.
3. Thus, cos(t) is a vector, and the command cos(t)^2 tries to raise a vector to the second power. The ^ operator denotes the matrix power, which is undefined for a non-square matrix; this illegal operation spawned the error message shown above.
We need to square every entry of the vector cos(t). To do this we use "dot notation." Make the following change to the file f.m and save.
function y=f(t)
y=sqrt(4+5*cos(t).^2);
>> f(t)
ans =
3 2 3 2 3
Aha! Our function is now "array smart." It evaluated the function at each entry of the vector t. The careful reader will use pencil and paper to evaluate the function at 0, π/2, π, 3π/2, and 2π to verify this result.
It is now a simple matter to approximate the integral
$L=\int_0^{2\pi} \sqrt{4+5\cos^2 t}\,dt$.
Simply enter the following at the Matlab command prompt:
>> quad(@f,0,2*pi)
ans =
15.8654
Thus,
$L=\int_0^{2\pi} \sqrt{4+5\cos^2 t}\,dt\approx 15.8654$.
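As a quick sanity check (not part of the original activity), note that our parametric equations trace an ellipse with semi-axes 2 and 3, so the answer can be compared against Ramanujan's classical approximation for the perimeter of an ellipse, $p\approx\pi\,(3(a+b)-\sqrt{(3a+b)(a+3b)})$:
>> a = 3; b = 2;     % the semi-axes of the ellipse traced by x = 2 cos t, y = 3 sin t
>> pi*(3*(a+b) - sqrt((3*a+b)*(a+3*b)))
This evaluates to approximately 15.8654, in agreement with the value returned by quad.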
If you type help quad at the Matlab prompt, a description of Matlab's quad command results, the first paragraph of which is:
QUAD Numerically evaluate integral, adaptive Simpson quadrature.
Q = QUAD(FUN,A,B) tries to approximate the integral of scalar-valued
function FUN from A to B to within an error of 1.e-6 using recursive adaptive Simpson quadrature. The function
Y=FUN(X) should accept a vector argument X and return a vector result
Y, the integrand evaluated at each element of X.
Students of calculus will focus on the phrase "tries to approximate the integral of scalar-valued function FUN from A to B to within an error of 1.e-6 using recursive adaptive Simpson quadrature." Surely this means that the quad command is using a sophisticated adaptation of Simpson's method that provides an approximation that is within $1\times 10^{-6}$ of the correct answer.
Here are a few more observations regarding the help file's description of use:
• FUN is a function handle. In our call, quad(@f,0,2*pi), the "at" symbol @ is used to create a function handle "on-the-fly". We'll have more to say about function handles in future activities. For now, simply prefix the @ symbol to the name of your function.
• In the description Q = QUAD(FUN,A,B), A and B are the lower and upper bounds of the integral. Thus, we passed the lower and upper bounds of our integral in the command quad(@f,0,2*pi).
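The help text also documents an optional fourth argument, Q = QUAD(FUN,A,B,TOL), which overrides the default tolerance of 1.e-6. For example, we could request a tighter tolerance as follows (shown as a sketch; try it yourself and compare the digits with the answer above):
>> quad(@f,0,2*pi,1e-10)   % request a tighter error tolerance than the default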
### Some Comments on Writing Functions
We share some final thoughts on writing functions. First, you can use whatever name you want for your function. Clearly, you won't always want to use f as your function name. For one thing, you can have only one file at a time named f.m in your ArcLength folder. Similarly, you can use whatever names you wish for your input and output variables. With these thoughts in mind, create a new file in the editor and enter the following code:
function stink = skunk(rattled)
stink=sqrt(4+5*cos(rattled).^2);
Some observations:
1. The name of the function is skunk. Therefore, the file should be saved as skunk.m in the ArcLength directory.
2. The input variable is named rattled instead of t in this function M-file.
3. The output variable is named stink instead of y in this function M-file.
Note again that we made the function array smart. We can test this with
>> t=0:pi/2:2*pi
t =
0 1.5708 3.1416 4.7124 6.2832
and
>> skunk(t)
ans =
3 2 3 2 3
This is identical to our previous result. Note that the contents of the vector t in the command workspace are passed into the input variable rattled in the function workspace, so these names do not have to match.
We can pass a function handle for the skunk function to the quad command in a similar manner, by prefixing the function name with the at symbol @.
>> quad(@skunk,0,2*pi)
ans =
15.8654
Same result!
### Exercises
Consider the following parametric representation:
$\begin{eqnarray} x&=&t^2\\ y&=&t^3, \end{eqnarray}$
defined on the interval $0 \le t \le 1$. Perform each of the following tasks.
1. Sketch the graph of the parametric function. Turn on the grid with grid on. Provide axis labels and an appropriate title, then obtain a printout. (A minimal plotting sketch appears after this exercise list.)
2. On the printout of your plot, devise a strategy for estimating the length of the curve. You might try drawing a few line segments then using either the distance formula or the Pythagorean theorem to obtain an estimate of their total length.
3. On the printout of your plot, set up the integral for determining the length of the arc.
4. Write a function M-file for the integrand and obtain a printout of the file.
5. Use the quad command to approximate the integral in part (3). Place this command and the resulting approximation on the printout of your plot and compare with the estimate in part (2).
6. Turn in the printout of your plot and the printout of your function M-file.
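For part (1), here is one possible way to produce the plot (a sketch only; the labels and title are up to you):
t = linspace(0,1);         % 100 parameter values on [0,1]
plot(t.^2, t.^3)           % parametric plot of x = t^2, y = t^3
grid on
xlabel('x'), ylabel('y')
title('x = t^2, y = t^3 for 0 \le t \le 1')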
https://blog.theleapjournal.org/2016/09/ | ## Monday, September 19, 2016
An impressive policy paper: Self trading is not synonymous with market abuse by Nidhi Aggarwal, Chirag Anand, Shefali Malhotra, Bhargavi Zaveri, 29 June 2015, and an impressive response by SEBI today: Sebi reverses stand on self-trades by Jayshree P. Upadhyay in Mint.
Finance for the poor: policy and not programs by Ajay Shah in The Business Standard, 19 September.
Stopping the clock going back in Singur by Ashok K Lahiri in The Business Standard, 15 September.
SEBI's Flawed Attempt at Setting the Frequencies Right by Kanad Bagchi in The Wire, 15 September.
Accountability of justice by K. Parasaran in The Indian Express, 14 September.
Top Russian anti-corruption official had $120M in cash in his apartment by Cory Doctorow in The Boing Boing, 11 September. This reminds us about the fallacy of searching for something like a Lok Pal who will solve all our problems. This also made me find out the Weight of a million dollars by Richard-ga in Google Answers, 05 Feb 2006, and brings a new perspective to illicit money movements in the context of Faulty tradeoffs in security, 10 May 2010.
A note for Dr Patel by Ila Patnaik in The Indian Express, 9 September.
Before amending the law by Abhinav Kumar and Sudhanshu Sarangi in The Indian Express, 9 September.
NCRB data: handle with care by K. P. Asha Mukundan in The Hindu, 9 September.
Air pollution cost India 8.5% of its GDP in 2013: study by Dipti Jain in The Mint, 9 September.
Tariffs Do More Harm Than Good at Home by Maurice Obstfeld in The iMF direct Blog, 8 September.
The need for a 'totaliser' revolution by Sanjay Kumar in The Hindu, 7 September.
China's Summer of Discontent by Elizabeth C. Economy in The CFR Blog, 6 September.
Not majority vs minority by Pratap Bhanu Mehta in The Indian Express, 6 September.
Vaccination isn't just for babies by Sujata Kelkar Shetty in The Mint, 5 September.
No harm in pre-consultation by Somasekhar Sundaresan in The Business Standard, 5 September.
The house is on fire! by Gary Saul Morson in The New Criterion, September 2016.
Myths and realities about America's infrastructure spending by Edward L. Glaeser in The City Journal, Summer 2016.
Aadhaar by Numbers by Sunil Abraham in NIPFP YouTube Channel.
Academic snobbery about top journals and top universities is under serious attack. Even without retractions top journals publish the least reliable science in The Bjoern Brembs Blog, January 2012. Why might this be happening? The top journals probably encourage research that is more conscious of fashion, does more p-hacking, with authors who do more social engineering. Also see.
How to test your decision-making instincts by Andrew Campbell and Jo Whitehead in The McKinsey Quarterly, May 2010.
## Friday, September 16, 2016
### Arriving at the correct value of the rupee
by Ajay Shah.
A recent front page story in the Indian Express came as a surprise examination for many economists in India. When currency policy is proposed, four ideas are useful:
1. Nobody knows what is the correct exchange rate. Asking a government official the correct price of the rupee is as pointless as asking him the correct price of steel or the correct level of Nifty.
2. We were once in a complicated world where RBI openly said that it had no framework. RBI governors heard pleas from importers and exporters, played favourites, and earned political capital. That period (1934-2015) is now behind us. Now, for the first time, RBI is accountable. It has an objective: inflation. The instrument (control of the policy rate) is used up in giving us the outcome (4% inflation).
3. Chasing an exchange rate objective can lead to small problems (e.g. the exchange rate management of 2002-2007 kicked off an inflation crisis from 2006) or big problems (the rupee defence of 2013). Wisdom in public policy involves avoiding such adventurism.
4. While an inflation targeting central bank should not pursue exchange rate policy, the exchange rate is an important input for an inflation targeting central bank. Changes in the exchange rate feed into domestic inflation through the price of tradeables. Thus, changes in the exchange rate are a useful input for forecasting inflation.
The essence of good monetary policy is forecasting inflation [example]. RBI should consume the exchange rate, made by the market, as an input into its monetary policy process.
## Thursday, September 15, 2016
### Fiscal consequences of shifting an inflation target from 2% to 4%
by Ajay Shah.
Most advanced economies have a nominal anchor for monetary policy in the form of an inflation target at 2%. This has presented difficulties when the policy rate hits 0%. This calls for using a new and more unpredictable tool -- quantitative easing -- or finding ways to force the short rate below zero. Both are difficult. Some people are proposing that the inflation target should be raised to 4%. This possibility is being posed as a choice between two unpleasant things. On one hand, the smooth working of the economy will be impeded by higher inflation, but on the other hand we have to deal with the zero interest rate lower bound. Ben Bernanke's recent blog article is an example of this debate. In addition to these arguments, there is a fiscal perspective that needs to be brought on the table.
Suppose we suddenly raise the inflation target from 2% to 4%. Suppose there is no disruption, everything works out smoothly. In the ideal scenario, the yield curve should parallel shift up by 200 bps at all maturities. This would be bad news for persons holding nominal bonds issued by the government, persons holding nominal pensions, nominal bonds issued by private corporations, etc.
A person who has a nominal pension backed by a corporation will be angry about it. But there will be nothing she can do about it. Persons who hold claims upon the government would not accept these losses lying down. They would organise themselves politically and ask for compensation for the losses they would face if such a decision were taken.
How large are the magnitudes? Suppose a country has explicit nominal government bonds and implicit nominal pension debt adding up to 100% of GDP. Suppose this has an average maturity of 10 years. To first order, the price loss on a nominal claim is its duration times the change in yields, i.e. roughly 10 × 2% = 20%. The 200 bps parallel shift of the yield curve would thus impose a loss of 20%, which works out to 20% of GDP.
There is no democracy in which monetary policy wonks are going to be able to impose a cost of 20% of GDP upon some people without a political fight. A negotiation would take place where the adversely affected persons will ask for compensation. This negotiation will be a difficult one. As an example, envision the US Treasury, the US Fed, and bondholders sitting in a room arguing about 20% of GDP. Things become more difficult in countries where the government owes nominal defined benefit pensions.
If the negotiation works out smooth and clean, the debt/GDP of the country goes up by 20 percentage points. This will make bondholders and credit rating agencies more nervous about the fiscal solvency of the country. While some countries (e.g. Australia) have good fiscal health, most advanced economies do not.
The last and most troublesome issue is that of credibility and confidence. Many advanced economies have a difficult fiscal situation, particularly when off-balance-sheet liabilities are counted. The bond market has generally been quite well disposed towards these countries; e.g. the bond market assumes the US will solve its fiscal crisis, even though nobody can see how this would be done. One key element of this confidence on the part of the bond market is: trust in the 2% inflation target.
As fiat money is anchored with a 2% inflation target, the fiscal authority cannot inflate away debt by using inflation surprises. This reassures bond holders who are then willing to lend money to the sovereign at low interest rates.
Suppose the negotiations associated with the increase in the inflation target don't work out well. Some bondholders walk away feeling they were unfairly forced to accept a loss. There will be less trust the next time around. The bond market will not trust the 4% inflation target in the way it has come to trust the 2% inflation target. It will demand a risk premium in exchange for bearing the risk that the institutional mechanism of monetary policy is not trusted for decades and decades to come.
For some advanced economies, under certain kinds of mishandled negotiations, the project of trying to raise the inflation target from 2% to 4% could lead to a sharp one-time increase in the debt/GDP ratio and a higher required interest rate for government debt. These two outcomes could significantly worsen the fiscal situation for the government. These considerations should be brought into the picture when evaluating the costs and benefits of raising the inflation target from 2% to 4%.
On related issues, this article from June 2009 has worked out reasonably well. One change that intervened was that the US moved closer to formal inflation targeting in 2012, thus removing some of the concern. I acknowledge useful discussions with Josh Felman on these issues.
## Saturday, September 10, 2016
### The great Indian GDP measurement controversy
by Rajeswari Sengupta.
In 2015, the Central Statistical Office (CSO) revised the way GDP is calculated in India. According to the new series, India is the fastest growing large economy in the world, with a 7.1 percent real growth rate. Other trusted measures of the state of the economy convey a discordant picture. This discrepancy has led to an active debate comprising two parts. One part of the debate has been arguments about the extent to which the official GDP data are accurate. The second part of the debate is based on criticising CSO's methods. This article summarises the literature that examines CSO's methods. There are two main areas of concern: the way manufacturing Gross Value Added (GVA) has been estimated and the methodology for calculating deflators.
### Manufacturing GVA
The manufacturing sector has been at the centre of the GDP debate. The methodological changes for this sector and consequently the data revisions have been substantial. Manufacturing growth for 2012-13 was revised up from 1.1 percent to 6.2 percent, while that for 2013-14 was increased from -0.7 percent to 5.3 percent. Various authors (Nagaraj, 2015a, 2015b, 2015c; Rajakumar, 2015; Nagaraj and Srinivasan, 2016; Sapre and Sinha, 2016) have questioned the reliability of the new estimates, on several grounds.
Enterprise vs. Establishment approach: In a major innovation, the new GVA methodology shifted data collection from establishments (or factories) to enterprises (or firms). Sapre and Sinha (2016) point out that lack of clarity on measures of output and costs at the enterprise level can lead to imprecise estimates of GVA. The activities of firms can be much more diverse than those of factories, and not all of these functions would qualify as manufacturing. Yet all the value added of enterprises classified as "manufacturing firms" has gone into the calculation of manufacturing GVA.
This will inflate the level of output and possibly also the growth rate, if the ancillary activities are growing faster than the manufacturing ones.
Blowing up of GVA: Extrapolating from samples ("blowing up") is not a new feature of the current GDP series. What has changed is the database used. Previously, manufacturing GVA was based on the RBI's fixed sample of large private companies. Under the new series, the MCA21 database is used to compile a set of "active" companies, which have filed their annual financial returns at least once in the past three years. The problem is that for any given year, information from several active companies remains unavailable till a cut-off date of data extraction. In such a case, the GVA of available companies needs to be blown up to account for the unavailable companies. There are multiple issues in this blowing-up method.
The year-wise number of available and active companies in manufacturing is not publicly available. Hence, year on year, the exact number of companies for which the GVA is blown up is unknown. While the Ministry of Corporate Affairs has made filing of annual financial returns mandatory for all registered companies, it is not known how many of these companies produce any output on a regular basis.
The blowing-up factor is the inverse of the ratio between the paid-up capital (PUC) for the available companies and that for the active set as a whole; in other words, the GVA of the available companies is scaled up by the PUC of the active set divided by the PUC of the available set. Nagaraj (2015a, 2015b) argues that this is inappropriate since a large fraction of the MCA21 active set are "fictitious, shell companies" that exist only on paper. In that case the blowing-up method is likely to overestimate GVA.
Sapre and Sinha (2016) argue that blowing-up using the PUC is an inappropriate method because PUC and GVA do not have any one-to-one relation. Also, it is possible that the actual GVA of some "active but unavailable" companies is negative for a particular year. In those cases, blowing up of GVA using the PUC factor method can lead to overestimation.
The actual computation of the blowing-up factor applied by the CSO in the new series has not been described in detail in the official documents. This makes it difficult to replicate the process and analyse it.
A single blowing-up factor has been used for private as well as public limited companies. Rajakumar (2015) points out this is not appropriate as the two groups are widely divergent in their patterns.
The number of "available" companies reporting their annual financial returns with MCA varies across the years. As a result, the blowing-up factor that accounts for the non-reporting companies will also vary from year to year. As highlighted by Nagaraj (2015a, 2015b) and Sapre and Sinha (2016), this variation will result in wide fluctuations in the final GVA estimates.
Identification of manufacturing companies: Sapre and Sinha (2016) find that within the manufacturing sector several companies operate as wholesale traders or service providers. These companies may have changed their line of business since they were originally registered. These changes do not get reflected in the Company Identification (CIN) code assigned to the companies. Such misclassification of companies will distort the manufacturing estimates, although not the overall GVA.
MCA21 vs. IIP: There are other problems with the manufacturing GVA calculation that have not been written about much.
For the manufacturing sector, the GVA is derived from a combination of MCA21 numbers, Index of Industrial Production (IIP) estimates and estimates of the unorganized sector from the Annual Survey of Industries (ASI). While the MCA21 is a new database, the base year for the IIP data is still 2004-05. Also, the data obtained from MCA21 follows an "enterprise" approach as mentioned earlier, but the data obtained from ASI follows the old "establishment" approach. The full implications of these discrepancies are yet to be fully understood.
### Deflators
Previously, estimates of real GDP relied heavily on production indices such as the IIP. Now, most real numbers are derived by taking nominal data and deflating them by price indices. If done well, this approach can give a more accurate measure of value added. But if the deflators used are inappropriate, the estimated real magnitudes will be distorted. And this may well have happened in the past few years, since there have been very large changes in relative prices (especially petroleum and other commodities), which are inherently difficult to capture in aggregate deflators. The issues here are as follows.
Double deflation: In most G20 countries, real manufacturing GVA is computed using a methodology known as double deflation. In this method, nominal outputs are deflated using an output deflator, while inputs are deflated using a separate input deflator. Then, the real inputs are subtracted from real outputs to derive real GVA. But in India things are done differently. Here, we compute the nominal GVA, and then deflate this number using a single deflator. If input prices move in tandem with output prices, both methodologies will give similar results. But if the two price series diverge - as they have for the past few years in India - single deflation can overstate growth by a big margin. (A stylized numerical illustration appears at the end of this section.)
The reason is not difficult to see. If the price of inputs falls sharply, profits will increase, and nominal value added will go up. Since real GDP is supposed to be measured at "constant prices", this increase needs to be deflated away. Double deflation will do this easily. But single deflation will not work. In fact, if a commodity-weighted deflator like the Wholesale Price Index (WPI) is used, as is the case under the current methodology, nominal growth will be inflated, on the grounds that prices are actually falling! In this case, real growth will be seriously overestimated. A fuller explanation is provided here.
As the gap between input and output inflation starts to close, the problem will diminish. But that could also send a misleading signal, because it might seem that growth is slowing, when only the measurement bias is disappearing.
Service sector deflator: Deflator problems also plague the estimates for the service sector, which accounts for the bulk of GDP. Currently, the deflator used for much of this sector is the WPI. But the weight of services in the WPI is negligible. If instead the services component of the Consumer Price Index (CPI) were used, growth in this sector would be far lower than currently estimated.
WPI vs. CPI: Finally, there are questions about whether the WPI should really be used as a deflator, at all. The weights are now more than a decade old, and India's economic structure has changed radically over this period. In addition, the sample frame (the selection of goods sampled) is also out of date. The CPI is a better price index [link, link].
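To make the single-versus-double deflation point concrete, here is a stylized numerical illustration (the numbers are invented for exposition and are not drawn from CSO data). Suppose in the base year a firm produces output worth 200 and uses inputs worth 100, so nominal GVA is 100. Suppose real quantities are unchanged the next year, but output prices rise by 5% while input prices fall by 10%. Then:
$$\text{nominal GVA} = 200(1.05) - 100(0.90) = 210 - 90 = 120$$
$$\text{double deflation: real GVA} = \frac{210}{1.05} - \frac{90}{0.90} = 200 - 100 = 100$$
$$\text{single deflation with an output-price deflator: real GVA} = \frac{120}{1.05} \approx 114$$
Double deflation correctly reports zero real growth, while single deflation manufactures double-digit real growth out of a pure relative-price movement.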
### Potential refinements
Based on the foregoing, a number of refinements to the GDP methodology could be considered:
• Releasing disaggregated information on firm output and cost items, to permit more precise estimation of manufacturing GVA given the shift from the establishment to the enterprise approach.
• Altering the definition of the active set of manufacturing companies, to ensure the companies are truly active.
• Releasing the number of active and available companies every year by industry or sector, to get a sense of the companies contributing to GVA.
• Shifting the blowing-up factor from paid-up capital to another indicator, such as replacing growth rates for "active but unavailable" companies by the overall growth rate for the relevant subsector.
• Using separate blowing-up factors for public and private limited companies. Currently the blowing-up factor does not take into account the size, industry or ownership of the unavailable companies.
• Reviewing the classification of companies to ensure they are categorized appropriately.
• Providing greater clarity and transparency about the database and methodology used to estimate the manufacturing sector GVA. Also, documents could be released explaining the precise method used to blow up the GVA estimates.
• Adopting the double deflation method to calculate real manufacturing GVA.
• Using the relevant CPI components to deflate service sector GVA.
• More generally, the WPI could be replaced by the relevant CPI components, in the long period before a Producer Price Index (PPI) is developed which would be an ideal deflator.
Until this methodological debate subsides, official GDP data should be used with caution as it may not accurately reflect conditions in the economy. Other proxies for output are required.
### Acknowledgements
I thank Josh Felman, Deep Mukherjee, R. Nagaraj, Amey Sapre, and Pramod Sinha for useful conversations.
### References
Nagaraj, R. (2015a), Seeds of doubt on new GDP numbers: Private corporate sector overestimated?, Economic and Political Weekly, Vol. L, No. 13.
Nagaraj, R. (2015b), Seeds of doubt remain: A reply to CSO’s rejoinder, Economic and Political Weekly, Vol. L, No. 18.
Nagaraj, R. (2015c), Growth in GVA of Indian manufacturing, Economic and Political Weekly, Vol. L, No. 24.
Nagaraj, R. and T.N. Srinivasan (2016), Measuring India’s GDP Growth: Unpacking the Analytics & Data Issues behind a Controversy that Refuses to Go Away, India Policy Forum, July 2016.
Rajakumar, J. Dennis (2015), Private corporate sector in new NAS series: Need for a fresh look, Economic and Political Weekly, Vol. L, No. 29.
Sapre, Amey, and Pramod Sinha (2016), Some areas of concern about Indian Manufacturing Sector GDP estimation, NIPFP Working Paper 172, August 2016.
The author is a researcher at the Indira Gandhi Institute of Development Research.
### Author: Rajeswari Sengupta
Rajeswari Sengupta is a researcher at the Indira Gandhi Institute of Development Research. Home page. On this blog:
1. An annotated reading list on the Indian bankruptcy reform, 2018, 14 August 2018.
2. Bank recapitalisation: the allocation challenge, 30 January 2018.
3. Commercial wisdom to judicial discretion: NCLT reorients IBC, 13 December 2017.
4. Understanding the recent IBC (Amendment) Ordinance, 2017, 7 December 2017.
5. Bank recapitalisation: The myth around growth capital, 21 November 2017.
6. NPA Ordinance: The impact of secrecy in ordinance making, 30 June 2017.
7. Understanding the recent Banking Regulation (Amendment) Ordinance, 2017, 8 May 2017.
8. How will IBC 2016 deal with existing bank NPAs?, 5 December 2016.
9. Demonetisation needs a Parliamentary law to be fool-proof, 1 December 2016.
10. Drafting hall of shame #2: Mistakes in the Insolvency and Bankruptcy Code, 18 November 2016.
11. The great Indian GDP measurement controversy, 10 September 2016.
12. Measuring the transmission of monetary policy in India, 1 September 2016.
13. Analysis of the recent proposed SARFAESI amendments: are these consistent with the Insolvency and Bankruptcy Code?, 29 May 2016.
14. Firm insolvency process: Lessons from a cross-country comparison, 22 December 2015.
15. Bankruptcy reforms: It's not the ranking that matters, 13 November 2015.
## Friday, September 09, 2016
### UIDAI's 4th big public policy innovation: Build forts, not empires
by Praveen Chakravarty.
The insightful paper, and accompanying blog article, by Ram Sewak Sharma about three big innovations in UIDAI got me thinking about my own UIDAI experience. What were the key innovations which made it work out well, especially as viewed from the lens of the private sector? What lessons can we take away? In addition to his three big ideas, I have one more.
'Asset light business' is the new buzzword among investors, especially venture capital and angel investors. The world’s largest taxi company owns no taxis – Uber; the world’s largest room provider owns no hotel rooms – AirBnb; the world’s largest movie house owns no movies – Netflix. This is the popular refrain among the fraternity of modern day startups and their cheer-leading investors. In a similar vein, it can be argued that the world’s largest identity provider owns no identity devices! To top it all, this was not a traditional start-up in a college dorm by a 20 year old. This is the Unique Identity Authority of India – UIDAI, a staid authority of the staid government of India manned by staid bureaucrats.
On 26 March 2004, Bharti Airtel announced a large, first-of-its-kind outsourcing contract with IBM. It essentially meant that Bharti Airtel, the telecom player, would own no telecom equipment nor network hardware or software. It would simply acquire and own customers. IBM, in turn, took on the responsibility of managing all the complex hardware and software required to run a massive telecom operation. To add, IBM was to be paid a percentage of revenues that Bharti Airtel would earn, not a mere fat, flat fee as was the prevailing norm then.
Bharti Airtel grew from a 6.5 million subscriber base to 250 million subscribers in 12 years, leaving most of its competitors behind. It is undeniable that this brave decision of Bharti Airtel to smart source its capital expenditure to IBM played a key role in its ability to scale so rapidly, which is probably forgotten in the annals of Bharti Airtel’s success. It was then dubbed the 'capex to opex' transformation by financial analysts such as myself, i.e. converting big sunk costs of capital expenditure to revenue generating operating expenditure, marking significant gains in efficiency and scale.
When the cost of capital is high in India owing to capital controls, there is a natural gain when an Indian firm, which suffers from the elevated cost of capital in India, contracts out the ownership of bulky capital assets to an MNC, which enjoys global levels of cost of capital.
When UIDAI embarked on providing a unique identity to a billion Indians across more than 6.5 lakh villages, the sheer scale was daunting.
As the Ram Sewak Sharma paper rightly mentions, there was detailed thought behind the use of iris, field trials to test proof of concept etc. But perhaps the single biggest catalyst in converting this from a grandiose plan into reality was the decision of UIDAI to smart source identity data collection. Surely, the technology industry background of the founding Chairman of the UIDAI played a role in its decision to do a Bharti Airtel in public policy. Nevertheless, in hindsight, this decision to embrace the 'capex to opex' theme but adapt it to the Indian public policy environment, in my view, laid the 'Aadhaar' for Aadhaar to scale so rapidly.
In an otherwise typical government project, offices would have been set up in every district, personnel would have been hired, biometric scanners would have been purchased and then identity information would have been collected, all by a government body or a clutch of bodies. This would have meant incurring massive upfront capital costs of infrastructure, technology and people. The UIDAI instead tilted this on its head and decided to build an entire ecosystem of private vendors to do the data collection, with the costs of machines, people and infrastructure borne by the vendor. Essentially, this meant that there were ubiquitous but authorized and approved UIDAI data collection centres and camps that mushroomed all across the country in a short span of time, which made it easy for residents to register.
But in the public policy world, unlike the corporate world, protecting downside risks is far more important than any potential upside gains, i.e. protecting the data security of Indian residents is infinitely more important than any efficiency gains of outsourcing to private vendors. This was achieved by establishing strict oversight and control mechanisms that rested entirely with the UIDAI. The UIDAI exercised stringent control of data encryption and validation. So, while biometric data of a billion Indians were collected by thousands of independent government and private agencies, all of them collected the data through standardised software provided by the UIDAI that encrypted the data, which was then sent back for validation to the UIDAI centre. The UIDAI incurred a cost of roughly Rs. 65 for every successful capture of an individual's biometric data.
Thus, the UIDAI did not have to put up big financial plans and wait for funding from the Ministry of Finance before it could launch its activities across the country. This was one of the biggest reasons that UIDAI could go from zero to 600 million unique identities in 4 years flat, perhaps the fastest of any government or even private sector initiative in recent times anywhere in the world. This was the power of the 'capex to opex' or 'asset light' innovation in policy implementation.
This philosophy of the UIDAI, eschewing the temptation to build 'empires' and building 'forts' instead, is a philosophy that can serve many large scale government project implementations well. We in India are gradually developing knowledge about how to build State capacity. Computer technology allows us to leapfrog, and achieve remarkable kinds of State capacity which were otherwise unavailable at our level of per capita GDP. We are crossing this river by feeling the stones. We are building experience, and we are building a literature.
A paper by Ajay Shah in 2006, the Tagup report, Nandan Nilekani's book from 2013, the book by Nandan Nilekani and Viral Shah from 2016, the concept of the Financial Data Management Centre (FDMC) in the Indian Financial Code, the concepts of information utilities in the bankruptcy reform, the Ram Sewak Sharma paper, and my one idea in this article: these add up to the emergence of deeply grounded local knowledge on how to do this. These materials are great fodder for thinking, debate, and then actually doing things in India.

The author is Senior Fellow in Political Economy at IDFC Institute, a Mumbai think tank, and former consultant with the UIDAI.

## Thursday, September 08, 2016

### UIDAI's public policy innovations

by Ram Sewak Sharma.

The Unique Identification Authority of India (UIDAI) had the goal of issuing unique identification numbers to every resident of India. In a country as large as ours, this was a difficult task to achieve. UIDAI has largely accomplished this within a short period of about six years. I believe it was able to do this only because it took many innovative and bold decisions. In a recent paper I examine some of these innovations. The paper also tries to derive lessons from UIDAI that could be applied in other government projects.

### The Use of Iris Scans

The UIDAI felt that unless iris images were used in addition to fingerprints, it would not be able to fulfil its mandate of unique identification. However, there were many concerns related to the use of iris images. Was this technology mature enough? Was it too expensive? Were there enough vendors in the market to prevent lock-in? The UIDAI set up a committee to deliberate on the issue of which biometrics to collect and what standards to use for unique identification. This committee recognised the value of using iris images in improving accuracy. However, it fell short of recommending the inclusion of the iris in the biometric set and left the decision to UIDAI. After a detailed examination, the UIDAI came to the conclusion that the inclusion of the iris in the biometric set was necessary for a number of reasons, such as ensuring uniqueness of identities, and achieving greater inclusion. In retrospect, this turned out to be one of the most important decisions of the UIDAI.

### On-field trials

The practice of conducting on-field trials was an important innovation. When UIDAI began its mission, there were many questions inside and outside the organisation on whether the very idea of unique identification for every resident was feasible at all. The idea of using biometrics to ensure the unique identification and authentication of all residents in India was an untested one. There were many assumptions behind it, and the data required to test the validity of these assumptions was not available. For instance, most of the research done on using biometrics for identification or authentication was done in Western countries, and that too, on relatively small numbers of people. The knowledge which had been produced by Western researchers was not applicable in the Indian context. Could the fingerprints of rural residents and manual labourers be captured successfully, or would they be excluded from Aadhaar? What about the iris images of old or blind people? Did the devices available in the market serve the purpose? What would be the most efficient and effective way to organise the process of enrolment? These questions needed to be answered if the project was to be successful.
The strategy adopted at UIDAI was to conduct a set of trials (called Proofs of Concept, PoCs) in several states across the country. The areas were selected to be representative of real-life enrolment and authentication. A number of biometric capture devices of different makes were used, and several different enrolment processes were tried out. The PoCs were carefully designed to answer sharply articulated questions, either to verify UIDAI's assumptions, or to capture the data required to fill in gaps in the UIDAI's knowledge. In essence, the scientific method was applied to create the knowledge that was pertinent to the decisions that had to be made at UIDAI. Resources had to be allocated to this work, and in return for that, major sources of project risk were eliminated. The results of the PoCs indicated that the major hypothesis of the UIDAI was correct: that it was indeed possible to capture biometric data that was fit for the purpose of deduplication and verification. The results also showed that iris capture did not present any major challenges. An efficient enrolment process was devised using the data captured during these trials.

### Competition

The last innovation considered in the paper relates to competition. Given the scale and importance of the project, the UIDAI felt it was important to increase efficiency and reduce costs by leveraging the competencies available in the private sector. At the same time, it was also essential to avoid a situation where any one private player could exercise significant power over the effective functioning of the Aadhaar system: the Authority wanted to ensure that there was a competitive market for providing services to it. To promote such a competitive market, the Authority used a two-pronged strategy of using open standards (creating standards where there were none), and using open APIs (Application Programming Interfaces).

The Authority used this strategy in procuring vendors for deduplication. Algorithms for deduplication had never been tested at the scale required in this project. To reduce the risk of poor quality deduplication, the UIDAI came up with a novel solution. It decided to engage three biometric service providers (BSPs), instead of just one. These BSPs would interface with the UIDAI systems using open APIs specified by the Authority. This decision helped avoid vendor lock-in, and increased scalability. The UIDAI selected the three top bidders on the basis of the total cost per deduplication. Even after these three vendors were selected, the Authority was able to set up a competitive market among them, using an innovative system to distribute deduplication requests among them. Vendors were paid on the basis of the number of deduplication operations they were able to carry out, and the Authority allocated operations to them on the basis of how fast and how accurate they were. This led to a situation where the BSPs were constantly competing with each other to improve their speed and accuracy.

Where standards were not present, the UIDAI was willing to create new standards in order to increase competition. At the outset of UIDAI's work, every biometric device had its own interface, distinct from the interfaces of other biometric devices. If a capture application wanted to support 10 commonly used devices, then the application developer would have to implement 10 different interfaces. This would have made it costly to bring new devices into the project, even if these new devices were cheaper and better.
In order to avoid this situation, the UIDAI created an intermediate specification. Vendors could implement support for this specification, and their devices could be certified. This allowed all capture applications to work with all certified devices.

### Lessons

The success of the UIDAI offers many lessons for other government projects. Perhaps the first lesson that can be drawn from it is that innovation is indeed possible within the government. Government processes need not prevent it from taking innovative decisions. In fact, processes commonly used within the government, such as expert committees and consensus-based decision-making, can provide methods to examine difficult issues in a credible manner. High-quality procurement and project management skills can help the government outsource many functions that are currently housed within it. The paper also suggests that scale and complexity need not be deterrents to private sector participation: in fact, the large scale of government projects can make the project more attractive to private parties. Another lesson government agencies could learn from the UIDAI is the need to test major hypotheses through field trials before launching projects at scale. Conducting such field trials provides an opportunity to change the design or the implementation roadmap well in time, thus saving precious public money from being wasted.

### Conclusion

The UIDAI could achieve its objective because it adopted a different approach from most government organisations. It took tough decisions, such as the one to use iris images; it expended resources on building pertinent knowledge, by constantly experimenting on the ground and learning from these trials; and it exploited private-sector competition to achieve its task at the lowest cost. It should be noted that this is not an exhaustive list of its innovations, but without these three decisions, it is unlikely the UIDAI would have been able to fulfil its mission.

Even large government projects can be done fast and efficiently. Government processes need not be obstructive. In fact, the mechanisms of bureaucracy, such as committees, adherence to financial regulations, and desire for consensus, can help to resolve difficult issues and take tough decisions. Well-designed pilots and field-tests can help the government evaluate the effectiveness of large programs, so that it can deploy public resources more usefully. High quality procurement and contract-management processes can enable the government to leverage the dynamism of the private sector to provide public goods effectively.

### Acknowledgements

I am grateful to Prasanth Regy and Ajay Shah, both of NIPFP, for stimulating discussions.

The author is Chairman, Telecom Regulatory Authority of India (TRAI) and was part of the founding team at UIDAI.

## Wednesday, September 07, 2016

### Dating the Indian business cycle

by Radhika Pandey, Ila Patnaik, Ajay Shah.

Most macroeconomics is about business cycle fluctuations. The ultimate dream of macroeconomic policy is to use monetary policy and fiscal policy to reduce the amplitude of business cycle fluctuations, without contaminating the process of trend GDP growth. From an Indian policy perspective, this agenda is sketched in Shah and Patnaik (2010). The starting point of all these glamorous things, however, is measurement. The major barrier to doing Indian macroeconomics is the lack of the foundations of business cycle measurement.
The first milestone in this journey is sound procedures for seasonal adjustment of a large number of macroeconomic time series. At NIPFP, we have built this knowledge in the last decade, and insights from this work are presented in Bhattacharya et al., 2016. The next milestone is dates of turning points of the business cycle. As an example, in the US, the NBER produces a set of dates. These dates are extremely valuable in myriad applications. As an example, the standard operating procedure when drawing the chart of a macroeconomic time-series is to show a shaded background for the period which was a contraction. Here is one example: y-o-y CPI inflation in the US, with recessions shown as shaded bars.

In the Indian setting, several papers have worked on the problem of identifying dates of turning points of the business cycle (Dua and Banerji, 2000; Chitre, 2001; Patnaik and Sharma, 2002; Mohanty et al., 2003). In a new paper (Pandey et al., 2016) we bring three new perspectives to this question:

1. In the older period, India was an agricultural economy, and the ups and downs of GDP growth were largely monsoon shocks. It is only in the recent period that we have got structural transformation, and the market process of cyclical behaviour of corporate investment and inventory, which add up to a business cycle phenomenon that is recognisably related to the mainstream conception of business cycles (Shah, 2008). This motivates a focus on the post-1991 period.
2. We are able to shift from annual data to quarterly data by starting in the mid-1990s.
3. We have laid the groundwork for this to be a system, with regular updating of the dates, rather than a one-off paper.

### Methods

One approach to business cycle measurement focuses on 'growth cycles', and relies on detrending procedures to extract the cyclical component of output. The cycle is defined to be in the boom phase when actual output is above the estimated trend, and in recession when the actual output is below the estimated trend. This identifies expansion and contraction based on the level of output. In contrast, the 'growth rate cycle' identifies turning points based on the growth rate of output. For the post-reform period in India, this is more appropriate.

At an intuitive level, the procedure works as follows. First, we remove the trend and focus on fluctuations away from the trend. Second, we remove the high-frequency fluctuations (below two years) and the low-frequency fluctuations (above eight years). What's left is in the range of frequencies which are considered the 'business cycle'. Third, we identify turning points in this series.

In terms of tools and techniques, we use the filter by Christiano and Fitzgerald. The Christiano-Fitzgerald filter belongs to the category of band-pass filters. This is used to extract the NBER-suggested frequencies from two to eight years. To this filtered cyclical component, we apply the dating algorithm developed by Bry and Boschan (1971). Our analysis is focused on the seasonally adjusted quarterly GDP series (base year 2004-05). This series is available from 1996 Q2 (Apr-Jun) to 2014 Q3 (Jul-Sep). The CSO revised the GDP series with a new base year of 2011-12. The revised series is available only from 2011 Q2. Hence we stick to the series with the old base year for our analysis.

### Results

(Figure: de-trended, filtered, seasonally adjusted real GDP growth.)

As an example, look at the period of the Lehman crisis. It is well known that the economy was weakening well before the Lehman bankruptcy in September 2008.
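The three-step procedure in the Methods section maps directly into a few lines of R. This is a minimal sketch, not the authors' released code: it assumes the mFilter and BCDating packages, and a placeholder file gdp_sa.csv holding the seasonally adjusted quarterly GDP series; the recession dates used for the shading are the ones reported in the table further below.

```r
library(mFilter)    # Christiano-Fitzgerald band-pass filter
library(BCDating)   # Bry-Boschan style dating for quarterly data (BBQ)

# Placeholder input: seasonally adjusted quarterly real GDP from 1996 Q2.
gdp <- ts(read.csv("gdp_sa.csv")$gdp, start = c(1996, 2), frequency = 4)

# Steps 1 and 2: detrend and keep fluctuations with periods between
# 2 and 8 years (8 to 32 quarters), the NBER-suggested band.
cyc <- cffilter(log(gdp), pl = 8, pu = 32)$cycle

# Step 3: locate peaks and troughs in the filtered cyclical component.
dating <- BBQ(cyc, name = "India GDP cycle")
show(dating)    # prints turning-point dates and phase durations

# NBER-chart convention: shaded recession bars behind a plotted series.
# Dates as decimal years (1999Q4 -> 1999.75), from the table below.
recessions <- data.frame(start = c(1999.75, 2007.25, 2011.25),
                         end   = c(2003.00, 2009.50, 2012.75))
plot(cyc, type = "n", ylab = "Cyclical component")
usr <- par("usr")                      # limits of the plot region
rect(recessions$start, usr[3], recessions$end, usr[4],
     col = "grey85", border = NA)      # bars drawn first, behind the line
lines(cyc)
box()
```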
The INR, for instance, started depreciating sharply from January 2008 onwards. The evidence above shows that the economy peaked in Q2 2007, and started weakening thereafter. Each turning point is a fascinating moment. In Q2 2007, i.e. Apr-May-Jun 2007, growth was good but the business cycle was about to turn. It is interesting to go back into history to each of these turning points and think about what was going on then, and what we were thinking then.

| Phase | Start | End | Duration (quarters) | Amplitude (per cent) |
|---|---|---|---|---|
| Recession | 1999Q4 | 2003Q1 | 13 | 3.3 |
| Expansion | 2003Q1 | 2007Q2 | 17 | 2.5 |
| Recession | 2007Q2 | 2009Q3 | 9 | 2.3 |
| Expansion | 2009Q3 | 2011Q2 | 7 | 1.3 |
| Recession | 2011Q2 | 2012Q4 | 6 | 0.9 |

Our findings on business cycle chronology are robust to the choice of filter and to the choice of the measure of business cycle indicator. We conduct this analysis using different measures of business cycle indicators, such as IIP, GDP excluding agriculture and excluding government, and firms' net sales, and find broadly similar turning points. Details about these explorations are in the paper.

### A system, not just a paper

This is not a one-off paper. We will review these dates regularly and update the files, while avoiding changes in URLs. When the methods run into trouble with future data, we will address these problems in the methods. This work would thus become a part of the public goods of the Indian statistical system. All key materials have been released into the public domain. In addition to a paper web page, we have a system web page which gives a .csv file with dates at a fixed URL and can be used e.g. in your R programs.

### An example of an application

(Figure: recession bars placed on a graph of growth in the net sales of non-finance, non-oil firms.)

The graph shows the familiar series of seasonally adjusted annualised growth of the net sales of non-financial, non-oil firms, with shaded bars showing downturns. This series only starts after 2000, as quarterly disclosure by firms only started then. Placing this series (net sales of firms) into the context of the business cycle events gives us fresh insight into both: we learn something about how the sales of firms respond to business cycle fluctuations, and we learn something about business cycle fluctuations.

### Facts about the Indian business cycle

It is useful to know summary statistics about the Indian business cycle: the average duration and amplitude of expansion and recession, and the coefficient of variation (CV) in duration and amplitude across expansions and recessions.

| | Average amplitude (per cent) | Average duration (quarters) | Diversity in duration (CVD) | Diversity in amplitude (CVA) |
|---|---|---|---|---|
| Expansion | 2.5 | 12.0 | 0.34 | 0.38 |
| Recession | 2.2 | 9.3 | 0.31 | 0.45 |

The average amplitude of expansion is seen to be 2.5% while the average amplitude of recession is 2.2%. The average duration of expansion is seen to be 12 quarters while the average duration of recession is seen to be 9.3 quarters. These are fascinating new facts about India. There is more heterogeneity in the amplitude of downturns than of expansions.

### Changing nature of the Indian business cycle

In recent decades a number of emerging economies have undergone structural transformation and introduced reforms aimed at greater market orientation. There is an emerging strand of literature that studies the changes in business cycle stylised facts in response to these changes. Studies find that business cycle stylised facts have changed over time (Ghate et al., Alp et al., 2012).
In the paper, we explore some of these changes. In the post-reform period, both expansions and recessions have become diverse in terms of duration and amplitude. Some episodes of recession are deeper and more severe than others in the post-reform period. Similarly, there is considerable variation in the duration of expansion and recession across specific cycles in the post-reform period. Some are short-lived while others are relatively more persistent.

### References

Rudrani Bhattacharya, Radhika Pandey, Ila Patnaik and Ajay Shah. Seasonal adjustment of Indian macroeconomic time-series, NIPFP Working Paper 160, January 2016.

Radhika Pandey, Ila Patnaik and Ajay Shah. Dating business cycles in India. NIPFP Working Paper 175, September 2016.

Ajay Shah (2008). New issues in macroeconomic policy. In: Business Standard India. Ed. by T. N. Ninan. Business Standard Books. Chap. 2, pp. 26-54.

Ajay Shah and Ila Patnaik (2010). Stabilising the Indian business cycle. In: India on the growth turnpike: Essays in honour of Vijay L. Kelkar. Ed. by Sameer Kochhar. Academic Foundation. Chap. 6, pp. 137-154.

## Monday, September 05, 2016

### Interesting readings

Financial reforms: A mid-term report card by Ajay Shah in The Business Standard, 5 September.

With a new ban on antibacterial soap, the US government is finally acknowledging that it's not just ineffective, it's also dangerous by Elijah Wolfson in Quartz, 02 September.

Sydney and Melbourne have canceled concerts celebrating Chairman Mao by Isabella Steger in Quartz, 01 September. I have always been struck by how Hitler is treated differently from Stalin and Mao. You can actually buy Stalin and Mao memorabilia in Moscow and Beijing.

You want to pay the publisher but you don't want the publisher monitoring what you read. Publishers must let online readers pay for news anonymously by Richard Stallman in The Guardian, 1 September, and an open source software system to implement this: Electronic payments for a liberal society!

Real costs of high-frequency trading by Venkatesh Panchapagesan in Mint, 31 August.

Day of the specialist by Manish Sabharwal in The Indian Express, 29 August.

Reverse Voxsplaining: Drugs vs. Chairs by Scott Alexander in Slate Star Codex, 29 August.

Dalits are right: Enough is enough by P Chidambaram in The Indian Express, 28 August.

Perumal Murugan returns by Salil Tripathi in Mint, 25 August.

Sedition law cannot be used against honest views, expressed peacefully by Soli J. Sorabjee in The Indian Express, 25 August.

Here is proof that banks mis-sell and Yes, banks mis-sell. Now what? by Monika Halan in Mint, unveiling the recent Halan & Sane paper.

Why Maharashtra CM Devendra Fadnavis must make new mandis work in The Financial Express, 23 August.

The 'Know-Do' gap in primary health care in India by Jeffrey S. Hammer on the NIPFP YouTube Channel.

Smugglers Secretly Repairing Russian Roads to Boost Business in The Moscow Times, 22 August.

Contrary reading of law by regulator by Somasekhar Sundaresan in The Business Standard, 22 August.

Nigerian startups can't raise money through crowdfunding because of antiquated laws by Yomi Kazeem in Quartz, 19 August. India has a similar innovation-unfriendly financial regulatory environment.

Why inflation targeting works by Rajeswari Sengupta in Mint, 16 August.

Use the Web instead by Ruben Verborgh on Ruben Verborgh's blog, 05 August.
How a new source of water is helping reduce conflict in the Middle East by Rowan Jacobsen in Ensia, 19 July.

## Saturday, September 03, 2016

### Design of the Indian GST: Walk before you can run

by Satya Poddar and Ajay Shah.

A previous article, Sequencing in the construction of State capacity: Walk before you can run, argues that in public administration, we should first reach for a modest objective, i.e. a low load, and build sound public administration systems, i.e. adequate load-bearing capacity. Only after the systems have been proven to work at a low level of load should we consider increasing the load.

In building tax administration, the load is defined by (a) the tax rate and (b) the complexity of the tax in its very design - e.g. a sales tax is easier than an income tax. If the tax rate is low, the employee of the tax collection agency has a greater incentive to collect the tax. When the tax rate is high, there is a greater temptation to just take a bribe instead. If the tax system is simple, there is reduced discretion at the front line, and thus reduced rent-seeking.

In places like the UK, where there is high State capacity, income tax began at low income tax rates. When Pitt the Younger introduced the income tax in 1799, the peak rate was 10%. This gave them an opportunity to build sound tax administration under conditions of low load. Once this was done, the road to higher tax rates was available. In similar fashion, Singapore started with a GST rate of 3%, and then went up to 7%. The Japanese GST rate was also 3% at inception, and has now been moved up to 8%. In India, we never made the tax administration work at low rates of tax; premature load bearing was attempted by jumping to high tax rates without adequate load-bearing capacity in the form of a well-designed tax administration.

A standard debate in tax policy is about the choice between a low rate and a wide base versus higher rates applied on a smaller base. The traditional economics argument has been that the distortion associated with a tax goes up as the tax rate squared, so for a given level of tax revenue we are better off with a low rate and a wide base. A simple tax system with low rates will help lower the extremely large value for the Indian Marginal Cost of Public Funds.

The argument presented here gives us one more perspective on the problem. Low tax rates are a low load from a public administration point of view; until load-bearing capacity has been created, it is unwise to subject the system to high load. There is an interesting tension here between two different ways to make the load smaller. A lower rate requires a large base. The wider base involves a bigger tax administration machinery, and a larger number of transactions. A large number of transactions induces a greater load. But a higher tax rate changes what is at stake and increases the load substantially.

By this reasoning, the way forward on building a sound framework for tax administration is:

1. First, design a very simple tax policy (e.g. a single rate comprehensive GST) with low discretion for the front-line employees, so as to keep the load low. At first, set very low tax rates, to reshape the incentives of citizens and tax officials, to keep the load upon public administration low.
2. Build and run a tax administration which is able to deliver sound tax revenues under these conditions. E.g. a 5% comprehensive VAT rate should generate VAT collections of near 3% of GDP. This requires sophisticated thinking about tax administration.
3. Use independent private studies (e.g. comprehensive audits of some persons) and perception studies to measure the extent to which bribes are paid instead of tax.
4. Only after this is working well, consider moving up to higher tax rates and/or a more complex tax policy.

### Implications for GST design

How can a GST be designed so as to have low load? If we wanted to walk before we run, how would we design the GST?

1. A low single rate of 12-15%. Multiple rates significantly increase the workload.
2. A single rate and comprehensive base, which simplifies the workflow, reduces discretion and eliminates classification disputes.
3. Centralised registration. State-wise registration increases the administration work load, and the compliance burden for taxpayers, manifold - 36 times for those who have to register in all of the states and union territories.
4. Automatic refunds of excess credits, without discretionary approval by officials.
5. Eliminate the concept of self-supplies within a legal entity, as the number of transactions increases several-fold if self-supplies are made taxable. No supply should be reckoned unless there is another person to whom a supply can be made.
6. The system of penalties and assessments needs to be simple, with a bias in favour of low discretion and low penalties.

There has been a lot of focus on the 'revenue neutral rate'. One twist on this is that the government is a significant buyer of goods and services. Thus the 'budget neutral rate' would be a bit lower than the revenue neutral rate. This makes it possible for the rate to be lower when compared with the conventional analysis. (The rate-base arithmetic is sketched at the end of this section.)

Single registration is a subject of some debate. Even when each state has its own GST law, it is very much possible to have single registration. The law would impose the tax on taxable supplies made in the state, allow input tax credits, and specify reporting obligations for information. These provisions will apply to any person registered in the country. There need not be the requirement of separate registration in each state. Computations of tax and reporting of the information could be on a single return with state-wise annexures. The key difference between state-wise and central registration would be that all of the state-wise compliances would be on a single registration portal, and the person will be treated as a single person (note that under the current Model law, each registration number is treated as belonging to a different person). This is how the Canadian GST operates, i.e., with single registration, but with multiple federal and provincial GST laws.

Does GST implementation require single control? We think single control is neither desirable nor feasible. Scrutiny and audits at the state level will necessarily require information on the dealer on a pan-India basis, which individual states would not have. Both the Centre and States would want to monitor compliance with their respective tax laws. If they want autonomy in administration of the GST, what is needed is a harmonisation agreement to avoid duplication of administrative effort and inconsistent policies across the country. For example, the governments should agree on a common rulings and interpretations authority, and common administration guidelines. A clean solution would be to have a common audit and scrutiny function that is jointly staffed by Centre and State officials. Some 12 States have already opted for a full-service model of GSTN, under which even scrutiny and audit would be done by GSTN.
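The rate-base-revenue arithmetic referred to above is simple enough to write down. A minimal sketch in R; the taxable base as a share of GDP is an assumption inferred from the article's example (a 5% comprehensive VAT yielding collections near 3% of GDP implies an effective base of roughly 60% of GDP), not a number stated in the article:

```r
# Collections as a share of GDP = tax rate x (effective taxable base / GDP).
# Assumes full compliance; base_share = 0.60 is inferred, not official.
gst_collections <- function(rate, base_share = 0.60) rate * base_share

gst_collections(0.05)            # 0.03       -> the 5% rate / 3% of GDP example
gst_collections(c(0.12, 0.15))   # 0.072 0.09 -> a 12-15% single rate
```

The same function makes the 'budget neutral rate' point concrete: since the government is itself a large buyer, the rate needed for a given net fiscal outcome is a bit lower than the conventional revenue neutral rate.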
### Conclusion

The 122nd Amendment is a great step forward. It opens the possibility that India will become one country, one market. At present, tax administration in India works poorly. We do not know how to build a capable and uncorrupt tax administration. In the absence of this State capacity, we should start with a GST design that imposes a low load upon tax administration. Only after this is proven to work at high levels of probity and operational efficiency can we consider the possibility of going up to higher levels of load. This concept can be expanded to all of the GST in all of the States. To keep the load low, we need to expand the Prime Minister's vision of One India, One Tax, to "One India, One Tax, One Registration, and One Rate".

Satya Poddar is a senior tax advisor with Ernst & Young in India. Ajay Shah is a researcher at the National Institute for Public Finance and Policy.

## Thursday, September 01, 2016

### Where will production take place in a robot-intensive world?

by Ajay Shah.

Vivek Wadhwa has an article in Quartz on China's difficulties in a robot-heavy world. Earlier this year, there was news about Foxconn replacing 60,000 workers with robots. Vivek Wadhwa says:

• Shipping costs to the US go down when goods are made closer to the US. Today the supply chain is: Global raw materials -> China -> US. Instead it can be Global raw materials -> US.
• The skills required to run a robot-intensive factory are greater than the skills required to do low-end manufacturing using humans.
• Hence, a lot of robotic manufacturing will return to the US.

I agree with this. Similar things are going on with services production also, as improvements in artificial intelligence take work away from the cheap Indian BPO. There are three more perspectives that should be brought into this line of thought.

#### Three more reasons for robot-intensive manufacturing to favour production in mature market economies

1 Safety of expensive physical assets is a concern. A person who places vast physical assets into a certain location worries about expropriation risk. The investment in a factory can go bad owing to regime change, outbreaks of anarchy, unfair changes in taxation, imposition of capital controls, etc. China poses a greater risk. Placing manufacturing in developed countries is safer. I am reminded of the vast Reliance facility in Jamnagar, which is partly about going as close to the crude oil of the Middle East as possible, while avoiding the political risk of the Middle East.

2 Cost of capital. When manufacturing becomes highly capital intensive, the cost of equity and debt becomes more important. Developed countries have mature financial systems where the cost of capital is low. Right now, the cost of capital is extremely low in developed countries as the policy rate is near zero. It is attractive to finance yourself in USD, manufacture in Oregon, and earn cashflows in dollars. Conversely, countries with capital account restrictions, such as China or India, will find it more difficult to attract investment as the cost of capital in these places is higher.

3 Cost of electricity. Firms like Google and Apple have placed data centres near hydel power in Oregon. Data centres, which consume a lot of electricity and require very few workers, are perhaps at the forefront of what robot-intensive manufacturing will be. There are many places in developed countries where there is reliable and cheap access to renewable energy. These would be ideal locations to place large-scale robot-heavy factories.
(They would need good infrastructure of transportation and communication also.) By this logic, there are five reasons why robot-intensive manufacturing will be attracted to developed economies instead of a place like China: (1) Reduced costs of transportation; (2) Skill intensity which requires a superior workforce; (3) Expropriation risk for a big block of $K$; (4) Cost of capital on a big block of $K$; (5) Cheap renewable energy. I'm reminded of an earlier article on the economics of cloud computing from an Indian perspective, and the developments in that industry give us some insight into the new world of robot-intensive manufacturing.

#### Implications for China and India

These developments induce depreciation in the existing Chinese capital stock. There is a lot of $K$ in China which is oriented around the old ways of manufacturing. The market value of that $K$ will go down. This is similar to the diminution of the capital stock of a country which comes about when trade liberalisation takes place, and a lot of the old factories are now worth less.

In China and in India, there is a low-skill middle class that got jobs in manufacturing or in BPO. These two kinds of jobs are threatened by improvements in artificial intelligence and robots. Millions of people who have got this prosperity for the first time will be unhappy. In both cases, their unhappiness could be exploited by messages of nationalism and religion.

How should a country like India compete in this world? Let's think about each of the five channels of influence: (1) Reduced costs of transportation to consumers in developed markets; (2) Skill intensity which requires a superior workforce; (3) Expropriation risk for a big block of $K$; (4) Cost of capital on a big block of $K$; (5) Cheap renewable energy. We should respond to #1 by improving the infrastructure of transportation, and we should note that a lot of Indian firms will do outbound FDI to stay competitive in this landscape. We should respond to #2 by building higher education. We should respond to #3 by strengthening our foundations of liberal democracy and the rule of law, with sophisticated institutional arrangements on issues like capital controls and taxation. We should respond to #4 by doing inflation targeting, removing capital controls and ending financial repression. We should respond to #5 by undertaking reforms which improve the working of the electricity sector.

#### Other interesting implications

When raw materials $\rightarrow$ China $\rightarrow$ DM is replaced by raw materials $\rightarrow$ DM, this will not be good for demand for shipping.
Economists think in terms of the HMY (Helpman-Melitz-Yeaple) model, where a firm faces fixed costs of setting up operations near the customer, and after that it saves money on the transactions costs of shipping. Under the HMY model, more efficient firms export and the most efficient firms do outbound FDI. In a world of robotised manufacturing, the tension will be between placing manufacturing close to a customer (thus minimising the cost of getting goods to the customer) versus economies of scale. If there were no economies of scale, we could think of a small 3D printer facility being placed near every Amazon warehouse. The right scale of manufacturing will depend on the extent to which powerful economies of scale persist even with modern manufacturing.
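The export-versus-FDI cutoff in this class of models can be made concrete with a toy calculation. This is a hypothetical calibration for illustration only - the shipping cost and fixed cost numbers are invented, not taken from the HMY paper or this post:

```r
# Toy HMY-style choice: export (shipping eats a share tau of profit)
# versus FDI (pay a fixed cost F to produce next to the customer).
profit_export <- function(phi, tau) phi * (1 - tau)
profit_fdi    <- function(phi, F)   phi - F

tau <- 0.20                 # assumed iceberg-type shipping cost
F   <- 1.5                  # assumed fixed cost of a foreign plant
phi <- seq(1, 10, by = 1)   # firm productivity

data.frame(phi,
           export = profit_export(phi, tau),
           fdi    = profit_fdi(phi, F),
           choice = ifelse(phi > F / tau, "FDI", "export"))
# FDI dominates once phi > F / tau = 7.5: the most productive firms
# do outbound FDI, which is exactly the HMY sorting described above.
```

Cheaper robots that cut the effective fixed cost F push the cutoff down, which is one way to read the post's point about proximity versus economies of scale.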
Middle and top management in the operations of global firms is about managing the complexities of manufacturing in China. In the new world, it will be about getting raw materials to DM factories, and the construction and management of robot-heavy manufacturing. There will be reduced demand for 'China hands' who know how to build production systems involving China, or 'India hands' who know how to build low-end services production in India.
### Measuring the transmission of monetary policy in India
by Rajeswari Sengupta.
The Finance Bill, 2016 amended the RBI Act, 1934 to establish the objective for the RBI (where previously there was none): an inflation target. With the enactment of this law, the RBI is committed to meeting pre-announced inflation targets within a specific period of time. For long, India has faced the adverse consequences of a discretionary monetary policy. Inflation targeting (IT), if implemented successfully, will improve accountability, certainty and transparency in India's monetary policy, and help stabilise the Indian macroeconomic and financial environment.
The weak link today is the monetary policy transmission (MPT). In the absence of strong and reliable links between the policy instruments controlled by the RBI and aggregate demand in the economy, it becomes difficult to do IT. In a recent paper (Mishra, Montiel, and Sengupta, 2016), we present evidence of weak monetary policy transmission in India.
We explore two main issues in the paper:
1. How does India fare in the factors that affect MPT?
2. How effective is the bank lending channel of MPT in India?
### Factors affecting MPT
Changes in monetary policy instruments translate into changes in aggregate demand through three main channels: the bank lending (or interest rate) channel, the exchange rate channel, and the asset price channel. The effectiveness of these channels is shaped by the extent of capital controls, policy constraints on exchange rate flexibility, and the structure of the financial system.
Financial markets integration and exchange rate regime: According to Robert Mundell's "impossible trinity", in an economy with a fixed exchange rate, monetary policy loses autonomy when there is high integration between domestic and international financial markets. On the other hand, under a floating exchange rate, as the degree of financial integration increases, the power of monetary policy to affect aggregate demand increases.
We show in the paper that India has a relatively closed capital account in de facto terms, compared to major emerging economies such as Argentina, Brazil, Chile, Colombia, Israel, Malaysia, Mexico, Thailand, Turkey, Russia and South Africa. The exchange rate of the Rupee is determined in the interbank market. The RBI periodically intervenes in that market, buying and selling both spot and forward dollars at the market exchange rate. The limited degree of financial markets integration and RBI's interventions in the foreign exchange market are likely to mute the exchange rate response to monetary policy.
Structure of the domestic financial system: According to Mishra, Montiel and Spilimbergo (2012), MPT works better as the size and reach of the financial system increase, the degree of competition in the formal financial sector goes up and the domestic institutional environment lowers the costs arising from financial frictions.
We present evidence in our paper that the size of the formal financial system in India, measured by conventional indicators (such as the number of bank branches scaled by population or the percentage of adults with accounts at a formal financial institution) is relatively small compared to other advanced and emerging economies. The formal banking sector does not intermediate for a large share of the economy and is highly concentrated. India lags behind advanced and emerging economies in developing its bond market. Indicators of domestic institutional environment such as rule of law, regulatory quality, control of corruption, and political stability, show that India is roughly at the global median.
This suggests that the kind of public goods on which the financial system depends (such as enforcement of property rights, accounting and disclosure standards) may not be as readily available in India as in other countries. This would make financial intermediation a costly activity, weakening the effect of monetary policy actions.
### Bank lending channel of MPT
There are two stages of the transmission process in the bank-lending channel: (i) the transmission from policy instruments to bank lending rates, and (ii) the transmission from bank lending rates to final outcomes such as inflation and output. We use a structural vector autoregression (VAR) model in the paper to estimate the effects of a shock to monetary policy instruments on outcome variables through the impact on bank lending rates. The VAR model captures the full dynamic interactions among all the variables of interest. Given a shock to, say, the policy rate, it is possible to trace out the responses of all other variables to that shock, period by period.
In India, two broad groups of instruments have historically been used by the RBI to conduct monetary policy: (i) price-based instruments such as the repo rate and the reverse repo rate, which affect the cost of funds for banks, and (ii) quantity-based instruments such as the Cash Reserve Ratio (CRR) and Statutory Liquidity Ratio (SLR), which affect the supply of banks' loanable funds.
We consider the effects of four instruments in our analysis: (i) the repo rate, (ii) the average of repo and reverse repo rates (price indicator), (iii) the sum of CRR and SLR (quantity indicator), and (iv) a composite score-based indicator of monetary policy stance. The price and quantity indicators have generally moved in the same direction during our sample period of 2001 to 2014. The exception is between 2011 and 2012, when increases in the policy rates suggested a tightening of monetary policy while the quantity indicator continued to move in a loosening direction.
To address this complication, we construct a score-based indicator of monetary policy stance following Das, Mishra and Prabhala (2015). We assign scores of 0, +1, and -1, respectively, if there is no change, an increase, or a decrease in the value of each of the four monetary policy instruments in any given month during our sample period. We calculate the overall stance of monetary policy by taking an unweighted sum of the scores for the individual instruments.
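This construction is mechanical enough to write down directly. A minimal sketch in R, with a hypothetical monthly data frame holding one column per instrument (the numbers below are made up for illustration):

```r
# Score-based stance: per instrument, +1 if its value rises in a month,
# -1 if it falls, 0 if unchanged; the stance is the unweighted row sum.
score_stance <- function(instruments) {
  changes <- apply(instruments, 2, function(x) sign(diff(x)))
  rowSums(changes)
}

# Hypothetical instrument values over four months:
m <- data.frame(repo  = c(6.00, 6.25, 6.25, 6.00),
                rrepo = c(5.00, 5.25, 5.25, 5.00),
                crr   = c(4.0, 4.0, 4.0, 4.0),
                slr   = c(21.5, 21.5, 21.0, 21.0))
score_stance(m)
# 2 -1 -2 : tightening, then mild loosening, then loosening
```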
We use the "benchmark prime lending rate (BPLR)" of the banking sector till June 2010 and the "base rate" thereafter. Till 2010, the BPLR determined the interest rates charged by Indian banks on different categories of loans. From July 2010, it was replaced by the average base rate charged by the five largest commercial banks. We use the seasonally adjusted headline CPI inflation as an outcome variable. Another outcome variable is the output gap measured using the Index of Industrial Production (IIP). Since IIP covers only the manufacturing sector, we interpret the results on transmission to output with adequate caution.
We motivate our choice of endogenous variables in the VAR model using a modified version of the simple, open-economy New Keynesian model developed by Adam et al. (2016). The model consists of an IS equation, a New Keynesian Phillips curve, an uncovered interest parity condition, an interest rate pass-through equation, and a Taylor-type monetary policy rule. Consistent with this model, we estimate a VAR for India with five endogenous variables: output gap, inflation rate, exchange rate, bank lending rate and the monetary policy instrument.
Shocks to the world food and energy prices may exert important effects on inflation in India. Since India is less likely to affect world food and energy prices, these prices measured in US dollars can be considered exogenous to developments in India. So we include these as exogenous variables in some versions of our estimated VARs. This is important because to the extent that shocks to either of these variables may help predict future headline CPI inflation in India, excluding them would undermine the identification of monetary policy shocks in India.
We follow two alternative identification schemes in the paper. In the first, the monetary policy variable is ordered first, reflecting the assumption that the RBI does not observe (or does not react to) macroeconomic variables within the month, while the macro variables are potentially affected by monetary policy shocks contemporaneously. The ordering is: the monetary policy variable, followed by the bank lending rate, output gap, CPI inflation and the exchange rate.
In the second scheme, the RBI responds to macro variables within the month, but those variables in turn respond to monetary policy only with a lag. The monetary policy variable is ordered last in this scheme, and the ordering of the other variables remains the same.
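Both recursive schemes map directly into code, since a Cholesky factorisation in the column order of the data delivers the orthogonalised shocks. A minimal sketch in R with the vars package; `macro` is a placeholder monthly data set, and the lag length `p = 2` is chosen purely for illustration:

```r
library(vars)

# Scheme 1: policy ordered first (macro variables can react within the month).
Y1 <- macro[, c("policy", "lending", "gap", "infl", "fx")]
# Scheme 2: policy ordered last (RBI reacts within the month, economy lags).
Y2 <- macro[, c("lending", "gap", "infl", "fx", "policy")]

X <- macro[, c("world_food", "world_energy")]  # exogenous world prices

m1 <- VAR(Y1, p = 2, type = "const", exogen = X)

# Orthogonalised IRF: response of the lending rate to a policy shock.
ir <- irf(m1, impulse = "policy", response = "lending",
          n.ahead = 24, boot = TRUE, ci = 0.90)
plot(ir)
```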
### Results
Across both identification schemes and for all four monetary policy measures, a tightening of monetary policy is associated with an increase in bank lending rates. However, the effect is statistically different from zero only at the 90 percent confidence level. This suggests that there is weak evidence for the first stage of transmission in the bank lending channel.
The effect of monetary policy changes on bank lending rates is hump-shaped, with the peak effects appearing between 5 and 10 months in all the cases considered.
The pass-through from the policy rate to bank lending rates is incomplete. For example, an increase of 25 basis points in the repo rate is associated with an increase in the bank lending rate of only about 10 basis points.
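The implied degree of pass-through follows directly from these two numbers:

```r
# Pass-through implied by the example in the text.
delta_policy  <- 25   # bp change in the repo rate
delta_lending <- 10   # bp response of the bank lending rate
delta_lending / delta_policy   # 0.4, i.e. roughly 40% pass-through
```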
The effect of monetary policy changes on the exchange rate is not statistically significant for any of the four monetary policy measures used. This suggests a non-existent exchange rate channel of MPT in India.
Our results provide no support for the second stage of transmission in the bank lending channel. We do not find evidence of an effect of monetary policy changes on either the CPI inflation rate or the output gap.
### Conclusion
A low degree of de facto capital mobility, RBI's interventions in the foreign exchange market, and the structure of the financial system suggest that the exchange rate and the asset price channels of MPT have low effectiveness in India. The burden of monetary transmission is likely to fall on the bank lending or interest rate channel. We present new evidence in our paper that the bank lending channel of MPT does not work well either.
With the adoption of IT, RBI has taken a step in the right direction. The enactment of the law by itself will not achieve price stability. A strong transmission mechanism from the policy rate to aggregate demand is crucial for the successful implementation of the new monetary policy framework. The legal mandate of IT must now be used to improve the effectiveness of MPT.
### References
Das, Abhiman, Prachi Mishra, and Nagpurnanand Prabhala (2015), The Transmission of Monetary Policy Within Banks: Evidence from India, mimeo.
Li, Bin Grace, Stephen O'Connell, Christopher Adam, Andrew Berg, and Peter Montiel (2016), VAR meets DSGE: Uncovering the Monetary Transmission Mechanism in Low-Income Countries, IMF Working Paper, No. 16/90.
Mishra, Prachi, Peter J. Montiel, and Antonio Spilimbergo (2012), Monetary Transmission in Low-Income Countries, IMF Economic Review, 60, 270-302.
Mishra, Prachi, Peter J. Montiel and Rajeswari Sengupta (2016), Monetary Transmission in Developing Countries: Evidence from India, IMF Working Paper, No. 16/167.
Rajeswari Sengupta is a researcher at the Indira Gandhi Institute of Development Research, Bombay.
http://openstudy.com/updates/50a4954ee4b0f1696c139fcb
Batman_ProtectorofGotham: Find the limit.
1. rob1525: At this point it's infinity. ;)
2. Batman_ProtectorofGotham: $\lim_{x \rightarrow 0} (3x+2+(1/x^2))$
3. math_proof: that's an interesting one :p
4. rob1525: I would multiply the whole function by x^2 to get it out of the denominator, then plug 0 in for x.
5. math_proof: are you allowed to do this?
6. math_proof: i don't think you can
7. rob1525: actually no. Limit does not exist. sorry.
8. rob1525: `L[x:0,3x+2+((1)/(x^(2)))]`
9. math_proof: it's infinity, right?
10. math_proof: wow, batman is not even here, but we're sitting here figuring out someone's problem
11. rob1525: ;)
12. rob1525: I graphed it, looks like it goes to positive infinity.
13. rob1525: Okay, I checked with the calculator function and both forms confirm that it goes to infinity.
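The conclusion can be pinned down analytically: as $x \rightarrow 0$, $3x + 2 \rightarrow 2$ while $1/x^2 \rightarrow +\infty$ from both sides, so the sum diverges to $+\infty$ (the limit does not exist as a finite number). A quick numerical check in R:

```r
f <- function(x) 3 * x + 2 + 1 / x^2

# Approach 0 from both sides; the 1/x^2 term dominates either way.
sapply(c(0.1, 0.01, 0.001, -0.001), f)
# 102.3  10002.03  1000002.003  1000001.997
```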
https://ai.stackexchange.com/questions/16863/does-the-substituted-variable-constant-have-to-appear-in-the-unified-term | Does the substituted variable/constant have to appear in the unified term?
I'm checking out how to manually apply resolution on a first-order predicate logic knowledge base, and I'm confused about what is and is not allowed in the algorithm.
Let's say that we have the following two clauses (where $$A$$ and $$B$$ are constants):
$$\neg P(A, B) \vee H(A)$$
$$\neg L(x_1) \vee P(x_1, y_1)$$
If I try to unify these two clauses by making the substitutions $$\{x_1/A, y_1/B\}$$, do I get $$\neg L(A) \vee H(A)$$? Is it allowed to substitute $$y_1$$ by $$B$$ even if $$B$$ doesn't appear in the unified clause?
Then we have the other way around where:
$$\neg P(A, y_1) \vee H(A)$$
$$\neg L(x_1) \vee P(x_1, B)$$
Can I do $$\{x_1/A, B/y_1\}$$ for $$\neg L(A) \vee H(A)$$?
$$\neg P(A, z_1) \vee H(A)$$
$$\neg L(x_1) \vee P(x_1, y_1)$$
Can I substitute $$\{x_1/A, y_1/z_1\}$$ and get $$\neg L(A) \vee H(A)$$?
Finally, there are also cases where we have something like this:
$$\neg P(x_2, y_2) \vee H(z_1)$$
$$\neg L(x_1) \vee P(x_1, y_1)$$
Can we do $$\{x_1/x_2, y_1/y_2\}$$ to get $$\neg L(x_3) \vee H(z_2)$$?
I'm really confused about when unification succeeds once we have two clauses with a literal of the same kind (negated in one of them and not in the other) that are candidates for unification.
For the first case, you can generally substitute variables with constants. Hence, you can make the substitution $$\theta \leftarrow \{x_1 \leftarrow A, y_1 \leftarrow B \}$$. This is used very commonly when you want to infer some query $$\alpha$$ from your knowledge base. $$\alpha$$ is usually of the form $$P(A,B)$$, as you have mentioned. When you unify, you get $$\neg L(A) \vee H(A)$$, and your substitution has to stay the same throughout the resolution algorithm. That is, you cannot later substitute $$x_1$$ with another constant or variable.
Regarding the second case, you cannot substitute the constant $$B$$ with a variable: substitutions replace variables with terms, never constants.
In general, you can substitute a variable with a constant, or with another variable. You can also substitute a variable with a Skolem function, e.g. $$x_1 \leftarrow G(y)$$. However, you cannot substitute $$x \leftarrow F(x)$$, in which the same variable occurs inside the substituted term - this restriction is the occurs check.
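It is worth seeing why the occurs check must reject $$x \leftarrow F(x)$$. For any substitution $$\theta$$, write $$x\theta = t$$; then $$F(x)\theta = F(x\theta) = F(t)$$. Unifying $$x$$ with $$F(x)$$ would require $$t = F(t)$$, but $$F(t)$$ contains $$t$$ as a proper subterm, so no finite term satisfies the equation. Hence $$x$$ and $$F(x)$$ have no unifier.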
https://www.semanticscholar.org/paper/Massive-Neutrinos%3A-Phenomenological-and-Perez-Gonzalez/ca872bd5736ab476b0e2ce1bda0ace7d69fc65e9 | • Corpus ID: 119437260
# Massive Neutrinos: Phenomenological and Cosmological Consequences
@article{PerezGonzalez2017MassiveNP,
title={Massive Neutrinos: Phenomenological and Cosmological Consequences},
author={Yuber F. Perez-Gonzalez},
journal={arXiv: High Energy Physics - Phenomenology},
year={2017}
}
• Y. Perez-Gonzalez
• Published 18 December 2017
• Physics
• arXiv: High Energy Physics - Phenomenology
In this thesis we will address three different phenomena related to neutrino physics: mass models, detection of the cosmic neutrino background and the neutrino background in Dark Matter searches, considering the different characteristics in each case. In the study of neutrino mass models, we will consider models for both Majorana and Dirac neutrinos; specifically, we will probe the neutrinophilic two-Higgs-doublet model. Regarding the detection of relic neutrinos, we will analyse the…
1 Citation
### Neutrino discovery limit of Dark Matter direct detection experiments in the presence of non-standard interactions
• Physics
Journal of High Energy Physics
• 2018
Abstract: The detection of coherent neutrino-nucleus scattering by the COHERENT collaboration has set on quantitative grounds the existence of an irreducible neutrino background in direct detection
## References
(Showing 1-10 of 238 references.)
### Dirac neutrinos from a second Higgs doublet
• Physics
• 2009
We propose a minimal extension of the standard model in which neutrinos are Dirac particles and their tiny masses are explained without requiring tiny Yukawa couplings. A second Higgs doublet with a
### Impact of Beyond the Standard Model physics in the detection of the Cosmic Neutrino Background
• Physics
• 2017
Abstract: We discuss the effect of Beyond the Standard Model charged current interactions on the detection of the Cosmic Neutrino Background by neutrino capture on tritium in a PTOLEMY-like detector.
### Dark matter origins of neutrino masses
• Physics
• 2015
We propose a simple scenario that directly connects the dark matter (DM) and neutrino mass scales. Based on an interaction between the DM particle $\chi$ and the neutrino $\nu$ of the form
### Detecting non-relativistic cosmic neutrinos by capture on tritium: phenomenology and physics potential
• Physics
• 2014
We study the physics potential of the detection of the Cosmic Neutrino Background via neutrino capture on tritium, taking the proposed PTOLEMY experiment as a case study. With the projected energy
### Dark matter and exotic neutrino interactions in direct detection searches
• Physics
• 2017
Abstract: We investigate the effect of new physics interacting with both Dark Matter (DM) and neutrinos at DM direct detection experiments. Working within a simplified model formalism, we consider
### Calculation of the local density of relic neutrinos
• Physics
• 2017
Nonzero neutrino masses are required by the existence of flavour oscillations, with values of the order of at least 50 meV. We consider the gravitational clustering of relic neutrinos within the
### CP violation and baryogenesis due to heavy Majorana neutrinos
We analyze the scenario of baryogenesis through leptogenesis induced by the out-of-equilibrium decays of heavy Majorana neutrinos and pay special attention to CP violation. Extending a recently
https://infoscience.epfl.ch/record/30365 | ## Evaluation of photocatalytic disinfection of crude water for drinking-water production
TiO2-photocatalysis disinfection experiments were carried out with both E. coli in distilled water and natural water samples from the Cauca River (Cali, Colombia). The experiments performed with E. coli in distilled water showed no increase in cell concentration after the treated solution was left in the dark for 24 h. However, the experiments carried out with natural water samples showed a drastic increase in culturable cell concentration 24 h after the irradiation was stopped. This demonstrates the lack of residual disinfecting effect of TiO2 photocatalysis.
Published in:
Journal of Photochemistry and Photobiology A: Chemistry, 147, 3, 241-246
Year:
2002
https://stats.stackexchange.com/questions/32727/normality-of-residuals-in-a-regression-model-with-a-categorical-iv | Normality of residuals in a regression model with a categorical IV
I have run a simple regression with a continuous response variable and a categorical explanatory variable (with 2 levels). I am currently checking that the model meets the assumptions of regression, and I produced a residual plot (not included here).
I'm aware that I need to check that the residuals are normally distributed. Do I need to check the distribution of residuals at each of the 2 levels of the explanatory variable? Or do I need to check the distribution of all residuals simultaneously?
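For what it's worth, here is a minimal Python sketch (mine, with made-up data, using numpy/scipy) of one standard check. With a single two-level predictor, the OLS fitted values are just the two group means, so the residuals the assumption concerns are the deviations from those means, pooled across both levels.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Made-up data: continuous response, categorical predictor with 2 levels.
y_a = rng.normal(10.0, 2.0, size=50)   # responses at level A
y_b = rng.normal(12.5, 2.0, size=50)   # responses at level B

# With a single two-level predictor, the OLS fitted values are the group
# means, so the residuals are just deviations from each group's mean.
resid = np.concatenate([y_a - y_a.mean(), y_b - y_b.mean()])

# The normality assumption concerns the pooled residuals.
print(stats.shapiro(resid))   # Shapiro-Wilk test of the pooled residuals
# stats.probplot(resid, plot=ax) would draw the usual Q-Q plot.
```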
• @gung, you don't need homogeneity of variance to do a two sample $t$-test, do you? Jul 21, 2012 at 16:20
https://email.esm.psu.edu/pipermail/macosx-tex/2009-November/041922.html | # [OS X TeX] Beamer / pgfpages question
J. McKenzie Alexander jalex at lse.ac.uk
Sun Nov 15 14:30:44 EST 2009
Hello,
I've configured beamer so that when I invoke it with the "handout" option it generates output with my slides configured in a 2x2 form and a border around them. The code I have in the preamble to do this is the following:
\mode<handout>{%
\pgfpagesuselayout{4 on 1}[a4paper,landscape,border shrink=5mm]
\pgfpageslogicalpageoptions{1}{border code=\pgfstroke}
\pgfpageslogicalpageoptions{2}{border code=\pgfstroke}
\pgfpageslogicalpageoptions{3}{border code=\pgfstroke}
\pgfpageslogicalpageoptions{4}{border code=\pgfstroke}
}
The problem is that when the presentation contains a number of pages which is not a multiple of four, this particular setup causes a number of empty square "pages", with the frame drawn around them, to appear on the last page of the handout. I tried to fix this by clearing out some of the logical page options I've passed on to beamer as follows:
%
% The following frame is the last frame of the presentation,
% and it is the only frame appearing on the last page
% of the handout. Hence logical pages 2-4 should have
% their border code cleared out so that no frame is drawn.
%
\clearpage
\pgfpageslogicalpageoptions{1}{border code=\pgfstroke}
\pgfpageslogicalpageoptions{2}{}
\pgfpageslogicalpageoptions{3}{}
\pgfpageslogicalpageoptions{4}{}
\begin{frame}[allowframebreaks]
\bibliographystyle{plainnat}
\bibliography{/Volumes/iDisk/Documents/Bibliography.bib}
\end{frame}
That doesn't work, though. Any suggestions? This is a bit too much low-level hacking for me to solve on my own...
Many thanks,
Jason
--
Dr J. McKenzie Alexander
Department of Philosophy, Logic and Scientific Method
London School of Economics and Political Science
Houghton Street, London WC2A 2AE
http://jimmyjamesk.com/tt0nvh/0a0222-combinations-with-repetition-and-restrictions | # combinations with repetition and restrictions
This page is prompted by a textbook exercise: in how many ways can scoops be chosen for 12 ice-cream cones (one scoop per cone, order irrelevant) from 31 flavours, if no flavour may be ordered more than 11 times? The answer seems rather simple:

$$\binom{n+r-1}{r} - 31 = \binom{31+12-1}{12} - 31,$$

that is, all possible combinations of flavours with repetition, minus the 31 "bad" choices in which a single flavour fills all 12 cones.

In general, an $r$-combination with repetition selects $r$ items from $n$ types: as with ordinary combinations, order does not matter, but the same type may be selected more than once. The count is

$$\binom{n+r-1}{r},$$

by the "stars and bars" argument: write a selection as $r$ stars separated by $n-1$ dividers. For example, to choose 3 tea bags from 6 flavours (Black, Chamomile, Earl Grey, Green, Jasmine and Rose), arrange 3 x's and 5 dividers, as in x|x||x|| ; once the positions of the 3 x's are fixed, the placement of the 5 dividers is automatically determined, so there are $\binom{3+6-1}{3} = \binom{8}{3} = 56$ ways.

Restrictions are handled either by setting aside the forced items or by counting the complement. With five types of soda, for instance:

- Select 25 cans with at least seven Dr. Peppers: seven cans are already chosen, so you are really choosing the remaining $25-7=18$ cans freely, giving $\binom{18+5-1}{18}$ selections.
- Select 25 cans with no more than three Dr. Peppers: this is harder to count directly, so use the complement. "Four or more Dr. Peppers" fixes four cans and leaves 21 free choices, $\binom{21+5-1}{21} = 12650$ ways; subtracting from the $\binom{25+5-1}{25} = 23751$ unrestricted selections leaves $23751 - 12650 = 11101$ ways.

By contrast, when order does matter and repetition is allowed, the number of arrangements is $n^r$: there are $10^5 = 100\,000$ five-digit strings, and $26^3 \times 10^3$ license plates consisting of three letters followed by three digits.
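These counts are easy to sanity-check in Python: `math.comb` supplies the binomial coefficients, and brute-force enumeration (feasible only for tiny cases) confirms the stars-and-bars formula. The function names below are mine, not from the original page.

```python
from math import comb
from itertools import combinations_with_replacement

def multichoose(n, r):
    """r-combinations with repetition from n types: C(n+r-1, r)."""
    return comb(n + r - 1, r)

# Brute-force check of a small case: 3 tea bags from 6 flavours.
assert multichoose(6, 3) == sum(1 for _ in combinations_with_replacement(range(6), 3)) == 56

# 12 cones from 31 flavours, no flavour more than 11 times:
print(multichoose(31, 12) - 31)

# 25 cans from 5 soda types, at most three Dr. Peppers:
total = multichoose(5, 25)              # 23751 unrestricted selections
at_least_four = multichoose(5, 25 - 4)  # fix four Dr. Peppers: 12650
print(total - at_least_four)            # 11101
```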
https://chemistry.stackexchange.com/questions/57396/what-factors-affect-the-chemical-stability-of-a-compound/57404 | # What factors affect the chemical stability of a compound? [closed]
My question is essentially about the factors that make certain atoms in certain compounds leave the compound they are currently in, in order to bond with another atom or compound.
For example, I've been taught that if you react a haloalkane with a strong base, like KOH, a substitution occurs in which the OH group and the halogen swap places. Why does this happen (if it does)?
Would an electronegative atom preferentially bond with another extremely electronegative atom, or with one of very low electronegativity? Basically, would fluorine rather bond with another fluorine, or with something like caesium?
$$\ce{Cs + F \to CsF}$$ $$\ce{F + F \to F2}$$
My thought is that fluorine 'loves' electrons so much, and nothing loves electrons more than fluorine, that F2 has to be the most stable compound of all, one that no other compound or atom could dissociate.
• There is a nice video about the reaction between cesium and fluorine: youtu.be/TLOFaWdPxB0 – SteffX Aug 14 '16 at 21:11
https://www.allaboutcircuits.com/technical-articles/ltspice-performance-analysis-of-a-precision-current-pump/ | Technical Article
# LTspice Performance Analysis of a Precision Current Pump
October 20, 2020 by Robert Keim
## In this article, we will use simulations to assess important aspects of the performance of an op-amp-based current source.
The previous article introduced a circuit that I am referring to as the two-op-amp current source (or current pump).
Here’s the schematic:
##### Diagram of a precision current pump. Image used courtesy of Analog Devices
I presented an LTspice implementation of this topology, and we looked at the results of a basic simulation. However, I would like to know more about this circuit, especially since it is described as a precision current pump. What kind of precision can we really expect from this circuit?
1. How precise is the output current under ideal conditions?
2. How is the precision of the output current influenced by load variations?
3. What is the typical and worst-case precision when resistor tolerances are taken into account?
### Baseline Precision
This is the circuit that we’ll use for the first simulation:
The voltage applied to the differential input stage changes from –250 mV to 250 mV during a 100 ms interval. The formula that relates input voltage to output current tells us that the current flowing through the load should be VIN/100.
To see how closely the generated load current matches the theoretical prediction, we will plot the difference between the simulated load current and the mathematically calculated load current.
The error is extremely small, and its magnitude varies in proportion to the magnitude of the load current.
When we’re talking about a voltage regulator, load regulation refers to the regulator’s ability to maintain a constant voltage despite variations in load resistance. We can apply this same concept to a current source: How well does the circuit maintain the specified output current for different values of RLOAD?
For this simulation, we’ll provide a fixed input voltage of 250 mV, and we’ll use a “step” directive to vary the load from 1 Ω to 1000 Ω in 10 Ω steps.
A “measure” directive allows us to plot error versus the stepped parameter (i.e., the load resistance) rather than versus time; this is accomplished by opening the error log (View -> SPICE Error Log), right-clicking, and selecting “Plot .step’ed .meas data.”
For larger load resistances, the output-current error does increase significantly—from about 50 nA to 800 nA. However, 800 nA is still a very small error.
How much do you think the load regulation will change if we replace the ideal op-amp with a macromodel intended to approximate the performance of a real op-amp? Let’s take a look.
The percentage of variation in output error is quite similar. In the first simulation, the error increased by a factor of 15.7 over the range of load resistance. In the second simulation, where I used the macromodel for the LT1001A, it increased by a factor of 12.1.
What’s interesting is that the LT1001A performed better than the LTspice “ideal single-pole operational amplifier”—the magnitude of the error was much lower over the entire range, and the error was more stable relative to load resistance. I’m not sure how to explain that. Maybe the ideal single-pole op-amp isn’t as ideal as I thought.
### The Effect of Resistor Tolerances
We don’t need simulations to determine the effect of variations in the resistance of R1; the mathematical relationship between input voltage and output current gives us a clear idea of how much error will be introduced by an R1 value that deviates from the nominal value.
Also, the circuit diagram taken from the app note indicates how the ratio of R4 to R2 will affect output current, since this ratio determines AV, and IOUT is directly proportional to VIN multiplied by AV.
Less clear, however, is the effect of imperfect matching between resistors. The circuit diagram indicates that R2 and R3 should be matched and that R4 and R5 should be matched. We can investigate this by performing a Monte Carlo simulation in which resistor values are varied within their tolerance range.
If the simulation includes a large number of Monte Carlo runs, the maximum and minimum errors reported in the simulation results can be interpreted as the worst-case error associated with resistor tolerance.
For this simulation, we will leave R2 and R4 fixed at 100 kΩ; this prevents variations in AV. We will degrade the circuit’s matching by applying the Monte Carlo function to the values of R3 and R5.
As indicated by the “step” SPICE directive, one simulation consists of 100 runs. The value “mc(100k,0.01)” specifies a nominal resistance of 100 kΩ with a tolerance of 1%.
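To build intuition for what those 100 runs sample, here is a small Python sketch of mine that mimics LTspice's mc(value, tol) function (a uniform draw within value·(1 ± tol)) and reports the worst-case fractional mismatch of the nominally matched resistor pairs. How a given mismatch maps to output-current error depends on the pump's transfer function, so this only characterizes the resistor spread, not the current error itself.

```python
import numpy as np

rng = np.random.default_rng(1)

def mc(value, tol, n):
    """Mimic LTspice's mc(value, tol): a uniform draw in value*(1 +/- tol)."""
    return rng.uniform(value * (1 - tol), value * (1 + tol), size=n)

runs = 100
R2 = R4 = 100e3                # held at their nominal value, as in the article
R3 = mc(100e3, 0.01, runs)     # corresponds to {mc(100k, 0.01)}
R5 = mc(100e3, 0.01, runs)

# Fractional mismatch of the two nominally matched pairs, per run.
mismatch = np.concatenate([R3 / R2 - 1, R5 / R4 - 1])
print(f"worst-case pair mismatch over {runs} runs: {np.max(np.abs(mismatch)):.3%}")
```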
Here is a plot of output-current error for the 100 runs.
The average error is 15.6 µA, which is 0.6% of the expected 2.5 mA output current, and under worst-case conditions, the actual output current deviates from the expected current by approximately 40 µA.
I’d call that very good precision. Let’s see how the situation improves when we use 0.1% tolerance instead of 1%.
Now the average error is 1.6 µA, which is only 0.06% of the expected output current, and the worst-case error has decreased into the 4 µA range.
### Conclusion
We’ve carried out LTspice simulations that have provided valuable insight into the performance of the two-op-amp current pump.
Resistive tolerance of 1%, with the resistors that determine input gain fixed at their theoretical value, allows for high precision. A tolerance of 0.1% applied to all resistors would provide good performance, and since 0.1% resistors are readily available and not expensive, I agree with the author of the app note when he recommends 0.1% tolerance rather than 1% tolerance.
Analog_Tim October 26, 2020
Current sources always make for an interesting article. Many thanks for sharing this. Is the plot for the 1% resistor Monte Carlo run right? It looks the same as the 0.1% resistor run.
• RK37 October 27, 2020
Thanks for pointing that out! There were some image mix-ups when the article was being prepared for publication. Everything is fixed now.
apkemu November 01, 2020
It seems to me that output_error is measured in nV instead of nA, according to the measure directive output_error avg (V…-V…), and that changes everything… except output_error avg (I)
https://indico.desy.de/indico/event/19954/ | European XFEL Science Seminar
# Ultra-high energy density conditions produced by intense laser irradiation of mass-limited targets
## by Paul Neumayer (GSI Helmholtzzentrum für Schwerionenforschung GmbH)
Tuesday, March 13, 2018 (Europe/Berlin)
at Schenefeld, Main Building (XHQ) ( E1.173 )
Description: In relativistic laser-matter interaction, a large fraction of the laser energy is converted to a population of energetic (MeV) electrons. When using “mass-limited” targets (of dimensions smaller than the hot-electron range), the hot electrons are confined within the target by electrostatic sheath fields, leading to the generation of matter at high-energy-density conditions. In this talk I will present a selection of experiments subjecting micron-sized targets to energetic laser pulses at laser intensities up to 10^20 W/cm^2. Broadband K-shell emission x-ray spectroscopy is used to assess target heating and ionization. Negligible hydrodynamic expansion during the rapid heating process yields pressures in the Gbar regime. X-ray free-electron lasers are an ideal tool to probe the rapid temporal evolution of these micron-sized samples at ultra-high energy density.
Organised by Karen Appel
https://codegolf.stackexchange.com/questions/201560/pendulum-encoding | Pendulum Encoding
Given a string as an input (which can be any acceptable/convenient format in your language), implement pendulum encoding. The test cases are split into individual items (which aren't quoted) for a visually appealing explanation.
How do I do that?
The current iteration index starts at 0.
• If the iteration index is even, append the current item onto the output string.
• If the iteration index is odd, prepend the current item onto the output string.
An example
The input is [a b c d e f g].
Note that the letters a-g are individual one-character strings, to prevent confusion from the iteration index.
N: the iteration index
N:0 Out: [a]
N:1 Out: [b a]
N:2 Out: [b a c]
N:3 Out: [d b a c]
N:4 Out: [d b a c e]
N:5 Out: [f d b a c e]
N:6 Out: [f d b a c e g]
The output should be [f d b a c e g].
Another example
The input is [u d l n u e m p].
N:0 Out: [u]
N:1 Out: [d u]
N:2 Out: [d u l]
N:3 Out: [n d u l]
N:4 Out: [n d u l u]
N:5 Out: [e n d u l u]
N:6 Out: [e n d u l u m]
N:7 Out: [p e n d u l u m]
Test cases
Here's a sample program doing this encoding.
Take note that the characters in the string aren't always unique.
Your output *has* to be flattened.
[a,b,c,d,e,f,g] -> [f,d,b,a,c,e,g]
[] -> []
[a] -> [a]
[a,b,c,d] -> [d,b,a,c]
[a,b] -> [b,a]
[a,b,d] -> [b,a,d]
[a,b,a,c,b,c] -> [c,c,b,a,a,b]
[a,a,b,b,c,c] -> [c,b,a,a,b,c]
[u,d,l,n,u,e,m,p] -> [p,e,n,d,u,l,u,m]
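For reference, a minimal Python implementation of the encoding, checked against the test cases above (the sample-program link itself is not reproduced here):

```python
def pendulum(seq):
    """Append on even iteration indices, prepend on odd ones."""
    out = []
    for i, item in enumerate(seq):
        if i % 2 == 0:
            out.append(item)
        else:
            out.insert(0, item)
    return out

assert pendulum(list("abcdefg")) == list("fdbaceg")
assert pendulum(list("udlnuemp")) == list("pendulum")
assert pendulum([]) == []
```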
• Can I/O be a string? If not, may we output a 2D-array? – Shaggy Mar 23 at 8:52
• @Shaggy Yes, I/O can be a string. Everyone else seems to be doing that. – petStorm Mar 23 at 9:15
Python 3, 29 bytes
lambda l:l[1::2][::-1]+l[::2]
Try it online!
Input: A sequence
Output: The pendulum encoding of that sequence
How
Consider the sequence [0,1,2,3,4,5], whose pendulum encoding is [5,3,1,0,2,4]. We can see that all even indices ended up in order on the right, and all odd indices are in reversed order on the left.
• l[1::2][::-1] takes all odd indices and reverses them, e.g. [5,3,1]
• l[::2] takes all even indices, e.g. [0,2,4]
• Too bad that you can’t simply write l[-2::-2] or similar instead of the longer and more complex l[1::2][::-1]. – David Foerster Mar 25 at 12:55
brainfuck, ~~24~~ ~~21~~ 18 bytes
,[[<],[>],<]>>[.>]
Try it online!
Thanks to Jo King for -3 bytes
,[ while input
[<], add new character to start of memory
[>], add new character to end of memory
< go one back, so the loop will run another time, moving the pointer to the start of memory
]
>>[.>] print memory
APL (Dyalog Unicode), 11 bytes
I promised you'll see at least one interesting answer :)
{⍵[⍋-\⍳≢⍵]}
Try it online!
Uses my own tip about -\⍳, specifically the ⍋ variation, to generate the permutation needed for this challenge.
How it works
⍋-\⍳≢⍵ generates the target permutation for both even- and odd-length arrays:
⍋-\⍳≢⍵ ⍝ Length-7 vector | Length-8 vector
≢ ⍝ Length
⍝ 7 | 8
⍳ ⍝ Range (1..n)
⍝ 1 2 3 4 5 6 7 | 1 2 3 4 5 6 7 8
-\ ⍝ Cumulative alternating difference
⍝ 1 -1 2 -2 3 -3 4 | 1 -1 2 -2 3 -3 4 -4
⍋ ⍝ Grade up; permutation that will sort the input array
⍝ 6 4 2 1 3 5 7 | 8 6 4 2 1 3 5 7
Then ⍵[...] arranges the original elements in that particular order.
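For readers less fluent in APL, the same permutation trick is easy to reproduce in Python (a sketch of mine, not part of the original answer): build the cumulative alternating differences 1, -1, 2, -2, … and "grade up" (argsort) them.

```python
def pendulum_perm(n):
    # 1, -1, 2, -2, ... (the cumulative alternating difference of 1..n),
    # then "grade up": the indices that would sort these values.
    vals = [(-1) ** k * (k // 2 + 1) for k in range(n)]
    return sorted(range(n), key=vals.__getitem__)

def pendulum_via_perm(seq):
    return [seq[i] for i in pendulum_perm(len(seq))]

print(pendulum_via_perm("abcdefg"))   # ['f', 'd', 'b', 'a', 'c', 'e', 'g']
```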
JavaScript, 36 bytes
Feels like forever since I posted a JS solution here!
Input as an array, output as a string. Handling the empty array cost 3 bytes.
a=>a.reduce((x,y,z)=>z%2?y+x:x+y,"")
Try it online!
• ... Did you outgolf Arnauld? – petStorm Mar 23 at 10:16
• @a'_' Not only that, but he managed to use reduce instead of map! – Neil Mar 23 at 11:25
Haskell, 37 bytes

([]#)
p#(a:b:s)=(b:p++[a])#s
p#l=p++l
Try it online!
Golfed 16 bytes thanks to @xnor
• 37 bytes by merging the base cases. Note that the f= doesn't have to be counted. – xnor Mar 23 at 7:47
• What does the first line do? – corvus_192 Mar 23 at 18:19
• @corvus_192 It is easier to see if you click the "try it online" button. ([]#) defines the actual function that I am submitting, where # is an auxiliary function that takes two arguments; the left argument (that starts as []) is the string I'm building and the right argument is the string to be "parsed" – RGS Mar 23 at 18:33
• @corvus_192 Also see this HaskellWiki entry. – Jonathan Frech Mar 24 at 3:55
05AB1E, 4 bytes
ιRì
I/O as a list of characters.
Explanation:
ι # Uninterleave the (implicit) input-list (into 2 parts by default for lists)
# i.e. ["u","d","l","n","u","e","m","p"] → [["u","l","u","m"],["d","n","e","p"]]
# Push both parts separated to the stack
R # Reverse the second part
# → ["p","e","n","d"]
ì # And prepend it in front of the first
# → ["p","e","n","d","u","l","u","m"]
# (after which the result is output implicitly)
JavaScript (ES6), 42 bytes
A recursive function taking and returning a string.
f=([c,...a],k,o='')=>c?f(a,!k,k?c+o:o+c):o
Try it online!
Red, 52 bytes
func[a][append reverse extract next a 2 extract a 2]
Try it online!
J, 13 bytes
{~[:/:[:-/\#\
Try it online!
A J port of Bubbler's APL solution - don't forget to upvote his answer!
#\ length of successive prefixes
-/\ cumulative alternating difference
[: function composition (caps the previous two verbs as a fork)
[: caps the fork
{~ use the list to index into the input (arguments reversed)
Perl 5+p, 30 bytes
$\=--$|%2?$\.$_:$_.$\for/./g}{
Try it online!
Keg, -lp, -ir, -hd, 15 bytes
⑫&(&⑶+&)⒁2%[⒂|&
Try it online!
Purely a literal interpretation of the question plus a bit to ensure even length strings don't break.
Explained
⑫&
First, we start by storing an empty string (pushed by ⑫) in the register. This will be used as the final output, meaning that it needs to be initialised.
(&⑶+&)
Then, we enter the main for loop, which has the implicit condition ! (take length of stack), as no explicit condition is provided. Now, there isn't anything on the stack at this point, so doing such a thing may seem pointless. But, by using the -lp flag (--lengthpops), we can have the ! command take input if the stack is empty and push the length of the input. Also, the -ir flag (--inputraw) ensures that the input word is taken as a series of letters, rather than a single string.
Inside the for loop, we push the contents of the register, reverse it (⑶ reverses the top item of the stack) and then add whatever is next to the register. By doing so, we achieve the process behind the main algorithm, as consecutive letters are appended in the desired order.
⒁2%
At this point, the encoding has been fully completed. However, if the string is of an even length, the result will be reversed in the register. This requires us to push the length of the register (⒁) and check to see if it is even.
[⒂|&
If the register is of odd length, then we reverse the register and push it onto the stack (⒂). Otherwise, we simply push the register. -hd will then ensure that only the top item on the stack is printed.
Japt -P, ~~7~~ ~~6~~ 5 bytes
I/O as a string.
ó ÔvÔ
Try it
Zsh, 26 bytes
for x y;a=($y$a $x) <<<$a
Try it online!
Input is a list of characters.
Retina, 20 bytes

*>^.(.)?
$1
1,2,.

Try it online! Link includes test suite. Explanation:

*>
Execute this stage and immediately output the result without actually changing the working string.

^
Before making the replacements, reverse their order.

.(.)?
$1
Keep only alternate characters.

1,2,.
Delete alternate characters.

29 bytes as a reusable function:

,V,2,
O^$.((?=(..)*$))?
$#1

Try it online! Link includes test cases. Explanation:

,V,2,
Reverse all the characters except alternate characters.

O^$.((?=(..)*$))?
$#1
Sort all characters by the parity of their position from the end, and then reverse the result. This means that alternate characters end up reversed and sorted to the start, leaving the remaining characters at the end, although technically having been reversed twice.
Charcoal, 13 bytes
F²«P✂θιLθ²←↷⁴
Try it online! Link is to verbose version of code. Explanation:
F²«
Loop over the even and alternate characters.
P✂θιLθ²
Print the current set of characters.
←↷⁴
Prepare to print the alternate characters backwards.
Java (JDK), 60 bytes
a->{var r="";for(var c:a)r=r.length()%2<1?r+c:c+r;return r;}
Try it online!
Alternative with Streams (62 bytes)
s->{int[]x={0};return s.reduce("",(a,b)->x[0]++%2<1?a+b:b+a);}
Try it online!
Ruby, 42 bytes
f=->s,*w{s ?f[s[2..-1],s[1],*w,s[0]]:w*''}
Try it online!
PHP, ~~49~~ 48 bytes
for(;$a=$argv[++$i];)$s=$i%2?$s.$a:$a.$s;echo$s;
Try it online!
Again, not a really great score for PHP...
EDIT: Thanks to @OlivierGrégoire for saving 1 byte
Icon, 62 bytes
procedure f(s)
t:=""
t[k:=|(0|1)\*s:k]:=pop(s)&\z
return t
end
Try it online!
Takes the input as a list of chars.
Icon, 66 bytes
procedure f(s)
t:=""
i:=1to*s&t[1-i%2:1-i%2]:=s[i]&\z
return t
end
Try it online!
Takes the input as a string.
procedure, return and end add a lot to the byte count :)
Icon's slice operator : can be used to insert substrings into strings, if the two indices are equal. Thus s[1:1]:="a" prepends s with "a"; s[0:0]:="b" appends "b" to s. I start with an empty string t, scan the input string s and use the odd/even index i with the slice operator to prepend/append to t.
C# (.NET Core), 93 bytes
public static string P(this string a){int z=0;return a.Aggregate("",(x,y)=>z++%2>0?y+x:x+y);}
Try it online!
https://www.physicsoverflow.org/839/papers-and-preprints-worth-reading-jan-midfeb-2012 | # Papers and preprints worth reading, Jan-midFeb 2012
Which recent (i.e. Jan-midFeb 2012) papers and preprint do you consider really worth reading?
References should be followed by a summary saying what the result is and (implicitly or explicitly) why it is important/interesting/insightful/...
One paper per answer (and start from its citation); you can add further papers, but only in the description (e.g. "It generalizes the ideas of X in [cite]...").
As it is a community wiki - vote whenever you consider a paper good. Also - feel free to contribute to others' posts.
See Journal club-like thing for a meta post.
This answer does not fit the format specified, so feel free to remove. I'd just like to point out that there is a place to share and discuss recent papers posted to the quant-ph arXiv at http://www.reddit.com/r/quantph/
@James I changed it into a comment. Thanks for the link (I knew SciRate but not this one; BTW, one can also follow it via RSS and share it on Google+); however, the philosophy behind this post is different - more about _selecting and introducing_ a paper to others than just _voting and commenting_.
Light-cone-like spreading of correlations in a quantum many-body system reports the first measurements of the speed at which quantum correlations spread in a quantum many-body system. Prior related theoretical works are [2,3,4]. Though the main innovations are plausibly the experimental techniques, I guess theorists should be aware of the results.
answered Feb 15, 2012 by (285 points)
http://arxiv.org/abs/1201.4390
The Operator Tensor Formulation of Quantum Theory
Lucien Hardy
A formal but, I think, pedagogical presentation of a formulation of quantum mechanics used successfully by Lucien Hardy. The operator tensor formulation makes intensive use of Penrose's diagrammatic notation, adapted to quantum operators. Quantum processes are viewed as circuits, in a way that decouples them from time evolution. Many interesting quantum phenomena thereby become visible, so I think this formulation should be part of the toolbox of anyone interested in the foundations of QM.
answered Feb 15, 2012 by (255 points)
http://prl.aps.org/abstract/PRL/v108/i7/e070402
Procedure for Direct Measurement of General Quantum States Using Weak Measurement
Jeff S. Lundeen and Charles Bamber
Arxiv version: http://arxiv.org/abs/1112.5471
By combining weak and strong measurements on complementary observables, on identically prepared ensembles, the real and imaginary parts of the uncollapsed wavefunction are obtained. This is a simpler and more direct alternative to Quantum State Tomography, but unlike the latter, it works only with pure states.
answered Feb 15, 2012 by (255 points)
+ 0 like - 0 dislike
http://arxiv.org/abs/1201.4276
Dream-land with Classic Higgs field, Dimensional Reduction and all that
D. V. Shirkov
A summary/continuation of the work of D. V. Shirkov and his collaborators on regularization by dimensional reduction, as an alternative to the Higgs boson in the standard model.
answered Feb 15, 2012 by (255 points)
+ 0 like - 0 dislike
http://arxiv.org/abs/1202.2274
Quantum Einstein Gravity
Martin Reuter, Frank Saueressig
A pedagogical but detailed review of the asymptotic safety program in Quantum Einstein Gravity.
answered Feb 15, 2012 by (255 points)
+ 0 like - 0 dislike
http://arxiv.org/abs/1202.0631
Quantum Cheshire Cats
Yakir Aharonov, Sandu Popescu, Paul Skrzypczyk
Another quantum paradox à la Aharonov: a pre- and post-selected experiment which separates the photon (cat) from its circular polarization (smile).
answered Feb 15, 2012 by (255 points)
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.50422602891922, "perplexity": 1191.3419112984432}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934807044.45/warc/CC-MAIN-20171123233821-20171124013821-00515.warc.gz"}
https://algassert.com/circuits/2015/06/22/Using-Quantum-Gates-instead-of-Ancilla-Bits.html | # Using Quantum Gates instead of Ancilla Bits
22 Jun 2015
This post is the third, and final, part of a worked solution to the following exercise:
In part 1, Constructing Large Controlled Nots, I solved the classical version of the exercise (where an ancilla bit is required).
In part 2, Constructing Large Increment Gates, I did the same thing (one ancilla bit, only classical gates) but for increment operations.
In this part, we're going to use quantum gates (and the constructions from the previous two parts) to avoid that pesky ancilla bit. The basic progression will be as follows:
1. Split operations with many controls into sub-operations with fewer controls, by using the fact that quantum operations have square roots.
2. Rewrite operations so that controls are only applied to NOT gates.
3. Fix smaller and smaller phase shift errors resulting from the above processes.
4. Re-arrange the resulting mish-mash of controlled-NOTs into increment and decrement operations.
5. Apply the constructions from the previous parts to reduce the remaining large operations into a linear number of Toffoli-or-smaller operations.
(Side note: It's a bit disconcerting to spill nine thousand words on such a short exercise, but I suppose that's the nature of having to explain one level back and noting nearby things instead of just showing the end result.)
# Quantum Operations Have Roots
The fundamental property that will allow us to avoid using an ancilla bit, now that we have access to quantum gates, is that every quantum operation has a square root and an inverse. Given a quantum operation $U$, you can find a quantum operation $\sqrt{U}$ such that applying $\sqrt{U}$ to a circuit twice has the same effect as applying $U$ once. Similarly, applying $U$ and then its inverse $U^\dagger$ (or vice versa) will cause the operations to undo each other and result in a net effect of no effect.
Classically, not all operations have square roots. For example, the NOT gate has no classical square root. There's simply no boolean-to-boolean function $f$ such that $f(f(x)) = \overline{x}$. But, quantumly, the NOT gate does have a square root.
Every quantum operation is a unitary matrix, and every unitary matrix is a realizable quantum operation. Unitary matrices always have square roots, and so quantum operations do too. Let's compute a square root of the NOT gate, just to check that this is actually possible. Start with its unitary matrix:
$X = \bimat{0}{1}{1}{0}$
Then compute the eigenvalues and unit eigenvectors of that matrix:
$\lambda_1 = 1$, $v_1 = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 \\ 1 \end{bmatrix}$
$\lambda_2 = -1$, $v_2 = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 \\ -1 \end{bmatrix}$
Which gives us the eigendecomposition of the NOT gate's operation:
$\bimat{0}{1}{1}{0} = \lambda_1 v_1 v_1^{\dagger} + \lambda_2 v_2 v_2^{\dagger} = \lambda_1 \frac{1}{2} \bimat{1}{1}{1}{1} + \lambda_2 \frac{1}{2} \bimat{1}{-1}{-1}{1}$
The eigendecomposition form of a matrix is very useful because most functions, when applied to a matrix, correspond to simply transforming the eigenvalues. To find the square root of the matrix, we simply replace the eigenvalue coefficients with their square roots:
$\sqrt{\bimat{0}{1}{1}{0}} = \sqrt{\lambda_1} v_1 v_1^{\dagger} + \sqrt{\lambda_2} v_2 v_2^{\dagger} = \sqrt{1} \frac{1}{2} \bimat{1}{1}{1}{1} + \sqrt{-1} \frac{1}{2} \bimat{1}{-1}{-1}{1}$
And, by arbitrarily picking principal square roots, we find one of the square roots of NOT:
$\rightarrow \frac{1}{2} \bimat{1}{1}{1}{1} + i \frac{1}{2} \bimat{1}{-1}{-1}{1} = \frac{1}{2} \bimat{1+i}{1-i}{1-i}{1+i}$
You can check that squaring $\frac{1}{2} \bimat{1+i}{1-i}{1-i}{1+i}$ does in fact give you $\bimat{0}{1}{1}{0}$. Engineer a physical instantiation or simulation of $\frac{1}{2} \bimat{1+i}{1-i}{1-i}{1+i}$, apply it twice, and you will have performed a NOT operation.
The inverse of a unitary matrix is much easier to find than the square root: the inverse is simply the conjugate transpose of the matrix. For example, the inverse of $\frac{1}{2} \bimat{1+i}{1-i}{1-i}{1+i}$ is just $\frac{1}{2} \bimat{1-i}{1+i}{1+i}{1-i}$ (another square root of NOT).
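If you'd like to sanity-check this numerically, a few lines of numpy reproduce it (this snippet is mine, not part of the original post; `scipy.linalg.sqrtm` would give the same principal root):

```python
import numpy as np

X = np.array([[0, 1],
              [1, 0]], dtype=complex)

# Eigendecomposition route from the text: take principal roots of the eigenvalues.
vals, vecs = np.linalg.eigh(X)
sqrt_X = vecs @ np.diag(np.sqrt(vals.astype(complex))) @ vecs.conj().T

assert np.allclose(sqrt_X, 0.5 * np.array([[1 + 1j, 1 - 1j],
                                           [1 - 1j, 1 + 1j]]))
assert np.allclose(sqrt_X @ sqrt_X, X)                   # squaring gives NOT
assert np.allclose(sqrt_X @ sqrt_X.conj().T, np.eye(2))  # conjugate transpose inverts
```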
# Removing a Control
If you have some operation $U$ with multiple controls, and access to a gate that is the square root of $U$ as well as a gate that is the inverse of that square root, then you can rewrite the circuit to use fewer controls per operation. The construction for doing so is detailed on page 182 of Nielsen and Chuang's textbook. Here it is (note that the top "wire" is actually a bundle of $n$ wires):
Let's convince ourselves, by case analysis, that the above control-reducing construction works. We want there to be no effect if any of the control wires is OFF, but if all of the control wires are ON then the overall effect should be a $U$ applied to the target wire. There are four cases to consider:
• OFF, OFF: If any of the wires in the top wire bundle is OFF, and the middle control wire is also OFF, then none of the operations happen. Clearly this is a no-op overall.
• ON, OFF: If all of the control wires in the top wire bundle are ON, but the middle control wire is OFF, then the middle wire will be temporarily toggled ON during the $\sqrt{U}^\dagger$ gate but back OFF for the $\sqrt{U}$ gate that depends on the middle wire. The $\sqrt{U}$ gate that depends on the top control wires does apply, though. Overall nothing happens, because $\sqrt{U}^\dagger \cdot \sqrt{U} = I$.
• OFF, ON: If any of the top wire bundle's wires are OFF, but the middle control wire is ON, then basically the same thing happens as in the ON, OFF case. The $\sqrt{U}^\dagger$ gate applies, but only one of the $\sqrt{U}$ gates applies, so they undo each other's effects.
• ON, ON: If all of the control wires are ON, then the $\sqrt{U}^\dagger$ gate will not occur (because the middle wire was temporarily toggled OFF) but both of the $\sqrt{U}$ gates will fire. So the net effect is a $\sqrt{U}^2 = U$ operation applied to the target wire.
By case analysis, we see that the control-reducing construction does in fact apply a $U$ if and only if all the controls are ON (and nothing happens otherwise).
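To make the case analysis concrete, here is a small numpy check of my own (using $U = X$, so the whole construction should equal a Toffoli gate):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
V = 0.5 * np.array([[1 + 1j, 1 - 1j], [1 - 1j, 1 + 1j]])  # sqrt(X) from earlier
P0 = np.array([[1, 0], [0, 0]], dtype=complex)            # |0><0| projector
P1 = np.array([[0, 0], [0, 1]], dtype=complex)            # |1><1| projector

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

# Wire 0 plays the (here single-wire) control bundle, wire 1 the middle
# control, wire 2 the target.
cnot_0_1 = kron3(P0, I2, I2) + kron3(P1, X, I2)           # wire 0 toggles wire 1
c1_Vdag  = kron3(I2, P0, I2) + kron3(I2, P1, V.conj().T)  # wire 1 controls V dagger
c1_V     = kron3(I2, P0, I2) + kron3(I2, P1, V)           # wire 1 controls V
c0_V     = kron3(P0, I2, I2) + kron3(P1, I2, V)           # wire 0 controls V

# Time order: CNOT, V dagger, CNOT, V, V; matrix products compose right-to-left.
construction = c0_V @ c1_V @ cnot_0_1 @ c1_Vdag @ cnot_0_1

toffoli = kron3(P0, I2, I2) + kron3(P1, P0, I2) + kron3(P1, P1, X)
assert np.allclose(construction, toffoli)
```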
# Iteratively Removing Controls Hits a Wall
By nesting the control-reducing construction inside of itself, again and again, we can remove many controls instead of just one:
Unfortunately, this stops working for the last control. The construction will still be correct when there's one control, in that the resulting circuit will be equivalent to the input circuit, but we won't end up with a circuit using no controls:
This is a problem, because the surviving controls are on gates that aren't NOT gates, and according to the rules of the exercise we aren't allowed to do that.
We will definitely need to apply the control-reducing construction once, because we need an uninvolved bit in order to apply the constructions from last time and the time before. However, after that single application, we should switch to a construction that moves controls off of special gates and onto NOT gates.
# Moving Controls to NOT Gates
It's possible to factor a controlled quantum operation into pieces that only have controls on NOT gates. The overall idea is to find matrices $A$, $B$, and $C$, plus a phase factor $e^{i\theta}$, such that $e^{i\theta} A \cdot X \cdot B \cdot X \cdot C = U$ and $A \cdot B \cdot C = I$. Those pieces allow you to do this:
When the control wire in the above circuit is OFF, the overall effect on the bottom wire is $A \cdot B \cdot C$. The controlled-NOTs don't fire because their controls are not satisfied, and the phase shift gate (the $Z(\theta)$ gate) doesn't fire because phase shifts only apply to the ON state. We required $A \cdot B \cdot C$ to be equal to the identity matrix, so nothing happened as required. If the control wire was ON, then the overall effect would be $e^{i\theta} A \cdot X \cdot B \cdot X \cdot C = U$, again as required.
There's a procedure for finding appropriate values for $A$, $B$, $C$, and $\theta$ from a given $U$, but explaining how takes a lot of words. If you're curious, see pages 176, 180, and 181 of the textbook.
For the purposes of this post, we'll only be working with the roots of $X$ gates, and $Z$ gates. I'll just provide some $A$, $B$, $C$, and $\theta$ values that work.
For phase shift gates, i.e. Z-axis rotations, moving controls onto NOT gates is relatively simple. You phase shift each of the two wires by half as much, phase shift in the opposite direction (also by half as much) when the wires differ, and that's all:
(Note that, by $\sqrt[n]{Z}$, I always mean $\bimat{1}{0}{0}{e^{i \pi/ n}}$. I don't mean an arbitrary $n$'th root of $Z$.)
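Here is a quick numerical check of that recipe (my own sketch; the angle and the two-wire layout are arbitrary choices). The gate between the CNOTs sees $c \oplus t$, which is exactly "the wires differ":

```python
import numpy as np

def Zpow(theta):
    """Phase shift gate: adds phase theta to the ON state."""
    return np.diag([1, np.exp(1j * theta)])

I2 = np.eye(2, dtype=complex)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

theta = np.pi / 3  # any angle works the same way
construction = (np.kron(Zpow(theta / 2), Zpow(theta / 2))  # half on each wire
                @ CNOT
                @ np.kron(I2, Zpow(-theta / 2))            # minus half when they differ
                @ CNOT)

controlled_shift = np.diag([1, 1, 1, np.exp(1j * theta)])
assert np.allclose(construction, controlled_shift)
```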
To turn the Z-axis rotation construction into an X-axis rotation construction, we use the fact that bracketing with Hadamard gates transforms Z rotations into X rotations, and vice versa:
It's easiest to apply the Hadamard bracketing one time at the very top level before we've even applied the control-reducing construction, instead of doing it as part of each transformation we make to the circuit.
# Iterative Phase Corrections
The control-moving constructions from the previous part have a problem: the top wire ends up with an operation, instead of just controls. When there are multiple control wires present, that new operation on the top wire (which is not a NOT) becomes a controlled operation:
Fortunately, because the new controlled operation has one fewer control than the original controlled operation, fixing the problem is just a matter of repeating the procedure again and again. Eventually the resulting phase correction will have no controls. Here is what the repeated application looks like:
And here are the start and end states without animation in between:
So far, we've managed to reduce the maximum number of controls, and to move all of the controls to NOT gates. But we still need to clean up a bit.
# Re-arranging into Increments
The construction so far creates a linear number of NOT gates. However, because those NOT gates have many controls, individually breaking them down into Toffoli gates would create a quadratic number of gates overall. Instead, we will re-arrange the NOT gates so that we can simplify the circuit.
Because neither phase shift gates nor controls affect whether a wire is ON, we can move controlled-NOT gates over phase shift gates as long as the phase shift applies to one of the control wires (as opposed to the target wire). We can also move controlled-NOTs over other controlled-NOTs under the same conditions, as long as the other controlled-NOT cancels its effects by happening twice.
This freedom of movement is all we need to re-arrange our controlled-NOTs into an increment gate and a decrement gate:
Here's where we were before moving controls, where we ended up after moving controls, and the final compact circuit after simplifying:
The final state ends up being surprisingly simple... but why does it work? Let's try to understand how controls can be replaced by incrementing.
Think of each phase shift gate as adding or subtracting some value from a global counter, but the gate only fires and adjusts the count when the wire the phase shift gate is on is ON. If the smallest phase shift gate adds $a$ into the global counter, then here is the overall pattern of gates:
(Note that the "+1" and "-1" gates are working on the 2's complement value stored in the wires, not the global counter. Sorry if that's confusing.)
There are two important details to notice. First, except for the top-most wire, the phase shift gates on each individual wire are opposites. Second, each phase shift gate inside the increment and decrement gates subtracts an amount equal to the sum of the amounts added by the phase shift gates that are higher-up and outside the increment and decrement gates.
Suppose the $k$'th wire is OFF, and all the wires for lesser bits are ON. Then every wire up to and including the $k$'th wire will get toggled by the increment gates. Furthermore, the inside gate on the $k$'th wire will fire while the outside gates on the lower bit wires will fire. But the inside gate of the $k$'th wire subtracts the same amount that all of the outside gates on the lesser wires add. Therefore the phase shift gates on the wires up to and including the first OFF bit have no net effect.
Suppose the $k$'th wire is OFF, but some lower bit wire is also OFF. Then the increment's carry propagation will stop before reaching the $k$'th wire, and it will not be toggled. Therefore the phase shift gates on the $k$'th wire will either both fire, undoing each other, or neither will fire. So the phase shift gates on wires after the first OFF bit have no net effect.
Together, the previous two paragraphs show that if there is any OFF bit, nothing happens. Phase shift gates on each wire after the first OFF wire will undo themselves, and the phase shift gate on that first OFF wire will undo the phase shift gates on previous wires.
That just leaves the case where all the bits are ON. Incrementing the all-ON state gives the all-OFF state. Therefore none of the inside gates will fire, but all of the outside gates will fire. The sum of the outside gates is $a + a + 2a + 4a + ... + 2^{n-1}a$ where $n$ is the number of involved wires, meaning the net effect is to add $2^n a$ to the global counter.
So, by setting $a$ to $\frac{1}{2^n}$'th of the phase shift we want, the incremented phase shift circuit will add that phase shift into the global counter if and only if all of the wires are ON. Exactly what we wanted.
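The two rules pin down the gate amounts, so the mechanism can be simulated directly. Below is my own reconstruction for $n = 3$ (wire 0 is the least significant bit; since I re-derive the amounts from the two rules, the normalization may differ from the post's figures), checking that the sandwich of phase gates between an increment and a decrement shifts only the all-ON state:

```python
import numpy as np

n = 3
a = np.pi / 2 ** (n - 1)  # chosen so the all-ON state ends up with phase pi
dim = 2 ** n

def phase_gates(amounts):
    """Diagonal op where each ON wire k contributes phase amounts[k]."""
    d = [np.exp(1j * sum(amounts[k] for k in range(n) if (b >> k) & 1))
         for b in range(dim)]
    return np.diag(d)

# Rule 1: each outside amount equals the sum of the outside amounts below it.
outside = [a] + [a * 2 ** (k - 1) for k in range(1, n)]  # [a, a, 2a]
# Rule 2: each inside gate subtracts what the outside gates below it add.
inside = [-sum(outside[:k]) for k in range(n)]           # [0, -a, -2a]

increment = np.zeros((dim, dim), dtype=complex)
for x in range(dim):
    increment[(x + 1) % dim, x] = 1   # |x> -> |x + 1 mod 2^n>
decrement = increment.T               # a permutation's inverse is its transpose

circuit = decrement @ phase_gates(inside) @ increment @ phase_gates(outside)

expected = np.diag([1] * (dim - 1) + [np.exp(1j * 2 ** (n - 1) * a)])
assert np.allclose(circuit, expected)  # only the all-ON state is phased (by -1 here)
```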
# Putting it All Together
Our construction so far is as follows:
• Replace the controlled-X that depends on every wire with a controlled-Z, by bracketing it with Hadamard gates.
• Free up a borrowable ancilla bit by using the control-reducing construction.
• Repeatedly apply the control-removing construction, until only NOT gates have controls on them.
• Re-arrange the resulting controlled-NOTs into increments.
Here are the starting and ending states, for 6 wires. Notice that all of the operations on the right hand side leave at least one wire unaffected, so that the classical constructions from the previous parts will be applicable:
Let's convince ourselves that this overall construction works. We already know that the increment-decrement part acts like a controlled-$\sqrt{Z}$; it will add a phase factor of $i$ to the ON, ON, ON, ON, ON, ON and ON, ON, ON, ON, ON, OFF states. We also know that the surrounding Hadamard gates just switch between Z-rotation and X-rotation, so if we pretend the Hadamards aren't there and find that the circuit implements a controlled-Z then the circuit works. All that's left to account for is the four $\sqrt[4]{Z}$/$\sqrt[4]{Z}^\dagger$ and controlled-NOT gates at the start. Well, when all the control wires are ON, and the target wire is OFF, both of the $\sqrt[4]{Z}^\dagger$ gates will fire and the ON, ON, ON, ON, ON, OFF state will gain a phase factor of $-i$. And when all the wires are ON, both of the $\sqrt[4]{Z}$ gates fire instead and the ON, ON, ON, ON, ON, ON state will gain a phase factor of $+i$.
The overall effect on the ON, ON, ON, ON, ON, OFF state was to gain a phase factor of $i$ and a phase factor of $-i$. In other words, there was no overall effect, because $i \cdot -i = 1$. That leaves ON, ON, ON, ON, ON, ON as the only affected state, and it gained two phase factors of $i$ for a net phase shift of $i^2 = -1$. Therefore the circuit, when ignoring the Hadamard gates, is a controlled-Z operation that depends on every wire. Meaning the overall circuit is in fact a full controlled-X, as desired.
That's it. Reduce the large controlled-nots and increments into a linear number of Toffoli-or-smaller gates by using the constructions from the last two posts, and we're done. I would show the resulting circuit in a diagram, if it weren't for the fact that the construction uses something like $\approx 100 n$ gates. Clearly, in practice, it's more efficient to just have $n$ zeroed ancilla bits available for use (because then you'd only need $\approx 2n$ gates).
That being said... isn't it bad that our solution needs smaller and smaller gates as $n$ increases?
# Trying to Avoid Exponentially Precise Gates
Needing smaller and smaller gates is bad because, in practice, you only have a finite gate set and a finite amount of precision. For example, the most common quantum error correction scheme stops at the $\sqrt[4]{Z}$ gate.
Is there any way to avoid needing more and more precise gates? Well, there are a few possible workarounds:
1. Approximate: If you have a gate, or sequence of gates, that rotates by an irrational fraction of a turn, then you can get arbitrarily close to any given angle by repeating that rotation many times. For example, if you are given a gate that phase shifts by $\pi$ degrees (not radians, degrees), then applying that gate 6634751 times will get you within 0.000003 degrees of a 45 degree phase shift (a quick numeric check of this appears after this list).
2. Don't Bother: If you simply don't perform the phase corrections, something happens. In the case of an un-phase-corrected controlled-Z, what happens is you apply the operation $e^{i Z \pi / 2}$ instead. If that happens to be the operation you wanted to apply, then mission accomplished!
3. Cancel it Out: If you happen to be applying two full-controlled operations, you can arrange for the phase corrections from one to exactly cancel out the phase corrections from the other:
4. Use an Uneven Root: If your controlled operation is a $p$'th root of the identity matrix, and $p$ is not even, then the phase corrections will cycle. For example, $Z^{2/3}$ is a cubic root of the identity matrix. It is also its own fourth root, so the phase corrections can bounce between $Z^{2/3}$ and $Z^{4/3}$ instead of becoming exponentially small:
5. Have an Ancilla Bit: With an ancilla bit, the classical construction from part 1 works (without using any square roots). In practice, this is how you would solve the problem.
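A two-line check of the repeated-rotation number in workaround 1 (mine, not from the post):

```python
import math

angle = math.fmod(6634751 * math.pi, 360.0)  # total shift in degrees, mod a full turn
print(abs(angle - 45.0))                     # about 3e-06 if the claim holds
```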
Unfortunately, none of the above workarounds apply to the exercise we're solving, and I suspect that there's simply no way to avoid the exponentially precise gates without cheating.
# Unavoidable Exponentially Precise Gates
The reason I suspect that the exponentially precise gates are necessary has to do with thinking of phases as counters.
Suppose that we are given a very precise phase shift gate: $\sqrt[2^p]{Z}$ for some large $p$. Applying $\sqrt[2^p]{Z}$ a total of $2^{p+1}$ times will rotate any affected phases by a full turn, back to where they started. In effect, this means we can treat each state as having a counter that wraps around after being incremented $2^{p+1}$ times.
Let's consider what happens to the counter system when we apply our available operation, and compare that to what happens when we apply our desired operation.
When we apply the $\sqrt[2^p]{Z}$ gate, any states where the target wire is ON will have their counters incremented. This adds $2^{n-1}$ increments into the counter system, because that's how many states there are where the target wire is ON.
Applying the desired operation, a controlled-Z that depends on every wire, only affects the all-ON state. However, the counter is incremented $2^p$ times (adding $2^p$ increments into the counter system), instead of once, because that's how many increments it takes to go half-way around.
A counter wrapping around, from $2^{p+1}$ to $0$, removes $2^{p+1}$ increments from the system. The operation we can perform adds $2^{n-1}$ increments into the system. The operation we want to perform must add exactly $2^p$ increments into the system.
Oh my.
Unless $2^p$ is a multiple of $\gcd(2^{n-1}, 2^{p+1})$, it is impossible for us to add exactly $2^p$ increments into the counter system by applying our available operation (even if we're allowed to repeatedly permute the states and apply the operation). Unless $min(p+1, n - 1) \leq p$, we're screwed. As soon as $n$ exceeds $p+1$, we won't be able to simulate a fully-controlled-Z operation anymore for the same reason that you can't reach an odd total by adding up only even numbers.
(Why doesn't this argument also apply to the case where there's an ancilla bit? Because then the target operation adds $2^p$ into two states, for a total change of $2^{p+1}$, and that's always a multiple of $\gcd(2^{p+1}, 2^{n-1})$.)
This is not proof positive that we need exponentially precise gates to solve the exercise. In particular, there may be some clever way of partially rotating around the X or Y axes in addition to the partial rotations around the Z axis. However, I would find that pleasantly surprising because it seems to be hard to get back to a nice fraction of a turn after combining rotations around two axes (not counting 90 degree rotations or undoing each rotation in reverse order, of course).
Feel free to surprise me.
# Summary
Unlike classical computers, quantum computers don't need an ancilla bit to perform a controlled-NOT that depends on every wire. They have the option of using exponentially precise phase shift gates instead.
In either case a linear number of Toffoli-or-smaller gates is needed but, pragmatically speaking, it's simpler and more efficient to just have an ancilla bit available. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7091435790061951, "perplexity": 652.6274239502611}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347391277.13/warc/CC-MAIN-20200526160400-20200526190400-00313.warc.gz"} |
https://en.wikipedia.org/wiki/Thompson_groups | # Thompson groups
In mathematics, the Thompson groups (also called Thompson's groups, vagabond groups or chameleon groups) are three groups, commonly denoted $F \subseteq T \subseteq V$, which were introduced by Richard Thompson in some unpublished handwritten notes in 1965 as a possible counterexample to the von Neumann conjecture. Of the three, F is the most widely studied, and is sometimes referred to as the Thompson group or Thompson's group.
The Thompson groups, and F in particular, have a collection of unusual properties which have made them counterexamples to many general conjectures in group theory. All three Thompson groups are infinite but finitely presented. The groups T and V are (rare) examples of infinite but finitely-presented simple groups. The group F is not simple but its derived subgroup [F,F] is and the quotient of F by its derived subgroup is the free abelian group of rank 2. F is totally ordered, has exponential growth, and does not contain a subgroup isomorphic to the free group of rank 2.
It is conjectured that F is not amenable and hence a further counterexample to the long-standing but recently disproved von Neumann conjecture for finitely-presented groups: it is known that F is not elementary amenable.
Higman (1974) introduced an infinite family of finitely presented simple groups, including Thompson's group V as a special case.
## Presentations
A finite presentation of F is given by the following expression:
$\langle A,B \mid\ [AB^{-1},A^{-1}BA] = [AB^{-1},A^{-2}BA^{2}] = \mathrm{id} \rangle$
where $[x,y]$ is the usual group theory commutator, $xyx^{-1}y^{-1}$.
Although F has a finite presentation with 2 generators and 2 relations, it is most easily and intuitively described by the infinite presentation:
$\langle x_0, x_1, x_2, \dots \mid x_k^{-1} x_n x_k = x_{n+1} \text{ for } k < n \rangle$
The two presentations are related by $x_0 = A$, $x_n = A^{1-n}BA^{n-1}$ for $n > 0$.
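For example, taking $k = 0$ in the infinite presentation gives $x_0^{-1} x_n x_0 = A^{-1}(A^{1-n}BA^{n-1})A = A^{1-(n+1)}BA^{(n+1)-1} = x_{n+1}$, so this substitution is consistent with the relations.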
## Other representations
The Thompson group F is generated by operations like this on binary trees. Here L and T are nodes, but A, B and R can be replaced by more general trees.
The group F also has realizations in terms of operations on ordered rooted binary trees, and as the group of piecewise linear homeomorphisms of the unit interval that preserve orientation and whose non-differentiable points are dyadic rationals and whose slopes are all powers of 2.
The group F can also be considered as acting on the unit circle by identifying the two endpoints of the unit interval, and the group T is then the group of automorphisms of the unit circle obtained by adding the homeomorphism x ↦ x + 1/2 mod 1 to F. On binary trees this corresponds to exchanging the two trees below the root. The group V is obtained from T by adding the discontinuous map that fixes the points of the half-open interval [0,1/2) and exchanges [1/2,3/4) and [3/4,1) in the obvious way. On binary trees this corresponds to exchanging the two trees below the right-hand descendant of the root (if it exists).
The Thompson group F is the group of order-preserving automorphisms of the free Jónsson–Tarski algebra on one generator.
## Amenability
The conjecture of Thompson that F is not amenable was further popularized by R. Geoghegan --- see also the Cannon-Floyd-Parry article cited in the references below. Its current status is open: E. Shavgulidze[1] published a paper in 2009 in which he claimed to prove that F is amenable, but an error was found, as is explained in the MR review.
It is known that F is not elementary amenable.[citation needed] If F is not amenable, then it would be another counterexample to the long-standing but recently disproved von Neumann conjecture for finitely-presented groups, which suggested that a finitely-presented group is amenable if and only if it does not contain a copy of the free group of rank 2.
## Connections with topology
The group F was rediscovered at least twice by topologists during the 1970s. In a paper which was only published much later but was in circulation as a preprint at that time, P. Freyd and A. Heller [2] showed that the shift map on F induces an unsplittable homotopy idempotent on the Eilenberg-MacLane space K(F,1) and that this is universal in an interesting sense. This is explained in detail in Geoghegan's book (see references below). Independently, J. Dydak and P. Minc [3] created a less well-known model of F in connection with a problem in shape theory.
In 1979, R. Geoghegan made four conjectures about F: (1) F has type FP; (2) All homotopy groups of F at infinity are trivial; (3) F has no non-abelian free subgroups; (4) F is non-amenable. (1) was proved by K. S. Brown and R. Geoghegan in a strong form: there is a K(F,1) with two cells in each positive dimension.[4] (2) was also proved by Brown and Geoghegan [5] in the sense that the cohomology H*(F,ZF) was shown to be trivial; since a previous theorem of M. Mihalik [6] implies that F is simply connected at infinity, and the stated result implies that all homology at infinity vanishes, the claim about homotopy groups follows. (3) was proved by M. Brin and C. Squier.[7] The status of (4) is discussed above.
It is unknown if F satisfies the Farrell–Jones conjecture. It is even unknown if the Whitehead group of F (see Whitehead torsion) or the projective class group of F (see Wall's finiteness obstruction) is trivial, though it is easily shown that F satisfies the Strong Bass Conjecture.
D. Farley [8] has shown that F acts as deck transformations on a locally finite CAT(0) cubical complex (necessarily of infinite dimension). A consequence is that F satisfies the Baum-Connes conjecture. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 3, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8968451023101807, "perplexity": 510.92515422536104}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398466260.18/warc/CC-MAIN-20151124205426-00290-ip-10-71-132-137.ec2.internal.warc.gz"} |
https://brilliant.org/problems/logs-2s-and-3s/ | # Logs, 2s, and 3s
Algebra Level 2
Find all real solutions $$x$$ to
$3\log_2(x) - 1 = \log_2\left(\frac32 x-1\right).$
Enter your answer as the sum of all such $$x$$.
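A solution sketch (not shown on the original problem page): the left side is $$\log_2(x^3) - \log_2 2 = \log_2\left(\frac{x^3}{2}\right)$$, so the equation requires $$\frac{x^3}{2} = \frac32 x - 1$$, i.e. $$x^3 - 3x + 2 = 0$$, which factors as $$(x-1)^2(x+2) = 0$$. Of the candidates $$x = 1$$ and $$x = -2$$, only $$x = 1$$ lies in the domain ($$x > 0$$ and $$\frac32 x - 1 > 0$$), so the answer is $$1$$.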
× | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8374404907226562, "perplexity": 4547.978317631013}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578530100.28/warc/CC-MAIN-20190421000555-20190421022555-00274.warc.gz"} |
https://math.answers.com/geometry/A_circle_is_18_feet_in_diameter_what_is_the_area_of_the_circle | 0
# A circle is 18 feet in diameter what is the area of the circle?
Wiki User
2017-03-23 12:27:24
Area of circle: pi*r^2 = pi*(18/2)^2 = 81*pi square feet (about 254.47 square feet)
66 Reviews | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8885605335235596, "perplexity": 28708.496154993256}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337723.23/warc/CC-MAIN-20221006025949-20221006055949-00465.warc.gz"} |
https://nigerianscholars.com/past-questions/english-language/question/188449/ |
# Choose the option that has the same vowel sound as the one represented by the letters underlined
### Question
Choose the option that has the same vowel sound as the one represented by the letters underlined.
faeces
A) polices
B) pain
C) peasant
D) pear | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8835792541503906, "perplexity": 1784.7947362751308}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362879.45/warc/CC-MAIN-20211203121459-20211203151459-00573.warc.gz"} |
http://serverfault.com/questions/162587/adding-lines-to-text-files-through-group-policy | # Adding lines to text files through Group Policy
I need to add two lines to text files using group policy. I cannot replace the files with the updated one since the content of the files differs from one machine to another, but all of them have to be updated with the two new lines. Is it possible to do that?
I already know how to replace a file using a VBScript, then applying it through group policy.
Thanks, Abdullah
This vbscript gives you an idea of what it should look like. They key is to open the files for appending, not for writing. This will just add two lines to the bottom of the text files regardless of their content.
Const ForAppending = 8   ' IOMode value 8 opens the file for appending
strFile1 = "c:\Temp\file1.txt"
strFile2 = "c:\Temp\file2.txt"
Set objFSO = CreateObject("Scripting.FileSystemObject")
Set objFile = objFSO.OpenTextFile(strFile1, ForAppending)
objFile.WriteLine "New Text to add 1"
objFile.WriteLine "New Text to add 2"
objFile.Close
Set objFile = objFSO.OpenTextFile(strFile2, ForAppending)
objFile.WriteLine "New Text to add 1"
objFile.WriteLine "New Text to add 2"
objFile.Close
You can also just use batch files to do it as well. echo "Add this line to the end of the file." >> \\path\to\file.txt or whatever. Just do not use the single redirection operator > instead of >>, or you will be sorry.
That is to say the redirect operator > opens for write (clobber) as opposed to >> which opens for append. – jscott Jul 21 '10 at 18:27 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7169139385223389, "perplexity": 1834.1222600830763}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860128071.22/warc/CC-MAIN-20160428161528-00012-ip-10-239-7-51.ec2.internal.warc.gz"} |
https://www.mail-archive.com/search?l=ntg-context%40ntg.nl&q=date:20100807&o=newest | ### Re: [NTG-context] \qedhere equivalent in ConTeXt
Thank you very much, but still does not work to me.. %%% \definedescription [proof] [text=Proof.] \setupdescriptions [proof] [width=broad, distance=0.5em, location=serried, headstyle=it, indentnext=yes, indenting=yes, closesymbol=\mathematics{\square},
### Re: [NTG-context] \qedhere equivalent in ConTeXt
On Sat, 7 Aug 2010 10:54:52 +0200 Emanuele Sacco [email protected] wrote: Thank you very much, but still does not work to me.. %%% \definedescription [proof] [text=Proof.] \setupdescriptions [proof] [width=broad, distance=0.5em, location=serried, headstyle=it,
### Re: [NTG-context] \qedhere equivalent in ConTeXt
You are absolutely right. Thanks to all. 2010/8/7 Thomas Schmitz [email protected]: On Sat, 7 Aug 2010 10:54:52 +0200 Emanuele Sacco [email protected] wrote: Thank you very much, but still does not work to me.. %%% \definedescription [proof] [text=Proof.]
### Re: [NTG-context] an issues with punk module
In the latest beta the punk module works normally and the PunkNova font is working. Thanks to all. -- Best regards, Huang Ze ___ If your question is of interest to others as well, please add an entry to the Wiki!
### [NTG-context] Problem with cross references
Hi all, I would like to get some advice concerning a problem, or rather a nuisance, with internal references. I am making a longish document with several chapters, all of which are separate .tex-files to be input into the main file, which contains all setups and the skeleton of the document. There
### Re: [NTG-context] Problem with cross references
On Sat, 7 Aug 2010 14:22:32 +0200 Robert Blackstone [email protected] wrote: Hi all, I would like to get some advice concerning a problem, or rather a nuisance, with internal references. I am making a longish document with several chapters, all of which are separate .tex-files to
### [NTG-context] abbreviations as macros
Hi Hans: This is asking Armin Varmaz' question in a different way, although I know even less. I would like to define a macro that is just an abbreviation. For example, say I want to define a snippet \NPV which provides the text NPV$_0$, and NPV is in textfont while 0 is in mathfont. This
### Re: [NTG-context] abbreviations as macros
On Sat, 7 Aug 2010, ivo welch wrote: Hi Hans: This is asking Armin Varmaz' question in a different way, although I know even less. I would like to define a macro that is just an abbreviation. For example, say I want to define a snippet \NPV which provides the text NPV$_0$, and NPV is in
### [NTG-context] Commands based on previously defined commands, with fixed values for arguments
How can I define a command by reusing the already defined command? \def \HelloTwo #1#2 {Hello, #1 and #2!} \def \HelloOne #1 {\HelloTwo{#1}{Vedran}} This gives me a Runaway argument. -- Vedran Miletić ___ If your
### [NTG-context] =?UTF-8?Q?Re:__Commands_based_on_previously_defined_commands, =D=A_with_fixed_values_for_=9arguments?=
On Sat, 7 Aug 2010 18:03:22 +0200 Vedran Miletić [email protected] scribit: How can I define a command by reusing the already defined command? \def \HelloTwo #1#2 {Hello, #1 and #2!} \def \HelloOne #1 {\HelloTwo{#1}{Vedran}} This gives me a Runaway argument. -- Vedran Miletić where
### Re: [NTG-context] Commands based on previously defined comm ands, DA with fixed values for šrguments
2010/8/7 R. Bastian [email protected]: On Sat, 7 Aug 2010 18:03:22 +0200 Vedran Miletić [email protected] scribit: How can I define a command by reusing the already defined command? \def \HelloTwo #1#2 {Hello, #1 and #2!} \def \HelloOne #1 {\HelloTwo{#1}{Vedran}} This gives me a Runaway
### Re: [NTG-context] Commands based on previously defined commands, with fixed values for arguments
On Sat, 7 Aug 2010, Vedran Miletić wrote: How can I define a command by reusing the already defined command? \def \HelloTwo #1#2 {Hello, #1 and #2!} \def \HelloOne #1 {\HelloTwo{#1}{Vedran}} This gives me a Runaway argument. Can you post a complete example? This works for me
### Re: [NTG-context] Commands based on previously defined commands, with fixed values for arguments
2010/8/7 Aditya Mahajan [email protected]: Can you post a complete example? This works for me \def\HelloTwo#1#2{Hello, #1 and #2!} \def\HelloOne#1{\HelloTwo{#1}{Vedran}} \starttext \HelloOne {One} \stoptext Same here, it works. From what I see, you removed the spacing, and if I add it
### Re: [NTG-context] Commands based on previously defined commands, with fixed values for arguments
Are there any rules where the space should be and where it should not? The way you wrote the macro originally, the spaces were part of the macro. If you use your original definition of \HelloTwo, you can write \HelloOne as follows: \def \HelloOne #1 {\HelloTwo {#1}{Vedran} }
### Re: [NTG-context] Commands based on previously defined comm ands, DA with fixed values for šrguments
Even if this is completely unrelated to ConTeXt, allow me to mention this since I find it so funny: The message to which I'm replying has the following weird subject: # Subject: Re: [NTG-context] Commands based on previously defined commands, # DA with fixed values for šrguments | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9930489659309387, "perplexity": 9070.83392869795}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487626465.55/warc/CC-MAIN-20210617011001-20210617041001-00268.warc.gz"} |
http://mathhelpforum.com/algebra/12386-variety-equations.html | 1. A Variety Of Equations
Please solve and explain how these are solved.
Please solve and explain how these are solved.
Hi,
to Q1:
the brackets are not necessary. Collect like terms:
7t^(-6)*(-3)t^(-3)*c^(-3) = -21t^(-9)*c^(-3)
to Q3.:
11x + 6y = 153
156 = -3y + 10x ==> 156 - 10x = -3y ==> -312 + 20x = 6y
Now plug in the term for 6y into the first equation:
11x + (-312 + 20x) = 153
31x = 465
x = 15 plug in this value into the second equation to calculate y:
6y = -312 + 20*15 = -12
y = -2
EB
3. 7t^(-6)*(-3)t^(-3)*c^(-3) = -21t^(-9)*c^(-3)
How did you do that? -6* -3 -3* -3 = -21
I got -15
Solve using the elimination method.
11x + 6y = 153
156 = -3y + 10x

We have: 11x + 6y = 153  [1]
         10x - 3y = 156  [2]

Multiply [2] by 2: 20x - 6y = 312
Add [1]:           11x + 6y = 153

And we have: 31x = 465, so x = 15

Substitute into [1]: 11(15) + 6y = 153, so y = -2
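As a quick check, substituting back gives 11(15) + 6(-2) = 165 - 12 = 153 and 10(15) - 3(-2) = 150 + 6 = 156, so both original equations hold.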
7t^(-6)*(-3)t^(-3)*c^(-3) = -21t^(-9)*c^(-3)
How did you do that? -6* -3 -3* -3 = -21
I got -15
Hello,
you have to calculate the product of powers. Therefore you must use all rules concerning the calculations with powers.
I've attached a screen-shot of the transformation. Maybe this helps a little bit further.
EB
[Attached screenshot: the transformation via the product-of-powers rule, t^(-6)*t^(-3) = t^(-6-3) = t^(-9), so 7*(-3) = -21 gives -21t^(-9)*c^(-3).]
6. Thanks. That completes the picture. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8719974160194397, "perplexity": 4558.676998228979}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118477.15/warc/CC-MAIN-20170423031158-00118-ip-10-145-167-34.ec2.internal.warc.gz"} |
https://practice.geeksforgeeks.org/problems/next-larger-element-1587115620/1 |
Next larger element
Medium Accuracy: 48.92% Submissions: 15729 Points: 4
Given an array arr[ ] of size N having distinct elements, the task is to find the next greater element for each element of the array in order of their appearance in the array.
Next greater element of an element in the array is the nearest element on the right which is greater than the current element.
If there does not exist next greater of current element, then next greater element for current element is -1. For example, next greater of the last element is always -1.
Example 1:
Input:
N = 4, arr[] = [1 3 2 4]
Output:
3 4 4 -1
Explanation:
In the array, the next larger element to 1 is 3, to 3 is 4, to 2 is 4, and for 4, since it doesn't exist, it is -1.
Example 2:
Input:
N = 5, arr[] = [6 8 0 1 3]
Output:
8 -1 1 3 -1
Explanation:
In the array, the next larger element to 6 is 8; for 8 there is no larger element, hence it is -1; for 0 it is 1; for 1 it is 3; and for 3 there is no larger element on the right, hence -1.
This is a function problem. You only need to complete the function nextLargerElement() that takes list of integers arr[ ] and N as input parameters and returns list of integers of length N denoting the next greater elements for all the corresponding elements in the input array.
Expected Time Complexity : O(N)
Expected Auxiliary Space : O(N)
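One possible O(N) implementation uses the standard monotonic-stack technique; the sketch below is an illustration of that approach, not the site's reference solution:

```python
def nextLargerElement(arr, n):
    res = [-1] * n   # default: no greater element exists to the right
    stack = []       # indices still waiting for their next greater element
    for i in range(n):
        # arr[i] resolves every smaller value waiting on the stack
        while stack and arr[stack[-1]] < arr[i]:
            res[stack.pop()] = arr[i]
        stack.append(i)
    return res

print(nextLargerElement([1, 3, 2, 4], 4))     # [3, 4, 4, -1]
print(nextLargerElement([6, 8, 0, 1, 3], 5))  # [8, -1, 1, 3, -1]
```

Each index is pushed and popped at most once, which gives the expected O(N) time and O(N) auxiliary space.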
Constraints:
1 ≤ N ≤ 10^6
1 ≤ A_i ≤ 10^18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2042360156774521, "perplexity": 1867.6507092499396}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703517559.41/warc/CC-MAIN-20210119011203-20210119041203-00741.warc.gz"}
https://infoscience.epfl.ch/record/184114 |
Journal article
# Volatilities of Actinide and Lanthanide N,N-Dimethylaminodiboranate Chemical Vapor Deposition Precursors: A DFT Study
N,N-Dimethylaminodiboranate complexes with praseodymium, samarium, erbium, and uranium, which are potential chemical vapor deposition precursors for the deposition of metal boride and oxide thin films, have been investigated by DFT guided by field-ionization mass spectroscopy experiments. The calculations indicate that the volatilities of these complexes are correlated with the M-H bond strengths as determined by Mayer bond order analysis. The geometries of the gas-phase monomeric, dimeric, and trimeric species seen in field-ionization mass spectroscopy experiments were identified using DFT calculations, and the relative stabilities of these oligomers were assessed to understand how the lanthanide aminodiboranates depolymerize to their respective volatile forms during sublimation. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8865216970443726, "perplexity": 9427.086533056747}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280587.1/warc/CC-MAIN-20170116095120-00568-ip-10-171-10-70.ec2.internal.warc.gz"} |
https://te.wikipedia.org/wiki/%E0%B0%AE%E0%B1%82%E0%B0%B8:Tmath/doc | # మూస:Tmath/doc
Wraps a math expression written in TeX code in <math> tags using the {{#tag:}} parser function. Intended to be used for very short expressions like variables.
## Usage
{{tmath|expression}}
### Examples
• {{tmath|a}} gives ${\displaystyle a}$
• {{tmath|\pi}} gives ${\displaystyle \pi }$
• {{tmath|x_0}} gives ${\displaystyle x_{0}}$
The template is intended to be used for very short expressions like variables. It is more cumbersome to use for equations or other expressions containing special signs that have to be escaped:
• {{tmath|E{{=}}mc^2}} gives ${\displaystyle E=mc^{2}}$
• {{tmath|x {{=}} {{!}} \vec {{(}} x {{)}} {{!}} }} gives ${\displaystyle x=|{\vec {x}}|}$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 5, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9595902562141418, "perplexity": 8042.486864720877}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987795403.76/warc/CC-MAIN-20191022004128-20191022031628-00158.warc.gz"} |
https://lakschool.com/index.php/en/math/circles-and-spheres/spheres-and-lines |
# Spheres and lines
There are three possible relative positions for a sphere and a line in space.
### Remember
• A passant is a straight line that has no point in common with the sphere.
• A tangent has exactly one point in common.
• A secant has two different points in common with the sphere.
A sphere and a line can therefore have one, two or no common point.
The individual coordinates are used in the equation of a sphere to calculate the intersection points.
### Method
1. Write out coordinates of $g$
2. Insert and solve equations in the equation of a sphere
3. Insert $r$ into the line to get intersection(s)
### Example
$g: \vec{x} = \begin{pmatrix} 5 \\ 6 \\ 5 \end{pmatrix} + r \cdot \begin{pmatrix} 3 \\ 2 \\ 2 \end{pmatrix}$
$k: (x+1)^2+(y-2)^2+(z-1)^2=17$
1. #### Break $g$ into 3 equations
We replace $\vec{x}$ and write out the respective coordinates as our own equation.
$\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 5 \\ 6 \\ 5 \end{pmatrix} + r \cdot \begin{pmatrix} 3 \\ 2 \\ 2 \end{pmatrix}$
1. $x=5+3r$
2. $y=6+2r$
3. $z=5+2r$
2. #### Insert coordinates
The equations are now used in the equation of a sphere for $x$, $y$ and $z$.
$(x+1)^2+(y-2)^2+(z-1)^2=17$

$(5+3r+1)^2+(6+2r-2)^2+(5+2r-1)^2=17$

$(6+3r)^2+(4+2r)^2+(4+2r)^2=17$
Use binomial theorem to resolve parentheses
$36+36r+9r^2+16+16r+4r^2+16+16r+4r^2=17$
$17r^2+68r+68=17\quad|-17$
$17r^2+68r+51=0\quad|:17$
$r^2+4r+3=0$
$r_{1,2}=-\frac{p}2\pm\sqrt{(\frac{p}2)^2-q}$
$r_{1,2}=-2\pm\sqrt{2^2-3}$
$r_{1,2}=-2\pm1$
$r_{1}=-1$ and $r_{2}=-3$
3. #### Insert $r$
The two calculated $r$ are inserted into the equation of a line in order to obtain the intersection points.
$\vec{OS_1} = \begin{pmatrix} 5 \\ 6 \\ 5 \end{pmatrix} - 1 \cdot \begin{pmatrix} 3 \\ 2 \\ 2 \end{pmatrix} = \begin{pmatrix} 2 \\ 4 \\ 3 \end{pmatrix}$

$\vec{OS_2} = \begin{pmatrix} 5 \\ 6 \\ 5 \end{pmatrix} - 3 \cdot \begin{pmatrix} 3 \\ 2 \\ 2 \end{pmatrix} = \begin{pmatrix} -4 \\ 0 \\ -1 \end{pmatrix}$
It is a secant that intersects the sphere at $S_1(2|4|3)$ and $S_2(-4|0|-1)$. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9004949331283569, "perplexity": 650.0786325958347}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488556482.89/warc/CC-MAIN-20210624171713-20210624201713-00323.warc.gz"} |
https://qanda.ai/en/solutions/JcUZNtXkND-uadratic-Equation-Sum-of-Roots-Product-of-Roots-1-x24x30-2-6x212x-180-3-x24x-210 | Problem
Quadratic Equation / Sum of Roots / Product of Roots
1. $x^{2}+4x+3=0$
2. $6x^{2}+12x-18=0$
3. $x^{2}+4x-21=0$
4. $2x^{2}+3x-2=0$
5. $8x^{2}=6x+9$
6. $2x^{2}-3x=0$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9871617555618286, "perplexity": 67.35148971516344}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038089289.45/warc/CC-MAIN-20210416191341-20210416221341-00253.warc.gz"}
https://stats.stackexchange.com/questions/123116/interpretation-of-results-for-unitroot-test | # Interpretation of results for unitroot test
Let's say I have a pure random walk:
library(fUnitRoots)
library(urca)
set.seed(1130)
rndwlk1 <- filter(rnorm(1000,0,1),c(1),method="recursive",init=c(1))
plot(rndwlk1)
and I want to perform a unit root test with trend using ur.df, I would define my series as such:
$y_t = c + \delta t +\rho y_{t-1} + \epsilon_t$
summary(ur.df(rndwlk1,type=c("trend"),lags=0))
## Call:
## lm(formula = z.diff ~ z.lag.1 + 1 + tt)
## Residuals:
## Min 1Q Median 3Q Max
## -3.7054 -0.6353 0.0402 0.6363 2.9240
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -0.186081 0.077171 -2.411 0.01608 *
## z.lag.1 -0.017740 0.005983 -2.965 0.00310 **
## tt -0.000619 0.000233 -2.657 0.00802 **
## ---
## Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
## Residual standard error: 0.9726 on 996 degrees of freedom
## Multiple R-squared: 0.00875, Adjusted R-squared: 0.00676
## F-statistic: 4.396 on 2 and 996 DF, p-value: 0.01256
## Value of test-statistic is: -2.9648 3.8345 4.3962
## Critical values for test statistics:
## 1pct 5pct 10pct
## tau3 -3.96 -3.41 -3.12
## phi2 6.09 4.68 4.03
## phi3 8.27 6.25 5.34
Can someone tell me what the test-statistics mean? As far as I understand, the first one, -2.9648, is the ADF t-statistic for $\rho$. What are the two other statistics?
The model with trend is not the right model, but I would like to test it against a model with drift. I have figured out that the F-statistic gives the results for the test of a unit root with drift. So:
$H_0$: $\rho = 1$, $\delta = 0$. But the p-value seems to be from a regular F distribution. How can I get critical values for the F statistics?
Update
The p-value of the F-statistic is indeed from the regular F-distribution:
pf(4.396,2,996,lower.tail=FALSE)
#0.01256662
Does anyone know where can we find the Dickey fuller critical values for the F-statistics on R?
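(One workaround until a table function turns up: simulate the null distribution of phi3 directly. A minimal sketch, here in Python/NumPy rather than R; the sample length and replication count are illustrative only:)

import numpy as np
rng = np.random.default_rng(1130)
T, reps = 1000, 2000
stats = np.empty(reps)
for i in range(reps):
    y = np.cumsum(rng.standard_normal(T))             # random walk under H0
    dy, ly, tt = np.diff(y), y[:-1], np.arange(1, T)
    X = np.column_stack([np.ones(T - 1), ly, tt])     # unrestricted: c, y_{t-1}, trend
    X0 = np.ones((T - 1, 1))                          # restricted: c only
    b = np.linalg.lstsq(X, dy, rcond=None)[0]
    b0 = np.linalg.lstsq(X0, dy, rcond=None)[0]
    rss, rss0 = np.sum((dy - X @ b)**2), np.sum((dy - X0 @ b0)**2)
    stats[i] = ((rss0 - rss) / 2) / (rss / (T - 1 - 3))   # phi3-type F statistic
print(np.quantile(stats, [0.90, 0.95, 0.99]))             # compare with 5.34, 6.25, 8.27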
First of all, the function ur.df will estimate the following equation:
$\Delta y_t = c+ (\rho-1)y_{t-1} + \delta t + \epsilon_t$
The value of the test-statistics:
## Value of test-statistic is: -2.9648 3.8345 4.3962
represents the test statistics for the following null hypotheses $H_0$:
|----------------+----------------------+--------------------|
| -2.9648 | 3.8345 | 4.3962 |
|----------------+----------------------+--------------------|
| $(\rho-1) = 0$ | $\rho-1= c=\delta=0$ | $\rho-1= \delta=0$ |
|----------------+----------------------+--------------------|
• What are the two other statistics?
So the two other values are the F-type test statistics.
• How can I get critical values for the F statistics?
The corresponding critical values are given in the table below the test statistics. The tau3 and phi1 to phi3 are the notations Dickey and Fuller used in their paper. I found the table and the explanation in Applied Econometric Time Series by Walter Enders. Here is what the notations represent, where $\tilde{\rho} = \rho-1$:
|--------+---------------------------------+-------------------------------------|
|        | $H_0$                           | Interpretation                      |
|--------+---------------------------------+-------------------------------------|
| tau3 | $\tilde{\rho}= 0$ | |
| phi1   | $\tilde{\rho}= 0$ and $c = 0$   | random walk without drift           |
| phi2 | $\tilde{\rho} = c = \delta = 0$ | random walk without drift and trend |
| phi3 | $\tilde{\rho} = \delta = 0$ | random walk without trend |
|--------+---------------------------------+-------------------------------------| | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7270364165306091, "perplexity": 2252.676709233424}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358903.73/warc/CC-MAIN-20211130015517-20211130045517-00500.warc.gz"} |
https://listserv.uni-heidelberg.de/cgi-bin/wa?A2=LATEX-L;f8a87f87.0904&FT=M&P=T&H=A&S=b | ## [email protected]
Subject: Re: \in@ wrong?
From: Will Robertson <[log in to unmask]>
Date: Fri, 24 Apr 2009 15:03:08 +0930

Hi Uwe,

On 21/04/2009, at 8:30 PM, Uwe Lück wrote:
> I wonder whether this is a bug that should go to the LaTeX Bug Database, since it only
> -- is about an internal
> -- contradicts what one might expect
> -- while it is not clear to me whether this can affect LaTeX's function on the user-level

This certainly seems like a bug to me, although I had to modify your examples to see the error:

\documentclass{article}
\begin{document}
\makeatletter
\in@{bonbon}{bon}\ifin@ YES\else NO\fi % gives YES
\in@{bonbon}{bonb}\ifin@ YES\else NO\fi % gives NO
\in@{client-to-client}{client-to}\ifin@ YES\else NO\fi % gives NO
\in@{client-to-client}{client-to-}\ifin@ YES\else NO\fi % gives YES
\makeatother
\end{document}

The tendency seems to be not to change the internals of LaTeX2e, but I'd be happy to replace the current implementation of \in@ with a more correct version. What do others think? If others are amenable to the change, would you be willing to write the new version?

Cheers,
Will | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9613673686981201, "perplexity": 14532.091543659804}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585045.2/warc/CC-MAIN-20211016231019-20211017021019-00600.warc.gz"}
https://zbmath.org/?q=an%3A0761.34026 | # zbMATH — the first resource for mathematics
On the reducibility of linear differential equations with quasiperiodic coefficients. (English) Zbl 0761.34026
We say that a matrix $$Q(t)$$ is a quasiperiodic matrix of time with basic frequencies $$\omega_ 1,\dots,\omega_ r$$ if $$Q(t)=F(\omega_ 1 t,\dots,\omega_ r t)$$, where $$F=F(v_ 1,\dots,v_ r)$$ is $$2\pi$$ periodic in all its arguments. The author considers the system (1) $$x'=(A+\varepsilon Q(t))x$$, where $$A$$ is a constant matrix and $$Q(t)$$ is a quasiperiodic analytic matrix with $$r$$ basic frequencies. Suppose $$A$$ has different eigenvalues (including the purely imaginary case) and the set formed by the eigenvalues of $$A$$ and the basic frequencies of $$Q(t)$$ satisfies a nonresonant condition. It is proved under a nondegeneracy condition that there exists a Cantorian set $${\mathcal S}\subset(0,\varepsilon_ 0)$$ ($$\varepsilon_ 0>0$$) with positive Lebesgue measure such that for $$\varepsilon\in{\mathcal S}$$ (1) is reducible (i.e. there exists a nonsingular quasiperiodic matrix $$P(t)$$ such that $$P(t)$$, $$P^{-1}(t)$$ and $$P'(t)$$ are bounded on $$R$$ and the change of variables $$x=P(t)y$$ transforms (1) to $$y'=By$$ with a constant matrix $$B$$).
##### MSC:
34C20 Transformation and reduction of ordinary differential equations and systems, normal forms
34A30 Linear ordinary differential equations and systems
34C27 Almost and pseudo-almost periodic solutions to ordinary differential equations
##### Keywords:
quasiperiodic function; reducible system; basic frequencies
##### References:
[1] Arnol’d, V.I, Small denominators and problems of stability of motion in classical and celestial mechanics, Russian math. surveys, 18, No. 6, 85-191, (1963) · Zbl 0135.42701
[2] Bogoljubov, N.N; Mitropoliski, Ju.A; Samoilenko, A.M, Methods of accelerated convergence in nonlinear mechanics, (1976), Springer-Verlag New York
[3] Fink, A.M, Almost periodic differential equations, () · Zbl 0325.34039
[4] Johnson, R.A; Sell, G.R, Smoothness of spectral subbundles and reducibility of quasi-periodic linear differential systems, J. differential equations, 41, 262-288, (1981) · Zbl 0443.34037
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8452706336975098, "perplexity": 929.5611094622268}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056120.36/warc/CC-MAIN-20210918002951-20210918032951-00347.warc.gz"}
http://mathhelpforum.com/calculus/116493-solved-integration-part-problem-two-different-answers.html | # Math Help - [SOLVED] Integration by part problem with two different answers
1. ## [SOLVED] Integration by parts problem with two different answers
I want to find the integration of the following integral:
$\int x^3.e^{x^2}\, dx$
So I apply integration by parts:
$u.v - \int v \,du\,\,$
$
Let \,\, u = x^3, du = 3x^2\,\,
and \,\, dv = e^{x^2}, v = 2x.e^{x^2}
$
So now the result is:
$\int x^3.e^{x^2}\, dx = 2x^4.e^{x^2} - \int\,\,6x^3.e^{x^2}\,\, dx$
$7\!\!\int x^3.e^{x^2}\, dx = 2x^4.e^{x^2}$
$\int x^3.e^{x^2}\, dx = \frac{2x^4.e^{x^2}}{7} + C$
Am I right? Because the answer to this problem is different in the back of the book.
Answer is $\frac{(x^2 -1).e^{x^2}}{2} + C$
The way I did it, I can't find anything wrong with it. Can anyone kindly tell what
is wrong with my way of solving this problem?
2. Originally Posted by x3bnm
I want to find the integration of the following integral: $\int x^3.e^{x^2}\, dx$ [...] Can anyone kindly tell what is wrong with my way of solving this problem?
I think that we can clear some of the "clutter". Let $z=x^2$ so $dz=2x\,dx$ and this becomes $\frac{1}{2}\int z\cdot e^{z}dz$. Which is easily done. Does that help?
EDIT: Ahh, did you differentiate when you should have integrated? For $v$
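For completeness, finishing that substitution (a sketch of the remaining step: one routine integration by parts with $u=z$, $dv=e^z\,dz$):

$\frac{1}{2}\int z\cdot e^{z}\, dz = \frac{1}{2}\left(z e^{z} - \int e^{z}\, dz\right) = \frac{(z-1)e^{z}}{2} + C = \frac{(x^2-1)e^{x^2}}{2} + C$

which is exactly the answer in the back of the book.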
3. >EDIT: Ahh, did you differentiate when you should have integrated? For $v$
Yes i did. That's the mistake i made. Thanks for finding it. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 18, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9646786451339722, "perplexity": 566.313207016504}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510268734.38/warc/CC-MAIN-20140728011748-00216-ip-10-146-231-18.ec2.internal.warc.gz"} |
https://dml.cz/handle/10338.dmlcz/144789 | # Article
Keywords:
dimension filtration; sequentially Cohen-Macaulay filtration; cohomological dimension; bigraded module; Cohen-Macaulay module
Summary:
Let $K$ be a field and $S=K[x_1,\ldots ,x_m, y_1,\ldots ,y_n]$ be the standard bigraded polynomial ring over $K$. In this paper, we explicitly describe the structure of finitely generated bigraded "sequentially Cohen-Macaulay" $S$-modules with respect to $Q=(y_1,\ldots ,y_n)$. Next, we give a characterization of sequentially Cohen-Macaulay modules with respect to $Q$ in terms of local cohomology modules. Cohen-Macaulay modules that are sequentially Cohen-Macaulay with respect to $Q$ are considered.
References:
[1] Capani, A., Niesi, G., Robbiano, L.: CoCoA, a system for doing Computations in Commutative Algebra. (1995), http://cocoa.dima.unige.it./research/publications.html, 1995.
[2] Chardin, M., Jouanolou, J.-P., Rahimi, A.: The eventual stability of depth, associated primes and cohomology of a graded module. J. Commut. Algebra 5 (2013), 63-92. DOI 10.1216/JCA-2013-5-1-63 | MR 3084122 | Zbl 1275.13014
[3] Cuong, N. T., Cuong, D. T.: On sequentially Cohen-Macaulay modules. Kodai Math. J. 30 (2007), 409-428. DOI 10.2996/kmj/1193924944 | MR 2372128 | Zbl 1139.13011
[4] Cuong, N. T., Cuong, D. T.: On the structure of sequentially generalized Cohen-Macaulay modules. J. Algebra 317 (2007), 714-742. DOI 10.1016/j.jalgebra.2007.06.026 | MR 2362938 | Zbl 1137.13010
[5] Eisenbud, D.: Commutative Algebra. With a View Toward Algebraic Geometry. Graduate Texts in Mathematics 150 Springer, Berlin (1995). MR 1322960 | Zbl 0819.13001
[6] Rahimi, A.: Sequentially Cohen-Macaulayness of bigraded modules. (to appear) in Rocky Mt. J. Math.
[7] Rahimi, A.: Relative Cohen-Macaulayness of bigraded modules. J. Algebra 323 (2010), 1745-1757. DOI 10.1016/j.jalgebra.2009.11.026 | MR 2588136 | Zbl 1184.13053
[8] Schenzel, P.: On the dimension filtration and Cohen-Macaulay filtered modules. Commutative Algebra and Algebraic Geometry. Proc. of the Ferrara Meeting, Italy F. Van Oystaeyen Lecture Notes Pure Appl. Math. 206 Marcel Dekker, New York (1999), 245-264. MR 1702109 | Zbl 0942.13015
| {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8682147264480591, "perplexity": 4393.527027916911}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370505366.8/warc/CC-MAIN-20200401034127-20200401064127-00267.warc.gz"}
https://kerodon.net/tag/01VT | # Kerodon
### 5.3.3 Homotopy Transport for Cartesian Fibrations
We now study the behavior of the transport functors of §5.3.2 with respect to composition.
Proposition 5.3.3.1 (Transitivity). Let $q: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{D}}$ be a cartesian fibration of simplicial sets and let $\sigma$ be a $2$-simplex of $\operatorname{\mathcal{D}}$, which we display as a diagram
$\xymatrix@R =50pt@C=50pt{ & Y \ar [dr]^{g} & \\ X \ar [ur]^{f} \ar [rr]^{h} & & Z. }$
Let $f^{\ast }: \operatorname{\mathcal{C}}_{Y} \rightarrow \operatorname{\mathcal{C}}_{X}$ and $g^{\ast }: \operatorname{\mathcal{C}}_{Z} \rightarrow \operatorname{\mathcal{C}}_{Y}$ be functors which are given by contravariant transport along $f$ and $g$, respectively. Then the composite functor $f^{\ast } \circ g^{\ast }: \operatorname{\mathcal{C}}_{Z} \rightarrow \operatorname{\mathcal{C}}_{X}$ is given by contravariant transport along $h$.
Proof. Without loss of generality, we may replace $q$ by the projection map $\Delta ^{2} \times _{ \operatorname{\mathcal{D}}} \operatorname{\mathcal{C}}\rightarrow \Delta ^2$, and thereby reduce to the case where $\operatorname{\mathcal{D}}= \Delta ^2$ and $\sigma$ is the unique nondegenerate $2$-simplex of $\operatorname{\mathcal{D}}$. In this case, $\operatorname{\mathcal{C}}$ is an $\infty$-category. Let $h: \Delta ^1 \times \operatorname{\mathcal{C}}_{Y} \rightarrow \operatorname{\mathcal{C}}$ and $h': \Delta ^1 \times \operatorname{\mathcal{C}}_{Z} \rightarrow \operatorname{\mathcal{C}}$ be morphisms which witness $f^{\ast }$ and $g^{\ast }$ as given by contravariant transport along $f$ and $g$, respectively. Then the composite map
$\Delta ^{1} \times \operatorname{\mathcal{C}}_{Z} \xrightarrow { \operatorname{id}\times g^{\ast } } \Delta ^1 \times \operatorname{\mathcal{C}}_{Y} \xrightarrow {h} \operatorname{\mathcal{C}}$
can be identified with a morphism $\alpha$ from $f^{\ast } \circ g^{\ast }$ to $g^{\ast }$ in the $\infty$-category $\operatorname{Fun}( \operatorname{\mathcal{C}}_{Z}, \operatorname{\mathcal{C}})$. Similarly, $h'$ can be identified with a morphism $\beta$ from $g^{\ast }$ to $\operatorname{id}_{\operatorname{\mathcal{C}}_{Z}}$ in the $\infty$-category $\operatorname{Fun}( \operatorname{\mathcal{C}}_{Z}, \operatorname{\mathcal{C}})$. Note that for each object $C \in \operatorname{\mathcal{C}}_{Z}$, the induced maps
$\alpha _{C}: (f^{\ast } \circ g^{\ast })(C) \rightarrow g^{\ast }(C) \quad \quad \beta _{C}: g^{\ast }(C) \rightarrow C$
are $q$-cartesian. Let $\gamma : f^{\ast } \circ g^{\ast } \rightarrow \operatorname{id}_{\operatorname{\mathcal{C}}_{Z}}$ be a composition of $\alpha$ with $\beta$. Then, for each object $C \in \operatorname{\mathcal{C}}_{Z}$, the morphism $\gamma _{C}: (f^{\ast } \circ g^{\ast })(C) \rightarrow C$ is also $q$-cartesian (Corollary 5.2.2.5). It follows that $\gamma$ can be identified with a morphism of simplicial sets $\Delta ^1 \times \operatorname{\mathcal{C}}_{Z} \rightarrow \operatorname{\mathcal{C}}$ which witnesses $f^{\ast } \circ g^{\ast }$ as given by contravariant transport along $h$. $\square$
Warning 5.3.3.2. The conclusion of Proposition 5.3.3.1 is generally not satisfied if $q: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{D}}$ is only assumed to be a locally cartesian fibration of simplicial sets. We will return to this point in § (see Proposition ).
Construction 5.3.3.3 (The Homotopy Transport Representation: Cartesian Case). Let $q: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{D}}$ be a cartesian fibration of simplicial sets and let $\mathrm{h} \mathit{\operatorname{Cat}_{\infty }}$ denote the homotopy category of $\infty$-categories (Construction 4.5.1.1). It follows from Proposition 5.3.3.1 and Example 5.3.2.6 that there is a unique morphism of simplicial sets $\operatorname{hTr}_{q}: \operatorname{\mathcal{D}}^{\operatorname{op}} \rightarrow \operatorname{N}_{\bullet }( \mathrm{h} \mathit{\operatorname{Cat}_{\infty } } )$ with the following properties:
• For each vertex $X$ of the simplicial set $\operatorname{\mathcal{D}}$, $\operatorname{hTr}_{q}(X)$ is the $\infty$-category $\operatorname{\mathcal{C}}_{X} = \{ X\} \times _{\operatorname{\mathcal{D}}} \operatorname{\mathcal{C}}$ (regarded as an object of $\mathrm{h} \mathit{\operatorname{Cat}_{\infty }}$).
• For each edge $e: X \rightarrow Y$ in the simplicial set $\operatorname{\mathcal{D}}$, $\operatorname{hTr}_{q}(e)$ is the isomorphism class of the contravariant transport functor $[e^{\ast }]$ of Notation 5.3.2.5, regarded as an element of $\operatorname{Hom}_{ \mathrm{h} \mathit{\operatorname{Cat}_{\infty }} }( \operatorname{\mathcal{C}}_{Y}, \operatorname{\mathcal{C}}_{X} ) = \pi _0( \operatorname{Fun}( \operatorname{\mathcal{C}}_{Y}, \operatorname{\mathcal{C}}_{X})^{\simeq } )$.
Let $\mathrm{h} \mathit{\operatorname{\mathcal{D}}}$ denote the homotopy category of the simplicial set $\operatorname{\mathcal{D}}$ (Notation 1.2.5.3). Then the morphism $\operatorname{hTr}_{q}$ determines a functor of ordinary categories $\mathrm{h} \mathit{\operatorname{\mathcal{D}}}^{\operatorname{op}} \rightarrow \mathrm{h} \mathit{\operatorname{Cat}_{\infty }}$, which we will denote also by $\operatorname{hTr}_{q}$ and will refer to as the homotopy transport representation of the cartesian fibration $q$.
Example 5.3.3.4. Let $q: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{D}}$ be a cartesian fibration of categories (Definition 5.1.4.8), so that the induced map $\operatorname{N}_{\bullet }(q): \operatorname{N}_{\bullet }(\operatorname{\mathcal{C}}) \rightarrow \operatorname{N}_{\bullet }(\operatorname{\mathcal{D}})$ is a cartesian fibration of $\infty$-categories (Example 5.2.4.2). Then the homotopy transport representation $\operatorname{hTr}_{\operatorname{N}_{\bullet }(q)}: \operatorname{\mathcal{D}}^{\operatorname{op}} \rightarrow \mathrm{h} \mathit{\operatorname{Cat}_{\infty }}$ is given by the composition
$\operatorname{\mathcal{D}}^{\operatorname{op}} \xrightarrow { \chi _{q} } \operatorname{Pith}(\mathbf{Cat}) \rightarrow \mathrm{h} \mathit{\operatorname{Cat}} \xrightarrow { \operatorname{N}_{\bullet } } \mathrm{h} \mathit{\operatorname{Cat}_{\infty }}.$
Here $\chi _{q}$ denotes the transport representation of Construction 5.1.5.10 (with respect to any cleavage of the fibration $q$), the second functor is the truncation map of Remark 2.3.2.12, and $\operatorname{N}_{\bullet }$ is the fully faithful functor of Remark 4.5.1.3. Stated more informally, the homotopy transport representation $\operatorname{hTr}_{ \operatorname{N}_{\bullet }(q)}$ of Construction 5.3.3.3 can be obtained from the transport representation $\chi _{ \operatorname{N}_{\bullet }(q)}$ of Construction 5.1.5.10 by passing from the $2$-category $\mathbf{Cat}$ to its homotopy category.
Example 5.3.3.5. Let $q: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{D}}$ be a functor between ordinary categories which is a fibration in sets (Definition 5.1.2.1), so that the induced map $\operatorname{N}_{\bullet }(q): \operatorname{N}_{\bullet }(\operatorname{\mathcal{C}}) \rightarrow \operatorname{N}_{\bullet }(\operatorname{\mathcal{D}})$ is a right fibration, and in particular a cartesian fibration. Then the homotopy transport representation $\operatorname{hTr}_{\operatorname{N}_{\bullet }(q)}$ of Construction 5.3.3.3 is given by the composition
$\operatorname{\mathcal{D}}^{\operatorname{op}} \xrightarrow { \chi _{q} } \operatorname{Set}\hookrightarrow \mathrm{h} \mathit{\operatorname{Cat}_{\infty } },$
where $\chi _{q}$ is the transport representation of Construction 5.1.2.14 and $\operatorname{Set}\hookrightarrow \mathrm{h} \mathit{\operatorname{Cat}_{\infty }}$ is the fully faithful embedding which associates to each set $X$ the associated discrete simplicial set, regarded as an $\infty$-category.
Remark 5.3.3.6. Let $q: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{D}}$ be a cartesian fibration of simplicial sets, and let $\operatorname{hTr}_{q}: \mathrm{h} \mathit{\operatorname{\mathcal{D}}}^{\operatorname{op}} \rightarrow \mathrm{h} \mathit{\operatorname{Cat}_{\infty }}$ be the homotopy transport representation of Construction 5.3.3.3. It follows from Proposition 5.2.4.12 that $q$ is a right fibration if and only if the functor $\operatorname{hTr}_{q}$ factors through the full subcategory $\mathrm{h} \mathit{\operatorname{Kan}} \subseteq \mathrm{h} \mathit{\operatorname{Cat}_{\infty }}$. In particular, if $q$ is a right fibration, then Construction 5.3.3.3 determines a functor $\mathrm{h} \mathit{\operatorname{\mathcal{D}}}^{\operatorname{op}} \rightarrow \mathrm{h} \mathit{\operatorname{Kan}}$ which we will also refer to as the homotopy transport representation of $q$.
For later reference, we record a dual version of Construction 5.3.3.3:
Construction 5.3.3.7 (The Covariant Transport Functor). Let $q: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{D}}$ be a cocartesian fibration of simplicial sets and let $\mathrm{h} \mathit{\operatorname{Cat}_{\infty }}$ denote the homotopy category of $\infty$-categories. Then there is a unique morphism of simplicial sets $\operatorname{hTr}_{q}: \operatorname{\mathcal{D}}\rightarrow \operatorname{N}_{\bullet }( \mathrm{h} \mathit{\operatorname{Cat}_{\infty } } )$ with the following properties:
• For each vertex $X$ of the simplicial set $\operatorname{\mathcal{D}}$, $\operatorname{hTr}_{q}(X)$ is the $\infty$-category $\operatorname{\mathcal{C}}_{X} = \{ X\} \times _{\operatorname{\mathcal{D}}} \operatorname{\mathcal{C}}$ (regarded as an object of $\mathrm{h} \mathit{\operatorname{Cat}_{\infty }}$).
• For each edge $e: X \rightarrow Y$ in the simplicial set $\operatorname{\mathcal{D}}$, $\operatorname{hTr}_{q}(e)$ is the isomorphism class of the covariant transport functor $[e_!]$ of Notation 5.3.2.12, regarded as an element of $\operatorname{Hom}_{ \mathrm{h} \mathit{\operatorname{Cat}_{\infty }} }( \operatorname{\mathcal{C}}_{X}, \operatorname{\mathcal{C}}_{Y} ) = \pi _0( \operatorname{Fun}( \operatorname{\mathcal{C}}_{X}, \operatorname{\mathcal{C}}_{Y})^{\simeq } )$.
Let $\mathrm{h} \mathit{\operatorname{\mathcal{D}}}$ denote the homotopy category of the simplicial set $\operatorname{\mathcal{D}}$ (Notation 1.2.5.3). Then the morphism $\operatorname{hTr}_{q}$ determines a functor of ordinary categories $\mathrm{h} \mathit{\operatorname{\mathcal{D}}} \rightarrow \mathrm{h} \mathit{\operatorname{Cat}_{\infty }}$, which we will denote also by $\operatorname{hTr}_{q}$ and will refer to as the homotopy transport representation of the cocartesian fibration $q$.
Warning 5.3.3.8. Let $q: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{D}}$ be a morphism of simplicial sets which is both a cartesian fibration and a cocartesian fibration. Then Constructions 5.3.3.7 and 5.3.3.3 supply functors $\mathrm{h} \mathit{\operatorname{\mathcal{C}}} \rightarrow \mathrm{h} \mathit{\operatorname{Cat}_{\infty }}$ and $\mathrm{h} \mathit{\operatorname{\mathcal{C}}}^{\operatorname{op}} \rightarrow \mathrm{h} \mathit{\operatorname{Cat}_{\infty }}$ respectively, which are both referred to as the homotopy transport representation of $q$ and denoted by $\operatorname{hTr}_{q}$. We will see later that these two functors are interchangeable data: either can be recovered from the other (see Proposition ).
Example 5.3.3.9. Let $q: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{D}}$ be a morphism of simplicial sets. Combining Remark 5.3.3.6 with Theorem 5.3.2.14, we deduce that the following conditions are equivalent:
• The morphism $q$ is a Kan fibration.
• The morphism $q$ is a cartesian fibration and the homotopy transport representation $\operatorname{hTr}_{q}: \mathrm{h} \mathit{\operatorname{\mathcal{D}}}^{\operatorname{op}} \rightarrow \mathrm{h} \mathit{\operatorname{Cat}_{\infty }}$ of Construction 5.3.3.3 factors through the subcategory $\mathrm{h} \mathit{\operatorname{Kan}}^{\simeq } \subseteq \mathrm{h} \mathit{\operatorname{Cat}_{\infty }}$.
• The morphism $q$ is a cocartesian fibration and the homotopy transport representation $\operatorname{hTr}'_{q}: \mathrm{h} \mathit{\operatorname{\mathcal{D}}} \rightarrow \mathrm{h} \mathit{\operatorname{Cat}_{\infty }}$ of Construction 5.3.3.7 factors through the subcategory $\mathrm{h} \mathit{\operatorname{Kan}}^{\simeq } \subseteq \mathrm{h} \mathit{\operatorname{Cat}_{\infty }}$.
If these conditions are satisfied, then $\operatorname{hTr}'_{q}$ is given by the composition
$\mathrm{h} \mathit{\operatorname{\mathcal{D}}} \xrightarrow { \operatorname{hTr}_{q}^{\operatorname{op}} } ( \mathrm{h} \mathit{\operatorname{Kan}}^{\simeq } )^{\operatorname{op}} \xrightarrow {\iota } \mathrm{h} \mathit{\operatorname{Kan}}^{\simeq },$
where $\iota$ is the isomorphism which carries each morphism in $\mathrm{h} \mathit{\operatorname{Kan}}^{\simeq }$ to its inverse (see Warning 5.1.2.16). | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9970825910568237, "perplexity": 154.17668507832767}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107918164.98/warc/CC-MAIN-20201031121940-20201031151940-00104.warc.gz"} |
https://www.physicsforums.com/threads/moment-about-arbitary-point.834216/ | # Moment about arbitary point
• #1
## Homework Statement
Why isn't the moment = 15 * 3 Nm? Since we are taking the moment about point O... it's 3 m away from O
## The Attempt at a Solution
#### Attachments
• IMG_20150924_140943.jpg
• #2
haruspex
## Homework Statement
Why isn't the moment = 15 * 3 Nm? Since we are taking the moment about point O... it's 3 m away from O
## The Attempt at a Solution
Are you referring to the 15 kNm moment that is applied? You don't multiply that by a distance. (It would give you something with units kNm².) A force times a perpendicular distance gives a moment, but an applied moment is already a moment. Exactly where it is applied makes no difference, only its magnitude and direction matter.
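To put the same point in code: a small sketch with illustrative numbers (not the values from the attached figure). Each point force contributes the moment of r × F about O, while an applied couple contributes its own magnitude about any point.

# Net moment about a point O from point forces and applied couples (2D sketch).
def moment_about(o, forces, couples):
    # forces: list of ((x, y), (Fx, Fy)) pairs; couples: plain moments in N*m
    ox, oy = o
    total = 0.0
    for (x, y), (fx, fy) in forces:
        total += (x - ox) * fy - (y - oy) * fx  # z-component of r x F
    return total + sum(couples)                 # an applied couple adds directly

# The force term depends on the reference point; the 15 kN*m couple does not:
print(moment_about((0, 0), [((2, 0), (0, 10))], [15e3]))  # 20 + 15000 = 15020.0
print(moment_about((5, 3), [((2, 0), (0, 10))], [15e3]))  # -30 + 15000 = 14970.0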
• #3
Are you referring to the 15 kNm moment that is applied? You don't multiply that by a distance. (It would give you something with units kNm².) A force times a perpendicular distance gives a moment, but an applied moment is already a moment. Exactly where it is applied makes no difference, only its magnitude and direction matter.
Ya, I knew that. But how can a moment be applied? Only force can be applied, right?
• #4
haruspex
Ya, I knew that. But how can a moment be applied? Only force can be applied, right?
Are you asking as a practical matter how it is possible to apply a moment as opposed to a force? There does not need to be a way to do that. Consider turning a nut using a spanner. One can think of it as applying a torque, or as applying two equal and opposite forces along parallel but different lines of action. If you are told a moment of some specified magnitude and direction is applied, you do not need to care about how it is applied.
• #5
Are you asking as a practical matter how it is possible to apply a moment as opposed to a force? There does not need to be a way to do that. Consider turning a nut using a spanner. One can think of it as applying a torque, or as applying two equal and opposite forces along parallel but different lines of action. If you are told a moment of some specified magnitude and direction is applied, you do not need to care about how it is applied.
Yes, this will only occur in the exercise, but not in daily life?
• #6
haruspex
Yes, this will only occur in the exercise, but not in daily life?
I cannot think of a way to apply a torque to an object (in an inertial frame) other than by a combination of linear forces. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9523054957389832, "perplexity": 1040.3634961505586}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046152144.81/warc/CC-MAIN-20210726152107-20210726182107-00303.warc.gz"}
http://hellenicaworld.com/Science/Physics/en/Photon.html | Photon
The photon is a type of elementary particle. It is the quantum of the electromagnetic field including electromagnetic radiation such as light and radio waves, and the force carrier for the electromagnetic force. Photons are massless,[a] so they always move at the speed of light in vacuum, 299792458 m/s.
Like all elementary particles, photons are currently best explained by quantum mechanics and exhibit wave–particle duality, their behavior featuring properties of both waves and particles.[2] The modern photon concept originated during the first two decades of the 20th century with the work of Albert Einstein, who built upon the research of Max Planck. While trying to explain how matter and electromagnetic radiation could be in thermal equilibrium with one another, Planck proposed that the energy stored within a material object should be regarded as composed of an integer number of discrete, equal-sized parts. To explain the photoelectric effect, Einstein introduced the idea that light itself is made of discrete units of energy. In 1926, Gilbert N. Lewis popularized the term photon for these energy units.[3][4][5] Subsequently, many other experiments validated Einstein's approach.[6][7][8]
In the Standard Model of particle physics, photons and other elementary particles are described as a necessary consequence of physical laws having a certain symmetry at every point in spacetime. The intrinsic properties of particles, such as charge, mass, and spin, are determined by this gauge symmetry. The photon concept has led to momentous advances in experimental and theoretical physics, including lasers, Bose–Einstein condensation, quantum field theory, and the probabilistic interpretation of quantum mechanics. It has been applied to photochemistry, high-resolution microscopy, and measurements of molecular distances. Recently, photons have been studied as elements of quantum computers, and for applications in optical imaging and optical communication such as quantum cryptography.
Nomenclature
Photoelectric effect: the emission of electrons from a metal plate caused by light quanta – photons.
1926 Gilbert N. Lewis letter which brought the word "photon" into common usage
The word quanta (singular quantum, Latin for how much) was used before 1900 to mean particles or amounts of different quantities, including electricity. In 1900, the German physicist Max Planck was studying black-body radiation, and he suggested that the experimental observations, specifically at shorter wavelengths, would be explained if the energy stored within a molecule was a "discrete quantity composed of an integral number of finite equal parts", which he called "energy elements".[9] In 1905, Albert Einstein published a paper in which he proposed that many light-related phenomena—including black-body radiation and the photoelectric effect—would be better explained by modelling electromagnetic waves as consisting of spatially localized, discrete wave-packets.[10] He called such a wave-packet the light quantum (German: das Lichtquant).[b]
The name photon derives from the Greek word for light, φῶς (transliterated phôs). Arthur Compton used photon in 1928, referring to Gilbert N. Lewis, who coined the term in a letter to Nature on December 18, 1926.[3][11] The same name was used earlier but was never widely adopted before Lewis: in 1916 by the American physicist and psychologist Leonard T. Troland, in 1921 by the Irish physicist John Joly, in 1924 by the French physiologist René Wurmser (1890–1993), and in 1926 by the French physicist Frithiof Wolfers (1891–1971).[5] The name was suggested initially as a unit related to the illumination of the eye and the resulting sensation of light and was used later in a physiological context. Although Wolfers's and Lewis's theories were contradicted by many experiments and never accepted, the new name was adopted very soon by most physicists after Compton used it.[5][c]
In physics, a photon is usually denoted by the symbol γ (the Greek letter gamma). This symbol for the photon probably derives from gamma rays, which were discovered in 1900 by Paul Villard,[13][14] named by Ernest Rutherford in 1903, and shown to be a form of electromagnetic radiation in 1914 by Rutherford and Edward Andrade.[15] In chemistry and optical engineering, photons are usually symbolized by hν, which is the photon energy, where h is Planck constant and the Greek letter ν (nu) is the photon's frequency.[16] Much less commonly, the photon can be symbolized by hf, where its frequency is denoted by f.[17]
Physical properties
A photon is massless,[d] has no electric charge,[18][19] and is a stable particle. In vacuum, a photon has two possible polarization states.[20] The photon is the gauge boson for electromagnetism,[21]:29–30 and therefore all other quantum numbers of the photon (such as lepton number, baryon number, and flavour quantum numbers) are zero.[22] Also, the photon does not obey the Pauli exclusion principle, but instead obeys Bose–Einstein statistics.[23]:1221
Photons are emitted in many natural processes. For example, when a charge is accelerated it emits synchrotron radiation. During a molecular, atomic or nuclear transition to a lower energy level, photons of various energy will be emitted, ranging from radio waves to gamma rays. Photons can also be emitted when a particle and its corresponding antiparticle are annihilated (for example, electron–positron annihilation).[23]:572,1114,1172
Relativistic energy and momentum
The cone shows possible values of wave 4-vector of a photon. The "time" axis gives the angular frequency (rad⋅s⁻¹) and the "space" axis represents the angular wavenumber (rad⋅m⁻¹). Green and indigo represent left and right polarization
In empty space, the photon moves at c (the speed of light) and its energy and momentum are related by E = pc, where p is the magnitude of the momentum vector p. This derives from the following relativistic relation, with m = 0:[24]
$$E^{2}=p^{2} c^{2} + m^{2} c^{4}.$$
The energy and momentum of a photon depend only on its frequency ($\nu$) or inversely, its wavelength (λ):
$$E=\hbar\omega=h\nu=\frac{hc}{\lambda}$$
$$\boldsymbol{p}=\hbar\boldsymbol{k},$$
where k is the wave vector (where the wave number k = |k| = 2π/λ), ω = 2πν is the angular frequency, and ħ = h/2π is the reduced Planck constant.[25]
Since p points in the direction of the photon's propagation, the magnitude of the momentum is
$$p=\hbar k=\frac{h\nu}{c}=\frac{h}{\lambda}.$$
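As a quick numerical illustration of these formulae, here is a short Python check (the 500 nm wavelength is arbitrary; the constants are the exact SI values):

h = 6.62607015e-34    # Planck constant, J*s (exact)
c = 299792458.0       # speed of light in vacuum, m/s (exact)
eV = 1.602176634e-19  # joules per electronvolt (exact)
lam = 500e-9          # wavelength, m
E = h * c / lam       # photon energy, J
p = h / lam           # photon momentum, kg*m/s
print(E / eV)         # ~2.48 eV for a green photon
print(E / (p * c))    # 1.0, consistent with E = pc for a massless particle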
The photon also carries a quantity called spin angular momentum that does not depend on its frequency.[26] Because photons always move at the speed of light, the spin is best expressed in terms of the component measured along its direction of motion, its helicity, which must be either +ħ or −ħ. These two possible helicities, called right-handed and left-handed, correspond to the two possible circular polarization states of the photon.[27]
To illustrate the significance of these formulae, the annihilation of a particle with its antiparticle in free space must result in the creation of at least two photons for the following reason. In the center of momentum frame, the colliding antiparticles have no net momentum, whereas a single photon always has momentum (since, as we have seen, it is determined by the photon's frequency or wavelength, which cannot be zero). Hence, conservation of momentum (or equivalently, translational invariance) requires that at least two photons are created, with zero net momentum. (However, it is possible if the system interacts with another particle or field for the annihilation to produce one photon, as when a positron annihilates with a bound atomic electron, it is possible for only one photon to be emitted, as the nuclear Coulomb field breaks translational symmetry.)[28]:64–65 The energy of the two photons, or, equivalently, their frequency, may be determined from conservation of four-momentum.
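A standard concrete instance (the numbers are added here for illustration): when an electron and a positron annihilate at rest, the two photons are emitted back to back, each carrying the electron rest energy $$E_{\gamma } = m_{e}c^{2} \approx 511\ \mathrm{keV},$$ so that the total momentum remains zero.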
Seen another way, the photon can be considered as its own antiparticle (thus an "antiphoton" is simply a normal photon). The reverse process, pair production, is the dominant mechanism by which high-energy photons such as gamma rays lose energy while passing through matter.[29] That process is the reverse of "annihilation to one photon" allowed in the electric field of an atomic nucleus.
The classical formulae for the energy and momentum of electromagnetic radiation can be re-expressed in terms of photon events. For example, the pressure of electromagnetic radiation on an object derives from the transfer of photon momentum per unit time and unit area to that object, since pressure is force per unit area and force is the change in momentum per unit time.[30]
Each photon carries two distinct and independent forms of angular momentum of light. The spin angular momentum of light of a particular photon is always either +ħ or −ħ. The light orbital angular momentum of a particular photon can be any integer N, including zero.[31]
Experimental checks on photon mass
Current commonly accepted physical theories imply or assume the photon to be strictly massless. If the photon is not a strictly massless particle, it would not move at the exact speed of light, c, in vacuum. Its speed would be lower and depend on its frequency. Relativity would be unaffected by this; the so-called speed of light, c, would then not be the actual speed at which light moves, but a constant of nature which is the upper bound on speed that any object could theoretically attain in spacetime.[32] Thus, it would still be the speed of spacetime ripples (gravitational waves and gravitons), but it would not be the speed of photons.
If a photon did have non-zero mass, there would be other effects as well. Coulomb's law would be modified and the electromagnetic field would have an extra physical degree of freedom. These effects yield more sensitive experimental probes of the photon mass than the frequency dependence of the speed of light. If Coulomb's law is not exactly valid, then that would allow the presence of an electric field to exist within a hollow conductor when it is subjected to an external electric field. This provides a means for very-high-precision tests of Coulomb's law.[33] A null result of such an experiment has set a limit of m ≲ 10⁻¹⁴ eV/c².[34]
Sharper upper limits on the speed of light have been obtained in experiments designed to detect effects caused by the galactic vector potential. Although the galactic vector potential is very large because the galactic magnetic field exists on very great length scales, only the magnetic field would be observable if the photon is massless. In the case that the photon has mass, the mass term ½m²AμAμ would affect the galactic plasma. The fact that no such effects are seen implies an upper bound on the photon mass of m < 3×10⁻²⁷ eV/c².[35] The galactic vector potential can also be probed directly by measuring the torque exerted on a magnetized ring.[36] Such methods were used to obtain the sharper upper limit of 1.07×10⁻²⁷ eV/c² (the equivalent of 10⁻³⁶ daltons) given by the Particle Data Group.[37]
These sharp limits from the non-observation of the effects caused by the galactic vector potential have been shown to be model-dependent.[38] If the photon mass is generated via the Higgs mechanism then the upper limit of m ≲ 10⁻¹⁴ eV/c² from the test of Coulomb's law is valid.
Historical development
Main article: Light
Thomas Young's double-slit experiment in 1801 showed that light can act as a wave, helping to invalidate early particle theories of light.[23]:964
In most theories up to the eighteenth century, light was pictured as being made up of particles. Since particle models cannot easily account for the refraction, diffraction and birefringence of light, wave theories of light were proposed by René Descartes (1637),[39] Robert Hooke (1665),[40] and Christiaan Huygens (1678);[41] however, particle models remained dominant, chiefly due to the influence of Isaac Newton.[42] In the early 19th century, Thomas Young and Augustin-Jean Fresnel clearly demonstrated the interference and diffraction of light, and by 1850 wave models were generally accepted.[43] James Clerk Maxwell's 1865 prediction[44] that light was an electromagnetic wave—which was confirmed experimentally in 1888 by Heinrich Hertz's detection of radio waves[45]—seemed to be the final blow to particle models of light.
In 1900, Maxwell's theoretical model of light as oscillating electric and magnetic fields seemed complete. However, several observations could not be explained by any wave model of electromagnetic radiation, leading to the idea that light-energy was packaged into quanta described by E = hν. Later experiments showed that these light-quanta also carry momentum and, thus, can be considered particles: the photon concept was born, leading to a deeper understanding of the electric and magnetic fields themselves.
The Maxwell wave theory, however, does not account for all properties of light. The Maxwell theory predicts that the energy of a light wave depends only on its intensity, not on its frequency; nevertheless, several independent types of experiments show that the energy imparted by light to atoms depends only on the light's frequency, not on its intensity. For example, some chemical reactions are provoked only by light of frequency higher than a certain threshold; light of frequency lower than the threshold, no matter how intense, does not initiate the reaction. Similarly, electrons can be ejected from a metal plate by shining light of sufficiently high frequency on it (the photoelectric effect); the energy of the ejected electron is related only to the light's frequency, not to its intensity.[46][e]
At the same time, investigations of black-body radiation carried out over four decades (1860–1900) by various researchers[47] culminated in Max Planck's hypothesis[48][49] that the energy of any system that absorbs or emits electromagnetic radiation of frequency ν is an integer multiple of an energy quantum E = hν. As shown by Albert Einstein,[10][50] some form of energy quantization must be assumed to account for the thermal equilibrium observed between matter and electromagnetic radiation; for this explanation of the photoelectric effect, Einstein received the 1921 Nobel Prize in physics.[51]
Since the Maxwell theory of light allows for all possible energies of electromagnetic radiation, most physicists assumed initially that the energy quantization resulted from some unknown constraint on the matter that absorbs or emits the radiation. In 1905, Einstein was the first to propose that energy quantization was a property of electromagnetic radiation itself.[10] Although he accepted the validity of Maxwell's theory, Einstein pointed out that many anomalous experiments could be explained if the energy of a Maxwellian light wave were localized into point-like quanta that move independently of one another, even if the wave itself is spread continuously over space.[10] In 1909[50] and 1916,[52] Einstein showed that, if Planck's law regarding black-body radiation is accepted, the energy quanta must also carry momentum p = h/λ, making them full-fledged particles. This photon momentum was observed experimentally by Arthur Compton,[53] for which he received the Nobel Prize in 1927. The pivotal question was then: how to unify Maxwell's wave theory of light with its experimentally observed particle nature? The answer to this question occupied Albert Einstein for the rest of his life,[54] and was solved in quantum electrodynamics and its successor, the Standard Model. (See § Second quantization and § The photon as a gauge boson, below.)
Up to 1923, most physicists were reluctant to accept that light itself was quantized. Instead, they tried to explain photon behaviour by quantizing only matter, as in the Bohr model of the hydrogen atom (shown here). Even though these semiclassical models were only a first approximation, they were accurate for simple systems and they led to quantum mechanics.
Einstein's 1905 predictions were verified experimentally in several ways in the first two decades of the 20th century, as recounted in Robert Millikan's Nobel lecture.[55] However, before Compton's experiment[53] showed that photons carried momentum proportional to their wave number (1922), most physicists were reluctant to believe that electromagnetic radiation itself might be particulate. (See, for example, the Nobel lectures of Wien,[47] Planck[49] and Millikan.)[55] Instead, there was a widespread belief that energy quantization resulted from some unknown constraint on the matter that absorbed or emitted radiation. Attitudes changed over time. In part, the change can be traced to experiments such as those revealing Compton scattering, where it was much more difficult not to ascribe quantization to light itself to explain the observed results.[56]
Even after Compton's experiment, Niels Bohr, Hendrik Kramers and John Slater made one last attempt to preserve the Maxwellian continuous electromagnetic field model of light, the so-called BKS theory.[57] An important feature of the BKS theory is how it treated the conservation of energy and the conservation of momentum. In the BKS theory, energy and momentum are only conserved on the average across many interactions between matter and radiation. However, refined Compton experiments showed that the conservation laws hold for individual interactions.[58] Accordingly, Bohr and his co-workers gave their model "as honorable a funeral as possible".[54] Nevertheless, the failures of the BKS model inspired Werner Heisenberg in his development of matrix mechanics.[59]
A few physicists persisted[60] in developing semiclassical models in which electromagnetic radiation is not quantized, but matter appears to obey the laws of quantum mechanics. Although the evidence from chemical and physical experiments for the existence of photons was overwhelming by the 1970s, this evidence could not be considered as absolutely definitive; since it relied on the interaction of light with matter, and a sufficiently complete theory of matter could in principle account for the evidence. Nevertheless, all semiclassical theories were refuted definitively in the 1970s and 1980s by photon-correlation experiments.[f] Hence, Einstein's hypothesis that quantization is a property of light itself is considered to be proven.
Wave–particle duality and uncertainty principles
Photons in a Mach–Zehnder interferometer exhibit wave-like interference and particle-like detection at single-photon detectors.
Photons obey the laws of quantum mechanics, and so their behavior has both wave-like and particle-like aspects. When a photon is detected by a measuring instrument, it is registered as a single, particulate unit. However, the probability of detecting a photon is calculated by equations that describe waves. This combination of aspects is known as wave–particle duality. For example, the probability distribution for the location at which a photon might be detected displays clearly wave-like phenomena such as diffraction and interference. A single photon passing through a double-slit experiment lands on the screen with a probability distribution given by its interference pattern determined by Maxwell's equations.[61] However, experiments confirm that the photon is not a short pulse of electromagnetic radiation; it does not spread out as it propagates, nor does it divide when it encounters a beam splitter.[62] Rather, the photon seems to be a point-like particle since it is absorbed or emitted as a whole by arbitrarily small systems, including systems much smaller than its wavelength, such as an atomic nucleus ($$\approx 10^{-15}$$ m across) or even the point-like electron.
While many introductory texts treat photons using the mathematical techniques of non-relativistic quantum mechanics, this is in some ways an awkward oversimplification, as photons are by nature intrinsically relativistic. Because photons have zero rest mass, no wave function defined for a photon can have all the properties familiar from wave functions in non-relativistic quantum mechanics.[g] In order to avoid these difficulties, physicists employ the second-quantized theory of photons described below, quantum electrodynamics, in which photons are quantized excitations of electromagnetic modes.[67]
Another difficulty is finding the proper analogue for the uncertainty principle, an idea frequently attributed to Heisenberg, who introduced the concept in analyzing a thought experiment involving an electron and a high-energy photon. However, Heisenberg did not give precise mathematical definitions of what the "uncertainty" in these measurements meant. The precise mathematical statement of the position–momentum uncertainty principle is due to Kennard, Pauli, and Weyl.[68][69] The uncertainty principle applies to situations where an experimenter has a choice of measuring either one of two "canonically conjugate" quantities, like the position and the momentum of a particle. According to the uncertainty principle, no matter how the particle is prepared, it is not possible to make a precise prediction for both of the two alternative measurements: if the outcome of the position measurement is made more certain, the outcome of the momentum measurement becomes less so, and vice versa.[70] A coherent state minimizes the overall uncertainty as far as quantum mechanics allows.[67] Quantum optics makes use of coherent states for modes of the electromagnetic field. There is a tradeoff, reminiscent of the position–momentum uncertainty relation, between measurements of an electromagnetic wave's amplitude and its phase.[67] This is sometimes informally expressed in terms of the uncertainty in the number of photons present in the electromagnetic wave, $$\Delta N$$, and the uncertainty in the phase of the wave, $$\Delta \phi$$. However, this cannot be an uncertainty relation of the Kennard–Pauli–Weyl type, since unlike position and momentum, the phase $$\phi$$ cannot be represented by a Hermitian operator.[71]
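The informal number–phase tradeoff just mentioned is often quoted, purely as an order-of-magnitude heuristic rather than a theorem of the Kennard–Pauli–Weyl kind, as
$$\Delta N\,\Delta\phi \gtrsim 1.$$
Coherent states of a single mode come close to saturating this heuristic bound, which is one reason they play the central role in quantum optics noted above.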
Bose–Einstein model of a photon gas
Main articles: Bose gas, Bose–Einstein statistics, Spin-statistics theorem, Gas in a box, and Photon gas
In 1924, Satyendra Nath Bose derived Planck's law of black-body radiation without using any electromagnetism, but rather by using a modification of coarse-grained counting of phase space.[72] Einstein showed that this modification is equivalent to assuming that photons are rigorously identical and that it implied a "mysterious non-local interaction",[73][74] now understood as the requirement for a symmetric quantum mechanical state. This work led to the concept of coherent states and the development of the laser. In the same papers, Einstein extended Bose's formalism to material particles (bosons) and predicted that they would condense into their lowest quantum state at low enough temperatures; this Bose–Einstein condensation was observed experimentally in 1995.[75] It was later used by Lene Hau to slow, and then completely stop, light in 1999[76] and 2001.[77]
The modern view on this is that photons are, by virtue of their integer spin, bosons (as opposed to fermions with half-integer spin). By the spin-statistics theorem, all bosons obey Bose–Einstein statistics (whereas all fermions obey Fermi–Dirac statistics).[78]
Stimulated and spontaneous emission
Main articles: Stimulated emission and Laser
Stimulated emission (in which photons "clone" themselves) was predicted by Einstein in his kinetic analysis, and led to the development of the laser. Einstein's derivation inspired further developments in the quantum treatment of light, which led to the statistical interpretation of quantum mechanics.
In 1916, Albert Einstein showed that Planck's radiation law could be derived from a semi-classical, statistical treatment of photons and atoms, which implies a link between the rates at which atoms emit and absorb photons. The condition follows from the assumption that the emission and the absorption of radiation by the atoms are independent processes, and that thermal equilibrium is established through the radiation's interaction with the atoms. Consider a cavity in thermal equilibrium with all parts of itself, filled with electromagnetic radiation, whose atoms can emit and absorb that radiation. Thermal equilibrium requires that the energy density $$\rho(\nu)$$ of photons with frequency $$\nu$$ (which is proportional to their number density) is, on average, constant in time; hence, the rate at which photons of any particular frequency are emitted must equal the rate at which they are absorbed.[79]
Einstein began by postulating simple proportionality relations for the different reaction rates involved. In his model, the rate $$R_{ji}$$ for a system to absorb a photon of frequency $$\nu$$ and transition from a lower energy $$E_{j}$$ to a higher energy $$E_{i}$$ is proportional to the number $$N_{j}$$ of atoms with energy $$E_{j}$$ and to the energy density $$\rho(\nu)$$ of ambient photons of that frequency,
$$R_{ji}=N_{j} B_{ji} \rho(\nu)$$
where $$B_{ji}$$ is the rate constant for absorption. For the reverse process, there are two possibilities: spontaneous emission of a photon, or the emission of a photon initiated by the interaction of the atom with a passing photon and the return of the atom to the lower-energy state. Following Einstein's approach, the corresponding rate $$R_{ij}$$ for the emission of photons of frequency $$\nu$$ and transition from a higher energy $$E_{i}$$ to a lower energy $$E_{j}$$ is
$$R_{ij}=N_{i} A_{ij} + N_{i} B_{ij} \rho(\nu)$$
where $$A_{ij}$$ is the rate constant for emitting a photon spontaneously, and $$B_{ij}$$ is the rate constant for emissions in response to ambient photons (induced or stimulated emission). In thermodynamic equilibrium, the number of atoms in state $$i$$ and those in state $$j$$ must, on average, be constant; hence, the rates $$R_{ji}$$ and $$R_{ij}$$ must be equal. Also, by arguments analogous to the derivation of Boltzmann statistics, the ratio of $$N_{i}$$ to $$N_{j}$$ is $$(g_{i}/g_{j})\,\exp[(E_{j}-E_{i})/(kT)]$$, where $$g_{i}$$ and $$g_{j}$$ are the degeneracies of the states $$i$$ and $$j$$, respectively, $$E_{i}$$ and $$E_{j}$$ their energies, $$k$$ the Boltzmann constant and $$T$$ the system's temperature. From this, it is readily derived that $$g_iB_{ij}=g_jB_{ji}$$ and
$$A_{ij}=\frac{8 \pi h \nu^{3}}{c^{3}} B_{ij}.$$
The $$A_{ij}$$ and $$B_{ij}$$ are collectively known as the Einstein coefficients.[80]
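For readers who want the compressed algebra spelled out (a sketch of the standard equilibrium argument, added here): setting $$R_{ji}=R_{ij}$$ and solving for the energy density gives
$$\rho(\nu)=\frac{A_{ij}/B_{ij}}{\dfrac{N_{j}B_{ji}}{N_{i}B_{ij}}-1}=\frac{A_{ij}/B_{ij}}{\dfrac{g_{j}B_{ji}}{g_{i}B_{ij}}\,e^{h\nu/(kT)}-1},$$
using $$E_{i}-E_{j}=h\nu$$. Requiring that this agree with Planck's law
$$\rho(\nu)=\frac{8\pi h\nu^{3}}{c^{3}}\,\frac{1}{e^{h\nu/(kT)}-1}$$
at every temperature forces both $$g_{i}B_{ij}=g_{j}B_{ji}$$ and the relation between $$A_{ij}$$ and $$B_{ij}$$ quoted above.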
Einstein could not fully justify his rate equations, but claimed that it should be possible to calculate the coefficients $$A_{ij}$$, $$B_{ji}$$ and $$B_{ij}$$ once physicists had obtained "mechanics and electrodynamics modified to accommodate the quantum hypothesis".[81] Not long thereafter, in 1926, Paul Dirac derived the $$B_{ij}$$ rate constants by using a semiclassical approach,[82] and, in 1927, succeeded in deriving all the rate constants from first principles within the framework of quantum theory.[83][84] Dirac's work was the foundation of quantum electrodynamics, i.e., the quantization of the electromagnetic field itself. Dirac's approach is also called second quantization or quantum field theory;[85][86][87] earlier quantum mechanical treatments only treat material particles as quantum mechanical, not the electromagnetic field.
Einstein was troubled by the fact that his theory seemed incomplete, since it did not determine the direction of a spontaneously emitted photon. A probabilistic nature of light-particle motion was first considered by Newton in his treatment of birefringence and, more generally, of the splitting of light beams at interfaces into a transmitted beam and a reflected beam. Newton hypothesized that hidden variables in the light particle determined which of the two paths a single photon would take.[42] Similarly, Einstein hoped for a more complete theory that would leave nothing to chance, beginning his separation[54] from quantum mechanics. Ironically, Max Born's probabilistic interpretation of the wave function[88][89] was inspired by Einstein's later work searching for a more complete theory.[90]
Quantum field theory
Quantization of the electromagnetic field
Main article: Quantum field theory
Different electromagnetic modes (such as those depicted here) can be treated as independent simple harmonic oscillators. A photon corresponds to a unit of energy E = hν in its electromagnetic mode.
In 1910, Peter Debye derived Planck's law of black-body radiation from a relatively simple assumption.[91] He decomposed the electromagnetic field in a cavity into its Fourier modes, and assumed that the energy in any mode was an integer multiple of $$h\nu$$, where $$\nu$$ is the frequency of the electromagnetic mode. Planck's law of black-body radiation follows immediately as a geometric sum. However, Debye's approach failed to give the correct formula for the energy fluctuations of black-body radiation, which were derived by Einstein in 1909.[50]
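The geometric sum referred to here can be made explicit (a standard calculation, sketched for completeness): with mode energies restricted to $$E_{n}=nh\nu$$, the Boltzmann-weighted mean energy per mode is
$$\langle E\rangle=\frac{\sum_{n=0}^{\infty}nh\nu\,e^{-nh\nu/(kT)}}{\sum_{n=0}^{\infty}e^{-nh\nu/(kT)}}=\frac{h\nu}{e^{h\nu/(kT)}-1},$$
and multiplying by the density of modes $$8\pi\nu^{2}/c^{3}$$ yields Planck's law.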
In 1925, Born, Heisenberg and Jordan reinterpreted Debye's concept in a key way.[92] As may be shown classically, the Fourier modes of the electromagnetic field—a complete set of electromagnetic plane waves indexed by their wave vector k and polarization state—are equivalent to a set of uncoupled simple harmonic oscillators. Treated quantum mechanically, the energy levels of such oscillators are known to be $$E=nh\nu$$, where $$\nu$$ is the oscillator frequency. The key new step was to identify an electromagnetic mode with energy $$E=nh\nu$$ as a state with $$n$$ photons, each of energy $$h\nu$$. This approach gives the correct energy fluctuation formula.
Feynman diagram of two electrons interacting by exchange of a virtual photon.
Dirac took this one step further.[83][84] He treated the interaction between a charge and an electromagnetic field as a small perturbation that induces transitions in the photon states, changing the numbers of photons in the modes, while conserving energy and momentum overall. Dirac was able to derive Einstein's $$A_{ij}$$ and $$B_{ij}$$ coefficients from first principles, and showed that the Bose–Einstein statistics of photons is a natural consequence of quantizing the electromagnetic field correctly (Bose's reasoning went in the opposite direction; he derived Planck's law of black-body radiation by assuming B–E statistics). In Dirac's time, it was not yet known that all bosons, including photons, must obey Bose–Einstein statistics.
Dirac's second-order perturbation theory can involve virtual photons, transient intermediate states of the electromagnetic field; the static electric and magnetic interactions are mediated by such virtual photons. In such quantum field theories, the probability amplitude of observable events is calculated by summing over all possible intermediate steps, even ones that are unphysical; hence, virtual photons are not constrained to satisfy $$E=pc$$, and may have extra polarization states; depending on the gauge used, virtual photons may have three or four polarization states, instead of the two states of real photons. Although these transient virtual photons can never be observed, they contribute measurably to the probabilities of observable events. Indeed, such second-order and higher-order perturbation calculations can give apparently infinite contributions to the sum. Such unphysical results are corrected for using the technique of renormalization.[93]
Other virtual particles may contribute to the summation as well; for example, two photons may interact indirectly through virtual electron–positron pairs.[94] Such photon–photon scattering (see two-photon physics), as well as electron–photon scattering, is intended to be one of the modes of operation of the planned particle accelerator, the International Linear Collider.[95]
In modern physics notation, the quantum state of the electromagnetic field is written as a Fock state, a tensor product of the states for each electromagnetic mode
$$|n_{k_0}\rangle\otimes|n_{k_1}\rangle\otimes\dots\otimes|n_{k_n}\rangle\dots$$
where $$|n_{k_i}\rangle$$ represents the state in which $$\, n_{k_i}$$ photons are in the mode $$k_{i}$$. In this notation, the creation of a new photon in mode $$k_{i}$$ (e.g., emitted from an atomic transition) is written as $$|n_{k_i}\rangle \rightarrow|n_{k_i}+1\rangle$$. This notation merely expresses the concept of Born, Heisenberg and Jordan described above, and does not add any physics.
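In the standard ladder-operator notation of second quantization (a convention assumed here, not spelled out in the text above), that transition is implemented by the creation operator of the mode:
$$a_{k_i}^{\dagger}|n_{k_i}\rangle=\sqrt{n_{k_i}+1}\,|n_{k_i}+1\rangle,\qquad a_{k_i}|n_{k_i}\rangle=\sqrt{n_{k_i}}\,|n_{k_i}-1\rangle.$$
The $$\sqrt{n_{k_i}+1}$$ factor makes emission into an already-populated mode more likely, which is the quantum origin of the stimulated emission discussed earlier.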
As a gauge boson
Main article: Gauge theory
The electromagnetic field can be understood as a gauge field, i.e., as a field that results from requiring that a gauge symmetry holds independently at every position in spacetime.[96] For the electromagnetic field, this gauge symmetry is the Abelian U(1) symmetry of complex numbers of absolute value 1, which reflects the ability to vary the phase of a complex field without affecting observables or real valued functions made from it, such as the energy or the Lagrangian.
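Concretely, in the conventions common in QED textbooks (a sketch; signs and the placement of the coupling constant $$e$$ vary from book to book), the local U(1) transformation acts on a charged field $$\psi$$ and the electromagnetic potential $$A_{\mu}$$ as
$$\psi(x)\rightarrow e^{i\theta(x)}\psi(x),\qquad A_{\mu}(x)\rightarrow A_{\mu}(x)-\frac{1}{e}\,\partial_{\mu}\theta(x),$$
and demanding invariance of the Lagrangian under a position-dependent phase $$\theta(x)$$ is precisely what requires the existence of the gauge field whose quanta are photons.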
The quanta of an Abelian gauge field must be massless, uncharged bosons, as long as the symmetry is not broken; hence, the photon is predicted to be massless, and to have zero electric charge and integer spin. The particular form of the electromagnetic interaction specifies that the photon must have spin ±1; thus, its helicity must be $$\pm \hbar$$. These two spin components correspond to the classical concepts of right-handed and left-handed circularly polarized light. However, the transient virtual photons of quantum electrodynamics may also adopt unphysical polarization states.[96]
In the prevailing Standard Model of physics, the photon is one of four gauge bosons in the electroweak interaction; the other three are denoted W+, W− and Z0 and are responsible for the weak interaction. Unlike the photon, these gauge bosons have mass, owing to a mechanism that breaks their SU(2) gauge symmetry. The unification of the photon with W and Z gauge bosons in the electroweak interaction was accomplished by Sheldon Glashow, Abdus Salam and Steven Weinberg, for which they were awarded the 1979 Nobel Prize in physics.[97][98][99] Physicists continue to hypothesize grand unified theories that connect these four gauge bosons with the eight gluon gauge bosons of quantum chromodynamics; however, key predictions of these theories, such as proton decay, have not been observed experimentally.[100]
Hadronic properties of the photon
Main article: Photon structure function
Measurements of the interaction between energetic photons and hadrons show that the interaction is much more intense than expected from the interaction of photons with the hadron's electric charge alone. Furthermore, the interaction of energetic photons with protons is similar to the interaction of photons with neutrons[101] in spite of the fact that the electric charge structures of protons and neutrons are substantially different. A theory called Vector Meson Dominance (VMD) was developed to explain this effect. According to VMD, the photon is a superposition of a pure electromagnetic photon, which interacts only with electric charges, and vector mesons.[102] However, if experimentally probed at very short distances, the intrinsic structure of the photon is recognized as a flux of quark and gluon components, quasi-free according to asymptotic freedom in QCD and described by the photon structure function.[103][104] A comprehensive comparison of data with theoretical predictions was presented in a review in 2000.[105]
Contributions to the mass of a system
The energy of a system that emits a photon is decreased by the energy E of the photon as measured in the rest frame of the emitting system, which may result in a reduction in mass in the amount $${E}/{c^2}$$. Similarly, the mass of a system that absorbs a photon is increased by a corresponding amount. As an application, the energy balance of nuclear reactions involving photons is commonly written in terms of the masses of the nuclei involved, and terms of the form $${E}/{c^2}$$ for the gamma photons (and for other relevant energies, such as the recoil energy of nuclei).[106]
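As an illustrative figure (worked here for orientation, not taken from the cited reference): a nucleus that emits a 1 MeV gamma photon loses
$$\Delta m=\frac{E}{c^{2}}=\frac{1.60\times10^{-13}\ \text{J}}{(3.00\times10^{8}\ \text{m/s})^{2}}\approx 1.8\times10^{-30}\ \text{kg},$$
roughly twice the mass of an electron.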
This concept is applied in key predictions of quantum electrodynamics (QED, see above). In that theory, the mass of electrons (or, more generally, leptons) is modified by including the mass contributions of virtual photons, in a technique known as renormalization. Such "radiative corrections" contribute to a number of predictions of QED, such as the magnetic dipole moment of leptons, the Lamb shift, and the hyperfine structure of bound lepton pairs, such as muonium and positronium.[107]
Since photons contribute to the stress–energy tensor, they exert a gravitational attraction on other objects, according to the theory of general relativity. Conversely, photons are themselves affected by gravity; their normally straight trajectories may be bent by warped spacetime, as in gravitational lensing, and their frequencies may be lowered by moving to a higher gravitational potential, as in the Pound–Rebka experiment. However, these effects are not specific to photons; exactly the same effects would be predicted for classical electromagnetic waves.[108]
In matter
Light that travels through transparent matter does so at a lower speed than c, the speed of light in a vacuum. The factor by which the speed is decreased is called the refractive index of the material. In a classical wave picture, the slowing can be explained by the light inducing electric polarization in the matter, the polarized matter radiating new light, and that new light interfering with the original light wave to form a delayed wave. In a particle picture, the slowing can instead be described as a blending of the photon with quantum excitations of the matter to produce quasi-particles known as polaritons (see this list for some other quasi-particles); such a polariton has a nonzero effective mass, which means that it cannot travel at c. Light of different frequencies may travel through matter at different speeds; this is called dispersion (not to be confused with scattering). In some cases, it can result in extremely slow speeds of light in matter. The effects of photon interactions with other quasi-particles may be observed directly in Raman scattering and Brillouin scattering.[109]
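For example (an illustrative number, assuming a typical glass): with refractive index $$n\approx 1.5$$, the speed of light in the medium is
$$v=\frac{c}{n}\approx\frac{3.0\times10^{8}\ \text{m/s}}{1.5}=2.0\times10^{8}\ \text{m/s}.$$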
Photons can be scattered by matter. For example, photons engage in so many collisions on the way from the core of the Sun that radiant energy can take about a million years to reach the surface;[110] however, once in open space, a photon takes only 8.3 minutes to reach Earth.[111]
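The 8.3-minute figure is simply the light-travel time over one astronomical unit:
$$t=\frac{d}{c}=\frac{1.496\times10^{11}\ \text{m}}{2.998\times10^{8}\ \text{m/s}}\approx 499\ \text{s}\approx 8.3\ \text{min}.$$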
Photons can also be absorbed by nuclei, atoms or molecules, provoking transitions between their energy levels. A classic example is the molecular transition of retinal ($$C_{20}H_{28}O$$), which is responsible for vision, as discovered in 1958 by Nobel laureate biochemist George Wald and co-workers. The absorption provokes a cis–trans isomerization that, in combination with other such transitions, is transduced into nerve impulses. The absorption of photons can even break chemical bonds, as in the photodissociation of chlorine; this is the subject of photochemistry.[112][113]
Technological applications
Photons have many applications in technology. These examples are chosen to illustrate applications of photons per se, rather than general optical devices such as lenses, etc. that could operate under a classical theory of light. The laser is an extremely important application and is discussed above under stimulated emission.
Individual photons can be detected by several methods. The classic photomultiplier tube exploits the photoelectric effect: a photon of sufficient energy strikes a metal plate and knocks free an electron, initiating an ever-amplifying avalanche of electrons. Semiconductor charge-coupled device chips use a similar effect: an incident photon generates a charge on a microscopic capacitor that can be detected. Other detectors such as Geiger counters use the ability of photons to ionize gas molecules contained in the device, causing a detectable change of conductivity of the gas.[114]
Planck's energy formula $$E=h\nu$$ is often used by engineers and chemists in design, both to compute the change in energy resulting from a photon absorption and to determine the frequency of the light emitted from a given photon emission. For example, the emission spectrum of a gas-discharge lamp can be altered by filling it with (mixtures of) gases with different electronic energy level configurations.[115]
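A typical calculation of this kind (a worked example, added for illustration): a photon of green light with $$\lambda=500\ \text{nm}$$ carries an energy of
$$E=h\nu=\frac{hc}{\lambda}=\frac{(6.63\times10^{-34}\ \text{J s})(3.00\times10^{8}\ \text{m/s})}{5.00\times10^{-7}\ \text{m}}\approx 4.0\times10^{-19}\ \text{J}\approx 2.5\ \text{eV},$$
which is why electronic transitions of a few electronvolts set the emission colors of such lamps.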
Under some conditions, an energy transition can be excited by "two" photons that individually would be insufficient. This allows for higher resolution microscopy, because the sample absorbs energy only in the region where two beams of different colors overlap significantly, which can be made much smaller than the excitation volume of a single beam (see two-photon excitation microscopy). Moreover, these photons cause less damage to the sample, since they are of lower energy.[116]
In some cases, two energy transitions can be coupled so that, as one system absorbs a photon, another nearby system "steals" its energy and re-emits a photon of a different frequency. This is the basis of fluorescence resonance energy transfer, a technique that is used in molecular biology to study the interaction of suitable proteins.[117]
Several different kinds of hardware random number generators involve the detection of single photons. In one example, for each bit in the random sequence that is to be produced, a photon is sent to a beam-splitter. In such a situation, there are two possible outcomes of equal probability. The actual outcome is used to determine whether the next bit in the sequence is "0" or "1".[118][119]
Quantum optics and computation
Much research has been devoted to applications of photons in the field of quantum optics. Photons seem well-suited to be elements of an extremely fast quantum computer, and the quantum entanglement of photons is a focus of research. Nonlinear optical processes are another active research area, with topics such as two-photon absorption, self-phase modulation, modulational instability and optical parametric oscillators. However, such processes generally do not require the assumption of photons per se; they may often be modeled by treating atoms as nonlinear oscillators. The nonlinear process of spontaneous parametric down conversion is often used to produce single-photon states. Finally, photons are essential in some aspects of optical communication, especially for quantum cryptography.[h]
Two-photon physics studies interactions between photons, which are rare. In 2018, MIT researchers announced the discovery of bound photon triplets, which may involve polaritons.[120][121]
See also
Advanced Photon Source at Argonne National Laboratory
Ballistic photon
Dirac equation
Doppler effect
High energy X-ray imaging technology
Luminiferous aether
Medipix
Phonon
Photography
Photon counting
Photon energy
Photon epoch
Photon polarization
Photonic molecule
Photonics
Single-photon source
Spin angular momentum of light
Static forces and virtual-particle exchange
Notes
The photon's invariant mass (also called "rest mass" for massive particles) is believed to be exactly zero. This is the notion of particle mass generally used by modern physicists. The photon does have a nonzero relativistic mass, depending on its energy, but this varies according to the frame of reference.
Although the 1967 Elsevier translation of Planck's Nobel Lecture interprets Planck's Lichtquant as "photon", the more literal 1922 translation by Hans Thacher Clarke and Ludwik Silberstein (Planck, Max (1922). The Origin and Development of the Quantum Theory. Clarendon Press.) uses "light-quantum". No evidence is known that Planck himself had used the term "photon" by 1926.
Isaac Asimov credits Arthur Compton with defining quanta of energy as photons in 1923.[12]
The mass of the photon is believed to be exactly zero. Some sources also refer to the relativistic mass, which is just the energy scaled to units of mass. For a photon with wavelength λ or energy E, this is h/λc or E/c2. This usage for the term "mass" is no longer common in scientific literature. Further info: What is the mass of a photon?
The phrase "no matter how intense" refers to intensities below approximately 1013 W/cm2 at which point perturbation theory begins to break down. In contrast, in the intense regime, which for visible light is above approximately 1014 W/cm2, the classical wave description correctly predicts the energy acquired by electrons, called ponderomotive energy. (See also: Boreham, Bruce W.; Hora, Heinrich; Bolton, Paul R. (1996). "Photon density and the correspondence principle of electromagnetic interaction". AIP Conference Proceedings. 369: 1234–1243. Bibcode:1996AIPC..369.1234B. doi:10.1063/1.50410.) By comparison, sunlight is only about 0.1 W/cm2.
These experiments produce results that cannot be explained by any classical theory of light, since they involve anticorrelations that result from the quantum measurement process. In 1974, the first such experiment was carried out by Clauser, who reported a violation of a classical Cauchy–Schwarz inequality. In 1977, Kimble et al. demonstrated an analogous anti-bunching effect of photons interacting with a beam splitter; this approach was simplified and sources of error eliminated in the photon-anticorrelation experiment of Grangier et al. (1986). This work is reviewed and simplified further in Thorn et al. (2004). (These references are listed below.)
The issue was first formulated by Theodore Duddell Newton and Eugene Wigner.[63][64][65] The challenges arise from the fundamental nature of the Lorentz group, which describes the symmetries of spacetime in special relativity. Unlike the generators of Galilean transformations, the generators of Lorentz boosts do not commute, and so simultaneously assigning low uncertainties to all coordinates of a relativistic particle's position becomes problematic.[66]
Introductory-level material on the various sub-fields of quantum optics can be found in Fox, M. (2006). Quantum Optics: An Introduction. Oxford University Press. ISBN 978-0-19-856673-1.
References
Amsler, C.; et al. (Particle Data Group) (2008). "Review of Particle Physics: Gauge and Higgs bosons" (PDF). Physics Letters B. 667 (1): 1. Bibcode:2008PhLB..667....1A. doi:10.1016/j.physletb.2008.07.018.
Joos, George (1951). Theoretical Physics. London and Glasgow: Blackie and Son Limited. p. 679.
"December 18, 1926: Gilbert Lewis coins "photon" in letter to Nature". www.aps.org. Retrieved 2019-03-09.
"Gilbert N. Lewis". Atomic Heritage Foundation. Retrieved 2019-03-09.
Kragh, Helge (2014). "Photon: New light on an old name". arXiv:1401.0293 [physics.hist-ph].
Compton, Arthur H. (1965) [12 Dec 1927]. "X-rays as a branch of optics" (PDF). From Nobel Lectures, Physics 1922–1941. Amsterdam: Elsevier Publishing Company.
Kimble, H.J.; Dagenais, M.; Mandel, L. (1977). "Photon Anti-bunching in Resonance Fluorescence" (PDF). Physical Review Letters. 39 (11): 691–695. Bibcode:1977PhRvL..39..691K. doi:10.1103/PhysRevLett.39.691.
Grangier, P.; Roger, G.; Aspect, A. (1986). "Experimental Evidence for a Photon Anticorrelation Effect on a Beam Splitter: A New Light on Single-Photon Interferences". Europhysics Letters. 1 (4): 173–179. Bibcode:1986EL......1..173G. CiteSeerX 10.1.1.178.4356. doi:10.1209/0295-5075/1/4/004.
Kragh, Helge (2000-12-01). "Max Planck: the reluctant revolutionary". Physics World. 13 (12): 31. doi:10.1088/2058-7058/13/12/34.
Einstein, A. (1905). "Über einen die Erzeugung und Verwandlung des Lichtes betreffenden heuristischen Gesichtspunkt" (PDF). Annalen der Physik (in German). 17 (6): 132–148. Bibcode:1905AnP...322..132E. doi:10.1002/andp.19053220607.. An English translation is available from Wikisource.
"Discordances entre l'expérience et la théorie électromagnétique du rayonnement." In Électrons et Photons. Rapports et Discussions de Cinquième Conseil de Physique, edited by Institut International de Physique Solvay. Paris: Gauthier-Villars, pp. 55–85.
Asimov, Isaac (1983). The Neutrino: Ghost Particle of the Atom. Garden City, NY: Avon Books. ISBN 978-0-380-00483-6. and Asimov, Isaac (1971). The Universe: From Flat Earth to Quasar. New York: Walker. ISBN 978-0-8027-0316-3. LCCN 66022515.
Villard, P. (1900). "Sur la réflexion et la réfraction des rayons cathodiques et des rayons déviables du radium". Comptes Rendus des Séances de l'Académie des Sciences (in French). 130: 1010–1012.
Villard, P. (1900). "Sur le rayonnement du radium". Comptes Rendus des Séances de l'Académie des Sciences (in French). 130: 1178–1179.
Rutherford, E.; Andrade, E.N.C. (1914). "The Wavelength of the Soft Gamma Rays from Radium B". Philosophical Magazine. 27 (161): 854–868. doi:10.1080/14786440508635156.
Andrew Liddle (2015). An Introduction to Modern Cosmology. John Wiley & Sons. p. 16. ISBN 978-1-118-69025-3.
SantoPietro, David. "Photon Energy". Khan Academy. Retrieved 2020-03-15.
Frisch, David H.; Thorndike, Alan M. (1964). Elementary Particles. Princeton, NJ: David Van Nostrand. p. 22.
Kobychev, V.V.; Popov, S.B. (2005). "Constraints on the photon charge from observations of extragalactic sources". Astronomy Letters. 31 (3): 147–151. arXiv:hep-ph/0411398. Bibcode:2005AstL...31..147K. doi:10.1134/1.1883345.
Matthew D. Schwartz (2014). Quantum Field Theory and the Standard Model. Cambridge University Press. p. 66. ISBN 978-1-107-03473-0.
Role as gauge boson and polarization section 5.1 in Aitchison, I.J.R.; Hey, A.J.G. (1993). Gauge Theories in Particle Physics. IOP Publishing. ISBN 978-0-85274-328-7.
See p.31 in Amsler, C.; et al. (2008). "Review of Particle Physics" (PDF). Physics Letters B. 667 (1–5): 1–1340. Bibcode:2008PhLB..667....1A. doi:10.1016/j.physletb.2008.07.018. PMID 10020536.
Halliday, David; Resnick, Robert; Walker, Jerl (2005), Fundamental of Physics (7th ed.), John Wiley and Sons, Inc., ISBN 978-0-471-23231-5
See section 1.6 in Alonso & Finn 1968, Section 1.6
Davison E. Soper, Electromagnetic radiation is made of photons, Institute of Theoretical Science, University of Oregon
This property was experimentally verified by Raman and Bhagavantam in 1931: Raman, C.V.; Bhagavantam, S. (1931). "Experimental proof of the spin of the photon" (PDF). Indian Journal of Physics. 6 (3244): 353. Bibcode:1932Natur.129...22R. doi:10.1038/129022a0. hdl:10821/664. Archived from the original (PDF) on 2016-06-03. Retrieved 2008-12-28.
Burgess, C.; Moore, G. (2007). "1.3.3.2". The Standard Model. A Primer. Cambridge University Press. ISBN 978-0-521-86036-9.
Griffiths, David J. (2008), Introduction to Elementary Particles (2nd revised ed.), WILEY-VCH, ISBN 978-3-527-40601-2
Alonso & Finn 1968, Section 9.3
E.g., Appendix XXXII in Born, Max; Blin-Stoyle, Roger John; Radcliffe, J.M. (1989). Atomic Physics. Courier Corporation. ISBN 978-0-486-65984-8.
Alan E. Willner. "Twisted Light Could Dramatically Boost Data Rates: Orbital angular momentum could take optical and radio communication to new heights". 2016.
Mermin, David (February 1984). "Relativity without light". American Journal of Physics. 52 (2): 119–124. Bibcode:1984AmJPh..52..119M. doi:10.1119/1.13917.
Plimpton, S.; Lawton, W. (1936). "A Very Accurate Test of Coulomb's Law of Force Between Charges". Physical Review. 50 (11): 1066. Bibcode:1936PhRv...50.1066P. doi:10.1103/PhysRev.50.1066.
Williams, E.; Faller, J.; Hill, H. (1971). "New Experimental Test of Coulomb's Law: A Laboratory Upper Limit on the Photon Rest Mass". Physical Review Letters. 26 (12): 721. Bibcode:1971PhRvL..26..721W. doi:10.1103/PhysRevLett.26.721.
Chibisov, G V (1976). "Astrophysical upper limits on the photon rest mass". Soviet Physics Uspekhi. 19 (7): 624. Bibcode:1976SvPhU..19..624C. doi:10.1070/PU1976v019n07ABEH005277.
Lakes, Roderic (1998). "Experimental Limits on the Photon Mass and Cosmic Magnetic Vector Potential". Physical Review Letters. 80 (9): 1826. Bibcode:1998PhRvL..80.1826L. doi:10.1103/PhysRevLett.80.1826.
Amsler, C; Doser, M; Antonelli, M; Asner, D; Babu, K; Baer, H; Band, H; Barnett, R; et al. (2008). "Review of Particle Physics⁎" (PDF). Physics Letters B. 667 (1–5): 1. Bibcode:2008PhLB..667....1A. doi:10.1016/j.physletb.2008.07.018. Summary Table
Adelberger, Eric; Dvali, Gia; Gruzinov, Andrei (2007). "Photon-Mass Bound Destroyed by Vortices". Physical Review Letters. 98 (1): 010402. arXiv:hep-ph/0306245. Bibcode:2007PhRvL..98a0402A. doi:10.1103/PhysRevLett.98.010402. PMID 17358459.
Descartes, R. (1637). Discours de la méthode (Discourse on Method) (in French). Imprimerie de Ian Maire. ISBN 978-0-268-00870-3.
Hooke, R. (1667). Micrographia: or some physiological descriptions of minute bodies made by magnifying glasses with observations and inquiries thereupon ... London: Royal Society of London. ISBN 978-0-486-49564-4.
Huygens, C. (1678). Traité de la lumière (in French).. An English translation is available from Project Gutenberg
Newton, I. (1952) [1730]. Opticks (4th ed.). Dover, NY: Dover Publications. Book II, Part III, Propositions XII–XX, Queries 25–29. ISBN 978-0-486-60205-9.
Buchwald, J.Z. (1989). The Rise of the Wave Theory of Light: Optical Theory and Experiment in the Early Nineteenth Century. Physics Today. 43. University of Chicago Press. pp. 78–80. Bibcode:1990PhT....43d..78B. doi:10.1063/1.2810533. ISBN 978-0-226-07886-1. OCLC 18069573.
Maxwell, J.C. (1865). "A Dynamical Theory of the Electromagnetic Field". Philosophical Transactions of the Royal Society. 155: 459–512. Bibcode:1865RSPT..155..459C. doi:10.1098/rstl.1865.0008. This article followed a presentation by Maxwell on 8 December 1864 to the Royal Society.
Hertz, H. (1888). "Über Strahlen elektrischer Kraft". Sitzungsberichte der Preussischen Akademie der Wissenschaften (Berlin) (in German). 1888: 1297–1307.
Frequency-dependence of luminiscence pp. 276ff., photoelectric effect section 1.4 in Alonso & Finn 1968
Wien, W. (1911). "Wilhelm Wien Nobel Lecture". nobelprize.org.
Planck, M. (1901). "Über das Gesetz der Energieverteilung im Normalspectrum". Annalen der Physik (in German). 4 (3): 553–563. Bibcode:1901AnP...309..553P. doi:10.1002/andp.19013090310. English translation
Planck, M. (1920). "Max Planck's Nobel Lecture". nobelprize.org.
Einstein, A. (1909). "Über die Entwicklung unserer Anschauungen über das Wesen und die Konstitution der Strahlung" (PDF). Physikalische Zeitschrift (in German). 10: 817–825.. An English translation is available from Wikisource.
Presentation speech by Svante Arrhenius for the 1921 Nobel Prize in Physics, December 10, 1922. Online text from [nobelprize.org], The Nobel Foundation 2008. Access date 2008-12-05.
Einstein, A. (1916). "Zur Quantentheorie der Strahlung". Mitteilungen der Physikalischen Gesellschaft zu Zürich. 16: 47. Also Physikalische Zeitschrift, 18, 121–128 (1917). (in German)
Compton, A. (1923). "A Quantum Theory of the Scattering of X-rays by Light Elements". Physical Review. 21 (5): 483–502. Bibcode:1923PhRv...21..483C. doi:10.1103/PhysRev.21.483.
Pais, A. (1982). Subtle is the Lord: The Science and the Life of Albert Einstein. Oxford University Press. ISBN 978-0-19-853907-0.
Millikan, R.A (1924). "Robert A. Millikan's Nobel Lecture".
Hendry, J. (1980). "The development of attitudes to the wave–particle duality of light and quantum theory, 1900–1920". Annals of Science. 37 (1): 59–79. doi:10.1080/00033798000200121.
Bohr, N.; Kramers, H.A.; Slater, J.C. (1924). "The Quantum Theory of Radiation". Philosophical Magazine. 47 (281): 785–802. doi:10.1080/14786442408565262. Also Zeitschrift für Physik, 24, 69 (1924).
Howard, Don (December 2004). "Who Invented the "Copenhagen Interpretation"? A Study in Mythology". Philosophy of Science. 71 (5): 669–682. doi:10.1086/425941. ISSN 0031-8248. JSTOR 10.1086/425941.
Heisenberg, W. (1933). "Heisenberg Nobel lecture".
Mandel, L. (1976). E. Wolf (ed.). The case for and against semiclassical radiation theory. Progress in Optics. Progress in Optics. 13. North-Holland. pp. 27–69. Bibcode:1976PrOpt..13...27M. doi:10.1016/S0079-6638(08)70018-0. ISBN 978-0-444-10806-7.
Taylor, G.I. (1909). Interference fringes with feeble light. Proceedings of the Cambridge Philosophical Society. 15. pp. 114–115.
Saleh, B.E.A. & Teich, M.C. (2007). Fundamentals of Photonics. Wiley. ISBN 978-0-471-35832-9.
Newton, T.D.; Wigner, E.P. (1949). "Localized states for elementary particles" (PDF). Reviews of Modern Physics. 21 (3): 400–406. Bibcode:1949RvMP...21..400N. doi:10.1103/RevModPhys.21.400.
Bialynicki-Birula, I. (1994). "On the wave function of the photon" (PDF). Acta Physica Polonica A. 86 (1–2): 97–116. doi:10.12693/APhysPolA.86.97.
Sipe, J.E. (1995). "Photon wave functions". Physical Review A. 52 (3): 1875–1883. Bibcode:1995PhRvA..52.1875S. doi:10.1103/PhysRevA.52.1875. PMID 9912446.
Bialynicki-Birula, I. (1996). Photon wave function. Progress in Optics. Progress in Optics. 36. pp. 245–294. Bibcode:1996PrOpt..36..245B. doi:10.1016/S0079-6638(08)70316-0. ISBN 978-0-444-82530-8.
Scully, M.O.; Zubairy, M.S. (1997). Quantum Optics. Cambridge: Cambridge University Press. ISBN 978-0-521-43595-6.
Busch, Paul; Lahti, Pekka; Werner, Reinhard F. (2013-10-17). "Proof of Heisenberg's Error-Disturbance Relation" (PDF). Physical Review Letters. 111 (16): 160405. doi:10.1103/PhysRevLett.111.160405. ISSN 0031-9007. PMID 24182239.
Appleby, David Marcus (2016-05-06). "Quantum Errors and Disturbances: Response to Busch, Lahti and Werner". Entropy. 18 (5): 174. doi:10.3390/e18050174.
Landau, L.D.; Lifschitz, E.M. (1977). Quantum Mechanics: Non-Relativistic Theory. Vol. 3 (3rd ed.). Pergamon Press. ISBN 978-0-08-020940-1. OCLC 2284121.
Busch, P.; Grabowski, M.; Lahti, P.J. (January 1995). "Who Is Afraid of POV Measures? Unified Approach to Quantum Phase Observables". Annals of Physics. 237 (1): 1–11. doi:10.1006/aphy.1995.1001.
Bose, S.N. (1924). "Plancks Gesetz und Lichtquantenhypothese". Zeitschrift für Physik (in German). 26 (1): 178–181. Bibcode:1924ZPhy...26..178B. doi:10.1007/BF01327326.
Einstein, A. (1924). "Quantentheorie des einatomigen idealen Gases". Sitzungsberichte der Preussischen Akademie der Wissenschaften (Berlin), Physikalisch-mathematische Klasse (in German). 1924: 261–267.
Einstein, A. (1925). Quantentheorie des einatomigen idealen Gases, Zweite Abhandlung. Sitzungsberichte der Preussischen Akademie der Wissenschaften (Berlin), Physikalisch-mathematische Klasse (in German). 1925. pp. 3–14. doi:10.1002/3527608958.ch28. ISBN 978-3-527-60895-9.
Anderson, M.H.; Ensher, J.R.; Matthews, M.R.; Wieman, C.E.; Cornell, E.A. (1995). "Observation of Bose–Einstein Condensation in a Dilute Atomic Vapor". Science. 269 (5221): 198–201. Bibcode:1995Sci...269..198A. doi:10.1126/science.269.5221.198. JSTOR 2888436. PMID 17789847.
"Physicists Slow Speed of Light". News.harvard.edu (1999-02-18). Retrieved on 2015-05-11.
"Light Changed to Matter, Then Stopped and Moved". photonics.com (February 2007). Retrieved on 2015-05-11.
Streater, R.F.; Wightman, A.S. (1989). PCT, Spin and Statistics, and All That. Addison-Wesley. ISBN 978-0-201-09410-7.
Einstein, A. (1916). "Strahlungs-emission und -absorption nach der Quantentheorie". Verhandlungen der Deutschen Physikalischen Gesellschaft (in German). 18: 318–323. Bibcode:1916DPhyG..18..318E.
Section 1.4 in Wilson, J.; Hawkes, F.J.B. (1987). Lasers: Principles and Applications. New York: Prentice Hall. ISBN 978-0-13-523705-2.
Einstein, A. (1916). "Strahlungs-emission und -absorption nach der Quantentheorie". Verhandlungen der Deutschen Physikalischen Gesellschaft (in German). 18: 318–323. Bibcode:1916DPhyG..18..318E. p. 322: "Die Konstanten $$A_{m}^{n}$$ und $$B_{m}^{n}$$ würden sich direkt berechnen lassen, wenn wir im Besitz einer im Sinne der Quantenhypothese modifizierten Elektrodynamik und Mechanik wären." ("The constants $$A_{m}^{n}$$ and $$B_{m}^{n}$$ could be calculated directly if we possessed an electrodynamics and a mechanics modified in the sense of the quantum hypothesis.")
Dirac, P.A.M. (1926). "On the Theory of Quantum Mechanics". Proceedings of the Royal Society A. 112 (762): 661–677. Bibcode:1926RSPSA.112..661D. doi:10.1098/rspa.1926.0133.
Dirac, P.A.M. (1927). "The Quantum Theory of the Emission and Absorption of Radiation". Proceedings of the Royal Society A. 114 (767): 243–265. Bibcode:1927RSPSA.114..243D. doi:10.1098/rspa.1927.0039.
Dirac, P.A.M. (1927b). The Quantum Theory of Dispersion. Proceedings of the Royal Society A. 114. pp. 710–728. Bibcode:1927RSPSA.114..710D. doi:10.1098/rspa.1927.0071.
Heisenberg, W.; Pauli, W. (1929). "Zur Quantentheorie der Wellenfelder". Zeitschrift für Physik (in German). 56 (1–2): 1. Bibcode:1929ZPhy...56....1H. doi:10.1007/BF01340129.
Heisenberg, W.; Pauli, W. (1930). "Zur Quantentheorie der Wellenfelder". Zeitschrift für Physik (in German). 59 (3–4): 139. Bibcode:1930ZPhy...59..168H. doi:10.1007/BF01341423.
Fermi, E. (1932). "Quantum Theory of Radiation". Reviews of Modern Physics. 4 (1): 87. Bibcode:1932RvMP....4...87F. doi:10.1103/RevModPhys.4.87.
Born, M. (1926). "Zur Quantenmechanik der Stossvorgänge". Zeitschrift für Physik (in German). 37 (12): 863–867. Bibcode:1926ZPhy...37..863B. doi:10.1007/BF01397477.
Born, M. (1926). "Quantenmechanik der Stossvorgänge". Zeitschrift für Physik (in German). 38 (11–12): 803. Bibcode:1926ZPhy...38..803B. doi:10.1007/BF01397184.
Pais, A. (1986). Inward Bound: Of Matter and Forces in the Physical World. Oxford University Press. p. 260. ISBN 978-0-19-851997-3. Specifically, Born claimed to have been inspired by Einstein's never-published attempts to develop a "ghost-field" theory, in which point-like photons are guided probabilistically by ghost fields that follow Maxwell's equations.
Debye, P. (1910). "Der Wahrscheinlichkeitsbegriff in der Theorie der Strahlung". Annalen der Physik (in German). 33 (16): 1427–1434. Bibcode:1910AnP...338.1427D. doi:10.1002/andp.19103381617.
Born, M.; Heisenberg, W.; Jordan, P. (1925). "Quantenmechanik II". Zeitschrift für Physik (in German). 35 (8–9): 557–615. Bibcode:1926ZPhy...35..557B. doi:10.1007/BF01379806.
Zee, Anthony (2003). Quantum Field Theory in a Nutshell. Princeton, N.J.: Princeton University Press. ISBN 0-691-01019-6. OCLC 50479292.
Photon–photon-scattering section 7-3-1, renormalization chapter 8-2 in Itzykson, C.; Zuber, J.-B. (1980). Quantum Field Theory. McGraw-Hill. ISBN 978-0-07-032071-0.
Weiglein, G. (2008). "Electroweak Physics at the ILC". Journal of Physics: Conference Series. 110 (4): 042033. arXiv:0711.3003. Bibcode:2008JPhCS.110d2033W. doi:10.1088/1742-6596/110/4/042033.
Ryder, L.H. (1996). Quantum field theory (2nd ed.). Cambridge University Press. ISBN 978-0-521-47814-4.
Sheldon Glashow Nobel lecture, delivered 8 December 1979.
Abdus Salam Nobel lecture, delivered 8 December 1979.
Steven Weinberg Nobel lecture, delivered 8 December 1979.
E.g., chapter 14 in Hughes, I.S. (1985). Elementary particles (2nd ed.). Cambridge University Press. ISBN 978-0-521-26092-3.
Bauer, T.H.; Spital, R.D.; Yennie, D.R.; Pipkin, F.M. (1978). "The hadronic properties of the photon in high-energy interactions". Reviews of Modern Physics. 50 (2): 261. Bibcode:1978RvMP...50..261B. doi:10.1103/RevModPhys.50.261.
Sakurai, J.J. (1960). "Theory of strong interactions". Annals of Physics. 11 (1): 1–48. Bibcode:1960AnPhy..11....1S. doi:10.1016/0003-4916(60)90126-3.
Walsh, T.F.; Zerwas, P. (1973). "Two-photon processes in the parton model". Physics Letters B. 44 (2): 195. Bibcode:1973PhLB...44..195W. doi:10.1016/0370-2693(73)90520-0.
Witten, E. (1977). "Anomalous cross section for photon–photon scattering in gauge theories". Nuclear Physics B. 120 (2): 189–202. Bibcode:1977NuPhB.120..189W. doi:10.1016/0550-3213(77)90038-4.
Nisius, R. (2000). "The photon structure from deep inelastic electron–photon scattering". Physics Reports. 332 (4–6): 165–317. arXiv:hep-ex/9912049. Bibcode:2000PhR...332..165N. doi:10.1016/S0370-1573(99)00115-5.
E.g., section 10.1 in Dunlap, R.A. (2004). An Introduction to the Physics of Nuclei and Particles. Brooks/Cole. ISBN 978-0-534-39294-9.
Radiative correction to electron mass section 7-1-2, anomalous magnetic moments section 7-2-1, Lamb shift section 7-3-2 and hyperfine splitting in positronium section 10-3 in Itzykson, C.; Zuber, J.-B. (1980). Quantum Field Theory. McGraw-Hill. ISBN 978-0-07-032071-0.
E.g. sections 9.1 (gravitational contribution of photons) and 10.5 (influence of gravity on light) in Stephani, H.; Stewart, J. (1990). General Relativity: An Introduction to the Theory of Gravitational Field. Cambridge University Press. pp. 86 ff, 108 ff. ISBN 978-0-521-37941-0.
Polaritons section 10.10.1, Raman and Brillouin scattering section 10.11.3 in Patterson, J.D.; Bailey, B.C. (2007). Solid-State Physics: Introduction to the Theory. Springer. ISBN 978-3-540-24115-7.
Naeye, R. (1998). Through the Eyes of Hubble: Birth, Life and Violent Death of Stars. CRC Press. ISBN 978-0-7503-0484-9. OCLC 40180195.
Koupelis, Theo; Kuhn, Karl F. (2007). In Quest of the Universe. Jones and Bartlett Canada. p. 102. ISBN 9780763743871.
E.g. section 11-5 C in Pine, S.H.; Hendrickson, J.B.; Cram, D.J.; Hammond, G.S. (1980). Organic Chemistry (4th ed.). McGraw-Hill. ISBN 978-0-07-050115-7.
Nobel lecture given by G. Wald on December 12, 1967, online at nobelprize.org: The Molecular Basis of Visual Excitation.
Photomultiplier section 1.1.10, CCDs section 1.1.8, Geiger counters section 1.3.2.1 in Kitchin, C.R. (2008). Astrophysical Techniques. Boca Raton, FL: CRC Press. ISBN 978-1-4200-8243-2.
Waymouth, John (1971). Electric Discharge Lamps. Cambridge, MA: The M.I.T. Press. ISBN 978-0-262-23048-3.
Denk, W.; Svoboda, K. (1997). "Photon upmanship: Why multiphoton imaging is more than a gimmick". Neuron. 18 (3): 351–357. doi:10.1016/S0896-6273(00)81237-4. PMID 9115730.
Lakowicz, J.R. (2006). Principles of Fluorescence Spectroscopy. Springer. pp. 529 ff. ISBN 978-0-387-31278-1.
Jennewein, T.; Achleitner, U.; Weihs, G.; Weinfurter, H.; Zeilinger, A. (2000). "A fast and compact quantum random number generator". Review of Scientific Instruments. 71 (4): 1675–1680. arXiv:quant-ph/9912118. Bibcode:2000RScI...71.1675J. doi:10.1063/1.1150518.
Stefanov, A.; Gisin, N.; Guinnard, O.; Guinnard, L.; Zbiden, H. (2000). "Optical quantum random number generator". Journal of Modern Optics. 47 (4): 595–598. doi:10.1080/095003400147908.
Hignett, Katherine (16 February 2018). "Physics Creates New Form Of Light That Could Drive The Quantum Computing Revolution". Newsweek. Retrieved 17 February 2018.
Liang, Qi-Yu; et al. (16 February 2018). "Observation of three-photon bound states in a quantum nonlinear medium". Science. 359 (6377): 783–786. arXiv:1709.01478. Bibcode:2018Sci...359..783L. doi:10.1126/science.aao7293. PMC 6467536. PMID 29449489.
By date of publication:
Alonso, M.; Finn, E.J. (1968). Fundamental University Physics Volume III: Quantum and Statistical Physics. Addison-Wesley. ISBN 978-0-201-00262-1.
Clauser, J.F. (1974). "Experimental distinction between the quantum and classical field-theoretic predictions for the photoelectric effect". Physical Review D. 9 (4): 853–860. Bibcode:1974PhRvD...9..853C. doi:10.1103/PhysRevD.9.853.
Pais, A. (1982). Subtle is the Lord: The Science and the Life of Albert Einstein. Oxford University Press.
Feynman, Richard (1985). QED: The Strange Theory of Light and Matter. Princeton University Press. ISBN 978-0-691-12575-6.
Grangier, P.; Roger, G.; Aspect, A. (1986). "Experimental Evidence for a Photon Anticorrelation Effect on a Beam Splitter: A New Light on Single-Photon Interferences". Europhysics Letters. 1 (4): 173–179. Bibcode:1986EL......1..173G. CiteSeerX 10.1.1.178.4356. doi:10.1209/0295-5075/1/4/004.
Lamb, W.E. (1995). "Anti-photon". Applied Physics B. 60 (2–3): 77–84. Bibcode:1995ApPhB..60...77L. doi:10.1007/BF01135846.
Special supplemental issue of Optics and Photonics News (vol. 14, October 2003).
Roychoudhuri, C.; Rajarshi, R. (2003). "The nature of light: what is a photon?". Optics and Photonics News. 14: S1 (Supplement).
Zajonc, A. "Light reconsidered". Optics and Photonics News. 14: S2–S5 (Supplement).
Loudon, R. "What is a photon?". Optics and Photonics News. 14: S6–S11 (Supplement).
Finkelstein, D. "What is a photon?". Optics and Photonics News. 14: S12–S17 (Supplement).
Muthukrishnan, A.; Scully, M.O.; Zubairy, M.S. "The concept of the photon – revisited". Optics and Photonics News. 14: S18–S27 (Supplement).
Mack, H.; Schleich, W.P. "A photon viewed from Wigner phase space". Optics and Photonics News. 14: S28–S35 (Supplement).
Glauber, R. (2005). "One Hundred Years of Light Quanta" (PDF). 2005 Physics Nobel Prize Lecture. Archived from the original (PDF) on 2008-07-23. Retrieved 2009-06-29.
Hentschel, K. (2007). "Light quanta: The maturing of a concept by the stepwise accretion of meaning". Physics and Philosophy. 1 (2): 1–20.
Education with single photons:
Thorn, J.J.; Neel, M.S.; Donato, V.W.; Bergreen, G.S.; Davies, R.E.; Beck, M. (2004). "Observing the quantum behavior of light in an undergraduate laboratory" (PDF). American Journal of Physics. 72 (9): 1210–1219. Bibcode:2004AmJPh..72.1210T. doi:10.1119/1.1737397.
Bronner, P.; Strunz, Andreas; Silberhorn, Christine; Meyn, Jan-Peter (2009). "Interactive screen experiments with single photons". European Journal of Physics. 30 (2): 345–353. Bibcode:2009EJPh...30..345B. doi:10.1088/0143-0807/30/2/014.
http://tex.stackexchange.com/questions/61751/changing-theorem-lemma-label-to-bold-italics | # Changing theorem/lemma… label to bold italics
For theorems in my contribution I use \theoremstyle{plain}. This style prints the theorem label in bold italics, e.g. 'Theorem 1.', which is followed by text in italics. How could I set 'Theorem 1.' to italics to match the theorem content?
A related question is Non italic text in theorems, definitions, examples, but I was not able to extract the answer from it.
See tex.stackexchange.com/a/38264/5626 and change \itshape to \itshape\bfseries (and other settings accordingly). But first of all: why on earth would you want to do that!? – mbork Jun 30 '12 at 11:45
Thanks. 1) I don't like bold upright followed by italics, so I set everything to italics. 2) If 1) makes no sense, I simply don't like the present scheme. – boy Jun 30 '12 at 11:50
This might be the theorem scheme that you're after, called mytheorem:
\documentclass{article}
\usepackage{lipsum}% http://ctan.org/pkg/lipsum
\usepackage{amsthm}% http://ctan.org/pkg/amsthm
\makeatletter
\newtheoremstyle{mytheorem}% <name>
{3pt}% <Space above>
{3pt}% <Space below>
{\itshape}% <Body font>
{}% <Indent amount>
{\bfseries\itshape}% <Theorem head font>
{.}% <Punctuation after theorem head>
{.5em}% <Space after theorem head>
{\thmname{#1}\thmnumber{\@ifnotempty{#1}{ }#2}%
\thmnote{ {\the\thm@notefont(#3)}}}% <Theorem head spec (can be left empty, meaning `normal')>
\makeatother
\theoremstyle{mytheorem}
\newtheorem{theorem}{Theorem}
\begin{document}
\begin{theorem}\lipsum[1]\end{theorem}
\end{document}
The key here is to include a theorem head specification (argument #9) in order to reformat the theorem number from \textup (or \@upn) to plain. Here's the default definition of the theorem head specification (associated with the plain style):
\def\thmhead@plain#1#2#3{%
\thmname{#1}\thmnumber{\@ifnotempty{#1}{ }\@upn{#2}}%
\thmnote{ {\the\thm@notefont(#3)}}}
Note the forced \@upn (defined as \textup).
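If you'd rather have the theorem head fully italic instead of bold italic (a small variant of the style above; only the head-font argument changes), swap the sixth argument of \newtheoremstyle:
{\itshape}% <Theorem head font>
Everything else in mytheorem stays the same.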
Looks much better this way. Many thanks. – boy Jun 30 '12 at 14:27
http://tex.stackexchange.com/questions/32734/initials-of-first-name-in-footcite | # Initials of first name in footcite
Is it possible to display initials of first name in footcite? I use verbose-trad2 citestyle:
\usepackage[citestyle=verbose-trad2,bibstyle=authortitle-icomp,sortcites=true,%
block=space,firstinits=true]{biblatex}
When I footcite the first time it is OK:
[First name initials] [last name], [title] ...
but in subsequent citations of the same book it's only
[last name], [title] ...
## 1 Answer
Define a new name format firstinits-last (derived from first-last with \iffirstinits true), and point the labelname format (which is used in citations) to firstinits-last. (This also works if you don't set the firstinits=true package option; with this option enabled, you may simply add \DeclareNameAlias{labelname}{default} to your preamble.)
\documentclass{article}
\usepackage[style=verbose-trad2]{biblatex}
\DeclareNameFormat{firstinits-last}{%
\usebibmacro{name:first-last}{#1}{#4}{#5}{#7}%
\usebibmacro{name:andothers}}
\DeclareNameAlias{labelname}{firstinits-last}
\usepackage{filecontents}
\begin{filecontents}{\jobname.bib}
@misc{A01,
author = {Author, Anton},
year = {2001},
title = {Alpha},
}
@misc{B02,
author = {Buthor, Berta},
year = {2002},
title = {Bravo},
}
\end{filecontents}
\addbibresource{\jobname.bib}
\textheight 160pt% just for the example
\begin{document}
Some text.\autocite{A01}
Some text.\autocite{B02}
Some text.\autocite{A01}
\printbibliography
\end{document}
-
work like a charm :) – kadx Oct 26 '11 at 8:51
http://math.stackexchange.com/questions/819887/uniqueness-of-the-comparison-functor | # Uniqueness of the Comparison Functor
Suppose $F:C\rightarrow D$ and $U:D\rightarrow C$ are functors with $F\dashv U$ an adjunction, and let $C^{T}$ be the Eilenberg–Moore category for the monad $T=U\circ F$, with the corresponding functors $F^{T}:C\rightarrow C^{T}$ and $U^{T}:C^{T}\rightarrow C$.
I have been able to prove that there is a comparison functor $\Phi : D \rightarrow C^{T}$ which satisfies
(1) $U^{T}\circ\Phi = U$
and
(2) $\Phi\circ F = F^{T}$
I am having trouble with uniqueness.
Here is what I have so far: Suppose $\Phi'$ satisfies (1) and (2).
Let $D$ be an object of the target category (by the usual abuse of notation). Then using (1) with $\Phi'(D)=(C',\alpha)$, it follows that $U^{T}\circ\Phi'(D)=U^{T}(C',\alpha)=C'$, whereas $\Phi(D)=(UD,U\varepsilon_{D})$ and so $U^{T}\circ\Phi(D)=U^{T}(UD,U\varepsilon_{D})=UD$, which says $C'=UD$.
Now I need to show that $\alpha=U\varepsilon_{D}$. This is where I'm stuck.
Edit: Using the hint below, the fact that the adjunctions have the same unit implies, after using (1) and (2), that
$$\begin{matrix} \operatorname{Hom}(FC, D) & \xrightarrow{{\phi}} & \operatorname{Hom}(C, UD) \\ \left\downarrow\vphantom{\int}\right. & & \left\downarrow\vphantom{\int}\right.\\ \operatorname{Hom}(F^{T}C, \Phi'(D))& \xrightarrow{\phi^{T}} & \operatorname{Hom}(C, U^{T}\Phi'(D)) \end{matrix}$$
commutes.
($\phi$ and $\phi^{T}$ are the isomorphisms giving the adjunctions; the left downward arrow is the map $f\mapsto\Phi'(f)$ and the right downward arrow is the identity on $\operatorname{Hom}(C, UD)$.)
Then, setting $C=UD$ and following $\mathrm{id}_{UD}$, you get that $\Phi'\epsilon=\epsilon^T \Phi'$, which is the hint. The rest follows easily.
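In symbols, that last step condenses to a single line (using that $\phi^{T}$ is a bijection):

$$\phi^{T}\left(\Phi'\varepsilon_{D}\right)=\phi\left(\varepsilon_{D}\right)=\mathrm{id}_{UD}=\phi^{T}\left(\varepsilon^{T}_{\Phi'(D)}\right)\ \Longrightarrow\ \Phi'\varepsilon_{D}=\varepsilon^{T}_{\Phi'(D)}.$$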
You can find a proof in Mac Lane's CWM. – Martin Brandenburg Jun 4 '14 at 8:34
Right. I know these results are well-known, but before looking up the worked-out proof, I want to try it for myself, with a hint or two. – Chilango Jun 4 '14 at 13:24
what does it mean $U\in D$ ? $U$ is a functor and $D$ is a category – magma Jun 4 '14 at 14:25
Here's a version of the proof that bypasses the $\Phi\epsilon=\epsilon^T\Phi$ lemma and proves uniqueness directly.
• $U^T\Phi=U$ tells us that $\Phi d$ is a $T$-algebra with structure map $\gamma_d: TUd\to Ud$
• $\Phi$ sends $D$-arrows to $T$-homomorphisms, so $\gamma$ is a natural transformation $TU\to U$
• $\Phi F=F^T$ tells us that $\gamma F=\mu =U\epsilon F$
Since $\gamma$ is natural we have $U\epsilon\circ\mu U=U\epsilon\circ\gamma FU=\gamma\circ TU\epsilon$
Precomposing with $T\eta U$ gives us $U\epsilon\circ\mu U\circ T\eta U=\gamma\circ TU\epsilon\circ T\eta U$
This rearranges as $U\epsilon\circ(\mu\circ T\eta)U=\gamma\circ T(U\epsilon\circ\eta U)$, which simplifies to $U\epsilon=\gamma$ by the monad unit law $\mu\circ T\eta=\mathrm{id}$ and the triangle identity $U\epsilon\circ\eta U=\mathrm{id}$.
Just a hint....
First you should realize that
$$\Phi d=(Ud, h)$$
that is, a $T$-algebra with underlying object $Ud$, for any $d$ in $D$. In order to obtain the structure map $h$, observe that the two adjunctions have the same unit $\eta$, and deduce that $\Phi\epsilon=\epsilon^T \Phi$.
Then deduce that $\epsilon^T_{\Phi d}=\epsilon^T_{(Ud, h)}=h$, and so $\Phi\epsilon=\epsilon^T \Phi$ implies $U\epsilon_d=h$; thus $h$ is determined and $\Phi$ is unique.
Once you pointed out that I needed to prove that $\Phi\epsilon=\epsilon^T \Phi$, I was able to do it. It was a diagram chase, following identities and using (1) and (2) above. This was very helpful, thanks. Where I live I literally have no one to talk math with, thanks for taking the time. – Chilango Jun 4 '14 at 23:41
@Chilango you are welcome, my pleasure :-) – magma Jun 5 '14 at 5:44
http://gmatclub.com/forum/on-planet-simplon-each-year-has-12-months-each-of-which-147884.html?fl=similar | Find all School-related info fast with the new School-Specific MBA Forum
It is currently 29 Aug 2016, 00:08
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# Events & Promotions
###### Events & Promotions in June
Open Detailed Calendar
# On planet Simplon, each year has 12 months, each of which
Posted by emmak, 25 Feb 2013:
On planet Simplon, each year has 12 months, each of which consists of exactly 30 days. If Simplon experiences a political scandal every 2/3 of a year, an environmental crisis every 1/6 of a year, and a terrible movie opening every 1/36 of a year, then what is the ratio of the number of political scandals to the number of environmental crises to the number of terrible movie openings experienced over a five year period on Simplon? (Assume that all three events happen 2/3, 1/6, and 1/36 of a year after the beginning of the first year, and recur at their respective rates after that.)
A. 24 : 4 : 1
B. 1 : 4 : 24
C. 4 : 15 : 90
D. 1 : 6 : 36
E. 3 : 10 : 72
vinaymimani (Verbal Forum Moderator), 25 Feb 2013:
So, the political scandal occurs every 8 months, the environmental crisis every 2 months, and the terrible movie opening every 1/3 of a month. We just have to find the ratio
$$5*\frac{12}{8} : 5*\frac{12}{2} : 5*\frac{12}{(1/3)} = \frac{1}{8} : \frac{1}{2} : 3$$
(dividing through by 60). On multiplying across by 8: 1 : 4 : 24.
B.
Transcendentalist, 18 Apr 2013:
If a scandal occurs every 8 months, over a 5 year period exactly 7 scandals would have occurred. So shouldn't the ratio be
7:30:180?
Zarrolou, 18 Apr 2013:
Transcendentalist wrote:
If a scandal occurs every 8 months, over a 5 year period exactly 7 scandals would have occurred. So shouldn't the ratio be
7:30:180?
Hi Transcendentalist,
You missed something: $$5 \text{ years} * 12 \text{ months} = 60 \text{ months}$$, and one scandal every 8 months means $$\frac{60}{8}=7.5$$ scandals.
The .5 is what makes your option wrong.
This should clarify, let me know
Intern, 18 Apr 2013:
Total days per year = 12*30 = 360
For 5 years = 12*30*5 = 1800
A political scandal happens every 2/3 of a year = 2/3 * 360 = 240 days.
This implies that for every 240 days, 1 political scandal occurs,
which means that for 5 yrs or 1800 days, the no. of scandals that happen = 1800/240 = 15/2.
Similarly, an environmental crisis happens every 1/6 of a year = 1/6 * 360 = 60 days.
This implies that for every 60 days, 1 environmental crisis occurs.
For 5 yrs or 1800 days, the no. of environmental crises that occur = 1800/60 = 30.
Similarly, a terrible movie opening happens every 1/36 of a year = 1/36 * 360 = 10 days.
This implies that for every 10 days, 1 terrible movie opening happens.
For 5 yrs or 1800 days, the no. of terrible movie openings that occur = 1800/10 = 180.
Answer ==> political scandals : environmental crises : terrible movie openings = 1800/240 : 1800/60 : 1800/10
or 15/2 : 30 : 180 = 15 : 60 : 360 = 1 : 4 : 24
B
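The same arithmetic in a few lines of Python, added here as a sketch for checking (Fraction keeps the 15/2 exact before the ratio is reduced):

from fractions import Fraction
from functools import reduce
from math import gcd

days = 5 * 12 * 30                                    # 1800 Simplon days
counts = [Fraction(days, p) for p in (240, 60, 10)]   # [15/2, 30, 180]

den = reduce(lambda a, b: a * b // gcd(a, b), (c.denominator for c in counts))
ints = [int(c * den) for c in counts]                 # clear denominators: [15, 60, 360]
g = reduce(gcd, ints)
print([i // g for i in ints])                         # [1, 4, 24] -> answer B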
Transcendentalist, 18 Apr 2013:
How does .5 of a scandal occur?
Let's say there is 1 scandal every 5 months:
In one year - 2 scandals will occur
2 years - 4 scandals will occur
3 years - 7 scandals will occur
vinaymimani, 18 Apr 2013:
Transcendentalist wrote:
If a scandal occurs every 8 months over a 5 year period exactly 7 scandals would have occurred. So shouldn't the ratio be
7:30:180?
Let's assume what you said is correct. The ratio is 7:30:180. So now for every 30 environmental crises, 7 scandals occur. Thus, for 60 crises, we should have 14 scandals. But if you go by what is given in the problem, we would have 15 scandals. Any guesses?
Transcendentalist, 19 Apr 2013:
vinaymimani wrote:
Let's assume what you said is correct. The ratio is 7:30:180. So now for every 30 environmental crises, 7 scandals occur. Thus, for 60 crises, we should have 14 scandals. But if you go by what is given in the problem, we would have 15 scandals. Any guesses?
Nope, the ratio holds true only for 5 years. If the period being considered is 10 years, then the ratio changes. I haven't calculated it out, but for every change in the number of years the ratio changes accordingly. Makes sense?
vinaymimani, 19 Apr 2013:
Transcendentalist wrote:
Nope, the ratio holds true only for 5 years. If the period being considered is 10 years, then the ratio changes. I haven't calculated it out, but for every change in the number of years the ratio changes accordingly. Makes sense?
I cannot in any way imagine how a given ratio can change depending upon the number of years. Say the red:blue balls ratio is 2:3. So are you telling me that for every 1 red ball the ratio should be 1:1.5 (and, as per your logic of dropping the 0.5, 1:1)? So for 1 red ball the ratio is 1:1, for 2 red balls the ratio is 2:3, and for 4 red balls the ratio is again 2:3?
The fact that 1.5 balls can't exist is correct. But the fact that a ratio can be presented in myriad ways is also correct. So when I say that the ratio of red:blue balls = 2:3 = 1:1.5, it is all correct. It is just that the latter ratio is an intermediate value, something which is mathematically correct yet not feasible physically. The fact that for 10 years the value 7.5 = 15/2, when multiplied by 2, gives an integral value is the reason why we have 15 scandals over 10 years, which is not possible for 5 years as it gives a non-integral value (7.5); this is mathematically correct, yet not feasible in the physical world.
Transcendentalist, 19 Apr 2013:
Take the original setup: a political scandal every 2/3 of a year, an environmental crisis every 1/6 of a year, and a terrible movie opening every 1/36 of a year, each first occurring that fraction of a year after the beginning of the first year.
Can you please calculate the ratio of the number of each of the following events over a 5 year period to that over a 10 year period?
1. Political scandals
2. Environmental crises
3. Terrible movie openings
Ron (Veritas Prep GMAT Instructor), 19 Apr 2013:
Detouring briefly from the tangent questions, a nice shortcut to solve this original question quickly is to think of the numbers in months. You then see that political scandals happen every 8 months and environmental crises happen every 2 months, so the ratio must be 1:4 (or a multiple thereof) regardless of the terrible movie openings (Die Hard 5 anyone??). The only answer choice with the 1:4 ratio is B. No need to go any further. On different questions, if you can eliminate answer choices based on two of the three entries, then you don't have to try mentally juggling the three.
Hope this helps!
-Ron
Current Student, 30 May 2014:
Actually, we only need to take the ratio of the reciprocals of the given fractions, 3/2 : 6 : 36, and one will get 1 : 4 : 24.
Intern, 13 Oct 2015:
I know it's been a while, but I'd like to back up Transcendentalist here. Here's why he's right:
What is the ratio of Mondays to Tuesdays in an 8 day period starting and ending with Monday?
The answer here is not 1:1. Let's look at the sample space:
M T W TH F S S M
We see 2 Mondays, but only one Tuesday. Therefore the ratio is 2:1.
For 9 days we would see:
M T W TH F S S M T
Making the ratio 1:1 again.
A simple example:
What is the ratio of odds to evens in the range 1-101 inclusive? In 1-100?
I think we can all tell that there are 51 odds and 50 evens in the first part of the question, making the ratio 51:50. In the second part there are 50 of each giving 50:50 or 1:1.
Another analogous problem:
Bob is pushing a boulder up a 100 meter long hill. Every day he pushes it up 20 meters. Then when he rests, it rolls down 11 meters. How many days would it take Bob to push the boulder up to the top of the hill? Answer as an integer, and assume the boulder no longer rolls down once the top of the hill is reached.
Hastily, we would calculate the average rate of 20-11=9 meters per day. 11 x 9 = 99, and that's not quite enough, so the answer must be 12 days (12 x 9 = 108 meters).
Let's go back a step. Before he starts pushing on the beginning of the 10th day, Bob's boulder is at 9 x 9 = 81 meters. Now he pushes the boulder up 20 meters, reaching 101 meters. The answer here is therefore 10 days, not 12.
The point in these three examples is that, depending on the problem, using a constant ratio or rate doesn't work. The ratio changes depending on the period we're looking at.
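This point is easy to check in code: count whole events at exact multiples of each period, as the problem statement specifies (a sketch, not from the thread):

def counts(days):
    # first event lands one full period in, recurring at that period after
    periods = {'scandals': 240, 'crises': 60, 'movies': 10}  # Simplon days
    return {k: days // p for k, p in periods.items()}

print(counts(5 * 360))    # {'scandals': 7, 'crises': 30, 'movies': 180}
print(counts(10 * 360))   # {'scandals': 15, 'crises': 60, 'movies': 360}, i.e. 1:4:24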
https://learnbps.bismarckschools.org/mod/glossary/showentry.php?eid=31671 | # PERFORMING
## Calculations
Artistic Processes are larger groups of related Anchor Standards. So the Artistic Process Grade is a calculation of all the related Anchor Standards listed below.
## Anchor Standards
##### Anchor Standard 4
Select artistic work for presentation.
##### Anchor Standard 5
Develop and refine artistic techniques and work for presentation.
##### Anchor Standard 6
Convey meaning through the presentation of artistic work.
http://www.hardwaresoftwarearticles.com/archive/index.php/t-13120.html | Help needed with Explorer
Jim-Soho
07-09-2005, 11:44 PM
Is there a switch or any other way so that when I open up Explorer, My Computer is an expanded tree instead of My Documents being the folder that is expanded?
Thank You
Jonybrv
07-09-2005, 11:45 PM
Yes, you can.
Start | All Programs | Accessories, right-click Windows Explorer and select
Properties.
Change the target to one of the following:
%systemroot%\explorer /n,/e,%HOMEDRIVE%%HOMEPATH%\My Documents
%systemroot%\explorer /n,/e,c:\Documents and Settings\%username%\My Documents
http://support.microsoft.com/kb/307856
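For what the question actually asked (My Computer expanded rather than My Documents), a target along these lines should also work; this variant is a suggestion based on the /select switch documented in the KB article above, not part of the original reply:

%systemroot%\explorer /n,/e,/select,C:\

Selecting the C: drive makes Explorer open with My Computer expanded as its parent.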
--
Jonybrv
"Jim-Soho" wrote:
> Is there a switch or any other way that when I open up Explorer, that My
> Computer is an expanded tree instead of My Documents being the folder that
> is expanded?
>
> Thank You
>
>
>
Jim-Soho
07-09-2005, 11:45 PM
A very BIG thank you for that, much appreciated!
"Jonybrv" <[email protected]> wrote in message
news:[email protected]...
> Yes, you can.
>
> Start | All Programs | Accessories, right Windows Explorer and select
> Properties.
>
> Change the target to one of the following:
>
> %systemroot%\explorer /n,/e,%HOMEDRIVE%%HOMEPATH%\My Documents
>
> Documents
>
>
> http://support.microsoft.com/kb/307856
>
> --
> Jonybrv
>
>
> "Jim-Soho" wrote:
>
>> Is there a switch or any other way that when I open up Explorer, that My
>> Computer is an expanded tree instead of My Documents being the folder
>> that
>> is expanded?
>>
>> Thank You
>>
>>
>>
Jonybrv
07-09-2005, 11:54 PM
You are welcome.
https://dsp.stackexchange.com/questions/18786/realization-of-iir-resonator | # Realization of IIR resonator
The measure of a given frequency $\omega$ in a signal $x(t)$ is:
$\frac{1}{N}\sum_{t=0}^{N-1}x\left(t\right)e^{-i \omega t}$
This is basically an average of the correlation between the signal and a complex sinusoid rotating at $\omega$ frequency over some time range (N).
My goal is to realize this as an IIR filter (the definition above is effectively an FIR filter), so that it can be implemented as a sort of resonator which is fed an input stream of data, from which the strength of the frequency over the last N samples can be measured continuously.
Currently, I've made this working model:
$y\left(n\right)=e^{-i \omega n}x\left(n\right)$
$X\left(n\right)=y\left(n\right)+e^{\frac{-1}{N}}\left(X\left(n-1\right)-y\left(n\right)\right)$
where $X(n)$ is the complex result of the correlation over (roughly) the last N samples, utilizing an exponential moving average IIR filter. While this works, it is quite inefficient in my current, naive implementation:
for each sample:
    t0 = x * sample;                  // x = cos(w*n): real part of e^{-iwn} * sample
    real = t0 + pole * (real - t0);   // one-pole average; pole = e^{-1/N}
    t0 = -y * sample;                 // y = sin(w*n): imaginary part
    imag = t0 + pole * (imag - t0);
    t0 = x * c1 - y * c2;             // advance the oscillator one sample:
    y = x * c2 + y * c1;              // c1 = cos(w), c2 = sin(w)
    x = t0;
where x and y correspond to the cosine and the sine of the complex sinusoid, computed using an IIR oscillator filter. My hunch is that it should be possible to implement some standard filter with adequate damping such that it is equivalent to the former equation / code, and basically functions like a resonator, where the amount of resonance is equal to the average of the correlation over the last N samples.
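One way to cash in that hunch (a sketch, not from the original post): substituting $v(n)=e^{i\omega n}X(n)$ into the recursion above, with $p=e^{-1/N}$, gives $v(n)=(1-p)\,x(n)+p\,e^{i\omega}\,v(n-1)$, i.e. a single complex one-pole resonator with its pole at $p\,e^{i\omega}$. Since $|v(n)|=|X(n)|$ at every step, the magnitude measurement is unchanged while the explicit oscillator disappears:

import numpy as np

N, omega = 64, 0.3
p = np.exp(-1.0 / N)                      # damping, as in the post
sig = np.cos(omega * np.arange(2000)) + 0.1 * np.random.default_rng(0).standard_normal(2000)

X, v = 0j, 0j
rot = p * np.exp(1j * omega)              # resonator pole
for n, s in enumerate(sig):
    y = np.exp(-1j * omega * n) * s       # demodulate (original formulation)
    X = y + p * (X - y)                   # one-pole moving average (original)
    v = (1 - p) * s + rot * v             # equivalent complex resonator
    assert np.isclose(abs(X), abs(v))     # magnitudes agree at every sample

print(abs(v))                             # ~0.5: half the cosine's amplitude

Only the output phase differs: v spins at $\omega$ while $X(n)$ is approximately stationary, so if the magnitude is what is being measured, the resonator form does the same job with one complex multiply-add per sample.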
I've also been looking at the Goertzel algorithm, and possible ways to use that theory, even though it seems to suffer from scalloping/inability to resolve frequencies not quantized to integers between 0 and N?
Sorry for the redundancy, I'm still very new to filters and DSP in general, I'm just hoping someone can point me in the right direction and tell me I'm not grasping for straws :)
Thanks
Edit - appendix:
Goertzel algorithm
Scalar sine, cosine, coeff, q0(0), q1(0), q2(0);   // Scalar: float or double
sine = sin(omega);
cosine = cos(omega);
coeff = 2.0 * cosine;
for (int t = 0; t < size; t++)
{
    q0 = coeff * q1 - q2 + data[t];   // second-order Goertzel recurrence
    q2 = q1;
    q1 = q0;
}
// convert the final state into the complex correlation, scaled by 2/size
Scalar real = (q1 - q2 * cosine) / (size * 0.5);
Scalar imag = (q2 * sine) / (size * 0.5);
Goertzel vs. correlation of sinusoid
• I believe that it is indeed the Goertzel algorithm that you're looking for. It's not correct that the Goertzel algorithms needs quantized frequencies. Unlike the FFT, it is not restricted to any frequency grid. – Matt L. Oct 22 '14 at 21:36
• The Sliding DFT might also be interesting for you. – Matt L. Oct 23 '14 at 9:42
• @MattL. Okay, you're right about the quantization (it was an error in my code). I am however still having numerical/quantization troubles using the goertzel versus my original code (which is bruteforce correlation between signal and sinusoid) using single-precision. This happens with very closely-spaced frequencies (~0.5Hz). The difference can be seen in this picture: i.imgur.com/JJJePO3.png Also, I'm unsure how to convert the goertzel algorithm into an iir-filter (the usual implementations emulates fir). I've included the algorithm in the OP. – Shaggi Oct 26 '14 at 13:07
• @MattL. Also, if i use the sliding dft, i would have to do windowing in the frequency domain (ie. by convolution), am i right? – Shaggi Oct 28 '14 at 18:20
No, not necessarily. Windowing can be used to reduce spectral leakage, but it's not an integral part of the SDFT algorithm. Have a look at this document. It gives a good overview of the SDFT algorithm compared to Goertzel. – Matt L. Oct 29 '14 at 8:41
https://socratic.org/questions/what-is-the-domain-of-f-x-sinx
# What is the domain of f(x)=sinx?
##### 1 Answer
Mar 2, 2015
The sine function has no restrictions on its input, so:
$D = \left(- \infty , + \infty\right)$.
https://collaborate.princeton.edu/en/publications/finite-sample-risk-bounds-for-maximum-likelihood-estimation-with- | # Finite-Sample Risk Bounds for Maximum Likelihood Estimation with Arbitrary Penalties
W. D. Brinda, Jason M. Klusowski
Research output: Contribution to journal, Article, peer-reviewed
## Abstract
The minimum description length two-part coding index of resolvability provides a finite-sample upper bound on the statistical risk of penalized likelihood estimators over countable models. However, the bound does not apply to unpenalized maximum likelihood estimation or procedures with exceedingly small penalties. In this paper, we point out a more general inequality that holds for arbitrary penalties. In addition, this approach makes it possible to derive exact risk bounds of order 1/n for iid parametric models, which improves on the order (log n)/n resolvability bounds. We conclude by discussing implications for adaptive estimation.
Original language: English (US)
Pages (from-to): 2727-2741
Number of pages: 15
Journal: IEEE Transactions on Information Theory
Volume: 64
Issue number: 4
DOI: https://doi.org/10.1109/TIT.2017.2789214
State: Published - Apr 2018
Externally published: Yes
## All Science Journal Classification (ASJC) codes
• Information Systems
• Computer Science Applications
• Library and Information Sciences
## Keywords
• Penalized likelihood estimation
• codelength
• minimum description length
• redundancy
• statistical risk
https://richbeveridge.wordpress.com/2013/11/08/infinite-series-approximations-for-trigonometric-functions-from-14th-century-india/
## Infinite Series Approximations for Trigonometric functions from 14th century India
The infinite series approximations that have been used for many years to calculate the values of trigonometric functions have traditionally been attributed to Brook Taylor and Colin Maclaurin, European mathematicians of the early 18th century who were building on the work of Newton, Leibniz, James Gregory and Isaac Barrow among others.
However, I recently discovered that they were not the first to use these techniques. As author George Gheverghese Joseph points out in the previous link, the work of Newton and Leibniz was tremendous, however the Indian development of infinite series approximations for trigonometric functions was equally amazing and important. In addition, it came nearly 300 years before the European development of these techniques.
Madhava of Sangamagrama is generally recognized of the founder of the Kerala school of mathematics and astronomy in what is today the state of Kerala in southwest India. The work of the mathematicians of the Kerala school was based on a desire for accurate trigonometric values for use in navigation.
Madhava lived in the late 1300s and early 1400s and most of his original work has been lost. However, he is mentioned frequently in the surviving work of later mathematicians from the Kerala school. Madhava is credited with power series calculations for the sine, cosine, tangent and arctangent, and like Leibniz, he used the arctangent power series to approximate the value of $\pi$ to 13 decimal places.
Victor Katz’ A History of Mathematics (Brief Edition) has a wonderful and detailed derivation of the Kerala school trigonometric series, with diagrams showing how they used the relationships between the angles, radii, chords and arcs in a circle to arrive at these amazing calculations.
Katz also published this derivation in a paper for the MAA (Mathematics Magazine, vol. 68, n. 3, June 1995, pp. 163-174)
The derivation for the infinite series begins on page 169 (pg. 7 in the pdf).
I’ve just begun to unpack this derivation and will post a step-by-step explanation of Katz’ work in the “near” future. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 1, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9067092537879944, "perplexity": 775.8239783951418}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891539.71/warc/CC-MAIN-20180122193259-20180122213259-00244.warc.gz"} |
https://www.microprediction.com/blog/copula
# How to Enter a Cryptocurrency Copula Contest
Published on July 11, 2020
In a live prediction challenge running at Microprediction.org, algorithms try to predict bivariate and trivariate relationships between the five-minutely returns of Bitcoin, Ethereum, Ripple, Cardano and Iota. Can you beat them?
It is hoped that out of a collection of interrelated statistical contests, a picture of the fine structure of two-way, three-way and five-way dependencies will emerge. This detailed understanding might surpass what one model or person could achieve. This post comprises two parts:
1. A discussion of the study of joint behavior, and why trivariate margins might help reconstruct five-way relationships, and why correlation modeling isn't always enough.
2. A Python walkthrough for those who want to try their hand.
Rules are on the competition page.
### On Joint Distributions and 3-margins
Why model cryptocurrencies? Of course there is direct interest. It is also good practice.
Cryptocurrency and stock price changes are examples of approximate martingales. According to the Efficient Markets Hypothesis (EMH) it should be very difficult to provide an estimate of the mean of the process five minutes forward in time that is substantially better than the current value. However, even if you believe the EMH, that leaves an awful lot of structure to determine.
Stocks, ETFs and many other quantities are used at Microprediction.org to train algorithms which make distributional predictions. Including cryptocurrencies in the mix adds one more type of exercise for the algorithms. Over time, our understanding of which algorithms perform well across a range of different domains may help us make inferences about longer time scale behavior, and this may be relevant for all sorts of applications including portfolio management.
How can we understand and model joint behavior of ... things? In my most recent article I motivated a similar study of bivariate relationships in a physical system (pitch and yaw of a laboratory helicopter). Cryptocurrencies give me an excuse to convince you that trivariate relationships might be important. I draw your attention to the following trivariate stream at Microprediction.org fed by live pricing data for Bitcoin, Ethereum and Ripple.
In a moment we will walk through how to submit your predictions of this z-stream. But one might reasonably wonder whether 3-way relationships are more trouble than they are worth. Is it not sufficient to source good predictions of pairwise relationships? After all, we frequently come across multivariate processes modeled using correlation or covariance matrices (or factor models amounting to the same thing). That is the done thing. Everywhere. Almost without exception. In every field.
However, we know mathematically this can't be the whole story since 2-margins (pairwise probabilities) do not determine a joint distribution of n-variables. The question is whether this matters in practice or whether it is a pedantic statistical quibble.
About ten years ago I came across a simple counterexample to the notion that pairwise relationships are sufficient to reconstruct joint distributions. Ten-pin bowling. The example occurred to me shortly before boarding a long haul flight from New York to Sydney. I had just enough time to find a bowling game on a computer, but not enough time to dig up a usable open source simulation. Annoyed but convinced of the thesis, I painstakingly recorded the result of the first bowl many, many times.
Given fatigue and mild oxygen deprivation at 40,000 feet, I can't guarantee that the priceless ten pin bowling data gathered on that trip is reproducible - and of course the way one plays the game dictates a lot. However, if you were to perform this exercise yourself - presumably in a less tedious fashion - I think you'll probably notice something rather interesting.
To convince you that sleepwalking into correlation modeling isn't always a good idea, let's proceed in a perfectly reasonable fashion to construct a model for all ten pins. To make the ten-pin bowling game feel a little bit more like (say) investment management, we assign to each pin a normally distributed random variable. We shall assume that if this normally distributed variable exceeds some threshold, the pin falls and otherwise, it survives. For instance, if X(7) represents the variable attached to pin number 7, we might suppose that pin 7 falls if
X(7) > -0.23
We have the data to back into the number -0.23 so that the probability of the 7th pin falling equals that of the data. Let's say we also have the bivariate data. Then we could also infer correlations between the variable X(7) representing pin number 7 and the variable X(5) representing pin number 5. Most pins will be positively correlated. But if you are a bad bowler like I am, the 7 and 10 pins will be negatively correlated. Perhaps the correlation matrix might look like this:
Now we have a model for all ten pins derived from properties of pairs of pins only. We roll the ten gaussian variables and check each against their corresponding thresholds, each of which are calibrated to the data. Sound reasonable?
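For concreteness, a minimal sketch of that construction in code (the correlation matrix and thresholds here are placeholders rather than the recorded data):

import numpy as np

rng = np.random.default_rng(0)
n_pins = 10
# Placeholder correlation matrix, calibrated in principle to the pairwise data
C = np.full((n_pins, n_pins), 0.3)
np.fill_diagonal(C, 1.0)
thresholds = np.full(n_pins, -0.23)            # calibrated to each P(pin falls)

X = rng.multivariate_normal(np.zeros(n_pins), C, size=100_000)
falls = X > thresholds                          # pin i falls iff X(i) > threshold

portfolio = [0, 3, 5, 6, 8, 9]                  # pins 1, 4, 6, 7, 9 and 10
counts = falls[:, portfolio].sum(axis=1)
print(np.bincount(counts, minlength=7) / len(counts))  # model P(k of the 6 fall)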
Here I have picked a portfolio of pins (pins 1,4,6,7,9 and 10) in order to diagnose whether the model, thus created, could assign reasonable probabilities to the seven outcomes (counting the number of pins in this subset of six that would fall). The probabilities were quite far off as you can see - dramatically underestimating the probability of all six (which is not too surprising) but getting other things wrong too. Of course this is just one cross section of the model so no doubt many other things are wrong about the way it assigns probabilities to all 1024 possible outcomes.
It seems that as far as ten-pin bowling goes, this particular approach isn't cutting it. A few years ago Roberto Fontana and Patrizia Semeraro published this paper providing a beautiful characterization of multivariate Bernoulli distributions that sheds some light on this. I did not have the benefit of this at the time.
However, I suspected that the tiny amount of information added by some 3-margins would help a lot. Something about the geometry is suggestive. A 3-margin comprises probabilities for all eight outcomes of three pins. Eight numbers but there are seven equations you know already (three 2-margins, three 1-margins and you know that all eight numbers add to 1). So really just one more number. Adding 3-margins isn't taking you very fast towards the total number of degrees of freedom in the system (1023).
But it helped. Here is an example "portfolio" of bowling pins. We count the number that fall once again. The blue probabilities are the data. The green probabilities use a sprinkling of 3-margins. The red probabilities are from the correlation model.
If you look at the mean percentage error across all possible choices of six pin portfolios you get percentage errors as follows:
Percentage error versus data when predicting probabilities for pin counts. Top row is a correlation model. Bottom row a more complex model exploiting 3-margins.
where the first row uses 2-margins only and the second 3-margins. Neither model is perfect but using 3-margins to try to reconstruct the joint distribution clearly helps a lot.
This experiment left a lasting impression on me. I worried that correlation and covariance modeling might quite often be misleading (with factor models a special case). But on the other hand, not every problem I looked at led to similar findings. Later, I replaced ten pins with ten airports and the binary event of whether or not it rained on a given day. The implied gaussian correlations of a fitted model were as follows:
Yes there are days when it rains at LaGuardia and not at JFK.
In similar fashion to the bowling pins, one can look at subsets of airports and ask the question "at how many airports is it raining?". Now in contrast to bowling pins, here the 2-margin model isn't really all that bad. Indeed one can even try to get away with reducing the rank of the correlation or covariance matrix. This is one example, using a reduction to three factors driving the correlations between airports.
Here "julius" refers to the use of 3-margins (and has nothing to do with orange juice - long story). Again the red normal model refers to the use of the normal copula and the blue is the data. Correlation modeling isn't so bad here.
Furthermore, I found examples where correlation modeling worked really well, even after simplification of the model. Looking for an example that I hoped would really trick up pairwise modeling, I decided to model the number of squares whose color remains unchanged after five random moves of a Rubik's Cube.
I was the one who was fooled. Not only did 2-margins do a great job of reconstructing this Rubik generated joint distribution, the same was true after I approximated the model with three factors, then two factors. Amazingly even a 1-factor approximation did a bang up job.
Don't believe me? Find a Rubik's Cube program and try it. Here are some examples of "portfolios" of squares on the Rubik's Cube (the numbering scheme is shown above). We show the probabilities of the number of squares whose color remains unchanged after five random moves. Below the "red" single factor model does almost as well as a fancy piecing together of 3-margins. Here are a few examples of "portfolios" of squares and the distributions of how many change color:
Once again the geometry, by which I mean the mechanics of the Rubik's Cube, sort of suggests ... maybe ... does it? To you? I'm not sure even in retrospect.
Moral of the story: dependence is tricky. Sometimes it isn't just about correlation or covariance matrices. Sometimes what is missing in correlation models matters more than other times. Cryptocurrencies could be a little bit like bowling pins or they might be more like Rubik's Cubes insofar as the importance of 2-margins and 3-margins is concerned (say if we want to be able to understand how all five currencies move together). That is for you to figure out and I look forward to coming back to these cryptocurrency streams at a later date to see what structure clever algorithms have found.
### Python walkthrough
I hope that is sufficient motivation. If you would like to help solve the existential mystery of the joint distribution of cryptocurrencies by helping predict 1-margins, 2-margins and 3-margins, and in doing so get some practice for contests that attract cash awards at Microprediction.Org, read on...
This post takes you under the hood of the mechanics, so you understand the game theory involved - at least some of it. I strongly suggest you read The Lottery Paradox blog article as well.
If you have read the article on helicopter prediction, you may already be familiar with bivariate prediction, but I'm assuming you are coming in cold, and we will also extend to trivariate prediction. If this is all too new, maybe read the article An Introduction to Z-Streams first. Most code in this post is contained in one of two places:
• This notebook - which is intended to expose you to z-stream mechanics.
• The fit.py script - which presents you something of a shortcut.
I will first walk through the former. There, we pull a list of cryptocurrency streams. I wouldn't expect you to know in advance that the ones we are interested in here are prefixed by c5_, but that's the case.
You'll see there are a lot of streams meeting this criterion. That's because five are so-called primary streams but many more are derived streams as well (the ones with tildes). Let's narrow down:
We see five, the first three being:
Next, let's take a look at them:
We see:
There is a fair degree of co-movement in the coin returns, which is hardly surprising. One way to look at this is via Sklar's Theorem. All those other streams (the z2~ and z3~ streams) are really implied Copula functions. Some code and a picture is worth more than my words so:
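If you'd rather not open the notebook just yet, the gist of that cell is something like the following sketch. The stream name and the from_zcurve signature are my assumptions here; the notebook has the original.

import matplotlib.pyplot as plt
from microprediction import MicroReader

mr = MicroReader()
name = 'z2~c5_bitcoin~c5_ethereum~70.json'   # assumed bivariate stream name, see below
lagged = mr.get_lagged_values(name=name)
pairs = [mr.from_zcurve(zvalue=v, dim=2) for v in lagged]   # unpack 1 dim -> 2 dims
plt.scatter([p[0] for p in pairs], [p[1] for p in pairs], s=2)
plt.show()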
And lo, we see the Bitcoin Ethereum copula:
First thing to note about the code - the name of the bivariate stream:
can be inferred from the names of the parent streams, but also needs a parameter, 70, which is the quarantine time of predictions in seconds. You can always just browse all the streams to see what is there: https://www.microprediction.org/browse_streams.html
Second thing to note is the unpacking from 1 to 2 dimensions. Notice that we used the from_zcurve method to convert univariate to bivariate data. This unpacking is via a space filling curve (also explained in the article An Introduction to Z-Streams noted above). And you may ask, percentiles compared to what? The answer is percentiles compared to a collective distributional prediction made by all the algorithms fighting to predict the primary streams (you can see the leaderboards at the cardano primary stream for example).
As of a recent push, you don't need to do this unpacking manually. There are new methods in the microreader which allow you to get the lagged percentiles in two or three dimensions directly:
lagged_percentiles = mr.get_lagged_copulas(name=name, count= 5000)
I refer you to the reader and there is an example of usage at microactors/fit which I will return to momentarily.
The scatter plot can be thought of as samples from a Copula function (see Wikipedia copula article). The question you might ask is, what bivariate random variable with uniform margins might this be? Some people like to apply monotonic transforms of this variable so that margins are more familiar. For example, let's create a new set of samples with normally distributed margins like so:
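As a sketch, the transform is just the inverse normal CDF applied to each margin:

from scipy.stats import norm

# lagged_percentiles: list of [u1, u2] pairs in (0,1), as pulled above
nsamples = [[norm.ppf(u) for u in row] for row in lagged_percentiles]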
You can also do this directly from the stream:
lagged_zvalues = mr.get_lagged_zvalues(name=name, count= 5000)
or, if you prefer percentiles, use get_lagged_copulas as above. And now the (transformed) percentiles might be mistaken for bivariate normal:
What's the correlation?
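You can check with a couple of lines:

import numpy as np

z = np.asarray(nsamples)           # samples with (roughly) normal margins
print(np.corrcoef(z[:, 0], z[:, 1])[0, 1])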
Pretty high when I checked ... 80% ... but the stream had just begun life so we'll see how it goes when you run the notebook.
##### Submitting a distributional prediction of the 2-copula
Let's create a model for this data. I'm not going to work too hard here, but you can improve it. In order to be able to submit to www.Microprediction.org and appear on the bitcoin ethereum bivariate leaderboard, we need 225 samples.
Notice that nsamples are normally distributed whereas usamples are uniform. However, the contest requires univariate submission. So we pack our bivariate percentiles back into univariate via the space filling curve (you did read An Introduction to Z-Streams, right?)
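In sketch form, assuming the packing helper is called to_zcurve (the counterpart of the from_zcurve method used earlier; check the package source for the exact name and signature):

from scipy.stats import norm

usamples = [[float(norm.cdf(z)) for z in row] for row in nsamples]   # back to percentiles
packed = [mr.to_zcurve(prctls=row) for row in usamples]              # 2 dims -> 1 dim (assumed signature)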
And now we are ready to submit them to the contest. However, do you have a write key? You are going to need that. If you don't have one, the following code will create one.
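Something like the following, assuming the new_key helper exported by the microprediction package (a higher difficulty takes longer to mine but earns a larger prediction allowance):

from microprediction import new_key

write_key = new_key(difficulty=12)   # difficulty value is illustrative
print(write_key)                     # email this to yourself!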
This key allows you to create a MicroWriter, which you will need to submit predictions. It is also your identity so if you are planning on winning cash prizes don't lose your key. There isn't any way to recover it. Email it to yourself now!
And now ... drumroll ...
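In sketch form (parameter names assumed; see the MicroWriter class for the exact signature):

from microprediction import MicroWriter

mw = MicroWriter(write_key=write_key)
res = mw.submit(name=name, values=packed, delay=70)   # 70 matches the stream's quarantine
print(res)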
Well, that's one way to do it anyway which illustrates exactly what's going on. However there are submission shortcuts if you wish to submit z-vectors or copulas (percentiles) directly, rather than the pre-image of the space filling curve.
res = mw.submit_zvalues(name=name, zvalues=zvalues, delay=delay )
or if you'd rather submit percentiles:
res = mw.submit_copula(name=name, prctls=prctls, delay=delay )
I refer you to the MicroWriter class for details. However you choose to submit, you won't appear on the leaderboard immediately but by all means head over to the bitcoin ethereum bivariate leaderboard again. Come back tomorrow to see how you are doing.
If you want to run a program that continuously monitors and alters submissions, say in response to live data likely to impact the volatility of cryptocurrencies or their correlations, you may want to "crawl." A crawler can also wander to other streams too, like transport, COVID-19, financial and other time series data. You are welcome to create a crawling program from scratch but you can also derive from MicroCrawler available in the microprediction package. See crawling instructions on the site or jump straight to the crawler code on Github for inspiration.
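A minimal sketch of that, assuming the constructor and run method shown in the repository:

from microprediction import MicroCrawler

crawler = MicroCrawler(write_key=write_key)
crawler.run()   # wanders between streams and maintains submissions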
However, there is another way you might also like. It is "set and forget".
##### Submitting a trivariate prediction using a Copula library and GitHub actions
For most z-streams, you can probably get away with submitting predictions less frequently, since the distribution of implied price changes (the copulas) might not change by the minute. So here I would refer you to a cute little GitHub repository that does all that for you. You need only fork it and modify as you see fit.
Here is the script fit.py in its entirety. The script requires that an environment variable called WRITE_KEY be set, and in my chosen setup this is accomplished by the GitHub action called daily.yml. With the write key we can, as you can see, very easily fit a copula and make a submission.
from microprediction import MicroWriter
import numpy as np
from pprint import pprint
import matplotlib.pyplot as plt
import random
import time
import warnings
warnings.filterwarnings('ignore')
from copulas.multivariate import GaussianMultivariate
import pandas as pd

# Grab the GitHub secret
import os
WRITE_KEY = os.environ.get('WRITE_KEY')
ANIMAL = MicroWriter.animal_from_key(WRITE_KEY)
REPO = 'https://github.com/microprediction/microactors/blob/master/fit.py'
print('This is ' + ANIMAL + ' firing up')

STOP_LOSS = 25

# Get historical data, fit a copula, and submit

def fit_and_sample(lagged_zvalues: [[float]], num: int, copula=None):
    """ Example of creating a "sample" of future values

        lagged_zvalues: [ [z1,z2,z3] ]  distributed N(0,1) margins, roughly
        copula:         Something from https://pypi.org/project/copulas/
        returns:        [ [z1,z2,z3] ]  representative sample

        Swap out this function for whatever you like.
    """
    # Remark 1: It's lazy to just sample synthetic data
    # Remark 2: Any multivariate density estimation could go here.
    # Remark 3: If you prefer uniform margins, use mw.get_lagged_copulas(name=name, count=5000)
    #
    # See https://www.microprediction.com/blog/lottery for discussion of this "game"
    df = pd.DataFrame(data=lagged_zvalues)
    if copula is None:
        copula = GaussianMultivariate()
    copula.fit(df)
    synthetic = copula.sample(num)
    return synthetic.values.tolist()

if __name__ == "__main__":
    mw = MicroWriter(write_key=WRITE_KEY)
    NAMES = [n for n in mw.get_stream_names() if 'z2~' in n or 'z3~' in n]
    for _ in range(1):
        name = random.choice(NAMES)
        lagged_zvalues = mw.get_lagged_zvalues(name=name, count=5000)
        if len(lagged_zvalues) > 20:
            zvalues = fit_and_sample(lagged_zvalues=lagged_zvalues, num=mw.num_predictions)
            pprint((name, len(lagged_zvalues), len(zvalues)))
            try:
                for delay in mw.DELAYS:
                    res = mw.submit_zvalues(name=name, zvalues=zvalues, delay=delay)
                    pprint(res)
            except Exception as e:
                print(e)
    # Quit some stream/horizon combinations where we fare poorly
    mw.cancel_worst_active(stop_loss=STOP_LOSS, num=3)
If you prefer, there is a marginally more sophisticated project called microactors-plots which adds some bells and whistles, including copula eye candy intended to help you identify when the copulas are not fitting well. Here's an example of a pretty bad fit to the trivariate exchange rate relationship achieved by a Vine copula (direct variety).
On the other hand the Vine Copula (center variety) does a somewhat better job!
You can browse them all in the copula gallery
Hopefully this gives you an introduction to live implied copula contests. Since I wrote the first version of this post, the resources available at the www.microprediction.com knowledge center have come along, and you can find video tutorials. As noted, you can simply fork this repository and enable GitHub actions. There's a notebook in the repo you can use to generate yourself a write key. There are some limitations to this approach, but it will get you on the leaderboards very quickly. If you enjoy copulas or multivariate distributional estimation, this one's for you! See also our guide to GitHub actions.
I won't try to survey the theory side, but in the special case of binary random variables I found these to be more than interesting:
• Multivariate Bernoulli Distribution (pdf). Bin Dai, Shilin Ding and Grace Wahba
• Characterization of Multivariate Bernoulli Distributions with Given Margins (pdf). Roberto Fontana and Patrizia Semeraro
• On the Sufficiency of Pairwise Interactions in Maximum Entropy Models of Networks (pdf). Lina Merchan and Ilya Nemenman
Please suggest other references, perhaps here
### Poisson Summation Formula
Consider the summation of $N$ complex sinusoids having frequencies uniformly spaced around the unit circle [264]:

$x(n) \triangleq \frac{1}{N}\sum_{k=0}^{N-1} e^{j\omega_k n} = \begin{cases} 1, & n = 0\ (\mathrm{mod}\ N) \\ 0, & \text{elsewhere} \end{cases}$

where $\omega_k \triangleq 2\pi k / N$.

Setting $N = R$ (the FFT hop size) gives

$x(n) = \frac{1}{R}\sum_{k=0}^{R-1} e^{j\omega_k n} = \sum_{m=-\infty}^{\infty} \delta(n - mR) \qquad (9.26)$

where $\omega_k \triangleq 2\pi k / R$ (harmonics of the frame rate).

Let us now consider these equivalent signals as inputs to an LTI system, with an impulse response given by $h(n)$, and frequency response equal to $H(\omega)$.

Looking across the top of Fig.8.16, for the case of input signal $x(n) = \sum_{m=-\infty}^{\infty}\delta(n - mR)$ we have the output signal

$y(n) = \sum_{m=-\infty}^{\infty} h(n - mR) \qquad (9.27)$

Looking across the bottom of the figure, for the case of input signal

$x(n) = \frac{1}{R}\sum_{k=0}^{R-1} e^{j\omega_k n} \qquad (9.28)$

we have the output signal

$y(n) = \frac{1}{R}\sum_{k=0}^{R-1} H(\omega_k)\, e^{j\omega_k n} \qquad (9.29)$

This second form follows from the fact that complex sinusoids are eigenfunctions of linear systems--a basic result from linear systems theory [264,263].

Since the inputs were equal, the corresponding outputs must be equal too. This derives the Poisson Summation Formula (PSF):

$\sum_{m=-\infty}^{\infty} h(n - mR) = \frac{1}{R}\sum_{k=0}^{R-1} H(\omega_k)\, e^{j\omega_k n} \qquad (9.30)$
Note that the PSF is the Fourier dual of the sampling theorem [270], [264, Appendix G].
The continuous-time PSF is derived in §B.15.
Step-by-step Solution
Step-by-step explanation
Problem to solve:
$\int_{-5}^{5}\frac{1}{\sqrt{5-x}}dx$
$\lim_{c\to5}\:\int_{-5}^{c}\frac{1}{\sqrt{5-x}}dx$
Integrate 1/((5-x)^0.5) from -5 to 5. Replace the integral's upper limit by a finite value. We can solve the integral $\int_{-5}^{c}\frac{1}{\sqrt{5-x}}dx$ by applying the integration by substitution method (also called u-substitution). First, we must identify a section within the integral to replace with a new variable (let's call it u) which, when substituted, makes the integral easier. We see that $5-x$ is a good candidate for substitution. Let's define a variable u and assign it to the chosen part. Now, in order to rewrite $dx$ in terms of $du$, we need to find the derivative of u. We calculate $du$ by differentiating the equation above, then isolate $dx$.
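Carrying the substitution through to the value reported below:

$u = 5-x, \qquad du = -dx$

$\int_{-5}^{c}\frac{dx}{\sqrt{5-x}} = \int_{5-c}^{10} u^{-1/2}\,du = 2\sqrt{10} - 2\sqrt{5-c}$

$\lim_{c\to 5}\left(2\sqrt{10} - 2\sqrt{5-c}\right) = 2\sqrt{10} \approx 6.3246$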
$6.3246$
If a bus travels 20 km in 30 minutes. What is the average speed of the bus? A.) 20km/h B.) 30km/h C.) 40km/h D.) 50km/h
$d = 20\ \mathrm{km}, \quad t = 30\ \mathrm{min} = 0.5\ \mathrm{h}, \quad v = \,?$

Solving: $v = \dfrac{d}{t} = \dfrac{20\ \mathrm{km}}{0.5\ \mathrm{h}} = 40\ \mathrm{km/h}$, so the answer is C.) 40km/h.
Statistical proof
Statistical proof is the rational demonstration of degree of certainty for a proposition, hypothesis or theory that is used to convince others subsequent to a statistical test of the supporting evidence and the types of inferences that can be drawn from the test scores. Statistical methods are used to increase the understanding of the facts and the proof demonstrates the validity and logic of inference with explicit reference to a hypothesis, the experimental data, the facts, the test, and the odds. Proof has two essential aims: the first is to convince and the second is to explain the proposition through peer and public review.[1]
The burden of proof rests on the demonstrable application of the statistical method, the disclosure of the assumptions, and the relevance that the test has with respect to a genuine understanding of the data relative to the external world. There are adherents to several different statistical philosophies of inference, such as Bayes theorem versus the likelihood function, or positivism versus critical rationalism. These methods of reason have direct bearing on statistical proof and its interpretations in the broader philosophy of science.[1][2]
A common demarcation between science and non-science is the hypothetico-deductive proof of falsification developed by Karl Popper, which is a well-established practice in the tradition of statistics. Other modes of inference, however, may include the inductive and abductive modes of proof.[3] Scientists do not use statistical proof as a means to attain certainty, but to falsify claims and explain theory. Science cannot achieve absolute certainty, nor is it a continuous march toward an objective truth as the vernacular (as opposed to the scientific) meaning of the term "proof" might imply. Statistical proof offers a kind of proof of a theory's falsity and the means to learn heuristically through repeated statistical trials and experimental error.[2] Statistical proof also has applications in legal matters with implications for the legal burden of proof.[4]
Axioms
There are two kinds of axioms, 1) conventions that are taken as true that should be avoided because they cannot be tested, and 2) hypotheses.[5] Proof in the theory of probability was built on four axioms developed in the late 17th century:
1. The probability of a hypothesis is a non-negative real number: $\Pr(h) \geq 0$;
2. The probability of necessary truth equals one: $\Pr(t) = 1$;
3. If two hypotheses h1 and h2 are mutually exclusive, then the sum of their probabilities is equal to the probability of their disjunction: $\Pr(h_1) + \Pr(h_2) = \Pr(h_1 \lor h_2)$;
4. The conditional probability of h1 given h2 is equal to the unconditional probability $\Pr(h_1 \land h_2)$ of the conjunction h1 and h2, divided by the unconditional probability $\Pr(h_2)$ of h2 where that probability is positive: $\Pr(h_1 \mid h_2) = \dfrac{\Pr(h_1 \land h_2)}{\Pr(h_2)}$, where $\Pr(h_2) > 0$.
The preceding axioms provide the statistical proof and basis for the laws of randomness, or objective chance from where modern statistical theory has advanced. Experimental data, however, can never prove that the hypothesis (h) is true, but relies on an inductive inference by measuring the probability of the hypothesis relative to the empirical data. The proof is in the rational demonstration of using the logic of inference, math, testing, and deductive reasoning of significance.[1][2][6]
Test and proof
The term proof descended from its Latin roots (provable, probable, probare L.) meaning to test.[7][8] Hence, proof is a form of inference by means of a statistical test. Statistical tests are formulated on models that generate probability distributions. Examples of probability distributions might include the binary, normal, or Poisson distribution that give exact descriptions of variables that behave according to natural laws of random chance. When a statistical test is applied to samples of a population, the test determines if the sample statistics are significantly different from the assumed null-model. True values of a population, which are unknowable in practice, are called parameters of the population. Researchers sample from populations, which provide estimates of the parameters, to calculate the mean or standard deviation. If the entire population is sampled, then the sample statistic mean and distribution will converge with the parametric distribution.[9]
Using the scientific method of falsification, the probability value that the sample statistic is sufficiently different from the null-model than can be explained by chance alone is given prior to the test. Most statisticians set the prior probability value at 0.05 or 0.1, which means if the sample statistics diverge from the parametric model more than 5 (or 10) times out of 100, then the discrepancy is unlikely to be explained by chance alone and the null-hypothesis is rejected. Statistical models provide exact outcomes of the parametric and estimates of the sample statistics. Hence, the burden of proof rests in the sample statistics that provide estimates of a statistical model. Statistical models contain the mathematical proof of the parametric values and their probability distributions.[10][11]
Bayes theorem
Bayesian statistics are based on a different philosophical approach for proof of inference. The mathematical formula for Bayes's theorem is:
$\Pr[\mathrm{Parameter} \mid \mathrm{Data}] = \dfrac{\Pr[\mathrm{Data} \mid \mathrm{Parameter}] \times \Pr[\mathrm{Parameter}]}{\Pr[\mathrm{Data}]}$
The formula is read as the probability of the parameter (or hypothesis = h, as used in the notation on axioms) "given" the data (or empirical observation), where the horizontal bar refers to "given". The right hand side of the formula calculates the prior probability of a statistical model (Pr [Parameter]) with the likelihood (Pr [Data | Parameter]) to produce a posterior probability distribution of the parameter (Pr [Parameter | Data]). The posterior probability is the likelihood that the parameter is correct given the observed data or sample statistics.[12] Hypotheses can be compared using Bayesian inference by means of the Bayes factor, which is the ratio of the posterior odds to the prior odds. It provides a measure of the data and whether it has increased or decreased the likelihood of one hypothesis relative to another.[13]
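As an illustration (with invented numbers): if two hypotheses start at prior odds of 1:1 and the observed data are four times as likely under $h_1$ as under $h_2$ ($\Pr[\text{Data} \mid h_1] = 0.8$, $\Pr[\text{Data} \mid h_2] = 0.2$), the Bayes factor is $0.8/0.2 = 4$, the posterior odds become $4:1$, and the posterior probability of $h_1$ is $4/5 = 0.8$.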
The statistical proof is the Bayesian demonstration that one hypothesis has a higher (weak, strong, positive) likelihood.[13] There is considerable debate if the Bayesian method aligns with Karl Popper's method of proof of falsification, where some have suggested that "...there is no such thing as "accepting" hypotheses at all. All that one does in science is assign degrees of belief..."[14]: 180 According to Popper, hypotheses that have withstood testing and have yet to be falsified are not verified but corroborated. Some researchers have suggested that Popper's quest to define corroboration on the premise of probability put his philosophy in line with the Bayesian approach. In this context, the likelihood of one hypothesis relative to another may be an index of corroboration, not confirmation, and thus statistically proven through rigorous objective standing.[6][15]
In legal proceedings
"Where gross statistical disparities can be shown, they alone may in a proper case constitute prima facie proof of a pattern or practice of discrimination."[nb 1]: 271
Statistical proof in a legal proceeding can be sorted into three categories of evidence:
1. The occurrence of an event, act, or type of conduct,
2. The identity of the individual(s) responsible
3. The intent or psychological responsibility[16]
Statistical proof was not regularly applied in decisions concerning United States legal proceedings until the mid 1970s following a landmark jury discrimination case in Castaneda v. Partida. The US Supreme Court ruled that gross statistical disparities constitutes "prima facie proof" of discrimination, resulting in a shift of the burden of proof from plaintiff to defendant. Since that ruling, statistical proof has been used in many other cases on inequality, discrimination, and DNA evidence.[4][17][18] However, there is not a one-to-one correspondence between statistical proof and the legal burden of proof. "The Supreme Court has stated that the degrees of rigor required in the fact finding processes of law and science do not necessarily correspond."[18]: 1533
In an example of a death row sentence (McCleskey v. Kemp[nb 2]) concerning racial discrimination, the petitioner, a black man named McCleskey, was charged with the murder of a white police officer during a robbery. Expert testimony for McCleskey introduced a statistical proof showing that "defendants charged with killing white victims were 4.3 times as likely to receive a death sentence as defendants charged with killing blacks".[19]: 595 Nonetheless, the statistics were insufficient "to prove that the decisionmakers in his case acted with discriminatory purpose."[19]: 596 It was further argued that there were "inherent limitations of the statistical proof",[19]: 596 because it did not refer to the specifics of the individual. Despite the statistical demonstration of an increased probability of discrimination, the legal burden of proof (it was argued) had to be examined on a case by case basis.[19]
References
1. ^ a b c Gold, B.; Simons, R. A. (2008). Proof and other dilemmas: Mathematics and philosophy. Mathematics Association of America Inc. ISBN 978-0-88385-567-6.
2. ^ a b c Gattei, S. (2008). Thomas Kuhn's "Linguistic Turn" and the Legacy of Logical Empiricism: Incommensurability, Rationality and the Search for Truth. Ashgate Pub Co. p. 277. ISBN 978-0-7546-6160-3.
3. ^ Pedemont, B. (2007). "How can the relationship between argumentation and proof be analysed?". Educational Studies in Mathematics. 66 (1): 23–41. doi:10.1007/s10649-006-9057-x. S2CID 121547580.
4. ^ a b c Meier, P. (1986). "Damned Liars and Expert Witnesses" (PDF). Journal of the American Statistical Association. 81 (394): 269–276. doi:10.1080/01621459.1986.10478270.
5. ^ Wiley, E. O. (1975). "Karl R. Popper, Systematics, and Classification: A Reply to Walter Bock and Other Evolutionary Taxonomists". Systematic Zoology. 24 (2): 233–43. doi:10.2307/2412764. ISSN 0039-7989. JSTOR 2412764.
6. ^ a b Howson, Colin; Urbach, Peter (1991). "Bayesian reasoning in science". Nature. 350 (6317): 371–4. Bibcode:1991Natur.350..371H. doi:10.1038/350371a0. ISSN 1476-4687. S2CID 5419177.
7. ^ Sundholm, G. (1994). "Proof-Theoretical Semantics and Fregean Identity Criteria for Propositions" (PDF). The Monist. 77 (3): 294–314. doi:10.5840/monist199477315. hdl:1887/11990.
8. ^ Bissell, D. (1996). "Statisticians have a Word for it" (PDF). Teaching Statistics. 18 (3): 87–89. CiteSeerX 10.1.1.385.5823. doi:10.1111/j.1467-9639.1996.tb00300.x.
9. ^ Sokal, R. R.; Rohlf, F. J. (1995). Biometry (3rd ed.). W.H. Freeman & Company. pp. 887. ISBN 978-0-7167-2411-7. biometry.
10. ^ Heath, David (1995). An introduction to experimental design and statistics for biology. CRC Press. ISBN 978-1-85728-132-3.
11. ^ Hald, Anders (2006). A History of Parametric Statistical Inference from Bernoulli to Fisher, 1713-1935. Springer. p. 260. ISBN 978-0-387-46408-4.
12. ^ Huelsenbeck, J. P.; Ronquist, F.; Bollback, J. P. (2001). "Bayesian Inference of Phylogeny and Its Impact on Evolutionary Biology" (PDF). Science. 294 (5550): 2310–2314. Bibcode:2001Sci...294.2310H. doi:10.1126/science.1065889. PMID 11743192. S2CID 2138288.
13. ^ a b Wade, P. R. (2000). "Bayesian methods in conservation biology" (PDF). Conservation Biology. 14 (5): 1308–1316. doi:10.1046/j.1523-1739.2000.99415.x. S2CID 55853118.
14. ^ Sober, E. (1991). Reconstructing the Past: Parsimony, Evolution, and Inference. A Bradford Book. p. 284. ISBN 978-0-262-69144-4.
15. ^ Helfenbein, K. G.; DeSalle, R. (2005). "Falsifications and corroborations: Karl Popper's influence on systematics" (PDF). Molecular Phylogenetics and Evolution. 35 (1): 271–280. doi:10.1016/j.ympev.2005.01.003. PMID 15737596.
16. ^ Fienberg, S. E.; Kadane, J. B. (1983). "The presentation of Bayesian statistical analyses in legal proceedings". Journal of the Royal Statistical Society, Series D. 32 (1/2): 88–98. doi:10.2307/2987595. JSTOR 2987595.
17. ^ Garaud, M. C. (1990). "Legal Standards and Statistical Proof in Title VII Litigation: In Search of a Coherent Disparate Impact Model". University of Pennsylvania Law Review. 139 (2): 455–503. doi:10.2307/3312286. JSTOR 3312286.
18. ^ a b The Harvard Law Review Association (1995). "Developments in the Law: Confronting the New Challenges of Scientific Evidence". Harvard Law Review. 108 (7): 1481–1605. doi:10.2307/1341808. JSTOR 1341808.
19. ^ Faigman, D. L. (1991). ""Normative Constitutional Fact-Finding": Exploring the Empirical Component of Constitutional Interpretation". University of Pennsylvania Law Review. 139 (3): 541–613. doi:10.2307/3312337. JSTOR 3312337.
Notes
1. ^ Supreme Court of the United States Castaneda v. Partida, 1977 [1] cited in Meier (1986) Ibid. who states "Thus, in the space of less than half a year, the Supreme Court had moved from the traditional legal disdain for statistical proof to a strong endorsement of it as being capable, on its own, of establishing a prima facie case against a defendant."[4]
2. ^ 481 U.S. 279 (1987).[19]
The <img> tag is used to add images to a web page. It has two attributes: src and alt. In this tutorial you will insert an image into your web page.
An image takes two attributes: src and alt. src specifies the source of the image. If the image does not load, then alternative text needs to be displayed. This alternative text is stored in the alt attribute.
<!DOCTYPE html>
<html>
   <head>
      <meta charset="UTF-8"/>
      <meta name="viewport" content="width=device-width, initial-scale=1.0"/>
      <meta http-equiv="X-UA-Compatible" content="ie=edge"/>
      <title>Images</title>
   </head>
   <body>
      <img src="logo.png" alt="Image 1"/>
   </body>
</html>
The output of the above program is shown below.
If the image does not get displayed, then the text gets displayed as shown below.
You can get a lot of non-copyrighted images from Pexels, which offers free stock photos and videos. So, in the src attribute, you can enter the URL of the image and get the image.
<img src="url_Of_image" alt="image from internet" />
Now, if the images are big, you can always adjust the size using the height and width attributes. There is another way using CSS, which we will use later.
<img src ="logo.png" alt="Image 1" width="260px"/>
The height will automatically get adjusted if you adjust the width. If you want to specifically adjust the height, you can give the height attribute.
<img src ="logo.png" alt="Image 1" width="260px" height="250px"/>
It is better to place all the images in a folder. When your project gets big, you should have CSS, JS, images, fonts and config files in different folders.
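For instance, with a hypothetical images folder, the tag becomes <img src="images/logo.png" alt="Image 1"/>.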
# Plots
Two contours of the function are presented below:
# Description and Features
• The function is not convex.
• The function is defined on n-dimensional space.
• The function is non-separable.
• The function is differentiable.
# Input Domain
The function can be defined on any input domain but it is usually evaluated on $x_i \in [-35, 35]$ for $i=1, …, n$.
# Global Minima
On the 2-dimensional space, the function has one global minimum at $f(\textbf{x}^{\ast}) = -4.590101633799122$ located at $\mathbf{x^\ast}=(-1.51, -0.755)$.
# Implementation
An implementation of the Ackley N. 4 Function is sketched below.
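The page's original MATLAB listing was not captured here, so the following Python sketch implements the same definition (it reproduces the 2-dimensional minimum quoted above):

import numpy as np

def ackley_n4(x):
    # Ackley N. 4 for a point x of dimension n >= 2
    x = np.asarray(x, dtype=float)
    xi, xj = x[:-1], x[1:]
    return float(np.sum(np.exp(-0.2) * np.sqrt(xi**2 + xj**2)
                        + 3.0 * (np.cos(2.0 * xi) + np.sin(2.0 * xj))))

print(ackley_n4([-1.51, -0.755]))   # approximately -4.5901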
The function can be represented in LaTeX as follows:
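$f(\mathbf{x}) = \sum_{i=1}^{n-1}\left[ e^{-0.2}\sqrt{x_i^2 + x_{i+1}^2} + 3\left(\cos(2x_i) + \sin(2x_{i+1})\right)\right]$

(This is the standard form of the Ackley N. 4 function; plugging in $\mathbf{x^\ast}=(-1.51, -0.755)$ reproduces the minimum quoted above.)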
# Yrogirg
# 398 Actions
- Apr 17: revised "How to create visible reflections in shallow water?" (added 634 characters in body)
- Apr 17: comment on "How to create visible reflections in shallow water?": "@AnishaKaul Just make coffee without milk, I thought it should be black. I mentioned coffee because it seems to be readily available. You can use ink instead or something like that."
- Apr 17: revised "How to create visible reflections in shallow water?" (added 686 characters in body)
- Apr 17: answered "How to create visible reflections in shallow water?"
- Apr 11: comment on "Can a dot of light travel faster than the speed of light?": "OP asked about FTL speed, you answered the speed would be exactly $c$, so the answer should be 'No'. However, is there a simple argument to show that the dot speed would be $c$, not a certain fraction of it?"
- Apr 11: comment on "Can a dot of light travel faster than the speed of light?": "Could you please address a (popular) concern that the beam would bend, just like water from a hose?"
- Apr 8: comment on "Universal Sequence and relationship of mathematics and reality": "@Xaqron check oeis.org/A002823 If this is the sequence you are talking about, link to it in your question. OEIS might be the primary reference site for integer sequences."
- Apr 5: comment on "Universal Sequence and relationship of mathematics and reality": "Anyway, I do not see a point in commenting on philosophical remarks, at least on this site. The quote of Einstein is hardly related to physical theory; it is simply not a part of physics."
- Apr 5: comment on "Universal Sequence and relationship of mathematics and reality": "For whom might be interested, the article by Metropolis, Stein, and Stein dx.doi.org/10.1016/0097-3165(73)90033-2; the pdf is freely downloadable."
- Apr 5: comment on "Universal Sequence and relationship of mathematics and reality": "What is Universal Sequence?"
- Feb 8: reviewed (Satisfactory) "Are Carnot engine efficiency and Fourier heat transmission law related?"
- Feb 8: awarded Custodian
- Feb 8: reviewed (Needs Improvement) "Relation between density and refractive index of medium"
- Feb 8: reviewed (Satisfactory) "Entropy and the principle of least action"
- Jan 28: revised "Sensors for Greenhouse Study" (for metric countries residents)
- Jan 28: suggested approved edit on "Sensors for Greenhouse Study"
- Jan 25: comment on "Any new texts directly on second law of thermodynamics?": "have a look at books by Georgy Lebon, David Jou, José Casas Vázquez. There are several, several editions, get the one you can. Actually I don't see how the question duplicates the linked one."
- Jan 23: comment on "Maximum Principle vs. Minimum Principle in Non-equilibrium Thermodynamics": "@Nathaniel, concerning MaxEnP, do you actually mean that the description of the system should admit non-unique solutions under given boundary conditions? That is, one defines what he means by "state" and then postulates that it is not determined by the environment and the previous state?"
- Jan 23: comment on "Does a toy top weigh less when it is spinning?": "for those who might be interested, the second paragraph seems to be about the tippe top en.wikipedia.org/wiki/Tippe_top"
- Jan 23: comment on "Maximum Principle vs. Minimum Principle in Non-equilibrium Thermodynamics": "also have a look at 'Extremal principles in non-equilibrium thermodynamics' at the Azimuth wiki azimuthproject.org/azimuth/show/…"
# First-ever precision spectroscopy of Antihydrogen – using a TOPTICA laser!
A TA-FHG pro laser was used for the first-ever optical spectroscopy on an antimatter atom.
CERN scientists have published the first-ever measurement on the optical spectrum of an antimatter atom. A TOPTICA TA-FHG pro was used for the key measurement of this success.
The reported optical spectroscopy of the 1S-2S two-photon transition in hydrogen is a cornerstone of modern atomic physics. Over the last decades, the measurement precision of the transition frequency was improved by several orders of magnitude, allowing tests of fundamental physical theories like quantum electrodynamics in a very simple system.
Now, scientists at CERN have repeated the same experiment with antihydrogen, i.e. an atom consisting of an antiproton and a positron. With a TOPTICA TA-FHG pro laser tuned to the 1S-2S transition around 243 nm, and only a few atoms detected, they can already put bounds on the Standard Model, as reported in Nature.
Congratulations! | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8726064562797546, "perplexity": 1697.750069656042}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178368608.66/warc/CC-MAIN-20210304051942-20210304081942-00265.warc.gz"} |
# Some troubling and interesting things about investigating reasoning
Competence models are typically created and explored by a small number of experts. Boole, Gentzen, Kolmogorov, Ramsey, De Finetti, … The authority can often be shifted to the mathematics. However, although non-experts can usually understand a statement of the theorem to proved, often they can’t understand the details of the proof.
There are problems with being an expert. If you stare too long at the formalism, then you lose your intuition, and can’t see why someone would interpret a task “the wrong” way. Often there are a priori non-obvious interpretations.
And who decides what constitutes a permissible interpretation? Some obvious ideas for this are open to debate. For instance, is it always reasonable for people to keep their interpretation constant across tasks? Or is it rational to change your mind as you learn more about a problem? Is it rational to be aware of when you change your mind?
To complicate things further, various measures loading on g predict interpretations. Does that mean that those who have better cognitive ability can be thought of as having reasoned to the correct interpretation?
# Recognizing textual entailment with natural logic
How do you work out whether a segment of natural language prose entails a sentence?
There are two extreme positions on how to model what’s going on. One is to translate the natural language into a logic of some kind, then apply a theorem prover to draw conclusions. The other is to use algorithms which work directly on the original text, using no knowledge of logic, for instance applying lexical or syntactic matching between premises and putative conclusion.
The main problem with the translation approach is that it's very hard, as anyone who has tried manually to formalise some prose will agree. The main problem with approaches processing the text in a shallow fashion is that they can be easily tricked, e.g., by negation, or by systematically replacing quantifiers.
Bill MacCartney and Christopher D. Manning (2009) report some work from the space in between using so-called natural logics, which work by annotating the lexical elements of the original text in a way that allows inference. One example of such a logic familiar to those in the psychology of reasoning community is described by Geurts (2003).
The general idea is finding a sequence of edits, guided by the logic, which try to transform the premises into the conclusion. The edits are driven solely by the lexical items and require no context.
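To make the flavour concrete, here is a toy sketch (mine, not MacCartney and Manning's system) of one such lexically driven edit: replacing a term by a hypernym, which is sound in an upward-monotone position such as the scope of "some".

HYPERNYMS = {"carp": "fish", "fish": "animal"}   # invented mini-lexicon

def generalize(word):
    # Yield the successive hypernyms of a word
    while word in HYPERNYMS:
        word = HYPERNYMS[word]
        yield word

def entails_upward(premise_word, conclusion_word):
    # "Some carp swim" entails "Some fish swim": a hypernym substitution
    # is licensed in an upward-monotone context, with no other knowledge
    return conclusion_word in generalize(premise_word)

assert entails_upward("carp", "animal")
assert not entails_upward("fish", "carp")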
Seems promising for many cases, easily beating both the naive lexical comparisons and attempts automatically to formalise and prove properties in first-order logic.
References
Bill MacCartney and Christopher D. Manning (2009). An extended model of natural logic. The Eighth International Conference on Computational Semantics (IWCS-8), Tilburg, Netherlands, January 2009.
Geurts, B. (2003). Reasoning with quantifiers. Cognition, 86, 223-251.
# Language and logic (updated)
Some careful philosophical discussion by Monti, Parsons, and Osherson (2009):
There may well be a “language of thought” (LOT) that underlies much of human cognition without LOT being structured like English or other natural languages. Even if tokens of LOT provide the semantic interpretations of English sentences, such tokens might also arise in the minds of aphasic individuals and even in other species and may not resemble the expressions found in natural language. Hence, qualifying logical deduction as an “extra-linguistic” mental capacity is not to deny that some sort of structured representation is engaged when humans perform such reasoning. On the other hand, it is possible that LOT (in humans) coincides with the ‘‘logical form’’ (LF) of natural language sentences, as studied by linguists. Indeed, LF (serving as the LOT) might be pervasive in the cortex, functioning well beyond the language circuit […].
Levels of analysis again. Just because something “is” not linguistic doesn’t mean it “is” not linguistic.
This calls for a bit of elaboration! (Thanks Martin for the necessary poke.) There could be languages—in a broad sense of the term—implemented all over the brain. Or, to put it another way, various neural processes, lifted up a level of abstraction or two, could be viewed linguistically. At the more formal end of cognitive science, I’m thinking here of the interesting work in the field of neuro-symbolic integration, where connectionist networks are related to various logics (which have a language).
I don’t think there is any language in the brain. It’s a bit too damp for that. There is evidence that bits of the brain support (at the personal-level of explanation) linguistic function: picking up people in bars and conferences, for instance. There must be linguistic-function-supporting bits in the brain somewhere; one question is how distributed they are. I would also argue that linguistic-like structures (the formal kind) can characterise (i.e., a theorist can use them to chacterise) many aspects of brain function, irrespective of whether that function is linguistic at the personal-level. If this is the case, and those cleverer than I think it is, then that suggests that the brain (at some level of abstraction) has properties related to those linguistic formalisms.
Reference
Monti, M. M.; Parsons, L. M. & Osherson, D. N. (2009). The boundaries of language and thought in deductive inference. Proceedings of the National Academy of Sciences of the United States of America.
# Free books
From LogBlog:
Exciting developments! The Association of Symbolic Logic has made the now-out of print volumes in the Lecture Notes in Logic (vols. 1-12) and Perspectives in Mathematical Logic (vols. 1-12) open-access through Project Euclid. This includes classics like
# Computational logic and psychology
Prediction. This stuff is going to be put to work in psychology soon. (End of prediction.)
Computability logic … is a recently launched program for redeveloping logic as a formal theory of computability, as opposed to the formal theory of truth that logic has more traditionally been. Formulas in it represent computational problems, “truth” means existence of an algorithmic solution, and proofs encode such solutions…
P.S. The most frustrating logic paper I’ve seen in a long time is over here: a graph theoretical notion of propositional logic. Why is this frustrating? Check out the sickening rhetoric, especially Figure 1 which shows a proof of $((p \Rightarrow q) \Rightarrow p) \Rightarrow p$ in logic which the author claims is commonly taught to mathematics undergraduates. What the author doesn’t mention is that it’s often taught as it’s easy to prove meta-theorems in the logic. Most sensible people (well sensible formal logic users, e.g. implementers of provers) use, e.g., a tableaux system or natural deduction to actually prove things. Why was this nonsense allowed to stay in the paper?
# Science for the half-wits
A bit from Jean Yves-Girard‘s latest rant, The phantom of transparency:
Still under the heading « science for the half-wits », let us mention non monotonic « logics ». They belong in our discussion because of the fantasy of completeness, i.e., of the answer to all questions. Here, the slogan is what is not provable is false : one thus seeks a completion by adding unprovable statements. Every person with a minimum of logical culture knows that this completion (that would yield transparency) is fundamentally impossible, because of the undecidability of the halting problem, in other terms, of incompleteness, which has been rightly named : it denotes, not a want with respect to a preexisiting totality, but the fundamentally incomplete nature of the cognitive process.
Completeness is boring. Maybe Y-G would be less confused if he viewed these logics as modelling information update, important given the “fundamentally incomplete nature of the cognitive process”.
# Logic and Reasoning: do the facts matter?
Had a read of Logic and reasoning: do the facts matter? by Johan van Benthem. Covers much ground in a short space but I found it thought provoking. Here’s a quick sketch of the bits I liked.
Van Benthem mentions the anti-psychologism stance, briefly the idea that human practice cannot tell us what correct reasoning is. He contrasts Frege’s view with that of Wundt; the latter, he argues, was too close to practice; Frege was too far. He argues that if logics were totally inconsistent with real practice then they’d be useless.
Much logic is about going beyond what classical logic has to offer and is driven by real language use. Van Benthem cites Prior's work on temporal structure, Lewis and Stalnaker's work on comparative orderings of worlds, and work on generalised quantifiers which was driven by the mess of real language and for instance produced formalisations of quantifiers like most and few. Generally, van Benthem argues, "one needs to move closer to the goal of providing more direct and faithful mathematical renderings of what seem to be stable reasoning practices." You want your logic to be more natural, closer to the phenomena. Conceptions of mathematical logic were driven by the terms that appeared in rigorous proofs, so the linguistic stuff is just widening the set of practices that are modelled.
Correctability in a logic is more important than correctness, he argues. This is consistent with the goals of the non-monotonic logic crowd I know and love. I find this most interesting when looking at individual differences in reasoning processes: perhaps a correctability dimension is out there somewhere, if only we could measure it and its correlates. I have some ideas—stay tuned.
Divergences from competence criteria, he argues, suggest new practices. I still see many papers in which people are scored against classical logic. Failure should cause an attempt to work out what practice is being followed by a person rather than the more common concern of what went wrong and how we could bring people back.
Much more in this little paper…
# Explain the Format of C language
C is a general-purpose, procedural, imperative computer programming language. In the C language, we see that:
• Statements are terminated with semicolons.
• C is case sensitive
• Indentation is ignored by the compiler.
• Strings are placed in double quotes.
• Library functions are lowercase.
• Newlines are handled via \n
## Format of C
The format of C programming language is explained below −
## Semicolons
Semicolons are very important in C.
It tells the compiler, where one statement ends and the next statement begins.
If you fail to place a semicolon after each statement, you will get compilation errors.
## Case Sensitivity
C is a case sensitive language. Although int compiles, "Int", "INT", or any other variation will not be functional in C.
All C keywords are lowercase.
Even though comments are not required, it is a good practice to begin a program with a comment indicating the purpose of the program, such as the author and the date the program was written.
### Example program
Following is the C program to calculate the circumference of a circle by using the C format method −
The formula for the circumference of a circle is 2*PI*R, where R is the radius of the circle and PI is a constant whose approximate value is 3.1415.
## Example
#include <stdio.h>   // library functions are lower case
#include <conio.h>
#define PI 3.1415

int main(){
   float c, r;                         // statements are terminated with semicolons
   printf("enter radius of circle");   // strings are placed in double quotes
   scanf("%f", &r);
   c = 2 * PI * r;
   printf("Circumference = %f", c);
   getch();
   return 0;
}
## Output
When the above program is executed, it produces the following result −
Enter radius of circle 1
Circumference=6.2830
https://link.springer.com/article/10.1007%2Fs10539-011-9280-4 | Biology & Philosophy
, Volume 26, Issue 5, pp 717–736
# Explanations in search of observations
## Abstract
The paper explores how, in economics and biology, theoretical models are used as explanatory devices. It focuses on a modelling strategy by which, instead of starting with an unexplained regularity in the world, the modeller begins by creating a credible model world. The model world exhibits a regularity, induced by a mechanism in that world. The modeller concludes that there may be a part of the real world in which a similar regularity occurs and that, were that the case, the model would offer an explanation. Little concrete guidance is given about where such a regularity might be found. Three modelling exercises in evolutionary game theory—one from economics and two from biology—are used as case studies. Two of these (one from each discipline) exemplify ‘explanation in search of observation’. The third goes a step further, analysing a regularity in a model world and treating it as informative about the real world, but without saying anything about real phenomena. The paper argues that if the relation between the model and real worlds is understood in terms of similarity, and if modelling is understood as an ongoing discovery process rather than as the demonstration of empirical truths, there can be value in creating explanations before finding the regularities that are to be explained.
### Keywords
Model · Credible world · Similarity · Isolation
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8567460775375366, "perplexity": 8745.764852525737}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886104204.40/warc/CC-MAIN-20170818005345-20170818025345-00414.warc.gz"}
https://www.newslytica.com/post/the-case-of-hydroxychloroquine | • Sai Charan
# The Case Of Hydroxychloroquine
There is no proven cure for treating COVID-19. However, there are a couple of medicines that are being used to treat the symptoms of COVID-19. One of them is Hydroxychloroquine. Hydroxychloroquine has been in the limelight ever since President Trump declared it a miracle drug. However, it took a backseat due to the Lancet study, which said that Hydroxychloroquine increases the chances of mortality instead of decreasing it. The stocks of companies that manufacture Hydroxychloroquine took a nosedive following the publication of the study.
This put the manufacturers in a tight spot. However, the study was later taken with a pinch of salt due to doubts that the authors had turned a blind eye to the fact that the entire study relied on data from a single company. The authors later retracted the study, acknowledging that their data was not verified. This meant that the manufacturers finally weathered the storm. However, there is still no substantive evidence backing the ability of Hydroxychloroquine to treat COVID-19 patients. In the meantime, pharmaceutical companies continue to zero in on developing a cure for COVID-19.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9485681653022766, "perplexity": 2187.2089214847}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488507640.82/warc/CC-MAIN-20210622033023-20210622063023-00040.warc.gz"}
https://mathematica.stackexchange.com/questions/199691/contourplot3d-not-wotking-for-slicing-z-axis-in-the-ranges-of-10-13/199756 | # ContourPlot3D not working for slicing z axis in the ranges of 10^-13
I am using ContourPlot3D to obtain a slice plane at z = constant. However I find that the plane's position is not moving when the range is of the order of 10^-13. I am using the following code.
zSlice = ContourPlot3D[z == -6*10^-13, {x, 0, 3*10^-6}, {y, -2, 2}, {z, -8*10^-13,2*10^-13}, ContourStyle -> Opacity[0.5, Lighter@Green],Mesh -> None,ImageSize -> Large, PlotRange -> {-8*10^-13, 2*10^-13}]
zSlice = ContourPlot3D[z == -4*10^-13, {x, 0, 3*10^-6}, {y, -2, 2}, {z, -8*10^-13,2*10^-13}, ContourStyle -> Opacity[0.5, Lighter@Green],Mesh -> None,ImageSize -> Large, PlotRange -> {-8*10^-13, 2*10^-13}]
The above code doesn't change the z=constant plane in the plot. What could be the error?
In general, it is a bad idea to work with really small or really large parameters in expressions that you then use in plots or equations. Instead you should re-scale your quantities into an appropriate system of units. For example, in atomic systems you should use atomic units. The wikipedia entry on nondimensionalization is a good introduction.
Also, I see that you have manually typed in epsilon = 8.85*10^-12, which I assume is the permittivity of free space. Note that Mathematica has built-in access to the current value of physical constants. And this quantity now has an exact value:
WolframAlphaResult["permittivity of free space", {{"Definition", 1}, "Content"},
PodStates -> {"Value__Show exact value"}]
I'd bet that if you re-scale your parameters, all the problems you are experiencing will disappear and also your plots will look more elegant as the ticks will be numbers of order 1.
• Thank you. Yes, I re-scaled my values and hence I am able to get the planes cutting the axis and also able to move this plane with good precision. Thanks for your input on physical constants. – Raghuram Jun 21 at 4:35
To avoid possible precision issues, you can rescale the contour levels and vertical plot range to run from -8 to 2 (say) and use custom ticks for the vertical axis to show the scaled-back tick labels:
ContourPlot3D[{z == -6, z == -4}, {x, 0, 3*10^-6}, {y, -2, 2}, {z, -8,
2},
ContourStyle -> {Opacity[0.5, Lighter@Green], Opacity[0.5, Lighter@Red]},
Mesh -> None, ImageSize -> Medium,
PlotRange -> {-8, 2},
Ticks -> {Automatic, Automatic, Charting`FindTicks[{-8, 2}, (10^-13) {-8, 2}][-8, 2]}]
Update: to combine with another Plot3D output you can rescale the first argument of Plot3D and use zSlice4 first in Show (so that the option values of zSlice4 for Ticks are in effect in Show):
fig1 = Plot3D[Sin[x + y^2] 10^12, {x, -5*10^-12, 2*10^-12}, {y, 0, 2*10^-12}];
zSlice4 = ContourPlot3D[{z == -3, z == -2}, {x, -5*10^-12, 2*10^-12},
{y, 0, 2*10^-12}, {z, -5, 2},
ContourStyle -> {Opacity[0.5, Lighter@Green], Opacity[0.5, Lighter@Red]},
Mesh -> None, ImageSize -> Large,
PlotRange -> {-5, 2},
Ticks -> {Automatic, Automatic, Charting`FindTicks[{-5, 2}, (10^-12) {-5, 2}][-5, 2]}];
Show[zSlice4, fig1]
• Thanks for your reply. However, I need to add this to another Plot3D using Show[] command. But I am facing an error there. Please see the code snippet below: fig1 = Plot3D[Sin[x + y^2], {x, -5*10^-12, 2*10^-12}, {y, 0, 2*10^-12},ImageSize -> Large] zSlice4 = ContourPlot3D[{z == -3}, {x, -5*10^-12, 2*10^-12}, {y, 0,2*10^-12}, {z, -5, 2},ContourStyle -> {Opacity[0.5, Lighter@Green], Opacity[0.5, Lighter@Red]}, Mesh -> None, ImageSize -> Large,PlotRange -> {-5, 2},Ticks -> {Automatic, Automatic,Charting`FindTicks[{-5, 2}, (10^-12) {-5, 2}][-5, 2]}] Show[fig1, zSlice4] Thanks in advance. – Raghuram Jun 4 at 10:46
• Thank you. But I still face a problem in the output plot I am getting using this ticks. The output x axis is protruding beyond the specified limit. k = 0.439; epsilon = 8.85*10^-12; A = 960*10^-12; g = 3*10^-6; m = 4.4736*10^-12; V = 13; – Raghuram Jun 5 at 4:29
• @Raghuram, i suggest you update your question with that example showing what roles k, epsilon, A,g and V play in Plot3D and ContourPlot3D. – kglr Jun 5 at 4:35
• Yes I have updated the question. Sorry for missing the complete code snippet. – Raghuram Jun 5 at 4:57
• @Raghuram, you updated your deleted answer; you should put that information in the question, not in a deleted answer. – kglr Jun 5 at 4:59
My Plot3D now is a surface rather than a plane. When I did as you mentioned, I am able to see only the top surface of my plot and not the entire surface. So I changed my Plot3D also with ticks command. But I am now facing an error that my output plot is protruding beyond the x range specified. Pls see the code snippet below.
k = 0.439; epsilon = 8.85*10^-12; A = 960*10^-12; g = 3*10^-6; m = 4.4736*10^-12; V = 13;
MyOut[x_, xdot_] := 0.5*m*xdot^2 + 0.5*k*x^2 - (0.5*epsilon*A*V^2)/(g - x);
fig1 = Plot3D[MyOut[x, xdot]*10^12, {x, 0, g}, {xdot, -2, 2},
AxesLabel -> {x,xdot,MyOutput}, ImageSize -> Large,
PlotRange -> {-5, 2},
Ticks -> {Automatic, Automatic,
Charting`FindTicks[{-5, 2}, (10^-12) {-5, 2}][-5, 2]}];
zSlice4 =
ContourPlot3D[{z == -1}, {x, 0, g}, {y, -2, 2}, {z, -5, 2},
ContourStyle -> {Opacity[0.5, Lighter@Green],
Opacity[0.5, Lighter@Red]}, Mesh -> None, ImageSize -> Large,
PlotRange -> {-5, 2},
Ticks -> {Automatic, Automatic,
Charting`FindTicks[{-5, 2}, (10^-12) {-5, 2}][-5, 2]}];
Show[zSlice4, fig1]
• scaling factor 10^-12 was for the specific case fig1 = Plot3D[Sin[x + y^2], {x, -5*10^-12, 2*10^-12}, {y, 0, 2*10^-12},ImageSize -> Large] in your comment. For the new function MyOut you need to find the scaling factor based on the range of xdot and x – kglr Jun 5 at 5:48
• Ok. Thanks a lot. – Raghuram Jun 5 at 7:23 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2238444983959198, "perplexity": 5767.926559615732}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540500637.40/warc/CC-MAIN-20191207160050-20191207184050-00460.warc.gz"} |
https://docs.nvidia.com/cuda/libdevice-users-guide/__nv_fmaf_ru.html | ## 3.148. __nv_fmaf_ru
Prototype:
float @__nv_fmaf_ru(float %x, float %y, float %z)
Description:
Computes the value of $x×y+z$ as a single ternary operation, rounding the result once in round-up (to positive infinity) mode.
Returns:
Returns the rounded value of $x×y+z$ as a single operation.
• __nv_fmaf_ru( $±\infty$ , $±0$ , z) returns NaN.
• __nv_fmaf_ru( $±0$ , $±\infty$ , z) returns NaN.
• __nv_fmaf_ru(x, y, $-\infty$ ) returns NaN if $x×y$ is an exact $+\infty$ .
• __nv_fmaf_ru(x, y, $+\infty$ ) returns NaN if $x×y$ is an exact $-\infty$ .
Note:
For accuracy information for this function see the CUDA C++ Programming Guide, Appendix E.1, Table 6.
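As an illustrative sketch (not taken from this page), device code compiled with nvcc can reach the same operation through the CUDA C++ intrinsic __fmaf_ru, which performs the fused multiply-add in round-up mode:
__global__ void fma_ru(const float *x, const float *y, const float *z, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        out[i] = __fmaf_ru(x[i], y[i], z[i]); // one fused multiply-add, rounded once toward +infinity
    }
}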
Library Availability:
Compute 2.0: Yes
Compute 3.0: Yes
Compute 3.5: Yes | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 12, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7561905384063721, "perplexity": 8496.906870048346}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606872.19/warc/CC-MAIN-20200122071919-20200122100919-00334.warc.gz"} |
http://mathhelpforum.com/advanced-statistics/32482-moment-generating-function-poisson-disttribution.html | # Math Help - moment generating function for Poisson distribution
1. ## moment generating function for Poisson distribution
Hi guys, I am new. I have a problem with finding moment generating functions. The forum helped me in solving the binomial, geometric and a few other random variables, but I still couldn't understand how the moment generating function for a Poisson distribution is derived from its probability density function.
Well, I guess this is about the last one in the list!
$m_X(t) = \sum_{y=0}^{\infty} e^{yt} \frac{\lambda^y e^{-\lambda}}{y!} = e^{-\lambda} \sum_{y=0}^{\infty} e^{yt} \frac{\lambda^y}{y!}$
$= e^{-\lambda} \sum_{y=0}^{\infty} \frac{(\lambda e^t)^y}{y!}$
using the standard series $\sum_{y=0}^{\infty} \frac{(x)^y}{y!} = e^x$ and substituting $x = \lambda e^t$ *
$= e^{-\lambda} \, e^{\lambda e^t} = e^{\lambda(e^t - 1)}$.
* If you don't like doing this there is another clever way of doing it.
3. Originally Posted by mr fantastic
Well, I guess this is about the last one in the list!
$m_X(t) = \sum_{n=0}^{\infty} e^{nt} \frac{\lambda^n e^{-\lambda}}{n!} = e^{-\lambda} \sum_{n=0}^{\infty} e^{nt} \frac{\lambda^n}{n!}$
$= e^{-\lambda} \sum_{n=0}^{\infty} \frac{(\lambda e^t)^n}{n!}$
using the standard series $\sum_{n=0}^{\infty} \frac{(y)^n}{n!} = e^y$ and substituting $y = \lambda e^t$ *
$= e^{-\lambda} \, e^{\lambda e^t} = e^{\lambda(e^t - 1)}$.
* If you don't like doing this there is another clever way of doing it.
ahhh... ic i didnt know there was that standard series thingy xD thanks for the quick reply ^-^ btw what is the other method O.o? no harm in knowing more.
4. Originally Posted by mr fantastic
Well, I guess this is about the last one in the list!
$m_X(t) = \sum_{y=0}^{\infty} e^{yt} \frac{\lambda^y e^{-\lambda}}{y!} = e^{-\lambda} \sum_{y=0}^{\infty} e^{yt} \frac{\lambda^y}{y!}$
$= e^{-\lambda} \sum_{y=0}^{\infty} \frac{(\lambda e^t)^y}{y!}$
[snip]
Let $\mu = \lambda e^t$:
$= e^{-\lambda} \sum_{y=0}^{\infty} \frac{\mu^y}{y!}$
$= e^{-\lambda} \, e^{\mu} \sum_{y=0}^{\infty} \frac{e^{-\mu}\, \mu^y}{y!}$
$\frac{e^{-\mu}\, \mu^y}{y!}$ is recognised as the pdf of a random variable following a Poisson distribution with mean $\mu$. Therefore $\sum_{y=0}^{\infty} \frac{e^{-\mu}\, \mu^y}{y!} = 1$:
$= e^{-\lambda} \, e^{\mu} (1)$
Substitute back that $\mu = \lambda e^t$:
$= e^{-\lambda} \, e^{\lambda e^t} = e^{\lambda(e^t - 1)}$.
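As a quick check, differentiating this result gives $m_X'(t) = \lambda e^t \, e^{\lambda(e^t - 1)}$, so $m_X'(0) = \lambda$, the mean of the Poisson distribution, as expected.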
5. ahhhh..... now thats a better way to understand ^-^ thanks again | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 21, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9194018840789795, "perplexity": 525.8453556049153}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398447758.91/warc/CC-MAIN-20151124205407-00032-ip-10-71-132-137.ec2.internal.warc.gz"} |
https://sciencehouse.wordpress.com/2009/04/25/waiting-in-airports/ | # Waiting in airports
I was visiting the University of Utah this past week. I gave talks on the Kinetic Theory of Coupled Oscillators and on Deriving Moment Equations for Neural Networks. On my way to the airport I wondered what would be the optimal arrival time so that you spend the least amount of time waiting in the airport balanced by the cost of missing a flight. If you make some basic assumptions, it’s not too hard to derive a condition for the optimum. Let’s say the only thing we’re concerned about is minimizing wasted time. Then what we would want to do is to balance the average time waiting in airports with the average time lost to make up for a missed flight.
Let $t_a$ be the time between arrival at the airport and boarding the plane and $\sigma$ be the standard deviation in this time due to traffic, the check in line, going through security, etc. The average amount of time spent waiting in the airport is thus the expectation value of $t_a$, $\bar t_a$. Suppose we let C be the time wasted if you miss a flight. Then the expected time wasted for missing a flight is CP, where P is the probability of missing a flight. So, optimality would be given by $\bar t_a = C P$. Now the probability for missing a flight will be a function of the waiting time. Assuming a normal distribution gives $P= .5{\rm erfc}(\bar t_a/\sqrt{2}\sigma)$, where erfc is the complementary error function. Hence, if your expected waiting time is zero then you would miss half of your flights. The optimal arrival time is then given by the condition $\bar t_a= .5C{\rm erfc}(\bar t_a/\sqrt{2}\sigma)$.
So let’s say the standard deviation is an hour and a missed flight costs about 5 hours, then solving numerically (on Mathematica) gives $\bar t_a = 0.9$. So the optimal time to arrive at the airport is a little less than an hour before you board. The optimal time is not very sensitive to the cost of missing the flight. Making it 20 hours only increases the optimal arrival time to an hour and a half. Reducing the standard deviation to half an hour reduces the optimal time to 36 minutes.
By this calculation it would seem that by arriving about an hour before departure, which is what I usually do, is close to optimal. However, there is a flaw in this calculation because I can only recall missing one flight in my life and by optimality I should be missing about one in five flights (given that I arrive at the airport an hour before my flight and my estimated cost per missed flight is 5 hours). What this implies is that the transit time to the gate distribution is much narrower than a normal so that while the uncertainty in transit time from my house to the gate seems to be about half an hour to an hour, it almost never takes much longer. However, having a narrower distribution means that the optimal waiting time won’t change very much because the probability of missing a plane increases very quickly as you shorten the waiting time (i.e. the difference between arriving 45 minutes before departure versus an hour could mean missing many more flights). So an hour before the flight is still pretty close to optimal. Having said all this, I actually don’t mind showing up at the airport a little earlier than necessary since it gives me a chance to read. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 8, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8744753003120422, "perplexity": 435.91145263924955}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886118195.43/warc/CC-MAIN-20170823094122-20170823114122-00547.warc.gz"} |
https://blog.heysh.xyz/2017/11/12/2017-11-12-aeolipile-boiling/ | These days I was busy tinkering with Aeolipile. Now it has more exciting features! Wanna know what is Aeolipile? See the posts above!
## Now we have an author page!
It has been on the checklist for a long time! A big fan of a Steemian? Here's a place for any of his/her posts at https://heyeshuang.github.io/aeolipile/#/@<USERNAME>!
For example, you can find @Elear's posts at https://heyeshuang.github.io/aeolipile/#/@elear, or my posts (which contain some Chinese) at https://heyeshuang.github.io/aeolipile/#/@heyeshuang.
## And finally, Aeolipile as your own blog!
Like aeolipile? Now you can bring it home! I have added blog mode to the page, which makes it possible to deploy aeolipile for your posts only! In other words, a personal blog!
Check the instance of aeolipile-blog at https://aeolipile-blog.netlify.com/.
See the difference? A tidy URL without anyone else, like other blog systems do! You can modify the domain name, the title and anything you can imagine!
### So, how can I get one?
• Download dist.zip from this link
• extract that zip file
• open config.js with any text editor
Feel free (as in freedom) to star, fork and contribute to https://github.com/heyeshuang/aeolipile/! Basic config is at /src/config.js, and the CSS file /src/typo/typo-fangsong.css is for the typography. Transform aeolipile to your favourite! And don't forget to pull your code to me! | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.16187207400798798, "perplexity": 5971.715697101179}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703521139.30/warc/CC-MAIN-20210120151257-20210120181257-00733.warc.gz"} |
http://physics.stackexchange.com/questions/43813/what-was-albert-einsteins-proof-for-e-mc2/55362 | # What was Albert Einstein's proof for $E=mc^2$?
Most people know the famous equation:
$$E=mc^2$$
What was Einstein's line of thinking behind this equation, which helped us discover so much about our world?
-
Terence Tao has re-derived $E=mc^2$ here. See also this related Phys.SE post. – Qmechanic Nov 9 '12 at 20:57
A more recent proof by Terence Tao terrytao.wordpress.com/tag/mass-energy-equivalence – John McVirgo Nov 9 '12 at 21:14
Einstein's paper showing the proof should help you. The proof is not hard to grasp. – user53209 Feb 27 '13 at 21:07
This paper by Hecht may answer your questions. Hecht takes great measure to emphasize what Einstein actually did and did not write. – user11266 Feb 27 '13 at 21:48
Related question on Math.SE: math.stackexchange.com/q/836886/11127 – Qmechanic Jun 25 '14 at 12:52
You can find the shortest and easiest derivation of this result in the paper where it was released by Einstein himself (what better reference can you find?) in 1905. It is not the main paper of Special Relativity, but a short document he added shortly afterwards.
A. Einstein, Ist die Trägheit eines Körpers von seinem Energieinhalt abhängig?, Annalen der Physik 18 (1905) 639. A pdf file of the English translation Does the Inertia of a Body Depend upon its Energy-Content? is available here. (hattip: user53209.)
It is a delightful document to read. There are no dramatic references to huge power release nor anything similar. He simply states after the derivation "If a body gives the energy away $L$ in form of radiation, then its mass decreases in an amount $L/V^{2}$ (...) the mass of a body is a measure for its energy content (...) One can not exclude the possibility that, with the bodies whose energy content changes rapidly, for example radium salts, a proof of the theory will be found (...) If the theory adjusts to the facts, then the radiation transports inertia between emitters and absorbers."
Google for that short paper and see the derivation yourself, it is very easy. The Minkowsky four-dimensional spacetime had not yet been incorporated to special relativity, so the equations are formally very simple, easy to follow with little mathematical training.
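For a taste of it: a body at rest emits two light pulses of energy $L/2$ in opposite directions, so it stays at rest. Judged from a frame moving with speed $v$, the transformed radiation carries total energy $\gamma L$ instead of $L$, so the kinetic energy of the body must drop by $$\gamma L - L \approx \frac{1}{2}\frac{L}{V^{2}}v^{2} \qquad (v \ll V),$$ which is exactly what happens if its mass decreases by $L/V^{2}$ (Einstein writes $V$ for the speed of light). This is only a compressed paraphrase, of course; the short paper itself spells out every step.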
-
@Larry Harson, and I doubt you've even read my answer, because if you had, you'd have seen that I make no mention of any proof, but rather I explicitly use the word "derivation", and in the first line. Follow it with your finger, where I say "derivation of this result". – Eduardo Guerras Valera Jan 29 '13 at 2:03
@Larry Harson, please read the Einstein paper, it contains the original, fully correct derivation. Yes, derivation. And that "much" that has been written against the 1905 Einstein papers consists of a bunch of pseudo-scientific journalism, mainly from Nazi morons. – Eduardo Guerras Valera Jan 29 '13 at 14:40
Special Relativity can be (and will be) superseded by new theories, but it is self-consistent, as is the Einstein derivation (again, derivation) of $E=mc^{2}$ of 1905. – Eduardo Guerras Valera Jan 29 '13 at 14:43
@Qmechanic, thanks for the nice edit with the link to the english version. I've just added the link to the original one too. What does that "hattip: userXXXX" mean? – Eduardo Guerras Valera Feb 28 '13 at 3:53
The proper spelling is in two words: hat tip. – Qmechanic Feb 28 '13 at 7:03
Einstein's equation doesn't have a "proof" because it's not a mathematical theorem. It's a physical theory that is overwhelmingly supported by experimental data. So you could say that the "proof" is in the mountains of experimental results that agree with the theory.
To understand Einstein's motivation for developing the theory of relativity, as well as mass-energy equivalence, Wikipedia has an excellent article on the history of relativity.
-
Einstein gave an argument which is summarized on Wikipedia, and also regurgitated on Terence Tao's blog. This answer is not reasonable: physical statements have physical arguments, and these are what people normally mean by "proof" in this context. – Ron Maimon Nov 9 '12 at 21:20
Ron is out of his mind. It does not matter how beautiful a theory may be nor how neat the derivation, the proof is only in the agreement with the physical world. – dmckee Feb 28 '13 at 5:02
I will support @dmckee , since this came up again. A theory in physics cannot be proven, only disproven. It can only be validated if experiments agree with it. Even one solid disagreement disproves a theory. The questioner assumes that physical models are the same as mathematical models which end with the QED, but this is not true. The title is misleading, the content of the question is OK and is answerd by Eduardo. – anna v Feb 28 '13 at 5:26
Yes, Ron is probably out of his mind but he still knows more physics than ten anna vs and four dmckees put together. – Marty Green Feb 28 '13 at 5:28
@MartyGreen Anna V is quite correct. Science doesn't deal with proof. Proof only exists in mathematics and in courtrooms. Even then, it doesn't mean the same thing in both places. Supporting evidence doesn't constitute proof. Evidence accumulates indefinitely while proof connotes finality. – user11266 Feb 28 '13 at 15:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7992007732391357, "perplexity": 1323.3027919495507}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395560.14/warc/CC-MAIN-20160624154955-00135-ip-10-164-35-72.ec2.internal.warc.gz"}
https://zbmath.org/?q=ut%3Aequation+of+optimality | ## Found 1,212 Documents (Results 1–100)
### Direct methods in variational field theory. (English. Russian original)Zbl 07596647
Sib. Math. J. 63, No. 5, 862-867 (2022); translation from Sib. Mat. Zh. 63, No. 5, 1027-1034 (2022).
MSC: 49Jxx 49Nxx 49Kxx
### A global convergent semi-smooth Newton method for semi-linear elliptic optimal control problem. (English)Zbl 07566266
MSC: 65-XX 49-XX
### Optimal control problems governed by fractional differential equations with control constraints. (English)Zbl 1492.49026
MSC: 49K15 90C29 34A08
### On a determination of the boundary function in the initial-boundary value problem for the second order hyperbolic equation. (English)Zbl 07510751
MSC: 35-XX 49-XX
### Continuity regularity of optimal control solutions to distributed and boundary semilinear elliptic optimal control problems with mixed pointwise control-state constraints. (English)Zbl 1485.49029
MSC: 49K20 49N60 35J61
### Asymptotic expansion for the solution of an optimal boundary control problem in a doubly connected domain with different control intensity on boundary segments. (English. Russian original)Zbl 1490.49017
Comput. Math. Math. Phys. 62, No. 2, 218-231 (2022); translation from Zh. Vychisl. Mat. Mat. Fiz. 62, No. 2, 217-231 (2022).
MSC: 49K20 35B40 35B25
### Regularization of the classical optimality conditions in optimal control problems for linear distributed systems of Volterra type. (English. Russian original)Zbl 1484.49008
Comput. Math. Math. Phys. 62, No. 1, 42-65 (2022); translation from Zh. Vychisl. Mat. Mat. Fiz. 62, No. 1, 45-70 (2022).
### A more complete version of a minimax theorem. (English)Zbl 1489.49006
Appl. Anal. Optim. 5, No. 2, 251-261 (2021); addendum ibid 6, No. 2, 195-197 (2022).
### Optimality conditions for variational problems involving distributed-order fractional derivatives with arbitrary kernels. (English)Zbl 1484.49039
MSC: 49K05 26A33
### A modified MSA for stochastic control problems. (English)Zbl 1472.93198
MSC: 93E20 60H30 49M05
### Legendre’s necessary condition for fractional Bolza functionals with mixed initial/final constraints. (English)Zbl 1471.49017
MSC: 49K05 26A33 34A08
### Second-order optimality conditions and regularity of Lagrange multipliers for mixed optimal control problems. (English)Zbl 1473.49029
MSC: 49K20 35J25 35B65
### First order necessary optimal conditions in Gursat-Darboux stochastic systems. (Russian. English summary)Zbl 1470.93166
MSC: 93E20 49K20
### Improving semigroup bounds with resolvent estimates. (English)Zbl 07367384
MSC: 47D03 44A10 49K99
### Numerical solution of the problem of variational data assimilation to restore heat fluxes and initial state for the ocean thermodynamics model. (English)Zbl 1471.65059
MSC: 65K10 86A22
### Stochastic optimal transportation. Stochastic control with fixed marginals. (English)Zbl 1471.49001
SpringerBriefs in Mathematics. Singapore: Springer (ISBN 978-981-16-1753-9/pbk; 978-981-16-1754-6/ebook). xi, 121 p. (2021).
### Partial derivative with respect to the measure and its application to general controlled mean-field systems. (English)Zbl 1471.93275
MSC: 93E20 60H10 49K45
### Regularity of solutions to a distributed and boundary optimal control problem governed by semilinear elliptic equations. (English)Zbl 1460.49028
MSC: 49N60 49K20 35J61
### Optimal control theory. The variational method. (English)Zbl 1469.49001
Singapore: Springer (ISBN 978-981-33-6291-8/hbk; 978-981-33-6294-9/pbk; 978-981-33-6292-5/ebook). xix, 344 p. (2021).
### Generalized conformable variational calculus and optimal control problems with variable terminal conditions. (English)Zbl 1484.49042
MSC: 49K15 26A24
### Null-controllability and control cost estimates for the heat equation on unbounded and large bounded domains. (English)Zbl 1483.35300
Kerner, Joachim (ed.) et al., Control theory of infinite-dimensional systems. Contributions of the workshop, Fernuniversität Hagen, Hagen, Germany, January 10–12, 2018. Cham: Birkhäuser. Oper. Theory: Adv. Appl. 277, 117-157 (2020).
### A multi-delay model for pest control with awareness induced interventions – Hopf bifurcation and optimal control analysis. (English)Zbl 1461.49036
MSC: 49K35 37G15 93C10
### RAP-method (random perturbation method) for minimax $$G$$-filter. (English)Zbl 1480.49012
MSC: 49J35 49K35 15B52
### Near-optimal control problems for forward-backward regime-switching systems. (English)Zbl 1460.93110
MSC: 93E20 60H10 60J27
MSC: 91G10
### Quantum Hamilton equations from stochastic optimal control theory. (English)Zbl 1453.49014
Grecksch, Wilfried (ed.) et al., Infinite dimensional and finite dimensional stochastic equations and applications in physics. Hackensack, NJ: World Scientific. 213-250 (2020).
### Application of the fast automatic differentiation technique for solving inverse coefficient problems. (English. Russian original)Zbl 1455.80007
Comput. Math. Math. Phys. 60, No. 1, 15-25 (2020); translation from Zh. Vychisl. Mat. Mat. Fiz. 60, No. 1, 18-28 (2020).
### A fuzzy system based active set algorithm for the numerical solution of the optimal control problem governed by partial differential equation. (English)Zbl 1447.93193
MSC: 93C42 93C20 49J20
### Necessary conditions for stochastic optimal control problems in infinite dimensions. (English)Zbl 1441.93337
MSC: 93E20 49J53 60H15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3215770721435547, "perplexity": 8275.694848578272}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00355.warc.gz"}
http://math.stackexchange.com/users/43438/joseph-garvin?tab=activity&sort=comments | Joseph Garvin
Nov4 comment Concrete Mathematics - Stability of definitions in the repertoire method Brilliant! Never occurred to me to plug the closed form back into the recurrence to convince myself, but that makes sense to try since that's where the structure is coming from. Thanks :-) Nov4 comment Repertoire Method Clarification Required ( Concrete Mathematics ) @HansLundmark: I can see that it will be a combination of those, but I don't understand how we know A, B, and C will always be the same, which I have opened as a new question. Jan27 comment How to prove $\gcd(a,\gcd(b, c)) = \gcd(\gcd(a, b), c)$? Actually #4 is OK, it does follow from the definition if you're using the Bezout's identity version. Jan27 comment Blending values on the number line Were the number lines drawn by hand or is there a plotting tool for these? Jan26 comment How to prove $\gcd(a,\gcd(b, c)) = \gcd(\gcd(a, b), c)$? How do we know c divides a in the third sentence? Jan26 comment How to prove $\gcd(a,\gcd(b, c)) = \gcd(\gcd(a, b), c)$? #4 seems false. How does it follow from the definition of GCD? If d is a prime factor X common to both a and gcd(b,c), and e is a different prime factor Y common to both a and gcd(b,c), then e will not divide d or vice versa, because they're prime. Jan22 comment Partition minimizing maximum of Euler's totient function across terms It maybe a great idea. I read that the ith primorial multiplied by the ith prime is sparsely totient, and used that to quickly build a list (not all sparse totients, but for rough minimization may be OK). I tried building the partition for $2^{64}$ in the style of Euclid's algorithm for GCD -- I took the biggest number in the list < $2^{64}$ and took the remainder of dividing by it, then took the biggest sparse totient in the list under the remainder and took the remainder of dividing by it, etc. etc. Turns out a linear combination of those sparse totients exactly partitioned it. Coincidence? Jan10 comment Quick way to iterate multiples of a prime N that are not multiples of primes X, Y, Z, …? Ah, that makes more sense, thanks. Jan8 comment Quick way to iterate multiples of a prime N that are not multiples of primes X, Y, Z, …? If I understand right, this computes the size of the set of numbers I want to iterate, but it doesn't help with iterating or computing e.g. the 5th number, or am I not thinking hard enough yet? Dec18 comment Primes in arithmetic progression Is it common to use plain parens to represent gcd? I'm so used to reading those as tuples. Dec16 comment Progressions with variable density that can be described in constant space? Yes, much. Thanks for your patience explaining :) Dec16 comment Progressions with variable density that can be described in constant space? Oooooh, that makes much more sense. Dec16 comment Progressions with variable density that can be described in constant space? Actually my confusion might stem from what you mean by $x_k$ and $x_k + 1$. I interpret $x_k$ to be the 0 or 1 in the kth bit, where k is offset from the radix point of the most significant bit. Did you mean $x_{k+1}$ instead of $x_k + 1$? Because $x_k$ would just be 1 or 0, which when 1 is added would be two? Or maybe by addition you meant string concatenation? I suck at notation :( Dec16 comment Progressions with variable density that can be described in constant space? Your edit helps a bit, thanks. 
So it sounds like you're saying that if you take the most significant locked bit, $x_k$, you can keep adding $x_k + 1$ and get numbers satisfying the constraint that are evenly spaced. But isn't the neglecting the unfixed bits that are below $x_k$? Why don't we get variable density from those? I'm actually unsure if your conclusion is answering the question or saying it can't be answered -- my test is to eliminate possibilities, so failing the test would mean a proof that fixing bits works, but in the comments on the question you said fixing them doesn't? Dec16 comment Progressions with variable density that can be described in constant space? I'm probably being dense, but I don't see how your first sentence could be true. If the number is 32-bits for example, fixing bits 3, 5, and 7 to particular values doesn't put any constraint in the "leading digits", that is, the leading bits, 8-32. Unless by first digits you mean the least significant bits, but it doesn't impose any constraint on bits 0-2 either, so I'm still not sure what you mean. Is k indexing the total number of bits, or is it indexing only the bits that we've locked? Dec16 comment Progressions with variable density that can be described in constant space? @MarioCarneiro: I've added a constraint that I think gets at what I'm going for, getting the Nth element easily. I don't totally follow your explanation for why a set of fixed bits doesn't work -- are you saying that my specific example of constraining that the 3rd/5th/7th bit wouldn't work, or are you just saying it's not always true that any subset of fixed bits will work because if you only pick a chunk of adjacent digits at the beginning/end you only get multiples or all numbers below a threshold? Dec16 comment Progressions with variable density that can be described in constant space? @MarioCarneiro: That may work. Makes me wonder if you can just say, all the numbers where some subset of the bits are a fixed, e.g. all numbers where the 3rd bit is 1, the 5th bit is 1, and 7th bit is 1. Dec16 comment Progressions with variable density that can be described in constant space? @MarioCarneiro: My ultimate goal does involve using this in a computer program, so leading digits are a bit problematic to extract, unless it's the binary leading digit, but that's always 1 if you consider the 'leading digit' to be the most significant bit, or if you consider it the left most bit in the word just splits the space evenly in half, the numbers below $\frac{2^n - 1}{2}$ and the numbers equal and above. Sorry, I realize I'm springing more details than are in my question, I was trying to capture the essence and keep it succinct, obviously didn't succeed :P Dec16 comment Progressions with variable density that can be described in constant space? @MarioCarneiro: Basically I'm trying to stick to sequences that can be generated from a bounded number of starting bits. Put another way, I'm trying to find sequences that have something like a closed form representation I can work with algebraically and reason about, rather than somebody just dropping a manually figured out list of numbers lacking any generality. Dec16 comment Progressions with variable density that can be described in constant space? @MarioCarneiro: Couldn't you get away with just saying all numbers with a leading 8? 
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8529365062713623, "perplexity": 505.11172662940976}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042989891.18/warc/CC-MAIN-20150728002309-00010-ip-10-236-191-2.ec2.internal.warc.gz"} |
http://mathonline.wikidot.com/the-alternating-series-test | The Alternating Series Test
# The Alternating Series Test
So far we have looked at the following tests to determine if a series was convergent or divergent:
We haven't been able to use any of these tests to determine if a negative or partially negative series was convergent or divergent though. The following test will allow us to do so.
Theorem (The Alternating Series Test): Let $\{ a_n \}$ be a sequence. If for $n$ sufficiently large, $a_na_{n+1} < 0$, $\mid a_{n+1} \mid ≤ \mid a_n \mid$, and $\lim_{n \to \infty} a_n = 0$, then the series $\sum_{n=1}^{\infty} a_n$ is convergent.
We note that the alternating series test has three requirements for $n$ sufficiently large. First, consecutive terms must alternate in sign. Secondly, the terms must decrease in absolute value. And lastly, the sequence of terms must approach 0. Under these conditions we can conclude that the series $\sum_{n=1}^{\infty} a_n$ is convergent.
• Proof of Theorem: Let $a_1 > 0$. Since $a_na_{n+1} < 0$ we get that $a_{2n+1} > 0$ and $a_{2n} < 0$ $\forall n \in \mathbb{N}$. Now let $s_n = a_1 + a_2 + ... + a_n$ denote the $n^{\mathrm{th}}$ partial sum of the series.
• Now since the terms are decreasing in size it follows that $a_{2n+1} ≥ -a_{2n+2}$ and so $s_{2n+2} = s_{2n} + a_{2n+1} + a_{2n+2} ≥ s_{2n}$. So the even partial sums $\{ s_{2n} \}$ form an increasing sequence.
• Similarly since the terms are decreasing in size it follows that $-a_{2n} ≥ a_{2n+1}$, and so $s_{2n+1} = s_{2n-1} + a_{2n} + a_{2n+1} ≤ s_{2n-1}$ and so the odd partial sums form a decreasing sequence $\{ s_{2n-1} \}$, and so:
(1)
$$s_2 ≤ s_4 ≤ ... ≤ s_{2n} ≤ s_{2n-1} ≤ s_{2n-3} ≤ ... ≤ s_3 ≤ s_1$$
• So $s_2$ is a lower bound for the sequence $\{ s_{2n-1} \}$ and $s_1$ is an upper bound for the sequence $\{ s_{2n} \}$; both sequences are monotonic and bounded, and so $\lim_{n \to \infty} s_{2n-1} = L_1$ and $\lim_{n \to \infty} s_{2n} = L_2$ by the monotonic sequence theorem.
• Now since we were given that $\lim_{n \to \infty} a_n = 0$ and we know that $a_{2n} = s_{2n} - s_{2n-1}$, we get $0 = \lim_{n \to \infty} a_{2n} = \lim_{n \to \infty} s_{2n} - \lim_{n \to \infty} s_{2n-1} = L_2 - L_1$, which implies $L_1 = L_2$. So let $L = L_1 = L_2$; since the even and odd partial sums both converge to $L$, we have $\lim_{n \to \infty} s_n = L$, and therefore $\sum_{n=1}^{\infty} a_n$ is convergent to $L$.$\blacksquare$
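A useful remark that follows from the same squeezing of the even and odd partial sums: for $n$ sufficiently large, the error made in truncating the series is bounded by the size of the first omitted term, that is, $\mid L - s_n \mid ≤ \mid a_{n+1} \mid$.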
We note that a similar proof works if the first term of the series is negative, that is $a_1 < 0$. We will now look at some examples applying the alternating series test.
## Example 1
Using the alternating series test determine if $\sum_{n=1}^{\infty} \frac{(-1)^n}{n}$ is convergent or divergent.
We must first check to see if all of the conditions for the alternating series test are met before applying it. We note that $a_na_{n+1} < 0$ since consecutive terms alternate in sign. We need to check whether $\mid a_{n+1} \mid ≤ \mid a_n \mid$ for $n$ sufficiently large.
We note that $\mid a_{n+1} \mid = \biggr \rvert \frac{(-1)^{n+1}}{n+1} \biggr \rvert = \frac{1}{n+1}$ and that $\mid a_n \mid = \biggr \rvert \frac{(-1)^{n}}{n} \biggr \rvert = \frac{1}{n}$. We know that $\mid a_{n+1} \mid = \frac{1}{n+1} ≤ \frac{1}{n} = \mid a_n \mid$ so these terms are decreasing in size.
Lastly we note that $\lim_{n \to \infty} \frac{-1}{n} = 0$. So by the alternating series test, $\sum_{n=1}^{\infty} \frac{(-1)^n}{n}$ is convergent. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.9980131983757019, "perplexity": 89.88704654290703}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864364.38/warc/CC-MAIN-20180622065204-20180622085204-00113.warc.gz"} |
https://byjus.com/rs-aggarwal-solutions/rs-aggarwal-class-6-solutions-chapter-14-constructions-exercise-14-1/ | # RS Aggarwal Solutions Class 6 Ex 14A
Question 1:
Using a pair of compasses, construct each of the following angles:
(i) 60°
(ii) 120°
(iii) 90°
Solution:
(i)
Steps for construction:
1. Draw a ray QP.
2. With Q as the centre and any convenient radius, draw an arc cutting QP at N.
3. With N as the centre and the same radius as before, draw another arc to cut the previous arc at M.
4. Draw QM and produce it to R.
$\angle$PQR is the required angle of 60°.
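A quick justification: QN, NM and QM all equal the common radius used in the construction, so triangle QNM is equilateral and $\angle$PQR = $\angle$NQM = 60°.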
(ii)
Steps for construction:
1. Draw a ray QP.
2. With Q as the centre and any convenient radius, draw an arc cutting QP at N.
3. With N as the centre and the same radius, cut the arc at A. Again, with A as the centre and the same radius, cut the arc at M.
4. Draw QM and produce it to R.
$\angle$PQR is the required angle of 120°.
(iii)
Steps for construction:
1. Draw a line PX.
2. Take a point Q on PX. With Q as the centre and any convenient radius, draw an arc cutting PX at M and N.
3. With N as the centre and radius more than half of MN, draw an arc.
4. With M as the centre and the same radius as before, draw another arc to cut the previous arc at W.
5. Draw QW and produce it to R.
$\angle$PQR is the required angle of 90°.
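A quick justification: M and N are equidistant from Q, and W is equidistant from M and N, so QW is perpendicular to PX at Q, which gives the right angle.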
Question 2:
Draw an angle of 60°, using a pair of compasses. Bisect it to make an angle of 30°.
Solution:
Construction steps:
1. Draw a ray QP.
2. With Q as the centre and any convenient radius, draw an arc cutting QP at N.
3. With N as the centre and radius same as before, draw another arc to cut the previous arc at M.
4. Draw QM and produce it to R.
$\angle$PQR is the required angle of 60°.
5. With M as the centre and radius more than half of MN, draw an arc.
6. With N as the centre and radius same as in step (5), draw another arc, cutting the previously drawn arc at point X.
7. Draw QX and produce it to point S.
Ray QS is the bisector of $\angle$PQR.
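A quick justification: QM = QN and MX = NX by construction, so triangles QMX and QNX are congruent (SSS) and ray QS divides the 60° angle $\angle$PQR into two equal angles of 30° each.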
Question 3:
Draw an angle of 45°, using a pair of compasses.
Solution:
Construction steps:
1. Draw a line PR.
2. Take a point Q on PR. With Q as the centre and any convenient radius, draw an arc cutting PR at M and N.
3. With N as the centre and radius more than half of MN, draw an arc.
4. With M as the centre and the same radius as before, draw another arc to cut the previous arc at X.
5. Draw QX, meeting the arc at Z, and produce it to W.
6. With Z as the centre and radius more than half of ZN, draw an arc.
7. With N as the centre and the same radius as in step (6), draw another arc, cutting the previously drawn arc at a point Y.
8. Draw QY and produce it to point S.
$\angle$PQS is the required angle of 45°.
Question 4:
Use a pair of compasses and construct the following angles:
(i) 150°
(ii) 15°
(iii) 135°
(iv) $22\frac{1}{2}^{\circ}$
(v) 105°
(vi) 75°
(vii) $67\frac{1}{2}^{\circ}$
(viii) 45°
Solution:
(i)
Steps for construction:
1. Draw a line XY and take a point O.
2. With O as the centre and any suitable radius, draw an arc cutting XY at M and N.
3. With N as the centre and the same radius, draw an arc cutting MN at R.
4. With R as the centre and the same radius as before, draw another arc cutting MN at Q.
5. With Q as the centre and radius more than half of MQ, draw an arc.
6. With M as the centre and the same radius draw another arc cutting the previously drawn arc at P.
7. Join PO.
Therefore, $\angle$XOP = 150°
(ii)
Steps for construction:
1. Draw a ray XY.
2. With X as the centre and any convenient radius, draw an arc cutting XY at M.
3. With M as the centre and the same radius, draw an arc cutting the previously drawn arc at N.
4. Draw XN and produce it to B.
5. Draw the bisector XA of $\angle$YXB.
6. Again, draw the bisector XZ of $\angle$YXA.
Therefore, $\angle$YXZ = 15o
(iii)
Steps for construction:
1. Draw a line XY and take a point A.
2. With A as the centre and any convenient radius, draw an arc cutting XY at M and N.
3. With N as the centre and the same radius, draw an arc.
4. With M as the centre and the same radius as before, draw another arc cutting the previously drawn arc at R.
5. Draw RA.
6. Draw the bisector ZA of $\angle$YAR.
Therefore, $\angle$XAZ = 135o
(iv)
Steps for construction:
1. Draw a line XY.
2. Take a point A on XY. With A as the centre and any convenient radius, draw an arc cutting XY at M and N.
3. With N as the centre and radius more than half of MN, draw an arc.
4. With M as the centre and the same radius as before, draw another arc to cut the previous arc at P.
5. Draw PA meeting the arc at C. Produce it to E.
6. With C as the centre and radius more than half of CN, draw an arc.
7. With N as the centre and the same radius as in step (6), draw another arc cutting the previously drawn arc at a point Q.
8. Draw AQ and produce it to point F.
9. Draw the bisector ZA of $\angle$XAF.
Therefore, $\angle$XAZ = 22.5o
(v)
Steps for construction:
1. Draw a line XY.
2. Take a point O on XY. With O as the centre and any convenient radius, draw an arc cutting XY at M and N. Draw arcs with the same radius cutting MN at P and Q.
3. With N as the centre and the radius more than half of MN, draw an arc.
4. With M as the centre and the same radius as before, draw another arc to cut the previous arc at B.
5. Draw BO meeting the arc at E.
6. With Q as the centre and radius more than half of QE, draw an arc.
7. With E as the centre and the same radius as in step (6), draw another arc cutting the previously drawn arc at a point A.
8. Draw AO and produce it to point Z.
Therefore, $\angle$XOZ = 105o
(vi)
Steps for construction:
1. Draw a line XY.
2. Take a point O on XY. With O as the centre and any convenient radius, draw an arc cutting XY at M and N. Draw arcs with the same radius cutting MN at P.
3. With N as the centre and radius more than half of MN, draw an arc.
4. With M as the centre and the same radius as before, draw another arc to cut the previous arc at R.
5. Draw RO meeting the arc at E. Produce it to A.
6. With P as the centre and radius more than half of PE, draw an arc.
7. With E as the centre and the same radius as in step (6), draw another arc cutting the previously drawn arc at the point S.
8. Draw OS and produce it to point Z.
Therefore, $\angle$XOZ = 75o
(vii)
Steps for construction:
1. Draw a line XY and take a point O.
2. With O as the centre and any convenient radius, draw an arc cutting XY at M and N.
3. With N as the centre and the same radius, draw an arc.
4. With M as the centre and the same radius as before, draw another arc cutting the previously drawn arc at Q.
5. Draw QO.
6. Draw PO, the bisector of $\angle$YOQ.
7. Draw ZO, the bisector of $\angle$POX.
Therefore, $\angle$XOZ = 67.5o
(viii)
Steps for construction:
1. Draw a line PR.
2. Take a point Q on PR. With Q as the centre and any convenient radius, draw an arc cutting PR at M and N.
3. With N as the centre and radius more than half of MN, draw an arc.
4. With M as the centre and the same radius as before, draw another arc to cut the previous arc at X.
5. Draw QX, meeting the arc at Z. Produce it to W.
6. With Z as the centre and radius more than half of ZN, draw an arc.
7. With N as the centre and the same radius as in step (6), draw another arc cutting the previously drawn arc at a point Y.
8. Draw QY and produce it to point S.
$\angle$PQS is the required angle of 45o.
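As a summary (ours, not part of the textbook solution), all eight angles come from the basic 60o and 90o constructions by addition and repeated bisection:
$150 = 60 + 90$, $135 = 90 + 45$, $105 = \frac{90 + 120}{2}$, $75 = \frac{60 + 90}{2}$, $67\frac{1}{2} = \frac{135}{2}$, $45 = \frac{90}{2}$, $22\frac{1}{2} = \frac{45}{2}$, $15 = \frac{30}{2}$ (all in degrees).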
Question 5:
Draw a rectangle whose two adjacent sides are 5 cm and 3.5 cm. Make use of a pair of compasses and a ruler only.
Solution:
Construction steps:
1. Draw a ray AX.
2. With A as the center, cut the ray AX at B such that AB is equal to 5 cm.
3. With B as the center and any convenient radius, draw an arc cutting AX at M and N.
4. With N as the center and radius more than half of MN, draw an arc.
5. With M as the center and the same radius as before, draw another arc to cut the previous arc at Y.
6. Draw BY and produce it to W.
7. With B as the center and a radius of 3.5 cm, cut ray BW at point C.
8. With C as the center and a radius of 5 cm, draw an arc on the right side of BC.
9. With A as the center and a radius of 3.5 cm, draw an arc cutting the previous arc at D.
ABCD is the required rectangle.
Question 6:
Draw a square, each of whose side is 5 cm. Use a pair of compasses and a ruler in your construction.
Solution:
Construction steps:
1. Draw a ray AX.
2. With A as centre cut the ray AX at B such that AB = 5 cm.
3. With B as centre and any convenient radius, draw an arc cutting AX at M and N.
4. With N as centre and radius more than half of MN draw an arc.
5. With M as centre and the same radius as before, draw another arc to cut the previous arc at Y.
6. Join BY and produce it to W.
7. With B as centre and radius 5 cm cut ray BW at point C.
8. With C as centre and radius 5 cm draw an arc on right side of BC.
9. With A as centre and radius 5 cm draw an arc cutting the previous arc at D.
ABCD is the required square.
https://indico.cern.ch/event/539093/contributions/2568040/ | # XII Workshop on Particle Correlations and Femtoscopy
Jun 12 – 16, 2017
Other Institutes
Europe/Zurich timezone
## Lambda-Kaon Femtoscopy in Pb-Pb Collisions at 2.76 TeV with ALICE
Jun 14, 2017, 5:20 PM
20m
#### Other Institutes
Nikhef Science Park 105 1098XG Amsterdam The Netherlands
### Speaker
Jesse Thomas Buxton (Ohio State University (US))
### Description
Lambda-Kaon Femtoscopy in Pb-Pb Collisions at $\sqrt{s_{NN}}$ = 2.76 TeV with ALICE
We present results from a femtoscopic analysis of Lambda-Kaon correlations in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV by the ALICE experiment at the LHC. All pair combinations of $\Lambda$ and $\bar{\Lambda}$ with K$^{+}$, K$^{-}$ and K$^{0}_{S}$ are analyzed. The femtoscopic correlations are the result of strong final-state interactions, and are fit with a parametrization based on a model by R. Lednicky and V. L. Lyuboshitz[1]. This allows us to both characterize the emission source and measure the scattering parameters for the particle pairs. We observe a large difference in the $\Lambda$-K$^{+}$ ($\bar{\Lambda}$-K$^{-}$) and $\Lambda$-K$^{-}$ ($\bar{\Lambda}$-K$^{+}$) correlations in pairs with low relative momenta (k* $<$ 100 MeV). Additionally, the average of the $\Lambda$-K$^{+}$ ($\bar{\Lambda}$-K$^{-}$) and $\Lambda$-K$^{-}$ ($\bar{\Lambda}$-K$^{+}$) correlation functions is consistent with our $\Lambda$-K$^{0}_{S}$ ($\bar{\Lambda}$-K$^{0}_{S}$) measurement. The results suggest an effect arising from different quark-antiquark interactions in the pairs, i.e. $\rm s\bar{s}$ in $\Lambda$-K$^{+}$ ($\bar{\Lambda}$-K$^{-}$) and $\rm u\bar{u}$ in $\Lambda$-K$^{-}$ ($\bar{\Lambda}$-K$^{+}$).
[1] R. Lednicky and V.L. Lyuboshitz, Sov. J. Nucl. Phys. 35, 770 (1982)
List of tracks: Femtoscopy at RHIC and LHC: links to QGP physics
### Primary author
Jesse Thomas Buxton (Ohio State University (US))
http://mathoverflow.net/users/1345/ian-agol?tab=summary | Ian Agol
Reputation
35,108
Badges: 1 gold, 80 silver, 165 bronze
Impact
~641k people reached
60 Why is BG infinite dimensional for G finite?
39 Question on the irrationality of $e$
39 Not especially famous, long-open problems which anyone can understand
34 Can a subset of the plane have nontrivial $H_2$ or $\pi_2$?
34 Parallelizability of the Milnor's exotic spheres in dimension 7
### Reputation (35,108)
+80 Kazhdan's property (T) vs. residual finiteness
+10 Minimum dilatation pseudo-anosovs on non-orientable surfaces
+10 Why is BG infinite dimensional for G finite?
+10 Compactification theorem for differentiable manifolds?
### Questions (13)
34 Minimal volume of 4-manifolds
18 "The" random tree
14 Topological version of Bogomolov's question
13 Right triangle with edge lengths equal to regular unit polygon edge lengths
13 $SL_2 R$ Casson invariant?
### Tags (201)
1k gt.geometric-topology × 160
688 gr.group-theory × 106
475 at.algebraic-topology × 62
458 dg.differential-geometry × 56
446 3-manifolds × 54
389 hyperbolic-geometry × 57
375 knot-theory × 57
199 reference-request × 35
189 mg.metric-geometry × 27
167 riemannian-geometry × 22
### Accounts (6)
MathOverflow: 35,108 rep
Mathematics: 546 rep
Parenting: 101 rep
Stack Overflow: 101 rep
Mathematica: 101 rep
https://chapel-lang.org/docs/latest/primers/LAPACKlib.html | # LAPACK
View LAPACKlib.chpl on GitHub
Example usage of the LAPACK module in Chapel.
This particular file demonstrates how to use the gesv procedure with Chapel's arrays. The gesv procedure solves A*X = B for X given both A and B, where A is a square matrix.
Here, we create A and X, then compute B with a matrix multiplication, and show that the result of the gesv procedure is the same as our synthesized X array (within a small margin of error for floating point).
To compile a program with LAPACK, you may need to add some additional flags depending on how LAPACK was installed on your system:
chpl -I$PATH_TO_LAPACK_INCLUDE_DIR \
     -L$PATH_TO_LAPACK_BINARIES \
     -lgfortran
See the LAPACK module documentation for more information on compiling.
Start by using the LAPACK module to gain access to the gesv function. The Random module will be used to fill arrays with random values.
use LAPACK;
use Random;
Here we set up several config consts that represent, in order:
• N, the dimension for the square array A
• K, the second dimension for arrays X and B
• epsilon, the margin of error for success
• seed, a seed for random number generation
config const N = 2;
config const K = 1;
config const epsilon = 1e-13;
config const seed = 41;
Create the arrays A, X, and B. Fill A and X with random values.
var A : [1..N, 1..N] real;
fillRandom(A, seed);
var X : [1..N, 1..K] real;
fillRandom(X, seed);
var B : [1..N, 1..K] real;
Matrix multiply A*X, store result in B
for i in 1..N do
for j in 1..K do
for k in 1..N do
B[i,j] += A[i,k] * X[k,j];
writeln("Matrix A:\n", A, "\n");
writeln("Matrix X:\n", X, "\n");
writeln("Matrix B:\n", B, "\n");
Copy original arrays into temporary arrays.
Input and work array. Becomes garbage.
var WorkA = A;
Input and output. Becomes result of solution (X)
var WorkBX = B;
Output array. Stores pivot indices
var ipiv : [1..N] c_int;
Call the gesv function to solve for X. Note that Chapel arrays are row-major order by default.
var info = gesv(lapack_memory_order.row_major, WorkA, ipiv, WorkBX);
LAPACK returns an error code to indicate a failure.
if info != 0 {
writeln("There was an error!");
if info < 0 {
writeln("Argument ", -info, " was incorrect.");
} else {
writeln("The matrix A is a singular matrix. U", (info,info), " is zero");
}
}
writeln("gesv result for X:\n", WorkBX, "\n");
The arrays may not be identical due to floating point errors. Use a small value as a margin of error to measure success.
const closeEnough = && reduce [d in (WorkBX - X)] abs(d) < epsilon;
if closeEnough then
writeln("SUCCESS");
else
writeln("FAILURE");
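The && reduce expression above folds all of the elementwise comparisons into one boolean. A hand-rolled serial equivalent might look like this (a sketch of ours, not part of the original primer):

// Serial equivalent of the && reduce over the promoted comparison:
var ok = true;
for d in (WorkBX - X) do
  if abs(d) >= epsilon then
    ok = false; // a single offending entry is enough to fail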
https://blogs.mathworks.com/cleve/2015/03/02/triple-precision-accumlated-inner-product/?s_tid=blogs_rc_2 | # Triple Precision Accumulated Inner Product
Single and double precision are combined to facilitate a triple precision accumulated inner product.
#### Iterative refinement
In my previous post on iterative refinement I showed the need for an inner product routine that is more accurate than double precision. This post is about such a function.
#### Example with double precision
The example I am going to use is contrived so that the first and third terms in the inner product exactly cancel each other, leaving the much smaller second term to arise from the ashes.
format long e
x = [1 1/3 1]'
y = [1 3e-9 -1]'
x =
1.000000000000000e+00
3.333333333333333e-01
1.000000000000000e+00
y =
1.000000000000000e+00
3.000000000000000e-09
-1.000000000000000e+00
Computing the inner product with a conventional MATLAB statement shows the intended difficulty.
dot2p = x'*y
dot2p =
1.000000082740371e-09
The result should be 1.000000000000000e-09. We're getting only about half of the significant digits correct.
Of course, the problem is that the second intermediate sum is
s2 = 1 + 1/3*3e-9
s2 =
1.000000001000000e+00
That's OK in decimal, but not in binary. There is not enough room in one double precision floating point number to store the bits that are going to be needed when that leading one is cancelled by the third term in the sum.
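Just how much is lost (this illustration is ours, not from the original post): the spacing of double precision numbers near 1 is eps(1) = 2^-52, about 2.2e-16, so roughly seven of the sixteen significant digits of the small term fall off the end when it is added to 1.

eps(1)
(1 + 1e-9) - 1

ans =
2.220446049250313e-16
ans =
1.000000082740371e-09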
#### Triple precision
I do not have a full blown triple precision arithmetic package by any means. I have just enough to do this one task. Here is the basic idea. A double precision floating point number has 14 hexadecimal digits in its fraction. I can use single and double to break a double into 6 high order hex digits and 8 low order hex digits, like this.
format hex
x = 1/3
xhi = double(single(x))
xlo = x - xhi
x =
3fd5555555555555
xhi =
3fd5555560000000
xlo =
be45555556000000
Two doubles with more than half of their low order bits equal to zero can be multiplied together with no roundoff error. For example
pihi = double(single(pi))
pio3hi = xhi*pihi
pihi =
400921fb60000000
pio3hi =
3ff0c1524860a920
That trailing zero in pio3hi is an indication that the result is exact.
Additions are not exact when the two numbers differ by several orders of magnitude. This fact will eventually be the limiting factor of our inner product routine.
#### dot3p
My inner product routine is both accumulated, which means it uses extra precise arithmetic, and extended, which means an extra scalar is added to the result using the extra precision. You can download this function from here.
dbtype dot3p
1 function s = dot3p(x,y,s)
2 % DOT3P s = dot3p(x,y,s) Triple precision extended inner product.
3 % s = x'*y + s for vectors x and y and scalar s.
4
5 shi = double(single(s));
6 slo = s - shi;
7 for k = 1:length(x)
8 xhi = double(single(x(k)));
9 xlo = x(k) - xhi;
10 yhi = double(single(y(k)));
11 ylo = y(k) - yhi;
12 tmp = xhi*yhi;
13 zhi = double(single(tmp));
14 zlo = tmp - zhi + xhi*ylo + xlo*yhi + xlo*ylo;
15
16 tmp = shi + zhi;
17 del = tmp - shi - zhi;
18 shi = double(single(tmp));
19 slo = tmp - shi + slo + zlo - del;
20 end
21 s = shi + slo;
#### Example with triple precision
Let's run my example with dot3p in the debugger and look at some intermediate results.
% dbstop 16 dot3p
% dbstop 20 dot3p
% dot3p(x,y,0)
The variables shi and slo carry the sum in triple precision. The first time through the loop there are no roundoff errors, and shi and slo are set to 1.0 and 0.0.
Let's look at the second pass through the loop. Halfway through, at statement 16.
K>> format hex
K>> xhi,xlo,yhi,ylo,tmp,zhi,zlo
xhi =
3fd5555560000000
xlo =
be45555556000000
yhi =
3e29c511e0000000
ylo =
bc7e2df108000000
tmp =
3e112e0bf341b0a0
zhi =
3e112e0c00000000
zlo =
bc97d9296b9a0d85
K>>
K>> format long e
K>> xhi,xlo,yhi,ylo,tmp,zhi,zlo
xhi =
3.333333432674408e-01
xlo =
-9.934107481068821e-09
yhi =
3.000000026176508e-09
ylo =
-2.617650809219019e-17
tmp =
1.000000038527825e-09
zhi =
1.000000082740371e-09
zlo =
-8.274037106125165e-17
We can see that xhi is the first six hex digits of 1/3 and xlo is the remaining digits. Similarly, yhi is the first six hex digits of 3e-9 and ylo is the remaining digits. zhi is the first six hex digits of xhi*yhi and zlo is a crucial correction term.
Stopping at the end of the second pass through the loop, at statement 20.
K>> format hex
K>> tmp, del, shi, slo
tmp =
3ff000000044b830
del =
0000000000000000
shi =
3ff0000000000000
slo =
3e112e0be826d694
K>>
K>> format long e,
K>> tmp, del, shi, slo
tmp =
1.000000001000000e+00
del =
0
shi =
1
slo =
9.999999999999999e-10
tmp is 1.0 plus some of the bits of 1.0e-9. del is zero because it is not needed in this example. It is involved when the terms vary over an even wider range. shi is exactly 1.0, which is the high order part of the evolving sum. And slo has become 1.0e-9 to full double precision accuracy.
On the third time through the loop there will be no roundoff errors, shi will be completely cancelled, and slo will bear the full responsibility for providing the final answer.
Of course, this example is contrived and unusual. Ordinarily, we can expect some cancellation (otherwise there would be no need for an accumulated inner product), with the high order part losing at least a few digits and the low order part filling them in.
#### residual3p
With a dot product in hand, it is easy to write the residual function. Here the extended feature is essential because we expect extreme, if not total, cancellation when the right hand side is subtracted from the dot product.
type residual3p
function r = residual3p(A,x,b)
% RESIDUAL3p Triple precision residual, A*x - b.
% r = residual3p(A,x,b) for matrix A, vectors x and b.
m = size(A,1);
r = zeros(m,1);
for k = 1:m
r(k) = dot3p(A(k,:),x,-b(k));
end
end
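One possible way to exercise residual3p (our sketch, not from the post): solve a badly conditioned system and compare the ordinary double precision residual with the extra precise one; the triple precision version suffers less cancellation when A*xhat is very close to b.

A = hilb(8); x = ones(8,1); b = A*x;
xhat = A\b;
r2 = A*xhat - b;             % ordinary double precision residual
r3 = residual3p(A, xhat, b); % extra precise residual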
https://space.meta.stackexchange.com/questions/1233/what-fraction-of-my-questions-have-accepted-answers-how-can-i-find-out | # What fraction of my questions have accepted answers? How can I find out?
I have a lot of questions here and a great majority have answers, and for a great majority of those an answer has been accepted.
Is it possible for someone to figure out what fraction of my questions have accepted answers without going through the 44 pages of questions and counting?
Is it then possible to explain the procedure to me as if I were a child? I'm not familiar with database queries; anything besides python is a real challenge for me.
You can use the advanced search features of the normal site - no need for fancy database queries. It's not so easy to find, but there is a little "Advanced Search Tips" link whenever you search, and some of these features are relevant for you.
First you go to your profile on the main site. That's here. As you can see the searchbar is already filled with your user-id. No need to search for that somewhere. The searchbar reads user:12102, which will give you all posts from you.
Next we want to limit this down to questions. There is a handy way to do this: just add "is:question" or in short "is:q" to the searchbar. Like here (1296 posts from you are questions - impressive!).
Next we want to limit this down to all those with an accepted answer: "hasaccepted:yes" => user:12102 hasaccepted:yes (912 of your questions have an accepted answer).
Together those two search queries mean that you have accepted an answer in 912 / 1296 ~ 70% of all cases where you asked a question.
Minor addition: those are only undeleted questions, I don't know if you have deleted ones.
If you want to find those without an accepted answer you can just change the query to "hasaccepted:no" like here, for example in case you want to search for interesting questions that you want to offer a bounty on.
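Putting the operators together, the three queries used above are (the user id is of course specific to the account):

user:12102 is:q
user:12102 is:q hasaccepted:yes
user:12102 is:q hasaccepted:no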
• "impressive!"? by coincidence I just right now wrote this It's a scourge, my life is an endless series of cups of coffee and pavlovian clicks of the Post Your Question button. – uhoh Mar 25 '19 at 12:12
• This works very well, and is really easy to use, thank you very much! – uhoh Mar 25 '19 at 12:19
• @uhoh Then you are a dream of many smaller sites on the network. Enjoy your cup of cofee and have fun with the advanced search features and your new found question / answer statistics ;) – Sec SE - clear Monica's name Mar 25 '19 at 12:24
• Just noting for future reference; 70% of my questions have an accepted answer, 85% have one or more answers, 13% of all site questions are mine, and 20% of all unanswered questions are mine, so it looks like I have my work cut out for me. Thanks again! – uhoh Mar 27 '19 at 4:06
• I'm up to 72% of my questions having an accepted answer, but wow, as a whole the site is only at about 54% or 5475/10064! – uhoh Apr 1 '19 at 17:10
https://www.gap-system.org/Manuals/pkg/Browse/doc/chap6.html
### 6 Examples of Applications based on NCurses.BrowseGeneric
This chapter introduces the operation Browse (6.1-1) and lists several examples of how the function NCurses.BrowseGeneric (4.3-1) can be utilized for rendering GAP related data or for playing games. Each section describes the relevant GAP functions and briefly sketches the technical aspects of the implementation; more details can be found in the GAP files, in the app directory of the package.
Only Section 6.4 describes a standard application in the sense of the introduction to Chapter 4, perhaps except for a special function that is needed to compare table entries. The other examples in this chapter require some of the programming described in Chapter 5.
The GAP examples in this chapter use the "replay" feature of NCurses.BrowseGeneric (4.3-1), see Section 4.1. This means that the NCurses.BrowseGeneric (4.3-1) based function is called between two calls of BrowseData.SetReplay (5.4-2). If you want to paste these examples into the GAP session with the mouse then do not paste the final BrowseData.SetReplay (5.4-2) call, since NCurses.BrowseGeneric (4.3-1) would regard the additional input as a user interrupt.
#### 6.1 The Operation Browse
##### 6.1-1 Browse
‣ Browse( obj[, arec] ) ( operation )
This operation displays the GAP object obj in a nice, formatted way, similar to the operation Display (Reference: Display). The difference is that Browse is intended to use ncurses facilities.
Currently there are methods for matrices (see Browse (6.2-2)), for character tables (see Browse (6.3-1)) and for tables of marks (see Browse (6.4-1)).
#### 6.2 Matrix Display
The GAP library provides several Display (Reference: Display) methods for matrices. In order to cover the functionality of these methods, Browse provides the function NCurses.BrowseDenseList (6.2-1) that uses the standard facilities of the function NCurses.BrowseGeneric (4.3-1), i. e., one can scroll in the matrix, searching and sorting are provided etc.
The idea is to customize this function for different special cases, and to install corresponding Browse (6.1-1) methods. Examples are methods for matrices over finite fields and residue class rings of the rational integers, see Browse (6.2-2).
The code can be found in the file app/matdisp.g of the package.
##### 6.2-1 NCurses.BrowseDenseList
‣ NCurses.BrowseDenseList( list, arec ) ( function )
Returns: nothing.
Let list be a dense list whose entries are lists, for example a matrix, and let arec be a record. This function displays list in a window, as a two-dimensional array with row and column positions as row and column labels, respectively.
The following components of arec are supported.
header
If bound, the value must be a valid value of the work.header component of a browse table, see BrowseData.IsBrowseTable (4.2-3); for example, the value can be a list of strings. If this component is not bound then the browse table has no header.
convertEntry
If bound, the value must be a unary function that returns a string describing its argument. The default is the operation String (Reference: String). Another possible value is NCurses.ReplaceZeroByDot, which returns the string "." if the argument is a zero element in the sense of IsZero (Reference: IsZero), and returns the String (Reference: String) value otherwise. For each entry in a row of list, the convertEntry value is shown in the browse table.
labelsRow
If bound, the value must be a list of row label rows for list, as described in Section BrowseData.IsBrowseTable (4.2-3). The default is [ [ "1" ], [ "2" ], ... ].
labelsCol
If bound, the value must be a list of column label rows for list, as described in Section BrowseData.IsBrowseTable (4.2-3). The default is [ [ "1", "2", ... ] ].
The full functionality of the function NCurses.BrowseGeneric (4.3-1) is available.
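As a quick illustration (this example is ours, not from the manual), the following displays a 5 by 5 identity matrix with zeros shown as dots, and uses the replay feature to quit again immediately:

gap> BrowseData.SetReplay( "Q" );
gap> NCurses.BrowseDenseList( IdentityMat( 5 ),
>      rec( header:= [ "identity matrix" ],
>           convertEntry:= NCurses.ReplaceZeroByDot ) );
gap> BrowseData.SetReplay( false );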
##### 6.2-2 Browse
‣ Browse( list ) ( method )
Returns: nothing.
Several methods for the operation Browse (6.1-1) are installed for the case that the argument is a list of lists. These methods cover a default method for lists of lists and the Display (Reference: Display) methods for matrices over finite fields and residue class rings of the rational integers. Note that matrices over finite prime fields, small extension fields, and large extension fields are displayed differently, and the same holds for the corresponding Browse (6.1-1) methods.
gap> n:= [ 14, 14, 14, 14 ];;
gap> input:= Concatenation( n, n, n, "Q" );; # "do nothing and quit"
gap> BrowseData.SetReplay( input );
gap> Browse( RandomMat( 10, 10, Integers ) );
gap> BrowseData.SetReplay( input );
gap> Browse( RandomMat( 10, 10, GF(3) ) );
gap> BrowseData.SetReplay( input );
gap> Browse( RandomMat( 10, 10, GF(4) ) );
gap> BrowseData.SetReplay( input );
gap> Browse( RandomMat( 10, 10, Integers mod 6 ) );
gap> BrowseData.SetReplay( input );
gap> Browse( RandomMat( 10, 10, GF( NextPrimeInt( 2^16 ) ) ) );
gap> BrowseData.SetReplay( input );
gap> Browse( RandomMat( 10, 10, GF( 2^20 ) ) );
gap> BrowseData.SetReplay( false );
#### 6.3 Character Table Display
The GAP library provides a Display (Reference: Display) method for character tables that breaks the table into columns fitting on the screen. Browse provides an alternative, using the standard facilities of the function NCurses.BrowseGeneric (4.3-1), i. e., one can scroll in the matrix of character values, searching and sorting are provided etc.
The Browse (6.1-1) method for character tables can be called instead of Display (Reference: Display). For convenience, one can additionally make this function the default Display (Reference: Display) method for character tables, by assigning it to the Display component in the global record CharacterTableDisplayDefaults.User, see Reference: Printing Character Tables; for example, one can do this in one's gaprc file, see Reference: The gap.ini and gaprc files. (This can be undone by unbinding the component CharacterTableDisplayDefaults.User.Display.)
The function BrowseDecompositionMatrix (6.3-2) can be used to display decomposition matrices for Brauer character tables.
##### 6.3-1 Browse
‣ Browse( tbl[, options] ) ( method )
This method displays the character table tbl in a window. The optional record options describes what shall be displayed, the supported components and the default values are described in Reference: Printing Character Tables.
The full functionality of the function NCurses.BrowseGeneric (4.3-1) is available.
gap> if TestPackageAvailability( "CTblLib" ) = true then
> BrowseData.SetReplay( Concatenation(
> # scroll in the table
> "DRULdddddrrrrrlluu",
> # select an entry and move it around
> "seddrruuuddlll",
> # search for the pattern 135 (six times)
> "/135", [ NCurses.keys.ENTER ], "nnnnn",
> # deselect the entry, select the first column
> "qLsc",
> # sort and categorize by this column
> "sc",
> # select the first row, move down the selection
> "srdddd",
> # expand the selected category, scroll the selection down
> "xd",
> # and quit the application
> "Q" ) );
> Browse( CharacterTable( "HN" ) );
> BrowseData.SetReplay( false );
> fi;
Implementation remarks: The first part of the code in the Browse (6.1-1) method for character tables is almost identical with the code for extracting the data to be displayed from the input data in the GAP library function CharacterTableDisplayDefault. The second part of the code transforms these data into a browse table. Character names and (if applicable) indicator values are used as row labels, and centralizer orders, power maps, and class names are used as column labels. The identifier of the table is used as the static header. When an irrational entry is selected, a description of this entry is shown in the dynamic footer.
The standard modes in BrowseData (5.4-1) (except the help mode) have been extended by three new actions, each bound to its own key. The first two of them open pagers giving an overview of all irrationalities in the table, or of all those irrationalities that have been shown on the screen in the current call, respectively. (The names assigned to the irrationalities are generated column-wise. If one just scrolls through the table, without jumping, then these names coincide with the names generated by the default Display (Reference: Display) method for character tables; this is in general not the case, for example when a row-wise search in the table is performed.) The third new action toggles the visibility status of the column label rows for centralizer orders and power maps.

An individual minyx function does not only check whether the desired table fits into the window but also whether a table with too high column labels (centralizer orders and power maps) would fit if these labels get collapsed via the toggle action just mentioned. In this case, the labels are automatically collapsed, and the corresponding key is disabled.
In order to keep the required space small also for large character tables, caching of formatted matrix entries is disabled, and the strings to be displayed are computed on demand with a Main function in the work component of the browse table. For the same reason, the constant height one for all table rows is set in advance, so one need not inspect a whole character if only a few values of it shall be shown.
Special functions are provided for sorting (concerning the comparison of character values, which can be integers or irrationalities) and categorizing the table by a column (the value in the category row involves the class name of the column in question).
The code can be found in the file app/ctbldisp.g of the package.
##### 6.3-2 BrowseDecompositionMatrix
‣ BrowseDecompositionMatrix( modtbl[, b][, options] ) ( function )
This function displays the decomposition matrix of (the b-th block of) the Brauer character table modtbl in a window. The arguments are the same as for LaTeXStringDecompositionMatrix (Reference: LaTeXStringDecompositionMatrix).
The positions of the ordinary and modular irreducible characters are shown in the labels of the rows and columns, respectively, that are indexed by these characters. When an entry in the decomposition matrix is selected then information about the degrees of these characters is shown in the table footer.
The full functionality of the function NCurses.BrowseGeneric (4.3-1) is available.
gap> BrowseData.SetReplay( Concatenation(
> # select the first entry
> "se",
> # scroll in the table
> "drrrr",
> # keep the table open for a while
> [ 14, 14, 14, 14, 14 ],
> # and quit the application
> "Q" ) );
gap> BrowseDecompositionMatrix( CharacterTable( "J1" ) mod 2 );
gap> BrowseData.SetReplay( false );
The code can be found in the file app/ctbldisp.g of the package.
#### 6.4 Table of Marks Display
The GAP library provides a Display (Reference: Display) method for tables of marks that breaks the table into columns fitting on the screen. Similar to the situation with character tables, see Section 6.3, but with a much simpler implementation, Browse provides an alternative based on the function NCurses.BrowseGeneric (4.3-1).
Browse (6.1-1) can be called instead of Display (Reference: Display) for tables of marks, cf. Reference: Printing Tables of Marks.
##### 6.4-1 Browse
‣ Browse( tom[, options] ) ( method )
This method displays the table of marks tom in a window. The optional record options describes what shall be displayed, the supported components and the default values are described in Reference: Printing Tables of Marks.
The full functionality of the function NCurses.BrowseGeneric (4.3-1) is available.
gap> if TestPackageAvailability( "TomLib" ) = true then
> BrowseData.SetReplay( Concatenation(
> # scroll in the table
> "DDRRR",
> # search for the (exact) value 100 (three times)
> "/100",
> [ NCurses.keys.DOWN, NCurses.keys.DOWN, NCurses.keys.RIGHT ],
> [ NCurses.keys.DOWN, NCurses.keys.DOWN, NCurses.keys.DOWN ],
> [ NCurses.keys.RIGHT, NCurses.keys.ENTER ], "nn",
> # no more occurrences of 100, confirm
> [ NCurses.keys.ENTER ],
> # and quit the application
> "Q" ) );
> Browse( TableOfMarks( "A10" ) );
> BrowseData.SetReplay( false );
> fi;
Implementation remarks: Rows and columns are indexed by their positions. The identifier of the table is used as the static header, there is no footer.
In order to keep the required space small also for large tables of marks, caching of formatted matrix entries is disabled, and the strings to be displayed are computed on demand with a Main function in the work component of the browse table. For the same reason, the constant height one for the table rows is set in advance. (For example, the table of marks of the group with identifier "O8+(2)", with 11171 rows and columns, can be shown with Browse (6.1-1) in a GAP session requiring about 100 MB.)
The code can be found in the file app/tomdisp.g of the package.
#### 6.5 Overview of the GAP Atlas of Group Representations

The GAP package AtlasRep (see [WPN+07]) is an interface to a database of representations and related data. The table of contents of this database can be displayed via the function DisplayAtlasInfo (AtlasRep: DisplayAtlasInfo) of this package. The Browse package provides an alternative based on the function NCurses.BrowseGeneric (4.3-1); one can scroll, search, and fetch data for later use.
##### 6.5-1 BrowseAtlasInfo
‣ BrowseAtlasInfo( [listofnames, ]["contents", sources, ][...] ) ( function )
‣ BrowseAtlasInfo( gapname[, std][, ...] ) ( function )
Returns: the list of "clicked" info records.
This function shows the information available via the GAP package AtlasRep in a browse table, cf. Section AtlasRep: Accessing Data of the AtlasRep Package in the AtlasRep manual.
The optional arguments can be used to restrict the table to public or private data, or to show an overview for one particular group. The arguments are the same as for DisplayAtlasInfo (AtlasRep: DisplayAtlasInfo), see the documentation of this function for details. (Note that additional conditions such as IsPermGroup (Reference: IsPermGroup) can be entered also in the case that no gapname is given. In this situation, the additional conditions are evaluated for the "second level tables" that are opened by "clicking" on a table row or entry.)
When one "clicks" on one of the table rows or entries then a browse table with an overview of the information available for this group is shown, and "clicking" on one of the rows in these tables adds the corresponding info record (see OneAtlasGeneratingSetInfo (AtlasRep: OneAtlasGeneratingSetInfo)) to the list of return values of BrowseAtlasInfo.
The full functionality of the function NCurses.BrowseGeneric (4.3-1) is available.
The following example shows how BrowseAtlasInfo can be used to fetch info records about permutation representations of the alternating groups A_5 and A_6: We search for the group name "A5" in the overview table, and the first cell in the table row for A_5 becomes selected; hitting the Enter key causes a new window to be opened, with an overview of the data available for A_5; moving down two rows and hitting the Enter key again causes the second representation to be added to the result list; hitting Q closes the second window, and we are back in the overview table; we move the selection down twice (to the row for the group A_6), and choose the first representation for this group; finally we leave the table, and the return value is the list with the data for the two representations.
gap> d:= [ NCurses.keys.DOWN ];; r:= [ NCurses.keys.RIGHT ];;
gap> c:= [ NCurses.keys.ENTER ];;
gap> BrowseData.SetReplay( Concatenation(
> "/A5", # Find the string A5 ...
> d, d, r, # ... such that just the word matches,
> c, # start the search,
> c, # click the table entry A5,
> d, d, # move down two rows,
> c, # click the row for this representation,
> "Q", # quit the second level table,
> d, d, # move down two rows,
> c, # click the table entry A6,
> d, # move down one row,
> c, # click the first row,
> "Q", # quit the second level table,
> "Q" ) ); # and quit the application.
gap> if IsBound( BrowseAtlasInfo ) and IsBound( AtlasProgramInfo ) then
> tworeps:= BrowseAtlasInfo();
> else
> tworeps:= [ fail ];
> fi;
gap> BrowseData.SetReplay( false );
gap> if fail in tworeps then
> else
> Print( List( tworeps, x -> x.identifier[1] ), "\n" );
> fi;
[ "A5", "A6" ]
Implementation remarks: The first browse table shown has a static header, no footer and row labels, one row of column labels describing the type of data summarized in the columns.
Row and column separators are drawn as grids (cf. NCurses.Grid (2.2-8)) composed from the special characters described in Section 2.1-6, using the component work.SpecialGrid of the browse table, see BrowseData (5.4-1).
When a row is selected, the "click" functionality opens a new window (via a second level call to NCurses.BrowseGeneric (4.3-1)), in which a browse table with the list of available data for the given group is shown; in this table, "click" results in adding the info for the selected row to the result list, and a message about this addition is shown in the footer row. One can choose further data, return to the first browse table, and perhaps iterate the process for other groups. When the first level table is left, the list of info records for the chosen data is returned.
For the two kinds of browse tables, the standard modes in BrowseData (5.4-1) (except the help mode) have been extended by a new action, bound to its own key, that opens a pager giving an overview of all data that have been chosen in the current call.
This function is available only if the GAP package AtlasRep is available.
The code can be found in the file app/atlasbrowse.g of the package.
#### 6.6 Access to GAP Manuals

A Browse adapted way to access several manuals is to show the hierarchy of books, chapters, sections, and subsections as collapsible category rows, and to regard the contents of each subsection as a data row of a matrix with only one column.
This application is mainly intended as an example with table cells that exceed the screen, and as an example with several category levels.
##### 6.6-1 BrowseGapManuals
‣ BrowseGapManuals( [start] ) ( function )
This function displays the contents of the GAP manuals (the main GAP manuals as well as the loaded package manuals) in a window. The optional argument start describes the initial status, admissible values are the strings "inline/collapsed", "inline/expanded", "pager/collapsed", and "pager/expanded".
In the inline cases, the parts of the manuals are shown in the browse table, and in the pager case, the parts of the manuals are shown in a different window when they are "clicked", using the user's favourite help viewer, see Reference: Changing the Help Viewer.
In the collapsed cases, all category rows are collapsed, and the first row is selected; typical next steps are moving down the selection and expanding single category rows. In the expanded cases, all category rows are expanded, and nothing is selected; a typical next step in the inline/expanded case is a search for a string in the manuals. (Note that searching is quite slow: for viewing a part of a manual, the file with the corresponding section is read into GAP, the text is formatted, the relevant part is cut out from the section, perhaps markup is stripped off, and finally the search is performed in the resulting strings.)
If no argument is given then the user is asked for selecting an initial status, using NCurses.Select (3.1-2).
The full functionality of the function NCurses.BrowseGeneric (4.3-1) is available.
gap> n:= [ 14, 14, 14 ];; # "do nothing"
gap> BrowseData.SetReplay( Concatenation(
> "xdxd", # expand a Tutorial section
> n, "Q" ) ); # and quit
gap> BrowseGapManuals( "inline/collapsed" );
gap> BrowseData.SetReplay( Concatenation(
> "/Browse", [ NCurses.keys.ENTER ], # search for "Browse"
> "xdxddxd", # expand a section
> n, "Q" ) ); # and quit
gap> BrowseGapManuals( "inline/collapsed" );
gap> BrowseData.SetReplay( false );
Implementation remarks: The browse table has a dynamic header showing the name of the currently selected manual, no footer, no row or column labels, and exactly one column of fixed width equal to the screen width. The category rows are precomputed, i. e., they do not arise from a table column; this way, the contents of each data cell can be computed on demand, as soon as it is shown on the screen, in particular the category hierarchy is computed without reading the manuals into GAP. Also, the data rows are not cached. There is no return value. The heights of many cells are bigger than the screen height, so scrolling is a mixture of scrolling to the next cell and scrolling inside a cell. The different initial states are realized via executing different initial steps before the table is shown to the user.
For the variants that show the manuals in a pager, the code temporarily replaces the show function of the default viewer "screen" (see Reference: Changing the Help Viewer) by a function that uses NCurses.Pager (3.1-4). Note that in the case that the manual bit in question fits into one screen, the default show function writes this text directly to the screen, but the screen is already in use by the browse table.
The implementation should be regarded as a sketch.
For example, the markup available in the text file format of GAPDoc manuals (using escape sequences) is stripped off instead of being transferred to the attribute lines that arise, because of the highlighting problem mentioned in Section 2.2-3.
Some heuristics used in the code are due to deficiencies of the manual formats.
For the inline variant of the browse table, the titles of chapters, sections, and subsections are not regarded as parts of the actual text since they appear already as category rows; however, the functions of the GAP help system deliver the text together with these titles, so these lines must be stripped off afterwards.
The category hierarchy representing the tables of contents is created from the manual.six files of the manuals. These files do not contain enough information for determining whether several functions define the same subsection, in the sense that there is a common description text after a series of manual lines introducing different functions. In such cases, the browse table contains a category row for each of these functions (with its own number), but the corresponding text appears only under the last of these category rows, the data rows for the others are empty. (This problem does not occur in the GAPDoc manual format because this introduces explicit subsection titles, involving only the first of several function definitions.)
Also, index entries and sectioning entries in manual.six files of manuals in GAPDoc format are not explicitly distinguished.
The code can be found in the file app/manual.g of the package.
#### 6.7 Overview of Bibliographies
The function BrowseBibliography (6.7-1) can be used to turn the contents of bibliography files in BibTeX or BibXMLext format (see GAPDoc: The BibXMLext Format) into a Browse table, such that one can scroll in the list, search for entries, sort by year, sort and categorize by authors etc.
The default bibliography used by BrowseBibliography (6.7-1) is the bibliography of GAP related publications, see [GAP]. The Browse package contains a (perhaps outdated) version of this bibliography. One can get an updated version as follows.
wget -N http://www.gap-system.org/Doc/Bib/gap-publishednicer.bib
The columns of the Browse table that is shown by BrowseBibliography (6.7-1) can be customized, two examples for that are given by the functions BrowseBibliographySporadicSimple (AtlasRep: BrowseBibliographySporadicSimple) and BrowseBibliographyGapPackages (6.7-2).
The function BrowseMSC (6.7-3) shows an overview of the AMS Mathematics Subject Classification codes.
##### 6.7-1 BrowseBibliography
‣ BrowseBibliography( [bibfiles] ) ( function )
Returns: a record as returned by ParseBibXMLExtFiles (GAPDoc: ParseBibXMLextFiles).
This function shows the list of bibliography entries in the files given by bibfiles, which may be a string or a list of strings (denoting a filename or a list of filenames, respectively) or a record (see below for the supported components).
If no argument is given then the file bibl/gap-publishednicer.bib in the Browse package directory is taken, and "GAP Bibliography" is used as the header.
Another perhaps interesting data file that should be available in the GAP distribution is doc/manualbib.xml. This file can be located as follows.
gap> file:= Filename( DirectoriesLibrary( "doc" ), "manualbib.xml" );;
Both BibTeX format and the XML based extended format provided by the GAPDoc package are supported by BrowseBibliography, see Chapter GAPDoc: Utilities for Bibliographies.
In the case of BibTeX format input, first a conversion to the extended format takes place, via StringBibAsXMLext (GAPDoc: StringBibAsXMLext) and ParseBibXMLextString (GAPDoc: ParseBibXMLextString). Note that syntactically incorrect entries are rejected in this conversion –this is signaled with InfoBibTools (GAPDoc: InfoBibTools) warnings– and that only a subset of the possible LaTeX markup is recognized –other markup appears in the browse table except that the leading backslash is removed.
In both cases of input, the problem arises that in visual mode, currently we can show only ASCII characters (and the symbols in NCurses.lineDraw, but these are handled differently, see Section 2.1-6). Therefore, we use the function SimplifiedUnicodeString (GAPDoc: SimplifiedUnicodeString) for replacing other unicode characters by ASCII text.
The return value is a record as returned by ParseBibXMLExtFiles (GAPDoc: ParseBibXMLextFiles), its entries component corresponds to the bibliography entries that have been "clicked" in visual mode. This record can be used as input for WriteBibFile (GAPDoc: WriteBibFile) or WriteBibXMLextFile (GAPDoc: WriteBibXMLextFile), in order to produce a bibliography file, or it can be used as input for StringBibXMLEntry (GAPDoc: StringBibXMLEntry), in order to produce strings from the entries, in various formats.
The full functionality of the function NCurses.BrowseGeneric (4.3-1) is available.
gap> # sort and categorize by year, scroll down, expand a category row
gap> BrowseData.SetReplay( "scrrscsedddddxdddddQ" );
gap> BrowseBibliography();;
gap> # sort & categorize by authors, expand all category rows, scroll down
gap> BrowseData.SetReplay( "scscXseddddddQ" );
gap> BrowseBibliography();;
gap> # sort and categorize by journal, search for a journal name, expand
gap> BrowseData.SetReplay( Concatenation( "scrrrsc/J. Algebra",
> [ NCurses.keys.ENTER ], "nxdddQ" ) );
gap> BrowseBibliography();;
gap> BrowseData.SetReplay( false );
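As a sketch of processing the return value (ours, not from the manual; it assumes that at least one entry gets clicked before quitting), the entries component can be fed to StringBibXMLEntry (GAPDoc: StringBibXMLEntry):

gap> BrowseData.SetReplay( Concatenation( "se",
>      [ NCurses.keys.ENTER ], "Q" ) );
gap> t:= BrowseBibliography();;
gap> BrowseData.SetReplay( false );
gap> if Length( t.entries ) > 0 then
>      Print( StringBibXMLEntry( t.entries[1], "BibTeX" ) );
>    fi;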
Implementation remarks: The browse table has a dynamic header (showing the number of entries, which can vary when the table is restricted), no footer and row labels; one row of column labels is given by the descriptions of the table columns (authors, title, year, journal, MSC code).
Row and column separators are drawn as grids (cf. NCurses.Grid (2.2-8)) composed from the special characters described in Section 2.1-6, using the component work.SpecialGrid of the browse table, see BrowseData (5.4-1).
For categorizing by authors (or by MSC codes), the sort parameter "split rows on categorizing" is set to "yes", so the authors (codes) are distributed to different category rows, hence each entry appears once for each of its authors (or its MSC codes) in the categorized table. When a data row or an entry in a data row is selected, "click" adds the corresponding bibliography entry to the result.
The width of the title column is preset; usually titles are too long for one line, and the contents of this column is formatted as a paragraph, using the function FormatParagraph (GAPDoc: FormatParagraph). For the authors and journal columns, maximal widths are prescribed, and FormatParagraph (GAPDoc: FormatParagraph) is used for longer entries.
For four columns, the sort parameters are defined as follows: The authors and MSC code columns do not become hidden when the table is categorized according to this column, sorting by the year yields a descending order, and the category rows arising from these columns and the journal column show the numbers of the data rows that belong to them.
Those standard modes in BrowseData (5.4-1) where an entry or a row of the table is selected have been extended by three new actions, each bound to its own key, which open a pager showing the BibTeX, HTML, and Text format of the selected entry, respectively. If the MSC code column is available then a fourth input is admissible; it opens a pager showing the descriptions of the MSC codes attached to the selected entry.
This function requires some of the utilities provided by the GAP package GAPDoc (see [LN07]), such as FormatParagraph (GAPDoc: FormatParagraph), NormalizeNameAndKey (GAPDoc: NormalizeNameAndKey), NormalizedNameAndKey (GAPDoc: NormalizedNameAndKey), ParseBibFiles (GAPDoc: ParseBibFiles), ParseBibXMLextFiles (GAPDoc: ParseBibXMLextFiles), ParseBibXMLextString (GAPDoc: ParseBibXMLextString), RecBibXMLEntry (GAPDoc: RecBibXMLEntry), and StringBibAsXMLext (GAPDoc: StringBibAsXMLext).
The code can be found in the file app/gapbibl.g of the package.
The browse table can be customized by entering a record as the argument of BrowseBibliography, with the following supported components.
files
a nonempty list of filenames containing the data to be shown; there is no default for this component.
filesshort
a list of the same length as the files component, the entries are strings which are shown in the "sourcefilename" column of the table (if this column is present); the default is the list of filenames.
filecontents
a list of the same length as the files component, the entries are strings which are shown as category values when the table is categorized by the "sourcefilename" column; the default is the list of filenames.
header
is the constant part of the header shown above the browse table, the default is the first filename.
columns
is a list of records that are valid as the second argument of DatabaseAttributeAdd (A.1-5), where the first argument is a database id enumerator created from the bibliography entries in the files in question. Each entry (and also the corresponding identifier) of this database id enumerator is a list of records obtained from ParseBibXMLextFiles (GAPDoc: ParseBibXMLextFiles) and RecBibXMLEntry (GAPDoc: RecBibXMLEntry), or from ParseBibFiles (GAPDoc: ParseBibFiles), such that the list elements are regarded as equal, in the sense that their fingerprints (see below) are equal. The records in the columns list are available for constructing the desired browse table, the actual appearance is controlled by the choice component described below. Columns showing authors, title, year, journal, MSC code, and filename are predefined and need not be listed here.
choice
a list of strings denoting the identifier components of those columns that shall actually be shown in the table, the default is [ "authors", "title", "year", "journal", "mrclass" ].
fingerprint
a list of strings denoting component names of the entries of the database id enumerator that is constructed from the data (see above); two data records are regarded as equal if the values of these components are equal; the default is [ "mrnumber", "title", "authorAsList", "editorAsList", "author" ].
sortKeyFunction
either fail or a function that takes a record as returned by RecBibXMLEntry (GAPDoc: RecBibXMLEntry) and returns a list that is used for comparing and thus sorting the records; the default is fail, which means that the rows of the table appear in the same ordering as in the source files.
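The following sketch shows how such a record might look; the file names are hypothetical, and only some of the components described above are set. (The replay input just opens the table and quits again, following the conventions of the other examples in this manual.)
gap> r:= rec(
>      files:= [ "mybib1.bib", "mybib2.bib" ],   # hypothetical file names
>      filesshort:= [ "first", "second" ],       # "sourcefilename" entries
>      header:= "My bibliography",               # constant header part
>      choice:= [ "authors", "title", "year" ],  # columns to be shown
>      sortKeyFunction:= fail );;                # keep the source ordering
gap> BrowseData.SetReplay( "Q" );
gap> BrowseBibliography( r );;
gap> BrowseData.SetReplay( false );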
##### 6.7-2 BrowseBibliographyGapPackages
‣ BrowseBibliographyGapPackages( ) ( function )
Returns: a record as returned by BrowseBibliography (6.7-1).
This function collects the information from the *.bib and *bib.xml files in those subdirectories of installed GAP packages which contain the package documentation, and shows it in a Browse table, using the function BrowseBibliography (6.7-1).
This function is experimental. The result is not really satisfactory, for the following reasons.
• Duplicate entries may occur, due to subtle differences in various source files.
• The source files may contain more than what is actually cited in the package manuals.
• It may happen that some *.bib or *bib.xml file is accidentally distributed with the package but is not intended to serve as package bibliography.
• The heuristics for rewriting LaTeX code is of course not perfect; thus strange symbols may occur in the Browse table.
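Since BrowseBibliographyGapPackages takes no arguments, a call is straightforward; the following sketch just opens the table and quits again, using the replay conventions of the other examples in this manual.
gap> if IsBound( BrowseBibliographyGapPackages ) then
>      BrowseData.SetReplay( "Q" );
>      BrowseBibliographyGapPackages();;
>      BrowseData.SetReplay( false );
>    fi;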
##### 6.7-3 BrowseMSC
‣ BrowseMSC( ) ( function )
Returns: nothing.
This function shows the currently valid MSC codes in a browse table that is categorized by the ..-XX and the ...xx codes. (Dedicated user inputs expand all categories or just the currently selected one.) Due to the categorization, only two columns of the table are visible, showing the codes and their descriptions.
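For illustration, the table can be opened and closed again via the replay feature used elsewhere in this manual:
gap> BrowseData.SetReplay( "Q" );  # open the MSC table and quit immediately
gap> BrowseMSC();
gap> BrowseData.SetReplay( false );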
#### 6.8 Profiling GAP functions–a Variant
A Browse adapted way to evaluate profiling results is to show the overview that is printed by the GAP function DisplayProfile (Reference: DisplayProfile) in a Browse table, which allows one to sort the profiled functions according to the numbers of calls, the time spent, etc., and to search for certain functions one is interested in.
##### 6.8-1 BrowseProfile
‣ BrowseProfile( [functions, ][mincount, mintime] ) ( function )
The arguments and their meaning are the same as for the function DisplayProfile (Reference: DisplayProfile), in the sense that the lines printed by that function correspond to the rows of the list that is shown by BrowseProfile. Initially, the table is sorted in the same way as the list shown by DisplayProfile; sorting the table by any of the first five columns will yield a non-increasing order of the rows.
The threshold values mincount and mintime can be changed in visual mode via a dedicated user input. If mouse events are enabled (see NCurses.UseMouse (2.2-10)) then one can also use a mouse click on the current parameter value shown in the table header in order to enter the mode for changing the parameters.
When a row or an entry in a row is selected, "click" shows the code of the corresponding function in a pager (see NCurses.Pager (3.1-4)) whenever this is possible, as follows. If the function was read from a file then this file is opened; if the function was entered interactively then the code of the function is shown in the format produced by Print (Reference: Print). Other functions (for example GAP kernel functions) cannot be shown; one gets an alert message (see NCurses.Alert (3.1-1)) in such a case.
The full functionality of the function NCurses.BrowseGeneric (4.3-1) is available.
gap> n:= [ 14, 14, 14, 14, 14 ];; # "do nothing"
gap> ProfileOperationsAndMethods( true ); # collect some data
gap> ConjugacyClasses( PrimitiveGroup( 24, 1 ) );;
gap> ProfileOperationsAndMethods( false );
gap> BrowseData.SetReplay( Concatenation(
> "scso", # sort by column 1,
> n,
> "rso", # sort by column 2,
> n,
> "rso", # sort by column 3,
> n,
> "q", # deselect the column,
> "/Centralizer", [ NCurses.keys.ENTER ], # search for a function,
> n, "Q" ) ); # and quit
gap> BrowseProfile();
gap> BrowseData.SetReplay( false );
Implementation remarks: The browse table has a dynamic header, which shows the current values of mincount and mintime, and a dynamic footer, which shows the sums of counts and timings for the rows in the table (label TOTAL) and, if applicable, the sums for the profiled functions not shown in the table (label OTHER). There are no row labels, and the column labels are the obvious ones. There is no return value.
The standard modes in BrowseData (5.4-1) (except the help mode) have been modified by adding a new action for changing the threshold parameters mincount and mintime. The way this is implemented made it necessary to change the standard "reset" action of the table; note that resetting (a sorting or filtering of) the table must not make those rows visible that shall be hidden because of the threshold parameters.
The code can be found in the file app/profile.g of the package.
#### 6.9 Variables defined in GAP packages–a Variant
A Browse adapted way to list the variables that are defined in a GAP package is to show the overview that is printed by the GAP function ShowPackageVariables (Reference: ShowPackageVariables) in a Browse table.
##### 6.9-1 BrowsePackageVariables
‣ BrowsePackageVariables( pkgname[, version][, arec] ) ( function )
Returns: nothing.
The arguments can be the same as for ShowPackageVariables (Reference: ShowPackageVariables), that is, pkgname is the name of a GAP package, and the optional arguments version and arec are a version number of this package and a record used for customizing the output, respectively.
Alternatively, the second argument can be the output info of PackageVariablesInfo (Reference: PackageVariablesInfo) for the package in question, instead of the version number.
BrowsePackageVariables opens a browse table that shows the global variables that become bound and the methods that become installed when GAP loads the package pkgname.
The table is categorized by the kinds of variables (new or redeclared operations, methods, info classes, synonyms, other globals). The column "Doc.?" distinguishes undocumented and documented variables, so one can use this column as a filter or for categorizing. The column "Filename" shows the names of the package files. Clicking a selected row of the table opens the relevant package file at the code in question.
The idea behind the argument info is that using the same arguments as for ShowPackageVariables (Reference: ShowPackageVariables) does not allow one to apply BrowsePackageVariables to packages that have been loaded before the Browse package. Thus one can compute the underlying data info first, using PackageVariablesInfo (Reference: PackageVariablesInfo), then load the Browse package, and finally call BrowsePackageVariables.
For example, the overview of package variables for Browse can be shown by starting GAP without packages and then entering the following lines.
gap> pkgname:= "Browse";;
gap> info:= PackageVariablesInfo( pkgname, "" );;
gap> BrowsePackageVariables( pkgname, info );
If the arguments are the same as for ShowPackageVariables (Reference: ShowPackageVariables) then this function is actually called, with the consequence that the package gets loaded when BrowsePackageVariables is called. This is not the case if the output of PackageVariablesInfo (Reference: PackageVariablesInfo) is entered as the second argument.
#### 6.10 Configuring User preferences–a Variant
A Browse adapted way to show and edit GAP's user preferences is to show the overview that is printed by the GAP function ShowUserPreferences (Reference: ShowUserPreferences) in a Browse table.
##### 6.10-1 BrowseUserPreferences
‣ BrowseUserPreferences( package1, package2, ... ) ( function )
Returns: nothing.
The arguments are the same as for ShowUserPreferences (Reference: ShowUserPreferences), that is, calling the function with no argument yields an overview of all known user preferences, and if one or more strings package1, ... are given then only the user preferences for these packages are shown.
BrowseUserPreferences opens a browse table with the following columns:
"Package"
contains the names of the GAP packages to which the user preferences belong,
"Pref. names"
contains the names of the user preferences, and
"Description"
contains the description texts from the DeclareUserPreference (Reference: DeclareUserPreference) calls and the default values (if applicable), and the actual values.
When one "clicks" on one of the table rows or entries then the values of the user preference in question can be edited. If a list of admissible values is known then this means that one can choose from this list via NCurses.Select (3.1-2), otherwise one can enter the desired value as text.
The values of the user preferences are not changed before one closes the browse table. When the table is left and if one has changed at least one value, one is asked whether the changes shall be applied.
gap> d:= [ NCurses.keys.DOWN ];;
gap> c:= [ NCurses.keys.ENTER ];;
gap> BrowseData.SetReplay( Concatenation(
> "/PackagesToLoad", # enter a search string,
> c, # start the search,
> c, # edit the entry (a list of choices),
> " ", d, # toggle the first four values,
> " ", d, #
> " ", d, #
> " ", d, #
> c, # submit the values,
> "Q", # quit the table,
> c ) ); # choose "cancel": do not apply the changes.
gap> BrowseUserPreferences();
gap> BrowseData.SetReplay( false );
The code can be found in the file app/userpref.g of the package.
#### 6.11 Overview of GAP Data
The GAP system contains several data collections such as libraries of groups and character tables. Clearly the function NCurses.BrowseGeneric (4.3-1) can be used to visualize interesting information about such data collections, in the form of an "overview table" whose rows correspond to the objects in the collection; each column of the table shows a piece of information about the objects. (One possibility to create such overviews is given by BrowseTableFromDatabaseIdEnumerator (A.2-2).)
##### 6.11-1 BrowseGapData
‣ BrowseGapData( ) ( function )
Returns: the return value of the chosen application if there is one.
The function BrowseGapData shows the choices in the list BrowseData.GapDataOverviews, in a browse table with one column. When an entry is "clicked" then the associated function is called, and the table of choices is closed.
The idea is that each entry of BrowseData.GapDataOverviews describes an overview of a data collection.
The Browse package provides overviews of
• the current AMS Mathematics Subject Classification codes (see BrowseMSC (6.7-3)),
• the contents of the AtlasRep package [WPN+07] (only if this package is loaded, see Section 6.5),
• the Conway polynomials in GAP (calls BrowseConwayPolynomials()),
• profile information for GAP functions (see Section 6.8),
• the list of GAP related bibliography entries in the file bibl/gap-publishednicer.bib of the Browse package (see Section 6.7),
• the GAP manuals (see Section 6.6),
• GAP operations and methods (calls BrowseGapMethods()),
• the installed GAP packages (calls BrowseGapPackages()),
• GAP's user preferences (see Section 6.10),
• the contents of the TomLib package [NMP13] (only if this package is loaded, see Section A.4).
Other GAP packages may add more overviews, using the function BrowseGapDataAdd (6.11-2).
Except that one table cell is always selected, the full functionality of the function NCurses.BrowseGeneric (4.3-1) is available.
gap> n:= [ 14, 14, 14 ];; # "do nothing"
gap> # open the overview of Conway polynomials
gap> BrowseData.SetReplay( Concatenation( "/Conway Polynomials",
> [ NCurses.keys.ENTER, NCurses.keys.ENTER ], "srdddd", n, "Q" ) );
gap> BrowseGapData();;
gap> # open the overview of GAP packages
gap> BrowseData.SetReplay( Concatenation( "/GAP Packages",
> [ NCurses.keys.ENTER, NCurses.keys.ENTER ], "/Browse",
> [ NCurses.keys.ENTER ], "n", n, "Q" ) );
gap> BrowseGapData();;
gap> BrowseData.SetReplay( false );
Implementation remarks: The browse table has a static header, a dynamic footer showing the description of the currently selected entry, no row or column labels, and exactly one column of fixed width equal to the screen width. If the chosen application has a return value then this is returned by BrowseGapData, otherwise nothing is returned. The component work.SpecialGrid of the browse table is used to draw a border around the list of choices and another border around the footer. Only one mode is needed in which an entry is selected.
The code can be found in the file app/gapdata.g of the package.
##### 6.11-2 BrowseGapDataAdd
‣ BrowseGapDataAdd( title, call, ret, documentation ) ( function )
This function extends the list BrowseData.GapDataOverviews by a new entry. The list is used by BrowseGapData (6.11-1).
title must be a string of length at most 76; it will be shown in the browse table that is opened by BrowseGapData (6.11-1). call must be a function that takes no arguments; it will be called when title is "clicked". ret must be true if call has a return value and if BrowseGapData (6.11-1) shall return this value, and false otherwise. documentation must be a string that describes what happens when the function call is called; it will be shown in the footer of the table opened by BrowseGapData (6.11-1) when title is selected.
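A sketch of such a registration follows; the title, the called function, and the description are made up for illustration, and the call reuses BrowseDirectory (6.12-1).
gap> if IsBound( BrowseGapDataAdd ) and IsBound( BrowseDirectory ) then
>      BrowseGapDataAdd( "Files in the Current Directory",   # title
>        function() return BrowseDirectory( "." ); end,      # run on "click"
>        true,     # BrowseGapData shall return this function's return value
>        "an overview of the files in the current directory" );
>    fi;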
#### 6.12 Navigating in a Directory Tree
A natural way to visualize the contents of a directory is via a tree whose leaves denote plain files, and the other vertices denote subdirectories. Browse provides a function based on NCurses.BrowseGeneric (4.3-1) for displaying such trees; the leaves correspond to the data rows, and the other vertices correspond to category rows.
##### 6.12-1 BrowseDirectory
‣ BrowseDirectory( [dir] ) ( function )
Returns: a list of the "clicked" filenames.
If no argument is given then the contents of the current directory are shown, see DirectoryCurrent (Reference: DirectoryCurrent). If a directory object dir (see Directory (Reference: Directory)) is given as the only argument then the contents of this directory are shown; alternatively, dir may also be a string, which is then understood as a directory path.
The full functionality of the function NCurses.BrowseGeneric (4.3-1) is available.
gap> n:= [ 14, 14, 14 ];; # "do nothing"
gap> BrowseData.SetReplay( Concatenation(
> "q", # leave the selection
> "X", # expand all categories
> "/filetree", [ NCurses.keys.ENTER ], # search for "filetree"
> n, "Q" ) ); # and quit
gap> dir:= DirectoriesPackageLibrary( "Browse", "" )[1];;
gap> if IsBound( BrowseDirectory ) then
> BrowseDirectory( dir );
> fi;
gap> BrowseData.SetReplay( false );
Implementation remarks: The browse table has a static header, no footer, no row or column labels, and exactly one data column. The category rows are precomputed, i. e., they do not arise from a table column. The tree structure is visualized via a special grid that is shown in the separator column in front of the table column; the width of this column is computed from the largest nesting depth of files. For technical reasons, category rows representing empty directories are realized via "dummy" table rows; a special ShowTables function guarantees that these rows are always hidden.
When a data row or an entry in this row is selected, "click" adds the corresponding filename to the result list. Initially, the first row is selected. (So if you want to search in the whole tree then you should first quit this selection.)
The category hierarchy is computed using DirectoryContents (Reference: DirectoryContents).
This function is available only if the GAP package IO (see [Neu07]) is available, because the check for cycles uses the function IO_stat (IO: IO_stat) from this package.
The code can be found in the file app/filetree.g of the package.
#### 6.13 A Puzzle
We consider an m by n rectangle of squares numbered from 1 to m n - 1, the bottom right square is left empty. The numbered squares are permuted by successively exchanging the empty square and a neighboring square such that in the end, the empty cell is again in the bottom right corner.
 7 13 14  2
 1  4 15 11
 6  8  3  9
10  5 12
The aim of the game is to order the numbered squares via these moves.
For the case m = n = 4, the puzzle is (erroneously?) known under the name "Sam Loyd's Fifteen", see [Bog] and [OR] for more information and references.
##### 6.13-1 BrowsePuzzle
‣ BrowsePuzzle( [m, n[, pi]] ) ( function )
Returns: a record describing the initial and final status of the puzzle.
This function shows the rectangle in a window.
The arguments m and n are the dimensions of the rectangle, the default for both values is 4. The initial distribution of the numbers in the squares can be prescribed via a permutation pi, the default is a random element in the alternating group on the points 1, 2, ..., m n - 1. (Note that the game does not always have a solution.)
In any case, the empty cell is selected, and the selection can be moved to neighboring cells via the arrow keys, or to any place in the same row or column via a mouse click.
The return value is a record with the components dim (the pair [ m, n ]), init (the initial permutation), final (the final permutation), and steps (the number of transpositions that were needed).
gap> BrowseData.SetReplay( Concatenation(
> BrowsePuzzleSolution.steps, "Q" ) );
gap> BrowsePuzzle( 4, 4, BrowsePuzzleSolution.init );;
gap> BrowseData.SetReplay( false );
An implementation using only mouse clicks but no key strokes is available in the GAP package XGAP (see [CN04]).
Implementation remarks: The game board is implemented via a browse table, without row and column labels, with static header, dynamic footer, and individual minyx function. Only one mode is needed in which one cell is selected, and besides the standard actions for quitting the table, asking for help, and saving the current window contents, only the four moves via the arrow keys and mouse clicks are admissible.
Some standard NCurses.BrowseGeneric (4.3-1) functionality, such as scrolling, selecting, and searching, is not available in this application.
The code can be found in the file app/puzzle.g of the package.
#### 6.14 Peg Solitaire
Peg solitaire is a board game for one player. The game board consists of several holes some of which contain pegs. In each step of the game, one peg is moved horizontally or vertically to an empty hole at distance two, by jumping over a neighboring peg which is then removed from the board.
We consider the game that in the beginning, exactly one hole is empty, and in the end, exactly one peg is left.
##### 6.14-1 PegSolitaire
‣ PegSolitaire( [format, ][nrholes, ][twoModes] ) ( function )
This function shows the game board in a window.
If the argument format is one of the strings "small" or "large" then small or large pegs are shown, the default is "small".
Three shapes of the game board are supported, with 33, 37, and 45 holes, respectively; this number can be specified via the argument nrholes, the default is 33. In the cases of 33 and 45 holes, the position of both the initial hole and the destination of the final peg is the middle cell, whereas in the case of 37 holes, the initial hole is in the top left position and the final peg has to be placed in the bottom right position.
If a Boolean twoModes is entered as an argument then it determines whether a browse table with one or two modes is used; the default false yields a browse table with only one mode.
In any case, one cell of the board is selected, and the selection can be moved to neighboring cells via the arrow keys. A peg in the selected cell jumps over a neighboring peg to an adjacent hole via the j key followed by the appropriate arrow key.
gap> for n in [ 33, 37, 45 ] do
> BrowseData.SetReplay( Concatenation(
> PegSolitaireSolutions.( String( n ) ), "Q" ) );
> PegSolitaire( n );
> PegSolitaire( "large", n );
> PegSolitaire( n, true );
> PegSolitaire( "large", n, true );
> od;
gap> BrowseData.SetReplay( false );
For more information such as variations of the game and references, see [Köla]. Also the solutions stored in the variable PegSolitaireSolutions have been taken from this web page.
Implementation remarks: The game board is implemented via a browse table, without row and column labels, with static header, dynamic footer, and individual minyx function. In fact, two implementations are provided. The first one needs only one mode in which one cell is selected; moving the selection and jumping with the peg in the selected cell in one of the four directions are the supported user actions. The second implementation needs two modes, one for moving the selection and one for jumping.
Some standard NCurses.BrowseGeneric (4.3-1) functionality, such as scrolling, selecting, and searching, is not available in this application.
The code can be found in the file app/solitair.g of the package.
#### 6.15 Rubik's Cube
We visualize the transformations of Rubik's magic cube in a model that is given by "unfolding" the faces and numbering the 48 movable small faces; the permutations below act on the points 1 to 48, and the six fixed center faces are not numbered.
Clockwise turns of the six layers (top, left, front, right, back, and down) are represented by the following permutations.
gap> cubegens := [
> ( 1, 3, 8, 6)( 2, 5, 7, 4)( 9,33,25,17)(10,34,26,18)(11,35,27,19),
> ( 9,11,16,14)(10,13,15,12)( 1,17,41,40)( 4,20,44,37)( 6,22,46,35),
> (17,19,24,22)(18,21,23,20)( 6,25,43,16)( 7,28,42,13)( 8,30,41,11),
> (25,27,32,30)(26,29,31,28)( 3,38,43,19)( 5,36,45,21)( 8,33,48,24),
> (33,35,40,38)(34,37,39,36)( 3, 9,46,32)( 2,12,47,29)( 1,14,48,27),
> (41,43,48,46)(42,45,47,44)(14,22,30,38)(15,23,31,39)(16,24,32,40)
> ];;
GAP computations analyzing this permutation group have been part of the announcements of GAP 3 releases. For a GAP 4 equivalent, see [Sch]. For more information and references (not GAP related) about Rubik's cube, see [Kölb].
##### 6.15-1 BrowseRubiksCube
‣ BrowseRubiksCube( [format, ][pi] ) ( function )
This function shows the model of the cube in a window.
If the argument format is one of the strings "small" or "large" then small or large cells are shown, the default is "small".
The argument pi is the initial permutation of the faces, the default is a random permutation in the cube group, see Reference: Random.
Supported user inputs are the keys t, l, f, r, b, and d for clockwise turns of the six layers (top, left, front, right, back, and down), and the corresponding capital letters for counter-clockwise turns. If the terminal supports colors, according to the global variable NCurses.attrs.has_colors (2.2-1), the input s switches between a screen that shows only the colors of the faces and a screen that shows the numbers; the color screen is the default.
The return value is a record with the components inputs (a string describing the user inputs), init, and final (the initial and final permutation of the faces, respectively). (The inputs component can be used for the replay feature, see the example below.)
In the following example, a word in terms of the generators is used to initialize the browse table, and then the letters in this word are used as a series of input steps, except that in between, the display is switched once from colors to numbers and back.
gap> choice:= List( [ 1 .. 30 ], i -> Random( [ 1 .. 6 ] ) );;
gap> input:= List( "tlfrbd", IntChar ){ choice };;
gap> BrowseData.SetReplay( Concatenation(
> input{ [ 1 .. 20 ] },
> "s", # switch to number display
> input{ [ 21 .. 25 ] },
> "s", # switch to color display
> input{ [ 26 .. 30 ] },
> "Q" ) );; # quit the browse table
gap> BrowseRubiksCube( Product( cubegens{ choice } ) );;
gap> BrowseRubiksCube( "large", Product( cubegens{ choice } ) );;
gap> BrowseData.SetReplay( false );
Implementation remarks: The cube is implemented via a browse table, without row and column labels, with static header, dynamic footer, and individual minyx function. Only one mode is needed, and besides the standard actions for quitting the table, asking for help, and saving the current window contents, only the twelve moves and the switch between color and number display are admissible.
Switching between the two display formats is implemented via a function work.Main, so this relies on not caching the formatted cells in work.main.
Row and column separators of the browse table are whitespace of height and width one. The separating lines are drawn using an individual SpecialGrid function in the browse table. Note that the relevant cells do not form a rectangular array.
Some standard NCurses.BrowseGeneric (4.3-1) functionality, such as scrolling, selecting, and searching, is not available in this application.
The code can be found in the file app/rubik.g of the package.
#### 6.16 Changing Sides
We consider a 5 by 5 board of squares filled with two types of stones, as follows. The square in the middle is left empty.
X X X X X
O X X X X
O O   X X
O O O O X
O O O O O
The aim of the game is to exchange the two types of stones via a sequence of single steps that move one stone to the empty position on the board. Only those moves are allowed that increase or decrease one coordinate by 2 and increase or decrease the other by 1; these are the allowed moves of the knight in chess.
This game has been part of the MacTutor system [OR00].
##### 6.16-1 BrowseChangeSides
‣ BrowseChangeSides( ) ( function )
This function shows the game board in a window.
Each move is encoded as a sequence of three arrow keys; there are 24 admissible inputs.
gap> for entry in BrowseChangeSidesSolutions do
> BrowseData.SetReplay( Concatenation( entry, "Q" ) );
> BrowseChangeSides();
> od;
gap> BrowseData.SetReplay( false );
Implementation remarks: The game board is implemented via a browse table, without row and column labels, with static header, dynamic footer, and individual minyx function. Only one mode is needed, and besides the standard actions for quitting the table, asking for help, and saving the current window contents, only moves via combinations of the four arrow keys are admissible.
The separating lines are drawn using an individual SpecialGrid function in the browse table.
Some standard NCurses.BrowseGeneric (4.3-1) functionality, such as scrolling, selecting, and searching, is not available in this application.
The code can be found in the file app/knight.g of the package.
#### 6.17 Sudoku
We consider a 9 by 9 board of squares. Some squares are initially filled with numbers from 1 to 9. The aim of the game is to fill the empty squares in such a way that each row, each column, and each of the marked 3 by 3 subsquares contains all numbers from 1 to 9. A proper Sudoku game is defined as one with a unique solution. Here is an example.
(Sample board omitted: its nine rows contain the given entries 1 5 9; 4 6 5; 5 2 3; 6 4 8; 8 9; 5 3; 4 9; 5 7 1; and 2 8, but the column positions of these entries are not recoverable from the flattened text.)
The Browse package contains functions to create, play and solve these games. There are basic command line functions for this, which we describe first, and there is a user interface PlaySudoku (6.17-8) which is implemented using the generic browse functionality described in Chapter 4.
##### 6.17-1 Sudoku.Init
‣ Sudoku.Init( [arg] ) ( function )
Returns: A record describing a Sudoku board or fail.
This function constructs a record describing a Sudoku game. This is used by the other functions described below. There are several possibilities for the argument arg.
arg is a string
The entries of a Sudoku board are numbered row-wise from 1 to 81. A board is encoded as a string as follows. If one of the numbers 1 to 9 is in entry i then the corresponding digit character is written in position i of the string. If an entry is empty then any character except '1' to '9' or '|' is written in position i of the string. Trailing empty entries can be left out. Afterwards, '|'-characters can be inserted into the string (for example to mark line ends). Such strings can be used for arg.
arg is a matrix
A Sudoku board can also be encoded as a 9 by 9 matrix, that is, a list of 9 lists of length 9, whose (i,j)-th entry is the (i,j)-th entry of the board as an integer if it is not empty. Empty entries of the board correspond to unbound entries in the matrix.
arg is a list of integers
Instead of the matrix just described the argument can also be given by the concatenation of the rows of the matrix (so, a list of integers and holes).
gap> game := Sudoku.Init(" 3 68 | 85 1 69| 97 53| 79 |\
> 6 47 |45 2 |89 2 1 | 4 8 7 | ");;
##### 6.17-2 Sudoku.Place
‣ Sudoku.Place( game, i, n ) ( function )
‣ Sudoku.Remove( game, i ) ( function )
Returns: The changed game.
Here game is a record describing a Sudoku board, as returned by Sudoku.Init (6.17-1). The argument i is the number of an entry, counted row-wise from 1 to 81, and n is an integer from 1 to 9 to be placed on the board. These functions change game.
Sudoku.Place tries to place number n on entry i. It is an error if entry i is not empty. The number is not placed if n is already used in the row, column or subsquare of entry i. In this case the component game.impossible is bound.
Sudoku.Remove tries to remove the number placed on position i of the board. It does not change the board if entry i is empty, or if entry i was given when the board game was created. In the latter case game.impossible is bound.
gap> game := Sudoku.Init(" 3 68 | 85 1 69| 97 53| 79 |\
> 6 47 |45 2 |89 2 1 | 4 8 7 | ");;
gap> Sudoku.Place(game, 1, 3);; # 3 is already in first row
gap> IsBound(game.impossible);
true
gap> Sudoku.Place(game, 1, 2);; # 2 is not in row, col or subsquare
gap> IsBound(game.impossible);
false
##### 6.17-3 Sudoku.RandomGame
‣ Sudoku.RandomGame( [seed] ) ( function )
Returns: A pair [str, seed] of string and seed.
The optional argument seed, if given, must be an integer. If it is not given, some random integer from the current GAP session is used. This function returns a random proper Sudoku game, where the board is described by a string str, as explained in Sudoku.Init (6.17-1). With the same seed, the same board is returned.
The games computed by this function have the property that after removing any given entry the puzzle no longer has a unique solution.
gap> Sudoku.RandomGame(5833750);
[ " 1 2 43 2 68 72 8 6 2 1 9 8 8 3 9 \
47 3 7 18 ", 5833750 ]
gap> last = Sudoku.RandomGame(last[2]);
true
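The stated minimality property can be checked for a single given entry along the following lines; Sudoku.UniqueSolution is described below (6.17-7), and the claim is that blanking out any given entry destroys uniqueness.
gap> str:= ShallowCopy( Sudoku.RandomGame( 5833750 )[1] );;
gap> pos:= First( [ 1 .. Length( str ) ], i -> str[i] in "123456789" );;
gap> str[ pos ]:= ' ';;  # blank out the first given entry
gap> Sudoku.UniqueSolution( Sudoku.Init( str ) );
false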
##### 6.17-4 Sudoku.SimpleDisplay
‣ Sudoku.SimpleDisplay( game ) ( function )
Displays a Sudoku board on the terminal. (But see PlaySudoku (6.17-8) for a fancier interface.)
gap> game := Sudoku.Init(" 3 68 | 85 1 69| 97 53| 79 |\
> 6 47 |45 2 |89 2 1 | 4 8 7 | ");;
gap> Sudoku.SimpleDisplay(game);
3 | 6|8
85| 1| 69
9|7 | 53
-----------
| |79
6 | 47|
45 | 2 |
-----------
89 | 2| 1
4 | 8| 7
| |
##### 6.17-5 Sudoku.DisplayString
‣ Sudoku.DisplayString( game ) ( function )
The string returned by this function can be used to display the Sudoku board game on the terminal, using PrintFormattedString (GAPDoc: PrintFormattedString). The result depends on the value of GAPInfo.TermEncoding.
gap> game := Sudoku.Init(" 3 68 | 85 1 69| 97 53| 79 |\
> 6 47 |45 2 |89 2 1 | 4 8 7 | ");;
gap> str:= Sudoku.DisplayString( game );;
gap> PrintFormattedString( str );
┏━━━┯━━━┯━━━┳━━━┯━━━┯━━━┳━━━┯━━━┯━━━┓
┃ │ 3 │ ┃ │ │ 6 ┃ 8 │ │ ┃
┠───┼───┼───╂───┼───┼───╂───┼───┼───┨
┃ │ 8 │ 5 ┃ │ │ 1 ┃ │ 6 │ 9 ┃
┠───┼───┼───╂───┼───┼───╂───┼───┼───┨
┃ │ │ 9 ┃ 7 │ │ ┃ │ 5 │ 3 ┃
┣━━━┿━━━┿━━━╋━━━┿━━━┿━━━╋━━━┿━━━┿━━━┫
┃ │ │ ┃ │ │ ┃ 7 │ 9 │ ┃
┠───┼───┼───╂───┼───┼───╂───┼───┼───┨
┃ │ 6 │ ┃ │ 4 │ 7 ┃ │ │ ┃
┠───┼───┼───╂───┼───┼───╂───┼───┼───┨
┃ 4 │ 5 │ ┃ │ 2 │ ┃ │ │ ┃
┣━━━┿━━━┿━━━╋━━━┿━━━┿━━━╋━━━┿━━━┿━━━┫
┃ 8 │ 9 │ ┃ │ │ 2 ┃ │ 1 │ ┃
┠───┼───┼───╂───┼───┼───╂───┼───┼───┨
┃ │ 4 │ ┃ │ │ 8 ┃ │ 7 │ ┃
┠───┼───┼───╂───┼───┼───╂───┼───┼───┨
┃ │ │ ┃ │ │ ┃ │ │ ┃
┗━━━┷━━━┷━━━┻━━━┷━━━┷━━━┻━━━┷━━━┷━━━┛
##### 6.17-6 Sudoku.OneSolution
‣ Sudoku.OneSolution( game ) ( function )
Returns: A completed Sudoku board that solves game, or fail.
Here game must be a Sudoku board as returned by Sudoku.Init (6.17-1). It is not necessary that game describes a proper Sudoku game (has a unique solution). If it has several solutions then one of them, chosen at random, is returned; if it has no solution then fail is returned.
gap> Sudoku.SimpleDisplay(Sudoku.OneSolution(Sudoku.Init(" 3")));
493|876|251
861|542|739
527|193|648
-----------
942|618|573
156|739|482
738|425|916
-----------
289|354|167
375|961|824
614|287|395
##### 6.17-7 Sudoku.UniqueSolution
‣ Sudoku.UniqueSolution( game ) ( function )
Returns: A completed Sudoku board that solves game, or false, or fail.
Here game must be a Sudoku board as returned by Sudoku.Init (6.17-1). It is not necessary that game describes a proper Sudoku game. If it has several solutions, then false is returned. If it has no solution, then fail is returned. Otherwise a board with the unique solution is returned.
gap> s := " 5 | 154 6 2 |9 5 3 |6 4 | 8 |8 9 53\
> | 5 | 4 7 2| 91 8 ";;
gap> sol := Sudoku.UniqueSolution(Sudoku.Init(s));;
gap> Sudoku.SimpleDisplay(sol);
438|219|576
715|436|928
962|758|314
-----------
694|573|281
153|862|749
827|941|653
-----------
281|695|437
546|387|192
379|124|865
##### 6.17-8 PlaySudoku
‣ PlaySudoku( [arg] ) ( function )
Returns: A record describing the latest status of a Sudoku board.
This function allows one to solve Sudoku puzzles interactively. There are several possibilities for the optional argument arg. It can either be a string, matrix, or list of holes and integers as described in Sudoku.Init (6.17-1), or a board as returned by Sudoku.Init (6.17-1). Furthermore, arg can be an integer or can be omitted; in that case Sudoku.RandomGame (6.17-3) is called to produce a random game.
The usage of this function is self-explanatory; a help screen is available. Two inputs with a particular action deserve mention: Asking for a hint either fills an empty entry or tells you that there is no solution (so you must delete some entries and try others). Asking for the solution makes the program solve the puzzle, or tells you that there is no solution or no unique solution.
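For example, one can start with the random game from the seed used in Section 6.17-3; the replay input below just opens the board and quits it again.
gap> BrowseData.SetReplay( "Q" );
gap> PlaySudoku( 5833750 );;
gap> BrowseData.SetReplay( false );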
Implementation remarks: The game board is implemented via a browse table, without row and column labels, with static header, dynamic footer, and individual minyx function. Two modes are supported, with the standard actions for quitting the table and asking for help; one cell is selected in each mode. The first mode provides actions for moving the selected cell via arrow keys, for changing the value in the selected cell, for getting a hint or the (unique) solution. (Initial entries of the matrix cannot be changed via user input. They are shown in boldface.) The second mode serves for error handling: When the user enters an invalid number, i. e., a number that occurs already in the current row or column or subsquare, then the application switches to this mode, which causes that a message is shown in the footer, and the invalid entry is shown in red and blinking; similarly, error mode is entered if a hint or solution does not exist.
The separating lines are drawn using an individual SpecialGrid function in the browse table, since they cannot be specified within the generic browse table functions.
Some standard NCurses.BrowseGeneric (4.3-1) functionality, such as scrolling, selecting, and searching, is not available in this application.
The code can be found in the file app/sudoku.g of the package.
##### 6.17-9 Sudoku.HTMLGame
‣ Sudoku.HTMLGame( game ) ( function )
‣ Sudoku.LaTeXGame( game ) ( function )
Returns: A string with HTML or LaTeX code, respectively.
The argument of these functions is a record describing a Sudoku game. These functions return code for including the current status of the board into a webpage or a LaTeX document.
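A sketch, reusing the board from the earlier examples; only the type of the return values is checked here.
gap> game := Sudoku.Init(" 3 68 | 85 1 69| 97 53| 79 |\
> 6 47 |45 2 |89 2 1 | 4 8 7 | ");;
gap> IsString( Sudoku.HTMLGame( game ) );
true
gap> IsString( Sudoku.LaTeXGame( game ) );
true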
#### 6.18 Utility for GAP Demos
This application can be used with GAP if the user interface has readline support. The purpose is to simplify the typing during a demonstration of GAP commands.
The file format to specify GAP code for a demonstration is very simple: it contains blocks of lines with GAP input, separated by lines starting with the sequence #%. Comments in such a file can be added to one or several lines starting with #%. Here is the content of an example file demo.demo:
#% A comment can have several lines.
#% Here is a multi-line input block:
g := MathieuGroup(11);;
cl := ConjugacyClasses(g);
#% Calling a help page
?MathieuGroup
#% The next line contains a comment in the GAP session:
a := 12;; b := 13;; # assign two numbers
#%
a*b;
#%
(A single % at the beginning of a line also works as a separator.)
A demonstration can be loaded into a GAP session with the command
##### 6.18-1 LoadDemoFile
‣ LoadDemoFile( demoname, demofile[, singleline] ) ( function )
Returns: Nothing.
This function loads a demo file in the format described above. The argument demoname is a string containing a name for the demo, and demofile is the file name containing the demo.
If the optional argument singleline is given and its value is true, the demo behaves differently with respect to input blocks that span several lines. By default full blocks are treated as a single input line for readline (maybe spanning several physical lines in the terminal). If singleline is true then all input lines of a block except the last one are sent to GAP and are evaluated automatically before the last line of the block is displayed.
gap> dirs := DirectoriesPackageLibrary("Browse");;
gap> demofile := Filename(dirs, "../app/demo.demo");;
gap> LoadDemoFile("My first demo (single lines)", demofile, true);
Many demos can be loaded at the same time; dedicated keys are used to work through them.
A menu key leads to a (Browse) menu which allows one to choose a demo to start (if several are loaded), to stop a demo, or to move to another position in the current demo (e.g., to go back to a previous point or to skip part of a demo).
The next input block of the current demo is copied into the current input line of the GAP session by pressing the corresponding key. The line is not yet sent to GAP; press Return if you want to evaluate the input. (You can also still edit the input line before evaluation.)
So, in the simplest case a demo can be done by just pressing these two keys in turn. But it is always possible to type extra input during a demo by hand, or to change the input lines from the demo file before evaluation. It is no problem if commands are interrupted. During a demo you are in a normal GAP session; this application only saves you some typing. The input lines from the demo are put into the history of the session as if they were typed by hand.
Try it yourself with the two demos loaded in the example. This also shows the different behaviour between default and single line mode.
http://math.stackexchange.com/questions/379927/a-simple-3rd-grade-problem-or-is-it | # A “simple” 3rd grade problem…or is it?
So this is supposed to be really simple, and it's taken from the following picture:
Text-only:
It took Marie $10$ minutes to saw a board into $2$ pieces. If she works just as fast, how long will it take for her to saw another board into $3$ pieces?
I don't understand what's wrong with this question. I think the student answered the question wrong, yet my friend insists the student got the question right.
I feel like I'm missing something critical here. What am I getting wrong here?
-
This is simultaneously wonderful and sad. Wonderful for the student who was level-headed enough to answer this question correctly, and sad that this teacher's mistake could be representative of the quality of elementary school math education. – Jared May 3 '13 at 3:38
I think the issue is that the language and image are incongruent. The question should have been "10 minutes to saw 2 pieces from a board" (2 cuts), then the teachers answer would be correct. As it is stated, it implies sawing a board in half (1 cut). – Vijay May 3 '13 at 5:23
The language and the image are in perfect agreement- the image shows two pieces resulting from a single cut. – JayL May 3 '13 at 8:01
It takes 0 minutes to cut it into 1 piece, so 3 pieces should take 3*0 minutes then – I am very happy May 3 '13 at 12:25
Google search of this image shows it's quite popular... good way to get some quick points on MSE. – Fuhrmanator May 3 '13 at 12:54
## 25 Answers
Haha! The student probably has a more reasonable interpretation of the question.
Of course, cutting one thing into two pieces requires only one cut! Cutting something into three pieces requires two cuts!
------------------------------- 0 cuts/1 piece/0 minutes
------------|------------------ 1 cut/2 pieces/10 minutes
------|------------|----------- 2 cuts/3 pieces/20 minutes
This is a variation of the "fence post" problem: how many posts do you need to build a 100 foot long fence with 10 foot sections between the posts?
Answer: 11. You have to draw the problem to get it... See below, and count the posts!
|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
0-----10----20----30----40----50----60----70----80----90---100
-
Wow, I feel completely stupid. That was so obvious, yet I didn't figure it out. I guess it isn't too late to start at square one – venidi vidici vicini May 3 '13 at 3:36
In all fairness, neither did whoever graded this problem. This question made me smile :) – Gyu Eun Lee May 3 '13 at 3:37
This is an example of a red herring in a word problem. The number of pieces (2) distracts from the actual variable - the number of cuts (1). I think this is a very common (and important) technique to teach children, because it happens so often in solving real world problems – xdumaine May 3 '13 at 17:20
Agreed and not only that but the question does say "a board" into "two pieces." A board = one board. Going from one board to two boards takes one cut. I can't see how this could be interpreted any other way. They don't talk about sawing equal length pieces or cutting two equal pieces from a very long board needing 3 cuts (you would have 3 pieces by then). anyway, my wife and I home school our kids because of stupid crap like this. – Eric M May 3 '13 at 20:02
Well, it depends on the topology of the board. The teacher is right if the board has the topology of, say, a ring or a torus. ;) – Heidar May 24 '13 at 23:11
Well, the information is incomplete, so they're both right and wrong. Since the question is for $3^{rd}$ graders the correct answer should be $20$ minutes ($2$ cuts $\times$ $10$ min), though the teacher is right if you do cut it like this (first red, then green):
The problem is that the question doesn't say anything about how you have to cut it, so the blue cut would have been good enough too. That cut should only have taken a few seconds.
-
However if you look at the drawing of the piece of wood being cut, this approach would be extremely difficult. – Mark Adler May 3 '13 at 6:42
@Mark Yes, looking at the drawing of the piece of wood, I think the answer would rather be close to an hour... – Axel May 3 '13 at 7:48
The text is indeed incomplete, and this square is exactly what came to my mind. But then I noticed the picture next to the question. – ugoren May 3 '13 at 8:01
The question is hypothetical, and doesn't give the information about board size. So I think we might interpret 'as fast' to be that it means Marie can cut any board into 2 pieces in 10 minutes regardless of its size and length of the cut. – tia May 3 '13 at 8:32
+1 for the blue cut – kritzikratzi May 4 '13 at 14:38
The student was correct:
Sawing a board into two pieces requires exactly one cut to be made. Sawing the board into three pieces requires exactly two cuts...
Hence, if it took $\bf 10$ minutes to make one cut, then cutting a board twice, at the same pace, would take $\;2 \times 10 = \bf 20$ minutes.
The instructor should receive tutoring from the student, I'm afraid!
-
I'm curious: where did you find this image? – amWhy May 3 '13 at 3:47
@amWhy Diagonally across the image is a watermark that says zerooutoffive.com, where the image was at some point submitted. – Karl Kronenfeld May 3 '13 at 9:43
Retrosaur: Oh! I was just wondering if this came from actual work submitted by a student... Sometimes we see samples of work or questions/solutions where the educator went awry...that's all. ;-) $\;\;$ I didn't mean to question its authenticity. Just wondering if this actually happened to a student. – amWhy May 3 '13 at 13:11
@amWhy: This reminds me of the Monty Hall Problem and the firestorm that it raised! :-) +1 – Amzoti May 4 '13 at 0:22
100th upvote! Haha, enjoy the gold! – ABC May 6 '13 at 4:42
You can actually do it in ten minutes but your saw must look like this:
| |
| |
| |
| | <- cutting edges
| |
| |
+--+--+
| <- handle
|
:-)
-
Well of course... but why 2 blades, why not 10 or 100. Obviously it could not be done in 10 minutes. Adding a second blade doubles the friction, so to complete the cut with 2 blades in the same time she'd have to increase the force applied, and/or the rate she moved the blade. But, if she could increase the force and rate, she could make a single cut with 1 blade in perhaps 5 minutes, or even 1 minute, invalidating the relationship of time, and any two cuts. So, it's clear she must work with the same force and rate. Therefore, with 2 blades it would take 20 minutes. Still 10 minutes per cut. – Kevin Fegan May 4 '13 at 20:37
This is definitely a mathematician's answer! – bob.sacamento May 5 '13 at 17:38
@PyRulez: Nope. What's involved here is surface energy. To create new surface area, you need to supply this energy to the material by doing work on it. As the blade thins out, the closer the work you expend becomes to the minimum surface energy. That's asymptotic behavior with asymptote most definitely not at zero. – Kuba Ober May 6 '13 at 22:53
Use a circular saw: 1 minute. – naught101 May 7 '13 at 3:03
And, bods, for the love of $DEITY, please don't vote this up so far that it exceeds the correct answers. It was just a bit of humor (hence the community wiki) and I never expected it to be quite this popular - if it starts to look like it's threatening the correct answers, I'll have to delete it, and the world will be a drabber place :-) – paxdiablo May 7 '13 at 5:51

Another correct answer would be 10 minutes. One could infer, "If she works just as fast," that "work" is the complete amount of time to do the job.

-

That's how I understood it at first. – Johannes May 3 '13 at 16:48

I once took a test that had the question "What is the last thing you should do before handing in a test?" and I answered (incorrectly, sadly), choice (c) which was "Staple a $20 bill to the test" which is the last thing I would ever consider doing. I tried to explain my reasoning to the teacher but they just laughed and said I was an existential philosopher... – Michael May 3 '13 at 20:59
@Michael That is why mathematics puzzles as english words seldom often have ambiguous answers. The english language is rich with ways to say one thing and mean another. – Jonathan May 3 '13 at 21:42
@Jonathan or, apparently, to say two opposite things at the same time…! (“seldom often”?) – Aant May 4 '13 at 17:33
@AAnt Whoops! I was going to say "words seldom have only one meaning", but when I changed the comment to refer to the puzzles, I left in an unintended "seldom". Well spotted. :) – Jonathan May 7 '13 at 7:15
The topologists among us may perhaps enjoy the following defense of the teacher's answer: if the board is in the shape of a ring, it will take two cuts to get two pieces, and three cuts to get 3 pieces.
-
Of course, if you look at the board itself, it clearly isn't in the shape of a ring... – Joe Z. Nov 28 '13 at 9:46
Actually an examination of the figure suggests that the question may originally have been phrased in terms of "cutting off" such and such number of pieces (with the other end being off the board). This formulation would make the teacher's answer correct. One can speculate that the phrasing was carelessly changed by someone somewhere along the way. – user72694 Nov 28 '13 at 19:16
(Also, if the board is in the shape of a ring, you do indeed have to "saw it into one piece" first, as some people were jokingly pointing out.) – Joe Z. Jan 17 '14 at 8:20
Let P : pieces
Let m : minutes
Let C : cuts
Let t : time per slice = 10
$$C(m) = m/t, \quad \{m \mid m < Life(Marie)\}, \quad \{C < length(board)\}$$
$$P(C(m)) = \lfloor C(m) \rfloor + 1, \quad \{m \mid m < Life(Marie)\}$$
You're right that this clearly isn't a simple grade 3 problem, but the answer is still 20. $$P(C(20)) = \lfloor C(20) \rfloor + 1 = \lfloor 20/10 \rfloor + 1 = \lfloor 2 \rfloor + 1 = 2 + 1 = 3 \text{ pieces}$$
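For what it's worth, the model is easy to sanity-check in GAP, the system used elsewhere in this document; Int gives the floor of a nonnegative rational.
gap> t:= 10;;                      # minutes per cut
gap> C:= m -> m / t;;              # cuts made after m minutes
gap> P:= m -> Int( C( m ) ) + 1;;  # pieces after m minutes
gap> P( 20 );
3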
-
+1 That's perfect! – Kevin Fegan May 4 '13 at 1:34
Notation overflow! – user13107 May 5 '13 at 16:39
I don't understand... How can you have an inequality C < length(board) where the LHS is an amount of cuts and the RHS is the length of the board? They're completely different units. Isn't this a case of comparing apples and pears? – Alderath Jan 10 '14 at 15:40
A ==|==|== B The length of the board is the distance from A to B, the length of C is a real number, the set of C has an integral length. The granularity of the length of the board is infinite, but the number of times Marie is capable of subdividing it is finite. The length of a cut occupies some arbitrary amount of space which is of the same unit in which you are measuring the board. – awiebe Jan 11 '14 at 3:23
lmao – Derek 朕會功夫 Aug 2 '14 at 22:21
The student answered the question the most correct way possible. First it is stated that Marie spends 10 minutes on sawing a board into two pieces. And then the student must answer how long it will take to saw another board into three pieces.
So we are not talking about chopping off pieces from an undefined source. We are talking about splitting a board.
However, it's poorly phrased because it's not explained how the board must be cut. It can be cut in infinitely many ways. Also, we don't know if the two boards are identical, so we must rely on assumptions here.
-
The teacher would be correct if the question was "... to cut two pieces from the end of a board ...", implying more strongly that the pieces were being cut so as to leave another remaining piece.
I don't think that a reasonable person would interpret the question in that way, though.
-
One part of what the teacher suggests is possible. Four pieces can be obtained in twenty minutes, because this takes only two cuts: cut the board in two, then lay the pieces parallel and cut again, such that the saw goes through both at the same time. (The assumption is that the extra energy doesn't take more time, just more effort per stroke: not realistic, but let's go with it.)
The mistake is interpolating between the two possibilities. If two pieces takes ten minutes, and four can be had in twenty, it does not follow that three pieces can be had in fifteen. However, six pieces can be had in thirty minutes which averages out to three in fifteen.
Suppose two workers are put on the job, and suppose it is somehow possible for them to divide a cut between themselves by attacking it from opposite sides without hindering each other, so they can meet in the middle in five minutes and complete the cut. They can execute this at the beginning to make one board into two. Then they double up the board, and each makes a ten minute double cut through both boards: six pieces in fifteen minutes, so basically three pieces per worker per fifteen minutes.
So if we think about just a one-off job carried out by a single person with a saw, then the student is right. However, if we were talking about productivity over multiple pieces, and possibly with multiple workers, then the teacher would also be right; the problem is, nothing of the sort is suggested in the way the question is posed.
-
+1 for pointing out the incorrect interpolation. – l0b0 May 5 '13 at 19:01
The student is absolutely correct (as Twiceler has correctly shown).
The time taken to cut a board into $2$ pieces (that is, $1$ cut): $10$ minutes
Therefore, the time taken to cut a board into $3$ pieces (that is, $2$ cuts): $20$ minutes
The question admits different weird interpretations, as the user "I am very happy" commented:
Time taken to cut it into one piece = $0$ minutes
So time taken to cut it into $3$ pieces = $0 \times 3$ minutes = $0$ minutes.
So $0$ can be an answer, but it is illogical, just like the teacher's answer,
and as Keltari said
Another correct answer would be 10 minutes. One could infer, "If she works just as fast," that "work" is the complete amount of time to do the job. -Keltari
This is logical, but you can be sure that this is not what the question meant;
the student has chosen the most relevant interpretation. The teacher's interpretation is mathematically incorrect.
The teacher may have put the question to give the students an idea of arithmetic progressions, and may have expected them to answer without thinking hard. In many schools, children in the lower grades are taught that the real numbers consist of all the numbers; only later, in higher grades, do they learn that complex numbers also exist. (I learned it just like that.) So the question may have been posed as a question on A.P., on the assumption that the students would not solve it the correct way.
Or as Jared rightly commented:
This is simultaneously wonderful and sad. Wonderful for the student who was level-headed enough to answer this question correctly, and sad that this teacher's mistake could be representative of the quality of elementary school math education. – Jared
Whatever the reason may be, there is no doubt that the student answered the question correctly and that the teacher's answer is illogical.
-
You are probably right that the question is not looking for the answer I gave. However, you most certainly cannot be sure. You are making a decision based on assumptions. The age of the test takers and the wording of the question are points to consider. However, there is nothing to prove or disprove that my answer is, or is not, what they are looking for. – Keltari Sep 15 '13 at 22:35
Teaching children, you have to be fair to them:
• Think as they do.
• Third graders are budding topologists, just very far from graduation.
• Children are honest and direct in their assessments - it would not likely occur to them to "cut into 3 pieces of equal length" because that was not in the question. Neither would they likely think of any of the alternative cuts offered here in the various answers, precisely because:
• People (especially children) tend to be very visual. DUH there is a picture of the saw cutting the board. The board IS a board, not a paper cutout, not a piece of rectangular plywood; and the way the saw is positioned very strongly implies the next cut would be made in a similar fashion. Honestly, how many of you looked at that picture and almost unconsciously imagined moving the saw to the right (or maybe to the left) of the current cut? I did - and I bet the children would too ... because:
• Children are hands on.
When I read the problem, I thought the test grader just muffed it misreading the scoring sheet. Wow - I guess I'm childish :-P
-
While this is a nice insight on teaching practices, this doesn't actually answer the question :C – venidi vidici vicini Sep 11 '13 at 2:48
@Retrosaur thx. Oh sorry, I tried to address the underlying cause of the problem. You say you "don't understand what's wrong with this question. " Take it at face value, as a child would. I bet every kid would get the answer correct! You ask, "What am I getting wrong here?" My answer is: The grader or teacher is being too much an adult, thinking too much into it. :)) And when I read "I feel like I'm missing something critical here," I was prompted to give the answer above. The missing link is knowing when to throttle back the intellect, before it overpowers playfulness. Is the answer clearer? – Howard Pautz Sep 11 '13 at 18:33
And a follow up @renegadeballoon aka Retrosaur - by the looks of it months later, very few other people 'actually answered the question' too :C (Just proves math people are as incorrigible as engineers ouch :-P ) – Howard Pautz Aug 7 '14 at 2:16
Didn't intend to come off that way, just that it didn't directly answer the question. But still very informative and thorough, which is why I gave it a +1; it'd do you some good to add just a small bit to the bottom explaining the actual answer – venidi vidici vicini Aug 7 '14 at 2:19
There is a similar problem that needs an argument quite analogous to what the student seems to have used:
A clock takes 12 seconds to strike 4 o'clock, how long will it take to strike 8 o'clock?
The interpretation is that the time is spent between the strikes, so the answer is 28 seconds instead of 24.
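The interval arithmetic, as a quick check (a minimal sketch; the function name is mine):

```python
def strike_time(hour, seconds_for_four=12):
    """Assume the measured time spans the gaps *between* strikes:
    striking `hour` o'clock has hour - 1 gaps."""
    gap = seconds_for_four / (4 - 1)   # 12 s over 3 gaps = 4 s per gap
    return gap * (hour - 1)

print(strike_time(8))  # 28.0, not 24
```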
-
I'd argue it would take at least another four hours. – Marcks Thomas May 8 '13 at 13:47
@Marcks That would be my argument too. Except, by the time 4 o'clock has been struck, we are now already 12 seconds into the first hour. So it would be 3 hours 59 minutes and 48 seconds. – daviewales May 18 '13 at 13:07
@daviewales Plus the 28 seconds to chime. 4 hours 16 seconds. – flem May 19 '13 at 13:17
Are we assuming the clock being measured is accurate? Does the time we use to measure the clock fall on a Daylight Savings day? When you say "how long" are you referring to time or distance the hands traveled? :D – Keltari Aug 28 '13 at 3:23
@Keltari Since Daylight Savings switches at 2:00, which is not between 4 o'clock and 8 o'clock, it's irrelevant to the question. – Joe Z. Jan 17 '14 at 8:18
The answer will have to be 20. If it takes 'Marie' 10 minutes to cut the board into two pieces, then that means it has taken her 10 minutes to make that chop.
Three pieces would require two chops; therefore the teacher is wrong:
2 * 10 = 20
-
The student is actually right. The teacher is wrong... – Michal B. May 3 '13 at 12:22
Why does this answer have a downvote? It is also correct, just like the student's answer. – Mowgli May 3 '13 at 14:20
I'm assuming it was down voted because it said the student was wrong then they weren't. – CramerTV May 3 '13 at 18:14
Since all of their logic is correct, I assume it was just a mistake in typing 'student' instead of 'teacher'. – Kevin Fegan May 4 '13 at 1:40
Absolutely, it was a typo in the haste of things. The logic was simple enough – kaiten65 Dec 21 '14 at 22:31
Considering they show a drawing of the piece of wood on the problem itself, the assumption is the cut would be made in the same way, hence the student was right to start with.
-
I would think of it this way: how long would it take to cut it into 1 piece... 0 minutes, because it is already in one piece. The model is: time = cuts x 10, since 1 cut = 10 minutes.
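The same fencepost model in code (a minimal sketch; names are illustrative):

```python
def sawing_time(pieces, minutes_per_cut=10):
    """Cutting a board into n pieces end to end takes n - 1 cuts."""
    return (pieces - 1) * minutes_per_cut

print(sawing_time(2))  # 10 -> matches the given information
print(sawing_time(3))  # 20 -> the student's answer
```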
-
Well, I have something else in mind. I know it is not that practical, but still I want to share my views about the question.
Assume the board is 1 meter long and we have to cut it into 3 pieces of equal length, i.e. finally each of the pieces should be 1/3 meter long (see the picture given in the question). So we have to make two cuts, at length 1/3 and at length 2/3. Now note that after cutting it the first time (which will take 10 minutes), we will have two pieces. One is 1/3 meter long and the other portion is 2/3 meter long. Now we have to make a cut to the last portion (which is 2/3 meter long) and make it into two pieces.
Now the interesting part comes. If we assume the board has a uniform resistance against the saw, then after losing its one-third end, the board will lose its resistance uniformly (assuming resistance depends proportionally on length). In that case, it will take another $10\cdot(2/3)=6.67$ minutes to get the other two pieces.
Hence we need $16.67$ minutes in total.
I know it is not practical, still...
-
Since you are cutting across the width of the board, the resistance to the saw is a function of the width of the board, not the length of the board. – Kevin Fegan Sep 4 '13 at 0:47
Quite a random question. The answer would depend on where and how the cuts are made.
e.g. Let's say the $2$ blocks are identical (an assumption on my part), each $10$ cm $\times$ $5$ cm $\times$ $1$ cm. Say the first block is cut lengthwise, i.e. into 2 blocks of 10 cm × 5 cm × 0.5 cm. This means it took 10 minutes to cut through an area of $10\times5 = 50\,cm^2$.
Now let's look at the second block. Cut along the width to create 2 blocks, each 5 cm × 5 cm × 1 cm. Then you take one of these and cut along the thickness to create two pieces of 5 cm × 2.5 cm × 1 cm. In this case you have just gone through an area of $5\times1+5\times1 = 10\,cm^2$, so it should only take 2 minutes.
Of course, if this was a question in a $3^{rd}$ grade exam, none of the above is relevant. The way the question is written, the answer could be 20 (if you are cutting pieces off a long piece of wood as the diagram indicates) or 15 (if you cut a block in half and then use the second cut to cut one of the halves in half).
+++++++++++ | +++++++++++ | +++++++++++
+++++++++++ | +++++++++++ | +++++++++++
+++++++++++ | +++++++++++ | +++++++++++
+++++++++++ | +++++++++++ | +++++++++++
+++++++++++ | +++++++++++ | +++++++++++
Answer = 20
++++++++++ | +++++++++++++++++
++++++++++ | +++++++++++++++++
------------------| +++++++++++++++++
++++++++++ | +++++++++++++++++
++++++++++ | +++++++++++++++++
Answer = 15
That being said, it is a terribly poorly framed question.
-
Given the length of your second cut there, I'd say it would take 25 minutes, not 15. – Joe Z. May 22 '13 at 18:28
Sawing once takes 10 minutes and yields 2 pieces. So, since we obtain 3 pieces when we saw twice, it takes $2 \cdot 10 = 20$ minutes.
-
Perhaps if it had been "Can Marie saw the board into 3 pieces in 10 minutes?" then it would be correct. Maybe it was a misprint.
-
Both the teacher and the student are right. It depends on how you look at the problem. We have the rate: 10 minutes to saw 2 pieces.
First way to understand the problem (from the student's perspective): We have a board. We make one cut and get 2 pieces (the sawed-off part and the remaining part). From this point of view, every cut yields 2 pieces. So, the first cut (10 minutes) gives us 2 pieces, and the second cut (another 10 minutes, same rate) gives us 3 pieces. Total time = 10 + 10 = 20 minutes.
Second way to understand the problem (from the teacher's perspective): We have a board. We make one cut and get 1 piece (the sawed-off part; the remaining piece of the board is not counted). From this point of view, every cut yields only 1 piece. So, the rate is 10/2 = 5 minutes per piece: first cut (5 minutes), we get the 1st piece; second cut (5 minutes), the 2nd piece; third cut (5 minutes), the 3rd piece. Total time = 5 + 5 + 5 = 15 minutes.
-
Teacher's method: 10 minutes per 2 pieces, hence 5 minutes per piece; 3 pieces implies 5(3) = 15.
Common sense: 10 minutes for 1 cut of a board (which makes 2 pieces); therefore 3 pieces requires 2 cuts, hence 2(10) = 20.
Too much missing info. I just asked my boss this question and he said that the answer would make sense if you were slicing a square board in half and that took 10 minutes; then slicing one half in half would make it 3 pieces in half the time of the original cut, since it is half the size... 10 + 10/2 = 15.
-
The problem is "If she works just as fast" - that means she cuts with the same speed...
$$v = \frac{\Delta x}{\Delta t}$$
If Marie saws a board into 2 pieces in 10 minutes, that means 1 cut in 10 minutes ($v = \frac{1}{10}$ cuts per minute).
So to perform 2 cuts to obtain 3 pieces, we have:
$$\Delta t = \frac{\Delta x}{v} = \frac{2}{\left (\frac{1}{10} \right )} = 2\times 10 = 20$$
-
Well, it took her ten minutes to cut the board into two pieces. To cut something into two pieces, you have to slice it only one time. Also, the number of pieces needs to be one more than the number of cuts if it's cut evenly enough. This means to get three pieces, you need two cuts, so $10\times2=20$. Remember that she works at the same speed.
-
The student gave the correct answer, since it takes 2 cuts to make 3 pieces, each cut using up 10 minutes!
That's hilarious. The reason the student got the answer "right" and the teacher got it "wrong" is that the student actually thought through the question from first principles, realizing 2 cuts, each eating up 10 minutes, would be needed, while the teacher, used to thinking in "tricks", assumed an equal amount of time would be needed for each piece rather than for each cut!
I'm rather ashamed to say I thought that was the answer, too, until I tossed out all my preconceptions and looked at it from scratch using only what I was given, as the student clearly and correctly did! And this is really the essence of correct mathematical thinking, which the student has and the teacher has obviously lost laboring under inferior educational methodology: the correct approach to any mathematical problem is to begin from first principles, knowing only what is given. The power of mathematics is to produce a method of solution to a problem where none existed before. Sadly, most of our school mathematics, especially in America, is designed to produce purely practical thinking with spoon-fed algorithms, and original thinking is not only discouraged but indirectly punished.
-
http://math.stackexchange.com/questions/296689/show-that-y-sup-c | # Show that $y=\sup (C)$
Let $A \subseteq \mathbb{R}$ nonempty and bounded above. Also, let $C = \{x+100 : x \in A\}$
Consider $M$ in $\mathbb{R}$ such that $M < y$. Is $M-100$ an upper bound for $A$? Why?
Use this result to show that $y=\sup (C)$
-
What is $y$? ${}{}$ – Git Gud Feb 6 '13 at 23:11
@Git, if you put {}{}{} between dollar signs, they count as characters. – Gerry Myerson Feb 6 '13 at 23:13
@GerryMyerson Thanks! ${}{}$ – Git Gud Feb 6 '13 at 23:14
## 1 Answer
I don't really understand the question, but I'm going to find $\sup (C)$ and prove it is what it is. Hopefully that will be of some help to the OP.
Since $A$ is bounded above, $\sup (A)$ exists.
Let $\displaystyle s_A =\sup (A)$. Now let $s=s_A+100$.
1. Take $c\in C$ arbitrarily. We have $c=x+100$ for some $x\in A$. By definition of $s_A$ we have $x\leq s_A$, therefore $c=x+100\leq s_A+100=s$. This proves that $s$ is an upper bound for $C$.
2. We've now established that the set of upper bounds of $C$ isn't empty. So you can take an arbitrary upper bound of $C$, say $m$. Since $m$ is an upper bound of $C$ we know that for any $c\in C$, it is true that $c\leq m$, which means that for any $x\in A$ we have $x+100\leq m$, and it follows that for any $x\in A$, $x\leq m-100$. Therefore, since $x$ was arbitrary, $s_A\leq m-100$, so we get $s=s_A+100\leq m$. Since $m$ was an arbitrary upper bound for $C$, we've proved that $s$ is the smallest upper bound of $C$ and therefore $s=\sup (C)$.
-
I poorly worded the question but what you replied with helped me think about the problem in a different way. Thanks! – Math Student Feb 7 '13 at 0:51
https://www.physicsforums.com/threads/gravity-on-a-very-small-scale.521170/ | # Gravity on a very small scale
1. Aug 12, 2011
### ArcanaNoir
Hi.. umm, I'm from the math department, and um...I'm shy.... hello physics people...
So I saw this thing on tv, I think it was Through the Wormhole..I'm not sure..and um, don't judge me for watching "science for the uneducated masses"....
But anyway, they were talking about gravity, and they were examining gravity on a very small scale. They were conducting an experiment that measured gravity on a very small scale (the lab was underground to limit interference), and they said things like rush-hour traffic or airplanes could skew the data. I was wondering: at that small, small scale, don't other forces, like cohesion and adhesion and static electricity and other properties I don't know about, overwhelm the force of gravity? How do they know they are measuring gravity, and not some other force?
Just so it's not misunderstood, we're talking about gravity BETWEEN objects, not between the earth and an object.
2. Aug 12, 2011
### Staff: Mentor
Yes, the electromagnetic force that causes all atoms and molecules to stick together and governs most everyday observable effects vastly outdoes gravity. They know they are measuring gravity by ensuring that the setup of the experiment screens out as many of these effects as possible. Doing experiments in a vacuum chamber would almost eliminate most effects from colliding gas molecules for example. Shielding the chamber would reduce any cosmic rays or EM radiation from affecting it and electrical effects from building up. Note that these aren't specific examples of how they perform the experiments, as I don't know the exact setups. I'm just using them as overall examples.
http://www.hpmuseum.org/forum/thread-6459-post-72101.html | Accurate Normal Distribution for the HP67/97
06-26-2016, 09:34 PM (This post was last modified: 07-03-2016 02:46 PM by Dieter.)
Post: #1
Dieter Senior Member Posts: 1,477 Joined: Dec 2013
Accurate Normal Distribution for the HP67/97
Normal distribution for the HP67/97
This program for the HP67 and 97 evaluates various functions of the Standard Normal distribution:
• The lower tail cumulative distribution function P(z), i.e. the Normal integral from –∞ to z
• The upper tail cumulative distribution function Q(z), i.e. the Normal integral from z to +∞
• The two-sided cumulative distribution function A(z), i.e. the Normal integral from –z to +z
• The inverse of the one-sided CDF (quantile) z(P)
• The inverse of the two-sided CDF (quantile) z(A)
• The probability distribution function (PDF) Z(z)
Unlike the well-known programs, e.g. in the HP Statistic Pacs, this one uses different algorithms to achieve much better accuracy, even far out in the distribution tails. There are several methods to do so. Since the HP67 and 97 run rather slowly and memory is limited, while on the other hand there are always 26 available data registers that cannot be exchanged for more program steps, the chosen approach uses rational approximations which run reasonably fast. Here the nine coefficients are prestored in the data registers. These can be loaded by a data card – or by your preferred HP67/97 emulator.
Method and accuracy
The Normal CDF is evaluated by two different methods.
For 0 ≤ z ≤ 5 the upper-tail integral Q(z) is approximated by a (Near-)Minimax rational approximation:
$$\large Q(z) \approx e^{-\frac{z^2}{2}} \cdot \frac{1+a_1z+a_2z^2+a_3z^3+a_4z^4}{2+b_1z+b_2z^2+b_3z^3+b_4z^4+b_5z^5}$$
Using 10-digit coefficients the values are
$$\begin{array}{ll} a_1=0,7981006015 & b_1=3,191970353\\ a_2=0,3111510842 & b_2=2,169125520\\ a_3=0,06328636234 & b_3=0,7932255604\\ a_4=0,005716530175 & b_4=0,1587036976\\ & b_5=0,01432712100 \end{array}$$
If evaluated with sufficient precision, the relative error over the given domain is less than 2,6 E–10.
With a few more digits the error can be reduced to approx. 2,41 E–10.
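For readers porting this to other machines, here is a minimal Python sketch of the rational branch (just the formula above with the 10-digit coefficients, decimal points instead of commas; this is not Dieter's program, and double precision stands in for the 67/97's 10-digit BCD):

Code:
import math

# 10-digit coefficients from above
A = [0.7981006015, 0.3111510842, 0.06328636234, 0.005716530175]
B = [3.191970353, 2.169125520, 0.7932255604, 0.1587036976, 0.01432712100]

def q_rational(z):
    """Upper-tail Q(z) for 0 <= z <= 5 via the rational approximation."""
    num = 1.0
    den = 2.0
    for i, a in enumerate(A, start=1):
        num += a * z**i
    for i, b in enumerate(B, start=1):
        den += b * z**i
    return math.exp(-z*z/2) * num / den

print(q_rational(1.0))   # ~0.1586552539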
For z > 5 the well known continued fraction expansion is applied, here with 8 terms:
$$\large Q(z) \approx \frac{1}{\sqrt{2 \pi}} \cdot e^{-\frac{z^2}{2}} \cdot \cfrac{1}{z+\cfrac{1}{z+\cfrac{2}{z+\cfrac{3}{z+\dotsb}}}}$$
The expression is calculated from right to left, starting not with 8/z but 8/(z+1,38) which significantly improves the resulting accuracy for smaller z. This way the relative error for z > 5 stays below 1 E-10 – provided the calculation is performed with sufficient precision.
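The same continued fraction as a hedged Python sketch (assumes the imports from the sketch above; the 1,38 offset is applied to the innermost term):

Code:
def q_tail(z, terms=8, offset=1.38):
    """Q(z) for z > 5: evaluate the continued fraction from right to left,
    starting with terms/(z + offset) instead of terms/z."""
    cf = z + offset
    for k in range(terms, 0, -1):
        cf = z + k / cf
    return math.exp(-z*z/2) / math.sqrt(2*math.pi) / cf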
Due to the HP67/97's limitation to 10 significant digits and some numeric pitfalls the actual results on the HP67/97 will be less accurate. Since it is virtually impossible to verify the results over the complete domain I can only say that according to my results usually 9 significant digits (±1 unit) are achieved. See below for two exceptions.
If you find substantially larger errors, please report here.
The algorithm for the inverse (quantile function) first calculates a rough estimate by means of a simple rational approximation with an error of about ±0,002. The error of this first approximation is tailored for the following correction step that provides the final result. This is a very effective third-order extrapolation due to Abramowitz & Stegun (1972), p. 954. With sufficient precision this method is good for about 11 significant digits over the calculator's complete working range down to 1 E–99. Again, the actual accuracy on the 67/97 is less and may drop to about 9 digits. But there is an exception: due to digit cancellation, results very close to zero carry less significant digits, e.g. the quantile for a probability of 0,50003 is calculated as 7,5199 E–5. In such cases usually the remaining digits are fine, maybe within ±1 unit tolerance. So the result in FIX DSP 9 (0,000075199) should be OK.
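A condensed sketch of the scheme for an upper-tail probability 0 < q ≤ 0,5 (the wrapper that handles p > 0,5 and the sign flip is omitted; it reuses the q_rational/q_tail sketches above, with the estimate coefficients as in the program listing below):

Code:
def z_upper(q):
    """Rough estimate (|error| ~ 0.002), then the A&S third-order correction."""
    t = math.sqrt(-2.0 * math.log(q))
    x = t - (2.358 + 0.367*t) / (1.0 + t*(1.085 + 0.0665*t))
    x = max(x, 0.0)
    Q = q_rational(x) if x <= 5 else q_tail(x)
    pdf = math.exp(-x*x/2) / math.sqrt(2*math.pi)
    u = (Q - q) / pdf                     # Newton step, since Q'(x) = -pdf(x)
    return x + u + u*u*x/2 + u**3*(2*x*x + 1)/6

print(z_upper(0.05))   # ~1.644853627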
A similar limitation applies to the two-sided CDF for arguments very close to zero. Here you should not expect more than what you see in FIX DSP 9 mode (±1 digit).
Evaluating the PDF seems trivial, but accuracy may degrade significantly for large arguments of the exponential function. For example, $e^{-1000/7}$ = 9,076766360 E-63, but the 67/97 returns 9,076765971 E-63. The error is caused by the fact that the fractional part of the argument carries only seven decimals, which leads to an accuracy of merely seven significant digits. That's why the PDF is evaluated in a different way that requires three calls of $e^x$, but achieves better accuracy.
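The splitting trick (LBL E in the listing below), shown in Python: with z = i + f, $e^{-z^2/2} = e^{-i^2/2} \cdot e^{-if} \cdot e^{-f^2/2}$. Since i is an integer, the argument −i²/2 has at most one fractional digit (0 or 5) and is exactly representable, while i·f and f² are small enough to keep most of their fractional digits. In binary double precision this buys nothing, but on 10-digit BCD it does:

Code:
def pdf_split(z):
    """Z(z) as computed by LBL E: split z into integer part i and
    fractional part f so every exponential argument stays benign."""
    z = abs(z)
    i = float(int(z))
    f = z - i
    e = math.exp(-i*i/2) / math.exp(i*f) / math.sqrt(math.exp(f*f))
    return e / math.sqrt(2*math.pi)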
The program
Here comes the listing.
Code:
001 LBL B  002 CF 2  003 SF 3  004 GTO 0  005 LBL A  006 CF 2  007 CF 3  008 x<0?  009 SF 2  010 LBL 0
011 ABS  012 GSB 9  013 ENTER  014 F3?  015 +  016 ENTER  017 CHS  018 1  019 +  020 F2?
021 X<>Y  022 RTN  023 LBL 9  024 STO 0  025 8  026 STO I  027 5  028 RCL 0  029 x<=y?  030 GTO 1
031 RCL 0  032 RCL 0  033 1  034 .  035 3  036 8  037 +  038 LBL 2  039 RCL I  040 X<>Y
041 /  042 +  043 DSZ  044 GTO 2  045 RCL 0  046 GSB E  047 X<>Y  048 /  049 RTN  050 LBL 1
051 RCL 0  052 RCL 0  053 RCL 9  054 *  055 RCL 8  056 +  057 *  058 RCL 7  059 +  060 *
061 RCL 6  062 +  063 *  064 RCL 5  065 +  066 *  067 2  068 +  069 STO I  070 R↓
071 RCL 4  072 *  073 RCL 3  074 +  075 *  076 RCL 2  077 +  078 *  079 RCL 1  080 +
081 *  082 1  083 +  084 RCL I  085 /  086 RCL 0  087 GSB E  088 R↓  089 RCL I  090 *
091 RTN  092 LBL D  093 ABS  094 CHS  095 1  096 +  097 2  098 /  099 CF 2  100 GTO 0
101 LBL C  102 CF 2  103 ENTER  104 CHS  105 1  106 +  107 x>y?  108 SF 2  109 x>y?  110 X<>Y
111 LBL 0  112 STO D  113 LN  114 ENTER  115 +  116 CHS  117 √x  118 ENTER  119 ENTER  120 ENTER
121 .  122 3  123 6  124 7  125 *  126 2  127 .  128 3  129 5  130 8
131 +  132 X<>Y  133 .  134 0  135 6  136 6  137 5  138 *  139 1  140 .
141 0  142 8  143 5  144 +  145 R↑  146 *  147 1  148 +  149 /  150 -
151 x<0?  152 CLX  153 GSB 9  154 EEX  155 5  156 *  157 RCL E  158 /  159 RCL D  160 EEX
161 5  162 *  163 RCL E  164 /  165 -  166 EEX  167 5  168 /  169 ENTER  170 ENTER
171 RCL 0  172 x²  173 2  174 *  175 1  176 +  177 6  178 /  179 *  180 RCL 0
181 2  182 /  183 +  184 *  185 *  186 +  187 RCL 0  188 +  189 F2?  190 CHS
191 RTN  192 LBL E  193 STO 0  194 INT  195 x²  196 2  197 /  198 CHS  199 e^x  200 RCL 0
201 INT  202 LSTX  203 FRAC  204 *  205 e^x  206 /  207 RCL 0  208 FRAC  209 x²  210 e^x
211 √x  212 /  213 STO I  214 2  215 PI  216 *  217 1/x  218 √x  219 *  220 STO E  221 RTN
The program expects the coefficients of the rational approximation in R1...R9. If it runs on a real (hardware) 67/97 this can be done by preparing a (single track) data card. The values for the constants have already been mentioned. Be sure to enter all ten digits:
Code:
R1 = 7,981006015 E-01
R2 = 3,111510842 E-01
R3 = 6,328636234 E-02
R4 = 5,716530175 E-03
R5 = 3,191970353
R6 = 2,169125520
R7 = 7,932255604 E-01
R8 = 1,587036976 E-01
R9 = 1,432712100 E-02
The coefficients of the simple rational approximation for the quantile estimate are part of the program code. Of course they can just as well be stored in, say, the secondary registers S0...S3 and recalled from there. This will shorten the program and make the quantile calculation a tiny bit faster, but it requires a double-track data card.
Usage
Calculate the cumulative distribution function:
z [A]
The lower tail CDF P(z) is returned in X, the upper tail CDF Q(z) in Y.
Calculate the symmetric two-sided cumulative distribution function:
z [B]
The two-sided CDF A(z) is returned in X, the complement 1–A(z) in Y.
Calculate the quantile for a given lower-tail probability p:
p [C]
Calculate the quantile for a given two-sided symmetric probability p:
p [D]
Calculate the probability distribution function Z(z):
z [E]
Some examples
In a soda water factory a machine fills bottles with an average volume of 503 ml.
The content of the bottles varies slightly with a standard deviation of 5 ml.
Determine the probability that a random soda bottle contains less than 490 ml.
First calculate the Standard Normal variable z:
Code:
490 [ENTER] 503 [-] 5 [÷] -2,600000000
Now compute the lower tail CDF:
Code:
[A] 0,004661188 [x<>y] 0,995338812
So only 0,47% of all bottles will contain less than 490 ml while 99,53% exceed this volume.
How much of the production will fall within ±10 ml around the mean volume?
±10 ml equals ±2 standard deviations.
Code:
2 [B] 0,954499736
In which interval around the mean will 98% of the production fall?
So we are here looking for the two-sided quantile.
Code:
0,98 [D] 2,326347874
The tolerance interval is ±2,326 standard deviations.
In absolute milliliters this is...
Code:
5 [x] 11,63173937
So 98% of the production is within 503 ± 11,63 ml.
In the above example all digits displayed in FIX DSP 9 mode are exact.
Here are some other results and their accuracy:
Code:
Q(0,01) = 0,4960106435  (-2 ULP)
Q(1) = 0,1586552539  (exact)
P(2) = 0,9772498681  (exact)
P(-3,14) = 8,447391737 E-4  (+2 ULP)
P(-6,3) = 1,488228221 E-10  (-1 ULP)
A(2,3) = 0,9785517800  (exact)
A(0,0001) = 0,0000797888  (FIX DSP 9 result ...79789 is within 1 digit of 7,978845595 E-5)
z(0,9) = 1,281551565  (-1 ULP, truncated)
z(0,95) = 1,644853627  (exact)
z(0,99) = 2,326347874  (exact)
z(1E-99) = -21,16517934  (exact)
z(0,50001) = 0,000002506  (four digits of 2,506628175 E-6)
Caveat: this does not mean that this accuracy level can be guaranteed. I have not found a case where the result was not within 1 unit in the 9th place, but please do your own tests. As usual, remarks, corrections and error reports are welcome.
Finally, here are two (zipped) files for use with the Panamatic HP67 emulator. The first version implements the program listed above, the second version has the coefficients of the quantile estimate in registers S0...S3 and thus is a bit shorter.
Dieter
06-26-2016, 11:37 PM
Post: #2
bshoring Member Posts: 231 Joined: Dec 2013
RE: Accurate Normal Distribution for the HP67/97
This looks like a nice program, with very good documentation. I look forward to trying it on my RPN-67 simulator for iPad.
Regards,
Bob
06-27-2016, 06:32 AM
Post: #3
Dieter Senior Member Posts: 1,477 Joined: Dec 2013
RE: Accurate Normal Distribution for the HP67/97
(06-26-2016 11:37 PM)bshoring Wrote: This looks like a nice program, with very good documentation. I look forward to trying it on my RPN-67 simulator for iPad.
Thank you, Bob. I think a comprehensive documentation of the used methods is essential because this way the program may be translated for other calculators. Maybe I'll later post the 12-digit values of the coefficients for the newer models. Or even for the TI58/59. ;-)
As a sidenote, I usually develop such rational approximations in Excel. But inverting a 9x9 matrix with sufficient accuracy simply was too much here. So the calculation was done on a WP34s which returned accurate results without any problem.
Dieter
06-27-2016, 07:10 AM
Post: #4
Paul Dale Senior Member Posts: 1,131 Joined: Dec 2013
RE: Accurate Normal Distribution for the HP67/97
(06-27-2016 06:32 AM)Dieter Wrote: As a sidenote, I usually develop such rational approximations in Excel. But inverting a 9x9 matrix with sufficient accuracy simply was too much here. So the calculation was done on a WP34s which returned accurate results without any problem.
Pauli
06-29-2016, 10:04 PM (This post was last modified: 07-03-2016 02:42 PM by Dieter.)
Post: #5
Dieter Senior Member Posts: 1,477 Joined: Dec 2013
RE: Accurate Normal Distribution for the HP67/97
(06-27-2016 07:10 AM)Paul Dale Wrote:
Yes, I really like the 34s for its sheer accuracy. Too bad the number of available registers does not allow anything larger than 9x9 matrices. Maybe I have to switch to Free42 for this. ;-)
FWIW, after some more calculations (again on the 34s) I finally got something that should be close to the optimum under the given restrictions. Using sufficient precision, the relative error drops to ±1,7 E–10. To give you a visual impression, the error graph looks like this:
Blue: rational approximation, red: continued fraction with offset. The thin white lines define the 1,7 E–10 error interval.
This result is achieved by changing the following values in the program listed in the initial post:
• Use the following set of coefficients for the rational approximation:
Code:
a1 = 7,913810547 E-01
a2 = 3,066963490 E-01
a3 = 6,190166490 E-02
a4 = 5,536871364 E-03
b1 = 3,178531251
b2 = 2,149493336
b3 = 7,815107111 E-01
b4 = 1,552413763 E-01
b5 = 1,387643665 E-02
• Change the threshold for the switch between rational approximation and continued fraction from 5 to 4,679.
• Change the offset of the first continued fraction term from 1,38 to 1,422.
Due to the limited precision there is not much improvement in the 67/97 program, but maybe the values can be useful for an implementation on a different calculator.
Dieter
Edit: tweaked some coefficients in their last digit
06-30-2016, 01:57 AM
Post: #6
Paul Dale Senior Member Posts: 1,131 Joined: Dec 2013
RE: Accurate Normal Distribution for the HP67/97
(06-29-2016 10:04 PM)Dieter Wrote: Yes, I really like the 34s for its sheer accuracy. Too bad the number of available registers does not allow anything larger than 9x9 matrices. Maybe I have to switch to Free42 for this. ;-)
Wouldn't 10x10 be possible? If not in the normal registers, in local ones. I thought we allowed local registers to be used for matrices.
Pauli
06-30-2016, 12:11 PM
Post: #7
Dieter Senior Member Posts: 1,477 Joined: Dec 2013
RE: Accurate Normal Distribution for the HP67/97
(06-30-2016 01:57 AM)Paul Dale Wrote: Wouldn't 10x10 be possible? If not in the normal registers, in local ones. I thought we allowed local registers to be used for matrices.
Could you give an example? This application requires solving a linear equation system. With 10 unknowns that's 120 registers (10x10 + 10 for the right hand side + 10 for the solution).
Dieter
06-30-2016, 11:51 PM (This post was last modified: 07-01-2016 12:00 AM by Paul Dale.)
Post: #8
Paul Dale Senior Member Posts: 1,131 Joined: Dec 2013
RE: Accurate Normal Distribution for the HP67/97
Dieter,
You'll need to write a program that creates the locals and which finishes with STOP not RTN.
E.g. to create a 10x10 identity matrix in local registers:
Code:
LBL A
LocR 100
112.1010
XEQ'M-1'
STOP
Switch to run mode and XEQ A to run this. You can look at the first few using RCL . 00, RCL .01 etc. But to look at later values you need to use indirect addressing e.g. 167 RCL -> X returns 1 since it is a diagonal element. The first local register is accessed indirectly as register 112. Execute RTN in run mode to free the memory occupied by the locals.
I'd probably put the two 10 long vectors into local registers instead of the large matrix.
Although you can allocate 121 local registers, the matrix code doesn't allow a single matrix to have more than 100 elements.
Pauli
07-01-2016, 09:22 PM
Post: #9
Dieter Senior Member Posts: 1,477 Joined: Dec 2013
RE: Accurate Normal Distribution for the HP67/97
(06-30-2016 11:51 PM)Paul Dale Wrote: I'd probably put the two 10 long vectors into local registers instead of the large matrix.
I think that's what I'll try if I need to solve a 10x10 system. If I understand the idea behind local registers correctly they are discarded as soon as the program reaches a RTN (or an END). So some care is required.
Dieter
03-22-2017, 10:13 PM
Post: #10
Willy R. Kunz Member Posts: 73 Joined: Dec 2013
RE: Accurate Normal Distribution for the HP67/97
Users of RPN-67 and 97 simulators may download the program and data card here. (Scroll down to the Math section.)
03-22-2017, 11:10 PM (This post was last modified: 03-22-2017 11:47 PM by Dieter.)
Post: #11
Dieter Senior Member Posts: 1,477 Joined: Dec 2013
RE: Accurate Normal Distribution for the HP67/97
(03-22-2017 10:13 PM)Willy R. Kunz Wrote: Users of RPN-67 and 97 simulators may download the program and data card here. (Scroll down to the Math section.)
I see that you included both sets of coefficients discussed earlier in this thread. Please note that two other values also have to be changed! Switching to the optimized coefficient set also requires changing the constants 5 and 1,38 that are hardcoded in the program, cf. line 27 resp. lines 33ff. Please read my comments on the three (!) required changes in post #5 of this thread.
To be more clear: the program version you use will only work with the original, first set of coefficients. Using the alternate set with this program will result in reduced accuracy!
I'd suggest you *only* use the modified coefficient set *and* change the mentioned two constants in the program to their modified values 4,679 resp. 1,422. This will require 226 program steps, so two steps have to be saved. This can be accomplished without significant disadvantages by rounding the constants to 4,68 and 1,42.
(Of course you can also move these constants to register A and B on the data card and replace the respective program steps with RCL A resp. RCL B. ;-))
Dieter
03-23-2017, 04:43 PM
Post: #12
Willy R. Kunz Member Posts: 73 Joined: Dec 2013
RE: Accurate Normal Distribution for the HP67/97
(03-22-2017 11:10 PM)Dieter Wrote:
(03-22-2017 10:13 PM)Willy R. Kunz Wrote: Users of RPN-67 and 97 simulators may download the program and data card here. (Scroll down to the Math section.)
I see that you included both sets of coefficients discussed earlier in this thread. Please note that also two other values that have to be changed! Switching to the optimized coefficient set also has to change the constants 5 and 1,38 that are hardcoded in the program, cf. line 27 resp. 33ff. Please read my comments on the three (!) required changes in post #5 of this thread.
Sorry about overlooking those two bullet points. Late-night work...
Quote:I'd suggest you *only* use the modified coefficient set *and* change the mentioned two constants in the program to their modified values 4,679 resp. 1,422. This will require 226 program steps, so two steps have to be saved. This can be accomplished without significant disadvantages by rounding the constants to 4,68 and 1,42.
Of course, giving up vintage mode compatibility would allow adding bells and whistles, like full-precision constants and easy switching between the two models. Using RCL register arithmetic, single-line floating-point constants, INCR instead of "1 +" etc. would also make the program much shorter. And no data card required.
Left as an exercise to the RPN-67 user... ;-)
Willy
03-24-2017, 06:55 AM
Post: #13
Dieter Senior Member Posts: 1,477 Joined: Dec 2013
RE: Accurate Normal Distribution for the HP67/97
Fine, thank you.
(03-23-2017 04:43 PM)Willy R. Kunz Wrote: Of course, giving up vintage mode compatibility would allow adding bells and whistles, like full-precision constants and easy switching between the two models. Using RCL register arithmetic, single-line floating-point constants, INCR instead of "1 +" etc. would also make the program much shorter. And no data card required.
Left as an exercise to the RPN-67 user... ;-)
I consider the Normal distribution CDF one of the most basic transcendental functions, so I wonder why it is not included as one of the "bells and whistles" of the extended version. ;-)
BTW, what is the precision and working range of RPN-67? And does it use BCD or binary arithmetics?
Dieter
04-20-2017, 02:13 PM
Post: #14
Willy R. Kunz Member Posts: 73 Joined: Dec 2013
RE: Accurate Normal Distribution for the HP67/97
(03-24-2017 06:55 AM)Dieter Wrote:
(03-23-2017 04:43 PM)Willy R. Kunz Wrote: Of course, giving up vintage mode compatibility would allow adding bells and whistles, like full-precision constants and easy switching between the two models. Using RCL register arithmetic, single-line floating-point constants, INCR instead of "1 +" etc. would also make the program much shorter. And no data card required.
Left as an exercise to the RPN-67 user... ;-)
I consider the Normal distribution CDF one of the most basic transcendental functions so that I wonder why it is not included as one of the "bells and whistles" of the extended version. ;-)
BTW, what is the precision and working range of RPN-67? And does it use BCD or binary arithmetics?
Dieter
I can't tell what version of RPN-67 you're using, but Normal distribution CDF has been part of RPN-67 since version 2.0 (Sep 2013). It's called NORM DIST. There's also Binomial Distribution, Negative Binomial Distribution, Bivariate Normal Distribution, and Poisson Distribution. However, I don't pretend RPN-67 is a statistical calculator, so some functions may be missing... ;-)
Math uses standard double-precision binary libraries, although some additions/subtractions are done in BCD to avoid conversion errors.
Willy
http://worldebooklibrary.org/articles/eng/Deterministic_pushdown_automaton
# Deterministic pushdown automaton
In automata theory, a deterministic pushdown automaton (DPDA or DPA) is a variation of the pushdown automaton. The DPDA accepts the deterministic context-free languages, a proper subset of the context-free languages.[1]
Machine transitions are based on the current state and input symbol, and also the current topmost symbol of the stack. Symbols lower in the stack are not visible and have no immediate effect. Machine actions include pushing, popping, or replacing the stack top. A deterministic pushdown automaton has at most one legal transition for the same combination of input symbol, state, and top stack symbol. This is where it differs from the nondeterministic pushdown automaton.
## Formal definition
A (not necessarily deterministic) PDA $M$ can be defined as a 7-tuple:

$M = (Q, \Sigma, \Gamma, q_0, Z_0, A, \delta)$

where

• $Q$ is a finite set of states
• $\Sigma$ is a finite set of input symbols
• $\Gamma$ is a finite set of stack symbols
• $q_0 \in Q$ is the start state
• $Z_0 \in \Gamma$ is the starting stack symbol
• $A \subseteq Q$ is the set of accepting states
• $\delta$ is a transition function, where

$\delta\colon Q \times (\Sigma \cup \{\varepsilon\}) \times \Gamma \longrightarrow \mathcal{P}(Q \times \Gamma^{*})$

where $*$ is the Kleene star, meaning that $\Gamma^{*}$ is "the set of all finite strings (including the empty string $\varepsilon$) of elements of $\Gamma$", $\varepsilon$ denotes the empty string, and $\mathcal{P}(X)$ is the power set of a set $X$.

$M$ is deterministic if it satisfies both of the following conditions:

• For any $q \in Q$, $a \in \Sigma \cup \{\varepsilon\}$, $x \in \Gamma$, the set $\delta(q,a,x)$ has at most one element.
• For any $q \in Q$, $x \in \Gamma$, if $\delta(q, \varepsilon, x) \neq \emptyset$, then $\delta(q,a,x) = \emptyset$ for every $a \in \Sigma$.
There are two possible acceptance criteria: acceptance by empty stack and acceptance by final state. The two are not equivalent for the deterministic pushdown automaton (although they are for the non-deterministic pushdown automaton). The languages accepted by empty stack are exactly those languages that are accepted by final state and in which no word is a proper prefix of another word of the language.
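To make the definition concrete, here is a minimal Python sketch of a DPDA accepting $\{0^n1^n \mid n \geq 1\}$ by final state (state names and stack symbols are illustrative):

```python
# delta maps (state, input symbol or '' for epsilon, stack top) to a single
# (next state, string replacing the top; first character is the new top).
delta = {
    ('q0', '0', 'Z'): ('q0', 'XZ'),   # push an X for each 0
    ('q0', '0', 'X'): ('q0', 'XX'),
    ('q0', '1', 'X'): ('q1', ''),     # pop one X per 1
    ('q1', '1', 'X'): ('q1', ''),
    ('q1', '',  'Z'): ('qf', 'Z'),    # epsilon move into the accepting state
}

def accepts(word, start='q0', accepting={'qf'}):
    state, stack, i = start, ['Z'], 0
    while True:
        top = stack[-1] if stack else None
        if (state, '', top) in delta:                 # determinism guarantees
            state, push = delta[(state, '', top)]     # at most one choice here
        elif i < len(word) and (state, word[i], top) in delta:
            state, push = delta[(state, word[i], top)]
            i += 1
        else:
            break
        stack.pop()
        stack.extend(reversed(push))
    return i == len(word) and state in accepting

print(accepts('0011'), accepts('001'))  # True False
```

Note that delta never defines both an epsilon move and an input move for the same (state, stack top) pair, which is exactly condition 2 above.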
## Languages recognized
If $L(A)$ is a language accepted by a PDA $A$, it can also be accepted by a DPDA if and only if there is a single computation from the initial configuration until an accepting one for all strings belonging to $L(A)$. If $L(A)$ can be accepted by a PDA, it is a context-free language, and if it can be accepted by a DPDA, it is a deterministic context-free language.

Not all context-free languages are deterministic. This makes the DPDA a strictly weaker device than the PDA. For example, the language of even-length palindromes on the alphabet of 0 and 1 has the context-free grammar $S \to 0S0 \mid 1S1 \mid \varepsilon$. An arbitrary string of this language cannot be parsed without reading all its letters first, which means that a pushdown automaton has to try alternative state transitions to accommodate the different possible lengths of a semi-parsed string.[2]
Restricting the DPDA to a single state reduces the class of languages accepted to the LL(1) languages.[3] In the case of a PDA, this restriction has no effect on the class of languages accepted.
## Properties
### Closure
Closure properties of deterministic context-free languages (accepted by deterministic PDA by final state) are drastically different from those of the context-free languages. As an example, they are (effectively) closed under complementation, but not closed under union. Proving that the complement of a language accepted by a deterministic PDA is also accepted by a deterministic PDA is tricky; in principle one has to avoid infinite computations.
As a consequence of the complementation it is decidable whether a deterministic PDA accepts all words over its input alphabet, by testing its complement for emptiness. This is not possible for context-free grammars (hence not for general PDA).
### Equivalence problem
Géraud Sénizergues (1997) proved that the equivalence problem for deterministic PDA (i.e. given two deterministic PDA A and B, is $L(A)=L(B)$?) is decidable,[4] a proof that earned him the 2002 Gödel Prize. For nondeterministic PDA, equivalence is undecidable.
## Notes
1. ^
2. ^
3. ^ Kurki-Suonio, R. (1969). "Notes on top-down languages". BIT 9 (3): 225–238.
4. ^ Sénizergues, Géraud (1997). "The equivalence problem for deterministic pushdown automata is decidable". Automata, Languages and Programming 1256: 671–681.
https://epg.modot.org/index.php?title=Category:702_Load-Bearing_Piles&diff=23125&oldid=22337 | # Category:702 Load-Bearing Piles
## 702.1 Construction Inspection for Sec 702
The principal function of piles is to transmit loads which cannot be adequately supported at normal footing levels to a depth where adequate support is available. When a pile passes through poor material and its tip penetrates a small distance into a sound stratum of good bearing capacity, it is called a bearing pile. The material which is penetrated may vary all the way from water to materials that would ordinarily serve to support surface footings but cannot be used because of severe settlement restrictions. When a pile extends part way through a deep stratum of limited supporting ability and develops capacity by friction on the sides of the pile, with some end bearing characteristics, it is called a friction pile. The type normally used as a friction pile is the cast-in-place concrete pile. Such piles are classified as friction piles because the principal support for the pile is from surface friction, not end bearing. All pile types may be used as "batter piles" (piles driven in a sloping position) to aid in resisting horizontal loads.
Piles for footings where footing surface is below finished ground are referred to as foundation piles. Piles which support shallow caps, usually on intermediate bents, are called trestle piles. Pile types are specified on the plans.
When a type of pile is specified, a pile standard is indexed on the bridge plans. This standard will furnish specific details for the pile to be furnished.
There are two types of piles generally used by MoDOT: structural steel piles and cast-in-place concrete piles.
### 702.1.1 Cast-in-place Concrete Pile (Sec 702.2.1)
They consist of pre-driven shells of steel later filled with concrete. The most commonly used type is the spirally welded steel shell, sometimes called a pipe pile. This type of pile normally has no internal reinforcement. Steel shells are usually driven without a mandrel if shell thickness permits. Where steel shells are driven, boulders or other obstructions quite often deflect the tubes from their intended course. This problem is magnified if piles are driven on a batter, and could result in bent or crushed shells. Metal shells shall hold the original form without distortion after being driven and shall be free from water, soil and other deleterious matter when concrete is cast in the shells. Any shell that has been bent or damaged should be carefully reviewed. Any decision to permit its use should be made only with approval of the Bridge Division through the Division of Construction and Materials. Concrete should be directed down the center of the shell. Concrete hitting the sides can cause segregation. If concrete can be successfully directed down the center of the shell, no tremie is required, regardless of the height of fall.
### 702.1.2 Structural Steel Pile (Sec 702.2.2)
Structural steel piles are rolled H-Sections which are used in certain types of pile installations. This type of pile is probably the most widely used in the State of Missouri. These piles extend into the ground and transmit loads from footing to bearing stratum as columns. They displace a small volume of soil and can be driven with relatively close spacing. Pile tip reinforcement is sometimes specified when driving steel pile through boulders or thin layers of rock to protect the pile tip. Pile points can be accepted by certification and should be checked to see that they meet the specification requirements.
Experience has shown that corrosion of this type of pile is usually not a serious problem. They must be protected for a short distance below ground level by painting, as required by Standard Specification Sec 702.4.8.
### 702.1.3 Test Piles (Sec 702.2.5)
On structures that have unusually large quantities of piling, pile load tests are often specified. Such test loads are required by governing design specifications, which limit maximum loads based on dynamic tests. For structural steel piles, where test loads are specified, the maximum 2006 design load is limited to 6.0 tons per in² unless test loads indicate that design loads must be reduced or the footing redesigned to redistribute the loads to a lesser 4.5 tons per in².
The pile to be load tested in a point bearing situation is normally driven to refusal on rock or shale. A friction pile to be test loaded is normally driven to a formula bearing as close as possible to design bearing value but only after a specified minimum tip elevation has been reached.
The purpose of test loading is to check the effectiveness of the pile hammer and the dynamic pile formula used. The load test assures a minimum safety factor of 2, based on a maximum allowable permanent set of 1/4 in.
The contractor is generally required to submit in detail the proposed method of load testing. The proposal should include the arrangement of hold-down piles if they are to be used. If hold-down piles are impractical, it may be necessary to use a direct static load.
Hydraulic jacks are normally used to apply and measure the load to the tested pile. Deformation and settlement of the loaded pile are recorded by dial gauges which read to the thousandth of an inch. To ensure accuracy, these gauges, backed with fixed wires, must be supported so as to be completely independent of the loading system. Methods of measuring uplift on hold-down piles should be required. Load increments are applied in accordance with contract requirements. These increments are recorded in the inspector's field book.
The special provisions establish the load increments, the application intervals, and the maximum load to be applied. After the maximum load is applied for a specified time, the load is released in specified increments and intervals. The test pile load data should be plotted and reported in graphic form. Contact the Division of Construction and Materials for assistance in preparing test pile graphs. The elastic shortening of the pile may be computed by the formula:
$E_s = \frac{PL}{AE}$

Where:

$E_s$ = elastic shortening, in.
$P$ = applied test load, lbs
$L$ = entire length of test pile, in.
$A$ = area of cross-section of pile, in²
$E$ = modulus of elasticity, usually 29 × 10⁶ lbs/in²
Elastic shortening of any pile can usually be correlated with rebound, measured when the test pile is unloaded. Test pile data, log of readings, and test pile loading graphs should be submitted to the Division of Construction and Materials in a form which is neat, legible, and which can be reproduced. Copies of these reports prepared by Division of Construction & Materials are submitted to the Division of Bridges and, if it is an interstate project, to the Federal Highway Administration.
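As an arithmetic illustration only (the load, length and section values below are hypothetical, not taken from any MoDOT standard):

```python
def elastic_shortening(P, L, A, E=29e6):
    """Es = P*L/(A*E); P in lbs, L in in., A in sq. in., E in lbs per sq. in."""
    return P * L / (A * E)

# hypothetical HP 10x42 test pile: 120,000 lb test load, 60 ft (720 in.) long,
# cross-sectional area 12.4 sq. in.
print(round(elastic_shortening(120_000, 720, 12.4), 3), "in.")  # ~0.24 in.
```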
### 702.1.4 Pile Driving
In some instances pre-boring is required as outlined in Sec 702.4.3. Where pre-boring is required the hole shall be of a diameter not less than that of the pile and shall be large enough to avoid damage to the pile in driving through the hole into hard material. Good practice requires driving equipment capable of driving piles to necessary depth and bearing without materially damaging the piles. Heavier piles require heavier equipment, with a ratio of ram weight to pile weight sufficient to minimize energy loss due to inertia. The contractor selects equipment to meet specified energy requirements, but the inspector should be familiar with power plant, hammer, cap, cushion block, leads, and other elements used in driving. Each resident engineer may obtain data for hammers from publications issued by the individual equipment manufacturer. The contractor should have bulletins available for equipment he is using.
Pile hammers are classified by type. There are steam and air hammers, both single acting and double acting. Diesel pile hammers may be either open or enclosed ram types. A differential hammer is a double acting type. Design loads, size of pile, soil conditions, etc., establish the choice of hammer. Plans set out minimum energy requirements for individual pile size and for each substructure unit.
A single acting hammer is one in which the ram is raised by steam, air, or diesel explosion and allowed to drop, with gravity as the only downward force. The energies listed in the manufacturers' bulletins are striking energies rated in accordance with commonly accepted practice. The energy is based upon the normal stroke but does not make allowance for any losses occurring in the hammer itself, such as back-pressure, friction, or loss within the cushion block.
With insufficient lift pressure, the ram will not ascend the proper height. In fact, the hammer does not have to ascend through a full stroke to operate. The inspector should check the hammer when testing for bearing and determine if the hammer is operating at its specified number of blows per minute and at the prescribed or recommended pressure. If it is not, energy should be obtained by measuring actual stroke while hammer operates and multiplying actual length of stroke by weight of striking part. The additional distance through which the ram drops, while still in contact with the pile after impact, is not ordinarily taken into account. Neither is "cushion block" loss.
During easy driving with a large set per blow a reduction in number of blows per minute may occur. In consequence, the full theoretical hammer stroke will often not be produced.
A double acting hammer is one in which steam or air pressure raises the ram then accelerates the down stroke. The differential acting hammer is a type of double acting hammer which provides additional pressure to the ram during the downward stroke.
The foot-pounds of energy for a double acting hammer is dependent upon the number of strokes per minute produced with a given steam or air pressure. For example, a typical table of "actual energies" for one commonly used hammer shows that "e" varies from 9500 foot-pounds at 90 strokes per minute up to 13,100 foot-pounds at 105 strokes per minute. The inspector must, for this type hammer, log the number of blows per minute, noting pressure at the hammer, and use the corresponding energies when making a bearing determination by use of the dynamic formula. Refer to manufacturer's bulletins to determine what energies to use for the number of blows per minute. Calculations based on steam or air pressure are misleading because no two setups are identical, and it is impossible to determine the mean effective pressure in the working cylinder from gauge pressure.
The Diesel Pile Hammer is classed either as a single acting or double acting type. Most states have accepted this hammer with some qualification. Many arbitrarily discount energies set out by the manufacturer, accepting only some percentage of the maximum rated energy. Missouri sets this figure at 75% for single acting diesel hammers. Inspectors should acquaint themselves with the diesel hammer's physical qualities and determine when the hammer is developing full stroke.
A diesel hammer is a self contained unit, including power plant, cylinder, piston, or ram, fuel tank, pump, injectors, and other pertinent parts. The ram of these hammers is raised by explosion of diesel fuel ignited in the cup or anvil of the hammer. Some types of diesel hammers are called double acting hammers. This type of hammer has the ram enclosed. As the ram travels upward, the piston compresses air in the bounce chamber-compressor tank. This compressed air adds to the acceleration of the ram during its downward stroke. It is necessary to use a "Bounce Pressure" gauge on this type of hammer to establish the usable energy for dynamic formula bearing determination. For this type of diesel hammer, explosive force is not taken into account to determine usable energy. Use of the "gauge energy" permits full use of 2E in the "double acting" bearing formula and energy is not to be discounted to 75%.
The single acting series of diesel hammers has a "ram-piston" which can be partially seen during the upward stroke. If the full "maximum" manufacturer's energy is to be used in the specific dynamic formula, then the inspector must determine that the ram is falling through a normal stroke. Failure to operate properly is usually the result of mechanical problems which the contractor must correct. In isolated instances, failure of the hammer to operate with a normal stroke may be caused by the elastic rebound of the pile and bearing material. If the ram is not falling through its usual stroke, the energy "E" used should not be the maximum striking energy but the energy which can be calculated from the weight of the ram (W) times the actual stroke (H) through which it falls, or (W x H). The height (H) is determined from the observed exposed length of ram as the ram travels upward. When this method is employed, the energy should not be discounted. Where the energy is measured by W x H, the inspector should use the single acting formula from the specifications. This procedure may not be used to increase the energy allowance above 0.75E.
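A sketch of that energy bookkeeping (illustrative only; the governing requirements are in Sec 702 and the manufacturer's bulletin):

```python
def usable_energy(rated_energy, ram_weight=None, observed_stroke=None):
    """Single acting diesel hammer: 0.75E by default; if the ram is not
    reaching full stroke, use W x H undiscounted, but never above 0.75E."""
    allowance = 0.75 * rated_energy
    if ram_weight is not None and observed_stroke is not None:
        return min(ram_weight * observed_stroke, allowance)
    return allowance
```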
#### 702.1.4.1 Pile Formula
The Missouri Highway and Transportation Department specifies the use of the FHWA-Modified Gates equation (See Sec 702.4.10) to calculate pile bearing. The Pile Driving Set Calculator is a spreadsheet that can be used to calculate the pile set to help determine when to stop driving pile.
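For illustration only, the sketch below assumes the commonly published form of the FHWA-modified Gates formula, Ru = 1.75√E·log10(10N) − 100, with Ru in kips, E the delivered hammer energy in foot-pounds and N the blows per inch; the exact form and units in Sec 702.4.10 and the Pile Driving Set Calculator govern.

```python
import math

# Hedged sketch of the FHWA-modified Gates equation as commonly published:
#   Ru (kips) = 1.75 * sqrt(E) * log10(10 * N) - 100
# where E = delivered hammer energy (ft-lb) and N = blows per inch.
def gates_bearing_kips(energy_ftlb, blows_per_inch):
    return 1.75 * math.sqrt(energy_ftlb) * math.log10(10 * blows_per_inch) - 100

# Example with made-up driving data: 24,000 ft-lb at 5 blows per inch.
print(round(gates_bearing_kips(24000, 5), 1))
```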
#### 702.1.4.2 Inspection
A qualified inspector should be assigned continuously on pile driving work to see that each pile is driven to specified bearing, that all piles are properly located, and that the required number are driven. The inspector must keep a detailed record of the data for each pile. The record should show for each pile, its position, tip and cut-off diameter (for timber), total length in place, length placed in leads, tip elevation, batter, and number of blows per inch at the time driving is stopped. The number of blows per inch is based on penetration for the last series of 10 to 20 blows. The inspector should record all pertinent information regarding the hammer used so that a review and check of bearing may be made. Any unusual occurrences or delay during driving should be recorded. When driving friction pile, the inspector should make periodic bearing checks as the pile is being driven, so that the approximate bearing of the pile is known at any time if problems develop.
Contractors that elect to place lifting holes in piling in lieu of using a choker cable may be permitted to do so with the following provisions. The concern with burning lifting holes in piling is that undesirable capacity reductions may occur. Lifting holes are only permissible provided they do not remain in the piling lengths used for the completed structure, i.e. lifting holes would need to be in an excess length of end piling which would either be cut off after driving or, in the case of splicing, removed before splicing on the next section. Any added risk of buckling or damage to the piling that may result from a weakened cross section during driving is the contractor's responsibility.
There shall be no additional payment for the additional length of piling to compensate for removing the cut-off ends with the holes.
It is good practice for piling in a group or cluster to be driven in sequence which proceeds from the center of the group each way to the outer rows of pile. This will usually avoid uplift and loss of bearing in previously driven pile.
In many cases piles are to be driven to rock or shale. The FHWA-Modified Gates equation, designed for friction pile, is not altogether applicable in these circumstances. Since the bearing value at the time of practical refusal is not an accurate bearing resistance figure, the inspector should keep the sounding data well in mind as the tip of the pile nears anticipated elevations of hard material. The pile should be seated on or into hard material with blows which will not damage the tip of the pile. Each bearing pile should be tested for "practical refusal" unless it is clearly seated on solid rock.
The inspector should examine the plans carefully for changes in hammer requirements. For structural steel piles, for example, the pile data table on the bridge plans specify minimum energy requirements for a pile hammer for each individual substructure unit. Under the pile data table, the inspector will find other supplementary notes which should be taken into consideration for proper driving of structural steel piles. It is especially important that such piles which are to be seated on rock or shale be driven and tested for "practical refusal" as specified in Sec 702.4.11. When the pile is well seated, the driving should cease. The inspector should record in the diary that the pile has been driven into shale or rock as the case may be. Either record penetration and bearing in the case of practical refusal or note "refusal on rock" in the case of absolute refusal on rock. Such notations will indicate full compliance with bearing requirements of the plans.
Piles to be driven should be plainly marked at a distance from the tip equal to the distance from ground line to the elevation shown on the soundings for rock or shale. It is also good practice to mark the pile from the tip equal to the distance from the ground surface down to any layer of boulders, thin rock strata, or other hard or firm material which might cause unnatural point resistance or unusual driving conditions. The pile driving foreman or contractor's foreman should be told the significance of such marks and all personnel should be guided accordingly. This procedure will result in fewer broken, "broomed", or damaged piles.
Splices may be required to extend structural steel or steel shell pile to reach adequate bearing. No direct payment will be made for splices that are within the plan pile length plus 10 percent. Any splices outside of plan length plus 10 percent, that are required to achieve bearing will be paid for as an additional 8 feet of pile in place at the contract unit price, per authorized splice.
Field splices have a greater potential for failure during driving than the originally furnished pile. Therefore it is preferable to keep field splicing to a minimum. Sec 702.4.6 states, "Full length piles shall be driven wherever possible and practical." A full-length pile should be used unless clearance, shipping, excessive cost, or other considerations make it impractical. Although an initial pile length of plan length plus 10% is desirable, it is not mandated.
The chart below gives examples of when a splice is to be paid in various situations.
| Plan Length (ft) | Plan Length Plus 10% (ft) | Lengths Driven to Reach Practical Refusal | Pile Length before Trimming (ft) | Pile Length after Trimming (ft) | No. of Splices | Splices Paid | Length Added to Pay for Splice (ft) | Final Payable Length of Pile (ft) | Applicable Rule |
|---|---|---|---|---|---|---|---|---|---|
| 30 | 33 | 1 @ 40 ft | 40 | 40 | 0 | 0 | 0 | 40 | Pile is overrun |
| 30 | 33 | 1 @ 30 ft and 1 @ 8 ft | 38 | 32 | 1 | 0 | 0 | 32 | Splice within 33 ft unpaid |
| 50 | 55 | 2 @ 30 ft | 60 | 59 | 1 | 0 | 0 | 59 | Splice within 55 ft unpaid |
| 80 | 88 | 2 @ 40 ft and 1 @ 10 ft | 90 | 89 | 2 | 0 | 0 | 89 | Splices within 88 ft unpaid |
| 30 | 33 | 1 @ 30 ft and 1 @ 10 ft | 40 | 40 | 1 | 0 | 0 | 40 | Splice within 33 ft unpaid |
| 30 | 33 | 1 @ 30 ft and 2 @ 10 ft | 50 | 45 | 2 | 1 | 8 | 53 | Splice within 33 ft unpaid |
| 80 | 88 | 2 @ 40 ft and 2 @ 20 ft | 120 | 109 | 3 | 1 | 8 | 117 | Splices within 88 ft unpaid |
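The payment rule behind the chart can be stated as a short calculation, sketched below with illustrative Python (the function and its arguments are hypothetical): a splice located within plan length plus 10 percent is unpaid, and each splice beyond that limit adds 8 feet to the payable length.

```python
# Illustrative sketch of the splice payment rule shown in the chart above.
def payable_length_ft(plan_length_ft, section_lengths_ft, trimmed_length_ft):
    limit = 1.10 * plan_length_ft            # plan length plus 10 percent
    # Splices sit at the cumulative length where one section joins the next.
    splice_locations = []
    total = 0.0
    for section in section_lengths_ft[:-1]:
        total += section
        splice_locations.append(total)
    paid_splices = sum(1 for loc in splice_locations if loc > limit)
    return trimmed_length_ft + 8 * paid_splices

print(payable_length_ft(30, [30, 10, 10], 45))       # 53: one splice paid
print(payable_length_ft(80, [40, 40, 20, 20], 109))  # 117: one splice paid
```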
The inspector must ensure that all piles have been properly inspected. Precast concrete pile will normally have been inspected during casting and curing by the Division of Construction and Materials. In such cases, they will provide the resident engineer with proper inspection reports. If the piles are cast on the project, they will, of course, be inspected the same as any other concrete item. Files should contain any applicable inspection reports on aggregate, cement and reinforcing steel. The concrete Plant Inspector's Report and compressive test reports will serve to document acceptability of piles. This would also be true for concrete in cast-in-place piles. Steel shells for cast-in-place piles and structural steel piles are normally inspected by project forces. Inspection should include dimensions, wall thickness of shells, visual inspection of welds, closure plates, etc. The contractor is required to furnish certified mill test reports for the steel. Heat numbers of pile should be checked against heat numbers on the mill test reports. The resident engineer reports results of inspection on a Fabrication Inspection Report, Form B-708R2, or the equivalent data may be recorded in an alternate format. A spreadsheet version of the form is available to facilitate the automatic creation of a SiteManager record for use by the Division of Construction and Materials. This information will be retained in the project file with mill test reports attached.
#### 702.1.4.3 Manufactured Pile Splices
To date, one type of manufactured pile splicer has been submitted to MoDOT and approved for use with certain stipulations: the AFB Champion H-Pile Splicer HP-30000. The following recommended guidelines should be applied, beyond the manufacturer's recommended assembly procedure, when using the HP-30000 splicer.
1. It would be permissible for non-flexible bent locations only. This would include intermediate bents on pile footings and semi-deep abutments. This splicer system should not be used on flexible bents, such as pile cap intermediate bents, where the concrete beam is supported on a single row of exposed piling nor on integral or non-integral end bents.
2. Full penetration groove welds connecting the pile flanges are required. The partial penetration groove welds as recommended by the manufacturer are not acceptable.
3. A 5/16" minimum fillet weld should be added at both ends of the splicer, welded to the pile webs. The length of this weld should be at least 1/2 the depth of the pile. This weld was not a recommendation of the manufacturer. This weld is for additional safety in the event that the splicer is damaged or torn from being snagged on rock material.
### 702.1.5 Pile Driving Documentation
The inspector should record in detail all important facts regarding driving of each pile. The field book notes should be organized in a sequence similar to that shown in the Pile Driving Worksheet.
The sample form in the Pile Driving Worksheet illustrates a typical page of completed pile driving data for pre-cast concrete pile. Data in a similar form will be filled out when driving timber pile.
Figure 700.2 is an illustration of field book data for driving structural steel pile. The inspector records the actual length used and notes the number of pieces incorporated in the length. When structural steel pile is driven, there is often a piece left over from the in-place pile which becomes excess or left-over pile. The contractor may wish to use such a piece on another state highway project. If transfer to another project is desired, extra copies of the certified mill test reports should be made which can be used to have the left over pile reinspected on a future project.
If test pile is a contract item, it must be driven to specified minimum tip elevation regardless of the bearing achieved. After this elevation is reached, driving must continue until one of the following three conditions has been met:
1. The pile driven to full length.
2. The pile driven to refusal.
3. The pile driven to a capacity 50 percent greater than plan bearing.
These conditions are specified in Sec 702.4.1. It is important that a complete driving log be developed. The pile should be marked off in foot increments. The driving record should then show the number of blows for each foot. Some arrangement is necessary to check number of blows per foot without stopping the driving. If there is a sudden sharp change in the number of blows for a given penetration, it may be necessary to check bearing for intermediate increments to develop an accurate graph. The results of specified test pile driving are to be reported on Test Pile Data form. Contact the Division of Construction and Materials for assistance in reporting test pile data.
## 702.2 Laboratory Procedures for Sec 702
This establishes procedures for Laboratory testing and reporting samples of steel strand used in precast-prestressed concrete piles.
### 702.2.1 Procedure
Tests for stress-relieved strand shall consist of examination for fabrication requirements and tension tests performed according to AASHTO M203. Test results and calculations shall be recorded through SiteManager.
### 702.2.2 Sample Record
The sample record shall be completed in SiteManager as described in Automation Section 3510 and shall indicate acceptance, qualified acceptance, or rejection. Appropriate remarks, as described in Reporting Test Results, are to be included in the remarks to clarify conditions of acceptance or rejection. Test results shall be reported on the appropriate templates under the Tests tab. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36574921011924744, "perplexity": 2744.838559836599}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711278.74/warc/CC-MAIN-20221208050236-20221208080236-00216.warc.gz"} |
http://mathhelpforum.com/differential-equations/145822-solving-de-spread-rumour.html | # Math Help - Solving DE for the spread of a rumour.
1. ## Solving DE for the spread of a rumour.
Here is my problem and what I have done so far.
(Problem statement and working attached as thumbnail images.)
2. The integral of $\frac{dx}{x(1-x)}$ is NOT "ln|x(1-x)|". First write it as "partial fractions": $\frac{1}{x(1-x)}= \frac{A}{x}+ \frac{B}{1-x}$, then integrate.
3. Originally Posted by Rina
here is my problem and what I have done so far.
$\frac{dx}{dt} = x(1 - x)$
$\frac{dt}{dx} = \frac{1}{x(1 - x)}$.
Now using the method of partial fractions:
$\frac{A}{x} + \frac{B}{1 - x} = \frac{1}{x(1 - x)}$
$\frac{A(1 - x) + Bx}{x(1 - x)} = \frac{1}{x(1 - x)}$
$A(1 - x) + Bx = 1$
$A - Ax + Bx = 1$
$A + (B - A)x = 1 + 0x$.
Therefore $A = 1$ and $B - A = 0$, so $B = 1$.
Thus $\frac{1}{x(1 - x)} = \frac{1}{x} + \frac{1}{1 - x}$.
Back to the DE:
$\frac{dt}{dx} = \frac{1}{x(1 - x)}$
$\frac{dt}{dx} = \frac{1}{x}+ \frac{1}{1 - x}$
$t = \int{\left(\frac{1}{x} + \frac{1}{1 - x}\right)\,dx}$
$t = \ln{|x|} - \ln{|1 - x|} + C$
$t = \ln{\left|\frac{x}{1 - x}\right|} + C$
$t = \ln{\left|\frac{x - 1 + 1}{1 - x}\right|} + C$
$t = \ln{\left|\frac{-(1 - x)}{1 - x} + \frac{1}{1 - x}\right|} + C$
$t = \ln{\left|-1 + \frac{1}{1 - x}\right|} + C$
$t - C = \ln{\left|-1 + \frac{1}{1 - x}\right|}$
$e^{t - C} = \left|-1 + \frac{1}{1 - x}\right|$
$e^{-C}e^t = \left|-1 + \frac{1}{1 - x}\right|$
$\pm e^{-C}e^t = -1 + \frac{1}{1 - x}$
$A\,e^t = -1 + \frac{1}{1 - x}$, where $A = \pm e^{-C}$
$A\,e^t + 1 = \frac{1}{1 - x}$
$\frac{1}{A\,e^t + 1} = 1 - x$
$x = 1 - \frac{1}{A\,e^t + 1}$
$x = \frac{A\,e^t + 1 - 1}{A\,e^t + 1}$
$x = \frac{A\,e^t}{A\,e^t + 1}$.
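A quick machine check of the working above, as an illustrative sketch (assuming the sympy library is available):

```python
import sympy as sp

t, A = sp.symbols('t A')
x = A * sp.exp(t) / (A * sp.exp(t) + 1)

# dx/dt - x(1 - x) should simplify to zero if x solves the DE.
print(sp.simplify(sp.diff(x, t) - x * (1 - x)))          # 0

# With A = 1 this is 1/(1 + e^(-t)), and x(0) = 1/2.
print(sp.simplify(x.subs(A, 1) - 1 / (1 + sp.exp(-t))))  # 0
print(x.subs({A: 1, t: 0}))                              # 1/2
```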
4. thank you.
and what is C?
5. Originally Posted by Rina
the first A has nothing to do with the second A. It is confusing. Should I have used a different letter for that constant at the end of the calculation?
They are the same $A$. Otherwise I would have used different letters.
6. I am sorry that I am so stupid. It is not easy, trust me.
I am sorry, but I do not understand how the first A, just a constant initially, all of a sudden became ±e^(-C).
7. If we use this DE solution, x=1/(1-e^(-t)), and try to find the proportion of the population that has heard the rumor at time t=0, the solution gives us the result, 0.5; that is, half of the population before the rumor has started spreading? It doesn't make sense. Am I interpreting it wrong?
8. Originally Posted by Rina
I am sorry that I am so stupid. It is not easy, trust me.
I am sorry, but I do not understand how the first A, just a constant initially, all of a sudden became ±e^(-C).
C is arbitrary, therefore -C is arbitrary, therefore e^(-C) is arbitrary, therefore it can be represented by a new arbitrary symbol, e.g. A.
9. Originally Posted by Rina
If we use this DE solution, x=1/(1-e^(-t)), and try to find the proportion of the population that has heard the rumor at time t=0, the solution gives us the result, 0.5; that is, half of the population before the rumor has started spreading? It doesn't make sense. Am I interpreting it wrong?
1. The given solution is x=1/(1+e^(-t)), not what you have said.
2. t = 0 => x = 1/2. All that means is that at t = 0 half the population have heard the rumour. Big deal.
3. The question asked you to show that the solution was x=1/(1+e^(-t)). So you don't actually have to solve the DE. Just substitute x=1/(1+e^(-t)) into it and show that the resulting left hand and right hand sides are equal to each other. The fact that you have been given no boundary condition suggests that this is the approach you were meant to take ... (And given what I have said in my second point, a possible boundary condition would have been the initial condition x(0) = 1/2). | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 33, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9650020003318787, "perplexity": 482.4313705455907}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645310876.88/warc/CC-MAIN-20150827031510-00171-ip-10-171-96-226.ec2.internal.warc.gz"} |
https://www.zbmath.org/?q=an%3A07204025
Reversible disjoint unions of well orders and their inverses. (English) Zbl 07204025
Summary: A poset $$\mathbb{P}$$ is called reversible iff every bijective homomorphism $$f:\mathbb{P} \rightarrow \mathbb{P}$$ is an automorphism. Let $$\mathcal{W}$$ and $$\mathcal{W}^*$$ denote the classes of well orders and their inverses respectively. We characterize reversibility in the class of posets of the form $$\mathbb{P} =\bigcup_{i\in I}\mathbb{L}_i$$, where $$\mathbb{L}_i$$, $$i\in I$$, are pairwise disjoint linear orders from $$\mathcal{W} \cup \mathcal{W}^*$$. First, if $$\mathbb{L}_i \in \mathcal{W}$$, for all $$i \in I$$, and $$\mathbb{L}_i \cong \alpha_i =\gamma_i+n_i\in \text{Ord}$$, where $$\gamma_i \in \text{Lim} \cup \{0\}$$ and $$n_i \in \omega$$, defining $$I_\alpha := \{i \in I : \alpha_i = \alpha\}$$ for $$\alpha \in \text{Ord}$$, and $$J_\gamma := \{j \in I : \gamma_j = \gamma\}$$, for $$\gamma \in \text{Lim} \cup\{0\}$$, we prove that $$\bigcup_{i\in I} \mathbb{L}_i$$ is a reversible poset iff $$\langle \alpha_i : i \in I \rangle$$ is a finite-to-one sequence, that is, $$|I_\alpha| < \omega$$, for all $$\alpha \in \text{Ord}$$, or there exists $$\gamma = \max\{ \gamma_i :i \in I\}$$, for $$\alpha \leq \gamma$$ we have $$|I_\alpha| < \omega$$, and $$\langle n_i : i \in J_\gamma \setminus I_\gamma \rangle$$ is a reversible sequence of natural numbers. The same holds when $$\mathbb{L}_i \in \mathcal{W}^*$$, for all $$i \in I$$. In the general case, the reversibility of the whole union is equivalent to the reversibility of the union of components from $$\mathcal{W}$$ and the union of components from $$\mathcal{W}^*$$.
MSC:
06-XX Order, lattices, ordered algebraic structures
References:
[1] Doyle, P.H.; Hocking, J.G., Bijectively related spaces, I. Manifolds. Pac. J. Math., 111, 23-33 (1984) · Zbl 0554.57014
[2] Dow, A.; Hernández-Gutiérrez, R., Reversible filters, Topology Appl., 225, 34-45 (2017) · Zbl 1368.54007
[3] Kukieła, M., Reversible and bijectively related posets, Order, 26, 119-124 (2009) · Zbl 1178.06002
[4] Kukieła, M., Characterization of hereditarily reversible posets, Math. Slovaca, 66(3), 539-544 (2016) · Zbl 1389.06002
[5] Kurilić, M.S., Retractions of reversible structures, J. Symb. Log., 82(4), 1422-1437 (2017) · Zbl 1423.03111
[6] Kurilić, M.S.; Morača, N., Condensational equivalence, equimorphism, elementary equivalence and similar similarities, Ann. Pure Appl. Logic, 168(6), 1210-1223 (2017) · Zbl 1422.03062
[7] Kurilić, M.S.; Morača, N.: Reversibility of extreme relational structures, (submitted) arXiv:1803.09619
[8] Kurilić, M.S.; Morača, N.: Reversible sequences of cardinals, reversible equivalence relations, and similar structures, (submitted) arXiv:1709.09492 · Zbl 1422.03062
[9] Kurilić, M.S.; Morača, N.: Reversibility of disconnected structures, arXiv:1711.01426
[10] Laflamme, C.; Pouzet, M.; Woodrow, R., Equimorphy: the case of chains, Arch. Math. Logic, 56(7-8), 811-829 (2017) · Zbl 1417.06001
[11] Laver, R., An order type decomposition theorem, Ann. Math., 98(1), 96-119 (1973) · Zbl 0264.04003
[12] Rajagopalan, M.; Wilansky, A., Reversible topological spaces, J. Aust. Math. Soc., 61, 129-138 (1966) · Zbl 0151.29602
[13] Rosenstein, J.G., Linear Orderings, Pure and Applied Mathematics, 98 (1982), New York: Academic Press, Inc., Harcourt Brace Jovanovich Publishers
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8887383341789246, "perplexity": 2694.674406398743}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046152112.54/warc/CC-MAIN-20210806020121-20210806050121-00625.warc.gz"} |
http://ilja-schmelzer.de/forum/showthread.php?tid=43&pid=144 | About Einstein's Leyden lecture

Schmelzer (Administrator), 05-19-2016, 09:55 PM

(05-19-2016, 07:41 PM) John Duffield Wrote: NB: perhaps you have the wrong idea about ether? See Einstein talking about it here in 1920.

Einstein's Leyden lecture Ether and the theory of relativity is, of course, a very interesting document. I like it also because it is a rare place where Einstein has made an error - even if only a minor one:

Quote: But this ether may not be thought of as endowed with the quality characteristic of ponderable media, as consisting of parts which may be tracked through time.

A minor one: "is in general relativity not thought of" would have been correct. But a "may not be thought of" contains, I would say, more. It excludes also, I would say, interpretations of the Einstein equations of GR which endow the gravitational field with this characteristic of ponderable media. And this is quite simple. First of all, we endow the harmonic coordinates with the status of being preferred coordinates. Then, in these preferred harmonic coordinates, we endow $$\rho = g^{00}\sqrt{-g}$$ with the meaning of a density, and $$v^i=g^{0i}/g^{00}$$ with the meaning of a velocity field which allows one to track parts of the ether through time. If time is harmonic, this leads to the continuity equation $$\partial_t \rho + \partial_i (\rho v^i) = 0$$, and if time is time-like, this gives $$\rho>0$$. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8330485224723816, "perplexity": 955.796202283646}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267163326.85/warc/CC-MAIN-20180926041849-20180926062249-00365.warc.gz"}
http://www.fightfinance.com/?q=609,583,64,57,234,452,514,46,498,404,533,251,530,352,535,40,41,148,158,50,348,357,364,463,547,56,227,229,255,460,108,629,141,269,376,61,551,455,534,180,503,173,225,238,349,350,366,491,300,511,273,94,206,367,91,337,506,236,111,285,556,563,306,79,93,628,673,674,418,721,722,309, | # Fight Finance
You deposit cash into your bank account. Have you invested in equity or debt?
A semi-annual coupon bond has a yield of 3% pa. Which of the following statements about the yield is NOT correct? All rates are given to four decimal places.
In Germany, nominal yields on semi-annual coupon paying Government Bonds with 2 years until maturity are currently 0.04% pa.
The inflation rate is currently 1.4% pa, given as an APR compounding per quarter. The inflation rate is not expected to change over the next 2 years.
What is the real yield on these bonds, given as an APR compounding every 6 months?
You just borrowed $400,000 in the form of a 25 year interest-only mortgage with monthly payments of $3,000. The interest rate is 9% pa, which is not expected to change.
You actually plan to pay more than the required interest payment. You plan to pay $3,300 in mortgage payments every month, which your mortgage lender allows. These extra payments will reduce the principal and the minimum interest payment required each month. At the maturity of the mortgage, what will be the principal? That is, after the last (300th) interest payment of $3,300 in 25 years, how much will be owing on the mortgage?
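For readers who want to check a question like this numerically, here is an illustrative Python sketch (not part of the original question) of the month-by-month balance recursion:

```python
# Illustrative sketch: each month the balance accrues interest at 9%/12,
# then the $3,300 payment is subtracted.
r = 0.09 / 12
balance = 400_000.0
for month in range(300):          # 25 years of monthly payments
    balance = balance * (1 + r) - 3_300
print(round(balance, 2))

# Closed form: FV of the principal less FV of the payment annuity.
fv = 400_000 * (1 + r) ** 300 - 3_300 * ((1 + r) ** 300 - 1) / r
print(round(fv, 2))               # matches the loop
```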
An 'interest only' loan can also be called a:
What is the lowest and highest expected share price and expected return from owning shares in a company over a finite period of time?
Let the current share price be $p_0$, the expected future share price be $p_1$, the expected future dividend be $d_1$ and the expected return be $r$. Define the expected return as:
$r=\dfrac{p_1-p_0+d_1}{p_0}$
The answer choices are stated using inequalities. As an example, the first answer choice "(a) $0≤p<∞$ and $0≤r< 1$", states that the share price must be larger than or equal to zero and less than positive infinity, and that the return must be larger than or equal to zero and less than one.
The expression 'cash is king' emphasizes the importance of having enough cash to pay your short term debts to avoid bankruptcy. Which business decision is this expression most closely related to?
The phone company Telstra have 2 mobile service plans on offer which both have the same amount of phone call, text message and internet data credit. Both plans have a contract length of 24 months and the monthly cost is payable in advance. The only difference between the two plans is that one is a:
• 'Bring Your Own' (BYO) mobile service plan, costing $50 per month. There is no phone included in this plan.

The other plan is a:

• 'Bundled' mobile service plan that comes with the latest smart phone, costing $71 per month. This plan includes the latest smart phone.
Neither plan has any additional payments at the start or end.
The only difference between the plans is the phone, so what is the implied cost of the phone as a present value?
Assume that the discount rate is 2% per month given as an effective monthly rate, the same high interest rate on credit cards.
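An illustrative sketch of the calculation (not part of the original question): the implied phone cost is the present value of the $21 monthly difference, paid in advance for 24 months.

```python
# Illustrative sketch: PV of the $21/month price difference as an
# annuity due (payments in advance), 24 months at 2% per month effective.
r, n, diff = 0.02, 24, 71 - 50
pv_in_arrears = diff * (1 - (1 + r) ** -n) / r
pv_in_advance = pv_in_arrears * (1 + r)   # shift each payment forward a month
print(round(pv_in_advance, 2))
```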
A business project is expected to cost $100 now (t=0), then pay $10 at the end of the third (t=3), fourth, fifth and sixth years, and then grow by 5% pa every year forever. So the cash flow will be $10.50 at the end of the seventh year (t=7), then $11.025 at the end of the eighth year (t=8) and so on perpetually. The total required return is 10% pa.
Which of the following formulas will NOT give the correct net present value of the project?
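A brute-force numerical check, as an illustrative sketch (not one of the answer choices): discount the $10 annuity from t=3 to t=6, then add a growing perpetuity valued at t=6.

```python
# Illustrative sketch: NPV of the deferred, then growing, cash flows.
r, g = 0.10, 0.05
npv = -100.0
for t in range(3, 7):                         # $10 at t = 3, 4, 5, 6
    npv += 10 / (1 + r) ** t
# Growing perpetuity: first cash flow 10*(1+g) at t=7, valued at t=6.
npv += (10 * (1 + g) / (r - g)) / (1 + r) ** 6
print(round(npv, 2))
```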
One and a half years ago Frank bought a house for $600,000. Now it's worth only $500,000, based on recent similar sales in the area.
The expected total return on Frank's residential property is 7% pa.
He rents his house out for $1,600 per month, paid in advance. Every 12 months he plans to increase the rental payments. The present value of 12 months of rental payments is $18,617.27.
The future value of 12 months of rental payments one year in the future is $19,920.48. What is the expected annual rental yield of the property? Ignore the costs of renting such as maintenance, real estate agent fees and so on.

You have $100,000 in the bank. The bank pays interest at 10% pa, given as an effective annual rate.
You wish to consume twice as much now (t=0) as in one year (t=1) and have nothing left in the bank at the end.
How much can you consume at time zero and one? The answer choices are given in the same order.
You have $100,000 in the bank. The bank pays interest at 10% pa, given as an effective annual rate. You wish to consume an equal amount now (t=0) and in one year (t=1) and have nothing left in the bank at the end. How much can you consume at each time?

You are promised 20 payments of $100, where the first payment is immediate (t=0) and the last is at the end of the 19th year (t=19). The effective annual discount rate is $r$.
Which of the following equations does NOT give the correct present value of these 20 payments?
Two years ago Fred bought a house for $300,000. Now it's worth $500,000, based on recent similar sales in the area.
Fred's residential property has an expected total return of 8% pa.
He rents his house out for $2,000 per month, paid in advance. Every 12 months he plans to increase the rental payments. The present value of 12 months of rental payments is $23,173.86.
The future value of 12 months of rental payments one year ahead is $25,027.77. What is the expected annual growth rate of the rental payments? In other words, by what percentage will Fred have to raise the monthly rent each year to sustain the expected annual total return of 8%?

You are an equities analyst trying to value the equity of the Australian telecoms company Telstra, with ticker TLS. In Australia, listed companies like Telstra tend to pay dividends every 6 months. The payment around August is called the final dividend and the payment around February is called the interim dividend. Both occur annually.

• Today is mid-March 2015.
• TLS's last interim dividend of $0.15 was one month ago in mid-February 2015.
• TLS's last final dividend of $0.15 was seven months ago in mid-August 2014.

Judging by TLS's dividend history and prospects, you estimate that the nominal dividend growth rate will be 1% pa. Assume that TLS's total nominal cost of equity is 6% pa. The dividends are nominal cash flows and the inflation rate is 2.5% pa. All rates are quoted as nominal effective annual rates. Assume that each month is exactly one twelfth (1/12) of a year, so you can ignore the number of days in each month. Calculate the current TLS share price.

A stock is expected to pay the following dividends:

Cash Flows of a Stock

| Time (yrs) | 0 | 1 | 2 | 3 | 4 | ... |
|---|---|---|---|---|---|---|
| Dividend ($) | 0.00 | 1.00 | 1.05 | 1.10 | 1.15 | ... |
After year 4, the annual dividend will grow in perpetuity at 5% pa, so:
• the dividend at t=5 will be $1.15(1+0.05), • the dividend at t=6 will be$1.15(1+0.05)^2, and so on.
The required return on the stock is 10% pa. Both the growth rate and required return are given as effective annual rates.
What will be the price of the stock in three and a half years (t = 3.5)?
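An illustrative calculation sketch: find the ex-dividend price at t=3 from the perpetuity formula, then grow it at the total required return for half a year (no dividend falls between t=3 and t=3.5).

```python
# Illustrative sketch: share price at t = 3.5 for the dividends above.
r, g = 0.10, 0.05
d4 = 1.15
p4 = d4 * (1 + g) / (r - g)     # ex-dividend price at t = 4
p3 = (d4 + p4) / (1 + r)        # ex-dividend price at t = 3
p3_5 = p3 * (1 + r) ** 0.5      # half a year of capital growth only
print(round(p3_5, 2))
```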
The following is the Dividend Discount Model (DDM) used to price stocks:
$$P_0 = \frac{d_1}{r-g}$$
Assume that the assumptions of the DDM hold and that the time period is measured in years.
Which of the following is equal to the expected dividend in 3 years, $d_3$?
The following equation is the Dividend Discount Model, also known as the 'Gordon Growth Model' or the 'Perpetuity with growth' equation.
$$p_0 = \frac{d_1}{r - g}$$
Which expression is NOT equal to the expected dividend yield?
The following equation is the Dividend Discount Model, also known as the 'Gordon Growth Model' or the 'Perpetuity with growth' equation.
$$p_0=\frac{d_1}{r_\text{eff}-g_\text{eff}}$$
Which expression is NOT equal to the expected capital return?
Most listed Australian companies pay dividends twice per year, the 'interim' and 'final' dividends, which are roughly 6 months apart.
You are an equities analyst trying to value the company BHP. You decide to use the Dividend Discount Model (DDM) as a starting point, so you study BHP's dividend history and you find that BHP tends to pay the same interim and final dividend each year, and that both grow by the same rate.
You expect BHP will pay a $0.55 interim dividend in six months and a $0.55 final dividend in one year. You expect each to grow by 4% next year and forever, so the interim and final dividends next year will be $0.572 each, and so on in perpetuity. Assume BHP's cost of equity is 8% pa. All rates are quoted as nominal effective rates. The dividends are nominal cash flows and the inflation rate is 2.5% pa. What is the current price of a BHP share?

Estimate the US bank JP Morgan's share price using a price earnings (PE) multiples approach with the following assumptions and figures only:

• The major US banks JP Morgan Chase (JPM), Citi Group (C) and Wells Fargo (WFC) are comparable companies;
• JP Morgan Chase's historical earnings per share (EPS) is $4.37;
• Citi Group's share price is $50.05 and historical EPS is $4.26;
• Wells Fargo's share price is $48.98 and historical EPS is $3.89.
Note: Figures sourced from Google Finance on 24 March 2014.
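An illustrative sketch of the PE multiples arithmetic, using only the figures above:

```python
# Illustrative sketch: value JPM at the average PE of the comparables.
comparables = [(50.05, 4.26), (48.98, 3.89)]  # (price, EPS) for C and WFC
avg_pe = sum(price / eps for price, eps in comparables) / len(comparables)
jpm_price = avg_pe * 4.37   # apply the comparable PE to JPM's historical EPS
print(round(avg_pe, 2), round(jpm_price, 2))
```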
Which of the following investable assets are NOT suitable for valuation using PE multiples techniques?
Which firms tend to have high forward-looking price-earnings (PE) ratios?
Private equity firms are known to buy medium sized private companies operating in the same industry, merge them together into a larger company, and then sell it off in a public float (initial public offering, IPO).
If medium-sized private companies trade at PE ratios of 5 and larger listed companies trade at PE ratios of 15, what return can be achieved from this strategy?
Assume that:
• The medium-sized companies can be bought, merged and sold in an IPO instantaneously.
• There are no costs of finding, valuing, merging and restructuring the medium sized companies. Also, there is no competition to buy the medium-sized companies from other private equity firms.
• The large merged firm's earnings are the sum of the medium firms' earnings.
• The only reason for the difference in medium and large firm's PE ratios is due to the illiquidity of the medium firms' shares.
• Return is defined as: $r_{0 \rightarrow 1} = (p_1-p_0+c_1)/p_0$, where time zero is just before the merger and time one is just after.
A firm pays out all of its earnings as dividends. Because of this, the firm has no real growth in earnings, dividends or stock price since there is no re-investment back into the firm to buy new assets and make higher earnings. The dividend discount model is suitable to value this company.
The firm's revenues and costs are expected to increase by inflation in the foreseeable future. The firm has no debt. It operates in the services industry and has few physical assets so there is negligible depreciation expense and negligible net working capital required.
Which of the following statements about this firm's PE ratio is NOT correct? The PE ratio should:
Note: The inverse of x is 1/x.
Which of the following statements about risk free government bonds is NOT correct?
Hint: Total return can be broken into income and capital returns as follows:
\begin{aligned} r_\text{total} &= \frac{c_1}{p_0} + \frac{p_1-p_0}{p_0} \\ &= r_\text{income} + r_\text{capital} \end{aligned}
The capital return is the growth rate of the price.
The income return is the periodic cash flow. For a bond this is the coupon payment.
Which one of the following bonds is trading at a premium?
An investor bought two fixed-coupon bonds issued by the same company, a zero-coupon bond and a 7% pa semi-annual coupon bond. Both bonds have a face value of $1,000, mature in 10 years, and had a yield at the time of purchase of 8% pa. A few years later, yields fell to 6% pa. Which of the following statements is correct? Note that a capital gain is an increase in price.

In these tough economic times, central banks around the world have cut interest rates so low that they are practically zero. In some countries, government bond yields are also very close to zero. A three year government bond with a face value of $100 and a coupon rate of 2% pa paid semi-annually was just issued at a yield of 0%. What is the price of the bond?
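An illustrative, general bond pricing sketch that also covers the 0% yield case, where the price is simply the undiscounted sum of the remaining cash flows:

```python
# Illustrative sketch: fixed-coupon bond price from an APR-style yield.
def bond_price(face, coupon_rate_pa, yield_pa, years, freq=2):
    c = face * coupon_rate_pa / freq   # coupon paid each period
    y = yield_pa / freq                # periodic yield
    n = int(years * freq)
    if y == 0:
        return c * n + face            # no discounting at a zero yield
    return c * (1 - (1 + y) ** -n) / y + face * (1 + y) ** -n

print(bond_price(100, 0.02, 0.0, 3))   # 106.0: six $1 coupons plus face
```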
Below are some statements about loans and bonds. The first descriptive sentence is correct. But one of the second sentences about the loans' or bonds' prices is not correct. Which statement is NOT correct? Assume that interest rates are positive.
Note that coupons or interest payments are the periodic payments made throughout a bond or loan's life. The face or par value of a bond or loan is the amount paid at the end when the debt matures.
An Australian company just issued two bonds:
• A 1 year zero coupon bond at a yield of 10% pa, and
• A 2 year zero coupon bond at a yield of 8% pa.
What is the forward rate on the company's debt from years 1 to 2? Give your answer as an APR compounding every 6 months, which is how the above bond yields are quoted.
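An illustrative no-arbitrage sketch: working in 6-month periods (since the yields are APRs compounding semi-annually), the forward year's growth is the 2-year growth divided by the 1-year growth.

```python
# Illustrative sketch: the 1-to-2 year forward rate implied by the zeros.
r1, r2 = 0.10 / 2, 0.08 / 2              # periodic (6-month) rates
growth = (1 + r2) ** 4 / (1 + r1) ** 2   # total growth over the forward year
f_semi = growth ** 0.5 - 1               # per 6-month period
print(round(2 * f_semi, 6))              # as an APR compounding semi-annually
```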
Which of the following statements about yield curves is NOT correct?
You're trying to save enough money to buy your first car which costs $2,500. You can save $100 at the end of each month starting from now. You currently have no money at all. You just opened a bank account with an interest rate of 6% pa payable monthly.
How many months will it take to save enough money to buy the car? Assume that the price of the car will stay the same over time.
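An illustrative sketch solving for the number of deposits with the future value of an ordinary annuity:

```python
import math

# Illustrative sketch: months of $100 end-of-month deposits at 6%/12
# needed to reach $2,500. Solve pmt*((1+r)^n - 1)/r = target for n.
r, pmt, target = 0.06 / 12, 100, 2500
n = math.log(1 + target * r / pmt) / math.log(1 + r)
print(round(n, 2), math.ceil(n))   # about 23.62, so 24 whole months
```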
A student won $1m in a lottery. Currently the money is in a bank account which pays interest at 6% pa, given as an APR compounding per month. She plans to spend $20,000 at the beginning of every month from now on (so the first withdrawal will be at t=0). After each withdrawal, she will check how much money is left in the account. When there is less than $500,000 left, she will donate that remaining amount to charity. In how many months will she make her last withdrawal and donate the remainder to charity?

Interest expense on debt is tax-deductible, but dividend payments on equity are not.

In Australia, domestic university students are allowed to buy concession tickets for the bus, train and ferry which sell at a discount of 50% to full-price tickets. The Australian Government does not allow international university students to buy concession tickets; they have to pay the full price. Some international students see this as unfair and they are willing to pay for fake university identification cards which have the concession sticker.

What is the most that an international student would be willing to pay for a fake identification card? Assume that international students:

• consider buying their fake card on the morning of the first day of university from their neighbour, just before they leave to take the train into university.
• buy their weekly train tickets on the morning of the first day of each week.
• ride the train to university and back home again every day seven days per week until summer holidays 40 weeks from now. The concession card only lasts for those 40 weeks. Assume that there are 52 weeks in the year for the purpose of interest rate conversion.
• a single full-priced one-way train ride costs $5.
• have a discount rate of 11% pa, given as an effective annual rate.
Approach this question from a purely financial view point, ignoring the illegality, embarrassment and the morality of committing fraud.
You just entered into a fully amortising home loan with a principal of $600,000, a variable interest rate of 4.25% pa and a term of 25 years. Immediately after settling the loan, the variable interest rate suddenly falls to 4% pa! You can't believe your luck. Despite this, you plan to continue paying the same home loan payments as you did before. How long will it now take to pay off your home loan? Assume that the lower interest rate was granted immediately and that rates were and are now again expected to remain constant. Round your answer up to the nearest whole month.

A fairly priced unlevered firm plans to pay a dividend of $1 next year (t=1) which is expected to grow by 3% pa every year after that. The firm's required return on equity is 8% pa.
The firm is thinking about reducing its future dividend payments by 10% so that it can use the extra cash to invest in more projects which are expected to return 8% pa, and have the same risk as the existing projects. Therefore, next year's dividend will be $0.90. What will be the stock's new annual capital return (proportional increase in price per year) if the change in payout policy goes ahead? Assume that payout policy is irrelevant to firm value and that all rates are effective annual rates.

You have $100,000 in the bank. The bank pays interest at 10% pa, given as an effective annual rate.
You wish to consume half as much now (t=0) as in one year (t=1) and have nothing left in the bank at the end.
How much can you consume at time zero and one? The answer choices are given in the same order.
Details of two different types of light bulbs are given below:
• Low-energy light bulbs cost $3.50, have a life of nine years, and use about $1.60 of electricity a year, paid at the end of each year.
• Conventional light bulbs cost only $0.50, but last only about a year and use about $6.60 of energy a year, paid at the end of each year.
The real discount rate is 5%, given as an effective annual rate. Assume that all cash flows are real. The inflation rate is 3% given as an effective annual rate.
Find the Equivalent Annual Cost (EAC) of the low-energy and conventional light bulbs. The below choices are listed in that order.
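An illustrative sketch of the EAC arithmetic (real cash flows discounted at the real rate, so the 3% inflation figure is not needed):

```python
# Illustrative sketch: equivalent annual cost of each bulb at 5% pa real.
def annuity_factor(r, n):
    return (1 - (1 + r) ** -n) / r

def eac(purchase_cost, running_cost_pa, life_years, r=0.05):
    pv_of_costs = purchase_cost + running_cost_pa * annuity_factor(r, life_years)
    return pv_of_costs / annuity_factor(r, life_years)

print(round(eac(3.50, 1.60, 9), 2))   # low-energy bulb
print(round(eac(0.50, 6.60, 1), 2))   # conventional bulb
```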
A share currently worth $100 is expected to pay a constant dividend of $4 for the next 5 years with the first dividend in one year (t=1) and the last in 5 years (t=5).
The total required return is 10% pa.
What do you expected the share price to be in 5 years, just after the dividend at that time has been paid?
Find Candys Corporation's Cash Flow From Assets (CFFA), also known as Free Cash Flow to the Firm (FCFF), over the year ending 30th June 2013.
Candys Corp Income Statement for year ending 30th June 2013

| | $m |
|---|---|
| Sales | 200 |
| COGS | 50 |
| Operating expense | 10 |
| Depreciation | 20 |
| Interest expense | 10 |
| Income before tax | 110 |
| Tax at 30% | 33 |
| Net income | 77 |

Candys Corp Balance Sheet as at 30th June 2013

| | 2013 $m | 2012 $m |
|---|---|---|
| Assets | | |
| Current assets | 220 | 180 |
| PPE: Cost | 300 | 340 |
| PPE: Accumul. depr. | 60 | 40 |
| PPE: Carrying amount | 240 | 300 |
| Total assets | 460 | 480 |
| Liabilities | | |
| Current liabilities | 175 | 190 |
| Non-current liabilities | 135 | 130 |
| Owners' equity | | |
| Retained earnings | 50 | 60 |
| Contributed equity | 100 | 100 |
| Total L and OE | 460 | 480 |

Note: all figures are given in millions of dollars ($m).
A firm has forecast its Cash Flow From Assets (CFFA) for this year and management is worried that it is too low. Which one of the following actions will lead to a higher CFFA for this year (t=0 to 1)? Only consider cash flows this year. Do not consider cash flows after one year, or the change in the NPV of the firm. Consider each action in isolation.
A company increases the proportion of debt funding it uses to finance its assets by issuing bonds and using the cash to repurchase stock, leaving assets unchanged.
Ignoring the costs of financial distress, which of the following statements is NOT correct:
Which one of the following will decrease net income (NI) but increase cash flow from assets (CFFA) in this year for a tax-paying firm, all else remaining constant?
Remember:
$$NI = (Rev-COGS-FC-Depr-IntExp).(1-t_c )$$
$$CFFA=NI+Depr-CapEx - \Delta NWC+IntExp$$
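The two formulas above translate directly into code; this illustrative sketch applies them to hypothetical figures (the numbers are made up):

```python
# Illustrative sketch of the NI and CFFA formulas quoted above.
def net_income(rev, cogs, fc, depr, int_exp, tc):
    return (rev - cogs - fc - depr - int_exp) * (1 - tc)

def cffa(rev, cogs, fc, depr, int_exp, tc, capex, delta_nwc):
    ni = net_income(rev, cogs, fc, depr, int_exp, tc)
    return ni + depr - capex - delta_nwc + int_exp

# Hypothetical figures in $m:
print(cffa(rev=200, cogs=50, fc=10, depr=20, int_exp=10,
           tc=0.30, capex=30, delta_nwc=5))
```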
Find Sidebar Corporation's Cash Flow From Assets (CFFA), also known as Free Cash Flow to the Firm (FCFF), over the year ending 30th June 2013.
Sidebar Corp Income Statement for year ending 30th June 2013

| | $m |
|---|---|
| Sales | 405 |
| COGS | 100 |
| Depreciation | 34 |
| Rent expense | 22 |
| Interest expense | 39 |
| Taxable income | 210 |
| Taxes at 30% | 63 |
| Net income | 147 |

Sidebar Corp Balance Sheet as at 30th June 2013

| | 2013 $m | 2012 $m |
|---|---|---|
| Inventory | 70 | 50 |
| Trade debtors | 11 | 16 |
| Rent paid in advance | 4 | 3 |
| PPE | 700 | 680 |
| Total assets | 785 | 749 |
| Trade creditors | 11 | 19 |
| Bond liabilities | 400 | 390 |
| Contributed equity | 220 | 220 |
| Retained profits | 154 | 120 |
| Total L and OE | 785 | 749 |

Note: All figures are given in millions of dollars ($m).
The cash flow from assets was:
Your friend is trying to find the net present value of a project. The project is expected to last for just one year with:
• a negative cash flow of -$1 million initially (t=0), and • a positive cash flow of$1.1 million in one year (t=1).
The project has a total required return of 10% pa due to its moderate level of undiversifiable risk.
Your friend is aware of the importance of opportunity costs and the time value of money, but he is unsure of how to find the NPV of the project.
He knows that the opportunity cost of investing the $1m in the project is the expected gain from investing the money in shares instead. Like the project, shares also have an expected return of 10% since they have moderate undiversifiable risk. This opportunity cost is $0.1m $(=1m \times 10\%)$ which occurs in one year (t=1).
He knows that the time value of money should be accounted for, and this can be done by finding the present value of the cash flows in one year.
Your friend has listed a few different ways to find the NPV which are written down below.
(I) $-1m + \dfrac{1.1m}{(1+0.1)^1}$
(II) $-1m + \dfrac{1.1m}{(1+0.1)^1} - \dfrac{1m}{(1+0.1)^1} \times 0.1$
(III) $-1m + \dfrac{1.1m}{(1+0.1)^1} - \dfrac{1.1m}{(1+0.1)^1} \times 0.1$
(IV) $-1m + 1.1m - \dfrac{1.1m}{(1+0.1)^1} \times 0.1$
(V) $-1m + 1.1m - 1.1m \times 0.1$
Which of the above calculations give the correct NPV? Select the most correct answer.
A man is thinking about taking a day off from his casual painting job to relax.
He just woke up early in the morning and he's about to call his boss to say that he won't be coming in to work.
But he's thinking about the hours that he could work today (in the future) which are:
What is the net present value (NPV) of undertaking a full-time Australian undergraduate business degree as an Australian citizen? Only include the cash flows over the duration of the degree, ignore any benefits or costs of the degree after it's completed.
Assume the following:
• The degree takes 3 years to complete and all students pass all subjects.
• There are 2 semesters per year and 4 subjects per semester.
• University fees per subject per semester are $1,277, paid at the start of each semester. Fees are expected to stay constant for the next 3 years.
• There are 52 weeks per year.
• The first semester is just about to start (t=0). The first semester lasts for 19 weeks (t=0 to 19).
• The second semester starts immediately afterwards (t=19) and lasts for another 19 weeks (t=19 to 38).
• The summer holidays begin after the second semester ends and last for 14 weeks (t=38 to 52). Then the first semester begins the next year, and so on.
• Working full time at the grocery store instead of studying full-time pays $20/hr and you can work 35 hours per week. Wages are paid at the end of each week.
• Full-time students can work full-time during the summer holiday at the grocery store for the same rate of $20/hr for 35 hours per week. Wages are paid at the end of each week.
• The discount rate is 9.8% pa. All rates and cash flows are real. Inflation is expected to be 3% pa. All rates are effective annual.

The NPV of costs from undertaking the university degree is:

Find the cash flow from assets (CFFA) of the following project.

One Year Mining Project Data

| Item | Value |
|---|---|
| Project life | 1 year |
| Initial investment in building mine and equipment | $9m |
| Depreciation of mine and equipment over the year | $8m |
| Kilograms of gold mined at end of year | 1,000 |
| Sale price per kilogram | $0.05m |
| Variable cost per kilogram | $0.03m |
| Before-tax cost of closing mine at end of year | $4m |
| Tax rate | 30% |
Note 1: Due to the project, the firm also anticipates finding some rare diamonds which will give before-tax revenues of $1m at the end of the year.

Note 2: The land that will be mined actually has thermal springs and a family of koalas that could be sold to an eco-tourist resort for an after-tax amount of $3m right now. However, if the mine goes ahead then this natural beauty will be destroyed.

Note 3: The mining equipment will have a book value of $1m at the end of the year for tax purposes. However, the equipment is expected to fetch $2.5m when it is sold.
Find the project's CFFA at time zero and one. Answers are given in millions of dollars ($m), with the first cash flow at time zero, and the second at time one.

Value the following business project to manufacture a new product.

Project Data

| Item | Value |
|---|---|
| Project life | 2 yrs |
| Initial investment in equipment | $6m |
| Depreciation of equipment per year | $3m |
| Expected sale price of equipment at end of project | $0.6m |
| Unit sales per year | 4m |
| Sale price per unit | $8 |
| Variable cost per unit | $5 |
| Fixed costs per year, paid at the end of each year | $1m |
| Interest expense per year | 0 |
| Tax rate | 30% |
| Weighted average cost of capital after tax per annum | 10% |

Notes

1. The firm's current assets and current liabilities are $3m and $2m respectively right now. This net working capital will not be used in this project; it will be used in other unrelated projects. Due to the project, current assets (mostly inventory) will grow by $2m initially (at t=0), and then by $0.2m at the end of the first year (t=1). Current liabilities (mostly trade creditors) will increase by $0.1m at the end of the first year (t=1).
At the end of the project, the net working capital accumulated due to the project can be sold for the same price that it was bought.
2. The project cost $0.5m to research which was incurred one year ago.

Assumptions

• All cash flows occur at the start or end of the year as appropriate, not in the middle or throughout the year.
• All rates and cash flows are real. The inflation rate is 3% pa.
• All rates are given as effective annual rates.
• The business considering the project is run as a 'sole tradership' (run by an individual without a company) and is therefore eligible for a 50% capital gains tax discount when the equipment is sold, as permitted by the Australian Tax Office.

What is the expected net present value (NPV) of the project?

Your friend just bought a house for $400,000. He financed it using a $320,000 mortgage loan and a deposit of $80,000.
In the context of residential housing and mortgages, the 'equity' tied up in the value of a person's house is the value of the house less the value of the mortgage. So the initial equity your friend has in his house is $80,000. Let this amount be E, let the value of the mortgage be D and the value of the house be V. So $V=D+E$.

If house prices suddenly fall by 10%, what would be your friend's percentage change in equity (E)? Assume that the value of the mortgage is unchanged and that no income (rent) was received from the house during the short time over which house prices fell.

Remember:

$$r_{0\rightarrow1}=\frac{p_1-p_0+c_1}{p_0}$$

where $r_{0\rightarrow1}$ is the return (percentage change) of an asset with price $p_0$ initially, $p_1$ one period later, and paying a cash flow of $c_1$ at time $t=1$.

Interest expense (IntExp) is an important part of a company's income statement (or 'profit and loss' or 'statement of financial performance'). How does an accountant calculate the annual interest expense of a fixed-coupon bond that has a liquid secondary market? Select the most correct answer. Annual interest expense is equal to:

There are many ways to calculate a firm's free cash flow (FFCF), also called cash flow from assets (CFFA). Some include the annual interest tax shield in the cash flow and some do not.

Which of the below FFCF formulas include the interest tax shield in the cash flow?

$$(1) \quad FFCF=NI + Depr - CapEx -\Delta NWC + IntExp$$
$$(2) \quad FFCF=NI + Depr - CapEx -\Delta NWC + IntExp.(1-t_c)$$
$$(3) \quad FFCF=EBIT.(1-t_c )+ Depr- CapEx -\Delta NWC+IntExp.t_c$$
$$(4) \quad FFCF=EBIT.(1-t_c) + Depr- CapEx -\Delta NWC$$
$$(5) \quad FFCF=EBITDA.(1-t_c )+Depr.t_c- CapEx -\Delta NWC+IntExp.t_c$$
$$(6) \quad FFCF=EBITDA.(1-t_c )+Depr.t_c- CapEx -\Delta NWC$$
$$(7) \quad FFCF=EBIT-Tax + Depr - CapEx -\Delta NWC$$
$$(8) \quad FFCF=EBIT-Tax + Depr - CapEx -\Delta NWC-IntExp.t_c$$
$$(9) \quad FFCF=EBITDA-Tax - CapEx -\Delta NWC$$
$$(10) \quad FFCF=EBITDA-Tax - CapEx -\Delta NWC-IntExp.t_c$$

The formulas for net income (NI, also called earnings), EBIT and EBITDA are given below. Assume that depreciation and amortisation are both represented by 'Depr' and that 'FC' represents fixed costs such as rent.

$$NI=(Rev - COGS - Depr - FC - IntExp).(1-t_c )$$
$$EBIT=Rev - COGS - FC-Depr$$
$$EBITDA=Rev - COGS - FC$$
$$Tax =(Rev - COGS - Depr - FC - IntExp).t_c= \dfrac{NI.t_c}{1-t_c}$$

A firm has a debt-to-assets ratio of 50%. The firm then issues a large amount of equity to raise money for new projects of similar systematic risk to the company's existing projects. Assume a classical tax system. Which statement is correct?

A fast-growing firm is suitable for valuation using a multi-stage growth model. Its nominal unlevered cash flow from assets ($CFFA_U$) at the end of this year (t=1) is expected to be $1 million. After that it is expected to grow at a rate of:
• 12% pa for the next two years (from t=1 to 3),
• 5% over the fourth year (from t=3 to 4), and
• -1% forever after that (from t=4 onwards). Note that this is a negative one percent growth rate.
Assume that:
• The nominal WACC after tax is 9.5% pa and is not expected to change.
• The nominal WACC before tax is 10% pa and is not expected to change.
• The firm has a target debt-to-equity ratio that it plans to maintain.
• The inflation rate is 3% pa.
• All rates are given as nominal effective annual rates.
What is the levered value of this fast growing firm's assets?
A firm has a debt-to-equity ratio of 25%. What is its debt-to-assets ratio?
Diversification in a portfolio of two assets works best when the correlation between their returns is:
All things remaining equal, the variance of a portfolio of two positively-weighted stocks rises as:
Two risky stocks A and B comprise an equal-weighted portfolio. The correlation between the stocks' returns is 70%.
If the variance of stock A increases but the:
• Prices and expected returns of each stock stays the same,
• Variance of stock B's returns stays the same,
• Correlation of returns between the stocks stays the same.
Which of the following statements is NOT correct?
An investor wants to make a portfolio of two stocks A and B with a target expected portfolio return of 12% pa.
• Stock A has an expected return of 10% pa and a standard deviation of 20% pa.
• Stock B has an expected return of 15% pa and a standard deviation of 30% pa.
The correlation coefficient between stock A and B's expected returns is 70%.
What will be the annual standard deviation of the portfolio with this 12% pa target return?
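An illustrative sketch: first solve for the weights that give the 12% target, then apply the two-asset variance formula.

```python
# Illustrative sketch: standard deviation of the target-return portfolio.
mu_a, mu_b = 0.10, 0.15
sd_a, sd_b = 0.20, 0.30
rho, target = 0.70, 0.12

w_a = (target - mu_b) / (mu_a - mu_b)   # weight in A for a 12% expected return
w_b = 1 - w_a
var_p = ((w_a * sd_a) ** 2 + (w_b * sd_b) ** 2
         + 2 * w_a * w_b * rho * sd_a * sd_b)
print(round(w_a, 2), round(var_p ** 0.5, 4))   # 0.6 and about 0.2213
```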
What is the correlation of a variable X with itself?
The corr(X, X) or $\rho_{X,X}$ equals:
Let the standard deviation of returns for a share per month be $\sigma_\text{monthly}$.
What is the formula for the standard deviation of the share's returns per year $(\sigma_\text{yearly})$?
Assume that returns are independently and identically distributed (iid) so they have zero auto correlation, meaning that if the return was higher than average today, it does not indicate that the return tomorrow will be higher or lower than average.
Which statement is the most correct?
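A quick simulation sketch of the two facts these last questions rest on, assuming iid returns and treating returns as additive (exact for log returns): corr(X, X) = 1 by definition, and under iid the variance of a sum of 12 monthly returns is 12 times the monthly variance, so $\sigma_\text{yearly}=\sqrt{12}\,\sigma_\text{monthly}$.

```python
import numpy as np

rng = np.random.default_rng(0)
monthly = rng.normal(0.01, 0.05, size=(100_000, 12))  # iid monthly returns

# corr(X, X) = 1 for any variable with positive variance
x = monthly[:, 0]
print(np.corrcoef(x, x)[0, 1])           # 1.0

# Annual (additive) return = sum of 12 iid monthly returns
yearly = monthly.sum(axis=1)
print(yearly.std(), 0.05 * np.sqrt(12))  # both ~0.1732
```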
A stock's correlation with the market portfolio increases while its total risk is unchanged. What will happen to the stock's expected return and systematic risk?
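The standard decomposition of beta clarifies this one (a textbook identity, not from the question):

$$\beta_i=\frac{\operatorname{cov}(r_i,r_m)}{\sigma_m^2}=\rho_{i,m}\,\frac{\sigma_i}{\sigma_m}$$

so if $\rho_{i,m}$ rises while total risk $\sigma_i$ is unchanged, $\beta_i$ rises, and under the CAPM the expected return rises with it.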
Assets A, B, M and $r_f$ are shown on the graphs above. Asset M is the market portfolio and $r_f$ is the risk free yield on government bonds. Assume that investors can borrow and lend at the risk free rate. Which of the below statements is NOT correct?
A stock has a beta of 1.5. The market's expected total return is 10% pa and the risk free rate is 5% pa, both given as effective annual rates.
In the last 5 minutes, bad economic news was released showing a higher chance of recession. Over this time the share market fell by 1%. The risk free rate was unchanged.
What do you think was the stock's historical return over the last 5 minutes, given as an effective 5 minute rate?
A stock has a beta of 1.5. The market's expected total return is 10% pa and the risk free rate is 5% pa, both given as effective annual rates.
Over the last year, bad economic news was released showing a higher chance of recession. Over this time the share market fell by 1%. The risk free rate was unchanged.
What do you think was the stock's historical return over the last year, given as an effective annual rate?
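A hedged sketch of one common way to read the two beta questions above, assuming the market-model view that the stock moves with the market through its beta. Over 5 minutes the risk-free component is negligible, so the move is roughly beta times the market move; over a year the risk-free rate accrues as well, so the realised market return is fed through the SML.

```python
beta, rf = 1.5, 0.05

# 5-minute horizon: rf over 5 minutes ~ 0, market fell 1%
r_market_5min = -0.01
r_stock_5min = beta * r_market_5min              # ~ -1.5%
print(f"5-minute return ~ {r_stock_5min:.2%}")

# 1-year horizon: apply the SML to the realised market return
r_market_year = -0.01
r_stock_year = rf + beta * (r_market_year - rf)  # 0.05 + 1.5*(-0.06) = -4%
print(f"1-year return ~ {r_stock_year:.2%}")
```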
Project Data

| Item | Value |
| --- | --- |
| Project life | 1 year |
| Initial investment in equipment | $8m |
| Depreciation of equipment per year | $8m |
| Expected sale price of equipment at end of project | $0 |
| Unit sales per year | 4m |
| Sale price per unit | $10 |
| Variable cost per unit | $5 |
| Fixed costs per year, paid at the end of each year | $2m |
| Interest expense in first year (at t=1) | $0.562m |
| Corporate tax rate | 30% |
| Government treasury bond yield | 5% |
| Bank loan debt yield | 9% |
| Market portfolio return | 10% |
| Covariance of levered equity returns with market | 0.32 |
| Variance of market portfolio returns | 0.16 |
| Firm's and project's debt-to-equity ratio | 50% |
Notes
1. Due to the project, current assets will increase by $6m now (t=0) and fall by $6m at the end (t=1). Current liabilities will not be affected.
Assumptions
• The debt-to-equity ratio will be kept constant throughout the life of the project. The amount of interest expense at the end of each period has been correctly calculated to maintain this constant debt-to-equity ratio.
• Millions are represented by 'm'.
• All cash flows occur at the start or end of the year as appropriate, not in the middle or throughout the year.
• All rates and cash flows are real. The inflation rate is 2% pa. All rates are given as effective annual rates.
• The project is undertaken by a firm, not an individual.
What is the net present value (NPV) of the project?
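A sketch of one standard solution path, assuming the WACC method: estimate the levered equity beta from the covariance and market variance, get the cost of equity from the CAPM, combine it with the after-tax cost of debt at the target D/E of 50%, and discount the unlevered cash flows (the given interest expense is then irrelevant under this method). All figures and rates are real, so no inflation adjustment is needed.

```python
# CAPM cost of equity from the covariance data
cov_em, var_m = 0.32, 0.16
beta_e = cov_em / var_m                     # = 2.0
rf, r_m = 0.05, 0.10
r_e = rf + beta_e * (r_m - rf)              # = 0.15

# After-tax WACC with D/E = 0.5 (so D/V = 1/3, E/V = 2/3)
tc, r_d = 0.30, 0.09
d_v, e_v = 1 / 3, 2 / 3
wacc = e_v * r_e + d_v * r_d * (1 - tc)     # = 0.121

# Unlevered free cash flow at t=1 ($m): EBIT(1-tc) + Depr - CapEx - dNWC
rev, vc, fc, depr = 4 * 10, 4 * 5, 2, 8
ebit = rev - vc - fc - depr                 # = 10
ffcf_1 = ebit * (1 - tc) + depr - 0 - (-6)  # NWC falls by $6m at t=1
cf0 = -8 - 6                                # equipment plus NWC build-up at t=0

npv = cf0 + ffcf_1 / (1 + wacc)
print(f"WACC = {wacc:.3f}, NPV = ${npv:.3f}m")  # ~ $4.73m under these assumptions
```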
Fred owns some Commonwealth Bank (CBA) shares. He has calculated CBA’s monthly returns for each month in the past 20 years using this formula:
$$r_\text{t monthly}=\ln \left( \dfrac{P_t}{P_{t-1}} \right)$$
He then took the arithmetic average and found it to be 1% per month using this formula:
$$\bar{r}_\text{monthly}= \dfrac{ \displaystyle\sum\limits_{t=1}^T{\left( r_\text{t monthly} \right)} }{T} =0.01=1\% \text{ per month}$$
He also found the standard deviation of these monthly returns, which was 5% per month:

$$\sigma_\text{monthly} = \sqrt{ \dfrac{ \displaystyle\sum\limits_{t=1}^T{ \left( r_\text{t monthly} - \bar{r}_\text{monthly} \right)^2 } }{T} } = 0.05 = 5\%\text{ per month}$$
Which of the below statements about Fred’s CBA shares is NOT correct? Assume that the past historical average return is the true population average of future expected returns.
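A sketch of the annualisation arithmetic behind the statements, assuming iid monthly log returns as Fred's formulas imply:

```python
import math

mu_m, sd_m = 0.01, 0.05

mu_year = 12 * mu_m                 # 12% pa as a log (continuously compounded) return
sd_year = sd_m * math.sqrt(12)      # ~17.32% pa

# Converting the annual log return to an effective annual (simple) return:
eff_annual = math.exp(mu_year) - 1  # ~12.75% pa
print(mu_year, sd_year, eff_annual)
```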
Here is a table of stock prices and returns. Which of the statements below the table is NOT correct?
Price and Return Population Statistics

| Time | Prices | LGDR | GDR | NDR |
| --- | --- | --- | --- | --- |
| 0 | 100 | | | |
| 1 | 50 | -0.6931 | 0.5 | -0.5 |
| 2 | 100 | 0.6931 | 2 | 1 |
| Arithmetic average | | 0 | 1.25 | 0.25 |
| Arithmetic standard deviation | | 0.6931 | 0.75 | 0.75 |
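The return columns can be reproduced directly from the prices. A small sketch, reading LGDR as the log gross discrete return $\ln(P_t/P_{t-1})$, GDR as the gross discrete return $P_t/P_{t-1}$, and NDR as the net discrete return (these expansions of the acronyms are assumptions, not stated in the question):

```python
import math

prices = [100, 50, 100]
lgdr = [math.log(b / a) for a, b in zip(prices, prices[1:])]  # [-0.6931, 0.6931]
gdr  = [b / a for a, b in zip(prices, prices[1:])]            # [0.5, 2.0]
ndr  = [g - 1 for g in gdr]                                   # [-0.5, 1.0]

def mean(xs):
    return sum(xs) / len(xs)

def pop_sd(xs):
    m = mean(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

for name, xs in [("LGDR", lgdr), ("GDR", gdr), ("NDR", ndr)]:
    print(name, [round(x, 4) for x in xs], round(mean(xs), 4), round(pop_sd(xs), 4))
# Means: 0, 1.25, 0.25; population standard deviations: 0.6931, 0.75, 0.75.
```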
A company announces that it will pay a dividend, as the market expected. The company's shares trade on the stock exchange which is open from 10am in the morning to 4pm in the afternoon each weekday. When would the share price be expected to fall by the amount of the dividend? Ignore taxes.
The share price is expected to fall during the: | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34384751319885254, "perplexity": 1931.1878440370722}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257660.45/warc/CC-MAIN-20190524144504-20190524170504-00047.warc.gz"} |
http://www.wiris.com/en/category/products/documentation-4 | # Documentation
WIRIS editor 3 (Documentation)
## 1.- Enable Uninstall mode
Go to WIRIS filter - Maths settings and enable uninstall mode.
## 2.- Uninstall WIRIS filter
Once uninstall mode has been enabled, the WIRIS filter can be uninstalled from the general filter settings.
## WIRIS EDITOR for Java/Swing
WIRIS EDITOR for Java Swing is the specialized compilation of WIRIS EDITOR that targets Java Swing. With this Java component you can add to your application a full-fledged formula (equation) editor for maths, physics and (inorganic) chemistry that works with the MathML standard. The usage is very similar to that of the JavaScript version.
## Arabic numbers and math notation by countries
Math notation in the Arabic language changes from country to country.
The technical administrator of WIRIS editor can set a personalized behaviour for your installation: you can configure the default Arabic numerals to be used and whether or not to mirror math formulas.
The default configuration for the Arabic language (code ar) is to use mirrored formulas and Arabic numerals (1234…).
## APIs
WIRIS EDITOR provides the following services:
• Generate images of formulas with the following formats: PNG, SVG, PDF, EPS, SWF
• Convert MathML from/to LaTeX.
• Generate a textual representation of a formula (accessibility).
• Convert presentation MathML from/to content MathML
• Provide a simple formula calculator. Works with floating point numbers.
Not to be confused with WIRIS CALC.
There are two main interfaces to the services:
## Appendix: List of icons - Release 3.4
The following images show all the icons available in the toolbar tabs.
## Embed WIRIS editor in a Web page
This document explains how to embed the WIRIS EDITOR in a Web page. The scenario in which the editor is used together with an HTML rich-text editor is covered by the WIRIS plugins.
The embedding consists of two steps: the first is displaying the editor itself, and the second is calling the editor API to set and retrieve the MathML.
## LaTeX support
If you already have bits of LaTeX code inside your texts, the WIRIS plugin will show you the actual formulas, and the WIRIS editor will allow you to edit them in WYSIWYG mode.

For example, imagine that your platform contains a text like:
The history of $$\sqrt{2}$$.
After installing the WIRIS plugin, students will see:
## Appendix: List of icons - Release 3.0
The following images show all the icons available in the toolbar tabs.
## Scaling symbols and parenthesis
Mathematical operations can be nested, and so symbols can contain expressions with symbols. Many symbols expand to match the size of their contents; note, for instance, the variable size of the square root symbol: | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5048681497573853, "perplexity": 7337.941701448674}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814079.59/warc/CC-MAIN-20180222081525-20180222101525-00731.warc.gz"} |
https://brilliant.org/problems/an-electricity-and-magnetism-problem-by-rohan/ | # An electricity and magnetism problem by Rohan Gupta
A wire of resistance 12 $$\Omega$$/m is bent to form a complete circle of radius 10 cm. The resistance between two diametrically opposite points on the circle is:
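A hedged worked line (assuming a uniform wire): the two semicircular arcs between the diametrically opposite points act as equal resistors in parallel, each of length $\pi r$:

$$R_\text{half} = 12\ \tfrac{\Omega}{\text{m}} \times \pi \times 0.1\ \text{m} = 1.2\pi\ \Omega, \qquad R = \frac{R_\text{half}}{2} = 0.6\pi \approx 1.88\ \Omega$$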
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9679533243179321, "perplexity": 943.7971199787938}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645830.10/warc/CC-MAIN-20180318165408-20180318185408-00066.warc.gz"}
https://www.ncbi.nlm.nih.gov/pubmed/12915213?dopt=AbstractPlus | Exp Gerontol. 2003 Aug;38(8):911-20.
# Age-associated accumulation of CMV-specific CD8+ T cells expressing the inhibitory killer cell lectin-like receptor G1 (KLRG1).
### Author information
1. Section for Transplantation Immunology and Immunohematology, University of Tübingen, Germany. [email protected]
### Abstract
Large clonal expansions of peripheral CD8+ T cells carrying receptors for single epitopes of CMV and EBV are common in the elderly and may be associated with an immune risk phenotype predicting mortality. Here we show that the frequency of CD8+ T cells expressing the inhibitory killer cell lectin-like receptor G1 (KLRG1), a marker of cells unable to undergo further clonal expansion, was markedly elevated in CD8+ T cells from old donors. Moreover, tetramer staining revealed that the elevated frequency of CMV-specific CD8+ T cells in the elderly was due to an accumulation of cells bearing this dominant negative receptor. The fraction of CMV-specific T cells able to secrete interferon-gamma after specific antigenic stimulation was significantly lower in the elderly than in the young, although the total number of functional cells was comparable. Therefore, the majority of the clonally expanded virus-specific CD8+ cells in the elderly was dysfunctional. Thus, T cell responses are altered in the aged by an accumulation of replicatively senescent dysfunctional T cells carrying receptors for persistent herpes viruses. The presence of clonal expansions of such virus-specific cells may shrink the available repertoire for other antigens and contribute to the increased incidence of infectious disease in the elderly.
PMID: 12915213
[Indexed for MEDLINE] | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8117167949676514, "perplexity": 11680.158069928828}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195527204.71/warc/CC-MAIN-20190721205413-20190721231413-00087.warc.gz"} |