url | text | date | metadata
---|---|---|---|
https://gitter.im/notablemind/notablemind?at=5500c13126b4255b4b3a0d86
|
Jared Forsyth
@jaredly
ah yes, I haven't yet documented node types. there are many. to change an item to a todo, press t t in normal mode. t n changes it back to a normal node, also t c for code, t l for list, and some others
E
@Inconditus
cool, works also with the visual selections :smile:
Jared Forsyth
@jaredly
yup!
most things should
Jared Forsyth
@jaredly
Out of curiosity, how did you come upon notablemind? I don't think I've published it anywhere
E
@Inconditus
i was going to make my own, was looking for a nice tree library for javascript. this was a couple of months ago; been keeping track of notablemind since
LukasDrude
@LukasDrude
Oh this is great. I stumbled upon notablemind when searching for folding and searching capabilities for Jupyter/iPython notebooks. How is the project related to Jupyter? Have you documented using the iPython kernel yet?
Jared Forsyth
@jaredly
Yup, here's a tutorial. Let me know if anything breaks :) or how the tutorial could be improved.
Jared Forsyth
@jaredly
Also check out the main tutorial if you haven't yet, to get down all of the ins and outs
LukasDrude
@LukasDrude
Ok, that is good. Did you see issues when connecting to your localhost? I get "Error connecting to localhost:8888!" although I am able to use Jupyter on "http://localhost:8888/tree#".
Jared Forsyth
@jaredly
hmmm are you on the new 3.0 release? I haven't looked into API changes for that yet
LukasDrude
@LukasDrude
Yes, I am on Python 2.7.6 and iPython 3.0.
Jared Forsyth
@jaredly
ok :/ I'll have to look into what changed there, sorry. For now, ipython 2.x works great :)
LukasDrude
@LukasDrude
Ok, I will set up a virtualenv to test it. But anyhow, I miss the Emacs Org-Mode reference on your starting page. :P
LukasDrude
@LukasDrude
Works like a charm. I will go into details tomorrow.
Jared Forsyth
@jaredly
Glad to hear it!
LukasDrude
@LukasDrude
Did you choose to remove HTML support within Markdown in contrast to iPython?
Jared Forsyth
@jaredly
Hmmm it used to be in there. If I had a reason, I can't remember it off the top of my head.
You can use the %%html magic if you want
But if I can't remember why I removed it, I'll put it back in. It's a one-line change
LukasDrude
@LukasDrude
The HTML code does not disappear when I evaluate %%html magic.
Jared Forsyth
@jaredly
Ah yes that's right. I'm happy to put it back in.
LukasDrude
@LukasDrude
Does the "View Roadmap" link work for you? https://app.notablemind.org/#/gist/jaredly/9680269723596b0249a3
Jared Forsyth
@jaredly
:/. I recently changed es6 transpilers, and there's a lot of things that need a bit of cleanup
LukasDrude
@LukasDrude
Do you mind me pointing out these little glitches? There is no obligation to fix things.
Jared Forsyth
@jaredly
No it's awesome :) nice to have another pair of eyes on it
LukasDrude
@LukasDrude
I am really excited about notablemind because I used Org-Mode for quite some time and always had a hard time defending myself against being called an anachronism.
Jared Forsyth
@jaredly
haha right
LukasDrude
@LukasDrude
Did you decide against MathJax within Markdown as well?
Jared Forsyth
@jaredly
That wasn't a conscious decision, I just haven't put it together
Seems like a desirable thing, though
LukasDrude
@LukasDrude
I wonder how much your project inherits from Jupyter (when you decide to switch). Probably most of the behavior of Jupyter remains?
Jared Forsyth
@jaredly
Was there much in the way of new behavior between jupyter and ipython2.x?
LukasDrude
@LukasDrude
I don't know about implementation details. I just know that Jupyter aims to be language agnostic. Thus, the target is to allow a wider range of kernels. Regarding my Markdown question: maybe many questions (i.e. why is there no inline MathJax/why is there no HTML) become obsolete once you've changed to Jupyter, because the internals (may) change anyhow.
Jared Forsyth
@jaredly
Ah yes. (just looked at the release notes). The "switch between kernels" thing is something I definitely have to get into NM -- I've planned to do that for a while.
Also -- coming from Org mode, let me know what features you miss :)
LukasDrude
@LukasDrude
1. Probably unsolvable: Having a file format which is easy to diff.
2. Bibtex-Integration
3. Tags
4. Properties which I may add to cells (i.e. a best-before-date until I have to review the cell content)
5. A more condensed view (which I may do by changing your css files locally)
6. Interesting way to edit tables (as opposed to CodeMirror for Markdown): Whenever I press <TAB> it aligns all "|" marks.
Don't take this list too seriously.
Jared Forsyth
@jaredly
Cool!
1. tags are definitely coming
2. I'd be interested in having a variety of themes (some people like "breathing room", others like efficiency)
3. A gui table editor is in the works
LukasDrude
@LukasDrude
for 3.: My personal opinion is that it does not need to be a real GUI. A realigning table editor just within CodeMirror is fine: http://orgmode.org/manual/Built_002din-table-editor.html#Built_002din-table-editor
Jared Forsyth
@jaredly
from an implementation perspective, gui would actually be a bit cleaner
LukasDrude
@LukasDrude
Ok, I understand. But you will probably leave all Markdown/ MathJax/ HTML within cells as is (code), right?
Jared Forsyth
@jaredly
yeah
markdown is much better than wysiwyg
LukasDrude
@LukasDrude
That's so true.
LukasDrude
@LukasDrude
I read the part of your tutorial regarding images, but how did you actually do it:
a) I can't drag and drop to a text cell.
b) I can't copy/ paste to a text cell.
Jared Forsyth
@jaredly
But the click-to-upload works?
Oh -- it's a node types thing. Right click on the cell, and change it to an image node
Or t i for the keyboard shortcut in normal mode. alt+t i in edit mode
LukasDrude
@LukasDrude
Ok, Click-To-Upload works. (a) and (b) do not work on image cells either (I am on chromium with KDE).
|
2019-10-18 11:45:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28605926036834717, "perplexity": 4044.7118561111524}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986682037.37/warc/CC-MAIN-20191018104351-20191018131851-00174.warc.gz"}
|
https://www.physicsforums.com/threads/little-problem.399938/
|
# Homework Help: Little(?) problem
1. May 1, 2010
### penguin007
Hi everyone,
I’m studying the correction of an exercise and I there are some points I can’t see:
In this exercise, we introduce an irrational x and a function f such that f(kx)=kx-[kx], with k an integer in the interval [0,N].
It is said that f has N+1 values (Ok) that are included in the intervals [i/N,(i+1)/N] for i integers included in [0,N-1]… I can’t see this last point. If anyone could help me understand this point…
2. May 1, 2010
### Martin Rattigan
$y-[y]\in [0,1]$
$\cup\{[i/N,(i+1)/N]:i\in \mathbb{Z}\cap [0,N-1]\}=[0,1]$.
Is it saying anything more than that?
3. May 2, 2010
### penguin007
Nothing else, they just say that for each k integer included in [0,N], x an irrational, there exists an i integer included in [0,N-1] so that f(kx) is included in the interval [i/N,(i+1)/N] (and then, for the rest of the exercise, since there are N+1 values and N intervals, there are two values included in the same [i/N,(i+1)/N]).
But I don't know why f(kx) is included in [i/N,(i+1)/N]...
4. May 2, 2010
### Martin Rattigan
It's not saying f(kx) is included in any particular [i/N,(i+1)/N], only that it's somewhere in [0,1] and since the sets [i/N,(i+1)/N] cover the whole of [0,1] it's got to be in one of them.
5. May 2, 2010
### penguin007
Ok Martin I got it. (In fact, the important point in this exercise is that there exist two values k1 and k2 such that abs(f(k1x)-f(k2x))<=1/N.)
Thanks very much!!
6. May 2, 2010
### Martin Rattigan
Yes I guessed that. Glad to be of help.
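The pigeonhole step is easy to check numerically. Here is a minimal Python sketch (x = √2 and N = 10 are arbitrary demo choices, not from the thread): it buckets the N+1 fractional parts f(kx) into the N intervals [i/N,(i+1)/N] and reports the first pair landing in the same interval.

```python
import math

x = math.sqrt(2)   # any irrational works for the demo
N = 10

# f(kx) = kx - [kx], the fractional part, for k = 0..N  (N+1 values)
frac = [k * x - math.floor(k * x) for k in range(N + 1)]

buckets = {}
for k, f in enumerate(frac):
    i = min(int(f * N), N - 1)   # index of the interval [i/N, (i+1)/N]
    if i in buckets:             # two values fell into the same interval
        k1 = buckets[i]
        diff = abs(frac[k1] - f)
        print(f"k1={k1}, k2={k}, |f(k1 x) - f(k2 x)| = {diff:.4f} <= 1/N = {1 / N}")
        break
    buckets[i] = k
```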
|
2018-06-20 17:42:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7235392332077026, "perplexity": 1990.7259680261566}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863830.1/warc/CC-MAIN-20180620163310-20180620183310-00334.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/precalculus/precalculus-concepts-through-functions-a-unit-circle-approach-to-trigonometry-3rd-edition/chapter-8-polar-coordinates-vectors-section-8-1-polar-cordinates-8-1-assess-your-understanding-page-591/3
|
## Precalculus: Concepts Through Functions, A Unit Circle Approach to Trigonometry (3rd Edition)
$$\tan \theta=\frac{b}{a}.$$
We know that $\tan \theta=\frac{y}{x}$. This can also be thought of as $\frac{opposite}{adjacent}$ (recall SOH-CAH-TOA). Thus, we have: $$\tan \theta=\frac{b}{a}.$$
|
2021-10-27 06:43:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9078863859176636, "perplexity": 1266.070636874738}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588102.27/warc/CC-MAIN-20211027053727-20211027083727-00398.warc.gz"}
|
https://ankplanet.com/physics/wave/wave-and-wave-motion/wave-and-wave-motion/
|
# Wave And Wave Motion
There are two ways of transferring energy:
1. By the transfer of matter.
2. Without the transfer of matter.
The second method is a wave process. In this process, a disturbance carries energy from one place to another without the transfer of matter. The disturbance is called a wave and the process is called wave motion.
Thus, a wave is defined as a disturbance which propagates from one place to another without net transport of matter, and the process of transfer of the disturbance is called wave motion. Sound, light, water ripples, etc. are examples of waves.
## Characteristics of Wave Motion
• Wave motion is a disturbance propagating in a medium.
• It transfers energy as well as momentum from one point to another.
• When it propagates in a medium, the medium particles vibrate about their mean positions, i.e. their motion is SHM.
• Wave motion has a finite, fixed speed given by $v=fλ$, where $v$ is the velocity, $f$ the frequency and $λ$ the wavelength of the wave.
• The wave speed is different from particle velocity in a medium.
• When a wave travels in a medium, there is a continuous phase difference among the successive medium particles.
• The vibrating particles of the medium possess both kinetic energy and potential energy.
• It undergoes reflection, refraction, interference and diffraction; polarization, however, occurs only for transverse waves.
## Types of Wave
There are three types of wave motion:
1. Electromagnetic wave or non-mechanical wave
The wave which does not require a medium for its propagation is known as electromagnetic wave. Light, heat, radio waves etc. are electromagnetic waves.
2. Mechanical wave
The wave which requires a medium for its propagation is known as mechanical wave. Sound wave, water waves, waves on pipes and strings, seismic waves, etc. are mechanical waves.
3. Matter wave
The wave associated with moving matter is known as a matter wave. It is explained on the basis of quantum mechanics.
### Wavelength (λ)
The distance travelled by a wave in one time period is called wavelength $(λ)$. It is the distance between any two nearest points which are in the same phase.
### Frequency (f)
The number of oscillations/vibrations/cycles per second is called frequency $(f)$. It may also be defined as the number of waves passing through a point per unit time. $f=\frac{\text{Number of cycles}}{\text{time}}$ Its unit is $\text{Hertz}$ $\text{(Hz)}$. $1\;\text{Hz} = 1 \;\text{cycle/second}$.
When a sinusoidal wave passes through a medium, every particle in the medium undergoes SHM with the same frequency. The frequency of the wave is the frequency of the vibrating source.
### Time Period (T)
The time for one complete cycle is called time period $(T)$. $T=\frac{1}{f}$
### Wave Speed
The linear distance covered per unit time by the wave is called wave speed $(v)$.
The distance travelled by a wave in one time period $T$ is $λ$.
The distance travelled by a wave in $1\;\text{sec}$ is therefore $\frac{λ}{T}$, so $v=\frac{λ}{T}=fλ$ The value of $v$ depends on the elastic and inertial properties of the medium, i.e. it is constant for a given medium but changes according to the nature of the medium.
### Particle Speed
During the propagation of a mechanical wave, the medium particles undergo up and down motions around their mean positions. The speed of these medium particles is known as particle speed. The displacement of a medium particle at time $t$ is given by $y=A\sin(ωt-kx)$ Differentiating with respect to time $t$, $\frac{dy}{dt}=Aω\cos(ωt-kx)$ $v=Aω\cos(ωt-kx)$ This gives the particle speed in the medium. For maximum speed, $\cos(ωt-kx)=1$, so $v_{\text{max}}=ωA$ Similarly, the acceleration of the medium particle is given by $a=\frac{d^2y}{dt^2}=-ω^2A\sin(ωt-kx)=-ω^2y$
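As a quick numeric illustration of the particle-speed formula (a sketch with arbitrary example values, not taken from the article), the analytic velocity $Aω\cos(ωt-kx)$ indeed peaks at $ωA$:

```python
import numpy as np

A, f, lam = 0.01, 50.0, 2.0      # amplitude (m), frequency (Hz), wavelength (m); demo values
w = 2 * np.pi * f                # angular frequency omega
k = 2 * np.pi / lam              # wave number

t = np.linspace(0, 1 / f, 1001)  # one period, observed at x = 0
v = A * w * np.cos(w * t)        # particle velocity dy/dt

print(v.max(), w * A)            # both ~3.1416 m/s: v_max = omega * A
```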
### Amplitude
The maximum distance through which a medium particle moves either side of the mean position is called amplitude of the wave. The amplitude at the crest and the trough are identical.
### Wave Number (k)
The number of waves per unit distance is called wave number $(k)$. Its unit is $\text{m}^{-1}$. Mathematically, $\text{Wave Number}\;(k)=\frac{2π}{λ}$
### Phase of a Wave
The argument of the sine/cosine in a given wave equation is called phase. The equation of the progressive wave is, $y=a\sin(ωt-kx)$ In this equation, $(ωt-kx)$ is phase denoted by $Φ$. $Φ=ωt-kx$ The phase changes with both time $(t)$ and space coordinate $(x)$. When one wave is ahead of another by some angle then the difference in angle is called phase difference.
### Relation between Phase Difference and Path Difference
Let us consider a complete wave of wavelength $λ$ which starts from $A$ and reaches $B$. If another wave starts from $B$ then the path difference between these two waves will be $λ$ and phase difference will be $2π$ radian.
For path difference $=λ$, phase difference $= 2π$
For path difference $= 1$, phase difference $= \frac{2π}{λ}$
For path difference $= x$, phase difference $= \frac{2π}{λ}x$
Thus, $\text{Phase Difference} \; (Φ)= \frac{2π}{λ}x$
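A one-line check of this relation (arbitrary demo values):

```python
import math

lam = 0.5                        # wavelength (m)
x = lam / 4                      # path difference of a quarter wavelength

phi = (2 * math.pi / lam) * x    # phase difference
print(phi, phi / math.pi)        # 1.5708 rad, i.e. pi/2
```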
More on Wave And Wave Motion
|
2023-03-25 07:14:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7112231254577637, "perplexity": 529.121198527943}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945317.85/warc/CC-MAIN-20230325064253-20230325094253-00152.warc.gz"}
|
https://mathoverflow.net/tags/fields/hot
|
# Tag Info
64
Differential Galois theory is used, but in algebraic, not differential, geometry, under the name of D-modules. A D-module is an object that is somewhat more complicated than a representation of the differential Galois group, in the same way that a sheaf is more complicated than just a Galois representation, but I think it is cut from the same ...
39
The answer is "yes", see below. Dieudonné in his book "La géométrie des groupes classiques" considers the abstract group $SL_n(K)$ for a field $K$, not necessarily commutative, and writes $PSL_n(K)$ for $SL_n(K)$ modulo the center. In Ch. IV, Section 9, he considers the question whether $PSL_n(K)$ can be isomorphic to $PSL_m(K')$ for $n\ge 2,\ m\ge 2$. He ...
38
A nice theorem is: $\{\pm 1\}$ is the only group that can act freely on a sphere of even dimension. In contrast: There are infinitely many groups acting freely on every odd-dimensional sphere.
33
Polynomials are, essentially by definition, precisely the operations one can write down starting from addition and multiplication. More formally, polynomials with coefficients in a commutative ring $R$ are precisely the morphisms in the Lawvere theory of commutative $R$-algebras. So in some sense caring about polynomials is equivalent to caring about ...
31
It seems not. It was shown by Di Prisco and Todorcevic (and reproved later by at least three sets of authors) that if sufficiently large cardinals exist (e.g., a proper class of Woodin cardinals), then after forcing with $\mathcal{P}(\omega)/\mathrm{Fin}$ (the infinite subsets of $\omega$, ordered by mod-finite containment) to produce a selective ...
31
As indicated by KConrad in his comments, differential Galois theory is used in the part of transcendental number theory that tries to establish algebraic/linear independence of values of special functions at algebraic numbers. Examples are given by the theorems of Siegel-Shidlovski, Nesterenko, etc. Roughly speaking, its rôle is to guarantee that an ...
27
Fields are the simple (no nontrivial quotients) commutative rings. Grothendieck told us to work in nice categories with nasty objects rather than nasty categories with nice objects; fields are the nice objects, and the nice category they live in is all commutative rings. You want to work in all commutative rings and sometimes study fields the same way you ...
25
I think you are lumping too many disparate kinds of fields together under the heading "zero-dimensional". As Jason says in his answer, there are some precise definitions of dimensions of fields (e.g. cohomological dimension but also other definitions of a field-arithmetic nature). Another important comment is that in modern algebraic / arithmetic geometry ...
24
This is more of a joke than a serious example. Let $K$ be a field, $K_+$ its additive group, and $K_*$ its multiplicative group. Thus $\mathbb{R}_*\cong \mathbb{R}_+\times (\mathbb{Z}/2\mathbb{Z})$. What fields have the "opposite" property, that is, $K_+\cong K_*\times (\mathbb{Z}/2\mathbb{Z})$? Answer: only $\mathbb{Z}/2\mathbb{Z}$.
23
The answer to the question is yes, though I don't have all the old literature at my fingertips. This kind of question for various classes of linear groups has a long history in the study of homomorphisms and isomorphisms of classical groups and then other algebraic groups (van der Waerden, Dieudonne, ...) The most comprehensive treatment was given by Borel ...
22
Here is a self-contained argument. First, as Jeremy Rickard observes, $K \otimes K \cong K \otimes_k K$, where $k$ is the prime subfield of $K$ (so $\mathbb{Q}$ if $K$ has characteristic zero and $\mathbb{F}_p$ if $K$ has characteristic $p$). If $K \otimes_k K$ is a field, then as Denis Nardin observes, the multiplication map $K \otimes_k K \xrightarrow{m}$ ...
21
Conway's nimbers form an interesting answer for $p=2$. That every Field of characteristic $2$ embeds into it follows from the fact they form an algebraically closed Field and that they contain arbitrarily large sets of algebraically independent elements (which is immediate because the Field is proper-class-sized). This has been generalized by DiMuro to ...
20
I suspect the question is not answerable because utility can be subjective and rationalizing subjective notions leads to more arguments and less elucidation. However, this gives me an opportunity to mention a couple aspects of polynomials that deserve more press. Polynomials are a generalization of number representation, replacing base 10 or base 2 by base ...
20
If you do not impose an algebraically closed condition, no two are equivalent. This basically follows from your (3). Namely, observe that An extension is finite if and only if it has finitely many subextensions (otherwise, it contains $\mathbb{F}_p(x)$, which then contains $\mathbb{F}_p(x^n)$ for all $n$). $\bar{\mathbb{F}}_p$ is the unique extension ...
20
If $k$ is not algebraically closed, such a polynomial always exists (the opposite is also true and is mentioned in the post). We may assume that $a_i=0$ for all $i$. Take an irreducible polynomial $g(x)$ of degree $d>1$, then for the homogeneous form $G(x,y)=y^dg(x/y)$ we have $G(x,y)=0$ if only if $x=y=0$. This solves the case $n=2$, for $n=3$ consider ...
19
I already wrote this in the comments but I think this might be worth of an answer. I think we can classify all fields $K$ such that $K\otimes K$ is a field. Claim If $K$ is a field such that $K\otimes_\mathbb{Z}K$ is a field then the multiplication map $K\otimes_\mathbb{Z} K\to K$ is an isomorphism In fact the multiplication map is always a surjection ...
19
An algebraically closed field is determined up to isomorphism by its characteristic and its transcendence degree over its prime field. So every algebraically closed field of characteristic $p$ is isomorphic to the algebraic closure of $\mathbb{F}_p(X)$, where $X$ is some set of variables. This suggests that the "biggest field of characteristic $p$" should ...
18
If you're taking the definition of rational to be: birational to $\mathbb{P}^1$ over the field $k$, then the stated property is not even true. There are conics which have no rational points, and so are not rational, but are rational over a quadratic extension. For example, the affine conic $x^2 + y^2 + 1 = 0$ over the field $\mathbb{Q}$. Added: since you ...
18
If such polynomials exist, there will only be finitely many of them, up to composing on both sides with scalar polynomials $\alpha x$ with $\alpha\in\mathbf{Q}$. More generally, Guralnick and Shareshian proved that if $d=7$ or $d>8$ then there are only finitely many equivalence classes of irreducible degree-$d$ trinomials in $\mathbf{Q}[x]$ whose Galois ...
17
This is the most complete treatment I could come up with. Let $k \subseteq K^{\operatorname{sep}} \subseteq K^{\operatorname{alg}} \subseteq K$ and $k \subseteq L^{\operatorname{sep}} \subseteq L^{\operatorname{alg}} \subseteq L$ be the separable algebraic and algebraic closures of $k$ in $K$ and $L$. The result is the following. Theorem. Let $k \subseteq K$...
17
It is the only non-trivial group whose free square ($G*G$) satisfies a non-trivial identity (or is solvable, or is amenable...) Edit (Nov 9, 2014), suggested by Sam Nead ... or is virtually cyclic, or is two-ended, or contains no nonabelian free subgroups...
17
At risk of being overly bold, allow me to suggest: Polynomials are useful because quadratic polynomials are useful. If we can all agree that linear algebra is an indispensable tool in mathematics then it's hard to argue with the success of equipping vector spaces with quadratic structures - this is the starting point of nearly all of geometry and large ...
15
From Dirk van Dalen's Logic and Structure: the theory of algebraically closed fields is not finitely axiomatizable (see page 109 and preceding).
15
It seems that indeed only prime fields are unrepeatable. Proof: Let $k$ be unrepeatable and $F\subseteq k$ denote the prime field of $k$. Let $T\subseteq k$ be a transcendence base of $k/F$ and let $G=F(T)$. If $T\neq\emptyset$, then $G/F$ has non-trivial automorphisms (say take one element $t\in T$ to $t+1$). Since $k/G$ is algebraic this extends to a ...
14
First, let me say that the set/class issue is not a problem to deal with properly, and so one shouldn't be very worried about it. It is true as you say that the surreal numbers No are a proper class, and they do not form a set. So in a purely technical sense, they are not a field. But nevertheless, they do satisfy all the field axioms and have all the usual ...
14
I think the explanation is that the concept arose somewhat independently with English-speaking mathematicians. See the discussion of the earliest known use at http://jeff560.tripod.com/f.html
14
Let $\zeta = e^{2\pi i/p}$ be a primitive $p$th root of unity. Then $2 \cos (2\pi k/p) = \zeta^k + \zeta^{-k}$. The Galois group of $\mathbb Q(\zeta)$ is isomorphic to $(\mathbb Z/p \mathbb Z)^\times$ and acts transitively on the powers $\zeta^k$ with $1 \le k \le p-1$. What you want is that the Galois action fixes the set $\{x_1, x_2, x_3\}$. Now $x_j = \...
14
Disclaimer: The following perhaps isn't an answer to your question as stated, so my apologies if this answer is useless to you. However, you're asking for how to treat this problem "honestly", and I think that adding the right kind of historical perspective falls under the heading of honesty. Anyway, I think it is important to observe here that the ancient ...
14
First you have to observe that since all extensions of the valuation to $\bar{K}$ are conjugate, $\hat{\bar{K}}$ is well-defined up to (non-unique) isomorphism. Now, since $\hat{\bar{K}}$ is complete and $K$ is dense in $\hat{K}$, the inclusion $K\subset \hat{\bar{K}}$ extends continuously to $K\subset \hat{K}\subset \hat{\bar{K}}$ (in fact you can identify ...
|
2020-10-22 12:47:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9437659978866577, "perplexity": 230.53925667702526}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107879537.28/warc/CC-MAIN-20201022111909-20201022141909-00391.warc.gz"}
|
https://www.physicsforums.com/threads/the-precise-definition-of-limits.569856/
|
# The precise definition of limits
1. Jan 22, 2012
### bobsmith76
If you look at this text towards the very end they are dividing by 2. I don't see why.
2. Jan 22, 2012
### bobsmith76
never mind, I got it.
|
2017-11-24 10:21:23
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9350432753562927, "perplexity": 3401.013607979524}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934807344.89/warc/CC-MAIN-20171124085059-20171124105059-00018.warc.gz"}
|
http://mathhelpforum.com/algebra/91641-right-mk2-print.html
|
# Is this right MK2
• Jun 3rd 2009, 04:58 AM
Joel
Is this right MK2
Hi guys,
Got this bit right.......... thanks for the confirmation
(a) Expand ( x + 1/x )² = x² + 2x(1/x) + 1/x² = x² + 2 + 1/x²
(b) Suppose that x + 1/x = 3. Use part (a) to evaluate x² + 1/x² without attempting to find the value of x.
ok my thoughts are..... ( x + 1/x )² = (3)² from above.
so i think i should use the solution x² + 2 + 1/x², which would be 9 + 2.
The question asks me to evaluate, but I think I may be on the wrong track, as I feel I am solving for x.
Am I doing the wrong thing here?
• Jun 3rd 2009, 05:26 AM
Prove It
Quote:
Originally Posted by Joel
Hi guys,
Got this bit right.......... thanks for the confirmation
(a) Expand ( x + 1/x )² = x² + 2x(1/x) + 1/x² = x² + 2 + 1/x²
(b) Suppose that x + 1/x = 3. Use part (a) to evaluate x² + 1/x² without attempting to find the value of x.
ok my thoughts are..... ( x + 1/x )² = (3)² from above.
so i think i should use the solution x² + 2 + 1/x², which would be 9 + 2.
The question asks me to evaluate, but I think I may be on the wrong track, as I feel I am solving for x.
Am I doing the wrong thing here?
$\left(x + \frac{1}{x}\right)^2 = x^2 + 2 + \frac{1}{x^2}$.
If $x + \frac{1}{x} = 3$, then $\left(x + \frac{1}{x}\right)^2 = 3^2 = 9$.
Thus $x^2 + 2 + \frac{1}{x^2} = 9$
And so $x^2 + \frac{1}{x^2} = 7$.
• Jun 3rd 2009, 05:31 AM
masters
Quote:
Originally Posted by Joel
Hi guys,
Got this bit right.......... thanks for the confirmation
(a) Expand ( x + 1/x )² = x² + 2x(1/x) + 1/x² = x² + 2 + 1/x²
(b) Suppose that x + 1/x = 3. Use part (a) to evaluate x² + 1/x² without attempting to find the value of x.
ok my thoughts are..... ( x + 1/x )² = (3)² from above.
so i think i should use the solution x² + 2 + 1/x², which would be 9 + 2.
The question asks me to evaluate, but I think I may be on the wrong track, as I feel I am solving for x.
Am I doing the wrong thing here?
Hi Joel,
(a) Expand $\left(x+\frac{1}{x}\right)^2$
$\left(x+\frac{1}{x}\right)^2=x^2+2+\frac{1}{x^2}$
(b) Suppose $x+\frac{1}{x}=3$
Use part (a) to evaluate $x^2+\frac{1}{x^2}$ without attempting to find the value of x.
$x + \frac{1}{x}=3$
$\left(x+\frac{1}{x}\right)^2=9$
$x^2+\frac{1}{x^2}+2=9$
$x^2+\frac{1}{x^2}=7$
Edit: Too slow......Prove It got there first. Rats!
• Jun 3rd 2009, 05:38 AM
Joel
Ok Thanks heaps guys,
I was on the right track but looking at your solutions, I was going about the last step by solving it rather than representing it via x.
Cheers
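The identity in the thread is easy to verify symbolically, e.g. with sympy (a minimal sketch):

```python
import sympy as sp

x = sp.symbols('x', nonzero=True)

# (x + 1/x)^2 expands to x^2 + 2 + 1/x^2
print(sp.expand((x + 1 / x) ** 2))

# so if x + 1/x = 3, then x^2 + 1/x^2 = 3^2 - 2 = 7
print(3 ** 2 - 2)
```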
|
2017-12-15 09:07:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 13, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.826089084148407, "perplexity": 1810.3869960966701}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948567785.59/warc/CC-MAIN-20171215075536-20171215095536-00548.warc.gz"}
|
https://en-academic.com/dic.nsf/enwiki/583726
|
# Gauss–Newton algorithm
Gauss–Newton algorithm
The Gauss–Newton algorithm is a method used to solve non-linear least squares problems. It can be seen as a modification of Newton's method for finding a minimum of a function. Unlike Newton's method, the Gauss–Newton algorithm can only be used to minimize a sum of squared function values, but it has the advantage that second derivatives, which can be challenging to compute, are not required.
Non-linear least squares problems arise for instance in non-linear regression, where parameters in a model are sought such that the model is in good agreement with available observations.
The method is named after the mathematicians Carl Friedrich Gauss and Isaac Newton.
## Description
Given m functions r1, …, rm of n variables β = (β1, …, βn), with m ≥ n, the Gauss–Newton algorithm finds the minimum of the sum of squares[1]
$S(\boldsymbol \beta)= \sum_{i=1}^m r_i^2(\boldsymbol \beta).$
Starting with an initial guess $\boldsymbol \beta^{(0)}$ for the minimum, the method proceeds by the iterations
$\boldsymbol \beta^{(s+1)} = \boldsymbol \beta^{(s)}+\Delta,$
where Δ is a small step. We then have
$S(\boldsymbol \beta^{(s)} + \Delta) \approx S(\boldsymbol \beta^{(s)}) + \left[\frac{\partial S}{\partial \beta_i}\right]^\top \Delta + \frac{1}{2} \Delta^\top \left[\frac{\partial^2 S}{\partial \beta_i\partial \beta_j}\right] \Delta$.
If we define the Jacobian matrix
$\mathbf{J_r}(\boldsymbol \beta) = \left.\frac{\partial r_i}{\partial \beta_j}\right|_{\boldsymbol \beta}$,
we can replace
$\left[\frac{\partial S}{\partial \beta_i}\right]$ with $2\,\mathbf{J_r}^\top \mathbf{r}$
and the Hessian matrix on the right can be approximated by $2\,\mathbf{J_r}^\top \mathbf{J_r}$ (assuming small residuals), giving:
$S(\boldsymbol \beta^{(s)} + \Delta) \approx S(\boldsymbol \beta^{(s)}) + 2\,\mathbf{r}^\top \mathbf{J_r}\,\Delta + \Delta^\top \mathbf{J_r}^\top \mathbf{J_r}\,\Delta$.
We then take the derivative with respect to Δ and set it equal to zero to find a solution:
$S'(\boldsymbol \beta^{(s)} + \Delta) \approx 2\,\mathbf{J_r}^\top \mathbf{r} + 2\,\mathbf{J_r}^\top \mathbf{J_r} \Delta = 0$.
The factors of 2 cancel, and rearranging gives the normal equations, which can be solved for Δ:
$\left(\mathbf{J_r}^\top \mathbf{J_r} \right)\Delta = - \mathbf{ J_r} ^\top \mathbf{r}.$
In data fitting, where the goal is to find the parameters β such that a given model function y = f(x, β) fits best some data points (xi, yi), the functions ri are the residuals
$r_i(\boldsymbol \beta)= y_i - f(x_i, \boldsymbol \beta).$
Then, the increment Δ can be expressed in terms of the Jacobian of the function f, as
$\left( \mathbf{ J_f}^\top \mathbf{J_f} \right)\Delta = \mathbf{J_f}^\top \mathbf{r}.$
## Notes
The assumption m ≥ n in the algorithm statement is necessary, as otherwise the matrix $\mathbf{J_r}^\top \mathbf{J_r}$ is not invertible and the normal equations cannot be solved (at least uniquely).
The Gauss–Newton algorithm can be derived by linearly approximating the vector of functions ri. Using Taylor's theorem, we can write at every iteration:
$\mathbf{r}(\boldsymbol \beta)\approx \mathbf{r}(\boldsymbol \beta^s)+\mathbf{J_r}(\boldsymbol \beta^s)\Delta$
with $\Delta=\boldsymbol \beta-\boldsymbol \beta^s.$ The task of finding Δ minimizing the sum of squares of the right-hand side, i.e.,
$\mathbf{min}\|\mathbf{r}(\boldsymbol \beta^s)+\mathbf{J_r}(\boldsymbol \beta^s)\Delta\|_2^2$,
is a linear least squares problem, which can be solved explicitly, yielding the normal equations in the algorithm.
The normal equations are n simultaneous linear equations in the unknown increments, Δ. They may be solved in one step, using Cholesky decomposition, or, better, the QR factorization of Jr. For large systems, an iterative method, such as the conjugate gradient method, may be more efficient. If there is a linear dependence between columns of Jr, the iterations will fail as $\mathbf{J_r}^\top \mathbf{J_r}$ becomes singular.
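As a sketch of one such step in NumPy: `np.linalg.lstsq` solves the linear least-squares subproblem via an orthogonal factorization, avoiding the explicitly formed (and worse-conditioned) $\mathbf{J_r}^\top \mathbf{J_r}$. Here `residual` and `jacobian` are assumed problem-specific callables, not anything defined in this article.

```python
import numpy as np

def gauss_newton_step(beta, residual, jacobian):
    """One Gauss-Newton update: minimize ||r + J @ delta|| over delta.

    residual(beta) -> r, shape (m,);  jacobian(beta) -> J, shape (m, n).
    """
    r = residual(beta)
    J = jacobian(beta)
    delta, *_ = np.linalg.lstsq(J, -r, rcond=None)   # QR-based solve
    return beta + delta
```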
## Example
Calculated curve obtained with $\hat\beta_1=0.362$ and $\hat\beta_2=0.556$ (in blue) versus the observed data (in red).
In this example, the Gauss–Newton algorithm will be used to fit a model to some data by minimizing the sum of squares of errors between the data and model's predictions.
In a biology experiment studying the relation between substrate concentration [S] and reaction rate in an enzyme-mediated reaction, the data in the following table were obtained.
| i | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|---|---|---|---|---|---|---|---|
| [S] | 0.038 | 0.194 | 0.425 | 0.626 | 1.253 | 2.5 | 3.74 |
| rate | 0.05 | 0.127 | 0.094 | 0.2122 | 0.2729 | 0.2665 | 0.3317 |
It is desired to find a curve (model function) of the form
$\text{rate}=\frac{V_\text{max}[S]}{K_M+[S]}$
that fits best the data in the least squares sense, with the parameters Vmax and KM to be determined.
Denote by xi and yi the value of [S] and the rate from the table, $i=1, \dots, 7.$ Let β1 = Vmax and β2 = KM. We will find β1 and β2 such that the sum of squares of the residuals
$r_i = y_i - \frac{\beta_1x_i}{\beta_2+x_i}$ ($i=1,\dots, 7$)
is minimized.
The Jacobian $\mathbf{J_r}$ of the vector of residuals ri with respect to the unknowns βj is a $7\times 2$ matrix with the i-th row having the entries
$\frac{\partial r_i}{\partial \beta_1}= -\frac{x_i}{\beta_2+x_i},\ \frac{\partial r_i}{\partial \beta_2}= \frac{\beta_1x_i}{\left(\beta_2+x_i\right)^2}.$
Starting with the initial estimates of β1=0.9 and β2=0.2, after five iterations of the Gauss–Newton algorithm the optimal values $\hat\beta_1=0.362$ and $\hat\beta_2=0.556$ are obtained. The sum of squares of residuals decreased from the initial value of 1.445 to 0.00784 after the fifth iteration. The plot in the figure on the right shows the curve determined by the model for the optimal parameters versus the observed data.
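The fit is straightforward to reproduce; here is a minimal NumPy sketch using the table data, the Jacobian entries above, and the same initial estimates:

```python
import numpy as np

# [S] and rate from the table above
x = np.array([0.038, 0.194, 0.425, 0.626, 1.253, 2.5, 3.74])
y = np.array([0.05, 0.127, 0.094, 0.2122, 0.2729, 0.2665, 0.3317])

beta = np.array([0.9, 0.2])                        # initial (Vmax, KM)
for _ in range(5):                                 # five Gauss-Newton iterations
    r = y - beta[0] * x / (beta[1] + x)            # residuals
    J = np.column_stack([-x / (beta[1] + x),       # dr_i/dbeta_1
                         beta[0] * x / (beta[1] + x) ** 2])  # dr_i/dbeta_2
    delta, *_ = np.linalg.lstsq(J, -r, rcond=None)
    beta += delta

r = y - beta[0] * x / (beta[1] + x)
print(beta, (r ** 2).sum())   # ~(0.362, 0.556), S ~ 0.00784
```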
## Convergence properties
It can be shown[2] that the increment Δ is a descent direction for S, and, if the algorithm converges, then the limit is a stationary point of S. However, convergence is not guaranteed, not even local convergence as in Newton's method.
The rate of convergence of the Gauss–Newton algorithm can approach quadratic.[3] The algorithm may converge slowly or not at all if the initial guess is far from the minimum or the matrix $\mathbf{J_r^T J_r}$ is ill-conditioned. For example, consider the problem with m = 2 equations and n = 1 variable, given by
\begin{align} r_1(\beta) &= \beta + 1 \\ r_2(\beta) &= \lambda \beta^2 + \beta - 1. \end{align}
The optimum is at β = 0. If λ = 0 then the problem is in fact linear and the method finds the optimum in one iteration. If |λ| < 1 then the method converges linearly and the error decreases asymptotically with a factor |λ| at every iteration. However, if |λ| > 1, then the method does not even converge locally.[4]
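A small sketch reproducing this behavior near β = 0: for λ = 0.5 the iterates shrink by roughly the factor λ each step, while for λ = 2 they move away from β = 0.

```python
import numpy as np

for lam in (0.5, 2.0):
    b = 0.01                                       # start very close to beta = 0
    hist = [b]
    for _ in range(8):
        r = np.array([b + 1, lam * b ** 2 + b - 1])
        J = np.array([[1.0], [2 * lam * b + 1]])   # Jacobian of (r1, r2)
        delta, *_ = np.linalg.lstsq(J, -r, rcond=None)
        b += delta[0]
        hist.append(b)
    print(lam, [f"{v:.2e}" for v in hist])         # shrinks for 0.5, grows for 2
```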
## Derivation from Newton's method
In what follows, the Gauss–Newton algorithm will be derived from Newton's method for function optimization via an approximation. As a consequence, the rate of convergence of the Gauss–Newton algorithm is at most quadratic.
The recurrence relation for Newton's method for minimizing a function S of parameters, β, is
$\boldsymbol\beta^{(s+1)} = \boldsymbol\beta^{(s)} - \mathbf H^{-1} \mathbf g \,$
where g denotes the gradient vector of S and H denotes the Hessian matrix of S. Since $S = \sum_{i=1}^m r_i^2$, the gradient is given by
$g_j=2\sum_{i=1}^m r_i\frac{\partial r_i}{\partial \beta_j}.$
Elements of the Hessian are calculated by differentiating the gradient elements, gj, with respect to βk
$H_{jk}=2\sum_{i=1}^m \left(\frac{\partial r_i}{\partial \beta_j}\frac{\partial r_i}{\partial \beta_k}+r_i\frac{\partial^2 r_i}{\partial \beta_j \partial \beta_k} \right).$
The Gauss–Newton method is obtained by ignoring the second-order derivative terms (the second term in this expression). That is, the Hessian is approximated by
$H_{jk}\approx 2\sum_{i=1}^m J_{ij}J_{ik}$
where $J_{ij}=\frac{\partial r_i}{\partial \beta_j}$ are entries of the Jacobian Jr. The gradient and the approximate Hessian can be written in matrix notation as
$\mathbf g=2\mathbf{J_r}^\top \mathbf{r}, \quad \mathbf{H} \approx 2 \mathbf{J_r}^\top \mathbf{J_r}.\,$
These expressions are substituted into the recurrence relation above to obtain the operational equations
$\boldsymbol{\beta}^{(s+1)} = \boldsymbol\beta^{(s)}+\Delta;\quad \Delta = -\left( \mathbf{J_r}^\top \mathbf{J_r} \right)^{-1} \mathbf{J_r}^\top \mathbf{r}.$
Convergence of the Gauss–Newton method is not guaranteed in all instances. The approximation
$\left|r_i\frac{\partial^2 r_i}{\partial \beta_j \partial \beta_k}\right| \ll \left|\frac{\partial r_i}{\partial \beta_j}\frac{\partial r_i}{\partial \beta_k}\right|$
that needs to hold to be able to ignore the second-order derivative terms may be valid in two cases, for which convergence is to be expected.[5]
1. The function values ri are small in magnitude, at least around the minimum.
2. The functions are only "mildly" non-linear, so that $\frac{\partial^2 r_i}{\partial \beta_j \partial \beta_k}$ is relatively small in magnitude.
## Improved versions
With the Gauss–Newton method the sum of squares S may not decrease at every iteration. However, since Δ is a descent direction, unless $S(\boldsymbol \beta^s)$ is a stationary point, it holds that $S(\boldsymbol \beta^s+\alpha\Delta) < S(\boldsymbol \beta^s)$ for all sufficiently small α > 0. Thus, if divergence occurs, one solution is to employ a fraction, α, of the increment vector, Δ in the updating formula
$\boldsymbol \beta^{s+1} = \boldsymbol \beta^s+\alpha\ \Delta$.
In other words, the increment vector is too long, but it points "downhill", so going just part of the way will decrease the objective function S. An optimal value for α can be found by using a line search algorithm, that is, the magnitude of α is determined by finding the value that minimizes S, usually using a direct search method in the interval 0 < α < 1.
In cases where the direction of the shift vector is such that the optimal fraction, α, is close to zero, an alternative method for handling divergence is the use of the Levenberg–Marquardt algorithm, also known as the "trust region method".[1] The normal equations are modified in such a way that the increment vector is rotated towards the direction of steepest descent,
$\left(\mathbf{J^TJ+\lambda D}\right)\Delta=\mathbf{J}^T \mathbf{r}$,
where D is a positive diagonal matrix. Note that when D is the identity matrix and $\lambda\to+\infty$, then $\Delta/\lambda\to \mathbf{J}^T \mathbf{r}$, therefore the direction of Δ approaches the direction of the gradient $\mathbf{J}^T \mathbf{r}$.
The so-called Marquardt parameter, λ, may also be optimized by a line search, but this is inefficient as the shift vector must be re-calculated every time λ is changed. A more efficient strategy is this. When divergence occurs increase the Marquardt parameter until there is a decrease in S. Then, retain the value from one iteration to the next, but decrease it if possible until a cut-off value is reached when the Marquardt parameter can be set to zero; the minimization of S then becomes a standard Gauss–Newton minimization.
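A minimal sketch of this strategy, assuming D is the identity and the sign convention of the normal equations above (`residual` and `jacobian` are hypothetical problem-specific callables):

```python
import numpy as np

def lm_fit(beta, residual, jacobian, iters=50, lam=1e-3):
    """Levenberg-Marquardt damping loop (minimal sketch, D = I)."""
    beta = np.asarray(beta, dtype=float)
    r = residual(beta)
    S = r @ r
    for _ in range(iters):
        J = jacobian(beta)
        while lam < 1e12:                          # damp until S decreases
            A = J.T @ J + lam * np.eye(len(beta))
            delta = np.linalg.solve(A, -(J.T @ r))
            r_new = residual(beta + delta)
            if r_new @ r_new < S:                  # accept: relax damping
                beta, r, S = beta + delta, r_new, r_new @ r_new
                lam /= 10
                break
            lam *= 10                              # reject: damp harder
        else:
            break                                  # no acceptable step found
    return beta
```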
## Related algorithms
In a quasi-Newton method, such as that due to Davidon, Fletcher and Powell or Broyden–Fletcher–Goldfarb–Shanno (BFGS), an estimate of the full Hessian, $\frac{\partial^2 S}{\partial \beta_j \partial\beta_k}$, is built up numerically using first derivatives $\frac{\partial r_i}{\partial\beta_j}$ only, so that after n refinement cycles the method closely approximates Newton's method in performance. Note that quasi-Newton methods can minimize general real-valued functions, whereas Gauss–Newton, Levenberg–Marquardt, etc. apply only to nonlinear least-squares problems.
Another method for solving minimization problems using only first derivatives is gradient descent. However, this method does not take into account the second derivatives even approximately. Consequently, it is highly inefficient for many functions, especially if the parameters have strong interactions.
## Notes
1. ^ a b Björck (1996)
2. ^ Björck (1996) p260
3. ^ Björck (1996) p341, 342
4. ^ Fletcher (1987) p.113
5. ^ Nocedal (1997)[page needed]
## References
• Björck, A. (1996). Numerical methods for least squares problems. SIAM, Philadelphia. ISBN 0-89871-360-9.
• Fletcher, Roger (1987). Practical methods of optimization (2nd ed.). New York: John Wiley & Sons. ISBN 978-0-471-91547-8. .
• Nocedal, Jorge; Wright, Stephen (1999). Numerical optimization. New York: Springer. ISBN 0387987932.
|
2021-05-08 20:20:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 48, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9174032807350159, "perplexity": 753.3838538656751}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988923.22/warc/CC-MAIN-20210508181551-20210508211551-00290.warc.gz"}
|
https://www.stata.com/stata10/morestat.html
|
## General statistics
Here are the details:
1. You can now save estimation results to disk. You type
estimates save filename
to save results and
estimates use filename
to reload them. In fact, the entire estimates command has been reworked. The new command estimates notes allows you to add notes to estimation results just as you add them to datasets. The new command estimates esample allows you to restore e(sample) after reloading estimates, should that be necessary (usually it is not). The maximum number of estimation results that can be held in memory (as opposed to saved on disk) is increased to 300 from 20. See [R] estimates.
2. Stata now has exact logistic and exact Poisson regression. Rather than having their inference based on asymptotic normality, exact estimators enumerate the conditional distribution of the sufficient statistics and then base inference upon that distribution. In small samples, exact methods have better coverage than asymptotic methods, and exact methods are the only way to obtain point estimates, tests, and confidence intervals from covariates that perfectly predict the observed outcome.
Postestimation command estat se reports odds ratios and their asymptotic standard errors. estat predict, available only after exlogistic, computes predicted probabilities, asymptotic standard errors, and exact confidence intervals for single cases.
See [R] exlogistic and [R] expoisson.
3. New estimation command asclogit performs alternative-specific conditional logistic regression, which includes McFadden’s choice model. Postestimation command estat alternatives reports alternative-specific summary statistics. estat mfx reports marginal effects of regressors on probabilities of each alternative. See [R] asclogit and [R] asclogit postestimation.
4. New estimation command asroprobit performs alternative-specific rank-ordered probit regression. asroprobit is related to rank-ordered logistic regression (rologit) but allows modeling alternative-specific effects and modeling the covariance structure of the alternatives. Postestimation command estat alternatives provides summary statistics about the alternatives in the estimation sample. estat covariance displays the variance–covariance matrix of the alternatives. estat correlation displays the correlation matrix of the alternatives. estat mfx computes the marginal effects of regressors on the probability of the alternatives. See [R] asroprobit and [R] asroprobit postestimation.
5. New estimation command ivregress performs single-equation instrumental-variables regression by two-stage least squares, limited-information maximum likelihood, or generalized method of moments. Robust and HAC covariance matrices may be requested. Postestimation command estat firststage provides various descriptive statistics and tests of instrument relevance. estat overid tests overidentifying restrictions. ivregress replaces the previous ivreg command. See [R] ivregress and [R] ivregress postestimation.
6. New estimation command nlsur fits a system of nonlinear equations by feasible generalized least squares, allowing for covariances among the equations; see [R] nlsur.
7. Existing estimation command nlogit was rewritten and has new, better syntax and runs faster when there are more than two levels. Old syntax is available under version control. nlogit now optionally fits the random utilities maximization (RUM) model as well as the nonnormalized model that was available previously. The new nlogit now allows unbalanced groups and allows groups to have different sets of alternatives. nlogit now excludes entire choice sets (cases) if any alternative (observation) has a missing value; use new option altwise to exclude just the alternatives (observations) with missing values. Finally, vce(robust) is allowed regardless of the number of nesting levels. See [R] nlogit.
8. Existing estimation command asmprobit has the following enhancements:
1. The new default parameterization estimates the covariance of the alternatives differenced from the base alternative, making the estimates invariant to the choice of base. New option structural specifies that the previously structural (nondifferenced) covariance parameterization be used.
2. asmprobit now permits estimation of the constant-only model.
3. asmprobit now excludes entire choice sets (cases) if any alternative (observation) has a missing value; use new option altwise to exclude just the alternatives (observations) with missing values.
4. New postestimation command estat mfx computes marginal effects after asmprobit.
See [R] asmprobit and [R] asmprobit postestimation.
9. Existing estimation command clogit now accepts pweights and may be used with the svy: prefix.
Also, clogit used to be willing to produce cluster-robust VCEs when the groups were not nested within the clusters. Sometimes, this VCE was consistent, and other times it was not. You must now specify the new nonest option to obtain a cluster-robust VCE when the groups are not nested within panels.
predict after clogit now accepts options that calculate the Δβ influence statistic, the Δchi2 lack-of-fit statistic, the Hosmer and Lemeshow leverage, the Pearson residuals, and the standardized Pearson residuals.
See [R] clogit and [R] clogit postestimation.
10. Existing estimation command cloglog now accepts pweights, may now be used with the svy: prefix, and has new option eform that requests that exponentiated coefficients be reported; see [R] cloglog.
11. Existing estimation command cnreg now accepts pweights, may be used with the svy: prefix, and is now noticeably faster (up to five times faster) when used within loops, such as by statsby. See [R] cnreg.
12. Existing estimation commands cnsreg and tobit now accept pweights, may be used with the svy: prefix, and are now noticeably faster (up to five times faster) when used within loops, such as by statsby. Also, cnsreg now has new advanced option mse1 that sets the mean squared error to 1. See [R] cnsreg and [R] tobit.
13. Existing estimation command regress is now noticeably faster (up to five times faster) when used with loops, such as by statsby. Also,
1. Postestimation command estat hettest has new option iid that specifies that an alternative version of the score test be performed that does not require the normality assumption. New option fstat specifies that an alternative F test be performed that also does not require the normality assumption.
2. Existing postestimation command estat vif has new option uncentered that specifies that uncentered variance inflation factors be computed.
See [R] regress postestimation.
14. Existing estimation commands logit, mlogit, ologit, oprobit, and probit are now noticeably faster (up to five times faster) when used within loops, such as by statsby.
15. For existing estimation command probit, predict now allows the deviance option; see [R] probit postestimation.
16. Existing estimation command nl has the following enhancements:
1. Option vce(vcetype) is now allowed, with supported vcetypes that include types derived from asymptotic theory, that are robust to some kinds of misspecification, that allow for intragroup correlation, and that use bootstrap or jackknife methods. Also, three heteroskedastic- and autocorrelation-consistent variance estimators are available.
2. nl no longer reports an overall model F test because the test that all parameters other than the constant are jointly zero may not be appropriate in arbitrary nonlinear models.
3. The coefficient table now reports each parameter as its own equation, analogous to how ml reports single-parameter equations.
4. predict after nl has new options that allow you to obtain the probability that the dependent variable lies within a given interval, the expected value of the dependent variable conditional on its being censored, and the expected value of the dependent variable conditional on its being truncated. These predictions assume that the error term is normally distributed.
5. mfx can be used after nl to obtain marginal effects.
6. lrtest can be used after nl to perform likelihood-ratio tests.
See [R] nl and [R] nl postestimation.
17. Existing estimation command mprobit now allows pweights, may now be used with the svy: prefix, and has new option probitparam that specifies that the probit variance parameterization, which fixes the variance of the differenced latent errors between the scale and the base alternatives to one, be used. See [R] mprobit.
18. Existing estimation command rologit now allows vce(bootstrap) and vce(jackknife). See [R] rologit.
19. Existing estimation command truncreg now allows pweights and now works with the svy: prefix. See [SVY] svy estimation.
20. After existing estimation command ivprobit, postestimation commands estat classification, lroc, and lsens are now available. Also, in ivprobit, the order of the ancillary parameters in the output has been changed to reflect the order in e(b). See [R] ivprobit and [R] ivprobit postestimation.
21. All estimation commands that allowed options robust and cluster() now allow option vce(vcetype). vce() specifies how the variance–covariance matrix of the estimators (and hence the standard errors) is to be calculated. This syntax was introduced in Stata 9, with options such as vce(bootstrap), vce(jackknife), and vce(oim).
In Stata 10, option vce() is extended to encompass the robust (and optionally clustered) variance calculation. Where you previously typed
. estimation-command ..., robust
you are now to type
. estimation-command ..., vce(robust)
and where you previously typed
. estimation-command ..., robust cluster(clustervar)
with or without the robust, you are now to type
. estimation-command ..., vce(cluster clustervar)
You can still type the old syntax, but it is undocumented. The new syntax emphasizes that the robust and cluster calculation affects standard errors, not coefficients. See [R] vce_option.
In accordance with this change, estimation commands now have a term for their default variance calculation. Thus, you will see things like vce(ols) and vce(gnr). Here is what they all mean:
1. vce(ols). The variance estimator for ordinary least squares; an s²(X′X)⁻¹-type calculation.
2. vce(oim). The observed information matrix based on the likelihood function; a (−H)⁻¹-type calculation, where H is the Hessian matrix.
3. vce(conventional). A generic term to identify the conventional variance estimator associated with the model. For instance, in the Heckman two-step estimator, vce(conventional) means the Heckman-derived variance matrix from an augmented regression. In two different contexts, vce(conventional) does not necessarily mean the same calculation.
4. vce(analytic). The variance estimator derived from first principles of statistics for means, proportions, and totals.
6. vce(gnr). The variance matrix based on an auxiliary regression, which is analogous to s²(X′X)⁻¹ generalized to nonlinear regression. gnr stands for Gauss–Newton regression.
6. vce(linearized). The variance matrix calculated by a first-order Taylor approximation of the statistic, otherwise known as the Taylor linearized variance estimator, the sandwich estimator, and the White estimator. This is identical to vce(robust) in other contexts.
The above are used for defaults. vce() may also be
1. vce(robust). The variance matrix calculated by the sandwich estimator of variance, a VDV-type calculation, where V is the conventional variance matrix and D is the outer product of the gradients, ∑ᵢ gᵢgᵢ′.
2. vce(cluster varname). The cluster-based version of vce(robust), where sums are performed within the groups formed by varname; this is equivalent to assuming independence across groups but not necessarily across observations within a group.
3. vce(hc2) and vce(hc3). Calculated similarly as vce(robust) except that different scores are used in place of the gradient vectors gi.
4. vce(opg). The variance matrix calculated by the outer product of the gradients; a (∑ᵢ gᵢgᵢ′)⁻¹ calculation.
5. vce(jackknife). The variance matrix calculated by the jackknife, including delete one, delete n, and the cluster-based jackknife.
6. vce(bootstrap). The variance matrix calculated by bootstrap resampling.
You do not need to memorize the above; the documentation for the individual commands and their corresponding dialog boxes make clear what the default is and what is available.
22. Estimation commands specified with option vce(bootstrap) or vce(jackknife) now report a note when a variable is dropped because of collinearity.
23. The new option collinear, which has been added to many estimation commands, specifies that the estimation command not remove collinear variables. Typically, you do not want to specify this option. It is for use when you specify constraints on the coefficients such that, even though the variables are collinear, the model is fully identified. See [R] estimation options.
24. Estimation commands having a model Wald test composed of more than just the first equation now save the number of equations in the model Wald test in e(k_eq_model).
25. All estimation commands now save macro e(cmdline) containing the command line as originally typed.
26. Concerning existing estimation command ml,
1. ml now saves the number of equations used to compute the model Wald test in e(k_eq_model), even when option lf0() is specified.
2. ml score has new option missing that specifies that observations containing variables with missing values not be eliminated from the estimation sample.
3. ml display has new option showeqns that requests that equation names be displayed in the coefficient table.
See [R] ml.
27. New command lpoly performs a kernel-weighted local polynomial regression and displays a graph of the smoothed values with optional confidence bands; see [R] lpoly.
28. New prefix command nestreg: reports comparison tests of nested models; see [R] nestreg.
29. Existing commands fracpoly, fracgen, and mfp have new features:
1. fracpoly and mfp now support cnreg, mlogit, nbreg, ologit, and oprobit.
2. fracpoly and mfp have new option all that specifies that out-of-sample observations be included in the generated variables.
3. fracpoly, compare now reports a closed-test comparison between fractional polynomial models by using deviance differences rather than reporting the gain; see [R] fracpoly.
4. fracgen has new option restrict() that computes adjustments and scaling on a specified subsample.
See [R] fracpoly and [R] mfp.
30. For existing postestimation command hausman, options sigmaless and sigmamore may now be used after xtreg. These options improve results when comparing fixed- and random-effects regressions based on small to moderate samples because they ensure that the differenced covariance matrix will be positive definite. See [R] hausman.
31. Existing postestimation command testnl now allows expressions that are bound in parentheses or brackets to have commas. For example, testnl _b[x] = M[1,3] is now allowed. See [R] testnl.
32. Existing postestimation command nlcom has a new option noheader that suppresses the output header; see [R] nlcom.
33. Existing command statsby now works with more commands, including postestimation commands. statsby also has new option forcedrop for use with commands that do not allow if or in. forcedrop specifies that observations outside the by() group be temporarily dropped before the command is called. See [D] statsby.
34. Existing command mkspline will now create restricted cubic splines as well as linear splines. New option displayknots will display the location of the knots. See [R] mkspline.
35. In existing command kdensity, kernel(kernelname) is now the preferred way to specify the kernel, but the previous method of simply specifying kernelname still works. See [R] kdensity.
36. Existing command ktau’s computations are now faster; see [R] spearman.
37. In existing command ladder, the names of the transformations in the output have been renamed to match those used by gladder and qladder. Also, the returned results r(raw) and r(P_raw) have been renamed to r(ident) and r(P_ident), respectively. See [R] ladder.
38. Existing command ranksum now allows the groupvar in option by(groupvar) to be a string; see [R] ranksum.
39. Existing command tabulate, exact now allows exact computations on larger tables. Also, new option nolog suppresses the enumeration log. See [R] tabulate twoway.
40. Existing command tetrachoric’s default algorithm for computing tetrachoric correlations has been changed from the Edwards and Edwards estimator to a maximum likelihood estimator. Also, standard errors and two-sided significance tests are produced. The Edwards and Edwards estimator is still available by specifying the new edwards option. A new zeroadjust option requests that frequencies be adjusted when one cell has a zero count. See [R] tetrachoric.
Stuff Harvard People Like
What types of students go to which schools? There are, of course, the classic stereotypes:
• MIT has the hacker engineers.
• Stanford has the laid-back, social folks.
• Harvard has the prestigious leaders of the world.
• Berkeley has the activist hippies.
• Caltech has the hardcore science nerds.
But how well do these perceptions match reality? What are students at Stanford, Harvard, MIT, Caltech, and Berkeley really interested in? Following the path of my previous data-driven post on differences between Silicon Valley and NYC, I scraped the Quora profiles of a couple hundred followers of each school to find out.
Topics
So let’s look at what kinds of topics followers of each school are interested in*. (Skip past the lists for a discussion.)
MIT
Topics are followed by p(school = MIT|topic).
• MIT Media Lab 0.893
• Ksplice 0.69
• Lisp (programming language) 0.677
• Nokia 0.659
• Public Speaking 0.65
• Data Storage 0.65
• Google Voice 0.609
• Hacking 0.602
• Startups in Europe 0.597
• Startup Names 0.572
• Mechanical Engineering 0.563
• Engineering 0.563
• Distributed Databases 0.544
• StackOverflow 0.536
• Boston 0.513
• Learning 0.507
• Open Source 0.498
• Cambridge 0.496
• Public Relations 0.493
• Visualization 0.492
• Semantic Web 0.486
• Andreessen-Horowitz 0.483
• Nature 0.475
• Cryptography 0.474
• Startups in Boston 0.452
• Adobe Photoshop 0.451
• Computer Security 0.447
• Sachin Tendulkar 0.443
• Hacker News 0.442
• Games 0.429
• Android Applications 0.428
• Best Engineers and Programmers 0.427
• College Admissions & Getting Into College 0.422
• Co-Founders 0.419
• Big Data 0.41
• System Administration 0.4
• Biotechnology 0.398
• Higher Education 0.394
• NoSQL 0.387
• User Experience 0.386
• Career Advice 0.377
• Artificial Intelligence 0.375
• Scalability 0.37
• Taylor Swift 0.368
• Google Search 0.368
• Functional Programming 0.365
• Bing 0.363
• Bioinformatics 0.361
• How I Met Your Mother (TV series) 0.361
• Operating Systems 0.356
• Compilers 0.355
• Google Chrome 0.354
• Management & Organizational Leadership 0.35
• Literary Fiction 0.35
• Intelligence 0.348
• Fight Club (1999 movie) 0.344
• Hip Hop Music 0.34
• UX Design 0.337
• Web Application Frameworks 0.336
• Startups in New York City 0.333
• Book Recommendations 0.33
• Engineering Recruiting 0.33
• Search Engines 0.329
• Social Search 0.329
• Data Science 0.328
• History 0.328
• Interaction Design 0.326
• Classification (machine learning) 0.322
• Startup Incubators and Seed Programs 0.321
• Graphic Design 0.321
• Product Design (software) 0.319
• The College Experience 0.319
• Writing 0.319
• MapReduce 0.318
• Database Systems 0.315
• User Interfaces 0.314
• Literature 0.314
• C (programming language) 0.314
• Television 0.314
• Reading 0.313
• Usability 0.312
• Books 0.312
• Computers 0.311
• Stealth Startups 0.311
• Daft Punk 0.31
• Healthy Eating 0.309
• Innovation 0.309
• Skiing 0.305
• JavaScript 0.304
• Rock Music 0.304
• Mozilla Firefox 0.304
• Self-Improvement 0.303
• McKinsey & Company 0.302
• AngelList 0.301
• Data Visualization 0.301
• Cassandra (database) 0.301
Stanford
Topics are followed by p(school = Stanford|topic).
• Stanford Computer Science 0.951
• Stanford Graduate School of Business 0.939
• Stanford 0.896
• Stanford Football 0.896
• Stanford Cardinal 0.896
• Social Dance 0.847
• Stanford University Courses 0.847
• Romance 0.769
• Instagram 0.745
• College Football 0.665
• Mobile Location Applications 0.634
• Online Communities 0.621
• Interpersonal Relationships 0.585
• Food & Restaurants in Palo Alto 0.572
• Your 20s 0.566
• Men’s Fashion 0.548
• Flipboard 0.537
• Inception (2010 movie) 0.535
• Tumblr 0.531
• People Skills 0.522
• Exercise 0.52
• Joel Spolsky 0.516
• Valuations 0.515
• The Social Network (2010 movie) 0.513
• LeBron James 0.506
• Northern California 0.506
• Evernote 0.5
• Quora Community 0.5
• Blogging 0.49
• Downtown Palo Alto 0.487
• The College Experience 0.485
• Consumer Internet 0.477
• Restaurants in San Francisco 0.477
• Chad Hurley 0.47
• Meditation 0.468
• Yishan Wong 0.466
• Arrested Development (TV series) 0.463
• fbFund 0.457
• Best Engineers at X Company 0.451
• Language 0.45
• Words 0.448
• Happiness 0.447
• Path (company) 0.446
• Color Labs (startup) 0.446
• Palo Alto 0.445
• Woot.com 0.442
• Beer 0.442
• PayPal 0.441
• Women in Startups 0.438
• Techmeme 0.433
• Women in Engineering 0.428
• The Mission (San Francisco neighborhood) 0.427
• iPhone Applications 0.416
• Asana 0.413
• Monetization 0.412
• Repetitive Strain Injury (RSI) 0.4
• IDEO 0.398
• Spotify 0.397
• San Francisco Giants 0.396
• Fortune Magazine 0.389
• Love 0.387
• Human-Computer Interaction 0.382
• Hip Hop Music 0.378
• Self-Improvement 0.378
• Food in San Francisco 0.375
• Quora (company) 0.374
• Quora Infrastructure 0.373
• iPhone 0.371
• Square (company) 0.369
• Social Psychology 0.369
• Network Effects 0.366
• Chris Sacca 0.365
• Walt Mossberg 0.364
• Salesforce.com 0.362
• Sex 0.361
• Etiquette 0.361
• David Pogue 0.361
• Gowalla 0.36
• iOS Development 0.354
• Palantir Technologies 0.353
• Mobile Computing 0.347
• Sports 0.346
• Video Games 0.345
• Burning Man 0.345
• Engineering Management 0.343
• Cognitive Science 0.342
• Dating & Relationships 0.341
• Fred Wilson (venture investor) 0.337
• Taiwan 0.333
• Natural Language Processing 0.33
• Eric Schmidt 0.329
• Social Advice 0.329
• Engineering Recruiting 0.328
• Job Interviews 0.325
• Mobile Phones 0.324
• Twitter Inc. (company) 0.321
• Engineering in Silicon Valley 0.321
• San Francisco Bay Area 0.321
• Google Analytics 0.32
• Fashion 0.315
• Interaction Design 0.314
• Open Graph 0.313
• Drugs & Pharmaceuticals 0.312
• Electronic Music 0.312
• Facebook Inc. (company) 0.309
• Fitness 0.309
• YouTube 0.308
• TED Talks 0.308
• Freakonomics (2005 Book) 0.307
• Jack Dorsey 0.306
• Nutrition 0.305
• Puzzles 0.305
• Silicon Valley Mergers & Acquisitions 0.304
• Viral Growth & Analytics 0.304
• Amazon Web Services 0.304
• StumbleUpon 0.303
• Exceptional Comment Threads 0.303
Harvard
• Harvard Business School 0.968
• Harvard Business Review 0.922
• Harvard Square 0.912
• Harvard Law School 0.912
• Jimmy Fallon 0.899
• Boston Red Sox 0.658
• Klout 0.644
• Oprah Winfrey 0.596
• Ivanka Trump 0.587
• Dalai Lama 0.569
• Food in New York City 0.565
• U2 0.562
• TwitPic 0.534
• 37signals 0.522
• David Lynch (director) 0.512
• Al Gore 0.508
• TechStars 0.49
• Baseball 0.487
• Private Equity 0.471
• Classical Music 0.46
• Startups in New York City 0.458
• HootSuite 0.449
• Kiva 0.442
• Ultimate Frisbee 0.441
• Huffington Post 0.436
• New York City 0.433
• Charlie Cheever 0.433
• The New York Times 0.431
• Technology Journalism 0.431
• McKinsey & Company 0.427
• TweetDeck 0.422
• How Does X Work? 0.417
• Ashton Kutcher 0.414
• Coldplay 0.402
• Conan O’Brien 0.397
• Fast Company 0.397
• WikiLeaks 0.394
• Michael Jackson 0.389
• Guy Kawasaki 0.389
• Journalism 0.384
• Wall Street Journal 0.384
• Cambridge 0.371
• Seattle 0.37
• Cities & Metro Areas 0.357
• Boston 0.353
• Tim Ferriss (author) 0.35
• The New Yorker 0.343
• Law 0.34
• Mashable 0.338
• Politics 0.335
• The Economist 0.334
• Barack Obama 0.333
• Skiing 0.329
• McKinsey Quarterly 0.325
• Wired (magazine) 0.316
• Bill Gates 0.31
• Mad Men (TV series) 0.308
• India 0.306
• TED Talks 0.306
• Netflix 0.304
• Wine 0.303
• Angel Investors 0.302
• Facebook Ads 0.301
UC Berkeley
• Berkeley 0.978
• California Golden Bears 0.91
• Internships 0.717
• Web Marketing 0.484
• Google Social Strategy 0.453
• Southwest Airlines 0.451
• WordPress 0.429
• Stock Market 0.429
• BMW (automobile) 0.428
• Web Applications 0.423
• Flickr 0.422
• Snowboarding 0.42
• Electronic Music 0.404
• MySQL 0.401
• Internet Advertising 0.399
• Search Engine Optimization (SEO) 0.398
• Yelp 0.396
• Groupon 0.393
• In-N-Out Burger 0.391
• The Matrix (1999 movie) 0.389
• Trading (finance) 0.385
• jQuery 0.381
• Hedge Funds 0.378
• Social Media Marketing 0.377
• San Francisco 0.376
• Stealth Startups 0.362
• Yahoo! 0.36
• Cascading Style Sheets 0.359
• Angel Investors 0.355
• UX Design 0.35
• StarCraft 0.348
• Los Angeles Lakers 0.347
• Mountain View 0.345
• How I Met Your Mother (TV series) 0.338
• Google+ 0.337
• Ruby on Rails 0.333
• Reading 0.333
• Social Media 0.326
• China 0.322
• Palantir Technologies 0.319
• Facebook Platform 0.315
• Basketball 0.315
• Education 0.314
• Business Development 0.312
• Online & Mobile Payments 0.305
• Restaurants in San Francisco 0.302
• Technology Companies 0.302
• Seth Godin 0.3
Caltech
• Pasadena 0.969
• Chess 0.748
• Table Tennis 0.671
• UCLA 0.67
• MacBook Pro 0.618
• Physics 0.618
• Haskell 0.582
• Los Angeles 0.58
• Electrical Engineering 0.567
• Star Trek (movie) 0.561
• Disruptive Technology 0.545
• Science 0.53
• Biology 0.526
• Quantum Mechanics 0.521
• LaTeX 0.514
• Mathematics 0.488
• xkcd 0.488
• Genetics & Heredity 0.487
• Chemistry 0.47
• Medicine & Healthcare 0.448
• Poker 0.445
• C++ (programming language) 0.442
• Data Structures 0.434
• Emacs 0.428
• MongoDB 0.423
• Neuroscience 0.404
• Science Fiction 0.4
• Mac OS X 0.394
• Board Games 0.387
• Computers 0.386
• Research 0.385
• Finance 0.385
• The Future 0.379
• Linux 0.378
• The Colbert Report 0.376
• The Beatles 0.374
• The Onion 0.365
• Ruby 0.363
• Cars & Automobiles 0.361
• Quantitative Finance 0.359
• Academia 0.359
• Law 0.355
• Cooking 0.354
• Psychology 0.349
• Eminem 0.347
• Football (Soccer) 0.346
• Computer Programming 0.343
• Algorithms 0.343
• Evolutionary Biology 0.337
• Behavioral Economics 0.335
• California 0.329
• Machine Learning 0.326
• Futurama 0.324
• Social Advice 0.324
• StarCraft II 0.319
• Job Interview Questions 0.318
• Game Theory 0.316
• This American Life 0.315
• Economics 0.314
• Vim 0.31
• Graduate School 0.309
• Git (revision control) 0.306
• Computer Science 0.303
What do we see?
• First, in a nice validation of this approach, we find that each school is interested in exactly the locations we’d expect: Caltech is interested in Pasadena and Los Angeles; MIT and Harvard are both interested in Boston and Cambridge (Harvard is interested in New York City as well); Stanford is interested in Palo Alto, Northern California, and San Francisco Bay Area; and Berkeley is interested in Berkeley, San Francisco, and Mountain View.
• More interestingly, let’s look at where each school likes to eat. Stereotypically, we expect Harvard, Stanford, and Berkeley students to be more outgoing and social, and MIT and Caltech students to be more introverted. This is indeed what we find:
• Harvard follows Food in New York City; Stanford follows Food & Restaurants in Palo Alto, Restaurants in San Francisco, and Food in San Francisco; and Berkeley follows Restaurants in San Francisco and In-N-Out Burger. In other words, Harvard, Stanford, and Berkeley love eating out.
• Caltech, on the other hand, loves Cooking, and MIT loves Healthy Eating – both signs, perhaps, of a preference for eating in.
• And what does each university use to quench their thirst? Harvard students like to drink wine (classy!), while Stanford students prefer beer (the social drink of choice).
• What about sports teams? MIT and Caltech couldn’t care less, though Harvard follows the Boston Red Sox, Stanford follows the San Francisco Giants (as well as their own Stanford Football and Stanford Cardinal), and Berkeley follows the Los Angeles Lakers (and the California Golden Bears).
• For sports themselves, MIT students like skiing; Stanford students like general exercise, fitness, and sports; Harvard students like baseball, ultimate frisbee, and skiing; and Berkeley students like snowboarding. Caltech, in a league of its own, enjoys table tennis and chess.
• What does each school think of social? Caltech students look for Social Advice. Berkeley students are interested in Social Media and Social Media Marketing. MIT, on the more technical side, wants Social Search. Stanford students, predictably, love the whole spectrum of social offerings, from Social Dance and The Social Network, to Social Psychology and Social Advice. (Interestingly, Caltech and Stanford are both interested in Social Advice, though I wonder if it’s for slightly different reasons.)
• What’s each school’s relationship with computers? Caltech students are interested in Computer Science, MIT hackers are interested in Computer Security, and Stanford students are interested in Human-Computer Interaction.
• Digging into the MIT vs. Caltech divide a little, we see that Caltech students really are more interested in the pure sciences (Physics, Science, Biology, Quantum Mechanics, Mathematics, Chemistry, etc.), while MIT students are more on the applied and engineering sides (Mechanical Engineering, Engineering, Distributed Databases, Cryptography, Computer Security, Biotechnology, Operating Systems, Compilers, etc.).
• Regarding programming languages, Caltech students love Haskell (hardcore purity!), while MIT students love Lisp.
• What does each school like to read, both offline and online? Caltech loves science fiction, xkcd, and The Onion; MIT likes Hacker News; Harvard loves journals, newspapers, and magazines (Huffington Post, the New York Times, Fortune, Wall Street Journal, the New Yorker, the Economist, and so on); and Stanford likes TechMeme.
• What movies and television shows does each school like to watch? Caltech likes Star Trek, the Colbert Report, and Futurama. MIT likes Fight Club (I don’t know what this has to do with MIT, though I will note that on my first day as a freshman in a new dorm, Fight Club was precisely the movie we all went to a lecture hall to see). Stanford likes The Social Network and Inception. Harvard, rather fittingly, likes Mad Men and Ted Talks.
• Let’s look at the startups each school follows. MIT, of course, likes Ksplice. Berkeley likes Yelp and Groupon. Stanford likes just about every startup under the sun (Instagram, Flipboard, Tumblr, Path, Color Labs, etc.). And Harvard, that bastion of hard-won influence and prestige? To the surprise of precisely no one, Harvard enjoys Klout.
Let’s end with a summarized view of each school:
• Caltech is very much into the sciences (Physics, Biology, Quantum Mechanics, Mathematics, etc.), as well as many pretty nerdy topics (Star Trek, Science Fiction, xkcd, Futurama, Starcraft II, etc.).
• MIT is dominated by everything engineering and tech.
• Stanford loves relationships (interpersonal relationships, people skills, love, network effects, sex, etiquette, dating and relationships, romance), health and appearance (fashion, fitness, nutrition, happiness), and startups (Instagram, Flipboard, Path, Color Labs, etc.).
• Berkeley, sadly, is perhaps too large and diverse for an overall characterization.
• Harvard students are fascinated by famous figures (Jimmy Fallon, Oprah Winfrey, Ivanka Trump, Dalai Lama, David Lynch, Al Gore, Bill Gates, Barack Obama), and by prestigious newspapers, journals, and magazines (Fortune, the New York Times, the Wall Street Journal, the Economist, and so on). Other very fitting interests include Kiva, classical music, and Coldplay.
*I pulled about 400 followers from each school and added a couple of filters to try to ensure that followers were actual attendees of the schools rather than general people simply interested in them. Topics are sorted using a naive Bayes score and filtered to have at least 5 counts. Also, a word of warning: my dataset was fairly small, and users on Quora are almost certainly not representative of their schools as a whole (though I tried to be rigorous with what I had).
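For illustration, here is a minimal sketch of this kind of scoring: estimate p(school | topic) from follower counts and keep only topics above a minimum count. The (school, topic) pairs below are made up, and a full naive Bayes score would also account for class priors.

```python
from collections import Counter, defaultdict

# Hypothetical (school, topic) follow pairs; the real data came from
# scraped Quora profiles.
follows = [
    ("MIT", "Hacking"), ("MIT", "Hacking"), ("MIT", "Physics"),
    ("Caltech", "Physics"), ("Caltech", "Physics"),
    ("Stanford", "Instagram"), ("Stanford", "Instagram"), ("Stanford", "Physics"),
]
MIN_COUNT = 2  # the post filters topics to those with at least 5 counts

topic_total = Counter(topic for _, topic in follows)
school_topic = Counter(follows)

# p(school | topic) ~ (# followers of topic from school) / (# followers of topic)
scores = defaultdict(list)
for (school, topic), n in school_topic.items():
    if topic_total[topic] >= MIN_COUNT:
        scores[school].append((n / topic_total[topic], topic))

for school in sorted(scores):
    for p, topic in sorted(scores[school], reverse=True):
        print(f"{school}: {topic} {p:.3f}")
```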
Information Transmission in a Social Network: Dissecting the Spread of a Quora Post
tl;dr See this movie visualization for a case study on how a post propagates through Quora.
How does information spread through a network? Much of Quora’s appeal, after all, lies in its social graph – and when you’ve got a network of users, all broadcasting their activities to their neighbors, information can cascade in multiple ways. How do these social designs affect which users see what?
Think, for example, of what happens when your kid learns a new slang word at school. He doesn't confine his use of the word to McKinley Elementary's particular boundaries, between the hours of 9am and 3pm – he introduces it to his friends from other schools at soccer practice as well. A couple months later, he even says it at home for the first time; you like the word so much, you then start using it at work. Eventually, Justin Bieber uses the word in a song, at which point the word's popularity really starts to explode.
So how does information propagate through a social network? What types of people does an answer on Quora reach, and how does it reach them? (Do users discover new answers individually, or are hubs of connectors more key?) How does the activity of a post on Quora rise and fall? (Submissions on other sites have limited lifetimes, fading into obscurity soon after an initial spike; how does that change when users are connected and every upvote can revive a post for someone else’s eyes?)
(I looked at Quora since I had some data from there already available, but I hope the lessons should be fairly applicable in general, to other social networks like Facebook, Twitter, and LinkedIn as well.)
To give an initial answer to some of these questions, I dug into one of my more popular posts, on a layman’s introduction to random forests.
Users, Topics
Before looking deeper into the voting dynamics of the post, let’s first get some background on what kinds of users the answer reached.
Here’s a graph of the topics that question upvoters follow. (Each node is a topic, and every time upvoter X follows both topics A and B, I add an edge between A and B.)
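A rough sketch of how such a co-follow graph can be built (the upvoters' topic sets below are hypothetical; the real graph was drawn from scraped profile data):

```python
from collections import Counter
from itertools import combinations

# Hypothetical topic sets followed by each upvoter of the answer.
upvoter_topics = [
    {"Machine Learning", "Data Mining", "Startups"},
    {"Machine Learning", "Startups", "Philosophy"},
    {"Philosophy", "Travel", "Machine Learning"},
]

# Every time an upvoter follows both topics A and B, increment edge A-B.
edges = Counter()
for topics in upvoter_topics:
    for a, b in combinations(sorted(topics), 2):
        edges[(a, b)] += 1

for (a, b), weight in edges.most_common():
    print(f"{a} -- {b}: co-followed by {weight} upvoter(s)")
```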
We can see from the graph that upvoters tend to be interested in three kinds of topics:
• Machine learning and other technical matters (the green cluster): Classification, Data Mining, Big Data, Information Retrieval, Analytics, Probability, Support Vector Machines, R, Data Science, …
• Startups/Silicon Valley (the red cluster): Facebook, Lean Startups, Investing, Seed Funding, Angel Investing, Technology Trends, Product Management, Silicon Valley Mergers and Acquisitions, Asana, Social Games, Quora, Mark Zuckerberg, User Experience, Founders and Entrepreneurs, …
• General Intellectual Topics (the purple cluster): TED, Science, Book Recommendations, Philosophy, Politics, Self-Improvement, Travel, Life Hacks, …
Also, here’s the network of the upvoters themselves (there’s an edge between users A and B if A follows B):
We can see three main clusters of users:
• A large group in green centered around a lot of power users and Quora employees.
• A machine learning group of folks in orange centered around people like Oliver Grisel, Christian Langreiter, and Joseph Turian.
• A group of people following me, in purple.
• Plus some smaller clusters in blue and yellow. (There were also a bunch of isolated users, connected to no one, that I filtered out of the picture.)
Digging into how these topic and user graphs are related:
• The orange cluster of users is more heavily into machine learning: 79% of users in that cluster follow more green topics (machine learning and technical topics) than red and purple topics (startups and general intellectual matters).
• The green cluster of users is reversed: 77% of users follow more of the red and purple clusters of topics (on startups and general intellectual matters) than machine learning and technical topics.
More interestingly, though, we can ask: how do the connections between upvoters relate to the way the post spread?
Social Voting Dynamics
So let’s take a look. Here’s a visualization I made of upvotes on my answer across time (click here for a larger view).
To represent the social dynamics of these upvotes, I drew an edge from user A to user B if user A transmitted the post to user B through an upvote. (Specifically, I drew an edge from Alice to Bob if Bob follows Alice and Bob's upvote appeared within five days of Alice's upvote; this is meant to simulate the idea that Alice was the key intermediary between my post and Bob. A sketch of this rule in code appears after the legend below.)
Also,
• Green nodes are users with at least one upvote edge.
• Blue nodes are users who follow at least one of the topics the post is categorized under (i.e., users who probably discovered the answer by themselves).
• Red nodes are users with no connections and who do not follow any of the post's topics (i.e., users whose path to the post remains mysterious).
• Users increase in size when they produce more connections.
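To make the edge rule above concrete, here is a minimal sketch of the computation (the users, timestamps, and follow map are invented for illustration):

```python
from datetime import datetime, timedelta

# Hypothetical upvotes in chronological order, plus who follows whom.
upvotes = [
    ("alice", datetime(2011, 2, 14)),
    ("bob", datetime(2011, 2, 16)),
    ("carol", datetime(2011, 3, 30)),
]
follows = {"bob": {"alice"}, "carol": {"alice"}}  # bob and carol follow alice

WINDOW = timedelta(days=5)

# Edge A -> B if B follows A and B's upvote came within five days of A's,
# treating A as the likely intermediary between the post and B.
edges = []
for i, (a, time_a) in enumerate(upvotes):
    for b, time_b in upvotes[i + 1:]:
        if a in follows.get(b, set()) and time_b - time_a <= WINDOW:
            edges.append((a, b))

print(edges)  # [('alice', 'bob')]: carol upvoted too late to count
```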
Here’s a play-by-play of the video:
• On Feb 14 (the day I wrote the answer), there’s a flurry of activity.
• A couple of days later, Tracy Chou gives an upvote, leading to another spike in activity.
• Then all’s quiet until… bam! Alex Kamil leads to a surge of upvotes, and his upvote finds Ludi Rehak, who starts a small surge of her own. They’re quickly followed by Christian Langreiter, who starts a small revolution among a bunch of machine learning folks a couple days later.
• Then all is pretty calm again, until a couple months later when… bam! Aditya Sengupta brings in a smashing of his own followers, and his upvote makes its way to Marc Bodnick, who sets off a veritable storm of activity.
(Already we can see some relationships between the graph of user connections and the way the post propagated. Many of the users from the orange cluster, for example, come from Alex Kamil and Christian Langreiter's upvotes, and many of the users from the green cluster come from Aditya Sengupta and Marc Bodnick's upvotes. What's interesting, though, is: why didn't the cluster of green users appear all at once, like the orange cluster did? People like Kah Seng Tay, Tracy Chou, Venkatesh Rao, and Chad Little upvoted the answer pretty early on, but it wasn't until Aditya Sengupta's upvote a couple months later that people like Marc Bodnick, Edmond Lau, and many of the other green users (who do indeed follow that first set of folks) discovered the answer. Did the post simply get lost in users' feeds the first time around? Was the post perhaps ignored until it received enough upvotes to be considered worth reading? Are some users' upvotes just trusted more than others'?)
For another view of the upvote dynamics, here’s a static visualization, where we can again easily see the clusters of activity:
Fin
There are still many questions it would be interesting to look at; for example,
• What differentiates users who sparked spikes of activity from users who didn’t? I don’t believe it’s simply number of followers, as many well-connected upvoters did not lead to cascades of shares. Does authority matter?
• How far can a post reach? Clearly, the post reached people more than one degree of separation away from me (where one degree of separation is a follower); what does the distribution of degrees look like? Is there any relationship between degree of separation and time of upvote?
• What can we say about the people who started following me after reading my answer? Are they fewer degrees of separation away? Are they more interested in machine learning? Have they upvoted any of my answers before? (Perhaps there’s a certain “threshold” of interestingness people need to overflow before they’re considered acceptable followees.)
But to summarize a bit what we’ve seen so far, here are some statistics on the role the social graph played in spreading the post:
• There are 5 clusters of activity after the initial post, sparked both by power users and less-connected folks. In an interesting cascade of information, some of these sparks led to further spikes in activity as well (as when Aditya Sengupta’s upvote found its way to Marc Bodnick, who set off even more activity).
• 35% of users made their way to my answer because of someone else’s upvote.
• Through these connections, the post reached a fair variety of users: 32% of upvoters don’t even follow any of the post’s topics.
• 77% of upvotes came from users over two weeks after my answer appeared.
• If we look only at the upvoters who follow at least one of the post’s topics, 33% didn’t see my answer until someone else showed it to them. In other words, a full one-third of people who presumably would have been interested in my post anyways only found it because of their social network.
So it looks like the social graph played quite a large part in the post’s propagation, and I’ll end with a big shoutout to Stormy Shippy, who provided an awesome set of scripts I used to collect a lot of this data.
Introduction
Suppose you have the following set of sentences:
• I like to eat broccoli and bananas.
• I ate a banana and spinach smoothie for breakfast.
• Chinchillas and kittens are cute.
• My sister adopted a kitten yesterday.
• Look at this cute hamster munching on a piece of broccoli.
What is latent Dirichlet allocation? It’s a way of automatically discovering topics that these sentences contain. For example, given these sentences and asked for 2 topics, LDA might produce something like
• Sentences 1 and 2: 100% Topic A
• Sentences 3 and 4: 100% Topic B
• Sentence 5: 60% Topic A, 40% Topic B
• Topic A: 30% broccoli, 15% bananas, 10% breakfast, 10% munching, … (at which point, you could interpret topic A to be about food)
• Topic B: 20% chinchillas, 20% kittens, 20% cute, 15% hamster, … (at which point, you could interpret topic B to be about cute animals)
The question, of course, is: how does LDA perform this discovery?
LDA Model
In more detail, LDA represents documents as mixtures of topics that spit out words with certain probabilities. It assumes that documents are produced in the following fashion: when writing each document, you
• Decide on the number of words N the document will have (say, according to a Poisson distribution).
• Choose a topic mixture for the document (according to a Dirichlet distribution over a fixed set of K topics). For example, assuming that we have the two food and cute animal topics above, you might choose the document to consist of 1/3 food and 2/3 cute animals.
• Generate each word w_i in the document by:
• First picking a topic (according to the multinomial distribution that you sampled above; for example, you might pick the food topic with 1/3 probability and the cute animals topic with 2/3 probability).
• Using the topic to generate the word itself (according to the topic’s multinomial distribution). For example, if we selected the food topic, we might generate the word “broccoli” with 30% probability, “bananas” with 15% probability, and so on.
Assuming this generative model for a collection of documents, LDA then tries to backtrack from the documents to find a set of topics that are likely to have generated the collection.
Example
Let’s make an example. According to the above process, when generating some particular document D, you might
• Pick 5 to be the number of words in D.
• Decide that D will be 1/2 about food and 1/2 about cute animals.
• Pick the first word to come from the food topic, which then gives you the word “broccoli”.
• Pick the second word to come from the cute animals topic, which gives you “panda”.
• Pick the third word to come from the cute animals topic, giving you “adorable”.
• Pick the fourth word to come from the food topic, giving you “cherries”.
• Pick the fifth word to come from the food topic, giving you “eating”.
So the document generated under the LDA model will be “broccoli panda adorable cherries eating” (note that LDA is a bag-of-words model).
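Here's a tiny sketch of that generative story in code. To keep it minimal, the document length and topic mixture are fixed by hand instead of being drawn from Poisson and Dirichlet distributions, and the topics are just the toy ones above:

```python
import random

random.seed(0)

# Toy topics from the example, as word -> probability tables.
topics = {
    "food": {"broccoli": 0.3, "bananas": 0.15, "cherries": 0.3, "eating": 0.25},
    "cute animals": {"panda": 0.4, "adorable": 0.3, "kittens": 0.3},
}

def sample(dist):
    """Draw one key from a {key: probability} dict."""
    r, total = random.random(), 0.0
    for key, p in dist.items():
        total += p
        if r < total:
            return key
    return key  # guard against floating-point rounding

def generate_document(n_words, topic_mixture):
    # For each word: pick a topic from the mixture, then a word from that topic.
    return [sample(topics[sample(topic_mixture)]) for _ in range(n_words)]

# A 5-word document that is 1/2 food and 1/2 cute animals.
print(" ".join(generate_document(5, {"food": 0.5, "cute animals": 0.5})))
```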
Learning
So now suppose you have a set of documents. You’ve chosen some fixed number of K topics to discover, and want to use LDA to learn the topic representation of each document and the words associated to each topic. How do you do this? One way (known as collapsed Gibbs sampling) is the following:
• Go through each document, and randomly assign each word in the document to one of the K topics.
• Notice that this random assignment already gives you both topic representations of all the documents and word distributions of all the topics (albeit not very good ones).
• So to improve on them, for each document d…
• Go through each word w in d…
• And for each topic t, compute two things: 1) p(topic t | document d) = the proportion of words in document d that are currently assigned to topic t, and 2) p(word w | topic t) = the proportion of assignments to topic t over all documents that come from this word w. Reassign w a new topic, where we choose topic t with probability p(topic t | document d) * p(word w | topic t) (according to our generative model, this is essentially the probability that topic t generated word w, so it makes sense that we resample the current word’s topic with this probability). (Also, I’m glossing over a couple of things here, in particular the use of priors/pseudocounts in these probabilities.)
• In other words, in this step, we’re assuming that all topic assignments except for the current word in question are correct, and then updating the assignment of the current word using our model of how documents are generated.
• After repeating the previous step a large number of times, you’ll eventually reach a roughly steady state where your assignments are pretty good. So use these assignments to estimate the topic mixtures of each document (by counting the proportion of words assigned to each topic within that document) and the words associated to each topic (by counting the proportion of words assigned to each topic overall).
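Here's a heavily simplified sketch of that procedure. The alpha and beta pseudocounts stand in for the priors the description glosses over; this is illustrative rather than production LDA code:

```python
import random
from collections import Counter

random.seed(0)

def lda_gibbs(docs, K, iters=200, alpha=0.1, beta=0.01):
    """Collapsed Gibbs sampling for LDA; docs is a list of token lists."""
    V = len({w for doc in docs for w in doc})

    # Step 1: randomly assign each word position to one of the K topics.
    z = [[random.randrange(K) for _ in doc] for doc in docs]
    doc_topic = [Counter(zd) for zd in z]        # counts of topics per document
    topic_word = [Counter() for _ in range(K)]   # counts of words per topic
    topic_total = Counter()
    for doc, zd in zip(docs, z):
        for w, t in zip(doc, zd):
            topic_word[t][w] += 1
            topic_total[t] += 1

    # Step 2: repeatedly resample each word's topic.
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                t = z[d][i]
                doc_topic[d][t] -= 1; topic_word[t][w] -= 1; topic_total[t] -= 1
                # p(topic t | doc d) * p(word w | topic t), with pseudocounts.
                weights = [(doc_topic[d][k] + alpha) *
                           (topic_word[k][w] + beta) / (topic_total[k] + beta * V)
                           for k in range(K)]
                t = random.choices(range(K), weights)[0]
                z[d][i] = t
                doc_topic[d][t] += 1; topic_word[t][w] += 1; topic_total[t] += 1
    return doc_topic, topic_word

docs = [s.split() for s in [
    "broccoli bananas broccoli breakfast munching",
    "kittens chinchillas cute kittens adorable",
    "cute hamster munching broccoli",
]]
doc_topic, topic_word = lda_gibbs(docs, K=2)
for k in range(2):
    print(f"Topic {k}:", topic_word[k].most_common(3))
```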
Layman’s Explanation
In case the discussion above was a little eye-glazing, here’s another way to look at LDA in a different domain.
Suppose you’ve just moved to a new city. You’re a hipster and an anime fan, so you want to know where the other hipsters and anime geeks tend to hang out. Of course, as a hipster, you know you can’t just ask, so what do you do?
Here’s the scenario: you scope out a bunch of different establishments (documents) across town, making note of the people (words) hanging out in each of them (e.g., Alice hangs out at the mall and at the park, Bob hangs out at the movie theater and the park, and so on). Crucially, you don’t know the typical interest groups (topics) of each establishment, nor do you know the different interests of each person.
So you pick some number K of categories to learn (i.e., you want to learn the K most important kinds of categories people fall into), and start by making a guess as to why you see people where you do. For example, you initially guess that Alice is at the mall because people with interests in X like to hang out there; when you see her at the park, you guess it’s because her friends with interests in Y like to hang out there; when you see Bob at the movie theater, you randomly guess it’s because the Z people in this city really like to watch movies; and so on.
Of course, your random guesses are very likely to be incorrect (they’re random guesses, after all!), so you want to improve on them. One way of doing so is to:
• Pick a place and a person (e.g., Alice at the mall).
• Why is Alice likely to be at the mall? Probably because other people at the mall with the same interests sent her a message telling her to come.
• In other words, the more people with interests in X there are at the mall and the stronger Alice is associated with interest X (at all the other places she goes to), the more likely it is that Alice is at the mall because of interest X.
• So make a new guess as to why Alice is at the mall, choosing an interest with some probability according to how likely you think it is.
Go through each place and person over and over again. Your guesses keep getting better and better (after all, if you notice that lots of geeks hang out at the bookstore, and you suspect that Alice is pretty geeky herself, then it’s a good bet that Alice is at the bookstore because her geek friends told her to go there; and now that you have a better idea of why Alice is probably at the bookstore, you can use this knowledge in turn to improve your guesses as to why everyone else is where they are), and eventually you can stop updating. Then take a snapshot (or multiple snapshots) of your guesses, and use it to get all the information you want:
• For each category, you can count the people assigned to that category to figure out what people have this particular interest. By looking at the people themselves, you can interpret the category as well (e.g., if category X contains lots of tall people wearing jerseys and carrying around basketballs, you might interpret X as the “basketball players” group).
• For each place P and interest category C, you can compute the proportions of people at P because of C (under the current set of assignments), and these give you a representation of P. For example, you might learn that the people who hang out at Barnes & Noble consist of 10% hipsters, 50% anime fans, 10% jocks, and 30% college students.
Real-World Example
Finally, I applied LDA to a set of Sarah Palin’s emails a little while ago (see here for the blog post, or here for an app that allows you to browse through the emails by the LDA-learned categories), so let’s give a brief recap. Here are some of the topics that the algorithm learned:
• Trig/Family/Inspiration: family, web, mail, god, son, from, congratulations, children, life, child, down, trig, baby, birth, love, you, syndrome, very, special, bless, old, husband, years, thank, best, …
• Wildlife/BP Corrosion: game, fish, moose, wildlife, hunting, bears, polar, bear, subsistence, management, area, board, hunt, wolves, control, department, year, use, wolf, habitat, hunters, caribou, program, denby, fishing, …
• Energy/Fuel/Oil/Mining: energy, fuel, costs, oil, alaskans, prices, cost, nome, now, high, being, home, public, power, mine, crisis, price, resource, need, community, fairbanks, rebate, use, mining, villages, …
• Gas: gas, oil, pipeline, agia, project, natural, north, producers, companies, tax, company, energy, development, slope, production, resources, line, gasline, transcanada, said, billion, plan, administration, million, industry, …
• Education/Waste: school, waste, education, students, schools, million, read, email, market, policy, student, year, high, news, states, program, first, report, business, management, bulletin, information, reports, 2008, quarter, …
• Presidential Campaign/Elections: mail, web, from, thank, you, box, mccain, sarah, very, good, great, john, hope, president, sincerely, wasilla, work, keep, make, add, family, republican, support, doing, p.o, …
Here’s an example of an email which fell 99% into the Trig/Family/Inspiration category (particularly representative words are highlighted in blue):
And here’s an excerpt from an email which fell 10% into the Presidential Campaign/Election category (in red) and 90% into the Wildlife/BP Corrosion category (in green):
Tweets vs. Likes: What gets shared on Twitter vs. Facebook?
It always strikes me as curious that some posts get a lot of love on Twitter, while others get many more shares on Facebook:
What accounts for this difference? Some of it is surely site-dependent: maybe one blogger has a Facebook page but not a Twitter account, while another has these roles reversed. But even on sites maintained by a single author, tweet-to-likes ratios can vary widely from post to post.
So what kinds of articles tend to be more popular on Twitter, and which spread more easily on Facebook? To take a stab at an answer, I scraped data from a couple of websites over the weekend.
tl;dr Twitter is still for the techies: articles where the number of tweets greatly outnumber FB likes tend to revolve around software companies and programming. Facebook, on the other hand, appeals to everyone else: yeah, to the masses, and to non-software technical folks in general as well.
FlowingData
The first site I looked at was Nathan Yau’s awesome FlowingData website on data visualization. To see which articles are more popular on Facebook and which are more popular on Twitter, let’s sort all the FlowingData articles by their # tweets / # likes ratio.
Here are the 10 posts with the lowest tweets-to-likes ratio (i.e., the posts that were especially popular with Facebook users):
And here are the 10 posts with the highest tweets-to-like ratio (i.e., the posts especially popular with Twitter users):
Notice any differences between the two?
• Instant gratification infographics, cuteness, comics, and pop culture get liked on Facebook.
• APIs, datasets, visualizations related to techie sites (Delicious, foursquare, Twitter, LinkedIn), and picture-less articles get tweeted instead.
Interestingly, it also looks like the colors in the top 10 Facebook articles tend to the red end of the spectrum, while the colors in the top 10 Twitter articles tend to the blue end of the spectrum. Does this pattern hold if we look at more data? Here's a meta-visualization of the FlowingData articles, sorted from articles popular on Facebook in the top left to articles popular on Twitter in the bottom right (see here for some interactivity and more details):
It does indeed look like the images at the top (the articles popular on Facebook) are more pink, while the images at the bottom (the articles popular on Twitter) are more blue (though it would be nice to quantify this in some way)!
Furthermore, we can easily see from the grid that articles with no visualizations (represented by lorem ipsum text in the grid) cluster at the bottom. Grabbing some actual numbers, we find that 32% of articles with at least one picture have more shares on Facebook than on Twitter, compared to only 4% of articles with no picture at all.
Finally, let’s break down the percentage of articles with more Facebook shares by category.
(I filtered the categories so that each category in the plot above contains at least 5 articles.)
What do we find?
• Articles in the Software, Online Applications, News, and Data sources categories (yawn) get 100% of their shares from Twitter.
• Articles tagged with Data Underload (which seems to contain short and sweet visualizations of everyday things), Miscellaneous (which contains lots of comics or comic-like visualizations), and Infographics get the most shares on Facebook.
• This category breakdown matches precisely what we saw in the top 10 examples above.
New Scientist
When looking at FlowingData, we saw that Twitter users are much bigger on sharing technical articles. But is this true for technical articles in general, or only for programming-related posts? (In my experience with Twitter, I haven’t seen many people from math and the non-computer sciences.)
To answer, I took articles from the Physics & Math and Technology sections of New Scientist, and
• Calculated the percentage of shares each article received on Twitter (i.e., # tweets / (# tweets + # likes)).
• Grouped articles by their number of tweets rounded to the nearest multiple of 25 (bin #1 contains articles close to 25 tweets, bin #2 contains articles close to 50 tweets, etc.).
• Calculated the median percentage of shares on Twitter for each bin.
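In code, the binning looks something like this (the per-article counts below are invented):

```python
from collections import defaultdict
from statistics import median

# Hypothetical (tweets, likes) counts for articles in one section.
articles = [(30, 10), (48, 60), (55, 25), (110, 40), (23, 5)]

bins = defaultdict(list)
for tweets, likes in articles:
    twitter_share = tweets / (tweets + likes)  # % of shares on Twitter
    b = max(1, round(tweets / 25))             # bin by tweets, nearest multiple of 25
    bins[b].append(twitter_share)

for b in sorted(bins):
    print(f"~{b * 25} tweets: median Twitter share = {median(bins[b]):.2f}")
```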
Here’s a graph of the result:
Notice that:
• The technology articles get consistently more shares from Twitter than the physics and math articles do.
• Twitter accounts for the majority of the technology shares.
• Facebook accounts for the majority of the physics and math shares.
So this suggests that Twitter really is for computer technology in particular, not technical matters in general (though it would be nice to look at areas other than physics and math as well).
Quora
To get some additional evidence on the computer science vs. math/physics divide, I
• Scraped about 350 profiles of followers from each of the Computer Science, Software Engineering, Mathematics, and Physics categories on Quora;
• Checked each user to see whether they link to their Facebook and Twitter accounts on their profile.
Here’s the ratio of the number of people linking to their Facebook account to the number of people linking to their Twitter account, sliced by topic:
We find exactly what we expect from the New Scientist data: people following the math and physics categories have noticeably smaller Twitter / Facebook ratios compared to people following the computer science and software engineering categories (i.e., compared to computer scientists and software engineers, mathematicians and physicists are more likely to be on Facebook than on Twitter). What’s more, this difference is in fact significant: the graphs display individual 90% confidence intervals (which overlap not at all or only slightly), and we do indeed get significance at the 95% level if we look at the differences between categories.
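The post doesn't say how its intervals were computed, but one simple way to get a ratio plus a confidence interval from this kind of profile data is a percentile bootstrap, sketched here on made-up flags:

```python
import random

random.seed(0)

# Hypothetical per-profile flags for followers of one topic:
# (links to Facebook?, links to Twitter?)
profiles = [(random.random() < 0.55, random.random() < 0.35) for _ in range(350)]

def fb_tw_ratio(sample):
    return sum(fb for fb, _ in sample) / sum(tw for _, tw in sample)

def bootstrap_ci(data, stat, level=0.90, reps=2000):
    """Percentile bootstrap confidence interval for stat(data)."""
    stats = sorted(stat(random.choices(data, k=len(data))) for _ in range(reps))
    lo = stats[int(reps * (1 - level) / 2)]
    hi = stats[int(reps * (1 + level) / 2)]
    return lo, hi

print(fb_tw_ratio(profiles), bootstrap_ci(profiles, fb_tw_ratio))
```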
This corroborates the New Scientist evidence that Twitter gets the computer technology shares, while Facebook gets the math and physics shares.
XKCD
Finally, let’s take a look at which XKCD comics are especially popular on Facebook vs. Twitter.
Here are the 10 comics with the highest likes-to-tweets ratio (i.e., the comics especially popular on Facebook):
Here are the 10 comics with the highest tweets-to-likes ratio (i.e., the comics especially popular on Twitter):
Note that the XKCD comics popular on Facebook have more of a layman flavor, while the XKCD comics popular on Twitter are much more programming-related:
• Of the XKCD comics popular on Twitter, one’s about server attention spans, another’s about IPv6 addresses, a third is about GNU info pages, another deals with cloud computing, a fifth talks about Java, and the last is about a bunch of techie sites. (This is just like what we saw with the FlowingData visualizations.)
• Facebook, on the other hand, gets Ke$ha and Magic School Bus.
• And while both top 10's contain a flowchart, the one popular on FB is about cooking, while the one popular on Twitter is about code!
• What's more, if we look at the few technical-ish comics that are more popular on Facebook (the complex conjugate, mu, and Los Alamos comics), we see that they're about physics and math, not programming (which matches our findings from the New Scientist articles).
Lesson
So why should you care? Here's one takeaway:
• If you're blogging about technology, programming, and computer science, Twitter is your friend.
• But if you're blogging about anything else, be it math/physics or pop culture, don't rely on a Twitter account alone; your shares are more likely to propagate on Facebook, so make sure to have a Facebook page as well.
What's Next?
The three websites I looked at are all fairly tech-oriented, so it would be nice to gather data from other kinds of websites as well. And now that we have an idea how Twitter and Facebook compare, the next burning question is surely: what do people share on Google+?!
Addendum
Let's consider the following thought experiment. Suppose you come across the most unpopular article ever written. What will its FB vs. Twitter shares look like? Although no real person will ever share this article, I think Twitter has many more spambots (who tweet out any and every link) than FB does, so maybe unpopular articles will have more tweets than likes by default. Conversely, suppose you come across the most popular article ever written, which everybody wants to share. Then since FB has many more users than Twitter does, maybe popular articles will tend to have more likes than tweets anyways.
Thus, in order to find out which types of articles are especially popular on FB vs. Twitter, instead of looking at tweets-to-likes ratios directly, we could try to remove this baseline popularity effect. (Taking ratios instead of raw number of tweets or raw number of likes is one kind of normalization; this is another.)
So does this scenario (or something similar to it) actually play out in practice? Here I've plotted the overall popularity of a post (the total number of shares it received on either Twitter or FB) against the percentage of shares on Facebook alone, and we can see that as a post's popularity grows, more and more shares do indeed tend to come from Facebook rather than Twitter.
Also, see the posts at the lower end of the popularity scale that are only getting shares on Twitter? Let's take a look at the five most unpopular of these:
Notice that they're all shoutouts to FlowingData's sponsors! There's pretty much no reason any real person would share these on Twitter or Facebook, and indeed, checking Twitter to see who actually tweeted out these links, we see that the tweeters are bots.
Now let's switch to a slightly different view of the above scenario, where I plot number of tweets against number of likes:
We see that as popularity on Twitter increases, so too does popularity on Facebook – but at a slightly faster rate. (The form of the blue line plotted is roughly $\log(likes) = -3.87 + 1.70 \log(tweets)$.)
So instead of looking at the ratios above, to figure out which articles are popular on FB vs. Twitter, we could look at the residuals of the above plot. Posts with large positive residuals would be posts that are especially popular on FB, and posts with negative residuals would be posts that are especially popular on Twitter.
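To make the residual approach concrete, here's a minimal sketch (not the post's analysis code) that fits the log-log line by ordinary least squares and ranks hypothetical posts by their residuals:

```python
import math

# Hypothetical (tweets, likes) pairs for a handful of posts.
posts = [(120, 80), (40, 5), (300, 900), (60, 30), (15, 2)]

xs = [math.log(t) for t, _ in posts]
ys = [math.log(l) for _, l in posts]

# Ordinary least squares fit of log(likes) = a + b * log(tweets).
n = len(posts)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
a = mean_y - b * mean_x

# Positive residual: more Facebook-popular than its Twitter count predicts;
# negative residual: Twitter-leaning.
residuals = [y - (a + b * x) for x, y in zip(xs, ys)]
for (t, l), r in sorted(zip(posts, residuals), key=lambda pair: pair[1]):
    print(f"tweets={t:4d} likes={l:4d} residual={r:+.2f}")
```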
In practice, however, there wasn't much difference between looking at residuals vs. ratios directly when using the datasets I had, so to keep things simple in the main discussion above, I stuck to ratios alone. Still, it's another option which might be useful when looking at different questions or different sources of data, so just for completeness, here's what the FlowingData results look like if we use residuals instead.
The 10 articles with the highest residuals (i.e., the articles most popular on Facebook):
The 10 articles with the lowest residuals (i.e., the articles most popular on Twitter):
Here's a density plot of article residuals, split by whether the article has a visualization or not (residuals of picture-free articles are clearly shifted towards the negative end):
Here are the mean residuals per category (again, we see that the miscellaneous, data underload, data art, and infographics categories tend to be more popular on Facebook, while the data sources, software, online applications, and news categories tend to be more popular on Twitter):
And that's it! In the spirit of these findings, I hope this article gets liked a little and tweeted lots and lots.
Introduction to Restricted Boltzmann Machines
Suppose you ask a bunch of users to rate a set of movies on a 0-100 scale. In classical factor analysis, you could then try to explain each movie and user in terms of a set of latent factors. For example, movies like Star Wars and Lord of the Rings might have strong associations with a latent science fiction and fantasy factor, and users who like Wall-E and Toy Story might have strong associations with a latent Pixar factor.
Restricted Boltzmann Machines essentially perform a binary version of factor analysis. (This is one way of thinking about RBMs; there are, of course, others, and lots of different ways to use RBMs, but I'll adopt this approach for this post.) Instead of users rating a set of movies on a continuous scale, they simply tell you whether they like a movie or not, and the RBM will try to discover latent factors that can explain the activation of these movie choices.
More technically, a Restricted Boltzmann Machine is a stochastic neural network (neural network meaning we have neuron-like units whose binary activations depend on the neighbors they're connected to; stochastic meaning these activations have a probabilistic element) consisting of:
• One layer of visible units (users' movie preferences whose states we know and set);
• One layer of hidden units (the latent factors we try to learn); and
• A bias unit (whose state is always on, and is a way of adjusting for the different inherent popularities of each movie).
Furthermore, each visible unit is connected to all the hidden units (this connection is undirected, so each hidden unit is also connected to all the visible units), and the bias unit is connected to all the visible units and all the hidden units. To make learning easier, we restrict the network so that no visible unit is connected to any other visible unit and no hidden unit is connected to any other hidden unit.
For example, suppose we have a set of six movies (Harry Potter, Avatar, LOTR 3, Gladiator, Titanic, and Glitter) and we ask users to tell us which ones they want to watch.
If we want to learn two latent units underlying movie preferences – for example, two natural groups in our set of six movies appear to be SF/fantasy (containing Harry Potter, Avatar, and LOTR 3) and Oscar winners (containing LOTR 3, Gladiator, and Titanic), so we might hope that our latent units will correspond to these categories – then our RBM would look like the following. (Note the resemblance to a factor analysis graphical model.)

State Activation

Restricted Boltzmann Machines, and neural networks in general, work by updating the states of some neurons given the states of others, so let’s talk about how the states of individual units change. Assuming we know the connection weights in our RBM (we’ll explain how to learn these below), to update the state of unit $i$:

• Compute the activation energy $a_i = \sum_j w_{ij} x_j$ of unit $i$, where the sum runs over all units $j$ that unit $i$ is connected to, $w_{ij}$ is the weight of the connection between $i$ and $j$, and $x_j$ is the 0 or 1 state of unit $j$. In other words, all of unit $i$’s neighbors send it a message, and we compute the sum of all these messages.
• Let $p_i = \sigma(a_i)$, where $\sigma(x) = 1/(1 + \exp(-x))$ is the logistic function. Note that $p_i$ is close to 1 for large positive activation energies, and $p_i$ is close to 0 for negative activation energies.
• We then turn unit $i$ on with probability $p_i$, and turn it off with probability $1 - p_i$.
• In layman’s terms, units that are positively connected to each other try to get each other to share the same state (i.e., be both on or off), while units that are negatively connected to each other are enemies that prefer to be in different states.

For example, let’s suppose our two hidden units really do correspond to SF/fantasy and Oscar winners.

• If Alice has told us her six binary preferences on our set of movies, we could then ask our RBM which of the hidden units her preferences activate (i.e., ask the RBM to explain her preferences in terms of latent factors). So the six movies send messages to the hidden units, telling them to update themselves. (Note that even if Alice has declared she wants to watch Harry Potter, Avatar, and LOTR 3, this doesn’t guarantee that the SF/fantasy hidden unit will turn on, but only that it will turn on with high probability. This makes a bit of sense: in the real world, Alice wanting to watch all three of those movies makes us highly suspect she likes SF/fantasy in general, but there’s a small chance she wants to watch them for other reasons. Thus, the RBM allows us to generate models of people in the messy, real world.)
• Conversely, if we know that one person likes SF/fantasy (so that the SF/fantasy unit is on), we can then ask the RBM which of the movie units that hidden unit turns on (i.e., ask the RBM to generate a set of movie recommendations). So the hidden units send messages to the movie units, telling them to update their states. (Again, note that the SF/fantasy unit being on doesn’t guarantee that we’ll always recommend all three of Harry Potter, Avatar, and LOTR 3 because, hey, not everyone who likes science fiction liked Avatar.)
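As a concrete illustration of the update rule above, here is a minimal Python sketch (mine, not the commented implementation linked in the Examples section below) that computes one unit's activation energy from its neighbors and samples its new binary state; the weights and states are made-up toy values:

```python
import numpy as np

rng = np.random.default_rng(0)

def update_unit(weights, neighbor_states):
    """Sample the new 0/1 state of one unit from its neighbors' states."""
    activation_energy = weights @ neighbor_states   # a_i = sum_j w_ij x_j
    p = 1.0 / (1.0 + np.exp(-activation_energy))    # logistic function
    return int(rng.random() < p)                    # on with probability p

# Toy example: one hidden unit connected to six visible units (bias omitted).
w = np.array([4.9, 2.3, 4.1, -4.0, -5.6, -2.9])     # made-up weights
visible = np.array([1, 1, 1, 0, 0, 0])              # an SF/fantasy-ish fan
print(update_unit(w, visible))                      # almost always 1
```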
Learning Weights

So how do we learn the connection weights in our network? Suppose we have a bunch of training examples, where each training example is a binary vector with six elements corresponding to a user’s movie preferences. Then for each epoch, do the following:

• Take a training example (a set of six movie preferences). Set the states of the visible units to these preferences.
• Next, update the states of the hidden units using the logistic activation rule described above: for the $j$th hidden unit, compute its activation energy $a_j = \sum_i w_{ij} x_i$, and set $x_j$ to 1 with probability $\sigma(a_j)$ and to 0 with probability $1 - \sigma(a_j)$. Then for each edge $e_{ij}$, compute $Positive(e_{ij}) = x_i \cdot x_j$ (i.e., for each pair of units, measure whether they’re both on).
• Now reconstruct the visible units in a similar manner: for each visible unit, compute its activation energy $a_i$, and update its state. (Note that this reconstruction may not match the original preferences.) Then update the hidden units again, and compute $Negative(e_{ij}) = x_i \cdot x_j$ for each edge.
• Update the weight of each edge $e_{ij}$ by setting $w_{ij} = w_{ij} + L \cdot (Positive(e_{ij}) - Negative(e_{ij}))$, where $L$ is a learning rate.
• Repeat over all training examples. Continue until the network converges (i.e., the error between the training examples and their reconstructions falls below some threshold) or we reach some maximum number of epochs.

Why does this update rule make sense? Note that:

• In the first phase, $Positive(e_{ij})$ measures the association between the $i$th and $j$th unit that we want the network to learn from our training examples.
• In the “reconstruction” phase, where the RBM generates the states of visible units based on its hypotheses about the hidden units alone, $Negative(e_{ij})$ measures the association that the network itself generates (or “daydreams” about) when no units are fixed to training data.

So by adding $Positive(e_{ij}) - Negative(e_{ij})$ to each edge weight, we’re helping the network’s daydreams better match the reality of our training examples. (You may hear this update rule called contrastive divergence, which is basically a funky term for “approximate gradient descent”.)
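Here is a minimal numpy sketch of one contrastive-divergence (CD-1) update over a batch, under the simplifying assumptions of the description above (no bias unit, binary states); it is my own illustration, not the implementation linked in the Examples section below. Note that it uses activation probabilities in the negative phase, one of the modifications discussed later:

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

def cd1_step(W, data, L=0.1):
    """One CD-1 weight update; rows of `data` are binary visible vectors,
    and W has shape (num_visible, num_hidden)."""
    # Positive phase: drive the hidden units from the data.
    h_prob = sigmoid(data @ W)
    h = (rng.random(h_prob.shape) < h_prob).astype(float)
    positive = data.T @ h                       # sums x_i * x_j over examples

    # Reconstruction ("daydream") phase.
    v_recon = (rng.random(data.shape) < sigmoid(h @ W.T)).astype(float)
    h_recon_prob = sigmoid(v_recon @ W)
    negative = v_recon.T @ h_recon_prob         # probabilities, not samples

    return W + L * (positive - negative) / len(data)

W = rng.normal(scale=0.1, size=(6, 2))          # 6 movies, 2 hidden units
data = np.array([[1, 1, 1, 0, 0, 0],
                 [0, 0, 1, 1, 1, 0]], dtype=float)
for _ in range(1000):
    W = cd1_step(W, data)
```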
Examples

I wrote a simple RBM implementation in Python (the code is heavily commented, so take a look if you’re still a little fuzzy on how everything works), so let’s use it to walk through some examples. First, I trained the RBM using some fake data:

• Alice: (Harry Potter = 1, Avatar = 1, LOTR 3 = 1, Gladiator = 0, Titanic = 0, Glitter = 0). Big SF/fantasy fan.
• Bob: (Harry Potter = 1, Avatar = 0, LOTR 3 = 1, Gladiator = 0, Titanic = 0, Glitter = 0). SF/fantasy fan, but doesn’t like Avatar.
• Carol: (Harry Potter = 1, Avatar = 1, LOTR 3 = 1, Gladiator = 0, Titanic = 0, Glitter = 0). Big SF/fantasy fan.
• David: (Harry Potter = 0, Avatar = 0, LOTR 3 = 1, Gladiator = 1, Titanic = 1, Glitter = 0). Big Oscar winners fan.
• Eric: (Harry Potter = 0, Avatar = 0, LOTR 3 = 1, Gladiator = 1, Titanic = 1, Glitter = 0). Oscar winners fan, except for Titanic.
• Fred: (Harry Potter = 0, Avatar = 0, LOTR 3 = 1, Gladiator = 1, Titanic = 1, Glitter = 0). Big Oscar winners fan.

The network learned the following weights:

|              | Bias Unit   | Hidden 1    | Hidden 2    |
|--------------|-------------|-------------|-------------|
| Bias Unit    | -0.08257658 | -0.19041546 |  1.57007782 |
| Harry Potter | -0.82602559 | -7.08986885 |  4.96606654 |
| Avatar       | -1.84023877 | -5.18354129 |  2.27197472 |
| LOTR 3       |  3.92321075 |  2.51720193 |  4.11061383 |
| Gladiator    |  0.10316995 |  6.74833901 | -4.00505343 |
| Titanic      | -0.97646029 |  3.25474524 | -5.59606865 |
| Glitter      | -4.44685751 | -2.81563804 | -2.91540988 |

Note that the first hidden unit seems to correspond to the Oscar winners, and the second hidden unit seems to correspond to the SF/fantasy movies, just as we were hoping.

What happens if we give the RBM a new user, George, who has (Harry Potter = 0, Avatar = 0, LOTR 3 = 0, Gladiator = 1, Titanic = 1, Glitter = 0) as his preferences? It turns the Oscar winners unit on (but not the SF/fantasy unit), correctly guessing that George probably likes movies that are Oscar winners.

What happens if we activate only the SF/fantasy unit, and run the RBM a bunch of different times? In my trials, it turned on Harry Potter, Avatar, and LOTR 3 three times; it turned on Avatar and LOTR 3, but not Harry Potter, once; and it turned on Harry Potter and LOTR 3, but not Avatar, twice. Note that, based on our training examples, these generated preferences do indeed match what we might expect real SF/fantasy fans would want to watch.

Modifications

I tried to keep the connection-learning algorithm I described above pretty simple, so here are some modifications that often appear in practice:

• Above, $Negative(e_{ij})$ was determined by taking the product of the $i$th and $j$th units after reconstructing the visible units once and then updating the hidden units again. We could also take the product after some larger number of reconstructions (i.e., repeat updating the visible units, then the hidden units, then the visible units again, and so on); this is slower, but describes the network’s daydreams more accurately.
• Instead of using $Positive(e_{ij}) = x_i \cdot x_j$, where $x_i$ and $x_j$ are binary 0 or 1 states, we could also let $x_i$ and/or $x_j$ be activation probabilities. Similarly for $Negative(e_{ij})$.
• We could penalize larger edge weights, in order to get a sparser or more regularized model.
• When updating edge weights, we could use a momentum factor: we would add to each edge a weighted sum of the current step as described above (i.e., $L \cdot (Positive(e_{ij}) - Negative(e_{ij}))$) and the step previously taken.
• Instead of using only one training example in each epoch, we could use batches of examples in each epoch, and only update the network’s weights after passing through all the examples in the batch. This can speed up the learning by taking advantage of fast matrix-multiplication algorithms.

Further

If you’re interested in learning more about Restricted Boltzmann Machines, here are some good links.

Topic Modeling the Sarah Palin Emails

LDA-based Email Browser

Earlier this month, several thousand emails from Sarah Palin’s time as governor of Alaska were released. The emails weren’t organized in any fashion, though, so to make them easier to browse, I’ve been working on some topic modeling (in particular, using latent Dirichlet allocation) to separate the documents into different groups. I threw up a simple demo app to view the organized documents here.

What is Latent Dirichlet Allocation?

Briefly, given a set of documents, LDA tries to learn the latent topics underlying the set. It represents each document as a mixture of topics (generated from a Dirichlet distribution), each of which emits words with a certain probability.

For example, given the sentence “I listened to Justin Bieber and Lady Gaga on the radio while driving around in my car”, an LDA model might represent this sentence as 75% about music (a topic which, say, emits the words Bieber with 10% probability, Gaga with 5% probability, radio with 1% probability, and so on) and 25% about cars (which might emit driving with 15% probability and cars with 10% probability). If you’re familiar with latent semantic analysis, you can think of LDA as a generative version. (For a more in-depth explanation, I wrote an introduction to LDA here.)
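If you want to experiment with LDA yourself, here is a minimal sketch using the gensim library (an assumption on my part; the post doesn't say which implementation was behind the demo app). The toy documents stand in for tokenized emails:

```python
from gensim import corpora, models

# Each document is a list of tokens; real emails would need tokenizing/stopwording.
docs = [["moose", "hunting", "wildlife", "bear"],
        ["oil", "gas", "pipeline", "energy"],
        ["oil", "energy", "prices", "fuel"]]

dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(doc) for doc in docs]

lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, passes=20)
print(lda.print_topics())           # top words per topic
print(lda[corpus[0]])               # topic mixture of the first document
```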
Sarah Palin Email Topics

Here’s a sample of the topics learnt by the model, as well as the top words for each topic. (Names, of course, are based on my own interpretation.)

• Wildlife/BP Corrosion: game, fish, moose, wildlife, hunting, bears, polar, bear, subsistence, management, area, board, hunt, wolves, control, department, year, use, wolf, habitat, hunters, caribou, program, denby, fishing, …
• Energy/Fuel/Oil/Mining: energy, fuel, costs, oil, alaskans, prices, cost, nome, now, high, being, home, public, power, mine, crisis, price, resource, need, community, fairbanks, rebate, use, mining, villages, …
• Trig/Family/Inspiration: family, web, mail, god, son, from, congratulations, children, life, child, down, trig, baby, birth, love, you, syndrome, very, special, bless, old, husband, years, thank, best, …
• Gas: gas, oil, pipeline, agia, project, natural, north, producers, companies, tax, company, energy, development, slope, production, resources, line, gasline, transcanada, said, billion, plan, administration, million, industry, …
• Education/Waste: school, waste, education, students, schools, million, read, email, market, policy, student, year, high, news, states, program, first, report, business, management, bulletin, information, reports, 2008, quarter, …
• Presidential Campaign/Elections: mail, web, from, thank, you, box, mccain, sarah, very, good, great, john, hope, president, sincerely, wasilla, work, keep, make, add, family, republican, support, doing, p.o, …

Here’s a sample email from the wildlife topic. I also thought the classification of another email was really neat: the LDA model labeled it as 10% in the Presidential Campaign/Elections topic and 90% in the Wildlife topic, and it’s precisely a wildlife-based protest against Palin as a choice for VP.

Future Analysis

In a future post, I’ll perhaps see if we can glean any interesting patterns from the email topics. For example, for a quick graph now, if we look at the percentage of emails in the Trig/Family/Inspiration topic across time, we see that there’s a spike in April 2008 – exactly (and unsurprisingly) the month in which Trig was born.

Filtering for English Tweets: Unsupervised Language Detection on Twitter

(See a demo here.)

While working on a Twitter sentiment analysis project, I ran into the problem of needing to filter out all non-English tweets. (Asking the Twitter API for English-only tweets doesn’t seem to work, as it nonetheless returns tweets in Spanish, Portuguese, Dutch, Russian, and a couple other languages.) Since I didn’t have any labeled data, I thought it would be fun to build an unsupervised language classifier. In particular, using an EM algorithm to build a naive Bayes model of English vs. non-English n-gram probabilities turned out to work quite well, so here’s a description.

EM Algorithm

Let’s recall the naive Bayes algorithm: given a tweet (a set of character n-grams), we estimate its language to be the language $L$ that maximizes

$$P(language = L | ngrams) \propto P(ngrams | language = L) P(language = L)$$

Thus, we need to estimate $P(ngram | language = L)$ and $P(language = L)$. This would be easy if we knew the language of each tweet, since we could estimate:

• $P(xyz | language = English)$ as #(number of times “xyz” is a trigram in the English tweets) / #(total trigrams in the English tweets), and
• $P(language = English)$ as the proportion of English tweets.

Or, it would also be easy if we knew the n-gram probabilities for each language, since we could use Bayes’ theorem to compute the language probabilities for each tweet, and then take a weighted variant of the previous paragraph.
The problem is that we know neither of these. So what the EM algorithm says is that we can simply guess:

• Pretend we know the language of each tweet (by randomly assigning languages at the beginning).
• Using this guess, compute the n-gram probabilities for each language.
• Using the n-gram probabilities for each language, recompute the language probabilities of each tweet.
• Using these recomputed language probabilities, recompute the n-gram probabilities.
• And so on, recomputing the language probabilities and n-gram probabilities over and over.

While our guesses will be off in the beginning, the probabilities will eventually converge to a (local) maximum of the likelihood. (In my tests, my language detector would sometimes correctly converge to an English detector, and sometimes it would converge to an English-and-Dutch detector.)
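Here is a compact Python sketch of this EM loop for two "languages" over character trigrams; it is my own illustration (the original implementation was in Ruby, as mentioned in the Results section below), and the smoothing constant and toy tweets are made up. In practice you would work with log-probabilities to avoid underflow on longer texts:

```python
import random
from collections import defaultdict

def trigrams(text):
    return [text[i:i + 3] for i in range(len(text) - 2)]

def em_language_detector(tweets, n_langs=2, iters=30, smooth=0.1, seed=0):
    rng = random.Random(seed)
    # Start by pretending we know each tweet's language (random soft guesses).
    resp = []
    for _ in tweets:
        row = [rng.random() for _ in range(n_langs)]
        total = sum(row)
        resp.append([r / total for r in row])

    grams = [trigrams(t) for t in tweets]
    vocab = {g for gs in grams for g in gs}

    for _ in range(iters):
        # M-step: re-estimate P(ngram | lang) and P(lang) from soft counts.
        prior = [sum(r[k] for r in resp) / len(tweets) for k in range(n_langs)]
        counts = [defaultdict(float) for _ in range(n_langs)]
        for gs, r in zip(grams, resp):
            for g in gs:
                for k in range(n_langs):
                    counts[k][g] += r[k]
        totals = [sum(c.values()) for c in counts]
        prob = [{g: (counts[k][g] + smooth) / (totals[k] + smooth * len(vocab))
                 for g in vocab} for k in range(n_langs)]

        # E-step: recompute P(lang | tweet) with naive Bayes.
        for i, gs in enumerate(grams):
            scores = []
            for k in range(n_langs):
                s = prior[k]
                for g in gs:
                    s *= prob[k][g]
                scores.append(s)
            z = sum(scores) or 1.0
            resp[i] = [s / z for s in scores]
    return resp

tweets = ["the cat sat on the mat", "el gato se sienta en la alfombra"] * 50
posteriors = em_language_detector(tweets)
print(posteriors[0], posteriors[1])   # usually near-opposite soft assignments
```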
EM Analogy for the Layman

Why does this work? Suppose you suddenly move to New York, and you want a way to differentiate between tourists and New Yorkers based on their activities. Initially, you don’t know who’s a tourist and who’s a New Yorker, and you don’t know which activities are touristy and which are not. So you randomly place people into two groups A and B. (You randomly assign all tweets to a language.)

Now, given all the people in group A, you notice that a large number of them visit the Statue of Liberty; similarly, you notice that a large number of people in group B walk really quickly. (You notice that one set of words often has the n-gram “ing”, and that another set of words often has the n-gram “ias”; that is, you fix the language probabilities for each tweet, and recompute the n-gram probabilities for each language.)

So you start to put people visiting the Statue of Liberty in group A, and you start to put fast walkers in group B. (You fix the n-gram probabilities for each language, and recompute the language probabilities for each tweet.)

With your new A and B groups, you notice more differentiating factors: group A people tend to carry along cameras, and group B people tend to be more finance-savvy. So you start to put camera-carrying folks in group A, and finance-savvy folks in group B. And so on. Eventually, you settle on two groups of people and differentiating activities: people who walk slowly and visit the Statue of Liberty, and busy-looking people who walk fast and don’t visit. Assuming there are more native New Yorkers than tourists, you can then guess that the natives are the larger group.

Results

I wrote some Ruby code to implement the above algorithm, and trained it on half a million tweets, using English and “not English” as my two languages. The results looked surprisingly good from just eyeballing.

But in order to get some hard metrics and to tune parameters (e.g., n-gram size), I needed a labeled dataset. So I pulled a set of English-language and Spanish-language documents from Project Gutenberg, and split them to form training and test sets (the training set consisted of 2000 lines of English and 1000 lines of Spanish, and the test set of 1000 lines of English and 1000 lines of Spanish).

Trained on bigrams, the detector resulted in:

• 991 true positives (English lines correctly classified as English)
• 9 false negatives (English lines incorrectly classified as Spanish)
• 11 false positives (Spanish lines incorrectly classified as English)
• 989 true negatives (Spanish lines correctly classified as Spanish)

for a precision of 0.989 and a recall of 0.991. Trained on trigrams, the detector resulted in:

• 992 true positives
• 8 false negatives
• 10 false positives
• 990 true negatives

for a precision of 0.990 and a recall of 0.992.

Also, when I looked at the sentences the detector was making errors on, I saw that they almost always consisted of only one or two words (e.g., the incorrectly classified sentences were lines like “inmortal”, “autumn”, and “salir”). So the detector pretty much never made a mistake on a normal sentence!

Code/Demo

I put the code on my Github account, and a quick demo app, trained on trigrams from tweets with lang=”en” according to the Twitter API, is here.

Choosing a Machine Learning Classifier

How do you know what machine learning algorithm to choose for your classification problem? Of course, if you really care about accuracy, your best bet is to test out a couple different ones (making sure to try different parameters within each algorithm as well), and select the best one by cross-validation. But if you’re simply looking for a “good enough” algorithm for your problem, or a place to start, here are some general guidelines I’ve found to work well over the years.

How large is your training set?

If your training set is small, high bias/low variance classifiers (e.g., Naive Bayes) have an advantage over low bias/high variance classifiers (e.g., kNN), since the latter will overfit. But low bias/high variance classifiers start to win out as your training set grows (they have lower asymptotic error), since high bias classifiers aren’t powerful enough to provide accurate models. You can also think of this as a generative model vs. discriminative model distinction.

Advantages of some particular algorithms

Advantages of Naive Bayes: Super simple: you’re just doing a bunch of counts. If the NB conditional independence assumption actually holds, a Naive Bayes classifier will converge more quickly than discriminative models like logistic regression, so you need less training data. And even if the NB assumption doesn’t hold, a NB classifier still often does a great job in practice. A good bet if you want something fast and easy that performs pretty well. Its main disadvantage is that it can’t learn interactions between features (e.g., it can’t learn that although you love movies with Brad Pitt and Tom Cruise, you hate movies where they’re together).

Advantages of Logistic Regression: Lots of ways to regularize your model, and you don’t have to worry as much about your features being correlated, like you do in Naive Bayes. You also have a nice probabilistic interpretation, unlike decision trees or SVMs, and you can easily update your model to take in new data (using an online gradient descent method), again unlike decision trees or SVMs. Use it if you want a probabilistic framework (e.g., to easily adjust classification thresholds, to say when you’re unsure, or to get confidence intervals) or if you expect to receive more training data in the future that you want to be able to quickly incorporate into your model.

Advantages of Decision Trees: Easy to interpret and explain (for some people – I’m not sure I fall into this camp). They easily handle feature interactions, and they’re non-parametric, so you don’t have to worry about outliers or whether the data is linearly separable (e.g., decision trees easily take care of cases where you have class A at the low end of some feature x, class B in the mid-range of feature x, and A again at the high end).
One disadvantage is that they don’t support online learning, so you have to rebuild your tree when new examples come in. Another disadvantage is that they easily overfit, but that’s where ensemble methods like random forests (or boosted trees) come in. Plus, random forests are often the winner for lots of classification problems (usually slightly ahead of SVMs, I believe), they’re fast and scalable, and you don’t have to worry about tuning a bunch of parameters like you do with SVMs, so they seem to be quite popular these days.

Advantages of SVMs: High accuracy, nice theoretical guarantees regarding overfitting, and with an appropriate kernel they can work well even if your data isn’t linearly separable in the base feature space. Especially popular in text classification problems where very high-dimensional spaces are the norm. Memory-intensive, hard to interpret, and kind of annoying to run and tune, though, so I think random forests are starting to steal the crown.

But…

Recall, though, that better data often beats better algorithms, and designing good features goes a long way. And if you have a huge dataset, then whichever classification algorithm you use might not matter so much in terms of classification performance (so choose your algorithm based on speed or ease of use instead). And to reiterate what I said above, if you really care about accuracy, you should definitely try a bunch of different classifiers and select the best one by cross-validation. Or, to take a lesson from the Netflix Prize (and Middle Earth), just use an ensemble method to choose them all.
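As a quick illustration of the try-everything-and-cross-validate advice, here is a minimal scikit-learn sketch (my addition, not from the original post); the bundled breast-cancer dataset is only a stand-in for your own features and labels:

```python
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True)   # stand-in dataset

candidates = {
    "naive bayes": GaussianNB(),
    "logistic regression": LogisticRegression(max_iter=5000),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "svm": SVC(),
}
for name, clf in candidates.items():
    scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross-validation
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```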
Kickstarter Data Analysis: Success and Pricing

Kickstarter is an online crowdfunding platform for launching creative projects. When starting a new project, project owners specify a deadline and the minimum amount of money they need to raise. They receive the money (less a transaction fee) only if they reach or exceed that minimum; otherwise, no money changes hands.

What’s particularly fun about Kickstarter is that in contrast to that other microfinance site, Kickstarter projects don’t ask for loans; instead, patrons receive pre-specified rewards unique to each project. For example, someone donating money to help an artist record an album might receive a digital copy of the album if they donate 20 dollars, or a digital copy plus a signed physical CD if they donate 50 dollars.

There are a bunch of neat projects, and I’m tempted to put one of my own on there soon, so I thought it would be fun to gather some data from the site and see what makes a project successful.

Categories

I started by scraping the categories section. In true indie fashion, the artsy categories tend to dominate. (I’m surprised/disappointed how little love the Technology category gets.)

Ending Soon

The categories section really only provides a history of successful projects, though, so to get some data on unsuccessful projects as well, I took a look at the Ending Soon section of projects whose deadlines are about to pass. It looks like about 50% of all Kickstarter projects get successfully funded by the deadline.

Interestingly, most of the final funding seems to happen in the final few days: with just 5 days left, only about 20% of all projects have been funded. (In other words, with just 5 days left, 60% of the projects that will eventually be successful are still unfunded.) So the approaching deadline seems to really spur people to donate. I wonder if it’s because of increased publicity in the final few days (the project owners begging everyone for help!) or if it’s simply procrastination in action (perhaps people want to wait to see if their donation is really necessary).

Lesson: if you’re still not fully funded with only a couple days remaining, don’t despair.

Success vs. Failure

What factors lead a project to succeed? Are there any quantitative differences between projects that eventually get funded and those that don’t? Two simple (if kind of obvious) things I noticed are that unsuccessful projects tend to require a larger amount of money, and unsuccessful projects also tend to raise less money in absolute terms (i.e., it’s not just that they ask for too much money to reach their goal – they’re simply not receiving enough money as well). Not terribly surprising, but it’s good to confirm (and I’m still working on finding other predictors).

Pledge Rewards

There’s a lot of interesting work in behavioral economics on pricing and choice – for example, the anchoring effect suggests that when building a menu, you should include an expensive item to make other menu items look reasonably priced in comparison, and the paradox of choice suggests that too many choices lead to a decision freeze – so one aspect of the Kickstarter data I was especially interested in was how pricing of rewards affects donations.

For example, does pricing the lowest reward at 25 dollars lead to more money donated (people don’t lowball at 5 dollars instead) or less money donated (25 dollars is more money than most people are willing to give)? And what happens if a new reward at 5 dollars is added – again, does it lead to more money (now people can donate something they can afford) or less money (the people that would have paid 25 dollars switch to a 5 dollar donation)?

First, here’s a look at the total number of pledges at each price. (More accurately, it’s the number of claimed rewards at each price.) [Update: the original version of this graph was wrong, but I’ve since fixed it.] Surprisingly, 5 dollar and 1 dollar donations are actually not the most common contributions.

To investigate pricing effects, I started by looking at all (successful) projects that had a reward priced at 1 dollar, and compared the number of donations at 1 dollar with the number of donations at the next lowest reward. Up to about 15-20 dollars, there’s a steady increase in the proportion of people who choose the second reward over the first reward, but after that, the proportion decreases.

So this perhaps suggests that if you’re going to price your lowest reward at 1 dollar, your next reward should cost roughly 20 dollars (or slightly more, to maximize your total revenue). Pricing above 20 dollars is a little too expensive for the folks who want to support you but aren’t rich enough to throw gads of money; maybe rewards below 20 dollars aren’t good enough to merit the higher donation.

Next, I’m planning on digging a little deeper into pricing effects and what makes a project successful, so I’ll hopefully have some more Kickstarter analysis in a future post. In the meantime, in case anyone else wants to take a look, I put the data onto my Github account.

A Mathematical Introduction to Least Angle Regression

(For a layman’s introduction, see here.)

Least Angle Regression (aka LARS) is a model selection method for linear regression, for when you’re worried about overfitting or want your model to be easily interpretable.
To motivate it, let’s consider some other model selection methods:

• Forward selection starts with no variables in the model, and at each step it adds to the model the variable with the most explanatory power, stopping if the explanatory power falls below some threshold. This is a fast and simple method, but it can also be too greedy: we fully add variables at each step, so correlated predictors don’t get much of a chance to be included in the model. (For example, suppose we want to build a model for the deliciousness of a PB&J sandwich, and two of our variables are the amount of peanut butter and the amount of jelly. We’d like both variables to appear in our model, but since the amount of peanut butter is (let’s assume) strongly correlated with the amount of jelly, once we fully add peanut butter to our model, jelly doesn’t add much explanatory power anymore, and so it’s unlikely to be added.)
• Forward stagewise regression tries to remedy the greediness of forward selection by only partially adding variables. Whereas forward selection finds the variable with the most explanatory power and goes all out in adding it to the model, forward stagewise finds the variable with the most explanatory power and updates its weight by only epsilon in the correct direction. (So we might first increase the weight of peanut butter a little bit, then increase the weight of peanut butter again, then increase the weight of jelly, then increase the weight of bread, and then increase the weight of peanut butter once more.) The problem now is that we have to make a ton of updates, so forward stagewise can be very inefficient.

LARS, then, is essentially forward stagewise made fast. Instead of making tiny hops in the direction of one variable at a time, LARS makes optimally-sized leaps in optimal directions. These directions are chosen to make equal angles (equal correlations) with each of the variables currently in our model. (We like peanut butter best, so we start eating it first; as we eat more, we get a little sick of it, so jelly starts looking equally appetizing, and we start eating peanut butter and jelly simultaneously; later, we add bread to the mix, etc.)

In more detail, LARS works as follows:

• Assume for simplicity that we’ve standardized our explanatory variables to have zero mean and unit variance, and that our response variable also has zero mean.
• Start with no variables in your model.
• Find the variable $x_1$ most correlated with the residual. (Note that the variable most correlated with the residual is equivalently the one that makes the least angle with the residual, whence the name.)
• Move in the direction of this variable until some other variable $x_2$ is just as correlated.
• At this point, start moving in a direction such that the residual stays equally correlated with $x_1$ and $x_2$ (i.e., so that the residual makes equal angles with both variables), and keep moving until some variable $x_3$ becomes equally correlated with our residual.
• And so on, stopping when we’ve decided our model is big enough.

For example, consider the following image (slightly simplified from the original LARS paper; $x_1, x_2$ are our variables, and $y$ is our response). Our model starts at $\hat{\mu}_0$. The residual (the green line) makes a smaller angle with $x_1$ than with $x_2$, so we start moving in the direction of $x_1$. At $\hat{\mu}_1$, the residual now makes equal angles with $x_1, x_2$, and so we start moving in a new direction that preserves this equiangularity/equicorrelation.
• If there were more variables, we’d change directions again once a new variable made equal angles with our residual, and so on.
So when should you use LARS, as opposed to some other regularization method like lasso? There’s not really a clear-cut answer, but LARS tends to give very similar results as both lasso and forward stagewise (in fact, slight modifications to LARS give you lasso and forward stagewise), so I tend to just use lasso when I do these kinds of things, since the justifications for lasso make a little more sense to me. In fact, I don’t usually even think of LARS as a model selection method in its own right, but rather as a way to efficiently implement lasso (especially if you want to compute the full regularization path).
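If you want to see the full regularization path in practice, scikit-learn ships a LARS implementation; here is a brief sketch (my own example, on the library's bundled diabetes dataset rather than anything from the post):

```python
from sklearn.datasets import load_diabetes
from sklearn.linear_model import lars_path

X, y = load_diabetes(return_X_y=True)

# method='lasso' gives the lasso path via the LARS modification mentioned above;
# method='lar' gives plain least angle regression.
alphas, active, coefs = lars_path(X, y, method="lasso")

print(active)            # order in which variables entered the model
print(coefs.shape)       # (n_features, n_steps): coefficients along the path
```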
https://www.physicsforums.com/threads/a-x-b-mod-p-solvable.52709/
# A=x^b mod p, solvable?
Say you are given $a = x^b \bmod p$, where p, a, and b are known. Is there a way to solve for x? I am pretty sure there is . . . but it is driving me nuts.
-Chu
Try solving x^2 = 2 (mod 3).
Muzza said:
Try solving x^2 = 2 (mod 3).
Sorry, I'll rephrase. I know a solution must exist from the choices of p, b, and a (this is part of a crypto algorithm where they know x, I do not, and I am wondering if I have sufficient info to solve for it).
matt grime
obviously b must divide $\varphi(p)$, but that doesn't give a sufficient condition for a solution, or even tell you what it is.
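For what it's worth, when $\gcd(b, p-1) = 1$ the congruence is easy to solve by inverting the exponent modulo $p-1$ (the same trick RSA decryption uses); otherwise you can fall back to a search for small $p$. A short Python sketch (mine, not from the thread):

```python
from math import gcd

def bth_root_mod_p(a, b, p):
    """Solve x**b ≡ a (mod p) for prime p, if a solution exists."""
    if gcd(b, p - 1) == 1:
        # x -> x**b is a bijection mod p, so raise a to the inverse exponent.
        # (pow(b, -1, m) needs Python 3.8+.)
        return pow(a, pow(b, -1, p - 1), p)
    # General case: brute force -- fine for small p, hopeless at crypto sizes.
    for x in range(p):
        if pow(x, b, p) == a:
            return x
    return None

print(bth_root_mod_p(2, 3, 5))   # 3, since 3**3 = 27 ≡ 2 (mod 5)
```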
https://www.luogu.com.cn/problem/CF86D
# Powerful array
## Problem Summary
- You are given a sequence $a$ of length $n$ and $q$ queries; each query gives two numbers $l, r$.
- For each query, let $cnt_i$ be the number of times $i$ occurs among $a_l, a_{l+1}, \cdots, a_r$; you must compute $\displaystyle\sum_i cnt_i^2\cdot i$.
- $1\le n,q\le 2\times 10^5$, $1\le a_i\le 10^6$, $1\le l\le r\le n$.
## Problem Description
An array of positive integers $a_1, a_2, \ldots, a_n$ is given. Let us consider its arbitrary subarray $a_l, a_{l+1}, \ldots, a_r$, where $1 \le l \le r \le n$. For every positive integer $s$, denote by $K_s$ the number of occurrences of $s$ in the subarray. We call the power of the subarray the sum of products $K_s \cdot K_s \cdot s$ over every positive integer $s$. The sum contains only a finite number of nonzero summands, as the number of different values in the array is finite. You should calculate the power of $t$ given subarrays.
## Input/Output Format
### Input Format
The first line contains two integers $n$ and $t$ ($1 \le n, t \le 200000$) — the array length and the number of queries, respectively. The second line contains $n$ positive integers $a_i$ ($1 \le a_i \le 10^6$) — the elements of the array. The next $t$ lines each contain two positive integers $l$, $r$ ($1 \le l \le r \le n$) — the indices of the left and right ends of the corresponding subarray.
### Output Format
Output $t$ lines; the $i$-th line of the output should contain a single positive integer — the power of the $i$-th query subarray. Please do not use the %lld specifier to read or write 64-bit integers in C++. It is preferred to use the cout stream (you may also use %I64d).
## Samples
### Sample Input #1
3 2
1 2 1
1 2
1 3
### Sample Output #1
3
6
### Sample Input #2
8 3
1 1 2 2 1 3 1 1
2 7
1 6
2 7
### 输出样例 #2
20
20
20
## Notes
Consider the following array (see the second sample) and its $[2, 7]$ subarray. Then $K_1 = 3$, $K_2 = 2$, $K_3 = 1$, so the power is equal to $3^2\cdot 1 + 2^2\cdot 2 + 1^2\cdot 3 = 20$.
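The standard approach here is an offline technique such as Mo's algorithm, which reorders queries so the window $[l, r]$ only moves a little between consecutive queries; adding or removing one occurrence of a value $v$ with current count $c$ changes the power by $(2c+1)\cdot v$ or $-(2c-1)\cdot v$. Here is an illustrative Python sketch (a real submission at these constraints would need a faster language such as C++):

```python
from math import isqrt

def powerful_array(a, queries):
    """Answer (l, r) queries (1-indexed, inclusive) offline via Mo's algorithm."""
    n = len(a)
    block = max(1, isqrt(n))
    order = sorted(range(len(queries)),
                   key=lambda i: (queries[i][0] // block, queries[i][1]))
    cnt = [0] * (max(a) + 1)
    power = 0
    ans = [0] * len(queries)
    cur_l, cur_r = 1, 0                  # current window is empty

    def add(v):
        nonlocal power
        power += (2 * cnt[v] + 1) * v    # (c+1)^2*v - c^2*v = (2c+1)*v
        cnt[v] += 1

    def remove(v):
        nonlocal power
        cnt[v] -= 1
        power -= (2 * cnt[v] + 1) * v

    for qi in order:
        l, r = queries[qi]
        while cur_l > l:
            cur_l -= 1
            add(a[cur_l - 1])
        while cur_r < r:
            cur_r += 1
            add(a[cur_r - 1])
        while cur_l < l:
            remove(a[cur_l - 1])
            cur_l += 1
        while cur_r > r:
            remove(a[cur_r - 1])
            cur_r -= 1
        ans[qi] = power
    return ans

print(powerful_array([1, 1, 2, 2, 1, 3, 1, 1], [(2, 7), (1, 6), (2, 7)]))
# [20, 20, 20], matching the second sample
```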
http://math.stackexchange.com/questions/187388/finding-a-logarithmic-function-from-a-graph/187394
Finding a logarithmic function from a graph
Here's the graph. When I use the points $(-1,1)$ or $(-3,2)$ in the equation $a\log(-x-1)+k$, I can't find a finite value for $k$. Any ideas?
It should probably be $-x+1$ in the argument of $\log$. When $x=0$, you want to take $\log 1$, not $\log(-1)$. – Rahul Aug 27 '12 at 7:35
Appreciate your help but the vertical asymptote is x=1 so shouldn't the log argument be (x-1)? Since the graph is going away from 0, it would be (-x-1), no? – Mark Aug 27 '12 at 7:39
It looks like a mirror image of $\log(x)$ around $x=\frac 12$ with a vertical stretching, so that the equation could be: $$\alpha \log(1-x)$$ Since $\alpha \log(1-(-3))=2$, I would say that $\alpha=\frac 2{\log(4)}$, with the final: $$\frac {\log(1-x)}{\log(2)}$$ corresponding to the picture.
I understand how you got to alog(2) but I'm not so clear how you got to a = 2/(log(4)) since there is a k variable in the equation above. – Mark Aug 27 '12 at 7:45
@Mark: for $x=-3$ we observe $y=2$ so that replacing in $y=\alpha\log(1-x)$ I get $2=\alpha\log(4)=\alpha\log(2^2)=\alpha\ 2\log(2)$ and dividing by $2\log(2)$ we get $\alpha$. – Raymond Manzoni Aug 27 '12 at 7:50
@Mark : I didn't consider $a\cdot \log(-x-1)+k$ since it is clearly wrong ($\log(-x-1)$ should be $\log(1-x)$ and if $k$ is not a constraint but a parameter to find it is simply $0$). – Raymond Manzoni Aug 27 '12 at 7:54
Oh I see, so to find k you just make it zero in order to find a first. Then sub a along with another point to find k to find the log function altogether. – Mark Aug 27 '12 at 7:57
@Mark: if $y=a \log(1-x)+k$ then $k$ is the vertical value for $x=0$ but it is clearly $0$ on the picture so that $k=0$ (you may find it first). After that you consider the other value $x=-3$ getting $a=\frac 1{\log(2)}$ and finally you observe that it works too with $x=-1$. – Raymond Manzoni Aug 27 '12 at 8:01
You're solving for two parameters with two linear equations. Check it out: $$y_1 = a \log(-x_1+1) + k$$ $$y_2 = a \log(-x_2+1) + k$$ So solve for $a,k$ as though all other variables are constant: $$y_1-y_2 = a(\log(-x_1+1) - \log(-x_2+1)) = a \log \frac{-x_1+1}{-x_2+1},$$ and we find $a = \dfrac{y_1-y_2}{\log \frac{-x_1+1}{-x_2+1}}$. Plugging $a$ into either initial equation will yield $k$.
In our particular example, we can use $(x_1,y_1) = (-1,1)$ and $(x_2,y_2) = (-3,2)$ to find that $a = \dfrac{-1}{\log \frac{1}{2}} = \dfrac{1}{\log 2}$, and so we find $k$ from the fact that $$1 = \frac{\log2}{\log2}+k = 1+k,$$ and so $k = 0$.
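A quick CAS check of this answer (a sympy sketch of my own, not from the thread):

```python
from sympy import symbols, log, solve

a, k = symbols('a k')
# The two points read off the graph, in the model y = a*log(1 - x) + k:
eqs = [a * log(1 - (-1)) + k - 1,
       a * log(1 - (-3)) + k - 2]
print(solve(eqs, (a, k)))
# sympy may leave log(4) unexpanded; since log(4) - log(2) = log(2),
# the result simplifies to a = 1/log(2) and k = 0.
```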
https://casmusings.wordpress.com/tag/system-of-equations/
# Tag Archives: system of equations
## Binomial Expansion Variation
Several years ago, I posed on this ‘blog a problem I learned from Natalie Jackucyn:
For some integers A, B, and n, one term of the expansion of $(Ax+By)^n$ is $27869184x^5y^3$. What are the values of A, B, and n?
In this post, I reflect for a moment on what I’ve learned from the problem and outline a solution approach before sharing a clever alternative solution one of my students this year leveraged through her CAS-enabled investigation.
WHAT I LEARNED BEFORE THIS YEAR
Mostly, I’ve loved this problem for its “reversal” of traditional binomial expansion problems that typically give A, B, and n values and ask for either complete expansions or specific terms of the polynomial. Both of these traditional tasks are easily managed via today’s technology. In Natalie’s variation, neither the answer nor how you would proceed are immediately obvious.
The first great part of the problem is that it doesn’t seem to give enough information. Second, it requires solvers to understand deeply the process of polynomial expansion. Third, unlike traditional formulations, Natalie’s version doesn’t allow students to avoid deep thinking by using technology.
In the comments to my original post, Christopher Olah and a former student, Bryan Spellman, solved the problem via factoring and an Excel document, respectively. Given my algebraic tendencies, I hadn’t considered Bryan’s Excel “search” approach, but one could relatively easily program Excel to provide an exhaustive search. I now think of Bryan’s approach as a coding approach to a reasonably efficient search of the sample space of possible solutions. Most of my students’ solutions over the years essentially approach the problem the same way, but less efficiently, by using one-case-at-a-time expansions via CAS commands until they stumble upon good values for A, B, and n. Understandably, students taking this approach typically become the most frustrated.
Christopher’s approach paralleled my own. The x and y exponents from the expanded term show that n=5+3=8. Expanding a generic $(Ax+By)^8$ then gives a bit more information. From my TI-Nspire CAS,
so there are 56 ways an $x^5y^3$ term appears in this expansion before combining like terms (explained here, if needed). Dividing the original coefficient by 56 gives $a^5b^3=497,664$, the coefficient of $x^5y^3$.
The values of a and b are integers, so factoring $497{,}664 = 2^{11}\cdot 3^5$ shows both coefficients are built from 2s and 3s, but which ones? In essence, this defines a system of equations. The 3 has an exponent of 5, so it can easily be attributed to a, but the 2’s exponent, 11, is not a multiple of either 5 or 3, so its factors must be split between a and b. Quick experimentation with the exponents leads to $11=5\cdot 1+3\cdot 2$, so $2^1$ goes to a and $2^2$ goes to b. This results in $a=3\cdot 2=6$ and $b=2^2=4$.
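For comparison, a coded exhaustive search in the spirit of Bryan's approach (this Python sketch is mine, not his actual spreadsheet) finds the same values quickly:

```python
# Search for positive integers a, b with a^5 * b^3 = 27869184 / 56 = 497664.
target = 27869184 // 56
solutions = [(a, b)
             for a in range(1, 100)
             for b in range(1, 100)
             if a**5 * b**3 == target]
print(solutions)   # [(6, 4)]
```

Widening the search to negative integers also turns up $(a,b)=(-6,-4)$, since the odd powers keep the product $a^5b^3$ positive; that is one seed for the multiple-solution variations mentioned below.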
WHAT A STUDENT TAUGHT ME THIS YEAR
After my student, NB, arrived at $a^5b^3=497,664$ , she focused on roots–not factors–for her solution. The exponents of a and b suggested using either a cubed or a fifth root.
The fifth root would extract only the value of $a$ if $b$ had only singleton factors (essentially isolating the $a$ and $b$ values), while the cubed root would extract a combination of $a$ and $b$ factors, leaving only excess $a$ factors inside the radical. Her investigation was simplified by the exact answers from her Nspire CAS software.
From the fifth root output, the irrational term had exponent 1/5, not the expected 3/5, so $b$ must have had at least one prime factor with non-singleton multiplicity. But the cubed root played out perfectly. The exponent, 2/3, matched expectation, giving $a=6$, and the coefficient, 24, was the product of $a$ and $b$, making $b=4$. Clever.
EXTENSIONS & CONCLUSION
Admittedly, NB’s solution would have been complicated if the parameter was composed of something other than singleton prime factors, but it did present a fresh, alternative approach to what was becoming a comfortable problem for me. I’m curious about exploring other arrangements of the parameters of $(Ax+By)^n$ to see how NB’s root-based reasoning could be extended and how it would compare to the factor solutions I used before. I wonder which would be “easier” … whatever “easier” means.
As a ‘blog topic for another day, I’ve learned much by sharing this particular problem with several teachers over the years. In particular, the initial “not enough information” feel of the problem statement actually indicates the presence of some variations that lead to multiple solutions. If you think about it, NB’s root variation of the solution suggests some direct paths to such possible formulations. As intriguing as the possibilities here are, I’ve never assigned such a variation of the problem to my students.
As I finish this post, I’m questioning why I haven’t yet taken advantage of these possibilities. That will change. Until then, perhaps you can find some interesting or alternative approaches to the underlying systems of equations in this problem. Can you create a variation that has multiple solutions? Under what conditions would such a variation exist? How many distinct solutions could a problem like this have?
## From a Square to Ratios to a System of Equations
Here’s another ratio problem from @Five_Triangles, this time involving triangle areas bounded by a square.
Don’t read further until you’ve tried this for yourself. It’s a fun problem that, at least from my experience, doesn’t end up where or how I thought it would.
INITIAL THOUGHTS
I see two big challenges here.
First, the missing location of point P is especially interesting, but is also likely to be quite vexing for many students. This led me to the first twist I found in the problem: the introduction of multiple variables and a coordinate system. Without some problem-solving experience, I don’t see that as an intuitive step for most middle school students. Please don’t interpret this as a knock on this problem, I’m simply agreeing with @Five_Triangle’s assessment that this problem is likely to be challenging for middle school students.
The second challenge I found emerged from the introduction of the coordinate system: an underlying 2×2 system of equations. There are multiple ways to tackle a solution to a linear system, but this strikes me as yet another high hurdle for younger students.
Finally, I’m a bit surprised by my current brain block on multiple approaches for this problem. I suspect I’m blinded here by my algebraic bias in problem solving; surely there are approaches that don’t require this. I’d love to hear any other possibilities.
POINT P VARIES
Because I was given properties of point P and not its location, the easiest approach I could see was to position the square on the xy-plane with point B at the origin, $\overline{AB}$ along the y-axis, and $\overline{BC}$ along the x-axis. That gave my point P coordinates (x,y) for some unknown values of x & y.
The helpful part of this orientation is that the x & y coordinates of P are automatically the altitudes of $\Delta ABP$ and $\Delta BCP$, respectively. The altitudes of the other two triangles are determined through subtraction.
AREA RATIOS BECOME A LINEAR SYSTEM
From here, I used the given ratios to establish one equation in terms of x & y.
$\displaystyle \frac{\Delta ABP}{\Delta DAP} = \frac{\frac{1}{2}\cdot 12\cdot x}{\frac{1}{2}\cdot 12\cdot (12-y)} = \frac{3}{4}$
Of course, since all four triangles have the same base lengths, the given area ratios are arithmetically equivalent to corresponding height ratios. I used that to write a second equation.
$\displaystyle \frac{\Delta BCP}{\Delta CDP} = \frac{y}{12-x} = \frac{1}{3}$
Simplifying terms and clearing denominators leads to $4x=36-3y$ and $3y=12-x$, respectively.
A VERY INTERESTING insight at this point is that there is an infinite number of locations within the square at which each ratio is true. Specifically, the $\Delta ABP : \Delta DAP = 3:4$ ratio is true everywhere along the line 4x=36-3y. This problem constrains us to only the points within the square with vertices (0,0), (12,0), (12,12), and (0,12), but setting that aside, anywhere along the line 4x=36-3y would satisfy the first constraint. The same is true for the second line and constraint.
I think it would be very interesting for students to construct this on dynamic geometry software (e.g., GeoGebra or the TI-Nspire) and see the ratio remain constant everywhere along either line even though the triangle areas vary throughout.
Together, these lines form a 2×2 system of linear equations, and the solution to both ratios is the intersection point of the two lines. There are lots of ways to solve this; I wonder how a typical 6th grader would tackle it. Assuming they have the algebraic expertise, I’d have them work the system by hand and confirm with a CAS.
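For the record, the CAS confirmation is a one-liner in sympy (my sketch, not the TI-Nspire steps):

```python
from sympy import solve, symbols

x, y = symbols('x y')
# The two constraint lines: 4x = 36 - 3y and 3y = 12 - x.
print(solve([4*x - (36 - 3*y), 3*y - (12 - x)], (x, y)))
# {x: 8, y: 4/3}
```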
With $x=8$, the question’s answer is the area of $\Delta ABP = \frac{1}{2}\cdot 12\cdot x = 6\cdot 8 = 48$.
PROBLEM VARIATIONS
Just two extensions this time. Other suggestions are welcome.
1. What’s the ratio of the areas $\Delta BCP : \Delta DAP$ at the point P that satisfies both given ratios?
It’s not 1:4, as a student might errantly conclude by applying the transitive property to the given ratios. Can you show that it’s actually 1:8?
2. If a random point is chosen within the square, is that point more likely to satisfy the area ratio of $\Delta ABP : \Delta DAP$ or the ratio of $\Delta BCP : \Delta CDP$?
The first ratio is satisfied by the line 4x=36-3y which intersects the square on the segment between (9,0) and (0,12). At the latter point, both triangles are degenerate with area 0. The second ratio’s line intersects the square between (12,0) and (0,4). As the first segment is longer (how would a middle schooler prove that?), it is more likely that a randomly chosen point would satisfy the $\Delta ABP : \Delta DAP$ ratio. This would be a challenging probability problem, methinks.
FURTHER EXTENSIONS?
What other possibilities do you see either for a solution to the original problem or an extension?
## Chemistry, CAS, and Balancing Equations
Here’ s a cool application of linear equations I first encountered about 20 years ago working with chemistry colleague Penney Sconzo at my former school in Atlanta, GA. Many students struggle early in their first chemistry classes with balancing equations. Thinking about these as generalized systems of linear equations gives a universal approach to balancing chemical equations, including ionic equations.
This idea makes a brilliant connection if you teach algebra 2 students concurrently enrolled in chemistry, or vice versa.
FROM CHEMISTRY TO ALGEBRA
Consider burning ethanol. The chemical combination of ethanol and oxygen, creating carbon dioxide and water:
$C_2H_6O+3O_2 \longrightarrow 2CO_2+3H_2O$ (1)
But what if you didn’t know that 1 molecule of ethanol combined with 3 molecules of oxygen gas to create 2 molecules of carbon dioxide and 3 molecules of water? This specific set of coefficients (or multiples of the set) exists for this reaction because of the Law of Conservation of Matter. While elements may rearrange in a chemical reaction, they do not become something else. So how do you determine the unknown coefficients of a generic chemical reaction?
Using the ethanol example, assume you started with
$wC_2H_6O+xO_2 \longrightarrow yCO_2+zH_2O$ (2)
for some unknown values of w, x, y, and z. Conservation of Matter guarantees that the amount of carbon, hydrogen, and oxygen are the same before and after the reaction. Tallying the amount of each element on each side of the equation gives three linear equations:
Carbon: $2w=y$
Hydrogen: $6w=2z$
Oxygen: $w+2x=2y+z$
where the coefficients come from the subscripts within the compound notations. As one example, the carbon subscript in ethanol ( $C_2H_6O$ ) is 2, indicating two carbon atoms in each ethanol molecule. There must have been 2w carbon atoms in the w ethanol molecules.
This system of 3 equations in 4 variables won’t have a unique solution, but let’s see what my Nspire CAS says. (NOTE: On the TI-Nspire, you can solve for any one of the four variables. Because the presence of more variables than equations makes the solution non-unique, some results may appear cleaner than others. For me, w was more complicated than z, so I chose to use the z solution.)
All three equations have y in the numerator and denominators of 2. The presence of the y indicates the expected non-unique solution. But it also gives me the freedom to select any convenient value of y I want to use. I’ll pick $y=2$ to simplify the fractions. Plugging in gives me values for the other coefficients.
Substituting these into (2) above gives the original equation (1).
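If you’d rather let software do all of it at once, the element-balance equations form a homogeneous linear system whose null space is exactly the family of valid coefficient vectors. A sympy sketch (mine, not part of the original lesson):

```python
from sympy import Matrix, lcm

# Columns are (w, x, y, z) for w C2H6O + x O2 -> y CO2 + z H2O;
# each row says one element balances.
M = Matrix([
    [2, 0, -1,  0],   # carbon:   2w = y
    [6, 0,  0, -2],   # hydrogen: 6w = 2z
    [1, 2, -2, -1],   # oxygen:   w + 2x = 2y + z
])

v = M.nullspace()[0]                  # one-dimensional: a single free ratio
v = v * lcm([term.q for term in v])   # scale to the smallest integer vector
print(v.T)                            # Matrix([[1, 3, 2, 3]])
```

The same recipe handles the ionic example later in the post: just add one more row for conservation of charge.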
VARIABILITY EXISTS
Traditionally, chemists write these equations with the lowest possible natural number coefficients, but thinking of them as systems of linear equations makes another reality obvious. If 1 molecule of ethanol combines with 3 molecules of oxygen gas to make 2 molecules of carbon dioxide and 3 molecules of water, surely 10 molecules of ethanol combine with 30 molecules of oxygen gas to make 20 molecules of carbon dioxide and 30 molecules of water (the result of substituting $y=20$ instead of the $y=2$ used above).
You could even let $y=1$ to get $z=\frac{3}{2}$, $w=\frac{1}{2}$, and $x=\frac{3}{2}$. Shifting units, this could mean a half-mole of ethanol and 1.5 moles of oxygen make a mole of carbon dioxide and 1.5 moles of water. The point is, the ratios are constant. A good lesson.
ANOTHER QUICK EXAMPLE:
Now let’s try a harder one to balance: Reacting carbon monoxide and hydrogen gas to create octane and water.
$wCO + xH_2 \longrightarrow y C_8 H_{18} + z H_2 O$
Setting up equations for each element gives
Carbon: $w=8y$
Oxygen: $w=z$
Hydrogen: $2x=18y+2z$
I could simplify the hydrogen equation, but that’s not required. Solving this system of equations gives
Nice. No fractions this time. Using $y=1$ gives $w=8$, $x=17$, and $z=8$, or
$8CO + 17H_2 \longrightarrow C_8 H_{18} + 8H_2 O$
Simple.
EXTENSIONS TO IONIC EQUATIONS:
Now let’s balance an ionic equation with unknown coefficients a, b, c, d, e, and f:
$a Ba^{2+} + b OH^- + c H^+ + d PO_4^{3-} \longrightarrow eH_2O + fBa_3(PO_4)_2$
In addition to writing equations for barium, oxygen, hydrogen, and phosphorus, Conservation of Charge allows me to write one more equation to reflect the balancing of charge in the reaction.
Barium: $a = 3f$
Oxygen: $b +4d = e+8f$
Hydrogen: $b+c=2e$
Phosphorus: $d=2f$
CHARGE (+/-): $2a-b+c-3d=0$
Solving the system gives
Now that’s a curious result. I’ll deal with the zeros in a moment. Letting $d=2$ gives $f=1$ and $a=3$, indicating that 3 barium ions combine with 2 phosphate ions to create a single uncharged molecule of barium phosphate precipitate.
The zeros here indicate the presence of “spectator ions”. Basically, the hydroxide and hydrogen ions on the left appear in equal measure to the liquid water molecules on the right. Since they are in equal measure, one solution is
$3Ba^{2+}+6OH^- +6H^++2PO_4^{3-} \longrightarrow 6H_2O + Ba_3(PO_4)_2$
CONCLUSION:
You still need to understand chemistry and algebra to interpret the results, but combining the two (and especially using a CAS) makes it much easier to balance chemical and ionic chemical equations, particularly those with non-trivial solutions not easily found by inspection.
The minor connection between science (chemistry) and math (algebra) is nice.
As many others have noted, CAS enables you to keep your mind on the problem while avoiding getting lost in the algebra.
I’m teaching Algebra 2 this summer for my school. In a recent test on quadratic functions, I gave a question I thought would be a little different, but still reachable for those willing to make connections or exert a little creativity.
Write a system of quadratic functions that has exactly one solution: (1,1).
Their handheld graphing calculators were allowed. Some students definitely had difficulty with the challenge, some gave a version of the answer I expected, and one adopted a form I knew was possible, but doubted anyone would actually find during a test situation.
I show my students’ solutions below. But before you read on, can you give your own solution?
WHAT I EXPECTED
We’ve had many discussions in class about the power of the Rule of 4–that math ideas can be expressed numerically, graphically, algebraically, and verbally. When you get stumped in one representation, being able to shift to a different form is often helpful. That could mean a different algebraic representation, or a different Rule of 4 representation altogether.
The question is phrased verbally but asks for an algebraic answer about a solution to a system of equations. I hoped my students would recall that the graphical version of a system's solution is the point(s) where the graphs of the equations intersect. In my mind, the easiest way to do this is to write quadratic functions with coincident vertices, and this is most easily done in vertex form. The cleanest answer I ever got to this question was a pair of vertex-form parabolas sharing the vertex (1,1) but with different leading coefficients.
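Here is a representative pair of that form, checked with SymPy (not necessarily the student's exact equations):

from sympy import symbols, solve

x = symbols('x')
p1 = (x - 1)**2 + 1       # vertex (1, 1), leading coefficient 1
p2 = 2*(x - 1)**2 + 1     # vertex (1, 1), leading coefficient 2
print(solve(p1 - p2, x))  # [1] -- the graphs meet only at x = 1
print(p1.subs(x, 1))      # 1  -- so the lone solution is (1, 1)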
A graphical representation verifies the solution.
Another student recognized that if two parabolas shared a vertex, but had different “slopes”, their only possible point of intersection was exactly the one the question required. Here’s a graphical version of her answer.
From these two, you can see that there is actually an infinite number of correct solutions. And I was asking them for just one of these! 🙂
WHAT I KNEW, BUT DIDN’T EXPECT
Another way to solve this question makes use of the geometry of quadratic graphs. If two quadratics have the same leading coefficients, they are the same graph, intersect exactly once, or never intersect. This is a very non-trivial idea for most students. While I’m not convinced the author of the following solution had this in mind when he answered the question, his solution works because of that fact. Here’s what J wrote on last week’s test and its graph.
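To see why, subtract two quadratics with the same leading coefficient a: $(ax^2+b_1x+c_1)-(ax^2+b_2x+c_2)=(b_1-b_2)x+(c_1-c_2)$. The squared terms cancel, leaving a linear expression, so the graphs intersect exactly once when $b_1 \ne b_2$, never when $b_1=b_2$ and $c_1 \ne c_2$, and are identical otherwise.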
J used more equations than he needed, but had he restricted himself to just two equations, I’m not sure the lovely pattern would have been so obvious.
This is a very different (and super cool) answer than what I expected my students to produce. Lesson re-learned: Challenge your students, give them room to express creativity and individuality, and be prepared to be amazed by them.
NEXT STEPS
J’s answer actually opens the door to other avenues of exploration.
1. Can you generalize the form of all of J’s equations, essentially defining a family of quadratics? Can you prove that all members of your generalization satisfy the question posed and that no other answers are possible?
2. Can you find forms of other generalized families of quadratic functions whose only solution is (1,1)?
3. Notice that there were two types of solutions above: A) those with coincident vertices and different lead coefficients and B) those with identical lead coefficients and different vertices. Are these the only types of quadratics that can answer this question? That is, is there a system of quadratics with (1,1) as the only solution that have identical vertices and lead coefficients? Could both be different and (1,1) be the only solution?
4. If I relax the requirement that the quadratics be functions, what other types of quadratics are possible? [This could be a very nice calculus question!]
For my part, I'm returning to some of these questions this week to stretch and explore my students' creativity and problem-solving.
I’d love to hear what you or your students discover.
## Circle and Square
Here’s another great geometry + algebra problem, posed by Megan Schmidt and pitched by Justin Aion to some students in his Geometry class.
Following is the problem as Justin posed it yesterday.
Justin described the efforts of three of his students on his 'blog. Following is my more generalized approach. Don't read further if you want to solve this problem for yourself!
My first instinct in any case like this is to build it in a dynamic geometry package and play. Using my TI-Nspire, without loss of generality, I graphed a circle centered at the origin, constructed a tangent segment at the bottom of the circle centered on the y-axis, and then used that segment to construct a square. I recognized that the locus of the upper right corners of all such squares would form a line.
That made it clear to me that for any circle, there was a unique square that intersected the circle three times as Megan had posed.
Seeing this, and revealing my algebraic bias, I conceived an algebraic solution. Assuming the radius of the circle is R, the equation of my circle is $x^2+y^2=R^2$, making the lower y-intercept of the circle $(0,-R)$. That made $y=2x-R$ the locus line containing the upper right corner of the square.
To find generic coordinates of the upper right corner of the square in terms of R, I just needed to solve the system of equations containing the circle and the line. That’s easy enough to compute by hand if you can handle quadratic algebra. That manipulation is not relevant right now, so my Nspire CAS’s version is:
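For anyone following along without an Nspire, an equivalent solve in SymPy:

from sympy import symbols, solve

x, y = symbols('x y')
R = symbols('R', positive=True)
print(solve([x**2 + y**2 - R**2, y - (2*x - R)], [x, y]))
# [(0, -R), (4*R/5, 3*R/5)]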
The output confirms the two intersections are $(0,-R)$ and the unknown at $\displaystyle \left( \frac{4R}{5} , \frac{3R}{5} \right)$.
Because of the horizontal symmetry of the square with respect to the y-axis, the system solution shows that the generic length of the side of the square is $\displaystyle 2\left( \frac{4R}{5} \right) = \frac{8R}{5}$. The circle’s y-intercept at $(0,-R)$ means the generic diameter of the circle is $2R$.
Therefore, the generic ratio of the circle’s diameter to the square’s side length is
$\displaystyle \frac{diameter}{side} = \frac{2R}{(8R)/5} = \frac{5}{4}$.
And this is independent of the circle’s radius! The diameter of the circle is always $\frac{5}{4}$ of the square’s side.
CONCLUSION:
For Megan’s particular case with a side length of 20, that gives a circle diameter of 25, confirming Justin’s students’ solution.
Does anyone have a different approach? I’m happy to compile and share all I get.
AN ASIDE:
While not necessary for the generalized solution, it was fun to see a 3-4-5 right triangle randomly appear in Quadrant 1.
## Systems of lines
Here’s an interesting variation of a typical (MS) problem I found by following the Five Triangles ‘blog: http://fivetriangles.blogspot.com/2013/09/97-no-triangle.html .
(Note: If you sign up on this or other ‘blogs, you can get lots of problems emailed to you every time they are added.)
INITIAL SOLUTION
I know this question can absolutely be solved without using technology, but when a colleague asked if it was appropriate to use technology here (my school is one-to-one with tablet laptops), I thought it would be cool to share with her the ease and power of Desmos. You can enter the equations from the problem exactly as given (no need to solve for y), or you can set up a graph in advance for your students and email them a direct link to an already-started problem.
If you follow this link, you can see how I used a slider (a crazy-simple addition on Desmos) to help students discover the missing value of a.
FOLLOW-UP
I suggest in this case that playing with this problem graphically would grant insight for many students into the critical role (for this problem) of the intersection point of the two explicitly defined lines. With or without technology support, you could then lead your students to determine the coordinates of that intersection point and thereby the value of a.
Keeping with my CAS theme, you could determine those coordinates using GeoGebra’s brand new CAS View:
Substituting the now known values of x and y into the last equation in the problem gives the desired value of a.
NOTE: I could have done the sliders in GeoGebra, too, but I wanted to show off the ease of my two favorite (and free!) online math tools.
CONCLUSION
Thoughts? What other ideas or problems could be enhanced by a properly balanced use of technology?
As an extension to this particular problem, I'm now wondering about the area of the triangle formed for any value of a. I haven't played with it yet, but it looks potentially interesting. I see both tech and non-tech ways to approach it.
## Non-Calculus approach to Invariable Calculus Project
I shared my posts (here and here) on the Invariable Calculus Project in the AP Calculus Community. Gary Litvin posted a response within the Community offering a great non-calculus alternative solution to the original problem of the area of the triangle formed by the x- and y-axes and any Quadrant I tangent line to $\displaystyle y=\frac{1}{x}$. Here's a paraphrase of Gary's approach.
Let $\displaystyle \frac{x}{a} + \frac{y}{b}=1$ be any Quadrant I tangent line to $\displaystyle y=\frac{1}{x}$. (In case you don't recognize it, this tangent equation uses the intercept form of a line--a is the x-intercept and b is the y-intercept.) Because the tangent line intersects the curve in a single point, we can find that point by solving the system of equations defined by the two equations. Substituting for y gives
$\displaystyle \frac{x}{a} + \frac{\frac{1}{x}}{b}=1$.
Multiplying both sides by $ax$ and rearranging shows this is equivalent to $x^2 - a \cdot x+\frac{a}{b}=0$, a quadratic. We could determine the value of x using the quadratic formula. Because there is only one solution to this equation (there is only one point of intersection, the point of tangency), the discriminant must be zero. That means
$\displaystyle (-a)^2 - 4\cdot \left( \frac{a}{b} \right)=0$
which can be rearranged to give $ab=4$ ($a=0$ is extraneous). Therefore, the area of the triangle formed by the tangent line to $\displaystyle y=\frac{1}{x}$ and the coordinate axes is $\displaystyle Area=\frac{1}{2} ab=2$ no matter what the point of tangency.
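For anyone who wants to verify the discriminant step with a CAS, a one-line SymPy sketch:

from sympy import symbols, solve, discriminant

x = symbols('x')
a, b = symbols('a b', positive=True)
print(solve(discriminant(x**2 - a*x + a/b, x), a))   # [4/b], i.e. a*b = 4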
Shiny.
https://math.stackexchange.com/questions/1329128/how-to-sort-vertices-of-a-polygon-in-counter-clockwise-order-computing-angle
# “How to sort vertices of a polygon in counter clockwise order?”: Computing Angle?
my question relates to the answer to the following question: How to sort vertices of a polygon in counter clockwise order?
I don't have a strong background in linear algebra... I don't understand this statement:
Then you can compute the angle of each vertex to the center point, and sort according to the computed angle
Is this a polar angle? What function could I use in R to compute this?
The function $\operatorname{atan2}$ is defined in R so just use it. This is indeed a polar angle, with a few subtleties.
Choose a "central point" $(\hat x, \hat y)$, and for each point $(x,y)$ calculate its angle as $\operatorname{atan2}(y-\hat y,\, x-\hat x)$ (note the order: atan2 takes the y-offset first, in R as in most languages). Sort the points using those angles as keys, and there is your polygon.
Note that if the polygon is not convex you may get a different answer if you choose a different "central point". Also make sure the point really is "inside" the polygon. Choosing the average values of $x$ and of $y$ should work well enough.
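The same recipe in Python, for comparison (math.atan2 takes its arguments in the same (y, x) order as R's atan2); a sketch assuming a convex polygon:

import math

def sort_ccw(points):
    # Use the centroid as the "central point", as suggested above.
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    # Angles increase counter-clockwise from the positive x-axis.
    return sorted(points, key=lambda p: math.atan2(p[1] - cy, p[0] - cx))

print(sort_ccw([(1, 0), (0, 1), (-1, 0), (0, -1)]))
# [(0, -1), (1, 0), (0, 1), (-1, 0)]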
https://shintakezou.blogspot.com/2018/03/sathers-iterators.html
## 2018-03-18
### Sather's iterators
Sather's way of doing loops is interesting: you have iterators which yield, giving control back to the caller, very much like coroutines do.
When an iterator is called, it executes the statements in its body in order. If it executes a yield statement, control returns to the caller. In this, the iterator is similar to a coroutine whose state remains persistent over multiple calls. Subsequent calls on the iterator resume execution with the statement following the yield statement.
Examples:
i ::= 0;
loop while!(i < 5);
#OUT + i + "\n";
i := i + 1;
end;
This one looks like nothing more than cumbersome, Sather-specific syntax for a while-loop, but the following example shows that something different from the usual is going on:
sum ::= 0;
loop
sum := sum + 1.upto!(10);
end;
If you want to repeat a block several times:
loop 7.times!;
#OUT + "baby cries 'No'\n";
end;
And so on.
Building your own iterator is easy and using it can make your loop expressive. Let us suppose I want to write the first 20 Fibonacci numbers; I want something like this:
loop
#OUT + fibonacci!(20) + "\n";
end;
A possible code for the iterator is:
fibonacci!(once top:INT):INT is
n_prev ::= 0;
n_cur ::= 1;
cnt ::= 0;
loop while!(cnt < top);
yield n_cur;
tmp ::= n_cur;
n_cur := n_cur + n_prev;
n_prev := tmp;
cnt := cnt + 1;
end;
end;
The output is:
1
1
2
3
5
8
13
21
34
55
89
144
233
377
610
987
1597
2584
4181
6765
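Readers coming from Python will recognize the pattern: a generator suspends at each yield and resumes right after it on the next call. A rough Python analogue of the iterator above:

def fibonacci(top):
    n_prev, n_cur = 0, 1
    for _ in range(top):
        yield n_cur                           # suspend here, resume on the next call
        n_prev, n_cur = n_cur, n_prev + n_cur

for n in fibonacci(20):
    print(n)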
With built-in iterators you can compute the dot product of two vectors (arrays), a and b, like this:
r:ARRAY{FLT};
loop
r := r.append(|a.elt! * b.elt!|);
end;
Or, more like you would do in other languages:
r ::= #ARRAY{FLT}(a.size);
loop i ::= 0.upto!(a.size-1);
r[i] := a[i] * b[i];
end;
But what if one of a or b is shorter than the other? Because of how the example was written, the problem arises when b has fewer elements than a. In the implementation with the iterators there isn't any problem, because the first iterator to quit ends the loop.
A fix makes the code look uglier:
r ::= #ARRAY{FLT}(a.size.min(b.size));
loop i ::= 0.upto!(a.size.min(b.size)-1);
r[i] := a[i] * b[i];
end;
A reminder: if there's an idiomatic way of saying something, do not imitate how you say it in other languages.
## Coroutines
Given the way Sather iterators work, I think it would be easy to implement something like the example I built in the post Crumbs of coroutines (and others).
## Twisted minds
Sather has the iterator separate!, which can be used to interleave a separator between the elements of an array, like this (example from the tutorial):
a ::= |1,2,3|;
loop
#OUT + ", ".separate!(a.elt!.str);
end;
It is funny to note that Python does something similar with join:
print(", ".join([str(x) for x in [1, 2, 3]]))
If we want just the string:
s = ", ".join([str(x) for x in [1,2,3]])
In Sather it looks like:
s:FSTR;
larr:ARRAY{INT} := |1,2,3|;
loop s := s + ", ".separate!(larr.elt!.str); end;
Unfortunately we can't write |1,2,3|.elt!.str; we have to write it as I did above, or inline #ARRAY{INT}(|1,2,3|).elt!.str.
https://rpg.meta.stackexchange.com/questions/2742/how-are-comments-removed-when-they-are-no-longer-relevant?noredirect=1
How are comments removed when they are no longer relevant?
I see that a comment can be flagged, but I take that as a function for comments that are offensive in some way. In cases where a comment is no longer relevant because the question/answer has morphed into something else, what is done to weed out comments that are no longer useful?
As an example: Crime Investigation in a fantasy world
Flag the comment as Obsolete (that's an option under the flag) or if all the comments on a post are obsolete, flag the post for moderator attention. We'll clean it up.
https://www.physicsforums.com/threads/rotation-angle.187735/
# Rotation angle
1. Sep 28, 2007
### ehrenfest
1. The problem statement, all variables and given/known data
My book says that $$\epsilon \vec{n} \times \vec{q}$$ represents a small rotation about q.
n is a normal (unit) vector.
Obviously the cross product is orthogonal to both n and q, but I don't see how it corresponds to an angle.
By the way, how do you make a cross product in TeX?
2. Relevant equations
3. The attempt at a solution
Last edited: Sep 28, 2007
2. Sep 28, 2007
### mjsd
Cross product in TeX? \times will be fine.
The epsilon corresponds to the small amplitude of your infinitesimal rotation, I think.
3. Sep 28, 2007
### ehrenfest
But why does this represent an angle?
4. Sep 28, 2007
### Dick
It doesn't represent an angle. It represents an infinitesimal displacement. A small displacement from a vector q rotated around an axis along n should be perpendicular to both. Hence nxq.
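For reference, this is what drops out of the finite rotation (Rodrigues) formula: rotating $\vec{q}$ by an angle $\epsilon$ about a unit axis $\vec{n}$ gives $$\vec{q}\,' = \vec{q} \cos\epsilon + (\vec{n} \times \vec{q}) \sin\epsilon + \vec{n} (\vec{n} \cdot \vec{q})(1-\cos\epsilon),$$ and keeping only terms of first order in small $\epsilon$ leaves $\vec{q}\,' \approx \vec{q} + \epsilon \, \vec{n} \times \vec{q}$. So $\epsilon \vec{n} \times \vec{q}$ is exactly the small displacement described above.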
https://physics.stackexchange.com/questions/216769/does-sum-over-all-paths-in-the-path-integral-imply-sum-over-all-paths-in-mom
# Does "sum over all paths" in the path integral imply "sum over all paths" in momentum space when one Fourier-transforms?
How is the Fourier-transformed-field path integral interpreted? Is it still a "sum of all paths" in momentum space? Just that with another action?
Consider for instance the (Euclidean) partition function
$$Z=\int \mathcal{D}[\phi, \bar \phi] \mathrm{e}^{ -S[\phi,\bar \phi]}$$ of, say, a complex scalar field $\phi$. If the action is --for a sensible polynomial $P$ in the fields-- $$S[\phi,\bar\phi]=\int(\bar\phi(-\nabla^2+m^2) \phi+P(\phi,\bar\phi))\,d^nx,$$ one gets $$S[\tilde\phi,\overline{\tilde \phi}]= \int(\overline{\tilde{\phi}}(p^2+m^2) \tilde\phi+P(\tilde\phi,\overline{\tilde\phi}))\frac{d^np}{(2\pi)^n},$$ where $\tilde\phi$ is the Fourier transform of $\phi$. But changing $\mathcal{D}[\phi, \bar \phi]$ to $\mathcal{D}[\tilde\phi, \overline {\tilde\phi}]$ gives at most a constant factor, and we are left with
$$Z=(\mathrm{constant}) \int \mathcal{D}[\tilde\phi, \overline{\tilde \phi}] \exp\left({-\int(\overline{\tilde{\phi}}(p^2+m^2) \tilde\phi+P(\tilde\phi,\overline{\tilde\phi}))\frac{d^np}{(2\pi)^n}}\right)$$
I see now that your question is about the interpretation. Well, the interpretation is that you now integrate over the space of all fields in momentum space. Of course, mathematically the region of integration is still the space of functions $\mathbb{R}^4\to\mathbb{R}$ (or whatever kind of field applies) and so the meaning of $\mathcal{D}\phi$ is more or less the same, but in physics we usually don't think in those terms. We would say that the action is the same, as long as it is implied that one expression is to be used with position space fields and the other is to be used with momentum space fields.
To emphasize, while the two actions you wrote are indeed different as functionals, physically we think of them as the same, because there is a one-to-one correspondence between fields and their Fourier transforms. I don't know if there's much more to say about the issue; you just did a change of variables, like in a regular integral. This can actually be pretty useful; the partition function must be invariant when doing change of variables, since after all it is an integral. Doing a change of variables can be used to prove gauge invariance, for example.
I think there's something wrong in your understanding, because writing $Z[\phi,\bar{\phi}]$ doesn't make sense. $\phi$ is the variable of integration; once you've done the integral, the result doesn't depend on $\phi$ anymore. Usually we calculate $Z[J]$ with $J$ some kind of classical current; such a dependence could come from a $J\phi$ term in the action. It's the action that is a functional of the fields, not $Z$.
This is important, because if you want to use Fourier-transformed fields you don't need to change variables. Your integral is
$$\int \mathcal{D}\phi \mathcal{D} \bar{\phi} e^{-S[\phi,\bar{\phi}]}$$
Here it's irrelevant whether you write $S$ as an integral over $x$ or over $p$; each field configuration $\phi(x)$ has a corresponding Fourier transform $\phi(p)$, and you can calculate the action with any of them.
• I completely agree with what you wrote, but I might add (and this is just me being very pedantic) that in general when you do a field redefinition the path integral will change in the sense that you will pick up a jacobian factor from the measure. It shouldn't matter in the Fourier transform case since the field redefinition is linear in the fields. Your answer is great, I just wanted to add that in case someone wanted to generalize what you wrote to a more complicated case. Nov 5, 2015 at 20:27
• @Andrew: I thought about including that (though I wasn't sure how to find the Jacobian; I like your insight that it doesn't matter since the transformation is linear), but I decided not to do it since I didn't think that was really what the question was about. Nov 5, 2015 at 20:31
• That is fair enough. I don't actually think you should edit your answer (definitely a +1 from me for clarity); it's mostly that I was feeling pedantic, so I thought it might be worth a comment. Incidentally, I suspect the determinant is actually one if you normalize things properly--if you discretize everything (which the path integral implicitly tells you to do) then the Fourier transform is a unitary transformation on the fields (although it's a moot point since the determinant is definitely field independent). Nov 5, 2015 at 20:40
• @Javier The Jacobian you can find by writing $\phi(x)$ as the Fourier transform of $\tilde\phi(p)$ and differentiating functionally w.r.t., say, $\tilde\phi(q)$. You get only $(2\pi)$-factors times a phase factor which cancels out when you take the determinant. Then you get that the Jacobian is a constant, as I wrote.
– c.p.
Nov 6, 2015 at 13:40
• To your reply to the question in the answer: indeed, I messed up the notation. Still, what you wrote doesn't address the interpretation.
– c.p.
Nov 6, 2015 at 13:42
I see the other answers are quite well placed, but I still think it might help to go a step back and look at momentum-space path integrals from classical paths. Here, read this paper: https://arxiv.org/ftp/quant-ph/papers/0403/0403005.pdf
https://homework.cpm.org/category/CCI_CT/textbook/apcalc/chapter/1/lesson/1.2.3/problem/1-53
### Home > APCALC > Chapter 1 > Lesson 1.2.3 > Problem1-53
1-53.
Convert the following domain and range from interval notation to set notation. Then sketch a possible function with the given domain and range.
• $D=\left(−∞,2\right)\cup\left(2,∞\right)$ and $R=\left(−∞,-1\right)\cup\left(-1,∞\right)$
Domain and Range Notation
There are two accepted forms of notation for domain and range. Examine the graph at right. The domain and range of the function shown can be noted in either interval notation or in set notation.
Interval Notation: $D = [0, 1) \cup (1, ∞)$
$R = [1, ∞)$
Set Notation: $D = \{x: x ≥ 0 \text{ and } x ≠ 1\}$ or $D = \{x: 0 ≤ x < 1 \text{ or } x > 1\}$
$R = \{y: y ≥ 1\}$
The range indicates that this graph never reaches the height of $y = −1$.
The graph of this function could have a hole, jump, or vertical asymptote at $x = 2$.
https://www.physicsforums.com/threads/partial-derivatives-transformation.793195/
# Partial derivatives transformation
1. Jan 19, 2015
### nmsurobert
1. The problem statement, all variables and given/known data
2. Relevant equations
included in the first picture
3. The attempt at a solution
I feel confident in my answer to part "a"; I pretty much just did what the u and v example at the top of the page did. But for part "b" I tried to distribute and collect like terms and what not, and I ended up with a huge mess. So can anyone tell me if I'm on the right track, and whether the end product should look like a huge mess of terms?
thanks!
2. Jan 20, 2015
### RUber
$\frac{\partial^2}{\partial t^2} \neq \left(\frac{\partial}{\partial t}\right)^2$
You have to apply the derivative twice.
3. Jan 23, 2015
### nmsurobert
So I figured out a solution, and I thought I would add it here in case someone googling for information stumbles upon this thread.
I get from the second step to the third step (where I am left only with derivatives and a $c^2$) using the definition of gamma given to me and some rigorous algebra.
It took me a solid two days to figure this out.
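Assuming the exercise is the standard one of showing that the 1-D wave operator is invariant under a Lorentz transformation (the gamma and $c^2$ mentioned above fit that), here is a SymPy sketch of one such check:

import sympy as sp

x, t, v, c = sp.symbols('x t v c', positive=True)
gamma = 1 / sp.sqrt(1 - v**2 / c**2)
xp = gamma * (x - v * t)           # Lorentz-transformed position
tp = gamma * (t - v * x / c**2)    # Lorentz-transformed time

# A wave travelling in the primed frame must also satisfy the unprimed
# wave equation if the wave operator is invariant.
h = sp.Function('h')
u = h(xp - c * tp)
print(sp.simplify(sp.diff(u, x, 2) - sp.diff(u, t, 2) / c**2))   # 0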
http://www.sitepoint.com/forums/showthread.php?323879-How-to-replace-slashes&goto=nextnewest
# Thread: Searching for code, matching it, then echoing results.
1. ## Searching for code, matching it, then echoing results.
I don't even know where to begin with this on my own, so I'm looking to YOU for some guidance.
Basically I have a txt file that has a few words in it, one word per line. I need a script that will look through that txt file, find the words and then see if there are any other matches for that word in the file. Once that has completed I want it to show what words it searched for and how many times those words were found.
I tried using preg_match_all, but I can't figure out how to get the results I need. Can anyone help?
2. How big are these files? It may be better to read the file into an associative array like
$array[$word] = $count;
Then you can easily get all of the words (array_keys) and the count $array[$word];
3. The files are not going to be very big at all, would you still recommend doing that?
4. Yep. I like arrays a lot. My only concern was if the file gets too big, you're holding a lot in memory.
5. So, pardon my noobness, but how do I go about this? I still can't figure out how to read the file, find a word, find the matches, then say how many matches were found. My main problem is that the words aren't predefined; the script has to find them on its own. Is this even possible?
6. Here's some modified code I use for something similar:
PHP Code:
$export = "\path\to\file\customer.txt";
if (!is_readable($export)) {
    echo('Error connecting to the file ("' . $export . '").');
    exit();
}
$store_array = array();
$handle = @fopen($export, "r");
while (!feof($handle)) {
    $buffer = fgets($handle, 4096);
    $word = ereg_replace("(\n|\r)", "", $buffer); // removes partial line breaks
    if (!isset($store_array[$word]))
        $store_array[$word] = 1;
    else
        $store_array[$word]++;
}
fclose($handle);
I didn't test this code, but it's pretty similar to what I use. Now you can see if a word was found using (isset($store_array[$your_word])) and even see how many times it was found ($store_array[$your_word]).
P.S. The code's a little dirty, but maybe another user will clean it up for you
7. PHP Code:
function array_repeated($array) {
    if (!is_array($array)) {
        return false;
    }
    $duplicates = array();
    foreach ($array as $key => $val) {
        // Set the internal pointer of the array to its last element
        end($array);
        // Fetch a key from an associative array
        $k = key($array);
        // Return the current element in an array
        $v = current($array);
        while ($k !== $key) {
            if ($v === $val) {
                $duplicates[$key] = $v;
            }
            // Rewind the internal array pointer
            $v = prev($array);
            $k = key($array);
        }
    }
    return $duplicates;
}
// Load file into array
$list = file('x.txt');
// Convert every line to lowercase
$list = array_map("strtolower", $list);
print_r(array_repeated($list));
echo count(array_repeated($list));
8. So you're just looking for values which were repeated?
PHP Code:
<?php
function moreThanOne($val) {
    return ($val > 1) ? true : false;
}
$file = file_get_contents('yourfile.txt');
$words = explode("\n", $file);
$repeats = array_count_values($words);
$repeats = array_filter($repeats, 'moreThanOne');
var_dump($repeats);
Or, if you like it concise:
PHP Code:
<?php
function moreThanOne($val) {
    return ($val > 1) ? true : false;
}
$repeats = array_filter(array_count_values(explode("\n", file_get_contents('yourfile.txt'))), 'moreThanOne');
var_dump($repeats);
https://math.stackexchange.com/questions/363453/notation-what-is-the-scope-of-a-sum
# Notation: What is the scope of a sum?
I would interpret $\sum_{i=1}^2 x_i + y$ as $x_1 + x_2 + y$, but I would interpret $\sum_{i=1}^2 x_i + y_i$ as $x_1 + y_1 + x_2 + y_2$. I realize this is a little inconsistent. Should the latter be written as $\sum_{i=1}^2 (x_i + y_i)$?
Or, in other words, does the sum operator have precedence over + and - ?
It should be written with parentheses to avoid ambiguity, yes. If you think about the "sum" symbol as a function, it makes sense:
$$\sum(\cdot)$$
This is a function which takes a list $\{x_1,x_2,\ldots\}$ of numbers (or other mathematical objects which you might want to add), and adds them in order. This list could be finite or infinite: the "sum" function figures out how long the list is, and adjusts its indexing accordingly (i.e. if there are 10 things in the list, your index will go from $1$ to $10$).
If you want to add a sequence which is itself the addition of two sequences, like your example of $\{x_1+y_1,x_2+y_2\}$, you'll need to drop the whole sequence into the function:
$$\sum(\{x_1+y_1,x_2+y_2\})=\sum_{i=1}^2(x_i+y_i)$$
For finite sums of numbers, we always have the property that
$$\sum(\{x_1,x_2,\ldots\}+\{y_1,y_2,\ldots\})=\sum(\{x_1,x_2,\ldots\})+\sum(\{y_1,y_2,\ldots\})$$ however the following would have a different interpretation:
$$\sum(\{x_1,x_2,\ldots\})+\{y_1,y_2,\ldots\}$$
Hence the parentheses!
Interesting, yet more advanced side note: "breaking up" a sum doesn't always work if the sequences are infinite!
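A CAS makes the scoping explicit, because its syntax forces a choice. A quick SymPy illustration of the two readings:

from sympy import Sum, IndexedBase, symbols

i, y = symbols('i y')
x = IndexedBase('x')
print(Sum(x[i] + y, (i, 1, 2)).doit())   # x[1] + x[2] + 2*y  (y gets summed too)
print(Sum(x[i], (i, 1, 2)).doit() + y)   # x[1] + x[2] + y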
• Thanks! Does this mean that if I wrote $\sum_{i=1}^2 x_i + y_i$, you would read it as $x_1 + x_2 + y_i$? Or would you have understood that I meant $x_1 + y_1 + x_2 + y_2$? Or would you just have thought "Geez, what sloppy notation!" :) Apr 16 '13 at 15:41
• Most people would understand what is meant, but yes we would also think it was a bit sloppy =) I see $\int f+g$ all the time as well, similar confusion. Apr 16 '13 at 16:14
• Correct, but sometimes people are lazy and don't put the 'dx'. Apr 16 '13 at 17:06
• However, $\Sigma^2_{i=1} x_i + y$ would be $x_1 + x_2 + 2y$. If that is your intention, it would be preferable to write $\Sigma^2_{i=1} (x_i + y)$. Otherwise, one would write $y + \Sigma^2_{i=1} x_i$, which would be unambiguously taken to mean $x_1 + x_2 + y$ . Apr 16 '13 at 17:19
• Agreed. So $\sum_{i=1}^2 x_i + y$ would be considered somewhat ambiguous, or would most people take it to mean $y+\sum_{i=1}^2 x_i$ without hesitation? Apr 16 '13 at 18:09
http://annals.math.princeton.edu/2012/175-1/p11
# Submultiplicativity and the Hanna Neumann Conjecture
### Abstract
In this article, we define submultiplicativity of $\ell^2$-numbers in the category of $\Gamma$-complexes over a given $\Gamma$-complex $\hat{X}$, which generalizes the statement of the Strengthened Hanna Neumann Conjecture (SHNC). In the case when $\Gamma$ is a left-orderable group and $\hat{X}$ is a free $\Gamma$-complex, we prove submultiplicativity for the subcategory consisting of $\Gamma$-ordered leafages over $\hat{X}$ with an additional analytic assumption called the deep-fall property. We show that the deep-fall property is satisfied for graphs. This implies SHNC.
## Authors
Igor Mineyev
Department of Mathematics, University of Illinois at Urbana-Champaign, 1409 West Green Street, Urbana, IL 61801
https://groups.google.com/g/sage-devel/c/LRZULwy5KYI
# a problem in the new permutation groups code (and a solution ?)
### Dima Pasechnik
Mar 21, 2013, 10:10:38 AM
While working on http://trac.sagemath.org/sage_trac/ticket/14291, it
came to my attention that one can now have permutation groups acting
on quite arbitrary domains (the only requirement on the domain elements
seems to be that they are hashable).
This leads to the following kind of confusing situation:
suppose our permutation group G acts on, say, (1,2,3,4,(1,2),(2,3)).
Then things like "the orbit of (1,2) under G" can be interpreted in two
different, incompatible ways:
* the images under G of the pair of domain elements 1 and 2;
* the images under G of the domain element (1,2).
I can see two ways to remedy this:
1) a framework with parents, etc
2) "boxing" the most "primitive" elements of the domain, i.e.
as in our example, using ((1),(2),(3),(4),(1,2),(2,3)) instead of
(1,2,3,4,(1,2),(2,3)); then certainly ((1),(2)) and (1,2) are
different things, problem solved.
(and certainly you can tell me that actually it's OK as it is... :))
IMHO, 2) is relatively easy to put into place, and 1) is tricky and quite a bit of
work.
Dima
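To make remedy 2) concrete, here is a tiny Python sketch of the "boxing" idea (purely illustrative, using 1-tuples as the boxes):

domain = (1, 2, 3, 4, (1, 2), (2, 3))
# box each primitive point in a 1-tuple; composite entries stay as they are
boxed = tuple(x if isinstance(x, tuple) else (x,) for x in domain)
# boxed == ((1,), (2,), (3,), (4,), (1, 2), (2, 3))
# now the pair of domain elements ((1,), (2,)) and the single domain
# element (1, 2) can never be confused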
### Benjamin Jones
Mar 21, 2013, 12:05:12 PM
It seems to me that the ambiguity arises from the original statement, "the orbit (1,2) under G", not the fact that the domain is non-homogeneous. It's less ambiguous to say directly G.{1, 2} (the orbit of the _element_ {1, 2}) versus G.1 \union G.2 (the orbit of the subset {1, 2}). Then, which group action you are talking about is clear.
For the API, it seems best (to me) to have the standard orbit function (which takes an element of the domain and returns its orbit) and add to that an optional parameter which changes the semantics to orbits of sets. The user would have to supply a subset of the domain then, instead of an element, or you'd raise a TypeError.
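A rough sketch of the API Benjamin describes, in plain Python (every name and signature here is an illustrative assumption, not actual Sage code):

def orbit(G, x, on_sets=False):
    """Orbit of x under G; x is a domain element unless on_sets is set."""
    if on_sets:
        if not isinstance(x, (set, frozenset)):
            raise TypeError("expected a subset of the domain")
        x = frozenset(x)
    orb, todo = {x}, [x]
    while todo:
        y = todo.pop()
        for g in G.gens():  # permutation group elements act as callables
            z = frozenset(g(e) for e in y) if on_sets else g(y)
            if z not in orb:
                orb.add(z)
                todo.append(z)
    return orb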
### Nils Bruin
Mar 21, 2013, 1:21:40 PM
On Mar 21, 9:05 am, Benjamin Jones <[email protected]> wrote:
> It seems to me that the ambiguity arises from the original statement,
Systems like magma (and I assume gap as well) solve this by having a
"GSet" type. If you have S3 acting on the GSet V={1,2,3} then, one can
construct the powerset W of V as a GSet as well. In that context there
is a natural distinction between the subset {1,2,3} of V (which is a
sub-GSet) and the element {1,2,3} of W.
One can talk about the orbit of the element {1,2} in W under S3 but
obviously, {1,2} is not a sub-GSet of V.
How these things coerce between each other is another matter, but I
suspect making such distinctions internally is unavoidable.
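In Python terms the V-versus-W distinction might be caricatured like this (a toy sketch only; this is not Magma or GAP syntax):

class GSet(object):
    # a set bundled with the group acting on it (toy version)
    def __init__(self, group, elements):
        self.group = group
        self.elements = frozenset(elements)
    def __contains__(self, x):
        return x in self.elements

V = GSet("S3", [1, 2, 3])
W = GSet("S3", [frozenset(s) for s in
                [(), (1,), (2,), (3,), (1, 2), (1, 3), (2, 3), (1, 2, 3)]])
# frozenset([1, 2]) in W  -> True: it is an *element* of W
# frozenset([1, 2]) in V  -> False: as a subset of V it would be the
# sub-GSet GSet("S3", [1, 2]), which is a different object altogether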
### Dima Pasechnik
Mar 21, 2013, 6:12:30 PM
On 2013-03-21, Benjamin Jones <[email protected]> wrote:
> It seems to me that the ambiguity arises from the original statement, "the
> orbit (1,2) under G", not the fact that the domain is non-homogeneous. It's
> less ambiguous to say directly G.{1, 2} (the orbit of the _element_ {1, 2})
> versus G.1 \union G.2 (the orbit of the subset {1, 2}). Then, which group
> action you are talking about is clear.
>
> For the API, it seems best (to me) to have the standard orbit function
> (which takes an element of the domain and returns its orbit) and add to
> that an optional parameter which changes the semantics to orbits of sets.
in my example, {1,2} (the domain element) is naturally a subset of the domain, so in both
cases it's an orbit on subsets. The trouble is that it's not
well-defined for a domain like this.
### Dima Pasechnik
Mar 21, 2013, 11:48:00 PM
On 2013-03-21, Nils Bruin <[email protected]> wrote:
> On Mar 21, 9:05 am, Benjamin Jones <[email protected]> wrote:
>> It seems to me that the ambiguity arises from the original statement,
>
> Systems like magma (and I assume gap as well) solve this by having a
> "GSet" type. If you have S3 acting on the GSet V={1,2,3} then, one can
> construct the powerset W of V as a GSet as well. In that context there
> is a natural distinction between the subset {1,2,3} of V (which is a
> sub-GSet) and the element {1,2,3} of W.
In GAP there is nothing like GSet.
As far as I know, one needs to do the "boxing" trick
I explained above, too, if one needs to "mix" things in the domain of the
group.
>
> One can talk about the orbit of the element {1,2} in W under S3 but
> obviously, {1,2} is not a sub-GSet of V.
>
> How these things coerce between each other is another matter, but I
> suspect making such distinctions internally is unavoidable.
Many years ago I abandoned Magma's predecessor, Cayley, for GAP, as
Cayley's lack of proper coercions/conversions drove me insane all the time.
Dima
### Volker Braun
Mar 22, 2013, 9:43:18 AM
I think it's unambiguous to define the orbit of x recursively as
1. use the action on domain elements if x is a domain element
2. otherwise, assume that x is a list/set/... of domain elements
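Spelled out in Python, the recursion could look like this (illustrative only: `domain` is the tuple of domain elements and `g` a callable acting on them). Note how branch 1 always wins for (1,2) as soon as (1,2) is itself in the domain, which is exactly where the ambiguity discussed above comes from.

def act(g, x, domain):
    if x in domain:                   # 1. x is a domain element
        return g(x)
    if isinstance(x, (tuple, list)):  # 2. x is a container of them
        return tuple(act(g, y, domain) for y in x)
    if isinstance(x, (set, frozenset)):
        return frozenset(act(g, y, domain) for y in x)
    raise TypeError("cannot interpret %r" % (x,))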
### Nathann Cohen
Mar 22, 2013, 9:51:05 AM
Helloooooooooooooo !!!
> I think it's unambiguous to define the orbit of x recursively as
> 1. use the action on domain elements if x is a domain element
> 2. otherwise, assume that x is a list/set/... of domain elements
Well. It is when you know what you are doing and work on a specific group.
When you write a Sage method, though, it is embarrassing if you do not know whether the orbit of a pair of elements is a set of elements (input considered as an element) or a set of pairs of elements (input considered as a set of elements).
Really there is no problem with this patch except that Dima does not like that the elements of the domain could be things like 1, 2, {1,2}, which makes {1,2} ambiguous (element? set of two elements?) ... What we did before is guess the type of INPUT according to a keyword named "action" (that we need anyway, if only to differentiate between OnTuples and OnSets) and everything works fine....
Nathann
### Nathann Cohen
Mar 22, 2013, 9:52:38 AM
Nathann
### Volker Braun
Mar 22, 2013, 10:04:06 AM
On Friday, March 22, 2013 2:51:05 PM UTC+1, Nathann Cohen wrote:
> > I think it's unambiguous to define the orbit of x recursively as
> > 1. use the action on domain elements if x is a domain element
> > 2. otherwise, assume that x is a list/set/... of domain elements
> Well. It is when you know what you are doing and work on a specific group.
For non-interactive you either perform argument validation yourself or use the optional parameter G.orbit(foo, action='OnTuples').
### Nathann Cohen
Mar 22, 2013, 10:06:17 AM
> For non-interactive you either perform argument validation yourself or use
> the optional parameter G.orbit(foo, action='OnTuples').
Oh. Ok, this is fine !
So Dima, do we guess the value of action when it is set to None, then translate the output according to the value of "action" ? That's a good answer !
Nathann
### Nathann Cohen
Mar 22, 2013, 10:46:39 AM
> Would Evariste Galois rise from his grave and chase the designer
> of this?
I answered on the ticket, and said that I would help him if he did. But Dima, you know that this thing will take time if somebody actually ends up doing it, and it's not related to this ticket. Why do you want to block it over that ?
Nathann
### Nathann Cohen
Mar 22, 2013, 11:25:21 AM
> as I explained, the code you don't like there (cause it does not work on
> insane inputs) would work fine on sane inputs. And the uglier code you
> prefer would break things on insane inputs, too, although at some other
> point, e.g. at the one I outlined above in this thread.
It does not break things on insane input -- let's decide where we discuss this, I just answered that on the ticket -- for you are (from Sage's point of view) perfectly aware of what you are doing when you intersect :
- The orbit of a vertex which you obtained by doing g.action( x, action = "OnPoints" )
- The orbit of an edge which you obtained by doing g.action( (x,y), action = "OnSets")
Hence you KNOW that you are intersecting things of different types. We might as well say that the output of g.action( x, action = "OnPoints" ) is of type "OrbitOfPoint" and g.action( (x,y), action = "OnSets") of type "OrbitOfSet". You actually know this information because you filled the "action" argument yourself. You can infer the type of what is being returned just from the value of "action".
Nathann
### Dima Pasechnik
Mar 22, 2013, 12:19:00 PM
On 2013-03-22, Nathann Cohen <[email protected]> wrote:
>> as I explained, the code you don't like there (cause it does not work on
>> insane inputs) would work fine on sane inputs. And the uglier code you
>> prefer would break things on insane inputs, too, although at some other
>> point, e.g. at the one I outlined above in this thread.
>
> It does not break things on insane input -- let's decide where we discuss
> this, I just answered that on the ticket -- for you are (from Sage's point
> of view) perfectly aware of what you are doing when you intersect :
> - The orbit of a vertex which you obtained by doing g.action( x, action =
> "OnPoints" )
> - The orbit of an edge which you obtained by doing g.action( (x,y), action
>= "OnSets")
No, this won't really fly. Indeed,
we can follow your design, and implement, explicitly, action on tuples of
tuples. And then, on my example with the Z_3 action, ask for the orbit of ((1,2),(1,2)).
And then we are in trouble, cause there is no way to figure out
whether (1,2) is a domain element or not!
Dima
### Nathann Cohen
Mar 22, 2013, 12:23:09 PM
> No, this won't really fly. Indeed,
> we can follow your design, and implement, explicitly, action on tuples of
> tuples. And then, on my example with the Z_3 action, ask for the orbit of
> ((1,2),(1,2)).
> And then we are in trouble, cause there is no way to figure out
> whether (1,2) is a domain element or not!
You can decide this from the value of "action".
Nathann
### Nathann Cohen
Mar 22, 2013, 12:55:38 PM
> Do you mean to say that we check that (1,2) is in the domain, and
> utilize this info?
O_O
Are you doing this on purpose ?
If you want to find the "orbit" of ((1,2),(1,2)) with Sage and if we implement this "action" thing, then :
- When you write g.action( ((1,2),(1,2)), action="OnPoints") Sage refuses what you give it for ((1,2),(1,2)) does not belong to the domain
- When you write g.action( ((1,2),(1,2)), action="OnTuples") then Sage checks that (1,2) is indeed in the domain (it is a vertex of your circuit) and returns [((1,2),(1,2)), (1,1), (2,2)], that is a set of pairs (vertex, vertex)
- When you write g.action( ((1,2),(1,2)), action="OnSets") Then Sage either refuses to work because your "set" contains twice the same element, or reduces your "set" to ((1,2)) in which case it returns a list of sets equal to [((1,2)), (1), (2)]
When is it ambiguous ?
Nathann
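Schematically, the dispatch Nathann describes could look as follows (all helper names are hypothetical; the point is only that branching on "action" leaves nothing to guess):

def action(self, x, action="OnPoints"):
    if action == "OnPoints":
        if x not in self.domain():
            raise ValueError("%r is not a domain element" % (x,))
        return self._orbit_of_point(x)          # hypothetical helper
    if action == "OnTuples":
        # every entry of x must be a domain element
        return self._orbit_of_tuple(tuple(x))   # hypothetical helper
    if action == "OnSets":
        if len(set(x)) != len(x):
            raise ValueError("not a set: repeated entries")
        return self._orbit_of_set(frozenset(x)) # hypothetical helper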
### Nathann Cohen
Mar 22, 2013, 12:58:15 PM
If your only problem is Volker's proposition that we "guess" the value of "action" according to the input I do not mind forgetting about it just to simplify this discussion, even though I think his idea is good.
Then you would not be able to call "orbit" without specifying the value of "action", and this terminates all doubts about the interpretation of input and output.
Nathann
### Volker Braun
Mar 22, 2013, 1:55:02 PM
Under my proposal, the orbit of ((1,2),(1,2)) would be the orbit of a pair, i.e. {((1,2),(1,2)), (1,1), (2,2)}. If you want the orbit of pairs of pairs, you can get it as orbit(..., action="OnTuplesTuples").
There is of course a limit to how nested the action is. If you really need orbits of tuples of sets of tuples of sets of tuples then you'll have to relabel the permutation group so that its domain doesn't have tuples or sets.
### Nathann Cohen
Mar 22, 2013, 1:58:35 PM
> There is of course a limit of how nested the action is. If you really need
> orbits of tuples of sets of tuples of sets of tuples then you'll have to
> relabel the permutation group so that its domain doesn't have tuples or
> sets.
? But Whyyyyyyyyyyyyyyyyy should we relabel anything ??? Whenever there is a doubt as to how INPUT should be read you can feed the method with a corresponding value of 'action', and the uncertainty disappears !!!
Besides, we only compute orbits by forwarding stuff to GAP. What it does not know how to do, we cannot do either.
Volker, your trick is nice because it means that it is mostly unnecessary to define the value of "action" explicitly, but if there is a doubt just define action manually and there is *NO* uncertainty possible.
Nathann
### Dima Pasechnik
Mar 22, 2013, 8:02:58 PM
On 2013-03-22, Nathann Cohen <[email protected]> wrote:
In more detail: one writes a function that can do GAP's OnTuplesTuples action,
without even any action guessing involved (this is trivial code,
right, we have things like this on our ticket?), and asks it to do the
orbit of the tuple of tuples ((1,2),(1,2)). The outcome --- the stuff is
terribly broken --- is explained in
my previous message. In particular, the "nicest" case --- infinite orbit
--- is where by ((1,2),(1,2)) the caller gets his wish, to compute the orbit on the
tuple of tuples of vertices of his graph, granted. Of course I assume
that the function cannot read the mind of the caller as it goes to work,
so it has to make a consistent choice that (1,2) is not a domain element...
Just as one can derive anything from a False statement, one can always
get into trouble with a design that creates counterexamples to the foundations
of group theory.
Dima
### Nathann Cohen
Mar 23, 2013, 4:36:41 AM
Helloooooo !
> In more detail: one writes a function that can do GAP's OnTuplesTuples action,
> without even any action guessing involved (this is trivial code,
> right, we have things like this on our ticket?), and asks it to do the
> orbit of the tuple of tuples ((1,2),(1,2)). The outcome
The output would be [ ((1,2),(1,2)), ((2,(1,2)),(2,(1,2))), (((1,2),1),((1,2),1)) ]
What is the problem with that ? If you say OnTupleTuple you know that (1,2) has to be considered as the tuple with two elements 1,2 and we can do the job. You know this because it is an action on a tuple of tuples, the tuple of tuples being ((1,2), (1,2)). So there are two tuples, which are (1,2) and (1,2), each one containing two elements. No way you can confuse this with the element (1,2).
> Just as one can derive anything from a False statement, one can always
> get into trouble with design that creates counterexamples to foundations
> of group theory.
Tell me how it is wrong, or what mistake I made. Otherwise, it works.
Nathann
### Nathann Cohen
Mar 23, 2013, 4:39:22 AM
> Even if you manage to answer the question above satisfactorily,
> I still hold that it's not acceptable in the first place to have such
> a design,
> forcing one to jump through hoops for no good reason, in an extendable
> system like Sage.
This design is CORRECT Dima, if you don't believe so just give me one instance for which there is an uncertainty.
OnTupleTupleTupleTupleTuple would tell you that the first 5 layers are NOT elements from the domain, and that only the elements of depth 6 are elements of the domain.
Nathann
### Nathann Cohen
Mar 23, 2013, 5:01:39 AM
> no, the 3rd element is computed using a different meaning of (1,2) than
> the one used to compute the 2nd one.
> If you used the same meaning for the 2nd as the one for the 3rd, your
> 2nd would be (1,1).
Then how is your input of type "Tuple of Tuple", sir ?
> OK, great, so (1,2) is not an element.
> Yet, you take (1,2) as an element when you compute the 3rd element of
> the orbit. Is your implementation of the function going to read your
> mind, to work correctly?
Of course, because the type of the value returned is "a list of tuples of tuples". Hence everything at depth <= 3 is a container, and everything at depth 4 is an element.
Nathann
### Volker Braun
Mar 23, 2013, 5:19:17 AM
We are talking about guessing the action once and for all for a given input. You are talking about guessing the action each time a group element acts in the orbit. I agree that the latter is not consistently doable. But it is possible to guess the action at the beginning of the orbit computation, and this is unambiguous.
Other points that might be nice to implement (but are not on the ticket):
* a switch to optionally print the guess used for the action, maybe verbose=True or action="guess_verbose".
* allow any python function f(g,x) as action=f
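The second bullet could be as light-weight as this (a hypothetical keyword, following the proposal just made): any callable f(g, x) defines an action, so e.g. conjugation comes for free.

# hypothetical usage of action=f, where f(g, x) computes the action of g on x
conj = lambda g, x: g * x * g**-1   # conjugation action on group elements
G.orbit(h, action=conj)             # the conjugacy class of h in G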
### Dima Pasechnik
Mar 23, 2013, 8:49:09 AM
On 2013-03-23, Volker Braun <[email protected]> wrote:
> We are talking about guessing the action once and for all for a given
> input. You are talking about guessing the action each time a group element
> acts in the orbit. I agree that the latter is not consistently doable. But
> it is possible to guess the action in the beginning of the orbit
> computation, and this is unambiguous.
Unless you specify the action explicitly,
you will need to run a syntactic parser on the group domain before you
can compute an orbit, or do any other semi-trivial computation.
Certainly part 2 of
http://trac.sagemath.org/sage_trac/ticket/14291#comment:28
will not be possible to achieve.
Say, you have 1, 2, (1,2), (2,(1,2)), and perhaps other stuff in the domain.
How many different meanings does "the orbit of ((2,(1,2)),(2,(1,2)))" have?
How can you guess the "right" action for it?
Dima
(recycling a part of another message in this thread, sorry)
### Nathann Cohen
Mar 23, 2013, 8:50:43 AM
> Say, you have 1, 2, (1,2), (2,(1,2)), and perhaps other stuff in the domain.
> How many different meanings does "the orbit of ((2,(1,2)),(2,(1,2)))" have?
> How can you guess the "right" action for it?
Dima it's getting boring. Let's say that I do not try to guess
anything if that's a problem; do we agree that a function named
"orbit" which knows the type of its input from the value of "action"
will never encounter an ambiguous input nor return an ambiguous output
?
Nathann
### Volker Braun
Mar 23, 2013, 9:26:00 AM
On Saturday, March 23, 2013 1:43:05 PM UTC+1, Dima Pasechnik wrote:
> Now, if 1, 2, and (1,2) are in your domain, is (2,(1,2)) a tuple?
According to the "minimum depth" rule to guess the default action, it is.
> And how many different meanings does ((2,(1,2)),(2,(1,2))) have?
There is a unique guess according to the "minimum depth" rule; if that is not what you want then you have to specify the action explicitly.
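One way to make the "minimum depth" rule precise (an illustrative helper, not code from the ticket): parse element-first, so the least nested reading always wins.

def depth(x, domain):
    # a domain element has depth 0, even if it happens to look like a tuple
    if x in domain:
        return 0
    return 1 + max(depth(y, domain) for y in x)

With the domain 1, 2, (1,2), (2,(1,2)) this gives depth 1 to ((2,(1,2)),(2,(1,2))): a pair of the domain elements (2,(1,2)), which is the unique minimal guess.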
### Nathann Cohen
Mar 24, 2013, 6:11:07 AM
Helloooooooooooooooooooooooooo Dima !!!
Yesterday I went to walk around the Calanques near the University of Marseille, and it did me good ! I am now wandering homeless in Paris and that's another story :-P
Buttttttttt the thing is that I thought a bit about our conversation here and I think I understand our misunderstanding better. That's only because Thomas Connor made me read something about Incidence Geometry a long time ago :-D
So for a start, it took me some time to accept that you see nothing wrong -- in a group where all elements of the domain are integers 1, .... , n -- with wanting to compute the orbit of (1, {1,2}), when of course {1,2} is not a member of the domain. Of course, now if {1,2} *IS* a member of the domain then you do not see how to interpret (1, {1,2}) given as input and everything becomes dark, sad, evil and totally non-beautiful at all.
SO. First, the thing is that GAP apparently does not know how to do that either. It accepts only a list of things which are "at the same level", that is a tuple (element, element) and not a tuple (element, pair of elements). That's what I need myself, so I don't complain if GAP does not know better, and I would be prettyyyyyy glad if this feature could be exposed for a start. And for this kind of input we know all we have to know from the value of "action", exactly as GAP already does it.
Then I agree that it would be great to have a way to say g.orbit( (1, {1,2}) ) and have Sage do all the work. Well, at this level I have no idea how it should actually be implemented (I'm interested in the ways but so ignorant of such things that I probably will not be of much help), but I am not scared anymore of the interpretation of input : as Volker said earlier (which I had not noticed then), we could just write some code at the beginning of "orbit" which checks that the input can never be misinterpreted (and cache the result of this computation), so that we can be proud of what we return. And if the input can be misinterpreted we would just scream in panic and raise an IAmClueless error saying so. Which will not happen in your applications, which will not happen in mine either, and which will never happen whenever "action" is defined anyway.
Now what do you think of it ? That we implement this method for a given value of "action", and think hard about how to extend GAP's features in Sage ?
Have fuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuun ! It's grey in Paris and everything is expensive, but I have found a couch somewhere for the next two weeks ;-)
Nathann
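The up-front check Nathann mentions could start as simply as this (a sketch for the flat case only; all names are made up):

def is_ambiguous(x, domain):
    # ambiguous iff x can be read both as a domain element and as a
    # container of domain elements -- the case to scream about
    if not isinstance(x, (tuple, set, frozenset)):
        return False
    return x in domain and all(y in domain for y in x)

# is_ambiguous((1, 2), {1, 2, (1, 2)})  -> True:  raise the error
# is_ambiguous((1, 2), {1, 2})          -> False: safe to interpret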
### tom d
Mar 25, 2013, 3:33:57 AM
Hm, wouldn't this just be a direct product of the individual group actions? It seems to me that we're expecting the permutations to act according to an 'obvious' group action. Should we also expect 'obvious' actions of things like a dihedral group when given a 2-dimensional vector? Probably the answer is to generalize and build up a proper group actions category (with obvious methods passing to representations!).
It looks like there's a bare semblance of a glimmer of the idea of actions in Sage:
http://combinat.sagemath.org/doc/reference/categories/sage/categories/action.html
but it's not very fleshed out....
One way to deal with the problem at hand would be to define a direct product of group actions and behave/coerce accordingly. This could look something like:
1) Look at the list of things the permutation is supposed to be acting on, like [1, Set([1,2])]
2) Build a group action parent for each of the components, which we then use to build a direct product parent
3) Return the appropriate element of the direct product.
Such a framework would have the advantage of being there to deal with similar problems in other groups.
cheers!
### Jason B. Hill
Mar 25, 2013, 4:43:19 AM
Nice catch Dima!
This functionality is nice, although I think a competent programmer in Sage/Python realizes that the object (1,2) is a bit too vague/polymorphic. It DOES have a use in at least helping explain the theory though. The following example is a small demonstration. I'm simply using text strings as cycles/tuples. I don't need to cast those as actual sets/permutations/tuples for this example, and I suppose I'd question taking the underlying code that far in general.
Example:
Consider the symmetric group $S_3$. The action that most are familiar with is the natural action of degree 3. Call that $G$. Another action is the (right) regular action of $S_3$ acting on itself. We label the six group elements with strings as follows: '()', '(1,2)', '(1,3)', '(2,3)', '(1,2,3)', '(1,3,2)'. It may be easier to think of these elements as being labeled by letters for now. We'll use
'()'=a, '(1,2)'=b, '(1,3)'=c, '(2,3)'=d, '(1,2,3)'=e, '(1,3,2)'=f
For instance, we have a*b=b and c*d=e, and one could write out the entire multiplication table as a 6x6 array. Now, in the regular action, we're looking at a group that permutes the group elements themselves ... which is a bit meta ... but it is generated by (if you don't see where I'm getting this from, you may want to draw out the multiplication table) a two-cycle and a three-cycle: (a,b)(c,f)(d,e) and (a,e,f)(b,c,d). Written in the original cycle strings, those generators are:
('()','(1,2)')('(1,3)','(1,3,2)')('(2,3)','(1,2,3)') and ('()','(1,2,3)','(1,3,2)')('(1,2)','(1,3)','(2,3)')
Coding that in Sage...
G=PermutationGroup([[('()','(1,2)'),('(1,3)','(1,3,2)'),('(2,3)','(1,2,3)')],[('()','(1,2,3)','(1,3,2)'),('(1,2)','(1,3)','(2,3)')]])
G.is_isomorphic(SymmetricGroup(3))
True
So, we have a degree 6 representation of $S_3$. In fact, we can go further. Let's place the degree 6 regular representation in a fully diagonal subdirect product with the natural action. I'll use letters here instead of the ugly strings.
G=PermutationGroup([[('a','b'),('c','f'),('d','e'),(1,2)],[('a','e','f'),('b','c','d'),(1,2,3)]])
G.degree()
9
G.is_isomorphic(SymmetricGroup(3))
True
Interestingly, this subdirect product that I just formed is a small example of a permutation group having two different length minimal nonredundant bases. (The regular representation always has a single base element, while the natural representation of the symmetric group has a minimal nonredundant base of size one less than the degree.) Try finding such a small example in the literature.
I really caution group theorists against thinking of permutation group elements as being things like (1,2). As tom d hinted at, that is an action. Permutation groups are better understood as abstract groups acting on a domain, and we only ever have access to the action ... and the worst part of it all is that the same abstract group can induce infinitely many different actions. $S_4$ acting by rotations on the cube is another good example of this.
Jason
--
Jason B. Hill
http://www.jasonbhill.com | [email protected]
### Volker Braun
Mar 25, 2013, 7:30:57 AM
The group action category stuff would be nice, but you would run into exactly the same question that Dima asked: what are you going to do if there is more than one possible action? You'll have to either use some heuristics (take the simpler / less nested action) or raise some exception telling the user to explicitly disambiguate between them.
### tom d
Mar 26, 2013, 2:01:40 AM
Specify the action! By making a group action framework, we would also be providing the possibility of changing the action to something contrary to the assumptions of the original developers.... Yes, in fact I think this is one of the natural reasons for doing an explicit group action framework. Even for the action of S_n on the set {1,2,3,...,n} one can twist the 'usual' action with an automorphism \phi of S_n, so that \sigma acts on i by \phi(\sigma)(i).
The 'usual' actions then become special predefined objects, like the special graphs, maybe summoned up automatically using the permutation/whatever's __call__ function if it's an idiomatic action like \sigma(3).
As a category, I would imagine we would have a GroupAction category and/or a GroupWithAction category, which would put some requirements on the group and its elements.
I've attached a bit of sample code, which could be used as a base to start a group action category. (Currently just a class, as I need to go and read the category tutorials, though...) The examples are at the bottom; includes products of actions, twisting by a group endomorphism, computing characters, orbits, checking the action definition, checking transitivity, and generating the Cayley graph of the action for a given generating set.....
### tom d
Mar 26, 2013, 2:04:45 AM
oops, here's the code! I keep getting server errors when trying to attach as a file, so I'm just including the text of the code file below:
class GroupAction(Parent):
    def __init__(self, G, S, phi):
        # phi: a group action G \times S \rightarrow S
        self.phi = phi
        self.G = G
        self.S = S

    def __repr__(self):
        return "Action of " + self.G.__repr__() + " on the set " + self.S.__repr__()

    def action(self, g, s):
        """
        Gives the action of g on s.
        """
        return self.phi(g, s)

    def group(self):
        """
        Group which acts.
        """
        return self.G

    def gset(self):
        """
        Set on which the group acts.
        """
        return self.S

    def action_function(self):
        """
        Function from G \times S \rightarrow S.
        """
        return self.phi

    def check_action(self, g, h, s):
        """
        Checks whether g(hs)=(gh)s.
        """
        return self.phi(g*h, s) == self.phi(g, self.phi(h, s))

    def check_action_gens(self, gens=None):
        """
        Checks that this is actually a group action, using a generating set
        for the group acting on the full set.
        """
        assert self.S.is_finite(), 'Cannot check group action on an infinite set.'
        if gens is None:
            # Should check if G has gens implemented.
            gens = self.group().gens()
        for g in gens:
            for h in gens:
                for s in self.S:
                    if not self.phi(g*h, s) == self.phi(g, self.phi(h, s)):
                        stringy = g.__repr__() + ', ' + h.__repr__() + ' ' + s.__repr__()
                        assert False, 'Action fails on ' + stringy
        return True

    def orbit(self, s):
        return Set([self.action(g, s) for g in self.group()])

    def is_transitive(self):
        if len(self.gset()) == 0:
            return True
        s = self.gset()[0]
        return self.orbit(s) == Set(self.gset())

    def twist(self, endomorphism):
        """
        Twists this representation by an endomorphism of the group.
        """
        phi = self.action_function()
        kappa = lambda g, s: phi(endomorphism(g), s)
        return GroupAction(self.G, self.S, kappa)

    def character(self):
        """
        Count fixed points for conjugacy class representatives.
        """
        c = []
        for g in self.G.conjugacy_classes_representatives():
            fix = 0
            for s in self.S:
                if self.action(g, s) == s:
                    fix += 1
            c.append(fix)
        return c

    def cayley_graph(self, gens=None):
        """
        Builds a Cayley graph of the group action, using the specified
        generating set.
        """
        assert self.S.is_finite(), 'Cannot build the graph on an infinite set.'
        if gens is None:
            # Should check if G has gens implemented.
            gens = self.group().gens()
        G = DiGraph()
        for g in gens:
            for s in self.gset():
                # one arrow per generator, from each point to its image
                G.add_edge(s, self.action(g, s))
        return G

    def product_action(self, B):
        """
        Given a second group action B with the same group and set T, generates
        the product group action of self and B.
        """
        assert self.group() == B.group(), 'Actions need to have same group acting.'
        T = B.gset()
        U = CartesianProduct(self.gset(), T)
        kappa = lambda g, u: [self.action_function()(g, u[0]), B.action(g, u[1])]
        return GroupAction(self.group(), U, kappa)
"""
#Example 1: Usual symmetric group action.
sage: G=SymmetricGroup(4)
sage: S=Set([1,2,3,4])
sage: phi = lambda g,s: g(s)
sage: A=GroupAction(G,S,phi)
sage: A.character()
[4, 2, 0, 1, 0]
#Example 2: Symmetric group acting on a set.
sage: rho=lambda g,s: Set([phi(g,t) for t in s])
sage: T=Subsets(S,2)
sage: B=GroupAction(G,T,rho)
sage: B.character()
[6, 2, 2, 0, 0]
#Example 3: Product action.
sage: C=A.product_action(B)
sage: C.character()
[24, 4, 0, 0, 0]
#Example 4: Twist by an automorphism.
sage: a=G.an_element()^2
sage: ai=a.inverse()
sage: auto=lambda g: a*g*ai
sage: At=A.twist(auto)
sage: y=G.simple_reflection(1)
sage: [A.action(y,s) for s in S]
[2, 1, 3, 4]
sage: [At.action(y,s) for s in S]
[1, 2, 4, 3]
"""
### Benjamin Jones
Mar 26, 2013, 2:49:49 AM
Big +1 to framework for explicitly instantiating group actions.
--
Benjamin Jones
[email protected]
### Nathann Cohen
Mar 26, 2013, 3:33:11 AM
> As a category, I would imagine we would have a GroupAction category and/or
> a GroupWithAction category, which would put some requirements on the group
> and its elements.
I hate categories T_T
Categories is where everything I cherish gets ruined T_T
Nathann
### Nathann Cohen
Mar 26, 2013, 4:24:26 AM
Helloooooooooooooooooooooo !!!
> Well, unfortunately I don't know under which Seine bridge mathematicians
> are most
> welcome :-)
Ahahahah. I felt so shocked when I first learnt that Paris did not have the only one O_O
> Mind you, when this part of GAP was developed, you were in primary
> school;-),
And I had more important things to do, like looking for sweets. I will not be held responsible.
> computers were slow and weak, everything needed to be coded
> in C or Fortran (or one would need to use Lisp, which wasn't very
> popular in that part of computational algebra --- unfortunately);
> naturally, some features remain hugely underdeveloped due to this. IMHO
> a part of the GAP framework of actions which deals with tuples, subsets,
> etc.
> is at present more of a burden than of an advantage. (Whereas actions
> say on cosets of a subgroup which GAP has are hugely important and hard to
> beat)
Hmmm... But then, should we patch Sage to add features or patch GAP and use it in Sage ? :-P
> Coding more generic orbit algorithms for permutation groups
> in a language like Python is, moreover, quite easy.
> (By more generic---than tuples, tuples of tuples, etc---
> I mean the action on trees with leaves labeled by
> elements of the group domain, where some (fixed) non-terminal nodes might
> carry the structure of a set, i.e. have unordered rather than
> ordered neighbourhoods. E.g. you can get an action on gadgets like
> (1,{2,3},((4,5),2,{1,6,7})); here the domain elements are numbers, but
> this need not be a restriction.)
> Just get yourself a queue, put there the starting element, and add the
> new images to the other end of the queue, while computing the images of
> the 1st in order element in the queue.
> One is done when everything in the queue is processed
Yep yep. In my world that's called a depth-first-search. But we are shy guys, especially when surrounded by real mathematicians.
> All the orbit algorithm needs to know, apart from the group generators,
> is how to compute the image given the tree "shape", i.e. no labels on the
> nodes.
> E.g. for the example above the shape is encoded by
> (,{,},((,),,{,,})).
>
> And this can be made modular etc, as there are more actions around than
> these which fit this pattern; so how to compute the action can be
> specified by a function passed as a parameter. (Not sure if one also
> needs a function to compare orbit elements for equality.)
Oh. You'd give "(,{,},((,),,{,,}))" as a string parameter, parse it and use it to define the tree ? Ahahaha. Funny and efficient ! A bit .... "home-made", but I like that :-P
Actually, what would have prevented me from dirtying beautiful groups with my out-of-place graph approach (i.e. the DFS above) is that I would have thought the group guys were too smart to use such things. Is that how GAP computes orbits ? Just a BFS ? A "set" object, hash functions for the group's elements, and that's all we need for a "state of the art" orbit method ?
> I wrote above how I'd see this done.
> I think it's better to encode the "usual" types of action by such a
> pattern like above (e.g. (,{,},((,),,{,,}))).
> Such a pattern can either be guessed/computed by Volker's rules, or
> specified as a parameter. (And a user function to compute the action
> can also be a parameter).
> They could be translated into GAP actions and GAP calls, for which such
> actions exist.
Ok good point. But now I have a more practical question : can we get #14291 after just replacing this "action" parameter with your funny string, and return a NotImplementedError when the value of action cannot be forwarded to GAP ?
As it is, it is already a nice feature for humble graph theoreticians to compute the orbit of a pair of elements :-)
Nathann
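For reference, the queue-based orbit computation quoted above, written out in Python (a minimal sketch; image(g, x) stands for whatever function computes the action):

from collections import deque

def orbit(gens, x0, image):
    orb = set([x0])
    queue = deque([x0])
    while queue:
        x = queue.popleft()       # FIFO queue: a breadth-first search
        for g in gens:
            y = image(g, x)
            if y not in orb:      # points must be hashable and comparable
                orb.add(y)
                queue.append(y)
    return orb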
### Dima Pasechnik
Mar 26, 2013, 6:04:38 AM
On 2013-03-26, Nathann Cohen <[email protected]> wrote:
no, DFS uses a stack, i.e. LIFO (Last In First Out), not a queue (FIFO, First
In First Out), doesn't it?
>
>> All the orbit algorithm needs to know, apart from the group generators,
>> is how to compute the image given the tree "shape", i.e. no labels on the
>> nodes.
>> E.g. for the example above the shape is encoded by
>> (,{,},((,),,{,,})).
>>
>> And this can be made modular etc, as there are more actions around than
>> these which fit this pattern; so how to compute the action can be
>> specified by a function passed as a parameter. (Not sure if one also
>> needs a function to compare orbit elements for equality.)
>
> Oh. You'd give "(,{,},((,),,{,,}))" as a string parameter, parse it and use
> it to define the tree ? Ahahaha. Funny and efficient ! A bit ....
> "home-made", but I like that :-P
>
> Actually, what would have prevented me from dirtying beautiful groups with
> my out-of-place graph approach (i.e. the DFS above) is that I would have
> thought the group guys were too smart to use such things. Is that how GAP
> computes orbits ? Just a BFS ? A "set" object, hash functions for the
> group's elements, and that's all we need for a "state of the art" orbit
> method ?
and a queue. A bit of discipline never hurts :)
Otherwise, that's all that is needed. But indeed, it's not totally obvious.
I know very smart people, much smarter
than me, who instead computed orbits by listing all the elements of the
group first, and applying them all...
>
>> I wrote above how I'd see this done.
>> I think it's better to encode the "usual" types of action by such a
>> pattern like above (e.g. (,{,},((,),,{,,}))).
>> Such a pattern can either be guessed/computed by Volker's rules, or
>> specified as a parameter. (And a user function to compute the action
>> can also be a parameter).
>> They could be translated into GAP actions and GAP calls, for which such
>> actions exist.
>
> Ok good point. But now I have a more practical question : can we get #14291
> after just replacing this "action" parameter with your funny string, and
> return a NotImplementedError when the value of action cannot be forwarded
> to GAP ?
> As it is, it is already a nice feature for humble graph theoreticians to
> compute the orbit of a pair of elements :-)
I guess this should work, but I have a negative amount of time available
in the coming two weeks...
Dima
### Volker Braun
Mar 26, 2013, 6:34:32 AM
On Tuesday, March 26, 2013 7:01:40 AM UTC+1, tom d wrote:
Specify the action! [...] The 'usual' actions then become special predefined objects, like the special graphs, maybe summoned up automatically using the permutation/whatever's __call__ function if it's an idiomatic action like \sigma(3).
The group action framework is just the implementation; you still haven't answered the question that this thread was about: should permutation actions on nested containers automatically discover one possible action or not? As you said, in the group action framework you can implement either possibility. You can also implement either without it.
In any case, http://trac.sagemath.org/14291 is about hooking up GAP and not creating a framework for group actions. So in the interest of a finite amount of work per ticket, we should probably fall back to only allowing explicitly specified group actions (OnPoints/OnTuples etc) there. Once your group action framework is in place it'll be easy enough to hook that up into the orbit method as well.
### Martin
Mar 26, 2013, 8:32:22 AM
Implementing the idea of a GroupAction category would also be of great
help for the species framework! I could then finally properly
implement the remaining constructions, where (if I remember correctly)
I need to specify actions of a Young subgroup on a set. In particular
this concerns unlabelled enumeration of compositions. Although one can
do without (as in the original Aldor code), it's ugly.
Best,
Martin
### Nathann Cohen
Mar 26, 2013, 10:47:06 AM
Yoooooooooooooooo !!!
> no, DFS uses a stack, i.e. LIFO (Last In First Out), not a queue (FIFO, First
> in First Out), isn't it?
Indeed. Both would work in this case, though :-P
> and a queue. A bit of discipline never hurts :)
> Otherwise, that's all is needed. But indeed, it's not totally obvious.
> I know very smart people, much smarter
> than me, who instead computed orbits by listing all the elements of the
> group first, and applying them all...
They were just above things like runtime.
> I guess this should work, but I have negative amount of time available
> in the coming two weeks...
Well, then in the meantime I uploaded a new patch to #14291 that can be reviewed right now :-P
It implements an orbit() method only, that deals with the many actions you wanted to have available, as well as with the two I needed myself. It even sorts the "Set" stuff before feeding GAP with it, which will avoid a few raised eyebrows when GAP does not like what it is given (especially when the user has no way to guess GAP's ordering of the elements !).
Waiting for a review ! :-)
Nathann
### Nicolas M. Thiery
Mar 26, 2013, 1:04:51 PM
For whatever it's worth, there is some preliminary stuff in this
direction in the queue:
http://combinat.sagemath.org/code/file/tip/sage/categories/sets_with_action.py
It's more geared toward sets endowed with an action of a semigroup,
but the category part should scale to groups.
There is also some support for defining actions in
sage.categories.action.py; of course the two things should be made to
work hand in hand. This seems like a good thing to discuss at Sage
Days in June!
Cheers,
Nicolas
--
Nicolas M. Thiéry "Isil" <[email protected]>
http://Nicolas.Thiery.name/
### Benjamin Jones
Mar 26, 2013, 1:25:42 PM
On Tue, Mar 26, 2013 at 3:34 AM, Volker Braun wrote:
In any case, http://trac.sagemath.org/14291 is about hooking up GAP and not creating a framework for group actions. So in the interest of a finite amout of work per ticket, we should probably fall back to only allow explicitly specified group actions (OnPoints/OnTuples etc) there. Once your group action framework is in place it'll be easy enough to hook that up into the orbit method as well.
+1
### tom d
Mar 27, 2013, 11:49:45 AM
'Allo!
On Tuesday, March 26, 2013 1:34:32 PM UTC+3, Volker Braun wrote:
The group action framework is just the implementation, you still haven't answered the question that this thread was about: Should permutation actions on nested containers automatically discover one possible action or not. As you said, in the group action framework you can implement either possibility. You can also implement either if you don't use it.
Yeah, I was thinking that because of the ambiguity that was being discussed in dealing with the nested actions, the best approach would be to provide a simple framework for specifying actions, and easy-to-access examples of That Sort of Thing, for people interested in using it, rather than putting in an implementation which half of users will think is mussed up.
I'll be at the Sage-Combinat days in June; helping to build out the action code that others have already started developing sounds like a great way to spend at least a chunk of that week.
### Nathann Cohen
Mar 27, 2013, 3:58:49 PM
Hellooooooooooooooo !!
> Yeah, I was thinking that because of the ambiguity that was being discussed in dealing with the nested actions, the best approach would be to provide a simple framework for specifying actions, and easy-to-access examples of That Sort of Thing, for people interested in using it, rather than putting in an implementation which half of users will think is mussed up.
>
> I'll be at the Sage-Combinat days in June; helping to build out the action code that others have already started developing sounds like a great way to spend at least a chunk of that week.
Ahaahahhah. I would be very surprised (and glad. but very surprised.) if you had a way to beat Dima's very efficient encoding of the action with a string ! :-)
Have fuuuuuuuuuuuuuuuuuuun !
Nathann
### Nils Bruin
Mar 27, 2013, 7:49:11 PM
On Mar 26, 10:04 am, "Nicolas M. Thiery" <[email protected]>
wrote:
> It's more geared toward sets endowed with an action of a semigroup,
> but the category part should scale to groups.
+1. I'm pretty sure that whatever the user interface is going to be,
something along these lines will have to go underneath to build a
solid basis. If you like to pander to the number theoretic crowd, you
could call the category "GSet" and, for the case where the acted-upon set
is a commutative group itself, "GModule".
### Nathann Cohen
Mar 28, 2013, 3:33:37 AM
Helloooooooooooooooo !!!
> No, I didn't propose strings. I meant things like
>
> sage: (None,None,{None,None},None)
> (None, None, set([None]), None)
>
> Now you can replace None with elements of the group domain.
Oh ? Then I still don't get it. I thought that you meant that the input should be something like :
g.orbit(weird_mix_of_tuples_sets_and_elements, action = (, ,)) so that the value of "action" says where elements should be.
And I thought that it would be a string, because you can't do things like that :
{1, 1}
(which may happen when this function is called by other ones which do not know what the parameters are)
And more importantly you can't do that :
{{}}
So I thought you said that something like g.action((1,(1,2)), action = "(,)") could mean g.action((1,(1,2)), action = "OnPairs")
Nathann
https://www.tutorialspoint.com/python-program-to-print-elements-which-are-multiples-of-elements-given-in-a-list
# Python program to print elements which are multiples of elements given in a list
When it is required to print the elements of a list which are multiples of every element given in another list, a list comprehension is used.
## Example
Below is a demonstration of the same
my_list = [45, 67, 89, 90, 10, 98, 10, 12, 23]
print("The list is :")
print(my_list)
my_division_list = [6, 4]
print("The division list is :")
print(my_division_list)
my_result = [element for element in my_list if all(element % j == 0 for j in my_division_list)]
print("The result is :")
print(my_result)
## Output
The list is :
[45, 67, 89, 90, 10, 98, 10, 12, 23]
The division list is :
[6, 4]
The result is :
[12]
## Explanation
• A list is defined and is displayed on the console.
• Another list of integers is defined.
• A list comprehension is used to iterate over the elements and check whether each element leaves a remainder of 0 when divided by every element of the integer list.
• If yes, it is stored in a list and assigned to a variable.
• This is displayed as output on the console.
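The same filtering can also be written as an explicit loop, which makes the role of all() easier to see (an equivalent rewrite of the comprehension above):

my_result = []
for element in my_list:
   # keep the element only if every entry of the division list divides it
   if all(element % j == 0 for j in my_division_list):
      my_result.append(element)
print(my_result)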
http://mathematica.stackexchange.com/tags/kernel-startup/new
# Tag Info
Here is the solution of Jason B.:

ParallelEvaluate[
  Table[
    {$KernelID, i};
    x = i^2;
    Print["$KernelID = ", $KernelID, ", i = ", i, ", x = i^2 = ", x],
    {i, ($KernelID - 1) 250 + 1, $KernelID 250}
  ];
];

The output is correct. (I only do not understand why no output is shown for i = 1?)
https://www.jobilize.com/course/section/solving-optimisation-problems-using-differential-calculus-by-openstax?qcr=www.quizover.com
# 5.2 Solving problems
## Using differential calculus to solve problems
We have seen that differential calculus can be used to determine the stationary points of functions, in order to sketch their graphs. However, determining stationary points also lends itself to the solution of problems that require some variable to be optimised.
For example, if fuel used by a car is defined by:
$f\left(v\right)=\frac{3}{80}{v}^{2}-6v+245$
where $v$ is the travelling speed, what is the most economical speed (that means the speed that uses the least fuel)?
If we draw the graph of this function we find that the graph has a minimum. The speed at the minimum would then give the most economical speed.
We have seen that the coordinates of the turning point can be calculated by differentiating the function and finding the $x$ -coordinate (speed in the case of the example) for which the derivative is 0.
Differentiating [link] , we get: ${f}^{\text{'}}\left(v\right)=\frac{3}{40}v-6$ If we set ${f}^{\text{'}}\left(v\right)=0$ we can calculate the speed that corresponds to the turning point.
$\begin{array}{ccc}\hfill {f}^{\text{'}}\left(v\right)& =& \frac{3}{40}v-6\hfill \\ \hfill 0& =& \frac{3}{40}v-6\hfill \\ \hfill v& =& \frac{6×40}{3}\hfill \\ & =& 80\hfill \end{array}$
This means that the most economical speed is 80 km $·$ hr ${}^{-1}$ .
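We can confirm that this stationary point is a minimum rather than a maximum by looking at the second derivative, which is positive for all $v$:
${f}^{\text{'}\text{'}}\left(v\right)=\frac{3}{40}>0$
so the graph is concave up at the turning point and $v=80$ indeed gives the least fuel consumption.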
The sum of two positive numbers is 10. If each number is greater than 0, find the numbers that make their product a maximum.
1. Let the two numbers be $a$ and $b$ . Then we have:
$a+b=10$
We are required to maximise the product of $a$ and $b$ . Call the product $P$ . Then:
$P=a·b$
We can solve for $b$ from [link] to get:
$b=10-a$
Substitute this into [link] to write $P$ in terms of $a$ only.
$P=a\left(10-a\right)=10a-{a}^{2}$
2. The derivative of [link] is: ${P}^{\text{'}}\left(a\right)=10-2a$
3. Set ${P}^{\text{'}}\left(a\right)=0$ to find the value of $a$ which makes $P$ a maximum.
$\begin{array}{ccc}\hfill {P}^{\text{'}}\left(a\right)& =& 10-2a\hfill \\ \hfill 0& =& 10-2a\hfill \\ \hfill 2a& =& 10\hfill \\ \hfill a& =& \frac{10}{2}\hfill \\ \hfill a& =& 5\hfill \end{array}$
Substitute into [link] to solve for $b$ .
$\begin{array}{ccc}\hfill b& =& 10-a\hfill \\ & =& 10-5\hfill \\ & =& 5\hfill \end{array}$
4. The product is maximised if $a$ and $b$ are both equal to 5.
Michael wants to start a vegetable garden, which he decides to fence off in the shape of a rectangle from the rest of the garden. Michael only has 160 m of fencing, so he decides to use a wall as one border of the vegetable garden. Calculate the width and length of the garden that corresponds to largest possible area that Michael can fence off.
1. The important pieces of information given are related to the area and modified perimeter of the garden. We know that the area of the garden is: $A=w·l$ We are also told that the fence covers only 3 sides and the three sides should add up to 160 m. This can be written as: $160=w+l+l$
However, we can use [link] to write $w$ in terms of $l$ : $w=160-2l$ Substitute [link] into [link] to get: $A=\left(160-2l\right)l=160l-2{l}^{2}$
2. Since we are interested in maximising the area, we differentiate [link] to get: ${A}^{\text{'}}\left(l\right)=160-4l$
3. To find the stationary point, we set ${A}^{\text{'}}\left(l\right)=0$ and solve for the value of $l$ that maximises the area.
$\begin{array}{ccc}\hfill {A}^{\text{'}}\left(l\right)& =& 160-4l\hfill \\ \hfill 0& =& 160-4l\hfill \\ \hfill \therefore 4l& =& 160\hfill \\ \hfill l& =& \frac{160}{4}\hfill \\ \hfill l& =& 40\mathrm{m}\hfill \end{array}$
Substitute into [link] to solve for the width.
$\begin{array}{ccc}\hfill w& =& 160-2l\hfill \\ & =& 160-2\left(40\right)\hfill \\ & =& 160-80\hfill \\ & =& 80\mathrm{m}\hfill \end{array}$
4. A width of 80 m and a length of 40 m will yield the maximal area fenced off.
## Solving optimisation problems using differential calculus
1. The sum of two positive numbers is 20. One of the numbers is multiplied by the square of the other. Find the numbers that make this product a maximum.
2. A wooden block is made as shown in the diagram. The ends are right-angled triangles having sides $3x$ , $4x$ and $5x$ . The length of the block is $y$ . The total surface area of the block is $3600{\mathrm{cm}}^{2}$ .
1. Show that $y=\frac{300-{x}^{2}}{x}$ .
2. Find the value of $x$ for which the block will have a maximum volume. (Volume = area of base $×$ height.)
3. The diagram shows the plan for a verandah which is to be built on the corner of a cottage. A railing $ABCDE$ is to be constructed around the four edges of the verandah. If $AB=DE=x$ and $BC=CD=y$ , and the length of the railing must be 30 metres, find the values of $x$ and $y$ for which the verandah will have a maximum area.
how can chip be made from sand
are nano particles real
yeah
Joseph
Hello, if I study Physics teacher in bachelor, can I study Nanotechnology in master?
no can't
Lohitha
where we get a research paper on Nano chemistry....?
nanopartical of organic/inorganic / physical chemistry , pdf / thesis / review
Ali
what are the products of Nano chemistry?
There are lots of products of nano chemistry... Like nano coatings.....carbon fiber.. And lots of others..
learn
Even nanotechnology is pretty much all about chemistry... Its the chemistry on quantum or atomic level
learn
da
no nanotechnology is also a part of physics and maths it requires angle formulas and some pressure regarding concepts
Bhagvanji
hey
Giriraj
Preparation and Applications of Nanomaterial for Drug Delivery
revolt
da
Application of nanotechnology in medicine
has a lot of application modern world
Kamaluddeen
yes
narayan
what is variations in raman spectra for nanomaterials
ya I also want to know the raman spectra
Bhagvanji
I only see partial conversation and what's the question here!
what about nanotechnology for water purification
please someone correct me if I'm wrong but I think one can use nanoparticles, specially silver nanoparticles for water treatment.
Damian
yes that's correct
Professor
I think
Professor
Nasa has use it in the 60's, copper as water purification in the moon travel.
Alexandre
nanocopper obvius
Alexandre
what is the stm
is there industrial application of fullrenes. What is the method to prepare fullrene on large scale.?
Rafiq
industrial application...? mmm I think on the medical side as drug carrier, but you should go deeper on your research, I may be wrong
Damian
How we are making nano material?
what is a peer
What is meant by 'nano scale'?
What is STMs full form?
LITNING
scanning tunneling microscope
Sahil
how nano science is used for hydrophobicity
Santosh
Do u think that Graphene and Fullrene fiber can be used to make Air Plane body structure the lightest and strongest. Rafiq
Rafiq
what is differents between GO and RGO?
Mahi
what is simplest way to understand the applications of nano robots used to detect the cancer affected cell of human body.? How this robot is carried to required site of body cell.? what will be the carrier material and how can be detected that correct delivery of drug is done Rafiq
Rafiq
if virus is killing to make ARTIFICIAL DNA OF GRAPHENE FOR KILLED THE VIRUS .THIS IS OUR ASSUMPTION
Anam
analytical skills graphene is prepared to kill any type viruses .
Anam
Any one who tell me about Preparation and application of Nanomaterial for drug Delivery
Hafiz
what is Nano technology ?
write examples of Nano molecule?
Bob
The nanotechnology is as new science, to scale nanometric
brayan
nanotechnology is the study, desing, synthesis, manipulation and application of materials and functional systems through control of matter at nanoscale
Damian
how did you get the value of 2000N.What calculations are needed to arrive at it
https://proofwiki.org/wiki/Derivative_of_Square_Function
# Derivative of Square Function
## Theorem
Let $f: \R \to \R$ be the square function:
$\forall x \in \R: f \paren x = x^2$
Then the derivative of $f$ is given by:
$f' \paren x = 2 x$
## Proof
\begin{align*}
f' \paren x &= \lim_{h \mathop \to 0} \frac {f \paren {x + h} - f \paren x} h && \text{Definition of Derivative of Real Function at Point} \\
&= \lim_{h \mathop \to 0} \frac {\paren {x + h}^2 - x^2} h \\
&= \lim_{h \mathop \to 0} \frac {x^2 + 2 x h + h^2 - x^2} h \\
&= \lim_{h \mathop \to 0} \frac {2 x h + h^2} h \\
&= \lim_{h \mathop \to 0} \paren {2 x + h} \\
&= 2 x
\end{align*}
$\blacksquare$
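As an informal cross-check (not part of the ProofWiki page), the same limit can be evaluated with SymPy:
import sympy as sp

x, h = sp.symbols('x h')
quotient = ((x + h)**2 - x**2) / h  # difference quotient of the square function
print(sp.limit(quotient, h, 0))     # prints 2*x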
https://nbviewer.jupyter.org/gist/ramnathv/9334834/example.ipynb
In [1]:
%load_ext rmagic
Introduction to ggplot2
This is a short demo on how to convert an R Markdown Notebook into an IPython Notebook using knitr and notedown.
In [2]:
def f(x):
return x + 2
f(2)
Out[2]:
4
This is an introduction to ggplot2. You can view the source as an R Markdown document, if you are using an IDE like RStudio, or as an IPython notebook, thanks to notedown.
We need to first make sure that we have ggplot2 and its dependencies installed, using the install.packages function.
Now that we have it installed, we can get started by loading it into our workspace
In [3]:
%%R
library(ggplot2)
We are now fully set to try and create some amazing plots.
Data
We will use the ubiquitous iris dataset.
In [4]:
%%R
head(iris)
 Sepal.Length Sepal.Width Petal.Length Petal.Width Species
1 5.1 3.5 1.4 0.2 setosa
2 4.9 3.0 1.4 0.2 setosa
3 4.7 3.2 1.3 0.2 setosa
4 4.6 3.1 1.5 0.2 setosa
5 5.0 3.6 1.4 0.2 setosa
6 5.4 3.9 1.7 0.4 setosa
Simple Plot
Let us create a simple scatterplot of Sepal.Length with Petal.Length.
In [5]:
%%R
ggplot(iris, aes(x = Sepal.Length, y = Petal.Length)) + geom_point()
The basic idea in ggplot2 is to map different plot aesthetics to variables in the dataset. In this plot, we map the x-axis to the variable Sepal.Length and the y-axis to the variable Petal.Length.
Now suppose, we want to color the points based on the Species. ggplot2 makes it really easy, since all you need to do is map the aesthetic color to the variable Species.
In [6]:
%%R
ggplot(iris, aes(x = Sepal.Length, y = Petal.Length)) + geom_point(aes(color = Species))
Note that I could have included the color mapping right inside the ggplot line, in which case this mapping would have been applicable globally through all layers. If that doesn't make any sense to you right now, don't worry, as we will get there by the end of this tutorial.
We are interested in the relationship between Petal.Length and Sepal.Length. So, let us fit a regression line through the scatterplot. Now, before you start thinking you need to run a lm command and gather the predictions using predict, I will ask you to stop right there and read the next line of code.
In [7]:
%%R
ggplot(iris, aes(x = Sepal.Length, y = Petal.Length)) + geom_point() + geom_smooth(method = "lm",
se = F)
If you were like me the first time I ran this, you might be thinking this is voodoo! I thought so too, but apparently it is not. It is the beauty of ggplot2 and the underlying notion of the grammar of graphics.
You can extend this idea further and have a regression line plotted for each Species.
In [8]:
%%R
ggplot(iris, aes(x = Sepal.Length, y = Petal.Length, color = Species)) + geom_point() +
geom_smooth(method = "lm", se = F)
http://mathhelpforum.com/discrete-math/139173-sets-problem.html
Math Help - A sets problem
1. A sets problem
Hello,
I have a sets problem which I haven't managed to solve.
I'm attaching a pdf file of the problem.
I would be glad to have some help
thanks
https://leanprover-community.github.io/archive/stream/113489-new-members/topic/proving.20a.20.2B.20a.20.3D.202*.20a.html
## Stream: new members
### Topic: proving a + a = 2* a
#### Atze van der Ploeg (Jan 11 2019 at 10:53):
I'm trying to prove the shocking lemma ∀ n : ℕ, n + n = 2*n. How do I tell Lean to unfold the definition of *?
#### Kenny Lau (Jan 11 2019 at 10:59):
you don't; you just tell Lean what you want it to unfold to
#### Chris Hughes (Jan 11 2019 at 10:59):
There's a lemma nat.mul_succ
#### Rob Lewis (Jan 11 2019 at 11:00):
You can use simp only [(*)]. This will unfold the notation, then you'll have to deal with the definition nat.mul.
#### Chris Hughes (Jan 11 2019 at 11:00):
Sorry, nat.succ_mul
#### Rob Lewis (Jan 11 2019 at 11:00):
You can unfold that with the same method, or unfold nat.mul, but since it's defined by recursion on n, you'll have to use induction first.
#### Chris Hughes (Jan 11 2019 at 11:01):
You can't actually unfold multiplication in this scenario, since it's defined by recursion on the second argument.
#### Kenny Lau (Jan 11 2019 at 11:01):
well I imagined he would do induction on n
#### Atze van der Ploeg (Jan 11 2019 at 12:05):
Thanks, nat.succ_mul works fine, as does using mul_comm, but this requires me to know that nat.mul is defined by recursion on the second argument.
#### Kevin Buzzard (Jan 11 2019 at 12:23):
You can just use the 'ring' tactic if you don't care about what the low level proof looks like
#### Mark Dickinson (Jan 11 2019 at 18:40):
There's also two_mul in the standard library:
two_mul : ∀ {α : Type u_1} [_inst_1 : semiring α] (n : α), 2 * n = n + n
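Putting the thread's suggestions together, a minimal sketch in Lean 3 syntax (combining two_mul with the ring-tactic suggestion above; an illustration, not part of the original thread):
example (n : ℕ) : n + n = 2 * n := (two_mul n).symm

-- or, without caring what the low-level proof looks like:
example (n : ℕ) : n + n = 2 * n := by ring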
https://quomodocumque.wordpress.com/tag/ordinarity/
## Soumya Sankar: Proportion of ordinarity in families of curves over finite fields
What’s the chance that a random curve has ordinary Jacobian? You might instinctively say “It must be probability 1” because the non-ordinary locus is a proper closed subvariety of M_g. (This is not obvious by pure thought, at least to me, and I don’t know who first proved it! I imagine you can check it by explicitly exhibiting a curve of each genus with ordinary Jacobian, but I’m not sure this is the best way.)
Anyway, the point is, this instinctive response is wrong! At least it’s wrong if you interpret the question the way I have in mind, which is to ask: given a random curve X of genus g over F_q, with g growing as q stays fixed, is there a limiting probability that X has ordinary Jacobian? And this might not be 1, in the same way that the probability that a random polynomial over F_q is squarefree is not 1, but 1-1/q.
Bryden Cais, David Zureick-Brown and I worked out some heuristic guesses for this problem several years ago, based on the idea that the Dieudonne module for a random curve might be a random Dieudonne module, and then working out in some detail what in the Sam Hill one might mean by “random Dieudonne module.” Then we did some numerical experiments which showed that our heuristic looked basically OK for plane curves of high degree, but pretty badly wrong for hyperelliptic curves of high genus. But there was no family of curves for which one could prove either that our heuristic was right or that it was wrong.
Now there is, thanks to my Ph.D. student Soumya Sankar. Unfortunately, there are still no families of curves for which our heuristics are provably right. But there are now several for which it is provably wrong!
15.7% of Artin-Schreier curves over F_2 (that is: Z/2Z-covers of P^1/F_2) are ordinary. (The heuristic proportion given in my paper with Cais and DZB is about 42%, which matches data drawn from plane curves reasonably well.) The reason Sankar can prove this is because, for Artin-Schreier curves, you can test ordinarity (or, more generally, compute the a-number) in terms of the numerical invariants of the ramification points; the a-number doesn’t care where the ramification points are, which would be a more difficult question.
On the other hand, 0% of Artin-Schreier curves over F_q are ordinary for any finite field of odd characteristic! What's going on? It turns out that it's only in characteristic 2 that the Artin-Schreier locus is irreducible; in larger characteristics, it turns out that the locus has irreducible components whose number grows with genus, and the ordinary curves live on only one of these components. This "explains" the rarity of ordinarity (though this fact alone doesn't prove that the proportion of ordinarity goes to 0; Sankar does that another way.) Natural question: if you just look at the ordinary component, does the proportion of ordinary curves approach a limit? Sankar shows this proportion is bounded away from 0 in characteristic 3, but in larger characteristics the combinatorics get complicated! (All this stuff, you won't be surprised to hear, relies on Rachel Pries's work on the interaction of special loci in M_g with the Newton stratification.)
Sankar also treats the case of superelliptic curves y^n = f(x) in characteristic 2, which turns out to be like that of Artin-Schreier in odd characteristics; a lot of components, only one with ordinary points, probability of ordinarity going to zero.
Really nice paper which raises lots of questions! What about more refined invariants, like the shape of the Newton polygon? What about other families of curves? I’d be particularly interested to know what happens with trigonal curves which (at least in characteristic not 2 or 3, and maybe even then) feel more “generic” to me than curves with extra endomorphisms. Is there any hope for our poor suffering heuristics in a family like that?
https://knowen.org/nodes/335?path=1-929-337-335
# Export pages from Knowen
Knowen allows you to export an arbitrary subtree into a single HTML file. Your browser can convert it into a PDF for you. For example, there is a Save to PDF option in the Print menu in Google Chrome.
We are working now on one-click export to $\LaTeX$ and PDF. Meanwhile you will need to use a third-party converter. We recommend Pandoc, which is available on Windows, Mac and Linux. It can produce, among others, $\LaTeX$, epub and docx outputs. The downside is that you will have to run it from the console.
To convert an exported HTML file into $\LaTeX$, follow the steps below:
1. Open the file with your favorite text editor. Remove the last 9 lines at the end (starting with <script src="http://cdn.mathjax.org/...). Save this file with a .md extension.
3. To obtain a $\LaTeX$ file, invoke Pandoc with the following command: pandoc -f markdown_github+tex_math_dollars -s -V lang=english -o output_filename.tex input_filename.md. Here input_filename.md is the file you saved at step 1, and the output will be written to output_filename.tex. If needed, substitute english with your language name (it will be passed to the babel package in $\LaTeX$). You can substitute .tex with other supported extension like .docx or .epub.
4. You might want to tweak the $\LaTeX$ source you have gotten before compiling it to PDF. Here is some advice:
1. You might want to comment out \usepackage{lmodern}
2. Insert \usepackage{cmap} to support searching in PDF
http://www.helpteaching.com/tests/356439/acceleration
## Acceleration
Instructions: Read each question carefully. Choose the answer that best fits the question. Short answer response questions must be responded to in complete sentences. If the question involves calculations, you must show all your math work.
1.
If Tina starts out at 10 m/s, and in 10 s speeds up to 20 m/s, what is her acceleration?
1. $100\ \mathrm{m/s^2}$
2. $2\ \mathrm{m/s^2}$
3. $1\ \mathrm{m/s^2}$
4. $3\ \mathrm{m/s^2}$
2.
A bike accelerates from 100 km/hr to 200 km/hr in 4s. What is the acceleration?
1. $20\ \mathrm{m/s^2}$
2. $25\ \mathrm{m/s^2}$
3. $0\ \mathrm{m/s^2}$
4. $100\ \mathrm{m/s^2}$
3.
If a body goes from 10m/s to 50m/s in 8 seconds, what is its acceleration?
1. $1.25\ \mathrm{m/s^2}$
2. $5\ \mathrm{m/s^2}$
3. $6.25\ \mathrm{m/s^2}$
4. $40\ \mathrm{m/s^2}$
4.
If the acceleration of an object is negative, its velocity
1. will be increasing.
2. will be decreasing.
3. will be zero.
4. will be constant.
5.
What is the definition of acceleration?
1. An object's resistance to motion
2. How fast
3. How much stuff (matter) is in an object
4. The change in velocity over time
6.
A car changes its speed from 0 to 25 m/s in 3.6 seconds. What is its acceleration?
7.
A man is driving his car north on the highway at a constant velocity of 50 mph for 100 s. What is the value of his acceleration?
8.
A quarter is flipped up in the air at the start of a football game and returns to the point from which it was flipped after $2.5\ \mathrm{s}$. Find the quarter's original speed. The acceleration of the quarter is $9.8\ \mathrm{m/s^2}$ downward.
1. 4.9 m/s
2. 25 m/s
3. 12.25 m/s
4. 49 m/s
9.
A watermelon is dropped from the roof of a three-story building. Assuming the friction of air is negligible, how far does the watermelon have to fall in order to reach a velocity of 10 m/s?
1. 4.9 m
2. 5.1 m
3. 9.8 m
4. impossible to determine
10.
You drop a penny and a silver dollar out of a high window. Which will hit the ground first if there is no air resistance?
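For quick checking, here is a small Python sketch of the kinematics used in questions 1, 3, 6, 8 and 9 (question numbering as above; this is an illustration, not part of the original test):
g = 9.8  # m/s^2, magnitude of gravitational acceleration

def accel(v0, v1, t):
    # average acceleration = change in velocity / elapsed time
    return (v1 - v0) / t

print(accel(10, 20, 10))   # Q1: 1.0 m/s^2
print(accel(10, 50, 8))    # Q3: 5.0 m/s^2
print(accel(0, 25, 3.6))   # Q6: ~6.94 m/s^2
print(g * 2.5 / 2)         # Q8: time up is half of 2.5 s, so v0 = g*t/2 = 12.25 m/s
print(10**2 / (2 * g))     # Q9: from v^2 = 2gh, h = v^2/(2g) ~ 5.1 m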
https://tex.stackexchange.com/questions/202281/multiple-bibliographies-one-for-each-chapter-with-biblatex
# Multiple Bibliographies one for each Chapter with Biblatex
I've been struggling all day to include multiple bibliographies (one for each chapter) in my report. I first tried using chapterbib, but that isn't working:
How to include multiple bibliographies?
So I want to look at alternatives, as I simply must get this to work!
I have tried doing something similar with biblatex, but it falls short in a few key ways:
3.) I'd like to be able to use separate .bib files for each chapter. Still can't figure out how to do this!
One step forward: I can now load multiple .bib files in the preamble. The refsections then take whichever references are cited in their section from both .bib files and display them at the \printbibliography in their section. The \printbibliography at the end of the document displays both .bib files in their entirety.
But what I want is to be able to print the entire contents of just one of the .bib files at the end of each chapter.
1.) Currently it just prints a bibliography of all the citations made in the report, thus producing the same bibliography three times; at the very least it should only show the references that were cited in the current chapter.
OK, I have solved Q1 (at least partially): I put a refsection around each chapter, then ran bibtex on the corresponding .aux files. It now prints a separate bibliography for each chapter containing references for all citations in that chapter. However, it no longer prints the big bibliography at the end.
2.) It only shows cited references in the bibliography; I want to show all references whether they were cited or not. OK, this is solved by adding \nocite{*} to the document.
Can all this be achieved with biblatex?
\documentclass{report}
\usepackage[backend=bibtex]{biblatex}
\addbibresource{Introduction.bib}
\begin{document}
\nocite{*}
\chapter{Test bib 1}
\begin{refsection}
\input{TB1.tex}
\end{refsection}
\chapter{Test bib 2}
\begin{refsection}
\input{TB2.tex}
\end{refsection}
\newpage
\printbibliography
\end{document}
TB1.tex:
\cite{A12} (LRRE) \cite{XQ11}
TB2.tex
\cite{NS87} and its extension by \cite{Sven94}
Introduction.bib
@article{XQ11,
author = {Gao, F and Liu, XQ.},
title = {Linearized Ridge Regression Estimator Under the Mean Square Error Criterion in a Linear Regression Model},
journal = {Communications in Statistics-Simulation and Computation},
volume = {40},
year = {2011},
pages={1434-1443},
}
@misc{A12,
Author = {Anneart, J. and Claes, A.G.P.,and De Ceuster, M.J.K. and Zhang, H.},
Title = {Estimating the Yield Curve Using the Nelson-Siegel Model: A Ridge Resgression Appoach},
howpublished={International Review of Economics and Finance, Forthcoming},
Year = {2012},
}
@article{NS87,
author = {Nelson, C. R. and Siegel, A. F},
title = {Parsimonious Modelling of Yield Curves},
journal = {The Journal of Business},
volume = {60},
issue={4},
year = {1987},
pages={473-489},
}
@misc{Sven94,
Author = {Svensson, L.E.O},
Title = {Estimating and Interpreting Forward Interest Rates: Sweden 1992-1994},
howpublished={IMF Working Paper},
note = {WP/94/114},
Year = {1994},
pages={1-49} }
@article{CP01,
Author = {Cairns, A.J.G. and Pritchard, D.J.},
Title = {Stability of Descriptive Models for the Term Structure of Interest Rates with Applications to German Market Data},
journal = {British Actuarial Journal},
volume = {7},
year = {2001},
pages={467-507}}
• Do you want to use \nocite{*} in the chapter and print only the cites from one .bib file? For this, do not load the .bib file in the preamble; you can load the .bib in the refsection using \begin{refsection}[introduction.bib]<content>\end{refsection}. The global .bib can be loaded using \addglobalbib{resource} in the preamble. – Carlos Lanziano Sep 20 '14 at 22:01
• biblatex allows you to create lists of references divided according to the keywords you use, but I'm not sure yet how to achieve exactly what you want. As the backend you can use biber; it allows you to manage the references in a more efficient way. – Aradnix Sep 20 '14 at 22:13
• Please correct A12. For details see my answer to your question about how to do this with chapterbib! – cfr Sep 20 '14 at 22:21
I think this does what you want (having just reread your question).
If you plan to include a global bibliography, you need to use refsegment rather than refsection, I think, to ensure unique labels.
\documentclass{report}
\usepackage{filecontents}
\begin{filecontents}{\jobname1.bib}
@article{XQ11,
author = {Gao, F and Liu, XQ.},
title = {Linearized Ridge Regression Estimator Under the Mean Square Error Criterion in a Linear Regression Model},
journal = {Communications in Statistics-Simulation and Computation},
volume = {40},
year = {2011},
pages={1434-1443}}
@misc{A12,
Author = {Anneart, J. and Claes, A.G.P. and De Ceuster, M.J.K. and Zhang, H.},
Title = {Estimating the Yield Curve Using the Nelson-Siegel Model: A Ridge Resgression Appoach},
howpublished={International Review of Economics and Finance, Forthcoming},
Year = {2012}}
\end{filecontents}
\begin{filecontents}{\jobname2.bib}
@article{NS87,
author = {Nelson, C. R. and Siegel, A. F},
title = {Parsimonious Modelling of Yield Curves},
journal = {The Journal of Business},
volume = {60},
issue={4},
year = {1987},
pages={473-489}}
@misc{Sven94,
Author = {Svensson, L.E.O},
Title = {Estimating and Interpreting Forward Interest Rates: Sweden 1992-1994},
howpublished={IMF Working Paper},
note = {WP/94/114},
Year = {1994},
pages={1-49}}
@article{CP01,
Author = {Cairns, A.J.G. and Pritchard, D.J.},
Title = {Stability of Descriptive Models for the Term Structure of Interest Rates with Applications to German Market Data},
journal = {British Actuarial Journal},
volume = {7},
year = {2001},
pages={467-507}}
\end{filecontents}
\usepackage[backend=biber, refsegment=chapter, defernumbers=true]{biblatex}
\begin{document}
\chapter{Test bib 1}
\cite{A12} (LRRE) \cite{XQ11}
\printbibliography[segment=\the\value{refsegment}] % per-chapter list for the current refsegment
\chapter{Test bib 2}
\cite{NS87} and its extension by \cite{Sven94}
\printbibliography[segment=\the\value{refsegment}]
\end{document}
• In most cases one can use segment=\the\value{refsegment} instead of an explicit segment=1 etc. It is longer, but does not require that one keep track of the segment numbers. – moewe May 1 at 16:02
http://math.stackexchange.com/questions/46479/how-do-i-compute-this-probability-distribution
# How do I compute this probability distribution?
Choose a random number uniformly over the unit interval, $x_1 \in [0,1]$. Now choose a second number that is dependent on the first, such that it is chosen uniformly over the interval $x_2 \in [0, x_1]$. What is the probability distribution of $x_2$? Is it a distribution that is already well-known?
Here's what I got so far - I examined small discrete cases, say with the integers $\{1,2,3\}$. In this case the probabilities work out to:
\begin{array}{rcl} P(1,3) &=& (1/3)(1/1 + 1/2 + 1/3) \cr P(2,3) &=& (1/3)(1/2 + 1/3) \cr P(3,3) &=& (1/3)(1/3) \end{array}
This generalizes to something like $$P(n,k) = \frac{1}{k} \sum_{q=n}^k \frac{1}{q}$$ which can be reduced to the Digamma function $$= \frac{\Psi(k+1) - \Psi(n)}{k}$$
Because of this, I would not be surprised if the answer was some combination of gamma functions. I'm stuck here, since I'm not quite sure whether taking the limit of large $k$ will (upon normalization) give the answer I'm looking for.
$x_2$ is equal in distribution to $U_1 U_2$ where $U_1$ and $U_2$ are independent uniform$(0,1)$ random variables. It is very easy to find the distribution of $U_1 U_2$...
EDIT: Continuing this iteratively, so that $x_{n+1}$ is uniform on $(0,x_n)$, the infinite sum $x:=x_1+x_2+x_3+\cdots$ (which is finite with probability $1$, by the monotone convergence theorem) is equal in distribution to $U_1 + U_1 U_2 + U_1 U_2 U_3 + \cdots$, where the $U_i$ are independent uniform$(0,1)$ rv's. The distribution of $x$ is the Dickman distribution.
EDIT: Here are three ways to compute the distribution function of $x_2$ (the first one is the direct approach). It is given, for $0 < x \leq 1$, by $F(x)=x - x \log x$; hence $x_2$ has probability density function $f(x)=-\log(x)$, $0 < x < 1$ (as leonbloy already found).
Approach 1): Since conditioned on $x_1 = s$, $0 < s < 1$, $x_2$ is uniformly distributed on $(0,s)$, the law of total probability gives, for $0 < x \leq 1$, $${\rm P}(x_2 \le x) = \int_0^1 {{\rm P}(x_2 \le x|x_1 = s)ds} = \int_0^x {{\rm P}(x_2 \le x|x_1 = s)ds} + \int_x^1 {{\rm P}(x_2 \le x|x_1 = s)ds}$$ $$= \int_0^x {1ds} + \int_x^1 {\frac{x}{s}ds} = x - x\log x.$$
Approach 2): $x_2$ is distributed as $U_1 U_2$, where $U_1$ and $U_2$ are independent uniform$(0,1)$ rv's. Hence, $${\rm P}(x_2 \le x) = {\rm P}(U_1 U_2 \le x) = \int_0^1 {{\rm P}(U_1 U_2 \le x|U_1 = s)ds} = \int_0^1 {P\bigg(U_2 \le \frac{x}{s}\bigg)ds}$$ $$= \int_0^x {P\bigg(U_2 \le \frac{x}{s}\bigg)ds} + \int_x^1 {P\bigg(U_2 \le \frac{x}{s}\bigg)ds} = \int_0^x {1ds} + \int_x^1 {\frac{x}{s}ds} = x - x \log x.$$ (So, approaches 1) and 2) are quite similar: both rely on the law of total probability.)
Approach 3): Let $U_i$, $i=1,2$, be as above, and note that $-\log U_i$ is exponential$(1)$. Then, for any $0 < x \leq 1$, $${\rm P}(x_2 \le x) = {\rm P}(U_1 U_2 \le x) = {\rm P}(\log U_1 + \log U_2 \le \log x) = {\rm P}( - \log x \le \xi_1 + \xi_2 ),$$ where $\xi_1$ and $\xi_2$ are independent exponential$(1)$ rv's. The random variable $\xi_1+\xi_2$ has gamma density function $ye^{-y}$, $y > 0$. Hence, $${\rm P}(x_2 \le x) = \int_{ - \log x}^\infty {ye^{ - y} dy} = -e^{-y}(y+1) \big|_{ - \log x}^\infty = x - x \log x.$$ This approach can be useful for determining the distribution of $U_1 \cdots U_n$, where $U_i$ are independent uniform$(0,1)$ rv's.
It can be interesting to attack the problem from a discrete analog, but it's not a very simple way here. What your problem statement gives you directly is the conditional probability $P(x_2 | x_1)$ and the marginal $P(x_1)$. Then...
$$P(x_2) = \int P(x_2,x_1) dx_1 = \int P(x_2 | x_1) P(x_1) dx_1$$
You have $P(x_1)=1$ in $[0,1]$ and $P(x_2 | x_1) = 1/x_1$ with $0 \le x_2 \le x_1 \le 1$, so you get $P(x_2) = - \log(x_2)$
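A quick numerical check of this density (a NumPy sketch, not part of the original answers): the empirical CDF of $x_2$ should match $F(x) = x - x\log x$.
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.uniform(0.0, 1.0, 1_000_000)
x2 = rng.uniform(0.0, x1)  # each x2 is uniform on (0, x1)
# compare the empirical CDF with F(x) = x - x*log(x) at a few points
for x in (0.1, 0.5, 0.9):
    print(x, (x2 <= x).mean(), x - x * np.log(x))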
http://reputablejournal.com/adventures-with-flask-cors.html
As I am working through the new architecture for the Runestone Interactive server I wanted to make sure that I had my authentication model working right from the beginning. The goals for the architecture are as follows:
- I want people to be able to write and host the static parts of any book on any server. You can think of each page in a book as its own single page application.
- I want to provide back-end services so that students using any book hosted anywhere can save their programs and answers to quizzes etc.
- I want to continue to gather research data on how students learn computer science.
- I want to make the registration and login process as easy as possible.
Since the static parts can be hosted anywhere (including a site like interactivepython.org), the interactive parts are going to involve making cross-domain XMLHttpRequests (xhr). Of course the first thing that happens when you have a page hosted on static-site that makes an xhr request to ajax-server is that you get an error. Browsers and sites work together to disallow cross-domain requests to prevent a variety of nasty behaviors. But there are many times (like now) when you have a legitimate reason for doing this. So the w3c created the Cross Origin Resource Sharing (CORS) standard to help developers get around this. Cory Dolphin has created an excellent plugin for Flask developers called Flask-CORS. The plugin is a great example of the brilliant design behind Flask and in fact the entire WSGI stack.
## The Really Simple Approach
The first thing you find when you start googling about this problem is that there is a seemingly simple solution. If you have control over your AJAX response, you simply need to add an HTTP header: Access-Control-Allow-Origin: *. Problem solved. Now everyone in the world can make xhr requests to your server and use the results in their page.
Adding a header is pretty simple in Flask. All you need to do is use response.headers.add("Access-Control-Allow-Origin", "*"). Problem solved, moving right along to the next programming challenge.
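In Flask, the naive version might look like this (a minimal sketch; the route and payload are made up for illustration):
from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/ajax/page')
def page():
    resp = jsonify({'foo': 'bar'})
    # the "really simple" (and flawed) approach: allow every origin
    resp.headers.add('Access-Control-Allow-Origin', '*')
    return resp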
Or maybe not. Minutes later you realize that this is not all that great, because you have decorated some of your requests to require a login. That won't be a problem if the static page is served from the same domain, because you will automatically get the session cookie, and the Flask-Security extension will eat that cookie and validate things for you. BUT if your static page is not served from the same domain you will not even get the session cookie. Oh Bother.
But you also have a second problem: you have probably violated the CORS specification without even meaning to. Really, if I had to read the spec for every web standard I wanted to use I would seriously think about changing careers. But here is the important part: you may not return a CORS header unless the request contains an Origin header! Chances are you tested your change with a quick curl call to your endpoint, saw the Access-Control header, and were happy. But you sure didn't give it an Origin header on the request when you did that.
So to summarize, we have two problems we need to solve: 1. We want to incorporate authentication into our cross origin strategy. 2. We want to be good citizens and follow the spec.
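To see the difference, you can test the endpoint with and without an Origin header; a small sketch using the requests library (the URLs are hypothetical):
import requests

url = 'http://ajax-server.example.com/ajax/page'
# no Origin header: a spec-following server should return no CORS headers
plain = requests.get(url)
# with an Origin header, as a browser would send for a cross-origin xhr
cross = requests.get(url, headers={'Origin': 'http://static-site.example.com'})
print(plain.headers.get('Access-Control-Allow-Origin'))
print(cross.headers.get('Access-Control-Allow-Origin'))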
## The Smart Approach
The smart approach is to use a nice extension where other people have figured this out, and presumably followed the specification. Enter Flask-CORS. You can enable CORS support with a simple decorator, @cross_origin. This will automatically add the Access-Control-Allow-Origin: * header to responses, as long as your test request includes an Origin. If you are like me you will forget that part, and then wonder why the extension must not be working. So this solves problem 2.
To solve problem 1 here is a snippet of code that works just fine.
@ajax.route('/ajax/page')
@login_required
@cross_origin(supports_credentials=True)
def test():
    return jsonify({'foo': 'bar'})
The above responds to the url /ajax/page. I have all of my API calls in an ajax blueprint with ajax as part of the url. I'm requiring that the user is logged in before I allow them to access this endpoint. I also want it to be allowed cross origin. This is where the parameter to @cross_origin comes into play. supports_credentials sets up the CORS response to return an additional CORS header: Access-Control-Allow-Credentials: "true". For one final twist, you need to know that when you have supports_credentials=True you may NOT set Access-Control-Allow-Origin: *. You need to be specific and set the origin to the origin that comes in the request headers. To make this work and try it out from the client side, here is a bit of HTML/Javascript.
<button onclick="corsTest();">Test CORS</button>
<script>
function corsTest() {
    var xhr = new XMLHttpRequest();
    xhr.withCredentials = true;  // send the session cookie with the cross-origin request
    xhr.onerror = function () {
        // handle a failed request here
    };
    xhr.open("GET", "http://example.com/ajax/page", true);
    xhr.send();
}
</script>
Note that you need to set xhr.withCredentials in order for your session cookie to be sent along. By default cookies are NOT sent with cross origin requests.
Now, I may end up adding more to this as I discover the intricacies of so called "Non-Simple" requests, that is, requests beyond simple GET and POST, as I work on moving my API toward a RESTful API which uses PUT and others. This will no doubt enlighten me about preflighted requests. Which I can only assume means something different than sitting around in an airport bar waiting for your flight to be called.
There is a lot more detail and background on using CORS at the following two sites:
https://eprint.iacr.org/2019/1219
## Cryptology ePrint Archive: Report 2019/1219
Multi-Locking and Perfect Argument Order: Two Major Improvements of Attribute-Based Encryption (Long Paper)
Nugier Cyrius and Adelin Remi and Migliore Vincent and Alata Eric
Abstract: Attribute-Based Encryption, proposed by Sahai and Waters in 2007, is a set of promising cryptographic schemes that enable various forms of fine-grained access control on encrypted data. With a unique encryption key, a user is able to encrypt data for a very specific group of recipients that matches a set of attributes contained inside their decryption key. In the current scenario, where personal devices share an increasing volume of private data on the web, such encryption algorithms are more than ever a strong alternative to standard encryption algorithms.
In this paper, we propose two major improvements of ABE, namely the Perfect Argument Order Optimization and Multi-Locking. Multi-Locking ABE is an extension of ABE that makes it possible to share access control policy across an arbitrary number of entities. We also make a step further in the speed-up of ABE by providing the "Perfect Argument Order Optimization", which is a generalization of the "Fixed Argument Optimization" of Scott et al. to a much wider range of ABE constructions (and in particular to our Multi-Locking ABE). Based on those two improvements we propose a construction of the first privacy-preserving Cloud service based on ABE, allowing ephemeral accesses to the data. The Multi-Locking ABE and the Perfect Argument Order Optimization have been successfully integrated into the OpenABE library, providing a speed-up for a variety of ABE constructions.
Category / Keywords: implementation / Attribute-Based Encryption, Optimization, Privacy
Date: received 17 Oct 2019
Contact author: cyriusnugier at free fr
Available format(s): PDF | BibTeX Citation
Short URL: ia.cr/2019/1219
https://docs.lammps.org/fix_plumed.html
# fix plumed command¶
## Syntax¶
fix ID group-ID plumed keyword value ...
• ID, group-ID are documented in fix command
• plumed = style name of this fix command
• keyword = plumedfile or outfile
plumedfile arg = name of PLUMED input file to use (default: NULL)
outfile arg = name of file on which to write the PLUMED log (default: NULL)
## Examples¶
fix pl all plumed plumedfile plumed.dat outfile p.log
## Description¶
This fix instructs LAMMPS to call the PLUMED library, which allows one to perform various forms of trajectory analysis on the fly and to also use methods such as umbrella sampling and metadynamics to enhance the sampling of phase space.
The documentation included here only describes the fix plumed command itself. This command is LAMMPS-specific, whereas most of the functionality implemented in PLUMED will work with a range of MD codes, and when PLUMED is used as a stand-alone code for analysis. The full documentation for PLUMED is available online and included in the PLUMED source code. The PLUMED library development is hosted at https://github.com/plumed/plumed2. A detailed discussion of the code can be found in (Tribello).
There is an example input for using this package with LAMMPS in the examples/PACKAGES/plumed directory.
The command to make LAMMPS call PLUMED during a run requires two keyword value pairs pointing to the PLUMED input file and an output file for the PLUMED log. The user must specify these arguments every time PLUMED is to be used. Furthermore, the fix plumed command should appear in the LAMMPS input file after relevant input parameters (e.g. the timestep) have been set.
The group-ID entry is ignored. LAMMPS will always pass all the atoms to PLUMED and there can only be one instance of the plumed fix at a time. The way the plumed fix is implemented ensures that the minimum amount of information required is communicated. Furthermore, PLUMED supports multiple, completely independent collective variables, multiple independent biases and multiple independent forms of analysis. There is thus really no restriction in functionality by only allowing only one plumed fix in the LAMMPS input.
The plumedfile keyword allows the user to specify the name of the PLUMED input file. Instructions as to what should be included in a plumed input file can be found in the documentation for PLUMED
The outfile keyword allows the user to specify the name of a file in which to output the PLUMED log. This log file normally just repeats the information that is contained in the input file to confirm it was correctly read and parsed. The names of the files in which the results are stored from the various analysis options performed by PLUMED will be specified by the user in the PLUMED input file.
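For orientation only, a minimal plumedfile might look like the following (this sketch is an assumption for illustration, not an example from the LAMMPS or PLUMED manuals; it monitors a single interatomic distance):
# plumed.dat: compute the distance between atoms 1 and 2
d1: DISTANCE ATOMS=1,2
# write the value to the file COLVAR every 100 steps
PRINT ARG=d1 FILE=COLVAR STRIDE=100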
## Restart, fix_modify, output, run start/stop, minimize info¶
When performing a restart of a calculation that involves PLUMED you must include a RESTART command in the PLUMED input file as detailed in the PLUMED documentation. When the restart command is found in the PLUMED input PLUMED will append to the files that were generated in the run that was performed previously. No part of the PLUMED restart data is included in the LAMMPS restart files. Furthermore, any history dependent bias potentials that were accumulated in previous calculations will be read in when the RESTART command is included in the PLUMED input.
The fix_modify energy option is supported by this fix to add the energy change from the biasing force added by PLUMED to the global potential energy of the system as part of thermodynamic output. The default setting for this fix is fix_modify energy yes.
The fix_modify virial option is supported by this fix to add the contribution from the biasing force to the global pressure of the system via the compute pressure command. This can be accessed by thermodynamic output. The default setting for this fix is fix_modify virial yes.
This fix computes a global scalar which can be accessed by various output commands. The scalar is the PLUMED energy mentioned above. The scalar value calculated by this fix is “extensive”.
Note that other quantities of interest can be output by commands that are native to PLUMED.
## Restrictions¶
This fix is part of the PLUMED package. It is only enabled if LAMMPS was built with that package. See the Build package page for more info.
There can only be one fix plumed command active at a time.
## Default¶
The default options are plumedfile = NULL and outfile = NULL
(Tribello) G.A. Tribello, M. Bonomi, D. Branduardi, C. Camilloni and G. Bussi, Comp. Phys. Comm 185, 604 (2014)
http://vi.wikipedia.org/wiki/D%E1%BA%A1ng_chu%E1%BA%A9n_4
# Fourth normal form
1. ^ "A relation schema R* is in fourth normal form (4NF) if, whenever a nontrivial multivalued dependency X $\twoheadrightarrow$ Y holds for R*, then so does the functional dependency X → A for every column name A of R*. Intuitively all dependencies are the result of keys." Fagin, Ronald (September 1977). "Multivalued Dependencies and a New Normal Form for Relational Databases". ACM Transactions on Database Systems 2 (1): 267. doi:10.1145/320557.320571.
http://answers.neotys.com/questions/187607-how-does-the-monitoring-work-with-neoload-is-it-done-by-the-controller-or-the-load-generators
### How does the monitoring work with NeoLoad? Is it done by the controller or the load generators?
I need to install load generators on my network but I need to understand if the monitoring part will be done by them or the controller.
NeoLoad is able to monitor your infrastructure during a load test. By default, the monitoring is done by the controller itself using an embedded agent.
You can also install remote monitoring agents on different computers to work around firewall constraints. A monitor that uses a remote monitoring agent is configured in the same way as one that uses the controller's embedded agent.
For more information, see “Monitoring agent” section in NeoLoad documentation.
https://strutt.arup.com/help/Environmental_Noise/CNOSSOSGroundAtten.html
### Strutt Help
CNOSSOS Ground Effect 1/1,1/3
Strutt|Environmental Noise|CNOSSOS Ground Attenuation inserts attenuation (dB) into the active row of the worksheet calculated using the CNOSSOS EU noise propagation model.
CNOSSOS is developed for propagation above varying-height ground planes, and uses the concept of equivalent heights $z_s, z_r$: the source and receiver heights are defined relative to the average height of the ground plane, and are not necessarily the same as the local source/receiver heights $h_s, h_r$.
As well as the source-receiver geometry, the CNOSSOS model uses the ground factor $G$ over the ground plane (i.e. the proportion of porous ground). In cases where the ground plane consists of multiple ground types, the average $\bar G$ (weighted by the extent of each ground type) should be used.
Typical values of G are as follows:
• Soft, uncompacted ground (pasture, loose soil); snow etc: G=1.0
• Compacted soft ground (lawns, park areas): G=0.7
• Compacted dense ground (gravel road, compacted soil): G=0.3
• Hard surfaces (asphalt, concrete): G=0.0
The ground factor $G_S$ for the source region (the zone within a horizontal distance $\le 30(z_s+z_r)$ of the source) is also required, to account for propagation where the source and receiver are close, where the reflection properties of the source region dominate.
In cases where the source and receiver are close (within $30(z_s+z_r)$ horizontal distance), Strutt modifies the user-entered average ground factor as follows:
$$\bar G' = \bar G \, \frac{d_{SR}}{30(z_s+z_r)} + G_S \left(1 - \frac{d_{SR}}{30(z_s+z_r)}\right)$$
where $d_{SR}$ is the horizontal source-receiver distance along the average ground plane.
The CNOSSOS model predicts the attenuation under neutral (or "homogeneous") atmospheric conditions (constant sound speed, where sound propagation paths are straight rays), as well as adverse (or "favourable") conditions where a sound speed gradient results in curved ray paths.
The effect of the curved ray paths is to increase the average propagation height and reduce the strength of the ground effect.
Under neutral conditions, the ground attenuation is calculated as:
$$A_{\mathrm{ground}} = -\max\left\{ -10 \log_{10} \left( \frac{4 k^2}{d_{SR}^2} \left( z_s^2 - \sqrt{\frac{2 C_f}{k}}\, z_s + \frac{C_f}{k} \right) \left( z_r^2 - \sqrt{\frac{2 C_f}{k}}\, z_r + \frac{C_f}{k} \right) \right),\; -3 (1 - \bar G_w) \right\}$$
where:
$k$ is the wavenumber
$$C_f = d_{SR}\, \frac{1 + 3 w d_{SR} e^{-\sqrt{w d_{SR}}}}{1 + w d_{SR}}$$
$$w = 0.0185\, \frac{f^{2.5} \bar G_w^{2.6}}{f^{1.5} \bar G_w^{2.6} + 1.3 \times 10^3 f^{0.75} \bar G_w^{1.3} + 1.16 \times 10^6}$$
$\bar G_w$ is either the corrected ground factor $\bar G'$ or the uncorrected ground factor $\bar G$, depending on the calculation type.
(The CNOSSOS ground module is also used by the CNOSSOS diffraction module to calculate the ground reflection from the source to the diffraction edge, $\Delta_{\mathrm{ground}(S,O)}$, and from the diffraction edge to the receiver, $\Delta_{\mathrm{ground}(O,R)}$; the ground factor term is taken as either the modified or the unmodified term as outlined in the table below.)
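As a quick numerical illustration of the neutral-conditions formula above, here is a minimal Python sketch (not Strutt code; the frequency, geometry and ground factor in the example call are assumptions):
import numpy as np

def cnossos_ground_neutral(f, d_sr, z_s, z_r, G_w, c=343.0):
    # wavenumber at frequency f (Hz), speed of sound c (m/s)
    k = 2 * np.pi * f / c
    # ground-porosity function w and C_f, as defined above
    w = 0.0185 * f**2.5 * G_w**2.6 / (f**1.5 * G_w**2.6 + 1.3e3 * f**0.75 * G_w**1.3 + 1.16e6)
    C_f = d_sr * (1 + 3 * w * d_sr * np.exp(-np.sqrt(w * d_sr))) / (1 + w * d_sr)
    term = -10 * np.log10(4 * k**2 / d_sr**2
                          * (z_s**2 - np.sqrt(2 * C_f / k) * z_s + C_f / k)
                          * (z_r**2 - np.sqrt(2 * C_f / k) * z_r + C_f / k))
    # A_ground = -max(term, -3(1 - G_w))
    return -max(term, -3 * (1 - G_w))

print(cnossos_ground_neutral(f=500.0, d_sr=100.0, z_s=1.0, z_r=4.0, G_w=0.7))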
Under adverse conditions, the ground attenuation is calculated as:
$$A_{\mathrm{ground}} = -\max\left\{ \begin{array}{l} -10 \log_{10} \left( \dfrac{4 k^2}{d_{SR}^2} \left( \hat z_s^2 - \sqrt{\dfrac{2 C_f}{k}}\, \hat z_s + \dfrac{C_f}{k} \right) \left( \hat z_r^2 - \sqrt{\dfrac{2 C_f}{k}}\, \hat z_r + \dfrac{C_f}{k} \right) \right) \\ -3 (1 - \bar G_m) \quad \text{for } d_{SR} \le 30 (z_s + z_r) \\ -3 (1 - \bar G_m) \left( 1 + 2 \left( 1 - \dfrac{30 (z_s + z_r)}{d_{SR}} \right) \right) \quad \text{otherwise} \end{array} \right.$$
where:
hat z_s = z_s+ Delta z_s + Delta z_t is the effective source height under adverse conditions
hat z_r = z_r + Delta z_r + Delta z_t is the effective receiver height under adverse conditions
Delta z_s = a_0(z_s/(z_s+z_r))d_(SR)^2/2 is the change in effective source height due to the curvature of the sound paths
Delta z_r = a_0(z_r/(z_s+z_r))d_(SR)^2/2 is the change in effective receiver height due to the curvature of the sound paths
Delta z_t = 6*10^-3 d_(SR)/(z_s+z_r) is the change in effective source/receiver height due to atmospheric turbulence
a_0 is the inverse of the radius of curvature of the sound paths. A default value of a_0 = 2*10^-4 is provided in CNOSSOS; however, this could potentially be user-modified in Strutt if required.
bar G_m is either the corrected ground factor bar G prime or the uncorrected ground factor bar G, depending on the calculation type.
References:
• Stylianos Kephalopoulos, Marco Paviotti, Fabienne Anfosso-Lédée (2012) Common Noise Assessment Methods in Europe (CNOSSOS-EU) EUR 25379 EN. Luxembourg: Publications Office of the European Union, 2012, 180 pp.
|
2021-03-01 20:25:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7761300206184387, "perplexity": 8856.587421628288}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178362899.14/warc/CC-MAIN-20210301182445-20210301212445-00592.warc.gz"}
|
https://www.nextgurukul.in/questions-answers-forum/academic/cbse/class-12/physics
|
Anurag Verma
Jul 28, 2014
#### A hollow metal sphere of radius 5 cm is charged such that the potential on its surface is 10 V. What is the potential at the centre of the sphere?
In the case of a hollow metal sphere, the electric field inside the shell is zero. This means that the potential inside the shell is constant. Therefore the pot...
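(The truncated answer presumably concludes with the standard result: since the field inside the shell is zero, the potential is constant throughout the interior and equal to its surface value, so the potential at the centre is 10 V.)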
Amarnathreddy M
Feb 1, 2014
Feb 23, 2015
#### What is a ballistic galvanometer and what is its principle?
A ballistic galvanometer is a galvanometer specially designed to measure the quantity of charge displaced in a current of short duration (such as in the chargin...
Harsha
Apr 12, 2014
#### a proton is placed in a uniform electric field directed along positive x axis .in which direction will it tend to move?
The proton will move in the direction of the electric field, that is, along the +ve X-axis. A proton moves in the direction of the elec...
Anubhav Singh
Dec 23, 2014
#### The angular momentum of an electron in a hydrogen atom is proportional to (where r is the radius of the orbit): a) 1/r^(1/2) b) 1/r c) r^(1/2) d) r^2
Radius of the nth orbit r_n ∝ n^2/Z ⇒ ...
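(The truncated working presumably concludes as follows: r_n ∝ n^2/Z gives n ∝ √r; since L = nh/2π ∝ n, L ∝ r^(1/2), i.e. option (c).)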
Altaf Hussain Shaikh Ahmed
Mar 19, 2015
Vaibhav Gumber
Jun 1, 2015
Arunima Ajan
Nov 21, 2014
Misha Jain
Aug 15, 2014
#### Equation of alternating current is given by I = 10√2 sin(100πt + π/6). If the time taken by current to reach the root mean square value from t=0 is t', then what is the value of t'?
Instantaneous value of current I = 10√...
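(The truncated working presumably proceeds as follows: I_rms = I_0/√2 = 10 A; setting 10√2 sin(100πt' + π/6) = 10 gives sin(100πt' + π/6) = 1/√2, so 100πt' + π/6 = π/4, hence 100πt' = π/12 and t' = 1/1200 s ≈ 8.3 × 10^-4 s.)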
Sanalkumar S
May 4, 2015
|
2023-01-27 15:08:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 40, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.789044976234436, "perplexity": 1858.0801421582537}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494986.94/warc/CC-MAIN-20230127132641-20230127162641-00397.warc.gz"}
|
https://git.rockbox.org/cgit/rockbox.git/tree/manual/plugins/jpegviewer.tex?id=257a7db8dbff3be6d05ed0e1eadd292c9ff0231e
|
path: root/manual/plugins/jpegviewer.tex
% $Id$
%
\subsection{JPEG viewer}
Open a JPEG file in the \setting{File Browser} to view it
\opt{recorder,recorderv2fm,ondio,h1xx}{ using Rockbox's greyscale library}.
\note{When an audio file is playing the size of the image is limited as the
  decoding process needs to share memory with audio tracks. To be able to
  view a bigger file you may need to stop playback.}
\begin{table}
\begin{btnmap}{}{}
  \opt{RECORDER_PAD,ONDIO_PAD,IRIVER_H100_PAD,IRIVER_H300_PAD,IAUDIO_X5_PAD}
    {\ButtonUp\ / \ButtonDown}
  \opt{IPOD_4G_PAD}{\ButtonMenu\ / \ButtonPlay}
  / \ButtonLeft\ / \ButtonRight
    & Move around in zoomed in image\\
  \opt{RECORDER_PAD}{\ButtonPlay}
  \opt{ONDIO_PAD}{\ButtonMenu}
  \opt{IRIVER_H100_PAD,IRIVER_H300_PAD,IAUDIO_X5_PAD}{\ButtonSelect}
  \opt{IPOD_4G_PAD}{\ButtonScrollFwd}
    & Zoom in\\
  \opt{RECORDER_PAD}{\ButtonOn}
  \opt{ONDIO_PAD}{\ButtonMenu+\ButtonDown}
  \opt{IRIVER_H100_PAD,IRIVER_H300_PAD}{\ButtonMode}
  \opt{IPOD_4G_PAD}{\ButtonScrollBack}
  \opt{IAUDIO_X5_PAD}{Hold \ButtonSelect}
    & Zoom out\\
  \opt{RECORDER_PAD}{\ButtonFThree}
  \opt{ONDIO_PAD}{\ButtonMenu+\ButtonRight}
  \opt{IRIVER_H100_PAD}{\ButtonOn}
  \opt{IRIVER_H300_PAD}{\ButtonRec}
  \opt{IPOD_4G_PAD}{\ButtonSelect+\ButtonRight}
  \opt{IAUDIO_X5_PAD}{\ButtonPlay}
    & Next jpeg in folder\\
  \opt{RECORDER_PAD}{\ButtonFTwo}
  \opt{ONDIO_PAD}{\ButtonMenu+\ButtonLeft}
  \opt{IRIVER_H100_PAD,IAUDIO_X5_PAD}{\ButtonRec}
  \opt{IRIVER_H300_PAD}{\ButtonOn}
  \opt{IPOD_4G_PAD}{\ButtonSelect+\ButtonLeft}
    & Previous jpeg in folder\\
  \opt{RECORDER_PAD,ONDIO_PAD,IRIVER_H100_PAD,IRIVER_H300_PAD}{\ButtonOff}
  \opt{IPOD_4G_PAD}{\ButtonSelect+\ButtonMenu}
  \opt{IAUDIO_X5_PAD}{\ButtonPower}
    & Show menu\\
\end{btnmap}
\end{table}
The menu has the following entries.
\begin{description}
  \item[Quit.] Quits the viewer and returns to the \setting{File Browser}.
  \item[Toggle Slideshow Mode.] Enables or disables the slideshow mode.
  \item[Change Slideshow Timeout.] You can set the timeout for the slideshow
    between 1 second and 20 seconds.
  \item[Show Playback Menu.] From the playback menu you can control the
    playback of the currently loaded playlist and change the volume of your
    \dap.
  \item[Return.] Returns you to the image.
\end{description}
\note{JPEGs that use progressive scan encoding are not supported and will
  produce an error.}
|
2021-09-17 16:17:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19676938652992249, "perplexity": 4872.482253578058}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780055684.76/warc/CC-MAIN-20210917151054-20210917181054-00373.warc.gz"}
|
https://astrobites.org/guides/galaxy-and-agn-types/
|
# Guide to Classification of Galaxies and AGNs
Editor: Nathan Sanders
For a good general reference on galaxy morphology, see arXiv:1102.0550. For quick notes on specific types of galaxies, AGN, and famous examples of the same, search the text below!
# Types of galaxies:
Barred spiral galaxy: A spiral galaxy with a bar-shaped distribution of stars in the center, including the Milky Way. The gravitational potential of the bar introduces many interesting dynamical effects (e.g. arXiv:1102.1157) and may prompt star formation (see this Astrobite).
(Figure: The strongly barred spiral galaxy NGC 613 viewed in the infrared, from 2000AJ....119..536E.)
Blue Compact Dwarfs (BCDs): See LCBGs.
Blue Compact Galaxies (BCGs, not to be confused with Brightest Cluster Galaxies): These are intermediate-mass ($\sim 10^{9-10} M_\odot$, $M_B < -18$), low-metallicity, star-forming galaxies that often appear to be interacting with other galaxies or merging. They are starbursting in the sense that they have strong emission lines, and they have bright, young clusters with ages <5 Myr. They are considered to be analogs of higher-redshift Lyman Break Galaxies, and are therefore useful tracers of galaxy evolution. See arXiv:1103.5771.
(Figure: HST image of BCG Mrk 930, demonstrating features of interaction. From Figure 1 of arXiv:1103.5771.)
Brightest Cluster Galaxy (BCGs, not to be confused with Blue Compact Galaxies): This is simply the galaxy that happens to be the brightest among its galaxy cluster. The BCG is typically a giant elliptical located in the center of the cluster. Because BCGs across all clusters of a given size tend to be of similar brightness, they have been used as standard candles.
(Figure: The dependence of the brightness of a BCG on the richness of the cluster (number of galaxies above a certain brightness), and its scatter. From 2009ApJ...699.1333H.)
Bright Red Galaxies (BRGs): Synonymous with luminous red galaxies (LRGs).
cD galaxies: Enormous ("supergiant") early-type galaxies (sometimes r > 1 Mpc), typically found at the center of clusters. Their outer brightness profiles ("envelopes") extend so far that it can be difficult to distinguish them observationally and dynamically from their cluster neighbors. See e.g. 1988ApJ...328..475S.
(Figure: The central 300 kpc of the cluster Abell 2029 and its cD galaxy. From 1979ApJ...231..659D.)
Compact elliptical (cE) galaxies: Possibly formed when a more massive early-type spiral galaxy is tidally stripped by a larger neighbor on a timescale of ~1 Gyr. The luminosity of the galaxy may change dramatically during this process, while the color will not, so cEs may appear an order of magnitude dimmer than similarly red galaxies. See e.g. arXiv:1102.1159.
Compact Narrow Emission Line Galaxies (CNELG): A subset of Luminous Compact Blue Galaxies with narrow emission line features. See LCBG.
D galaxies:
Dwarf galaxies: Dwarfs generally have little gas, have old stellar populations, and smooth, featureless morphologies that are supported by pressure from random stellar motion. However, this is a diverse class: see the other entries for dwarfs on this page, arXiv:1101.2460, and arXiv:1103.1116.
Dwarf irregular (dIrr) galaxies: Dwarf irregulars are interesting laboratories for galaxy evolution, because they do not have dynamical features such as spiral density waves that interfere with the fundamental role of processes such as turbulence and stellar evolution in driving star formation. However, it has been shown that dIrrs can undergo long-duration (>1 Gyr) starbursts. See e.g. arXiv:1104.0464.
Dwarf ellipticals (dE): Generally the most massive dwarfs, dominated by stars rather than dark matter in their inner regions. They have low gas fractions and metallicities (Z ≲ 0.1 Z_solar). These are common in the universe and dominate cluster populations, but there are only three known in the Local Group (interacting with Andromeda). This suggests that dE formation is strongly environment-dependent (see e.g. arXiv:1101.2460 and arXiv:1103.1116).
Dwarf spheroidals (dSph): Smaller dwarfs that are dominated by dark matter rather than stars in their central regions. These are ubiquitous in the Local Group (>20 already known), interacting tidally with larger galaxies. They are potentially valuable probes for studying dark matter. It is unclear if low-mass dSph galaxies can be distinguished from globular clusters. See e.g. arXiv:1101.2460, arXiv:1103.0477, and arXiv:1103.1116.
E+A galaxies ("Poststarburst" galaxies, PSGs): Have strong Balmer absorption lines typical of young stars, but lack optical emission features (e.g. [O III] 5007) typical of star-forming regions, and have metallic absorption lines (e.g. Ca H and K) indicative of an older population of dwarf stars. The presence of two distinct stellar populations and morphological observations suggest that E+A galaxies are the product of a gas-rich galaxy merger (see e.g. 2000AJ....119.2118G and arXiv:1101.4933).
Early type: Refers to galaxies on the left side of Hubble's tuning fork diagram – often synonymous with elliptical galaxies (see concepts PDF).
Elliptical: Galaxies typified by a spheroidal distribution of stars, rather than a disk. These are typically "red" and "dead," with little ongoing star formation.
Faint blue galaxies:
Faint low surface brightness galaxies (fLSBs): Galaxies with a central surface brightness fainter than 24 mag/arcsec^2, but a total r-band magnitude > 21. Because these galaxies are often too faint to be seen, they have been suggested as a contributor to the "missing dwarf problem," where cosmological models predict more small galaxies than are observed. See arXiv:1101.4136.
Giant ellipticals (gE):
Green Pea galaxies (GPs): Very compact (~1-2 kpc half-light radius), low-mass starbursting galaxies in the nearby (z<1) universe that are intermediate in color between early and late type galaxies (green). These may represent high-luminosity blue compact galaxies. GPs are generally metal-poor (oxygen abundance ~20% solar), but are relatively enriched in nitrogen.
(Figure: Green pea galaxies (green dots) are nitrogen-enriched compared to most galaxies (density plot) of the same oxygen abundance. From Amorín et al. 2011, arXiv:1105.1477.)
Green valley galaxies: A small population of galaxies somewhere in between the red (early type) and blue (late type) galaxies in the bimodal distribution of galaxy types. It is not clear whether these galaxies are simply outliers among the early and late type distributions, are in transition between the two populations, or are another class of galaxy altogether. These galaxies tend to have AGN or stellar bars, which could be responsible for quenching star formation and causing them to transition from late to early type. See e.g. arXiv:1101.3353 or the 4/5/2011 Astrobite.
(Figure 2: A color-magnitude diagram that shows the "red sequence", "blue cloud", and less populated "green valley".)
HII galaxies: Gas-rich, metal-poor dwarfs undergoing a period of star formation that dominates their optical spectrum, such that they are easily confused for a giant HII region. The galaxy itself is generally low-luminosity (M_V ~ -17), but there are very bright ~100 pc knots in the center. The starburst is thought to last ~1 Gyr. (See e.g. arXiv:1101.4140.)
H-alpha emitters (HAWs): Galaxies that have emission at the Balmer line H-alpha, usually found via narrow-band imaging surveys tuned to a given redshift. H-alpha emission is typically associated with star formation. See e.g. arXiv:1104.3608.
Irregular galaxies:
L* and M* galaxies: There is a "knee" in the luminosity function of galaxies that suggests a dividing line between very bright and less bright galaxies (with L* in between), and similarly between very massive and less massive (with M* in between). Galaxies with L ~ L* are typically M* galaxies and vice versa. The Milky Way is roughly an M* galaxy.
Late type galaxies: Refers to galaxies on the right side of Hubble's tuning fork diagram – essentially a synonym for spiral galaxies (see concepts PDF).
Lenticular galaxies (S0): These are galaxies that fall between spirals and ellipticals on the Hubble sequence, exhibiting both an old, spherical population and traces of a disk. Originally thought not to contain gas, but some have been found to be gas-rich (see e.g. 1991A&A...243...71V or arXiv:1101.5092).
Low surface brightness galaxies (LSBs): Galaxies with a low surface brightness (i.e. that are not bright for a given area in square arcseconds). The low surface brightness is associated with a low gas surface density. Curiously, these galaxies seem to follow a different star formation efficiency law than the normal Kennicutt-Schmidt law. See e.g. arXiv:1103.3711, and see also fLSBs.
Luminous Blue Galaxies (LBGs):
Luminous Compact Blue Galaxies (LCBGs): Galaxies that are bluer (have more star formation) than their small size (half-light radius) would indicate, and are as bright as or brighter than the Milky Way. They are common at z~1, but rare in the local universe, suggesting that this is a vigorous stage of evolution. One possible formation mechanism is the merger of a dwarf elliptical with a gas-rich source which could provide fuel for star formation, such as another dwarf galaxy or an HI cloud. See e.g. 2004ApJ...617.1004W and arXiv:1103.0526.
Luminous Red Galaxies (LRGs): Very luminous (~3 L*) early-type galaxies, essentially giant elliptical galaxies. The brightest galaxies in clusters are typically LRGs. LRGs are often used to probe hierarchical galaxy formation, and the SDSS LRG survey produced a conclusive detection of the baryon acoustic oscillations (BAO). See e.g. 2001AJ....122.2267E.
Lyman Alpha Emitters (LAEs): Strong Lyman alpha emission from star formation, an AGN, or gas infall allows these to be detected at high z (>3). They are traditionally thought of as galaxies in an early burst of star formation, with young stellar populations and very little dust, and are good tracers of galaxy evolution. It has been suggested that they can evolve to become Milky Way-like galaxies. See e.g. arXiv:1101.3017 and arXiv:1101.3321.
Lyman break analogs (LBAs): Local-universe (z < 0.2) equivalents of LBGs, very bright in the far-UV due to high star formation rates. They are similar to LBGs in their metallicity, morphology, radio emission, extinction, etc. See arXiv:1102.1740, and see also Blue Compact Galaxies.
(Figure: HST rest-frame UV/optical images of LBA candidates. From 2009ApJ...706..203O.)
Lyman break galaxies (LBGs): Starburst galaxies at z>2 which can be detected very efficiently by their unique spectral signature near the Lyman limit at 912 Angstroms (UV, redshifted into the optical). This signature is the product of intense Ly-alpha emission, minimal dust extinction from the ISM, and a discontinuity due to absorption from the atmospheres of massive stars and neutral hydrogen in the ISM. See 2002ARA&A..40..579G.
Polar Ring Galaxies (PRGs): Galaxies that appear to have a disk of gas oriented perpendicular to their stellar disk. In some cases (e.g. UGC 7576, at right), kinematic investigations indicate that the gas disk is actually a ring. Several formation scenarios have been suggested, including a major merger, tidal accretion from a dwarf satellite or nearby spiral, or "cold accretion" of primordial intergalactic gas along very long (~1 Mpc) filaments. See e.g. arXiv:1104.2052.
(Figure: The PRG UGC 7576 in the R band, from Spavone et al. 2011, arXiv:1104.2052. The gas ring extends from the top left to the bottom right.)
Pseudobulge galaxies: The term pseudobulge refers to a feature that looks like the central, spheroidal, "classical" bulge common to spiral galaxies, but is different in some significant way. One definition is a central brightening of the disc that is not extended vertically. Galaxies with pseudobulges have become of great interest because they seem to fall outside of the traditional velocity dispersion – supermassive black hole mass relationship ($M_{BH}$–$\sigma$). The presence of a pseudobulge seems to be associated with a strong bar feature, and pseudobulges and classical bulges can both be present in the same galaxy. See e.g. arXiv:1103.0525.
Quiescent galaxies: These are simply galaxies that are not observed to have ongoing star formation.
Satellite galaxies: Smaller galaxies gravitationally bound to a larger primary galaxy. Usually this refers to satellites of an isolated galaxy as opposed to members of a large cluster. The "missing satellites problem" refers to the disparity between the large number of satellites predicted by ΛCDM cosmological simulations and the relatively small number of satellites observed (see e.g. arXiv:1101.2674).
Spiral galaxies: The most common type of galaxy (as defined by a magnitude-limited survey of the local universe). Named for the apparently spiral structure of their thin-disk stars, caused by stellar density waves. They may also have a nuclear spheroidal population (bulge) and an extended spheroidal population (halo). Spiral galaxies typically have ongoing star formation, particularly in their spiral arms. The Milky Way is a spiral galaxy.
Starbursting dwarf galaxies (SBDGs): Dwarf galaxies with gas fractions and star formation rates on the order of giant spiral galaxies (implying the gas will be consumed in less than a Hubble time), but low metallicity. It may be that galactic winds carry the heavy elements formed in the galaxy out of its shallow potential well. See e.g. arXiv:1103.1116.
Starburst galaxies: A generic term for galaxies with regions undergoing a rate of star formation so high that it is roughly as luminous as the rest of the galaxy combined. Starburst phases are often short-lived (the starburst is highly variable). See also SMGs and ULIRGs. (See e.g. arXiv:1101.4140.)
Sub-Millimeter Galaxies (SMGs): These are among the most vigorously star-forming galaxies in the universe (~1000 M_sol/yr), are very bright (L ~ 10^13 L_sol), and are typically found at high redshift (z ~ 2.5). They are perhaps the result of mergers of gas-rich galaxies, and the progenitors of the massive ellipticals in the local universe. This phase of evolution apparently lasts only ~100 Myr. See e.g. 2008ApJ...680..246T.
Tadpole galaxies:
Tidal Dwarf Galaxies (TDGs): Dwarf galaxies that are created from material stripped from larger galaxies, rather than simply from clouds collapsing in small halos; proposed by Zwicky in 1956. Unlike other dwarf galaxies, TDGs could have no dark matter halo, and may be metal-rich because they are formed from the recycled material of more massive galaxies. They range from $\sim 10^7$ to $10^9 M_\odot$, with larger TDGs typically formed by accumulation at the tip of tidal tails. Tidal interaction can cause starbursts in TDGs, and smaller TDGs may be easily disrupted by their parent galaxies, living <10 Gyr. See e.g. arXiv:1103.2546.
(Figure: Tidal dwarf galaxies (boxed in red) identified in the tidal tail of TDG 45115, from arXiv:1103.2546.)
Ultra-compact dwarfs (UCDs): These are similar to dSph/dE/cE galaxies, but more compact. Some UCDs are perhaps more like massive globular clusters than small galaxies, and have been called giant globular clusters (GGCs). It is proposed that other UCDs are the remnant nuclei of galaxies which have been tidally stripped by more massive neighbors (see e.g. arXiv:1101.2460 and arXiv:1102.0001).
(Figure: UCDs (boxed in green) are intermediate in brightness between brighter dwarf elliptical galaxies and dimmer globular clusters. This is a color-magnitude diagram of extended sources in the Hydra 1 cluster, from Figure 2 of arXiv:1103.5463.)
Ultra-Luminous Infrared Galaxies (ULIRGs): Galaxies with far-IR L(8-1000 microns) > 10^12 L_solar due to dust heated by young stars and, in about half of known ULIRGs, an AGN. These are perhaps the local-universe analogs of the SMGs found at z~2, being galaxies with very high star formation rates. They are typically galaxies undergoing major mergers (a collision between two galaxies of similar mass) and are thought to represent a transitionary phase between gas-rich spirals and ellipticals. See e.g. arXiv:1101.5262 and arXiv:1102.1974.
Void galaxies: Galaxies that are found in voids, the nominally empty tens-of-Mpc regions of space in the large-scale structure of the universe. These essentially isolated galaxies comprise only about 7% of all galaxies. They are generally blue (star-forming, or "young") and gas-rich. They are valuable tracers of the dependence of galaxy evolution, and especially gas accretion, on environment. See e.g. arXiv:1101.4187, arXiv:1103.4156, and arXiv:1103.5798.
Wolf-Rayet (WR) galaxies: These are simply galaxies which have recently undergone a starburst such that they exhibit spectral signatures of Wolf-Rayet stars (massive stars with strong mass loss). The existence of WR stars limits the timeframe since the starburst to roughly 2
# Types of active galaxies and active galactic nuclei (AGN):
Active Galactic Nuclei (AGN): Active nuclei are common to galaxies in the early universe. AGN have diverse observed properties, but most of these can be described by a single unified model. The unified model consists of a supermassive black hole with a superheated accretion disk that radiates in the optical through soft X-ray. The disk is surrounded by broad and narrow line regions (BLR, NLR) consisting of high and lower velocity gas, respectively. A dusty torus can obscure all of these components if viewed edge-on. A relativistic jet can extend from the supermassive black hole to hundreds of kpc. Depending on the viewing angle of this system, the AGN can appear to have many different properties – hence the classification scheme given below (see e.g. arXiv:astro-ph/0312545). AGN may correspond to periods of growth in supermassive black holes due to galaxy mergers (see e.g. 2006ApJS..163....1H).
Blazars: The most extreme class of AGN; they comprise the majority of extragalactic gamma-ray sources. They are AGN whose relativistic jets happen to point toward Earth, producing both synchrotron and inverse-Compton gamma rays. They are extremely variable over short timescales and their emission is highly polarized. They are often divided into two major subtypes based on the strength of their emission lines, FSRQs and BL Lac objects. See e.g. arXiv:1101.2764.
BL Lacertae (BL Lac) objects: A subclass of blazars with very weak optical emission lines (equivalent width < 5 Angstroms). XBL and RBL refer to BL Lac objects identified by either their X-ray or radio emission, respectively. XBLs and RBLs have different spectral properties, which may be explained by viewing angle if radio emission is more highly beamed than X-ray. See also HBLs and e.g. arXiv:1101.2764.
Broad-Line Seyfert 1 galaxies (BLS1s): Seyfert 1 galaxies with broad emission lines, which appear to follow the normal supermassive black hole to bulge size relationship. Contrast with NLS1s, and see e.g. arXiv:1102.0537.
Composite galaxies: These galaxies fall on the border between Seyfert galaxies and LINERs in their emission line ratios.
Compton thick AGN: These are simply AGN with column densities of N_H > 10^24 cm^-2 in the X-ray. Apparently roughly a quarter of AGN in the local universe are Compton thick. See e.g. arXiv:1101.3478.
Double Radio source AGN (DRAGNs): AGN that produce narrow jets extending for hundreds of kpc out of either side of the galactic nucleus. The jets terminate in hugely dispersed radio lobes. Both the lobes and the jets are powerful radio emitters due to synchrotron radiation (see e.g. Alan Bridle's page).
Dual AGNs: AGN with double-peaked emission lines are suspected of being dual AGN: hosts to two accreting supermassive black holes. The dual AGN phase is expected to be associated with galaxy mergers. However, double-peaked emission lines can have other explanations, including asymmetric outflows or extinction of a single AGN. See e.g. arXiv:1102.1733, 2011ApJ...727...71F, and arXiv:1103.2597.
Fanaroff and Riley types (FR Is and FR IIs): A classification scheme for blazar jets. FR II jets are highly collimated and terminate at bright hotspots far from the galaxy center, while FR I jets terminate nearer to the galaxy center. FR Is are associated with BL Lac objects and FR IIs with FSRQs. See 1974MNRAS.167P..31F and e.g. arXiv:1101.5342.
Forbidden High Ionization Line (FHIL) regions: Some AGN have emission originating from atoms much more highly ionized than is typical of AGN narrow line regions (NLRs). It is uncertain whether these regions are simply extremely photoionized NLRs (perhaps due to extreme SMBH accretion events or supernovae) or if the emission is due to an entirely different process, such as high-temperature collisional excitation. The FHIL regions are apparently high-density regions very near to the nucleus, perhaps forming the inner wall of the torus. Examples of FHILs include [Fe VII], [Fe X], and [Ne V]. See arXiv:1103.0660.
(Figure: Blue spectrum of the AGN Q1131+16 displaying many FHIL features. Fig. 2 of Rose et al. 2011, arXiv:1103.0660.)
Flat-spectrum radio quasars (FSRQs): A subclass of blazars with recognizable broad emission lines, similar to other quasars and in contrast to BL Lac objects. FSRQ seems to be a catch-all classification for such AGN, which can also be referred to by a variety of other names depending on how they were discovered, including "optically violently variable quasars," "highly polarized quasars" (HPQs), and "core-dominant radio quasars". See also FR II jets and e.g. 1996ApJ...463..444S.
FRI radio galaxies: The radio emission is concentrated at the core – the jets fade after a short distance (see e.g. arXiv:1101.3223).
FRII radio galaxies: The jets are much more radio-bright and extended than in FRI sources. FRII galaxies tend to be more luminous than FRIs in general, but are much less common (see e.g. arXiv:1101.3223).
Hidden broad-line regions (HBLRs): Type II AGN are about evenly divided between HBLR and non-HBLR objects on the basis of polarized-light observations. The difference might be because HBLRs are dominated by their AGN and non-HBLRs are dominated by starbursts. See e.g. arXiv:1101.4132.
High-frequency peaked BL Lac objects (HBLs): A subclass of BL Lac objects whose high-energy peak is located around 100 GeV; they are also typically bright in X-rays and radio. Most >100 GeV sources are HBLs. (See e.g. arXiv:1101.2764.)
High-redshift quasars (HZQs): Extremely distant quasars with z ≳ 6, which corresponds to a lookback time at roughly the end of cosmic reionization. About 50 are known as of January 2011. See e.g. arXiv:1101.4965.
Infrared-Faint Radio Sources (IFRSs): A rare (~7 per square degree) class of sources that are fairly bright in the radio ($\gtrsim 0.1$ mJy at 20 cm), but very faint in the infrared. It is possible that IFRSs are relatively nearby (1
# Types of galaxy clusters:
Cool core (CC) and non-cool core (NCC): Thermal bremsstrahlung and line emission from the intracluster medium can cause significant energy loss (a cooling flow) from the brightest region of a cluster (its core) on timescales shorter than the age of the universe. Something like 80% of clusters have such cooling flows and are classified as cool core (CC) clusters, distinguished by optical line emission (e.g. H-alpha). Mechanisms proposed to explain the difference include AGN heating and disruption by mergers in NCC clusters. NCC clusters have flat abundance profiles, perhaps due to frequent galaxy mergers. (See e.g. arXiv:1101.3317.)
Galaxy groups: This simply refers to a lower-mass system of galaxies than a cluster, but there are differences besides the mass. Gravitational interaction dominates the energetics of clusters, while various forms of feedback (radio jets, starburst-driven winds) and mergers can dominate groups. Clusters are dominated by dark matter, have ~10% of their mass in gas (the intracluster medium, ICM), and relatively little mass in stars; groups have ~10% of their mass in stars and a negligible ICM. Moreover, clusters have typically formed since z~1, while groups are thought to have formed as far back as z~10. See e.g. arXiv:1102.1972 and arXiv:1104.4888.
Radio relics: Diffuse synchrotron radiation from cosmic-ray electrons in the magnetic field of intracluster gas shock-heated by a galaxy cluster merger.
Proto-clusters: Over-densities of massive galaxies in the very early (z~5) universe that are expected to merge hierarchically to form galaxy clusters in the late universe. See arXiv:1101.3586.
|
2020-07-12 07:06:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 4, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7540421485900879, "perplexity": 4432.808835055746}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657131734.89/warc/CC-MAIN-20200712051058-20200712081058-00163.warc.gz"}
|
https://www.chilimath.com/lessons/intermediate-algebra/descartes-rule-of-signs/
|
# Descartes’ Rule of Signs
The purpose of Descartes' Rule of Signs is to provide insight into how many real roots a polynomial P(x) may have. We are interested in two kinds of real roots, namely positive and negative real roots. The rule itself is simple.
Here is the Descartes’ Rule of Signs in a nutshell.
## Breakdown or Explanation of the Descartes’ Rule of Signs
Suppose P(x) is a polynomial whose exponents are arranged from highest to lowest, whose coefficients are real numbers (zero coefficients are skipped when counting), and which has a nonzero constant term.
The number of positive real roots is either
• equal to the number of sign changes in P(x)
• or less than the number of sign changes in P(x) by some multiple of 2.
The number of negative real roots is either
• equal to the number of sign changes in P(−x)
• or less than the number of sign changes in P(−x) by some multiple of 2.
In summary, if n is the number of sign changes in either P(x) or P(−x), then the number of positive or negative roots may equal n, n−2, n−4, n−6, etc.
Note that we start with the number of sign changes, "n", and then subtract some positive even integer (2, 4, 6, etc.).
We stop subtracting once the difference becomes 0 or 1. That is it!
Quick examples for both cases:
• Suppose P(x) has n = 7 sign changes; then the possible number of positive real roots will be
7, 5, 3 or 1
• Suppose P(−x) has n = 6 sign changes; then the possible number of negative real roots will be
6, 4, 2 or 0
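The counting itself is mechanical, so it is easy to check answers with a few lines of code. A minimal Python sketch (an illustration of the rule, not part of the original lesson; coefficients are listed from the highest power down):

```python
def sign_changes(coeffs):
    """Count sign changes in a coefficient list, ignoring zero coefficients."""
    signs = [c > 0 for c in coeffs if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

def descartes_counts(coeffs):
    """Possible numbers of (positive, negative) real roots by Descartes' rule."""
    n = len(coeffs) - 1
    # Coefficients of P(-x): terms of odd degree flip sign.
    neg = [c * (-1) ** (n - i) for i, c in enumerate(coeffs)]
    pos_counts = list(range(sign_changes(coeffs), -1, -2))  # n, n-2, ..., 1 or 0
    neg_counts = list(range(sign_changes(neg), -1, -2))
    return pos_counts, neg_counts

# P(x) = x^3 + x^2 - x - 1 = (x - 1)(x + 1)^2: one positive, two negative roots
print(descartes_counts([1, 1, -1, -1]))  # ([1], [2, 0])
```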
### Examples of Descartes’ Rule of Signs
Let’s take a look at a few examples to see this rule in action!
Example 1: Find the number of real roots of the polynomial below using Descartes’ Rule of Signs.
Start by clearly marking off the sign of each term in the polynomial. I will use the color red for the positive symbol (+) and black for the negative symbol (−). This will allow us to easily keep track of the changes in sign.
It is considered a sign change if the signs of adjacent coefficients switch (or alternate) – for instance, going from positive to negative, or from negative to positive.
For the positive real roots:
Use the given function itself, because the "x" inside the parentheses of P(x) is positive.
There are two sign changes, as shown by the arrows. Since n = 2, there are 2 or 0 positive real roots.
For the negative real roots:
Use the modified version of the function, P(−x), in which the "x" inside the parentheses is negated.
Before we count the sign changes, we need a short side calculation: substitute "−x" into P(x) to get P(−x). Here we go…
Now, let’s do the counting…
There are three sign changes, as pointed out by the arrows. Since n = 3, there are 3 or 1 negative real roots.
For our final answer: there are 2 or 0 positive real roots, and 3 or 1 negative real roots.
Here is the graph of the polynomial showing that indeed our “guess” is spot on! In fact, it has two (2) positive roots, and three (3) negative roots.
Example 2: Find the number of real roots of the polynomial below using Descartes’ Rule of Signs.
Before we start on this problem, I must caution you not to treat this like a synthetic division problem, where we place zeroes in for the missing powers of x. As long as the polynomial is arranged in decreasing order of exponents, that is good enough.
For the positive real roots:
Use the given polynomial and count the number of sign changes.
There are three sign changes for P(x), which means there can be 3 or 1 positive real roots.
For the negative real roots:
Substitute −x into P(x) to get P(−x), then count the sign changes.
There is only one sign change for P(−x), which means there is exactly 1 negative real root.
For our final answer, there are 3 or 1 positive real roots, and exactly 1 negative real root.
Example 3: Find the number of real roots (positive and/or negative) of the polynomial below.
To find the positive roots:
Observe that all terms in the polynomial are positive.
Since there is no sign change in P(x), the polynomial has NO positive real roots.
To find the negative roots:
Work out P(−x), then count the variations in sign.
Because we have seven (7) sign changes in P(−x), there are 7, 5, 3, or 1 negative real roots.
For our final answer, there are no positive real roots, and there are 7, 5, 3, or 1 negative real roots.
Example 4: Find the number of real roots of the polynomial (positive and/or negative) below.
To find the positive roots:
Count the number of sign changes in P(x).
We have six sign changes here, which implies that there are 6, 4, 2, or 0 positive real roots.
To find the negative roots:
Work out P(−x) first, then count the variations in sign.
There is no sign change in P(−x), which means there are no negative real solutions.
For our final answer, there are 6, 4, 2, or 0 positive real roots, and no negative real roots.
|
2022-06-25 04:13:39
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8836285471916199, "perplexity": 420.79081858447756}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103034170.1/warc/CC-MAIN-20220625034751-20220625064751-00760.warc.gz"}
|
https://lucatrevisan.wordpress.com/category/technology/page/2/
|
# Hinges fail
About two years ago I bought a MacBook Air. I was worried about the lack of an optical disk reader, about the inaccessible battery and the presence of only one USB connection, but in the past two years I don’t remember ever needing the disk reader on the road (I have an external one at home), only a few times I wished for an extra USB port, and the battery has been holding up all right.
I have, however, encountered some completely unexpected problems. One is that the machine overheats very quickly if it does a computation-intensive task, and it has the “feature” that, if the temperature gets high, it shuts down one of the cores and makes the other go at about 40% speed. This means that it is not possible to connect it to a tv to watch movies from netflix or tv shows from hulu, because within half an hour it reaches the temperature that triggers the slowdown, at which point it skips so many frames that the movie looks like a slide show.
I also wish I could add more RAM. (The memory chips are soldered on the motherboard.)
Then last week I heard a cracking noise when opening it, and the screen would fall back instead of holding its position.
Despite the seemingly sturdy metal construction, the hinges had cracked:
Well, not a big deal, I thought, how expensive can replacement hinges be? Very expensive, a google search revealed.
The design of the computer is such that to repair the hinges they need to replace the screen. Not just the metal shell that covers the screen, but the LCD screen itself too, for a cost in the ballpark of $800.
“Thankfully,” Apple will repair it for free even if my warranty has long expired.
The announcement of the free repair offer tells a story all by itself. First, that there should be an official policy for this problem shows how many people had this problem and how defective the original design was. (A google search also shows that.) Notice the compound design failures of having a break that (1) is so expensive to fix and (2) is so likely to occur. Second, the announcement offers a refund to those who paid for the repair in the past: indeed for a while people were having this break while their warranty was active and Apple would make them pay for the repair, claiming that the users were responsible for breaking the hinges, evidently because they used the computer in a way that it was not designed for, such as by opening and closing it on occasion.
# Presenting a Beamer Talk the Right Way
I sometimes use Beamer (a LaTex package) to prepare slides for conference or seminar presentations, and sometimes I use Keynote.
Keynote has a simple but very desirable feature: during the presentation, the laptop screen, instead of going blank or showing a copy of the current slide, shows the current slide, and the next slide, and a timer. If you have ever used Keynote, you know how useful it is to have the time and the next slide always in front of you.
When a slide presentation is prepared with Beamer, the output is a pdf file which is then displayed using Acrobat Reader, or the OS X Preview application, and one gets a blank screen on the laptop during the presentation. Since pdf handling is built natively into OS X, and since a timer and a next-slide display are really simple things, I assumed there would be some program that would do a Keynote-style presentation from a pdf file.
Unfortunately, I wasn’t able to find any such thing for OS X. (Interestingly, there is a program for Windows that does that.)
Thankfully, Melissa O’Neil has done the next best thing, or maybe an equally good thing: a program that converts a pdf file into a Keynote file. So you can create your pdf with Beamer, then convert it to the Keynote format, and use Keynote to display the presentation.
Not the cleanest of workflows, but it works. Thanks, Melissa O’Neil!
# Collaborative Filtering and Partitioning
To decide the scheduling of the coming FOCS, and of any conference with parallel sessions, it would be great to have the following tool: a site where prospective participants can browse the list of accepted papers and their abstracts, decide which talks sound interesting, and select them in a checklist. After a while, based on the lists and some simple algorithm (probably, spectral techniques would work), a program automatically selects a schedule with few conflicts and with similar papers in the same session.
Optionally, after the schedule is finalized and the submission of lists is closed, the site could send a “you might also be interested in…” list to the registered users who sent in their lists, as a “reward.”
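To make the "simple algorithm" remark slightly more concrete, here is one way such a site could split papers into two parallel tracks (purely a hypothetical sketch, not an existing tool): build a graph whose edge weights count how many participants want both papers, then bisect it along the Fiedler vector, so that papers wanted by the same people tend to land in the same track rather than competing head-to-head:

```python
import numpy as np

def split_into_two_tracks(interest_lists, n_papers):
    """interest_lists: one list of paper indices per participant.
    Returns a 0/1 track label per paper via spectral bisection."""
    W = np.zeros((n_papers, n_papers))
    for papers in interest_lists:
        for i in papers:
            for j in papers:
                if i != j:
                    W[i, j] += 1.0        # co-interest weight
    L = np.diag(W.sum(axis=1)) - W        # graph Laplacian
    _, vecs = np.linalg.eigh(L)
    fiedler = vecs[:, 1]                  # eigenvector of 2nd-smallest eigenvalue
    return (fiedler > np.median(fiedler)).astype(int)
```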
Does something like this exist? If not, would any reader(s) want to take it on as a volunteer project? It would have to be done within the next two weeks or so, but I believe that for someone who knows how to use the right tools it is a matter of a couple of days of coding. I can host the site at Stanford.
# Mathematical Research and New Collaboration Tools
Next Tuesday, May 18, at noon California time, I will speak (in Italian) in a free `webcast’ organized by Oilproject on the impact of new collaboration and communication tools in mathematical research. (A recording will be available after the event.)
Those who watch it live can ask questions, and I haven’t planned very specifically what I will talk about, hoping that the questions will drive the discussion. Two things that I want to talk about are:
• the story of Polymath’s combinatorial proof of the Density Hales–Jewett theorem, demonstrating the viability of a “massively collaborative” approach to mathematical research, and what it means for mathematics, both “philosophically” and practically.
• the way mathematical blogs have become an effective way to disseminate the kind of mathematical lore (the insights, the concrete ways of visualizing very abstract constructions, the facts that are "well known" to experts and "implicit" in classic papers, but impossible to see for the non-experts, etc.) that cannot be found in monographs and research articles, and that, previously, was exclusively handed down from advisor to advisee and from colleague to colleague. This will make it much easier for the brilliant students who don't happen to be in the top schools to master their research area and make new breakthroughs.
This will be part of a series of webcasts on how new communication technologies affect the economy, news, technologies, science, etc., with some notable speakers. Last week Stefano Andreoli spoke about spinoza.it, which is roughly The Onion of Italy.
# The Mystery Lamp
I have to buy a lamp for my office desk, and I was planning to either buy the Tizio lamp, which I have at home and that I think is the most beautiful desk lamp in production, or the Tolomeo, which works better in terms of actually illuminating, and is almost as good-looking.
Searching online, I found out that, apparently, since 2006 Artemide has been making an LED version of the Tolomeo, which looks amazing. I have, however, never seen it in a store, I haven't been able to find any review on the web other than the marketing information from Artemide, and, while it can be ordered online, it seems that no store in San Francisco has it in stock to check out.
Usually, the light of LEDs is cold and unpleasant, so I wouldn't buy it sight unseen. (Plus, it's supposed to be quite small, so I am afraid it could be too small for a large desk.) So I turn to the powers of the internet: have you, or maybe your roommate's sister-in-law's cousin, seen it in real life?
If not, has any of you used an LED lamp as a desk light? How did it work out for you?
# LaTeX2WP minor update
LaTeX2WP is a program that converts a LaTeX file into something that is ready to be cut and pasted into the WordPress online editor. It makes it easier to write mathematical posts, to post lecture notes on WordPress, and so on.
A new version is now available, which fixes a couple of bugs:
• WordPress has trouble if a mathematical expression containing $<$ is followed by a mathematical expression containing $>$. This is prevented by converting the inequality symbols to their HTML "character codes" (see the sketch after this list).
• The previous version of LaTeX2WP had trouble with long sentences in square brackets; this is fixed.
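The `<`/`>` fix is the standard HTML-entity escaping trick; schematically (a hypothetical two-line illustration, not the program's actual code, which applies it only where needed):

```python
def escape_angle_brackets(s):
    # HTML character codes stop WordPress from reading "<...>" as a tag.
    return s.replace("<", "&lt;").replace(">", "&gt;")

print(escape_angle_brackets("a < b and c > d"))   # a &lt; b and c &gt; d
```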
In addition, \S for § and \v{C} for Č (as “in Stone–Čech compactification”) now work.
# LaTeX2WP Update
I have posted a new version of LaTeX2WP, a program that converts a LaTeX file into a format that is ready to be copied into WordPress.
The new version fixes some bugs and has some new features:
• Thanks to code contributed by Radu Grigore, it is now possible to nest bold inside italic, and there is better overall support for font styles. (Which is also easier to customize.)
• `eqnarray*` is now supported.
• LaTeX commands are correctly "tokenized" before certain macros are applied. For example, LaTeX2WP recognizes the macro \P for \mathop{\mathbb P}. In the previous version, however, the above transformation would also be applied to the initial \P in \Phi. Thanks to Terry Tao and "ccarminat" for noticing this bug.
• Commands such as \$ and \% now work in math mode. (\& does not work, but this seems to be a WordPress problem.) Thanks to Atri Rudra for noticing this bug.
• Quotation marks are now applied to the URL in \href and \hrefnosnap. The lack of such quotation marks sometimes created problems with \hrefnosnap.
# Converting LaTeX to WordPress
Last month, I wrote a program that converts a LaTeX document into a format that is ready to be copied and pasted into the WordPress editor.
I have been using it to post the notes of my cryptography class here, as well as some other posts.
Terry Tao has tested it on a couple of posts. Thanks to his feedback, the current version, while surely bug-filled and very limited, is stable enough to be used by other people. It is now available to anybody who might be interested.
What is the point of this program?
|
2022-09-30 18:36:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3640795946121216, "perplexity": 1143.723715122433}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00375.warc.gz"}
|
http://simbad.u-strasbg.fr/simbad/sim-ref?bibcode=2016ApJ...832L..21A
|
2016ApJ...832L..21A - Astrophys. J., 832, L21-L21 (2016/December-1)
Upper limits on the rates of binary neutron star and neutron star-black hole mergers from Advanced LIGO's first observing run.
Abstract (from CDS):
We report here the non-detection of gravitational waves from the merger of binary-neutron star systems and neutron star-black hole systems during the first observing run of the Advanced Laser Interferometer Gravitational-wave Observatory (LIGO). In particular, we searched for gravitational-wave signals from binary-neutron star systems with component masses ∈ [1, 3] M⊙ and component dimensionless spins < 0.05. We also searched for neutron star-black hole systems with the same neutron star parameters, black hole mass ∈ [2, 99] M⊙, and no restriction on the black hole spin magnitude. We assess the sensitivity of the two LIGO detectors to these systems and find that they could have detected the merger of binary-neutron star systems with component mass distributions of 1.35 ± 0.13 M⊙ at a volume-weighted average distance of ∼70 Mpc, and for neutron star-black hole systems with neutron star masses of 1.4 M⊙ and black hole masses of at least 5 M⊙, a volume-weighted average distance of at least ∼110 Mpc. From this we constrain with 90% confidence the merger rate to be less than 12,600 Gpc^-3 yr^-1 for binary-neutron star systems and less than 3600 Gpc^-3 yr^-1 for neutron star-black hole systems. We discuss the astrophysical implications of these results, which we find to be in conflict with only the most optimistic predictions. However, we find that if no detection of neutron star-binary mergers is made in the next two Advanced LIGO and Advanced Virgo observing runs we would place significant constraints on the merger rates. Finally, assuming a rate of 10^{+20}_{-7} Gpc^-3 yr^-1 for short gamma-ray bursts beamed toward the Earth, and assuming that all short gamma-ray bursts have binary-neutron star (neutron star-black hole) progenitors, we can use our 90% confidence rate upper limits to constrain the beaming angle of the gamma-ray burst to be greater than 2.3°^{+1.7}_{-1.1} (4.3°^{+3.1}_{-1.9}).
Journal keyword(s): binaries: general - gamma-ray burst: general - gravitational waves - stars: black holes - stars: neutron
|
2020-06-07 00:54:29
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8590031862258911, "perplexity": 2931.1149794773905}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348521325.84/warc/CC-MAIN-20200606222233-20200607012233-00285.warc.gz"}
|
https://tug.org/pipermail/texhax/2004-November/003132.html
|
# [texhax] [listing.sty] \item[lstinline{foo}] howto?
Pavel Minev Penev pavpen at berkeley.edu
Thu Nov 18 04:08:39 CET 2004
On Wed, Nov 17, 2004 at 02:22:42PM +0100, Philip G. Ratcliffe wrote:
> > The example doesn't show the required result (the argument
> > of the \item is hidden).
> > \documentclass{scrartcl}
> > \usepackage{listings}
> > \begin{document}
> > \begin{itemize}
> > \item[\lstinline{ze_def.h}] ZE-spezifische Definitionen
> > \end{itemize}
> > \end{document}
> > What do I have to do to get the correct result? I tried already
> > \item[\protect\lstinline{bar}] and something like this without
> > getting any improvement.
>
> I suspect it's a bug or at any rate \lstinline is not supposed to be used
> there.
>
> Look at what this gets you:
>
> \begin{itemize}
> \item[\lstinline!ze_def.h!] ZE-spezifische Definitionen
> \item
> \end{itemize}
>
> Why not try the package author?
I don't have time to look at this, but as far as I remember, listings
plays with the catcodes of characters at input (as most verbatim
environments do). Thus, if you try to use a listing anywhere but at the
top level of macro expansion, it will not work, because the catcodes of
the input characters will already have been fixed by the time listings
sees them.
Just my guess,
--
Pav http://larouchepac.com/pdf_files/child_satan_book.pdf
,., http://wlym.com/articles/minnicino_frankfurt.doc
,:'', http://www.bilderberg.org/ccf.htm
{o ! o} http://en.wikipedia.org/wiki/Fabian_Society
shell\$ gpg --keyserver x-hkp://search.keyserver.net:11371 --recv-key 164C028F
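A workaround consistent with the catcode explanation above (a sketch, not from the original thread; the box name \zedefbox is my own) is to typeset the \lstinline material into a box at top level, where listings can still set catcodes, and then use the already-typeset box inside the fragile \item argument:

```latex
\documentclass{scrartcl}
\usepackage{listings}

% Box register to hold the pre-typeset \lstinline material.
\newsavebox{\zedefbox}

\begin{document}

% At top level the catcode changes made by listings still take
% effect, so the verbatim scan succeeds here.
\begin{lrbox}{\zedefbox}
  \lstinline!ze_def.h!
\end{lrbox}

\begin{itemize}
  % Reusing the saved box avoids re-reading the verbatim text
  % inside \item's optional argument, where scanning would fail.
  \item[\usebox{\zedefbox}] ZE-spezifische Definitionen
\end{itemize}

\end{document}
```

This is the same trick commonly used to smuggle \verb material into footnotes and captions: the verbatim scanning happens once, in a safe context, and only the finished box is moved around.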
|
2023-04-01 21:14:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9584896564483643, "perplexity": 10504.898451931356}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950247.65/warc/CC-MAIN-20230401191131-20230401221131-00426.warc.gz"}
|
https://academic.oup.com/scan/article-lookup/doi/10.1093/scan/nss139
|
## Abstract
Research has illustrated that the brain regions implicated in moral cognition comprise a robust and broadly distributed network. However, understanding how these brain regions interact and give rise to the complex interplay of cognitive processes underpinning human moral cognition is still in its infancy. We used functional magnetic resonance imaging to examine patterns of activation for ‘difficult’ and ‘easy’ moral decisions relative to matched non-moral comparators. This revealed an activation pattern consistent with a relative functional double dissociation between the temporoparietal junction (TPJ) and ventro-medial prefrontal cortex (vmPFC). Difficult moral decisions activated bilateral TPJ and deactivated the vmPFC and OFC. In contrast, easy moral decisions revealed patterns of activation in the vmPFC and deactivation in bilateral TPJ and dorsolateral PFC. Together these results suggest that moral cognition is a dynamic process implemented by a distributed network that involves interacting, yet functionally dissociable networks.
## INTRODUCTION
Over the past decade, neuroscientists exploring moral cognition have used brain imaging data to map a ‘moral network’ within the brain (Young and Dungan, 2011). This network encompasses circuits implicated in social, emotional and executive processes. For example, moral emotions appear to activate the limbic system (Shin et al., 2000) and temporal poles (Decety et al., 2011), while reasoned moral judgments reliably engage fronto-cortical areas (Berthoz et al., 2002; Heekeren et al., 2003; Kedia et al., 2008; Harenski et al., 2010). The distributed nature of the network reflects the fact that prototypical moral challenges recruit a broad spectrum of cognitive processes: inferring people’s intentions, integrating social norms, computing goal-directed actions, identifying with others and displaying empathic behavior (Moll et al., 2008).
The initial focus within the research field was to explore whether moral decisions have a specific neural signature. This reflected the early dominance of neurocognitive models which argued for the unique properties of moral deliberation. One such theory endorsed the idea that we are endowed with an innate human moral faculty: our moral judgments are mediated by an unconscious mechanism which evaluates good vs bad (Hauser, 2006). Another theory suggested that moral choices are driven by intuitive emotions: in other words, we feel our way through knowing what is right and wrong (Haidt, 2001). However, as the imaging data accumulated, the theoretical emphasis shifted toward the view that the psychological processes underlying moral choices recruit socio-emotional and cognitive processes that are domain general (Moll et al., 2005). As opposed to a unique moral faculty, the evidence reflected the fact that moral choices reliably engage a delineated neural network which is also observed within the non-moral domain (Young and Dungan, 2011). In line with this view, one theory postulates that emotional processes and reason work in competition: controlled processes of cognition and automatic processes of emotion vie with each other to ‘work out’ a moral judgment (Greene et al., 2001). An alternative model suggests that reason and emotion do not act as competitive systems, but instead interact in a continuously integrated and parallel fashion (Moll et al., 2008).
Reflecting this theoretical shift, more recent research efforts have used experimental probes to fractionate the moral network into constituent parts and illustrate relative dissociations. That is, distinct regions of the broad moral network are responsible for different putative components of moral cognition, and this likely mirrors domain-general processing distinctions. For example, there is now a compelling body of evidence that the anterior cingulate cortex (ACC) underpins processes of error detection and conflict monitoring across multiple cognitive contexts. This knowledge has been fruitfully applied to the moral domain in work showing that high-conflict moral dilemmas—when compared with low-conflict moral dilemmas—recruit the ACC (Greene et al., 2004). Similarly, the temporoparietal junction (TPJ) seems to subserve the general capacity to think about another’s perspective in socially contextualized situations and is reliably activated when participants deliberate over moral dilemmas where the ability to appreciate the interpersonal impact of a decision is paramount (Young et al., 2007, 2011; Young and Saxe, 2009). This approach has also proved productive in elucidating the role of the ventro-medial prefrontal cortex (vmPFC) in coding socio-emotional knowledge, such as stereotypes (Gozzi et al., 2009) and moral emotions—such as pride (Tangney et al., 2007), embarrassment (Zahn et al., 2009) and guilt (Moll et al., 2011). Likewise, the dorsolateral PFC (dlPFC) appears to underpin cognitive control, reasoned thinking (Mansouri et al., 2009), abstract moral principles (Moll et al., 2002) and sensitivity to unfairness (Sanfey et al., 2003). Finally, a similar rationale has informed research controlling for cognitive load (Greene et al., 2008), semantic content (Takahashi et al., 2004), emotional arousal and regulation (Moll and de Oliveira-Souza, 2007; Decety et al., 2011), probability (Shenhav and Greene, 2010), intent (Berthoz et al., 2002; Young and Saxe, 2011) and harm (Kedia et al., 2008), in each case revealing distinct patterns of neural activation within the broader moral network.
Although this broad approach of deconstructing the moral network has clearly been very productive, it rests on an important assumption: that we can experimentally isolate different components of the moral network in the brain by varying the relevant processing parameters (conflict, harm, intent and emotion) while keeping others constant (Christensen and Gomila, 2012). Another possibility of course is that varying any given parameter of a moral decision has effects on how other involved parameters operate. In other words, components of the moral network may be fundamentally interactive.
This study investigated this issue by building on prior research examining the neural substrates of high-conflict (difficult) vs low-conflict (easy) moral decisions (Greene et al., 2004). Consider for example the following two moral scenarios. First, while hiding with your family during wartime your baby starts to cry; would you suffocate your crying baby in order to save the rest of your family from being discovered and killed by soldiers? Second, you are out with your family when you come across a child who has clearly been assaulted and is lying by the side of the road crying; do you assist them and call for help? Both of these decisions involve processing of ‘right’ and ‘wrong’ in terms of socially constructed moral rules. Both also have emotionally laden consequences and require processing of others’ points of view (theory of mind). However, the first decision feels much more difficult than the second, involves a greater degree of mental conflict, will elicit more deliberation and will be met with less unanimity as to the ‘correct’ choice (Greene et al., 2004). Together, these two scenarios clearly represent the ends of a moral continuum and offer a powerful illustration of the extent to which moral decisions can engage us in very discrepant ways.
The key question is exactly how patterns of neural activation in the moral network might differ when processing these varied classes of moral challenge. One possibility is that network activation will only differ as a function of the different cognitive parameters recruited (i.e. conflict resolution, engagement of systems involved in deliberative reasoning). If this were the case, difficult moral decisions may only differ from easy moral decisions in their recruitment of the dlPFC and ACC (Greene et al., 2004). However, another possibility is that varying decision difficulty will have interactive effects on the recruitment of other components of the moral network. In other words, both classes of moral choice might require significant and broadly comparable appreciation of how the people involved will be affected by any choice that is made (i.e. theory of mind). If this were the case, mPFC and TPJ—regions known to be associated with perspective taking—may be recruited for both difficult and easy decisions. Such a finding would suggest that a shared cognitive process underlies a broad spectrum of moral challenges. However, it is also plausible that easy moral decisions solely rely on automatic and reflexive processing—which is often associated with limbic activation (Moll et al., 2005). A further possibility is that the interplay and interactive effect of these various cognitive processes may engage some regions while disengaging others. For example, an easier moral decision may elicit less activation (or even deactivation) in the dlPFC simply because any dlPFC engagement would be redundant, or even a source of interference, when choices are reflexive and automatic.
We sought to investigate these various possibilities using functional magnetic resonance imaging (fMRI) while participants negotiated difficult vs easy moral decisions. Critically, we also included matched difficult and easy non-moral decision conditions. This allowed us not only to evaluate differences within the moral domain as a function of decision difficulty but also to investigate whether manipulation of ‘difficulty’ changes the pattern of activation in other regions of the moral network—relative to activation patterns for comparable non-moral choices. In other words, does moral cognition make flexible use of different regions of the moral network as a function of the demands of the moral challenge?
## MATERIALS AND METHODS
### Subjects
Overall, 89 subjects participated in the research reported here. Fifty-one subjects assisted us in rating the scenarios (mean age 29.6 years and s.d. ±7.2; 30 females). Thirty-eight subjects (all right handed, mean age 24.6 years and s.d. ±3.8; 22 females) participated in the main experiment and underwent fMRI. Three additional subjects were excluded from fMRI analyses due to errors in acquiring scanning images. Subjects were compensated for their time and travel. All subjects were right-handed, had normal or corrected vision and were screened to ensure no history of psychiatric or neurological problems. All subjects gave informed consent, and the study was approved by the University of Cambridge, Department of Psychology Research Ethics Committee.
### Materials

In an initial stage of materials development, we created four categories of scenario for use in the imaging study: Difficult Moral Scenarios; Easy Moral Scenarios; Difficult Non-Moral Scenarios and Easy Non-Moral Scenarios. To achieve this, subjects (N = 51) were presented with a set of 65 moral and non-moral scenarios and asked which action they thought they would take in the depicted situation (a binary decision), how comfortable they were with their choice (on a five-point Likert scale, ranging from ‘very comfortable’ to ‘not at all comfortable’), and how difficult the choice was (on a five-point Likert scale, ranging from ‘very difficult’ to ‘not at all difficult’). This initial stimulus pool included a selection of 15 widely used scenarios from the extant literature (Greene et al., 2001; Valdesolo and DeSteno, 2006; Crockett et al., 2010; Kahane et al., 2012; Tassy et al., 2012) as well as 50 additional scenarios describing more everyday moral dilemmas that we created ourselves. These additional 50 scenarios were included because many of the scenarios in the existing literature describe extreme and unfamiliar situations (e.g. deciding whether to cut off a child’s arm to negotiate with a terrorist). Our aim was for these additional scenarios to be more relevant to subjects’ backgrounds and understanding of established social norms and moral rules (Sunstein, 2005). The additional scenarios mirrored the style and form of the scenarios sourced from the literature; however, they differed in content. In particular, we over-sampled moral scenarios for which we anticipated subjects would rate the decision as very easy to make (e.g. would you pay 10 to save your child’s life?), as this category is vastly under-represented in the existing literature. These scenarios were intended as a match for non-moral scenarios that we assumed subjects would classify as eliciting ‘easy’ decisions [e.g. would you forgo using walnuts in a recipe if you do not like walnuts? (Greene et al., 2001)]—a category of scenarios that is routinely used in the existing literature as control stimuli. Categorization of scenarios as moral vs non-moral was carried out by the research team prior to this rating exercise. To achieve this, we applied the definition employed by Moll et al. (2008), which states that moral cognition altruistically motivates social behavior. In other words, choices which can either negatively or positively affect others in significant ways were classified as reflecting moral issues. Independent unanimous classification by the three authors was required before assigning scenarios to the moral vs non-moral category. In the event, there was unanimous agreement for every scenario rated.

We used the participants’ ratings to operationalize the concepts of ‘easy’ and ‘difficult’. First, we examined participants’ actual yes/no decisions in response to the scenarios. We defined difficult scenarios as those where there was little consensus about what the ‘correct’ decision should be and retained only those where the subjects were more or less evenly split as to what to do (scenarios where the mean proportion of responses was between 0.45 and 0.55 on the binary choice). In contrast, we defined easy scenarios as those where there was a strong consensus (either >0.80 or <0.20); this screen is stated compactly below. For these retained scenarios, we then examined participants’ actual difficulty ratings.
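The consensus-based screen just described can be written compactly as follows (notation mine; p̄ denotes the mean proportion of subjects endorsing the depicted action for a given scenario, with thresholds as reported in the text):

```latex
% Consensus-based screening of scenarios (notation mine; the
% thresholds are those reported in the Methods text).
\[
  \text{Difficult: } 0.45 \le \bar{p} \le 0.55,
  \qquad
  \text{Easy: } \bar{p} > 0.80 \ \text{or} \ \bar{p} < 0.20 .
\]
```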
Scenarios that consistently (≥80% of the time) received high difficulty ratings (four or five on our five-point scale) or low difficulty ratings (one or two on the scale) were categorized as Difficult or Easy scenarios, respectively. This gave us 24 scenarios in the final set, 6 in each of our four categories (difficulty scores for each category: DM mean 3.2, s.d. ±0.71; DNM mean 2.9, s.d. ±0.70; EM mean 1.2, s.d. ±0.28; ENM mean 1.3, s.d. ±0.35). Of these 24, 6 came from the stimulus set drawn from the existing literature (Greene et al., 2001) and a further 18 came from our supplementary set. We then carried out a number of additional checks of potential between-category differences that we felt might drive behavioral and neural responses in our study. Consequently, we had a subset of the subjects (n = 15) rate each scenario on four further dimensions, all on five-point Likert scales. These comprised: (i) How much effort is required to complete the action resulting from your decision?; (ii) How much effort is required to weigh up each aspect/component of this scenario?; (iii) How many aspects/components did you consider when making your decision? and (iv) How emotionally involving is this scenario? We wanted to ensure that the two sets of Difficult scenarios were rated as more effortful and complex (ratings 1, 2 and 3) than the two sets of Easy scenarios, but that there were no differences on these ratings within the Difficult and Easy pairings. The data showed that this was the case [main effects of difficulty for ratings 1, 2 and 3 (Fs > 49.74, Ps < 0.001), but no effects of difficulty within the pairings]. We also wanted to verify that the two sets of Moral scenarios were rated as more emotive (as we would predict) than the two sets of Non-Moral scenarios (as was the case, t = −13.37; P < 0.001; paired samples t-test, two-tailed), but that there were no differences within either the Moral or Non-Moral pairings (paired ts < 0.18), importantly illustrating that the difficult and easy scenarios in the moral and non-moral domains were matched on how emotionally involving they were. Finally, we ensured that the stimuli were matched for word length across categories [F(3,20) = 0.51, P = 0.68; DM word count mean 86.3, s.d. ±25.3; EM word count mean 92.0, s.d. ±20.1; DNM word count mean 90.2, s.d. ±18.6; ENM word count mean 79.3, s.d. ±9.7].

### Functional MRI procedure

Within the scanner, subjects were presented with the 24 written scenarios. We structured our task using an event-related design, which closely mimicked past fMRI designs within this literature (Greene et al., 2001). Scenarios were randomly presented in a series of four blocks with six trials (scenarios) per block. Each trial was presented as text through a series of two screens, the first of which described the short scenario and the second of which asked whether the subject would do the relevant action, requiring a yes/no button press (Figure 1a). Subjects read each scenario and question at their own pace (up to 25 s for the scenario and 15 s to make their choice) and pressed a button to advance through the screens. Between each trial, a fixation cross was displayed for 2 s. At the end of each block, there was an inter-block interval (IBI) of 16 s to allow the hemodynamic response function to return to baseline. Baseline was defined as the mean signal across the last four images of this 16 s IBI. Neural activity was measured using the floating window method (Greene et al., 2001).
This method isolates the decision phase by including the time around the decision—8 s before the response, 1 s during the response and 6 s following the response—for a total of 15 s of recorded activity for every response. The rationale for using the floating window approach is not only to account for the 4–6 s delay between a psychological event and the hemodynamic response but also to create a flexible analysis structure for a complex, self-paced task.

Fig. 1. (a) Experimental design. Subjects were presented with each scenario over two screens, the first describing the scenario and the second posing a question about their response to it. Subjects were required to select yes or no to make a choice. A fixation cross was presented for 2 s at the start of each trial. (b) Difficulty ratings from the subjects completing the fMRI study revealed that the categories Difficult/Easy and Moral/Non-Moral were controlled and matched across conditions as rated on a five-point Likert scale.

### Imaging acquisition

MRI scanning was conducted at the Medical Research Council Cognition and Brain Sciences Unit on a Siemens 3-Tesla Tim Trio MRI scanner using a head coil gradient set. Whole-brain data were acquired with echoplanar T2*-weighted imaging, sensitive to BOLD signal contrast (48 sagittal slices, 3 mm thickness; TR = 2400 ms; TE = 30 ms; flip angle = 78°; FOV = 192 mm). To allow for equilibration effects, the first eight volumes were discarded. T1-weighted structural images were acquired at a resolution of 1 × 1 × 1 mm.

### Imaging processing

Statistical parametric mapping software (SPM5: www.fil.ion.ucl.ac.uk/spm/software/spm5/) was used to analyze all data. Preprocessing of fMRI data included spatial realignment, coregistration, normalization and smoothing. The first eight scans were discarded as dummy scans. To control for motion, all functional volumes were realigned to the mean volume. Images were spatially normalized to standard space using the Montreal Neurological Institute (MNI) template with a voxel size of 3 × 3 × 3 mm and smoothed using a Gaussian kernel with an isotropic full width at half maximum of 8 mm. Additionally, high-pass temporal filtering with a cut-off of 128 s was applied to remove low-frequency drifts in signal.

### Data analysis

After preprocessing, statistical analysis was performed using the general linear model. Activated voxels were identified using an event-related statistical model representing each of the response events, convolved with a canonical hemodynamic response function and mean corrected. Six head-motion parameters defined by the realignment were added to the model as regressors of no interest. Analysis was carried out to establish each participant’s voxel-wise activation when subjects made their response regarding each scenario (the aforementioned fixed 15 s floating window approach). For each subject, contrast images were calculated for each of the four scenario categories.
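As a reminder of what this first-level model estimates (standard mass-univariate GLM notation; this formulation is assumed here, not reproduced from the paper):

```latex
% Standard fMRI GLM (textbook form, assumed here): y is one voxel's
% BOLD time series, X the design matrix whose task columns are event
% (stick) functions s convolved with a canonical HRF h, plus the six
% motion regressors; beta is estimated voxel-wise by least squares.
\[
  y = X\beta + \varepsilon,
  \qquad
  X_{\text{task}}(t) = (s * h)(t),
  \qquad
  \hat{\beta} = (X^{\top}X)^{-1} X^{\top} y .
\]
```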
These first-level contrasts were then aggregated into second-level full factorial analyses of variance (ANOVAs) in order to compute group statistics. We report activity at P < 0.001 uncorrected for multiple spatial comparisons across the whole brain, and P < 0.05 family-wise error (FWE) corrected for the following a priori regions of interest (ROIs; attained by independent coordinates): TPJ, ACC, dlPFC and vmPFC, reflecting the ‘moral network’ (coordinates listed in tables). Coordinates were taken from previous related studies.

## RESULTS

### Manipulation check: behavioral data

To validate our a priori allocation of scenarios to the Easy and Difficult categories based on participants’ ratings, we administered a post-scan questionnaire to assess how difficult the fMRI subjects reported finding the scenarios using the same five-point Likert scale of difficulty. A repeated measures ANOVA with two within-subjects factors, Difficulty (difficult and easy) and Morality (moral and non-moral), confirmed the expected main effect of difficulty (F(1,36) = 287.27, P < 0.001), with Difficult scenarios rated as more difficult than Easy scenarios (Figure 1b). As anticipated, the main effect of morality and the morality by difficulty interaction were not significant, indicating that there was no support for self-reported differences in difficulty between moral and non-moral scenarios and no support for any differential discrepancy between difficult vs easy scenarios in the moral compared with non-moral domains (Fs < 2.62, Ps > 0.13). As a further validation of our a priori categorization of scenarios as Difficult or Easy, we also examined response patterns for each of the different categories. Subjects had near perfect agreement in their responses for Easy decisions (98% of the subjects responded in the same manner). However, for Difficult scenarios, there was little consensus in response selection (only 57% of the subjects responded in the same manner). A repeated measures ANOVA exploring reaction times (Greene et al., 2004) offered further support for this Difficult–Easy distinction, as Difficult scenarios (mean 4.0 s, s.d. ±1.6) took significantly longer to respond to than Easy scenarios (mean 3.1 s, s.d. ±1.1) (F(1,36) = 24.34, P < 0.001). Interestingly, moral scenarios (mean 3.65 s, s.d. ±0.14) also took slightly longer to respond to relative to non-moral scenarios (mean 3.43 s, s.d. ±0.15), likely reflecting their higher emotional impact (F(1,36) = 5.35, P = 0.027). There was therefore also a significant difficulty by morality interaction (F(1,36) = 143.14, P < 0.001), reflecting the fact that the moral–difficult scenarios took the longest to respond to.

### Imaging results

We contrasted neural activation associated with making a decision for each of the four categories against one another: Easy Moral, Difficult Moral, Difficult Non-Moral and Easy Non-Moral. To explore potential interactions among the four conditions and to verify that overall the current scenarios elicited activations consistent with the moral network described in the literature (Moll et al., 2005), we ran a full factorial Morality × Difficulty ANOVA (Morality × Difficulty interaction). A whole-brain analysis of the interaction term (thresholded at P = 0.001 uncorrected) revealed a robust network of areas including bilateral TPJ, mid temporal poles, vmPFC, dACC and dlPFC (Figure 2; a full list of coordinates can be found in Table 1).
We then examined a priori ROIs (Greene et al., 2001; Young and Saxe, 2009) (thresholded at FWE P = 0.05) to determine if this network specifically overlapped with the regions delineated within the literature. As expected, the vmPFC, ACC and bilateral TPJ ROIs revealed significant activation for the interaction term. The interaction term qualified significant main effects of Morality and Difficulty. Although these activations are subsumed by the interaction, for completeness, we report them in Tables 2 and 3.

Fig. 2. F-test examining the interaction of the factors Morality and Difficulty. This contrast reveals activation of the moral network traditionally described in the literature, consisting of the TPJ (bilaterally), vmPFC, dlPFC and dACC. The red circles indicate the location of the regions used in the ROI analysis (taken from a priori coordinates), all thresholded at P < 0.05 FWE.

Table 1. ANOVA F-test interaction Morality × Difficulty

| Region | Peak MNI coordinates | F-statistic/z-value |
| --- | --- | --- |
| Medial OFC | 0 56 −2 | 21.89/4.36 |
| Left ACC | −10 42 −4 | 17.95/3.95 |
| Left dlPFC | −24 52 10 | 14.13/3.49 |
| Right TPJ | 56 −40 −4 | 20.17/4.19 |
| Right TPJ | 58 −52 14 | 13.73/3.43 |
| Left TPJ | −56 −52 −2 | 16.67/3.80 |
| Left TPJ | −50 −52 −12 | 14.23/3.50 |
| Left ACC | −6 28 30 | 18.30/3.98 |
| Right mid frontal gyrus | 38 12 30 | 15.32/3.64 |
| Left precentral gyrus | −52 −2 48 | 13.75/3.44 |
| Right precentral gyrus | 46 36 | 11.54/3.71 |

| A priori ROIs | MNI coordinates | F-statistic/z-value |
| --- | --- | --- |
| aACC | 34 26 | 18.30/3.98 |
| aMiddle frontal gyrus | −28 49 | 14.13/3.49 |
| bRight TPJ | 54 −59 22 | 12.44/3.36 |
| bRight TPJ | 54 −52 16 | 13.73/3.44 |
| bRight TPJ | 52 −54 22 | 13.04/3.34 |
| bLeft TPJ | −52 −58 20 | 11.14/3.07 |
| bvmPFC | 58 17 | 11.57/3.13 |
| bvmPFC | 62 16 | 12.56/3.28 |
| bvmPFC | 50 −10 | 21.61/4.33 |
| bvmPFC | 50 −4 | 21.89/4.36 |

Notes: We used a priori coordinates to define ROIs in our analysis. All ROIs were selected on the basis of independent coordinates using a sphere of 10 mm, corrected at P < 0.05 FWE, and were attained through MarsBaR. Peak voxels are presented in the tables at P < 0.001 uncorrected and all images are shown at P < 0.005 uncorrected. Cluster size was defined by a minimum of 10 contiguous voxels. All coordinates are in MNI space. ROIs, regions of interest corrected at P < 0.05 FWE using a priori independent coordinates from previous studies: aGreene et al. (2004) and bYoung and Saxe (2009).
Table 2. Main effect of Difficulty (DM + DN > EM + EN)

| Region | Peak MNI coordinates | z-value |
| --- | --- | --- |
| vmPFC | −4 55 12 | 3.10 |

See footnote of Table 1 for more information.

Table 3. Main effect of Morality (DM + EM > DN + EN)

| Region | Peak MNI coordinates | z-value |
| --- | --- | --- |
| TPJ | −44 −78 34 | 3.82 |

See footnote of Table 1 for more information.

As this initial full factorial analysis identified brain areas differing in activity as a function of the interaction of the Morality and Difficulty factors (the TPJ, dACC and vmPFC), our next aim was to deconstruct these interactions to examine functionality within those regions for Difficult and Easy Moral decisions relative to the matched Non-Moral comparison conditions. First, in order to understand which areas are differentially more activated for difficult moral decisions, we compared Difficult Moral with Difficult Non-Moral scenarios (DM > DN) at the whole-brain level. This revealed a network starting at the TPJ and extending the length of the temporal lobe into the temporal pole (Figure 3a and Table 4). These findings demonstrate that difficult moral choices activate a network within the temporal lobe—areas implicated in theory of mind (Young and Saxe, 2009), attentional switching (Tassy et al., 2012), higher order social concepts (Moll et al., 2008) and the understanding of social cues (Van Overwalle, 2009).

Fig. 3. (a) Whole-brain images for the contrast Difficult Moral > Difficult Non-Moral scenarios. The TPJ was activated (shown in yellow) while the vmPFC and bilateral OFC were deactivated (shown in blue: computed as Difficult Non-Moral > Difficult Moral). (b) Whole-brain images for the contrast Easy Moral > Easy Non-Moral scenarios. The vmPFC was activated (shown in yellow) while the TPJ and dlPFC were deactivated (shown in blue: computed as Easy Non-Moral > Easy Moral scenarios). (c) A priori ROIs (indicated by red circles, corrected at FWE P < 0.05) are shown for the conjunction analysis of the contrasts illustrated in Figure 3a and b (vmPFC [−2 54 −4] and TPJ [−52 −46 4]).
Table 4. Difficult Moral > Difficult Non-Moral (DM > DN)

| Region | Peak MNI coordinates | z-value |
| --- | --- | --- |
| Right mid temporal lobe | 56 −2 −14 | 4.04 |
| Right TPJ | 56 −52 14 | 3.55 |
| Left TPJ | −40 −58 16 | 3.74 |
| Right mid temporal lobe | 50 −16 −14 | 3.52 |
| Left mid temporal lobe | −64 −56 10 | 3.61 |
| Left post central gyrus | −54 −6 46 | 3.17 |

| A priori ROIs | MNI coordinates | t-statistic |
| --- | --- | --- |
| aLeft TPJ | −58 −66 22 | 2.84 |
| aRight TPJ | 54 −52 16 | 3.64 |
| aRight TPJ | 54 −59 22 | 3.56 |

ROIs, regions of interest corrected at P < 0.05 FWE using a priori independent coordinates from previous studies: aYoung and Saxe (2009). See footnote of Table 1 for more information.

To reveal brain regions demonstrating relative decreases in activity for difficult moral decisions, Difficult Non-Moral scenarios were contrasted with Difficult Moral scenarios (DN > DM), revealing vmPFC and bilateral orbital frontal cortex (OFC) deactivation (Figure 3a and Table 5). Thus, regions often associated with the moral network were found to be relatively less activated during difficult moral (vs non-moral) decisions once the difficulty of the scenario was controlled for.

Table 5. Difficult Non-Moral > Difficult Moral (DN > DM)

| Region | Peak MNI coordinates | z-value |
| --- | --- | --- |
| MCC | 28 34 | 4.66 |
| vmPFC | 54 | 3.37 |
| Right OFC | 22 46 −12 | 3.98 |
| Left OFC | −26 48 −12 | 4.01 |
| Left anterior insula | −32 16 −10 | 3.37 |
| Right anterior insula | 36 18 −10 | 3.24 |

| A priori ROIs | MNI coordinates | t-statistic |
| --- | --- | --- |
| aACC | 34 26 | 4.84 |
| aMiddle frontal gyrus | −28 49 | 4.20 |
| bvmPFC | 50 −10 | 3.47 |
| bvmPFC | 50 −4 | 3.76 |

ROIs, regions of interest corrected at P < 0.05 FWE using a priori independent coordinates from previous studies: aGreene et al. (2004) and bYoung and Saxe (2009). See footnote of Table 1 for more information.

Using a similar rationale, we compared Easy Moral decisions with Easy Non-Moral decisions (EM > EN), revealing activation of the vmPFC—an area known to integrate emotion into decision making and planning (Moretto et al., 2010). Research has also shown that patients suffering damage to the vmPFC exhibit poor practical judgment (Raine and Yang, 2006; Blair, 2008). Interestingly, there was a pattern of TPJ and dlPFC relative deactivation for Easy Moral decisions (EN > EM) (Figure 3b and Tables 6 and 7).

Table 6. Easy Moral > Easy Non-Moral (EM > EN)

| Region | Peak MNI coordinates | z-value |
| --- | --- | --- |
| vmPFC | −2 54 −4 | 3.64 |
| vmPFC | −12 46 | 3.19 |
| ACC | 30 −6 | 3.32 |
| PCC | −2 60 26 | 3.00 |

| A priori ROIs | MNI coordinates | t-statistic |
| --- | --- | --- |
| avmPFC | 50 −10 | 3.73 |

ROIs, regions of interest corrected at P < 0.05 FWE using a priori independent coordinates from previous studies: aYoung and Saxe (2009). See footnote of Table 1 for more information.
Table 7. Easy Non-Moral > Easy Moral (EN > EM)

| Region | Peak MNI coordinates | z-value |
| --- | --- | --- |
| Right TPJ | 54 −44 −14 | 4.55 |
| Left TPJ | −52 50 −14 | 3.80 |
| Right dlPFC | 46 12 50 | 3.87 |
| Right dlPFC | 52 16 28 | 3.43 |

| A priori ROIs | MNI coordinates | t-statistic |
| --- | --- | --- |
| aLeft TPJ | −51 −46 | 3.17 |

ROIs, regions of interest corrected at P < 0.05 FWE using a priori independent coordinates from previous studies: aBerthoz et al. (2002). See footnote of Table 1 for more information.

Taken together, these patterns of activation and deactivation highlight that difficult moral decisions appear to differentially recruit the TPJ and deactivate the vmPFC, while easy moral decisions appear to differentially deactivate the TPJ and activate the vmPFC, relative to the appropriate non-moral controls. These findings therefore suggest a degree of relative functional dissociation between the TPJ and vmPFC for moral decision making. The TPJ was selectively more engaged for difficult moral decisions, while in contrast, the vmPFC was selectively more activated for easy moral decisions, suggesting that these regions have different functional roles in the moral network. To identify whether this activation and deactivation pattern associated with making difficult moral decisions overlapped with the network showing the reverse pattern implicated in making easy moral decisions, we performed a conjunction analysis. We first applied a conjunction to the contrasts Difficult Moral > Difficult Non-Moral (DM > DN) and Easy Non-Moral > Easy Moral (EN > EM) to clarify whether the TPJ activation associated with the former and the TPJ deactivation associated with the latter were occurring within the same region. A whole-brain analysis revealed bilateral TPJ activation; however, when a priori (Berthoz et al., 2002) ROIs were applied, only the left TPJ survived SVC correction at P < 0.05 FWE (Figure 3c and Table 8). We also ran a conjunction analysis for Easy Moral > Easy Non-Moral (EM > EN) and Difficult Non-Moral > Difficult Moral (DN > DM) to determine whether the vmPFC activations and deactivations found in the original set of contrasts shared a common network. We found robust activity within the vmPFC region both at a whole-brain uncorrected level and when a priori (Young and Saxe, 2009) ROIs were applied (Figure 3c and Table 9).

Table 8. Conjunction: Difficult Moral > Difficult Non-Moral (DM > DN) + Easy Non-Moral > Easy Moral (EN > EM)

| Region | Peak MNI coordinates | z-value |
| --- | --- | --- |
| Right TPJ | 56 42 | 2.80 |
| Left TPJ | −56 −54 −2 | 2.79 |

| A priori ROIs | MNI coordinates | t-statistic |
| --- | --- | --- |
| aLeft TPJ | −52 −46 | 2.83 |

ROIs, regions of interest corrected at P < 0.05 FWE using a priori independent coordinates from previous studies: aBerthoz et al. (2002). See footnote of Table 1 for more information.

Table 9. Conjunction: Easy Moral > Easy Non-Moral (EM > EN) + Difficult Non-Moral > Difficult Moral (DN > DM)

| Region | Peak MNI coordinates | z-value |
| --- | --- | --- |
| vmPFC | 56 | 3.27 |

| A priori ROIs | MNI coordinates | t-statistic |
| --- | --- | --- |
| avmPFC | 50 −4 | 3.37 |

ROIs, regions of interest corrected at P < 0.05 FWE using a priori independent coordinates from previous studies: aYoung and Saxe (2009).
See footnote of Table 1 for more information.

We next investigated whether difficult moral decisions exhibited a neural signature distinct from that of easy moral decisions for our scenarios. Directly comparing Difficult Moral to Easy Moral decisions (DM > EM), bilateral TPJ as well as the right temporal pole were activated specifically for Difficult Moral decisions (Figure 4a and Table 10). A direct contrast of Easy Moral compared with Difficult Moral (EM > DM) revealed a network comprising the left OFC (extending into the superior frontal gyrus), vmPFC and middle cingulate (Figure 4b and Table 11). Interestingly, these results diverge from past findings which indicated that the dlPFC and ACC underpin difficult moral decisions (relative to easy moral decisions), while the TPJ and middle temporal gyrus code for easy moral decisions (relative to difficult moral decisions) (Greene et al., 2004). One explanation for these differential findings may be that in our task, we independently categorized scenarios as difficult vs easy prior to scanning, instead of using each participant’s response latencies as a metric of the difficulty of the moral dilemma (Greene et al., 2004).

Fig. 4. (a) Whole-brain images for the contrast Difficult Moral > Easy Moral scenarios. Bilateral TPJ regions were activated and a priori ROIs were applied to these areas. Parameter estimates of the beta values indicate that the TPJ regions activate significantly more for Difficult Moral decisions than for Easy Moral decisions. (b) Whole-brain images for the contrast Easy Moral > Difficult Moral scenarios reveal significant dACC and OFC activation. A priori ROIs were applied and parameter estimates of the beta values revealed that the dACC and OFC activate significantly more for Easy Moral decisions than for Difficult Moral decisions.

Table 10. Difficult Moral > Easy Moral (DM > EM)

| Region | Peak MNI coordinates | z-value |
| --- | --- | --- |
| Right TPJ | 62 −54 14 | 3.55 |
| Left TPJ | −38 −60 18 | 3.26 |
| Right temporal pole | 56 −18 | 3.26 |

| A priori ROIs | MNI coordinates | t-statistic |
| --- | --- | --- |
| aRight TPJ | 54 −52 16 | 3.63 |
| aLeft TPJ | −46 −62 25 | 3.32 |

ROIs, regions of interest corrected at P < 0.05 FWE using a priori independent coordinates from previous studies: aYoung and Saxe (2009). See footnote of Table 1 for more information.
Table 11. Easy Moral > Difficult Moral (EM > DM)

| Region | Peak MNI coordinates | z-value |
| --- | --- | --- |
| Left OFC | −34 50 −10 | 3.75 |
| Right OFC | 30 62 −4 | 3.00 |
| Left superior frontal gyrus | −20 54 | 3.47 |
| MCC | −6 24 38 | 3.41 |

| A priori ROIs | MNI coordinates | t-statistic |
| --- | --- | --- |
| aACC | 34 26 | 3.24 |
| aMiddle frontal gyrus | −28 49 | 3.59 |

ROIs, regions of interest corrected at P < 0.05 FWE using a priori independent coordinates from previous studies: aGreene et al. (2004). See footnote of Table 1 for more information.

## DISCUSSION

The aim of the study reported here was to examine how the brain processes various classes of moral choices and to ascertain whether specific and potentially dissociable functionality can be mapped within the brain’s moral network. Our behavioral findings confirmed that difficult moral decisions require longer response times, elicit little consensus over the appropriate response and engender high ratings of discomfort. In contrast, easy moral and non-moral dilemmas were answered quickly, elicited near perfect agreement for responses and created minimal discomfort. These differential behavioral profiles had distinct neural signatures within the moral network: relative to the appropriate non-moral comparison conditions, difficult moral dilemmas selectively engaged the bilateral TPJ but deactivated the vmPFC, while easy moral dilemmas revealed the reverse finding—greater vmPFC activation and less engagement of the TPJ. These results suggest a degree of functional dissociation between the TPJ and vmPFC for moral decisions and indicate that these cortical regions have distinct roles. Together, our findings support the notion that, rather than comprising a single mental operation, moral cognition makes flexible use of different regions as a function of the particular demands of the moral dilemma. Our neurobiological results are consistent with the existing research on moral reasoning (Moll et al., 2008), which identifies both the TPJ and vmPFC as integral players in social cognition (Van Overwalle, 2009; Janowski et al., 2013). The vmPFC has largely been associated with higher-order deliberation (Harenski et al., 2010), morally salient contexts (Moll et al., 2008) and emotionally engaging experiences (Greene et al., 2001). Clinical data have further confirmed these findings: patients with fronto-temporal dementia (FTD)—deterioration of the PFC—exhibit blunted emotional responses and diminished empathy when responding to moral dilemmas (Mendez et al., 2005). Additionally, lesions within the vmPFC produce a similar set of behaviors (Anderson et al., 1999). Unlike healthy controls, vmPFC patients consistently endorse the utilitarian response when presented with high-conflict moral dilemmas, despite the fact that such a response often has an emotionally aversive consequence (Koenigs et al., 2007). This clinical population is unable to access information that indicates a decision might be emotionally distressing, and they therefore rely on explicit norms that maximize aggregate welfare. This suggests that the vmPFC likely plays a role in generating pro-social sentiments such as compassion, guilt, harm aversion and interpersonal attachment (Moll et al., 2008).
In the experiment presented here, differential activity was observed within the vmPFC in response to easy moral dilemmas, suggesting that when a moral dilemma has a clear, obvious and automatic choice (e.g. pay 10 to save your child’s life), this region supports a neural representation of the most motivationally compelling and ‘morally guided’ option. In other words, the vmPFC appears sensitive to a decision that has a low cost and high benefit result. This converges with the evidence that this area is critical for the experience of pro-social sentiments (Moll et al., 2008) and fits with the extant research demonstrating a strong association between the subjective value of reward and vmPFC activity (Hare et al., 2010). Because our moral scenarios were matched for emotional engagement, it seems unlikely that the vmPFC is only coding for the emotional component of the moral challenge. We speculated that when presented with an easy moral dilemma, the vmPFC may also be coding for both the subjective reward value and the pro-social nature of making a decision which produces a highly positive outcome.
Interestingly, when a moral dilemma is relatively more difficult, less activation within the vmPFC was observed. The nature of these more difficult moral scenarios is that there is no salient or motivationally compelling ‘correct’ choice. The options available to subjects elicit no explicit morally guided choice and are instead unpleasant and often even aversive (indicated by subjects’ discomfort ratings). As a result, subjects understandably appear to be more reflective in their decision making, employing effortful deliberation (longer response latencies) during which they may be creating extended mental simulations of each available option (Evans, 2008). Thus, if the vmPFC is specifically coding the obvious and easy pro-social choice, then it is reasonable to assume that when there is no clear morally guided option, the vmPFC is relatively disengaged. This may be due to simple efficiency—suppression of activity in one region facilitates activity in another region. For example, any activity in the vmPFC might represent a misleading signal that there is a pro-social choice when there is not. In fact, patients with vmPFC lesions lack the requisite engagement of this region, and as a result, show behavioral abnormalities when presented with high-conflict moral dilemmas (Koenigs et al., 2007).
In contrast to easy moral dilemmas, difficult moral dilemmas showed relatively increased activity in the TPJ, extending down through the temporal poles. This activation pattern fits well with the fMRI documentation that the TPJ is integral in processing a diverse spectrum of social cognitive abilities such as empathy, theory of mind (Young and Saxe, 2009), agency and more basic processes such as attentional switching (Decety and Lamm, 2007). Converging evidence from clinical work has further implicated the TPJ in both mentalizing about the states of another, as well as attentional and spatial orientation (unilateral spatial neglect) (Mesulam, 1981). For example, during theory of mind tasks, subjects with autism either demonstrate abnormal TPJ activity (Baron-Cohen et al., 1999) or fail to activate the TPJ altogether (Castelli et al., 2002). Similar atypical TPJ activation was also found in autistic subjects who completed an attentional resource distribution task (Gomot et al., 2006) and demonstrated difficulty in processing novel stimuli. Together, this research indicates that the TPJ seems to play a critical role in comparing and assessing socially salient stimuli (Decety and Lamm, 2007).
Based on these findings, we reasoned that more difficult moral decisions—which are not associated with normatively ‘correct’ choices—may rely more on reflective cognitive systems partly localized within the TPJ. Our behavioral data indicate that the major difference between difficult and easy dilemmas is not only the number of elements one must evaluate in order to make a decision but how much effort is required to do so. Thus, we speculated that the TPJ may process difficult dilemmas in two stages: the TPJ first subserves the allocation of attentional resources to attend to the numerous socially relevant stimuli and is then critically implicated in the assessment of these stimuli to select the most compelling option. In short, the TPJ could be involved in attending to, shifting between, and then weighing up the salient elements of a difficult moral dilemma.
However, this neural result is not found when difficult and easy non-moral decisions are compared with one another (Table 12), which suggests that there is something specific about difficult moral decisions that engages the TPJ. What then distinguishes moral cognition from other forms of socially relevant decisions? While social interaction affects others, moral decisions are distinctive in that they can altruistically motivate interpersonal behavior (Moll et al., 2008). Accordingly, stimuli that are highly relevant and attentionally demanding—social cues, norms and taboos—necessitate processing according to their level of significance. This would mean that moral phenomena specifically require increased attentional resources because they are more consequential than non-moral phenomena. Thus, difficult decisions made within the moral domain are considerably more relevant and meaningful than difficult decisions made outside the moral domain. Hence, the TPJ appears to subserve the attention-oriented comparison of highly salient and meaningful moral stimuli.
Table 12. Difficult Non-Moral > Easy Non-Moral (DN > EN)

| Region | Peak MNI coordinates | z-value |
| --- | --- | --- |
| mMFG | −6 54 | 4.57 |
| Right ACC | 30 −8 | 3.91 |
| Right mOFC | 38 −10 | 3.51 |
| Ventral striatum (?) | −8 | 3.75 |
| PCC | −56 32 | 3.42 |

| A priori ROIs | MNI coordinates | t-statistic |
| --- | --- | --- |
| aACC | 34 26 | 3.26 |
| bPCC | 61 35 | 3.49 |
| bmMPFC | 58 17 | 4.13 |
| bvMPFC | 50 −10 | 4.70 |

ROIs, regions of interest SVC corrected at P < 0.05 FWE using a priori independent coordinates from previous studies: aGreene et al. (2004) and bYoung and Saxe (2009). See footnote of Table 1 for more information.
Together, our results suggest that moral cognition emerges from the integration and coordination of disparate neural systems. This account extends the current moral cognitive framework by illustrating that not only do the TPJ and vmPFC have specific and differential roles but that they also operate within a flexible and competitive neural system. Dilemmas with a clearly guided moral choice require minimal processing of social information, and as a result, entail little cognitive demand. In contrast, moral dilemmas with ambiguously unfavorable outcomes demand greater deliberation and seemingly depend on an explicitly reflective system (Evans, 2008). The fact that the relationship between the TPJ and vmPFC appears to function within a dynamic equilibrium—when the TPJ is more engaged the vmPFC is less engaged, and vice versa—implies that moral decision making relies on a system of neural reallocation or mutual inhibition. Portions of the vmPFC and TPJ are specifically connected (Price and Drevets, 2010), and work has illustrated spontaneous correlations of activity between the TPJ and vmPFC (Burnett and Blakemore, 2009; Mars et al., 2012). Although speculative, such evidence of TPJ–vmPFC functional connectivity supports the idea that these regions may work together to encode moral choices. Interestingly, an experiment in which the TPJ was transiently disrupted caused subjects to judge attempted harms as more morally permissible (Young et al., 2010). This suggests that when the TPJ ‘turns off’, neural resources may re-allocate to the vmPFC (where pro-social judgments may be generated). Such a mutually inhibitory process would mean that different classes of moral behavior compete for neural resources and thus rely on discrete and dissociable systems. Although beyond the scope of this research, it is possible that the information processing taking place in these two classes of moral dilemmas acts in direct opposition.
## SUPPLEMENTARY DATA
Supplementary data are available at SCAN online.
## REFERENCES
Anderson SW, Bechara A, Damasio H, Tranel D, Damasio AR (1999). Impairment of social and moral behavior related to early damage in human prefrontal cortex. Nature Neuroscience, 2(11), 1032–7.
Baron-Cohen S, Ring HA, Wheelwright S, et al. (1999). Social intelligence in the normal and autistic brain: an fMRI study. The European Journal of Neuroscience, 11(6), 1891–8.
Berthoz S, Armony JL, Blair RJ, Dolan RJ (2002). An fMRI study of intentional and unintentional (embarrassing) violations of social norms. Brain: A Journal of Neurology, 125(Pt 8), 1696–708.
Blair RJ (2008). The amygdala and ventromedial prefrontal cortex: functional contributions and dysfunction in psychopathy. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 363(1503), 2557–65.
Burnett S, Blakemore SJ (2009). Functional connectivity during a social emotion task in adolescents and in adults. The European Journal of Neuroscience, 29(6), 1294–301.
Castelli F, Frith C, Happé F, Frith U (2002). Autism, Asperger syndrome and brain mechanisms for the attribution of mental states to animated shapes. Brain: A Journal of Neurology, 125(Pt 8), 1839–49.
Christensen JF, Gomila A (2012). Moral dilemmas in cognitive neuroscience of moral decision-making: a principled review. Neuroscience and Biobehavioral Reviews, 36(4), 1249–64.
Crockett MJ, Clark L, Hauser MD, Robbins TW (2010). Serotonin selectively influences moral judgment and behavior through effects on harm aversion. Proceedings of the National Academy of Sciences of the United States of America, 107(40), 17433–8.
Decety J, Lamm C (2007). The role of the right temporoparietal junction in social interaction: how low-level computational processes contribute to meta-cognition. The Neuroscientist, 13(6), 580–93.
Decety J, Michalska KJ, Kinzler KD (2011). The contribution of emotion and cognition to moral sensitivity: a neurodevelopmental study. Cerebral Cortex, 22(1), 209–20.
Evans JS (2008). Dual-processing accounts of reasoning, judgment, and social cognition. Annual Review of Psychology, 59, 255–78.
Fliessbach K, Weber B, Trautner P, et al. (2007). Social comparison affects reward-related brain activity in the human ventral striatum. Science, 318(5854), 1305–8.
Gomot M, Bernard FA, Davis MH, et al. (2006). Change detection in children with autism: an auditory event-related fMRI study. NeuroImage, 29(2), 475–84.
Gozzi M, Raymont V, Solomon J, Koenigs M, Grafman J (2009). Dissociable effects of prefrontal and anterior temporal cortical lesions on stereotypical gender attitudes. Neuropsychologia, 47(10), 2125–32.
Greene JD, Morelli SA, Lowenberg K, Nystrom LE, Cohen JD (2008). Cognitive load selectively interferes with utilitarian moral judgment. Cognition, 107(3), 1144–54.
Greene JD, Nystrom LE, Engell AD, Darley JM, Cohen JD (2004). The neural bases of cognitive conflict and control in moral judgment. Neuron, 44(2), 389–400.
Greene JD, Sommerville RB, Nystrom LE, Darley JM, Cohen JD (2001). An fMRI investigation of emotional engagement in moral judgment. Science, 293(5537), 2105–8.
Haidt J (2001). The emotional dog and its rational tail: a social intuitionist approach to moral judgment. Psychological Review, 108(4), 814–34.
Hare TA, Camerer CF, Knoepfle DT, Rangel A (2010). Value computations in ventral medial prefrontal cortex during charitable decision making incorporate input from regions involved in social cognition. The Journal of Neuroscience, 30(2), 583–90.
Harenski CL, Antonenko O, Shane MS, Kiehl KA (2010). A functional imaging investigation of moral deliberation and moral intuition. NeuroImage, 49(3), 2707–16.
Hauser MD (2006). The liver and the moral organ. Social Cognitive and Affective Neuroscience, 1(3), 214–20.
Heekeren HR, Wartenburger I, Schmidt H, Schwintowski HP, Villringer A (2003). An fMRI study of simple ethical decision-making. Neuroreport, 14(9), 1215–9.
Janowski V, Camerer C, Rangel A (2013). Empathic choice involves vmPFC value signals that are modulated by social processing implemented in IPL. Social Cognitive and Affective Neuroscience, 8(2), 201–8.
Kahane G, Wiech K, Shackel N, Farias M, Savulescu J, Tracey I (2012). The neural basis of intuitive and counterintuitive moral judgment. Social Cognitive and Affective Neuroscience, 7(4), 393–402.
Kedia G, Berthoz S, Wessa M, Hilton D, Martinot JL (2008). An agent harms a victim: a functional magnetic resonance imaging study on specific moral emotions. Journal of Cognitive Neuroscience, 20(10), 1788–98.
Koenigs M, Young L, Adolphs R, et al. (2007). Damage to the prefrontal cortex increases utilitarian moral judgements. Nature, 446(7138), 908–11.
Mansouri FA, Tanaka K, Buckley MJ (2009). Conflict-induced behavioural adjustment: a clue to the executive functions of the prefrontal cortex. Nature Reviews Neuroscience, 10(2), 141–52.
Mars RB, Sallet J, Schüffelgen U, Jbabdi S, Toni I, Rushworth MF (2012). Connectivity-based subdivisions of the human right "temporoparietal junction area": evidence for different areas participating in different cortical networks. Cerebral Cortex, 22(8), 1894–903.
Mendez MF, Anderson E, Shapira JS (2005). An investigation of moral judgement in frontotemporal dementia. Cognitive and Behavioral Neurology, 18(4), 193–7.
Mesulam MM (1981). A cortical network for directed attention and unilateral neglect. Annals of Neurology, 10(4), 309–25.
Moll J, de Oliveira-Souza R (2007). Moral judgments, emotions and the utilitarian brain. Trends in Cognitive Sciences, 11(8), 319–21.
Moll J, de Oliveira-Souza R, Bramati IE, Grafman J (2002). Functional networks in emotional moral and nonmoral social judgments. NeuroImage, 16(3 Pt 1), 696–703.
Moll J, de Oliveira-Souza R, Zahn R (2008). The neural basis of moral cognition: sentiments, concepts, and values. Annals of the New York Academy of Sciences, 1124, 161–80.
Moll J, Zahn R, de Oliveira-Souza R, et al. (2011). Impairment of prosocial sentiments is associated with frontopolar and septal damage in frontotemporal dementia. NeuroImage, 54(2), 1735–42.
Moll J, Zahn R, de Oliveira-Souza R, Krueger F, Grafman J (2005). Opinion: the neural basis of human moral cognition. Nature Reviews Neuroscience, 6(10), 799–809.
Moretto G, Làdavas E, Mattioli F, di Pellegrino G (2010). A psychophysiological investigation of moral judgment after ventromedial prefrontal damage. Journal of Cognitive Neuroscience, 22(8), 1888–99.
Price JL, Drevets WC (2010). Neurocircuitry of mood disorders. Neuropsychopharmacology, 35(1), 192–216.
Raine A, Yang Y (2006). Neural foundations to moral reasoning and antisocial behavior. Social Cognitive and Affective Neuroscience, 1(3), 203–13.
Sanfey AG, Rilling JK, Aronson JA, Nystrom LE, Cohen JD (2003). The neural basis of economic decision-making in the Ultimatum Game. Science, 300(5626), 1755–8.
Shenhav A, Greene JD (2010). Moral judgments recruit domain-general valuation mechanisms to integrate representations of probability and magnitude. Neuron, 67(4), 667–77.
Shin LM, Dougherty DD, Orr SP, et al. (2000). Activation of anterior paralimbic structures during guilt-related script-driven imagery. Biological Psychiatry, 48(1), 43–50.
Sunstein CR (2005). Moral heuristics. Behavioral and Brain Sciences, 28(4), 531–41.
Takahashi H, Yahata N, Koeda M, Matsuda T, Asai K, Okubo Y (2004). Brain activation associated with evaluative processes of guilt and embarrassment: an fMRI study. NeuroImage, 23(3), 967–74.
Tangney JP, Stuewig J, Mashek DJ (2007). Moral emotions and moral behavior. Annual Review of Psychology, 58, 345–72.
Tassy S, Oullier O, Duclos Y, et al. (2012). Disrupting the right prefrontal cortex alters moral judgement. Social Cognitive and Affective Neuroscience, 7(3), 282–8.
Valdesolo P, DeSteno D (2006). Manipulations of emotional context shape moral judgment. Psychological Science, 17(6), 476–7.
Van Overwalle F (2009). Social cognition and the brain: a meta-analysis. Human Brain Mapping, 30(3), 829–58.
Vrticka P, Andersson F, Grandjean D, Sander D, Vuilleumier P (2008). Individual attachment style modulates human amygdala and striatum activation during social appraisal. PLoS One, 3(8), e2868.
Young L, Camprodon JA, Hauser M, Pascual-Leone A, Saxe R (2010). Disruption of the right temporoparietal junction with transcranial magnetic stimulation reduces the role of beliefs in moral judgments. Proceedings of the National Academy of Sciences of the United States of America, 107(15), 6753–8.
Young L, Cushman F, Hauser M, Saxe R (2007). The neural basis of the interaction between theory of mind and moral judgment. Proceedings of the National Academy of Sciences of the United States of America, 104(20), 8235–40.
Young L, Dungan J (2011). Where in the brain is morality? Everywhere and maybe nowhere. Social Neuroscience, 7(1), 1–10.
Young L, Saxe R (2009). An fMRI investigation of spontaneous mental state inference for moral judgment. Journal of Cognitive Neuroscience, 21(7), 1396–405.
Young L, Saxe R (2011). When ignorance is no excuse: different roles for intent across moral domains. Cognition, 120(2), 202–14.
Young L, Scholz J, Saxe R (2011). Neural evidence for "intuitive prosecution": the use of mental state information for negative moral verdicts. Social Neuroscience, 6(3), 302–15.
Zahn R, Moll J, Paiva M, et al. (2009). The neural basis of human social values: evidence from functional MRI. Cerebral Cortex, 19(2), 276–83.
This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/), which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
https://quantumcomputing.stackexchange.com/questions/7208/should-a-pauli-x-matrix-equal-the-identity-matrix-to-be-unitary
# Should a Pauli $X$ matrix equal the identity matrix to be unitary?
My understanding is that any unitary matrix must have its inverse be equal to its conjugate transpose.
Looking at the Pauli $X$ gate as shown here: $$\begin{bmatrix}0&1\\1&0\end{bmatrix}$$
It is its own inverse which is equal, of course, to its own conjugate transpose.
However, isn't it also true that neither of these forms an identity matrix? And isn't this a requirement for being considered unitary?
The definition of a unitary matrix is that its conjugate transpose equals its inverse, i.e., $$U U^\dagger = U^\dagger U = I$$. You can check that this is indeed the case for the Pauli $X$ matrix.
• The product of $U$ with its conjugate transpose $U^\dagger$ should equal the identity matrix. The conjugate transpose $U^\dagger$ does not need to equal $I$. Matrix multiplication is not, in general, commutative. – Mark S Sep 9 '19 at 21:08
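As a quick numerical sanity check, here is a minimal NumPy sketch (variable names are ours): neither $X$ nor $X^\dagger$ is the identity, yet their product is.

```python
import numpy as np

X = np.array([[0, 1],
              [1, 0]])       # Pauli X

X_dag = X.conj().T           # conjugate transpose (here equal to X itself)

# Unitarity requires X @ X_dag == I, not X == I.
print(np.allclose(X @ X_dag, np.eye(2)))   # True
print(np.allclose(X_dag @ X, np.eye(2)))   # True
print(np.array_equal(X, np.eye(2)))        # False
```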
https://egoboo.org/build-from-source/linux-gcc/
# Build Egoboo from source: Instructions for Linux/GCC
Ensure CMake 3.10.1 and GCC 6.4 or higher are installed. To compile Idlib, open a console, change to the directory containing the library's source, and enter
cmake CMakeLists.txt
make all
To execute the unit tests, enter
make check
https://en.x-mol.com/paper/article/1452483823376695296
A NURBS-discontinuous and enriched isogeometric boundary element formulation for two-dimensional fatigue crack growth
Engineering Analysis With Boundary Elements (IF2.964), Pub Date : 2021-10-23, DOI: 10.1016/j.enganabound.2021.09.019
H.C. Andrade, J. Trevelyan, E.D. Leonel
A new extended isogeometric boundary element method (XIGABEM) formulation is proposed for simulating multiple fatigue crack propagation in two-dimensional domains. The classical use of NURBS in isogeometric formulations is further extended by repeated knot insertion to introduce a $C^{-1}$ continuity within the approximation space as an elegant approach to representing geometrical discontinuities at crack intersections. This strategy is also used to restrict the enrichment term to the portion of the NURBS defining the tip, where it is necessary. At this near-tip zone, the linear elastic fracture mechanics solutions are embedded into the displacement approximation to represent the theoretical square root behaviour. The enrichment procedure introduces just two degrees of freedom per crack tip, and a tying constraint is used to yield a square linear system. In this direct approach, the stress intensity factors (SIFs) are found as terms in the solution vector without requiring post-processing techniques.
Several examples are presented to illustrate the application of the XIGABEM. The accuracy of the results compares favourably against those from the literature, and also against solutions obtained from unenriched and enriched indirect methods that employ the J-integral for SIF extraction. Furthermore, the proposed direct approach is capable of significantly reducing the execution time.
https://learn.careers360.com/ncert/question-a-spherical-capacitor-has-an-inner-sphere-of-radius-12-cm-and-an-outer-sphere-of-radius-13-cm-the-outer-sphere-is-earthed-and-the-inner-sphere-is-given-a-charge-of-2-5-what-is-the-potential-of-the-inner-sphere/
2.30 A spherical capacitor has an inner sphere of radius 12 cm and an outer sphere of radius 13 cm. The outer sphere is earthed and the inner sphere is given a charge of 2.5 $\mu C$. The space between the concentric spheres is filled with a liquid of dielectric constant 32. (b) what is the potential of the inner sphere?
The capacitance of the dielectric-filled spherical capacitor (from the previous part of this problem) is $C=\frac{4\pi \epsilon_0 K ab}{b-a}\approx 5.5\times 10^{-9}\ F$. The potential of the inner sphere is then given by
$V=\frac{q}{C}=\frac{2.5\times 10^{-6}}{5.5\times 10^{-9}}\approx 4.5\times 10^2$
Hence the potential of the inner sphere is approximately $4.5\times 10^2\ V$.
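For a numeric cross-check, here is a minimal Python sketch of the same computation (constants and variable names are ours):

```python
import math

eps0 = 8.854e-12   # F/m, vacuum permittivity
K = 32             # dielectric constant of the liquid
a, b = 0.12, 0.13  # inner and outer radii in metres
q = 2.5e-6         # charge on the inner sphere in coulombs

# Capacitance of a dielectric-filled spherical capacitor
C = 4 * math.pi * eps0 * K * a * b / (b - a)
V = q / C
print(f"C = {C:.2e} F, V = {V:.0f} V")   # about 5.5e-09 F and 4.5e2 V
```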
https://www.physicsforums.com/threads/kinematics-of-three-runners-in-a-race.693614/
# Kinematics of three runners in a race
## Homework Statement
Runners A, B, and C run a 100-m race, each at a constant speed. Runner A takes first place, beating runner B by 10 m. Runner B takes second place, beating runner C by 10 m. By what time interval does runner A beat runner C?
## Homework Equations
d = 100 m
Δd = 10 m
$\Delta t_i$ = time for runner $i$ to travel 100 m
$v_i$ = speed of runner $i$
## The Attempt at a Solution
Can this problem be solved without knowing the speed of one of the runners? Here's a few equalities for this problem:
$v_a = \frac{d}{\Delta t_a}$
$v_b = \frac{d - \Delta d}{\Delta t_a}$
$v_c = \frac{v_b \Delta t_b - \Delta d}{\Delta t_b}$
Try assuming a couple of different velocities for runner A and then work out the time difference. Does the time difference vary with the velocity of runner A? This should answer your question.
Note that runner A beats runner C by more than 20 m.
rude man:
> Note that runner A beats runner C by more than 20 m.
Yikes, that's right!
I tried assuming a few speeds for runner A and found that the interval between runners A and C depends on the speed of runner A. In other words, there is no unique solution with the given information.
The problem has six unknowns (3 speeds and 3 times) and five equations. It is fairly trivial to express any five unknowns in terms of the other one.
rude man:
Right all around. At least I got the 'no unique solution' part right ...
time for C:
$v_C$ = velocity of C; A beats C by a time period of $20/v_C$.

> A beats C by a time period of $20/v_C$, where $v_C$ is the velocity of C.
This assumes that A beats C by 20 meters. This is not given. What is given is that B beats C by 10 meters, which happens when B is at the finish line. When A is at the finish line, B is not there, so you can't assume that he is 10 meters ahead of C yet.
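To see the "no unique solution" conclusion numerically, here is a small sketch (our own, assuming constant speeds). The 10 m margins give $v_B = 0.9\,v_A$ and $v_C = 0.81\,v_A$, so the distance gap when A finishes is always 19 m, but the time gap scales with A's speed:

```python
# v_B = 0.9 * v_A  (B is at 90 m when A finishes)
# v_C = 0.9 * v_B = 0.81 * v_A  (C is at 90% of B's distance at all times)
for v_a in (5.0, 8.0, 10.0):           # candidate speeds for runner A, in m/s
    v_c = 0.81 * v_a
    t_gap = 100 / v_c - 100 / v_a      # time by which A beats C
    d_gap = 100 - v_c * (100 / v_a)    # C's distance from the finish when A finishes
    print(f"v_A = {v_a:4.1f} m/s: A beats C by {t_gap:.2f} s ({d_gap:.0f} m)")
```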
https://bearlakeoutfitters.com/gore-vidal-epeult/dedication-lines-for-thesis-eb212c
Whether your book's dedication is only a few sentences or an entire paragraph, you shouldn't miss this opportunity to give the reader a small look into your life's story. A dedication is the expression of friendly connection or thanks by the author towards another person; it can also appear in a longer version as a dedication letter or dedication preface at the book's beginning. There are no preset rules or stipulations for writing the dedication of a term paper or thesis: it is for the author to decide whether to have one at all, and it can occupy one or multiple lines depending on its importance.
The formatting conventions usually come from the university's template. Typically the dedication is written in the center of a separate page, in one or two lines, centered on the width of a line about three inches from the top, and if the section is titled at all it should be titled "Dedication". Some templates also place dotted lines at the top and bottom of the page. Common typographic defaults are Times New Roman, 1.5 line spacing for body text with single-spaced block quotations, and about 6 points of space before and after paragraphs. In a LaTeX template, the vertical gap is produced with the \vspace command followed by a length (adding an asterisk, \vspace*, so that LaTeX does not discard the space at the top of a page), and bold text such as the thesis title is set with the \textbf command.
Some examples of thesis dedications: "This thesis is dedicated to my parents for their love, endless support and encouragement." "I dedicate my dissertation work to my family and many friends; a special feeling of gratitude to my loving parents, whose words of encouragement and push for tenacity ring in my ears." "This thesis is dedicated to my incredibly wonderful husband, Ralph." "To my mother, Helena, who continues to learn, grow and develop, and who has been a source of encouragement and inspiration to me throughout my life."
If you also include an acknowledgements page, the word ACKNOWLEDGEMENTS should be placed at the top center of the page in capital letters.
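As an illustration, here is a minimal LaTeX sketch of such a dedication page using the \vspace* and \textbf commands mentioned above (the lengths and wording are illustrative, not prescribed by any particular university):

```latex
\documentclass[12pt]{report}
\begin{document}
\begin{titlepage}
  \centering
  \vspace*{3in}  % starred form so LaTeX keeps the space at the top of the page
  \textbf{Dedication}\par
  \vspace{2\baselineskip}
  \textit{To my parents, for their love, endless support and encouragement.}
\end{titlepage}
\end{document}
```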
https://math.stackexchange.com/questions/2660966/if-functions-agree-at-all-but-finitely-many-points-then-the-integrals-are-the-sa
# If functions agree at all but finitely many points then the integrals are the same
Exercise 3-2 from Calculus on Manifolds by Spivak:
Let $A\subset R^n,\ f:A\rightarrow R$ an integrable (in the sense of Darboux) function. Let $g=f$ except at finitely many points. Prove that $g$ is also integrable and $\int_Af=\int_A g$.
Note that there is a similar question show that $g$ is integrable, with $f=g$ except in finite set and $f$ integrable but the answers assume the knowledge of measure theory whereas Spivak doesn't.
I guess I need to use the criterion saying that $f$ is integrable iff for every $\epsilon > 0$ there is a partition $P$ of $A$ such that $U(f,P)-L(f,P)< \epsilon$. But I don't know how to apply it to both functions. I thought about considering $f-g$ (which should be zero except at finitely many points), but Spivak doesn't even state that the sum of two integrable functions is integrable, so perhaps I'm not supposed to use this. (Even if I consider $f-g$, I don't know how to proceed.)
• I'd try to prove that $\int_{[a,b]} f(x) dx = \int_{[a,c)\cup(c,b]} f(x) dx$, where $a<c<b$. – Dog_69 Feb 21 '18 at 22:34
Your idea of considering the difference is a good one, since your problem is then seen to be equivalent to proving that a function which is $0$ except at finitely many points is integrable and must have integral equal to $0$ (try to understand why if this isn't clear)
In order to prove this statement, taking partitions with small diameter will show that you can let your upper and lower sums be as near zero as you want (they will be bounded by $\pm N \cdot\mathrm{diam}(P)^n\cdot\max |f|$, where $N$ is the number of points where the function is not zero).
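Spelling that bound out (our notation): write $h=g-f$, let $N$ be the number of points where $h\neq 0$ and $M=\max|h|$. Each such point lies in at most $2^n$ subrectangles of a partition $P$, and each subrectangle has volume at most $\mathrm{diam}(P)^n$, so $$-2^n N M\,\mathrm{diam}(P)^n \le L(h,P) \le U(h,P) \le 2^n N M\,\mathrm{diam}(P)^n,$$ and both sums tend to $0$ as $\mathrm{diam}(P)\to 0$.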
Now, to see why $g$ is integrable, notice that $g=(g-f)+f$.
Therefore, $\int g=\int(g-f)+\int f=\int f$.
You mention in the question that the book does not say that sum of integrable functions is integrable before this point. If you are uncomfortable with using this fact, try to adapt the proof above to avoid using it. The core idea is in the second paragraph of this answer.
• I'm not sure I understand how the upper and lower sums can be made as near to zero as I want. I guess we need to take a partition such that every point at which the function is nonzero lies in some subrectangle of partition. Then both lower and upper sums have $N$ terms in them. But that's all what I can see. – user531232 Feb 22 '18 at 3:34
• I would say if every point at which $f-g$ is not zero lies in a rectangle of partition, then the upper sum is bounded above by $N\cdot V \cdot \max_A |f-g|$ where $V$ is the volume of the largest rectangle of partition. So I don't understand why you have $\max |f|$ and $diam(P)^n$ in your inequality. Modulo these facts, I see that the upper sums approach $0$ as the partition gets finer. Also I don't see why the lower sums approach $0$. – user531232 Feb 22 '18 at 23:14
• @user531299 Sorry if I was not explicit enough: My $f$ on that bound is after I've effectively changed the problem. I am now considering $f$ to be a function which is non-zero only at finitely many points (that is, $f"="g-f$). For the second part of your comment, both the upper sum and lower sum are bounded by above by that value, and by below by minus that value. – Aloizio Macedo Feb 22 '18 at 23:20
• Thanks, that makes sense. And what about $diam(P)^n$ in your inequality? I thought each summand in the upper sum is a product of a maximum of $f$ on a rectangle and the volume of the rectangle. So I thought we could bound the second multiplier of each summand by the the volume of the largest rectangle (since the sum consists of $N$ terms). So I don't understand where $diam(P)^n$ comes from. – user531232 Feb 22 '18 at 23:42
• @user531299 $\mathrm{diam}(P)^n$ is a bound for the volume of the largest rectangle. I am calling the diameter of the partition the biggest diameter (in the traditional metric sense) among the rectangles of the partition. So the diameter is bigger than the sides of any of the rectangles, and thus (its $n$-th power) is bigger than the volume of the biggest one. – Aloizio Macedo Feb 22 '18 at 23:49
http://mathhelpforum.com/differential-geometry/134772-uniformly-continuous.html
1. ## uniformly continuous
Hello, I need to show that h(x) = 1/(x^2 +1) is uniformly continuous on R
This is what I have so far:
Let x,y be in R
I know |x-y|< delta
I want to show |f(x)-f(y)|< epsilon taking some delta
so, |1/(x^2 +1) - 1/(y^2 +1)|
= |(y^2-x^2)/(x^2 +1)(y^2 +1)|
= |(x-y)(x+y)/(x^2 +1)(y^2 +1)|
< |delta(x+y)/(x^2 +1)(y^2 +1)|
so I could choose delta = epsilon (x^2 +1)(y^2 +1)/(x+y)
so that the last expression equals epsilon.
However, to be uniformly continuous, my choice of delta shouldn't depend on x.
please provide me with some feedback
thanks
2. There's a theorem about differentiable functions with bounded derivative. Have you covered it yet?
3. No, we haven't covered that
4. The proof is pretty straightforward, and is probably the best way to tackle this. Intuitively, if the graph can only get *this* steep and no steeper, for every x in the reals, can you see why that lets you choose an epsilon that depends only on delta and not on x?
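For reference, here is a sketch of the bounded-derivative route from post #2, assuming the mean value theorem (the numerical bound is ours): $h'(x) = -2x/(x^2+1)^2$, and $|h'(x)|$ attains its maximum $3\sqrt{3}/8 < 1$ at $x = \pm 1/\sqrt{3}$. By the mean value theorem, $|h(x)-h(y)| \le \tfrac{3\sqrt{3}}{8}\,|x-y| < |x-y|$ for $x \neq y$, so choosing $\delta = \epsilon$ works for every $x$, which is exactly uniform continuity.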
https://stats.stackexchange.com/questions/64496/multiple-testing-fdr-and-power
# Multiple testing, FDR and power
Suppose I have a dataset with a total of 30,000 observations. Each observation has a total of 6 values, 3 per condition. I'm interested in testing, for each of these observations, whether the means of the two conditions are different or not. For this, we typically apply a statistical test of choice to each of these 30K observations. In order to control the false positive rate due to multiple testing, we'd control the FDR at, say, 5% using the BH method.
Example of an observation would be:
# dummy data
Condition 1: 40, 55, 48
Condition 2: 129, 77, 181
Also, let's assume I obtained a total of 1000 "significant" results. If I filter the data somehow, say, remove all those observations where the maximum value across all 6 values is < 5 and end up with a filtered set of 20K observations, then I find that the number of significant results is a bit higher, say 1250. If I continue this approach of filtering the data with higher and higher thresholds (< 10, < 20, ..., preferably using quantiles 0.1, 0.2, 0.3, etc.), I find that the number of significant results at 5% FDR keeps increasing up to a certain point, and with too-stringent filtering it starts to decrease again.
It's fairly obvious that in controlling the false positives in a multiple-testing setup, one compromises statistical power. My question is: are there methods that can compute a filtering criterion which maximises the power (to detect, after FDR correction)? If not, is it a statistically sound approach to try filtering the data at more than one threshold and to pick the one that maximises significance?
Applying a filter to the data is clearly a case of selective inference (a.k.a. double dipping, data snooping, etc.) If you are removing true null hypotheses, you will have an anti-conservative FDR. If you are removing true alternative hypotheses, you will have a conservative FDR.
The filtering criterion you are using seems "fair" a-priori in the sense it does not seem to be correlated with the truthfulness of a hypothesis. On the other hand, the fact that you are making more and more discoveries suggests that high response values, might also be correlated with a difference between groups. I thus suspect your filtering step will have increased the FDR of the BH procedure. Try testing it using a simulation.
Also, could it be that you are using an analysis that assumes homoscedasticity where the variances actually grow with the means? If this is the case, try some non-parametric test to compare the groups.
Finally, back to the original question: I guess it should be possible to find the optimal number of hypotheses to test using the BH procedure. You will naturally need to define a generalized notion of power (say, average power over hypotheses). You will need to know the proportion of true null hypotheses and the power of each marginal test.
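Following the simulation suggestion above, here is a minimal, hedged sketch (not the OP's exact pipeline: the filter below uses |z| rather than a max-count rule, and all distributional choices are illustrative) for checking whether the realized false discovery proportion stays near the nominal level after filtering:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def bh_reject(p, q=0.05):
    """Benjamini-Hochberg step-up procedure at nominal FDR level q."""
    m = len(p)
    order = np.argsort(p)
    below = p[order] <= q * np.arange(1, m + 1) / m
    k = below.nonzero()[0].max() + 1 if below.any() else 0
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:k]] = True
    return rejected

# 30,000 tests: 90% true nulls (z ~ N(0,1)), 10% shifted alternatives.
m = 30_000
is_null = rng.random(m) < 0.9
z = rng.normal(loc=np.where(is_null, 0.0, 2.5))
p = 2 * norm.sf(np.abs(z))

# Filter correlated with the evidence itself (a stand-in for the
# question's max-count filter), then apply BH and inspect the
# realized false discovery proportion among the discoveries.
for cut in (0.0, 0.5, 1.0):
    keep = np.abs(z) > cut
    rej = bh_reject(p[keep])
    fdp = (rej & is_null[keep]).sum() / max(rej.sum(), 1)
    print(f"cut={cut}: {rej.sum()} discoveries, realized FDP = {fdp:.3f}")
```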
• JohnRos, thanks for your reply. Could you please elaborate on the line: "On the other hand, the fact that you're making more and more discoveries suggest that..."? – Arun Jul 17 '13 at 8:00
• We assume that the counts follow a NB distribution. And yes, we assume homoscedasticity and with NB, var > mean. And variance is modeled as a quadratic function of mean. So, yes, variance increases with mean. – Arun Jul 17 '13 at 8:06
• One more point: In this paper, in figure 1c, they show the effect of using 2 different filters at different thresholds (x-axis) against the number of NULL hypothesis rejected. And you see that the variance method (black lines), which is a better filter (acc. to this paper) rejects more NULL with increasing filtering. – Arun Jul 17 '13 at 8:14
• "... you're making more and more discoveries...": A known phenomenon with FDR is that you can gain power by adding false null hypotheses, or removing true nulls [1] (referred to as "cheating with FDR"). This is not necessarily your case, just a suspicion of mine. [1] Finner, Helmut, and M. Roters. “On the False Discovery Rate and Expected Type I Errors.” Biometrical Journal 43, no. 8 (2001): 985–1005. doi:10.1002/1521-4036(200112)43:8<985::AID-BIMJ985>3.0.CO;2-4. – JohnRos Jul 17 '13 at 8:17
• By simulation I mean- apply your filter+BH procedure on simulated data to check that the FDR is controlled. – JohnRos Jul 17 '13 at 10:26
http://www.statemaster.com/encyclopedia/144-%28number%29
# 144 (number)
Cardinal: one hundred forty-four
Ordinal: 144th
Factorization: $2^4 \cdot 3^2$
Divisors: 2, 3, 4, 6, 8, 9, 12, 16, 18, 24, 36, 48, 72
Roman numeral: CXLIV
Binary: 10010000
Octal: 220
Duodecimal: 100
Hexadecimal: 90
144 (one hundred [and] forty-four) is the natural number following 143 and preceding 145. 144 is a dozen dozens, 12 times 12, or one gross.
It is the twelfth Fibonacci number, and the largest one to also be a square, as the square of 12 (which is also its index in the Fibonacci sequence), following 121 and preceding 169.
144 is the smallest number with exactly 15 divisors.
144 is divisible by the value of its totient function φ, which returns 48 in this case (and 144 = 3 × 48). Also, there are 21 solutions to the equation φ(x) = 144, more than for any integer below 144, making it a highly totient number.
$144^5 = 27^5 + 84^5 + 110^5 + 133^5$, making 144 the smallest number whose fifth power is a sum of fewer than five (smaller) fifth powers. This solution was found in 1966 by L. J. Lander and T. R. Parkin, and it disproved Euler's sum of powers conjecture.
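This identity is a one-line check in plain Python:

```python
# Lander-Parkin (1966): a counterexample to Euler's sum of powers conjecture
assert 27**5 + 84**5 + 110**5 + 133**5 == 144**5  # both sides equal 61917364224
```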
The maximum determinant of a 9 × 9 matrix of zeroes and ones is 144.
In base 10, 144 is a sum-product number, since (1 + 4 + 4) × (1 × 4 × 4) = 9 × 16 = 144. It is also a Harshad number (divisible by its digit sum, 9) and a Zuckerman number (divisible by its digit product, 16).
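The numerical claims above are easy to machine-check. Here is a minimal sketch in Python, assuming SymPy is available; the search bound 50000 for the totient equation is justified by the elementary estimate $\varphi(x) \ge \sqrt{x/2}$, which forces any solution of $\varphi(x) = 144$ to satisfy $x \le 2 \cdot 144^2 = 41472$:

```python
from sympy import divisors, fibonacci, totient

n = 144
assert len(divisors(n)) == 15                     # 144 has exactly 15 divisors
assert totient(n) == 48 and n % totient(n) == 0   # divisible by its totient
assert sum(1 for x in range(1, 50000) if totient(x) == 144) == 21  # highly totient
assert fibonacci(12) == n == 12**2                # 12th Fibonacci number, and a square

digits = [int(d) for d in str(n)]
digit_sum, digit_product = sum(digits), 1
for d in digits:
    digit_product *= d
assert digit_sum * digit_product == n             # sum-product: 9 * 16 = 144
assert n % digit_sum == 0                         # Harshad: divisible by 9
assert n % digit_product == 0                     # Zuckerman: divisible by 16
```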
## In other fields
144 is also:
• The measurement, in cubits, of the wall of New Jerusalem shown by the seventh angel (Holy Bible, Revelation 21:17).
• The opcode (0x90, i.e. 144) of the Intel 8086 instruction for no operation (NOP).
• The telephone number for directory enquiries in Israel.
For the year AD 144, see 144; see also 144 BC.
https://galoisrepresentations.wordpress.com/page/2/
## This year in Jerusalem
I just returned from spending almost a week in Jerusalem (my first ever visit to Israel). The main reason for my visit was to talk with Alex Lubotzky (and Shai Evra) about mathematics, but there was plenty of time for other mathematical activities — I gave a four-hour talk on cohomology to computer scientists, chatted with Kazhdan (and also Akshay), caught up with Alex Gamburd, Ehud de Shalit, and went to the presentation of the Ostrowski Prize to Akshay Venkatesh (with a virtual laudation by Peter Sarnak).
But this post is less about the mathematics (hopefully more on that when the theorems are proved) and more about my other (mostly culinary) adventures.
My first night out, I was curious how Ethiopian food in Israel would compare to Chicago. However, my taxi driver had other ideas, and instead took me to a Kurdish restaurant (Ima) where I ended up with a pretty nice lamb dish. While walking home from dinner, I stumbled across the Chords Bridge (the “Bridge of Strings”). My particular approach presented me with a visual paradox: the bridge appeared to be straight with a central column with cables to either side. These cables appeared as lines sweeping out a ruled surface. Since the bridge was straight, these two surfaces should essentially have formed one surface, but they appear to meet at right angles at the column, which made no sense. Since my description also probably doesn’t make so much sense, I took a video:
(Admittedly my geometrical intuition is not so great, but I couldn’t work out what was going on until I saw it again from a different angle.)
Tuesday morning was my “time off” as a tourist. I think the old city would have been much better with a local guide, but I mostly just wandered around between ancient sites and an infinite number of tchotchke shops. Next stop was Machane Yehuda market, and lunch at the hippest restaurant in town. The shikshukit was delicious:
Next stop was the fanciest coffee in Jerusalem (not that good).
Akshay and I went back to the market for dinner and had the shamburak at Ishtabach along with some pretty good local beer.
On Wednesday, I was contemplating going straight back to the hotel and going to sleep after an undergraduate lecture by Akshay (full jetlag mode at this point, but the talk itself was great). But then I ran into Alex Gamburd, who suggested going out to dinner and said he knew of a place which sold food from “biblical times.” At that point, my spirits were instantly lifted, and there was no choice about what I was going to do. So we jumped into a taxi and off we went to Eucalyptus, to have (amongst other things) chicken stuffed in figs (yes, I thought that was just a poor English translation for figs stuffed in chicken, but no, chicken stuffed in figs). The owner came out to chat with us, and claimed that this dish had won a special prize in Melbourne and had also appeared in Vogue (I couldn’t verify these claims, but the dish was tasty).
A few more things en passant:
A “reception” at Hebrew University apparently does not include Champagne, much the worse for anyone who had to suffer through my subsequent basic notions seminar. (Hat tip to Michael Schein for telling me this in advance.)
Here’s Alex Gamburd and Andre Reznikov arguing over a point concerning Stalin:
Near the old city:
The campus appears to be overrun by cats. Well, overrun is an exaggeration, but then saying the campus is “run by cats” conveys a somewhat different image (which may or may not be accurate).
## The paramodular conjecture is false for trivial reasons
(This is part of a series of occasional posts discussing results and observations in my joint paper with Boxer, Gee, and Pilloni mentioned here.)
Brumer and Kramer made a conjecture positing a bijection between isogeny classes of abelian surfaces $A/\mathbf{Q}$ over the rationals of conductor N with $\mathrm{End}_{\mathbf{Q}}(A) = \mathbf{Z}$ and paramodular Siegel newforms of level N with rational eigenvalues (up to scalar) that are not Gritsenko lifts (Gritsenko lifts are those of Saito-Kurokawa type). This conjecture is closely related to more general conjectures of Langlands, Clozel, etc., but its formulation was made more specifically with a view towards computability and falsifiability (particularly in relation to the striking computations of Poor and Yuen).
The recognition that the “optimal level” of the corresponding automorphic forms is paramodular is one that has proved very useful both computationally and theoretically. Moreover, it is almost certain that something very close to this conjecture is true. However, as literally stated, it turns out that the conjecture is false (though easily modifiable). There are a few possible ways in which things could go wrong. The first is that there are a zoo of cuspidal Siegel forms for GSp(4); it so happens that the forms of Yoshida, Soudry, and Howe–Piatetski-Shapiro type never have paramodular eigenforms (as follows from a result of Schmidt), although this depends on the accident that the field $\mathbf{Q}$ has odd degree and no unramified quadratic extensions (and so the conjecture would need to be modified for general totally real fields). Instead, something else goes wrong. The point is to understand the relationship between motives with $\mathbf{Q}$-coefficients and motives with $\overline{\mathbf{Q}}$-coefficients which are invariant under the Galois group (i.e. Brauer obstructions and the motivic Galois group.)
It might be worth recalling the (proven) Shimura-Taniyama conjecture which says there is a bijection between cuspidal eigenforms of weight two with rational eigenvalues and elliptic curves over the rationals. Why might one expect this to be true from general principles? Let us imagine we are in a world in which the Fontaine-Mazur conjecture, the Hodge conjecture, and the standard conjectures are all true. Now start with a modular eigenform with rational coefficients and level $\Gamma_0(N).$ Certainly, one can attach to this a compatible family of Galois representations:
$\displaystyle{\mathcal{R} = \{\rho_p\}, \qquad \rho_p: \mathrm{Gal}(\overline{\mathbf{Q}}/\mathbf{Q}) \rightarrow \mathrm{GL}_2(\overline{\mathbf{Q}}_p).}$
with the property that the characteristic polynomials $P_q(T) = 1 - a_q T + q T^2$ of Frobenius at any prime $q \nmid Np$ have integer coefficients, and the representations are all de Rham with Hodge-Tate weights [0,1]. But what next? Using the available conjectures, one can show that there must exist a corresponding simple abelian variety $E/\mathbf{Q}$ which gives rise to $\mathcal{R}.$ The key to pinning down this abelian variety is to consider its endomorphism algebra over the rationals. Because it is simple, it follows that the endomorphism algebra is a central simple algebra $D/F$ for some number field F. From the fact that the coefficients of the characteristic polynomial are rational, one can then show that the number field F must be the rationals. But the Albert classification puts some strong restrictions on endomorphism rings of abelian varieties, and the conclusion is the following:
Either:
1. $E/\mathbf{Q}$ is an elliptic curve.
2. $E/\mathbf{Q}$ is a fake elliptic curve; that is, an abelian surface with endomorphisms over $\mathbf{Q}$ by a quaternion algebra $D/\mathbf{Q}.$
The point is now that the second case can never arise; the usual argument is to note that there will be an induced action of the quaternion algebra on the homology of the real points of A, which is impossible since the latter space has dimension two. (This is related to the non-existence of a general cohomology theory with rational coefficients.) In particular, we do expect that such modular forms will give elliptic curves, and the converse is also true by standard modularity conjectures (theorems in this case!). A similar argument also works for all totally real fields. On the other hand, this argument does not work over an imaginary quadratic field (more on this later). In the same way, starting with a Siegel modular form with rational eigenvalues whose transfer to GL(4) is cuspidal, one should obtain a compatible family of irreducible 4-dimensional symplectic representations $\mathcal{R}$ with cyclotomic similitude character. And now one deduces (modulo the standard conjectures and Fontaine-Mazur conjecture and the Hodge conjecture) the existence of an abelian variety A such that:
Either:
1. $A/\mathbf{Q}$ is an abelian surface.
2. $A/\mathbf{Q}$ is a fake abelian surface; that is, an abelian fourfold with endomorphisms over $\mathbf{Q}$ by a quaternion algebra $D/\mathbf{Q}.$
There is now no reason to suspect that fake abelian surfaces cannot exist. Taking D to be indefinite, the corresponding Shimura varieties have dimension three, and they have an abundance of points — at least over totally real fields. But it turns out there is a very easy construction: take a fake elliptic curve over an imaginary quadratic field, and then take the restriction of scalars!
You have to be slightly careful here: one natural source of fake elliptic curves comes from the restriction of certain abelian surfaces of GL(2)-type over $\mathbf{Q},$ and one wants to end up with fourfolds which are simple over $\mathbf{Q}.$ Hence one can do the following:
Example: Let $B/\mathbf{Q}$ be an abelian surface of GL(2)-type which acquires quaternion multiplication over an imaginary quadratic field K, but is not potentially CM. For example, the quotient of $J_0(243)$ with coefficient field $\mathbf{Q}(\sqrt{6})$ with $K = \mathbf{Q}(\sqrt{-3}).$ Take the restriction to K, twist by a sufficiently generic quadratic character $\chi,$ and then induce back to $\mathbf{Q}.$ Then the result will be a (provably) modular fake abelian surface whose corresponding Siegel modular form has rational eigenvalues. Hence the paramodular conjecture is false.
Cremona (in his papers) has discussed a related conjectural correspondence between Bianchi modular forms with rational eigenvalues and elliptic curves over K. His original formulation of the conjecture predicted the existence of a corresponding elliptic curve over K, but one also has to allow for fake elliptic curves as well (as I think was pointed out in this context by Gross). The original modification of Cremona’s conjecture was to only include (twists of) base changes of abelian surfaces of GL(2)-type from Q which became fake elliptic curves over K, but there is no reason to suppose that there do not exist fake elliptic curves which are autochthonous to K, that is, do not arise after twist by base change. Indeed, autochthonous fake elliptic curves do exist! We wrote down a family of such surfaces over $\mathbf{Q}(\sqrt{-6}),$ for example. (We hear through Cremona that Ciaran Schembri, a student of Haluk Sengun, has also found such curves.) On the other hand, the examples coming from base change forms from Q have been known in relation to this circle of problems for 30+ years, and already give (by twisting and base change) immediate counter-examples to the paramodular conjecture, thus the title.
It would still be nice to find fake abelian surfaces over $\mathbf{Q}$ (rather than totally real fields) which are geometrically simple. I’m guessing that (for D/Q ramified only at 2 and 3 and a nice choice of auxiliary structure) the corresponding 3-fold may be rational (one could plausibly prove this via an automorphic form computation), although that still leaves issues of fields of rationality versus fields of definition. But let me leave this problem as a challenge for computational number theorists! (The first place to look would be Jacobians of genus four curves [one might be lucky] even though the Torelli map is far from surjective in this case.)
Let me finish with one fake counterexample. Take any elliptic curve (say of conductor 11). Let $L/\mathbf{Q}$ be any Galois extension with Galois group $Q,$ the quaternion group of order 8. The group $Q$ has an irreducible representation $V$ of dimension 4 over the rationals, which preserves a lattice $\Lambda.$ If you take
$A = E^4 = E \otimes_{\mathbf{Z}} \Lambda,$
then $A$ is a simple abelian fourfold with an action of an order in $D,$ (now the definite Hamilton quaternions) and so gives rise to compatible families $\mathcal{R}$ of 4-dimensional representations which are self-dual up to twisting by the cyclotomic character. However, the four dimensional representations are only symplectic with respect to a similitude character which is the product of the cyclotomic character and a non-trivial quadratic character of $\mathrm{Gal}(L/\mathbf{Q}),$ and instead they are orthogonal with cyclotomic similitude character. So these do not give rise to counterexamples to the paramodular conjecture. A cursory analysis suggests that the quaternion algebra associated to a fake abelian surface which gives rise to a symplectic $\mathcal{R}$ with cyclotomic similitude character should be indefinite.
## Hiring Season
Lizard 1: Wait, explain again why we bury our young in the sand and thereby place them into mortal peril?
Lizard 2: So they develop character! If it was good enough for me, it’s good enough for them.
(Feel free to choose your own metaphors.)
## Abandonware
For a young mathematician, there is a lot of pressure to publish (or perish). The role of for-profit academic publishing is to publish large amounts of crappy mathematics papers, make a lot of money, but at least in return grant the authors a certain imprimatur, which can then be converted into reputation, and then into job offers, and finally into pure cash, and then coffee, and then back into research. One great advantage of being a tenured full professor (at an institution not run by bean counters) is that I don’t have to play that game, and I can be very selective in what papers I choose to submit. In these times — where it is easy to make unpublished work available online, either on the arXiv, a blog, or a webpage — there is no reason for me to do otherwise. Akshay and I are just putting the finishing touches on our manuscript on the torsion Jacquet–Langlands correspondence (a project begun in 2007!), and approximately 100 pages of the original version have been excised from the manuscript. It’s probably unlikely we will publish the rest, not because we don’t think it’s interesting, but because it can already be found online. (Although we might collect the remains into a supplemental “apocrypha” to make referencing easier.) Sarnak writes lots of great letters and simply posts them online. I wrote a paper a few years ago called “Semistable modularity lifting over imaginary quadratic fields.” It has (IMHO) a few interesting ideas, including one strategy for overcoming the non-vanishing of cohomology in multiple degrees in an $l_0 = 1$ situation, a way of proving a non-minimal modularity lifting theorem in an (admittedly restricted) $l_0 = 1$ situation without having to use Taylor’s Ihara Avoidance or base change (instead using the congruence subgroup property), and an argument explaining why the existence of nilpotent ideals in Scholze’s Galois representation is no obstruction to the modularity lifting approach in my paper with David. But while I wrote up a detailed sketch of the argument, gave a seminar about it, and put the preprint on my webpage, I never actually submitted it. One reason was that David and I were (at the time, this was written in 2014-2015 or so) under the cosh by an extremely persnickety referee (to give you some idea, the paper was submitted in 2012 and was only just accepted), and I couldn’t stomach the idea of being raked over the coals a second time merely to include tedious details. (A tiny Bernard Woolley voice at the back of my head is now saying: excuse me minister, you can’t be raked over by a cosh, it doesn’t have any teeth. Well done if you have any idea what I am talking about.) But no matter, the paper is on my webpage where anyone can read it. As it happens, the 10-author paper has certainly made the results of this preprint pretty much entirely redundant, but there are still some ideas which might be useful in the future someday. But I don’t see any purpose whatsoever in subjecting an editor, a reviewer, and (especially) myself to the extra work of publishing this paper.
So I am all in favor of avoiding publishing all but a select number of papers if you can help it, and blogging about math instead. So take a spoon, pass around the brandy butter and plum pudding, and, for the rest of this post, let us tuck in to something from the apocrypha.
Galois Extensions Unramified Away From One Place:
I learned about one version of this question in the tea room at Harvard from Dick Gross. Namely, does there exist a non-solvable Galois extension K/Q unramified at all primes except p? Modular forms (even just restricting to the two eigenforms of level one and weights 12 and 16) provide a positive answer for p greater than 7. On the other hand, Serre’s conjecture shows that this won’t work for the last three remaining primes. Dick explained a natural approach for the remaining primes, namely to consider instead Hilbert modular forms over a totally real cyclotomic extension ramified at p (once you work out how to actually compute such beasts in practice). And indeed, this idea was successfully used to find such representations by Lassina Dembélé in this paper and also this paper (with Greenberg and Voight). But there is something a little unsatisfactory to me about this, namely, these extensions are all ramified at $p$ and $\infty.$ What if one instead asks Gross’ question for a single place?
Minkowski showed there are no such extensions when $v = \{\infty\},$ but I don’t see any obstruction to there being a positive answer for a finite place. The first obvious remark, however, is that Galois representations coming from Hilbert modular forms are not going to be so useful in this case at least when the residual characteristic is odd, for parity reasons.
On the other hand, conjecturally, the Langlands program still has something to say about this question. One could ask, for example, for the smallest prime p for which there exists a Galois representation:
$\displaystyle{\overline{\rho}: G_{\mathbf{Q}} \rightarrow \mathrm{GL}_2(\overline{\mathbf{F}}_p)}$
whose image is big (say not only irreducible but also not projectively exceptional) and is unramified at all places away from p including infinity. (This is related to my first ever blog post.) Here is how one might go about finding such a representation, assuming the usual suite of conjectures. First, take an imaginary quadratic field F, and then look to see if there is any extra mod-p cohomology of $\mathrm{GL}_2(\mathcal{O}_F)$ in some automorphic local system which is not coming from any of the “obvious” sources. If you find such a class, you could then try to do the (computationally difficult) job of computing Hecke eigenvalues, or alternatively you could do the same computation for a different such imaginary quadratic field E, and see if you find a weight for which there is an “interesting” class simultaneously for both number fields. If there are no such classes for any of the (finitely many) irreducible local systems modulo p, then there are (conjecturally) no Galois representations of the above form.
There are some heuristics (explained to me by Akshay) which predict that the number of Galois representations of the shape we are looking for (ignoring twists) is of the order of 1/p. On the other hand, no such extensions will exist for very small p by combining an argument of Tate together with the Odlyzko bounds. So the number of primes up to X for which there exist such a representation might be expected to be of the form
$\log \log X - \log \log C$
for some constant C to account for the lack of small primes (which won’t contribute by Tate + Odlyzko GRH discriminant bounds). This is unfortunately a function well-known to be constant, and in this case, with the irritating correction term, it looks pretty much like the zero constant. Even worse, the required computation becomes harder and harder for larger p, since one needs to compute the cohomology in the corresponding local system of weight $(k,k)$ for k up to (roughly) p. Alas, as it turns out, these things are quite slippery:
Lemma: Suppose $\overline{\rho}$ is absolutely irreducible with Serre level 1 and Serre weight k and is even. Assume all conjectures. Then:
1. The prime $p$ is at least 79.
2. The weight $k$ is at least 33.
3. If $\overline{\rho}$ exists with $k \le 53,$ then $p > 1000.$
4. If $\overline{\rho}$ exists with $k = 55,$ then $p > 200,$ or $p = 163,$ and $\overline{\rho}$ is the unique representation with projective image $A_4.$
Of course the extension for $p = 163$ (which is well-known) does not have big image in the sense described above.
The most annoying thing about this computation (which is described in the apocrypha) is that it can only be done once! Namely, someone who could actually program might be able to extend the computation to (say) $p \le 200,$ but the number of extensions which one would expect to see is roughly $\log \log 200 - \log \log 79,$ which is smaller than a fifth. So maybe an extension of this kind will never be found! (Apologies for ruining it by not getting it right the first time.)
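The arithmetic behind “smaller than a fifth” is a quick check in plain Python (this is only the heuristic count, of course):

```python
import math

loglog = lambda x: math.log(math.log(x))
# expected number of new representations with 79 < p <= 200 under the 1/p heuristic
print(loglog(200) - loglog(79))  # ~0.19, indeed smaller than a fifth
```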
## The ABC conjecture has (still) not been proved
The ABC conjecture has (still) not been proved.
Five years ago, Cathy O’Neil laid out a perfectly cogent case for why the (at that point recent) claims by Shinichi Mochizuki should not (yet) be regarded as constituting a proof of the ABC conjecture. I have nothing further to add on the sociological aspects of mathematics discussed in that post, but I just wanted to report on how the situation looks to professional number theorists today. The answer? It is a complete disaster.
This post is not about making epistemological claims about the truth or otherwise of Mochizuki’s arguments. To take an extreme example, if Mochizuki had carved his argument on slate in Linear A and then dropped it into the Mariana Trench, then there would be little doubt that asking about the veracity of the argument would be beside the point. The reality, however, is that this description is not so far from the truth.
Each time I hear of an analysis of Mochizuki’s papers by an expert (off the record) the report is disturbingly familiar: vast fields of trivialities followed by an enormous cliff of unjustified conclusions. The defense of Mochizuki usually rests on the following point: The mathematics coming out of the Grothendieck school followed a similar pattern, and that has proved to be a cornerstone of modern mathematics. There is an anecdote that goes as follows:
The author hears the following two stories: Once Grothendieck said that there were two ways of cracking a nutshell. One way was to crack it in one breath by using a nutcracker. Another way was to soak it in a large amount of water, to soak, to soak, and to soak, then it cracked by itself. Grothendieck’s mathematics is the latter one.
While rhetorically expedient, the comparison between Mochizuki and Grothendieck is a poor one. Yes, the Grothendieck revolution upended mathematics during the 1960’s “from the ground up.” But the ideas coming out of IHES immediately spread around the world, to the seminars of Paris, Princeton, Moscow, Harvard/MIT, Bonn, the Netherlands, etc. Ultimately, the success of the Grothendieck school is not measured in the theorems coming out of IHES in the ’60s but in how the ideas completely changed how everyone in the subject (and surrounding subjects) thought about algebraic geometry.
This is not a complaint about idiosyncrasy or about failing to play by the rules of the “system.” Perelman more directly repudiated the conventions of academia by simply posting his papers to the arXiv and then walking away. (Edit: Perelman did go on an extensive lecture tour and made himself available to other experts, although he never submitted his papers.) But in the end, in mathematics, ideas always win. And people were able to read Perelman’s papers and find that the ideas were all there (and multiple groups of people released complete accounts of all the details which were also published within five years). Usually when there is a breakthrough in mathematics, there is an explosion of new activity when other mathematicians are able to exploit the new ideas to prove new theorems, usually in directions not anticipated by the original discoverer(s). This has manifestly not been the case for ABC, and this fact alone is one of the most compelling reasons why people are suspicious.
The fact that these papers have apparently now been accepted by the Publications of the RIMS (a journal where Mochizuki himself is the managing editor, not necessarily itself a red flag, but poor optics nonetheless) really doesn’t change the situation as far as giving anyone a reason to accept the proof. If anything, the value of the referee process is not merely in getting some reasonable confidence in the correctness of a paper (not absolute certainty; errors do occur in published papers, usually of a minor sort that can be either instantly fixed by any knowledgeable reader or sometimes with an erratum, and more rarely requiring a retraction); just as importantly, it forces the author(s) to bring the clarity of the writing up to a reasonable standard for professionals to read it (so they don’t need to take the same time duration that was required for the referees, amongst other things). This latter aspect has been a complete failure, calling into question both the quality of the referee work that was done and the judgement of the editorial board at PRIMS to permit papers in such an unacceptable and widely recognized state of opaqueness to be published. We do now have the ridiculous situation where ABC is a theorem in Kyoto but a conjecture everywhere else. (edit: a Japanese reader has clarified to me that the newspaper articles do not definitively say that the papers have been accepted, but rather the wording is something along the lines of “it is planned that PRIMS will accept the paper,” whatever that means. This makes no change to the substance of this post, except that, while there is still a chance the papers will not be accepted in their current form, I retract my criticism of the PRIMS editorial board.)
So why has this state persisted so long? I think I can identify three basic reasons. The first is that mathematicians are often very careful (cue the joke about a sheep at least one side of which is black). Mathematicians are very loath to claim that there is a problem with Mochizuki’s argument because they can’t point to any definitive error. So they tend to be very circumspect (reasonably enough) about making any claims to the contrary. We are usually trained as mathematicians to consider an inability to understand an argument as a failure on our part. Second, whenever extraordinary claims are made in mathematics, the initial reaction takes into account the past work of the author. In this case, Shinichi Mochizuki was someone who commanded significant respect and was considered by many who knew him to be very smart. It’s true (as in the recent case of Yitang Zhang) that an unknown person can claim to have proved an important result and be taken seriously, but if a similarly obscure mathematician had released 1000 pages of mathematics written in the style of Mochizuki’s papers, they would have been immediately dismissed. Finally, in contrast to the first two points, there are people willing to come out publicly and proclaim that all is well, and that the doubters just haven’t put in the necessary work to understand the foundations of inter-universal geometry. I’m not interested in speculating about the reasons they might be doing so. But the idea that several hundred hours at least would be required even to scratch the beginnings of the theory is either utter rubbish, or so far beyond the usual experience of how things work that it would be unique not only in mathematics, but in all of science itself.
So where to from here? There are a number of possibilities. One is that someone who examines the papers in depth is able to grasp a key idea, come up with a major simplification, and transform the subject by making it accessible. This was the dream scenario after the release of the paper, but it becomes less and less likely by the day (and year). But it is still possible that this could happen. The flip side of this is that someone could find a serious error, which would also resolve the situation in the opposite way. A third possibility is that we have (roughly) the status quo: no coup de grâce is found to kill off the approach, but at the same time the consensus remains that people can’t understand the key ideas. (I should say that whether the papers are accepted or not in a journal is pretty much irrelevant here; it’s not good enough for people to attest that they have read the argument and it is fine, someone has to be able to explain it.) In this case, the mathematical community moves on and then, whether it be a year, a decade, or a century, when someone ultimately does prove ABC, one can go back and compare to see if (in the end) the ideas were really there after all.
This last summer, I undertook my last official activity as a faculty member at Northwestern University, namely, graduation day! (I had a 0% courtesy appointment for two years until my last Northwestern students graduated.)
Here I am with four of my six former students. (Richard and Vlad actually graduated in 2016, but were hooded together with Joel in 2017.)
From left-to-right: Richard Moy is a postdoc at Willamette University in Salem, Oregon (for previous blog posts on Richard’s work, see Hilbert Modular Forms Part II and Part III), Zili Huang (Thurston and Random Polynomials) has a real job at a consulting firm in Chicago but swung by to say hello on graduation day, Vlad Serban (The Thick Diagonal) has a postdoctoral position in Vienna, and Joel Specter (Hilbert Modular Forms Part II and … hmm, I guess I didn’t blog about any of his other papers) has just started a postdoc position at Johns Hopkins. Missing are Zoey Guo (Abelian Spiders), now at the Institute of Solid Mechanics at Tsinghua University in Beijing, and my first student Maria Stadnik (who just moved to Florida Atlantic University, and whose thesis predates this blog).
It’s easy to get the sense as a student that math departments are fairly static (which is mostly true over the 4 years or so it takes to do a PhD), but as time goes on, people end up moving around much more than you expect, and the characters of various departments change quite a bit. A sign of good hiring is that your faculty leave because they have been recruited elsewhere! And even though my departure two years ago brought one era of number theory at Northwestern to an end — starting with Matt, then me, two one-year cameo appearances by Toby, and a string of very successful postdocs (not to mention the occasional visitors) — a new era has already begun, with the hiring of Yifeng Liu and Bao Le Hung.
## Abelian Surfaces are Potentially Modular
Today I wanted (in the spirit of this post) to report on some new work in progress with George Boxer, Toby Gee, and Vincent Pilloni.
Recall that, for a smooth projective variety X over a number field F unramified outside a finite set of primes S, one may write down a global Hasse-Weil zeta function:
$\displaystyle{ \zeta_{X,S}(s) = \prod \frac{1}{1 - N(x)^{-s}}}$
where the product runs over closed points of a smooth integral model. From the Weil conjectures, the function $\zeta_{X,S}(s)$ is absolutely convergent for s with real part greater than $1+m/2,$ where $m = \mathrm{dim}(X).$ One has the following well-known conjecture:
Hasse–Weil Conjecture: The function $\zeta_{X,S}(s)$ extends to a meromorphic function on the complex plane. Moreover, there exists a rational number A, a collection of polynomials $P_v(T)$ for v dividing S, and Gamma factors $\Gamma_v(s)$ at the infinite places such that
$\displaystyle{ \xi_{X}(s) = \zeta_{X,S}(s) \cdot A^{s/2} \cdot \prod_{v|\infty} \Gamma_v(s) \cdot \prod_{v|S} \frac{1}{P_v(N(v)^{-s})}}$
satisfies the functional equation $\xi_X(s) = w \cdot \xi_X(m+1-s)$ with $w = \pm 1.$
Naturally, one can be more precise about the conductor and the factors at the bad primes. In the special case when F = Q and X is a point, then $\zeta_{X,S}(s)$ is essentially the Riemann zeta function, and the conjecture follows from Riemann’s proof of the functional equation. If F is a general number field but X is still a point, then $\zeta_{X,S}(s)$ is (up to some missing Euler factors at S) the Dedekind zeta function $\zeta_F(s)$ of F, and the conjecture is a theorem of Hecke. If X is a curve of genus zero over F, then $\zeta_{X,S}(s)$ is $\zeta_F(s) \zeta_F(s-1),$ and one can reduce to the previous case. More generally, by combining Hecke’s results with an argument of Artin and Brauer about writing a representation as a virtual sum of induced characters from solvable (Brauer elementary) subgroups, one can prove the result for any X for which the l-adic cohomology groups are potentially abelian. This class of varieties includes those for which all the cohomology of X is generated by algebraic cycles.
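To make the local factors concrete, here is a minimal sketch in Python; the curve $y^2 = x^3 + x + 1$ is a hypothetical example (its bad primes are 2 and 31), and the naive point count is only sensible for small primes:

```python
# Local factors of the Hasse-Weil zeta function of E: y^2 = x^3 + x + 1 over Q.
# At a good prime p the factor is (1 - a_p p^-s + p^(1-2s)) / ((1 - p^-s)(1 - p^(1-s))):
# H^1 contributes the numerator; H^0 and H^2 contribute the denominator.

def a_p(p):
    # a_p = p + 1 - #E(F_p); count affine points, then add the point at infinity
    affine = sum(1 for x in range(p) for y in range(p)
                 if (y * y - (x**3 + x + 1)) % p == 0)
    return p + 1 - (affine + 1)

def local_factor(p, s):
    h1 = 1 - a_p(p) * p**(-s) + p**(1 - 2 * s)
    return h1 / ((1 - p**(-s)) * (1 - p**(1 - s)))

# s = 2.5 lies in the region of absolute convergence (real part > 1 + m/2 = 3/2)
for p in [5, 7, 11, 13]:
    print(p, a_p(p), local_factor(p, 2.5))
```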
For a long time, not much was known beyond these special cases. But that is not to say there was not a lot of progress, particularly in the conjectural understanding of what this conjecture really was about. The first huge step was the discovery and formulation of the Taniyama-Shimura conjecture, and the related converse theorems of Weil. The second was the fundamental work of Langlands which cast the entire problem in the (correct) setting of automorphic forms. In this context, the Hasse-Weil zeta functions of modular curves were directly linked to the L-functions of classical weight 2 modular forms. More generally, the Hasse-Weil zeta functions of all Shimura varieties (such as Picard modular surfaces) should be linked (via the trace formula and conjectures of Langlands and Kottwitz) to the L-functions of automorphic representations. On the other hand, these examples are directly linked to the theory of automorphic forms, so the fact that their Hasse-Weil zeta functions are automorphic, while still very important, is not necessarily evidence for the general case. In particular, there was no real strategy for taking a variety that occurred “in nature” and saying anything non-trivial about the Hasse-Weil zeta function beyond the fact it converged for real part greater than $1 + m/2,$ which itself requires the full strength of the Weil conjectures.
The first genuinely new example arrived in the work of Wiles (extended by others, including Breuil-Conrad-Diamond-Taylor), who proved that elliptic curves E/Q were modular. An immediate consequence of this theorem is that the Hasse-Weil conjecture holds for elliptic curves over Q. Taylor’s subsequent work on potential modularity, while not enough to prove modularity of all elliptic curves over all totally real fields, was still strong enough to allow him to deduce the Hasse-Weil conjecture for any elliptic curve over a totally real field. You might ask what have been the developments since these results. After all, the methods of modularity have been a very intense subject of study over the past 25 years. One problem is that these methods have been extremely reliant on a regularity assumption on the corresponding motives. One nice example of a regular motive is the symmetric power of any elliptic curve. On the other hand, if one takes a curve X over a number field, then $h^{1,0} = h^{0,1} = g,$ and the corresponding motive is regular only for g = 0 or 1. The biggest progress in automorphy of non-regular motives has actually come in the form of new cases of the Artin conjecture — first by Buzzard-Taylor and Buzzard, then in the proof of Serre’s conjecture by Khare-Wintenberger over Q, and more recently in subsequent results by a number of people (Kassaei, Sasaki, Pilloni, Stroh, Tian) over totally real fields. But these results provide no new cases of the Hasse-Weil conjecture, since the Artin cases were already known in this setting by Brauer. (It should be said, however, that the generalized modularity conjecture is now considered more fundamental than the Hasse-Weil conjecture.) There are a few other examples of Hasse-Weil one can prove by using various forms of functoriality to get non-regular motives from regular ones, for example, by using the Arthur-Clozel theory of base change, or by Rankin-Selberg. We succeed, however, in establishing the conjecture for a class of motives which is non-regular in an essential way. The first corollary of our main result is as follows:
Theorem [Boxer,C,Gee,Pilloni] Let X be a genus two curve over a totally real field. Then the Hasse-Weil conjecture holds for X.
It will be no surprise to the experts that we deduce the theorem above from the following:
Theorem [BCGP] Let A be an abelian surface over a totally real field F. Then A is potentially modular.
In the case when A has trivial endomorphisms (the most interesting case), this theorem was only known for a finite number of examples over $\mathbf{Q}.$ In each of those cases, the stronger statement that A is modular was proved by first explicitly computing the corresponding low weight Siegel modular form. For example, the team of Brumer-Pacetti-Tornaría-Poor-Voight-Yuen prove that the abelian surfaces of conductors 277, 353, and 587 are all modular, using (on the Galois side) the Faltings-Serre method, and (on the automorphic side) some really quite subtle computational methods developed by Poor and Yuen. A paper of Berger-Klosin handles a case of conductor 731 by a related method that replaces the Faltings-Serre argument by an analysis of certain reducible deformation rings.
The arguments of our paper are a little difficult to summarize for the non-expert. But George Boxer did a very nice job presenting an overview of the main ideas, and you can watch his lecture online (posted below, together with Vincent’s lecture on higher Hida theory). The three sentence version of our approach is as follows. There was a program initiated by Tilouine to generalize the Buzzard-Taylor method to GSp(4), which ran into technical problems related to the fact that Siegel modular forms are not directly reconstructible from their Hecke eigenvalues. There was a second approach coming from my work with David Geraghty, which used instead a variation of the Taylor-Wiles method; this ran into technical problems related to the difficulty of studying torsion in the higher coherent cohomology of Shimura varieties. Our method is a synthesis of these two approaches using Higher Hida theory as recently developed by Pilloni. Let me instead address one or two questions here that GB did not get around to in his talk:
What is the overlap of this result with [ACCGHLNSTT]? Perhaps surprisingly, not so much. For example, our results are independent of the arguments of Scholze (and now Caraiani-Scholze) on constructing Galois representations to torsion classes in Betti cohomology. We do give a new proof of the result that elliptic curves over CM fields are potentially modular, but that is the maximal point of intersection. In contrast, we don’t prove that higher symmetric powers of elliptic curves are modular. We do, however, prove potentially modularity of all elliptic curves over all quadratic extensions of totally real fields with mixed signature, like $\mathbf{Q}(2^{1/4}).$ The common theme is (not surprisingly) the Taylor-Wiles method (modified using the ideas in my paper with David Geraghty).
What’s new in this paper which allows you to make progress on this problem? George explains this well in his lecture. But let me at least stress this point: Vincent Pilloni’s recent work on higher Hida theory was absolutely crucial. Boxer, Gee, and I were working on questions related to modularity in the symplectic case, but when Pilloni’s paper first came out, we immediately dropped what we were doing and started working (very soon with Pilloni) on this problem. If you have read the Calegari-Geraghty paper on GSp(4) and are not an author of the current paper (hi David!), and you look through our manuscript (currently a little over 200 pages and [optimistically?!] ready by the end of the year), then you will also recognize other key technical points, including a more philosophically satisfactory doubling argument and Ihara avoidance in the symplectic case, amongst other things.
So what about modularity? Of course, we deduce our potential modularity result from a modularity lifting theorem. The reason we cannot deduce that Abelian surfaces are all modular, even assuming for example that they are ordinary at 3 with big residual image, is that Serre’s conjecture is not so easy. Not only is $\mathrm{GSp}_4(\mathbf{F}_3)$ not a solvable group, but — and this is more problematic — Artin representations do not contribute to the coherent cohomology of Shimura varieties in any setting other than holomorphic modular forms of weight one. Still, there are some sources of residually modular representations, including the representations induced from totally real quadratic extensions (for small primes, at least). We do, however, prove the following (which GB forgot to mention in his talk, so I bring up here):
Proposition [BCGP]: There exist infinitely many modular abelian surfaces (up to twist) over $\mathbf{Q}$ with $\mathrm{End}_{\mathbf{C}}(A) = \mathbf{Z}.$
This is proved in an amusing way. It suffices to show that, given a residual representation
$\overline{\rho}: G_{\mathbf{Q}} \rightarrow \mathrm{GSp}_4(\mathbf{F}_3)$
with cyclotomic similitude character (or rather inverse cyclotomic character with our cohomological normalizations) which has big enough image and is modular (plus some other technical conditions, including ordinary and p-distinguished) that it comes from infinitely many abelian surfaces over Q, and then to prove the modularity of those surfaces using the residual modularity of $\overline{\rho}.$ This immediately reduces to the question of finding rational points on some twist of the moduli space $\mathcal{A}_2(3).$ And this space is rational! Moreover, it turns out to be a very famous hypersurface much studied in the literature — it is the Burkhardt Quartic. Now unfortunately — unlike for curves — it’s not so obvious to determine whether a twist of a higher dimensional rational variety is rational or not. The problem is that the twisting is coming from an action by $\mathrm{Sp}_4(\mathbf{F}_3),$ and that action is not compatible with the birational map to $\mathbf{P}^3,$ so the resulting twist is not a priori a Severi-Brauer variety. However, something quite pleasant happens — there is a degree six cover
$\mathcal{A}^{-}_2(3) \stackrel{6:1}{\rightarrow} \mathcal{A}_2(3)$
(coming from a choice of odd theta characteristic) which is not only still rational, but now rational in an equivariant way. So now one can proceed following the argument of Shepherd-Barron and Taylor in their earlier paper on mod-2 and mod-5 Galois representations.
What about curves of genus g > 2?: Over $\mathbf{Q},$ there is a tetrachotomy corresponding to the cases g = 0, g = 1, g = 2, and g > 2. The g = 0 case goes back to the work of Riemann. The key point in the g = 1 case (where the relevant objects are modular forms of weight two) is that there are two very natural ways to study these objects. The first (and more classical) way to think about a modular form is as a holomorphic function on the upper half plane which satisfies specific transformation properties under the action of a finite index subgroup of $\mathrm{SL}_2(\mathbf{Z}).$ This gives a direct relationship between modular forms and the coherent cohomology of modular curves; namely, cuspidal modular forms of weight two and level $\Gamma_0(N)$ are exactly holomorphic differentials on the modular curve $X_0(N).$ On the other hand, there is a second interpretation of modular forms of weight two in terms of the Betti (or etale or de Rham) cohomology of the modular curve. A direct way to see this is that holomorphic differentials can be thought of as smooth differentials, and these satisfy a duality with the homology group $H_1(X_0(N),\mathbf{R})$ by integrating a differential along a loop. And it is the second description (in terms of etale cohomology) which is vital for studying the arithmetic of modular forms. When g = 2, there is still a description of the relevant forms in terms of coherent cohomology of Shimura varieties (now Siegel 3-folds), but there is no longer any direct link between these coherent cohomology groups and etale cohomology. Finally, when g > 2, even the relationship with coherent cohomology disappears — the relevant automorphic objects have some description in terms of differential equations on locally symmetric spaces, but there is no longer any way to get a handle on these spaces. For those that know about Maass forms, the situation for g > 2 is at least as hard (probably much harder) than the notorious open problem of constructing Galois representations associated to Maass forms of eigenvalue 1/4. In other words, it’s probably very hard! (Of course, there are special cases in higher genus when the Jacobian of the curve admits extra endomorphisms which can be handled by current methods.)
Finally, as promised, here are the videos:
https://brilliant.org/problems/an-algebra-problem-by-dragan-markovic-2/
# A number theory problem by Dragan Marković
$\large{\begin{cases} x^2+y^2+z^2=18 \\ xy+yz+zx = 9 \end{cases}}$
Let $x, y$ and $z$ be integers satisfying the system of equations above. Find $|x| + |y| + |z|$.
Notation: $| \cdot |$ denotes the absolute value function.
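A brute-force verification is immediate, since $x^2 + y^2 + z^2 = 18$ forces $|x|, |y|, |z| \le 4$. A sketch in plain Python:

```python
solutions = [(x, y, z)
             for x in range(-4, 5) for y in range(-4, 5) for z in range(-4, 5)
             if x*x + y*y + z*z == 18 and x*y + y*z + z*x == 9]
print(solutions)  # permutations of (1, 1, 4) and (3, 3, 0), and their negations
print({abs(x) + abs(y) + abs(z) for (x, y, z) in solutions})  # {6}
```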
https://stats.stackexchange.com/questions/163898/comparing-two-gaussians-with-likelihood
# Comparing two Gaussians with likelihood
Given a univariate Gaussian with mean $\mu_1$ and variance $\sigma_1$, and a second univariate Gaussian with $\mu_2, \sigma_2$, compare the two using the likelihood in order to find out how similar they are.
This task was given to me by my supervisor. He said "Google it, probably top hit". Now, after two hours, I still haven't found it. I know how to compute the likelihood that some data was generated by a given Gaussian, but not how to compare two Gaussians, and he explicitly said that I should use the mean and variance of both.
Edit: Not the Kullback-Leibler divergence
• I am not sure it is well-defined. 'Similar' in what sense? one could argue you can take the two cdfs, let's say $F_1=\Phi((x-\mu_1)/\sigma_1)$ and $F_2=\Phi((x-\mu_2)/\sigma_2)$, where $\Phi$ is the standard normal cdf, and calculate $||F_1-F_2||$. But then you need to think about the norm in which you calculate it (L2 norm, TV-norm, max-norm, etc.), and also whether you want to calculate it on the cdf or on the pdf, etc. – yoki Jul 30 '15 at 11:32
• The likelihood is a function of the parameters contingent upon having data. (See, for instance, our thread at stats.stackexchange.com/questions/2641 .) In the absence of any data, suggesting that Gaussians (or any other distributions) could be compared on the basis of a "likelihood" alone is therefore meaningless. – whuber Jul 30 '15 at 13:16
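One concrete, data-free way to quantify similarity (a sketch, not the likelihood-based comparison the supervisor had in mind, and distinct from KL) is the Hellinger distance between the two normals, which has a closed form; here the $\sigma_i$ are treated as standard deviations:

```python
import math

def bhattacharyya_coefficient(mu1, s1, mu2, s2):
    # BC lies in (0, 1] and equals 1 exactly when the two normals coincide
    return math.sqrt(2 * s1 * s2 / (s1**2 + s2**2)) * \
           math.exp(-((mu1 - mu2)**2) / (4 * (s1**2 + s2**2)))

def hellinger_distance(mu1, s1, mu2, s2):
    # H lies in [0, 1); larger means less similar
    return math.sqrt(1 - bhattacharyya_coefficient(mu1, s1, mu2, s2))

print(hellinger_distance(0, 1, 0, 1))  # 0.0: identical Gaussians
print(hellinger_distance(0, 1, 3, 2))  # approaches 1 as the densities separate
```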
http://lists.gnu.org/archive/html/lilypond-user/2008-02/msg00398.html
lilypond-user
## Re: fermata and text (markup) on/above one note
From: Trevor Bača
Subject: Re: fermata and text (markup) on/above one note
Date: Thu, 14 Feb 2008 17:55:09 -0600
On Thu, Feb 14, 2008 at 4:26 PM, Mats Bengtsson <address@hidden> wrote:
Trevor Bača wrote:
> OK, I was confusing dynamics and text scripts. For text scripts this
> baseline alignment is very cool. But for dynamics what I'm actually
> looking for is *center* alignment in the vertical direction (rather
> than *baseline* aligment). In other words, in Kieren's example above,
> my preference is to see the p and f center aligned *as though the p
> were the beginning of a crescendo and the f were the end*.
>
> Is there such a trick for *center* alignment of dynamic marks??
Actually, as long as the dynamics belong to the same
DynamicLineSpanner, i.e. a sequence of absolute dynamics,
hairpins and/or cresc/decresc with no gaps in between them,
then this is exactly what you get by default. If you want it
also for dynamics that are separated by some notes so that
they don't belong to the same DynamicLineSpanner, then
you can use exactly the same trick. I just increased the default
value of the staff-padding and kept the default center alignment
of each DynamicText in relation to its DynamicLineSpanner:
\version "2.10.33"
\paper { indent = 0 line-width = 2\in }
music = \relative c''
{
c2\p^\markup { "gorgeous" } c\f^\markup { "fantastic" }
c4\p c \f \> c c \! \p
}
\score
{
{
\music \break
\override DynamicLineSpanner #'staff-padding = #2.0
\override DynamicLineSpanner #'Y-extent = #'(-1 . 1)
\override TextScript #'Y-extent = #'(-1 . 1)
\music
}
}
OK, that's now officially brilliant. Mats, thanks so much (as usual). An entire .scm file I have sitting in my score directory can now vanish thanks to this single setting -- DynamicLineSpanner #'Y-extent. Excellent.
At the risk of belaboring the point, I'm having trouble understanding *why* this works. Here's an absolutely minimal example:
%%% TURNING ON Y-EXTENT %%%
\version "2.11.39"
\layout { ragged-right = ##t }
\new Staff {
  \override DynamicLineSpanner #'staff-padding = #4
  c'2 \p
  c'2 \f
}
\new Staff {
  \override DynamicLineSpanner #'staff-padding = #4
  \override DynamicLineSpanner #'Y-extent = #'(0 . 0)
  c'2 \p
  c'2 \f
}
%%% END %%%
In the first case we get usual (top-aligned) behavior; in the second case we get this excellent (center-aligned) behavior. The only difference is that the second example overrides DynamicLineSpanner #'Y-extent. But notice that the actual pair of values passed to DynamicLineSpanner #'Y-extent doesn't seem to matter because #'(0 . 0) and #'(1 . -1) and in fact #'(10 . -10) all provide this nifty center alignment.
So what's going on here? It seems that merely "turning on" DynamicLineSpanner #'Y-extent is enough to induce vertical center alignment, regardless of the actual value. Is this a good way to conceptualize what's happening here? If so, that's quite an implicit (hidden, even) principle ... i.e., that turning on Y-extent switches the origin of alignment from top to center.
--
Trevor Bača
|
http://math.stackexchange.com/questions/245188/how-to-analyze-problems-in-analysis-computing-int-02-pi-frac1ab-cos
|
# How to 'analyze' problems in analysis; Computing $\int_0^{2\pi}\frac{1}{(a+b\cos(\theta))^2}d\theta$
If $a, b \in \mathbb{R}$ with $a > b > 0$, compute this ungodly thing;
$$\int_0^{2\pi}\frac{1}{(a+b\cos(\theta))^2}d\theta$$
I'm really not a fan of complex analysis... I can't visualize what's going on here. When I look at things from group/number/graph theory I see ideas. When I look at this... all I see is a bunch of symbols.
Where to begin with this? Maybe someone with an analysis background can offer some intuition on 'analyzing' these things, to better see what's going on...
Because right now, I'm lost looking at this.
Thanks guys!
Like, my thought process is telling me that; the integrating variable is the inverse of a quadratic sinusoidal function.. But I can't see the resulting function in my head. Should I even be trying to do this? – Chloe.H Nov 26 '12 at 19:59
Is this specifically from a complex analysis class/text? As in, complex numbers, holomorphic functions, etc.? – Antonio Vargas Nov 26 '12 at 20:09
Yes it's a class on complex analysis. But from an engineering perspective shudder. My school lops the engineering students and math students in the same complex analysis course ... so most of what we do is look at examples... – Chloe.H Nov 26 '12 at 20:11
Hint: Use Euler's formula to rewrite the integrand in terms of $e^{i \theta}$, then rewrite the integral as a contour integral in the complex plane (what kind of shape does $e^{i \theta}$ parameterize?).
As for the intuition...
The $\cos \theta$ brings back fond memories of polar coordinates (remember $x = r \cos \theta$, $y = r \sin \theta$?), and then the $0$ to $2 \pi$ integral sparks the thought that somewhere, somehow, a circle is being parameterized. The quantity $e^{i \theta}$ does exactly that--it parameterizes the unit circle.
I remember from the hymns of ages past that Euler's formula can be used to rewrite $\cos \theta$ in terms of $e^{i \theta}$, so I follow in the footprints of the ancients and take advantage of this to rewrite the integrand. Calling upon the dark arts, I make a change of variables which transports me from the realm of the (real) line to the realm of the infinite (complex) plane. (The substitution $z = e^{i \theta}$ looks nice.)
Now that I'm in my element, I can go at the problem using all the magicks which were, until now, forbidden (Cauchy's theorem, the residue theorem, etc.). Of course I have my trusty emergency pack ready, as I might need some extra tools along the way (partial fractions).
Firstly, thank you for the hint. Secondly, may I ask what do you see in the integral that makes you think this is a good approach? I'm just trying to see this problem through more enlightening eyes – Chloe.H Nov 26 '12 at 20:29
@Chloe.H, I've edited my answer to describe some of my intuition for a problem like this. – Antonio Vargas Nov 26 '12 at 20:46
Loved it! +1 just for "Calling upon the dark arts" lol I really appreciate it! – Chloe.H Nov 26 '12 at 20:56
Glad to help :) – Antonio Vargas Nov 26 '12 at 20:58
Hint
One method is to use the Weierstrass Substitution: \begin{align} \tan\left(\frac t2\right)&=z&\mathrm{d}t&=\frac{2\,\mathrm{d}z}{1+z^2}\\[6pt] \sin(t)&=\frac{2z}{1+z^2}&\cos(t)&=\frac{1-z^2}{1+z^2} \end{align} Apply the substitution: \begin{align} \int_0^{2\pi}\frac{\mathrm{d}t}{(a+b\cos(t))^2} &=\int_{-\infty}^\infty\frac{\frac{2\,\mathrm{d}z}{1+z^2}}{\left(a+b\frac{1-z^2}{1+z^2}\right)^2}\\[6pt] &=\int_{-\infty}^\infty\frac{2(1+z^2)\,\mathrm{d}z}{((a+b)+(a-b)z^2)^2}\\[6pt] &=\color{#C00000}{\int_{-\pi/2}^{\pi/2}\frac{2\left(1+\frac{a+b}{a-b}\tan^2(u)\right)\sqrt{\frac{a+b}{a-b}}\sec^2(u)\mathrm{d}u}{(a+b)^2\sec^4(u)}}\\[6pt] &=\frac{2}{(a^2-b^2)^{3/2}}\int_{-\pi/2}^{\pi/2}\left(a+b-2b\cos^2(u)\right)\,\mathrm{d}u \end{align} where we have substituted $z=\color{#C00000}{\sqrt{\frac{a+b}{a-b}}\;\tan(u)}$. The last integral is very easy.
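Carrying the last step through: $\int_{-\pi/2}^{\pi/2}\left(a+b-2b\cos^2(u)\right)du = (a+b)\pi - b\pi = \pi a$, so the integral equals $\frac{2\pi a}{(a^2-b^2)^{3/2}}$. A numeric spot-check (my addition; the values of $a$ and $b$ are arbitrary):

```python
import numpy as np
from scipy.integrate import quad

a, b = 3.0, 1.0                         # any a > b > 0
val, err = quad(lambda t: (a + b * np.cos(t)) ** -2, 0.0, 2.0 * np.pi)
print(val)                              # ~0.83304
print(2.0 * np.pi * a / (a ** 2 - b ** 2) ** 1.5)   # closed form, same value
```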
|
http://tex.stackexchange.com/tags/fonts/new
|
# Tag Info
0
After an odyssey of trying anything that came to my mind and talking to Robert Koch, I finally found a solution to my problem. I am going to post the answer so that, in the off chance of someone running into the same issue in the future, there is no need for endless trial and error. When I encountered the problem first I was using TeXstudio as an editor. ...
2
An up-to-date TeX distribution has the XCharter OpenType font, so you can use it instead of \renewcommand{\rmdefault}{bch} which will not give you anything except for ASCII characters. Moreover \la as you defined it is just a switch that tells XeLaTeX to use Brill from that point on. A correct document would be \documentclass{article} ...
2
in \la{N} the {} are not doing anything, \la does not take an argument, it switches font for the remainder of the current group. In this case that is the \end{enumerate} So at that point the font (and any other local declarations) revert to the values they had at \begin{enumerate}. It isn't clear if you intended \la just to change the font of N or if you ...
1
In my case, it was caused by a Greek symbol pasted directly into body text instead of inside an inline equation. My Lyx file was working fine until I did something and this error showed up, pointing at an equation that had been showing up just fine previously. I discovered that the problem was in fact that I had typed a Greek symbol in standard text (not ...
3
I don't know why you'd want to use mathptmx and substitute symbols from newtxmath instead of directly using the latter. The fonts provided by mathptmx are in large parts ugly, because they are built from various sources; in particular the Greek letters are nasty. However, here's how you can do for the integral. I'll leave to you other symbols. Note that ...
3
Provided you don't need scaling, at a minimum you need something like this: \documentclass[a4paper]{report} \DeclareFontEncoding{LMX}{}{} \DeclareFontSubstitution{LMX}{ntxexx}{m}{n} \DeclareFontFamily{LMX}{ntxexx}{} \DeclareFontShape{LMX}{ntxexx}{m}{n}{<->ntxexx}{} \DeclareFontShape{LMX}{ntxexx}{b}{n}{<->ntxbexx}{} ...
2
The packages MinionPro and MyriadPro from FontPro can be used together. By default, the math font is not changed by MyriadPro. With the sansmath option, the sans serif of \mathsf is adjusted and an additional math version is defined to allow two math fonts with matching math symbols in one document (see the documentation for that). Thus, the code ...
2
Your installation was correct. If you try other fonts, you would encounter no problems. You just were unlucky enough to try the Garamond font first. The KPSE* family and the updmap are not the issue here. Apparently, the problem lies with the documentation at the LaTeX font catalog which claims that Garamond is part of TeXLive. Garamond is only partially ...
0
Adding to Sverre's answer of using TeX Gyre Termes (tgtermes package): it doesn't work for me with \ttfamily / \texttt{} i.e. typewriter fonts. I get the following from pdftotext (version 0.12.4): rmfamily: zażółć gęsią jaźń ttfamily: za»óª¢ g¦si¡ ja¹« I have browsed The LaTeX Font Catalogue and found out the following solution -- use TXTT font for ...
3
You can use the Times font TeX Gyre Termes: \documentclass{article} \usepackage{tgtermes} \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \input glyphtounicode \pdfgentounicode=1 \begin{document} \noindent \rmfamily rmfamily: zażółć gęsią jaźń \\ \ttfamily ttfamily: zażółć gęsią jaźń \\ \end{document} Copy and paste from the .pdf gives this ...
3
The mycmr10.vf file includes your new character width. So xdvi knows this from this source and the characters are overlap. But xdvi doesn't read the cmr10.tfm file (as mentioned in the previous thread). You can try this in your working directory: tftopl cmr10.tfm > cmr10.pl edit cmr10.pl in order the character A has different width. pltotf cmr10.pl ...
5
Your test is not conclusive. Consider the file \nopagenumbers \font\ptm=ptmr8r \ptm a \bye and compile it with tex -recorder, so a .fls file will be created containing information about the loaded files; I only removed the first line that reports the working directory PWD <not shown> INPUT /usr/local/texlive/2014/texmf.cnf INPUT ...
4
The tfm information is saved in .fmt file when tex -ini is run. The dvi output includes the shifts (numerically specified by internal unit) of current typesetting point. These commands are used at inter-word spaces, inter-letter kerns and at other vertical/horizontal spaces generated by TeX) the commands about typesetting the characters from the slots ...
9
The tfm file for cmr10 has been read and the relevant data is dumped in the format, and is not read again. You can see all the fonts that were read in your initex run. The tfm for all these are loaded into the format: \font\tenrm=cmr10 \font\preloaded=cmr9 \font\preloaded=cmr8 \font\sevenrm=cmr7 \font\preloaded=cmr6 \font\fiverm=cmr5 \font\teni=cmmi10 ...
1
sudo apt-get install texlive-fonts-recommended
After installing the recommended fonts the compilation worked like a charm.
7
Roboto comes in two versions, both of which may be seen and downloaded at publicobject.com/2014/06/26/roboto-old-and-new/. For a far more elaborate comparison, see the review by Stephen Coles, “Roboto Is Was a Four-headed Frankenfont,” at typographica.org. For now, the package roboto supplies the older version of the font. Users of luatex or xetex may ...
1
If you can't access these symbols from a Unicode font, but you can see them on screen, you could grab them in some form and then include them as a graphic. \documentclass{article} \usepackage{graphicx} \newcommand{\lbrackopen}{% \raisebox{-.6ex}{% \includegraphics[height=\baselineskip]{lbrackopen}% }% } \begin{document} I could use [this] ...
2
When you do \usepackage[<ENCODING>]{fontenc} the file <encoding>enc.def is loaded (the encoding name is converted to lowercase for reasons due to uniformity in the various file systems). For instance \usepackage[T1]{fontenc} will load t1enc.def. The start of this file is \ProvidesFile{t1enc.def} [2005/09/27 v1.99g Standard LaTeX ...
3
A full solution that neutralizes the pesky encoding specific commands: \documentclass{article} \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \usepackage[full]{textcomp} \usepackage{kpfonts} \DeclareTextSymbol{\textbornstd}{TS1}{98} \DeclareTextSymbol{\textdiedstd}{TS1}{100} \UndeclareTextCommand{\textborn}{TS1} \UndeclareTextCommand{\textdied}{TS1} ...
0
Firstly, if you are being asked to use a standard template then you should not change the defaults. Otherwise, there are two approaches. Approach 1 In IEEEtran version v1.8a, theorem titles are italic by default, unless you use the compsoc option, which gives a bold italic title. If you add this option then a minimal document with a theorem gives the ...
1
If this is what you expect to see in your LaTeX / XeTeX output … then you may just have to change the name of the font, which may be outdated in the example you got. I modified your example like this (with comments removed): \documentclass{article} \usepackage{fontspec} \setmainfont[Script=Devanagari]{Lohit-Devanagari} % Hindi -> Devanagari ...
7
Honest Answer: The answer is straightforward. Because I, the package author, did not have any sensible use case in mind, when writing the macro handling. Explanation: Because the mhchem syntax is so different from normal LaTeX, I have to parse it before any inner expansion. So I have to take care of macros early on and decide how to handle them. I had some ...
4
Provide the italic font to fontspec with its key-value option syntax. Note the reversed order of font-then-options and the fact that TeX ligatures (--/---/curly typographers' quotes) are set up automatically in the latest version of fontspec: \documentclass{article} \usepackage{fontspec} \setmainfont{Helvetica Neue Light}[ ItalicFont=Helvetica Neue Light ...
4
CW from the comments: luaotfload normalizes all names supplied by the font. Apart from lowercasing you can also leave out spaces if you wish. So, for XeLaTeX, spacing and casing are significant, while for LuaLaTeX, they are not. This is not engine-dependent, just how luaotfload (and, incidentally, ConTeXt) treat things. The rationale is simple: ...
0
It looks like mhchem itself uses the math environment and that the macros are somehow not parsed in the same way as normal text. A test of this is to use some kind of math-exclusive command in the macro (like \mathcal{X}), which doesn't produce an error. A clumsy fix (if one is needed) is to just use the \text{} command inside the macros: ...
0
Try fontspec with "Source Sans Pro" (note the space between Sans and Pro). XeTeX is pretty strict when entering font names (LuaTeX doesn't seem to care though).
-1
Too long for a comment: I had a similar problem recently with Source Sans Pro in MikTeX (in TeXLive it worked right away). It is one of those cases in which I tried many things until it worked. So I don't know exactly what did the trick: Install, uninstall, install sourcesans* related packages, both as normal user and as administrator. Reinstalled MikTeX ...
1
In a current miktex and texlive 2014 one can get the search path for font feature files with kpsewhich --show-path="font feature files" and this gives pathes of the type <texmf-tree>/fonts/fea. I tested it with both systems and it worked fine after I updated the fndb/ls-R.
2
Let me present a small snippet in LuaTeX. We pass an argument and selected letters to the Lua core, where the snippet decomposes it at a character level and wraps the preselected letters only by the \malbf command. Words to be marked are tagged, I didn't expect a TeX command, I presumed plain text. I enclose a small example with English (requested letters ...
1
Another much easier solution is: don't use LaTeX in Matlab 2014b to label your axis. For example, you cannot have any thing like x_1^*. If you use only plain text in Matlab, then when you save the file to SVG, the text will be kept. You can then edit the .pdf_tex file generated by Inkscape to show the correct label. This is of course due to the bug of the ...
4
When you are using XeTeX then your task is exactly solved by XeTeX primitives \XeTeXcharclass and \XeTeXinterchartoks. Read the manual of XeTeX. For letters 'a', 'e', 'i', 'o', 'u' the setting looks like: \newXeTeXintercharclass \mycharclassbf \XeTeXcharclass \a =\mycharclassbf \XeTeXcharclass \e =\mycharclassbf \XeTeXcharclass `\i =\mycharclassbf ...
1
As pointed out by Robert in the comments, the problem has nothing to do with Microtype. But by activating Microtype or any other font feature by \setmainfont[Microtype]{Latin Modern Roman} the default font settings get overwritten. Fonts which have a special subfont for smallcaps may not work afterwards. By reinitializing the smallcaps subfont it will ...
4
\text.... commands escape to text mode so use the current text font settings. Math fonts do not change according to the local context and here you want the math monospace font so use \mathtt{MinimumPolynomial}
3
From the comments, I deduce that TeXShop is not yet configured to UTF-8 text encoding. To do that, open the Preferences of TeXShop, select the “Document” panel (the first panel starting from the left), and select Unicode (UTF-8) in the Encoding section, like here (French version of TeXShop, but it should look close in other languages): After this change, ...
12
I don't think it is reasonable to expect a full list to be created and maintained here. I think it is better to explain how to figure out whether fonts have this support so that users can determine the current answer for a specific font for themselves. If nothing else, this will enable people to check whether an answer here is still correct. Thérèse is ...
0
I think I got it: just use a \endofdump to indicate the end of the precompiled format slightly before \begin{document}; and then insert the \AtBeginDvi{\input{zhwinfonts}} line after it. The preamble thus becomes: \RequirePackage{filecontents} % %% NOTE: filecontents cannot be active when -ini runs; must be commented! % %% NOTE: cannot use just {{文章}}, ...
12
The package substitutefont is what you need. \documentclass{article} \usepackage[utf8]{inputenc} \usepackage[polutonikogreek,english]{babel} \usepackage{substitutefont} \substitutefont{LGR}{\rmdefault}{bodoni} \begin{document} This is English \textgreek{Πάτερ ἡμῶν ὁ ἐν τοῖς οὐρανοῖς·} \end{document} The only difficult bit, after having realized that ...
3
You should have a \lstset instruction in your preamble: \documentclass [11pt] {article} \usepackage{graphicx} \usepackage{enumitem} \usepackage{url} \usepackage{epstopdf} \usepackage{listings} %\input std-defs %\input EECE2323-header \lstset{ basicstyle=\ttfamily, columns=fullflexible, } \begin{document} %\noindent %\lab{3}{LAB3 - Arithmetic and ...
5
unicode-math makes only sense if you also have a sensible "unicode math" font that you want to use as base math font. There is no "cm bright math" (sans serif math fonts are sparse ...). So it is better to use for the math setup the non-unicode way: \documentclass[a4paper,12pt]{article} \usepackage{cmbright} \usepackage{amsmath,amssymb} ...
4
Version 2 After reading a recommendation/comment by Khaled Hosny I've changed the core of the snippet, so it's switching font only when needed. There is no font change on purpose if a character is a space. In the previous version, it wrapped every single character (not recommended in general as it breaks up ligatures, kerning pairs and probably other ...
1
The problem is that \unslant put the unslanted character on the right edge of the space originally needed for the slanted version. The following variation centers the unspanted character in the same space. \documentclass{article} \usepackage{mathtools} \newsavebox{\foobox} \newlength{\foodim} \newcommand{\slantbox}[2][0]{\mbox{% ...
5
Loading a non existent language is useless. Just add a font family for Ethiopic characters (I used Kefa, use whatever you like). \documentclass[a4paper]{scrartcl} \usepackage{fontspec} \usepackage{polyglossia} \usepackage[Latin,Ethiopic]{ucharclasses} \setmainlanguage{english} \newfontfamily{\tigrinyafont}[Script=Ethiopic, Scale=1]{Kefa} ...
5
The defaults in the format are from fontmath.ltx \DeclareSymbolFont{operators} {OT1}{cmr} {m}{n} \DeclareSymbolFont{letters} {OML}{cmm} {m}{it} \DeclareSymbolFont{symbols} {OMS}{cmsy}{m}{n} \DeclareSymbolFont{largesymbols}{OMX}{cmex}{m}{n} \SetSymbolFont{operators}{bold}{OT1}{cmr} {bx}{n} \SetSymbolFont{letters} {bold}{OML}{cmm} {b}{it} ...
1
Actually you are simply lucky that the \setbeamercolor does anything useful. beamer hooks in the \newblock command and this will in most cases not work with biblatex but only with manual bibliographies. Use the biblatex tools like e.g. \DeclareFieldFormat{title}{{\scshape\tiny #1}}
1
I managed to make it work with the following settings: Created file in WIN1251 encoding with the following contents: \font\rm=larm1095 \font\bf=labx1095 \beginsection Что сказал медведь \rm Выпей водки и сыграй на балалайке, Иван \bye Compiled it with pdftex. I used this answer by David Carlisle in the neighbour thread to determine the font, which ...
7
You can use shrink tight,extrude by=2pt in this way \newtcbox{\keywordmin}{on line, fontupper=\scriptsize\ttfamily, boxrule=0.5pt,arc=2pt,coltext=tangoBlack1,colback=tangoGrey1,colframe=tangoGrey1, boxsep=0pt,shrink tight,extrude by=2pt} Here shrink tight makes the total colored box to be shrunk to the dimensions of the upper part while extrude by = 2pt ...
3
This is not a bug but a feature. Publishers' styles, unlike generic article etc , usually do not allow the users to change body fonts. All papers in the given journal must have matching look. I usually add to my classes commissioned by publishers some warnings when the class sees an attempt to invoke options like 12pt: "please do not do this". Full ...
5
You have to compare the current value of \f@family and of \sfdefault: if they match, then you're typesetting in sans serif type. Basically you do \ifnum\pdf@strcmp{\f@family}{\sfdefault}=\z@ <we are using sans serif> \else <we are not using sans serif> \fi Example: \usepackage{pdftexcmds} \makeatletter \DeclareRobustCommand{\film}[1]{% ...
|
http://www.ams.org/mathscinet-getitem?mr=162909
|
MathSciNet bibliographic data MR162909 28.70 (41.00) Timan, A. F. The order of growth of $\varepsilon$-entropy of spaces of real continuous functionals defined on a connected compactum. (Russian) Uspehi Mat. Nauk 19 (1964), no. 1 (115), 173–177. Links to the journal or article are not yet available.
For users without a MathSciNet license, Relay Station allows linking from MR numbers in online mathematical literature directly to electronic journals and original articles. Subscribers receive the added value of full MathSciNet reviews.
American Mathematical Society 201 Charles Street Providence, RI 02904-6248 USA
|
https://gmatclub.com/forum/in-a-sequence-1-2-4-8-16-32-each-term-after-the-first-is-twi-106322.html
|
# In a sequence 1, 2, 4, 8, 16, 32, ... each term after the first is twice the previous term
Manager
Joined: 28 Aug 2010
Posts: 172
In a sequence 1, 2, 4, 8, 16, 32, ... each term after the first is twice the previous term
15 Dec 2010, 11:08
In a sequence 1, 2, 4, 8, 16, 32, ... each term after the first is twice the previous term. What is the sum of the 16th, 17th and 18th terms in the sequence?
A. 2^18
B. 3(2^17)
C. 7(2^16)
D. 3(2^16)
E. 7(2^15)
Could someone tell me the basic formula for handling geometric series? Thanks.
Math Expert
Joined: 02 Sep 2009
Posts: 53063
Re: In a sequence 1, 2, 4, 8, 16, 32, ... each term after the first is twice the previous term
15 Dec 2010, 11:30
ajit257 wrote:
In a sequence 1, 2, 4, 8, 16, 32, ... each term after the first is twice the previous term. What is the sum of the 16th, 17th and 18th terms in the sequence?
a. 2^18
b. 3(2^17)
c. 7(2^16)
d. 3(2^16)
e. 7(2^15)
Could someone tell me the basic formula for handling geometric series? Thanks.
Given:
$$a_1=2^0=1$$;
$$a_2=2^1=2$$;
$$a_3=2^2=4$$;
...
$$a_n=2^{n-1}$$;
Thus $$a_{16}+a_{17}+a_{18}=2^{15}+2^{16}+2^{17}=2^{15}(1+2+4)=7*2^{15}$$.
So you don't actually need the geometric series formula.
But still, if you are interested:
Sum of the first $n$ terms of a geometric progression is given by $$sum=\frac{b*(r^{n}-1)}{r-1}$$, where $b$ is the first term, $n$ the number of terms, and $r\neq{1}$ the common ratio.
Sum of an infinite geometric progression with common ratio $|r|<1$ is $$sum=\frac{b}{1-r}$$, where $b$ is the first term.
Hope it helps.
_________________
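Both facts above are easy to machine-check; a minimal sketch (my addition, not from the thread):

```python
# a_n = 2**(n - 1), so a_16 + a_17 + a_18 = 7 * 2**15 (answer E); the finite
# geometric-sum formula b*(r**n - 1)/(r - 1) reproduces the same value.
terms = [2 ** (n - 1) for n in range(1, 19)]
assert terms[15] + terms[16] + terms[17] == 7 * 2 ** 15

b, r, n = 2 ** 15, 2, 3            # treat a_16..a_18 as a 3-term progression
assert b * (r ** n - 1) // (r - 1) == 7 * 2 ** 15
print(7 * 2 ** 15)                 # 229376
```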
Director
Joined: 03 Sep 2006
Posts: 778
Re: In a sequence 1, 2, 4, 8, 16, 32, ... each term after the first is twice the previous term
15 Dec 2010, 20:21
Given:
$$a_1=2^0=1$$;
$$a_2=2^1=2$$;
$$a_3=2^2=4$$;
...
$$a_n=2^{n-1}$$;
Thus $$a_{16}+a_{17}+a_{18}=2^{15}+2^{16}+2^{17}=2^{15}(1+2+4)=7*2^{15}$$.
So you don't actually need the geometric series formula.
Thanks very much! This is an excellent approach.
Manager
Joined: 21 Oct 2013
Posts: 185
Location: Germany
GMAT 1: 660 Q45 V36
GPA: 3.51
Re: In a sequence 1, 2, 4, 8, 16, 32, ... each term after the first is twice the previous term
18 Jun 2014, 03:23
16th term = 2^15 (since 2^0 = 1). Hence we need 2^15+2^16+2^17.
Now take smaller numbers: 2^2 + 2^3 + 2^4 = 28 = 7*(2^2), i.e. 7 times the first term; hence 7*(2^15) will be right. E.
Intern
Joined: 20 May 2014
Posts: 32
Re: In a sequence 1, 2, 4, 8, 16, 32, ... each term after the first is twice the previous term
02 Jul 2014, 17:54
I don't understand where the 2^16 and 2^17 go, and why a16 + a17 + a18 = 2^15 + 2^16 + 2^17.
Note: Sorry, I can't do the subscripts for the a's.
Veritas Prep GMAT Instructor
Joined: 16 Oct 2010
Posts: 8891
Location: Pune, India
Re: In a sequence 1, 2, 4, 8, 16, 32, ... each term after the first is twice the previous term
02 Jul 2014, 21:49
sagnik2422 wrote:
I don't understand where the 2^16 and 2^17 go. and why is a16 + a17 + a18 = 2^15 + 2^16 + 2^17
Note : Sorry I can't do the subscripts for the a's
1st term: $$1 = 2^0$$
2nd term: $$2^1$$
3rd term: $$2^2$$
4th term: $$2^3$$
5th term: $$2^4$$
So looking at the pattern, what will be the 16th term? It will be $$2^{15}$$
What about the 17th term? $$2^{16}$$
What about the 18th term? $$2^{17}$$
When you add them, you get $$2^{15} + 2^{16} + 2^{17}$$
Now you take $$2^{15}$$ common from the 3 terms. You are left with
$$2^{15}* (1 + 2 + 2^2) = 2^{15}*7$$
Note that $$2^{16}$$ has 16 2s. When you take out 15 2s, you are left with a single 2. Similarly, $$2^{17}$$ has 17 2s. When you take out 15 2s, you are left with two 2s i.e. $$2^2$$
_________________
Karishma
Veritas Prep GMAT Instructor
Intern
Joined: 22 Jul 2016
Posts: 23
Re: In a sequence 1, 2, 4, 8, 16, 32, ... each term after the first is twice the previous term
03 Jan 2017, 10:11
ajit257 wrote:
In a sequence 1, 2, 4, 8, 16, 32, ... each term after the first is twice the previous term. What is the sum of the 16th, 17th and 18th terms in the sequence?
A. 2^18
B. 3(2^17)
C. 7(2^16)
D. 3(2^16)
E. 7(2^15)
Could someone tell me the basic formula for handling geometric series? Thanks.
Pattern:
the 1st term through the 6th term, ...
can be written as:
1, (2), (2*2), (2*2*2), (2*2*2*2), (2*2*2*2*2), ...
which again can be written as:
1, 2^1, 2^2, 2^3, 2^4, 2^5, ...
Therefore ,
16th term : 2^15 ---(1)
17th term : 2^16 ---(2)
18th term : 2^17 ---(3)
2^15 + 2^16 + 2^17 = 2^15(1+ 2^1 + 2^2) = 2^15 ( 1+2+4) = 2^15 (7)
Ans : E
CEO
Joined: 11 Sep 2015
Posts: 3446
Re: In a sequence 1, 2, 4, 8, 16, 32, ... each term after the first is twice the previous term
15 Dec 2017, 08:07
ajit257 wrote:
In a sequence 1, 2, 4, 8, 16, 32, ... each term after the first is twice the previous term. What is the sum of the 16th, 17th and 18th terms in the sequence?
A. 2^18
B. 3(2^17)
C. 7(2^16)
D. 3(2^16)
E. 7(2^15)
Could someone tell me the basic formula for handling geometric series? Thanks.
First notice the PATTERN:
term_1 = 1 (aka 2^0)
term_2 = 2 (aka 2^1)
term_3 = 4 (aka 2^2)
term_4 = 8 (aka 2^3)
term_5 = 16 (aka 2^4)
.
.
.
Notice that the exponent is 1 LESS THAN the term number.
So, term_16 = 2^15
term_17 = 2^16
term_18 = 2^17
We want to find the sum 2^15 + 2^16 + 2^17
We can do some factoring: 2^15 + 2^16 + 2^17 = 2^15(1 + 2^1 + 2^2)
= 2^15(1 + 2 + 4)
= 2^15(7)
= E
_________________
Test confidently with gmatprepnow.com
|
https://physics.stackexchange.com/questions/92415/evaluating-commutator-of-operatornamesignx-operatornamesignp
|
# Evaluating commutator of $[\operatorname{sign}(X),\, \operatorname{sign}(P)]$
I wish to evaluate the following commutator: $[\operatorname{sign}(X),\, \operatorname{sign}(P)]$. Is there a general method for evaluating $[\operatorname{f}(X), \operatorname{f}(P)]$? I thought of a Taylor expansion but $\operatorname{sign}(x)$ is discontinuous on $x=0$. How would you evaluate this commutator?
• This seems better suited for Math.SE. – Brandon Enright Jan 5 '14 at 0:51
• How are you defining the sign of an operator? – doetoe Jan 5 '14 at 1:12
• @BrandonEnright I don't think so: we're meant to interpret it as finding a commutator given the QM CCR, not how to do it for a general commutation - at least I think so. – WetSavannaAnimal Jan 5 '14 at 8:31
• @doetoe As the function $sgn$ is Borel measurable, the sign of a self-adjoint operator $A$ is an operator defined in terms of its spectral measure adopting the standard procedure of Borel functional calculus: $$sgn(A)= \int_{\sigma(A)} sgn(\lambda) dP^{(A)}(\lambda)\:.$$ Since $A$ is self-adjoint and thus densely-defined and closed, it is possible to prove that it is the partial isometry obtained from the polar decomposition of $A$: $$A =J |A|$$ where, in fact, $J = sgn(A)$. – Valter Moretti Jan 5 '14 at 16:16
• Thanks VM9, I found a concise reference at planetmath.org/spectralmeasure. I guess this also points at an answer to the OP's question: you could evaluate the commutator $[f(A),f(B)]$ loosely speaking by integrating weighted commutators of projection operators over the spectra of $A$ and $B$. – doetoe Jan 5 '14 at 23:56
I) We start from the canonical commutation relation
$$\tag{1} [\hat{X},\hat{P}]~=~i\hbar ~{\bf 1}.$$
We can imitate the Dirac delta function and the signum function via the following integral representations$^1$
$$\tag{2} \delta(\hat{X})~=~ \int_{\mathbb{R}} \! \frac{{\rm d}p}{2\pi\hbar} \exp\left(\frac{p\hat{X}}{i\hbar}\right), \qquad \delta(\hat{P})~=~ \int_{\mathbb{R}} \! \frac{{\rm d}x}{2\pi\hbar} \exp\left(\frac{i x\hat{P}}{\hbar}\right),$$
$$\tag{3} {\rm sgn}(\hat{X})~=~ \int_{\mathbb{R}} \! \frac{i{\rm d}p}{\pi p} \exp\left(\frac{p\hat{X}}{i\hbar}\right), \qquad {\rm sgn}(\hat{P})~=~ \int_{\mathbb{R}} \! \frac{{\rm d}x}{i\pi x} \exp\left(\frac{ix\hat{P}}{\hbar}\right).$$
The sought-for commutator can e.g. be written in $\hat{X}\hat{P}$-ordered form
$$\tag{4} [{\rm sgn}(\hat{X}),{\rm sgn}(\hat{P})] ~=~ \iint_{\mathbb{R}^2} \! \frac{{\rm d}x~{\rm d}p}{\pi^2 xp} \left[1-\exp\left(\frac{px}{i\hbar}\right)\right] \exp\left(\frac{p\hat{X}}{i\hbar}\right) \exp\left(\frac{ix\hat{P}}{\hbar}\right).$$
In eq. (4) we have used the following truncated Baker-Campbell-Hausdorff formula
$$\tag{5} e^{\hat{A}}e^{\hat{B}} ~=~e^{[\hat{A},\hat{B}]}e^{\hat{B}}e^{\hat{A}},$$
which holds if the commutator $[\hat{A},\hat{B}]$ commutes with both the operators $\hat{A}$ and $\hat{B}$.
II) On a wavefunction $\psi(x)=\langle x |\psi\rangle$ in the Schrödinger position representation,
$$\tag{6} \hat{X}~=~x, \qquad \hat{P}~=~\frac{\hbar}{i}\frac{\partial}{\partial x},$$
we have
$$\tag{7} \langle x |{\rm sgn}(\hat{P}) |\psi\rangle~=~ \int_{\mathbb{R}} \! \frac{{\rm d}y}{i\pi y}\langle x+y |\psi\rangle,$$
and therefore the matrix element of the sought-for commutator becomes
$$\tag{8} \langle x |[{\rm sgn}(\hat{X}),{\rm sgn}(\hat{P})] |\psi\rangle ~=~ \int_{\mathbb{R}} \! \frac{{\rm d}y}{i\pi y}\left({\rm sgn}(x)-{\rm sgn}(x+y)\right)\psi(x+y).$$
--
$^1$ The Cauchy principal value is implicitly assumed in pertinent places.
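To make eq. (8) concrete, here is a minimal numerical sketch (my addition, not from the answer): on a periodic grid, ${\rm sgn}(\hat{X})$ is pointwise multiplication by ${\rm sgn}(x)$, while ${\rm sgn}(\hat{P})$ is implemented spectrally via the FFT, i.e. as $\mathcal{F}^{-1}\,{\rm sgn}(p)\,\mathcal{F}$; the grid size and the Gaussian test state are arbitrary choices.

```python
import numpy as np

N, L = 1024, 40.0                        # grid points and box size (arbitrary)
x = (np.arange(N) - N // 2) * (L / N)    # position grid centred on 0
p = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # matching momenta (hbar = 1)

def sgn_X(psi):
    return np.sign(x) * psi              # multiplication by sgn(x) in position space

def sgn_P(psi):
    return np.fft.ifft(np.sign(p) * np.fft.fft(psi))  # sgn(p) in momentum space

psi = np.exp(-(x - 1.0) ** 2)            # smooth, unnormalised test state
comm = sgn_X(sgn_P(psi)) - sgn_P(sgn_X(psi))

# The result is bounded and concentrated near x = 0, where the kernel
# sgn(x) - sgn(x+y) of eq. (8) is nonzero.
print(np.abs(comm).max())
```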
• Could you please explain where the interpretation of ${\rm sgn}(\hat{p})$ came from. I mean, I know you seem to have interpreted it as the multiplication operator ${\rm sgn}(\hat{p})$ in momentum co-ordinates and then Fourier-transformed back to the $P\, 1/x$ in position co-ordinates - am I right? So there seems to be a "known" or "standard" way to interpret ${\rm sgn}$ - it isn't obvious from the question - so what kind of applications does this come up in? If this is too complicated for one line, I'll ask it as a new question. – WetSavannaAnimal Jan 5 '14 at 8:23
• I updated the answer. – Qmechanic Jan 5 '14 at 14:59
Here is my answer (I have had time to answer only today). First of all, in general, if $A$ is a self-adjoint operator in the Hilbert space $H$ with spectrum $\sigma(A)\subset R$ (actually a closed normal operator would be enough), and $f: \sigma(A) \to C$ is a Borel measurable function (so for instance continuous up to a finite number of points would be OK), $f(A)$ is defined as: $$f(A) := \int_{\sigma(A)} f(\lambda) dP^{(A)}(\lambda)\:,\qquad (1)$$ where $\{P^{(A)}_E\}_{E \in {\cal B}(R)}$ is the so-called spectral measure of $A$ (for instance see http://planetmath.org/spectralmeasure). The $P_E$ are orthogonal projectors labelled by Borel sets $E$. Actually it turns out that $P_E=0$ if $E\cap \sigma(A)= \emptyset$, in this sense the measure is concentrated on the spectrum of $A$. Physically speaking $\sigma(A)$ is the set of the values that the observable $A$ can assume.
The fact that $A$ is self-adjoint guarantees the existence of the above mentioned notions and the feasibility of the construction I am going to summarize.
The integral in (1) is defined in a way similar to that used for Riemann or Lebesgue integrals, first defining the integral of a function $s$ attaining a finite number of values $s_1,\ldots, s_n$ on corresponding sets $E_1,\ldots, E_n$: $$S(A) := \int_{\sigma(A)} s(\lambda) dP^{(A)}(\lambda) := \sum_{i=1}^n s_i P_{E_i} \qquad (1)'$$ and then taking the limit over a sequence of such functions $s_j$ tending point-wise to $f$ as $j \to +\infty$: $$\int_{\sigma(A)} f(\lambda) dP^{(A)}(\lambda) \psi := \lim_{j\to +\infty} \int_{\sigma(A)} s_j(\lambda) dP^{(A)}(\lambda)\psi\:. \quad (2)$$ The notion of convergence is that of the Hilbert space of the theory. The given definition makes precise the domain of $f(A)$: it is given by the vectors $\psi \in H$ such that the limit in (2) exists.
It is worth stressing that:
(a) the operator $f(A)$ is bounded (that is continuous) and defined on the whole Hilbert space, if the function $f$ is bounded over $\sigma(A)$;
(b) it holds: $$A = \int_{\sigma(A)} \lambda dP^{(A)}(\lambda)$$ and this identity completely determines $\{P_E^{(A)}\}_{E\in {\cal B}(R)}$ for a given self-adjoint operator $A$. It also follows, taking (a) into account, that $A$ is bounded if (and only if) the map $\lambda \mapsto \lambda$ is bounded over $\sigma(A)$, which in turn means that $\sigma(A)$ is bounded.
To answer the general question of the OP: the definition of $f(A)$ given above is the one to use to compute things like $[f(X), f(P)]$.
If $f$ is not bounded the domains of $f(X)$ and $f(P)$ are not the whole Hilbert space, and thus great care has to be used in computing the commutator above, since it is defined only in a common invariant domain.
However, this is not the case for $f= sgn$, since it is bounded.
Let us pass to the computation of $[sgn(X), sgn(P)]$, which, consequently, is a bounded operator as well.
If $A=X$ (position operator in $L^2(R)$), its spectral measure is quite trivial: $$(P^{(X)}_E \psi)(x):= \chi_{E}(x)\psi(x)\:,\qquad (3)$$ where $\chi_E(x)=1$ if $x\in E$ and $\chi_E(x)=0$ if $x\not \in E$. Consequently, exploiting (1)' (because $sgn$ assumes only three values (respectively $-1$ in $E_1=(-\infty,0)$, $0$ in $E_2= \{0\}$, $1$ in $E_3= (0,+\infty)$), one immediately sees that: $$(sgn(X) \psi)(x) = sgn(x) \psi(x)\:. \qquad (4)$$
We have next to focus on the momentum operator $P$. In the following I will assume $\hbar=1$ for the sake of notational simplicity. Henceforth ${\cal F}: L^2(R) \to L^2(R)$ is the Fourier transform, defined on $L^1$ functions (and then extended by $L^2$ continuity to a unitary map on $L^2$) by the usual integral formula: $${\cal F}: \psi(x) \mapsto \hat{\psi}(p) := \frac{1}{\sqrt{2\pi}}\int_R e^{-ipx} \psi(x) dx\:.$$ With these definitions, it turns out that the spectral measure of $P$ is $$(P^{(P)}_E \psi)(x) := {\cal F}^{-1}\left( \chi_E \cdot \hat{\psi}\right)(x)$$ In other words: in momentum representation, the spectral measure of $P$ exactly coincides with that of $X$ in position representation.
As the spectral theory is "covariant" under unitary transformations, it implies in particular that $sgn(P)$ in momentum representation is again defined as:
$$\left(sgn(P)_{momentum} \hat{\psi}\right)(p) = sgn(p) \hat{\psi}(p)\:,$$ so that, coming back to the position representation: $$sgn(P)\psi = {\cal F}^{-1} \left(sgn(p) \hat{\psi}(p)\right)\:. \quad (5)$$
We are in a position to compute the wanted commutator. I will assume that $\psi \in {\cal S}(R)$, the Schwartz space, because in this case the Fourier transform can be computed as the usual integral, and since that space is dense in $L^2$ the final result can be achieved simply by taking a limit (the commutator being bounded, as stressed above). With that choice of $\psi$ all integrations can be safely swapped. I do not enter into details.
Exploiting (4) and (5) (and interchanging integrals) we have almost immediately:
$$\left(sgn(X)sgn(P) \psi\right)(x)= \int\int \frac{e^{ip(x-y)}}{2\pi} sgn(p) sgn(x) \psi(y) dy dp$$
and
$$\left(sgn(P)sgn(X) \psi\right)(x)= \int\int \frac{e^{ip(x-y)}}{2\pi} sgn(p)sgn(y) \psi(y) dy dp\:.$$
Taking the difference and inserting an $\epsilon$ prescription to separate integrals, we have:
$$\left([sgn(X),sgn(P)]\psi \right)(x) = \lim_{\epsilon \to 0^+}\int \left(\int \frac{e^{ip(x-y) -|p| \epsilon}}{2\pi} sgn(p) dp\right) (sgn(x) -sgn(y)) \psi(y) dy$$
Computing the integral (please check the values of the coefficients) we finally get:
$$\left([sgn(X),sgn(P)]\psi \right)(x) = \lim_{\epsilon \to 0^+}\frac{1}{i\pi}\int_R \frac{(x-y)(sgn(x) -sgn(y))}{(x-y)^2 + \epsilon^2} \psi(y) dy \:.$$
Formally, it is possible to introduce the so-called Cauchy principal value:
$$\frac{1}{2}\frac{(x-y)}{(x-y)^2 + 0^2} = Vp \frac{1}{x-y}$$
so the found identity can be re-arranged in terms of the Cauchy principal value as done in the other answer.
Why is this commutator needed? I would start by trying to evaluate $$\int\!dx'\,dp'\, \langle \phi |\operatorname{sgn}(x) |x'\rangle\langle x'|p'\rangle\langle p'|\operatorname{sgn}(p) |\psi\rangle$$ and then each integral gets split up, e.g.,
$$-\int_{-\infty}^0\!dx'\,dp'\, \langle \phi |x'\rangle\langle x'|p'\rangle\langle p'|\operatorname{sgn}(p) |\psi\rangle + \int_{0}^\infty\!dx'\,dp'\, \langle \phi |x'\rangle\langle x'|p'\rangle\langle p'|\operatorname{sgn}(p) |\psi\rangle$$ and similarly for $p'$, giving you 4 integrals. The other half of the commutator is $$- \int\!dx'\,dp'\, \langle \phi |\operatorname{sgn}(p) |p'\rangle\langle p'|x'\rangle\langle x'|\operatorname{sgn}(x) |\psi\rangle$$
Keep in mind that $\langle p'|x'\rangle = \frac{1}{\sqrt{2\pi\hbar}}e^{-ip'x'/\hbar}$, so that when you subtract the last 4 integrals from the first 4, you will need to relabel a coordinate to be able to cancel things out.
• This is just a suggestion. Don't waste your time with it if it doesn't seem to work out. – lionelbrits Jan 5 '14 at 1:10
|
https://www.sarthaks.com/203078/write-the-distance-of-the-point-3-5-12-from-x-axis
|
Write the distance of the point (3,-5,12) from x-axis.
57.1k views
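For reference (my addition; the scraped page omits the answer): the distance of a point $(x, y, z)$ from the x-axis is $\sqrt{y^2 + z^2}$, so here $$\sqrt{(-5)^2 + 12^2} = \sqrt{169} = 13.$$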
|
https://www.physicsforums.com/threads/a-simple-ish-kinematics-problem.342664/
|
# A Simple-ish Kinematics Problem
1. Oct 4, 2009
### Lancelot59
A long jumper leaves the ground at 45° above the horizontal and lands 9.1 m away. What is her "takeoff" speed?
Well, I have $D_x = V_x t$, but I'm missing some variables, and substitution is failing me. How can I solve this?
2. Oct 4, 2009
### Vykan12
$$P_x(t) = V_0\cos(\theta)\,t$$
You can use that to relate initial velocity to time.
We know at least that gravity acts on the jumper in the downward direction. Assuming there is no air drag or other external force, we have:
$$P_y(t) = V_0\sin(\theta)\,t + \tfrac{1}{2}gt^{2}$$
$P_y(t)$ will be 0 when the person lands, and $g$ is obviously $-9.81$. Thus we have two equations with 2 unknowns, so we can solve for $V_0$.
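A quick worked line (my addition, following the two equations above): setting $P_y(t) = 0$ gives the flight time $t = 2V_0\sin(\theta)/|g|$, so $D_x = V_0\cos(\theta)\,t = V_0^2\sin(2\theta)/|g|$. With $\theta = 45^\circ$ and $D_x = 9.1$ m, $$V_0 = \sqrt{|g|\,D_x} = \sqrt{9.81 \times 9.1} \approx 9.4 \text{ m/s}.$$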
3. Oct 4, 2009
### Lancelot59
I'm not really getting how this comes together.
EDIT: Nevermind, my confusion was over some of my own silliness. I get it now. Thanks for your help!
|
https://indico.cern.ch/event/181055/contributions/308623/
|
Quark Matter 2012
12-18 August 2012
US/Eastern timezone
sPHENIX Jet Reconstruction Performance
16 Aug 2012, 16:00
2h
Poster: Experiment upgrades, new facilities, and instrumentation
Speaker
Anne Sickles (Brookhaven)
Description
Reconstructed jets in heavy ion collisions are a crucial tool for understanding the quark-gluon plasma. The separation of jets from the underlying event is necessary, particularly in central heavy ion collisions, in order to quantify medium modifications of the parton shower. Here, we describe a method for quantifying the underlying event contributions in Au+Au collisions at $\sqrt{s_{NN}}$ = 200 GeV utilizing the HIJING event generator (from arXiv:1203.1353) and show the expected jet reconstruction performance in heavy ion collisions using the proposed sPHENIX upgrade detector.
Primary author
Anne Sickles (Brookhaven)
Poster
|
http://www.physicsforums.com/showthread.php?t=444006
|
## Shear stress in Energy-momentum Stress Tensor
Hi all - first post at PF. As a 'science enthusiast' with no training in the tensor math of GR, I was initially bewildered by the common assertion that the still-hypothetical 'dark energy' would act as a source of 'negative gravity' despite having positive energy density. I finally grasped that pressure in GR acts as a source of gravity 'all by itself' - quite in addition to any hydrostatic energy density associated with that pressure. As 'dark energy' is supposed to exert negative pressure, that term wins out as the source of gravity (space-time curvature).
All well and good, but when visiting the Wikipedia article http://en.wikipedia.org/wiki/Stress-energy_tensor I noticed that energy density and pressure are just some of the terms contributing to the energy-momentum stress tensor $T_{ab}$, which in GR is the total source of gravity. There are also energy flux, momentum flux, and most curiously, shear stress terms (the off-diagonals).
My question is this: what role do shear stresses play as a source of gravity? What seems so strange is that it is well known that shear stress can be resolved into orthogonally acting tensile (negative pressure) and compressive (positive pressure) components of equal amplitude. As the diagonal pressure terms enter in the first power of $p$ only, to my mind shear stress should therefore make NO net contribution! Remember, these stresses are supposed to contribute 'all by themselves' - NOT as shorthand for the associated elastic/hydrostatic energy density. So, are the shear stress terms just 'padding', or can someone explain what a chunk of matter under static shear stress contributes to the gravity of that chunk?
If you take a sufficiently small region of static space-time, it is only the diagonal terms that are needed to calculate the Komar mass, i.e. rho + Px + Py + Pz, or rho + 3P if you have an isotropic gravitating fluid. The Komar mass isn't directly used to calculate the gravitational field, either, but you can get the "far field" at infinity knowing just the Komar mass, which I hope is sufficient for your question.

But I'm not sure you can even have shear stresses in a static space-time; I'd have to think about the issue more. It's possible my calculations reduce to saying that "if the terms are zero, you don't have to worry about them" :-).

You can apply some of the concepts to a stationary space-time, which basically allows constant rotation. (I'm being a bit loose here, but I don't have time to dot all the t's - I mean dot all the i's and cross the t's.) But I'd have to think about how to do it more and I'm out of time at the moment.
Your question is very interesting. I recently started reading this paper, 'Relativistic Elastodynamics' by Michael Wernig-Pichler ( arXiv:gr-qc/0605025v1) but I haven't got far enough to know if it answers your questions.
It's probably not relevant, but the SET of the EM field has spatial off-diagonal elements in the form of the Maxwell stress tensor,
$$\sigma_{ij}=\epsilon_0 E_iE_j + B_iB_j/\mu_0,\ \ \ i\ne j$$
Mentz114: Thanks for the reference. I ploughed through as much of that paper by Wernig-Pichler as I could, but the maths is too dense and the notation too arcane for me to distill anything relevant. I looked at quite a few similar papers but all have a similar style - best left to specialists!

"It's probably not relevant, but the SET of the EM field has spatial off-diagonal elements in the form of the Maxwell stress tensor,"

Actually it seems to be quite relevant. Looking at http://en.wikipedia.org/wiki/Electro...-energy_tensor, the point is made there that the entire EM stress tensor acts as a source in the GR stress tensor. There is evidently a one-to-one correspondence in the terms. In the EM case it becomes clear the off-diagonals are really an 'artifact' of the coordinate orientation. That is, for a uniform field the coordinates can always be chosen so as to have only diagonal terms for a given field (E or B), but not generally both. The same will I think apply in the GR case of uniform stress (tension/compression and/or shear). And since there is really only an 'E' field equivalent, a tensor with zero off-diagonals is always possible. So it seems to be just a 'perspective' thing, so to speak. Am I right? Nonuniform stress distributions are another matter of course.
Well, one of the places the pressure terms make a difference is if you imagine having a relativistic gas enclosed in a sturdy shell. But I'm not sure I want to go into that now; it's interesting but it's probably not the root of the problem, it's a bit of a digression.

I'm not familiar with your identification of the off-diagonal terms of the stress-energy tensor as being shear stress, and perhaps that's the problem. It could be a difference between physics and engineering usage. For instance, if you imagine having a pressurized sphere, and analyze the spherically symmetric problem in cartesian coordinates, you'll see off-diagonal terms in the stress energy tensor at the boundary, where the containing sphere is a shell under tension, but the tension is not along any of the principal axes. But if you analyze the same problem in spherical coordinates, you'll see only diagonal terms.

It seems to me wrong to say that you have shear stress when you analyze the problem in cartesian coordinates and no shear stress when you analyze the problem in spherical coordinates, unless I'm misunderstanding the meaning of shear stress.
Well, one of the places the pressure terms make a difference is if you imagine having a relativistic gas enclosed in a sturdy shell. But I'm not sure I want to go into that now, it's interesting but it's probably not the root of the problem, it's a bit of a digression.
I think you are referring to one of the so-called Tolman Paradoxes - annihilate electron/positron pairs within a perfectly reflecting vessel to create a 'photon gas' and the total mass will apparently double. The resolution is to take into account the negative pressure (tension) within the containing vessel's walls; there is then no net change in gravitational mass. http://arxiv.org/abs/gr-qc/0505040
http://arxiv.org/abs/gr-qc/0510041
I'm not familiar with your identification of the off-diagonal terms of the stress-energy tensor as being shear stress, and perhaps that's the problem. It could be a difference between physics and engineering usage..... It seems to me wrong to say that you have shear stress when you analyze the problem in cartesian coordinates and no shear stress when you analyze the problem in spherical coordinates unless I'm misunderstanding the meaning of shear stress.
Just a moment of madness on my part. A little more thought and it became obvious that you cannot eliminate the off-diagonals merely by a coordinate rotation (nothing to do with changing from Cartesian to spherical - stick with Cartesian!). All you achieve is redistributing among the diagonals, and likewise redistributing among the off-diagonals. That is, as long as the principal stresses (assumed to lie along the diagonals) are unequal, there MUST be non-zero off-diagonal terms present in a generic rotated frame. Further, the analogy with EM breaks down early on, as even for the simplest case of a single uniform E or B field, the Maxwell 'stresses' are always biaxial - tension along the field lines, pressure orthogonal to the lines. Uni-axial mechanical stress in a solid by contrast (e.g. tension in a rod) has only a single component.
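The 'redistribution under rotation' is easy to check numerically (again just a sketch of mine, not from the thread): rotate a diagonal stress with unequal principal values and off-diagonal terms appear, vanishing only when the axes line up with the principal directions.

import numpy as np

s1, s2 = 10.0, 4.0                      # unequal principal stresses
S = np.diag([s1, s2])

theta = np.radians(30.0)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

S_rot = R @ S @ R.T                     # same stress, rotated frame
print(S_rot)  # off-diagonal = (s1 - s2)*sin(theta)*cos(theta), nonzero here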
So the original problem remains - assuming the inherently non-zero off-diagonal terms in $T_{ab}$ for a solid under static shear stress make no contribution, how is that arrived at on a consistent mathematical basis?
Again - it is emphasized we are dealing with 'stress only' contributions proportional to the first power in p or tau (shear), NOT the strain energies associated, which for linear elasticity go as $p^2$ etc. and are by definition part of the 'time-time' component $T_{00}$.
Quote by Q-reeus So the original problem remains - assuming the inherently non-zero off-diagonal terms in $T_{ab}$ for a solid under static shear stress make no contribution, how is that arrived at on a consistent mathematical basis?
Suppose we have a cubic lump of material as the source of a gravitational field. We can introduce shear in the material by using a clamp that squeezes the cube so two opposite faces 'slide'. Using an argument raised in an earlier post - would the shear stress contribution be cancelled by the stress in the clamp. In this way nature will ensure that shear stress does not contribute to the field.
So, for the cube we will have off-diagonal elements that are exactly cancelled by those in the clamp. I.e. $T^F_{ab}+T^C_{ab}$ is diagonal.
I can't think of a way that shear stress could be induced other than by static forces, which cancel the shear stress.
Suppose we have a cubic lump of material as the source of a gravitational field. We can introduce shear in the material by using a clamp that squeezes the cube so two opposite faces 'slide'. Using an argument raised in an earlier post - would the shear stress contribution be cancelled by the stress in the clamp. In this way nature will ensure that shear stress does not contribute to the field. So, for the cube we will have off-diagonal elements that are exactly cancelled by those in the clamp. I.e. $T^F_{ab}+T^C_{ab}$ is diagonal.
Without a picture of your arrangement it is hard to know how the shear is generated. I agree with your general direction here though - by Newton's law of equal and opposite reaction, internal forces and thus integrated stresses must all balance out to zero overall (net 'monopole' contribution = 0). This does not per se preclude higher moment terms ('dipole', 'quadrupole', etc.); the very notion of a stress distribution implies a separation between stressed regions. Symmetry considerations alone can sometimes cancel all moment contributions. An example would be the typical scenario of a spherical pressure vessel used in the Tolman paradox cited in a previous posting.

While harder to see, 'dipole' terms are probably not possible, though I thought briefly they could be. Take the case of a G-clamp screwed down on itself. The screw side will be under compression. To balance this, the opposite side must take on a tensile stress, conveyed via shear and bending moments in the top and bottom sides. At first this looks like a 'dipole' distribution, but that neglects the existence of a substantial bending moment in the tensioned side. Taken together, there is no stress dipole. By the same token, quadrupole and higher moments are allowed (think of two G-clamps welded back-to-back).
In the case of shear the resolved orthogonal tensile/compressive stresses are spatially coincident, hence no contribution of any kind - zero 'monopole' or any higher terms seem possible. As no-one has come forward to say otherwise, I proceed on that basis.
I can't think of a way that shear stress could be induced other than by static forces, which cancel the shear stress.
Think of a motor spinning up a flywheel via a torsion shaft. Or just a shaft by itself undergoing free-standing torsional oscillations (apply equal and opposite torque to each end, then suddenly let go). Or just whack a rigid body (eg. a bell) with a hammer and set in train a huge number of vibrating modes that would include shear as well as tension/compression waves.
There is I think something else hidden in the foregoing that suggests the possibility of a 'genuine paradox', but that should be for a new topic.
Quote by Q-reeus In the case of shear the resolved orthogonal tensile/compressive stresses are spatially coincident, hence no contribution of any kind - zero 'monopole' or any higher terms seem possible. As no-one has come forward to say otherwise, I proceed on that basis.
I can't argue with that.
Think of a motor spinning up a flywheel via a torsion shaft. Or just a shaft by itself undergoing free-standing torsional oscillations (apply equal and opposite torque to each end, then suddenly let go). Or just whack a rigid body (eg. a bell) with a hammer and set in train a huge number of vibrating modes that would include shear as well as tension/compression waves.
That's interesting. Certainly while torque is present a rotating disc will get concentric circles of equal shear (ideally), but what gets me is that when I construct the EMT of a spinning disc using basic mechanics or field theory, I always get off-diagonal terms, even in equilibrium (i.e. free spinning). I can understand some tension along the diagonals but not this 'shear'.
As for ringing bells, etc, I suppose two plane compression waves crossing paths would generate shear.
This is food for thought, so I'm glad you raised it.
There is I think something else hidden in the foregoing that suggests the possibility of a 'genuine paradox',...
I doubt it !
That's interesting. Certainly while torque is present a rotating disc will get concentric circles of equal shear (ideally), but what gets me is that when I construct the EMT of a spinning disc using basic mechanics or field theory, I always get off-diagonal terms, even in equilibrium (i.e. free spinning). I can understand some tension along the diagonals but not this 'shear'.
I failed to elaborate on that one - the focus was meant to be on the pure torsion (shear) in the shaft. The flywheel was intended as just inertial 'ballast' along for the ride, so to speak. But now that you've raised it, it is an interesting digression. I'm very much an 'un-expert' in these things, but here's my take.
Recall that earlier I stated that whenever the diagonal terms are unequal there must be non-zero off-diagonals. Basic physical reasoning suggests the radial and circumferential stresses cannot be uniform throughout the disk. For instance, at the outer rim the radial tension must be zero but the circumferential tension will be substantial. I think it makes sense that radially acting tension in a spinning disc decreases monotonically as a function of radius, while circumferential tension mostly increases. Hence in general the orthogonal components of tension must be different. By superposition you then finish up in general with shear stress plus uniform bi-axial stress plus uni-axial stress.
I doubt it !
Well I intend to pose the problem anyhow and see where the dust settles!
Quote by Q-reeus I think you are referring to one of the so-called Tolman Paradoxes - annihilate electron/positron pairs within a perfectly reflecting vessel to create a 'photon gas' and the total mass will apparently double. The resolution is to take into account the negative pressure (tension) within the containing vessel's walls; there is then no net change in gravitational mass. http://arxiv.org/abs/gr-qc/0505040 http://arxiv.org/abs/gr-qc/0510041
Exactly what I had in mind - thanks for the references! It makes it a lot more concise to talk about.
Note that even without thinking of it as a paradox, the internal field in this case is affected by the pressure. "Field" is a vague term; in this case I'm thinking of what an accelerometer would measure held just inside the shell, i.e. I'm thinking of "field" in a quasi-Newtonian sense. The "field" or acceleration is twice what you would expect from a Newtonian analysis. So in this case the pressure terms make an observable difference to the field configuration, though they don't make a difference to the total mass or to the field at infinity.
So the original problem remains - assuming the inherently non-zero off-diagonal terms in $T_{ab}$ for a solid under static shear stress make no contribution, how is that arrived at on a consistent mathematical basis? Again - it is emphasized we are dealing with 'stress only' contributions proportional to the first power in p or tau (shear), NOT the strain energies associated, which for linear elasticity go as $p^2$ etc. and are by definition part of the 'time-time' component $T_{00}$.
Well, one can show that the off-diagonal terms don't contribute to the overall Komar mass; there's an integral for the Komar mass in terms of the stress-energy tensor in Wald. I could dig up the reference; it's mentioned in a recent thread though, complete with the formula.
http://www.physicsforums.com/showpos...1&postcount=19
has the formula, there's some related discussion in previous posts
Also, there's an interesting paper by Baez that shows that the second derivative of the volume of a sphere of coffee grounds is proportional to rho+Px+Py+Pz, i.e. again it doesn't depend on the off-diagonal terms.
I would guess that the off-diagonal terms do have some effect - just not on the mass, or the field at infinity. Also the analysis I have in mind only works for a stationary system (among other reasons, the Komar mass is only defined if you have a time-like Killing vector), I don't have any definite proof of whether they would have an effect on the mass (ADM or Bondi) if you had a time-varying system.
Note that even without thinking of it as a paradox, the internal field in this case is affected by the pressure. "Field" is a vague term; in this case I'm thinking of what an accelerometer would measure held just inside the shell, i.e. I'm thinking of "field" in a quasi-Newtonian sense. The "field" or acceleration is twice what you would expect from a Newtonian analysis. So in this case the pressure terms make an observable difference to the field configuration, though they don't make a difference to the total mass or to the field at infinity.
Just so. While it all balances out formally, I have real difficulty seeing the physical origin of the p contributions to the active gravitational mass $M_a$ in terms of particle dynamics. Take for instance a dilute highly relativistic gas, where nearly all the energy is kinetic. Increased passive gravitational mass $M_p$ (and by direct analogy, inertial mass $M_i$) owing to twice the Newtonian curvature in an applied gravitational field is easy to grasp in principle. Formally $M_a$ follows by appealing to the EP, but whence the physical basis from 'first principles'? I had always assumed it was taken care of by the SR 'gamma factor', but that only accounts for the $T_{00}$ 'rho' part. The 'p' part (diagonals) presumably involves consideration of 'apparent present position' and maybe acceleration terms akin to those used in EM theory. Somewhere I guess direct calculations have been done to show this. The complex balance of kinetic, Coulomb, and QM factors in a solid makes it a much harder task, so I won't go there.
Well, one can show that the off-diagonal terms don't contribute to the overall Komar mass; there's an integral for the Komar mass in terms of the stress-energy tensor in Wald. I could dig up the reference; it's mentioned in a recent thread though, complete with the formula. http://www.physicsforums.com/showpos...1&postcount=19 has the formula, there's some related discussion in previous posts
Thanks for the reference. One part I reproduce here:
The Komar mass integral as usually stated (Wald, pg 289, eq. 11.2.10) is
$$M = 2\int_\Sigma \left(T_{ab} - \tfrac{1}{2}T g_{ab}\right) n^a \xi^b \, dV$$
$n^a$ being a "unit future" normal of the volume, $\xi^b$ being the time-like Killing vector. When the two are aligned, and you have a Minkowski space-time with g_00 = -1, the mass integral basically reduces to one of (rho + Px + Py + Pz)
I have no idea what "When the two are aligned" implies if they are not aligned, but this would not affect the off-diagonals either way?
Also, there's an interesting paper by Baez that shows that the second derivative of the volume of a sphere of coffee grounds is proportional to rho+Px+Py+Pz, i.e. again it doesn't depend on the diagonal terms.
Coffee grinds!!? So Nescafe funded that study!? Seriously, is shear even possible in a pile of coffee grinds? "..second derivative of the volume of a sphere of..." - just guessing but this is about free-fall gravitational collapse of 'dust'?
I would guess that the off-diagonal terms do have some effect - just not on the mass, or the field at infinity. Also the analysis I have in mind only works for a stationary system (among other reasons, the Komar mass is only defined if you have a time-like Killing vector), I don't have any definite proof of whether they would have an effect on the mass (ADM or Bondi) if you had a time-varying system.
I have in mind an interesting system involving motion but doubt there would be any linkage.
I had better add something to my last reply to Mentz114. My statement that circumferential stress would mostly increase as a function of radius may not be accurate. At the disk center (assuming no shaft there to complicate things), 'pure radial' stress is 'pure bi-axial' and can therefore be interpreted as circumferential too. Without going into detailed calculations which for a real solid would need to take into account Poisson's ratio (material dependent!), all I should have ventured was that the two components are in general different at different radii, and thus shear must be present.
Quote by Q-reeus I have real difficulty seeing the physical origin of the p contributions to active gravitational mass $M_a$ in terms of particle dynamics.
I too had a problem with this. In the case of a large gas cloud in near equilibrium, 'pressure' can be defined as the momentum flux passing through 3 orthogonal planes. In terms of particle dynamics this is related to the velocity distribution and mean free path - i.e. the number of collisions and the momentum exchanged per collision. The dimensions of pressure are the same as energy density, which I (perhaps alone) find significant.
There was a frank discussion about this in another topic but I can't track it down right now.
Coffee grinds!!? So Nescafe funded that study!? Seriously, is shear even possible in a pile of coffee grinds? "..second derivative of the volume of a sphere of..." - just guessing but this is about free-fall gravitational collapse of 'dust'?
The coffee grounds are freely-falling non-interacting dust. The term shear is used for velocity fields that are orthogonal to the time direction causing the ball to change shape. Not like shear stress.
If the ball was solid, there would be shear caused by tidal forces and kinematic effects.
all I should have ventured was that the two components are in general different at different radii, and thus shear must be present.
Would this be true for a solid spinning disc with no torque being applied ?
In the case of a large gas cloud in near equilibrium, 'pressure' can be defined as the momentum flux passing through 3 orthogonal planes. In terms of particle dynamics this is related to the velocity distribution and mean free path - i.e. the number of collisions and the momentum exchanged per collision. The dimensions of pressure are the same as energy density, which I (perhaps alone) find significant.
Agreed about the origin of pressure in terms of particle momentum flux, and the dimensional consistency with energy density. My problem is how one gets from first principles the p contribution to $M_a$, over and above the usual relativistic mass term (i.e. $T_{00}$) for a moving particle.
The coffee grounds are freely-falling non-interacting dust. The term shear is used for velocity fields that are orthogonal to the time direction causing the ball to change shape. Not like shear stress. If the ball was solid, there would be shear caused by tidal forces and kinematic effects.
Fair enough. Seems clear the shear terms have quite different application to the normal pressure terms.
Would this be true for a solid spinning disc with no torque being applied ?
That's what I was referring to, which was your own consideration of a freely spinning disk (constant angular velocity). One could think of it this way: divide the disk up into a large series of nested concentric hoops, initially fitting together neatly when at rest. Spin them all up to the same angular velocity, and what would happen? The outer hoops, having more mass per cross-sectional area, and being under greater centripetal acceleration, will expand more than the inner hoops, the almost purely circumferential stress and strains going as $r^2$, where r is the radius of each hoop.

In order to recreate the original solid disk, the gap between each inner and nearest outer hoop can only be closed by pressing in on the outer hoop and out on the inner hoop, then 'gluing' them together. Remove the extra pressures, and it's obvious the end result is a tendency to more equally share the tensile circumferential stresses, while at the same time introducing a radial tension. Near the center, centripetal acceleration is negligible, so the accumulated radial 'tugs' induce uniform bi-axial tension, but out at the rim circumferential stress must dominate. The two terms cannot be equal throughout.

Stop press. Why didn't I think of this before - just Google it and here it is:
http://arxiv.org/abs/physics/0211004
See page 12, eq'ns 92, 93. Terms are defined on p11.
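For anyone who wants to see the unequal components without reading the paper, here is a small numerical sketch using the standard plane-stress textbook formulas for a freely spinning solid disc; the coefficients below are the usual textbook ones and should be checked against eq'ns 92-93 of the reference.

import numpy as np

# Standard plane-stress result for a solid rotating disc:
#   sigma_rr(r)    = (3 + nu)/8 * rho * w**2 * (R**2 - r**2)
#   sigma_theta(r) = rho * w**2 / 8 * ((3 + nu)*R**2 - (1 + 3*nu)*r**2)
rho, w, R, nu = 7800.0, 100.0, 0.5, 0.3   # arbitrary steel-like numbers

r = np.linspace(0.0, R, 6)
sig_rr = (3 + nu) / 8 * rho * w**2 * (R**2 - r**2)
sig_tt = rho * w**2 / 8 * ((3 + nu) * R**2 - (1 + 3 * nu) * r**2)

# At r = 0 the two stresses agree (pure bi-axial tension); at the rim
# sigma_rr -> 0 while sigma_theta stays finite, so away from the centre
# the principal stresses differ and a fixed Cartesian frame sees shear.
print(np.column_stack([r, sig_rr, sig_tt]))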
One thing I should mention - the idea that rho+3P is somehow the "source" of gravity is a very useful simplification, but only for stationary or static systems. If you try to apply it to non-stationary systems ('what happens if you have a box of relativistic gas and it explodes') it will give you misleading nonsense.
And I don't have any clue if there's a deeper reason why the theory has to be that way.
Quote by Q-reeus Thanks for the reference. One part I reproduce here: The Komar mass integral as usually stated (Wald, pg 289, eq. 11.2.10) is $$M = 2\int_\Sigma \left(T_{ab} - \tfrac{1}{2}T g_{ab}\right) n^a \xi^b \, dV$$ $n^a$ being a "unit future" normal of the volume, $\xi^b$ being the time-like Killing vector. When the two are aligned, and you have a Minkowski space-time with g_00 = -1, the mass integral basically reduces to one of (rho + Px + Py + Pz). I have no idea what "When the two are aligned" implies if they are not aligned, but this would not affect the off-diagonals either way?
This involves issues of choice of coordinates - or you might view it as the issue of "being at rest".
If you have a stationary or static system, the metric can be expressed to be independent of time, but just because it can be, it doesn't mean it has to be. We've had some other threads about what it looks like if you don't make that choice, i.e. perhaps you want to choose the time experienced by some moving observer, some observer falling into a black hole, as your coordinate time.
The requirement that your unit future and your timelike Killing vector be aligned says that you are doing the integral "at rest". So your rho and P are measured in the static rest frame, as are your volumes, etc.
Also, the idea of integrating rho+3P to get the mass only works when your system is sufficiently small. For larger systems, g_00 is no longer constant and this has to be taken into account, in order to include gravitational self-binding energy.
Coffee grinds!!? So Nescafe funded that study!? Seriously, is shear even possible in a pile of coffee grinds? "..second derivative of the volume of a sphere of..." - just guessing but this is about free-fall gravitational collapse of 'dust'?
Yes, exactly - "dust".
This involves issues of choice of coordinates - or you might view it as the issue of "being at rest". If you have a stationary or static system, the metric can be expressed to be independent of time, but just because it can be, it doesn't mean it has to be. We've had some other threads about what it looks like if you don't make that choice, i.e. perhaps you want to choose the time experienced by some moving observer, some observer falling into a black hole, as your coordinate time. The requirement that your unit future and your timelike Killing vector be aligned says that you are doing the integral "at rest". So your rho and P are measured in the static rest frame, as are your volumes, etc.
OK - get the drift here, thanks.
Also, the idea of integrating rho+3P to get the mass only works when your system is sufficiently small. For larger systems, g_00 is no longer constant and this has to be taken into account, in order to include gravitational self-binding energy.
Not sure if that last remark was meant to imply 'gravity gravitates'. Seems like a real can of worms; at http://math.ucr.edu/home/baez/physic...energy_gr.html the author states:
"One other complaint about the pseudo-tensors deserves mention. Einstein argued that all energy has mass, and all mass acts gravitationally. Does "gravitational energy" itself act as a source of gravity? Now, the Einstein field equations are
$$G_{\mu\nu} = 8\pi T_{\mu\nu}$$
Here $G_{\mu\nu}$ is the Einstein curvature tensor, which encodes information about the curvature of spacetime, and $T_{\mu\nu}$ is the so-called stress-energy tensor, which we will meet again below. $T_{\mu\nu}$ represents the energy due to matter and electromagnetic fields, but includes NO contribution from "gravitational energy". So one can argue that "gravitational energy" does NOT act as a source of gravity. On the other hand, the Einstein field equations are non-linear; this implies that gravitational waves interact with each other (unlike light waves in Maxwell's (linear) theory). So one can argue that "gravitational energy" IS a source of gravity."
Well, it can't be both at the same time, surely. There are a lot of references to authors claiming that gravity is indeed a source of gravity.
Linked to this is the popular assertion that the total energy of the universe is zero - matter/energy + 'gravity' = 0, championed by respected heavyweights like Lawrence Krauss at http://online.wsj.com/article/SB1000...Tabs%3Darticle:
"The existence of this energy, called dark energy, has another consequence: It changes the picture so that knowing the geometry of the universe is no longer enough to determine its future. While this may be a disappointment, the existence of dark energy and a flat universe has profound implications for those of us who suspected the universe might arise from nothing.
Why? Because if you add up the total energy of a flat universe, the result is precisely zero. How can this be? When you include the effects of gravity, energy comes in two forms. Mass corresponds to positive energy, but the gravitational attraction between massive objects can correspond to negative energy. If the positive energy and the negative gravitational energy of the universe cancel out, we end up in a flat universe."
Others who also write impressive articles online pooh pooh that notion: http://www.science20.com/alpha_meme/...law_questioned
http://www.science20.com/alpha_meme/...holy_cow_urine
Who is right, and why should there be any such contested topics after 95 years of GR?
|
2013-05-20 14:47:24
|
https://crypto.stackexchange.com/questions/96002/why-zk-snarks-are-argument-of-knowledge-if-a-knowledge-extractor-exists/96004#96004
|
# Why are zk-SNARKs Arguments of Knowledge if a Knowledge Extractor exists?
From what I know, proving the existence of a Knowledge Extractor implies perfect soundness.
So why, in zk-SNARKs (and similar), do we talk about Arguments of Knowledge, where the soundness property is only computational (i.e., secure only against computationally bounded provers), if a Knowledge Extractor also exists? Am I missing something? Maybe a Knowledge Extractor can be proven at different "levels" of security (computational, statistical and perfect)? I have never seen that until now though, and I've always seen Knowledge Extractors as something different to prove, and not directly linked to the soundness property, so I can't figure out an answer.
• "On defining proofs of knowledge" by Bellare and Goldreich: to name it "a proof of knowledge" one should present an explicit "extractor" algorithm. Nov 10, 2021 at 19:33
## 1 Answer
Knowledge soundness can indeed be computational or statistical. There are some classical examples, if you want some illustration: the Sigma protocol for correct opening of the Damgard-Fujisaki commitment scheme (a variant of Pedersen over groups of hidden order) is knowledge sound under the RSA assumption (see here). Intuitively, when you go through the proof, this means that your extractor works only if a certain condition is met, and you can show that this condition will always be met, but only if the malicious prover cannot break some hard problem.
SNARKs are an even stranger beast: here, the existence of the efficient extractor itself is essentially the assumption.
So, if you prove unconditionally that there is an extractor, it indeed implies perfect soundness. But if you prove "either there is an extractor or we can break hard problem X", or if the existence of the extractor is actually part of the assumption itself, you clearly don't get perfect soundness as a consequence, only computational soundness.
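For concreteness, the knowledge-soundness definition being appealed to has roughly the following standard shape (my paraphrase, not text from the answer): for every prover $P^*$ there is an efficient extractor $E$ such that

$$\Pr\big[\langle P^*, V\rangle(x) \text{ accepts}\big] - \Pr\big[E^{P^*}(x) = w \text{ with } (x, w) \in R\big] \le \epsilon.$$

If $P^*$ ranges over all provers and $\epsilon$ is negligible unconditionally, knowledge soundness is statistical; if $P^*$ is restricted to polynomial time, or the bound relies on a hardness assumption (as in SNARKs, where the extractor's existence is itself assumed), it is computational.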
• Thank you very much Geoffroy! So, if I understood well, soundness, in addition to its definition, can also be categorized in different ways if an extractor exists, depending on the assumptions made for its existence. Those can be computational or statistical (or, eventually, perfect if it is proven unconditionally). When talking about SNARKs, we assume that an extractor exists for the arguments, and that makes the soundness property computational, meaning an extractor exists but only if the Prover is computationally bounded. Nov 8, 2021 at 20:32
• Yes, there are several dimensions in the flavor of soundness: whether you have "membership soundness" or "knowledge soundness" is one (I usually say "knowledge extractability" in my papers to distinguish it from the usual soundness), and whether you have computational or statistical is another dimension. And yes, the assumption in SNARKs basically says "if the prover is bounded and makes a successful proof, then an extractor exists". Nov 9, 2021 at 14:46
|
2022-05-26 17:48:09
|
https://brickisland.net/DDGFall2017/2017/09/17/slides-exterior-calculus-in-rn/
|
# Slides—Exterior Calculus in $R^n$
Later this week we’ll start talking about exterior calculus, which is a modern language used across differential geometry, mathematical physics, geometric computation… and the rest of our class! :-). Initially this language can look a bit daunting, but by making some connections with familiar ideas from vector calculus (like grad, div, and curl), we’ll see that it’s actually not so bad once you get down to concrete calculations. Slides here:
## 3 thoughts on “Slides—Exterior Calculus in $R^n$”
1. Keenan says:
For those who were curious about the statement that the “boundary of a boundary is empty,” the easiest way to state this fact is using the notions of a homeomorphism and a manifold, which we haven’t yet studied in class (but we will!). In particular, if $\Omega \subset \mathbb{R}^n$ is an $m$-dimensional submanifold of $\mathbb{R}^n$, a point $p \in \Omega$ is on the interior if for a sufficiently small ball $B_\epsilon(p)$ around $p$, $B_{\epsilon}(p) \cap S$ is homeomorphic to the unit open $m$-dimensional ball; it is on the boundary if this same intersection is homeomorphic to a half-ball, i.e., the intersection of the unit ball with a half-space.
1. Yifanh says:
So the ‘boundary operation’ here is creating a sub-manifold in $\mathbb{R}^n$, instead of just extracting a sub-set in $\mathbb{R}^n$, which means the second boundary operation works on the sub-manifold instead of $\mathbb{R}^n$. Am I correct?
Also by $S$ you actually mean $\Omega$, right?
2. apoorva says:
In class we said that we don’t make a distinction between 0-vectors and 0-forms (covectors). Is this because these objects are identical (at least up to isomorphism) no matter what the underlying vector space is? (whereas the set of 1-vectors might not be the same (isomorphic to) as 1-forms in more exotic contexts like function spaces.)
|
2021-06-19 12:42:46
|
https://www.notatee.com/courses/video/exergy-change-of-a-system-exergy-transfer-by-heat-work-and-mass
|
# Exergy Change of a System; Exergy Transfer by Heat, Work and Mass
• Exergy can be transferred to or from a system in three forms: heat, work, and mass flow.
• Exergy transfer by heat:
When the temperature T is constant:
$\quad X_{\text { heat }}=\left(1-\frac{T_{0}}{T}\right) Q \quad(\mathrm{kJ})$
When the temperature T is not constant:
$X_{\mathrm{heat}}=\int\left(1-\frac{T_{0}}{T}\right) \delta Q$
• Exergy Transfer by Work:
$X_{\text { work }}=\left\{\begin{array}{ll}{W-W_{\text { surr }}} & {\text { (for boundary work) }} \\ {W} & {\text { (for other forms of work) }}\end{array}\right.$
where $W_{\text { surr }}=P_{0}\left(V_{2}-V_{1}\right)$, $P_{0}$ is the atmospheric pressure, and $V_{1}$ and $V_{2}$ are the initial and final volumes of the system.
• Exergy transfer by mass:
$\quad X_{\text { mass }}=m \psi$
where $\psi=\left(h-h_{0}\right)-T_{0}\left(s-s_{0}\right)+V^{2} / 2+g z$
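A minimal numeric sketch of the three transfer formulas above (the function names and sample numbers are mine, not from the course notes; T0 and P0 are the dead-state temperature and pressure):

def x_heat_const_T(Q, T, T0):
    """Exergy transfer by heat at constant boundary temperature T."""
    return (1 - T0 / T) * Q

def x_work_boundary(W, P0, V1, V2):
    """Exergy transfer by boundary work: work in excess of W_surr."""
    W_surr = P0 * (V2 - V1)   # work exchanged with the atmosphere
    return W - W_surr

def x_mass(m, h, h0, s, s0, T0, V=0.0, g=9.81, z=0.0):
    """Exergy transfer by mass flow, m * psi."""
    psi = (h - h0) - T0 * (s - s0) + V**2 / 2 + g * z
    return m * psi

# 100 kJ of heat crossing a 600 K boundary, environment at 300 K:
print(x_heat_const_T(Q=100.0, T=600.0, T0=300.0))   # 50.0 kJ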
|
2020-10-25 03:00:39
|
https://researchportal.bath.ac.uk/en/publications/asymptotic-behaviour-of-a-pile-up-of-infinite-walls-of-edge-dislo
|
# Asymptotic behaviour of a pile-up of infinite walls of edge dislocations
Marc Geers, Ron Peerlings, Mark Peletier, Lucia Scardia
Research output: Contribution to journal › Article › peer-review
42 Citations (SciVal)
We consider a system of parallel straight edge dislocations and we analyse its asymptotic behaviour in the limit of many dislocations. The dislocations are represented by points in a plane, and they are arranged in vertical walls; each wall is free to move in the horizontal direction. The system is described by a discrete energy depending on the one-dimensional horizontal positions $x_i > 0$ of the n walls; the energy contains contributions from repulsive pairwise interactions between all walls, a global shear stress forcing the walls to the left, and a pinned wall at x = 0 that prevents the walls from leaving through the left boundary. We study the behaviour of the energy as the number of walls, n, tends to infinity, and characterise this behaviour in terms of Γ-convergence. There are five different cases, depending on the asymptotic behaviour of the single dimensionless parameter $\beta_n$, corresponding to $\beta_n \ll 1/n$, $1/n \ll \beta_n \ll 1$, and $\beta_n \gg 1$, and the two critical regimes $\beta_n \sim 1/n$ and $\beta_n \sim 1$. As a consequence we obtain characterisations of the limiting behaviour of stationary states in each of these five regimes. The results shed new light on the open problem of upscaling large numbers of dislocations. We show how various existing upscaled models arise as special cases of the theorems of this paper. The wide variety of behaviour suggests that upscaled models should incorporate more information than just dislocation densities. This additional information is encoded in the limit of the dimensionless parameter $\beta_n$.
|
2022-09-24 19:56:03
|
https://people.maths.bris.ac.uk/~matyd/GroupNames/192/C4%5E2s4C4sC3.html
|
## G = C4^2⋊4C4⋊C3, order 192 = 2^6·3
### The semidirect product of C4^2⋊4C4 and C3 acting faithfully
Aliases: C4^2⋊4C4⋊C3, C4.1(C4^2⋊C3), (C2^2×C4).8A4, C2^3.10(C2×A4), C2^2.1(C4.A4), C2.C4^2.3C6, C2^3.3A4.3C2, C2.3(C2×C4^2⋊C3), SmallGroup(192,190)
Series: Derived Chief Lower central Upper central
Derived series C1 — C2 — C2.C4^2 — C4^2⋊4C4⋊C3
Chief series C1 — C2 — C2^3 — C2.C4^2 — C2^3.3A4 — C4^2⋊4C4⋊C3
Lower central C2.C4^2 — C4^2⋊4C4⋊C3
Upper central C1 — C4
Generators and relations for C4^2⋊4C4⋊C3
G = < a,b,c,d | a^4=b^4=c^4=d^3=1, ab=ba, cac^-1=ab^2, dad^-1=a^2c^-1, bc=cb, dbd^-1=a^2b^-1, dcd^-1=a^-1b^2c >
Character table of C4^2⋊4C4⋊C3

class 1 2A 2B 2C 3A 3B 4A 4B 4C 4D 4E 4F 4G 4H 4I 4J 4K 4L 6A 6B 12A 12B 12C 12D
size 1 1 3 3 16 16 1 1 3 3 6 6 6 6 6 6 6 6 16 16 16 16 16 16
ρ1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 trivial
ρ2 1 1 1 1 1 1 -1 -1 -1 -1 1 1 1 1 -1 -1 -1 -1 1 1 -1 -1 -1 -1 linear of order 2
ρ3 1 1 1 1 ζ3^2 ζ3 1 1 1 1 1 1 1 1 1 1 1 1 ζ3^2 ζ3 ζ3 ζ3^2 ζ3 ζ3^2 linear of order 3
ρ4 1 1 1 1 ζ3^2 ζ3 -1 -1 -1 -1 1 1 1 1 -1 -1 -1 -1 ζ3^2 ζ3 ζ6^5 ζ6 ζ6^5 ζ6 linear of order 6
ρ5 1 1 1 1 ζ3 ζ3^2 1 1 1 1 1 1 1 1 1 1 1 1 ζ3 ζ3^2 ζ3^2 ζ3 ζ3^2 ζ3 linear of order 3
ρ6 1 1 1 1 ζ3 ζ3^2 -1 -1 -1 -1 1 1 1 1 -1 -1 -1 -1 ζ3 ζ3^2 ζ6 ζ6^5 ζ6 ζ6^5 linear of order 6
ρ7 2 -2 2 -2 -1 -1 -2i 2i -2i 2i 0 0 0 0 0 0 0 0 1 1 -i -i i i complex lifted from C4.A4
ρ8 2 -2 2 -2 -1 -1 2i -2i 2i -2i 0 0 0 0 0 0 0 0 1 1 i i -i -i complex lifted from C4.A4
ρ9 2 -2 2 -2 ζ6^5 ζ6 2i -2i 2i -2i 0 0 0 0 0 0 0 0 ζ3 ζ3^2 ζ4ζ3^2 ζ4ζ3 ζ4^3ζ3^2 ζ4^3ζ3 complex lifted from C4.A4
ρ10 2 -2 2 -2 ζ6 ζ6^5 -2i 2i -2i 2i 0 0 0 0 0 0 0 0 ζ3^2 ζ3 ζ4^3ζ3 ζ4^3ζ3^2 ζ4ζ3 ζ4ζ3^2 complex lifted from C4.A4
ρ11 2 -2 2 -2 ζ6 ζ6^5 2i -2i 2i -2i 0 0 0 0 0 0 0 0 ζ3^2 ζ3 ζ4ζ3 ζ4ζ3^2 ζ4^3ζ3 ζ4^3ζ3^2 complex lifted from C4.A4
ρ12 2 -2 2 -2 ζ6^5 ζ6 -2i 2i -2i 2i 0 0 0 0 0 0 0 0 ζ3 ζ3^2 ζ4^3ζ3^2 ζ4^3ζ3 ζ4ζ3^2 ζ4ζ3 complex lifted from C4.A4
ρ13 3 3 3 3 0 0 3 3 3 3 -1 -1 -1 -1 -1 -1 -1 -1 0 0 0 0 0 0 orthogonal lifted from A4
ρ14 3 3 3 3 0 0 -3 -3 -3 -3 -1 -1 -1 -1 1 1 1 1 0 0 0 0 0 0 orthogonal lifted from C2×A4
ρ15 3 3 -1 -1 0 0 3 3 -1 -1 -1-2i -1+2i 1 1 -1-2i 1 1 -1+2i 0 0 0 0 0 0 complex lifted from C4^2⋊C3
ρ16 3 3 -1 -1 0 0 -3 -3 1 1 1 1 -1+2i -1-2i -1 1-2i 1+2i -1 0 0 0 0 0 0 complex lifted from C2×C4^2⋊C3
ρ17 3 3 -1 -1 0 0 3 3 -1 -1 1 1 -1+2i -1-2i 1 -1+2i -1-2i 1 0 0 0 0 0 0 complex lifted from C4^2⋊C3
ρ18 3 3 -1 -1 0 0 -3 -3 1 1 1 1 -1-2i -1+2i -1 1+2i 1-2i -1 0 0 0 0 0 0 complex lifted from C2×C4^2⋊C3
ρ19 3 3 -1 -1 0 0 3 3 -1 -1 -1+2i -1-2i 1 1 -1+2i 1 1 -1-2i 0 0 0 0 0 0 complex lifted from C4^2⋊C3
ρ20 3 3 -1 -1 0 0 3 3 -1 -1 1 1 -1-2i -1+2i 1 -1-2i -1+2i 1 0 0 0 0 0 0 complex lifted from C4^2⋊C3
ρ21 3 3 -1 -1 0 0 -3 -3 1 1 -1+2i -1-2i 1 1 1-2i -1 -1 1+2i 0 0 0 0 0 0 complex lifted from C2×C4^2⋊C3
ρ22 3 3 -1 -1 0 0 -3 -3 1 1 -1-2i -1+2i 1 1 1+2i -1 -1 1-2i 0 0 0 0 0 0 complex lifted from C2×C4^2⋊C3
ρ23 6 -6 -2 2 0 0 6i -6i -2i 2i 0 0 0 0 0 0 0 0 0 0 0 0 0 0 complex faithful
ρ24 6 -6 -2 2 0 0 -6i 6i 2i -2i 0 0 0 0 0 0 0 0 0 0 0 0 0 0 complex faithful
Permutation representations of C4^2⋊4C4⋊C3
On 24 points - transitive group 24T301
Generators in S24
(1 2)(3 4)(5 6)(7 8)(9 10)(11 12)(13 14)(15 16)(17 18 19 20)(21 22 23 24)
(1 16 13 12)(2 15 14 11)(3 9 5 8)(4 10 6 7)(17 18 19 20)(21 24 23 22)
(1 12 13 16)(2 15 14 11)(3 7)(4 9)(5 10)(6 8)(17 21)(18 24)(19 23)(20 22)
(1 24 6)(2 17 8)(3 11 18)(4 13 22)(5 15 20)(7 12 23)(9 14 19)(10 16 21)
G:=sub<Sym(24)| (1,2)(3,4)(5,6)(7,8)(9,10)(11,12)(13,14)(15,16)(17,18,19,20)(21,22,23,24), (1,16,13,12)(2,15,14,11)(3,9,5,8)(4,10,6,7)(17,18,19,20)(21,24,23,22), (1,12,13,16)(2,15,14,11)(3,7)(4,9)(5,10)(6,8)(17,21)(18,24)(19,23)(20,22), (1,24,6)(2,17,8)(3,11,18)(4,13,22)(5,15,20)(7,12,23)(9,14,19)(10,16,21)>;
G:=Group( (1,2)(3,4)(5,6)(7,8)(9,10)(11,12)(13,14)(15,16)(17,18,19,20)(21,22,23,24), (1,16,13,12)(2,15,14,11)(3,9,5,8)(4,10,6,7)(17,18,19,20)(21,24,23,22), (1,12,13,16)(2,15,14,11)(3,7)(4,9)(5,10)(6,8)(17,21)(18,24)(19,23)(20,22), (1,24,6)(2,17,8)(3,11,18)(4,13,22)(5,15,20)(7,12,23)(9,14,19)(10,16,21) );
G=PermutationGroup([(1,2),(3,4),(5,6),(7,8),(9,10),(11,12),(13,14),(15,16),(17,18,19,20),(21,22,23,24)], [(1,16,13,12),(2,15,14,11),(3,9,5,8),(4,10,6,7),(17,18,19,20),(21,24,23,22)], [(1,12,13,16),(2,15,14,11),(3,7),(4,9),(5,10),(6,8),(17,21),(18,24),(19,23),(20,22)], [(1,24,6),(2,17,8),(3,11,18),(4,13,22),(5,15,20),(7,12,23),(9,14,19),(10,16,21)])
G:=TransitiveGroup(24,301);
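As an independent sanity check (my own sketch, not part of the GroupNames page), the 24-point generators above can be fed to sympy to confirm the group order; the cycles are converted from the page's 1-indexed points to sympy's 0-indexed ones.

from sympy.combinatorics import Permutation, PermutationGroup

a = Permutation([[0, 1], [2, 3], [4, 5], [6, 7], [8, 9], [10, 11],
                 [12, 13], [14, 15], [16, 17, 18, 19], [20, 21, 22, 23]], size=24)
b = Permutation([[0, 15, 12, 11], [1, 14, 13, 10], [2, 8, 4, 7],
                 [3, 9, 5, 6], [16, 17, 18, 19], [20, 23, 22, 21]], size=24)
c = Permutation([[0, 11, 12, 15], [1, 14, 13, 10], [2, 6], [3, 8], [4, 9],
                 [5, 7], [16, 20], [17, 23], [18, 22], [19, 21]], size=24)
d = Permutation([[0, 23, 5], [1, 16, 7], [2, 10, 17], [3, 12, 21],
                 [4, 14, 19], [6, 11, 22], [8, 13, 18], [9, 15, 20]], size=24)

G = PermutationGroup([a, b, c, d])
print(G.order())   # expected: 192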
On 24 points - transitive group 24T309
Generators in S24
(5 6)(7 8)(9 10)(11 12)(13 14)(15 16)(17 18 19 20)(21 22 23 24)
(1 2 4 3)(5 12 6 11)(7 10 13 15)(8 9 14 16)(17 22 19 24)(18 23 20 21)
(1 11)(2 5)(3 6)(4 12)(7 8 13 14)(9 15 16 10)(17 19)(22 24)
(1 8 17)(2 16 22)(3 9 24)(4 14 19)(5 15 23)(6 10 21)(7 18 11)(12 13 20)
G:=sub<Sym(24)| (5,6)(7,8)(9,10)(11,12)(13,14)(15,16)(17,18,19,20)(21,22,23,24), (1,2,4,3)(5,12,6,11)(7,10,13,15)(8,9,14,16)(17,22,19,24)(18,23,20,21), (1,11)(2,5)(3,6)(4,12)(7,8,13,14)(9,15,16,10)(17,19)(22,24), (1,8,17)(2,16,22)(3,9,24)(4,14,19)(5,15,23)(6,10,21)(7,18,11)(12,13,20)>;
G:=Group( (5,6)(7,8)(9,10)(11,12)(13,14)(15,16)(17,18,19,20)(21,22,23,24), (1,2,4,3)(5,12,6,11)(7,10,13,15)(8,9,14,16)(17,22,19,24)(18,23,20,21), (1,11)(2,5)(3,6)(4,12)(7,8,13,14)(9,15,16,10)(17,19)(22,24), (1,8,17)(2,16,22)(3,9,24)(4,14,19)(5,15,23)(6,10,21)(7,18,11)(12,13,20) );
G=PermutationGroup([(5,6),(7,8),(9,10),(11,12),(13,14),(15,16),(17,18,19,20),(21,22,23,24)], [(1,2,4,3),(5,12,6,11),(7,10,13,15),(8,9,14,16),(17,22,19,24),(18,23,20,21)], [(1,11),(2,5),(3,6),(4,12),(7,8,13,14),(9,15,16,10),(17,19),(22,24)], [(1,8,17),(2,16,22),(3,9,24),(4,14,19),(5,15,23),(6,10,21),(7,18,11),(12,13,20)])
G:=TransitiveGroup(24,309);
Matrix representation of C4^2⋊4C4⋊C3 in GL5(𝔽13)

9 3 0 0 0
3 4 0 0 0
0 0 5 0 0
0 0 0 8 0
0 0 0 0 1
,
5 0 0 0 0
0 5 0 0 0
0 0 1 0 0
0 0 0 12 0
0 0 0 0 1
,
10 9 0 0 0
9 3 0 0 0
0 0 8 0 0
0 0 0 12 0
0 0 0 0 8
,
1 0 0 0 0
9 3 0 0 0
0 0 0 0 3
0 0 3 0 0
0 0 0 3 0
G:=sub<GL(5,GF(13))| [9,3,0,0,0,3,4,0,0,0,0,0,5,0,0,0,0,0,8,0,0,0,0,0,1],[5,0,0,0,0,0,5,0,0,0,0,0,1,0,0,0,0,0,12,0,0,0,0,0,1],[10,9,0,0,0,9,3,0,0,0,0,0,8,0,0,0,0,0,12,0,0,0,0,0,8],[1,9,0,0,0,0,3,0,0,0,0,0,0,3,0,0,0,0,0,3,0,0,3,0,0] >;
C4^2⋊4C4⋊C3 in GAP, Magma, Sage, TeX
C_4^2\rtimes_4C_4\rtimes C_3
% in TeX
G:=Group("C4^2:4C4:C3");
// GroupNames label
G:=SmallGroup(192,190);
// by ID
G=gap.SmallGroup(192,190);
# by ID
G:=PCGroup([7,-2,-3,-2,2,-2,2,-2,672,135,268,934,521,80,2531,3540]);
// Polycyclic
G:=Group<a,b,c,d|a^4=b^4=c^4=d^3=1,a*b=b*a,c*a*c^-1=a*b^2,d*a*d^-1=a^2*c^-1,b*c=c*b,d*b*d^-1=a^2*b^-1,d*c*d^-1=a^-1*b^2*c>;
// generators/relations
|
2020-01-27 12:22:59
|
http://clay6.com/qa/30422/there-are-two-radioactive-substances-x-y-if-decay-constant-of-y-is-2-times-
|
# There are two radioactive substances X & Y. The decay constant of Y is 2 times that of X. Initially both have equal numbers of nuclei. After n half-lives of X, the rates of disintegration of both are equal. The value of n is
$(a)\;1\qquad(b)\;4\qquad(c)\;3\qquad(d)\;5$
Explanation :
Let
$\lambda_{X}=\lambda \qquad$ & $\qquad \lambda_{Y}=2 \lambda$
Initially the rate of disintegration of X is $\;\lambda N_{0}\;$ and that of Y is $\;2 \lambda N_{0}\;.$
After one half-life of X, the rate of disintegration of X will become $\;\large\frac{\lambda N_{0}}{2}\;$ and that of Y will also be $\;\large\frac{\lambda N_{0}}{2}\;$ (the half-life of Y is $\large\frac{1}{2}$ the half-life of X).
So after one half-life of X, or two half-lives of Y,
$(-\large\frac{dN}{dt})_{X}=(-\large\frac{dN}{dt})_{Y}$
$n=1$
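A short numeric check of the argument (my sketch, not part of the original solution):

from math import exp, log

lam, N0 = 1.0, 1.0
t = log(2) / lam                            # one half-life of X
rate_X = lam * N0 * exp(-lam * t)           # = lam*N0/2
rate_Y = 2 * lam * N0 * exp(-2 * lam * t)   # = 2*lam*N0/4 = lam*N0/2
print(rate_X, rate_Y)                       # equal after n = 1 half-life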
|
2016-10-25 01:22:53
|
https://veliugurguney.com/blog/post/all_states_of_tic_tac_toe
|
# A splinter in my mind
There are certain problems that come to my mind once in a while. I ponder about and evaluate them, struggle with them a little bit, but then give up because my knowledge and expertise are not enough to finish solving them. "All states of Tic-Tac-Toe" was one of these problems. I remember thinking about it a decade ago and giving up on computing all of them because I didn't know much about data structures and algorithms.
Yesterday, I was surveying computer science books for future study and saw the below illustration in the first chapter of "The Pattern On The Stone" by Daniel Hillis. (It was recommended by Alan Kay in a Quora question.) It triggered memories of my former struggle with the Tic-Tac-Toe problem. I evaluated it a bit and this time I realized that I finally have the capacity to solve it, once and for all!
# Problem space
First, I thought about the problem space/size and how to implement and traverse a tree structure to store successive moves. (Spoilers: the correct data structure is not a tree but a graph, and the unit state is not an individual board configuration but its symmetry group, but I'll come to that later, please bear with me.)
My first idea was to draw the whole tree on my screen, which is 4K, 3,840 x 2,160 ~= 8M pixels. If each board takes 9 pixels, and we draw each path in the tree, there are at most 9! different paths: the first move places an X on one tile among 9. The second move has 1 tile occupied, so it places an O on one tile among 8, etc., continuing down to 1. Actually, most games end before 9 steps, so only a few leaves have that depth. Hence 9! ~= 360K is an upper bound. 3,840 * 2,160 / 9! ~= 23 pixels per path. More than enough space! :-)
Another way of looking at the problem is to compute all possible board configurations. A tile can be empty, or can have an X or an O on it, hence 3 states. There are 9 tiles, so 3^9 ~= 20K. This approach ignores the succession between configurations via player moves. Also, some of these configurations are not legitimate states; there is no path that leads to them in a game that obeys the rules. Assuming that X starts first, the number of X's is either equal to the number of O's or only one more. A brute-force count of this constraint is sketched below.
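Here is that count as a quick check (a sketch of mine, not from the original post):

from itertools import product

total = 0
count_ok = 0
for board in product((0, 1, 2), repeat=9):   # 0 empty, 1 X, 2 O
    total += 1
    n_x, n_o = board.count(1), board.count(2)
    # X moves first: X count equals O count or is exactly one ahead.
    if n_x == n_o or n_x == n_o + 1:
        count_ok += 1
print(total, count_ok)   # 19683 6046 (still overcounts: ignores finished games)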
Drawing all paths can be fun; however, that approach hides some internal structure of the problem. Unlike chess, tic-tac-toe does not have a sense of direction. Some board configurations are equivalent to others. For example, initially putting an X at the top left corner or top right corner does not matter in terms of strategy. You can rotate the board 90 degrees counter-clockwise and have the same state (or turn your head 90 degrees clockwise. Hehe.)
The board is symmetric under rotations and mirroring. To discover the true underlying structure via unique states, one needs to group individual states into symmetry groups, sets of states which are equivalent under these symmetry transformations. So, the problem is to find the number of unique board states and how they are connected via player moves. The actual number is surprisingly low!
# Data Structures
After clarifying the problem, next is to define the data structures used to solve it. We need to hold individual board states, hold their symmetry groups, and have a traversable structure to connect successive states. Because I am solving a very specialized problem, I wanted to do it using the primitives and basic structures that come with Python, without writing my own classes.
I chose to use tuples to represent a board state.
from typing import Iterable, List, Set
State = Iterable[Iterable[int]]
There are additional unspoken constraints on the State type, such as: the inner tuples should be made of three integers, each 0, 1 or 2; 0 being an empty tile, 1 an X and 2 an O. Better practice would be using enums for tile values, and a class for a state. But this type gave me the least complicated functional structure to hold a state. I was able to keep the constraints in my mind over the few hours I worked on the problem. A longer project should use more human-readable structures.
Example state:
TOP_LEFT_STATE = ((1, 0, 0),
                  (0, 0, 0),
                  (0, 0, 0))
A symmetry group, a set of states that are equivalent under rotation and mirror, can be represented via a list of States:
SymmetryGroup = List[State]
Why a list but not a Traversable (sets are traversable too)? Well... Again, bad coding practice. For reasons I'll explain later, I want to have a single, "canonical" state that represents the group, and I am going to place that state at the first element of the list. A better approach would isolate the implementation of "canonical representation of a symmetry group" from its users, say by returning it via a method.
Finally, the structure holding the connections between groups via player moves is a graph, not a tree. I realized that when I got some unexpected results: it is possible to arrive at the same board configuration via different paths. I had dismissed plotting all paths as a trivial problem, so I want board states reachable via different paths to be represented by the same node. In a tree, branches never converge. The correct data structure for this problem is therefore a Directed (we only add new marks at each step, never remove one) Acyclic (it is not possible to come back to an earlier state) Graph, a DAG.
Hence, a Node is made of a value val (the ID of the SymmetryGroup it refers to) and a list of successors referring to the future game states reachable via legal moves. This graph node class is named in an abstract way, and there is no other structure representing the graph as a whole.
GroupId = int
class Node(object):
def __init__(self, val: GroupId = None, successors: List[GroupId] = None):
self.val = val
self.successors = successors or []
GroupId is for semantic typing; it basically is an int. I got the idea of a group ID from relational databases with auto-incrementing IDs, where whenever a new row is added to an entity's table the ID is incremented by one. I decided to increment a global gid variable by one each time a new symmetry group is computed. Again, it is bad design to couple group computation with the graph implementation and counter increments.
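If one wanted the semantic type to be enforced by a checker, typing.NewType is the usual tool; a sketch of that alternative (the post itself uses a plain alias):

from typing import NewType

GroupId = NewType("GroupId", int)  # distinct from int for type checkers
gid = GroupId(1)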
By the way, I did not use these type hints for anything beyond readability; there is no static or dynamic type checker in place.
# Algorithms
Now that we have our data structures ready, let's write the algorithms that compute the graph. Instead of going the object-oriented route, where each function relevant to a class is put into the scope of that class, I wrote several module-level functions that operate on the relevant types. Because the code is small, I didn't go for better isolation.
First I wrote some debug functions, without which I wouldn't have arrived at a working solution: state_str, print_state and print_states, which convert a State to a string and "pretty print" one or many of them, respectively. Their implementation is not essential.
We need a function to compute the symmetry group of a state.
def get_symmetries(st: State) -> SymmetryGroup:
def get_vertical_mirror(st: State) -> State:
return tuple(reversed(st))
def get_rotations(st: State) -> List[State]:
((a, b, c),
(d, e, f),
(g, h, i)) = st
rot1 = ((g, d, a),
(h, e, b),
(i, f, c))
rot2 = ((i, h, g),
(f, e, d),
(c, b, a))
rot3 = ((c, f, i),
(b, e, h),
(a, d, g))
return [st, rot1, rot2, rot3]
def apply_all_symmetry_operations(st: State) -> List[State]:
mir = get_vertical_mirror(st)
return get_rotations(st) + get_rotations(mir)
all_symmetries = apply_all_symmetry_operations(st)
duplicates_removed = list(dict.fromkeys(all_symmetries))
return duplicates_removed
The weird indentation is for visual readability. Because a state is a geometric entity, I thought it legitimate to split the code across multiple lines to increase readability. (fmt: on/off can be used to prevent Black from reformatting those lines.)
Each board configuration can be rotated by 90 degrees 3 times, and the same can be done to its mirror (either horizontal or vertical). We compute the rotation and mirror operations manually, by adjusting the positions of a, b, c, ... in the tuples. In total there are 8 symmetry operations that can be applied to a state, each producing a state in the same symmetry group.
Sometimes two operations result in the same state, and a symmetry group should only include distinct configurations. So, as a last step, after applying the 8 operations (1 identity, 3 rotations, 1 mirror, and 3 rotations of the mirror) we take the distinct states among them and return their list as the symmetry group. This can be done via the list(dict.fromkeys(array)) trick that removes duplicates from a list. (See these two tweets by Python core developer Raymond Hettinger 1, 2 on the introduction of dict.fromkeys() to Python and on how regular Python dicts now preserve insertion order, which is why the trick can be used to remove duplicates using dict keys as a hash map.)
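For a tiny demonstration of the trick (my example, not the post's):

> list(dict.fromkeys(["a", "b", "a", "c", "b"]))
['a', 'b', 'c']

Duplicates are removed and the first-seen order is preserved.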
Then we need to tell whether a given state is a game-ending configuration or not, which can be done by checking whether any of the rows, columns or diagonals (lines) consists of three identical, non-empty tiles.
def is_end(st: State) -> bool:
def are_same(triplet):
a, b, c = triplet
return a == b == c != 0
horizontals = [[st[ix][0], st[ix][1], st[ix][2]] for ix in range(3)]
verticals = [[st[0][ix], st[1][ix], st[2][ix]] for ix in range(3)]
diag1 = [st[0][0], st[1][1], st[2][2]]
diag2 = [st[2][0], st[1][1], st[0][2]]
lines = horizontals + verticals + [diag1, diag2]
return any(are_same(line) for line in lines)
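For instance (my own quick check, reusing TOP_LEFT_STATE from above), a full left column of X's ends the game:

X_WINS = ((1, 2, 0),
          (1, 2, 0),
          (1, 0, 0))
assert is_end(X_WINS)              # left column is three X's
assert not is_end(TOP_LEFT_STATE)  # a single X does not end the game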
We need to be able to make a move on a given state, a move being writing a marker on a given tile. We don't check whether it is a legit move, whether it happened inside the borders of the board, or even whether it was an X or an O; keeping things consistent is left as a responsibility of the call site. Since the code is small, that is easy to keep in mind.
def make_move(st: State, row: int, col: int, val: int) -> State:
mutable_st = [list(r) for r in st]
mutable_st[row][col] = val
new_st = tuple(tuple(r) for r in mutable_st)
return new_st
Also note that, because tuples are immutable, a new state is created for each move. This is why I chose tuples to represent a state rather than a mutable representation: it is easier to make a mistake with an in-place operation on a mutable object.
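A quick illustration of that immutability (my example):

> st2 = make_move(TOP_LEFT_STATE, 1, 1, 2)
> print(st2[1][1], TOP_LEFT_STATE[1][1])
2 0

The original state is untouched; only the returned copy carries the new mark.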
Then we need a way to compute all legit moves on a given state. If a given state is a game-ending state, it has no next states. Otherwise, we go over all tiles, skipping those that are not empty, and for each empty tile we create a new state by putting an X or an O there. Because players play in turns, and assuming that the empty board is step 0, we place X's at odd steps and O's at even steps. Therefore, besides the State itself, step has to be given as an argument too.
def get_next_states_raw(st: State, step: int) -> List[State]:
next_states = []
if is_end(st):
return []
new_val = step % 2 + 1
for row in range(3):
for col in range(3):
val = st[row][col]
if val == 0:
new_st = make_move(st, row, col, new_val)
next_states.append(new_st)
return next_states
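As a sanity check (mine, not from the post): from the empty board at step 0 there should be 9 raw successors, each with a single X placed.

> empty = ((0, 0, 0), (0, 0, 0), (0, 0, 0))
> len(get_next_states_raw(empty, 0))
9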
# Compute the DAG of all unique game states
Given the above functionality, we are now ready to compute the graph. This is the most cluttered part of the code; it's written as a script that manipulates some global variables. It could have been written as a class, but since it is only going to be used once, I didn't bother designing one.
However, a better-designed, higher-abstraction approach could be a framework that asks for the rules of a game in terms of 1) a board configuration class, 2) an initial board state, 3) a list of symmetry operations under which boards are equivalent, 4) a function that computes the next legitimate states, and 5) a class that computes the DAG given the previous items. Such a framework could be applied to Connect Four too, for example.
The computation goes as follows. We traverse the state space level by level (a breadth-first traversal), starting from the root node, the symmetry group of the empty board. At each step of the traversal we compute the groups of the next possible states of each Node.
If any of those next groups has not been seen/processed yet, we register it and set its node as a successor of the current node. Since the successors of this new node have not been computed yet, we append it to the queue to be processed at the next step of the traversal.
Otherwise, if the group already exists, we just set its node as a successor without creating a second node for the same group. That is where two paths merge into the same node. For example, these two paths merge at their 3rd steps: (X to top left) -> (O to top center) -> (X to center) and (X to center) -> (O to top center) -> (X to top left).
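This merge can be verified directly with make_move (my check; empty is the all-zero board):

empty = ((0, 0, 0), (0, 0, 0), (0, 0, 0))
# X top left, O top center, X center
a = make_move(make_move(make_move(empty, 0, 0, 1), 0, 1, 2), 1, 1, 1)
# X center, O top center, X top left
b = make_move(make_move(make_move(empty, 1, 1, 1), 0, 1, 2), 0, 0, 1)
assert a == b  # the two paths end at the same configuration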
We keep the state of the computation in the following variables: states = {} maps states to the group IDs of the symmetry groups they belong to; groups = {} maps GroupIds to the symmetry groups they identify, the "registry" of the groups; nodes = {} maps GroupIds to the graph nodes that store them and their successors.
A helper function updates the global state when a new state is discovered. It needs to be given a freshly incremented group ID. It computes the symmetry group, registers it in groups, and records, for every state in the group, which group it belongs to.
def add_symmetry_group(st: State, gid: GroupId):
global states, groups
symmetry_group = get_symmetries(st)
groups[gid] = symmetry_group
    states.update({s: gid for s in symmetry_group})
We initialize the group ID counter to gid = 1, add the symmetry group of the empty board to the state, create a graph node for it and register the node, then increment the GroupId counter and append the node to the traversal queue.
from collections import deque

states, groups, nodes = {}, {}, {}  # the global registries described above
gid = 1
empty_board_state = ((0, 0, 0), (0, 0, 0), (0, 0, 0))

add_symmetry_group(empty_board_state, gid)
root = Node(gid)
nodes[gid] = root
gid += 1
q = deque([root])
The traversal decomposes into two loops: the outer one runs over the steps of the game, and at each step the inner one goes over the items currently in the queue, pops each of them and processes it; if processing yields novel nodes, they are added to the queue.
print("step, unique_state_no")
for step in range(11):
print(f"{step}, {len(q)}")
for _ in range(len(q)):
compute_and_queue_successors(q.pop())
We'll print the number of nodes at each step of the game.
The computation and queuing of successor nodes, explained above, is the hairiest part of the code.
def compute_and_queue_successors(nd: Node) -> None:
global groups, nodes, q, gid
sym_group = groups[nd.val]
# first state of a group is its representative state
canonical_state = sym_group[0]
    next_states = get_next_states_raw(canonical_state, step)  # step comes from the loop above
for nxt_st in next_states:
# have seen/processed this state before?
if nxt_st in states:
gid_old = states[nxt_st]
# merge this path with other paths leading to this node
# if it is not already merged (i.e. if not a successor
# of current node)
if gid_old not in [suc.val for suc in nd.successors]:
nd.successors.append(nodes[gid_old])
continue
# is it an unseen/unprocessed state?
add_symmetry_group(nxt_st, gid)
new_nd = Node(gid)
nodes[gid] = new_nd
nd.successors.append(new_nd)
q.appendleft(new_nd)
gid += 1
# Results
0, 1
1, 3
2, 12
3, 38
4, 108
5, 174
6, 204
7, 153
8, 57
9, 15
10, 0
We start with the root node, the empty board, at step 0. The 3 unique configurations at step 1 are the following.
> print_states([groups[nd.val][0] for nd in root.successors])
100 010 000
000 000 010
000 000 000
Remember that these are the canonical representations of their symmetry groups. Any X in a corner is equivalent to the first one, any X at the center of a side is equivalent to the second one, and X at the center is the only state in its symmetry group.
Similarly, the successors of the X-at-center first move are
> print_states([groups[nd.val][0] for nd in root.successors[2].successors])
200 020
010 010
000 000
which are printed via the debug methods.
The number of unique states first increases as the game moves forward; however, around step 5 we start hitting ending states, and those nodes are sinks. The number of distinct states starts decreasing after step 6. We don't have any states for step 10 because at step 9 the board is full. Apparently the board can be filled in 15 unique ways.
The total number of distinct states through which the game can flow is 765. At this point I was ready to open Wikipedia to find the correct number. The Game complexity article, in its Tic-Tac-Toe section, says "when rotations and reflections of positions are considered identical, there are only 765 essentially different positions." Boom! A decade-long back-and-forth effort ended happily. ^_^
The same article defines two types of complexity for a game: "The state-space complexity of a game is the number of legal game positions reachable from the initial position of the game." and "The game tree size is the total number of possible games that can be played: the number of leaf nodes in the game tree rooted at the game's initial position. The game tree is typically vastly larger than the state space because the same positions can occur in many games by making moves in a different order." Note that we computed the state-space graph. To get the game tree size we would need to count the paths from the root node to each leaf node.
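If one wanted that number, a memoized path count over the computed DAG could look like the sketch below (my illustration; note that counting paths in the symmetry-collapsed graph counts games only up to symmetry, so it will not reproduce the classic uncollapsed game-tree figure).

from functools import lru_cache

@lru_cache(maxsize=None)
def count_paths(gid: int) -> int:
    nd = nodes[gid]
    if not nd.successors:  # a sink: the game ended or the board is full
        return 1
    return sum(count_paths(s.val) for s in nd.successors)

print(count_paths(root.val))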
# Visualization
We can't stop before visualizing this graph! We'll plot it using the matplotlib and networkx libraries. The graph is very big, which makes it hard to get a sensible visualization out of it (just as it is hard to visualize the Solar System in correct proportions: the radii of the orbits vary wildly).
A NetworkX graph can be constructed from a list of edges. Let's traverse the graph to collect all edges.
edges = []  # List[Tuple[GroupId, GroupId]]

def traverse(nd):
    # Revisits shared subgraphs, so duplicate edges are collected;
    # nx.DiGraph.add_edges_from() below simply ignores the duplicates.
    edges.extend([(nd.val, s.val) for s in nd.successors])
    for s in nd.successors:
        traverse(s)

traverse(root)
Then create the graph.
import networkx as nx
G = nx.DiGraph()
G.add_edges_from(edges)
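As a cross-check (mine), the node count of the constructed graph should match the 765 figure from the results section:

> G.number_of_nodes()
765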
If we visualize it using the standard layouts provided by the library, we ignore the structural hierarchy of the graph, namely the grouping of nodes into steps and the way it flows in a single direction, in a "tree-like" manner where branches sometimes merge.
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(15, 15))
nx.draw_networkx(G, pos=nx.spring_layout(G), ax=ax)
The above uses the "spring layout", in which the internal structure is lost. After some searching online, I learned that 1) I can draw an image for each node, and 2) using Graphviz it is possible to impose the inherent hierarchy on the node distribution of the visualization.
pos = nx.nx_agraph.graphviz_layout(G, prog='dot')
computes the (x, y) positions of nodes on a canvas using Graphviz' dot program.
import matplotlib

levels = [0, 1, 2, 3]
colors = ['gray', 'blue', 'red']
cmap, norm = matplotlib.colors.from_levels_and_colors(levels, colors)
creates an absolute color map where 0 is gray, 1 is blue and 2 is red.
from matplotlib.offsetbox import AnnotationBbox, OffsetImage

fig, ax = plt.subplots(figsize=(50, 10))
for gid, (x, y) in pos.items():
    sym_grp = groups[gid]
    state = sym_grp[0]  # draw the canonical state as the node's image
    node_image = OffsetImage(state, zoom=3.0, cmap=cmap, norm=norm)
    box = AnnotationBbox(node_image, (x, y), frameon=False, pad=0.1)
    ax.add_artist(box)
nx.draw(G, pos, with_labels=False, arrows=True, width=0.2, ax=ax, node_size=1, alpha=0.5)
and finally we tell matplotlib to waste less screen real estate on padding and save the graph in PDF and PNG formats with a high DPI.
plt.tight_layout()
plt.savefig("tic-tac-toe_game_state_graph.pdf", dpi=300)
plt.savefig("tic-tac-toe_game_state_graph.png", dpi=300)
Computing the graph is the fastest part; computing the visualization and saving it to files takes orders of magnitude longer. Here is the final visualization:
published at: 2020-06-07 22:56 edited at: 2020-06-07 23:21 UTC-5
tags: python
## Chemistry (12th Edition)
Published by Prentice Hall
# Chapter 13 - States of Matter - 13 Assessment - Page 443: 31
#### Answer
During an elastic collision, kinetic energy is transferred without loss from one particle to another and the total kinetic energy remains constant.
# Category:Nests
This category contains results about Nests.
Let $S$ be a set.
Let $\powerset S$ be its power set.
Let $N \subseteq \powerset S$ be a subset of $\powerset S$.
Then $N$ is a nest if and only if:
$\forall X, Y \in N: X \subseteq Y$ or $Y \subseteq X$
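For example, $\{\varnothing, \{1\}, \{1, 2\}\}$ is a nest, since any two of its elements are comparable under $\subseteq$, whereas $\{\{1\}, \{2\}\}$ is not, as neither $\{1\} \subseteq \{2\}$ nor $\{2\} \subseteq \{1\}$ holds.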
# Principles of Corporate Finance Study Set 3

## Quiz 13: Efficient Markets and Behavioral Finance

Multiple-choice questions (answers shown where given):

• Predictable cycles in stock price movements I. tend to persist for a long time; II. tend to self-destruct as soon as investors recognize them; III. never appear, since stock returns change randomly. Answer: B
• Weak-form efficiency implies that past stock returns. Answer: C
• The different forms of market efficiency are I. weak form; II. semistrong form; III. strong form. Answer: D
• The statement that stock prices follow a random walk implies that I. successive price changes are independent of each other; II. successive price changes are positively related; III. successive price changes are negatively related; IV. the autocorrelation coefficient is either +1.0 or −1.0.
• Informational efficiency in financial markets results in stock prices being
• Which of the following is a statement of semistrong-form efficiency? I. Stock prices will adjust immediately to public information. II. Stock prices reflect all information. III. Stock prices will adjust to newly published information after a long time delay.
• Stock price cycles or patterns tend to self-destruct as soon as investors recognize them through
• Generally, a firm is able to find positive-NPV opportunities among its I. financing decisions; II. capital investment decisions; III. short-term borrowing decisions.
• Financing decisions differ from investment decisions because I. financing decisions are easier to reverse; II. markets for financial assets are generally more competitive than real asset markets; III. generally, financing decisions have NPVs very close to zero.
• Which of the following statements is (are) true if the strong-form efficient market hypothesis holds? I. Analysts can easily forecast stock price changes. II. Financial markets are irrational. III. Stock returns follow a particular pattern. IV. Stock prices reflect all available information.
• Strong-form market efficiency states that the market incorporates all information into stock prices. Strong-form efficiency implies that I. an investor can only earn risk-free rates of return; II. an investor can always rely on technical analysis; III. professional investors cannot consistently outperform the market.
• If the efficient market hypothesis holds, investors should expect I. to receive a fair price for their security; II. to earn a normal rate of return on their investments; III. to be able to pick stocks that will outperform the market.
• Financing decisions differ from investment decisions for which of the following reasons? I. you cannot use NPV to evaluate financing decisions; II. markets for financial assets are more active than for real assets; III. it is easier to find financing decisions with positive NPV than to find investment decisions with positive NPV.
• A large firm received a loan guarantee from the government. Due to the guarantee, the firm can borrow $50 million for five years at 8 percent interest per year instead of 10 percent per year. Calculate the value of the guarantee to the firm. (Ignore taxes.)
• If the weak form of market efficiency holds, then I. technical analysis is useless; II. stock prices reflect all information contained in past prices; III. stock price returns follow a random walk.
• If capital markets are efficient, then the sale or purchase of any security at the prevailing market price is generally
• A small business received a five-year $1,000,000 loan at a subsidized rate of 3 percent per year. The firm will pay 3 percent annual interest each year and the principal at the end of five years. If market interest rates on similar loans are 6 percent per year, what is the NPV of the loan? (Ignore taxes.)
HHVM 4.51 is released! This release marks the end of support for 4.45; HHVM 4.46–4.50 remain supported, as do the 4.8 and 4.32 LTS releases.
# Highlights
• A new method getEnumUnderlyingType() was added to ReflectionClass. It returns the underlying type of the enum’s values (usually HH\int or HH\string, but can also be the name of another enum, type alias, or HH\arraykey).
• If the ReflectionClass instance doesn’t represent an enum, the method throws.
• Improved optimization of async tail calls (return await ...) can significantly improve performance of code that uses a lot of such calls.
# Breaking Changes
• Using the array-appending operator [] on a Map object ($map[] = Pair {$key, $value};) is now a type-checker error.
• Use $map[$key] = $value; instead.
• The runtime still allows both variants, so HH_FIXME can be used temporarily, to help migrating code. Runtime support will likely be removed in a future release.
• The type-checker no longer supports the “magic method” __call, so calling any method that is not explicitly declared is now a type-checker error.
• Runtime behavior is unchanged, so HH_FIXME can be used temporarily, to help migrating code. Runtime support will likely be removed in a future release.
• In the long term, we recommend migrating any affected code to only call explicitly declared methods.
# Future Changes
• We expect the removed type-checker features listed above to also be removed from the runtime in a future release.
# The Planck scale: Is there a fundamental limit to space and time?
This video explores the fundamental lower limits of space and time by considering what would happen if two electrons were squeezed closer and closer together. After discussing the ratio of the electric and gravitational forces, Heisenberg's uncertainty principle is combined with Einstein's theory of special relativity to show that at very small distance scales the strength of gravity becomes comparable to the electrostatic force. When the two electrons are squeezed to a separation equal to the Planck length, a black hole forms, placing a fundamental lower limit on the distances that can be meaningfully probed. The video shows that the Planck length, mass and time can all be derived by dimensional analysis, combining the fundamental constants of quantum mechanics, relativity and gravitation.
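The dimensional-analysis results referred to are the standard Planck units: $\ell_P = \sqrt{\hbar G / c^3} \approx 1.6 \times 10^{-35}\ \text{m}$, $t_P = \sqrt{\hbar G / c^5} \approx 5.4 \times 10^{-44}\ \text{s}$, and $m_P = \sqrt{\hbar c / G} \approx 2.2 \times 10^{-8}\ \text{kg}$.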
## Zariski-like topologies for lattices with applications to modules over associative rings. (English) Zbl 1430.06005
If $$\mathbb X = \langle X; \tau\rangle$$ is a $$T_0$$ topological space, and $$K$$ is the lattice of closed subsets of $$X$$, then the dual lattice $$L=K^{\partial}$$ is a complete lattice which has a subset $$\tilde{X}=\{\{p\}\;|\;p\in X\}\subseteq L$$ which, when equipped with the Zariski topology, is a space naturally homeomorphic to $$\mathbb X$$. That is, the closed subsets of $$\tilde{X}$$ in the Zariski topology are the sets $$V(a) = \{\{p\}\in \tilde{X}\;|\;a\leq_L \{p\}\} = \{\{p\}\in \tilde{X}\;|\;p\in a\}$$, and the map $$p\mapsto \{p\}$$ is a homeomorphism from $$\mathbb X$$ to $$\tilde{\mathbb X}$$.
This paper starts with an arbitrary complete lattice $$L$$ and a proper subset $$X\subsetneq L$$ and creates a space on $$X$$ with the sets $$V(a) = \{p\in X\;|\;a\leq_L p\}$$. $$L$$ is called $$X$$-top if the collection of $$V(a)$$’s is closed under union, in which case they form the closed sets of a topology on $$X$$. The paper studies the topological properties of the resulting space (separation axioms, connectedness, and compactness), and provides sufficient conditions for the space to be spectral.
### MSC:

06B30 Topological lattices
06A15 Galois correspondences, closure operators (in relation to ordered sets)
16D10 General module theory in associative algebras
54B99 Basic constructions in general topology
REtodelta - Maple Help
LREtools
REtodelta
return the difference operator associated to the LRE
Calling Sequence

REtodelta(problem)
Parameters
problem - problem
Description
• This routine returns the difference operator, in terms of the inert name LREtools[Delta], associated to the problem. The operator is indexed by the name of the variable from the problem.
• The command with(LREtools,REtodelta) allows the use of the abbreviated form of this command.
Examples
> with(LREtools):
> REtodelta(u(n+1) - u(n), u(n), {})

    LREtools[Delta][n]                                                   (1)

> REtodelta((t+1)*u(t+2) + (t+2)*u(t), u(t), {})

    (t+1)*LREtools[Delta][t]^2 + (2*t+2)*LREtools[Delta][t] + 2*t + 3    (2)
### Solution: counting people in the maze¶
With this maze structure:
In [1]:
house = {
'living' : {
'exits': {
'north' : 'kitchen',
'outside' : 'garden',
'upstairs' : 'bedroom'
},
'people' : ['James'],
'capacity' : 2
},
'kitchen' : {
'exits': {
'south' : 'living'
},
'people' : [],
'capacity' : 1
},
'garden' : {
'exits': {
'inside' : 'living'
},
'people' : ['Sue'],
'capacity' : 3
},
'bedroom' : {
'exits': {
'downstairs' : 'living',
'jump' : 'garden'
},
'people' : [],
'capacity' : 1
}
}
We can count the occupants and capacity like this:
In [2]:
capacity = 0
occupancy = 0
for name, room in house.items():
capacity += room['capacity']
occupancy += len(room['people'])
print("House can fit {} people, and currently has: {}.".format(capacity,occupancy))
House can fit 7 people, and currently has: 2.
As a side note, notice how we included the values of capacity and occupancy in the last line. This is a handy syntax for building strings that contain the values of variables. You can read more about it here or in the official documentation.
## Found 1,487 Documents (Results 1–100)
### Limit laws for two distance-based indices in random recursive tree models. (English) Zbl 07589927
MSC: 05C05 60F05
### Firm non-expansive mappings in weak metric spaces. (English) Zbl 07589462
MSC: 47H09 51F99
### A new and faster iterative scheme including generalized $$\alpha$$-nonexpansive mappings in Banach spaces. (English) Zbl 07579601
MSC: 47J26 47H05 47H09
### On the fixed point property for orbital contractions in Banach spaces. (English) Zbl 07565193
MSC: 47H10 47H09 47H20
### The best proximity points for weak $$\mathcal{MT}$$-cyclic Reich type contractions. (English) Zbl 1491.54053
MSC: 54H25 47H10 54E40
### Fixed points of weakly $$K$$-nonexpansive mappings and a stability result for fixed point iterative process with an application. (English) Zbl 07562167
MSC: 47H09 47H10 54H25
### Best proximity point of cyclic generalized $$\phi$$-weak contraction mapping in metric spaces. (English) Zbl 1491.54103
MSC: 54H25 54E40
### Convergence theorems using Ishikawa iteration for finding common fixed points of demiclosed and 2-demiclosed mappings in Hilbert spaces. (English) Zbl 07535187
MSC: 47J26 47H09
### On a three-step iteration process for multivalued Reich-Suzuki type $$\alpha$$-nonexpansive and contractive mappings. (English) Zbl 1492.47098
MSC: 47J26 47H04 47H09
### Best proximity of proximal $$\mathcal{F}^\ast$$-weak contraction. (English) Zbl 1491.54149
MSC: 54H25 54E40
### Improved generalized $$M$$-iteration for quasi-nonexpansive multivalued mappings with application in real Hilbert spaces. (English) Zbl 07487975
MSC: 47J26 47H06 47H09
### Weakly protected nodes in random binary search trees. (English) Zbl 1483.05022
MSC: 05C05 05C80 60F05
### Fixed point results using weak $$\alpha_w$$-admissible mapping in $$G_b$$-metric spaces with applications. (English) Zbl 07454684
MSC: 47H09 90C31 26A33
### Relation-theoretic metrical coincidence theorems under weak C-contractions and K-contractions. (English) Zbl 07533472
MSC: 47H10 54H25
### A three-step iteration process for generalized $$\alpha$$-nonexpansive multivalued mapping in Banach spaces. (English) Zbl 07528137
MSC: 47J26 47H09 47H04
### Convergence, stability and data dependence results for contraction and nonexpansive mappings by a new four step algorithm. (English) Zbl 07527638
MSC: 47J26 47H09 65J15
### Common fixed point for some generalized contractive mappings in a modular metric space with a graph. (English) Zbl 07525608
MSC: 46A80 47H10
### A modified iteration for total asymptotically nonexpansive mappings in Hadamard spaces. (English) Zbl 1484.47131
MSC: 47H10 47H09
### $$S_H$$-metric spaces and fixed-point theorems for multi-valued weak contraction mappings. (English) Zbl 1486.54069
MSC: 54H25 54C60 54E40
### Solving split equality fixed point problem for quasi-$$\phi$$-nonexpansive mappings. (English) Zbl 07489190
MSC: 47J26 47H09
### Inertial Picard normal S-iteration process. (English) Zbl 07487963
MSC: 47J26 47H09
### Weak and strong convergence theorems for three Suzuki’s generalized nonexpansive mappings. (English) Zbl 07487559
MSC: 47J26 47H09
### Self-adaptive algorithms for an equilibrium split problem in Hilbert spaces. (English) Zbl 07472392
MSC: 47J25 47H06 47H09
### Common fixed point theorems for enriched Jungck contractions in Banach spaces. (English) Zbl 07453460
MSC: 47H10 47H09 47J26
### Coincidence and fixed point results for generalized weak contraction $$b$$-metric spaces. (English) Zbl 1476.54082
MSC: 54H25 47H10 34B15
### Common fixed point theorems under generalized $$(\psi - \phi)$$-weak contractions in $$S$$-metric spaces with applications. (English) Zbl 1476.54116
MSC: 54H25 54E40 54E50
### Convergence theorems of proximal type algorithm for a convex function and multivalued mappings in Hilbert spaces. (English) Zbl 07450963
MSC: 47J25 47H09 47H04
### Split trees – a unifying model for many important random trees of logarithmic height: a brief survey. (English) Zbl 1484.68057
Lindblad, Joakim (ed.) et al., Discrete geometry and mathematical morphology. First international joint conference, DGMM 2021, Uppsala, Sweden, May 24–27, 2021. Proceedings. Cham: Springer. Lect. Notes Comput. Sci. 12708, 20-57 (2021).
### Convergence of Tseng-type self-adaptive algorithms for variational inequalities and fixed point problems. (English) Zbl 1485.47111
MSC: 47J25 47H09 49J40
### Algorithms for common best proximity points of the global minimization problem in Hilbert spaces. (English) Zbl 07439480
MSC: 47J25 47H09
### A weak convergence theorem for solving the split common fixed point problem in two Banach spaces and applications. (English) Zbl 07435518
MSC: 47J26 47H05 47H09
### A forward-backward splitting algorithm for quasi-Bregman nonexpansive mapping, equilibrium problems and accretive operators. (English) Zbl 07430787
MSC: 47J25 47H09 47H06
### Some convergence results for a class of generalized nonexpansive mappings in Banach spaces. (English) Zbl 1489.47076
MSC: 47H10 47H09
### Fixed point theorems for generalized weak contraction mapping in generating space of $$b$$-dislocated metric spaces. (English) Zbl 1473.54056
MSC: 54H25 54E40 54F05
### Fixed point theorems for generalized $$F$$-contraction on metric space. (English) Zbl 1488.54111
MSC: 54H25 47H10
### Estimation of fixed points of Hardy and Rogers generalized non-expansive mappings. (English) Zbl 07397345
MSC: 47H09 47H10
### Iterative approximations of fixed points for operators satisfying $$(B_{\gamma ,\mu })$$ condition. (English) Zbl 1482.47139
MSC: 47J26 47H05 47H09
### Rate of convergence of modified Mann iteration for asymptotically nonexpansive mappings. (English) Zbl 1482.47138
MSC: 47J26 47H09 65J15
### Fixed point results for Geraghty-weak contractions in ordered partial rectangular $$b$$-metric spaces. (English) Zbl 1477.54052
MSC: 54H25 54E40 54F05
### An inertial method for split common fixed point problems in Hilbert spaces. (English) Zbl 1481.47109
MSC: 47J26 47H09
### Nonhomogenous bivariate fragmentation process: asymptotic distribution via contraction method. (English) Zbl 1469.60104
MSC: 60G09 60F05 60H25
### Approximation of fixed points for mean nonexpansive mappings in Banach spaces. (English) Zbl 07388490
MSC: 47J26 47H09
### Approximating common fixed points of a finite family of non-self mappings in Hilbert spaces. (English) Zbl 1487.47126
MSC: 47J26 47H09
### A mean ergodic theorem for nonexpansive mappings in Hadamard spaces. (English) Zbl 1483.47089
MSC: 47H25 47H09 47H20
### Approximation of fixed points for a class of mappings satisfying property (CSC) in Banach spaces. (English) Zbl 07372220
MSC: 47H09 47H10 47J26
### An inertial Censor-Segal algorithm for split common fixed-point problems. (English) Zbl 07370664
MSC: 47J26 47H09 65J15
### Locally weak version of the contraction mapping principle. (English) Zbl 1469.54072
MSC: 54H25 47H10 54E35
### Weak and strong convergence theorems for the split common fixed point problem with demicontractive operators. (English) Zbl 1486.47118
MSC: 47J26 47H09
### Inertial accelerated algorithms for the split common fixed-point problem of directed operators. (English) Zbl 1486.47123
MSC: 47J26 47H09
### A new iterative algorithm for the split common fixed points of two infinite countable families of the strictly pseudo-contractive mappings. (Chinese. English summary) Zbl 1474.47152
MSC: 47J26 47H10 47H09
### Random additions in urns of integers. (English) Zbl 1476.60018
MSC: 60C05 60F05
### Variants of $$R$$-weakly commuting mappings satisfying a weak contraction. (English) Zbl 1488.54137
MSC: 54H25 54E40
### A new iterative method for Suzuki mappings in Banach spaces. (English) Zbl 1480.47104
MSC: 47J26 47H09
### Generalized Krasnoselskii-Mann-type iteration for nonexpansive mappings in Banach spaces. (English) Zbl 1474.47141
MSC: 47J25 47H05 47H09
### Fixed point for $$F_\perp$$-weak contraction. (English) Zbl 1474.54156
MSC: 54H25 54E40 34A12
### On fixed point results in partial $$b$$-metric spaces. (English) Zbl 1476.54099
MSC: 54H25 54E40
### Convergence analysis of modified abbas iteration process for two $$G$$-nonexpansive mappings. (English) Zbl 07343250
MSC: 47H10 47H09
### Fixed points of $$(\varphi,F)$$-weak contractions on metric-like spaces with applications to integral equations on time scales. (English) Zbl 07342846
MSC: 47H10 54H25
### Some convergence results of $$M$$ iterative process in Banach spaces. (English) Zbl 1481.47106
MSC: 47J26 47H09
### Convergence theorems for mixed type iterative process of single-valued and multi-valued nonexpansive mappings and applications. (English) Zbl 1481.47103
MSC: 47J26 47H09
# Power Ratings of RF Amplifiers
#### naumankalia
Joined Feb 19, 2016
35
Hi all
I am studying RF amplifier data sheets and there is some confusion I am facing with the specifications. One such amplifier claims a power output of 2 kW into a 50 ohm load at 20% duty cycle. However, the minimum gain it specifies is 63 dB and the maximum input it allows is 10 dBm (around 0.707 Vrms into 50 ohms). Using this gain and input, the maximum output voltage is around 990 Vrms into the 50 ohm load. Using the simple power formula, V^2/R, the output power comes out to around 19 kW into 50 ohms?
Can anyone kindly tell me where I went wrong in my calculations?
Thanks
#### Ylli
Joined Nov 13, 2015
1,052
Your calcs are correct. With 63 dB of gain, the amplifier should produce its specified output power of 2 kW with an input signal level of 0 dBm. The listed +10 dBm is likely the maximum safe input level, so that you don't fry the input stage.
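The dB arithmetic is easy to verify (a quick sketch, not from the thread):

# P[dBm] = 10*log10(P / 1 mW); invert to get watts from dBm.
def dbm_to_watts(dbm: float) -> float:
    return 1e-3 * 10 ** (dbm / 10)

print(dbm_to_watts(0 + 63))   # ~1995 W: 0 dBm in plus 63 dB gain gives the rated 2 kW
print(dbm_to_watts(10 + 63))  # ~19953 W: +10 dBm in would imply the ~20 kW computed above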
Existence of Cyclic Group of Order n
Theorem
Let $n \in \Z_{>0}$.
Then there exists a cyclic group of order $n$ which is unique up to isomorphism.
Proof
Existence follows from Integers Modulo m under Addition form Cyclic Group.
Uniqueness follows from Cyclic Groups of Same Order are Isomorphic.
$\blacksquare$
Laravel 8, the brand new release of the Laravel framework is all set to launch on 8th of September. Taylor Otwell gave an impressive insight of what to expect in this year’s Laracon Online. Due to the pandemic, over 5k developers attended it from home and watched the Live Webinar, making it the largest Laracon ever. It was packed with some amazing presentations, and the Monstar Lab PHP Team was lucky to also be part of that conference.
Here is a quick rundown of all the features that are set to ship in Laravel 8. Details of the features will soon be out in the new Dark Mode enabled Laravel site. The addition of dark mode is also a new feature of its own for the developers to enjoy.
Small disclaimer: I might have misunderstood some features while listening to the one-hour presentation. Once the documentation is up, I will revisit this article and update it accordingly.
### Features:
1. App/Models is now default:
artisan make:model Foo
This command will now create a Foo model in the app/Models folder. If developers decide to delete the app/Models folder and run the above command again, Laravel will understand that the app/Models folder doesn't exist and will create the Foo model in the app root. This feature was a great addition by Taylor after he ran a poll on Twitter to learn the community's preference.
Once the app/Models folder is deleted, the artisan command will create the Foo model in the root, i.e. app/Foo.php.
2. No more magic prefixes in namespaces:
Route::get('/foo', 'FooController@index')
When FooController was declared, behind the scenes App\Http\Controllers\FooController was resolved by the RouteServiceProvider using this function:
protected function mapWebRoutes()
{
    Route::middleware('web')
        ->namespace($this->namespace)
        ->group(base_path('routes/web.php'));
}

But now the namespace property is trashed by default, so that writing

Route::get('/foo', '\App\Http\Controllers\FooController@index')

won't cause a magic prefix to be added.

3. Route closures can now be cached:

Usually, when route caching is enabled, a serialized output of the routes file is generated, i.e. everything is compiled into a big PHP array. Routes defined with closures, like the example below, previously couldn't be cached.

Route::get('foo', function () {
    return 'Hello World';
});

Now closures can also be cached.

4. Extended component enhancement:

Nested component attributes are now allowed to be merged.

public function render()
{
    return <<<'HTML'
        <x-button {{ $attributes->merge(['class' => 'bg-red']) }}>
            {{ $slot }}
        </x-button>
    HTML;
}

5. Registering event listeners, syntax improvement:

We usually register an event listener like this:

/**
 * The event listener mappings for the application.
 *
 * @var array
 */
protected $listen = [
    'App\Events\SomeEvent' => [
        'App\Listeners\EventListener',
    ],
];
Now we do not need to repeat class names; rather, the event is type-hinted:
Event::listen(SomeEvent::class, function (SomeEvent $event) {
    info($event->whatever);
});
Event::listen(function (SomeEvent $event) {
    info(get_class($event));
});
6. Addition of queueable anonymous event listeners
We can now queue model events inside models:
// in the Foo model class
protected static function booting()
{
    static::created(queueable(function (Foo $foo) {
        info('Queued: '.$foo->name);
    }));
}
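One assumption worth flagging: the queueable helper used above has to be imported; in current Laravel it is a namespaced function, so the import would look like this (re-check once the 8.x docs are published):
use function Illuminate\Events\queueable;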
7. Secret maintenance mode
artisan down --secret=laracon-2020
IP whitelisting for maintenance mode is cumbersome; now we can access the application with a secret instead. While the server is down for maintenance, the command above lets us keep accessing the application and its routes, and the secret remains valid for several hours even in maintenance mode. Several options are available on the artisan down command, like render, redirect, status and secret, to ensure much more control.
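Combining those options might look like the sketch below (flag names taken from the list above; exact behaviour may change before release):
artisan down --redirect=/ --status=200 --secret=laracon-2020
artisan down --render=errors::503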
8. New catch method for anonymous queued functions:
There is a new catch method for anonymous queued functions. Previously a failure simply went to the failed jobs table; now we can register a callback to catch it:
Route::get('/queue-catch', function () {
    dispatch(function () {
        throw new Exception('Something went wrong...');
    })->catch(function (Throwable $e) {
        info('Caught exception');
    });
    return 'Dispatched';
});
9. Exponential backoff job class:
BackoffJob::dispatch();
We can now exponentially increase the backoff between retries, using a simple declaration in the job class:
public function backoff()
{
    return [1, 5];
}
10. Job Batching:
Job batching has been modelled after Ruby's Sidekiq. Now we can queue a bunch of jobs that trigger at the same time, and callbacks fire after the entire batch is finished.
Bus::batch([
    new BatchedJob,
    new BatchedJob,
    new BatchedJob,
    new BatchedJob,
    new BatchedJob,
])->then(function (Batch $batch) {
    info('All Jobs completed Successfully');
})->catch(function (Batch $batch, Throwable $e) {
    info('First Batch job failure detected');
})->finally(function (Batch $batch) {
    info('The batch has finished executing');
})->dispatch();
Failed items can also be caught without interruption.
Real-time progress of these batched jobs can be inspected:
Bus::findBatch('<id of the batch>')
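The object returned by Bus::findBatch() carries progress information. A sketch of inspecting it, where the property and method names are my assumption rather than anything confirmed in the talk:
$batch = Bus::findBatch($batchId); // $batchId is a placeholder
info($batch->totalJobs);   // total number of jobs in the batch
info($batch->progress());  // completion percentage so far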
11. Rate limiting improved:
A new global rate-limiting facade (like auth gates) is introduced. Limiters are referenced from route middleware:
['throttle:global'], ['throttle:nested'],
// in the configuration functions
Limit::perMinute(100);
Limit::perMinute(3)->by($request->input('email'));
12. Custom Exceptions:
The report and render methods can be ditched; there is a new way of declaring how a custom exception is reported and rendered:
$this->reportable(function (AppException $e) {
    info('error');
})->stop();
$this->renderable(fn (AppException $e) => … );
13. Squash migrations:
artisan schema:dump
This generates the entire set of existing migrations into a schema file. After running this command, if we run any new migrations, we can run the same command again and it will squash everything into a single new schema dump. It is basically for development mode, like one file with 50 migrations squashed.
14. Factories are re-written:
They are class based, and they are like factories on steroids; pretty much everything can be done, like creating data with relationships or for one specific id. A short note on the functions is given below:
// new definition function
public function definition()
{
    return [
        'name' => …blah blah
    ];
}
// same as before
Foo::factory()->create();
Foo::factory()->create([]);
// custom functions
Foo::factory()->withStatus('draft')->create();
// which is:
public function withStatus()
{
    return $this->state(fn ($attributes) =>
        ['status' => '']
    );
}
//relationships of all kinds can be used in factory
Foo::factory()
->times(3)
->has(Bar::factory()
->times(3)
->state('draft'))
->create()
Legacy factory package will be available to support existing ones.
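To make the class-based idea concrete, a new-style factory could be sketched like this (the namespace and base class are my assumptions from the demo, so details may differ in the final release):
namespace Database\Factories;

use App\Models\Foo;
use Illuminate\Database\Eloquent\Factories\Factory;

class FooFactory extends Factory
{
    // The model this factory builds.
    protected $model = Foo::class;

    // Default attribute values.
    public function definition()
    {
        return ['name' => $this->faker->name, 'status' => 'published'];
    }

    // Custom state helper, as described above.
    public function withStatus($status)
    {
        return $this->state(fn ($attributes) => ['status' => $status]);
    }
}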
15. Laravel JetStream:
A free package, it is more than any general auth scaffolding. Laravel JetStream can be used with Livewire/Inertia, and has several big punches in store to woo the developer crowd for sure.
I hope you enjoyed this quick rundown of the new features. Huge shout-out to the entire community for the tweets; you can follow most of them at LaraconOnline. Feel free to poke me @saadbinamjad if I missed any features. Till then, happy coding folks!
|
2020-09-26 11:42:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1812262386083603, "perplexity": 6764.41312293196}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400241093.64/warc/CC-MAIN-20200926102645-20200926132645-00215.warc.gz"}
|
https://tex.stackexchange.com/questions/537761/changing-the-enumeration-of-the-sub-figures/537772
|
# Changing the enumeration of the sub-figures
I am working on my thesis. There is this rule for the enumeration of the sub-figures:
Sub-figures should be enumerated by adding the corresponding lowercase letter to the Figure designation such as Figure 3.8a
However, I could only manage 3.8(a).
The code that I am using is this:
\begin{figure}[h]
\subfigure[]{\includegraphics[width=\textwidth]{2016.png}
\label{2016}}
\subfigure[]{\includegraphics[width=\textwidth]{2017.png}
\label{2017}}
\caption{Graphs of daily averages of Turkish hourly electricity consumption values in \subref{2016} 2016
\subref{2017} 2017.}
\label{consumptions}
\end{figure}
The thesis rules tell us to use subfigure package. How can I get 3.8a instead of 3.8(a)? Thank you!
• Welcome to TeX.SE! Please provide a complete small document with your figures. Off-topic: it seems that you use the obsolete package subfigure. Apr 10, 2020 at 7:12
## 1 Answer
As pointed out by @Zarko in the comments, subfigure is deprecated; its functionality is now covered by subfig or subcaption.
To achieve what you want with subcaption, we'll change the default setup by adding:
\usepackage[labelformat=simple]{subcaption} %%default is [labelformat=parens]
%%Setup for captions
%%If you want figure numbering as 'figure 3.8'
\renewcommand{\thefigure}{\thechapter.\arabic{figure}}
%%if you want 3.8a for subfigure
\renewcommand{\thesubfigure}{\normalsize\thefigure.\alph{subfigure}:}
%%if you want figure 3.8a for subfigure
\renewcommand{\thesubfigure}{\normalsize figure \thefigure.\alph{subfigure}:}
Here is a complete testing code :
\documentclass[10pt,a4paper]{book}
\usepackage[utf8]{inputenc}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{graphicx}
\usepackage{caption}
\usepackage[labelformat=simple]{subcaption} %%default is [labelformat=parens]
%%If you want figure numbering as 'figure 3.8'
\renewcommand{\thefigure}{\thechapter.\arabic{figure}}
%%if you want 3.8a
%\renewcommand{\thesubfigure}{\normalsize\thefigure.\alph{subfigure}:}
%%if you want figure 3.8a
\renewcommand{\thesubfigure}{\normalsize figure \thefigure.\alph{subfigure}:}
\begin{document}
\chapter{first chapter}
\begin{figure}
\centering
\begin{subfigure}{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{example-image-a}
\caption{Test}
\end{subfigure}\hfill
\begin{subfigure}{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{example-image-b}
\caption{Test}
\end{subfigure}
\caption{Total}
\end{figure}
\end{document}
This produces the desired 3.8a-style sub-figure labels in the output.
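If your thesis class forces the older subfig package instead of subcaption, a similar effect should be achievable through the caption machinery that subfig builds on; this is an untested sketch:
\usepackage{subfig}
\captionsetup[subfloat]{labelformat=simple}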
|
2022-08-12 05:33:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8112087249755859, "perplexity": 7142.937555403524}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571584.72/warc/CC-MAIN-20220812045352-20220812075352-00514.warc.gz"}
|
https://rivet.hepforge.org/analyses/ATLAS_2017_I1589844_EL.html
|
rivet is hosted by Hepforge, IPPP Durham
## Rivet analyses reference
### ATLAS_2017_I1589844_EL
$k_T$ splittings in $Z$ events at 8 TeV, electron mode
Experiment: ATLAS (LHC)
Inspire ID: 1589844
Status: VALIDATED
Authors:
• Christian Gutschow
• Frank Siegert
References:
Beams: p+ p+
Beam energies: (4000.0, 4000.0) GeV
Run details:
• $pp \to Z(\to ee/\mu\mu) +$ jets at 8 TeV
A measurement of the splitting scales occurring in the $k_\text{t}$ jet-clustering algorithm is presented for final states containing a $Z$ boson. The measurement is done using 20.2 fb$^{-1}$ of proton-proton collision data collected at a centre-of-mass energy of $\sqrt{s} = 8$ TeV by the ATLAS experiment at the LHC in 2012. The measurement is based on charged-particle track information, which is measured with excellent precision in the $p_\text{T}$ region relevant for the transition between the perturbative and the non-perturbative regimes. The data distributions are corrected for detector effects, and are found to deviate from state-of-the-art predictions in various regions of the observables.
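For reference, the analysis can be run over a HepMC event file with the standard rivet command-line tool (the event file name below is a placeholder):
rivet --analysis=ATLAS_2017_I1589844_EL events.hepmc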
|
2018-10-16 17:46:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7435216903686523, "perplexity": 4422.790494543116}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583510853.25/warc/CC-MAIN-20181016155643-20181016181143-00010.warc.gz"}
|
https://dba.stackexchange.com/questions/53225/how-to-define-alias-in-an-array-agg-expression
|
# How to define alias in an ARRAY_AGG expression?
I'm trying to return pure JSON from a Postgres 9.2 table.
SELECT ARRAY_TO_JSON(ARRAY_AGG(ALBUM_ROW))
FROM (
SELECT
album,
max(release_year) AS release_year,
max(artwork_path) AS artwork_path,
MD5(concat(album,release_year,artist)) AS token,
ARRAY_AGG((media_files.position, media_files.token, media_files.title) ORDER BY media_files.position) as media_files
FROM media_files
INNER JOIN playlist_media_files ON playlist_media_files.media_file_id = media_files.id
WHERE playlist_media_files.playlist_id = 1
GROUP BY album, release_year, artist
ORDER BY artist, release_year
) as ALBUM_ROW
This query works fairly well, however on the line:
ARRAY_AGG((media_files.position, media_files.token) ORDER ...) as media_files
I would like to alias position and token attributes in the result set.
AS is apparently not permitted here.
I would have written:
ARRAY_AGG((media_files.position AS xxx, media_files.token AS yyy) ORDER BY media_files.position) as media_files
But it's not working.
Postgres documentation says: "where aggregate_name is a previously defined aggregate (possibly qualified with a schema name), expression is any value expression that does not itself contain an aggregate expression or a window function call, and order_by_clause is an optional ORDER BY clause as described below."
http://www.postgresql.org/docs/9.3/static/sql-expressions.html#SYNTAX-AGGREGATES
Am I out of luck?
You are forming an ad-hoc row type (effectively an anonymous record) with this expression:
(media_files.position, media_files.token, media_files.title)
and aggregating those anonymous records into an array here:
ARRAY_AGG((media_files.position, media_files.token, media_files.title)
ORDER BY media_files.position) AS media_files
Array types can only be built upon well-known types. Your option is to announce such a type to the system and cast the record to it before forming the array. Create a well-known composite type:
CREATE TYPE my_type AS (
position int -- data type?
,token text
,title text
)
I am guessing data types for lack of information here. Fill in your actual types.
Creating a table has the same effect: It announces a well known composite type to the system indirectly, as well. For this reason, you can (ab-)use a temporary table to register a composite type for the duration of the session:
CREATE TEMP TABLE my_type (
position int -- data type?
,token text
,title text
)
Either way, you can then cast your record:
ARRAY_AGG((media_files.position, media_files.token, media_files.title)::my_type
ORDER BY media_files.position) AS media_files
Then you can reference elements of the (now well-known) type by name:
SELECT (media_files[1]).position, (media_files[1]).token
FROM (
...
,ARRAY_AGG((media_files.position, media_files.token, media_files.title)::my_type
ORDER BY media_files.position) AS media_files
...
FROM ....
GROUP BY ...
) sub;
Now, Postgres can use these names for building a JSON value. Voilà.
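A minimal, self-contained check of the idea with toy values (requires Postgres 9.2+, where array_to_json() is available):
CREATE TYPE my_type AS (position int, token text, title text);
SELECT array_to_json(array_agg(ROW(1, 'abc', 'My Song')::my_type));
-- yields: [{"position":1,"token":"abc","title":"My Song"}]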
I don't have Postgres on my machine, and I really don't know it all that well, but it seems to me you could alias the desired columns through a subquery something like this:
SELECT ARRAY_TO_JSON(ARRAY_AGG(ALBUM_ROW))
FROM (
SELECT
album,
max(release_year) AS release_year,
max(artwork_path) AS artwork_path,
MD5(concat(album,release_year,artist)) AS token,
ARRAY_AGG((mf.PositionAlias, mf.TokenAlias, mf.TitleAlias) ORDER BY mf.PositionAlias) as media_files
FROM (
SELECT
album,
max(release_year) AS release_year,
max(artwork_path) AS artwork_path,
MD5(concat(album,release_year,artist)) AS token,
position as PositionAlias,
token as TokenAlias,
title as TitleAlias
FROM media_files
INNER JOIN playlist_media_files ON playlist_media_files.media_file_id = media_files.id
WHERE playlist_media_files.playlist_id = 1
GROUP BY album, release_year, artist
ORDER BY artist, release_year
) AS mf
) as ALBUM_ROW;
• hello, I tried something similar but position is ambiguous, after I fixed it column "media_files.position" must appear in the GROUP BY clause or be used in an aggregate function. I'm currently experimenting with this but it seems over the top: pastie.org/private/kqksctmbgognytkeqovbya – John Smith Nov 13 '13 at 7:42
|
2021-04-22 10:36:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35847190022468567, "perplexity": 13670.015787668071}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039603582.93/warc/CC-MAIN-20210422100106-20210422130106-00304.warc.gz"}
|
https://homework.cpm.org/category/CCI_CT/textbook/calc/chapter/1/lesson/1.3.2/problem/1-121
|
### Home > CALC > Chapter 1 > Lesson 1.3.2 > Problem1-121
1-121.
Which method comes closer to the actual curve f(x)?
Each section is 2 units wide. Use this knowledge and the equation of the function to find the areas of each section for both methods.
Which method gave you the number closest to 25.5?
|
2021-03-09 02:02:59
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8539524674415588, "perplexity": 1278.4054664384423}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178385534.85/warc/CC-MAIN-20210308235748-20210309025748-00023.warc.gz"}
|
https://www.intechopen.com/books/energy-efficiency-and-sustainable-lighting-a-bet-for-the-future/explorative-analysis-of-household-energy-consumption-in-bauchi-state-nigeria
|
Open access peer-reviewed chapter
# Explorative Analysis of Household Energy Consumption in Bauchi State, Nigeria
By Abubakar Hamid Danlami and Rabi’ul Islam
Submitted: May 22nd 2019. Reviewed: September 7th 2019. Published: March 25th 2020.
DOI: 10.5772/intechopen.89597
## Abstract
This study was conducted with the major aim of providing descriptive and exploratory analyses of the socio-economic characteristics of households in Bauchi state and their pattern of energy choice and consumption. A total sample of 539 household responses, selected using cluster area sampling, was analysed. The analysis indicates that the average monthly household income is USD 220, and the average monthly firewood consumption per household is about 35 bundles. Moreover, about 70% of the respondents stated that they use firewood as their main source of cooking fuel. For lighting, 65% of the households stated that they use electricity as their main source. Additionally, the correlation analysis indicates that income has a positive relationship with the quantity of energy consumed, while there is a negative relationship between the price of a particular source of energy and its consumption. The study suggests that a sound policy is needed to shift households' heavy dependence on firewood towards cleaner sources of energy.
### Keywords
• firewood; cooking and lighting
• energy
• consumption
## 1. Introduction
Energy is one of the most important aspects of household life. It is a commodity that is vital for the existence of modern household living [1, 2]. In fact, the total welfare of a household depends on the type and the pattern of the household’s energy utilisation. The household energy consumption pattern in Bauchi state can be categorised into three major dimensions: cooking, lighting and cooling purposes. For satisfying the needs of cooking, various sources are available, which includes: fuel-wood, kerosene, gas and electricity, plus elements of plant residues and animal dung which are used in some parts of the rural areas of the state. For lighting purpose, the various choices mainly include: electricity, petroleum/diesel (used for fuelling generators), kerosene, candles, traditional lamps and firewood, mostly based on socio-economic status of a household [3, 4]. Furthermore, for the purpose of drinks and space cooling, various energy sources are available which consist of mainly electricity and petroleum or diesel (gas) power generator.
Of all the above categories of fuel sources, electricity, liquefied petroleum gas (LPG) and kerosene are regarded as either clean (electricity and gas) or transitional (kerosene) energy sources [5], while the traditional biomass fuels, which include fuel-wood, animal dung and plant residues, are not clean and can lead to numerous economic, social, health and environmental problems [6, 7].
The use of traditional lamps as the main source of lighting is a threat to the health and lives of the users, because such lamps produce a high rate of carbon monoxide that is harmful to human health; this is why, in most of the rooms where such lamps are used, there is black dust on the ceilings and on the walls closest to the lamp. In the same vein, the use of fuel-wood for cooking and lighting is not environmentally friendly; it has negative impacts on the atmosphere and on people's lives [8, 9]. Apart from deforestation, desertification and soil erosion, fuel-wood has a very low thermal efficiency, and the smoke is hazardous to human health, especially to the women and children who mostly do the cooking in homes [10]. Acute respiratory infections (ARI) in children are one of the leading causes of infant and child morbidity and mortality [11, 12]. Studies have found associations between biomass fuel use and lung cancer: a 30-year-old woman cooking with straw or wood has an 80% increased chance of developing lung cancer later in life [13, 14].
The underlying rationale here is to encourage households to shift from non-clean to clean energy sources [15], because there are many benefits in using clean energy. It has been widely argued that moving towards clean fuels is an important option for improving the standard of living of households who rely heavily on biomass [16]; it is the key factor in improving the mode of living of the rural population [17]. Moreover, encouraging households to switch to clean energy would lead to the consumption of less fuel per meal and less time spent gathering fuel, time which could be used in other activities such as attending school and other income-generating activities [5]. Clean energy provides easier access to education, health care and household resources. Children who do not have to collect biofuels can attend school [18, 19]. Switching to clean fuels could also free up time for women to engage in income-generating pursuits [18].
To attain these benefits, a very important and effective policy that provides access to clean energy is required [9]. However, such an effective policy also depends on good research conducted to investigate and explore households' energy consumption patterns in the relevant area [20]. This study is conducted with the major aim of exploring the socio-demographic features of households and their pattern of energy choice and consumption in Bauchi state, Nigeria, and of assessing the correlation between energy consumption and the socio-demographic characteristics of households in the state.
The remaining part of the chapter is as follows: Section 2 consists of the review of related literature, Section 3 consists of methodology and Section 4 discussed the results and findings of the study. The last section consists of conclusions and policy implications of the study.
## 2. Literature review
This section examines and highlights the factors that influence the level of household fuel choice and consumption. Each of these factors is expected to relate with the quantity of fuel consumption of households either positively or negatively. The explanation of different categories of factors influencing the households’ energy choice and consumption is explained below.
### 2.1 Economic factors
These are the factors that serve as a measure of the economic status of the household and can influence the household's fuel consumption decision. For instance, studies have established that there is a positive relationship between household income and the adoption of clean energy [21, 22, 23]. Poorer households, especially in developing countries, tend to adopt firewood, plant residues, animal dung and other non-clean energy sources, whereas wealthier households tend to adopt energy from cleaner sources.
A relationship also exists between the type of occupation of the household head and the nature of the energy source adopted by the household. Empirical studies conducted in [2, 24] showed that those in white-collar jobs (executives, big entrepreneurs) adopt clean energy, while those in blue-collar jobs (such as farming and trading) tend to adopt firewood and other biomass fuels. Home ownership, one of the indicators of the economic status of households, affects their decision on the type of energy source to adopt: those who live in their own house tend to adopt clean energy sources [22, 25]. The price of energy has a negative relationship with energy consumption; when the price of a particular energy source is high, households switch to other available alternatives. This is in line with the law of demand and has also been established by previous studies [9, 26].
### 2.3 House characteristics
The characteristics of the building in which the household lives can also affect its energy choice behaviour. For instance, the location of the home has a serious impact on energy consumption decisions: households located in urban areas adopt cleaner energy than their rural counterparts [2, 21]. In addition, the type of house (i.e. the nature of the building) exerts some influence on household energy consumption behaviour; in [2, 21] it was empirically found that living in a detached house has a significant positive relationship with the adoption of gas, electricity and liquid fuel. The size of the residence also influences energy consumption behaviour: most previous studies, such as [22, 32, 33], found that the larger the size of the building, the higher the adoption of fuel wood, all things being equal. Furthermore, the number of rooms in the house is one of the building characteristics influencing households' energy consumption choice; in [2, 24] this variable was found to have a positive significant relationship with the household use of liquefied petroleum gas (LPG). Shared dwellings (i.e. more than one household living in the same building) are another factor shaping the energy consumption behaviour of households [22].
## 3. Data and methodology
Because this chapter is a study of households at micro level, this section contains the description of the study samples and the methods used in data gathering.
### 3.1 Sample size
In this study, the total sample size was determined based on [34]. The formula for determining a good representative sample is:
$S = \frac{N \, P(1-P)}{(B/C)^2 (N-1) + P(1-P)}$ (1)
where S, required sample size; N, the population size = 769,960; P, the population proportion expected to answer in a particular way (the most conservative proportion is 0.50); B, the degree of accuracy expressed as a proportion (0.05); and C, the Z statistic value based on the confidence level (in this case, 1.96 is chosen for the 95% confidence level).
Therefore, the sample size can be determined as:
$S = \frac{769{,}960 \times 0.5 \times (1-0.5)}{(0.05/1.96)^2 \times (769{,}960-1) + 0.5 \times (1-0.5)} = \frac{192{,}490}{501.067 + 0.25}$ (2)
$S = \frac{192{,}490}{501.317} \approx 384.$ (3)
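A quick arithmetic check of the computation above (a sketch in Python; the variable names mirror the definitions in the text):
# Sample-size formula: S = N*P*(1-P) / ((B/C)^2 * (N-1) + P*(1-P))
N, P, B, C = 769_960, 0.50, 0.05, 1.96
S = N * P * (1 - P) / ((B / C) ** 2 * (N - 1) + P * (1 - P))
print(round(S))  # prints 384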
This formula has been widely applied in household micro-level studies [35, 36, 37, 38]. Furthermore, it is commensurate with the sample sizes recommended by social science researchers. For instance, in [39], a rule of thumb is given for selecting a good sample size: larger than 30 and less than 500 for most research, and, in the case of multivariate studies, at least 10 times as large as the number of variables. In [40], a rule of thumb of a sample at least 5–10 times larger than the number of variables is given. However, for the purpose of data collection for this study, a total of 750 questionnaires were distributed instead of the pre-determined 384 samples, to avoid a problem of non-response rate. According to [41], since not every selected sample is likely to respond, a researcher needs to increase the sample size to avoid non-response bias. Babbie (1995) (cited in [42]) argued that a response rate of at least 50% is necessary for reporting and analysis. Finally, about 548 filled questionnaires were returned, more than 70% of the total number issued.
### 3.2 Sampling technique
For the purpose of this study, cluster area sampling method was adopted. According to [43], area sampling is a special type of cluster sampling whereby samples are grouped and clustered on the basis of geographical location areas [44, 45]. The reason for adopting this method of sampling is that though the sampling frame for the various clusters of Bauchi state is available and was obtained from the office of Nigerian National Population Commission, there is no available frame containing the list of all households living in Bauchi state. Hence in this situation, area sampling is one of the most suitable techniques of data collection. As argued by various scholars, the underlying practical motivation for using area sampling is the absence of complete and accurate list of the universal elements under study since it does not depend upon the population frame [44, 45, 46]. Moreover, from [47], it was argued that in the case of cluster sampling, the full list of clusters forms the sampling frame and not the list of individual elements within the population.
The sampling technique used in this study is the multistage cluster sampling. In the first stage, the whole of the study area was divided into three groups (clusters) based on the geo-political zonal categorisation of the study area; the various categories are: Bauchi South, Bauchi Central and Bauchi North. In the second stage, two clusters (Bauchi South and Bauchi North) were selected randomly out of the three clusters.
In the third stage, these two clusters were further categorised into two sub-clusters: urban and rural areas. Then, a total of 10 wards were randomly selected from the urban areas, while a total of 13 wards were randomly selected from the rural areas, giving a total of 23 wards used as the sampling wards. In the fourth stage, six communities were selected randomly from each of the selected urban wards, making a total of 60 communities from the urban areas, and six communities were randomly selected from each of the selected rural wards, making a total of 78 communities from the rural areas. This gives a total of 138 sampled communities (60 urban and 78 rural) used in the study. In the last stage, six households were systematically selected from each of the selected communities of the urban areas, making a total of 360 (i.e. 60 × 6 = 360) households from the urban areas. On the other hand, five households were systematically selected from each of the selected communities of the rural areas, making a total of 390 (i.e. 78 × 5 = 390) households from the rural areas. Finally, a total of 548 households returned the filled questionnaires, out of which nine were discarded.
## 4. Results and findings
This section contains the findings of this study. Since this study is a descriptive and exploratory analysis, the tools that were used to analyse the data are the various descriptive statistics, frequencies, percentages and correlation analyses.
### 4.1 Summary of descriptive statistics
This section provides information about the descriptive statistics. The major descriptive statistics are the mean, standard deviation, minimum and maximum. Table 1 exhibits the values of the summary statistics.
| Variables | N | Mean | SD | Min | Max |
|---|---|---|---|---|---|
| Gender | 538 | 0.874 | 0.33 | 0 | 1 |
| Age | 536 | 36.43 | 11.7 | 23 | 60 |
| Marital status | 528 | 0.739 | 0.44 | 0 | 1 |
| Household size | 536 | 7.725 | 6.04 | 2 | 30 |
| Location | 537 | 0.538 | 0.50 | 0 | 1 |
| Home size (ft2) | 536 | 52.42 | 19.32 | 0 | 110 |
| Number of rooms | 536 | 6.515 | 3.81 | 2 | 23 |
| Cooking fuel main source | 539 | 0.443 | 0.81 | 0 | 3 |
| Hours of electricity | 519 | 27.30 | 27.8 | 0 | 97 |
| Price of firewood | 483 | 76.67 | 35.33 | 0 | 220 |
| Price of kerosene | 361 | 126.6 | 27.1 | 45 | 200 |
| Home appliances | 535 | 15.37 | 13.1 | 0 | 57 |
| Home ownership | 535 | 0.213 | 0.41 | 0 | 1 |
| Years of education | 536 | 14.21 | 6.17 | 0 | 22 |
| Lighting fuel main source | 532 | 0.438 | 0.67 | 0 | 2 |
| Firewood quantity | 449 | 34.23 | 17.1 | 4 | 90 |
| Income (USD) | 536 | 224.0 | 180 | 78 | 600 |
### Table 1.
Summary of descriptive statistics of variables.
Source: authors, 2019.
Table 1 shows that the monthly average consumption of firewood is about 35 bundles; this implies that, on average, every household in Bauchi State uses more than one bundle of firewood every day, a clear reflection of the high rate of firewood use in the state. Furthermore, the table indicates that the monthly average income of a household is a little more than USD 200, with a maximum value of USD 600. This implies that most households in Bauchi State belong to the poor income group; in fact, Bauchi State is the third poorest state in Nigeria [48]. Furthermore, the table indicates that the average firewood price per bundle is about ₦75 (about $0.40), and that, on average, a household in Bauchi state has about eight members, which approximately tallies with the estimated average household size in Bauchi state given in [49]. The table shows that the average weekly supply of electricity is only 27 hours, which clearly reflects the inadequate supply of electricity in the area, one of the factors that likely contributes to the high rate of biomass fuel use as the main source of energy by households in Bauchi state. Table 1 further shows that the average schooling experience of household heads in the study area is 14 years, representing schooling up to the Diploma/NCE levels of education. Similarly, the reported average number of rooms in the building in which each household lives is six; this number comprises bedrooms, rest rooms, sitting rooms and fallows. Additionally, the number of energy-use devices possessed at home, such as bulbs, fans, ACs, televisions and radios, shows an average value of 15 pieces, clearly a reflection of the low rate of modern energy use by households in the study area. Lastly, the table shows that the average age of household heads in Bauchi state is 36 years, which falls within the age group of the working population.
### 4.2 Socio-economic characteristics of households in Bauchi state and their pattern of energy consumption
The objective of this study is to explore and describe the socio-economic characteristics of households in Bauchi state, Nigeria, and their pattern of energy consumption. In this section, the study explores the socio-economic characteristics of households in Bauchi state and their pattern of fuel consumption, based on the study samples. Table 2 indicates the socio-demographic and economic characteristics of the respondents.

| Characteristics | Frequency | (%) | CUM |
|---|---|---|---|
| **Gender** | | | |
| Male | 470 | 87.36 | 87.36 |
| Female | 68 | 12.64 | 100 |
| **Age** | | | |
| 16–30 | 187 | 34.89 | 34.89 |
| 31–45 | 229 | 42.72 | 77.61 |
| 46–60 | 97 | 18.10 | 95.71 |
| Above 60 | 23 | 4.29 | 100 |
| **Marital status** | | | |
| Single | 138 | 26.14 | 26.14 |
| Married | 390 | 73.86 | 100 |
| **Level of education** | | | |
| Non-formal education | 55 | 10.26 | 10.26 |
| Primary school | 27 | 5.04 | 15.30 |
| Secondary | 95 | 17.72 | 33.02 |
| Diploma/NCE | 191 | 35.63 | 68.66 |
| B.Sc./HND | 124 | 23.13 | 91.79 |
| Postgraduate | 44 | 8.21 | 100 |
| **Occupation** | | | |
| No standard job | 59 | 11.09 | 11.09 |
| Farmer | 68 | 12.78 | 23.87 |
| Teacher | 106 | 19.92 | 43.80 |
| Banker | 17 | 3.20 | 46.99 |
| Lecturer | 18 | 3.38 | 50.38 |
| Medical practitioner | 37 | 6.95 | 57.33 |
| Businessman | 99 | 18.61 | 75.94 |
| Others | 128 | 24.06 | 100 |
| **Monthly income (USD)** | | | |
| 150 and below | 277 | 53.37 | 53.37 |
| 151–300 | 98 | 18.11 | 71.48 |
| 301–450 | 73 | 13.10 | 84.59 |
| 451–600 | 56 | 10.02 | 94.61 |
| Above 600 | 32 | 5.39 | 100 |
| **Household size** | | | |
| 1–10 | 424 | 79.22 | 79.22 |
| 11–20 | 94 | 17.44 | 96.66 |
| 21 and above | 18 | 3.34 | 100 |
### Table 2.
Socio-economic characteristics of households in Bauchi state.
Source: Authors, 2019.
Furthermore, among the factors that can shape the household pattern of energy consumption and switching are the characteristics of the building in which the households live. Table 3 contains the information on the home characteristics of the households.
| Characteristics | Frequency | (%) | CUM |
|---|---|---|---|
| **Home ownership** | | | |
| Self-owned home | 421 | 78.69 | 78.69 |
| Non-self-owned home | 114 | 21.31 | 100 |
| **Number of rooms** | | | |
| 1–5 | 305 | 56.90 | 56.90 |
| 6–10 | 112 | 20.90 | 77.80 |
| 11–15 | 106 | 19.54 | 97.34 |
| 16 and above | 13 | 2.43 | 100 |
| **Home size (ft2)** | | | |
| 1–24 | 35 | 6.53 | 6.53 |
| 25–49 | 138 | 25.75 | 32.28 |
| 50–74 | 300 | 55.97 | 88.25 |
| 75–99 | 27 | 5.04 | 93.29 |
| 100 and above | 36 | 6.72 | 100 |
| **Home location** | | | |
| Urban area | 289 | 53.82 | 53.82 |
| Rural area | 248 | 46.18 | 100 |
### Table 3.
Households’ home characteristics in Bauchi state.
Source: Authors, 2019.
Table 3 shows that about 79% of the respondents stated that they live in their self-owned home; this is especially the case in rural areas and some urban areas, where most of the houses are simple and traditional, mostly made up of mud, and such houses are easy to own or build. Furthermore, a majority of the respondents (about 57%) claimed that the number of rooms in their home is within the range of 1–5 rooms; these include bedrooms, sitting rooms and any other type of room usually found at home. On the size of the plot on which the home was built, a majority of the respondents (56%) stated that it is within the range of 50–74 sq. ft., implying that households in Bauchi state live in relatively large houses. Lastly, on the location of the respondents, 53.82% stated that they live in urban areas, while the remaining 46.18% live in rural areas of the state.
However, the information on the pattern of household fuel source, quantity of energy consumption and the amount of fuel expenditure is shown in Table 4.
| Characteristics | Frequency | (%) | CUM |
|---|---|---|---|
| **Main cooking fuel** | | | |
| Firewood | 378 | 70.65 | 70.65 |
| Kerosene | 114 | 21.31 | 91.96 |
| Electricity | 12 | 2.24 | 94.21 |
| Gas | 31 | 5.79 | 100 |
| **Main source of lighting fuel** | | | |
| Traditional | 53 | 9.96 | 9.96 |
| Semi-electrical | 127 | 23.87 | 33.83 |
| Electricity | 352 | 66.17 | 100 |
| **Average firewood consumption monthly (bundle)** | | | |
| 1–19 | 62 | 13.81 | 13.81 |
| 20–39 | 287 | 63.92 | 77.73 |
| 40–59 | 43 | 9.57 | 87.53 |
| 60 and above | 57 | 12.69 | 100 |
| **Average kerosene consumption monthly (litre)** | | | |
| 1–15 | 99 | 46.70 | 46.70 |
| 16–30 | 84 | 39.62 | 90.57 |
| 31–45 | 15 | 7.08 | 93.40 |
| 46 and above | 14 | 6.60 | 100 |
| **Average monthly expenditure on electricity (USD)** | | | |
| 9 and below | 366 | 86.52 | 86.52 |
| 10–19 | 47 | 11.11 | 97.63 |
| 20–29 | 4 | 0.95 | 98.58 |
| 30 and above | 6 | 1.42 | 100 |
| **Number of energy use devices at home** | | | |
| Zero | 10 | 1.87 | 1.87 |
| 1–10 | 243 | 45.42 | 47.29 |
| 11–20 | 151 | 28.22 | 75.51 |
| 21–30 | 54 | 10.09 | 85.60 |
| Above 30 | 77 | 14.39 | 100 |
### Table 4.
Household energy consumption pattern in Bauchi state.
Source: Authors, 2019.
Table 4 exhibits the pattern of households' energy consumption behaviour in Bauchi state. Based on the responses from the selected samples, a majority of the respondents (more than 70%) stated that their main fuel source for cooking is firewood. This is not surprising; it reflects the situation in Bauchi state, where the majority of households, especially in rural areas, adopt firewood as the main cooking fuel, which also tallies with the information provided in [50]. Furthermore, 21% of the respondents stated that they use kerosene as the major cooking fuel; about 6% use gas; and fewer than 3% claim to use electricity as their main cooking fuel, mainly in the urban areas of the state. This pattern of main cooking fuel adoption is mostly due to culture, availability and affordability. On the main source of lighting, about 10% of the respondents stated that they rely mainly on traditional sources of lighting such as traditional lamps, kerosene and charcoal. Another 24% stated that they rely mostly on semi-electric sources such as battery torch lights and rechargeable lanterns. However, the majority stated that they rely mostly on the available electricity as their main source of lighting. This implies that most households in Bauchi state, despite the interruptions in supply, rely mostly on electricity as their main source of lighting, especially urban dwellers.
### 4.3 Correlation analysis of factors influencing household energy consumption in Bauchi state, Nigeria
In this section, a correlation analysis was conducted in order to explore the nature of the correlation that exists among variables used in this study. Usually, a negative value indicates negative relationship between variables and a positive value indicates positive relationship between variables. Table 5 exhibits the correlation values for variables in this study.
| | AGE | EDU | HHS | INC | RUM | LEC | PFW | HPS | FWQ | PKR | KRQ | XEC | HSZ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| AGE | 1.00 | | | | | | | | | | | | |
| EDU | −0.05 | 1.00 | | | | | | | | | | | |
| HHS | 0.29 | −0.09 | 1.00 | | | | | | | | | | |
| INC | 0.28 | 0.26 | 0.19 | 1.00 | | | | | | | | | |
| RUM | 0.19 | −0.09 | 0.42 | 0.12 | 1.00 | | | | | | | | |
| LEC | −0.03 | 0.25 | −0.06 | 0.19 | −0.08 | 1.00 | | | | | | | |
| PFW | 0.10 | −0.13 | 0.01 | 0.01 | −0.01 | −0.07 | 1.00 | | | | | | |
| HPS | 0.05 | 0.03 | 0.05 | 0.16 | 0.10 | 0.14 | −0.02 | 1.00 | | | | | |
| FWQ | 0.09 | −0.07 | 0.21 | 0.06 | 0.22 | 0.05 | −0.13 | −0.01 | 1.00 | | | | |
| PKR | 0.06 | −0.08 | −0.06 | 0.01 | −0.01 | −0.16 | 0.15 | 0.04 | −0.22 | 1.00 | | | |
| KRQ | 0.24 | −0.01 | 0.05 | 0.12 | 0.15 | −0.08 | 0.01 | 0.06 | 0.04 | −0.07 | 1.00 | | |
| XEC | −0.09 | 0.19 | −0.08 | 0.08 | −0.15 | 0.11 | −0.05 | 0.13 | −0.06 | −0.05 | 0.09 | 1.00 | |
| HSZ | 0.19 | 0.12 | 0.26 | 0.27 | 0.39 | 0.17 | 0.03 | 0.12 | 0.09 | −0.04 | 0.11 | 0.03 | 1.00 |
### Table 5.
Variables correlation matrix.
Source: Authors, 2019.
Note: AGE = age; EDU = education; HHS = household size; INC = income; RUM = number of rooms; LEC = hours of electricity supply; PFW = price of firewood/bundle; HPS = home appliances; FWQ = firewood quantity; PKR = kerosene price per litre; KRQ = kerosene quantity; XEC = monthly expenditure on electricity; HSZ = home size.
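For readers reproducing this step, a correlation matrix of this kind is typically computed directly from the survey data; a sketch in Python with a hypothetical file and the column abbreviations above:
import pandas as pd

# One row per household; columns named as in the note above (hypothetical file).
df = pd.read_csv("bauchi_households.csv")
corr = df[["AGE", "EDU", "HHS", "INC", "FWQ", "PKR", "KRQ", "XEC"]].corr()  # Pearson by default
print(corr.round(2))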
Table 5 indicates the nature and magnitude of the correlations that exist between the socio-economic characteristics of households in Bauchi state and the quantity of energy they consume. For instance, the correlation matrix shows a negative relationship between the quantity of firewood and the price of firewood (r = −0.13), between firewood quantity and the level of educational attainment (r = −0.07), between the price of kerosene and the quantity of kerosene (r = −0.07), and between hours of electricity and kerosene quantity (r = −0.08). Furthermore, negative relationships were found between monthly expenditure on electricity and variables such as household size, price of firewood and price of kerosene (correlation values: −0.08, −0.05 and −0.05, respectively). All these signs conform to a priori expectations.
On the other hand, Table 5 indicates a positive relationship between firewood quantity and household size (r = 0.22), and between kerosene quantity and variables such as household size, income and firewood price (correlation values: r = 0.05, 0.08 and 0.01, respectively). Additionally, positive relationships were found between monthly expenditure on electricity and variables such as education, income and kerosene quantity, with correlation coefficients of 0.19, 0.08 and 0.09, which clearly support the a priori expectations and the findings of previous studies [1, 6, 29, 30].
## 5. Discussion of findings
The study found that the monthly average consumption of firewood is about 35 bundles, implying that, on average, every household in Bauchi state uses more than one bundle of firewood every day. Furthermore, the study found that the monthly average income of a household is a little more than USD 200, with a maximum value of USD 600, implying that most households in Bauchi state belong to the poor income group. Additionally, the average weekly supply of electricity is only 27 hours, which clearly reflects the inadequate electricity supply in the area, one of the factors that likely contributes to the high rate of biomass fuel use as the main source of energy in the state. Similarly, the reported average number of rooms in the building in which each household lives is six, comprising bedrooms, rest rooms, sitting rooms and fallows. Additionally, the number of energy-use devices possessed at home, such as bulbs, fans, ACs, televisions and radios, shows an average value of 15 pieces, clearly a reflection of the low rate of modern energy use by households in the study area.
Furthermore, a majority of the respondents are male. Based on the culture of the people in the study area, males normally occupy the position of household head; even when the father (the head) has died, it is the younger brother of the deceased, or the first born in the family, not the mother, who emerges as the new head of the family, the belief being that men are stronger than women economically, socially and educationally. Therefore, a woman emerges as household head only when there is no able man in the family to look after its affairs. The study further found that about 75% of the respondents are married, reflecting the fact that married people are regarded as responsible for overseeing family affairs. In addition, the occupation of the respondents indicates that, of all those who chose a stated category, teaching (at primary or secondary level) obtained the highest proportion; teaching at these levels is one of the easiest jobs to find for both semi-professional and professional workers.
Furthermore, among the factors that can shape the household pattern of energy consumption and switching are the characteristics of the building in which the household lives. The study found that about 79% of the respondents live in their self-owned home; this is especially the case in rural areas and some urban areas, where most of the houses are simple and traditional, mostly made up of mud, and such houses are easy to own or build. Moreover, a majority of the respondents (about 57%) claimed that the number of rooms in their home is within the range of 1–5 rooms, including bedrooms, sitting rooms and any other type of room usually found at home. On the size of the plot on which the home was built, a majority of the respondents (56%) stated that it is within the range of 50–74 sq. ft.
Based on the responses from the selected samples, a majority of the respondents stated that their main fuel source for cooking is firewood. This is not surprising; it reflects the situation in Bauchi state, where a majority of households, especially in rural areas, adopt firewood as the main cooking fuel, which also tallies with the information provided in [50]. Furthermore, 21% of the respondents stated that they use kerosene as the major cooking fuel; about 6% use gas; and fewer than 3% claim to use electricity as their main cooking fuel, mainly in the urban areas of the state. This pattern of main cooking fuel adoption is mostly due to culture, availability and affordability. On the main source of lighting, about 10% of the respondents stated that they rely mainly on traditional sources such as traditional lamps, kerosene and charcoal; another 24% rely mostly on semi-electric sources such as battery torch lights and rechargeable lanterns; and the majority rely mostly on the available electricity as their main source of lighting.
## 6. Conclusions
This study conducted exploratory and descriptive analyses of the socio-economic characteristics of households and the pattern of their energy consumption (cooking and lighting fuel) in Bauchi state, Nigeria. The study found that the average monthly income of a typical household in Bauchi state is about USD 225, that a majority of households use firewood as their main source of cooking fuel, and that most households use electricity for lighting. Furthermore, a positive relationship was found between income and household energy consumption, and a similar positive relationship between household size and firewood consumption. On the other hand, the price of a particular energy source has a negative relationship with its consumption. Therefore, the government needs to discourage the high rate of firewood use as the main cooking fuel by embarking on policies that will ensure households switch from firewood to cleaner sources of cooking fuel such as electricity and gas.
## Conflict of interest
No conflict of interest reported by the authors.
## How to cite and reference
### Cite this chapter
Abubakar Hamid Danlami and Rabi’ul Islam (March 25th 2020). Explorative Analysis of Household Energy Consumption in Bauchi State, Nigeria, Energy Efficiency and Sustainable Lighting - a Bet for the Future, Manuel Jesús Hermoso-Orzáez and Alfonso Gago-Calderón, IntechOpen, DOI: 10.5772/intechopen.89597.
|
2020-03-29 07:08:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6551507711410522, "perplexity": 1616.5077828642509}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370493818.32/warc/CC-MAIN-20200329045008-20200329075008-00161.warc.gz"}
|
https://yihui.org/animation/example/savehtml/
|
# Insert animations into an HTML page
### Yihui Xie & Lijia Yu / 2017-04-04
This function first records all the plots in the R expression as bitmap images, then inserts them into an HTML page and finally creates the animation using the SciAnimator library.
It mainly uses the SciAnimator library, which is based on jQuery. It has a neat interface (both technically and visually) and is much easier to use or extend. Moreover, this function allows multiple animations in a single HTML page – just use the same HTML filename.
Optionally the source code and some session information can be added below the animations for the sake of reproducibility (specified by the option ani.options('verbose') – if TRUE, the description, loaded packages, the code to produce the animation, as well as a part of sessionInfo() will be written in the bottom of the animation; the R code will be highlighted using the SyntaxHighlighter library for better reading experience).
library(animation)
## A quick and dirty demo
des = c("This is a silly example.\n\n", "You can describe it in more detail.",
"For example, bla bla...")
saveHTML({
par(mar = c(4, 4, 0.5, 0.5))
for (i in 1:20) {
plot(runif(20), ylim = c(0, 1))
ani.pause()
}
}, img.name = "unif_plot", imgdir = "unif_dir", htmlfile = "random.html",
autobrowse = FALSE, title = "Demo of 20 uniform random numbers",
description = des)
## we can merge another animation into the former page as long
## as 'htmlfile' is the same; this time I don't want the
## animation to autoplay, and will use text labels for the
## buttons (instead of UTF-8 symbols)
des = c("When you write a long long long long description, R will try to wrap the",
"words automatically.", "Oh, really?!")
saveHTML({
par(mar = c(4, 4, 0.5, 0.5))
ani.options(interval = 0.5)
for (i in 1:10) {
plot(rnorm(50), ylim = c(-3, 3))
ani.pause()
}
}, img.name = "norm_plot", single.opts = "utf8: false", autoplay = FALSE,
interval = 0.5, imgdir = "norm_dir", htmlfile = "random.html",
ani.height = 400, ani.width = 600, title = "Demo of 50 Normal random numbers",
description = des)
## use the function brownian.motion() in this package; this
## page is created in 'index.html' under the current working
## directory
des = c("Random walk of 10 points on the 2D plane:", "for each point (x, y),",
"x = x + rnorm(1) and y = y + rnorm(1).")
saveHTML({
par(mar = c(3, 3, 1, 0.5), mgp = c(2, 0.5, 0), tcl = -0.3,
cex.axis = 0.8, cex.lab = 0.8, cex.main = 1)
ani.options(interval = 0.05, nmax = 150)
brownian.motion(pch = 21, cex = 5, col = "red", bg = "yellow")
}, img.name = "brownian_motion_a", htmlfile = "index.html", description = des)
## remove the 'navigator' (progress bar)
saveHTML({
par(mar = c(3, 3, 1, 0.5), mgp = c(2, 0.5, 0), tcl = -0.3,
cex.axis = 0.8, cex.lab = 0.8, cex.main = 1)
ani.options(interval = 0.05, nmax = 150)
brownian.motion(pch = 21, cex = 5, col = "red", bg = "yellow")
}, img.name = "brownian_motion_b", htmlfile = "index.html", navigator = FALSE,
description = c("Random walk of 10 points on the 2D plane",
"(without the navigation panel)"))
## use Rweb to create animations
if (interactive()) browseURL(system.file("misc", "Rweb", "demo.html",
package = "animation"))
|
2022-08-18 17:45:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3122331202030182, "perplexity": 10438.033002427166}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573242.55/warc/CC-MAIN-20220818154820-20220818184820-00602.warc.gz"}
|
https://rivet.hepforge.org/analyses/ATLAS_2015_CONF_2015_041_EL.html
|
rivet is hosted by Hepforge, IPPP Durham
## Rivet analyses reference
### ATLAS_2015_CONF_2015_041_EL
$Z$+jets at 13 TeV
Experiment: ATLAS (LHC)
Status: OBSOLETE
Authors:
• Christian Gutschow
References:
• ATLAS-CONF-2015-041
Beams: p+ p+
Beam energies: (6500.0, 6500.0) GeV
Run details:
• Inclusive $Z$ production in the electron channel
Preliminary measurements of the cross-section for the production of a $Z$ boson in association with jets in pp collisions at $\sqrt{s} = 13$\,TeV are presented, using data corresponding to an integrated luminosity of $85\,\text{pb}^{-1}$ collected by the ATLAS experiment at the Large Hadron Collider. The cross-sections are measured for events containing a $Z$ boson decaying to electrons or muons and produced in association with up to four jets in the kinematical range of $p_\text{T} > 30$\,GeV and $|y| < 2.5$. NB--Use the plugin names ATLAS_2015_CONF_2015_041_EL or ATLAS_2015_CONF_2015_041_MU to specify the lepton channel. Note: this analysis is superseded by ATLAS_2017_I1514251.
|
2020-04-01 01:16:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9272400140762329, "perplexity": 4270.908732113265}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370505359.23/warc/CC-MAIN-20200401003422-20200401033422-00463.warc.gz"}
|