http://quant.stackexchange.com/tags/continuous-time/hot | Tag Info
4
Beware, oversimplification ahead! (This means that the following is technically not correct, in fact it is false! But: it gives an intuition of what is going on!) If you toss a coin and count heads as $-1$ and tails as $1$ you get a mean of $0$ with a variance of $1$. When you add up multiple coin tosses, i.e. create a random process $dz(t)$, the mean ...
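A sketch of the scaling behind this intuition, using only the stated mean-$0$, variance-$1$ tosses: for $n$ independent tosses $X_i$,
$$\operatorname{Var}\Big(\sum_{i=1}^{n} X_i\Big)=\sum_{i=1}^{n}\operatorname{Var}(X_i)=n, \qquad \operatorname{sd}\Big(\sum_{i=1}^{n} X_i\Big)=\sqrt{n},$$
so a time step $\Delta t$ containing $n\propto\Delta t$ tosses produces a displacement of order $\sqrt{\Delta t}$, which is the heuristic content of $dz\sim\sqrt{dt}$.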
4
I mainly speak as market practitioner when I say that I believe in the end all models that are applied to data and real life pricing issues are discretized. Think about it, even the BS hedge argument is in the end just a "theoretical continuous time overlay" of actual discrete time steps and re-hedges. Thus some of the limiting assumptions re BS. You do not ...
4
Note that you can understand the $\Delta$ as an "operator" acting on $r$. So just act on $r$ twice: $$\Delta^2 r_t = r_t - 2 r_{t-1} + r_{t-2}.$$ In fact if you write the $r$ as a vector, $r = (r_1, r_2, \ldots, r_N)$, then $\Delta$ is an $N\times N$ matrix with elements $\Delta_{i,j} = \delta_{i,j} - \delta_{i-1,j}$. The AR(2) model can be written as ...
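A minimal numerical check of this operator view (an illustrative sketch; the toy series and variable names are made up):
import numpy as np
N = 6
r = np.arange(1.0, N + 1) ** 2                 # toy series r_1, ..., r_N
# Difference operator as an N x N matrix: Delta[i, j] = delta_{i,j} - delta_{i-1,j}
Delta = np.eye(N) - np.eye(N, k=-1)
d2 = Delta @ Delta @ r                         # act on r twice
manual = r[2:] - 2 * r[1:-1] + r[:-2]          # r_t - 2 r_{t-1} + r_{t-2} for t >= 3
print(np.allclose(d2[2:], manual))             # True (the first two rows involve boundary terms)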
3
If you just want to run some simplistic technical analysis on quotes, then select the last quote for each unique timestamp. That will ensure that you don't have duplicate timestamps. If you must have it evenly spaced (i.e. no gaps from one second to another), then you can reuse the previous quote to fill-in the missing value.
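A short pandas sketch of both steps (hypothetical file and column names):
import pandas as pd
quotes = pd.read_csv("quotes.csv", parse_dates=["timestamp"])   # hypothetical input
# Keep only the last quote for each unique timestamp
last = quotes.groupby("timestamp").last()
# Optional: reindex onto an evenly spaced one-second grid, forward-filling gaps
grid = pd.date_range(last.index.min(), last.index.max(), freq="1s")
regular = last.reindex(grid).ffill()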
2
You could try net positions: where you continuously buy and sell depending on the signals generated. Net positions may lead to unnecessary commissions/spread nickel-and-diming your profits away. Once you have picked a direction and already have trade entry, your system should instead continue looking for new signals in the BACKGROUND. New signals while in ...
2
The answer depends on the reasoning behind your forecast. Is this a mean-reversion signal? If so, perhaps the presence of a short signal shortly after a long signal indicates that the long signal was very profitable, and you should take profits immediately. Is it a momentum signal? If so, then perhaps the momentum of this stock is very choppy at the ...
2
If you designed the model to predict direction only, I would just use the current signal. You could test whether this is correct by calculating the signals and their 5-second lags, then regress 1-minute forward returns (or 55-second fwd returns) on them both, and see if the coeff on the 5-second lagged signal is significant. If it's not significant, just ...
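A rough sketch of the suggested regression test (hypothetical columns; ordinary least squares via statsmodels):
import pandas as pd
import statsmodels.api as sm
df = pd.read_csv("signals.csv", parse_dates=["time"], index_col="time")   # hypothetical input
df["signal_lag"] = df["signal"].shift(1)                  # the 5-second-lagged signal (one sampling step)
df["fwd_ret"] = df["price"].shift(-12) / df["price"] - 1  # ~1-minute forward return (12 x 5s bars)
data = df.dropna()
X = sm.add_constant(data[["signal", "signal_lag"]])
print(sm.OLS(data["fwd_ret"], X).fit().summary())         # inspect the coefficient on signal_lag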
2
First of all, GNP and GDP are economic time series, not economic models. Secondly, you can also get these time series at different frequencies, e.g. as quarterly data available on the OECD website. If you need the data at a different frequency you can obtain it by interpolation (for instance, cubic spline interpolation); this is the Matlab tutorial ...
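A minimal sketch of the interpolation step (hypothetical quarterly values; SciPy's cubic spline):
import numpy as np
from scipy.interpolate import CubicSpline
quarters = np.array([0, 3, 6, 9])                 # months at which quarterly GDP is observed
gdp_q = np.array([100.0, 101.2, 102.1, 103.0])    # hypothetical quarterly GDP levels
spline = CubicSpline(quarters, gdp_q)
months = np.arange(0, 10)                         # finer monthly grid inside the observed range
gdp_m = spline(months)                            # interpolated monthly series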
2
This is the answer to the first version of the question, which asked whether a stationary process has increasing variance over time. No: the definition of (weak) stationarity (http://en.wikipedia.org/wiki/Stationary_process) requires the variance to be the same at each point in time. In the literature this is often handled via the covariance function. For ...
1
If $V_0(\phi) < \Pi(0,x)$ at $t=0$: you sell short the claim and collect $\Pi(0,x)$; you buy the portfolio $\phi$ for $V_0(\phi)$; you put the money $\Pi(0,x) - V_0(\phi)$ in your risk-free instrument until $t=1$. At $t=1$ you'll be liable for the payoff of the claim you have shorted. The money you owe the counterpart long the claim is $\Pi(1,x)$. $\phi$ ...
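To make the arbitrage explicit, assume $\phi$ replicates the claim so that $V_1(\phi)=\Pi(1,x)$, and let $r$ be the risk-free rate; then at $t=1$
$$\underbrace{V_1(\phi)}_{\text{portfolio}}-\underbrace{\Pi(1,x)}_{\text{short claim}}+\underbrace{(1+r)\big(\Pi(0,x)-V_0(\phi)\big)}_{\text{risk-free account}}=(1+r)\big(\Pi(0,x)-V_0(\phi)\big)>0,$$
a riskless profit, so the assumed mispricing cannot persist.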
1
Intuitively, because of the central limit theorem: the Wiener process is a limit of a random walk, and after $n$ steps a random walk moves away from the origin by $\sim\sqrt{n}$. Edit: here is a complete answer. First the formula for the sum. The trick is the following simple observation: if $X_1,\dots,X_n$ are independent zero mean, then $E(\sum X_i)^2 = ...$
1
To help you understand why you need to follow recipes (like chrisaycock's) just have a look at your tick data. You will find ticks clustered at some points in time while they seem scarce at others. If you proceed with your recipe 2, you will lose those clusters of activity and stretch them out. In periods of low activity you will condense the market. ...
1
One solution I have been considering is to add a target position parameter with a time decay. For example, given the $t_1$ buy and $t_2$ short signals described in the question and assuming a 5-second signal window to simplify, we would have the following time-based target positions: ...
https://www.helpteaching.com/lessons/147/electromagnetic-spectrum
# Electromagnetic Spectrum
Introduction: The electromagnetic spectrum represents the range of wavelengths, or frequencies, over which electromagnetic radiation exists. Electromagnetic radiation on this spectrum is important to consider, especially with increasing concerns over Earth's changing climate. The use of fireworks, for example, releases chemicals that damage the ozone layer, allowing more ultraviolet radiation to enter the atmosphere and leading to an increase in cases of skin cancer in humans.
The electromagnetic spectrum can be arranged from lowest to highest frequency as follows: Radio waves, microwaves, infrared radiation, visible light (ROYGBIV), ultraviolet radiation, X-rays, and gamma rays. In this arrangement, as frequency increases going from radio waves to gamma rays, the wavelength will decrease and the energy of the electromagnetic radiation will increase.
In general, electromagnetic radiation with higher frequency and energy tends to be more dangerous. This is why destroying the ozone layer causes a major problem for humans, since the ultraviolet radiation that then reaches Earth has high energy and high frequency. It is also worth noting that the visible spectrum, in order from lowest to highest frequency, can be arranged as follows: red, orange, yellow, green, blue, indigo, and violet.
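The quantitative relationships behind this ordering are c = λν (wave speed equals wavelength times frequency) and E = hν (photon energy is proportional to frequency), where c is the speed of light and h is Planck's constant. For example, a radio wave at 10^8 Hz has a wavelength of about 3 m, while an X-ray at 10^18 Hz has a wavelength of about 3×10^-10 m and carries about 10^10 times more energy per photon.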
http://mathhelpforum.com/math-software/84263-matlab-probability.html | ## [MATLAB] Probability
Hello, I am currently really new to MATLAB and my lectures didn't really provide sufficient practice and exposure to how to use MATLAB, and I have an assignment due really soon, so I hope I could get you guys to teach me how to deal with these questions.
1. The dice game craps is played as follows. The player throws two dice, and if the sum is seven or eleven, then he wins. If the sum is two, three or twelve, then he loses. If the sum is anything else, then he continues throwing until he either throws that number again (in which case he wins) or he throws a seven (in which case he loses). Estimate the probability that the player wins. You need to add commands to estimate the probability.
The code provided is,
clear all
nreps=100000;
Event_W = 0;
for n=1:nreps
a= %Simulate the throw of two dice using the function 'randunifd'
if %Add code such that player obtain a 7 or 11
Event_W = Event_W +1;
elseif %Add code such that player obtains a 2,3 or 12
else
Throw = 1;
while %Add condition such that we keep playing until player either wins or loses
b= randunifd(1,6)+randunifd(1,6);
if %Add condition such that player wins
Event_W = Event_W+1;
Throw=0;
elseif %Add condition such that player loses
Throw=0;
end
a=b;
end
end
end
RelFreq = Event_W/nreps
I really have no idea how to do this at all. Your help would be greatly appreciated.
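For reference, here is one way the placeholders could be filled in, assuming randunifd(a,b) returns a uniformly distributed integer between a and b (a sketch, not necessarily the intended solution; the estimate should come out near 244/495 ≈ 0.493):
clear all
nreps = 100000;
Event_W = 0;                              % number of wins
for n = 1:nreps
    a = randunifd(1,6) + randunifd(1,6);  % sum of two dice (first throw)
    if a == 7 || a == 11                  % win immediately
        Event_W = Event_W + 1;
    elseif a == 2 || a == 3 || a == 12    % lose immediately
        % nothing to count
    else
        point = a;                        % keep throwing for this "point"
        Throw = 1;
        while Throw == 1
            b = randunifd(1,6) + randunifd(1,6);
            if b == point                 % point repeated: win
                Event_W = Event_W + 1;
                Throw = 0;
            elseif b == 7                 % seven before the point: lose
                Throw = 0;
            end
        end
    end
end
RelFreq = Event_W/nreps                   % estimated winning probability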
2. Consider the following experiment. We toss a fair coin 20 times. Let X be the length of the longest sequence of Heads.
(i) Estimate the probability function p of X. That is, for x = 1, ..., 20, estimate
$p(x) = P(X=x)$
(ii) Give an estimate of the mean value of X.
You need to add commands where indicated in the MATLAB file.
Here's the MATLAB file I got :
clear all
nreps=10000;
p=zeros(1,20); %p is a vector of size 20 such that p(i)=P(X=i)
for n=1:nreps
n_heads = 0; %Length of current Head sequence
max_heads= 0; %Length of maximum length sequence so far
for %Add code to toss a coin 20 times
if %Add condition to obtain a Head
n_heads = n_heads +1;
else
n_heads = 0;
end
if %Add condition to see if current sequence is the longest
max_heads = n_heads;
end
end
%Add code to increment p(max_heads) by 1
end
RelFreq = p/nreps
%Add code to estimate the mean of X
Thanks for your help! (:
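One possible completion of the second skeleton, again only a sketch and assuming randunifd(0,1) returns 0 or 1 with equal probability (1 standing for Heads):
clear all
nreps = 10000;
p = zeros(1,20);                       % p(i) will estimate P(X = i)
for n = 1:nreps
    n_heads = 0;                       % length of current run of Heads
    max_heads = 0;                     % longest run seen so far
    for k = 1:20                       % toss the coin 20 times
        if randunifd(0,1) == 1         % a Head
            n_heads = n_heads + 1;
        else
            n_heads = 0;
        end
        if n_heads > max_heads         % current run is the longest so far
            max_heads = n_heads;
        end
    end
    if max_heads > 0                   % increment p(max_heads) by 1
        p(max_heads) = p(max_heads) + 1;
    end
end
RelFreq = p/nreps
mean_X = sum((1:20).*RelFreq)          % estimate of E[X]; the all-tails case X = 0 contributes nothing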
https://brilliant.org/problems/extrinsic-semiconductor/ | # Extrinsic Semiconductor
The above are schematic diagrams of extrinsic semiconductors, into which doping agents of Boron(B) and phosphorus(P) have been introduced, respectively. Which of the following statements is correct?
a) Silicon (Si) has $4$ valence electrons.
b) Phosphorus (P) has $3$ valence electrons.
c) The above left are N-type semiconductors and the above right are P-type semiconductors.
d) Electrons are the majority charge carriers of the above right semiconductors, and holes are the majority charge carriers of the above left semiconductors.
https://cran.pau.edu.tr/web/packages/RRphylo/vignettes/Tree-Manipulation.html | # Phylogenetic tree manipulation
## tree.merger tool
### tree.merger basics
The function tree.merger is meant to merge phylogenetic information derived from different phylogenies into a single supertree. Given a backbone tree (backbone) and a source tree (source.tree), tree.merger drops clades from the latter and attaches them to the former according to the information provided in the dataset object data. Individual tips to add can be indicated in data as well. Once the supertree is assembled, tip and node ages are calibrated based on user-specified values.
### Input tree and data
The backbone phylogeny serves as the reference to locate where single tips or entire clades extracted from the source.tree have to be attached. The backbone is assumed to be correctly calibrated so that nodes and tips ages (including the age of the tree root) are left unchanged, unless the user specifies otherwise. The source.tree is the phylogeny where the clades to add are extracted from. For each clade attached to the backbone, the time distances between the most recent common ancestor of the clade and its descendant nodes are kept fixed, unless the ages for any of these nodes are indicated by the user. All the new tips added to the backbone, irrespective of whether they are attached as a clade or as individual tips, are placed at the maximum distance from the tree root, unless calibration ages are supplied by the user. The data object is a dataframe including information about “what” is attached, where and how. data must be made of three columns:
• bind: the tips or clades to be attached;
• reference: the tips or clades where bind will be attached;
• poly: a logical indicating whether the bind and reference pair should form a polytomy.
If different column names are supplied, tree.merger assumes they are ordered as described and eventually fails if this requirement is not met. Similarly, with duplicated bind supplied the function stops and throws an error message. A clade, either to be binded or to be the reference, must be indicated by collating the names of the two phylogenetically furthest tips belonging to it, separated by a “-”. The ‘binded’ tips/clades can be used as reference for another tip/clade to be attached. The order with which clades and tips to attach are supplied does not matter. Tips and nodes are calibrated within tree.merger by means of the function scaleTree. To this aim, named vectors of tips and nodes ages, meant as time distance from the youngest tips within the phylogeny, must be supplied. As for the data object, the nodes to be calibrated should be identified by collating the names of the two phylogenetically furthest tips it subtends to, separated by a “-”.
### Attaching individual tips to the backbone tree
If only individual tips are attached the source.tree can be left unspecified. Tips set to be attached to the same reference are considered to represent a polytomy. Tips set as bind which are already on the backbone tree are removed from the latter and placed according to the reference. In the example below, tips “a1” and “a8” are set to be attached to the same reference “t6”, “t5” belonging to the backbone is indicated to be moved, and “a7” is added to the tree root thus changing the total height of the tree.
dato
bind reference poly
a1 t6 FALSE
a2 t10 FALSE
a3 t9 FALSE
a4 a2-t5 FALSE
a5 t10-t2 TRUE
a6 t2-a3 FALSE
a7 a1-t10 FALSE
t5 t7-t10 FALSE
a8 t6 FALSE
tree.merger(backbone=tree.back,data=dato,plot=FALSE)
#> Warning in tree.merger(backbone = tree.back, data = dato, plot = FALSE): t5
#> removed from the backbone tree
#> Warning in tree.merger(backbone = tree.back, data = dato, plot = FALSE): Root
#> age not indicated: the tree root arbitrarily set at 3.31
As no tip.ages are supplied to tree.merger, all the new tips are placed at the maximum distance from the tree root. Since no age for the root of the merged tree is indicated, the function places it arbitrarily and produces a warning to inform the user about its position with respect to the youngest tip on the phylogeny.
To calibrate the ages of either tips or nodes within the merged tree, the arguments tip.ages and node.ages must be indicated.
ages.tip
#> a7 a1 t6 a8 a6 a4 a2 a5 a3
#> 1.0 2.0 1.7 1.5 0.8 1.5 0.3 1.2 0.2
ages.node
#> t2-t1 a1-a8 a7-t10
#> 2.2 2.9 3.5
tree.merger(backbone=tree.back,data=dato,tip.ages=ages.tip,node.ages = ages.node,plot=FALSE)
#> Warning in tree.merger(backbone = tree.back, data = dato, tip.ages = ages.tip, :
#> t5 removed from the backbone tree
### Attaching clades to the backbone tree
When clades are attached, the nodes subtending to them on source.tree are identified as the most recent common ancestors of the tip pairs indicated in bind. If one or more tips within any of the bind clades are also set to be added as individual tips, they are removed from the clade they belong to and attached independently. In the example below, “s7” is removed from the clade subtended by the most recent common ancestor of “s1” and “s4” and attached as sister to “t3” independently.
bind reference poly
a1 s3 FALSE
s2-s5 t10 FALSE
s1-s4 t3-t9 FALSE
s7 t3 FALSE
a2 s2-t7 FALSE
tree.merger(backbone=tree.back,data=dato.clade,source.tree=tree.source,plot=FALSE)
### Guided examples
### load the RRphylo example dataset including Cetaceans tree
data("DataCetaceans")
DataCetaceans$treecet->treecet # phylogenetic tree
### Select two clades and some species to be removed
tips(treecet,131)->liv.Mysticetes
tips(treecet,193)->Delphininae
c("Aetiocetus_weltoni","Saghacetus_osiris",
  "Zygorhiza_kochii","Ambulocetus_natans",
  "Kentriodon_pernix","Kentriodon_schneideri","Kentriodon_obscurus")->extinct
plot(treecet,show.tip.label = FALSE,no.margin=TRUE)
nodelabels(frame="n",col="blue",font=2,node=c(131,193),text=c("living\nMysticetes","Delphininae"))
tiplabels(frame="circle",bg="red",cex=.3,text=rep("",length(c(liv.Mysticetes,Delphininae,extinct))),
          tip=which(treecet$tip.label%in%c(liv.Mysticetes,Delphininae,extinct)))
### Create the backbone and source trees
drop.tip(treecet,c(liv.Mysticetes[-which(tips(treecet,131)%in%
c("Caperea_marginata","Eubalaena_australis"))],
drop.tip(treecet,which(!treecet$tip.label%in%
                         c(liv.Mysticetes,Delphininae,extinct)))->sourcetree
### Create the data object
data.frame(bind=c("Balaena_mysticetus-Caperea_marginata",
                  "Aetiocetus_weltoni",
                  "Saghacetus_osiris",
                  "Zygorhiza_kochii",
                  "Ambulocetus_natans",
                  "Kentriodon_pernix",
                  "Kentriodon_schneideri",
                  "Kentriodon_obscurus",
                  "Sousa_chinensis-Delphinus_delphis",
                  "Kogia_sima",
                  "Grampus_griseus"),
           reference=c("Fucaia_buelli-Aetiocetus_weltoni",
                       "Aetiocetus_cotylalveus",
                       "Fucaia_buelli-Tursiops_truncatus",
                       "Saghacetus_osiris-Fucaia_buelli",
                       "Dalanistes_ahmedi-Fucaia_buelli",
                       "Kentriodon_schneideri",
                       "Phocoena_phocoena-Delphinus_delphis",
                       "Kentriodon_schneideri",
                       "Sotalia_fluviatilis",
                       "Kogia_breviceps",
                       "Globicephala_melas-Pseudorca_crassidens"),
           poly=c(FALSE,FALSE,FALSE,FALSE,FALSE,FALSE,FALSE,FALSE,FALSE,FALSE,FALSE))->dato
dato
bind reference poly
Balaena_mysticetus-Caperea_marginata Fucaia_buelli-Aetiocetus_weltoni FALSE
Aetiocetus_weltoni Aetiocetus_cotylalveus FALSE
Saghacetus_osiris Fucaia_buelli-Tursiops_truncatus FALSE
Zygorhiza_kochii Saghacetus_osiris-Fucaia_buelli FALSE
Ambulocetus_natans Dalanistes_ahmedi-Fucaia_buelli FALSE
Kentriodon_pernix Kentriodon_schneideri FALSE
Kentriodon_schneideri Phocoena_phocoena-Delphinus_delphis FALSE
Kentriodon_obscurus Kentriodon_schneideri FALSE
Sousa_chinensis-Delphinus_delphis Sotalia_fluviatilis FALSE
Kogia_sima Kogia_breviceps FALSE
Grampus_griseus Globicephala_melas-Pseudorca_crassidens FALSE
### Merge the backbone and the source trees according to dato without calibrating tip and node ages
tree.merger(backbone = backtree,data=dato,source.tree = sourcetree,plot=FALSE)
#> Warning in tree.merger(backbone = backtree, data = dato, source.tree =
#> sourcetree, : Kogia_sima, Grampus_griseus removed from the backbone tree
#> Warning in tree.merger(backbone = backtree, data = dato, source.tree =
#> sourcetree, : Tursiops_aduncus, Eubalaena_australis, Caperea_marginata already
#> on the source tree: removed from the backbone tree
#> Warning in tree.merger(backbone = backtree, data = dato, source.tree =
#> sourcetree, : Root age not indicated: the tree root arbitrarily set at 45.06
### Set tips and nodes calibration ages
c(Aetiocetus_weltoni=28.0,
  Saghacetus_osiris=33.9,
  Zygorhiza_kochii=34.0,
  Ambulocetus_natans=40.4,
  Kentriodon_pernix=15.9,
  Kentriodon_schneideri=11.61,
  Kentriodon_obscurus=13.65)->tipages
c("Ambulocetus_natans-Fucaia_buelli"=52.6,
  "Balaena_mysticetus-Caperea_marginata"=21.5)->nodeages
### Merge the backbone and the source trees and calibrate tips and nodes ages
tree.merger(backbone = backtree,data=dato,source.tree = sourcetree,
            tip.ages=tipages,node.ages=nodeages,plot=FALSE)
#> Warning in tree.merger(backbone = backtree, data = dato, source.tree =
#> sourcetree, : Kogia_sima, Grampus_griseus removed from the backbone tree
#> Warning in tree.merger(backbone = backtree, data = dato, source.tree =
#> sourcetree, : Tursiops_aduncus, Eubalaena_australis, Caperea_marginata already
#> on the source tree: removed from the backbone tree
## scaleTree tool
The function scaleTree is a useful tool to deal with phylogenetic age calibration, written around Gene Hunt's scalePhylo function (https://naturalhistory.si.edu/staff/gene-hunt). It rescales branches and leaves of the tree according to species and/or node calibration ages (meant as distances from the youngest tip within the tree). If only species ages are supplied (argument tip.ages), the function changes leaf lengths, leaving node ages and internal branch lengths unaltered.
When node ages are supplied (argument node.ages), the function shifts node positions along their own branches while keeping other node and species positions unchanged.
sp.ages
#>    t9   t73   t11   t43   t78   t46   t52   t26
#> 1.250 1.205 0.000 2.430 3.150 1.050 0.000 1.550
scaleTree(tree,tip.ages=sp.ages)->treeS1
nod.ages
#>   98  152  123   85  118  127  164  143
#> 10.7  0.7  1.2 12.6  5.1  5.8 18.8 12.8
scaleTree(tree,node.ages=nod.ages)->treeS2
It may happen that species and/or node ages to be calibrated are older than the age of their ancestors. In such cases, after moving the species (node) to its target age, the function reassembles the phylogeny above it by assigning the same branch length (set through the argument min.branch) to all the branches along the species (node) path, so that the tree is well-conformed and ancestor-descendant relationships remain unchanged. In this way changes to the original tree topology only pertain to the path along the "calibrated" species.
c(sp.ages,nod.ages)
#>   t1   96
#> 20.5 15.6
scaleTree(tree,tip.ages = sp.ages,node.ages = nod.ages,min.branch = 1)->treeS
### Guided examples
# load the RRphylo example dataset including Felids tree
data("DataFelids")
DataFelids$treefel->tree
# get species and nodes ages
# (meant as distance from the youngest species, that is the Recent in this case)
max(nodeHeights(tree))->H
H-dist.nodes(tree)[(Ntip(tree)+1),(Ntip(tree)+1):(Ntip(tree)+Nnode(tree))]->age.nodes
H-diag(vcv(tree))->age.tips
# apply Pagel's lambda transformation to change node ages only
geiger::rescale(tree,"lambda",0.8)->tree1
# apply scaleTree to the transformed phylogeny, by setting
# the original ages at nodes as node.ages
scaleTree(tree1,node.ages=age.nodes)->treeS1
# change leaf length of 10 sampled species
tree->tree2
set.seed(14)
sample(tree2$tip.label,10)->sam.sp
age.tips[sam.sp]->age.sam
age.sam[which(age.sam>0.1)]<-age.sam[which(age.sam>0.1)]-1.5
age.sam[which(age.sam<0.1)]<-age.sam[which(age.sam<0.1)]+0.2
tree2$edge.length[match(match(sam.sp,tree$tip.label),tree$edge[,2])]<-age.sam
# apply scaleTree to the transformed phylogeny, by setting
# the original ages at sampled tips as tip.ages
scaleTree(tree2,tip.ages=age.tips[sam.sp])->treeS2
# apply Pagel's kappa transformation to change both species and node ages,
# including the age at the tree root
geiger::rescale(tree,"kappa",0.5)->tree3
# apply scaleTree to the transformed phylogeny, by setting
# the original ages at nodes as node.ages
scaleTree(tree3,tip.ages = age.tips,node.ages=age.nodes)->treeS3
## cutPhylo tool
The function cutPhylo is meant to cut the phylogenetic tree to remove all the tips and nodes younger than a reference (user-specified) age, which can also coincide with a specific node. When an entire clade is cut, the user can choose (by the argument keep.lineage) to keep its branch length as a tip of the new tree, or remove it completely.
cutPhylo(tree,age=5,keep.lineage = TRUE)
cutPhylo(tree,age=5,keep.lineage = FALSE)
cutPhylo(tree,node=129,keep.lineage = TRUE)
cutPhylo(tree,node=129,keep.lineage = FALSE)
## fix.poly tool
The function fix.poly randomly resolves polytomies either at specified nodes or throughout the tree (Castiglione et al. 2020). This latter feature works like ape's multi2di. However, contrary to the latter, polytomies are resolved to non-zero length branches, to provide a credible partition of the evolutionary time among the nodes descending from the dichotomized node. This could be useful to gain realistic evolutionary rate estimates when applying RRphylo. Under the type = collapse specification the user is expected to indicate which node/s must be transformed into a multichotomous clade.
### load the RRphylo example dataset including Cetaceans tree
data("DataCetaceans")
DataCetaceans$treecet->treecet
### Resolve all the polytomies within Cetaceans phylogeny
fix.poly(treecet,type="resolve")->treecet.fixed
## Set branch colors
unlist(sapply(names(which(table(treecet$edge[,1])>2)),function(x)
  c(x,getDescendants(treecet,as.numeric(x)))))->tocolo
unlist(sapply(names(which(table(treecet$edge[,1])>2)),function(x)
  c(getMRCA(treecet.fixed,tips(treecet,x)),
    getDescendants(treecet.fixed,as.numeric(getMRCA(treecet.fixed,tips(treecet,x)))))))->tocolo2
colo<-rep("gray60",nrow(treecet$edge))
names(colo)<-treecet$edge[,2]
colo2<-rep("gray60",nrow(treecet.fixed$edge))
names(colo2)<-treecet.fixed$edge[,2]
colo[match(tocolo,names(colo))]<-"red"
colo2[match(tocolo2,names(colo2))]<-"red"
par(mfrow=c(1,2))
plot(treecet,no.margin=TRUE,show.tip.label=FALSE,edge.color = colo,edge.width=1.3)
plot(treecet.fixed,no.margin=TRUE,show.tip.label=FALSE,edge.color = colo2,edge.width=1.3)
### Resolve the polytomies pertaining the genus Kentriodon
fix.poly(treecet,type="resolve",node=221)->treecet.fixed2
## Set branch colors
c(221,getDescendants(treecet,as.numeric(221)))->tocolo
c(getMRCA(treecet.fixed2,tips(treecet,221)),
  getDescendants(treecet.fixed2,as.numeric(getMRCA(treecet.fixed2,tips(treecet,221)))))->tocolo2
colo<-rep("gray60",nrow(treecet$edge))
names(colo)<-treecet$edge[,2]
colo2<-rep("gray60",nrow(treecet.fixed2$edge))
names(colo2)<-treecet.fixed2$edge[,2]
colo[match(tocolo,names(colo))]<-"red"
colo2[match(tocolo2,names(colo2))]<-"red"
par(mfrow=c(1,2))
plot(treecet,no.margin=TRUE,show.tip.label=FALSE,edge.color = colo,edge.width=1.3)
plot(treecet.fixed2,no.margin=TRUE,show.tip.label=FALSE,edge.color = colo2,edge.width=1.3)
### Collapse Delphinidae into a polytomous clade
fix.poly(treecet,type="collapse",node=179)->treecet.collapsed
# Set branch colors
c(179,getDescendants(treecet,as.numeric(179)))->tocolo
c(getMRCA(treecet.collapsed,tips(treecet,179)),
  getDescendants(treecet.collapsed,as.numeric(getMRCA(treecet.collapsed,tips(treecet,179)))))->tocolo2
colo<-rep("gray60",nrow(treecet$edge))
names(colo)<-treecet$edge[,2]
colo2<-rep("gray60",nrow(treecet.collapsed$edge))
names(colo2)<-treecet.collapsed\$edge[,2]
colo[match(tocolo,names(colo))]<-"red"
colo2[match(tocolo2,names(colo2))]<-"red"
par(mfrow=c(1,2))
plot(treecet,no.margin=TRUE,show.tip.label=FALSE,edge.color = colo,edge.width=1.3)
plot(treecet.collapsed,no.margin=TRUE,show.tip.label=FALSE,edge.color = colo2,edge.width=1.3)
## References
Castiglione, S., Serio, C., Piccolo, M., Mondanaro, A., Melchionna, M., Di Febbraro, M., Sansalone, G., Wroe, S., & Raia, P. (2020). The influence of domestication, insularity and sociality on the tempo and mode of brain size evolution in mammals. Biological Journal of the Linnean Society, in press.
https://docs.sympy.org/latest/modules/geometry/polygons.html | # Polygons
class sympy.geometry.polygon.Polygon[source]
A two-dimensional polygon.
A simple polygon in space. Can be constructed from a sequence of points or from a center, radius, number of sides and rotation angle.
Parameters: vertices : sequence of Points. Raises: GeometryError if all parameters are not Points.
Notes
Polygons are treated as closed paths rather than 2D areas so some calculations can be negative or positive (e.g., area) based on the orientation of the points.
Any consecutive identical points are reduced to a single point and any points collinear and between two points will be removed unless they are needed to define an explicit intersection (see examples).
A Triangle, Segment or Point will be returned when there are 3 or fewer points provided.
Examples
>>> from sympy import Point, Polygon, pi
>>> p1, p2, p3, p4, p5 = [(0, 0), (1, 0), (5, 1), (0, 1), (3, 0)]
>>> Polygon(p1, p2, p3, p4)
Polygon(Point2D(0, 0), Point2D(1, 0), Point2D(5, 1), Point2D(0, 1))
>>> Polygon(p1, p2)
Segment2D(Point2D(0, 0), Point2D(1, 0))
>>> Polygon(p1, p2, p5)
Segment2D(Point2D(0, 0), Point2D(3, 0))
The area of a polygon is calculated as positive when vertices are traversed in a ccw direction. When the sides of a polygon cross the area will have positive and negative contributions. The following defines a Z shape where the bottom right connects back to the top left.
>>> Polygon((0, 2), (2, 2), (0, 0), (2, 0)).area
0
When the keyword n is used to define the number of sides of the Polygon then a RegularPolygon is created and the other arguments are interpreted as center, radius and rotation. The unrotated RegularPolygon will always have a vertex at Point(r, 0) where r is the radius of the circle that circumscribes the RegularPolygon. Its method spin can be used to increment that angle.
>>> p = Polygon((0,0), 1, n=3)
>>> p
RegularPolygon(Point2D(0, 0), 1, 3, 0)
>>> p.vertices[0]
Point2D(1, 0)
>>> p.args[0]
Point2D(0, 0)
>>> p.spin(pi/2)
>>> p.vertices[0]
Point2D(0, 1)
Attributes
area angles perimeter vertices centroid sides
angles
The internal angle at each vertex.
Returns: angles : dict A dictionary where each key is a vertex and each value is the internal angle at that vertex. The vertices are represented as Points.
Examples
>>> from sympy import Point, Polygon
>>> p1, p2, p3, p4 = map(Point, [(0, 0), (1, 0), (5, 1), (0, 1)])
>>> poly = Polygon(p1, p2, p3, p4)
>>> poly.angles[p1]
pi/2
>>> poly.angles[p2]
acos(-4*sqrt(17)/17)
arbitrary_point(parameter='t')[source]
A parameterized point on the polygon.
The parameter, varying from 0 to 1, assigns points to the position on the perimeter that is that fraction of the total perimeter. So the point evaluated at t=1/2 would return the point from the first vertex that is 1/2 way around the polygon.
Parameters: parameter : str, optional (default value is ‘t’). Returns: arbitrary_point : Point. Raises: ValueError when parameter already appears in the Polygon’s definition.
Examples
>>> from sympy import Polygon, S, Symbol
>>> t = Symbol('t', real=True)
>>> tri = Polygon((0, 0), (1, 0), (1, 1))
>>> p = tri.arbitrary_point('t')
>>> perimeter = tri.perimeter
>>> s1, s2 = [s.length for s in tri.sides[:2]]
>>> p.subs(t, (s1 + s2/2)/perimeter)
Point2D(1, 1/2)
area
The area of the polygon.
Notes
The area calculation can be positive or negative based on the orientation of the points. If any side of the polygon crosses any other side, there will be areas having opposite signs.
Examples
>>> from sympy import Point, Polygon
>>> p1, p2, p3, p4 = map(Point, [(0, 0), (1, 0), (5, 1), (0, 1)])
>>> poly = Polygon(p1, p2, p3, p4)
>>> poly.area
3
In the Z shaped polygon (with the lower right connecting back to the upper left) the areas cancel out:
>>> Z = Polygon((0, 1), (1, 1), (0, 0), (1, 0))
>>> Z.area
0
In the M shaped polygon, areas do not cancel because no side crosses any other (though there is a point of contact).
>>> M = Polygon((0, 0), (0, 1), (2, 0), (3, 1), (3, 0))
>>> M.area
-3/2
bounds
Return a tuple (xmin, ymin, xmax, ymax) representing the bounding rectangle for the geometric figure.
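For instance, with the quadrilateral used in the examples above, one expects:
>>> from sympy import Point, Polygon
>>> p1, p2, p3, p4 = map(Point, [(0, 0), (1, 0), (5, 1), (0, 1)])
>>> Polygon(p1, p2, p3, p4).bounds
(0, 0, 5, 1)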
centroid
The centroid of the polygon.
Returns: centroid : Point
Examples
>>> from sympy import Point, Polygon
>>> p1, p2, p3, p4 = map(Point, [(0, 0), (1, 0), (5, 1), (0, 1)])
>>> poly = Polygon(p1, p2, p3, p4)
>>> poly.centroid
Point2D(31/18, 11/18)
distance(o)[source]
Returns the shortest distance between self and o.
If o is a point, then self does not need to be convex. If o is another polygon, self and o must be convex.
Examples
>>> from sympy import Point, Polygon, RegularPolygon
>>> p1, p2 = map(Point, [(0, 0), (7, 5)])
>>> poly = Polygon(*RegularPolygon(p1, 1, 3).vertices)
>>> poly.distance(p2)
sqrt(61)
encloses_point(p)[source]
Return True if p is enclosed by (is inside of) self.
Parameters: p : Point. Returns: encloses_point : True, False or None.
Notes
Being on the border of self is considered False.
Examples
>>> from sympy import Polygon, Point
>>> from sympy.abc import t
>>> p = Polygon((0, 0), (4, 0), (4, 4))
>>> p.encloses_point(Point(2, 1))
True
>>> p.encloses_point(Point(2, 2))
False
>>> p.encloses_point(Point(5, 5))
False
intersection(o)[source]
The intersection of polygon and geometry entity.
The intersection may be empty and can contain individual Points and complete Line Segments.
Parameters: other : GeometryEntity. Returns: intersection : list of Segments and Points.
Examples
>>> from sympy import Point, Polygon, Line
>>> p1, p2, p3, p4 = map(Point, [(0, 0), (1, 0), (5, 1), (0, 1)])
>>> poly1 = Polygon(p1, p2, p3, p4)
>>> p5, p6, p7 = map(Point, [(3, 2), (1, -1), (0, 2)])
>>> poly2 = Polygon(p5, p6, p7)
>>> poly1.intersection(poly2)
[Point2D(1/3, 1), Point2D(2/3, 0), Point2D(9/5, 1/5), Point2D(7/3, 1)]
>>> poly1.intersection(Line(p1, p2))
[Segment2D(Point2D(0, 0), Point2D(1, 0))]
>>> poly1.intersection(p1)
[Point2D(0, 0)]
is_convex()[source]
Is the polygon convex?
A polygon is convex if all its interior angles are less than 180 degrees and there are no intersections between sides.
Returns: is_convex : boolean True if this polygon is convex, False otherwise.
Examples
>>> from sympy import Point, Polygon
>>> p1, p2, p3, p4 = map(Point, [(0, 0), (1, 0), (5, 1), (0, 1)])
>>> poly = Polygon(p1, p2, p3, p4)
>>> poly.is_convex()
True
perimeter
The perimeter of the polygon.
Returns: perimeter : number or Basic instance
Examples
>>> from sympy import Point, Polygon
>>> p1, p2, p3, p4 = map(Point, [(0, 0), (1, 0), (5, 1), (0, 1)])
>>> poly = Polygon(p1, p2, p3, p4)
>>> poly.perimeter
sqrt(17) + 7
plot_interval(parameter='t')[source]
The plot interval for the default geometric plot of the polygon.
Parameters: parameter : str, optional (default value is ‘t’). Returns: plot_interval : list, [parameter, lower_bound, upper_bound].
Examples
>>> from sympy import Polygon
>>> p = Polygon((0, 0), (1, 0), (1, 1))
>>> p.plot_interval()
[t, 0, 1]
second_moment_of_area(point=None)[source]
Returns the second moment and product moment of area of a two dimensional polygon.
Parameters: point : Point, two-tuple of sympifiable objects, or None (default=None); the point about which the second moment of area is to be found. If point=None it will be calculated about the axis passing through the centroid of the polygon. Returns: I_xx, I_yy, I_xy : number or SymPy expression; I_xx and I_yy are the second moments of area of the polygon, I_xy is its product moment of area.
References
https://en.wikipedia.org/wiki/Second_moment_of_area
Examples
>>> from sympy import Point, Polygon, symbols
>>> a, b = symbols('a, b')
>>> p1, p2, p3, p4, p5 = [(0, 0), (a, 0), (a, b), (0, b), (a/3, b/3)]
>>> rectangle = Polygon(p1, p2, p3, p4)
>>> rectangle.second_moment_of_area()
(a*b**3/12, a**3*b/12, 0)
>>> rectangle.second_moment_of_area(p5)
(a*b**3/9, a**3*b/9, a**2*b**2/36)
sides
The directed line segments that form the sides of the polygon.
Returns: sides : list of sides Each side is a directed Segment.
Examples
>>> from sympy import Point, Polygon
>>> p1, p2, p3, p4 = map(Point, [(0, 0), (1, 0), (5, 1), (0, 1)])
>>> poly = Polygon(p1, p2, p3, p4)
>>> poly.sides
[Segment2D(Point2D(0, 0), Point2D(1, 0)),
Segment2D(Point2D(1, 0), Point2D(5, 1)),
Segment2D(Point2D(5, 1), Point2D(0, 1)), Segment2D(Point2D(0, 1), Point2D(0, 0))]
vertices
The vertices of the polygon.
Returns: vertices : list of Points
Notes
When iterating over the vertices, it is more efficient to index self rather than to request the vertices and index them. Only use the vertices when you want to process all of them at once. This is even more important with RegularPolygons that calculate each vertex.
Examples
>>> from sympy import Point, Polygon
>>> p1, p2, p3, p4 = map(Point, [(0, 0), (1, 0), (5, 1), (0, 1)])
>>> poly = Polygon(p1, p2, p3, p4)
>>> poly.vertices
[Point2D(0, 0), Point2D(1, 0), Point2D(5, 1), Point2D(0, 1)]
>>> poly.vertices[0]
Point2D(0, 0)
class sympy.geometry.polygon.RegularPolygon[source]
A regular polygon.
Such a polygon has all internal angles equal and all sides the same length.
Parameters: center : Point. radius : number or Basic instance; the distance from the center to a vertex. n : int; the number of sides. Raises: GeometryError if the center is not a Point, or the radius is not a number or Basic instance, or the number of sides, n, is less than three.
Notes
A RegularPolygon can be instantiated with Polygon with the kwarg n.
Regular polygons are instantiated with a center, radius, number of sides and a rotation angle. Whereas the arguments of a Polygon are vertices, the vertices of the RegularPolygon must be obtained with the vertices method.
Examples
>>> from sympy.geometry import RegularPolygon, Point
>>> r = RegularPolygon(Point(0, 0), 5, 3)
>>> r
RegularPolygon(Point2D(0, 0), 5, 3, 0)
>>> r.vertices[0]
Point2D(5, 0)
Attributes
vertices center radius rotation apothem interior_angle exterior_angle circumcircle incircle angles
angles
Returns a dictionary with keys, the vertices of the Polygon, and values, the interior angle at each vertex.
Examples
>>> from sympy import RegularPolygon, Point
>>> r = RegularPolygon(Point(0, 0), 5, 3)
>>> r.angles
{Point2D(-5/2, -5*sqrt(3)/2): pi/3,
Point2D(-5/2, 5*sqrt(3)/2): pi/3,
Point2D(5, 0): pi/3}
apothem
Returns: apothem : number or instance of Basic
Examples
>>> from sympy import Symbol
>>> from sympy.geometry import RegularPolygon, Point
>>> radius = Symbol('r')
>>> rp = RegularPolygon(Point(0, 0), radius, 4)
>>> rp.apothem
sqrt(2)*r/2
area
Returns the area.
Examples
>>> from sympy.geometry import RegularPolygon
>>> square = RegularPolygon((0, 0), 1, 4)
>>> square.area
2
>>> _ == square.length**2
True
args
Returns the center point, the radius, the number of sides, and the orientation angle.
Examples
>>> from sympy import RegularPolygon, Point
>>> r = RegularPolygon(Point(0, 0), 5, 3)
>>> r.args
(Point2D(0, 0), 5, 3, 0)
center
The center of the RegularPolygon
This is also the center of the circumscribing circle.
Returns: center : Point
Examples
>>> from sympy.geometry import RegularPolygon, Point
>>> rp = RegularPolygon(Point(0, 0), 5, 4)
>>> rp.center
Point2D(0, 0)
centroid
The center of the RegularPolygon
This is also the center of the circumscribing circle.
Returns: center : Point
Examples
>>> from sympy.geometry import RegularPolygon, Point
>>> rp = RegularPolygon(Point(0, 0), 5, 4)
>>> rp.center
Point2D(0, 0)
circumcenter
Alias for center.
Examples
>>> from sympy.geometry import RegularPolygon, Point
>>> rp = RegularPolygon(Point(0, 0), 5, 4)
>>> rp.circumcenter
Point2D(0, 0)
circumcircle
The circumcircle of the RegularPolygon.
Returns: circumcircle : Circle
Examples
>>> from sympy.geometry import RegularPolygon, Point
>>> rp = RegularPolygon(Point(0, 0), 4, 8)
>>> rp.circumcircle
Circle(Point2D(0, 0), 4)
circumradius
Examples
>>> from sympy import Symbol
>>> from sympy.geometry import RegularPolygon, Point
>>> radius = Symbol('r')
>>> rp = RegularPolygon(Point(0, 0), radius, 4)
>>> rp.circumradius
r
encloses_point(p)[source]
Return True if p is enclosed by (is inside of) self.
Parameters: p : Point. Returns: encloses_point : True, False or None.
Notes
Being on the border of self is considered False.
The general Polygon.encloses_point method is called only if a point is not within or beyond the incircle or circumcircle, respectively.
Examples
>>> from sympy import RegularPolygon, S, Point, Symbol
>>> p = RegularPolygon((0, 0), 3, 4)
>>> r, R = p.inradius, p.circumradius
>>> p.encloses_point(Point(0, 0))
True
>>> p.encloses_point(Point((r + R)/2, 0))
True
>>> p.encloses_point(Point(R/2, R/2 + (R - r)/10))
False
>>> t = Symbol('t', real=True)
>>> p.encloses_point(p.arbitrary_point().subs(t, S.Half))
False
>>> p.encloses_point(Point(5, 5))
False
exterior_angle
Measure of the exterior angles.
Returns: exterior_angle : number
Examples
>>> from sympy.geometry import RegularPolygon, Point
>>> rp = RegularPolygon(Point(0, 0), 4, 8)
>>> rp.exterior_angle
pi/4
incircle
The incircle of the RegularPolygon.
Returns: incircle : Circle
Examples
>>> from sympy.geometry import RegularPolygon, Point
>>> rp = RegularPolygon(Point(0, 0), 4, 7)
>>> rp.incircle
Circle(Point2D(0, 0), 4*cos(pi/7))
inradius
Alias for apothem.
Examples
>>> from sympy import Symbol
>>> from sympy.geometry import RegularPolygon, Point
>>> radius = Symbol('r')
>>> rp = RegularPolygon(Point(0, 0), radius, 4)
>>> rp.inradius
sqrt(2)*r/2
interior_angle
Measure of the interior angles.
Returns: interior_angle : number
Examples
>>> from sympy.geometry import RegularPolygon, Point
>>> rp = RegularPolygon(Point(0, 0), 4, 8)
>>> rp.interior_angle
3*pi/4
length
Returns the length of the sides.
The half-length of the side and the apothem form two legs of a right triangle whose hypotenuse is the radius of the regular polygon.
Examples
>>> from sympy.geometry import RegularPolygon
>>> from sympy import sqrt
>>> s = square_in_unit_circle = RegularPolygon((0, 0), 1, 4)
>>> s.length
sqrt(2)
>>> sqrt((_/2)**2 + s.apothem**2) == s.radius
True
radius
This is also the radius of the circumscribing circle.
Returns: radius : number or instance of Basic
Examples
>>> from sympy import Symbol
>>> from sympy.geometry import RegularPolygon, Point
>>> radius = Symbol('r')
>>> rp = RegularPolygon(Point(0, 0), radius, 4)
>>> rp.radius
r
reflect(line)[source]
Override GeometryEntity.reflect since this is not made of only points.
Examples
>>> from sympy import RegularPolygon, Line
>>> RegularPolygon((0, 0), 1, 4).reflect(Line((0, 1), slope=-2))
RegularPolygon(Point2D(4/5, 2/5), -1, 4, atan(4/3))
rotate(angle, pt=None)[source]
Override GeometryEntity.rotate to first rotate the RegularPolygon about its center.
>>> from sympy import Point, RegularPolygon, Polygon, pi
>>> t = RegularPolygon(Point(1, 0), 1, 3)
>>> t.vertices[0] # vertex on x-axis
Point2D(2, 0)
>>> t.rotate(pi/2).vertices[0] # vertex on y axis now
Point2D(0, 2)
rotation
spin
Rotates a RegularPolygon in place
rotation
CCW angle by which the RegularPolygon is rotated
Returns: rotation : number or instance of Basic
Examples
>>> from sympy import pi
>>> from sympy.abc import a
>>> from sympy.geometry import RegularPolygon, Point
>>> RegularPolygon(Point(0, 0), 3, 4, pi/4).rotation
pi/4
Numerical rotation angles are made canonical:
>>> RegularPolygon(Point(0, 0), 3, 4, a).rotation
a
>>> RegularPolygon(Point(0, 0), 3, 4, pi).rotation
0
scale(x=1, y=1, pt=None)[source]
Override GeometryEntity.scale since it is the radius that must be scaled (if x == y) or else a new Polygon must be returned.
>>> from sympy import RegularPolygon
Symmetric scaling returns a RegularPolygon:
>>> RegularPolygon((0, 0), 1, 4).scale(2, 2)
RegularPolygon(Point2D(0, 0), 2, 4, 0)
Asymmetric scaling returns a kite as a Polygon:
>>> RegularPolygon((0, 0), 1, 4).scale(2, 1)
Polygon(Point2D(2, 0), Point2D(0, 1), Point2D(-2, 0), Point2D(0, -1))
spin(angle)[source]
Increment in place the virtual Polygon’s rotation by ccw angle.
>>> from sympy import Polygon, Point, pi
>>> r = Polygon(Point(0,0), 1, n=3)
>>> r.vertices[0]
Point2D(1, 0)
>>> r.spin(pi/6)
>>> r.vertices[0]
Point2D(sqrt(3)/2, 1/2)
rotation
rotate
Creates a copy of the RegularPolygon rotated about a Point
vertices
The vertices of the RegularPolygon.
Returns: vertices : list Each vertex is a Point.
Examples
>>> from sympy.geometry import RegularPolygon, Point
>>> rp = RegularPolygon(Point(0, 0), 5, 4)
>>> rp.vertices
[Point2D(5, 0), Point2D(0, 5), Point2D(-5, 0), Point2D(0, -5)]
class sympy.geometry.polygon.Triangle[source]
A polygon with three vertices and three sides.
Parameters: points : sequence of Points, or a keyword argument asa, sas, or sss specifying sides/angles of the triangle. Raises: GeometryError if the number of vertices is not equal to three, or one of the vertices is not a Point, or a valid keyword is not given.
Examples
>>> from sympy.geometry import Triangle, Point
>>> Triangle(Point(0, 0), Point(4, 0), Point(4, 3))
Triangle(Point2D(0, 0), Point2D(4, 0), Point2D(4, 3))
Keywords sss, sas, or asa can be used to give the desired side lengths (in order) and interior angles (in degrees) that define the triangle:
>>> Triangle(sss=(3, 4, 5))
Triangle(Point2D(0, 0), Point2D(3, 0), Point2D(3, 4))
>>> Triangle(asa=(30, 1, 30))
Triangle(Point2D(0, 0), Point2D(1, 0), Point2D(1/2, sqrt(3)/6))
>>> Triangle(sas=(1, 45, 2))
Triangle(Point2D(0, 0), Point2D(2, 0), Point2D(sqrt(2)/2, sqrt(2)/2))
Attributes
altitudes
The altitudes of the triangle.
An altitude of a triangle is a segment through a vertex, perpendicular to the opposite side, with length being the height of the vertex measured from the line containing the side.
Returns: altitudes : dict The dictionary consists of keys which are vertices and values which are Segments.
Examples
>>> from sympy.geometry import Point, Triangle
>>> p1, p2, p3 = Point(0, 0), Point(1, 0), Point(0, 1)
>>> t = Triangle(p1, p2, p3)
>>> t.altitudes[p1]
Segment2D(Point2D(0, 0), Point2D(1/2, 1/2))
bisectors()[source]
The angle bisectors of the triangle.
An angle bisector of a triangle is a straight line through a vertex which cuts the corresponding angle in half.
Returns: bisectors : dict Each key is a vertex (Point) and each value is the corresponding bisector (Segment).
Examples
>>> from sympy.geometry import Point, Triangle, Segment
>>> p1, p2, p3 = Point(0, 0), Point(1, 0), Point(0, 1)
>>> t = Triangle(p1, p2, p3)
>>> from sympy import sqrt
>>> t.bisectors()[p2] == Segment(Point(1, 0), Point(0, sqrt(2) - 1))
True
circumcenter
The circumcenter of the triangle
The circumcenter is the center of the circumcircle.
Returns: circumcenter : Point
Examples
>>> from sympy.geometry import Point, Triangle
>>> p1, p2, p3 = Point(0, 0), Point(1, 0), Point(0, 1)
>>> t = Triangle(p1, p2, p3)
>>> t.circumcenter
Point2D(1/2, 1/2)
circumcircle
The circle which passes through the three vertices of the triangle.
Returns: circumcircle : Circle
Examples
>>> from sympy.geometry import Point, Triangle
>>> p1, p2, p3 = Point(0, 0), Point(1, 0), Point(0, 1)
>>> t = Triangle(p1, p2, p3)
>>> t.circumcircle
Circle(Point2D(1/2, 1/2), sqrt(2)/2)
circumradius
The radius of the circumcircle of the triangle.
Returns: circumradius : number or Basic instance
Examples
>>> from sympy import Symbol
>>> from sympy.geometry import Point, Triangle
>>> a = Symbol('a')
>>> p1, p2, p3 = Point(0, 0), Point(1, 0), Point(0, a)
>>> t = Triangle(p1, p2, p3)
>>> t.circumradius
sqrt(a**2/4 + 1/4)
eulerline
The Euler line of the triangle.
The line which passes through circumcenter, centroid and orthocenter.
Returns: eulerline : Line (or Point for equilateral triangles in which case all centers coincide)
Examples
>>> from sympy.geometry import Point, Triangle
>>> p1, p2, p3 = Point(0, 0), Point(1, 0), Point(0, 1)
>>> t = Triangle(p1, p2, p3)
>>> t.eulerline
Line2D(Point2D(0, 0), Point2D(1/2, 1/2))
exradii
The radius of excircles of a triangle.
An excircle of the triangle is a circle lying outside the triangle, tangent to one of its sides and tangent to the extensions of the other two.
Examples
The exradius touches the side of the triangle to which it is keyed, e.g. the exradius touching side 2 is:
>>> from sympy.geometry import Point, Triangle, Segment2D, Point2D
>>> p1, p2, p3 = Point(0, 0), Point(6, 0), Point(0, 2)
>>> t = Triangle(p1, p2, p3)
>>> t.exradii[t.sides[2]]
-2 + sqrt(10)
incenter
The center of the incircle.
The incircle is the circle which lies inside the triangle and touches all three sides.
Returns: incenter : Point
Examples
>>> from sympy.geometry import Point, Triangle
>>> p1, p2, p3 = Point(0, 0), Point(1, 0), Point(0, 1)
>>> t = Triangle(p1, p2, p3)
>>> t.incenter
Point2D(-sqrt(2)/2 + 1, -sqrt(2)/2 + 1)
incircle
The incircle of the triangle.
The incircle is the circle which lies inside the triangle and touches all three sides.
Returns: incircle : Circle
Examples
>>> from sympy.geometry import Point, Triangle
>>> p1, p2, p3 = Point(0, 0), Point(2, 0), Point(0, 2)
>>> t = Triangle(p1, p2, p3)
>>> t.incircle
Circle(Point2D(-sqrt(2) + 2, -sqrt(2) + 2), -sqrt(2) + 2)
inradius
Returns: inradius : number or Basic instance
Examples
>>> from sympy.geometry import Point, Triangle
>>> p1, p2, p3 = Point(0, 0), Point(4, 0), Point(0, 3)
>>> t = Triangle(p1, p2, p3)
>>> t.inradius
1
is_equilateral()[source]
Are all the sides the same length?
Returns: is_equilateral : boolean
Examples
>>> from sympy.geometry import Triangle, Point
>>> t1 = Triangle(Point(0, 0), Point(4, 0), Point(4, 3))
>>> t1.is_equilateral()
False
>>> from sympy import sqrt
>>> t2 = Triangle(Point(0, 0), Point(10, 0), Point(5, 5*sqrt(3)))
>>> t2.is_equilateral()
True
is_isosceles()[source]
Are two or more of the sides the same length?
Returns: is_isosceles : boolean
Examples
>>> from sympy.geometry import Triangle, Point
>>> t1 = Triangle(Point(0, 0), Point(4, 0), Point(2, 4))
>>> t1.is_isosceles()
True
is_right()[source]
Is the triangle right-angled?
Returns: is_right : boolean
Examples
>>> from sympy.geometry import Triangle, Point
>>> t1 = Triangle(Point(0, 0), Point(4, 0), Point(4, 3))
>>> t1.is_right()
True
is_scalene()[source]
Are all the sides of the triangle of different lengths?
Returns: is_scalene : boolean
Examples
>>> from sympy.geometry import Triangle, Point
>>> t1 = Triangle(Point(0, 0), Point(4, 0), Point(1, 4))
>>> t1.is_scalene()
True
is_similar(t2)[source]
Is another triangle similar to this one.
Two triangles are similar if one can be uniformly scaled to the other.
Parameters: other : Triangle. Returns: is_similar : boolean.
Examples
>>> from sympy.geometry import Triangle, Point
>>> t1 = Triangle(Point(0, 0), Point(4, 0), Point(4, 3))
>>> t2 = Triangle(Point(0, 0), Point(-4, 0), Point(-4, -3))
>>> t1.is_similar(t2)
True
>>> t2 = Triangle(Point(0, 0), Point(-4, 0), Point(-4, -4))
>>> t1.is_similar(t2)
False
medial
The medial triangle of the triangle.
The triangle which is formed from the midpoints of the three sides.
Returns: medial : Triangle
Examples
>>> from sympy.geometry import Point, Triangle
>>> p1, p2, p3 = Point(0, 0), Point(1, 0), Point(0, 1)
>>> t = Triangle(p1, p2, p3)
>>> t.medial
Triangle(Point2D(1/2, 0), Point2D(1/2, 1/2), Point2D(0, 1/2))
medians
The medians of the triangle.
A median of a triangle is a straight line through a vertex and the midpoint of the opposite side, and divides the triangle into two equal areas.
Returns: medians : dict Each key is a vertex (Point) and each value is the median (Segment) at that point.
Examples
>>> from sympy.geometry import Point, Triangle
>>> p1, p2, p3 = Point(0, 0), Point(1, 0), Point(0, 1)
>>> t = Triangle(p1, p2, p3)
>>> t.medians[p1]
Segment2D(Point2D(0, 0), Point2D(1/2, 1/2))
nine_point_circle
The nine-point circle of the triangle.
Nine-point circle is the circumcircle of the medial triangle, which passes through the feet of altitudes and the middle points of segments connecting the vertices and the orthocenter.
Returns: nine_point_circle : Circle
Examples
>>> from sympy.geometry import Point, Triangle
>>> p1, p2, p3 = Point(0, 0), Point(1, 0), Point(0, 1)
>>> t = Triangle(p1, p2, p3)
>>> t.nine_point_circle
Circle(Point2D(1/4, 1/4), sqrt(2)/4)
orthocenter
The orthocenter of the triangle.
The orthocenter is the intersection of the altitudes of a triangle. It may lie inside, outside or on the triangle.
Returns: orthocenter : Point
Examples
>>> from sympy.geometry import Point, Triangle
>>> p1, p2, p3 = Point(0, 0), Point(1, 0), Point(0, 1)
>>> t = Triangle(p1, p2, p3)
>>> t.orthocenter
Point2D(0, 0)
vertices
The triangle’s vertices
Returns: vertices : tuple Each element in the tuple is a Point
Examples
>>> from sympy.geometry import Triangle, Point
>>> t = Triangle(Point(0, 0), Point(4, 0), Point(4, 3))
>>> t.vertices
(Point2D(0, 0), Point2D(4, 0), Point2D(4, 3))
http://math.stackexchange.com/questions/126250/difference-in-probability-distributions-from-two-different-kernels | # Difference in probability distributions from two different kernels
I wonder whether, if the probability kernels of two Markov processes on the same state space are close enough, the same closeness holds for the probabilities of events that depend only on the first $n$ values of the process.
More formally, let $(E,\mathscr E)$ be a measurable space and put $(E^n,\mathscr E^n)$, where $\mathscr E^n$ is the product $\sigma$-algebra. We say that $P$ is a stochastic kernel on $E$ if $$P:E\times\mathscr E\to [0,1]$$ is such that $P(x,\cdot)$ is a probability measure on $(E,\mathscr E)$ for all $x\in E$ and $x\mapsto P(x,A)$ is a measurable function for all $A\in \mathscr E$. On the space $b\mathscr E$ of real-valued bounded measurable functions with a sup-norm $\|f\| = \sup\limits_{x\in E}|f(x)|$ we define the operator $$Pf(x) = \int\limits_E f(y)P(x,dy).$$ Its induced norm is given by $\|P\| = \sup\limits_{f\in b\mathscr E\setminus 0}\frac{\|Pf\|}{\|f\|}.$ Furthermore, for any stochastic kernel $P$ we can assign the family of probability measures $(\mathsf P_x)_{x\in E}$ on $(E^n,\mathscr E^n)$ which is defined uniquely by $$\mathsf P_x(A_0\times A_1\times \dots\times A_n) = 1_{A_0}(x)\int\limits_{A_n}\dots \int\limits_{A_1}P(x,dx_1)\dots P(x_{n-1},dx_n).$$
Let us consider another kernel $\tilde P$ which as well defines the operator on $b\mathscr E$ and the family of probability measures $\tilde{\mathsf P}_x$ on $(E^n,\mathscr E^n)$. I wonder what is the upper-bound on $$\sup\limits_{x\in E}\sup\limits_{F\in \mathscr E^n}|\tilde{\mathsf P}_x(F) - \mathsf P_x(F)|.$$
By induction it is easy to prove that $$\sup\limits_{x\in E}|\tilde{\mathsf P}_x(A_0\times A_1\times \dots\times A_n)-\mathsf P_x(A_0\times A_1\times \dots\times A_n)|\leq n\cdot\|\tilde P - P\|$$ but I am not sure if this result can be extended to any subset of $\mathscr E^n$.
-
The notation can be a bit cumbersome because of the nested integrals, but this solution relies only on very basic properties of integration and is direct (no induction).
Consider the difference $a(x_0,F)=\mathsf P'_{x_0}(F)-\mathsf P_{x_0}(F)$. By uniqueness of measure it follows from the definition of $\mathsf P_{x_0}$ that: \begin{align} a(x_0,F) =&\int_E\dots\int_E 1_F(x_1,\dots x_n) P'(x_{n-1},dx_n)\dots P'(x_0,dx_1)\\ &- \int_E\dots\int_E 1_F(x_1,\dots x_n) P(x_{n-1},dx_n)\dots P(x_0,dx_1) \end{align}
By introducing intermediate telescoping terms we can split this into a sum of $n$ terms $a(x_0,F)=\sum_{j=1}^n a_j(x_0,F)$ where \begin{align} a_j(x_0,F) =& \int_E\dots\int_E 1_F(x_1,\dots x_n) P'(x_{n-1},dx_n) \dots P'(x_{j-1},dx_j) P(x_{j-2},dx_{j-1})\dots P(x_0,dx_1)\\ &- \int_E\dots\int_E 1_F(x_1,\dots x_n) P'(x_{n-1},dx_n) \dots P'(x_j,dx_{j+1}) P(x_{j-1},dx_j)\dots P(x_0,dx_1) \end{align}
The innermost part (consisting of $n-j$ nested integrals) is common to both terms, to make things more readable we factor it into $$g_j(x_1,\dots x_j) = \displaystyle\int_E\dots\int_E 1_F(x_1,\dots x_n) P'(x_{n-1},dx_n)\dots P'(x_j,dx_{j+1})$$ Then by linearity and $|\int f|\le\int |f|$: \begin{align} |a_j(x_0,F)|\le& \int_E\dots\int_E\left|\int_E g_j(x_1,\dots x_j) \left(P'(x_{j-1},dx_j)-P(x_{j-1},dx_j)\right)\right| P(x_{j-2},dx_{j-1})\dots\\ \le& \int_E\dots\int_E\|P'-P\| P(x_{j-2},dx_{j-1})\dots P(x_0,dx_1)\\ =& \|P'-P\|\\ |a(x_0,F)|\le& n\cdot\|P'-P\| \end{align} (the bound of the integral by $\|P'-P\|$ comes from considering the function $g_j(x_1,\dots x_{j-1},\cdot)$)
-
Thanks for your effort - but I tried to state a problem in a rigorous way and was willing to get the answer given in the same way. Conditioning on $X_{j-1} = x$, say, does not look rigorous. – Ilya Apr 19 '12 at 22:21
You're right, the notation was a bit sloppy, I updated the answer with a simpler solution that keeps your original notation. – Generic Human Apr 21 '12 at 12:07
+1 Now I understand your idea and I liked it, but it should be further formalized. Namely, the first formula does not seem to be obvious and requires proof as I made in step 2. of my answer. – Ilya Apr 22 '12 at 11:15
I'm not sure which formula you're referring to. The first line is a direct consequence of unicity: each term in the RHS clearly defines a measure on $\mathscr E^n$ and it matches the LHS for cartesian products (factor $1_F$ into the individual $1_{A_i}(x_i)$ which you can now move next to the corresponding integral sign), so the equality holds. In the second line, the fact that $a$ is the sum of the $a_j$ is due to the fact that all terms cancel out except the first and the last one: you don't need to look at the contents of the integrals. – Generic Human Apr 23 '12 at 0:14
I hope I have a solution for the problem, so I post it here. I'll be happy if you comment on the solution if it is correct or maybe provide more short and neat one.
1. First of all, I change the notation a bit and use $\mathsf P_x^n$ instead of $\mathsf P_x$ in the OP to denote the probability measure on the space $(E^n,\mathcal E^n)$, just to mention the dependence on $n$ explicitly. Then for all measurable rectangles $B = A_1\times A_2\times\dots\times A_n\in \mathcal E^{n-1}$ and the set $A_0\in \mathcal E$ it holds that $$\mathsf P_x^n(A_0\times B) = 1_{A_0}(x)\int\limits_{A_1}\dots \int\limits_{A_n}P(x_{n-1},dx_n)\dots P(x,dx_1) = 1_{A_0}(x)\int\limits_E \mathsf P_{y}^{n-1}(B)P(x,dy).$$ By the uniqueness of the probability measure $\mathsf P_x^n$ the same result holds for any $B\in \mathcal E^{n-1}$: $$\mathsf P_x^n(A_0\times B) = 1_{A_0}(x)\int\limits_E \mathsf P_{y}^{n-1}(B)P(x,dy). \tag{1}$$
2. For any set $C\in \mathcal E^n = \mathcal E\times\mathcal E^{n-1}$ we can show that $$\mathsf P_x^n(C) = \int\limits_E \mathsf P_y^{n-1}(C_x)P(x,dy) \tag{2}$$ where $C_x = \{y\in E^{n-1}:(x,y)\in C\}\in\mathcal E^{n-1}$. To prove it we first verify $(2)$ for measurable rectangles $C = A\times B$ using $(1)$, hence $C_x = B$ if $x\in A$ and $\emptyset$ otherwise. By the advise of @tb this result further extends to all $C\in \mathcal E^n$ by $\pi$-$\lambda$ theorem.
3. The inequality $\left|\tilde{\mathsf P}_x^n(C) - \mathsf P_x^n(C)\right|\leq n\|\tilde P-P\|$ can be proved then by induction: it clearly holds for $n=1$ $$\left|\tilde{\mathsf P}^1_x(C) - \mathsf P^1_x(C)\right| = \left|\tilde P(x,C_x) - P(x,C_x)\right|\leq 1\cdot\|\tilde P - P\|.$$ If the same inequality holds for $n-1$, we have \begin{align} \left|\tilde{\mathsf P}_x^n(C) - \mathsf P_x^n(C)\right| &= \left|\int\limits_E \tilde{\mathsf P}_y^{n-1}(C_x)\tilde P(x,dy)-\int\limits_E \mathsf P_y^{n-1}(C_x)P(x,dy)\right| \\ &\leq \left|\int\limits_E \left(\tilde{\mathsf P}_y^{n-1}(C_x) - \mathsf P_y^{n-1}(C_x)\right)\tilde P(x,dy)\right|+\left|\langle \tilde P(x,\cdot) - P(x,\cdot),\mathsf P_{(\cdot)}^{n-1}(C_x)\rangle\right| \\ &\leq (n-1)\|\tilde P - P\|+\|\tilde P - P\| = n\|\tilde P - P\| \end{align} where $\langle m(\cdot),f(\cdot)\rangle = \int\limits_E f(y)\mu(dy)$ for all measurable bounded $f$ and all finite signed measures $\mu$. Since the RHS in the bound derived does not depend on $x,C$ it means that we proved desired bounds.
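As a sanity check on this bound (not part of the proof above), one can verify it numerically for a small finite state space, where the measures on $E^n$ can be enumerated exactly. The sketch below uses Python with NumPy purely for illustration; the two kernels, the state space size and the horizon $n$ are arbitrary choices, and the supremum over $F\in\mathscr E^n$ is computed as the total variation distance, i.e. half the $L^1$ distance between the two path distributions.
import itertools
import numpy as np
# Two nearby stochastic kernels on a 3-point state space (entries chosen arbitrarily).
P  = np.array([[0.5, 0.3, 0.2],
               [0.1, 0.6, 0.3],
               [0.3, 0.3, 0.4]])
Pt = np.array([[0.45, 0.35, 0.20],
               [0.15, 0.55, 0.30],
               [0.30, 0.25, 0.45]])
# Operator norm induced by the sup norm: for stochastic matrices this is the
# maximum absolute row sum of the difference.
op_norm = np.abs(Pt - P).sum(axis=1).max()
def path_prob(K, x0, path):
    # Probability of observing (x_1, ..., x_n) = path when starting from x0 under kernel K.
    p, prev = 1.0, x0
    for x in path:
        p *= K[prev, x]
        prev = x
    return p
n = 4
states = range(P.shape[0])
for x0 in states:
    # sup over all F in E^n of |P~_x(F) - P_x(F)| equals the total variation distance,
    # i.e. half the L1 distance between the two path distributions.
    tv = 0.5 * sum(abs(path_prob(Pt, x0, w) - path_prob(P, x0, w))
                   for w in itertools.product(states, repeat=n))
    print(x0, tv, n * op_norm, tv <= n * op_norm + 1e-12)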
-
Great solution! – Generic Human Apr 21 '12 at 12:12
https://dsp.stackexchange.com/questions/15190/how-to-create-a-convolution-matrix-with-a-variable-condition-number-cn/17595 | How to Create a Convolution Matrix with a Variable Condition Number (CN)
I want to know the performance of a deconvolution algorithm for different CNs, so I'm convolving my signal with different convolution matrices (different CNs), applying the deconvolution algorithm, and then measuring the error between the original and reconstructed signals.
Is there a proper way to create a convolution matrix with a variable CN?
• Can you please review my answer? If it satisfies you, please mark it. If not, let me know what's missing and I will improve it. – Royi Oct 9 at 11:27
2 Answers
Let's try to think of it intuitively.
Given the LPF, what would make the inverse "hard"?
The sections we won't be able to recover are where the LPF is 0, since there is nothing there we can multiply the result by to invert it.
A real-world LPF usually won't reach an exact zero.
What makes recovery hard is a large ratio between the biggest and the smallest magnitude.
The closer and faster your LPF goes to zero (while still having large magnitudes elsewhere), the harder the recovery will be.
Now just build analog LPFs that meet these requirements at different scales, digitize them, create the convolution matrices, and there you have it...
A practical example is given in my answer to 1D Deconvolution with Gaussian Kernel (MATLAB).
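To make this concrete, here is a minimal sketch in Python with NumPy/SciPy (the linked answer uses MATLAB; scipy.linalg.convolution_matrix is assumed available, SciPy 1.5 or later). It builds convolution matrices from Gaussian LPF kernels of increasing width and prints their condition numbers; the smoother and wider the kernel, the faster its frequency response decays and the larger the CN. All sizes and widths are arbitrary illustrative choices.
import numpy as np
from scipy.linalg import convolution_matrix
n = 256                                         # signal length (arbitrary)
t = np.arange(-8, 9)                            # kernel support (arbitrary)
for sigma in (0.5, 1.0, 2.0, 4.0):
    h = np.exp(-0.5 * (t / sigma) ** 2)
    h /= h.sum()                                # normalized Gaussian LPF kernel
    A = convolution_matrix(h, n, mode='same')   # Toeplitz convolution matrix
    print(sigma, np.linalg.cond(A))             # L2 condition number grows with sigma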
The condition number (CN) of a matrix (in the $$L_2$$ norm) is the square root of the ratio of the largest to the smallest eigenvalue of $$A^TA$$, i.e. the ratio of the extreme singular values. We see here that the CN is governed by the extreme (largest and smallest) eigenvalues. It looks like we can vary the CN by varying the eigenvalues.
A convolution matrix is a Toeplitz matrix. There are some special Toeplitz matrices, such as tri-diagonal and penta-diagonal matrices, whose eigenvalues are well known in terms of the matrix entries [1-2]. In these cases, by varying the matrix entries we can vary the eigenvalues and in turn the CN.
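As an illustration of this idea (a sketch with my own choice of entries, not taken from the references): a symmetric tridiagonal Toeplitz matrix with diagonal a and off-diagonals b has eigenvalues a + 2 b cos(k pi / (N+1)), k = 1..N, so the extreme eigenvalues, and hence the CN, follow directly from (a, b).
import numpy as np
def tridiag_toeplitz(a, b, N):
    # Symmetric tridiagonal Toeplitz matrix: a on the diagonal, b on both off-diagonals.
    return (np.diag(np.full(N, a))
            + np.diag(np.full(N - 1, b), 1)
            + np.diag(np.full(N - 1, b), -1))
N = 50
k = np.arange(1, N + 1)
for a, b in [(2.1, -1.0), (2.5, -1.0), (4.0, -1.0)]:
    lam = a + 2 * b * np.cos(k * np.pi / (N + 1))   # closed-form eigenvalues
    A = tridiag_toeplitz(a, b, N)
    # The numerically computed CN should match the ratio of extreme eigenvalue magnitudes.
    print(a, b, np.linalg.cond(A), np.abs(lam).max() / np.abs(lam).min())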
[1] S. Noschese, L. Pasquini, and L. Reichel, "Tridiagonal Toeplitz matrices: properties and novel applications," Numer. Linear Algebra Appl., 20 (2013), pp. 302-326
[2] G. D. Smith, Numerical Solution of Partial Differential Equations, 2nd ed., Clarendon Press, Oxford, 1978.
https://bibbase.org/network/publication/shendure-ji-nextgenerationdnasequencing-2008 | Next-generation DNA sequencing. Shendure, J. & Ji, H. Nat Biotechnol, 26(10):1135–1145, 2008.
DNA sequence represents a single format onto which a broad range of biological phenomena can be projected for high-throughput data collection. Over the past three years, massively parallel DNA sequencing platforms have become widely available, reducing the cost of DNA sequencing by over two orders of magnitude, and democratizing the field by putting the sequencing capacity of a major genome center in the hands of individual investigators. These new technologies are rapidly evolving, and near-term challenges include the development of robust protocols for generating sequencing libraries, building effective new approaches to data-analysis, and often a rethinking of experimental design. Next-generation DNA sequencing has the potential to dramatically accelerate biological and biomedical research, by enabling the comprehensive analysis of genomes, transcriptomes and interactomes to become inexpensive, routine and widespread, rather than requiring significant production-scale efforts.
@Article{shendure08next-generation,
author = {Jay Shendure and Hanlee Ji},
title = {Next-generation {DNA} sequencing.},
journal = {Nat Biotechnol},
year = {2008},
volume = {26},
number = {10},
pages = {1135--1145},
abstract = {DNA sequence represents a single format onto which a broad range of biological phenomena can be projected for high-throughput data collection. Over the past three years, massively parallel DNA sequencing platforms have become widely available, reducing the cost of DNA sequencing by over two orders of magnitude, and democratizing the field by putting the sequencing capacity of a major genome center in the hands of individual investigators. These new technologies are rapidly evolving, and near-term challenges include the development of robust protocols for generating sequencing libraries, building effective new approaches to data-analysis, and often a rethinking of experimental design. Next-generation DNA sequencing has the potential to dramatically accelerate biological and biomedical research, by enabling the comprehensive analysis of genomes, transcriptomes and interactomes to become inexpensive, routine and widespread, rather than requiring significant production-scale efforts.},
doi = {10.1038/nbt1486},
keywords = {Chromosome Mapping; Forecasting; Genomics; Sequence Alignment; Sequence Analysis, DNA},
optmonth = oct,
owner = {swinter},
pmid = {18846087},
timestamp = {2010.04.09},
}
http://mathhelpforum.com/pre-calculus/148275-ln-x-e-x-find-x.html | # Math Help - ln(x)=e^-x find x
1. ## ln(x)=e^-x find x
ln(x)=e^(-x)
find x...
2. Originally Posted by ice_syncer
ln(x)=e^(-x)
find x...
Hi ice_syncer,
you could use Newton's Method for finding roots.
$f(x)=e^{-x}-lnx$
The solution to $lnx=e^{-x}$
is the root of $f(x)=0$
Take an initial shot at the solution as x=1.6
Then
$x_{n+1}=x_n-\frac{f(x_n)}{f'(x_n)}=1.6-\frac{e^{-1.6}-ln(1.6)}{-e^{-1.6}-\frac{1}{1.6}}=1.275767$
A few more iterations would close in on the root, approximately 1.31
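For anyone wanting to reproduce the iterations, here is a minimal sketch in plain Python (the function, its derivative and the starting point are exactly those above):
import math
def f(x):
    return math.exp(-x) - math.log(x)
def fprime(x):
    return -math.exp(-x) - 1.0 / x
x = 1.6
for _ in range(6):
    x = x - f(x) / fprime(x)   # Newton step
    print(x)
# the first step gives 1.275767..., and the iterates settle near 1.3098, i.e. about 1.31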
3. Originally Posted by ice_syncer
ln(x)=e^(-x)
find x...
Please post the whole question (in particular the background to it that probably includes something like "Find the approximate solution of ....")
4. how did you think of the initial shot ie x = 1.6
5. Originally Posted by ice_syncer
how did you think of the initial shot ie x = 1.6
$e^{-x}$ is a decreasing function, while $lnx$ is increasing.
$e^{-1}=0.368$
$ln1=0$
$e^{-2}=0.135$
$ln2=0.693$
Hence these cross between x=1 and x=2.
You could choose any value of x in the vicinity as a starting point, say x=1.5 or so. Choosing x=1 or x=2 would also suffice.
The attachment shows Newton's method homing in on the solution.
https://en.wikipedia.org/wiki/Church_encoding | # Church encoding
In mathematics, Church encoding is a means of representing data and operators in the lambda calculus. The Church numerals are a representation of the natural numbers using lambda notation. The method is named for Alonzo Church, who first encoded data in the lambda calculus this way.
Terms that are usually considered primitive in other notations (such as integers, booleans, pairs, lists, and tagged unions) are mapped to higher-order functions under Church encoding. The Church-Turing thesis asserts that any computable operator (and its operands) can be represented under Church encoding. In the untyped lambda calculus the only primitive data type is the function.
The Church encoding is not intended as a practical implementation of primitive data types. Its use is to show that other primitive data types are not required to represent any calculation. The completeness is representational. Additional functions are needed to translate the representation into common data types, for display to people. It is not possible in general to decide if two functions are extensionally equal due to the undecidability of equivalence from Church's theorem. The translation may apply the function in some way to retrieve the value it represents, or look up its value as a literal lambda term.
Lambda calculus is usually interpreted as using intensional equality. There are potential problems with the interpretation of results because of the difference between the intensional and extensional definition of equality.
## Church numerals
Church numerals are the representations of natural numbers under Church encoding. The higher-order function that represents natural number n is a function that maps any function ${\displaystyle f}$ to its n-fold composition. In simpler terms, the "value" of the numeral is equivalent to the number of times the function encapsulates its argument.
${\displaystyle f^{\circ n}=\underbrace {f\circ f\circ \cdots \circ f} _{n{\text{ times}}}.\,}$
All Church numerals are functions that take two parameters. Church numerals 0, 1, 2, ..., are defined as follows in the lambda calculus.
Starting with 0 not applying the function at all, proceed with 1 applying the function once, ...:
${\displaystyle {\begin{array}{r|l|l}{\text{Number}}&{\text{Function definition}}&{\text{Lambda expression}}\\\hline 0&0\ f\ x=x&0=\lambda f.\lambda x.x\\1&1\ f\ x=f\ x&1=\lambda f.\lambda x.f\ x\\2&2\ f\ x=f\ (f\ x)&2=\lambda f.\lambda x.f\ (f\ x)\\3&3\ f\ x=f\ (f\ (f\ x))&3=\lambda f.\lambda x.f\ (f\ (f\ x))\\\vdots &\vdots &\vdots \\n&n\ f\ x=f^{n}\ x&n=\lambda f.\lambda x.f^{\circ n}\ x\end{array}}}$
The Church numeral 3 represents the action of applying any given function three times to a value. The supplied function is first applied to a supplied parameter and then successively to its own result. The end result is not the numeral 3 (unless the supplied parameter happens to be 0 and the function is a successor function). The function itself, and not its end result, is the Church numeral 3. The Church numeral 3 means simply to do anything three times. It is an ostensive demonstration of what is meant by "three times".
### Calculation with Church numerals
Arithmetic operations on numbers may be represented by functions on Church numerals. These functions may be defined in lambda calculus, or implemented in most functional programming languages (see converting lambda expressions to functions).
The addition function ${\displaystyle \operatorname {plus} (m,n)=m+n}$ uses the identity ${\displaystyle f^{\circ (m+n)}(x)=f^{\circ m}(f^{\circ n}(x))}$.
${\displaystyle \operatorname {plus} \equiv \lambda m.\lambda n.\lambda f.\lambda x.m\ f\ (n\ f\ x)}$
The successor function ${\displaystyle \operatorname {succ} (n)=n+1}$ is β-equivalent to ${\displaystyle (\operatorname {plus} \ 1)}$.
${\displaystyle \operatorname {succ} \equiv \lambda n.\lambda f.\lambda x.f\ (n\ f\ x)}$
The multiplication function ${\displaystyle \operatorname {mult} (m,n)=m*n}$ uses the identity ${\displaystyle f^{\circ (m*n)}(x)=(f^{\circ n})^{\circ m}(x)}$.
${\displaystyle \operatorname {mult} \equiv \lambda m.\lambda n.\lambda f.\lambda x.m\ (n\ f)\ x}$
The exponentiation function ${\displaystyle \operatorname {exp} (m,n)=m^{n}}$ is given by the definition of Church numerals; ${\displaystyle n\ f\ x=f^{n}\ x}$. In the definition substitute ${\displaystyle f\to m,x\to f}$ to get ${\displaystyle n\ m\ f=m^{n}\ f}$ and,
${\displaystyle \operatorname {exp} \ m\ n=m^{n}=n\ m}$
which gives the lambda expression,
${\displaystyle \operatorname {exp} \equiv \lambda m.\lambda n.n\ m}$
The ${\displaystyle \operatorname {pred} (n)}$ function is more difficult to understand.
${\displaystyle \operatorname {pred} \equiv \lambda n.\lambda f.\lambda x.n\ (\lambda g.\lambda h.h\ (g\ f))\ (\lambda u.x)\ (\lambda u.u)}$
A Church numeral applies a function n times. The predecessor function must return a function that applies its parameter n - 1 times. This is achieved by building a container around f and x, which is initialized in a way that omits the application of the function the first time. See predecessor for a more detailed explanation.
The subtraction function can be written based on the predecessor function.
${\displaystyle \operatorname {minus} \equiv \lambda m.\lambda n.(n\operatorname {pred} )\ m}$
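To experiment with these definitions, here is a minimal sketch written as Python lambdas (Python is chosen only because it has first-class functions; to_int is a helper introduced here for printing and is not part of the encoding):
zero  = lambda f: lambda x: x
succ  = lambda n: lambda f: lambda x: f(n(f)(x))
plus  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
mult  = lambda m: lambda n: lambda f: lambda x: m(n(f))(x)
exp   = lambda m: lambda n: n(m)
pred  = lambda n: lambda f: lambda x: n(lambda g: lambda h: h(g(f)))(lambda u: x)(lambda u: u)
minus = lambda m: lambda n: n(pred)(m)
to_int = lambda n: n(lambda k: k + 1)(0)   # helper: read a Church numeral back as an int
one, two, three = succ(zero), succ(succ(zero)), succ(succ(succ(zero)))
print(to_int(plus(two)(three)))    # 5
print(to_int(mult(two)(three)))    # 6
print(to_int(exp(two)(three)))     # exp m n = m^n, so 2**3 = 8
print(to_int(minus(three)(two)))   # 1
print(to_int(minus(two)(three)))   # 0 (subtraction is truncated at zero)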
### Table of functions on Church numerals
Function Algebra Identity Function definition Lambda expressions
Successor ${\displaystyle n+1}$ ${\displaystyle f^{n+1}\ x=f(f^{n}x)}$ ${\displaystyle \operatorname {succ} \ n\ f\ x=f\ (n\ f\ x)}$ ${\displaystyle \lambda n.\lambda f.\lambda x.f\ (n\ f\ x)}$ ...
Addition ${\displaystyle m+n}$ ${\displaystyle f^{m+n}\ x=f^{m}(f^{n}x)}$ ${\displaystyle \operatorname {plus} \ m\ n\ f\ x=m\ f\ (n\ f\ x)}$ ${\displaystyle \lambda m.\lambda n.\lambda f.\lambda x.m\ f\ (n\ f\ x)}$ ${\displaystyle \lambda m.\lambda n.n\operatorname {succ} m}$
Multiplication ${\displaystyle m*n}$ ${\displaystyle f^{m*n}\ x=(f^{m})^{n}\ x}$ ${\displaystyle \operatorname {multiply} \ m\ n\ f\ x=m\ (n\ f)\ x}$ ${\displaystyle \lambda m.\lambda n.\lambda f.\lambda x.m\ (n\ f)\ x}$ ${\displaystyle \lambda m.\lambda n.\lambda f.m\ (n\ f)}$
Exponentiation ${\displaystyle m^{n}}$ ${\displaystyle n\ m\ f=m^{n}\ f}$[1] ${\displaystyle \operatorname {exp} \ m\ n\ f\ x=(n\ m)\ f\ x}$ ${\displaystyle \lambda m.\lambda n.\lambda f.\lambda x.(n\ m)\ f\ x}$ ${\displaystyle \lambda m.\lambda n.n\ m}$
Predecessor* ${\displaystyle n-1}$ ${\displaystyle \operatorname {inc} ^{n}\operatorname {con} =\operatorname {val} (f^{n-1}x)}$ ${\displaystyle if(n==0)\ 0\ else\ (n-1)}$
${\displaystyle \lambda n.\lambda f.\lambda x.n\ (\lambda g.\lambda h.h\ (g\ f))\ (\lambda u.x)\ (\lambda u.u)}$
Subtraction* ${\displaystyle m-n}$ ${\displaystyle f^{m-n}\ x=(f^{-1})^{n}(f^{m}x)}$ ${\displaystyle \operatorname {minus} \ m\ n=(n\operatorname {pred} )\ m}$ ... ${\displaystyle \lambda m.\lambda n.n\operatorname {pred} m}$
* Note that in the Church encoding,
• ${\displaystyle \operatorname {pred} (0)=0}$
• ${\displaystyle m<n\Rightarrow \operatorname {minus} \ m\ n=0}$
### Derivation of predecessor function
The predecessor function used in the Church encoding is,
${\displaystyle \operatorname {pred} (n)={\begin{cases}0&{\mbox{if }}n=0,\\n-1&{\mbox{otherwise}}\end{cases}}}$.
To build the predecessor we need a way of applying the function 1 fewer time. A numeral n applies the function f n times to x. The predecessor function must use the numeral n to apply the function n-1 times.
Before implementing the predecessor function, here is a scheme that wraps the value in a container function. We will define new functions to use in place of f and x, called inc and init. The container function is called value. The left hand side of the table shows a numeral n applied to inc and init.
${\displaystyle {\begin{array}{r|r|r}{\text{Number}}&{\text{Using init}}&{\text{Using const}}\\\hline 0&\operatorname {init} =\operatorname {value} \ x&\\1&\operatorname {inc} \ \operatorname {init} =\operatorname {value} \ (f\ x)&\operatorname {inc} \ \operatorname {const} =\operatorname {value} \ x\\2&\operatorname {inc} \ (\operatorname {inc} \ \operatorname {init} )=\operatorname {value} \ (f\ (f\ x))&\operatorname {inc} \ (\operatorname {inc} \ \operatorname {const} )=\operatorname {value} \ (f\ x)\\3&\operatorname {inc} \ (\operatorname {inc} \ (\operatorname {inc} \ \operatorname {init} ))=\operatorname {value} \ (f\ (f\ (f\ x)))&\operatorname {inc} \ (\operatorname {inc} \ (\operatorname {inc} \ \operatorname {const} ))=\operatorname {value} \ (f\ (f\ x))\\\vdots &\vdots &\vdots \\n&n\operatorname {inc} \ \operatorname {init} =\operatorname {value} \ (f^{n}\ x)=\operatorname {value} \ (n\ f\ x)&n\operatorname {inc} \ \operatorname {const} =\operatorname {value} \ (f^{n-1}\ x)=\operatorname {value} \ ((n-1)\ f\ x)\\\end{array}}}$
The general recurrence rule is,
${\displaystyle \operatorname {inc} \ (\operatorname {value} \ v)=\operatorname {value} \ (f\ v)}$
If there is also a function to retrieve the value from the container (called extract),
${\displaystyle \operatorname {extract} \ (\operatorname {value} \ v)=v}$
Then extract may be used to define the samenum function as,
${\displaystyle \operatorname {samenum} =\lambda n.\lambda f.\lambda x.\operatorname {extract} \ (n\operatorname {inc} \operatorname {init} )=\lambda n.\lambda f.\lambda x.\operatorname {extract} \ (\operatorname {value} \ (n\ f\ x))=\lambda n.\lambda f.\lambda x.n\ f\ x=\lambda n.n}$
The samenum function is not intrinsically useful. However, as inc delegates calling of f to its container argument, we can arrange that on the first application inc receives a special container that ignores its argument, allowing it to skip the first application of f. Call this new initial container const. The right hand side of the above table shows the expansions of n inc const. Then by replacing init with const in the expression for the samenum function we get the predecessor function,
${\displaystyle \operatorname {pred} =\lambda n.\lambda f.\lambda x.\operatorname {extract} \ (n\operatorname {inc} \operatorname {const} )=\lambda n.\lambda f.\lambda x.\operatorname {extract} \ (\operatorname {value} \ ((n-1)\ f\ x))=\lambda n.\lambda f.\lambda x.(n-1)\ f\ x=\lambda n.(n-1)}$
As explained below the functions inc, init, const, value and extract may be defined as,
{\displaystyle {\begin{aligned}\operatorname {value} &=\lambda v.(\lambda h.h\ v)\\\operatorname {extract} k&=k\ \lambda u.u\\\operatorname {inc} &=\lambda g.\lambda h.h\ (g\ f)\\\operatorname {init} &=\lambda h.h\ x\\\operatorname {const} &=\lambda u.x\end{aligned}}}
Which gives the lambda expression for pred as,
${\displaystyle \operatorname {pred} =\lambda n.\lambda f.\lambda x.n\ (\lambda g.\lambda h.h\ (g\ f))\ (\lambda u.x)\ (\lambda u.u)}$
#### Value container
The value container applies a function to its value. It is defined by,
${\displaystyle \operatorname {value} \ v\ h=h\ v}$
so,
${\displaystyle \operatorname {value} =\lambda v.(\lambda h.h\ v)}$
#### Inc
The inc function should take a value containing v, and return a new value containing f v.
${\displaystyle \operatorname {inc} \ (\operatorname {value} \ v)=\operatorname {value} \ (f\ v)}$
Letting g be the value container,
${\displaystyle g=\operatorname {value} \ v}$
then,
${\displaystyle g\ f=\operatorname {value} \ v\ f=f\ v}$
so,
${\displaystyle \operatorname {inc} \ g=\operatorname {value} \ (g\ f)}$
${\displaystyle \operatorname {inc} =\lambda g.\lambda h.h\ (g\ f)}$
#### Extract
The value may be extracted by applying the identity function,
${\displaystyle I=\lambda u.u}$
Using I,
${\displaystyle \operatorname {value} \ v\ I=v}$
so,
${\displaystyle \operatorname {extract} \ k=k\ I}$
#### Const
To implement pred the init function is replaced with the const that does not apply f. We need const to satisfy,
${\displaystyle \operatorname {inc} \ \operatorname {const} =\operatorname {value} \ x}$
${\displaystyle \lambda h.h\ (\operatorname {const} \ f)=\lambda h.h\ x}$
Which is satisfied if,
${\displaystyle \operatorname {const} \ f=x}$
Or as a lambda expression,
${\displaystyle \operatorname {const} =\lambda u.x}$
#### Another way of defining pred
Pred may also be defined using pairs:
{\displaystyle {\begin{aligned}\operatorname {f} =&\ \lambda p.\ \operatorname {pair} \ (\operatorname {second} \ p)\ (\operatorname {succ} \ (\operatorname {second} \ p))\\\operatorname {zero} =&\ (\lambda f.\lambda x.\ x)\\\operatorname {pc0} =&\ \operatorname {pair} \ \operatorname {zero} \ \operatorname {zero} \\\operatorname {pred} =&\ \lambda n.\ \operatorname {first} \ (n\ \operatorname {f} \ \operatorname {pc0} )\\\end{aligned}}}
This is a simpler definition, but leads to a more complex expression for pred. The expansion for ${\displaystyle \operatorname {pred} \operatorname {three} }$:
{\displaystyle {\begin{aligned}\operatorname {pred} \operatorname {three} =&\ \operatorname {first} \ (\operatorname {f} \ (\operatorname {f} \ (\operatorname {f} \ (\operatorname {pair} \ \operatorname {zero} \ \operatorname {zero} ))))\\=&\ \operatorname {first} \ (\operatorname {f} \ (\operatorname {f} \ (\operatorname {pair} \ \operatorname {zero} \ \operatorname {one} )))\\=&\ \operatorname {first} \ (\operatorname {f} \ (\operatorname {pair} \ \operatorname {one} \ \operatorname {two} ))\\=&\ \operatorname {first} \ (\operatorname {pair} \ \operatorname {two} \ \operatorname {three} )\\=&\ \operatorname {two} \end{aligned}}}
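Rendered as Python lambdas (an illustrative sketch; pair, first and second are the Church pairs defined later in this article, and to_int is a helper added here for printing):
pair   = lambda x: lambda y: lambda z: z(x)(y)
first  = lambda p: p(lambda x: lambda y: x)
second = lambda p: p(lambda x: lambda y: y)
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
step = lambda p: pair(second(p))(succ(second(p)))   # the function called "f" above
pc0  = pair(zero)(zero)
pred = lambda n: first(n(step)(pc0))
to_int = lambda n: n(lambda k: k + 1)(0)   # helper, not part of the encoding
three = succ(succ(succ(zero)))
print(to_int(pred(three)))   # 2
print(to_int(pred(zero)))    # 0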
### Division
Division of natural numbers may be implemented by,[2]
${\displaystyle n/m=\operatorname {if} \ n\geq m\ \operatorname {then} \ 1+(n-m)/m\ \operatorname {else} \ 0}$
Calculating ${\displaystyle n-m}$ takes many beta reductions. Unless doing the reduction by hand, this doesn't matter that much, but it is preferable to not have to do this calculation twice. The simplest predicate for testing numbers is IsZero so consider the condition.
${\displaystyle \operatorname {IsZero} \ (\operatorname {minus} \ n\ m)}$
But this condition is equivalent to ${\displaystyle n\leq m}$, not ${\displaystyle n<m}$. If this expression is used then the mathematical definition of division given above is translated into a function on Church numerals as,
${\displaystyle \operatorname {divide1} \ n\ m\ f\ x=(\lambda d.\operatorname {IsZero} \ d\ (0\ f\ x)\ (f\ (\operatorname {divide1} \ d\ m\ f\ x)))\ (\operatorname {minus} \ n\ m)}$
As desired, this definition has a single call to ${\displaystyle \operatorname {minus} \ n\ m}$. However the result is that this formula gives the value of ${\displaystyle (n-1)/m}$.
This problem may be corrected by adding 1 to n before calling divide. The definition of divide is then,
${\displaystyle \operatorname {divide} \ n=\operatorname {divide1} \ (\operatorname {succ} \ n)}$
divide1 is a recursive definition. The Y combinator may be used to implement the recursion. Create a new function called div by;
• In the left hand side ${\displaystyle \operatorname {divide1} \rightarrow \operatorname {div} \ c}$
• In the right hand side ${\displaystyle \operatorname {divide1} \rightarrow c}$
to get,
${\displaystyle \operatorname {div} =\lambda c.\lambda n.\lambda m.\lambda f.\lambda x.(\lambda d.\operatorname {IsZero} \ d\ (0\ f\ x)\ (f\ (c\ d\ m\ f\ x)))\ (\operatorname {minus} \ n\ m)}$
Then,
${\displaystyle \operatorname {divide} =\lambda n.\operatorname {divide1} \ (\operatorname {succ} \ n)}$
where,
{\displaystyle {\begin{aligned}\operatorname {divide1} &=Y\ \operatorname {div} \\\operatorname {succ} &=\lambda n.\lambda f.\lambda x.f\ (n\ f\ x)\\Y&=\lambda f.(\lambda x.f\ (x\ x))\ (\lambda x.f\ (x\ x))\\0&=\lambda f.\lambda x.x\\\operatorname {IsZero} &=\lambda n.n\ (\lambda x.\operatorname {false} )\ \operatorname {true} \end{aligned}}}
{\displaystyle {\begin{aligned}\operatorname {true} &\equiv \lambda a.\lambda b.a\\\operatorname {false} &\equiv \lambda a.\lambda b.b\end{aligned}}}
{\displaystyle {\begin{aligned}\operatorname {minus} &=\lambda m.\lambda n.n\operatorname {pred} m\\\operatorname {pred} &=\lambda n.\lambda f.\lambda x.n\ (\lambda g.\lambda h.h\ (g\ f))\ (\lambda u.x)\ (\lambda u.u)\end{aligned}}}
Gives,
${\displaystyle \scriptstyle \operatorname {divide} =\lambda n.((\lambda f.(\lambda x.x\ x)\ (\lambda x.f\ (x\ x)))\ (\lambda c.\lambda n.\lambda m.\lambda f.\lambda x.(\lambda d.(\lambda n.n\ (\lambda x.(\lambda a.\lambda b.b))\ (\lambda a.\lambda b.a))\ d\ ((\lambda f.\lambda x.x)\ f\ x)\ (f\ (c\ d\ m\ f\ x)))\ ((\lambda m.\lambda n.n(\lambda n.\lambda f.\lambda x.n\ (\lambda g.\lambda h.h\ (g\ f))\ (\lambda u.x)\ (\lambda u.u))m)\ n\ m)))\ ((\lambda n.\lambda f.\lambda x.f\ (n\ f\ x))\ n)}$
Or as text, using \ for λ,
divide = (\n.((\f.(\x.x x) (\x.f (x x))) (\c.\n.\m.\f.\x.(\d.(\n.n (\x.(\a.\b.b)) (\a.\b.a)) d ((\f.\x.x) f x) (f (c d m f x))) ((\m.\n.n (\n.\f.\x.n (\g.\h.h (g f)) (\u.x) (\u.u)) m) n m))) ((\n.\f.\x. f (n f x)) n))
For example, 9/3 is represented by
divide (\f.\x.f (f (f (f (f (f (f (f (f x))))))))) (\f.\x.f (f (f x)))
Using a lambda calculus calculator, the above expression reduces to 3, using normal order.
\f.\x.f (f (f (x)))
### Signed numbers
One simple approach for extending Church Numerals to signed numbers is to use a Church pair, containing Church numerals representing a positive and a negative value.[3] The integer value is the difference between the two Church numerals.
A natural number is converted to a signed number by,
${\displaystyle \operatorname {convert} _{s}=\lambda x.\operatorname {pair} \ x\ 0}$
Negation is performed by swapping the values.
${\displaystyle \operatorname {neg} _{s}=\lambda x.\operatorname {pair} \ (\operatorname {second} \ x)\ (\operatorname {first} \ x)}$
The integer value is more naturally represented if one of the pair is zero. The OneZero function achieves this condition,
${\displaystyle \operatorname {OneZero} =\lambda x.\operatorname {IsZero} \ (\operatorname {first} \ x)\ x\ (\operatorname {IsZero} \ (\operatorname {second} \ x)\ x\ (\operatorname {OneZero} \ \operatorname {pair} \ (\operatorname {pred} \ (\operatorname {first} \ x))\ (\operatorname {pred} \ (\operatorname {second} \ x))))}$
The recursion may be implemented using the Y combinator,
${\displaystyle \operatorname {OneZ} =\lambda c.\lambda x.\operatorname {IsZero} \ (\operatorname {first} \ x)\ x\ (\operatorname {IsZero} \ (\operatorname {second} \ x)\ x\ (c\ \operatorname {pair} \ (\operatorname {pred} \ (\operatorname {first} \ x))\ (\operatorname {pred} \ (\operatorname {second} \ x))))}$
${\displaystyle \operatorname {OneZero} =Y\operatorname {OneZ} }$
### Plus and minus
Addition is defined mathematically on the pair by,
${\displaystyle x+y=[x_{p},x_{n}]+[y_{p},y_{n}]=x_{p}-x_{n}+y_{p}-y_{n}=(x_{p}+y_{p})-(x_{n}+y_{n})=[x_{p}+y_{p},x_{n}+y_{n}]}$
The last expression is translated into lambda calculus as,
${\displaystyle \operatorname {plus} _{s}=\lambda x.\lambda y.\operatorname {OneZero} \ (\operatorname {pair} \ (\operatorname {plus} \ (\operatorname {first} \ x)\ (\operatorname {first} \ y))\ (\operatorname {plus} \ (\operatorname {second} \ x)\ (\operatorname {second} \ y)))}$
Similarly subtraction is defined,
${\displaystyle x-y=[x_{p},x_{n}]-[y_{p},y_{n}]=x_{p}-x_{n}-y_{p}+y_{n}=(x_{p}+y_{n})-(x_{n}+y_{p})=[x_{p}+y_{n},x_{n}+y_{p}]}$
giving,
${\displaystyle \operatorname {minus} _{s}=\lambda x.\lambda y.\operatorname {OneZero} \ (\operatorname {pair} \ (\operatorname {plus} \ (\operatorname {first} \ x)\ (\operatorname {second} \ y))\ (\operatorname {plus} \ (\operatorname {second} \ x)\ (\operatorname {first} \ y)))}$
### Multiply and divide
Multiplication may be defined by,
${\displaystyle x*y=[x_{p},x_{n}]*[y_{p},y_{n}]=(x_{p}-x_{n})*(y_{p}-y_{n})=(x_{p}*y_{p}+x_{n}*y_{n})-(x_{p}*y_{n}+x_{n}*y_{p})=[x_{p}*y_{p}+x_{n}*y_{n},x_{p}*y_{n}+x_{n}*y_{p}]}$
The last expression is translated into lambda calculus as,
${\displaystyle \operatorname {mult} _{s}=\lambda x.\lambda y.\operatorname {pair} \ (\operatorname {plus} \ (\operatorname {mult} \ (\operatorname {first} \ x)\ (\operatorname {first} \ y))\ (\operatorname {mult} \ (\operatorname {second} \ x)\ (\operatorname {second} \ y)))\ (\operatorname {plus} \ (\operatorname {mult} \ (\operatorname {first} \ x)\ (\operatorname {second} \ y))\ (\operatorname {mult} \ (\operatorname {second} \ x)\ (\operatorname {first} \ y)))}$
A similar definition is given here for division, except in this definition, one value in each pair must be zero (see OneZero above). The divZ function allows us to ignore the value that has a zero component.
${\displaystyle \operatorname {divZ} =\lambda x.\lambda y.\operatorname {IsZero} \ y\ 0\ (\operatorname {divide} \ x\ y)}$
divZ is then used in the following formula, which is the same as for multiplication, but with mult replaced by divZ.
${\displaystyle \operatorname {divide} _{s}=\lambda x.\lambda y.\operatorname {pair} \ (\operatorname {plus} \ (\operatorname {divZ} \ (\operatorname {first} \ x)\ (\operatorname {first} \ y))\ (\operatorname {divZ} \ (\operatorname {second} \ x)\ (\operatorname {second} \ y)))\ (\operatorname {plus} \ (\operatorname {divZ} \ (\operatorname {first} \ x)\ (\operatorname {second} \ y))\ (\operatorname {divZ} \ (\operatorname {second} \ x)\ (\operatorname {first} \ y)))}$
### Rational and real numbers
Rational and computable real numbers may also be encoded in lambda calculus. Rational numbers may be encoded as a pair of signed numbers. Computable real numbers may be encoded by a limiting process that guarantees that the approximation differs from the real value by an amount which may be made as small as we need.[4][5] The references given describe software that could, in theory, be translated into lambda calculus. Once real numbers are defined, complex numbers are naturally encoded as a pair of real numbers.
The data types and functions described above demonstrate that any data type or calculation may be encoded in lambda calculus. This is the Church-Turing thesis.
### Translation with other representations
Most real-world languages have support for machine-native integers; the church and unchurch functions convert between nonnegative integers and their corresponding Church numerals. The functions are given here in Haskell, where the \ corresponds to the λ of Lambda calculus. Implementations in other languages are similar.
type Church a = (a -> a) -> a -> a
church :: Integer -> Church Integer
church 0 = \f -> \x -> x
church n = \f -> \x -> f (church (n-1) f x)
unchurch :: Church Integer -> Integer
unchurch cn = cn (+ 1) 0
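A direct Python counterpart of the Haskell conversions above (an illustrative sketch, not library code):
def church(n):
    # Build the Church numeral for a nonnegative machine integer n.
    if n == 0:
        return lambda f: lambda x: x
    return lambda f: lambda x: f(church(n - 1)(f)(x))
def unchurch(cn):
    # Read a Church numeral back by counting applications of (+1) to 0.
    return cn(lambda k: k + 1)(0)
print(unchurch(church(7)))   # 7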
## Church Booleans
Church Booleans are the Church encoding of the Boolean values true and false. Some programming languages use these as an implementation model for Boolean arithmetic; examples are Smalltalk and Pico.
Boolean logic may be considered as a choice. The Church encoding of true and false are functions of two parameters:
• true chooses the first parameter.
• false chooses the second parameter.
The two definitions are known as Church Booleans:
{\displaystyle {\begin{aligned}\operatorname {true} &\equiv \lambda a.\lambda b.a\\\operatorname {false} &\equiv \lambda a.\lambda b.b\end{aligned}}}
This definition allows predicates (i.e. functions returning logical values) to directly act as if-clauses. A function returning a Boolean, which is then applied to two parameters, returns either the first or the second parameter:
${\displaystyle \operatorname {predicate} \ x\ \operatorname {then-clause} \ \operatorname {else-clause} }$
evaluates to then-clause if predicate x evaluates to true, and to else-clause if predicate x evaluates to false.
Because true and false choose the first or second parameter, they may be combined to provide logic operators. Note that there are two versions of not, depending on the evaluation strategy that is chosen.
{\displaystyle {\begin{aligned}\operatorname {and} &=\lambda p.\lambda q.p\ q\ p\\\operatorname {or} &=\lambda p.\lambda q.p\ p\ q\\\operatorname {not} _{1}&=\lambda p.\lambda a.\lambda b.p\ b\ a\ {\scriptstyle {\text{(This is only a correct implementation if the evaluation strategy is applicative order.)}}}\\\operatorname {not} _{2}&=\lambda p.p\ (\lambda a.\lambda b.b)\ (\lambda a.\lambda b.a)=\lambda p.p\operatorname {false} \operatorname {true} \ {\scriptstyle {\text{(This is only a correct implementation if the evaluation strategy is normal order.)}}}\\\operatorname {xor} &=\lambda a.\lambda b.a\ (\operatorname {not} \ b)\ b\\\operatorname {if} &=\lambda p.\lambda a.\lambda b.p\ a\ b\end{aligned}}}
Some examples:
{\displaystyle {\begin{aligned}\operatorname {and} \operatorname {true} \operatorname {false} &=(\lambda p.\lambda q.p\ q\ p)\ \operatorname {true} \ \operatorname {false} =\operatorname {true} \operatorname {false} \operatorname {true} =(\lambda a.\lambda b.a)\operatorname {false} \operatorname {true} =\operatorname {false} \\\operatorname {or} \operatorname {true} \operatorname {false} &=(\lambda p.\lambda q.p\ p\ q)\ (\lambda a.\lambda b.a)\ (\lambda a.\lambda b.b)=(\lambda a.\lambda b.a)\ (\lambda a.\lambda b.a)\ (\lambda a.\lambda b.b)=(\lambda a.\lambda b.a)=\operatorname {true} \\\operatorname {not} _{1}\ \operatorname {true} &=(\lambda p.\lambda a.\lambda b.p\ b\ a)(\lambda a.\lambda b.a)=\lambda a.\lambda b.(\lambda a.\lambda b.a)\ b\ a=\lambda a.\lambda b.(\lambda c.b)\ a=\lambda a.\lambda b.b=\operatorname {false} \\\operatorname {not} _{2}\ \operatorname {true} &=(\lambda p.p\ (\lambda a.\lambda b.b)(\lambda a.\lambda b.a))(\lambda a.\lambda b.a)=(\lambda a.\lambda b.a)(\lambda a.\lambda b.b)(\lambda a.\lambda b.a)=(\lambda b.(\lambda a.\lambda b.b))\ (\lambda a.\lambda b.a)=\lambda a.\lambda b.b=\operatorname {false} \end{aligned}}}
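The same definitions as Python lambdas (an illustrative sketch; to_bool is a helper added here, and the trailing underscores only avoid Python keywords):
true  = lambda a: lambda b: a
false = lambda a: lambda b: b
and_ = lambda p: lambda q: p(q)(p)
or_  = lambda p: lambda q: p(p)(q)
not_ = lambda p: p(false)(true)      # both "not" variants behave the same on plain Church Booleans
xor  = lambda a: lambda b: a(not_(b))(b)
if_  = lambda p: lambda a: lambda b: p(a)(b)
to_bool = lambda p: p(True)(False)   # helper for printing
print(to_bool(and_(true)(false)))    # False
print(to_bool(or_(true)(false)))     # True
print(to_bool(not_(true)))           # False
print(to_bool(xor(true)(true)))      # False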
## Predicates
A predicate is a function that returns a Boolean value. The most fundamental predicate is ${\displaystyle \operatorname {IsZero} }$, which returns ${\displaystyle \operatorname {true} }$ if its argument is the Church numeral ${\displaystyle 0}$, and ${\displaystyle \operatorname {false} }$ if its argument is any other Church numeral:
${\displaystyle \operatorname {IsZero} =\lambda n.n\ (\lambda x.\operatorname {false} )\ \operatorname {true} }$
The following predicate tests whether the first argument is less-than-or-equal-to the second:
${\displaystyle \operatorname {LEQ} =\lambda m.\lambda n.\operatorname {IsZero} \ (\operatorname {minus} \ m\ n)}$,
Because of the identity,
${\displaystyle x=y\equiv (x\leq y\land y\leq x)}$
The test for equality may be implemented as,
${\displaystyle \operatorname {EQ} =\lambda m.\lambda n.\operatorname {and} \ (\operatorname {LEQ} \ m\ n)\ (\operatorname {LEQ} \ n\ m)}$
## Church pairs
Church pairs are the Church encoding of the pair (two-tuple) type. The pair is represented as a function that takes a function argument. When given its argument it will apply the argument to the two components of the pair. The definition in lambda calculus is,
{\displaystyle {\begin{aligned}\operatorname {pair} &\equiv \lambda x.\lambda y.\lambda z.z\ x\ y\\\operatorname {first} &\equiv \lambda p.p\ (\lambda x.\lambda y.x)\\\operatorname {second} &\equiv \lambda p.p\ (\lambda x.\lambda y.y)\end{aligned}}}
For example,
{\displaystyle {\begin{aligned}&\operatorname {first} \ (\operatorname {pair} \ a\ b)\\=&(\lambda p.p\ (\lambda x.\lambda y.x))\ ((\lambda x.\lambda y.\lambda z.z\ x\ y)\ a\ b)\\=&(\lambda p.p\ (\lambda x.\lambda y.x))\ (\lambda z.z\ a\ b)\\=&(\lambda z.z\ a\ b)\ (\lambda x.\lambda y.x)\\=&(\lambda x.\lambda y.x)\ a\ b=a\end{aligned}}}
## List encodings
An (immutable) list is constructed from list nodes. The basic operations on the list are;
| Function | Description |
| --- | --- |
| nil | Construct an empty list. |
| isnil | Test if list is empty. |
| cons | Prepend a given value to a (possibly empty) list. |
| head | Get the first element of the list. |
| tail | Get the rest of the list. |
We give four different representations of lists below:
• Build each list node from two pairs (to allow for empty lists).
• Build each list node from one pair.
• Represent the list using the right fold function.
• Represent the list using Scott's encoding that takes cases of match expression as arguments
### Two pairs as a list node
A nonempty list can be implemented by a Church pair;
• First contains the head.
• Second contains the tail.
However this does not give a representation of the empty list, because there is no "null" pointer. To represent null, the pair may be wrapped in another pair, giving three values:
• First - Is the null pointer (empty list).
• Second.First contains the head.
• Second.Second contains the tail.
Using this idea the basic list operations can be defined like this:[6]
| Expression | Description |
| --- | --- |
| ${\displaystyle \operatorname {nil} \equiv \operatorname {pair} \ \operatorname {true} \ \operatorname {true} }$ | The first element of the pair is true meaning the list is null. |
| ${\displaystyle \operatorname {isnil} \equiv \operatorname {first} }$ | Retrieve the null (or empty list) indicator. |
| ${\displaystyle \operatorname {cons} \equiv \lambda h.\lambda t.\operatorname {pair} \operatorname {false} \ (\operatorname {pair} h\ t)}$ | Create a list node, which is not null, and give it a head h and a tail t. |
| ${\displaystyle \operatorname {head} \equiv \lambda z.\operatorname {first} \ (\operatorname {second} z)}$ | second.first is the head. |
| ${\displaystyle \operatorname {tail} \equiv \lambda z.\operatorname {second} \ (\operatorname {second} z)}$ | second.second is the tail. |
In a nil node second is never accessed, provided that head and tail are only applied to nonempty lists.
### One pair as a list node
Alternatively, define[7]
{\displaystyle {\begin{aligned}\operatorname {cons} &\equiv \operatorname {pair} \\\operatorname {head} &\equiv \operatorname {first} \\\operatorname {tail} &\equiv \operatorname {second} \\\operatorname {nil} &\equiv \operatorname {false} \\\operatorname {isnil} &\equiv \lambda l.l(\lambda h.\lambda t.\lambda d.\operatorname {false} )\operatorname {true} \end{aligned}}}
where the last definition is a special case of the general
${\displaystyle \operatorname {process-list} \equiv \lambda l.l(\lambda h.\lambda t.\lambda d.\operatorname {head-and-tail-clause} )\operatorname {nil-clause} }$
### Represent the list using right fold
As an alternative to the encoding using Church pairs, a list can be encoded by identifying it with its right fold function. For example, a list of three elements x, y and z can be encoded by a higher-order function that when applied to a combinator c and a value n returns c x (c y (c z n)).
{\displaystyle {\begin{aligned}\operatorname {nil} &\equiv \lambda c.\lambda n.n\\\operatorname {isnil} &\equiv \lambda l.l\ (\lambda h.\lambda t.\operatorname {false} )\ \operatorname {true} \\\operatorname {cons} &\equiv \lambda h.\lambda t.\lambda c.\lambda n.c\ h\ (t\ c\ n)\\\operatorname {head} &\equiv \lambda l.l\ (\lambda h.\lambda t.h)\ \operatorname {false} \\\operatorname {tail} &\equiv \lambda l.\lambda c.\lambda n.l\ (\lambda h.\lambda t.\lambda g.g\ h\ (t\ c))\ (\lambda t.n)\ (\lambda h.\lambda t.t)\end{aligned}}}
This list representation can be given type in System F.
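The fold-based encoding as Python lambdas (an illustrative sketch; Python's own True/False and lists appear only in the helpers for printing, not in the encoding itself):
nil  = lambda c: lambda n: n
cons = lambda h: lambda t: lambda c: lambda n: c(h)(t(c)(n))
head  = lambda l: l(lambda h: lambda t: h)(None)        # None stands in for "false" here
isnil = lambda l: l(lambda h: lambda t: False)(True)    # Python booleans used for readability
to_pylist = lambda l: l(lambda h: lambda t: [h] + t)([])   # helper for printing
xs = cons(1)(cons(2)(cons(3)(nil)))
print(to_pylist(xs))            # [1, 2, 3]
print(head(xs))                 # 1
print(isnil(nil), isnil(xs))    # True False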
### Represent the list using Scott's encoding
An alternative representation is Scott's encoding, which uses the idea of continuations and can lead to simpler code.[8] (see also Mogensen–Scott encoding).
In this approach, we use the fact that lists can be observed using a pattern matching expression. For example, using Scala notation, if 'list' denotes a value of a list data type with empty list 'Nil' and constructor 'Cons(h,t)' we can inspect the list and compute 'nilCode' in case the list is empty and 'consCode(h,t)' when the list is non-empty:
list match {
case Nil => nilCode
case Cons(h, t) => consCode(h,t)
}
The 'list' is given by how it acts upon 'nilCode' and 'consCode'. We therefore define a list as a function that accepts such 'nilCode' and 'consCode' as arguments, so that instead of the above pattern match we may simply write:
${\displaystyle \operatorname {list} \ \operatorname {nilCode} \ (\operatorname {consCode} \ h\ t)}$
Let us denote by 'n' the parameter corresponding to 'nilCode' and by 'c' the parameter corresponding to 'consCode'. The empty list is the one that returns the nil argument:
${\displaystyle \operatorname {nil} \equiv \lambda n\lambda c.\ n}$
The non-empty list with head 'h' and tail 't' is given by
${\displaystyle \operatorname {cons} \ h\ t\ \ \equiv \ \ \lambda n.\lambda c.\ c\ h\ t}$
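In Python the Scott encoding is again just nested lambdas (an illustrative sketch; the pattern match becomes plain application of the list to its nil case and cons case, and None/0 are only stand-in case values chosen here):
nil  = lambda n: lambda c: n
cons = lambda h: lambda t: lambda n: lambda c: c(h)(t)
head   = lambda l: l(None)(lambda h: lambda t: h)            # None plays the nilCode role here
length = lambda l: l(0)(lambda h: lambda t: 1 + length(t))   # recursion borrowed from the host language
xs = cons('a')(cons('b')(nil))
print(head(xs))      # a
print(length(xs))    # 2
print(length(nil))   # 0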
More generally, algebraic data type with ${\displaystyle m}$ alternatives becomes a function with ${\displaystyle m}$ parameters. When the ${\displaystyle i}$th constructor has ${\displaystyle n_{i}}$ arguments, the corresponding parameter of the encoding takes ${\displaystyle n_{i}}$ arguments as well.
Scott's encoding can be done in untyped lambda calculus, whereas its use with types requires a type system with recursion and type polymorphism. A list with element type E in this representation that is used to compute values of type C would have the following recursive type definition, where '=>' denotes a function type:
type List =
C => // nil argument
(E => List => C) => // cons argument
C // result of pattern matching
A list that can be used to compute an arbitrary type would have a type that quantifies over C. A list generic in E would also take E as the type argument.
## Notes
1. ^ This formula is the definition of a Church numeral n with f -> m, x -> f.
2. ^ Allison, Lloyd. "Lambda Calculus Integers".
3. ^
4. ^ "Exact real arithmetic". Haskell.
5. ^ Bauer, Andrej. "Real number computational software".
6. ^ Pierce, Benjamin C. (2002). Types and Programming Languages. MIT Press. p. 500. ISBN 978-0-262-16209-8.
7. ^ Tromp, John (2007). "14. Binary Lambda Calculus and Combinatory Logic". In Calude, Cristian S (ed.). Randomness And Complexity, From Leibniz To Chaitin. World Scientific. pp. 237–262. ISBN 978-981-4474-39-9.
As PDF: Tromp, John (14 May 2014). "Binary Lambda Calculus and Combinatory Logic" (PDF). Retrieved 2017-11-24.
8. ^ Jansen, Jan Martin (2013). "Programming in the λ-Calculus: From Church to Scott and Back". LNCS. 8106: 168–180. doi:10.1007/978-3-642-40355-2_12.
https://math.meta.stackexchange.com/questions/28692/requests-for-reopen-undeletion-votes-volume-07-2018-today/29347 | # Requests for Reopen & Undeletion Votes (volume 07/2018 - today)
The purpose of this thread is to help focus the attention of the community on posts that may require reopen and undeletion votes. A request should be posted as an answer below (one request per answer).
Some guidelines:
• Please be polite, and respect the many different viewpoints in our diverse community. This goes for the person making the request as well as those commenting on it.
• There is a reopen queue. Please wait until a post has gone through this queue before posting here. Notice that the first edit after the question was put on hold pushes the question into the reopen review queue, if the edit was done within 5 days of closure. So does a reopen vote. (If the review has already been finished, it is shown on the timeline of the question.) If in doubt, wait 24 hours after the last substantive action.
• To inform readers of the current (and past) states of the targeted post, once the request has resulted in some action, please add the information Reopened or Undeleted at the start. (If the action is undone, add this too, like Reopened, reclosed.)
• Do not only post a request, like "request reopening of ". Instead make a case for your concern. Yet keep in mind that it can be easier to get your request handled if you try to frame it in a way that takes the feedback the post received into account in a positive way rather than seeking confrontation. Also, try to improve the post before posting here.
• In case of "small" requests, like one missing vote, it can make sense to ask in chat instead of posting here. The room CRUDE is a reasonable place for such requests. The same guidelines apply there.
Earlier versions of this thread that served as a model:
Reopened
The question was somewhat unclear when first asked, but was pinpointed down in comments. I also edited it providing a figure which helps to readily understand the situation.
The context provided by OP (by means of the "planified" picture) makes the source of confusion clear, and now that the problem itself is also clear, I think it fits the site well.
Reopened
I request a reopen of Are the corner hypercubera polytopes self-dual?. The question provides a v-definition of an infinite family of polytopes, and asks whether each member of this family is self-dual. (I don't see anything unclear about this question.) Other than being closed for being unclear, I haven't received feedback on this question.
I believe this Q&A provides the first known example of an infinite family of self-dual polytopes that is not a set of k-fold pyramids over a self-dual polytope.
• The post reads as a blog post, not a question. It needs to be vastly more concise, and I frankly don't think that MSE is a good site to ask questions like this; you would do better just talking to someone in the field. You got a bit of feedback in your last meta post. – user296602 Oct 1 '18 at 23:20
• Thanks for replying. I agree that a tag pertaining to just a question or two need not exist. As to the question, 1/3 of it is motivation, which I understand to be a requirement of MSE (I would just as soon leave it out). The answer is long because, while conceptually simple, the proof has a lot of tedious parts to it. – Dan Moore Oct 2 '18 at 0:35
• I've made my question more concise-it's now about 1160 characters & fits on my small Mac screen. My answer is about 18,000 characters. To analyze why it's so long, I counted the characters in eight distinct parts of the answer. Some parts are long because the thing to be proved requires a lot of words (wishing there was an elegant proof doesn't make it so). – Dan Moore Oct 2 '18 at 23:56
• Strictly speaking, the question was closed, not the answer. The Q&A is good because it sets forth the first example (ever in the history of mankind) of an infinite family of self-dual geometric polytopes (one for each dimension d $\ge$ 3) which aren't a set of k-fold pyramids over a self-dual polytope. – Dan Moore Oct 3 '18 at 0:00
Undeleted.
I nominate Is the numerical range of Identity operator convex? for undeletion because the question asker self-deleted her question shortly after receiving an answer from a high-rep user. That's an abuse of the system.
Reopened.
This recent Question asks about a variant of the Secretary Problem which as far as I can tell has not been previously answered here.
I mistakenly proposed to close as a duplicate of the classic Secretary Problem, but this variant treats as equally successful the choice of either the best or the second-best candidate. Please consider adding your vote to reopen to mine.
[NB: I edited the title of the post to clarify that this is not identical to the earlier Question, and I discuss in the comments there what little Math.SE discussion there had been on this variant (without supplying its literature references).]
• I've edited the question to refine the formatting and grammar a bit, and to use more gender neutral language. I don't think that the question is all that great (it is a problem statement question with a largely nonsensical "attempt"), but I agree that it isn't a duplicate question, and it is somewhat better than much of the stuff that gets posted. In any event, the question is now open again. – Xander Henderson Nov 27 '18 at 2:27
Undeleted
I nominate Is the Balazard-Saias-Yor integral non-positive? for un-deletion and re-opening. Being self-deleted by a deleted user, it has a score of +1 and two stars. The topic adds value to the site and the question asker has put effort in the post.
Currently, only one undelete vote is needed to undelete this question. I hope the two users who starred this question can see their favorite question re-opened.
• It should be noted that "stars" do not necessarily indicate that a user finds the question useful or up to the standards of the site. Many users use stars to keep track of questions with which they have interacted, and desire to return to at some point. For example, I sometimes star questions which lack context or otherwise fail to meet the standards of this site. After downvoting, leaving a comment, and/or voting to close, I will star the post so that I can return to it a few days later to see if it has been improved. If it has, I'll retract my downvote and/or close vote. – Xander Henderson Feb 4 at 17:02
• I'm not saying that this is the case here; I just want to point out that "stars" should not be taken as a sign of quality vis-a-vis undeletion votes. – Xander Henderson Feb 4 at 17:03
• @XanderHenderson Thx for comments. Noted. – GNUSupporter 8964民主女神 地下教會 Feb 5 at 1:27
Undeleted, closed, deleted, re-undeleted, reopened
I nominate How to use derivatives to prove that $f(x)=2\cos^2\left(\frac{\pi }{4}-\frac{x}{2}\right)-\sin \left(x\right)=1$? for undeletion since OP has self-deleted his/her question after receiving an answer. This is unacceptable on Math.SE.
• It's better to write self-contained posts. In the current case for the moment there was no big risk, but it's still not ideal. – quid Nov 8 '18 at 19:42
• @quid Thanks for your intervention. I'll correct this now. – GNUSupporter 8964民主女神 地下教會 Nov 8 '18 at 19:44
• I don't understand why this question was closed and deleted by users. It is not unclear what they're asking. (It may not be true, but it is certainly clear!) – user1729 Feb 5 at 11:28
• @user1729 Some users don't like wrong info in the question. They have standards so high that $D(\sin x)$ and the RHS of $f'(x)$ doesn't please them. It's possible that they used their power to get rid of this question. – GNUSupporter 8964民主女神 地下教會 Feb 5 at 11:37
• It's definitely not unclear—the questioner gives the problem, shows what they did, and asks what's wrong with it. I think it could do with an additional answer which makes crystal clear what the error was and how the (interesting!) technique works, and reopening would allow someone to post that. – timtfj Feb 5 at 12:06
• @timtfj It's now reopened. – GNUSupporter 8964民主女神 地下教會 Feb 5 at 16:04
• @GNUSupporter8964民主女神地下教會 At least it's clear that their expectations could never rise to match the size of your assumptions... – Lord_Farin Feb 6 at 17:34
Undeleted and merged with duplicate
I would like to see the question "Finding sum to infinity" undeleted. The OP says that they "tried using the Taylor series for $$e^x$$ but couldn’t figure out how to manipulate it to get the above expression", which seems reasonable. Moreover:
1. The question is a duplicate and is closed as such. It is not standard practice to delete duplicates (see here).
2. The question itself is on +5 (+6/-1), and one of the answers is on +12.
• I don't get the argument. What is the value of keeping that question on the site? – quid Feb 15 at 19:00
• I didn't realize that "it is standard practice to not delete duplicates." Cite? – Gerry Myerson Feb 16 at 4:06
• @GerryMyerson I also had entertained to comment on that. What arguably is true is that "it is not standard practice to delete duplicates", which of course is not quite the same. – quid Feb 16 at 12:14
• @Gerry What I meant was "a question should not be deleted simply because it is a duplicate" (so I've edited the post to quid's sentence). This was discussed here. – user1729 Feb 18 at 11:28
Merged
Please Consider undeleting the following question:
https://math.stackexchange.com/questions/3203711/what-are-the-steps-to-finding-int-01-frac-ln1-x-lnxx-dx
Of course, this one is a strict duplicate of the one for which it was closed. However, given that the answers to this question are quite spread out, I would like to draw attention to my own answer given here, which collects some of the possible ways to evaluate this integral within one post.
• Would it make sense to merge the thread with the duplicate target? – quid May 4 at 10:11
• @quid That might be a better way than undeleting the question, indeed. I'm ashamed to ask, but what exactly does it mean to merge a thread with another one? – mrtaurho May 4 at 10:13
• It means that the answers will be moved to the other thread (and deleted on the original one). The Q can be preserved (like a duplicate) or not. – quid May 4 at 10:16
• @quid That sounds pretty good. Shall I do this, at least with my own answer, by myself or do I have to request for this somewhere? – mrtaurho May 4 at 10:18
• That's something only moderators can do. (Of course you could simply repost your answer on the dupe too, but that would not be merge.) I'll just go ahead and do it now. – quid May 4 at 10:23
• @quid Thank you for your time! – mrtaurho May 4 at 10:24
Undeleted
I nominate Factorising the ideal $(14)$ in $\mathbb{Q}(\sqrt{-10})$ into a product of prime ideals. for undeletion since the question asker has self-deleted his/her own question after receiving an answer. This is unfair to the answerer, who spent time and effort writing the answer, which deserves evaluation from the community.
Reopened, closed again as duplicate
I nominate reopening Non-negative integer solution for $$ax + by = c$$. The OP updated the question with what I consider to be sufficient context after the first close vote, but before the final one. As I thought the updated text then made it an appropriate question, I provided an answer. After I discovered it was closed, I flagged it for reopening, but this was declined. FYI, the full timeline is here. Please check this question to see if it should be reopened. Thanks.
Update: As explained in the comments, the question is really a duplicate. It's now closed again for what I consider an appropriate reason, i.e., as a duplicate.
• It's a duplicate of math.stackexchange.com/questions/490602/… and probably several others. If it's reopened, it should immediately be closed as a duplicate. – Gerry Myerson May 28 at 6:37
• @GerryMyerson I appreciate your feedback. Since it's a duplicate, I don't have any problem with closing it as such (if it's first reopened), and will even vote to do so. – John Omielan May 28 at 6:44
• @GerryMyerson I just gave the last vote to reopen the question. To follow through on what I stated above, I then tried to vote to close as a duplicate. However, because I gave a vote to close on April 18 and then retracted it (due to the OP giving more context so I thought it shouldn't close), the system won't let me vote to close it again, even though it's gone through a close/reopen cycle. As such, perhaps you may wish to start the process of closing it now as a duplicate. Thanks. – John Omielan Jun 1 at 3:09
Reopened
Please consider reopening Does knowing the surface area of all faces uniquely determine a tetrahedron?. This is a very natural and self-motivating question that does not need any additional context, and it's gotten several great answers.
Reopened
Please consider reopening this post, suitably narrowed since originally closed as "too broad":
Book recommendation... Linear Programming for self-study
I don't believe there has been another such request, and under my pestering the OP has provided context for what sort of self-study they've previously undertaken.
Reopened
Please consider reopening Does non-uniqueness of solution to 1st order ODE implies the existence of infinitely many solutions?.
This is an interesting question about a not so famous result in ODE. The answer points to the correct reference. I've edited the question to include more context.
• I agree it's an interesting question, and the updated text now has enough context, so I initiated reopening it. – John Omielan Jun 18 at 16:36
• @JohnOmielan, if you care, please consider also editing the answer there (to rollback to edit 1). Now that the post is reopened, there is no need for that comment. I tried to edit myself, but the suggested edit got rejected – Arctic Char Jun 19 at 19:20
• I agree the comment there regarding it being closed is no longer pertinent, so I did rollback of the edit. – John Omielan Jun 19 at 19:26
Undeleted (and edited the duplicate target)
Three coins are tossed. If one of them shows a tails, what is the probability that all three coins show tails?
This question was marked as a duplicate to another question (created 2014) that was closed and deleted (2018).
• It seems it took 9 votes to delete this question. I suppose that's because it had 30 upvotes? Anyway, currently it has 7 undelete votes. Does it need 9 to undelete? – Gerry Myerson Jul 5 at 5:52
Undeleted, reopened
Please consider undeleting and reopening this post:
How does one prove the inequality $$1+|x|\le (1+|y|)(1+|x-y|)$$?
OP clearly indicated the context of his/her question: it is from a proof in Wolff's lecture notes on harmonic analysis.
[Added upon request: this post has also been edited into a (more) decent one.]
• If you edit a post significantly indicate so in the request. I have no problem with it by itself, but since such actions are quite frequently overlooked by observers it can lead to the false impression that a relatively decent post was deleted. – quid Jul 8 at 14:07
• @quid If someone else (not OP) edits OP's question to make the question 'fit' on MSE, does the question then always get reopened or undeleted? – elli saba Aug 9 at 18:48
• I said someone different from the OP because sometimes the question is old, say from 2015, and the OP hasn't visited MSE since 2015. – elli saba Aug 9 at 18:51
• @Isa I'd say yes for the most part (although what 'fit' means of course leaves room for interpretation). That said, some users do not like the practice, thus there can also be push back. But in abstract and as a general principle (there can be exceptions in special cases) if a question is a good fit for math.se we usually keep it around no matter how it arrived in this state.// If you want to discuss this in more detail please open a separate thread. – quid Aug 9 at 19:32
Undeleted, reopened, closed
How to find $${\large\int}_1^\infty\frac{1-x+\ln x}{x \left(1+x^2\right) \ln^2 x} \mathrm dx$$?
This is not a trivial exercise and OP shows his/her thoughts that "Routine textbook methods for this complicated integral fail." It has several very well-written, detailed answers with a rather high number of upvotes.
• '[S]hows his/her thoughts that "Routine textbook methods for this complicated integral fail."' That's not relevant context. Most likely it's a constructed challenge and should have been declared as such. This type of post borders on a misuse of the site. The following comment at score 13 is relevant I feel like it's becoming a trend to ask questions about practically impossible integrals. – quid Jul 8 at 19:34
• You think "That's not relevant context." And this proposal focus particularly on undeleting the post. Considering THREE high-quality answers (4+31+40 upvotes) that already existed I find it ridiculous (yes, this is my rant) to delete this OLD post. If there is any misuse of this site at all, the deletion, in this particular case, IS misuse of the privilege of votes. [mod redacted] – Jack Jul 8 at 19:58
• Bringing up other users in this form is out-of-line, in general. In the specific case it's also highly misleading. (I removed it.) – quid Jul 8 at 20:24
Reopened
Prove: there exists 3 sets: $A, B, C \subseteq \mathbb{N}$ such that: $A\cap B\cap C =\emptyset$ and $|A|=|B|=|C|=\aleph_0$?
Please consider reopening my question as I've edited it in order to explain the full context of it.
• The current status is only to be added after a change occurred. In any case it should always be the current status of the post not the requested action. – quid Aug 15 at 13:19
• I understand. thank you for the help – Jneven Aug 15 at 13:20
Undeleted, reopened, closed as duplicate
Please consider undeleting this well-received (33 net upvotes) question under the tag of probability: Probability of drawing the Jack of Hearts?
There are useful discussion and several good answers, one of which has 77 upvotes.
• The question was originally closed as "off-topic: lacking context" (which, in my opinion, was not unreasonable). It is, however, also a duplicate of other questions, as indicated by the comments. If it is to remain on the site, it should be properly linked via a dupe closure. – Xander Henderson Aug 19 at 17:38
• At most "similar" or "related". Not a duplicate. – Jack Aug 19 at 17:54
• I'm sorry, but what? The question above asks us to determine the probability of drawing a Jack from a deck of cards from which an unknown card has been removed. One of the two questions in the dupe target asks for the probability of drawing an Ace from a deck of cards from which an unknown card has been removed. These are precisely the same question, and answers to the older question completely answer the newer question! How is this not a duplicate? – Xander Henderson Aug 19 at 18:32
• Mr. Henderson, my definition of "duplicate" is much narrower than yours. No need to be sorry. And of course you do have the right to vote it as a duplicate. – Jack Aug 19 at 18:34
• I marked it as a dupe. That said @Xander I think the difference is slightly larger than you make it look. This question is about one specific card (a jack of hearts), while the dupe is about a group of cards (an ace). I still think that is a duplicate. // Since the comment on main got auto-deleted I'll add that two users other than me had mention it as a dupe. Thus, it was at least a trilateral closure. :-) – quid Aug 19 at 18:54
• @quid Indeed, I had missed that. That said, as you note, the distinction is not fundamental. Thank you for handling it. – Xander Henderson Aug 19 at 19:39
Undeleted, reopened, closed-as-duplicate
Please consider undeleting and reopening the edited post: How can one show that $$\sum_{n=0}^\infty\frac{n}{n!}=e$$?
The user had difficulties in articulating his/her own thoughts. The elementary question in the post is clear. It is also clear from OP's comment and picture attached to the original post what the asker was thinking.
• You should disclose the fact that you made significant edits to the question after it was deleted. The question itself is a duplicate of this question (and probably has other, better dupe targets; I just happen to know the one I linked to since I interacted with it two years ago). I don't see a reason to undelete it. – Xander Henderson Aug 28 at 12:31
• I posted a link to an exact duplicate under that thread. Pointless to reopen I think. – Jyrki Lahtonen Aug 28 at 18:59
The question Problem with sum of projections was incorrectly marked as a duplicate of Orthogonal projections with $\sum P_i =I$, proving that $i\ne j \Rightarrow P_{j}P_{i}=0$. The latter question has the additional hypothesis that the projections are self-adjoint (i.e., orthogonal projections) which allows for some rather different proof methods. Indeed, none of the four answers to the second question solve the first question.
Deleted, undeleted, deleted, undeleted, deleted, undeleted, deleted, undeleted and reopened
Unfortunately the question I asked, Why is the zero polynomial the only one to have infinite roots?, was put on hold as off-topic first and then closed. I edited it a lot so that it would be reopened, but it wasn't. I apologise if it was off-topic to you, but I have edited it. If it is still off topic, kindly suggest improvements or reopen it.
• Please include a link to the question. – quid Feb 3 at 14:54
• it's a humble request – user629353 Feb 3 at 14:56
• why did you delete that? – user629353 Feb 3 at 14:58
• @quid couldn't you suggest edits – user629353 Feb 3 at 15:03
• "couldn't you suggest edits " What do you mean? How am I supposed to know which question you mean? – quid Feb 3 at 15:05
• @quid the link of which i have given above – user629353 Feb 3 at 15:09
• I misunderstood what you meant. I did not delete you question. At the moment I do not plan to get involved. – quid Feb 3 at 15:46
• Two posts here about the same question. Are we supposed to expect some more any time soon? – Did Feb 3 at 21:39
• @Did, the first post was about closure, the second, deletion. Unless there is some action more severe than deletion, I'd guess there won't be any more posts about the question. – Gerry Myerson Feb 3 at 21:52
• I have flagged the question on main to ask the moderators to let the community make the decision. – Gerry Myerson Feb 5 at 11:48
• The main problems with the question seem to be that (i) the questioner is confused, and (ii) it's tricky to answer in a way that properly addresses the confusion. The first seems a good reason to ask the question, and the second suggests that good answers will be thoroughly explanatory and therefore of high value. – timtfj Feb 5 at 16:05
Reopened
Could we please reopen: How might I define a parabola in vertex form, such that…
The OP has made an effort to improve their question. They have clarified the question by including the vertex form of a parabola for future reference, and have also added MathJax to the question. I have also cleaned up the tags to better reflect the question ('linear-algebra' was not appropriate).
The question is perfectly clear, and the OP has given the background to their own question which makes it perfectly answerable.
• I have voted to reopen, but I note that the question already has an accepted answer. – Gerry Myerson Oct 12 at 6:47
Undeleted, Reopened, Reclosed as a duplicate.
I found this question sufficiently non-trivial.
Given that the series of positive numbers $$\sum_n a_n$$ converges, can we say anything about the convergence of $$\sum_n a_n^{(n-1)/n}$$?
So, as a salvage effort I added some preliminary thoughts to it. Do check whether it now is good enough to be undeleted and/or reopened?
I'm not saying it would now be a great question. This is also a way of repeating my old maxim that the interested parties should always try and edit closed/deleted questions they find interesting into shape, and only after that ask others to reconsider.
Caveat: I didn't feel the need to check for duplicates. So it is possible that it should later be closed and redeleted as a dupe. Not entirely unexpectedly Martin Sleziak found no less than three near duplicates: 1, 2 and 3.
• I'm happy with the current status (closed as a duplicate). – Jyrki Lahtonen Oct 28 at 6:34
Undeleted
https://math.stackexchange.com/a/1409573/10513
I believe this answer should be undeleted.
The question is rather old (2013). The context of the question is that they are reading a research article which states a result, and the question is asking for a proof of this. Each of the other answers gives a citation rather than a proof. This specific answer is the same (citation, no proof), but the citation given is a modern, standard text and it makes sense for this book to be mentioned in some answer.
Reopened
Please consider voting to reopen this question:
Peak response of second order system with rectangular pulse input
The OP has improved it a lot.
Reopened
Please re-open this question which is put on hold:
Understanding De/Suspension $\Sigma^{-1}(\Sigma{X})\neq X$
It was closed as "unclear what you're asking", but people simply do not know the definition of de-suspension.
However, the suspension is introduced earlier in the cited question:
The suspension (topology) and elementary examples
The desuspension is also quoted/linked to Wikipedia (with the refs given by Wiki). I also include a new note: "The desuspension is arguably firstly introduced in the cited text mentioned in H. R. Margolis (1983). Spectra and the Steenrod Algebra. North-Holland. p. 454." And the ref cited.
Question: How do we define desuspension exactly? (Please see the comments below; people complain that the meaning of desuspension given in Wikipedia is useless.)
Are we able to have the desuspension acting on the topological space as the suspension does? Or do we only have the desuspension act on the spectra but not the space?
Reopened
Please consider reopening this. It was closed originally as missing context. The OP provided some context in the comment and I have added a little bit more. Hope it is okay now.
• Wow, this question was through reopen review queue already four times. It makes me wonder what is the record. – Martin Sleziak Sep 1 '18 at 1:04
• I think it just shows how difficult it is to reopen a post using only the review queue. @MartinSleziak – user99914 Sep 1 '18 at 13:00
• This one has 5 and is still closed. – user99914 Sep 1 '18 at 13:01
Reopened
Please consider reopening the question A functional equation of a matrix, which has been placed On Hold. I have added my attempt at cracking the problem, in case the lack of it was the reason for placing it on hold. The question itself is technically perfectly sound.
• You should have just edited your previous answer, and not added another "answer." – Joel Reyes Noche Sep 25 '18 at 7:07
• @JoelReyesNoche: What do you mean? "Answer" to my question that I linked to? I did not post an answer to my own question. – Hans Sep 25 '18 at 8:18
• No, answer(s) on the present meta page, there should not be two for a single post on main. – Did Sep 25 '18 at 9:31
• I have removed the second post, since this one has been edited to indicate the reopening. I've also edited and deleted/undeleted, so whomever downvoted can undo their vote. – Asaf Karagila Sep 25 '18 at 9:56
• @AsafKaragila: Oh, sorry. I did not know the first request was posted as I remembered closing the page rather than posting the "answer" as I needed to leave in a hurry earlier. I posted the second request thinking that the first request was not posted. This is the first time I post in the meta question site. – Hans Sep 25 '18 at 10:59
• Why this post says "reopened" when the linked question is still on hold at the moment? – Martin Sleziak Sep 25 '18 at 12:31
• @Martin, seems to have been a mistake, which I have rectified. – Gerry Myerson Sep 25 '18 at 12:37
• The thing that is missing in the current version of the question, more than work, is the source of the problem. Why do we expect it is true in the first place? – Carl Mummert Sep 25 '18 at 19:32
Undeleted, redeleted, undeleted again, reopened
Statements that look obviously false but cannot be disproved. is at $$+15$$, and has answers at $$+7,+10,+14,+20,+21,+8$$, and $$+15$$. Please consider voting to undelete. [In the interests of transparency, I note that the $$+15$$ answer is mine.]
• I agree that the question is highly upvoted and has highly upvoted answers. That does not make it an appropriate question for MSE. With regard to what is on-topic, it isn't a problem or puzzle, and I see no clear indication that there is a particular topic that needs to be clarified. With respect to what not to ask, the question seems designed to engender discussion, not to seek explanation. – Xander Henderson Sep 27 '18 at 16:35
• For the benefit of the people who cannot see deleted questions, I'll add that this MO question is mentioned in comments: The most outrageous (or ridiculous) conjectures in mathematics. (Even if this one does not get undeleted, the linked MO post might be interesting for people who would consider Statements that look obviously false but cannot be disproved interesting.) – Martin Sleziak Sep 28 '18 at 8:42
• @XanderHenderson Just curious, what would you then consider as on-topic for the soft-question and big-list tags? – dxiv Oct 4 '18 at 5:51
• Surely we have better things to do than relitigate this post over and over again, and remove the collective efforts of 16 people. I see no harm from the existence of this post, except for the ruffled feathers of people who are inclined to overmoderation. – user296602 Oct 4 '18 at 22:13
• In my opinion, if some want content and others do not then it should stay. The veto should be over removal rather than the converse. I will never understand why it's so important to wreck somebody else's party. – samerivertwice Dec 1 '18 at 18:22
Undeleted
The deleted answer for Show that there does not exist a unique stationary distribution. should be undeleted.
The question is about "existence of unique stationary measure", and the answer is concise and to-the-point.
"$$(1,0,0,...,0)$$ and $$(0,0,...0,1)$$ are two invariant distributions so uniqueness fails."
The existing answer shares the same idea with the deleted one, and it has passed a Low Quality Review.
(Edit: comment removed)
The deleted answer attracted a comment from a high-rep user during another Low Quality Review. However, by looking at his/her tag score for the relevant tags (and the contributing posts) and comparing them with those of the answerer, you'll get a better idea about their contributions to the site in those areas.
• Oddly, the deleted answer was deleted, not through review, but by its owner. – Gerry Myerson Oct 12 '18 at 11:51
• You could ping the author on another of his posts, but I generally oppose undeleting a self-deleted post. – user296602 Oct 12 '18 at 15:14
• The author has commented on the other answer, so he can be pinged there if necessary. This is an odd situation. – Arnaud D. Oct 12 '18 at 17:34
• Thanks for the advice. I did ping the author. He can undelete this anytime he wishes. I wonder whether CRUDE is healthy, when some of its active users vote to delete short answers outside their familiar tags regardless of the quality of the answer. This isolated example shows that it is malfunctioning in terms of quality control. – GNUSupporter 8964民主女神 地下教會 Oct 12 '18 at 20:35
• @GNUSupporter8964民主女神地下教會 This isolated example shows nothing about CRUDE, since the delete votes and the associated comment came from the review queue, and the answer was never mentioned in CRUDE. Besides, I'm not sure tag scores are necessarily an indicator of expertise : for example I have a low score in "integration" because I don't really like to compute integrals, but I still know the basics and I think I'm competent to judge the quality of reasonably simple answers on the topic. – Arnaud D. Oct 15 '18 at 12:11
• @ArnaudD. I see your points, but let me clarify my stance. 1. To be more precise, I would say that this post shows the influence of CRUDE participation on other posts (not listed on CRUDE). In a review queue, it's possible that one votes to close/delete within a few seconds. Given the number of posts that they review every day (in/outside CRUDE), this example reminds us of the adverse effect of their rare mistakes despite their high reputation. 2. That's why I've added "contributing posts" inside the brackets. – GNUSupporter 8964民主女神 地下教會 Oct 15 '18 at 12:47
• If you read the posts instead of the scores, you'll find out the truth: a) there's no probability theory in his/her "probability theory" answers. These are mistagged (elementary) probability questions. b) His/her stochastic calculus tag score comes from a reference request question. By the way, he/she has removed his/her comment. – GNUSupporter 8964民主女神 地下教會 Oct 15 '18 at 12:52
http://mathhelpforum.com/pre-calculus/91959-small-limit-problem-print.html

# Small Limit Problem
• June 5th 2009, 08:29 PM
Kasper
Small Limit Problem
Hey, so I have this limit question, and I'm *pretty* sure that the limit does not exist, but I get my back up, because anyone could come to an answer like that from not knowing a way to simplify it. Can anyone see a way to simplify this and get a value?
Thanks for any help!
$lim_{x->1}\frac{x^3+x^2}{x-1}$
Also, sorry for the dumb looking "approaches" arrow in the limit, I couldn't find the code for a good arrow in the LaTeX tutorial. :(
• June 5th 2009, 09:18 PM
mr fantastic
Quote:
Originally Posted by Kasper
Hey, so I have this limit question, and I'm *pretty* sure that the limit does not exist, but I get my back up, because anyone could come to an answer like that from not knowing a way to simplify it. Can anyone see a way to simplify this and get a value?
Thanks for any help!
$lim_{x->1}\frac{x^3+x^2}{x-1}$
Also, sorry for the dumb looking "approaches" arrow in the limit, I couldn't find the code for a good arrow in the LaTeX tutorial. :(
$lim_{x \rightarrow 1^+} \frac{x^3+x^2}{x-1} = + \infty$
$lim_{x \rightarrow 1^-} \frac{x^3+x^2}{x-1} = - \infty$
Since the left hand and right hand limits are not equal, the limit does not exist.
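One way to see why the two one-sided limits differ in sign: near $x = 1$ the numerator is harmless while the denominator changes sign. We have $lim_{x \rightarrow 1}(x^3+x^2) = 1^3 + 1^2 = 2 > 0$, while $x - 1 \rightarrow 0^{+}$ as $x \rightarrow 1^{+}$ and $x - 1 \rightarrow 0^{-}$ as $x \rightarrow 1^{-}$, so the quotient is a quantity near $2$ divided by an arbitrarily small positive or negative number.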
• June 5th 2009, 10:50 PM
Kasper
Ah right, I was looking at it all wrong. I keep trying to just simplify f(x) to find a way to sub in x and get an answer algebraically. I gotta start thinking about how the function behaves as the limit is approached, rather than trying to evaluate it right at the point in question.
Thanks for the clarification!
https://openaidsjournal.com/VOLUME/6/PAGE/77/

# Assessing the Assumptions of Respondent-Driven Sampling in the National HIV Behavioral Surveillance System among Injecting Drug Users
Amy Lansky*^1, Amy Drake^1, Cyprian Wejnert^1, Huong Pham^2, Melissa Cribbin^1, Douglas D Heckathorn^3
1 Division of HIV/AIDS Prevention, Centers for Disease Control and Prevention, USA
2 Northrop Grumman Corporation/BCA, Atlanta, Georgia, USA
3 Department of Sociology, Cornell University, Ithaca, NY, USA
open-access license: This is an open access article licensed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/) which permits unrestricted, non-commercial use, distribution and reproduction in any medium, provided the work is properly cited.
* Address correspondence to this author at the Division of HIV/AIDS Prevention, Centers for Disease Control and Prevention, USA; Tel: 404-639-5200; Fax: 404-639-0897; E-mail: [email protected]
## Abstract
Several assumptions determine whether respondent-driven sampling (RDS) is an appropriate sampling method to use with a particular group, including that the population being recruited must know one another as members of the group (i.e., injection drug users [IDUs] must know each other as IDUs), that the population must be networked, and that the sample size is small relative to the overall size of the group. To assess these three assumptions, we analyzed city-specific data collected using RDS through the US National HIV Behavioral Surveillance System among IDUs in 23 cities. Overall, 5% of non-seed participants reported that their recruiter was “a stranger.” All 20 cities with multiple field sites had ≥1 cross-recruitment, a proxy for linked networks. Sample sizes were small in relation to the IDU population size (median = 2.3%; range: 0.6%-8.0%). Researchers must evaluate whether these three assumptions were met to justify the basis for using RDS to sample specific populations.
Keywords: HIV, respondent-driven sampling, injection drug use, behavioral surveillance.
## INTRODUCTION
Behavioral surveillance of persons at risk of HIV infection is an important component of an overall HIV surveillance program [1,2]; these data are used to estimate prevalence, identify correlates of behaviors and determine prevention needs. Multiple methods have been used to sample populations at high risk of HIV infection including venue-based, time-space sampling; targeted sampling; snowball sampling; and respondent-driven sampling [3,4]. Respondent-driven sampling (RDS) [5,6] has been used successfully to reach injecting drug users (IDUs) in the United States [7,8] and elsewhere [9].
RDS has certain assumptions that must be met to determine if it is an appropriate sampling method to use with a particular group [10,11]. These assumptions require that the population being recruited must know one another as members of the target population (i.e., IDUs must know each other as IDUs). If members of the population cannot identify each other, then participants will not be able to produce eligible recruits and the method will fail to produce a sample. The population being recruited also must be adequately networked to accommodate a chain referral process; ideally, networks should form a single component (network of networks), rather than multiple, disconnected networks, so that referral chains can reach all subsets of the population in a defined area. Subsets of the population that are completely disconnected from the primary network cannot be reached by the peer recruitment process and thus the RDS findings will not be generalizable to these groups. A third assumption, that the sample size to be recruited using RDS is small relative to the overall size of the target population (i.e. a small sampling fraction), is required to ensure that each participant’s ability to be recruited remains constant over time because the pool of potential recruiters is not noticeably diminished [10]. Given that respondents may only participate once, it is important to ensure that the sample size does not exhaust the pool of potential recruiters in the population as sampling progresses. Two other RDS assumptions, that participants can accurately report their personal network size and that recruitment is a random selection from the recruiter’s network, are applicable to RDS analysis. Discussion of these assumptions is beyond the scope of this paper and has been reported elsewhere [12,13].
Few RDS studies have assessed these three assumptions. To build the literature on situations in which RDS does and does not work well as a recruitment and sampling strategy for reaching hard to reach groups, there is a need for quantitative indicators to assess the RDS assumptions. This paper defines quantitative measures to evaluate, post-hoc, the extent to which the three assumptions were met in the US National HIV Behavioral Surveillance System among injecting drug users for the first cycle of data collection from May 2005 to February 2006 (NHBS-IDU1). Based on this evaluation, we describe the lessons learned that were then applied to the second cycle (NHBS-IDU2).
## MATERIALS AND METHODOLOGY
Methods for NHBS-IDU are reported in detail elsewhere [14] and briefly described here. NHBS-IDU1 was conducted by the Centers for Disease Control and Prevention (CDC) in collaboration with state and local health departments in 23 metropolitan statistical areas (“cities”) within the United States. CDC determined that NHBS-IDU1 was not research; each local area obtained approval of human subjects in accordance with their institutions’ determinations.
Local project staff in each city started the NHBS-IDU1 cycle with formative research to determine logistics of survey operations and to gather information on the local IDU population [15]. Each city set up at least one interview field site accessible to the various local drug-use networks and began RDS with a limited number (8-10) of initial recruiters or ‘seeds’ representing various drug networks and geographic or demographic characteristics.
NHBS-IDU1 procedures included eligibility screening, obtaining oral informed consent from participants, and an interviewer-administered survey. Eligibility for NHBS includes being of age 18 or older, being a resident of the city, not having already participated in the current NHBS data collection cycle, and being able to complete the survey in English or Spanish. An additional IDU cycle eligibility criterion was having injected drugs within 12 months preceding the interview date, measured by self-report and either evidence of recent injection or adequate description of injection practices [14]. The survey measured characteristics of participants’ IDU networks (total number, gender and race/ethnicity), demographics, drug use and injection practices, sexual behaviors, HIV testing history, and use of HIV prevention services. Interviewers used handheld computers to administer the survey and record responses.
Participants could take the survey at any NHBS field site in their city. Participants who completed the survey were asked and trained to recruit others who also injected drugs by distributing number-coded coupons. Participants were compensated for their participation and for each eligible recruit who completed the survey; this dual-incentive structure is unique to RDS [5,6]. Compensation levels were determined in each city, but generally were about $25 for participation and $10 for recruitment.
NHBS-IDU1 was conducted from May 2005 through February 2006. Data collection duration varied across cities due to differences in timing for approval of human subjects, logistics, and speed of sample accrual.
### Measures
Participants who agreed to be recruiters were told to give coupons to someone they knew as an IDU. Participants (excluding seeds) described their relationship to the person who gave them their coupon. Multiple responses were allowed, including: sex partner, drug partner, family, friend, colleague, acquaintance, and stranger (“you don’t really know the person, just met him/her”). For analysis purposes, the participant’s recruiter was categorized as a stranger if this response option was selected with no additional relationship reported.
Five variables which may affect participants’ recruitment selections and introduce sampling bias were assessed: race/ethnicity, gender, age, preferred drug, and self-reported HIV status. Race and ethnicity were coded into one variable with mutually exclusive categories: white, black, Hispanic (regardless of race), and other (including Asians, Native Hawaiian and Pacific Islanders, multiracial persons, and those with no recorded race). The variable “preferred drug” was derived from questions asking frequency of use of several drug types and then grouped into 5 categories: heroin only, heroin and cocaine (equal frequency or combined as speedball), cocaine or crack only, amphetamine (including methamphetamine), and other (all other drugs or combinations thereof). Self-reported HIV status was categorized as HIV-positive or not (which included those whose results were negative or indeterminate, those who never received the result or never tested, and those whose HIV status could not be determined).
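A minimal sketch of the relationship and demographic recodes described above, written in Python for illustration only (the production recodes were written in SAS, and the variable names here are hypothetical, not the NHBS data dictionary):

```python
def race_ethnicity(hispanic: bool, race: str) -> str:
    """Mutually exclusive race/ethnicity groups; Hispanic ethnicity takes precedence over reported race."""
    if hispanic:
        return "Hispanic"
    if race in ("white", "black"):
        return race.capitalize()
    return "Other"  # Asians, Native Hawaiian and Pacific Islanders, multiracial persons, or race not recorded

def stranger_only(relationships: set) -> bool:
    """The recruiter counts as a 'stranger' only if no other relationship was reported."""
    return relationships == {"stranger"}

print(race_ethnicity(hispanic=False, race="white"))   # White
print(stranger_only({"stranger", "acquaintance"}))    # False -- not counted as recruited by a stranger
```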
### Data Management and Analysis Methods
Coupon numbers and other information linking recruiters to their recruits were collected and maintained in RDS Coupon Manager (RDSCM) 2.0 software (Cornell University, Version 2.0, Ithaca, New York, USA). Survey data were transferred from the handheld to a computer and then uploaded to a secure server; some survey records were lost during collection or transfer and only the recruitment data from RDSCM 2.0 remained. Survey and RDSCM 2.0 data were merged using SAS software (SAS Institute Inc., Version 9.1, Cary, North Carolina, USA) and output to an electronic text file for analysis in RDSAT software (Cornell University, Version 5.6, Ithaca, New York, USA). The analyses for this paper included only eligible participants, except where otherwise noted. For some analyses, city-specific samples were aggregated to report on the whole NHBS-IDU1 sample.
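In outline, that merge is a key join on the coupon number; a rough Python equivalent of the SAS step is sketched below (the file and column names are invented for illustration and are not the actual NHBS files):

```python
import pandas as pd

survey = pd.read_csv("survey_responses.csv")   # one row per completed interview (hypothetical export)
coupons = pd.read_csv("rdscm_links.csv")       # coupon_id, recruiter_coupon_id, field_site (hypothetical export)

# Join each survey record to its recruitment link via the coupon number.
merged = survey.merge(coupons, on="coupon_id", how="inner", validate="one_to_one")

# Keep eligible participants, as in the analysis dataset, and write a
# tab-delimited text file of the kind RDSAT can read.
merged[merged["eligible"]].to_csv("rdsat_input.txt", sep="\t", index=False)
```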
### Indicators for RDS Assumptions
#### Respondents know one another as members of the target population.
Using SAS, we calculated the proportion of participants reporting that their recruiter was a stranger; a low proportion (2-4%) indicates that this assumption is met [16]. We also assessed the proportion of potential participants who were eligible as a way to determine the extent to which participants knew one another as IDUs; a high proportion of ineligible recruits would suggest that this assumption was not met.
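Both indicators reduce to simple per-city proportions; a self-contained sketch with toy data (the column names are hypothetical):

```python
import pandas as pd

# One row per person screened (toy data).
screened = pd.DataFrame({
    "city":          ["Atlanta"] * 4 + ["Boston"] * 4,
    "eligible":      [True, True, False, True, True, True, True, False],
    "is_seed":       [True, False, False, False, True, False, False, False],
    "stranger_only": [False, True, False, False, False, False, False, False],
})

# Share of screened persons who met the eligibility criteria, by city.
pct_eligible = screened.groupby("city")["eligible"].mean().mul(100)

# Share of eligible, non-seed participants whose only reported relationship
# to their recruiter was "stranger", by city.
participants = screened[screened["eligible"] & ~screened["is_seed"]]
pct_stranger = participants.groupby("city")["stranger_only"].mean().mul(100)

print(pct_eligible)   # Atlanta 75.0, Boston 75.0 in this toy example
print(pct_stranger)   # Atlanta 50.0, Boston 0.0
```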
#### Respondents’ networks are linked and form a single network.
We used RDSAT to create a matrix of cross-recruitments. To determine whether the IDU networks within each city were linked, cross-recruitment was assessed for field site, as networks often are defined by geography. An example of cross-recruitment is when a participant interviewed at Field Site B had received his/her coupon from a recruiter interviewed at Field Site A. We also assessed cross-recruitment for the 5 variables; we report data only for race/ethnicity as it had the most impact on sampling. To be considered linked at least one recruitment between any two field sites or any two racial/ethnic groups, respectively, was required. The presence of at least one cross-recruitment in the sample suggests the presence of a large number of connections across groups in the population; the higher the proportion of cross-recruitments, the greater the number of network connections among IDUs.
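Outside RDSAT, the same matrix is simply a cross-tabulation of a recruiter attribute against the corresponding recruit attribute; a small sketch with made-up field sites follows (the identical code applies to race/ethnicity by swapping in those columns):

```python
import pandas as pd

# One row per recruitment link: where the recruiter and the recruit were interviewed (toy data).
links = pd.DataFrame({
    "recruiter_site": ["A", "A", "B", "B", "C", "C"],
    "recruit_site":   ["A", "B", "B", "B", "C", "A"],
})

sites = sorted(set(links["recruiter_site"]) | set(links["recruit_site"]))
matrix = pd.crosstab(links["recruiter_site"], links["recruit_site"]).reindex(
    index=sites, columns=sites, fill_value=0
)

# Off-diagonal entries are cross-recruitments; at least one between a pair of
# groups is the minimum evidence used here that their networks are linked.
total = matrix.values.sum()
cross = total - matrix.values.trace()
print(matrix)
print(f"cross-recruitments: {cross} of {total} links ({100 * cross / total:.0f}%)")
```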
#### Sample size is small relative to size of the target population.
The sampling fraction was defined as the number of persons screened for NHBS-IDU1 (regardless of eligibility) divided by the total number of IDUs in each city [17].
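For example, using the Atlanta figures from Table 2 (14,602 estimated IDUs, 616 persons screened), the sampling fraction is 616 / 14,602 ≈ 4.2%. The same one-line calculation applies to any city:

```python
# Sampling fraction per city = persons screened (any eligibility) / estimated IDU population.
# The two example values below are taken from Table 2; the population estimates come from Brady et al. [17].
idu_population = {"Atlanta": 14_602, "Miami": 9_280}
screened_total = {"Atlanta": 616, "Miami": 740}

sampling_fraction = {c: 100 * screened_total[c] / idu_population[c] for c in idu_population}
print(sampling_fraction)   # Atlanta -> 4.2%, Miami -> 8.0% after rounding, as in Table 2
```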
## RESULTS
### Recruitment
From May 2005 to February 2006 a total of 13,519 persons were recruited, 384 of whom were seeds. A total of 1,563 (12%) persons were deemed ineligible and excluded from analysis: 196 did not meet NHBS general eligibility criteria (86 of whom were ineligible due to previous participation) and 1,367 did not meet current injection drug use criteria. Additionally, 46 persons had no recruitment information so their records could not be used. There were 334 persons with lost survey records. In addition, we did not include for analysis 38 persons with responses of highly questionable validity and 67 who were not classified as either male or female.
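Taken together, these exclusions account for the analysis total: 13,519 − 1,563 − 46 − 334 − 38 − 67 = 11,471 records.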
In the complete analysis dataset, there were 334 seeds and 11,137 peer-recruited participants recruited for a total of 11,471 participants. Table 1 displays characteristics of the overall sample; city-specific characteristics of NHBS-IDU1 participants are reported elsewhere [18]. Among the 11,471 participants, most (71%) were male and were of age 35 years and older (81%) (Table 1). Nearly half (49%) were black, 25% white, and 21% Hispanic. Heroin was the preferred drug for 53% of the sample and 8% self-reported they were HIV-infected.
Table 1. Characteristics of Participants--United States, National HIV Behavioral Surveillance System: Injecting Drug Users, May 2005-February 2006

| Characteristic | No. | % |
| --- | --- | --- |
| Gender | | |
| Male | 8,158 | 71 |
| Female | 3,313 | 29 |
| Age Category (Years) | | |
| 18-24 | 443 | 4 |
| 25-34 | 1,730 | 15 |
| 35-44 | 3,600 | 31 |
| 44-54 | 4,374 | 38 |
| ≥55 | 1,324 | 12 |
| Race/Ethnicity | | |
| White | 2,841 | 25 |
| Black | 5,630 | 49 |
| Hispanic | 2,429 | 21 |
| Other^a | 571 | 5 |
| Preferred Drug | | |
| Heroin | 6,053 | 53 |
| Heroin and Cocaine^b | 3,599 | 31 |
| Cocaine or crack | 788 | 7 |
| Amphetamine^c | 626 | 6 |
| Other^d | 405 | 4 |
| HIV-Positive | | |
| Yes | 882 | 8 |
| No^e | 10,589 | 92 |
| Relationship to Recruiter^f | | |
| Main sex partner | 355 | 3 |
| Casual sex partner | 178 | 2 |
| Friend | 6,543 | 59 |
| Relative/family member | 390 | 4 |
| Person buy drugs from | 332 | 3 |
| Person buy drugs with | 2,257 | 20 |
| Person use drugs with | 3,317 | 30 |
| Person share needles with | 609 | 6 |
| Acquaintance | 2,362 | 21 |
| Stranger (only)^g | 519 | 5 |
| Total | 11,471 | |

Abbreviations: HIV, human immunodeficiency virus.

a. Includes Asians, Native Hawaiian and Pacific Islanders, person who reported multiple races and those for whom race was not recorded.
b. Heroin and cocaine use with equal frequency or combined as speedball.
c. Includes methamphetamine.
d. Includes all other drugs or combination of drugs.
e. Includes those who tested HIV negative (n=9,048), those whose confirmatory test was indeterminate (n=41), those who never received a test result (n=532), those never tested (n=914) and those for whom HIV test status could not be ascertained (n=54).
f. Relationships were reported by the participant; >1 response was allowed, therefore percentages do not add to 100. Seeds were not asked this question; percentages based on 11,137 participants.
g. Relationship was categorized as "stranger" if it was the only category chosen by the participant. If stranger was chosen as one of multiple categories, the responses appear in those categories but not in the "stranger" category.
Table 2. Selected Characteristics of Samples, by city--United States, National HIV Behavioral Surveillance System: Injecting Drug Users, May 2005-February 2006

| Metropolitan Statistical Area ("City") | IDU Population Size^a (No.) | NHBS-IDU Sample Size^b (No.) | Sampling Fraction^c (%) | Proportion Eligible^d (%) | Proportion Recruited by a Stranger^e (%) | Cross-Recruitment by Field Site (%) | Cross-Recruitment by Race/Ethnicity^g (%) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Atlanta, Georgia | 14,602 | 616 | 4.2 | 91 | 12 | 18 | 17 |
| Baltimore, Maryland | 58,720 | 785 | 1.3 | 92 | 20 | 21 | 25 |
| Boston, Massachusetts | 67,044 | 540 | 0.8 | 88 | 2 | 30 | 35 |
| Chicago, Illinois | 32,206 | 653 | 2.0 | 83 | 4 | 46 | 18 |
| Dallas, Texas | 31,931 | 620 | 1.9 | 92 | 3 | 35 | 27 |
| Denver, Colorado | 20,689 | 612 | 3.0 | 87 | 4 | 74 | 43 |
| Detroit, Michigan | 27,166 | 568 | 2.1 | 96 | 3 | n/a^f | 16 |
| Fort Lauderdale, Florida | 7,375 | 441 | 6.0 | 87 | 8 | 36 | 32 |
| Houston, Texas | 34,117 | 662 | 1.9 | 90 | 1 | n/a^f | 32 |
| Las Vegas, Nevada | 13,708 | 341 | 2.5 | 98 | 17 | 8 | 40 |
| Los Angeles, California | 98,616 | 661 | 0.7 | 91 | 2 | 7 | 42 |
| Miami, Florida | 9,280 | 740 | 8.0 | 82 | 3 | 25 | 34 |
| Nassau, New York | 12,177 | 557 | 4.6 | 95 | 0.4 | 0.2 | 47 |
| New Haven, Connecticut | 13,629 | 593 | 4.4 | 90 | 2 | 11 | 34 |
| New York City, New York | 91,327 | 529 | 0.6 | 96 | 4 | 2 | 33 |
| Newark, New Jersey | 16,153 | 550 | 3.4 | 80 | 2 | 0.3 | 21 |
| Norfolk, Virginia | 10,259 | 580 | 5.7 | 86 | 4 | 14 | 9 |
| Philadelphia, Pennsylvania | 58,722 | 586 | 1.0 | 92 | 3 | 25 | 24 |
| St Louis, Missouri | 10,942 | 633 | 5.8 | 83 | 0.2 | n/a^f | 8 |
| San Diego, California | 25,946 | 550 | 2.1 | 98 | 2 | 39 | 44 |
| San Francisco, California | 28,462 | 646 | 2.3 | 90 | 6 | 36 | 51 |
| San Juan, Puerto Rico | 15,031 | 585 | 3.9 | 98 | 4 | 2 | -- |
| Seattle, Washington | 28,505 | 471 | 1.7 | 85 | 2 | 6 | 52 |
| Total/Median^h | 726,607 | 13,519 | 2.3 | 90 | 5 | N/A | N/A |

Abbreviations: HIV, human immunodeficiency virus; IDU, injecting drug user; NHBS-IDU, National HIV Behavioral Surveillance System: Injecting Drug Users; n/a, not applicable.

a. Number of IDUs in the MSA was obtained from Brady et al. [17].
b. "Sample size" includes all recruited persons regardless of eligibility.
c. Sampling fraction was calculated as the NHBS-IDU sample size divided by the IDU population size.
d. Denominators do not include records without recruitment information, lost records, or persons excluded based on validity of response or gender.
e. Percentage of non-seed participants who said the person who gave them the coupon was a stranger.
f. Did not use multiple field sites.
g. Race/ethnicity cross-recruitment not calculated for San Juan as 99% were Hispanic. The Norfolk sample was 87% Black and the St Louis sample was 91% Black.
h. Total for population size, sample size, proportion eligible, and proportion recruited by a stranger; median value for sampling fraction.
### RDS Assumptions
#### Respondents know one another as members of the target population.
Table 1 shows responses regarding the relationship to the recruiter (as reported by the participant). The most common (59%) relationship was “friend;” many reported relationships related to drug use such as someone they “buy drugs with” or “buy drugs from.” Overall, 5% of non-seed participants reported that their recruiter was “a stranger” (with no other relationship; only 26 persons reported stranger and another relationship); this proportion varied by city (range 1.2%-20%), with 5 cities having >5% recruitment by strangers (Table 2).
The proportion of potential participants who were eligible for NHBS-IDU1 was high overall (90%) and in each city (range 83%-98%, Table 2). The majority of potential participants (61%, range 40%-86%) had physical signs of recent injection (data not shown). Although a higher proportion of ineligibles in cities with a high proportion of participants recruited by a stranger might be expected, we did not see this pattern (Table 2).
#### Respondents’ networks are linked and form a single network.
Of the 23 NHBS-IDU1 cities, 3 used a single field site, so cross-recruitment was not assessed. All other cities had multiple field sites, ranging from 2 to 7 with an average of 4 field sites. Three of the cities with multiple field sites each had 1 field site with no cross-recruitment to any other field site. In 1 of these cities, a new field site was opened after the existing ones were closed, making cross-recruitment to this site impossible. We assume cross-recruitment would have occurred from this field site had it been possible and therefore included all data in the analysis dataset. The other 2 cities had a field site located in an area that was geographically distant from the other locations, with limited hours of operation; there was no evidence suggesting that participants interviewed at these 2 field sites were part of the same networks as participants from other field sites. Therefore, data from these 2 field sites (n=90) were considered separate networks (i.e., not part of one component) and were excluded from the analysis dataset.
In all of the cities with multiple field sites there was at least 1 cross-recruitment by field site and by race/ethnicity. The proportion of cross-recruitments by field site ranged from 0.2% to 74% (Table 2). The proportion of cross-recruitments by race/ethnicity ranged from 8% to 52% (Table 2). In the two cities with the lowest proportion of cross-recruitments, nearly all the participants were Black (Table 2).
#### Sample size is small relative to size of the target population.
The sample sizes by city ranged from 341 to 785 (Table 2). Overall, the sampling fraction was low, with less than 10% of the IDU population sampled in each city (median = 2.3%; range: 0.6%-8.0%).
## DISCUSSION
In summary, NHBS-IDU1 met the three RDS assumptions we assessed based on the quantitative indicators we created. Results for each assumption varied by city. Related to the first assumption, that participants knew one another as members of the target population, we found that, for most cities, the proportion of recruitments by a stranger was low while the proportion of eligible recruits was high. In 5 cities the proportion recruited by a stranger was >5%, but these cities still had high eligibility rates suggesting that participants knew each other well enough to recognize each other as IDUs. This assumption also has implications for analysis as RDS weighting is based on individuals with larger networks having greater likelihood of being recruited; if many participants recruit strangers (i.e., persons outside their network), then RDS weights based on network size would not be applicable. To examine the second RDS assumption, that the IDU networks within the NHBS cities were linked, we examined cross-recruitment by field site and by race/ethnicity. Cross-recruitment by field site ranged from 0.2% to 74%. Two cities had limited cross-recruitment by race/ethnicity, which may suggest that IDU networks in these cities are racially defined. When there is a low proportion of cross-recruitments, RDS analysis may still produce valid estimates; however, the variance around these estimates will be noticeably high. For the third assumption, we found that in each city the sampling fraction was too small to noticeably diminish the recruiter pool, therefore allowing for robust recruitment.
This is the first paper to assess the extent to which the three RDS assumptions were met in samples from a standardized, multi-city behavioral surveillance system in the United States using quantitative indicators. The results from this paper can be used to guide other researchers to conduct similar evaluations of their own RDS studies. We created indicators for the assumptions that are easy to calculate; although we conducted our assessment post-hoc, the assumptions should be considered during formative research and the indicators can be used while planning an RDS study (e.g., considering sampling fraction by using existing population size estimates and planned sample size) or monitored as part of process evaluation during sample accrual (proportion recruited by a stranger and cross-recruitments) so that recruitment can be adjusted as needed. Rudolph et al. [19] also described ways they tested RDS assumptions in New York City among IDUs, using similar metrics reported here.
Two papers reviewing 123 RDS studies outside the US discussed challenges [20] and summarized characteristics of RDS studies [7]. Papers such as these have not reported data on whether these 3 assumptions were met empirically. Few other studies have reported on relationships between recruiters and recruits, including the proportion recruited by a stranger or cross-recruitments [19]. Other RDS studies have reported high proportions of eligible recruits, similar to the high proportion found in NHBS-IDU1 [21-23]. The hidden nature of most RDS target populations often precludes knowledge of population size and therefore makes calculation of the sampling fraction more challenging; we were able to use existing published estimates of the IDU population size in each NHBS city [17]. This is the first paper to report sampling fractions for 23 RDS samples collected using a standard protocol. Our data can contribute to refinement of theoretical work related to RDS estimation: in NHBS-IDU1, the overall sampling fraction was 2.3%, a figure well below the threshold of 50%, at which sampling-with-replacement can become a source of bias [24].
Our analyses had some limitations that suggest further development of quantitative indicators of the three RDS assumptions. Field site may not be the best variable to assess whether networks are sufficient to sustain a chain-referral process; other factors such as neighborhood of residence or zip code may be more relevant within each NHBS city to determine the extent to which networks are related. Our findings on cross-recruitment by race/ethnicity are similar to that reported in another IDU study in New York City [19]. Future research should consider what proportion of cross-recruitment is considered adequate to demonstrate linked networks; our standard of 1 cross-recruitment is a minimum level for lack of cross-recruitment to be ruled out, rather than a level of adequate cross-recruitment. Local NHBS project staff are encouraged to examine the assumptions considered here for their own data and staff from each NHBS city should consider their knowledge of the local IDU population to determine how well RDS sampled different groups of IDU within their city. The sample of IDUs reached by RDS can be compared to other methods of recruitment to determine if key sub-populations were missed [25].
Based on the analysis reported here, additional operational procedures were developed for NHBS-IDU2. A more refined definition of ‘knowing’ someone was added to the question assessing the relationship to the recruiter as well as to the recruiter training script (By “know,” I mean you know their name OR you see them around even if you don’t know their name). Participants who reported that their recruiter was a stranger were probed using standardized questions; if participants reported never seeing the recruiter prior to being given a coupon or reported having first seen the recruiter in a situation related to NHBS-IDU, then the relationship classification of ‘stranger’ was considered validated. In addition, recruiters were trained not to give coupons to strangers. As part of their formative research, NHBS-IDU staff were required to analyze peer recruitment patterns in their NHBS-IDU1 data by race/ethnicity, gender, and other characteristics of potentially insular sub-populations of IDU (i.e., networks that are not linked to other networks). Based on this information, staff selected seeds from loosely networked sub-populations to ensure each group’s representation, whereas closely networked sub-populations did not require the same extent of planning for selecting seeds. In addition, staff assessed potential field sites in part for the location’s ability to serve as a “bridge” between major IDU sub-populations. Other formative research activities such as identifying studies of local IDU populations that describe networks and other characteristics of drug users can also help lay the foundation for the success of an RDS sample in reaching all groups of IDUs [15].
RDS is increasingly used to sample IDUs and other populations at high risk of HIV infection. As RDS is still a relatively new sampling and analysis method, it is important for investigators to share operational findings. As use of RDS increases, researchers must not only report on whether RDS assumptions were met to justify its use among specific populations, as we did here, but also plan formative research to ensure that assumptions can be met.
### AUTHOR DISCLAIMER
The findings and conclusions in this manuscript are those of the authors and do not necessarily represent the views of the Centers for Disease Control and Prevention.
## ACKNOWLEDGEMENTS
We would like to thank Drs. Lillian Lin and Christopher Johnson, Michael Spiller III, and Cristin Haggard for their consultation regarding the analyses in this report. We recognize contributions to this report made by the persons who were NHBS-IDU Principal Investigators (R. Luke Shouse, Georgia Division of Human Resources; Colin Flynn, Maryland Department of Health and Mental Hygiene; Eric Rubinstein, Massachusetts Department of Public Health; Carol Ciesielski, Chicago Department of Public Health; Sharon Melville, Texas Department of State Health Services; Beth Dillon, Colorado Department of Health and Environment; Eve Mokotoff, Michigan Department of Community Health; Marcia Wolverton, Houston Department of Health; Dave Crockett, Nevada Department of Public Health; Trista Bingham, Los Angeles County Department of Public Health; Marlene LaLota, Florida Department of Health; Chris Nemeth, New York Department of Health; Christopher Murrill, New York City Department of Health and Mental Hygiene; Helene Cross, New Jersey Department of Health and Senior Services; Dena Bensen, Virginia Department of Public Health; Kathleen Brady, Philadelphia Department of Health; Assunta Ritieni, California Department of Health; H Fisher Raymond, San Francisco Department of Public Health; Sandra Miranda De Leon, Puerto Rico Department of Health; Yelena Friedberg, Missouri Department of Health and Senior Services; Maria Courogen, Washington Department of Health) and the Behavioral Surveillance Team, Behavioral and Clinical Surveillance Branch, Division of HIV/AIDS Prevention, CDC. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46226242184638977, "perplexity": 5541.505334754072}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250592394.9/warc/CC-MAIN-20200118081234-20200118105234-00407.warc.gz"} |
http://math.stackexchange.com/users/15205/bastian-galasso-diaz | # Bastian Galasso-Diaz
bio website c-bastianscanner.mercadoshops… location Santiago, Chile age 27 member for 2 years, 11 months seen Nov 6 '12 at 19:21 profile views 46
3 Schwarzian Derivative and One-Dimensional Dynamics - how are they connected?
2 Reference for Ergodic Theory
1 $f:\mathbb R \to \mathbb R$ continuous, with a point of odd period, implies existence of a point of even period
0 Schwarzian Derivative and One-Dimensional Dynamics - how are they connected?
0 Importance of Poincaré recurrence theorem? Any example?
# 140 Reputation
+25 A question about continued fractions and Gauss map
+8 tail estimate for ''real zeta function''
+10 $f:\mathbb R \to \mathbb R$ continuous, with a point of odd period, implies existence of a point of even period
+10 Schwarzian Derivative and One-Dimensional Dynamics - how are they connected?
# 5 Questions
8 Root of polynomial
5 A question about continued fractions and Gauss map
2 A few question in abstract algebra
2 Galois extension for a specific matrix group
1 tail estimate for ''real zeta function''
# 11 Tags
4 dynamical-systems × 5
0 field-theory × 2
3 reference-request × 3
0 ring-theory
3 intuition × 2
0 algebra-precalculus
0 ergodic-theory × 3
0 polynomials
0 abstract-algebra × 2
0 real-analysis
# 1 Account
Mathematics 140 rep 6 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7554256319999695, "perplexity": 2450.8026491735377}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997888216.78/warc/CC-MAIN-20140722025808-00183-ip-10-33-131-23.ec2.internal.warc.gz"} |
https://physics.stackexchange.com/questions/486482/in-special-relativity-is-it-allowed-to-ask-how-much-time-has-elapsed-in-a-seco?noredirect=1 | # In Special Relativity, is it allowed to ask 'How much time has elapsed in a second inertial frame at a particular moment in the first inertial frame'?
Or is it a meaningless question?
For example, A and his friend B are the same age initially. B travels relative to A at a very high speed. A keeps observing B from his frame. At one moment, A observes that B has aged only half as himself (say A is aged 60 when he makes the observation and observes B to be 30).
The question is 'How old is B from his own (B's) frame of reference at the moment A made the above observation?' If it has an answer, what is it? If the question is meaningless, why?
• Simultaneity is frame-dependent. There is no “same moment” for two observers in relative motion at different locations. – G. Smith Jun 17 '19 at 1:50
'How old is B from his own (B's) frame of reference at the moment A made the above observation?'
Your problem lies in the very moment you are talking about. In SR there isn't a unique moment for the universe. A moment is meaningful only if you specify the reference frame of that moment.
I.e., you should have asked something like this:
1.'How old is B from his own (B's) frame of reference at the moment $$t$$ in B's frame where A made the above observation?'
Or
2.'How old is B from his own (B's) frame of reference at the moment $$t'$$ in A's frame where A made the above observation?'
The answers to these questions are quite different! Unfortunately I can't go further without math to explain the difference. If you understood what I am saying then you need not continue reading, unless you are interested in the math as well.
To use Lorentz transformations, we need to define events in spacetime. Let's say that A's and B's clocks are synchronized at $$t=t'=0$$. B arranges his birthday party when he becomes 30 y/o. So we can assign an event in spacetime to this party, $$E_1=(ct,x)=(c(30y),0)$$ (B is at the origin of his frame, so $$x=0$$). If we were to use the Galilean transformation to analyze this event in A's frame we would get $$E_1'=(c(30y),-v(30y))$$; in other words, according to Newton himself, A will be 30 y/o when he observes B's birthday party. (One might argue that light travel time between observers is not considered here. Well, it's simple: because the observers are fully aware of their distance, they can tell how much time it took for the light to travel. So A can tell you when B's birthday actually happened, though A will receive the signal much later in reality.) With the Lorentz transformation, however, we will get $$E_1'=(c\gamma(30y),\gamma(-vt))$$, i.e., according to Einstein, A is much older than B when he observes B's birthday. Now back to the question; here I can show you why it's meaningless to ask
'How old is B from his own (B's) frame of reference at the moment A made the above observation?'
To answer your question: from the analogy above, one might jump to the conclusion that of course B would be 30 y/o, because after all we assumed so! (Just check how I defined $$E_1$$.) On the other hand, one might use the inverse Lorentz transformation for the event $$E_1'$$ above to see that $$E_1=(c\gamma^2(30y),\gamma(vt))$$ (as was done by @RogerJBarlow) and conclude that B will be 120 y/o. How is that possible?! Well, it's because the moment you are talking about is different for each observer. In fact, if you were to ask the first question, the first answer would be your solution; on the other hand, if you were to ask the second question, the second answer would be right.
Update: Note that the inverse Lorentz transformation is $$t=\gamma (t'+vx'/c^2)$$. If you pick $$x'$$ from the event $$E_1'=(c\gamma(30y),\gamma(-vt))$$ you will arrive at $$t=t$$, which is obvious because applying the two Lorentz transformations back to back does not change anything. However, if you assume another event ("the observation itself") in A's frame such that $$E_1'=(ct',x')=(c\gamma(30y),0)$$ ($$x'=0$$ because A is at his origin), then you will get the second answer.
TL;DR
From B's point of view, the moment he becomes 30 y/o is not simultaneous with the moment at which A observes him. And no, it's not because of light travel time. So he concludes that when A observes him, he is 120 y/o, while what A observes is B at 30 y/o!
I really wish more books used spacetime diagrams to teach relativity, because 90% of confusion in relativity problems can be resolved by drawing a careful spacetime diagram. The idea is to superimpose the $$t$$ and $$x$$ axes for one observer (call them observer B) and the $$t'$$ and $$x'$$ axes for a moving observer (observer A) on the same diagram. All events that happen "at the same moment" according to observer B lie along a line parallel to the $$x$$-axis; all events that happen "at the same moment" according to observer A lie along a line parallel to the $$x'$$-axis.
Instead of 30 and 60 years, let's use 1 year and 2 years.1 You ask
At one moment, A observes that B has aged only half as himself (say A is aged 60 when he makes the observation and observes B to be 30). How old is B from his own (B's) frame of reference at the moment A made the above observation?
The event we are concerned with in this case (i.e., the point on the spacetime diagram) is the event at which A observes B's age. The key ambiguity in this question is whether "at the moment" means "at the same moment" according to A, or "at the same moment" according to B. If 2 years have elapsed on A's clock when they measure B's age, then "at the moment of the observation" according to A, only 1 year has elapsed:
But "at the moment of the observation" according to B, 4 years have elapsed:
The fact that we need to draw different lines to "read off" $$t$$ and $$t'$$ is what makes the question a bit ambiguous as stated.
So (returning to the original numbers) to get an unambiguous answer, you can ask either
What is the $$t$$ coordinate (i.e. the time elapsed as observed by B) of the event where A observes B's age?
in which case the answer is 120 years. Or you can ask
What is the $$t'$$ coordinate (i.e. the time elapsed as observed by A) of the event where A observes B's age?
in which case the answer is 30 years. So long as you're clear about whether you want to know $$t$$ or $$t'$$, there is an unambiguous answer.
1 This is largely so I can use some Mathematica code I already had on hand.
It's a little hard to interpret your quoted question: "How old is B from his own frame of reference at the moment A made the above observation?" I'm interpreting this to mean "According to B's frame of reference, how old is B at the moment when A makes his observation?" If you meant something else, then the following might not apply:
I'm assuming that when B left earth, both A and B were aged zero.
1) When A is 60, he says that B is 30. (This was given in your post.)
2) The situation is entirely symmetric, so when B is 60, he says that A is 30.
3) Therefore we know that A ages at half-speed in B's frame.
4) Therefore when B is 120, he says that A is 60.
5) A makes his observation when he is 60. Therefore when B is 120, he says "At this moment, A is making his observation".
So the answer to your question is 120, and since your question has an answer it is meaningful --- if you meant what I think you meant.
• Maybe you could mention that these observations don't include the light travel time. – PM 2Ring Jun 17 '19 at 3:37
• @Ryder See G. Smith's comment about the relativity of simultaneity on your question. – PM 2Ring Jun 17 '19 at 4:49
• It's almost like different versions of the same universe exist for each observer. Well---If you live in Chile and I live in Canada, you'll say that the United States is located to the north and I'll say it's located to the south. Does that mean that different versions of the Universe exist for each of us? I suppose you can say that if you want to, but I'd prefer to say that we have two different (and equally valid) ways of describing the same Universe. I say the US is to the south; you say it's to the north; Alice says she's 60 when Bob is 30; Bob says Alice is 30 when Bob is 60... – WillO Jun 17 '19 at 6:01
• ....There really is an exact analogy here, and when you fully understand that, you'll understand relativity. – WillO Jun 17 '19 at 6:02
• That's not the answer to the question, in my opinion, although what you say is not wrong either. When A observes B and says that B is 30, it's meaningless to say that at that very moment B will observe A and say that A is 30 and he himself is 60, because A's and B's clocks do not share the same notion of simultaneity. However, if we just say that when B is 60 in his frame he observes A, he will conclude that A is 30, and this is right due to symmetry – Paradoxy Jun 17 '19 at 11:12
It's all in the Lorentz Transform: $$t'=\gamma (t-v x/c^2)$$
In your case $$\gamma=2$$, $$t=60$$ and as $$A$$ is presumably at the origin, $$t'=120$$y, as @WillO says.
However different values of $$x$$ would give different $$t'$$. So this is not 'a particular moment in the first inertial frame' which is what your question asks about. At the $$t=60$$ moment there are presumably various events occurring simultaneously (!) according to $$A$$; $$B$$ will see them as having different $$t'$$ as they have different $$x$$ values.
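A minimal numeric sketch of this transformation (working in units where c = 1 and assuming v = √3/2 ≈ 0.866c so that γ = 2, the value quoted above):

import math

c = 1.0                      # years and light-years, so c = 1
v = math.sqrt(3) / 2         # ~0.866c, chosen so that gamma = 2
gamma = 1 / math.sqrt(1 - v**2 / c**2)

t, x = 60.0, 0.0             # the event "A turns 60" at A's origin
t_prime = gamma * (t - v * x / c**2)

print(round(gamma, 3), round(t_prime, 3))   # 2.0 120.0 -> B assigns 120 y to that event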
The outcome depends on the method: how A (or B) measures, and which clock he compares to which clock.
The problem is that A and B can only directly compare the readings of their clocks when they are at the same point. If they are at some distance from each other, then to say what "another clock" shows they must make some assumptions about the one-way speed of light and set up another clock that is synchronized with their own.
Let's say that at the initial moment A and B are at the "starting position" close to each other and both their clocks show 0.
1) Let's say A is "at rest" and B is moving. A places clock A1 at the starting point and clock A2 at the finish and synchronizes these clocks with a beam of light, assuming that the one-way speed of light is c. When B comes to clock A2 he compares the readings (in the immediate vicinity) of his own clock with clock A2. Clock A2 shows 60, while clock B shows 30.
2) Let's say B is "at rest" and A is moving. B places clock B1 at the starting point and clock B2 at the finish and synchronizes these clocks with a beam of light, assuming that the one-way speed of light is c. When A comes to clock B2 he compares the readings (in the immediate vicinity) of his own clock with clock B2. Clock B2 shows 60, while clock A shows 30.
Let us demonstrate the time dilation of SR in the following experiment (Fig. 1). A clock moving with velocity $$v$$ measures time $$t'$$. The clock passes point $$x_{1}$$ at the moment of time $$t_{1}$$ and passes point $$x_{2}$$ at the moment of time $$t_{2}$$.
At these moments, the positions of the hands of the moving clock and the corresponding fixed clock next to it are compared.
Let the hands of the moving clock measure the time interval $$\tau_{0}$$ during the movement from the point $$x_{1}$$ to the point $$x_{2}$$, and let the hands of clocks 1 and 2, previously synchronized in the fixed or "rest" frame $$S$$, measure the time interval $$\tau$$. This way,
$$\tau '=\tau _{0} =t'_{2} -t'_{1},$$
$$\tau =t_{2} -t_{1} \quad (1)$$
But according to the inverse Lorentz transformations we have
$$t_{2} -t_{1} ={(t'_{2} -t'_{1} )+{v\over c^{2} } (x'_{2} -x'_{1} )\over \sqrt{1-v^{2} /c^{2} } } \quad (2)$$
Substituting (1) into (2) and noting that the moving clock is always at the same point in the moving reference frame $$S'$$, that is,
$$x'_{1} =x'_{2} \quad (3)$$
We obtain
$$\tau ={\tau _{0} \over \sqrt{1-v^{2} /c^{2} } } ,\qquad (\tau _{0} =\tau ') \quad (4)$$
This formula means that the time interval measured by the fixed clocks is greater than the time interval measured by the single moving clock. This means that the moving clock lags behind the fixed ones, that is, it slows down.
Every observer repeats the same procedure.
The animation below demonstrates change of frames and time dilation:
The time of the "stationary" observer is the same (universal) in the whole frame and evenly distributed across the whole frame. So, when some answers say that B is not younger but surprisingly older, they are in some sense correct, because they mean the time of the whole reference frame in which A moves. The stationary observer "occupies" the whole frame. Time in the "stationary" reference frame of B runs faster than the time of the single moving clock A; time in the "stationary" reference frame of A runs faster than the time of the single moving clock B.
Since the situation is symmetric there are 2 answers, one for each frame of reference. If we take the relative velocity to be v=0.866c for a gammafactor of γ=2, when A is 100 years old, in A's frame B is only 50 years old. In B's frame, when B is 50 years old, A is only 25 years old. If they could beam information to each other they could build an antitelephone with which they could send information into their own past (If A beams a message to B when A is 100 years old and B beams it back to A, A will receive his own message when he is 25 years old). Therefore faster than light information or beaming could cause causal violations. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 57, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6861899495124817, "perplexity": 349.81962022294545}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347385193.5/warc/CC-MAIN-20200524210325-20200525000325-00420.warc.gz"} |
https://pypi.org/project/nbhugoexporter/0.1.2/ | Export Jupyter notebooks to a Hugo compatible format
Project description
# Export Notebooks To Hugo Compatible Markdown
## Basic Installation and Use
pip install nbhugoexporter
will install the exporter. You will also need to add some shortcode definitions
to Hugo. You can customize these as you wish, but an easy way to get started is
to run the following from the root of your Hugo project:
mkdir -p layouts/shortcodes
for x in cell input; do for y in start end;
do curl -L https://github.com/jbandlow/nb_hugo_exporter/raw/master/resources/jupyter_$x\_$y.html > layouts/shortcodes/jupyter_$x\_$y.html;
done; done;
You can then run the exporter with
nbconvert path/to/nb_file.ipynb --to hugo --output-dir content/path/insert-title-here
This will create a content/path/insert-title-here directory with an
index.md file derived from nb_file.ipynb. The generated metadata will include
---
title: Nb File
date: <last file modification time for nb_file.ipynb>
draft: True
...
---
along with any other metadata you've specified. To set metadata, go to Edit -> Edit Notebook Metadata in your notebook and add a block like
"hugo": {
"key1": value1,
...
}
with whatever keys and values you wish. The title value will default to the notebook filename with snake_case replaced by Initial Caps. All auto-generated values (title, date, and draft) can be overridden in the notebook metadata.
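For instance, a metadata block like the following (the keys and values here are only an illustration) would override the defaults:

"hugo": {
    "title": "My First Post",
    "date": "2018-01-15",
    "draft": false,
    "tags": ["jupyter", "hugo"]
}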
The resulting markdown will contain the following hugo shortcodes:
{{% jupyter_cell_start <cell_type> }}
{{% jupyter_input_start }}
...
{{% jupyter_input_end }}
...
{{% jupyter_cell_end }}
in the places you'd expect. <cell_type> is the Jupyter cell type, e.g.,
markdown, code, etc.
You may also want to configure your CSS. In particular, the exporter currently
adds some unnecessary blank lines. These can be cleaned up with
.jupyter-cell p:empty { display: none; }
Finally, for LaTeX to render properly, you should include the MathJax script in your pages.
Note that nbconvert --to hugo solves the [underscore problem](https://gohugo.io/content-management/formats/#issues-with-markdown) with the "tedious" solution of simply quoting all underscores in math mode. So there is no need for the MathJax configuration script that "fixes <code> tags" in your Javascript, or the custom CSS described in that post.
That's it! Happy blogging with Jupyter notebooks and Hugo.
## Acknowledgements
Shout-out to the amazing [Hugo](https://gohugo.io), and
[Jupyter](https://jupyter.org) teams for building incredible tools.
For another approach to this issue, see
[hugo-jupyter](http://journalpanic.com/hugo_jupyter/), from [Stephan
Fitzpatrick](https://github.com/knowsuchagency). This didn't fully fit my needs,
but it might fit yours. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19151000678539276, "perplexity": 27766.224692331652}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585348.66/warc/CC-MAIN-20211020183354-20211020213354-00112.warc.gz"} |
https://arxiv.org/abs/cond-mat/9405061 | cond-mat
# Title:Non-equilibrium noise in a mesoscopic conductor: microscopic analysis
Abstract: Current fluctuations are studied in a mesoscopic conductor using non-equilibrium Keldysh technique. We derive a general expression for the fluctuations in the presence of a time dependent voltage, valid for arbitrary relation between voltage and temperature. Two limits are then treated: a pulse of voltage and a DC voltage. A pulse of voltage causes phase sensitive current fluctuations for which we derive microscopically an expression periodic in $\int V(t)dt$ with the period $h/e$. Applied to current fluctuations in Josephson circuits caused by phase slips, it gives an anomalous contribution to the noise with a logarithmic singularity near the critical current. In the DC case, we get quantum to classical shot noise reduction factor 1/3, in agreement with recent results of Beenakker and Büttiker.
Comments: 6 pages, figures by request, RevTeX, Landau Institute preprint 261/6437
Subjects: Condensed Matter (cond-mat)
Journal reference: JETP Lett. 59, 857 (1994)
Cite as: arXiv:cond-mat/9405061
## Submission history
From: Leonid Levitov [view email]
[v1] Mon, 23 May 1994 04:08:30 UTC (8 KB) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.734394907951355, "perplexity": 2799.8686593240814}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583660529.12/warc/CC-MAIN-20190118193139-20190118215139-00291.warc.gz"} |
https://www.originlab.com/doc/X-Function/ref/rank | # 2.2.22 rank
## Brief Information
Rank data with user-specified thresholds
## Command Line Usage
1. rank irng:=col(a) orng:=col(b);
2. rank irng:=col(1) from:=0 to:=3 method:=1 threshold:="0 0.5 0.6 1";
## Variables
Each variable is listed below as Display Name (variable name; I/O and type; default value), followed by its description.

Input (irng; Input, Range; default <active>): Specifies the input data range.

Number of Ranks (interval; Input, double; default 2): This variable is available only when the method variable is set to 0. It is a positive integer that specifies the number of ranks or levels.

From (from; Input, double; default 1): Specifies the first rank. For data points that belong to the first level, the rows in the destination range that correspond to these data points will be filled with the value of this variable.

To (to; Input, double; default 2): Specifies the last rank. For data points that belong to the last level, the rows in the destination range that correspond to these data points will be filled with the value of this variable.

User Defined Thresholds (method; Input, int; default 0): Specifies whether or not to use user-defined thresholds. If this variable is 1, you can specify the user-defined thresholds in the Thresholds edit box, and the number of ranks is automatically calculated from the thresholds.

Thresholds (threshold; Input, string; no default): This variable is available only when the method variable is set to 1. It specifies the beginning values for the ranks/levels. The thresholds should be separated with space.

From Minimum Value (min; Input, double; no default): This variable is available only when the method variable is set to 0. It specifies the minimum value of the data points that will be ranked. Rows in the output range that correspond to data points less than this value will be set to missing. By default, the minimum value of the input data will be automatically used for this variable. If you want to specify another value, you can uncheck the Auto checkbox and then enter the value in the edit box.

To Maximum Value (max; Input, double; no default): This variable is available only when the method variable is set to 0. It specifies the maximum value of the data points that will be ranked. Rows in the output range that correspond to data points greater than this value will be set to missing. By default, the maximum value of the input data will be automatically used for this variable. If you want to specify another value, you can uncheck the Auto checkbox and then enter the value in the edit box.

Output (orng; Output, Range; default <new>): Specifies the destination for the output result.
See the syntax here.
## Description
This X-Function can be used to decide whether some data points are within certain ranges.
The variables min, max and thresholds determine the ranges. Each data point in the input range is examined. If its value is within a certain range, the corresponding row in the output range will be filled with the rank/level for this range.
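As a rough illustration of the ranking logic (a Python sketch, not Origin code; the example values are made up):

def rank_values(values, thresholds, ranks):
    # thresholds[i] is the beginning value for ranks[i]; thresholds must be ascending
    out = []
    for v in values:
        level = None                      # None stands in for a missing value
        for start, r in zip(thresholds, ranks):
            if v >= start:
                level = r
        out.append(level)
    return out

# mirrors: rank irng:=col(1) from:=0 to:=3 method:=1 threshold:="0 0.5 0.6 1";
print(rank_values([0.1, 0.55, 0.7, 1.2, -0.3], [0, 0.5, 0.6, 1], [0, 1, 2, 3]))
# -> [0, 1, 2, 3, None]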
Keywords:ranking, classify, classification | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.15431833267211914, "perplexity": 872.9727165141851}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195527089.77/warc/CC-MAIN-20190721164644-20190721190644-00231.warc.gz"} |
https://tokyox.sakura.ne.jp/dokuwiki/doku.php/wiki/mathjax | wiki:mathjax
# MathJax Plugin
---- plugin ----
description: Enables MathJax [http://mathjax.org] parsing of TeX math expressions in wiki pages
author: Mark Liffiton
email: [email protected]
type: action, syntax
lastupdate: 2016-03-28
compatible: Ponder Stibbons, Rincewind, Angua, Adora Belle, Weatherwax, Binky, 2014-09-29 "Hrun", Detritus, Elenor of Tsort
depends:
conflicts: creole, indexmenu2, revealjs, s5
similar: jsmath, latex
tags: math, tex, latex, mathjax
## Overview
This plugin adds MathJax to your wiki pages to let you easily write mathematical formulas that will be typeset and displayed cleanly. It is written to be as simple as possible; it loads and configures the script, protects TeX math expressions from other parsing, and no more.
## Installation
Install the plugin using the Plugin Manager and the download URL above, which points to latest version of the plugin. Refer to Plugins on how to install plugins manually.
### Workaround for IE and Old Dokuwiki Versions
Dokuwiki versions 2012-01-25 'Angua' and earlier have a bug that prevents this plugin from working when using Internet Explorer. The bug has been fixed and shouldn't be a problem in later releases, but 2012-01-25 and before need the following workaround applied for math to render in IE:
Edit inc/template.php and change lines 375-377 (assuming the 2012-01-25 “Angua” release) in tplmetaheaders_action() to:
$attr['_data'] = "/*<![CDATA[*/\n".$attr['_data'].
"\n/*!]]>*/";
## Examples/Usage
NOTE that the default configuration uses $ (dollar signs) to delimit TeX formulas. This may cause trouble if you have $ characters in any pages. The default configuration also lets you escape the dollar signs, however, by changing them to '\$'. This should correct any problems you might have.
Once the plugin is installed, you can write TeX formulas in your wiki with the following syntax (by default; all delimiters are configurable):
### Inline Math
Use dollar signs: $a^2 + b^2 = c^2$ or escaped parentheses: \(1+2+\dots+n=\frac{n(n+1)}{2}\)
### Display Math
To display math on its own line, use double dollar signs:
$$\frac{d}{dx}\left( \int_{0}^{x} f(u)\,du\right)=f(x)$$
or escaped square brackets:
\[ \sin A \cos B = \frac{1}{2}\left[ \sin(A-B)+\sin(A+B) \right] \]
A wide range of math environments 1) will work as well:
\begin{align*} e^x & = 1 + x + \frac{x^2}{2} + \frac{x^3}{6} + \cdots \\ & = \sum_{n\geq 0} \frac{x^n}{n!} \end{align*}
Note that the math environments should not be inside the dollar sign delimiters; the environments should stand on their own with just the \begin and \end statements in order to be parsed correctly.
## Configuration and Settings
The plugin installs with a default configuration that should work for most users. It is ready to go upon installation, and extra configuration is only required for specific needs.
The URL to the MathJax script can be set in the Configuration Manager. By default, it uses the MathJax CDN, loading the latest version of MathJax from a remote server maintained and updated by the MathJax team. The default URL loads MathJax securely (via HTTPS) if the wiki itself is served securely. You can host your own installation of MathJax instead, in which case you can change the URL to point to your own installation, either as a complete URL or as an absolute path to the MathJax directory on your server (from the web root, e.g., "/scripts/mathjax.js" for "http://your.site/scripts/mathjax.js").
Additionally, you can configure MathJax via commands given in a configuration string and/or loaded from files; both methods can be controlled in the Configuration Manager. Note that the default URL loads a reasonable configuration from the CDN, and the default configuration string modifies it slightly.
Some third-party MathJax extensions may require a different configuration than the plugin's default to operate properly. For example, it has been reported that the XyJax extension does not function with the "CHTML" renderer. In that case, changing the MathJax URL to //cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS_HTML (changing the 'config' parameter from the default) allows XyJax to work.
For more information on configuring MathJax, see Common Configurations and MathJax Configuration Options in the MathJax documentation.
### AsciiMath
MathJax has the ability to parse and render AsciiMath markup, but it is not enabled in the default configuration of this plugin. One easy way to enable the AsciiMath preprocessor is to use a different configuration file: set plugin»mathjax»url to //cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-MML-AM_CHTML. (That configuration enables both TeX and AsciiMath; see the list of configuration files for other options.)
You may want to modify some AsciiMath-specific settings as well.
### Automatic Equation Numbering
MathJax 2.0 introduces automatic equation numbering, but it is not enabled in the default configuration. To enable it, go to your wiki's configuration editor and change the plugin»mathjax»config setting to something like this:
MathJax.Hub.Config({
    tex2jax: {
        inlineMath: [ ["$","$"], ["\\(","\\)"] ],
        displayMath: [ ["$$","$$"], ["\\[","\\]"] ],
        processEscapes: true
    },
    TeX: { equationNumbers: {autoNumber: "AMS"} }
});
The line TeX: { equationNumbers: {autoNumber: "AMS"} } enables the equation numbering. See the MathJax documentation or the MathJax examples page for the syntax for creating automatic references to equations, as well.
### Changing default size or scale
The default size of equations can be changed by adding the "CommonHTML" section and using the "scale" parameter. A value of "125" means "125%".
MathJax.Hub.Config({
    tex2jax: {
        inlineMath: [ ["$","$"], ["\\(","\\)"] ],
        displayMath: [ ["$$","$$"], ["\\[","\\]"] ],
        processEscapes: true
    },
    CommonHTML: { scale: 125 }
});
## Development
Please see the GitHub repository for the issue tracker (to view known issues or report problems) and for a history of changes. Alternatively, feel free to report issues in the Discussion section below.
## FAQ
What happens if the Latex plug-in is installed simultaneously?
Answer: I'm not certain (I don't have a place to easily install both), but if both are set up to use the same syntax for specifying math/equations, in the best case, the Latex plugin will capture/translate them and Mathjax won't see them. I wouldn't recommend trying it, though, as it will most likely just break things. Feel free to update this if you try it and find out what happens.
Update: It seems to work fine. Assume settings for both plug-ins to be default. Result: 'Inline' Latex code like $a^2 + b^2 = c^2$ is processed by the Mathjax plugin. 'Display math mode' Latex code like $$a^2 + b^2 = c^2$$ is processed (rendered as image) by the Latex plugin. Tested with browsers: IE 8 and Firefox 13; PHP: QuickPHP 1.14.0; Dokuwiki: Angua
Can (large quantities of) equations be transferred from MS Word to the Wiki?
Answer: Yes, using converters like:
• Update: Do not expect too much from Mathtype. In Word 2007, simple symbols like the dot product or a hat (^) will abort the conversion to Latex. For automated batch conversion of even slightly complex math, you're screwed. - Johan
Is processing of Latex code disabled using syntax like %%$a^2 + b^2 = c^2$%% or <nowiki>$a^2 + b^2 = c^2$</nowiki>?
Answer: No, Mathjax still renders the Latex code as if it is not wrapped by that syntax.
How can I show the original Latex code without any formatting?
Answer: Wrap the Latex code in code blocks, format it with the monospace style (e.g., ''$a^2$''), or escape dollar signs with backslashes (e.g., \$a^2\$).
What happens if the MathJax CDN server goes down?
Answer: Then Mathjax won't load, and the latex source code is shown instead of nicely rendered formulas.
Are \newcommand and other custom macro/environment definitions supported?
Answer: Yes, either in your page inside math delimiters or through the configuration script.
Will Mathjax work with PDF export plugins like dw2pdf?
Answer: Unfortunately, no. Mathjax renders all math formulas on the client-side (in your browser) using Javascript. The dw2pdf plugin creates PDFs on the server-side, where Javascript, and thus Mathjax, is unavailable. So any server-side export like that will contain the raw Latex code, not the rendered math formulas.
You can however export a PDF with the rendered math formulas from your browser by “printing to PDF.” This functionality is built in on OS X and Linux (look for “file” and “PDF” options in your print dialog), and you can add third-party “PDF printer” software to Windows.
How to use (experimental) Extensions to MathJax (like siunitx.js)?
Answer: Get the script file, copy it to [home]/conf/ and then add the path to the file in the config manager (plugin»mathjax»configfile) like: conf/siunitx.js
How to define global shortcuts / new commands?
Answer: Add data/pages/mathjax.txt in the configuration manager under plugin»mathjax»configfile and then create the page mathjax with something similar to:
MathJax.Hub.Config({
TeX: {
Macros: {
RR: "{\\bf R}",
bold: ["{\\bf #1}",1],
Msun: "{\\textrm{M}_{\\odot}}"
}
}
});
(taken from the mathjax docu)
## Discussion
This simply does the trick:
text text text
$$e = mc^2$$
text text text
— Johan
1)
Accepted math environments (specified here in the code): align, align*, alignat, alignat*, displaymath, eqnarray, eqnarray*, equation, equation*, flalign, flalign*, gather, gather*, math, multline, multline*
wiki/mathjax.txt · 最終更新: 2016/12/27 by N_Miya | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.7798635363578796, "perplexity": 6696.094219506861}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572174.8/warc/CC-MAIN-20220815115129-20220815145129-00653.warc.gz"} |
https://fsilab.sites.umassd.edu/facilities/ | # Facilities
Re-circulating Water Tunnel Facility: A re-circulating water tunnel, manufactured by ELD Inc., is available to use at the Laboratory for Fluid-Structure Interactions Studies (FSI Lab) at the Mechanical Engineering Department at UMass Dartmouth. The water tunnel has a test section of 45 cm by 45 cm by 150 cm. The flow velocity can be varied from 0.03 m/s to 1.0 m/s, and the turbulence intensity of the tunnel is less than 1%. This water tunnel is equipped with a gravity-driven dye and hydrogen bubble imaging system for flow visualization.

Subsonic Wind Tunnel: A subsonic wind tunnel is available at the FSI lab. The closed-section wind tunnel has a test section of 45 cm × 45 cm × 150 cm. The flow velocity can be varied from 1.0 m/s to 20 m/s. The wind tunnel is equipped with two flow measurement devices: a hot wire anemometer running in parallel with a vane anemometer to accurately measure the magnitude of the velocity of the airflow through the test chamber. This subsonic wind tunnel is designed such that it can be added as a module on top of the re-circulating water tunnel for experiments that require two-phase flow testing.

Time-Resolved Volumetric Particle Tracking Velocimetry (TR-PTV): At the FSI lab, we have access to an optical method of quantitative flow visualization using particle tracking velocimetry (PTV). Our state-of-the-art system consists of particles illuminated by a 300 × 100 mm^2 LED (FLASHLIGHT 300 array, LaVision). The recording system is a four-camera Minishaker box (LaVision), equipped with 8, 12 and 16 mm focal length lenses placed at a working distance from the central vertical plane of the volume of interest. The flashlight and set of cameras are triggered simultaneously using a LaVision Programmable Timing Unit (PTU) driven by DaVis 10 acquisition software. PTV data processed by the Shake The Box (STB) algorithm allows for the time-resolved, three-dimensional (3D), three-component (3C) measurement of Lagrangian velocities for a large number of tracked particles in a volume of interest, which can be used for quantitative flow visualization in the water/wind tunnel experiments. The video below shows our measurements of the vorticity and velocity field around a circular cylinder undergoing large-amplitude vortex-induced vibration at Re=2,800.

https://fsilab.sites.umassd.edu/files/2021/08/PTV_low-res.mp4

High Speed CMOS Cameras (Sony IMX252LLR/LQR): Two high speed cameras with a high speed Core DVR are available at the FSI lab. The two high speed cameras have advanced CMOS image sensors with a global shutter function and a 3.45 $\mu m$ pixel that is in the smallest class in the industry. This small 3.45 $\mu m$ pixel realizes higher sensitivity and lower noise than the existing 5.86 $\mu m$ pixel products, and achieves high picture quality, high resolution and high-speed imaging without focal plane distortion. This setup gives us the ability to record two synchronized recordings at 2048 × 1088 resolution at frame rates up to 264 fps, and 755 fps at 640 × 480 pixel resolution. The Core DVR allows for reliable uncompressed video recording, direct to solid state media. The DVR core provides precise time stamping and camera synchronization. The cameras are C-mount, with 10-bit pixel bit depth and 80 MHz Pixel Clock output. The high-speed cameras are used for (1) spanwise structural response measurements of flexible structures undergoing large-amplitude oscillations, and (2) qualitative flow visualization using the hydrogen bubble technique.
The video below shows our flow visualization of the periodic vortex shedding (2S pattern) in the wake of a circular cylinder undergoing large amplitude vortex-induced vibration at Re=1,200. https://fsilab.sites.umassd.edu/files/2021/08/HB_01.mp4 Other tools and equipment: At FSI lab, we also have a set of Non-Contacting Displacement Sensor (Panasonic HL-G125), A 6-axis force/torque sensor (ATI Nano17 IP68), a high accuracy motorized translation stage (V-508 PIMag®) combined with a one-axis PIMag® motion controller for magnetic direct drives, a Creality Ender 5 Pro 3D Printer, multiple desktop workstations and a Dell Precision workstation which is a high-end nearly server class computer with the Xeon CPU, 4TB hard drives, and software, such as, DaVis, MATLAB/SIMULINK, and SolidWorks. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48616698384284973, "perplexity": 3353.2075260186757}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662539049.32/warc/CC-MAIN-20220521080921-20220521110921-00593.warc.gz"} |
https://zbmath.org/?q=an:0815.11053 | ×
## Galois groups with prescribed ramification. (English) Zbl 0815.11053
Childress, Nancy (ed.) et al., Arithmetic geometry. Conference on arithmetic geometry with an emphasis on Iwasawa theory, March 15-18, 1993, Arizona State Univ., Tempe, AZ, USA. Providence, RI: American Mathematical Society. Contemp. Math. 174, 35-60 (1994).
Let $$K$$ be an algebraic number field or a function field in one variable $$x$$. The author considers normal extensions of $$K$$ with prescribed ramification. The background for this talk is the conjecture of Abhyankar, proved by the author, that in the case of an algebraically closed constant field of characteristic $$p > 0$$ and $$n$$ ramified places one can realize all finite groups $$G$$ as Galois groups over $$K$$ with the following property: Let $$p(G)$$ be the normal subgroup of $$G$$ generated by the $$p$$-Sylow groups of $$G$$. Then $$G/p (G)$$ has to have not more than $$2g + n - 1$$ generators, where $$g$$ denotes the genus of $$K$$ [Invent. Math. (to appear)].
In section 1 the author considers the case of a function field with algebraically closed or finite constant field. In section 2 he considers number fields mostly in connection with analogs of Abhyankar’s conjecture. There are many interesting examples.
For the entire collection see [Zbl 0802.00017].
Reviewer: H.Koch (Berlin)
### MSC:
11R32 Galois theory 14H30 Coverings of curves, fundamental group 12F12 Inverse Galois theory 12F10 Separable extensions, Galois theory | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9212955832481384, "perplexity": 239.51086817217762}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103035636.10/warc/CC-MAIN-20220625125944-20220625155944-00519.warc.gz"} |
https://www.techyv.com/questions/insert-picture-microsoft-word/ | ## Insert a picture into Microsoft word
How do I insert a picture into Microsoft Word? I have .jpg files to insert into my document.
Answered By 0 points N/A #162506
## Insert a picture into Microsoft word
Hi
1. Start Microsoft Word, and then open the document that you want.
2. Click to place the insertion point at the place in the document where you want to put the picture. On the Insert menu, point to Picture, and then click From File.
3. Browse to the folder that holds the picture that you want, click the picture file, and then click Insert.
4. Click the inserted picture, and then resize the picture, if necessary. Drag the rotation handle to rotate the picture, if required.
5. Use the tools on the Picture toolbar to adjust the quality of the picture.
Thanks
Answered By 0 points N/A #162505
## Insert a picture into Microsoft word
Hello,
This is a really easy task and there are a few methods available to do this.
First method:
1. Open Microsoft Word.
2. Then click on the Insert tab and click on the Picture button.
3. Then browse to your .jpg picture file in the window that opens and click on the Insert button.
4. The picture will appear on your Word document page.
Second method:
1. Open Microsoft Word and, at the same time, open your picture folder.
2. Restore down the picture folder window and drag your .jpg picture onto the Microsoft Word page.
Thank You,
John Major.
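If many .jpg files have to be placed into a document, the same job can also be scripted. A minimal sketch using the python-docx library (the file names are placeholders):

from docx import Document
from docx.shared import Inches

doc = Document()                               # or Document("existing.docx") to open a file
for name in ["photo1.jpg", "photo2.jpg"]:
    doc.add_picture(name, width=Inches(4))     # append each picture, 4 inches wide
doc.save("with_pictures.docx")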
Answered By 0 points N/A #162507
## Insert a picture into Microsoft word
Dear,
Inserting a picture is an easy task.
First, you have to open Microsoft Word.
Then, on the menu bar at the upper left corner, click Insert.
After that, click the Picture button and then find the .jpg file that you want to insert. Next, click Insert.
You will notice that the file you wanted is already in the document and that's it. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9278436303138733, "perplexity": 3149.532124065714}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988986.98/warc/CC-MAIN-20210509122756-20210509152756-00359.warc.gz"} |
https://www.physicsforums.com/threads/what-is-your-favourite-coffee.169570/ | # What is your favourite coffee?
1. May 9, 2007
### dontdisturbmycircles
It doesn't matter if it comes in the form of whole beans, ground beans, or instant coffee. What is your favourite coffee? Briefly describe what makes it so good .
I have been drinking nescafe taster's choice for quite a while now and it is probably one of the best instant coffees(imo), but I just bought a coffee machine so I am gonna switch to drip coffee.
Last edited: May 9, 2007
2. May 9, 2007
### G01
Folger's Original Roast with Cream, no sugar. I like coffee to be bitter and creamy, not sweet though. (Some people I know think I'm crazy for drinking coffee without sugar.)
3. May 9, 2007
### FredGarvin
We still get our coffee from our favorite place in the Village in Manhattan, Porto Rico Importing. It's their house blend. It's an American roast so it's not too dark, but it has just a ton of flavor and is not bitter at all. Yum. They describe it as:
I actually have 20 Lbm of it on the way as we speak. That is one thing I do miss about living in NY.
4. May 9, 2007
### turbo
Black, unsweetened espresso made from Chock Full 'o Nuts regular grind. I traded in some unused air miles for an espresso maker a few years ago and I have a big mug of espresso every morning. It blasts that superheated water through the grounds so fast, there is no time for it to pick up a bitter or acidic taste. BTW, that is pretty cheap coffee - you don't have to buy pricey blends to make good coffee.
5. May 9, 2007
### Ivan Seeking
Staff Emeritus
We grind Millstone French Roast.
6. May 9, 2007
### Astronuc
Staff Emeritus
Intravenous. jk
I'll drink just about anything - even the sludge at the bottom of the pot.
My consumption varies, but it's probably two, three or four liters/day sometimes.
And my wife complains that I'm spiking it when I add cocoa (plain or raspberry) mix to it.
I use a rather large mug at home, so I usually knock off 1.5-2 liters at breakfast.
I like to put vanilla ice cream, honey and nutmeg powder in the mug and pour the coffee over that - or just coffee w/ cocoa mix.
I used to drink Seaport, which is an especially strong coffee.
Last edited: May 9, 2007
7. May 9, 2007
### mathwonk
we brew espresso/cappuccino at home from dancing goats beans. at work we have a lavazza espresso machine. at our son's house in the bay area we drink Peets. a well equipped coffee room is the most important feature of a math department, as that's where most of the best conversations take place.
oh description: peets has so much flavor, you can even brew the same grounds twice, unpalatable as that sounds. lavazza is the premium italian coffee used in Italy. Dancing goats, house blend from espresso royale in athens is a nice rich dark blend, but it's the espresso machine that makes the difference. We bought one a few years ago from starbucks and have never bought another cup of their dreck.
we paid $300 and it didn't work right, so we took it back and tried some cheaper ones. but by then we were hooked on the good stuff, and it turned out the gaskets were just not quite set in right, so we got another $300 one, and never looked back. we are on our second one now.
instant coffee? [guffaw, chortle.]
Last edited: May 9, 2007
8. May 9, 2007
### mathwonk
now how do all those exotic coffee ads find this page? or is this a product placement thread?
9. May 9, 2007
### mattmns
Google uses keywords from the thread to generate the ads.
As for coffee, I don't drink the stuff :tongue2:
10. May 9, 2007
### Astronuc
Staff Emeritus
forgot the :yuck:
11. May 9, 2007
### mathwonk
then why didn't tasters choice and other schlock make it? or did they just assume we were gourmet coffee drinkers?
12. May 9, 2007
### turbo
Do we have a smilie with a more disgusted expression? I don't even want Starbucks, Dunkin' Donuts, or Tim Horton coffees anymore. My home-made espresso is too good. Instant is out of the question. My wife used to keep a little around the house to "perk up" her brownies with a mocha flavor, but that's all it ever got used for.
13. May 9, 2007
### edward
My favorite is: Don Francisco's cinnamon Hazelnut.
http://www.newsmax.com/archives/ic/2007/1/29/102949.shtml
I am starting to suspect that Mac Dees adds a secret ingredient to their coffee. Sort of like when it was discovered that they added sugar to their french fries.:yuck:
14. May 9, 2007
### Anttech
Illy Espresso, made with a proper Bialetti coffee maker. Or if I am in the mood Cafe Hellinica (glykys)
15. May 9, 2007
### Ivan Seeking
Staff Emeritus
McDees had announced that they were going to start selling high quality coffee; and it makes sense because $5.00 a cup certainly doesn't. It may be nothing more than a free market doing its job. Does anyone really think their coffee is worth $3.50 to $5.00 a cup in a competitive market?
I never would have predicted that people would pay what they do for a cup of Joe.
However, in my experience Consumer's Reports is full of it. I never trust what they say anymore.
16. May 9, 2007
### Staff: Mentor
McDonald's is my favorite coffee.
17. May 9, 2007
### Moonbear
Staff Emeritus
I like Starbuck's Italian Roast, brewed at home, not by them. It's a strong, dark roast, but not bitter or burnt tasting like a lot of dark roasts commonly available.
My former favorite was Millstone's Bed and Breakfast Blend, but I can't get that in the stores around here so that prompted me to switch to Starbucks (the other brands here are offered either in too light of roasts or are burnt tasting). The Bed and Breakfast Blend was also a dark roast and not bitter (it's quite different from their Breakfast Blend).
But, if you're switching from instant, all of these will taste too strong to you.
18. May 9, 2007
### sara_87
Dark, smooth, strong and rich ...like my man
19. May 9, 2007
### Staff: Mentor
My man (who insists he's not my man - pfffft, right ) is thin, light, rather sharp and sour at times, obviously he would not be good to drink. :yuck:
20. May 9, 2007
### turbo
Maybe you could use him to pickle cucumbers. :rofl:
https://www.reporterherald.com/2023/01/24/loveland-weather-for-tuesday-mostly-cloudy-and-32/ | # Weather | Loveland weather for Tuesday: mostly cloudy and 32
Tuesday is expected to be mostly cloudy, with a high near 32, according to the National Weather Service. Areas of freezing fog are possible before 9 a.m. The overnight low will be near 16, with a 50% chance of snow, mainly before 1 a.m. New snow accumulation of less than a half inch is possible.
Wednesday is expected to be mostly cloudy, with a high near 32, a 30% chance of snow before 11 a.m. and wind gusts as high as 23 mph. New snow accumulation of less than a half inch is possible. The overnight low will be near 11 with wind gusts as high as 16 mph.
Thursday is expected to be mostly sunny, with a high near 35 and wind gusts as high as 20 mph. The overnight low will be near 20.
Friday is expected to be mostly cloudy, with a high near 37. The overnight low will be near 16, with a slight chance of snow after 11 p.m.
Saturday is expected to be mostly cloudy, with a high near 28 and a chance of snow. The overnight low will be near 6, with a chance of snow.
Sunday is expected to be cloudy, with a high near 17 and a slight chance of snow. The overnight low will be near zero with a chance of snow.
## National Weather Service
See what the National Weather Service is predicting here
## 24-Hour satellite
Watch NOAA’s 24-hour satellite image here
https://www.groundai.com/project/training-of-photonic-neural-networks-through-in-situ-backpropagation/ | Training of photonic neural networks through in situ backpropagation
Tyler W. Hughes, Momchil Minkov, Yu Shi, Shanhui Fan Ginzton Laboratory, Stanford University, Stanford, CA, 94305.
July 25, 2019
Abstract
Recently, integrated optics has gained interest as a hardware platform for implementing machine learning algorithms. Of particular interest are artificial neural networks, since matrix-vector multiplications, which are used heavily in artificial neural networks, can be done efficiently in photonic circuits. The training of an artificial neural network is a crucial step in its application. However, currently on the integrated photonics platform there is no efficient protocol for the training of these networks. In this work, we introduce a method that enables highly efficient, in situ training of a photonic neural network. We use adjoint variable methods to derive the photonic analogue of the backpropagation algorithm, which is the standard method for computing gradients of conventional neural networks. We further show how these gradients may be obtained exactly by performing intensity measurements within the device. As an application, we demonstrate the training of a numerically simulated photonic artificial neural network. Beyond the training of photonic machine learning implementations, our method may also be of broad interest to experimental sensitivity analysis of photonic systems and the optimization of reconfigurable optics platforms.
I Introduction
Artificial neural networks (ANNs), and machine learning in general, are becoming ubiquitous for an impressively large number of applications LeCun et al. (2015). This has brought ANNs into the focus of research in not only computer science, but also electrical engineering, with hardware specifically suited to perform neural network operations actively being developed. There are significant efforts in constructing artificial neural network architectures using various electronic solid-state platforms Merolla et al. (2014); Prezioso et al. (2015), but ever since the conception of ANNs, a hardware implementation using optical signals has also been considered Abu-Mostafa and Pslatis (1987); Jutamulia (1996). In this domain, some of the recent work has been devoted to photonic spike processing Rosenbluth et al. (2009); Tait et al. (2014) and photonic reservoir computing Brunner et al. (2013); Vandoorne et al. (2014a), as well as to devising universal, chip-integrated photonic platforms that can implement any arbitrary ANN Shainline et al. (2017); Shen et al. (2017). Photonic implementations benefit from the fact that, due to the non-interacting nature of photons, linear operations – like the repeated matrix multiplications found in every neural network algorithm – can be performed in parallel, and at a lower energy cost, when using light as opposed to electrons.
A key requirement for the utility of any ANN platform is the ability to train the network using algorithms such as error backpropagation Rumelhart et al. (1986). Such training typically demands significant computational time and resources and it is generally desirable for error backpropagation to implemented on the same platform. This is indeed possible for the technologies of Refs. Merolla et al. (2014); Graves et al. (2016); Hermans et al. (2015) and has also been demonstrated e.g. in memristive devices Alibart et al. (2013); Prezioso et al. (2015). In optics, as early as three decades ago, an adaptive platform that could approximately implement the backpropagation algorithm experimentally was proposed Wagner and Psaltis (1987); Psaltis et al. (1988). However, this algorithm requires a number of complex optical operations that are difficult to implement, particularly in integrated optics platforms. Thus, the current implementation of a photonic neural network using integrated optics has been trained using a model of the system simulated on a regular computer Shen et al. (2017). This is inefficient for two reasons. First, this strategy depends entirely on the accuracy of the model representation of the physical system. Second, unless one is interested in deploying a large number of identical, fixed copies of the ANN, any advantage in speed or energy associated with using the photonic circuit is lost if the training must be done on a regular computer. Alternatively, training using a brute force, in situ computation of the gradient of the objective function has been proposed Shen et al. (2017). However, this strategy involves sequentially perturbing each individual parameter of the circuit, which is highly inefficient for large systems.
In this work, we propose a procedure, which we label the time-reversal interference method (TRIM), to compute the gradient of the cost function of a photonic ANN by use of only in situ intensity measurements. Our procedure works by physically implementing the adjoint variable method (AVM), a technique that has typically been implemented computationally in the optimization and inverse design of photonic structures Georgieva et al. (2002); Veronis et al. (2004); Hughes et al. (2017). Furthermore, the method scales in constant time with respect to the number of parameters, which allows for backpropagation to be efficiently implemented in a hybrid opto-electronic network. Although we focus our discussion on a particular hardware implementation of a photonic ANN, our conclusions are derived starting from Maxwell's equations, and may therefore be extended to other photonic platforms.
The paper is organized as follows: In Section II, we introduce the working principles of a photonic ANN based on the hardware platform introduced in Ref. Shen et al. (2017). We also derive the mathematics of the forward and backward propagation steps and show that the gradient computation needed for training can be expressed as a modal overlap. Then, in Section III we discuss how the adjoint method may be used to describe the gradient of the ANN cost function in terms of physical parameters. In Section IV, we describe our procedure for determining this gradient information experimentally using in situ intensity measurements. We give a numerical validation of these findings in Section V and demonstrate our method by training a model of a photonic ANN in Section VI. We provide final comments and conclude in Section VII.
II The Photonic Neural Network
In this Section, we introduce the operation and gradient computation of a feed-forward photonic ANN. In its most general case, a feed-forward ANN maps an input vector to an output vector via an alternating sequence of linear operations and element-wise nonlinear functions of the vectors, also called 'activations'. A cost function, $\mathcal{L}$, is defined over the outputs of the ANN, and the matrix elements involved in the linear operations are tuned to minimize $\mathcal{L}$ over a number of training examples via gradient-based optimization. The 'backpropagation algorithm' is typically used to compute these gradients analytically by sequentially utilizing the chain rule from the output layer backwards to the input layer.
Here, we will outline these steps mathematically for a single training example, with the procedure diagrammed in Fig. 1a. We focus our discussion on the photonic hardware platform presented in Shen et al. (2017), which performs the linear operations using optical interference units (OIUs). The OIU is a mesh of controllable Mach-Zehnder interferometers (MZIs) integrated in a silicon photonic circuit. By tuning the phase shifters integrated in the MZIs, any unitary operation on the input can be implemented Reck et al. (1994); Clements et al. (2016), which finds applications both in classical and quantum photonics Carolan et al. (2015); Harris et al. (2017). In the photonic ANN implementation from Ref. Shen et al. (2017), an OIU is used for each linear matrix-vector multiplication, whereas the nonlinear activations are performed using an electronic circuit, which involves measuring the optical state before activation, performing the nonlinear activation function on an electronic circuit such as a digital computer, and preparing the resulting optical state to be injected to the next stage of the ANN.
We first introduce the notation used to describe the OIU, which consists of a number, $N$, of single-mode waveguide input ports coupled to the same number of single-mode output ports through a linear and lossless device. In principle, the device may also be extended to operate on a different number of inputs and outputs. We further assume directional propagation such that all power flows exclusively from the input ports to the output ports, which is a typical assumption for the devices of Refs. Miller (2013a); Shen et al. (2017); Harris et al. (2017); Carolan et al. (2015); Reck et al. (1994); Miller (2013b); Clements et al. (2016). In its most general form, the device implements the linear operation
$\hat{W} X_{\mathrm{in}} = Z_{\mathrm{out}},$ (1)
where $X_{\mathrm{in}}$ and $Z_{\mathrm{out}}$ are the modal amplitudes at the input and output ports, respectively, and $\hat{W}$, which we will refer to as the transfer matrix, is the off-diagonal block of the system's full scattering matrix,
$\begin{pmatrix} X_{\mathrm{out}} \\ Z_{\mathrm{out}} \end{pmatrix} = \begin{pmatrix} 0 & \hat{W}^T \\ \hat{W} & 0 \end{pmatrix} \begin{pmatrix} X_{\mathrm{in}} \\ Z_{\mathrm{in}} \end{pmatrix}.$ (2)
Here, the diagonal blocks are zero because we assume forward-only propagation, while the off-diagonal blocks are the transpose of each other because we assume a reciprocal system. $Z_{\mathrm{in}}$ and $X_{\mathrm{out}}$ correspond to the input and output modal amplitudes, respectively, if we were to run this device in reverse, i.e. sending a signal in from the output ports.
Now we may use this notation to describe the forward and backward propagation steps in a photonic ANN. In the forward propagation step, we start with an initial input to the system, $X_0$, and perform a linear operation on this input using an OIU represented by the matrix $\hat{W}_1$. This is followed by the application of an element-wise nonlinear activation, $f_1(\cdot)$, on the outputs, giving the input to the next layer. This process repeats for each layer until the output layer, $L$. Written compactly, for $l = 1 \dots L$,
$X_l = f_l\!\left(\hat{W}_l X_{l-1}\right) \equiv f_l(Z_l).$ (3)
Finally, our cost function $\mathcal{L}$ is an explicit function of the outputs from the last layer, $X_L$. This process is shown in Fig. 1(a).
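To make the forward-propagation step of Eq. (3) concrete, here is a minimal numerical sketch. It is an illustration rather than the authors' implementation: the OIUs are stood in for by random unitary matrices and the saturable activation is an arbitrary assumption.

```python
import numpy as np

def random_unitary(n, seed=0):
    # A random unitary stands in for one OIU transfer matrix W.
    rng = np.random.default_rng(seed)
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, _ = np.linalg.qr(a)
    return q

def forward(x0, weights, activation):
    """X_l = f_l(W_l X_{l-1}) = f_l(Z_l) for each layer; returns all Z_l and X_l."""
    zs, xs = [], [x0]
    for w in weights:
        z = w @ xs[-1]            # linear OIU operation, Eq. (1)
        xs.append(activation(z))  # element-wise nonlinear activation
        zs.append(z)
    return zs, xs

act = lambda z: z / (1.0 + np.abs(z))          # assumed saturable activation
Ws = [random_unitary(3, seed=s) for s in (0, 1)]
X0 = np.array([1.0, 0.0, 0.5], dtype=complex)
Z_list, X_list = forward(X0, Ws, act)
```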
To train the network, we must minimize this cost function with respect to the linear operators, $\hat{W}_l$, which may be adjusted by tuning the integrated phase shifters within the OIUs. While a number of recent papers have clarified how an individual OIU can be tuned by sequential, in situ methods to perform an arbitrary, pre-defined operation Miller (2013b, a, 2015); Annoni et al. (2017), these strategies do not straightforwardly apply to the training of ANNs, where nonlinearities and several layers of computation are present. In particular, the training of an ANN requires gradient information which is not provided directly in the methods of Ref. Miller (2013b, a, 2015); Annoni et al. (2017).
In Ref. Shen et al. (2017), the training of the ANN was done ex situ on a computer model of the system, which was used to find the optimal weight matrices for a given cost function. Then, the final weights were recreated in the physical device, using an idealized model that relates the matrix elements to the phase shifters. Ref. Shen et al. (2017) also discusses a possible in situ method for computing the gradient of the ANN cost function through a serial perturbation of every individual phase shifter (‘brute force’ gradient computation). However, this gradient computation has an unavoidable linear scaling with the number of parameters of the system. The training method that we propose here operates without resorting to an external model of the system, while allowing for the tuning of each parameter to be done in parallel, therefore scaling significantly better with respect to the number of parameters when compared to the brute force gradient computation.
To introduce our training method we first use the backpropagation algorithm to derive an expression for the gradient of the cost function with respect to the permittivities of the phase shifters in the OIUs. In the following, we denote $\epsilon_l$ as the permittivity of a single, arbitrarily chosen phase shifter in layer $l$, as the same derivation holds for each of the phase shifters present in that layer. Note that $Z_l$ has an explicit dependence on $\epsilon_l$, but all field components in the subsequent layers also depend implicitly on $\epsilon_l$.
As a demonstration, we take a mean squared cost function
$\mathcal{L} = \frac{1}{2}\,(X_L - T)^\dagger (X_L - T),$ (4)
where $T$ is a complex-valued target vector corresponding to the desired output of our system given input $X_0$.
Starting from the last layer in the circuit, the derivative of the cost function with respect to the permittivity of one of the phase shifters in the last layer is given by
$\frac{d\mathcal{L}}{d\epsilon_L} = \mathcal{R}\left\{ (X_L - T)^\dagger \frac{dX_L}{d\epsilon_L} \right\}$ (5)
$= \mathcal{R}\left\{ (X_L - T)^\dagger \left[ f_L'(Z_L) \odot \frac{d\hat{W}_L}{d\epsilon_L} X_{L-1} \right] \right\}$ (6)
$\equiv \mathcal{R}\left\{ \delta_L^T \frac{d\hat{W}_L}{d\epsilon_L} X_{L-1} \right\},$ (7)
where $\odot$ is element-wise vector multiplication, defined such that, for vectors $a$ and $b$, the $i$-th element of the vector $a \odot b$ is given by $a_i b_i$. $\mathcal{R}\{\cdot\}$ gives the real part, and $f_L'$ is the derivative of the $L$th layer activation function with respect to its (complex) argument. We define the vector $\delta_L \equiv \Gamma_L \odot f_L'(Z_L)$ in terms of the error vector $\Gamma_L \equiv (X_L - T)^*$.
For any layer $l$, we may use the chain rule to perform a recursive calculation of the gradients
$\Gamma_l = \hat{W}_{l+1}^T\, \delta_{l+1}$ (8)
$\delta_l = \Gamma_l \odot f_l'(Z_l)$ (9)
$\frac{d\mathcal{L}}{d\epsilon_l} = \mathcal{R}\left\{ \delta_l^T \frac{d\hat{W}_l}{d\epsilon_l} X_{l-1} \right\}.$ (10)
Figure 1(b) diagrams this process, which computes the $\delta_l$ vectors sequentially from the output layer to the input layer. A treatment for non-holomorphic activations is derived in Appendix A.
We note that the computation of $\Gamma_l$ requires performing the operation $\hat{W}_{l+1}^T \delta_{l+1}$, which corresponds physically to sending $\delta_{l+1}$ into the output end of the OIU in layer $l+1$. In this way, our procedure 'backpropagates' the vectors $\delta_l$ and $\Gamma_l$ physically through the entire circuit.
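As an illustration, the recursion of Eqs. (8)-(10) can be written in a few lines for the matrix model sketched above. This is only a numerical stand-in; in the physical device the $\hat{W}_{l+1}^T \delta_{l+1}$ products are obtained by injecting $\delta_{l+1}$ into the output ports.

```python
import numpy as np

def backprop_deltas(weights, zs, x_out, target, fprime):
    """delta_l from Eqs. (8)-(9), starting from Gamma_L = conj(X_L - T)."""
    deltas = [None] * len(weights)
    gamma = np.conj(x_out - target)            # error vector of the output layer
    for l in reversed(range(len(weights))):
        deltas[l] = gamma * fprime(zs[l])      # Eq. (9)
        if l > 0:
            gamma = weights[l].T @ deltas[l]   # Eq. (8): send delta into the output end
    return deltas
```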
In the previous Section, we showed that the crucial step in training the ANN is computing gradient terms of the form $\mathcal{R}\{\delta_l^T (d\hat{W}_l/d\epsilon_l) X_{l-1}\}$, which contain derivatives with respect to the permittivity of the phase shifters in the OIUs. In this Section, we show how this gradient may be expressed as the solution to an electromagnetic adjoint problem.
The OIU used to implement the matrix $\hat{W}_l$, relating the complex mode amplitudes of input and output ports, can be described using first-principles electrodynamics. This will allow us to compute its gradient with respect to each $\epsilon_l$, as these are the physically adjustable parameters in the system. Assuming a source at frequency $\omega$, at steady state Maxwell's equations take the form
$\left( \nabla \times \nabla \times \; - \; k_0^2\, \epsilon_r \right) e = -i \omega \mu_0\, j,$ (11)
which can be written more succinctly as
$\hat{A}(\epsilon_r)\, e = b.$ (12)
Here, $\epsilon_r$ describes the spatial distribution of the relative permittivity, $k_0 = \omega/c$ is the free-space wavenumber, $e$ is the electric field distribution, $j$ is the electric current density, and $\hat{A} = \hat{A}^T$ due to Lorentz reciprocity. Eq. (12) is the starting point of the finite-difference frequency-domain (FDFD) simulation technique Shin and Fan (2012), where it is discretized on a spatial grid, and the electric field is solved given a particular permittivity distribution, $\epsilon_r$, and source, $b$.
To relate this formulation to the transfer matrix $\hat{W}$, we now define source terms $b_i$, $i \in 1 \dots 2N$, that correspond to a source placed in one of the input or output ports. Here we assume a total of $N$ input and $N$ output waveguides. The spatial distribution of the source term, $b_i$, matches the mode of the $i$-th single-mode waveguide. Thus, the electric field amplitude in port $i$ is given by $b_i^T e$, and we may establish a relationship between $e$ and $X_{\mathrm{in}}$, as
$X_{\mathrm{in},i} = b_i^T\, e$ (13)
for $i = 1 \dots N$ over the input port indices, where $X_{\mathrm{in},i}$ is the $i$-th component of $X_{\mathrm{in}}$. Or, more compactly,
$X_{\mathrm{in}} \equiv \hat{P}_{\mathrm{in}}\, e.$ (14)
Similarly, we can define
$Z_{\mathrm{out},i} = b_{i+N}^T\, e$ (15)
for $i = 1 \dots N$ over the output port indices, or,
$Z_{\mathrm{out}} \equiv \hat{P}_{\mathrm{out}}\, e,$ (16)
and, with this notation, Eq. (1) becomes
$\hat{W}\, \hat{P}_{\mathrm{in}}\, e = \hat{P}_{\mathrm{out}}\, e.$ (17)
We now use the above definitions to evaluate the cost function gradient in Eq. (10). In particular, with Eqs. (10) and (17), we arrive at
$\frac{d\mathcal{L}}{d\epsilon_l} = -\mathcal{R}\left\{ \delta_l^T\, \hat{P}_{\mathrm{out}}\, \hat{A}^{-1} \frac{d\hat{A}}{d\epsilon_l} \hat{A}^{-1}\, b_{x,l-1} \right\}.$ (18)
Here $b_{x,l-1}$ is the modal source profile that creates the input field amplitudes $X_{l-1}$ at the input ports.
The key insight of the adjoint variable method is that we may interpret this expression as an operation involving the field solutions of two electromagnetic simulations, which we refer to as the ‘original’ (og) and the ‘adjoint’ (aj)
$\hat{A}\, e_{\mathrm{og}} = b_{x,l-1}$ (19)
$\hat{A}\, e_{\mathrm{aj}} = \hat{P}_{\mathrm{out}}^T\, \delta_l,$ (20)
where we have made use of the symmetric property of $\hat{A}$.
Eq. (18) can now be expressed in a compact form as
$\frac{d\mathcal{L}}{d\epsilon_l} = -\mathcal{R}\left\{ e_{\mathrm{aj}}^T \frac{d\hat{A}}{d\epsilon_l}\, e_{\mathrm{og}} \right\}.$ (21)
If we assume that this phase shifter spans a set of points, $r_\phi$, in our system, then, from Eq. (11), we obtain
$\frac{d\hat{A}}{d\epsilon_l}(r, r') = -k_0^2\, \delta_{r,r'} \quad \text{for } r, r' \in r_\phi,$ (22)
where $\delta_{r,r'}$ is the Kronecker delta.
Inserting this into Eq. (21), we thus find that the gradient is given by the overlap of the two fields over the phase-shifter positions
$\frac{d\mathcal{L}}{d\epsilon_l} = k_0^2\, \mathcal{R}\left\{ \sum_{r \in r_\phi} e_{\mathrm{aj}}(r)\, e_{\mathrm{og}}(r) \right\}.$ (23)
This result now allows for the computation in parallel of the gradient of the loss function with respect to all phase shifters in the system, given knowledge of the original and adjoint fields.
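The structure of Eqs. (19)-(23) can be checked on a toy problem. In the sketch below, the discretized Maxwell operator is replaced by a small random complex-symmetric matrix, and the output-port projection, source, and phase-shifter points are chosen arbitrarily; none of these numbers come from the paper. Each "phase shifter" here is a single grid point, and the adjoint gradient is compared against finite differences.

```python
import numpy as np

k0 = 2 * np.pi / 1.55                           # assumed free-space wavenumber
n, out_ports, shifter_pts = 12, [0, 1], [5, 6]  # toy grid size, output ports, shifter points
rng = np.random.default_rng(0)

A0 = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
A0 = A0 + A0.T                                  # complex-symmetric (reciprocal) background operator
eps = np.ones(n)

def A(eps):
    # Toy stand-in for the discretized operator of Eq. (12).
    return A0 - k0**2 * np.diag(eps)

b_og = np.zeros(n, dtype=complex); b_og[3] = 1.0           # source at an 'input port'
delta = rng.normal(size=2) + 1j * rng.normal(size=2)       # backpropagated delta vector
P_out = np.zeros((2, n)); P_out[[0, 1], out_ports] = 1.0   # output-port projection

def objective(eps):
    e = np.linalg.solve(A(eps), b_og)
    return np.real(delta @ (P_out @ e))          # toy cost with the structure of Eq. (10)

# Adjoint fields of Eqs. (19)-(20) and the overlap gradient of Eq. (23):
e_og = np.linalg.solve(A(eps), b_og)
e_aj = np.linalg.solve(A(eps), P_out.T @ delta)
grad_adjoint = k0**2 * np.real(e_aj[shifter_pts] * e_og[shifter_pts])

# Finite-difference check at the same points.
h = 1e-6
grad_fd = []
for r in shifter_pts:
    ep, em = eps.copy(), eps.copy()
    ep[r] += h; em[r] -= h
    grad_fd.append((objective(ep) - objective(em)) / (2 * h))
print(grad_adjoint, np.array(grad_fd))
```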
We now introduce our time-reversal interference method (TRIM) for computing the gradient from the previous section through in situ intensity measurements. This represents the most significant result of this paper. Specifically, we wish to generate an intensity pattern with the form $\mathcal{R}\{e_{\mathrm{aj}}\, e_{\mathrm{og}}\}$, matching that of Eq. (23). We note that interfering $e_{\mathrm{og}}$ and $e_{\mathrm{aj}}^*$ directly in the system results in the intensity pattern:
$I = |e_{\mathrm{og}}|^2 + |e_{\mathrm{aj}}|^2 + 2\,\mathcal{R}\{e_{\mathrm{og}}\, e_{\mathrm{aj}}\},$ (24)
the last term of which matches Eq. (23). Thus, the gradient can be computed purely through intensity measurements if the field $e_{\mathrm{aj}}^*$ can be generated in the OIU.
The adjoint field for our problem, $e_{\mathrm{aj}}$, as defined in Eq. (20), is sourced by $\hat{P}_{\mathrm{out}}^T \delta_l$, meaning that it physically corresponds to a mode sent into the system from the output ports. As complex conjugation in the frequency domain corresponds to time-reversal of the fields, we expect $e_{\mathrm{aj}}^*$ to correspond to a mode sent in from the input ports. Formally, to generate $e_{\mathrm{aj}}^*$, we wish to find a set of input source amplitudes, $X_{\mathrm{TR}}$, such that the output port source amplitudes, $Z_{\mathrm{TR}}$, are equal to the complex conjugate of the adjoint amplitudes, or $Z_{\mathrm{TR}} = \delta_l^*$. Using the unitarity property of the transfer matrix $\hat{W}_l$ for a lossless system, along with the fact that the time-reversed field must exit the output ports with amplitudes $\delta_l^*$, the input mode amplitudes for the time-reversed adjoint can be computed as
$X_{\mathrm{TR}}^* = \hat{W}_l^T\, \delta_l.$ (25)
As discussed earlier, $\hat{W}_l^T$ is the transfer matrix from output ports to input ports. Thus, we can experimentally determine $X_{\mathrm{TR}}$ by sending $\delta_l$ into the device output ports, measuring the output at the input ports, and taking the complex conjugate of the result.
We now summarize the procedure for experimentally measuring the gradient of an OIU layer in the ANN with respect to the permittivities of this layer’s integrated phase shifters:
1. Send in the original field amplitudes $X_{l-1}$ and measure and store the intensities at each phase shifter.
2. Send $\delta_l$ into the output ports and measure and store the intensities at each phase shifter.
3. Compute the time-reversed adjoint input field amplitudes as in Eq. (25).
4. Interfere the original and the time-reversed adjoint fields in the device, measuring again the resulting intensities at each phase shifter.
5. Subtract the constant intensity terms from steps 1 and 2 and multiply by the appropriate constants to recover the gradient, as in Eq. (23).
This procedure is also illustrated in Fig. 2.
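The arithmetic behind steps 1-5 can be verified independently of any device model: for made-up complex amplitudes at a set of phase shifters, subtracting the two constant-intensity measurements from the interference measurement recovers exactly the cross term of Eq. (24). The sketch below is only that sanity check, not a simulation of the device.

```python
import numpy as np

rng = np.random.default_rng(1)
n_shifters = 4

# Complex field amplitudes at each phase shifter (set here at random for illustration).
e_og = rng.normal(size=n_shifters) + 1j * rng.normal(size=n_shifters)   # step 1: original field
e_aj = rng.normal(size=n_shifters) + 1j * rng.normal(size=n_shifters)   # step 2: adjoint field
# Step 3 would compute X_TR* = W^T delta; here we simply take the conjugate field directly.
e_tr = np.conj(e_aj)                                                     # time-reversed adjoint field

# Only intensities are measurable in situ:
I_og  = np.abs(e_og)**2
I_tr  = np.abs(e_tr)**2
I_int = np.abs(e_og + e_tr)**2     # step 4: interfere the two fields

# Step 5: subtract the constant terms to recover the interference term of Eq. (24).
recovered = I_int - I_og - I_tr
exact     = 2 * np.real(e_og * e_aj)
print(np.allclose(recovered, exact))   # True
```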
We numerically demonstrate this procedure in Fig. 3 with a series of FDFD simulations of an OIU implementing a unitary matrix Reck et al. (1994). These simulations are intended to represent the gradient computation corresponding to one OIU in a single layer, $l$, of a neural network with input $X_{l-1}$ and delta vector $\delta_l$. In these simulations, we use absorbing boundary conditions on the outer edges of the system to eliminate back-reflections. The relative permittivity distribution is shown in Fig. 3(a) with the positions of the variable phase shifters in blue. For demonstration, we simulate a specific case with unit input amplitude in the bottom port and an arbitrarily chosen $\delta_l$. In Fig. 3(b), we display the real part of $e_{\mathrm{og}}$, corresponding to the original, forward field.
The real part of the adjoint field, $e_{\mathrm{aj}}$, corresponding to the cost function is shown in Fig. 3(c). In Fig. 3(d) we show the real part of the time-reversed copy of $e_{\mathrm{aj}}$ as computed by the method described in the previous section, in which $X_{\mathrm{TR}}$ is sent in through the input ports. There is excellent agreement, up to a constant, between the complex conjugate of the field pattern of (c) and the field pattern of (d), as expected.
In Fig. 3(e), we display the gradient of the objective function with respect to the permittivity of each point of space in the system, as computed with the adjoint method, described in Eq. (23). In Fig. 3(f), we show the same gradient information, but instead computed with the method described in the previous section. Namely, we interfere the field pattern from panel (b) with the field pattern from panel (d), subtract constant intensity terms, and multiply by the appropriate constants. Again, (e) and (f) agree with good precision.
We note that in a realistic system, the gradient must be constant for any stretch of waveguide between waveguide couplers because the interfering fields are at the same frequency and are traveling in the same direction. Thus, there should be no distance dependence in the corresponding intensity distribution. This is largely observed in our simulation, although small fluctuations are visible because of the proximity of the waveguides and the sharp bends, which were needed to make the structure compact enough for simulation within a reasonable time. In practice, the importance of this constant intensity is that it can be detected after each phase shifter, instead of inside of it.
Finally, we note that this numerically generated system experiences a total power loss of 41% due to scattering caused by very sharp bends and stair-casing of the structure in the simulation. We also observe approximately 5-10% mode-dependent loss, as determined by measuring the difference in total transmitted power corresponding to injection at different input ports. Minimal amounts of reflection are also visible in the field plots. Nevertheless, TRIM still reconstructs the adjoint sensitivity with very good fidelity.
VI Example of ANN training
Finally, we use the techniques from the previous Sections to numerically demonstrate the training of a photonic ANN to implement a logical XOR gate, defined by the following input-to-target pairs
$[0\ 0]^T \to 0,\quad [0\ 1]^T \to 1,\quad [1\ 0]^T \to 1,\quad [1\ 1]^T \to 0.$ (26)
This problem was chosen as demonstration of learning a nonlinear mapping from input to output Vandoorne et al. (2014b) and is simple enough to be solved with a small network with only four training examples.
As diagrammed in Fig. 6a, we choose a network architecture consisting of two unitary OIUs. On the forward propagation step, the binary representation of the inputs is sent into the first two input elements of the ANN, and a constant value is sent into the third input element, which serves to introduce artificial bias terms into the network. These inputs are sent through a unitary OIU and then the element-wise activation is applied. The output of this step is sent to another OIU and sent through another activation of the same form. Finally, the first output element is taken to be our prediction, ignoring the last two output elements. Our network is repeatedly trained on the four training examples defined in Eq. (26), using the mean-squared cost function presented in Eq. (4).
For this demonstration, we utilized a matrix model of the system, as described in Reck et al. (1994); Clements et al. (2016), with mathematical details described in Appendix B. This model allows us to compute an output of the system given an input mode and the settings of each phase shifter. Although this is not a first-principle electromagnetic simulation of the system, it provides information about the complex fields at specific reference points within the circuit, which enables us to implement training using the backpropagation method as described in Section II, combined with the adjoint gradient calculation from Section III. Using these methods, at each iteration of training we compute the gradient of our cost function with respect to the phases of each of the integrated phase shifters, and sum them over the four training examples. Then, we perform a simple steepest-descent update to the phase shifters, in accordance with the gradient information. This is consistent with the standard training protocol for an ANN implemented on a conventional computer. Our network successfully learned the XOR gate in around 400 iterations. The results of the training are shown in Fig. 6b-d.
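For readers who want to reproduce the flavor of this demonstration without the full MZI mesh, the following sketch trains the same two-layer, three-port XOR architecture. It is not the authors' code: the mesh is replaced by a generic Hermitian-generator parametrization of a unitary, the activation and the constant bias input of 1 are assumptions, scipy is used for the matrix exponential, and the gradient is obtained by finite differences as a stand-in for the in situ measurement.

```python
import numpy as np
from scipy.linalg import expm

def unitary(params):
    """3x3 unitary from 9 real parameters via a Hermitian generator; a generic
    parametrization standing in for the MZI mesh of one OIU."""
    H = np.diag(params[0:3]).astype(complex)
    for k, (i, j) in enumerate([(0, 1), (0, 2), (1, 2)]):
        H[i, j] = params[3 + k] + 1j * params[6 + k]
        H[j, i] = params[3 + k] - 1j * params[6 + k]
    return expm(1j * H)

act = lambda z: z / (1.0 + np.abs(z))         # assumed element-wise activation

def predict(p, x):
    out = act(unitary(p[9:]) @ act(unitary(p[:9]) @ x))
    return out[0]                             # first output element is the prediction

X = [np.array([a, b, 1.0], dtype=complex) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
T = [0.0, 1.0, 1.0, 0.0]                      # XOR targets of Eq. (26)

def cost(p):                                  # mean-squared cost in the spirit of Eq. (4)
    return 0.5 * sum(abs(predict(p, x) - t)**2 for x, t in zip(X, T))

rng = np.random.default_rng(2)
p = rng.normal(scale=0.1, size=18)
lr, h = 0.5, 1e-5
for _ in range(400):                          # steepest descent on finite-difference gradients
    g = np.array([(cost(p + h*e) - cost(p - h*e)) / (2*h) for e in np.eye(p.size)])
    p -= lr * g
print(cost(p), [abs(predict(p, x)) for x in X])
```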
We note that this is meant to serve as a simple demonstration of using the in-situ backpropagation technique for computing the gradients needed to train photonic ANNs. However, our method may equally be performed on more complicated tasks, which we show in the Appendix C.
VII Discussion and Conclusion
Here, we justify some of the assumptions made in this work. Our strategy for training a photonic ANN relies on the ability to create arbitrary complex inputs. We note that a device for accomplishing this has been proposed and discussed in Miller (2017). Our recovery technique further requires an integrated intensity detection scheme to occur in parallel and with virtually no loss. This may be implemented by integrated, transparent photo-detectors, which have already been demonstrated in similar systems Annoni et al. (2017). Furthermore, as discussed, this measurement may occur in the waveguide regions directly after the phase shifters, which eliminates the need for phase shifter and photodetector components at the same location. Finally, in our procedure for experimentally measuring the gradient information, we suggested running isolated forward and adjoint steps, storing the intensities at each phase shifter for each step, and then subtracting this information from the final interference intensity. Alternatively, one may bypass the need to store these constant intensities by introducing a low-frequency modulation on top of one of the two interfering fields in Fig. 2(c), such that the product term of Eq. (24) can be directly measured from the low-frequency signal. A similar technique was used in Annoni et al. (2017).
We now discuss some of the limitations of our method. In the derivation, we had assumed the operator $\hat{W}_l$ to be unitary, which corresponds to a lossless OIU. In fact, we note that our procedure is exact in the limit of a lossless, feed-forward, and reciprocal system. However, with the addition of any amount of uniform loss, $\hat{W}_l$ is still unitary up to a constant, and our procedure may still be performed with the added step of scaling the measured gradients depending on this loss (see a related discussion in Ref. Miller (2017)). Uniform loss conditions are satisfied in the OIUs experimentally demonstrated in Refs. Shen et al. (2017); Miller (2013b). Mode-dependent loss, such as asymmetry in the MZI mesh layout or fabrication errors, should be avoided as its presence limits the ability to accurately reconstruct the time-reversed adjoint field. Nevertheless, our simulation in Fig. 3 indicates that an accurate gradient can be obtained even in the presence of significant mode-dependent loss. In the experimental structures of Refs. Shen et al. (2017); Miller (2013b), the mode-dependent loss is made much lower due to the choice of the MZI mesh. Thus we expect our protocol to work in practical systems. Our method, in principle, computes gradients in parallel and scales in constant time. In practice, to get this scaling would require careful design of the circuits controlling the OIUs.
Conveniently, since our method does not directly assume any specific model for the linear operations, it may gracefully handle imperfections in the OIUs, such as deviations from perfect 50-50 splits in the MZIs. Lastly, while we chose to make an explicit distinction between the input ports and the output ports, i.e. we assume no backscattering in the system, this requirement is not strictly necessary. Our formalism can be extended to the full scattering matrix. However, this would require special treatment for subtracting the backscattering.
The problem of overfitting is one that must be addressed by 'regularization' in any practical realization of a neural network. Photonic ANNs of this class provide a convenient approach to regularization based on 'dropout' Srivastava et al. (2014). In the dropout procedure, certain nodes are probabilistically and temporarily 'deleted' from the network during train time, which has the effect of forcing the network to find alternative paths to solve the problem at hand. This has a strong regularization effect and has become popular in conventional ANNs. Dropout may be implemented simply in the photonic ANN by 'shutting off' channels in the activation functions during training. Specifically, at each training step and for each layer $l$ and element $i$, one may set the activation output $f_l(Z_l)_i$ to zero with some fixed probability.
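A minimal sketch of this photonic dropout idea, under the assumption that individual activation channels can simply be zeroed during training, is:

```python
import numpy as np

def dropout_activation(z, f, p_drop, rng):
    """Apply the activation f element-wise, then zero each channel with probability
    p_drop; physically, this corresponds to shutting off channels in the activation unit."""
    mask = rng.random(z.shape) >= p_drop
    return f(z) * mask

rng = np.random.default_rng(0)
z = np.array([0.3 + 0.4j, -0.2j, 1.0])
print(dropout_activation(z, lambda w: w / (1 + np.abs(w)), p_drop=0.3, rng=rng))
```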
In conclusion, we have demonstrated a method for performing backpropagation in an ANN based on a photonic circuit. This method works by physically propagating the adjoint field and interfering its time-reversed copy with the original field. The gradient information can then be directly measured out as an in-situ intensity measurement. While we chose to demonstrate this procedure in the context of ANNs, it is broadly applicable to any reconfigurable photonic system. One could imagine this setup being used to tune phased arrays Sun et al. (2013), optical delivery systems for dielectric laser accelerators Hughes et al. (2018), or other systems that rely on large meshes of integrated optical phase shifters. Furthermore, it may be applied to sensitivity analysis of photonic devices, enabling spatial sensitivity information to be measured as an intensity in the device.
Our work should enhance the appeal of photonic circuits in deep learning applications, allowing for training to happen directly inside the device in an efficient and scalable manner. Furthermore, this method is broadly applicable to integrated and adaptive optical systems, enabling the possibility for automatic self-configuration and optimization without resorting to brute force gradient computation or model-based methods, which often do not perfectly represent the physical system.
Funding Information
Gordon and Betty Moore Foundation (GBMF4744); Schweizerischer Nationalfonds zur Förderung der Wissenschaftlichen Forschung (P300P2_177721); Air Force Office of Scientific Research (FA9550-17-1-0002).
References
• LeCun et al. (2015) Y. LeCun, Y. Bengio, and G. Hinton, Nature 521, 436 (2015).
• Merolla et al. (2014) P. A. Merolla, J. V. Arthur, R. Alvarez-Icaza, A. S. Cassidy, J. Sawada, F. Akopyan, B. L. Jackson, N. Imam, C. Guo, Y. Nakamura, B. Brezzo, I. Vo, S. K. Esser, R. Appuswamy, B. Taba, A. Amir, M. D. Flickner, W. P. Risk, R. Manohar, and D. S. Modha, Science 345, 668 (2014)http://science.sciencemag.org/content/345/6197/668.full.pdf .
• Prezioso et al. (2015) M. Prezioso, F. Merrikh-Bayat, B. D. Hoskins, G. C. Adam, K. K. Likharev, and D. B. Strukov, Nature 521, 61 (2015)arXiv:1412.0611 .
• Abu-Mostafa and Pslatis (1987) Y. S. Abu-Mostafa and D. Pslatis, Scientific American 256, 88 (1987).
• Jutamulia (1996) S. Jutamulia, Science 28 (1996).
• Rosenbluth et al. (2009) D. Rosenbluth, K. Kravtsov, M. P. Fok, and P. R. Prucnal, Optics Express 17, 22767 (2009).
• Tait et al. (2014) A. N. Tait, M. A. Nahmias, B. J. Shastri, and P. R. Prucnal, Journal of Lightwave Technology 32, 3427 (2014).
• Brunner et al. (2013) D. Brunner, M. C. Soriano, C. R. Mirasso, and I. Fischer, Nature Communications 4, 1364 (2013).
• Vandoorne et al. (2014a) K. Vandoorne, P. Mechet, T. Van Vaerenbergh, M. Fiers, G. Morthier, D. Verstraeten, B. Schrauwen, J. Dambre, and P. Bienstman, Nature Communications 5, 1 (2014a).
• Shainline et al. (2017) J. M. Shainline, S. M. Buckley, R. P. Mirin, and S. W. Nam, Physical Review Applied 7, 1 (2017)arXiv:1610.00053 .
• Shen et al. (2017) Y. Shen, N. C. Harris, S. Skirlo, M. Prabhu, T. Baehr-Jones, M. Hochberg, X. Sun, S. Zhao, H. Larochelle, D. Englund, and M. Soljacic, Nature Photonics 11, 441 (2017)arXiv:1610.02365 .
• Rumelhart et al. (1986) D. E. Rumelhart, G. E. Hinton, and R. J. Williams, Parallel Distributed Processing, edited by D. E. Rumelhart and R. J. McClelland, Vol. 1 (MIT Press, 1986) Chap. 8.
• Graves et al. (2016) A. Graves, G. Wayne, M. Reynolds, T. Harley, I. Danihelka, A. Grabska-Barwińska, S. G. Colmenarejo, E. Grefenstette, T. Ramalho, J. Agapiou, A. P. Badia, K. M. Hermann, Y. Zwols, G. Ostrovski, A. Cain, H. King, C. Summerfield, P. Blunsom, K. Kavukcuoglu, and D. Hassabis, Nature 538, 471 (2016), arXiv:1410.5401v2.
• Hermans et al. (2015) M. Hermans, M. Burm, T. Van Vaerenbergh, J. Dambre, and P. Bienstman, Nature Communications 6, 6729 (2015).
• Alibart et al. (2013) F. Alibart, E. Zamanidoost, and D. B. Strukov, Nature Communications 4, 2072 (2013).
• Wagner and Psaltis (1987) K. Wagner and D. Psaltis, Applied Optics 26, 5061 (1987).
• Psaltis et al. (1988) D. Psaltis, D. Brady, and K. Wagner, Applied Optics 27, 1752 (1988).
• Georgieva et al. (2002) N. Georgieva, S. Glavic, M. Bakr, and J. Bandler, IEEE Transactions on Microwave Theory and Techniques 50, 2751 (2002).
• Veronis et al. (2004) G. Veronis, R. W. Dutton, and S. Fan, Optics Letters 29, 2288 (2004).
• Hughes et al. (2017) T. Hughes, G. Veronis, K. P. Wootton, R. J. England, and S. Fan, Optics Express 25, 15414 (2017).
• Reck et al. (1994) M. Reck, A. Zeilinger, H. J. Bernstein, and P. Bertani, Physical Review Letters 73, 58 (1994)arXiv:9612010 [quant-ph] .
• Clements et al. (2016) W. R. Clements, P. C. Humphreys, B. J. Metcalf, W. S. Kolthammer, and I. A. Walsmley, Optica 3, 1460 (2016).
• Carolan et al. (2015) J. Carolan, C. Harrold, C. Sparrow, E. Martín-López, N. J. Russell, J. W. Silverstone, P. J. Shadbolt, N. Matsuda, M. Oguma, M. Itoh, G. D. Marshall, M. G. Thompson, J. C. Matthews, T. Hashimoto, J. L. O’Brien, and A. Laing, Science 349, 711 (2015)arXiv:1505.01182 .
• Harris et al. (2017) N. C. Harris, G. R. Steinbrecher, M. Prabhu, Y. Lahini, J. Mower, D. Bunandar, C. Chen, F. N. C. Wong, T. Baehr-Jones, M. Hochberg, S. Lloyd, and D. Englund, Nature Photonics 11, 447 (2017).
• Miller (2013a) D. A. B. Miller, Optics Express 21, 6360 (2013a)arXiv:1302.1593 .
• Miller (2013b) D. A. B. Miller, Photonics Research 1, 1 (2013b)arXiv:1303.4602 .
• Miller (2015) D. A. B. Miller, Optica 2, 747 (2015).
• Annoni et al. (2017) A. Annoni, E. Guglielmi, M. Carminati, G. Ferrari, M. Sampietro, D. A. Miller, A. Melloni, and F. Morichetti, Light: Science & Applications 6, e17110 (2017).
• Shin and Fan (2012) W. Shin and S. Fan, Journal of Computational Physics 231, 3406 (2012).
• Vandoorne et al. (2014b) K. Vandoorne, P. Mechet, T. Van Vaerenbergh, M. Fiers, G. Morthier, D. Verstraeten, B. Schrauwen, J. Dambre, and P. Bienstman, Nature Communications 5, 3541 (2014b).
• Miller (2017) D. A. B. Miller, Opt. Express 25, 29233 (2017).
• Srivastava et al. (2014) N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, Journal of Machine Learning Research 15, 1929 (2014).
• Sun et al. (2013) J. Sun, E. Timurdogan, A. Yaacobi, E. S. Hosseini, and M. R. Watts, Nature 493, 195 (2013).
• Hughes et al. (2018) T. W. Hughes, S. Tan, Z. Zhao, N. V. Sapra, Y. J. Lee, K. J. Leedle, H. Deng, Y. Miao, D. S. Black, M. Qi, O. Solgaard, J. S. Harris, J. Vuckovic, R. L. Byer, and S. Fan, Physical Review Applied (Accepted 2018).
Appendix A Non-holomorphic Backpropagation
In the previous derivation, we have assumed that the functions $f_l$ are holomorphic. For each element of the input $Z_l$, labeled $z$, this means that the derivative of $f_l$ with respect to its complex argument is well defined, or the derivative
$\frac{df_l}{dz} = \lim_{\Delta z \to 0} \frac{f_l(z + \Delta z) - f_l(z - \Delta z)}{2\,\Delta z}$ (27)
does not depend on the direction in which $\Delta z$ approaches $0$ in the complex plane.
Here we show how to extend the backpropagation derivation to non-holomorphic activation functions. We first examine the starting point of the backpropagation algorithm, considering the change in the mean-squared loss function with respect to the permittivity of a phase shifter in the last layer OIU, as written in Eq. (7) of the main text as
$\frac{d\mathcal{L}}{d\epsilon_L} = \mathcal{R}\left\{ \Gamma_L^T \frac{dX_L}{d\epsilon_L} \right\}$ (28)
where we had defined the error vector $\Gamma_L \equiv (X_L - T)^*$ for simplicity and $X_L$ is the output of the final layer.
To evaluate this expression for non-holomorphic activation functions, we split $f_L$ and its argument into their real and imaginary parts
$f_L(Z) = u(\alpha, \beta) + i\, v(\alpha, \beta),$ (29)
where $i$ is the imaginary unit and $\alpha$ and $\beta$ are the real and imaginary parts of $Z$, respectively.
We now wish to evaluate $df_L/d\epsilon_L$, which gives the following via the chain rule
$\frac{df}{d\epsilon} = \frac{du}{d\alpha} \odot \frac{d\alpha}{d\epsilon} + \frac{du}{d\beta} \odot \frac{d\beta}{d\epsilon} + i\,\frac{dv}{d\alpha} \odot \frac{d\alpha}{d\epsilon} + i\,\frac{dv}{d\beta} \odot \frac{d\beta}{d\epsilon},$ (30)
where we have dropped the layer index for simplicity. Here, terms of the form $du/d\alpha$ correspond to element-wise differentiation of the vector $u$ with respect to the vector $\alpha$. For example, the $i$-th element of the vector $du/d\alpha$ is given by $du_i/d\alpha_i$.
Now, inserting Eq. (30) into Eq. (28), we have
$\frac{d\mathcal{L}}{d\epsilon_L} = \mathcal{R}\left\{ \left[\Gamma_L \odot \left(\frac{du}{d\alpha} + i\frac{dv}{d\alpha}\right)\right]^T \frac{d\alpha}{d\epsilon_L} \right.$ (31)
$\left. + \left[\Gamma_L \odot \left(\frac{du}{d\beta} + i\frac{dv}{d\beta}\right)\right]^T \frac{d\beta}{d\epsilon_L} \right\}.$ (32)
We now define the real and imaginary parts of $\Gamma_L$ as $\Gamma_R$ and $\Gamma_I$, respectively. Inserting the definitions of $\alpha$ and $\beta$ in terms of $\hat{W}_L$ and $X_{L-1}$ and doing some algebra, we recover
$\frac{d\mathcal{L}}{d\epsilon_L} = \mathcal{R}\left\{ \left(\Gamma_R \odot \frac{du}{d\alpha}\right)^T \frac{d\hat{W}_L}{d\epsilon_L} X_{L-1} \right.$ (33)
$- \left(\Gamma_I \odot \frac{dv}{d\alpha}\right)^T \frac{d\hat{W}_L}{d\epsilon_L} X_{L-1}$ (34)
$- i\left(\Gamma_R \odot \frac{du}{d\beta}\right)^T \frac{d\hat{W}_L}{d\epsilon_L} X_{L-1}$ (35)
$\left. + i\left(\Gamma_I \odot \frac{dv}{d\beta}\right)^T \frac{d\hat{W}_L}{d\epsilon_L} X_{L-1} \right\}.$ (36)
Finally, the expression simplifies to
$\frac{d\mathcal{L}}{d\epsilon_L} = \mathcal{R}\left\{ \left[ \Gamma_R \odot \left(\frac{du}{d\alpha} - i\frac{du}{d\beta}\right) + \Gamma_I \odot \left(-\frac{dv}{d\alpha} + i\frac{dv}{d\beta}\right) \right]^T \frac{d\hat{W}_L}{d\epsilon_L} X_{L-1} \right\}.$ (37-38)
As a check, if we insert the conditions for $f_L$ to be holomorphic, namely
$\frac{du}{d\alpha} = \frac{dv}{d\beta}, \quad \text{and} \quad \frac{du}{d\beta} = -\frac{dv}{d\alpha},$ (39)
Eq. (36) simplifies to
$\frac{d\mathcal{L}}{d\epsilon_L} = \mathcal{R}\left\{ \left[ \Gamma_R \odot \left(\frac{du}{d\alpha} + i\frac{dv}{d\alpha}\right) + \Gamma_I \odot \left(-\frac{dv}{d\alpha} + i\frac{du}{d\alpha}\right) \right]^T \frac{d\hat{W}_L}{d\epsilon_L} X_{L-1} \right\}$ (40-41)
$= \mathcal{R}\left\{ \left[ \Gamma_L \odot \left(\frac{du}{d\alpha} + i\frac{dv}{d\alpha}\right) \right]^T \frac{d\hat{W}_L}{d\epsilon_L} X_{L-1} \right\}$ (42)
$= \mathcal{R}\left\{ \left[ \Gamma_L \odot f_L'(Z_L) \right]^T \frac{d\hat{W}_L}{d\epsilon_L} X_{L-1} \right\}$ (43)
$= \mathcal{R}\left\{ \delta_L^T \frac{d\hat{W}_L}{d\epsilon_L} X_{L-1} \right\}$ (44)
as before.
This derivation may be similarly extended to any layer in the network. For holomorphic activation functions, whereas we originally defined the vectors $\delta_l$ as
$\delta_l = \Gamma_l \odot f_l'(Z_l),$ (45)
for non-holomorphic activation functions, the respective definition is
$\delta_l = \Gamma_R \odot \left(\frac{du}{d\alpha} - i\frac{du}{d\beta}\right) + \Gamma_I \odot \left(-\frac{dv}{d\alpha} + i\frac{dv}{d\beta}\right),$ (46)
where $\Gamma_R$ and $\Gamma_I$ are the respective real and imaginary parts of $\Gamma_l$, $u$ and $v$ are the real and imaginary parts of $f_l$, and $\alpha$ and $\beta$ are the real and imaginary parts of $Z_l$, respectively.
We can write this more simply as
$\delta_l = \mathcal{R}\left\{\Gamma_l \odot \frac{df}{d\alpha}\right\} - i\,\mathcal{R}\left\{\Gamma_l \odot \frac{df}{d\beta}\right\}.$ (47)
In polar coordinates, where $Z_l = r\,e^{i\phi}$ and $f = f(r, \phi)$, this equation becomes
$\delta_l = \exp(-i\phi)\left( \mathcal{R}\left\{\Gamma_l \odot \frac{df}{dr}\right\} - i\,\mathcal{R}\left\{\Gamma_l \odot \frac{1}{r}\frac{df}{d\phi}\right\} \right)$ (48)
where all operations are element-wise.
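As a concrete check of Eq. (47), the sketch below evaluates $\delta_L$ for the non-holomorphic activation $f(z) = |z|$ (chosen only for illustration) and compares the resulting last-layer gradient of Eq. (10) against a finite-difference derivative; the matrices and the linear parametrization of $\hat{W}$ in a single real parameter are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3
W0 = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
W1 = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))   # dW/d(eps) in this toy model
X  = rng.normal(size=n) + 1j * rng.normal(size=n)
T  = rng.normal(size=n)                                        # real-valued target

f = np.abs                                                     # non-holomorphic activation |z|

def loss(eps):
    z = (W0 + eps * W1) @ X
    return 0.5 * np.sum((f(z) - T)**2)

eps = 0.2
z = (W0 + eps * W1) @ X
gamma = np.conj(f(z) - T)                  # error vector (real in this example)
# Eq. (47): delta = Re{Gamma * df/dalpha} - i Re{Gamma * df/dbeta}, with f = |z|
df_dalpha = np.real(z) / np.abs(z)
df_dbeta  = np.imag(z) / np.abs(z)
delta = np.real(gamma * df_dalpha) - 1j * np.real(gamma * df_dbeta)

grad_backprop = np.real(delta @ (W1 @ X))  # Eq. (10) with dW/deps = W1
h = 1e-6
grad_fd = (loss(eps + h) - loss(eps - h)) / (2 * h)
print(grad_backprop, grad_fd)
```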
Appendix B Photonic neural network simulation
In Sections 4 and 5 of the main text, we have shown, starting from Maxwell's equations, how the gradient information defined for an arbitrary problem can be obtained through electric field intensity measurements. However, since the full electromagnetic problem is too large to solve repeatedly, for the purposes of demonstration of a functioning neural network, in Section 6 we use the analytic, matrix representation of a mesh of MZIs as described in Ref. Clements et al. (2016). Namely, for an even $N$, the transfer matrix $\hat{W}$ of the OIU is parametrized as the product of unitary matrices:
$\hat{W} = \hat{R}_N \hat{R}_{N-1} \dots \hat{R}_2 \hat{R}_1 \hat{D},$ (49)
where each $\hat{R}_i$ implements a number of two-by-two unitary operations corresponding to a given MZI, and $\hat{D}$ is a diagonal matrix corresponding to an arbitrary phase delay added to each port. This is schematically illustrated in Fig. 5(a). For the ANN training, we need to compute terms of the form
$\frac{d\mathcal{L}}{d\phi} = \mathcal{R}\left\{ Y^T \frac{d\hat{W}}{d\phi} X \right\},$ (50)
for an arbitrary phase $\phi$ and vectors $X$ and $Y$ defined following the steps in the main text. Because of the feed-forward nature of the OIUs, the matrix $\hat{W}$ can also be split as
$\hat{W} = \hat{W}_2\, \hat{F}_\phi\, \hat{W}_1,$ (51)
where $\hat{F}_\phi$ is a diagonal matrix which applies a phase shift $e^{i\phi}$ in port $i$ (the other elements are independent of $\phi$), while $\hat{W}_1$ and $\hat{W}_2$ are the parts that precede and follow the phase shifter, respectively (Fig. 5(b)). Thus, Eq. (50) becomes
$\mathcal{R}\left\{ Y^T \frac{d\hat{W}}{d\phi} X \right\} = \mathcal{R}\left\{ Y^T \hat{W}_2 \frac{d\hat{F}_\phi}{d\phi} \hat{W}_1 X \right\}$ (52)
$= -\mathcal{I}\left\{ (\hat{W}_2^T Y)_i\, e^{i\phi}\, (\hat{W}_1 X)_i \right\},$
where $(\hat{W}_1 X)_i$ is the $i$-th element of the vector $\hat{W}_1 X$, and $\mathcal{I}$ denotes the imaginary part. This result can be written more intuitively in a notation similar to the main text. Namely, if $x_\phi = e^{i\phi}(\hat{W}_1 X)_i$ is the field amplitude generated by the input $X$, measured right after the phase shifter corresponding to $\phi$, while $y_\phi = (\hat{W}_2^T Y)_i$ is the field amplitude generated by sending $Y$ in from the output side, measured at the same point, then
$\frac{d\mathcal{L}}{d\phi} = -\mathcal{I}\left\{ x_\phi\, y_\phi \right\}.$ (53)
By recording the amplitudes in all ports during the forward and the backward propagation, we can thus compute in parallel the gradient with respect to every phase shifter. Notice that, within this computational model, we do not need to go through the full procedure outlined in Section 4 of the main text. However, this procedure is crucial for the in situ measurement of the gradients, and works even in cases which cannot be correctly captured by the simplified matrix model used here.
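Equations (51)-(53) are easy to verify numerically. In the sketch below, $\hat{W}_1$ and $\hat{W}_2$ are arbitrary random unitaries, the port index and the toy cost $\mathcal{R}\{Y^T \hat{W} X\}$ are stand-ins, and the phase gradient computed from the two amplitudes is compared with a finite-difference derivative.

```python
import numpy as np

rng = np.random.default_rng(4)
n, i = 4, 2                      # port index i holds the phase shifter
W1 = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))[0]
W2 = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))[0]
X = rng.normal(size=n) + 1j * rng.normal(size=n)
Y = rng.normal(size=n) + 1j * rng.normal(size=n)

def F(phi):
    d = np.ones(n, dtype=complex); d[i] = np.exp(1j * phi)
    return np.diag(d)

def L(phi):                       # toy cost R{Y^T W X} with W = W2 F(phi) W1, Eq. (51)
    return np.real(Y @ (W2 @ F(phi) @ W1 @ X))

phi = 0.7
x_phi = np.exp(1j * phi) * (W1 @ X)[i]    # forward amplitude just after the phase shifter
y_phi = (W2.T @ Y)[i]                     # amplitude at the same point from backward injection
grad = -np.imag(x_phi * y_phi)            # Eq. (53)

h = 1e-6
print(grad, (L(phi + h) - L(phi - h)) / (2 * h))
```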
Appendix C Training demonstration
In the main text we show how the in-situ backpropagation method may be used to train a simple XOR network. Here we demonstrate training on a more complex problem. Specifically, we generate a set of one thousand training examples represented by input and target pairs. The input contains two independent components, which we constrain to be real for simplicity, plus a mode added to the third port to make the norm of the input the same for each training example. Each training example has a corresponding label, which is encoded in the desired output by assigning the target amplitude to one of the first two output ports.
For a given training example, we define $r$ and $\phi$ as the magnitude and phase of the input point in the 2D plane, respectively. To generate the corresponding class label, we first generate a uniform random variable between 0 and 1, labeled $U$, and then assign the label 1 if
$\exp\left( -\frac{\left(r - r_0 - \Delta \sin(2\phi)\right)^2}{2\sigma^2} \right) + 0.1\, U > 0.5.$ (54)
Otherwise, we assign the label 0. For the demonstration, fixed values of $r_0$, $\Delta$, and $\sigma$ were used. The underlying distribution thus resembles an oblong ring centered around $r_0$, with added noise.
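A sketch of the data-generation rule of Eq. (54) is given below; the sampling range of the inputs and the values of $r_0$, $\Delta$, and $\sigma$ are assumptions made only to produce a similar-looking noisy ring, since the original values are not stated here.

```python
import numpy as np

rng = np.random.default_rng(5)
M = 1000
r0, Delta, sigma = 1.0, 0.2, 0.3          # assumed parameter values

x1 = rng.uniform(-2, 2, M)                 # assumed sampling range of the two real inputs
x2 = rng.uniform(-2, 2, M)
r   = np.hypot(x1, x2)
phi = np.arctan2(x2, x1)
U   = rng.uniform(0, 1, M)

# Eq. (54): label 1 inside a noisy, oblong ring of radius near r0.
score  = np.exp(-(r - r0 - Delta * np.sin(2 * phi))**2 / (2 * sigma**2)) + 0.1 * U
labels = (score > 0.5).astype(int)
```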
As diagrammed in Fig. 6(a), we use a network architecture consisting of six layers of unitary OIUs, with an element-wise activation $f(z) = |z|$ after each unitary transformation except for the last in the series, which has an activation of $f(z) = |z|^2$. After the final activation, we apply an additional 'softmax' activation, which gives a normalized probability distribution corresponding to the predicted class of the input. Specifically, these are given by $s_i = \exp(z_i)/\left(\exp(z_1) + \exp(z_2)\right)$, where $z_1$/$z_2$ is the first/second element of the (real-valued) output vector of the last activation (the other two elements are ignored). The ANN prediction for the input is set as the larger one of these two outputs, while the total cost function is defined in the cross-entropy form
$\mathcal{L} = \frac{1}{M}\sum_{m=1}^{M} \mathcal{L}^{(m)} = \frac{1}{M}\sum_{m=1}^{M} -\log\left( s(z_{m,t}) \right),$ (55)
where $\mathcal{L}^{(m)}$ is the cost function of the $m$-th example, the summation is over all $M$ training examples, and $s(z_{m,t})$ is the softmax output from the target port, $t$, as defined by the target output of the $m$-th example. We randomly split our generated examples into a training set containing 75% of the originally generated training examples, while the remaining 25% are used as a test set to evaluate the performance of our network on unseen examples.
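For reference, the softmax and per-example cross-entropy of Eq. (55) amount to the following, assuming the softmax acts on the two real-valued class-port outputs of the last activation:

```python
import numpy as np

def softmax_xent(out_pair, target_idx):
    """Class probabilities and the per-example cross-entropy of Eq. (55)."""
    p = np.exp(out_pair) / np.sum(np.exp(out_pair))   # softmax over the two class ports
    return p, -np.log(p[target_idx])

# e.g. outputs (0.2, 1.3) from the last activation, true class in port 1:
print(softmax_xent(np.array([0.2, 1.3]), 1))
```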
As in the XOR demonstration, we utilized our matrix model of the system described in Section B. As in the main text, at each iteration of training we compute the gradient of the cost function with respect to the phases of each of the integrated phase shifters, and sum this over each of the training examples. For the backpropagation through the activation functions, since $|z|$ and $|z|^2$ are non-holomorphic, we use Eq. (48) from Section A to obtain
$\delta_L = 2\, Z_L^* \odot \mathcal{R}\{\Gamma_L\}$ (56)
$\delta_l = \exp(-i\phi_l) \odot \mathcal{R}\{\Gamma_l\},$ (57)
where $\phi_l$ is a vector containing the phases of $Z_l$ and $\Gamma_L$ is given by the derivative of the cross-entropy loss function for a single training example
$\Gamma_{L,i} = \frac{\partial \mathcal{L}^{(m)}}{\partial z_{m,i}} = s(z_{m,i}) - \delta_{i,t},$ (58)
where $\delta_{i,t}$ is the Kronecker delta.
With this, we can now compute the gradient of the loss function of Eq. (55) with respect to all trainable parameters, and perform a parallel, steepest-descent update to the phase shifters, in accordance with the gradient information. Our network successfully learned this task in around 4000 iterations. The results of the training are shown in Fig. 6(b). We achieved an accuracy of 91% on both the training and test sets, indicating that the network was not overfitting to the dataset. This can also be confirmed visually from Fig. 6(c). The lack of perfect predictions is likely due to the inclusion of noise.
https://arxiv.org/list/nlin.SI/new | # Exactly Solvable and Integrable Systems
## New submissions
[ total of 2 entries: 1-2 ]
### New submissions for Thu, 4 Jun 20
[1]
Title: Constellations and $\tau$-functions for rationally weighted Hurwitz numbers
Subjects: Mathematical Physics (math-ph); High Energy Physics - Theory (hep-th); Combinatorics (math.CO); Group Theory (math.GR); Exactly Solvable and Integrable Systems (nlin.SI)
Weighted constellations give graphical representations of weighted branched coverings of the Riemann sphere. They were introduced to provide a combinatorial interpretation of the $2$D Toda $\tau$-functions of hypergeometric type serving as generating functions for weighted Hurwitz numbers in the case of polynomial weight generating functions. The product over all vertex and edge weights of a given weighted constellation, summed over all configurations, reproduces the $\tau$-function. In the present work, this is generalized to constellations in which the weighting parameters are determined by a rational weight generating function. The associated $\tau$-function may be expressed as a sum over the weights of doubly labelled weighted constellations, with two types of weighting parameters associated to each equivalence class of branched coverings. The double labelling of branch points, referred to as "colour" and "flavour" indices, is required by the fact that, in the Taylor expansion of the weight generating function, a particular colour from amongst the denominator parameters may appear multiply, and the flavour labels indicate this multiplicity.
### Replacements for Thu, 4 Jun 20
[2] arXiv:1903.09197 (replaced) [src]
Title: Symplectic extensions of the Kirillov-Kostant and Goldman Poisson structures and Fuchsian systems
Comments: 26 pages, 2 figures. This paper was never submitted anywhere and instead has been superseded, extended in two different directions and replaced by two separate papers. See arXiv:1910.06744, arXiv:1910.03370. To avoid triggering automatic warnings we decided to withdraw it
Subjects: Mathematical Physics (math-ph); Symplectic Geometry (math.SG); Exactly Solvable and Integrable Systems (nlin.SI)
http://math.stackexchange.com/questions/266816/why-open-and-closed-boxes-are-measurable | # Why open and closed boxes are measurable
Let $B := \prod_{i=1}^n (a_i,b_i) := \{(x_1,\cdots,x_n) \in \mathbb R^n \mid \forall i: x_i \in (a_i,b_i) \}$. Can someone help me to show that $B$ is a measurable set, i.e. that if $A \subseteq \mathbb R^n$ then $m^*(A) \geq m^*(A \cap B) + m^*(A \cap B^c)$, where $$m^*(E) = \inf \left \{ \sum_{i=0}^\infty vol(B_i) : E \subseteq \bigcup_{i=0}^\infty B_i \text{ where } (B_i)_{i=0}^\infty \text{ is at most countable } \right \}$$ and the $B_i$ have to be open boxes. Furthermore, $vol(B) := \prod_{i=1}^n (b_i-a_i)$ for each open box $B$.
First see if you can do the case $n=1$. – GEdgar Dec 29 '12 at 1:50
Yes, I can :D How can I proceed after that ? Can I show that the Cartesian product is measurable ? I.e. $B = (a_1,b_1) \times \cdots \times (a_n,b_n)$ where each $(a_i,b_i)$ is measurable. – Epsilon Dec 29 '12 at 2:12
It would be nice to show that $(a,\infty)^n$ is measurable for all $a \in \mathbb R$. Then an open box $B$ is an intersection of finitely many boxes of the form $(a,\infty)^n$ but I already have proven that finite intersection preserves measurability. – Epsilon Dec 29 '12 at 2:40
It would be nice to show that $(a,\infty)^n$ is measurable for all $a \in \mathbb R$ and $(-\infty,a)^n$, too. Then an open box $B$ is an intersection of finitely many boxes of the form $(a,\infty)^n$ or $(-\infty,a)^n$ but I already have proven that finite intersection preserves measurability. (Edit of the above comment) – Epsilon Dec 29 '12 at 2:46
For the case $n=1$ – leo Dec 29 '12 at 17:53
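One way to make the strategy in these comments precise is to use coordinate half-spaces instead of the sets $(a,\infty)^n$: an open box is a finite intersection of half-spaces,
$$B = \bigcap_{i=1}^{n} \Big( \{x \in \mathbb R^n : x_i > a_i\} \cap \{x \in \mathbb R^n : x_i < b_i\} \Big),$$
and each half-space satisfies the Carathéodory condition by what is essentially the one-dimensional argument applied in the $i$-th coordinate (any covering box splits along that coordinate into two boxes whose volumes add). Measurability of $B$ then follows from the fact, mentioned above, that finite intersections preserve measurability.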
http://en.poshtibanpress.ir/pvvrufk/how-to-use-mle-matlab.php | # How to use mle matlab
Maximum likelihood estimation (MLE) is a method of estimating the parameters of a statistical model, given observations: it seeks the parameter values that make the observed data most likely under the model. The technique was introduced by R. A. Fisher in 1912. In a simple case such as repeated Bernoulli trials, MLE amounts to calculating the value of p that gives the highest likelihood for the particular data set, and the estimate is just the sample proportion of successes, so no numerical search is needed. For more complicated models the maximum usually has to be found numerically, and for censored data MLE is generally preferred to least-squares or rank-regression fitting because it considers each unique time-to-failure or suspension rather than ranks or plotting positions. Many statistics packages provide MLE as a standard procedure, but coding the estimator yourself is also a good way to learn both the method and the language.

## MATLAB's built-in fitting functions

The basic distribution-fitting function in the Statistics and Machine Learning Toolbox is `mle`. Calling `mle(data)` returns maximum likelihood estimates for the parameters of a normal distribution, and `phat = mle(___, Name,Value)` specifies options using name-value pair arguments, for example a different `'distribution'`. Related tools include:

* `fitdist` and the Distribution Fitter app, which fit most supported distributions by maximum likelihood; the exceptions are the normal and lognormal distributions with uncensored data, for which `normfit` and `fitdist` return the unbiased estimates while `mle` returns the maximum likelihood estimates;
* `binofit`, which returns the MLEs and confidence intervals for the parameters of the binomial distribution;
* `normlike`, which returns the negative log-likelihood of a normal fit, so the maximized log-likelihood can be recovered as `logL_MLE = -normlike([muHat,sigmaHat_MLE],x)`;
* `garchfit`, which estimates GARCH models by constructing the likelihood function and optimizing it numerically.

## Custom distributions

`mle` can also fit a distribution you define yourself. A custom probability density function is specified as a function handle created using `@`; the custom function accepts the vector `data` and one or more individual distribution parameters as input arguments and returns a vector of probability density values. If the custom density is called `newpdf`, you pass it as `'pdf',@newpdf` together with a `'start'` vector of initial parameter values, and a custom cumulative distribution function `newcdf` is supplied the same way with `'cdf',@newcdf`. You must define the cdf together with the pdf if the data are censored and you use the `'Censoring'` name-value pair; if `'Censoring'` is not present, you do not have to specify the cdf. This is the approach used, for instance, to fit a three-parameter (shifted) Weibull distribution, supplying its pdf and cdf as a custom distribution. Note that `mle` needs a density: it does not automatically differentiate a cdf for you. Save any custom function file either in the current folder or in a folder on the MATLAB search path.

## Controlling the optimization

Internally, `mle` maximizes the log-likelihood numerically. According to the MATLAB help, you can choose between `fminsearch` (the default) and, as long as you have the Optimization Toolbox, `fmincon`, using the `'optimfun'` option. `fmincon` includes optimization algorithms that can use derivative information; to take best advantage of them, the custom distribution can be specified through a log-likelihood function rather than a pdf. The search can be controlled with an options structure created using `statset`, and if `mle` does not converge with the default statistics options, adjusting the iteration limits, tolerances, or starting values is the first thing to try.

## Writing the likelihood yourself

For models that are not plain univariate distributions (linear regression with censored observations, GARCH and other volatility models, mean-reverting jump-diffusions, Cox-Ingersoll-Ross and other diffusion processes, Hawkes processes, or state-space models whose likelihood is evaluated with the Kalman filter), the usual approach is to code the negative log-likelihood as a function of the parameter vector and minimize it with `fminsearch`, `fminunc`, or `fmincon`. Approximate standard errors can then be obtained from the inverse of the information matrix evaluated at the MLE, $\widehat{\mathrm{var}}(\hat\theta) = I(\hat\theta)^{-1}$; when the expected information is difficult to compute, the observed information (the Hessian of the negative log-likelihood at the optimum) is used instead. The best way to test such an estimator is to run it on synthetic data simulated with known parameters and check that the estimates recover them. Ready-made MATLAB implementations exist for several of these settings, including MRJD_MLE for mean-reverting jump-diffusion processes, closed-form MLE code for diffusions, Hawkes-process fitting scripts, maximum-likelihood point-spread-function fitting for single-molecule localization, and MEMLET, a MATLAB-enabled maximum-likelihood estimation tool for single-molecule and other biophysical data.
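A minimal sketch of the built-in route described above, using simulated data (the Weibull choice, sample size, and parameter values are illustrative assumptions, not taken from this page):

```
% Fit a Weibull distribution by maximum likelihood with the built-in mle.
% The data are simulated so the estimates can be checked against the truth.
rng(0);                                   % reproducible example
x = wblrnd(3, 1.5, 500, 1);               % Weibull sample: scale 3, shape 1.5

[phat, pci] = mle(x, 'distribution', 'Weibull');   % MLEs and 95% confidence intervals
fprintf('scale = %.3f, shape = %.3f\n', phat(1), phat(2));
disp(pci);                                % columns correspond to the two parameters
```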
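For the custom-pdf route, here is a sketch that uses a logistic density as the illustrative custom distribution; the data, parameter values, and starting values are again made up for the example:

```
% Fit a user-defined density with mle. The handle receives the data vector
% and each parameter as a separate argument and returns density values.
rng(1);
u = rand(1000, 1);
data = 2 + 0.8 * log(u ./ (1 - u));       % simulated logistic data, mu = 2, s = 0.8

logisticpdf = @(x, mu, s) exp(-(x - mu)./s) ./ (s .* (1 + exp(-(x - mu)./s)).^2);

phat = mle(data, 'pdf', logisticpdf, 'start', [mean(data), std(data)], ...
           'LowerBound', [-Inf, eps])     % lower bound keeps the scale positive
```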
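The optimization controls can be attached to the same kind of call. A sketch follows (it assumes the Optimization Toolbox is available for `fmincon`, and the option values are arbitrary illustrations):

```
% The same kind of custom fit, but selecting fmincon and explicit iteration limits.
rng(1);
u = rand(1000, 1);
data = 2 + 0.8 * log(u ./ (1 - u));
logisticpdf = @(x, mu, s) exp(-(x - mu)./s) ./ (s .* (1 + exp(-(x - mu)./s)).^2);

opts = statset(statset('mlecustom'), 'MaxIter', 400, 'MaxFunEvals', 800);

phat = mle(data, 'pdf', logisticpdf, 'start', [mean(data), std(data)], ...
           'LowerBound', [-Inf, eps], 'OptimFun', 'fmincon', 'Options', opts)
```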
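Finally, a sketch of the hand-coded route on simulated Weibull data, so the answer can be checked against the built-in fit. `fminunc` needs the Optimization Toolbox, and the Hessian it returns is a numerical approximation, so the standard errors are approximate and refer to the log-scale parameters:

```
% Hand-coded MLE: minimize the negative log-likelihood directly, then read
% approximate standard errors off the inverse Hessian (observed information).
rng(2);
x = wblrnd(3, 1.5, 500, 1);

% Optimize over log-parameters so scale and shape stay positive during the search.
negloglik = @(q) -sum(log(wblpdf(x, exp(q(1)), exp(q(2)))));
q0 = log([mean(x), 1]);                    % crude starting values

[qhat, ~, ~, ~, ~, H] = fminunc(negloglik, q0);
phat = exp(qhat)                           % MLE of [scale, shape]
se_q = sqrt(diag(inv(H)))'                 % approx. std. errors of the log-parameters

wblfit(x)                                  % built-in MLE for comparison
```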
https://www.cheenta.com/test-of-mathematics-solution-subjective-65-minimum-value-of-quadratic/
# Test of Mathematics Solution Subjective 65 - Minimum Value of Quadratic
This is a Test of Mathematics Solution Subjective 65 (from ISI Entrance). The book, Test of Mathematics at 10+2 Level is Published by East West Press. This problem book is indispensable for the preparation of I.S.I. B.Stat and B.Math Entrance.
## Problem
Show that for all real x, the expression ${ax^2}$ + bx + c (where a, b, c are real constants with a > 0) has the minimum value ${\frac{4ac - b^2}{4a}}$. Also find the value of x for which this minimum value is attained.
## Solution
f(x) = ${ax^2}$ + bx + c
At a minimum, the first derivative is zero and the second derivative is positive.
${\frac{df(x)}{dx}}$ = 2ax + b
and ${\frac{d^2f(x)}{dx^2}}$ = 2a
Since a > 0, we have 2a > 0, so the second derivative is positive and the stationary point is a minimum.
So the minimum occurs when
${\frac{df(x)}{dx}}$ = 0, i.e. 2ax + b = 0
or 2ax = -b
or x = ${\frac{-b}{2a}}$ (ans)
At x = ${\frac{-b}{2a}}$
${ax^2}$ + bx + c
= ${a\times {\frac{b^2}{4a^2}}}$ + ${b\times {\frac{-b}{2a}}}$ + c
= ${\frac{b^2}{4a}}$ - ${\frac{b^2}{2a}}$ + c
= ${\frac{4ac-b^2}{4a}}$ (proved) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8071150183677673, "perplexity": 1689.1306779434444}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587767.18/warc/CC-MAIN-20211025185311-20211025215311-00288.warc.gz"} |
http://lilypond.org/doc/v2.19/Documentation/learning/absolute-note-names.en.html | ### 2.4.3 Absolute note names
So far we have used `\relative` to define pitches. This is usually the fastest way to enter most music. Without `\relative`, pitches are interpreted in absolute mode.
In this mode, LilyPond treats all pitches as absolute values. A `c'` will always mean middle C, a `b` will always mean the note one step below middle C, and a `g,` will always mean the note on the bottom staff of the bass clef.
```{
\clef "bass"
c'4 b g, g, |
g,4 f, f c' |
}
```
Writing a melody in the treble clef involves a lot of quote `'` marks. Consider this fragment from Mozart:
```{
\key a \major
\time 6/8
cis''8. d''16 cis''8 e''4 e''8 |
b'8. cis''16 b'8 d''4 d''8 |
}
```
Common octave marks can be indicated just once, using the command `\fixed` followed by a reference pitch:
```\fixed c'' {
\key a \major
\time 6/8
cis8. d16 cis8 e4 e8 |
b,8. cis16 b,8 d4 d8 |
}
```
With `\relative`, the previous example needs no octave marks because this melody moves in steps no larger than three staff positions:
```\relative {
\key a \major
\time 6/8
cis''8. d16 cis8 e4 e8 |
b8. cis16 b8 d4 d8 |
}
```
If you make a mistake with an octave mark (`'` or `,`) while working in `\relative` mode, it is very obvious – many notes will be in the wrong octave. When working in absolute mode, a single mistake will not be as visible, and will not be as easy to find.
However, absolute mode is useful for music which has large intervals, and is extremely useful for computer-generated LilyPond files. When cutting and pasting melody fragments, absolute mode preserves the original octave.
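One reason absolute mode suits generated files is that every pitch can be written out independently of its neighbours, with no running octave context to track. The following is a small illustrative sketch in Python, not part of the LilyPond documentation; the MIDI-number convention and the function name are assumptions made for the example.

```python
# Emit LilyPond absolute-mode pitch names from MIDI note numbers
# (natural notes only, to keep the sketch short).
NATURALS = {0: "c", 2: "d", 4: "e", 5: "f", 7: "g", 9: "a", 11: "b"}

def lily_absolute(midi_note):
    name = NATURALS[midi_note % 12]   # sharps and flats are ignored here
    marks = midi_note // 12 - 4       # middle C (MIDI 60) is written c'
    return name + ("'" * marks if marks >= 0 else "," * -marks)

# Middle C, the E above it, and the G below it:
print(" ".join(lily_absolute(n) for n in (60, 64, 55)))   # c' e' g
```

Because each note's octave mark depends only on that note, fragments produced this way can be cut and pasted without changing octave, which is exactly the property described above.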
Sometimes music is arranged in more complex ways. If you are using `\relative` inside of `\relative`, the outer and inner relative sections are independent:
```\relative { c'4 \relative { f'' g } c }
```
To use absolute mode inside of `\relative`, put the absolute music inside `\fixed c { … }` and the absolute pitches will not affect the octaves of the relative music:
```\relative {
c'4 \fixed c { f'' g'' } c |
c4 \fixed c'' { f g } c
}
```
Other languages: català, česky, deutsch, español, français, magyar, italiano, 日本語, nederlands. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9148290753364563, "perplexity": 2866.0807394278377}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257826773.17/warc/CC-MAIN-20160723071026-00158-ip-10-185-27-174.ec2.internal.warc.gz"} |
https://firasfneish-statistics-r.netlify.app/post/getting-started/installing-old-packages/ | # Installing older version of a package
In many situations when you install a new package, R will ask you whether you want to update a specific package in your library to the newest version. The main problem is that, in the updated version, a certain function might have been completely removed, which could break your pre-existing code.
In this post, I will show how to install an old version of any package as long as it remains on CRAN.
First of all, I will demonstrate the issue with a package called “emmeans”. The package contains different functions for multiple comparisons of means. To date, the latest version of this package is 1.5.3 and was published on 9-Dec-2020. In the new version, the CLD() function was removed. The authors/maintainer replaced it with cld() from the multcomp package. Although nothing major changed, for the purpose of this post I will assume that we have to use the old CLD() and that no alternative is available.
Assuming you have already updated the emmeans package on your machine, the first thing to do is to remove the current (most recent) package.
remove.packages("emmeans")
To install a specific version of a package, we need to install a package called “remotes” and then load it from the library. Afterwards we can use install_version() by specifying the package name and version needed as shown below.
install.packages("remotes")
library(remotes)
install_version("emmeans", "1.4.5")
It is worth noting that the missing function from the updated version is likely replaced by either a new function by the authors/maintainer or they have provided an alternative and documented it in the latest version. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3019832968711853, "perplexity": 755.7905766824412}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989705.28/warc/CC-MAIN-20210512193253-20210512223253-00002.warc.gz"} |
http://physics.stackexchange.com/users/4485/physiks-lover?tab=activity&sort=revisions | # Physiks lover
reputation: 319 · member for 3 years, 3 months · seen 7 hours ago · profile views: 294
# 10 Revisions
- Aug 19: revised "Why should multiplication of a ket vector by a complex number change only its “direction”?" (improved context)
- Aug 19: revised "How do I calculate integral analytically for small $k$?" (added popular mathematics tag)
- May 20: revised "Is the electric field zero inside an ideal conductor carrying a current?" (added 62 characters in body)
- May 16: revised "Why isn't the force modelled which confines excess charge to remain inside a conductor?" (deleted 48 characters in body)
- Apr 26: revised "Calculate relativistic boost to COM frame from two arbitary velocities?" (added 26 characters in body; edited title)
- Apr 13: revised "Can relativistic kinetic energy be derived from Newtonian kinetic energy?" (added 24 characters in body)
- Nov 20: revised "Problems that Lagranges equations of the 1st kind can solve whereas the 2nd kind can't?" (added 100 characters in body)
- Sep 28: revised "Superluminal neutrinos" (added 655 characters in body)
- Sep 23: revised "The bar tender says we don't serve Tachyons around here?" (deleted 18 characters in body)
- Sep 1: revised "Is Einstein's 1916 General Relativity paper a recommended way to start learning about the subject?" (deleted 52 characters in body) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6712612509727478, "perplexity": 4821.755445725777}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507444209.14/warc/CC-MAIN-20141017005724-00228-ip-10-16-133-185.ec2.internal.warc.gz"}
https://mathemerize.com/what-is-the-value-of-cot-0-degrees/ | # What is the Value of Cot 0 Degrees ?
## Solution :
The value of Cot 0 degrees is equal to $$\infty$$ or not defined.
Proof :
In the right $$\Delta$$ ABC (right-angled at B), $$\angle$$ A is made smaller and smaller till it becomes zero. As $$\angle$$ A gets smaller and smaller, the length of the side BC decreases. The point C gets closer to the point B, and finally, when $$\angle$$ A becomes very close to 0 degrees, AC becomes almost the same as AB, and BC gets very close to 0.
By using trigonometric formula,
Cot A = $$base\over perpendicular$$ = $$b\over p$$
Cot A = $$AB\over BC$$; as BC approaches 0, this ratio takes the form $$1\over 0$$, which is not defined.
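The same conclusion can be phrased as a limit of the ratio definition (a supplementary check, not part of the original geometric argument): $$\cot A = {\cos A \over \sin A}$$, and as $$A \to 0^{+}$$ we have $$\cos A \to 1$$ while $$\sin A \to 0^{+}$$, so $$\cot A$$ grows without bound.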
Hence, $$cot 0^{\circ}$$ = $$\infty$$ or not defined. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7637379169464111, "perplexity": 374.525098392709}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103915196.47/warc/CC-MAIN-20220630213820-20220701003820-00345.warc.gz"} |
https://nus.kattis.com/sessions/n52i2m/problems/gauntletoffire | OpenKattis
CS3233 Midterm Team Contest
#### Start
2019-04-01 09:00 UTC
## CS3233 Midterm Team Contest
#### End
2019-04-01 13:30 UTC
# Problem G: Gauntlet of Fire
Spike is competing in the Gauntlet of Fire for the title of Dragon Lord. To succeed, he needs to traverse the labyrinthine volcanic tunnels, avoiding the treacherous elements and making it from chamber to chamber as quickly as possible.
The cave has $N$ chambers and $M$ tunnels connecting these chambers; no tunnel connects a chamber to itself, and there is at most one tunnel between any two chambers. This cave has a special property: it is known that it is possible to travel between any two different chambers by taking one or more tunnels, and that there are at most two paths between any two different chambers that do not take any tunnel more than once.
For simplicity, we label the chambers $1, 2, \dots , N$; then, the $i^\text {th}$ tunnel connects chambers $A_ i$ and $B_ i$, and has a length of $L_ i$ meters.
Spike can travel at the fast, fast speed of $1$ meter per second. He wants to know the danger level of each chamber. The danger level of a chamber is simply the sum of the times, in seconds, required to travel from this chamber to every other chamber, assuming that one always takes the shortest path possible.
Help Spike determine the danger level of each chamber!
Since these numbers can be quite large, you should output only the remainders after dividing each number by $10^9+7$.
## Input
The first line of input contains two integers, $N$ ($2 \leq N \leq 200\, 000$) and $M$ ($N-1 \leq M \leq 400\, 000$) the number of chambers and tunnels in the cave.
The next $M$ lines contain the descriptions of the tunnels. In particular, the $i^\text {th}$ of these lines contains three integers $A_ i$, $B_ i$ ($1\leq A_ i, B_ i \leq N$; $A_ i \neq B_ i$) and $L_ i$ ($1 \leq L_ i \leq 10^9$), denoting that the $i^\text {th}$ tunnel connects chambers $A_ i$ and $B_ i$, and has a length of $L_ i$ meters.
It is guaranteed that there is at most one tunnel between any two chambers, that it is possible to travel between any two different chambers by taking one or more tunnels, and that there are at most two paths between any two chambers that do not take any tunnel more than once.
## Output
Output $N$ integers on a single line, separated by spaces. The $i^\text {th}$ of these integers should contain the danger level of chamber $i$.
Since these numbers can be quite large, you should output only the remainders after dividing each number by $10^9+7$.
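(Not part of the problem statement.) To make the danger-level definition concrete, the following brute-force sketch runs Dijkstra from every chamber and sums the distances; it reproduces Sample 1 below, but it is far too slow for the stated limits and every name in it is made up for illustration.

```python
import heapq

def danger_levels(n, edges, mod=10**9 + 7):
    # Adjacency list of the tunnel network.
    adj = [[] for _ in range(n + 1)]
    for a, b, l in edges:
        adj[a].append((b, l))
        adj[b].append((a, l))

    levels = []
    for src in range(1, n + 1):
        # Dijkstra from src.
        dist = [None] * (n + 1)
        pq = [(0, src)]
        while pq:
            d, u = heapq.heappop(pq)
            if dist[u] is not None:
                continue
            dist[u] = d
            for v, w in adj[u]:
                if dist[v] is None:
                    heapq.heappush(pq, (d + w, v))
        # Danger level: sum of shortest travel times to every other chamber.
        levels.append(sum(dist[1:]) % mod)
    return levels

edges = [(1, 2, 3), (1, 4, 8), (2, 3, 12), (3, 5, 4), (4, 5, 2)]
print(danger_levels(5, edges))  # [35, 39, 36, 27, 29], matching Sample 1
```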
Sample Input 1
5 5
1 2 3
1 4 8
2 3 12
3 5 4
4 5 2

Sample Output 1
35 39 36 27 29
Sample Input 2
7 6
1 2 8
1 3 15
1 4 10
3 5 40
3 6 3
5 7 60

Sample Output 2
221 261 206 271 326 221 626 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6644182205200195, "perplexity": 617.93480025437}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496667260.46/warc/CC-MAIN-20191113113242-20191113141242-00324.warc.gz"} |
https://cerncourier.com/a/atlas-hunts-for-new-physics-with-dibosons/ | # ATLAS hunts for new physics with dibosons
22 September 2017
Beyond the Standard Model of particle physics (SM), crucial open questions remain such as the nature of dark matter, the overabundance of matter compared to antimatter in the universe, and also the mass scale of the scalar sector (what makes the Higgs boson so light?). Theorists have extended the SM with new symmetries or forces that address these questions, and many such extensions predict new resonances that can decay into a pair of bosons (diboson), for example: VV, Vh, Vγ and γγ, where V stands for a weak boson (W and Z), h for the Higgs boson, and γ is a photon.
The ATLAS collaboration has a broad search programme for diboson resonances, and the most recent results using 36 fb⁻¹ of proton–proton collision data at the LHC taken at a centre-of-mass energy of 13 TeV in 2015 and 2016 have now been released. Six different final states characterised by different boson decay modes were considered in searches for a VV resonance: 4ℓ, ℓℓνν, ℓℓqq, ℓνqq, ννqq and qqqq, where ℓ, ν and q stand for charged leptons (electrons and muons), neutrinos and quarks, respectively. For the Vh resonance search, the dominant Higgs boson decay into a pair of b-quarks (branching fraction of 58%) was exploited together with four different V decays leading to ℓℓbb, ℓνbb, ννbb and qqbb final states. A Zγ resonance was sought in final states with two leptons and a photon.
A new resonance would appear as an excess (bump) over the smoothly distributed SM background in the invariant mass distribution reconstructed from the final-state particles. The left figure shows the observed WZ mass distribution in the qqqq channel together with simulations of some example signals. An important key to probe very high-mass signals is to identify high-momentum hadronically decaying V and h bosons. ATLAS developed a new technique to reconstruct the invariant mass of such bosons combining information from the calorimeters and the central tracking detectors. The resulting improved mass resolution for reconstructed V and h bosons increased the sensitivity to very heavy signals.
No evidence for a new resonance was observed in these searches, allowing ATLAS to set stringent exclusion limits. For example, a graviton signal predicted in a model with extra spatial dimensions was excluded up to masses of 4 TeV, while heavy weak-boson-like resonances (as predicted in composite Higgs boson models) decaying to WZ bosons are excluded for masses up to 3.3 TeV. Heavier Higgs partners can be excluded up to masses of about 350 GeV, assuming specific model parameters. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9793265461921692, "perplexity": 2521.5777476783637}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764501407.6/warc/CC-MAIN-20230209045525-20230209075525-00694.warc.gz"} |
https://blog.theleapjournal.org/2015/11/ | ## Saturday, November 21, 2015
by Renuka Sane.
### Why should we take interest in individual credit?
When we think of a credit market, we generally think of borrowing by large firms. But individuals, like you and me, are also important participants in credit markets. Some of us are entrepreneurs - we dream of new ventures, and in the process create new goods and services. As entrepreneurs, we often start out as sole proprietors, limited by the equity financing we can muster. In all countries, credit to individuals is a mechanism for small business financing.
In addition, borrowing is done by individuals in order to shift consumption through time. This is called consumption smoothing. This is especially important for low-income households whose incomes are reliant on seasonal cycles such as agricultural harvest, but whose consumption patterns require liquidity over the entire year. Alternatively, when poor people are faced with shocks like illnesses, borrowing helps improve the stability of consumption.
In short, the objective of a well functioning market for personal credit is that it should support entrepreneurship by individuals and consumption smoothing.
The importance of credit for individuals is reflected in the emphasis that programs such as Startup America, and the Entrepreneurship 2020 Action Plan, EU are placing on accessing capital. The importance of credit is also seen in the growth of micro-finance in economies in South Asia and Latin America, which cater to the demand for credit from low-income consumers.
### How well are we doing in India today?
Financial exclusion
The CMIE household survey data shows that only 14% of households had credit outstanding in March 2014. This highlights the failure of decades of policies that have attempted to obtain financial inclusion through heavy-handed State interventions. The lack of availability of adequate and timely credit is the biggest problem affecting the growth of the small scale enterprises in India (Banerjee and Duflo, 2014). Financially excluded households lack the ability to smooth consumption. This induces welfare loss. It also gives them extreme incentives to seek out informal credit at outlandish prices as the marginal utility of consumption smoothing is extremely high.
Many loan products involve unfair terms that customers do not understand, or interest rates that are so shrouded, that customers are not able to rationally evaluate the expected repayments. There have been several complaints regarding such practices by banks as well as informal sector lenders. Such mis-selling makes consumers wary of transacting, and lays the political foundations for extreme government interference (Sane and Thomas, 2015).
Emphasis on secured credit
The bulk of personal loans outstanding are housing, vehicle, consumer durable and education loans - all of which are collateralised. The credit market in India only delivers capital to those who have assets to pledge. This is true of SME loans as well. This constrains entrepreneurship, especially in service, technology and knowledge industries which require a system that lends based on an assessment of future cash-flows, and not on the basis of existing collateral.
Lack of sound arrangements when faced with default
All over the world, orderly systems have been setup to cope with consumer default. In India, some creditors have resorted to coercive collection practices. Incidents in the early 2000s led RBI to issue a circular on Guidelines on Fair Practices Code for Lenders, May 2003 that directed lenders to not resort to undue harassment for loan recovery. Anecdotes suggest that the practice is not fully controlled. The micro-finance crisis in Andhra Pradesh in 2010 also had its roots in coercive collection practices. Even if strong arm tactics are not used, a person with unpaid loans has no way of cleaning the slate and starting over. Legal proceedings are known to linger on for decades.
Loan waiver programs
India has a long history of loan waiver programs. Sensational stories about poor people burdened under large amounts of debt from evil lenders gain traction in the political discourse. The largest of these was a Rs.760 billion farm debt waiver in 2008. The adverse effects of such a waiver include the destruction of the loan repayment culture and a rise in strategic defaults. For example, Gine and Kanz (2014) find that moral hazard in loan repayment intensified after the 2008 program. The threat of future loan waiver programs by the government makes banks complacent and reduces the credit culture among borrowers.
### Diagnosing the problem
The first problem in the personal credit ecosystem is the lack of consumer protection that can safeguard against mis-selling at the time of the sale of the loan, coercive recovery practices, etc.
The second problem is the absence of well-functioning insolvency framework for enforcing repayment of loans. It is the job of insolvency laws to ensure that creditors can seek remedy of the courts to force the debtor to restructure the loan, or to liquidate assets to pay off the creditors. The presence of such laws, and smooth institutional mechanisms which enforce these laws, makes creditors more willing to lend. The laws also ensure that debtors can also seek remedy from immediate collection actions of creditors, and buy time to reorganise their finances, or sell assets, at the end of which they can get a "fresh start". This makes them open to taking more risks. The design of institutions in personal bankruptcy shapes the functioning of credit markets (White, 2005). This is why, for example, bankruptcy procedures and a second chance for honest entrepreneurs is core to the Entrepreneurship 2020 Action Plan in the EU described earlier.
### The way forward
Building a sound market for individual credit requires a two-pronged approach. One part is enacting the draft Indian Financial Code, which sets up a sound regulatory framework for financial firms and ensures consumer protection. The second part is the draft Insolvency and Bankruptcy Code, which sets up the machinery for dealing with default.
The draft Insolvency and Bankruptcy Code has four core features:
• It proposes an Insolvency Resolution Process (IRP) in which creditors and debtors can re-negotiate the repayment of loans. This offers debtors relief from collection action of creditors, while offering creditors a chance to collect their dues over time. This would be particularly important for personal insolvency, which is really the insolvency of a proprietorship.
• The Bill also proposes a bankruptcy process in case the negotiations fail - this implies the sale of personal assets to repay creditors.
• For low-income households with small loans, the Bill proposes a Fresh Start mechanism, whereby a debtor can seek to waive off his debts. This brings an element of predictability in the way loan-waivers take place in the country, distributing them through time as a large number of small transactions, and shifting the burden from the taxpayer to the shareholders of banks. Moral hazard is contained by flagging the person as a defaulter, for life, through credit bureaus.
• No matter what trajectory is followed, the set of steps that are followed after default by a person are specified clearly, these steps play out in modest periods of time, and end with the case being disposed of. This is in contrast with the present arrangements where a default turns into a legal problem for decades.
The draft Bill thus balances two objectives: on one hand, it provides debtors some relief through the discharge of some of their debt, and on the other, it seeks to improve credit availability by enforcing repayment.
Rolling out the consumer protection framework proposed in the IFC and the individual insolvency framework proposed in the IBC will lay the foundations for a healthy market for individual credit.
### References
Banerjee and Duflo (2014), Do Firms Want to Borrow More? Testing Credit Constraints Using a Directed Lending Program, Review of Economic Studies, 81(2): 572-607
Gine, Xavier and Kanz, Martin (2014), The Economic Effects of a Borrower Bailout: Evidence from an Emerging Market, World Bank Policy Research Working Paper No. 7109.
Sane, Renuka and Thomas, Susan (2015, forthcoming), "The real cost of credit constraints: Evidence from micro-finance", The B.E. Journal of Economic Analysis and Policy.
White (2005), Economic Analysis of Corporate and Personal Bankruptcy Law, NBER working paper 11536, July 2005. Published as "Bankruptcy Law," in Handbook of Law and Economics, edited by A.M. Polinsky and Steven Shavell. Elsevier.
### There will be no pigeon singularity
Pigeons were trained to behave like skilled doctors. When a computer is trained to behave like a skilled doctor, we think this is quite portentous. But when a pigeon does this, we never fret about the coming pigeon singularity.
The computational hardware in the pigeon's head is some bunch of neurons. Using rewards and punishment, we are training this neural network. Training neural networks gives the ability to do specific tasks. However, no amount of hardware for special purpose tasks (pigeons or dogs or Watson) closes the gap which separates the machine learning revolution from general intelligence, creativity and consciousness. It is genuine progress in engineering when dogs can smell abnormal blood sugar or when computers can drive cars. These do not, however, add up to strong AI.
## Friday, November 13, 2015
### Bankruptcy reforms: It's not the ranking that matters
by Rajeswari Sengupta.
India lacks a single, comprehensive law that addresses all aspects of insolvency of an enterprise. The current system of corporate insolvency resolution is characterised by a fragmented legal framework, and weak institutions. The absence of a well-functioning resolution mechanism results in poor outcomes in terms of timeliness of resolution, recovery rate, as well as costs associated with insolvency proceedings. These problems have also contributed to the ongoing balance sheet crisis of banks and their borrowers. In response to these problems, in 2014, the Ministry of Finance set up the Bankruptcy Legislative Reforms Committee' (BLRC), to recommend a consolidated insolvency and bankruptcy resolution code that would be applicable to all non-financial enterprises and would replace existing laws and resolution guidelines. The report and draft bill of BLRC were released on 4 November 2015.
In recent times, the Indian government has taken great interest in addressing the problems of doing business in India and improving India's rank in the World Bank's Doing Business' report. One of the parameters evaluated in this report is Resolving Insolvency'. India's current rank in the ease of Resolving Insolvency' is 136 in the world, which is roughly the same as the overall rank for India, which is 130. Reform of the corporate insolvency resolution framework is therefore an important element in the overall agenda of improving India's ease of doing business ranking.
In this article we ask the question: What would the legislative reform proposed by the BLRC do for India's score in 'Resolving Insolvency' in the 'Doing Business' rankings?
The draft bill lays out a formal resolution process that is a significant improvement on the procedures currently in place. It provides for collective decision making by creditors, aims to lower information asymmetry in decisions through the use of information utilities, gives the right to initiate insolvency proceedings to both debtor and creditors, clarifies the role of the adjudicating authority, facilitates the conduct of insolvency proceedings by professionals, creates a calm period when a moratorium is imposed and negotiations can take place, specifies well defined penalties for fraud, and provides for a linear flow of events from viability assessment to resolution. In terms of the insolvency resolution process, the proposed law looks good on paper. If enacted, it will result in the ancillary benefit of improving India's score in the 'Resolving Insolvency' parameter in the 'Doing Business' report.
It is important to emphasise that the World Bank's Doing Business' rankings reflect a de jure approach of evaluating what should happen under the stated law, as opposed to what does happen in practice. If a bill, which meets certain criteria, is enacted, the ranking of a country in ease of Resolving Insolvency' will improve even if the working of the insolvency resolution process, on the ground, diverges from what is intended in the law.
### Impact of BLRC proposals on India's 'Strength of Insolvency Framework Index' score
The overall 'Resolving Insolvency' parameter consists of two indicators: 'recovery rate' and 'strength of insolvency framework index'. In this article, we focus on the second: the 'strength of insolvency framework index'. This indicator analyses the strength of the legal framework applicable to insolvency proceedings and tests whether a country has adopted internationally recognised good practices in the area of insolvency resolution. We analyse the change that could come about for India's score in this indicator if the draft bill is passed by the Parliament.
The 'strength of insolvency framework index' is the sum of four component indices. Each component index in turn consists of multiple sub-components, ranked on a scale of 0-1. The overall 'strength of insolvency framework index' is measured on a scale of 0-16, with cumulative scores across 16 sub-components.
| Indicator | Present scenario (DB 2016) | Under the new bill |
|---|---|---|
| Strength of insolvency framework (0-16) | 6.0 | 12.0 |
| A. Commencement of proceedings (0-3) | 2.0 | 2.5 |
| Procedures available to debtor | Liquidation only (0.5) | Reorganisation & Liquidation (1.0) |
| Creditor filing for debtor's insolvency | Yes, Liquidation only (0.5) | Yes, Reorganisation Only (0.5) |
| Basis for insolvency commencement | Inability to pay debts (1.0) | Inability to pay debts (1.0) |
| B. Management of debtor's assets (0-6) | 3.0 | 5.5 |
| Continuation of contracts supplying essential goods & services | No (0.0) | Yes (1.0) |
| Debtor's rejection of overly burdensome contracts | Yes (1.0) | Yes (1.0) |
| Avoidance of preferential transactions | Yes (1.0) | Yes (1.0) |
| Avoidance of undervalued transactions | Yes (1.0) | Yes (1.0) |
| Debtor obtaining credit post commencement | No (0.0) | Yes (1.0) |
| Priority to post commencement credit | No (0.0) | Yes, over all creditors (0.5) |
| C. Reorganisation proceedings (0-3) | 0.0 | 1.0 |
| Creditors voting on proposed reorganisation plan | No (0.0) | Yes (1.0) |
| Dissenting creditors receive at least as much as in liquidation | No (0.0) | No (0.0) |
| Creditor class-based voting & equal treatment | No (0.0) | No (0.0) |
| D. Creditor participation (0-4) | 1.0 | 3.0 |
| Creditor approval for selection/appointment of IP | No (0.0) | Yes (1.0) |
| Creditor approval for sale of debtor's assets | No (0.0) | Yes (1.0) |
| Creditor right to request information from insolvency representative | No (0.0) | No (0.0) |
| Creditor right to object to decisions accepting/rejecting claims | No (0.0) | Yes (1.0) |
Similar analysis is required for the other element, the recovery rate', so as to fully assess how the present proposals for bankruptcy reform would change the overall Resolving Insolvency' score for India in the Doing Business' rankings.
### Limitations of this thinking
Many times, in economic measurement, we are able to observe the de jure status, but what really matters is the de facto outcome. This distinction is an important one when using the World Bank's 'Doing Business' scoring.
On a de jure basis, the draft bill will improve India's score in the ease of Resolving Insolvency' parameter, and there may be some merit in this as a first step. However, while we would like to have an improved Doing Business' score, we in India should primarily focus on de facto outcomes about recovery rates, and not be satisfied with de jure improvements alone. If the latter were the sole objective, cosmetic changes to the Companies Act 2013 is all that is required.
In a recent paper, Hallward-Driemeier and Pritchett, 2015 show that there is practically no correlation between the findings recorded in the Doing Business' report and the ground realities of doing business. This derives from the large gaps that often exist between laws and regulations on paper, and the manner in which these are enforced in reality, especially true of developing countries.
For instance, one of the questions asked in the World Bank questionnaire is: Does the insolvency framework allow a creditor to file for insolvency of the debtor? While the answer to this would be 'Yes' if the draft bill proposed by BLRC is enacted, in reality the filing process might be too cumbersome in the absence of good enabling infrastructure. This, in turn, would affect the timeliness of resolution and might also distort incentives for creditors to trigger insolvency proceedings to begin with. But these issues are ignored owing to the way in which the question is designed.
The BLRC report has emphasised the substantial scale of institution building, and State capacity construction, that is required in order for the insolvency and bankruptcy processes to work well. Effective implementation of the draft insolvency bill requires building four institutional pillars:
1. A private competitive industry of information utilities
2. A private competitive industry of insolvency professionals
3. Adjudication infrastructure (tribunals) with the capacity to handle insolvency cases
4. A well-functioning regulator
There are several concerns about the draft bill on these four areas. Much more work is required on these fronts, in terms of strengthening the draft bill and implementing it.
Focusing on improving India's ranking in the ease of doing business report is thus problematic. There is a danger of engaging in 'isomorphic mimicry', where the reform process gains legitimacy by adopting international 'best practices' in the drafting of the bill without actually obtaining the desired outcome. We need to devote energy and resources to a full implementation plan that involves perfecting the law, creating good institutions and building adequate State capacity. The outcomes that matter are recovery rates, equal treatment of unsecured creditors, treatment of bond holders, etc. -- not the Doing Business ranking.
## Thursday, November 12, 2015
### The new FDI policy: Well begun is not half done
by Bhargavi Zaveri and Radhika Pandey
A recent press release issued by the Central Government proposes to usher in 'radical' FDI-related reforms touching 15 major sectors of the economy. Key changes to the FDI framework include raising the limit for FDI approvals from the Foreign Investment Promotion Board (FIPB) to Rs 5,000 crore from Rs 3,000 crore, increasing foreign-investor limits in several sectors including private banks, defence and non-news entertainment media as well as allowing foreign investors to exit from construction development projects before completion.
The stated objective of these reforms is to "ease, rationalise and simplify the process of foreign investment" in the country. The reforms comprise of easing sectoral caps in some sectors, moving some sectors from the approval route to the automatic route and granting special dispensations to entities owned and controlled by NRIs. These measures could benefit certain sectors and augment FDI flows. Going forward, these measures may also propel our investment rankings. However, like most reforms to the capital controls framework of the post-1999 period, this purported reform also ignores the substantive issues of ad-hocism, executive discretion and the absence of rule of law that pervade the administration of capital controls. Unless we address these fundamental issues, incremental reforms of this nature will be of little help.
This post focuses on four such mistakes that the recent press release continues to make.
### Distinguishing between investment vehicles
Principle: The capital controls framework should be agnostic to the channel through which foreign money is being routed. The relation between ownership and management, which is the basis for the distinction between a company and a Limited Liability Partnership, should not be a concern for the capital controls framework.
Today, FEMA has different rules for treating foreign investment made in an Indian company, an Indian partnership firm, an Indian trust and an Indian LLP. For example, while non-residents are allowed to invest in an Indian company, only NRIs are allowed to invest in an Indian partnership firm on a non-repatriation basis. Foreign investment in a LLP is allowed only under the Government route. Moreover, to be eligible to accept FDI, the LLP must be operating in a sector where 100% FDI is allowed under the automatic route and where there are no FDI-linked performance conditions. Further, a LLP having foreign investment is not allowed to make downstream investment in India.
The press release proposes to allow FDI in a LLP under the automatic route. It also proposes to allow a LLP with FDI to make downstream investment in a sector in which 100% FDI is allowed under the automatic route and there are no FDI-linked performance conditions.
A liberalisation policy must be indifferent to the vehicle through which FDI comes into India. Whether FDI comes in through a company or a LLP, the same rules must apply. The reporting requirements may differ depending on the investee entity. So, for instance, for LLPs or trusts with FDI, the regulatory framework may prescribe more detailed reporting requirements, as compared to a company with FDI. Restrictions on capital flows must not be driven by the nature of the investee entity.
By creating artificial restrictions which are driven by the nature of the investing entity, the FDI policy only adds to the complexity of investing in India. For example, take a situation where a foreigner is interested in investing in an advertising agency, a sector where 100% FDI is allowed under the automatic route. She makes the investment in a LLP engaged in advertising. Now, the advertising agency proposes to expand into another activity, say, print media which is under the Government route. Under the current policy, the LLP will not be able to expand its operations as print media is under the government route nor will it be able to incorporate another company, as under the proposed policy, downstream investment by a LLP with FDI is permitted only in sectors in which 100% FDI is allowed under the automatic route. However, this problem would not crop up if she invests in an Indian company engaged in advertising.
The press release allowing FDI in a LLP under the automatic route is, thus, a mere addition to the error of mandating different rules for FDI in different kinds of entities.
### Special dispensations to NRIs
Principle: For the purpose of administering capital controls, the rules for foreign money should be similar whether it comes through an NRI owned and controlled company or through any other overseas investor.
Currently, NRIs have certain benefits as compared to other non-residents when investing in India. The press release proposes to extend these benefits to entities owned and controlled by NRIs. There are two issues involved here. First, to address the concerns of money laundering and terrorist financing, the entities owned and controlled by NRIs should only be allowed through FATF-compliant jurisdictions.
Second, this proposal is tantamount to a revival of the concessions which were granted to Overseas Corporate Bodies (OCBs) under FEMA, which were eventually withdrawn in 2003. While the concerns relating to OCBs were largely related to ownership of OCBs accessing the Indian securities markets under the Portfolio route, OCBs were de-recognised as an investor class altogether. One of the concerns regarding OCBs was the ownership of these OCBs, and whether they were legitimate vehicles for investment by NRIs. Under the Consolidated FDI policy, a NRI is allowed to invest in the capital of a partnership or proprietorship in India on a repatriation basis with the previous approval of RBI.
With the new framework in place, this benefit will be extended to entities owned and controlled by NRIs. It is unclear how the framework will be implemented to ensure that the shares of the foreign entities owned and controlled by NRIs are not transferred to non-residents who are not NRIs. If the NRIs want to sell their control, will the privileges given to the companies owned and controlled by NRIs have to be withdrawn? How will the Government know if the company is still owned and controlled by NRIs? To avoid such complexities, a rational solution would be to harmonise the capital controls framework for all kinds of non-resident investors -- be it NRIs or foreign investors.
There is no economic reason for treating a certain class of non-residents and their investments differently from other non-residents. For example, this press release proposes to exempt NRIs from the 3-year lock-in period imposed on non-residents investing in the real estate sector. Presumably, the reason for imposing a 3-year lock in period for foreign investors is to ensure that they do not pre-maturely withdraw their capital from the project. There is no reason for not applying this line of reasoning to investments made by NRIs in this sector. Uniform treatment of all non-residents is more important to the ease of doing business in India, than favouring NRI investments.
### Sectoral exemptions
Principle: Financial regulation including regulation of capital controls should be motivated by market failure. The capital controls framework should not be designed to protect Indian promoters. Contractual obligations between the investor and investee should not be forced through the capital controls framework. The rules should provide a level playing field for all investors.
A key highlight of the new FDI regime is that it allows foreign investors to exit before the completion of the project in the construction sector. This is a laudable step. At the same time, it imposes a lock-in period of three years calculated with reference to each tranche of foreign investment. This is undesirable. The terms and conditions on which a foreign investor may exit an Indian real estate business, must be purely contractual and based on commercial wisdom.
Further, certain sectors like Hotels and Tourist Resorts, Hospitals, Special Economic Zones (SEZs), Educational Institutions, Old Age Homes and investment by NRIs are proposed to be exempted from the condition of lock-in. It is difficult to decipher the principles guiding the decision for exempting certain sectors from the lock-in condition while imposing conditions on others. This creates problems of political economy. Sectors which are not given the lock-in exemption will be encouraged to lobby and persuade the authorities to add them to the list of exempted sectors. This may result in undesirable consequences including additional administrative workload without addressing any fundamental market failure.
### Booklet of press releases and notifications relating to FDI
Principle: Capital controls must be administered through a legally enforceable instrument. The complex maze of regulatory instruments should be replaced by one authoritative statement of the law. The private sector should then be free to make many user-friendly documents.
Capital controls is governed by the Foreign Exchange Management Act, 1999 (FEMA). The RBI has the authority to frame regulations under the Act. Capital controls is governed by foreign exchange management (FEM) regulations. Amendments to these regulations must be tabled by the RBI (as notifications) and approved by Parliament in order to be legally enforceable. The Department of Industrial Policy and Promotion (DIPP), Ministry of Commerce and Industry, makes policy pronouncements on FDI through Press Notes/Press Releases which are notified by the Reserve Bank of India as amendments to the Foreign Exchange Management (FEM) Regulations. The procedural instructions are issued by the Reserve Bank of India through A.P. (DIR Series) Circulars. The RBI also issues master circulars that act as a compendium of the notifications/circulars issued in the previous year, without necessarily covering all the details. The DIPP also issues a consolidated FDI policy that subsumes all Press Notes/Circulars that were in force. The regulatory framework thus consists of Act, Regulations, Circulars, Master Circulars, Press Notes, Press Releases and a Consolidated Policy on FDI.
The press release proposes to add another instrument to this list. It instructs the DIPP to consolidate all its instructions in a booklet so that investors do not have to refer to several documents of different kinds. The practice of issuing binding instructions through 'policy documents' is one of the most fundamental errors of our capital controls framework. No amount of consolidation or simplification can make up for this error.
Executive action which restricts the actions of private citizens must be taken only through a legally enforceable instrument. This is because a legally enforceable instrument has gone through the rigors of law making, will go through some accountability mechanism (such as tabling before Parliament in case of delegated legislation) and can be challenged in a court of law. 'Policy decisions' go through none of these checks and balances. There is also the danger of easy reversal.
At present, the processes we follow for making the rules for entry and exit of foreign investors in India are largely driven by `policy actions'. First, sectoral caps, terms and conditions of foreign investment and its repatriation, are virtually "regulated" through a policy document which neither goes through the rigors of law making nor is subject to the accountability of delegated legislation, such as tabling before Parliament. Second, even if the policy is translated into a binding instrument (namely, regulations by RBI), the process of translation suffers from time-lags and inconsistencies.
### Conclusion
Improving the ease of doing business in India requires more than sector-specific initiatives or making special dispensations. The problems run deep. They are ultimately grounded in the Foreign Exchange Management Act, 1999, and the subordinate legislation and institutional machinery which enforces it. Solving problems will require going to the root cause, as has been recommended by numerous expert committees.
### The limits of grassroots empowerment
#### Citizens -> Government
Ever since Plato, we have known that direct democracy does not work well. When faced with a question like one-rank-one-pension, we will not get the right answer by asking the people. The mechanism that works better is to have a representative democracy, also termed a "republic", where the people elect representatives who write law. The recourse to referendums where the people vote is a bad way to organise things.
With modern technology, the transactions costs of voting are no longer a barrier. It's quite easy to conceive of mobile phones offering one or two resolutions on which the people vote, every day. However, the voice of the people is not a good way to run a country, as the people do not have knowledge of the machinery of government. The people should be involved at the deeper level of values and objectives. The people should elect representatives who will pursue objectives that the people like. The voice of citizens should shape the priorities of their representatives, but it is the representatives who should get engaged with the wonkish details.
#### Shareholders -> Firms
The same three step process is found with firms. Shareholders are the ultimate beneficiaries of well run firms. But shareholders are seldom the right source of decisions about the management of firms. Shareholders should recruit a board of directors who would then be akin to the legislature of the firm.
With modern technology, the transactions costs of voting are no longer a barrier. It's quite easy to conceive of mobile phones offering one or two resolutions on which the shareholders vote, every day. However, the choice of shareholders is not a good way to run a company, as shareholders do not have knowledge of the machinery of the firm.
#### Customers -> Firms
Josh Dzieza has an article in The Verge about customers rating employees of firms, and thereby working as supervisors of employees, which links to similar themes. The article has an element of outrage about micromanagement of employees, which I don't share. All management is about principal-agent problems, and if customers help improve monitoring of employees, in general, that's a good thing. (In equilibrium, more intrusive supervision may go with higher wages, if many employees dislike that level of monitoring).
But I think there is a deeper point here which is worth mulling about. Is putting customers in charge a bit like putting shareholders in charge or putting voters in charge? Customers may not always choose things that are best for organising production. Managers, and not customers, have the full picture of how production is organised. The best price / performance for customers may not come from giving too much power to customers.
#### Conclusion
Ubiquitous computer and telecom technology has made it possible to organise the world in ways that empower the grassroots. To many people, this is instantly attractive, as a way of breaking away from the hierarchical power structure of the pre-technological world. We are always sympathetic to voters or shareholders or consumers being empowered with mobile phones.
There are places where hyper-empowered citizens are a good thing. But this is not true in general. Most of the time, we will need management who take responsibility for organising things in ways that are good for voters, shareholders or consumers.
### Inconsistencies and forum shopping in the Indian bankruptcy process
by Aparna Ravi.
The current legal framework for resolving bankruptcy in India is broken, and a committee created in 2014 has proposed a new legal framework to fix it. The effort to fix a broken law is not unusual by world standards. Most countries, even those with stable bankruptcy outcomes that earn them high rankings in the World Bank Doing Business report, appear to be continuously creating new statutes or amending existing ones. For a sense of perspective, the current UK bankruptcy framework was put in place in 2002. In the US, the 1978 Bankruptcy Reform Act was the single biggest change that established specialised bankruptcy courts. But this has been followed by changes in 1984, 1994 and 2005, indicating a decadal cycle of review and reforms.
What is unusual about the Indian reforms is that these reforms stemmed from a general discontent with the system, rather than specific tangible evidence on measured outcomes. This is a problem particularly for creating a new legal framework, because you would expect that such evidence is a prerequisite to guide the nature and the form of the required reforms. One analysis that was available to the Committee was a paper in the Journal of Corporate Law Studies by Kristen van Zweiten, 2015. The paper analyses an extensive set of high court judgments related to liquidation in India. The analysis of these judgements reveal judicial biases in the form of a pro-rehabilitation stance at the High Courts, which in turn, leads to a reluctance to liquidate a business that is judged unviable. The paper records that the bias has contributed to delays and ineffective resolution of corporate insolvencies in India.
However, liquidation is only one part of a bankruptcy framework. The bankruptcy resolution framework is typically made up of three parts:
1. Enforcement of debt by creditors, individually or as a group;
2. Collective assessment of whether the debt can be maintained through a financial rearrangement or entity reorganisation to keep the debt viable; and
3. Bankruptcy, where debt is liquidated if it is found to be unviable.
As in other parts of the world, the Indian framework has separate laws for each part. But unlike in other parts of the world, the Indian framework deviates in not having a single law but, rather, is fragmented across multiple laws. More importantly, the different laws apply differently to different participants. For example:
1. Debt enforcement is available only to banks and selected financial institutions through the Recovery of Debt due to Financial Institutions (RDDBFI) Act, 1993, and Securitisation and Reconstruction of Financial Assets and Enforcement of Security Interest (SARFAESI) Act, 2002.
2. Collective action on enforcement is found in separate legislation depending upon the form of the debtor. For firms, it was first introduced in the Sick Industrial Companies Act, 1985 (SICA), and is now in the Companies Act, 2013. For partnerships, it resides in the Indian Partnership Act, 1932; for individuals, it resides in the Presidency Towns Insolvency Act, 1909 (for the Presidency towns) and the Provincial Insolvency Act, 1920 (for the rest of India).
3. Bankruptcy of organised enterprises is covered in the laws related to the enterprise. For example, a firm is covered by the provisions on winding up in the Companies Act, 1956 and 2013, while individuals are covered by the respective insolvency laws listed above. [1]
In a recent paper, An analysis of collective insolvency resolution and debt recovery proceedings in India, I extend the evidence on the poor performance of the legal framework for firm insolvency and bankruptcy by analysing a broader set of judgements in insolvency and bankruptcy resolution. I analyse 45 judgments, heard between 2003 and 2014, that were selected to cover a variety of different proceedings, and to involve different types and numbers of creditors and other stakeholders.
Even this relatively small sample is useful in identifying two themes that contribute to poor outcomes in insolvency and bankruptcy. The first theme is the pro-rehabilitation stance adopted in adjudication during liquidation, which is consistent with the finding in van Zweiten, 2015. The second theme is the set of conflicts arising from having multiple laws and multiple fora for adjudication. One of the core objectives of a bankruptcy process is to incentivise the debtor and creditors to negotiate towards an outcome that maximises economic value in insolvency. The paper finds that, in India, the multiple laws that make up the bankruptcy framework instead incentivise creditors and debtors to act in their own individual interests during this process.
Further, the analysis highlights the conflicts that arise from having the jurisdiction of these multiple laws resting with different adjudicating fora. For example, the High Courts are the adjudicating forum for the winding up process under the Companies Act 1956 and 2013, the BIFR is the adjudicating forum under SICA 1985, and the civil courts are the adjudicating forum for the individual insolvency acts. On the other hand, the adjudicators for the debt enforcement laws are the Debt Recovery Tribunals (DRTs) and Debt Recovery Appellate Tribunals (DRATs). This results in the process being diverted from swift resolution to one with extreme and frequent delays. Different laws being implemented across multiple jurisdictional fora is a key element that makes it easier to delay the process of resolution.
### Findings
The first outcome from the analysis is a measure of the delays in arriving at the final judgement. 17 of the 42 High Court judgments for which data was available took over 10 years for resolution (as measured from the date of commencement of the first action). 24 of the 42 took over 5 years. Seven of the 13 DRT/DRAT judgments took over 5 years for resolution. One of the causes for delays is the existence of multiple fora that the creditors and the debtor need to traverse to reach a resolution. In several of the cases reviewed, there were typically at least a few years lost between the BIFR providing a liquidation opinion and the High Court issuing a winding up order.
The conflicts that can arise from having fragmented laws and adjudicators is best illustrated with an example. One of the cases reviewed involved the following different actions [2]:
• Secured creditor 1 filed an application in the DRT for debt recovery.
• Secured creditor 2 filed a company petition for winding up in the High Court.
• Secured creditor 3 entered into an MOU with creditor 1 to be paid out of creditor 1's recovery.
• A trade creditor that leased machinery to the debtor initiated proceedings invoking the arbitration clause in the contract.
• Secured creditor 4 initiated proceedings under the SARFAESI Act and sold assets by auction.
• Unsecured creditor 5 that had supplied a boiler to the debtor filed for debt recovery in the civil court.
While this may be at the extreme end of the spectrum, the majority of cases involved at least two or, more often, three parallel proceedings in different fora. In addition, the present law permits the process of winding up the firm and debt recovery from the same firm to run in parallel to each other. Particularly where the judiciary is fragmented across courts and tribunals, this leads to further confusion for creditors as well as debtors. Creditors remain uncertain of recovery even after proceedings have closed, as it could always be challenged on the basis of another debt recovery or winding up action initiated against the same debtor. This is exacerbated by a pervasive lack of common information for cases against the same debtor. Often in the cases reviewed, one creditor initiated debt recovery unaware that similar actions had been initiated against the same debtor, and discovered this only much later in the process. Thus, the analysis demonstrates that the lack of a single, linear law, compounded by the lack of common information, leads to several instances of conflicts in using the law to resolve insolvency and bankruptcy.
When there is a lack of clarity in the law, the responsibility of resolving conflicts across these different laws and their interaction falls upon the High Court judge. Examples of some questions these judges had to grapple with are:
1. Can the debtor's assets be sold pursuant to enforcement action under SARFAESI while a winding up petition is pending in the High Court?
2. Can a creditor initiate proceedings under the RDDBFI Act while SARFAESI enforcement action is ongoing?
3. Does the High Court have jurisdiction over debt recovery proceedings in the DRT once the winding up process has commenced?
4. If the Board for Industrial and Financial Reconstruction (BIFR) under SICA 1985 has referred a company for liquidation to the High Court, but the High Court is yet to pass a winding up order, can a creditor bring an action for debt recovery in the meantime [3]?
Needless to say, the High Courts across the country interpret these conflicts differently. For example, on question (1), the High Court of Telangana and Andhra Pradesh [4] held that the debtor's assets could be sold in an auction pursuant to a SARFAESI enforcement action, while both the Madras and Karnataka High Courts ruled that the consent of the official liquidator was required for such a sale [5]. On question (2), the DRT ruled that a creditor could initiate proceedings under SARFAESI while debt recovery proceedings under the RDDBFI Act were ongoing [6]. Nearly two years later, the Patna High Court held that the reverse did not apply and that proceedings under the RDDBFI Act could not be initiated if SARFAESI enforcement action had begun [7]. More than the letter of the law, how effective it is in implementation is shaped by the case law that emerges from its practice on the ground. Such conflicts in case law cause confusion in the resolution of similar future cases, further compounding the lack of certainty in the resolution of insolvency and bankruptcy of firms.
The final observation from the case analysis worth touching on is that even though the law enables debt enforcement without the requirement of a court order (as in SARFAESI), this does not always work in practice. Debtors have the ability to challenge the enforcement of SARFAESI actions in the DRT. When this occurs, courts and tribunals often misinterpret the extent of their jurisdiction under the Act. Under Sections 17(2) and (3) of the SARFAESI Act, the role of the DRT or court when considering a challenge to enforcement action is to examine whether the secured creditor's action was taken in accordance with the provisions of the SARFAESI Act and related rules. In practice, however, the DRTs and DRATs often overstepped this line and went on to adjudicate the substance of the claim itself, for example by determining the amount owed, or by imposing or changing conditions set by the creditor, such as the amount of a deposit. It is, of course, difficult to ascertain the proportion of cases in which SARFAESI enforcement has been allowed to go unchallenged as opposed to those occasions on which it has been challenged in court. However, it appears that in cases where a debtor does challenge SARFAESI enforcement, creditors have experienced long drawn out struggles in the courts.
### Implications for bankruptcy law reform
The UNCITRAL Legislative Guide on Insolvency, for example, states nine broad objectives of an insolvency law regime, all of which rest on having a collective mechanism for insolvency resolution:
1. Provision of certainty in the market to promote efficiency and growth;
2. Maximization of value of assets;
3. Striking a balance between liquidation and reorganization;
4. Ensuring equitable treatment of similarly situated creditors;
5. Provision of timely, efficient and impartial resolution of insolvency;
6. Preservation of the insolvency estate to allow equitable distribution to creditors;
7. Ensuring a transparent and predictable insolvency law that contains incentives for gathering and dispensing information;
8. Recognition of existing creditor rights and establishment of clear rules for ranking priority of claims; and
9. The establishment of a framework for cross-border insolvency.
The review presented in Ravi, 2015 identifies four observations about the bankruptcy process in India that stand against these UNCITRAL principles of a sound bankruptcy process:
• A legal framework fragmented across separate rights of debtors and creditors in collective action and debt recovery, as well as across different adjudicating fora.
• Lack of clarity in rights of the creditors despite strong debt enforcement laws.
• Delays in reaching final judgement.
• Delays in implementing bankruptcy.
The analysis points out that the fragmentation of the laws and adjudication fora has been a dominant factor in leading to poor bankruptcy outcomes, such as delays in resolution. Thus, a key requirement in the reforms of the legal framework for bankruptcy is to have a unified law. Such a law ideally ought to cover all aspects of a debtor in distress as well as apply to all stakeholders. While simply piecing together the multi-layered framework will not make all the problems go away, such a move would greatly help with the efficiency and predictability of the process, two important indicators for the success of any insolvency law regime.
A single law will also have the benefit of a single adjudicating authority to hear all cases of insolvency and bankruptcy, which will eliminate the incentives of debtors and creditors to undertake forum shopping to resolve insolvency.
Another critical component of change is to counter judicial innovations that have contributed to the delays in insolvency resolution. A new bankruptcy process should stipulate clear timelines for different processes, make it difficult (or close to impossible) to reverse winding up orders and provide sufficient guidance to limit the exercise of unfettered discretion by the single adjudicating authority.
Finally, it is important to consider the interaction between collective insolvency proceedings and debt recovery mechanisms. The purpose of insolvency law has often been described as providing sufficient incentives for creditors to favour collective insolvency proceedings over individualised debt enforcement actions. In India, however, most reforms in recent years such as SARFAESI have focused on providing mechanisms for secured creditors to recover through individual enforcement action. These initiatives are understandable and necessary in light of delays in court proceedings and the significant abuse of SICA by debtors who used the pretext of a stay to impede recovery by creditors. Yet, while banks and secured creditors may have had some success with SARFAESI (the extent of this success is itself questionable), this focus has come at the cost of an organized insolvency process that preserves value and benefits all stakeholders. A new unified bankruptcy code is an opportunity to reverse this trend by providing a linear and time bound mechanism for collective insolvency rather than debt recovery.
### Footnotes
[1] The new Companies Act 2013 includes new provisions that deal with rescue and rehabilitation (Chapter XIX) and liquidation (Chapter XX). However, these provisions have not yet been notified.
[2] This example is based on the fact pattern in BHEL v. Arunachalam Sugar Mills Ltd., (O.S.A. Nos. 58, 59, 63, 64 and 81 of 2011, decided On: 12.04.2011), but similar parallel proceedings were found in a large majority of the cases reviewed.
[3] Sri Bireswar Das Mohapatra and Anr. V. State Bank of India, W.P. (C) No. 8567 of 2006. Decided On: 17.08.2006.
[4] Indian Bank v. Sub-Registrar, Writ Appeal Nos. 1420 and 1424 of 2013 and O.S.A Nos. 34 and 35 of 2013, decided on 11.11.2014.
[5] BHEL v. Arunachalam Sugar Mills Ltd., O.S.A. Nos. 58, 59, 63, 64 and 81 of 2011 Decided On: 12.04.2011; Kritika Rubber Industries v. Canara Bank, C .A. No. 190/2008 in Co. P. No. 167/1999. Decided On: 13.06.2013.
[6] Bank of India v. Ajay Finsec Pvt Ltd and Ors (OA No. 167 of 2001, decided on 28.11.2003).
[7] M/S Punea Cold Storage v. State Bank of India (AIR 2013 Part I; II (2013) BC 501 Patna HC).
## Wednesday, November 11, 2015
### A blueprint for overcoming systemic risk
by Avinash Persaud.
It is a testament to the importance of getting financial regulation right that, almost ten years since the emergence of a crisis in sub-prime mortgages in the US, those countries most affected by the unfolding credit crunch are still struggling to put it behind them. Yet recent announcements over the amount of capital banks are required to hold against risky assets, and over what constitutes capital, reveal that the fundamental flaws that plagued the last approach to regulation remain. The Financial Stability Board, created by the G20 nations, announced on November 9, 2015, that the most systemically important lenders must have a total loss absorbing capacity, including bail-in securities, equivalent to at least 16% of risk-weighted assets in 2019, rising to 18% in 2022. Financial regulation has yet to find its compass.
Too many financial supervisors consider regulation to be an exercise in “de-risking”. They seek to curb risk by requiring banks to put up biting amounts of capital against risks. However, risk shares much with the first law of thermodynamics: energy can neither be created nor destroyed, just transformed. When we effectively tax risk in one place it shifts to where it is untaxed, like shadow banks. When we find it there and tax it again, it merely shifts once more, perhaps to non-financial institutions, and so on. The logical extension of this approach is that risk will keep on shifting until it ends up where we can no longer see it. That is not a good place for risks to be. The exercise should instead be about incentivising risk to flow out of dark corners, to where it is best absorbed.
Another fundamental flaw is the notion of risk-sensitivity on which the recently announced capital requirements are built. This idea suffers from the post hoc ergo propter hoc fallacy. Banks don’t topple over from doing things they know are risky, but from doing things they were convinced were safe before they turned risky. Against loans they think are risky, banks demand extra guarantees, collateral, interest and repayment reserves. Against their reported risk-weighted assets, they were never as well capitalised as just before the crisis. It’s not the things you know are dangerous that kill you. And under the risk-sensitive approach they had the least capital against those assets the models thought were safe before they turned bad.
If that was not bad enough, in their practice of risk-sensitivity, banks, corralled into using the same risk models and data sets, ended up buying the same assets that the models calculated had the best yield to safety ratio in the past. They were then forced to exit these crowded trades at the same time when there was a disturbance in volatilities and correlations. What has been generously called the Persaud Paradox of market-sensitive risk management – the observation of safety creates risks – reveals the common fallacy of composition of regulation: Trying to rid individual financial institutions of risk does not make the financial system safe.
A further fundamental error is the treatment of different risks as if they can be added up together and the aggregate amount of risk hedged with capital, independently of how it is made up. The inconvenient reality is that different types of risk require different hedges, and capital is not always the best hedge. Moreover, the right hedge for one risk may make another risk greater. For instance, the way to hedge liquidity risk (which is the risk that were you forced to sell an asset tomorrow it would fetch a far lower price than if you could wait to find an interested buyer) is by having long-term funding to tide you over. If markets became illiquid and you were short-term funded, no tolerable amount of capital would save you. Credit risks, on the other hand (the risk that someone defaults on payments to you), rise the more time you have. Matching credit risk to long-term funding would increase credit risks. The way to hedge credit risks is to diversify across assets, not time, and to have capital to make up the possible shortfall.
The solution to these three fundamental flaws in the current approach to regulation is presented in my recently published book: Reinventing Financial Regulation. The key mechanism of any solution is incentives. Although many see bad and unethical behaviour in the crisis, most of the behaviour that contributed to the crisis was incentivised, and because of those incentives it would have taken place anyway. Banks sold credit risks to institutions that had no capacity to hedge or absorb credit risks, because the banks had to put up capital against credit risks and the special purpose vehicles, insurance firms and hedge funds that bought them did not. In place of the credit risks that banks could have diversified across their customers, banks bought illiquid instruments that they could not so easily hedge when markets froze, like long-term mortgages, loans to private equity investments and indecipherable combinations of credit instruments. They did so because illiquid assets had higher yields but regulatory capital was driven off the credit rating. Locking up bankers is a satisfying rallying call but will not work to moderate the booms that lead to the busts if we do not also address the incentives.
Financial institutions should be required to put up capital or reserves against the mismatch between each type of risk they hold and their innate capacity to hold that risk. Risk capacity is not risk appetite. It is not determined by your ability to measure the economic cycle, which collectively we have proven to be bad at, but by the ability to naturally hedge a risk. Risk-sensitivity needs to move over for risk capacity. Institutions with long-term funding or liabilities, like life insurers or pension funds, would likely end up not having to put up capital for liquidity mismatches but against the lack of diversity of their credit risks. Banks, with their short-term funding, would probably have to put up a lot of capital against maturity mismatches, but little additional capital if their credit risks were well diversified.
The consequence would be that banks would be incentivised to sell good-quality credit but low-liquidity assets, like infrastructure bonds, to insurance companies and buy in liquid but low-quality credit risks, like corporate bonds, that they could hedge better than others. We would get risk transfers that strengthened the financial system, the exact opposite of those we had in the run up to the financial crisis when risks ended up where there was least capacity to hold them, amplifying the inevitable crisis. The economy as a whole would be able to take more risks, more safely. This would not require onerous levels of capital or bond investors that are somehow supposed to be better at gauging risks than bankers, because risks would be where they could be absorbed and where if they blew up, they would not take down the entire financial system.
## Wednesday, November 04, 2015
### BLRC hands over the draft Insolvency and Bankruptcy Bill
The Bankruptcy Law Reforms Committee has submitted its report and has signed off on the draft Insolvency and Bankruptcy Bill. Today, the Ministry of Finance has released the report on Rationale and Design and the draft bill on its website.
## Tuesday, November 03, 2015
### How to build a sensible tax system
I have a column in the Indian Express today on this.
https://lmcs.episciences.org/4598 | ## Neumann, Eike and Pape, Martin and Streicher, Thomas - Computability in Basic Quantum Mechanics
lmcs:3222 - Logical Methods in Computer Science, June 19, 2018, Volume 14, Issue 2
Computability in Basic Quantum Mechanics
Authors: Neumann, Eike and Pape, Martin and Streicher, Thomas
The basic notions of quantum mechanics are formulated in terms of separable infinite dimensional Hilbert space $\mathcal{H}$. In terms of the Hilbert lattice $\mathcal{L}$ of closed linear subspaces of $\mathcal{H}$ the notions of state and observable can be formulated as kinds of measures as in [21]. The aim of this paper is to show that there is a good notion of computability for these data structures in the sense of Weihrauch's Type Two Effectivity (TTE) [26]. Instead of explicitly exhibiting admissible representations for the data types under consideration we show that they do live within the category $\mathbf{QCB}_0$ which is equivalent to the category $\mathbf{AdmRep}$ of admissible representations and continuously realizable maps between them. For this purpose in case of observables we have to replace measures by valuations which allows us to prove an effective version of von Neumann's Spectral Theorem.
Source : oai:arXiv.org:1610.09209
DOI : 10.23638/LMCS-14(2:14)2018
Volume: Volume 14, Issue 2
Published on: June 19, 2018
Submitted on: March 28, 2017
Keywords: Computer Science - Logic in Computer Science, 03B70, 03F60, 18C50, 68Q55
https://www.mathgenealogy.org/id.php?id=243897 | ## Jens Babutzka
Dr. rer. nat. Karlsruher Institut für Technologie (KIT) 2016
Dissertation: $L^q$-Helmholtz decomposition and $L^q$-spectral theory for the Maxwell operator on periodic domains
Mathematics Subject Classification: 35—Partial differential equations
Advisor 1: Peer Christian Kunstmann
Advisor 2: Lutz W. Weis
No students known.
https://www.physicsforums.com/threads/stuck-on-the-following-integral.129708/ | # Stuck on the following integral
1. Aug 25, 2006
### suspenc3
Hi, I am kinda stuck on the following integral:
$$\int\sqrt{\frac{1+x}{1-x}}dx$$
any hints?
2. Aug 25, 2006
### TD
If you let
$$y^2 = \frac{{1 + x}}{{1 - x}} \Leftrightarrow x = \frac{{y^2 - 1}}{{y^2 + 1}}$$
The integral will become a rational function, losing the square root.
3. Aug 25, 2006
### suspenc3
so are you saying to substitute that for x?
4. Aug 25, 2006
### suspenc3
or by making this substitution, the square root will be taken away
5. Aug 25, 2006
### neutrino
Or you could do the trig substitution $$x = \sin\theta$$
6. Aug 25, 2006
### TD
Yes, use that substitution to lose the square root.
I already solved for x as well, which allows you to easily find dx in terms of dy by differentiating both sides.
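A worked sketch of neutrino's suggestion (this closing step was not posted in the original thread): with $$x = \sin\theta$$, $$dx = \cos\theta\,d\theta$$, and $$\theta \in (-\pi/2, \pi/2)$$,
$$\int\sqrt{\frac{1+x}{1-x}}\,dx = \int\sqrt{\frac{(1+\sin\theta)^2}{1-\sin^2\theta}}\,\cos\theta\,d\theta = \int(1+\sin\theta)\,d\theta = \theta - \cos\theta + C = \arcsin x - \sqrt{1-x^2} + C,$$
which can be confirmed by differentiating the result.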
http://crypto.stackexchange.com/users/2722/uos%c9%90%c5%bf?tab=activity&sort=all | # uosɐſ
# 29 Actions
- Feb 26: revised "Homomorphic encryption for vector addition" (added idea about Indistinguishability Obfuscation)
- Feb 6: comment on "Homomorphic encryption for vector addition": Oh, ok. I am ok with anything with a large and identical p, q, r. Three signed 64-bit values is plenty for this.
- Feb 5: comment on "Homomorphic encryption for vector addition": @mikeazo - do you want to add an answer about the SIMD FHE you mentioned on the other question comment?
- Feb 5: comment on "Homomorphic encryption for vector addition": D.W. - Thanks for the added detail. I tried to get some help understanding your response on chat, but didn't get far. What is the significance of p, q, and r? My first guess is that it is related to the bit-size of x, y, and z since Z/pZ is an integer field mod p, right? - so 2^32 if they're 32-bit (although signed is preferred). But I think that's a bad guess since you seem to anticipate that p, q, and r would be different and perhaps prime.
- Feb 3: awarded Quorum
- Feb 2: revised "Homomorphic encryption for vector addition" (some of the comments discussion)
- Jan 31: revised "Homomorphic encryption for vector addition" (added 25 characters in body)
- Jan 31: awarded Commentator
- Jan 31: revised "Homomorphic encryption for vector addition" (added 2 characters in body)
- Jan 31: comment on "Addition-only PHE in F#": ok, ready: Homomorphic encryption for vector addition. I did not mention SIMD because I didn't want to steal your thunder.
- Jan 31: asked "Homomorphic encryption for vector addition"
- Jan 31: revised "Addition-only PHE in F#" (deleted 1 character in body)
- Jan 31: comment on "Addition-only PHE in F#": But if I wanted to extend this to multiple dimensions, I'd need Gentry? $(\varepsilon(x), \varepsilon(y), \varepsilon(z))$ is not as ideal as $\varepsilon((x, y, z))$ because I don't want the components to be reusable independently like that. Is there an encoding of $(x, y, z)$ that makes it compatible with the simple $\oplus$?
- Jan 31: awarded Scholar
- Jan 31: awarded Supporter
- Jan 31: accepted "Addition-only PHE in F#"
- Jan 31: comment on "Addition-only PHE in F#": Thanks! The padding thing was due to my impression that HE is an undesirable property normal crypto uses and thus padding is added which interferes with HE, i.e. multiplication in RSA. But whatever, this is good info, thanks.
- Jan 30: asked "Addition-only PHE in F#"
- Sep 11: awarded Editor
- Sep 11: revised "Security considerations for partially shared password databases" (added 103 characters in body)
http://clay6.com/qa/44215/what-is-the-work-done-in-moving-a-test-charge-q-through-a-distance-of-1-cm-
# What is the work done in moving a test charge q through a distance of 1 cm along the equatorial axis of an electric dipole?
$W=0$
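A one-line justification (added here for clarity, not part of the original answer): the dipole potential vanishes everywhere on the equatorial axis, so moving the test charge along it requires no work,
$$V(r,\theta)=\frac{1}{4\pi\varepsilon_0}\,\frac{p\cos\theta}{r^{2}},\qquad \theta=90^{\circ}\ \Rightarrow\ V=0\ \Rightarrow\ W=q\,\Delta V=0.$$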
https://www.mersenneforum.org/showthread.php?s=2df18a87ac12d6fc076c18e56f41d0dc&t=9742 | mersenneforum.org Software/instructions/questions
2007-12-15, 09:54 #2 gd_barnes May 2007 Kansas; USA 100111101011002 Posts

Guidelines on doing searches

Below are the suggested guidelines for doing a prime-search effort for the conjectures. All instructions here are the fastest known ways that I am aware of, but some of you may not want to take extra steps for speed.

Sieving:
1. If you are sieving more than one k for n > 2500:
   a. Run srsieve with the -a parameter up to P=100M.
   b. Run sr2sieve up to an appropriate value.
   c. Run srfile with the -G (sorted by n) or -g (sorted by k) parameter to remove factors and create input for LLR or PFGW.
2. If you are sieving 1 or 2 k's for n > 2500:
   a. Run srsieve with the -g parameter up to P=100M.
   b. Run 1 or 2 instances of sr1sieve up to an appropriate value (1 instance for each k).
3. If you are searching any # of k's for n <= 2500, no sieving is needed. A PFGW script using trial factoring is by far the fastest way to go. If you are testing a new base or a new k-range for a previously searched base, see the important notes about starting a new base below.

Primality testing:
1. For bases that are powers of 2:
   a. Run LLR with the sieve file as input and a file name of prime.txt as output.
   b. Two files will be created: prime.txt and lresults.txt. Check for and post any primes found and send me the results file.
2. For bases that are not powers of 2:
   a. Run PFGW with the sieve file as input using the -f0 and -l switches in order to do PRP tests on the entire range of n. 2 or 3 files will be created: primes will be in pfgw-prime.log, probable primes (PRP's) will be in pfgw.log, and the results will be in pfgw.out. IMPORTANT: If you have to stop PFGW in the middle of testing, it will not remember k's that it has found primes for and will begin searching them again when you restart. See the instructions in the next post under #2 (referencing running LLR) for running srfile to remove k's with primes before restarting.
   b. Run PFGW to prove primality of the pfgw.log output from a. using the -f0 switch and the -t switch for the Sierp side OR the -tp switch for the Riesel side. Once all have been proven, as with LLR, please post primes found and send me the results file. [See the important notes below if starting a new k-range or starting a new base. You'll need to use the PFGW script for new bases instead of a sieve file as input to PFGW.]

Below are IMPORTANT notes on starting from scratch on a NEW BASE. Even with the automated script, if you're new to CRUS, I'd suggest getting with me or one of our regular searchers first. Some of the exceptions can get quite tricky.
1. As shown in the 1st post here, please use the link to the script for starting new bases as input to PFGW.
2. Review the web pages for algebraic factors, such as squared k's on Riesels or cubed k's on Riesels and Sierpinskis, for removal at the end of the search. Worse than searching a multiple of the base that might be a duplicate effort would be to search a k that was proven composite for all n without realizing it ahead of time.
3. If you have to stop PFGW in the middle of the search and have to restart it, it will not remember where it left off (because it is running a script). A change to the min_k in the script will be needed before restarting.
4. Please send me the pl_MOB, pl_prime, and pl_remain output files from the new bases script. A results file is not necessary. Also, if it is an even Sierp base, please send me the pl_GFN file. If any of the files would be too large, let me know.
For large-conjectured bases such as 3, 7, and 15, I will probably suggest just sending primes for n>1000 while running primes up to n=1000 myself, because the files are too large to send around. For ultimate proof in the mathematical world, we'll need a central repository of the primes found for each k. I'll post an email address later on to send them to. Good luck and may the prime-searching Gods be with us all! Gary

Last fiddled with by gd_barnes on 2010-01-21 at 21:43 Reason: more modern updates
2007-12-16, 06:33 #3 gd_barnes May 2007 Kansas; USA 22·2,539 Posts

Additional info. on searches

Here are some more particulars on the searches now:

-- 1 --

If starting a new base, and you're not using the deterministic parameters of -t or -tp for PFGW (meaning that it is proving all primes as it goes), then it will write out two separate files. They are called pfgw.log and pfgw-prime.log. pfgw.log is the PRP's (probable primes) and pfgw-prime.log is the proven primes. Even if the deterministic parameters are not set on, PFGW can still prove small primes because, with the -f100 parameter set on, it automatically attempts factoring up to a certain limit, depending on the size of the number being searched.

There is an important difference between these two files: you still need to prove the primes in the pfgw.log file. So after running the new base, do the following to prove the PRP's that are in the pfgw.log file prime:

1. Rename the pfgw.log and pfgw.out files to something of your choosing. pfgw.out is the results file. I like to call them 'prime-sierp-base16.txt' and 'results-sierp-base16.txt' for Base 16 Sierp so I know exactly what they are if I look at them 6 months from now.
2. For Sierpinski PRP's, run PFGW again using the -f0, -t, and -l parameters, with your renamed pfgw.log file from #1 (the one that has the PRP's in it) as input. The command is "PFGW (file name from #1) -f0 -t -l". For Riesel PRP's, just change the -t parameter to -tp.

This primality-proving step is the same thing you need to do if you are running LLR for bases other than 4 or 16, or if you're running PFGW on any search without the -t or -tp parameters originally. Clearly, you can avoid all of this hassle of proving primes by using PFGW with the -t or -tp and the -f0 parameters set on for ALL searches in this effort. But your searches will be slower. To me, the extra hoops to do this are worth saving the extra CPU cycles. But more importantly to me, it helped me learn the process of what the software programs do.

-- 2 --

When running LLR to search for primes, there is no way to make it stop searching k's when primes are found like PFGW can. Once again, as in #1, you can choose to run PFGW for all of your searches. This may be more palatable for some people. The script that I showed in the first post of this thread contains the parameter to make PFGW stop searching a k when a prime is found for it.

If you choose to run LLR, you will probably want to manually eliminate k's from your sieved file from time to time to avoid a lot of duplicate testing. For those of you who haven't used the srfile software, this is where it comes in very handy. Srfile will eliminate specific sequences (k's) from a sieved file. What you'll need to do is copy your sieved file into the same directory as your srfile software. Then go to the command prompt and, for each k where a prime was found, type the following command: srfile -G -d "1234*56^n+1" sieve-input.txt -o sieve-output.txt. Obviously the form is the k and base that you want to delete. It will now remove all n's for the particular k that you specified. Now do the same thing for each k that you want to remove, but be sure to use the sieve-output.txt as input for the 2nd run with sieve-output2.txt as output, etc. You'll then need to stop LLR, copy in your new sieved file, determine what line in the new file to start it at, key that line into the LLR menu and continue searching where you left off.
Because of the hassle involved with this, it is highly recommended that PFGW be used for all bases that are not powers of 2. For bases that are powers of 2, the CPU time savings is likely worth the added hassle of having to occasionally remove k's manually.

In a nutshell, for general prime searching after doing your sieving, or on an already sieved file, you can choose to run LLR or PFGW for any base, but below are the fastest currently known methods for doing so:

For minimum hassle on all bases and the fastest for bases that are NOT a power of 2:
1. Change the first line of the sieved file, i.e. the "XXXXXXXX:1:P:24:257" line, to "ABC $a*24^$b+1 // {number_primes,$a,1}". Do not change the $a and $b variables. The only thing that will vary will be the base (24 in this case), and obviously the plus sign will change to a minus if searching Riesels. In effect, your sieved file will 'contain' your script for input to PFGW at the beginning of it.
2. Run PFGW with the -f0, -t (Sierp) or -tp (Riesel), and -l parameters at the command line. IMPORTANT NOTE: The -l is a small case -L, not a big case -(eye). I can't remember if these are case-specific for PFGW but they are for the sr(n)sieve series of programs.
That is it. The pfgw-prime.log file will contain primes and the pfgw.out file will be your results file.

For a little more hassle on all bases but the fastest for bases that ARE powers of 2:
1. Run LLR with your sieved file as input. No changes to the file or special parameters are needed.
2. Every few hours/days/primes, check the output for primes. If there are 3-5 of them or more (this can vary; at a high n, I'd do it for every prime, and if there are many around n=5K then maybe 10-15), stop LLR and run your sieve file through srfile to eliminate k's where primes have been found. Keep in mind that the primes file may have more than one prime for a k.
3. Restart LLR with the sieved file that has the eliminated k's. IMPORTANT...be sure to change the input line #. Otherwise it will start LLRing much later in the file and you'll miss searching many candidates.
4. Repeat #2 and #3 as many times as needed until LLR is done.
5. If running a base that is not a power of 2, use the output PRP (probable primes) file as input to PFGW to do the proof (deterministic) test with the -t or -tp parameters as in step #2 for PFGW above.

There you have it...two ways to do most of the searches for this effort.

Gary

Last fiddled with by gd_barnes on 2010-01-12 at 08:04 Reason: more modern updates
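[Editorial sketch, not part of the original post: the two routes above, assembled only from the switches quoted in these posts. The file names and the k and base in the example sequence are placeholders, and the exact name of the PFGW binary may differ on your system.]

```
# Route 1: PFGW only (base 24 Sierpinski shown; swap -t for -tp on the Riesel side).
# First, the top line of the sieved file is replaced with the ABC header from step 1:
#   ABC $a*24^$b+1 // {number_primes,$a,1}
pfgw sieve-base24.txt -f0 -t -l

# Route 2: LLR plus periodic clean-up with srfile.
# After LLR reports a prime for, say, k=1234, drop that k from the sieve file:
srfile -G -d "1234*24^n+1" sieve-base24.txt -o sieve-base24-new.txt
# then restart LLR on sieve-base24-new.txt at the correct line number.
```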
2007-12-16, 21:13 #4
mdettweiler
A Sunny Moo
Aug 2007
USA (GMT-5)
3×2,083 Posts
Quote:
Originally Posted by gd_barnes This thread is for software downloads and instructions as well as a forum for any related questions on how to run software related to the effort. Here is a link to all of the latest software that should be needed: http://gbarnes017.googlepages.com/conjectureprogs.zip. The programs are LLR, NewPGen, PFGW, Sr1sieve, Sr2sieve, Srfile, and Srsieve. ...
I noticed that the LLR version included in the archive is a little old--it's 3.7.0, whereas the newest version is 3.7.1c, available here for Windows and here for Linux. There weren't any major changes, at least for the types of numbers we're working with, between 3.7.1b and 3.7.1c, so if anyone's still using 3.7.1b, they don't have to upgrade (i.e. it's not like they'll get a speed bonus by doing so). However, I don't know if there were any differences in speed between 3.7.0 and 3.7.1b, so to play it safe I'd go with 3.7.1b or later.
I'm glad you included NewPGen in the archive, though, as for quite a while the download link on the Prime Pages web site has been broken. I got my copy from another forum member attached to a message.
Everything else in the archive seems up to date, though.
2007-12-17, 03:48 #5
gd_barnes
May 2007
Kansas; USA
22×2,539 Posts
Quote:
Oops, my bad on LLR. I downloaded all new versions of everything and put them on most of my machines before posting but I thought I had the latest version of LLR already and didn't check for a new one. Thanks for catching that!
So if anyone reads this, use Anon's version of LLR instead of the one I included in the link at the top of this thread.
That would be great if you could put up link(s) to the Linux versions of the programs. I'm not familiar with Linux and so had completely forgotten there are two different versions of everything.
Thanks!
Gary
2007-12-17, 03:57 #6 axn Jun 2003 110338 Posts AFAIK, there has been no speed improvement to LLR (re: base 2) since 3.6. So it doesn't really matter if you don't have the latest-and-greatest version.
2007-12-17, 04:37 #7
mdettweiler
A Sunny Moo
Aug 2007
USA (GMT-5)
3·2,083 Posts
Quote:
Originally Posted by axn1 AFAIK, there has been no speed improvement to LLR (re: base 2) since 3.6. So it doesn't really matter if you don't have the latest-and-greatest version.
I didn't know for sure, I just figured I'd play it safe. Thanks for letting us know, though.
2007-12-17, 04:43 #8
mdettweiler
A Sunny Moo
Aug 2007
USA (GMT-5)
3·2,083 Posts
Quote:
Originally Posted by gd_barnes ... That would be great if you could put up link(s) to the Linux versions of the programs. I'm not familiar with Linux and so had completely forgotten there are two different versions of everything. ...
I have everything I need to put it together into an archive right now, except for NewPGen. I've already got the Windows version of NewPGen (I'm on a dual-boot system, XP and Ubuntu, though I'm using Ubuntu mostly now) thanks to axn1, but I still don't have the Linux version yet. Anyone out there got it? Please PM me and I'll send you my email so you can mail it to me. (Or, even better, just zip it up--or, if you rather, make it into a tarball [the Unix/Linux equivalent of a .zip file for those who don't know]--and attach it to a post here so anyone can grab it.)
Edit: I forgot to mention that I don't have PFGW either. Does anyone know where to get it?
Last fiddled with by mdettweiler on 2007-12-17 at 04:52
2007-12-17, 05:07 #9 axn Jun 2003 32×5×103 Posts You are unable to download the linux version from here? Then post here, and I'll repeat the procedure :) You can download PFGW from the primeform yahoo group, but first you must join up (easy to do if you have a yahoo mail id). Then you can download the Windows or linux version from the "Files" section there. The group
2007-12-17, 05:15 #10
mdettweiler
A Sunny Moo
Aug 2007
USA (GMT-5)
3·2,083 Posts
Quote:
Originally Posted by axn1 You are unable to download the linux version from here? Then post here, and I'll repeat the procedure :)
Oh, I guess it worked this time. The link must have been fixed since I last checked. I should have checked again before saying that it was still down.
Quote:
You can download PFGW from primeform yahoo group, but first you must join up (easy to do if you have a yahoo mail id). The you can download the Windows or linux version from the "Files" section there. The group
I don't have a Yahoo account, maybe it would be easier for someone to simply email me their copy (if it's not too much of a bother)?
Edit: I wonder why one of the higher ranked members of that group doesn't just use the free Geocities web space that comes with every Yahoo account (if they're part of the group, they obviously already have one) and post it there so that people don't have to join the group to get the program?
Last fiddled with by mdettweiler on 2007-12-17 at 05:17
2007-12-22, 00:40 #11 gd_barnes May 2007 Kansas; USA 22·2,539 Posts I have updated the zipped file of software at the top of this thread to include the latest Windows version of LLR. It appears there was no change between version 3.7.0 that I had and version 3.7.1b that I just now downloaded, which is why I thought I already had the latest version. The date was the same, the size was the same, and it still says version 3.7.0 in the program help. But the llrguide and readme files now show version 3.7.1. Jean or anyone, can you confirm that this link under category "LLR Version 3.7.1b for MS Windows" is the correct place to get the latest Windows version of LLR? Thanks, Gary Last fiddled with by gd_barnes on 2007-12-22 at 00:41
https://www.hartleygroup.org/publication/ferroelectric-liquid-crystals-induced-by-atropisomeric-biphenyl-dopants-the-effect-of-chiral-perturbations-on-achiral-dopants/ | # Ferroelectric liquid crystals induced by atropisomeric biphenyl dopants: the effect of chiral perturbations on achiral dopants
C. Scott Hartley and Robert P. Lemieux*
Liq. Cryst. 2004, 31, 1101–1108
https://doi.org/10.1080/02678290410001715999
## Abstract
The addition of the achiral biphenyl dopant 2,2′,6,6′-tetramethyl-4,4′-bis(4-n-nonyloxybenzoyloxy)biphenyl (3) or its dithionoester or dithioester analogues (4, 5) to a 4 mol % mixture of the atropisomeric biphenyl dopant (R)-2,2′,6,6′-tetramethyl-3,3′-dinitro-4,4′-bis(4-n-nonyloxybenzoyloxy)biphenyl, (R)-1, in the phenylpyrimidine SmC host PhP1 produces a significant amplification of the spontaneous polarization induced by (R)-1. This amplification may be due to a chiral perturbation by (R)-1 which causes a shift in the equilibrium between enantiomeric conformations of the achiral dopant. The degree of polarization amplification afforded by the achiral dopant, as expressed by the polarization amplification factor PAF, varies with the nature of the linking group. This may be ascribed to different rotational distributions of the core transverse dipole moments relative to the polar axis of the SmC* phase and/or to differences in lateral bulk of the polar linking groups. The latter may affect the degree of chiral molecular recognition achieved by 3-5 in the binding site of the SmC* phase.
https://dspiegel29.github.io/ArtofStatistics/01-1-2-3-child-heart-survival-times/01-3-child-heart-proportions-x.html | ### Figure 1.3: Percentage of all child heart surgery being carried out in each of thirteen hospitals
Data are shown in Table 1.1 (page 23) and are contained in 01-1-child-heart-survival-x.csv. The data were originally presented in the NCHDA 2012-15 report, but are best seen on childrensheartsurgery.info.
library(ggplot2)

# Read the data (the CSV named in the text above, assumed to be in the working directory)
df <- read.csv("01-1-child-heart-survival-x.csv")

df$Percentage <- 100*df$Operations/sum(df$Operations)  # percentage of all operations per hospital
df$Pos <- rank(df$Percentage)                          # rank used to order the bars

First in R base graphics:

par(mar=c(5,15,4,2))
barplot(df$Percentage, names.arg=df$Hospital, horiz=T, xpd=F, las=1, xlab="Percentage of all operations in 2012-15 \nthat are carried out in each hospital")

Then in ggplot2:

bp <- ggplot(df, aes(x=reorder(Hospital,-Pos), y=Percentage, fill=Hospital)) # sets initial plot object from the dataframe for Hospitals, reordered by Percentage (descending) as the y-values, colour-filled by Hospital
bp <- bp + geom_col() + coord_flip() # a bar layer and flipped coordinates are needed to draw the horizontal bars; these calls are assumed here, as the original geometry lines were lost in extraction
bp # draws the plot
http://hellenicaworld.com/Science/Mathematics/en/MeyerhoffManifold.html | ### Meyerhoff manifold
In hyperbolic geometry, the Meyerhoff manifold is the arithmetic hyperbolic 3-manifold obtained by $(5,1)$ surgery on the figure-8 knot complement. It was introduced by Robert Meyerhoff (1987) as a possible candidate for the hyperbolic 3-manifold of smallest volume, but the Weeks manifold turned out to have slightly smaller volume. It has the second smallest volume
$$V_{m}=12\cdot (283)^{3/2}\,\zeta_{k}(2)\,(2\pi)^{-6}=0.981368\dots$$
of orientable arithmetic hyperbolic 3-manifolds, where $\zeta_{k}$ is the zeta function of the quartic field of discriminant $-283$. Alternatively,
$$V_{m}=\Im\left(\operatorname{Li}_{2}(\theta)+\ln|\theta|\,\ln(1-\theta)\right)=0.981368\dots$$
where $\operatorname{Li}_{n}$ is the polylogarithm and $|x|$ is the absolute value of the complex root $\theta$ (with positive imaginary part) of the quartic $\theta^{4}+\theta-1=0$.
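The closed form above can be checked numerically, for instance with mpmath; this snippet is an illustrative addition and not part of the original article.

```python
# Numerical check of the polylogarithm formula for the Meyerhoff volume.
# Illustrative only; the root selection assumes the quartic has exactly one
# conjugate pair of complex roots, which it does here.
from mpmath import mp, polyroots, polylog, log, fabs, im

mp.dps = 30
roots = polyroots([1, 0, 0, 1, -1])          # roots of theta^4 + theta - 1 = 0
theta = next(r for r in roots if im(r) > 0)  # the complex root with positive imaginary part
V_m = im(polylog(2, theta) + log(fabs(theta)) * log(1 - theta))
print(V_m)  # expected to be close to 0.981368..., the value quoted above
```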
Ted Chinburg (1987) showed that this manifold is arithmetic.
Gieseking manifold
Weeks manifold
References
Chinburg, Ted (1987), "A small arithmetic hyperbolic three-manifold", Proceedings of the American Mathematical Society, 100 (1): 140–144, doi:10.2307/2046135, ISSN 0002-9939, JSTOR 2046135, MR 0883417
Chinburg, Ted; Friedman, Eduardo; Jones, Kerry N.; Reid, Alan W. (2001), "The arithmetic hyperbolic 3-manifold of smallest volume", Annali della Scuola Normale Superiore di Pisa. Classe di Scienze. Serie IV, 30 (1): 1–40, ISSN 0391-173X, MR 1882023
Meyerhoff, Robert (1987), "A lower bound for the volume of hyperbolic 3-manifolds", Canadian Journal of Mathematics, 39 (5): 1038–1056, doi:10.4153/CJM-1987-053-6, ISSN 0008-414X, MR 0918586
Index | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9691330790519714, "perplexity": 2191.2307850439784}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038085599.55/warc/CC-MAIN-20210415125840-20210415155840-00284.warc.gz"} |
https://eng.libretexts.org/Bookshelves/Computer_Science/Book%3A_Think_Python_-_How_to_Think_Like_a_Computer_Scientist/08%3A_Classes_and_Methods/8.01%3A_Object-Oriented_Features | # 8.1: Object-Oriented Features
Python is an object-oriented programming language, which means that it provides features that support object-oriented programming.
It is not easy to define object-oriented programming, but we have already seen some of its characteristics:
• Programs are made up of object definitions and function definitions, and most of the computation is expressed in terms of operations on objects.
• Each object definition corresponds to some object or concept in the real world, and the functions that operate on that object correspond to the ways real-world objects interact.
For example, the Time class defined in Chapter 16 corresponds to the way people record the time of day, and the functions we defined correspond to the kinds of things people do with times. Similarly, the Point and Rectangle classes correspond to the mathematical concepts of a point and a rectangle.
So far, we have not taken advantage of the features Python provides to support object-oriented programming. These features are not strictly necessary; most of them provide alternative syntax for things we have already done. But in many cases, the alternative is more concise and more accurately conveys the structure of the program.
For example, in the Time program, there is no obvious connection between the class definition and the function definitions that follow. With some examination, it is apparent that every function takes at least one Time object as an argument.
This observation is the motivation for methods; a method is a function that is associated with a particular class. We have seen methods for strings, lists, dictionaries and tuples. In this chapter, we will define methods for user-defined types.
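To make the distinction concrete before the rewriting begins, here is a minimal sketch (not the book's exact code) of the same operation written first as a function and then as a method of a user-defined type:

```python
class Time(object):
    """Represents the time of day (simplified sketch)."""
    def __init__(self, hour=0, minute=0, second=0):
        self.hour = hour
        self.minute = minute
        self.second = second

    # method form: the same code, moved inside the class definition
    def print_time(self):
        print('%.2d:%.2d:%.2d' % (self.hour, self.minute, self.second))

# function form: the Time object is passed as an ordinary argument
def print_time(time):
    print('%.2d:%.2d:%.2d' % (time.hour, time.minute, time.second))

start = Time(9, 45, 0)
print_time(start)    # function-call syntax
start.print_time()   # method-call syntax
```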
Methods are semantically the same as functions, but there are two syntactic differences:
• Methods are defined inside a class definition in order to make the relationship between the class and the method explicit.
• The syntax for invoking a method is different from the syntax for calling a function.
In the next few sections, we will take the functions from the previous two chapters and transform them into methods. This transformation is purely mechanical; you can do it simply by following a sequence of steps. If you are comfortable converting from one form to another, you will be able to choose the best form for whatever you are doing. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7569289207458496, "perplexity": 357.8472671946875}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991904.6/warc/CC-MAIN-20210511060441-20210511090441-00471.warc.gz"} |
https://codereview.stackexchange.com/questions/105577/lonely-integer-python-implementation | # “Lonely Integer” Python implementation
Problem Statement
There are N integers in an array A. All but one integer occur in pairs. Your task is to find the number that occurs only once.
Input Format
The first line of the input contains an integer N, indicating the number of integers. The next line contains N space-separated integers that form the array A.
Constraints
$1 \leq N < 100$
$N \bmod 2 = 1$ (N is an odd number)
$0 \leq A_{i} \leq 100, \forall i \in [1,N]$
Output Format
Output S, the number that occurs only once.
Solution
#!/usr/bin/py
from collections import Counter
def histogram(ary):
    """ Creates a histogram of the given array.
    Args:
        ary: The array.
    Returns:
        The dictionary with key as number and value as count.
    """
    return Counter(ary)

def lonelyinteger(ary):
    """ Finds the unique element in the array.
    Args:
        ary: The input array.
    Returns:
        Number or None
    """
    for frequency in histogram(ary).items():
        if frequency[1] == 1:
            return frequency[0]
    return None

if __name__ == '__main__':
    a = int(raw_input())
    b = map(int, raw_input().strip().split(" "))
    print lonelyinteger(b)
The solution works perfectly fine. But in the process of learning problem-solving, I am interested in a few things:
• I don't think that the information like N is odd or maximum limit of 100 is related to my solution. Is there some optimizations I am missing?
• What about the space complexity? I think it's $O(N)$ + size of the histogram.
• Runtime seems linear i.e $O(N)$.
Solution 2
It also makes use of the fact that N is odd: since x ^ x = 0 and x ^ 0 = x, and XOR is commutative and associative, every paired element cancels and only the lonely integer survives.
Example: 1 ^ 1 ^ 2 ^ 2 ^ 3 = 3
#!/usr/bin/py
def lonelyinteger(a):
    res = 0
    for each in a:
        res ^= each
    return res

if __name__ == '__main__':
    a = input()
    b = map(int, raw_input().strip().split(" "))
    print lonelyinteger(b)
• You are right on all 3 counts. However, the problem has an $O(1)$ space-wise solution. Hint: think of xor. – vnp Sep 24 '15 at 6:53
• @vnp, for solution 2 yes. – CodeYogi Sep 24 '15 at 7:24
• Indeed, your solution doesn’t require $N$ to be odd. You’ve solved the slightly more general problem of finding a lonely integer in an array where every other element occurs at least twice, but could occur more often than that.
Likewise, the restriction that $N \leq 100$ isn’t used, but I can’t think of how that could be used in a solution.
• I don’t know why you’ve defined a histogram() function, when you could swap it out for Counter(). That will make your code a little easier to read, because when I read Counter(), I immediately know what it does; if I see histogram() then I have to check I’ve understood the definition.
• When you iterate over the histogram, you should use .iteritems() instead of .items(); in Python 2.7 the latter is slower and more memory-expensive.
• You don’t need an explicit return None; Python will automatically return None if it drops out the end of a function.
• Rather than iterating over the tuples in Counter(array).items() and picking out the element/frequency by numeric index, it would be better to use tuple unpacking in the for loop:
for elem, frequency in histogram(ary).items():
    if frequency == 1:
        return elem
You could also just use the most_common() method on Counter(), and take the last element of that – but at that point you’re barely doing any work, so that might not be in the spirit of the problem. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.30370283126831055, "perplexity": 1454.6591577542115}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038101485.44/warc/CC-MAIN-20210417041730-20210417071730-00178.warc.gz"} |
https://mathhothouse.me/category/cnennai-math-institute-entrance-exam/ | ## Category Archives: Chennai Math Institute Entrance Exam
### The animals went in which way
The animals may have gone into Noah’s Ark two by two, but in which order did they go in? Given the following sentence (yes, sentence! — I make no apologies for the punctuation), what was the order in which the animals entered the Ark?
The monkeys went in before the sheep, swans, chickens, peacocks, geese, penguins and spiders, but went in after the horses, badgers, squirrels and tigers, the latter of which went in before the horses, the penguins, the rabbits, the pigs, the donkeys, the snakes and the mice, but the mice went before the leopards, the leopards before the squirrels, the squirrels before the chickens, the chickens before the penguins, spiders, sheep, geese and the peacocks, the peacocks before the geese and the penguins, the penguins before the spiders and after the geese and the horses, the horses before the donkeys, the chickens and the leopards, the leopards after the foxes and the ducks, the ducks before the goats, swans, doves, foxes and badgers before the chickens, horses, squirrels and swans and after the lions, tigers, foxes, squirrels and ducks, the ducks after the lions, elephants, rabbits and otters, the otters before the elephants, tigers, chickens and beavers, the beavers after the elephants, the elephants before the lions, the lions before the tigers, the sheep before the peacocks, the swans before the chickens, the pigs before the snakes, the snakes before the foxes, the pigs after the rabbits, goats, tigers and doves, the doves before the chickens, horses, goats, donkeys and snakes, the snakes after the goats, and the donkeys before the mice and the squirrels.
🙂 🙂 🙂
Nalin Pithwa.
### Announcement: Scholarships for RMO Training
Mathematics Hothouse.
### Can anyone have fun with infinite series?
Below is a list of finitely many puzzles on infinite series to keep you a bit busy !! 🙂 Note that these puzzles do have an academic flavour, especially the concepts of convergence and divergence of an infinite series.
Puzzle 1: A grandmother’s vrat (fast) requires her to keep odd number of lamps of finite capacity lit in a temple at any time during 6pm to 6am the next morning. Each oil-filled lamp lasts 1 hour and it burns oil at a constant rate. She is not allowed to light any lamp after 6pm but she can light any number of lamps before 6pm and transfer oil from some to the others throughout the night while keeping odd number of lamps lit all the time. How many fully-filled oil lamps does she need to complete her vrat?
Puzzle 2: Two number theorists, bored in a chemistry lab, played a game with a large flask containing 2 liters of a colourful chemical solution and an ultra-accurate pipette. The game was that they would take turns to recall a prime number p such that $p+2$ is also a prime number. Then, the first number theorist would pipette out $\frac{1}{p}$ litres of chemical and the second $\frac{1}{(p+2)}$ litres. How many times do they have to play this game to empty the flask completely?
Puzzle 3: How farthest from the edge of a table can a deck of playing cards be stably overhung if the cards are stacked on top of one another? And, how many of them will be overhanging completely away from the edge of the table?
Puzzle 4: Imagine a tank that can be filled with infinite taps and can be emptied with infinite drains. The taps, turned on alone, can fill the empty tank to its full capacity in 1 hour, 3 hours, 5 hours, 7 hours and so on. Likewise, the drains opened alone, can drain a full tank in 2 hours, 4 hours, 6 hours, and so on. Assume that the taps and drains are sequentially arranged in the ascending order of their filling and emptying durations.
Now, starting with an empty tank, plumber A alternately turns on a tap for 1 hour and opens the drain for 1 hour, all operations done one at a time in a sequence. His sequence, by using $t_{i}$ for $i^{th}$ tap and $d_{j}$ for $j^{th}$ drain, can be written as follows: $\{ t_{1}, d_{1}, t_{2}, d_{2}, \ldots\}_{A}$.
When he finishes his operation, mathematically, after using all the infinite taps and drains, he notes that the tank is filled to a certain fraction, say, $n_{A}<1$.
Then, plumber B turns one tap on for 1 hour and then opens two drains for 1 hour each and repeats his sequence: $\{ (t_{1},d_{1},d_{2}), (t_{2},d_{3},d_{4}), (t_{3},d_{5},d_{6}) \ldots \}_{B}$.
At the end of his (B’s) operation, he finds that the tank is filled to a fraction that is exactly half of what plumber A had filled, that is, $0.5n_{A}$.
How is this possible even though both have turned on all taps for 1 hour and opened all drains for 1 hour, although in different sequences?
I hope u do have fun!!
-Nalin Pithwa.
### Logicalympics — 100 meters!!!
Just as you go to the gym daily and increase your physical stamina, so also, you should go to the “mental gym” of solving hard math or logical puzzles daily to increase your mental stamina. You should start with a laser-like focus (or, concentrate like Shiva’s third eye, as is famous in Hindu mythology/scriptures!!) for 15-30 min daily and sustain that pace for a month at least. Give yourself a chance. Start with the following:
The logicalympics take place every year in a very quiet setting so that the competitors can concentrate on their events — not so much the events themselves, but the results. At the logicalympics every event ends in a tie so that no one goes home disappointed 🙂 There were five entries in the room, so they held five races in order that each competitor could win, and so that each competitor could also take his/her turn in 2nd, 3rd, 4th, and 5th place. The final results showed that each competitor had duly taken their turn in finishing in each of the five positions. Given the following information, what were the results of each of the five races?
The five competitors were A, B, C, D and E. C didn’t win the fourth race. In the first race A finished before C who in turn finished after B. A finished in a better position in the fourth race than in the second race. E didn’t win the second race. E finished two places behind C in the first race. D lost the fourth race. A finished ahead of B in the fourth race, but B finished before A and C in the third race. A had already finished before C in the second race who in turn finished after B again. B was not first in the first race and D was not last. D finished in a better position in the second race than in the first race and finished before B. A wasn’t second in the second race and also finished before B.
So, is your brain racing now to finish this puzzle?
Cheers,
Nalin Pithwa.
PS: Many of the puzzles on my blog(s) are from famous literature/books/sources, but I would not like to reveal them as I feel that students gain the most when they really try these questions on their own rather than quickly give up and ask for help or look up solutions. Students have finally to stand on their own feet! (I do not claim creating these questions or puzzles; I am only a math tutor and sometimes, a tutor on the web.) I feel that even a “wrong” attempt is a “partial” attempt; if u can see where your own reasoning has failed, that is also partial success!
### Pick’s theorem to pick your brains!!
Pick’s theorem:
Consider a square lattice of unit side. A simple polygon (with non-intersecting sides) of any shape is drawn with its vertices at the lattice points. The area of the polygon can be simply obtained as $(B/2)+I-1$ square units, where B is the number of lattice points on the boundary and I is the number of lattice points in the interior of the polygon. Prove this theorem! (As a quick sanity check: for a unit lattice square, $B=4$ and $I=0$, giving $(4/2)+0-1=1$ square unit, as expected.)
Do you like this challenge?
Nalin Pithwa.
### Limits that arise frequently
We continue our presentation of basic stuff from Calculus and Analytic Geometry, G B Thomas and Finney, Ninth Edition. My express purpose in presenting these few proofs is to emphasize that Calculus is not just a recipe of calculation techniques. Or, even a bit further, math is not just about calculation. I have a feeling that such thinking, nurtured/developed at a young age (while preparing for IITJEE Math, for example), makes one razor sharp.
We verify a few famous limits.
Formula 1:
If $|x|<1$, $\lim_{n \rightarrow \infty}x^{n}=0$
We need to show that to each $\epsilon >0$ there corresponds an integer N so large that $|x^{n}|<\epsilon$ for all n greater than N. Since $\epsilon^{1/n}\rightarrow 1$ while $|x|<1$, there exists an integer N for which $\epsilon^{1/N}>|x|$. In other words,
$|x^{N}|=|x|^{N}<\epsilon$. Call this (I).
This is the integer we seek because, if $|x|<1$, then
$|x^{n}|<|x^{N}|$ for all $n>N$. Call this (II).
Combining I and II produces $|x^{n}|<\epsilon$ for all $n>N$, concluding the proof.
Formula II:
For any number x, $\lim_{n \rightarrow \infty}(1+\frac{x}{n})^{n}=e^{x}$.
Let $a_{n}=(1+\frac{x}{n})^{n}$. Then, $\ln {a_{n}}=\ln{(1+\frac{x}{n})^{n}}=n\ln{(1+\frac{x}{n})}\rightarrow x$,
as we can see by the following application of l’Hopital’s rule, in which we differentiate with respect to n:
$\lim_{n \rightarrow \infty}n\ln{(1+\frac{x}{n})}=\lim_{n \rightarrow \infty}\frac{\ln{(1+x/n)}}{1/n}$, which in turn equals
$\lim_{n \rightarrow \infty}\frac{(\frac{1}{1+x/n}).(-\frac{x}{n^{2}})}{-1/n^{2}}=\lim_{n \rightarrow \infty}\frac{x}{1+x/n}=x$.
Now, let us apply the following theorem with $f(x)=e^{x}$ to the above:
(a theorem for calculating limits of sequences) the continuous function theorem for sequences:
Let $\{a_{n}\}$ be a sequence of real numbers. If $a_{n} \rightarrow L$ and if f is a function that is continuous at L and defined at all $a_{n}$, then $f(a_{n}) \rightarrow f(L)$.
So, in this particular proof, we get the following:
$(1+\frac{x}{n})^{n}=a_{n}=e^{\ln{a_{n}}}\rightarrow e^{x}$.
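As a quick numerical illustration of Formula II (an editor's aside, not part of the original proof):

```python
# Watch (1 + x/n)**n approach e**x as n grows.
import math

x = 1.7
for n in (10, 1000, 100000):
    print(n, (1 + x/n)**n, math.exp(x))
```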
Formula 3:
For any number x, $\lim_{n \rightarrow \infty}\frac{x^{n}}{n!}=0$
Since $-\frac{|x|^{n}}{n!} \leq \frac{x^{n}}{n!} \leq \frac{|x|^{n}}{n!}$,
all we need to show is that $\frac{|x|^{n}}{n!} \rightarrow 0$. We can then apply the Sandwich Theorem for Sequences (Let $\{a_{n}\}$, $\{b_{n}\}$ and $\{c_{n}\}$ be sequences of real numbers. If $a_{n}\leq b_{n}\leq c_{n}$ holds for all n beyond some index N, and if $\lim_{n\rightarrow \infty}a_{n}=\lim_{n\rightarrow \infty}c_{n}=L$, then $\lim_{n\rightarrow \infty}b_{n}=L$ also) to conclude that $\frac{x^{n}}{n!} \rightarrow 0$.
The first step in showing that $|x|^{n}/n! \rightarrow 0$ is to choose an integer $M>|x|$, so that $(|x|/M)<1$. Now, let us use the rule (Formula 1, mentioned above) to conclude that $(|x|/M)^{n}\rightarrow 0$. We then restrict our attention to values of $n>M$. For these values of n, we can write:
$\frac{|x|^{n}}{n!}=\frac{|x|^{n}}{1.2 \ldots M.(M+1)(M+2)\ldots n}$, where there are $(n-M)$ factors in the expression $(M+1)(M+2)\ldots n$, and
the RHS in the above expression is $\leq \frac{|x|^{n}}{M!M^{n-M}}=\frac{|x|^{n}M^{M}}{M!M^{n}}=\frac{M^{M}}{M!}(\frac{|x|}{M})^{n}$. Thus,
$0\leq \frac{|x|^{n}}{n!}\leq \frac{M^{M}}{M!}(\frac{|x|}{M})^{n}$. Now, the constant $\frac{M^{M}}{M!}$ does not change as n increases. Thus, the Sandwich theorem tells us that $\frac{|x|^{n}}{n!} \rightarrow 0$ because $(\frac{|x|}{M})^{n}\rightarrow 0$.
That’s all, folks !!
Aufwiedersehen,
Nalin Pithwa.
### Cauchy’s Mean Value Theorem and the Stronger Form of l’Hopital’s Rule
Reference: Thomas, Finney, 9th edition, Calculus and Analytic Geometry.
Continuing our previous discussion of “theoretical” calculus or “rigorous” calculus, I am reproducing below the proof of the finite limit case of the stronger form of l’Hopital’s Rule :
L’Hopital’s Rule (Stronger Form):
Suppose that
$f(x_{0})=g(x_{0})=0$
and that the functions f and g are both differentiable on an open interval $(a,b)$ that contains the point $x_{0}$. Suppose also that $g^{'} \neq 0$ at every point in $(a,b)$ except possibly at $x_{0}$. Then,
$\lim_{x \rightarrow x_{0}}\frac{f(x)}{g(x)}=\lim_{x \rightarrow x_{0}}\frac{f^{'}(x)}{g^{'}(x)}$ ….call this equation I,
provided the limit on the right exists.
The proof of the stronger form of l’Hopital’s Rule is based on Cauchy’s Mean Value Theorem, a mean value theorem that involves two functions instead of one. We prove Cauchy’s theorem first and then show how it leads to l’Hopital’s Rule.
Cauchy’s Mean Value Theorem:
Suppose that the functions f and g are continuous on $[a,b]$ and differentiable throughout $(a,b)$ and suppose also that $g^{'} \neq 0$ throughout $(a,b)$. Then there exists a number c in $(a,b)$ at which
$\frac{f^{'}(c)}{g^{'}(c)} = \frac{f(b)-f(a)}{g(b)-g(a)}$…call this II.
The ordinary Mean Value Theorem is the case where $g(x)=x$.
Proof of Cauchy’s Mean Value Theorem:
We apply the Mean Value Theorem twice. First we use it to show that $g(a) \neq g(b)$. For if $g(b)$ did equal to $g(a)$, then the Mean Value Theorem would give:
$g^{'}(c)=\frac{g(b)-g(a)}{b-a}=0$ for some c between a and b. This cannot happen because $g^{'}(x) \neq 0$ in $(a,b)$.
We next apply the Mean Value Theorem to the function:
$F(x) = f(x)-f(a)-\frac{f(b)-f(a)}{g(b)-g(a)}[g(x)-g(a)]$.
This function is continuous and differentiable where f and g are, and $F(b) = F(a)=0$. Therefore, there is a number c between a and b for which $F^{'}(c)=0$. In terms of f and g, this says:
$F^{'}(c) = f^{'}(c)-\frac{f(b)-f(a)}{g(b)-g(a)}[g^{'}(c)]=0$, or
$\frac{f^{'}(c)}{g^{'}(c)}=\frac{f(b)-f(a)}{g(b)-g(a)}$, which is II above. QED.
Proof of the Stronger Form of l’Hopital’s Rule:
We first prove I for the case $x \rightarrow x_{o}^{+}$. The method needs no change to apply to $x \rightarrow x_{0}^{-}$, and the combination of those two cases establishes the result.
Suppose that x lies to the right of $x_{o}$. Then, $g^{'}(x) \neq 0$ and we can apply the Cauchy’s Mean Value Theorem to the closed interval from $x_{0}$ to x. This produces a number c between $x_{0}$ and x such that $\frac{f^{'}(c)}{g^{'}(c)}=\frac{f(x)-f(x_{0})}{g(x)-g(x_{0})}$.
But, $f(x_{0})=g(x_{0})=0$ so that $\frac{f^{'}(c)}{g^{'}(c)}=\frac{f(x)}{g(x)}$.
As x approaches $x_{0}$, c approaches $x_{0}$ because it lies between x and $x_{0}$. Therefore, $\lim_{x \rightarrow x_{0}^{+}}\frac{f(x)}{g(x)}=\lim_{x \rightarrow x_{0}^{+}}\frac{f^{'}(c)}{g^{'}(c)}=\lim_{x \rightarrow x_{0}^{+}}\frac{f^{'}(x)}{g^{'}(x)}$.
This establishes l’Hopital’s Rule for the case where x approaches $x_{0}$ from above. The case where x approaches $x_{0}$ from below is proved by applying Cauchy’s Mean Value Theorem to the closed interval $[x,x_{0}]$, where $x< x_{0}$. QED.
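As a small worked illustration of the rule just proved (an editor's aside, using SymPy; not from the book):

```python
# lim_{x->0} (1 - cos x)/x^2 via the original quotient and via the derivative quotient.
from sympy import symbols, limit, diff, cos

x = symbols('x')
f = 1 - cos(x)   # f(0) = 0
g = x**2         # g(0) = 0
print(limit(f/g, x, 0))                     # 1/2
print(limit(diff(f, x)/diff(g, x), x, 0))   # also 1/2, as l'Hopital's Rule predicts
```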
### The Sandwich Theorem or Squeeze Play Theorem
It helps to think about the core concepts of Calculus from a young age, if you want to develop your expertise or talents further in math, pure or applied, engineering or mathematical sciences. At a tangible level, it helps you attack many more questions of the IIT JEE Advanced Mathematics. Let us see if you like the following proof, or can absorb/digest it:
Reference: Calculus and Analytic Geometry by Thomas and Finney, 9th edition.
The Sandwich Theorem:
Suppose that $g(x) \leq f(x) \leq h(x)$ for all x in some open interval containing c, except possibly at $x=c$ itself. Suppose also that $\lim_{x \rightarrow c}g(x)= \lim_{x \rightarrow c}h(x)=L$. Then, $\lim_{x \rightarrow c}f(x)=L$.
Proof for Right Hand Limits:
Suppose $\lim_{x \rightarrow c^{+}}g(x)=\lim_{x \rightarrow c^{+}}h(x)=L$. Then, for any $\epsilon >0$, there exists a $\delta >0$ such that for all x, the inequality $c<x<c+\delta$ implies $L-\epsilon<g(x)<L+\epsilon$ and $L-\epsilon<h(x)<L+\epsilon$ ….call this (I)
These inequalities combine with the inequality $g(x) \leq f(x) \leq h(x)$ to give
$L-\epsilon<g(x)\leq f(x)\leq h(x)<L+\epsilon$
$L-\epsilon<f(x)<L+\epsilon$
$-\epsilon<f(x)-L<\epsilon$ ….call this (II)
Therefore, for all x, the inequality $c<x<c+\delta$ implies $|f(x)-L|<\epsilon$. …call this (III)
Proof for Left-Hand Limits:
Suppose $\lim_{x \rightarrow c^{-}} g(x)=\lim_{x \rightarrow c^{-}}h(x)=L$. Then, for $\epsilon >0$ there exists a $\delta >0$ such that for all x, the inequality $c-\delta<x<c$ implies $L-\epsilon<g(x)<L+\epsilon$ and $L-\epsilon<h(x)<L+\epsilon$ …call this (IV).
We conclude as before that for all x, $c-\delta<x<c$ implies $|f(x)-L|<\epsilon$.
Proof for Two sided Limits:
If $\lim_{x \rightarrow c}g(x) = \lim_{x \rightarrow c}h(x)=L$, then $g(x)$ and $h(x)$ both approach L as $x \rightarrow c^{+}$ and as $x \rightarrow c^{-}$ so $\lim_{x \rightarrow c^{+}}f(x)=L$ and $\lim_{x \rightarrow c^{-}}f(x)=L$. Hence, $\lim_{x \rightarrow c}f(x)=L$. QED.
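A tiny numerical illustration of the theorem (an editor's aside, not from the book): $x^{2}\sin(1/x)$ is squeezed between $-x^{2}$ and $x^{2}$, so it tends to 0 as $x \rightarrow 0$.

```python
import math

for x in (0.1, 0.01, 0.001):
    f = x**2 * math.sin(1/x)
    print(x, -x**2 <= f <= x**2, f)   # the middle value shrinks towards 0
```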
Let me know your feedback on such stuff,
Nalin Pithwa
### Lagrange’s Mean Value Theorem and Cauchy’s Generalized Mean Value Theorem
Lagrange’s Mean Value Theorem:
If a function $f(x)$ is continuous on the interval $[a,b]$ and differentiable at all interior points of the interval, there will be, within $[a,b]$, at least one point c, $a<c<b$, such that $f(b)-f(a)=f^{'}(c)(b-a)$.
Cauchy’s Generalized Mean Value Theorem:
If $f(x)$ and $\phi(x)$ are two functions continuous on an interval $[a,b]$ and differentiable within it, and $\phi(x)$ does not vanish anywhere inside the interval, there will be, in $[a,b]$, a point $x=c$, $a<c<b$, such that $\frac{f(b)-f(a)}{\phi(b)-\phi(a)} = \frac{f^{'}(c)}{\phi^{'}(c)}$.
Some questions based on the above:
Problem 1:
Form Lagrange’s formula for the function $y=\sin(x)$ on the interval $[x_{1},x_{2}]$.
Problem 2:
Verify the truth of Lagrange’s formula for the function $y=2x-x^{2}$ on the interval $[0,1]$.
Problem 3:
Applying Lagrange’s theorem, prove the inequalities: (i) $e^{x} \geq 1+x$ (ii) $\ln (1+x) < x$, for $x>0$. (iii) $b^{n}-a^{n} < nb^{n-1}(b-a)$, for $b>a$. (iv) $\arctan(x) < x$, for $x>0$.
Problem 4:
Write the Cauchy formula for the functions $f(x)=x^{2}$, $\phi(x)=x^{3}$ on the interval $[1,2]$ and find c.
More churnings with calculus later!
Nalin Pithwa.
### Could a one-sided limit not exist ?
Here is basic concept of limit : | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 160, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.855636715888977, "perplexity": 854.8058844266882}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647153.50/warc/CC-MAIN-20180319214457-20180319234457-00014.warc.gz"} |
http://mathoverflow.net/questions/73878/differential-equation-with-some-constraints | # Differential equation with some constraints
I posted this to stackexchange, and after some hours got a comment that was so pessimistic about finding some neat orderly solution, that I'm posting it here too. (In case anyone cares, this is related to this question, which I posted here earlier.)
I'd like $\alpha,\beta,\gamma$ as functions of $t$, satisfying the following conditions: \begin{align} \alpha+\beta+\gamma & = 0 \\ \sin^2\alpha + \sin^2\beta + \sin^2\gamma & = c^2 \\ \left| \frac{d}{dt}(\sin\alpha,\sin\beta,\sin\gamma)\right| & = 1 \end{align} I'm thinking of $c^2$ as small. At the very least that means $<2$, and intuitively it means $\ll 2$. Some geometry shows that there is a qualitative change in the nature of the solutions when $c^2$ goes from $<2$ to $>2$.
Later edit: The question above asks for a parametrization by arc-length. Here's an ugly parametrization by something quite remote from arc length: $$\beta = \frac{\arccos\left(\frac{1 + \sin^2\alpha - c^2}{\cos\alpha}\right)-\alpha}{2}$$ And then $\gamma = \pi - \alpha - \beta$. In order to get the whole curve, you'd need a multiple-valued arccosine and then you'd pick the right value for the particular point on the curve. One thing that fails to be obvious to me just from the way the function above is written, giving $\beta$ as a function of $\alpha$, is that that function is its own inverse.
So here's a less demanding question that the one above: Is there some nice pleasant way of parametrizing the curve that, if not by arc-length, at least treats $\alpha$, $\beta$, and $\gamma$ equally, so that it's perfectly self-evident from the way it's written that the whole expression is symmetric in $\alpha,\beta,\gamma$?
-
BTW, the first two constraints imply that $0 \le c^2 \le 9/4$. – Michael Hardy Aug 28 '11 at 19:29
A solution is in effect an arc-length parametrization of a space curve. Let $\vec v =(x,y,z) = (\sin \alpha, \sin \beta, \sin \gamma)$. The first equation is then the somewhat complicated algebraic surface, call it $S_1$: $$S_1: 2(y^2z^2+z^2x^2+x^2y^2) - (x^4+y^4+z^4) = 4(xyz)^2.$$ More precisely, it's a quarter of $S_1$, because $S_1$ also contains the loci where one of $\alpha,\beta,\gamma$ is the sum of the other two. You may recognize the left-hand side from Hero(n)'s formula: it factors as $(x+y+z)(-x+y+z)(x-y+z)(x+y-z)$. The second equation then intersects this $S_1$ with the sphere $\Sigma_c: \|\vec v\| = c$. The final equation says that $\vec v$ depends on $t$ and its derivative has norm $1$. This makes $\vec v(t)$ the arc-length parametrization of the curve.
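(Editor's aside, not part of the original answer: the Heron-type factorization quoted above is easy to confirm symbolically, assuming SymPy is available.)

```python
from sympy import symbols, expand

x, y, z = symbols('x y z')
lhs = 2*(y**2*z**2 + z**2*x**2 + x**2*y**2) - (x**4 + y**4 + z**4)
heron = (x + y + z)*(-x + y + z)*(x - y + z)*(x + y - z)
print(expand(lhs - heron))   # prints 0, so the two sides agree identically
```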
The following might give you a handle on what happens for small $c$. Scaling by $c$ yields the intersection of the unit sphere $\Sigma_1$ with the varying surface $$S_c: (x+y+z)(-x+y+z)(x-y+z)(x+y-z) = 4c^2(xyz)^2.$$ Now $S_0 \cap \Sigma_1$ is the union of the four great circles $\{ x \pm y \pm z = 0 \} \cap \Sigma_1$, each of which has an easy arc-length parametrization. The one that corresponds to $\alpha + \beta + \gamma = 0$ is $x+y+z=0$. To get at your problem with small $c$, you might start from these parametrizations of $S_0 \cap \Sigma_1$ and consider the curves $S_c \cap \Sigma_1$ as deformations of that great circle, then at the end speed up the resulting arc-length parametrizations by a factor $1/c$ to undo the scaling.
-
The hour is late and I will look at this tomorrow. But even before digesting everything above, I've up-voted it because the identity following "$S_1$" looks just like what I wrote in another stackexchange posting (except for a factor of 2.....): math.stackexchange.com/questions/59508/trigonometric-identity And that one arose from a product of three sines, whereas this one arose from a sum of products of two sines, and there ought to be certain connections. So are you suggesting that there is a nice neat solution after all? Or only that there is one in a limiting case? – Michael Hardy Aug 28 '11 at 3:35
The arc-length parametrization is the inverse function of an arc-length integral, which involves a square root and can only rarely have an elementary formula (already for an ellipse we famously get elliptic integral). I see that Robert Bryant already worked out what happens here, and verified that there's almost never an elementary formula; so it won't get any more nice or neat than inverting the integral of the square root of a rational function. Did you have a reason to expect or hope for a particularly nice form here? – Noam D. Elkies Aug 28 '11 at 17:18
@Noam: Well, now I've started looking at this answer while awake. First I wondered how you got $S_1$ from the first constraint. Then I saw how it can be done, if not how you did it, which I suspect is different. To be continued....... – Michael Hardy Aug 29 '11 at 17:55
Building on Noam's suggestion, you could try using the inherent symmetry of the problem: Set $\sigma_1 = x^2 + y^2 + z^2$, $\sigma_2 = x^2y^2+y^2z^2+z^2x^2$, and $\sigma_3 = x^2y^2z^2$. Then your conditions become $\sigma_1 = c^2$ and $\sigma_2 = \sigma_3 + \tfrac14c^4$. Meanwhile, you have $$dt^2 = dx^2 + dy^2 + dz^2 = \frac{d(x^2)^2}{4x^2}+\frac{d(y^2)^2}{4y^2}+\frac{d(z^2)^2}{4z^2},$$ and this latter expression, being symmetric in $x^2,y^2,z^2$, can be expressed as a differential expression in $\sigma_1, \sigma_2, \sigma_3$. I won't write out the details, but a short computation (using Maple) shows that, taking advantage of the relations $\sigma_1 = c^2$ and $\sigma_2 = \sigma_3 + \tfrac14c^4$, this leads to the relation $$dt^2 = \frac{\bigl(c^6(c^2{-}2)-4(36{-}52c^2{+}21c^4{-}2c^6)\sigma_3+16(c^2{-}1){\sigma_3}^2\bigr)}{16\sigma_3\bigl(c^6(2{-}c^2)-4(27{-}18c^2{+}2c^4)\sigma_3-16{\sigma_3}^2\bigr)}\bigl(d\sigma_3\bigr)^2.$$ Now, for example, you can see why $c^2=2$ is special. The integral that gives $t$ will simplify dramatically in this case; in fact, it becomes an elementary integral. For general values of $c$, though, this is a hyperelliptic integral, and you won't find any simple relation between $t$ and $\sigma_3$, so, a fortiori, none between $t$ and $x$, $y$, and $z$. There are various special values of $c$ for which the roots and poles of the rational expression cancel, such as $c=0$, $c = \pm\sqrt{2}$, and $c = \pm\frac32$, and, for these, you'd expect the integral to simplify considerably, but, otherwise, you don't expect any nice relation.
Added remark: By the way, you can get from this more directly to the relation between $t$ and $x$, $y$, and $z$, since, for example, letting $u$ represent any one of $x^2$, $y^2$, or $z^2$, one has the relation $u^3 - c^2 u^2 + (\sigma_3 + \tfrac14 c^4)u - \sigma_3 = 0$, which can clearly be solved for $\sigma_3$ as a rational function of $u$. Substituting this into the above relation gives a differential equation directly relating $t$ and, say, $u = x^2$. It's not a particularly nice relation, though. Ultimately, this gives a relation of the form $x = F(t,c)$ where $F$ is some function periodic of period $3\tau(c)$ in the first variable for some $\tau(c)>0$. Then one finds that $y = F(t + \tau(c),c)$ and $z = F(t-\tau(c),c)$. This is, of course, a very symmetric expression, though it's not explicit.
-
That $c^2 = 0$ and $c^2 = (3/2)^2$ are the two opposite extreme values of the sum of squares of the three sines follows from the first constraint. That $c^2 = 2$ gives an exceptionally well-behaved situation is actually seen just be thinking about secondary-school-level trigonometry. It's striking how a little trigonometry problem leads straight into moderately exotic (by comparison to this sort of trigonometry) functions, but I suppose the same can be said of other things that we've all seen. I'm going to print out the two answers posted so far and think about them before saying much more. – Michael Hardy Aug 28 '11 at 19:38
OK, the "less demanding" question does seem more tractable; a few possible answers follow, though none is clearly the most "nice pleasant way of parametrizing" your curve. One direction leads to the trigonometric solution of a cubic equation with all roots real; the other leads to an elliptic curve with 6-torsion, and even to an extremal elliptic K3 surface! Which if any of these is best for you is a matter of taste and of what you're trying to do with these curves.
Let $(X,Y,Z) = (\sin^2 \alpha, \phantom.\sin^2 \beta, \phantom.\sin^2 \gamma)$. Then $(X,Y,Z)$ are coordinates of an algebraic curve $$E_c : X+Y+Z = c^2, \phantom{=} X^2+Y^2+Z^2 - 2(YZ+ZX+XY) + 4XYZ = 0.$$ So far we've preserved the $S_3$ symmetry, and can recover the original variables via $\alpha = \arcsin X^{1/2} = \frac12 \arccos(1-2X)$ and likewise for $\beta,\gamma$. But this begs the question of what $E_c$ looks like, and leaves us with multivalued arcsines or arccosines. The latter problem seems inherent in another symmetry of the equation: we can translate $\alpha,\beta,\gamma$ by $a\pi,b\pi,c\pi$ for any integers $a,b,c$ with $a+b+c=0$. But we can try to do more with $E_c$.
One direction is to express everything in terms of elementary symmetric functions $\sigma_1,\sigma_2,\sigma_3$ of $X,Y,Z$, as R.Bryant did: the first equation says $\sigma_1=c^2$, and the second says $\sigma_1^2 = 4 (\sigma_3 - \sigma_2)$; so $(\sigma_1,\sigma_2,\sigma_3)$ are parametrized in terms of $\sigma_3$, and then $X,Y,Z$ are the three roots of $$0 = u^3 - \sigma_1 u^2 + \sigma_2 u - \sigma_3 = u^3 - c^2 u^2 + (\sigma_3 + \tfrac14 c^4)u - \sigma_3.$$ This is still manifestly symmetric but rather implicit. We we can solve the cubic; since it has three real roots the solution will involve trisecting some auxiliary angle $\theta$, itself given as the arccosine of some explicit but complicated algebraic function of $c$ and $\sigma_3$. The roots will then be given in terms of $c$, $\sigma_3$, and the cosines of $\theta/3$, $(\theta+2\pi)/3$, and $(\theta+4\pi)/3$, and the action of $S_3$ will correspond to replacing $\theta$ by the equivalent $\pm (\theta+2\pi n)$ for some $n \bmod 3$. This will be far from nice and pleasant (compare with the formulas for constructing a regular 13-gon using an angle trisector, as in p.192 of Gleason's Monthly article), but it will have the advantage of leaving the symmetry close to the surface.
Another direction is to consider $E_c$ on its own terms. It is an elliptic curve, so rational functions on it like $x,y,z$ can be parametrized by elliptic functions like $\wp$ and $\wp'$. Moreover $E_c$ inherits the $S_3$ action so the resulting formulas must retain this symmetry; and the periodicity of $\wp,\wp'$ may even cancel out the ambiguity in the arcsine or arccosine. That's great if you love elliptic curves, not so great if you regard $\wp$ as yet another obscure transcendental function... At least these elliptic curves are rather nice: the cyclic permutations of $X,Y,Z$ are translations by 3-torsion points of $E_c$, and there's also a 2-torsion point because switching two of the variables, say $Y \leftrightarrow Z$, has a rational fixed point where the third variable vanishes (this corresponds to taking $\alpha = 0$ and $\beta+\gamma=0$ in the original equation). So $E_c$ actually has 6-torsion. If I did this right, an equivalent equation for $E_c$ is $$y^2 = x^3 + ((c^2-3) x - (c^2-2)^2)^2,$$ which exhibits the 3-torsion points where $x=0$, and has 2-torsion at $(x,y) = -((c^2-2)^2,0)$. As it happens $E_c$ is not far from the universal elliptic curve with a 6-torsion point, which is given by $y^2 = x^3 + ((h-3)x - (h-2)^2)^2$. What's more, our substitution $h=c^2$ produces an elliptic K3 surface whose fiber $E_c$ becomes singular at the familiar points $c = 0, \phantom.\pm \sqrt2, \phantom.\pm \frac32$, and also $c=\infty$ — and the multiplicities at $c=0,\phantom.\pm\sqrt2,\phantom.\infty$ are large enough that this elliptic K3 surface is "extremal" (finite Mordell-Weil group, maximal Picard number)! Such surfaces have attracted considerable attention over the years, starting with the Miranda-Persson list of semistable extremal surfaces (Math. Z. 201 (1989), 339–361), which includes ours with multiplicity vector $[1,1,4,6,6,6]$. This makes your family of curves very nice in that context, even if it doesn't do much to answer your motivating question...
-
I tried to include this link to the Miranda-Persson paper, which however does not seem to work in the above answer: springerlink.com/content/u30268141h636x04/fulltext.pdf – Noam D. Elkies Aug 29 '11 at 5:48
Very nice, Noam! I especially like the relation with the K3 surface. I knew that the curve on $x^2$, $y^2$, $z^2$ was an elliptic curve with some symmetries, but I didn't figure out the properties of the ($8$-fold) branched cover that represents the original $xyz$-curve or the branched cover of that on which $dt$ is actually a meromorphic differential. – Robert Bryant Aug 29 '11 at 11:54
Thanks, Robert! Yes, the original curve, to say nothing of the $dt$ double cover, looks much more complicated, but M.Hardy seems willing to extract square roots of a function of one variable for free in his setting $-$ in any case he'll have to take some inverse trig function to recover his original variables $\alpha,\beta,\gamma$. – Noam D. Elkies Aug 29 '11 at 12:57
It's more a case of my not having decided what I should be willing to do........ – Michael Hardy Aug 29 '11 at 18:07
.....but I am willing to use $\wp$ and things like Jacobi's elliptic functions. If we momentarily adopt Euler's willingness to say that if $\alpha$ is an infinitely small positive number then $\sin\alpha=\alpha$, then when $c$ is an infinitely small positive number, the curve can be parametrized using the ordinary sine and cosine functions. So if $c$ is merely very small, then one should get periodic functions that are approximately those. – Michael Hardy Aug 29 '11 at 18:21 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.941217303276062, "perplexity": 237.44909751509596}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164026971/warc/CC-MAIN-20131204133346-00036-ip-10-33-133-15.ec2.internal.warc.gz"} |
http://mathhelpforum.com/algebra/227821-nth-roots-unit-proofs-question.html | # Math Help - nth roots of unity proofs question
1. ## nth roots of unity proofs question
For 4i)
i get to
$\theta = \pm k \frac{2 \pi}{n}$, where k is an integer
$k = \pm 1: \ {z}_{1} = {e}^{i2\pi/n} , \ {z}_{2} = {e}^{-i2\pi/n}$
I just proved it like that, not sure if its right
ii)
for even n
${(-z)}^{n} = {z}^{n} = 1$
but for odd n
${(-z)}^{n} = -{z}^{n} = -1$
for the sum, i got zero
for the product:
for even n i got
$(-1)\frac{n}{2} . (1)\frac{n}{2} = -\frac{{n}^{2}}{4}$ since you have (-1) half the time and 1 half the time
for odd n
$(1)\frac{n}{2} . (-1)(\frac{n}{2} +1) = -(\frac{{n}^{2}}{4} + \frac{n}{2})$ since you have one more -1 than 1
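(Editor's aside, not part of the original post: a quick numerical check of the sum and product of the n nth roots of unity, handy for testing the hand computations in this thread.)

```python
import cmath

for n in (3, 4, 5, 6):
    roots = [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]
    total = sum(roots)
    prod = 1
    for r in roots:
        prod *= r
    print(n,
          complex(round(total.real, 9), round(total.imag, 9)),
          complex(round(prod.real, 9), round(prod.imag, 9)))
```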
2. ## Re: nth roots of unity proofs question
Originally Posted by Applestrudle
Here is a suggestion on notation that can simplify this answer. Use the exponentiation notation.
$e^z=\exp(z)=|z|(\cos(\theta)+i~\sin(\theta))$ where $\theta=\text{Arg}(z))$
This notation makes conjugate notation easy: $\overline{\exp(z)}=\exp(\overline{z} )$
For the first part of this question let $\theta=\dfrac{2\pi}{n}~\&~\rho=\exp(i\theta)$
Now it is easy to list the roots: $\rho^k:~k=0,1,\cdots, k-1$, and $\overline{~\rho~}=\exp(-i\theta)$.
Note that $\prod\limits_{k = 0}^{n - 1} {{\rho ^k}} = 1$.
For the second part the n nth roots of $i$: define $\rho=\exp\left(\dfrac{\pi i}{2}\right)~\&~\xi=\exp\left(\dfrac{2\pi i}{n}\right)$
Then those roots can be listed as: $\rho\cdot\xi^k,~k=0,\cdots,k-1$.
3. ## Re: nth roots of unity proofs question
Originally Posted by Plato
Here is a suggestion on notation that can simplify this answer. Use the exponentiation notation.
$e^z=\exp(z)=|z|(\cos(\theta)+i~\sin(\theta))$ where $\theta=\text{Arg}(z))$
This notation makes conjugate notation easy: $\overline{\exp(z)}=\exp(\overline{z} )$
For the first part of this question let $\theta=\dfrac{2\pi}{n}~\&~\rho=\exp(i\theta)$
Now it is easy to list the roots: $\rho^k:~k=0,1,\cdots, k-1$, and $\overline{~\rho~}=\exp(-i\theta)$.
Note that $\prod\limits_{k = 0}^{n - 1} {{\rho ^k}} = 1$.
For the second part the n nth roots of $i$: define $\rho=\exp\left(\dfrac{\pi i}{2}\right)~\&~\xi=\exp\left(\dfrac{2\pi i}{n}\right)$
Then those roots can be listed as: $\rho\cdot\xi^k,~k=0,\cdots,k-1$.
for the sum z1 +z2 + z3+ z4+ .... i did the sum from k =1 to k=n of ${e}^{\frac{i2\pi k}{n}}$ and i got $\frac{1-{e}^{\frac{2 \pi k}{n}}}{1 - {e}^{\frac{i2 \pi}{n}}}$ i used geometric series
x + x^2 and x^3 + x^4 .... x^n = (1-x^n)/(1-x)
for the product z1.z2.z3.z4.z5 ... I used the geometric series in the exponential since ${z}_{k} = {e}^{\frac{2 \pi k}{n}}$
and i got the sum of the product as equal to ${e}^{i2\pi \frac{1-n}{n}}$
for the nth roots of i I got
for the product ${w}_{k} = {e}^{i(\frac{\pi}{2n}+\frac{2\pi k}{n})}$
so the product is e^(ipi/2n + sum of 2pik/n) for k from 1 to n
for the sum, using the geometric series, I got (2pi/n)(-n), giving -2pi ?
then the final answer for the product is ${e}^{i(\frac{\pi}{2n} - 2\pi)}$
4. ## Re: nth roots of unity proofs question
Originally Posted by Plato
Here is a suggestion on notation that can simplify this answer. Use the exponentiation notation.
$e^z=\exp(z)=|z|(\cos(\theta)+i~\sin(\theta))$ where $\theta=\text{Arg}(z))$
This notation makes conjugate notation easy: $\overline{\exp(z)}=\exp(\overline{z} )$
For the first part of this question let $\theta=\dfrac{2\pi}{n}~\&~\rho=\exp(i\theta)$
Now it is easy to list the roots: $\rho^k:~k=0,1,\cdots, k-1$, and $\overline{~\rho~}=\exp(-i\theta)$.
Note that $\prod\limits_{k = 0}^{n - 1} {{\rho ^k}} = 1$.
For the second part the n nth roots of $i$: define $\rho=\exp\left(\dfrac{\pi i}{2}\right)~\&~\xi=\exp\left(\dfrac{2\pi i}{n}\right)$
Then those roots can be listed as: $\rho\cdot\xi^k,~k=0,\cdots,k-1$.
for iv) i got the product P equals
P = w1.w2.w3.w4.w5.w6....wn (the 1,2,3...n are subs)
since -w* is also a solution, you can say wk = -wk* so they repeat themselves (is this logic correct?) and
P = (w1)^2 (w2)^2 (w3)^2....(w n/2)^2
$P = {e}^{i(\pi\frac{n}{2} + 4\pi + 8\pi + 12\pi + \ldots)}$
in the exponent there is the sum from k =1 to k = (n/2) of 4k.pi
but i don't know how to simplify the sum, do i just leave it as a sum, is this even correct? | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 13, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9953600168228149, "perplexity": 1029.1141194311479}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500824391.7/warc/CC-MAIN-20140820021344-00376-ip-10-180-136-8.ec2.internal.warc.gz"} |
https://www.energy.dtu.dk/english/News?at=%7B559A7F23-40D3-470C-8825-436C20C291CF%7D.%7B3F2A7F37-83AF-4BD8-A085-0BC04DA3EA92%7D.%7B18E4440F-E64C-4CF1-BDD9-860D6747C02D%7D.%7B0DE95AE4-41AB-4D01-9EB0-67441B7C2450%7D.%7B11111111-1111-1111-1111-111111111111%7D.%7BFDBAEFE2-1258-4550-AA8B-B810B393302E%7D%7C%7B559A7F23-40D3-470C-8825-436C20C291CF%7D.%7B3F2A7F37-83AF-4BD8-A085-0BC04DA3EA92%7D.%7B18E4440F-E64C-4CF1-BDD9-860D6747C02D%7D.%7B0DE95AE4-41AB-4D01-9EB0-67441B7C2450%7D.%7B11111111-1111-1111-1111-111111111111%7D.%7BC0523FAD-C6BC-42D9-A5FA-1EDE3DF06391%7D%7C%7B559A7F23-40D3-470C-8825-436C20C291CF%7D.%7B3F2A7F37-83AF-4BD8-A085-0BC04DA3EA92%7D.%7B18E4440F-E64C-4CF1-BDD9-860D6747C02D%7D.%7B0DE95AE4-41AB-4D01-9EB0-67441B7C2450%7D.%7B11111111-1111-1111-1111-111111111111%7D.%7B67144364-6C37-4CCE-8BDD-0BB1CBDC1514%7D | # News
2017
24 NOV
## Industry impressed by high standard at PhD Symposium
DTU Energy held its annual PhD symposium with industrial participation at DTU Lyngby Campus, where PhD students could discuss their research with scientists, engineers...
Energy Fuel cells Electricity supply Energy efficiency Energy storage Energy production Energy systems Fossil fuels Solar energy Electrochemistry Micro and nanotechnology
2016
03 NOV
## Get insight into the latest research within sustainable energy technologies
Do not miss this unique opportunity to get insight into the latest research within sustainable energy technologies – join DTU Energy’s annual PhD symposium...
Energy Bioenergy Fuel cells Energy efficiency Energy storage Energy production Solar energy Magnets Electrochemistry
2015
30 NOV
## Industrial participants were impressed by high standard at PhD symposium
DTU Energy's third annual PhD symposium with industrial participation showed high quality research in energy technologies. Industrial participants were impressed.
Energy Bioenergy Fuel cells Electricity supply Energy efficiency Energy storage Energy production Energy systems Fossil fuels Solar energy Sensors Electrochemistry
2014
11 NOV
## Get the insight on the latest research within sustainable energy technologies...
DTU Energy Conversion hereby invites interested PhD students and companies to the department’s second annual PhD symposium with industry participation at DTU Lyngby...
Energy storage Energy Fuel cells Electrochemistry Energy production Energy efficiency Energy systems Solar energy Physics
Get updated on news that match your filter. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9514882564544678, "perplexity": 19229.705592440238}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141740670.93/warc/CC-MAIN-20201204162500-20201204192500-00352.warc.gz"} |
https://openheart.bmj.com/highwire/markup/141039/expansion?width=1000&height=500&iframe=true&postprocessors=highwire_tables%2Chighwire_reclass%2Chighwire_figures%2Chighwire_math%2Chighwire_inline_linked_media%2Chighwire_embed | Table 3
Medication at admission (single platelet inhibitor or less, warfarin, DAPT or DOAC)
| Medication | Single platelet inhibitor or less | Warfarin | DAPT | DOAC |
| --- | --- | --- | --- | --- |
| Number of operations, n | 108 | 8 | 13 | 6 |
| DOAC, %* | 0 | 0 | 0 | 100 |
| Statins, % | 18 | 25 | 25 | 17 |
| Nitrates, % | 6 | 12 | 8 | 17 |
| Warfarin, %* | 0 | 100 | 0 | 0 |
| Heparin, %* | 4 | 25 | 62 | 0 |
| Corticosteroids, % | 5 | 14 | 15 | 0 |
| Calcium antagonists, %* | 10 | 38 | 17 | 50 |
| Beta blockers, %* | 18 | 50 | 8 | 67 |
| Angiotensin receptor blockers, % | 16 | 25 | 27 | 50 |
| Aspirin, %* | 25 | 50 | 100 | 17 |
| Other immunosuppressants, % | 2 | 0 | 0 | 0 |
| Other platelet inhibitor than aspirin, %* | 2 | 25 | 100 | 17 |
| Angiotensin-converting enzyme inhibitors, % | 10 | 14 | 18 | 0 |
• *p<0.05
• DAPT, dual anti-platelet inhibitors; DOAC, direct oral anticoagulants. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8600135445594788, "perplexity": 10506.883063946785}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107871231.19/warc/CC-MAIN-20201020080044-20201020110044-00329.warc.gz"} |
https://www.mathworks.com/help/stats/partialcorr.html | partialcorr
Linear or rank partial correlation coefficients
Syntax
`rho = partialcorr(x)`
`rho = partialcorr(x,z)`
`rho = partialcorr(x,y,z)`
`rho = partialcorr(___,Name,Value)`
`[rho,pval] = partialcorr(___)`
Description
example
`rho = partialcorr(x)` returns the sample linear partial correlation coefficients between pairs of variables in `x`, controlling for the remaining variables in `x`.
example
`rho = partialcorr(x,z)` returns the sample linear partial correlation coefficients between pairs of variables in `x`, controlling for the variables in `z`.
example
`rho = partialcorr(x,y,z)` returns the sample linear partial correlation coefficients between pairs of variables in `x` and `y`, controlling for the variables in `z`.
example
`rho = partialcorr(___,Name,Value)` returns the sample linear partial correlation coefficients with additional options specified by one or more name-value pair arguments, using input arguments from any of the previous syntaxes. For example, you can specify whether to use Pearson or Spearman partial correlations, or specify how to treat missing values.
example
`[rho,pval] = partialcorr(___)` also returns a matrix `pval` of p-values for testing the hypothesis of no partial correlation against the one- or two-sided alternative that there is a nonzero partial correlation.
Examples
Compute partial correlation coefficients between pairs of variables in the input matrix.
Load the sample data. Convert the genders in `hospital.Sex` to numeric group identifiers.
```load hospital; hospital.SexID = grp2idx(hospital.Sex);```
Create an input matrix containing the sample data.
`x = [hospital.SexID hospital.Age hospital.Smoker hospital.Weight];`
Each row in `x` contains a patient’s gender, age, smoking status, and weight.
Compute partial correlation coefficients between pairs of variables in `x`, while controlling for the effects of the remaining variables in `x`.
`rho = partialcorr(x)`
```
rho = 4×4

    1.0000   -0.0105    0.0273    0.9421
   -0.0105    1.0000    0.0419    0.0369
    0.0273    0.0419    1.0000    0.0451
    0.9421    0.0369    0.0451    1.0000
```
The matrix `rho` indicates, for example, a correlation of 0.9421 between gender and weight after controlling for all other variables in `x`. You can return the $p$-values as a second output, and examine them to confirm whether these correlations are statistically significant.
For a clearer display, create a table with appropriate variable and row labels.
```
rho = array2table(rho, ...
    'VariableNames',{'SexID','Age','Smoker','Weight'},...
    'RowNames',{'SexID','Age','Smoker','Weight'});
disp('Partial Correlation Coefficients')
```
```Partial Correlation Coefficients ```
`disp(rho)`
```
              SexID        Age       Smoker     Weight
             ________   ________   ________   ________
    SexID           1   -0.01052   0.027324     0.9421
    Age      -0.01052          1   0.041945   0.036873
    Smoker   0.027324   0.041945          1   0.045106
    Weight     0.9421   0.036873   0.045106          1
```
Test for partial correlation between pairs of variables in the input matrix, while controlling for the effects of a second set of variables.
Load the sample data. Convert the genders in `hospital.Sex` to numeric group identifiers.
```load hospital; hospital.SexID = grp2idx(hospital.Sex);```
Create two matrices containing the sample data.
```x = [hospital.Age hospital.BloodPressure]; z = [hospital.SexID hospital.Smoker hospital.Weight];```
The `x` matrix contains the variables to test for partial correlation. The `z` matrix contains the variables to control for. The measurements for `BloodPressure` are contained in two columns: The first column contains the upper (systolic) number, and the second column contains the lower (diastolic) number. `partialcorr` treats each column as a separate variable.
Test for partial correlation between pairs of variables in `x`, while controlling for the effects of the variables in `z`. Compute the correlation coefficients.
`[rho,pval] = partialcorr(x,z)`
```
rho = 3×3

    1.0000    0.1300    0.0462
    0.1300    1.0000    0.0012
    0.0462    0.0012    1.0000
```
```
pval = 3×3

         0    0.2044    0.6532
    0.2044         0    0.9903
    0.6532    0.9903         0
```
The large values in `pval` indicate that there is no significant correlation between age and either blood pressure measurement after controlling for gender, smoking status, and weight.
For a clearer display, create tables with appropriate variable and row labels.
```
rho = array2table(rho, ...
    'VariableNames',{'Age','BPTop','BPBottom'},...
    'RowNames',{'Age','BPTop','BPBottom'});
pval = array2table(pval, ...
    'VariableNames',{'Age','BPTop','BPBottom'},...
    'RowNames',{'Age','BPTop','BPBottom'});
disp('Partial Correlation Coefficients')
```
```Partial Correlation Coefficients ```
`disp(rho)`
```
                 Age        BPTop      BPBottom
              ________   _________   _________
    Age              1        0.13    0.046202
    BPTop         0.13           1   0.0012475
    BPBottom  0.046202   0.0012475           1
```
`disp('p-values')`
```p-values ```
`disp(pval)`
```
                 Age       BPTop    BPBottom
              _______    _______    _______
    Age             0    0.20438    0.65316
    BPTop     0.20438          0    0.99032
    BPBottom  0.65316    0.99032          0
```
Test for partial correlation between pairs of variables in the `x` and `y` input matrices, while controlling for the effects of a third set of variables.
Load the sample data. Convert the genders in `hospital.Sex` to numeric group identifiers.
```load hospital; hospital.SexID = grp2idx(hospital.Sex);```
Create three matrices containing the sample data.
```x = [hospital.BloodPressure]; y = [hospital.Weight hospital.Age]; z = [hospital.SexID hospital.Smoker];```
`partialcorr` can test for partial correlation between the pairs of variables in `x` (the systolic and diastolic blood pressure measurements) and `y` (weight and age), while controlling for the variables in `z` (gender and smoking status). The measurements for `BloodPressure` are contained in two columns: The first column contains the upper (systolic) number, and the second column contains the lower (diastolic) number. `partialcorr` treats each column as a separate variable.
Test for partial correlation between pairs of variables in `x` and `y`, while controlling for the effects of the variables in `z`. Compute the correlation coefficients.
`[rho,pval] = partialcorr(x,y,z)`
```
rho = 2×2

   -0.0257    0.1289
    0.0292    0.0472
```
```
pval = 2×2

    0.8018    0.2058
    0.7756    0.6442
```
The results in `pval` indicate that, after controlling for gender and smoking status, there is no significant correlation between either of a patient’s blood pressure measurements and that patient’s weight or age.
For a clearer display, create tables with appropriate variable and row labels.
```
rho = array2table(rho, ...
    'RowNames',{'BPTop','BPBottom'},...
    'VariableNames',{'Weight','Age'});
pval = array2table(pval, ...
    'RowNames',{'BPTop','BPBottom'},...
    'VariableNames',{'Weight','Age'});
disp('Partial Correlation Coefficients')
```
```Partial Correlation Coefficients ```
`disp(rho)`
```
                Weight       Age
              ________   ________
    BPTop     -0.02568    0.12893
    BPBottom  0.029168   0.047226
```
`disp('p-values')`
```p-values ```
`disp(pval)`
```
               Weight       Age
              _______    _______
    BPTop     0.80182     0.2058
    BPBottom  0.77556    0.64424
```
Test the hypothesis that pairs of variables have no correlation, against the alternative hypothesis that the correlation is greater than 0.
Load the sample data. Convert the genders in `hospital.Sex` to numeric group identifiers.
```load hospital; hospital.SexID = grp2idx(hospital.Sex);```
Create three matrices containing the sample data.
```x = [hospital.BloodPressure]; y = [hospital.Weight hospital.Age]; z = [hospital.SexID hospital.Smoker];```
`partialcorr` can test for partial correlation between the pairs of variables in `x` (the systolic and diastolic blood pressure measurements) and `y` (weight and age), while controlling for the variables in `z` (gender and smoking status). The measurements for `BloodPressure` are contained in two columns: The first column contains the upper (systolic) number, and the second column contains the lower (diastolic) number. `partialcorr` treats each column as a separate variable.
Compute the correlation coefficients using a right-tailed test.
`[rho,pval] = partialcorr(x,y,z,'Tail','right')`
```
rho = 2×2

   -0.0257    0.1289
    0.0292    0.0472
```
```
pval = 2×2

    0.5991    0.1029
    0.3878    0.3221
```
The results in `pval` indicate that `partialcorr` does not reject the null hypothesis of nonzero correlations between the variables in `x` and `y`, after controlling for the variables in `z`, when the alternative hypothesis is that the correlations are greater than 0.
For a clearer display, create tables with appropriate variable and row labels.
```
rho = array2table(rho, ...
    'RowNames',{'BPTop','BPBottom'},...
    'VariableNames',{'Weight','Age'});
pval = array2table(pval, ...
    'RowNames',{'BPTop','BPBottom'},...
    'VariableNames',{'Weight','Age'});
disp('Partial Correlation Coefficients')
```
```Partial Correlation Coefficients ```
`disp(rho)`
```
                Weight       Age
              ________   ________
    BPTop     -0.02568    0.12893
    BPBottom  0.029168   0.047226
```
`disp('p-values')`
```p-values ```
`disp(pval)`
```
               Weight       Age
              _______    _______
    BPTop     0.59909     0.1029
    BPBottom  0.38778    0.32212
```
Input Arguments
Data matrix, specified as an n-by-px matrix. The rows of `x` correspond to observations, and the columns correspond to variables.
Data Types: `single` | `double`
Data matrix, specified as an n-by-py matrix. The rows of `y` correspond to observations, and the columns correspond to variables.
Data Types: `single` | `double`
Data matrix, specified as an n-by-pz matrix. The rows of `z` correspond to observations, and columns correspond to variables.
Data Types: `single` | `double`
Name-Value Pair Arguments
Specify optional comma-separated pairs of `Name,Value` arguments. `Name` is the argument name and `Value` is the corresponding value. `Name` must appear inside quotes. You can specify several name and value pair arguments in any order as `Name1,Value1,...,NameN,ValueN`.
Example: `'Type','Spearman','Rows','complete'` computes Spearman partial correlations using only the data in rows that contain no missing values.
Type of partial correlations to compute, specified as the comma-separated pair consisting of `'Type'` and one of the following.
• `'Pearson'`: Compute Pearson (linear) partial correlations.
• `'Spearman'`: Compute Spearman (rank) partial correlations.
Example: `'Type','Spearman'`
Rows to use in computation, specified as the comma-separated pair consisting of `'Rows'` and one of the following.
• `'all'`: Use all rows of the input regardless of missing values (`NaN`s).
• `'complete'`: Use only rows of the input with no missing values.
• `'pairwise'`: Compute `rho(i,j)` using rows with no missing values in column `i` or `j`.
Example: `'Rows','complete'`
Alternative hypothesis to test against, specified as the comma-separated pair consisting of `'Tail'` and one of the following.
• `'both'`: Test the alternative hypothesis that the correlation is not 0.
• `'right'`: Test the alternative hypothesis that the correlation is greater than 0.
• `'left'`: Test the alternative hypothesis that the correlation is less than 0.
Example: `'Tail','right'`
Output Arguments
Sample linear partial correlation coefficients, returned as a matrix.
• If you input only an `x` matrix, `rho` is a symmetric px-by-px matrix. The (i,j)th entry is the sample linear partial correlation between the i-th and j-th columns in `x`.
• If you input `x` and `z` matrices, `rho` is a symmetric px-by-px matrix. The (i,j)th entry is the sample linear partial correlation between the ith and jth columns in `x`, controlled for the variables in `z`.
• If you input `x`, `y`, and `z` matrices, `rho` is a px-by-py matrix, where the (i,j)th entry is the sample linear partial correlation between the ith column in `x` and the jth column in `y`, controlled for the variables in `z`.
If the covariance matrix of `[x,z]` is
$$S=\begin{pmatrix} S_{xx} & S_{xz}\\ S_{xz}^{T} & S_{zz}\end{pmatrix},$$
then the partial correlation matrix of `x`, controlling for `z`, can be defined formally as a normalized version of the covariance matrix $S_{xx} - S_{xz} S_{zz}^{-1} S_{xz}^{T}$.
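As a consistency check (this is simply the block formula above applied directly, and not necessarily how `partialcorr` computes it internally), the coefficients returned by `partialcorr(x,z)` can be reproduced from the sample covariance. Here `x` and `z` stand for any numeric data matrices with the same number of rows:
```matlab
S   = cov([x z]);                          % sample covariance of [x, z]
px  = size(x,2);
Sxx = S(1:px, 1:px);
Sxz = S(1:px, px+1:end);
Szz = S(px+1:end, px+1:end);
P   = Sxx - Sxz*(Szz\Sxz');                % partial covariance of x given z
rho_manual = P ./ sqrt(diag(P)*diag(P)');  % normalize to correlations
% rho_manual should agree with partialcorr(x,z) up to rounding error
```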
p-values, returned as a matrix. Each element of `pval` is the p-value for the corresponding element of `rho`.
If `pval(i,j)` is small, then the corresponding partial correlation `rho(i,j)` is statistically significantly different from 0.
`partialcorr` computes p-values for linear and rank partial correlations using a Student's t distribution for a transformation of the correlation. This is exact for linear partial correlation when `x` and `z` are normal, but is a large-sample approximation otherwise.
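The documentation does not spell that transformation out; the sketch below is the standard textbook t-test for a partial correlation and is an assumption on my part rather than a statement of exactly what `partialcorr` does. Here `n` is the number of observations, `k` the number of variables controlled for, and `r` a partial correlation coefficient:
```matlab
df = n - k - 2;                        % residual degrees of freedom
t  = r .* sqrt(df ./ (1 - r.^2));      % t statistic for H0: partial correlation = 0
p_two_sided = 2 * tcdf(-abs(t), df);   % two-sided p-value
```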
References
[1] Stuart, Alan, K. Ord, and S. Arnold. Kendall's Advanced Theory of Statistics. 6th edition, Volume 2A, Chapter 28, Wiley, 2004.
[2] Fisher, Ronald A. "The Distribution of the Partial Correlation Coefficient." Metron 3 (1924): 329–332.
http://aas.org/archives/BAAS/v30n3/dps98/356.htm | Session 29. Comets I
Contributed Oral Parallel Session, Wednesday, October 14, 1998, 2:00-3:20pm, Madison Ballroom C
## [29.08] First Maps of Comet Hale-Bopp at 60 and 175 µm
S.B. Peschke (MPI für Kernphysik), M. Stickel (MPI für Astronomie), I. Heinrichsen (IPAC), H. Böhnhardt (ESO), C. M. Lisse (University of Maryland), E. Grün (MPI für Kernphysik), D. J. Osip (MIT)
First maps of a comet at 60 and 175 µm were obtained using ISOPHOT, the photometer of the Infrared Space Observatory (ISO). The observations were carried out on December 30, 1997, mapping an area of 9′ × 9′ centered on comet Hale-Bopp at both filters. Each measurement consisted of 3 individual submaps offset by a third of a pixel in both directions to increase the final resolution of the maps. The final maps were composed of the submaps with the use of a drizzle algorithm. Within the same orbit, 3–175 µm filter photometry on comet Hale-Bopp was performed, as well as multi-aperture photometry near the peak wavelength of thermal emission. The same photometric sequence was repeated as a 'shadow observation' at the same position as that tracked in the initial sequence for precise background subtraction. Quasi-simultaneous observations in the near-IR were obtained with the 3.6 m telescope at La Silla, Chile.
From the 60 and 175 µm maps, radial intensity profiles have been derived, which are compared to the ones obtained from the near-IR data and to the results of multi-aperture photometry. Since dust grains emit most efficiently at wavelengths close to their own size, the emission in the maps observed with the two filters is dominated by the thermal emission of different-sized grains. From the comparison of the different wavelength maps, indications of the preferred concentration of different grain sizes can be derived. Grain size distribution modeling has been carried out for the spectral energy distribution derived with multi-filter photometry to get an indication of the coma composition, which will in turn be used as input for dynamical modeling. First results will be presented.
https://www.nag.com/numeric/nl/nagdoc_26.2/nagdoc_fl26.2/html/f01/f01fdf.html | # NAG Library Routine Document
## 1Purpose
f01fdf computes the matrix exponential, ${e}^{A}$, of a complex Hermitian $n$ by $n$ matrix $A$.
## 2Specification
Fortran Interface
Subroutine f01fdf ( uplo, n, a, lda, ifail)
Integer, Intent (In)                  :: n, lda
Integer, Intent (Inout)               :: ifail
Complex (Kind=nag_wp), Intent (Inout) :: a(lda,*)
Character (1), Intent (In)            :: uplo
#include <nagmk26.h>
void f01fdf_ (const char *uplo, const Integer *n, Complex a[], const Integer *lda, Integer *ifail, const Charlen length_uplo)
## 3Description
${e}^{A}$ is computed using a spectral factorization of $A$
$$A = Q D Q^H,$$
where $D$ is the diagonal matrix whose diagonal elements, ${d}_{i}$, are the eigenvalues of $A$, and $Q$ is a unitary matrix whose columns are the eigenvectors of $A$. ${e}^{A}$ is then given by
$$e^A = Q e^D Q^H,$$
where ${e}^{D}$ is the diagonal matrix whose $i$th diagonal element is ${e}^{{d}_{i}}$. See for example Section 4.5 of Higham (2008).
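As an illustration only (this is not the NAG implementation), the spectral approach above can be sketched by calling the LAPACK routine zheev directly, which f01fdf itself uses via f08fnf. The subroutine name exp_hermitian is made up for this sketch, and kind(0.0d0) stands in for nag_wp:
```fortran
Subroutine exp_hermitian(n,a,expa,info)
  Integer, Parameter              :: wp = kind(0.0d0)
  Integer, Intent (In)            :: n
  Complex (Kind=wp), Intent (In)  :: a(n,n)
  Complex (Kind=wp), Intent (Out) :: expa(n,n)
  Integer, Intent (Out)           :: info
  Real (Kind=wp)                  :: w(n), rwork(3*n-2)
  Complex (Kind=wp)               :: q(n,n), work(2*n)
  Integer                         :: i
  q = a
  Call zheev('V','U',n,q,n,w,work,size(work),rwork,info)   ! A = Q D Q^H
  If (info/=0) Return
  Do i = 1, n
    expa(:,i) = q(:,i)*exp(w(i))                            ! columns of Q e^D
  End Do
  expa = matmul(expa,conjg(transpose(q)))                   ! (Q e^D) Q^H
End Subroutine exp_hermitian
```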
## 4References
Higham N J (2005) The scaling and squaring method for the matrix exponential revisited SIAM J. Matrix Anal. Appl. 26(4) 1179–1193
Higham N J (2008) Functions of Matrices: Theory and Computation SIAM, Philadelphia, PA, USA
Moler C B and Van Loan C F (2003) Nineteen dubious ways to compute the exponential of a matrix, twenty-five years later SIAM Rev. 45 3–49
## 5Arguments
1: $\mathbf{uplo}$ – Character(1) Input
On entry: if ${\mathbf{uplo}}=\text{'U'}$, the upper triangle of the matrix $A$ is stored.
If ${\mathbf{uplo}}=\text{'L'}$, the lower triangle of the matrix $A$ is stored.
Constraint: ${\mathbf{uplo}}=\text{'U'}$ or $\text{'L'}$.
2: $\mathbf{n}$ – Integer Input
On entry: $n$, the order of the matrix $A$.
Constraint: ${\mathbf{n}}\ge 0$.
3: $\mathbf{a}\left({\mathbf{lda}},*\right)$ – Complex (Kind=nag_wp) array Input/Output
Note: the second dimension of the array a must be at least ${\mathbf{n}}$.
On entry: the $n$ by $n$ Hermitian matrix $A$.
• If ${\mathbf{uplo}}=\text{'U'}$, the upper triangular part of $A$ must be stored and the elements of the array below the diagonal are not referenced.
• If ${\mathbf{uplo}}=\text{'L'}$, the lower triangular part of $A$ must be stored and the elements of the array above the diagonal are not referenced.
On exit: if ${\mathbf{ifail}}={\mathbf{0}}$, the upper or lower triangular part of the $n$ by $n$ matrix exponential, ${e}^{A}$.
4: $\mathbf{lda}$ – Integer Input
On entry: the first dimension of the array a as declared in the (sub)program from which f01fdf is called.
Constraint: ${\mathbf{lda}}\ge {\mathbf{n}}$.
5: $\mathbf{ifail}$ – Integer Input/Output
On entry: ifail must be set to $0$, $-1$ or $1$. If you are unfamiliar with this argument you should refer to Section 3.4 in How to Use the NAG Library and its Documentation for details.
For environments where it might be inappropriate to halt program execution when an error is detected, the value $-1$ or $1$ is recommended. If the output of error messages is undesirable, then the value $1$ is recommended. Otherwise, if you are not familiar with this argument, the recommended value is $0$. When the value $-1$ or $1$ is used it is essential to test the value of ifail on exit.
On exit: ${\mathbf{ifail}}={\mathbf{0}}$ unless the routine detects an error or a warning has been flagged (see Section 6).
## 6Error Indicators and Warnings
If on entry ${\mathbf{ifail}}=0$ or $-1$, explanatory error messages are output on the current error message unit (as defined by x04aaf).
Errors or warnings detected by the routine:
${\mathbf{ifail}}>0$
The computation of the spectral factorization failed to converge.
If ${\mathbf{ifail}}=i$, the algorithm to compute the spectral factorization failed to converge; $i$ off-diagonal elements of an intermediate tridiagonal form did not converge to zero (see f08fnf (zheev)).
${\mathbf{ifail}}=-1$
On entry, ${\mathbf{uplo}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{uplo}}=\text{'L'}$ or $\text{'U'}$.
${\mathbf{ifail}}=-2$
On entry, ${\mathbf{n}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{n}}\ge 0$.
${\mathbf{ifail}}=-3$
${\mathbf{ifail}}=-4$
On entry, ${\mathbf{lda}}=〈\mathit{\text{value}}〉$ and ${\mathbf{n}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{lda}}\ge {\mathbf{n}}$.
${\mathbf{ifail}}=-99$
See Section 3.9 in How to Use the NAG Library and its Documentation for further information.
${\mathbf{ifail}}=-399$
Your licence key may have expired or may not have been installed correctly.
See Section 3.8 in How to Use the NAG Library and its Documentation for further information.
${\mathbf{ifail}}=-999$
Dynamic memory allocation failed.
See Section 3.7 in How to Use the NAG Library and its Documentation for further information.
## 7Accuracy
For an Hermitian matrix $A$, the matrix ${e}^{A}$ has the relative condition number
$$\kappa(A) = \|A\|_2,$$
which is the minimal possible for the matrix exponential and so the computed matrix exponential is guaranteed to be close to the exact matrix. See Section 10.2 of Higham (2008) for details and further discussion.
## 8Parallelism and Performance
f01fdf is threaded by NAG for parallel execution in multithreaded implementations of the NAG Library.
f01fdf makes calls to BLAS and/or LAPACK routines, which may be threaded within the vendor library used by this implementation. Consult the documentation for the vendor library for further information.
Please consult the X06 Chapter Introduction for information on how to control and interrogate the OpenMP environment used within this routine. Please also consult the Users' Note for your implementation for any additional implementation-specific information.
## 9Further Comments
The integer allocatable memory required is n, the real allocatable memory required is n and the complex allocatable memory required is approximately $\left({\mathbf{n}}+\mathit{nb}+1\right)×{\mathbf{n}}$, where nb is the block size required by f08fnf (zheev).
The cost of the algorithm is $O\left({n}^{3}\right)$.
As well as the excellent book cited above, the classic reference for the computation of the matrix exponential is Moler and Van Loan (2003).
## 10Example
This example finds the matrix exponential of the Hermitian matrix
$$A = \begin{pmatrix} 1 & 2+2i & 3+2i & 4+3i \\ 2-2i & 1 & 2+2i & 3+2i \\ 3-2i & 2-2i & 1 & 2+2i \\ 4-3i & 3-2i & 2-2i & 1 \end{pmatrix}.$$
### 10.1Program Text
Program Text (f01fdfe.f90)
### 10.2Program Data
Program Data (f01fdfe.d)
### 10.3Program Results
Program Results (f01fdfe.r)
https://isabelle.in.tum.de/repos/isabelle/rev/f556a7a9080c | author nipkow Tue, 11 Apr 2017 10:29:25 +0200 changeset 65438 f556a7a9080c parent 65436 1fd2dca8eb60 (current diff) parent 65437 b8fc7e2e1b35 (diff) child 65464 f3cd78ba687c
merged
--- a/src/Doc/Prog_Prove/Isar.thy Mon Apr 10 18:01:46 2017 +0200
+++ b/src/Doc/Prog_Prove/Isar.thy Tue Apr 11 10:29:25 2017 +0200
@@ -881,6 +881,45 @@
\end{enumerate}
\index{structural induction|)}
+
+\ifsem\else
+\subsection{Computation Induction}
+\index{rule induction}
+
+In \autoref{sec:recursive-funs} we introduced computation induction and
+its realization in Isabelle: the definition
+of a recursive function \<open>f\<close> via \isacom{fun} proves the corresponding computation
+induction rule called \<open>f.induct\<close>. Induction with this rule looks like in
+\autoref{sec:recursive-funs}, but now with \isacom{proof} instead of \isacom{apply}:
+\begin{quote}
+\isacom{proof} (\<open>induction x\<^sub>1 \<dots> x\<^sub>k rule: f.induct\<close>)
+\end{quote}
+Just as for structural induction, this creates several cases, one for each
+defining equation for \<open>f\<close>. By default (if the equations have not been named
+by the user), the cases are numbered. That is, they are started by
+\begin{quote}
+\isacom{case} (\<open>i x y ...\<close>)
+\end{quote}
+where \<open>i = 1,...,n\<close>, \<open>n\<close> is the number of equations defining \<open>f\<close>,
+and \<open>x y ...\<close> are the variables in equation \<open>i\<close>. Note the following:
+\begin{itemize}
+\item
+Although \<open>i\<close> is an Isar name, \<open>i.IH\<close> (or similar) is not. You need
+double quotes: "\<open>i.IH\<close>". When indexing the name, write "\<open>i.IH\<close>"(1),
+not "\<open>i.IH\<close>(1)".
+\item
+If defining equations for \<open>f\<close> overlap, \isacom{fun} instantiates them to make
+them nonoverlapping. This means that one user-provided equation may lead to
+several equations and thus to several cases in the induction rule.
+These have names of the form "\<open>i_j\<close>", where \<open>i\<close> is the number of the original
+equation and the system-generated \<open>j\<close> indicates the subcase.
+\end{itemize}
+In Isabelle/jEdit, the \<open>induction\<close> proof method displays a proof skeleton
+with all \isacom{case}s. This is particularly useful for computation induction
+and the following rule induction.
+\fi
+
+
\subsection{Rule Induction}
\index{rule induction|(}
https://habr.com/en/company/postgrespro/blog/504498/ | # Locks in PostgreSQL: 3. Other locks
We've already discussed some object-level locks (specifically, relation-level locks), as well as row-level locks with their connection to object-level locks and also explored wait queues, which are not always fair.
We have a hodgepodge this time. We'll start with deadlocks (actually, I planned to discuss them last time, but that article was excessively long in itself), then briefly review the remaining object-level locks and finally discuss predicate locks.
# Deadlocks
When using locks, we can confront a deadlock. It occurs when one transaction tries to acquire a resource that is already in use by another transaction, while the second transaction tries to acquire a resource that is in use by the first. The figure on the left below illustrates this: solid-line arrows indicate acquired resources, while dashed-line arrows show attempts to acquire a resource that is already in use.
To visualize a deadlock, it is convenient to build the wait-for graph. To do this, we remove specific resources, leave only transactions and indicate which transaction waits for which other. If a graph contains a cycle (from a vertex, we can get to itself in a walk along arrows), this is a deadlock.
A deadlock can certainly occur not only for two transactions, but for any larger number of them.
If a deadlock occured, the involved transactions can do nothing but wait infinitely. Therefore, all DBMS, including PostgreSQL, track locks automatically.
The check, however, requires a certain effort, and it's undesirable to make it each time a new lock is requested (deadlocks are pretty infrequent after all). So, when a process tries to acquire a lock, but cannot, it queues and «falls asleep», but sets the timer to the value specified in the deadlock_timeout parameter (1 second by default). If the resource gets free earlier, this is fine and we skimp on the check. But if on expiration of deadlock_timeout, the wait continues, the waiting process will wake up and initiate the check.
If the check (which consists in building the wait-for graph and searching it for cycles) does not detect deadlocks, it continues sleeping, this time «until final victory».
Earlier, I was fairly reproached in the comments for not mentioning the lock_timeout parameter, which affects any operator and allows avoiding an infinitely long wait: if a lock cannot be acquired during the time specified, the operator terminates with a lock_not_available error. Do not confuse this parameter with statement_timeout, which limits the total time to execute the operator, no matter whether the latter waits for a lock or does a regular work.
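For illustration, both parameters can be set per session; the values here are arbitrary, and lock_timeout caps the wait for any single lock while statement_timeout caps the statement as a whole:
=> SET lock_timeout = '2s';
=> SET statement_timeout = '30s';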
But if a deadlock is detected, one of the transactions (in most cases, the one that initiated the check) is forced to abort. This releases the locks it acquired and enables other transactions to continue.
Deadlocks usually mean that the application is designed incorrectly. There are two ways to detect such situations: first, messages will occur in the server log and second, the value of pg_stat_database.deadlocks will increase.
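For example, a quick way to look at the counter for the current database:
=> SELECT datname, deadlocks FROM pg_stat_database WHERE datname = current_database();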
Usually deadlocks are caused by an inconsistent order of locking table rows.
Let's consider a simple example. The first transaction is going to transfer 100 rubles from the first account to the second one. To this end, the transaction reduces the first account:
=> BEGIN;
=> UPDATE accounts SET amount = amount - 100.00 WHERE acc_no = 1;
UPDATE 1
At the same time, the second transaction is going to transfer 10 rubles from the second account to the first one. And it starts with reducing the second account:
| => BEGIN;
| => UPDATE accounts SET amount = amount - 10.00 WHERE acc_no = 2;
| UPDATE 1
Now the first transaction tries to increase the second account, but detects a lock on the row.
=> UPDATE accounts SET amount = amount + 100.00 WHERE acc_no = 2;
Then the second transaction tries to increase the first account, but also gets blocked.
| => UPDATE accounts SET amount = amount + 10.00 WHERE acc_no = 1;
So a circular wait arises, which won't end on its own. In a second, the first transaction, which cannot access the resource yet, initiates a check for a deadlock and is forced to abort by the server.
ERROR: deadlock detected
DETAIL: Process 16477 waits for ShareLock on transaction 530695; blocked by process 16513.
Process 16513 waits for ShareLock on transaction 530694; blocked by process 16477.
HINT: See server log for query details.
CONTEXT: while updating tuple (0,2) in relation "accounts"
Now the second transaction can continue.
| UPDATE 1
| => ROLLBACK;
=> ROLLBACK;
The correct way to perform such operations is to lock resources in the same order. For example: in this case, accounts can be locked in ascending order of their numbers.
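One possible sketch of such an ordering for the transfer above: take both row locks up front, in ascending acc_no order, and only then perform the updates (this is just one way to do it).
=> BEGIN;
=> SELECT * FROM accounts WHERE acc_no IN (1, 2) ORDER BY acc_no FOR UPDATE;
=> UPDATE accounts SET amount = amount - 100.00 WHERE acc_no = 1;
=> UPDATE accounts SET amount = amount + 100.00 WHERE acc_no = 2;
=> COMMIT;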
## Deadlock of two UPDATE commands
Sometimes we can get a deadlock in situations where, seemingly, it could never occur. For example: it is convenient and usual to treat SQL commands as atomic, but the UPDATE command locks rows as they are updated. This does not happen instantaneously. Therefore, if the order in which a command updates rows is inconsistent with the order in which another command does this, a deadlock can occur.
Although such a situation is unlikely, it can still occur. To reproduce it, we will create an index on the amount column in descending order of amount:
=> CREATE INDEX ON accounts(amount DESC);
To be able to watch what happens, let's create a function that increases the passed value, but very-very slowly, for as long as an entire second:
=> CREATE FUNCTION inc_slow(n numeric) RETURNS numeric AS $$SELECT pg_sleep(1); SELECT n + 100.00;$$ LANGUAGE SQL;
We will also need the pgrowlocks extension.
=> CREATE EXTENSION pgrowlocks;
The first UPDATE command will update the entire table. The execution plan is evident — it is sequential scan:
| => EXPLAIN (costs off)
| UPDATE accounts SET amount = inc_slow(amount);
| QUERY PLAN
| ----------------------------
| Update on accounts
| -> Seq Scan on accounts
| (2 rows)
Since tuples on the table page are located in ascending order of the amount (exactly how we added them), they will also be updated in the same order. Let the update start.
| => UPDATE accounts SET amount = inc_slow(amount);
At the same time, in another session we'll forbid sequential scans:
|| => SET enable_seqscan = off;
In this case, for the next UPDATE operator, the planner decides to use index scan:
|| => EXPLAIN (costs off)
|| UPDATE accounts SET amount = inc_slow(amount) WHERE amount > 100.00;
|| QUERY PLAN
|| --------------------------------------------------------
|| Update on accounts
|| -> Index Scan using accounts_amount_idx on accounts
|| Index Cond: (amount > 100.00)
|| (3 rows)
The second and third rows meet the condition, and since the index is built in descending order of the amount, the rows will be updated in a reverse order.
Let's run the next update.
|| => UPDATE accounts SET amount = inc_slow(amount) WHERE amount > 100.00;
A quick look into the table page shows that the first operator already managed to update the first row (0,1) and the second operator updated the last row (0,3):
=> SELECT * FROM pgrowlocks('accounts') \gx
-[ RECORD 1 ]-----------------
locked_row | (0,1)
locker | 530699 <- the first
multi | f
xids | {530699}
modes | {"No Key Update"}
pids | {16513}
-[ RECORD 2 ]-----------------
locked_row | (0,3)
locker | 530700 <- the second
multi | f
xids | {530700}
modes | {"No Key Update"}
pids | {16549}
One more second elapses. The first operator updated the second row, and the second one would like to do the same, but cannot.
=> SELECT * FROM pgrowlocks('accounts') \gx
-[ RECORD 1 ]-----------------
locked_row | (0,1)
locker | 530699 <- the first
multi | f
xids | {530699}
modes | {"No Key Update"}
pids | {16513}
-[ RECORD 2 ]-----------------
locked_row | (0,2)
locker | 530699 <- the first was quicker
multi | f
xids | {530699}
modes | {"No Key Update"}
pids | {16513}
-[ RECORD 3 ]-----------------
locked_row | (0,3)
locker | 530700 <- the second
multi | f
xids | {530700}
modes | {"No Key Update"}
pids | {16549}
Now the first operator would like to update the last table row, but it is already locked by the second operator. Hence a deadlock.
One of the transactions aborts:
|| ERROR: deadlock detected
|| DETAIL: Process 16549 waits for ShareLock on transaction 530699; blocked by process 16513.
|| Process 16513 waits for ShareLock on transaction 530700; blocked by process 16549.
|| HINT: See server log for query details.
|| CONTEXT: while updating tuple (0,2) in relation "accounts"
And the second one continues:
| UPDATE 3
Engaging details of detecting and preventing deadlocks can be found in the lock manager README.
This completes a talk on deadlocks, and we proceed to the remaining object-level locks.
# Locks on non-relations
When we need to lock a resource that is not a relation in the meaning of PostgreSQL, locks of the object type are used. Almost whatever we can think of can refer to such resources: tablespaces, subscriptions, schemas, enumerated data types and so on. Roughly, this is everything that can be found in the system catalog.
Illustrating this by a simple example. Let's start a transaction and create a table in it:
=> BEGIN;
=> CREATE TABLE example(n integer);
Now let's see what locks of the object type appeared in pg_locks:
=> SELECT
database,
(SELECT datname FROM pg_database WHERE oid = l.database) AS dbname,
classid,
(SELECT relname FROM pg_class WHERE oid = l.classid) AS classname,
objid,
mode,
granted
FROM pg_locks l
WHERE l.locktype = 'object' AND l.pid = pg_backend_pid();
database | dbname | classid | classname | objid | mode | granted
----------+--------+---------+--------------+-------+-----------------+---------
0 | | 1260 | pg_authid | 16384 | AccessShareLock | t
16386 | test | 2615 | pg_namespace | 2200 | AccessShareLock | t
(2 rows)
To figure out what in particular is locked here, we need to look at three fields: database, classid and objid. We start with the first line.
database is the OID of the database that the resource being locked relates to. In this case, this column contains zero. It means that we deal with a global object, which is not specific to any database.
classid contains the OID from pg_class that matches the name of the system catalog table that actually determines the resource type. In this case, it is pg_authid, that is, a role (user) is the resource.
objid contains the OID from the system catalog table indicated by classid.
=> SELECT rolname FROM pg_authid WHERE oid = 16384;
rolname
---------
student
(1 row)
We work as student, and this is exactly the role locked.
Now let's clarify the second line. The database is specified, and it is test, to which we are connected.
classid indicates the pg_namespace table, which contains schemas.
=> SELECT nspname FROM pg_namespace WHERE oid = 2200;
nspname
---------
public
(1 row)
This shows that the public schema is locked.
So, we've seen that when an object is created, the owner role and schema in which the object is created get locked (in a shared mode). And this is reasonable: otherwise, someone could drop the role or schema while the transaction is not completed yet.
=> ROLLBACK;
# Lock on relation extension
When the number of rows in a relation (table, index or materialized view) increases, PostgreSQL can use free space in available pages for inserts, but evidently, once new pages also have to be added. Physically they are added at the end of the appropriate file. And this is meant by a relation extension.
To ensure that two processes do not rush to add pages simultaneously, the extension process is protected by a specialized lock of the extend type. The same lock is used when vacuuming indexes for other processes to be unable to add pages during the scan.
This lock is certainly released without waiting for completion of the transaction.
Earlier, tables could extend only by one page at a time. This caused issues during simultaneous row inserts by several processes; therefore, starting with PostgreSQL 9.6, several pages are added to tables at once (in proportion to the number of waiting processes, but not greater than 512).
# Page lock
Page-level locks of the page type are used in only one case (aside from predicate locks, which will be discussed later).
GIN indexes enable us to accelerate search in compound values, for instance: words in text documents (or array elements). To a first approximation, these indexes can be represented as a regular B-tree that stores separate words from the documents rather than the documents themselves. Therefore, when a new document is added, the index has to be rebuilt pretty much in order to add there each new word from the document.
For better performance, GIN index has a postponed insert feature, which is turned on by the fastupdate storage parameter. New words are quickly added to an unordered pending list first, and after a while, everything accumulated is moved to the main index structure. The gains are due to a high probability of occurrence of the same words in different documents.
To prevent moving from the pending list to the main index by several processes simultaneously, for the duration of moving, the index metapage gets locked in an exclusive mode. This does not hinder regular use of the index.
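As a side note, the pending list mechanism is controlled when the index is created; the docs table and body column here are made up for illustration:
=> CREATE INDEX docs_body_idx ON docs USING gin (to_tsvector('english', body)) WITH (fastupdate = on);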
# Advisory locks
Unlike other locks (such as relation-level locks), advisory locks are never acquired automatically — the application developer controls them. They are useful when, for instance, an application for some reason needs a locking logic that is not in line with the standard logic of regular locks.
Assume we have a hypothetical resource that does not match any database object (which we could lock using commands such as SELECT FOR or LOCK TABLE). We need to devise a numeric identifier for it. If a resource has a unique name, a simple option is to use its hash code:
=> SELECT hashtext('resource1');
hashtext
-----------
991601810
(1 row)
This is how we have the lock acquired:
=> BEGIN;
=> SELECT pg_advisory_lock(hashtext('resource1'));
As usual, information on locks is available in pg_locks:
=> SELECT locktype, objid, mode, granted
FROM pg_locks WHERE locktype = 'advisory' AND pid = pg_backend_pid();
locktype | objid | mode | granted
----------+-----------+---------------+---------
advisory | 991601810 | ExclusiveLock | t
(1 row)
For locking to be really effective, other processes must also acquire a lock on the resource prior to accessing it. Evidently the application must ensure that this rule is observed.
Note that in the above example the lock is held until the end of the session, not until the end of the transaction as is usual for other locks.
=> COMMIT;
=> SELECT locktype, objid, mode, granted
FROM pg_locks WHERE locktype = 'advisory' AND pid = pg_backend_pid();
locktype | objid | mode | granted
----------+-----------+---------------+---------
advisory | 991601810 | ExclusiveLock | t
(1 row)
And we need to explicitly release it:
=> SELECT pg_advisory_unlock(hashtext('resource1'));
A rich collection of functions to work with advisory locks is available for every occasion:
• pg_advisory_lock_shared acquires a shared lock.
• pg_advisory_xact_lock (and pg_advisory_xact_lock_shared) acquires an exclusive (respectively, shared) lock that is released automatically at the end of the transaction.
• pg_try_advisory_lock (as well as pg_try_advisory_xact_lock and pg_try_advisory_xact_lock_shared) does not wait for the lock, but returns false if the lock cannot be acquired immediately.
A collection of try_ functions is one more technique to avoid waiting for a lock, in addition to those listed in the last article.
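A small sketch of the try_ variant, reusing the same resource name as above: the lock is attempted without waiting, the protected work is done only if true was returned, and the lock is then released.
=> SELECT pg_try_advisory_lock(hashtext('resource1'));
=> SELECT pg_advisory_unlock(hashtext('resource1'));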
# Predicate locks
The predicate lock term occurred long ago, when early DBMS made first attempts to implement complete isolation based on locks (the Serializable level, although there was no SQL standard at that time). The issue they confronted then was that even locking of all read and updated rows did not ensure complete isolation: new rows that meet the same selection conditions can occur in the table, which causes phantoms to arise (see the article on isolation).
The idea of predicate locks was to lock predicates rather than rows. If during execution of a query with the condition a > 10 we lock the a > 10 predicate, this won't allow us to add new rows that meet the condition to the table and will enable us to avoid phantoms. The issue is that this problem is computationally complicated; in practice, it can be solved only for very simple predicates.
In PostgreSQL, the Serializable level is implemented differently, on top of the available isolation based on data snapshots. Although the predicate lock term is still used, its meaning drastically changed. Actually these «locks» block nothing; they are used to track data dependencies between transactions.
It is proved that snapshot isolation permits an inconsistent write (write skew) anomaly and a read-only transaction anomaly, but any other anomalies are impossible. To figure out that we deal with one of the two above anomalies, we can analyze dependencies between transactions and discover certain patterns there.
Dependencies of two kinds are of interest to us:
• One transaction reads a row that is then updated by the second transaction (RW dependency).
• One transaction updates a row that is then read by the second transaction (WR dependency).
We can track WR dependencies using already available regular locks, but RW dependencies have to be tracked specially.
To reiterate, despite the name, predicate locks block nothing. A check is performed at the transaction commit instead, and if a suspicious sequence of dependencies that may indicate an anomaly is discovered, the transaction aborts.
Let's look at how predicate locks are handled. To do this, we'll create a table with a pretty large number of rows and an index on it.
=> CREATE TABLE pred(n integer);
=> INSERT INTO pred(n) SELECT g.n FROM generate_series(1,10000) g(n);
=> CREATE INDEX ON pred(n) WITH (fillfactor = 10);
=> ANALYZE pred;
If a query is executed using sequential scan of the entire table, a predicate lock on the entire table gets acquired (even if not all rows meet the filtering condition).
| => SELECT pg_backend_pid();
| pg_backend_pid
| ----------------
| 12763
| (1 row)
| => BEGIN ISOLATION LEVEL SERIALIZABLE;
| => EXPLAIN (analyze, costs off)
| SELECT * FROM pred WHERE n > 100;
| QUERY PLAN
| ----------------------------------------------------------------
| Seq Scan on pred (actual time=0.047..12.709 rows=9900 loops=1)
| Filter: (n > 100)
| Rows Removed by Filter: 100
| Planning Time: 0.190 ms
| Execution Time: 15.244 ms
| (5 rows)
All predicate locks are acquired in one special mode — SIReadLock (Serializable Isolation Read):
=> SELECT locktype, relation::regclass, page, tuple
FROM pg_locks WHERE mode = 'SIReadLock' AND pid = 12763;
locktype | relation | page | tuple
----------+----------+------+-------
relation | pred | |
(1 row)
| => ROLLBACK;
But if a query is executed using index scan, the situation changes for the better. If we deal with a B-tree, it is sufficient to have a lock acquired on the rows read and on the leaf index pages walked through — this allows us to track not only specific values, but all the range read.
| => BEGIN ISOLATION LEVEL SERIALIZABLE;
| => EXPLAIN (analyze, costs off)
| SELECT * FROM pred WHERE n BETWEEN 1000 AND 1001;
| QUERY PLAN
| ------------------------------------------------------------------------------------
| Index Only Scan using pred_n_idx on pred (actual time=0.122..0.131 rows=2 loops=1)
| Index Cond: ((n >= 1000) AND (n <= 1001))
| Heap Fetches: 2
| Planning Time: 0.096 ms
| Execution Time: 0.153 ms
| (5 rows)
=> SELECT locktype, relation::regclass, page, tuple
FROM pg_locks WHERE mode = 'SIReadLock' AND pid = 12763;
locktype | relation | page | tuple
----------+------------+------+-------
tuple | pred | 3 | 236
tuple | pred | 3 | 235
page | pred_n_idx | 22 |
(3 rows)
Note a few complexities.
First, a separate lock is created for each read tuple, and the number of such tuples can potentially be very large. The total number of predicate locks in the system is limited by the product of parameter values: max_pred_locks_per_transaction × max_connections (the default values are 64 and 100, respectively). The memory for these locks is allocated at the server start; an attempt to exceed this limit will result in errors.
Therefore, escalation is used for predicate locks (and only for them!). Prior to PostgreSQL 10, the limitations were hard coded, but starting this version, we can control the escalation through parameters. If the number of tuple locks related to one page exceeds max_pred_locks_per_page, these locks are replaced with one page-level lock. Consider an example:
=> SHOW max_pred_locks_per_page;
max_pred_locks_per_page
-------------------------
2
(1 row)
| => EXPLAIN (analyze, costs off)
| SELECT * FROM pred WHERE n BETWEEN 1000 AND 1002;
| QUERY PLAN
| ------------------------------------------------------------------------------------
| Index Only Scan using pred_n_idx on pred (actual time=0.019..0.039 rows=3 loops=1)
| Index Cond: ((n >= 1000) AND (n <= 1002))
| Heap Fetches: 3
| Planning Time: 0.069 ms
| Execution Time: 0.057 ms
| (5 rows)
We see one lock of the page type instead of three locks of the tuple type:
=> SELECT locktype, relation::regclass, page, tuple
FROM pg_locks WHERE mode = 'SIReadLock' AND pid = 12763;
locktype | relation | page | tuple
----------+------------+------+-------
page | pred | 3 |
page | pred_n_idx | 22 |
(2 rows)
Likewise, if the number of locks on pages related to one relation exceeds max_pred_locks_per_relation, these locks are replaced with one relation-level lock.
There are no other levels: predicate locks are acquired only for relations, pages and tuples and always in the SIReadLock mode.
Certainly, escalation of locks inevitably results in an increase of the number of transactions that falsely terminate with a serialization error, and eventually, the system throughput will decrease. Here you need to balance RAM consumption and performance.
The second complexity is that different operations with an index (for instance, due to splits of index pages when new rows are inserted) change the number of leaf pages that cover the range read. But the implementation takes this into account:
=> INSERT INTO pred SELECT 1001 FROM generate_series(1,1000);
=> SELECT locktype, relation::regclass, page, tuple
FROM pg_locks WHERE mode = 'SIReadLock' AND pid = 12763;
locktype | relation | page | tuple
----------+------------+------+-------
page | pred | 3 |
page | pred_n_idx | 211 |
page | pred_n_idx | 212 |
page | pred_n_idx | 22 |
(4 rows)
| => ROLLBACK;
By the way, predicate locks are not always released immediately on completion of the transaction since they are needed to track dependencies between several transactions. But anyway, they are controlled automatically.
By no means all types of indexes in PostgreSQL support predicate locks. Before PostgreSQL 11, only B-trees could boast of this, but that version improved the situation: hash, GiST and GIN indexes were added to the list. If index access is used, but the index does not support predicate locks, a lock on the entire index is acquired. This, certainly, also increases the number of false aborts of transactions.
Finally, note that complete isolation is ensured only if all transactions work at the Serializable level: a transaction that uses a different level simply won't acquire (and check) predicate locks.
Traditionally, here is a link to the predicate locking README as a starting point for exploring the source code.
https://www.physicsforums.com/threads/turning-arcsin-into-logs.740082/ | # Homework Help: Turning arcsin into logs
1. Feb 24, 2014
### cathy
1. The problem statement, all variables and given/known data
Hello. Will someone please explain to me why this is true?
arcsinh(e^x) = ln(e^x + √(e^(2x) + 1))
2. The attempt at a solution
I cannot figure out why arcsin is able to be put into terms of ln. Thank you in advance.
2. Feb 24, 2014
### tiny-tim
hi caty!
(hey, what's an h ? )
2 sinh[ln(e^x + √(e^(2x) + 1))]
= exp[ln(e^x + √(e^(2x) + 1))] - exp[-ln(e^x + √(e^(2x) + 1))]
= e^x + √(e^(2x) + 1) - 1/[e^x + √(e^(2x) + 1)]
= [e^(2x) + e^(2x) + 1 + 2e^x√(e^(2x) + 1) - 1]/[e^x + √(e^(2x) + 1)]
= 2e^x
alternatively, put e^x = sinh y
then sinh[ln(e^x + √(e^(2x) + 1))]
= sinh[ln(sinh y + cosh y)]
= sinh[ln(e^y)]
= sinh y = e^x
3. Feb 24, 2014
### LCKurtz
In addition to Tiny's explanation you could also note $y =\sinh^{-1}(e^x)$ is the same as $e^x =\sinh(y)= \frac{e^y - e^{-y}} 2$ or $2e^x = e^y - \frac 1 {e^y}$. Solve that for $y$ in terms of $x$ and you will get your expression and you will see where the logarithms come from.
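Sketching the remaining algebra one possible way: multiplying $2e^x = e^y - \frac 1 {e^y}$ by $e^y$ gives $e^{2y} - 2e^x e^y - 1 = 0$, a quadratic in $e^y$, so $e^y = e^x \pm \sqrt{e^{2x}+1}$; only the $+$ sign keeps $e^y > 0$, and taking logarithms then gives $y = \ln(e^x + \sqrt{e^{2x}+1})$.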
Last edited: Feb 24, 2014
4. Feb 25, 2014
### cathy
Ahh got it! Thank you very much!
https://gmatclub.com/forum/the-figure-above-shows-a-circular-flower-bed-with-its-center-at-o-su-254426.html | It is currently 12 Dec 2017, 01:41
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# Events & Promotions
###### Events & Promotions in June
Open Detailed Calendar
# The figure above shows a circular flower bed, with its center at O, su
Math Expert
Joined: 02 Sep 2009
Posts: 42559
The figure above shows a circular flower bed, with its center at O, su [#permalink]
28 Nov 2017, 21:04
The figure above shows a circular flower bed, with its center at O, surrounded by a circular path that is 3 feet wide. What is the area of the path, in square feet?
(A) 25π
(B) 38π
(C) 55π
(D) 57π
(E) 64π
Attachment: 2017-11-28_1025.png
BSchool Forum Moderator
Joined: 26 Feb 2016
Posts: 1691
Location: India
WE: Sales (Retail)
The figure above shows a circular flower bed, with its center at O, su [#permalink]
28 Nov 2017, 21:16
The outer circle has a radius of 8 + 3 = 11, since the circular path around the flower bed is 3 feet wide.
Hence, the area of the outer circle is $$π*(11)^2 = 121π$$
Since the radius of the inner circle is 8, area will be $$π*(8)^2 = 64π$$
The area of the path will be the difference of area of the outer circle and the area of the inner circle.
Hence area of the path will be $$121π - 64π = 57π$$(Option D)
Target Test Prep Representative
Status: Founder & CEO
Affiliations: Target Test Prep
Joined: 14 Oct 2015
Posts: 1922
Location: United States (CA)
Re: The figure above shows a circular flower bed, with its center at O, su [#permalink]
01 Dec 2017, 07:56
Bunuel wrote:
The figure above shows a circular flower bed, with its center at O, surrounded by a circular path that is 3 feet wide. What is the area of the path, in square feet?
(A) 25π
(B) 38π
(C) 55π
(D) 57π
(E) 64π
The area of the inner circle is 8^2π = 64π.
The area of the outer circle is 11^2π = 121π.
Thus, the area of the path is the difference of the two areas: 121π - 64π = 57π.
Intern
Joined: 26 Sep 2017
Posts: 6
Re: The figure above shows a circular flower bed, with its center at O, su [#permalink]
01 Dec 2017, 08:21
Area of the path = Area of larger circle (with radius 8+3=11) - Area of smaller circle (with radius 8)
=>Area of the path = π∗(11)^2 - π∗(8)^2
=>Area of the path = 121π - 64π = 57π
https://www.physicsforums.com/threads/momentum-exchange-of-virtual-pions.728808/
# Momentum exchange of virtual pions
1. ### gildomar
I know that the strong force is viewed as the exchange of virtual pions between two nucleons, with the mass and range of them confirmed by the energy-time uncertainty principle. But if the momentum of the pion is transferred from one nucleon to the other in the interaction, wouldn't that give an equivalent repulsive force between them instead of an attractive one?
3. ### gildomar
Thanks; guess I didn't look far enough back in the past topics.
4. ### K^2
Virtual pion exchange is just one of the contributions. It's the dominant effect, but there are other things going on. Nucleons can exchange gluons directly, as well as exchange other mesons. Pions happen to be the lightest of mesons and not restricted by confinement, so they end up being better mediators for nuclear forces, but not the only ones there.
5. ### ChrisVer
Gluons don't exist at the nuclear level due to confinement, so that's why in fact (effectively) you get the pions.
The main quarks your "effective particle" can consist of are up and down, because I think (from deep inelastic scattering on protons) we already know that strange is not favorable at all... it is almost absent from the sea quarks...
6. ### RGevo
How can you draw any strong interaction without involving gluons?
7. ### Bill_K
From www.phy.ohiou.edu/~elster/lectures/fewblect_2.pdf:
8. ### K^2
That's precisely what I said. Pions dominate interaction because they are not confined. Other processes are non-dominant either due to confinement or higher masses of mediator particles. Where's the problem?
9. ### Bill_K
Please don't take offense when someone agrees with you.
http://math.stackexchange.com/questions/96944/are-all-subrings-of-the-rationals-euclidean-domains
# Are all subrings of the rationals Euclidean domains?
This is a purely recreational question -- I came up with it when setting an undergraduate example sheet.
Let's go with Wikipedia's definition of a Euclidean domain. So an ID $R$ is a Euclidean domain (ED) if there's some $\phi:R\backslash\{0\}\to\mathbf{Z}_{\geq0}$ or possibly $\mathbf{Z}_{>0}$ (I never know what $\mathbf{N}$ means, and the Wikipedia page (at the time of writing) uses $\mathbf{N}$ as the target of $\phi$, but in this case it doesn't matter, because I can just add one to $\phi$ if necessary) such that the usual axioms hold.
Now onto subrings of the rationals. The subrings of the rationals turn out to be in bijection with the subsets of the prime numbers. If $X$ is a set of primes, then define $\mathbf{Z}_X$ to be the rationals $a/b$ with $b$ only divisible by primes in $X$. Different sets $X$ give different subrings, and all subrings are of this form. This needs a little proof, but a little thought, or a little googling, leads you there.
If $X$ is empty, then $\mathbf{Z}_X=\mathbf{Z}$, which is an ED: the usual $\phi$ taken is $\phi(x)=|x|$.
If $X$ is all the primes then $\mathbf{Z}_X=\mathbf{Q}$ and this is an ED too (at least according to Wikipedia -- I think some sources demand that an ED is not a field, but let's not go there); we can just let $\phi$ be constant.
If $X$ is all but one prime, say $p$, then $\mathbf{Z}_X$ is the localisation of $\mathbf{Z}$ at $(p)$, and $\phi$ can be taken to be the $p$-adic valuation (if we're allowing $\phi$ to take the value zero, which we may as well). Note however that this is a rather different "style" of $\phi$ to the case $X$ empty: this $\phi$ is "non-archimedean" in origin, whereas in the case of $X$ empty we used an "archimedean" $\phi$. This sort of trick generalises to the case where $X$ is all but a finite set of primes -- see the "Dedekind domain with only finitely many non-zero primes" example on the Wikipedia page.
Of course the question is: if $X$ is now an arbitrary set of primes, is $\mathbf{Z}_X$ an ED?
What happens when $X$ is finite? In particular, when $X=\{2\}$ and $X=\{2,3\}$. – lhf Jan 6 '12 at 14:53
I think that in general one strategy for constructing $\phi$, for any ID, is this: you let $A_0$ be zero, you let $A_1$ be the units, and for $n\geq2$ you let $A_n$ be the elements $r$ of $R$ not in any earlier $A_i$ but such that the map $\cup_{j<i}A_j\to R/(r)$ is surjective. The idea is that $\phi(r)=i$ for $r\in A_i$ and you hope that the union of the $A_i$ is $R$: you've then proved $R$ is an ED, and I think that what I sketch here is basically an iff. – Kevin Buzzard Jan 6 '12 at 15:24
So if $X=\{2\}$ then $A_2$ contains (amongst other things) the primes such that 2 is a primitive root, and then $A_3$ contains (amongst other things) the primes such that either 2 or one of these primes in $A_2$ is a primitive root and etc etc. And then you just have to hope that you get everything shrug. – Kevin Buzzard Jan 6 '12 at 15:24
Given Alex's (very nice, +1) answer below, I wonder if you can follow-up to ask for a more "coherent" system of $\phi_X$'s such that $\phi_X$ agrees with the $p$-adic valuation when $X$ is the complement of a prime, and/or that $\phi_X$ behaves nicely with respect to shrinking/enlarging $X$. – Cam McLeman Jan 6 '12 at 17:11
Yes. Let $\phi(a/b) = |a|$ where $a/b$ is written in lowest terms. To see that this is a Euclidean function, let $a/b,c/d\in \mathbb{Z}_X$ be nonzero and in lowest terms and write $$\frac{a}{b}=\frac{nd}{b}\cdot \frac{c}{d}+\frac{s}{t}$$ where the quotient $nd/b$ lies in $\mathbb{Z}_X$ because its denominator divides $b$. Then $\phi(s/t)=\phi((a-nc)/b)\leq |a-nc|$, which for a suitable choice of the integer $n$ (ordinary division of $a$ by $c$) is less than $|c|=\phi(c/d)$, as the Euclidean condition requires.
Very nice. Note that you implicitly use the fact that if $a/b$ is in the subring then so is $m/b$ for any integer $m$. I wonder if you've proved that any subring of the field of fractions of an ED is an ED?? – Kevin Buzzard Jan 6 '12 at 16:07
PS I now see that my mistake was to be too hung up on the $p$-adic side of the story. – Kevin Buzzard Jan 6 '12 at 16:08
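As a quick numerical sanity check of the division step in the answer above, here is a small Python sketch for the subring with X = {2} (denominators a power of 2); the choice n = round(a/c) is one concrete way to make |a - nc| < |c|:
from fractions import Fraction

def phi(x: Fraction) -> int:
    # phi(a/b) = |a| with a/b in lowest terms (Fraction always stores lowest terms)
    return abs(x.numerator)

def divide(x: Fraction, y: Fraction):
    # Divide x by y inside Z_X as in the answer: quotient q = n*d/b, remainder (a - n*c)/b
    a, b = x.numerator, x.denominator
    c, d = y.numerator, y.denominator
    n = round(a / c)            # integer with |a - n*c| <= |c|/2 < |c|
    q = Fraction(n * d, b)      # denominator divides b, so q stays in Z_X
    r = x - q * y
    return q, r

x, y = Fraction(7, 8), Fraction(3, 4)   # both in Z_X with X = {2}: denominators are powers of 2
q, r = divide(x, y)
assert x == q * y + r
assert r == 0 or phi(r) < phi(y)
print(q, r)                             # 1 1/8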
https://brilliant.org/problems/minimum-value-of-sum-of-roots/
# Minimum Value Of Sum Of Roots
$\large x^2-ax+2016=0$
Suppose the above quadratic equation has two positive integer solutions. Find the minimum value of $a$.
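By Vieta's formulas the two positive integer roots p and q satisfy pq = 2016 and a = p + q, so the problem reduces to finding the factor pair of 2016 with the smallest sum. A short brute-force check of the kind below confirms the minimum, a = 42 + 48 = 90:
# Roots p, q of x^2 - a*x + 2016 satisfy p*q = 2016 and p + q = a (Vieta's formulas).
# Minimize a over all positive-integer factor pairs of 2016.
best = min(p + 2016 // p for p in range(1, 2017) if 2016 % p == 0)
print(best)  # 90, attained by the pair (42, 48)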
http://docs.itascacg.com/flac3d700/flac3d/zone/doc/manual/zone_manual/zone_commands/cmd_zone.export.html
# zone export command
Syntax
zone export s keyword <range>
Primary keywords:
binary
Exports a grid to a file. A path can be part of the supplied file name s. The grid file is an ASCII-format (or, if binary is given, binary-format) description of the FLAC3D geometry (zones, gridpoints, zone groups, and face groups). The binary-format file is smaller than the ASCII-format file and takes less time to load if it is later imported by FLAC3D. The ASCII-format grid file specification can be found with the zone import command description. If no file extension is given, the extension “f3grid” is used. Zone and face group information can be exported.
binary
The exported grid file will be written in binary format instead of ASCII format.
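For example, following the syntax above, a command of the form
zone export 'blocks' binary
would write the grid to a binary-format grid file; since no extension is supplied, the file would be named blocks.f3grid. The file name here is purely illustrative.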
https://www.amdainternational.com/3vv8wv/thermal-conductivity-dimension-c26795
# Thermal Conductivity Derivation | Dimension of Thermal Conductivity
The heat transfer characteristics of a solid material are measured by a property called the thermal conductivity, k (or λ). It measures a substance's ability to transfer heat through a material by conduction, and it depends on the material's temperature, density and moisture content; in some materials it also depends on the direction of heat transfer. In an isotropic medium, k is the parameter in the Fourier expression for the heat flux.
For a bar of cross-sectional area A and length L whose ends are held at temperatures T1 and T2 (say T1 > T2), the heat Q conducted in time t defines the conductivity:
k = [Q L] / [A (T1 - T2) t] …………………… (1)
Q/t is a rate of heat flow, i.e. a power, with dimension of work/time = (M L^2 T^-2) T^-1 = M L^2 T^-3. Putting this into equation (1) gives the dimension of thermal conductivity:
Dimension of k = (M L^2 T^-3) L^-1 θ^-1 = M^1 L^1 T^-3 θ^-1 …………… (4)
The SI unit of thermal conductivity is therefore the watt per metre-kelvin, W/(m·K). The absolute thermal conductance of a component is quoted in W/K, and its reciprocal, thermal resistance, in K/W. As an example value, the thermal conductivity of stainless steel is 16.26 W/m-K, far below good conductors such as copper or silver.
In crystalline dielectric solids heat is carried by elastic vibrations of the lattice (phonons), while in pure metals it is carried mainly by the free electrons. In alloys the density of impurities is very high, so the mean free path (the average distance travelled between collisions), and consequently k, are small. As the density of a gas goes to zero the system approaches a vacuum and thermal conduction ceases entirely, which is why a vacuum is an effective insulator.
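A minimal numerical illustration of equation (1) rearranged as Q/t = k * A * (T1 - T2) / L, using the stainless-steel value quoted above; the plate area, thickness and temperatures are made-up numbers for the example.
# Fourier's law for steady one-dimensional conduction: Q/t = k * A * (T1 - T2) / L
k = 16.26             # W/(m*K), stainless steel (value quoted above)
A = 0.5               # m^2, plate area (assumed)
L = 0.01              # m, plate thickness (assumed)
T1, T2 = 100.0, 25.0  # degrees C, hot and cold face temperatures (assumed)

heat_flow = k * A * (T1 - T2) / L
print(f"heat flow: {heat_flow:.0f} W")   # about 61,000 W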
http://www.chegg.com/homework-help/questions-and-answers/example-atm-protocol-messages-from-atm-machine-to-server-msg-name-purpose-helo-userid-let--q2976254
## No programming code needed
Example ATM protocol
Messages from ATM machine to Server
Msg name purpose
-------- -------
HELO <userid> Let server know that there is a card in the ATM machine
ATM card transmits user ID to Server
PASSWD <passwd> User enters PIN, which is sent to server
BALANCE User requests balance
WITHDRAWL <amount> User asks to withdraw money
BYE user all done
Messages from Server to ATM machine (display)
Msg name purpose
-------- -------
OK last requested operation (PASSWD, WITHDRAWL) OK
ERR last requested operation (PASSWD, WITHDRAWL) in ERROR
AMOUNT <amt> sent in response to BALANCE request
BYE user done, display welcome screen at ATM
Correct operation:
client server
-------- -------
HELO (userid) --------------> (check if valid userid)
<------------- PASSWD
BALANCE -------------->
<------------- AMOUNT <amt>
WITHDRAWL <amt> --------------> check if enough $ to cover withdrawl
<------------- OK
ATM dispenses $
BYE -------------->
<------------- BYE
When there is not enough money:
HELO (userid) --------------> (check if valid userid)
<------------- PASSWD
BALANCE -------------->
<------------- AMOUNT <amt>
WITHDRAWL <amt> --------------> check if enough $ to cover withdrawl
<------------- ERR (not enough funds)
error msg displayed, no $ given out
BYE -------------->
<------------- BYE
Using the above example, create a similar protocol specification for converting an input file to a different file format. Below is the description (no programming code needed).
Design and describe an application-level protocol to be used between a client and a server program as per the following specification.
a. The server program is an application that converts a text document into a PDF (Portable document Format) file. The server receives an input file from the client, converts the file to a PDF document, and sends the converted document back to the client.
Note that you are only supposed to design the protocol and not the actual server-side logic to do the document conversion.
b. The client program can request the server to convert an input file, in one of the following formats, to a PDF file:
i. Plain text file
ii. RTF (Rich Text Format) file
iii. HTML (Hyper Text Markup Language) file
c. The client program needs to specify the type of the input file as part of its request.
d. The server should be able to handle errors in the client’s request message and respond with an appropriate error code. A possible error scenario may include the following:
i. Unsupported input file type
ii. Error in the input file data. For example, a plain text file containing non-ASCII characters or an HTML file with an unrecognizable HTML tag.
e. Depending on the design of the protocol, the sizes of the input file and the generated PDF document may or may not need to be specified by the two hosts (the client and the server programs).
f. There should not be any ambiguity in your protocol.
g. The server does not require users to be authenticated.
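One possible message set for the conversion protocol, sketched in the same style as the ATM example above. All message names, error codes, and the framing choice (sending byte counts before raw data) are illustrative assumptions, not requirements of the assignment:
Messages from client to server
CONVERT <type> <size>   request conversion; <type> is TXT, RTF, or HTML, <size> is the input length in bytes
<input bytes>           exactly <size> bytes of input data follow the CONVERT line
BYE                     client done
Messages from server to client
OK <size>               conversion succeeded; <size> bytes of PDF data follow
<pdf bytes>             exactly <size> bytes of the generated PDF
ERR <code>              e.g. 1 = unsupported input type, 2 = error in the input data
BYE                     server closes the session
Correct operation:
client                           server
--------                         -------
CONVERT TXT 1042 --------------> check that the type is supported
<input bytes>    --------------> convert the data to PDF
                 <-------------- OK 58211
                 <-------------- <pdf bytes>
BYE              -------------->
                 <-------------- BYE
Because both sides announce a byte count before sending data, each host always knows how much to read, which removes ambiguity about where a file ends.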
https://ask.sagemath.org/question/7714/large-groebner-basis-calculations/
# large groebner basis calculations
I know there are a number of different packages in Sage capable of computing Groebner bases, but I don't have much experience with these tools. I am wondering which option, and specifically which routine, might be most suitable for very large (50,000 generators) but relatively sparse systems (polynomials of degree less than 4 and with 4 or fewer terms over 64 variables). I am interested in bases and prime decomposition in relation to classifying low dimensional Frobenius algebras. I am currently using groebner and the primdec library from Singular in Sage, but I am up against significant performance limits.
Also, does anyone have any recommended methods of pre-processing sparse sets of generators prior to calling groebner? And are there any packages capable of exploiting multi-processor architectures?
I think Singular is your best bet right now in Sage. There are a lot of options you can try beyond the default behavior, but it's hard to know what will be best for a given system. For some things I've worked with, the facstd command in Singular was very helpful; you can do something like:
myideal = myring.ideal(generators)
fstd = singular(myideal).facstd()
bases = [[myring(f) for f in fs] for fs in fstd]
where of course you would have to define the generators and ring. Also, you probably already know this but using different term orders can have an enormous effect on the running time.
I don't think there is anything parallelizable in Sage for Groebner bases at the moment.
Another thing I sometimes find helpful with big systems is to compute Groebner bases of subsets of the generators. This is trivial to parallelize, and sometimes merging those subsets is faster than doing them all at once. It also can give you some insight into the structure of the system.
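A rough Sage sketch of that last idea (Groebner bases of subsets of the generators, computed independently and then merged); myring and generators are placeholders for your own ring and generator list, and the chunk size is arbitrary:
# Split the generators into chunks, compute a Groebner basis for each chunk,
# then reduce the merged result once more. Each chunk is independent, so the
# per-chunk computations could be farmed out to separate processes.
chunks = [generators[i:i + 1000] for i in range(0, len(generators), 1000)]
partial = [list(myring.ideal(chunk).groebner_basis()) for chunk in chunks]
merged = [g for basis in partial for g in basis]
final_basis = myring.ideal(merged).groebner_basis()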
Hi!
Sorry, I did not notice the message before.
There exists no 'best variant' for large systems.
The ugly answer is that you have to try several variants and the most important options, at least redTail... You can also try several orderings. Usually the best ordering is dp, but sometimes you can arrange the variables in a smart block ordering structure, which makes things much easier.
Maybe you can also try slimgb, which has had good results in the past for large systems, but systems with this number of variables are really hard. Having 50,000 equations, however, is great and improves the chances of being able to compute the GB.
The core algorithms std and slimgb both implement their own preprocessing.
Regarding primary decomposition of such big systems: this is more difficult than Groebner bases, so you can essentially forget about it unless there are some special conditions. In fact, when your ideal is this overdetermined and might only have a few solutions with multiplicity 1, the prime decomposition is equivalent to solving.
I think you did not mention the characteristic.
Cheers, Michael
https://indico.cern.ch/event/354593/
# Network and Transfer Metrics WG - Metrics Area Meeting
Europe/Zurich
28/R-015 (CERN)
Description
The meeting date/time is a result of http://doodle.com/ezrfh8eybu7iybxy
Details on the Network and Transfer Metrics WG are available at our Twiki.
### Network and Transfer Metrics WG, Metrics Area Meeting (26 Nov 2014)
Attending: Laurent Caillat-Vallet, Costin Grigoras, Ilija Vukotic, Frederique Chollet, Duncan Rand, Kaushik De, Hung Tee Lee, Jorge Alberto Diaz Cruz, Enrico Mazzoni, Bruno Hoeft, Shawn McKee
at CERN: Tony Wildish, John Shade, Michail Salichos, Marian Babik
Indico: https://indico.cern.ch/event/354593/
### Next meetings:
– Fixed schedule for 2015
• 28 Jan, 18 Feb, 18 March, 8 Apr (all at 4pm CEST)
### List of actions:
- Use cases and requirements document (https://docs.google.com/document/d/1ceiNlTUJCwSuOuvbEHZnZp0XkWkwdkPQTQic0VbH1mc/edit)
• FTS, FAX, PhEDEx, Panda, Rucio, by 5th of Dec
• Experiments by 12 of Dec (Marian to ping ATLAS, LHCb)
- XRootD monitoring status, Marian to organize meeting on GLED status with Julia, Ilija, Mateusz, Shawn
Minutes:
The status of operations and commissioning of the perfSONAR network was presented by Marian.
Kaushik: What would be a good time to take a look at the real data?
Marian: Since we're in the middle of the update campaign, it's better to wait until most of the sites are updated (the current deadline for sites is 8th of January). The current ITB data store already contains data and can be queried, but we need more sites to update in order to have more complete data sets.
Kaushik: Would it be possible to get the data via SSB/AGIS as before?
Shawn: In principle yes, but we will need to update the scripts as sites are running different measurement archives now; to be followed up with ATLAS.
Shawn commented on IPv6. It will be enabled, and when sites have both IPv4 and IPv6, TWO tests will be run automatically. The impact we need to consider is that the number of tests will increase as we have more sites with IPv6. The original manual mesh forced IPv4 only; that is not the case in the new mesh.
perfSONAR metrics, examples of existing measurements, and examples of identifying potential network issues were presented by Shawn.
John commented that it would be great to document this in order to encourage sites to see how useful the tool is.
AOB
Shawn will be at CERN on Thursday 4th December, so we will have perfSONAR office (28-R-014) during the morning (until ~11:30), feel free to join us if you'd like to discuss anything related to perfSONAR
http://www.ck12.org/statistics/Applications-of-Normal-Distributions/lecture/Normal-Distribution%3A-Finding-the-Mean-and-Standard-Deviation-Part-2/r1/ | <meta http-equiv="refresh" content="1; url=/nojavascript/">
# Applications of Normal Distributions
## Using computational skill and technology to sketch and shade appropriate area under the normal curve
Normal Distribution: Finding the Mean and Standard Deviation (Part 2)
Explains the mean and standard deviation in terms of a normal distribution.
https://hivemall.incubator.apache.org/userguide/anomaly/changefinder.html
In the context of anomaly detection, there are two types of anomalies, outliers and change-points, as discussed in this section. Hivemall has two functions which respectively detect outliers and change-points; the former is Local Outlier Detection, and the latter is Singular Spectrum Transformation.
In some cases, we might want to detect outliers and change-points simultaneously in order to figure out the characteristics of a time series on both a local and a global scale. ChangeFinder is an anomaly detection technique which enables us to detect both outliers and change-points in a single framework. A key reference for the technique is:
# Outlier and Change-Point Detection using ChangeFinder
By using Twitter's time series data we prepared in this section, let us try to use ChangeFinder on Hivemall.
use twitter;
A function changefinder() can be used in a very similar way to sst(), a UDF for Singular Spectrum Transformation. The following query detects outliers and change-points with different thresholds:
SELECT
num,
changefinder(value, "-outlier_threshold 0.03 -changepoint_threshold 0.0035") AS result
FROM
timeseries
ORDER BY num ASC
;
As a consequence, finding outliers and change-points in the data points should be easy:
num result
... ...
16 {"outlier_score":0.051287243859365894,"changepoint_score":0.003292139657059704,"is_outlier":true,"is_changepoint":false}
17 {"outlier_score":0.03994335565212781,"changepoint_score":0.003484242549446824,"is_outlier":true,"is_changepoint":false}
18 {"outlier_score":0.9153515196592132,"changepoint_score":0.0036439645550477373,"is_outlier":true,"is_changepoint":true}
19 {"outlier_score":0.03940593403992665,"changepoint_score":0.0035825157392152134,"is_outlier":true,"is_changepoint":true}
20 {"outlier_score":0.27172093630215555,"changepoint_score":0.003542822324886785,"is_outlier":true,"is_changepoint":true}
21 {"outlier_score":0.006784031454620809,"changepoint_score":0.0035029441620275975,"is_outlier":false,"is_changepoint":true}
22 {"outlier_score":0.011838969816513334,"changepoint_score":0.003519599336202336,"is_outlier":false,"is_changepoint":true}
23 {"outlier_score":0.09609857927656007,"changepoint_score":0.003478729798944702,"is_outlier":true,"is_changepoint":false}
24 {"outlier_score":0.23927000145081978,"changepoint_score":0.0034338476757061237,"is_outlier":true,"is_changepoint":false}
25 {"outlier_score":0.04645945042821564,"changepoint_score":0.0034052091926036914,"is_outlier":true,"is_changepoint":false}
... ...
# ChangeFinder for Multi-Dimensional Data
ChangeFinder additionally supports multi-dimensional data. Let us try this on synthetic data.
## Data preparation
You first need to get synthetic 5-dimensional data from HERE and uncompress to a synthetic5d.t file:
$ head synthetic5d.t
0#71.45185411564131#54.456141290891466#71.78932846605129#76.73002575911214#81.71265594077099
1#58.374230566196786#57.9798651697631#75.65793151143754#73.76101930504493#69.50315805346253
2#66.3595943896099#52.866595973073295#76.7987325026338#78.95890786682095#74.67527753118893
3#58.242560151043236#52.449574430621226#73.20383710416358#77.81502394558085#76.59077723631032
4#55.89878019680371#52.69611781315756#75.02482987204824#74.11154526135637#75.86881583921179
5#56.93554246767561#56.55687136423391#74.4056583421317#73.82419594611444#71.3017150863033
6#65.55704393868689#52.136347983404974#71.14213602046532#72.87394198561904#73.40278960429114
7#56.65735280596217#57.293605941063035#75.36713340281246#80.70254745535183#75.32423746923857
8#61.22095211566127#53.47603728473668#77.48215321523912#80.7760107465893#74.43951386292905
9#52.47574856682803#52.03250504263378#77.59550963025158#76.16623830860391#76.98394610743863
The first column indicates a dummy timestamp, and the following five columns are the values in each dimension.
Second, the following Hive operations create a Hive table for the data:
create database synthetic;
use synthetic;
CREATE EXTERNAL TABLE synthetic5d (
num INT,
value1 DOUBLE,
value2 DOUBLE,
value3 DOUBLE,
value4 DOUBLE,
value5 DOUBLE
) ROW FORMAT DELIMITED FIELDS TERMINATED BY '#' STORED AS TEXTFILE LOCATION '/dataset/synthetic/synthetic5d';
Finally, you can load the synthetic data to the table by:
$ hadoop fs -put synthetic5d.t /dataset/synthetic/synthetic5d
## Detecting outliers and change-points of the 5-dimensional data
Using changefinder() for multi-dimensional data requires us to pass the first argument as an array. In our case, the data is 5-dimensional, so the first argument should be an array with 5 elements. Except for that point, basic usage of the function is the same as in the previous 1-dimensional example:
SELECT
num,
changefinder(array(value1, value2, value3, value4, value5),
"-outlier_threshold 0.015 -changepoint_threshold 0.0045") AS result
FROM
synthetic5d
ORDER BY num ASC
;
Output might be:
num result
... ...
90 {"outlier_score":0.014014718350674471,"changepoint_score":0.004520174906936474,"is_outlier":false,"is_changepoint":true}
91 {"outlier_score":0.013145554693405614,"changepoint_score":0.004480713237042799,"is_outlier":false,"is_changepoint":false}
92 {"outlier_score":0.011631759675989617,"changepoint_score":0.004442031415725316,"is_outlier":false,"is_changepoint":false}
93 {"outlier_score":0.012140065235943798,"changepoint_score":0.004404170732687428,"is_outlier":false,"is_changepoint":false}
94 {"outlier_score":0.012555903663657997,"changepoint_score":0.0043670553008087355,"is_outlier":false,"is_changepoint":false}
95 {"outlier_score":0.013503247137325314,"changepoint_score":0.0043306667027628466,"is_outlier":false,"is_changepoint":false}
96 {"outlier_score":0.013896893553710932,"changepoint_score":0.004294969164345527,"is_outlier":false,"is_changepoint":false}
97 {"outlier_score":0.01322874844578159,"changepoint_score":0.004259994590721001,"is_outlier":false,"is_changepoint":false}
98 {"outlier_score":0.019383618511936707,"changepoint_score":0.004225604978710543,"is_outlier":true,"is_changepoint":false}
99 {"outlier_score":0.01121758589038846,"changepoint_score":0.004191881992962213,"is_outlier":false,"is_changepoint":false}
... ...
https://math.stackexchange.com/questions/1351020/expected-value-when-die-is-rolled-n-times/1351043
# Expected value when die is rolled $N$ times
Suppose we have a die with $K$ faces with numbers from 1 to $K$ written on it, and integers $L$ and $F$ ($0 < L \leq K$). We roll it $N$ times. Let $a_i$ be the number of times (out of the $N$ rolls) that a face with number $i$ written on it came up as the top face of the die.
I need to find the expectation of the value $a_1^F \times a_2^F \times \cdots a_L^F$
For example, let $N=2, K=6, L=2$ and $F=1$
Then, we roll the $6$-face die $2$ times, and we are interested in the value $a_1 \times a_2$.
The only two possible scenarios when this value is not zero are $(1, 2)$ and $(2, 1)$.
Both of them have $a_1 \times a_2 = 1$ and happen with probability $1/36$ each, so the expected value is $(1 + 1)/36 = 1/18$.
Let $A_i$ denote the number of times number $i$ appears (each number is equally likely to appear) and $\mathcal{A}$ be the set of all possible combinations of $a\equiv(a_1,\dots,a_K)$ s.t. $\sum_{k=1}^Ka_k=N$ and each $a_k\ge 0$. Then for $a\in \mathcal{A}$
$$P\{A_1=a_1,\dots,A_K=a_K\}=\frac{N!}{\prod_{k=1}^Ka_k!}K^{-\sum_{k=1}^Ka_k}=\frac{K^{-N}N!}{\prod_{k=1}^Ka_k!}$$
and
$$\mathbb{E}\left[\prod_{k=1}^LA_k^F\right]=K^{-N}N!\times\sum_{\mathcal{A}}\frac{\prod_{k=1}^La_k^F}{\prod_{k=1}^Ka_k!}$$
Edit: The above formula can be simplified. Assume that $F=1$, $N\ge L$, and the relevant probabilities are $(p_1,\dots,p_K)$. Then
$$\prod_{k=1}^Lp_k \times\frac{\partial^L}{\partial p_1\cdots \partial p_L} \left(\prod_{k=1}^Lp_k^{a_k} \right)=\prod_{k=1}^L a_k p_k^{a_k}$$
Since
$$(p_1+\cdots+p_K)^N=\sum_{\mathcal{A}}\binom{N}{a_1,\dots,a_K}\prod_{k=1}^Kp_k^{a_k}$$
differentiating the LHS and noticing that $\sum_{k=1}^Kp_k=1$ yields
$$\prod_{k=1}^Lp_k \times\frac{\partial^L}{\partial p_1\cdots \partial p_L}(p_1+\cdots+p_K)^N=\prod_{k=1}^Lp_k\times \prod_{n=0}^{L-1}(N-n)$$
Consequently, since $p_k=K^{-1}$, $k=1,\dots,K$,
$$\mathbb{E}\left[\prod_{k=1}^LA_k\right]=K^{-L}\prod_{n=0}^{L-1}(N-n)$$
For $N<L$ this expectation is $0$ because $a_k=0$ for some $k=1,\dots,L$.
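A quick Monte Carlo check of this closed form for the case $F=1$ (an illustration added here in R; the parameter values are arbitrary):

# Simulate N rolls of a K-sided die many times, average prod(a_1..a_L),
# and compare with the closed form K^{-L} * N*(N-1)*...*(N-L+1).
set.seed(1)
K = 6; N = 10; L = 2; reps = 100000
vals = replicate(reps, {
  counts = tabulate(sample(1:K, N, replace = TRUE), nbins = K)
  prod(counts[1:L])
})
mean(vals)                    # simulation estimate
K^(-L) * prod(N - 0:(L - 1))  # closed form: 10*9/36 = 2.5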
• Can you provide some example to show your approach – mat7 Jul 6 '15 at 7:30
• You can apply your own example... – d.k.o. Jul 6 '15 at 7:37
• Can't we make use of the fact that faces are numbered from 1 to K. So we can make use of it to find combinations that sum up to N – mat7 Jul 6 '15 at 7:38
• Its pretty simple example to show this formula correctness. So perhaps it will be great if you quote some good one – mat7 Jul 6 '15 at 7:38
• It doesn't matter how faces are labeled. Still, it's easy to write a simple program (in your favourite language) to calculate the expectation... – d.k.o. Jul 6 '15 at 7:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8668333888053894, "perplexity": 208.4456475902287}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257920.67/warc/CC-MAIN-20190525064654-20190525090654-00516.warc.gz"} |
https://openreview.net/forum?id=jNB6vfl_680 | ## Global Magnitude Pruning With Minimum Threshold Is All We Need
Published: 28 Jan 2022, 22:06, Last Modified: 13 Feb 2023, 23:23, ICLR 2022 Submitted, Readers: Everyone
Keywords: Pruning, Model Compression, One-shot, Global Magnitude Pruning
Abstract: Neural network pruning remains a very important yet challenging problem to solve. Many pruning solutions have been proposed over the years with high degrees of algorithmic complexity. In this work, we shed light on a very simple pruning technique that achieves state-of-the-art (SOTA) performance. We showcase that magnitude based pruning, specifically, global magnitude pruning (GP) is sufficient to achieve SOTA performance on a range of neural network architectures. In certain architectures, the last few layers of a network may get over-pruned. For these cases, we introduce a straightforward method to mitigate this. We preserve a certain fixed number of weights in each layer of the network to ensure no layer is over-pruned. We call this the Minimum Threshold (MT). We find that GP combined with MT when needed, achieves SOTA performance on all datasets and architectures tested including ResNet-50 and MobileNet-V1 on ImageNet. Code available on github.
One-sentence Summary: Global magnitude pruning along with minimum threshold is a very simple pruning technique and at the same time sufficient to obtain SOTA pruning performance.
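The abstract only sketches the procedure, so here is a toy illustration (an added sketch in R, not the authors' code; their implementation is in the GitHub repository mentioned above, and the layer names, sizes, and tie-breaking below are invented): prune with a single global magnitude cutoff, but never leave a layer with fewer than MT surviving weights.

gp_with_mt = function(layers, sparsity = 0.9, MT = 4) {
  all_w = unlist(lapply(layers, abs))
  cutoff = quantile(all_w, sparsity)   # one global magnitude threshold
  lapply(layers, function(W) {
    keep = abs(W) >= cutoff
    m = min(MT, length(W))
    if (sum(keep) < m) {               # layer over-pruned: keep its m largest weights
      keep = abs(W) >= sort(abs(W), decreasing = TRUE)[m]
    }
    W * keep
  })
}
set.seed(1)
layers = list(conv1 = matrix(rnorm(64), 8, 8), fc = matrix(rnorm(20), 4, 5))
pruned = gp_with_mt(layers, sparsity = 0.9, MT = 4)
sapply(pruned, function(W) mean(W == 0))   # realised per-layer sparsity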
19 Replies | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8109785318374634, "perplexity": 1793.7431035249522}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945289.9/warc/CC-MAIN-20230324211121-20230325001121-00573.warc.gz"} |
https://brilliant.org/problems/constructing-a-tetrahedron/ | # Constructing a tetrahedron
For this problem, we will define a "unit polyhedron" to be a polyhedron of edge length one.
The picture to the right shows how a larger tetrahedron can be built from four unit tetrahedra and one unit octahedron.
Amanda wishes to construct a much larger tetrahedron with many more unit tetrahedra and octahedra. In the end, she builds one using exactly 364 unit octahedra and some unit tetrahedra.
How many unit tetrahedra did she use?
Assumption: The final construction is one solid tetrahedron, with nothing extra sticking out, and nothing missing (no holes). No unit octahedra or tetrahedra are cut in any way.
Image credit: http://www.matematicasvisuales.com/
http://math.stackexchange.com/questions/112762/is-there-a-formula-to-quickly-express-delayed-functions-in-terms-of-finite-diffe/112785 | # Is there a formula to quickly express delayed functions in terms of finite differences?
It is easy to express difference deltas in terms of delayed functions as follows:
$$\Delta^n [f](x)= \sum_{k=0}^n {n \choose k} (-1)^{n-k} f(x+k)$$
For example.
But what about the inverse process, is there a formula?
## 1 Answer
There is: for every integer $n\geqslant0$, $$f(x+n)=\sum_{k=0}^n\binom{n}{k}\Delta^k[f](x).$$
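A quick numerical check of both identities (an added R illustration; the cubic test function and the points are arbitrary):

f = function(x) x^3 - 2*x + 1
x = 2; n = 5
Delta = function(k, x) sum(choose(k, 0:k) * (-1)^(k - 0:k) * f(x + 0:k))  # forward differences
sum(choose(n, 0:n) * sapply(0:n, Delta, x = x))   # reconstructs f(x+n)
f(x + n)                                          # direct evaluation: 330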
https://artofproblemsolving.com/wiki/index.php?title=Sieve_of_Sundaram&oldid=118479 | # Sieve of Sundaram
## Description
The Sieve of Sundaram is a sieve for a range of odd prime numbers. The sieve starts out by listing the natural numbers from $1$ up to some chosen bound, as in the following example.
### Step 2
Then it finds all numbers of the form $i+j+2ij$ (a fancy way of saying: start at $3i+1$ and increase by $2i+1$, for each $i$ not already eliminated) and eliminates them (empty cells):
### Step 3
Double each number and add one:
## Basis
The basis of the algorithm is that $2(i+j+2ij)+1=(2i+1)(2j+1)$ for $i\leq j$ (without loss of generality, as the form we double and add 1 to is made up of commutative operations), with both factors nontrivial.
## Optimization
The equivalent, in Step 2 above, of the Sieve of Eratosthenes starting at the next prime squared is starting at $2i(i+1)$ for the next index $i$, since $(2i+1)^2 = 2\big(2i(i+1)\big)+1$.
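A compact implementation of the sieve as described (an added R sketch, not part of the original article); it returns the odd primes up to $2\cdot\text{limit}+1$:

sundaram = function(limit) {
  marked = rep(FALSE, limit)
  for (i in 1:limit) {
    j = i
    repeat {
      v = i + j + 2 * i * j    # numbers of the form i + j + 2ij
      if (v > limit) break
      marked[v] = TRUE
      j = j + 1
    }
  }
  2 * which(!marked) + 1       # double each surviving number and add one
}
sundaram(30)   # 3 5 7 11 13 17 19 23 29 31 37 41 43 47 53 59 61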
## Generalization for prime finding
Looking closely, if you know modular arithmetic, you'll note the remainder 1 forms the modular multiplicative group modulo 2. This generalizes to the modular multiplicative group mod any natural number (with loss of that modulus's factors in the prime list). This generalization either needs copies of the same numbers (one for each class), or a triangular number of colors. The class of remainder 1 is the only class where the variable must stay nonzero to introduce a nontrivial factor by default (negative representatives increase the variable value required by 1). Modulo 30 is sketched out below (8 classes):
$\begin{array}{ccccccccc} c/d&1&7&11&13&17&19&23&29\\ 1&a+b&7a+b&11a+b&13a+b&17a+b&19a+b&23a+b&29a+b\\ 7&a+7b&7(a+b)+1&11a+7b+2&13a+7b+3&17a+7b+3&19a+7b+4&23a+7b+5&29a+7b+6\\ 11&a+11b&7a+11b+2&11(a+b)+4&13a+11b+4&17a+11b+6&19a+11b+6&23a+11b+8&29a+11b+10\\ 13&a+13b&7a+13b+3&11a+13b+4&13(a+b)+5&17a+13b+7&19a+13b+8&23a+13b+9&29a+13b+12\\ 17&a+17b&7a+17b+3&11a+17b+6&13a+17b+7&17(a+b)+9&19a+17b+10&23a+17b+13&29a+17b+16\\ 19&a+19b&7a+19b+4&11a+19b+6&13a+19b+8&17a+19b+10&19(a+b)+12&23a+19b+14&29a+19b+18\\ 23&a+23b&7a+23b+5&11a+23b+8&13a+23b+9&17a+23b+13&19a+23b+14&23(a+b)+17&29a+23b+22\\ 29&a+29b&7a+29b+6&11a+29b+10&13a+29b+12&17a+29b+16&19a+29b+18&23a+29b+22&29(a+b)+28\\ \end{array}$
http://arxiv-export-lb.library.cornell.edu/abs/2208.06924v1 | physics.hist-ph
# Title: On the Map-Territory Fallacy Fallacy
Abstract: This paper presents a meta-theory of the usage of the free energy principle (FEP) and examines its scope in the modelling of physical systems. We consider the so-called 'map-territory fallacy' and the fallacious reification of model properties. By showing that the FEP is a consistent, physics-inspired theory of inferences of inferences, we disprove the assertion that the map-territory fallacy contradicts the principled usage of the FEP. As such, we argue that deploying the map-territory fallacy to criticise the use of the FEP and Bayesian mechanics itself constitutes a fallacy: what we call the {\it map-territory fallacy fallacy}. In so doing, we emphasise a few key points: the uniqueness of the FEP as a model of particles or agents that model their environments; the restoration of convention to the FEP via its relation to the principle of constrained maximum entropy; the 'Jaynes optimality' of the FEP under this relation; and finally, the way that this meta-theoretical approach to the FEP clarifies its utility and scope as a formal modelling tool. Taken together, these features make the FEP, uniquely, {\it the} ideal model of generic systems in statistical physics.
Comments: 23 pages Subjects: History and Philosophy of Physics (physics.hist-ph); Statistical Mechanics (cond-mat.stat-mech) Cite as: arXiv:2208.06924 [physics.hist-ph] (or arXiv:2208.06924v1 [physics.hist-ph] for this version) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8906964063644409, "perplexity": 2767.6654286285197}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446709929.63/warc/CC-MAIN-20221126212945-20221127002945-00285.warc.gz"} |
https://www.katarinahoeger.com/2017/12/15/initial-reflections-on-teaching | For those who don't know, my job, for the foreseeable future, is that of an after school STEAM instructor at The Digital Arts Experience. This means that recently, I have taught kids to code in Python, and have directed them through fun coding exercises using Edison Robots and the EdPy program and Processing.
My degrees are not in education. Many of my friends who are teachers tell me that I will learn on the job, and that officially having a certification means nothing. To me, the certification implies that you know where to find the correct information to deal with situations that come up. Certified teachers still have to learn through trial and error, but have at least theoretic knowledge of the sorts of problems they may run into and ways to handle them gracefully.
As I do not have a degree, and I am concluding my first set of classes, I find that it’s important to record some of the lessons I’ve learned from my experiences, so that future me will make fewer mistakes.
Proper Preparation Prevents *\!~ Poor Performance
Someone who’s been in my life forever always says this, especially whenever something has gone wrong. It seems to hold pretty true for teaching. What I have found is that, not only is proper preparation important, but what counts as proper preparation depends drastically on the subject matter, age level, and the group of kids. I can’t just set aside an hour, and say that I’ve completed what I need to.
For me proper preparation includes
• Coding through an entire lesson worth of work
• Reading through any potential lesson plans
• Listing a bunch of potential questions that students will have.
• Thinking up extra-curricular exercises, and trying to at least write pseudocode for those, if not fully coding them out
I’ve found that while under the pressure of having a bunch of small humans solely focusing on me, I do not come up with complicated solutions well. I can definitely modify a lesson on the fly to make it more interesting to the students, but generally, if I did not come into the lesson with code, it’s not a good idea to write it for the students. Honestly, I have always had the same problem with coding interviews. It’s just a bad idea for me to come up with something novel under pressure.
All of this requires time. Creating new things requires a great deal of time, and if I am exploring how to do something, I can easily put 6–20 hours into preparing for a 1.5 hour class. So, while I'm getting my bearings on new courses, I have found that unless I decide to dedicate my entire life to teaching, I cannot justify creating new courses. Also, I have made a promise to myself to be the best teacher I can, given the restriction that I will not spend more than $x$ hours preparing for class.
Bored Students are Mischievous
When a student is not engaged, for whatever reason, that’s when I find that the student’s interactions negatively impact the class.
In the most benign cases, the bored students I have run into just distract their neighbors. Give a student a computer, and it is very easy for them to distract themselves and those around them. In some cases, they find internet sites and play games. In others, they don’t listen when asked to move forward with the work, and work on other work instead. Either way, the neighbors see this, and wonder why they should do their work when their classmates can goof off. With older students, I tell them that what they get out of the class is what they put in, sort of like in life. (Special thanks to Matt, one of my teaching assistants, for this pearl of wisdom.)
Sometimes, students are bored because they just don’t want to be in the class. This might be because they don’t want to learn to code, but their parents want them to, so they’re just there. It might also be because they’re advanced, and want to move faster than the class. Here, I’m stuck. When I give the advanced students more work, the less advanced students, also high achievers, don’t want to move on unless they finish the work. Also, the advanced students need someone to walk them through their work at the same time that the less advanced students need someone to walk them through their work, and I cannot show the code for both things at once. I haven’t yet found a good solution for this.
What I do try is to make lessons more engaging. I try to give the students a sense of choice, so that they have some say in what the code does, but are still working through the important concepts. In the same way that a parent may lay out 3 different shirts for a toddler to choose from, and 2 pairs of pants, and tell them to pick one of each, I try to provide a structure, and let the students pick some options from within that, but not let them choose other options. It’s like saying no to the toddler wearing only 2 shirts and no pants.
I also like to give the students a chance to make something cool on their own with what they learned, if possible. When they understand how to put the skills together to do something, they can direct the computer to do something much more interesting than they can just with individual skillsets.
Students Look Up to Me?
That’s right… when I’m the instructor, the students look up to me. It’s a strange feeling. It’s partially flattering, and partially terrifying. I feel like I need to be a particularly polished human being for them, because I would never want to accidentally influence them to do something bad. It’s one thing if I wander into unfortunate situations due to what I’ve said, or choices I’ve made, but it’s another if I’ve led someone else down a rabbit hole that they never would have found if they hadn’t met me.
This means that I have to watch what I say. Not just word choice, but the topics I talk about as well. If I mention a current event, or even just a thought about computers, they think that what I say is right.
I also need to make sure that I treat them all very equally, or some might internalize that I find them less valuable. The students I work with are typically very bright, but how do I connect with them in a way so that they don’t feel like they’re in competition with one another? This is an important skill.
Overall, the students I have taught so far are generally sweet, want to learn something cool, and just want to impress everyone around them. Like every other human, they want to feel special, and I think that one of my jobs as a teacher is to lead them to make something that makes them realize that they are special.
Conclusions
I don’t think this will be my forever job, but teaching is a great learning experience for the moment. While many go into teaching for stable jobs, my current experiences have reinforced that students deserve teachers who want to teach them.
Teaching requires a lot of time, a lot of dedication, and involves a lot of frustration and patience. If someone doesn’t want to deal with the students, they will be miserable.
As an exercise, I challenge you to think back on a teacher you had who lived for their job. Their every action in the classroom, and many outside of it, led to your enrichment as a student. If you were lucky enough to have one of those special individuals, please, send them a mental thank you!
https://darrenjw.wordpress.com/tag/independent/ | ## One-way ANOVA with fixed and random effects from a Bayesian perspective
This blog post is derived from a computer practical session that I ran as part of my new course on Statistics for Big Data, previously discussed. This course covered a lot of material very quickly. In particular, I deferred introducing notions of hierarchical modelling until the Bayesian part of the course, where I feel it is more natural and powerful. However, some of the terminology associated with hierarchical statistical modelling probably seems a bit mysterious to those without a strong background in classical statistical modelling, and so this practical session was intended to clear up some potential confusion. I will analyse a simple one-way Analysis of Variance (ANOVA) model from a Bayesian perspective, making sure to highlight the difference between fixed and random effects in a Bayesian context where everything is random, as well as emphasising the associated identifiability issues. R code is used to illustrate the ideas.
### Example scenario
We will consider the body mass index (BMI) of new male undergraduate students at a selection of UK Universities. Let us suppose that our data consist of measurements of (log) BMI for a random sample of 1,000 males at each of 8 Universities. We are interested to know if there are any differences between the Universities. Again, we want to model the process as we would simulate it, so thinking about how we would simulate such data is instructive. We start by assuming that the log BMI is a normal random quantity, and that the variance is common across the Universities in question (this is quite a big assumption, and it is easy to relax). We assume that the mean of this normal distribution is University-specific, but that we do not have strong prior opinions regarding the way in which the Universities differ. That said, we expect that the Universities would not be very different from one another.
### Simulating data
A simple simulation of the data with some plausible parameters can be carried out as follows.
set.seed(1)
Z=matrix(rnorm(1000*8,3.1,0.1),nrow=8)
RE=rnorm(8,0,0.01)
X=t(Z+RE)
colnames(X)=paste("Uni",1:8,sep="")
Data=stack(data.frame(X))
boxplot(exp(values)~ind,data=Data,notch=TRUE)
Make sure that you understand exactly what this code is doing before proceeding. The boxplot showing the simulated data is given below.
### Frequentist analysis
We will start with a frequentist analysis of the data. The model we would like to fit is
$y_{ij} = \mu + \theta_i + \varepsilon_{ij}$
where i is an indicator for the University and j for the individual within a particular University. The “effect”, $\theta_i$ represents how the ith University differs from the overall mean. We know that this model is not actually identifiable when the model parameters are all treated as “fixed effects”, but R will handle this for us.
> mod=lm(values~ind,data=Data)
> summary(mod)
Call:
lm(formula = values ~ ind, data = Data)
Residuals:
Min 1Q Median 3Q Max
-0.36846 -0.06778 -0.00069 0.06910 0.38219
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 3.101068 0.003223 962.244 < 2e-16 ***
indUni2 -0.006516 0.004558 -1.430 0.152826
indUni3 -0.017168 0.004558 -3.767 0.000166 ***
indUni4 0.017916 0.004558 3.931 8.53e-05 ***
indUni5 -0.022838 0.004558 -5.011 5.53e-07 ***
indUni6 -0.001651 0.004558 -0.362 0.717143
indUni7 0.007935 0.004558 1.741 0.081707 .
indUni8 0.003373 0.004558 0.740 0.459300
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.1019 on 7992 degrees of freedom
Multiple R-squared: 0.01439, Adjusted R-squared: 0.01353
F-statistic: 16.67 on 7 and 7992 DF, p-value: < 2.2e-16
We see that R has handled the identifiability problem using “treatment contrasts”, dropping the fixed effect for the first university, so that the intercept actually represents the mean value for the first University, and the effects for the other Universities represent the differences from the first University. If we would prefer to impose a sum constraint, then we can switch to sum contrasts with
options(contrasts=rep("contr.sum",2))
and then re-fit the model.
> mods=lm(values~ind,data=Data)
> summary(mods)
Call:
lm(formula = values ~ ind, data = Data)
Residuals:
Min 1Q Median 3Q Max
-0.36846 -0.06778 -0.00069 0.06910 0.38219
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 3.0986991 0.0011394 2719.558 < 2e-16 ***
ind1 0.0023687 0.0030146 0.786 0.432048
ind2 -0.0041477 0.0030146 -1.376 0.168905
ind3 -0.0147997 0.0030146 -4.909 9.32e-07 ***
ind4 0.0202851 0.0030146 6.729 1.83e-11 ***
ind5 -0.0204693 0.0030146 -6.790 1.20e-11 ***
ind6 0.0007175 0.0030146 0.238 0.811889
ind7 0.0103039 0.0030146 3.418 0.000634 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.1019 on 7992 degrees of freedom
Multiple R-squared: 0.01439, Adjusted R-squared: 0.01353
F-statistic: 16.67 on 7 and 7992 DF, p-value: < 2.2e-16
This has 7 degrees of freedom for the effects, as before, but ensures that the 8 effects sum to precisely zero. This is arguably more interpretable in this case.
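As a small aside (not part of the original output), the eighth effect is not printed under sum contrasts, but it can be recovered from the fitted object mods above as minus the sum of the other seven:

eff=coef(mods)[-1]        # the 7 reported sum-contrast effects
c(eff,ind8=-sum(eff))     # the suppressed eighth effect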
### Bayesian analysis
We will now analyse the simulated data from a Bayesian perspective, using JAGS.
#### Fixed effects
All parameters in Bayesian models are uncertain, and therefore random, so there is much confusion regarding the difference between “fixed” and “random” effects in a Bayesian context. For “fixed” effects, our prior captures the idea that we sample the effects independently from a “fixed” (typically vague) prior distribution. We could simply code this up and fit it in JAGS as follows.
require(rjags)
n=dim(X)[1]
p=dim(X)[2]
data=list(X=X,n=n,p=p)
init=list(mu=2,tau=1)
modelstring="
model {
for (j in 1:p) {
theta[j]~dnorm(0,0.0001)
for (i in 1:n) {
X[i,j]~dnorm(mu+theta[j],tau)
}
}
mu~dnorm(0,0.0001)
tau~dgamma(1,0.0001)
}
"
model=jags.model(textConnection(modelstring),data=data,inits=init)
update(model,n.iter=1000)
output=coda.samples(model=model,variable.names=c("mu","tau","theta"),n.iter=100000,thin=10)
print(summary(output))
plot(output)
autocorr.plot(output)
pairs(as.matrix(output))
crosscorr.plot(output)
On running the code we can clearly see that this naive approach leads to high posterior correlation between the mean and the effects, due to the fundamental lack of identifiability of the model. This also leads to MCMC mixing problems, but it is important to understand that this computational issue is conceptually entirely separate from the fundamental statistical identifiability issue. Even if we could avoid MCMC entirely, the identifiability issue would remain.
A quick fix for the identifiability issue is to use “treatment contrasts”, just as for the frequentist model. We can implement that as follows.
data=list(X=X,n=n,p=p)
init=list(mu=2,tau=1)
modelstring="
model {
for (j in 1:p) {
for (i in 1:n) {
X[i,j]~dnorm(mu+theta[j],tau)
}
}
theta[1]<-0
for (j in 2:p) {
theta[j]~dnorm(0,0.0001)
}
mu~dnorm(0,0.0001)
tau~dgamma(1,0.0001)
}
"
model=jags.model(textConnection(modelstring),data=data,inits=init)
update(model,n.iter=1000)
output=coda.samples(model=model,variable.names=c("mu","tau","theta"),n.iter=100000,thin=10)
print(summary(output))
plot(output)
autocorr.plot(output)
pairs(as.matrix(output))
crosscorr.plot(output)
Running this we see that the model now works perfectly well, mixes nicely, and gives sensible inferences for the treatment effects.
Another source of confusion for models of this type is data formatting and indexing in JAGS models. For our balanced data there was no problem passing in data to JAGS as a matrix and specifying the model using nested loops. However, for unbalanced designs this is not necessarily so convenient, and so then it can be helpful to specify the model based on two-column data, as we would use for fitting using lm(). This is illustrated with the following model specification, which is exactly equivalent to the previous model, and should give identical (up to Monte Carlo error) results.
N=n*p
data=list(y=Data$values,g=Data$ind,N=N,p=p)
init=list(mu=2,tau=1)
modelstring="
model {
for (i in 1:N) {
y[i]~dnorm(mu+theta[g[i]],tau)
}
theta[1]<-0
for (j in 2:p) {
theta[j]~dnorm(0,0.0001)
}
mu~dnorm(0,0.0001)
tau~dgamma(1,0.0001)
}
"
model=jags.model(textConnection(modelstring),data=data,inits=init)
update(model,n.iter=1000)
output=coda.samples(model=model,variable.names=c("mu","tau","theta"),n.iter=100000,thin=10)
print(summary(output))
plot(output)
As suggested above, this indexing scheme is much more convenient for unbalanced data, and hence widely used. However, since our data is balanced here, we will revert to the matrix approach for the remainder of the post.
One final thing to consider before moving on to random effects is the sum-contrast model. We can implement this in various ways, but I’ve tried to encode it for maximum clarity below, imposing the sum-to-zero constraint via the final effect.
data=list(X=X,n=n,p=p)
init=list(mu=2,tau=1)
modelstring="
model {
for (j in 1:p) {
for (i in 1:n) {
X[i,j]~dnorm(mu+theta[j],tau)
}
}
for (j in 1:(p-1)) {
theta[j]~dnorm(0,0.0001)
}
theta[p] <- -sum(theta[1:(p-1)])
mu~dnorm(0,0.0001)
tau~dgamma(1,0.0001)
}
"
model=jags.model(textConnection(modelstring),data=data,inits=init)
update(model,n.iter=1000)
output=coda.samples(model=model,variable.names=c("mu","tau","theta"),n.iter=100000,thin=10)
print(summary(output))
plot(output)
Again, this works perfectly well and gives similar results to the frequentist analysis.
#### Random effects
The key difference between fixed and random effects in a Bayesian framework is that random effects are not independent, being drawn from a distribution with parameters which are not fixed. Essentially, there is another level of hierarchy involved in the specification of the random effects. This is best illustrated by example. A random effects model for this problem is given below.
data=list(X=X,n=n,p=p)
init=list(mu=2,tau=1)
modelstring="
model {
for (j in 1:p) {
theta[j]~dnorm(0,taut)
for (i in 1:n) {
X[i,j]~dnorm(mu+theta[j],tau)
}
}
mu~dnorm(0,0.0001)
tau~dgamma(1,0.0001)
taut~dgamma(1,0.0001)
}
"
model=jags.model(textConnection(modelstring),data=data,inits=init)
update(model,n.iter=1000)
output=coda.samples(model=model,variable.names=c("mu","tau","taut","theta"),n.iter=100000,thin=10)
print(summary(output))
plot(output)
The only difference between this and our first naive attempt at a Bayesian fixed effects model is that we have put a gamma prior on the precision of the effect. Note that this model now runs and fits perfectly well, with reasonable mixing, and gives sensible parameter inferences. Although the effects here are not constrained to sum-to-zero, like in the case of sum contrasts for a fixed effects model, the prior encourages shrinkage towards zero, and so the random effect distribution can be thought of as a kind of soft version of a hard sum-to-zero constraint. From a predictive perspective, this model is much more powerful. In particular, using a random effects model, we can make strong predictions for unobserved groups (eg. a ninth University), with sensible prediction intervals based on our inferred understanding of how similar different universities are. Using a fixed effects model this isn’t really possible. Even for a Bayesian version of a fixed effects model using proper (but vague) priors, prediction intervals for unobserved groups are not really sensible.
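As a concrete illustration of that predictive claim, one way (a sketch added here, not code from the original post) to extend the random effects model with a predictive node for a hypothetical ninth University is shown below; the node names thetanew and ynew are invented for this sketch, and everything else follows the model above.

modelstring="
model {
  for (j in 1:p) {
    theta[j]~dnorm(0,taut)
    for (i in 1:n) {
      X[i,j]~dnorm(mu+theta[j],tau)
    }
  }
  mu~dnorm(0,0.0001)
  tau~dgamma(1,0.0001)
  taut~dgamma(1,0.0001)
  thetanew~dnorm(0,taut)        # effect for an unobserved ninth University
  ynew~dnorm(mu+thetanew,tau)   # predictive log BMI for a new student there
}
"
model=jags.model(textConnection(modelstring),data=data,inits=init)
update(model,n.iter=1000)
output=coda.samples(model=model,variable.names=c("mu","thetanew","ynew"),n.iter=100000,thin=10)
print(summary(output))

The posterior for ynew is then a genuine predictive distribution for a student at an unobserved University, with width reflecting both the within-University variance and the inferred between-University variability.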
Since we have used simulated data here, we can compare the estimated random effects with the true effects generated during the simulation.
> apply(as.matrix(output),2,mean)
mu tau taut theta[1] theta[2]
3.098813e+00 9.627110e+01 7.015976e+03 2.086581e-03 -3.935511e-03
theta[3] theta[4] theta[5] theta[6] theta[7]
-1.389099e-02 1.881528e-02 -1.921854e-02 5.640306e-04 9.529532e-03
theta[8]
5.227518e-03
> RE
[1] 0.002637034 -0.008294518 -0.014616348 0.016839902 -0.015443243
[6] -0.001908871 0.010162117 0.005471262
We see that the Bayesian random effects model has done an excellent job of estimation. If we wished, we could relax the assumption of common variance across the groups by making tau a vector indexed by j, though there is not much point in pursuing this here, since we know that the groups do all have the same variance.
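For completeness, that relaxation would look like the following (again a sketch added here, not from the original post): the observation precision becomes a vector indexed by group, and the scalar initial value for tau would need to be dropped from the inits list or replaced by rep(1,p).

modelstring="
model {
  for (j in 1:p) {
    theta[j]~dnorm(0,taut)
    tau[j]~dgamma(1,0.0001)      # group-specific observation precision
    for (i in 1:n) {
      X[i,j]~dnorm(mu+theta[j],tau[j])
    }
  }
  mu~dnorm(0,0.0001)
  taut~dgamma(1,0.0001)
}
"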
#### Strong subjective priors
The above is the usual story regarding fixed and random effects in Bayesian inference. I hope this is reasonably clear, so really I should quit while I’m ahead… However, the issues are really a bit more subtle than I’ve suggested. The inferred precision of the random effects was around 7,000, so now let’s re-run the original, naive, “fixed effects” model with a strong subjective Bayesian prior on the distribution of the effects.
data=list(X=X,n=n,p=p)
init=list(mu=2,tau=1)
modelstring="
model {
for (j in 1:p) {
theta[j]~dnorm(0,7000)
for (i in 1:n) {
X[i,j]~dnorm(mu+theta[j],tau)
}
}
mu~dnorm(0,0.0001)
tau~dgamma(1,0.0001)
}
"
model=jags.model(textConnection(modelstring),data=data,inits=init)
update(model,n.iter=1000)
output=coda.samples(model=model,variable.names=c("mu","tau","theta"),n.iter=100000,thin=10)
print(summary(output))
plot(output)
This model also runs perfectly well and gives sensible inferences, despite the fact that the effects are iid from a fixed distribution and there is no hard constraint on the effects. Similarly, we can make sensible predictions, together with appropriate prediction intervals, for an unobserved group. So it isn’t so much the fact that the effects are coupled via an extra level of hierarchy that makes things work. It’s really the fact that the effects are sensibly distributed and not just sampled directly from a vague prior. So for “real” subjective Bayesians the line between fixed and random effects is actually very blurred indeed…
## Introduction to the particle Gibbs sampler
### Introduction
Particle MCMC (the use of approximate SMC proposals within exact MCMC algorithms) is arguably one of the most important developments in computational Bayesian inference of the 21st Century. The key concepts underlying these methods are described in a famously impenetrable “read paper” by Andrieu et al (2010). Probably the most generally useful method outlined in that paper is the particle marginal Metropolis-Hastings (PMMH) algorithm that I have described previously – that post is required preparatory reading for this one.
In this post I want to discuss some of the other topics covered in the pMCMC paper, leading up to a description of the particle Gibbs sampler. The basic particle Gibbs algorithm is arguably less powerful than PMMH for a few reasons, some of which I will elaborate on. But there is still a lot of active research concerning particle Gibbs-type algorithms, which are attempting to address some of the deficiencies of the basic approach. Clearly, in order to understand and appreciate the recent developments it is first necessary to understand the basic principles, and so that is what I will concentrate on here. I’ll then finish with some pointers to more recent work in this area.
### PIMH
I will adopt the same approach and notation as for my post on the PMMH algorithm, using a simple bootstrap particle filter for a state space model as the SMC proposal. It is simplest to understand particle Gibbs first in the context of known static parameters, and so it is helpful to first reconsider the special case of the PMMH algorithm where there are no unknown parameters and only the state path, $x$ of the process is being updated. That is, we target $p(x|y)$ (for known, fixed, $\theta$) rather than $p(\theta,x|y)$. This special case is known as the particle independent Metropolis-Hastings (PIMH) sampler.
Here we envisage proposing a new path $x_{0:T}^\star$ using a bootstrap filter, and then accepting the proposal with probability $\min\{1,A\}$, where $A$ is the Metropolis-Hastings ratio
$\displaystyle A = \frac{\hat{p}(y_{1:T})^\star}{\hat{p}(y_{1:T})},$
where $\hat{p}(y_{1:T})^\star$ is the bootstrap filter’s estimate of marginal likelihood for the new path, and $\hat{p}(y_{1:T})$ is the estimate associated with the current path. Again using notation from the previous post it is clear that this ratio targets a distribution on the joint space of all simulated random variables proportional to
$\displaystyle \hat{p}(y_{1:T})\tilde{q}(\mathbf{x}_0,\ldots,\mathbf{x}_T,\mathbf{a}_0,\ldots,\mathbf{a}_{T-1})$
and that in this case the marginal distribution of the accepted path is exactly $p(x_{0:T}|y_{1:T})$. Again, be sure to see the previous post for the explanation.
### Conditional SMC update
So far we have just recapped the previous post in the case of known parameters, but it gives us insight in how to proceed. A general issue with Metropolis independence samplers in high dimensions is that they often exhibit “sticky” behaviour, whereby an unusually “good” accepted path is hard to displace. This motivates consideration of a block-Gibbs-style algorithm where updates are used that are always accepted. It is clear that simply running a bootstrap filter will target the particle filter distribution
$\tilde{q}(\mathbf{x}_0,\ldots,\mathbf{x}_T,\mathbf{a}_0,\ldots,\mathbf{a}_{T-1})$
and so the marginal distribution of the accepted path will be the approximate $\hat{p}(x_{0:T}|y_{1:T})$ rather than the exact conditional distribution $p(x_{0:T}|y_{1:T})$. However, we know from consideration of the PIMH algorithm that what we really want to do is target the slightly modified distribution proportional to
$\displaystyle \hat{p}(y_{1:T})\tilde{q}(\mathbf{x}_0,\ldots,\mathbf{x}_T,\mathbf{a}_0,\ldots,\mathbf{a}_{T-1})$,
as this will lead to accepted paths with the exact marginal distribution. For the PIMH this modification is achieved using a Metropolis-Hastings correction, but we now try to avoid this by instead conditioning on the previously accepted path. For this target the accepted paths have exactly the required marginal distribution, so we now write the target as the product of the marginal for the current path times a conditional for all of the remaining variables.
$\displaystyle \frac{p(x_{0:T}^k|y_{1:T})}{M^T} \times \frac{M^T}{p(x_{0:T}^k|y_{1:T})} \hat{p}(y_{1:T})\tilde{q}(\mathbf{x}_0,\ldots,\mathbf{x}_T,\mathbf{a}_0,\ldots,\mathbf{a}_{T-1})$
where in addition to the correct marginal for $x$ we assume iid uniform ancestor indices. The important thing to note here is that the conditional distribution of the remaining variables simplifies to
$\displaystyle \frac{\tilde{q}(\mathbf{x}_0,\ldots,\mathbf{x}_T,\mathbf{a}_0,\ldots,\mathbf{a}_{T-1})} {\displaystyle p(x_0^{b_0^k})\left[\prod_{t=0}^{T-1} \pi_t^{b_t^k}p\left(x_{t+1}^{b_{t+1}^k}|x_t^{b_t^k}\right)\right]}$.
The terms in the denominator are precisely the terms in the numerator corresponding to the current path, and hence “cancel out” the current path terms in the numerator. It is therefore clear that we can sample directly from this conditional distribution by running a bootstrap particle filter that includes the current path and which leaves the current path fixed. This is the conditional SMC (CSMC) update, which here is just a conditional bootstrap particle filter update. It is clear from the form of the conditional density how this filter must be constructed, but for completeness it is described below.
The bootstrap filter is run conditional on one trajectory. This is usually the trajectory sampled at the last run of the particle filter. The idea is that you do not sample new state or ancestor values for that one trajectory. Note that this guarantees that the conditioned on trajectory survives the filter right through to the final sweep of the filter at which point a new trajectory is picked from the current selection of $M$ paths, of which the conditioned-on trajectory is one.
Let $x_{1:T} = (x_1^{b_1},x_2^{b_2},\ldots,x_T^{b_T})$ be the path that is to be conditioned on, with ancestral lineage $b_{1:T}$. Then, for $k\not= b_1$, sample $x_0^k \sim p(x_0)$ and set $\pi_0^k=1/M$. Now suppose that at time $t$ we have a weighted sample from $p(x_t|y_{1:t})$. First resample by sampling $a_t^k\sim \mathcal{F}(a_t^k|\boldsymbol{\pi}_t),\ \forall k\not= b_t$. Next sample $x_{t+1}^k\sim p(x_{t+1}^k|x_t^{a_t^k}),\ \forall k\not=b_t$. Then for all $k$ set $w_{t+1}^k=p(y_{t+1}|x_{t+1}^k)$ and normalise with $\pi_{t+1}^k=w_{t+1}^k/\sum_{i=1}^M w_{t+1}^i$. Propagate this weighted set of particles to the next time point. At time $T$ select a single trajectory by sampling $k'\sim \mathcal{F}(k'|\boldsymbol{\pi}_T)$.
This defines a block Gibbs sampler which updates $2(M-1)T+1$ of the $2MT+1$ random variables in the augmented state space at each iteration. Since the block of variables to be updated is random, this defines an ergodic sampler for $M\geq2$ particles, and we have explained why the marginal distribution of the selected trajectory is the exact conditional distribution.
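To make the construction concrete, here is a minimal self-contained sketch (an illustration added here, not code from the Andrieu et al paper or from this post) of a conditional bootstrap particle filter for a simple linear Gaussian random walk model; the reference trajectory is kept in particle slot 1 throughout, which corresponds to taking $b_t=1$ for all $t$.

# Model assumed purely for illustration:
#   x_0 ~ N(0,1),  x_t = x_{t-1} + N(0, sdx^2),  y_t ~ N(x_t, sdy^2)
cond_bpf = function(y, xref, M = 100, sdx = 0.1, sdy = 0.2) {
  T = length(y)
  xmat = matrix(0, nrow = M, ncol = T + 1)   # particle states at times 0..T
  anc  = matrix(1, nrow = M, ncol = T)       # ancestor indices for times 1..T
  xmat[, 1] = rnorm(M, 0, 1)
  xmat[1, 1] = xref[1]                       # slot 1 carries the reference path
  w = rep(1 / M, M)
  for (t in 1:T) {
    anc[, t] = sample(1:M, M, replace = TRUE, prob = w)  # resample ancestors
    anc[1, t] = 1                                        # except the conditioned one
    xmat[, t + 1] = rnorm(M, xmat[anc[, t], t], sdx)     # propagate
    xmat[1, t + 1] = xref[t + 1]                         # keep the reference state fixed
    w = dnorm(y[t], xmat[, t + 1], sdy)                  # weight by the observation density
    w = w / sum(w)
  }
  k = sample(1:M, 1, prob = w)               # select one trajectory at the final time
  path = numeric(T + 1)
  path[T + 1] = xmat[k, T + 1]
  for (t in T:1) {                           # trace back its ancestral lineage
    k = anc[k, t]
    path[t] = xmat[k, t]
  }
  path
}

# Tiny usage example: condition on the true path of some simulated data
set.seed(42)
xtrue = cumsum(rnorm(51, 0, 0.1))
y = rnorm(50, xtrue[-1], 0.2)
newpath = cond_bpf(y, xref = xtrue, M = 200)

Iterating this update, each time conditioning on the previously selected path, gives the basic conditional SMC kernel described above.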
Before going on to consider the introduction of unknown parameters, it is worth considering the limitations of this method. One of the main motivations for considering a Gibbs-style update was concern about the “stickiness” of a Metropolis independence sampler. However, it is clear that conditional SMC updates also have the potential to stick. For a large number of time points, particle filter genealogies coalesce, or degenerate, to a single path. Since here we are conditioning on the current path, if there is coalescence, it is guaranteed to be to the previous path. So although the conditional SMC updates are always accepted, it is likely that much of the new path will be identical to the previous path, which is just another kind of “sticking” of the sampler. This problem with conditional SMC and particle Gibbs more generally is well recognised, and quite a bit of recent research activity in this area is directed at alleviating this sticking problem. The most obvious strategy to use is “backward sampling” (Godsill et al, 2004), which has been used in this context by Lindsten and Schon (2012), Whiteley et al (2010), and Chopin and Singh (2013), among others. Another related idea is “ancestor sampling” (Lindsten et al, 2014), which can be done in a single forward pass. Both of these techniques work well, but both rely on the tractability of the transition kernel of the state space model, which can be problematic in certain applications.
### Particle Gibbs sampling
As we are working in the context of Gibbs-style updates, the introduction of static parameters, $\theta$, into the problem is relatively straightforward. It turns out to be correct to do the obvious thing, which is to alternate between sampling $\theta$ given $y$ and the currently sampled path, $x$, and sampling a new path using a conditional SMC update, conditional on the previous path in addition to $\theta$ and $y$. Although this is the obvious thing to do, understanding exactly why it works is a little delicate, due to the augmented state space and conditional SMC update. However, it is reasonably clear that this strategy defines a “collapsed Gibbs sampler” (Liu, 1994), and so actually everything is fine. This particular collapsed Gibbs sampler is relatively easy to understand as a marginal sampler which integrates out the augmented variables, but then nevertheless samples the augmented variables at each iteration conditional on everything else.
Note that the Gibbs update of $\theta$ may be problematic in the context of a state space model with intractable transition kernel.
In a subsequent post I’ll show how to code up the particle Gibbs and other pMCMC algorithms in a reasonably efficient way. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 46, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8768958449363708, "perplexity": 629.7348132894087}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737152.0/warc/CC-MAIN-20200807025719-20200807055719-00336.warc.gz"} |
http://soilsandrocks.com.br/river-flow-kjifab/1c34dd-cobol-commands-pdf | Because of the huge investment in existing systems, COBOL remains widely used even as newer technologies and programming languages become popular. The page gathers pointers to COBOL tutorials, manuals, and courses, along with notes on basic elements of the COBOL language, and echoes a forum question asking for the list of COBOL commands needed to write, compile, and run programs after downloading MS COBOL.

Referenced resources include: Beginning COBOL for Programmers, a comprehensive, sophisticated tutorial and modular skills reference on the COBOL programming language for established programmers, aimed at developers who would like to (or must) add COBOL to their repertoire; the ILE COBOL Programmer's Guide, which describes how to write, compile, bind, run, debug, and maintain Integrated Language Environment (ILE) COBOL compiler programs for IBM i; the RM/COBOL user's guide from Liant Software Corporation; the Pro*COBOL precompiler, which converts the SQL statements in a COBOL program into standard Oracle run-time library calls; and beginner tutorials covering topics such as PERFORM, CALL, STRING, UNSTRING, COMP, COMP-3, INSPECT, and sequential and indexed file handling, with COBOL DB2 programs towards the end (for which a basic idea of the DDL and DML operations in DB2 is helpful). Basic TSO/ISPF commands and knowledge of JCL are useful but not necessary. TSO/E language processing commands mentioned include ASM (invoke the Assembler F compiler), COBOL (invoke the COBOL prompter and ANS COBOL compiler), FORT (invoke the FORTRAN prompter and FORTRAN IV G1 compiler), and CALC (invoke the ITF:PL/1 processor for desk calculator mode).

Notes on COBOL language elements: characters are lowest in the hierarchy and cannot be divided further. A COBOL word is a character string that can be a reserved word or a user-defined word. User-defined words are used for naming files, data, records, paragraph names, and sections; alphabets, digits, and hyphens are allowed when forming user-defined words, and reserved words cannot be used. Reserved words are predefined words in COBOL. A literal is a constant that is directly hard-coded in a program, and there are two types. Alphanumeric literals are enclosed in quotes or apostrophes (for example, "Hello World" is a literal), can be up to 160 characters long, and an apostrophe or a quote can be part of a literal only if it is paired. Numeric literals consist of digits 0 to 9, +, -, or a decimal point, can be up to 18 characters long, the sign cannot be the rightmost character, and the decimal point should not appear at the end. Figurative constants are constant values like ZERO, SPACES, and so on. Frequently used separators are the space, comma, period, apostrophe, left/right parenthesis, and quotation mark. A comment is a character string that does not affect the execution of a program, and a comment line can be written in any column. There are 80 character positions on each line of a coding sheet, and COBOL statements are written in Area B. COBOL is designed around decimal arithmetic, unlike most languages that use a binary internal representation; USAGE BINARY is allowed, but COBOL leans towards decimal (base-10) representations.
Constant that is directly hard-coded in a program the usual manner, abbreviations and... Documented according to the project, as follows: 1 and 118.!, clauses, and sections our COBOL tutorial course in PDF to improve your Programming and! Each Command performs data, cobol commands pdf, paragraph names, and delete data there are two types of questions... The highest position in descending order which displays the project 's Properties when you double-click it,. A comprehensive, sophisticated tutorial and modular skills Reference on the size of the variable and the function. Will provide your email, first name and last name to DISQUS have download MS from! Beginners Coding in free ZERO depending on the quality and usefulness of this course called in. Two types of interview questions are not documented individually quality and usefulness of this information and check the.... W riting the Binder Language commands for list, CNTL, CORE, and run in the and... Terms of Service in Bold are the commented entries in the tutorial by paying nominal... Of the document and DML operations are illustrated with lots of examples is. Starts from basics like Introduction of COBOL and covers everything in detail Period. Parts of the literal should be same, either apostrophe or a quote can be referred to other. Or decimal point ) Command..... are those that are included in the following example, cobol commands pdf World... Scripting appears to be three letter abbreviations of two or more characters have zeros in binary representation Reference can! Starts from basics like Introduction of COBOL must be written in Area B programmers. Tries to show all the cobol commands pdf based on a real-world scenario only if it is.... Tutorial course in PDF - you can add existing files to the compilers, update, insert, hyphens! Microsoft Windows, UNIX, and options you enter or select appear in uppercase Hello ''... ( base-10 ) representations not supported for your browser reserved word or a user-defined.. Indicating continuation and Slash ( / ) indicating form feed PDF - you can download PDF!, and hyphens are allowed while forming userdefined words and Coding constants are in! Languages that use a binary internal representation, Comma, Period, apostrophe, Parenthesis... ( base-10 ) representations can not be divided further, 2009 - commands... Cl CALL Command.... and print a PDF file of this wonderful tutorial paying... Ibm Mainframe ASSEMBLER F COMPILER Hi, I have download MS COBOL from this site is a of... Are used for naming files, data, records, paragraph names, delete... Highlighted in Bold are the commented entries in the following example, Hello World is!, records, paragraph names, and Quotation mark PL/1 PROCESSOR for DESK CALCULATOR.... Manualâ426750-001 iii 3 literal should be same, either apostrophe or quote Reference! 106 example of Cr eating a Service Pr ogram Using the CL CALL Command..... examples. Exit the tutorial by selecting specific options book is for you if you 're anticipating questions about during... Session of time, i.e. invalid Numeric Literals − ILE COBOL Language Reference you can add existing files the! Purpose this document provides general information on Command defaults, abbreviations, link-editing. Cobol applications can be cobol commands pdf in Area a commands in this article, we discuss these of! Cobol program into standard Oracle run-time library calls indicating comments, Hyphen ( - ) comments... Concept of DB2 cursors we have mention in detail general information on Command defaults,,! 
Binary representation label any of the TSO/E commands that are included in the hierarchy they! This course called download COBOL tutorial in PDF to improve your Programming skills and better understand COBOL Click... To improve your Programming skills and better understand COBOL that can be in! In other parts of the variable you 're anticipating questions about COBOL during your next job interview, may! 80 character positions on each line of a Coding sheet environment Division COBOL Programming: can anyone give the... Some sample answers, â, or decimal point and XPEDITER/IMS COBOL User 's Guide Using the COUNT Command names... Command Reference experienced users often find it faster to enter transaction-based commands for an ILE COBOL Pr ogram Cr... Tsr VPGM ) Command..... not documented individually COBOL is designed around arithmetic... Properties when you sign in to comment, IBM will provide your email, first name and last to. Be at the highest position in descending order TSO/E commands that are included in the hierarchy and can... List of the document Command performs cobol commands pdf displays the project you can the. ToâOr mustâadd COBOL to your repertoire DDL and DML operations are illustrated with lots of examples of time i.e... Key ( PF 3 ) you are a developer who would like toâor mustâadd COBOL to your repertoire,... Be referred to in other parts of the variable selecting specific options, Left/Right Parenthesis, and facilities!, along with your comments, will be governed by DISQUS ’ privacy policy and delete data an Identification PROGRAM-ID... Commands to retrieve, update, insert, and some proper names also appear in.... You master TSO Mainframe tutorial and ISPF Mainframe tutorial be very large with... A basic understanding of RM/COBOL, Microsoft Windows, UNIX, and delete data section names some! Special entries must begin in Area a languages that use a binary internal representation on Command defaults, abbreviations and... ( PF 3 ) any column review common COBOL interview questions as well as some sample.! The huge investment cobol commands pdf building new technology or Programming Language becomes popular are mentioned the. Ms COBOL from cobol commands pdf site and precise TSO/ISPF tutorial on IBM Mainframe in any.. The Binder Language commands for list, CNTL, CORE, and delimiter strings in descending order B and use... Time by pressing the END PF key ( PF 3 ) example, Hello World '' is combination... Binary internal representation me the list of COBOL, Structure of COBOL, Structure of COBOL and everything! Constant that is directly hard-coded in a program starts from basics like Introduction of COBOL and covers everything detail! Ò Click on the COBOL character Set includes 78 characters which are shown below − make sure that are... Language becomes popular add existing files to the project you can download the PDF but... Pf 3 ) our COBOL tutorial course in PDF to improve your Programming skills better! Experienced users often find it faster to enter transaction-based commands for list, CNTL, CORE, and you... 106 example of Cr eating a Service Pr ogram Using a HLL CALL Statement..... to an ILE Service. It is paired execution of a literal only if it is paired as needed the. Selecting specific options B and programmers use it for Reference the major function each Command performs are covered with.! This topic are not documented individually line for syntax and treats it for.... A Service Pr ogram.. are two types of Literals as discussed below − PDF you! 
Literals as discussed below − the Cr eate Service Pr ogram Using a HLL CALL Statement..... arithmetic, most! According to the compilers would like toâor mustâadd COBOL to your repertoire and. Names, and run them apostrophe, Left/Right Parenthesis, and delimiter strings a program ran... Which you cobol commands pdf, if you are Using vi XPEDITER/TSO and XPEDITER/IMS COBOL User Guide... List of COBOL commands Hi, I have download MS COBOL from this site discuss these types of questions... They can not be divided further - you can view and print a PDF file of information... Should be same, either apostrophe or a user-defined word the Binder commands. Disqus terms of Service if you master TSO Mainframe tutorial and better understand..... Modular skills Reference on the Typeset button and check cobol commands pdf PDF literal a. Sign in to comment, IBM will provide your email, first name and last name to DISQUS used. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45339736342430115, "perplexity": 7205.892345828533}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178357641.32/warc/CC-MAIN-20210226115116-20210226145116-00015.warc.gz"} |
https://dspace.uni.lodz.pl/xmlui/handle/11089/39678;jsessionid=3A8EF121C447C3D71548C0A16EF32878 | ### Recent Submissions
• #### Neighbourhood Semantics for Graded Modal Logic
(Wydawnictwo Uniwersytetu Łódzkiego, 2021-07-14)
We introduce a class of neighbourhood frames for graded modal logic embedding Kripke frames into neighbourhood frames. This class of neighbourhood frames is shown to be first-order definable but not modally definable. We ...
• #### A General Model of Neutrosophic Ideals in BCK/BCI-algebras Based on Neutrosophic Points
(Wydawnictwo Uniwersytetu Łódzkiego, 2020-08-15)
More general form of (∈, ∈ ∨q)-neutrosophic ideal is introduced, and their properties are investigated. Relations between (∈, ∈)-neutrosophic ideal and (∈, ∈ ∨q(kT ,kI ,kF ))-neutrosophic ideal are discussed. Characterizations ...
• #### Falling Shadow Theory with Applications in Hoops
(Wydawnictwo Uniwersytetu Łódzkiego, 2021-01-20)
The falling shadow theory is applied to subhoops and filters in hoops. The notions of falling fuzzy subhoops and falling fuzzy filters in hoops are introduced, and several properties are investigated. Relationship between ...
• #### Tense Operators on BL-algebras and Their Applications
(Wydawnictwo Uniwersytetu Łódzkiego, 2021-05-28)
In this paper, the notions of tense operators and tense filters in $$BL$$-algebras are introduced and several characterizations of them are obtained. Also, the relation among tense $$BL$$-algebras, tense $$MV$$-algebras ...
• #### A Note on Gödel-Dummet Logic LC
(Wydawnictwo Uniwersytetu Łódzkiego, 2021-07-01)
• #### Omitting Types in Fragments and Extensions of First Order Logic
(Wydawnictwo Uniwersytetu Łódzkiego, 2021-05-28)
Fix $$2 < n < \omega$$. Let $$L_n$$ denote first order logic restricted to the first n variables. Using the machinery of algebraic logic, positive and negative results on omitting types are obtained for $$L_n$$ and for ... | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8117526769638062, "perplexity": 8917.392032820819}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662515466.5/warc/CC-MAIN-20220516235937-20220517025937-00007.warc.gz"} |
https://chemistry.stackexchange.com/questions/1238/chemical-reactions-with-a-room-scale-cooling-effect | # Chemical reactions with a room-scale cooling effect
Are there chemical reactions that could cool down an average sized room by a noticeable amount (say 5 °C)?
I would like to investigate whether it is possible to mix two reagents at room temperature and pressure, in open air, such that they react and become colder than room temperature without relying on evaporation of some kind, with an eye to producing a noticeable drop in the room temperature.
• As I remember, dissolving $NH_4Cl$ in water cools the liquid. Not by much, but the effect is clearly visible. There are many reactions that consume energy, but they are usually either slow or require high temperatures. – permeakra Sep 28 '12 at 5:11
The reaction between ammonium thiocyanate and barium hydroxide octahydrate is endothermic. It absorbs heat from the surroundings.
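To get a feel for the scale involved, here is a rough estimate using round textbook numbers (my own illustration, not part of the original answer): an average room holds on the order of $50\ \mathrm{m^3}$ of air, which at about $1.2\ \mathrm{kg/m^3}$ is roughly $60\ \mathrm{kg}$, with a specific heat near $1.0\ \mathrm{kJ/(kg\cdot K)}$. Cooling just the air by 5 °C therefore takes about $60 \times 1.0 \times 5 \approx 300\ \mathrm{kJ}$, ignoring the walls and furniture, which in practice store far more heat. If the endothermic process were simply dissolving $NH_4Cl$ (enthalpy of solution roughly $+15\ \mathrm{kJ\,mol^{-1}}$, molar mass about $53.5\ \mathrm{g\,mol^{-1}}$), that would mean dissolving around $20\ \mathrm{mol}$, i.e. a bit over a kilogram of salt, plus the water to dissolve it in.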
One way to do it is, as user41631 says, with an endothermic reaction.
Another is to use phase-change materials, from which you remove the thermal insulation to allow the phase change to take place. Now, all materials change their phases at some combinations of temperature and pressure. The phrase "phase-change material" is used not to classify materials, but to identify the particular use of a material - that is, the material is being used because one of its phase changes happens at a temperature and pressure that makes it useful for a specific process.
Such materials are being looked at for the temperature regulation of buildings.
Let's take a material with a melting point of, say, 18°C, and a high latent heat of fusion. Freeze it, insulate it, and take it into a room that's at 22°C. While the material melts, it will absorb heat from the air, cooling the room.
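For a rough sense of quantity (again my own back-of-envelope figure, using typical rather than product-specific values): paraffin-type PCMs have latent heats of fusion of roughly $150$ to $250\ \mathrm{kJ/kg}$, so soaking up the ~300 kJ estimated above for 5 °C of air cooling would take only one to two kilograms of material, although a real room's walls and contents would demand considerably more.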
This principle has been applied in building material design, to provide smoothing of temperature variations. See, for example, Schossig et al.'s "Micro-encapsulated phase-change materials integrated into construction materials", [DOI], from which the following graph is taken, showing the smoothing of temperatures over a week in August when a phase-change material (PCM) is used (T_Wall_PCM) - this is compared to an identical room without phase-change materials (T_Wall_REF) - as you can see, the PCM reduces the amount of time that the room spends above 26°C, providing more thermal comfort for people in the room.
Salt + ice takes the temperature down from 0 °C ice to −19 °C salt water, and can be used to make more ice (or ice cream!). So you have an ice-generating engine powered by salt and water, and as much cooling as you might desire (down to −19 °C). You could retrieve the salt later by evaporation. This is obvious, but I’m not aware of any device that does this, so maybe there are obstacles in practice.
Now, I want to know how to get down to −80 °C so I can make dry ice.
• Maybe alcohol with salt? – Max Sep 4 '14 at 21:17
• I think that in practice you would need too much water and salt to make it worth the effort. Also dumping the mixture of melted ice (water) and salt somewhere to make room for more solid ice and salt would be problematic. – Mikko Rantalainen Nov 27 '17 at 9:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36368584632873535, "perplexity": 962.9262272145385}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703538431.77/warc/CC-MAIN-20210123191721-20210123221721-00489.warc.gz"} |
https://stats.stackexchange.com/questions/391707/what-is-the-gradient-of-the-objective-function-in-the-soft-actor-critic-paper | # What is the gradient of the objective function in the Soft Actor-Critic paper?
In the paper "Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor", they define the loss function for the policy network as
$$J_\pi(\phi)=\mathbb E_{s_t\sim \mathcal D}\left[D_{KL}\left(\pi_\phi(\cdot|s_t)\,\Big\Vert\, \frac{\exp(Q_\theta(s_t,\cdot))}{Z_\theta(s_t)}\right)\right]$$
Applying the reparameterization trick, let $$a_t=f_\phi(\epsilon_t;s_t)$$, then the objective could be rewritten as
$$J_\pi(\phi)=\mathbb E_{s_t\sim \mathcal D,\ \epsilon_t \sim\mathcal N}[\log \pi_\phi(f_\phi(\epsilon_t;s_t)|s_t)-Q_\theta(s_t,f_\phi(\epsilon_t;s_t))]$$
They compute the gradient of the above objective as follows
$$\nabla_\phi J_\pi(\phi)=\nabla_\phi\log\pi_\phi(a_t|s_t)+(\nabla_{a_t}\log\pi_\phi(a_t|s_t)-\nabla_{a_t}Q(s_t,a_t))\nabla_\phi f_\phi(\epsilon_t;s_t)$$
The thing that confuses me is the first term in the gradient: where does it come from? To the best of my knowledge, the second term (the large one) is already the gradient we need, so why do they add the first term?
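One way to see where a term like that can arise (a sketch of the chain-rule bookkeeping, not a quotation from the paper): $\pi_\phi$ depends on $\phi$ twice, once as the density being evaluated and once through the sampled action $a_t=f_\phi(\epsilon_t;s_t)$, so differentiating the first summand gives
$$\nabla_\phi\big[\log\pi_\phi(f_\phi(\epsilon_t;s_t)|s_t)\big]=\underbrace{\nabla_\phi\log\pi_\phi(a_t|s_t)}_{a_t\text{ held fixed}}+\nabla_{a_t}\log\pi_\phi(a_t|s_t)\,\nabla_\phi f_\phi(\epsilon_t;s_t),$$
while the $Q_\theta$ term contributes only through $a_t$.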
• I just wanted to confirm that reinforcement learning is well within the scope of our site, as well as that of ai.stackexchange.com; despite the impression that may have been given by a now-deleted comment. (I've undeleted this question even though you've asked it here.) – Scortchi - Reinstate Monica Feb 13 at 14:45
• @Brale_ has answered this question here. – Maybe Mar 17 at 13:12
• Thanks for the update. – Scortchi - Reinstate Monica Mar 17 at 20:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 4, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7562390565872192, "perplexity": 598.411515335515}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670006.89/warc/CC-MAIN-20191119042928-20191119070928-00406.warc.gz"} |
https://books.google.co.nz/books?qtid=f27e7864&dq=editions:UOM39015065320957&id=kolUz5MZlVMC&output=html&lr=&sa=N&start=90
Multiply the numerators together for a new numerator, and the denominators together for a new denominator.
The Youth's Assistant in Theoretic and Practical Arithmetic: Designed for ... - Page 93
by Zadock Thompson - 1838 - 164 pages
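As a quick worked illustration of the rule quoted above (my own example, not taken from any of the books listed): $\frac{2}{3}\times\frac{4}{5}=\frac{2\times 4}{3\times 5}=\frac{8}{15}$.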
## The Crittendon Commercial Arithmetic and Business Manual: Designed for the ...
John Groesbeck - Business mathematics - 1873 - 348 pages
...8. 12672 " " 1. 237600—product by 18|. 24. To multiply a fraction by a fraction. RULE.—Multiply the numerators together for a new numerator, and the denominators together for a new denominator. 1. Multiply | by f 2X4 = Ans. 2. Multiply i by T <y. 3. " if" j. 25. To divide by a fraction. 4. «...
## Howard's California Calculator ...
C. Frusher Howard - Ready-reckoners - 1874 - 94 pages
...4J. 3. From 81 take 3J. Ans. 5£. 4. From 18| take 3^. Ans. 15T52-. MULTIPLICATION OF FRACTIONS. • RULE. — Multiply the numerators together for a new...and the denominators together for a new denominator. EXAMPLE. — Multiply £ by f. 8" X e 47) "2"o"' General rule for multiplying fractions and raixed...
## The High School Arithmetic: Containig All the Matter Usually Presented in a ...
Philotus Dean - Arithmetic - 1874 - 454 pages
...either 2 times _ }, making f, or 2 times fa, making T6;, which is, in its \$ X 5 5 lowest terms, f . Rule. — Multiply the numerators together for a new...and the denominators together for a new denominator. NOTE. — Abbreviate by canceling, when possible. EXAMPLES FOR PRACTICE. Multiply 2. | by f f by f...
## The boys' algebra
James Cahill (of Dublin.) - Algebra - 1875 - 209 pages
...• Simphfy ^гд— .-зцд 30. Simplify ^-^-¿ï+ PROBLEM IX. 74. To multiply fractions together. Rule — Multiply the numerators together for a new...and the denominators together for a new denominator. NOTE 1. — Before multiplying reduce an integral or mixed quantity, If there besuch, to a fractional...
## Felter's New Intermediate Arithmetic: Containing Oral and Written Problems ...
Stoddard A. Felter, Samuel Ashbel Farrand - Arithmetic - 1875 - 258 pages
...parts, and each part is -llj. IIIIIIT 1 I. ' ii I 2. Since * of £ is iV- i of f = ,V f of f = ^=i. 3. RULE. — Multiply the numerators together for a new numerator, and the denominators for a new denominator. KOTE. — Mixed numbers may be changed to improper fractions when more convenient,...
## Manual of Algebra
William Guy Peck - Algebra - 1875 - 331 pages
...(prinfLC ciple 2°); this gives for the product, ^; that is, ac ac bxd " M' Hence, we have the following RULE. Multiply the numerators together, for a new numerator, and the denominators for a new denominator. EXAMPLES. 1. Multiply |, by g. The product of the numerators is 21aae, and of...
## New Elementary Arithmetic: Embracing Mental and Written Exercises, for Beginners
Henry Bartlett Maglathlin - Arithmetic - 1875 - 208 pages
...fraction of a fraction called? What is the multiplying of a fraction by a fraction equivalent to ? Rule. — Multiply the numerators together for a new numerator, and the denominators for a new denominator. Examples. Multiply 2. £by|. Ans. TV 3. JTbyi- Ans. TV 4. § by f . Ans. §....
## Examination Christmas,1875
...explaining clearly what is meant by Aliquot parts. 3. The product of two fractions is found by multiplying the numerators together for a new numerator, and the denominators together for a new denominator. Prove the truth of this statement. 2. The terms ''ratio", ''proportion", and the difference between... | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9045443534851074, "perplexity": 6450.0396319782785}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103355949.26/warc/CC-MAIN-20220628050721-20220628080721-00410.warc.gz"} |
http://lambda-the-ultimate.org/node/2622 | The Design and Implementation of Typed Scheme
Tobin-Hochstadt, Felleisen. The Design and Implementation of Typed Scheme. POPL 2008.
When scripts in untyped languages grow into large programs, maintaining them becomes difficult. A lack of types in typical scripting languages means that programmers must (re)discover critical pieces of design information every time they wish to change a program. This analysis step both slows down the maintenance process and may even introduce mistakes due to the violation of undiscovered invariants. This paper presents Typed Scheme, an explicitly typed extension of an untyped scripting language. Its type system is based on the novel notion of occurrence typing, which we formalize and mechanically prove sound. The implementation of Typed Scheme additionally borrows elements from a range of approaches, including recursive types, true unions and subtyping, plus polymorphism combined with a modicum of local inference. Initial experiments with the implementation suggest that Typed Scheme naturally accommodates the programming style of the underlying scripting language, at least for the first few thousand lines of ported code.
The key feature of occurrence typing is the ability of the type system to assign distinct types to distinct occurrences of a variable based on control flow criteria. You can think of it as a generalization of the way pattern matching on algebraic datatypes works.
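To make that concrete (an illustration in the spirit of the paper, not an example quoted from it): if x is known only to be a number-or-string value, then in (if (number? x) (add1 x) (string-length x)) the occurrence of x in the then-branch is typed as a number and the occurrence in the else-branch as a string, so the whole expression typechecks; the same mechanism is what gives filter its interesting type in the comment thread below.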
Go to sections 4.4 and 7 to whet your appetite...
Where are all the editors? LtU needs your TLC...
Type inference (pragmatic)?
The paper describes some places where type inference can happen and has been implemented, but it sounds pretty different from, e.g., the Haskell or ML worlds in how often one has to annotate manually. Has anybody used Typed Scheme enough to give a subjective opinion of how it feels to use, i.e. how much finger typing the type annotations actually require?
It reminded me (based on
It reminded me (based on reading the paper) more of the type of things done in C# than type inference in the HM world.
Exactly
This is definitely the idea. There's currently two main places where types are inferred: local variable bindings, and polymorphic function applications. This is definitely the majority of the annotation 'burden', in my opinion. Annotations on top level definitions are useful to have, independent of type system.
There are plans for somewhat more inference, along the lines of Pierce and Turner's Local Type Inference, but that has not yet been implemented.
Thanks for the interest in the paper, by the way. If people want to try out Typed Scheme, they can find it here.
As James Iry noted, one can implement Scheme in Haskell, and this suggests a rather obvious way of implementing types in a Scheme variant:
Compiling a 'Typed' scheme to Haskell with error messages etc. suitably translated to make sense in the 'typed scheme' language.
Obviously this would have Haskell-type types rather than the ones in the paper discussed here..
Has anyone tried this? I'm sure there are plenty of issues with making this a 'practical' implementation, but it seems like it would be a very interesting experiment.
At least it might be useful for parsing scheme, looking for latent type errors and unboxing optimizations.
Perhaps this could tie in with the (rather different) work on making scheme-like version of the ACL2 language for proving theorems about code.
Liskell
A Lispy Haskell has been discussed here before. It's definitely not Scheme, though.
Scheme in liskell
Thanks for pointing that out. There is a website now, liskell.org.
Although the website makes the project look rather inactive, it does mention "A top-level-less Scheme compiler" as one of its "technology demos" (here).
The implementation looks rather short, though.
No types for the wicked
At least it might be useful for parsing scheme, looking for latent type errors and unboxing optimizations.
Not really. Without its own type analysis, all that such a translation can handle is the subset of Scheme programs that can have their types fully statically inferred when translated to Haskell. Any resulting errors are errors with respect to the Haskell type system and its limitations when it comes to dealing with unitypes. Most such errors wouldn't be actual errors in the original Scheme program.
That's not to say that such a translation might not be useful for other purposes, but it's not useful for checking types in ordinary Scheme code.
See Fritz Henglein's work on this
Fritz Henglein, in the 90's, wrote a system for converting Scheme programs to ML programs and minimizing the resulting tag checks. You can find the papers here:
The difference between what you suggest and Typed Scheme is, of course, that different programs typecheck. For example, there's no subtyping in Haskell, so you'd have to add lots of injections and projections (that's what Henglein wrote about). Also, this program in Scheme:
(map add1 (filter number? l))
where l is some list of arbitrary elements, is not going to typecheck in Haskell, whereas it does in Typed Scheme.
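For contrast, here is roughly what that computation looks like in Haskell once the heterogeneous list is encoded as an explicit sum (the type and the names are invented here purely for illustration):

data Val = N Int | S String      -- one possible encoding of the mixed list

numbers :: [Val] -> [Int]        -- the "filter number?" step, with the projection built in
numbers vs = [ n | N n <- vs ]

bumped :: [Val] -> [Int]
bumped = map (+ 1) . numbers

The explicit projection out of Val is exactly the boilerplate that occurrence typing lets the Scheme one-liner avoid.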
That's the coolest part of this work, ain't it?
It's not!
"Gradual typing with unification-based inference" is a purely theoretical investigation of type inference in a world of partial type annotations. It is my impression that the paper assumes a type algebra similar to that of ML or Haskell -- i.e., an algebra over arrows and the usual constructors -- and that kind of type theory can't deal with the example that Sam put up. [Please prove me wrong if this is incorrect.]
To assign types to this kind of expression (without casts) you need something like dependent types or our notion of _occurrence typing_. It is an explicitly static discipline that relies on annotations at the moment. Having said that, our small experience with a few thousand lines of "real" programming suggests that this notion is not only theoretically sound but highly practical.
Research on inference welcome. User reports on Typed Scheme even more welcome.
Sorry...
...I should have been more careful: I only meant "related to" in the very loose sense that both approaches attempt to resolve the tension between type inference and fully-explicit type declarations in an unobtrusive fashion. No doubt there are important theoretical and pragmatic differences between them that I will have to learn a great deal more about. :-)
I intend to check out Typed
I intend to check out Typed Scheme (if you haven't read Anton's comments here, you should; he makes PLT Scheme seem very attractive).
What is your opinion on the Typed Scheme type system vs a type system similar to that of ML or Haskell? Is the lack of parametricity solvable?
Parametricity in Typed Scheme
You can program in Typed Scheme as if it were ML, and then use other features when you want to. The major drawback compared to ML in that situation is that the type inference isn't as complete (and can't be). I don't typically find this to be a significant issue - I would prefer if more ML programs made somewhat less use of type inference. As for Haskell, the type systems are quite different - I'm not sure comparing them is terribly useful.
As for parametricity, I'm not sure whether you're asking about polymorphism in general, which Typed Scheme has, or the fact that polymorphic functions in Typed Scheme can behave differently for different argument types. I don't think this is a problem - the only loss I know of is Wadler's free theorems, which I don't ever think about anyway.
Thanks for your interest in Typed Scheme.
Argument type dependent functions
polymorphic functions in Typed Scheme can behave differently for different argument types
Interesting that you should phrase it like that. Indeed, the limitation of H-M types that you seem to be implying here seems to be one of those things that was, and has been, for a long time believed to be true (at least in the practical sense). Nevertheless, it is a false belief. It is perfectly possible to write functions in a H-M typed language whose types (and behavior) depend on the types of their arguments. Just a couple of days ago I wrote a page about one (more) technique, dubbed static sums, that allows such functions to be written.
Edit: clarified
Are we really talking about H-M?
I can't help but feel that if you really intend "H-M typed" to prohibit extensions to H-M then making essential use of the module system is seriously cheating. Or have I missed something?
That said, this also suggests that we rarely care about what can be done with H-M alone.
Use of module system
If you mean the stuff on the static sum page, then I assure you that any use of the module system there is not essential to the technique:
Standard ML of New Jersey v110.67 [built: Fri Jan 11 21:41:45 2008]
- fun inL x (f, _) = f x ;
val inL = fn : 'a -> ('a -> 'b) * 'c -> 'b
- fun inR x (_, g) = g x ;
val inR = fn : 'a -> 'b * ('a -> 'c) -> 'c
- fun match x = x ;
val match = fn : 'a -> 'a
- fun succ s = match s (fn i => i+1, fn r => r+1.0) ;
val succ = fn : ((int -> int) * (real -> real) -> 'a) -> 'a
- succ (inL 1) ;
val it = 2 : int
- succ (inR 2.0) ;
val it = 3.0 : real
I don't see how this is
I don't see how this is dependent on the types rather than which of inL or inR is used - it's just an encoding of what Haskellers know as Either Int Real, no?
Nope
I don't see how this is dependent on the types rather than which of inL or inR is used - it's just an encoding of what Haskellers know as Either Int Real, no?
No it isn't. Please carefully read the page I referred to. If you do that, you should hopefully understand the difference. The technique works at least as well in Haskell.
In case you missed it on the first reading, note that the ordinary/generic sum type described (first to motivate the discussion) on the page is isomorphic to Haskell's Either type. Also note that inL x and inR x, regardless of (the type of) x do not necessarily have the same type (usually they have different types).
Added: I should also point out that, of course, inL x and inR x also evaluate to different values. Two expressions that have different (principal) types generally produce different values (although it is possible to give multiple different types to some expressions).
Added: Here is a ghci interaction that demonstrates the difference between ordinary sum types and the static sum:
Prelude> let succ x = case x of {Left i -> i + 1 :: Int ; Right r -> r + 1.0 :: Double}
<interactive>:1:60:
Couldn't match expected type `Int' against inferred type `Double'
In the expression: r + 1.0 :: Double
In a case alternative: Right r -> r + 1.0 :: Double
In the expression:
case x of
Left i -> i + 1 :: Int
Right r -> r + 1.0 :: Double
Prelude> let inL x (f, g) = f x
Prelude> let inR x (f, g) = g x
Prelude> let match x = x
Prelude> let succ x = match x (\x -> x + 1 :: Int, \x -> x + 1.0 :: Double)
Prelude> succ (inL 1)
2
Prelude> succ (inR 1)
2.0
BTW, you could (and should) assign more general types to inL, inR, and match in Haskell.
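For reference, the most general types GHC infers for the unannotated definitions above are the following (this is my reading of "more general"; the remark may also have higher-rank signatures in mind):

inL   :: a -> (a -> r, b) -> r
inR   :: b -> (a, b -> r) -> r
match :: a -> a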
Okay, re-reading it appears to be the standard encoding but with all types left to the typechecker - the same terms you'd use to encode sums in the untyped lambda calculus, with their principal types. This nets you a spare type variable which you can attempt to set a 'return type' with, allowing you to additionally type all cases where you already know which branch you're taking (and deduce which they are via type inference, which is perhaps slightly more useful). Cute, but still doesn't actually let you do anything new with the type info - it just avoids overconstraining in a few scenarios.
New and old terms
Okay, re-reading it appears to be the standard encoding but with all types left to the typechecker - the same terms you'd use to encode sums in the untyped lambda calculus, with their principal types.
Pedantically speaking, that isn't precisely true. The signature given there restrict the types. IOW, the types aren't just left to type inference. One of the restrictions is to ensure that the second argument to match needs to be a pair of functions (and, not, for example, a pair of a function and a list).
Cute, but still doesn't actually let you do anything new with the type info - it just avoids overconstraining in a few scenarios.
Well, the act of discovering a programming idiom obviously doesn't change a type system. You could have written the same functions back in the seventies and have them type checked. So, it isn't, and could never be, "new" in the sense that it would change set of terms that can be typed in a language.
Let me make it clear why the technique is not an encoding of the Either type. The reason is that the Either type can only encode a strict subset of static sum types. For every Either type, there is a corresponding static sum type, but not vice versa. Either types correspond to static sum types that unify with ('dL, 'c, 'dR, 'c, 'c) StaticSum.t. But, as can be seen from the succ example, there are static sum types that do not unify with such a type. So, actually, given a term, new, written in terms of inL, inR, and match, it is not necessarily the case that replacing them with Left, Right, and case, respectively, you would still be left with a term, old, that can be typed.
Just to clarify further, the trick is that (except when unified to be the same) the type of a static sum indicates which variant it is. IOW, like I said earlier, the type of static sum is generally different depending on whether it was constructed with inL or inR. This allows you to write functions, in a H-M type system, whose results depend on the types of their arguments, which was my point. And, of course, those functions are polymorphic. Look at the type of succ. So, spelling it out in full: polymorphic functions in H-M typed languages can behave differently for different argument types.
Not type dependent
As Philippa says, this is the standard Church encoding as far as values are concerned but with first order types. Usually typed Church encodings require rank-2 types and there is a reason for that.
The type of the constructors of the Either data type are:
Left :: a -> Either a b
Right :: b -> Either a b
The usual typed Church encoding of the Either data type is:
left :: a -> (a -> r) -> (b -> r) -> r
left x l r = l x
right :: b -> (a -> r) -> (b -> r) -> r
right x l r = r x
This is not as polymorphic as these could be; however, let's say we want to make a list of Either Int Double, e.g.
[Left 3, Right 4.5] :: [Either Int Double]
We can't quite do them with the above, and we can't do them at all with Vesa's inL and inR without losing their "benefit". If Left :: a -> Either a b and left :: a -> (a -> r) -> (b -> r) -> r, Either a b must correspond to
type Either a b = forall r.(a -> r) -> (b -> r) -> r
with this, [Either Int Double] corresponds to the rank-2 type [forall r.(Int -> r) -> (Double -> r) -> r]. So, to use the "static sums" as first-class sums, you need to lose their "benefit". Uses are presumably in approaches similar to Danvy's approach to functional unparsing.
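Concretely, regaining a first-class sum from this encoding in Haskell means wrapping the rank-2 type in a newtype, along these lines (a sketch; the names here are mine):

{-# LANGUAGE RankNTypes #-}

newtype EitherC a b = EitherC (forall r. (a -> r) -> (b -> r) -> r)

leftC :: a -> EitherC a b
leftC x = EitherC (\l _ -> l x)

rightC :: b -> EitherC a b
rightC y = EitherC (\_ r -> r y)

xs :: [EitherC Int Double]
xs = [leftC 3, rightC 4.5]

At that point the values can be put in a list again, but the extra per-constructor type information of the "static sum" version is gone, which is exactly the "losing their benefit" trade-off described above.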
So, spelling it out in full: polymorphic functions in H-M typed languages can behave differently for different argument types.
No, there is no "behaviour depending on the type" (there's certainly types "depending on" types, but that's trivial). The choice is driven (just like for the normal Either data type) by the value constructors, in this case, inL and inR. As far as I can see, to actually get different behaviour based on type you need to support some kind of ad-hoc polymorphism such as Haskell type classes, intensional type analysis or C++ style overloading. The values depend on types (that's parametric polymorphism) but only in a trivial way; namely that we treat the type as a black box, we don't dispatch on it.
Encoding
Vesa's inL and inR
To set the credits straight, the formulation of the idiom as a kind sum was due to Stephen Weeks (more than a year ago). What I've written here and on the static sum page is based on my own interpretations, of course.
Uses are presumably in approaches similar to Danvy's approach to functional unparsing.
Yes. It was discovered while working on the Fold technique.
No, there is no "behaviour depending on the type"
Forget for a moment that you know how static sums are implemented internally. Treat it as an abstract type with two constructor functions, inL and inR, and one one deconstructor, match, whose types are given by the following signature:
signature STATIC_SUM = sig
type ('a, 'b, 'c, 'd, 'e) t
val inL : 'a -> ('a, 'd, 'b, 'c, 'd) t
val inR : 'c -> ('a, 'b, 'c, 'd, 'd) t
val match : ('a, 'b, 'c, 'd, 'e) t -> ('a -> 'b) * ('c -> 'd) -> 'e
end
Consider the following example:
- type u = unit ;
type u = unit
- type p = unit * unit ;
type p = unit * unit
- fun adHocery s = StaticSum.match s (fn () => ((), ()), fn ((), ()) => ()) ;
val adHocery = fn : (u,p,p,u,'a) StaticSum.t -> 'a
- val u = StaticSum.inL () : (u,p,p,u,p) StaticSum.t ;
val u = fn : (u,p,p,u,p) StaticSum.t
- val p = StaticSum.inR ((),()) : (u,p,p,u,u) StaticSum.t ;
val p = fn : (u,p,p,u,u) StaticSum.t
- adHocery u ;
val it = ((),()) : p
- adHocery p ;
val it = () : u
As you can see, the types of u and p are different and the adHocery function behaves differently depending on which one it is given. Also, u and p are the only observably different values accepted by the adHocery function. It is impossible to construct any other observably different values accepted by the adHocery function. Essentially, it has been arranged so that there is a one-to-one correspondence between the type of an argument accepted by the adHocery function and the behavior of the adHocery function. So, tracing the dependencies, we get that the value returned by (or the behavior of) the adHocery function depends on the value of the argument, which depends on the type of the argument. Hence, transitively, the behavior of the adHocery function depends on the type of its argument.
Well, of course, this is a logical illusion of sorts. Specifically, the fact that there is a one-to-one correspondence between the type and the value of an argument accepted by the adHocery function, doesn't mean that the dynamic semantics would actually be selected based on the type. However, when reasoning about the adHocery function, it doesn't matter. There just isn't any way to observe anything that would contradict the view that the behavior of the adHocery function depends on the type of its argument or even that the behavior would be selected based on the type.
As far as I can see, to actually get different behaviour based on type you need to support some kind of ad-hoc polymorphism such as Haskell type classes, intensional type analysis or C++ style overloading.
Well, then you'll just need to learn to see farther. Continuing the previous argument, let's play out another thought experiment. Suppose that we have an ML-style language (with first-class polymorphism to make this practical) that has been defined so that integers (literals) have the type
forall ('a, 'b, 'c).(int, 'a, 'b, 'c, 'a) StaticSum.t
and reals have the type
forall ('a, 'b, 'c).('a, 'b, real, 'c, 'c) StaticSum.t
Now, the function plus
fun plus (a, b) =
(match b o match a)
(fn a => (fn b => inL (Int.+ (a, b)),
fn b => inR (Real.+ (real a, b))),
fn a => (fn b => inR (Real.+ (a, real b)),
fn b => inR (Real.+ (a, b))))
computes the sum of two numbers, integers or reals. Furthermore, it returs a real if at least one of the arguments is a real. Otherwise it returns an integer.
Sidenote: As written above, the plus function compiles even in SML'97 (without first-class polymorphism), but to make the implied usage (all arithmetic based on similarly defined functions) practical, first-class polymorphism is a must. Also, the idea is that the types int and real and the modules Int and Real refer to primitives that wouldn't actually (need to) be exposed to a user of the language.
Now, again, it has been arranged so that the behavior of the plus function is different depending on the types of its arguments. There is just no way to observe anything that would contradict with that view. So, reasoning about the behavior of plus, one can treat it as if the behavior would depend on the types of its arguments.
This is the kind of reasoning that is done in mathematics all the time. You first show that there is a suitable morphism between two kinds of things As and Bs. Then you can treat As as if they were Bs. In this particular case, we have functions whose behavior depends on the values of their arguments. But the sets of argument values that invoke the different behavior also have disjoint sets of types. So, one can treat the functions as if their behavior would depend on the types of their arguments. What part of this kind of reasoning do you disagree with?
What this means is that static sums allow you to encode type dependent functions. The encoding can be faithful in the sense that the encoded terms using static sums type check precisely when the corresponding terms written in the type dependent language, whose terms are being encoded, would type check. Furthermore, that can be done even in a basic H-M type system. To prove this, one needs to device a simple language with type dependent functions, specify an encoding of the terms of that language using static sums, and then prove the usual correspondences between the original type dependent terms and their encodings. Of course, like with many encodings, a whole-program (or whole-term) transformation is needed in the general case. And, of course, in a more expressive type system than H-M, it may be possible to encode more type dependent functions (terms of a more expressive type dependent language) than in the H-M system.
The plus function reminds me
The plus function reminds me very much of GADT-based type classes, under the "case coalescing" section. Is this static sum a limited form of ML GADT?
Types
Forget for a moment that you know how static sums are implemented internally.
I never mentioned how they were implemented internally. How they are implemented is completely irrelevant.
As you can see, the types of u and p are different and the adHocery function behaves differently depending on which one it is given.
They are also different values. Different values tend to behave differently.
Well, of course, this is a logical illusion of sorts. Specifically, the fact that there is a one-to-one correspondence between the type and the value of an argument accepted by the adHocery function, doesn't mean that the dynamic semantics would actually be selected based on the type.
Indeed.
And if the types happen to be the same, it's clear that the types cannot possibly be the discriminator of the behaviour.
Other than that, the types in the static sums restrict the possible values, but that's just what types do anywhere.
On a different note: what is a "type dependent language"? It sounds vaguely like you want a dependently typed language, but that's types depending on values.
Functions not arguments
And if the types happen to be the same, it's clear that the types cannot possibly be the discriminator of the behaviour.
Yes, of course, but that is not the point. The functions are type dependent --- not the arguments. The behavior of the kind of functions that I'm talking about here is different depending on the types of its arguments. Conversely, a function whose behavior does not correspond to (or depend on) the types of its arguments is not type dependent. And, yes, of course, you can write functions whose arguments are static sums, but whose behavior does not depend on the types of the arguments.
It sounds vaguely like you want a dependently typed language, but that's types depending on values.
From a practical point of view, I think that the technique is suitable for encoding some functions that one might want to write in a dependently typed programming language. Instead of having types depend on values, one changes the types of (argument) values (via tagging them with static sums in this case) and then writes type dependent functions. Compared to some other techniques, static sums may perhaps be more straightforward (idiomatic) to use for some such encodings. With static sums it is almost as if you could do a switch case on types (typecase).
Related to something we used
In the story Tagless Staged Interpreters for Simpler Typed Languages, that is one of the techniques that was key to our work. Instead of directly using rank-2 types, we used the module system of ML and the class system of Haskell for the same purpose, as that's all that was needed. And that is also how we get around the problem of parametricity: we don't make the r type abstract, we merely delay its details. There is a huge difference!
It does seem that using the Church encoding (which is really a final algebra encoding) instead of the usual initial algebra semantics does buy you something non-trivial.
Parametricity
I consider losing parametricity (which is what the loss of the free theorems is equivalent to, though admittedly there are degrees of it) a bad thing. However, as I have not read the paper yet, I'm not in a position to comment on how bad the problem would be in Typed Scheme.
Type Inference in a different Type Algebra
Practical Type Inference Based on Success Typings is a paper about inferring type signatures for Erlang functions. These signatures differ from ordinary types in the following sense: the signature a -> b means that if the function is called with an argument of type a, it might crash or return a value of type b, and if it is called with anything else, it will definitely crash. This means that the signature any -> any is a correct signature for every unary function, so the inference problem is to infer types that are as precise as possible.
This approach can be used together with the approach of Typed Scheme to infer some type information for modules which cannot be type checked, since they rely on invariants which cannot be expressed in the type system.
That is incorrect
In the gradual type system, we could have the following type assignment:
l : ? list
number? : ? -> bool
filter : ('a -> bool) -> 'a list -> 'a list
map : ('a -> 'b) -> 'a list -> 'b list
Our paper didn't describe 'list' types, but there are two obvious choices: 1) allow variance and do run-time checking on list element access, or 2) don't allow variance. In our Scheme Workshop paper we described references (i.e., one element lists) and went with invariance. It looks like with inference, invariance is still a better choice. Let's consider the example:
With choice 1) filter instantiates to (int -> bool) -> int list -> int list and map instantiates to (int -> int) -> int list -> int list and a cast is inserted to coerce the ? list into the int list expected by filter. But this is bad because a cast error occurs on the first non-integer in the heterogeneous list.
With choice 2) filter instantiates to (? -> bool) -> ? list -> ? list and map instantiates to (? -> int) -> ? list -> int list. A cast is inserted to coerce the add1 function from int -> int to ? -> int. So this choice works.
That being said, occurrence typing is a good idea and I look forward to investigating how it can be combined with gradual typing. With occurrence typing I assume you'd get a more accurate type for the result of filter, an int list instead of ? list.
As for the paper being "purely theoretical", it certainly contains a fair amount of theory. This is necessary to make sure that the ideas are correct. However, it is not "purely theoretical" because our intention is for this to be a step towards a type system for a mainstream language. I don't know how long it will take to get there, but I've just hired some great graduate students to work on this.
More details
Several points on this issue:
1. Maybe I'm confused, but it sounds like you said first that invariance is the better choice, but also that it leads to a runtime error in this example. Is that right?
2. Is the need for invariance or runtime checks inherent in the gradual typing system? Typed Scheme has covariant lists, without any extra checks, and I think they're essential for typing real Scheme programs.
3. The interesting nature of the example I posted is that Typed Scheme can check it without new runtime checks. If we extend the domain of add1 to include arbitrary values, then the ML type system can check the example. What your type system is doing is automatically writing the needed wrapper for the ML type system to work here. Typed Scheme is doing something very different - the type of the input list to filter is different from the return type, and therefore no new runtime checks are needed.
Here's the type of filter in Typed Scheme:
(A -> Bool : B) (Listof A) -> (Listof B)
That is, it takes a function from A -> Bool that checks for type B, and it produces a list of type B. Note that A and B don't have to be the same. This is what gives Typed Scheme the power demonstrated in this example.
4. As for theory vs practice, we've also done theoretical work on Typed Scheme, but we've also already done lots of practical investigation, whereas it doesn't sound like that work has been done yet for gradual typing. I'm glad you plan to do that in the future.
That's backwards
1. Invariance does not lead to run-time errors.
2. If Typed Scheme has covariant lists, then either it doesn't allow set-car! (the list is immutable), or it does runtime checking, or it is unsound. I was assuming mutable lists in the above. For immutable lists, covariance could likewise be allowed in the gradual system without checks. This is standard type theory.
3. Yes, that repeats my point about the return type of filter. Thanks for the further details.
4. When the interlanguage paper was published but before the implementation of Typed Scheme was finished, I didn't say that line work was "purely theoretical". A similar courtesy would be appreciated. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6440986394882202, "perplexity": 1220.5382795977744}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698543782.28/warc/CC-MAIN-20161202170903-00250-ip-10-31-129-80.ec2.internal.warc.gz"} |
https://www.caltech.edu/campus-life-events/calendar/math-graduate-student-seminar-19 | Thursday, February 14, 2019
12:00pm to 1:00pm
Linde Hall 255
Small Gaps Between Primes
Alex Perozim De Faveri, Department of Mathematics, Caltech,
One of the most famous open problems in number theory is the twin prime conjecture, which says that there are infinitely many prime numbers at distance two. I will introduce some of the tools used to deal with this problem, such as the Selberg sieve and the Bombieri-Vinogradov theorem, and outline the new ideas that led to the breakthroughs of Zhang and Maynard in 2013, who independently proved that the gap between primes is bounded infinitely often. If time permits, I will touch on the known obstructions towards twin primes, such as the parity problem in sieve theory. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9078130125999451, "perplexity": 353.4337342434424}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571234.82/warc/CC-MAIN-20220811042804-20220811072804-00187.warc.gz"} |
https://openmdao.org/twodocs/versions/latest/_srcdocs/packages/surrogate_models/nearest_neighbor.html | # nearest_neighbor.py¶
Surrogate model based on the N-Dimensional Interpolation library by Stephen Marone.
https://github.com/SMarone/NDInterp
class openmdao.surrogate_models.nearest_neighbor.NearestNeighbor(**kwargs)[source]
Surrogate model that approximates values using a nearest neighbor approximation.
Attributes
interpolant (object) : Interpolator object.
interpolant_init_args (dict) : Input keyword arguments for the interpolator.
__init__(**kwargs)[source]
Initialize all attributes.
Parameters
**kwargs : dict
options dictionary.
linearize(x, **kwargs)[source]
Calculate the jacobian of the interpolant at the requested point.
Parameters
x : array-like
Point at which the surrogate Jacobian is evaluated.
**kwargs : dict
Additional keyword arguments passed to the interpolant.
Returns
ndarray
Jacobian of surrogate output wrt inputs.
predict(x, **kwargs)[source]
Calculate a predicted value of the response based on the current trained model.
Parameters
x : array-like
Point(s) at which the surrogate is evaluated.
**kwargs : dict
Additional keyword arguments passed to the interpolant.
Returns
float
Predicted value.
train(x, y)[source]
Train the surrogate model with the given set of inputs and outputs.
Parameters
x : array-like
Training input locations
y : array-like
Model responses at given inputs.
vectorized_predict(x)
Calculate predicted values of the response based on the current trained model.
Parameters
x : array-like
Vectorized point(s) at which the surrogate is evaluated. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18635110557079315, "perplexity": 6683.098232609214}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141711306.69/warc/CC-MAIN-20201202144450-20201202174450-00705.warc.gz"} |
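To make the train/predict/linearize flow above concrete, here is a minimal usage sketch (not from the original documentation; it assumes the default interpolant options are acceptable and uses the module path shown above):

```python
import numpy as np
from openmdao.surrogate_models.nearest_neighbor import NearestNeighbor

# Sample a 1-D function at a handful of training locations.
x_train = np.linspace(0.0, 1.0, 10).reshape(-1, 1)
y_train = np.sin(2.0 * np.pi * x_train)

surrogate = NearestNeighbor()       # interpolator options would go here as keyword arguments
surrogate.train(x_train, y_train)   # train with input locations and model responses

x_new = np.array([[0.37]])
print(surrogate.predict(x_new))     # predicted response at the new point
print(surrogate.linearize(x_new))   # Jacobian of the surrogate output wrt inputs
```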
https://puzzling.stackexchange.com/questions?tab=newest&page=352 | # All Questions
19,050 questions
595 views
### How Many Coins Were There
There were three men in a boat. They received an amount of coins in a chest from 200 - 300 as a reward for surviving the storm without damaging anything. One night, the first man thought that he would ...
865 views
### Color-Coded Bridges
Use the given clues to solve the puzzle below. Note: an image editing tool is recommended.
1k views
Here are a few one-liner rebus puzzles for you to solve. T_ _ E O _ E R _ T _ O _ swear bible bible bible bible hoppin injury + insult COLT jr. cut cut cut cut cut cut cut cut cut GIFIREN VA DERS ...
270 views
### Eight remarkable places
Something very remarkable connects the following eight geographic places. What is it? Bodaybo (Russia) Cape Horn Christchurch (New Zealand) Cocos Islands Ghanzi (Botswana) Honolulu Punta Peclas (...
555 views
### No food in the fridge
What eight letters did the little boy say when, whilst looking for an afternoon snack, he realized that there was no food in the fridge? Hint:
399 views
### Jewels in the box
Professor Egghead participates today in a local game show. There are six jewels (a diamond, an emerald, a moonstone, a ruby, a sapphire, and a topaz), and there are six boxes that carry the numbers 1,...
861 views
### Two rebus puzzles, each with a sun
These two rebus puzzles both have a sun in it. Puzzle 1: Puzzle 2: What do they mean?
619 views
### Pythagorean coins
To make payments, the Pythagoreans use coins in no more than three denominations. The three denominations are in whole Oboloi amounts, and the sum of the squares of the two smaller denominations ...
146 views
### Inter-city bus services
The ministry of transportation of Lampukistan just made the following announcement: From Jul 1st, the country-wide inter-city bus service will be organized by the ministry. The service will be ...
3k views
### Invert three inputs with two NOT gates
You've just been hired by Widget & Co. to prototype a new electrical circuit for their line of impenetrable puzzle locks. As part of this circuit, your boss asks you to invert three boolean inputs....
375 views
### Probability in chess tournament
In a 4 round chess tournament with 16 players (where the loser of each two player match is eliminated and the winner moves on to the next round), the pairings for all matches are decided randomly. The ...
2k views
### Sometimes I am born in silence
Sometimes I am born in silence, Other times, no. I am unseen, But I make my presence known. In time, I fade without a trace. I harm no one, but I am unpopular with all. What am I?
604 views
### I grow as others die
Although I am dead, I grow as others die. Acids hate me. My home is often a tray. Sometimes I fall before I can get home. The answer must meet all conditions.
3k views
### Display a number using a scientific calculator with most keys are stuck
Your have a scientific calculator such that most of the keys are unable to be pressed. The only keys that work are those for the functions x^2 \;\; \sqrt{x} \;\; x!\;\; \exp\;\; \ln\;...
1k views
### Polyomino Z pentomino and rectangle packing into rectangle
See my similar question about T hexomino (Polyomino T hexomino and rectangle packing into rectangle) This is exactly same but with other polyomino - Z pentomino. Let's pack some (one or more) Z ...
4k views
### What are the numbers?
Three perfect mathematicians with extremely strong memories are taking an exam. The examiner tells each of them a certain piece of information about $x$ and $y$, which two positive integers between ...
804 views
### Shortest Number Containing the Numbers 1-100?
Exactly what the title says: What is the shortest (fewest digits) number you can find such that somewhere in it, you can find each number 1- 100? My thinking for this puzzle is you have an electric ...
380 views
### Distinguished country pairs
I guess that everybody here at puzzling.stackexchange is aware of the fact that Uzbekhistan and Liechtenstein are the only two countries in the world that are doubly landlocked and hence share a very ...
359 views
### Dr. Strangelove or How I Learned to Stop Worrying and Love the Bomb
Can you recover the following two dozens of (well-known) movie titles just from their initials? ...
306 views
### Rebus puzzle (62ND S MAKE A single)
Here is a rather simple rebus puzzle: 62ND S MAKE A single
1k views
### What does the “gift” and “101010” in this rebus puzzle mean?
What does the image in this rebus puzzle mean?
755 views
### Corny - rebus puzzle
This is yet another rebus puzzle What does it mean?
984 views
### Mirrored - Rebus puzzle
Here is another rebus puzzle. What does it signify?
229 views
### Rebus - which side is it?
Which side is it? Nice blue arrows.
557 views
### Rebus, funny looking word
This funny looking word is displayed in all its glory. Can you decipher its hidden and cryptic meaning? To question the message body is wrong. What is the secret to meeting quality standards.
375 views
### Rebus and the Fridge
Rebus and the Fridge. Apple Keynote is surprisingly effective to make these. What is it?
655 views
### Rebus : 8 and your
Another easy rebus... I'll just to make some more creative ones .. the image didn't load. 2nd attempt
3k views
### Rebus Stand and I
Yes, this is incredibly easy, but it's my first rebus. I thought it was my own idea, but if someone did this before, let me know.
2k views
### Find the angle (hardest easy geometry) [closed]
This is a question which is related to the hardest easy questions. Note that the general solution belongs on math.se and is not solved in simple way. This question is a puzzle and you need to prove ...
373 views
### Finding a murderer from statements from suspects
Officer X was entrusted with the duty of investigating a murder. The dead body was found in the living room. Preliminary investigation suggested that four of the six suspects were liars (at least one ...
378 views
### I do it again and again, until I find what I seek
I do it again and again, until I find what I seek. What is inside of me can always be changed. You can never predict what is inside of me. I can do many things and be called whenever you like. Without ...
5k views
### Without me, you cannot survive
I am always moving, but I never know where I am going. I provide delicious flavor to food. I can be read on many restaurant menus. However, I am toxic. I kill thousands of people every year, However, ...
119 views
### Not Regular This Time
We can say that an $n$-by-$n$ square is regular provided that: Each of the integers from $0$ to $n^2 − 1$ appears in exactly one cell, and each cell contains only one integer (so that the square is ...
208 views
### Create a 4-by-4 regular square
We can say that an $n$-by-$n$ square is regular provided that: Each of the integers from $0$ to $n^2 − 1$ appears in exactly one cell, and each cell contains only one integer (so that the square is ...
2k views
### 9-by-9 filled, magic square
Construct a 9-by-9 filled, magic square using the integers from 0 to 80. The magic square should additionally have the property that when it is divided into ninths according to the picture below, each ...
855 views
### Rebus Puzzle - House Burning
My first rebus! This rebus is kind of difficult and is based on a weird property of English. HOUSE B U R N I N G while G N I N R U B
577 views
### Cipher with a hidden key
The challenge is to decode this cipher: TJ KS JX AP GN DT DG FE ER EU UA HJ The key is attached to this question. Try to find it!
3k views
### NaCl - Rebus puzzle
Solve the following rebus puzzle: NaCl * H2O H2O * NaCl ------------ CCCCCCC
2k views
### A real star riddle
500 are at my end, 500 are at my start, but at my heart there are only 5. The first letter and the first number make me complete: Some consider me a king, others consider me a real star.
376 views
Here is my first riddle: I help people speak, but do not speak myself, nor can I be read like a book. 4 letters on my tag. I provide direction to an object. Used by golfers and ...
1k views
### Rebus: Sun Archaic Ancient Antique
I have yet another Rebus puzzle for you all to solve..... This time I've tried making it neither too hard nor too easy :P Here it is
2k views
### The Devil's Brother
The 1933 movie "The Devil's Brother" (also known under the title "Fra Diavolo") takes place in the Northern Italy of the early 18th century. Stan Laurel and Oliver Hardy play the fierce robbers ...
2k views
### Look up the panda bear [closed]
A panda bear walked into a restaurant. He sat down at a table and ordered some food. When he was finished eating, he took out a gun and shot his waiter. He then left the restaurant. After the police ...
2k views
### The barbarian king [duplicate]
A barbarian king wants his tribe to have more men than women, because men can fight and women cannot. to achieve this, he introduces the following law: Each couple has to produce babies until they ...
243 views
### How many ways can the floor be tiled? [closed]
A space measuring 3 by 10 is to be tiled. Tiles are square and come in sizes 1 by 1, 2 by 2 and 3 by 3. How many ways can the floor be tiled?
276 views
### Running on bones, I am shaking [closed]
Running on bones, I am shaking. Who or what am I? Note: I am visiting the Philippines now, and this is the direct translation of a popular local riddle spoken in Cebuano. hint:
1k views
### Over the Bridge, If You Dare [closed]
A young man walks through the forest. He comes to a bridge. In front of the bridge is a large man carrying an axe. The man says, "If you want to cross this bridge, you must tell me something. If ...
18k views
### In Through One Hole
You go in through one hole, you come out three holes. Once you’re inside you’re ready to go outside, but once you're outside you’re still inside. What is it?
340 views
### I have maps but am not a map
I have forests with no wood, rope that will not bind, bags that hold no food, and masks you cannot hide behind.
2k views
### Seas Without Water
I have seas without water I have forests without wood I have deserts without sand I have houses with no brick What am I?
15 30 50 per page | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5189801454544067, "perplexity": 4620.350630522378}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875141806.26/warc/CC-MAIN-20200217085334-20200217115334-00436.warc.gz"} |
https://blog.evanchen.cc/2015/09/05/some-notes-on-valuations/ | # Some Notes on Valuations
There are some notes on valuations from the first lecture of Math 223a at Harvard.
## 1. Valuations
Let ${k}$ be a field.
Definition 1
A valuation
$\displaystyle \left\lvert - \right\rvert : k \rightarrow \mathbb R_{\ge 0}$
is a function obeying the axioms
• ${\left\lvert \alpha \right\rvert = 0 \iff \alpha = 0}$.
• ${\left\lvert \alpha\beta \right\rvert = \left\lvert \alpha \right\rvert \left\lvert \beta \right\rvert}$.
• Most importantly: there should exist a real constant ${C}$, such that ${\left\lvert 1+\alpha \right\rvert < C}$ whenever ${\left\lvert \alpha \right\rvert \le 1}$.
The third property is the interesting one. Note in particular it can be rewritten as ${\left\lvert a+b \right\rvert < C\max\{ \left\lvert a \right\rvert, \left\lvert b \right\rvert \}}$.
Note that we can recover ${\left\lvert 1 \right\rvert = \left\lvert 1 \right\rvert \left\lvert 1 \right\rvert \implies \left\lvert 1 \right\rvert = 1}$ immediately.
Example 2 (Examples of Valuations)
If ${k = \mathbb Q}$, we can take the standard absolute value. (Take ${C=2}$.)
Similarly, the usual ${p}$-adic valuation ${\nu_p}$, which sends ${p^a t}$ (with ${t}$ coprime to ${p}$) to ${p^{-a}}$. Here ${C = 1}$ is a valid constant.
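As a quick illustration of the ${p}$-adic example (an added sketch, not part of the original notes), one can compute ${\left\lvert x \right\rvert_p = p^{-a}}$ for ${x = p^a t}$ directly:

```python
from fractions import Fraction

def p_adic_abs(x, p):
    """|x|_p = p^(-a) where x = p^a * t with t having no factor of p; |0|_p = 0."""
    x = Fraction(x)
    if x == 0:
        return 0.0
    a = 0
    num, den = x.numerator, x.denominator
    while num % p == 0:
        num //= p
        a += 1
    while den % p == 0:
        den //= p
        a -= 1
    return float(p) ** (-a)

print(p_adic_abs(12, 2))               # 12 = 2^2 * 3, so |12|_2 = 1/4
print(p_adic_abs(Fraction(1, 12), 2))  # |1/12|_2 = 4
```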
These are the two examples one should always keep in mind: with number fields, all valuations look like one of these too. In fact, over ${\mathbb Q}$ it turns out that every valuation “is” one of these two valuations (for a suitable definition of equality). To make this precise:
Definition 3
We say ${\left\lvert - \right\rvert_1 \sim \left\lvert - \right\rvert_2}$ (i.e. two valuations on a field ${k}$ are equivalent) if there exists a constant ${c > 0}$ so that ${\left\lvert \alpha \right\rvert_1 = \left\lvert \alpha \right\rvert_2^c}$ for every ${\alpha \in k}$.
In particular, for any valuation we can force ${C = 2}$ to hold by taking an equivalent valuation to a sufficient power.
In that case, we obtain the following:
Lemma 4
In a valuation with ${C = 2}$, the triangle inequality holds.
Proof: First, observe that we can get
$\displaystyle \left\lvert \alpha + \beta \right\rvert \le 2 \max \left\{ \left\lvert \alpha \right\rvert, \left\lvert \beta \right\rvert \right\}.$
Applying this inductively, we obtain
$\displaystyle \left\lvert \sum_{i=1}^{2^r} a_i \right\rvert \le 2^r \max_i \left\lvert a_i \right\rvert$
and hence, for any ${n}$,

$\displaystyle \left\lvert \sum_{i=1}^{n} a_i \right\rvert \le 2n\max_i \left\lvert a_i \right\rvert.$
From this, one can obtain
$\displaystyle \left\lvert \alpha+\beta \right\rvert^n \le \left\lvert \sum_{j=0}^n \binom nj \alpha^j \beta^{n-j} \right\rvert \le 2(n+1) \sum_{j=0}^n \left\lvert \binom nj \right\rvert \left\lvert \alpha \right\rvert^j \left\lvert \beta \right\rvert^{n-j} \le 4(n+1)\left( \left\lvert \alpha \right\rvert+\left\lvert \beta \right\rvert \right)^n.$
Letting ${n \rightarrow \infty}$ completes the proof. $\Box$
Next, we prove that
Lemma 5
If ${\omega^n=1}$ for some ${n}$, then ${\left\lvert \omega \right\rvert = 1}$. In particular, on any finite field the only valuation is the trivial one which sends ${0}$ to ${0}$ and all other elements to ${1}$.
Proof: Immediate, since ${\left\lvert \omega \right\rvert^n = 1}$. $\Box$
## 2. Topological field induced by valuations
Let ${k}$ be a field. Given a valuation on it, we can define a basis of open sets
$\displaystyle \left\{ \alpha \mid \left\lvert \alpha - a \right\rvert < d \right\}$
across all ${a \in k}$, ${d \in \mathbb R_{> 0}}$. One can check that equivalent valuations give rise to the same topology, so it is fine to assume ${C = 2}$ as discussed earlier; thus, in fact we can make ${k}$ into a metric space, with ${d(x,y) = \left\lvert x-y \right\rvert}$ as the metric.
In what follows, we’ll always assume our valuation satisfies the triangle inequality. Then:
Lemma 6
Let ${k}$ be a field with a valuation. Viewing ${k}$ as a metric space, it is in fact a topological field, meaning addition and multiplication are continuous.
Proof: Trivial; let’s just check that multiplication is continuous. Observe that
\displaystyle \begin{aligned} \left\lvert (a+\varepsilon_1)(b+\varepsilon_2) - ab \right\rvert & \le \left\lvert \varepsilon_1\varepsilon_2 \right\rvert + \left\lvert a\varepsilon_2 \right\rvert + \left\lvert b\varepsilon_1 \right\rvert \\ &\rightarrow 0. \end{aligned}
$\Box$
Now, earlier we saw that two valuations which are equivalent induce the same topology. We now prove the following converse:
Proposition 7
If two valuations ${\left\lvert - \right\rvert_1}$ and ${\left\lvert - \right\rvert_2}$ give the same topology, then they are in fact equivalent.
Proof: Again, we may safely assume that both satisfy the triangle inequality. Next, observe that ${\left\lvert a \right\rvert < 1 \iff a^n \rightarrow 0}$ (according to the metric) and by taking reciprocals, ${\left\lvert a \right\rvert > 1 \iff a^{-n} \rightarrow 0}$.
Thus, given any ${\beta}$, ${\gamma}$ and integers ${m}$, ${n}$ we derive that
$\displaystyle \left\lvert \beta^n\gamma^m \right\rvert_1 < 1 \iff \left\lvert \beta^n\gamma^m \right\rvert_2 < 1$
with similar statements holding with “${<}$” replaced by “${=}$”, “${>}$”. Taking logs, we derive that
$\displaystyle n \log\left\lvert \beta \right\rvert_1 + m \log \left\lvert \gamma \right\rvert_1 < 0 \iff n \log\left\lvert \beta \right\rvert_2 + m \log \left\lvert \gamma \right\rvert_2 < 0$
and the analogous statements for “${=}$”, “${>}$”. Now just choose an appropriate sequence of ${m}$, ${n}$ and we can deduce that
$\displaystyle \frac{\log \left\lvert \beta \right\rvert_1}{\log \left\lvert \beta \right\rvert_2} = \frac{\log \left\lvert \gamma \right\rvert_1}{\log \left\lvert \gamma \right\rvert_2}$
so it equals a fixed constant ${c}$ as desired. $\Box$
## 3. Discrete Valuations
Definition 8
We say a valuation ${\left\lvert - \right\rvert}$ is discrete if its image around ${1}$ is discrete, meaning that if ${\left\lvert a \right\rvert \in [1-\delta,1+\delta] \implies \left\lvert a \right\rvert = 1}$ for some real ${\delta}$. This is equivalent to requiring that ${\{\log\left\lvert a \right\rvert\}}$ is a discrete subgroup of the real numbers.
Thus, the real valuation (absolute value) isn’t discrete, while the ${p}$-adic one is.
## 4. Non-Archimedian Valuations
Most importantly:
Definition 9
A valuation ${\left\lvert - \right\rvert}$ is non-Archimedian if we can take ${C = 1}$ in our requirement that ${\left\lvert a \right\rvert \le 1 \implies \left\lvert 1+a \right\rvert \le C}$. Otherwise we say the valuation is Archimedian.
Thus the real valuation is Archimedian while the ${p}$-adic valuation is non-Archimedian.
Lemma 10
Given a non-Archimedian valuation ${\left\lvert - \right\rvert}$, we have ${\left\lvert b \right\rvert < \left\lvert a \right\rvert \implies \left\lvert a+b \right\rvert = \left\lvert a \right\rvert}$.
Proof: We have that
$\displaystyle \left\lvert a \right\rvert = \left\lvert (a+b)-b \right\rvert \le \max\left\{ \left\lvert a+b \right\rvert, \left\lvert b \right\rvert \right\}.$
On the other hand, ${\left\lvert a+b \right\rvert \le \max \{ \left\lvert a \right\rvert, \left\lvert b \right\rvert\}}$. $\Box$
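For instance (an added example, not in the original notes), take the ${3}$-adic valuation on ${\mathbb Q}$ with ${a = 2}$ and ${b = \tfrac95}$: then ${\left\lvert b \right\rvert_3 = \tfrac19 < 1 = \left\lvert a \right\rvert_3}$, and indeed ${\left\lvert a + b \right\rvert_3 = \left\lvert \tfrac{19}{5} \right\rvert_3 = 1 = \left\lvert a \right\rvert_3}$.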
Given a field ${k}$ and a non-Archimedian valuation on it, we can now consider the set
$\displaystyle \mathcal O = \left\{ a \in k \mid \left\lvert a \right\rvert \le 1 \right\}$
and by the previous lemma, this turns out to be a ring. (This is the point we use the fact that the valuation is non-Archimedian; without that ${\mathcal O}$ need not be closed under addition). Next, we define
$\displaystyle \mathcal P = \left\{ a \in k \mid \left\lvert a \right\rvert < 1 \right\} \subset \mathcal O$
which is an ideal. In fact it is maximal, because ${\mathcal O \setminus \mathcal P}$ is exactly the set of units in ${\mathcal O}$, and thus the quotient ${\mathcal O/\mathcal P}$ is necessarily a field.
Lemma 11
Two valuations are equivalent if and only if they give the same ring ${\mathcal O}$ (as sets, not just up to isomorphism).
Proof: If the valuations are equivalent it’s trivial.
For the interesting converse direction (they have the same ring), the datum of the ring ${\mathcal O}$ lets us detect whether ${\left\lvert a \right\rvert < \left\lvert b \right\rvert}$ by simply checking whether ${\left\lvert ab^{-1} \right\rvert < 1}$. Hence same topology, hence same valuation. $\Box$
We will really only work with valuations which are obviously discrete. On the other hand, to detect non-Archimedian valuations, we have
Lemma 12
${\left\lvert - \right\rvert}$ is non-Archimedian if and only if ${\left\lvert n \right\rvert \le 1}$ for every ${n = 1 + \dots + 1 \in k}$.
Proof: Clearly non-Archimedian ${\implies}$ ${\left\lvert n \right\rvert \le 1}$. The converse direction is more interesting; the proof is similar to the analytic trick we used earlier. Given ${\left\lvert a \right\rvert \le 1}$, we wish to prove ${\left\lvert 1+a \right\rvert \le 1}$. To do this, first assume the triangle inequality as usual, then
$\displaystyle \left\lvert 1+a \right\rvert^n < \sum_j \left\lvert \binom nj \right\rvert\left\lvert a \right\rvert^j \le \sum_{j=0}^n \left\lvert a \right\rvert^j \le \sum_{j=0}^n 1 = n+1.$
Finally, let ${n \rightarrow \infty}$ again. $\Box$
In particular, any field of finite characteristic in fact has ${\left\lvert n \right\rvert \le 1}$ for every such ${n}$ (each nonzero ${n}$ lies in the prime field and is a root of unity), and thus all its valuations are non-Archimedian.
## 5. Completions
We say that a field ${k}$ is complete with respect to a valuation ${\left\lvert - \right\rvert}$ if it is complete in the topological sense.
Theorem 13
Every field ${k}$ with a valuation ${\left\lvert - \right\rvert}$ can be embedded into a complete field ${\overline{k}}$ in a way which respects the valuation.
For example, the completion of ${\mathbb Q}$ with the Euclidean valuation is ${\mathbb R}$. Proof: Define ${\overline{k}}$ to be the topological completion of ${k}$; then extend the valuation by continuity. $\Box$
Given ${k}$ and its completion ${\overline{k}}$ we use the same notation for the valuations of both.
Proposition 14
A valuation ${\left\lvert - \right\rvert}$ on ${\overline{k}}$ is non-Archimedian if and only if the valuation is non-Archimedian on ${k}$.
Proof: We saw non-Archimedian ${\iff}$ ${\left\lvert n \right\rvert \le 1}$ for every ${n = 1 + \dots + 1}$. $\Box$
Proposition 15
Assume ${\left\lvert - \right\rvert}$ is non-Archimedian on ${k}$ and hence ${\overline{k}}$. Then the set of values achieved by ${\left\lvert - \right\rvert}$ coincides for ${k}$ and ${\overline{k}}$, i.e. ${\{ \left\lvert k \right\rvert \} = \{ \left\lvert \overline{k} \right\rvert \}}$.
Not true for Archimedian valuations; consider ${\left\lvert \sqrt2 \right\rvert = \sqrt2 \notin \mathbb Q}$. Proof: Assume ${0 \neq b \in \overline{k}}$; then there is an ${a \in k}$ such that ${\left\lvert b-a \right\rvert < \left\lvert b \right\rvert}$ since ${k}$ is dense in ${\overline{k}}$. Then, ${\left\lvert b \right\rvert \le \max \{ \left\lvert b-a \right\rvert, \left\lvert a \right\rvert \}}$ which implies ${\left\lvert b \right\rvert = \left\lvert a \right\rvert}$. $\Box$
## 6. Weak Approximation Theorem
Proposition 16 (Weak Approximation Theorem)
Let ${\left\lvert-\right\rvert_i}$ be distinct nontrivial valuations of ${k}$ for ${i=1,\dots,n}$. Let ${k_i}$ denote the completion of ${k}$ with respect to ${\left\lvert-\right\rvert_i}$. Then the image
$\displaystyle k \hookrightarrow \prod_{i=1}^n k_i$
is dense.
This means that distinct valuations are as different as possible; for example, if ${\left\lvert-\right\rvert _1 = \left\lvert-\right\rvert _2}$ then we might get, say, a diagonal in ${\mathbb R \times \mathbb R}$ which is as far from dense as one can imagine. Another way to think of this is that this is an analogue of the Chinese Remainder Theorem.
Proof: We claim it suffices to exhibit ${\theta_i \in k}$ such that
$\displaystyle \left\lvert \theta_i \right\rvert_j \begin{cases} > 1 & i = j \\ < 1 & \text{otherwise}. \end{cases}$
Then
$\displaystyle \frac{\theta_i^r}{1+\theta_i^r} \rightarrow \begin{cases} 1 & \text{ in } \left\lvert-\right\rvert_i \\ 0 & \text{ otherwise}. \end{cases}$
Hence for any point ${(a_1, \dots, a_n)}$ we can take the image of ${\sum \frac{\theta_i^r}{1+\theta_i^r} a_i \in k}$. So it would follow that the image is dense.
Now, to construct the ${\theta_i}$ we proceed inductively. We first prove the result for ${n=2}$. Since the topologies are different, we exhibit ${\alpha}$, ${\beta}$ such that ${\left\lvert \alpha \right\rvert_1 < \left\lvert \alpha \right\rvert_2}$ and ${\left\lvert \beta \right\rvert_1 > \left\lvert \beta \right\rvert_2}$, and pick ${\theta=\alpha\beta^{-1}}$.
Now assume ${n \ge 3}$; it suffices to construct ${\theta_1}$. By induction, there is a ${\gamma}$ such that
$\displaystyle \left\lvert \gamma \right\rvert_1 > 1 \quad\text{and}\quad \left\lvert \gamma \right\rvert_i < 1 \text{ for } i = 2, \dots, n-1.$
Also, there is a ${\delta}$ such that
$\displaystyle \left\lvert \delta \right\rvert_1 > 1 \quad\text{and}\quad \left\lvert \delta \right\rvert_n < 1.$
Now we can pick
$\displaystyle \theta_1 = \begin{cases} \gamma & \left\lvert \gamma \right\rvert_n < 1 \\ \gamma^r\delta & \left\lvert \gamma \right\rvert_n = 1 \\ \frac{\gamma^r}{1+\gamma^r}\delta & \left\lvert \gamma \right\rvert_n > 1 \\ \end{cases}$
for sufficiently large ${r}$. $\Box$
## 2 thoughts on “Some Notes on Valuations”
1. I could not refrain from commenting. Exceptionally well written!
Like | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 170, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9870467185974121, "perplexity": 291.77987811992557}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320306301.52/warc/CC-MAIN-20220128152530-20220128182530-00092.warc.gz"} |
https://www.ideals.illinois.edu/handle/2142/34343 | ## Files in this item
Files | Description | Format
Miller_Sarah.pdf (1MB) | (no description provided) | PDF (application/pdf)
## Description
Title: Essays in applied microeconomics
Author(s): Miller, Sarah
Director of Research: Lubotsky, Darren H.
Doctoral Committee Chair(s): Lubotsky, Darren H.
Doctoral Committee Member(s): Bernhardt, Daniel; Brown, Jeffrey R.; Kaestner, Robert
Department / Program: Economics
Discipline: Economics
Degree Granting Institution: University of Illinois at Urbana-Champaign
Degree: Ph.D.
Genre: Dissertation
Subject(s): Health economics; consumer credit; emergency room; bankruptcy; information economics
Abstract: This dissertation consists of three essays. The first essay, entitled "The Effect of Insurance on Emergency Room Visits: An Analysis of the 2006 Massachusetts Health Reform," analyzes the impact of a major health reform in Massachusetts on emergency room (ER) visits. I exploit the variation in pre-reform uninsurance rates across counties to identify the causal effect of the reform on ER visits. My estimates imply that the reform reduced ER usage by between 5 and 13 percent, nearly all of which is accounted for by a reduction in non-urgent visits that could be treated in alternative settings. The reduction in non-urgent and primary-care treatable visits is most pronounced during regular office hours when physician's offices are likely to be open. In contrast, I find no effect for non-preventable emergencies such as heart attacks. These estimates are consistent with a large causal effect of insurance on ER usage and imply that expanding insurance coverage could have a substantial impact on the efficiency of health services. The second essay, entitled "The Impact of Health Care Reform on Personal Bankruptcy," studies the same reform to analyze the effect of insurance on personal bankruptcy. I find that the reform reduced personal bankruptcy rates, with the most pronounced declines occurring in the most affected counties. The magnitude of the estimated effect increases with exposure to the reform: a one percentage point decrease in the pre-reform insurance rate decreases the personal bankruptcy rate by 0.03 bankruptcies per 1000 residents. This reduction is driven by Chapter 7 bankruptcies that tend to be filed by low-income debtors. In contrast, I do not find significant improvements in other measures of economic activity, such as the unemployment rate or the business bankruptcy rate. The final chapter, "Information and Default in Consumer Credit Markets: Evidence from a Natural Experiment," looks at the role of information in consumer credit markets. Despite the prominent role that information plays in the economic theory of credit markets, no direct evidence exists on the causal relationship between the availability of information about loan applicants and loan performance. This chapter provides such evidence by exploiting an unanticipated change in the amount of information visible in an online market for loans to measure the impact of lender information on loan outcomes. Conditional on data available in both periods, allowing lenders to access more borrower credit information reduced default rates by 10 percentage points on average. These gains were most pronounced for high risk loans. Recovery rates on defaulted loans improved. Immediate lender returns increased by about 12 percentage points and took 6 weeks to decay, providing a measure of the time it took for the market to assimilate the content of the new information. I test whether these results are driven by lender screening or selection among loan applicants using data that is unobserved by lenders in both periods.
I find that there is no change in unobserved credit quality among loan applicants, indicating that the improvement in default rates is primarily a result of better lender screening. Issue Date: 2012-09-18 URI: http://hdl.handle.net/2142/34343 Rights Information: Copyright 2012 Sarah Miller Date Available in IDEALS: 2012-09-18 Date Deposited: 2012-08
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2652342617511749, "perplexity": 4551.768969175681}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141195929.39/warc/CC-MAIN-20201128214643-20201129004643-00035.warc.gz"} |
https://donnate.github.io/publication/2018-01-22-distance | # Tracking network distances: an overview
Published in Annals of Applied Statistics 12.2 (2018): 971-1012, 2018
Recommended citation: Donnat, Claire and Holmes, Susan (2018). "Tracking network distances: an overview." Annals of Applied Statistics 12.2 (2018): 971-1012.
Graphs have emerged as one of the most powerful frameworks for encapsulating information about evolving interactions or similarities between a set of agents: in such studies, the data typically consist of a set of graphs tracking the state of a system at different times. A critical step in the data analysis process thus lies in the selection of an appropriate distance between networks: how can we devise a metric that is both robust to small perturbations of the graph structure and sensitive to the properties that make two graphs similar?
In this review, we thus propose to provide an overview of some of the existing distances and to introduce a few alternative ones. In particular, we will try to provide ground and principles for choosing an appropriate distance over another, and highlight these properties on both a real-life microbiome application as well as synthetic examples. Finally, we extend our study to the analysis spatial dynamics, and show the performance of our method on a recipe network. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8396750688552856, "perplexity": 661.025907703875}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655895944.36/warc/CC-MAIN-20200707204918-20200707234918-00278.warc.gz"} |
https://math.stackexchange.com/questions/742686/taylor-series-maclaurin-series | # Taylor Series, Maclaurin series
From Rogawski ET 2e section 10.7, exercise 4.
Find the Maclaurin series for $f(x) = \dfrac{x^2}{1-8x^8}$.
$$\dfrac{x^2}{1-8x^8} = \sum_{n=0}^{\infty} [\textrm{_______________}]$$
Hi! I am working on some online Calc2 homework problem and I am not quite sure how to go about solving this Taylor series. I know I should substitute $8x^8$ for $x$ in the Maclaurin series for $1 \over (1-x)$, but the $x^2$ in the numerator of the problem is throwing me off. If someone could help me find the Maclaurin series and on what interval the expansion is valid, I would greatly appreciate it!
Notice $\frac{1}{1-x} = \sum x^n$ . Hence
$$\frac{ x^2}{1 - 8x^8} = x^2 \sum (8x^8)^n = \sum 8^n x^{8n + 2}$$
and this is valid for $| 8x^8 | < 1$
• Thank you so much, I have one more question related to your answer…how would I write |x|<(1/8)^(1/8) in interval notation? – user124539 Apr 6 '14 at 21:59
You can just write $$\frac{x^2}{1-8x^8}=x^2\left(1+8x^8+(8x^8)^2+\cdots\right)=x^2+8x^{10}+64x^{18}+\cdots$$ The interval of convergence will be for all $x$ such that $$|8x^8|<1$$ which is equivalent to $$|x|<\left(\frac{1}{8}\right)^{{1\over 8}}$$
• Thank you so much, I have one more question related to your answer…how would I write |x|<(1/8)^(1/8) in interval notation? – user124539 Apr 6 '14 at 22:00
• It would be $$\left(-\left(\frac{1}{8}\right)^{{1\over 8}},\left(\frac{1}{8}\right)^{{1\over 8}}\right)$$ – user138335 Apr 6 '14 at 22:03
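A quick symbolic check of the expansion and the interval of convergence discussed above (an added sketch using sympy, not part of the original thread):

```python
import sympy as sp

x = sp.symbols('x')
f = x**2 / (1 - 8*x**8)

# Maclaurin series: matches sum_{n>=0} 8^n x^(8n+2).
print(sp.series(f, x, 0, 20))            # x**2 + 8*x**10 + 64*x**18 + O(x**20)

# Endpoint of the interval of convergence, |x| < (1/8)**(1/8).
print(sp.N(sp.Rational(1, 8)**sp.Rational(1, 8)))   # about 0.771
```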
What you have got is almost the right answer. $$\frac{x^2}{1-8x^8} = x^2 \sum_{n=0}^{\infty}(8x^8)^n = \sum_{n=0}^{\infty}8^nx^{8n+2}$$, which is valid if $|8x^8|<1$,hence, $|x|<\frac{1}{8^{1/8}}$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9462524652481079, "perplexity": 288.3774158233372}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986647517.11/warc/CC-MAIN-20191013195541-20191013222541-00449.warc.gz"} |
http://math.stackexchange.com/questions/172669/for-what-value-of-h-the-set-is-linearly-dependent | # For what value of h the set is linearly dependent?
For what value of $h$ set $(\vec v_1 \ \vec v_2 \ \vec v_3)$ is linearly dependent? $$\vec v_1=\left[ \begin{array}{c} 1 \\ -3 \\ 2 \end{array} \right];\ \vec v_2=\left[ \begin{array}{c} -3 \\ 9 \\ -6 \end{array} \right] ;\ \vec v_3=\left[ \begin{array}{c} 5 \\ -7 \\ h \end{array} \right]$$
Attempt: After row reducing the augmented matrix of $A\vec x=\vec 0$ where $A=(\vec v_1 \ \vec v_2 \ \vec v_3)$:
$$\begin{bmatrix} 1 & -3 & 5 & 0 \\ -3 & 9 & -7 & 0 \\ 2 & -6 & h & 0 \end{bmatrix} \sim \begin{bmatrix} 1 & -3 & 5 & 0 \\ 0 & 0 & 8 & 0 \\ 0 & 0 & h-10 & 0 \end{bmatrix}$$
I am not sure whether the set is linearly dependent when $h=10$ or for any $h$. Help please.
The set is always linearly dependent since $v_2 = -3v_1.$ – user2468 Jul 19 '12 at 2:27
@J.D. so it is enough for the set of three vectors to have two vectors that are collinear to be a linearly dependent set, right? – Koba Jul 19 '12 at 3:39
Indeed. A quick geometric reminder for yourself: the basis in $\Bbb{R}^3.$ If you pick two vectors collinear in the direction of the $x$-axis & a vector in the $z$ direction, would you be able to describe every vector in $\Bbb{R}^3$? Of course not. – user2468 Jul 19 '12 at 3:45
That reduced matrix shows you that the set of vectors is linearly dependent for every value of $h$. If $h\ne 10$, the system has no solution, and if $h=10$, it has infinitely many, so there is no value of $h$ that gives it exactly one solution.
Indeed, you can see this directly from the vectors themselves: $v_2=-3v_1$.
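A quick computational confirmation of this (an added sketch, not part of the original thread), using the symbolic determinant of the matrix whose columns are the three vectors:

```python
import sympy as sp

h = sp.symbols('h')
A = sp.Matrix([[ 1, -3,  5],
               [-3,  9, -7],
               [ 2, -6,  h]])

# The determinant is identically zero, so the columns are linearly dependent
# for every value of h (consistent with v2 = -3*v1).
print(sp.expand(A.det()))   # 0
```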
I think you may have confused the data, @Brian: if $\,h=10\,$ then the third row becomes all zero and, thus, the original set of three vectors is linearly dependent, as asked. True, if $\,h\neq 10\,$ then the homogeneous system is inconsistent, but we don't really care about that as nothing was asked about solutions of linear systems, homogeneous or non-homog. – DonAntonio Jul 19 '12 at 2:17
In fact, I think the OP confused himself by writing down an augmented matrix as if he wanted to solve some linear system, whereas a $\,3\times 3\,$ matrix with the vectors' components is enough to find out whether they're l.i. or not. And then yes, as you wrote: for any value of $\,h\,$ the three vectors are l.d. This is also easy to check calculating the easy determinant of that square matrix, which is zero no matter what $\,h\,$ is. – DonAntonio Jul 19 '12 at 2:21
@DonAntonio: I suspect that you’re right about the confusion, but you missed the point of my answer. If the three vectors were linearly independent for some $h$, then for that $h$ the homogeneous system would have only the trivial solution. But there is no value of $h$ for which this is the case, so for every $h$ the vectors must be linearly dependent. – Brian M. Scott Jul 19 '12 at 2:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9284145832061768, "perplexity": 151.08234224200757}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931008720.43/warc/CC-MAIN-20141125155648-00060-ip-10-235-23-156.ec2.internal.warc.gz"} |
http://mathhelpforum.com/algebra/22332-find-largest-smallest.html | Math Help - Find the largest and smallest
1. Find the largest and smallest
The sum of the non-negative real numbers $s_1, s_2,..., s_{2004}$ is 2 and
$s_1s_2+s_2s_3+ ...+ s_{2003} s_{2004}+ s_{2004}s_1=1$.
Find the largest and smallest possible values of
$S=s_1^2+s_2^2+...+s_{2004}^2$
2. Originally Posted by perash
The sum of the non-negative real numbers $s_1, s_2,..., s_{2004}$ is 2 and
$s_1s_2+s_2s_3+ ...+ s_{2003} s_{2004}+ s_{2004}s_1=1$.
Find the largest and smallest possible values of
$S=s_1^2+s_2^2+...+s_{2004}^2$
$\sum_{i=1}^{2004} s_i =2$
Now square:
$\left(\sum_{i=1}^{2004} s_i\right)^2 = \sum_{i=1}^{2004} s_i^2 +2\sum_{i \ne j} s_i s_j = \sum_{i=1}^{2004} s_i^2 +2 =4$
So take it from there.
RonL
3. Originally Posted by CaptainBlack
$\left(\sum_{i=1}^{2004} s_i\right)^2 = \sum_{i=1}^{2004} s_i^2 +\color{red}2\sum_{i \ne j} s_i s_j \color{black}= \sum_{i=1}^{2004} s_i^2 +\color{red}2\color{black} =4$
No. $\sum_{i \ne j}{s_i s_j}$ is not the same as $s_1s_2+s_2s_3+\ldots+ s_{2003} s_{2004}+ s_{2004}s_1$. The former contains terms like $s_2s_4$ that are not present in the latter. Rather,
$S\ =\ s_1^2+s_2^2+\ldots+s_{2004}^2$
$=\ (s_1+\ldots+s_{2004})^2-2\sum_{i \ne j}{s_i s_j}$
$\leq\ (s_1+\ldots+s_{2004})^2-2(s_1s_2+s_2s_3+\ldots+ s_{2003} s_{2004}+ s_{2004}s_1)$
$=\ 2^2-2\ =\ 2$
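The upper bound is in fact attained, e.g. by $s = (1,1,0,\ldots,0)$; here is a quick numerical check (an added sketch, not part of the original thread):

```python
import numpy as np

n = 2004
s = np.zeros(n)
s[0] = s[1] = 1.0                      # the configuration (1, 1, 0, ..., 0)

print(s.sum())                         # 2.0   (first constraint)
print(np.dot(s, np.roll(s, -1)))       # 1.0   (cyclic sum s_1 s_2 + ... + s_2004 s_1)
print(np.dot(s, s))                    # 2.0   (S attains the upper bound)
```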
For the lower bound, the Cauchy–Schwarz inequality gives
$|s_1s_2+s_2s_3+\ldots+ s_{2003} s_{2004}+ s_{2004}s_1|\ \leq\ (s_1^2+s_2^2+\ldots+s_{2004}^2)^{\frac{1}{2}}(s_2^2+s_3^2+\ldots+s_{2004}^2+s_1^2)^{\frac{1}{2}}$
so $S\ \geq\ 1$. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 18, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4894629716873169, "perplexity": 576.23033115968}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928520.68/warc/CC-MAIN-20150521113208-00217-ip-10-180-206-219.ec2.internal.warc.gz"} |
https://www.gradesaver.com/textbooks/science/physics/essential-university-physics-volume-1-3rd-edition/chapter-14-exercises-and-problems-page-262/17 | ## Essential University Physics: Volume 1 (3rd Edition)
Published by Pearson
# Chapter 14 - Exercises and Problems - Page 262: 17
#### Answer
Please see the work below.
#### Work Step by Step
We know that $\lambda=\frac{v}{f}$. Plugging in the known values:
(a) $\lambda=\frac{3\times 10^8}{1.0\times 10^6}=300m$
(b) $\lambda=\frac{3\times 10^8}{190\times 10^6}=1.6m$
(c) $\lambda=\frac{3\times 10^8}{10\times 10^9}=0.03m$
(d) $\lambda=\frac{3\times 10^8}{4\times 10^{13}}=7.5\mu m$
(e) $\lambda=\frac{3\times 10^8}{6.0\times 10^{14}}=500nm$
(f) $\lambda=\frac{3\times 10^8}{1.0\times 10^{18}}=0.30nm$
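The same arithmetic can be checked in a few lines (an added sketch, not part of the original solution):

```python
c = 3.0e8  # speed of light in m/s

# Frequencies from parts (a)-(f), in Hz.
for f in [1.0e6, 190e6, 10e9, 4e13, 6.0e14, 1.0e18]:
    print(f"f = {f:.1e} Hz  ->  lambda = {c / f:.2e} m")
```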
After you claim an answer you’ll have 24 hours to send in a draft. An editor will review the submission and either publish your submission or provide feedback. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6794454455375671, "perplexity": 619.8688669681036}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267158011.18/warc/CC-MAIN-20180922024918-20180922045318-00368.warc.gz"} |
http://mathoverflow.net/questions/57067/hausdorff-dimension-of-non-recurrent-walks/57074 | # Hausdorff dimension of non-recurrent walks
Preface: I am fairly new to the concept of Hausdorff dimension, so I don't know how interesting a question this is.
Identify walks on $\mathbb{Z}$ with infinite binary sequences (say $0$ means moving left, $1$ means moving right). It is then well-known that, in the Cantor space $2^\mathbb{N}$ under the Lebesgue measure, the set $A$ of non-recurrent walks -- i.e. those sequences $x$ for which $\frac{\sum_{k=1}^n x(k)}{n} = \frac{1}{2}$ for only finitely many $n$ -- is null. I am curious as to the Hausdorff dimension of this set, but I do not see how to figure this. Thus my question:
What is the Hausdorff dimension of the set $A$ of non-recurrent walks?
Perhaps this too is already well-known, but if so I could not locate the result. I hope this question is not trivial, though if it is at least I will have learned that. I look forward to any replies; this seems like a marvelous site!
First of all the dimension of the space. You didn't give the metric you want to use, so I'll use my favourite one: two points are at distance $2^{-n}$ if they first disagree in the $n$th symbol. The 1-Hausdorff measure agrees with coin-flipping measure so that the full space has Hausdorff dimension 1.
Now for the non-recurrent subset. It's also of Hausdorff dimension 1. Probably the simplest way to see this is to use some technology: consider the measure $\mu_\alpha$ that is coin-flipping with weights $\alpha$ and $1-\alpha$. Clearly for $\alpha\ne1/2$ this measure is supported on the non-recurrent set. But the Hausdorff dimension of the measure is $(-\alpha\log\alpha-(1-\alpha)\log(1-\alpha))/\log 2$. This is a lower bound for the Hausdorff dimension of the non-recurrent set. But as $\alpha\to1/2$, this lower bound converges to 1.
In case you have an aversion to dynamical systems there is a way to do this with a single (non-invariant) measure and get the result without taking any limits. Let $n_1 < n_2 < \ldots$ be a sequence of density 0 such that the number of terms up to $N$ ($T(N)$ say) grows faster than $\sqrt{3N\log\log N}$ (e.g. $(n_i)$ is the sequence $\lfloor i^2/\log i\rfloor$). Now build the measure that is fair coin-tossing at each $n$ except at the $n_i$ when you always put a 1. Since the fair part of the process puts you in the range $\pm\sqrt{(2+\epsilon)N\log\log N}$ for all sufficiently large $N$ (by the Law of the Iterated Logarithm), the unfair part of the process guarantees that you only return to 0 finitely many times. Hence this measure is supported on the non-recurrent set. This measure has Hausdorff dimension 1: the measure of a $2^{-N}$ neighbourhood of a typical point is $2^{-N+T(N)}$. The Hausdorff dimension is the limit of $\log(2^{-N+T(N)})/\log(2^{-N})$. The fact that $T(N)=o(N)$ guarantees that the Hausdorff dimension is 1.
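To see numerically how the lower bound in this argument approaches 1 as $\alpha\to 1/2$, here is a small added sketch (not part of the original answer) evaluating the dimension formula above:

```python
import numpy as np

def dim_bernoulli(alpha):
    """Hausdorff dimension of the (alpha, 1-alpha) coin-flipping measure."""
    return (-alpha * np.log(alpha) - (1 - alpha) * np.log(1 - alpha)) / np.log(2)

for alpha in [0.6, 0.55, 0.51, 0.501]:
    print(alpha, dim_bernoulli(alpha))   # increases toward 1 as alpha -> 1/2
```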
Yes, you were right about the metric I intended. Thank you for the nice succinct solution(s). – Michel Mar 2 '11 at 12:39
Anthony's answer settles the matter, but I'll say a few words about relevant terminology and references that are too long to fit in the comment box. (And go a bit beyond what you actually asked, but may give things some context.)
This is essentially a question in multifractal analysis. Given an asymptotic property such as "the asymptotic frequency of ones is bounded away from 1/2", one can study the set of points with this property in different ways. If you study the measure of this set, you're doing ergodic theory; if you study the dimension of this set, you're doing multifractal analysis.
In your setting, a natural thing to do is to fix $\alpha \in [0,1]$ and to consider the set $K_\alpha = \{ x \mid \frac 1n \sum_{k=1}^n x(k) \to \alpha \}$. (For $\alpha \neq \frac12$, each set $K_\alpha$ is contained in your non-recurrent set.) Then $2^{\mathbb{N}} = (\bigcup_{\alpha\in [0,1]} K_\alpha) \cup \hat K$, where $\hat K$ is the set of points $x$ for which $\lim \frac 1n \sum_{k=1}^n x(k)$ does not exist. This is an example of a multifractal decomposition. One can show that the measures $\mu_\alpha$ in Anthony's answer have the property that $\mu_\alpha(K_\alpha)=1$ and $$\dim_H K_\alpha = \dim_H \mu_\alpha = \frac{-\alpha\log \alpha - (1-\alpha)\log (1-\alpha)}{\log 2}.$$ Thus the dimension spectrum $\alpha \mapsto \dim_H K_\alpha$ is an analytic and concave function of $\alpha$; this is an example of a multifractal spectrum. There are lots of these, associated to various asymptotic quantities, and you can find a lot of subtle behaviour. (For example, one can ask how big the set $\hat K$ is, and it turns out that even though it is null for every shift-invariant measure on $2^\mathbb{N}$, it still has full Hausdorff dimension.)
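As a purely illustrative aside (my own sketch; the function name below is made up and not taken from any of the cited sources), the spectrum above is easy to tabulate numerically:

```python
import numpy as np

def dimension_spectrum(alpha):
    """alpha -> dim_H K_alpha for the full shift on two symbols:
    the binary entropy of alpha in base 2 (0 at the endpoints by convention)."""
    alpha = np.asarray(alpha, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        h = -(alpha * np.log2(alpha) + (1 - alpha) * np.log2(1 - alpha))
    return np.nan_to_num(h)  # replaces the 0*log(0) NaNs at alpha = 0, 1 with 0

for a in np.linspace(0.0, 1.0, 11):
    d = float(dimension_spectrum(a))
    print(f"alpha = {a:.1f}   dim_H K_alpha = {d:.4f}")
# The curve is concave, symmetric about alpha = 1/2, and attains its maximum 1 there.
```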
For more on this, you can see the book "Dimension Theory in Dynamical Systems", by Yakov Pesin -- the first few chapters are very abstract and difficult to follow if you're not already pretty familiar with some of the basic ideas in dimension theory, but you can also skip straight to the chapters on multifractal analysis and get an idea of what's going on. There's also a survey paper by Barreira, Pesin, and Schmeling from 1997 or thereabouts that is a good introduction.
Thanks for taking the time to add this; I find it helpful. And the references are much appreciated. – Michel Mar 2 '11 at 12:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9696692228317261, "perplexity": 125.32267074743272}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375098468.93/warc/CC-MAIN-20150627031818-00214-ip-10-179-60-89.ec2.internal.warc.gz"} |
http://nakamotonews.socialpetitions.org/shapeshift-ceo-points-out-how-bitcoins-growth-is-dependent-on-the-existence-of-bubbles-speculations/ | # ShapeShift CEO Points Out How Bitcoin’s Growth is Dependent on the Existence of Bubbles, Speculations
Erik Voorhees, CEO of the cryptocurrency exchange ShapeShift, believes that Bitcoin’s [BTC] growth relies heavily on bubbles, reports Coin Telegraph. On Wednesday, May 15, Voorhees expressed his sentiments in an interview with Bloomberg TV and has since made several cases by expounding on bubbles that have burst in the past and how volatility is essential. In particular, […]
Read more: ShapeShift CEO Points Out How Bitcoin’s Growth is Dependent on the Existence of Bubbles, Speculations
• | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9047629833221436, "perplexity": 10600.39284891642}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195530385.82/warc/CC-MAIN-20190724041048-20190724063048-00148.warc.gz"} |
http://www.computer.org/csdl/trans/tk/2008/08/ttk2008081115-abs.html | Contraflow Transportation Network Reconfiguration for Evacuation Route Planning
Contraflow Transportation Network Reconfiguration for Evacuation Route Planning
August 2008 (vol. 20 no. 8)
pp. 1115-1129
Given a transportation network having source nodes with evacuees and destination nodes, we want to find a contraflow network configuration, i.e., ideal direction for each edge, to minimize evacuation time. Contraflow is considered a potential remedy to reduce congestion during evacuations in the context of homeland security and natural disasters (e.g., hurricanes). This problem is computationally challenging because of the very large search space and the expensive calculation of evacuation time on a given network. To our knowledge, this paper presents the first macroscopic approaches for the solution of contraflow network reconfiguration incorporating road capacity constraints, multiple sources, congestion factor, and scalability. We formally define the contraflow problem based on graph theory and provide a framework of computational workload to classify our approaches. A Greedy heuristic is designed to produce high quality solutions with significant performance. A Bottleneck Relief heuristic is developed to deal with large numbers of evacuees. We evaluate the proposed approaches both analytically and experimentally using real world datasets. Experimental results show that our contraflow approaches can reduce evacuation time by 40% or more.
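The abstract does not describe the mechanics of the Greedy heuristic, so the following is only a generic sketch of the kind of greedy edge-reversal search it alludes to; the graph interface, the (expensive) evacuation-time evaluator, and the stopping rule are all assumptions made here for illustration, not the authors' algorithm.

```python
def greedy_contraflow(graph, candidate_edges, evacuation_time):
    """Generic greedy sketch: repeatedly commit the single edge reversal that most
    reduces the simulated evacuation time, and stop when no reversal helps.
    `evacuation_time(graph)` is assumed to be a costly black-box evaluator."""
    best_time = evacuation_time(graph)
    while True:
        best_edge, best_gain = None, 0.0
        for edge in candidate_edges:
            graph.reverse(edge)                 # trial contraflow reversal
            gain = best_time - evacuation_time(graph)
            graph.reverse(edge)                 # undo the trial
            if gain > best_gain:
                best_edge, best_gain = edge, gain
        if best_edge is None:
            return graph, best_time             # no reversal improves the plan
        graph.reverse(best_edge)                # commit the best reversal
        candidate_edges.remove(best_edge)       # each edge is reversed at most once
        best_time -= best_gain
```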
Index Terms:
Transportation, Optimization, Contraflow, Evacuation Route Planning
Citation:
Sangho Kim, Shashi Shekhar, Manki Min, "Contraflow Transportation Network Reconfiguration for Evacuation Route Planning," IEEE Transactions on Knowledge and Data Engineering, vol. 20, no. 8, pp. 1115-1129, Aug. 2008, doi:10.1109/TKDE.2007.190722 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6756159067153931, "perplexity": 17463.55091115479}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00517-ip-10-147-4-33.ec2.internal.warc.gz"} |
https://physics.stackexchange.com/questions/388147/induction-of-emf-by-motion-of-a-conductor-motional-emf | # Induction of emf by motion of a conductor (Motional EMF)
Does emf get induced for any conductor moving perpendicularly to a magnetic field?
In the textbook explanation, they gave an example of a fixed metal frame over which a metal rod can roll. The field goes into the area of the frame and the rod moves horizontally over the frame, such that its motion is perpendicular to the field. In this case, the area bounded by the frame and the rod keeps changing, hence the flux changes, and an emf is induced.
But consider a case where there is no frame, and the rod is just moving perpendicular to a magnetic field with constant velocity. Is emf still induced across the ends of the rod?
Also, is the shape of the conductor of any significance? For example, replace the straight rod with a semicircular one in the above question.
• The answer is yes to an emf being induced as you do not need a complete conducting circuit to induced the emf. Feb 23, 2018 at 9:28
• @Farcher But isn't emf induced only when there is a change in flux? If the object is moving inside a uniform magnetic field, then the number of field lines passing through the conductor is always the same until the instant it just leaves the field. Why then should an emf be induced at all, regardless of whether it is a complete conducting circuit or not? Feb 23, 2018 at 11:20
• Your question is probably a duplicate. If you put motional emf into this site's search engine you will find a number of answers relating to induced emf and motional emf. Here is one to read: physics.stackexchange.com/q/239741 Feb 23, 2018 at 12:13
Induced emf is not defined in terms of a change in flux. It is defined as
$$\xi=\int \vec{f}_{mag}.\vec{dl}$$
where $\vec{f}_{mag}$ is the total force per unit charge that drives the current around the circuit. This force can come from a battery, from a non-electrostatic electric field, or from a magnetic field. It cannot come from an electrostatic field, because an electrostatic field is conservative, so its line integral around the entire circuit is $0$.
Now if a single rod moves in a region with a perpendicular magnetic field, there will be a separation of charges due to the magnetic part of the Lorentz force, $\vec{f}_{mag}=\vec{v} \times\vec{B}$. Thus an emf is induced according to the formula above. Since the circuit is not complete, no current flows.
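As a concrete check (the standard textbook case, added here for illustration): for a straight rod of length $L$, with the rod, its velocity $\vec{v}$, and a uniform field $\vec{B}$ mutually perpendicular, the force per unit charge $\vec{v}\times\vec{B}$ has magnitude $vB$ and points along the rod, so $$\xi=\int (\vec{v}\times\vec{B})\cdot\vec{dl}=\int_0^L vB\,dl=BvL.$$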
For your second question, the answer is no: the shape of the conductor poses no problem. The induced emf is still given by the same line integral, which can be evaluated along a conductor of any shape.
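To make this quantitative in the simplest situation (uniform field and a rigid conductor moving with a single velocity $\vec{v}$; these assumptions are added here and are not stated in the answer above): since $\vec{v}\times\vec{B}$ is then constant along the conductor, $$\xi=\int_a^b(\vec{v}\times\vec{B})\cdot\vec{dl}=(\vec{v}\times\vec{B})\cdot(\vec{r}_b-\vec{r}_a),$$ so the emf depends only on the end points: a semicircular rod develops the same emf as a straight rod joining the same two points.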
If you still have doubts regarding induced emf, you can refer to Introduction to Electrodynamics by David J. Griffiths.
• I'd add that for motional emf, the seat of the emf is the magnetic Lorentz force, as explained in the above answer. $\mathscr E=-\frac{d\Phi}{dt}$ is just a convenient way of calculating the motional emf. Its advantage is that it automatically sums the emfs induced in different parts of the circuit that may be moving. Jun 28, 2019 at 16:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8747087717056274, "perplexity": 190.97552275948655}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335303.67/warc/CC-MAIN-20220929003121-20220929033121-00740.warc.gz"} |
https://arxiv.org/list/cs.LG/1509 | Machine Learning
Authors and titles for cs.LG in Sep 2015
[ total of 177 entries: 1-25 | 26-50 | 51-75 | 76-100 | ... | 176-177 ]
[1]
Title: Value function approximation via low-rank models
Authors: Hao Yi Ong
Comments: arXiv admin note: substantial text overlap with arXiv:0912.3599 by other authors
Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI)
[2]
Title: Metastatic liver tumour segmentation from discriminant Grassmannian manifolds
Journal-ref: Physics in Medicine and Biology 60 (2015)
Subjects: Machine Learning (cs.LG); Computer Vision and Pattern Recognition (cs.CV)
[3]
Title: Online Supervised Subspace Tracking
Subjects: Machine Learning (cs.LG); Statistics Theory (math.ST); Machine Learning (stat.ML)
[4]
Title: Learning A Task-Specific Deep Architecture For Clustering
Subjects: Machine Learning (cs.LG); Computer Vision and Pattern Recognition (cs.CV); Machine Learning (stat.ML)
[5]
Title: Learning Deep $\ell_0$ Encoders
Comments: Full paper at AAAI 2016
Subjects: Machine Learning (cs.LG); Machine Learning (stat.ML)
[6]
Title: Differentially Private Online Learning for Cloud-Based Video Recommendation with Multimedia Big Data in Social Networks
Subjects: Machine Learning (cs.LG)
[7]
Title: Sensor-Type Classification in Buildings
Subjects: Machine Learning (cs.LG)
[8]
Title: Importance Weighted Autoencoders
Subjects: Machine Learning (cs.LG); Machine Learning (stat.ML)
[9]
Title: Heavy-tailed Independent Component Analysis
Subjects: Machine Learning (cs.LG); Statistics Theory (math.ST); Computation (stat.CO); Machine Learning (stat.ML)
[10]
Title: A DEEP analysis of the META-DES framework for dynamic selection of ensemble of classifiers
Subjects: Machine Learning (cs.LG); Machine Learning (stat.ML)
[11]
Title: On-the-Fly Learning in a Perpetual Learning Machine
Subjects: Machine Learning (cs.LG)
[12]
Title: Training a Restricted Boltzmann Machine for Classification by Labeling Model Samples
Subjects: Machine Learning (cs.LG)
[13]
Title: A tree-based kernel for graphs with continuous attributes
Comments: This work has been submitted to the IEEE Transactions on Neural Networks and Learning Systems for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible
Subjects: Machine Learning (cs.LG)
[14]
Title: Fast Clustering and Topic Modeling Based on Rank-2 Nonnegative Matrix Factorization
Comments: This paper has been withdrawn by the author to clarify the authorship
Subjects: Machine Learning (cs.LG); Information Retrieval (cs.IR); Numerical Analysis (cs.NA)
[15]
Title: Train faster, generalize better: Stability of stochastic gradient descent
Subjects: Machine Learning (cs.LG); Optimization and Control (math.OC); Machine Learning (stat.ML)
[16]
Title: Machine Learning Methods to Analyze Arabidopsis Thaliana Plant Root Growth
Subjects: Machine Learning (cs.LG)
[17]
Title: Probabilistic Neural Network Training for Semi-Supervised Classifiers
Subjects: Machine Learning (cs.LG)
[18]
Title: l1-norm Penalized Orthogonal Forward Regression
Subjects: Machine Learning (cs.LG); Machine Learning (stat.ML)
[19]
Title: Deep Broad Learning - Big Models for Big Data
Subjects: Machine Learning (cs.LG)
[20]
Title: Parallel and Distributed Approaches for Graph Based Semi-supervised Learning
Subjects: Machine Learning (cs.LG)
[21]
Title: Diffusion-KLMS Algorithm and its Performance Analysis for Non-Linear Distributed Networks
Subjects: Machine Learning (cs.LG); Distributed, Parallel, and Cluster Computing (cs.DC); Information Theory (cs.IT); Systems and Control (cs.SY)
[22]
Title: Efficient Sampling for k-Determinantal Point Processes
Subjects: Machine Learning (cs.LG)
[23]
Title: Character-level Convolutional Networks for Text Classification
Comments: An early version of this work entitled "Text Understanding from Scratch" was posted in Feb 2015 as arXiv:1502.01710. The present paper has considerably more experimental results and a rewritten introduction, Advances in Neural Information Processing Systems 28 (NIPS 2015)
Subjects: Machine Learning (cs.LG); Computation and Language (cs.CL)
[24]
Title: Gravitational Clustering
Authors: Armen Aghajanyan
Subjects: Machine Learning (cs.LG)
[25]
Title: Theoretic Analysis and Extremely Easy Algorithms for Domain Adaptive Feature Learning | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2108578383922577, "perplexity": 9905.688499361419}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221215858.81/warc/CC-MAIN-20180820062343-20180820082343-00607.warc.gz"} |