http://mathhelpforum.com/discrete-math/28087-second-order-recurrence-relation-print.html | # Second Order Recurrence relation
• Feb 12th 2008, 12:42 PM
frostking2
Second Order Recurrence relation
I can usually solve second order ones fine but this one has me stumped!
a(n) = 6a(n-1) - 9a(n-2) a(0) = 1 a(1) = 1
Using t^n = 6t^(n-1) - 9t^(n-2), i.e. t^2 = 6t - 9,
I get t = 3 and only 3
When I try to plug in S(n) = 3^n and T(n) = 3^n and solve for
U(n) = bS(n) + dT(n) = b·3^n + d·3^n, using my initial values of 1 for U(0) and U(1), I cannot find values of b and d that satisfy both conditions! Please give me a clue or two or three.....
• Feb 12th 2008, 01:25 PM
galactus
Running it through Maple, I get:
$a_{n}=(1-\frac{2n}{3})\cdot{3^{n}}$
That's something to shoot for.
• Feb 12th 2008, 01:59 PM
Soroban
Hello, frostking2!
I think we speak the same language ... We just use different symbols.
Quote:
$a(n) \:= \:6a(n-1) - 9a(n-2),\quad a(0) = 1,\;\;a(1) = 1$
Here's the way I was taught to handle these . . .
We conjecture that: . $a(n) \:=\:X^n$ . . . that the function is exponential.
The equation becomes: . $X^n \:=\:6X^{n-1} - 9X^{n-2}\quad\Rightarrow\quad X^n - 6X^{n-1} + 9X^{n-2}\;=\;0$
Divide by $X^{n-2}\!:\;\;X^2 - 6X + 9 \:=\:0\quad\Rightarrow\quad (X-3)^2\:=\:0\quad\Rightarrow\quad X \:=\:3,\,3$
With repeated roots, the function is: . $a(n) \;=\;A\!\cdot\!3^n + B\!\cdot\!n\!\cdot\!3^n$
Plug in the first two values of the sequence . . .
. . $a(0) = 1:\;A\!\cdot\!3^0 + B(0)(3^0) \:=\:1 \quad\Rightarrow\quad A \:=\:1$
. . $a(1) = 1:\;A\!\cdot\!3^1 + B(1)(3^1) \:=\:1\quad\Rightarrow\quad B \:=\:-\frac{2}{3}$
Hence: . $a(n) \;=\;3^n - \frac{2}{3}n\cdot3^n \;=\;3^n\left(1 - \frac{2}{3}n\right) \;=\;3^n\left(\frac{3-2n}{3}\right)$
Therefore: . $\boxed{a(n) \;=\;3^{n-1}(3 - 2n)}$
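As a quick sanity check, a few lines of Python confirm that the boxed formula reproduces the recurrence and both initial values:

def closed_form(n):
    return 3**n * (3 - 2*n) // 3    # exact integers: a(n) = 3^(n-1) * (3 - 2n)

a = [1, 1]                          # a(0) = a(1) = 1
for n in range(2, 10):
    a.append(6*a[n-1] - 9*a[n-2])   # the recurrence a(n) = 6a(n-1) - 9a(n-2)

print(all(closed_form(n) == a[n] for n in range(10)))   # True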
• Feb 12th 2008, 02:59 PM
frostking2
Second order problem solved!
Thank you soooooo much. Stupid me I NEVER considered 0 for one of the values at the last portion of the problem, second roots and values for a and b. Of course it does work and so the regular process yields an answer. Thanks so much for your time and helpful attitude!!!!!
https://brilliant.org/discussions/thread/common-tangent-rules/ | # Common tangent rules
Problem 29 on the JEE mains inspired this note:
Things to know
•Pythagorean theorem
•Graphical transformations (only vertical and horizontal shifts)
This note will describe a quick and easy way to find the number of common tangents to two circles.
Given the following equations for the circles:
$(y-a)^2+(x-b)^2=r^2$
$y^2+x^2=s^2$
find the number of common tangents these circles will have.
You may wonder, "this isn't generalized, you assume one is at the origin". Let me explain.
What I did was move both circles $p$ units along the x-axis and $q$ units along the y-axis so that one sits at the origin. The number of common tangents doesn't change under a translation (or any rigid motion); only a distortion could change it.
Now for the tangent rules (a short code sketch implementing them follows the list): how many common tangents will they have if...
I) $a^2+b^2=(r+s)^2$ then there are 3 common tangents.
II) $a^2+b^2>(r+s)^2$ then there are 4 common tangents.
Now for case III:
III) $a^2+b^2<(r+s)^2$ then we have three sub-cases.
NOTE: you must check this condition first; don't jump straight to the sub-rules below. (Quick rule: if r = s in this case, then there are 2 tangents, since $(r-s)^2 = 0$.)
III-1) $a^2+b^2=(r-s)^2$ then there is 1 common tangent.
III-2) $a^2+b^2>(r-s)^2$ then there are 2 common tangents.
III-3) $a^2+b^2<(r-s)^2$ then there are no common tangents.
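Putting the rules together, here is a small Python sketch of the whole decision procedure (exact comparisons assume integer or rational inputs; with floats you would want a tolerance, and the degenerate case of two identical circles is ignored):

def common_tangents(a, b, r, s):
    # circles (x-b)^2 + (y-a)^2 = r^2 and x^2 + y^2 = s^2
    d2 = a*a + b*b                   # squared distance between the centres
    if d2 > (r + s)**2:  return 4    # case II: circles are separate
    if d2 == (r + s)**2: return 3    # case I: externally tangent
    if d2 > (r - s)**2:  return 2    # case III-2: two intersection points
    if d2 == (r - s)**2: return 1    # case III-1: internally tangent
    return 0                         # case III-3: one circle inside the other

print(common_tangents(3, 4, 2, 3))   # centres 5 apart, r + s = 5, so 3 tangents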
Note by Trevor Arashiro
4 years, 6 months ago
Can you tell me such a way of finding the common tangents to a circle & an ellipse?
- 4 years, 6 months ago
https://cs.stackexchange.com/questions/22360/proving-that-max-weighted-independent-set-is-in-np | # Proving that Max Weighted Independent Set is in NP
What I'm trying to do is to show that a problem in NP can be reduced to the min weight vertex cover problem.
I've chosen the max weight independent set problem. Input: a graph G with weights on each vertex; output: an independent set with the maximum total weight.
Before reducing, I've tried to show that the max weight independent set problem is in NP (which is usually the first step in these reductions). I'm trying to construct a verification algorithm for this problem, but I'm stuck on trying to show that the verification algorithm can check whether a certificate is the maximum weight independent set in polynomial time.
Any guidance or comments would be greatly appreciated. Thanks
Max weighted independent set is the decision problem whose YES-instances are pairs $(G,B)$ such that $G = (V,E,w)$ is a vertex-weighted graph that has an independent set of weight at least $B$. Nowhere is it claimed that $B$ is the maximum weight of an independent set. The problem (like many others) is defined in this way precisely so that it is in NP.
Also, in order to show that your problem is NP-hard, it might be easier to reduce from max independent set.
• Thanks very much for your swift response. The reason I phrased the max indep. problem that way was because the min vertex problem I'm provided is phrased the same way (i.e. input: a graph G with weights on each vertex, output: the vertex cover with the smallest weight). I'll try and use the independent set you described above for my reduction. Thanks again! – Allan Mar 7 '14 at 3:36
• Vertex cover is defined in the same way: the problem is to decide whether there is a vertex cover of weight at most a given weight. You're confusing the decision problem (what I describe) with the optimization problem (what you quote). – Yuval Filmus Mar 7 '14 at 3:42
The issue is with the version of the problem you are using. Note that as you define it, the output is required to be the maximum weight independent set, i.e. the optimum answer. $NP$ however is a class of decision problems, so the only valid outputs are Yes and No.
So if you want to show the problem is in $NP$ you first need to convert it to a decision problem:
$k$-Weight Independent Set
Input: A vertex weighted graph $G=(V,E,w)$ and an integer $k$.
Question: Is there a set $V'\subseteq V$ such that $V'$ is an independent set and $\sum_{v\in V'} w(v) \geq k$?
It should be easier to see that this version is in $NP$. There is an optimization class - $NPO$ - that corresponds to $NP$, but the normal definition is that a problem is in $NPO$ if its decision variant is in $NP$ - so you're still in the position where you want to deal with the decision variant.
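To make the verifier explicit, here is a sketch of the polynomial-time check for the decision version above; the certificate is simply the candidate set $V'$ (the names here are my own, not from the answers):

def verify(edges, w, k, certificate):
    S = set(certificate)
    if any(u in S and v in S for (u, v) in edges):   # independence check, O(|E|)
        return False
    return sum(w[v] for v in S) >= k                 # weight check, O(|V|)

Both checks are linear in the size of the input, so the verifier runs in polynomial time, which is exactly what membership in $NP$ requires.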
• Thank you! This cleared up a lot of confusion for me with regards to problems in NP – Allan Mar 7 '14 at 3:37
https://www.advanceduninstaller.com/Avira-System-Speedup-848d40c041cf9fa1eff8833384e9f308-application.htm | # Avira System Speedup
## How to uninstall Avira System Speedup from your system
This page contains complete information on how to remove Avira System Speedup for Windows. It is made by Avira Operations GmbH & Co. KG. Usually the Avira System Speedup application is found in the C:\Program Files (x86)\Avira\AviraSpeedup folder, depending on the user's option during install. The full uninstall command line for Avira System Speedup is C:\Program Files (x86)\Avira\AviraSpeedup\unins000.exe. Avira_System_Speedup.exe is the program's main file and it takes approximately 306.28 KB (313632 bytes) on disk.
Avira System Speedup installs the following executables on your PC, taking about 13.12 MB (13752400 bytes) on disk.
• Avira.SystemSpeedup.Core.Common.ErrorReporter.exe (27.66 KB)
• Avira.SystemSpeedup.Core.Common.Starter.exe (16.60 KB)
• Avira.SystemSpeedup.Core.Common.Updater.exe (21.59 KB)
• Avira.SystemSpeedup.SpeedupService.exe (26.98 KB)
• Avira.SystemSpeedup.SpeedupServiceInstaller.exe (15.10 KB)
• Avira.SystemSpeedup.Tools.exe (10.76 MB)
• Avira.SystemSpeedup.UI.Application.exe (328.45 KB)
• Avira.SystemSpeedup.UI.ServiceProfiler.exe (47.78 KB)
• Avira.SystemSpeedup.UI.Systray.exe (331.47 KB)
• Avira_System_Speedup.exe (306.28 KB)
• unins000.exe (1.26 MB)
The current web page applies to Avira System Speedup version 2.5.6.2633 only.
After the uninstall process, the application leaves leftovers on the computer. A few of these are listed below; a short script for checking them follows the lists.
Folders remaining:
• C:\Program Files\Avira\System Speedup
Files remaining:
• C:\Program Files\Avira\System Speedup\AARKRFA.dll
• C:\Program Files\Avira\System Speedup\Avira.Acp.dll
• C:\Program Files\Avira\System Speedup\Avira.SystemSpeedup.Core.Client.Services.dll
• C:\Program Files\Avira\System Speedup\Avira.SystemSpeedup.Core.Common.ErrorReporter.exe
• C:\Program Files\Avira\System Speedup\Avira.SystemSpeedup.Core.Common.Library.dll
• C:\Program Files\Avira\System Speedup\Avira.SystemSpeedup.Core.Common.Starter.exe
• C:\Program Files\Avira\System Speedup\Avira.SystemSpeedup.Core.Common.Updater.exe
• C:\Program Files\Avira\System Speedup\Avira.SystemSpeedup.Core.Common.UserShell.dll
• C:\Program Files\Avira\System Speedup\Avira.SystemSpeedup.Core.Defrag.dll
• C:\Program Files\Avira\System Speedup\Avira.SystemSpeedup.Core.Host.Database.dll
• C:\Program Files\Avira\System Speedup\Avira.SystemSpeedup.Core.Host.Interfaces.dll
• C:\Program Files\Avira\System Speedup\Avira.SystemSpeedup.Core.Host.Library.dll
• C:\Program Files\Avira\System Speedup\Avira.SystemSpeedup.Core.Host.Services.dll
• C:\Program Files\Avira\System Speedup\Avira.SystemSpeedup.Core.IPBBConnector.dll
• C:\Program Files\Avira\System Speedup\Avira.SystemSpeedup.Core.Services.Interface.dll
• C:\Program Files\Avira\System Speedup\Avira.SystemSpeedup.SpeedupService.exe
• C:\Program Files\Avira\System Speedup\Avira.SystemSpeedup.SpeedupServiceInstaller.exe
• C:\Program Files\Avira\System Speedup\Avira.SystemSpeedup.UI.Application.exe
• C:\Program Files\Avira\System Speedup\Avira.SystemSpeedup.UI.ServiceProfiler.exe
• C:\Program Files\Avira\System Speedup\Avira.SystemSpeedup.UI.Systray.exe
• C:\Program Files\Avira\System Speedup\Avira_System_Speedup.exe
• C:\Program Files\Avira\System Speedup\unins000.dat
• C:\Program Files\Avira\System Speedup\unins000.exe
• C:\Users\habib rach\AppData\Local\Microsoft\CLR_v4.0_32\UsageLogs\Avira_System_Speedup.exe.log
• C:\Users\habib rach\AppData\Local\Temp\is-9RMQ1.tmp\avira_system_speedup.tmp
• C:\Users\habib rach\AppData\Local\Temp\is-KA23B.tmp\avira_system_speedup.tmp
Registry keys:
• HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Uninstall\Avira System Speedup_is1
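For reference, a small Python sketch (Windows-only; the path and registry key are taken from the lists above, and the list can be extended with any of the files shown) that reports which leftovers still exist:

import os, winreg

paths = [r"C:\Program Files\Avira\System Speedup"]
key = r"Software\Microsoft\Windows\CurrentVersion\Uninstall\Avira System Speedup_is1"

for p in paths:
    print(p, "->", "still present" if os.path.exists(p) else "removed")
try:
    winreg.CloseKey(winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key))
    print("registry key -> still present")
except OSError:
    print("registry key -> removed")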
## How to erase Avira System Speedup with Advanced Uninstaller PRO
Avira System Speedup is an application marketed by the software company Avira Operations GmbH & Co. KG. Some people want to erase this application. This can be hard, because uninstalling it by hand requires some skill in removing Windows programs manually. The quickest way to erase Avira System Speedup is to use Advanced Uninstaller PRO. Here is how to do it:
1. If you don't have Advanced Uninstaller PRO on your Windows PC, install it. This is a good step because Advanced Uninstaller PRO is a very useful uninstaller and all around utility to clean your Windows computer.
2. Run Advanced Uninstaller PRO. It's recommended to take your time to get familiar with Advanced Uninstaller PRO's interface and wealth of features available. Advanced Uninstaller PRO is a very good Windows optimizer.
3. Press the General Tools button
4. Click on the Uninstall Programs feature
5. A list of the programs installed on your PC will be made available to you
6. Navigate the list of programs until you find Avira System Speedup or simply click the Search field and type in "Avira System Speedup". The Avira System Speedup application will be found automatically. When you click Avira System Speedup in the list of programs, the following data about the application is available to you:
• Star rating (in the left lower corner). This explains the opinion other users have about Avira System Speedup, ranging from "Highly recommended" to "Very dangerous".
• Reviews by other users - Press the Read reviews button.
• Details about the app you are about to remove, by clicking on the Properties button.
7. Press the Uninstall button. A confirmation page will come up. Accept the removal by clicking Uninstall. Advanced Uninstaller PRO will then uninstall Avira System Speedup.
8. After removing Avira System Speedup, Advanced Uninstaller PRO will ask you to run a cleanup. Press Next to perform the cleanup. All the items of Avira System Speedup which have been left behind will be found and you will be able to delete them. By uninstalling Avira System Speedup with Advanced Uninstaller PRO, you can be sure that no Windows registry items, files or folders are left behind on your disk.
Your Windows system will remain clean, speedy and able to take on new tasks.
## Disclaimer
The text above is not a piece of advice to remove Avira System Speedup by Avira Operations GmbH & Co. KG from your computer, nor are we saying that Avira System Speedup by Avira Operations GmbH & Co. KG is not a good application. This page only contains detailed info on how to remove Avira System Speedup supposing you want to. The information above contains registry and disk entries that other software left behind and Advanced Uninstaller PRO stumbled upon and classified as "leftovers" on other users' PCs.
2016-08-06 / Written by Daniel Statescu for Advanced Uninstaller PRO
Last update on: 2016-08-06 18:58:50.107
https://swyde.com/s/Uncertainty_principle | # Uncertainty principle
## Explanation
The uncertainty principle states that every particle has a wave nature associated with it, and that it is impossible to know both the position and the momentum of the particle beyond a certain combined level of precision at the same time. This is because the particle exists in a superposition of position and momentum states; if you were to know the position of the particle with high precision, then the momentum fundamentally cannot be precisely known. This principle is expressed mathematically as follows.
$\sigma_{x}\sigma_{p} \geq \frac{\hbar}{2}$
This says that the product of the standard deviations of the position $\sigma_{x}$ and momentum $\sigma_{p}$ cannot be made small at the same time. If the standard deviation of position is small, then the position of the particle is known with high precision. By the fundamental nature of particles, the standard deviation of the momentum must then be large enough that the product of the two is greater than $\frac{\hbar}{2}$, where $\hbar$ is the reduced Planck constant.
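As a concrete illustration (the numbers are illustrative, not from this page), confining a particle to roughly an atomic length scale already forces a sizeable momentum spread:

hbar = 1.054571817e-34          # reduced Planck constant, in J*s
sigma_x = 1e-10                 # position spread of 0.1 nm, roughly atomic scale
sigma_p_min = hbar / (2 * sigma_x)
print(sigma_p_min)              # about 5.3e-25 kg*m/s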
A related quantum relation is the Planck-Einstein formula for the energy of radiation:
$E = hf$
where $E$ is the radiation energy, $f$ is the frequency of the emitted radiation, and $h$ is Planck's constant.
http://www.scholarpedia.org/article/Excitable_media | # Excitable media
## Definition
An excitable medium is a dynamical system distributed continuously in space, each elementary segment of which possesses the property of excitability. Neighboring segments of an excitable medium interact with each other by diffusion-like local transport processes. In an excitable medium it is possible for excitation to be passed from one segment to another by means of local coupling. Thus, an excitable medium is able to support propagation of undamped solitary excitation waves, as well as wave trains.
Originally, the term excitability referred to the property of living organisms (or of their constituent cells) to respond strongly to the action of a relatively weak external stimulus. A typical example of excitability is the generation of a spike of transmembrane potential (action potential) by a cardiac cell, induced by a short depolarizing electrical perturbation of a resting state. Usually, the shape of the generated action potential does not depend on the perturbation strength, as long as the perturbation exceeds some threshold (all-or-nothing principle). After the generation of this strong response, the system returns to its initial resting state. A subsequent excitation can be generated after a suitable length of time, called the refractory period, has passed. This obviously non-linear, dynamical behavior is characteristic of a large class of systems in biology, chemistry and physics (Winfree, 2000; Krinsky, Swinney 1991; Kapral, Showalter 1995; Zykov, 1987; Mikhailov, 1990; Kaplan, Glass, 1995; Izhikevich, 2007).
## Examples
The most prominent examples of excitable media are
• propagation of electrical excitation in various biological tissues, including nerve fiber and myocardium
• waves of spreading depression in the retina of the eye
• concentration waves in yeast extract doing glycolysis
• calcium waves within frog eggs
## Mathematical models
In many applications the mathematical description of the dynamical processes in an excitable medium can be represented in the form of a reaction-diffusion system
$\frac{\partial E_i}{\partial t}=F_i(\nabla E_i,\vec{E}) + \nabla(D_i\nabla E_i) + I_i(\vec{r},t).$
Here the vector $$\vec{E}$$ determines the state of the system, the $$F_i$$ are nonlinear functions of $$\vec{E}$$ and, perhaps, $$\nabla E_i\ ,$$ and the $$D_i$$ are diffusion coefficients. The $$I_i$$ are external actions varying in space and time, which can be used for the initiation of excitation waves. The most famous examples of this type of description are the Oregonator model, the Hodgkin-Huxley model, the Noble model, other models of cardiac cells, etc.
The basic features of the self-sustained dynamics in excitable media can be reproduced by the relatively simple and widely used two-component activator-inhibitor system
$${\partial u \over \partial t} = \nabla^2 u + f(u,v),$$
$${\partial v \over \partial t} = \sigma \nabla^2 v + \epsilon g(u,v).$$
Here $$u(\vec {r},t)$$ and $$v(\vec {r},t)$$ describe the state of the system, the nonlinear functions $$f(u,v)$$ and $$g(u,v)$$ specify the local dynamics, and $$\sigma$$ determines the ratio between the two diffusion constants. If the parameter $$\epsilon \ll 1$$, this reaction-diffusion system exhibits relaxational dynamics with intervals of fast and slow motion. Depending on the particular shape of the nonlinear functions $$f(u,v)$$ and $$g(u,v)$$, this system is referred to as the Brusselator, FitzHugh-Nagumo model, Rinzel-Keller model, Barkley model, Morris-Lecar model, etc.
These generic systems of partial differential equations can be used to simulate the dynamics of one-, two- or three-dimensional media (Tyson, Keener, 1988). They can be adjusted to particular applications. For instance, the diffusion flows can be anisotropic or can include cross-diffusion terms. Local and/or global feedback loops can be introduced to reproduce naturally existing or artificially created stabilizing or destabilizing circuits.
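To make the two-component system concrete, here is a minimal one-dimensional numerical sketch using FitzHugh-Nagumo-type kinetics f(u,v) = u - u^3/3 - v and g(u,v) = u + a - b*v; all parameter values are illustrative assumptions, not taken from this article:

import numpy as np

N, dx, dt = 400, 0.5, 0.01
eps, sigma = 0.08, 0.0            # slow inhibitor (eps << 1), no inhibitor diffusion
a, b = 0.7, 0.8

u = -1.2 * np.ones(N)             # near the resting state
v = -0.62 * np.ones(N)
u[:10] = 1.5                      # super-threshold local stimulus

def lap(w):
    # discrete Laplacian with periodic boundaries (np.roll wraps the array)
    return (np.roll(w, 1) + np.roll(w, -1) - 2.0 * w) / dx**2

for _ in range(20000):            # explicit Euler time stepping up to t = 200
    f = u - u**3 / 3.0 - v
    g = u + a - b * v
    u, v = u + dt * (lap(u) + f), v + dt * (sigma * lap(v) + eps * g)

print(u.argmax() * dx)            # position of the pulse peak

With $$\epsilon \ll 1$$ and a super-threshold stimulus at one end, the integration produces a solitary pulse travelling along the line, the one-dimensional counterpart of the target and spiral waves discussed below.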
An important example of such a modification is the bidomain model, which describes cardiac tissue as consisting of two colocated continuous media termed the intracellular and extracellular domains. The intracellular and extracellular potentials $$\phi_i$$ and $$\phi_e$$ are specified by the bidomain equations with conductivity tensors $$G_i$$ and $$G_e\ :$$
$$\nabla \cdot (G_i\nabla \phi_i)= I_m,$$
$$\nabla \cdot (G_e\nabla \phi_e)= -I_m,$$
where the transmembrane current density $$I_m$$ is given by
$$I_m=\beta (C_m {\partial V_m \over \partial t} +I _{ion} + I_s).$$
Here $$\beta$$ is the membrane surface-to-volume ratio, $$C_m$$ is the membrane capacitance per unit area, $$I_{ion}$$ is the ionic current density generated by the cell membrane and $$I_s$$ is an imposed stimulation current density. The ionic current $$I_{ion}$$ depends on the transmembrane potential $$V_m=\phi_i-\phi_e$$ and on the vector $$\vec{m}$$ of gate variables in accordance with one of the models of cardiac cells. Each gate variable obeys a first order ordinary differential equation, as first specified in the Hodgkin-Huxley model:
$${d m_i \over d t}= \alpha_i(V_m)(1-m_i)-\beta_i(V_m)m_i.$$
In the limit $$G_e \rightarrow \infty$$ the extracellular potential becomes uniform in space and the bidomain description reduces to a multi-component reaction-diffusion system reproducing many important features of cardiac dynamics. The bidomain model, however, also allows one to consider the effects of, and the effects on, the surrounding extracellular electrical field (Keener, Sneyd, 1998).
Cellular automata and simplified descriptions of the kinematics of excitation waves represent additional effective tools for describing different aspects of excitable-medium dynamics. They minimize computation time and, under some approximations, admit analytical results.
## Self-organized patterns
The waves in passive, linear media differ considerably from waves in excitable media, which are nonlinear and driven by a distributed energy source. One-dimensional excitable media are able to support a single traveling wave and also wave trains propagating without decrement. If the medium size is sufficiently large, the propagation velocity and the wave profile do not depend on the initial and/or boundary conditions. This distinguishes the waves in excitable media from solitons propagating in nonlinear conservative systems, where velocities and pulse shapes strongly depend on the initial conditions.
In two-dimensional excitable media, one can observe expanding target patterns and rotating spiral waves. In three dimensions, rotating scroll waves are possible. All these dynamic phenomena represent well-known examples of self-organization in complex systems resulting in pattern formation.
The structure and the properties of the self-organized patterns are established by a balance between the energy influx from internal distributed sources and the energy dissipation. Thus, the spatio-temporal patterns in excitable media belong to self-organization processes occurring far from thermodynamic equilibrium.
## References
• A.T. Winfree, The Geometry of Biological Time (Springer, Berlin, Heidelberg, 2000).
• V. Krinsky and H. Swinney (eds), Wave and patterns in biological and chemical excitable media, (North-Holland, Amsterdam, 1991).
• R. Kapral and K. Showalter (eds.), Chemical Waves and Patterns, (Kluwer, Dordrecht, 1995).
• V.S. Zykov, Simulation of Wave Processes in Excitable Media, (Manchester Univ. Press, Manchester, 1987).
• A.S. Mikhailov, Foundation of Synergetics, (Springer, New York, 1990).
• D. Kaplan and L. Glass, Understanding Nonlinear Dynamics, (Springer, New York, 1995).
• E.M. Izhikevich, Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting, (MIT Press, 2007).
• J.P. Keener and J. Sneyd, Mathematical Physiology, (Springer, New York, 1998).
Internal references
• Gregoire Nicolis and Catherine Rouvas-Nicolis (2007) Complex systems. Scholarpedia, 2(11):1473.
• Eugene M. Izhikevich (2007) Equilibrium. Scholarpedia, 2(10):2014.
• Martin Fink and Denis Noble (2008) Noble model. Scholarpedia, 3(2):1803.
• Richard J. Field (2007) Oregonator. Scholarpedia, 2(5):1386.
• John Dowling (2007) Retina. Scholarpedia, 2(12):3487.
https://www.physicsforums.com/threads/japan-building-space-based-power-plant.335699/ | # Japan building space-based power plant
1. Sep 8, 2009
### Mk
http://www.bloomberg.com/apps/news?pid=20601080&sid=aF3XI.TvlsJk [Broken]
http://www.scientificamerican.com/b...hy-not-spend-21-billion-on-solar-p-2009-09-02
Well this should be interesting. Japan is one of the leading nations in nuclear power and is not afraid to make breakthroughs in technology and engineering.
I remember when I was 7 years old and would play Sim City 3000. The best power plant you could build was the one where satellites would beam down a maser of energy generated from solar radiation. The future will be cool.
Future aircraft and avians will have to watch out to avoid getting fried.
Last edited by a moderator: May 4, 2017
2. Sep 8, 2009
### Staff: Mentor
Raise your hand if you think this will happen?
Didn't think so.
Some interesting tidbits from the articles:
Assuming all of that is accurate, they are intending to provide the equivalent power of 1 nuclear reactor for twice the cost and taking twice as long to build (assuming pessimistic estimates for the nuclear plant and including US-like regulatory hurdles).
So existing US government research/studies imply they are off in their price estimate by a factor of 5.
B-O-O-N-D-O-G-G-L-E.
Last edited: Sep 8, 2009
3. Sep 8, 2009
### Integral
Staff Emeritus
I am with Russ, this is just too expensive. Another problem: geostationary orbit is already crowded, at least over areas where you would need the power, so I doubt that there is room in that orbit for the huge antenna needed for this project. And if they do not put the power station in a geostationary orbit, a single receiving station is not possible: you need to first track one receiving station, then jump to the next as it comes over the horizon.
4. Sep 8, 2009
### Cyrus
If Japan doesn't build this, where will James Bond go to save the world and enjoy Asian cuisine? Recall the last time he was in Japan they were launching Russian rockets into space out of a volcano and hijacking US space capsules.
Last edited by a moderator: Sep 9, 2009
5. Sep 8, 2009
### Topher925
It's a cool idea, and while it will work, I just don't think it is cost effective. I think there is more than one company working on the same concept, although I don't think any have proven it to be economically viable. One of my professors has a pretty good blog write-up about this; I'll see if I can find it.
6. Sep 8, 2009
### mgb_phys
I welcome our space based giant magnifying glass overlords.
7. Sep 9, 2009
That's a mere $70,000 per home - what a deal! Maybe they could offset some of their development costs by magnetizing it - to collect space junk for a fee.
8. Sep 9, 2009
### Ian_Brooks
why not pour that money behind ITER and get us fusion energy faster?
9. Sep 9, 2009
### mgb_phys
Because the money doesn't exist. USEF is a small think-tank/quango outside the Japanese space agency. They are no more likely to actually build or launch this than when some darpa funded researcher at a US university talks about legions of flesh eating robot zombie soldiers.
10. Sep 9, 2009
### WhoWee
Maybe there's a little room in the stimulus plan for a little joint venture?
11. Sep 9, 2009
### mheslep
Since when have boondoggle projects been deprived of funding? Five, ten years ago the Japanese could not find enough boondoggles to fund. There's the airport without planes, and this http://www.nytimes.com/1999/11/25/world/economic-stimulus-in-japan-priming-a-gold-plated-pump.html [Broken] extension that nobody needed. So no hands raised if you phrase the question "Is this a practical alternative", but phrased "if you think this will happen", and they gave themselves 30 years - you might well lose that bet.
Last edited by a moderator: May 4, 2017
12. Sep 9, 2009
### Proton Soup
not sure boondoggle is the right word here. it's a proof of concept. lots of engineering will go into designing something that hasn't been built before. but if there's anything the japanese are good at, it's building thousands of them smaller and cheaper. or, maybe it's just a japanese cash-for-clunkers program. keeps their economy "stimulated" and keeps scientists and engineers and whatever technological know-how they've accumulated in-country. in any case, it's a lot less silly than lunar/mars missions.
13. Sep 9, 2009
### mheslep
Another thought occurs: if space based solar power is placed in the same category as space exploration, especially manned, i.e. do it because a) we-want-to-see-if-we-can, and b) we'll make scientific and engineering advances along the way, then this project wins out in my mind over collecting another bag of rocks from the Moon, or even the first from Mars.
14. Sep 9, 2009
### russ_watters
### Staff: Mentor
The difference is in whether it will happen or not. Building a subway no one needs is still a functional subway. This project is not even intended to happen. The MO is the same as Bush's promised trip to Mars:
1. Promise the moon (or Mars, or in this case, the Sun). Be sure the timeline of your promise far exceeds your term in office.
2. Attach a cost to it. It doesn't matter if the cost is realistic or not, attaching a cost shows commitment.
3. Put together a funding schedule that starts with small-scale studies for you, now; and real engineering and development costs that have to be committed by someone else, a few years from now.
4. Commit just enough funds to the project to keep a few hundred engineers running around on hamster wheels, generating reports, until your term in office expires.
5. Leave office and hand the completely worthless project off to your successor.
This just in: http://www.usatoday.com/tech/science/space/2009-09-08-nasa-future_N.htm
Raise your hand if this surprises you. Didn't think so.
No, I'd bet my house on it.....well, maybe my car. Possible exception: The ISS has been kicked-around since the early '80s.
I toured a life-sized mockup of the then Space Station Freedom when I went to Space Camp in around 1989 (also in the hangar, a life-sized mockup of the Shuttle-C to heft it into orbit). I fully believe Reagan intended this to happen and he committed real development money to the project, but the timeline still required commitment across multiple administrations, making it difficult to sustain/complete the project.
Last edited by a moderator: May 4, 2017
15. Sep 9, 2009
### mheslep
Tempting bet, unless you drive a clunker.
16. Sep 9, 2009
### russ_watters
### Staff: Mentor
If you are referring to this: ....you are reading something into it that isn't there. Don't worry - it is intentionally misleading:
-The actual work being started is 4 years of research on wireless power transfer. No promise of even a prototype/proof of concept delivery was attached to that (in the article). That's in paragraph 2, which contains the only real news in the entire article.
-4 years doesn't take you to 2015, so we cannot conclude from the article that the proof of concept satellite is being funded. The timeline mismatch and lack of a statement about a deliverable in the one paragraph of real news implies that it isn't.
This is funding for 4 years of running engineers around in hamster wheels, nothing more.
17. Sep 9, 2009
### mheslep
This space gizmo in no way disables the existing power grid. I'm sure it will be much less obtrusive than all the torn up subway streets. Worst case they're out $20B worth of tax yen.
Yep, trouble is many real long term big ticket programs have that same look in the beginning. The ISS as you note was/is a good example. Heck subways are often a 10-20 year gig.
This the country of space borne Godzilla foes. I think they're due for something like this.
18. Sep 9, 2009
2004 Mazda 6i, 102,000 miles. I'd guess it is worth about $8 grand. I love that car. But I consider this easy money. Would you also like to bet on Australia's "Solar Tower"? Remember that one?
19. Sep 9, 2009
### russ_watters
### Staff: Mentor
You missed my point. The subway was built. This solar transmitter will not be built. One is a boondoggle that costs $1B and is built, the other is a boondoggle that is projected to cost $40B, but $1B is spent and it produces nothing.
20. Sep 9, 2009
### mheslep
I get it, I get it.
$10B … $21B
Nothing spent yet by Japan, per those original articles.
These projects can get built as one-offs if there's enough excitement about them: inefficient California wind turbines in the 70s, same with the 30-40 year old 'Solar One' solar thermal plant. Remember we're not talking about an entire industry here, just one bunch o' floating mirrors. And I need only one to collect your ride. I'd say if it happens they'll scale it down to 100-300MW. Does that get me a spare tire and your hood ornament?
Last edited: Sep 9, 2009
https://imaging.mrc-cbu.cam.ac.uk/statswiki/FAQ/zequiv | FAQ/zequiv - CBU statistics Wiki
# Equivalence test formulation of one sample z-test
For a population mean $$\theta$$ estimated by a sample mean:
H0: $$\theta \leq -d$$ or $$\theta \geq d$$ versus HA: $$-d \leq \theta \leq d$$.
If ind equals 1 we reject nonequivalence, concluding $$-d \leq \theta \leq d$$; otherwise we fail to reject the null hypothesis of nonequivalence, for the given type II error, beta.
[TYPE INTO R THE DESIRED INPUTS D, N, MEAN AND BETA USING VALUES IN FORM BELOW].
beta <- 0.05  # type II error
d <- 0.2      # equivalence margin
n <- 10       # sample size
mean <- 0     # observed sample mean (population SD taken as 1 in this formulation)
[THEN COPY AND PASTE THE BELOW INTO R]
# n*mean^2 has a noncentral chi-square(df = 1, ncp = n*d^2) distribution when theta = d
cv <- sqrt(qchisq(p=beta, df=1, ncp=n*d^2))
cv2 <- cv/sqrt(n)  # critical value on the scale of the sample mean
ind <- 0
if (abs(mean) < cv2) ind <- 1  # two-sided test, so compare |mean|
print(ind)
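The same computation can be cross-checked in Python (scipy.stats.ncx2 is the noncentral chi-square distribution, mirroring qchisq above):

import math
from scipy.stats import ncx2

beta, d, n, mean = 0.05, 0.2, 10, 0.0
cv = math.sqrt(ncx2.ppf(beta, df=1, nc=n * d**2))
print(1 if math.sqrt(n) * abs(mean) < cv else 0)   # 1: reject nonequivalence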
https://swmath.org/software/10532 | # Biq Mac
Solving Max-cut to optimality by intersecting semidefinite and polyhedral relaxations. We present a method for finding exact solutions of Max-Cut, the problem of finding a cut of maximum weight in a weighted graph. We use a Branch-and-Bound setting that applies a dynamic version of the bundle method as bounding procedure. This approach uses Lagrangian duality to obtain a “nearly optimal” solution of the basic semidefinite Max-Cut relaxation, strengthened by triangle inequalities. The expensive part of our bounding procedure is solving the basic semidefinite relaxation of the Max-Cut problem, which has to be done several times during the bounding process. We review other solution approaches and compare the numerical results with our method. We also extend our experiments to instances of unconstrained quadratic 0-1 optimization and to instances of the graph equipartition problem. The experiments show that our method nearly always outperforms all other approaches. In particular, for dense graphs, where linear programming-based methods fail, our method performs very well. Exact solutions are obtained in a reasonable time for any instance of size up to $n = 100$, independent of the density. For some problems of special structure, we can solve even larger problem classes. We could prove optimality for several problems of the literature where, to the best of our knowledge, no other method is able to do so.
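For orientation, the basic semidefinite Max-Cut relaxation referred to here is the standard one (a sketch in common notation, not copied from the paper): maximize $\frac{1}{4}\langle L, X\rangle$ subject to $\mathrm{diag}(X) = e$ and $X \succeq 0$, where $L$ is the weighted Laplacian of the graph and $e$ is the all-ones vector. The strengthening by triangle inequalities adds, for all triples $i,j,k$, constraints of the form $X_{ij} + X_{ik} + X_{jk} \geq -1$ and $X_{ij} - X_{ik} - X_{jk} \geq -1$.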
## References in zbMATH (referenced in 72 articles, 1 standard article)
https://www.physicsforums.com/threads/beta-minus-decay.697773/ | # Beta Minus Decay
1. ### omiros
Hello everybody, I am a first year physics student and I have a question about Nuclear Beta Minus Decay.
I was thinking the other day, about a beta decay. After the nucleus is formed, the new atoms state is a positive ion with charge +1.
If we think of the electron escaping from somewhere close to the nucleus the electron will be pulled by the nucleus.
Is that electron bound at any time at all, or not?
I understand that the function that describes the acceleration of the electron is going to be very weird, but I just care about the final kinetic energy of both.
Also, what happens to the rest of the electrons? How are they going to react in such a case? Do any of them emit photons? Do we usually have collisions between that electron and the 'bound' electrons?
2. ### Bill_K
Atomic electron energy levels are rather small compared to the energies involved in beta decay. So the emitted electron would not be bound, although it's true it will be a bit slowed down escaping the atom, and this correction needs to be taken into account when observing the decay's energy spectrum.
It's possible, although infrequent, for the emitted electron to collide with an atomic electron. A more interesting example of the interplay between nuclear decay and the atomic electrons is an alternative decay mode to beta plus decay called electron capture or K-capture, in which the nucleus grabs an atomic electron. Since this electron is taken from a low-lying shell, the atom needs to fill the hole, by emitting an X-ray, or sometimes a second ("Auger") electron.
3. ### snorkack
Most of the time. Not always.
The decay energy of rhenium 187 is just 2.6 keV, whereas the binding energy of the inner electrons of heavy atoms is in the hundreds of keV.
The rhenium 187 nucleus is over a billion times shorter-lived than the neutral atom. It follows that when a rhenium 187 nucleus undergoes beta decay, over 99.999999% of the time the electron is not emitted but goes into some bound state (ground or excited).
The neutral dysprosium 163 atom is completely stable, yet the bare nucleus does decay; there the electron is always bound.
The beta decay energy is randomly divided between electron and antineutrino. Even if atomic energy levels are small, there is small but nonzero chance that the antineutrino happens to get almost all beta decay energy and the electron gets little enough to stay bound to the atom.
K-capture depends on the choice of shell from which the electron is captured. Only s electrons can ever be captured, because only they have appreciable probability density at the nucleus - but all s electrons do, not just the 1s ones. The probability of K-capture is simply bigger than that of L-capture or higher-shell captures... except when K-capture is energetically impossible, as is the case with holmium 163.
https://reperiendi.wordpress.com/2007/02/ | # reperiendi
## Faces of War
Posted in Perception by Mike Stay on 2007 February 15
The WWI mask shops took over where the rudimentary plastic surgery left off:
“Thanks to you, I will have a home,” one soldier had written her. “…The woman I love no longer finds me repulsive, as she had a right to do.”
## MILL, BMCCs, and dinatural transformations
Posted in Category theory, Math, Quantum by Mike Stay on 2007 February 3
I’m looking at multiplicative intuitionistic linear logic (MILL) right now and figuring out how it’s related to braided monoidal closed categories (BMCCs).
The top and bottom lines in the inference rules of MILL are functors that you get from the definition of BMCCs, so they exist in every BMCC. Given a braided monoidal category C, the turnstile symbol
$\displaystyle \vdash:C^{\rm{op}} \times C \to \mathbf{Set}$
is the external hom functor (that is, $x \vdash y$ is the set of morphisms from $x$ to $y$). Inference rules are natural or dinatural transformations.
Let $D$ be a category; then a natural transformation $\alpha$ between two functors
$\displaystyle F,G:C\to D$
can be seen as a functor from the category $A \times C$ to $D$, where $A$ is the category with two objects labelled $F$ and $G$ and one nontrivial arrow between them, labelled $\alpha:$
$\displaystyle F\xrightarrow{\alpha}G$
For every morphism $f: x \to y$ in $C$ we get a commuting square in $(A \times C)$:
$\displaystyle \begin{array}{ccc}(F,x) & \xrightarrow{(1_F, f)} & (F,y) \\ \left. (\alpha, 1_x)\right\downarrow & & \left\downarrow (\alpha,1_y)\right. \\ (G,x) & \xrightarrow{(1_G, f)} & (G,y)\end{array}$
that maps to a commuting square in $D.$
$\displaystyle \begin{array}{ccc}Fx & \xrightarrow{Ff} & Fy \\ \left. \alpha_x\right\downarrow & & \left\downarrow \alpha_y \right. \\ Gx & \xrightarrow{Gf} & Gy \end{array}$
In other words, it assigns to each object x in C a morphism α_x in D such that the square above commutes.
Now consider the case where we want a natural transformation α between two functors F,G: C^op × C -> D.
Given f:x->y, g:s->t, we get a commuting cube in (A × C^op × C) that maps to a commuting cube in D.
G(1_t,f)
Gtx -------------------> Gty
7| 7|
/ | / |
α_tx / |G(g,1_x) α_ty / |
/ | / | G(g,1_y)
/ | / |
/ V G(1_s,f) / V
/ Gsx -------------- /---> Gsy
/ 7 / 7
/ / / /
/ / / /
/ / F(1_t,f) / / α_sy
Ftx -------------------> Fty /
| / | /
| / | /
F(g,1_x) | / α_sx | F(g,1_y)
| / | /
| / | /
V/ F(1_s,f) V/
Fsx -------------------> Fsy
This is bigger, but still straightforward.
To get a dinatural transformation, we set g:=f and then choose a specific route around the cube so that both of the indices are the same on α.
....................... Gyy
.. 7|
. . / |
. . α_yy / |
. . / | G(f,1_y)
. . / |
. . G(1_x,f) / V
. Gxx -------------- /---> Gxy
. 7 / .
. / / .
. / / .
. / F(1_y,f) / .
Fyx -------------------> Fyy .
| / . .
| / . .
F(f,1_x) | / α_xx . .
| / . .
| / . .
V/ ..
Fxx .......................
In other words, a dinatural transformation α: F -> G assigns to each object x a morphism α_xx such that the diagram above commutes.
Dinatural transformations come up when you’re considering two of MILL’s inference rules, namely
x ⊢ y y ⊢ z
------- (Identity) and --------------- (Cut)
x ⊢ x x ⊢ z
These two have the same symbol appearing on both sides of the turnstile, x in the Identity rule and y in the Cut rule. Setting
Fxy = * ∈ Set,
where * is the one-element set, and
Gxy = x ⊢ y ∈ Set,
the Identity rule specifies that given f:x->y we have that f o 1_x = 1_y o f:
.......................y ⊢ y
.. 7|
. . / |
. . α_yy / |
. . / | 1_y o f
. . / |
. . f o 1_x / V
. x ⊢ x ------------- /--> x ⊢ y
. 7 / .
. / / .
. / / .
. / 1 / .
* ---------------------> * .
| / . .
| / . .
1 | / α_xx . .
| / . .
| / . .
V/ ..
* ........................
where
α_xx: * -> (x ⊢ x), * |-> 1_x
picks out the identity morphism on x.
In the Cut rule, we let j:x->s, k:t->z, and f:s->t,
F(t,s) = (x ⊢ s, t ⊢ z)
j f k
F(1,f) = (x ---> s ---> t, t ---> z)
j f k
F(f,1) = (x ---> s, s ---> t ---> z)
G(s,t) = x ⊢ z
and consider the diagram for a morphism f:s->t in C.
.......................x ⊢ z
.. 7|
. . / |
. . composition / |
. . / | 1
. . / |
. . 1 / V
. x ⊢ z ------------- /--> x ⊢ z
. 7 / .
. / / .
. / / .
. / F(f,1) / .
(x ⊢ s, t ⊢ z) -------> (x ⊢ s, s ⊢ z)
| / . .
| / . .
| / composition . .
F(1,f) | / . .
| / . .
V/ ..
(x ⊢ t, t ⊢ z) .................
which says that composition of morphisms is associative.
## Job, Isaiah, Percy, Sting
Posted in Poetry by Mike Stay on 2007 February 3
Job 30: 29
29 I am a brother to dragons, and a companion to owls.
Isa 13:19-22
19 And Babylon, the glory of kingdoms, the beauty of the Chaldees’ excellency, shall be as when God overthrew Sodom and Gomorrah.
20 It shall never be inhabited, neither shall it be dwelt in from generation to generation: neither shall the Arabian pitch tent there; neither shall the shepherds make their fold there.
21 But wild beasts of the desert shall lie there; and their houses shall be full of doleful creatures; and owls shall dwell there, and satyrs shall dance there.
22 And the wild beasts of the islands shall cry in their desolate houses, and dragons in their pleasant palaces: and her time is near to come, and her days shall not be prolonged.
Isa. 34: 11-15
11 ¶ But the cormorant and the bittern shall possess it; the owl also and the raven shall dwell in it: and he shall stretch out upon it the line of confusion, and the stones of emptiness.
12 They shall call the nobles thereof to the kingdom, but none shall be there, and all her princes shall be nothing.
13 And thorns shall come up in her palaces, nettles and brambles in the fortresses thereof: and it shall be an habitation of dragons, and a court for owls.
14 The wild beasts of the desert shall also meet with the wild beasts of the island, and the satyr shall cry to his fellow; the screech owl also shall rest there, and find for herself a place of rest.
15 There shall the great owl make her nest, and lay, and hatch, and gather under her shadow: there shall the vultures also be gathered, every one with her mate.
Ozymandias
I met a traveller from an antique land
Who said:–Two vast and trunkless legs of stone
Stand in the desert. Near them on the sand,
Half sunk, a shatter’d visage lies, whose frown
And wrinkled lip and sneer of cold command
Tell that its sculptor well those passions read
Which yet survive, stamp’d on these lifeless things,
The hand that mock’d them and the heart that fed.
And on the pedestal these words appear:
“My name is Ozymandias, king of kings:
Look on my works, ye mighty, and despair!”
Nothing beside remains: round the decay
Of that colossal wreck, boundless and bare,
The lone and level sands stretch far away.
— Percy Bysshe Shelley
A stone’s throw from Jerusalem
I walked a lonely mile in the moonlight
And though a million stars were shining
My heart was lost on a distant planet
That whirls around the April moon
Whirling in an arc of sadness
I’m lost without you, I’m lost without you
Though all my kingdoms turn to sand and fall into the sea
And from the dark secluded valleys
I heard the ancient songs of sadness
But every step I thought of you
Every footstep only you
Every star a grain of sand
The leavings of a dried up ocean
Tell me, how much longer,
How much longer?
They say a city in the desert lies
The vanity of an ancient king
But the city lies in broken pieces
Where the wind howls and the vultures sing
These are the works of man
This is the sum of our ambition
It would make a prison of my life
If you became another’s wife
With every prison blown to dust, my enemies walk free
And I have never in my life
Felt more alone than I do now
Although I claim dominions over all I see
It means nothing to me
There are no victories
In all our histories
Without love
A stone’s throw from Jerusalem
I walked a lonely mile in the moonlight
And though a million stars were shining
My heart was lost on a distant planet
That whirls around the April moon
Whirling in an arc of sadness
I’m lost without you, I’m lost without you
And though you hold the keys to ruin of everything I see
With every prison blown to dust, my enemies walk free
Though all my kingdoms turn to sand and fall into the sea
— Gordon Matthew ‘Sting’ Sumner
https://www.physicsforums.com/threads/solenoid-magnetising-current.921992/ | # Solenoid (magnetising current)
1. Aug 6, 2017
### Suyash Singh
1. The problem statement, all variables and given/known data
Part d of attached question
2. Relevant equations
Solenoid equations
B = μ₀ n i
3. The attempt at a solution
B(absence) = B(presence)
μ₀ n (i + i_m) = μ n i
i_m = 798 A
2. Aug 6, 2017
### Staff: Mentor
I don't understand what you calculated, but such a small difference looks like a rounding error.
3. Aug 6, 2017
### rude man
The answer should be M/H x 2A.
4. Aug 7, 2017
### Suyash Singh
Can you guys give a formula for the magnetising current of a solenoid? Every book gives one for a transformer but not for a solenoid.
5. Aug 7, 2017
### rude man
I just did, in post 3.
But you raise a good point: magnetizing current in a transformer is fundamentally different from magnetizing current in a solenoid. The latter term is actually rarely encountered. Why? Because it's a fictitious current whereas for a transformer it's very real.
For a solenoid, magnetizing current is defined to be the extra current that would be needed to restore the flux if the high-permeability core were removed.
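For a concrete check of that definition, here is my own sketch (not from the thread; the attached problem is not shown, but the numbers above are consistent with a relative permeability of 400 and a coil current of 2 A):

```python
mu_r = 400.0  # assumed relative permeability of the core (not shown in the thread)
i = 2.0       # assumed coil current in amperes (not shown in the thread)

# Same flux with and without the core:
# mu0 * n * (i + i_m) = mu_r * mu0 * n * i  =>  i_m = (mu_r - 1) * i
i_m = (mu_r - 1.0) * i
print(i_m)  # 798.0 A, matching the attempt above; note mu_r - 1 = 399 = M/H
```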
For a transformer it's the actual primary current with the secondary open. It's the current needed to establish the flux given by Faraday's law, emf = -N dφ/dt. Note that emf = applied voltage minus the primary loss voltage drops.
Last edited: Aug 7, 2017
http://digitalhaunt.net/Kentucky/calculated-percent-error-always-zero-positive.html
# Calculated percent error always zero positive
When it halves again, it is a −69 cNp change (a decrease). Examples of comparisons: Car M costs $50,000 and car L costs $40,000. When calculating percent error, just take the ratio of the amount of error to the accepted value. Especially if one can only calculate data-dependent measures like MAPE or MASE (not being able to calculate BIC or AIC because the models are from different classes).
d_r = |x − y| / max(|x|, |y|), if at least one of the values does not equal zero. If you want all data points to be represented with the "same" quality of fit, weighted regression is required. In my case, this shifts the problem to where Y_cal + Y_exp is near zero. (However, in
The formula given above behaves in this way only if x_reference is positive, and reverses this behavior if x_reference is negative. Goodwin and Lawton (1999) point out that on a percentage scale, the MAPE is symmetric and the sMAPE is asymmetric. This is the case when it is important to determine error, but the direction of the error makes no difference.
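For illustration, a minimal sketch of the max-denominator relative difference d_r quoted above (the function name is my own):

```python
def relative_difference(x: float, y: float) -> float:
    """d_r = |x - y| / max(|x|, |y|); defined when at least one value is nonzero."""
    denom = max(abs(x), abs(y))
    if denom == 0.0:
        raise ValueError("undefined when both values are zero")
    return abs(x - y) / denom

print(relative_difference(50000, 40000))  # 0.2, i.e. car M and car L differ by 20%
```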
Depending on your answer, there are possible alternatives. – Claude Leibovici. @ClaudeLeibovici: I am doing a parameter estimation problem. Presumably he never imagined that data and forecasts can take negative values.
The relative error is usually more significant than the absolute error. A -5% error could mean that your results were a little low.
If instead I use the definition: $\text{relative error} = \frac{x_{true}-x_{test}}{x_{test}}$, then the relative error is always 100%. E.g.
There are several common sources of such random uncertainties in the type of experiments that you are likely to perform: uncontrollable fluctuations in initial conditions in the measurements. A -5% error could mean that your results were a little low.
Having a negative percent error isn't worse than positive percent error -- it could mean the same thing. Then, convert the ratio to a percent. "You need a maximum for that." – Seyhmus Güngören. Assume you made the following five measurements of a length:

Length (mm)    Deviation from the mean
22.8           0.0
23.1           0.3
22.7           0.1
For example 5.00 has 3 significant figures; the number 0.0005 has only one significant figure, and 1.0005 has 5 significant figures. A percent error can be left as a negative though, and this would be perfectly acceptable (or even preferred) depending on what you're doing. When I said MAPE or MASE I meant out-of-sample errors.
No it isn't. If your space is anisotropic, but you still use 1/r^2 as the denominator, the ratio would still work well as a relative error. I am interested in the relative error (i.e.
A good MAPE is one that is better than what everyone else gets for the same forecast objective. Incorrect measuring technique: for example, one might make an incorrect scale reading because of parallax error. Measures of relative difference are unitless numbers expressed as a fraction. For example, if you say that the length of an object is 0.428 m, you imply an uncertainty of about 0.001 m.
E.g., $(\mu_{test} - x_{true}) / \sigma_{test}$ will give you a sort of 'relativized error'. I'm trying to use it but I got some errors. They come up a lot. Applying the equation for Absolute error.
The essential idea is this: is the measurement good to about 10%, or to about 5%, or 1%, or even 0.1%? However, I am working on a prediction problem for a university project and I would be glad to know if there is some paper which explains why this should/could be used. That also doesn't help, because this bounds the error to the range [0,2], and wherever one of Y_cal, Y_exp is zero, the error normalised this way will be 1. A 5% error could mean that your observed result was a little high.
Rob J Hyndman 1. Percent error is the difference between the true value and the estimate, divided by the true value, with the result multiplied by 100 to make it a percentage. Significant figures: whenever you make a measurement, the number of meaningful digits that you write down implies the error in the measurement.
Could you tell me in which context you face this situation? Although random errors can be handled more or less routinely, there is no prescribed way to find systematic errors.
https://crypto.stackexchange.com/questions/58592/tls-1-3-hkdf-expand | # TLS 1.3: HKDF-Expand
In RFC 5869 I found that the HKDF-Expand function takes as input a parameter $L$ which denotes the length of the output keying material in octets ($\leq 255*\text{Hash}_{Length}$); $N$ is then computed as $\operatorname{ceil}(L/\text{Hash}_\text{Length})$, and when $L$ is a multiple of $\text{Hash}_\text{Length}$ the number of output blocks equals $N\in [1,..,255]$.
But when analyzing the use of this function in TLS 1.3 to form the key schedule, I see that the input for the $L$ parameter is Hash.length whenever the Derive-Secret function is called for each generated key:
DeriveSecret(EarlySecret, "ext binder" | "res binder", "") = HKDF-Expand-Label(EarlySecret, "ext binder" | "res binder" | Hash(""), Hash.length ) = HKDF-Expand(EarlySecret, "ext binder" | "res binder" | Hash(""), Hash.length)
i.e. $L$ = Hash.length and $N = 1$? And so for each key.
I'm sure that I'm wrong somewhere but I do not understand where.
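For reference, here is a minimal sketch of HKDF-Expand as specified in RFC 5869 (my own code, standard library only; the `info` value below is illustrative). With length equal to Hash.length, $N = \operatorname{ceil}(32/32) = 1$ and the loop runs exactly once, which is the case described above:

```python
import hashlib
import hmac
from math import ceil

def hkdf_expand(prk: bytes, info: bytes, length: int, hash_name: str = "sha256") -> bytes:
    """RFC 5869: T(i) = HMAC-Hash(PRK, T(i-1) | info | i); OKM = first `length` octets."""
    hash_len = hashlib.new(hash_name).digest_size
    n = ceil(length / hash_len)
    assert 1 <= n <= 255
    t, okm = b"", b""
    for i in range(1, n + 1):
        t = hmac.new(prk, t + info + bytes([i]), hash_name).digest()
        okm += t
    return okm[:length]

# One output block when L = Hash.length (here 32 octets for SHA-256):
okm = hkdf_expand(b"\x00" * 32, b"example info", 32)
print(len(okm))  # 32
```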
• I made some edits to your question to hopefully improve the readability - please ensure that I did not accidentally change the meaning of your question. – Ella Rose Apr 22 '18 at 23:21
• These keys don't seem to be directly used as session keys. A key the size of the hash makes sense as it provides the maximum of security, minimum of processing and it doesn't require any post-processing of the output keying material. – Maarten Bodewes Apr 25 '18 at 0:26
• Maarten Bodewes, then what is the difference between the functions used to diversify the keys – HKDF.Extract and HKDF.Expand – if HKDF.Expand works for one round? – Kirill Voevodin Apr 25 '18 at 9:18
• Sorry if I could not get back to you at the time (life was happening to me in April 2018). Could you have a look at the given answer? HKDF Extract is used to compress entropy, HKDF expands compressed entropy using the given info and output size. I'm not sure why HKDF-Extract is not used for TLS. – Maarten Bodewes Dec 27 '18 at 15:30
https://learnzillion.com/lesson_plans/2055-8-graph-inequalities-c | # 8. Graph inequalities (C)
teaches Common Core State Standards 111.26.9.A http://ritter.tea.state.tx.us/rules/tac/chapter111/index.html
teaches Common Core State Standards 111.26.9.B http://ritter.tea.state.tx.us/rules/tac/chapter111/index.html
teaches Common Core State Standards 111.39.5.B http://ritter.tea.state.tx.us/rules/tac/chapter111/index.html
teaches Common Core State Standards 111.27.10.A http://ritter.tea.state.tx.us/rules/tac/chapter111/index.html
teaches Common Core State Standards 111.27.10.B http://ritter.tea.state.tx.us/rules/tac/chapter111/index.html
teaches Common Core State Standards 111.27.10.C http://ritter.tea.state.tx.us/rules/tac/chapter111/index.html
teaches Common Core State Standards 111.27.11.B http://ritter.tea.state.tx.us/rules/tac/chapter111/index.html
teaches Common Core State Standards 7.AF.3 http://www.doe.in.gov/standards/mathematics
teaches Common Core State Standards AI.L.2 http://www.doe.in.gov/standards/mathematics
teaches Common Core State Standards CCSS.Math.Content.7.EE.B.4b http://corestandards.org/Math/Content/7/EE/B/4/b
teaches Common Core State Standards CCSS.Math.Practice.MP4 http://corestandards.org/Math/Practice/MP4
Lesson objective: Understand that there are multiple values that make an inequality true and they can be shown on the number line.
Students bring prior knowledge of graphing inequalities from 6.EE.B.8. This prior knowledge is extended to discrete and continuous solutions as students learn to utilize context in graphing their solution sets. A conceptual challenge students may encounter is accounting for restrictions inherent in the context of a problem.
The concept is developed through work with a number line, which can show multiple numbers in a solution simultaneously.
This work helps students deepen their understanding of numbers because the solution to an inequality can be an infinite number of values.
Students engage in Mathematical Practice 7 (Look for and make use of structure) as they graph the numbers in the solution to a situation resulting in an inequality.
Key vocabulary:
• inequality
• isolate
• variable
http://slideplayer.com/slide/3431221/
Chapter 9: Vector Differential Calculus 1 9.1. Vector Functions of One Variable -- a vector, each component of which is a function of the same variable i.e., F(t) = x(t) i + y(t) j + z(t) k, where x(t), y(t), z(t): component functions t : a variable e.g., ◎ Definition 9.1: Vector function of one variable
2 。 F(t) is continuous at some t 0 if x(t), y(t), z(t) are all continuous at t 0 。 F(t) is differentiable if x(t), y(t), z(t) are all differentiable ○ Derivative of F(t): e.g.,
3 ○ Curve: C(x(t), y(t), z(t)), in which x(t), y(t), z(t): coordinate functions x = x(t), y = y(t), z = z(t): parametric equations F(t)= x(t)i + y(t)j + z(t)k: position vector pivoting at the origin Tangent vector to C: Length of C:
4 ○ Example 9.2: Position vector: Tangent vector: Length of C:
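The formulas on this slide were images in the original; as a stand-in, here is the same computation sketched in Python with SymPy for an assumed helix F(t) = cos t i + sin t j + t k, 0 ≤ t ≤ 2π:

```python
import sympy as sp

t = sp.symbols('t', real=True)
F = sp.Matrix([sp.cos(t), sp.sin(t), t])            # assumed position vector

tangent = F.diff(t)                                 # tangent vector F'(t)
speed = sp.simplify(sp.sqrt(tangent.dot(tangent)))  # |F'(t)| = sqrt(2)
length = sp.integrate(speed, (t, 0, 2 * sp.pi))     # arc length = 2*sqrt(2)*pi
print(tangent.T, speed, length)
```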
5 ○ Distance function: t(s): inverse function of s(t) ○ Let Unit tangent vector:
6 。 Example 9.3: Position function: Inverse function:
7 Unit tangent vector:
8 ○ Assuming that the derivatives exist, then (1) (2) (3) (4) (5)
9 9.2. Velocity, Acceleration, Curvature, Torsion A particle moving along a path has position vector Distance function: ◎ Definition 9.2: Velocity: (a vector) tangent to the curve of motion of the particle Speed : (a scalar) the rate of change of distance w.r.t. time
10 Acceleration: or (a vector) the rate of change of velocity w.r.t. time ○ Example 9.4: The path of the particle is the curve whose parametric equations are
11 Velocity: Speed: Acceleration: Unit tangent vector:
12 ○ Definition 9.4: Curvature (a magnitude): the rate of change of the unit tangent vector w.r.t. arc length s For variable t,
13 ○ Example 9.7: Curve C: t > 0 Position vector:
14 Tangent vector: Unit tangent vector: Curvature:
15 ◎ Definition 9.5: Unit Normal Vector i) ii) Differentiation
16 ○ Example 9.8: Position vector: t > 0 Write as a function of arc length s (Example 9.7) Solve for t, Position vector:
17 Unit tangent vector: Curvature:
18 Unit normal vector:
19 9.2.1 Tangential and Normal Components of Acceleration
20 ◎ Theorem 9.1: where Proof:
21 ○ Example 9.9: Compute and for curve C with position vector Velocity: Speed: Tangential component: Acceleration vector:
22 Normal component: Acceleration vector: Since, curvature: Unit tangent vector: Unit normal vector:
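The worked values in Example 9.9 were images; the decomposition a = a_T T + a_N N itself, with a_T = d|v|/dt and a_N = sqrt(|a|² − a_T²), can be sketched for an assumed curve R(t) = t i + t² j:

```python
import sympy as sp

t = sp.symbols('t', real=True)
R = sp.Matrix([t, t**2, 0])                      # assumed position vector

v = R.diff(t)                                    # velocity
a = v.diff(t)                                    # acceleration
speed = sp.sqrt(v.dot(v))

a_T = sp.diff(speed, t)                          # tangential component d|v|/dt
a_N = sp.sqrt(sp.simplify(a.dot(a) - a_T**2))    # normal component
print(sp.simplify(a_T), sp.simplify(a_N))        # 4t/sqrt(4t**2 + 1), 2/sqrt(4t**2 + 1)
```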
23 ◎ Theorem 9.2: Curvature Proof:
24 ○ Example 9.10: Position function:
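Theorem 9.2 is the usual cross-product form κ = |F′(t) × F″(t)| / |F′(t)|³ (assumed here, since the slide's formula was an image); a sketch for an assumed twisted cubic:

```python
import sympy as sp

t = sp.symbols('t', real=True)
F = sp.Matrix([t, t**2, t**3])                    # assumed position function
v = F.diff(t)
a = F.diff(t, 2)

c = v.cross(a)
kappa = sp.sqrt(c.dot(c)) / sp.sqrt(v.dot(v))**3  # curvature via Theorem 9.2
print(sp.simplify(kappa))
```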
25 9.2.3 Frenet Formulas Let Binormal vector: T, N, B form a right-handed rectangular coordinate system This system twists and changes orientation along curve
26 ○ Frenet formulas : The derivatives are all with respect to s. (i) From Def. 9.5, (ii) is inversely parallel to N Let : Torsion
27 (iii) (a) (b) (c) * Torsion measures how (T, N, B) twists along the curve
28 9.3 Vector Fields and Streamlines ○ Definition 9.6: Vector Field -- (3-D) A vector whose components are functions of three variables -- (2-D) A vector whose components are functions of two variables
29 。 A vector field is continuous if each of its component functions is continuous. 。 A partial derivative of a vector field -- the vector fields obtained by taking the partial derivative of each component function e.g.,
30 ◎ Definition 9.7: Streamlines F: vector field defined in some 3-D region Ω : a set of curves with the property that through each point P of Ω, there passes exactly one curve from The curves in are streamlines of F if at each point in Ω, F curve in passing through is tangent to the
31 ○ Vector field: Streamline of F Parametric equations -- Position vector -- Tangent vector at --
32 F is also tangent to C at that point, so the two tangent vectors are parallel (//)
33 ○ Example 9.11: Find streamlines Vector field: From Integrate Solve for x and y Parametric equations of the streamlines
34 Find the streamline through (-1, 6, 2).
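The field in Example 9.11 was an image; for illustration only, here is a tiny Euler integration of the streamline equations dx/dt = F₁, dy/dt = F₂, dz/dt = F₃ for an assumed field F = (y, −x, 0):

```python
def F(x, y, z):
    # assumed vector field; its streamlines are the circles x**2 + y**2 = const
    return (y, -x, 0.0)

x, y, z, h = 1.0, 0.0, 0.0, 1e-3   # start at (1, 0, 0), small step size
for _ in range(5000):
    fx, fy, fz = F(x, y, z)
    x, y, z = x + h * fx, y + h * fy, z + h * fz

print(x**2 + y**2)  # stays close to 1: the streamline through (1, 0, 0)
```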
35 9.4. Gradient Field and Directional Derivatives ◎ Definition 9.8: Scalar field: a real-valued function e.g. temperature, moisture, pressure, hight Gradient of : a vector field
36 e.g., 。 Properties: ○ Definition 9.9: Directional derivative of in the direction of unit vector
37 ◎ Theorem 9.3: Proof: By the chain rule
38 ○ Example 9.13:
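Example 9.13's data were images; the directional derivative of Theorem 9.3, D_u φ = ∇φ · u, can be sketched for an assumed scalar field:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
phi = x * y**2 + z                                 # assumed scalar field
grad = sp.Matrix([phi.diff(v) for v in (x, y, z)])

u = sp.Matrix([1, 1, 0]) / sp.sqrt(2)              # unit direction vector
D_u = grad.dot(u)                                  # directional derivative grad(phi) . u
print(sp.simplify(D_u))                            # (y**2 + 2*x*y)/sqrt(2)
```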
39 ◎ Theorem 9.4: has its 1. Maximum rate of change, in the direction of 2. Minimum rate of change, in the direction of Proof: Max.: Min.:
40 ○ Example 9.4: The maximum rate of change at The minimum rate of change at
41 9.4.1. Level Surfaces, Tangent Planes, and Normal Lines ○ Level surface of : a locus of points e.g., Sphere (k > 0) of radius Point (k = 0), Empty (k < 0)
42 ○ Tangent Plane at point to Normal vector: the vector perpendicular to the tangent plane
43 ○ Theorem 9.5: Gradient normal to at point on the level surface Proof: Let : a curve passing point P on surface C lies on
44 normal to This is true for any curve passing P on the surface. Therefore, normal to the surface
45 ○ Find the tangent plane to Let (x, y, z): any point on the tangent plane orthogonal to the normal vector The equation of the tangent plane:
46 ○ Example 9.16: Consider surface Let The surface is the level surface Gradient vector: Tangent plane at
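Example 9.16's surface was an image; the recipe above (normal vector ∇φ at P, tangent plane ∇φ(P) · ((x, y, z) − P) = 0) sketched for an assumed sphere:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
phi = x**2 + y**2 + z**2                 # assumed function; level surface phi = 3
grad = sp.Matrix([phi.diff(v) for v in (x, y, z)])

P = {x: 1, y: 1, z: 1}                   # assumed point on the surface
n = grad.subs(P)                         # normal vector (2, 2, 2)
plane = n[0]*(x - 1) + n[1]*(y - 1) + n[2]*(z - 1)
print(sp.Eq(plane, 0))                   # tangent plane: 2(x-1) + 2(y-1) + 2(z-1) = 0
```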
48 9.5. Divergence and Curl ○ Definition 9.10: Divergence (scalar field) e.g.,
49 ○ Definition 9.11: Curl (vector field) e.g.,
50 ○ Del operator: 。 Gradient: 。 Divergence: 。 Curl:
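Definitions 9.10 and 9.11 written out in coordinates, sketched for an assumed field F = xy i + yz j + xz k:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
F1, F2, F3 = x*y, y*z, x*z                               # assumed components of F

div = sp.diff(F1, x) + sp.diff(F2, y) + sp.diff(F3, z)   # del . F
curl = sp.Matrix([
    sp.diff(F3, y) - sp.diff(F2, z),                     # del x F, component-wise
    sp.diff(F1, z) - sp.diff(F3, x),
    sp.diff(F2, x) - sp.diff(F1, y),
])
print(div)      # x + y + z
print(curl.T)   # Matrix([[-y, -z, -x]])
```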
51 ○ Theorem 9.6: Proof:
52 ◎ Theorem 9.7: Proof:
FORMULA ○ Position vector of curve F(t)= x(t)i + y(t)j + z(t)k 。 Distance function: 。 Unit tangent vector: where C(x(t), y(t), z(t))
○ Velocity: Speed: Acceleration: ○ Curvature:
○ Unit Normal Vector: ○ Binormal vector: Torsion ○ Frenet formulas: ○ Vector field: Streamline:
○ Scalar field: Gradient: ○ Directional derivative: ○ Divergence: ○ Curl:
https://stats.stackexchange.com/questions/4972/how-to-fit-a-model-to-self-reported-number-of-friend-interactions-over-a-20-day | # How to fit a model to self-reported number of friend interactions over a 20 day period?
I am a novice in statistics so please correct me if I am doing something fundamentally wrong. After wrestling for a long time with R in trying to fit my data to a good distribution, I figured out that it fits the Cauchy distribution with the following parameters:
location scale
37.029894 18.678936
( 3.405665) ( 2.779136)
The data was from a survey where 100 people were asked how many friends they talked to over a period of 20 days and I am trying to see if it fits a known distribution. I generated the QQ-plot with the reference line and it looks like the image given below. From what I have been reading on the web, if the points fall close to the reference line then it is good evidence that the data comes from this distribution.
So, is this good evidence to say that the distribution is Cauchy or do I need to run any more tests? If so, can someone tell me the physical interpretation of this result? I mean, I read that if the data falls into a Cauchy distribution, then it will not have a mean and standard deviation, but can someone help me understand this in plain English? If it does not have a mean then from what I understand, I cannot sample from this distribution. What is one supposed to infer about the population based on this result? Or should I be looking at other models?
UPDATE: What am I trying to achieve? I am trying to evaluate how much time it takes for some arbitrary piece of information to propagate for a population of size X. As this depends on the communication patterns of people, what I was trying to do was to build a model that could use the information from the 100 people I surveyed to give me patterns for the X number where X could be 500 or 1000.
QQ-Plot
Density Distribution of my data
Cauchy Distribution
QQ-Plot when trying to fit a Normal distribution to my data
UPDATE:
From all the suggestions, I think I now understand why this cannot be a Cauchy distribution. Thanks to everyone. @HairyBeast suggested that I look at a negative binomial distribution so I plotted the following as well:
QQ-Plot when Negative Binomial Distribution was used
Negative Binomial Distribution
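For what it's worth, the fit and QQ-plot can be produced in R along these lines (a sketch assuming the 100 survey responses are in a vector `friends`; MASS::fitdistr supports "negative binomial" directly):

```r
library(MASS)

fit <- fitdistr(friends, densfun = "negative binomial")
fit  # prints size (dispersion) and mu, with standard errors

# QQ-plot of the data against the fitted negative binomial
n <- length(friends)
theo <- qnbinom(ppoints(n), size = fit$estimate["size"], mu = fit$estimate["mu"])
qqplot(theo, friends, xlab = "Theoretical quantiles", ylab = "Sample quantiles")
abline(0, 1)
```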
• This question seems directly relevant. See my post for data vis tips to compare your data to other known distributions in base R. – Chase Nov 28 '10 at 13:18
• @Chase: +1 Actually yes :) I think I missed that one. I'll do that right away. Thanks a lot. – Legend Nov 28 '10 at 20:31
• @Legend You can also try a rootogram (don't know if it overlaps with @Chase's response on SO). Now, I don't understand why you want to try and fit every discrete distribution to your data. Either you have a priori knowledge or hypothesis about the law of your outcomes, or you don't. In the former case, you might want to explain why the observed data don't fit the model. In the latter case, you're left with exploratory data analysis (and, potentially, non-parametric density estimates, mixture models, etc.) – chl Nov 28 '10 at 21:12
• @Legend 'Scenario' means that you already have some hypothesis, no? It's difficult to answer your question because you seek to fit the 'best' model (in the sense of goodness-of-fit) to your data, but it is not necessary the 'correct' model. After all, your data may be subjected to measurement error or any other sources of error. Finally, you can still work with your observed sample and use bootstrap to simulate new samples. – chl Nov 28 '10 at 21:31
• @Legend Bootstrap is useful to estimate, based on an observed sample, the variability of an estimator when you don't know (or don't want to assume) its law. But in your case, given the context you added, I would suggest to update your question so that people can have a better idea of what you really intend to do with (which is beyond simple distribution fitting, apparently). – chl Nov 29 '10 at 6:01
First off, your response variable is discrete. The Cauchy distribution is continuous. Second, your response variable is non-negative. The Cauchy distribution with the parameters you specified puts about 1/5 of its mass on negative values. Whatever you have been reading about the QQ norm plot is false. Points falling close to the line is evidence of normality, not evidence in favor of being Cauchy distributed (EDIT: Disregard these last 2 sentences; a QQ Cauchy plot - not a QQ norm plot - was used, which is fine.) The Poisson distribution, used for modeling count data, is inappropriate since the variance is much larger than the mean. The Binomial distribution is also inappropriate since theoretically, your response variable has no upper bound. I'd look into the negative binomial distribution.
As a final note, your data does not necessarily have to come from a well known, "named" distribution. It may have come from a mixture of distributions, or may have a "true" distribution whose mass function is not a nice transformation of x to P(X=x). Don't try too hard to "force" a distribution to the data.
• (+1) Nice points, especially the latest. – chl Nov 28 '10 at 11:57
• +1 for the suggestions. I updated my post with a negative binomial distribution as well. It looks like it will serve its purpose except that the third bar is not as expected. As for your final point, I heard that if the data does not come from any known distributions, I can use something like a kernel density estimation. Would you suggest this? If so, can you kindly give me a very short example on how to do this for discrete data using R? Would I still be looking at QQ-plots to verify my model? – Legend Nov 28 '10 at 20:33
Agree with HairyBeast (+1) that Cauchy is not appropriate here (it's symmetric for one thing) and that negative binomial might well be better.
Disagree about QQ-plot though. You can do a QQ-plot for any distribution, not just normal. What you say about interpretation of a QQ-plot is correct, but note that 2 of your points lie very far indeed from the straight line.
On the Cauchy's lack of moments: this doesn't affect sampling. Once you know the parameters of the distribution sampling from it is easy (as the quantile function has a closed form) and the lack of moments is irrelevant. But the fact that the Cauchy distribution doesn't even have a mean does indicate that it's inappropriate here, as clearly it is meaningful to ask what's the expected number of friends with whom a person has a conversation in a 20-day period.
• You are right about the QQ plot being applicable to any distribution. I read the question too fast and (for whatever reason) assumed it was a QQ norm plot. One minor note: be careful about concluding a distribution from QQ plots. For example, data that has a t distribution with 20 df will still give you nice QQ norm plots. – HairyBeast Nov 28 '10 at 15:25
• +1 for the explanation as to why Cauchy does not make sense in this case. That would have been my next question if it were right :) If you get some time, could you kindly take a look at my comment above? In short, because my data need not come from a specific distribution, my readings yesterday revealed that a kernel density estimation technique can be used but am not really sure if this is the right approach and if it is, how one goes about doing it. – Legend Nov 28 '10 at 20:35
https://philarchive.org/browse/philosophy-of-physical-science-misc | # Philosophy of Physical Science, Misc
Material to categorize
1. Behind Civilization: The Fundamental Rules in the Universe.Huang Gavin - 2022 - Sydney, Australia: Gavin Huang.
In this new edition, a hypothesis is put forward for the first time to unify the Big Bang theory and the evolutionary theory by showing both events following the same set of fundamental interrelationships. As the evolution of life is a part of the evolutions of the universe, these two events express many fundamental similarities (this is self-similarity, which means a part of the system is similar to the whole system). Based on the same principle, the evolution of multicellular organisms, (...)
2. Bottoms Up: The Standard Model Effective Field Theory From a Model Perspective.Philip Bechtle, Cristin Chall, Martin King, Michael Krämer, Peter Mättig & Michael Stöltzner - 2022 - Studies in History and Philosophy of Science Part A 92:129-143.
Experiments in particle physics have hitherto failed to produce any significant evidence for the many explicit models of physics beyond the Standard Model (BSM) that had been proposed over the past decades. As a result, physicists have increasingly turned to model-independent strategies as tools in searching for a wide range of possible BSM effects. In this paper, we describe the Standard Model Effective Field Theory (SM-EFT) and analyse it in the context of the philosophical discussions about models, theories, and (bottom-up) (...)
3. As timely renovation of artificial ideology about authenticity of nature in proportion to contemporaneous historical background of remarkable highlight of microcosmic configuration of matter, “Homogenous Cosmos Originated from Unique Genesis” is innovative cosmos redefinition and thoroughly coherent PNT dynamics about universal existence & motion as spontaneous occurrence in the nature of essence of matter in a class by itself, which is radically extended from newly highlighted factuality that “discretionary particles in cosmos are mutually convertible” as lineal logic system based on (...)
4. The Logic of Life.Bhakti Madhava Puri - 2008 - Science and Scientist.
Modern science generally assumes that the same laws of logic apply to mechanical, chemical and biological entities alike because they are all ultimately material objects. This may seem to be so obvious that there would be no need to validate it -- experimentally or logically. In this article we would like to critically examine this assumption and show that from an experiential/observational level, as well as from a rational/logical level, it is not valid. This becomes apparent, for instance, when we (...)
5. Entre a matéria e a forma: o problema da objetividade dos fenômenos quânticos em Werner Heisenberg.João Edson Gonçalves Cabral - 2019 - Dissertation, Universidade Federal Do Rio Grande Do Norte
6. A mechanism for life after death is given.
7. Ley verdadera, explicación y descripción en un argumento de Nancy Cartwright.Sergio Aramburu - 2015 - In Filosofía e historia de la ciencia en el cono sur. Córdoba: pp. 25-32.
This paper consists of an analysis of the thesis set out in Nancy Cartwright's 1980 article "Do the laws of physics state the facts?", according to which the fundamental laws of physics do not "describe the facts" because, with respect to them, truth and explanatoriness are mutually exclusive. The text was later republished as the third essay of her book How the Laws of Physics Lie (1981), of which Mauricio Suárez states that the "trade-off" between truth and explanation is its (...)
8. Hume's Natural Philosophy and Philosophy of Physical Science.Matias Slavov - 2020 - London: Bloomsbury Academic.
This book contextualizes David Hume's philosophy of physical science, exploring both Hume's background in the history of early modern natural philosophy and its subsequent impact on the scientific tradition.
9. New Insights on Time and Quantum Gravity.Ozer Oztekin - 2020 - Advances in Physics Theories and Applications 83 (DOI: 10.7176/APTA/83-08).
According to Einstein, a universal time does not exist. But what if time is different than what we think of it? Cosmic Microvawe Background Radiation was accepted as a reference for a universal clock and a new time concept has been constructed. According to this new concept, time was tackled as two-dimensional having both a wavelength and a frequency. What our clocks measure is actually a derivation of the frequency of time. A relativistic time dilation actually corresponds to an increase (...)
10. The main concept of quantum field theory is the conviction that all the phenomena in the universe are created by the underlying structure of the quantum fields. Fields represent dynamical spatial properties that can be described with the help of geometrical concepts. Therefore it is possible to describe the mathematical origin of the structure of the creating fields and show the mathematical origin of the law of conservation of energy, Planck’s constant and the constant speed of light within a non-local (...)
11. (March 2019) UNBELIEVABLE similar ideas, UNBELIEVABLE similar framework of the article on “quantum mechanics” written by Proietti et al (2019) with my EDWs (2002-2008) -/- Gabriel Vacariu -/- The article that I investigate in this section is -/- (2019) Experimental rejection of observer-independence in the quantum world -/- Massimiliano Proietti,1 Alexander Pickston,1 Francesco Graffitti,1 Peter Barrow,1 Dmytro Kundys,1 Cyril Branciard,2 Martin Ringbauer,1, 3 and Alessandro Fedrizzi1 at arXiv:1902.05080v1 [quant-ph] 13 Feb 2019 -/- In the article written by Proietti et al. (...)
12. Why microscopic objects exhibit wave properties (are delocalized), but macroscopic do not (are localized)? Traditional quantum mechanics attributes wave properties to all objects. When complemented with a deterministic collapse model (Quantum Stud.: Math. Found. 3, 279 (2016)) quantum mechanics can dissolve the discrepancy. Collapse in this model means contraction and occurs when the object gets in touch with other objects and satisfies a certain criterion. One single collapse usually does not suffice for localization. But the object rapidly gets in touch (...)
13. Philosophical Model of Special Relativity.Alexander Klimets - 2012 - Quantum Magic 9 (3):3113-3123.
The model of special relativity is built in the article. Within the framework of the model, formulas of special relativity are obtained and their philosophical and physical meaning is revealed.
14. Le scepticisme et les hypothèses de la physique.Sophie Roux - 1998 - Revue de Synthèse 119 (2-3):211-255.
The History of scepticism from Erasmus to Spinoza is often called upon to support three theses: first, that Descartes had a dogmatic notion of systematic knowledge, and therefore of physics; second, that the hypothetical epistemology of physics which spread during the xviith century was the result of a general sceptical crisis; third, that this epistemology was more successful in England than in France. I reject these three theses: I point first to the tension in Descartes’ works between the ideal of (...)
15. In this paper I shall argue in Section II that two of the standard arguments that have been put forth in support of Einstein’s Special Theory of Relativity do not support that theory and are quite compatible with what might be called an updated and perhaps even an enlightened Newtonian view of the Universe. This view will be presented in Section I. I shall call it the neo-Newtonian Theory, though I hasten to add there are a number of things in (...)
16. Based on a closer analysis of Newton's experimentum crucis and the argumentation he rests on this experiment, together with Goethe's critique of it, two widespread prejudices are to be revised in what follows: -/- 1. Newton is not a dogmatist asserting methodological claims he cannot make good on; rather, he grounds his claim to be able to give experimental proofs on an exemplary methodology of causal explanations, which his critics, however, overlook. 2. Goethe is not an anti-scientist who forms a unique counterpoint to the prevailing scientific tradition, but stands in the midst of (...)
17. Can Physics Ever Be Complete If There is No Fundamental Level in Nature?Markus Schrenk - 2009 - Dialectica 63 (2):205-208.
In their recent book Every Thing Must Go, Ladyman and Ross claim: (i) Physics is analytically complete since it is the only science that cannot be left incomplete. (ii) There might not be an ontologically fundamental level. (iii) We should not admit anything into our ontology unless it has explanatory and predictive utility. In this discussion note I aim to show that the ontological commitment in (iii) implies that the completeness of no science can be achieved where no fundamental level exists. (...)
18. In the standard model of cosmology, ΛCDM, dark matter was introduced to explain the anomalously high orbital velocities of galaxies in clusters compared with the estimates of General Relativity, and dark energy to explain the accelerated expansion of the universe. The ΛCDM model is based on the equations of General Relativity, which of the total mass-energy of the universe assign 4.9% to matter (including only baryonic matter), 26.8% to dark matter and 68.3% to dark energy, adjusted according to what is observed in (...)
19. Science, Religion and Basic Biological Issues That Are Open to Interpretation.Alfred Gierer - 2009 - English Translation Of: Preprint 388, Mpi for History of Science.
This is an English translation of my essay: Alfred Gierer Wissenschaft, Religion und die deutungsoffenen Grundfragen der Biologie. Mpi for the History of Science, preprint 388, 1-21, also in philpapers. Range and limits of science are given by the universal validity of physical laws, and by intrinsic limitations, especially in self-referential contexts. In particular, neurobiology should not be expected to provide a full understanding of consciousness and the mind. Science cannot provide, by itself, an unambiguous interpretation of the natural order (...)
20. Scientific Rationality, Human Consciousness, and Pro-Religious Ideas.Alfred Gierer - 2019 - In Wissenschaftliches Denken, das Rätsel Bewusstsein und pro-religiöse Ideen. Würzburg, Germany: Königshausen&Neumann. pp. 83-93.
The essay is an English version of the German article "Wissenschaftliche Rationalität, menschliches Bewusstsein und pro-religiöse Ideen". It discusses immanent versus transcendent concepts in the context of the art of living, as well as the understanding of human consciousness in the context of religion. Science provides us with a far reaching understanding of natural processes, including biological evolution, but also with deep insights into its own intrinsic limitations. This is consistent with more than one interpretation on the “metatheoretical“, that is (...)
21. Lorentz Contraction, Bell’s Spaceships and Rigid Body Motion in Special Relativity.Jerrold Franklin - 2010 - European Journal of Physics 31:291-298.
The meaning of Lorentz contraction in special relativity and its connection with Bell’s spaceships parable is discussed. The motion of Bell’s spaceships is then compared with the accelerated motion of a rigid body. We have tried to write this in a simple form that could be used to correct students’ misconceptions due to conflicting earlier treatments.
22. (v.3) In this paper it is argued that Barad's Agential Realism, an approach to quantum mechanics originating in the philosophy of Niels Bohr, can be the basis of a 'theory of everything' consistent with a proposal of Wheeler that 'observer-participancy is the foundation of everything'. On the one hand, agential realism can be grounded in models of self- organisation such as the hypercycles of Eigen, while on the other agential realism, by virtue of the 'discursive practices' that constitute one aspect (...)
23. Does Chance Hide Necessity ? A Reevaluation of the Debate ‘Determinism - Indeterminism’ in the Light of Quantum Mechanics and Probability Theory.Louis Vervoort - 2013 - Dissertation, University of Montreal
In this text the ancient philosophical question of determinism (“Does every event have a cause ?”) will be re-examined. In the philosophy of science and physics communities the orthodox position states that the physical world is indeterministic: quantum events would have no causes but happen by irreducible chance. Arguably the clearest theorem that leads to this conclusion is Bell’s theorem. The commonly accepted ‘solution’ to the theorem is ‘indeterminism’, in agreement with the Copenhagen interpretation. Here it is recalled that indeterminism (...)
24. Towards an Ontology of Problems.Martin Zwick - 1995 - Advances in Systems Science and Applications 1:37-42.
Systems theory offers a language in which one might formulate a metaphysics (or more specifically an ontology) of problems. This proposal is based upon a conception of systems theory shared by von Bertalanffy, Wiener, Boulding, Rapoport, Ashby, Klir, and others, and expressed succinctly by Bunge, who considered game theory, information theory, feedback control theory, and the like to be attempts to construct an "exact and scientific metaphysics." Our prevailing conceptions of "problems" are concretized yet also fragmented, and in fact dissolved, by the (...)
25. The development of Helmholtz's mechanism is marked by a shift in its claim to validity and can, in a still very rough overview, be divided into two periods. In the first part of my contribution I will deal with the first period, up to about the end of the 1860s. Here I reconstruct in outline the empiricist justification that Helmholtz gave for the truth claim of his conception of nature. In the second part I will then characterize the most important features of the tendency toward hypothetization that emerged in the course of the 1870s. In closing I want to (...)
26. Two seemingly contradictory tendencies have accompanied the development of the natural sciences in the past 150 years. On the one hand, the natural sciences have been instrumental in effecting a thoroughgoing transformation of social structures and have made a permanent impact on the conceptual world of human beings. This historical period has, on the other hand, also brought to light the merely hypothetical validity of scientific knowledge. As late as the middle of the 19th century the truth-pathos in the natural (...)
27. What’s Wrong With Aim-Oriented Empiricism?Nicholas Maxwell - 2015 - Acta Baltica Historiae Et Philosophiae Scientiarum 3 (2):5-31.
For four decades it has been argued that we need to adopt a new conception of science called aim-oriented empiricism. This has far-reaching implications and repercussions for science, the philosophy of science, academic inquiry in general, conception of rationality, and how we go about attempting to make progress towards as good a world as possible. Despite these far-reaching repercussions, aim-oriented empiricism has so far received scant attention from philosophers of science. Here, sixteen objections to the validity of the argument for (...)
28. Relation Between Relativistic Quantum Mechanics And.Han Geurdes - 1995 - Phys Rev E 51 (5):5151-5154.
The objective of this report is twofold. In the first place it aims to demonstrate that a four-dimensional local U(1) gauge invariant relativistic quantum mechanical Dirac-type equation is derivable from the equations for the classical electromagnetic field. In the second place, the transformational consequences of this local U(1) invariance are used to obtain solutions of different Maxwell equations.
29. Underdetermination Vs. Indeterminacy.Juan José Lara - 2009 - Daimon: Revista Internacional de Filosofía 47:219-228.
Thomas Bonk has dedicated a book to analyzing the thesis of underdetermination of scientific theories, with a chapter exclusively devoted to the analysis of the relation between this idea and the indeterminacy of meaning. Both theses caused a revolution in the philosophic world in the sixties, generating a cascade of articles and doctoral theses. Agitation seems to have cooled down, but the point is still debated and it may be experiencing a renewed resurgence.
30. Some Radical New Ideas About Consciousness Consciousness and the Cosmos: A New Copernican Revolution -/- Consciousness is our new frontier in modern science. Most scientists believe that it can be accommodated, explained, by existing scientific principles. I say that it cannot. That it calls all existing scientific principles into question. That consciousness is to modern science just exactly what light was to classical physics: All of our fundamental assumptions about the nature of Reality have to change. And I go on, (...)
31. A literary approach to scientific practice: Essay Review of R.I.G. Hughes' _The Theoretical Practices of Physics_.
32. This essay invites the reader to interpret physics from a radically empirical standpoint, both diachronic and relative. We start with some criteria of the theory of knowledge, the basis for interpreting the fundamentals of mathematics and physics. -/- Then we present some expositions of physics, including a new characterization of time, space and movement, with reference to classical mechanics, relativity and quantum mechanics.
33. Entropy : A Concept That is Not a Physical Quantity.Shufeng Zhang - 2012 - Physics Essays 25 (2):172-176.
This study has demonstrated that entropy is not a physical quantity, that is, the physical quantity called entropy does not exist. If the efficiency of a heat engine is defined as η = W/W1, and the reversible cycle is considered to be the Stirling cycle, then, given ∮dQ/T = 0, we can prove ∮dW/T = 0 and ∮dE/T = 0. If ∮dQ/T = 0, ∮dW/T = 0 and ∮dE/T = 0 are thought to define new system state variables, such definitions would (...)
34. Clifford Algebra: A Case for Geometric and Ontological Unification.William Michael Kallfelz - 2009 - VDM Verlagsservicegesellschaft MbH.
Robert Batterman's ontological insights (2002, 2004, 2005) are apt: Nature abhors singularities. "So should we," responds the physicist. However, the epistemic assessments of Batterman concerning the matter prove to be less clear, for in the same vein he writes that singularities play an essential role in certain classes of physical theories referring to certain types of critical phenomena. I devise a procedure ("methodological fundamentalism") which exhibits how singularities, at least in principle, may be avoided within the same classes of formalisms (...)
35. A Quasi-Analytical Constitution of Physical Space.Thomas Mormann - 2004 - In Carsten Klein & Steven Awodey (eds.), Carnap Brought Home - The View from Jena. Open Court.
36. Synthetic Geometry and Aufbau.Thomas Mormann - 2003 - In Thomas Bonk (ed.), Language, Truth and Knowledge. Kluwer Academic Publishers. pp. 45--64.
37. Dispositions, Manifestations, and Causal Structure.Toby Handfield - 2010 - In Anna Marmodoro (ed.), The Metaphysics of Powers: Their Grounding and Their Manifestations. Routledge.
This paper examines the idea that there might be natural kinds of causal processes, with characteristic diachronic structure, in much the same way that various chemical elements form natural kinds, with characteristic synchronic structure. This claim -- if compatible with empirical science -- has the potential to shed light on a metaphysics of essentially dispositional properties, championed by writers such as Bird and Ellis.
38. Buddhism and Quantum Physics: A Strange Parallelism of Two Concepts of Reality.Christian Thomas Kohl - 2007 - Contemporary Buddhism 8 (1):69-82.
Rudyard Kipling, the famous English author of « The Jungle Book », born in India, wrote one day these words: « Oh, East is East and West is West, and never the twain shall meet ». In my paper I show that Kipling was not completely right. I try to show the common ground between Buddhist philosophy and quantum physics. There is a surprising parallelism between the philosophical concept of reality articulated by Nagarjuna and the physical concept of reality implied (...)
39. Identity in Physics: Statistics and the (Non-)Individuality of Quantum Particles.Matteo Morganti - 2010 - In H. De Regt, S. Hartmann & S.: Okasha (eds.), EPSA Philosophy of Science: Amsterdam 2009. Springer.
This paper discusses the issue of the identity and individuality (or lack thereof) of quantum mechanical particles. It first reconstructs, on the basis of the extant literature, a general argument in favour of the conclusion that such particles are not individual objects. Then, it critically assesses each one of the argument’s premises. The upshot is that, in fact, there is no compelling reason for believing that quantum particles are not individual objects.
41. Consciousness is more important than the Higgs boson particle. Consciousness has emerged as a term, and a problem, in modern science. Most scientists believe that it can be accommodated and explained by existing scientific principles. I say that it cannot, that it calls all existing principles into question, and so I propose a New Copernican Revolution among our fundamental terms. I say that consciousness points completely beyond present-day science, to a whole new view of the universe, where consciousness, and not (...)
42. Evaluating the Exact Infinitesimal Values of Area of Sierpinski's Carpet and Volume of Menger's Sponge. Yaroslav Sergeyev - 2009 - Chaos, Solitons and Fractals 42: 3042-3046.
Very often traditional approaches studying dynamics of self-similarity processes are not able to give their quantitative characteristics at infinity and, as a consequence, use limits to overcome this difficulty. For example, it is well known that the limit area of Sierpinski's carpet and volume of Menger's sponge are equal to zero. It is shown in this paper that recently introduced infinite and infinitesimal numbers allow us to use exact expressions instead of limits and to calculate exact infinitesimal values of areas (...)
43. A synthesis of trending topics in pancomputationalism. I introduce the notion that "strange loops" engender the most atomic levels of physical reality, and introduce a mechanism for global non-locality. Written in a simple and accessible style, it seeks to draw research in fundamental physics back to realism, and have a bit of fun in the process.
44. Just How Much Do We Really Know? Dewey B. Larson - 1986 - Reciprocity 15 (2):1-15.
This memorandum, originally written in 1961 and published in an obscure journal in 1986, emphasizes the degree to which general acceptance has been substituted for proof in current scientific practice. Its main objective is to reveal which generally accepted ideas have no sound factual basis and therefore _could_ be erroneous. The new and improved basic theory that is fervently desired must conflict with some items of this kind, and probably with many of them. Such conflicts, if confined to the categories (...)
45. Putnam's Account of Apriority and Scientific Change: Its Historical and Contemporary Interest. Jonathan Y. Tsou - 2010 - Synthese 176 (3):429-445.
In the 1960s and 1970s, Hilary Putnam articulated a notion of relativized apriority that was motivated to address the problem of scientific change. This paper examines Putnam’s account in its historical context and in relation to contemporary views. I begin by locating Putnam’s analysis in the historical context of Quine’s rejection of apriority, presenting Putnam as a sympathetic commentator on Quine. Subsequently, I explicate Putnam’s positive account of apriority, focusing on his analysis of the history of physics and geometry. In (...)
46. Subjectivity or the problem of 'qualia' tends to make the accessibility and comprehension of psychological events intangible, especially for scientific exploration. The issue becomes even more complicated but interesting when one turns towards mystical experiences. Such experiences are different from other psychological phenomena in the sense that they don't occur to everyone, so they are difficult to comprehend even as to their existence. We conducted a qualitative study on one such experience of inner-light perception. This is a common experience (...)
47. Scientific Metaphysics. Nicholas Maxwell - 2004 - PhilSci Archive.
In this paper I argue that physics makes metaphysical presuppositions concerning the physical comprehensibility, the dynamic unity, of the universe. I argue that rigour requires that these metaphysical presuppositions be made explicit as an integral part of theoretical knowledge in physics. An account of what it means to assert of a theory that it is unified is developed, which provides the means for partially ordering dynamical physical theories with respect to their degrees of unity. This in turn makes it possible (...)
48. Die Inkonsistenz empiristischer Argumentation im Zusammenhang mit dem Problem der Naturgesetzlichkeit. Dieter Wandschneider - 1986 - Journal for General Philosophy of Science / Zeitschrift für Allgemeine Wissenschaftstheorie 17 (1):131-142.
The well-known empiricist aporias of the lawfulness of nature prevent an adequate philosophical interpretation of empirical science to this day. Clarification can only be expected through an immanent refutation of the empiricist point of view. My argument is that Hume's claim, paradigmatic for modern empiricism, is not just inconsequent, but simply contradictory: Empiricism denies that a lawlike character of nature can be substantiated. But, as is shown, anyone who claims experience to be the basis of knowledge (as the empiricist naturally (...)
Interlevel Relations in Physical Science
1. Spacetime Emergence: Collapsing the Distinction Between Content and Context? Karen Crowther - 2022 - In Shyam Wuppuluri & Ian Stewart (eds.), From Electrons to Elephants and Elections: Saga of Content and Context. Springer. pp. 379-402.
Several approaches to developing a theory of quantum gravity suggest that spacetime—as described by general relativity—is not fundamental. Instead, spacetime is supposed to be explained by reference to the relations between more fundamental entities, analogous to 'atoms' of spacetime, which themselves are not (fully) spatiotemporal. Such a case may be understood as emergence of content: a 'hierarchical' case of emergence, where spacetime emerges at a 'higher', or less fundamental, level than its 'lower-level' non-spatiotemporal basis. But quantum gravity cosmology also presents us (...)
http://physics.stackexchange.com/questions/44566/does-the-exact-string-theory-s-matrix-describe-all-physics-there-is/44567 | # Does the exact string theory $S$-matrix describe all physics there is?
Suppose someone manages to evaluate the string theory $S$-matrix to all orders for any and all vertex operator insertions including non-perturbative contributions from world-sheet instantons and re-sum the whole series to obtain the exact non-perturbative string theory $S$-matrix for any combination of in- and out-states. Suppose further that the analytic result is compact, tractable, and easily amenable to numerical evaluations (say, some special function). Would such a result tell us "what string theory is"? Would it be enough in principle to answer all sensible questions about physics described by string theory? If not, what else is there we should care about?
Are you excluding cosmological questions? One must be clear that there are many different string S-matrices, which are linked by non-S-matrix operations involving turning on moduli and such, and an infinite number of particles. So you shouldn't say "the" string S-matrix, but "the string S-matrix for a flat version of our vacuum". – Ron Maimon Nov 19 '12 at 4:51
Of course not, I asked about "all sensible questions about physics" which certainly includes cosmology, doesn't it? If you know the S-matrix "for any and all vertex operator insertions", as I supposed, that should allow for arbitrary moduli and geometries, no? If not, please explain why not. – Udo Kamilla Nov 19 '12 at 4:55
It's not enough because the S-matrix is for a finite number of particles--- it doesn't even describe what happens when you move a charged particle from one momentum to another, which involves an infinite number of soft photons, let alone changing moduli over a region where the cosmology changes. – Ron Maimon Nov 19 '12 at 5:09
Ok, but this is a well-known limitation of perturbative strings--- they only describe a finite number of particles in the S-matrix, and have infrared divergences which need to be cured by using the string classical fields to define backgrounds. The equations of motion for the massless backgrounds are the second half of string theory, the more used half, and these allow you to change an infinite number of background particles, and go from one theory to another, for example by changing the radion in a type II circle compactification. String theory is a half-S-matrix, half-classical hybrid monster. – Ron Maimon Nov 19 '12 at 15:26
You can't describe electron-electron scattering, as this is infrared divergent (you find the same infrared divergences in string theory--- you fix them by adding a soft classical changing background--- this is the dirty secret). You can't describe T-duality, as this is a condensation of an infinite number of zero-mass particles. But you are absolutely right in your insistence that the S-matrix is complete for a given background, so I don't want to give fuel to string critics by saying "there's more than the S-matrix", because it is more correct to say there isn't. – Ron Maimon Nov 20 '12 at 15:44
Well, for starters, the scattering-matrix picture of interactions does not include the dynamics of spacetime; spacetime is instead assumed as a background where everything happens.

Even string theory is, at bottom, classical general relativity in a fairer description, one that can be quantized in a way that gives finite results for measurable quantities: it assumes that string modes contribute to $T_{\mu \nu}$ and as such produce a curvature. The curvature of coherent excitations of a closed string has been proved to be equivalent to a small perturbation of the metric (see this question for details), and this gives string theorists confidence that such excitations describe gravitons.

But the picture of spacetime is still classical, and a proper nonperturbative formulation of quantum spacetime is a revolution that still has not happened. Until that happens, no scattering-matrix description can hope to be complete.
That's obviously not true, as there are vertex operators for the graviton which perturb the metric and hence the spacetime. That's the whole point of why people got excited about strings: they include quantum gravitons in a dynamical theory. In principle it should be possible to obtain any dynamic spacetime background by appropriate vertex insertions, no? If this is not the case then I would be interested in hearing a rational argument why not. – Udo Kamilla Nov 19 '12 at 5:04
No, first because string theory is based on a number of approximations/assumptions, and second because not every physical question can be answered assuming that processes take an infinite time and involve objects separated by infinite distances, as is assumed in the S-matrix approach.
The S-matrix approach is excellent for particle physics, which deals with few particles (usually two or three) in a large mostly empty volume and only considers initial and final states of free particles. The S-matrix approach fails when you start to study many-body motion in condensed phases. This is the reason why chemists have developed other theories beyond the S-matrix formalism for the study of chemical reactions, for instance.
String theory is obviously not based on any approximation whatsoever. I did not ask what an S-matrix (in QFT) is usually used for. My question is conceptual and concerns string theory. – Udo Kamilla Nov 20 '12 at 2:10
Evidently string theory is based on a number of approximations, and this is why the string S-matrix fails to explain even the most elementary chemical reactions in condensed phases. Moreover, even at its supposed strong point (as a 'candidate' for a quantum gravity theory) the string theory approach rests on a set of gross assumptions, and this is why generalizations of string theory are under active research. – juanrga Nov 20 '12 at 15:55
https://www.physicsforums.com/threads/okay-now-if-youll-excuse-me-i-have-a-date.104621/ | # Okay, now if you'll excuse me, I have a date.
1. Dec 19, 2005
### Smurf
Last Exam, just got out.
...
WOOOOHOOOO!!
Okay, now if you'll excuse me, I have a date.
2. Dec 19, 2005
### Lisa!
Congrats!
3. Dec 19, 2005
### Staff: Mentor
Congrats, Smurf! You've been kind of quiet for a while.
4. Dec 19, 2005
### Math Is Hard
Staff Emeritus
yay, Smurfy-smurf!!!! I thought it had been awful quiet around here.
5. Dec 19, 2005
### Staff: Mentor
Date???? :grumpy: (invalidates all of smurfs GOOBF cards)
just kidding,
they were already invalid
Yay for Smurf!!!!!!!
6. Dec 19, 2005
### mattmns
Man Smurf that was a bad move! Admitting you have a date, now Evo knows! Congrats on finishing your exams
http://physics.stackexchange.com/questions/14880/hearing-a-sound-backwards-because-of-doppler-effect | # Hearing a sound backwards because of Doppler effect
Consider a supersonic plane (Mach 2) approaching a stationary sound source (e.g. a fog horn on a boat).
If I understand it correctly, the passengers in the plane can hear the sound twice: first at a 3 times higher frequency, and then (after they have passed the source) a second time at the normal frequency but backwards. None of the textbooks or websites mention this backwards sound, yet I am quite sure it must be there.
Am I correct? And if so, is it actually observed (e.g. by fighter pilots), and why do textbooks never mention this?
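For what it's worth, a quick check of the numbers above with the standard non-relativistic Doppler formula for a moving observer and a stationary source (the arithmetic is mine, not from the thread): while approaching at $u = 2v_s$,

$$f_{\text{obs}} = f\,\frac{v_s + u}{v_s} = 3f,$$

and after passing the source ($u \to -u$),

$$f_{\text{obs}} = f\,\frac{v_s - u}{v_s} = -f.$$

A negative observed frequency means the wavefronts are crossed in the reverse of their emission order: the sound is heard at the original pitch, but time-reversed. This matches both claims in the question.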
I doubt that fighter pilots can hear anything happening outside the plane (just because it's so loud), and I can't think of any other supersonic motion, so I wouldn't be surprised if this effect has never been observed. But it's an interesting question. – David Z Sep 19 '11 at 22:38
FYI Gareth Loy's Musimathics book mentions it at pg. 230; one gets it from the Doppler shift eq. $f_d=f\frac{v_s}{v_s-u}$ (in 1d), where $v_s$ is the sound speed, $u$ is the emitter's relative speed and $f$ the frequency being emitted; $f_d$ comes out negative if $u>v_s$. – eudoxos Sep 20 '11 at 9:12
I don't think this reversal of sound would take place. – Vineet Menon Sep 21 '11 at 4:59
@Vineet Menon: fully irrelevant what you think unless you give an argument. – eudoxos Sep 21 '11 at 18:15
@David Zaslavsky: how about moving surface wave source on water? (to be sure, I really mean surface waves, not acoustic waves; Doppler effect is the same) – eudoxos Sep 21 '11 at 18:17 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8057462573051453, "perplexity": 916.2328242720803}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394010721184/warc/CC-MAIN-20140305091201-00070-ip-10-183-142-35.ec2.internal.warc.gz"} |
https://zims-en.kiwix.campusafrica.gos.orange.com/wikipedia_en_all_nopic/A/Condition_number | Condition number
In the field of numerical analysis, the condition number of a function measures how much the output value of the function can change for a small change in the input argument. This is used to measure how sensitive a function is to changes or errors in the input, and how much error in the output results from an error in the input. Very frequently, one is solving the inverse problem: given $f(x)=y$, one is solving for $x$, and thus the condition number of the (local) inverse must be used. In linear regression the condition number of the moment matrix can be used as a diagnostic for multicollinearity.[1][2]
The condition number is an application of the derivative, and is formally defined as the value of the asymptotic worst-case relative change in output for a relative change in input. The "function" is the solution of a problem and the "arguments" are the data in the problem. The condition number is frequently applied to questions in linear algebra, in which case the derivative is straightforward but the error could be in many different directions, and is thus computed from the geometry of the matrix. More generally, condition numbers can be defined for non-linear functions in several variables.
A problem with a low condition number is said to be well-conditioned, while a problem with a high condition number is said to be ill-conditioned. The condition number is a property of the problem. Paired with the problem are any number of algorithms that can be used to solve the problem, that is, to calculate the solution. Some algorithms have a property called backward stability. In general, a backward stable algorithm can be expected to accurately solve well-conditioned problems. Numerical analysis textbooks give formulas for the condition numbers of problems and identify known backward stable algorithms.
As a rule of thumb, if the condition number is $\kappa(A) = 10^{k}$, then you may lose up to $k$ digits of accuracy on top of what would be lost to the numerical method due to loss of precision from arithmetic methods.[3] However, the condition number does not give the exact value of the maximum inaccuracy that may occur in the algorithm. It generally just bounds it with an estimate (whose computed value depends on the choice of the norm to measure the inaccuracy).
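As a concrete illustration of this rule of thumb, here is a minimal sketch (assuming NumPy; the matrix and the perturbation are toy numbers of mine, not from any reference):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])            # nearly singular, kappa ~ 4e4
b = A @ np.array([1.0, 1.0])             # exact solution is [1, 1]

print(np.linalg.cond(A))                 # ~4e4  =>  expect ~4-5 digits lost

e = np.array([1e-8, -1e-8])              # tiny perturbation of the data
x = np.linalg.solve(A, b + e)
print(x)                                 # ~[1.0002, 0.9998]: wrong in the 4th
                                         # digit, ~kappa times the input error
```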
General definition in the context of error analysis
Given a problem $f$ and an algorithm $\tilde{f}$ with an input $x$, the absolute error is $\|f(x)-\tilde{f}(x)\|$ and the relative error is $\|f(x)-\tilde{f}(x)\|/\|f(x)\|$.

In this context, the absolute condition number of a problem $f$ is

$$\lim_{\varepsilon\to 0}\,\sup_{\|\delta x\|\leq\varepsilon}\frac{\|\delta f\|}{\|\delta x\|}$$

and the relative condition number is

$$\lim_{\varepsilon\to 0}\,\sup_{\|\delta x\|\leq\varepsilon}\frac{\|\delta f(x)\|/\|f(x)\|}{\|\delta x\|/\|x\|}$$
Matrices
For example, the condition number associated with the linear equation Ax = b gives a bound on how inaccurate the solution x will be after approximation. Note that this is before the effects of round-off error are taken into account; conditioning is a property of the matrix, not the algorithm or floating point accuracy of the computer used to solve the corresponding system. In particular, one should think of the condition number as being (very roughly) the rate at which the solution, x, will change with respect to a change in b. Thus, if the condition number is large, even a small error in b may cause a large error in x. On the other hand, if the condition number is small then the error in x will not be much bigger than the error in b.
The condition number is defined more precisely to be the maximum ratio of the relative error in x to the relative error in b.
Let $e$ be the error in $b$. Assuming that $A$ is a nonsingular matrix, the error in the solution $A^{-1}b$ is $A^{-1}e$. The ratio of the relative error in the solution to the relative error in $b$ is

$$\frac{\|A^{-1}e\| / \|A^{-1}b\|}{\|e\| / \|b\|}$$

This is easily transformed to

$$\frac{\|A^{-1}e\|}{\|e\|}\cdot\frac{\|b\|}{\|A^{-1}b\|}.$$

The maximum value (for nonzero $b$ and $e$) is then seen to be the product of the two operator norms as follows:

$$\begin{aligned}
\max_{e,b\neq 0}\left\{\frac{\|A^{-1}e\|}{\|e\|}\,\frac{\|b\|}{\|A^{-1}b\|}\right\}
&= \max_{e\neq 0}\left\{\frac{\|A^{-1}e\|}{\|e\|}\right\}\,\max_{b\neq 0}\left\{\frac{\|b\|}{\|A^{-1}b\|}\right\} \\
&= \max_{e\neq 0}\left\{\frac{\|A^{-1}e\|}{\|e\|}\right\}\,\max_{x\neq 0}\left\{\frac{\|Ax\|}{\|x\|}\right\} \\
&= \|A^{-1}\|\,\|A\|
\end{aligned}$$

The same definition is used for any consistent norm, i.e. one that satisfies

$$\kappa(A) = \|A^{-1}\|\,\|A\| \geq \|A^{-1}A\| = 1.$$
When the condition number is exactly one (which can only happen if A is a scalar multiple of a linear isometry), then a solution algorithm can find (in principle, meaning if the algorithm introduces no errors of its own) an approximation of the solution whose precision is no worse than that of the data.
However, it does not mean that the algorithm will converge rapidly to this solution, just that it won't diverge arbitrarily because of inaccuracy on the source data (backward error), provided that the forward error introduced by the algorithm does not diverge as well because of accumulating intermediate rounding errors.
The condition number may also be infinite, but this implies that the problem is ill-posed (does not possess a unique, well-defined solution for each choice of data; that is, the matrix is not invertible), and no algorithm can be expected to reliably find a solution.
The definition of the condition number depends on the choice of norm, as can be illustrated by two examples.
If $\|\cdot\|$ is the norm defined in the square-summable sequence space $\ell^2$ (which matches the usual distance in a standard Euclidean space and is usually denoted $\|\cdot\|_{2}$), then

$$\kappa(A) = \frac{\sigma_{\max}(A)}{\sigma_{\min}(A)},$$

where $\sigma_{\max}(A)$ and $\sigma_{\min}(A)$ are the maximal and minimal singular values of $A$, respectively. Hence:
• If $A$ is normal, then

$$\kappa(A) = \frac{|\lambda_{\max}(A)|}{|\lambda_{\min}(A)|},$$

where $\lambda_{\max}(A)$ and $\lambda_{\min}(A)$ are the maximal and minimal (by moduli) eigenvalues of $A$, respectively.

• If $A$ is unitary, then $\kappa(A) = 1$. (Both special cases are checked numerically in the sketch below.)
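These facts are easy to check numerically; a minimal sketch (assuming NumPy; the matrices are arbitrary examples of mine):

```python
import numpy as np

A = np.array([[4.1, 2.8],
              [9.7, 6.6]])
s = np.linalg.svd(A, compute_uv=False)      # singular values, descending
print(s[0] / s[-1], np.linalg.cond(A))      # sigma_max/sigma_min == kappa_2(A)

S = np.array([[2.0, 1.0],
              [1.0, 2.0]])                  # symmetric, hence normal
lam = np.linalg.eigvalsh(S)                 # eigenvalues 1 and 3
print(max(abs(lam)) / min(abs(lam)),        # 3.0 ...
      np.linalg.cond(S))                    # ... equals kappa_2(S)

t = 0.7
Q = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])     # rotation: orthogonal/unitary
print(np.linalg.cond(Q))                    # 1.0 (up to rounding)
```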
The condition number with respect to $\ell^2$ arises so often in numerical linear algebra that it is given a name, the condition number of a matrix.
If $\|\cdot\|$ is the norm defined in the sequence space $\ell^\infty$ of all bounded sequences (which matches the maximum of distances measured on projections into the base subspaces and is usually denoted $\|\cdot\|_{\infty}$), and $A$ is lower triangular and non-singular (i.e., $a_{ii}\neq 0$ for all $i$), then

$$\kappa(A) \geq \frac{\max_{i}|a_{ii}|}{\min_{i}|a_{ii}|}.$$
The condition number computed with this norm is generally larger than the condition number computed with square-summable sequences, but it can be evaluated more easily (and this is often the only practicably computable condition number, when the problem to solve involves a non-linear algebra, for example when approximating irrational and transcendental functions or numbers with numerical methods).
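A quick numerical check of this diagonal bound (again a sketch assuming NumPy; the triangular matrix is my own example):

```python
import numpy as np

A = np.array([[100.0, 0.0, 0.0],
              [  3.0, 1.0, 0.0],
              [  2.0, 5.0, 0.1]])          # lower triangular, nonzero diagonal

d = np.abs(np.diag(A))
print(d.max() / d.min())                   # 1000.0: the easy lower bound
print(np.linalg.cond(A, np.inf))           # computed kappa, indeed >= 1000
```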
If the condition number is not too much larger than one, the matrix is well conditioned which means its inverse can be computed with good accuracy. If the condition number is very large, then the matrix is said to be ill-conditioned. Practically, such a matrix is almost singular, and the computation of its inverse, or solution of a linear system of equations is prone to large numerical errors. A matrix that is not invertible has condition number equal to infinity.
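The Hilbert matrices are a standard family of ill-conditioned matrices and make the point vividly (a sketch assuming SciPy for scipy.linalg.hilbert):

```python
import numpy as np
from scipy.linalg import hilbert

for n in (4, 8, 12):
    H = hilbert(n)                       # H[i, j] = 1 / (i + j + 1)
    print(n, np.linalg.cond(H))
# 4  ~1.6e4
# 8  ~1.5e10
# 12 ~1.6e16  (at the limit of double precision; the estimate itself is shaky)
```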
Nonlinear
Condition numbers can also be defined for nonlinear functions, and can be computed using calculus. The condition number varies with the point; in some cases one can use the maximum (or supremum) condition number over the domain of the function or domain of the question as an overall condition number, while in other cases the condition number at a particular point is of more interest.
One variable
The condition number of a differentiable function $f$ in one variable as a function is $\left|xf'/f\right|$. Evaluated at a point $x$ this is:

$$\left|\frac{x f'(x)}{f(x)}\right|$$

Most elegantly, this can be understood as (the absolute value of) the ratio of the logarithmic derivative of $f$, which is $(\log f)' = f'/f$, and the logarithmic derivative of $x$, which is $(\log x)' = x'/x = 1/x$, yielding a ratio of $xf'/f$. This is because the logarithmic derivative is the infinitesimal rate of relative change in a function: it is the derivative $f'$ scaled by the value of $f$. Note that if a function has a zero at a point, its condition number at the point is infinite, as infinitesimal changes in the input can change the output from zero to positive or negative, yielding a ratio with zero in the denominator, hence infinite relative change.

More directly, given a small change $\Delta x$ in $x$, the relative change in $x$ is $[(x+\Delta x)-x]/x = (\Delta x)/x$, while the relative change in $f(x)$ is $[f(x+\Delta x)-f(x)]/f(x)$. Taking the ratio yields:

$$\frac{[f(x+\Delta x)-f(x)]/f(x)}{(\Delta x)/x} = \frac{x}{f(x)}\cdot\frac{f(x+\Delta x)-f(x)}{(x+\Delta x)-x} = \frac{x}{f(x)}\cdot\frac{f(x+\Delta x)-f(x)}{\Delta x}.$$
The last term is the difference quotient (the slope of the secant line), and taking the limit yields the derivative.
Condition numbers of common elementary functions are particularly important in computing significant figures, and can be computed immediately from the derivative; see significance arithmetic of transcendental functions. A few important ones are given below:
| Name | Symbol | Condition number |
| --- | --- | --- |
| Addition / subtraction | $x+a$ | $\left\lvert \frac{x}{x+a} \right\rvert$ |
| Scalar multiplication | $ax$ | $1$ |
| Division | $1/x$ | $1$ |
| Polynomial | $x^{n}$ | $\lvert n \rvert$ |
| Exponential function | $e^{x}$ | $\lvert x \rvert$ |
| Natural logarithm function | $\ln(x)$ | $\left\lvert \frac{1}{\ln(x)} \right\rvert$ |
| Sine function | $\sin(x)$ | $\lvert x\cot(x) \rvert$ |
| Cosine function | $\cos(x)$ | $\lvert x\tan(x) \rvert$ |
| Tangent function | $\tan(x)$ | $\lvert x(\tan(x)+\cot(x)) \rvert$ |
| Inverse sine function | $\arcsin(x)$ | $\frac{x}{\sqrt{1-x^{2}}\arcsin(x)}$ |
| Inverse cosine function | $\arccos(x)$ | $\frac{\lvert x \rvert}{\sqrt{1-x^{2}}\arccos(x)}$ |
| Inverse tangent function | $\arctan(x)$ | $\frac{x}{(1+x^{2})\arctan(x)}$ |
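Any row of this table can be spot-checked with a finite difference; a sketch of mine (not from the article), taking $f = \ln$ near $x = 1$, where the table predicts the large condition number $\lvert 1/\ln(x) \rvert$:

```python
import math

def rel_condition(f, x, h=1e-7):
    """|x f'(x) / f(x)| with a central-difference derivative."""
    fprime = (f(x + h) - f(x - h)) / (2 * h)
    return abs(x * fprime / f(x))

x = 1.0001
print(rel_condition(math.log, x))        # ~1.0e4, numerically
print(abs(1 / math.log(x)))              # same value, straight from the table
```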
Several variables
Condition numbers can be defined for any function $f$ mapping its data from some domain (e.g. an $m$-tuple of real numbers $x$) into some codomain (e.g. an $n$-tuple of real numbers $f(x)$), where both the domain and codomain are Banach spaces. They express how sensitive that function is to small changes (or small errors) in its arguments. This is crucial in assessing the sensitivity and potential accuracy difficulties of numerous computational problems, for example polynomial root finding or computing eigenvalues.

The condition number of $f$ at a point $x$ (specifically, its relative condition number[4]) is then defined to be the maximum ratio of the fractional change in $f(x)$ to any fractional change in $x$, in the limit where the change $\delta x$ in $x$ becomes infinitesimally small:[4]

$$\lim_{\varepsilon\to 0^{+}}\,\sup_{\|\delta x\|\leq\varepsilon}\left[\left.\frac{\|f(x+\delta x)-f(x)\|}{\|f(x)\|}\right/\frac{\|\delta x\|}{\|x\|}\right],$$

where $\|\cdot\|$ is a norm on the domain/codomain of $f$.

If $f$ is differentiable, this is equivalent to:[4]

$$\frac{\|J(x)\|}{\|f(x)\|/\|x\|},$$

where $J(x)$ denotes the Jacobian matrix of partial derivatives of $f$ at $x$ and $\|J(x)\|$ is the induced norm on the matrix.
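A sketch of the differentiable case with my own toy map (assuming NumPy): $f(x,y) = x + y$ near cancellation, with a finite-difference Jacobian standing in for $J(x)$.

```python
import numpy as np

def f(v):
    x, y = v
    return np.array([x + y])                 # cancellation-prone near y = -x

def jacobian(f, v, h=1e-7):
    m, n = len(f(v)), len(v)
    J = np.zeros((m, n))
    for j in range(n):
        e = np.zeros(n); e[j] = h
        J[:, j] = (f(v + e) - f(v - e)) / (2 * h)
    return J

v = np.array([1.0, -0.999])
J = jacobian(f, v)                            # ~[[1, 1]]
kappa = np.linalg.norm(J, 2) / (np.linalg.norm(f(v)) / np.linalg.norm(v))
print(kappa)                                  # ~2e3: tiny relative input changes
                                              # are amplified ~2000x in the output
```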
References
1. Belsley, David A.; Kuh, Edwin; Welsch, Roy E. (1980). "The Condition Number". Regression Diagnostics: Identifying Influential Data and Sources of Collinearity. New York: John Wiley & Sons. pp. 100–104. ISBN 0-471-05856-4.
2. Pesaran, M. Hashem (2015). "The Multicollinearity Problem". Time Series and Panel Data Econometrics. New York: Oxford University Press. pp. 67–72 [p. 70]. ISBN 978-0-19-875998-0.
3. Cheney; Kincaid (2007-08-03). Numerical Mathematics and Computing. ISBN 978-0-495-11475-8.
4. Trefethen, L. N.; Bau, D. (1997). Numerical Linear Algebra. SIAM. ISBN 978-0-89871-361-9.
https://physics.stackexchange.com/questions/431425/why-dont-we-use-sign-convention-during-the-derivation-of-a-lens-maker-formula | # Why don't we use sign convention during the derivation of a lens maker formula?
Please have a look at the lens maker's formula.

In any derivation in geometrical optics, we use the sign convention twice: once while deriving the formula and again while using it in general cases.

But in the derivation of the lens maker's formula, we don't consider negative and positive values of the radius of curvature while solving for the two spherical surfaces.

This should lead to a wrong answer, and in fact I solved one example in which individual analysis of both spherical surfaces gave a different answer than using the lens maker's formula directly.

I think I am not getting this.

From what I have read, it's because no matter which surface the light hits first, the net refraction is the same, so the sign convention doesn't play a major role. But I am still confused.
I posted this question a while ago and have now realised what I was missing.

In the derivation of any other formula, e.g. the mirror formula or the magnification for mirrors or lenses, we use a specific case involving, usually but not necessarily, a convex lens.

Thus we use the sign convention during the derivation as well as later, while solving problems involving different sets of lenses or mirrors.

However, for the lens maker's formula we use a straight and direct derivation in which the first and second radii can take any value. Why is it that using a specific case isn't important here?

This is because we never use any specific condition, such as a convex surface on both sides. We just say: an object situated in the negative direction undergoes refraction through the first surface and forms an image, which again undergoes refraction through the second surface. There is no specific condition that the image formed is going to lie in the positive coordinates, or that it is going to be enlarged or diminished, etc. So no particular case is assumed during the derivation; we apply the sign convention directly to the problem we are solving.
• Forgot to mention but the answer is really basic and simple. – tiffy Oct 2 '18 at 12:03
I have a better explanation. You see, we have used the formula for refraction at a spherical surface $$\frac{n_2}{v}-\frac{n_1}{u}=\frac{n_2-n_1}{R}$$ in the derivation of the lens maker's formula. Now, the use of the sign convention in any derivation is only to make the formula general. If we don't use the sign convention, we can use the derived formula only in the situation which we considered while deriving it. And since we have already used the sign convention in the derivation of the formula for refraction through spherical surfaces, we don't have to use it again in the derivation of the lens maker's formula.
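To see the two routes agree numerically, here is a small check with toy numbers of my own (Cartesian sign convention, distances measured from the lens, light travelling left to right):

```python
n_air, n_glass = 1.0, 1.5
R1, R2 = +10.0, -10.0        # biconvex lens: centres on opposite sides
u = -30.0                    # object 30 cm to the left (negative by convention)

# Surface 1 (air -> glass): n2/v - n1/u = (n2 - n1)/R
v1 = n_glass / ((n_glass - n_air) / R1 + n_air / u)          # +90.0

# Surface 2 (glass -> air): the surface-1 image acts as the object; thin
# lens, so both surfaces share the same vertex and v1 is reused directly.
v2 = n_air / ((n_air - n_glass) / R2 + n_glass / v1)         # +15.0

# Lens maker's formula, then the thin-lens equation 1/v - 1/u = 1/f:
f = 1.0 / ((n_glass - n_air) * (1.0 / R1 - 1.0 / R2))        # +10.0
v_direct = 1.0 / (1.0 / f + 1.0 / u)                         # +15.0

print(v2, v_direct)          # both 15.0: the two routes agree
```

Both routes put the image at +15 cm for this thin biconvex lens, which is the point of the answer: the sign convention baked into the single-surface formula carries through.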
• Hi user220718, note that it's typical to place a single space after all punctuation marks, not just periods. We also have MathJax enabled on this site to make equations look nice, search "notation" in help center to learn more. – Kyle Kanos Jan 23 '19 at 15:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8551129698753357, "perplexity": 334.70798281448066}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657151197.83/warc/CC-MAIN-20200714181325-20200714211325-00032.warc.gz"} |
https://www.computer.org/csdl/proceedings/quatic/2010/4241/00/4241a089-abs.html | 2010 Seventh International Conference on the Quality of Information and Communications Technology (2010)
Porto, Portugal
Sept. 29, 2010 to Oct. 2, 2010
ISBN: 978-0-7695-4241-6
pp. 89-96
ABSTRACT
Traditionally, test cases are used to check whether a system conforms to its requirements. However, to achieve good quality and coverage, large amounts of test cases are needed, and thus huge efforts have to be put into test generation and maintenance. We propose a methodology, called Abstract Testing, in which test cases are replaced by verification scenarios. Such verification scenarios are more abstract than test cases, thus fewer of them are needed and they are easier to create and maintain. Checking verification scenarios against the source code is done automatically using a software model checker. In this paper we describe the general idea of Abstract Testing, and demonstrate its feasibility by a case study from the automotive systems domain.
INDEX TERMS
abstract testing, verification, requirements engineering, bounded model checking
CITATION
H. Post, T. Kropf, C. Sinz, F. Merz and T. Gorges, "Abstract Testing: Connecting Source Code Verification with Requirements," 2010 Seventh International Conference on the Quality of Information and Communications Technology(QUATIC), Porto, Portugal, 2010, pp. 89-96.
doi:10.1109/QUATIC.2010.14
https://www.anl.gov/article/department-of-energy-awards-flow-into-argonne | # Argonne National Laboratory
Press Release | Argonne National Laboratory
# Department of Energy awards flow into Argonne
The U.S. Department of Energy (DOE) recently awarded $19.7 million to help national laboratories across the country speed promising energy technologies to the marketplace. Argonne received the most funding from the DOE, with nine projects being funded in three divisions. Argonne's Energy Science division received four awards, the Nanoscience and Technology division received one and the Nuclear Engineering division won four awards. DOE Secretary Rick Perry awarded Argonne with nearly $4.7 million in projects as part of the DOE's Office of Technology Transition's Technology Commercialization Fund (TCF) in September.

The 2017 awards represent the second time TCF has distributed funds. Last year, the DOE awarded nearly $16 million, with Argonne winning five awards.

"We did quite well, winning funding for nine projects. It shows Argonne scientists are making a valuable contribution to society with their work." - Hemant Bhimnathwala, a business development manager with Argonne's Technology Commercialization and Partnerships division

According to the DOE, the 2017 funding will support 54 projects across 12 national laboratories working with more than 30 private-sector partners. In his statement, Secretary Perry highlighted "the incredible value of DOE's national laboratories and the importance of bringing the Department's technology transfer mission to the American people."

Argonne's success in securing the funding is important and speaks to the expertise of the laboratory's scientists and engineers, said Hemant Bhimnathwala, a business development manager with Argonne's Technology Commercialization and Partnerships division. "One of the ways laboratories measure success is by how much impact they create through industry engagements. Argonne won nine projects, which is significant, and this TCF funding offers an independent measure of our impact on the U.S. economy."

The funding creates impact for Argonne scientists as well, he said. "It communicates to the scientists and engineers that their work is relevant and can become a useful product or service in the future."

To be considered for the awards, scientists and engineers submit proposals. This year, said Bhimnathwala, Argonne scientists submitted 15 to 20 proposals. "We did quite well, winning funding for nine projects. It shows Argonne scientists are making a valuable contribution to society with their work."

The awards also inspire other Argonne researchers, said Bhimnathwala. "It signals that there are ways to work with industry and possibly bring their work to the marketplace. I hope it will encourage other scientists and engineers to apply in the future."

This year's TCF awards ranged from $75,000 to $750,000. Argonne researchers whose projects received 2017 funding include:

• Acacia Brunett (Nuclear Engineering): NRC Qualification of Advanced Reactor Safety Analysis Software ($75,000)
• Jeff Elam (Energy Systems): Lithium Anodes for Electric Vehicles ($750,000; in partnership with alpha-En)
• Amgad Elgowainy (Energy Systems): Two-Tier Tube-Trailer Consolidation Technology for Fast Fueling of Hydrogen Fuel Cell Electric Vehicles ($749,434; in partnership with FirstElement Fuel, Gas Technology Institute and PDC Machines, Inc.)
• Levent Eryilmaz (Energy Systems): The Application of Catalytically Active Nano-composite Coatings to increase the Service Interval of Automotive Powertrain Applications ($712,450; in partnership with Magna Services of America)
• Darius Lisowski (Nuclear Engineering): Passive, High Efficiency Ventilation for the DRACS and other Natural Circulation Systems ($100,000; in partnership with General Atomics)
• Tanju Sofu (Nuclear Engineering): Joint Development of SAS4A Code in Application to Oxide-fueled LFR Severe Accident Analysis ($400,000; in partnership with Westinghouse Nuclear)
• Jeff Spangenberger (Energy Systems): Development of a Scalable Process for Recovery of Polymers and Residual Metals from Mixed Polymer Content Scrap ($750,000; in partnership with Global Electric Electronic Processing International)
• Ani Sumant (Nanoscience and Technology): Graphene-Based Solid Lubricants for Automotive Applications ($640,000; in partnership with Magna International, Inc.)
• Rick Vilim (Nuclear Engineering): Advanced Physics-Based Fluid System Performance Monitoring to Support Nuclear Power Plant Operations ($500,000; in partnership with LPI, Inc.)

Securing DOE funding for these projects, said Bhimnathwala, "shows the value we're bringing to taxpayers through the commercialization of our technologies."
Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science.
The U.S. Department of Energy’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit the Office of Science website. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26421260833740234, "perplexity": 10105.66073166879}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662519037.11/warc/CC-MAIN-20220517162558-20220517192558-00161.warc.gz"} |
http://tex.stackexchange.com/questions/36660/only-authors-initials-in-bibtex-natbib-using-named-style | # Only author's initials in BibTeX natbib using named style
I have a BibTeX file with a mix of entries, some with full author's names and some with just initials. I would like my typeset bibliography to only use initials. Is there a way to do this?
I have the following BibTeX commands currently in my file:
\usepackage[super,comma,sort&compress]{natbib}
\bibliography{thesis}
\bibliographystyle{named}
I have found documentation on how to do this with biblatex, but not with natbib.
1. Copy the file named.bst (in TeXLive it is in texmf-dist/bibtex/bst/beebe/named.bst) to the file abbrvnamed.bst in your working directory.
2. Find in this file the line
FUNCTION {format.names}
and inside the function the line
{ s nameptr "{ff~}{vv~}{ll}{, jj}" format.name$ 't :=

3. Change this line to

{ s nameptr "{f.~}{vv~}{ll}{, jj}" format.name$ 't :=
Now you can put in your document \bibliographystyle{abbrvnamed}, and get the result you want.
For the curious: in this magic line ff means Full First names, f. means abbreviated First names, vv is "Von part", ll is for Last names, jj is for Junior suffix. Yes, BibTeX language is evil.
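For instance (a sample name of my own, not from the question): with an author field of Donald E. Knuth, the pattern `{ff~}{vv~}{ll}{, jj}` produces roughly "Donald E. Knuth", while `{f.~}{vv~}{ll}{, jj}` produces "D. E. Knuth" — only the first-name part is abbreviated; the von, last and junior parts are left untouched.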
Now, it's even easier: just use \bibliographystyle{abbrvnat} and it works like a charm.
In my file-containg-bibtex-entries.bib I have the following entry:
@article{Taboada2006,
doi = {10.1016/j.pragma.2005.09.010},
issn = {03782166},
journal = {Journal of Pragmatics},
keywords = {Coherence relations,Conjunctions,Connectives,Conversation,Discourse markers,Discourse signalling,Newspaper text,RST,Rhetorical Structure Theory},
month = apr,
number = {4},
pages = {567--592},
title = {{Discourse markers as signals (or not) of rhetorical relations}},
url = {http://www.sciencedirect.com/science/article/pii/S0378216605002249},
volume = {38},
year = {2006}
}
My tex file is the following:
%preamble
...
\usepackage{natbib}
...
\begin{document}
...
% (the post is truncated here; a minimal completion, my own addition:)
\nocite{Taboada2006}
\bibliographystyle{abbrvnat}
\bibliography{file-containg-bibtex-entries}
\end{document}
• Unfortunately, the style abbrvnat does not produce the same formatted output as the named style (with the modifications proposed in @Boris's answer to the format.names function) does. For one, the named style does not output the fields issn, doi, and url, whereas abbrvnat does (see the screenshot you posted). In addition, named prints "April", whereas abbrvnat abbreviates the name of the month to "Apr.". – Mico Dec 12 '15 at 19:38
https://answers.launchpad.net/sikuli/+question/655535 | # open .bat file in separate command prompt
So I've always clicked on "runsikulix" and a command prompt came up, because it's a Windows command script.

That's never been a problem.

But my script became extremely long and complicated, so I chopped it up and had an openApp command reach out to a .bat file that would run another Sikuli script until it was finished.

The problem is that while that script is running you can't ALT SHIFT C.

That's not a problem in itself, because you can kill the script by closing its command prompt.

But it runs under the exact same command prompt as the one that's running the IDE, so if you close it you have to restart the IDE.

Slightly tedious.

Is there a means of having a second command prompt come up for anything that openApp creates?
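One possibility (an untested sketch; the .bat path below is a made-up placeholder, not from this question): route the call through Windows' `start` command, which spawns a fresh console, so the helper script runs detached from the console hosting the IDE.

```python
# Inside the calling Sikuli (Jython) script: instead of pointing openApp at
# the .bat directly, let cmd's "start" open a new console window for it.
openApp(r'cmd.exe /c start "helper" "C:\scripts\helper.bat"')

# Equivalent route through Jython's standard library, in case your Sikuli
# version does not pass argument strings through openApp cleanly:
import subprocess
subprocess.Popen(r'start "helper" "C:\scripts\helper.bat"', shell=True)
```

Closing the new console would then kill only the helper run, leaving the IDE's console untouched.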
https://www.mail-archive.com/[email protected]/msg93402.html | # [NTG-context] Accessing raw titles in textcommand in TOC
Hello,
I would like to display titles differently in TOC than they appear in text. For
example, MyChapter1 -> 1retpahCyM. Basically any highly non-trivial
transformation that really needs Lua.
I've written simple Lua macros before, but the following approach trying to
define textcommand does not work since I can't find the way to pass the raw
title to my transform function.
\startluacode
userdata = userdata or {}
function userdata.mytransform(title)
--context(title) --this is just fine, but isn't very useful
context(string.reverse(title))
end
\stopluacode
\def\transformtitle#1%
{\ctxlua{userdata.mytransform([==[#1]==])}}
\setuplist[chapter][textcommand=\transformtitle]
\starttext
\completecontent
\startchapter[title={Sample Chapter}]
\stopchapter
\stoptext
When I print the actual title that is passed to mytransform, all I get is
\currentlistentrytitle, and I haven't succeeded in expanding it (and there's all
kinds of formatting stuff and so on going on, I suppose). Sections have
"deeptextcommand", which is somewhat what I'm after here, but I've not found a
similar option for the TOC. So, is there a way to get just the raw titles?
Jason C.
http://libros.duhnnae.com/2017/aug/150156921116-The-topology-of-systems-of-hyperspaces-determined-by-dimension-functions-Mathematics-General-Topology.php | # The topology of systems of hyperspaces determined by dimension functions - Mathematics > General Topology
Abstract: Given a non-degenerate Peano continuum $X$, a dimension function $D:2^X_*\to[0,\infty)$ defined on the family $2^X_*$ of compact subsets of $X$, and a subset $\Gamma\subset[0,\infty)$, we recognize the topological structure of the system $(2^X,(D_{\le\gamma}X)_{\gamma\in\Gamma})$, where $2^X$ is the hyperspace of non-empty compact subsets of $X$ and $D_{\le\gamma}X$ is the subspace of $2^X$ consisting of non-empty compact subsets $K\subset X$ with $D(K)\le\gamma$.
Authors: T. Banakh, N. Mazurenko
Source: https://arxiv.org/
http://aas.org/archives/BAAS/v36n5/aas205/1014.htm | AAS 205th Meeting, 9-13 January 2005
Session 170 White Dwarfs and Hot Subdwarfs
Oral, Thursday, January 13, 2005, 2:00-3:30pm, Sunrise
## [170.02] The Initial-Final Mass Relation and the DA/DB White Dwarf Ratio
J. Kalirai, H. Richer (UBC), D. Reitzel, B. Hansen, R. M. Rich (UCLA), G. Fahlman (HIA/NRC), B. Gibson (USwin), T. von Hippel (UTexas)
We present spectroscopic observations of very faint white dwarfs in the rich open star cluster NGC 2099 (M37). With multiobject data from both GMOS on Gemini and LRIS on Keck, we confirm the true WD nature for 21 of 24 faint WD candidates (V > 22.4), all of which were previously identified as possible WDs through CFHT imaging. Fitting 18 of the 21 WD spectra with model atmospheres, we find that the mean derived mass of the sample is 0.8 Msun - about 0.2 Msun larger than the mean seen amongst field WDs. This is expected given the cluster's young age (650 Myr) and, hence, high turn-off mass (~2.4 Msun). A surprising result is that all of the NGC 2099 WDs have hydrogen-rich atmospheres (DAs) and none exhibit helium-rich ones (DBs), or any other spectral class. From a sequence of cooling models of various masses it appears that the most promising scenario for the DA/DB number ratio discrepancy is that hot, high-mass WDs do not develop large enough helium convection zones to allow helium to be brought to the surface and turn a hydrogen-rich WD into a helium-rich one. We also determine a new initial-final mass relationship and nearly double the number of existing data points from previous studies. The results indicate that stars with initial masses between 2.8 and 3.4 Msun lose 75% of their mass through stellar evolution.
We wish to thank the Gemini, Keck and Canada-France-Hawaii Telescopes. J.S.K. acknowledges support from an NSERC PGS-B Graduate Fellowship.
Bulletin of the American Astronomical Society, 36 5
© 2004. The American Astronomical Society.
https://doc.rasdaman.org/04_ql-guide.html | # 4. Query Language Guide¶
## 4.1. Preface¶
### 4.1.1. Overview¶
This guide provides information about how to use the rasdaman database management system (in short: rasdaman). The document explains usage of the rasdaman Query Language.
Follow the instructions in this guide as you develop your application which makes use of rasdaman services. Explanations detail how to create type definitions and instances; how to retrieve information from databases; how to insert, manipulate, and delete data within databases.
### 4.1.2. Audience¶
The information in this manual is intended primarily for application developers; additionally, it can be useful for advanced users of rasdaman applications and for database administrators.
### 4.1.3. Rasdaman Documentation Set¶
This manual should be read in conjunction with the complete rasdaman documentation set which this guide is part of. In its entirety, the documentation set covers all important information needed to work with the rasdaman system, such as programming and query access to databases, guidance on utilities such as raswct, release notes, and additional information on the rasdaman wiki.
## 4.2. Introduction¶
### 4.2.1. Multidimensional Data¶
In principle, any natural phenomenon becomes spatio-temporal array data of some specific dimensionality once it is sampled and quantised for storage and manipulation in a computer system; additionally, a variety of artificial sources such as simulators, image renderers, and data warehouse population tools generate array data. The common characteristic they all share is that a large set of large multidimensional arrays has to be maintained. We call such arrays multidimensional discrete data (or short: MDD) expressing the variety of dimensions and separating them from the conceptually different multidimensional vectorial data appearing in geo databases.
rasdaman is a domain-independent database management system (DBMS) which supports multidimensional arrays of any size and dimension and over freely definable cell types. Versatile interfaces allow rapid application deployment while a set of cutting-edge intelligent optimization techniques in the rasdaman server ensures fast, efficient access to large data sets, particularly in networked environments.
### 4.2.2. rasdaman Overall Architecture¶
The rasdaman client/server DBMS has been designed using internationally approved standards wherever possible. The system follows a two-tier client/server architecture with query processing completely done in the server. Internally and invisible to the application, arrays are decomposed into smaller units which are maintained in a conventional DBMS, for our purposes called the base DBMS.
On the other hand, the base DBMS usually will hold alphanumeric data (such as metadata) besides the array data. rasdaman offers means to establish references between arrays and alphanumeric data in both directions.
Hence, all multidimensional data go into the same physical database as the alphanumeric data, thereby considerably easing database maintenance (consistency, backup, etc.).
Figure 4.1 Embedding of rasdaman in IT infrastructure
Further information on application program interfacing, administration, and related topics is available in the other components of the rasdaman documentation set.
### 4.2.3. Interfaces¶
The syntactical elements explained in this document comprise the rasql language interface to rasdaman. There are several ways to actually enter such statements into the rasdaman system:
• By using the rasql command-line tool to send queries to rasdaman and get back the results.
• By developing an application program which uses the raslib/rasj function oql_execute() to forward query strings to the rasdaman server and get back the results.
Developing applications using the client API is not the subject of this document; please refer to the C++ Developers Guide or Java Developers Guide of the rasdaman documentation set for further information.
### 4.2.4. rasql and Standard SQL¶
The declarative interface to the rasdaman system consists of the rasdaman Query Language, rasql, which supports retrieval, manipulation, and data definition.
Moreover, the rasdaman query language, rasql, is very similar - and in fact embeds into - standard SQL. With only slight adaptations, rasql has been standardized by ISO as 9075 SQL Part 15: MDA (Multi-Dimensional Arrays). Hence, if you are familiar with SQL, you will quickly be able to use rasql. Otherwise you may want to consult the introductory literature referenced at the end of this chapter.
### 4.2.5. Notational Conventions¶
The following notational conventions are used in this manual:
Program text (under this we also subsume queries in the document on hand) is printed in a monotype font. Such text is further differentiated into keywords and syntactic variables. Keywords like struct are printed in boldface; they have to be typed in as is.
An optional clause is enclosed in brackets; an arbitrary repetition is indicated through brackets and an ellipsis. Grammar alternatives can be grouped in parentheses separated by a | symbol.
Example
select resultList
from namedCollection [ [ as ] collIterator ]
[ , namedCollection [ [ as ] collIterator ] ]...
[ where booleanExp ]
It is important not to mix the regular brackets [ and ] denoting array access, trimming, etc., with the grammar brackets [ and ] denoting optional clauses and repetition; in grammar excerpts the first case is in double quotes. The same applies to parentheses.
Italics are used in the text to draw attention to the first instance of a defined term in the text. In this case, the font is the same as in the running text, not Courier as in code pieces.
## 4.3. Terminology¶
### 4.3.1. An Intuitive Definition¶
An array is a set of elements which are ordered in space. The space considered here is discretized, i.e., only integer coordinates are admitted. The number of integers needed to identify a particular position in this space is called the dimension (sometimes also referred to as dimensionality). Each array element, which is referred to as cell, is positioned in space through its coordinates.
A cell can contain a single value (such as an intensity value in case of grayscale images) or a composite value (such as integer triples for the red, green, and blue component of a color image). All cells share the same structure which is referred to as the array cell type or array base type.
Implicitly a neighborhood is defined among cells through their coordinates: incrementing or decrementing any component of a coordinate will lead to another point in space. However, not all points of this (infinite) space will actually house a cell. For each dimension, there is a lower and upper bound, and only within these limits array cells are allowed; we call this area the spatial domain of an array. In the end, arrays look like multidimensional rectangles with limits parallel to the coordinate axes. The database developer defines both spatial domain and cell type in the array type definition. Not all bounds have to be fixed during type definition time, though: It is possible to leave bounds open so that the array can dynamically grow and shrink over its lifetime.
Figure 4.2 Constituents of an array
Synonyms for the term array are multidimensional array / MDA, multidimensional data / MDD, raster data, gridded data. They are used interchangeably in the rasdaman documentation.
In rasdaman databases, arrays are grouped into collections. All elements of a collection share the same array type definition (for the remaining degrees of freedom see Array types). Collections form the basis for array handling, just as tables do in relational database technology.
### 4.3.2. A Technical Definition¶
Programmers who are familiar with the concept of arrays in programming languages maybe prefer this more technical definition:
An array is a mapping from integer coordinates, the spatial domain, to some data type, the cell type. An array’s spatial domain, which is always finite, is described by a pair of lower bounds and upper bounds for each dimension, resp. Arrays, therefore, always cover a finite, axis-parallel subset of Euclidean space.
Cell types can be any of the base types and composite types defined in the ODMG standard and known, for example from C/C++. In fact, most admissible C/C++ types are admissible in the rasdaman type system, too.
In rasdaman, arrays are strictly typed wrt. spatial domain and cell type. Type checking is done at query evaluation time. Type checking can be disabled selectively for an arbitrary number of lower and upper bounds of an array, thereby allowing for arrays whose spatial domains vary over the array lifetime.
## 4.4. Sample Database¶
### 4.4.1. Collection mr¶
This section introduces sample collections used later in this manual. The sample database which is shipped together with the system contains the schema and the instances outlined in the sequel.
Collection mr consists of three images (see Figure 4.3) taken from the same patient using magnetic resonance tomography. Images are 8 bit grayscale with pixel values between 0 and 255 and a size of 256x211.
Figure 4.3 Sample collection mr
### 4.4.2. Collection mr2¶
Collection mr2 consists of only one image, namely the first image of collection mr (Figure 4.4). Hence, it is also 8 bit grayscale with size 256x211.
Figure 4.4 Sample collection mr2
### 4.4.3. Collection rgb¶
The last example collection, rgb, contains one item, a picture of the anthur flower (Figure 4.5). It is an RGB image of size 400x344 where each pixel is composed of three 8 bit integer components for the red, green, and blue component, resp.
Figure 4.5 The collection rgb
## 4.5. Type Definition Using rasql¶
### 4.5.1. Overview¶
Every instance within a database is described by its data type (i.e., there is exactly one data type to which an instance belongs; conversely, one data type can serve to describe an arbitrary number of instances). Each database contains a self-contained set of such type definitions; no other type information, external to a database, is needed for database access.
Types in rasdaman establish a 3-level hierarchy:
• Cell types can be atomic base types (such as char or float) or composite (“struct”) types such as red / green / blue color pixels.
• Array types define arrays over some atomic or struct cell type and a spatial domain.
• Set types describe sets of arrays of some particular array type.
Types are identified by their name, which must be unique within a database and must not exceed a length of 200 characters. Like any other identifier in rasql queries, type names are case-sensitive, consist of only letters, digits, or underscores, and must start with a letter.
### 4.5.2. Cell types¶
#### 4.5.2.1. Atomic types¶
The set of standard atomic types, which is generated during creation of a database, materializes the base types defined in the ODMG standard (cf. Table 4.1).
Table 4.1 rasdaman atomic cell types
type name size description
bool 1 bit [2] true (nonzero value), false (zero value)
octet 8 bit signed integer
char 8 bit unsigned integer
short 16 bit signed integer
unsigned short / ushort 16 bit unsigned integer
long 32 bit signed integer
unsigned long / ulong 32 bit unsigned integer
float 32 bit single precision floating point
double 64 bit double precision floating point
CInt16 32 bit complex of 16 bit signed integers
CInt32 64 bit complex of 32 bit signed integers
CFloat32 64 bit single precision floating point complex
CFloat64 128 bit double precision floating point complex
#### 4.5.2.2. Composite types¶
More complex, composite cell types can be defined arbitrarily, based on the system-defined atomic types. The syntax is as follows:
create type typeName
as (
attrName_1 atomicType_1,
...
attrName_n atomicType_n
)
Attribute names must be unique within a composite type, otherwise an exception is thrown. No other type with the name typeName may pre-exist already.
#### 4.5.2.3. Example¶
An RGB pixel type can be defined as
create type RGBPixel
as (
red char,
green char,
blue char
)
### 4.5.3. Array types¶
An marray (“multidimensional array”) type defines an array type through its cell type (see Cell types) and a spatial domain.
#### 4.5.3.1. Syntax¶
The syntax for creating an marray type is as below. There are two variants, corresponding to the dimensionality specification alternatives described above:
create type typeName
as baseTypeName mdarray domainSpec
where baseTypeName is the name of a defined cell type (atomic or composite) and domainSpec is a multidimensional interval specification as described in the following section.
Alternatively, a composite cell type can be indicated in-place:
create type typeName
as (
attrName_1 atomicType_1,
...
attrName_n atomicType_n
) mdarray domainSpec
No type (of any kind) with name typeName may pre-exist already, otherwise an exception is thrown.
Attribute names must be unique within a composite type, otherwise an exception is thrown.
#### 4.5.3.2. Spatial domain¶
Dimensions and their extents are specified by providing an axis name for each dimension and, optionally, a lower and upper bound:
[ a_1 ( lo_1 : hi_1 ), ... , a_d ( lo_d : hi_d ) ]
[ a_1 , ... , a_d ]
where d is a positive integer number, the a_i are identifiers, and lo_i and hi_i are integers such that lo_i ≤ hi_i. Both lo_i and hi_i can be an asterisk (*) instead of a number, in which case no limit in the particular direction of the axis will be enforced. If the bounds lo_i and hi_i on a particular axis are not specified, they are assumed to be equivalent to *.
Axis names must be unique within a domain specification, otherwise an exception is thrown.
Currently axis names are ignored and cannot be used in queries yet.
#### 4.5.3.3. Examples¶
The following statement defines a 2-D RGB image, based on the definition of RGBPixel as shown above:
create type RGBImage
as RGBPixel mdarray [ x ( 0:1023 ), y ( 0:767 ) ]
An 2-D image without any extent limitation can be defined through:
create type UnboundedImage
as RGBPixel mdarray [ x, y ]
which is equivalent to
create type UnboundedImage
as RGBPixel mdarray [ x ( *:* ), y ( *:* ) ]
Selectively we can also limit only the bounds on the x axis for example:
create type PartiallyBoundedImage
as RGBPixel mdarray [ x ( 0 : 1023 ), y ]
### 4.5.4. Set types¶
A set type defines a collection of arrays sharing the same marray type. Additionally, a collection can also have null values which are used in order to characterise sparse arrays. A sparse array is an array where some of the elements have a null value.
#### 4.5.4.1. Syntax¶
create type typeName
as set ( marrayTypeName [ nullValues ] )
where marrayTypeName is the name of a defined marray type and nullValues is an optional specification of a set of values to be treated as nulls; for semantics in operations refer to Null Values.
No type with the name typeName may pre-exist already.
#### 4.5.4.2. Null Values¶
The optional nullValues clause in a set type definition is a set of null value intervals:
null values [ nullInterval, ... ]
Each nullInterval can be a pair of lower and upper limits (1, 2, 3), or a single (double) value (4):
lo : hi (1)
* : hi (2)
lo : * (3)
nullValue (4)
In case of an interval, the three variants are interpreted as follows:
1. Both lo and hi are double values such that lo ≤ hi;
2. lo is * and hi is a double value, indicating that all values lower than hi are null values;
3. lo is a double value and hi is *, indicating that all values greater than lo are null values.
For floating-point data it is recommended to always specify small intervals instead of single numbers with variant (4).
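For instance, a hedged sketch assuming a hypothetical float-valued array type FloatImage whose no-data marker is around -9999; a small interval is declared instead of the exact value:

create type FloatSet
as set ( FloatImage null values [ -9999.5 : -9998.5 ] )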
##### 4.5.4.2.1. Limitation¶
Currently, only atomic null values can be indicated. They apply to all components of a composite cell simultaneously. In future it may become possible to indicate null values individually per struct component.
#### 4.5.4.3. Example¶
For example, the following statement defines a set type of 2-D RGB images, based on the definition of RGBImage:
create type RGBSet
as set ( RGBImage )
If values 0, 253, 254, and 255 are to be considered null values, this can be specified as follows:
create type RGBSet
as set ( RGBImage null values [ 0, 253 : 255 ] )
Note that these null values will apply equally to every band. It is not possible to separate null values per band.
As the cell type in this case is char (possible values between 0 and 255), the type can be equivalently specified like this:
create type RGBSet
as set ( RGBImage null values [ 0, 253 : * ] )
### 4.5.5. Drop type¶
A type definition can be dropped (i.e., deleted from the database) if it is not in use. This is the case if both of the following conditions hold:
• The type is not used in any other type definition.
• There are no array instances existing which are based, directly or indirectly, on the type on hand.
Further, atomic base types (such as char) cannot be deleted.
Drop type syntax
drop type typeName
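Example

Assuming the set type RGBSet defined earlier is no longer referenced by any collection or other type, it can be removed with:

drop type RGBSet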
### 4.5.6. List available types¶
A list of all types defined in the database can be obtained in textual form, adhering to the rasql type definition syntax. This is done by querying virtual collections (similar to the virtual collection RAS_COLLECTIONNAMES).
Technically, the output of such a query is a list of 1-D char arrays, each one containing one type definition.
#### 4.5.6.1. Syntax¶
select typeColl from typeColl
where typeColl is one of
• RAS_STRUCT_TYPES for struct types
• RAS_MARRAY_TYPES for array types
• RAS_SET_TYPES for set types
• RAS_TYPES for union of all types
Note
Collection aliases can be used, such as:
select t from RAS_STRUCT_TYPES as t
No operations can be performed on the output array.
#### 4.5.6.2. Example output¶
A struct types result may look like this when printed:
create type RGBPixel
as ( red char, green char, blue char )
create type TestPixel
as ( band1 char, band2 char, band3 char )
create type GeostatPredictionPixel
as ( prediction float, variance float )
An marray types result may look like this when printed:
create type GreyImage
as char mdarray [ x, y ]
create type RGBCube
as RGBPixel mdarray [ x, y, z ]
create type XGAImage
as RGBPixel mdarray [ x ( 0 : 1023 ), y ( 0 : 767 ) ]
A set types result may look like this when printed:
create type GreySet
as set ( GreyImage )
create type NullValueTestSet
as set ( NullValueArrayTest null values [5:7] )
An all types result will print combination of all struct types, marray types, and set types results.
### 4.5.7. Changing types¶
The type of an existing collection can be changed to another type through the alter statement.
The new collection type must be compatible with the old one, which means:
• same cell type
• same dimensionality
• no domain shrinking
Changes are allowed, for example, in the null values.
Alter type syntax
alter collection collName
set type collType
where
• collName is the name of an existing collection
• collType is the name of an existing collection type
Usage notes
The collection does not need to be empty, i.e. it may contain array objects.
Currently, only set (i.e., collection) types can be modified.
Example
Update the set type of a collection Bathymetry to a new set type that specifies null values:
alter collection Bathymetry
set type BathymetryWithNullValues
## 4.6. Query Execution with rasql¶
The rasdaman toolkit offers essentially a couple of ways to communicate with a database through queries:
• By submitting queries via command line using rasql; this tool is covered in this section.
• By writing a C++, Java, or Python application that uses the rasdaman APIs (raslib, rasj, or rasdapy3 respectively). See the rasdaman API guides for further details.
The rasql tool accepts a query string (which can be parametrised as explained in the API guides), sends it to the server for evaluation, and receives the result set. Results can be displayed in alphanumeric mode, or they can be stored in files.
### 4.6.1. Examples¶
For the user who is familiar with command line tools in general and the rasql query language, we give a brief introduction by way of examples. They outline the basic principles through common tasks.
• Create a collection test of type GreySet (note the explicit setting of user rasadmin; rasql's default user rasguest cannot write):

rasql -q "create collection test GreySet" \
      --user rasadmin --passwd rasadmin

• Print the names of all existing collections:

rasql -q "select r from RAS_COLLECTIONNAMES as r" \
      --out string

• Export demo collection mr into TIFF files rasql_1.tif, rasql_2.tif, rasql_3.tif (note the escaped double-quotes as required by the shell):

rasql -q "select encode(m, \"tiff\") from mr as m" \
      --out file

• Import TIFF file myfile into collection mr as a new image (note the different query string delimiters to preserve the $ character!):

rasql -q 'insert into mr values decode($1)' \
      -f myfile --user rasadmin --passwd rasadmin

• Put a grey square into every mr image:

rasql -q "update mr as m set m[0:10,0:10] \
          assign marray x in [0:10,0:10] values 127c" \
      --user rasadmin --passwd rasadmin

• Verify the result of the update query by displaying pixel values as hex numbers:

rasql -q "select m[0:10,0:10] from mr as m" --out hex
### 4.6.2. Invocation syntax¶
Rasql is invoked as a command with the query string as parameter. Additional parameters guide detailed behavior, such as authentication and result display.
Any errors or other diagnostic output encountered are printed; transactions are aborted upon errors.
Usage:
rasql [--query q|-q q] [options]
Options:
-h, --help
    show command line switches

-q, --query q
    query string to be sent to the rasdaman server for execution

-f, --file f
    file name for upload through $i parameters within queries; each $i needs its own file parameter, in proper sequence [4]. Requires --mdddomain and --mddtype

--content
    display result, if any (see also --out and --type for output formatting)

--out t
    use display method t for cell values of result MDDs, where t is one of:
    none: do not display result item contents
    file: write each result MDD into a separate file
    string: print result MDD contents as char string (only for 1D arrays of type char)
    hex: print result MDD cells as a sequence of space-separated hex values
    formatted: reserved, not yet supported
    Option --out implies --content; default: none

--outfile of
    file name template for storing result images (ignored for scalar results). Use '%d' to indicate the auto numbering position, like with printf(1). For well-known file types, a proper suffix is appended to the resulting file name. Implies --out file. (default: rasql_%d)

--mdddomain d
    MDD domain, format: '[x0:x1,y0:y1]'; required only if --file is specified and the file is in data format r_Array; if the input file format is some standard data exchange format and the query uses a converter, such as decode($1,"tiff"), then domain information can be obtained from the file header

--mddtype t
    input MDD type (must be a type defined in the database); required only if --file is specified and the file is in data format r_Array; if the input file format is some standard data exchange format and the query uses a converter, such as decode($1,"tiff"), then type information can be obtained from the file header

--type
    display type information for results

-s, --server h
    rasdaman server name or address (default: localhost)

-p, --port p
    rasdaman port number (default: 7001)

-d, --database db
    name of database (default: RASBASE)

--user u
    name of user (default: rasguest)

--passwd p
    password of user (default: rasguest)

--quiet
    print no ornament messages, only results and errors
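As an illustration of the file upload options, the following hedged sketch inserts a raw r_Array file into the test collection created above (the file name data.bin is an assumption); for raw input, domain and type must be stated explicitly:

rasql -q 'insert into test values $1' \
      -f data.bin --mdddomain '[0:255,0:210]' --mddtype GreyImage \
      --user rasadmin --passwd rasadmin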
## 4.7. Overview: General Query Format¶
### 4.7.1. Basic Query Mechanism¶
rasql provides declarative query functionality on collections (i.e., sets) of MDD stored in a rasdaman database. The query language is based on the SQL-92 standard and extends the language with high-level multidimensional operators.
The general query structure is best explained by means of an example. Consider the following query:
select mr[100:150,40:80] / 2
from mr
where some_cells( mr[120:160, 55:75] > 250 )
In the from clause, mr is specified as the working collection on which all evaluation will take place. This name, which serves as an “iterator variable” over this collection, can be used in other parts of the query for referencing the particular collection element under inspection.
Optionally, an alias name can be given to the collection (see syntax below) - however, in most cases this is not necessary.
In the where clause, a condition is phrased. Each collection element in turn is probed, and upon fulfillment of the condition the item is added to the query result set. In the example query, part of the image is tested against a threshold value.
Elements in the query result set, finally, can be “post-processed” in the select clause by applying further operations. In the case on hand, a spatial extraction is done combined with an intensity reduction on the extracted image part.
In summary, a rasql query returns a set fulfilling some search condition just as is the case with conventional SQL and OQL. The difference lies in the operations which are available in the select and where clause: SQL does not support expressions containing multidimensional operators, whereas rasql does.
Syntax
select resultList
from collName [ as collIterator ]
[ , collName [ as collIterator ] ] ...
[ where booleanExp ]
The complete rasql query syntax can be found in the Appendix.
### 4.7.2. Select Clause: Result Preparation¶
Type and format of the query result are specified in the select part of the query. The query result type can be multidimensional, a struct or atomic (i.e., scalar), or a spatial domain / interval. The select clause can reference the collection iteration variable defined in the from clause; each array in the collection will be assigned to this iteration variable successively.
Example
Images from collection mr, with pixel intensity reduced by a factor 2:
select mr / 2
from mr
### 4.7.3. From Clause: Collection Specification¶
In the from clause, the list of collections to be inspected is specified, optionally together with a variable name which is associated to each collection. For query evaluation the cross product between all participating collections is built which means that every possible combination of elements from all collections is evaluated. For instance in case of two collections, each MDD of the first collection is combined with each MDD of the second collection. Hence, combining a collection with n elements with a collection containing m elements results in n*m combinations. This is important for estimating query response time.
Example
The following example subtracts each MDD of collection mr2 from each MDD of collection mr (the binary induced operation used in this example is explained in Binary Induction).
select mr - mr2
from mr, mr2
Using alias variables a and b bound to collections mr and mr2, resp., the same query looks as follows:
select a - b
from mr as a, mr2 as b
Cross products
As in SQL, multiple collections in a from clause such as
from c1, c2, ..., ck
are evaluated to a cross product. This means that the select clause is evaluated for a virtual collection that has n1 * n2 * … * nk elements if c1 contains n1 elements, c2 contains n2 elements, and so forth.
Warning: This holds regardless of the select expression - even if you mention only say c1 in the select clause, the number of result elements will be the product of all collection sizes!
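To illustrate with the sample database, where mr contains 3 arrays and mr2 contains 1: the following query returns 3 * 1 = 3 result elements, although mr2 is never mentioned in the select clause:

select mr
from mr, mr2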
### 4.7.4. Where Clause: Conditions¶
In the where clause, conditions are specified which members of the query result set must fulfil. Like in SQL, predicates are built as boolean expressions using comparison, parenthesis, functions, etc. Unlike SQL, however, rasql offers mechanisms to express selection criteria on multidimensional items.
Example
We want to restrict the previous result to those images where at least one difference pixel value is greater than 50 (see Binary Induction):
select mr - mr2
from mr, mr2
where some_cells( mr - mr2 > 50 )
### 4.7.5. Comments in Queries¶
Comments are texts which are not evaluated by the rasdaman server in any way. However, they are useful - and should be used freely - for documentation purposes; in particular for stored queries it is important that their meaning is clear to later readers.
Syntax
-- any text, delimited by end of line
Example
select mr -- this comment text is ignored by rasdaman
from mr -- for comments spanning several lines,
-- every line needs a separate '--' starter
## 4.8. Constants¶
### 4.8.1. Atomic Constants¶
Atomic constants are written in standard C/C++ style. If necessary, constants are augmented with a one- or two-letter postfix to unambiguously determine their data type (Table 4.2).
The default for integer constants is l, and for floating-point it is d. Specifiers are case insensitive.
Example
25c
-1700L
.4e-5D
Note
Boolean constants true and false are unique, so they do not need a type specifier.
Table 4.2 Data type specifiers
postfix type
o octet
c char
s short
us unsigned short
l long
ul unsigned long
f float
d double
Additionally, the following special floating-point constants are supported as well:
Table 4.3 Special floating-point constants corresponding to IEEE 754 NaN and Inf.
Constant Type
NaN double
NaNf float
Inf double
Inff float
#### 4.8.1.1. Complex numbers¶
Special built-in types are CFloat32 and CFloat64 for single and double precision complex numbers, resp, as well as CInt16 and CInt32 for signed integer complex numbers.
Syntax
complex( re, im )
where re and im are integer or floating point expressions. The resulting constant type is summarized on the table below. Further re/im type combinations are not supported.
Table 4.4 Complex constant type depends on the type of the re and im arguments.
type of re type of im type of complex constant
short short CInt16
long long CInt32
float float CFloat32
double double CFloat64
Example
complex( .35d, 16.0d ) -- CFloat64
complex( .35f, 16.0f ) -- CFloat32
complex( 5s, 16s ) -- CInt16
complex( 5, 16 ) -- CInt32
Component access
The complex parts can be extracted with .re and .im; more details can be found in the Induced Operations section.
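For instance, based on the extractors above, the following expressions evaluate to 0.35 and 16.0, resp.:

complex( .35d, 16.0d ).re
complex( .35d, 16.0d ).im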
### 4.8.2. Composite Constants¶
Composite constants resemble records (“structs”) over atomic constants or other records. Notation is as follows.
Syntax
struct { const_0, ..., const_n }
where const_i must be of atomic or complex type, i.e. nested structs are not supported.
Example
struct{ 0c, 0c, 0c } -- black pixel in an RGB image, for example
struct{ 1l, true } -- mixed component types
Component access
See Struct Component Selection for details on how to extract the constituents from a composite value.
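As a quick preview, using the sample rgb collection, a single band can be selected by attribute name:

select rgb.red
from rgb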
### 4.8.3. Array Constants¶
Small array constants can be indicated literally. An array constant consists of the spatial domain specification (see Spatial Domain) followed by the cell values, whereby value sequencing is as follows. The array is linearized such that the lowest dimension [5] is the "outermost" dimension and the highest dimension [6] is the "innermost" one. Within each dimension, elements are listed sequentially, starting with the lower bound and proceeding until the upper bound. List elements for the innermost dimension are separated by comma ",", all others by semicolon ";".
The exact number of values as specified in the leading spatial domain expression must be provided. All constants must have the same type; this will be the result array’s base type.
Syntax
< mintervalExp
scalarList_0 ; ... ; scalarList_n ; >
where scalarList is defined as a comma separated list of literals:
scalar_0, scalar_1, ... scalar_n ;
Example
< [-1:1,-2:2] 0, 1, 2, 3, 4;
1, 2, 3, 4, 5;
2, 3, 4, 5, 6 >
This constant expression defines the following matrix:

0 1 2 3 4
1 2 3 4 5
2 3 4 5 6
### 4.8.4. Object identifier (OID) Constants¶
OIDs serve to uniquely identify arrays (see Linking MDD with Other Data). Within a database, the OID of an array is an integer number. To use an OID outside the context of a particular database, it must be fully qualified with the system name where the database resides, the name of the database containing the array, and the local array OID.
The worldwide unique array identifiers, i.e., OIDs, consist of three components:
• A string containing the system where the database resides (system name),
• A string containing the database (“base name”), and
• A number containing the local object id within the database.
The full OID is enclosed in ‘<’ and ‘>’ characters, the three name components are separated by a vertical bar ‘|’.
System and database names obey the naming rules of the underlying operating system and base DBMS, i.e., usually they are made up of lower and upper case characters, underscores, and digits, with digits not as the first character. Any additional white space (space, tab, or newline characters) in between is assumed to be part of the name, so this should be avoided.
The local OID is an integer number.
Syntax
< systemName | baseName | objectID >
objectID
where systemName and baseName are string literals and objectID is an integerExp.
Example
< acme.com | RASBASE | 42 >
42
### 4.8.5. String constants¶
A sequence of characters delimited by double quotes is a string.
Syntax
"..."
Example
SELECT encode(coll, "png") FROM coll
### 4.8.6. Collection Names¶
Collections are named containers for sets of MDD objects (see Linking MDD with Other Data). A collection name is made up of lower and upper case characters, underscores, and digits. Depending on the underlying base DBMS, names may be limited in length, and some systems (rare though) may not distinguish upper and lower case letters.
Operations available on name constants are string equality “=” and inequality “!=”.
## 4.9. Spatial Domain Operations¶
### 4.9.1. One-Dimensional Intervals¶
One-dimensional (1D) intervals describe non-empty, consecutive sets of integer numbers, described by integer-valued lower and upper bound, resp.; negative values are admissible for both bounds. Intervals are specified by indicating lower and upper bound through integer-valued expressions according to the following syntax:
The lower and upper bounds of an interval can be extracted using the functions .lo and .hi.
Syntax
integerExp_1 : integerExp_2
intervalExp.lo
intervalExp.hi
A one-dimensional interval with integerExp_1 as lower bound and integerExp_2 as upper bound is constructed. The lower bound must be less or equal to the upper bound. Lower and upper bound extractors return the integer-valued bounds.
Examples
An interval ranging from -17 up to 245 is written as:
-17 : 245
Conversely, the following expression evaluates to 245; note the parenthesis to enforce the desired evaluation sequence:
(-17 : 245).hi
### 4.9.2. Multidimensional Intervals¶
Multidimensional intervals (m-intervals) describe areas in space, or better said: point sets. These point sets form rectangular and axis-parallel "cubes" of some dimension. An m-interval's dimension is given by the number of 1D intervals needed to describe it; the bounds of the "cube" are indicated by the lower and upper bound of the respective 1D interval in each dimension.
From an m-interval, the intervals describing a particular dimension can be extracted by indexing the m-interval with the number of the desired dimension using the operator [].
Dimension counting in an m-interval expression runs from left to right, starting with lowest dimension number 0.
Syntax
[ intervalExp_0 , ... , intervalExp_n ]
[ intervalExp_0 , ... , intervalExp_n ] [ integerExp ]
An (n+1)-dimensional m-interval with the specified intervalExp_i is built where the first dimension is described by intervalExp_0, etc., until the last dimension described by intervalExp_n.
Example
A 2-dimensional m-interval ranging from -17 to 245 in dimension 1 and from 42 to 227 in dimension 2 can be denoted as
[ -17 : 245, 42 : 227 ]
The expression below evaluates to [42:227].
[ -17 : 245, 42 : 227 ] [1]
…whereas here the result is 42:
[ -17 : 245, 42 : 227 ] [1].lo
## 4.10. Array Operations¶
As we have seen in the last Section, intervals and m-intervals describe n-dimensional regions in space.
Next, we are going to place information into the regular grid established by the m-intervals so that, at the position of every integer-valued coordinate, a value can be stored. Each such value container addressed by an n-dimensional coordinate will be referred to as a cell. The set of all the cells described by a particular m-interval and with cells over a particular base type, then, forms the array.
As before with intervals, we introduce means to describe arrays through expressions, i.e., to derive new arrays from existing ones. Such operations can change an array's shape and dimension (sometimes called geometric operations), or the cell values (referred to as value-changing operations), or both. In extreme cases, array dimension, size, and base type can all change completely, for example in the case of a histogram computation.
First, we describe the means to query and manipulate an array’s spatial domain (so-called geometric operations), then we introduce the means to query and manipulate an array’s cell values (value-changing operations).
Note that some operations are restricted in the operand domains they accept, as is common in arithmetics in programming languages; division by zero is a common example. Arithmetic Errors and Other Exception Situations contains information about possible error conditions, how to deal with them, and how to prevent them.
### 4.10.1. Spatial Domain¶
The m-interval covered by an array is called the array’s spatial domain. Function sdom() allows to retrieve an array’s current spatial domain. The current domain of an array is the minimal axis-parallel bounding box containing all currently defined cells.
As arrays can have variable bounds according to their type definition (see Array types), their spatial domain cannot always be determined from the schema information, but must be recorded individually by the database system. In case of a fixed-size array, this will coincide with the schema information, in case of a variable-size array it delivers the spatial domain to which the array has been set. The operators presented below and in Update allow to change an array’s spatial domain. Notably, a collection defined over variable-size arrays can hold arrays which, at a given moment in time, may differ in the lower and/or upper bounds of their variable dimensions.
Syntax
sdom( mddExp )
Function sdom() evaluates to the current spatial domain of mddExp.
Examples
Consider an image a of collection mr. Elements from this collection are defined as having free bounds, but in practice our collection elements all have spatial domain [0 : 255, 0 : 210]. Then, the following equivalences hold:
sdom(a) = [0 : 255, 0 : 210]
sdom(a)[0] = [0 : 255]
sdom(a)[0].lo = 0
sdom(a)[0].hi = 255
### 4.10.2. Geometric Operations¶
#### 4.10.2.1. Trimming¶
Reducing the spatial domain of an array while leaving the cell values unchanged is called trimming. Array dimension remains unchanged.
Figure 4.6 Spatial domain modification through trimming (2-D example)
The generalized trim operator allows restriction, extension, and a combination of both operations in a shorthand syntax. This operator does not check for proper subsetting or supersetting of the domain modifier.
Syntax
mddExp [ mintervalExp ]
Examples
The following query returns cutouts from the area [120:160, 55:75] of all images in collection mr (see Figure 4.7).
select mr[ 120:160, 55:75 ]
from mr
Figure 4.7 Trimming result
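The generalized trim operator mentioned above can also reach beyond the original bounds. A hedged sketch of the combined form, assuming mr's domain [0:255, 0:210] (how the newly covered cells are filled follows the extension semantics, cf. extend() below):

select mr[ -10:265, 50:150 ]
from mr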
#### 4.10.2.2. Section¶
A section allows to extract lower-dimensional layers (“slices”) from an array.
Figure 4.8 Single and double section through 3-D array, yielding 2-D and 1-D sections.
A section is accomplished through a trim expression by indicating the slicing position rather than a selection interval. A section can be made in any dimension within a trim expression. Each section reduces the dimension by one.
Syntax
mddExp [ integerExp_0 , ... , integerExp_n ]
This makes sections through mddExp at positions integerExp_i for each dimension i.
Example
The following query produces a 2-D section in the 2nd dimension of a 3-D cube:
select Images3D[ 0:256, 10, 0:256 ]
from Images3D
Note
If a section is done in every dimension of an array, the result is one single cell. This special case resembles array element access in programming languages, e.g., C/C++. However, in rasql the result still is an array, namely one with zero dimensions and exactly one element.
Example
The following query delivers a set of 0-D arrays containing single pixels, namely the ones with coordinate [100,150]:
select mr[ 100, 150 ]
from mr
#### 4.10.2.3. Trim Wildcard Operator “*”¶
An asterisk “*” can be used as a shorthand for an sdom() invocation in a trim expression; the following phrases all are equivalent:
a [ *:*, *:* ] = a [ sdom(a)[0] , sdom(a)[1] ]
= a [ sdom(a)[0].lo : sdom(a)[0].hi ,
sdom(a)[1].lo : sdom(a)[1].hi ]
An asterisk “*” can appear at any lower or upper bound position within a trim expression denoting the current spatial domain boundary. A trim expression can contain an arbitrary number of such wildcards. Note, however, that an asterisk cannot be used for specifying a section.
Example
The following are valid applications of the asterisk operator:
select mr[ 50:*, *:200 ]
from mr
select mr[ *:*, 10:150 ]
from mr
The next is illegal because it attempts to use an asterisk in a section:
select mr[ *, 100:200 ] -- illegal "*" usage in dimension 0
from mr
Note
It is well possible (and often recommended) to use an array’s spatial domain or part of it for query formulation; this makes the query more general and, hence, allows to establish query libraries. The following query cuts away the rightmost pixel line from the images:
select mr[ *:*, *:sdom(mr)[1].hi - 1 ] -- good, portable
from mr
In the next example, conversely, trim bounds are written explicitly; this query’s trim expression, therefore, cannot be used with any other array type.
select mr[ 0:767, 0:1023 ] -- bad, not portable
from mr
One might get the idea that the last query evaluates faster. This, however, is not the case; the server’s intelligent query engine makes the first version execute at just the same speed.
#### 4.10.2.4. Positionally-independent Subsetting¶
Rasdaman supports positionally-independent subsetting like in WCPS and SQL/MDA, where for each trim/slice the axis name is indicated as well, e.g.
select mr2[d0(0:100), d1(50)] from mr2
The axis names give a reference to the addressed axes, so the order doesn’t matter anymore. This is equivalent:
select mr2[d1(50), d0(0:100)] from mr2
Furthermore, not all axes have to be specified. Any axes which are not specified default to “:”. For example:
select mr2[d1(50)] from mr2
=
select mr2[d0(*:*), d1(50)] from mr2
The two subset formats cannot be mixed, e.g. this is an error:
select mr2[d0(0:100), 50] from mr2
#### 4.10.2.5. Shifting a Spatial Domain¶
Built-in function shift() transposes an array: its spatial domain remains unchanged in shape, but all cell contents simultaneously are moved to another location in n-dimensional space. Cell values themselves remain unchanged.
Syntax
shift( mddExp , pointExp )
The function accepts an mddExp and a pointExp and returns an array whose spatial domain is shifted by vector pointExp.
Example
The following expression evaluates to an array with spatial domain [3:13, 4:24]. Containing the same values as the original array a.
shift( a[ 0:10, 0:20 ], [ 3, 4 ] )
#### 4.10.2.6. Extending a Spatial Domain¶
Function extend() enlarges a given MDD with the domain specified. The domain for extending must, for every boundary element, be at least as large as the MDD’s domain boundary. The new MDD contains 0 values in the extended part of its domain and the MDD’s original cell values within the MDD’s domain.
Syntax
extend( mddExp , mintervalExp )
The function accepts an mddExp and a mintervalExp and returns an array whose spatial domain is extended to the new domain specified by mintervalExp. The result MDD has the same cell type as the input MDD.
Precondition:
sdom( mddExp ) contained in mintervalExp
Example
Assuming that MDD a has a spatial domain of [0:50, 0:25], the following expression evaluates to an array with spatial domain [-100:100, -50:50], a’s values in the subdomain [0:50, 0:25], and 0 values at the remaining cell positions.
extend( a, [-100:100, -50:50] )
#### 4.10.2.7. Geographic projection¶
##### 4.10.2.7.1. Overview¶
“A map projection is any method of representing the surface of a sphere or other three-dimensional body on a plane. Map projections are necessary for creating maps. All map projections distort the surface in some fashion. Depending on the purpose of the map, some distortions are acceptable and others are not; therefore different map projections exist in order to preserve some properties of the sphere-like body at the expense of other properties.” (Wikipedia)
Each coordinate tieing a geographic object, map, or pixel to some position on earth (or some other celestial object, for that matter) is valid only in conjunction with the Coordinate Reference System (CRS) in which it is expressed. For 2-D Earth CRSs, a set of CRSs and their identifiers is normatively defined by the OGP Geomatics Committee, formed in 2005 by the absorption into OGP of the now-defunct European Petroleum Survey Group (EPSG). By way of tradition, however, this set of CRS definitions still is known as “EPSG”, and the CRS identifiers as “EPSG codes”. For example, EPSG:4326 references the well-known WGS84 CRS.
##### 4.10.2.7.2. The project() function¶
Assume an MDD object M and two CRS identifiers C1 and C2 such as “EPSG:4326”. The project() function establishes an output MDD, with same dimension as M, whose contents is given by projecting M from CRS C1 into CRS C2.
The project() function comes in several variants based on the provided input arguments
(1) project( mddExpr, boundsIn, crsIn, crsOut )
(2) project( mddExpr, boundsIn, crsIn, crsOut, resampleAlg )
(3) project( mddExpr, boundsIn, crsIn, boundsOut, crsOut,
widthOut, heightOut )
(4) project( mddExpr, boundsIn, crsIn, boundsOut, crsOut,
widthOut, heightOut, resampleAlg, errThreshold )
(5) project( mddExpr, boundsIn, crsIn, boundsOut, crsOut,
xres, yres)
(6) project( mddExpr, boundsIn, crsIn, boundsOut, crsOut,
xres, yres, resampleAlg, errThreshold )
where
• mddExpr - MDD object to be reprojected.
• boundsIn - geographic bounding box given as a string of comma-separated floating-point values of the format: "xmin, ymin, xmax, ymax".
• crsIn - geographic CRS as a string. Internally, the project() function is mapped to GDAL; hence, it accepts the same CRS formats as GDAL:
• Well Known Text (as per GDAL)
• “EPSG:n”
• “EPSGA:n”
• “AUTO:proj_id,unit_id,lon0,lat0” indicating OGC WMS auto projections
• "urn:ogc:def:crs:EPSG::n" indicating OGC URNs (deprecated by OGC)
• PROJ.4 definitions
• well known names, such as NAD27, NAD83, WGS84 or WGS72.
• WKT in ESRI format, prefixed with “ESRI::”
• “IGNF:xxx” and “+init=IGNF:xxx”, etc.
• Since recently (v1.10), GDAL also supports OGC CRS URLs, OGC’s preferred way of identifying CRSs.
• boundsOut - geographic bounding box of the projected output, given in the same format as boundsIn. This can be “smaller” than the input bounding box, in which case the input will be cropped.
• crsOut - geographic CRS of the result, in same format as crsIn.
• widthOut, heightOut - integer grid extents of the result; the result will be accordingly scaled to fit in these extents.
• xres, yres - axis resolution in target georeferenced units.
• resampleAlg - resampling algorithm to use, equivalent to the ones in GDAL:
near
Nearest neighbour (default, fastest algorithm, worst interpolation quality).
bilinear
Bilinear resampling (2x2 kernel).
cubic
Cubic convolution approximation (4x4 kernel).
cubicspline
Cubic B-spline approximation (4x4 kernel).
lanczos
Lanczos windowed sinc (6x6 kernel).
average
Average of all non-NODATA contributing pixels. (GDAL >= 1.10.0)
mode
Selects the value which appears most often of all the sampled points. (GDAL >= 1.10.0)
max
Selects the maximum value from all non-NODATA contributing pixels. (GDAL >= 2.0.0)
min
Selects the minimum value from all non-NODATA contributing pixels. (GDAL >= 2.0.0)
med
Selects the median value of all non-NODATA contributing pixels. (GDAL >= 2.0.0)
q1
Selects the first quartile value of all non-NODATA contributing pixels. (GDAL >= 2.0.0)
q3
Selects the third quartile value of all non-NODATA contributing pixels. (GDAL >= 2.0.0)
• errThreshold - error threshold for transformation approximation (in pixel units - defaults to 0.125).
Example
The following expression projects the MDD worldMap with bounding box “-180, -90, 180, 90” in CRS EPSG 4326, into EPSG 54030:
project( worldMap, "-180, -90, 180, 90", "EPSG:4326", "EPSG:54030" )
The next example reprojects a subset of MDD Formosat with geographic bbox "265725, 2544015, 341595, 2617695" in EPSG 32651, to bbox "120.630936455 23.5842129067 120.77553782 23.721772322" in EPSG 4326, fit into a 256 x 256 pixels area. The resampling algorithm is set to cubic, and the pixel error threshold is 0.1.
project( Formosat[ 0:2528, 0:2456 ],
"265725, 2544015, 341595, 2617695", "EPSG:32651",
"120.630936455 23.5842129067 120.77553782 23.721772322", "EPSG:4326",
256, 256, cubic, 0.1 )
Limitations
Only 2-D arrays are supported. For multiband arrays, all bands must be of the same cell type.
##### 4.10.2.7.3. Notes¶
Reprojection implies resampling of the cell values into a new grid, hence usually they will change.
As the resampling process typically requires a larger area than the reprojected data area itself, it is advisable to project an area smaller than the total domain of the MDD.
Per se, rasdaman is a domain-agnostic Array DBMS and, hence, does not know about CRSs; specific geo semantics is added by rasdaman's petascope layer. However, for the sake of performance, the reprojection capability - which in geo service practice is immensely important - is pushed down into rasdaman, rather than doing reprojection in petascope's Java code. To this end, the project() function provides rasdaman with enough information to perform a reprojection, however, without "knowing" anything in particular about geographic coordinates and CRSs. One consequence is that there is no check whether this lat/long projection is applied to the proper axes of an array; it is up to the application (usually: petascope) to handle axis semantics.
### 4.10.3. Clipping Operations¶
Clipping is a general operation covering polygon clipping, linestring selection, polytope clipping, curtain queries, and corridor queries. Presently, all operations are available in rasdaman via the clip function.
Further examples of clipping can be found in the systemtest for clipping.
#### 4.10.3.1. Polygons¶
##### 4.10.3.1.1. Syntax¶
select clip( c, polygon(( list of WKT points )) )
from coll as c
The input consists of an MDD expression and a list of WKT points, which determines the set of vertices of the polygon. Polygons are assumed to be closed with positive area, so the first vertex need not be repeated at the end, but there is no problem if it is. The algorithms used support polygons with self-intersection and vertex re-visitation.
Polygons may have interiors defined, such as
polygon( ( 0 0, 9 0, 9 9, 0 9, 0 0),
( 3 3, 7 3, 7 7, 3 7, 3 3 ) )
which would describe the annular region of the box [0:9,0:9] with the interior box [3:7,3:7] removed. In this case, the interior polygons (there may be many, as it forms a list) must not intersect the exterior polygon.
#### 4.10.3.2. Multipolygons¶
##### 4.10.3.2.1. Syntax¶
select clip( c, multipolygon((( list of WKT points )),(( list of WKT points ))...) )
from coll as c
The input consists of an MDD expression and a list of polygons defined by lists of WKT points. The assumptions about polygons are the same as the ones for Polygon.
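For illustration, a hedged sketch clipping two disjoint axis-parallel squares in one operation (coordinates chosen arbitrarily within the collection's domain):

select clip( c, multipolygon( (( 0 0, 9 0, 9 9, 0 9 )),
                              (( 20 20, 29 20, 29 29, 20 29 )) ) )
from coll as c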
##### 4.10.3.2.2. Return type¶
The output of a polygon query is a new array with dimensions corresponding to the bounding box of the polygon vertices, and further restricted to the collection's spatial domain. In case of a multipolygon, the new array has dimensions corresponding to the closure of the bounding boxes of every individual polygon whose domain intersects the collection's spatial domain. The data in the array consists of null values where cells lie outside the polygon (or 0 values if no null values are associated with the array) and otherwise consists of the data in the collection where the corresponding cells lie inside the polygon. This could change the null values stored outside the polygon from one null value to another null value, in case a range of null values is used. By default, the first available null value will be utilized for the complement of the polygon.
An illustrative example of a polygon clipping is the right triangle with vertices located at (0,0,0), (0,10,0) and (0,10,10), which can be selected via the following query:
select clip( c, polygon((0 0 0, 0 10 0, 0 10 10)) )
from coll as c
##### 4.10.3.2.3. Oblique polygons with subspacing¶
In case all the points in a polygon are coplanar, in some MDD object d of higher dimension than 2, users can first perform a subspace operation on d which selects the 2-D oblique subspace of d containing the polygon. For example, if the polygon is the triangle polygon((0 0 0, 1 1 1, 0 1 1, 0 0 0)), this triangle can be selected via the following query:
select clip( subspace(d, (0 0 0, 1 1 1, 0 1 1) ),
polygon(( 0 0, 1 1 , 0 1 , 0 0)) )
from coll as d
where the result of subspace(d) is used as the domain of the polygon. For more information look in Subspace Queries.
#### 4.10.3.3. Linestrings¶
##### 4.10.3.3.1. Syntax¶
select clip( c, linestring( list of WKT points ) ) [ with coordinates ]
from coll as c
The input parameter c refers to an MDD expression of dimension equal to the dimension of the points in the list of WKT points. The list of WKT points is given as in linestring(0 0, 19 -3, 19 -21), which in this case describes the 3 endpoints of 2 line segments sharing the endpoint 19 -3.
##### 4.10.3.3.2. Return type¶
The output consists of a 1-D MDD object consisting of the points selected along the path drawn out by the linestring. The points are selected using a Bresenham Line Drawing algorithm which passes through the spatial domain in the MDD expression c, and selects values from the stored object. In case the linestring spends some time outside the spatial domain of c, the first null value will be used to fill the result of the linestring, just as in polygon clipping.
When with coordinates is specified, in addition to the original cell values the coordinate values are also added to the result MDD. The result cell type for clipped MDD of dimension N will be composite of the following form:
1. If the original cell type elemtype is non-composite:
{ long d1, ..., long dN, elemtype value }
2. Otherwise, if the original cell type is composite of M bands:
{ long d1, ..., long dN, elemtype1 elemname1, ..., elemtypeM elemnameM }
##### 4.10.3.3.3. Example¶
Select a linestring from rgb data with coordinates. The first two values of each cell in the result are the x/y coordinates, while the following values (three in this case, for RGB data) are the cell values of the clip operation to which with coordinates is applied.
select encode(
clip( c, linestring(0 19, 19 24, 12 17) ) with coordinates, "json")
from rgb as c
Result:
["0 19 119 208 248","1 19 119 208 248","2 20 119 208 248", ...]
The same query without specifying with coordinates:
select encode(
clip( c, linestring(0 19, 19 24, 12 17) ), "json")
from rgb as c
results in
["119 208 248","119 208 248","119 208 248", ...]
#### 4.10.3.4. Curtains¶
##### 4.10.3.4.1. Syntax¶
select clip( c, curtain( projection(dimension pair),
polygon(( ... )) ) )
from coll as c
and
select clip( c, curtain( projection(dimension list),
linestring( ... ) ) )
from coll as c
The input in both variants consists of a dimension list corresponding to the dimensions in which the geometric object, either the polygon or the linestring, is defined. The geometry object is defined as per the above descriptions; however, the following caveat applies: the spatial domain of the mdd expression is projected along the projection dimensions in the projection(dimension list). For a polygon clipping, which is 2-D, the dimension list is a pair of values such as projection(0, 2) which would define a polygon in the axial dimensions of 0 and 2 of the MDD expression c. For instance, if the spatial domain of c is [0:99,0:199,0:255], then this would mean the domain upon which the polygon is defined would be [0:99,0:255].
##### 4.10.3.4.2. Return type¶
The output consists of a polygon clipping at every slice of the spatial domain of c. For instance, if the projection dimensions of (0, 2) are used for the same spatial domain of c above, then a polygon clipping is performed at every slice of c of the form [0:99,x,0:255] and appended to the result MDD object, where there is a slice for each value of x in [0:199].
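Continuing this example, a curtain over the projection dimensions (0, 2) of a 3-D collection coll with the spatial domain above might be selected like this (a sketch):

select clip( c, curtain( projection(0, 2),
                         polygon(( 10 10, 10 200, 90 200, 90 10 )) ) )
from coll as c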
#### 4.10.3.5. Corridors¶
##### 4.10.3.5.1. Syntax¶
select clip( c, corridor( projection(dimension pair),
linestring( ... ),
polygon(( ... )) ) )
from coll as c
and
select clip( c, corridor( projection(dimension pair),
linestring( ... ),
polygon(( ... )),
discrete ) )
from coll as c
The input consists of a dimension list corresponding to the dimensions in which the geometric object, in this case a polygon, is defined. The linestring specifies the path along which this geometric object is integrated. One slice is sampled at every point, and at least the first point of the linestring should be contained within the polygon to ensure a meaningful result (an error is thrown in case it is not). There is an optional discrete flag which modifies the output by skipping the extrapolation of the linestring data to interior points.
##### 4.10.3.5.2. Return type¶
The output consists of a polygon clipping at every slice of the spatial domain of c translated along the points in the linestring, where the first axis of the result is indexed by the linestring points and the latter axes are indexed by the mask dimensions (in this case, the convex hull of the polygon). The projection dimensions are otherwise handled as in curtains; it is the spatial offsets given by the linestring coordinates which impact the changes in the result. In the case where the discrete parameter was utilized, the output is indexed by the number of points in the linestring description in the query and not by the extrapolated linestring, which uses a Bresenham algorithm to find the grid points in between.
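As a sketch, assuming a 3-D collection coll, a corridor sweeping a small square mask (defined in the projection dimensions 0 and 1) along a linestring could be phrased as follows; note that the first linestring point projects into the polygon, as required:

select clip( c, corridor( projection(0, 1),
                          linestring( 0 0 0, 10 10 5 ),
                          polygon(( 0 0, 0 5, 5 5, 5 0 )) ) )
from coll as c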
#### 4.10.3.6. Subspace Queries¶
Here we cover the details of subspace queries in rasdaman. Much like slicing via a query such as
select c[0:9,1,0:9] from collection as c
the subspace query parameter allows users to extract a lower-dimensional dataset from an existing collection. It is capable of everything that a slicing query is capable of, and more. The limitation of slicing is that the selected data must lie either parallel or perpendicular to existing axes; however, with subspacing, users can arbitrarily rotate the axes of interest to select data in an oblique fashion. This control is exercised by defining an affine subspace from a list of vertices lying in the datacube. Rasdaman takes these points and finds the unique lowest-dimensional affine subspace containing them, and outputs the data closest to this slice, contained in the bounding box of the given points, into the resulting array.
Structure of the query:
select clip( c, subspace(list of WKT points) )
from coll as c
We can illustrate the usage with an example of two queries which are identical in output:
select clip( c, subspace(0 0 0, 1 0 0, 0 0 1) ) from coll as c
select c[0:1,0,0:1] from coll as c
This example will result in 1D array of sdom [0:99]:
select clip( c, subspace(19 0, 0 99) ) from test_rgb as c
This example will result in a 2D array of sdom [0:7,0:19]:
select clip( c, subspace(0 0 0, 0 19 0, 7 0 7) )
from test_grey3d as c
and it will consist of the best integer lattice points reachable by the vectors (1,0,1) and (0,1,0) within the bounding box domain of [0:7,0:19,0:7] in test_grey3d.
Generally speaking, rasdaman uses the 1st point as a basepoint for an affine subspace containing all given points, constructs a system of equations to determine whether or not a point is in that subspace or not, and then searches the bounding box of the given points for solutions to the projection operator which maps [0:7,0:19,0:7] to [0:7,0:19]. The result dimensions are chosen such that each search yields a unique real solution, and then rasdaman rounds to the nearest integer cell before adding the value stored in that cell to the result object.
Some mathematical edge cases:
Because of arithmetic on affine subspaces, the following two queries are fundamentally identical to rasdaman:
select clip( c, subspace(0 0 0, 1 1 0, 0 1 0) )
from test_grey3d as c
select clip( c, subspace(0 0 0, 1 0 0, 0 1 0) )
from test_grey3d as c
Rasdaman’s convention is to use the first point as the translation point, and constructs the vectors generating the subspace from the differences. There is no particular reason not to use another point in the WKT list; however, knowing this, users should be aware that affine subspaces differ slightly from vector subspaces in that the following two queries differ:
select clip( c, subspace(10 10 10, 0 0 10, 10 0 10) )
from test_grey3d as c
select clip( c, subspace(0 0 0, 10 10 0, 0 10 0) )
from test_grey3d as c
The two queries have the same result domains of [0:10,0:10], and the projection for both lie on the first 2 coordinate axes since the 3rd coordinate remains constant; however, the data selections differ because the subspaces generated by these differ, even though the generating vectors of (1 1 0) and (0 1 0) are the same.
Even though the bounding box where one searches for solutions is the same between these two queries, there is no way to reach the origin with the vectors (1 1 0) and (0 1 0) starting at the base point of (10 10 10) because neither vector can impact the 3rd coordinate value of 10; similarly, starting at (0 0 0) must leave the third coordinate fixed at 0. There is nothing special about choosing the first coordinate as our base point – the numbers might change, but the resulting data selections in both queries would remain constant.
The following two queries generate the same subspace, but the latter has a larger output domain:
select clip( c, subspace(0 0 0, 1 1 0, 0 1 0) )
from test_grey3d as c
select clip( c, subspace(0 0 0, 1 1 0, 0 1 0, 0 0 0, 1 2 0) )
from test_grey3d as c
As much redundancy as possible is annihilated during a preprocessing stage which uses a Gram-Schmidt procedure to excise extraneous data imported during query time, and with this algorithm, rasdaman is able to determine the correct dimension of the output domain.
Some algorithmic caveats:
The complexity of searching for a solution for each result cell is related to the codimension of the affine subspace, and not the dimension of the affine subspace itself. In fact, if k is the difference between the dimension of the collection array and the dimension of the result array, then each cell is determined in O(k^2) time. Preprocessing happens once for the entire query, and occurs in O(k^3) time. There is one exception to the codimensionality considerations: a 1-D affine subspace (also known as a line segment) is selected using a multidimensional generalization of the Bresenham Line Algorithm, and so the results are determined in O(n) time, where n is the dimension of the collection.
Tip: If you want a slice which is parallel to axes, then you are better off using the classic slicing style of:
select c[0:19,0:7,0] from collection as c
as the memory offset computations are performed much more efficiently.
### 4.10.4. Induced Operations¶
Induced operations allow to simultaneously apply a function originally working on a single cell value to all cells of an MDD. The result MDD has the same spatial domain, but can change its base type.
Examples
img.green + 5c
This expression selects component named “green” from an RGB image and adds 5 (of type char, i.e., 8 bit) to every pixel.
img1 + img2
This performs pixelwise addition of two images (which must be of equal spatial domain).
Induction and structs
Whenever induced operations are applied to a composite cell structure (“structs” in C/C++), then the induced operation is executed on every structure component. If some cell structure component turns out to be of an incompatible type, then the operation as a whole aborts with an error.
For example, a constant can be added simultaneously to all components of an RGB image:
select rgb + 5
from rgb
Induction and complex
Complex numbers, which actually form a composite type supported as a base type, can be accessed with the record component names re and im for the real and the imaginary part, resp.
Example
The first expression below extracts the real component, the second one the imaginary part from a complex number c:
c.re
c.im
#### 4.10.4.1. Unary Induction¶
Unary induction means that only one array operand is involved in the expression. Two situations can occur: Either the operation is unary by nature (such as boolean not); then, this operation is applied to each array cell. Or the induce operation combines a single value (scalar) with the array; then, the contents of each cell is combined with the scalar value.
A special case, syntactically, is the struct/complex component selection (see next subsection).
In any case, the sequence of iteration through the array for cell inspection is chosen by the database server (which heavily uses reordering for query optimisation) and is not known to the user.
Syntax
unaryOp mddExp
mddExp binaryOp scalarExp
scalarExp binaryOp mddExp
Example
The red images of collection rgb with all pixel values multiplied by 2:
select rgb.red * 2c
from rgb
Note that the constant is marked as being of type char so that the result type is minimized (short). Omitting the “c” would lead to a multiplication of long integer and char, resulting in long integer with 32 bit per pixel. Although pixel values obviously are the same in both cases, the second alternative requires twice the memory space. For more details visit the Type Coercion Rules section.
#### 4.10.4.2. Struct Component Selection¶
Component selection from a composite value is done with the dot operator well-known from programming languages. The argument can either be a number (starting with 0) or the struct element name. Both statements of the following example would select the green plane of the sample RGB image.
This is a special case of a unary induced operator.
Syntax
mddExp.attrName
mddExp.intExp
Examples
select rgb.green
from rgb
select rgb.1
from rgb
Figure 4.9 RGB image and green component
Note
Aside from operations involving base types such as integer and boolean, combinations of composite base types (structs) with scalar values are supported. In this case, the operation is applied to each element of the structure in turn.
Examples
The following expression reduces contrast of a color image in its red, green, and blue channel simultaneously:
select rgb / 2c
from rgb
An advanced example is to use image properties for masking areas in this image. In the query below, this is done by searching pixels which are “sufficiently green” by imposing a lower bound on the green intensity and upper bounds on the red and blue intensity. The resulting boolean matrix is multiplied with the original image (i.e., componentwise with the red, green, and blue pixel component); the final image, then, shows the original pixel value where green prevails and is {0,0,0} (i.e., black) otherwise (Figure 4.10)
select rgb * ( (rgb.green > 130c) and
(rgb.red < 110c) and
(rgb.blue < 140c) )
from rgb
Figure 4.10 Suppressing “non-green” areas
Note
This mixing of boolean and integer is possible because the usual C/C++ interpretation of true as 1 and false as 0 is supported by rasql.
#### 4.10.4.3. Binary Induction¶
Binary induction means that two arrays are combined.
Syntax
mddExp binaryOp mddExp
Example
The difference between the images in the mr collection and the image in the mr2 collection:
select mr - mr2
from mr, mr2
Note
Two cases have to be distinguished:
• Both left hand array expression and right hand array expression operate on the same array, for example:
select rgb.red - rgb.green
from rgb
In this case, the expression is evaluated by combining, for each coordinate position, the respective cell values from the left hand and right hand side.
• Left hand array expression and right hand array expression operate on different arrays, for example:
select mr - mr2
from mr, mr2
This situation specifies a cross product between the two collections involved. During evaluation, each array from the first collection is combined with each member of the second collection. Every such pair of arrays then is processed as described above.
Obviously the second case can become computationally very expensive, depending on the size of the collections involved - if the two collections contain n and m members, resp., then n*m combinations have to be evaluated.
#### 4.10.4.4. Case statement¶
The rasdaman case statement serves to model n-fold case distinctions based on the SQL92 CASE statement which essentially represents a list of IF-THEN statements evaluated sequentially until either a condition fires and delivers the corresponding result or the (mandatory) ELSE alternative is returned.
In the simplest form, the case statement looks at a variable and compares it to different alternatives for finding out what to deliver. The more involved version allows general predicates in the condition.
This functionality is implemented in rasdaman on both scalars (where it resembles SQL) and on MDD objects (where it establishes an induced operation). Due to the construction of the rasql syntax, the distinction between scalar and induced operations is not reflected explicitly in the syntax, making query writing simpler.
Syntax
• Variable-based variant:
case generalExp
when scalarExp then generalExp
...
else generalExp
end
All generalExps must be of a compatible type.
• Expression-based variant:
case
when booleanExp then generalExp
...
else generalExp
end
All generalExp’s must evaluate to a compatible type.
Example
Traffic light classification of an array object can be done as follows.
select
case
when mr > 150 then { 255c, 0c, 0c }
when mr > 100 then { 0c, 255c, 0c }
else { 0c, 0c, 255c }
end
from mr
This is equivalent to the following query; note that this query is less efficient due to the increased number of operations to be evaluated, the expensive multiplications, etc:
select
(mr > 150) * { 255c, 0c, 0c }
+ (mr <= 150 and mr > 100) * { 0c, 255c, 0c }
+ (mr <= 100) * { 0c, 0c, 255c }
from mr
Restrictions
In the current version, all MDD objects participating in a case statement must have the same tiling. Note that this limitation can often be overcome by factoring divergingly tiled arrays out of a query, or by resorting to the query equivalent in the above example using multiplication and addition.
#### 4.10.4.5. Induction: All Operations¶
Below is a complete listing of all cell level operations that can be induced, both unary and binary. Supported operand types and rules for deriving the result types for each operation are specified in Type Coercion Rules.
+, -, *, /
For each cell within some MDD value (or evaluated MDD expression), combine it with the corresponding cell of the second MDD parameter using the respective arithmetic operation. For example, this code adds two (equally sized) images:
img1 + img2
div, mod
In contrast to the previous operators, div and mod are binary functions. The difference between div and / is that for integer inputs div yields an integer result, and hence must check for division by 0, in which case an error is thrown. The behaviour of mod is the same. Example usage:
div(a, b)
mod(a, b)
pow, power
The power function can be written as pow or power. The signature is:
pow( base, exp )
where base is an MDD or scalar and exp is a floating point number.
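A minimal usage sketch, squaring each cell value of the mr collection:

select pow( mr, 2.0 )
from mr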
=, <, >, <=, >=, !=
For two MDD values (or evaluated MDD expressions), compare for each coordinate the corresponding cells to obtain the Boolean result indicated by the operation.
These comparison operators work on all atomic cell types.
On composite cells, only = and != are supported; both operands must have a compatible cell structure. In this case, the comparison result is the conjunction (“and” connection) of the pairwise comparison of all cell components.
and, or, xor, is, not
For each cell within some Boolean MDD (or evaluated MDD expression), combine it with the second MDD argument using the logical operation and, or, or xor. The is operation is equivalent to = (described above). The signature of the binary induced operation is
is, and, or, xor: mddExp, intExp -> mddExp
Unary function not negates each cell value in the MDD.
min, max
For two MDD values (or evaluated MDD expressions), take the minimum / maximum for each pair of corresponding cell values in the MDDs.
Example:
a min b
For struct valued MDD values, struct components in the MDD operands must be pairwise compatible; comparison is done in lexicographic order with the first struct component being most significant and the last component being least significant.
overlay
The overlay operator allows to combine two equally sized MDDs by placing the second one “on top” of the first one, informally speaking. Formally, overlaying is done in the following way:
• wherever the second operand’s cell value is not zero and not null, the result value will be this value.
• wherever the second operand’s cell value is zero or null, the first argument’s cell value will be taken.
This way stacking of layers can be accomplished, e.g., in geographic applications. Consider the following example:
ortho overlay tk.water overlay tk.streets
When displayed the resulting image will have streets on top, followed by water, and at the bottom there is the ortho photo.
Strictly speaking, the overlay operator is not atomic. Expression
a overlay b
is equivalent to
(b is not null) * b + (b is null) * a
However, on the server the overlay operator is executed more efficiently than the above expression.
bit(mdd, pos)
For each cell within MDD value (or evaluated MDD expression) mdd, take the bit with nonnegative position number pos and put it as a Boolean value into a byte. Position counting starts with 0 and runs from least to most significant bit. The bit operation signature is
bit: mddExp, intExp -> mddExp
In C/C++ style, bit(mdd, pos) is equivalent to mdd >> pos & 1.
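For example, the following sketch extracts bit 3 of every mr cell as a boolean array:

select bit( mr, 3 )
from mr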
Arithmetic, trigonometric, and exponential functions
The following advanced arithmetic functions are available with the obvious meaning, each of them accepting an MDD object:
abs()
sqrt()
exp() log() ln()
sin() cos() tan()
sinh() cosh() tanh()
arcsin() arccos() arctan()
Exceptions
Generally, on domain error or other invalid cell values these functions will not throw an error, but result in NaN or similar according to IEEE floating-point arithmetic. Internally the rasdaman implementation calls the corresponding C++ functions, so the C++ documentation applies.
cast
Sometimes the desired ultimate scalar type or MDD cell type is different from what the MDD expression would suggest. To this end, the result type can be enforced explicitly through the cast operator.
The syntax is:
(newType) generalExp
where newType is the desired result type of expression generalExp.
Like in programming languages, the cast operator converts the result to the desired type if this is possible at all. For example, the following scalar expression, without cast, would return a double precision float value; the cast makes it a single precision value:
(float) avg_cells( mr )
Both scalar values and MDD can be cast; in the latter case, the cast operator is applied to each cell of the MDD yielding an array over the indicated type.
The cast operator also works properly on composite cell structures. In such a case, the cast type is applied to every component of the cell. For example, the following expression converts the pixel type of an (3x8 bit) RGB image to an image where each cell is a structure with three long components:
(long) rgb
Obviously in the result structure all components will bear the same type. In addition, the target type can be a user-defined composite type, e.g. the following will cast the operand to {1c, 2c, 3c}:
(RGBPixel) {1c, 2l, 3.0}
Casting from larger to smaller integer type
If the new type is smaller than the value’s type, i.e. not all values can be represented by it, then standard C++ casting will typically lead to strange results due to wrap-around for unsigned types and implementation-defined behavior for signed types. For example, casting int 1234 to char in C++ will result in 210, while the possible range would be 0 - 255.
Rasdaman implements a more reasonable cast behavior in this case: if the value is larger than the maximum value representable by the new type, then the result is the maximum value (e.g. 255 in the previous example); analogously, if the value is smaller than the minimum possible value, then the result is the minimum value.
This is implemented only on integer types and entails a small performance penalty in comparison to raw C++ as up to two comparisons per cell (with the maximum and minimum) are necessary when casting.
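A small sketch of the effect (assuming the 8-bit mr collection): the addition promotes cell values beyond the char range, and the cast clamps them to 255 instead of wrapping around:

select (char) (mr + 200c)
from mr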
Restrictions
On base type complex, only the following operations are available right now:
+ - * /
### 4.10.5. Scaling¶
Shorthand functions are available to scale multidimensional objects. They receive an array as parameter, plus a scale indicator. In the most common case, the scaling factor is an integer or float number. This factor then is applied to all dimensions homogeneously. For a scaling with individual factors for each dimension, a scaling vector can be supplied which, for each dimension, contains the resp. scale factor. Alternatively, a target domain can be specified to which the object gets scaled.
Syntax
scale( mddExp, intExp )
scale( mddExp, floatExp )
scale( mddExp, intVector )
scale( mddExp, mintervalExp )
Examples
The following example returns all images of collection mr where each image has been scaled down by a factor of 2.
select scale( mr, 0.5 )
from mr
Next, mr images are enlarged by 4 in the first dimension and 3 in the second dimension:
select scale( mr, [ 4, 3 ] )
from mr
In the final example, mr images are scaled to obtain 100x100 thumbnails (note that this can break aspect ratio):
select scale( mr, [ 0:99, 0:99 ] )
from mr
Note
Function scale() breaks tile streaming: it needs to load all affected tiles into server main memory. In other words, the source argument of the function must fit into server main memory. Consequently, it is not advisable to use this function on very large items.
Note
Currently only nearest neighbour interpolation is supported for scaling.
### 4.10.6. Concatenation¶
Concatenation of two arrays “glues” together arrays by lining them up along an axis.
This can be achieved with a shorthand function, concat, which for convenience is implemented as an n-ary operator accepting an unlimited number of arrays of the same base type. The operator takes the input arrays, lines them up along the concatenation dimension specified in the request, and outputs one result array. To this end, each input array is shifted to the appropriate position, with the first array’s position remaining unchanged; therefore, it is irrelevant whether array extents, along the concatenation dimension, are disjoint, overlapping, or containing each other.
The resulting array’s dimensionality is equal to the input array dimensionality.
The resulting array extent is the sum of all extents along the concatenation dimension, and the extent of the input arrays in all other dimensions.
The resulting array cell type is same as the cell types of the input arrays.
Constraints
All participating arrays must have the same number of dimensions.
All participating arrays must have identical extents in all dimensions, except that dimension along which concatenation is performed.
Input arrays must have the same cell types, i.e. concatenating a char and float arrays is not possible and requires explicit casting to a common type.
Syntax
concat mddExp with mddExp ... with mddExp along integer
Examples
The following query returns the concatenation of all images of collection mr with themselves along the first dimension (Figure 4.11).
select concat mr with mr along 0
from mr
Figure 4.11 Query result of single concatenation
The next example returns a 2x2 arrangement of images (Figure 4.12):
select concat (concat mr with mr along 0)
with (concat mr with mr along 0)
along 1
from mr
Figure 4.12 Query result of multiple concatenation
### 4.10.7. Condensers¶
Frequently summary information of some kind is required about some array, such as sum or average of cell values. To accomplish this, rasql provides the concept of condensers.
A condense operation (or short: condenser) takes an array and summarizes its values using a summarization function, either to a scalar value (e.g. computing the sum of all its cells), or to another array (e.g. summarizing a 3-D cube into a 2-D image by adding all the horizontal slices that the cube is composed of).
A number of condensers is provided as rasql built-in functions.
• For numeric arrays, add_cells() delivers the sum and avg_cells() the average of all cell values. Operators min_cells() and max_cells() return the minimum and maximum, resp., of all cell values in the argument array. stddev_pop, stddev_samp, var_pop, and var_samp allow to calculate the population and sample standard deviation, as well as the population and sample variance of the MDD cells.
• For boolean arrays, the condenser count_cells() counts the cells containing true; some_cells() operation returns true if at least one cell of the boolean MDD is true, all_cells() returns true if all of the MDD cells contain true as value.
Please keep in mind that, depending on their nature, operations take a boolean, numeric, or arbitrary mddExp as argument.
Syntax
count_cells( mddExp )
add_cells( mddExp )
avg_cells( mddExp )
min_cells( mddExp )
max_cells( mddExp )
some_cells( mddExp )
all_cells( mddExp )
stddev_pop( mddExp )
stddev_samp( mddExp )
var_pop( mddExp )
var_samp( mddExp )
Examples
The following example returns all images of collection mr where all pixel values are greater than 20. Note that the induction “>20” generates a boolean array which, then, can be collapsed into a single boolean value by the condenser.
select mr
from mr
where all_cells( mr > 20 )
The next example selects all images of collection mr with at least one pixel value greater than 250 in region [ 120:160, 55:75] (Figure 4.13).
select mr
from mr
where some_cells( mr[120 : 160, 55 : 75] > 250 )
Figure 4.13 Query result of specific selection
Finally, this query calculates the sample variance of mr2:
select var_samp( mr2 ) from mr2
### 4.10.8. General Array Condenser¶
All the condensers introduced above are special cases of a general principle which is represented by the general condenser statement.
The general condense operation consolidates cell values of a multidimensional array to a scalar value or an array, based on the condensing operation indicated. It iterates over a spatial domain while combining the result values of the cellExp through the condenserOp indicated.
Condensers are heavily used in two situations:
• To collapse boolean arrays into scalar boolean values so that they can be used in the where clause.
• In conjunction with the marray constructor (see next section) to phrase high-level signal processing and statistical operations.
Syntax
condense condenserOp
over var in mintervalExp
using cellExp
condense condenserOp
over var in mintervalExp
where booleanExp
using cellExp
The mintervalExp terms together span a multidimensional spatial domain over which the condenser iterates. It visits each point in this space exactly once, assigns the point’s respective coordinates to the var variables and evaluates cellExp for the current point. The result values are combined using the condensing function condenserOp. Optionally, points used for the aggregate can be filtered through a booleanExp; in this case, cellExp will be evaluated only for those points where booleanExp is true, all others will be disregarded. Both booleanExp and cellExp can contain occurrences of the variables var.
Examples
This expression below returns a scalar representing the sum of all array values, multiplied by 2 (effectively, this is equivalent to add_cells(2*a)):
condense +
over x in sdom(a)
using 2 * a[ x ]
The following expression returns a 2-D array where cell values of 3-D array a are added up along the third axis:
condense +
over x in [0:100]
using a[ *:*, *:*, x[0] ]
Note that the addition is induced as the result type of the value clause is an array. This type of operation is frequent, for example, in satellite image time series analysis where aggregation is performed along the time axis.
Shorthands
Definition of the specialized condensers in terms of the general condenser statement is as shown in Table 4.5.
Table 4.5 Specialized condensers; a is a numeric, b a boolean array.
Aggregation definition                                      Meaning
add_cells(a)   = condense + over x in sdom(a) using a[x]    sum of all cells in a
avg_cells(a)   = add_cells(a) / card(sdom(a))               average of all cells in a
min_cells(a)   = condense min over x in sdom(a) using a[x]  minimum of all cells in a
max_cells(a)   = condense max over x in sdom(a) using a[x]  maximum of all cells in a
count_cells(b) = condense + over x in sdom(b)
                 where b[x] != 0 using 1                    number of cells in b which are non-zero / not false
some_cells(b)  = condense or over x in sdom(b) using b[x]   is there any cell in b with value true?
all_cells(b)   = condense and over x in sdom(b) using b[x]  do all cells of b have value true?
Restriction
Currently condensers over complex numbers are generally not supported, with the exception of add_cells and avg_cells.
### 4.10.9. General Array Constructor¶
The marray constructor allows to create n-dimensional arrays with their content defined by a general expression. This is useful
• whenever the array is too large to be described as a constant (see Array Constants) or
• when the array’s contents is derived from some other source, e.g., for a histogram computation (see examples below).
Syntax
The basic shape of the marray constructor is as follows:
marray var in mintervalExp [, var in mintervalExp]
values cellExp
The cellExp describes how the resulting array is produced at each point of its domain.
Iterator Variable Declaration
The result array is defined by the cross product of all mintervalExp. For example, the following defines a 2-D 5x10 matrix:
marray x in [1:5], y in [1:10]
values ...
The base type of the array is determined by the type of cellExp. Each variable var can be of any number of dimensions.
Iteration Expression
The resulting array is filled in at each coordinate of its spatial domain by successively evaluating cellExp; the result value is assigned to the cell at the coordinate currently under evaluation. To this end, cellExp can contain arbitrary occurrences of var, which are accordingly substituted with the values of the current coordinate. The syntax for using a variable is:
• for a one-dimensional variable:
var
• for a one- or higher-dimensional variable:
var [ index-expr ]
where index-expr is a constant expression evaluating to a non-negative integer; this number indicates the variable dimension to be used.
Figure 4.14 2-D array with values derived from first coordinate
Examples
The following creates an array with spatial domain [1:100,-50:200] over cell type char, each cell being initialized to 1.
marray x in [ 1:100, -50:200 ]
values 1c
In the next expression, cell values are dependent on the first coordinate component (cf. Figure 4.14):
marray x in [ 0:255, 0:100 ]
values x[0]
The final two examples comprise a typical marray/condenser combination. The first one takes a sales table and consolidates it from days to week per product. Table structure is as given in Figure 4.15.:
select marray tab in [ 0:sdom(s)[0].hi/7, sdom(s)[1] ]
values condense +
over day in [ 0:6 ]
using s[ day[0] + tab[0] * 7 , tab[1] ]
from salestable as s
The last example computes histograms for the mr images. The query creates a 1-D array ranging from 0 to 9 where each cell contains the number of pixels in the image having the respective intensity value.
select marray v in [ 0 : 9 ]
values condense +
over x in sdom(mr)
where mr[x] = v[0]
using 1
from mr
Figure 4.15 Sales table consolidation
Shorthand
As a shorthand, variable var can be used without indexing; this is equivalent to var[0]:
marray x in [1:5]
values a[ x ] -- equivalent to a[ x[0] ]
Known issue: the shorthand notation currently works as expected only when one variable is defined.
Many vs. One Variable
Obviously an expression containing several 1-D variables, such as:
marray x in [1:5], y in [1:10]
values a[ x[0], y[0] ]
can always be rewritten to an equivalent expression using one higher-dimensional variable, for example:
marray xy in [1:5, 1:10]
values a[ xy[0], xy[1] ]
Iteration Sequence Undefined
The sequence in which the array cells defined by an marray construct are inspected is not defined. In fact, server optimisation will heavily make use of reordering traversal sequence to achieve best performance.
Restriction
Currently there is a restriction in variable lists: for each marray variable declaration, either there is only one variable which can be multidimensional, or there is a list of one-dimensional variables; mixing the two is not allowed.
A Note on Expressiveness and Performance
The general condenser and the array constructor together allow expressing a very broad range of signal processing and statistical operations. In fact, all other rasql array operations can be expressed through them, as Table 4.6 exemplifies. Nevertheless, it is advisable to use the specialized operations whenever possible; not only are they more handy and easier to read, but also internally their processing has been optimized so that they execute considerably faster than the general phrasing.
Table 4.6 Phrasing of Induction, Trimming, and Section via marray
operation   shorthand          phrasing with marray
Trimming    a[ *:*, 50:100 ]   marray x in [sdom(a)[0], 50:100] values a[ x ]
Section     a[ 50, *:* ]       marray x in sdom(a)[1] values a[ 50, x ]
Induction   a + b              marray x in sdom(a) values a[x] + b[x]
### 4.10.10. Type Coercion Rules¶
This section specifies the type coercion rules in query expressions, i.e. how the base type of the result from an operation applied on operands of various base types is derived.
The guiding design principle for these rules is to minimize the risk for overflow, but also “type inflation”: when a smaller result type is sufficient to represent all possible values of an operation, then it is preferred over a larger result type. This is especially important in the context of rasdaman, where the difference between float and double for example can be multiple GBs or TBs for large arrays. As such, the rules are somewhat different from C++ for example or even numpy, where in general careful explicit casting is required to avoid overflow or overtyping.
Here a summary is presented, while full details can be explored in rasdaman’s systemtest. The type specifiers (c, o, s, …) are the literal type suffixes as documented in Table 4.2; X and Y indicate any cell type, U corresponds to any unsigned integer type, S to any signed integer type, C to any complex type. In every table the upper rows have precedence, i.e. the deduction rules are ordered; if a particular operand type combination is missing it means that it is not supported and would lead to a type error. The first/second operand types are commutative by default and only one direction is shown to reduce clutter. Types have a rank determined by their size in bytes and signedness, so that double has a higher rank than float, and long has a higher rank than ulong; max/min of two types returns the type with the higher/lower rank. Adding 1 to a type results in the next type by rank, preserving signedness; the integer/floating-point boundary is not crossed, however, i.e. long + 1 = long.
#### 4.10.10.1. Binary Induced¶
Complex operands are only supported by +, -, *, /, div, =, and !=. If any operand of these operations is complex, then the result is complex with underlying type derived by applying the rules to the underlying types of the inputs. E.g. char + CInt16 = char + short = CInt32, and CInt32 * CFloat32 = long * float = CFloat64.
+, *, div, mod
first second result
X d d
l,ul f d
X f f
U1 U2 max(U1, U2) + 1
X Y signed(max(X, Y) + 1)
- (subtraction)
The result can always be negative, even if inputs are unsigned (positive), so for integers the result type is always the next greater signed type. Otherwise, the rules are the same as for +, *, div, mod.
first second result
X d d
l,ul f d
X f f
X Y signed(max(X, Y) + 1)
/ (division)
Division returns a floating-point result to avoid inadvertent precision loss, as well as an unnecessary check for division by zero. Integer division is supported with the div function.
first second result
c,o,s,us,f c,o,s,us,f f
X Y d
pow, power
Note: operand types are not commutative, the second operand must be a float or double scalar.
first second result
c,o,s,us,f f, d f
ul,l,d f, d d
<, >, <=, >=, =, !=
first second result
X Y bool
min, max, overlay
first second result
X X X
and, or, xor, is
first second result
bool bool bool
bit
I stands for any signed and unsigned integer type.
first second result
I I bool
complex(re, im)
first (re) second (im) result
s s CInt16
l l CInt32
f f CFloat32
d d CFloat64
#### 4.10.10.2. Unary Induced¶
not
op result
bool bool
abs
op result
C error
X X
sqrt, log, ln, exp, sin, cos, tan, sinh, cosh, tanh, arcsin, arccos, arctan
op result
c,o,us,s,f f
u,l,d d
#### 4.10.10.3. Condensers¶
count_cells
op result
bool ul
add_cells and condense +, *
op result
C CFloat64
f,d d
S l
U ul
avg_cells
op result
C CFloat64
X d
stddev_pop, stddev_samp, var_pop, var_samp
op result
C error
X d
min_cells, max_cells and condense min, max
op result
C error
X X
some_cells, all_cells and condense and, or
op result
bool bool
#### 4.10.10.4. Geometric Operations¶
The base type does not change in the result of subset, shift, extend, scale, clip, concat, and geographic reprojection.
op result
X X
## 4.11. Data Format Conversion¶
Without further indication, arrays are accepted and delivered in the client’s main memory format, regardless of the client and server architecture. Sometimes, however, it is desirable to use some data exchange format - be it because some device provides a data stream to be inserted into the database in a particular format, or be it a Web application where particular output formats have to be used to conform with the respective standards.
To this end, rasql provides two functions for
• decoding format-encoded data into an MDD
• encoding an MDD to a particular format
Implementation of these functions is based on GDAL and, hence, supports all GDAL formats. Some formats are implemented natively in addition: NetCDF, GRIB, JSON, and CSV.
### 4.11.1. Decode for data import¶
The decode() function allows for decoding data represented in one of the supported formats, into an MDD which can be persisted or processed in rasdaman.
#### 4.11.1.1. Syntax¶
decode( mddExp )
decode( mddExp , format , formatParameters )
As a first parameter the data to be decoded must be specified. Technically this data must be in the form of a 1D char array. Usually it is specified as a query input parameter with $1, while the binary data is attached with the --file option of the rasql command-line client tool, or with the corresponding methods in the client API. If the data is on the same machine as rasdaman, it can be loaded directly by specifying the path to it in the format parameters; more details on this in Format parameters.

#### 4.11.1.2. Data format¶

The source data format is automatically detected in case it is handled by GDAL (e.g. PNG, TIFF, JPEG, etc; see output of gdalinfo --formats or the GDAL documentation for a full list), so there is no format parameter in this case. A format is necessary, however, when a custom internal implementation should be selected instead of GDAL for decoding the data, e.g. NetCDF ("netcdf" / "application/netcdf"), GRIB ("grib"), JSON ("json" / "application/json"), or CSV ("csv" / "text/csv").

#### 4.11.1.3. Format parameters¶

Optionally, a format parameters string can be specified as a third parameter, which allows to control the format decoding. For GDAL formats it is necessary to specify format "GDAL" in this case. The format parameters must be formatted as a valid JSON object. As the format parameters are in quotes, i.e. "formatParameters", all quotes inside of the formatParameters need to be escaped (\"). For example, "{ \"transpose\": [0,1] }" is the right way to specify transposition, while "{ "transpose": [0,1] }" will lead to failure. Note that in examples further on quotes are not escaped for readability.

##### 4.11.1.3.1. Common parameters¶

The following parameters are common to GDAL, NetCDF, and GRIB data formats:

• variables - An array of variable names or band ids (0-based, as strings) to be extracted from the data. This allows to decode only some of the variables in a NetCDF file, for example with ["var1", "var2"], or the bands of a TIFF file with ["0", "2"].

• filePaths - An array of absolute paths to input files to be decoded, e.g. ["/path/to/rgb.tif"]. This improves ingestion performance if the data is on the same machine as the rasdaman server, as the network transport is bypassed and the data is read directly from disk. Supported only for GDAL, NetCDF, and GRIB data formats.

• subsetDomain - Specify a subset to be extracted from the input file, instead of the full data. The subset should be specified in rasdaman minterval format as a string, e.g. "[0:100,0:100]". Note that the subset domain must match the file in dimensionality, and must be accordingly offset to the grid origin in the file, which is typically [0,0,0,…].

• transpose - Specify if x/y should be transposed with an array of 0-based axis ids indicating the axes that need to be transposed; the axes must be contiguous [N,N+1], e.g. [0,1]. This is often relevant in NetCDF and GRIB data, which have a swapped x/y order compared to what is usually expected in e.g. GDAL. Note that transposing axes has a performance penalty, so avoid it if possible.

• formatParameters - A JSON object containing extra options which are format-specific, specified as string key-value pairs. This is where one would specify the base type and domain for decoding a CSV file, for example, or GDAL format-specific options. Example for a CSV file:

"formatParameters": {
  "basetype": "struct { float f, long l }",
  "domain": "[0:100,0:100]"
}

##### 4.11.1.3.2. GDAL¶
• formatParameters - any entries in the formatParameters object are forwarded to the specific GDAL driver; consult the GDAL documentation for the options recognized by each particular driver. E.g. for PNG you could specify, among other details, a description metadata field with:

"formatParameters": {
  "DESCRIPTION": "Data description..."
}

• configOptions - A JSON object containing configuration options as string key-value pairs; more details in the GDAL documentation. Example:

"configOptions": {
  "GDAL_CACHEMAX": "64", ...
}

• openOptions - A JSON object containing open options as string key-value pairs; an option for selecting the overview level from the file, e.g. "OVERVIEW_LEVEL": "2", is available for all formats (more details); further options may be supported by each driver, e.g. for TIFF:

"openOptions": {
  "OVERVIEW_LEVEL": "2",
  "NUM_THREADS": "ALL_CPUS"
}

Note

This feature is only available since GDAL 2.0, so if you have an older GDAL these options will be ignored.

##### 4.11.1.3.3. GRIB¶

• internalStructure - Describe the internal structure of a GRIB file, namely the domains of all messages to be extracted from the file:

"internalStructure": {
  "messageDomains": [
    { "msgId": 1, "domain": "[0:0,0:0,0:719,0:360]" },
    { "msgId": 2, "domain": "[0:0,1:1,0:719,0:360]" },
    { "msgId": 3, "domain": "[0:0,2:2,0:719,0:360]" },
    ...
  ]
}

##### 4.11.1.3.4. CSV / JSON¶

The following are mandatory options that have to be specified in the formatParameters object:

• domain - The domain of the MDD encoded in the CSV data. It has to match the number of cells read from the input file, e.g. for "domain": "[1:5, 0:10, 2:3]" there should be 110 numbers in the input file.

• basetype - Atomic or struct base type of the cell values in the CSV data; named structs like RGBPixel are not supported. Examples:

long
char
struct { char red, char blue, char green }

Numbers from the input file are read in order of appearance and stored without any reordering in rasdaman; whitespace plus the following characters are ignored:

'{', '}', ',', '"', '\'', '(', ')', '[', ']'

#### 4.11.1.4. Examples¶

##### 4.11.1.4.1. GDAL¶

The following query loads a TIFF image into collection rgb:

rasql -q 'insert into rgb values decode( $1 )' --file rgb.tif
If you use double quotes for the query string, note that the $ must be escaped to avoid interpretation by the shell:

rasql -q "insert into rgb values decode( \$1 )" --file rgb.tif
The example below shows directly specifying a file path in the format parameters; <[0:0] 1c> is a dummy array value which is not relevant in this case, but is nevertheless mandatory:
UPDATE test_mr SET test_mr[0:255,0:210]
ASSIGN decode(<[0:0] 1c>, "GDAL",
"{ \"filePaths\": [\"/home/rasdaman/mr_1.png\"] }")
WHERE oid(test_mr) = 6145
##### 4.11.1.4.2. CSV / JSON¶
Let array A be a 2x3 array of longs given as a string as follows:
1,2,3,2,1,3
Inserting A into rasdaman can be done with
insert into A
values decode($1, "csv", "{ \"formatParameters\":
  { \"domain\": \"[0:1,0:2]\", \"basetype\": \"long\" } }")

Further, let B be a 1x2 array of RGB values given as follows:

{1,2,3},{2,1,3}

Inserting B into rasdaman can be done by passing it to this query:

insert into B
values decode($1, "csv", "{ \"formatParameters\": {
  \"domain\": \"[0:0,0:1]\",
  \"basetype\": \"struct{char red, char blue, char green}\" } }")
B could just as well be formatted like this with the same effect (note the line break):
1 2 3
2 1 3
### 4.11.2. Encode for data export¶
The encode() function allows encoding an MDD in a particular data format representation; formally, the result will be a 1D char array.
#### 4.11.2.1. Syntax¶
encode( mddExp , format )
encode( mddExp , format , formatParameters )
The first parameter is the MDD to be encoded. It must be 2D if encoded to GDAL formats (PNG, TIFF, JPEG, etc.), while the native rasdaman encoders (NetCDF, JSON, and CSV) support MDDs of any dimension; note that presently encode to GRIB is not supported. As not all base types supported by rasdaman (char, octet, float, etc.) are necessarily supported by each format, care must be taken to cast the MDD beforehand.
#### 4.11.2.2. Data format¶
A mandatory format must be specified as the second parameter, indicating the data format to which the MDD will be encoded; allowed values are
• GDAL format identifiers (see output of gdalinfo --formats or the GDAL documentation);
• a mime-type string, e.g. "image/png";
• "netcdf" / "application/netcdf", "csv" / "text/csv", or "json" / "application/json", for formats natively supported by rasdaman.
#### 4.11.2.3. Format parameters¶
Optionally, a format parameters string can be specified as a third parameter, which allows to control the format encoding. As in the case of decode(), it must be a valid JSON object. As the format parameters are in quotes, i.e. "formatParameters", all quotes inside of the formatParameters need to be escaped (\"). For example, "{ \"transpose\": [0,1] }" is the right way to specify transposition, while "{ "transpose": [0,1] }" will lead to failure.
Common parameters to most or all formats include:
• metadata - A single string, or an object of string key-value pairs which are added as global metadata when encoding.
• transpose - Specify if x/y should be transposed with an array of 0-based axis ids indicating the axes that need to be transposed; the axes must be contiguous [N,N+1], e.g. [0,1]. This is often relevant when encoding data with GDAL formats, which was originally imported from NetCDF and GRIB files. Note that transposing axes has a performance penalty, so avoid if possible.
• nodata - Specify nodata value(s). If a single number is specified it will be applicable to all bands (e.g. 0), otherwise an array of numbers for each band can be provided (e.g. [0,255,255]). Special floating-point constants are supported (case-sensitive): NaN, NaNf, Infinity, -Infinity.
• formatParameters - A JSON object containing extra options which are format-specific, specified as string key-value pairs. This is where one would specify the options for controling what separators and values are used in CSV encoding for example, or GDAL format-specific options.
##### 4.11.2.3.1. GDAL¶
• formatParameters - any entries in the formatParameters object are forwarded to the specific GDAL driver; consult the GDAL documentation for the options recognized by each particular driver. E.g. for PNG you could specify, among other details, a description metadata field with:
"formatParameters": {
"DESCRIPTION": "Data description..."
}
Rasdaman itself does not change the default values for these parameters, with the following exceptions:
• PNG - the compression level when encoding to PNG (option ZLEVEL) will be set to 2 if the user does not specify it explicitly and the result array is not of type boolean. The default compression level of 6 does not offer considerable space savings on typical image results (e.g. around 10% lower file size for satellite image), while significantly increasing the time to encode, taking up to 3-5x longer.
• configOptions - A JSON object containing configuration options as string key-value pairs; only relevant for GDAL currently, more details in the GDAL documentation. Example:
"configOptions": {
"GDAL_CACHEMAX": "64", ...
}
###### Geo-referencing¶
• geoReference - An object specifying geo-referencing information; either “bbox” or “GCPs” must be provided, along with the “crs”:
• crs - Coordinate Reference System (CRS) in which the coordinates are expressed. Any of the CRS representations acceptable by GDAL can be used:
• Well known names, such as "NAD27", "NAD83", "WGS84" or "WGS72"
• "EPSG:n", "EPSGA:n"
• PROJ.4 definitions
• OpenGIS Well Known Text
• ESRI Well Known Text, prefixed with "ESRI::"
• Spatial References from URLs
• "AUTO:proj_id,unit_id,lon0,lat0" indicating OGC WMS auto projections
• "urn:ogc:def:crs:EPSG::n" indicating OGC URNs (deprecated by OGC)
• bbox - A geographic X/Y bounding box as an object listing the coordinate values (as floating-point numbers) for xmin, ymin, xmax, and ymax properties, e.g.:
"bbox": {
"xmin": 0.0,
"ymin": -1.0,
"xmax": 1.0,
"ymax": 2.0
}
• GCPs - Alternative to a bbox, an array of GCPs (Ground Control Points) can be specified; see GCPs section in the GDAL documentation for details. Each element of the array is an object describing one control point with the following properties:
• id - optional unique identifier (gets the GCP array index by default);
• info - optional text associated with the GCP;
• pixel, line - location on the array grid;
• x, y, z - georeferenced location with coordinates in the specified CRS; “z” is optional (zero by default);
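A hypothetical geoReference using GCPs might look as follows (the CRS and all coordinate values are purely illustrative):

"geoReference": {
  "crs": "EPSG:4326",
  "GCPs": [
    { "id": "1", "pixel": 0,   "line": 0,   "x": 11.0, "y": 49.0 },
    { "id": "2", "pixel": 100, "line": 0,   "x": 12.0, "y": 49.0 },
    { "id": "3", "pixel": 0,   "line": 100, "x": 11.0, "y": 48.0 }
  ]
}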
###### Coloring Arrays¶
• colorMap - Map single-band cell values into 1, 3, or 4-band values. It can be done in different ways depending on the specified type:
• values - Each pixel is replaced by the entry in the colorTable where the key is the pixel value. In the example below, it means that all pixels with value -1 are replaced by [255, 255, 255, 0]. Pixels with values not present in the colorTable are not rendered: they are replaced with a color having all components set to 0.
"colorMap": {
"type": "values",
"colorTable": {
"-1": [255, 255, 255, 0],
"-0.5": [125, 125, 125, 255],
"1": [0, 0, 0, 255]
}
}
• intervals - All pixels with values between two consecutive entries are rendered using the color of the first (lower-value) entry. Pixels with values equal to or less than the minimum value are rendered with the bottom color (and opacity). Pixels with values equal to or greater than the maximum value are rendered with the top color and opacity.
"colorMap": {
"type": "intervals",
"colorTable": {
"-1": [255, 255, 255, 0],
"-0.5": [125, 125, 125, 255],
"1": [0, 0, 0, 255]
}
}
In this case, all pixels with values in the interval (-inf, -0.5) are replaced with [255, 255, 255, 0], pixels in the interval [-0.5, 1) are replaced with [125, 125, 125, 255], and pixels with value >= 1 are replaced with [0, 0, 0, 255].
• ramp - Same as “intervals”, but instead of using the color of the lowest value entry, linear interpolation between the lowest value entry and highest value entry, based on the pixel value, is performed.
"colorMap": {
"type": "ramp",
"colorTable": {
"-1": [255, 255, 255, 0],
"-0.5": [125, 125, 125, 255],
"1": [0, 0, 0, 255]
}
}
Pixels with value -0.75 are replaced with color [189, 189, 189, 127], because they sit in the middle of the distance between -1 and -0.5, so they get, on each channel, the color value in the middle. The interpolation formula for a pixel of value x, where a and b are two consecutive entries in the colorTable with a ≤ x ≤ b, is:

$$resultColor = \frac{b - x}{b - a} \cdot colorTable[a] + \frac{x - a}{b - a} \cdot colorTable[b]$$

For the example above, a = -1, x = -0.75, b = -0.5, colorTable[a] = [255, 255, 255, 0], colorTable[b] = [125, 125, 125, 255], so:

$$\begin{aligned}
resultColor &= \frac{-0.5 + 0.75}{-0.5 + 1} \cdot [255, 255, 255, 0] + \frac{-0.75 + 1}{-0.5 + 1} \cdot [125, 125, 125, 255] \\
&= 0.5 \cdot [255, 255, 255, 0] + 0.5 \cdot [125, 125, 125, 255] \\
&= [127, 127, 127, 0] + [62, 62, 62, 127] \\
&= [189, 189, 189, 127]
\end{aligned}$$
Note the integer division, because the colors are of type unsigned char.
• colorPalette - Similar to colorMap, however, it allows specifying color information on a metadata level, rather than by actually transforming array pixel values; for details see the GDAL documentation. It is an object that contains several optional properties:
• paletteInterp - Indicate how the entries in the colorTable should be interpreted; allowed values are “Gray”, “RGB”, “CMYK”, “HSL” (default “RGB”);
• colorInterp - Array of color interpretations for each band; allowed values are Undefined, Gray, Palette, Red, Green, Blue, Alpha, Hue, Saturation, Lightness, Cyan, Magenta, Yellow, Black, YCbCr_Y, YCbCr_Cb, YCbCr_Cr;
• colorTable - Array of arrays, each containing 1, 3, or 4 short values (depending on the colorInterp) for each color entry; to associate a color with an array cell value, the cell value is used as a subscript into the color table (starting from 0).
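A colorMap as described above is passed via the format parameters of encode; a minimal sketch for a single-band array, with illustrative color entries:

select encode(c, "png",
  "{ \"colorMap\": { \"type\": \"ramp\",
     \"colorTable\": { \"0\": [0, 0, 255],
                       \"127\": [255, 255, 0],
                       \"255\": [255, 0, 0] } } }")
from mr as c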
##### 4.11.2.3.2. NetCDF¶
The following are mandatory options when encoding to NetCDF:
• variables - Specify variable names for each band of the MDD, as well as dimension names if they need to be saved as coordinate variables. There are two ways to specify the variables:
1. An array of strings for each variable name, e.g. ["var1", "var2"]; no coordinate variables should be specified in this case, as there is no way to specify the data for them;
2. An object of variable name - object pairs, where each object lists the following variable details:
• metadata - An object of string key-value pairs which are added as attributes to the variable;
• type - Type of the data values this variable contains; relevant (and required) only for coordinate variables. Allowed values are "byte", "char", "short", "ushort", "int", "uint", "float", and "double";
• data - An array of data values for the variable; relevant (and required) only for coordinate variables, as regular variables get their data values from the array to be encoded. The number of values must match the dimension extent;
• dimensions - An array of names for each dimension, e.g. ["Lat","Long"].
##### 4.11.2.3.3. CSV / JSON¶
Data encoded with CSV or JSON is a comma-separated list of values, such that each row of values (for every dimension, not just the last one) is between { and } braces ([ and ] for JSON). The table below documents all “formatParameters” options that allow controlling the output, and the default settings for both formats.
Table 4.7 Optional options for controlling CSV / JSON encoding.

| option | description | CSV default | JSON default |
|---|---|---|---|
| order | array linearization order, can be "outer_inner" (default, last dimension iterates fastest, i.e. column-major for 2-D), or vice-versa, "inner_outer" | "outer_inner" | "outer_inner" |
| trueValue | string denoting true values | "t" | "true" |
| falseValue | string denoting false values | "f" | "false" |
| dimensionStart | string to indicate starting a new dimension slice | "{" | "[" |
| dimensionEnd | string to indicate ending a dimension slice | "}" | "]" |
| dimensionSeparator | separator between dimension slices | "," | "," |
| valueSeparator | separator between cell values | "," | "," |
| componentSeparator | separator between components of struct cell values | " " | " " |
| structValueStart | string to indicate starting a new struct value | '"' | '"' |
| structValueEnd | string to indicate ending a struct value | '"' | '"' |
| outerDelimiters | wrap output in dimensionStart and dimensionEnd | false | true |
#### 4.11.2.4. Examples¶
##### 4.11.2.4.1. GDAL¶
This query extracts PNG images (one for each tuple) from collection mr:
select encode( mr, "png" )
from mr
Transpose the last two axes of the output before encoding to PNG:
select encode(c, "png", "{ \"transpose\": [0,1] }") from mr2 as c
##### 4.11.2.4.2. NetCDF¶
Add some global attributes as metadata in netcdf (the attribute name and value are illustrative):

select encode(c, "netcdf", "{ \"transpose\": [1,0], \"nodata\": [100],
  \"metadata\": { \"new_metadata\": \"This is new added metadata\" } }")
from test_mean_summer_airtemp as c
The format parameters below specify the variables to be encoded in the result NetCDF file (Lat, Long, forecast, and drought_code); of these, Lat, Long, and forecast are dimension variables whose values are specified in the "data" array, which leaves drought_code as the variable holding the actual array data.
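A sketch of such format parameters, with made-up coordinate values and a hypothetical collection name drought_code_coll:

select encode(c, "netcdf",
  "{ \"dimensions\": [\"Lat\", \"Long\", \"forecast\"],
     \"variables\": {
       \"Lat\": { \"type\": \"double\", \"data\": [-90, -80, -70] },
       \"Long\": { \"type\": \"double\", \"data\": [0, 10, 20] },
       \"forecast\": { \"type\": \"int\", \"data\": [0, 3, 6] },
       \"drought_code\": { \"metadata\": { \"units\": \"unitless\" } } } }")
from drought_code_coll as c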
##### 4.11.2.4.3. CSV / JSON¶
Suppose we have array A = <[0:1,0:1] 0, 1; 2, 3>. Encoding to CSV by default
select encode(A, "csv") from A
will result in the following output:
{{0, 1}, {2, 3}}
while encoding to JSON with:
select encode(A, "json") from A
will result in the following output:
[[0, 1], [2, 3]]
Specifying inner_outer order with
select encode(A, "csv", "{ \"formatParameters\":
{ \"order\": \"inner_outer\" } }") from A
will result in the following output (left-most dimensions iterate fastest):
{{0, 2}, {1, 3}}
Let B be an RGB array <[0:0,0:1] {0c, 1c, 2c}, {3c, 4c, 5c}>. Encoding it to CSV with default order will result in the following output:
{"0 1 2","3 4 5"}
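The formatParameters options from Table 4.7 can be combined; for instance, the following sketch switches the value separator to a semicolon and omits the outer delimiters:

select encode(A, "csv", "{ \"formatParameters\":
  { \"valueSeparator\": \";\", \"outerDelimiters\": false } }") from A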
## 4.12. Object identifiers¶
Function oid() gives access to an array’s object identifier (OID). It returns the local OID of the database array. The input parameter must be a variable associated with a collection; it cannot be an array expression. The reason is that oid() can be applied only to persistent arrays which are stored in the database; it cannot be applied to query result arrays, as these are not stored in the database and hence do not have an OID.
Syntax
oid( variable )
Example
The following example retrieves the MDD object with local OID 10 of set mr:
select mr
from mr
where oid( mr ) = 10
The following example is incorrect as it tries to get an OID from a non-persistent result array:
select oid( mr * 2 ) -- illegal example: no expressions
from mr
Fully specified external OIDs are inserted as strings surrounded by angle brackets:
select mr
from mr
where oid( mr ) = < localhost | RASBASE | 10 >
In that case, the specified system (system name where the database server runs) and database must match the one used at query execution time, otherwise query execution will result in an error.
### 4.12.1. Expressions¶
Parentheses
All operators, constructors, and functions can be nested arbitrarily, provided that each sub-expression’s result type matches the required type at the position where the sub-expression occurs. This holds without limitation for all arithmetic, Boolean, and array-valued expressions. Parentheses can (and should) be used freely if a particular desired evaluation precedence is needed which does not follow the normal left-to-right precedence.
Example
select (rgb.red + rgb.green + rgb.blue) / 3c
from rgb
Operator Precedence Rules
Sometimes the evaluation sequence of expressions is ambiguous, and the different evaluation alternatives yield differing results. To resolve this, a set of precedence rules is defined. Whenever operators have counterparts in common programming languages, the rasdaman precedence rules match the ones usual there.
Here the list of operators in descending strength of binding:
• dot “.”, trimming, section
• unary -
• sqrt, sin, cos, and other unary arithmetic functions
• *, /
• +, -
• <, <=, >, >=, !=, =
• and
• or, xor
• “:” (interval constructor), condense, marray
• overlay, concat
• In all remaining cases evaluation is done left to right.
## 4.13. Null Values¶
“Null is a special marker used in Structured Query Language (SQL) to indicate that a data value does not exist in the database. NULL is also an SQL reserved keyword used to identify the Null special marker.” (Wikipedia) In fact, null introduces a three-valued logic where the result of a Boolean operation can be null itself; likewise, all other operations have to respect null appropriately. Said Wikipedia article also discusses issues the SQL language has with this three-valued logic.
For sensor data, a Boolean null indicator is not enough as null values can mean many different things, such as “no value given”, “value cannot be trusted”, or “value not known”. Therefore, rasdaman refines the SQL notion of null:
• Any value of the data type range can be chosen to act as a null value;
• a set of cell values can be declared to act as null (in contrast to SQL where only one null per attribute type is foreseen).
Caveat
Note that defining values as nulls reduces the value range available for known values. Additionally, computations can inadvertently yield null values (null values themselves are not changed during operations, so there is no danger from that side). For example, if 5 is defined to mean null, then the addition of two non-null values, such as 2+3, yields a null.
Every bit pattern in the range of a numeric type can appear in the database, so no bit pattern is left to represent “null”. If such a thing is desired, then the database designer must provide, e.g., a separate bit map indicating the status for each cell.
To have a clear semantics, the following rule holds:
Uninitialized value handling
A cell value not yet addressed, but within the current domain of an MDD has a value of zero by definition; this extends in the obvious manner to composite cells.
Remark
Note the limitation to the current domain of an MDD. While in the case of an MDD with fixed boundaries this does not matter because always definition domain = current domain, an MDD with variable boundaries can grow and hence will have a varying current domain. Only cells inside the current domain can be addressed, be they uninitialized/null or not; addressing a cell outside the current domain will result in the corresponding exception.
Masks as alternatives to null
For example, during piecewise import of satellite images into a large map, there will be areas which are not written yet. Actually, even after completely creating the map of, say, a country there will be untouched areas, as normally no country has a rectangular shape with axis-parallel boundaries. The outside cells will be initialized to 0, which may or may not be defined as null. Another option is to define a Boolean mask array of the same size as the original array, where each mask value contains true for "cell valid" and false for "cell invalid". It depends on the concrete application which approach suits best.
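A minimal sketch of the mask approach, assuming hypothetical collections img (numeric) and imgMask (boolean) with matching spatial domains; the cast makes the boolean mask usable in arithmetic:

select add_cells( i * (double)m ) / count_cells( m )
from img as i, imgMask as m
-- average over valid cells only; invalid cells contribute 0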
### 4.13.1. Nulls in MDD-Valued Expressions¶
Dynamically Set/Replace the Null Set
The null set of an MDD value resulting from a sub-expression can be dynamically changed on-the-fly with a postfix null values operator as follows:
mddExp null values nullSet
As a result mddExp will have the null values specified by nullSet; if mddExp already had a null set, it will be replaced.
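For illustration, a minimal sketch, assuming the null set literal follows the same interval syntax as used in type definitions (e.g. [5:7]):

select (c / 2) null values [5:7]
from mr as c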
Null Set Propagation
The null value set of an MDD is part of its type definition and, as such, is carried along over the MDD’s lifetime. Likewise, MDDs which are generated as intermediate results during query processing have a null value set attached. Rules for constructing the output MDD null set are as follows:
• The null value set of an MDD generated through an marray operation is empty [13].
• The null value set of an operation with one input MDD object is identical to the null set of this input MDD.
• The null value set of an operation with two input MDD objects is the union of the null sets of the input MDDs.
• The null value set of an MDD expression with a postfix null values operator is equal to the null set specified by it.
Null Values in Operations
Subsetting (trim and slice operations, as well as struct selection, etc.) perform just as without nulls and deliver the original cell values, be they null (relative to the MDD object on hand) or not. The null value set of the output MDD is the same as the null value set of the input MDD.
In MDD-generating operations with only one input MDD (such as marray and unary induced operations), if the operand of a cell operation is null then the result of this cell operation is null.
Generally, if somewhere in the input to an individual cell value computation a null value is encountered then the overall result will be null - in other words: if at least one of the operands of a cell operation is null then the overall result of this cell operation is null.
Exceptions:
• Comparison operators (that is: ==, !=, >, >=, <, <=) encountering a null value will always return a Boolean value; for example, both n == n and n != n (for any null value n) will evaluate to false.
• In a cast operation, nulls are treated like regular values.
• In a scale() operation, null values are treated like regular values [14].
• Format conversion of an MDD object ignores null values. Conversion from some data format into an MDD likewise imports the actual cell values; however, during any eventual further processing of the target MDD as part of an update or insert statement, cell values listed in the null value set of the pertaining MDD definition will be interpreted as null and will not overwrite persistent non-null values.
Choice of Null Value
If an operation computes a null value for some cell, then the null value effectively assigned is determined from the MDD’s type definition.
If the overall MDD whose cell is to be set has exactly one null value, then this value is taken. If there is more than one null value available in the object’s definition, then one of those null values is picked non-deterministically. If the null set of the MDD is empty then no value in the MDD is considered a null value.
Example
Assume an MDD a holding values <0, 1, 2, 3, 4, 5> and a null value set of {2, 3}. Then, a*2 might return <0, 2, 2, 2, 8, 10>. However, <0, 2, 3, 3, 8, 10> and <0, 2, 3, 2, 8, 10> also are valid results, as the null value gets picked non-deterministically.
### 4.13.2. Nulls in Aggregation Queries¶
In a condense operation, cells containing nulls do not contribute to the overall result (in plain words, nulls are ignored).
If all values are null, then the result is the identity element in this case, e.g. 0 for +, true for and, false for or, maximum value possible for the result base type for min, minimum value possible for the result base type for max, 0 for count_cells.
The scalar value resulting from an aggregation query does not carry a null value set like MDDs do; hence, during further processing it is treated as an ordinary value, irrespective of whether it has represented a null value in the MDD acting as input to the aggregation operation.
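Taking the MDD a from the earlier example (values <0, 1, 2, 3, 4, 5>, null set {2, 3}) and assuming it is stored in some collection Coll, the nulls simply drop out of the aggregation:

select add_cells( a )   -- yields 0 + 1 + 4 + 5 = 10
from Coll as a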
### 4.13.3. Limitations¶
All cell components of an MDD share the same set of nulls; it is currently not possible to assign individual nulls to cell type components.
### 4.13.4. NaN Values¶
NaN (“not a number”) is the representation of a numeric value representing an undefined or unrepresentable value, especially in floating-point calculations. Systematic use of NaNs was introduced by the IEEE 754 floating-point standard (Wikipedia).
In rasql, nan (double) and nanf (float) are symbolic floating point constants that can be used in any place where a floating point value is allowed. Arithmetic operations involving nans always result in nan. Equality and inequality involving nans work as expected, all other comparison operators return false.
If the encoding format used supports NaN then rasdaman will encode/decode NaN values properly.
Example
select count_cells( c != nan ) from c
## 4.14. Miscellaneous¶
### 4.14.1. rasdaman version¶
Builtin function version() returns a string containing information about the rasdaman version of the server, and the gcc version used for compiling it. The following query
select version()
will generate a 1-D array of cell type char containing contents similar to the following:
rasdaman 9.6.0 on x86_64-linux-gnu, compiled by g++
(Ubuntu 5.4.1-2ubuntu1~16.04) 5.4.1 20160904
Warning
The message syntax is not standardized in any way and may change in any rasdaman version without notice.
### 4.14.2. Retrieving Object Metadata¶
Sometimes it is desirable to retrieve metadata about a particular array. To this end, the dbinfo() function is provided. It returns a 1-D char array containing a JSON encoding of key array metadata:
• Object identifier;
• Base type, mdd type name, set type name;
• Total size of the array;
• Number of tiles and further tiling information: tiling scheme, tile size (if specified), and tile configuration;
• Index information: index type, and further details depending on the index type.
The output format is described below by way of an example.
Syntax
dbinfo( mddExp )
dbinfo( mddExp , formatParams )
Example
$ rasql -q 'select dbinfo(c) from mr2 as c' --out string
{
  "oid": "150529",
  "baseType": "marray <char>",
  "mddTypeName": "GreyImage",
  "setTypeName": "GreySet",
  "tileNo": "1",
  "totalSize": "54016B",
  "tiling": {
    "tilingScheme": "no_tiling",
    "tileSize": "2097152",
    "tileConfiguration": "[0:511,0:511]"
  },
  "index": {
    "type": "rpt_index",
    "indexSize": "0",
    "PCTmax": "4096B",
    "PCTmin": "2048B"
  }
}

The function supports a string of format parameters as a second argument. Currently the only supported parameter is printTiles. It can take multiple values: "embedded", "json", "svg". Example of syntax:

select dbinfo(c, "printtiles=svg") from test_overlap as c

Parameter "printtiles=embedded" will additionally print the domains of every tile.

$ rasql -q 'select dbinfo(c, "printtiles=embedded") from test_grey as c' --out string
{
"oid": "136193",
"baseType": "marray <char, [*:*,*:*]>",
"setTypeName": "GreySet",
"mddTypeName": "GreyImage",
"tileNo": "48",
"totalSize": "54016",
"tiling": {
"tilingScheme": "aligned",
"tileSize": "1500",
"tileConfiguration": "[0:49,0:29]"",
"tileDomains":
[
"[100:149,210:210]",
"[150:199,0:29]",
"[150:199,30:59]",
"[150:199,60:89]",
"[150:199,90:119]",
"[150:199,120:149]",
"[150:199,150:179]",
"[150:199,180:209]",
"[150:199,210:210]",
"[200:249,0:29]",
"[200:249,30:59]",
"[200:249,60:89]",
"[200:249,90:119]",
"[200:249,120:149]",
"[200:249,150:179]",
"[200:249,180:209]",
"[200:249,210:210]",
"[250:255,0:29]",
"[250:255,30:59]",
"[250:255,60:89]",
...
]
},
"index": {
"type": "rpt_index",
"PCTmax": "4096",
"PCTmin": "2048"
}
}
Option "json" will output only the tile domains, as a JSON array:
["[100:149,210:210]","[150:199,0:29]",..."[0:49,30:59]"]
The last option, "svg", will output the tiles as SVG that can be visualised. Example:
<svg width="array width" height="array height">
<rect x="100" y="210" width="50" height="1" id="1232"></rect>
<rect x="150" y="0" width="50" height="30" id="3223"></rect>
...
</svg>
Note
This function can only be invoked on persistent MDD objects, not on derived (transient) MDDs.
Warning
This function is in beta version. While output syntax is likely to remain largely unchanged, invocation syntax is expected to change to something like
describe array oidExp
## 4.15. Arithmetic Errors and Other Exception Situations¶
During query execution, a number of situations can arise which prevent delivering the desired query result or database update effect. If the server detects such a situation, query execution is aborted and an error exception is thrown. In this section, we classify the errors that can occur and describe each class.

However, we do not go into the details of handling such an exception - this is the task of the application program, so we refer to the respective API Guides.
### 4.15.1. Overflow¶
Candidates
Overflow conditions can occur with add_cells and induced operations such as +.
System Reaction
The overflow will be silently ignored, producing a result represented by the bit pattern pruned to the available size. This is in coherence with overflow handling in performance-oriented programming languages.
Remedy
Query coding should avoid potential overflow situations by applying numerical knowledge - simply said, the same care should be applied as always when dealing with numerics.
It is worth being aware of the type coercion rules in rasdaman and overflow handling in C++. The type coercion rules have been crafted to avoid overflow as much as possible, but of course it remains a possibility. Adding or multiplying two chars, for example, is guaranteed not to overflow. However, adding or multiplying two ulongs results in a ulong by default, which may not be large enough to hold the result. Therefore, it may be worth casting to double in this case, based on knowledge about the data.
Checking for overflow with a case statement like the below will not work as one might expect and is hence not recommended:
case
when a.longatt1 * a.longatt2 > 2147483647 then 2147483647
else a.longatt1 * a.longatt2
end
If a.longatt1 * a.longatt2 overflows, the result is undefined behavior according to C++ so it is not clear what the result value would be in this case. It will never be larger than the maximum value of 32-bit signed integer, however, because that is the result type according to the type coercion rules. Hence the comparison to 2147483647 (maximum value of 32-bit signed integer) will never return true.
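A variant that avoids the undefined behavior, sketched under the assumptions that double precision suffices for the attribute ranges and that the attribute values are non-negative, performs the comparison in floating point, so the else branch only multiplies when the product is known to fit:

case
when (double)a.longatt1 * (double)a.longatt2 > 2147483647.0 then 2147483647
else a.longatt1 * a.longatt2
end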
### 4.15.2. Illegal operands¶
Candidates
Division by zero, non-positive argument to logarithm, negative arguments to the square root operator, etc. are the well-known candidates for arithmetic exceptions.
The IEEE 754 standard lists, for each operation, all invalid inputs and the corresponding operation result. Examples include:
• division(0,0), division(INF,INF)
• sqrt(x) where x < 0
• log(x) where x < 0
System Reaction
In operations returning floating point numbers, results are produced in conformance with IEEE 754. For example, 1/0 results in nan.
In operations returning integer numbers, results for illegal operations are as follows:
• div(x, 0) leads to a “division by zero” exception
• mod(x, 0) leads to a “division by zero” exception
Remedy
To avoid an exception the following code is recommended for a div b (replace accordingly for mod), replacing all illegal situations with a result of choice, c:
case when b = 0 then c else div(a, b) end
If the particular situation allows, it may be more efficient to cast to floating-point, and cast back to integer after the division (if an integer result is wanted):
(long)((double)a / b)
Division by 0 will result in Inf in this case, which turns into 0 when cast to integer.
### 4.15.3. Access Rights Clash¶
If a database has been opened in read-only mode, a write operation will be refused by the server; “write operation” meaning an insert, update, or delete statement.
## 4.16. Database Retrieval and Manipulation¶
### 4.16.1. Collection Handling¶
#### 4.16.1.1. Create a Collection¶
The create collection statement is used to create a new, empty MDD collection by specifying its name and type. The type must exist in the database schema. There must not be another collection in this database bearing the name indicated.
Syntax
create collection collName typeName
Example
create collection mr GreySet
#### 4.16.1.2. Drop a Collection¶
A database collection can be deleted using the drop collection statement.
Syntax
drop collection collName
Example
drop collection mr1
#### 4.16.1.3. Alter Collection¶
The type of a collection can be changed using the alter collection statement. The new collection type is checked for compatibility (same cell type, dimensionality) with the existing type of the collection before it is set.
Syntax
alter collection collName
set type newCollType
Example
alter collection mr2
set type GreySetWithNullValues
#### 4.16.1.4. Retrieve All Collection Names¶
With the following rasql statement, a list of the names of all collections currently existing in the database is retrieved; both versions below are equivalent:
select RAS_COLLECTIONNAMES
from RAS_COLLECTIONNAMES
select r
from RAS_COLLECTIONNAMES as r
Note that the meta collection name, RAS_COLLECTIONNAMES, must be written in upper case only. No operation in the select clause is permitted. The result is a set of one-dimensional char arrays, each one holding the name of a database collection. Each such char array, i.e., string, is terminated by a zero value ('\0').
### 4.16.2. Select¶
The select statement allows for the retrieval from array collections. The result is a set (collection) of items whose structure is defined in the select clause. Result items can be arrays, atomic values, or structs. In the where clause, a condition can be expressed which acts as a filter for the result set. A single query can address several collections.
Syntax
select resultList
from collName [ as collIterator ]
[, collName [ as collIterator ] ] ...
select resultList
from collName [ as collIterator ]
[, collName [ as collIterator ] ] ...
where booleanExp
Examples
This query delivers a set of grayscale images:
select mr[100:150,40:80] / 2
from mr
where some_cells( mr[120:160, 55:75] > 250 )
This query, on the other hand, delivers a set of integers:
select count_cells( mr[120:160, 55:75] > 250 )
from mr
Finally, this query delivers a set of structs, each one with an integer and a 2-D array component:
select struct { max_cells( a ), a }
from mr as a
### 4.16.3. Insert¶
MDD objects can be inserted into database collections using the insert statement. The array to be inserted must conform to the collection’s type definition in both cell type and spatial domain. One or more variable bounds in the collection’s array type definition allow degrees of freedom for the array to be inserted. Hence, the resulting collection in this case can contain arrays with different spatial domains.
Syntax
insert into collName
values mddExp
collName specifies the name of the target set, mddExp describes the array to be inserted.
Example
Add a black image to collection mr1.
insert into mr1
values marray x in [ 0:255, 0:210 ]
values 0c
See the programming interfaces described in the rasdaman Developer’s Guides on how to ship external array data to the server using insert and update statements.
### 4.16.4. Update¶
The update statement allows manipulating the arrays of a collection. Which elements of the collection are affected can be determined with the where clause; by indicating a particular OID, single arrays can be updated.

An update can be complete, in that the whole array is replaced, or partial, i.e., only part of the database array is changed. Only those array cells are affected which lie within the spatial domain of the replacement expression on the right-hand side of the set clause. Pixel locations are matched pairwise according to the arrays’ spatial domains. Therefore, to appropriately position the replacement array, application of the shift() function (see Shifting a Spatial Domain) can be necessary; for more details and practical examples continue to Partial Updates.

As a rule, the spatial domain of the right-hand side expression must be equal to or a subset of the database array’s spatial domain.
Cell values contained in the update null set will not overwrite existing cell values which are not null. The update null set is taken from the source MDD if it is not empty, otherwise it will be taken from the target MDD.
Syntax
update collName as collIterator
set updateSpec assign mddExp
update collName as collIterator
set updateSpec assign mddExp
where booleanExp
where updateSpec can optionally contain a restricting minterval (see examples further below):
var
var [ mintervalExp ]
Each element of the set named collName which fulfils the selection predicate booleanExp gets assigned the result of mddExp. The right-hand side mddExp overwrites the corresponding area in the collection element; note that no automatic shifting takes place: the spatial domain of mddExp determines the very place where to put it.
If you want to include existing data from the database in mddExp, then this needs to be specified in an additional from clause, just like in normal select queries. The syntax in this case is
update collName as collIterator
set updateSpec assign mddExp
from existingCollName [ as collIterator ]
[, existingCollName [ as collIterator ] ] ...
where booleanExp
Example
An arrow marker is put into the image in collection mr2. The appropriate part of a is selected and added to the arrow image which, for simplicity, is assumed to have the appropriate spatial domain.
Figure 4.16 Original image of collection mr2
update mr2 as a
set a assign a[0 : 179 , 0:54] + $1/2c

The argument $1 is the arrow image (Figure 4.17) which has to be shipped to the server along with the query. It is an image showing a white arrow on a black background. For more information on the use of $ variables you may want to consult the language binding guides of the rasdaman Documentation Set.

Figure 4.17 Arrow used for updating

Looking up the mr2 collection after executing the update yields the result as shown in Figure 4.18:

Figure 4.18 Updated collection mr2

Note

The replacement expression and the MDD to be updated (i.e., left and right-hand side of the assign clause) in the above example must have the same dimensionality. Updating a (lower-dimensional) section of an MDD can be achieved through a section operator indicating the “slice” to be modified. The following query appends one line to a fax (which is assumed to be extensible in the second dimension):

update fax as f
set f[ *:* , sdom(f)[1].hi+1 ]
assign $1
The example below updates target collection mr2 with data from rgb (collection that exists already in the database):
update mr2 as a
set a assign b[ 0:150, 50:200 ].red
from rgb as b
#### 4.16.4.1. Partial Updates¶
Often very large data files need to be inserted in rasdaman, which don’t fit in main memory. One way to insert such a large file is to split it into smaller parts, and then import each part one by one via partial updates, until the initial image is reconstructed in rasdaman.
This is done in two steps: initializing an MDD in a collection, and inserting each part in this MDD.
##### 4.16.4.1.1. Initialization¶
Updates replace an area in a target MDD object with the data from a source MDD object, so first the target MDD object needs to be initialized in a collection. To initialize an MDD object it’s sufficient to insert an MDD object of size 1 (a single point) to the collection:
insert into Coll
values marray it in [0:0,0:0,...] values 0
Note that the MDD constructed with the marray constructor should match the type of Coll (dimension and base type). If the dimension of the data matches the Coll dimensions (e.g. both are 3D), then inserting some part of the data would work as well. Otherwise, if data is 2D and Coll is 3D for example, it is necessary to initialize an array in the above way.
After we have an MDD initialized in the collection, we can continue with updating it with the individual parts using the update statement in rasql.
Referring to the update statement syntax, mddExp can be any expression that results in an MDD object M, like an marray construct, a format conversion function, etc. The position where M will be placed in the target MDD (collIterator) is determined by the spatial domain of M. When importing data in some format via the decode function, by default the resulting MDD has an sdom of [0:width-1,0:height-1,..], which will place M at [0,0,..] in the target MDD. In order to place it at a different position, the spatial domain of M has to be explicitly set with the shift function in the query. For example:
update Coll as c set c
assign shift(decode($1), [100,100])

The update statement allows one to dynamically expand MDDs (up to the limits of the MDD type, if any have been specified), so it’s not necessary to fully materialize an MDD. When the MDD is first initialized with:

insert into Coll
values marray it in [0:0,0:0,...] values 0

it has a spatial domain of [0:0,0:0,...] and only one point is materialized in the database. Updating this MDD later on further expands the spatial domain if the source array M extends outside the sdom of the target array T.

##### 4.16.4.1.3. Example: 3D timeseries¶

Create a 3D collection first for arrays of type float:

create collection Coll FloatSet3

Initialize an array with a single cell in the collection:

insert into Coll
values marray it in [0:0,0:0,0:0] values 0f

Update array with data at the first time slice:

update Coll as c set c[0,*:*,*:*]
assign decode($1)
Update array with data at the second time slice, but shift spatially to [10,1]:
update Coll as c set c[1,*:*,*:*]
assign shift( decode($1), [10,1] )

And so on.

##### 4.16.4.1.4. Example: 3D cube of multiple 3D arrays¶

In this case we build a 3D cube by concatenating multiple smaller 3D cubes along a certain dimension, i.e. build a 3D mosaic. Create the 3D collection first (suppose it’s for arrays of type float):

create collection Coll FloatSet3

Initialize an array with a single cell in the collection:

insert into Coll
values marray it in [0:0,0:0,0:0] values 0f

Update array with the first cube, which has itself sdom [0:3,0:100,0:100]:

update Coll as c set c[0:3,0:100,0:100]
assign decode($1, "netcdf")
After this Coll has sdom [0:3,0:100,0:100].
Update array with the second cube, which has itself sdom [0:5,0:100,0:100]; note that now we want to place this one on top of the first one with respect to the first dimension, so its origin must be shifted by 5 so that its sdom will be in effect [5:10,0:100,0:100]:
update Coll as c set c[5:10,0:100,0:100]
assign shift(decode($1, "netcdf"), [5,0,0]) The sdom of Coll is now [0:10,0:100,0:100]. Update array with the third cube, which has itself sdom [0:2,0:100,0:100]; note that now we want to place this one next to the first two with respect to the second dimension and a bit higher by 5 pixels, so that its sdom will be in effect [5:7,100:200,0:100]: update Coll as c set c[5:7,100:200,0:100] assign shift(decode($1, "netcdf"), [5,100,0])
The sdom of Coll is now [0:10,0:200,0:100].
### 4.16.5. Delete¶
Arrays are deleted from a database collection using the delete statement. The arrays to be removed from a collection can be further characterized in an optional where clause. If the condition is omitted, all elements will be deleted so that the collection will be empty afterwards.
Syntax
delete from collName [ as collIterator ]
[ where booleanExp ]
Example
delete from mr1 as a
where all_cells( a < 30 )
This will delete all “very dark” images of collection mr1 with all pixel values lower than 30.
## 4.17. Transaction Scheduling¶
Since rasdaman 9.0, database transactions lock arrays on fine-grain level. This prevents clients from changing array areas currently being modified by another client.
### 4.17.1. Locking¶
Lock compatibility is as expected: read access involves shared (“S”) locks which are mutually compatible while write access imposes an exclusive lock (“X”) which prohibits any other access:
|   | S | X |
|---|---|---|
| S | + | - |
| X | - | - |
Shared locks are set by SELECT queries, exclusive ones in INSERT, UPDATE, and DELETE queries.
Locks are acquired by queries dynamically as needed during a transaction. All locks are held until the end of the transaction, and then released collectively [15].
### 4.17.2. Lock Granularity¶
The unit of locking is a tile, as tiles also form the unit of access to persistent storage.
### 4.17.3. Conflict Behavior¶
If a transaction attempts to acquire a lock on a tile which has an incompatible lock it will abort with a message similar to the following:
Error: One or more of the target tiles are locked by another
transaction.
Only the query will return with an exception; the rasdaman transaction as such is not affected. It is up to the application program to catch the exception and react properly, depending on the particular intended behaviour.
### 4.17.4. Lock Federation¶
Locks are maintained in the PostgreSQL database in which rasdaman stores data. Therefore, all rasserver processes accessing the same RASBASE get synchronized.
### 4.17.5. Examples¶
The following two SELECT queries can be run concurrently against the same database:
rasql -q "select mr[0:10,0:10] from mr"
rasql -q "select mr[5:10,5:10] from mr"
The following two UPDATE queries can run concurrently as well, as they address different collections:
rasql -q "update mr set mr[0:10,0:10] \
assign marray x in [0:10,0:10] values 127c" \
rasql -q "update mr2 set mr2[0:5,0:5] \
assign marray x in [0:5,0:5] values 65c" \
From the following two queries, one will fail (the one which happens to arrive later) because they address the same tile:
rasql -q "update mr set mr[0:10,0:10] assign \
marray x in [0:10,0:10] values 127c" \
rasql -q "update mr set mr[0:5,0:5] assign \
marray x in [0:5,0:5] values 65c" \
### 4.17.6. Limitations¶
Currently, only tiles are locked, not other entities like indexes.
## 4.18. Linking MDD with Other Data¶
### 4.18.1. Purpose of OIDs¶
Each array instance and each collection in a rasdaman database has an identifier which is unique within a database. For a collection this is the collection name and an object identifier (OID), whereas for an array it is only the OID. OIDs are generated by the system upon creation of an array instance; they do not change over an array’s lifetime, and OIDs of deleted arrays will never be reassigned to other arrays. This way, OIDs form the means to unambiguously identify a particular array. OIDs can be used in several ways:
• In rasql, OIDs of arrays can be retrieved and displayed, and they can be used as selection conditions in the condition part.
• OIDs form the means to establish references from objects or tuples residing in other databases systems to rasdaman arrays. Please refer for further information to the language-specific rasdaman Developer’s Guides and the rasdaman External Products Integration Guide available for each database system to which rasdaman interfaces.
Due to the very different referencing mechanisms used in current database technology, there cannot be one single mechanism. Instead, rasdaman employs its own identification scheme which, then, is combined with the target DBMS way of referencing. See Object identifier (OID) Constants of this document as well as the rasdaman External Products Integration Guide for further information.
### 4.18.2. Collection Names¶
MDD collections are named. The name is indicated by the user or the application program upon creation of the collection; it must be unique within the given database. The most typical usage forms of collection names are
• as a reference in the from clause of a rasql query
• their storage in an attribute of a base DBMS object or tuple, thereby establishing a reference (also called foreign key or pointer).
### 4.18.3. Array Object identifiers¶
Each MDD array is world-wide uniquely identified by its object identifier (OID). An OID consists of three components:
• A string containing the system where the database resides (system name),
• A string containing the database (base name), and
• A number containing the local object id within the database.
The main purposes of OIDs are
• to establish references from the outside world to arrays and
• to identify a particular array by indicating one OID or an OID list in the search condition of a query.
## 4.19. Storage Layout Language¶
### 4.19.1. Overview¶
Tiling
To handle arbitrarily large arrays, rasdaman introduces the concept of tiling them, that is: partitioning a large array into smaller, non-overlapping sub-arrays which act as the unit of storage access during query evaluation. To the query client, tiling remains invisible, hence it constitutes a tuning parameter which allows database designers and administrators to adapt database storage layout to specific query patterns and workloads.
To this end, rasdaman offers a storage layout language for arrays which embeds into the query language and gives users comfortable, yet concise control over important physical tuning parameters. Further, this sub-language wraps several strategies which turn out useful in face of massive spatio-temporal data sets.
Tiling can be categorized into aligned and non-aligned (Figure 4.19). A tiling is aligned if tiles are defined through axis-parallel hyperplanes cutting all through the domain. Aligned tiling is further classified into regular and aligned irregular, depending on whether the parallel hyperplanes are equidistant (except possibly for border tiles) or not. The special case of equally sized tile edges in all directions is called cubed.
Figure 4.19 Types of tilings
Non-aligned tiling contains tiles whose faces are not aligned with those of their neighbors. This can be partially aligned with still some hyperplanes shared or totally non-aligned with no such sharing at all.
Syntax
We use a BNF variant where optional elements are indicated as
( ... )?
to clearly distinguish them from the “[” and “]” terminals.
Tiling Through API
In the rasdaman C++ API (cf. C++ Guide), this functionality is available through a specific hierarchy of classes.
Introductory Example
The following example illustrates the overall syntax extension which the storage layout sublanguage adds to the insert statement:
insert into MyCollection
values ...
tiling
area of interest [0:20,0:40],[45:80,80:85]
tile size 1000000
### 4.19.2. General Tiling Parameters¶
Maximum Tile Size
The optional tile size parameter allows specifying a maximum tile size; irrespective of the algorithm employed to obtain a particular tile shape, its size will never exceed the maximum indicated in this parameter.
Syntax:
tile size t
where t indicates the tile size in bytes.
If nothing is known about the access patterns, tile size allows streamlining array tiling to architectural parameters of the server, such as DMA bandwidth and disk speed.
Tile Configuration
A tile configuration is a list of bounding boxes specified by their extent. No position is indicated, as it is the shape of the box which will be used to define the tiling, according to various strategies.
Syntax:
[ integerLit , ... , integerLit ]
For a d-dimensional MDD, the tile configuration consists of a vector of d elements where the i-th element specifies the tile extent in dimension i, for 0 ≤ i < d. Each number indicates the tile extent in cells along the corresponding dimension.

For example, a tile configuration [100, 100, 1000] for a 3-D MDD states that tiles should have an extent of 100 cells in dimensions 0 and 1, and an extent of 1,000 cells in dimension 2. In image timeseries analysis, such stretching of tiles along the time axis speeds up temporal analysis.
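In an insert statement such a configuration might look as follows, here using the aligned scheme described below (the collection name and tile size are illustrative):

insert into TimeSeriesColl
values ...
tiling aligned [ 100, 100, 1000 ]
tile size 4000000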
### 4.19.3. Regular Tiling¶
Concept
Regular tiling applies when there is some varying degree of knowledge about the subsetting patterns arriving with queries. We may or may not know the lower corner of the request box, the size of the box, or the shape (i.e., edge size ratio) of the box. For example, map viewing clients typically send several requests of fixed extent per mouse click to maintain a cache of tiles in the browser for faster panning. So the extent of the tile is known – or at least that tiles are quadratic. The absolute location often is not known, unless the client is kind enough to always request areas only in one fixed tile size and with starting points in multiples of the tile edge length. If additionally the configuration follows a uniform probability distribution, then a cubed tiling is optimal.
In the storage directive, regular tiling is specified by providing a bounding box list, TileConf, and an optional maximum tile size:
Syntax
tiling regular TileConf ( tile size integerLit )?
Example
This line below dictates, for a 2-D MDD, tiles to be of size 1024 x 1024, except for border tiles (which can be smaller):
tiling regular [ 1024 , 1024 ]
### 4.19.4. Aligned Tiling¶
Concept
Generalizing from regular tiling, we may not know a good tile shape for all dimensions, but only for some of them. An axis p in { 1, …, d } which never participates in any subsetting box is called a preferred (or preferential) direction of access and denoted as tc_p = *. An optimal tile structure in this situation extends to the array bounds in the preferential directions.
Practical use cases include satellite image time series stacks over some region. Grossly simplified, during analysis there are two distinguished access patterns (notwithstanding that others occur sometimes as well): either a time slice is read, corresponding to tc = (*, *, t) for some given time instance t, or a time series is extracted for one particular position (x, y) on the earth surface; this corresponds to tc = ( x, y, *). The aligned tiling algorithm creates tiles as large as possible based on the constraints that (i) tile proportions adhere to tc and (ii) all tiles have the same size. The upper array limits constitute an exception: for filling the remaining gap (which usually occurs) tiles can be smaller and deviate from the configuration sizings. Figure 4.20 illustrates aligned tiling with two examples, for configuration tc = (1, 2) (left) and for tc =(1, 3, 4) (right).
Figure 4.20 Aligned tiling examples
Preferential access is illustrated in Figure 4.21. Left, access is performed along preferential directions 1 and 2, corresponding to configuration tc = (*, *, 1). The tiling to the right supports configuration tc = (4, 1, *) with preferred axis 3.
Figure 4.21 Aligned tiling examples with preferential access directions
The aligned tiling construction consists of two steps. First, a concrete tile shape is determined. After that, the extent of all tiles is calculated by iterating over the array’s complete domain. In presence of more than one preferred direction - i.e., with a configuration containing more than one “*” value - axes are prioritized in descending order. This exploits the fact that array linearization is performed in a way that the “outermost loop” is the first dimension and the “innermost loop” the last. Hence, by clustering along higher coordinate axes a better spatial clustering is achieved.
Syntax
tiling aligned TileConf ( tile size IntLit )?
Example
The following clause accommodates map clients fetching quadratic images known to be no more than 512 x 512 x 3 = 786,432 bytes:
tiling aligned [1,1] tile size 786432
Important
Aligned tiling is the default strategy in rasdaman.
### 4.19.5. Directional Tiling¶
Concept
Sometimes the application semantics prescribes access in well-known coordinate intervals. In OLAP, such intervals are given by the semantic categories of the measures as defined by the dimension hierarchies, such as product categories which are defined for the exact purpose of accessing them group-wise in queries. Similar effects can occur with spatio-temporal data where, for example, a time axis may suggest access in units of days, weeks, or years. In rasdaman, if bounding boxes are well known then spatial access may be approximated by those; if they are overlapping then this is a case for area-of-interest tiling (see below); if not, then directional tiling can be applied.
The tiling corresponding to such a partition is given by its Cartesian product. Figure 4.22 shows such a structure for the 2-D and 3-D case.
To construct it, the partition vectors are used to span the Cartesian product first. Should one of the resulting tiles exceed the size limit, as it happens in the tiles marked with a “*” in Figure 4.22, then a so-called sub-tiling takes place. Sub-tiling applies regular tiling by introducing additional local cutting hyperplanes. As these hyperplanes do not stretch through all tiles the resulting tiling in general is not regular. The resulting tile set guarantees that for answering queries using one of the subsetting patterns in part, or any union of these patterns, only those cells are read which will be delivered in the response. Further, if the area requested is smaller than the tile size limit then only one tile needs to be accessed.
Figure 4.22 Directional tiling
Sometimes axes do not have categories associated. One possible reason is that subsetting is never performed along this axis, for example in an image time series where slicing is done along the time axis while the x/y image planes always are read in total. Similarly, for importing 4-D climate data into a GIS a query might always slice at the lowest atmospheric layer and at the most current time available without additional trimming in the horizontal axes.
We call such axes preferred access directions in the context of a directional tiling; they are identified by empty partitions. To accommodate this intention expressed by the user the sub-tiling strategy changes: no longer is regular tiling applied, which would introduce undesirable cuts along the preferred axis, but rather are subdividing hyperplanes constructed parallel to the preference axis. This allows accommodating the tile size maximum while, at the same time, keeping the number of tiles accessed in preference direction at a minimum.
In Figure 4.23, a 3-D cube is first split by way of directional tiling (left). One tile is larger than the maximum allowed, hence sub-tiling starts (center). It recognizes that axes 0 and 2 are preferred and, hence, splits only along dimension 1. The result (right) is such that subsetting along the preferred axes - i.e., with a trim or slice specification only in dimension 1 - can always be accommodated with a single tile read.
Figure 4.23 Directional tiling of a 3-D cube with one degree of freedom
Syntax
tiling directional splitList
( with subtiling ( tile size integerLit)? )?
where splitList is a list of split vectors (t1,1; …; t1,n1),…,(td,1; …; td,nd). Each split vector consists of an ascendingly ordered list of split points for the tiling algorithm, or an asterisk “*” for a preferred axis. The split vectors are positional, applying to the dimension axes of the array in order of appearance.
Example
The following defines a directional tiling with split vectors (0; 512; 1024) and (0; 15; 200) for axes 0 and 2, respectively, with dimension 1 as a preferred axis:
tiling directional [0,512,1024], [], [0,15,200]
### 4.19.6. Area of Interest Tiling¶
Concept
An area of interest is a frequently accessed sub-array of an array object. An area-of-interest pattern, consequently, consists of a set of domains accessed with an access probability significantly higher than that of all other possible patterns. Goal is to achieve a tiling which optimizes access to these preferred patterns; performance of all other patterns is ignored.
These areas of interest do not have to fully cover the array, and they may overlap. The system will establish an optimal disjoint partitioning for the given boxes in a way that the amount of data and the number of tiles accessed for retrieval of any area of interest are minimized. More exactly, it is guaranteed that accessing an area of interest only reads data belonging to this area.
Figure 4.24 gives an intuition of how the algorithm works. Given some area-of-interest set (a), the algorithm first partitions using directional tiling based on the partition boundaries (b). By construction, each of the resulting tiles (c) contains only cells which all share the same areas of interest, or none at all. As this introduces fragmentation, a merge step follows where adjacent partitions overlapping with the same areas of interest are combined. Often there is more than one choice to perform merging; the algorithm is inherently nondeterministic. Rasdaman exploits this degree of freedom and clusters tiles in sequence of dimensions, as this represents the sequentialization pattern on disk and, hence, is the best choice for maintaining spatial clustering on disk (d,e). In a final step, sub-tiling is performed on the partitions as necessary, depending on the tile size limit. In contrast to the directional tiling algorithm, an aligned tiling strategy is pursued here making use of the tile configuration argument, tc. As this does not change anything in our example, the final result (f) is unchanged over (e).
Figure 4.24 Steps in performing area of interest tiling
Syntax
tiling area of interest tileConf ( tile size integerLit )?
Example
tiling area of interest
[0:20,0:40],[945:980,980:985],[10:1000,10:1000]
### 4.19.7. Tiling statistic¶
Concept
Area of interest tiling requires enumeration of a set of clearly delineated areas. Sometimes, however, retrieval does not follow such a focused pattern set, but rather shows some random behavior oscillating around hot spots. This can occur, for example, when using a pointing device in a Web GIS: while many users possibly want to see some “hot” area, coordinates submitted will differ to some extent. We call such a pattern multiple accesses to areas of interest. Area of interest tiling can lead to significant disadvantages in such a situation. If the actual request box is contained in some area of interest then the corresponding tiles will have to be pruned from pixels outside the request box; this requires a selective copying which is significantly slower than a simple memcpy(). More important, however, is a request box going slightly over the boundaries of the area of interest - in this case, an additional tile has to be read from which only a small portion will be actually used. Disastrous, finally, is the output of the area-of-interest tiling, as an immense number of tiny tiles will be generated for all the slight area variations, leading to costly merging during requests.
This motivates a tiling strategy which accounts for statistically blurred access patterns. The statistic tiling algorithm receives a list of access patterns plus border and frequency thresholds. The algorithm condenses this list into a smallish set of patterns by grouping them according to similarity. This process is guarded by the two thresholds. The border threshold determines from which difference on two areas are considered separate; it is measured in number of cells to make it independent of area geometry. The result is a reduced set of areas, each associated with a frequency of occurrence. In a second run, those areas are filtered out which fall below the frequency threshold. Having calculated such representative areas, the algorithm performs an area of interest tiling on these.
This method has the potential of reducing overall access costs provided thresholds are placed wisely. Log analysis tools can provide estimates for guidance. In the storage directive, statistical tiling receives a list of areas plus, optionally, the two thresholds and a tile size limit.
Syntax
tiling statistic tileConf
( tile size integerLit )?
( border threshold integerLit)?
( interest threshold floatLit)?
Example
The following example specifies two areas, a border threshold of 50 and an interest probability threshold of 30%:
tiling statistic [0:20,0:40],[30:50,70:90]
border threshold 50
interest threshold 0.3
### 4.19.8. Summary: Tiling Guidelines¶
This section summarizes rules of thumb for a good tiling. However, a thorough evaluation of the query access pattern, either empirically through server log inspection or theoretically by considering application logics, is strongly recommended, as it typically offers a potential for substantial improvements over the standard heuristics.
• Nothing is known about access patterns: choose regular tiling with a maximum tile size; on PC-type architectures, tile sizes of about 4-5 MB have yielded good results.
• Trim intervals in direction x are n times more frequent than in direction y and z together: choose directional tiling where the ratios are approximately x*n=y*z. Specify a maximum tile size.
• Hot spots (i.e., their bounding boxes) are known: choose Area of Interest tiling on these bounding boxes.
### 4.20.1. Overview¶
As part of petascope, the geo service frontend to rasdaman, Web access to rasql is provided. The request format is described in Request Format, the response format in Response Format below.
### 4.20.2. Service Endpoint¶
The service endpoint for rasql queries is
http://{service}/{path}/rasdaman/rasql
### 4.20.3. Request Format¶
A request is sent as an http GET URL with the query as key-value pair parameter. By default, the rasdaman login is taken from the petascope settings in petascope.properties; optionally, another valid rasdaman user name and password can be provided as additional parameters.
Syntax
http://{service}/{path}/rasdaman/rasql?params
This servlet endpoint accepts KVP requests with the following parameters:
query=q
where q is a valid rasql query, appropriately escaped as per http specification.
username=u

where u is the user name for logging into rasdaman (optional, default: value of variable rasdaman_user in petascope.properties)

password=p

where p is the password for logging into rasdaman (optional, default: value of variable rasdaman_pass in petascope.properties)
Example
The following URL sends a query request to a fictitious server www.acme.com:
http://www.acme.com/rasdaman/rasql?
query=select%20rgb.red+rgb.green%20from%20rgb
Since v10, this servlet endpoint can accept the credentials for username:password in basic authentication headers and POST protocol, for example using curl tool:
curl -u rasguest:rasguest
-d 'query=select 1 + 15 from test_mr as c'
'http://localhost:8080/rasdaman/rasql'
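The same request can also be issued from a small script. This is an illustrative sketch (not from the official documentation), reusing the example endpoint and credentials above and letting the Python standard library perform the required URL-escaping:
from urllib.parse import urlencode
from urllib.request import urlopen

# Example values taken from the curl call above; adjust for a real server.
endpoint = "http://localhost:8080/rasdaman/rasql"
params = {
    "query": "select 1 + 15 from test_mr as c",  # will be URL-encoded
    "username": "rasguest",                      # optional, see defaults above
    "password": "rasguest",
}
with urlopen(endpoint + "?" + urlencode(params)) as response:
    body = response.read()                       # raw bytes of the reply
print(len(body), "bytes received")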
If the results from the rasql server are multiple objects (e.g., SELECT ... FROM RAS_* or a collection containing multiple arrays), then they are written in multipart/related MIME format with the string End as the multipart boundary.
Clients need to parse the multipart results for these cases. There are some useful libraries to do that, e.g. NodeJS with Mailparser.
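For example, continuing from the body bytes fetched in the sketch above, a rough way to split such a reply in Python (a full MIME parser is more robust) is:
parts = [p for p in body.split(b"--End") if p.strip()]
for i, part in enumerate(parts):
    print("part", i, "has", len(part), "bytes")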
### 4.20.4. Response Format¶
The response to a rasdaman query gets wrapped into a http message. The response format is as follows, depending on the nature of the result:
If the query returns arrays, then the MIME type of the response is application/octet-stream.
• If the result is empty, the document will be empty.
• If the result consists of one array object, then this object will be delivered as is.
• If the result consists of several array objects, then the response will consist of a Multipart/MIME document.
• If the query returns scalars, all scalars will be delivered in one document of MIME type text/plain, separated by whitespace.
### 4.20.5. Security¶
User and password are expected in cleartext, so do not use this tool in security sensitive contexts.
The service endpoint rasdaman/rasql, being part of the petascope servlet, can be disabled in the servlet container’s setup (such as Tomcat).
### 4.20.6. Limitations¶
Currently, no uploading of data to the server is supported. Hence, functionality is restricted to queries without positional parameters $1, $2, etc.
Currently, array responses returned invariably have the same MIME type, application/octet-stream. In future it is foreseen to adjust the MIME type to the identifier of the specific file format as chosen in the encode() function.
## 4.21. Appendix A: rasql Grammar¶
This appendix presents a simplified list of the main rasql grammar rules used in the rasdaman system. The grammar is described as a set of production rules. Each rule consists of a non-terminal on the left-hand side of the colon operator and a list of symbol names on the right-hand side. The vertical bar | introduces a rule with the same left-hand side as the previous one. It is usually read as or. Symbol names can either be non-terminals or terminals (the former ones printed in bold face as a link which can be followed to the non-terminal production). Terminals represent keywords of the language, or identifiers, or number literals; “(“, “)”, “[“, and “]” are also terminals, but they are in double quotes to distinguish them from the grammar parentheses (used to group alternatives) or brackets (used to indicate optional parts).
query ::= createExp
| dropExp
| selectExp
| updateExp
| insertExp
| deleteExp
createExp ::= createCollExp
| createStructTypeExp
| createMarrayTypeExp
| createSetTypeExp
createCollExp ::= create collection
namedCollection typeName
createCellTypeExp ::= create type typeName
as cellTypeExp
cellTypeExp ::= "(" attributeName typeName
[ , attributeName typeName ]... ")"
createMarrayTypeExp ::= create type typeName
as "(" cellTypeExp | typeName ")"
mdarray domainSpec
domainSpec ::= "[" extentExpList "]"
extentExpList ::= extentExp [ , extentExpList ]
extentExp ::= axisName
[ "(" integerLit | intervalExp ")" ]
boundSpec ::= integerExp
createSetTypeExp ::= create type typeName
as set "(" typeName ")"
"[" nullExp "]"
nullExp ::= null values mintervalExp
dropExp ::= drop collection namedCollection
| drop type typeName
selectExp ::= select resultList
from collectionList
[ where generalExp ]
updateExp ::= update iteratedCollection
set updateSpec
assign generalExp
[ where generalExp ]
insertExp ::= insert into namedCollection
values generalExp
[ tiling [ StorageDirectives ] ]
StorageDirectives ::= RegularT | AlignedT | DirT
| AoiT | StatT
RegularT ::= regular TileConf
[ tile size integerLit ]
AlignedT ::= aligned TileConf [ TileSize ]
DirT ::= directional SplitList
[ with subtiling [ TileSize ] ]
AoiT ::= area of interest BboxList
[ TileSize ]
StatT ::= statistic TileConf [ TileSize ]
[ border threshold integerLit ]
[ interest threshold floatLit ]
TileSize ::= tile size integerLit
TileConf ::= BboxList [ , BboxList ]...
BboxList ::= "[" integerLit : integerLit
[ , integerLit : integerLit ]... "]"
Index ::= index IndexName
deleteExp ::= delete from iteratedCollection
[ where generalExp ]
updateSpec ::= variable [ mintervalExp ]
resultList ::= [ resultList , ] generalExp
generalExp ::= mddExp
| trimExp
| reduceExp
| inductionExp
| caseExp
| functionExp
| integerExp
| condenseExp
| variable
| mintervalExp
| intervalExp
| generalLit
mintervalExp ::= "[" spatialOpList "]"
| sdom "(" collIterator ")"
intervalExp ::= ( integerExp | * ) :
( integerExp | * )
integerExp ::= integerTerm + integerExp
| integerTerm - integerExp
| integerTerm
integerTerm ::= integerFactor * integerTerm
| integerFactor / integerTerm
| integerFactor
integerFactor ::= integerLit
| identifier [ structSelection ]
| mintervalExp . lo
| mintervalExp . hi
| "(" integerExp ")"
spatialOpList ::= spatialOpList2
spatialOpList2 ::= spatialOpList2 , spatialOp
| spatialOp
spatialOp ::= generalExp
condenseExp ::= condense condenseOpLit
over condenseVariable in generalExp
[ where generalExp ]
using generalExp
condenseOpLit ::= + | * | and | or | max | min
functionExp ::= version "(" ")"
| unaryFun "(" collIterator ")"
| binaryFun
"(" generalExp , generalExp ")"
| transcodeExp
unaryFun ::= oid | dbinfo
binaryFun ::= shift | scale | bit | pow | power | div | mod
transcodeExp ::= encode "(" generalExp , StringLit
[ , StringLit ] ")"
| decode "(" $integerLit [ , StringLit , StringLit ] ")"
| decode "(" generalExp ")"
structSelection ::= . ( attributeName | integerLitExp )
inductionExp ::= unaryInductionOp "(" generalExp ")"
| generalExp . ( re | im )
| generalExp structSelection
| not generalExp
| generalExp binaryInductionOp generalExp
| ( + | - ) generalExp
| "(" castType ")" generalExp
| "(" generalExp ")"
unaryInductionOp ::= sqrt | abs | exp | log | ln
| sin | cos | tan | sinh | cosh | tanh
| arcsin | arccos | arctan
binaryInductionOp ::= overlay | is | = | and | or | xor
| plus | minus | mult | div | equal
| < | > | <= | >= | !=
castType ::= bool | char | octet | short | long | ulong
| float | double | ushort
| unsigned ( short | long )
caseExp ::= case [ generalExp ] whenList
else generalExp end
whenList ::= [ whenList ] when generalExp then generalExp
collectionList ::= [ collectionList , ] iteratedCollection
iteratedCollection ::= namedCollection [ [ as ] collIterator ]
reduceExp ::= reduceIdent "(" generalExp ")"
reduceIdent ::= all_cells | some_cells | count_cells
| avg_cells | min_cells | max_cells | add_cells
| stddev_samp | stddev_pop | var_samp | var_pop
trimExp ::= generalExp mintervalExp
mddExp ::= marray ivList values generalExp
ivList ::= [ ivList , ] marrayVariable in generalExp
generalLit ::= scalarLit | mddLit | StringLit | oidLit
oidLit ::= < StringLit >
mddLit ::= < mintervalExp dimensionLitList >
| $ integerLit
dimensionLitList ::= [ dimensionLitList ; ] scalarLitList
scalarLitList ::= [ scalarLitList , ] scalarLit
scalarLit ::= complexLit | atomicLit
complexLit ::= [ struct ] { scalarLitList }
atomicLit ::= booleanLit | integerLit | floatLit
| complex "(" floatLit , floatLit ")"
| complex "(" integerLit , integerLit ")"
typeName ::= identifier
variable ::= identifier
namedCollection ::= identifier
collIterator ::= identifier
attributeName ::= identifier
marrayVariable ::= identifier
condenseVariable ::= identifier
identifier ::= [a-zA-Z_] [a-zA-Z0-9_]*
## 4.22. Appendix B: Reserved keywords¶
This appendix presents the list of all tokens that CANNOT be used as variable names in rasql.
//.* –.* complex re im struct fastscale members add alter list select from where as restrict to extend by project near bilinear cubic cubicspline lanczos average mode med q1 q3 at dimension all_cell|all_cells some_cell|some_cells count_cell|count_cells add_cell|add_cells avg_cell|avg_cells min_cell|min_cells max_cell|max_cells var_pop var_samp stddev_pop stddev_samp sdom over overlay using lo hi concat along case when then else end insert into values delete drop create collection type update set assign in marray mdarray condense null commit oid shift clip subspace multipolygon projection polygon curtain corridor linestring coordinates multilinestring discrete range scale dbinfo version div mod is not sqrt tiff bmp hdf netcdf jpeg csv png vff tor dem encode decode inv_tiff inv_bmp inv_hdf inv_netcdf inv_jpeg inv_csv inv_png inv_vff inv_tor inv_dem inv_grib abs exp pow power log ln sin cos tan sinh cosh tanh arcsin asin arccos acos arctan atan index rc_index tc_index a_index d_index rd_index rpt_index rrpt_index it_index auto tiling aligned regular directional with subtiling no_limit regroup regroup_and_subtiling area of interest statistic tile size border threshold unsigned bool char octet short ushort long ulong float double CFloat32 CFloat64 CInt16 CInt32 nan nanf inf inff max min bit and or xor
http://mathoverflow.net/questions/100276/can-one-prove-complex-multiplication-without-assuming-cft?sort=newest | # Can one prove complex multiplication without assuming CFT?
The Kronecker-Weber Theorem, stating that any abelian extension of $\mathbb Q$ is contained in a cyclotomic extension, is a fairly easy consequence of Artin reciprocity in class field theory (one just identifies the ray class groups and shows that each corresponds to a cyclotomic extension). However, one can produce a more direct and elementary proof of this fact that avoids appealing to the full generality of class field theory (see, for example, the exercises in the fourth chapter of Number Fields by Daniel Marcus). In other words, one can prove class field theory for $\mathbb Q$ using much simpler methods than for the general case.
The theory of complex multiplication is similar to the theory of cyclotomic fields (and hence the Kronecker-Weber Theorem) in that it shows that any abelian extension of a quadratic imaginary field is contained in an extension generated by the torsion points of an elliptic curve with complex multiplication by our field. To prove this, one normally assumes class field theory and then shows that the field generated by the $m$-torsion (or, more specifically, the Weber function of the $m$-torsion) is the ray class field of conductor $m$.
My question is: Can one prove that any abelian extension of an imaginary quadratic field $K$ is contained in a field generated by the torsion of an elliptic curve with complex multiplication by $K$ without resorting to the general theory of class field theory? I.e. where one directly proves class field theory for $K$ by referring to the elliptic curve. Is there a proof in the style of the exercises in Marcus's book?
Note: Obviously there is no formal formulation of what I'm asking. One way or another, you can prove complex multiplication. But the question is whether you can give a proof of complex multiplication in a certain style.
Historically, Complex Multiplication precedes Class Field Theory and many of the main theorems of CM for elliptic curves were proved directly. See the Lehrbuch der Algebra (3 volumes) by Weber or Cox's book for an exposition.
Please also read Birch's article on the beginnings of Heegner points where he points this out explicitly (page three, paragraph beginning "Complex multiplication ...").
https://www.physicsforums.com/threads/chemical-reaction-equation-historical-question.749178/ | # Chemical reaction equation, historical question
1. Apr 17, 2014
### 7777777
I am reading a chemistry book printed in 1805. The chemical reaction equations are written using the equality symbol = instead of the arrow →, which is used in modern times.
Anyway sometimes it is still possible to see the "old fashioned" way:
http://www.jeron.je/anglia/learn/sec/science/changmat/page13.htm
Does anyone know why the equality symbol was abandoned, and when it happened in the history of chemistry? Are there reasons why this change was needed?
I know only a little about chemistry; I think this is a very basic question, but I cannot seem to find the complete solution myself. I can think that maybe the = was replaced by → because chemical reaction equations are not mathematical equations: there is no equality in the equation in the mathematical sense.
If chemical equations are not mathematics, then why has the addition symbol + not been replaced by something else? Addition is a mathematical operation, so should it be understood to also mean a chemical reaction? Something is added into something else; perhaps this is a universal concept applicable not just in mathematics.
2. Apr 17, 2014
### PhysicoRaj
An arrow indicates direction, whereas an equality sign does not.
3. Apr 17, 2014
### DrDu
As long as a reaction is not in equilibrium, the reaction proceeds in one or the other direction. Hence it is more convenient to use arrows. In some situations, it is also necessary to distinguish formally between reactands and products, e.g. in calculating the potential of a electrochemical half cell, you divide by convention the product of the concentration of the products by that of the reactands.
4. Apr 17, 2014
### 7777777
Ok, there is a direction in a chemical equation: reactants are the cause and products are the effect, hence there is causality. But not in a mathematical equation; there is symmetry in a mathematical equation instead of causality. 1+1→2 does not make sense because 2 is not caused by 1+1; instead there is symmetry: 1+1=2 and 2=1+1.
Perhaps this is a weakness of mathematics: it does not seem to offer causality.
5. Apr 17, 2014
### PhysicoRaj
And there are instances where mathematics offers a cause and effect.
Implication
Mathematical Induction
Contraposition
6. Apr 17, 2014
### 256bits
I found this.
chemistry and symbols
http://www.chemistryviews.org/details/ezine/2746271/History_and_Usage_of_Arrows_in_Chemistry.html
1789 Lavoisier uses "=" sign for a chemical equation.
1884 van't Hoff uses double arrows
1901 single arrow to designate direction, products and reactants
Other uses of arrows in chemistry shown, past and present.
7. Apr 18, 2014
### PhysicoRaj
Nice links. This timeline was very interesting→
https://www.vedantu.com/formula/percentage-formula | # Percentage Formula
## Calculate Percentage
There are various formulas for finding a percentage, and they help in solving percentage problems. Consider the most basic percentage formula: P% × A = B. However, there are many mathematical variations of the percentage calculation formulas. Let's take a look at the three basic percentage problems that can be solved using percentage formulas. 'A' and 'B' are numbers and P is the percentage:
• Find P percent of A
• Find what percent of A is B
• Find A if P percent of it is B
For example, 25% of 1000 is 250
$\frac{\text{is}}{\text{of}} = \frac{\%}{100}$ or $\frac{\text{part}}{\text{whole}} = \frac{\%}{100}$
Percentage Formula
### How to Find what percent (%) of A is B.
Example: What percent of 75 is 15?
Follow the below step-by-step procedure and solve percentage problem in one go
1. First, you will need to convert the problem to an equation using the percentage formula, i.e. B/A = P%
2. Do the math: since the whole A is 75 and the part B is 15, the equation becomes: 15/75 = P%
3. Solve the equation: 15/75 = 0.20
4. Note: the outcome is always in decimal form, not a percentage, so we need to multiply it by 100 to obtain the percentage.
5. Convert the decimal 0.20 to a percent by multiplying it by 100
6. We get 0.20 × 100 = 20%
So 20% of 75 is 15
### How to Find A if P percent of it is B
Example: 50 is 10% of what number?
Follow the below step-by-step procedure and solve percentage problem in one go
1. First, convert the problem to an equation using the formula of percentage i.e. B/P% = A
2. Given that value of ‘B’ is 50, P% is 10, so the equation is 50/10% = A
3. Convert the percentage to a decimal form, dividing by 100.
4. Converting 10% to a decimal brings us: 10/100 = 0.10
5. Substitute 0.10 for 10% in the equation: 50/0.10 = A
6. Do the math: 50/0.10 = A
A = 500
So 50 is 10% of 500
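The three basic problems above fit in a few lines of code; this is an illustrative sketch (the function names are mine, not part of the page):
def percent_of(p, a):
    # Find P percent of A: (P/100) * A
    return (p / 100) * a

def what_percent(b, a):
    # Find what percent of A is B: (B/A) * 100
    return (b / a) * 100

def whole_from_part(b, p):
    # Find A if P percent of it is B: B / (P/100)
    return b / (p / 100)

print(percent_of(25, 1000))     # 250.0  -- 25% of 1000 is 250
print(what_percent(15, 75))     # 20.0   -- 15 is 20% of 75
print(whole_from_part(50, 10))  # 500.0  -- 50 is 10% of 500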
### Percentage Difference Formula
The percentage difference between two values/numbers is reckoned by dividing the absolute value of the difference between the two values by the average of those two values. Multiplying the outcome by 100 produces the solution in the form of a percent, rather than a decimal. Take a look at the equation below for an easy explanation:
Percentage Difference = |A1 − A2| / ((A1 + A2)/2) × 100
For example: find out the percentage difference between two values of 20 and 4
Solution: given two values is 20 and 4
So,
|20 − 4| / ((20 + 4)/2) × 100
= 16/12 × 100
= (4/3) × 100
≈ 133.33%
### Percentage Change Formula
Percentage decrease and increase are reckoned by determining the difference between two values and comparing that difference to the primary value. In mathematical terms, this involves taking the absolute value of the difference between the two values and dividing the outcome by the primary value, effectively computing how much the primary value has changed.
The percentage change calculator computes an increase or decrease of a definite percentage of the input number. It typically involves converting the percent into its decimal equivalent, and either adding the decimal equivalent to 1 (for an increase) or subtracting it from 1 (for a decrease). Multiplying the primary number by this value leads to an increase or decrease of the number by the given percent. Refer to the examples below for clarification:
Example: 700 increased by 20% (0.2)
700 × (1 + 0.2) = 840
700 decreased by 10%
700 × (1 – 0.1) = 630
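Both formulas can be checked with a short sketch (illustrative only; the helpers take the percent as a number, e.g. 20 for 20%):
def percent_difference(a1, a2):
    # |A1 - A2| divided by the average of A1 and A2, times 100
    return abs(a1 - a2) / ((a1 + a2) / 2) * 100

def percent_change(value, percent):
    # Increase value by percent; pass a negative percent to decrease
    return value * (1 + percent / 100)

print(round(percent_difference(20, 4), 2))  # 133.33
print(percent_change(700, 20))              # 840.0
print(percent_change(700, -10))             # 630.0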
### Solved Examples
Example1
Find out ___% of 15 is 6
Solution1
Here whole = 15 and part = 6, but % is missing
We obtain:
6/15 = %/100
Replacing % by x and cross-multiplying provides:
6 × 100 = 15 × x
600 = 15 × x
Divide 600 by 15 to get x
600/15 = 40, so x = 40
Thus, 40% of 15 is 6
Example2
The tax on a soap dispenser machine is Rs 25.00. The tax rate is 20%. What is the price without tax?
Solution2
P × 20/100 = 25
P = 25 × 100/20 = 125
The price without tax is Rs. 125
### Fun Facts
1. The percentage (%) sign bears a significant ancient connection. Ancient Romans often performed calculations in fractions dividing by 100, which is presently equivalent to the computing percentages.
2. Calculations with a denominator of 100 became more typical since the introduction of the decimal system.
3. Percentage methods had frequently been used in medieval arithmetic texts to describe finances, such as interest rates.
Q1. How to know the percentage calculation is Accurate?
Ans. A common mistake when finding percentages is dividing instead of multiplying by the decimal conversion. Because percentages are often perceived as parts of a larger whole, there can be a tendency to divide instead of multiply when met with a problem such as "find 30% of 160." Always remember: after converting the percent to a decimal form, the next step is to multiply, not divide.
A proper understanding of percent enables us to estimate whether the answer is logical. From the above example, knowing that 30% is between one-quarter and one-half, the answer should be somewhere between 40 and 80. And the answer is 48 (0.30 × 160). By dividing instead (160 ÷ 0.30), you would get 533.33, which is completely wrong.
Q2. What is the Percentage?
Ans. Percent refers to a proportion "out of 100" or "for every 100". It is denoted by the symbol (%). A percentage makes for a fast means to write down a fraction with a denominator of 100. For example, instead of saying "the tutor covered 17 history lessons out of every 100," we say "she covered 17% of the history syllabus."
Q3. How to convert a Percentage to a Decimal?
Ans. Eliminate the symbol of percentage and divide by 100
25.70% = 25.7/100 = 0.257
Q4. How to convert a Decimal to a Percentage?
Ans. Add the sign of percentage and multiply by 100
0.257 = 0.257 × 100 = 25.7%
http://math.stackexchange.com/questions/29068/differential-equation | # Differential equation
Solve the differential equation: $$x \frac{dy}{dx} = y(3-y)$$ where x=2 when y=2, giving y as a function of x.
Can someone solve this and then explain what the second line about $x=2$, $y=2$ means?
Solutions of differential equations generally depend on a specified initial value. (Compare to the case of taking antiderivatives of a function: you have to write $+C$ because there is a free constant in the expression. By specifying the value of the antiderivative you want at a point, you fix that constant uniquely.) – Willie Wong Mar 25 '11 at 22:19
The second line just tells you that $y(2) = 2$. – Arturo Magidin Mar 26 '11 at 3:14
So you want to "solve" the initial value problem:
$\begin{cases} x y^\prime (x) = y(x)\ (3-y(x)) \\ y(2)=2 \end{cases}$.
Your ODE has a singular point in $x=0$ (for the coefficient of the $y^\prime (x)$ term vanishes), hence if the IVP has a solution it will have not to be defined in $x=0$.
Put the ODE in normal form, i.e. rewrite:
$\displaystyle y^\prime (x) =\frac{y(x)\ (3-y(x))}{x}$;
the function $f(x,y):= \frac{y (3-y)}{x}$ is of class $C^\infty$ in $(x,y)\in \Big(]-\infty ,0[\cup ]0,+\infty[\Big)\times \mathbb{R}$, hence the existence and uniqueness theorem applies and your IVP has unique local solution $y(x)$ whose graph passes through the point $(2,2)$.
The solution $y(x)$ is continuous in a neighbourhood $I_1$ of $x=2$ (because it has to be differentiable to satisfy the ODE), hence the composite function $f(x,y(x))$ is continuous in $I_1$; as $y^\prime (x)=f(x,y(x))$, then $y^\prime (x)$ is continuous in $I_1$, therefore $y(x)$ is a $C^1$ function in $I_1$. But then $y^\prime (x)$ is of class $C^1$ in $I_1$, for the composite function $f(x,y(x))$ is of class $C^1$ (apply the chain rule); therefore $y(x)$ is of class $C^2$... Bootstrapping, you see that $y(x)$ is of class $C^\infty$ in the neighbourhood $I_1$ of the initial point $2$.
Moreover, the solution $y(x)$ is also strictly increasing in a neighbourhood of $2$: in fact, $y(2)=2>0$ hence by continuity you can find a neighbourhood $I_2\subseteq I_1$ of $2$ in which $0<y(x)<3$, so:
$\displaystyle y^\prime (x)=\frac{y(x)\ (3-y(x))}{x} >0$,
thus $y(x)$ increases strictly.
Now you have all the ingredients to properly solve your problem: in fact, in $I_2$ you can divide both sides of the ODE by $y(x)\ (3-y(x))$ and rewrite:
$\displaystyle \frac{y^\prime (x)}{y(x)\ (3-y(x))} =\frac{1}{x}$;
now fix a point $x \in I_2$ and integrate both sides from $2$ to $x$:
$\displaystyle \int_2^x \frac{y^\prime (t)}{y(t)\ (3-y(t))}\ \text{d} t =\int_2^x \frac{1}{t}\ \text{d} t$
(I've introduced a dummy variable in the integrals); now the RHside gives you $\ln x -\ln 2$, hence you have to work on the LHside. Keeping in mind that $y(t)$ is strictly monotone hence invertible in $I_2$, we can make the change of variable $\theta =y(t)$: as $y(2)=2$ and $\text{d} \theta = y^\prime (t)\ \text{d} t$, you get:
$\displaystyle \begin{split}\int_2^x \frac{y^\prime (t)}{y(t)\ (3-y(t))}\ \text{d} t &= \int_2^{y(x)} \frac{1}{\theta (3-\theta)}\ \text{d} \theta \\ &=\frac{1}{3} \ln \theta - \frac{1}{3} \ln (3-\theta) \Big|_2^{y(x)} \\ &=\frac{1}{3} \left(\ln y(x) -\ln (3-y(x)) -\ln 2\right)\end{split}$.
Therefore the solution to your problem is implicitly determined by the equation:
$\displaystyle \ln \left( \frac{y(x)}{3-y(x)}\right) = \ln \frac{x^3}{4}$,
i.e.:
$\displaystyle \frac{4y(x)}{3-y(x)}=x^3$.
The latter equation is a rational algebraic equation w.r.t. $y(x)$ and can be solved with the usual tools, which yield:
$\displaystyle y(x)=\frac{3x^3}{4+x^3}$.
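A quick sanity check (an added remark, not part of the original answer): the formula meets both the initial condition and the ODE, since
$\displaystyle y(2)=\frac{3\cdot 2^3}{4+2^3}=\frac{24}{12}=2,\qquad y^\prime (x)=\frac{36x^2}{(4+x^3)^2}=\frac{1}{x}\cdot\frac{3x^3}{4+x^3}\cdot\frac{12}{4+x^3}=\frac{y(x)\left(3-y(x)\right)}{x}.$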
There is more that can be said, e.g. how the local solution $y(x)$ can be extended to a maximal solution... But that's another story.
another story indeed! Despite the overkill, I am grateful for a formal discussion and solution. +1 – The Chaz 2.0 Mar 26 '11 at 13:40
@The Chaz: My two cents: I don't think it's an overkill... It is just the correct way of doing the exercise. – Pacciu Mar 26 '11 at 15:53
This belief is evident by the nature of your answer! I'll continue to appreciate your rigor while "monkeying around" in my own way ;) – The Chaz 2.0 Mar 26 '11 at 16:07
@The Chaz: Thank you! And watch your steps while "monkeying around" ;D – Pacciu Mar 26 '11 at 16:25
This is easy because the variables are separable. So solve $dy/(y(3-y))=dx/x$.
... by integrating both sides, and then use the initial value to determine the constant of integration – Henry Mar 25 '11 at 22:45
The method you adopted for separating variables is usually called the urang-utang© method by some funny mathematicians. They mock: "People using this method without knowing its formal justification (if any!) resemble orangutans using things to make rudimentary tools and messing with them"; in fact, the method is based on a totally informal algebraic manipulation of differentials which is hard to formalize (hence it's almost meaningless). – Pacciu Mar 25 '11 at 23:56
@Pacciu: See this question, and in particular, Mike Spivey's answer there. – Rahul Mar 26 '11 at 2:29
@pac its not meaningless, just leave $y'$ alone, two functions (of $x$) are equal, so are their integrals (wrt $x$). – yoyo Mar 26 '11 at 14:31
@Rahul: Thanks for the reference, but I already know the story ;D @yoyo: There are people believing that, say, one can pass from $\frac{\text{d} y}{\text{d} x} =f(y)$ to $\text{d} y=f(y)\ \text{d} x$ by multiplying both sides by $\text{d} x$... But what's the meaning of this? How can a differential be considered as a number when it is not a number (for, it is just a symbol or a linear map)? This is what I was referring to when I wrote informal algebraic manipulation of differentials which is [...] almost meaningless. – Pacciu Mar 26 '11 at 15:46
This is just an Initial Value Problem. You use the techniques you know to solve for the general solution. Here is a good resource: http://tutorial.math.lamar.edu/Classes/DE/Linear.aspx. More specifically, you should look at "Separable Equations"
The general solution in this case will have one arbitrary constant. You use the I.V.P to solve for this constant.
https://link.springer.com/article/10.1007/s00454-017-9916-5?error=cookies_not_supported&error=cookies_not_supported&code=c28068ea-1d3b-489f-aa82-691d48f33777&code=8005c167-077b-46e0-bc35-212d0dbf5bef | # On Homotopy Types of Euclidean Rips Complexes
## Abstract
The Rips complex at scale r of a set of points X in a metric space is the abstract simplicial complex whose faces are determined by finite subsets of X of diameter less than r. We prove that for X in the Euclidean 3-space $$\mathbb {R}^3$$ the natural projection map from the Rips complex of X to its shadow in $$\mathbb {R}^3$$ induces a surjection on fundamental groups. This partially answers a question of Chambers, de Silva, Erickson and Ghrist who studied this projection for subsets of $$\mathbb {R}^2$$. We further show that Rips complexes of finite subsets of $$\mathbb {R}^n$$ are universal, in that they model all homotopy types of simplicial complexes PL-embeddable in $$\mathbb {R}^n$$. As an application we get that any finitely presented group appears as the fundamental group of a Rips complex of a finite subset of $$\mathbb {R}^4$$. We furthermore show that if the Rips complex of a finite point set in $$\mathbb {R}^2$$ is a normal pseudomanifold of dimension at least two then it must be the boundary of a crosspolytope.
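In symbols (notation added here for the reader's convenience, not used in the article): the Rips complex at scale r of X is $$\mathrm{VR}_r(X)=\{\sigma \subseteq X : \sigma \text{ finite},\ \mathrm{diam}(\sigma)<r\}$$, and the shadow of such a complex on points of $$\mathbb {R}^n$$ is, roughly, the union of the convex hulls of its faces under the natural projection back to $$\mathbb {R}^n$$.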
## Notes
1. Chambers et al. [3] use the term k-connected to describe the situation when the induced map on $$\pi _k$$ is also a bijection, although it is more standard to call the latter a k-equivalence.
## References
1. Attali, D., Lieutier, A., Salinas, D.: Vietoris–Rips complexes also provide topologically correct reconstructions of sampled shapes. Comput. Geom. 46(4), 448–465 (2013)
2. Björner, A.: Topological methods. In: Graham, R., Grötschel, M., Lovász, L. (eds.) Handbook of Combinatorics, vol. 2, pp. 1819–1872. Elsevier, Amsterdam (1995)
3. Chambers, E.W., de Silva, V., Erickson, J., Ghrist, R.: Vietoris–Rips complexes of planar point sets. Discrete Comput. Geom. 44(1), 75–90 (2010)
4. Chazal, F., de Silva, V., Oudot, S.: Persistence stability for geometric complexes. Geom. Dedicata 173, 193–214 (2014)
5. Deza, M., Dutour, M., Shtogrin, M.: On simplicial and cubical complexes with short links. Isr. J. Math. 144(1), 109–124 (2004)
6. Dranišnikov, A.N., Repovš, D.: Embedding up to homotopy type in Euclidean space. Bull. Aust. Math. Soc. 47(1), 145–148 (1993)
7. Hausmann, J.-C.: On the Vietoris–Rips complexes and a cohomology theory for metric spaces. In: Quinn, W. (ed.) Prospects in Topology. Annals of Mathematics Studies, vol. 138, pp. 175–188. Princeton University Press, Princeton (1995)
8. Kozlov, D.N.: Combinatorial Algebraic Topology. Algorithms and Computation in Mathematics, vol. 21. Springer, Berlin (2008)
9. Latschev, J.: Vietoris–Rips complexes of metric spaces near a closed Riemannian manifold. Arch. Math. 77(6), 522–528 (2001)
10. tom Dieck, T.: Algebraic Topology. EMS Textbooks in Mathematics. European Mathematical Society, Zürich (2008)
11. Vietoris, L.: Über den höheren Zusammenhang kompakter Räume und eine Klasse von zusammenhangstreuen Abbildungen. Math. Ann. 97(1), 454–472 (1927)
## Acknowledgements
We thank Jesper M. Møller for helpful discussions and for suggesting the collaboration of the first and third author. We also thank the referees for their suggestions. Some of this research was performed while the second author visited the University of Copenhagen. The second author is grateful for the hospitality of the Department of Mathematical Sciences there. MA was supported by VILLUM FONDEN through the network for Experimental Mathematics in Number Theory, Operator Algebras, and Topology.
Editor in Charge: Kenneth Clarkson
Adamaszek, M., Frick, F. & Vakili, A. On Homotopy Types of Euclidean Rips Complexes. Discrete Comput Geom 58, 526–542 (2017). https://doi.org/10.1007/s00454-017-9916-5
### Keywords
• Vietoris–Rips complex
https://tex.stackexchange.com/questions/476618/how-align-left-dedicatory | # How align left dedicatory
my code is
\documentclass[a4paper,12pt]{article}
\usepackage[paper=a4paper,left=30mm,right=20mm,top=25mm,bottom=30mm]{geometry}
\newenvironment{dedication}
{\clearpage % we want a new page
\thispagestyle{empty}% no header and footer
\vspace*{\stretch{1}}% some space at the top
\itshape % the text is in italics
\raggedleft % flush to the right margin
}
{\par % end the paragraph
\vspace{\stretch{3}} % space at bottom is three times that at the top
\clearpage % finish off the page
}
\begin{document}
\begin{dedication}
Dedicated to google and wikipedia
\end{dedication}
\end{document}
this is the result:
• You have specified \raggedleft which is the opposite of what you want. Try \raggedright. – barbara beeton Feb 25 '19 at 15:51
• @barbarabeeton I want it to be placed on the right side of the page, and the content I want to be aligned to the left – x-rw Feb 25 '19 at 15:57
• To get a uniform indentation on the left, you can specify \leftskip=<dimen>\parindent=0pt where <dimen> is the amount of space you want on the left (e.g., 2cm). This is plain TeX notation, not LaTeX, but it should work. – barbara beeton Feb 25 '19 at 16:02
• @barbarabeeton the example i put in the figure in red letters – x-rw Feb 25 '19 at 16:04
• Yes, that's what I gave the code for. It should be replace the \raggedleft in your code. – barbara beeton Feb 25 '19 at 16:13
The code you post specifies \raggedleft, which is the opposite of what you want. \raggedright is what you should be using.
You also want a uniform indentation on the left. Replace the instruction \raggedleft in your code by the following:
\leftskip=2cm
\raggedright
\parindent=0pt
Replace the 2cm in this code by the width of the indentation that you want.
This code is in "plain TeX" style, not LaTeX, but it should work with no problem, although some LaTeX users would prefer a LaTeX-specific formulation.
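Putting it together (untested, per the note below), the environment from the question would read:

\newenvironment{dedication}
{\clearpage % we want a new page
\thispagestyle{empty}% no header and footer
\vspace*{\stretch{1}}% some space at the top
\itshape % the text is in italics
\leftskip=2cm % uniform indentation on the left; adjust to taste
\raggedright % flush left at the indentation, ragged on the right
\parindent=0pt % no paragraph indentation
}
{\par % end the paragraph
\vspace{\stretch{3}} % space at bottom is three times that at the top
\clearpage % finish off the page
}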
The reason I was trying to answer in comments is that I don't currently have the ability to test; I don't like to provide untested answers.
http://solidmechanics.org/Text/Chapter3_3/Chapter3_3.php | 3.3 Hypoelasticity – elastic materials with a nonlinear stress-strain relation under small deformation
Hypoelasticity is used to model materials that exhibit nonlinear, but reversible, stress strain behavior even at small strains. Its most common application is in the so-called `deformation theory of plasticity,’ which is a crude approximation of the behavior of metals loaded beyond the elastic limit.
A hypoelastic material has the following properties
The solid has a preferred shape
The specimen deforms reversibly: if you remove the loads, the solid returns to its original shape.
The strain in the specimen depends only on the stress applied to it – it doesn't depend on the rate of loading, or the history of loading.
The stress is a nonlinear function of strain, even when the strains are small, as shown in the picture above. Because the strains are small, this is true whatever stress measure we adopt (Cauchy stress or nominal stress), and is true whatever strain measure we adopt (Lagrange strain or infinitesimal strain).
We will assume here that the material is isotropic (i.e. the response of a material is independent of its orientation with respect to the loading direction). In principle, it would be possible to develop anisotropic hypoelastic models, but this is rarely done.
The stress strain law is constructed as follows:
Strains and rotations are assumed to be small. Consequently, deformation is characterized using the infinitesimal strain tensor ${\epsilon }_{ij}$ defined in Section 2.1.7. In addition, all stress measures are taken to be approximately equal. We can use the Cauchy stress ${\sigma }_{ij}$ as the stress measure.
When we develop constitutive equations for nonlinear elastic materials, it is usually best to find an equation for the strain energy density of the material as a function of the strain, instead of trying to write down stress-strain laws directly. This has several advantages: (i) we can work with a scalar function; and (ii) the existence of a strain energy density guarantees that deformations of the material are perfectly reversible.
If the material is isotropic, the strain energy density can only be a function of strain measures that do not depend on the direction of loading with respect to the material. One can show that this means that the strain energy can only be a function of invariants of the strain tensor – that is to say, combinations of strain components that have the same value in any basis (see Appendix B). The strain tensor always has three independent invariants: these could be the three principal strains, for example. In practice it is usually more convenient to use the three fundamental scalar invariants:
$I_1=\epsilon_{kk} \qquad I_2=\frac{1}{2}\left(\epsilon_{ij}\epsilon_{ij}-\epsilon_{kk}\epsilon_{pp}/3\right) \qquad I_3=\det(\epsilon)=\frac{1}{6}\in_{ijk}\in_{lmn}\epsilon_{li}\epsilon_{mj}\epsilon_{nk}$
Here, $I_1$ is a measure of the volume change associated with the strain; $I_2$ is a measure of the shearing caused by the strain, and I can't think of a good physical interpretation for $I_3$. Fortunately, it doesn't often appear in constitutive equations.
Strain energy density:
In principle, the strain energy density could be any sensible function $U(I_1,I_2,I_3)$. In most practical applications, nonlinear behavior is only observed when the material is subjected to shear deformation (characterized by $I_2$), while stress varies linearly with volume changes (characterized by $I_1$). This behavior can be characterized by a strain energy density
$U=\frac{1}{6}KI_1^2+\frac{2n\sigma_0\epsilon_0}{n+1}\left(\frac{I_2}{\epsilon_0^2}\right)^{(n+1)/2n}$
where $K,\sigma_0,\epsilon_0,n$ are material properties (see below for a physical interpretation).
Stress-strain behavior
For this strain energy density function, the stress follows as
$\sigma_{ij}=\frac{\partial U}{\partial \epsilon_{ij}}=\frac{K}{3}\epsilon_{kk}\delta_{ij}+\sigma_0\left(\frac{I_2}{\epsilon_0^2}\right)^{(1-n)/2n}\left(\frac{\epsilon_{ij}-\epsilon_{kk}\delta_{ij}/3}{\epsilon_0}\right)$
The strain can also be calculated in terms of stress
$\epsilon_{ij}=\frac{1}{3K}\sigma_{kk}\delta_{ij}+\epsilon_0\left(\frac{J_2}{\sigma_0^2}\right)^{(n-1)/2}\left(\frac{\sigma_{ij}-\sigma_{kk}\delta_{ij}/3}{\sigma_0}\right)$
where $J_2=\left(\sigma_{ij}\sigma_{ij}-\sigma_{kk}\sigma_{pp}/3\right)/2$ is the second invariant of the stress tensor.
To interpret these results, note that
If the solid is subjected to uniaxial tension (with stress $\sigma_{11}=\sigma$ and all other stress components zero), the nonzero strain components are
$\epsilon_{11}=\frac{\sigma}{3K}+\frac{2}{\sqrt{3}}\epsilon_0\left(\frac{\sigma}{\sqrt{3}\sigma_0}\right)^n \qquad \epsilon_{22}=\epsilon_{33}=\frac{\sigma}{3K}-\frac{1}{\sqrt{3}}\epsilon_0\left(\frac{\sigma}{\sqrt{3}\sigma_0}\right)^n$
If the solid is subjected to hydrostatic stress (with $\sigma_{11}=\sigma_{22}=\sigma_{33}=\sigma$ and all other stress components zero), the nonzero strain components are
$\epsilon_{11}=\epsilon_{22}=\epsilon_{33}=\frac{\sigma}{K}$
If the solid is subjected to pure shear stress (with $\sigma_{12}=\sigma_{21}=\tau$ and all other stress components zero), the nonzero strains are
$\epsilon_{12}=\epsilon_{21}=\epsilon_0\left(\frac{\tau}{\sigma_0}\right)^n$
Thus, the solid responds linearly to pressure loading, with a bulk modulus K. The relationship between shear stress and shear strain is a power law, with exponent n.
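As an illustrative numerical sketch (not part of the original text; the function name and material parameters are arbitrary placeholders), the stress-strain relation above can be evaluated directly:
import numpy as np

def stress_from_strain(eps, K, sigma0, eps0, n):
    # sigma_ij = (K/3) eps_kk delta_ij
    #          + sigma0 (I2/eps0^2)^((1-n)/(2n)) (eps_ij - eps_kk delta_ij/3)/eps0
    # eps is a 3x3 infinitesimal strain tensor; assumes I2 > 0.
    eps = np.asarray(eps, dtype=float)
    ekk = np.trace(eps)
    dev = eps - ekk * np.eye(3) / 3.0                 # deviatoric strain
    I2 = 0.5 * (np.sum(eps * eps) - ekk ** 2 / 3.0)   # second strain invariant
    vol = (K / 3.0) * ekk * np.eye(3)                 # linear volumetric part
    shear = sigma0 * (I2 / eps0 ** 2) ** ((1.0 - n) / (2.0 * n)) * dev / eps0
    return vol + shear

# Pure shear check: inverting eps_12 = eps0 (tau/sigma0)^n gives
# sigma_12 = sigma0 (gamma/eps0)^(1/n), which the function reproduces.
gamma = 0.002
eps = np.array([[0.0, gamma, 0.0], [gamma, 0.0, 0.0], [0.0, 0.0, 0.0]])
s = stress_from_strain(eps, K=100e9, sigma0=300e6, eps0=0.001, n=5.0)
print(s[0, 1], 300e6 * (gamma / 0.001) ** (1.0 / 5.0))  # the two agree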
This is just an example of a hypoelastic stress-strain law – many other forms could be used.
http://export.arxiv.org/abs/1803.02138 | nlin.CD
# Title: Transition from homogeneous to inhomogeneous limit cycles: Effect of local filtering in coupled oscillators
Abstract: We report an interesting symmetry-breaking transition in coupled identical oscillators, namely the continuous transition from homogeneous to inhomogeneous limit cycle oscillations. The observed transition is the oscillatory analog of the Turing-type symmetry-breaking transition from amplitude death (i.e., stable homogeneous steady state) to oscillation death (i.e., stable inhomogeneous steady state). This novel transition occurs in the parametric zone of occurrence of rhythmogenesis and oscillation death as a consequence of the presence of local filtering in the coupling path. We consider paradigmatic oscillators, such as Stuart-Landau and van der Pol oscillators under mean-field coupling with low-pass or all-pass filtered self-feedback and through a rigorous bifurcation analysis we explore the genesis of this transition. Further, we experimentally demonstrate the observed transition, which establishes its robustness in the presence of parameter fluctuations and noise.
Comments: 10 pages, 8 Figs
Subjects: Chaotic Dynamics (nlin.CD); Adaptation and Self-Organizing Systems (nlin.AO); Applied Physics (physics.app-ph)
Cite as: arXiv:1803.02138 [nlin.CD] (or arXiv:1803.02138v1 [nlin.CD] for this version)
## Submission history
From: Tanmoy Banerjee
[v1] Tue, 6 Mar 2018 12:19:52 GMT (2048kb)
https://stats.stackexchange.com/questions/449791/can-non-linearly-separable-data-always-be-made-linearly-separable | # Can non-linearly separable data always be made linearly separable?
A data set that is linearly separable is a precondition for algorithms like the perceptron to converge. It's well-known that we can project low-dimensional data to a higher dimension using kernel methods in order to make it linearly separable.
But is it always true that there is some transformation to convert every non-linearly separable data set into a linearly separable one? If not, what would be an example of such a data set where this is impossible?
For a given, finite data set it should always be possible—just let each data point have its own dimension! So, maybe a more interesting question would be for a stochastic model, generating a data set, such that for $$n$$ realizations, linear separability would require a dimension growing linearly with $$n$$?
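To make the "own dimension" observation concrete, here is a small sketch (mine, not from the thread): embed the i-th point as the i-th standard basis vector; then the weight vector whose entries are the labels classifies every point with margin 1, so any labeling is linearly separable and the perceptron converges.
import numpy as np

n = 10
rng = np.random.default_rng(0)
labels = rng.choice([-1, 1], size=n)  # an arbitrary dichotomy of n points

X = np.eye(n)               # point i gets its own dimension: x_i = e_i
w = labels.astype(float)    # choose w_i = y_i

margins = labels * (X @ w)  # y_i * <w, x_i> = y_i * y_i = 1 for every i
print(np.all(margins > 0))  # True: the data are linearly separable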
https://brilliant.org/discussions/thread/mechanics-4/
# Mechanics
Find the law of force to the pole when the path is the cardioid $$r=a(1- \cos \theta)$$, and prove that if $$F$$ were the force at the apse, and $$V$$ the velocity, then $$3V^2=4aF$$.
Note by Syed Subhan Siraj
1 year, 6 months ago
Sort by:
First we assume that the motion is under a central force. Applying logarithmic differentiation to $r=a\left(1-\cos\theta\right)$:
$\frac{1}{r}\frac{dr}{d\theta}=\frac{a\sin\theta}{a\left(1-\cos\theta\right)}=\frac{2\sin\frac{\theta}{2}\cos\frac{\theta}{2}}{2\sin^2\frac{\theta}{2}}=\cot\frac{\theta}{2}=\cot\phi \Rightarrow \phi=\frac{\theta}{2}$
where $$\phi$$ is the polar-tangential angle in pedal coordinates. Now we have
$p=r\sin\phi=r\sin\frac{\theta}{2}=\frac{r}{\sqrt{2}}\sqrt{2\sin^2\frac{\theta}{2}}=\frac{r}{\sqrt{2}}\sqrt{1-\cos\theta}=\frac{r}{\sqrt{2}}\sqrt{\frac{r}{a}}=r\sqrt{\frac{r}{2a}} \Rightarrow 2ap^2=r^3$
Differentiating both sides w.r.t. $$r$$:
$4ap\frac{dp}{dr}=3r^2 \Rightarrow \frac{dp}{dr}=\frac{3r^2}{4ap} \Rightarrow F=\frac{h^2}{p^3}\frac{dp}{dr}=\frac{3h^2r^2}{4ap^4}=3a\frac{h^2}{r^4}$
Thus the force is inversely proportional to the fourth power of the distance. Now, at an apse $\frac{dr}{d\theta}=0 \Rightarrow \sin\theta=0 \Rightarrow \theta=0$ or $\pi$. But $\theta=0 \Rightarrow r=0$, which is a cusp of the cardioid. Thus
$\theta=\pi \Rightarrow r=2a=p \Rightarrow h=vp=2av \Rightarrow F=3a(2av)^2\left(\frac{1}{2a}\right)^4=\frac{3v^2}{4a} \Rightarrow 4aF=3v^2$
[Q.E.D.] · 1 year, 3 months ago
thx · 1 year, 2 months ago
You are welcome..:-) · 1 year, 2 months ago
Sir, are you a teacher? · 1 year, 2 months ago
No no, I'm just a student.. I study in college.. · 1 year, 2 months ago
https://alpha.physionet.org/content/challenge-2003/1.0.0/code/
Distinguishing Ischemic from Non-Ischemic ST Changes - The PhysioNet Computing in Cardiology Challenge 2003
Published: March 6, 2003. Version: 1.0.0
Please include the standard citation for PhysioNet:
Goldberger AL, Amaral LAN, Glass L, Hausdorff JM, Ivanov PCh, Mark RG, Mietus JE, Moody GB, Peng C-K, Stanley HE. PhysioBank, PhysioToolkit, and PhysioNet: Components of a New Research Resource for Complex Physiologic Signals (2003). Circulation. 101(23):e215-e220.
Introduction
For the fourth annual PhysioNet/Computers in Cardiology Challenge, we propose a provocative question of considerable clinical interest:
Is it possible to tell the difference between transient ST changes in the ECG that are due to myocardial ischemia, and those that are not?
For many years, a simple answer ("no") was considered to be the final word on this question. Myocardial ischemia results from insufficient oxygen delivery to the myocardium. To diagnose myocardial ischemia definitively, it is necessary to document that blood flow, blood oxygen saturation, or both have been compromised to an extent that the oxygen demands of the myocardium are not satisfied. These diagnostic criteria are typically established by imaging the coronary arteries. Since the ECG does not contain direct information about blood flow or oxygen saturation, it cannot be used to diagnose ischemia.
It may be possible, however, to establish inferential associations between specific features of the ECG and myocardial ischemia. One such association, between transient ischemia and changes in the ST segment of the ECG, is very widely known, and is understood to be highly sensitive, but not specific. It has long been known that repolarization of ischemic myocardial regions is abnormal, that these abnormalities are visible in the ST segment, and that they can be quantified by measuring the deviation of certain portions of the ST segment from baseline measurements. It is also known that deviations in these ST segment measurements can result from a wide variety of other causes, including changes in heart rate, conduction pattern, position of the subject, and noise in the ECG. As a result, observations of transient ST changes are considered suggestive of ischemia but are not sufficient for a definitive diagnosis, absent conclusive evidence from imaging studies.
Even in subjects who are known to have myocardial ischemia, ST changes are not considered a basis for definitive diagnosis of individual episodes of ischemia. In a subject with an old myocardial infarction, for example, the infarct may result in an ST segment with a persistent abnormal pattern (in the frame of reference of the heart). This fixed pattern appears to change with the subject's body position (upright, supine, etc.) because of movement of the ECG elecrodes relative to the heart. Thus many of those subjects who are most likely to experience ischemia are also among those most likely to have non-ischemic ST changes.
Therapeutic intervention to reduce or eliminate transient ischemic episodes can make a significant difference in quality of life for affected subjects, and may reduce mortality and morbidity in this population. Assessment of the effectiveness of therapy is substantially hindered by the lack of a reliable way of identifying ischemic episodes during activities of daily living, in which imaging studies are not possible. If it were possible to distinguish between ischemic and non-ischemic ST changes in ambulatory ECG recordings made during subjects' normal activities, the benefits would be immediate and substantial, in terms of a reduction in the time needed to determine and validate effective therapies, hence in the risk and pain experienced by the affected subjects.
This year's challenge topic encourages participants to develop novel approaches to analysis of transient ST changes using the recently-completed Long-Term ST Database, a meticulously annotated collection of 86 recordings of 2- and 3-lead long-term (20-24 hour) ECGs. Each ST change that meets criteria of clinical significance has been carefully studied by a team of expert annotators, who have drawn upon all available evidence to determine which of these events are consistent with a diagnosis of myocardial ischemia, and which are consistent with other causes. Half of these 86 recordings have been contributed to PhysioNet and are available to participants as a learning set. The remaining recordings form the test set.
Participants are challenged to design and implement algorithms that can closely mimic the decisions made by the expert annotators, classifying the ST changes (events) in the test set as ischemic or non-ischemic. The algorithms are not required to detect the events, but only to classify each given event as ischemic, non-ischemic, or indeterminate.
Organization of the Challenge
As noted above, the data used for this challenge come from the Long-Term ST Database. The learning set consists of the 43 records available from PhysioNet. Participants should train their algorithms using these records. The test set consists of the other 43 records.
To enter the challenge, participants will submit their classifiers by email to PhysioNet, where the entries will be compiled and used to classify the ST events in the test set. Each algorithm will receive a score determined by the number of correctly classified events, less the number of incorrectly classified events (those left unclassified will not affect the score). Scores will be returned to participants by email, and high scores will be posted on PhysioNet and updated throughout the challenge period. Participants may revise and resubmit their entries until the challenge deadline of noon GMT on Friday, 12 September 2003.
All participants are encouraged to submit an abstract to Computers in Cardiology 2003 describing their approach to the challenge. (When submitting your abstract, choose the topic "Computers in Cardiology Challenge".) Abstracts are due on Thursday, 1 May 2003 (note: this deadline has been extended to Thursday, 8 May 2003); details are available on the Computers in Cardiology web site. If your abstract is accepted, you will be expected to prepare a four-page manuscript (due on Tuesday, 23 September 2003) for publication in the conference proceedings, and you will have the opportunity to discuss your work at the conference. To be eligible for an award, you must submit an abstract and attend the conference.
The eligible participant whose algorithm receives the highest score will receive an award of US$1000, to be presented at Computers in Cardiology 2003 (in Thessaloniki, Greece, 21-24 September 2003). A selection of the classifiers will be posted on PhysioNet following the conference.

Developing an entry

Use the learning set to develop criteria for classifying the ST events. We recommend that you begin by copying a set of input files for one record of the learning set into an empty local directory. The files that your program will be permitted to read are:

• the header (.hea) and signal (.dat) files, permitting access to the digitized ECG signals;
• the beat annotation (.atr) and ST measurement (.16a) files, providing QRS times of occurrence for each beat, and continuously updated ST-segment measurements based on 16-beat moving averages;
• the .stf file, containing the ST level, reference, and deviation for each input signal, at two-second intervals throughout the recording;
• the .klt file (decompressed from the .klt.zip files available on-line), containing time series of ST and QRS principal components.

In addition, your program will need to have a copy of the .epi file for the record. These text files have been prepared for this Challenge from the .stb reference annotation files of the Long-Term ST Database; they contain the times of significant ST changes, but not the classifications of those events. Your program is not expected to detect the events, but to classify them, so this file is available to substitute for an ST change detector.

Your program may use any or all of these files as a basis for classifying the ST events. In principle, all of the other files are derivable from the signal (.dat) file, but you are not expected to do so! The other files are provided for use as shortcuts to a solution of the challenge problem; in a clinical application, it would be necessary to integrate the code needed to detect and classify the QRS complexes, measure the ST deviations, and detect the ST events.

For example, to work with record s30701 of the learning set, download the files listed above for that record. The last of these files, s30701.epi, is derived from s30701.stb; it contains:

1 40125 2 ?
2 40129 1 ?
3 64361 0 ?
4 64361 1 ?
5 76639 2 ?
6 76647 1 ?
7 77171 2 ?
8 77551 2 ?
9 79967 1 ?
10 79975 2 ?

Each line contains information about one event. From left to right, the columns contain an event ID number, the time of the event (the elapsed time from the beginning of the record, in seconds), the signal number (0, 1, or 2) of the affected ECG signal, and the classification of the event, where '?' means 'indeterminate'. At the end of each run, your program must have copied the .epi file into a .epo file, replacing the '?' placeholders with its classifications. Use 'I' to mark ischemic and 'N' to mark non-ischemic events; you may leave indeterminate events marked with '?'. For example, the correct classifications for record s30701 are:

1 40125 2 N
2 40129 1 N
3 64361 0 N
4 64361 1 N
5 76639 2 I
6 76647 1 I
7 77171 2 I
8 77551 2 I
9 79967 1 I
10 79975 2 I

Thus, the first four ST changes in this example are non-ischemic (in s30701.sta, the expert annotators have marked them as due to axis shift), and the remaining six are consistent with ischemia.

Scoring

Your program's score will be determined by comparing the output .epo files with a set of reference .epr files. The .epr files are identical to the .epi files, except that the classification of each ST event, based on the .stb annotations, is included in place of the '?' markers. A point is added to your score for each match (I/I or N/N), and a point is deducted for each mismatch (I/N or N/I). ST events left unclassified ('?') do not affect your score. (A set of .epr files for the learning set is available for use while you are developing your classifier. The .epr files for the test set are not available; don't ask!)

The number of events per record varies considerably, from fewer than ten to several hundred. To avoid giving undue weight in the score to the handful of records that have a majority of the events, the .epr files for the test set contain no more than 20 events each (which have been chosen at random from all of the events in those records with more than 20 annotated events). Only these events will be used as the basis for scoring the entries; the others will not be counted. The same set of .epr files will be used to score all entries.

How to Enter

1. Begin by downloading stclass.c (to be used unmodified) and analyze.c (to be used as a template for your entry).
2. You will need to write functions in standard (ANSI/ISO) C to replace the initialize, analyze, and finalize functions in analyze.c (a minimal skeleton is sketched after this list):
• initialize is called once, before analyze is called the first time, and before any of the input files have been opened. Use this function to set up any variables needed by your classifier.
• analyze is called once per ST event. Its inputs are an event ID number, which starts at 1 for the first ST event in each record and is incremented by 1 for each subsequent event; the time of the event (elapsed time in seconds); and the signal number (0, 1, or 2) of the affected signal. In each run, all of the ST changes identified by the expert annotators within a single record will be presented to the analyze function in time order.
• finalize is called once, after all of the input files have been closed. Your algorithm's classifications are recorded immediately after finalize exits.
3. Use label (defined in stclass.c) to record your classifications.
• label accepts two input arguments. The first is an event ID number, and the second is the label you wish to assign to that event ('I', 'N', or '?'). You can invoke label to mark any of the events at any time (so, for example, you can invoke label from your analyze function if you wish to label the events one at a time, or you can invoke label once per event from your finalize function after accumulating information about all of the events). Your algorithm can relabel any event by invoking label a second (or third, ...) time with the same event ID. Any events that you do not label are marked as '?'.
4. You may, if necessary:
• define additional functions
• define global variables
• include other ANSI/ISO C standard header (.h) files
• allocate memory as needed using malloc or similar functions
• invoke other functions from the ANSI/ISO C standard library and math library, and from the WFDB library
• write to the standard output or the standard error output (for debugging purposes)
• create temporary files in the current directory
If you create temporary files, do so within the current directory only, and use file names beginning with temp. Any files created will be removed between runs (you cannot save information from one run to use in another).
5. You may not:
• modify main or any other part of stclass.c
• use chdir or any other means to change the current directory
• invoke fork, system, or any of the exec family of functions to start another program or another process
• incorporate code or data that cannot be made freely available after the conclusion of the Challenge
6. All code will be reviewed before being compiled or run. Please keep your code neat. If we can't figure out what your program does, we won't run it!
7. All code must compile cleanly using:
gcc -Wall stclass.c -lm -lwfdb
There must be no errors or warnings of any kind. See the WFDB Software Package for information about the WFDB library. If your program does not make use of the .dat, .atr or .16a files, it can be compiled without the WFDB library using:
gcc -Wall -DNOWFDB stclass.c -lm
8. Your program must run to completion within a reasonable time. A reasonable time is 5 minutes or less for a 24-hour record running on a 1 GHz Athlon under Linux; we will not disqualify programs that slightly exceed this limit.
9. Test your entry before submitting it. Don't forget to include your name, affiliation, and email address in the comment block at the top of analyze.c. Once you are ready, send a copy of your version of analyze.c (source only; do not send binaries) via email to [email protected] with a subject line of analyze.c. Please send analyze.c as plain text, not as HTML or as a word-processor formatted attachment.
10. You will receive an email confirmation of your entry once it has been reviewed. If an entry fails to meet any of the requirements for a valid entry, the email will indicate in general terms the nature of the problem (e.g., compilation error), but you are responsible for debugging your entry. Each valid entry will be assigned an entry number, which will be indicated in the email confirmation.

Once your entry has been given a number, we will run it on the test set and you will receive a score by return email. The top scores will be posted on PhysioNet and will be updated as new entries arrive. You may revise and resubmit your entry if you wish; note, however, that the challenge organizers will give priority to new participants, so that there may be a delay in receiving scores for revised entries. We will continue to accept entries until noon GMT on Friday, 12 September 2003. All valid entries submitted before this deadline will be scored. At Computers in Cardiology 2003 (in Thessaloniki, Greece, 21-24 September 2003), a prize of US$1000 will be awarded to the top-scoring eligible participant. Immediately following the conference, a selection of the programs entered will be posted with full credit to their authors, and they will be made freely available under the GPL (or another open source license of the author's choice).
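As a rough illustration (not an actual entry), here is a minimal sketch of an analyze.c replacement following the rules above. The argument types shown are assumptions on my part, since the authoritative prototypes are in stclass.c, and the classification rule is a deliberately useless placeholder:

```c
#include <stdio.h>

/* Defined in stclass.c; this prototype is an assumption. */
void label(int event_id, char classification);

static int events_seen;   /* per-record event counter */

void initialize(void)
{
    /* called once per record, before any input files are opened */
    events_seen = 0;
}

void analyze(int event_id, long time_sec, int signal_num)
{
    /* called once per ST event, in time order within the record */
    (void) time_sec;      /* unused by this placeholder */
    (void) signal_num;
    events_seen++;
    /* deliberately trivial rule: leave every event indeterminate */
    label(event_id, '?');
}

void finalize(void)
{
    /* called after the input files are closed; the labels are
       recorded immediately after this function returns */
    fprintf(stderr, "saw %d events\n", events_seen);
}
```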
Members and affiliates of our research groups at MIT, Boston University, Harvard Medical School, Beth Israel Deaconess Medical Center, and McGill University are not eligible for awards, although all are welcome to participate.
To qualify for an award, a participant must do all of the following:
1. Submit an abstract describing his or her classifier to Computers in Cardiology, no later than Thursday, 1 May 2003 (note: this deadline has been extended to Thursday, 8 May 2003).
2. Submit a valid entry no later than noon GMT on Friday, 12 September 2003.
3. Attend Computers in Cardiology 2003, 21-24 September 2003, in Thessaloniki, Greece.
Important dates
All deadlines are at noon GMT unless otherwise indicated. Late submissions will not be accepted.
Thursday, 8 May 2003
Deadline for submission of abstracts for Computers in Cardiology 2003.
Friday, 12 September 2003
Deadline for submission of entries to PhysioNet.
Sunday-Wednesday, 21-24 September 2003
Computers in Cardiology, Thessaloniki, Greece.
Where are the .epi and .epr files?
These files are not part of the Long-Term ST Database; they were created specifically for this challenge, and they can be found in the Files section below (in the "epi" and "epr" folders).
Are the classifiers allowed to read the clinical notes included in the .hea files?
Parsing those notes to understand how to classify individual episodes would be an impressive accomplishment ... but that is not the intent of the challenge! The .hea files made available to submitted classifiers for the test set will be stripped of these notes.
Why don't you have a challenge about ...?
Each year, we receive many suggestions for challenge topics. We encourage you to contact us with further suggestions.
Challenge Results
A team of researchers from the University of Newcastle upon Tyne and Freeman Hospital won the 2003 PhysioNet Computers in Cardiology Challenge for their work on computer detection of ischaemia from the electrocardiogram (ECG).
Ischaemia, when the heart muscle is starved of oxygen, is clinically very important and can indicate heart disease, such as coronary artery disease. Automated computer detection of the condition in ambulatory ECG recordings is very difficult because many of the daily activities undertaken by patients give characteristics on the ECG similar to those of ischaemia.
The Challenge, organised by Massachusetts Institute of Technology, PhysioNet, and Computers in Cardiology, is an annual event in which researchers from around the world compete to solve a specific research question.
The team from Newcastle (Philip Langley, Emma Bowers, Joanne Wild, Michael Drinnan, John Allen, Andrew Sims, Nigel Brown and Alan Murray) are members of the Cardiovascular Physics and Engineering Research Group from the Department of Medical Physics. The paper was presented by Philip and certificates and prize money were handed to representatives of the team at the Computers in Cardiology conference held in Thessaloniki, Greece.
Papers
These papers were presented at Computers in Cardiology 2003. Please cite this publication when referencing any of these papers. Links below are to copies of these papers on the CinC web site.
An Algorithm to Distinguish Ischaemic and Non-Ischaemic ST Changes in the Holter ECG
P Langley, EJ Bowers, J Wild, MJ Drinnan, J Allen, AJ Sims, N Brown, A Murray
A Reconstructed Phase Space Approach for Distinguishing Ischemic from Non-Ischemic ST Changes using Holter ECG Data
MW Zimmerman, RJ Povinelli, MT Johnson, KM Ropella
Access
Access Policy:
Anyone can access the files, as long as they conform to the terms of the specified license.
Files
Access the files
gsutil -m cp -r gs://challenge-2003-1.0.0.physionet.org DESTINATION
wget -r -N -c -np https://physionet.org/files/challenge-2003/1.0.0/ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2412615418434143, "perplexity": 3343.0556522537004}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146665.7/warc/CC-MAIN-20200227063824-20200227093824-00158.warc.gz"} |
https://www.computing.net/answers/security/windows-security-alert-in-taskbar/29524.html | Dell Dell inspiron 640m dual core notebo...
March 31, 2010 at 15:33:30
Specs: Windows XP
#1
June 2, 2010 at 10:42:43
I have the same problem; if you find some solution tell me please.... my email is [email protected] see you soon
#2
June 22, 2010 at 14:12:29
Wow....I have that exact same problem. Have you gotten a fix for it yet? It's really irritating. thanks
#3
June 22, 2010 at 14:25:06
Shame on you Sarah, those aren't good sites to be on... LOL. Try: 1- Trojan Remover 2- Hitman Pro. Run both till they are clean and then you can uninstall them.
https://www.americanantigravity.com/mike-windells-plasma-beam-experiments | In 1990 Mike Windell and Warren York began experimenting with modulating resonant plasma beams, and soon discovered that by placing a target in or near the beam they could cause structural lattice changes in various crystalline materials. They were forced to temporarily suspend their investigation into this new form of crystal energy due to lack of funding, but were recently able to resume their research and found that they could harden or soften materials much like the Hutchison Effect does by modulating the phase angle and duty cycle of the resonant beam.
“We also found that we could increase the performance of semiconductors by about 30%. We believe that many of the anomalous effects were due to time dilation. We have observed the apparent conversion of electrical energy in excess of what can be explained by conventional theory. It was also found that many other strange and interesting effects could be produced. We are now working on making sure that all of the observed phenomena are 100% reproducible.” — Mike Windell
https://dsp.stackexchange.com/questions/67210/minimum-value-of-g-amplitude-that-guarantees-an-error-probability-of-at-least | # Minimum value of G (Amplitude) that guarantees an error probability of at least $10^{-2}$ in a 32-PAM transmission system
pretty much new here.
This question comes from an online course quiz which I have already completed, but I can't seem to get a good sleep over it, just because I can't figure it out.
Below is the question in the Image.
Given the range of the sample distribution and the error probability, from what I know,
$$P_{err} = erfc(G/\sigma)$$
IMO, $$\sigma$$ is assumed to be the error energy, which we were told is given by
$$\sigma = \Delta^2 / 12$$
where $$\Delta = (B - A) / 2^R$$
$$B - A = (100 - (-100)) = 200$$
and $$R$$ is the Range of the various intervals which I at one point chose to be 32 and at another point chose to be 5.
From some programmed online calculator, I got the inverse error function of $$P_{err}$$ given by $$erfc^{-1}(0.01)$$ (which corresponds to $$(G/\sigma)$$) to be 1.821, but this is where it all goes bad, as I keep getting wrong values for $$G$$ which I presume is caused by the wrong results from the computation of $$\sigma$$.
I know I might be doing it all wrong, and that's why I am here.
• One would hope that the question would ask for the minimum spacing that guarantees an error probability of at most $10^{-2}$ instead of at least $10^{-2}$ !! – Dilip Sarwate May 7 '20 at 2:37
I am not sure if you can use $$P_{err}=\text{erfc}(G/\sigma)$$ because the noise is not Gaussian distributed. Here is my take on it.
Assuming uniform probability of transmission for all 32 symbols $$x_i$$, the received signal is $$y=x_i+n$$, so given that $$x_i$$ was transmitted, $$y$$ is also uniformly distributed in the interval $$[-100+x_i,100+x_i]$$. Suppose say the transmitted symbol was $$3G$$. The range of $$y$$ is $$[-100+3G,100+3G]$$. If $$G \ge 100$$, there would be no issue even if noise occurs. You would always detect the correct symbol if you use appropriate boundaries ($$|y-x_i| \le 100$$). Suppose say $$G \lt 100$$ so these boundaries overlap. If $$G=75$$, what would happen if we receive value $$y=150$$? The transmitted symbol could have been either $$G$$ or $$3G$$. So we can choose either $$G$$ or $$3G$$ with probability of $$0.5$$. Similarly, on the other side if $$y \gt 275$$, you can choose $$5G$$ as the transmitted symbol with probability $$0.5$$. So the correct decision will be taken when $$175 \le y \le 275$$, so $$P_{err,x_i=3G}=0.5$$.
So if you want $$P_e=0.01$$, for the symbols having 2 neighbors, you can distribute the error probability evenly on both sides ($$0.01$$ split evenly, with probability $$0.5$$ on each side). If your transmit symbol was $$G$$, the overlap of regions will be at $$G+100-0.005=99.995$$ which will be your $$2G$$. So $$G \ge 49.9975$$.
• Oh good point Jithin! (that it's not a Gausisan tail probability)- I see now in the fine print of the question that the distribution and B and A are all specified, I missed that...deleting my incorrect answer. – Dan Boschen May 5 '20 at 18:09
• Wow, stuffs like this were never mentioned in the lecture. I will sit with this tomorrow and absorb it all and then proceed to check if this is right. Thanks a lot for the help guys. – Dhavids May 6 '20 at 19:14
• @Dhavids I am curious to know which online course is this. Because I have hardly come across pure digital communication courses online with quiz and exams. – jithin May 8 '20 at 17:27
• @jithin This is a Coursera 8-week DSP course, you can access it here. It covers mostly the basics and there are weekly quizzes as well as Jupyter notebook assignments. It was a fun ride for someone like me who just wanted a taste of what DSP is all about. – Dhavids May 10 '20 at 21:14
• @jithin I Just plugged in both answers (49.99 and 50) and both were deemed incorrect. Although i can get a good sleep over it now, i still want to understand how is done. I will probably mail one of the instructors. Thanks a lot. – Dhavids May 10 '20 at 21:30
You need to distinguish between:
1. The error rate for the inner symbols - the error is half of the overlapped segments between the symbol and its closest neighbors (by symmetry we consider one side and multiply by 2). $$P_{e1} = \Pr(|n|>100-G) = 2\Pr(n>100-G) = 2\,\frac{100 - G}{200}$$
2. The error rate for the points $$31G$$ and $$-31G$$ - same as above but only for one segment: $$P_{e2} = \Pr(n>100-G) = \frac{100 -G}{200}$$
It remains to solve $$\frac{30}{32} P_{e1} +\frac{2}{32} P_{e2} = 0.01$$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 44, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9101340770721436, "perplexity": 422.3208124907948}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703515075.32/warc/CC-MAIN-20210118154332-20210118184332-00608.warc.gz"} |
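For a quick numeric check (my own sketch, assuming levels at odd multiples of $$G$$, uniform noise on $$[-100,100]$$, and nearest-level detection):

```python
import numpy as np

# Closed form for (30/32)*P_e1 + (2/32)*P_e2 = 0.01 with the formulas
# above: (100 - G) * 62 / (200 * 32) = 0.01
G = 100 - 0.01 * 32 * 200 / 62
print(G)  # ~98.97

# Monte Carlo cross-check: 32 levels at odd multiples of G, uniform
# noise on [-100, 100], nearest-level detection.
rng = np.random.default_rng(0)
levels = np.arange(-31, 32, 2) * G
tx = rng.integers(0, 32, size=1_000_000)
rx = levels[tx] + rng.uniform(-100.0, 100.0, size=tx.size)
det = np.clip(np.round((rx / G + 31) / 2), 0, 31).astype(int)
print((det != tx).mean())  # ~0.01
```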
http://math.stackexchange.com/questions/177575/show-pnn-px-nn-x-1n-alpha | # show $P(N>n)=P(X_{(n:n)}-X_{(1:n)}<\alpha)$ [duplicate]
Possible Duplicate:
Finding $E(N)$ in this question
Suppose $X_1,X_2,\ldots$ is a sequence of independent $U(0,1)$ random variables, and let $N=\min\{n>0 :X_{(n:n)}-X_{(1:n)}>\alpha\}$ for some fixed $0<\alpha<1$, where $X_{(1:n)}$ is the smallest order statistic and $X_{(n:n)}$ is the largest order statistic. How can one show that $P(N>n)=P(X_{(n:n)}-X_{(1:n)}<\alpha)$?
If $(Z_n)_{n\geqslant0}$ is a sequence of random variables such that $Z_n\leqslant Z_{n+1}$ for every $n\geqslant0$, then $N_a=\inf\{n\geqslant0\,;\,Z_n\gt a\}$ is such that, for every $n\geqslant0$, $$[N_a\gt n]=[Z_1\leqslant a,\ldots,Z_n\leqslant a]=[Z_n\leqslant a].$$ Here $Z_n=X_{(n:n)}-X_{(1:n)}$ is the range of the first $n$ observations, which is nondecreasing in $n$. Note: No probability here, this is an almost sure result (as probabilists like to say), that is, a deterministic result (as everybody else says).
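A quick Monte Carlo sketch (my own illustration) of the identity for uniform samples, with $Z_n$ the running range:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, n, trials, m = 0.5, 5, 200_000, 60  # P(N > m) is negligible here

x = rng.uniform(size=(trials, m))
ranges = np.maximum.accumulate(x, 1) - np.minimum.accumulate(x, 1)
N = (ranges <= alpha).sum(axis=1) + 1      # first n with Z_n > alpha

# both estimates agree, ~ F_range(0.5) for n = 5, i.e. 6/32 = 0.1875
print((N > n).mean(), (ranges[:, n - 1] < alpha).mean())
```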
https://tex.stackexchange.com/questions/430862/appending-data-to-bibliography-with-biblatex | # Appending Data to Bibliography with Biblatex
I am trying to add a custom 'field' to my bibtex data, and then call that. However, I am getting a very strange scenario: when I call the field mr, then it works; when I call it mrnumber, then it doesn't work. Below is a MWE.
\documentclass[]{article}
\usepackage{filecontents}
\begin{filecontents*}{ext-eprint.dbx}
\ProvidesFile{ext-eprint.dbx}[2016/09/11 extended stand-alone eprint fields]
\DeclareDatamodelFields[type=field,datatype=verbatim]{arxiv,mr}
\DeclareDatamodelEntryfields{arxiv,mr}
\end{filecontents*}
\begin{filecontents*}{\jobname.bib}
@article{AGHH:dynamic-cm,
archivePrefix = {arXiv},
eprinttype = {arxiv},
eprint = {1606.07639},
title = {Mixing Times of Random Walks on Dynamic Configuration Models},
date = {2016-06-24},
author = {Avena, Luca and G{\"u}lda{\c s}, Hakan and van der Hofstad, Remco and den Hollander, Frank},
options = {useprefix=true}
}
@article{FR:giant-mixing,
title = {The Evolution of the Mixing Rate of a Simple Random Walk on the Giant Component of a Random Graph},
volume = {33},
number = {1},
journaltitle = {Random Structures \& Algorithms},
urldate = {2018-03-22},
date = {2008-05-12},
pages = {68-86},
author = {Fountoulakis, Nikolaos and Reed, Bruce A.},
mr = {12}
}
@book{LPW:markov-mixing,
location = {{Providence, RI, USA}},
title = {Markov {{Chains}} and {{Mixing Times}}},
isbn = {978-1-4704-2962-1},
pagetotal = {xvi+447},
publisher = {{American Mathematical Society}},
date = {2017},
author = {Levin, David A. and Peres, Yuval and Wilmer, Elizabeth L.},
mr = {3726904},
}
\end{filecontents*}
\usepackage{csquotes}
\usepackage[doi=false,isbn=false,url=false,
backend=biber,
style=numeric,
datamodel=ext-eprint]{biblatex}
\usepackage{hyperref}
\makeatletter
\DeclareFieldFormat{arxiv}{%
---HOW DO I CUSTOMISE THIS??---}%\href{https://arxiv.org/abs/#1}}
\makeatother
\DeclareFieldFormat{mr}{%
{\href{http://www.ams.org/mathscinet-getitem?mr=MR#1}{MR#1}}}
\renewbibmacro*{eprint}{%
\printfield{arxiv}%
\newunit\newblock
\printfield{mr}%
\newunit\newblock
\iffieldundef{eprinttype}
{\printfield{eprint}}
{thisdoesappear\printfield[eprint:\strfield{eprinttype}]{eprint}}
}
\begin{document}
\nocite{*}
\printbibliography
\end{document}
This prints out the following.
However, if I replace all five references to mr with mrnumber, then I get the same output just without the MR3726904 appended. I got the majority of the structure from moewe's answer to BibTeX fields for DOI, MR, Zbl and arxiv?.
As you may have noticed in the MWE, I do \DeclareFieldFormat{arxiv}{..., but whatever I put in there does not affect the output: only the thisdoesappear text appears, and that sits inside an \iffieldundef statement. I do want to just use eprint for both arXiv and MR, because each item will have precisely one of these. I'd like to be able to customise the appearance of the arXiv part, as I have done with the MR part.
Remark. I know some of the other formatting isn't very nice, but don't worry about that: I've looked at that separately, and have just tried to put the minimal amount in for the example here.
• arxiv is an eprinttype, so DeclareFieldFormat will not really work as you are trying to do it. I'm not sure I get what you want to achieve. As far as I understood, you want mr to function as an eprinttype the same way as arxiv does. Is that it? – gusbrs May 9 '18 at 18:48
• I'd like to make "arXiv: 1606.07639" customisable, eg maybe write "available at arxiv.org/abs/1606.07369". I'd also like to be able to change the mr variable to mrnumber -- MathSciNet automatically outputs mrnumber, not mr, and so I don't want to have to go through all my references changing them from mrnumber to mr. Is that clearer? :) – Sam T May 9 '18 at 20:00
• One more thing to clear up. That answer you link to creates extra fields because the requirement was that a single paper belonged to more than one eprinttype. In your case, is it sufficient that each paper is either arXiv or MR (that is, not both)? – gusbrs May 9 '18 at 20:18
There are several things going on here. The most important thing is that biblatex distinguishes the explicitly defined arxiv and mrnumber field from the eprint field. The eprint field is special and its contents get special treatment.
1. In the MWE below mrnumber works for me. The code is just a result of replacing all mrs in your code with mrnumber.
2. If you want to change the output of arXiv-eprints, you don't modify the field format arxiv, you need to modify eprint:arxiv. You can see that eprint:arxiv is used by examining the false branch of \iffieldundef{eprinttype}: \printfield[eprint:\strfield{eprinttype}]{eprint}. This calls the field format eprint:<eprinttype>, so eprint:arxiv in our case. \DeclareFieldFormat{arxiv} works only if you use the arxiv field directly.
3. You could also use eprint for both arXiv and mrnumber, but then you would have to transform mrnumber to eprints. This could be done automatically with a sourcemap. The advantage of that approach is that you don't need a new .dbx file. The disadvantage is that the mrnumber occupies the eprint slot.
The code below shows three ways to deal with MR numbers and arXiv links
1. Write the identifier into the eprint field and the type (arxiv, mrnumber) into eprinttype manually. See entries AGHH:dynamic-cm and FR:giant-mixing.
2. Use a dedicated arxiv and mrnumber field in the source. This is what baez/online and LPW:markov-mixing do. The implementation on the biblatex side can be done in two ways here.
1. Internally re-map arxiv = {foo} to eprint = {foo} with eprinttype = {arxiv} and mrnumber = {bar} to eprint = {bar} with eprinttype = {mrnumber}. This is done with the out-\iffalsed source map below, the data model is not needed in that case. This results in the fields being treated just like eprint, which means DeclareFieldFormat{eprint:arxiv} and DeclareFieldFormat{eprint:mrnumber} are the formats responsible, the fields are printed with \printfield[eprint:\strfield{eprinttype}]{eprint}. The advantage of this approach is that you don't need a data model, the disadvantage is that the one eprint slot you have is occupied. On the biblatex side (i.e. for writing a biblatex style) this is equivalent to the first method.
2. Declare the arxiv and mrnumber fields as native fields in a special data model and load the data model. The fields can be used independently and are controlled with DeclareFieldFormat{arxiv} and DeclareFieldFormat{mrnumber} and printed with \printfield{arxiv} and \printfield{mrnumber}.
\documentclass{article}
\usepackage{filecontents}
\begin{filecontents*}{ext-eprint.dbx}
\ProvidesFile{ext-eprint.dbx}[2018/05/09 extended stand-alone eprint fields]
\DeclareDatamodelFields[type=field,datatype=verbatim]{arxiv,mrnumber}
\DeclareDatamodelEntryfields{arxiv,mrnumber}
\end{filecontents*}
\begin{filecontents*}{\jobname.bib}
@online{AGHH:dynamic-cm,
eprinttype = {arxiv},
eprint = {1606.07639},
title = {Mixing Times of Random Walks on Dynamic Configuration Models},
date = {2016-06-24},
author = {Avena, Luca and G{\"u}lda{\c s}, Hakan and van der Hofstad, Remco and den Hollander, Frank},
options = {useprefix=true}
}
@online{baez/online,
author = {Baez, John C. and Lauda, Aaron D.},
title = {Higher-Dimensional Algebra {V}: 2-Groups},
date = {2004-10-27},
version = 3,
arxiv = {math/0307200v3},
}
@article{FR:giant-mixing,
title = {The Evolution of the Mixing Rate of a Simple Random Walk on the Giant Component of a Random Graph},
volume = {33},
number = {1},
journaltitle = {Random Structures \& Algorithms},
urldate = {2018-03-22},
date = {2008-05-12},
pages = {68-86},
author = {Fountoulakis, Nikolaos and Reed, Bruce A.},
eprinttype = {mrnumber},
eprint = {12},
}
@book{LPW:markov-mixing,
location = {Providence, RI, USA},
title = {Markov Chains and Mixing Times},
isbn = {978-1-4704-2962-1},
pagetotal = {xvi+447},
publisher = {American Mathematical Society},
date = {2017},
author = {Levin, David A. and Peres, Yuval and Wilmer, Elizabeth L.},
mrnumber = {3726904},
}
\end{filecontents*}
\usepackage{csquotes}
\usepackage[doi=false,isbn=false,url=false,
backend=biber,
style=numeric,
datamodel=ext-eprint, % comment this out to see what the data model does
]{biblatex}
\usepackage{hyperref}
\DeclareFieldFormat{arxiv}{%
  \ifhyperref
    {\href{https://arxiv.org/abs/#1}{arXiv\addcolon\space#1}}% link target/text assumed
    {arXiv\addcolon\space#1}%
  \space as field}
\DeclareFieldFormat{eprint:arxiv}{%
  \ifhyperref
    {\href{https://arxiv.org/abs/#1}{arXiv\addcolon\space#1}}% link target/text assumed
    {arXiv\addcolon\space#1}%
  \space via eprint}
\DeclareFieldFormat{mrnumber}{%
\ifhyperref
{\href{http://www.ams.org/mathscinet-getitem?mr=MR#1}{MR#1}}
{MR#1}%
real field}
\DeclareFieldFormat{eprint:mrnumber}{%
\ifhyperref
{\href{http://www.ams.org/mathscinet-getitem?mr=MR#1}{MR#1}}
{MR#1}%
eprint}
% This map maps all mrnumber fields to eprints with eprinttype mrnumber
% and all arxiv fields to eprint with eprinttype arxiv
% If you remove the \iffalse and \fi, this becomes active,
% in that case the datamodel is not needed any more.
\iffalse
\DeclareSourcemap{
\maps[datatype=bibtex]{
\map{
\step[fieldsource=mrnumber, fieldtarget=eprint, final]
\step[fieldset=eprinttype, fieldvalue=mrnumber]
}
\map{
\step[fieldsource=arxiv, fieldtarget=eprint, final]
\step[fieldset=eprinttype, fieldvalue=arxiv]
}
}
}
\fi
\renewbibmacro*{eprint}{%
\printfield{arxiv}%
\newunit\newblock
\printfield{mrnumber}%
\newunit\newblock
\iffieldundef{eprinttype}
{\printfield{eprint}}
{\printfield[eprint:\strfield{eprinttype}]{eprint}}}
\begin{document}
\nocite{*}
\printbibliography
\end{document}
• Thank you. Did you have to manually edit the bibdata to put in eprinttype = {mrnumber} and eprint = {12}? I'm trying to avoid that. I see though that if I comment out the datamodel part, and remove the \iffalse ... \fi, then it works exactly as desired with the mrnumber field. You're a TeX genius! – Sam T May 10 '18 at 8:28
• @SamT Yes and no. The answer shows essentially three possible ways to deal with these identifiers. I have updated the answer to clarify this. If you want to keep arxiv and mrnumber fields from wherever you get your data from, only the way 2.1 or 2.2 are interesting for you. Way 1 indeed requires manual adjustments. – moewe May 10 '18 at 8:42
• Thank you very much. I've implemented Method 2.1 -- at least, I'm pretty sure that's what I've done! [See i.stack.imgur.com/xYe1P.png for a snapshot] – Sam T May 10 '18 at 8:52
• @SamT In that case the \printfield{arxiv} and \printfield{mrnumber} lines do nothing. In fact you can simply delete the entire \renewbibmacro*{eprint} block, the default definition \newbibmacro*{eprint}{% \iffieldundef{eprinttype} {\printfield{eprint}} {\printfield[eprint:\strfield{eprinttype}]{eprint}}} should be good enough for you. If we ignore the thisdoesappear and a possibly unwanted space created by forgetting to add a % in the last line of the definition. I have also edited the field formats to work even if hyperref is not loaded. – moewe May 10 '18 at 8:58
• Thank you so much, I'm so grateful :) -- I've literally spent 7hrs+ on this, and I'd still have so far to go if it weren't for you :) – Sam T May 10 '18 at 9:19
This is a suggestion for you to use the easily extensible eprint facilities of biblatex (moewe's third suggestion).
For that, you have to create a field format for your desired new eprint in the format biblatex expects. In your case eprint:mr:
\DeclareFieldFormat{eprint:mr}{% based on eprint:jstor
  \ifhyperref
    {\href{http://www.ams.org/mathscinet-getitem?mr=MR#1}{MR#1}}
    {MR#1}}
You can keep mrnumber and remap it to eprint with:
\DeclareSourcemap{
\maps[datatype=bibtex]{
\map{
\step[fieldsource=mrnumber, fieldtarget=eprint, final]
\step[fieldset=eprinttype, fieldvalue=mr]
}
}
}
This also sets eprinttype to mr for all entries which contain a mrnumber.
As to customizing arXiv, you should use \DeclareFieldFormat{eprint:arxiv}, as explained by moewe.
Putting things together:
\documentclass[]{article}
\usepackage{filecontents}
\begin{filecontents*}{\jobname.bib}
@article{AGHH:dynamic-cm,
eprinttype = {arxiv},
eprint = {1606.07639},
title = {Mixing Times of Random Walks on Dynamic Configuration Models},
date = {2016-06-24},
author = {Avena, Luca and G{\"u}lda{\c s}, Hakan and van der Hofstad, Remco and den Hollander, Frank},
options = {useprefix=true}
}
@article{FR:giant-mixing,
title = {The Evolution of the Mixing Rate of a Simple Random Walk on the Giant Component of a Random Graph},
volume = {33},
number = {1},
journaltitle = {Random Structures \& Algorithms},
urldate = {2018-03-22},
date = {2008-05-12},
pages = {68-86},
author = {Fountoulakis, Nikolaos and Reed, Bruce A.},
mrnumber = {12}
}
@book{LPW:markov-mixing,
location = {{Providence, RI, USA}},
title = {Markov {{Chains}} and {{Mixing Times}}},
isbn = {978-1-4704-2962-1},
pagetotal = {xvi+447},
publisher = {{American Mathematical Society}},
date = {2017},
author = {Levin, David A. and Peres, Yuval and Wilmer, Elizabeth L.},
mrnumber = {3726904},
}
\end{filecontents*}
\usepackage{csquotes}
\usepackage[doi=false,isbn=false,url=false,
backend=biber,
style=numeric]{biblatex}
\usepackage{hyperref}
\DeclareSourcemap{
\maps[datatype=bibtex]{
\map{
\step[fieldsource=mrnumber, fieldtarget=eprint, final]
\step[fieldset=eprinttype, fieldvalue=mr]
}
}
}
\makeatletter
\DeclareFieldFormat{eprint:arxiv}{% body follows the stock biblatex definition
\autocap{a}vailable\space at\space arXiv\addcolon\space% <- changed here, relative to the default definition
\ifhyperref
{\href{https://arxiv.org/\abx@arxivpath/#1}{%
\nolinkurl{#1}%
\iffieldundef{eprintclass}
{}
{\addspace\texttt{[\thefield{eprintclass}]}}}}
{\nolinkurl{#1}%
\iffieldundef{eprintclass}
{}
{\addspace\texttt{[\thefield{eprintclass}]}}}}
\makeatother
\DeclareFieldFormat{eprint:mr}{% based on eprint:jstor
\ifhyperref
{\href{http://www.ams.org/mathscinet-getitem?mr=MR#1}{MR#1}}
{MR#1}}
• Thank you very much. Did you have to manually type in eprinttype = {mr} into all your bibliography entries? I'm trying to avoid having do manually edit them – Sam T May 10 '18 at 8:21
• @SamT I've adapted the answer for the eprinttype to be automatically set to mr for all entries which contain a mrnumber. – gusbrs May 10 '18 at 10:40
• I see, thank you. I suppose this has the advantage over moewe's that it doesn't require \begin{filecontents*}{ext-eprint.dbx} ... -- in fact, it's almost exactly the same, just without this, but with \makeatletter ... \makeatother – Sam T May 10 '18 at 10:53
• @SamT moewe's answer includes different alternatives. As I mentioned, this just expands the third suggestion, for which he says himself "The advantage of that approach is that you don't need a new .dbx file." The .dbx is included in his code because it is needed for the other suggestions. Here \makeatletter ... \makeatother is only used to redefine the formating directive for eprint:arxiv because I took the original definition, which contains the macro \abx@arxivpath. But I suppose moewe's definition of it would also work here. – gusbrs May 10 '18 at 11:14 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6557709574699402, "perplexity": 3521.9758491995403}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439740679.96/warc/CC-MAIN-20200815035250-20200815065250-00313.warc.gz"} |
https://planetmath.org/munntree | # Munn tree
Let $X$ be a finite set, and $\left(X\amalg X^{-1}\right)^{\ast}$ the free monoid with involution on $X$. It is well known that the elements of $\left(X\amalg X^{-1}\right)^{\ast}$ can be viewed as words on the alphabet $\left(X\amalg X^{-1}\right)$, i.e. as elements of the free monoid on $\left(X\amalg X^{-1}\right)$.
The Munn tree of the word $w\in\left(X\amalg X^{-1}\right)^{\ast}$ is the $X$-inverse word graph $\mathrm{MT}(w)$ (or $\mathrm{MT}_{X}(w)$ if $X$ needs to be specified) with vertex and edge set respectively
$\mathrm{V}(\mathrm{MT}(w))=\mathrm{red}(\mathrm{pref}(w))=\left\{\mathrm{red}(v)\,|\,v\in\mathrm{pref}(w)\right\},$
$\mathrm{E}(\mathrm{MT}(w))=\left\{(v,x,\mathrm{red}(vx))\in\mathrm{V}(\mathrm{MT}(w))\times\left(X\amalg X^{-1}\right)\times\mathrm{V}(\mathrm{MT}(w))\right\}.$
The concept of the Munn tree was created to investigate the structure of the free inverse monoid. The main result about it says that it “recognizes” whether or not two different words in $\left(X\amalg X^{-1}\right)^{\ast}$ belong to the same $\rho_{X}$-class, where $\rho_{X}$ is the Wagner congruence on $X$. We recall that if $w\in\left(X\amalg X^{-1}\right)^{\ast}$ [resp. $w\in\left(X\amalg X^{-1}\right)^{+}$], then $[w]_{\rho_{X}}\in\mathrm{FIM}(X)$ [resp. $[w]_{\rho_{X}}\in\mathrm{FIS}(X)$].
###### Theorem 1 (Munn)
Let $v,w\in\left(X\amalg X^{-1}\right)^{\ast}$ (or $v,w\in\left(X\amalg X^{-1}\right)^{+}$). Then $[v]_{\rho_{X}}=[w]_{\rho_{X}}$ if and only if $\mathrm{MT}(v)=\mathrm{MT}(w)$.
As an immediate corollary of this result we obtain that the word problem in the free inverse monoid (and in the free inverse semigroup) is decidable. In fact, we can effectively build the Munn tree of an arbitrary word in $\left(X\amalg X^{-1}\right)^{\ast}$, and this suffices to decide whether or not two words belong to the same $\rho_{X}$-class.
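To make the corollary concrete, here is a rough Python sketch of the construction (an illustration of mine, not taken from the references). Letters of $X\amalg X^{-1}$ are encoded as nonzero integers, with $-k$ the formal inverse of $k$; following the usual birooted formulation, the comparison also takes the reduced word $\mathrm{red}(w)$ as the marked end vertex.

```python
def red(word):
    """Free-group reduction: cancel adjacent inverse pairs (k, -k)."""
    out = []
    for ch in word:
        if out and out[-1] == -ch:
            out.pop()
        else:
            out.append(ch)
    return tuple(out)

def munn_tree(word, letters):
    """V(MT(w)) and E(MT(w)) exactly as in the definition above."""
    prefixes = [word[:i] for i in range(len(word) + 1)]
    V = {red(p) for p in prefixes}
    alphabet = set(letters) | {-a for a in letters}
    E = {(v, x, red(v + (x,)))
         for v in V for x in alphabet if red(v + (x,)) in V}
    return V, E

def same_class(v, w, letters):
    """Theorem 1, with the reduced words as the marked end vertices."""
    return munn_tree(v, letters) == munn_tree(w, letters) and red(v) == red(w)

# x x^-1 x equals x in FIM(X) (Wagner relation), but x x^-1 does not:
print(same_class((1, -1, 1), (1,), {1}))  # True
print(same_class((1, -1), (1,), {1}))     # False
```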
The Munn tree also reveals some properties of the $\mathcal{R}$-classes of elements of the free inverse monoid, where $\mathcal{R}$ is the right Green relation. In fact, the following result says that “essentially” the Munn tree of $w\in\left(X\amalg X^{-1}\right)^{\ast}$ is the Schützenberger graph of the $\mathcal{R}$-class of $[w]_{\rho_{X}}$.
###### Theorem 2
Let $w\in\left(X\amalg X^{-1}\right)^{\ast}$. There exists an isomorphism (in the category of $X$-inverse word graphs) $\Phi:\mathrm{MT}(w)\rightarrow\mathcal{S}\Gamma(X;\varnothing;[w]_{\rho_{X}})$ between the Munn tree $\mathrm{MT}(w)$ and the Schützenberger graph $\mathcal{S}\Gamma(X;\varnothing;[w]_{\rho_{X}})$ given by
$\Phi_{\mathrm{V}}(v)=[v]_{\rho_{X}},\ \ \forall v\in\mathrm{V}(\mathrm{MT}(w))=\mathrm{red}(\mathrm{pref}(w)),$
$\Phi_{\mathrm{E}}((v,x,\mathrm{red}(vx)))=([v]_{\rho_{X}},x,[vx]_{\rho_{X}}),\ \ \forall(v,x,\mathrm{red}(vx))\in\mathrm{E}(\mathrm{MT}(w)).$
## References
• 1 W.D. Munn, Free inverse semigroups, Proc. London Math. Soc. 30 (1974) 385-404.
• 2 N. Petrich, Inverse Semigroups, Wiley, New York, 1984.
• 3 J.B. Stephen, Presentation of inverse monoids, J. Pure Appl. Algebra 63 (1990) 81-112.
https://cstheory.stackexchange.com/questions/42760/turing-machines-as-coalgebras | # Turing Machines as Coalgebras
I'm looking to write a survey on the method of representing the dynamics of state-based computation within the framework of coalgebras. So far I've managed to find papers on coalgebra representations of DFA, NFA, Mealy machines, Moore machines, context-free grammars, and even simple quantum systems. I have not found a good source for representing a Turing Machine as a coalgebra.
Any sources/thoughts?
Thanks!
Pavlovic et al. view Turing machines over a binary alphabet as coalgebras for the functor $$\lambda X. \, 2 \times \mathcal{P}_{\mathrm{fin}}(X \times 2 \times \{\lhd,\rhd\})^2$$. The symbols $$\lhd$$ and $$\rhd$$ thereby represent the tape moves.
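Read concretely (one plausible reading of the exponent by $$2$$ as indexing by the scanned symbol), such a coalgebra assigns to every state an element of $$2$$ together with, for each of the two tape symbols, a finite set of (next state, written symbol, head move) triples. A minimal Python sketch of this reading, with names invented for illustration:

```python
from dataclasses import dataclass
from typing import Callable, FrozenSet, Tuple

Move = str    # "L" or "R", standing for the two tape moves
State = str   # any hashable carrier set would do

# One nondeterministic step: (next state, bit written, head move).
Steps = FrozenSet[Tuple[State, int, Move]]

@dataclass(frozen=True)
class Layer:
    out_bit: int    # the leading "2" component of the functor
    on_zero: Steps  # behaviour when the scanned bit is 0
    on_one: Steps   # behaviour when the scanned bit is 1

# A binary Turing machine as a coalgebra for X |-> 2 x Pfin(X x 2 x {L,R})^2
TMCoalgebra = Callable[[State], Layer]

# toy one-state machine: on 0, write 1 and move right; on 1, no transitions
machine: TMCoalgebra = lambda s: Layer(
    out_bit=0,
    on_zero=frozenset({("q0", 1, "R")}),
    on_one=frozenset(),
)
```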
Bart Jacobs has presented in "Coalgebraic walks, in quantum and Turing computation" an approach using a monad. He presents a Turing machine with $$n$$ states as a coalgebra for the functor $$\mathcal{P}_{\mathrm{fin}}[n]$$ on sets. Alternatively, consider the type $$\mathbb{T} = 2^\mathbb{Z} \times \mathbb{Z}$$ that represents the tape and the position of the head on the tape. A Turing machine with $$n$$ states is then also an endomorphism on $$2^n \otimes \mathcal{P}_{\mathrm{fin}}(\mathbb{T})$$ in the category of join-semilattices, or an $$n \times n$$-matrix of coalgebras $$\mathbb{T} \to \mathcal{P}_{\mathrm{fin}}(\mathbb{T})$$.
http://wiki.contextgarden.net/PaperSetup | PaperSetup
TODO: Merge with Paper sizes and Layout (See: To-Do List)
Paper setup is one of the most basic requirements for creating your own style. In this article, the basics of paper setup are explained; the more advanced setups are described in the Page Design chapter of the new ConTeXt manual.
Basic setup
Setting paper size (\setuppapersize)
Plain TeX and LaTeX were primarily developed in the US. So, they default to letter paper, which is the standard paper size in the US. ConTeXt was developed in the Netherlands. So, it defaults to A4 paper, which is the standard paper size in Europe (and almost everywhere else in the world).
Changing paper size is easy, for letter paper:[1]
`\setuppapersize[letter]`
Similarly, to get A4 paper, use:
`\setuppapersize[A4]`
Pre-defined paper sizes
Both A4 and letter are predefined paper sizes. ConTeXt predefines many other commonly used paper sizes. These include:
• letter, ledger, tabloid, legal, folio, and executive sizes from the North American paper standard;
• sizes A0–A10, B0–B10, and C0–C10 from the A, B, and C series of the ISO-216 standard;
• sizes RA0–RA4 and SRA0–SRA4 from the RA and SRA series of ISO-217 paper standard;
• sizes C6/C5, DL, and E4 from ISO-269 standard envelope sizes;
• envelope 9–envelope 14 sizes from the American postal standard;
• sizes G5 and E5 from the Swedish SIS-014711 standard. These are used for Swedish theses;
• size CD for CD covers;
• sizes S3–S6, S8, SM, and SW for screen sizes. These sizes are useful for presentations. S3–S6 and S8 have an aspect ratio of 4:3. S3 is 300pt wide, S4 is 400pt wide, and so on. S6 is almost as wide as an A4 paper. SM and SW are for medium and wide screens; they have the same height as S6;
• a few more paper sizes, which I will not mention here. See page-lay.mki(i|v) for details.
Defining new paper sizes (\definepapersize)
The predefined paper sizes in ConTeXt cannot fit all needs. To define a new paper size, use
```\definepapersize[exotic]
[width=50mm, height=100mm]```
which defines a paper that is 50mm wide and 100mm high; the name of this paper is exotic (we could have used any other word). All predefined paper sizes are defined using \definepapersize. For example, A4 paper is defined as:
`\definepapersize [A4] [width=210mm,height=297mm]`
Use this new paper size like any of the predefined paper sizes. For example, to set the paper size to 50mm x 100mm paper, use
`\setuppapersize[exotic]`
Orientation
Most of the popular paper sizes default to a portrait orientation. To get landscape orientation, use
`\setuppapersize[letter,landscape]`
Changing paper setup mid-document
Normally, the paper size is set up once—in the environment file—and doesn't need to be changed later. But, occasionally, changing paper size mid-document is needed; for example, to insert a table or a figure in landscape mode. There are two ways to change the paper size mid-document. To illustrate those, let us first define two paper sizes for convenience:
```\definepapersize[main] [A4]
\definepapersize[extra][A4,landscape]```
One way to change document size is to permanently change the paper size using \setuppapersize and then revert back using \setuppapersize.
```% Set the default paper size
\setuppapersize[main]
\starttext
% ...
% text with main paper size
% ...
\page \setuppapersize[extra]
% ...
% pages in landscape mode
% ...
\page \setuppapersize[main]
% ...
% back to main paper size
% ...
\stoptext```
The \page before \setuppapersize is necessary as \setuppapersize changes the size of the current page.
Often times, a different paper size is needed only for one page. Rather than manually switching the paper size back and forth using \setuppapersize, a convenient alternative is to use \adaptpapersize, which automatically reverts to the existing paper size after one page. This is illustrated by the following example.
```\setuppapersize[main]
\starttext
Page 1. Portrait \page
Page 2. Portrait \page
\adaptpapersize[extra]
Page 3. Landscape \page
Page 4. Portrait \page
\stoptext```
As with \setuppapersize, always use an explicit \page before \adaptpapersize.
Setting print size
Occasionally you may want to print on a larger paper than the actual page size. This could be because you want to print to the edge of the page—so you print on a large paper and crop later—or because the page size that you are using is not standard. For example, suppose you want to print an A5 page on a A4 paper (and crop later). For that, you need to specify that the paper size is A5 but the print paper size is A4. This information is specified using the two argument version of the \setuppapersize:
`\setuppapersize[A5][A4]`
Changing page location
By default, this places the A5 page on the top left corner of the A4 paper. To place the A5 page in the middle of the A4 paper use:
```\setuppapersize[A5][A4]
\setuplayout[location={middle,middle}]```
Other possible values for location are: {top,left}, {top,middle}, {top,right}, {middle,right}, {middle,left}, {bottom,left}, {bottom,middle}, and {bottom,right}. Since {middle, middle} is the most commonly used value, it has a shortcut—location=middle.
If you use {*,left} or {*,right} and print double-sided, then also add duplex as an option; for example location={duplex,top,left}. This ensures that the page is shifted appropriately on even pages.
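For example, combining the options above into a complete command (a sketch built from the settings just described):

`\setuplayout[location={duplex,top,left}]`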
Crop marks
To get crop marks (also called cut marks) use
`\setuplayout[marking=on]`
By default, the page numbers are also included with the crop marks. To get additional information like job name, current date and time along with the crop marks, use
`\setuplayout[marking=text]`
If you want just the crop marks, and no other text, use
`\setuplayout[marking=empty]`
Defining page and print size combinations
It is convenient to define paper-size/print-paper-size combinations for later reuse. These are also defined using \definepapersize. For example, suppose you want to define two paper-size/print-paper-size combinations: A4 paper on A4 print paper for the normal work flow, and A4 paper on A3 print paper for the final proofs. For that, use the following:
```\definepapersize[regular][A4][A4]
\definepapersize[proof] [A4][A3]```
You can then combine these paper sizes with Modes:
```\setuppapersize[regular]
\doifmode{proof}{\setuppapersize[proof]}```
Then, when you compile the document in the normal manner, you will get A4 paper on A4 print paper; if you compile the document with --mode=proof, then you will get A4 paper on A3 print paper.
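For example, from the command line (assuming the MkIV context runner; texexec accepts the same flag for MkII):

`context --mode=proof myfile.tex`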
Notes
1. The syntax used here only works with ConTeXt versions newer than February 2011. Before that, you had to use
`\setuppapersize[letter][letter]`
to get letter sized paper. You may wonder why we need to repeat the paper size twice. In most cases, these are the same. You only need to use different arguments if you want to print on a bigger paper and trim it later (see the section on print size for details).
http://blog.darkbuzz.com/2012/04/evidence-against-entraining-aether.html | ## Friday, April 13, 2012
### Evidence against entraining the aether
Joseph Levy of France has just posted Is the aether entrained by the motion of celestial bodies? What do the experiments tell us?. He revisits the issue of whether the Michelson–Morley experiment rules out luminiferous aether drift from the motion of the Earth.
Since the publication of Einstein's basic article "On the electrodynamics of moving bodies" in 1905, the aether has been excluded from the area of physics, being regarded as inexistent or at least inactive. Such an attitude signified that the laws of physics could be formulated in the same way whether the aether exists or not,...
This approach appeared quite revolutionary in 1905, since it called into question the ideas developed by a number of classical physicists such as Hooke, Lavoisier, Young, Huygens, Laplace, Fresnel, and Lorentz among others.
I do not agree with this. I say that Einstein did not refute Lorentz, and what Lorentz meant by the aether was what he said in 1895:
It is not my intention to ... express assumptions about the nature of the aether.
Levy does explain how the aether concept (if not the name) is universally accepted today:
In fact, despite its properties that seem so different from ordinary matter, a number of arguments speak in favour of a substratum [9] and these arguments have multiplied in the early twentieth century with the development of quantum mechanics. It is difficult, indeed, to accept that a “vacuum”, endowed with physical properties such as permittivity and permeability may be empty. The ability of such an empty vacuum to transmit electromagnetic waves is also doubtful.
Quantum mechanics, on its part, regards the vacuum as an ocean of pairs of fluctuating virtual particles-antiparticles of very small life-time, appearing and disappearing spontaneously, which can be interpreted as a gushing of the aether, although the aether is not officially recognized by quantum mechanics. The interaction of the electrons and the vacuum, in particular, is regarded as the cause of the shifting of the alpha ray of the hydrogen atom spectrum, referred to as the Lamb shift [10]. The fluctuations of the vacuum are also assumed to explain the Casimir effect [11], and the Davies–Fulling–Unruh effect [12].
Einstein himself around 1916 changed his mind as regards the hypothesis of the aether. ...
A proof of the undeniable existence of the aether was given in ref [14]. Thus, the question to be answered today is not to verify its existence, but rather to specify its nature and its properties, and, in the first place, to determine if it is entrained (or not) by the translational motion of celestial bodies due to gravitation.
You will be reassured to learn that the conclusions of Lorentz and Poincare about relativity in 1900 are still good today, and evidence is against the aether drag hypothesis. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8429923057556152, "perplexity": 838.8296347550462}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700477029/warc/CC-MAIN-20130516103437-00093-ip-10-60-113-184.ec2.internal.warc.gz"} |
http://davicr.wordpress.com/2006/09/02/miktex-25-and-hyperref/ | ## MikTex 2.5 and Hyperref
Just now I upgraded my old 2.4 version of MikTeX to the 2.5 version. The details of the new version can be found in the last link. Those who already have version 2.4 installed may use the MikTex Update Wizard.
There was a change in the default option of hyperref: it was changed from hypertex to dvips. The line \usepackage{hyperref} is no longer compatible with DVI building (but PDF documents are generated normally). To correct this problem one may either replace \usepackage{hyperref} with \usepackage[hypertex]{hyperref} in each tex file (*) or, which I prefer, return to the original default option. To do the latter, first locate the file hyperref.cfg (its standard folder is C:\texmf\tex\latex\00miktex), then open it with an ASCII editor (e.g., Notepad). This file has only one line. Where it reads "dvips", replace it with "hypertex". Save the file "hyperref.cfg" and now everything will work fine. For both DVI and PDF outputs one may use \usepackage{hyperref}.
(*) A PDF document can be built with that option, but the hyperlinks won’t work. For some reason, the modification in the hyperref.cfg file doesn’t have this drawback. Explanations?
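For reference, the change amounts to editing the driver name inside that single line. Judging from the 2.4 file quoted in the comments below, it should look roughly like this before and after the edit (the exact wrapper text in 2.5 may differ slightly):

\providecommand*{\Hy@defaultdriver}{dvips}%

becomes

\providecommand*{\Hy@defaultdriver}{hypertex}%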
### 7 Comments on “MikTex 2.5 and Hyperref”
1. [...] Update 2. MikTex 2.5 and Hyperref. [...]
2. Marcel Zemp Says:
Thanks! This solved my problem! I had the problem that lines were not broken anymore in the table of content, figures etc. First, I had no idea why that happened. But then I also realised that the hyperlinks do not work anymore. I made the changes in the hyperref.cfg file and now everything works fine: hyperlinks are back and the line-breaking in the table of content etc. works fine! I don’t know why it doesn’t work in the “new” version. Is that a known bug?
3. Davi Says:
Hi Marcel, thanks for your message! I don't know if this modification of the hyperref standard option was done on purpose, but I really didn't like it (and lost some time solving it). It appears that many people have complaints about this.
4. Guillaume Says:
Hi,
I tried to change the hyperref.cfg file as mentioned above but it still does not work. In the file I open, the only place where it says "dvips" goes like this: {hdvips}. I tried to replace the "dvips" with "hypertex" and it does not work. Anyone having the same problem? This is a major bug.
Thanks
5. Davi Says:
Hi Guillaume, now and in the next days I don't have access to a computer with this MiKTeX version installed, but I don't remember seeing "{hdvips}"; nevertheless, in the end you should have "{hypertex}". In the 2.4 version the complete hyperref.cfg file reads:
\ProvidesFile{hyperref.cfg}% [2003/03/08 v1.0 MiKTeX 'hyperref' configuration]
\providecommand*{\Hy@defaultdriver}{hypertex}%
\endinput
Best regards,
Davi.
6. Manuel Luque Says:
Thank you very much! You have been very helpful to me! Replacing the line \usepackage{hyperref} with \usepackage[hypertex]{hyperref} completely solved my problems with dvipdfm.
7. lami Says:
Same here! Thanks a lot! | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8353525996208191, "perplexity": 2919.95874477554}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119650193.38/warc/CC-MAIN-20141024030050-00126-ip-10-16-133-185.ec2.internal.warc.gz"} |
https://jmlr.csail.mit.edu/papers/v21/16-252.html | ## Learning Causal Networks via Additive Faithfulness
Kuang-Yao Lee, Tianqi Liu, Bing Li, Hongyu Zhao; 21(51):1−38, 2020.
### Abstract
In this paper we introduce a statistical model, called additively faithful directed acyclic graph (AFDAG), for causal learning from observational data. Our approach is based on additive conditional independence (ACI), a recently proposed three-way statistical relation that shares many similarities with conditional independence but without resorting to multi-dimensional kernels. This distinct feature strikes a balance between a parametric model and a fully nonparametric model, which makes the proposed model attractive for handling large networks. We develop an estimator for AFDAG based on a linear operator that characterizes ACI, and establish the consistency and convergence rates of this estimator, as well as the uniform consistency of the estimated DAG. Moreover, we introduce a modified PC-algorithm to implement the estimating procedure efficiently, so that its complexity is determined by the level of sparseness rather than the dimension of the network. Through simulation studies we show that our method outperforms existing methods when commonly assumed conditions such as Gaussian or Gaussian copula distributions do not hold. Finally, the usefulness of the AFDAG formulation is demonstrated through an application to a proteomics data set.
https://homework.cpm.org/category/CCI_CT/textbook/int2/chapter/1/lesson/1.2.2/problem/1-36 | ### Home > INT2 > Chapter 1 > Lesson 1.2.2 > Problem1-36
1-36.
If the perimeter of the rectangle at right is $112$ cm, which equation below represents this fact? Once you have selected the appropriate equation, solve for $x$.
1. $(2x−7)+(4x+3)=112$
2. $4(2x−7)=112$
3. $2(2x−7)+2(4x+3)=112$
4. $(2x−7)(4x+3)=112$
Perimeter is the sum of the sides. | {"extraction_info": {"found_math": true, "script_math_tex": 6, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8146577477455139, "perplexity": 1365.778988211898}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780061350.42/warc/CC-MAIN-20210929004757-20210929034757-00155.warc.gz"} |
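A worked sketch (not part of the original hint), using the third equation: $2(2x−7)+2(4x+3)=112 \Rightarrow 12x−8=112 \Rightarrow x=10$.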
https://codereview.stackexchange.com/questions/233391/java-neural-network-implementation | # Java Neural Network Implementation
I have recently tried to get a better grip on machine learning from the implementation side, not the statistics side. I've read several explanations of an implementation of a neural network via pseudocode and this is the result: a toy neural network. I have used several sources from medium.com and towardsdatascience.com (if all sources need to be listed, I will make an edit).
I originally created a naive custom Matrix class with O(N^3) matrix multiplication, but removed it to instead use ujmp. I used the ujmp.org Matrix class for faster matrix multiplication, but due to my lacking understanding of how to utilise the speed-ups, I believe the potential gains are not fully realised.
This is the final code. Please comment and suggest improvements! Thank you. I will include SGD, backpropagation, feed forward, and mini batch calculation. Backprop is private due to a wrapping method called train.
The class NetworkInput is a wrapper for an attribute DenseMatrix and a label DenseMatrix.
All functions here are interfaces for activation functions, functions to evaluate the test data, and functions to calculate loss and error.
This is the SGD.
/**
* Provides an implementation of SGD for this neural network.
*
* @param training a Collections object with {@link NetworkInput} objects,
* NetworkInput.getData() is the data, NetworkInput.getLabel() is the label.
* @param test a Collections object with {@link NetworkInput} objects,
* NetworkInput.getData() is the data, NetworkInput.getLabel() is the label.
* @param epochs how many iterations are we doing SGD for
* @param batchSize how big is the batch size, typically 32. See https://stats.stackexchange.com/q/326663
*/
*/
public void stochasticGradientDescent(@NotNull List<NetworkInput> training,
                                      @NotNull List<NetworkInput> test,
int epochs,
int batchSize) {
int trDataSize = training.size();
int teDataSize = test.size();
for (int i = 0; i < epochs; i++) {
// Randomize training sample.
Collections.shuffle(training);
System.out.println("Calculating epoch: " + (i + 1) + ".");
// Do backpropagation.
for (int j = 0; j < trDataSize - batchSize; j += batchSize) {
calculateMiniBatch(training.subList(j, j + batchSize));
}
// Feed forward the test data
List<NetworkInput> feedForwardData = this.feedForwardData(test);
// Evaluate prediction with the interface EvaluationFunction.
int correct = this.evaluationFunction.evaluatePrediction(feedForwardData).intValue();
// Calculate loss with the interface ErrorFunction
double loss = errorFunction.calculateCostFunction(feedForwardData);
// Add the plotting data, x, y_1, y_2 to the global
// lists of xValues, correctValues, lossValues.
System.out.println("Loss: " + loss);
System.out.println("Epoch " + (i + 1) + ": " + correct + "/" + teDataSize);
// Lower learning rate each iteration? Might implement; don't know how to.
// ADAM? Is that here? Are they different algorithms altogether?
// TODO: Implement Adam, RMSProp, Momentum?
// this.learningRate = i % 10 == 0 ? this.learningRate / 4 : this.learningRate;
}
}
Here we calculate the mini batches and update our weights with an average.
private void calculateMiniBatch(List<NetworkInput> subList) {
int size = subList.size();
double scaleFactor = this.learningRate / size;
DenseMatrix[] dB = new DenseMatrix[this.totalLayers - 1];
DenseMatrix[] dW = new DenseMatrix[this.totalLayers - 1];
for (int i = 0; i < this.totalLayers - 1; i++) {
DenseMatrix bias = getBias(i);
DenseMatrix weight = getWeight(i);
dB[i] = Matrix.Factory.zeros(bias.getRowCount(), bias.getColumnCount());
dW[i] = Matrix.Factory
.zeros(weight.getRowCount(), weight.getColumnCount());
}
for (NetworkInput data : subList) {
DenseMatrix dataIn = data.getData();
DenseMatrix label = data.getLabel();
List<DenseMatrix[]> deltas = backPropagate(dataIn, label);
DenseMatrix[] deltaB = deltas.get(0);
DenseMatrix[] deltaW = deltas.get(1);
for (int j = 0; j < this.totalLayers - 1; j++) {
dB[j] = (DenseMatrix) dB[j].plus(deltaB[j]);
dW[j] = (DenseMatrix) dW[j].plus(deltaW[j]);
}
}
for (int i = 0; i < this.totalLayers - 1; i++) {
DenseMatrix cW = getWeight(i);
DenseMatrix cB = getBias(i);
DenseMatrix scaledDeltaB = (DenseMatrix) dB[i].times(scaleFactor);
DenseMatrix scaledDeltaW = (DenseMatrix) dW[i].times(scaleFactor);
DenseMatrix nW = (DenseMatrix) cW.minus(scaledDeltaW);
DenseMatrix nB = (DenseMatrix) cB.minus(scaledDeltaB);
setWeight(i, nW);
setLayerBias(i, nB);
}
}
This is the back propagation algorithm.
private List<DenseMatrix[]> backPropagate(DenseMatrix toPredict, DenseMatrix correct) {
List<DenseMatrix[]> totalDeltas = new ArrayList<>();
DenseMatrix[] weights = getWeights();
DenseMatrix[] biases = getBiasesAsMatrices();
DenseMatrix[] deltaBiases = this.initializeDeltas(biases);
DenseMatrix[] deltaWeights = this.initializeDeltas(weights);
// Perform Feed Forward here...
List<DenseMatrix> activations = new ArrayList<>();
List<DenseMatrix> xVector = new ArrayList<>();
// Alters all arrays and lists.
this.backPropFeedForward(toPredict, activations, xVector, weights, biases);
// End feedforward
// Calculate error signal for last layer
DenseMatrix deltaError;
// Applies the error function's derivative to the last layer to create the
// error signal. (The exact call was truncated in the original post; the
// method name below is a placeholder on the ErrorFunction interface.)
deltaError = (DenseMatrix) errorFunction
        .calculateErrorSignal(activations.get(activations.size() - 1), correct);
// Set the deltas to the error signals of bias and weight.
deltaBiases[deltaBiases.length - 1] = deltaError;
deltaWeights[deltaWeights.length - 1] = (DenseMatrix) deltaError
.mtimes(activations.get(activations.size() - 2).transpose());
// Now iteratively apply the rule
for (int k = deltaBiases.length - 2; k >= 0; k--) {
DenseMatrix z = xVector.get(k);
DenseMatrix differentiate = functions[k + 1].applyDerivative(z);
deltaError = (DenseMatrix) weights[k + 1].transpose().mtimes(deltaError)
.times(differentiate);
deltaBiases[k] = deltaError;
deltaWeights[k] = (DenseMatrix) deltaError.mtimes(activations.get(k).transpose());
}
    // Return bias deltas first, weight deltas second; calculateMiniBatch
    // reads them back with get(0) and get(1).
    totalDeltas.add(deltaBiases);
    totalDeltas.add(deltaWeights);
    return totalDeltas;
}
EDIT I forgot to include the feed forward algorithm.
private void backPropFeedForward(DenseMatrix starter, List<DenseMatrix> actives,
List<DenseMatrix> vectors,
DenseMatrix[] weights, DenseMatrix[] biases) {
DenseMatrix toPredict = starter;
// The input itself counts as the first activation.
actives.add(toPredict);
for (int i = 0; i < getTotalLayers() - 1; i++) {
    DenseMatrix x = (DenseMatrix) weights[i].mtimes(toPredict).plus(biases[i]);
    // Record the pre-activation (z) and the resulting activation;
    // backPropagate reads these back out of xVector and activations.
    vectors.add(x);
    toPredict = this.functions[i + 1].applyFunction(x);
    actives.add(toPredict);
}
}
However, I do see a lot of repetition, especially when it comes to bias and weight. It may be a good idea to create generic methods for those. I see you use specific methods such as getBias(i) and getWeight(i), but those can be injected using lambda functions.
The size and scaleFactor variables are not used in the first two for loops, so I don't understand why they are declared & initialized so early. If you only declare variables where you need them it becomes easier to extract methods out of large swaths of code, and your code becomes easier to read (because you don't have to keep track of so many variables as a reader).
There are a lot of unexplained calculations such as - 1 and - 2 going on. For you they may be clear, but generally you should comment on what you're trying to achieve with them.
In general the functions are too large. Try to minimize the amount of code. If you have three for loops in a row in one function, try and see if you can extract (private) methods for them instead.
stochasticGradientDescent clearly prints out the result instead of returning it. That's not nice; at least indicate somewhere that it produces output. Instead of using System.out, simply use a PrintStream out as an argument if you create such a method. Then you can always stream the output to a file or to a String (for testing purposes) - and for console output you just pass System.out as the parameter.
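A minimal, self-contained illustration of that suggestion (class and method names are mine):

import java.io.ByteArrayOutputStream;
import java.io.PrintStream;

public final class ReportingDemo {

    // Any method that reports progress takes a PrintStream instead of
    // writing to System.out directly.
    static void report(PrintStream out, int epoch, double loss) {
        out.println("Epoch " + epoch + ": loss = " + loss);
    }

    public static void main(String[] args) {
        report(System.out, 1, 0.42);           // normal console output
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        report(new PrintStream(buf), 2, 0.17); // captured, e.g. for a test
        System.out.print("captured: " + buf);
    }
}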
Similarly, calculateMiniBatch doesn't return a value, it calls two setters instead. That's generally not done, as you can directly assign such things to fields. Calling public methods from private methods can be dangerous if they get overwritten. For this kind of purpose I might also consider returning a private WeightAndBias class instance with just two fields (a record in Java).
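A sketch of such a holder (it assumes the same DenseMatrix type already imported in the post; on Java 14+ this could be a one-line record):

// Immutable result holder, so calculateMiniBatch can return both arrays
// instead of calling setters.
final class WeightsAndBiases {

    final DenseMatrix[] weights;
    final DenseMatrix[] biases;

    WeightsAndBiases(DenseMatrix[] weights, DenseMatrix[] biases) {
        this.weights = weights;
        this.biases = biases;
    }
}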
I'm really wondering why DenseMatrix is not parameterized properly; I keep seeing class casts back to DenseMatrix while the methods are clearly defined on DenseMatrix itself. That probably means that an interface is missing a generic type parameter that DenseMatrix could bind, e.g.
interface Matrix<T extends Matrix<T>> {
    T operation();
}

class DenseMatrix implements Matrix<DenseMatrix> {
    public DenseMatrix operation() {
        // ... actual work on the dense representation ...
        return this;
    }
}
Otherwise, I'll be glad to let you know that I don't understand the first thing about the code, so I'll stop while I'm behind :) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2673030495643616, "perplexity": 4818.3128472622475}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347432237.67/warc/CC-MAIN-20200603050448-20200603080448-00390.warc.gz"} |
http://en.wikipedia.org/wiki/User:NorwegianBlue/area_of_a_square_on_the_surface_of_a_sphere | User:NorwegianBlue/area of a square on the surface of a sphere
Initial presentation of the problem
How does one find the area of a square drawn on a sphere?
Simple answer: you can't. A square is 2D and a sphere is 3D, so you cannot draw a square on a sphere. Stefan 09:38, 24 May 2006 (UTC)
It might be better to say that a sphere is a surface (a two dimensional Riemannian manifold) with constant positive curvature, while the plane is a surface (two dimensional manifold) with constant zero curvature. ---CH 10:03, 24 May 2006 (UTC)
Well there's a way to do triangles (it's in my friend's multivariable calculus book), but I dunno about squares. --M1ss1ontomars2k4 (T | C | @) 03:48, 25 May 2006 (UTC)
So... why don't you just divide that square into two triangles... and add up the results?
As in an object with 4 sides, whose interior angles do not add up to 360 degrees. How do you find the area?
See Theorema egregium. ---CH 10:03, 24 May 2006 (UTC)
If you are asking how to find the area of some shape on a sphere, then perhaps we can give you a helpful answer - but in order to do so, you have to define the shape 'exactly'. For example we could start analysing the area of a square projected onto the surface of a sphere. This isn't a square, it has curved edges. So back to you... -- SGBailey 10:25, 24 May 2006 (UTC)
BTW, did you know that an equilateral "triangle" on a sphere touching the points lat=0,long=0; lat=0,long=90; lat=90,long=any has three 90 degree corners and has an area of 1/8 of the sphere? -- SGBailey 10:25, 24 May 2006 (UTC)
You will have to express the sides of the "square" mathematically to determine the boundaries of the double integral that will give you the area. You will probably want to solve it in spherical coordinates. You are going to have to know some calculus for this one. --Swift 11:11, 24 May 2006 (UTC)
Actually, the whole thing goes like this: I am trying to find the area of this...
A square with sides of 10 cm; draw loci (10 cm) at each corner (a quarter of a circle in the square, to give the "square").
This is NOT a homework question, I just want a head start on what to do. If you do not understand what I said, tell me and I'll create an image in Paint. Many thanks!
I do not understand! OK, one more 'simple' answer to your 'simple' question: between a very, very small bit more than 100 square cm and maybe about 200 square cm.
According to Square (geometry), a hemisphere would be a valid "square" on a sphere with side length = 0.25*sphere circumference and area = 0.5*sphere area. Indeed, presumably a hemisphere is an instance of every regular polygon with the same area and side length = 1/N * circumference. I note that this is a "valid" (?) 2-sided polygon and even a valid (?) one-sided polygon! -- SGBailey 16:12, 24 May 2006 (UTC)
I don't get this. Perhaps drawing that picture would help clear things up. --Swift 08:33, 25 May 2006 (UTC)
Is this about spherical geometry? Yanwen 20:58, 24 May 2006 (UTC)
Excuse me if I'm missing something obvious here, but M1ss1ontomars2k4 says: Well there's a way to do triangles (it's in my friends multivariable calculus book), but I dunno about squares. So, find the area of a right-angled triangle (or sphere-surfaced equivalent) with shorter sides both length 10cm, and double it. Grutness...wha? 05:46, 25 May 2006 (UTC)
Grutness, you are missing something. On a sphere, triangles etc. don't scale like that. -- SGBailey 11:59, 25 May 2006 (UTC)
Yes, user:grutness is sort of right - but what is missing is the radius of the sphere, otherwise answers will have to be expressed as a function of radius. Take the solid angle created by half the 'square', i.e. a spherical triangle - double it and multiply by r squared to get the surface area. Spherical trigonometry may help, as will solid angle - see continued talk below Wikipedia:Reference desk/Science#(continued) Area on a sphere. HappyVR 16:42, 27 May 2006 (UTC)
Followup on science section
This is the problem. It's not a homework question and I just need to know how to work it out. The length of the square's side is 10 cm; find the shaded area (the curved lines are the loci of the 4 corners).
As you just want to know how to do it, I will not carry out actual calculations. Call A, B, C, D the vertices of the square in a clockwise fashion starting from the bottom left one. Let E be the top point of intersection between the four circumferences, and let F be the right one. Then one can show that the segments AE and AF divide the right angle BAD into three equal angles, each of them measuring $\pi/6$. Hence, if you set up Cartesian coordinates so that A=(0,0), B=(0,1), C=(1,1) and D=(1,0), the x-coordinate of F is just cos($\pi/6$)=$\sqrt{3}/2$. The equation of the circumference centered in A being $x^2 + y^2 = 1$, the area you are looking for is $4\int_{1/2}^{\sqrt{3}/2} \left(\sqrt{1 - x^2} - 1/2\right) dx$ (using symmetry to simplify things). Cthulhu.mythos 14:48, 26 May 2006 (UTC)
That's quite funny. There's a much easier way to figure this out. I won't give the details just in case it is homework, but the approach looks like this: let a be the area coloured yellow, and b be the area of one of the four curvy arrowhead-like shapes in the corners. Express the area of the whole square in terms of a and b. Express the area of a quarter circle in terms of a and b. This gives you simultaneous equations in a and b which you can easily solve for a. Gdr 15:33, 26 May 2006 (UTC)
Your method doesn't account for all the areas. In addition to "a" and "b" there is also the pointy area between two "b" areas. 199.201.168.100 15:37, 26 May 2006 (UTC)
Yes, you're quite right. So call the thin area at the side c and make three simultaneous equations in a, b and c. Gdr 15:55, 26 May 2006 (UTC)
There's certainly a way to avoid calculus. It's easy enough to get the cartesian coordinates of the four vertices of the yellow area. (e.g. the top one is at (1/2, sqrt(3)/2) just because it makes an equilateral triangle with the bottom two vertices of the main square.) Then take the yellow area to be a square joining its four vertices (aligned diagonal to the coordinate axes), plus four vaguely lens-shaped pieces. The area of each of the lens-shaped pieces is obtained by drawing straight lines connecting its two vertices to the opposite corner (i.e. to the center of the arc): it's the area of the sector of the circle minus the area of the triangle. Hope this makes some kind of sense. Arbitrary username 20:56, 26 May 2006 (UTC)
Like the person posing the question, I am not a mathematician. I suspect that the responses so far have not given enough practical detail to be helpful to the questioner. Based on the original question, and on this repost, I'll have a go at reformulating what I think the questioner had in mind: We are working on the surface of a sphere. We have two pairs of great circles. The angle between the first pair of great circles, expressed in radians, is 10cm/r, where r is the radius of the sphere. The angle between the second pair of great circles is equal to the angle between the first pair. The plane defined by the axes corresponding to the first pair of great circles is perpendicular to the plane defined by the axes corresponding to the second pair of great circles. At two opposite locations on the surface of the sphere, "squares" are formed, as illustrated in the image. Is it possible to express the area of one of these "squares" analytically, such that the area tends to 100 cm² as r tends to infinity? --NorwegianBlue 21:26, 26 May 2006 (UTC)
• I love math problems that have multiple approaches. I'll wave my hands a bit and assert that the corners of the yellow area cut the arcs in thirds. Call A the yellow corner on the left, and B the one on top. Construct segment AB. Figure out the area between segment AB and arc AB, and add four of 'em to a square of side AB. I think the result will look something like
$(2R\sin \frac{\pi}{12})^{2} + (2R^2(\frac{\pi}{6} - \sin\frac{\pi}{6}))$
but that's only because I looked up the formulas for the circular segment on mathworld.
Signed, math degree 30 years ago next month and am rusty as all hell. --jpgordon∇∆∇∆ 05:41, 27 May 2006 (UTC)
• The question was related to the area of a square on the surface of a sphere, and the preceding answer appears to be plane geometry (correct me if I'm wrong!). I think we can be reasonably sure that this is not a homework question, because of the vague way in which it was formulated. I believe what the questioner had in mind was the area illustrated in yellow here:
The red curves are supposed to represent great circles.
Is anybody able to come up with a formula for the yellow area in terms of r, the radius of the sphere? Also, it would be nice if the person that posed the question confirmed that this is what he/she is looking for. --NorwegianBlue 09:19, 27 May 2006 (UTC)
• Oh, it doesn't matter what they're looking for -- this is fun! Probably belongs over in WP:RD/Math. I ignored the sphere thing, for some reason or another. But isn't there insufficient information to calculate this? (Is this a solid angle on a sphere?)--jpgordon∇∆∇∆ 16:13, 27 May 2006 (UTC)
Yes, it is a solid-angle-of-a-sphere type question - the missing info. is the radius r of the sphere - without that, answers will need to be functions of r. By the way, if the interior angles of a triangle drawn on a sphere are a, b and c, then the solid angle covered by the triangle (spherical geometry here) is a+b+c−π in steradians. HappyVR 16:32, 27 May 2006 (UTC)
My question, and possibly the original poster's question, was whether somebody could provide a formula for the area, in terms of r. --NorwegianBlue 16:43, 27 May 2006 (UTC)
Followup on maths section
This question was originally posted in the science section, but belongs here. The original questioner has stated clearly that it is not a homework question. It was formulated as follows:
"How do one find the area of a square drawn on a sphere? A square with the side of 10 cm, and draw loci (10cm) on each corners (quarter of a circle in a square to give the "square")"
Based on the discussion that followed, I think what the questioner had in mind is the area illustrated in yellow in the drawing below:
The red circles are supposed to be two pairs of great circles. The angle between the first pair of great circles, expressed in radians, is 10cm/R, where R is the radius of the sphere. The angle between the second pair of great circles is equal to the angle between the first pair. The plane defined by the axes corresponding to the first pair of great circles is perpendicular to the plane defined by the axes corresponding to the second pair of great circles.
The question is how to express the yellow area in terms of R, the radius of the sphere. Obviously, as R → ∞, the area → 100 cm².
I am not a mathematician, but felt that it "ought to" be possible to express this area in terms of R, and decided to try to find the necessary information.
I found Girard's theorem, which states that the area of a triangle on a sphere is (A + B + C − π) × R², where A, B and C are the angles between the sides of the triangle, as illustrated in the second drawing. I also found the law of sines for triangles on a sphere, which relates the angles A, B and C to the angles a, b and c which define the sides of the triangle
$\frac{\sin a}{\sin A}=\frac{\sin b}{\sin B}=\frac{\sin c}{\sin C}.$
I then attempted to divide the square into two triangles, and compute the area of one of these, but am stuck because I don't know the diagonal. Since this is spherical geometry, I doubt that it is as simple as $\sqrt{2} \times 10 cm$. I would appreciate if somebody told me if I am on the right track, and, if so, how to complete the calculations. If my presentation of the problem reveals that I have misunderstood some of the theory, please explain. --NorwegianBlue 14:11, 28 May 2006 (UTC)
The natural way I suspect the question should presumably be answered is to take the square on the flat plane and use Jacobians to transform it onto the sphere. Those with a firmer grip of analysis would probably want to fill in the details at this point... Dysprosia 15:28, 28 May 2006 (UTC)
An easier way to tackle this might be to exploit the symmetry of the situation. Slice the sphere into 4 along z=0 and x=0. This will give four identical squares with four right angles and two sides of length 5. Then cut the squares along x+z=0, x−z=0, giving eight triangles, each with one 45 degree angle, one right angle and one side of length 5. --Salix alba (talk) 15:44, 28 May 2006 (UTC)
I think vibo is on the right track. You can use the law of sines to calculate the length of the diagonal. -lethe talk + 15:44, 28 May 2006 (UTC)
The law of cosines for spherical trig gives cos c = cos²a. -lethe talk + 16:06, 28 May 2006 (UTC)
From which I get, using the spherical law of sines, that sin A = sin a / √(1 − cos⁴a). A = B and C = π/2, so I have the triangle, and hence the square. -lethe talk + 16:11, 28 May 2006 (UTC)
To lethe: How can you say that C = π/2? This is spherical geometry, and the four "right" angles in the "square" in the first drawing add up to more than 2π, don't they, or am I missing something? --NorwegianBlue 16:49, 28 May 2006 (UTC)
You may be right, I cannot assume that the angles are right angles. Let me mull it over some more. -lethe talk + 16:58, 28 May 2006 (UTC)
OK, I think the right assumption to make is that C = 2A. I can solve this triangle as well, but it's quite a bit messier. Lemme see if I can clean it up, and then I'll post it. -lethe talk + 17:20, 28 May 2006 (UTC)
$2R^2\left(2\sin^{-1}\left(\frac{\sin a}{\sqrt{1-\cos^4 a}}\right) - \frac{\pi}{2}\right)$
For the square. Now I just have to see whether this answer works. -lethe talk + 16:14, 28 May 2006 (UTC)
And now I'm here to tell you that Mathematica assures me that this function approaches s² as the curvature goes to zero. From the series, I can say that to the leading two orders of correction, area = s² + s⁴/(6R²) + s⁶/(360R⁴). -lethe talk + 16:25, 28 May 2006 (UTC)
I got the following for the diagonal angle c of the big square from "first principles" (just analytic geometry in 3D): cos(c/2) = 1 / sqrt(1+2t²), where a = 10cm/R and t = tan(a/2). --LambiamTalk 16:02, 28 May 2006 (UTC)
I'm afraid I didn't understand (I'm not a mathematician :-) ). If we let (uppercase) C be the "right" (i.e. 90°+something) angle in the triangle in the second figure, and (lowercase) c be the diagonal that we are trying to calculate, could you please show the steps leading to this result (or rephrase it, if I misinterpreted your choice of which of the angles A,B,C that was the "right" one)? --NorwegianBlue 18:07, 28 May 2006 (UTC)
For simplicity, let's put R = 1, since you can divide all lengths first by R, and multiply the area afterwards by R². Then the equation of the sphere is x² + y² + z² = 1. Take the point nearest to the spectator in the first image to be (x,y,z) = (0,0,1), so z decreases when receding. Take the x-axis horizontal and the y-axis vertical. A great circle is the intersection of a plane through the sphere's centre (0,0,0) with the sphere. The equation of the plane that gives rise to the great circle whose arc segment gives the top side of the "square" is y = tan(a/2) × z = tz (think of it as looking sideways along the x-axis). At the top right corner of the "square" we have x = y. Solving these three equations (sphere, plane, x = y) for z, using z > 0, gives us z = 1 / sqrt(1+2t²). Now if c is the angle between the rays from the centre of the sphere to this corner and its opposite (which, if R = 1, is also the length of the diagonal), so c/2 is the angle between one of these rays and the one through (0,0,1), then z = cos(c/2). Combining this with the other equation for z gives the result cos(c/2) = 1 / sqrt(1+2t²). Although I did not work out the details, I think you can combine this with Salix alba's "cut in eight" approach and the sines' law to figure out the missing angle and sides. --LambiamTalk 19:59, 28 May 2006 (UTC)
new calculation
As vibo correctly points out above, the square will not have right angles, so my calculation is not correct. Here is my new calculation. Assuming all angles of the square are equal, label this angle C. Then draw the diagonal, and the resulting triangle will be equilateral with sides a and angles A, and 2A = C. The law of sines tells me
$\frac{\sin a}{\sin A} = \frac{\sin c}{\sin 2A} \,\!$
from which I have
$\sin c = 2\cos A \sin a. \,\!$
From the law of cosines I have that
$\cos c = \cos^2 a +\sin^2a(2\cos^2A-1). \,\!$
My goal here is to eliminate c. First I substitute cos A:
$\cos c = \cos^2 a+\sin^2 a\left(\frac{\sin^2 c}{2\sin^2 a}-1\right) \,\!$
which reduces to the quadratic equation
$\cos^2c+2\cos c-1=2\cos 2a. \,\!$
So I have
$\cos c=-1\pm\sqrt{2+2\cos 2a} \,\!$
and using cos A = sin c/(2 sin a), I am in a position to solve the triangle
$A=\cos^{-1}\left(\frac{1}{2}\sqrt{1-\left(-1+\sqrt{2+2\cos 2a}\right)^2}\csc a\right). \,\!$
I'm pretty sure this can be simplified quite a bit, but the simplification I got doesn't agree with the one Mathematica told me. Anyway, the expansion also has the right limit of s². -lethe talk + 20:16, 28 May 2006 (UTC)
Despite the figure, which is only suggestive (and not quite correct), are we agreed on the definition of a "square on a sphere"? The question stipulates equal side lengths of 10 cm. To avoid a rhombus we should also stipulate equal interior angles at the vertices, though we do not have the luxury of stipulating 90° angles. Food for thought: Is such a figure always possible, even on a small sphere? (Suppose the equatorial circumference of the sphere is itself less than 10 cm; what then?) Even if it happens that we can draw such a figure, is it clear what we mean by its area? Or would we prefer to stipulate a sufficiently large sphere? (If so, how large is large enough?) Figures can be a wonderful source of inspiration and insight, but we must use them with a little care. --KSmrqT 20:40, 28 May 2006 (UTC)
The figure was drawn by hand, and is obviously not quite correct, but doesn't the accompanying description:
The red circles are supposed to be two pairs of great circles. The angle between the first pair of great circles, expressed in radians, is 10cm/R, where R is the radius of the sphere. The angle between the second pair of great circles is equal to the angle between the first pair. The plane defined by the axes corresponding to the first pair of great circles is perpendicular to the plane defined by the axes corresponding to the second pair of great circles.
resolve the ambiguity with respect to the rhombus, provided that the area of the square is less than half of the area of the sphere? --NorwegianBlue 21:51, 28 May 2006 (UTC)
What is meant by "the angle between the … circles"? That's not really the same as the arclength of a side as depicted. Also note that the original post suggests that the side might be a quarter of a circle. If that is true, then the "square" is actually a great circle! Each angle will be 180°, and the area "enclosed" will be a hemisphere of a sphere with radius 20 cm/π, namely 2π(20 cm/π)² = 800 cm²/π, approximately 254.65 cm².
By a series of manipulations I came up with
$\cos^2 A = \frac{\cos a}{1+\cos a} ,$
where a is 10 cm/R, the side length as an angle. The angle of interest is really C = 2A, for which
$\cos C = -\tan^2 \frac{a}{2} .$
For the hemisphere case, a = π/2 produces C = π; while for the limit case, a = 0 produces C = π/2.
The original question was about the area, so we should conclude with that: (4C−2π)R². --KSmrqT 04:43, 29 May 2006 (UTC)
By "the angles between a pair of great circles", I meant the angle between the plane P1 in which the first great circle lies, and the plane P2 in which the second great circle lies. The arc length depicted was intended to represent the intersection between the surface of the sphere, and a plane P3, which is orthogonal to P1 and P2, and which passes through the centre of the sphere. As previously stated, I have little mathematical training. I therefore made a physical model by drawing on the surface of a ball, before making the first image. I convinced myself that such a plane is well-defined, and that this length of arc on a unit sphere would be identical to the angle between P1 and P2. Please correct me if I am mistaken, or confirm if I am right. --NorwegianBlue 20:19, 29 May 2006 (UTC)
Every great circle does, indeed, lie in a well-defined plane through the center of the sphere. Between two such planes we do have a well-defined dihedral angle. The problem arises when we cut with a third plane. If we cut near where the two planes intersect we get a short arc; if we cut far from their intersection we get a longer arc. In other words, the dihedral angle between the two planes does not determine the arclength of the "square" side.
Instead, use the fact that any two distinct points which are not opposite each other on the sphere determine a unique shortest great circle arc between them, lying in the plane containing the two points and the center. Our value a is the angle between the two points, as measured at the center of the sphere. Were we to pick two opposite points, we'd have a = π, which is half the equatorial circumference of a unit sphere. For a sphere of radius R, the circumference is 2πR. We are told that the actual distance on the sphere is exactly 10 cm, but we are not told the sphere radius. The appearance of the "square" depends a great deal on the radius, and so does its area. When the radius is smaller, the sides "bulge out" to enclose more area, the corner angles are greater, and the sphere bulges as well. As the sphere radius grows extremely large, the square takes up a negligible portion of the surface, the sides become straighter, the angles approach perfect right angles, and the sphere bulges little inside the square.
We do not have a handy rule for the area of a square on a sphere. Luckily, the area of a triangle on a sphere follows a powerful and surprisingly simple rule, based on the idea of angular excess. Consider a triangle drawn on a unit sphere, where the first point is at the North Pole (latitude 90°, longitude irrelevant), the second point drops straight down onto the equator (latitude 0°, longitude 0°), and the third point is a quarter of the way around the equator (latitude 0°, longitude 90°). This triangle has three perfect right angles for a total of 270° (or 3π/2), and encloses exactly one octant — one eighth of the surface area — of the sphere. The total surface area is 4π, so the triangle area is π/2. This area value is exactly the same as the excess of the angle sum, 3π/2, compared to a flat right triangle, π. The simple rule is, this is true for any triangle on a unit sphere. If instead the sphere radius is R, the area is multiplied by R².
Thus we simplify our area calculation by two strategies. First, we divide out the effect of the radius so that we can work on a unit sphere. Second, we split the "square" into two equal halves, two equilateral triangles, by drawing its diagonal. Of course, once we find the triangle's angular excess we must remember to double the value (undoing the split) and scale up by the squared radius (undoing the shrink).
Notice that this mental model assumes the sphere radius is "large enough", so that at worst the square becomes a circumference. We still have not considered what we should do if the sphere is smaller than that. It seems wise to ignore such challenges for now. --KSmrqT 21:23, 29 May 2006 (UTC)
Thank you. I really appreciate your taking the time to explain this to me with such detail and clarity. --NorwegianBlue 23:34, 29 May 2006 (UTC)
Coordinate Transform
What if we perform a simple coordinate transform to spherical coordinates and perform a 2-dimensional integral in phi and theta (constant r = R)? Then dA = r^2*sin(theta)*dphi*dtheta, and simply set the bounds of phi and theta sufficient to make the lengths of each side 10 cm. Nimur 18:11, 31 May 2006 (UTC)
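For a patch bounded by coordinate lines this integral is elementary; note, though, that the great-circle "square" above is not such a patch, so this only approximates it: $A=\int_{\phi_1}^{\phi_2}\int_{\theta_1}^{\theta_2} R^2\sin\theta \,d\theta \,d\phi = R^2(\phi_2-\phi_1)(\cos\theta_1-\cos\theta_2)$.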
Calculation completed
Thanks a million to the users who have put a lot of work into explaining this to me, and into showing me the necessary calculations. I started out based on the work of lethe. Armed with a table of trigonometric identities, I went carefully through the calculations, and am happy to report that I feel I understood every single step. I was not able to simplify the last expression much further; the best I can come up with is
$\cos A=\frac{1}{2}\csc a\sqrt{2\sqrt{2+2\cos 2a} - 2\cos 2a - 2} \, .\!$
You should probably make use of the identity
$\cos 2a = 2\cos^2a -1$
here, it simplifies this expression quite a bit. -lethe talk + 02:01, 30 May 2006 (UTC)
Since the r.h.s. is based on a only, which is a known constant when the radius and length of arc are given (a=10cm/R for the given example), let us substitute $G=g(a) \,\!$ for the r.h.s. Note that the function is undefined at a = 0° and a = ±180° because of the sine function in the denominator. There is a graph of g(a) on my user page.
We can now calculate the area of the triangle, and that of the square.
$\cos A = G . \,\!$
$\cos 2A = 2\cos^2 A - 1 = 2G^2-1 \, .\!$
According to Girard's formula, we then have
$area_{triangle} = (A + B + C - \pi) \times R^2 \!$
$area_{triangle} = \left( 2\cos^{-1} G + \cos^{-1}(2G^2 - 1)-\pi \right) \times R^2\!$
$area_{square} = 2\times \left( 2\cos^{-1} G + \cos^{-1}(2G^2 - 1)-\pi \right) \times R^2\!$
I calculated the behaviour of this area function on a unit sphere when a is in the range (0°...180°):
Seems reasonable up to 90°. The value at 90° corresponds to the "square" with four corners on a great circle that KSmrq mentions above, i.e. to a hemisphere, and the area, 2π, is correct. In the interval [90°..180°), the function returns the smaller of the two areas. I also notice that the function looks suspiciously elliptical. Are we computing a much simpler function in a roundabout way?
I next studied how the formula given by KSmrq works out:
$area_{triangle} = \left( 2\cos^{-1} \left( \sqrt{\frac{\cos a}{1+\cos a} }\, \right) + \cos^{-1}(-\tan^2 \frac{a}{2})-\pi \right) \times R^2\!$
$area_{square} = 2 \times \left( 2\cos^{-1} \left( \sqrt{\frac{\cos a}{1+\cos a} }\, \right) + \cos^{-1}(-\tan^2 \frac{a}{2})-\pi \right) \times R^2\!$
I computed the area, and found that in the range (0°..90°], the formulae of lethe and KSmrq yield identical results, within machine precision. Above 90°, the formula of KSmrq leads to numerical problems (nans).
Finally, I would like to address the question of the original anonymous user who first posted this question on the science desk. Let us see how the area of the square behaves as R increases, using 10 cm for the length of arc in each side of the "square". The smallest "reasonable" value of R is 20 cm/π ≈ 6.366 cm, which should lead to a surface area of approximately 254.65 cm², as KSmrq points out. Driven by curiosity, I will start plotting the function at lower values than the smallest reasonable one (in spite of KSmrq's advice to "ignore such challenges for now").
Here is the graph:
Unsurprisingly, the function behaves weirdly below the smallest reasonable value of R, but from R ≈ 6.366 cm onwards, the function behaves as predicted, falling rapidly from 254.65 cm², and approaching 100 cm² asymptotically. In case anybody is interested in the calculations, I have put the program on my user page.
Again, thank you all. --NorwegianBlue 23:54, 29 May 2006 (UTC)
Well done. It does appear that you overlooked my simple formula for the area, which depends on C alone. Recall that when the square is split, the angle A is half of C, so the sum of the angles is A+A+C, or simply 2C. This observation applies to User:lethe's results as well, where we may use simply 4A. So, recalling that a = 10 cm/R, a better formula is
$\mathrm{area}_\mathrm{square} = \left( 4 \cos^{-1}\left(-\tan^2 \frac{a}{2}\right)-2\pi \right) \times R^2 = \left( 4 \cos^{-1}\left(-\tan^2 \frac{10\ \mathrm{cm}}{2 R}\right)-2\pi \right) \times R^2 . \,\!$
For the arccosine to be defined, its argument must be between -1 and +1, and this fails when the radius goes below the stated limit. (A similar problem occurs with the formula for A, where a quantity inside a square root goes negative.) Both algebra and geometry are telling us we cannot step carelessly into the domain of small radii. Try to imagine what shape the "square" may take when the circumference of the sphere is exactly 10 cm; both ends of each edge are the same point! Not only do we not know the shape, we do not know what to name and measure as the "inside" of the square.
This raises an important general point about the teaching, understanding, and application of mathematics. Statements in mathematics are always delimited by their range of applicability. Every function has a stated domain; every theorem has preconditions; every proof depends on specific axioms and rules of inference. Once upon a time, we manipulated every series with freedom, with no regard to convergence; to our chagrin, that sometimes produced nonsense results. Once it was supposed that every geometry was Euclidean, and that every number of interest was at worst a ratio of whole numbers; we now make regular use of spherical geometry and complex numbers. When we state the Pythagorean theorem, we must include the restriction of the kind of geometry in which it applies. When we integrate a partial differential equation, the boundary conditions are as important as the equation itself. It is all too easy to fall into the careless habit of forgetting the relevance of limitations, but we do so at our peril. --KSmrqT 02:36, 30 May 2006 (UTC)
Yes, I did overlook the (now painfully obvious) fact that the sum of the angles was 2C. Your final point is well taken. I understood that the reason for the NaN's was a domain error, but thanks for pointing out the exact spots. --NorwegianBlue 19:40, 30 May 2006 (UTC)
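For reference, the simplified single-angle formula is easy to check in a few standalone lines, in the same style as the program in the supplementary material below (a sketch only; the function name is mine):

#include <iostream>
#include <math.h>

const double PI = 3.1415926535897932384626433832795;
// __________________________________________________
//
// Unit-sphere area of the "square" from the single-angle formula above.
// Only valid for a in (0, pi/2]; outside that range the argument of
// acos leaves [-1, +1], as discussed above.
double area_simple(double a)
{
    double t = tan(a/2.0);
    return 4.0*acos(-t*t) - 2.0*PI;
}
// __________________________________________________
//
int main()
{
    std::cout << area_simple(PI/2.0) << '\n';     // hemisphere: 2*pi = 6.28319
    double R = 100.0;                             // radius in cm, 10 cm sides
    std::cout << area_simple(10.0/R)*R*R << '\n'; // ~100.17, approaching 100 cm^2
    return 0;
}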
Supplementary material
Graph of the function g(a)
lethe provided the following function for cos A, where A=B represents half of the "right" (i.e. 90°+something) angle C in a "square" on the surface of a sphere.
$\cos A=\frac{1}{2}\csc a\sqrt{2\sqrt{2+2\cos 2a} - 2\cos 2a - 2} \, .\!$
Since the r.h.s. is based on a only, which is a known constant when the radius and length of arc are given (a = 10 cm/R for the example that prompted my follow-up question), I will substitute $G=g(a) \,\!$ for the r.h.s. Note that the function is undefined at multiples of 180°, because of the sine function in the denominator. The graph of g(a) looks like this:
The function appears to approach $\frac{1}{2}\sqrt{2}$ as a approaches 180°, as well as when a approaches 0° from above.
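A short series expansion confirms this (an added check, not in the original discussion): for small $a$, $\cos 2a = 1 - 2a^2 + O(a^4)$, so $2+2\cos 2a = 4 - 4a^2 + O(a^4)$ and $2\sqrt{2+2\cos 2a} = 4 - 2a^2 + O(a^4)$; the radicand is therefore $2a^2 + O(a^4)$, giving $g(a) \approx \frac{1}{2}\cdot\frac{1}{a}\cdot\sqrt{2}\,a = \frac{1}{2}\sqrt{2}$. Since $\cos 2(\pi-\epsilon) = \cos 2\epsilon$ and $\sin(\pi-\epsilon) = \sin\epsilon$, the same expansion applies as a approaches 180°.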
Computations
Here is the program that was used for the calculations referred to:
#include <iostream>
#include <math.h>
#include <stdlib.h>   // for exit()

const double PI = 3.1415926535897932384626433832795;
void ErrorExit(const char* msg, int lineno)
{
std::cerr << msg << " program line: " << lineno << '\n';
exit(2);
}
// __________________________________________________
//
double csc(double arg)
{
double s = sin(arg);
if (s == 0)
{
ErrorExit("Division by zero attempted!", __LINE__);
}
return 1.0/s;
}
// __________________________________________________
//
double pow2(double arg)
{
return arg*arg;
}
// __________________________________________________
//
double g_lethe(double a)
{
return 0.5*csc(a)*sqrt(2.0*sqrt(2.0 + 2.0*cos(2.0*a)) - 2.0*cos(2.0*a) - 2.0);
}
// __________________________________________________
//
double area_lethe(double a)
{
double G = g_lethe(a);
return 2*(2*acos(G) + acos(2*pow2(G)-1)-PI);
}
// __________________________________________________
//
double area_ksmrq(double a)
{
return 2*(2*acos(sqrt(cos(a)/(1.0+cos(a)))) + acos(-pow2(tan(a/2.0)))-PI);
}
// __________________________________________________
//
int main()
{
std::cout << "Calculating G as a function of a" << '\n';
std::cout << "=================================\n\n";
int i;
for (i = -90; i <= 540; ++i)
{
double a = static_cast<double>(i)*PI/180.0;
// Cheating a little to avoid division by zero
if (i == 0)
{
a += 0.0001;
}
else if (i == 360)
{
a -= 0.0001;
}
double G = g_lethe(a);
std::cout << i << "; " << G << '\n';
}
std::cout << "\n\n\n";
std::cout << "Calculating area of square in a unit sphere as a function of a" << '\n';
std::cout << "===============================================================\n\n";
for (i = 0; i <= 180; ++i)
{
double a = static_cast<double>(i)*PI/180.0;
if (i == 0)
{
// Cheating a little to avoid division by zero
a += 0.0001;
}
else if (i == 180)
{
// Cheating a little because of the discontinuity at 180 degrees
a -= 0.0001;
}
double S = area_lethe(a);
double T = area_ksmrq(a);
std::cout << i << "; " << S << "; " << T << '\n';
}
std::cout << "\n\n\n";
std::cout << "Calculating area of square with 10cm side as a function of R" << '\n';
std::cout << "=============================================================\n\n";
//
for (i = 10; i < 2000; ++i)
{
double R = 0.1*static_cast<double>(i)/PI;
double a = 10.0/R;
double S = area_lethe(a)*pow2(R);
std::cout << R << "; " << S << '\n';
}
return 0;
}
Info from KSmrq which was commented out
By a series of manipulations I came up with
$\cos^2 A = \frac{\cos a}{1+\cos a} ,$
where a is 10 cm/R, the side length as an angle. The angle of interest is really C = 2A, for which
$\cos C = -\tan^2 \frac{a}{2} .$
For the hemisphere case, a = π/2 produces C = π; while for the limit case, a = 0 produces C = π/2.
Calculations (best done privately)
The idea of the derivation is to start with the haversine formula for C, noting that we have an equilateral triangle so the first term vanishes. Thus, noting C = 2A,
$\mathrm{haversin}\ c = \sin^2 a \,\mathrm{haversin}\ 2A , \,\!$
or, noting haversin z = ½ versin z = ½(1 − cos z),
$1-\cos c = (1-\cos 2A)\sin^2 a , \,\!$
or, noting cos 2A = 2cos² A − 1,
$\cos c = 1 - 2 \sin^2 A\,\sin^2 a . \,\!$
We also have, as lethe observed,
$\sin c = 2\cos A \sin a. \,\!$
Now we can eliminate c using the fundamental trigonometric identity, and eliminate both sin² A and sin² a as well. We obtain a quadratic equation in x = cos² A and y = cos a,
$(y^2-1)x^2 + (-2y^2)x + (y^2) = 0. \,\!$
No doubt a cleaner way to this simple solution exists, but this may suffice. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 41, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8586063385009766, "perplexity": 513.191387355933}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1419447552819.108/warc/CC-MAIN-20141224185912-00016-ip-10-231-17-201.ec2.internal.warc.gz"} |
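For completeness, solving that quadratic in x (an added step, not in the original): the quadratic formula gives
$x = \frac{2y^2 \pm \sqrt{4y^4 - 4y^2(y^2-1)}}{2(y^2-1)} = \frac{y^2 \pm y}{y^2-1} = \frac{y}{y \mp 1} ,$
and since x = cos² A must lie in [0, 1], the admissible root is
$x = \frac{y}{y+1} = \frac{\cos a}{1+\cos a} ,$
which is exactly the formula for cos² A quoted above.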
Statistics Seminar
Department of Mathematical Sciences
DATE: Thursday, August 31, 2017
TIME: 1:15pm–2:15pm
PLACE: WH 100E
SPEAKER: Qiqing Yu, Binghamton University
TITLE: The Marginal Distribution Approach For Testing Independence And Goodness-of-fit In Linear Models
Abstract
We propose a test to simultaneously test the assumption of independence and goodness-of-fit for a linear regression model $Y=\beta X+W$, where $\beta\in R^p$. If $E(|Y||X)=\infty$, then all existing tests are not valid and their levels with a nominal size $0.05$ can be as large as $0.9$. Our approach is valid even if $E(|Y||X)=\infty$ or $E(||X||)=\infty$; thus it is more realistic than all the existing tests. Our approach is based on the difference between two estimators of the marginal distribution $F_Y$, and is therefore called the MD approach. We establish the consistency of the MD test. We compare the MD approach to existing tests, such as the test in the R package "gam" or the test in Sen and Sen (2014), through simulation studies. When the existing tests are valid, neither they nor the MD test is uniformly more powerful than the other. We apply the MD approach to 3 real data sets.
# NOTE: What is lambda calculus
What is lambda-calculus? Or, more specifically, what is untyped/pure lambda-calculus? To answer this, I wrote this note for myself. Lambda-calculus is a formal system invented by Alonzo Church in the 1920s, and we can enrich it in a variety of ways, for example by adding special concrete syntax for numbers, tuples, records, etc. Such extensions eventually lead to languages like ML, Haskell, or Scheme.
For simple functionality we usually write things out directly: +1 is clear enough to most people. But for complex operations (or ones that carry special meaning, like a math formula), repeating them everywhere would be tedious, so we introduce procedural/functional abstraction. For example: square(x) = x^2.
The syntax of lambda-calculus comprises just three sorts of terms; the following grammar uses BNF[1] form:

term ::=                  terms:
    x                     variable
    λx.term               abstraction
    term term             application
A variable x by itself is a term; an abstraction of a variable x from a term t1, written λx.t1, is a term; an application of a term t1 to another term t2, written as t1 t2, is a term.
## 1. $\beta$-reduction
If an expression of the form (λx. M) N is a term, then we can rewrite it to M[x := N], i.e. the expression M in which every x has been replaced with N. We call this process β-reduction[2] of (λx. M) N to M[x := N]. For example, (λx. (x + 1)) 2 (assuming we add numbers to lambda-calculus), where M is x + 1 and N is 2: (x+1)[x := 2] produces 2 + 1 as the result. BTW, we also use
$(\lambda x.M)N \longrightarrow [x \to N]M$
this form.
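As a quick analogy (a sketch in C++, not from the original note): in a language with first-class lambdas, β-reduction corresponds to ordinary function application.

```cpp
#include <iostream>

int main() {
    auto f = [](int x) { return x + 1; };  // (λx. x + 1)
    std::cout << f(2) << '\n';             // (λx. x + 1) 2 → 2 + 1 → 3
    return 0;
}
```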
## 2. Currying(in honor of Haskell Curry)
Simulating the behavior of a function of two or more arguments with a composite of functions of a single argument is called currying[3]. For example, λ(x y). M can be written λx. (λy. M).
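The same idea in C++, as a hedged sketch (lambdas returning lambdas):

```cpp
#include <iostream>

int main() {
    // λx. (λy. x + y): a two-argument function as nested one-argument functions
    auto add = [](int x) { return [x](int y) { return x + y; }; };
    auto add3 = add(3);            // partial application fixes x = 3
    std::cout << add3(4) << '\n';  // prints 7
    return 0;
}
```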
## 3. Church Booleans
Definition:
\begin{aligned} & true := \lambda t. \lambda f. t \\& false := \lambda t. \lambda f. f \end{aligned}
To see how to use them, first we define an and function.
$and := \lambda a. \lambda b. a\;b\;a$
Then apply with arguments:
\begin{aligned} & and\;true\;true \to (\lambda t. \lambda f. t) \equiv true \\& and\;true\;false \to (\lambda t. \lambda f. f) \equiv false \end{aligned}
The or and not functions:
\begin{aligned} & or := \lambda a. \lambda b. a\;a\;b \\& not := \lambda a. a\;false\;true \end{aligned}
We can even create ifThenElse:
$ifThenElse := \lambda a. \lambda b. \lambda c. a\;b\;c$
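These encodings can be replayed with C++14 generic lambdas (a sketch; the names True_, False_ and toBool are mine, chosen to avoid keywords):

```cpp
#include <iostream>

int main() {
    // true := λt. λf. t     false := λt. λf. f
    auto True_  = [](auto t) { return [t](auto f) { return t; }; };
    auto False_ = [](auto t) { return [](auto f) { return f; }; };
    // and := λa. λb. a b a
    auto And = [](auto a) { return [a](auto b) { return a(b)(a); }; };
    // Decode a Church boolean into a plain bool
    auto toBool = [](auto b) { return b(true)(false); };

    std::cout << std::boolalpha
              << toBool(And(True_)(True_)) << '\n'    // true
              << toBool(And(True_)(False_)) << '\n';  // false
    return 0;
}
```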
## 4. Church Numerals
Representing numbers in lambda-calculus is only slightly more intricate than Booleans. First, we define a successor function (called suc) and some numbers:
\begin{aligned} & suc := \lambda n. \lambda s. \lambda z. s\;(n\;s\;z) \\& n_0 := \lambda s. \lambda z. z \\& n_1 := \lambda s. \lambda z. s\;z \\& n_2 := \lambda s. \lambda z. s\;(s\;z) \\& n_3 := \lambda s. \lambda z. s\;(s\;(s\;z)) \end{aligned}
Once we get the idea that suc n_0 constructs n_1 and suc (suc n_0) constructs n_2, the pattern of Church numerals is clear: suc takes the previous number n and constructs the next number suc n. We keep λs.λz as a common prefix and add one more s to the body, where n s z consumes the previous number's λs.λz part.
Then we can define add and mult functions for them:
\begin{aligned} & add := \lambda m. \lambda n. \lambda s. \lambda z. m\;s\;(n\;s\;z) \\ |\; & add := \lambda m. \lambda n. m\;suc\;n \end{aligned}
add takes two arguments, m and n, but we keep λs.λz as usual so that the result is itself a number. We can then demonstrate it:
\begin{aligned} & add\; n_0\; n_1\\& \to \lambda s. \lambda z. n_0\;s\;(n_1\;s\;z)\\& \to \lambda s. \lambda z. n_0\;s\;((\lambda s. \lambda z. s\;z)\;s\;z)\\& \to \lambda s. \lambda z. n_0\;s\;(s\;z)\\& \to \lambda s. \lambda z. (\lambda s. \lambda z. z)\;s\;(s\;z)\\& \to \lambda s. \lambda z. s\;z \equiv n_1 \end{aligned}
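The numerals and add behave the same way with C++14 generic lambdas (a sketch; toInt is my helper for decoding a Church numeral into an int):

```cpp
#include <iostream>

int main() {
    // n0 := λs. λz. z       suc := λn. λs. λz. s (n s z)
    auto n0  = [](auto s) { return [](auto z) { return z; }; };
    auto suc = [](auto n) {
        return [n](auto s) { return [n, s](auto z) { return s(n(s)(z)); }; };
    };
    // add := λm. λn. λs. λz. m s (n s z)
    auto add = [](auto m) {
        return [m](auto n) {
            return [m, n](auto s) {
                return [m, n, s](auto z) { return m(s)(n(s)(z)); };
            };
        };
    };
    // Decode: apply "+1" n times to 0
    auto toInt = [](auto n) { return n([](int x) { return x + 1; })(0); };

    auto n1 = suc(n0);
    auto n2 = suc(n1);
    std::cout << toInt(add(n2)(n1)) << '\n';  // prints 3
    return 0;
}
```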
mult can be defined as:
\begin{aligned} & mult := \lambda m. \lambda n. \lambda f.m\; (n\; f) \\ | \; & mult := λm.λn.m\; (add\; n)\; 0 \end{aligned}
## 5. Evaluation Rules ($t \to t'$)
\begin{aligned} & \frac{t_1 \to t_1'}{t_1\; t_2 \to t_1'\; t_2} \;\;\;\; &{E-APP1}\\& \frac{t_2 \to t_2'}{v_1\;t_2 \to v_1\;t_2'} \;\;\;\; &{E-APP2}\\& (\lambda x.t_{12})\; v_2 \longrightarrow [x \to v_2]t_{12} \;\;\;\; &{E-APPABS}\\& \end{aligned}
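For instance, under these call-by-value rules (a small added walk-through): E-APP2 applies only when the left side is already a value, so the argument is reduced first:

\begin{aligned} (\lambda x.x)\;((\lambda y.y)\;(\lambda z.z)) & \longrightarrow (\lambda x.x)\;(\lambda z.z) \;\;\;\; &\text{(E-APP2, using E-APPABS inside)} \\ & \longrightarrow \lambda z.z \;\;\;\; &\text{(E-APPABS)} \end{aligned}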
## 6. References
### 6.1. Types and Programming Languages
• Author: Benjamin C. Pierce
• ISBN: 0-262-16209-1
### 6.2. Type Theory and Formal Proof: An Introduction
• Author: Rob Nederpelt & Herman Geuvers
• ISBN: 9781316056349
Date: 2020-01-01 Wed 00:00 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9951530694961548, "perplexity": 7831.316248569552}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950363.89/warc/CC-MAIN-20230401221921-20230402011921-00406.warc.gz"} |
# Momentum measurement of a particle in Quantum Mechanics
• #1
## Homework Statement
What will a momentum measurement of a particle whose wave-function is given by $\psi = e^{i3x} + 2e^{ix}$ yield?
Sketch the probability distribution of finding the particle between x = 0 to x = 2π.
## The Attempt at a Solution
The eigenfunctions of the momentum operator are given by $A e^{ikx}$, where k = $\frac p {\hbar}$, with eigenvalue p = ${\hbar} k$.
Thus the eigenvalue for $e^{i3x}$ is $3 \hbar$ and for $e^{ix}$ is $\hbar$. I feel tempted to take the eigenvalues of the momentum operator to be discrete and say that the momentum measurement will yield either $3 \hbar$ or $\hbar$.
As the eigenvalues of the momentum operator are continuous, I should use equation (3.56) to answer the question.
Assuming that the question asks to calculate the probability distribution at t = 0, the probability density would be given by $| \psi |^2 = 3 + 2 ( e^{ i2x} +e^{-i2x} )$, a complex function. But the probability density should be a real-valued function.
Is this correct?
• #2 Orodruin (Staff Emeritus, Homework Helper, Gold Member)
Plane waves are not normalizable so you really cannot write the probability in that manner (the wave function in momentum space is a sum of two delta functions). However, given the coefficients you should be able to deduce the probabilities (the coefficients are the probability amplitudes) by assuming that the total probability is one.
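A quick numerical rendering of that point (an added sketch, not part of the original exchange; it just normalizes the two coefficients under the Born rule):

```cpp
#include <iostream>

int main() {
    // psi = 1*e^{i3x} + 2*e^{ix}: amplitudes for p = 3*hbar and p = hbar
    double c1 = 1.0, c2 = 2.0;
    double norm = c1*c1 + c2*c2;                   // total probability -> 1
    std::cout << "P(p = 3 hbar) = " << c1*c1/norm  // 0.2
              << ", P(p = hbar) = " << c2*c2/norm  // 0.8
              << '\n';
    return 0;
}
```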
• #3 (original poster)
the coefficients are the probability amplitudes
How does one get to know this in case of continuous eigenvalues?
• #4 vela (Staff Emeritus, Homework Helper)
Assuming that the question asks to calculate the probability distribution at t = 0, the probability density would be given by $| \psi |^2 = 3 + 2 ( e^{ i2x} +e^{-i2x} )$, a complex function. But the probability density should be a real-valued function.
Is this correct?
I think you made a slight error. Anyway, your expression for $| \psi |^2$ is real.
• #5 Orodruin
I think you made a slight error. Anyway, your expression for $| \psi |^2$ is real.
However, it does not answer the question since it is the momenta that are asked for, not the position.
• #6 vela
Part of the question asked for a sketch of the probability as a function of $x$.
• #7 Orodruin
Part of the question asked for a sketch of the probability as a function of $x$.
That's what I get for reading too fast ...
http://www.techspot.com/community/topics/unknown-rogue-malware.177055/ | also @ TechSpot: Study suggests majority of Windows 8 users ignore Metro apps
# Unknown rogue malware
Discussion in 'Virus and Malware Removal' started by Bokkan, Feb 3, 2012.
1. ### Bokkan (Newcomer, in training)
So, I'm noticing that everyone seems to have this system check thing, Must be really frustrating, but it doesn't seem like any of the conventional tools can remove it. In retrospect, that's kinda why I'm here, My friend's box got infected with system check about a month ago, and I've been doing my damnedest to clean it. Reading guides, doing my typical system removal procedures, but I've met my match and I'm just unable to compete with this malware, And after reading some of these boards, i realize that I'm pretty much a dunce when it comes to this stuff, So my hat's off to the experts before i post this.
That being said, I do have to warn anyone who wishes to take the task of helping me on that I have already done several steps to the computer, so that might affect the process. I've already gotten the computer to be able to boot without the system check virus taking over (sometimes) and know how to get it to boot afterwards. (I rename the files and then reboot, super savvy right?) anyway, I've tried running Mbam.exe several times, and even Tdsskill, but i just cant seem to get rid of it. On top of that, this box seems to be infected with much more than just System check, as there are signs of other less volatile malware all over.
Anyway, That being said, I do have to point out that I was completely unable to get DDS.scr to produce any logs, The first time it froze the computer after about 10 minutes. Had to power cycle the box, the second time I tried it i let it run for 2 hours just to be sure, same result tho.
As for Gmer, I did load it as suggested, and nothing happened. No initial scan like the guide said. So i hit Scan.. 6 hours later and it finished scanning. Not sure if this is the log you wanted, but i'll post it anyway just in case. To follow are the Mbam log and the Gmer.log.
2. ### Bobbye (Helper on the Fringe)
Welcome to TechSpot!
Yes, I'm tired of it! Mostly because there are several rogue malware programs very active now that produce some of the same symptoms. Members are assuming that they all have the System Check malware. Not necessarily so. But the problem is that they are giving a diagnosis, but no symptoms.
The bottom line is that I have to fire back questions to ask them what's happening. So I changed your Subject so I won't get yet another feedback email named "System Check."
So, I'd like you to undo whatever you did to the .exe files to make them load. Uninstall any of the programs you used in an attempt to fix the problem.
Tell me what symptoms you are having and what messages you get when you try to run a scan.
Then I can determine what malware you most likely have and the best way to fix it.
=====================================
• Be patient. Malware cleaning takes time. I am also working with other members while I am helping you.
• Read my instructions carefully. If you don't understand or have a problem, ask me. Follow the order of the tasks I give you. Order is crucial in cleaning process.
• If you have questions, or if a program doesn't work, stop and tell me about it. Don't try to get around it yourself.
• File sharing programs should be uninstalled or disabled during the cleaning process..
• Observe these:
[o] Don't follow directions given to someone else
[o] Don't use any other cleaning programs or scans while I'm helping you.
[o] Don't use a Registry cleaner or make any changes in the Registry.
[o] Don't download and install new programs- except those I give you.
If I haven't replied back to you within 48 hours, you can send a PM with your thread link in it as a reminder. Do not include technical problems from your thread. Support is given only in the forum.
3. ### Bokkan (Newcomer, in training)
Well, I didnt realize there was an acceptance policy for forum topics, but I probably should have considered that, As such, this is my first opportunity to post the Mbam and Gmer logs, But in trying to follow your directions, I'll answer your questions first, IF you would like the logs please let me know as I have them available.
As for symptoms, Initially, when I received the box all the files were "gone" and the desktop was bare. Additionally the settings were in such a way that the box was set up to have 2 monitors, and was recognizing a second monitor attached(tho none was) so that the primary monitor only showed the very top left corner of the desktop (IE could not see or interact with the task bar or start menu in any way. I didn't know this, nor had i gotten familiar with the system check virus (as i now believe it is) at the time, so i operated under the impression that this was all part of the Mal-ware, for all i know my friend did something to set it up this way.
In addition, a non-closeable window with "System Check" at the top was prominent and it said "scan PC for errors" i'm not sure how you feel about posting links to images but I found several very similar replicas when doing a google image search for "system check virus" right in the first 5 images.
I have uninstalled every cleaner I have tried to use as asked, tho I did that prior to following the directions listed under the Sticky in this forum. The only file i did not uninstall is Malwarebytes (tho i did do a wipe and fresh install before starting that guide) as it's part of the guide. And Tdsskill wasnt ever actually installed per se, but i can delete the executable from the machine if you wish.
What I did do is have to open a new task in task manager, cmd, and from there I unhid all the files, using a command that i cant remember as it was a while ago, but i want to say it was /attrib -h /s /d. I also changed the file names on the files related to what i believe is the SC virus so that i could reboot without it taking over my computer again. Finally, I transferred over a notepad document with a .exe registry editor fix for Win XP.
That's about all I can think of for what I've done, And most of those things i cant really undo unfortunately.
Some additional symptoms i'm noticing now that the box isnt being hijacked upon login every time (tho occasionally it does completely get hijacked all over again and i have to repeat the previous steps, obviously i wont do that without you asking from this point forward should it happen again..) is that i'm getting google redirects for every clicked upon link. I'm getting random redirects for multiple pages on firefox. and I'm getting constant iexplorer panes attempting to load according to the task manager, each taking up an average of 30,000k mem usage with no apparent source, only to 'fail' and have a windows error report window pop up for them about 10 minutes after they each show up in the task manager. Generally there's anywhere from 5-15 of them at a time. And once, I had a slew of "Windows Help" windows open up overnight, at least 50, that i had to close individually. Only happened once, and hasn't happened again.
Further instructions? would you like the Mbam/Gmer logs?
4. ### Bobbye (Helper on the Fringe)
Please post the logs. I will then compare what I see to what you describe.
I'm not sure what you mean about 'acceptance'. The original subject you gave was born out of your frustration. The reply I gave was born out of mine. It's simply that everyone who has any similar problems proceeds to name the malware but tells us nothing about its effects on the system.
I just changed your subject to less frustration and more to the point. Okay?
Ignore any 'critical' messages and 'alerts' that you get. Do not click on any of them. As soon as I see the logs, I will follow with the appropriate directions.
5. ### Bokkan (Newcomer, in training)
By acceptance i mean nothing in regards to you specifically, I totally understand why you changed the title. Just another meaning lost in text i suppose. I meant the that the threads have to be approved by a moderator before they are allowed to be shown at all on the boards. which also totally makes sense. I just didn't realize that so, my plan to post the logs in subsequent posts for organization methods was foiled as my thread didn't exist after my initial 'post.' I was simply trying to explain why i didn't just post the logs from the beginning like i was supposed to. Then was confused if I should post them after it was approved or follow your directions first is all. Here I am trying to show I'm following directions, and instead i make it look like I'm being snippy at you, i apologize, I don't care what the title says lol :_)
Malwarebytes log:
Malwarebytes Anti-Malware 1.60.1.1000
www.malwarebytes.org
Database version: v2012.02.02.05
Windows XP Service Pack 3 x86 NTFS
Internet Explorer 8.0.6001.18702
2/2/2012 6:03:55 PM
mbam-log-2012-02-02 (18-03-55).txt
Scan type: Quick scan
Scan options enabled: Memory | Startup | Registry | File System | Heuristics/Extra | Heuristics/Shuriken | PUP | PUM
Scan options disabled: P2P
Objects scanned: 263840
Time elapsed: 36 minute(s), 33 second(s)
Memory Processes Detected: 0
(No malicious items detected)
Memory Modules Detected: 1
C:\WINDOWS\system32\mtxocci.dll (Trojan.BHO.H) -> Delete on reboot.
Registry Keys Detected: 0
(No malicious items detected)
Registry Values Detected: 0
(No malicious items detected)
Registry Data Items Detected: 0
(No malicious items detected)
Folders Detected: 1
C:\EX0FE7~1.CLE (Trojan.SpyEyes) -> Quarantined and deleted successfully.
Files Detected: 1
C:\WINDOWS\system32\mtxocci.dll (Trojan.BHO.H) -> Delete on reboot.
(end)
GMER 1.0.15.15641 - http://www.gmer.net
Rootkit scan 2012-02-03 06:36:24
Windows 5.1.2600 Service Pack 3 Harddisk0\DR0 -> \Device\Ide\IdeDeviceP0T1L0-c ST380020A rev.5.46
Running: 8xnjf8rt.exe; Driver: C:\DOCUME~1\HOOTBE~1\LOCALS~1\Temp\kwddrfod.sys
---- Kernel code sections - GMER 1.0.15 ----
? tfvaq.sys The system cannot find the file specified. !
? C:\WINDOWS\system32\DRIVERS\cdrom.sys suspicious PE modification
.text C:\WINDOWS\system32\DRIVERS\nv4_mini.sys section is writeable [0xB5B473A0, 0x83C195, 0xE8000020]
---- User code sections - GMER 1.0.15 ----
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[408] kernel32.dll!CreateProcessInternalW 7C8197B0 5 Bytes JMP 00154878
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[408] USER32.dll!DialogBoxParamW 7E4247AB 5 Bytes JMP 3E2154D5 C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[408] USER32.dll!SetWindowsHookExW 7E42820F 5 Bytes JMP 3E2E9AE9 C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[408] USER32.dll!CallNextHookEx 7E42B3C6 5 Bytes JMP 3E2DD125 C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[408] USER32.dll!CreateWindowExW 7E42D0A3 5 Bytes JMP 3E2EDB5C C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[408] USER32.dll!UnhookWindowsHookEx 7E42D5F3 5 Bytes JMP 3E25467E C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[408] USER32.dll!DialogBoxIndirectParamW 7E432072 5 Bytes JMP 3E3E53C7 C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[408] USER32.dll!MessageBoxIndirectA 7E43A082 5 Bytes JMP 3E3E52F9 C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[408] USER32.dll!DialogBoxParamA 7E43B144 5 Bytes JMP 3E3E5364 C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[408] USER32.dll!MessageBoxExW 7E450838 5 Bytes JMP 3E3E51CA C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[408] USER32.dll!MessageBoxExA 7E45085C 5 Bytes JMP 3E3E522C C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[408] USER32.dll!DialogBoxIndirectParamA 7E456D7D 5 Bytes JMP 3E3E542A C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[408] USER32.dll!MessageBoxIndirectW 7E4664D5 5 Bytes JMP 3E3E528E C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[408] ole32.dll!CoCreateInstance 774FF1BC 5 Bytes JMP 3E2EDBB8 C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[408] ole32.dll!OleLoadFromStream 7752983B 5 Bytes JMP 3E3E572F C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[408] ws2_32.dll!send 71AB4C27 5 Bytes JMP 7FF91AD9
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[408] ws2_32.dll!WSARecv 71AB4CB5 5 Bytes JMP 7FF91A15
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[408] ws2_32.dll!recv 71AB676F 5 Bytes JMP 7FF9196B
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[408] ws2_32.dll!WSASend 71AB68FA 5 Bytes JMP 7FF91B07
.text C:\WINDOWS\system32\svchost.exe[824] kernel32.dll!CreateProcessInternalW 7C8197B0 5 Bytes JMP 00634850
.text C:\WINDOWS\System32\svchost.exe[1000] ntdll.dll!NtProtectVirtualMemory 7C90D6EE 5 Bytes JMP 00F3000A
.text C:\WINDOWS\System32\svchost.exe[1000] ntdll.dll!NtWriteVirtualMemory 7C90DFAE 5 Bytes JMP 00F4000A
.text C:\WINDOWS\System32\svchost.exe[1000] ntdll.dll!KiUserExceptionDispatcher 7C90E47C 5 Bytes JMP 00F2000C
.text C:\WINDOWS\Explorer.EXE[1248] kernel32.dll!CreateProcessInternalW 7C8197B0 5 Bytes JMP 00B44850
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[2036] kernel32.dll!CreateProcessInternalW 7C8197B0 5 Bytes JMP 00154878
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[2036] USER32.dll!DialogBoxParamW 7E4247AB 5 Bytes JMP 3E2154D5 C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[2036] USER32.dll!SetWindowsHookExW 7E42820F 5 Bytes JMP 3E2E9AE9 C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[2036] USER32.dll!CallNextHookEx 7E42B3C6 5 Bytes JMP 3E2DD125 C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[2036] USER32.dll!CreateWindowExW 7E42D0A3 5 Bytes JMP 3E2EDB5C C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[2036] USER32.dll!UnhookWindowsHookEx 7E42D5F3 5 Bytes JMP 3E25467E C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[2036] USER32.dll!DialogBoxIndirectParamW 7E432072 5 Bytes JMP 3E3E53C7 C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[2036] USER32.dll!MessageBoxIndirectA 7E43A082 5 Bytes JMP 3E3E52F9 C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[2036] USER32.dll!DialogBoxParamA 7E43B144 5 Bytes JMP 3E3E5364 C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[2036] USER32.dll!MessageBoxExW 7E450838 5 Bytes JMP 3E3E51CA C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[2036] USER32.dll!MessageBoxExA 7E45085C 5 Bytes JMP 3E3E522C C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[2036] USER32.dll!DialogBoxIndirectParamA 7E456D7D 5 Bytes JMP 3E3E542A C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[2036] USER32.dll!MessageBoxIndirectW 7E4664D5 5 Bytes JMP 3E3E528E C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[2036] ole32.dll!CoCreateInstance 774FF1BC 5 Bytes JMP 3E2EDBB8 C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[2036] ole32.dll!OleLoadFromStream 7752983B 5 Bytes JMP 3E3E572F C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[2036] ws2_32.dll!send 71AB4C27 5 Bytes JMP 7FF91AD9
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[2036] ws2_32.dll!WSARecv 71AB4CB5 5 Bytes JMP 7FF91A15
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[2036] ws2_32.dll!recv 71AB676F 5 Bytes JMP 7FF9196B
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[2036] ws2_32.dll!WSASend 71AB68FA 5 Bytes JMP 7FF91B07
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[2252] kernel32.dll!CreateProcessInternalW 7C8197B0 5 Bytes JMP 00154878
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[2252] USER32.dll!DialogBoxParamW 7E4247AB 5 Bytes JMP 3E2154D5 C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[2252] USER32.dll!CreateWindowExW 7E42D0A3 5 Bytes JMP 3E2EDB5C C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[2252] USER32.dll!DialogBoxIndirectParamW 7E432072 5 Bytes JMP 3E3E53C7 C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[2252] USER32.dll!MessageBoxIndirectA 7E43A082 5 Bytes JMP 3E3E52F9 C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[2252] USER32.dll!DialogBoxParamA 7E43B144 5 Bytes JMP 3E3E5364 C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[2252] USER32.dll!MessageBoxExW 7E450838 5 Bytes JMP 3E3E51CA C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[2252] USER32.dll!MessageBoxExA 7E45085C 5 Bytes JMP 3E3E522C C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[2252] USER32.dll!DialogBoxIndirectParamA 7E456D7D 5 Bytes JMP 3E3E542A C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[2252] USER32.dll!MessageBoxIndirectW 7E4664D5 5 Bytes JMP 3E3E528E C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[2252] ws2_32.dll!send 71AB4C27 5 Bytes JMP 7FF91AD9
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[2252] ws2_32.dll!WSARecv 71AB4CB5 5 Bytes JMP 7FF91A15
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[2252] ws2_32.dll!recv 71AB676F 5 Bytes JMP 7FF9196B
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[2252] ws2_32.dll!WSASend 71AB68FA 5 Bytes JMP 7FF91B07
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[7420] kernel32.dll!CreateProcessInternalW 7C8197B0 5 Bytes JMP 00154878
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[7420] USER32.dll!DialogBoxParamW 7E4247AB 5 Bytes JMP 3E2154D5 C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[7420] USER32.dll!SetWindowsHookExW 7E42820F 5 Bytes JMP 3E2E9AE9 C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[7420] USER32.dll!CallNextHookEx 7E42B3C6 5 Bytes JMP 3E2DD125 C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[7420] USER32.dll!CreateWindowExW 7E42D0A3 5 Bytes JMP 3E2EDB5C C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[7420] USER32.dll!UnhookWindowsHookEx 7E42D5F3 5 Bytes JMP 3E25467E C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[7420] USER32.dll!DialogBoxIndirectParamW 7E432072 5 Bytes JMP 3E3E53C7 C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[7420] USER32.dll!MessageBoxIndirectA 7E43A082 5 Bytes JMP 3E3E52F9 C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[7420] USER32.dll!DialogBoxParamA 7E43B144 5 Bytes JMP 3E3E5364 C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[7420] USER32.dll!MessageBoxExW 7E450838 5 Bytes JMP 3E3E51CA C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[7420] USER32.dll!MessageBoxExA 7E45085C 5 Bytes JMP 3E3E522C C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[7420] USER32.dll!DialogBoxIndirectParamA 7E456D7D 5 Bytes JMP 3E3E542A C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[7420] USER32.dll!MessageBoxIndirectW 7E4664D5 5 Bytes JMP 3E3E528E C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[7420] ole32.dll!CoCreateInstance 774FF1BC 5 Bytes JMP 3E2EDBB8 C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[7420] ole32.dll!OleLoadFromStream 7752983B 5 Bytes JMP 3E3E572F C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[7420] ws2_32.dll!send 71AB4C27 5 Bytes JMP 7FF91AD9
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[7420] ws2_32.dll!WSARecv 71AB4CB5 5 Bytes JMP 7FF91A15
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[7420] ws2_32.dll!recv 71AB676F 5 Bytes JMP 7FF9196B
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[7420] ws2_32.dll!WSASend 71AB68FA 5 Bytes JMP 7FF91B07
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[7748] kernel32.dll!CreateProcessInternalW 7C8197B0 5 Bytes JMP 00154878
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[7748] USER32.dll!DialogBoxParamW 7E4247AB 5 Bytes JMP 3E2154D5 C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[7748] USER32.dll!SetWindowsHookExW 7E42820F 5 Bytes JMP 3E2E9AE9 C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[7748] USER32.dll!CallNextHookEx 7E42B3C6 5 Bytes JMP 3E2DD125 C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[7748] USER32.dll!CreateWindowExW 7E42D0A3 5 Bytes JMP 3E2EDB5C C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[7748] USER32.dll!UnhookWindowsHookEx 7E42D5F3 5 Bytes JMP 3E25467E C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[7748] USER32.dll!DialogBoxIndirectParamW 7E432072 5 Bytes JMP 3E3E53C7 C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[7748] USER32.dll!MessageBoxIndirectA 7E43A082 5 Bytes JMP 3E3E52F9 C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[7748] USER32.dll!DialogBoxParamA 7E43B144 5 Bytes JMP 3E3E5364 C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[7748] USER32.dll!MessageBoxExW 7E450838 5 Bytes JMP 3E3E51CA C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[7748] USER32.dll!MessageBoxExA 7E45085C 5 Bytes JMP 3E3E522C C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[7748] USER32.dll!DialogBoxIndirectParamA 7E456D7D 5 Bytes JMP 3E3E542A C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[7748] USER32.dll!MessageBoxIndirectW 7E4664D5 5 Bytes JMP 3E3E528E C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[7748] ole32.dll!CoCreateInstance 774FF1BC 5 Bytes JMP 3E2EDBB8 C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[7748] ole32.dll!OleLoadFromStream 7752983B 5 Bytes JMP 3E3E572F C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[7748] ws2_32.dll!send 71AB4C27 5 Bytes JMP 7FF91AD9
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[7748] ws2_32.dll!WSARecv 71AB4CB5 5 Bytes JMP 7FF91A15
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[7748] ws2_32.dll!recv 71AB676F 5 Bytes JMP 7FF9196B
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[7748] ws2_32.dll!WSASend 71AB68FA 5 Bytes JMP 7FF91B07
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[8088] kernel32.dll!CreateProcessInternalW 7C8197B0 5 Bytes JMP 00154878
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[8088] USER32.dll!DialogBoxParamW 7E4247AB 5 Bytes JMP 3E2154D5 C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[8088] USER32.dll!SetWindowsHookExW 7E42820F 5 Bytes JMP 3E2E9AE9 C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[8088] USER32.dll!CallNextHookEx 7E42B3C6 5 Bytes JMP 3E2DD125 C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[8088] USER32.dll!CreateWindowExW 7E42D0A3 5 Bytes JMP 3E2EDB5C C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[8088] USER32.dll!UnhookWindowsHookEx 7E42D5F3 5 Bytes JMP 3E25467E C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[8088] USER32.dll!DialogBoxIndirectParamW 7E432072 5 Bytes JMP 3E3E53C7 C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[8088] USER32.dll!MessageBoxIndirectA 7E43A082 5 Bytes JMP 3E3E52F9 C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[8088] USER32.dll!DialogBoxParamA 7E43B144 5 Bytes JMP 3E3E5364 C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[8088] USER32.dll!MessageBoxExW 7E450838 5 Bytes JMP 3E3E51CA C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[8088] USER32.dll!MessageBoxExA 7E45085C 5 Bytes JMP 3E3E522C C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[8088] USER32.dll!DialogBoxIndirectParamA 7E456D7D 5 Bytes JMP 3E3E542A C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[8088] USER32.dll!MessageBoxIndirectW 7E4664D5 5 Bytes JMP 3E3E528E C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[8088] ole32.dll!CoCreateInstance 774FF1BC 5 Bytes JMP 3E2EDBB8 C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[8088] ole32.dll!OleLoadFromStream 7752983B 5 Bytes JMP 3E3E572F C:\WINDOWS\system32\IEFRAME.dll (Internet Explorer/Microsoft Corporation)
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[8088] ws2_32.dll!send 71AB4C27 5 Bytes JMP 7FF91AD9
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[8088] ws2_32.dll!WSARecv 71AB4CB5 5 Bytes JMP 7FF91A15
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[8088] ws2_32.dll!recv 71AB676F 5 Bytes JMP 7FF9196B
.text C:\Program Files\Internet Explorer\IEXPLORE.EXE[8088] ws2_32.dll!WSASend 71AB68FA 5 Bytes JMP 7FF91B07
---- User IAT/EAT - GMER 1.0.15 ----
IAT C:\Program Files\Internet Explorer\IEXPLORE.EXE[408] @ C:\WINDOWS\system32\ole32.dll [KERNEL32.dll!LoadLibraryExW] [451F1ACB] C:\Program Files\Internet Explorer\xpshims.dll (Internet Explorer Compatibility Shims for XP/Microsoft Corporation)
IAT C:\Program Files\Internet Explorer\IEXPLORE.EXE[2036] @ C:\WINDOWS\system32\ole32.dll [KERNEL32.dll!LoadLibraryExW] [451F1ACB] C:\Program Files\Internet Explorer\xpshims.dll (Internet Explorer Compatibility Shims for XP/Microsoft Corporation)
IAT C:\Program Files\Internet Explorer\IEXPLORE.EXE[7420] @ C:\WINDOWS\system32\ole32.dll [KERNEL32.dll!LoadLibraryExW] [451F1ACB] C:\Program Files\Internet Explorer\xpshims.dll (Internet Explorer Compatibility Shims for XP/Microsoft Corporation)
IAT C:\Program Files\Internet Explorer\IEXPLORE.EXE[7748] @ C:\WINDOWS\system32\ole32.dll [KERNEL32.dll!LoadLibraryExW] [451F1ACB] C:\Program Files\Internet Explorer\xpshims.dll (Internet Explorer Compatibility Shims for XP/Microsoft Corporation)
IAT C:\Program Files\Internet Explorer\IEXPLORE.EXE[8088] @ C:\WINDOWS\system32\ole32.dll [KERNEL32.dll!LoadLibraryExW] [451F1ACB] C:\Program Files\Internet Explorer\xpshims.dll (Internet Explorer Compatibility Shims for XP/Microsoft Corporation)
---- Modules - GMER 1.0.15 ----
Module (noname) (*** hidden *** ) B6871000-B688B000 (106496 bytes)
---- Files - GMER 1.0.15 ----
File C:\Documents and Settings\All Users\Application Data\jbamaaa.tmp 0 bytes
File C:\WINDOWS\$NtUninstallKB25330$\3939797030 0 bytes
File C:\WINDOWS\$NtUninstallKB25330$\3939797030\@ 2048 bytes
File C:\WINDOWS\$NtUninstallKB25330$\3939797030\bckfg.tmp 854 bytes
File C:\WINDOWS\$NtUninstallKB25330$\3939797030\cfg.ini 237 bytes
File C:\WINDOWS\$NtUninstallKB25330$\3939797030\Desktop.ini 4608 bytes
File C:\WINDOWS\$NtUninstallKB25330$\3939797030\keywords 0 bytes
File C:\WINDOWS\$NtUninstallKB25330$\3939797030\kwrd.dll 223744 bytes
File C:\WINDOWS\$NtUninstallKB25330$\3939797030\L 0 bytes
File C:\WINDOWS\$NtUninstallKB25330$\3939797030\L\ighokgwu 62976 bytes
File C:\WINDOWS\$NtUninstallKB25330$\3939797030\lsflt7.ver 5176 bytes
File C:\WINDOWS\$NtUninstallKB25330$\3939797030\U 0 bytes
File C:\WINDOWS\$NtUninstallKB25330$\3939797030\U\00000001.@ 2048 bytes
File C:\WINDOWS\$NtUninstallKB25330$\3939797030\U\00000002.@ 224768 bytes
File C:\WINDOWS\$NtUninstallKB25330$\3939797030\U\00000004.@ 1024 bytes
File C:\WINDOWS\$NtUninstallKB25330$\3939797030\U\80000000.@ 11264 bytes
File C:\WINDOWS\$NtUninstallKB25330$\3939797030\U\80000004.@ 12800 bytes
File C:\WINDOWS\$NtUninstallKB25330$\3939797030\U\80000032.@ 73216 bytes
File C:\WINDOWS\$NtUninstallKB25330$\3939797030\version 858 bytes
File C:\WINDOWS\$NtUninstallKB25330$\487665194 0 bytes
---- EOF - GMER 1.0.15 ----
6. ### Bobbye (Helper on the Fringe)
The forums are monitored and occasionally some threads are picked up and held as Moderated. We try to check them and release them so the members can go ahead with the logs. It can be confusing but it isn't something we have any control over. I just tried to explain that and hope I didn't come across as finding fault.
These rogue programs are doing a number on all of us! There are 5 or 6 rogue malware programs very active now that have some similar symptoms, but have different fixes.
I'm not sure if you're following this: Please follow these steps: Preliminary Virus and Malware Removal.
When you have finished, leave the logs for review in your next reply .
NOTE: Logs must be pasted in the replies. Attached logs will not be reviewed.
---------------------------
So you need to go ahead with the DDS scan and leave the 2 logs.
After you finish with DDS, you can go ahead and run the following:
Please advise me if you have AVG on the system before you run Combofix.
Please note: If you have previously run Combofix and it's still on the system, please uninstall it. Then download the current version and do the scan: Uninstall directions, if needed
• Click START> then RUN
• Now type Combofix /Uninstall in the runbox and click OK. Note the space between the X and the U, it needs to be there.
--------------------------------------
Expect these- they are normal:
1. If asked to install or or update the Recovery Console, allow. (you will need internet connection for this)
2. Before you run the Combofix scan, please disable any security software you have running.
3. Combofix may need to reboot your computer more than once to do its job this is normal.
• Double click combofix.exe & follow the prompts.
• If prompted for Recovery Console, please allow.
• Once installed, you should see a blue screen prompt that says:
• The Recovery Console was successfully installed.
• Note: If Combofix was downloaded to a flash drive, the Recovery Console will not install- just bypass and go on.
• Note: No query will be made if the Recovery Console is already on the system.
• Close/disable all anti virus and anti malware programs
(If you need help with this, please see HERE)
• Close any open browsers.
• Click on Yes, to continue scanning for malware
• If Combofix asks you to update the program, allow
• When the scan completes , a report will be generated-it will open a text window. Please paste the C:\ComboFix.txt in next reply..
Note 1:Do not mouse-click Combofix's window while it is running. That may cause it to stall.
Note 2:If you receive an error "Illegal operation attempted on a registry key that has been marked for deletion", restart the computer.
Note 3:CF disconnects your machine from the internet. The connection is automatically restored before CF completes its run. If CF runs into difficulty and terminates prematurely, the connection can be manually restored by restarting your machine.
7. ### Bokkan (Newcomer, in training)
So DDS always freezes the box. I tried originally as it says in the first post, only to result in freezing and having to power cycle. So I wiped the old copy of DDS, and tried one more time and let it run for 6 hours, but in the end it froze again, and I had to power cycle. But now when i rebooted, it is stuck on a boot loop at verifying DMI pool data.
Not sure what to do at this time, I'm thinking i might just have to nuke it ><
8. ### Bobbye (Helper on the Fringe)
Please note: I will be Offline on Wednesday, 2/8 and Thursday, 2/9. When I return on Friday, 2/10, I will pick up the oldest threads first.
9. ### Bobbye (Helper on the Fringe)
Thank you for your patience. Let's tackle this again:
A note: One of the malwares you had was (Trojan.SpyEyes). This is a trojan that captures keystrokes and steals login credentials through a method known as "form grabbing". Trojan:Win32/Spyeye sends captured data to a remote attacker, may download updates and has a rootkit component to hide its malicious activity.
Even though some entries can be removed, I cannot guarantee that the system hasn't been compromised. To that end. please change all of your passwords and closely monitor any internet financial transactions for suspicious activity.
==========================================
(If you are unable to download the file for some reason, then TDSS may be blocking it. You would then need to download it first to a clean computer and then transfer it to the infected one using an external drive or USB flash drive.)
• Right-click the tdsskiller.zip file> Select Extract All into a folder on the infected (or potentially infected) PC.
• Double click on TDSSKiller.exe to run the scan
• When the scan is over, the utility outputs a list of detected objects with description.
The utility automatically selects an action (Cure or Delete) for malicious objects.
The utility prompts the user to select an action to apply to suspicious objects (Skip, by default).
• Select the action Quarantine to quarantine detected objects.
The default quarantine folder is in the system disk root folder, e.g.: C:\TDSSKiller_Quarantine\23.07.2010_15.31.43
• After clicking Next, the utility applies selected actions and outputs the result.
• A reboot is required after disinfection.
==========================================
I'd like you to run Combofix- but it won't run with AVG. You will need to temporarily uninstall AVG as follows:
If you do not have AVG, you can skip the AppRemover section and go on to Combofix.
1. Double click the setup on the desktop> click Next
2. Select “Remove Security Application”
3. Let scan finish to determine security apps
4. A screen like below will appear:
5. Click on Next after choice has been made
6. Check the AVG program you want to uninstall
7. After uninstall shows complete, follow online prompts to Exit the program.
Temporary AV (use one): >> Use only if you removed AVG and do not have a functioning, up-to-date AV.
Avira-AntiVir-Personal-Free-Antivirus
Avast Free Version
=============================
Please note: If you have previously run Combofix and it's still on the system, please uninstall it. Then download the current version and do the scan: Uninstall directions, if needed
• Click START> then RUN
• Now type Combofix /Uninstall in the runbox and click OK. Note the space between the X and the U, it needs to be there.
--------------------------------------
Expect these- they are normal:
1. If asked to install or update the Recovery Console, allow it. (you will need an internet connection for this)
2. Before you run the Combofix scan, please disable any security software you have running.
3. Combofix may need to reboot your computer more than once to do its job; this is normal.
• Double click combofix.exe & follow the prompts.
• If prompted for Recovery Console, please allow.
• Once installed, you should see a blue screen prompt that says:
• The Recovery Console was successfully installed.
• Note: If Combofix was downloaded to a flash drive, the Recovery Console will not install - just bypass and go on.
• Note: No query will be made if the Recovery Console is already on the system.
• Close/disable all antivirus and anti-malware programs
(If you need help with this, please see HERE)
• Close any open browsers.
• Click on Yes to continue scanning for malware
• If Combofix asks you to update the program, allow it.
• When the scan completes, a report will be generated and will open in a text window. Please paste the contents of C:\ComboFix.txt in your next reply.
Note 1: Do not mouse-click Combofix's window while it is running. That may cause it to stall.
Note 2: If you receive an error "Illegal operation attempted on a registry key that has been marked for deletion", restart the computer.
Note 3: CF disconnects your machine from the internet. The connection is automatically restored before CF completes its run. If CF runs into difficulty and terminates prematurely, the connection can be manually restored by restarting your machine.
=============================================
DDS won't run .scr:
Unpack (unzip) the file onto your desktop and double-click it. You will be asked if you wish to merge the file with your registry; say Yes.
You should then be able to run DDS.scr. It's the .scr file extension causing the problem.
========================================
To run the Eset Online Virus Scan:
If you use Internet Explorer:
1. Open the ESETOnlineScan
If you are using a browser other than Internet Explorer
3. Open Eset Smart Installer
[o] Click on the esetsmartinstaller_enu.exe link and save it to the desktop.
[o] Double click on the desktop icon to run.
[o] After successful installation of the ESET Smart Installer, the ESET Online Scanner will be launched in a new Window
4. Continue with the directions.
6. Click Start button
7. Accept any security warnings from your browser.
8. Uncheck 'Remove found threats'
9. Check 'Scan archives'
10. Leave remaining settings as is.
11. Press the Start button.
13. When the scan completes, press List of found threats
15. Push the Back button, then Finish
NOTE: If no malware is found then no log will be produced. Let me know if this is the case.
10. ### Bokkan (Newcomer, in training)
I'd like to try all that stuff, but as it stands now, since the last time I tried to run DDS, the computer won't start. It gets stuck in a boot loop at 'Verifying DMI pool data'. I tried disconnecting the CD drive and moving the HD's position on the IDE cable, and tried changing the HD jumper from Cable Select (its original setting) to hard-coded Master or Slave depending on its position on the cable. No change - it just starts to load, then reboots. Kinda unsure what to do at this point.
11. ### Bobbye (Helper on the Fringe)
Please refer to the information in this link: Computer stops at verifying DMI pool data
It has a list of causes and solutions. See if you can work through this to restore the system to stability. It is well laid out; follow it carefully.
12. ### Bokkan (Newcomer, in training)
OK, ty, I'll look over this info and give it a shot, but it might take some time as I have a busy week with the holiday and work demands on my time. I'll post back when I'm done, but it will take a few days at least.
13. ### Bobbye (Helper on the Fringe)
Writing myself a note to keep thread open.
14. ### Bokkan (Newcomer, in training)
Hi there Bobbye, I finally got around to trying every last one of the steps you had me try, and no dice (I don't have a floppy, so I was unable to try that one). Honestly, I'm ready to throw in the towel for now and just use a backup hard drive and install a copy of Windows on it. I know you guys don't like not finishing a project, but would you mind if we scrap this one?
15. ### Bobbye (Helper on the Fringe)
It is your system; if you want to stop and do the reformat/reinstall, it's your decision. But I'd like to recap briefly:
1. The monitor problem you described early on sounds more like a display/graphics card problem rather than malware.
2. The DDS scan doesn't change/delete/'quarantine' anything. So the problem you noted after running it wasn't caused by DDS.
3. I can give you help that should allow the scans to run. But right now, I don't have much to go on.
Your call. Stop or try again. Let me know.
16. ### Bokkan (Newcomer, in training)
Sorry I haven't replied to this in a while; I just haven't had the time. I think I'd rather opt to just do my backup plan. At the moment my job is stressing me way out and I barely have time for myself. I really appreciate all the help you've given me so far.
17. ### Bobbye (Helper on the Fringe)
Okay, whatever works best for you.
I know this well! Best of luck. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9459010362625122, "perplexity": 28741.260424260166}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703682988/warc/CC-MAIN-20130516112802-00068-ip-10-60-113-184.ec2.internal.warc.gz"} |
https://hal-univ-rennes1.archives-ouvertes.fr/hal-02177792 | # Featuring non-covalent interactions in m-xylylenediaminium bis(perchlorate) monohydrate Synthesis, characterization and Hirshfeld surface analysis
* Corresponding author
Abstract: The novel compound (C8H14N2)(ClO4)2·H2O (MXClO4) has been elaborated by slow evaporation at room temperature and characterized by X-ray single crystal analysis, Hirshfeld surface analysis, differential scanning calorimetry (DSC), and IR and Raman spectroscopies. MXClO4 crystallized in the monoclinic system, space group P21/c, with a = 5.5781(10) Å, b = 11.1137(3) Å, c = 23.0731(7) Å, β = 95.414(1)°, V = 1424.0(3) Å³ and Z = 4. The asymmetric unit of the title compound contains one m-xylylenediaminium cation, two perchlorate anions and one water molecule. The atomic arrangement of the title compound can be described as a three-dimensional network. The Cl2 chlorine atoms are discrete, while Cl1 forms with the water molecules corrugated C2^2(5) infinite chains of formula (ClH2O5)n^(n-) extending along the a-axis at z = 1/4 and 3/4. These inorganic entities are linked to the organic (C8H14N2)^2+ cations through hydrogen bonding. The intermolecular interactions were further evaluated using three-dimensional Hirshfeld surfaces and two-dimensional fingerprint plots. The hydrogen bonding interactions associated with OW-H···O, N-H···O, N-H···OW and C-H···O contacts represent the top fraction of 69.1%, followed by those of the H···H type, contributing 17.5%.
Document type: Journal articles
Cited literature [41 references]
https://hal-univ-rennes1.archives-ouvertes.fr/hal-02177792
Contributor: Laurent Jonchère
Submitted on: Tuesday, September 17, 2019 - 3:05:55 PM
Last modification on: Wednesday, April 7, 2021 - 3:18:12 PM
### Citation
Afef Guesmi, Thierry Roisnel, Houda Marouani. Featuring non-covalent interactions in m-xylylenediaminium bis(perchlorate) monohydrate Synthesis, characterization and Hirshfeld surface analysis. Journal of Molecular Structure, Elsevier, 2019, 1194, pp.66-72. ⟨10.1016/j.molstruc.2019.04.124⟩. ⟨hal-02177792⟩
https://arxiv.org/abs/1304.1181 | astro-ph.CO
# Inflationary Super-Hubble Waves and the Size of the Universe
Abstract: The effect of the scalar spectral index on inflationary super-Hubble waves is to amplify/damp large wavelengths according to whether the spectrum is red ($n_{s}<1$) or blue ($n_{s}>1$). As a consequence, the large-scale temperature correlation function will unavoidably change sign at some angle if our spectrum is red, while it will always be positive if it is blue. We show that this inflationary filtering property also affects our estimates of the size of the homogeneous patch of the universe through the Grishchuk-Zel'dovich effect. Using the recent quadrupole measurement of ESA's Planck mission, we find that the homogeneous patch of universe is at least 87 times bigger than our visible universe if we accept Planck's best fit value $n_{s}=0.9624$. An independent estimation of the size of the universe could be used to independently constrain $n_{s}$, thus narrowing the space of inflationary models.
Comments: This paper has been withdrawn by the author due to an error in the estimation of a lower bound in the main result
Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO); General Relativity and Quantum Cosmology (gr-qc)
Cite as: arXiv:1304.1181 [astro-ph.CO] (or arXiv:1304.1181v2 [astro-ph.CO] for this version)
## Submission history
From: Thiago Pereira
[v1] Wed, 3 Apr 2013 20:34:13 UTC (319 KB)
[v2] Sun, 22 Sep 2013 21:47:24 UTC (0 KB) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5479877591133118, "perplexity": 1663.9492689241208}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376827963.70/warc/CC-MAIN-20181216165437-20181216191437-00431.warc.gz"} |
https://www.physicsforums.com/threads/engineering-mathematics.394503/ | # Engineering Mathematics
1. Apr 12, 2010
### matqkks
Does anyone know where I can find engineering exam questions on the web? I am trying to do a survey of various questions from different universities.
2. Feb 4, 2011
3. May 9, 2011
### samuelarnold
Just by Googling you can find lots of sample questions, solved and unsolved.
4. Jan 7, 2012
### diegojolin
I am trying to solve an equation that involves a sum series, where the unknown is the number of times I have to add. This is easy to solve just by guessing when the number of additions is small, but if it gets large... is there any analytic way to solve this kind of equation?
form:
sum(e^n) for n = b to x, equal to a
5. Jan 7, 2012
### gomunkul51
@diegojolin: find a closed expression for the sum(e^k), from 0 (or 1) to n.
then equate it to the known sum and solve for n.
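To spell that out for the posted form (a sketch; treating the upper limit $x$ as real first, then rounding to an integer at the end): the sum is geometric with ratio $e$, so

$$\sum_{n=b}^{x} e^{n} = \frac{e^{x+1}-e^{b}}{e-1} = a \quad\Longrightarrow\quad x = \ln\!\bigl(a\,(e-1)+e^{b}\bigr) - 1 .$$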
6. Jan 8, 2012
### diegojolin
OK, thanks. I've tried it, and at least the computer seems to work faster this way.
https://www.clutchprep.com/chemistry/practice-problems/68906/classify-each-of-the-following-as-either-a-lewis-acid-or-a-lewis-base-h-b-oh-3-c | # Problem: Classify each of the following as either a Lewis acid or a Lewis base:H+ B(OH)3Cl-P(CH3)3
###### FREE Expert Solution
We’re being asked to classify the given species as either a Lewis acid or Lewis base.
But first, let’s define what is a Lewis and Lewis base.
Based on the Lewis definition:
A Lewis acid is an electron pair acceptor.
Some characteristics of a Lewis acid:
When hydrogen is connected to an electronegative element such as P, O, N, S or halogens
▪ hydrogen gains a partially positive charge → makes hydrogen act as a Lewis acid
Positively charged metal ions, which can accept electron pairs from donor species.
A Lewis base, in contrast, is an electron pair donor: a species with lone pairs or a negative charge.
Applying these definitions: H+ is a Lewis acid (it accepts an electron pair); B(OH)3 is a Lewis acid (boron has an incomplete octet and an empty orbital that can accept an electron pair); Cl- is a Lewis base (it donates one of its lone pairs); and P(CH3)3 is a Lewis base (the lone pair on phosphorus can be donated).
###### Problem Details
Classify each of the following as either a Lewis acid or a Lewis base:
H+
B(OH)3
Cl-
P(CH3)3 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8414063453674316, "perplexity": 6675.639232612328}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141723602.80/warc/CC-MAIN-20201203062440-20201203092440-00126.warc.gz"} |
https://socratic.org/questions/how-do-you-solve-sqrt-3x-4-sqrt-2x-7-3 | How do you solve sqrt(3x-4) - sqrt(2x-7) = 3?
Oct 18, 2015
$x = 80.41875$ (the other candidate produced by squaring, $x = 3.58125$, turns out to be extraneous)
Explanation:
Since you're dealing with square roots, it's always a good idea to start by determining the intervals that the possible solutions must fall in.
You know that for real numbers, you can only take the square root of non-negative numbers. This means that you need
$3 x - 4 \ge 0 \implies x \ge \frac{4}{3}$
$2 x - 7 \ge 0 \implies x \ge \frac{7}{2}$
Merge these two conditions to get $x \ge \frac{7}{2}$. This means that any value of $x$ that does not satisfy this condition will not be a valid solution to the original equation.
Next, square both sides of the equation to reduce the number of radical terms
${\left(\sqrt{3 x - 4} - \sqrt{2 x - 7}\right)}^{2} = {3}^{2}$
${\left(\sqrt{3 x - 4}\right)}^{2} - 2 \sqrt{\left(3 x - 4\right) \left(2 x - 7\right)} + {\left(\sqrt{2 x - 7}\right)}^{2} = 9$
$3 x - 4 - 2 \sqrt{\left(3 x - 4\right) \left(2 x - 7\right)} + 2 x - 7 = 9$
Isolate the remaining radical term on one side of the equation
$- 2 \sqrt{\left(3 x - 4\right) \left(2 x - 7\right)} = 9 - 5 x + 11$
$- 2 \sqrt{\left(3 x - 4\right) \left(2 x - 7\right)} = 5 \cdot \left(4 - x\right)$
Now square both sides of the equation again
${\left(- 2 \sqrt{\left(3 x - 4\right) \left(2 x - 7\right)}\right)}^{2} = {\left[5 \left(4 - x\right)\right]}^{2}$
$4 \cdot \left(3 x - 4\right) \left(2 x - 7\right) = 25 \cdot \left(16 - 8 x + {x}^{2}\right)$
$24 {x}^{2} - 116 x + 112 = 400 - 200 x + 25 {x}^{2}$
Move all the terms on one side of the equation to get
${x}^{2} - 84 x + 288 = 0$
Use the quadratic formula to get
${x}_{1 , 2} = \frac{-(-84) \pm \sqrt{{\left(- 84\right)}^{2} - 4 \cdot 1 \cdot 288}}{2 \cdot 1}$
${x}_{1 , 2} = \frac{84 \pm \sqrt{5904}}{2} = \frac{84 \pm 76.8375}{2}$
The two solutions will be
${x}_{1} = \frac{84 + 76.8375}{2} = 80.41875$
and
${x}_{2} = \frac{84 - 76.8375}{2} = 3.58125$
Since both solutions satisfy the initial condition $x \ge \frac{7}{2}$, both are candidates; however, squaring an equation can introduce extraneous roots, so each must be checked in the original equation.
Plug these values into the original equation
$\sqrt{3.58125 \cdot 3 - 4} - \sqrt{3.58125 \cdot 2 - 7} \approx 2.5969 - 0.4031 = 2.1938 \ne 3$
so $x = 3.58125$ is extraneous - indeed, the step $-2\sqrt{\left(3x-4\right)\left(2x-7\right)} = 5 \cdot (4 - x)$ already forces $4 - x \le 0$, i.e. $x \ge 4$. For the other candidate,
$\sqrt{80.41875 \cdot 3 - 4} - \sqrt{80.41875 \cdot 2 - 7} \approx 15.4031 - 12.4031 = 3$
so the only solution is $x = 80.41875$.
https://docs.julialang.org/en/v1.5/devdocs/llvm/ | # Working with LLVM
This is not a replacement for the LLVM documentation, but a collection of tips for working on LLVM for Julia.
## Overview of Julia to LLVM Interface
Julia dynamically links against LLVM by default. Build with USE_LLVM_SHLIB=0 to link statically.
The code for lowering Julia AST to LLVM IR or interpreting it directly is in directory src/.
| File | Description |
| --- | --- |
| builtins.c | Builtin functions |
| ccall.cpp | Lowering ccall |
| cgutils.cpp | Lowering utilities, notably for array and tuple accesses |
| codegen.cpp | Top-level of code generation, pass list, lowering builtins |
| debuginfo.cpp | Tracks debug information for JIT code |
| disasm.cpp | Handles native object file and JIT code disassembly |
| gf.c | Generic functions |
| intrinsics.cpp | Lowering intrinsics |
| llvm-simdloop.cpp | Custom LLVM pass for @simd |
| sys.c | I/O and operating system utility functions |
Some of the .cpp files form a group that compile to a single object.
The difference between an intrinsic and a builtin is that a builtin is a first class function that can be used like any other Julia function. An intrinsic can operate only on unboxed data, and therefore its arguments must be statically typed.
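For example (a quick illustration at the REPL; Core.tuple is implemented in builtins.c and add_int is one of the intrinsics lowered in intrinsics.cpp):

```julia
# A builtin is an ordinary first-class function: it can be passed
# around and applied to boxed values of any type.
f = Core.tuple
f(1, "two")                                  # returns (1, "two")

# An intrinsic operates only on unboxed, statically typed data, so
# its arguments must all have a concrete primitive type.
Core.Intrinsics.add_int(Int32(1), Int32(2))  # returns Int32(3)
```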
### Alias Analysis
Julia currently uses LLVM's Type Based Alias Analysis. To find the comments that document the inclusion relationships, look for static MDNode* in src/codegen.cpp.
The -O option enables LLVM's Basic Alias Analysis.
## Building Julia with a different version of LLVM
The default version of LLVM is specified in deps/Versions.make. You can override it by creating a file called Make.user in the top-level directory and adding a line to it such as:
LLVM_VER = 6.0.1
Besides the LLVM release numerals, you can also use LLVM_VER = svn to build against the latest development version of LLVM.
You can also specify to build a debug version of LLVM, by setting either LLVM_DEBUG = 1 or LLVM_DEBUG = Release in your Make.user file. The former will be a fully unoptimized build of LLVM and the latter will produce an optimized build of LLVM. Depending on your needs, the latter will suffice and is quite a bit faster. If you use LLVM_DEBUG = Release you will also want to set LLVM_ASSERTIONS = 1 to enable diagnostics for different passes. Only LLVM_DEBUG = 1 implies that option by default.
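For example, a Make.user that builds against LLVM trunk as an optimized build with assertions enabled might look like this (values illustrative):

```make
LLVM_VER = svn
LLVM_DEBUG = Release
LLVM_ASSERTIONS = 1
```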
## Passing options to LLVM
You can pass options to LLVM via the environment variable JULIA_LLVM_ARGS. Here are example settings using bash syntax:
• export JULIA_LLVM_ARGS=-print-after-all dumps IR after each pass.
• export JULIA_LLVM_ARGS=-debug-only=loop-vectorize dumps LLVM DEBUG(...) diagnostics for the loop vectorizer. If you get warnings about "Unknown command line argument", rebuild LLVM with LLVM_ASSERTIONS = 1.
## Debugging LLVM transformations in isolation
On occasion, it can be useful to debug LLVM's transformations in isolation from the rest of the Julia system, e.g. because reproducing the issue inside julia would take too long, or because one wants to take advantage of LLVM's tooling (e.g. bugpoint). To get unoptimized IR for the entire system image, pass the --output-unopt-bc unopt.bc option to the system image build process, which will output the unoptimized IR to an unopt.bc file. This file can then be passed to LLVM tools as usual. libjulia can function as an LLVM pass plugin and can be loaded into LLVM tools, to make julia-specific passes available in this environment. In addition, it exposes the -julia meta-pass, which runs the entire Julia pass-pipeline over the IR. As an example, to generate a system image, one could do:
opt -load libjulia.so -julia -o opt.bc unopt.bc
llc -o sys.o opt.bc
cc -shared -o sys.so sys.o
This system image can then be loaded by julia as usual.
Alternatively, you can use --output-jit-bc jit.bc to obtain a trace of all IR passed to the JIT. This is useful for code that cannot be run as part of the sysimg generation process (e.g. because it creates unserializable state). However, the resulting jit.bc does not include sysimage data, and can thus not be used as such.
It is also possible to dump an LLVM IR module for just one Julia function, using:
fun, T = +, Tuple{Int,Int} # Substitute your function of interest here
optimize = false
open("plus.ll", "w") do file
println(file, InteractiveUtils._dump_function(fun, T, false, false, false, true, :att, optimize, :default))
end
These files can be processed the same way as the unoptimized sysimg IR shown above.
## Improving LLVM optimizations for Julia
Improving LLVM code generation usually involves either changing Julia lowering to be more friendly to LLVM's passes, or improving a pass.
If you are planning to improve a pass, be sure to read the LLVM developer policy. The best strategy is to create a code example in a form where you can use LLVM's opt tool to study it and the pass of interest in isolation.
1. Create an example Julia code of interest.
2. Use JULIA_LLVM_ARGS=-print-after-all to dump the IR.
3. Pick out the IR at the point just before the pass of interest runs.
4. Strip the debug metadata and fix up the TBAA metadata by hand.
The last step is labor intensive. Suggestions on a better way would be appreciated.
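One possible shape for steps 2 and 3 on the command line (file names are placeholders):

```bash
# Dump IR after every pass while running the example (dumps go to stderr)
JULIA_LLVM_ARGS=-print-after-all julia example.jl 2> dumps.ll

# After hand-extracting the module just before the pass of interest
# (step 3), replay the Julia pipeline over it in isolation:
opt -load libjulia.so -julia -S before.ll -o after.ll
```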
## The jlcall calling convention
Julia has a generic calling convention for unoptimized code, which looks somewhat as follows:
jl_value_t *any_unoptimized_call(jl_value_t *, jl_value_t **, int);
where the first argument is the boxed function object, the second argument is an on-stack array of arguments and the third is the number of arguments. Now, we could perform a straightforward lowering and emit an alloca for the argument array. However, this would betray the SSA nature of the uses at the call site, making optimizations (including GC root placement), significantly harder. Instead, we emit it as follows:
%bitcast = bitcast @any_unoptimized_call to %jl_value_t *(*)(%jl_value_t *, %jl_value_t *)
call cc 37 %jl_value_t *%bitcast(%jl_value_t *%arg1, %jl_value_t *%arg2)
The special cc 37 annotation marks the fact that this call site is really using the jlcall calling convention. This allows us to retain the SSA-ness of the uses throughout the optimizer. GC root placement will later lower this call to the original C ABI. In the code the calling convention number is represented by the JLCALL_F_CC constant. In addition, there is the JLCALL_CC calling convention which functions similarly, but omits the first argument.
## GC root placement
GC root placement is done by an LLVM pass late in the pass pipeline. Doing GC root placement this late enables LLVM to make more aggressive optimizations around code that requires GC roots, as well as allowing us to reduce the number of required GC roots and GC root store operations (since LLVM doesn't understand our GC, it wouldn't otherwise know what it is and is not allowed to do with values stored to the GC frame, so it'll conservatively do very little). As an example, consider an error path
if some_condition()
#= Use some variables maybe =#
error("An error occurred")
end
During constant folding, LLVM may discover that the condition is always false, and can remove the basic block. However, if GC root lowering is done early, the GC root slots used in the deleted block, as well as any values kept alive in those slots only because they were used in the error path, would be kept alive by LLVM. By doing GC root lowering late, we give LLVM the license to do any of its usual optimizations (constant folding, dead code elimination, etc.), without having to worry (too much) about which values may or may not be GC tracked.
However, in order to be able to do late GC root placement, we need to be able to identify a) which pointers are GC tracked and b) all uses of such pointers. The goal of the GC placement pass is thus simple:
Minimize the number of needed GC roots/stores to them subject to the constraint that at every safepoint, any live GC-tracked pointer (i.e. for which there is a path after this point that contains a use of this pointer) is in some GC slot.
### Representation
The primary difficulty is thus choosing an IR representation that allows us to identify GC-tracked pointers and their uses, even after the program has been run through the optimizer. Our design makes use of three LLVM features to achieve this:
• Custom address spaces
• Operand Bundles
• Non-integral pointers
Custom address spaces allow us to tag every pointer with an integer that needs to be preserved through optimizations. The compiler may not insert casts between address spaces that did not exist in the original program and it must never change the address space of a pointer on a load/store/etc operation. This allows us to annotate which pointers are GC-tracked in an optimizer-resistant way. Note that metadata would not be able to achieve the same purpose. Metadata is supposed to always be discardable without altering the semantics of the program. However, failing to identify a GC-tracked pointer alters the resulting program behavior dramatically - it'll probably crash or return wrong results. We currently use four different address spaces (their numbers are defined in src/codegen_shared.cpp):
• GC Tracked Pointers (currently 10): These are pointers to boxed values that may be put into a GC frame. It is loosely equivalent to a jl_value_t* pointer on the C side. N.B. It is illegal to ever have a pointer in this address space that may not be stored to a GC slot.
• Derived Pointers (currently 11): These are pointers that are derived from some GC tracked pointer. Uses of these pointers generate uses of the original pointer. However, they need not themselves be known to the GC. The GC root placement pass MUST always find the GC tracked pointer from which this pointer is derived and use that as the pointer to root.
• Callee Rooted Pointers (currently 12): This is a utility address space to express the notion of a callee rooted value. All values of this address space MUST be storable to a GC root (though it is possible to relax this condition in the future), but unlike the other pointers need not be rooted if passed to a call (they do still need to be rooted if they are live across another safepoint between the definition and the call).
• Pointers loaded from tracked object (currently 13): This is used by arrays, which themselves contain a pointer to the managed data. This data area is owned by the array, but is not a GC-tracked object by itself. The compiler guarantees that as long as this pointer is live, the object that this pointer was loaded from will keep being live.
### Invariants
The GC root placement pass makes use of several invariants, which need to be observed by the frontend and are preserved by the optimizer.
First, only the following address space casts are allowed:
• 0->{Tracked,Derived,CalleeRooted}: It is allowable to decay an untracked pointer to any of the others. However, do note that the optimizer has broad license to not root such a value. It is never safe to have a value in address space 0 in any part of the program if it is (or is derived from) a value that requires a GC root.
• Tracked->Derived: This is the standard decay route for interior values. The placement pass will look for these to identify the base pointer for any use.
• Tracked->CalleeRooted: Addrspace CalleeRooted serves merely as a hint that a GC root is not required. However, do note that the Derived->CalleeRooted decay is prohibited, since pointers should generally be storable to a GC slot, even in this address space.
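As a concrete illustration of the standard Tracked->Derived decay (a minimal sketch; the type and the byte offset are illustrative):

```llvm
; decay the tracked pointer before doing any address arithmetic
%derived = addrspacecast %jl_value_t addrspace(10)* %obj to %jl_value_t addrspace(11)*
%bytes   = bitcast %jl_value_t addrspace(11)* %derived to i8 addrspace(11)*
; getelementptr is allowed in the Derived address space (see below)
%fieldp  = getelementptr i8, i8 addrspace(11)* %bytes, i64 8
```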
Now let us consider what constitutes a use:
• Loads whose loaded value is in one of the address spaces
• Stores of a value in one of the address spaces to a location
• Stores to a pointer in one of the address spaces
• Calls for which a value in one of the address spaces is an operand
• Calls in jlcall ABI, for which the argument array contains a value in one of the address spaces
• Return instructions.
We explicitly allow load/stores and simple calls in address spaces Tracked/Derived. Elements of jlcall argument arrays must always be in address space Tracked (it is required by the ABI that they are valid jl_value_t* pointers). The same is true for return instructions (though note that struct return arguments are allowed to have any of the address spaces). The only allowable use of an address space CalleeRooted pointer is to pass it to a call (which must have an appropriately typed operand).
Further, we disallow getelementptr in addrspace Tracked. This is because unless the operation is a noop, the resulting pointer will not be validly storable to a GC slot and may thus not be in this address space. If such a pointer is required, it should be decayed to addrspace Derived first.
Lastly, we disallow inttoptr/ptrtoint instructions in these address spaces. Having these instructions would mean that some i64 values are really GC tracked. This is problematic, because it breaks the stated requirement that we're able to identify GC-relevant pointers. This invariant is accomplished using the LLVM "non-integral pointers" feature, which is new in LLVM 5.0. It prohibits the optimizer from making optimizations that would introduce these operations. Note we can still insert static constants at JIT time by using inttoptr in address space 0 and then decaying to the appropriate address space afterwards.
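For instance, a JIT-time constant can be materialized by performing the inttoptr in address space 0 and decaying afterwards (a minimal sketch; the address is illustrative):

```llvm
%p0 = inttoptr i64 139641490780160 to %jl_value_t*
%p  = addrspacecast %jl_value_t* %p0 to %jl_value_t addrspace(10)*
```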
### Supporting ccall
One important aspect missing from the discussion so far is the handling of ccall. ccall has the peculiar feature that the location and scope of a use do not coincide. As an example consider:
A = randn(1024)
ccall(:foo, Cvoid, (Ptr{Float64},), A)
In lowering, the compiler will insert a conversion from the array to the pointer which drops the reference to the array value. However, we of course need to make sure that the array does stay alive while we're doing the ccall. To understand how this is done, first recall the lowering of the above code:
return $(Expr(:foreigncall, :(:foo), Cvoid, svec(Ptr{Float64}), 0, :(:ccall), Expr(:foreigncall, :(:jl_array_ptr), Ptr{Float64}, svec(Any), 0, :(:ccall), :(A)), :(A)))
The last :(A), is an extra argument list inserted during lowering that informs the code generator which Julia level values need to be kept alive for the duration of this ccall. We then take this information and represent it in an "operand bundle" at the IR level. An operand bundle is essentially a fake use that is attached to the call site. At the IR level, this looks like so:
call void inttoptr (i64 ... to void (double*)*)(double* %5) [ "jl_roots"(%jl_value_t addrspace(10)* %A) ]
The GC root placement pass will treat the jl_roots operand bundle as if it were a regular operand. However, as a final step, after the GC roots are inserted, it will drop the operand bundle to avoid confusing instruction selection.
### Supporting pointer_from_objref
pointer_from_objref is special because it requires the user to take explicit control of GC rooting. By our above invariants, this function is illegal, because it performs an address space cast from 10 to 0. However, it can be useful, in certain situations, so we provide a special intrinsic:
declare %jl_value_t *julia.pointer_from_objref(%jl_value_t addrspace(10)*)
which is lowered to the corresponding address space cast after GC root lowering. Do note however that by using this intrinsic, the caller assumes all responsibility for making sure that the value in question is rooted. Further this intrinsic is not considered a use, so the GC root placement pass will not provide a GC root for the function. As a result, the external rooting must be arranged while the value is still tracked by the system. I.e. it is not valid to attempt to use the result of this operation to establish a global root - the optimizer may have already dropped the value.
### Keeping values alive in the absence of uses
In certain cases it is necessary to keep an object alive, even though there is no compiler-visible use of said object. This may be the case for low level code that operates on the memory representation of an object directly or code that needs to interface with C code. In order to allow this, we provide the following intrinsics at the LLVM level:
token @llvm.julia.gc_preserve_begin(...)
void @llvm.julia.gc_preserve_end(token)
(The llvm. in the name is required in order to be able to use the token type). The semantics of these intrinsics are as follows: At any safepoint that is dominated by a gc_preserve_begin call, but that is not dominated by a corresponding gc_preserve_end call (i.e. a call whose argument is the token returned by a gc_preserve_begin call), the values passed as arguments to that gc_preserve_begin will be kept live. Note that the gc_preserve_begin still counts as a regular use of those values, so the standard lifetime semantics will ensure that the values will be kept alive before entering the preserve region.
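At the Julia level, these intrinsics are emitted by the GC.@preserve macro. A minimal usage sketch:

```julia
x = Ref(42)
v = GC.@preserve x begin
    # The raw pointer below is invisible to the GC; @preserve keeps x
    # alive (via the intrinsics above) for the whole block.
    p = Base.unsafe_convert(Ptr{Int}, x)
    unsafe_load(p)
end
```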
http://crypto.stackexchange.com/questions/9268/is-asynchronous-perfect-forward-secrecy-possible/9271 | # Is asynchronous perfect forward secrecy possible?
DH key agreement protocols require the participation of both parties, so are only suitable for synchronous connections. Is it possible to implement PFS in a fashion usable for asynchronous protocols, like e-mail, or storage? I'm intuitively thinking not, but I haven't be able to find conclusive proof one way or the other.
– Ricky Demer Jul 17 '13 at 5:20
@RickyDemer Pointing to a search engine like Google is not really useful, as the results are changing. Could you select some relevant papers and add them? Or even better, write an answer with the key points of some of these papers? – Paŭlo Ebermann Jul 17 '13 at 17:50
Asynchronous forward-secure encryption is possible if you allow users to have synchronized clocks. It seems impossible without that or a third party, though I am aware of no formal result.
A trivial (and hence awful) solution is to generate N key pairs and use one for each interval, discarding them as we go. A somewhat more efficient solution is to use Identity Based Encryption (IBE) to have one public key and N private keys.
The most efficient scheme I know of is due to Canetti, Halevi, and Katz in this paper. It provides for constant size public keys and $O(log(N))$ private keys where N is the number of time intervals. As far as I know, no paper has improved on that bound, though several have achieved that bound with more efficient techniques.
The gist of the design is you have a tree of keys where there is one root public key and messages can be encrypted to any node in the tree with the root public key and the index identifying the node. (So far, this is just an IBE scheme). Unlike IBE, the decryption key for a given node can be computed from its parent key.
We now map time intervals to nodes using a pre order traversal (i.e. the root is interval one, the left child interval 2, the left left grandchild of root is interval 3 ...). So once time interval 1 is up, we derive the keys for both the left and right nodes and then delete the root key. Now messages encrypted to interval one are safe, but we can still decrypt everything targeted at the remaining N-1 intervals.
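To make the bookkeeping concrete, here is a toy sketch of just the traversal and key-storage pattern, with plain hash-based derivation standing in for the paper's IBE key derivation (so this illustrates the O(log N) storage, not the actual security of the scheme):

```python
import hashlib

def child_key(key: bytes, side: bytes) -> bytes:
    # Stand-in for the IBE key derivation: a child's key is derivable
    # from its parent's key, but not the other way around.
    return hashlib.sha256(key + side).digest()

def next_interval(stack, max_depth):
    """Advance one interval in pre-order. The stack holds every key we
    are allowed to keep; its top is the key for the current interval."""
    key, depth = stack.pop()          # current interval ends...
    if depth < max_depth:             # ...derive children before deleting it
        stack.append((child_key(key, b"R"), depth + 1))
        stack.append((child_key(key, b"L"), depth + 1))
    # `key` is discarded here: past intervals become undecryptable

# A tree of height 3 covers 2^4 - 1 = 15 intervals while never storing
# more than height + 1 = 4 keys at once, i.e. O(log N) storage.
MAX_DEPTH = 3
stack = [(b"root secret key", 0)]
for interval in range(1, 16):
    current_key, _ = stack[-1]        # decrypt this interval with this key
    next_interval(stack, MAX_DEPTH)
```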
Your suggested traversal will eventually require storing $\Omega(N)$ private keys. – Ricky Demer Jul 17 '13 at 9:26
@RickyDemer I don't see how. Call the node you had for the previous interval node X. You will have node X's right child key(if X is an internal node) and the left child key of every node between X and Root inclusive. Since the tree is of height $log(N)$ this gives you at most N keys at any given time. – imichaelmiers Jul 17 '13 at 17:23
Ah, I see that I misunderstood what the pre-order traversal is. (However, the last sentence of your comment would be compatible with my previous comment.) – Ricky Demer Jul 17 '13 at 19:13
@RickyDemer Yes, I should have said you end up with $log(N)$ keys. – imichaelmiers Jul 17 '13 at 20:17
http://mathhelpforum.com/calculus/131999-integration-find-fourier-coefficients-print.html | # Integration to find the Fourier coefficients
• March 4th 2010, 02:43 AM
Gaudium
Integration to find the Fourier coefficients
Hi, I want to calculate the Fourier series expansion of
f(x)= a sin(x) / (1-2 a cos(x) + a^2), where |a|<1 and -pi < x <pi,
but I cannot integrate the function "cos(nx) f(x)". Is there a trick for this integration? Thanks.
• March 4th 2010, 10:22 PM
CaptainBlack
Quote:
Originally Posted by Gaudium
Hi, I want to calculate the Fourier series expansion of
f(x)= a sin(x) / (1-2 a cos(x) + a^2), where |a|<1 and -pi < x <pi,
but I cannot integrate the function "cos(nx) f(x)". Is there a trick for this integration? Thanks.
Observe that:
$f(x)=\frac{d}{dx} \left[ \frac{1}{2}\ln(1-2a \cos(x)+a^2)\right]$
then integration by parts will help
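Sketching where that leads (a quick derivation, valid for $|a|<1$): since $\frac{1}{2}\ln(1-2a\cos(x)+a^2) = \operatorname{Re}\,\ln(1-ae^{ix}) = -\sum_{n=1}^{\infty}\frac{a^n}{n}\cos(nx)$, differentiating term by term gives

$\frac{a\sin(x)}{1-2a\cos(x)+a^2} = \sum_{n=1}^{\infty}a^n\sin(nx)$

so the Fourier coefficients are simply $b_n = a^n$ (and every $a_n = 0$, as expected for an odd function).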
CB | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9885852932929993, "perplexity": 4450.920313578405}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416400378815.18/warc/CC-MAIN-20141119123258-00194-ip-10-235-23-156.ec2.internal.warc.gz"} |
https://www.abs.gov.au/methodologies/how-australians-use-their-time-methodology/2020-21 | How Australians Use Their Time methodology
Latest release
Reference period: 2020-21 financial year
Released: 7/10/2022
Next release: Unknown
Overview
The 2020-21 Time Use Survey (TUS) was conducted from November 2020 to July 2021. The survey provides data at a national level. Data was collected from approximately 2000 households around Australia.
The survey was designed to provide insight into how Australians spend their time in a day, including:
• the types of activities undertaken
• the proportion of people who participated in activities
• the average time spent on activities
• differences in how males and females spent their time
• time spent in various locations
• feelings of time pressure
Information about how a person spent their day was collected using a diary. The survey also collected a standard set of information about respondents including age, sex, country of birth, employment, education, and income.
This survey has undergone multiple changes to data collection, processing of data, and classification of activities. Data should be used for point-in-time analysis only and should not be compared to previous years. Refer to the section ‘Comparing the data’ for further information.
COVID-19 context
The survey was collected during the COVID-19 pandemic. During this time, initiatives were in place to help reduce the spread of COVID-19 including border control measures for some states and territories, stay at home orders, remote learning, shutting down non-essential services, limits on gatherings and social distancing rules.
Data collection
Scope
The scope of the survey included:
• all usual residents in Australia aged 15 years and over living in private dwellings, including long-stay caravan parks, manufactured home estates and marinas
• both urban and remote areas in all states and territories, except for very remote parts of Australia and discrete Aboriginal and Torres Strait Islander communities
• members of the Australian permanent defence forces living in private dwellings
• any overseas visitors who have been working or studying in Australia for the last 12 months or more, or who intend to do so.
The following people were excluded:
• visitors to private dwellings
• overseas visitors who have not been working or studying in Australia for 12 months or more, or do not intend to do so
• members of non-Australian defence forces stationed in Australia and their dependents
• non-Australian diplomats, diplomatic staff, and members of their households
• people who usually live in non-private dwellings, such as hotels, motels, hostels, hospitals, nursing homes and short-stay caravan parks
• people in very remote areas
• discrete Aboriginal and Torres Strait Islander communities
• households where all usual residents are less than 15 years of age.
Sample design
Households were randomly selected to participate in the survey. The sample was designed to support national level estimates and does not support estimates at the state or territory level.
Each household was allocated two consecutive diary dates. The days were selected to be broadly representative of seasonal change across the year (for example, seasonal work and recreation patterns, public holidays, school holidays).
Survey enumeration was conducted in 2020-2021 in the following time periods:
• 11th November – 12th December 2020
• 24th March – 1st May 2021
• 26th May – 3rd July 2021
Response rates
Information was collected from 2,009 fully responding households, a response rate of 49.3%. From these households, information was collected from 3,630 persons, a response rate of 69.2%. These persons provided a total of 7,062 diary days, a response rate of 67.3%.
Only fully complete records were retained in the final data file. A record was fully complete where there was one diary day with at least 12 hours of activity data reported and at least 3 activities reported.
Collection method
The TUS survey was collected in two phases. The first phase was a household questionnaire, which was completed by any person in the household aged 15 years or over. They were asked to respond on behalf of all people in the household in scope of the survey. The questionnaire collected demographic and socio-economic information about each person in scope. Households were able to complete the survey online, with an interviewer face-to-face, or over the telephone.
The second phase of collection was a diary, which was designed to collect information about a respondent’s activities over a two-day period. Respondents provided information about their main activity, who they did it for, where they were, and what else they were doing at the same time. The diary also included some questions about health, smoking status, participation in unpaid voluntary work and how respondents felt about their time use. All persons in scope aged 15 years and over were asked to complete the diary through a paper form or online.
Processing the data
Processing and coding of diary data
Responses from paper diaries were manually entered into a data entry system. Changes were only made to correct obvious errors or to amend information to assist coding. Where required, similar amendments were applied to data from the online diaries to align with paper diary data.
If sleep was not recorded in diaries, it was imputed in certain scenarios. This was done at the point of data entry for the paper diaries if it was obvious that sleep had occurred. It was imputed manually for some online diaries if certain conditions were met.
An automated coding system was used to code activity responses to the Activity Classification. Logic edits were then applied to activity codes using additional information from the diary data or from the household questionnaire. Quality assurance was done on coding outputs.
The following items were collected in the diaries and have not been published due to data quality concerns:
• Whether a smartphone, table or computer was used to do the activity
• Who was present during the activity
Estimation methods
As only a sample of people were surveyed on certain days, results needed to be converted into estimates for the whole population. This was done through a process called weighting.
• Each person or household was given a number (known as a weight) to reflect how many people or households they represent in the whole population.
• A person or household’s initial weight was based on their probability of being selected in the sample. For example, if the probability of being selected in the survey was one in 45, then the person would have an initial weight of 45 (that is, they would represent 45 people).
The person and household weights were then calibrated to align with independent estimates of the in-scope population, referred to as ‘benchmarks’. The benchmarks used additional information about the population to ensure that:
• people or households in the sample represent people or households that were similar to them
• the survey estimates reflected the distribution of the whole population, not the sample.
Estimates from the survey were obtained by weighting the diary day responses to represent the in-scope population of the survey.
• a day’s initial weight was based on the probability of the person being selected and assigned a specific type of day (weekday or weekend day).
• the day weights were then calibrated to the person benchmarks to ensure the sample of days represents the people who were similar to them.
• the day estimates reflect the distribution of the whole population of people.
Benchmarks align to the estimated resident population (ERP) aged 15 years and over at April 2021 (after exclusion of people living in non-private dwellings, very remote areas of Australia, and discrete Aboriginal and Torres Strait Islander communities).
TUS weights were also calibrated to labour force status and educational attainment, to better compensate for undercoverage arising from collection during the COVID-19 pandemic.
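As a stylised illustration of the initial weighting and benchmark calibration described above (all numbers are invented for the example):

```python
# Initial design weight: the inverse of the selection probability
selection_prob = 1 / 45
initial_weight = 1 / selection_prob        # this respondent represents 45 people

# Calibration: scale weights so they add up to an independent benchmark
# for the respondent's cell (e.g. an age-by-sex group)
sum_of_initial_weights_in_cell = 38_000    # from the sample (invented)
benchmark_population_of_cell = 41_500      # independent ERP count (invented)

calibrated_weight = initial_weight * (
    benchmark_population_of_cell / sum_of_initial_weights_in_cell
)
```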
Key concepts
Activities
Activities are the tasks that are done during a person's day (for example, eating, sleeping and working). Participants were asked to report the main activity they were doing, referred to as the primary activity, and what they were doing at the same time, referred to as the secondary activity. The activity information was then coded to the Activity Classification.
Four types of time
The way people use time is divided into four broad categories:
• Necessary time - includes activities which serve basic physiological needs such as sleeping, eating, personal care, health, and hygiene.
• Contracted time - includes activities where a respondent has a contracted obligation to partake in the activity, such as paid employment and formal education. Also includes related activities such as job search, homework, and related travel.
• Committed time – includes activities which are unpaid work in nature. This includes domestic activities such as cooking, housework, shopping, gardening, pet care and managing the household. It also includes child care activities, caring for adults and voluntary work.
• Free time – includes activities generally performed for enjoyment or personal fulfilment. Includes watching television, sport and exercise, social interaction, reading, and other social, recreation and leisure activities.
Activity Classification
Activities reported by respondents were coded to the Activity Classification.
No activity
Includes: Time when information was missing from the diary, or the diary day was incomplete. When there was no other applicable code available to describe the activity.
Personal care activities
Sleeping
Includes: Sleeping, napping, dozing during either night or daytime for any length of time. Time in bed before and after sleep, e.g., when the respondent stated 'went to bed' or 'woke up'.
Excludes: Resting or relaxing, which is coded to 'Relaxing'.

Sleeplessness
Includes: Trying to sleep, unable to sleep, woken by disturbance.

Personal hygiene
Includes: Toilet, showering, dressing, brushing teeth and other bathroom activities. Using personal care services such as a hairdresser. Getting self ready, e.g., for bed, work, sport, and leisure.

Health care
Includes: Health or medical appointments. Taking medications and health measurements. Exercises for medical conditions. Resting or in bed due to pain or illness.
Excludes: Exercise for general fitness, which is coded to 'Exercise, sport and outdoor activity'. Massages and meditation, which are coded to 'Other personal care'.

Eating and drinking
Includes: Meals, snacks, drinks, tea or coffee, alcoholic drinks. Lunch breaks, morning or afternoon tea, or coffee or tea breaks whilst at work or study.
Excludes: Eating and drinking at a dining venue, which is coded to 'Eating and drinking out'.

Travel associated with personal care
Includes: Travel for own health care, such as to or from doctors, medical appointments, hospital. Travel to and from personal care services such as hairdressers.

Other personal care
Includes: When there was no other applicable code for a personal care activity. Massages, meditation, intimacy.
Employment related activities
Work
Includes: All activities recorded as 'work' or where an activity was done 'for work'. Working from home. Job-related training. Checking work emails outside working hours. Breaks whilst at work (excluding eating or drinking).
Excludes: Lunch, coffee, morning tea and afternoon tea, which are coded to 'Eating and drinking' or 'Eating and drinking out'.

Job search
Includes: Job search, job applications, preparing a resume, job interviews.

Travel associated with employment
Includes: Travel to and from work.
Excludes: Travel as part of work, for example as a courier or driver, or driving from one worksite to another, which is coded to 'Work'.

Other employment related activities
Includes: When there was no other applicable code for an employment related activity.
Excludes: Getting ready for work, which is coded to 'Personal hygiene'.
Education activities
Formal education
Includes: Participation in lectures, tutorials, exams, school classes. Participation in school, TAFE, college, university. In-person or online education. Recess and other breaks between classes (excluding eating and drinking). Homework, study.
Excludes: Lunch breaks, which are coded to 'Eating and drinking'. Participation in non-formal courses, which is coded to 'Participation in non-formal course'. Job-related training, which is coded to 'Work'.

Travel associated with education
Includes: Travel to and from own school, university, college.
Excludes: Taking a child to or from school or daycare, which is coded to 'Travel associated with child care activities'. Taking family or household members aged 15 years or over to educational institutions, which is coded to 'Travel associated with domestic activities'.

Other education
Includes: When there was no other applicable code for an activity related to a respondent's own education. Applying for a university course, communication with school, checking a timetable.
Excludes: Getting ready for school, which is coded to 'Personal hygiene'.
Domestic activities
Food and drink preparation/service
Includes: Preparing or cooking food and drinks.
Excludes: Food preparation or cooking exclusively for children is coded to 'Feeding and food preparation for children'. Food preparation or cooking for animals is coded to 'Pet and animal care'. Cleaning up after food preparation is coded to 'Housework'.
Shopping
Includes: Shopping for food, groceries, or other goods. Any response which states shopping, shops, bought, buying, ordering. Shopping in person or online. Unpacking groceries or shopping.
Excludes: Window shopping is coded to 'Other recreation and leisure'.
Housework
Includes: Indoor house cleaning or tidying. Dishes and clean up after meals. Clothes washing, laundry and ironing.
Grounds care and gardening
Includes: Watering plants, garden, lawn. Mowing and lawn care. Cleaning outdoor areas.
Home and vehicle maintenance
Includes: Cleaning car and vehicles. Maintenance on cars, bikes, boats. Performing or arranging home renovations and repairs.
Excludes: Fixing up old cars or bikes as a hobby is coded to 'Hobbies and arts'.
Household management
Includes: Administrative tasks for household such as finance, bills, post. Planning or organising on behalf of household. Disposing of rubbish. Filling in time use diary. Packing or unpacking car.
Pet and animal care
Includes: Feeding animals. Walking and playing with animals.
Travel associated with domestic activities
Includes: Travel to and from shopping. Travel associated with domestic activities.
Other domestic activities
Includes: When there was no other applicable code for a domestic related activity. 'Household duties' and 'household chores' without any further information. Helping children aged 15 years or over with homework. Pottering around the house or garage.
Child care activities
Child care activities can be performed for children who live either in or out of the household and can be for children who are family or not family. It is intended that child care activities are performed for children under the age of 15 years, although this detail is not always clear in the time use diaries.
Physical and emotional care of children
Includes: Bathing, dressing, toileting, changing nappies, brushing teeth. Putting children to bed, waking children up, settling babies. Getting children ready, e.g., for bed, school, or outings.
Excludes: Food preparation for children is coded to 'Feeding and food preparation for children'. Feeding babies and preparing bottles for babies is coded to 'Feeding and food preparation for children'.
Teaching/helping children
Includes: Helping with homework, schoolwork, helping with other studies, reading. Helping children do things or showing them how, directions about household chores.
Playing/reading/talking with child
Includes: Playing with child, reading to child, talking to child. Watching TV or movies with children.
Minding child
Includes: When a response indicates there is supervision of a child but there is no further information about the activity, such as babysitting, minding child, looking after child.
Accompanying child to school or extra-curricular activities
Includes: Attending a child's sport or extra-curricular activities or classes.
Feeding and food preparation for children
Includes: Feeding baby, expressing breast milk, preparing bottles. Making meals for child where the response indicates it was exclusively for children.
Excludes: Preparing dinner for the whole family or household is coded to 'Food and drink preparation/service'.
Travel associated with child care activities
Includes: Taking children to or picking them up, e.g., to or from school, daycare, classes. Waiting for children when picking them up.
Other child care activities
Includes: When there was no other applicable code for a child care related activity. Talking to child care providers.
Adult care activities
Physical care for adults (sick, with disability or aged)
Includes: Care for people aged 15 years or over who are sick, with disability or aged. Help with personal hygiene, bathing, dressing, toileting and feeding. Providing medical and health care.
Travel associated with adult care activities
Includes: Driving to or from hospital, aged care homes, doctor's or medical appointments for an adult who is sick, with disability, aged.
Other adult care activities
Includes: When there was no other applicable code for an adult care activity. Emotional care or support for people. Checking on their welfare, comforting. Visiting people in hospital or in aged care facility.
Voluntary work activities
Voluntary work
Includes: Responses of 'volunteering'. Activities done for community or charity which are not paid work, e.g., sports coaching.
Help/favour for friend/neighbour
Includes: Helping a friend or neighbour with tasks such as gardening, transport, home maintenance, cooking.
Excludes: Activities done for people who are sick, with disability, aged are coded to the relevant category within 'Adult care activities'.
Travel associated with voluntary work activities
Includes: Travel associated with getting to or from voluntary work or to or from helping a friend or neighbour.
Other voluntary work activities
Includes: When there was no other applicable code for a voluntary work related activity.
Social and community interaction
Social interaction
Includes: Socialising with friends and family. Having visitors over. General talking. Phone calls, messaging, video calls.
Excludes: Attendance at weddings or funerals is coded to 'Religious and cultural practices'. Where it is indicated the person is dining or drinking at a venue, for example, restaurant, café, pub, it is coded to 'Eating and drinking out'. Talking exclusively to children is coded to 'Playing/reading/talking with child'. All references to emailing or posting on social media are coded to 'General internet and device use'.
Visiting entertainment and cultural venues
Includes: Spectating sports match or event. Watching adult family member aged 15 years or over play sport. At library when no further information is given. Racing events such as going to the horse racing or dog racing. Attending cinema, performing arts, amusement parks, exhibitions, festivals, museum.
Excludes: Attending a venue exclusively 'with or for kids', e.g., movies, library, park, is coded to 'Minding child'. Visiting library for education activities is coded to 'Formal education'. Visiting a park where no other activity is indicated is coded to 'Exercise, sport and outdoor activity'. Watching children play sport is coded to 'Accompanying child to school or extra-curricular activities'.
Eating and drinking out
Includes: Eating or drinking at a dining venue, for example, restaurant, café, pub. If person was on their own or with others.
Religious and cultural practices
Includes: Prayer - alone or with others. Attending church, places of worship and religious services. Participating in bible or theme study groups. Reading the bible or other religious texts. Attending weddings, funerals, christenings. Visiting a cemetery. Graduation ceremonies, presentations.
Community participation
Includes: Attendance at council, community, or school meetings. ANZAC day service or marches. Attending police stations. Driving lessons, practice and driving tests. COVID-19 mandates such as check-ins.
Travel associated with social and community interaction
Includes: Travel associated with social and community interaction. Travel associated with eating and drinking out.
Other social and community interaction
Includes: When there was no other applicable code for an activity related to social and community interaction.
Participants
A person who reported in their time use diary that they spent at least five minutes in their day doing an activity.
Proportion who participated in activity
This is the proportion of persons in a population who have spent at least five minutes on an activity in a day. It is calculated:
$$\mathsf{\large{\frac{\text{Persons in population who participated in activity}}{\text{Total persons in population}}\times 100}}$$
For example, the proportion of females who participated in domestic activities:
$$\mathsf{\large{\frac{\text{Females who participated in domestic activities}}{\text{Total females}}\times 100}}$$
Summing the ‘proportions who participated in an activity’ for more than one activity in the datacubes will double count people who appear in more than one category and will not give an accurate total.
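As a toy illustration of the participation proportion (the counts below are hypothetical, not survey results):

```python
# Hypothetical counts, illustrating the formula above.
females_total = 1000
females_domestic = 780   # spent at least five minutes on domestic activities

print(females_domestic / females_total * 100)   # 78.0 (% of females who participated)
```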
Average time spent per day, of persons who participated in activity
The average time spent on an activity by people who reported spending at least five minutes in the day doing this activity, is calculated by:
$$\mathsf{\large{\frac{\text{Total time spent on activity in a day by persons in the population}}{\text{Persons in population who participated in activity}}}}$$
For example, the average time spent per day of females who participated in domestic activities:
$$\mathsf{\large{\frac{\text{Total time females spent on domestic activities in a day}}{\text{Females who participated in domestic activities}}}}$$
This average is particularly useful for reporting the average time spent on activities that not everyone in the population participated in, for example work or child care, because the average is calculated only over those persons who participated in the activity. For example, persons who participated in work spent an average of 7 hours 15 minutes per day on it.
Summing the ‘average time spent per day, of persons who participated in activity’ for more than one activity in the datacubes will double count people who appear in more than one category and will not give an accurate total.
The average time spent per day of persons who participated in separate activity categories cannot be summed together to calculate a total average.
Average time spent per day, of total population
The average time spent on an activity by all people, regardless of whether they reported doing that activity in their day or not, is calculated by:
$$\mathsf{\large{\frac{\text{Total time spent on activity in a day by persons in the population}}{\text{Persons in population}}}}$$
For example, the average time spent per day on domestic activities by all females:
$$\mathsf{\large{\frac{\text{Total time females spent on domestic activities in a day}}{\text{Total females}}}}$$
The average time spent per day of the total population can result in very small amounts of average time per day if the proportion of the population that participated in the activity is low. For these scenarios, this average time statistic may not be very useful. For example, the average time spent per day on child care of the total population will include people who do not spend any time caring for children.
The average time spent per day of the total population from separate activity categories can be summed together to calculate a total average.
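A toy illustration of the two averages side by side (hypothetical diary minutes; 0 means the activity was not reported):

```python
# Hypothetical diary data: minutes person i spent on an activity (0 = not reported).
minutes = [0, 0, 120, 45, 0, 90, 0, 30]

participants = [m for m in minutes if m >= 5]   # at least five minutes = participated

print(len(participants) / len(minutes) * 100)   # proportion who participated: 50.0
print(sum(participants) / len(participants))    # average of participants:     71.25
print(sum(minutes) / len(minutes))              # average of total population: 35.625
```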
Data quality
Inconsistency between household questionnaire and diary
In a small number of instances there are inconsistencies between the data provided in the household questionnaire and the diary. For example, a person may be reported as unemployed in the household questionnaire but has reported spending time on employment in their diary.
These differences could be due to:
• a change of circumstance; the diary is collected 1 to 2 weeks after the household questionnaire
• the person completing the household questionnaire may not have correct information about the person they are responding for
• a person may not have participated in an activity on the day they were asked to complete the diary. For example, an employed person may have completed their diary on a non-work day.
Prevalence of participation
The 2020-21 TUS is not designed to provide counts of people by prevalence (such as employment, voluntary work, disability status or caring status), rather to identify population groups, so that analysis can be undertaken on how they reported spending their time.
The proportion who participated does not reflect the prevalence rate of a characteristic in the general population. For example, the proportion who participated in adult care activities is not the equivalent of the proportion of carers in the population. This is because each participant completes only two diary days and a carer may not have provided any care on their diary day.
Under coverage of activities
There may be under coverage of time spent on activities because a person could be doing more than two of the possible activities at the same time; however, the diary only allows for collection of two activities at a given time.
Primary and Secondary activities
Respondents could report two activities at the same time, a primary activity, and a secondary activity (what they were doing at the same time). For example, cooking dinner and listening to the radio.
In a small number of instances, the primary and secondary activities could have been coded to the same activity category. For example, cooking dinner and setting the table are both coded to Food and drink preparation/service. Reporting on the time spent for primary and secondary activities combined may inflate the time spent on that activity category.
Due to data quality concerns, the secondary activity was not output by itself.
Classifying activities
One fundamental difficulty with classifying activities is that one activity could be coded to multiple activity categories. In the 2020-21 TUS, activities were coded to the category that was deemed the most relevant.
For example;
• a response of lunch with friends was coded to the social and community interaction category of eating and drinking out rather than to the personal care activity of eating and drinking.
• a response of made lunch for kids was coded to the child care activity of feeding and food preparation for children rather than the domestic activity of food and drink preparation/service.
Comparing the data
The 2020-21 TUS data is a source of point-in-time data and should not be compared to previous years. See below for further information about changes to the survey.
If you choose to make comparisons between 2020-21 TUS and previous surveys, the ABS recommends caveating the data with the following statement: The 2020-21 Time use estimates are not fully comparable with previous collections due to changes in methodology.
Changes to the data collection
There have been multiple changes to the data collection of the survey which may impact on the types of responses provided by participants. These include:
• COVID-19 impacts on how people spent their time and on survey enumeration
• the introduction of online collection, both for the household questionnaire and the diary
• lower response rate than the previous survey
• changes to the content of the household questionnaire and diary
Changes to processing and coding of diary data
There have been multiple changes to the processing of 2020-21 TUS survey data which impact comparability with previous releases. These include:
• introduction of an automated coding process
• reduced manual intervention of data (micro-editing and imputation)
Changes to the Activity Classification
The 2020-21 TUS activity classification was reviewed and updated to align with real-world changes in how people spend their time and the level of detail provided by respondents in their diaries. The classification is not directly comparable to the activity classification used in the previous release of the survey. See the table below for details of the changes.
Summary of changes
2020-21 TUS Activity Classification Changes
OVERALL
• Reduction in the level of detail
• Categories removed and spread throughout the 2020-21 TUS classification (e.g. Purchasing goods and services)
• Activities have been coded to different activities
PERSONAL CARE ACTIVITIES
Includes:
• Purchasing personal care or medical services (e.g. doctor’s appointments, hairdressers)
• Dozing/staying in bed
Excludes:
• Eating and drinking out
Sleeping
Includes:
• Dozing/staying in bed
Personal hygiene
Includes:
• Purchasing personal care services (e.g. hairdressers)
Personal health care
Includes:
• Purchasing medical services (e.g. doctor’s appointments)
Eating and drinking
Excludes:
• Eating and drinking out
EMPLOYMENT RELATED ACTIVITIES
Includes:
• Job related training
Work
Includes:
• Main job
• Other job
• Unpaid work in family business
• Work breaks
• Job related training
Other employment related activities
EDUCATION ACTIVITIES
Excludes:
• Job related training
Formal education
Includes:
• Attendance at educational courses
• Homework/study/research
• Breaks at place of education
Excludes:
• Job related training
Other education activities
DOMESTIC ACTIVITIES
Includes:
• Purchasing goods and associated travel
• Filling in time use diary
• Interacting with pets
• Activities done for family both in and out of household
Excludes:
• Food preparation for children
Food and drink preparation/service
Excludes:
• Food preparation for children
• Clean up after food preparation
Shopping
Includes:
• Packing away shopping
Housework
Includes:
• Laundry and clothes care
• Clean up after food preparation
• Occasional housework
• All other housework
Grounds care and gardening
Excludes:
• Pet, animal care
• Walking pets
Pet and animal care
Includes:
• Walking pets
• Interacting with pets
Household management
Includes:
• Filling in time use diary
Excludes:
• Packing away shopping
CHILD CARE ACTIVITIES
Includes:
• Food preparation for children
• All child care whether child lives in or out of household
Physical and emotional care of children
Includes:
• Emotional care of children
Excludes:
• Infant feeding
Feeding and food preparation for children
This is a new category
Includes:
• Food preparation done exclusively for children
• Infant feeding
ADULT CARE ACTIVITIES
This is now a separate category from 'Voluntary work'
• In TUS 2020-21, adult care activities may not have been captured in all cases because the diary does not ask whether the activity was done for someone who was sick, with disability or aged. Other contextual information in the diary (i.e. prior or later episodes which indicated care) may not have been considered
• In TUS 2006, a physical activity was coded to adult care if:
• The recipient of the activity was likely to be aged 60 years or over
• The household questionnaire indicated that the recipient had a long-term health condition
• The activity was coded as being done for someone 'sick/aged/disabled'
• It was a physical activity done for adults such as 'cutting partner's hair'
VOLUNTARY WORK ACTIVITIES
This is now a separate category from 'Adult care activities'
Excludes:
• Activities done for family both in and out of the household
Help/favour for friend/neighbour
Excludes:
• Activities done for family both in and out of the household
SOCIAL AND COMMUNITY INTERACTION
Includes:
• Talking/chatting
• Eating and drinking out
Excludes:
• Filling in time use diary
Social interaction
Includes:
• Associated communication to recreation and leisure
Visiting entertainment and cultural venues
Includes:
• Attendance at sports events
Community participation
Excludes:
• Filling in time use diary
RECREATION AND LEISURE ACTIVITIES
Excludes:
• Talking/chatting
• Interacting with pets
• Dozing/staying in bed
Games
Excludes:
• Hobbies/arts/crafts
Watching TV and video
Includes:
• TV watching/listening
• Video/DVD watching
Includes:
• Listening to records/tapes/CDs and other audio media
General internet and device use
This is a new category
Includes:
• Using the internet, general computing
Hobbies and arts
Excludes:
• Games
Other recreation and leisure activities
Excludes:
• Dozing/staying in bed
Changes to the data release
Due to data quality concerns, several data items released in the 2006 publication were considered unsuitable for publication in 2020-21 TUS. These included:
• Who the activity was done for
• Who was present during the activity
• Spatial location
• Mode of transport
• Type of communication/technology used
• Nature of activity
Comparability with other ABS surveys and non-ABS sources
Estimates from 2020-21 TUS may differ from the estimates for the same or similar data items produced from other ABS collections for several reasons. Differences in sampling errors, scope, collection methodologies, reference periods, seasonal and non-seasonal events may all impact estimates.
Accuracy
Reliability of estimates
Two types of error are possible in estimates based on a sample survey:
• non-sampling error
• sampling error.
Non-sampling error
Non-sampling error is caused by factors other than those related to sample selection. It is any factor that results in the data values not accurately reflecting the true value of the population.
It can occur at any stage throughout the survey process. Examples include:
• selected people that do not respond (e.g. refusals, non-contact)
• questions being misunderstood
• responses being incorrectly recorded
• errors in coding or processing the survey data.
Sampling error
Sampling error is the expected difference that can occur between the published estimates and the value that would have been produced if the whole population had been surveyed. Sampling error is the result of random variation and can be estimated using measures of variance in the data.
Standard error
One measure of sampling error is the standard error (SE). There are about two chances in three that an estimate will differ by less than one SE from the figure that would have been obtained if the whole population had been included. There are about 19 chances in 20 that an estimate will differ by less than two SEs.
Relative standard error
The relative standard error (RSE) is a useful measure of sampling error. It is the SE expressed as a percentage of the estimate:
$$\mathsf{RSE\%=\frac{SE}{estimate}\times100}$$
Only estimates with RSEs less than 25% are considered reliable for most purposes. Estimates with larger RSEs, between 25% and less than 50%, have been included in the publication, but are flagged to indicate they are subject to high SEs. These should be used with caution. Estimates with RSEs of 50% or more have also been flagged and are considered unreliable for most purposes. RSEs for these estimates are not published.
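A small Python sketch of the RSE calculation and the reliability bands above (function names are mine):

```python
def rse_percent(se, estimate):
    """Relative standard error: the SE expressed as a percentage of the estimate."""
    return se / estimate * 100

def reliability(rse):
    """Reliability bands used in this publication."""
    if rse < 25:
        return "reliable for most purposes"
    if rse < 50:
        return "subject to high sampling error - use with caution"
    return "considered unreliable for most purposes"

rse = rse_percent(se=1.5, estimate=12.0)   # 12.5
print(rse, reliability(rse))               # 12.5 reliable for most purposes
```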
Margin of error for proportions
Another measure of sampling error is the margin of error (MOE). This describes the distance from the population value that the sample estimate is likely to be within and is particularly useful to understand the accuracy of proportion estimates. It is specified at a given level of confidence. Confidence levels typically used are 90%, 95% and 99%.
For example, at the 95% confidence level, the MOE indicates that there are about 19 chances in 20 that the estimate will differ by less than the specified MOE from the population value (the figure obtained if the whole population had been enumerated). The 95% MOE is calculated as 1.96 multiplied by the SE:
$$\mathsf{MOE=SE\times1.96}$$
The RSE can also be used to directly calculate a 95% MOE by:
$$\mathsf{MOE(y)\approx \frac{RSE(y)\times y}{100}\times 1.96}$$
The MOEs in this publication are calculated at the 95% confidence level. This can easily be converted to a 90% confidence level by multiplying the MOE by:
$$\mathsf{\large{\frac{1.615}{1.96}}}$$
or to a 99% confidence level by multiplying the MOE by:
$$\mathsf{\large{\frac{2.576}{1.96}}}$$
Depending on how the estimate is to be used, an MOE of greater than 10% may be considered too large to inform decisions. For example, a proportion of 15% with an MOE of plus or minus 11% would mean the estimate could be anything from 4% to 26%. It is important to consider this range when using the estimates to make assertions about the population.
Confidence intervals
A confidence interval expresses the sampling error as a range in which the population value is expected to lie at a given level of confidence. A confidence interval is calculated by taking the estimate plus or minus the MOE of that estimate. In other terms, the 95% confidence interval is the estimate +/- MOE.
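The MOE arithmetic above can be expressed directly (a sketch; the 1.96, 1.615 and 2.576 critical values are the ones quoted in this publication):

```python
Z95, Z90, Z99 = 1.96, 1.615, 2.576   # critical values as quoted above

def moe95(se):
    return se * Z95

def rescale_moe(moe_95, z):
    """Convert a 95% MOE to another confidence level, as described above."""
    return moe_95 * z / Z95

estimate, se = 50.0, 2.0
m = moe95(se)                         # ≈ 3.92
print(rescale_moe(m, Z90))            # ≈ 3.23 (90% level)
print((estimate - m, estimate + m))   # ≈ (46.08, 53.92) = 95% confidence interval
```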
Calculating measures of error
Proportions or percentages formed from the ratio of two estimates are also subject to sampling errors. The size of the error depends on the accuracy of both the numerator and the denominator. A formula to approximate the RSE of a proportion is given below. This formula is only valid when the numerator $$\mathsf{\small{(x)}}$$ is a subset of the denominator $$\mathsf{\small{(y)}}$$:
$$\mathsf{RSE (\frac {x}{y}) = \sqrt {[RSE(x)]^2 - [RSE(y)]^2}}$$
When calculating measures of error, it may be useful to convert RSE or MOE to SE. This allows the use of standard formulas involving the SE. The SE can be obtained from RSE or MOE using the following formulas:
$$\mathsf{\large{SE = \frac{RSE\% \times estimate}{100}}}$$
$$\mathsf{\large{SE = \frac{MOE}{1.96}}}$$
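A sketch of these conversions and the subset-proportion RSE formula (hypothetical numbers):

```python
import math

def se_from_rse(rse_pct, estimate):
    return rse_pct * estimate / 100

def se_from_moe(moe):
    return moe / 1.96

def rse_of_proportion(rse_x, rse_y):
    """RSE of x/y where x is a subset of y (requires RSE(x) >= RSE(y))."""
    return math.sqrt(rse_x**2 - rse_y**2)

print(se_from_rse(10, 250))        # 25.0
print(se_from_moe(4.9))            # ≈ 2.5
print(rse_of_proportion(13, 5))    # 12.0
```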
Comparison of estimates
The difference between two survey estimates (counts or percentages) can also be calculated from published estimates. Such an estimate is also subject to sampling error. The sampling error of the difference between two estimates depends on their SEs and the relationship (correlation) between them. An approximate SE of the difference between two estimates $$\mathsf{\small{(x-y)}}$$ may be calculated by the following formula:
$$\mathsf{SE(x-y) \approx \sqrt {[SE(x)]^2 + [SE(y)]^2}}$$
While this formula will only be exact for differences between unrelated characteristics or sub-populations, it provides a reasonable approximation for the differences likely to be of interest in this publication.
Significance testing
When comparing estimates between surveys or between populations within a survey, it is useful to determine whether apparent differences are 'real' differences or simply the product of differences between the survey samples.
One way to examine this is to determine whether the difference between the estimates is statistically significant. This is done by calculating the standard error of the difference between two estimates $$\mathsf{\small{(x \text{ and } y)}}$$ and using that to calculate the test statistic using the formula below:
$$\mathsf{\Large{\frac{|x-y|}{SE(x-y)}}}$$
where
$$\mathsf{{SE(y) \approx \frac{RSE(y) \times y}{100}}}$$
If the value of the statistic is greater than 1.96, we can say there is good evidence of a statistically significant difference at 95% confidence levels between the two populations with respect to that characteristic. Otherwise, it cannot be stated with confidence that there is a real difference between the populations.
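Putting the last two formulas together, a minimal significance check (a sketch; function names are mine):

```python
import math

def se_of_difference(se_x, se_y):
    """Approximate SE of (x - y) for two survey estimates, as above."""
    return math.sqrt(se_x**2 + se_y**2)

def significantly_different(x, y, se_x, se_y, critical=1.96):
    """True if |x - y| / SE(x - y) exceeds the 95% critical value."""
    return abs(x - y) / se_of_difference(se_x, se_y) > critical

print(significantly_different(52.0, 45.0, se_x=2.0, se_y=2.5))   # True (statistic ≈ 2.19)
```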
Data release
Release strategy
The 2020-21 TUS release presents national estimates. The sample design is not sufficient to enable detailed analysis of state and territory estimates.
Data cubes in this release present tables of Estimates, Proportions, and their associated Measures of Error. A data item list is also available in the Data downloads section.
Microdata
The ABS is currently assessing the feasibility of a microdata product and will update this page once a decision is made.
Custom tables
Customised statistical tables to meet individual requirements can be produced on request. These are subject to confidentiality and sampling variability constraints which may limit what can be provided. Enquiries on the information available and the cost of these services should be made to the ABS website Contact us page.
Confidentiality
The Census and Statistics Act 1905 authorises the ABS to collect statistical information and requires that information is not published in a way that could identify a particular person or organisation. The ABS must make sure that information about individual respondents cannot be derived from published data.
To minimise the risk of identifying individuals in aggregate statistics, a technique called perturbation is used to randomly adjust cell values. Perturbation involves small random adjustment of the statistics which have a negligible impact on the underlying pattern. This is considered the most satisfactory technique for avoiding the release of identifiable data while maximising the range of information that can be released. After perturbation, a given published cell value will be consistent across all tables. However, adding up cell values in Data Cubes to derive a total may give a slightly different result to the published totals.
Glossary
Carer
A carer is a person who provides any informal assistance (help or supervision) to people with disability or older people. The assistance must be ongoing, or likely to be ongoing, for at least six months.
Carers do not have to live in the same household as the person they care for. Assistance to a person living in a different household to the carer relates to everyday activities, without specific information on the type of activity.
Carers were identified based on their responses to the questionnaire component of the survey.
Child
Any individual under 15 years old, usually a resident in the household, who forms a parent-child relationship with another member in the household.
Couple
A couple refers to two usual residents, both aged at least 15 years, who are either married to each other or living in a de facto relationship with each other.
Disability
A disability or long-term health condition exists if a limitation, restriction, impairment, disease or disorder has lasted, or was likely to last, for at least six months and restricted everyday activities.
It is classified by whether or not a person has a specific limitation or restriction. Specific limitation or restriction is further classified by whether the limitation or restriction is a limitation in core activities or a schooling/employment restriction only.
There are four levels of core activity limitation (profound, severe, moderate, and mild) which are based on whether a person needs help, has difficulty, or uses aids or equipment with any of the core activities (self care, mobility or communication). A person's overall level of core activity limitation is determined by their highest level of limitation in these activities.
The four levels are:
• profound - always needs help/supervision with core activities
• severe - does not always need help with core activities
• moderate - has difficulty with core activities
• mild - uses aids to assist with core activities.
Persons are classified as having only a schooling/employment restriction if they have no core activity limitation and are aged 15 to 20 years and have difficulties with education, or are aged 15 years and over and have difficulties with employment.
Episode
An episode is defined by the primary activity, secondary activity and location of the activity, at any particular time. A change in any of these elements identifies a new episode.
Equivalised weekly household income
Equivalised total household income is household income adjusted by the application of an equivalence scale to facilitate comparison of income levels between households of differing size and composition. This variable reflects that a larger household would normally need more income than a smaller household to achieve the same standard of living. The 'modified OECD' equivalence scale is used.
Equivalised total household income can be viewed as an indicator of the economic resources available to a standardised household. For a lone person household, it is equal to household income. For a household comprising more than one person, it is an indicator of the household income that would be needed by a lone person household to enjoy the same level of economic wellbeing.
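As a hypothetical illustration: the 'modified OECD' scale assigns 1.0 points to the first adult in the household, 0.5 to each additional person aged 15 years and over, and 0.3 to each child under 15 (these scale points come from the OECD definition, not from this page):

```python
def equivalised_income(household_income, persons_15_and_over, children_under_15):
    """Modified OECD scale: 1.0 for the first adult, 0.5 for each further
    person aged 15+, 0.3 per child under 15 (scale points assumed from the
    OECD definition)."""
    points = 1.0 + 0.5 * (persons_15_and_over - 1) + 0.3 * children_under_15
    return household_income / points

print(equivalised_income(2100, persons_15_and_over=2, children_under_15=2))  # 1000.0
```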
Family
Two or more persons, one of whom is at least 15 years of age, who are related by blood, marriage (registered or de facto), adoption, step or fostering; and who are usually resident in the same household. The basis of a family is formed by identifying the presence of a couple relationship, lone parent-child relationship or other blood relationship. Some households will, therefore, contain more than one family.
Household
One or more persons, at least one of whom is at least 15 years of age, usually resident in the same private dwelling.
Income
• wages and salaries (including from own incorporated business)
• government pension, benefit or allowance
• superannuation, annuities or private pensions
• rental investments
• other regular sources of income
Index of relative socio-economic disadvantage
An index within the Socio-Economic Indexes for Areas (SEIFA). The index of relative socio-economic disadvantage includes attributes such as low income, low educational attainment, high unemployment and dwellings without motor vehicles.
The index refers to the attributes of the area (Statistical Area Level 1 at national level) in which a person lives, not the socio-economic situation of a particular individual. For further information about the SEIFAs, see Socio-Economic Indexes for Areas (SEIFA) 2016.
Labour force status
Classifies all people aged 15 years and over into one of the following categories:
• employed - during the week before the interview, persons worked one hour or more in a job or business, or undertook work without pay in a family business, or they had a job in the reference week, but were not at work.
• employed full-time - persons who usually worked 35 hours or more per week
• employed part-time - persons who usually worked less than 35 hours per week
• unemployed - not employed and actively looked for work in the four weeks prior to the questionnaire and available to start work in the week prior to the survey
• not in the labour force - people who were not in the categories employed or unemployed
Living situation
The following living situations are reported on in the data:
• Parent in couple family with child less than 15 years old - two persons in a registered or de facto marriage who usually live in the same household with at least one child who is less than 15 years old. The family may also include non-dependent children, other relatives and unrelated individuals.
• Other parents in couple family - two persons in a registered or de facto marriage who usually live in the same household with their non-dependent children. The family may also include other relatives and unrelated individuals.
• Lone parent with child less than 15 years old - a family comprising of one parent with at least one child who is less than 15 years old. The family may also include non-dependent children, other relatives and unrelated individuals.
• Other lone parent - a family comprising of one parent who usually lives in the same household with their non-dependent children. The family may also include other relatives and unrelated individuals.
• Partner in couple family with no children - two persons in a registered or de facto marriage who usually live in the same household, but do not live with their own children. This family may also include other relatives and unrelated individuals.
• Non-dependent child – All persons aged 15 years or over (except those aged 15-24 years who are full-time students) who have a parent in the household and do not have a partner or child of their own in the household.
• Dependent child - All persons aged under 15 years; and people aged 15-24 years who are full-time students, have a parent in the household and do not have a partner or child of their own in the household.
• Lone person aged 15 to 64 years old
• Lone person aged 65 years and over
• Other
Quintiles
Groupings that result from ranking all households or persons in the population in ascending order according to some characteristic such as their household income and then dividing the population into five equal groups, each comprising 20% of the estimated population.
Remoteness
The Australian Statistical Geography Standard (ASGS) was used to define remoteness. The Remoteness Structure is described in detail in the publication Australian Statistical Geography Standard (ASGS): Volume 5 - Remoteness Structure, July 2016.
Self-assessed health status
The respondent's general assessment of their own health, against a five point scale from excellent through to poor.
Unpaid work
For the purposes of this publication, unpaid work includes domestic, child care, adult care and voluntary work activities. It is also referred to as Committed Time in this publication.
Unpaid voluntary work
The provision of unpaid help willingly given in the form of time, service or skills to a club, organisation or association.
Volunteer
Persons who identified that they had done unpaid voluntary work in the previous 12 months for any of the following types of organisations:
• Organised sporting groups and teams
• Youth groups
• A charity organisation or cause
• Student government
• Religious organisation
• School or preschool
• Some other kind of volunteer work
These organisations do not include:
• Internships
• Work experience or work for study purposes
• Unpaid work undertaken overseas
• Unpaid work undertaken to receive a government allowance
• Unpaid work as part of a court order
Weekday
Refers to any day during the week from Monday to Friday.
Weekend
Refers to any day on the weekend, Saturday and Sunday.
Weekly personal income
Regular money received by an individual from all income sources over the course of a week. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2181459665298462, "perplexity": 3861.4057349975706}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710192.90/warc/CC-MAIN-20221127041342-20221127071342-00552.warc.gz"} |
http://mathematica.stackexchange.com/questions/27138/working-precision-for-each-variable | # Working precision for each variable
How can I define working precision for each variable individually?
for example:
Maximize[x y< 100, {x, y}]
where the working precisions for x and y should be 2 and 7 respectively.
Say x is some real life quantity that is ONLY available to 2 decimal places and we need y to be as precise as possible (say 7 decimal places) regardless of the precision of xy. How can I do this in Mathematica?
Here is the exact problem:
Maximize hsum + isum + jsum + ksum + lsum + msum + nsum + osum
constraints are:
0 < hsum,isum,jsum,ksum,lsum,msum,nsum < 10000
-5000 < osum < 0
a*x1 + b*x2 + d*x4 - h*(x1 + x2 + x4) = hsum
a*x1 + b*x2 + d*x4 - i*(x1 + x2 + x4) = isum
a*x1 + b*x2 + e*x5 - j*(x1 + x2 + x5) = jsum
a*x1 + b*x2 + e*x5 - k*(x1 + x2 + x5) = ksum
a*x1 + c*x3 + f*x6 - l*(x1 + x3 + x6) = lsum
a*x1 + c*x3 + f*x6 - m*(x1 + x3 + x6) = msum
a*x1 + c*x3 + g*x7 - n*(x1 + x3 + x7) = nsum
a*x1 + c*x3 + g*x7 - o*(x1 + x3 + x7) = osum
x1, x2, x3, ..., x7 are integers
a=100
b = (a + u1) && c = (a - u2) && d = (b + u3) && e = (b - u4) &&
f = (c + u5) && g = (c - u6) && h = (d + u7) && i = (d - u8) &&
j = (e + u9) && k = (e - u10) && l = (f + u11) && m = (f - u12) &&
n = (g + u13) && o = (g - u14)
0.1 <= u1, u2, u3, u4, u5, u6, u7, u8, u9, u10, u11, u12, u13, u14 <= 3
In the above problem a is 100; the rest of the variables b, c, d, e, f, g, h, i, j, k, l, m, n, o can be derived by adding or subtracting u1, u2, ..., u14. Now I need to specify that each value of u must be given to 2 decimal places.
Also here is the link for the same: Non-linear constrained integer optimization (you can check out this one too!)
Since the answer depends on combinations (products, sums) of x and y, you can't define their precisions separately. If the precision of x is 2, then all internal precision will be 2. – Corey Kelly Jun 17 '13 at 15:14
Say x is some real life quantity that is ONLY available up to 2 decimal places and we need y to be as precise as possible; it doesn't matter what the precision of xy is – Michio kaku Jun 17 '13 at 16:08
You can define the input and output precisions of x and y easily. Working Precision determines how many decimal places are maintained for internal computation. If you want y to have a precision of 7, then use WorkingPrecision->7. Your question is one of error propagation, not a Mathematica issue. – Corey Kelly Jun 17 '13 at 16:13
It might be helpful to see the actual problem you're working on. In the example you give, if x is known, then you can't Maximize with respect to it, and the inequality lets you directly find y. – Corey Kelly Jun 17 '13 at 16:27
@CoreyKelly I have updated the problem as requested. – Michio kaku Jun 17 '13 at 16:58
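The answer below rescales each variable by a power of ten (x by 10^2, y by 10^7) and optimizes over integers, which gives each variable its own fixed number of decimal places. A minimal Python sketch of the same idea on the toy constraint x*y < 100 from the question (the search bounds are my own arbitrary assumptions, not part of the original answer):

```python
# x is carried to 2 decimal places, y to 7: search scaled integers, rescale at the end.
best = None
for xi in range(1, 1001):                # x = xi / 100, i.e. x in (0, 10] (arbitrary bound)
    x = xi / 10**2
    yi = int((100 / x) * 10**7) - 1      # largest 7-decimal y with x * y < 100
    y = yi / 10**7
    if best is None or x * y > best[0]:
        best = (x * y, x, y)
print(best)   # the product approaches 100 as x shrinks
```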
N[{#, {x/10^2, y/10^7} /. #2} & @@ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5570409893989563, "perplexity": 1827.6207172506101}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936464876.43/warc/CC-MAIN-20150226074104-00009-ip-10-28-5-156.ec2.internal.warc.gz"} |
https://math.stackexchange.com/questions/2331283/how-to-integrate-this-int-frac-cos5x-cos-4x1-2-cos-3x-dx | # How to integrate this : $\int \frac{\cos5x+\cos 4x}{1-2\cos 3x}\,dx$ [duplicate]
How to integrate this :
$\int \frac{\cos 5x+\cos 4x}{1-2\cos 3x}\,dx$
My approach :
We know that $\cos A+\cos B = 2\cos\left(\frac{A+B}{2}\right)\cos\left(\frac{A-B}{2}\right)$, but it is not working here; please suggest. Thanks.
The integrand function can be greatly simplified. If $n\in\mathbb{N}$ we have that $\cos(nx)$ is a polynomial in $\cos(x)$ with degree $n$. By setting $z=\cos x$ we have: $$\cos(4x)+\cos(5x) = T_4(z)+T_5(z) = 1+5 z-8 z^2-20 z^3+8 z^4+16 z^5\tag{1}$$ $$1-2\cos(3x) = 1- 2\,T_3(z) = 1 + 6 z - 8 z^3\tag{2}$$ and it is not difficult to notice that the RHS of $(2)$ is a divisor of the RHS of $(1)$: $$\frac{\cos(4x)+\cos(5x)}{1-2\cos(3x)} = 1 - z - 2 z^2 = -\cos(x)-\cos(2x) \tag{3}$$ so the wanted integral is simply $\color{red}{C-\sin(x)-\frac{1}{2}\sin(2x)}$.
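As a quick mechanical check of the polynomial division in $(3)$, using sympy's Chebyshev polynomials (an added sanity check; variable names are arbitrary):

```python
from sympy import symbols, chebyshevt, cancel

z = symbols('z')
num = chebyshevt(4, z) + chebyshevt(5, z)   # cos(4x) + cos(5x) with z = cos(x)
den = 1 - 2 * chebyshevt(3, z)              # 1 - 2*cos(3x)
print(cancel(num / den))                    # -2*z**2 - z + 1, i.e. -cos(2x) - cos(x)
```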
$$\begin{split}\int\dfrac{\cos 5x+\cos 4x}{1-2\cos 3x}\,dx&=\int\dfrac{2\cos \left(\frac{5x+4x}{2}\right)\cos \left(\frac{5x-4x}{2}\right)}{1-2\left[2\cos^2 \left(\frac{3x}{2}\right)-1\right]}\,dx\\&=\int\dfrac{2\cos \left(\frac{9x}{2}\right)\cos \left(\frac{x}{2}\right)}{3-4\cos^2 \left(\frac{3x}{2}\right)}\,dx\\&=\int\dfrac{2\cos \left(\frac{9x}{2}\right)\cos \left(\frac{x}{2}\right)\cos \left(\frac{3x}{2}\right)}{3\cos \left(\frac{3x}{2}\right)-4\cos^3 \left(\frac{3x}{2}\right)}\,dx\\&=-\int\dfrac{2\cos \left(\frac{9x}{2}\right)\cos \left(\frac{x}{2}\right)\cos \left(\frac{3x}{2}\right)}{\cos \left(\frac{9x}{2}\right)}\,dx\\&=-\int 2\cos \left(\frac{x}{2}\right)\cos \left(\frac{3x}{2}\right)\,dx\\&=-\int\left(\cos 2x + \cos x\right)\,dx\\&=-\int\cos 2x\,dx - \int\cos x\,dx\\&=-\dfrac{\sin 2x}{2} - \sin x + C\\&=-\left(\dfrac{\sin 2x}{2} + \sin x\right) + C \end{split}$$ Remember the fact that $$\cos 3x = 4\cos^3x - 3\cos x$$
I think the following is easier. $$\cos5x+\cos4x=\cos5x+\cos{x}+\cos4x+\cos2x-\cos{x}-\cos{2x}=$$ $$=2\cos3x\cos2x+2\cos3x\cos{x}-\cos{x}-\cos{2x}=(2\cos3x-1)(\cos2x+\cos{x})$$ and the rest is smooth. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.9936215281486511, "perplexity": 1313.683722043471}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370504930.16/warc/CC-MAIN-20200331212647-20200401002647-00457.warc.gz"} |
http://www.mathplanet.com/education/algebra-1/systems-of-linear-equations-and-inequalities/the-elimination-method-for-solving-linear-systems | # The elimination method for solving linear systems
Another way of solving a linear system is to use the elimination method. In the elimination method you either add or subtract the equations to get an equation in one variable.
When the coefficients of one variable are opposites you add the equations to eliminate a variable and when the coefficients of one variable are equal you subtract the equations to eliminate a variable.
Example
$\left\{\begin{matrix} 3y+2x=6\\ 5y-2x=10 \end{matrix}\right.$
We can eliminate the x-variable by addition of the two equations.
$3y+2x=6$
$\underline{+\: 5y-2x=10}$
$8y=16$
$y=2$
The value of y can now be substituted into either of the original equations to find the value of x
$3y+2x=6$
$3\cdot {\color{green} 2}+2x=6$
$6+2x=6$
$x=0$
The solution of the linear system is (0, 2).
To avoid errors make sure that all like terms and equal signs are in the same columns before beginning the elimination.
If you don't have equations where you can eliminate a variable by addition or subtraction directly, you can begin by multiplying one or both of the equations by a constant to obtain an equivalent linear system where you can eliminate one of the variables by addition or subtraction.
Example
$\left\{\begin{matrix} 3x+y=9\\ 5x+4y=22 \end{matrix}\right.$
Begin by multiplying the first equation by -4 so that the coefficients of y are opposites
${\color{green}{-4}}\cdot \left (3x+y\right )=9\cdot {\color{green}{-4}}$
$5x+4y=22$
$-12x-4y=-36$
$\underline{+5x+4y=22 }$
$-7x=-14$
$x=2$
Substitute x in either of the original equations to get the value of y
$3x+y=9$
$3\cdot {\color{green} 2}+y=9$
$6+y=9$
$y=3$
The solution of the linear system is (2, 3).
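Both worked examples can be checked programmatically. Here is a minimal Python sketch of the elimination procedure for a general system $a_1x + b_1y = c_1$, $a_2x + b_2y = c_2$ (exact arithmetic via fractions; the function name is mine):

```python
from fractions import Fraction

def eliminate(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 by elimination:
    scale equation 1 so the x-coefficients are opposites, then add.
    Assumes a1 != 0 and a unique solution."""
    a1, b1, c1, a2, b2, c2 = map(Fraction, (a1, b1, c1, a2, b2, c2))
    k = -a2 / a1                        # multiplier for equation 1
    y = (k * c1 + c2) / (k * b1 + b2)   # adding the equations eliminates x
    x = (c1 - b1 * y) / a1
    return x, y

print(eliminate(2, 3, 6, -2, 5, 10))   # first example  -> x = 0, y = 2
print(eliminate(3, 1, 9, 5, 4, 22))    # second example -> x = 2, y = 3
```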
## Video lesson
Solve the linear system using the elimination method
$\left\{\begin{matrix} 2y - 4x = 2 \\ y = -x + 4 \end{matrix}\right.$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7885391116142273, "perplexity": 212.88441351087008}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860110764.59/warc/CC-MAIN-20160428161510-00010-ip-10-239-7-51.ec2.internal.warc.gz"}
https://tensorflow.google.cn/versions/r2.1/api_docs/python/tf/math/reciprocal?hl=zh-cn | Help protect the Great Barrier Reef with TensorFlow on Kaggle
tf.math.reciprocal
Computes the reciprocal of x element-wise.
I.e., $$y = 1 / x$$.
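For example (a standard usage sketch; outputs shown in comments):

```python
import tensorflow as tf

x = tf.constant([2.0, 0.5, 4.0])
y = tf.math.reciprocal(x)   # element-wise 1 / x
print(y.numpy())            # [0.5  2.   0.25]
```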
Args:
x: A Tensor. Must be one of the following types: bfloat16, half, float32, float64, int32, int64, complex64, complex128.
name: A name for the operation (optional).

Returns:
A Tensor. Has the same type as x.
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5944985747337341, "perplexity": 14596.082771918174}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304928.27/warc/CC-MAIN-20220126071320-20220126101320-00282.warc.gz"}
https://crypto.stackexchange.com/questions/35388/what-is-the-major-difference-between-fips-186-2-and-fips-186-4/35412 | # What is the major difference between FIPS 186-2 and FIPS 186-4?
Can anyone please tell me the major difference between FIPS 186-2 and FIPS 186-4?
I know with FIPS 140-2 they want the DSS standard to be FIPS 186-4, but what difference does it make?
To clarify scope:
• FIPS 140-2 itself doesn't say anything about DSS, though it has 186-2 as a reference. It was published in 2001, before 186-3 and -4, and has not been superseded. After 140-3 spent 8 years in draft they recently decided to consider using ISO/IEC 19790 instead!
• 140-2 Annex A (Approved functions) is updated frequently and does now reference 186-4.
• most people don't want just 140-2 implementation but rather 140-2 certification under the Cryptographic Module Validation Program (CMVP) and that is controlled by the 140-2 Implementation Guidance linked at that page (currently and usually under 'Announcements' because it keeps changing, and always in 'Standards').
The current IG has a section on 'Validating the Transition from FIPS 186-2 to FIPS 186-4' in W.2; formerly it was G.15. As indicated there, the technical changes between 186-2 and -4 were, if I haven't missed any:
• delete several specific RNGs and instead require RBGs Approved by a separate standard, currently SP800-90A
• DSA: add cases for $p$ size 2048 with $q$ size 224 or 256, and 3072 with 256, using hashes from FIPS 180 (now SHA-224, SHA-256, SHA-512/224 or SHA-512/256). Note 186-2 change notice 1 already eliminated $p$ sizes below 1024 which were in -0 through -2 original.
• DSA: expand the parameter generation algorithms to prefer Shawe-Taylor provable primes while still allowing Miller-Rabin with optional Lucas probable primes, and use a strength-matched hash; explicitly specify parameter validation (including legacy parameters using the -2 and earlier method) which had been implicit
• DSA and ECDSA: more robust privatekey and $k$ (nonce) generation
• RSA: allow RSA signature schemes PKCS1-v1_5 and PSS from PKCS#1v2.1 (with a constraint on salt for PSS) in addition to previous X9.31. Note these were already Approved in 140-2 Annex A, so this just moves them to the correct place in the document set.
• RSA: restrict $n$ size to 1024, 2048 or 3072, restrict $e$ to the range $2^{16}+1$ to $2^{256}-1$, and specify RSA privatekey generation in detail with several options. This prohibits one traditionally popular $e$, namely 3; F4 (65537) is allowed and IME more popular anyway. (A small sketch of these constraints follows this list.)
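As a toy illustration of those RSA public-key constraints (the function is my own sketch, not from the standard):

```python
def fips186_4_rsa_public_key_ok(n_bits, e):
    """Toy check of the 186-4 constraints summarised above; oddness of e is
    required by the standard itself, not restated in this summary."""
    return n_bits in (1024, 2048, 3072) and e % 2 == 1 and 2**16 < e < 2**256

print(fips186_4_rsa_public_key_ok(2048, 65537))   # True  (F4)
print(fips186_4_rsa_public_key_ok(2048, 3))       # False (e = 3 is out of range)
```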
There were also major editorial changes between 186-2 and -3, reorganizing some things, changing notation, and adding a lot of explanation about what digital signatures are and aren't good for, and why privatekeys must be private, and so on. I'm not going to try to cover all that; get the docs and read for yourself if you want. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4216047525405884, "perplexity": 5876.180478592859}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363134.25/warc/CC-MAIN-20211205005314-20211205035314-00428.warc.gz"} |
https://www.sources.com/SSR/Docs/SSRW-Allele_frequency.htm | Home | Sources Directory | News Releases | Calendar | Articles | RSS | Contact |
## Allele frequency
Allele frequency is the proportion of all copies of a gene that is made up of a particular gene variant (allele). In other words, it is the number of copies of a particular allele divided by the number of copies of all alleles at the genetic place (locus) in a population. It can be expressed for example as a percentage. In population genetics, allele frequencies are used to depict the amount of genetic diversity at the individual, population, and species level. It is also the relative proportion of all alleles of a gene that are of a designated type.
Given the following:
1. a particular locus on a chromosome and the gene occupying that locus
2. a population of N individuals carrying n loci in each of their somatic cells (e.g. two loci in the cells of diploid species, which contain two sets of chromosomes)
3. different alleles of the gene exist
4. one allele exists in a copies
then the allele frequency is the fraction or percentage of all the occurrences of that locus that is occupied by a given allele and the frequency of one of the alleles is a/(n*N).
For example, if the frequency of an allele is 20% in a given population, then among population members, one in five chromosomes will carry that allele. Four out of five will be occupied by other variant(s) of the gene. Note that for diploid genes the fraction of individuals that carry this allele may be nearly two in five. If the allele distributes randomly, then the binomial theorem will apply: 32% of the population will be heterozygous for the allele (i.e. carry one copy of that allele and one copy of another in each somatic cell) and 4% will be homozygous (carrying two copies of the allele). Together, this means that 36% of diploid individuals would be expected to carry an allele that has a frequency of 20%. However, alleles distribute randomly only under certain assumptions, including the absence of selection. When these conditions apply, a population is said to be in Hardy-Weinberg equilibrium.
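The 32%/4%/36% figures follow directly from the binomial reasoning; a quick check (assuming Hardy-Weinberg proportions):

```python
p = 0.20              # allele frequency from the example above
q = 1 - p

print(2 * p * q)      # ≈ 0.32 -> 32% heterozygous (one copy)
print(p * p)          # ≈ 0.04 -> 4% homozygous (two copies)
print(2*p*q + p*p)    # ≈ 0.36 -> 36% carry the allele at all
```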
The frequencies of all the alleles of a given gene often are graphed together as an allele frequency distribution histogram, or allele frequency spectrum. Population genetics studies the different "forces" that might lead to changes in the distribution and frequencies of alleles; in other words, to evolution. Besides selection, these forces include genetic drift, mutation and migration.
## Calculation of allele frequencies from genotype frequencies
If f(AA), f(Aa), and f(aa) are the frequencies of the three genotypes at a locus with two alleles, then the frequency p of the A-allele and the frequency q of the a-allele are obtained by counting alleles. Because each homozygote AA consists only of A-alleles, and because half of the alleles of each heterozygote Aa are A-alleles, the total frequency p of A-alleles in the population is calculated as
$p=f(\mathbf{AA})+ \frac{1}{2}f(\mathbf{Aa})= \mbox{frequency of A}$
Similarly, the frequency q of the a allele is given by
$q=f(\mathbf{aa})+ \frac{1}{2}f(\mathbf{Aa})= \mbox{frequency of a}$
It would be expected that p and q sum to 1, since they are the frequencies of the only two alleles present. Indeed they do:
$p+q=f(\mathbf{AA})+f(\mathbf{aa})+f(\mathbf{Aa})=1$
and from this we get:
q = 1 - p and p = 1 - q
If there are more than two different allelic forms, the frequency for each allele is simply the frequency of its homozygote plus half the sum of the frequencies for all the heterozygotes in which it appears. Allele frequency can always be calculated from genotype frequency, whereas the reverse requires that the Hardy-Weinberg conditions of random mating apply. This is because there are three genotype frequencies but only two allele frequencies: it is easy to reduce from three to two, but the reverse cannot be done without further assumptions.
## An example population
Consider a population of ten individuals and a given locus with two possible alleles, A and a. Suppose that the genotypes of the individuals are as follows:
AA, Aa, AA, aa, Aa, AA, AA, Aa, Aa, and AA
Then the allele frequencies of allele A and allele a are:
$p=prob_A=\frac{2+1+2+0+1+2+2+1+1+2}{20}=0.7$
$q=prob_a=\frac{0+1+0+2+1+0+0+1+1+0}{20}=0.3$
so if an individual is chosen at random there is a 70% chance it will carry the A allele, and a 30% chance it will have the a allele.
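The same counting can be written as a few lines of code; a minimal sketch in Python (the variable names are illustrative):

```python
# Allele frequencies for the ten-individual example above.
genotypes = ["AA", "Aa", "AA", "aa", "Aa", "AA", "AA", "Aa", "Aa", "AA"]

copies = 2 * len(genotypes)                     # 20 allele copies at a diploid locus
count_A = sum(g.count("A") for g in genotypes)  # 14 copies of allele A
p = count_A / copies
q = 1 - p
print(p, q)  # 0.7 0.3
```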
## The effect of mutation
Let μ be the mutation rate from allele A to some other allele a (the probability that a copy of gene A will become a during the DNA replication preceding meiosis). If $p_t$ is the frequency of the A allele in generation $t$, then $q_t = 1 - p_t$ is the frequency of the a allele in generation $t$, and if there are no other causes of gene frequency change (no natural selection, for example), then the change in allele frequency in one generation is
$\Delta p=p_t-p_{t-1}=\left(p_{t-1}-\mu p_{t-1}\right)-p_{t-1}=-\mu p_{t-1}$
where $p_{t-1}$ is the frequency of the preceding generation. This tells us that the frequency of A decreases (and the frequency of a increases) by an amount that is proportional to the mutation rate μ and to the proportion $p$ of all the genes that are still available to mutate. Thus $\Delta p$ gets smaller as $p$ itself decreases, because there are fewer and fewer A alleles to mutate into a alleles. We can make the approximation that, after $n$ generations of mutation,
$p_n=p_0e^{-n\mu}$
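The quality of this approximation is easy to check against the exact recursion $p_t=(1-\mu)p_{t-1}$; the values of μ and p0 below are illustrative, not taken from the text:

```python
import math

mu = 1e-5   # illustrative mutation rate
p0 = 0.7    # illustrative starting frequency

for n in (10**3, 10**4, 10**5):
    exact = p0 * (1 - mu) ** n        # iterating p_t = (1 - mu) * p_{t-1}
    approx = p0 * math.exp(-n * mu)   # the approximation p_n = p0 * exp(-n * mu)
    print(n, exact, approx)
```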
http://math.ecnu.edu.cn/RCFOA/seminar_template.php?id=194 | On the decomposition into discrete, type II and type III C*-algebras
Chi-Keung Ng
14:00 to 15:00, October 29th, 2014, Science Building A1510
Abstract:
We complete a classification scheme for $C^*$-algebras. We show that the classes of discrete $C^*$-algebras (as defined by Peligrad and Zsid\'{o}), type ${\rm I\!I}$, and type ${\rm I\!I\!I}$ $C^*$-algebras (as defined by Cuntz and Pedersen) are closed under strong Morita equivalence and taking essential extensions. Furthermore, there exist the largest discrete finite ideal $A_{{\rm d},1}$, the largest discrete anti-finite ideal $A_{{\rm d},\infty}$, the largest type ${\rm I\!I}$ finite ideal $A_{{\rm I\!I},1}$, the largest type ${\rm I\!I}$ anti-finite ideal $A_{{\rm I\!I},\infty}$, and the largest type ${\rm I\!I\!I}$ ideal $A_{\rm I\!I\!I}$ of a $C^*$-algebra $A$ with $A_{{\rm d},1} + A_{{\rm d},\infty} + A_{{\rm I\!I},1} + A_{{\rm I\!I},\infty} + A_{\rm I\!I\!I}$ being an essential ideal of $A$. When $A$ is a $W^*$-algebra, these ideals coincide with the largest type ${\rm I}$ finite part, type ${\rm I}$ infinite part, type ${\rm I\!I}$ finite part, type ${\rm I\!I}$ infinite part and type ${\rm I\!I\!I}$ part, respectively. Moreover, this classification scheme observes many good rules. We find that any prime $C^*$-algebra is of one of the five types: finite discrete, anti-finite discrete, finite type ${\rm I\!I}$, anti-finite type ${\rm I\!I}$ or type ${\rm I\!I\!I}$. If $A$ has a Hausdorff primitive spectrum, or $A$ is an $AW^*$-algebra, or $A$ is the local multiplier algebra of another $C^*$-algebra, then $A$ is a continuous field of prime $C^*$-algebras over a locally compact Hausdorff space $\Omega$, with each fiber being non-zero and of one of the five types. If, in addition, $A$ is discrete (respectively, anti-finite), there is an open dense subset of $\Omega$ on which each fiber is discrete (respectively, anti-finite).
https://www.freemathhelp.com/forum/threads/109181-Simple-Bernoulli-trial-prob-of-pipe-failure-during-inspection?p=420068&mode=linear | # Thread: Simple Bernoulli trial: prob of pipe failure during inspection
1. ## Simple Bernoulli trial: prob of pipe failure during inspection
I'm really getting baffled with this question that has taken me far too long to complete and would love some guidance.
An accident caused the catastrophic failure of metal pipes in a factory. There were
six metal pipes in the garage at any given time.
Table 1 shows the numbers of metal pipe failures that had occurred on
each of the 23 previous inspections.
Table 1: Number of metal pipe failures

| Number of failed metal pipes | 0 | 1 | 2 | 3 | 4 | 5 | 6 |
|------------------------------|----|---|---|---|---|---|---|
| Number of inspections | 16 | 5 | 2 | 0 | 0 | 0 | 0 |
(i) Let p be the probability that a pipe fails on an inspection. What
distribution is appropriate to describe the failure or non-failure of a
particular metal pipe on a particular inspection?
For this I have said that this is a Bernoulli Distribution due to only two possible outcomes of failure and non-failure.
(ii) A reasonable estimate of p is 3/46 or 0.065. Explain where this number comes from.
This is where I am getting stuck on. I cannot work out what p is using the information they have provided.
2. Can you calculate the mean of the given distribution? What is the mean of the Binomial Distribution in terms of its parameter, p?
3. Originally Posted by tkhunny
Can you calculate the mean of the given distribution? What is the mean of the Binomial Distribution in terms of its parameter, p?
The mean of a binomial distribution is np, but I don't know how to calculate the p in this particular question.
4. In all situations, the mean is calculated by the definition: $\sum x_{i}\cdot p\left(x_{i}\right)$
In your case, you have 23 inspections. Using the formula above, we have: 0*(16/23) + 1*(5/23) + 2*(2/23)
There is your Mean. Now what?
5. Originally Posted by tkhunny
In all situations, the mean is calculated by the definition: $\sum x_{i}\cdot p\left(x_{i}\right)$
In your case, you have 23 inspections. Using the formula, above, we have: 0*(16/23) + 1*(5/23) + 2*(2/23)
There is your Mean. Now what?
I genuinely don't know what to do after this.
6. n*p = Mean
n = 23
Mean = ???
Solve for p.
7. Originally Posted by tkhunny
n*p = Mean
n = 23
Mean = ???
Solve for p.
If the mean is 9/23, and n=23,
then p = (9/23) / 23 = 9/529
8. That's where I would start. Good work.
9. Originally Posted by tkhunny
That's where I would start. Good work.
But the answer for p in the question is 3/46 or 0.065, whereas I got 9/529.
10. Well, that was a Binomial Approximation. Is there a distribution that you feel might be more appropriate? Poisson? Beta? Weibull?
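For the record, the quoted estimate 3/46 falls out if n is taken to be the 6 pipes examined on each inspection rather than the 23 inspections; a minimal sketch of that arithmetic (Python, using the table's counts):

```python
from fractions import Fraction

failure_counts = {0: 16, 1: 5, 2: 2}         # failures per inspection -> number of inspections
inspections = sum(failure_counts.values())    # 23
pipes_per_inspection = 6

total_failures = sum(k * v for k, v in failure_counts.items())  # 9
mean_failures = Fraction(total_failures, inspections)           # 9/23 per inspection

# Each inspection is 6 Bernoulli trials, so mean = 6 * p, giving p = (9/23) / 6:
p = mean_failures / pipes_per_inspection
print(p, float(p))  # 3/46 ~ 0.0652
```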
https://www.mathworks.com/company/newsletters/articles/cancer-diagnostics-with-deep-learning-and-photonic-time-stretch.html.html | # Cancer Diagnostics with Deep Learning and Photonic Time Stretch
By Bahram Jalali, Claire Lifan Chen, and Ata Mahjoubfar, University of California, Los Angeles (UCLA)
Cancer patients receiving chemotherapy- or immunotherapy-based treatments must undergo regular CT and PET scans—and in some cases, new biopsies—to evaluate the efficacy of the treatment. Flow cytometry, a method for identifying circulating tumor cells (CTCs) via a simple blood test, is much less invasive than scans and biopsies, and could be a game-changer in cancer treatment.
In flow cytometry, cells are examined as they pass one-by-one through a small opening in a flow cytometer. In traditional flow cytometry, the cells require fluorescent labeling, which can affect cellular behavior and compromise viability. Imaging flow cytometers do not require labels, but at camera speeds faster than 2000 cells per second they produce blurred images, making it impractical to screen a cell population large enough to find rare abnormal cells.
Our group in the photonics lab at UCLA has developed a time stretch quantitative phase imaging (TS-QPI) system that enables accurate classification of large sample sizes without biomarker labels (Figure 1). This system combines imaging flow cytometry, photonic time stretch technology (see sidebar), and machine learning algorithms developed in MATLAB®, and can classify cells with more than 95% accuracy.
Figure 1. Dr. Jalali with the TS-QPI system.
## Selecting Features
Our TS-QPI system generates 100 gigabytes of data per second, a firehose equivalent to 20 HD movies per second. For a single experiment, in which every cell in a 10-milliliter blood sample is imaged at almost 100,000 cells per second, the system generates from 10 to 50 terabytes of data.
Working in MATLAB with Image Processing Toolbox™, we developed a machine vision pipeline for extracting biophysical features from cell images. The pipeline also includes CellProfiler, an open-source cell image analysis package written in Python®. We extracted over 200 features from each cell, grouped into three categories: morphological features that characterize the cell’s size and shape, optical phase features that correlate with the cell’s density, and optical loss features that correlate with the size of organelles within the cell. Linear regression indicated that 16 of these features contained most of the information required for classification.
## Evaluating Machine Learning Algorithms
A principal benefit of MATLAB is the ability to test a wide variety of machine learning models in a short amount of time. We compared four classification algorithms from Statistics and Machine Learning Toolbox™: naive Bayes, support vector machine (SVM), logistic regression (LR), and a deep neural network (DNN) trained by cross entropy and backpropagation.
In tests conducted using samples with a known concentration of CTCs, all four algorithms (Bayes, SVM, LR, and DNN) achieved better than 85% accuracy (Figure 2). We further enhanced the accuracy, consistency, and balance between sensitivity and specificity of our machine learning classification by combining deep learning with global optimization of the receiver operating characteristics (ROC). Implemented in MATLAB, this novel approach increased classification accuracy to 95.5%.
Figure 2. Comparison of the accuracy of various machine learning techniques for classifying blood cells.
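For readers who want to reproduce this kind of comparison outside MATLAB, a rough analogue can be sketched in Python with scikit-learn; the synthetic 16-feature data below merely stands in for the extracted cell features:

```python
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# Stand-in for the 16 selected biophysical features and the cell labels.
X, y = make_classification(n_samples=1000, n_features=16, random_state=0)

models = {
    "naive Bayes": GaussianNB(),
    "SVM": SVC(),
    "logistic regression": LogisticRegression(max_iter=1000),
    "neural network (MLP)": MLPClassifier(max_iter=1000, random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```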
## Accelerating Experiments with Parallel Computing
Because we were working with big data, it often took more than a week to complete our image processing and machine learning processes. To shorten this turnaround time, we parallelized our analyses using a 16-core processor and Parallel Computing Toolbox™. Using a simple parallel for-loop (parfor), we ran our processes concurrently on the 16 processors, reducing the time needed to complete the analysis from eight days to approximately half a day.
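The article's parallelization uses MATLAB's parfor; the same pattern in Python would look roughly like the following sketch, where process_chunk is a stand-in for the per-cell analysis:

```python
from multiprocessing import Pool

def process_chunk(chunk):
    # Stand-in for feature extraction on a batch of cell images.
    return [x * x for x in chunk]

if __name__ == "__main__":
    chunks = [list(range(i, i + 4)) for i in range(0, 64, 4)]
    with Pool(processes=16) as pool:  # one worker per core, as in the article
        results = pool.map(process_chunk, chunks)
    print(len(results), "chunks processed")
```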
## Modeling and Refining the Experimental Setup
In the photonics lab at UCLA, MATLAB is the workhorse for model development and data analysis. We used MATLAB to develop a model of the complete experimental setup, from the optics and laser pulses all the way to the classification of individual cells (Figure 3).
Figure 3. Diagram of the time stretch quantitative phase imaging and analytics system.
We used this model to guide enhancements to our setup. For example, to improve the signal-to-noise ratio we used the model to simulate specific gain coefficients. The simulation results showed us how and where changes to the setup could improve overall performance.
Modeling and simulating the system in MATLAB has saved us months of experimental time and is guiding our next steps. We are currently incorporating detailed models of individual cells into the overall system model. These models will enable us to make better-informed tradeoffs between spatial resolution and phase resolution based on the types of cells we are classifying.
The system we developed is not limited to classifying cancer cells. We have also used it to classify algae cells based on their lipid content and suitability as biofuels. The only significant change we made was to the surface coating within the channel that the cells flow through. We made no changes to the machine learning pipeline that underpins the analysis (Figure 4); it learned on its own that optical loss and phase features were more important than morphological features in classification of algae cells, whereas the reverse held true for cancer cells.
Figure 4. Machine learning pipeline: cancer cell and algal cell classification.
## How Photonic Time Stretch Works
The TS-QPI system creates a train of laser pulses with widths measured in femtoseconds. Lenses, diffraction gratings, mirrors, and a beam splitter disperse the laser pulses into a train of rainbow flashes that illuminate the cells passing through the cytometer. Spatial information on each cell is encoded in the spectrum of a pulse. The optical dispersion imposes varying delays to different wavelength components. Processing the signals optically in this way slows them sufficiently to enable real-time digitization using an electronic analog-to-digital converter (ADC).
The relatively low number of photons collected during the short pulse width and the drop in optical power caused by the time stretch make it difficult to detect the resulting signal. We compensate for this loss in sensitivity by using a Raman amplifier. By slowing the signal and concurrently amplifying it, the system can simultaneously capture quantitative optical phase shift and intensity loss images for each cell in the sample.
Bahram Jalali is a professor and Northrop Grumman Opto-Electronic Chair of Electrical Engineering at UCLA. His research and teaching interests include silicon photonics and fiber optic communication, real-time streaming data acquisition and processing, biophotonics, rare cell detection, blood screening, nondestructive material testing and characterization, and rogue wave phenomena.
Claire Lifan Chen is a senior application engineer at Lumentum Operations LLC. She received a Ph.D. in electrical engineering and an M.Sc. in bioengineering from UCLA in 2015 and 2012, respectively. Her research interests include machine learning, data acquisition and analytics, image processing, and high-throughput imaging with applications in biomedical and information technologies.
Ata Mahjoubfar holds a Ph.D. degree in electrical engineering from UCLA, where he is currently a postdoctoral scholar. His research interests include artificial intelligence, machine vision and learning, image and signal processing, imaging and visualization, ultrafast data acquisition and analytics, biomedical technology, and financial engineering.
Published 2017 - 93090v00 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.357408732175827, "perplexity": 2139.73167173575}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689823.92/warc/CC-MAIN-20170924010628-20170924030628-00109.warc.gz"} |
http://mathoverflow.net/questions/88640/asymptotic-behaviour-of-int-fta-cosatdt?sort=votes | Asymptotic behaviour of $\int f(t)^a\cos(at)dt$
Are there any known necessary or sufficient conditions such that $$\lim_{a\rightarrow \infty}\int_{-1}^1f(t)^a\cos(at)dt=0$$ where $f:[-1,1]\rightarrow[1,\infty)$ is an even smooth concave real function such that $f(-1)=f(1)=1$?
Sorry for the late reply (for the last month I hardly had any time for anything like MO). The answer (somewhat vague) is that the curve $t\mapsto (t,\log f(t))$ ($-1\le t\le 1$) has to be the image of the upper half circle under some mapping $F$, analytic in $\mathbb C\setminus\left((-\infty,-1]\cup[1,+\infty)\right)$, which is symmetric ($F(\bar z)=\overline{F(z)}$), one to one in the unit disk, and whose derivative $F'$ has decent boundary behavior. The "decent boundary behavior" is the vague part here. Unfortunately, I cannot make it less vague unless someone tells me how exactly to recognize the distributions on $[-1,1]$ whose Fourier transform tends to $0$. It is clear that they are necessarily fairly tame (of not more than the first order, etc.) and that all $L^1$-functions are there, but where exactly you are in between is a mystery to me. On the crudest level, it tells you that $f$ must be analytic.
Let me know if such description is of any interest to you. If it is, I'll post the details.
EDIT: OK, here go the details. It is a somewhat long story, so I may need more than one patch of free time to type it. I apologize in advance for bumping this thread. Also, since the integral against $\sin at$ is zero, we can just as well talk about the full Fourier transform, i.e., the integration against $e^{-iat}$.
1) It is actually quite surprising that such functions exist at all. After all, the jump discontinuity normally means that the best rate of decay of the Fourier transform is $1/a$ and that slow rate of decay is played against the exponential growth of the integrand. So, I'll start with constructing one such function. It'll be easier to work with $g(t)=\log f(t)$, which is a smooth non-negative function with endpoint values $0$. Put $g(t)=\delta(1-t^2)$ with small $\delta>0$. Then the integral can be written as the path integral $\int_\gamma \frac{dt}{dz}e^{-iaz}\,dz$ where $\gamma$ is the curve $t\mapsto z(t)=t+ig(t)$. Note that $z(t)=t+i\delta(1-t^2)$ is an analytic function of $t$ and for small $\delta>0$, it is invertible in a fairly large disk. Thus, we can talk about its analytic branch $t(z)$ that coincides with $\Re z$ on $\gamma$ and is analytic in (a neighborhood of) the region $D$ bounded by $[-1,1]$ and $\gamma$. So, $t'(z)$ is also analytic there and we can shift the contour of integration from $\gamma$ to $[-1,1]$, which results in the representation $\int_{-1}^1 t'(z)e^{-iaz}\,dz$, which is just the ordinary Fourier transform of the integrable (and even smooth) function $t'(z)$ restricted to $[-1,1]$, so the integral, indeed, tends to $0$ in this case.
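A quick numerical sanity check of this example (the parameters are kept modest so that double-precision cancellation does not swamp the small answer):

```python
import numpy as np
from scipy.integrate import quad

delta = 0.05
f = lambda t: np.exp(delta * (1.0 - t**2))  # log f(t) = delta * (1 - t^2)

for a in (10, 50, 200):
    val, _ = quad(lambda t: f(t)**a * np.cos(a * t), -1.0, 1.0, limit=2000)
    print(a, val)  # the values shrink toward 0 as a grows
```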
2) What I'd like to show now is that this contour integral representation and the possibility to shift the contour is the only possible reason for this effect. The starting point is that if the integral is bounded on the entire real line (the boundedness on the negative semi-axis is trivial and the boundedness on the positive one is less than what has been requested), then there exists a distribution $T$ supported on $[-1,1]$ such that the integral equals $\langle T, e^{-iat}\rangle$ for all $a\in\mathbb C$ (that is just a version of Paley-Wiener). Thus, the difference $\langle T, e^{-iat}\rangle-\int_\gamma \frac{dt}{dz}e^{-iaz}\,dz$ vanishes for all $a\in \mathbb C$. Now, the linear span of functions $e^{-iaz}$ is dense in the space of functions analytic in any fixed neighborhood of $D$, meaning that $\langle T, \psi(t)\rangle-\int_\gamma \frac{dt}{dz}\psi(z)\,dz=0$ for every function analytic in some neighborhood of $D$.
We will take $\psi(z)=\frac{1}{z-\zeta}$ with $\zeta\notin D$ and get that the Cauchy integral plus something analytic in $\mathbb C\setminus[-1,1]$ vanishes outside $D$. Note that this Cauchy integral and the distribution part are also well-defined for $\zeta\in D$ and give an analytic function of $\zeta$ there. Moreover, by Plemelj's jump formulae, the boundary values of that function on $\gamma$ are just $\frac{dt}{dz}$ (up to $2\pi i$ and $\pm$, which we aren't concerned with here). The upshot is that $\frac{dt}{dz}$ has an analytic extension to $D$ continuous up to $\gamma$ (here we use that the curve is assumed to be of some decent smoothness; otherwise we'll have to sing a long song of non-tangential boundary values a.e., etc.)
The behavior on $[-1,1]$ may be more complicated in general and the boundary values there exist only in the sense of distributions. The possibility of analytic continuation to the open domain $D$ guarantees only the possibility to shift the contour to something hovering as low over $[-1,1]$ as we wish, i.e., to the subexponential growth of the integral (for which it is necessary and sufficient). However, if you settle for some more reasonable class than $C_0$, say, $L^2$, then $T$ will be just an $L^2$ function and you'll have the classical theory of boundary values that will allow you to show that our distribution is, indeed, the boundary value of the analytic extension of $\frac{dt}{dz}$ and the reason for smallness of the integral is the possibility of the ordinary contour shift. I have no idea what you are going to use all this for, so I prefer to avoid the discussion of all those technical issues. Instead, I'll discuss in detail what the possibility of this analytic extension of the derivative means for the curve $\gamma$ itself.
3) Let $Q$ be the lower unit half-disk $\{z:|z|<1,\Im z<0\}$. Let $\varphi$ be the conformal mapping from $Q$ to $D$ such that the interval $[-1,1]\subset\partial Q$ is mapped to $\gamma$ and the lower semicircle is mapped to $[-1,1]$. The derivative $\varphi'$ is a continuous up to the boundary (except for the points $-1,1$ where it has an easy to control power singularity) non-vanishing function in $D$ (here we use reasonable smoothness of $f$ again). Note that after the composition with $\varphi$, the function $\frac{dt}{dz}$ on $\gamma$ becomes $\frac{(\Re\varphi)'}{\varphi'}$ on $[-1,1]$. This should be extendable analytically to the lower half-disk with "decent boundary values". Since $\varphi'$ has such extension, we conclude that so does $(\Re\varphi)'$. But this function is real-valued, so the Schwarz reflection principle applies and we conclude that it extends analytically to the entire unit disk. Let $(\Re\varphi)'=F$ where $F$ is a symmetric analytic function in the unit disk. The function $\varphi'-F$ is purely imaginary on $[-1,1]$ and extends analytically to the lower half-circle. Thus, using the reflection principle again, we conclude that $\varphi'=F+iG$ where $F,G$ are symmetric analytic functions in the unit disk and $F$ has decent boundary values on the lower semicircle. Thus, $\varphi'$ and $\varphi$ are analytic in the unit disk. Moreover, since $F+iG=\varphi'$ is nice on the lower semicircle, $G$ also has decent boundary values there and therefore, after reflecting, we see that $\varphi'$ has decent boundary values on the upper semicircle. To get the proclaimed description, it suffices now to map the unit disk to the upper half-plane so that the lower semicircle is mapped to $[-1,1]$ and use the reflection principle again for the last time.
That's it (modulo minor technicalities that I swept under the rug, but, as I said, to get into those would make no sense without knowing what exactly you are after).
Thank you for your answer. More details would definitely be helpful. – Roland Bacher Mar 9 '12 at 16:38
Here is a possible beginning. Note first that
$$\int_{-1}^1 f(t)^a \cos(at) dt= 2I_a:= 2\int_{0}^1 f(t)^a \cos(at) dt.$$
Hence, it suffices to investigate $I_a$. Let me first assume that $f'(t) <0$ on $(0,1)$. (Note that if $f'(t_0)=0$ for some $t_0\in (0,1)$ then $f'(t)=0$ on $[0,t_0]$.) This means that the map $t\mapsto f$ is one-to-one. We regard $t$ as a function of $f$. Then the change in variables formula implies.
$$I_a= \int_1^{f(0)} f^a \cos(a t)\frac{dt}{df} df$$
I can make this formula friendlier to the 21st century mathematician by changing notations,
$$t \longleftrightarrow \phi,\;\;\; f \longleftrightarrow x$$
and we can rewrite the above as
$$I_a= \int_1^{x_0} x^a\frac{d\phi}{dx} \cos( a \phi(x) ) dx = \frac{1}{a}\int_1^{x_0} x^a \frac{d}{dx}\Bigl( \sin\bigl(\; a\phi(x)\;\bigr) \Bigr)dx$$
$$=\frac{1}{a}\Bigl( x^a\sin\bigl( a\phi(x)\bigr)\;\Bigr)\Bigr|^{x_0}_1- \int_1^{x_0}x^{a-1}\sin\bigl(\; a\phi(x) \;\bigr)dx.$$
Now observe that $\phi(x_0) =0$, $\phi(1)=1$, so the first term above goes to zero as $a\to\infty$.
At this point it may be useful to look in some books on asymptotics of integrals. A good place to start is
Bleistein & Handelsman: Asymptotic expansions of Integrals, Dover
Also you need to keep in mind that
$$\frac{d\phi}{dx}< 0,\;\;\forall x\in (1,x_0)$$
$$\lim_{x\nearrow x_0} \frac{d\phi}{dx}=-\infty.$$
Thank you for the reference, I will check if it is useful. – Roland Bacher Feb 16 '12 at 17:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9850000143051147, "perplexity": 113.12509409703462}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119647865.10/warc/CC-MAIN-20141024030047-00024-ip-10-16-133-185.ec2.internal.warc.gz"} |
http://openstudy.com/updates/515f41c7e4b01c244f4ac71d | ## Write the sum using summation notation, assuming the suggested pattern continues. 8 - 40 + 200 - 1000 + ... (asked by Ambition)
Hoa: Sum from n = 1 to infinity of (-1)^(n+1) * 8 * 5^(n-1), i.e. $\sum_{n=1}^{\infty} (-1)^{n+1}\cdot 8\cdot 5^{n-1}$. That's what I got. Hope this helps :)
kropot72: $S_{\infty}=\sum_{n=1}^{\infty}8\times (-5)^{n-1}$
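Both answers describe the same series; a one-line check that the closed form reproduces the pattern:

```python
# Terms of 8 * (-5)**(n - 1), equivalently (-1)**(n + 1) * 8 * 5**(n - 1):
print([8 * (-5) ** (n - 1) for n in range(1, 5)])  # [8, -40, 200, -1000]
```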
https://math.stackexchange.com/questions/524439/number-of-triangles-sharing-all-vertices-but-no-sides-with-a-given-octagon | # Number of triangles sharing all vertices but no sides with a given octagon
The number of triangles whose vertices are vertices of a given octagon but none of whose sides comes from the sides of the octagon.
My Attempt: Let $\{A,B,C,D,E,F,G,H\}$ be the vertices of an octagon. It is given that none of the sides of the octagon is a side of the triangle, so we do not take consecutive points.
So we take either $\{A,C,E,G\}$ or $\{B,D,F,H\}$, out of which we will take only three points, because we have to form a triangle.
So This can be done by $\binom{4}{3}+\binom{4}{3} = 8$
But the only options given are 24, 52, 48, and 16.
Where have I made a mistake?
• Why would you keep an odd number of vertices between vertices on an edge? $\{A,D,G\}$ is a perfectly fine triangle. – Patrick Da Silva Oct 13 '13 at 12:14
If two of the vertices are $A$ and $C$, what are the possible choices for the third vertex? Look at the whole list $A,\dots,H$.
• Thanks Michael, got it. For $\bf{\{A,C,E,G\}}$ we first select $2$ vertices from these four and then select the remaining one from $\bf{\{B,D,F,H\}}$; this can be done in $\displaystyle \binom{4}{2}\times \binom{4}{1} = 24$ ways. Similarly for $\bf{\{B,D,F,H\}}$: we first select $2$ vertices from these four and then select the remaining one from $\bf{\{A,C,E,G\}}$, again in $\displaystyle \binom{4}{2}\times \binom{4}{1} = 24$ ways. So the total $= 24+24 = 48$. – juantheron Oct 13 '13 at 12:13
Suppose the vertices are labelled $1,2,\dots,8$. Count the number of triangles for which one of the vertices is $1$. Then the second vertex, going around "clockwise" (let's say the octagon was represented that way) is among $3,4$ or $5$ (if we put one at $6$, the third vertex would be $7$ or $8$, which would give the triangle a side in common with the octagon). For each case you can count the number of options : three options for $3$, two for $4$, one for $5$, for a total of $6$. This gives us $6$ triangles that have a vertex at $1$.
The group $\mathbb Z / 8 \mathbb Z$ acts on the triangles by mapping the triangle with vertices $(a,b,c)$ to the triangle with vertices $(a+k,b+k,c+k)$ (where $k \in \mathbb Z / 8 \mathbb Z$ and you can consider $a,b,c \in \mathbb Z / 8 \mathbb Z$). It is not hard to see that each triangle has an orbit of size $8$ under this action.
So if we count the triangles by considering those who have a vertex at $1$ and then rotate them via the group action, we will triple count because each triangle has three vertices. Therefore the answer is $(6 \times 8)/ 3 = 16$.
Hope that helps,
Hints for the "small" problem at hand:
(i) How many triangles are there without any restrictions? (ii) How many triangles have exactly one side in common with the octagon? (iii) How many triangles have exactly two sides in common with the octagon?
We now consider the more general problem: Given a regular $n$-gon $P$, how many $r$-gons $Q$ with vertices from $P$ are there that don't share a side with $P$?
An admissible $r$-gon leaves $n-r$ unused vertices. Write a string of $n-r+1$ zeros, where the first and the last zero denote the same "distinguished" unused vertex. Choose $r$ of the $n-r$ slots between the zeros and insert a $1$ into these slots. You then have an encoding of an admissible $r$-gon.
There are ${n-r\choose r}$ ways to chose the slots, and there are $n$ ways to choose which vertex of $P$ should be the "distinguished" unused vertex. The total number $N$ of admissible $r$-gons $Q\subset P$ is then given by $$N={n\over n-r}{n-r\choose r}\ ,$$ because the choice of the "distinguished" unused vertex has to be discounted. For $n=8$ and $r=3$ one obtains $N=16$.
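Both the count for the octagon and the general formula are easy to confirm by brute force; a short sketch:

```python
from itertools import combinations
from math import comb

def count_polygons(n, r):
    """Count r-vertex polygons on an n-gon that share no side with it."""
    def adjacent(a, b):
        return (a - b) % n in (1, n - 1)
    return sum(
        1
        for verts in combinations(range(n), r)
        if not any(adjacent(a, b) for a, b in combinations(verts, 2))
    )

print(count_polygons(8, 3))  # 16
print(8 * comb(5, 3) // 5)   # 16 = (n/(n-r)) * C(n-r, r) with n = 8, r = 3
```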
• Thanks Professor, got it. The total number of triangles without any restriction is $\displaystyle \binom{8}{3} = 56$, the number of triangles with exactly one side in common is $\displaystyle \binom{4}{1}\times 8 = 32$, and the number of triangles with exactly two sides in common is $8$. – juantheron Oct 13 '13 at 12:41
http://mathoverflow.net/questions/140104/lower-bound-for-eulers-totient-for-almost-all-integers?answertab=active | # Lower bound for Euler's totient for almost all integers
Let $\varphi(n)$ be Euler's totient function. It is well known that $\liminf_{n \to \infty} \frac{\varphi(n)}{n / \log \log n} = e^{-\gamma}$, so that for $\varepsilon > 0$ it holds that $\frac{\varphi(n)}{n} \geq \frac{e^{-\gamma}-\varepsilon}{\log \log n}$ for large $n$. Actually, the "local minima" of $\frac{\varphi(n)}{n}$ are attained at $n = p_1 \cdots p_k$ (the product of the first $k$ primes), and the set of primorials is really sparse. I wonder whether a lower bound for $\varphi(n)$ is known of the form "$\varphi(n) / n \geq f(n)$ for all $n$ but a set of null asymptotic density", where $f(n)$ is a function bigger than $\frac{e^{-\gamma}-\varepsilon}{\log \log n}$.
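A small numerical illustration of this liminf along primorials (the sieve helper and the prime cut-off are only for the demonstration; convergence to $e^{-\gamma}\approx 0.5615$ is slow):

```python
import math

def primes_up_to(limit):
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(limit**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

n, ratio = 1, 1.0  # ratio tracks phi(n)/n along the primorials n = p_1 * ... * p_k
for p in primes_up_to(60):
    n *= p
    ratio *= 1.0 - 1.0 / p
    if n > 3:  # log(log(n)) is only positive for n > e
        print(p, ratio * math.log(math.log(n)))
```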
For $n/\phi(n)$ "Small values of the Euler function and the Riemann hypothesis Jean-Louis Nicolas" might be related to your question. – joro Aug 22 '13 at 14:21
Since the average value of $n/\phi(n)$ is bounded, it follows that for any function $f(n)$ tending to zero as $n$ tends to infinity one has $\phi(n)/n \ge f(n)$ except on a set of zero density. – Lucia Aug 22 '13 at 17:08
@Lucia Thank you for your answer! However I can't find a reference for the average value of $n / \varphi(n)$, I know that average value of $\varphi(n) / n$ is $6 / \pi^2$, but $n / \varphi(n)$ I don't know. – user21706 Aug 22 '13 at 19:21
I got. The average value of $n / \varphi(n)$ is $315\zeta(3)/(2\pi^4)$. "R. Sitaramachandrarao. On an error term of Landau II, Rocky Mountain J. Math. 15 (1985), 579-588" – user21706 Aug 22 '13 at 19:59
Your question has been answered by Lucia already, but you might also be interested in looking up the Erdős–Wintner theorem. A special case (proved already by Schoenberg) is that for each $u \geq 0$, the set of $n$ with $\phi(n)/n \leq u$ has an asymptotic density $D(u)$; moreover, $D(u)$ is continuous and increasing on $[0,1]$.
There are also estimates available for the size of $D(u)$ when $u$ is near zero, and of $1-D(u)$ when $u$ is near $1$. For this, see Erdős's paper "Some remarks about additive and multiplicative functions": http://www.renyi.hu/~p_erdos/1946-11.pdf
Sorry, but I do not understand your answer. How do you prove that if $E$ is a set of null asymptotic density then $\liminf_{E \not\ni n \to \infty} \varphi(n) / (n / \log\log n) = e^{-\gamma}$? Thanks. – user21706 Aug 22 '13 at 16:59
https://forums.scummvm.org/viewtopic.php?p=73118 | ## New release testing period: ScummVM 1.5.0
General chat related to ScummVM, adventure gaming, and so on.
Moderator: ScummVM Team
Strangerke
ScummVM Developer
Posts: 333
Joined: Wed Sep 06, 2006 8:39 am
Location: Belgium
### New release testing period: ScummVM 1.5.0
Once again, it's time to start a new testing season to prepare the release of ScummVM v1.5.0. As usual, we have announced several new supported games:
- Backyard Baseball 2003
- Blue Force
- Darby the Dragon
- Dreamweb
- Geisha
- Gregory and the Hot Air Balloon
- Magic Tales: Liam Finds a Story
- Once Upon A Time: Little Red Riding Hood
- Sleeping Cub's Test of Courage
- Soltys
- The Princess and the Crab
So, go grab your favorite games, and play through them using a daily build. Soltys is also available as freeware on our download page. If you find any bugs, report them on the bug tracker. Once you've completed a game, please report it on this thread so it can be added to the release testing wiki page. When reporting, make sure you provide the version, language, and platform of the game you're testing. Any other questions? Go see the Release Testing guidelines.
When reporting whether a game is completable or not, please use this format to make it easier for us to add it to the wiki page.
Game: Game name, as reported by ScummVM
Game Version: Game version, as reported by ScummVM; EGA or VGA if applicable
ScummVM Version: ScummVM version, as reported by ScummVM (ie. "1.5.0git3266-g20b6770"); *****MUST INCLUDE THE FULL REVISION*****
Operating System: Operating system (version and 32-bit vs. 64-bit if applicable)
Problems:
List any problems you have found here (or "None" if everything works perfectly).
KuroShiro
Posts: 468
Joined: Thu May 15, 2008 7:42 am
Location: Miyazaki, Japan
Awesome.
Both Blue Force and Dreamweb are a couple of my favorites. I will try to find some time to play through them.
scoriae
Posts: 260
Joined: Thu Jan 03, 2008 3:32 am
bugs submitted regarding imo & little samurai crashes on launch.
icanntspell
Posts: 95
Joined: Mon May 18, 2009 12:14 pm
Location: The Netherlands
Contact:
I am not quite sure if the debug build in msvc10 is a supported build. If not, ignore this message:) But when compiling I get lots of
error C2051: case expression not constant
in the tinsel engine and the build fails. The release build compiles fine though.
dreammaster
ScummVM Developer
Posts: 443
Joined: Fri Nov 04, 2005 2:16 am
Location: San Jose, California, USA
icanntspell wrote:I am not quite sure if the debug build in msvc10 is a supported build. If not, ignore this message:) But when compiling I get lots of
error C2051: case expression not constant
in the tinsel engine and the build fails. The release build compiles fine though.
If you get that it, it means you're using an outdated version of create_project.exe. Try recompiling it and replacing it in the dists\msvc10 folder.
icanntspell
Posts: 95
Joined: Mon May 18, 2009 12:14 pm
Location: The Netherlands
Contact:
dreammaster wrote:If you get that it, it means you're using an outdated version of create_project.exe. Try recompiling it and replacing it in the dists\msvc10 folder.
Even with a freshly built create_project.exe (and I ran the create_msvc10.bat file too) I still get these messages.
md5
ScummVM Developer
Posts: 2250
Joined: Thu Nov 03, 2005 9:31 pm
Location: Athens, Greece
That's because of cached files, try rebuilding ScummVM
icanntspell
Posts: 95
Joined: Mon May 18, 2009 12:14 pm
Location: The Netherlands
Contact:
md5 wrote:That's because of cached files, try rebuilding ScummVM
So here's what I did:
- I opened create_project project, chose Release build and selected rebuild; All went well (including the install)
- opened the dists/msvc10 directory
- Ran the create_msvc10.bat; No errors
- deleted the Debug32 directory just to be sure
- opened the scummvm project, selected the Debug build and chose rebuild. The output up to the first errors:
Code: Select all
1>------ Rebuild All started: Project: tucker, Configuration: Debug Win32 ------
2>------ Rebuild All started: Project: tsage, Configuration: Debug Win32 ------
3>------ Rebuild All started: Project: touche, Configuration: Debug Win32 ------
2>Build started 1-7-2012 13:27:50.
2>InitializeBuildStatus:
2> Creating "Debug32/tsage\tsage.unsuccessfulbuild" because "AlwaysCreate" was specified.
1>Build started 1-7-2012 13:27:51.
2>ClCompile:
2> user_interface.cpp
3>Build started 1-7-2012 13:27:51.
1>InitializeBuildStatus:
1> Creating "Debug32/tucker\tucker.unsuccessfulbuild" because "AlwaysCreate" was specified.
3>InitializeBuildStatus:
3> Creating "Debug32/touche\touche.unsuccessfulbuild" because "AlwaysCreate" was specified.
1>ClCompile:
1> tucker.cpp
3>ClCompile:
3> touche.cpp
1> staticres.cpp
3> staticres.cpp
1> sequences.cpp
2> tsage.cpp
3> resource.cpp
3> opcodes.cpp
2> staticres.cpp
1> resource.cpp
2> sound.cpp
3> midi.cpp
1> locations.cpp
1> graphics.cpp
2> scenes.cpp
1> detection.cpp
3> graphics.cpp
3> detection.cpp
1> console.cpp
3> console.cpp
1> Generating Code...
3> Generating Code...
2> resources.cpp
2> graphics.cpp
1>Lib:
1> tucker.vcxproj -> Y:\scummvm-src\dists\msvc10\Debug32\tucker.lib
1>FinalizeBuildStatus:
1> Deleting file "Debug32/tucker\tucker.unsuccessfulbuild".
1> Touching "Debug32/tucker\tucker.lastbuildstate".
1>
1>Build succeeded.
1>
1>Time Elapsed 00:00:23.33
3>Lib:
3> touche.vcxproj -> Y:\scummvm-src\dists\msvc10\Debug32\touche.lib
3>FinalizeBuildStatus:
3> Deleting file "Debug32/touche\touche.unsuccessfulbuild".
3> Touching "Debug32/touche\touche.lastbuildstate".
3>
3>Build succeeded.
3>
3>Time Elapsed 00:00:23.17
4>------ Rebuild All started: Project: toon, Configuration: Debug Win32 ------
4>Build started 1-7-2012 13:28:14.
5>------ Rebuild All started: Project: tinsel, Configuration: Debug Win32 ------
5>Build started 1-7-2012 13:28:14.
4>InitializeBuildStatus:
4> Creating "Debug32/toon\toon.unsuccessfulbuild" because "AlwaysCreate" was specified.
5>InitializeBuildStatus:
5> Creating "Debug32/tinsel\tinsel.unsuccessfulbuild" because "AlwaysCreate" was specified.
4>ClCompile:
4> toon.cpp
5>ClCompile:
5> token.cpp
2> globals.cpp
5> tinsel.cpp
4> tools.cpp
4> text.cpp
2> events.cpp
5>y:\scummvm-src\engines\tinsel\tinsel.cpp(131): error C2051: case expression not constant
5>y:\scummvm-src\engines\tinsel\tinsel.cpp(273): error C2051: case expression not constant
5>y:\scummvm-src\engines\tinsel\tinsel.cpp(307): error C2051: case expression not constant
5>y:\scummvm-src\engines\tinsel\tinsel.cpp(452): error C2051: case expression not constant
5>y:\scummvm-src\engines\tinsel\tinsel.cpp(609): error C2051: case expression not constant
5> tinlib.cpp
...
31>Build FAILED.
31>
31>Time Elapsed 00:04:35.39
========== Rebuild All: 29 succeeded, 2 failed, 0 skipped ==========
Other warnings I get besides the ones in tinsel are:
Code: Select all
11>y:\scummvm-src\engines\sci\console.cpp(2653): warning C4065: switch statement contains 'default' but no 'case' labels
31>y:\scummvm-src\common\coroutines.cpp(391): error C2051: case expression not constant
31>y:\scummvm-src\common\coroutines.cpp(458): error C2051: case expression not constant
31>y:\scummvm-src\common\coroutines.cpp(484): error C2051: case expression not constant
31>c:\program files (x86)\microsoft sdks\windows\v7.0a\include\winnt.h(1140): warning C4005: 'ARRAYSIZE' : macro redefinition
31> y:\scummvm-src\common\util.h(58) : see previous definition of 'ARRAYSIZE'
31>y:\scummvm-src\audio\softsynth\mt32\partial.cpp(38): warning C4355: 'this' : used in base member initializer list
31>y:\scummvm-src\audio\softsynth\mt32\partial.cpp(38): warning C4355: 'this' : used in base member initializer list
31>y:\scummvm-src\audio\softsynth\mt32\partial.cpp(38): warning C4355: 'this' : used in base member initializer list
31>y:\scummvm-src\audio\softsynth\mt32\tva.cpp(362): warning C4701: potentially uninitialized local variable 'newIncrement' used
Microsoft Visual Studio 2010
Version 10.0.40219.1 SP1Rel
Microsoft .NET Framework
Version 4.0.30319 SP1Rel
Installed Version: Professional
I hope this makes sense; the release build is building fine, so it's no biggie.
md5
ScummVM Developer
Posts: 2250
Joined: Thu Nov 03, 2005 9:31 pm
Location: Athens, Greece
I can't check the output of create_project right now, but it seems it's not disabling the edit and continue feature.
To disable it and fix the errors above, change the settings of the scummvm project. Here's a link on how to do this:
http://msdn.microsoft.com/en-us/library ... .100).aspx
icanntspell
Posts: 95
Joined: Mon May 18, 2009 12:14 pm
Location: The Netherlands
Contact:
md5 wrote:I can't check the output of create_project right now, but it seems it's not disabling the edit and continue feature.
...
Ah that helped. Here's how I fixed it:
Go to project properties of tinsel -> C/C++ -> General and change the "Debug Information Format" from "Program Database for Edit And Continue (/ZI)" to "Program Database (/Zi)".
I think my version of create_project sets this feature globally to EditAndContinue in msbuild.cpp line 393 for win32 in non-release builds. And there's no tinsel-specific config that disables it for this engine. Is this a bug then?
LordHoto
ScummVM Developer
Posts: 1029
Joined: Sun Oct 30, 2005 3:58 pm
Location: Germany
icanntspell wrote:
md5 wrote:I can't check the output of create_project right now, but it seems it's not disabling the edit and continue feature.
...
Ah that helped. Here's how I fixed it:
Go to project properties of tinsel -> C/C++ -> General and change the "Debug Information Format" from "Program Database for Edit And Continue (/ZI)" to "Program Database (/Zi)".
I think my version of create_project sets this feature globally to EditAndContinue in msbuild.cpp line 393 for win32 in non-release builds. And there's no tinsel-specific config that disables it for this engine. Is this a bug then?
It seems that create_project was not adapted properly, when the coroutine code was moved to common/.
icanntspell
Posts: 95
Joined: Mon May 18, 2009 12:14 pm
Location: The Netherlands
Contact:
LordHoto wrote:It seems that create_project was not adapted properly, when the coroutine code was moved to common/.
Should I post a bug report? Otherwise a small note in the wiki compiling section may help others too. It needs to be updated anyway to include the new freetype dependency.
LordHoto
ScummVM Developer
Posts: 1029
Joined: Sun Oct 30, 2005 3:58 pm
Location: Germany
icanntspell wrote:
LordHoto wrote:It seems that create_project was not adapted properly, when the coroutine code was moved to common/.
Should I post a bug report? Otherwise a small note in the wiki compiling section may help others too. It needs to be updated anyway to include the new freetype dependency.
Yes.
md5
ScummVM Developer
Posts: 2250
Joined: Thu Nov 03, 2005 9:31 pm
Location: Athens, Greece
icanntspell wrote:
LordHoto wrote:It seems that create_project was not adapted properly, when the coroutine code was moved to common/.
Should I post a bug report? Otherwise a small note in the wiki compiling section may help others too. It needs to be updated anyway to include the new freetype dependency.
The wiki was updated for the freetype dependency months ago... which part are you referring to?
icanntspell
Posts: 95
Joined: Mon May 18, 2009 12:14 pm
Location: The Netherlands
Contact:
md5 wrote:The wiki was updated for the freetype dependency months ago... which part are you referring to?
"Adding all libraries to Visual Studio 2005/2008" contains a nice list of required libraries, but indeed other parts are updated nicely. I was staring too much on that list. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3863348960876465, "perplexity": 29364.68037003687}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107891428.74/warc/CC-MAIN-20201026145305-20201026175305-00107.warc.gz"} |
https://en.wikipedia.org/wiki/Algebraic_statistics | # Algebraic statistics
Algebraic statistics is the use of algebra to advance statistics. Algebra has been useful for experimental design, parameter estimation, and hypothesis testing.
Traditionally, algebraic statistics has been associated with the design of experiments and multivariate analysis (especially time series). In recent years, the term "algebraic statistics" has sometimes been restricted, being used to label the use of algebraic geometry and commutative algebra in statistics.
## The tradition of algebraic statistics
In the past, statisticians have used algebra to advance research in statistics. Some algebraic statistics led to the development of new topics in algebra and combinatorics, such as association schemes.
### Design of experiments
For example, Ronald A. Fisher, Henry B. Mann, and Rosemary A. Bailey applied Abelian groups to the design of experiments. Experimental designs were also studied with affine geometry over finite fields and then with the introduction of association schemes by R. C. Bose. Orthogonal arrays were introduced by C. R. Rao also for experimental designs.
### Algebraic analysis and abstract statistical inference
Invariant measures on locally compact groups have long been used in statistical theory, particularly in multivariate analysis. Beurling's factorization theorem and much of the work on (abstract) harmonic analysis sought better understanding of the Wold decomposition of stationary stochastic processes, which is important in time series statistics.
Encompassing previous results on probability theory on algebraic structures, Ulf Grenander developed a theory of "abstract inference". Grenander's abstract inference and his theory of patterns are useful for spatial statistics and image analysis; these theories rely on lattice theory.
### Partially ordered sets and lattices
Partially ordered vector spaces and vector lattices are used throughout statistical theory. Garrett Birkhoff metrized the positive cone using Hilbert's projective metric and proved Jentzsch's theorem using the contraction mapping theorem.[1] Birkhoff's results have been used for maximum entropy estimation (which can be viewed as linear programming in infinite dimensions) by Jonathan Borwein and colleagues.
Vector lattices and conical measures were introduced into statistical decision theory by Lucien Le Cam.
## Recent work using commutative algebra and algebraic geometry
In recent years, the term "algebraic statistics" has been used more restrictively, to label the use of algebraic geometry and commutative algebra to study problems related to discrete random variables with finite state spaces. Commutative algebra and algebraic geometry have applications in statistics because many commonly used classes of discrete random variables can be viewed as algebraic varieties.
### Introductory example
Consider a random variable X which can take on the values 0, 1, 2. Such a variable is completely characterized by the three probabilities
$p_i = \Pr(X = i), \quad i = 0, 1, 2$
and these numbers clearly satisfy
$\sum_{i=0}^{2} p_i = 1 \quad \text{and} \quad 0 \leq p_i \leq 1.$
Conversely, any three such numbers unambiguously specify a random variable, so we can identify the random variable X with the tuple $(p_0, p_1, p_2) \in \mathbb{R}^3$.
Now suppose X is a Binomial random variable with parameter q and n = 2, i.e. X represents the number of successes when repeating a certain experiment two times, where each experiment has an individual success probability of q. Then
$p_i = \Pr(X = i) = \binom{2}{i} q^i (1 - q)^{2-i}$
and it is not hard to show that the tuples (p0,p1,p2) which arise in this way are precisely the ones satisfying
$4 p_0 p_2 - p_1^2 = 0.$
The latter is a polynomial equation defining an algebraic variety (or surface) in $\mathbb{R}^3$, and this variety, when intersected with the simplex given by
$\sum_{i=0}^{2} p_i = 1 \quad \text{and} \quad 0 \leq p_i \leq 1,$
yields a piece of an algebraic curve which may be identified with the set of all binomial variables with n = 2. Determining the parameter q amounts to locating one point on this curve; testing the hypothesis that a given variable X is binomial with n = 2 amounts to testing whether a certain point lies on that curve or not.
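As a quick sanity check, the defining equation can be verified symbolically (a sketch using Python's sympy library; this code is not part of the original article):

import sympy as sp

q = sp.symbols('q')
# the binomial(2, q) probabilities p_0, p_1, p_2
p = [sp.binomial(2, i) * q**i * (1 - q)**(2 - i) for i in range(3)]

print(sp.simplify(p[0] + p[1] + p[2]))    # prints 1: the probabilities sum to one
print(sp.expand(4*p[0]*p[2] - p[1]**2))   # prints 0: the point lies on the variety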
## References
1. ^ A gap in Garrett Birkhoff's original proof was filled by Alexander Ostrowski. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 5, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7456814050674438, "perplexity": 585.8382764213669}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257828286.80/warc/CC-MAIN-20160723071028-00038-ip-10-185-27-174.ec2.internal.warc.gz"} |
http://pldml.icm.edu.pl/pldml/element/bwmeta1.element.bwnjournal-article-doi-10_4064-fm175-2-3 | Journal
Fundamenta Mathematicae
2002 | 175 | 2 | 127-142
Article title
Potential isomorphism and semi-proper trees
Authors
Content
Title variants
Publication languages
EN
Abstracts
EN
We study a notion of potential isomorphism, where two structures are said to be potentially isomorphic if they are isomorphic in some generic extension that preserves stationary sets and does not add new sets of cardinality less than the cardinality of the models. We introduce the notion of weakly semi-proper trees, and note that there is a strong connection between the existence of potentially isomorphic models for a given complete theory and the existence of weakly semi-proper trees.
We show that the existence of weakly semi-proper trees is consistent relative to ZFC by proving the existence of weakly semi-proper trees under certain cardinal arithmetic assumptions. We also prove the consistency of the non-existence of weakly semi-proper trees assuming the consistency of some large cardinals.
Keywords
Subject categories
Journal
Year
Volume
Issue
Pages
127-142
Physical description
Dates
published
2002
Creators
author
• Department of Mathematics, University of Helsinki, 00014 Helsinki, Finland
author
• Department of Mathematics, University of Helsinki, 00014 Helsinki, Finland
author
• Institute of Mathematics, The Hebrew University, 91904 Jerusalem, Israel
• Department of Mathematics, Rutgers University, New Brunswick, NJ 08903, U.S.A.
Bibliography
Document type
Bibliography
Identifiers | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9001652598381042, "perplexity": 875.2879411886336}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703531429.49/warc/CC-MAIN-20210122210653-20210123000653-00631.warc.gz"}
http://www.ck12.org/physics/Pressure-and-Force/quiz/Pressure-and-Force-Quiz-PPB/r1/ | # Pressure and Force
Pressure is a force spread out over an area. A small force applied over a very small area can exert a large pressure.
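A quick numeric sketch of pressure = force / area (the values here are illustrative only):

force = 50.0      # newtons: a light push
area = 0.0005     # square meters: a 5 cm^2 contact patch
pressure = force / area
print(pressure)   # 100000.0 pascals, roughly one atmosphere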
Pressure and Force Quiz - PPB
Teacher Contributed
Calculate force, pressure, and the area of basic geometric shapes. Explain the concept of pressure. Know that pressure is equal throughout a liquid. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8729889392852783, "perplexity": 2793.3528191036125}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207929205.63/warc/CC-MAIN-20150521113209-00189-ip-10-180-206-219.ec2.internal.warc.gz"} |
http://mathhelpforum.com/pre-calculus/110910-orders-matrix.html | # Math Help - orders of matrix
1. ## orders of matrix
I can't find this info in my book, but it is a homework problem.
The question is "determine the order of the matrix"
is it across x down or down x across?
2. Originally Posted by RenSully
I can't find this info in my book, but it is a homework problem.
The question is "determine the order of the matrix"
is it across x down or down x across?
Order is $r \times c$ where $r$ is the number of rows and $c$ is the number of columns. So a matrix with 2 rows and 3 columns has order $2 \times 3$, that is, down by across. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5986363291740417, "perplexity": 596.9510786521098}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657136966.6/warc/CC-MAIN-20140914011216-00127-ip-10-234-18-248.ec2.internal.warc.gz"}
https://forum.math.toronto.edu/index.php?PHPSESSID=45k12ld2qqchab36g4ojdlt2j7&topic=1408.0;wap2 | TT1 Problem 2 (night)
Victor Ivrii:
(a) $\displaystyle{\sum_{n=1}^\infty \frac{z^n}{2^n n^2}}$
(b) $\displaystyle{\sum_{n=1}^\infty \frac{z^{3n} (3n)!}{20^n (2n)! }}$
If the radius of convergence is $R$, $0<R< \infty$, determine for each $z\colon |z|=R$ if this series converges.
Heng Kan:
See the attached scanned picture.
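(The scanned solution itself isn't reproduced here; a sketch of the standard computation, for reference:)

(a) By the ratio test, $\left|\frac{a_{n+1}}{a_n}\right| = \frac{|z|}{2}\cdot\frac{n^2}{(n+1)^2} \to \frac{|z|}{2}$, so $R = 2$. On $|z| = 2$ every term has modulus $\frac{2^n}{2^n n^2} = \frac{1}{n^2}$, so the series converges absolutely at every point of the circle.

(b) Here $\left|\frac{a_{n+1}}{a_n}\right| = \frac{|z|^3 (3n+1)(3n+2)(3n+3)}{20 (2n+1)(2n+2)} \to \infty$ for every $z \neq 0$, so $R = 0$ and there is no circle $|z| = R$ with $R > 0$ to examine.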
Xiting Kuang:
Just a concern, it says in the problem that R should be positive.
Heng Kan:
I think the question means that if the radius of convergence is positive, you have to figure out whether the series is convergent at the radius of convergence. It doesn't mean the radius is always positive.
Victor Ivrii:
--- Quote from: Heng Kan on October 19, 2018, 09:45:26 AM ---I think the question means that if the radius of convergence is positive, you have to figure out whether the series is convergent at the radius of convergence. It doesn't mean the radius is always positive.
--- End quote ---
Indeed. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9863683581352234, "perplexity": 1060.3099212437132}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323584913.24/warc/CC-MAIN-20211016170013-20211016200013-00675.warc.gz"} |
https://arxiv.org/abs/0908.1429v1 | # Title: Dark Matter as a Possible New Energy Source for Future Rocket Technology
Authors:Jia Liu
Abstract: Current rocket technology cannot send a spaceship very far, because the amount of chemical fuel it can carry is limited. We try to use dark matter (DM) as fuel to solve this problem. In this work, we give an example of a DM engine that uses dark matter annihilation products as propulsion. The acceleration is proportional to the velocity, which makes the velocity increase exponentially with time in the non-relativistic region. The important points for the acceleration are how dense the DM is and how large the saturation region is. The parameters of the spaceship may also have a great influence on the results. We show that the (sub)halos can accelerate the spaceship to velocity $10^{-5} c \sim 10^{-3} c$. Moreover, in case there is a central black hole in the halo, like the galactic center, the radius of the dense spike can be large enough to accelerate the spaceship close to the speed of light.
Comments: 7 pages, 6 figures
Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO); High Energy Physics - Phenomenology (hep-ph)
Cite as: arXiv:0908.1429 [astro-ph.CO] (or arXiv:0908.1429v1 [astro-ph.CO] for this version)
## Submission history
From: Jia Liu [view email]
[v1] Tue, 11 Aug 2009 01:58:10 UTC (502 KB)
[v2] Fri, 9 Oct 2009 15:53:02 UTC (502 KB) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7042615413665771, "perplexity": 1403.9789801142938}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202326.46/warc/CC-MAIN-20190320105319-20190320131319-00272.warc.gz"} |
http://archival-integration.blogspot.co.uk/2015/05/ | ## Friday, May 29, 2015
In most established archival institutions, any given finding aid can represent decades of changing descriptive practice, all of which are reflected in the EAD files we generate from them. This diverse array of standards and local-practice is what makes our job as data-wranglers interesting, but it also means that with any programmatic manipulation we make, there is always a long tail of edge-cases and outliers that we need to account for, or risk making unintentional and uncaught changes in places we aren't expecting.
When I first came on to the A-Space / Archivematica integration project, this prospect was terrifying - that an unaccounted-for side-effect in my code could stealthily change something unintended, and fall under the radar until it was too late to revert, or, worse, never be caught. After a few days of an almost paralytic fear, I decided to try a writing style known by many in the agile software-development world as Test-Driven Development, or TDD.
After the first day I had fallen in love. Using this methodology I have confidence that the code I am writing does exactly what I want it to, regardless of the task's complexity. Equally valuable, once these tests are written a third party can pick up the code I've written and know right away that any new functionality they are writing isn't breaking what is already there. One could even think of it as a kind of fixity check for code functionality - with the proper tests I can pick up the code years down the line and know immediately that everything is still as it should be.
In this post I will be sharing what TDD is, and how it can be practically used in an archival context. In the spirit of showing, not telling, I'll be giving a walkthrough of what this looks like in practice by building a hypothetical extent-statement parser.
The code detailed in this post is still in progress and has yet to be vetted, so the end result here is not production-ready, but I hope exposing the process in this way is helpful to any others who might be thinking about utilizing tests in their own archival coding.
To start off, some common questions:
### What is a test?
A test is code you write to check that another piece of code you have written is doing what you expect it to be doing.
If I had some function called normalize_date that turned a date written by a human, say "Jan. 21, 1991" into a machine-readable format, like "1991-01-21", its test might look something like this:
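A sketch of what such a test could look like (normalize_date here is hypothetical):

def test_normalize_date():
    assert normalize_date("Jan. 21, 1991") == "1991-01-21"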
This would fail if the normalized version did not match the expected outcome, leaving a helpful error message as to what went wrong and where.
### So what is TDD?
Test-Driven Development is a methodology and philosophy for writing code first popularized and still very commonly used in the world of agile software design. At its most basic level it can be distilled into a three-step cyclic process: 1) write a failing test, 2) write the simplest code you can to make the test pass, and 3) refactor. Where one might naturally be inclined to write code then test it, TDD reverses this process, putting the tests above all else.
### Doesn't writing tests just slow you down? What about the overhead?
This is a common argument, but it turns out in many cases tests actually save time, especially in cases where long-term maintainability is important. Say I have just taken on a new position and have responsibility to maintain and update code built before my time. If my predecessors hadn't written any tests I would have to look at every piece of code in the system before I could be confident that any new changes I'm making aren't breaking any current obscure functionality. If there were tests, I could go straight into making new changes without the worry that I might be breaking important things that I had no way to know about.
Ensuring accuracy over obscure edge-cases is incredibly important in an institution like the Bentley. The library's EADs represent over 80 years of effort and countless hours of work on the part of the staff and students who were involved in their creation. The last thing we want to do while automating our xml normalizations is make an unintended change that nullifies their work. Since uncertainty is always a factor when working with messy data, it is remarkably easy for small innocuous code changes to have unintended side-effects, and if one mistake can potentially negate hundreds of hours of work, then the few hours it takes to write good tests is well worth the investment. From a long-term perspective, TDD saves time, money, and effort -- really there's no reason not to do it!
## Learn by doing - building an extent parser in python with TDD
That's a lot of talk, but what does it look like in practice? As Max described in his most recent blog post, one of our current projects involves wrestling with verbose and varied extent statements, trying to coerce them into a format that ArchivesSpace can read properly. Since it's on our minds, let's see if we can use TDD to build a script for parsing a long combined extent statement into its component parts.
The remainder of this post will be pretty python heavy, but even if you're not familiar with programming languages, python is unusually readable, so follow along and I think you'll be surprised at how much it makes sense!
To begin, remember the TDD mantra: test first, code later. So, let's make a new file to hold all our test code (we'll call it tests.py) and start with something simple:
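Perhaps something like this (the original gist isn't reproduced here, so consider this a sketch):

# tests.py
extent_text = "1 linear foot and 1 oversize volume"
split_extents_target = ["1 linear foot", "1 oversize volume"]

split_extent_text = split_extents(extent_text)
assert split_extent_text == split_extents_target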
now run it and...
Ta-da! We have written our first failing test.
So now what? Now we find the path of least resistance - the easiest way we can think of to solve the given error. The console suggests that a "split_extents" function doesn't exist, so let's make one! Over in a new extent_splitter.py file, let's write
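(A sketch of that new file:)

# extent_splitter.py
def split_extents(extent_text):
    pass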
Function created! Before we can test it, our test script needs to know where to find the split_extents function, so let's make sure the test script can find it by adding the following to tests.py:
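(Something like the following import, at the top of tests.py:)

from extent_splitter import split_extents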
Now run the test again, and see where that leads us:
Our assert statement is failing, meaning that split_extent_text is not equal to our target output. This isn't surprising considering split_extents isn't actually returning anything yet. Let's fix the assert error as simply as we can:
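(A sketch of the path-of-least-resistance fix:)

# extent_splitter.py
def split_extents(extent_text):
    return ["1 linear foot", "1 oversize volume"]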
There! It's the cheesiest of fixes (the code doesn't actually do anything with the input string, it just cheekily returns the list we want), but it really is important to do these small, path-of-least-resistance edits, especially as we are just learning the concept of TDD. Small iterative steps keep code manageable and easy to conceptualize as you build it -- it can be all too easy to get carried away and add a whole suite of functionality in one rushed clump, only to have the code fail at runtime and not have any idea where the problem lies.
So now we have a completely working test! Normally at this point we would take a step back to refactor what we have written, but there really isn't much there, and the code doesn't do anything remotely useful. We can easily break it again by adding another simple test case over in tests.py:
This test fails, so we have code to write! Writing custom pre-built lists for each possible extent is a terrible plan, so let's write something actually useful:
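(A sketch of the simple-but-real version:)

# extent_splitter.py
def split_extents(extent_text):
    return extent_text.split(" and ")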
Run the test, and... Success! Again, here we would refactor, but this code is still simple enough it isn't necessary. Now that we have two tests, we have a new problem: how do we keep track of which is which, or know which is failing when the console returns an error?
Luckily for us, python has a built-in module for testing that can take care of the background test management and let us focus on just writing the code. The one thing to note is that using the module requires putting the tests in a python class, which works slightly differently than the python functions you may be used to. All that you really have to know is that you will need to pre-append any variable you want to use throughout the class with "self.", and include "self" as a variable to any function you define inside the class. Here is what our tests look like using unittest as a framework:
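(A sketch of the unittest version:)

# tests.py
import unittest
from extent_splitter import split_extents

class TestExtentSplitter(unittest.TestCase):
    def setUp(self):
        self.extent_text = "1 linear foot and 1 oversize volume"
        self.extent_target = ["1 linear foot", "1 oversize volume"]
        self.extent_text_2 = "2 linear feet and 1 oversize folder"
        self.extent_target_2 = ["2 linear feet", "1 oversize folder"]

    def test_split_on_and(self):
        self.assertEqual(split_extents(self.extent_text), self.extent_target)

    def test_split_on_and_again(self):
        self.assertEqual(split_extents(self.extent_text_2), self.extent_target_2)

if __name__ == "__main__":
    unittest.main()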
You can run the tests just like you would any other python script. Let's try it and see what happens:
Neat! Now we have a test suite and a function that splits any sentence that has " and " in it. But many extent statements have more than two elements. These tend to be separated by commas, so let's write a test to see if it handles a longer extent statement properly. Over in tests.py's setUp function, we'll define two new variables:
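(Sketches of the two new variables:)

# inside TestExtentSplitter.setUp:
self.multi_extent_text = "1 linear foot, 1 oversize volume, and 1 motion picture reel"
self.multi_extent_target = ["1 linear foot", "1 oversize volume", "1 motion picture reel"]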
Then we'll write the test:
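(A sketch of the new test method:)

# inside TestExtentSplitter:
def test_split_on_commas_and_and(self):
    self.assertEqual(split_extents(self.multi_extent_text), self.multi_extent_target)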
Running the test now fails again, but now the error messages are much more verbose. Here is what we see now that we're using python's testing module:
As you can see, it tells us exactly which test fails, and clearly pinpoints the reason for the failure. Super useful! Now that we have a failing test, we have code to write.
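(One possible ugly-but-passing version, sketched:)

# extent_splitter.py
def split_extents(extent_text):
    extents = []
    for chunk in extent_text.split(", "):
        for part in chunk.split(" and "):
            # drop a leading "and " left over from ", and" lists
            if part.startswith("and "):
                part = part[4:]
            if part:
                extents.append(part)
    return extents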
Now the tests pass, but this code is super ugly - time to refactor! Let's go back through and see if we can clean things up a bit.
It turns out, we can reproduce the above functionality in just a few lines, using what are known as list comprehensions. They can be really powerful, but as they get increasingly complicated they have the drawback of looking, well, incomprehensible:
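(A sketch of the comprehension version:)

# extent_splitter.py
def split_extents(extent_text):
    return [part[4:] if part.startswith("and ") else part
            for chunk in extent_text.split(", ")
            for part in chunk.split(" and ") if part]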
We may return to this later and see if there is a more readable way to do this clearly and concisely.
Now, as always, we run the tests and see if they still pass, and they do! Now that we have some basic functionality we need to sit down and seriously think about the variety and scope of extent statements found in our EADs, and what additional functionality we'll need to ensure our primary edge cases are covered. I have found it helpful at this point to just pull the text of all the tags we'll be manipulating and scan through them, looking for patterns and outliers.
Once we have done this, we need to write out a plan for each case that the code will need to account for. TDD developers will often write each planned functionality as individual comments in their test code, giving them a pre-built checklist they can iterate through one comment at a time. In our case, it might look something like this:
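(A sketch of such a checklist:)

# tests to write, one at a time:
# - splits a two-part statement on " and "
# - splits a longer statement on commas
# - drops the "and" before the final element of a comma list
# - leaves a statement with a single extent untouched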
If we build this functionality out one test at a time, we get something like the following:
The completed test suite:
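(A sketch of the full suite:)

# tests.py
import unittest
from extent_splitter import split_extents

class TestExtentSplitter(unittest.TestCase):
    def setUp(self):
        self.single = "6 linear feet"
        self.pair = "1 linear foot and 1 oversize volume"
        self.multi = "1 linear foot, 1 oversize volume, and 1 motion picture reel"

    def test_single_extent_untouched(self):
        self.assertEqual(split_extents(self.single), ["6 linear feet"])

    def test_split_on_and(self):
        self.assertEqual(split_extents(self.pair),
                         ["1 linear foot", "1 oversize volume"])

    def test_split_on_commas_and_and(self):
        self.assertEqual(split_extents(self.multi),
                         ["1 linear foot", "1 oversize volume",
                          "1 motion picture reel"])

if __name__ == "__main__":
    unittest.main()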
And here is a more complete extent_splitter.py, refactored along the way to use regular expressions instead of solely list comprehensions:
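(A sketch of the regex-based splitter:)

# extent_splitter.py
import re

def split_extents(extent_text):
    # split on ", " (optionally followed by "and ") or on a bare " and "
    parts = re.split(r",\s*(?:and\s+)?|\s+and\s+", extent_text)
    return [part for part in parts if part]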
That's it! We now have a useful script, confidence that it does only what it is supposed to, and a built-in method to ensure that its functionality remains static over time. I hope you've found this interesting, and I'd love to hear your thoughts on the pros and cons of implementing TDD methods in your own archival work - feel free to leave a message in the comments below!
## Tuesday, May 26, 2015
As Max detailed in his recent post on extents, there are some aspects of our EADs that are not necessarily wrong (i.e., won't cause any errors when importing into ArchivesSpace), but that are not optimized to take full advantage of potential reporting or searching functionality in ArchivesSpace or other systems. Whereas Max described some of the problems we have with our extent statements, this post will take a look at another aspect of our EADs that we initially thought would be a simple, easy, quick fix... until we learned more: dates.
Dates in our Current Finding Aids
Currently, our dates are encoded with <unitdate> tags in our EADs, but lack a "normal" attribute containing a normalized, machine-readable version of the date.
As an example, our dates might currently look like this: <unitdate type="inclusive">May 26, 2015</unitdate>
As opposed to this: <unitdate type="inclusive" normal="2015-05-26">May 26, 2015</unitdate>
Until now, this has not really been a problem. As you can see from an example such as the Adelaide J. Hart papers, our dates are presented to users as plain text in our current finding aid access system. Under the hood, those dates are encoded as <unitdate> elements, but our access system has no way to search or facet by date. As such, the access system has never needed a normalized, machine-readable form of dates. But what about ArchivesSpace?
Dates in ArchivesSpace
Before getting into what happens to our legacy <unitdate> elements when imported into ArchivesSpace, let's take a look at a blank ArchivesSpace date record.
Based on all of the date fields provided by ArchivesSpace, we can already see here that we're moving beyond plain text representation of our dates. Of particular interest for the purposes of this blog post are the fields for "Expression," "Begin," and "End." Hovering over the * next to "Expression" brings up the following explanation of what that field represents:
What this means is that the "Expression" field will essentially recreate the plain text, human-understandable version of dates that we have been using up until now. Simple enough.
Once we take a look at the "Begin" and "End" fields, however, we can start to see where our past practice and future ArchivesSpace possibilities come into conflict. The "Begin" and "End" fields give us the ability to record normalized-versions of our dates that ArchivesSpace (and other systems) can understand. This is definitely functionality that we will want to use going forward, but what does this mean for our legacy data?
Let's see what happens to our dates when we import one of our legacy EADs into ArchivesSpace.
The ArchivesSpace EAD importer took the contents of a <unitdate> tag and made a date expression of 1854-1888. It did not, however, make a begin date of 1854 or an end date of 1888. Why not? Lines 168-188 of the ArchivesSpace EAD importer can help us understand.
We'll get into a little bit more detail about making sense of the ArchivesSpace EAD importer in future posts about creating our custom EAD importer, but for now we'll take a higher-level view of what this portion of the EAD importer is doing. What this bit of the EAD importer is doing is taking a <unitdate> tag and making an ArchivesSpace date record using various components of that <unitdate> tag and its related attributes. At line 178, the importer is making a date expression with the inner_xml of the <unitdate> tag, or the text within the open and closed <unitdate></unitdate> brackets, essentially recreating the plain text version of the dates that we currently have. But how is it making normalized begin and end dates?
On lines 180 and 181, the EAD importer is making a begin date with norm_dates[0] and an end date with norm_dates[1]. If we look at lines 170-174, we can see how those norm_dates are being made. The ArchivesSpace EAD importer is looking for a normal attribute (represented in the EAD importer as "att('normal')") in the <unitdate> tag and splitting the contents of that attribute on a forward slash to get the begin date (norm_dates[0]) and end date (norm_dates[1]).
In order for our example imported date above to have begin and end dates, the <unitdate> tag should look like this:
<unitdate type="inclusive" normal="1854/1888">1854-1888</unitdate>
Right now it looks like this:
<unitdate type="inclusive">1854-1888</unitdate>
Thankfully for us, making normalized versions of dates like the above is actually fairly simple.
Normalizing Common Dates
Similar to how there were many extents that could be cleaned up in one fell swoop, there are many dates that we can normalize by running a single script. The following script will make a normal attribute containing a normalized version of any <unitdate> that is a single year or a range of years. It will also add a certainty="approximate" attribute to any year or range of years that is not made up of only exact dates. Here, for easy reference, are examples of the attributes that the script adds to each of the possible manifestations of dates that are years or ranges of years:
• A single year (1924): normal="1924"
• A decade (1920s): normal="1920/1929" certainty="approximate"
• A range of exact years (1921-1933): normal="1921/1933"
• A range of a decade to an exact year (1920s-1934): normal="1920/1934" certainty="approximate"
• A range of an exact year to a decade (1923-1930s): normal="1923/1939" certainty="approximate"
• A range of a decade to a decade (1920s-1930s): normal="1920/1939" certainty="approximate"
And here is the script:
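(The embedded script didn't survive extraction; the following sketch implements the behavior described above. The path is a placeholder, and the EADs are assumed to have no namespace.)

import os
import re
from lxml import etree

ead_dir = 'path/to/eads'  # <-- change this

def normalize(date_text):
    # handles single years, decades, and ranges of either;
    # returns (normal, approximate) or None for anything else
    match = re.match(r'^(\d{4})(s?)(?:\s*-\s*(\d{4})(s?))?$', date_text.strip())
    if not match:
        return None
    begin, begin_s, end, end_s = match.groups()
    approximate = bool(begin_s or end_s)
    if end is None:
        if begin_s:
            return begin + '/' + str(int(begin) + 9), True  # a decade, e.g. 1920s
        return begin, False                                 # a single year, e.g. 1924
    if end_s:
        end = str(int(end) + 9)                             # e.g. -1930s becomes /1939
    return begin + '/' + end, approximate

normalized_count = 0
for filename in os.listdir(ead_dir):
    tree = etree.parse(os.path.join(ead_dir, filename))
    for unitdate in tree.xpath('//unitdate[not(@normal)]'):
        result = normalize(unitdate.text or '')
        if result:
            normal, approximate = result
            unitdate.set('normal', normal)
            if approximate:
                unitdate.set('certainty', 'approximate')
            normalized_count += 1
    tree.write(os.path.join(ead_dir, filename))
print "Normalized", normalized_count, "dates"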
When this script is run against our EADs, we get this result:
As you can see, this script added normal attributes to 316,578 of our 415,958 dates. In other words, this single script normalized about 75% of our dates, ensuring that ArchivesSpace will import date expressions, begin dates, and end dates for a majority of our legacy dates.
The Remaining 25% (and other surprises)
In future posts, we'll be going over how we've used OpenRefine to clean up the remaining 25% of our dates that could not be so easily automated, and we'll also be taking a look at some of the other surprising <unitdate> errors we've found lurking in our legacy EADs, including how we've identified and resolved those issues.
These are not the dates you're looking for.
## Friday, May 22, 2015
### Exten(t)uating Circumstances: 80 Years of Descriptive Practices and the Long Tail(s) of Extents
It all started with a simple error:
Error: #<:ValidationException: {:errors=>{"extents"=>["At least 1 item(s) is required"]}}>
This is the error we got when we tried to import EADs into ArchivesSpace with extent statements that began with text, such as "ca." or "approx." So ArchivesSpace likes extent statements that begin with numbers. Fine. Easy fix. Problem solved.
And it was an easy fix... until we started getting curious.
### The Extent (Get It!) of the Problem
As we did our original tests importing legacy EADs into ArchivesSpace (thanks, Dallas!), we started noticing that extents weren't importing quite the way we expected. As it turns out, ArchivesSpace imports the entire statement from EAD's <physdesc><extent> element as the "Whole" extent, with the first number in the statement imported as the "Number" of the extent and the remainder of the statement imported as the "Type":
An Extent in ArchivesSpace
This results in issues such as the one above, where the number imports fine, but type imports incorrectly. "linear feet and 7.62 MB (online)" is actually a Type plus another extent statement with its own Number, Type and Container Summary. This would be more accurately represented by breaking the extent into two "Part" portions.
This also makes for a very dirty "Type" dropdown list:
I've highlighted the only type that should really be there.
Now, this isn't actually a problem for import to ArchivesSpace. But it is a problem. In the end, we decided to take a closer look at extents to clean them up. That's fun, right? In hindsight, our initial excitement about this was probably a little naive. We were dealing with 80 years of highly varied descriptive practices, after all.
#### Getting Extents
In his last post, Dallas started to detail how we "get" elements from EADs ("get" here means go through our EADs, grab extent(s), and print them with their filename and location to a CSV for closer inspection and cleaning). In case you're wondering how exactly we did got extents, here is our code (and feel free to improve it!):
bentley-historical-library/migration-tools
# import what we need
import lxml
from lxml import etree
import csv
import os
from os.path import join

# where are the eads? (this definition didn't survive extraction; adjust to taste)
ead_dir = 'path/to/eads'  # <-- you have to change this
# where is the output csv?
output_csv = 'path/to/output.csv'  # <-- you have to change this

# "top level" extents xpath
# component extents xpath
# all extents xpath
all_extents = '//extent'

# open and write header row of csv
with open(output_csv, 'ab') as csv_file:
    writer = csv.writer(csv_file, dialect='excel')
    writer.writerow(['Filename', 'XPath', 'Original Extent'])

# creates a function to get extents
def getextents(xpath):
    # go through those files
    for filename in os.listdir(ead_dir):
        # keep up with where we are
        print "Processing ", filename
        # parse and go through all extents matching the xpath
        tree = etree.parse(join(ead_dir, filename))
        extents = tree.xpath(xpath)
        for i in extents:
            # identify blank extents
            extent = i.text
            extent_path = tree.getpath(i)
            with open(output_csv, 'ab') as csvfile:
                writer = csv.writer(csvfile, dialect='excel')
                try:
                    writer.writerow([filename, extent_path, extent])
                except:
                    writer.writerow([filename, extent_path, 'ISSUE EXTENT'])
    # the with-blocks close the csv for us

# get extents
getextents(all_extents)  # <-- you'll have to change this to get the extents you want: "top level," component level or all (I want all)
We weren't exactly thrilled with what we found.
#### The Long Tail(s) of Extents
Our intern, Walker Boyle, put together a histogram of what we found for both extents and component extents, and I converted them into graphs. You need to click them to get the full effect.
Whoa.
Whoa-ho-hoa.
### How We're Thinking About Fixing Extents (How Comes Later)
As you can see, we had a bit of a problem on our hands. Our extents are very dirty (perhaps that's an understatement!). We decided to go back to square one. Lead Archivist for Description and Workflow Management Olga Virakhovskaya and I sat down to try to at least come up with a short list of extent types. For just the top level extents (2800+), this was a 3 1/2 hour process (3 1/2 hours!). We didn't even want to think about how long it would take to go through the nearly 59,000 component-level extents. (I just did the math. It would take two business weeks). To make matters worse, by the end of our session, we realized that our thoughts about extents were evolving, and that the list we started creating at the beginning was different than the list we were creating at the end.
Frustrated, we got back together with the A-Team to discuss further and deliberated on the following topics.
#### DACS
Our first thought was to turn to Describing Archives: A Content Standard, or DACS. However, it turns out that DACS is pretty loosey-goosey when it comes to extents, especially the section on Multiple Statements of Extent:
These examples are all over the place!
Needless to say, this didn't help us much.
#### Human Readable vs. Machine-Actionable Extents
We realized that part of the issue arises from the fact that for pretty much our entire history the text of extent statements has been recorded for the human eyes that will be looking at them, and for those eyes only. ArchivesSpace affords the opportunity for this information to be much more granular and machine readable (and therefore potentially machine-actionable). For instance, we've thought that perhaps we could bring together all extents of a certain Type and add their numbers together to get a total. This wouldn't have been possible before but it might be in ArchivesSpace depending on how well we clean up the extents.
To oversimplify, we decided (at least for the time being) that as we normalize extents we'd like to find a happy medium between flexibility and human-readableness on the one hand, and potential machine-actionability (and consistency for consistency's sake) on the other.
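As a sketch of what that could look like against the CSV produced above (the parsing here is deliberately naive):

# total up extent numbers by type, using the csv created earlier
import csv
import re
from collections import defaultdict

totals = defaultdict(float)
with open('path/to/output.csv', 'rb') as csv_file:
    reader = csv.reader(csv_file, dialect='excel')
    next(reader)  # skip the header row
    for row in reader:
        extent = row[2] if len(row) > 2 else ''
        match = re.match(r'([\d.]+)\s+(.+)', extent)
        if match:
            try:
                totals[match.group(2)] += float(match.group(1))
            except ValueError:
                pass  # skip numbers this naive pattern can't handle

for extent_type, total in sorted(totals.items()):
    print extent_type, '--', total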
#### Why Are We Recording This Information Again?
Finally, as with many things in library- and archives-land, every once in a while you find yourself asking, "Why are we doing this again?" This case was no different. We started to really ask ourselves why we were recording this information in the first place, hoping that would inform the creation of a shortlist and a way to move forward.
We turned to user stories to try to figure out the ways that extents might or could get used. That is, not the way they have been or do get used, or even how they will get used, but all the ways they might get used. We thought of these:
First, from the perspective of a researcher...
1. As a researcher, I need to be able to look at a collection's description and be able to tell quickly how large it is so that I know if I should plan to stay an hour or a week, or look at a portion of a collection or the whole thing.
2. As a researcher, I'm looking for specific materials (photographs, drawings, audio recordings, etc.)
3. As an inexperienced researcher, I don’t know that this information may be found in Scope and Content notes.
And from the perspective of archivists...
1. As an archivist, I’d like to know how much digital material I have, how much is unique (i.e., born-digital), and how much is not (digitized). This would also be true for microfilmed material.
2. As an archivist, I need a way to know how much (and what kind) of material I have (e.g., 3,000 audiocassettes; 5,000 VHS tapes, &c.).
3. As a curation archivist, I need an easy way to distinguish between different types of film across collections (e.g., 8 mm, 16 mm, 35 mm, 2-inch) because the vendor we've selected for digitization only does one or some of these types.
4. As a curation archivist, I’m working on better a locations/stacks management system. I need to know the physical volume of holdings and the types of physical formats and sizes.
5. As a curation archivist, I need a way to know which legacy collections contain obsolete storage media (such as floppy disks of different sizes) so that I can process this digital material, or decide on equipment purchases.
6. As a reference archivist, I need an easy way to distinguish between different types of film in a collection so that I know whether we have the equipment on site for researchers to view this material.
As you can see, this is a lot to think about!
### The Solution
I know you'd really like to know our solution. Well, we've taken care of the easy ones:
Other than the easy ones, however, progress is slow. We're continuing to try to create user stories to inform our thinking, to create a short list of extent types, and to make plans for addressing common extent type issues.
A future post will detail some of the OpenRefine magic we're doing to clean up extents, and another will explain exactly how we're handling these issues and reintegrating them back into the original EADs, code snippets and all. Stay tuned!
In the meantime, why not leave a comment and let us know how and why you use extents!
## Tuesday, May 19, 2015
### Legacy EAD Clean Up: Getting Started
Previous posts focusing on our work migrating our legacy EADs to ArchivesSpace have discussed the results of legacy EAD import testing and examined the overall scale of and potential solutions for migrating our legacy EADs successfully into ArchivesSpace.
Whereas those posts were generally focused on the bigger picture of migrating legacy metadata to ArchivesSpace, and included details about overall error rates, common errors, and general additional concerns that we must address before migrating our legacy collections information without error and in a way that will ensure maximum usability going forward, this post will be the first in a series that will take a more detailed look at individual errors and the tools and strategies that we have used to resolve them.
Tools
As previously mentioned, we have found a great deal of success in approaching our legacy EAD clean up programmatically through the creation of our own custom EAD importer and by using Python and OpenRefine to efficiently clean up our legacy EADs. In order to make use of some of the scripts that we will be sharing in this and future posts, you will need to have the following tools installed on your computer:
Python 2.7.9
Aside from the custom EAD importer (which is written in Ruby), the scripts and code snippets that we will be sharing are written in Python, specifically in Python 2. Python 2.7.9 is the most recent version, but if you have an older version of Python 2 installed on your computer that will also work.
lxml
lxml is an XML toolkit module for Python, and is the primary Python module that we use for working with EAD (and, later, with MARC XML). To easily install lxml, make sure that pip is installed along with your Python installation and type 'pip install lxml' into a Command Prompt or terminal window.
To test that you have Python and lxml installed properly, open a Command Prompt (cmd.exe on a Windows computer) or a terminal window (on a Mac or Linux machine) and enter 'python.' This should start an interactive Python session within the window, displaying something like the following:
Python 2.7.9 (default, Dec 10 2014, 12:24:55) [MSC v.1500 32 bit (Intel)] on win32
>>>
If that doesn't work, check the official Python documentation for help on getting Python set up on your system.
Once you are able to start an interactive Python session in your Command Prompt or terminal window, type 'import lxml' next to the '>>>' and hit enter. If an error displays, something went wrong with the lxml installation. Otherwise, you should be all set up for using Python, lxml, and many of the scripts that we will be sharing on this blog.
OpenRefine
A lot of the metadata clean up that we've been doing has been accomplished by exporting the content of certain fields from our EADs into a CSV file using Python, editing that file using OpenRefine, and then reinserting the updated metadata back into our EADs using Python.
The Basics of Working with EADs
Many of the Python scripts that we have written for our EADs can be broken down into several groups, among them scripts that extract metadata to be examined/cleaned in OpenRefine and scripts that help us identify potential import errors that need to be investigated on a case-by-case basis.
One of the most common types of scripts that we've been using are those that extract some metadata from our EADs and output it to another file (usually a CSV). We'll get into the specifics of how we've used this to clean up dates, extents, access restrictions, subjects, and more in future posts dedicated to each topic, but to give you an idea of the kind of information we've been able to extract and clean up, take a look at this example script that will print collection-level extent statements from EADs to the Command Prompt or terminal window:
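(A sketch of such a script; the original embedded gist isn't reproduced here, and the collection-level xpath is an assumption:)

import os
from lxml import etree

ead_dir = 'path/to/eads'  # <-- change this
for filename in os.listdir(ead_dir):
    tree = etree.parse(os.path.join(ead_dir, filename))
    # collection-level extents live in the <archdesc>'s own <did>
    for extent in tree.xpath('//archdesc/did/physdesc/extent'):
        print filename, ':', extent.text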
When this script is run against a sample set of EADs, we get the following output:
As you can see from that small sample, we have a wide variety of extent statements that will need to be modified before we import them into ArchivesSpace. Look forward to a post about that in the near future!
One of the most common types of problems that we identified during our legacy EAD import testing was that there are some bits of metadata that are required to successfully import EADs into ArchivesSpace that are either missing or misplaced in our EADs. As such, we are not always looking to extract metadata from our EADs to clean up and reinsert to fit ArchivesSpace's or our own liking. Sometimes we simply need to know that information is not present in our EADs, or at least is not present in the way that ArchivesSpace expects.
The most common error associated with missing or misplaced information in our EADs is the result of components lacking <unittitle> and/or <unitdate> tags. A title is required for all archival objects in ArchivesSpace, and that title can be supplied as either a title and a date, just a title, or just a date.
We have some (not many, but some) components in our EADs that are missing an ArchivesSpace-acceptable title. Sometimes, this might be the result of the conversion process from MS Word to EAD inserting a stray empty component at the end of a section in the container list, such as at the end of a series or subseries. These empty components can be deleted and the EAD will import successfully. Other times, however, our components that lack titles actually ought to have titles; this is usually evident when a component has a container, note, extent, or other kind of description that indicates there really is something being described that needs a title.
So, rather than write a script that will delete components missing titles or modify our custom EAD importer to tell ArchivesSpace to ignore those components, we need to investigate each manifestation of the error and decide on a solution on a case-by-case basis. This script (something like it, anyway) helps us do just that:
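(A reconstruction; the original script differs in its details:)

import os
from lxml import etree

ead_dir = 'path/to/eads'  # <-- change this
for filename in os.listdir(ead_dir):
    tree = etree.parse(os.path.join(ead_dir, filename))
    for did in tree.xpath('//*[starts-with(local-name(), "c0")]/did'):
        unittitles = did.xpath('unittitle')
        # a title counts if <unittitle> has text or nested children (e.g. <title>)
        has_title = any((u.text and u.text.strip()) or len(u) for u in unittitles)
        has_date = bool(did.xpath('unitdate'))
        if not (has_title or has_date):
            print filename, tree.getpath(did.getparent())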
This script will check each <c0x> component in an EAD for either a <unittitle>, a nested title within a <unittitle> (such as <unittitle> <title>), or a <unitdate>. If a component is missing all three acceptable forms of a title, the script will output the filename and the xpath of the component.
A sample output from that script is:
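(Illustrative only; actual filenames and paths will differ:)

example_ead.xml /ead/archdesc/dsc/c01[2]/c02[14]/c03[5]
example_ead.xml /ead/archdesc/dsc/c01[7]/c02[3]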
Checking those xpaths in the EAD will take us directly to each component that is missing an acceptable title. From there, we can make decisions about whether the component is a bit of stray XML that can be deleted or if the component really does refer to an archival object and ought to have a title.
For example, the following component refers to something located in box 1 and has a note referring to other materials in the collection. This should have a title.
<c03 level="item">
<did>
<container type="box" label="Box">1</container>
<unittitle/>
</did>
</c03>
This component, however, is a completely empty <c02> element at the end of a <c01> element and does not have any container information, description, or other metadata associated with it. This can safely be deleted.
<c02 level="file"><did><unittitle/></did></c02></c01>
In the coming weeks we'll be detailing how we've used these sorts of strategies to tackle some of the specific problems that we've identified in our legacy EADs, including outputting information from and about all of our EADs to CSVs, cleaning up all the messy data in OpenRefine, and replacing the original, messy data with the new, clean data. We'll also be detailing some things that we've been able to clean up entirely programatically, without needing to even open an EAD or look at its contents in an external program. Legacy metadata clean up here at the Bentley is an ongoing process, and our knowledge of our own legacy metadata issues, our understanding of how we want to resolve them, and our skills to make that a possibility are constantly evolving. We can't wait to share all that we've learned!
## Friday, May 15, 2015
### Dystopia and Digital Preservation: Archivematica's Character Flaws (and Why We're OK with Them)
A previous post in this series looked at lessons from "No Silver Bullet: Essence and Accidents of Software Engineering" and how they apply to ArchivesSpace, an open source archives information management application we'll be using as part of our Archivematica-ArchivesSpace-DSpace Workflow Integration project.
Today, I'd like to continue in that vein (i.e., that none of these pieces of software are perfect, and that they don't meet every one of our needs, but in the end that's OK) and take a look at what we can learn about Archivematica from a genre of literature and other artistic works that I'm rather fond of: dystopian fiction.
### Welcome to Dystopia
Is it dystopia? [1]
A dystopia is an imaginary community or society that is undesirable or frightening; it literally translates to "not-good-place." Dystopian fiction--a type of speculative fiction because it's generally set in a possible future--usually involves the "creation of an utterly horrible or degraded society headed to an irreversible oblivion." [2]
### Setting the Scene: Characteristics of Our [Future] Dystopian Society
If there's one thing you can say about those of us who are interested in digital curation and preservation, it's that we're wary of the very real possibility of our future being such a "not-good-place." Here are three (and a half) reasons why:
#### 1. Society is an illusion of a perfect utopian world.
In The Matrix, a 1999 film by the Wachowski brothers, reality (not quite a utopia, but still) as perceived by most humans is actually a simulated reality called "the Matrix," created by sentient machines to subdue the human population. [3]
The first tell that you're living in a dystopia is that things are "pretty perfect," and I'd argue that, for the casual user of digital material, things certainly seem "pretty perfect." For various reasons, including the fact that for the majority of us the complex technology stack needed to render them is "invisible," digital materials appear as if they'll be around forever (for example, I spend a good deal of time looking for things to link to on this blog, knowing all the while that the lifespan of a URL is, on average, 44 days), or that you can preserve them by just "leaving them on a shelf" like you would a book (bit rot!). However, whether it's due to file format obsolescence or storage medium corruption or insufficient metadata or issues with storage or organizational risks (or...or...or...), in reality digital materials are much more fragile than their physical counterparts. This illusion of permanence has all kinds of implications, not the least of which is that it can be difficult to convince administrators that digital preservation is a real thing worth spending money on.
Whether we subscribe to a full-blown "digital dark age" as asserted by Terry Kuny at the 1997 International Federation of Library Associations and Institutions (IFLA) Council and General Conference (barbarians at the gates and all!), or our views are a bit more hopeful, all of us in the field know that there are many, many threats to digital continuity, and that these threats jeopardize "continued access to digital materials for as long as they are needed." (That's from my favorite definition of digital preservation, by the way.)
#### 2. A figurehead or concept (OAIS, anybody?) is worshiped by the citizens of the society.
In Nineteen Eighty-Four, written in 1948 by George Orwell, "Big Brother" is the quasi-divine Party leader who enjoys an intense cult of personality. [4]
A second clue that you're living in a dystopia is that a figurehead or, in our case, concept, is worshiped by the citizens of the society.
While "worship" may be a bit strong for the relationship that the digital curation community has with the Open Archival Information System (OAIS) Reference Model, you can't argue that it "enjoys an intense cult of personality." It informs everything we do, from systems to audits to philosophies. Mike likes to joke that every presentation on digital preservation has to have an "obligatory" OAIS slide. I like to joke that OAIS is like a "secret handshake" among our kind. Big Brother is watching!
I'm not trying to imply that OAIS's status is a bad thing. However, it does lead us to another, related characteristic of a dystopian society (and this is the half): strict conformity among citizens and the general assumption that dissent and individuality are bad. Don't believe me? Gauge your reaction when I say what I'm about to say:
We don't create Dissemination Information Packages (DIPs).
That's right. We don't. Just Archival Information Packages (AIPs). [Gasp!]
Strictly speaking, we provide online access to our AIPs, so in a way they act as DIPs. We just don't, for example, ingest a JPG, create a TIFF for preservation and then create another JPG for access. Storage is a consideration for us, as is the processing overhead that we would have to undertake if we wanted to do access right (for example, for video, which would need a multiplicity of formats to be streamable regardless of end user browser or device), as is MLibrary's longstanding philosophy that guides our preservation practices: to unite preservation with access.
As it was put to me by a former colleague (now at the National Energy Research Scientific Computing Center, or NERSC):
This has put us to some degree at odds with practices that are (subjectively, too) strictly based on OAIS concepts, where AIPs and DIPs are greatly variant, DIPs are delivered, and AIPs are kept dark and never touched.
Our custom systems - both DLXS and HathiTrust - deliver derivatives that are created on-the-fly from preservation masters, essentially making what in OAIS terms one might call the DIP ephemeral, reliably reproducible, and even in essence unimportant. (We have cron jobs that purge derivatives after a few days of not being used.) That design is deliberately in accordance with the preservation philosophy here.
Our DSpace implementation is the exception due to the constraints of the application, but it's worth noting we've generally decided *against* approaches that we could have taken that would have involved duplication, such as a hidden AIP and visible DIP (when I asked this question, I was in "total DIP mode"), and I think that is again a reflection of the engrained philosophy here. We've instead aimed for an optimized approach, preserving and providing content in formats that we believe users will be able to deal with.
"To some degree." "Subjectively." "In essence unimportant." Even though this all sounds very reasonable, I'm not sure that my former colleague realizes that we're living in a dystopia, and that Big Brother is watching! You can't just say stuff like that! 2 + 2 = 5!
There's more that I could say about OAIS (e.g., that it assumes that information packages are static in a way that hardly ever reflects reality, and that it doesn't focus enough on engaging with content creators and end users), but that's a post for another day.
#### 3. Society is hierarchical, and divisions between the upper, middle and lower class are definitive and unbending.
In the novel Brave New World, written in 1931 by Aldous Huxley, a class system is prenatally designated in terms of Alphas, Betas, Gammas, Deltas and Epsilons, with the lower classes having reduced brain-function and special conditioning to make them satisfied with their position in life. [5]
A last characteristic of dystopias is that they are hierarchical, and you can't do anything about it. And let's face it, our digital curation society is hierarchical. We all look to the same big names and institutions, and as someone who came from a small- to medium-sized institution, and as someone who now works at an institution without our own information technology infrastructure, I can tell you first hand that digital preservation, at least at first, can seem like a "rich person's game." For the "everyman" institution (to use a literary trope often found in dystopian fiction, with apologies for it not being inclusive) with some or no financial resources, or without human expertise, it can be hard to know where to start, or even make the case in the first place for something like digital preservation that by its very nature doesn't have any immediate benefits.
As an aside (my argument is going to fall apart!), I think this "class system" is more psychological than anything else. If you are that "everyman" institution, there's a ton that pretty much anyone can do to get started. If you're looking for inspiration, here it is:
• You've Got to Walk Before You Can Run: Ricky Elway’s report addresses some of the very basic challenges of digital preservation in the real world.
• Getting Started with Digital Preservation: Kevin Driedger and myself talk about initial steps in the digital preservation "dance."
• 'Good Enough' Really Is Good Enough: Mike and myself (in my old stomping grounds!), and our colleague Aaron Collie make the case that OAIS-ish, or 'good enough,' is just that. You don't have to be big to do good things in digital preservation.
• National Digital Stewardship Alliance Levels of Preservation: I like this model because it acknowledges that you don't have to jump into the deep end with digital preservation. Instead, the model moves progressively from "the basic need to ensure bit preservation towards broader requirements for keeping track of digital content and being able to ensure that it can be made available over longer periods of time."
• Children of Men: Theo Faron, a former activist who was devastated when his child died during a flu pandemic, is the "archetypal everyman" who reluctantly becomes a savior, leading Kee to the Tomorrow and saving humanity! Oh wait...
### Enter Archivematica, the Protagonist
It is within this dystopian backdrop that we meet Archivematica, our protagonist. Archivematica is a web- and standards-based, open-source application which allows institutions to preserve long-term access to trustworthy, authentic and reliable digital content. And according to their website, Archivematica has all of the makings of a hero who will lead the way in our conflict against the opposing dystopian force:
• It is standards-based.
Not only is it in compliance with the OAIS Functional Model (there it is again!), it uses well-defined metadata schemes like METS, PREMIS, Dublin Core and the Library of Congress BagIt Specification. This makes it very interoperable, which is why we can use it in our Archivematica-ArchivesSpace-DSpace Workflow Integration project.
• It is open source.
Here it is! Just waiting for you to modify, improve and distribute it! The documentation is released under a Creative Commons Attribution ShareAlike license, and the code is released under a GNU Affero General Public Licence, no questions asked. So really, go ahead!
• It's built on microservices.
Microservices is a software architecture style, in which complex applications are composed of small, highly decoupled and independent processes. When you find a better tool to do a particular job, you can just replace one microservice with another rather than the whole software package. This type of design was highly influential in AutoPro.
• It is flexible and customizable
Archivematica provides several decision points that give the user almost total control over processing configurations. Users may also preconfigure most of these options for seamless ingest to archival storage and access:
Processing Configuration
• It is compatible with hundreds of formats.
Archivematica maintains a Format Policy Registry (FPR). The FPR is a database which allows Archivematica users to define format policies for handling file formats, for example, the actions, tools and settings to apply to a file of a particular file format (e.g., conversion to a preservation format, conversion to an access format).
Actually, with a little luck, the FPR is about to get a whole lot better.
• It is integrated with third-party systems.
Archivematica is already integrated with DSpace, CONTENTdm, Islandora, LOCKSS, AtoM, DuraCloud, OpenStack and Archivist's Toolkit, and it's about to be integrated with ArchivesSpace!
• It has an active community.
Archivematica has an active community, including a Google Group (check out the question we posed just this week). Check out their Twitter, GitHub, and Youtube accounts as well.
• It improves and extends the functionality of AutoPro.
This one relates only to us, but Archivematica (with two notable exceptions) is more scalable, handles errors better and is easier to maintain than our homegrown tool AutoPro, which we've been using for the last three to five years or so to process digital materials.
• It is constantly improving.
This is a big one. Artefactual Systems, Inc., in concert with Archivematica's users, are constantly improving the application. The fact that whenever one person or institution contributes resources, the entire community benefits was a big motivation for our involvement. You can even monitor the development roadmap to see where they're headed!
### Archivematica's Character Flaws
That's a lot about what makes Archivematica awesome. But it's not perfect. In literature, a character flaw is a "limitation, imperfection, problem, phobia, or deficiency present in a character who may be otherwise very functional." [5] Archivematica's character flaws may be categorized as minor, major and tragic.
#### Minor Flaws
Minor flaws serve to distinguish characters for the reader, but they don't usually affect the story in any way. Think Scar's scar from The Lion King, which serves to distinguish him (a bit) from the archetypal villain, or the fact King Arthur can't count to three in Monty Python and the Holy Grail (the Holy Hand Grenade of Antioch still gets thrown!).
I can think of these:
• The responsive design is nice (even though I can't think of a time I'd ever be arranging anything on my cell phone), but the interface has something akin to Scar's scar.
I don't know why the overlap between the button next to my username and "Connected" bothers me so much, but it does.
• Also, who names their development servers after mushrooms?
#### Major Flaws
Major flaws are much more noticeable than minor flaws, and they are almost invariably important to the story's development. Think Anakin Skywalker's anger and fear of losing his wife Padme, which eventually consume him, leading to his transformation into Darth Vader, or Victor Frankenstein's excessive curiosity, leading to the creation of the monster that destroys his life.
Indeed, Archivematica has a few flaws that are important to this story's development.
Storage
Archivematica indexes AIPs, and can output them to a storage system, but, as the recent Preserving (Digital) Objects with Restricted Resources (POWRR) White Paper suggests, Archivematica is not a storage solution:
Notice all the gray above Storage?
If you're interested in long-term preservation, your digital storage system should be safe and redundant, perhaps using different storage mediums, with at least one copy in a separate geographic location. Since Archivematica does not store digital material, the onus is on the institution to get this right.
To be fair, from the beginning, Archivematica has not focused on storage, instead focusing on producing--and later, indexing--a very robust AIP and integrating with other storage tools such as Arkivum, DuraCloud, LOCKSS and DuraSpace (including the recent launch of ArchivesDirect, a new Archivematica/DuraSpace hosted service), and besides those just about any type of storage you can think of. I still feel I have to classify this as a major flaw, though, since quality storage is at the core of a good digital preservation system.
Active, Ongoing Management
A second major character flaw for Archivematica is that it is not a means for the active, ongoing management aspect of digital preservation, which is really what ensures that digital materials will be accessible over time as technologies change. Again, the POWRR White Paper:
Notice all the gray above Maintenance?
Archivematica doesn't currently have functionality to perform preservation migrations on AIPs that have already been processed. Even though I'd argue that this isn't as central to long-term preservation as quality storage is, it will eventually become an issue for institutions trying to maintain accessibility to digital objects over time.
Archivematica also does not have out-of-the-box functionality to do integrity checks on stored digital objects. Even though I have to admit that after recording an initial message digest, I haven't actually heard of a lot of "everyman" institutions performing periodic audits or doing them in response to a particular event, this seems like a deficiency in the area of file fixity and data integrity.
That being said, the 1.4 release of Archivematica is said to bring the beginnings of functionality to re-ingest digital content. Also, there is a command line fixity tool integrated with the Storage Service, it just isn't really usable out-of-the-box for your typical archivist.
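Since integrity checking comes up repeatedly here, a sketch may help show how little is actually involved in a basic audit. To be clear, this is not the Storage Service's fixity tool; it's an illustrative Python script, and the manifest.csv name, the Z:\dark_archive path and the audit() helper are all invented for the example. It assumes a manifest of relative-path/MD5 pairs recorded at ingest:

```python
import csv
import hashlib
from pathlib import Path

def md5(path, chunk_size=1024 * 1024):
    """Stream the file through MD5 so large AIPs don't exhaust memory."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def audit(manifest_csv, storage_root):
    """Compare stored files against a manifest of (relative path, md5) rows."""
    failures = []
    with open(manifest_csv, newline="") as f:
        for rel_path, expected in csv.reader(f):
            target = Path(storage_root) / rel_path
            if not target.is_file():
                failures.append((rel_path, "missing"))
            elif md5(target) != expected:
                failures.append((rel_path, "checksum mismatch"))
    return failures

if __name__ == "__main__":
    for rel_path, problem in audit("manifest.csv", r"Z:\dark_archive"):
        print(f"FIXITY FAILURE: {rel_path} ({problem})")
```

Run on a schedule (or after a storage event), this is the "periodic audit" in a nutshell; the hard part is organizational, not technical.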
Documentation
I should have included this in last week's post about ArchivesSpace as well. Documentation issues are a "known issue" with many open source projects, and ArchivesSpace and Archivematica are no different. There have been a number of times where I have looked for some information on the Archivematica wiki (for example, on the Storage Service API, Version 1.4, etc.) and have found the documentation to be missing or incomplete. Lack of documentation can be a real barrier to implementation.
On the upside, documentation is something we can all contribute to (even if we aren't coders)! I for one am going to be looking into this, starting with this conversation.
An update! That was fast!
And this one:
Initial QC on Significant Characteristics
Some digital preservation systems and workflows perform checks on significant characteristics of the content of digital objects before and after normalization or migration. For example, if you're converting a Microsoft Word document to PDF/A, a system or workflow might check the word count on either end of that transformation. Currently, the only quality control that Archivematica does is to check that a new file exists, and that its size isn't zero (i.e., that it has data).
However, it is possible to add quality control functionality in Archivematica, it just isn't well documented (see the above). In the FPR, you can define verification commands above and beyond the basic default commands. There's some more homework for me.
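To make that concrete, here is a minimal sketch of what a verification command could look like, in Python. Everything here is hypothetical rather than drawn from the FPR documentation: the script assumes you have already extracted plain text from both the original and the normalized copy, the 1% word-count tolerance is arbitrary, and reporting success through the exit code is an assumption about how such commands get invoked. The first test mirrors the default check described above (the file exists and has data):

```python
import sys
from pathlib import Path

def verify(original_txt, converted_txt, tolerance=0.01):
    """Default-style check (exists, non-zero size), plus a naive
    significant-characteristics check: compare word counts of the plain-text
    extractions of the original and the normalized copy."""
    converted = Path(converted_txt)
    if not converted.is_file() or converted.stat().st_size == 0:
        return False  # this much is what Archivematica checks by default
    words_before = len(Path(original_txt).read_text(errors="ignore").split())
    words_after = len(converted.read_text(errors="ignore").split())
    if words_before == 0:
        return words_after == 0
    return abs(words_before - words_after) / words_before <= tolerance

if __name__ == "__main__":
    # Signal success or failure to the caller via the exit status.
    sys.exit(0 if verify(sys.argv[1], sys.argv[2]) else 1)
```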
Reporting
While Archivematica produces a lot of technical metadata about the digital objects in your collections, there isn't really a way to manage this information or share it with administrators via reports. Even basic facts, such as total extent or size of collection and distribution of file types or ages are not available in a user friendly way. This is true about collections as a whole, but also for individual Submission Information Packages (SIPs) or AIPs.
Two recent developments are worth mentioning. First, there's a tool Artefactual Systems developed for the Museum of Modern Art (MoMA): Binder (which just came out this week!). It solves this problem, for example, by allowing you to look at the technical metadata of digital objects in a graphical user interface, run and manage fixity checks of preserved AIPs (receiving alerts if a fixity check fails), and generating and saving statistical reports on digital holdings for acquisitions planning, preservation, risk assessment and budget management. Actually, it does even more than that, so be sure to check out the video. We can't wait to dig into Binder.
The second development has to do with our project! Part of the new Appraisal and Arrangement tab will be new reporting functionality to assist with appraisal. This will (we hope!) include technical information about the files themselves--some code may be borrowed from Binder--as well as information about Personally Identifiable Information (PII):
Transfer Backlog Pane
### Tragic Flaws
Tragic flaws are a specific sort of flaw in an otherwise noble or exceptional character that bring about his or her own downfall and, often, eventual death. Think Macbeth's hubris or human sin in Christian theology.
While all of this is a little dramatic for our conversation here, there is one very important thing that Archivematica doesn't do:
Archivematica does NOT ensure that we never lose anything digital ever again.
Besides the fact that Archivematica suffers from all of the same "essential" difficulties in software engineering as ArchivesSpace (namely, complexity, conformity, changeability and invisibility--and for pretty much all of the same reasons, I might add), it is also not some kind of comprehensive "silver bullet" that will protect our digital material for all time. It's just not, which leads me to...
### The Reveal! Why All of This is OK with Us
Actually, there is no such thing as a "comprehensive" digital preservation solution, so we can't really hold this against Artefactual Systems, Inc. Anne R. Kenney and Nancy McGovern, in "The Five Organizational Stages of Digital Preservation," say it best:
Organizations cannot acquire an out-of-the-box comprehensive digital preservation program— one that is suited to the organizational context in which the program is located, to the materials that are to be preserved, and to the existing technological infrastructure. Librarians and archivists must understand their own institutional requirements and capabilities before they can begin to identify which combination of policies, strategies, and tactics are likely to be most effective in meeting their needs.
Just like ArchivesSpace, Archivematica has a lot going for it. We are especially fond of its microservices design, its incremental agile development methodology, and its friendly and knowledgeable designers.
We love the fact that Archivematica is open source and community-driven, and we try to participate as fully as we can to that community, and intend to do so even more in the future. We do that financially, obviously, but also by participating on the Google Group, and contributing user stories for our project and ensuring that the code developed for it will be made available to the public. You should too!
### Conclusion: The Purpose of Dystopian Fiction
To have an effect on the reader, dystopian fiction has to have one other trait: familiarity. The dystopian society must call to mind the reader's own experience. According to Jeff Mallory, "if the reader can identify with the patterns or trends that would lead to the dystopia, it becomes a more involving and effective experience. Authors use a dystopia effectively to highlight their own concerns about society trends." Good dystopian fiction is a call to action in the present.
By focusing on automating the ingest process and producing a repository agnostic, normalized, and well-described (those METS files are huge!) AIP, and doing so in such a way that institutions of all sizes can do a lot or even a little with digital preservation, Archivematica addresses those concerns really well. That, coupled with the fact that staff there are also active in other community initiatives, such as the Hydra Metadata Working Group and IMLS Focus, definitely make them not only protagonists, but heroes in this story.
In the end, Archivematica is our call to action to be heroes in this story as well!
[1] Is it Dystopia? A flowchart for de-coding the genre by Erin Bowman is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License. Based on a work at www.embowman.com. Feel free to share it for non-commercial uses.
[2] Dystopia (this version)
[3] "The Matrix Poster" by Source. Licensed under Fair use via Wikipedia - http://en.wikipedia.org/wiki/File:The_Matrix_Poster.jpg#/media/File:The_Matrix_Poster.jpg
[4] "1984first" by George Orwell; published by Secker and Warburg (London) - Brown University Library. Licensed under Public Domain via Wikipedia - http://en.wikipedia.org/wiki/File:1984first.jpg#/media/File:1984first.jpg
[5] "BraveNewWorld FirstEdition" by Source. Licensed under Fair use via Wikipedia - http://en.wikipedia.org/wiki/File:BraveNewWorld_FirstEdition.jpg#/media/File:BraveNewWorld_FirstEdition.jpg
[6] Character flaw (this version)
## Tuesday, May 12, 2015
### Maximizing Microservices in Workflows
Today I wanted to talk a little (maybe a lot?) about our development of ingest and processing workflows for digital archives at the Bentley Historical Library, with a focus on the role of microservices.
## Workflow Resources
Maybe we should pause for wee bit of context (we are archivists, after all!). As I mentioned in a previous post, our 2010-2011 MeMail project gave us a great opportunity to explore and standardize our procedures for preparing born-digital archives for long-term preservation and access. It was also a very fruitful period of research into emerging best practices and procedures.
At the time, there wasn't a ton of publicly available documentation on workflows established by peer institutions, but the following projects proved to be tremendously helpful in our workflow planning and development:
• Personal Archives Accessible in Digital Media (paradigm) (2005-2007): an excellent resource for policy questions and considerations related to the acquisition, appraisal, and description of digital personal papers. By not promoting specific tools or techniques (which would have inevitably fallen out of date in the intervening years), the project workbook has remained a great primer for collecting and processing digital archives.
• AIMS Born-Digital Collections: An Inter-Institutional Model for Stewardship (2009-2011): another Mellon-funded project that involved the University of Virginia, Stanford University, University of Hull (U.K.) and Yale University. The project blog provided a wealth of information on tools, resources, and strategies and their white paper is essential reading for any archivist or institution wrestling with the thorny issues of born-digital archives, from donor surveys through disk imaging.
• Practical E-Records: Chris Prom's blog from his tenure as a Fulbright scholar at the University of Dundee's Center for Archive and Information Studies yielded a lot of great resources for policy and workflow development as well as reviews of handy and useful tools.
• Archivematica: we first became aware of Archivematica at the 2010 SAA annual meeting, when Tim Pyatt and Seth Shaw featured it in a preconference workshop. While the tool was still undergoing extensive development at this point (version 0.6), the linear nature of its workflow and clearly defined nature of its microservices were hugely influential in helping us sketch out a preliminary workflow for our born-digital accessions.
Using the above as guidelines (and inspiration), we cobbled together a manual workflow that was useful in terms of defining functional requirements but ultimately not viable as a guide for processing digital archives due to the many potential opportunities for user error or inconsistencies with metadata collection, file naming, copy operations, etc.
These shortcomings led me to automate some workflow steps and ultimately produced the AutoPro tool and our current ingest and digital processing workflow:
• Preliminary procedures to document the initial state of content and check for potential issues
• Step 1: Initial survey and appraisal of content
• Step 2: Scan for Personally Identifiable Information (primarily Social Security numbers and credit card numbers)
• Step 3: Identify file extensions
• Step 4: File format conversions
• Step 5: Arrangement and Description
• Step 6: Transfer and Clean Up
## Microservices
One of the greatest lessons I took from the AIMS project and Archivematica was the use of microservices; that is, instead of building a massive, heavily interdependent system, I defined functional requirements and then identified a tool that would complete the necessary tasks. These tools could then be swapped out or shifted around in the workflow to permit greater flexibility and easier implementation.
Rather than dwell too extensively on the individual procedures in our workflow (that's what the manual is for!), I would like to provide some examples of how we accomplish steps using various command prompt/CMD.EXE utilities as microservices. Having said that, I feel compelled to call attention to the following:
• I am an archivist, not a programmer; at one point, I thought I could use Python for AutoPro, but quickly realized I had a much better chance of stringing something together with Windows CMD.EXE shell scripts, as they were easy to learn and use. Even then, I probably could have done better....give a holler if you see any egregious errors!
• For a great tutorial on commandline basics, see A/V PReserve's "Introduction to Using the Command Line Interface for Working with Files and Directories."
• As I hinted above, we're a Windows shop and the following code snippets reflect CMD.EXE commands. Many of these applications can be run on Mac/Linux machines via native versions or WINE.
• The CMD.EXE shell needs to know where non-native applications/utilities are located; users should CD into the appropriate directory or include the full systems path to the application in the command.
• If any paths (to applications or files) contain spaces, you will need to enclose the path in quotation marks.
• The output of all these operations is collected in log files (usually by redirecting STDOUT) so that we have a full audit trail of operations and a record of any actions performed on content.
• In the code samples, 'REM' is used to comment out notes and descriptions.
### Preliminary Procedures
Upon initiating a processing session, we run a number of preliminary processes to document the original state of the digital accession and identify any potential problems.
#### Virus Scan
The University of Michigan has Microsoft System Center Endpoint Protection installed on all its workstations. Making the best of this situation, we use the MpCmdRun.exe utility to scan content for viruses and malware, first checking to make sure the antivirus definitions are up to date:
REM _procDir=Path to processing folder
"C:\Program Files\Microsoft Security Client\MpCmdRun.exe" -SignatureUpdate -MMPC
"C:\Program Files\Microsoft Security Client\MpCmdRun.exe" -scan -scantype 3 -file %_procDir%
#### Initial Manifest
Content is stored in our interim repository using the Library of Congress BagIt specification. When ingest and processing procedures commence, we create a new document to record the structure and size of the accession using diruse.exe and md5deep:
REM _procDir=Path to processing folder
diruse.exe /B /S %_procDir%
CD /D %_procDir%
md5deep.exe -rclzt *
Diruse.exe will output the entire directory hierarchy (thanks to the /S option) and provide the number of files and relative size (in bytes, due to the /B option) in addition to providing total number of files and size for the main directory.
For md5deep, changing to the processing directory will facilitate returning relative paths for content. Our command includes the following parameters:
• -r: recursive mode; will traverse the entire directory structure
• -c: produces comma separated value output
• -l: outputs relative paths (as dictated by location on command prompt)
• -z: returns file sizes (in bytes)
• -t: includes timestamp of file creation time
• *: the asterisk indicates that everything in the present working directory will be included in output.
#### Extract Content from Archive Files
In order to make sure that content stored in archive files is extracted and run through important preservation actions, we search for any such content and use 7-Zip to extract content.
First, we search the processing directory for any archive files, and save the full path to a text file:
CD /D %_procDir%
DIR /S /B *.zip *.7z *.xz *.gz *.gzip *.tgz *.bz2 *.bzip2 *.tbz2 *.tbz *.tar *.lzma *.rar *.cab *.lza *.lzh | FINDSTR /I /E ".zip .7z .xz .gz .gzip .tgz .bz2 .bzip2 .tbz2 .tbz .tar .lzma .rar .cab .lza .lzh" > ..\archiveFiles.txt
The dir utility (similar to "ls" on a Mac or Linux terminal) employs the /S option to recursively list content and the /B option to return full paths. The list of file extensions (by no means the best way to go about this, but...) will only return paths that match this pattern. For greater accuracy, we then pipe ("|") this output to the findstr ("find string") command, which uses the /I option for a case-insensitive search and /E to match content at the end of a line.
We then iterate through this list with a FOR loop and send each file to our ":_7zJob" extraction function with the filename (%%a) passed along as a parameter:
FOR /F "delims=" %%a in (..\archiveFiles.txt ) DO (CALL :_7zJob "%%a")
REM when loop is done, GOTO next step
:_7zJob
REM Create folder in _procDir with the same name as archive; if folder already exists; get user input
SET _7zDestination="%~dpn1"
MKDIR %_7zDestination%
REM Run 7zip to extract files
7z.exe x %1 -o%_7zDestination%
REM Record results (both success and failure)
IF %ERRORLEVEL% NEQ 0 (
ECHO FAILED EXTRACTION
GOTO :EOF
) ELSE (
ECHO SUCCESSFUL EXTRACTION!
GOTO :EOF
)
As the path to each archive file is sent to :_7zJob, we use the CMD.EXE's built-in parameter extension functionality to isolate a folder path using the root name as the archive file (%~dpn1; Z:\unprocessed\9834_0001\newsletters.zip thus would be Z:\unprocessed\9834_0001\newsletters). This path will be the destination for files extracted from a given archive file; we save it as a variable (%_7zDestination%) and create a folder with the MKDIR command.
We then run 7-Zip, using the 'x' option to extract content from the archive file (represented by %1) and use the -o option to send the output to our destination folder. Finally we check the return code (%ERRORLEVEL%) for 7-Zip; if it is not equal to 0 then extraction has failed. Our production script includes an option to retry the operation.
#### Length of Paths
Because Windows cannot handle file paths longer than 255 characters, we run tlpd.exe ("Too Long Paths Detector") to identify any files or directories that might cause us trouble.
REM _procDir=Path to processing folder
START "TOO LONG PATHS" /WAIT tlpd.exe %_procDir% 255
As we're calling this application from a batch (".bat") file, I use the START command to launch it in a new shell window and add the /WAIT option so that the script will not proceed to the next operation until this is complete. The "255" in the command specifies the maximum path length to check against, as tlpd.exe lets you adjust the search target.
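If you ever want to double-check tlpd.exe (or run the same test on a machine where it isn't installed), the scan is easy to script. Here is a small Python sketch using the same 255-character threshold; the processing path is borrowed from the example later in this post:

```python
import os

MAX_PATH = 255  # the Windows limit this workflow checks against

def too_long_paths(root):
    """Walk the processing directory and yield any full path at or over the limit."""
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            full = os.path.join(dirpath, name)
            if len(full) >= MAX_PATH:
                yield full

for path in too_long_paths(r"Z:\unprocessed\9834_0001"):
    print(len(path), path)
```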
### Step 1: Initial Survey
In the initial review and survey phase of the workflow, AutoPro incorporates a number of applications to view or render content (Quick View Plus, Irfanview, VLC Media Player, Inkscape) and also employs TreeSize Professional and several Windows utilities to analyze and characterize content. We'll take a closer look at these latter tools in a forthcoming post on appraising digital content.
### Step 2: PII Scan
This step nicely illustrates the flexibility of a microservice approach to workflow design, as we are currently using our third different application for this process. Early iterations of the workflow employed the Cornell Spider, but the high number of false positives (i.e., nine digit integers interpreted as SSNs) made reviewing scan results highly labor-intensive. (Cornell no longer hosts a copy, but you can check it out in the Internet Archive.)
We next employed Identity Finder, having learned of it from Seth Shaw (then at Duke University). This tool was much more accurate and included functionality to redact information from plain text and Microsoft Office Open XML files. At the same time, Identity Finder was rather expensive and a change in its enterprise pricing at the University of Michigan (and the open source nature of our Mellon grant development), have led us to a third solution: bulk_extractor.
Already featured in Archivematica and a prominent component of the BitCurator project, bulk_extractor provides a rich array of scanners and comes with a viewer to inspect scan results. I am in the processing of rewriting our PII scan script to include bulk_extractor (ah...the glory of microservices!) and will probably end up using some variation on the following command:
bulk_extractor -o "Z:\path\to\output\folder" -x aes -x base64 -x elf -x email -x exif -x gps -x gzip -x hiberfile -x httplogs -x json -x kml -x msxml -x net -x rar -x sqlite -x vcard -x windirs -x winlnk -x winpe -x winprefetch -R "Z:\path\to\input"
We are only using a subset of the available scanners; the "-x" options are instructing bulk_extractor to exclude certain scanners that we aren't necessarily interested in.
We're particularly interested in exploring how the BEViewer can be integrated into our current workflow (and possibly into Archivematica's new Appraisal and Arrangement tab? We'll have to see...). In any case, here's an example of how results are displayed and viewed in their original context:
### Step 3: Identifying File Extensions
The identification of mismatched file extensions is not a required step in our workflow; it is intended solely to help end-users access and render content.
As a first step, we run the UK National Archives' DROID utility and export a report to a .csv file. Before running this command, we open up the tool preferences and uncheck the "create md5 checksum" option so that the process runs faster.
REM Generate a DROID report
REM _procDir = processing directory
java -jar droid-command-line-6.1.5.jar -R -a "%_procDir%" -p droidExtensionProfile.droid
REM Export report to a CSV file
java -jar droid-command-line-6.1.5.jar -p droidExtensionProfile.droid -e extensionMismatchReport.csv
In the first command, DROID recursively scans our processing directory and outputs to our profile file (droidExtensionProfile.droid). In the second, we export this profile to a .csv file, one column of which indicates file extension mismatch with a value of true (the file extension does not match the format profile detected by DROID) or false (extension is not in conflict with profile).
Basic CMD.EXE is pretty lousy at parsing .csv files, so I do one extra step and make this .csv file a tab delimited file, using a Visual Basic script I found somewhere on the Internets. (This is getting ugly--thanks for sticking with us!)
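(For anyone who would rather skip the VBScript hunt: the same conversion is a few lines of Python with the standard csv module. This is offered as an alternative sketch, not what AutoPro actually runs.)

```python
import csv

# Convert DROID's CSV export to tab-delimited so CMD.EXE's FOR /F can parse it.
with open("extensionMismatchReport.csv", newline="", encoding="utf-8") as src, \
     open("extensionMismatchReport.tsv", "w", newline="", encoding="utf-8") as dst:
    csv.writer(dst, delimiter="\t").writerows(csv.reader(src))
```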
We then loop through this tab delimited file and pull out all paths that have extension mismatches:
FOR /F "usebackq tokens=4,13,14,15 delims= " %%A in (FINDSTR /IC:" true " "extensionMismatchReport.tsv") DO CALL :_fileExtensionIdentification "%%A" "%%B" "%%C" "%%D"
Once again we use our FOR loop, with the tab character set as the delimiter. We will loop through each line of our extension mismatch report, looking for where DROID returned "true" in the extension mismatch column and we'll then be pulling out information from four columns and pass these as arguments to our ":_fileExtensionIdentification" function: 4 (full path to content), 13 (file extension; employed to identify files with no extension ), 14 (PUID, or PRONOM Unique IDentifier), and 15 (mime type).
Once this information is passed to the function, we first run the TrID file identifier utility:
trid.exe %_file%
Based upon the file's binary signature, TrID will present the likelihood of the file being a format (and extension) as a percentage:
Because the output from this tool may be indeterminate, we also use curl to grab the PRONOM format profile (using the PUID as a variable in the command), save this information to a file, and then look for any signature tags that will enclose extension information:
curl.exe http://apps.nationalarchives.gov.uk/pronom/%_puid%.xml > pronom.txt
TYPE pronom.txt | FINDSTR /C:"<Signature>"
The TYPE command will print a file to STDOUT and we then pipe this to FINDSTR to identify only those lines that include extensions.
Based upon the information from these tools, the archivist may elect to assign a new extension to a file (which choice is recorded in a log file) or simply move on to the next file if neither utility presents compelling evidence.
### Step 4: Format Conversion
Following the lead of Archivematica, we've chosen to create preservation copies of content in 'at-risk' file formats as a primary preservation strategy. In developing our conversion pathways, we conducted an extensive review of community best practices and were strongly influenced by the Library of Congress's "Sustainability of Digital Formats", the Florida Digital Archive's "File Preservation Strategies", and Archivematica's format policies.
This step involves searching for "at-risk" formats by extension (another reason we've incorporated functionality for file extension identification) and then looping through each list and sending content to different applications. We also calculate an eight character CRC32 hash for each original file and append it to the new preservation copy to (a) avoid file name collisions and (b) establish a link between the preservation and original copies. Below are some of our most common conversion operations:
#### Raster Images: .bmp .psd .pcd .pct .tga --> .tif (convert.exe utility from ImageMagick)
convert.exe "%_original%" "%_preservation%.tif"
#### Vector Images: .ai .wmf .emf --> .svg (Inkscape)
inkscape.exe -f "%_original%" -l "%_preservation%.svg"
#### .PDF --> .PDF/A (Ghostscript)
gswin64.exe -sFONTPATH="C:\Windows\Fonts;C:\Program Files\gs\gs9.15\lib" -dPDFA -dBATCH -dNOPAUSE -dEmbedAllFonts=true -dUseCIEColor -sProcessColorModel=DeviceCMYK -dPDFACompatibilityPolicy=1 -sDEVICE=pdfwrite -sOutputFile="%_preservation%" "%_original%"
In the above example, I'm using a 64 bit version of Ghostscript. I won't even try to unpack all the options associated with this command, but check out the GS documentation for more info. Note that if you update your PDFA_def.ps file with the location of an ICC color profile, you will need to use double backslashes in the path information.
#### Audio Recordings: .wma .ra .au .snd --> .wav (FFmpeg)
REM Use FFprobe to get more information about the recording
ffprobe.exe -loglevel panic "%_original%" -show_streams > ffprobe.txt
REM Parse FFprobe output to determine the number of audio channels
FOR /F "usebackq tokens=2 delims==" %%c in (FINDSTR /C:"channels" ffprobe.txt) DO (SET _audchan=%%c)
REM Run FFmpeg, using the %_audchan% variable
ffmpeg.exe -i "%_original%" -ac %_audchan% "%_preservation%.wav"
#### Video Files: .flv .wmv .rv .rm .rmvb .mts --> .mp4 with h.264 encoding (FFmpeg)
REM Use FFprobe to get more information about the recording
ffprobe.exe -loglevel panic "%_original%" -show_streams > ffprobe.txt
REM Parse FFprobe output to determine the number of audio channels
FOR /F "usebackq tokens=2 delims==" %%c in (FINDSTR /C:"channels" ffprobe.txt) DO (SET _audchan=%%c)
REM Run FFmpeg, using the %_audchan% variable
ffmpeg.exe -i "%_original%" -ac %_audchan% -vcodec libx264 "%_preservation%.wav"
#### Legacy Word Processing Files: .wp .wpd .cwk .sxw .uot .hwp .lwp .mcw .wn --> .odt (LibreOffice)
REM Run LibreOffice as a service and listening on port 2002
START "Libre Office" /MIN "C:\Program Files (x86)\LibreOffice 4\program\soffice.exe" "-accept=socket,port=2002;urp;" --headless
REM Run DocumentConverter python script using the version of python included in LibreOffice.
"C:\Program Files (x86)\LibreOffice 4\program\python.exe" DocumentConverter.py "%_original%" "%_preservation%.odt"
This conversion requires the PyODConverter python script.
#### Microsoft Office Files: .doc .xls .ppt --> Office Open XML (OMPM)
This operation requires the installation of Microsoft's Office Compatibility Pack and Office Migration Planning Manager Update 1 (OMPM). Before running, the C:\OMPM\Tools\ofc.ini file must be modified to reflect the "SourcePathTemplate" and the "DestinationPathTemplate" (examples are in the file). Once modified, the OFC.EXE utility will run through and convert all legacy Office file formats to the 2010 version of Office Open XML with the following command:
OFC.EXE
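Back to the CRC32 naming scheme mentioned at the top of this step: here is a rough Python sketch of how an eight-character CRC32 suffix can be computed and spliced into a preservation copy's name. It is an illustration only (not AutoPro's actual code), and the newsletter.wpd example is invented:

```python
import zlib
from pathlib import Path

def crc32(path, chunk_size=1024 * 1024):
    """Eight-character CRC32 of the original file, computed in chunks."""
    value = 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            value = zlib.crc32(chunk, value)
    return f"{value & 0xFFFFFFFF:08x}"

def preservation_name(original, new_extension):
    """newsletter.wpd -> newsletter-1a2b3c4d.odt: avoids file name collisions
    and links the preservation copy back to its original."""
    original = Path(original)
    return original.with_name(f"{original.stem}-{crc32(original)}{new_extension}")

print(preservation_name("newsletter.wpd", ".odt"))
```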
### Step 5: Arrangement, Packaging, and Description
This step involves a number of applications for conducting further reviews of content and also employs 7-Zip to package materials in .zip files and a custom Excel user form for recording descriptive and administrative metadata. We'll explore this functionality in more depth in a later post.
### Step 6: Transfer and Clean Up
To document the final state of the accession (especially if preservation copies have been created or materials have been packaged in .zip files), we run DROID a final time. After manually enabling the creation of md5 checksums, we employ the same commands as used before:
REM Generate a DROID report
REM _procDir = processing directory
java -jar droid-command-line-6.1.5.jar -R -a "%_procDir%" -p droidProfile.droid
REM Export report to a CSV file
java -jar droid-command-line-6.1.5.jar -p droidProfile.droid -e DROID.csv
We then use the Library of Congress's BagIt tool to 'bag' the fully processed material and then (to speed things up) copy it across the network to a secure dark archive using TeraCopy.
REM _procDir = Processing directory
bagit-4.4\bin\bag baginplace %_procDir% --log-verbose
REM We then use TeraCopy to move the content to our dark archive location
teracopy.exe COPY %_procDir% %_destination% /CLOSE
An additional copy of material will then be uploaded to Deep Blue, our DSpace repository.
## PREMIS
I should also mention we record PREMIS event information for all preservation actions at the accession level. Because I had no idea how to work with XML when we started this, we write the following elements to .csv files:
• eventType: Name or title of the event (i.e., "virus scan").
• eventIdentifierType: We're using UUIDs to identify events.
• eventIdentifierValue: A UUID to uniquely identify the event.
• eventDateTime: Timestamp for when the event concluded.
• eventDetail: Note providing additional information for the event.
• eventOutcome: Was the process completed? (Completion indicates success.)
• linkingAgentIdentifierType: We use MARC21 codes to identify agents.
• linkingAgentIdentifierValue: MiU-H (Our MARC21 code.)
• linkingAgentRole: Executor (i.e., the library executed this action).
• linkingAgentIdentifierType: "Tool" (we use this second agent record to identify software used in events).
• linkingAgentIdentifierValue: Name and version of software.
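To show how little machinery this takes, here is a sketch of recording one such event as a .csv row with Python's standard library. The column order simply follows the element list above; the record_event() helper and the premis.csv file name are made up for the example:

```python
import csv
import uuid
from datetime import datetime, timezone

def record_event(log_path, event_type, detail, outcome, tool):
    """Append one PREMIS event row, mirroring the element list above."""
    row = [
        event_type,                              # eventType
        "UUID",                                  # eventIdentifierType
        str(uuid.uuid4()),                       # eventIdentifierValue
        datetime.now(timezone.utc).isoformat(),  # eventDateTime
        detail,                                  # eventDetail
        outcome,                                 # eventOutcome
        "MARC21", "MiU-H", "Executor",           # linking agent: the library
        "Tool", tool,                            # linking agent: the software
    ]
    with open(log_path, "a", newline="", encoding="utf-8") as f:
        csv.writer(f).writerow(row)

record_event("premis.csv", "virus scan",
             "MpCmdRun.exe -scan -scantype 3", "Completed",
             "Microsoft System Center Endpoint Protection")
```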
https://motls.blogspot.com/2014/11/cms-sees-excess-of-same-sign-dimuons-too.html?showComment=1416326444164&m=1
## Tuesday, November 18, 2014
### CMS sees excess of same-sign dimuons "too"
An Xmas rumor deja vu
There are many LHC-related hep-ex papers on the arXiv today, and especially
Searches for the associated $$t\bar t H$$ production at CMS
by Liis Rebane of CMS. The paper notices a broad excess of like-sign dimuon events. See the last 2+1 lines of Table 1 for numbers.
Those readers who remember all 6,000+ blog posts on this blog know very well that back in December 2012, there was a "Christmas rumor" about an excess seen by the other major LHC collaboration, ATLAS.
ATLAS was claimed to have observed 14 events – which would mean a 5-sigma excess – of same-sign dimuon events with the invariant mass $$m_{\rm inv}(\mu^\pm \mu^\pm) = 105\,\mathrm{GeV}.$$ Quite a bizarre Higgs-like particle with $$Q=\pm 2$$, if a straightforward explanation exists. Are ATLAS and CMS seeing the same deviation from the Standard Model?
1. Stipulated: There exists a 105 GeV/c^2 Higgs.
Question: What does that leave of the Massed Standard Model?
Comment: Q = ±2. giggle Parameters for sale! Get your curve fittings three for the price of two for a limited time only! Buy ten, get 3 sigma off a perturbation treatment at your local spa. Particle physics is become FNORD. [Principia Discordia (1965), The Illuminatus! Trilogy (1975)]
2. Sorry, you can't randomly change charges of particles. There is no a priori obvious, canonical model with a Q = ±2. Higgs boson. The idea that there is is just your fantasy caused by your complete misunderstanding of modern physics.
3. strictly speaking...Nov 18, 2014, 6:52:00 PM
Does a new particle explaining the resonance have to be a scalar Higgs-like boson? Or could it be say, a new W-like gauge boson?
4. strictly speaking...Nov 18, 2014, 6:59:00 PM
Or a 3-stop bound state?
5. strictly speaking...Nov 18, 2014, 7:04:00 PM
Assuming they haven't explicitly ruled out a neutrino excess or have detailed polarization measurements.
6. LOL, great proposals.
It would surely be quite a revolution if the charge-two particle were a gauge boson.
According to things we know, massive elementary gauge bosons should be uniquely associated with broken generators of a Lie group - the gauge group. I think that there is no "simple enough" gauge group whose generators would include generators with Q=1 (known W-bosons) as well as Q=2 (the new ones).
Well, in principle, you could extend the electroweak SU(2) to an electroweak SU(3) or larger, and then you could have more complicated charges under the electromagnetic Q, but you would also predict lots of new particles - quarks would have to be electroweak triplets or worse, and so on.
Bound states of N top quarks have masses comparable to N times mass of the top quark plus minus a small multiple of the QCD scale (below 1 GeV), I think. So you can't easily get far away from 350 GeV, 520 GeV, and so on.
7. Contemporary physical theory is a dog's breakfast of parameterizations, curve fittings, epicycles, and apologies. Everything can be rationalized with perfect equations, even superluminal muon neutrinos.
Theory should be elegant, terse, and predictive. When it is not, it has one or more defective founding postulates. Do not write more theory or rationalize "confirmation." Break defective theory with heterodox observation to locate the defective postulates.
8. No competent theorist has ever seriously considered the possibility that neutrinos are faster than light.
The physical image of the world as we have reached it by the early 21st century is beautiful, concise, unified, sensible, far-reaching, universal, and quite possibly very close to the final picture i.e. the mind of God – and all the negatively sounding adjectives are completely irrational.
Have you ever had the courage to consider the possibility that your attitude against contemporary physical theory is negative because the content of your skull, and not contemporary physical theory, is a stinky pile of crap?
This is not a rhetorical question but a real question and I do demand an answer from you, otherwise I will ban you.
9. Can anybody explain how do you arrive at 5 sigma excess from 14 events? I know what standard deviation means, just wondering what are the exact steps by which one follows from another?
10. Current theoretical physics is incredibly rigid. Suggesting that it is a “dog’s breakfast” is beyond stupid. You really have to start over, Uncle Al.
As a beginning, re-view Nima Arkani-Hamed’s Nov. 6 talk at the Perimeter Institute and come back when you get it. If you choose to argue with Nima and Lubos you are hopeless.
11. Let me try myself first: in the given experiment, had they found 3 events more than expected by the SM, it would have been within 1 sigma, so no biggie, I guess. But they found 14 more so it is about 5 sigma from expected value? Is that how it works?
12. The actual events observed vs the probability that accidental or coincident recordings of the detectors could be observed (non-events).
13. If the like probability of a false positive is once in each one million years, three positives in ten seconds is more than one SD.
14. You flip a coin 14 times and get 14 heads in a row. (You could have a two headed coin or you might have a heads-tails coin.) To five sigma (whatever, didn't do the maths), you probably have a two headed coin, not a heads-tails coin.
15. Lubos,
Would you be interested in waging a bet on SUSY by 2018? I'll give you 100 to 1 odds.
16. Dear Tony, 5 sigma is just an equivalent way to say that the probability that 14 or more events occur is 1/3 million (the usual probability associated with 5 sigma).
When the predicted mean number of events is N, the probability that the observed number is M follows a Poisson distribution. The smaller N is, the faster this distribution falls with M.
Here, the expected mean N is some constant below 1 you may calculate if you wish, and if you just calculate the probability that you get 14 or more, you will get 1/3 million.
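For readers who want to see the arithmetic, here is a small Python sketch of the calculation described above. The expected background is not stated in the post, so the mu below is purely illustrative; with an expectation of about 2.5 events, fourteen or more lands near the one-sided 5-sigma threshold of roughly 2.9e-7, and a smaller expectation pushes the tail probability lower still:

```python
from scipy.stats import norm, poisson

mu = 2.5       # hypothetical Standard Model expectation (illustrative only)
observed = 14

# P(14 or more events | Poisson mean mu): the tail of the distribution
p = poisson.sf(observed - 1, mu)

# Translate the tail probability into one-sided Gaussian "sigmas"
sigma = norm.isf(p)
print(f"p = {p:.2e}  ->  {sigma:.1f} sigma")
```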
17. I would find this offer absolutely irresistible if I would believe that you will actually be able and honest to fulfill the commitments.
18. I'm 100% honest. I'll give you full contact information. In fact, because I know you're a moral person -- in this regard -- I'll prepay you. Now, we're only scaling $1. So, I'll send you$100 dollars immediately. When 2018 is up, if there's no SUSY you will have to give me back $101. Okay, here's the catch: When 2018 is up, if there's no SUSY, I require that you write a single blog post which reads exactly as follows. Begin Blog Post: "On this day, January 1, 2018, I Lubos Motl hereby confess that I am an arrogant stinky asshole. For the last 40 years, I and other clueless pseudo-physicists have dedicated their lives to a theory which has not provided a single relevant insight into nature. As a punishment for my intellectual disability, I hereby pay Justin X exactly$101.00. It is hoped that this monetary transaction teaches me a lesson sufficient to increase my mental ability to the level of understanding that string theory and supersymmetry are the deepest fantasies of my mind, but nothing to do with physical reality."
End Blog Post
Deal?
19. Dear Justin, up to the $100/$101, it was OK and I would immediately accept it even though I had a $10,000 bet - like with Adam Falkowski - in mind. However, you would have to add a$10,000 fee for your extra obscene idiotic commercial.
String theory and supersymmetry belong among the most valuable parts of knowledge that the mankind has accumulated.
And I will surely be no *expletive* if SUSY is not found at the LHC. It is not known whether it will, and it was never known. And my estimates of the odds that the LHC will find SUSY were always - and remain - close to 50%, start e.g. with this 2007 blog post.
http://motls.blogspot.com/2007/04/probabilities-of-various-theories.html?m=1
Your clearly stated agenda - attempting to post obscene lies whose value is $1, literally - shows that you're not the target audience of this blog - which is mostly people 3-8 orders of magnitude more valuable than you - so I blacklisted you. Feel free to comment on websites dedicated to readers of your value. I can enumerate quite a few.
https://stats.stackexchange.com/questions/389624/how-to-call-this-frequentist-interval-estimate-that-is-neither-a-prediction-inte
# How to call this frequentist interval estimate that is neither a prediction interval nor a confidence interval
This question is inspired by Confidence Interval on a random quantity?. That question introduces an interesting concept for a type of interval that is neither a prediction nor a confidence interval (possibly one could see it as a tolerance interval although I believe it is neither that).
### A frequentist interval estimate
In short: For pairs of (possibly multidimensional) variables $$x_i,y_i$$, which are both distributed according to a distribution parameterized by $$a$$, and where $$x_i|a \not\!\perp\!\!\!\perp y_i|a$$, we wish to perform interval estimation for the value of $$x_i$$ as a function of $$y_i$$, where $$a$$ is unknown.
Given the following:
• Let $$X,Y$$ be random variables that are paired.
• The random variables $$X$$ and $$Y$$ follow distribution functions that are parameterized by $$a$$: $$f_{Y|a}(y|a) \equiv g_Y(y,a)$$ $$f_{X|a}(x|a) \equiv g_X(x,a)$$
• There is a known relationship between $$X$$ and $$Y$$ and $$a$$, that defines a conditional distribution for $$X$$ $$f_{X|y,a}(x|y,a) \equiv h(x,y,a)$$
• There is a sample of measured values $$y_i$$
We wish to compute:
for each $$x_i$$ a one-sided interval bound $$c(y_i,\alpha)$$ such that: $$\forall a : P(X < c(Y,\alpha) \mid a) = \alpha$$ or less strong $$\sup_a \lbrace P(X < c(Y,\alpha) \mid a) \rbrace = \alpha$$
That is, probability in a frequentist sense. If we would have a large sample with pairs $$x_i,y_i$$ (where we only measure $$y_i$$ and do not know $$a$$) then the frequency/fraction of 'failures' of the interval, $$x_i < c(y_i,\alpha)$$, should be around $$\alpha$$ independent from the true value of $$a$$ (or the smallest upper bound is $$\alpha$$).
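To make the requirement tangible, here is a toy Monte Carlo sketch. It uses the simplest model available, $$X|a \sim N(a,1)$$ and $$Y|a \sim N(a,1)$$ drawn independently given $$a$$ (so it simplifies away the dependence assumed above), but it demonstrates the defining property: the bound is built from $$y_i$$ alone, yet the failure frequency stays at $$\alpha$$ whatever the true $$a$$ is, because $$X-Y$$ is pivotal:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
alpha = 0.05
n = 200_000

for a in (-3.0, 0.0, 7.0):            # the guarantee must hold for every a
    x = rng.normal(a, 1.0, n)         # X | a ~ N(a, 1)
    y = rng.normal(a, 1.0, n)         # Y | a ~ N(a, 1)
    # X - Y | a ~ N(0, 2) regardless of a, so this bound is pivotal:
    c = y + np.sqrt(2.0) * norm.ppf(alpha)
    print(a, np.mean(x < c))          # prints roughly 0.05 for every a
```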
### How do/should we call that sort of interval?
This is not a confidence interval, because the estimate is for $$X$$, which is not a (fixed) population parameter, but a random variable.
This is neither a prediction interval, because $$c(y_i,\alpha)$$ is only a region for the $$x_i$$ that is paired with $$y_i$$ and it is not a region for future values of $$X$$.
What is it?
### Example case problems
• (this one was mentioned by shabbychef in the comments and relates to the before mentioned question)
You observe returns from $$p$$ stocks in vector $$\vec{y}_i$$. Then from a sample of $$n$$ such observations, you form the Markowitz Portfolio, based on the sample mean and covariance. Then you wish to estimate the Sharpe Ratio of that sample Markowitz Portfolio.
• Say I have a batch of films for which I want to predict the strength $$X$$ of each film. Let the strength be a function of two parameters, say film thickness $$Y$$ and film density $$a$$.
Say I can not measure $$X$$ directly (would damage the film), and I do not know $$a$$ for every film, nor do I wish to measure it (say it is a costly measurement). I can, however, measure $$Y$$ for each film and I know that $$Y$$ is distributed according to some pdf that is parameterized by $$a$$.
So now the idea is to use measurements of film thickness $$Y$$, which carries information of $$a$$ to compute some confidence/prediction/tolerance/whatever interval for $$X$$ which I know depends on $$Y$$ and $$a$$. I want this interval to fail only $$\alpha$$ percent of the time.
• I think it's not useful to make the stipulation in the fourth (final) bullet, due to the dependence on the unknown parameter $a.$ You need to consider either the supremum or the infimum of the left hand side over the set of posited distributions of $Y,$ depending on your objective. – whuber Jan 28 at 22:37
• I agree. That is what I did in my answer here. Beyond that one may wonder whether there ain't better approaches for the problem in practice (but that is beyond the point of the question which is about the principle). – Martijn Weterings Jan 28 at 22:48
• Another example would be: you observe returns from $p$ stocks in vector $\vec{y_i}$. Then from a sample of $n$ such observations, you form the Markowitz Portfolio, based on the sample mean and covariance. Then you wish to estimate the Sharpe Ratio of that sample Markowitz Portfolio. – shabbychef Jan 29 at 5:42
We could describe the distribution of $$Y$$, conditional on $$X$$ and $$a$$, as a distribution parameterized by $$X$$ and $$a$$:
$$f_{Y|x,a}(y,x,a) = \frac{f_{X|y,a}(x,y,a)f_{Y,a}(y,a)}{f_{X,a}(x,a)}$$
In this view the random variable $$X$$ is a parameter in the (conditional) distribution of $$Y$$, and we could see the interval estimation of $$X$$ as a confidence interval for the parameter $$X$$.
Complications are that the estimate of $$X$$ is dependent on the value of the parameter $$a$$, which acts as a nuisance parameter, and in addition $$X$$ itself is distributed according to a distribution parameterized by $$a$$. So one may not tackle the interval estimation as a 'regular' confidence interval estimation.
http://jonathanswilson.com/site/8rew3aw.php?tag=how-to-calculate-emf-of-a-cell-4fb90b
# How to calculate the EMF of a cell
2. (b) Weak Electrolytes: The electrolytes which are not completely dissociated into ions in solution are called weak electrolytes. The two ends of the U-tube are then plugged with cotton wool to minimise diffusion. See all questions in Calculating Energy in Electrochemical Processes. Calculate EMF using the formula: ε = V + Ir Here (V) means the voltage of the cell, (I) means the current in the circuit and (r) means the internal resistance of the cell. By taking the oxidation potentials of both electrodes. A voltaic cell utilizes the following reaction: 2Fe^3+ + H2 --> 2Fe^2+ + 2H+ What is the emf for this cell when [Fe^3+]=2.00M, Pressure of H2=0.55 atm, [Fe^2+]=1.2*10^-2M and the pH for both compartments is 4.80? Copyright Notice © 2020 Greycells18 Media Limited and its licensors. The Daniell cell was invented by a British chemist, John Frederic Daniell. emf of the cell = Potential of the half cell on the right hand side (Cathode) - Potential of the half cell on the left hand side (Anode). This potential difference is called the electrode potential. Version Control For Salesforce — Branching Strategy. Redox reactions with a positive E 0 cell value are galvanic. Both are separated by vertical line or semicolon. The cell potential or EMF of the electrochemical cell can be calculated by taking the values of electrode potentials of the two half – cells. I finally got it! Students acquire the skill to measure the EMF of a cell by viewing animation & simulator. One of the half-reactions must be reversed to yield an oxidation.Reverse the half-reaction that will yield the highest (positive) net emf for the cell. What is the emf for this cell when [Fe^3+]=2.00M, Pressure of H2=0.55 atm, [Fe^2+]=1.2*10^-2M and the pH for both compartments is 4.80? It is named after the German physical chemist Walther Nernst. What reactions are happening, are the cells compartmentalized and what exactly are the values given in brackets in the question? We would normally expect an AA cell to have an EMF of about 1.5 V and an internal resistance of about 1 Ω. The inert electrolyte is neither involved in any chemical change, nor does it react with the solutions in the two half cells. Who has a mixed origin in this passage, the town or its mayor? It is also called Voltaic cell, after an Italian physicist, Alessandro Volta. A voltmeter and variable resistor To find the EMF and internal resistance of a cell, the following circuit is set up. What is the relation between degree of ionisation and dilution of weak electrolytes? How do you calculate electrochemical cell potential? So since my setup looks right, having the wrong units is the only thing I can think of. Note:- I have converted ln into log. EMF = 1.415 V Internal resistance = 2.10 Ω. (b) Predict the products of electrolysis in the following: A solution of H2SO4 with platinum electrodes. The zinc ions pass into the solution. What is an electrochemical cell that generates electrical energy? Hence, I got $E_{\text{cell}}=\pu{0.357V}$. Anode is written on the left hand side and cathode on the right hand side. The combination of chemicals and the makeup of the terminals in a battery determine its emf. A contradiction regarding the reaction coefficient expression in the Nernst equation, Determination of solubility equilibrium using galvanic cell reactions. Want a call from us give your mobile number below, For any content/service related issues please contact on this number, Mg(s) |Mg2+ (0.1 M) ||Cu2+ (1 10-3 M)| Cu(s). 
Reader Q&A on the Fe^3+/H2 question: "Hence, I got E_cell = 0.357 V." ... "hmm, mind sharing with me how you did it?" ... "Two electrons are transferred, so n = 2: your RT/nF should be RT/2F (currently I see you have 1). Try that; if it is still wrong, I'll help check your ln Q expression, but I'm pretty sure that is the main problem. Note: I have converted ln into log."

A separate worked example of combining standard potentials. Step 3: add the two E° values together to find the total cell EMF: E°cell = E°reduction + E°oxidation = 0.0000 V + 2.372 V = +2.372 V. Step 4: determine if the reaction is galvanic; since E°cell is positive, it is.

The difference between the EMF (ε) and the terminal voltage (V) of a cell/battery can be calculated as ε - V = Ir, where I is the total current being drawn from the cell/battery and r is the internal resistance of the cell/battery. One worked answer of this type: EMF = 1.415 V, internal resistance = 2.10 Ω. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7012185454368591, "perplexity": 1351.2520718357453}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154085.58/warc/CC-MAIN-20210731074335-20210731104335-00647.warc.gz"}
https://wiseodd.github.io/techblog/2016/10/13/residual-net/ | # Residual Net
September 2015, at the ImageNet Large Scale Visual Recognition Challenge's (ILSVRC) winners announcement, there was this one net by MSRA that dominated it all: Residual Net (ResNet) (He et al., 2015). The ensemble of ResNets crushed the classification task, almost halving the error rate of the 2014 winner.
Aside from winning the ILSVRC 2015 classification task, ResNet also won the detection and localization challenges of that competition. Additionally, it won the MSCOCO detection and segmentation challenges. Quite a feat!
So, what makes ResNet so good? What’s the difference compared to the previous convnet models?
## ResNet: the intuition behind it
The authors of ResNet observed that, no matter how deep a network is, it should not be any worse than a shallower one. That's because if we argue that a neural net can approximate any complicated function, then it can also learn the identity function, i.e. input = output, effectively skipping the learning progress on some layers. But in the real world this is not the case, because of the vanishing gradient and curse of dimensionality problems.
Hence, it might be useful to explicitly force the network to learn an identity mapping, by learning the residual of input and output of some layers (or subnetworks). Suppose the input of the subnetwork is $x$, and the true output is $H(x)$. The residual is the difference between them: $F(x) = H(x) - x$. As we are interested in finding the true, underlying output of the subnetwork, we then rearrange that equation into $H(x) = F(x) + x$.
So that’s the difference between ResNet and traditional neural nets. Where traditional neural nets will learn $H(x)$ directly, ResNet instead models the layers to learn the residual of input and output of subnetworks. This will give the network an option to just skip subnetworks by making $F(x) = 0$, so that $H(x) = x$. In other words, the output of a particular subnetwork is just the output of the last subnetwork.
During backpropagation, learning the residual gives us a nice property. Because of the formulation, the network can choose to ignore the gradient of some subnetworks, and just forward the gradient from higher layers to lower layers without any modification. As an extreme example, this means that ResNet could forward the gradient from the last layer, e.g. layer 151, directly to the first layer. This gives ResNet an additional nice-to-have option, rather than strictly doing computation in all layers.
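A tiny sketch (again mine) of why: the shortcut contributes a constant 1 to the gradient, independent of whatever the residual branch computes.

```python
import torch

x = torch.tensor([2.0], requires_grad=True)
F = x ** 2        # stand-in for the residual branch
H = F + x         # identity shortcut
H.backward()
print(x.grad)     # dH/dx = dF/dx + 1 = 2*2 + 1 = 5; the "+1" is the shortcut path
```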
## ResNet: implementation detail
He et al. experimented with a 152-layer-deep ResNet in their paper. But due to our (my) monitor budget, we will look at the 34-layer version instead. Furthermore, it's easier to understand with fewer layers, isn't it?
At the first layer, ResNet uses a 7x7 convolution with stride 2 to downsample the input by a factor of 2, similar to a pooling layer. It is followed by three identity blocks before downsampling by 2 again; the downsampling layer is also a convolution layer, but without the identity connection. It continues like that for several stages. The last layer is average pooling: the network produces 1000 feature maps (for the ImageNet data) and averages each one, so the result is a 1000-dimensional vector that is fed directly into the softmax layer, making the network fully convolutional.
In the paper, He et al. use a bottleneck architecture for each residual block. This means the residual block consists of 3 layers in this order: 1x1 convolution - 3x3 convolution - 1x1 convolution. The first and last convolutions are the bottleneck. It's mostly a practical consideration: the first 1x1 convolution reduces the dimensionality and the last 1x1 convolution restores it. With bottleneck blocks, the same network now becomes 50 layers.
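A sketch of that bottleneck block (mine; batch norm omitted for brevity, and the 1x1 projection on the shortcut is my assumption for when input and output widths differ):

```python
import torch.nn as nn

class BottleneckBlock(nn.Module):
    """1x1 (reduce) -> 3x3 -> 1x1 (restore), plus the identity shortcut."""
    def __init__(self, in_channels, planes, expansion=4):
        super().__init__()
        out_channels = planes * expansion
        self.reduce = nn.Conv2d(in_channels, planes, kernel_size=1)    # bottleneck in
        self.conv = nn.Conv2d(planes, planes, kernel_size=3, padding=1)
        self.restore = nn.Conv2d(planes, out_channels, kernel_size=1)  # bottleneck out
        self.relu = nn.ReLU(inplace=True)
        self.project = (nn.Conv2d(in_channels, out_channels, kernel_size=1)
                        if in_channels != out_channels else nn.Identity())

    def forward(self, x):
        f = self.relu(self.reduce(x))
        f = self.relu(self.conv(f))
        f = self.restore(f)
        return self.relu(f + self.project(x))   # shortcut, projected if widths differ
```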
Notice that in the 50-layer and deeper ResNets, each block now has two 1x1 convolution layers. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8606759309768677, "perplexity": 1113.6587846833988}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738913.60/warc/CC-MAIN-20200812171125-20200812201125-00311.warc.gz"}
https://learnbps.bismarckschools.org/mod/book/view.php?id=83229&chapterid=27606 | # [S] Statistics and Probability
Students’ prior knowledge includes:
• Students investigate patterns of association in bivariate data (grade 8)
• Students use random sampling to draw inferences about a population (grade 7)
• Students investigate chance processes and develop, use, and evaluate probability models (grade 7)
• Students develop an understanding of statistical variability (grade 6)
• Students summarize and describe distributions (grade 6)
##### Interpreting Categorical and Quantitative Data
• Summarize, represent, and interpret data on a single count or measurement variable.
• Summarize, represent, and interpret data on two categorical and quantitative variables.
• Interpret linear models.
##### Making Inferences and Justifying Conclusions
• Understand and evaluate random processes underlying statistical experiments.
• Make inferences and justify conclusions from sample surveys, experiments and observational studies.
##### Conditional Probability and the Rules of Probability
• Understand independence and conditional probability and use them to interpret data.
• Use the rules of probability to compute probabilities of compound events in a uniform probability model.
##### Using Probability to Make Decisions
• Calculate expected values and use them to solve problems.
• Use probability to evaluate outcomes of decisions.
## MAT-HS.S [S] Overview: Statistics and Probability
### MAT-HS.S-ID Domain: [S-ID] Interpreting Categorical and Quantitative Data
• MAT-HS.S-ID.01 Represent data with plots on the real number line
• MAT-HS.S-ID.02 Use statistics to compare center and spread of two or more data sets
• MAT-HS.S-ID.03 Interpret differences in shape/center/spread of data, accounting for outliers
• MAT-HS.S-ID.04 Use the mean and standard deviation of a data set for normal distribution
• MAT-HS.S-ID.05 Summarize categorical data for two categories in two-way frequency tables
• MAT-HS.S-ID.06 Represent data on two quantitative variables on a scatter plot
• S-ID.06.a Fit a function to the data to solve problems
• S-ID.06.b Informally assess the fit of a function by plotting and analyzing residuals
• S-ID.06.c Fit a linear function for a scatter plot that suggests a linear association
• MAT-HS.S-ID.07 interpret the Slope and the Intercept of a linear model in context of data
• MAT-HS.S-ID.08 Compute and interpret the correlation coefficient of a linear fit
• MAT-HS.S-ID.09 Distinguish between correlation and causation
### MAT-HS.S-IC Domain: [S-IC] Making Inferences and Justifying Conclusions
• MAT-HS.S-IC.01 Understand statistics as a process for making inferences based on random sample
• MAT-HS.S-IC.02 Decide if a model is consistent with results from a data-generating process
• MAT-HS.S-IC.03 Compare the purposes of surveys/experiments/observational studies
• MAT-HS.S-IC.04 Use data from a sample survey to estimate a population mean or proportion
• MAT-HS.S-IC.05 Use data from a randomized experiment to compare two treatments
• MAT-HS.S-IC.06 Evaluate reports based on data
### MAT-HS.S-CP Domain: [S-CP] Conditional Probability and the Rules of Probability
• MAT-HS.S-CP.01 Describe events as subsets of a sample space using characteristics
• MAT-HS.S-CP.02 Using probabilities, understand and show that two events are independent
• MAT-HS.S-CP.03 Understand the conditional probability of A given B as P(A and B)/P(B)
• MAT-HS.S-CP.04 Construct and interpret two-way frequency tables of data
• MAT-HS.S-CP.05 Recognize and explain the concepts of conditional probability and independence
• MAT-HS.S-CP.06 Find the probability of A given B as the fraction of B's outcomes that also belong to A
• MAT-HS.S-CP.07 Apply and interpret the Addition Rule of probability
• MAT-HS.S-CP.08 Apply the general Multiplication Rule in a uniform probability model
• MAT-HS.S-CP.09 Use permutations and combinations to compute probabilities
### MAT-HS.S-MD Domain: [S-MD] Using Probability to Make Decisions
• MAT-HS.S-MD.01 Define a random variable for a quantity of interest by assigning a numerical value to each event in a sample space
• MAT-HS.S-MD.02 Calculate the expected value of a random variable; interpret it as the mean of the probability distribution
• MAT-HS.S-MD.03 Develop a probability distribution for a random variable defined for a sample space in which theoretical probabilities can be calculated
• MAT-HS.S-MD.04 Develop a probability distribution for a random variable defined for a sample space in which probabilities are assigned empirically
• MAT-HS.S-MD.05 Weigh outcomes of a decision by assigning probabilities to payoff values
• S-MD.05.a Find the expected payoff for a game of chance
• S-MD.05.b Evaluate and compare strategies on the basis of expected values
• MAT-HS.S-MD.06 Use probabilities to make fair decisions
• MAT-HS.S-MD.07 Analyze decisions and strategies using probability concepts
### A Sample of the HS Math Statistics and Probability Concepts Your Child Will Be Learning
#### Interpreting Categorical and Quantitative Data
• Summarize, represent, and interpret data on a single count or measurement variable.
• Summarize, represent, and interpret data on two categorical and quantitative variables.
• Interpret linear models.
#### Making Inferences and Justifying Conclusions
• Understand and evaluate random processes underlying statistical experiments.
• Make inferences and justify conclusions from sample surveys, experiments and observational studies.
#### Conditional Probability and the Rules of Probability
• Understand independence and conditional probability and use them to interpret data.
• Use the rules of probability to compute probabilities of compound events in a uniform probability model.
#### Using Probability to Make Decisions
• Calculate expected values and use them to solve problems.
• Use probability to evaluate outcomes of decisions. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8497014045715332, "perplexity": 4075.9343727605437}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500983.76/warc/CC-MAIN-20230208222635-20230209012635-00416.warc.gz"} |
http://math.stackexchange.com/users/17338/sean-gomes?tab=activity | Sean Gomes
- Mar 6: comment on "Piecing together full density subsequences": Perfect, thank you! My attempts were along similar lines, but I missed the trick of working with complements (and the consequently logical choice for our $N_k$).
- Mar 6: accepted an answer to "Piecing together full density subsequences"
- Mar 6: asked "Piecing together full density subsequences"
- Mar 3: awarded Nice Answer
- Jan 8: awarded Tumbleweed
- Jan 1: asked "The heat kernel as the fundamental solution of the heat equation"
- Oct 9: awarded Yearling
- Sep 24: awarded Autobiographer
- Jul 2: awarded Curious
- Apr 15: comment on "Regularity of Dirichlet Eigenvalues on Lipschitz Domain": At least interior $C^2$ and continuous up to boundary. My problem only has a piecewise smooth boundary though (but the corners are not too bad, so the domain is still Lipschitz).
- Apr 15: comment on "Regularity of Dirichlet Eigenvalues on Lipschitz Domain": Thanks for the reference, this book should be quite useful in general. It seems the Dirichlet regularity result in this section assumes at least a $\mathcal{C}^2$ boundary though.
- Apr 15: asked "Regularity of Dirichlet Eigenvalues on Lipschitz Domain"
- Feb 18: comment on "show that $f$ is not integrable on $[0,1]$": It is equal to cos a.e., not sin. And it is discontinuous at every point in the interval, not just the rationals, i.e. it is not Riemann integrable.
- Feb 18: comment on "What is wrong with this equations?": (5-5) and (x-y) are both zero.
- Feb 10: answered "Showing that the square root is monotone"
- Jan 7: comment on "How to prove the inequality: $\frac{(1+x)^2}{2x^2+(1-x)^2}+\frac{(1+y)^2}{2y^2+(1-y)^2}+\frac{(1+z)^2}{2z^2+(1-z)^2}\leq 8$": Thanks. Out of interest, where did the motivation for looking at $(4a+1)(a-1/3)^2$ come from?
- Jan 7: accepted an answer to "How to prove the inequality: $\frac{(1+x)^2}{2x^2+(1-x)^2}+\frac{(1+y)^2}{2y^2+(1-y)^2}+\frac{(1+z)^2}{2z^2+(1-z)^2}\leq 8$"
- Jan 7: revised "How to prove the inequality: $\frac{(1+x)^2}{2x^2+(1-x)^2}+\frac{(1+y)^2}{2y^2+(1-y)^2}+\frac{(1+z)^2}{2z^2+(1-z)^2}\leq 8$" (edited body; edited title)
- Jan 7: asked "How to prove the inequality: $\frac{(1+x)^2}{2x^2+(1-x)^2}+\frac{(1+y)^2}{2y^2+(1-y)^2}+\frac{(1+z)^2}{2z^2+(1-z)^2}\leq 8$"
- Nov 28: revised "restriction of functions of several variables" (added 1 character in body) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9310117959976196, "perplexity": 1646.8185121531715}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207927843.59/warc/CC-MAIN-20150521113207-00129-ip-10-180-206-219.ec2.internal.warc.gz"}
http://www.itninja.com/question/msi-will-not-install-via-gpo-error-1605 | I recently discovered that my company's software cannot be deployed via GPO. I want our software to be available for GPO deployment but need to figure out what is wrong with our MSI. I can install our software silently (/qn) with absolutely no problem at all. When I deploy via GPO, the target system log file reports error 1605. The GPO is per-machine. The MSI is placed on a network share, and I confirmed that the target system can access and install from that share. I have tried using the MSI directly, and I also extracted the contents (msiexec /a) and attempted the install from the extracted MSI as well. Using Orca and viewing the Property table, we have ALLUSERS = 1 and we do NOT have an MSIINSTALLPERUSER entry. The installation GUI does have a couple of prompts requiring user input to continue, one being the license agreement. Because I can install silently with no user interaction, I'm assuming that this isn't the issue. Can anyone suggest what I can try next or what some possible culprits might be? I can't even seem to find documentation on how to build an MSI to ensure GPO compatibility. Any help would be so very much appreciated.
Rating comments in this legacy AppDeploy message board thread won't reorder them,
so that the conversation will remain readable.
1605 is a generic error, the cause could be any one of a hundred.
What does a verbose log tell you? You'll need to use the 'Enable MSI Logging' policy to get a log but that's simple enough. Remember to turn it off when you're done, if you're not using virtual machines to test with (!)
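For reference, the two usual ways to get that verbose log (my examples, not from the thread; the registry value is the machine-policy equivalent of the 'Enable MSI Logging' GPO setting, and it drops randomly named MSI*.log files into the installing account's %TEMP%, which is C:\Windows\Temp for machine-assigned GPO installs):

```
:: One-off, from an elevated command prompt:
msiexec /i package.msi /l*v "%TEMP%\install.log"

:: Machine-wide logging policy:
reg add HKLM\Software\Policies\Microsoft\Windows\Installer /v Logging /t REG_SZ /d voicewarmupx /f
```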
I did do that and have been staring at it but nothing is jumping out at me. Not sure how to attach it here so I'll copy and paste what I think might be relevant? I'm seeing user exit and wondering if it has something to do with the license prompt - perhaps defaulting to cancel? Thank you so very much for any help.
Info 2898. For WixUI_Font_Title textstyle, the system created a 'Tahoma' font, in 0 character set, of 14 pixels height.
Action 15:02:26: CancelDlg. Dialog created
Action ended 15:02:27: WelcomeDlg. Return value 2.
MSI (c) (50:6C) [15:02:27:884]: Doing action: UserExit
Action 15:02:27: UserExit.
Action start 15:02:27: UserExit.
Action 15:02:27: UserExit. Dialog created
Action ended 15:02:30: UserExit. Return value 2.
Action ended 15:02:30: INSTALL. Return value 2.
Property(C): DPI = #96
Property(C): NETFRAMEWORK35 = #1
Property(C): WINAMPFOLDER = C:\Program Files (x86)\Winamp\
Property(C): METRICSALLOWED = 1
Property(C): OLYMPIA = C:\Program Files (x86)\Plantronics\PlantronicsURE\
Property(C): OLYMPIA_DATA = C:\ProgramData\Plantronics\PlantronicsURE\
Property(C): TARGETDIR = C:\
Property(C): SDK_FOLDER = C:\Program Files (x86)\Plantronics\PlantronicsURE\
Property(C): da = C:\Program Files (x86)\Plantronics\PlantronicsURE\da\
Property(C): de = C:\Program Files (x86)\Plantronics\PlantronicsURE\de\
Property(C): enGB = C:\Program Files (x86)\Plantronics\PlantronicsURE\en-GB\
Property(C): es = C:\Program Files (x86)\Plantronics\PlantronicsURE\es\
Property(C): esES = C:\Program Files (x86)\Plantronics\PlantronicsURE\es-ES\
Property(C): esMX = C:\Program Files (x86)\Plantronics\PlantronicsURE\es-MX\
Property(C): fi = C:\Program Files (x86)\Plantronics\PlantronicsURE\fi\
Property(C): frCA = C:\Program Files (x86)\Plantronics\PlantronicsURE\fr-CA\
Property(C): fr = C:\Program Files (x86)\Plantronics\PlantronicsURE\fr\
Property(C): it = C:\Program Files (x86)\Plantronics\PlantronicsURE\it\
Property(C): ja = C:\Program Files (x86)\Plantronics\PlantronicsURE\ja\
Property(C): ko = C:\Program Files (x86)\Plantronics\PlantronicsURE\ko\
Property(C): nl = C:\Program Files (x86)\Plantronics\PlantronicsURE\nl\
Property(C): no = C:\Program Files (x86)\Plantronics\PlantronicsURE\no\
Property(C): pt = C:\Program Files (x86)\Plantronics\PlantronicsURE\pt\
Property(C): ptBR = C:\Program Files (x86)\Plantronics\PlantronicsURE\pt-BR\
Property(C): ptPT = C:\Program Files (x86)\Plantronics\PlantronicsURE\pt-PT\
Property(C): sv = C:\Program Files (x86)\Plantronics\PlantronicsURE\sv\
Property(C): tr = C:\Program Files (x86)\Plantronics\PlantronicsURE\tr\
Property(C): zhCN = C:\Program Files (x86)\Plantronics\PlantronicsURE\zh-CN\
Property(C): zhTW = C:\Program Files (x86)\Plantronics\PlantronicsURE\zh-TW\
Property(C): Plugins = C:\Program Files (x86)\Winamp\Plugins\
Property(C): WixUIRMOption = UseRM
Property(C): ALLUSERS = 1
Property(C): WixUI_Dialog = WixUI_Dialog_Small
Property(C): WixUI_Banner = WixUI_Banner_Small
Property(C): Plantronics = C:\Program Files (x86)\Plantronics\
Property(C): ProgramFilesFolder = C:\Program Files (x86)\
Property(C): CompanyFolder = C:\ProgramData\Plantronics\
Property(C): CommonAppDataFolder = C:\ProgramData\
Property(C): Manufacturer = Plantronics, Inc.
Property(C): ProductCode = {92C1B9C1-367D-4227-95D4-660412AFFD0D}
Property(C): ProductLanguage = 1033
Property(C): ProductName = Plantronics Spokes Software
Property(C): ProductVersion = 2.5.50537.0
Property(C): WIXUI_INSTALLDIR = OLYMPIA
Property(C): ARPPRODUCTICON = Sparta.ico
Property(C): ARPCONTACT = Plantronics, Inc.
Property(C): DefaultUIFont = WixUI_Font_Normal
Property(C): WixUI_Mode = Mondo
Property(C): WixUI_InstallMode = InstallCustom
Property(C): ErrorDialog = ErrorDlg
Property(C): SecureCustomProperties = NETFRAMEWORK35;NEWPRODUCTFOUND;OLDPRODUCTFOUND
Property(C): OLDPRODUCTFOUND = {04F40296-1509-4DD2-92FC-261BDC825D76}
Property(C): FLEXNET.9FC896E6_8DD6_4BFD_A02C_189B1B87F512 = C:\ProgramData\FLEXnet\
Property(C): CONNECT.9FC896E6_8DD6_4BFD_A02C_189B1B87F512 = C:\ProgramData\FLEXnet\Connect\
Property(C): CommonFilesFolder = C:\Program Files (x86)\Common Files\
Property(C): MACROVISION.9FC896E6_8DD6_4BFD_A02C_189B1B87F512 = C:\ProgramData\Macrovision\
Property(C): FLEXNET_CONNECT.9FC896E6_8DD6_4BFD_A02C_189B1B87F512 = C:\ProgramData\Macrovision\FLEXnet Connect\
Property(C): FNC11DIR.9FC896E6_8DD6_4BFD_A02C_189B1B87F512 = C:\ProgramData\Macrovision\FLEXnet Connect\11\
Property(C): FNC61DIR.9FC896E6_8DD6_4BFD_A02C_189B1B87F512 = C:\ProgramData\Macrovision\FLEXnet Connect\6\
Property(C): FNCBINDIR.9FC896E6_8DD6_4BFD_A02C_189B1B87F512 = C:\ProgramData\FLEXnet\Connect\11\
Property(C): INSTALLSHIELD.9FC896E6_8DD6_4BFD_A02C_189B1B87F512 = C:\Program Files (x86)\Common Files\InstallShield\
Property(C): IE5FOUND.9FC896E6_8DD6_4BFD_A02C_189B1B87F512 = C:\Windows\SysWOW64\shdocvw.dll
Property(C): DWUSOWNINGFEATURE.9FC896E6_8DD6_4BFD_A02C_189B1B87F512 = Sparta
Property(C): SecureCustomProperties.9FC896E6_8DD6_4BFD_A02C_189B1B87F512 = IE5FOUND.9FC896E6_8DD6_4BFD_A02C_189B1B87F512
Property(C): SKYPESHAREDDIR.6F88B3BF_1A40_4409_9A67_3EC9E8B8AD33 = C:\Program Files (x86)\Common Files\Skype\
Property(C): CommonFilesFolder.6F88B3BF_1A40_4409_9A67_3EC9E8B8AD33 = C:\Program Files (x86)\Common Files\
Property(C): MsiLogFileLocation = C:\Users\sujdavis\AppData\Local\Temp\MSIc58c6.LOG
Property(C): PackageCode = {DE7AE6AE-9786-4752-8B63-3A8D4C38F79A}
Property(C): ProductState = -1
Property(C): PackagecodeChanging = 1
Property(C): CLIENTUILEVEL = 0
Property(C): CLIENTPROCESSID = 3664
Property(C): VersionDatabase = 300
Property(C): VersionMsi = 5.00
Property(C): VersionNT = 601
Property(C): VersionNT64 = 601
Property(C): WindowsBuild = 7601
Property(C): ServicePackLevel = 1
Property(C): ServicePackLevelMinor = 0
Property(C): MsiNTProductType = 1
Property(C): WindowsFolder = C:\Windows\
Property(C): WindowsVolume = C:\
Property(C): System64Folder = C:\Windows\system32\
Property(C): SystemFolder = C:\Windows\SysWOW64\
Property(C): TempFolder = C:\Users\sujdavis\AppData\Local\Temp\
Property(C): ProgramFiles64Folder = C:\Program Files\
Property(C): CommonFiles64Folder = C:\Program Files\Common Files\
Property(C): AppDataFolder = C:\Users\sujdavis\AppData\Roaming\
Property(C): FavoritesFolder = C:\Users\sujdavis\Favorites\
Property(C): NetHoodFolder = C:\Users\sujdavis\AppData\Roaming\Microsoft\Windows\Network Shortcuts\
Property(C): PersonalFolder = C:\Users\sujdavis\Documents\
Property(C): PrintHoodFolder = C:\Users\sujdavis\AppData\Roaming\Microsoft\Windows\Printer Shortcuts\
Property(C): RecentFolder = C:\Users\sujdavis\AppData\Roaming\Microsoft\Windows\Recent\
Property(C): SendToFolder = C:\Users\sujdavis\AppData\Roaming\Microsoft\Windows\SendTo\
Property(C): TemplateFolder = C:\ProgramData\Microsoft\Windows\Templates\
Property(C): LocalAppDataFolder = C:\Users\sujdavis\AppData\Local\
Property(C): MyPicturesFolder = C:\Users\sujdavis\Pictures\
Property(C): DesktopFolder = C:\Users\Public\Desktop\
Property(C): FontsFolder = C:\Windows\Fonts\
Property(C): GPTSupport = 1
Property(C): MsiAMD64 = 6
Property(C): Msix64 = 6
Property(C): Intel = 6
Property(C): PhysicalMemory = 2048
Property(C): VirtualMemory = 3298
Property(C): LogonUser = sujdavis
Property(C): UserSID = S-1-5-21-2356834242-3937279893-1142474906-1433
Property(C): UserLanguageID = 1033
Property(C): ComputerName = DMOULDS-WIN7X64
Property(C): SystemLanguageID = 1033
Property(C): ScreenX = 1920
Property(C): ScreenY = 1080
Property(C): CaptionHeight = 22
Property(C): BorderTop = 1
Property(C): BorderSide = 1
Property(C): TextHeight = 16
Property(C): ColorBits = 32
Property(C): TTCSupport = 1
Property(C): Time = 15:02:30
Property(C): Date = 1/18/2012
Property(C): MsiNetAssemblySupport = 2.0.50727.4927
Property(C): MsiWin32AssemblySupport = 6.1.7601.17514
Property(C): RedirectedDllSupport = 2
Property(C): MsiRunningElevated = 1
Property(C): Privileged = 1
Property(C): DATABASE = C:\Users\sujdavis\AppData\Local\Temp\1c58c7.msi
Property(C): VersionHandler = 5.00
Property(C): UILevel = 5
Property(C): ACTION = INSTALL
Property(C): EXECUTEACTION = INSTALL
Property(C): ROOTDRIVE = C:\
Property(C): CostingComplete = 1
Property(C): OutOfDiskSpace = 0
Property(C): OutOfNoRbDiskSpace = 0
Property(C): PrimaryVolumeSpaceAvailable = 0
Property(C): PrimaryVolumeSpaceRequired = 0
Property(C): PrimaryVolumeSpaceRemaining = 0
Property(C): INSTALLLEVEL = 1
=== Logging stopped: 1/18/2012 15:02:30 ===
MSI (c) (50:6C) [15:02:30:056]: Note: 1: 1708
MSI (c) (50:6C) [15:02:30:057]: Product: Plantronics Spokes Software -- Installation failed.
MSI (c) (50:6C) [15:02:30:058]: Windows Installer installed the product. Product Name: Plantronics Spokes Software. Product Version: 2.5.50537.0. Product Language: 1033. Manufacturer: Plantronics, Inc.. Installation success or error status: 1602.
MSI (c) (50:6C) [15:02:30:063]: Grabbed execution mutex.
MSI (c) (50:6C) [15:02:30:063]: Cleaning up uninstalled install packages, if any exist
MSI (c) (50:6C) [15:02:30:070]: MainEngineThread is returning 1602
=== Verbose logging stopped: 1/18/2012 15:02:30 ===
ORIGINAL: jendeeda
I did do that and have been staring at it but nothing is jumping out at me. Not sure how to attach it here so I'll copy and paste what I think might be relevant? I'm seeing user exit and wondering if it has something to do with the license prompt - perhaps defaulting to cancel? Thank you so very much for any help.
.
.
.
.
MSI (c) (50:6C) [15:02:30:058]: Windows Installer installed the product. Product Name: Plantronics Spokes Software. Product Version: 2.5.50537.0. Product Language: 1033. Manufacturer: Plantronics, Inc.. Installation success or error status: 1602.
Have a look at this reference guide
http://desktopengineer.com/msierrors
Seems the installation is cancelled (probably due to missing input).
Have a look in the MSI if there is a property for the license and add it in either an MST or to the install string.
/Anders
Hey jendeeda,
Just to rule some things in or out: are you trying to install this on Windows 7 64-bit? Or have you downloaded the 64-bit version when you need the 32-bit version?
"Action ended 15:02:27: WelcomeDlg. Return value 2." It's hard to tell without the lines above this one, but it *looks* like the exit is happening from the 'Welcome' dialog, not the 'License Agreement' dialog.
BTW, I would change the 'LicenseAccepted' property to be a public property, i.e. in all upper-case. That way, it can be passed on the command line. Ditto for any other properties which you are prompting the user for.
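For example, once the property is public it can be preset at install time, something like (hypothetical renamed property, not the package's current name):

```
msiexec /i PlantronicsURE-2-5-50537-0.msi LICENSEACCEPTED=1 /qn
```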
Our MSI works for both 32/64-bit, but I'm installing it on Win7 64-bit; honestly, I've tried it on 32-bit and had the same issue. I can't even get an MSI log anymore. I've deployed the software probably 50 times now and one time it produced a log file, but I don't know what I did differently that time. I have logging enabled in the MSI, in the registry on the target system, and even in the GPO. I added LicenseAccepted = 1 to the Property table and confirmed that the checkbox is selected by default, but it still won't install. I've elevated rights, tried both computer and user GPOs; I feel like I've tried everything. Since I can't even get an MSI log, I am really lost. To be clear, I'm looking in both %temp% and c:\windows\temp. I'll go look at the Welcome dialog and see if there is anything special there. I've beaten the License Agreement to death, I think. Again, any and all help is so much appreciated. If anyone is so inclined (please be inclined!) the MSI is available here: http://www.plantronics.com/us/support/software-downloads/download.jsp?file=PlantronicsURE-2-5-50537-0.msi&KeepThis=true&TB_iframe=true&height=420&width=640
I've had a look at the MSI and the log you provided above. Like Ian said, it looks like it's not the License agreement that's causing the problem.
ORIGINAL: VBScab
It's hard to tell without the lines above this one but it *looks* like the exit is happening from the 'Welcome' dialog, not the 'License Agreement' dialog.
There's even a dialogue before the Welcome, 'PrepareDlg', which is checking for newer versions.
I'd try REMing out the following lines in the MSI script, to make sure that's not bugging the install:

If NEWPRODUCTFOUND then
    Cancel Installation: Newer version already installed (PreventDowngrading)
End

I don't have the ability to test the GPO deploy myself, so I can't really help you there.
Did you try changing the license property to public (all UPPERCASE)?
@Jennifer: "I can't even get an MSI log anymore." Really?!? Are you installing to a clean build machine each time? If you can't produce a log at all, it sounds like your WI installation may be broken. Do you get a log on one of the targets if you install from the command line? "I have loggin enabled in the MSI" That's a neat trick. How have you managed that?!?
The 'PrepareDlg' dialog doesn't appear to do anything - it has no control events and no properties - so I would think you would be better off removing that dialog completely.
Anders' advice is sound but note that a) he is saying to REM (i.e. comment or condition) out those lines, rather than remove them, and b) he forgot to mention that, since the UI sequence doesn't run in a GPO install, you'll need to duplicate that in the InstallExecuteSequence. The quickest way to stop that CA running is to change the line "If NEWPRODUCTFOUND then" to something like "If NEWPRODUCTFOUND AND 0=1 then" in both sequences.
You're right Ian.
I was about to run off for lunch, so I only scribbled down my first thoughts.
Thank you both. Boy, do I feel like a rookie. I don't know how to access the MSI script file that you are referring to. I ran a network install to extract the contents of the MSI and it is not there. I don't see a table with this name in the MSI using Orca. Can you tell me where to find it?
I am not using a clean system every time I test; I would have to have 100 VMs! I am able to get a log file when I install manually, and I'm also able to get a log file when I push out other MSIs via GPO. This MSI is pure evil, I've decided. I read that you can add MsiLogging to the Property table with a value of voicewarmupx to make the package create a log. The other thing I've noticed (I'm not sure if this is significant) is that when I load the MSI into the GPO, the GPO shows the language as being Chinese. I'm not sure why it thinks it's Chinese, but I have been making sure to check the 'ignore language when installing' option within the object. @vbscab - You mention PrepareDlg; did you mean WelcomeDlg? I did remove the WelcomeDlg from the MSI completely and the GPO did not install and no log file was created. Would love to work on the MSI script file you reference today if you are available to answer. Thank you!
Below are the lines that are above the section of log file I had sent previously:
MSI (c) (50:C8) [15:01:32:373]: Note: 1: 2262 2: Extension 3: -2147287038
MSI (c) (50:C8) [15:01:32:374]: Note: 1: 2262 2: Class 3: -2147287038
MSI (c) (50:C8) [15:01:32:374]: Note: 1: 2262 2: Extension 3: -2147287038
MSI (c) (50:C8) [15:01:32:374]: Note: 1: 2262 2: Class 3: -2147287038
MSI (c) (50:C8) [15:01:32:374]: Note: 1: 2262 2: Extension 3: -2147287038
MSI (c) (50:C8) [15:01:32:374]: Note: 1: 2262 2: Class 3: -2147287038
MSI (c) (50:C8) [15:01:32:374]: Note: 1: 2262 2: Extension 3: -2147287038
MSI (c) (50:C8) [15:01:32:374]: Note: 1: 2262 2: Class 3: -2147287038
MSI (c) (50:C8) [15:01:32:374]: Note: 1: 2262 2: Extension 3: -2147287038
MSI (c) (50:C8) [15:01:32:374]: Note: 1: 2262 2: Class 3: -2147287038
MSI (c) (50:C8) [15:01:32:374]: Note: 1: 2262 2: Extension 3: -2147287038
MSI (c) (50:C8) [15:01:32:374]: Note: 1: 2262 2: Class 3: -2147287038
MSI (c) (50:C8) [15:01:32:374]: Note: 1: 2262 2: Extension 3: -2147287038
MSI (c) (50:C8) [15:01:32:375]: Note: 1: 2262 2: Class 3: -2147287038
MSI (c) (50:C8) [15:01:32:375]: Note: 1: 2262 2: Extension 3: -2147287038
MSI (c) (50:C8) [15:01:32:375]: Note: 1: 2262 2: Class 3: -2147287038
MSI (c) (50:C8) [15:01:32:375]: Note: 1: 2262 2: Extension 3: -2147287038
MSI (c) (50:C8) [15:01:32:375]: Note: 1: 2262 2: Class 3: -2147287038
MSI (c) (50:C8) [15:01:32:375]: Note: 1: 2262 2: Extension 3: -2147287038
MSI (c) (50:C8) [15:01:32:375]: Note: 1: 2262 2: Class 3: -2147287038
MSI (c) (50:C8) [15:01:32:375]: Note: 1: 2262 2: Extension 3: -2147287038
MSI (c) (50:C8) [15:01:32:375]: Note: 1: 2262 2: Class 3: -2147287038
MSI (c) (50:C8) [15:01:32:375]: Note: 1: 2262 2: Extension 3: -2147287038
MSI (c) (50:C8) [15:01:32:376]: Note: 1: 2262 2: Class 3: -2147287038
MSI (c) (50:C8) [15:01:32:376]: Note: 1: 2262 2: Extension 3: -2147287038
MSI (c) (50:C8) [15:01:32:376]: PROPERTY CHANGE: Modifying CostingComplete property. Its current value is '0'. Its new value: '1'.
MSI (c) (50:C8) [15:01:32:376]: Note: 1: 2205 2: 3: BindImage
MSI (c) (50:C8) [15:01:32:377]: Note: 1: 2205 2: 3: PublishComponent
MSI (c) (50:C8) [15:01:32:377]: Note: 1: 2205 2: 3: SelfReg
MSI (c) (50:C8) [15:01:32:377]: Note: 1: 2262 2: Extension 3: -2147287038
MSI (c) (50:C8) [15:01:32:377]: Note: 1: 2205 2: 3: Font
MSI (c) (50:C8) [15:01:32:377]: Note: 1: 2262 2: Class 3: -2147287038
MSI (c) (50:C8) [15:01:32:377]: Note: 1: 2727 2:
Info 2898. For WixUI_Font_Title textstyle, the system created a 'Tahoma' font, in 0 character set, of 14 pixels height.
Action 15:02:26: CancelDlg. Dialog created
Action ended 15:02:27: WelcomeDlg. Return value 2.
MSI (c) (50:6C) [15:02:27:884]: Doing action: UserExit
Action 15:02:27: UserExit.
Action start 15:02:27: UserExit.
Action 15:02:27: UserExit. Dialog created
Action ended 15:02:30: UserExit. Return value 2.
Action ended 15:02:30: INSTALL. Return value 2.
Property(C): DPI = #96
"I am not using a clean system every time I test." B I G mistake. You cannot perform a valid test on a "dirty" target. "I would have to have 100 VM's!" Er...no! Part of the genius of VMs is that, once you have finished, you just click and voilà! The machine is in exactly the state it was before you started.
For example, how do you test that your latest package successfully upgrades an older version? I'll bet it's by installing vX, then installing vY, right? And this on an already dirty machine, right? How much easier and quicker it would be to have a stored snapshot of vX? Fire up a VM, load the snapshot and you're ready to test.
You cannot, cannot, CANNOT package without clean machines and VMs are by far and away the best way to have as many as you need.
Moving on, no, I didn't mean the 'Welcome' dialog, I meant 'PrepareDlg'. Delete it. Then get your employer to spring for a copy of InstallShield. I know your MSIs are created by Wix but doing work of any great length in Orca/InstEdit - especially for a newcomer to packaging - is not recommended. While you're waiting for the PO to be authorised and the software delivered, you can read & digest Phil Wilson's The Definitive Guide To Windows Installer. You will struggle with WI unless you know the fundamentals.
EDIT:
Thinking about it, Flexera offers a 30-day trial of IS. Go for that while you wait but definitely read Phil's book first. BTW, forget Amazon: it's available for free. Search the forums here on AppDeploy. Someone posted a link to the free copy.
If you're going to persist with Orca/InstEdit, you will need to
- attend to the Control and ControlEvent tables to remove references to the deleted PrepareDlg dialog.
- find the line with sequence number 26 in the InstallExecuteSequence table and alter the text in the 'Condition' column to read 'NEWPRODUCTFOUND AND 0=1'
- copy that text to the clipboard, then...
- ...paste it into the 'Condition' column for sequence number 26 in the 'InstallUISequence' table.
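If hand-editing in Orca feels error-prone, the two condition edits can also be scripted; a sketch using Python's standard-library msilib (Windows-only), with sequence number 26 taken from the steps above. (It does not cover removing the PrepareDlg dialog; that part still needs doing.)

```python
import msilib

# Open the package in transacted mode so changes are written on Commit().
db = msilib.OpenDatabase(r"PlantronicsURE-2-5-50537-0.msi", msilib.MSIDBOPEN_TRANSACT)
for table in ("InstallExecuteSequence", "InstallUISequence"):
    view = db.OpenView(
        "UPDATE `%s` SET `Condition` = 'NEWPRODUCTFOUND AND 0=1' "
        "WHERE `Sequence` = 26" % table
    )
    view.Execute(None)   # run the Windows Installer SQL statement
    view.Close()
db.Commit()
```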
Yeah, I wouldn't use Orca more than to view or do minor changes.
ORIGINAL: VBScab
EDIT:
Thinking about it, Flexera offers offers a 30-day trial of IS. Go for that while you wait but definitely read Phil's book first. BTW, forget Amazon: it's available for free. Search the forums here on AppDeploy. Someone posted a link to the free copy.
I did download the trial version, but there's really only so much you can do with it.
Also got a phone call from "My personal contact person at Flexera" [8D], so I told him what I wanted to try (snapshot, edit MSI/MST and so on...). He told me those functions weren't really enabled in the trial version.[>:]
But I agree. You can't really be expected to perform major changes to an MSI without the proper tools. It's like plowing a field with a toothpick.
"He told me those functions weren't really enabled in the trial version." PMSL... he will have been a S A L E S person, so he's going to tell you that. All of the trial versions I've ever used have been fully functional, but they do not load at all after 30 days.
Hi everyone
I should have been more clear: I am the product manager for the software. I am not the IT person (with access to VM creation) or the software engineer (the person who actually does the builds). I've been asked to figure out why our MSI can't be installed via group policy. Unfortunately, I do not have access to a lot of what would likely be helpful; I only have the ability to create/deploy GPOs on our test domain, and of course Orca. Our software engineering group is slammed and doesn't have the bandwidth to spend time on the issue, but it's important to me, so they've asked that I take a stab at telling them what is wrong. I apologize if I wasn't clear about my role before, and I can imagine it's frustrating to help truly such a rook! Thank you though for the effort thus far.
I did some more digging into the log file and think that perhaps the "remove file path" might be what it's crashing on?
MSI (c) (50:6C) [15:01:32:183]: skipping installation of assembly component: {E4C397DE-374A-43B9-BFDC-F81424F8C790} since the assembly already exists
Action ended 15:01:32: CostFinalize. Return value 1.
MSI (c) (50:6C) [15:01:32:188]: Skipping action: MaintenanceWelcomeDlg (condition is false)
MSI (c) (50:6C) [15:01:32:188]: Skipping action: ResumeDlg (condition is false)
MSI (c) (50:6C) [15:01:32:188]: Doing action: WelcomeDlg
Action 15:01:32: WelcomeDlg.
Action start 15:01:32: WelcomeDlg.
Action 15:01:32: WelcomeDlg. Dialog created
MSI (c) (50:C8) [15:01:32:299]: Note: 1: 2205 2: 3: _RemoveFilePath
MSI (c) (50:C8) [15:01:32:338]: Note: 1: 2262 2: Class 3: -2147287038
MSI (c) (50:C8) [15:01:32:338]: Note: 1: 2262 2: Extension 3: -2147287038
MSI (c) (50:C8) [15:01:32:338]: Note: 1: 2262 2: Class 3: -2147287038
ORIGINAL: jendeeda
...they've asked that I take a stab at telling them what is wrong.
I can relate to being "slammed" with work.
How about giving them some of the pointers we've come up with here and go from there.
If they have access (and I really hope they do) to the proper tools (Wise/AS/IS/VM...), they would soon be able to modify the installer with the changes above and perhaps even try it out and log the installation on a clean snapshot of a virtual machine. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8680249452590942, "perplexity": 11249.058958494626}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813109.36/warc/CC-MAIN-20180220204917-20180220224917-00046.warc.gz"} |
https://archive-ouverte.unige.ch/unige:11957 | Title
# Oblique poles of $\int_X |f|^{2\lambda} |g|^{2\mu} \square$
Authors
Barlet, Daniel; Maire, Henri Michel
Year 2009
Abstract: Existence of oblique polar lines for the meromorphic extension of the current-valued function $\int |f|^{2\lambda}|g|^{2\mu}\square$ is given under the following hypotheses: $f$ and $g$ are holomorphic function germs in $\mathbb{C}^{n+1}$ such that $g$ is non-singular, the germ $S := \{\mathrm{d}f \wedge \mathrm{d}g = 0\}$ is one-dimensional, and $g|_S$ is proper and finite. The main tools we use are interaction of strata for $f$ (see [B:91]), monodromy of the local system $H^{n-1}(u)$ on $S$ for a given eigenvalue $\exp(-2i\pi u)$ of the monodromy of $f$, and the monodromy of the cover $g|_S$. Two non-trivial examples are completely worked out.
Note Texte publié dans les actes de la conférence "Complex analysis : several complex variables and connections with PDE theory and geometry", Fribourg (Suisse), 2008. Birkhäuser, 2010, p. 1-23
Full text
Article (Preprint) (285 Kb) - Free access
Citation
(ISO format)
BARLET, Daniel, MAIRE, Henri Michel. Oblique poles of $\int_X |f|^{2\lambda} |g|^{2\mu} \square$. 2009. https://archive-ouverte.unige.ch/unige:11957
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9577895998954773, "perplexity": 1574.911734520551}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806509.31/warc/CC-MAIN-20171122065449-20171122085449-00256.warc.gz"}
https://www.physicsforums.com/threads/different-metrics-in-different-dimensions.976101/ | Different metrics in different dimensions
#1
Summary:
Given a space R^n = R_1 × R_2 × R_3 × R_4 × ..., can the metric for the R_1 × R_2 subspace be different from the metric for the R_3 × R_4 subspace?
I'm trying to get a handle on how general a space R^n can be. Part of my motivation is the curled-up dimensions physicists talk about. How does one dimension work differently than another dimension? Can one part of the dimensional structure follow one metric and another part follow a different metric?
I rather think it should be possible. That raises questions about the combinations of subspaces. Can R_1 × R_2 be different (say, taxicab geometry) from R_1 × R_3 (say, Euclidean) as long as R_1 × R_3 is consistent (um, somewhere in between maybe)?
Sorry if this is worded poorly, and if it's in an inappropriate folder. And how does one access the proper notation symbols?
Thanks.
#2 fresh_42 (Mentor)
Summary: Given a space R^n = R_1 × R_2 × R_3 × R_4 × ..., can the metric for the R_1 × R_2 subspace be different from the metric for the R_3 × R_4 subspace?
I'm trying to get a handle on how general a space R^n can be. Part of my motivation is the curled-up dimensions physicists talk about. How does one dimension work differently than another dimension? Can one part of the dimensional structure follow one metric and another part follow a different metric?
Sure. You can build direct products of different topological spaces.
I rather think it should be possible. That raises questions about the combinations of subspaces. Can R_1 × R_2 be different (say, taxicab geometry) from R_1 × R_3 (say, Euclidean) as long as R_1 × R_3 is consistent (um, somewhere in between maybe)?
Sorry if this is worded poorly, and if it's in an inappropriate folder. And how does one access the proper notation symbols?
Thanks.
Phase spaces, which cover all possible states of a system (i.e. describe it completely), are considered in stochastics and physics. This leads to different dimensions in the components and thus different units and scales.
The actual question is not whether it can be defined, but rather what it should be good for, i.e. what do you want to do?
#3 jbriggs444 (Homework Helper, 2019 Award)
Summary: Given a space R^n = R_1 × R_2 × R_3 × R_4 × ..., can the metric for the R_1 × R_2 subspace be different from the metric for the R_3 × R_4 subspace?
One example would be the four-dimensional space-time we live in. You can pick out a two-dimensional space-like slice using x and y coordinates, and you can pick out an orthogonal two-dimensional Minkowski slice using z and t coordinates.
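To make post #2's "direct products" answer concrete, here is a small sketch (mine, not from the thread) of a metric on R^4 = R^2 × R^2 that is taxicab on the first factor and Euclidean on the second; the sum of metrics on the factors is again a metric on the product:

```python
import math

def taxicab(p, q):
    """l1 metric on R^2."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def euclidean(p, q):
    """l2 metric on R^2."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def product_metric(p, q):
    """Metric on R^4: taxicab on (x1, x2), Euclidean on (x3, x4)."""
    return taxicab(p[:2], q[:2]) + euclidean(p[2:], q[2:])

print(product_metric((0, 0, 0, 0), (1, 1, 3, 4)))  # 2 + 5 = 7.0
```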
| {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.88100665807724, "perplexity": 1550.8587122523195}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400201601.26/warc/CC-MAIN-20200921081428-20200921111428-00770.warc.gz"}
http://talkstats.com/threads/t-test-word-problem-help.72767/ | # T Test word problem! HELP
#### mopeck
##### New Member
2. Assume that we drew a random sample of 300 women who had recently graduated from USC. All of these women worked full time in 2018. They earned an average salary of $42,167 (s.d. = $26,413). Based on data from the US Census Bureau, we know that the national average salary for women is $40,675.
a. USC is looking to make the case that women who have graduated from the university make significantly more than the national average. Using an alpha level of .05, is there sufficient evidence to support this contention? Be sure to select the correct critical value for the alternative hypothesis, and then use this evidence to make your conclusion. Show all calculations and hypotheses, as well as writing about the findings in "plain English."
b. What is the area above the test statistic you calculated in 2a? (In other words, what is the p-value of the calculated test statistic?) How does this compare to the alpha level of .05?
c. Now assume that you have a random sample of 78 recent female graduates who studied statistics. Their average salary is $46,507. Test the hypothesis that these women earn more, on average, than the national average. Show all calculations and hypotheses, as well as writing about the findings in "plain English."
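A quick sketch of the part (a) arithmetic (mine, not the poster's), treating it as a one-sample, upper-tailed z test since n = 300 is large:

```python
from math import sqrt
from statistics import NormalDist

xbar, mu0, s, n = 42_167, 40_675, 26_413, 300
z = (xbar - mu0) / (s / sqrt(n))    # test statistic, about 0.98
p = 1 - NormalDist().cdf(z)         # upper-tail p-value, about 0.16
print(f"z = {z:.3f}, p = {p:.3f}")  # p > .05, so fail to reject H0
```

The same template with n = 78 and a sample mean of $46,507 covers part (c).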
Last edited: | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3239954113960266, "perplexity": 1708.2005566328855}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662533972.17/warc/CC-MAIN-20220520160139-20220520190139-00272.warc.gz"} |