https://www.naftaliharris.com/blog/
# Style for Python Multiline If-Statements April 10, 2017 PEP 8 gives a number of acceptable ways of handling multiple line if-statements in Python. But to be honest, most of the styles I've seen--even those that conform with the PEP--seem ugly and hard to read to me. So here's my favorite style: # Hypothesis Tests for Machine Learning March 27, 2017 Statisticians have spent a lot of time attempting to do complicated inference for various machine learning models. In fact, there's an enormously simple and naive way to do this in complete generality: Simply use a paired T-test to compare the performance of two models on your test set! # The Communication Loss Function March 20, 2017 My first ever task writing software professionally was to make some small change to the Kaggle server. I spent a day or so painstakingly moving down the call stack from the API endpoint to figure out which file I needed to make my changes in. I proudly showed my mentor the five line change I'd made. His response: "Oh, that file is actually automatically generated. Your changes are going to get wiped out during the build." # Logistic Regression Isn't Interpretable March 13, 2017 Suppose two events A and B are independent, with the odds of A occurring being 4, and the odds of B being 5. What are the odds of both A and B occurring? I'll give you a hint: it's not 20. # Nontrivial: Exception Handling in Python March 6, 2017 Most code can fail in multiple places and for multiple reasons. Handling these failures seems pretty trivial, something you'd cover in a basic tutorial for your programming language. Actually, I think that doing this well can be surprisingly subtle, and ultimately dances around the control flow constructs of your programming language of choice. Let me illustrate with a simple example in Python, my second-favorite programming language: # Implementing "nonlocal" in Tauthon: Part I February 27, 2017 Tauthon is a fork of Python 2.7 with syntax, builtins, and libraries backported from Python 3. It aspires to be able to run all valid Python 2 and 3 code. In this article, I begin discussing how I was able to backport the "nonlocal" keyword from Python 3. I hope this post is useful for people who are interested in hacking on the CPython interpreter or CPython forks: it sounds hard, and it can be a bit tedious, but it's actually a lot easier than you'd think. # Day-to-Day Operations of Palo Alto February 20, 2017 Palo Alto runs a pretty open city government, with a number of interesting documents available for download on their website. Of particular interest are their annual budgets and annual financial reports, both of which are for the fiscal year ending June 30. These documents are a few hundred pages long each and full of accounting tables, but with a bit of persistence and the help of some friendly city employees I think I was able to figure out much of what is going on. In this post I give an overview of what the city of Palo Alto does on a day-to-day basis (i.e., excluding long-term capital projects). # Continuous Time Lending January 15, 2017 Assume a borrower takes out an installment loan of size $1$ and makes continuous-time payments on it. The installment loan starts at time $0$, ends at time $T$, and has an interest rate of $r$, compounding continuously. We'll let $P(t)$ be the principal owed by the borrower at time $t$, and $I(t)$ be the interest they've paid on the loan so far.
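To make the payment stream concrete, here is a sketch of the standard setup (the constant payment rate $c$ is notation introduced here, not taken from the post): the borrower pays at rate $c$, so $P'(t) = rP(t) - c$ with $P(0) = 1$, and requiring $P(T) = 0$ forces $c = r/(1 - e^{-rT})$. Interest accrues on the outstanding principal, so $I'(t) = rP(t)$, and integrating gives $I(T) = cT - 1$: total payments minus the unit of principal.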
Unlike in discrete time lending, we don't need to keep track of the amount of unpaid interest owed by the borrower; it's always zero since the borrower makes continuous payments. To check your comprehension, $P(0) = 1$, $P(T) = 0$, $I(0) = 0$, and $I(T)$ is the total amount of interest the borrower will pay on their loan (assuming no defaults, fees, or prepayment). # An Easy Chess Puzzle December 29, 2016 I was looking through Markovian, my old chess engine, recently, and came across the first game it won against another chess engine. Stepping through the game, it seems that both engines actually played pretty poorly. Even so, I was proud that Markovian found a long forced mate to end the game. Here's that mate in puzzle form; play black, and find the fastest mate. (If there's more than one, any will be accepted). # Why I'm Making Tauthon November 30, 2016 For the past two months I've been spending half my time on Tauthon. Tauthon is a backwards-compatible Python interpreter that runs Python 2 code and C-extensions exactly as-is, while also allowing Python 2 programmers to use the most exciting new language features from Python 3. These new backported language features include async/await syntax, function annotations and typing support, keyword-only arguments, and new metaclass syntax, among many others. I use Tauthon as my system python now, and haven't had any problems running my old 2.7 code or using packages like IPython, pip, numpy, pandas, requests, and flask. I've been enjoying the new language features--I especially like underscores in numeric literals! # Desperation Motivated Creativity July 25th, 2016 I am not the strongest climber. Some of the people I've climbed with are so strong that they can do a one-arm pull-up, and then--while locking off with one arm--sing the "Head, Shoulders, Knees and Toes" song and use the other arm to do the corresponding dance. This is not in my future anytime soon. # OHMS Lessons Learned July 10th, 2016 Note: I found the following post as an almost complete draft while reading through some of my unpublished posts. I wrote it around October 1st, 2013, at the beginning of what would end up being my last year at Stanford. Later that quarter I learned a lot more lessons from OHMS, including--the hard way, at 12:30AM the night before a homework was due--not to run SQLite over a network file system. Wanting to write up some of those additional lessons probably contributed to my never publishing this until now. I've corrected typos or completed sentences in four or five places to make this post publishable, but the rest of it is exactly as it was in October 2013: # Machine Learning over JSON May 20, 2015 Supervised machine learning is the problem of approximating functions X -> Y from many example (x, y) pairs. Now, the vast majority of supervised learning algorithms assume that X is p-dimensional Euclidean space. As I'll argue, this is a poor model for many problems, and modeling X as a JSON document fits much better. # Visualizing DBSCAN Clustering January 24, 2015 A previous post covered clustering with the k-means algorithm. In this post, we consider a fundamentally different, density-based approach called DBSCAN. In contrast to k-means, which modeled clusters as sets of points near their centers, density-based approaches like DBSCAN model clusters as high-density clumps of points.
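The post itself is built around an interactive demo (the data-set prompt follows below); as a rough non-interactive stand-in, here is a minimal scikit-learn sketch. It is my own illustration, not the post's code, and the eps and min_samples values are arbitrary:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Two dense clumps plus some scattered noise points.
rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal(loc=(0, 0), scale=0.3, size=(100, 2)),
    rng.normal(loc=(3, 3), scale=0.3, size=(100, 2)),
    rng.uniform(low=-2, high=5, size=(20, 2)),
])

# eps is the neighborhood radius; min_samples is how many neighbors a
# point needs in order to count as a "dense" core point.
labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(X)
print(set(labels))  # cluster ids; -1 marks points classified as noise
```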
To begin, choose a data set below: # You Can't Predict Small Counts January 17, 2015 A small restaurant is interested in predicting how many customers will come in on a given night. This is valuable information to know ahead of time, for example, so that the restaurant can figure out whether to ask employees to work extra shifts. Unfortunately, under reasonable conditions no amount of data will permit even the most talented data scientist or statistician to make particularly good predictions. # Half the Decimal Trick January 9, 2015 If something happened 1,234 out of 10,000 times, we'd estimate that the true probability of occurrence is about 0.1234. Of course, we wouldn't expect the true probability to be exactly 0.1234, and to quantify the uncertainty in this estimate statisticians have long computed confidence intervals. But in this particular case, there's a simple eyeballing trick we can use to get approximate error bars: we round the proportion to half the number of decimal places, (0.12|34 becomes 0.12), and add a plus or minus 1 in the least significant digit, (0.12 +/- 0.01). # How to Forge an Email December 22, 2014 Most people don't realize how easy it is to forge an email. Say my brother John Doe uses the email address [email protected]. If I get an email from that address, it's natural to assume that John actually sent it. In fact, it's also remarkably easy for an attacker to have sent it. # T-Tests Aren't Monotonic October 22, 2014 R. A. Fisher and Karl Pearson play a heated round of golf. Being Statisticians, they agree before the round to run a two-sided paired T-test to see if either of them is statistically significantly better. After the first 17 holes, Fisher is ahead by 19 strokes, and openly gloating. On the 18th hole, he sinks a 20-foot putt for birdie, and smirks at Pearson. Pearson then "accidentally" hits his ball into several sand traps, trees, and water hazards, taking 100 strokes on the last hole. # Robust Machine Learning October 5, 2014 Real data often has incorrect values in it. Origins of incorrect data include programmer errors, ("oops, we're double counting!"), surprise API changes, (a function used to return proportions, suddenly it instead returns percents), or poorly scraped data. When working with data, a desirable property of whatever you're doing is that it be robust, or continue to work in the presence of some incorrect values. # Python Subclass Relationships Aren't Transitive August 26, 2014 Subclass relationships are not transitive in Python. That is, if A is a subclass of B, and B is a subclass of C, it is not necessarily true that A is a subclass of C. The reason for this is that with PEP 3119, anyone is allowed to define their own, arbitrary __subclasscheck__ in a metaclass. # Sensitivity of Independence Assumptions August 3, 2014 Recently I was considering an interesting problem: Several people interview a potential job candidate, and each of them scores that candidate numerically on some scale. What's the variation associated with the average score? # Visualizing Lasso Polytope Geometry June 8, 2014 Some recent research about the lasso exploits a beautiful geometric picture: Suppose you fix the design matrix X and the regularization parameter $\lambda$. For a particular value of y, the n-dimensional response variable, you can then solve the lasso problem and examine the signs of the coefficients.
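To make the experiment concrete, here is a small sketch of that procedure (my illustration, using scikit-learn rather than any code from the research being described): fix X and $\lambda$, solve the lasso at a given y, and record the sign pattern of the fitted coefficients.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
X = rng.normal(size=(3, 3))  # fixed design matrix (n = p = 3 here)
lam = 0.1                    # fixed regularization parameter

def sign_pattern(y):
    # scikit-learn minimizes (1/2n)*||y - Xw||^2 + alpha*||w||_1.
    model = Lasso(alpha=lam, fit_intercept=False).fit(X, y)
    return tuple(np.sign(model.coef_).astype(int))

# Nearby responses usually share a sign pattern: each pattern's region
# of y-space is a convex polytope.
y = rng.normal(size=3)
print(sign_pattern(y), sign_pattern(y + 0.01))
```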
Now if you partition n-dimensional space based upon the signs of the coefficients you'd get if you solved the lasso problem at each value of y, then you'll find that each of the resulting regions is a convex polytope. This is kind of a mouthful, so I made the visualization below to illustrate this partitioning. You can drag x1, x2, and x3 to rotate them, and slide the slider to increase or decrease $\lambda$. # College Interview Tips May 15, 2014 The college admissions interview is a valuable component of college applications because it provides admissions officers with a holistic evaluation from a source that has no vested interest in your success. This contrasts with your teacher evaluations, which are holistic but (hopefully) written by people who want you to be admitted, and your standardized test scores, which aren't biased towards your success the way that teacher evaluations are, but which aren't holistic at all. # A College Waitlisting Model May 4, 2014 Suppose a selective college wants $N_0$ students in their freshman class. How many students should they admit, and what's the distribution of the number of students they'll admit off the waitlist? Of course, you could just look for data from previous years, but in the fast-changing world of college admissions, data goes stale quickly. And honestly, I really enjoy simple probability models! # Don't Double Major April 8, 2014 Double majoring in college is a very suboptimal strategy. The reason is simple: It adds a substantial set of constraints to the courses you can take, but in return gives you only a very modest extra credential. # A Statistical Analysis of Climbing February 17, 2014 Recently, 28 of us on the Stanford Climbing Team completed a short survey on our climbing abilities. Although the survey was intended to assess our interest in different clinics, the answers to the survey questions also shed light on some interesting climbing questions, like how bouldering grades compare to top-rope grades, how much harder leading is than top-roping, and what different "climber types" there are. These questions really excited me, so I asked for permission to analyze this data, which the team graciously granted. # How to Solve Problems February 4, 2014 I spend a lot of my time solving problems: I solve well-defined math problems in grad school, open-ended problems in statistical consulting, physical puzzles when I'm rock climbing, architectural problems when I build software systems, and mysteries when I debug them. Here are some strategies that have worked for me in solving the problems in these domains, presented in the order that I usually try them in. My hope is that they prove useful to you in solving your problems: # Visualizing K-Means Clustering January 19, 2014 Suppose you plotted the screen width and height of all the devices accessing this website. You'd probably find that the points form three clumps: one clump with small dimensions, (smartphones), one with moderate dimensions, (tablets), and one with large dimensions, (laptops and desktops). Getting an algorithm to recognize these clumps of points without help is called clustering. To gain insight into how common clustering techniques work (and don't work), I've been making some visualizations that illustrate three fundamentally different approaches. This post, the first in this series of three, covers the k-means algorithm.
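The post itself is interactive (the initialization prompt follows below); for reference, here is a bare-bones NumPy sketch of the Lloyd iteration that k-means runs. This is my own illustration, not the post's code:

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Lloyd's algorithm: alternate assignment and center-update steps."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]  # random-points init
    for _ in range(iters):
        # Assignment step: each point goes to its nearest center.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each center moves to the mean of its assigned points
        # (keeping its old position if it ends up with no points).
        centers = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
    return labels, centers

# Three clumps, like the screen-dimension example above.
rng = np.random.default_rng(1)
pts = np.concatenate([rng.normal(loc, 0.5, size=(50, 2))
                      for loc in ((0, 0), (5, 5), (10, 0))])
labels, centers = kmeans(pts, k=3)
```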
To begin, click an initialization strategy below: # How I Got 2x Speedup with One Line of Code November 14, 2013 If you had asked me whether or not it was possible to get a 2x speedup for my LazySorted project by adding a single line of code, I would have told you "No way, substantial speedups can really only come from algorithm changes." But surprisingly, I was able to do so by adding a single line using the __builtin_prefetch function in GCC and Clang. Here's the story of how adding it got me a 2x speedup. # The LaTeX Numbers October 12, 2013 Let's define the LaTeX numbers to be the set of all real numbers that can be unambiguously expressed with the LaTeX typesetting system. This set of numbers has a few fun properties, not least of which, as we'll see later, is that it doesn't quite exist. # The Ten Best Ideas in Statistics October 4, 2013 I've been studying Statistics for six years now, seriously for the last four years, and as my main focus for the last three. Now that I've finished the core PhD curriculum at Stanford, I've spent some time reflecting on the best ideas I've learned in Probability and Statistics over the years. I've compiled a list of brilliant and beautiful ideas, ones that I'm still impressed with every time I think about them. # The Zero Times Infinity Problem August 24, 2013 There are two ways to keep yourself safe while rock climbing: The first option is to protect yourself carefully with ropes and gear, so that if you fall you won't fall too far or hard. The second option is to make sure not to fall. I think that the option climbers choose reflects their perception of what I like to call the "zero times infinity" problem, in which you multiply a near-zero probability by a near-infinite loss. # Martingale Implications Graph August 19, 2013 Here's another directed graph of statement implications that I used to study for quals. This one is about convergence of stochastic processes with martingales. Like my Markov Chain Implications Graph, each node is a statement about a stochastic process, and each edge is an implication. # Markov Chain Implications Graph July 29, 2013 I've been studying for quals for the last several weeks. Today, I was reviewing basic Markov Chain theory, and decided to understand it by drawing a graph of various statements about Markov Chains. Each statement is a node, and each edge is an implication, (so if A points to B then A implies B). If you're not familiar with the various definitions in the nodes, I'd recommend taking a look at Professor Lalley's very intuitive and readable lecture notes, or at Amir Dembo's comprehensive probability notes, which the edges in the graph below reference for proofs. # Visualizing the James-Stein Estimator May 6, 2013 In the words of one of my professors, "Stein's Paradox may very well be the most significant result in Mathematical Statistics since World War II." The problem is this: You observe $X_1, \ldots, X_n \sim \mathcal{N}_p(\mu, \sigma^2 I_p)$, with $\sigma^2$ known, and wish to estimate the mean vector $\mu \in \mathbb{R}^p$. The obvious thing to do, of course, is to use the sample mean $\bar{X}_n = \frac{1}{n} \sum_{i=1}^n X_i$ as an estimator of $\mu$. Stein's Paradox is the counterintuitive fact that in dimension $p \ge 3$, this estimator is inadmissible under squared error loss. # Memory Locality and Python Objects April 15, 2013 I've been obsessed with sorting over the last few weeks as I write a python C-extension implementing a lazily-sorted list.
Among the many algorithms that this lazily-sorted list implements is quickselect, which finds the kth smallest element in a list in expected linear time. To examine the performance of this implementation, I decided to plot the time it takes to compute the median of a random list divided by the length of the list. Theoretically, since quickselect runs in expected linear time, this plot should be roughly constant. In fact, this is what the plot looks like: # The Hottest Person in the Group April 4, 2013 Suppose you think you're pretty good-looking. In fact, you think you're at about the 90th percentile--you're hotter than 90% of people and not as hot as the other 10%. If I put you with another nine random people, you'd think, at least heuristically, that you're probably the hottest one out of the ten. This is in fact not true. # Goldbach's Conjecture and Coding Length April 2, 2013 Goldbach's conjecture is that every even integer greater than or equal to four can be written as the sum of two prime numbers. (Try it: $4 = 2 + 2$, $6 = 3 + 3$, $8 = 3 + 5$, $10 = 3 + 7$, $12 = 5 + 7$...) It occurred to me that if this conjecture were true, it could be used as a way to encode even integers greater than or equal to four, and that this encoding would need to be no more efficient than the most efficient encoding, which simply enumerates the even integers $n \ge 4$. If it were more efficient, this would constitute a disproof of the conjecture, which is widely believed to be true. # Don't Trust Asymptotics: Part I March 25, 2013 Suppose I give you a sequence of real numbers $x_n$, and tell you that $\lim_n x_n = \infty$. What can you tell me about $x_{100}$? How about $x_{1,000,000}$? # Being the First March 16, 2013 I was at the climbing gym a few weeks ago with a friend of mine. We were just messing around, making up bouldering problems for ourselves to do. One of the problems my friend came up with was a nice, muscular traverse moving through the overhanging wall, with pretty good hands but not much for feet. For whatever reason, our attempts on this problem started getting a lot of attention from the other climbers at the gym, and pretty soon we had a sizeable group watching us and asking about which holds were on. As these other people started trying the problem, I felt that strong climber's desire to bag the first ascent, especially before the others started learning the beta, (tricks for how to do it). # CSS Gotchas January 10, 2013 I'm still learning HTML and CSS. As I debug simple websites I write, (including this one), I've encountered a lot of behavior that seemed very counterintuitive to me. I thought I'd share some of this so that future people (especially myself) can avoid the same mistakes I've made. # How My Chess Engine Works December 26, 2012 I have always had a lot of respect for chess, despite the fact that I'm not very good at it myself. As I learned more about the game, I also heard about the successes of computer chess AIs, in particular the sensational defeat of Garry Kasparov by Deep Blue. This inspired me to write a primitive chess engine in Python in high school, which played rather abysmally. # Finding Isomorphisms Between Finite Groups November 23, 2012 One of the most interesting problems I came across as I was building my Abstract Algebra package was that of finding an isomorphism between two finite groups G and H, represented by their Cayley tables, or proving that G and H aren't isomorphic. Before reading further, give it a little thought--how would you do it?
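One naive answer, as a baseline (my sketch, not the cleverer approach the post goes on to develop): try every bijection between the element sets and check that it respects the Cayley tables. That is $O(n! \cdot n^2)$, which is exactly why the problem gets interesting.

```python
from itertools import permutations

def are_isomorphic(G, H):
    """G, H: n x n Cayley tables over elements 0..n-1, with G[a][b] = a*b."""
    n = len(G)
    if len(H) != n:
        return False
    for phi in permutations(range(n)):
        # phi is an isomorphism iff phi(a*b) = phi(a)*phi(b) for all a, b.
        if all(phi[G[a][b]] == H[phi[a]][phi[b]]
               for a in range(n) for b in range(n)):
            return True
    return False

# Z/4 and the Klein four-group both have order 4 but aren't isomorphic.
Z4 = [[(a + b) % 4 for b in range(4)] for a in range(4)]
V4 = [[a ^ b for b in range(4)] for a in range(4)]
print(are_isomorphic(Z4, Z4), are_isomorphic(Z4, V4))  # True False
```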
# Popping the Hood October 28, 2012 Lots of things appear to be magic: Computers, the Banach-Tarski Paradox, cars, the phenomenal success of companies like Facebook, airplanes, the Internet, the Central Limit Theorem, and the fact that bicycles and spinning tops don't fall over are just a few examples.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47628504037857056, "perplexity": 1202.3820902478187}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187822851.65/warc/CC-MAIN-20171018085500-20171018105500-00526.warc.gz"}
http://www.careersfromscience.org/index.php/category/calcium-sensing-receptor/
# Supplementary Materials

Figure S1: Glial processes within the larval NMJ are actin rich. wandering third instar (W3) larvae with glial processes labeled using 46F::CD8GFP (green), neurons immunolabeled with anti-HRP (red), and the SSR labeled with ShCter-DsRed (blue). The NMJ was kept at 25°C and imaged at time 0 (T0) and 60 minutes later (T60). Glial area at T0 = 11.63 µm². Net change in glial area = +0.52 µm². Scale bar, 15 microns. C–J) Boxed regions from panels A and B were digitally scaled 400%, with the corresponding grayscale panel showing the range of GFP-labeled glial processes. Glial processes associated with boutons (C, D; arrows) and grew independently of either boutons or SSR (C, D; arrowhead). Glial processes appeared to change shape (E, F; arrow) and position with respect to the NMJ (G, H; arrow). Other processes retracted near the synapse (H, I; arrow) or remained associated with the immunolabeled neuron (H, I; arrowhead). (TIF) pone.0037876.s002.tif (3.1M) GUID: 6950AF01-198C-4B5F-B6F3-650404FA4520

Figure S3: Glial processes are found at the NMJ by the 2nd larval instar. Glial processes labeled with CD8GFP in the 2nd larval instar. A–D) 46F-GAL4 driven CD8GFP expression (green) was detected within 2nd instar NMJs fixed and immunolabeled with anti-HRP (red) and anti-Dlg (blue) to label the pre- and post-synaptic regions of the NMJ, respectively. At this stage the glial processes have extended into the synaptic region (arrow). Scale bar is 15 microns. Panels B–D were digitally scaled 400%. E–F) repo-GAL4 driven CD8GFP expression (green) imaged in live and intact 2nd instar larvae through the body wall. The subsynaptic reticulum labeled with ShCter-dsRed (magenta) indicates the location of the NMJ. At this stage the glial processes have extended into the synaptic region (F, arrow) and also show extra-synaptic extensions across the body wall muscle (E, arrow). (TIF) pone.0037876.s003.tif (1.5M) GUID: F06BC9EE-B024-4502-893F-FCE5992417BB

Video S1: 3D rotation over 90° of a live 3rd instar NMJ showing the glial processes labeled with 46F mCD8GFP (green) along the central region of the synaptic bouton region. α-HRP (red) was used to live immunolabel the presynaptic boutons and ShCter-DsRed (blue) labeled the postsynaptic SSR. (MOV) pone.0037876.s004.mov (3.3M) GUID: D0D6C186-42B8-4698-8098-A033F8BB96C4

Video S2: Manipulable rotation around a 90° axis along the live 3rd instar NMJ shown in Figure 3. The glial processes (green) were live labeled using 46F CD8GFP, the neurons using live α-HRP (red) immunolabeling, and the SSR using ShCter-DsRed (blue). (MOV) pone.0037876.s005.mov (2.0M) GUID: 776B6495-30D5-4821-8FF0-AA4CADAAFF11

Video S3: Video panning through the Z-series gathered from a 3rd instar NMJ shown in Figure 3. The glial processes (green) were live labeled using 46F CD8GFP, the neurons using live α-HRP (red) immunolabeling, and the SSR using ShCter-DsRed (blue).
The pan begins in the more superficial image focal planes (nearest the viscera) (Figure 3D) and continues to the site where the motor axons enter between the muscles from the cuticular side (Figure 3G). (MOV) pone.0037876.s006.mov (1.9M) GUID: 678D0C40-CE30-450A-88BE-329C45057962

Video S4: 3D rotation over 90° of the fixed 3rd instar NMJ from Figure 5A. The septate junction region was labeled with Neurexin IV (green) along the motor axons as they branch over the NMJ. α-HRP (red) was used to immunolabel the presynaptic boutons, and the postsynaptic SSR was immunolabeled for Dlg (blue). (MOV) pone.0037876.s007.mov (3.4M) GUID: BAE6648B-375C-419D-88A2-BA0F777128B2

Video S5: Manipulable rotation around a 90° axis along the live 3rd instar NMJ shown in Figure 8B. The glial processes were live labeled with repo CD8GFP (green) and boutons immunolabeled with α-HRP (magenta). (MOV) pone.0037876.s008.mov (1.9M) GUID: 5E8EC7DB-E1E6-4F8E-848F-F106B4510973

Abstract: Glia are integral participants in synaptic physiology, remodeling and maturation from blowflies to humans, yet how glial structure is coordinated with synaptic growth is unknown. To investigate the dynamics of glial development at the Drosophila larval neuromuscular junction (NMJ), we developed a live imaging system to establish the relationship between glia, neuronal boutons, and the muscle subsynaptic reticulum. Using this system we observed processes from two classes of peripheral glia present at the NMJ. Processes from the subperineurial glia formed a blood-nerve barrier around the axon proximal to the first bouton. Processes from the perineurial glia extended beyond the end of the blood-nerve barrier into the NMJ, where they contacted synapses and extended across non-synaptic muscle. Growth of the glial processes was coordinated with NMJ growth and synaptic activity. Increasing synaptic size through elevated temperature or the mutation increased the extent of glial processes at the NMJ, and conversely, blocking synaptic activity and size decreased the presence and.

# Dorsal root ganglion (DRG) neurons cultured in the presence of nerve growth factor

Dorsal root ganglion (DRG) neurons cultured in the presence of nerve growth factor (NGF, 100 ng/ml) often display a spontaneous action potential. ... indicate that chronic NGF treatment of cultured DRG neurons in rats induces a constitutively active cation conductance through TRPV1, which depolarizes the neurons and causes spontaneous action potentials in the absence of any stimuli. Since NGF in the DRG is reported to increase after nerve injury, this NGF-mediated regulation of TRPV1 may be a cause of the pathogenesis of neuropathic pain. (Burchiel, 1984; Devor et al., 1992; Eide, 1998; Liu et al., 2000; Sun et al., 2005) and (Petersen et al., 1996; Study and Kral, 1996; Amir et al., 1999; Devor, 1999; Liu et al., 1999). Such abnormal firing is considered to be a cause of spontaneous pain.
Nerve growth factor (NGF) is regarded as one of the mediators that cause neuropathic pain, because NGF induces hyperalgesia in rats (Lewin et al., 1993; Woolf et al., 1994; Andreev et al., 1995) and because the expression level of NGF in the dorsal root ganglion (DRG) rises after nerve injury (Herzberg et al., 1997; Shen et al., 1999). Therefore, trials using anti-NGF agents to treat neuropathic pain conditions have been conducted (Cattaneo, 2010; Ossipov, 2011; McKelvey et al., 2013). The actions of NGF in the pathogenesis of neuropathic pain are complicated: NGF seems to have effects on both peripheral tissues and the central nervous system (Lewin et al., 1994; Hao et al., 2000). It is also reported that increased NGF in the DRG causes an extension of sympathetic nerves that make synapses onto DRG neurons and transmit excitatory signals by releasing noradrenaline (Zhang and Tan, 2011). On the other hand, it was reported that DRG neurons that were isolated from adult rats and cultured in the presence of NGF generated action potentials (APs) spontaneously (Kitamura et al., 2005); from these neurons, spontaneous APs were recorded in the on-cell configuration without intracellular dialysis with an artificial solution, and spontaneous action currents (named Isp) were recorded even under the voltage-clamped condition in the whole-cell configuration (Kayano et al., 2013). Based on the evidence that Isp was blocked by tetrodotoxin (a blocker of the voltage-gated Na+ channel), it is concluded that Isp reflects spontaneous discharges occurring in loosely voltage-clamped areas of the cell membrane. Chronic treatment of DRG neurons with NGF seemed to activate an intrinsic mechanism, which caused the hyperexcitability, within the membrane of the soma of DRG neurons, because the Isp was also recorded from outside-out patch membranes excised from the soma (Kayano et al., 2013). The essential factors for neurons to generate an AP are (1) a resting membrane potential that is polarized below the threshold potential for the generation of the AP and (2) an ion conductance that drives the membrane potential above the threshold of the AP. We hypothesized that NGF induces some additional ionic conductance, constitutively active in the absence of any stimuli, in cultured DRG neurons, and that this constitutively active conductance makes neurons hyperexcitable. Among the common ion channels that confer such conductance on neurons is a nonselective cation channel belonging to the transient receptor potential (TRP) superfamily. Among these channels, TRP vanilloid 1 (TRPV1) plays very important roles in nociception (Caterina et al., 1997). It is reported that NGF increases the expression level and activity of TRPV1 in trigeminal neurons (Price et al., 2005), DRG neurons (Ji et al., 2002; Stein et al., 2006; Eskander et al., 2015) and heterologous expression systems (Zhang et al., 2005; Stein et al., 2006). Therefore, we examined the role of TRPV1 in the generation of spontaneous APs in NGF-treated cultured DRG neurons of rats in the present study, and found that chronic treatment with NGF induces an additional cation conductance through TRPV1, which causes spontaneous firing.

2. Experimental procedures

2.1. Cell culture and isolation

All animal experiments were performed in accordance with the.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5313873291015625, "perplexity": 19601.585888878548}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250599718.13/warc/CC-MAIN-20200120165335-20200120194335-00526.warc.gz"}
http://requestforlogic.blogspot.co.il/2012/01/structural-focalization-updated.html
## Thursday, January 12, 2012

### Structural focalization updated

I've uploaded to both arXiv and my webpage a significantly revised draft of the paper Structural focalization, which I've spoken about here before. Feedback is welcome!

One of the points I make about the structural focalization technique is that, because it is all so nicely structurally inductive, it can be formalized in Twelf. As part of a separate project, I've now also repeated the whole structural focalization development in Agda! The code is available from GitHub. While a structural focalization proof has some more moving parts than a simple cut-and-identity proof, it also has one significant advantage over every Agda proof of cut admissibility that I'm aware of: it requires no extra structural metrics beyond normal structural induction! (My favorite structural metric is the totally nameless representation, but there are other ways of threading that needle, including, presumably, these "sized types" that everyone seems to talk about.)

In regular, natural-deduction substitution, you can get away without structural metrics by proving the statement that if $$\Gamma \vdash A$$ and $$\Gamma, A, \Gamma' \vdash C$$ then $$\Gamma, \Gamma' \vdash C$$; the extra "slack" given by $$\Gamma'$$ means that you can operate by structural induction on the second given derivation without ever needing to apply weakening or exchange. Most cut-elimination proofs are structured in such a way that you have to apply left commutative and right commutative cuts on both of the given derivations, making this process tricky at best; I've never gotten it to work at all, but you might be able to do something like "if $$\Gamma, \Gamma' \longrightarrow A$$ and $$\Gamma, A, \Gamma'' \longrightarrow C$$ then $$\Gamma, \Gamma', \Gamma'' \longrightarrow C$$." If someone can make this work, let me know!

A focused sequent calculus, on the other hand, has three separate phases of substitution. The first phase is principal substitution, where the type gets smaller and you can do whatever you want to the derivations, including weakening them. The second phase is rightist substitution, which acts much like natural-deduction substitution, and where you can similarly get away with adding "slack" to the second derivation. The third phase is leftist substitution, and you can get by in this phase by adding "slack" to the first derivation: the leftist cases read something like "if $$\Gamma, \Gamma' \longrightarrow A$$ and $$\Gamma, A \longrightarrow C$$ then $$\Gamma, \Gamma' \longrightarrow C$$."

In Structural focalization, I note that the structural focalization technique could be seen as a really freaking complicated way of proving cut and identity for an unfocused sequent calculus. But in Agda, there's a reason you might actually want to do things the "long way": not only do you have something better when you finish (a focalization result), but you get cut and identity without needing an annoying structural metric.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6669917106628418, "perplexity": 946.9242771715539}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886639.11/warc/CC-MAIN-20180116184540-20180116204540-00787.warc.gz"}
http://jira.codehaus.org/browse/JETTY-1492?focusedCommentId=292956&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
Specifying -D option in start.ini puts slashes in file path on Windows when starting

Details
• Type: Bug
• Status: Closed
• Priority: Minor
• Resolution: Duplicate
• Affects Version/s: None
• Fix Version/s: None
• Component/s: None
• Labels:
• Environment: Eclipse Jetty distribution 8.1.1.v20120215, Windows XP SP3
• Number of attachments: 0

Description

Adding the following options in the start.ini:

--exec
-Djava.security.krb5.conf=C:\Program Files\Apache Software Foundation\Tomcat 5.5\krb5.conf

causes the Java JRE to be spawned with the following command line args:

"-Djava.security.krb5.conf=C:\Program\ Files\Apache\ Software\ Foundation\Tomcat\ 5.5\krb5.conf"

(notice the extra escaping backslashes, which probably work fine under Unix but mess things up on Windows). JAAS fails to work with the options specified above. If I move the files to a folder without spaces and change the start.ini, then it works fine, e.g.:

-Djava.security.krb5.conf=C:\jaas\krb5.conf

Activity

Joakim Erdfelt added a comment - Out of curiosity, what happens if you use this syntax in the start.ini?

-Djava.security.krb5.conf=C:/Program Files/Apache Software Foundation/Tomcat 5.5/krb5.conf
-Djava.security.auth.login.config=C:/Program Files/Apache Software Foundation/Tomcat 5.5/cas_jaas.conf

or you quote the start.ini sections?

"-Djava.security.krb5.conf=C:\Program Files\Apache Software Foundation\Tomcat 5.5\krb5.conf"
"-Djava.security.auth.login.config=C:\Program Files\Apache Software Foundation\Tomcat 5.5\cas_jaas.conf"

Jamie Maher added a comment - Hi Joakim, using the forward slash syntax:

-Djava.security.krb5.conf=C:/Program Files/Apache Software Foundation/Tomcat 5.5/krb5.conf

results in this:

"-Djava.security.auth.login.config=C:/Program\ Files/Apache\ Software\ Foundation/Tomcat\ 5.5/krb5.conf"

If I quote the start.ini sections (they were not quoted before), then it excludes them from the spawned Java JRE command line. It's funny, because the -Djetty.home parameter in the spawned Java JRE contains spaces in the command line (although it's not specified in the start.ini):

"-Djetty.home=C:\Program Files\Jetty\jetty-distribution-8.1.1.v20120215"

There must be some magic processing in the start.jar that is Unix-specific that's getting applied when it shouldn't.

Joakim Erdfelt added a comment - Actually, it's not Unix-specific, it's Runtime.exec() specific. The arguments for it must be properly quoted/escaped for it to work. Wish I could flag bugs on jira as 'valid' or 'confirmed'
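Joakim's point generalizes beyond Jetty: exec-style APIs take the command as an argv array, and when each argument is its own array element, no space-escaping is needed at all. A small Python sketch of the same principle (illustrative only; Jetty's start.jar is Java, and the path is the one from this report):

```python
import subprocess

# Each list element reaches the child process as a single argument,
# spaces and all; no backslash escaping is required.
# (The path is the one from the report above.)
subprocess.run([
    "java",
    r"-Djava.security.krb5.conf=C:\Program Files\Apache Software Foundation\Tomcat 5.5\krb5.conf",
    "-jar", "start.jar",
])
```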
Jamie Maher added a comment - Note: jdk1.6.0_17 was used to launch start.jar.

Jan Bartel added a comment - This issue was moved to the Jetty Eclipse Bugzilla: https://bugs.eclipse.org/bugs/show_bug.cgi?id=396564

People
• Assignee: Joakim Erdfelt
• Reporter: Jamie Maher
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8221153020858765, "perplexity": 25863.744300273866}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936461332.16/warc/CC-MAIN-20150226074101-00307-ip-10-28-5-156.ec2.internal.warc.gz"}
http://www.scholarpedia.org/article/Siegel_disks/Linearization
# Siegel disks/Linearization

The linearization problem in complex dimension one dynamical systems

## Statement

Linearizable at a fixed point $$\implies$$ tame

Given a fixed point of a differentiable map, seen as a discrete dynamical system, the linearization problem is the question whether or not the map is locally conjugate to its linear approximation at the fixed point. Since the dynamics of linear maps on finite dimensional real and complex vector spaces is completely understood, the dynamics of a map on a finite dimensional phase space near a linearizable fixed point is tractable.

More precisely the problem is the following: there is a set $$S\ ,$$ the phase space, which can be for instance a subset of $$\mathbb{R}^n$$ or $$\mathbb{C}^n$$ or a manifold, and a map $$f$$ from part of $$S$$ to part of $$S\ ,$$ which represents a discrete dynamical system. We are interested in a fixed point of $$f\ ,$$ call it $$a\ .$$ The differential of $$f$$ at $$a$$ is a linear map, call it $$T\ .$$ In our example, $$T$$ acts respectively on $$\mathbb{R}^n\ ,$$ $$\mathbb{C}^n$$ and the tangent space of $$S$$ at $$a\ .$$ Does there exist a neighborhood $$V$$ of $$a$$ and a homeomorphism $$\phi$$ from $$V$$ to some neighborhood of the origin such that the local conjugacy (see Topological conjugacy) $$T=\phi\circ f\circ \phi^{-1}$$ holds in a (possibly smaller) neighborhood of $$0\ ?$$

### Topologically linearizable $$\iff$$ holomorphically linearizable

It should be noted that for a given fixed point of a given map, the answer to this question may or may not depend on the regularity allowed for the conjugacy. However, in the particular setting of a holomorphic map of a complex dimension 1 manifold (i.e. a Riemann surface), linearizability by a continuous conjugacy turns out to be equivalent to linearizability by a holomorphic conjugacy. Any regularity in between is thus also equivalent.

### The multiplier

If $$f$$ is a holomorphic map and $$a$$ is a fixed point, i.e. $$f(a)=a\ ,$$ then the multiplier is the complex number $$\lambda=f'(a)\ .$$ The multiplier is invariant under conjugacy. Depending on $$\lambda\ ,$$ the fixed point $$a$$ is termed accordingly:

• for $$|\lambda|>1\ ,$$ $$a$$ is repelling
• for $$|\lambda|=1\ ,$$ $$a$$ is indifferent
• for $$0\leq|\lambda|<1\ ,$$ $$a$$ is attracting
• for $$\lambda=0\ ,$$ $$a$$ is superattracting

### The multiplier, on a Riemann surface

• Let S be a complex dimension 1 manifold (a Riemann surface)
• $$f$$ be a holomorphic map from a part of S to a part of S
• $$a$$ be a fixed point of $$f$$
• $$T_aS$$ be the tangent space of S at $$a$$
• $$D_af: T_aS \to T_aS$$ the differential of $$f$$ at $$a$$

Since we are in dimension 1, $$D_af$$ is completely characterized by its unique eigenvalue λ, and is equal to multiplication by λ: $$D_af(v) = \lambda v\ .$$ Identifying $$T_aS$$ with the complex plane $$\mathbb{C}\ ,$$ $$D_af$$ is a similarity of ratio λ. The multiplier is the number λ.

### Linearizability, depending on the multiplier

• If |λ| = 0 (superattracting fixed point), then $$f$$ is not linearizable, unless it is constant in a neighborhood of $$a\ .$$
• If 0 < |λ| < 1 (attracting, not superattracting) or 1 < |λ| (repelling), then $$a$$ is a linearizable fixed point. This is referred to as Koenigs' theorem.
• If |λ| = 1 (indifferent), then it depends. Write λ = exp(i2πθ) for some $$\theta\in\mathbb{R}\ .$$
• If $$\theta\in\mathbb{Q}$$ (parabolic fixed point), then $$f$$ is not linearizable most of the time.
More precisely, it will be linearizable if and only if $$f$$ has an iterate equal to the identity, which is impossible for instance in the case of a rational map of degree at least 2 (this includes polynomials) and for entire maps that are not of the form $$z\mapsto az+b$$.
• If $$\theta\notin\mathbb{Q}$$ (irrationally indifferent), then we get into a much more difficult question.

The latter case is where Siegel disks arise.

## Power series expansions and small divisors

Assume $$f$$ fixes the origin (take a chart where the fixed point is at the origin) and consider the power series expansion $f(z)=\lambda z +\sum_{n=2}^{+\infty} a_n z^n\ .$ The linearization equation consists in finding $\phi(z)=z+\sum_{n=2}^{+\infty} b_n z^n$ such that $$\phi^{-1} \circ f \circ \phi (z) = \lambda z$$ holds near the origin (a problem equivalent to finding $$\psi$$ such that $$\psi \circ f \circ \psi^{-1} (z) = \lambda z$$). In other words, $$f\circ \phi(z) = \phi(\lambda z)\ .$$ By identifying power series expansions, one finds a unique solution defined by the recurrence relation on the coefficients $$b_n$$ of $$\phi\ :$$ $b_1=1$ $b_{n+1}=\frac{P_n(a_2,\ldots,a_{n+1},b_2,\ldots,b_{n})}{\lambda^{n+1}-\lambda}$ where $$P_n$$ is an explicit, yet complicated, multivariate polynomial. Thus for $$\lambda=\exp(i 2\pi\theta)$$ with irrational $$\theta\ ,$$ the conjugating power series $$\phi$$ is always defined as a formal power series. Linearizability of $$f$$ is equivalent to the convergence of this series, i.e. to its convergence radius being positive. Even though the numerator in the recurrence relation giving $$b_{n+1}$$ is much more complicated than the denominator, it is the latter which is the potential source of divergence. The term $$\lambda^{n+1}-\lambda$$ is called a small divisor. Indeed, for some values of n, typically for n=q where $$p/q$$ is a continued fraction rational approximant of $$\theta\ ,$$ the quantity $$\lambda^{n+1}-\lambda$$ is small. Estimating the growth rate of $$b_n$$ is thus a subtle problem. It requires a good understanding of rational approximations of irrationals.

### A reminder on continued fractions

An irrational has a unique continued fraction expansion $$\theta=a_0+\cfrac{1}{a_1+\cfrac{1}{a_2+\ddots}}$$ with $$a_0\in\mathbb{Z}$$ and $$a_n\in\mathbb{N}\ .$$ The continued fraction approximants of $$\theta$$ are the numbers $$\frac{p_n}{q_n}=a_0+\cfrac{1}{\ddots+\cfrac{1}{a_n}}$$ (the notations may vary).
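For instance, $$\sqrt{2}=1+\cfrac{1}{2+\cfrac{1}{2+\ddots}}\ ,$$ so $$a_0=1$$ and $$a_n=2$$ for all $$n\geq 1\ ;$$ the first approximants are $$\frac{1}{1}, \frac{3}{2}, \frac{7}{5}, \frac{17}{12}, \frac{41}{29}\ ,$$ and already $$\left|\sqrt{2}-\frac{17}{12}\right|\approx 0.00245\ .$$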
The best rational approximations of an irrational are given by its continued fraction approximants:

• if $$|\theta-p/q|<\frac{1}{q^2\sqrt{5}}$$ then p/q is an approximant of $$\theta$$
• if $$p/q$$ is an approximant then $$|\theta-p/q|<\frac{1}{q^2}\ .$$

The quantity $$q^2|\theta-p/q|$$ can be thought of as a measure of the quality of the rational approximation of $$\theta$$ by $$p/q\ .$$ The second point can be made more precise: if $$p_n/q_n$$ is the n-th approximant then we have: $$\frac{1}{2q_n q_{n+1}}<|\theta-p_n/q_n|<\frac{1}{q_n q_{n+1}}\ .$$ There is the well-known recurrence relation on the denominators (also satisfied by the numerators $$p_n$$): $$q_{n+1}=a_n q_n + q_{n-1}\ .$$ Therefore if $$a_n$$ is big, then $$q_n^2|\theta-p_n/q_n|\approx \frac{1}{a_n}\ .$$ Good approximations correspond to big values of $$a_n\ .$$ Concerning the small divisors, with $$\lambda=\exp(2i\pi\theta)\ ,$$ the quantity $$|\lambda^{q+1}-\lambda|$$ is comparable to $$q|\theta-p/q|\ ,$$ where p is the integer so that p/q is closest to $$\theta\ .$$ More precisely, there is the following theorem: the smallest value of $$|\lambda^k-\lambda|$$ for k ranging from 2 to $$1+q_n$$ is obtained precisely at $$k=1+q_n\ ,$$ and we have for $$k=1+q_n\ :$$ $$|\lambda^k-\lambda|\approx\frac{2\pi}{q_{n+1}}\ .$$

## History

Linearizability is closely related to stability. Poincaré, in studying the stability of the solar system, had to face similar questions. He thought he could prove stability in the simplified problem he was looking at (1889). He later realized he was wrong, and by correcting this famous mistake opened the field of chaotic behaviour in dynamical systems. Concerning the center problem (linearization of an irrationally indifferent fixed point of a discrete dynamical system in complex dimension 1):

At the International Congress in 1912, E. Kasner conjectured that such a linearization is always possible. Five years later, G. A. Pfeiffer disproved this conjecture by giving a rather complicated description of certain holomorphic functions for which no local linearization is possible. In 1919 Julia claimed to settle the question completely for rational functions of degree two or more by showing that such a linearization is never possible; however, his proof was wrong. H. Cremer put the situation in much clearer perspective in 1927 with a result [...] —John Milnor, Dynamics in one complex variable (second edition, 2000)

Cremer's argument is indeed simple. It uses irrational rotation numbers $$\theta$$ which are well approximated by rationals. A non-linearizable irrationally indifferent fixed point is nowadays called a Cremer point. Siegel was the first to be able to prove, in the 1940s, that linearizability does occur. In fact he showed that if the rotation number is Diophantine, then the fixed point is always linearizable. It then remained to determine the exact set of values of $$\theta$$ for which f is always linearizable. Brjuno and Rüssmann found the exact arithmetic condition, but could only prove that it is sufficient. This condition is now called the Brjuno condition. Yoccoz proved the necessity of the condition, i.e. that for an irrational $$\theta$$ not satisfying the Brjuno condition, there exists at least one non-linearizable example with this rotation number $$\theta\ .$$ He even proved that the degree 2 polynomial $$f(z)=\exp(2i\pi\theta) z + z^2$$ is such an example.
## Results

Here, $$\theta$$ refers to an irrational real number$$:\theta\in\mathbb{R}\setminus\mathbb{Q}$$

Definition: Let $$p_n/q_n$$ be the sequence of continued fraction convergents of $$\theta\ .$$ The number $$\theta$$ is said to satisfy Brjuno's condition (also called the Brjuno-Rüssmann condition) whenever $$\sum_{n=0}^{\infty} \frac{\log q_{n+1}}{q_n} < +\infty\ .$$

There are several other equivalent definitions of Brjuno's condition:

• $$\sum_{k=0}^{\infty} \frac{1}{2^k}\log\left(\sup_{2^k\leq n< 2^{k+1}} \frac{1}{|\lambda^n-1|}\right) < +\infty$$ where $$\lambda=e^{2i\pi\theta}$$
• $$\sum_{n=0}^{\infty} \beta_{n-1} \log \frac{1}{\alpha_n}< +\infty$$ where $$\alpha_0$$ is the fractional part of $$\theta\ ,$$ $$\alpha_{n+1}$$ is the fractional part of $$1/\alpha_n\ ,$$ $$\beta_{-1}=1$$ and $$\beta_n=\alpha_0 \cdots \alpha_n$$

For instance the Diophantine numbers satisfy Brjuno's condition. (An irrational number $$\theta$$ is Diophantine if there exist $$C>0$$ and an exponent $$\delta\geq 2$$ such that for every rational $$p/q\ ,$$ $$\left|\theta-\frac{p}{q}\right| \geq \frac{C}{q^\delta}\ ;$$ i.e. such an irrational cannot be too well approximated by rationals.)

Theorem: let $$\theta$$ be irrational
• If $$\theta$$ satisfies Brjuno's condition, then all fixed points with multiplier $$e^{2i\pi\theta}$$ are linearizable.
• If $$\theta$$ does not, then there exist maps with a non-linearizable fixed point with multiplier $$e^{2i\pi\theta}\ .$$

The following statement specifies the second case:

Theorem: If $$\theta$$ does not satisfy Brjuno's condition, then the fixed point $$z=0$$ of the degree 2 polynomial $$e^{2i\pi\theta}z+z^2$$ is not linearizable.

## References

• Milnor, J. [2000]: Dynamics in one complex variable, second edition
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.972618579864502, "perplexity": 287.0955607226005}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427132827069.83/warc/CC-MAIN-20150323174707-00187-ip-10-168-14-71.ec2.internal.warc.gz"}
https://www.kickzstore.com/en-ca/products/nike-mens-air-force-1-low-acg-light-orewood-brown
# Nike Men's Air Force 1 Low ACG Light Orewood Brown

$157.00 $131.00

ITEM: Nike Men's Air Force 1 Low ACG Light Orewood Brown
COLOR: LIGHT OREWOOD BROWN/PINK-ORANGE
STYLE NUMBER: CD0887-100
CONDITION: Brand New with Box (Deadstock)
RELEASE DATE: 01/11/2020

ALWAYS 100% Authentic. Fast Shipping! 30-Day Returns.
Shipping: USA from $6.95, CANADA from $40.00, INTERNATIONAL from $60.00.

Size chart:

| US Women | US Men | European | UK | Cm |
|---|---|---|---|---|
| 5 | 3 1/2 | 35 1/2 | 2 1/2 | 22.5 |
| 5 1/2 | 4 | 36 | 3 1/2 | 23 |
| 6 | 4 1/2 | 36 1/2 | 4 | 23.5 |
| 6 1/2 | 5 | 36 1/2 | 4 1/2 | 23.5 |
| 7 | 5 1/2 | 38 | 5 | 24 |
| 7 1/2 | 6 | 38 1/2 | 5 1/2 | 24 |
| 8 | 6 1/2 | 39 | 6 | 24.5 |
| 8 1/2 | 7 | 40 | 6 | 25 |
| 9 | 7 1/2 | 40 1/2 | 6 1/2 | 25.5 |
| 9 1/2 | 8 | 41 | 7 | 26 |
| 10 | 8 1/2 | 42 | 7 1/2 | 26.5 |
| 11 | 9 | 42 1/2 | 8 | 27 |
| 12 | 9 1/2 | 43 | 8 1/2 | 27.5 |
| 13 | 10 | 44 | 9 | 28 |
| 14 | 10 1/2 | 44 1/2 | 9 1/2 | 28.5 |
| 15 | 11 | 45 | 10 | 29 |
| | 11 1/2 | 45 1/2 | 10 1/2 | 29.5 |
| | 12 | 46 | 11 | 30 |
| | 12 1/2 | 47 | 11 1/2 | 30.5 |
| | 13 | 47 1/2 | 12 | 31 |
| | 13 1/2 | 48 | 12 1/2 | 31.5 |
| | 14 | 48 1/2 | 13 | 32 |
| | 14 1/2 | 49 | 13 1/2 | 32.5 |
| | 15 | 49 1/2 | 14 | 33 |
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8334663510322571, "perplexity": 2825.952470834331}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662601401.72/warc/CC-MAIN-20220526035036-20220526065036-00753.warc.gz"}
https://t.library2.smu.ca/handle/01/21881/browse?type=subject&value=Neutrons
# Browsing Articles by Subject "Neutrons"

Results:

• (American Physical Society, 2016-06-30) Neutron-rich light nuclei and their reactions play an important role in the creation of chemical elements. Here, data from a Coulomb dissociation experiment on $^{20,21}$N are reported. Relativistic ...
• (American Physical Society, 2017-05-01) Precise helicity-dependent cross sections and the double-polarization observable E were measured for η photoproduction from quasifree protons and neutrons bound in the deuteron. The η→2γ and η→ ...
• (American Physical Society, 2011-02-09) The interaction cross sections of $^{32-35}$Mg at 900A MeV have been measured using the fragment separator at GSI. The deviation from the $r_0 A^{1/3}$ trend is slightly larger for ...
• (Elsevier, 2010) One-neutron knockout reactions of $^{24-28}$Ne in a beryllium target have been studied in the Fragment Separator (FRS) at GSI. The results include inclusive one-neutron knockout cross-sections as well as ...
• (Elsevier, 2009) Results are presented from a one-neutron knockout reaction at relativistic energies on $^{56}$Ti using the GSI FRS as a two-stage magnetic spectrometer and the MINIBALL array for gamma-ray detection. Inclusive and ...
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9613064527511597, "perplexity": 20785.068062154598}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358774.44/warc/CC-MAIN-20211129134323-20211129164323-00385.warc.gz"}
http://physics.oregonstate.edu/~rubin/nacphy/UNIX/latex.html
## § 22: LaTeX for Scientific Documents

The LaTeX package (which is really a macro package composed of primitive TeX commands) is commonly used by scientists and engineers for its high-quality typesetting of even complicated equations. Because it produces high-quality output from highly transportable and compact source code, it is also used by some journals and book publishers for their publications.

In 1993 a reorganized version of LaTeX called LaTeX2e was introduced, to bring various LaTeX extensions under a common umbrella and to add new features. To tell the difference between the two implementations, the first command in a LaTeX document was changed from \documentstyle (in old LaTeX) to \documentclass in LaTeX2e. The new LaTeX2e will accept the old \documentstyle directive, but will be slower.

After creating a LaTeX document, you can convert it to PostScript or PDF for high-quality printing and posting, convert it to HTML for a hyperlinked Web document or, of course, send the source code to your favorite journal for them to process it for publication. We will discuss:

1. Viewing and Printing LaTeX Documents
2. Creating LaTeX Documents
3. Converting LaTeX to HTML for Web Documents
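Not part of the original page, but for readers who have not seen one, here is a minimal LaTeX2e source file illustrating the \documentclass directive discussed above; the document class, package, and equation are illustrative choices only.

```latex
\documentclass[12pt]{article} % LaTeX2e; old LaTeX used \documentstyle
\usepackage{amsmath}          % a common extension package for equations

\begin{document}
A sample displayed equation typeset by LaTeX:
\begin{equation}
  \int_0^\infty e^{-x^2}\,dx = \frac{\sqrt{\pi}}{2}
\end{equation}
\end{document}
```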
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.990994930267334, "perplexity": 3729.6067445602316}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701154221.36/warc/CC-MAIN-20160205193914-00224-ip-10-236-182-209.ec2.internal.warc.gz"}
https://planetmath.org/exampleofconstructionofaschauderbasis
# Example of construction of a Schauder basis

Consider a uniformly continuous function $f:[0,1]\rightarrow\mathbb{R}$. A Schauder basis $\{f_{n}(x)\}_{0}^{\infty}$ of $C[0,1]$ is constructed. For this purpose we set $f_{0}(x)=1$, $f_{1}(x)=x$. Let us consider the sequence of semi-open intervals in $[0,1]$

$$I_{n}=[2^{-k}(2n-2)-1,\;2^{-k}(2n-1)-1),\qquad J_{n}=[2^{-k}(2n-1)-1,\;2^{-k}\,2n-1),$$

where $2^{k-1}<n\leq 2^{k}$, $k\geq 1$. Define now

$$f_{n}(x)=\begin{cases}2^{k}\,[x-(2^{-k}(2n-2)-1)]&\text{if } x\in I_{n},\\ 1-2^{k}\,[x-(2^{-k}(2n-1)-1)]&\text{if } x\in J_{n},\\ 0&\text{otherwise.}\end{cases}$$

Geometrically these functions form a sequence of triangular functions of height one and width $2^{-(k-1)}$, sweeping $[0,1]$. So if $f\in C([0,1])$, it is expressible as a series

$$f(x)\sim\sum_{n=0}^{\infty}c_{n}f_{n}(x),$$

computing the coefficients $c_{n}$ by equating the values of $f(x)$ and the series at the points $x=2^{-k}m$, $m=0,1,\ldots,2^{k}$. The resulting series converges uniformly to $f(x)$ by the imposed premise.
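Not part of the original entry, but the construction is easy to test numerically. The Python sketch below (helper names are mine) evaluates the triangular functions under the index convention $2^{k-1}<n\leq 2^{k}$ used above, computes the coefficients by dyadic interpolation, and checks the partial sum against $f$ at a sample point.

```python
import math

def level_index(n):
    """For n >= 2, the unique k with 2**(k-1) < n <= 2**k."""
    return max(1, math.ceil(math.log2(n)))

def f_n(n, x):
    """Triangular Schauder basis function under the convention above."""
    if n == 0:
        return 1.0
    if n == 1:
        return x
    k = level_index(n)
    left  = 2**-k * (2*n - 2) - 1   # support is [left, right]
    mid   = 2**-k * (2*n - 1) - 1
    right = 2**-k * (2*n) - 1
    if left <= x < mid:
        return 2**k * (x - left)
    if mid <= x < right:
        return 1 - 2**k * (x - mid)
    return 0.0

def schauder_coeffs(f, N):
    """c_0 = f(0), c_1 = f(1) - f(0); for each hat, the midpoint value
    minus the average of the endpoint values (dyadic interpolation)."""
    cs = [f(0.0), f(1.0) - f(0.0)]
    for n in range(2, N):
        k = level_index(n)
        left  = 2**-k * (2*n - 2) - 1
        mid   = 2**-k * (2*n - 1) - 1
        right = 2**-k * (2*n) - 1
        cs.append(f(mid) - (f(left) + f(right)) / 2)
    return cs

f = lambda x: math.sin(math.pi * x)
cs = schauder_coeffs(f, 64)
approx = lambda x: sum(c * f_n(n, x) for n, c in enumerate(cs))
print(f(0.3), approx(0.3))  # the partial sum is close to f(0.3)
```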
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 20, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9605334997177124, "perplexity": 5510.684867656751}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039626288.96/warc/CC-MAIN-20210423011010-20210423041010-00280.warc.gz"}
http://math.stackexchange.com/questions/157256/exhibiting-a-ring-isomorphism-between-a-ring-and-itself
# Exhibiting a ring isomorphism between a ring and itself. I recently proved to myself that if $R$ is a ring, and $R'$ a set in bijection with $R$, say by $f\colon R'\to R$, then one can turn $R'$ into a ring by defining $0'=f^{-1}(0)$, $1'=f^{-1}(1)$, $$r'+s'=f^{-1}(f(r')+f(s')),\qquad r's'=f^{-1}(f(r')f(s')),$$ and then $f$ is a ring isomorphism. Now suppose you put a new ring structure on $R$, say $(R,+,\cdot_u,0,u^{-1})$, where $a\cdot_u b=aub$. I want to use the above result as a shortcut to show $(R,+,\cdot, 0,1)$ is isomorphic to $(R,+,\cdot_u, 0,u^{-1})$ by exhibiting a bijection on $R$ which satisfies the four properties I listed above. I've had trouble thinking of what the map would look like. Does anyone see what the map would be? - The first paragraph is an example of what is known as transport of structure. –  Arturo Magidin Jun 12 '12 at 6:13 Note, however, that the first paragraph is really irrelevant to your second paragraph: two rings $R$ and $S$ are isomorphic if and only if there exists a bijection $f$ such that $f(r+s) = f(r)+f(s)$ and $f(rs) = f(r)f(s)$. Applying the inverse function $f^{-1}$ to both sides of both equations we get your two displayed properties; the first displayed equation already implies that $f(0)=0$; and the fact that $f$ is onto and multiplicative implies that $f(1)$ is necessarily a unity, hence equal to $1$. –  Arturo Magidin Jun 12 '12 at 6:20 Thanks for these comments. I think the word shortcut was bad word choice on my part. –  Linda Cortes Jun 12 '12 at 6:31 (So using this "method" you end up doing more work than simply checking to see if you have a bijective ring homomorphism, which does not require checking $f(0')=0$ and $f(1') = 1$.) –  Arturo Magidin Jun 12 '12 at 6:31 If $u$ is invertible, then $f: R \rightarrow R$, $r \mapsto ru$ is a bijection and satisfies the properties you want. $f : R \rightarrow R$, $r \mapsto ur$ will also work.
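To see the accepted map in action, here is a small Python sketch (written for this post, not from the thread) that checks $f(r)=ru$ in $\mathbb{Z}/10$ with the unit $u=3$: it verifies that $u^{-1}$ is the identity for $\cdot_u$ and that $f$ is an additive, multiplicative bijection from $(R,+,\cdot_u)$ to $(R,+,\cdot)$, i.e. $f(a\cdot_u b)=f(a)f(b)$.

```python
n, u = 10, 3                  # Z/10; u = 3 is a unit (3*7 = 21 = 1 mod 10)
u_inv = pow(u, -1, n)         # multiplicative inverse of u mod n (Python 3.8+)

mul_u = lambda a, b: (a * u * b) % n  # twisted product a ._u b = aub
f = lambda r: (r * u) % n             # candidate isomorphism

# u_inv is the identity element for ._u
assert all(mul_u(a, u_inv) == a == mul_u(u_inv, a) for a in range(n))

# f is additive, multiplicative (f(a ._u b) = f(a) f(b)), and bijective
assert all(f((a + b) % n) == (f(a) + f(b)) % n
           for a in range(n) for b in range(n))
assert all(f(mul_u(a, b)) == (f(a) * f(b)) % n
           for a in range(n) for b in range(n))
assert len({f(r) for r in range(n)}) == n
print("f(r) = ru is an isomorphism from (Z/10, +, ._u) to (Z/10, +, .)")
```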
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9708836078643799, "perplexity": 121.25201507871759}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657124771.92/warc/CC-MAIN-20140914011204-00222-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"}
https://en.m.wikipedia.org/wiki/Loiter_(aeronautics)
# Loiter (aeronautics)

In aeronautics and aviation, loiter is a phase of flight. The phase consists of cruising for a certain amount of time over a small region. In general aviation, the loiter phase occurs usually at the end of the flight plan, normally when the plane is waiting for clearance to land. In military flights, such as aerial reconnaissance or ground-attack missions, the loiter phase is the time the aircraft has over a target. Cruise is the time period the aircraft travels to the target and returns after the loiter.

In astronautics, the loiter phase of spacecraft used for human spaceflight may be as long as six months, as is the case for Soyuz spacecraft which remain docked while expedition crewmembers reside aboard the International Space Station.

## Endurance

The endurance of the aircraft during the loiter phase is calculated using the following (Breguet formula):[1]

$$E=\frac{1}{C}\,\frac{L}{D}\,\ln\left(\frac{W_{i}}{W_{f}}\right)$$

where:

• $E$ is the endurance (dimensions of time)
• $C$ is the specific fuel consumption (dimensions of 1/time)
• $L$ is the total lift force on the aircraft
• $D$ is the total drag force on the aircraft
• $W_{i}$ is the weight of the aircraft at the start of the phase
• $W_{f}$ is the weight of the aircraft at the end of the phase
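Not part of the Wikipedia article, but the formula is easy to put to work. Below is a small Python sketch of the Breguet endurance computation; the numbers (specific fuel consumption, lift-to-drag ratio, fuel fraction) are made-up illustrative values.

```python
import math

def loiter_endurance(sfc_per_hour, lift_to_drag, w_initial, w_final):
    """Breguet endurance E = (1/C) * (L/D) * ln(Wi / Wf), in hours."""
    return (1.0 / sfc_per_hour) * lift_to_drag * math.log(w_initial / w_final)

# Illustrative values only: SFC = 0.5 1/h, L/D = 15, 10% of weight burned as fuel
E = loiter_endurance(sfc_per_hour=0.5, lift_to_drag=15.0,
                     w_initial=10000.0, w_final=9000.0)
print(f"Loiter endurance: {E:.2f} hours")  # about 3.16 hours
```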
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 7, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6687123775482178, "perplexity": 2577.5093913766905}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574265.76/warc/CC-MAIN-20190921043014-20190921065014-00526.warc.gz"}
http://www.euro-math-soc.eu/node/2840
3 PhD positions

Organization: Institute of Mathematics, University of Klagenfurt, Austria

Email: barbara.kaltenbacher@aau.at

Job Description: The particular topics are

• Parameter Identification in Differential Equations
university assistant; duration: 4 years, starting date: October 2012
http://www.uni-klu.ac.at/career/inhalt/269_873.htm
applications to: http://www.aau.at/obf (key: 599/12)
• Mathematics of Nonlinear Acoustics: Analysis
project P24970 of the Austrian Science Fund (FWF); duration: 3 years, starting date: October 2012
applications to: barbara.kaltenbacher@aau.at
• Mathematics of Nonlinear Acoustics: Numerics and Optimization
project P24970 of the Austrian Science Fund (FWF); duration: 3 years, starting date: October 2012
applications to: barbara.kaltenbacher@aau.at

We are looking for candidates with a master's degree in Mathematics, having a strong background in

• Mathematical Optimization and/or Inverse Problems
• Analysis and Numerics of Partial Differential Equations
• Functional Analysis

The Alpen-Adria-Universität Klagenfurt is an equal opportunity employer and particularly welcomes applications from women and from minorities. In the case of equal qualification, female candidates will be given preference.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38421761989593506, "perplexity": 8878.201029441274}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394010795590/warc/CC-MAIN-20140305091315-00052-ip-10-183-142-35.ec2.internal.warc.gz"}
http://www.acmerblog.com/hdu-1915-arne-saknussemm-2957.html
2013 12-27

# Arne Saknussemm

Following the account of Jules Verne, a scrambled message written by the middle age alchemist Arne Saknussemm, and deciphered by professor Lidenbrock, started the incredible travel to the center of the Earth. The scrambling procedure used by Arne is like the procedure given below.

1. Take a non-empty message M that contains letters from the English alphabet, digits, commas, dots, quotes (i.e. '), spaces and line breaks, and whose last character is different than space. For example, consider the following message whose translation reads "In Sneffels's crater descend brave traveler, and touch the center of the Earth":

In Sneffels craterem descende audas viator,
et terrestre centrum attinges.

2. Choose an integral number 0<K≤length(M) and add trailing spaces to M such that the length of the resulting message, say M', is the least multiple of K. For K=19 and the message above, where length(M)=74 (including the 8 spaces and the line break that M contains), two trailing spaces are added, yielding the message M' with length(M')=76.

3. Replace all the spaces from M' by the character _ (underscore); replace all the line breaks from M' by \ (backslash), and then reverse the message.

4. Write the message that results from step 3 in a table with length(M')/K rows and K columns. The writing is column-wise. For the given example, the message is written in a table with 76/19=4 rows and 19 columns. [image missing]

5. The strings of characters that correspond to the rows of the table are the fragments of the scrambled message. The 4 fragments of Arne's message given in step 1 are:

_etmneet_t\udsmt_fS
_gtuerr_,asaneeasf_
.narctrtria_edrrlen
si_t_seeovdec_ecenI

Write a program that deciphers non-empty messages scrambled as described. The length of a message, before scrambling, is at most 1000 characters, including spaces and line breaks.

The program input is from a text file where each data set corresponds to a scrambled message. A data set starts with an integer n, that shows the number of fragments of the scrambled message, and continues with n strings of characters that designate the fragments, in the order they appear in the table from step 4 of the scrambling procedure. Input data are separated by white-spaces and terminate with an end of file.

Sample input:

4
_etmneet_t\udsmt_fS
_gtuerr_,asaneeasf_
.narctrtria_edrrlen
si_t_seeovdec_ecenI
11
e n r e V _ s e l u J

Sample output:

In Sneffels craterem descende audas viator,
et terrestre centrum attinges.
Jules Verne

http://acm.hdu.edu.cn/showproblem.php?pid=1915

```cpp
#include <iostream>
#include <cstring>    // strlen
#include <algorithm>  // reverse
using namespace std;

char frag[1000][1000];  // the n fragments (rows of the table)
char msg[2000];         // deciphered message, collected column-wise

int main()
{
    int num;
    while (cin >> num)
    {
        for (int i = 0; i < num; i++)
            cin >> frag[i];
        int len = strlen(frag[0]);
        int pos = 0;
        bool started = false;  // skip the padding '_' that lead the reversed text
        for (int j = 0; j < len; j++)   // read the table column-wise
        {
            for (int i = 0; i < num; i++)
            {
                if (frag[i][j] != '_')
                    started = true;
                if (started)
                    msg[pos++] = frag[i][j];
            }
        }
        msg[pos] = '\0';
        reverse(msg, msg + pos);  // undo the reversal from step 3
        for (int i = 0; i < pos; i++)
        {
            if (msg[i] == '_')
                cout << ' ';      // underscores were spaces
            else if (msg[i] == '\\')
                cout << '\n';     // backslashes were line breaks
            else
                cout << msg[i];
        }
        cout << endl << endl;
    }
    return 0;
}
```
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5214713215827942, "perplexity": 2597.6432327875496}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463608642.30/warc/CC-MAIN-20170526051657-20170526071657-00166.warc.gz"}
https://eccc.weizmann.ac.il/keyword/18591/
Under the auspices of the Computational Complexity Foundation (CCF)

Reports tagged with Information Theoretic:

TR14-069 | 5th May 2014
Shashank Agrawal, Divya Gupta, Hemanta Maji, Omkant Pandey, Manoj Prabhakaran

#### Explicit Non-Malleable Codes Resistant to Permutations

The notion of non-malleable codes was introduced as a relaxation of standard error-correction and error-detection. Informally, a code is non-malleable if the message contained in a modified codeword is either the original message, or a completely unrelated value. In the information theoretic setting, although existence of such codes for various ...

TR17-076 | 21st April 2017
Tianren Liu, Vinod Vaikuntanathan, Hoeteck Wee

#### New Protocols for Conditional Disclosure of Secrets (and More)

Revisions: 2

We present new protocols for conditional disclosure of secrets (CDS), where two parties want to disclose a secret to a third party if and only if their respective inputs satisfy some predicate.

- For general predicates $\text{pred} : [N] \times [N] \rightarrow \{0,1\}$, we present two protocols that achieve ...

TR17-149 | 7th October 2017
Or Meir, Avi Wigderson

#### Prediction from Partial Information and Hindsight, with Application to Circuit Lower Bounds

Revisions: 5

Consider a random sequence of $n$ bits that has entropy at least $n-k$, where $k\ll n$. A commonly used observation is that an average coordinate of this random sequence is close to being uniformly distributed, that is, the coordinate "looks random". In this work, we prove a stronger result that ...

TR17-191 | 15th December 2017
Alexander Smal, Navid Talebanfard

#### Prediction from Partial Information and Hindsight, an Alternative Proof

Revisions: 2

Let $X$ be a random variable distributed over $n$-bit strings with $H(X) \ge n - k$, where $k \ll n$. Using subadditivity we know that a random coordinate looks random. Meir and Wigderson [TR17-149] showed a random coordinate looks random to an adversary who is allowed to query around $n/k$ ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6431331634521484, "perplexity": 3461.4940925889637}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348517506.81/warc/CC-MAIN-20200606155701-20200606185701-00306.warc.gz"}
http://mathhelpforum.com/calculus/171956-annoying-integral-involving-bessel-functions.html
# Thread: Annoying Integral involving Bessel Functions

1. ## Annoying Integral involving Bessel Functions

Hi, I'm having trouble with this integral:

$$\int_{0}^{\infty} \frac{J_0(kR)}{(1+(kR_d)^2)^{3/2}} \, dk$$

I'm supposed to evaluate it using

$$\int_{0}^{\infty} J_{\nu}(xy) \frac{dx}{(x^2+a^2)^{1/2}} = I_{\nu/2}(ay/2)\, K_{\nu/2}(ay/2)$$

where standard notation has been used for the Bessel functions. Any hints on how to transform it to the correct form would be much appreciated; I can't really see how to get this to work.

2. Originally Posted by thelostchild (quoted above)

Hint:

$$\int_0^{\infty} \frac{J_0(kR)}{(1 + (kR)^2 )^{3/2}}\,dk = \;...\; = \frac{1}{y}\int_0^{\infty}J_0(m) \cdot m(1 + m^2)^{-3/2}\,dm$$

(after letting $y = R$ and $m = xy$.)

Now integrate by parts:

$$\int p~dq = pq - \int q ~dp$$

using $p = J_0(m)$ and $dq = m(1 + m^2)^{-3/2}\,dm$.

-Dan
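Not from the thread, but the tabulated identity can be checked numerically for $\nu = 0$ with Python's mpmath, integrating between consecutive zeros of $J_0$ to handle the oscillatory tail; the values of $a$ and $y$ below are arbitrary test choices.

```python
import mpmath as mp

# Check:  integral_0^inf J_0(x*y)/sqrt(x^2 + a^2) dx  =  I_0(a*y/2) * K_0(a*y/2)
a, y = mp.mpf(1), mp.mpf(2)

f = lambda x: mp.besselj(0, x * y) / mp.sqrt(x**2 + a**2)
# quadosc integrates oscillatory functions; the zeros of J_0(x*y) sit at
# besseljzero(0, n) / y
lhs = mp.quadosc(f, [0, mp.inf], zeros=lambda n: mp.besseljzero(0, n) / y)
rhs = mp.besseli(0, a * y / 2) * mp.besselk(0, a * y / 2)
print(lhs, rhs)  # the two printed values agree to high precision
```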
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9865205883979797, "perplexity": 534.199223142597}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886124662.41/warc/CC-MAIN-20170823225412-20170824005412-00437.warc.gz"}
http://math.stackexchange.com/questions/221857/existence-of-non-atomic-probability-measure-for-given-measure-zero-sets
# Existence of non-atomic probability measure for given measure zero sets

Let $\Omega$ be a set and $\Sigma$ be a $\sigma$-algebra of subsets of $\Omega$. Let $N$ be a collection of sets from $\Sigma$.

Question: What conditions on $\Sigma$ and $N$ guarantee that there exists a non-atomic probability measure $\mu:\Sigma\to [0,1]$ such that for any $E\in \Sigma$, if $\mu(E)=0$ then $E\in N$?

Edited to make the question coherent.

- The condition on $N$ seems strange, because you can always choose $E'=\Omega$, at least the way that it is written now. – Lukas Geyer Oct 27 '12 at 2:18
- Thanks Lukas. Brain wasn't fully engaged. – Rabee Tourky Oct 27 '12 at 2:43
- @Lukas: Either of you could write that up as an answer so the question doesn't remain unanswered. – joriki Oct 27 '12 at 6:19
- @MichaelGreinecker: You might be interested in Kelley's criterion (original article here) covered in several books (e.g. Fremlin vol 3, ch. 39) as well as Maharam's "control measure problem" which generated some excitement in the past decade due to its negative resolution by Talagrand in '06. I don't have the time to go digging any further, but this should give some pointers. As an aside: there was also this MO thread by the OP but I didn't read it closely. – commenter Nov 26 '12 at 18:47
- @Michael: I don't understand. Take a $\sigma$-ideal $J$ included in $N$ and consider $\mathfrak{A} = \Sigma/J$. Every property of $\mathfrak{A}$ is a property of how $J$ sits inside $\Sigma$. – commenter Nov 26 '12 at 20:38

Thanks Michael Greinecker and commenter. The main practical problem for me in applying commenter's idea was that the weak $\sigma$-distributive property in Maharam's 1947 paper, in Kelley's paper, and in Todorcevic's amazing paper of 2004 on measure algebras may not hold if we choose an arbitrary $\sigma$-ideal $J$ in $N$ (and it certainly has no clear meaning for what I am doing). In the end, the best fit for my work was Ryll-Nardzewski's result, published in the addendum section of Kelley, and not Kelley's result with the distributive property.

1) There exists a sequence $B_n$ of subfamilies of $\Sigma$ such that $(\Sigma\setminus N)\subseteq \bigcup_{n} B_n$.

2) Each $B_n$ has a positive intersection number (as in Kelley).

3) Each $B_n$ is open for increasing sequences (if $E_m\uparrow E\in B_n$, then eventually $E_m\in B_n$).

The final condition (3) of Nardzewski guarantees that $\Sigma\setminus \bigcup_{n} B_n$ is a $\sigma$-ideal. Condition (2) guarantees that there is a finitely additive (positive) probability measure $\nu_n$ on $\Sigma$ that is bounded away from zero on $B_n$. Condition (3) tells us that from $\nu_n$ we can define a countably additive probability measure $\mu_n$ that also measures elements of $B_n$ positively. Letting $\mu= \sum_{n=1}^\infty 2^{-n} \mu_n$, we have the required measure.

For the converse, suppose that $\mu$ is the required measure; letting $B_n=\{ \mu>1/n\}$ we see that (1), (2), and (3) hold.

- Nice. Glad to see that my suggestion helped even if it was in an unexpected way... – commenter Jan 21 '13 at 21:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9172746539115906, "perplexity": 338.7005243891341}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701152987.97/warc/CC-MAIN-20160205193912-00345-ip-10-236-182-209.ec2.internal.warc.gz"}
https://www.mathxplain.com/probability-theory/discrete-and-continuous-distributions/problem-16
Contents of this Probability theory episode: Random variable, Discrete and continuous random variable, Binomial distribution, Poisson distribution, Hypergeometric distribution, Exponential distribution, Normal distribution, Uniform distribution, Probability, Average, Density function, Distribution function, Expected value, Standard deviation.

# Problem 16
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9923595190048218, "perplexity": 4273.9760943629635}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145657.46/warc/CC-MAIN-20200222085018-20200222115018-00142.warc.gz"}
https://worldbuilding.stackexchange.com/questions/193258/how-would-merfolk-insulate-an-underwater-house/193262
# How would merfolk insulate an underwater house?

If you tried to insulate your underwater home, making it watertight... wouldn't you suffocate yourself? Water does get stale and run low on oxygen, creating a dead zone, so wouldn't you want some flow of water to keep fresh water coming in? Worse, water has about 5% the oxygen content of air per volume (not counting the H2O itself), and fish are only about 80% efficient at absorbing it.

The issue there is that you can't really insulate a home which has fresh water cycling through: that's going to keep it at the same temperature as the sea outside. How could merfolk, or other underwater people, insulate their homes without suffocating themselves?

• Comments are not for extended discussion; this conversation has been moved to chat. – L.Dutch Jan 6, 2021 at 4:13

Ventilation with heat recovery: You just stumbled upon the basic problem home builders are having in the real world. To keep the energy inside the building, buildings are more or less made airtight. Windows are no longer leaking significantly (cf. blower door test). However, people need to get fresh air in from time to time and the bad air (low O2, humidity) out. Opening the windows to exchange air kind of defeats the purpose of building "airtight" in the first place. Therefore all new buildings (at least here in Germany) are required to have some kind of heat recovery ventilation that exchanges air while at the same time keeping most of the energy inside the building. The principles of those systems are applicable to water as well as air.

• Thanks, Manziel. Great to have a similar RL example to the problem. I'll look into it. Jan 4, 2021 at 21:23
• I was thinking, should I just answer "Heat exchangers"? But your answer is a better way to explain that idea. Jan 5, 2021 at 1:28
• @Joffan That's what I did (see below), only doing it as an ASCII-art drawing took so long Manziel beat me by five minutes. :-)) – Karl Jan 5, 2021 at 21:38

Photosynthesis/chemosynthesis can help you convert CO2 into O2. Seaweed and other sea vegetation can supply the needed oxygen to the environment. Even blue algae can do the same. Basically nothing that different from having house plants for keeping the air fresh on land. There are even sealed bottles where you have a mini biome kept alive by the plants inside it.

Alternatively, you could set up a system where outside water is let in and inside water is pushed out with crossing flow, so that the inbound water is warmed up by the outgoing one. Again, nothing different than a land-based recirculation system with heat recovery.

• The sealed bottle is interesting. Algae will actually absorb oxygen when not photosynthesizing, so I presume these bottles are kept in day-lit areas? Cross flow is also a good idea. Thanks for the answer. Jan 3, 2021 at 19:11
• Photosynthesis or chemosynthesis? There's little sunlight at the bottom of the ocean Jan 4, 2021 at 8:26
• @mcalex can be both, depending on the depth. – L.Dutch Jan 4, 2021 at 8:48
• You would need a very large bottle to support a sentient life form; open ocean already imposes limits on animals via oxygen. How large do you envisage the bottle/container being to achieve this? Jan 4, 2021 at 16:47

### Vents in the floor

I'm assuming you're wanting to keep the interior warm against the cold sea, not the other way around. Warm water rises, cold water sinks. Oxygenated water rises, deoxygenated water sinks. (Warm water holds slightly less oxygen than cold water - so you don't want it too warm!)
Your mermaids' insulated houses have grates in the floor. "Dead" water sinks down the grates. If the house gets lower oxygen levels than outside, that water will rise up, warm, and expand, oxygenating the warm water inside. One option is to build the house on stilts: [diagram missing]

A multistorey house is probably fine to just connect to the floor below. If you have big apartment buildings or just a really big city, the oxygen requirements might exceed what the sea can provide in a small space, even if everyone has a garden. You may need oxygen generation to be a city utility, pumping oxygen out to bubblers in every home and workplace.

• Great idea, and I love the diagram. I think this combined with Dutch's answer would work wonders, if you can have the warmed stale air replace the less-cold oxygenated air. This does raise a confusing question, as to how much oxygenated water rises compared to warmer water.... They could probably work out the details, living with the stuff all the time. Jan 3, 2021 at 19:16
Sleeping/resting places are probably communal as it might be hard to keep many small rooms oxygenated. Isolated sleeping places would be a much bigger status symbol then above ground. Just for fun there is nothing preventing builders to have door in the ceiling and that would probably useful to transport things in and out of the house. • Thanks for another great answer, WSH. Manually bringing down air might be tricky. Enough air for a human for one day, by my figuring, is 135^3 feet of volume, which would have about 4 tons of up-thrust. They might manage on a lot less than this, if they regularly "air out" their homes in the morning, and only try to warm them before sleeping. Then they'd only need about 10-12 hours of air, and a large house might hold quite a bit in its water. As an aside, a mermaid would theoretically need 337ft^3 of water, per day, if they need as much as a human. Jan 3, 2021 at 20:13 Electrolysis of water. source Your mermaids can accomplish 2 goals with the same process. Electrolysis of water splits it into hydrogen and oxygen, and the oxygen released can be used to re-oxygenate the water. Also, electrolysis of water produces heat and so if the insulation is to retain warmth the process will warm up the dwelling. Electrolysis was discovered in 1789 and so doing this depends on the tech mermaids have available. https://en.wikipedia.org/wiki/Electrolysis_of_water#History • Thanks, Will. Love the article you linked, and I like your idea of getting the most out of a technology. Jan 3, 2021 at 22:07 • There will be a problem with chlorine if it is salt water (as in seawater). Jan 5, 2021 at 0:23 /\ / \ / \ / \ / \ / \ / \ /______________\ | | | ________|_______ | __|______________|_____ | <--_______________________ <-- | | __________ | | | | | | | | --> | | | --> | | __|______________|______________________ One word: "heat exchanger" • That looks similar to the system Manzeil pointed out. I really appreciate people are going to the trouble of making diagrams for the question. Thanks, Karl. Jan 4, 2021 at 21:24 • Words are easy; BTUs are hard. If a passive HVAC system could actually do any appreciable work I wouldn't have a job. Jan 5, 2021 at 5:32 • @Mazura It's not supposed to do any work. In lieu of a pressure difference, it even needs some work, aka some sort of pump to keep the water flowing. Hence the arrows. ;) – Karl Jan 5, 2021 at 7:33 Heated Pumps Reindeer have enormously complicated plumbing in their noses, all of it wrapped in capillary blood-vessels (this is the origin of Rudolph's famous red nose) Essentially, by the time the sub-zero air they breathe in reaches their lungs, it's already up to their core body temperature. They have a similar system for reducing the temperature of the air as they breathe it out again. This protects them from the extreme cold without greatly impeding their breathing. Your mermaid civilisation could use a similar approach, forcing fresh sea-water through a heated set of pipes (take your pick of the ways to heat the pipes, maybe geothermal vents) to provide fresh oxygenated water to the house. Your old water can then be allowed out of the building under positive-pressure. • This would be the best system for an industrial setting. Thank you, Raudhan. I really love the story about the reindeer and Rudolph. Jan 4, 2021 at 21:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3206307291984558, "perplexity": 2106.280618470843}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103646990.40/warc/CC-MAIN-20220630001553-20220630031553-00550.warc.gz"}
https://codereview.meta.stackexchange.com/questions/1875/blue-cheese-monster-feature-requests
# Blue Cheese Monster feature requests

Malachi and I broke our heads on how we can disambiguate the Blue Cheese Monster, especially in respect to the Tool vs. Toy question raised by rolfl. So here goes:

# What features does this tool need to be useful to the community?

It would be nice if you could provide:

• What does the command do?
• Example invocation
• Possible restrictions
• (for the overly enthusiastic) implementation "blueprint" / syntax

The current bot is written in JavaScript and is being run in a Chrome browser.

## Ban User From Commanding Bot

Obvious usage if the user is abusing the bot.

## Privilege settings

Keep users from creating too many commands, or specify what users can and cannot tell the bot to do.

Especially in respect to @rolfl's definitely correct and useful objections, Malachi and I just came up with a possible solution:

### Command Types:

As mentioned in @rolfl's answer, some commands are Toys and some are Tools. As an addition to the bot, there should be some owner-only (read: mods) command that sets the bot to allow or disallow calls to the toy-commands. Every new "learned" command would start out as a toy, until an owner (read: mod, maybe community majority) promotes the command to a tool. In tool-mode, every call to a toy command will just be either ignored or shunned off.

## Command: playtime

The playtime command switches the Blue Cheese Monster from "tool" to "toy" and back:

CR playtime / CR playtime over

This command may only be invoked by The Powers that Be. This would also require another command: the tool/toy command promotes another command to the "tool" status or back:

CR tool [command] / CR toy [command]

This command also may only be invoked by The Powers that Be. Alternatively it could evaluate votes (7 votes to promote, binding vote from TPTB).

## EDIT:

As there have been some misunderstandings on the purpose of this command in chat, a little clarification. This command is intended as a "partial kill switch". The toy-part of the Blue Cheese Monster can be horrendously abused. Thus Malachi and I thought of this as a way to prevent abuse, but maintain the tool functionality of the bot. This is not meant to be a black/white distinction, just an additional security mechanism.

• This tool/toy classification is useful to determine whether the bot is a contributor, or distraction in the chat room. It does not need to have a flag on each feature determining whether that feature is a tool/toy.... it just needs to be: 'this bot, in general, is a tool that can also be a toy' – rolfl May 12 '14 at 17:08
• Toys shouldn't run in The 2nd Monitor at all. Development should be done on a development instance of the bot, running in another chat room. As for the kill switch, moderators or the chat room owners can ban the bot's account from posting. – 200_success May 12 '14 at 18:07
• @200_success: Mods may be able to, but room owners cannot ban a particular account from posting. The only available choices are an open chat room where anybody can post, or gallery mode where only approved posters can post. The only route for non-mod users to ban a particular account is auto-banning if a sufficient number of posts are flagged. – Jerry Coffin May 14 '14 at 14:35

CR register-feed {key} {url}

Purpose: registers the specified RSS feed URL with the specified key/identifier, and posts whatever comes up in that feed as a chat message, hopefully with a link.
Example: CR register-feed cr-answers http://stack2rss.stackexchange.com/codereview.stackexchange/answers&body=true

CR unregister-feed {key}

Purpose: forget about the specified RSS feed URL; stop posting feed items as chat messages.

Example: CR unregister-feed cr-answers

CR list-feeds

Purpose: displays a list of all registered RSS feed URLs and the keys they're registered under.

Example: CR list-feeds

• when I get the RSS stuff figured out I would love to try to implement this! – Malachi May 12 '14 at 17:37

### Current status:

On being unsummoned, the bot currently just vanishes, but the user account still remains.

### Suggestion:

Have the bot give a last message before retreating to his cave, something along the lines of:

ByeBye, I am back to my cave now.

And he should really quit the room, and not just hang around ;)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.16522321105003357, "perplexity": 6274.020555960912}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514576355.92/warc/CC-MAIN-20190923105314-20190923131314-00536.warc.gz"}
https://math.stackexchange.com/questions/378376/if-g-has-only-2-proper-non-trivial-subgroups-then-g-is-cyclic/378396
# If $G$ has only 2 proper, non-trivial subgroups then $G$ is cyclic

Is the following true? If $G$ has two proper, non-trivial subgroups then $G$ is cyclic.

• Did you mean two proper non-trivial subgroups? – Tobias Kildetoft May 1 '13 at 16:31
• yeah, I mean two proper non-trivial subgroups here. – ROBINSON May 1 '13 at 16:36
• @AKASNIL: In that case you should go back and edit the question right away, since that's not what you asked and lots of people are answering the question you actually asked. – Pete L. Clark May 1 '13 at 16:46
• This is more of a general comment on all the responses thus far. In the initial set of comments on the question, the OP stated that he was referring to a group with exactly two proper, non-trivial subgroups. I assumed he meant exactly two subgroups other than $\{1\}$ and $G$. Am I missing something here? This possibility is all the more likely now that I have recently celebrated yet another birthday. – Chris Leary May 1 '13 at 16:53
• @Pete - I've always believed that if you try to reason you will make logical errors from time to time. Some of the ones I have made have astounded me once I realized them. That's pretty much the price we pay for being human. – Chris Leary May 3 '13 at 15:45

First note that if $G$ does not have finite order, then it does not have a finite number of subgroups, so we can assume that $G$ is finite (see the comment by Pete Clark). Note that if $3$ distinct primes divide the order of the group, then the group has at least $3$ proper non-trivial subgroups. So $|G| = p^nq^m$ with $p$ and $q$ primes. Now, if either $n$ or $m$ is greater than or equal to $4$, then the corresponding Sylow subgroup has too many subgroups. Also, if either is at least $2$ and the other is not $0$, we again get too many subgroups. We are left with either $|G| = p^3$ or $|G| = pq$. In both cases the cyclic group of that order will satisfy the conditions, and we wish to show that these are the only ones (since the cyclic group of order $p^2$ has too few subgroups, and the non-cyclic one has too many).

If $|G| = pq$ and $G$ is not cyclic, then $G$ is not abelian, and thus has more than one Sylow subgroup for either $p$ or $q$, giving us too many subgroups.

If $|G| = p^3$ then $G$ has at least one subgroup of order $p$ and one of order $p^2$. But if $G$ is not cyclic, it has more than one maximal subgroup, which gives us at least two of order $p^2$, again resulting in too many subgroups.

• Assuming $G$ is finite? – Metin Y. May 1 '13 at 16:45
• @MetinY. Right, no infinite group can have this property. – Tobias Kildetoft May 1 '13 at 16:47
• This is a nice answer. One minor point: you seem to be assuming that the group is finite, which the OP did not say (although s/he didn't say other things as well...) and in any case is not necessary. But an infinite group either contains an element of infinite order or arbitrarily large finite subgroups, so this is no problem. I do think you should put this in the answer, though. – Pete L. Clark May 1 '13 at 16:53
• Let me try again: I claim that every infinite group $G$ contains infinitely many subgroups. Indeed, this is clear if it contains an element of infinite order. If not, it contains infinitely many elements of finite order, hence infinitely many finite cyclic subgroups (but they may well all have the same order, despite the fact that my intuition still suggests that this is impossible). – Pete L. Clark May 1 '13 at 18:27
• @StevenStadnicki That group does have arbitrarily large finite subgroups.
As mentioned previously, an example where all the proper non-trivial subgroups have the same (finite) order is given by the Tarski monster. – Tobias Kildetoft May 2 '13 at 1:21

Let $H_1$ and $H_2$ be the two non-trivial proper subgroups of the given group $G$. I claim that $G$ is not the union $H_1\cup H_2$. If one of the subgroups is contained in the other, then this is trivially true. Otherwise there exist elements belonging to one subgroup but not the other. Let $h_1\in H_1\setminus H_2$ and $h_2\in H_2\setminus H_1$. What about $g=h_1h_2$? If it belongs to $H_1$, then so does $h_2$. If it belongs to $H_2$, then so does $h_1$. In either case we contradict our assumptions, so we have to conclude that $g\notin H_1\cup H_2$.

So we know that there exists an element $g\in G$, $g\notin H_1\cup H_2$. What is the subgroup generated by $g$? It can't be either $H_1$ or $H_2$, so it has to be all of $G$. Ergo, $G$ is cyclic.

• Very nice and elementary answer. – Tobias Kildetoft May 1 '13 at 17:00
• Complete, yes. Not so sure about clean. If $G$ were finite, it would be easier to prove that $G\neq H_1\cup H_2$ (Lagrange is all you need). I didn't see a nice way of covering the infinite groups in the same argument, so I resorted to the uglier way of picking those $h_1,h_2$ :-( – Jyrki Lahtonen May 1 '13 at 19:50
• But the fact that the union of two subgroups is not a subgroup unless one is contained in the other is standard knowledge (or should be). – Tobias Kildetoft May 2 '13 at 0:45
• Unlike Jyrki, I think his argument using $h_1$ and $h_2$ is prettier than the one using Lagrange's theorem. Not only does it work for infinite groups, but it uses only information that is even more basic than Lagrange's theorem. – Andreas Blass May 2 '13 at 0:50
• +1: With respect to some esthetic that I would have trouble enunciating, this is clearly the best possible answer. – Pete L. Clark May 2 '13 at 1:47

Since $G$ has proper non-trivial subgroups, $\exists~a~(\neq e)\in G.$

• Case $1$: $G=(a):$ Nothing left to prove.
• Case $2$: $(a)$ is a non-trivial proper subgroup of $G:$ Choose $b\in G-(a).$
• Case $2.1:$ $G=(b):$ Nothing left to prove.
• Case $2.2:$ $(b)$ is also a non-trivial proper subgroup of $G:$
• Case $2.2.1:$ $(a)\cup(b)=G,$ a subgroup of $G.$ Consequently either $(a)\subset(b)$ or $(b)\subset(a).$
• Case $2.2.2:$ $\exists~c\in G-(a)\cup(b).$ Since $G$ has only two proper subgroups, $G=(c).$

• @Adhya : I like your answer, but you lost me in the middle of Case 2. Do you mean "Since $G$ has only two proper subgroups..."? And why do you state that $G$ is a subgroup of $G$? Is this a typo? – Stefan Smith May 1 '13 at 21:42
• @Adhya : If $(a) \cup (b)$ is a subgroup of $G$, does it follow that $(a) \subset (b)$ or $(b) \subset (a)$? I don't know much group theory. I upvoted your answer because it is so elegant. – Stefan Smith May 1 '13 at 21:52
• @StefanSmith: See groupprops.subwiki.org/wiki/…. – Sugata Adhya May 2 '13 at 0:25
• @Adhya : Thanks for the link. I read the proof, which should be quite simple, and I took a survey posted there and complained about the "tabular method" of proof they used there, which I didn't care for. (I apologize if you happen to be the creator of that page/proof, and I respect the effort you put into it) – Stefan Smith May 2 '13 at 0:50

Let $|G| = n$, and suppose $a,b$ are two non-identity elements of $G$. Now consider $\langle a \rangle$ and $\langle b \rangle$.
If $G$ is commutative then it is easy to see that $\langle ab \rangle$ is a cyclic group other than $\langle a \rangle$ and $\langle b \rangle$, which leads to a contradiction, so one of $a$ and $b$ must be of order $n$, so $G$ is cyclic.

If $G$ is non-commutative then it can't be cyclic, so we are done.

• thank u Mr. Alex. I don't know how to use MathJax. – Anjan Samanta Oct 16 '17 at 8:35
• I fail to follow your logic. If $G$ is commutative it can easily happen that $\langle a\rangle$, $\langle b\rangle$ and $\langle ab\rangle$ are all the same subgroup. And in the non-commutative case the contrapositive claim would be to prove that the group has at least three proper subgroups, so you are not done in that case either. – Jyrki Lahtonen Oct 16 '17 at 14:35
• Sorry, my bad. If $b$ is the inverse of $a$ then they can't be distinct. And the second case isn't obvious. I have made several mistakes. – Anjan Samanta Oct 16 '17 at 18:25
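As a quick empirical check of the classification in the accepted answer (not from the thread): for a cyclic group of order $n$, subgroups correspond one-to-one to divisors of $n$, so "exactly two proper non-trivial subgroups" means exactly four divisors, i.e. $n = p^3$ or $n = pq$. A short Python sketch:

```python
def num_divisors(n):
    return sum(1 for d in range(1, n + 1) if n % d == 0)

# Cyclic groups have exactly one subgroup per divisor of the order,
# so "two proper non-trivial subgroups" means exactly 4 divisors.
hits = [n for n in range(2, 40) if num_divisors(n) == 4]
print(hits)  # [6, 8, 10, 14, 15, 21, 22, 26, 27, 33, 34, 35, 38, 39], all p^3 or p*q
```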
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8945598602294922, "perplexity": 227.57617355086163}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027320734.85/warc/CC-MAIN-20190824105853-20190824131853-00486.warc.gz"}
http://www.physicsforums.com/showthread.php?p=3798093
## Levi Civita proof - Curl

I need to prove an identity for $$\nabla \times (\vec{A} \times \vec{B})$$ Well. Trying to solve it, I've come to this: $$\partial_j \vec{A}_i \vec{B}_j \hat{u}_i- \partial_i \vec{A}_i \vec{B}_j \hat{u}_j$$ Then I've found in my book that this is equal to: $$\vec{A}_i \partial_j \vec{B}_j \hat{u}_i + \partial_j \vec{B}_j \vec{A}_i \hat{u}_i - (\vec{A}_i \partial_i \vec{B}_j \hat{u}_j + \vec{B}_j \vec{A}_i \partial_i \hat{u}_j)$$ Can someone explain to me why?
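For context, here is the standard index-notation computation (a sketch I am adding; it is not part of the original thread). It uses the contraction identity $\epsilon_{ijk}\epsilon_{klm}=\delta_{il}\delta_{jm}-\delta_{im}\delta_{jl}$ together with the product rule:

$$\bigl(\nabla\times(\vec{A}\times\vec{B})\bigr)_i = \epsilon_{ijk}\,\partial_j(\vec{A}\times\vec{B})_k = \epsilon_{ijk}\epsilon_{klm}\,\partial_j(A_l B_m) = (\delta_{il}\delta_{jm}-\delta_{im}\delta_{jl})\,(B_m\,\partial_j A_l + A_l\,\partial_j B_m)$$

$$= B_j\,\partial_j A_i + A_i\,\partial_j B_j - B_i\,\partial_j A_j - A_j\,\partial_j B_i,$$

which in vector form is $\nabla\times(\vec{A}\times\vec{B}) = (\vec{B}\cdot\nabla)\vec{A} + \vec{A}(\nabla\cdot\vec{B}) - \vec{B}(\nabla\cdot\vec{A}) - (\vec{A}\cdot\nabla)\vec{B}$.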
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7860631346702576, "perplexity": 693.171554454864}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00067-ip-10-60-113-184.ec2.internal.warc.gz"}
https://cs.stackexchange.com/questions/87602/how-to-compute-all-primes-between-upto-n-in-time-on-time
# How to compute all primes up to $n$ in $O(n)$ time?

Suppose that I want to compute all the prime numbers between 2 and $n$. The most obvious way to do so is given below. Let $A$ be an array containing the numbers from $1$ to $n$.

1. For $j=2$ to $j=\sqrt n$
2. mark the multiples of $j$ in $A$

The running time of this algorithm is $O(n \log n).$ It is easy to see that after the first iteration there will be $n/2$ many unmarked elements, and so on. The problem with this method is that some of the elements may be marked more than once.

Question: How to compute all the primes up to $n$ in $O(n)$ time?

• If you only go over primes (which you can do at no cost), you improve the running time to $O(n\log\log n)$. Feb 1, 2018 at 12:56
• Does it help if you put all elements in a (doubly) linked list and then do the marking trick, but instead of marking an element, you simply remove it from the list (removing can be done in $O(1)$)? This solves the "marking things twice" issue. [I am not saying that in practice this is a good idea] Feb 1, 2018 at 13:10
• Why would you try to mark multiples of 4? Feb 1, 2018 at 16:21
• Your algorithm is a degenerate version of the sieve of Eratosthenes, where for each $j$, if $j$ has not been marked before, you mark all its multiples. The running time is thus roughly $\sum_{p\ \text{prime},\, p \le n} n/p$, which is $O(n \log\log n)$ (en.wikipedia.org/wiki/…): not enough, but not that bad already. – holf Feb 1, 2018 at 17:42
• @gnasher729 I now notice that I did not read the algorithm in the question correctly - I meant to say that you only use the elements that remain in the linked list for the marking (smallest to largest) (since 4 is removed after doing the marking with 2, it will not be considered) Feb 2, 2018 at 9:26

You can use a sieve to enumerate all prime numbers up to $n$. There are multiple algorithms; see the Wikipedia article I link for some examples. The sieve of Atkin and wheel sieves apparently run in $O(n)$ time.
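Beyond the pointer to the Wikipedia article, the standard $O(n)$ construction is the linear ("Euler") sieve: every composite is struck out exactly once, by its smallest prime factor, which is what makes the total work linear. A minimal sketch in Python (my own addition; names are illustrative):

```python
def linear_sieve(n):
    """Return all primes up to n in O(n) time."""
    spf = [0] * (n + 1)   # spf[i] = smallest prime factor of i (0 = unmarked)
    primes = []
    for i in range(2, n + 1):
        if spf[i] == 0:   # i was never struck out, so it is prime
            spf[i] = i
            primes.append(i)
        for p in primes:
            if p > spf[i] or i * p > n:
                break
            spf[i * p] = p   # i*p is struck out here and nowhere else
    return primes

print(linear_sieve(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

The break condition `p > spf[i]` is the whole trick: a composite $m = i \cdot p$ is only ever marked by the smallest prime dividing it, so the inner loop does $O(1)$ amortized work per integer.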
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6761418581008911, "perplexity": 458.2012004570545}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662572800.59/warc/CC-MAIN-20220524110236-20220524140236-00107.warc.gz"}
https://infoscience.epfl.ch/record/91178?ln=en
## Average case analysis of multichannel thresholding

This paper introduces p-thresholding, an algorithm to compute simultaneous sparse approximations of multichannel signals over redundant dictionaries. We work out both worst case and average case recovery analyses of this algorithm and show that the latter results in much weaker conditions on the dictionary. Numerical simulations confirm our theoretical findings and show that p-thresholding is an interesting low complexity alternative to simultaneous greedy or convex relaxation algorithms for processing sparse multichannel signals with balanced coefficients.

Published in: Proc. ICASSP'07
Presented at: ICASSP'07, Honolulu
Year: 2007
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9913471937179565, "perplexity": 2182.082351779301}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256812.65/warc/CC-MAIN-20190522123236-20190522145236-00358.warc.gz"}
http://math.stackexchange.com/questions/112596/how-to-find-the-coproduct-in-the-category-of-pointed-sets
# How to find the coproduct in the category of pointed sets?

Exercise $6(b)$, page 58 from Hungerford's book Algebra. Show that in $\mathcal{S}_{\star}$ (the category of pointed sets) every family of objects has a coproduct (often called a "wedge product"); describe this coproduct.

I need a suggestion in order to find the coproduct. I would appreciate your help.

With two normal sets, the coproduct is the disjoint union. With pointed sets, you merely add the condition that the basepoints of both sets always go to the basepoint of the new set, which only requires a small modification to the disjoint union. –  Carl Feb 23 '12 at 21:13
@Carl: I would like to thank you. Can you please write it as an answer so that I can accept it? Thank you again! –  spohreis Feb 23 '12 at 21:33
No problem! Reposting as an answer. –  Carl Feb 23 '12 at 22:03
@magma: It's just the disjoint union $X = \{a,b\} \coprod \{c,d\}$ with the base points $a,c$ identified, ie. the quotient of $X$ by the equivalence relation generated by $a \sim c$. –  Najib Idrissi Feb 25 '12 at 16:32
@magma: the coproduct of pointed sets $(X_i,b_i)$ is the universal pointed set $(X,b)$ with inclusion maps $f_i:(X_i,b_i)\to (X,b)$. By definition of morphisms between pointed sets, $f_i(b_i)=b$: all basepoints are mapped to the new basepoint. –  wildildildlife Feb 25 '12 at 17:15
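To summarize the construction described in the comments in one formula (my paraphrase, not from the page):

$$\bigvee_{i\in I}(X_i,b_i) \;=\; \Bigl(\coprod_{i\in I} X_i\Bigr)\Big/\,\bigl(b_i \sim b_j \ \text{for all}\ i,j\in I\bigr),$$

with the common class of the basepoints as the new basepoint and the inclusion maps $f_i:(X_i,b_i)\to(X,b)$ the evident maps into the quotient.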
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9377155900001526, "perplexity": 526.0649635490723}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500808153.1/warc/CC-MAIN-20140820021328-00448-ip-10-180-136-8.ec2.internal.warc.gz"}
https://lambda-y.net/talk/2019-lasso48/
# Spanish Aspectual Adverbials

[with Daniel A. Razo and Aniko Csirmaz]

### Abstract

Krifka (2001), based on Löbner (1989), proposes a crosslinguistic account of aspectual adverbials. We consider Spanish todavía 'still', ya 'already' and aún 'still' and discuss some challenges Spanish raises for Löbner/Krifka. Krifka (2001) discusses temporal uses in terms of presuppositions and assertions: e.g. still P asserts that $P$ holds at $t$ and presupposes that there is a time $t'$ immediately preceding $t$ where $P(t')$ (he is still sleeping). The model can be extended to other scales (the scale is distance in "San Diego is still in the US") (Beck 2016).

Ya. Delbecque & Maldonado (2011) discuss ya. They present ya as a pragmatic anchor, grounding the eventuality with respect to time and movement across a 'dynamic programmatic base'. We argue that our alternative account is better defined and offers specific predictions. First, we claim that ya can appear with covert predicates P, and is consistent with Löbner/Krifka. This accounts for Había tortillas, frijoles, y ya 'There were tortillas, beans, and that was all', where P is era todo 'that was all'. Covert predicates also account for the meaning variation when only ya is overt (e.g. ¿Ya?). Second, we discuss examples where ya is equivalent to clause-final occurrences of already, a modal use (Hay que hacerlo ya 'It must be done already'). We also discuss the approach of Curcó & Erdely (2016).

Todavía, aún. We argue that these adverbials are not synonymous. Todavía is scalar like still, while aún is additive. Additivity permits the concessive interpretation that is also available for aunque (cf. additive particles in Hindi concessives).

Location: Louisiana State University, Baton Rouge
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7937744855880737, "perplexity": 9812.696604041794}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251672440.80/warc/CC-MAIN-20200125101544-20200125130544-00418.warc.gz"}
http://sms.niser.ac.in/news/seminar-1
# News & Events ## Seminar Date/Time: Tuesday, September 9, 2014 - 16:45 to 17:45 Venue: LH-101 Speaker: Dr. Ghurumuruhan Ganesan Affiliation: EPFL, Lausanne Title: Infection Spread and Stability in Random Graphs Abstract: In the first part of the talk, we study infection spread in random geometric graphs where $n$ nodes are distributed uniformly in the unit square $W$ centred at the origin and two nodes are joined by an edge if the Euclidean distance between them is less than $r_n$. Assuming edge passage times are exponentially distributed with unit mean, we obtain upper and lower bounds for speed of infection spread in the sub-connectivity regime, $nr^2_n \to \infty$. In the second part of the talk, we discuss convergence rate of sums of locally determinable functionals of Poisson processes. Denoting the Poisson process as $\mathcal{N}$ , the functional as $f$ and Lebesgue measure as $l(.)$, we establish corresponding bounds for $$\frac{1}{l(nW)}\sum_{x\in nW \cap \mathcal{N}} f(x)$$ in terms of the decay rate of the radius of determinability. ## Contact us School of Mathematical Sciences NISERPO- Bhimpur-PadanpurVia- Jatni, District- Khurda, Odisha, India, PIN- 752050 Tel: +91-674-249-4081 Corporate Site - This is a contributing Drupal Theme Design by WeebPal.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9682180881500244, "perplexity": 524.9101451275059}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886117519.92/warc/CC-MAIN-20170823035753-20170823055753-00486.warc.gz"}
https://www.ideals.illinois.edu/handle/2142/111491
## Files in this item

application/pdf: 5684.pdf (16kB) (no description provided)

## Description

Title: Laboratory Spectroscopy For Astrochemistry: A Rotational Investigation Of 3-amino-2-propenenitrile
Author(s): Alberton, Davide
Contributor(s): Bizzocchi, Luca; Caselli, Paola; Endres, Christian; Lattanzi, Valerio
Subject(s): Astronomy

Abstract: In order to unveil how chemical complexity builds up in space, Complex Organic Molecules (COMs) are receiving more and more attention from the astrochemical and astrobiological community. Nowadays, especially after the glycine detection in the 67P/C-G comet, there is a strong interest in detecting amino acids and their precursors in space, with the aim of gaining understanding of their formation process. The most promising pathways to the synthesis of amino acids, namely the photochemical route and the Strecker synthesis, include in their final step the hydrogenation of an aminonitrile molecule. It is in this context that 3-amino-2-propenenitrile (APN) makes its appearance. This aminonitrile is simply obtained in the gas phase or in solution by mixing cyanoacetylene (HCCCN) and ammonia (NH$_3$), both largely present in the Universe. Therefore, starting from previous work taken as reference, the APN spectrum has herein been collected and characterised using MPE's CASAC (Center for Astrochemical Studies Absorption Cell). Here the frequency-modulated signal of a synthesiser, locked to a Rb atomic clock, is multiplied several times to cover a frequency range from 40 GHz to 1.6 THz. The interaction of the radiation source with the molecular sample is then recorded by an InSb hot electron bolometer detector and demodulated by a lock-in amplifier. The unprecedented level of detail in the data has allowed the characterisation of the rotational and distortion constants of APN to a higher standard, providing a new level of precision for the hunt for this molecule in space with the highest level of confidence to date.

Issue Date: 2021-06-24
Publisher: International Symposium on Molecular Spectroscopy
Genre: Conference Paper / Presentation
Type: Text
Language: English
URI: http://hdl.handle.net/2142/111491
Date Available in IDEALS: 2021-09-24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4929056763648987, "perplexity": 5315.217182953441}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585382.32/warc/CC-MAIN-20211021071407-20211021101407-00041.warc.gz"}
http://science.sciencemag.org/content/147/3661/991
# Magnetic Fields in Interplanetary Space

Science, 26 Feb 1965: Vol. 147, Issue 3661, pp. 991-1000. DOI: 10.1126/science.147.3661.991

## Abstract

The brief period between the conception of the interplanetary magnetic field and conclusive proof of its existence has been an exciting one. Imaginative theoretical developments and careful experimental verification have both been essential to rapid progress. From the various lines of evidence described here it is clear that an interplanetary magnetic field is always present, drawn out from the sun by the radially streaming solar wind. The field is stretched into a spiral pattern by the sun's rotation. The field appears to consist of relatively narrow filaments, the fields of adjacent filaments having opposite directions. At the earth's orbit the field points slightly below the ecliptic plane. The magnitude of the field is steady and near 5 gammas in quiet times, but it may rise to higher values at times of higher solar activity. A collision-free shock front is formed in the plasma flow around the earth. In the transition region between the shock front and the magnetopause the magnitude of the field is somewhat higher than it is in the interplanetary region, and large fluctuations in magnitude and direction are common. A shock front has also been observed in space between a slowly moving body of plasma and a faster, overtaking plasma stream.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8788841962814331, "perplexity": 745.6755005946846}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886104704.64/warc/CC-MAIN-20170818160227-20170818180227-00368.warc.gz"}
https://www.perimeterinstitute.ca/videos/axiverse-cosmology-and-energy-scale-inflation
# Axiverse Cosmology and the Energy Scale of Inflation

## Recording Details

PIRSA Number: 13050054

## Abstract

Ultra-light axions (m_a ...). I will also present preliminary results of constraints to this model using up-to-date cosmological observations, which verify the above picture. The parameter space is interesting to explore due to a strongly mass-dependent covariance matrix, motivating comparisons between Metropolis-Hastings and nested sampling. Finally I discuss fine-tuning and naturalness in these models.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.932514488697052, "perplexity": 5013.760381237823}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218193284.93/warc/CC-MAIN-20170322212953-00165-ip-10-233-31-227.ec2.internal.warc.gz"}
https://meangreenmath.com/2013/10/01/area-of-a-triangle-base-and-height-part-1/
# Area of a triangle: Base and height (Part 1)

This begins a series of posts concerning how the area of a triangle can be computed. This post concerns the formula that students most often remember:

$A = \displaystyle \frac{1}{2} b h$

Why is this formula true? Consider $\triangle ABC$, and form the altitude from $B$ to line $AC$. Suppose that the length of $AC$ is $b$ and that the altitude has length $h$. Then one of three things could happen:

Case 1. The altitude intersects $AC$ at either $A$ or $C$. Then $\triangle ABC$ is a right triangle, which is half of a rectangle. Since the area of a rectangle is $bh$, the area of the triangle must be $\displaystyle \frac{1}{2} bh$. Knowing the area of a right triangle will be important for Cases 2 and 3, as we will act like a good MIT freshman and use this previous work.

Case 2. The altitude intersects $AC$ at a point $D$ between $A$ and $C$. Let $b_1$ and $b_2$ be the lengths of $AD$ and $DC$, so that $b_1 + b_2 = b$. Then $\triangle ABD$ and $\triangle BCD$ are right triangles, and so

$\hbox{Area of~} \triangle ABC = \hbox{Area of ~} \triangle ABD + \hbox{~Area of~} \triangle BCD$

$\hbox{Area of~} \triangle ABC = \displaystyle \frac{1}{2} b_1 h + \frac{1}{2} b_2 h$

$\hbox{Area of~} \triangle ABC = \displaystyle \frac{1}{2} (b_1 + b_2) h$

$\hbox{Area of~} \triangle ABC = \displaystyle \frac{1}{2} bh$

Case 3. The altitude intersects line $AC$ at a point $D$ that is not between $A$ and $C$. Without loss of generality, suppose that $A$ is between $D$ and $C$, and let $t$ be the length of $DA$. Then $\triangle DBC$ and $\triangle DBA$ are right triangles, and so

$\hbox{Area of~} \triangle ABC = \hbox{Area of ~} \triangle DBC - \hbox{~Area of~} \triangle DBA$

$\hbox{Area of~} \triangle ABC = \displaystyle \frac{1}{2} (b+t) h - \frac{1}{2} t h$

$\hbox{Area of~} \triangle ABC = \displaystyle \frac{1}{2} (b+t-t) h$

$\hbox{Area of~} \triangle ABC = \displaystyle \frac{1}{2} bh$
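As a quick numeric sanity check (my own addition, not part of the original post), the following Python snippet compares $\frac{1}{2}bh$ against the shoelace formula in all three cases; the coordinates are illustrative choices placing the base on the x-axis:

```python
def shoelace(p, q, r):
    """Area of the triangle with vertices p, q, r via the shoelace formula."""
    (x1, y1), (x2, y2), (x3, y3) = p, q, r
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

b, h = 6.0, 4.0
A, C = (0.0, 0.0), (b, 0.0)
for label, B in [("Case 1", (0.0, h)),    # foot of altitude at A
                 ("Case 2", (2.0, h)),    # foot strictly between A and C
                 ("Case 3", (-3.0, h))]:  # foot outside segment AC
    assert abs(shoelace(A, B, C) - b * h / 2) < 1e-9
    print(label, "ok:", shoelace(A, B, C), "==", b * h / 2)
```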
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 36, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9625933766365051, "perplexity": 137.43669891811362}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335530.56/warc/CC-MAIN-20221001035148-20221001065148-00776.warc.gz"}
https://mca2017.org/fr/prog/session/gph
# Geometry and Physics of Higgs Bundles

Organizers:

• Florent Schaffhauser (Universidad de los Andes)
• Laura P. Schaposnik (University of Illinois-Chicago)
• Richard Wentworth (University of Maryland)

• Marcos Jardim (UNICAMP): Branes on moduli spaces of sheaves
Branes are special submanifolds of hyperkähler manifolds that play an important role in string theory, particularly in the Kapustin–Witten approach to the geometric Langlands program, but which are also of intrinsic geometric interest. More precisely, a brane is a submanifold of a hyperkähler manifold which is either complex or Lagrangian with respect to each of the three complex structures or Kähler forms composing the hyperkähler structure. Branes on moduli spaces of Higgs bundles have been studied extensively by many authors; in this talk, I will summarize recent work done in collaboration with Franco, Marchesi, and Menet on the construction of different types of branes on moduli spaces of Higgs bundles via the Nahm transform, of framed sheaves on the projective plane, and on moduli spaces of sheaves on K3 and abelian surfaces.

• Lara Anderson (Virginia Tech): Elliptically Fibered CY Geometries and Emergent Hitchin Systems
I provide a brief overview of the way that Higgs bundles arise in string compactifications (particularly F-theory). Further, I will describe recent progress in describing the moduli spaces of singular Calabi-Yau manifolds and the surprising relationships only recently discovered between Calabi-Yau and Hitchin integrable systems, providing a kind of transition function to relate open and closed string degrees of freedom in F-theory.

• Alessia Mandini (PUC Rio de Janeiro): Hyperpolygon spaces and parabolic Higgs bundles
Hyperpolygon spaces are a family of (finite dimensional, non-compact) hyperkähler spaces that can be obtained from coadjoint orbits by hyperkähler reduction. In joint work with L. Godinho, we show that these spaces are diffeomorphic (in fact, symplectomorphic) to certain families of parabolic Higgs bundles. In this talk I will describe this relation and use it to analyse the fixed point locus of a natural involution on the moduli space of parabolic Higgs bundles. The fixed point locus of this involution is identified with the moduli spaces of polygons in Minkowski 3-space, and the identification yields information on the connected components of the fixed point locus. This is based on joint works with Leonor Godinho and with Indranil Biswas, Carlos Florentino and Leonor Godinho.

• UIUC: Fiber products and spectral data for Higgs bundles
I will discuss some interesting relations among Higgs bundles, especially from the point of view of spectral data, that result from isogenies between low dimensional complex Lie groups and their real forms.

• Andy Neitzke (UT Austin): Abelianization in classical complex Chern-Simons theory
I will describe an approach to classical complex Chern-Simons theory via "abelianization," relating flat $GL(N)$-connections over a manifold of dimension $d \le 3$ to flat $GL(1)$-connections over a branched $N$-fold cover. This is joint work with Dan Freed.

• Sara Maloni (University of Virginia): The geometry of quasi-Hitchin symplectic Anosov representations.
After revising the background theory of symplectic Anosov representations and their domains of discontinuity, we will focus on our joint work in progress with Daniele Alessandrini and Anna Wienhard. In particular, we will describe partial results about the homeomorphism type of the quotient of the domain of discontinuity for quasi-Hitchin representations in $\mathrm{Sp}(4, \mathbb{C})$ acting on the Lagrangian space $\mathrm{Lag}(\mathbb{C}^4)$.

• Leticia Brambila-Paz (CIMAT): Coherent Higgs Systems
Let $X$ be a Riemann surface and $K$ the canonical bundle. An $L$-pair of type $(n,d,k)$ is a pair $(E, V)$ where $E$ is a vector bundle over $X$ of rank $n$ and degree $d,$ and $V$ is a linear subspace of $H^0(\mathrm{End}\,E\otimes L)$ of dimension $k.$ A coherent Higgs system is a $K$-pair. In this talk the moduli spaces of $K$-pairs of type $(n,d,1)$ are related to the moduli spaces of Hitchin pairs of type $(L,P).$

• Claudio Meneses (CIMAT): On the Narasimhan-Atiyah-Bott metrics on moduli of parabolic bundles
I will discuss my current work regarding the canonical Kähler structure on moduli spaces of stable parabolic bundles. If time permits, I will also discuss a conjectural relation with the geometry of the nilpotent cone locus and the abelianization of logarithmic connections in genus 0. This talk is based on ongoing projects with Leon Takhtajan, Marco Spinaci and Sebastian Heller.

• Steve Rayan: Asymptotics of hyperpolygons
As discovered in the work of Godinho-Mandini and Biswas-Florentino-Godinho-Mandini, the moduli space of $n$-sided hyperpolygons in the Lie algebra $\mathfrak{su}(2)^*$ is naturally a subvariety of the moduli space of rank-$2$ parabolic Higgs bundles on the projective line punctured $n$ times, and the integrable system structure pulls back to one on hyperpolygon space. These results were extended to higher rank in recent work by J. Fisher and myself. In this talk, I will report on joint work with H. Weiss regarding the asymptotic geometry of hyperpolygon space and its ambient space of parabolic Higgs bundles. The former has a hyperkähler metric arising from a finite-dimensional quotient and the latter has one arising from an infinite-dimensional quotient. We use properties of the hyperkähler moment map for hyperpolygon space to construct a limiting sequence of hyperpolygons that terminates in a moduli space of degenerate hyperpolygons. In the spirit of the work of Mazzeo-Swoboda-Weiss-Witt on ordinary Higgs bundles, we use this partial compactification to show that hyperpolygon space is an ALE manifold, as expected for Nakajima quiver varieties. Finally, I will use this analysis to speculate on differences between the metric on hyperpolygon space and the one on the ambient parabolic Higgs moduli space.

• Victoria Hoskins (Freie Universität Berlin): Group actions on quiver moduli spaces and branes
We consider two types of actions on moduli spaces of quiver representations over a field $k$, and we decompose their fixed loci using group cohomology. First, for a perfect field $k$, we study the action of the absolute Galois group of $k$ on the points of this quiver moduli space valued in an algebraic closure of $k$; the fixed locus is the set of $k$-rational points, and we obtain a decomposition of this fixed locus indexed by the Brauer group of $k$ and give a modular interpretation of this decomposition.
Second, we study algebraic actions of finite groups of quiver automorphisms on these moduli spaces; the fixed locus is decomposed using group cohomology and each component has a modular interpretation. Finally, we describe the symplectic and holomorphic geometry of these fixed loci in hyperkähler quiver varieties in the language of branes. This is joint work with Florent Schaffhauser.

• Qiongling Li (Caltech): Metric domination for Higgs bundles of quiver type
Given a $G$-Higgs bundle over a Riemann surface, there is a unique equivariant harmonic map into the associated symmetric space $G/K$, obtained by solving the Hitchin equation for the Higgs bundle. We find a maximum principle for a type of coupled elliptic systems and apply it to analyze the Hitchin equations associated to Higgs bundles of quiver type. In particular, we find several domination results for the pullback metrics of the associated branched harmonic maps into the symmetric space. This is joint work with Song Dai.

• Sergei Gukov (Caltech): Equivariant invariants of the Hitchin moduli space
This talk will be a fairly broad review of exploring the geometry and topology of the moduli space of Higgs bundles through the equivariant circle action (which acts by a phase on the Higgs field). This approach leads to new invariants of the moduli space of Higgs bundles: the so-called equivariant Verlinde formula, the real and wild versions of the Hitchin character, and the equivariant elliptic genus. The real reason, though, for studying these new invariants is not so much that they contain a wealth of useful information about Higgs bundles (they actually do!) but that they have surprising new connections to other problems in math and mathematical physics.

• Laura Fredrickson (Stanford): Constructing solutions of Hitchin's equations near the ends of the moduli space
Hitchin's equations are a system of gauge-theoretic equations on a Riemann surface that are of interest in many areas including representation theory, Teichmüller theory, and the geometric Langlands correspondence. In this talk, I'll describe what solutions of $SL(n,\mathbb{C})$-Hitchin's equations near the "ends" of the moduli space look like, and the resulting compactification of the Hitchin moduli space. Wild Hitchin moduli spaces are an important ingredient in this construction. This construction generalizes Mazzeo-Swoboda-Weiss-Witt's construction of $SL(2,\mathbb{C})$-solutions of Hitchin's equations where the Higgs field is "simple."

• Michael Groechenig (FU Berlin): p-adic integration for the Hitchin system
I will report on joint work with D. Wyss and P. Ziegler. We prove a conjecture by Hausel-Thaddeus which predicts an agreement of appropriately defined Hodge numbers for moduli spaces of Higgs bundles for the structure groups SL(n) and PGL(n) over the complex numbers. Despite the complex-analytic nature of the statement, our proof is entirely arithmetic.
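For reference, since several of these abstracts invoke them: Hitchin's equations for a Higgs bundle $(E,\Phi)$ with unitary connection $A$ on a compact Riemann surface read (a standard formulation I am adding here; it is not part of the session page)

$$F_A + [\Phi, \Phi^{*}] = 0, \qquad \bar{\partial}_A \Phi = 0,$$

where $F_A$ is the curvature of $A$ and $\Phi$ is a holomorphic $\mathrm{End}(E)$-valued $(1,0)$-form, the Higgs field.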
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8898596167564392, "perplexity": 713.0310558550514}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347435987.85/warc/CC-MAIN-20200603175139-20200603205139-00442.warc.gz"}
https://brilliant.org/problems/floor-fun/
# Floor Fun

Level pending

If $$P$$ is the probability of an uncertain event happening, then what is $$10^{10}\lfloor P\rfloor$$?
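A one-line solution sketch (my own reading of the problem as written; the page does not show an intended answer): an "uncertain" event satisfies $0 < P < 1$, so

$$\lfloor P\rfloor = 0 \quad\Longrightarrow\quad 10^{10}\,\lfloor P\rfloor = 0.$$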
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6301319003105164, "perplexity": 3757.695215981308}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864022.18/warc/CC-MAIN-20180621040124-20180621060124-00314.warc.gz"}
http://cms.math.ca/cmb/onlinefirst
location: Publications → journals → CMB Online First

The following papers are the latest research papers available from the Canadian Mathematical Bulletin. The papers below are all fully peer-reviewed and we vouch for the research inside. Some items are labelled Author's Draft, and others are identified as Published. As a service to our readers, we post new papers as soon as the science is right, but before official publication; these are the papers marked Author's Draft. When our copy editing process is complete and the paper has our official form, we replace the Author's Draft with the Published version. All the papers below are scheduled for inclusion in a print issue. When that issue goes to press, the paper is moved from this Online First web page over to the main CMB Digital Archive.

• Approximate amenability of Segal algebras II, by Alaghmandan, Mahmood (Published: 2014-09-26)
• Characters on $C(X)$, by Boulabiar, Karim (Published: 2014-06-09)
• Lifting Divisors on a Generic Chain of Loops, by Cartwright, Dustin; Jensen, David; Payne, Sam (Author's Draft)
• Irreducible Tuples without Boundary Property, by Chavan, Sameer (Published: 2014-11-03)
• Compact Commutators of Rough Singular Integral Operators, by Chen, Jiecheng; Hu, Guoen (Published: 2014-10-20)
• On an Exponential Functional Inequality and its Distributional Version, by Chung, Jaeyoung (Published: 2014-04-03)
• Orbits of Geometric Descent, by Daniilidis, A.; Drusvyatskiy, D.; Lewis, A. S. (Published: 2014-03-18)
• Spectral Flows of Dilations of Fredholm Operators, by De Nittis, Giuseppe; Schulz-Baldes, Hermann (Published: 2014-11-19)
• Correction to "Infinite Dimensional DeWitt Supergroups and Their Bodies", by Fulp, Ronald Owen (Published: 2014-09-26)
• Limited Sets and Bibasic Sequences, by Ghenciu, Ioana (Published: 2014-03-25)
• The equivariant cohomology rings of Peterson varieties in all Lie types, by Harada, Megumi; Horiguchi, Tatsuya; Masuda, Mikiya (Author's Draft)
• Essential Commutants of Semicrossed Products, by Hasegawa, Kei (Published: 2014-11-13)
• On Graphs Associated with Character Degrees and Conjugacy Class Sizes of Direct Products of Finite Groups (Author's Draft)
• Injective Tauberian Operators on $L_1$ and Operators with Dense Range on $\ell_\infty$, by Johnson, William; Nasseri, Amir Bahman; Schechtman, Gideon; Tkocz, Tomasz (Published: 2014-11-03)
• Property T and Amenable Transformation Group $C^*$-algebras, by Kamalov, F. (Published: 2014-02-25)
• Approximate Fixed Point Sequences of Nonlinear Semigroup in Metric Spaces, by Khamsi, M. A. (Published: 2014-05-07)
• Weak arithmetic equivalence, by Mantilla-Soler, Guillermo (Published: 2014-11-03)
• A Sharp Constant for the Bergman Projection, by Marković, Marijan (Published: 2014-08-07)
• Plane Lorentzian and Fuchsian Hedgehogs, by Martinez-Maure, Yves (Published: 2014-11-24)
• Countable dense homogeneity in powers of zero-dimensional definable spaces, by Medini, Andrea (Author's Draft)
• Corrigendum to "Chen Inequalities for Submanifolds of Real Space Forms with a Semi-symmetric Non-metric Connection" (Author's Draft)
• Exact and Approximate Operator Parallelism (Published: 2014-09-26)
• On the Generalized Auslander-Reiten Conjecture under Certain Ring Extensions, by Nasseh, Saeed (Author's Draft)
• On the Generalized Auslander-Reiten Conjecture under Certain Ring Extensions, by Nasseh, Saeed (Published: 2014-11-03)
• Localization and Completeness in $L_2({\mathbb R})$, by Olevskii, Victor (Published: 2014-09-20)
• Connections between metric characterizations of superreflexivity and the Radon-Nikodým property for dual Banach spaces, by Ostrovskii, Mikhail I. (Published: 2014-11-03)
• Some normal numbers generated by arithmetic functions, by Pollack, Paul; Vandehey, Joseph (Published: 2014-10-10)
• Some normal numbers generated by arithmetic functions, by Pollack, Paul; Vandehey, Joseph (Author's Draft)
• Periodic Solutions of Almost Linear Volterra Integro-dynamic Equation on Periodic Time Scales, by Raffoul, Youssef N. (Published: 2014-10-15)
• Homological Planes in the Grothendieck Ring of Varieties, by Sebag, Julien (Published: 2014-10-20)
• Finite Semisimple Loop Algebras of Indecomposable $RA$ Loops, by Sharma, R. K.; Sidana, Swati (Author's Draft)
• On Finite Groups with Dismantlable Subgroup Lattices, by Tărnăuceanu, Marius (Published: 2014-11-18)
• On the Structure of Cuntz Semigroups in (Possibly) Nonunital C*-algebras, by Tikuisis, Aaron Peter; Toms, Andrew (Author's Draft)
• Telescoping estimates for smooth series, by Wirths, Karl Joachim (Published: 2014-11-03)
• Second-order Riesz Transforms and Maximal Inequalities Associated with Magnetic Schrödinger Operators, by Yang, Dachun; Yang, Sibei (Published: 2014-11-24)
• Dihedral Groups of order $2p$ of Automorphisms of Compact Riemann Surfaces of Genus $p-1$, by Yang, Qingjie; Zhong, Weiting (Published: 2014-11-13)
• An Explicit Formula for the Generalized Cyclic Shuffle Map, by Zhang, Jiao; Wang, Qing-Wen (Published: 2013-02-14)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.707934558391571, "perplexity": 25775.779812757228}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931008289.40/warc/CC-MAIN-20141125155648-00097-ip-10-235-23-156.ec2.internal.warc.gz"}
https://casmusings.wordpress.com/2016/03/24/party-ratios/
# Party Ratios

I find LOTS of great middle school problems from @Five_Triangles on Twitter.  Their post two days ago was no exception. The problem requires a little stamina, but can be approached many ways–two excellent criteria for worthy student explorations.  That it has some solid extensions makes it even better.  Following are a few different solution approaches some colleagues and I created.

INITIAL THOUGHTS, VISUAL ORGANIZATION, & A SOLUTION

The most challenging part of this problem is data organization.  My first thoughts were for a 2-circle Venn Diagram–one for gender and one for age.  And these types of Venn Diagrams are often more easily understood, in my experience, in 2×2 Table form with extra spaces for totals.  Here's what I set up initially.

The ratio of Women:Girls was 11:4, so the 24 girls meant each "unit" in this ratio accounted for 24/4=6 people.  That gave 11*6=66 women and 66+24=90 females.

At this point, my experience working with algebraic problems tempted me to overthink the situation.  I was tempted to let B represent the unknown number of boys and set up some equations to solve.  Knowing that most 6th graders would not think about variables, I held back that instinct in an attempt to discover what a less-experienced mind might try.  I present my initial algebra solution below.

The 5:3 Male:Female ratio told me that each "gender unit" represented 90/3=30 people.  That meant there were 5*30=150 males and 240 total people at the party.

Then, the 4:1 Adult:Children ratio showed how to age-divide every group of 5 partygoers.  With 240/5=48 such groups, there were 48 children and 4*48=192 adults.  Subtracting the already known 66 women gave the requested answer:  192-66=126 men.

While this Venn Diagram/Table approach made sense to me, I was concerned that it was a moderately sophisticated and not quite intuitive problem-solving technique for younger middle school students.

WHAT WOULD A MIDDLE SCHOOLER THINK?

A middle school teaching colleague, Becky, offered a different solution I could see students creating.  Completely independently, she solved the problem in exactly the same order I did, using ratio tables to manage the scaling at each step instead of my "unit ratios".  I liked her visual representation of the 4:1 Adults:Children ratio to find the number of adults, which gave the requested number of men.  I suspect many more students would implicitly or explicitly use some chunking strategies like the visual representation to work the ratios.

WHY HAVE JUST ONE SOLUTION?

Math problems involving ratios can usually be opened up to allow multiple, or even an infinite number of, solutions.  This leads to some interesting problem extensions if you eliminate the "24 girls" restriction.  Here are a few examples and sample solutions.

What is the least number of partygoers?

For this problem, notice from the table above that all of the values have a common factor of 6.  Dividing the total partygoers by this reveals that 240/6=40 is the least number.  Any multiple of this number is also a legitimate solution. Interestingly, the 11:4 Women:Girls ratio becomes explicitly obvious when you scale the table down to its least common value.

My former student and now colleague, Teddy, arrived at this value another way.  Paraphrasing, he noted that the 5:3 Male:Female ratio meant any valid total had to be a multiple of 5+3=8.  Likewise, the 4:1 Adult:Child ratio requires totals to be multiples of 4+1=5.  And the LCM of 8 & 5 is 40, the same value found in the preceding paragraph.
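As a cross-check of all of these counts, here is a small SymPy sketch (my own addition, anticipating the 4×4 system discussed below; variable names are illustrative) that solves the three ratio conditions plus the 24-girls condition directly:

```python
from sympy import symbols, Rational, solve

W, M, G, B = symbols("W M G B", positive=True)  # women, men, girls, boys
solution = solve([
    (M + B) - Rational(5, 3) * (W + G),  # Male : Female = 5 : 3
    (M + W) - 4 * (B + G),               # Adult : Child = 4 : 1
    W - Rational(11, 4) * G,             # Women : Girls = 11 : 4
    G - 24,                              # 24 girls given
], [W, M, G, B])
print(solution)  # {W: 66, M: 126, G: 24, B: 24}
```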
What do all total partygoer numbers have in common?

As explained above, any multiple of 40 is a legitimate number of partygoers.

If the venue could support no more than 500 attendees, what is the maximum number of women attending?

12*40=480 is the greatest multiple of 40 below 500.  Because 480 is double the initial problem's total, 66*2=132 is the maximum number of women. Note that this can be rephrased to accommodate any other gender/age/total target.

Under the given conditions, will the number of boys and girls at the party ever be identical?

As with all ratio problems, larger values are always multiples of the least common solution.  That means the number of boys and girls will either always be identical or always be different.  From above, the minimal solution has 24/6=4 boys and 24/6=4 girls, so the two counts are always identical, and both are always multiples of 4.

What variations can you and/or your students create?

RESOLVING THE INITIAL ALGEBRA

Now to the solution variation I was initially inclined to produce.  After initially determining 66 women from the given 24 girls, let B be the unknown number of boys.  That gives B+24 children.  It was given that adults are 4 times as numerous as children, making the number of adults 4(B+24)=4B+96.  Subtracting the known 66 women leaves 4B+30 men.  Adding back the B boys, the total number of males is (4B+30)+B=5B+30, so the 5:3 Male:Female ratio means $\displaystyle \frac{5}{3} = \frac{5B+30}{90} \longrightarrow B=24$, the same result as earlier.

ALGEBRA OVERKILL

Winding through all of that algebra ultimately isn't that computationally difficult, but it certainly is more than typical 6th graders could handle. But the problem could be generalized even further, as Teddy shared with me.  If the entire table were written in variables with W=number of women, M=men, G=girls, and B=boys, the given ratios in the problem would lead to a reasonably straightforward 4×4 system of equations.  If you understand enough to write all of those equations, I'm certain you could solve them, so I'd feel confident allowing a CAS to do that for me.  My TI-Nspire solves it directly. And that certainly isn't work you'd expect from any 6th grader.

CONCLUSION

Given that the 11:4 Women:Girls ratio was the only "internal" ratio, it was apparent in retrospect that all solutions except the 4×4 system approach had to find the female values first.  There are still several ways to resolve the problem, but I found it interesting that while there was no "direct route", every reasonable solution started with the same steps. Thanks to colleagues Teddy S & Becky M for sharing their solution proposals.

### 2 responses to "Party Ratios"

1. Although the party problem is related to factorising, LCMs and the like, we know of no mathematical term for multiple sets of ratios where the ratios need to be cross-resolved with each other, but we will suggest that this problem actually lies at the simpler end of the spectrum in this genre because all of the ratios overlap, reducing the problem to arithmetic. Your "problem extensions" go in a different direction from the central concept we see at these problems' core. The following example illustrates what can occur towards the other end of the spectrum: it introduces a significant level of complication because, although the basic idea is the same, to cross-resolve the ratios, no relationship between the two ratios is given (i.e., there is no overlap). Solving the problem requires an extra layer of analysis to determine the connection. More than requiring stamina, it's difficult.
• Agreed on all points, especially the arithmetic result of the 11:4 ratio cross-connecting the other two ratios. My stamina note was simply a recognition that far too many U.S. middle school math resources invoke only 1 or 2 ratios. Your problem introduced a lovely reasoning chain. My point on the Problem Extensions was simply that the vast majority of “school math” problems close students off from further investigation. Great mathematics is about asking and exploring “What if …?” questions. In this case, I was offering some samples of what could happen if you didn’t limit the girls to 24. The algebraic extensions acknowledge what I’ve seen in many of your problems. Even though they’re constructed for middle school students, in the proper light, they can be just as applicable for upper school students. Thanks, as always, for sharing rich, quality problems.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 1, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7747821807861328, "perplexity": 1908.6890301556525}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221209021.21/warc/CC-MAIN-20180814101420-20180814121420-00093.warc.gz"}
https://www.nature.com/articles/s41586-020-2432-4?error=cookies_not_supported&code=027e415d-6ced-4a20-9032-f801d3b81efd
# Single-molecule imaging of transcription dynamics in somatic stem cells

## Abstract

Molecular noise is a natural phenomenon that is inherent to all biological systems [1,2]. How stochastic processes give rise to the robust outcomes that support tissue homeostasis remains unclear. Here we use single-molecule RNA fluorescent in situ hybridization (smFISH) on mouse stem cells derived from haematopoietic tissue to measure the transcription dynamics of three key genes that encode transcription factors: PU.1 (also known as Spi1), Gata1 and Gata2. We find that infrequent, stochastic bursts of transcription result in the co-expression of these antagonistic transcription factors in the majority of haematopoietic stem and progenitor cells. Moreover, by pairing smFISH with time-lapse microscopy and the analysis of pedigrees, we find that although individual stem-cell clones produce descendants that are in transcriptionally related states—akin to a transcriptional priming phenomenon—the underlying transition dynamics between states are best captured by stochastic and reversible models. As such, a stochastic process can produce cellular behaviours that may be incorrectly inferred to have arisen from deterministic dynamics. We propose a model whereby the intrinsic stochasticity of gene expression facilitates, rather than impedes, the concomitant maintenance of transcriptional plasticity and stem cell robustness.

## Data availability

All source data used to generate figures are available within the manuscript files or at the GitHub repository (https://github.com/justincwheat/Single-Molecule-Imaging-of-Transcription-Dynamics-in-Somatic-Stem-Cells) associated with this manuscript. Further information and reasonable requests for resources, reagents and data should be directed to the corresponding author. For data used for generating figures related to kin correlation analysis or simulations (Figs. 2, 4, Extended Data Figs. 8 and 9), separate .mat files have been provided as Supplementary Data 1 and also uploaded to the GitHub repository listed above, or are generated upon running the associated scripts. All data are available from the corresponding author upon reasonable request. Source data are provided with this paper.

## Code availability

Software written for parameter estimation and stochastic simulations is provided in Supplementary Data 2 (FSP.m, getKLD.m, GSSA.m). Software relevant for Figs.
3 and 4 can also be found in Supplementary Data 2: the code for KCA (KCA.m), generating 3-cell frequency matrices (ThreePtFreqs.m), testing different molecular cutoffs (KCA_thresholdtesting.mlx), and calculating time spent in each state (GenerateAllTrees.m). Data structures for each colony are also provided (Colony[#].mat). All scripts and data files have also been published in a publicly available repository at https://github.com/justincwheat/Single-Molecule-Imaging-of-Transcription-Dynamics-in-Somatic-Stem-Cells. All software generated by other groups and used in this study is listed in Supplementary Table 7.

## References

1. Levsky, J. M. & Singer, R. H. Gene expression and the myth of the average cell. Trends Cell Biol. 13, 4–6 (2003).
2. Elowitz, M. B., Levine, A. J., Siggia, E. D. & Swain, P. S. Stochastic gene expression in a single cell. Science 297, 1183–1186 (2002).
3. Raser, J. M. & O'Shea, E. K. Control of stochasticity in eukaryotic gene expression. Science 304, 1811–1814 (2004).
4. Bar-Even, A. et al. Noise in protein expression scales with natural protein abundance. Nat. Genet. 38, 636–643 (2006).
5. Gandhi, S. J., Zenklusen, D., Lionnet, T. & Singer, R. H. Transcription of functionally related constitutive genes is not coordinated. Nat. Struct. Mol. Biol. 18, 27–34 (2011).
6. Huh, D. & Paulsson, J. Random partitioning of molecules at cell division. Proc. Natl Acad. Sci. USA 108, 15004–15009 (2011).
7. Lestas, I., Vinnicombe, G. & Paulsson, J. Fundamental limits on the suppression of molecular fluctuations. Nature 467, 174–178 (2010).
8. Olsson, A. et al. Single-cell analysis of mixed-lineage states leading to a binary cell fate choice. Nature 537, 698–702 (2016).
9. Tusi, B. K. et al. Population snapshots predict early haematopoietic and erythroid hierarchies. Nature 555, 54–60 (2018).
10. Femino, A. M., Fay, F. S., Fogarty, K. & Singer, R. H. Visualization of single RNA transcripts in situ. Science 280, 585–590 (1998).
11. Torre, E. et al. Rare cell detection by single-cell RNA sequencing as guided by single-molecule RNA FISH. Cell Syst. 6, 171–179.e5 (2018).
12. Chen, K. H., Boettiger, A. N., Moffitt, J. R., Wang, S. & Zhuang, X. Spatially resolved, highly multiplexed RNA profiling in single cells. Science 348, aaa6090 (2015).
13. Tsanov, N. et al. smiFISH and FISH-quant – a flexible single RNA detection approach with super-resolution capability. Nucleic Acids Res. 44, e165 (2016).
14. Chen, H. M., Pahl, H. L., Scheibe, R. J., Zhang, D. E. & Tenen, D. G. The Sp1 transcription factor binds the CD11b promoter specifically in myeloid cells in vivo and is essential for myeloid-specific promoter activity. J. Biol. Chem. 268, 8230–8239 (1993).
15. Koschmieder, S., Rosenbauer, F., Steidl, U., Owens, B. M. & Tenen, D. G. Role of transcription factors C/EBPα and PU.1 in normal hematopoiesis and leukemia. Int. J. Hematol. 81, 368–377 (2005).
16. Rekhtman, N., Radparvar, F., Evans, T. & Skoultchi, A. I. Direct interaction of hematopoietic transcription factors PU.1 and GATA-1: functional antagonism in erythroid cells. Genes Dev. 13, 1398–1411 (1999).
17. Zhang, P. et al. PU.1 inhibits GATA-1 function and erythroid differentiation by blocking GATA-1 DNA binding. Blood 96, 2641–2648 (2000).
18. Rosenbauer, F. et al. Acute myeloid leukemia induced by graded reduction of a lineage-specific transcription factor, PU.1. Nat. Genet. 36, 624–630 (2004).
19. Steidl, U. et al. Essential role of Jun family transcription factors in PU.1 knockdown-induced leukemic stem cells. Nat. Genet. 38, 1269–1277 (2006).
20. Will, B. et al. Minimal PU.1 reduction induces a preleukemic state and promotes development of acute myeloid leukemia. Nat. Med. 21, 1172–1181 (2015).
21. Skinner, S. O. et al. Single-cell analysis of transcription kinetics across the cell cycle. eLife 5, e12175 (2016).
22. Giladi, A. et al. Single-cell characterization of haematopoietic progenitors and their trajectories in homeostasis and perturbed haematopoiesis. Nat. Cell Biol. 20, 836–846 (2018).
23. Paul, F. et al. Transcriptional heterogeneity and lineage commitment in myeloid progenitors. Cell 163, 1663–1677 (2015).
24. Nestorowa, S. et al. A single-cell resolution map of mouse hematopoietic stem and progenitor cell differentiation. Blood 128, e20–e31 (2016).
25. Tabula Muris Consortium. Single-cell transcriptomics of 20 mouse organs creates a Tabula Muris. Nature 562, 367–372 (2018).
26. Chou, S. T. et al. Graded repression of PU.1/Sfpi1 gene transcription by GATA factors regulates hematopoietic cell fate. Blood 114, 983–994 (2009).
27. Doré, L. C., Chlon, T. M., Brown, C. D., White, K. P. & Crispino, J. D. Chromatin occupancy analysis reveals genome-wide GATA factor switching during hematopoiesis. Blood 119, 3724–3733 (2012).
28. Grass, J. A. et al. GATA-1-dependent transcriptional repression of GATA-2 via disruption of positive autoregulation and domain-wide chromatin remodeling. Proc. Natl Acad. Sci. USA 100, 8811–8816 (2003).
29. Singer, Z. S. et al. Dynamic heterogeneity and DNA methylation in embryonic stem cells. Mol. Cell 55, 319–331 (2014).
30. Gillespie, D. T. A general method of numerically simulating the stochastic time evolution of coupled chemical reactions. J. Comput. Phys. 22, 403–434 (1976).
31. Haghverdi, L., Büttner, M., Wolf, F. A., Buettner, F. & Theis, F. J. Diffusion pseudotime robustly reconstructs lineage branching. Nat. Methods 13, 845–848 (2016).
32. Hoppe, P. S. et al. Early myeloid lineage choice is not initiated by random PU.1 to GATA1 protein ratios. Nature 535, 299–302 (2016).
33. Buggenthin, F. et al. Prospective identification of hematopoietic lineage choice by deep learning. Nat. Methods 14, 403–406 (2017).
34. Strasser, M. K. et al. Lineage marker synchrony in hematopoietic genealogies refutes the PU.1/GATA1 toggle switch paradigm. Nat. Commun. 9, 2697 (2018).
35. Arinobu, Y. et al. Reciprocal activation of GATA-1 and PU.1 marks initial specification of hematopoietic stem cells into myeloerythroid and myelolymphoid lineages. Cell Stem Cell 1, 416–427 (2007).
36. Laslo, P. et al. Multilineage transcriptional priming and determination of alternate hematopoietic cell fates. Cell 126, 755–766 (2006).
37. Hormoz, S. et al. Inferring cell-state transition dynamics from lineage trees and endpoint single-cell measurements. Cell Syst. 3, 419–433.e8 (2016).
38. Loeffler, D. et al. Mouse and human HSPC immobilization in liquid culture by CD43- or CD44-antibody coating. Blood 131, 1425–1429 (2018).
39. La Manno, G. et al. RNA velocity of single cells. Nature 560, 494–498 (2018).
40. Hilsenbeck, O. et al. Software tools for single-cell tracking and quantification of cellular and molecular properties. Nat. Biotechnol. 34, 703–706 (2016).

## Acknowledgements

We thank D. Shechter, K. Gritsman, R. Coleman, J. Biswas, E. Tutucci, M. V. Ugalde and R. Pisczcatowski for discussions; F. Mueller for assistance with FISH-QUANT; M. Elowitz and S. Hormoz for the scripts used for KCA; M. Lopez-Jones for assistance in probe design; D. Loeffler and T.
Schroeder for input on time-lapse imaging of HSC; and D. Sun for assistance with flow cytometry and cell sorting. R.H.S. is a senior fellow of the Howard Hughes Medical Institute. A.B. is an external professor of the Santa Fe Institute. This research was supported by the Ruth L. Kirschstein National Research Service Award F30GM122308-03 and MSTP training grant T32GM007288-43 to J.C.W., U01DA047729 to R.H.S. and R01CA217092 to U.S. U.S. was supported as a Research Scholar of the Leukemia and Lymphoma Society and is the Diane and Arthur B. Belfer Faculty Scholar in Cancer Research of the Albert Einstein College of Medicine. This work was supported through the Albert Einstein Cancer Center core support grant (P30CA013330), and the Stem Cell Isolation and Xenotransplantation Core Facility (NYSTEM grant #C029154) of the Ruth L. and David S. Gottesman Institute for Stem Cell Research and Regenerative Medicine.

## Author information

### Contributions

J.C.W., U.S. and R.H.S. conceptualized the study and designed experiments. J.C.W., A.B. and Y.S. conceptualized mathematical models. J.C.W. performed all experiments and generated all data in the manuscript. J.C.W. performed the mRNA analyses, transcriptional parameter fitting, stochastic simulations, scRNA-seq analyses, and kinship analyses. M.W. provided essential scripts for scRNA-seq analyses. Y.S. and A.B. developed the analyses related to the history of state transitions conditional on pedigree structure. J.C.W. wrote the manuscript and generated all figures and data visualizations. J.C.W., U.S., R.H.S., A.I.S., Y.S., A.B. and M.W. reviewed and edited the manuscript.

### Corresponding author

Correspondence to Ulrich Steidl.

## Ethics declarations

### Competing interests

The authors declare no competing interests.

Peer review information: Nature thanks Thomas Gregor, Ellen Rothenberg and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Extended data figures and tables

### Extended Data Fig. 1 Transcriptional dynamics of genes conditional on PU.1 state.

a, b, Cumulative distribution function (CDF) of spot intensity (a) and histogram of signal-to-noise ratio (SNR) of spot intensity to local background intensity (b) are shown for all spots that passed intensity and 3D point-spread function (PSF) fit thresholding in FISH-QUANT. c, Probability densities for fluorescence (corresponding to mRNA molecules) in HPC-7 cells for Cy3-, Alexa Fluor 594- and Cy5-labelled readout probes. Insets are XY and XZ average PSFs for each fluorophore. The overlaid line is the fit to a Gaussian distribution. More than 10,000 spots were obtained per fluorophore. d, Representative images of three-colour smFISH for PU.1 (Cy5, red), Gata2 (Cy3, white) and Gata1 (AF594, green) in HPC-7 cells. Scale bar, 5 μm. e, Bivariate distributions of Gata1 and Gata2 (left), Gata2 and PU.1 (middle) and PU.1 and Gata1 (right) in two independent experiments (n > 400 cells per experiment) with HPC-7 cells. f, Representative images of multiplexed smFISH between PU.1 and eight other haematopoietic genes in Kit+Lin− bone marrow from wild-type mice (n = 258–2,488 cells for each gene, derived from a single experiment; scale bar, 5 μm). g, Probability distribution for PU.1 mRNA per cell in KL cells from bone marrow from wild-type mice.
Overlaid are the high (red) and low (blue) components of the two-component negative binomial distribution fitted to the data. h, Comparison of PU.1 bursting kinetics between high and low states. Left, representative images from smFISH for PU.1 with a single, large transcription site in the nucleus. Middle, frequency of cells with the indicated number of active PU.1 transcription sites. Right, frequency distribution of summed nascent mRNA per cell in each PU.1 state. i, Schematic demonstrating a hypothetical transcriptional phase portrait. j, Phase portraits for each gene based on the PU.1 state of the cell.

### Extended Data Fig. 2 Comparative analysis of smFISH and scRNA-seq.

a, CDF plots of mRNA per cell for five scRNA-seq datasets and smFISH. Data are normalized to the maximum count for each gene in each dataset. b, Calculated Gini index for seven transcription factor mRNAs in each scRNA-seq dataset (white through to black) and smFISH (red). c, CDF plots of Gini indices for all five scRNA-seq datasets (see Supplementary Table 2 for the gene list). d, Schematic of hierarchical clustering followed by random forest classification to identify important variables for cluster assignment. e, Variable importance plotted against Gini index for four scRNA-seq datasets. The bottom and right panels show marginal distributions of Gini index and variable importance, respectively. f, Plot of average mutual information (top) or average absolute value of the Pearson's correlation coefficient (bottom) versus normalized abundance of n = 200 randomly selected genes against all other genes in the dataset. The r values listed are the correlation coefficients. See Supplementary Discussion for further details on the analyses performed.

### Extended Data Fig. 3 Summary statistics of mRNA copy number for primary KL.

a, Representative images of CMPs, GMPs and MEPs stained by smFISH for PU.1, Gata1 and Gata2. Scale bars, 5 μm. Arrows point to CMPs co-expressing all three mRNAs. b, Boxplots of mRNA count per cell, overlaid with single-cell mRNA values (dots). The pink box is the 95% confidence interval, the red line is the mean expression, the grey box is ±s.e.m. c, Table of summary statistics for each gene. Data for a–c are derived from two experiments (CMPs and MEPs) or a single experiment (GMPs). The sample size is listed in c.

### Extended Data Fig. 4 Spot detection in FISH-QUANT and spot calling in T lymphocytes.

a, b, Comparison of raw (a) and filtered (b) smFISH images from CMPs (representative of more than 2 experiments in CMPs; spot quality is consistent with all reported experiments in this manuscript). The insets show line intensity plots; the white line on the cells indicates from where the plots were obtained. Scale bars, 10 μm. c, Average PSF in XY (left columns) and XZ (right columns) for each gene from all detected spots from the CMPs dataset. d, e, Empirical (left) versus theoretical (middle) PSF and residuals (right) in the XY (d) and XZ (e) planes. f, CDFs for all spots passing the initial intensity thresholding for filtered intensity (top row), squared residuals (second row) and width of spots in X, Y, and Z in nanometres (third to fifth rows, respectively). Spots are separated on the basis of those arising from cells with more than five copies of mRNA per cell, between two and five copies per cell, and one copy per cell. Discarded spots that failed 3D fitting are shown in orange. g, mRNA detection in primary CD4+CD8+ thymocytes (n = 136 for Gata1, n = 154 for PU.1).

### Extended Data Fig.
5 Gating strategy to assign CMPs to states.

a, Representative images of CMPs in different states. Scale bar, 10 μm. b, Gating scheme for assigning CMPs to transcriptional states. See Supplementary Discussion for details on the gating strategy. The t-SNE plot demonstrates the proximity of states to one another and to immunophenotypic GMPs and MEPs. Images and analyses are derived from the experimental datasets reported in Fig. 1 and Extended Data Fig. 3. c, Frequency distribution of transcriptional bursting for each gene in each transcriptional state. The x axis is the number of active alleles. d, Top, schematic of 'states' being the consequence of simple transcriptional noise around the LES state (left) versus truly separate transcriptional states (right) that require transition events (arrows). Bottom, time-dependent behaviour of simulated cells in a noise-only (grey) or state-transition (red) system, shown as a bivariate plot of Gata1 + Gata2 copy number against PU.1 copy number. T indicates the elapsed simulation time as a fraction of the final time. e, f, Gillespie simulations of state transitions, modulating half-life alone. If a transition to another state occurs by noise alone, the cell changes the mRNA half-life of only the mRNA defining that state. e, f, Endpoint states reached in the simulations (n = 10,000) (e) and 1,000 representative simulation trajectories (f), colour-coded on the final endpoint state. Each panel is a different factor change in the mRNA half-life, with the far-left panel as the reference (that is, the half-lives used in Fig. 2), and the other panels showing 2× (second from left), 3× (second from right), and 4× (far right).

### Extended Data Fig. 6 Seventy-two-hour progeny of HSCs.

a, Representative images of HSC progeny. PU.1, red; Gata2, cyan; Gata1, yellow. Transcription sites are demarcated with boxes. Full arrows indicate triple-positive cells, and the arrowhead marks a megakaryocyte. Representative images from two separate experiments. b, CDFs for mRNA counts per HSC progeny. The number of cells with ≥1 mRNA per cell is indicated. Two separate experiments, with n values indicated on the graphs. c, Bivariate distributions of PU.1 versus Gata1 (left) and PU.1 versus Gata2 (right).

### Extended Data Fig. 7 State assignments for HSC progeny.

a, Gating strategy. Left, removal of megakaryocytes occurs first. Right, cells with more than 10 copies of Gata1 are assigned to G1/2H, whereas cells with more than 150 copies of PU.1 are assigned to macrophage. b, Probability density distributions for PU.1 (left) and Gata2 (right) with overlaid fits for a two-component negative binomial distribution among cells after removing megakaryocytes, G1/2H and macrophages. c, Bivariate distribution of the same cells. Contrary to the case in CMPs, the population of Gata2-high PU.1-high HSC progeny all had morphological characteristics similar to the macrophage-like cells seen in the GMP datasets, which were also Gata2-high PU.1-high (see Extended Data Fig. 3). As such, all cells for which 75 < PU.1 < 150 were assigned to P1H. d, Probability distribution for Gata2 in the remaining cells, fit with a two-component negative binomial. e, A distribution such as that in d cannot be definitively separated into high and low components owing to overlap in the distributions; therefore, cells are assigned probabilistically during KCA to the G2H or LES state in order to correct for false transitions arising from uncertainty in the assignment.
See Supplementary Discussion for more details on the rationale and implementation of probabilistic gating.

### Extended Data Fig. 8 HSC colony data.

a, Endpoint cells are the leaves on each pedigree. Note that edge lengths are not scaled to the time between divisions, and all endpoint cells are 96 h from the start of the experiment. Cells are colour-coded according to the colour scheme used throughout the manuscript. Megakaryocytes are labelled in orange. Nodes (cells) observed upstream of the endpoint (that is, no transcriptional data are available) are coloured black. b, Histogram of the number of progeny from a single HSC. c–e, Proliferation phenotypes of cells based on endpoint state identity (P1H, n = 137; LES, n = 1,571; G1/2H, n = 81; G2H, n = 166). Cell lifetimes in e are the time interval between cell birth (last division) and the next cell division or cell death. Violin plots are normalized to area, with the centre box-and-whisker plots showing the mean (line), standard deviation (box) and 95% confidence interval (whiskers). In e, single dots represent outliers in the 99th percentile.

### Extended Data Fig. 9 Robustness of inferred transition matrix to mRNA threshold.

a, Normalized deviation in the inferred transition matrices for each indicated threshold (n = 200 bootstrapping iterations) of Gata1 mRNA per cell relative to the reference matrix reported in this manuscript (cutoff = 10 mRNA per cell). The reference matrix is boxed. For any given transition (that is, matrix entry), the initial states are the columns and the final states are the rows. The colour code is the same as is used elsewhere in the manuscript. b, As in a for PU.1 (cutoff in manuscript = 75 mRNA per cell). c, Frobenius distance $$\sqrt{\sum_{i,j}\left(T_{ij}^{\mathrm{ref}}-T_{ij}^{\mathrm{test}}\right)^{2}}$$ between each matrix and the reference transition matrix (a short code sketch of this metric appears at the end of this article). The solid black line indicates the background Frobenius distance derived from statistical uncertainty in the reference transition matrix, obtained by bootstrapping through the analysis n = 1,000 times and picking random transition rates from a Gaussian distribution defined by the inferred mean and standard deviation of the transition matrix. Frobenius distance values above this line differ significantly from the matrix reported in the manuscript.

### Extended Data Fig. 10 Analysis of mRNA partitioning errors.

a, Representative image of a CMP in late anaphase. b, mRNA copy number in each sister cell in CMPs (n = 52) and HSCs (n = 46). r is the Pearson's correlation coefficient for sister-cell mRNA copy number; the red dashed line is y = x. c, Correlation of mRNA levels between HSCs that divided within the last 1 h (n = 171). Pearson's correlation coefficients (r) for each gene are listed.

## Supplementary information

### Supplementary Information

This file contains Supplementary Methods Sections 1-6, Supplementary Discussion Sections 1-3, Supplementary Figures 1-2 and Supplementary References.

### Supplementary Table 1

Oligonucleotide sequences for smFISH probes. For genes detected with two-step smFISH, the appropriate readout probes are listed.

### Supplementary Table 2

Gene lists for scRNAseq analyses. Gene names for all gene sets analyzed in each scRNAseq dataset.

### Supplementary Table 3

GO terms (ranked by enrichment score) and top decile of genes by VI for scRNAseq analysis. Top decile of genes ranked by variable importance from each scRNAseq dataset analyzed. Associated GO terms, ranked by enrichment score, for those top decile genes.
### Supplementary Table 4

Inferred transcriptional parameters for CMP data. kon: probability of the gene turning on; koff: probability of an ON gene turning off; kini: while in the ON state, probability of an RNA polymerase II (RNAP2) initiation event; kd: decay rate of the mRNA.

### Supplementary Table 5

List of antibodies used in this study.

### Supplementary Table 6

Reagents used in the experiments in this study.

### Supplementary Data 1

This zipped file includes: ColonyIDs.mat - Names of all colonies for KCA; Bdry.mat - Number of cells in each colony; Colony_[1:117].mat - 117 data frames containing all data necessary to perform KCA; KLdatamatrix.mat - KL progenitor data; KCA_datamatrix.mat - Collated data matrix of all cells used in KCA.

### Supplementary Data 2

This zipped file includes: FSP.m - Use this software for parameter inference based on best fit to the burst frequency and mRNA count/cell; getKLD.m - Used by FSP.m; markovBackTrace.m - Can be used to determine the number of visits to state j given a cell is in state i at time t; GSSA.m - Stochastic simulations; treeBackTrace2.m - Used by GenerateAllTrees.m; GenerateAllTrees.m - Generates pedigree maps of all colonies and also calculates time spent in each state; KCA.m - Generates frequency of states conditional on the presence of a state in the colony and runs the inference for transition probabilities; ThreePtFreqs.m - 3-cell state frequency test; KCA_thresholdtesting.mlx - Used for testing different mRNA cutoff values for KCA and comparing to the reference matrix.

Wheat, J.C., Sella, Y., Willcockson, M. et al. Single-molecule imaging of transcription dynamics in somatic stem cells. Nature 583, 431–436 (2020). https://doi.org/10.1038/s41586-020-2432-4
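As a side note, the Frobenius distance used in Extended Data Fig. 9 is straightforward to compute. Below is a minimal NumPy sketch (my own illustration, not the authors' code; the matrices are hypothetical stand-ins, not the paper's data):

```python
import numpy as np

def frobenius_distance(T_ref, T_test):
    """Frobenius distance between two transition matrices:
    sqrt(sum_ij (T_ref[i, j] - T_test[i, j])**2)."""
    return np.sqrt(np.sum((T_ref - T_test) ** 2))

# Hypothetical 4-state transition matrices (columns = initial state,
# rows = final state, matching the convention in Extended Data Fig. 9).
T_ref = np.array([[0.90, 0.05, 0.02, 0.01],
                  [0.05, 0.90, 0.03, 0.01],
                  [0.03, 0.03, 0.90, 0.03],
                  [0.02, 0.02, 0.05, 0.95]])
# Perturb the reference slightly to mimic a matrix inferred at a
# different mRNA threshold.
T_test = T_ref + np.random.default_rng(0).normal(0, 0.01, T_ref.shape)

print(frobenius_distance(T_ref, T_test))
```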
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5985456705093384, "perplexity": 10942.602049053196}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764495001.99/warc/CC-MAIN-20230127164242-20230127194242-00404.warc.gz"}
http://clincancerres.aacrjournals.org/highwire/markup/107090/expansion?width=1000&height=500&iframe=true&postprocessors=highwire_tables%2Chighwire_reclass%2Chighwire_figures%2Chighwire_math%2Chighwire_inline_linked_media%2Chighwire_embed
Table 1. Performance of fixed ratio (1:1 and 1:2) and adaptive randomization trial designs without early stopping
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8586972951889038, "perplexity": 16083.528055055738}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583509960.34/warc/CC-MAIN-20181016010149-20181016031649-00480.warc.gz"}
https://puzzling.stackexchange.com/questions/14530/do-a-barrel-roll-i-e-a-euclidean-plane-rotation-puzzle/14683
# Do a barrel roll! (i.e. a Euclidean plane rotation puzzle)

One of my favorite Putnam problems, due to a slick solution. $R$ is at $(3, 4)$ on the Cartesian plane. To try to confuse $R$, the devious $S$ decides to rotate $R$ about the point $(1, 0)$ by $36^\circ$. $S$ then rotates $R$ by $36^\circ$ about the point $(2, 0)$, then $36^\circ$ about the point $(3, 0)$, then $(4, 0)$, etc., until finally rotating her $36^\circ$ about the point $(10, 0)$. Where does $R$ end up exactly, and why?

(Edit) Additional hint: Narmer and xnor have the correct solution below, but there is still a clever proof that it works which no one has found. If you're curious, it involves only very basic geometry, and doesn't require much more than putting a regular polygon in the right starting location.

• I assume all the rotations are in the same direction? May 8, 2015 at 11:57
• @psmears Yes, good point, that's needed. May 8, 2015 at 14:45

Place a regular decagon in the plane with one side going from (0,0) to (1,0). Attach the point (3,4) to it. Now roll the decagon along the x-axis. This has the same effect as the 36-degree turns. The decagon ends up in the same orientation, moved to the right by 10 units. The point is in the same location relative to the decagon, so it is at (13,4).

• Nice! See Problem B4 from 2004 to see the original source. May 12, 2015 at 3:14

There are $10$ points used by $S$, and for each of them $S$ rotates $R$ by $36°$; since $36°\times10 = 360°$, the rotations sum to a full circle. If the point of rotation had never changed, $R$ would end up exactly where it started. The point of rotation did change, but only along the $X$ axis, and the sequence of $X$ values used by $S$ forms an arithmetic progression. This means that the $Y$ coordinate of $R$ after the rotations will be unchanged at $4$. What happens along the $X$ axis, then? Adding $1$ to the $X$ coordinate of every rotation point shifts the final position of $R$ by the same amount along the $X$ axis, so after the ten rotations $R$ ends up at $(13, 4)$.

• What tool did you use to make the image? May 8, 2015 at 9:06
• GeoGebra May 8, 2015 at 9:06
• I guess your solution is correct, but I do not understand the reasoning. You say the Y coordinate doesn't change because the Y coordinates of the rotation points don't change. But if you take rotation points (10,0), (100,0), ..., (10000000000,0), then the resulting Y will not be the same, I guess. Or does it only work when moving the rotation point by a fixed distance every time? What is the criterion for this to work? May 8, 2015 at 9:15
• @Narmer Actually the X coordinates must form an arithmetic progression for this to work - for Y to be the same. – dmg May 8, 2015 at 9:22
• I guess my next question is: why does it work, and only work, when there is an arithmetic progression? But maybe that's something for math.stackexchange.com May 8, 2015 at 9:35

Let $R_k$ be the position of $R$ after $k$ rotations, represented as a complex number, starting at $R_0 =3+4i$. Let $\alpha=e^{2\pi i/10}$ be the complex number representing rotation by $36^\circ$. Then, rotating around the $k^{th}$ point, whose coordinate is $k+0i$, gives the relationship $$R_k-k =\alpha (R_{k-1} - k)$$ If we switch to coordinates relative to the current rotation point, $S_k=R_k-k$, the recursion becomes $$S_k = \alpha (S_{k-1}-1)$$ which is, intuitively, shifting coordinates by $1$ due to switching rotation points, and then rotating.
Applying it ten times takes us back to the starting value because $$S_{10} = -(\alpha + \alpha^2 + \dots + \alpha^9 + \alpha^{10}) + \alpha^{10} S_0 = S_0,$$ using the fact that $\alpha^{10}=1$ and that the geometric sum $\alpha + \alpha^2 + \dots + \alpha^{10}$ of all tenth roots of unity is $0$. Substituting back for the $R$'s, we have $$R_{10} - 10 = R_0$$ so $R$ ends up ten spaces to the right of its starting point, at $(13,4)$.

• I didn't know this could be done with the geometric sum formula and complex numbers -- very cool. But despite this being elegant and correct, I might hold off accepting for a little bit since it isn't the intended solution. May 8, 2015 at 20:51
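For anyone who wants to check the algebra numerically, here is a minimal Python sketch (my own addition, not from the original thread); it simply composes the ten rotations using complex arithmetic:

```python
import cmath

def rotate(z, c, theta):
    """Rotate the complex point z about the center c by theta radians."""
    return c + (z - c) * cmath.exp(1j * theta)

R = complex(3, 4)
theta = 2 * cmath.pi / 10            # 36 degrees
for k in range(1, 11):               # rotation centers (1,0), (2,0), ..., (10,0)
    R = rotate(R, complex(k, 0), theta)

print(R)  # ~ (13+4j), up to floating-point rounding
```

The printed value agrees with both answers above: the net effect of the ten rotations is a translation by $10$ along the x-axis.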
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7711916565895081, "perplexity": 301.74502514295153}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104514861.81/warc/CC-MAIN-20220705053147-20220705083147-00435.warc.gz"}
http://math.stackexchange.com/questions/159013/finding-value-of-x-for-an-equation
# Finding the value of x for an equation

If we have an equation of the form $y=x^{nx+1}$, and we are given the values of $y$ and $n$, then how can one find $x$? I have reduced the equation to $\log(y)/\log(x)=nx+1$ but can't proceed further. Is there some kind of standard equation? Thanks

- If you know about differentiation then you should look up "Newton's Method." $x=2/n$ can't possibly be correct, even as an approximate answer; for one thing, it doesn't involve $y$. –  Gerry Myerson Jun 16 '12 at 22:15

A quick search on Google reveals the page http://mathforum.org/library/drmath/view/70483.html, where Doctor Vogler points out that the function $f(x) = x^x$ is not injective, since $$(\frac{1}{2})^{(1/2)} = (\frac{1}{4})^{(1/4)}.$$ However, he points out that it is possible to restrict the domain of the function so that it is injective. Nevertheless, I'm inclined to think that due to this observation, no notation may have been invented specifically for the inverse of this function, unlike other functions such as $f(x) = e^x$ which have inverses like $f^{-1}(x)=\ln(x)$.
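Following the comment's suggestion, here is a minimal Newton's method sketch in Python (my own illustration, not part of the original thread). It solves $f(x) = (nx+1)\log(x) - \log(y) = 0$ for $x>0$; since $x \mapsto x^{nx+1}$ need not be injective, which root you converge to can depend on the starting guess:

```python
import math

def solve_x(y, n, x0=1.5, tol=1e-12, max_iter=100):
    """Solve x**(n*x + 1) == y for x > 0 via Newton's method on
    f(x) = (n*x + 1)*log(x) - log(y)."""
    x = x0
    for _ in range(max_iter):
        f = (n * x + 1) * math.log(x) - math.log(y)
        fp = n * math.log(x) + (n * x + 1) / x   # f'(x)
        x_next = x - f / fp
        if x_next <= 0:          # keep the iterate inside the domain x > 0
            x_next = x / 2
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Example: with n = 1, x = 2 gives y = 2**3 = 8, and Newton recovers x:
print(solve_x(8.0, 1))  # ~2.0
```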
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9524912238121033, "perplexity": 150.73081992322108}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931011456.52/warc/CC-MAIN-20141125155651-00211-ip-10-235-23-156.ec2.internal.warc.gz"}
https://amathew.wordpress.com/2010/01/16/the-fourier-transform-the-heat-equation-and-fundamental-solutions/
I have now discussed what the Laplacian looks like in a general Riemannian manifold and can thus talk about the basic equations of mathematical physics in a more abstract context. Specifically, the key ones are the Laplace equation

$\displaystyle \Delta u = 0$

for ${u}$ a smooth function on a Riemannian manifold. Since ${\Delta = \mathrm{div} \mathrm{grad}}$, this often comes up when ${u}$ is the potential energy function of a field which is divergence free, e.g. in electromagnetism. The other major two are the heat equation

$\displaystyle u_t - \Delta u = 0$

for a smooth function ${u}$ on the product manifold ${\mathbb{R} \times M}$ for ${M}$ a Riemannian manifold, and the wave equation

$\displaystyle u_{tt} - \Delta u = 0$

in the same setting. (I don't know the physics behind these at all, but it's probably in any number of textbooks.) We are often interested in solving these equations given some kind of boundary data. In the case of the Laplace equation, this is called the Dirichlet problem. In 2 dimensions, for data given on a circle, the Dirichlet problem is solved using the Poisson integral, as already discussed. To go further, however, we would need to introduce the general theory of elliptic operators and Sobolev spaces. This will rely heavily on the material discussed earlier on the Fourier transform and distributions, and before plunging into it—if I do decide to plunge into it on this blog—I want to briefly discuss why Fourier transforms are so important in linear PDE. Specifically, I'll discuss the solution of the heat equation on a half space.

So, let's say that we want to treat the case of ${\mathbb{R}_{\geq 0} \times \mathbb{R}^n}$. In detail, we have a function ${u(x)=u(0,x)}$, continuous on ${\mathbb{R}^n}$. We want to extend ${u(0,x)}$ to a solution ${u(t,x)}$ of the heat equation which is continuous on ${\{0\} \times \mathbb{R}^n}$ and smooth on ${\mathbb{R}_+^{n+1}}$.

To start with, let's say that ${u(0,x) \in \mathcal{S}(\mathbb{R}^n)}$. The big idea is that by the Fourier inversion formula, we can get an equivalent equation if we apply the Fourier transform to both sides; this converts the inconvenience of differentiation into much simpler multiplication. The Fourier transform here is taken in the ${x}$ variable. So, assuming we have a solution ${u(t,x)}$ as above:

$\displaystyle \hat{u}_t = \widehat{\Delta u} = -4\pi^2 |x|^2 \hat{u}.$

Also, we know what ${\hat{u}(0,x)}$ looks like. So this is actually a linear ordinary differential equation in ${\hat{u}( \cdot, x)}$ for each fixed ${x}$, with initial condition ${\hat{u}(0,x)}$. The solution is unique, and it is given by

$\displaystyle \hat{u}(t,x) = e^{-4 \pi^2 |x|^2 t} \hat{u}(0,x).$

Now recall that multiplication on the Fourier transform level corresponds to convolution, and the Fourier transform of ${K(t,x) = (4 \pi t)^{-n/2} e^{- |x|^2/ (4 t)}}$ is ${e^{-4 \pi^2 |x|^2 t}}$. As a result, any putative solution ${u(t,x)}$ is determined by

$\displaystyle u(t,x) = (K(t, \cdot) \ast u(0, \cdot))(x) = (4 \pi t)^{-n/2} \int_{\mathbb{R}^n} e^{- |y-x|^2/ (4 t)} u(0, y) \, dy.$

So we have a candidate for a solution. Conversely, if the boundary data ${u(0, \cdot)}$ is merely in ${L^1(\mathbb{R}^n)}$, it is easy to check by differentiation under the integral (justified by the rapid decrease of the exponential) that this formula yields something satisfying the heat equation in the upper half-space.
Moreover ${||u(t, \cdot) - u(0, \cdot)||_{L^1} \rightarrow 0}$ as ${t \rightarrow 0}$ by general facts about approximations to the identity and a look at the definition of ${K(t, x)}$—note that ${K(\sqrt{t}, x)}$ is just the orthodox version of an approximation to the identity. So, we have found a way to solve the heat equation on ${\mathbb{R}^{n+1}}$.

It thus seems that the way to solve equations such as the heat equation is by convolution with appropriate kernels. In fact, this is more generally true of nonhomogeneous constant-coefficient linear PDE on ${\mathbb{R}^n}$ (we're forgetting about boundary value problems). Suppose we are given a partial differential operator ${P}$ with constant coefficients, i.e.

$\displaystyle Pf = \sum_{|a| \leq k} c_a D^a f ,$

where the ${a}$'s are multi-indices. Then it is immediate that ${P}$ extends to an operator on distributions. Moreover,

$\displaystyle \boxed{ P(\phi \ast f) = P\phi \ast f = \phi \ast Pf }$

whenever ${\phi}$ is a distribution and ${f \in \mathcal{S}}$. (This is clear whenever ${\phi \in \cal{S}}$; in general any distribution can be approximated in the weak* sense by smooth functions by convolving with an approximation to the identity.) As a result, if we have a fundamental solution ${\phi}$, i.e. one with

$\displaystyle P \phi = \delta$

we can get a solution to any equation of the form ${Pf = g}$ for ${g \in \mathcal{S}}$ by taking

$\displaystyle f = g \ast \phi,$

which is not only a distribution but also a polynomially increasing ${C^{\infty}}$ function. So we can solve any constant-coefficient PDE given a fundamental solution. There is a big theorem of Malgrange and Ehrenpreis that fundamental solutions always exist for constant-coefficient linear PDE. However, the above statement about solving PDEs can actually be proved in a more elementary fashion; perhaps this will be a future topic. For now, however, I want to show that the Gauss kernel ${K(t,x)}$ is actually a fundamental solution to the heat equation, once it is extended to ${\mathbb{R}^{n+1}}$ with ${K(t,x) \equiv 0}$ for ${t \leq 0}$. (This is no longer smooth, but it is still a distribution.) We need to show that

$\displaystyle \int_{\mathbb{R}^{n+1}} K(t,x) \left( - \frac{d}{dt} - \Delta \right)u(t,x) \, dt \, dx = u(0,0).$

Let's take the integral ${I_{\epsilon}}$ where ${t}$ is integrated over ${[\epsilon, \infty)}$; then by integration by parts

$\displaystyle I_{\epsilon} = \int_{\epsilon}^{\infty} \int_{\mathbb{R}^n} u \left( \frac{d}{dt} - \Delta \right) K \, dt \, dx + \int_{\mathbb{R}^n} K(\epsilon,x) u(\epsilon,x) \, dx.$

Since ${K}$ is a solution to the heat equation on ${\mathbb{R}^{n+1}_+}$ (e.g. look at the Fourier transform), it is the second integral that is nonzero. We can write this as

$\displaystyle \int_{\mathbb{R}^n} K(\epsilon, x)( u(\epsilon,x) - u(0,0)) \, dx + u(0,0)$

and it is easy to see (the same approximation-to-the-identity argument) that the former term tends to zero as ${\epsilon \rightarrow 0}$. So we indeed have a fundamental solution to the heat equation. It thus seems fair that we get solutions to it by convolving with the Gauss kernel.
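As a concrete illustration (my own addition, not part of the original post), the following NumPy sketch solves the one-dimensional heat equation by applying the Fourier multiplier ${e^{-4\pi^2 |\xi|^2 t}}$ derived above, and checks the result against the exact evolution of the Gaussian initial data ${u(0,x) = e^{-x^2}}$, which is ${(1+4t)^{-1/2} e^{-x^2/(1+4t)}}$:

```python
import numpy as np

# Grid on [-L, L); L is large enough that periodic wrap-around is negligible.
L, N = 40.0, 4096
x = np.linspace(-L, L, N, endpoint=False)
xi = np.fft.fftfreq(N, d=x[1] - x[0])   # frequencies in the e^{-2 pi i x xi} convention

t = 0.5
u0 = np.exp(-x**2)                       # Schwartz-class initial data

# Propagate: multiply the Fourier transform by exp(-4 pi^2 xi^2 t).
u_hat = np.fft.fft(u0) * np.exp(-4 * np.pi**2 * xi**2 * t)
u = np.fft.ifft(u_hat).real

# Exact solution for this initial datum (a convolution of two Gaussians).
u_exact = (1 + 4 * t) ** -0.5 * np.exp(-x**2 / (1 + 4 * t))

print(np.max(np.abs(u - u_exact)))       # close to machine precision
```

On a finite grid the FFT makes the convolution with the Gauss kernel computable in $O(N \log N)$; the multiplication in frequency space is exactly the content of the formula ${\hat{u}(t,x) = e^{-4\pi^2|x|^2 t}\hat{u}(0,x)}$ above.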
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 64, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9900575280189514, "perplexity": 102.67716744654454}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886118195.43/warc/CC-MAIN-20170823094122-20170823114122-00661.warc.gz"}
https://www.pubfacts.com/author/Changyu+Shen
# Publications by authors named "Changyu Shen"

337 Publications

### Development and validation of an echocardiographic algorithm to predict long-term mitral and tricuspid regurgitation progression.

Eur Heart J Cardiovasc Imaging 2021 Nov 29. Epub 2021 Nov 29.

Cardiovascular Division, Department of Medicine, Beth Israel Deaconess Medical Center, 375 Longwood Avenue, 4th floor, Boston, MA 02215, USA.

Aims: Prediction of mitral (MR) and tricuspid (TR) regurgitation progression on transthoracic echocardiography (TTE) is needed to personalize valvular surveillance intervals and prognostication.

Methods And Results: Structured TTE report data at Beth Israel Deaconess Medical Center, 26 January 2000-31 December 2017, were used to determine time to progression (≥1+ increase in severity). TTE predictors of progression were used to create a progression score, externally validated at Massachusetts General Hospital, 1 January 2002-31 December 2019. In the derivation sample (MR, N = 34,933; TR, N = 27,526), only 5,379 (15.4%) individuals with MR and 3,630 (13.2%) with TR had progression during a median (interquartile range) 9.0 (4.1-13.4) years of follow-up. Despite wide inter-individual variability in progression rates, a score based solely on demographics and TTE variables identified individuals with a five- to six-fold higher rate of MR/TR progression over 10 years (high- vs. low-score tertile, rate of progression: MR 20.1% vs. 3.3%; TR 21.2% vs. 4.4%). Compared to those in the lowest score tertile, those in the highest tertile of progression had a four-fold increased risk of mortality. On external validation, the score demonstrated similar performance to other algorithms commonly in use.

Conclusion: Four-fifths of individuals had no progression of MR or TR over two decades. Despite wide inter-individual variability in progression rates, a score based solely on TTE parameters identified individuals with a five- to six-fold higher rate of MR/TR progression. Compared to the lowest tertile, individuals in the highest score tertile had a four-fold increased risk of mortality. Prediction of long-term MR/TR progression is not only feasible but prognostically important.

Source: http://dx.doi.org/10.1093/ehjci/jeab254

November 2021

### Hollow-porous fibers for intrinsically thermally insulating textiles and wearable electronics with ultrahigh working sensitivity.

Mater Horiz 2021 Mar 11;8(3):1037-1046. Epub 2021 Jan 11.

School of Materials Science and Engineering, Key Laboratory of Materials Processing and Mold (Zhengzhou University), Ministry of Education; Henan Key Laboratory of Advanced Nylon Materials and Application (Zhengzhou University), Zhengzhou University, Zhengzhou, 450001, P. R. China.

Wearable smart devices should be flexible and functional to imitate the warmth and sensing functions of human skin or animal fur. Despite the recent great progress in wearable smart devices, it is still challenging to achieve the required multi-functionality. Here, stretchable hollow-porous fibers with self-warming ability are designed, and the properties of electrical heating, strain sensing, temperature sensing and pressure sensing are achieved. The hollow-porous TPU fiber possesses an ultra-high stretchability (1468%), and the textiles woven from the fibers present a splendid thermal insulation property (the absolute value difference in temperature |ΔT| = 68.5 and 44 °C at extreme temperatures of 115 and -40.0 °C).
Importantly, after conductive filler decoration, the fiber-based strain sensor exhibits one of the highest reported gauge factors (2.3 × 10) towards 100% strain in 7200 working stretch-release cycles. A low detection limit of 0.5% strain is also achieved. Besides, the fibers can be heated to 40 °C in 18 s at a small voltage of 2 V as an electrical heater. The assembled thermal sensors can monitor the temperature from 30 to 90 °C in real time, and the fiber-based capacitive pressure sensor exhibits good sensing performance under forces from 1 to 25 N. The hollow-porous fiber based all-in-one integrated wearable systems illustrate promising prospects for next-generation electronic skins to detect human motions and body temperature with thermal therapy and inherent self-warming ability.

Source: http://dx.doi.org/10.1039/d0mh01818j

March 2021

### Variable-fiber optical power splitter design based on a triangular prism.

Appl Opt 2021 Oct;60(30):9390-9395

Fiber optical power splitters (OPSs) have been widely employed in optical communications, optical sensors, optical measurements, and optical fiber lasers. It has been found that OPSs with variable power ratios can simplify the structure and increase the flexibility of optical systems. In this study, a variable-fiber OPS based on a triangular prism is proposed and demonstrated. By adjusting the output beam width of the prism, the power ratio can be continuously tuned. The optical simulations show that the horizontal displacement design is better than the traditional tilt-angle design. Our scheme combines a dual-fiber collimator, a focus lens, and a triangular prism with a vertex angle of 120°. By changing the axial displacement of the prism, the power splitting ratio can be altered from 50:50 to 90:10. The polarization and wavelength dependence of the variable OPS were also investigated.

Source: http://dx.doi.org/10.1364/AO.437983

October 2021

### Deep-learning-assisted fiber Bragg grating interrogation by random speckles.

Opt Lett 2021 Nov;46(22):5711-5714

Fiber Bragg gratings (FBGs) have been widely employed as sensors for temperature, vibration, strain, etc. However, extant methods for FBG interrogation still face challenges in sensitivity, measurement speed, and cost. In this Letter, we introduced random speckles as the information carrier of the FBG's reflection spectrum for demodulation. Instead of the commonly used InGaAs cameras, a quadrant detector (QD) was first utilized to record the speckle patterns in the experiments. Although the speckle images were severely compressed into four channel signals by the QD, the spectral features of the FBGs could still be precisely extracted with the assistance of a deep convolutional neural network (CNN). Temperature and vibration experiments were demonstrated with a resolution of 1.2 pm. These results show that the new, to the best of our knowledge, speckle-based demodulation scheme can satisfy the requirements of both high-resolution and high-speed measurements, which should pave a new way for optical fiber sensors.

Source: http://dx.doi.org/10.1364/OL.445159

November 2021

### Raman-scattering-assistant large energy dissipative soliton and multicolor coherent noise-like pulse complex in an Yb-doped fiber laser.
##### Authors: Shuo Chang, Yameng Zheng, Zhaokun Wang, Changyu Shen

Opt Lett 2021 Nov;46(22):5695-5698

In this Letter, we have demonstrated the generation of dissipative solitons (DSs) or multi-wavelength noise-like pulses (NLPs) directly from a common linear Yb-doped fiber laser in the presence of stimulated Raman scattering (SRS). For the DSs, the pulse energy of the solitons with a pulse width of 74.2 ps reaches 21.2 nJ. For the NLPs, the generation of the main NLP (1032 nm) together with the first-order Raman NLP (1080 nm) is realized. The narrow peak of the double-scale autocorrelation trace is characterized by quasi-periodic beat pulses with a pulse beating of 40.6 fs and a pulse separation of 79 fs, indicating that the generated solitons at dual wavelengths are mutually coherent. Furthermore, a three-color stable NLP complex with a broader spectrum is also obtained. The results contribute to an in-depth understanding of nonlinear dynamics and ultrafast physics.

Source: http://dx.doi.org/10.1364/OL.443319

November 2021

### The Role of Frailty in Identifying Benefit from Transcatheter Versus Surgical Aortic Valve Replacement.

Circ Cardiovasc Qual Outcomes 2021 Nov 15. Epub 2021 Nov 15.

Department of Medicine, Cardiovascular Division, Beth Israel Deaconess Medical Center, Boston, MA; Richard A. and Susan F. Smith Center for Outcomes Research in Cardiology, Beth Israel Deaconess Medical Center, Boston, MA; Harvard Medical School, Boston, MA.

Frailty is associated with a higher risk for adverse outcomes after aortic valve replacement (AVR) for severe aortic valve stenosis, but whether or not frail patients derive differential benefit from transcatheter (TAVR) vs. surgical (SAVR) AVR is uncertain. We linked adults ≥65 years old in the US CoreValve High Risk (HiR) or Surgical or Transcatheter Aortic-Valve Replacement in Intermediate Risk Patients (SURTAVI) trial to Medicare claims, 2/2/2011-9/30/2015. Two frailty measures, a deficit-based (DFI) and phenotype-based (PFI) frailty index, were generated. The treatment effect of TAVR vs. SAVR was evaluated within frailty index (FI) tertiles for the primary endpoint of death and non-death secondary outcomes, using multivariable Cox regression. Of 1,442 (linkage rate = 60.0%) individuals included, 741 (51.4%) received TAVR and 701 (48.6%) received SAVR (mean age 81.8 ± 6.1 years, 44.0% female). Though 1-year death rates in the highest FI tertiles (DFI 36.7%, PFI 33.8%) were two- to three-fold higher than in the lowest tertiles (DFI 13.4%, HR 3.02, 95% CI 2.26-4.02, p < 0.001; PFI 17.9%, HR 2.05, 95% CI 1.58-2.67, p < 0.001), there were no significant differences in the relative or absolute treatment effect of SAVR vs. TAVR across FI tertiles for all death, non-death, and functional outcomes (all interaction p-values > 0.05). Results remained consistent across individual trials, frailty definitions, and when considering the non-linked trial data. Two different frailty indices based on Fried and Rockwood definitions identified individuals at higher risk of death and functional impairment but no differential benefit from TAVR vs. SAVR.

Source: http://dx.doi.org/10.1161/CIRCOUTCOMES.121.008566

November 2021

### Estimation of DAPT Study Treatment Effects in Contemporary Clinical Practice: Findings from the EXTEND-DAPT Study.

Circulation 2021 Nov 8. Epub 2021 Nov 8.

Richard A. and Susan F.
Smith Center for Outcomes Research in Cardiology, Division of Cardiovascular Medicine, Beth Israel Deaconess Medical Center, Boston MA; Cardiology Division, Department of Medicine, Massachusetts General Hospital, Harvard Medical School, Boston, MA; Baim Institute for Clinical Research, Boston, MA.

Differences in patient characteristics, changes in treatment algorithms, and advances in medical technology could each influence the applicability of older randomized trial results to contemporary clinical practice. The Dual Antiplatelet Therapy (DAPT) Study found that longer-duration DAPT decreased ischemic events at the expense of greater bleeding, but subsequent evolution in stent technology and clinical practice may attenuate the benefit of prolonged DAPT in a contemporary population. We evaluated whether the DAPT Study population is different from a contemporary population of US patients receiving percutaneous coronary intervention (PCI), and estimated the treatment effect of extended-duration antiplatelet therapy after PCI in this more contemporary cohort. We compared characteristics of drug-eluting stent (DES)-treated patients randomized in the DAPT Study to a sample of more contemporary DES-treated patients in the NCDR CathPCI Registry from July 2016-June 2017. After linking trial and registry data, we employed inverse odds of trial participation weighting to account for patient and procedural characteristics and estimated a contemporary "real-world" treatment effect of 30 vs. 12 months of DAPT after coronary stent procedures. The US DES-treated trial cohort included 8864 DAPT Study patients and the registry cohort included 568,540 patients. Compared to the trial population, registry patients had more comorbidities and were more likely to present with myocardial infarction and receive 2nd-generation DES. After reweighting trial results to represent the registry population, there was no longer a significant effect of prolonged DAPT on reducing stent thrombosis (reweighted treatment effect: -0.40%, 95% CI: -0.99%, 0.15%), major adverse cardiac and cerebrovascular events (reweighted treatment effect: -0.52%, 95% CI: -2.62%, 1.03%), or myocardial infarction (reweighted treatment effect: -0.97%, 95% CI: -2.75%, 0.18%), but the increase in bleeding with prolonged DAPT persisted (reweighted treatment effect: 2.42%, 95% CI: 0.79%, 3.91%). Differences between patients and devices used in contemporary clinical practice compared with the DAPT Study were associated with attenuation of benefits and greater harms attributable to prolonged DAPT duration. These findings limit the applicability of average treatment effects from the DAPT Study in modern clinical practice.

Source: http://dx.doi.org/10.1161/CIRCULATIONAHA.121.056878

November 2021

### Flexible layered cotton cellulose-based nanofibrous membranes for piezoelectric energy harvesting and self-powered sensing.

Carbohydr Polym 2022 Jan 16;275:118740. Epub 2021 Oct 16.

Key Laboratory of Materials Processing and Mold (Ministry of Education), National Engineering Research Center for Advanced Polymer Processing Technology, Zhengzhou University, Zhengzhou 450002, China.

Cellulose has attracted increasing attention for piezoelectric energy harvesting. However, the limited piezoelectricity of natural cellulose constrains its applications.
Therefore, we demonstrate the development of piezoelectric nanogenerators based on robust, durable layered membranes composed of cotton cellulose interfaced with maleic-anhydride-grafted polyvinylidene fluoride (PVDF-g-MA) nanofibers. Exploiting polydopamine-coated BaTiO3 (pBT) nanoparticles as interlayer bridges, interlocked layer-layer interfaces that covalently bind the component layers are constructed by a facile and scalable approach. The as-obtained membranes exhibit significantly improved piezoelectricity, with a maximum piezoelectric coefficient of 27.2 pC/N, power density of 1.72 μW/cm, and stability over 8000 cycles. The substantial enhancement in piezoelectricity over pristine cellulose is ascribed to the synergy of the components and the localized stress concentration induced by the pBT nanoparticles. The self-powered device could also be used to detect human physiological motions in different forms. Such cellulose-based membranes can be up-scaled to fabricate ecofriendly, flexible and durable energy harvesters and self-powered wearable sensors.

Source: http://dx.doi.org/10.1016/j.carbpol.2021.118740

January 2022

### Markedly improved hydrophobicity of cellulose film via a simple one-step aminosilane-assisted ball milling.

Carbohydr Polym 2022 Jan 24;275:118701. Epub 2021 Sep 24.

Key Laboratory of Materials Processing and Mold (Zhengzhou University), Ministry of Education, National Engineering Research Center for Advanced Polymer Processing Technology, Zhengzhou University, Zhengzhou, 450002, China.

Most cellulose products lack water resistance due to the existence of abundant hydroxyl groups. In this work, microfibrillated cellulose (MFC) was modified via 3-aminopropyltriethoxysilane (APTES)-assisted ball milling. Under the synergy between the high-energy mechanical force field and APTES modification, the fibrillation and hydrophobization of MFC were achieved simultaneously. Free-standing translucent cellulose films made of modified MFC were fabricated. The original crystal form of cellulose is maintained. The hydrophobicity of the cellulose film markedly increases and the water contact angle goes up to 133.2 ± 3.4°, which might be ascribed to the combined effects of APTES modification and the rough film surface. In addition, the thermostability and mechanical properties of the cellulose film are also improved via mechanochemical modification. This work provides a novel one-step fibrillation-hydrophobization method for cellulose.

Source: http://dx.doi.org/10.1016/j.carbpol.2021.118701

January 2022

### High-sensitivity optical fiber hydrogen sensor based on the metal organic framework UiO-66-NH2.

Opt Lett 2021 Nov;46(21):5405-5408

A hydrogen sensor with high sensitivity was demonstrated by coating the metal organic framework UiO-66-NH2 on an optical fiber Mach-Zehnder interferometer (MZI). The MZI was made of a fiber mismatch structure using a core-offset fusion splicing method. The effective refractive index of the UiO-66-NH2 film varied with the absorption and release of hydrogen, and the interference resonant dip wavelength and the intensity of the MZI changed with the hydrogen concentration. The experimental results showed that the proposed sensor had a high hydrogen sensitivity of 8.78 dB/% in the range from 0% to 0.8%, which is almost seven times higher than that of existing similar hydrogen sensors.
Source: http://dx.doi.org/10.1364/OL.443930 (November 2021)

### Flexible Conductive Polyimide Fiber/MXene Composite Film for Electromagnetic Interference Shielding and Joule Heating with Excellent Harsh Environment Tolerance.

ACS Appl Mater Interfaces 2021 Oct 15;13(42):50368-50380. Epub 2021 Oct 15. Key Laboratory of Materials Processing and Mold (Zhengzhou University), Ministry of Education; National Engineering Research Center for Advanced Polymer Processing Technology, Zhengzhou University, Zhengzhou, Henan 450002, China.

The development of flexible MXene-based multifunctional composites is becoming a hot research area in the effort to apply conductive MXene in wearable electronic instruments. Herein, a flexible conductive polyimide fiber (PIF)/MXene composite film with a densely stacked "rebar-brick-cement" lamellar structure is fabricated using a simple vacuum filtration plus thermal imidization technique. A water-soluble polyimide precursor, poly(amic acid), is applied to act as a binder and dispersant to ensure the homogeneous dispersion of MXene and its good interfacial adhesion with PIF after thermal imidization, resulting in excellent mechanical robustness and high conductivity (3787.9 S/m). Owing to reflection at the surface, absorption through conduction loss and interfacial/dipolar polarization loss inside the material, and the lamellar structure that is beneficial for multiple reflection and scattering between adjacent layers, the resultant PIF/MXene composite film exhibits a high electromagnetic interference (EMI) shielding effectiveness of 49.9 dB in the frequency range of 8.2-12.4 GHz. More importantly, its EMI shielding capacity is well maintained in various harsh environments (e.g., extreme high/low temperature, acid/salt solution, and long-term cyclic bending), showing excellent stability and durability. Furthermore, it also presents fast, stable, and long-term durable Joule heating performance based on its stable and excellent conductivity, demonstrating good thermal deicing effects under actual conditions. Therefore, we believe that the flexible conductive PIF/MXene composite film, with its excellent conductivity and harsh environment tolerance, possesses promising potential for electromagnetic wave protection and personal thermal management.

Source: http://dx.doi.org/10.1021/acsami.1c15467 (October 2021)

### Applicability of Transcatheter Aortic Valve Replacement Trials to Real-World Clinical Practice: Findings From EXTEND-CoreValve.

JACC Cardiovasc Interv 2021 Oct;14(19):2112-2123. Richard A. and Susan F. Smith Center for Outcomes Research in Cardiology, Division of Cardiovascular Medicine, Beth Israel Deaconess Medical Center, Boston, Massachusetts, USA.

Objectives: The aim of this study was to examine the applicability of pivotal transcatheter aortic valve replacement (TAVR) trials to the real-world population of Medicare patients undergoing TAVR. Background: It is unclear whether randomized controlled trial results of novel cardiovascular devices apply to patients encountered in clinical practice. Methods: Characteristics of patients enrolled in the U.S. CoreValve pivotal trials were compared with those of the population of Medicare beneficiaries who underwent TAVR in U.S. clinical practice between November 2, 2011, and December 31, 2017. Inverse probability weighting was used to reweight the trial cohort on the basis of Medicare patient characteristics, and a "real-world" treatment effect was estimated.
Results: A total of 2,026 patients underwent TAVR in the U.S. CoreValve pivotal trials, and 135,112 patients underwent TAVR in the Medicare cohort. Trial patients were mostly similar to real-world patients at baseline, though trial patients were more likely to have hypertension (50% vs 39%) and coagulopathy (25% vs 17%), whereas real-world patients were more likely to have congestive heart failure (75% vs 68%) and frailty. The estimated real-world treatment effect of TAVR was an 11.4% absolute reduction in death or stroke (95% CI: 7.50%-14.92%) and an 8.7% absolute reduction in death (95% CI: 5.20%-12.32%) at 1 year with TAVR compared with conventional therapy (surgical aortic valve replacement for intermediate- and high-risk patients and medical therapy for extreme-risk patients). Conclusions: The trial and real-world populations were mostly similar, with some notable differences. Nevertheless, the extrapolated real-world treatment effect was at least as high as the observed trial treatment effect, suggesting that the absolute benefit of TAVR in clinical trials is similar to the benefit of TAVR in the U.S. real-world setting.

Source: http://dx.doi.org/10.1016/j.jcin.2021.08.006 (October 2021)

### Race, sex and age disparities in echocardiography among Medicare beneficiaries in an integrated healthcare system.

Heart 2021 Oct 6. Epub 2021 Oct 6. Department of Medicine (Cardiovascular Division), Beth Israel Deaconess Medical Center, Boston, Massachusetts, USA.

Objective: To identify potential race, sex and age disparities in performance of transthoracic echocardiography (TTE) over several decades. Methods: TTE reports from five academic and community sites within a single integrated healthcare system were linked to 100% Medicare fee-for-service claims from 1 January 2005 to 31 December 2017. Multivariable Poisson regression was used to estimate adjusted rates of TTE utilisation after the index TTE according to baseline age, sex, race and comorbidities among individuals with ≥2 TTEs. Non-white race was defined as black, Asian, North American Native, Hispanic or other categories using Medicare-assigned race categories. Results: A total of 15 870 individuals (50.1% female, mean age 72.2±12.7 years) underwent a total of 63 535 TTEs (range 2-55/person) over a median (IQR) follow-up time of 4.9 (2.4-8.5) years. After the index TTE, the median TTE use was 0.72 TTEs/person/year (IQR 0.43-1.33; range 0.12-26.76). TTE use was lower in older individuals (relative risk (RR) for 10-year increase in age 0.91, 95% CI 0.89 to 0.92, p<0.001), women (RR 0.97, 95% CI 0.95 to 0.99, p<0.001) and non-white individuals (RR 0.95, 95% CI 0.93 to 0.97, p<0.001). Black women in particular had the lowest relative use of TTE (RR 0.92, 95% CI 0.88 to 0.95, p<0.001). The only clinical conditions associated with increased TTE use after multivariable adjustment were heart failure (RR 1.04, 95% CI 1.00 to 1.08, p=0.04) and chronic obstructive pulmonary disease (RR 1.05, 95% CI 1.00 to 1.10, p=0.04). Conclusions: Among Medicare beneficiaries with multiple TTEs in a single large healthcare system, the median TTE use after the index TTE was 0.72 TTEs/person/year, although this varied widely. Adjusted for comorbidities, female sex, non-white race and advancing age were associated with decreased TTE utilisation.

Source: http://dx.doi.org/10.1136/heartjnl-2021-319951 (October 2021)

### Strain-, curvature- and twist-independent temperature sensor based on a small air core hollow core fiber structure.
Opt Express 2021 Aug;29(17):26353-26365.

Cross-sensitivity (crosstalk) to multiple parameters is a serious but common issue for most sensors and can significantly decrease their usefulness and detection accuracy. In this work, a high sensitivity temperature sensor based on a small air core (10 µm) hollow core fiber (SACHCF) structure is proposed. Co-excitation of both anti-resonant reflecting optical waveguide (ARROW) and Mach-Zehnder interferometer (MZI) guiding mechanisms in transmission is demonstrated. It is found that the strain sensitivity of the proposed SACHCF structure is decreased by over one order of magnitude when a double phase condition (destructive condition of the MZI and resonant condition of the ARROW) is satisfied. In addition, due to its compact size and symmetrical configuration, the SACHCF structure shows ultra-low sensitivity to curvature and twist. Experimentally, a high temperature sensitivity of 31.6 pm/°C, an ultra-low strain sensitivity of -0.01 pm/µε, a curvature sensitivity of 18.25 pm/m, and a twist sensitivity of -22.55 pm/(rad/m) were demonstrated. The corresponding temperature cross-sensitivities to strain, curvature and twist are calculated to be -0.00032 °C/µε, 0.58 °C/m and 0.71 °C/(rad/m), respectively. These cross-sensitivities are one to two orders of magnitude lower than those of previously reported optical fiber temperature sensors. The proposed sensor shows great potential for use as a temperature sensor in practical applications where the influence of multiple environmental parameters cannot be eliminated.

Source: http://dx.doi.org/10.1364/OE.433580 (August 2021)

### Identification of Frailty Using a Claims-Based Frailty Index in the CoreValve Studies: Findings from the EXTEND-FRAILTY Study.

J Am Heart Assoc 2021 Oct 29;10(19):e022150. Epub 2021 Sep 29. Department of Medicine, Cardiovascular Division, Beth Israel Deaconess Medical Center, Boston, MA.

Background: In aortic valve disease, the relationship between claims-based frailty indices (CFIs) and validated measures of frailty constructed from in-person assessments is unclear but may be relevant for retrospective ascertainment of frailty status when otherwise unmeasured. Methods and Results: We linked adults aged ≥65 years in the US CoreValve Studies (linkage rate, 67%; mean age, 82.7±6.2 years; 43.1% women) to Medicare inpatient claims, 2011 to 2015. The Johns Hopkins CFI, validated on the basis of the Fried index, was generated for each study participant, and the association between CFI tertile and trial outcomes was evaluated as part of the EXTEND-FRAILTY substudy. Among 2357 participants (64.9% frail), higher CFI tertile was associated with greater impairments in nutrition, disability, cognition, and self-rated health. The primary outcome of all-cause mortality at 1 year occurred in 19.3%, 23.1%, and 31.3% of those in tertiles 1 to 3, respectively (tertile 2 versus 1: hazard ratio, 1.22; 95% CI, 0.98-1.51; P=0.07; tertile 3 versus 1: hazard ratio, 1.73; 95% CI, 1.41-2.12; P<0.001). Secondary outcomes (bleeding, major adverse cardiovascular and cerebrovascular events, and hospitalization) were more frequent with increasing CFI tertile, and these associations persisted despite adjustment for age, sex, New York Heart Association class, and Society of Thoracic Surgeons risk score.
Conclusions: In linked Medicare and CoreValve study data, a CFI based on the Fried index consistently identified individuals with worse impairments in frailty, disability, cognitive dysfunction, and nutrition, and a higher risk of death, hospitalization, bleeding, and major adverse cardiovascular and cerebrovascular events, independent of age and risk category. While not a surrogate for validated metrics of frailty based on in-person assessments, using this CFI to ascertain frailty status among patients with aortic valve disease may provide valid and prognostically relevant information when frailty is otherwise not measured.

Source: http://dx.doi.org/10.1161/JAHA.121.022150 (October 2021)

### Flexible Ag Microparticle/MXene-Based Film for Energy Harvesting.

Nanomicro Lett 2021 Sep 24;13(1):201. Epub 2021 Sep 24. College of Materials Science and Engineering, Key Laboratory of Advanced Material Processing & Mold (Ministry of Education), National Engineering Research Center for Advanced Polymer Processing Technology, Zhengzhou University, Zhengzhou, 450002, People's Republic of China.

Ultra-thin flexible films have attracted wide attention because of their excellent ductility and potential versatility. In particular, energy-harvesting films (EHFs) have become a research hotspot because a power source is indispensable in various devices. However, the design and fabrication of such films that can capture or transform different types of energy from the environment for multiple uses remains a challenge. Herein, multifunctional flexible EHFs with effective electro-/photo-thermal abilities are proposed, prepared by successively spraying Ag microparticles and MXene suspension onto waterborne polyurethane films, followed by hot-pressing. The optimal coherent film exhibits a high electrical conductivity (1.17×10 S m), excellent Joule heating performance (121.3 °C at 2 V), and outstanding photo-thermal performance (66.2 °C within 70 s under 100 mW cm). In addition, the EHF-based single-electrode triboelectric nanogenerators (TENGs) give a short-circuit transferred charge of 38.9 nC, an open-circuit voltage of 114.7 V, and a short-circuit current of 0.82 μA. More interestingly, the output voltage of the TENG can be further increased by constructing double triboelectrification layers. The comprehensive ability of the EHFs to harvest various energies promises their potential to satisfy the corresponding requirements.

Source: http://dx.doi.org/10.1007/s40820-021-00729-w | PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8463646 (September 2021)

### Bioinspired Multifunctional Photonic-Electronic Smart Skin for Ultrasensitive Health Monitoring, for Visual and Self-Powered Sensing.

Adv Mater 2021 Nov 23;33(45):e2102332. Epub 2021 Sep 23. School of Materials Science and Engineering, Key Laboratory of Materials Processing and Mold, Ministry of Education, Zhengzhou University, Zhengzhou, 450001, P. R. China.

Smart skin is highly desired to be ultrasensitive and self-powered as the medium of artificial intelligence. Here, an ultrasensitive self-powered mechanoluminescence smart skin (SPMSS), inspired by the luminescence mechanism of cephalopod skin and the ultrasensitive response of the spider slit organ, is developed.
Benefitting from the unique strain-dependent microcrack structure design based on the Ti3C2Tx (MXene)/carbon nanotube synergistic interaction, the SPMSS possesses excellent strain sensing performance, including an ultralow detection limit (0.001% strain), ultrahigh sensitivity (gauge factor, GF = 3.92 × 10), ultrafast response time (5 ms), and superior durability and stability (>45,000 cycles). Synchronously, the SPMSS exhibits tunable and highly sensitive mechanoluminescence (ML) features under stretching. A relationship between the ML features, the strain sensing performance, and the deformation has been established successfully. Importantly, the SPMSS demonstrates excellent properties as a triboelectric nanogenerator (4 × 4 cm²), including ultrahigh triboelectric output (open-circuit voltage V_oc = 540 V, short-circuit current I_sc = 42 µA, short-circuit charge Q_sc = 317 nC) and power density (7.42 W m⁻²), endowing the smart skin with a reliable power supply and self-powered sensing ability. This bioinspired smart skin exhibits multifunctional applications in health monitoring, visual sensing, and self-powered sensing, showing great potential in artificial intelligence.

Source: November 2021

### Effect of intensive versus limited monitoring on clinical trial conduct and outcomes: A randomized trial.

Am Heart J 2022 Jan 14;243:77-86. Epub 2021 Sep 14. Richard A. and Susan F. Smith Center for Outcomes Research in Cardiology, Division of Cardiovascular Medicine, Beth Israel Deaconess Medical Center, Boston, MA; Harvard Medical School, Boston, MA.

Background: Regulatory agencies have endorsed more limited approaches to clinical trial site monitoring. However, the impact of different monitoring strategies on trial conduct and outcomes is unclear. Methods: We conducted a patient-level block-randomized controlled trial evaluating the effect of intensive versus limited monitoring on cardiovascular clinical trial conduct and outcomes, nested within the CoreValve Continued Access and Expanded Use Studies. Intensive monitoring included complete source data verification of all critical datapoints, whereas limited monitoring included automated data checks only. This study's endpoints included clinical trial outcome ascertainment as well as monitoring action items, protocol deviations, and adverse event ascertainment. Results: A total of 2,708 patients underwent transcatheter aortic valve replacement (TAVR) and were randomized to either intensive monitoring (n = 1,354) or limited monitoring (n = 1,354). Monitoring action items were more common with intensive monitoring (52% vs 15%; P < .001), but there was no difference in the percentage of patients with any protocol deviation (91.6% vs 90.4%; P = .314). The reported incidence of trial outcomes between intensive and limited monitoring was similar for mortality (30 days: 4.8% vs 5.5%, P = .442; 1 year: 20.3% vs 21.3%, P = .473) and stroke (30 days: 2.8% vs 2.4%, P = .458), as well as for most secondary trial outcomes, with the exception of bleeding (intensive: 36.3% vs limited: 32.0% at 30 days, P = .019). There was a higher incidence of cardiac adverse events reported in the intensive monitoring group at 1 year (76.7% vs 72.4%; P = .019). Conclusions: Tailored limited monitoring strategies can be implemented without influencing the integrity of TAVR trial outcomes.
Source: http://dx.doi.org/10.1016/j.ahj.2021.09.002 (January 2022)

### Neighborhood Socioeconomic Disadvantage and Mortality Among Medicare Beneficiaries Hospitalized for Acute Myocardial Infarction, Heart Failure, and Pneumonia.

J Gen Intern Med 2021 Sep 10. Epub 2021 Sep 10. Richard A. and Susan F. Smith Center for Outcomes Research in Cardiology, Division of Cardiology, Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, MA, USA.

Background: The Centers for Medicare and Medicaid Services' Hospital Value-Based Purchasing program uses 30-day mortality rates for acute myocardial infarction, heart failure, and pneumonia to evaluate US hospitals, but does not account for neighborhood socioeconomic disadvantage when comparing their performance. Objective: To determine if neighborhood socioeconomic disadvantage is associated with worse 30-day mortality rates after a hospitalization for acute myocardial infarction (AMI), heart failure (HF), or pneumonia in the USA, as well as within the subset of counties with a high proportion of Black individuals. Design and Participants: This retrospective, population-based study included all Medicare fee-for-service beneficiaries aged 65 years or older hospitalized for acute myocardial infarction, heart failure, or pneumonia between 2012 and 2015. Exposure: Residence in most socioeconomically disadvantaged vs. less socioeconomically disadvantaged neighborhoods, as measured by the area deprivation index (ADI). Main Measure(s): All-cause mortality within 30 days of admission. Key Results: The study included 3,471,592 Medicare patients. Of these patients, 333,472 resided in most disadvantaged neighborhoods and 3,138,120 in less disadvantaged neighborhoods. Patients living in the most disadvantaged neighborhoods were younger (78.4 vs. 80.0 years) and more likely to be Black adults (24.6% vs. 7.5%) and dually enrolled in Medicaid (39.4% vs. 21.8%). After adjustment for demographics (age, sex, race/ethnicity), poverty, and clinical comorbidities, 30-day mortality was higher among beneficiaries residing in most disadvantaged neighborhoods for AMI (adjusted odds ratio 1.08, 95% CI 1.06-1.11) and pneumonia (aOR 1.05, 1.03-1.07), but not for HF (aOR 1.02, 1.00-1.04). These patterns were similar within the subset of US counties with a high proportion of Black adults (AMI: aOR 1.07, 1.03-1.11; HF: 1.02, 0.99-1.05; pneumonia: 1.03, 1.00-1.07). Conclusions: Neighborhood socioeconomic disadvantage is associated with higher 30-day mortality for some conditions targeted by value-based programs, even after accounting for individual-level demographics, clinical comorbidities, and poverty. These findings may have implications as policymakers weigh strategies to advance health equity under value-based programs.

Source: http://dx.doi.org/10.1007/s11606-021-07090-z (September 2021)

### Synergistic Effect of Pressurization Rate and β-Form Nucleating Agent on the Multi-Phase Crystallization of iPP.

Polymers (Basel) 2021 Sep 3;13(17). Epub 2021 Sep 3. Key Laboratory of Materials Processing and Mold, National Engineering Research Center for Advanced Polymer Processing Technology, Ministry of Education, Zhengzhou University, Zhengzhou 450002, China.

Using a homemade pressure device, we explored the synergistic effect of pressurization rate and a β-form nucleating agent (β-NA) on the crystallization of an isotactic polypropylene (iPP) melt.
The obtained samples were characterized by combining small-angle X-ray scattering and synchrotron wide-angle X-ray diffraction. It was found that the synergistic application of pressurization and β-NA enables the preparation of a unique multi-phase crystallization of iPP, including β-, γ- and/or mesomorphic phases. The pressurization rate plays a crucial role in the formation of the different crystal phases. As the pressurization rate increases within a narrow range between 0.6 and 1.9 MPa/s, a significant competitive formation between β- and γ-iPP was detected, and their relative crystallinities are likely determined by the growth of the crystals. When the pressurization rate increases further, both β- and γ-iPP contents gradually decrease, and the mesophase begins to emerge once the rate exceeds 15.0 MPa/s; mesomorphic, β- and γ-iPP then coexist with each other. Moreover, across different β-NA contents, the best pressurization rate for β-iPP growth remains 1.9 MPa/s, while more β-NA simply increases the content of β-iPP at rates lower than 1.9 MPa/s. In addition to inducing the formation of β-iPP, β-NA can also significantly promote the formation of γ-iPP over a wide pressurization rate range between 3.8 and 75 MPa/s. These results were elucidated by combining classical nucleation theory and the growth theory of the different crystalline phases, and a theoretical model of pressurization-induced crystallization is established, providing insight into the multi-phase structure development of iPP.

Source: http://dx.doi.org/10.3390/polym13172984 | PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8434399 (September 2021)

### Optical Fiber Based Mach-Zehnder Interferometer for APES Detection.

Sensors (Basel) 2021 Aug 31;21(17). Epub 2021 Aug 31. Institute of Optoelectronic Technology, China Jiliang University, Hangzhou 310018, China.

A 3-aminopropyl-triethoxysilane (APES) fiber-optic sensor based on a Mach-Zehnder interferometer (MZI) was demonstrated. The MZI was constructed from a core-offset fusion single mode fiber (SMF) structure with a length of 3.0 cm. As APES gradually attaches to the MZI, the external environment of the MZI changes, which in turn changes the MZI's interference pattern. This is why the relationship between the APES amount and the resonance dip wavelength can be obtained by measuring the transmission variations of the resonant dip wavelength of the MZI. The optimized amount of 1% APES for the 3.0 cm MZI biosensor was 3 mL, whereas the optimized amount of 2% APES was 1.5 mL.

Source: http://dx.doi.org/10.3390/s21175870 | PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8434240 (August 2021)

### Author Correction: Human plasma proteomic profiles indicative of cardiorespiratory fitness.

Nat Metab 2021 Sep;3(9):1275. Division of Cardiovascular Medicine, Beth Israel Deaconess Medical Center, Boston, MA, USA.

Source: http://dx.doi.org/10.1038/s42255-021-00459-8 (September 2021)

### Intravascular Molecular-Structural Assessment of Arterial Inflammation in Preclinical Atherosclerosis Progression.

JACC Cardiovasc Imaging 2021 Nov 18;14(11):2265-2267. Epub 2021 Aug 18.

Source: http://dx.doi.org/10.1016/j.jcmg.2021.06.017 | PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8571057 (November 2021)

### Days Out of Institution Following Tracheostomy and Gastrostomy Placement in Critically Ill Older Adults.

Ann Am Thorac Soc 2021 Aug 13. Epub 2021 Aug 13. Boston University School of Medicine, Pulmonary Center, Boston, Massachusetts, United States.
Rationale: Tracheostomy and gastrostomy tubes are frequently placed during critical illness for long-term life support, with the majority placed in older adults. Large knowledge gaps exist regarding the outcomes patients consider most important. Objectives: To determine days alive and out of institution and mortality after tracheostomy and gastrostomy placement during critical illness, and to evaluate associations between health states prior to critical illness and outcomes. Methods: In this retrospective cohort study of Medicare beneficiaries admitted to an intensive care unit (ICU) who received a tracheostomy, gastrostomy, or both, we determined the number of days alive and out of institution (DAOI) following the procedure date; 90-day, 6-month, and 1-year mortality; hospital discharge destination; and hospital length of stay. We used claims from the year prior to admission to define eight mutually exclusive pre-ICU health states (permutations of one or more of: Cancer, Chronic Organ Failure, Frail, Robust) and assessed their association with DAOI in 90 days and 1-year mortality. Results: Among the 3,365 patients who received a tracheostomy, 6,709 patients who received a gastrostomy tube, and 3,540 patients who received both procedures, the median DAOI in the first 90 days after placement was 3 (IQR 0-46), 12 (0-61), and 0 (0-37), respectively. Over half died within 180 days. One-year mortality was 62%, 60%, and 64%, respectively. When compared to the Robust state, all other pre-ICU health states were associated with loss of DAOI and increased 1-year mortality; however, among the seven non-Robust pre-ICU health states, there were no differences in outcomes. Conclusions: Medicare beneficiaries with prior comorbidity who received a tracheostomy, gastrostomy tube, or both during critical illness spent few days alive and out of institution and had high short- and long-term mortality.

Source: http://dx.doi.org/10.1513/AnnalsATS.202106-649OC (August 2021)

### Racial/Ethnic Disparities in Hypertension Prevalence, Awareness, Treatment, and Control in the United States, 2013 to 2018.

Hypertension 2021 Dec 9;78(6):1719-1726. Epub 2021 Aug 9. Richard A. and Susan F. Smith Center for Outcomes Research in Cardiology, Division of Cardiology (R.A., N.C., R.K.W., I.R., C.S., R.W.Y., D.S.K.), Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA. [Figure: see text].

Source: http://dx.doi.org/10.1161/HYPERTENSIONAHA.121.17570 (December 2021)

### Cost-effectiveness of Dapagliflozin for the Treatment of Heart Failure With Reduced Ejection Fraction.

JAMA Netw Open 2021 Jul 1;4(7):e2114501. Epub 2021 Jul 1. Harvard Medical School, Boston, Massachusetts.

Importance: Heart failure with reduced ejection fraction produces substantial morbidity, mortality, and health care costs. Dapagliflozin is the first sodium-glucose cotransporter 2 inhibitor approved for the treatment of heart failure with reduced ejection fraction. Objective: To examine the cost-effectiveness of adding dapagliflozin to guideline-directed medical therapy for heart failure with reduced ejection fraction in patients with or without diabetes.
Design, Setting, and Participants: This economic evaluation developed and used a Markov cohort model that compared dapagliflozin plus guideline-directed medical therapy with guideline-directed medical therapy alone in a hypothetical cohort of US adults with clinical characteristics similar to those of participants in the Dapagliflozin in Patients with Heart Failure and Reduced Ejection Fraction (DAPA-HF) trial. Dapagliflozin was assumed to cost $4,192 annually. Nonparametric modeling was used to estimate long-term survival. Deterministic and probabilistic sensitivity analyses examined the impact of parameter uncertainty. Data were analyzed between September 2019 and January 2021. Main Outcomes and Measures: Lifetime incremental cost-effectiveness ratio in 2020 US dollars per quality-adjusted life-year (QALY) gained. Results: The simulated cohort had a starting age of 66 years, and 41.8% had diabetes at baseline. Median (interquartile range) survival in the guideline-directed medical therapy arm was 6.8 (3.5-11.3) years. Dapagliflozin was projected to add 0.63 (95% uncertainty interval [UI], 0.25-1.15) QALYs at an incremental lifetime cost of $42,800 (95% UI, $37,100-$50,300), for an incremental cost-effectiveness ratio of $68,300 per QALY gained (95% UI, $54,600-$117,600 per QALY gained; cost-effective in 94% of probabilistic simulations at a threshold of $100,000 per QALY gained). Findings were similar in individuals with or without diabetes but were sensitive to drug cost. Conclusions and Relevance: In this study, adding dapagliflozin to guideline-directed medical therapy was projected to improve long-term clinical outcomes in patients with heart failure with reduced ejection fraction and to be cost-effective at current US prices. Scalable strategies for improving uptake of dapagliflozin may improve long-term outcomes in patients with heart failure with reduced ejection fraction.

Source: http://dx.doi.org/10.1001/jamanetworkopen.2021.14501 | PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8317009 (July 2021)

### Supervised Exercise Therapy for Symptomatic Peripheral Artery Disease Among Medicare Beneficiaries Between 2017 and 2018: Participation Rates and Outcomes.

Circ Cardiovasc Qual Outcomes 2021 Aug 23;14(8):e007953. Epub 2021 Jul 23. Richard A. and Susan F. Smith Center for Outcomes Research in Cardiology (B.J.C., S.C., C.S., E.A.S.), Beth Israel Deaconess Medical Center, Boston, MA.

Source: http://dx.doi.org/10.1161/CIRCOUTCOMES.121.007953 | PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8373731 (August 2021)

### Highly Thermal Conductive Poly(vinyl alcohol) Composites with Oriented Hybrid Networks: Silver Nanowire Bridged Boron Nitride Nanoplatelets.

ACS Appl Mater Interfaces 2021 Jul 29;13(27):32286-32294. Epub 2021 Jun 29. National Engineering Research Center for Advanced Polymer Processing Technology, The Key Laboratory of Material Processing and Mold of Ministry of Education, College of Materials Science and Engineering, Zhengzhou University, Zhengzhou 450002, P. R. China.

With the increasing demand for thermal management materials in highly integrated electronics, building efficient heat-transfer networks to obtain advanced thermally conductive composites is of great significance. In the present work, highly thermally conductive poly(vinyl alcohol) (PVA)/boron nitride nanosheets@silver nanowires (BNNS@AgNWs) composites were fabricated via a combination of electrospinning and spraying techniques, followed by a hot-pressing method.
The BNNS are oriented along the in-plane direction, while the AgNWs, with their high aspect ratio, help to construct an effective thermally conductive network by bridging the BNNS in the composites. The PVA/BNNS@AgNWs composites showed a high in-plane thermal conductivity (TC) of 10.9 W/(m·K) at 33 wt % total filler loading. Meanwhile, the composite shows excellent heat dissipation capability when used as a thermal interface material for a working light-emitting diode (LED) chip, as verified by capturing the surface temperature of the LED chip. In addition, the out-of-plane electrical conductivity of the composites is below 10 S/cm. The composites, with their outstanding thermally conductive and electrically insulating properties, hold promise for application in electrical packaging and thermal management.

Source: http://dx.doi.org/10.1021/acsami.1c08408 (July 2021)

### Tunable and Nacre-Mimetic Multifunctional Electronic Skins for Highly Stretchable Contact-Noncontact Sensing.

Small 2021 Aug 26;17(31):e2100542. Epub 2021 Jun 26. School of Materials Science and Engineering, Key Laboratory of Materials Processing and Mold (Zhengzhou University), Ministry of Education, National Engineering Research Center for Advanced Polymer Processing Technology, Zhengzhou University, Zhengzhou, 450001, China.

Electronic skins (e-skins) have attracted great attention for their applications in disease diagnostics, soft robots, and human-machine interaction. Integrating high sensitivity, a low detection limit, large stretchability, and multiple stimulus response capability into a single e-skin remains an enormous challenge. Herein, inspired by the structure of nacre, an ultra-stretchable and multifunctional e-skin with a tunable strain detection range, based on nacre-mimetic multi-layered silver nanowire/reduced graphene oxide/thermoplastic polyurethane mats, is fabricated. The e-skin possesses extraordinary strain response performance, with a tunable detection range (50 to 200% strain), an ultralow response limit (0.1% strain), a high sensitivity (gauge factor up to 1902.5), a fast response time (20 ms), and excellent stability (stretching/releasing test of 11,000 cycles). These excellent response behaviors enable the e-skin to accurately monitor full-range human body motions. Additionally, the e-skin can detect relative humidity quickly and sensitively through reversible physical adsorption/desorption of water vapor, and the assembled e-skin array exhibits excellent performance in noncontact sensing. The tunable and multifunctional e-skins show promising applications in motion monitoring and contact-noncontact human-machine interaction.

Source: http://dx.doi.org/10.1002/smll.202100542 (August 2021)

### Comparing Baseline Data From Registries With Trials: Evidence From the CathPCI Registry and DAPT Study.

JACC Cardiovasc Interv 2021 Jun;14(12):1386-1388.
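The registry-versus-trial comparisons collected above (DAPT/CathPCI, CoreValve/Medicare) rest on the same statistical idea: model each trial patient's odds of trial participation given shared covariates, then weight trial patients by the inverse of those odds so the weighted trial cohort resembles the target population. The following is a minimal, hypothetical Python sketch of that idea on synthetic data — the column names, cohorts, and logistic model are my assumptions for illustration, not the studies' actual code:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_cohort(n, trial):
    # Synthetic stand-in data, for illustration only.
    return pd.DataFrame({
        "age": rng.normal(62 if trial else 66, 10, n),
        "diabetes": rng.integers(0, 2, n),
        "prior_mi": rng.integers(0, 2, n),
        "dapt_months": rng.choice([12, 30], n),
        "event": rng.integers(0, 2, n),
    })

trial_df, registry_df = make_cohort(2000, True), make_cohort(5000, False)
covariates = ["age", "diabetes", "prior_mi"]

# Model the odds of trial participation given shared covariates.
stacked = pd.concat([trial_df.assign(in_trial=1),
                     registry_df.assign(in_trial=0)])
model = LogisticRegression(max_iter=1000).fit(stacked[covariates],
                                              stacked["in_trial"])
p = model.predict_proba(trial_df[covariates])[:, 1]

# Inverse-odds weights: trial patients who resemble the registry
# population (low participation odds) get up-weighted.
weights = (1 - p) / p

# Reweighted event rates for 30 vs. 12 months of therapy.
for arm in (30, 12):
    mask = (trial_df["dapt_months"] == arm).to_numpy()
    rate = np.average(trial_df["event"].to_numpy()[mask],
                      weights=weights[mask])
    print(f"{arm} months: weighted event rate = {rate:.3f}")
```

The difference between the two weighted arm rates is the "reweighted treatment effect" reported in the abstracts, up to the additional adjustments those studies describe.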
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2082083523273468, "perplexity": 13623.878172792041}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363405.77/warc/CC-MAIN-20211207170825-20211207200825-00017.warc.gz"}
http://jaac.ijournal.cn/ch/reader/view_abstract.aspx?file_no=JAAC-2018-0166
Volume 9, Number 2, 2019, Pages 765-776

### Approximate Lie $\ast$-Derivations on $\rho$-complete Convex Modular Algebras

Hark-Mahn Kim, Hwan-Yong Shin

Keywords: Modular $*$-algebra, convex modular, $\Delta_\mu$-condition, $(m,n)$-Cauchy-Jensen mapping, Lie $*$-derivation.

Abstract: In this paper, we obtain generalized Hyers-Ulam stability results for an $(m,n)$-Cauchy-Jensen functional equation associated with approximate Lie $*$-derivations on $\rho$-complete convex modular $*$-algebras $\chi_\rho$ satisfying the $\Delta_\mu$-condition on the convex modular $\rho$.
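For context, the stability notion in this abstract generalizes the classical Hyers-Ulam theorem for the Cauchy equation on Banach spaces (a standard fact, not taken from this paper): if $f:E\rightarrow E'$ satisfies $\|f(x+y)-f(x)-f(y)\|\le\varepsilon$ for all $x,y\in E$, then there exists a unique additive mapping $A:E\rightarrow E'$ with $\|f(x)-A(x)\|\le\varepsilon$ for all $x\in E$. The paper extends this template from the Cauchy equation to an $(m,n)$-Cauchy-Jensen equation in the modular-algebra setting.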
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8655129075050354, "perplexity": 9287.013728339558}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541307813.73/warc/CC-MAIN-20191215094447-20191215122447-00183.warc.gz"}
https://calculus123.com/wiki/Fixed_points_and_selections_of_set_valued_maps_on_spaces_with_convexity_by_Saveliev
This site is devoted to mathematics and its applications. Created and run by Peter Saveliev.

# Fixed points and selections of set valued maps on spaces with convexity by Saveliev

Fixed points and selections of set valued maps on spaces with convexity, by Peter Saveliev, International Journal of Mathematics and Mathematical Sciences, 24 (2000) 9, 595-612. Also a talk at the Joint Mathematics Meeting in January 2000. Reviews: MR 2001h:47097, ZM 0968.47016.

We provide two results that unite two pairs of classical theorems, respectively. For this purpose we introduce convex structures on topological spaces that are more general than those of topological vector spaces, or the topological convexity structures due to Michael, Van de Vel, Horvath, and others. We are able to construct a convexity structure for a wide class of topological spaces, which makes it possible to prove a generalization of the following purely topological fixed point theorem.

Eilenberg-Montgomery fixed point theorem. Let $X$ be an acyclic compact ANR, and let $F:X\rightarrow X$ be an upper semicontinuous multifunction with nonempty closed acyclic values. Then $F$ has a fixed point.

This theorem is especially important as it is used in proving the existence of periodic solutions of differential inclusions (multivalued differential equations); see Dissipativity in the plane and The dissipativity of the generalized Lienard equation. It is generalized in a different direction in A Lefschetz-type coincidence theorem by Saveliev. This fixed point theorem is just one of the scores generated by Problem 54 of The Scottish Book. However, I believe that I don't just add another one to the list but instead reduce the total number. Selection theorems are just as numerous; see "Continuous selections of multivalued mappings" by D. Repovs and P.V. Semenov.

Full text: Fixed points and selections of set valued maps on spaces with convexity (17 pages)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8406179547309875, "perplexity": 580.1347866980544}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525355.54/warc/CC-MAIN-20190717161703-20190717183703-00363.warc.gz"}
http://tex.stackexchange.com/questions/82921/bookmarks-not-automatically-generating
# Bookmarks not automatically generating [closed]

I have \usepackage{hyperref} in the preamble of a very simple document, which has defined sections. However, the output pdf file has no bookmarks. In section 4.1 of this manual, it's stated that

> Usually hyperref automatically adds bookmarks for \section and similar macros

What am I missing? Also, surfing a forum, I found that someone said that all he had to do was put \usepackage[bookmarks=true]{hyperref} in his preamble... but this did not work for me either. (I am using the latest version of TeXworks as my editor and I am a PC user.)

closed as too localized by diabonas, Werner, Jake, Kurt, Thorsten, Mar 20 '13 at 19:50

Please add to your question a simple, complete and minimal document illustrating the problem mentioned. – Gonzalo Medina, Nov 16 '12 at 0:31

I guess you could try loading hyperref last. Surprising how often that works. – Peter Grill, Nov 16 '12 at 0:35

Which hyperref driver are you using (see the .log file)? How do you generate the PDF file? Have you run latex at least twice? If you load the package bookmark after hyperref, the bookmarks are updated faster. Which document class is used? – Heiko Oberdiek, Nov 16 '12 at 0:58

This is really strange. Could you please post a minimal portion of the offending files? – Masroor, Nov 16 '12 at 2:47
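A minimal test document of the kind the commenters are asking for might look like the following (a generic sketch, not the asker's actual file). If running pdflatex on it twice produces bookmarks, the problem lies elsewhere in the original preamble:

```latex
\documentclass{article}
% Load hyperref last, as suggested in the comments.
\usepackage[bookmarks=true]{hyperref}

\begin{document}

\section{First section}
Some text.

\section{Second section}
More text.

\end{document}
```

Note that bookmarks are written to the .out file on the first run and only appear in the PDF after a second compilation, which is why the comments ask whether latex was run at least twice.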
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9028549790382385, "perplexity": 3111.281117907675}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207929422.8/warc/CC-MAIN-20150521113209-00185-ip-10-180-206-219.ec2.internal.warc.gz"}
https://gamedev.stackexchange.com/questions/109406/find-two-points-in-a-point-cloud-with-the-maximum-distance
# Find two points in a point cloud with the maximum distance?

What is the least computationally complex way to find two points such that the distance between them is greater than or equal to the distance between any other pair? I remember hearing something about how you could find such a pair by randomly picking a point r, finding the point furthest away from it, fa, and then finding the most distant point from that, md. The diameter is then the distance between fa and md (i.e., norm(fa - md)). Is this correct? Can you prove or disprove it? What is the correct way if this is incorrect?

• Diameter is not well defined for a general set of points. Please define more precisely what you are looking for. Oct 10, 2015 at 5:28
• @PieterGeerkens You are right, I borrowed this from Graph Theory. Oct 10, 2015 at 17:36

It won't be correct. Take 4 points: 3 of them lie on a circle and the 4th is at the center. The diameter of this set is the distance between 2 points on the circle. Your algorithm may end up choosing the center point, which won't have a distance like that.

The proven correct way is to create the convex hull and use the Rotating Calipers method for finding the largest distance. This ends up being O(n log n + k) time complexity: O(n log n) for creating the convex hull and O(k) for iterating over the entire hull to find the two points furthest apart.

• Your explanation is not clear. I can't understand why the OP's approach is not valid, and how your "4 points" relates to it. Oct 8, 2015 at 16:42
• @JPhi1618 To visualize ratchet's complaint about the OP's approach, consider the case with exactly 3 equidistant points (equilateral triangle). The OP's approach will calculate the length of a triangle's leg as the diameter. What is more often desirable, though, is to visualize the 3 points as part of a curved shape (a circle in this case) and to take the diameter from that; which is what this answer describes. Oct 8, 2015 at 18:06
• @StevenHansen Ok, thanks - that clears it up. If we define "diameter" as the longest distance between two points, is the OP's algorithm acceptable? If we have a cloud of 1000 points, I think in most instances the fastest, reasonably accurate approach would be desired. Oct 8, 2015 at 18:11
• @JPhi1618 Even with this new definition of "diameter" there are problems. There is not enough room in a comment, so I've posted an answer that you can examine for an explanation. Oct 8, 2015 at 19:46

> What is the correct way if this is incorrect?

You should only ask one question at a time. I'll cover:

> Is this correct? Can you prove or disprove it?

Also, it was questioned in the comments whether the OP algorithm might be "good enough" even if it doesn't find the diameter of a curve. Bottom line: you aren't guaranteed to get the two points that are furthest from each other.

Consider the point cloud with exactly four co-planar points, { A, B, C, D }, where AB = AC = r, ∠BAC = 70 degrees, ∠ABC = ∠ACB = 55 degrees, and D is halfway along BC. Like so:

        A
       / \
      r   r
     /     \
    B---D---C

Bottom line (TL;DR) using the OP algorithm: if D is the random point, the next point is A, then either B (or C: same distance). The OP algorithm yields AB or AC. However, the longest distance is BC. The algorithm fails in at least this case.

Math proof:

• From random point D we compare AD and BD. AD = r*sin(55) and BD = DC = r*cos(55). Since cos(55) < sin(55), AD > BD and point 2 is A.
• From A we consider AD and AC. AC = r and r > r*sin(55), so AC > AD and the final point is C (or B: same distance).
• The final OP diameter is AC = r. However, BC = 2*r*cos(55), which means BC > r.
The two points furthest from each other are B and C, not A and C.

• What if you keep at it a few times, does it converge? Oct 9, 2015 at 18:03
• There are situations (such as you seem to have found) where the wrong points keep pointing back to themselves. Oct 9, 2015 at 19:44

Ok, let's look at the following shape:

      B
     / \
    A   C
     \ /
      D

Now, let's say the distance BD is 5 and the distance AC is 4. If you pick A, you will get C at a distance of 4. From C you will get A again. The distance AD = CD = CB = AB is, from Pythagoras: sqrt((4/2)^2 + (5/2)^2) = ~3.2. So yeah, you can miss the mark with a diamond shape.
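To make the failure concrete, here is a short, hypothetical Python sketch (function names and coordinates are mine) that reproduces the isoceles-triangle counterexample above, comparing the question's two-sweep heuristic against the exact O(n²) brute force:

```python
import math
import itertools

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def heuristic_diameter(points, start=0):
    """Two-sweep heuristic from the question: walk to the farthest
    point twice and report that final distance. Fast but not exact."""
    fa = max(points, key=lambda p: dist(points[start], p))
    md = max(points, key=lambda p: dist(fa, p))
    return dist(fa, md)

def true_diameter(points):
    """Exact O(n^2) diameter: check every pair of points."""
    return max(dist(p, q) for p, q in itertools.combinations(points, 2))

# Counterexample: apex A, base angles 55 degrees, D at the midpoint of BC.
r = 1.0
A = (0.0, r * math.sin(math.radians(55)))
B = (-r * math.cos(math.radians(55)), 0.0)
C = (r * math.cos(math.radians(55)), 0.0)
pts = [A, B, C, (0.0, 0.0)]  # last point is D

print(heuristic_diameter(pts, start=3))  # ~1.0   (AB or AC)
print(true_diameter(pts))                # ~1.147 (BC = 2*cos(55))
```

For an exact answer at scale, the rotating-calipers route mentioned in the accepted answer replaces the pairwise check with a convex hull plus one sweep around it.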
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8031021356582642, "perplexity": 717.8863563002952}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103573995.30/warc/CC-MAIN-20220628173131-20220628203131-00690.warc.gz"}
https://www.jobilize.com/physics/section/conceptual-questions-microscopes-by-openstax?qcr=www.quizover.com
# 26.4 Microscopes (Page 5/9)

## Take-home experiment: make a lens

Look through a clear glass or plastic bottle and describe what you see. Now fill the bottle with water and describe what you see. Use the water bottle as a lens to produce the image of a bright object and estimate the focal length of the water bottle lens. How is the focal length a function of the depth of water in the bottle?

## Test prep for AP courses

Which of the following correctly describes the image created by a microscope?

1. The image is real, inverted, and magnified.
2. The image is virtual, inverted, and magnified.
3. The image is real, upright, and magnified.
4. The image is virtual, upright, and magnified.

Answer: (b)

Use the diagram shown below to answer the following questions. Draw two rays leaving the arrow shown to the left of both lenses. Use ray tracing to draw the images created by the objective and eyepiece lenses. Label the images as $i_\text{o}$ and $i_\text{e}$.

## Section summary

- The microscope is a multiple-element system having more than a single lens or mirror.
- Many optical devices contain more than a single lens or mirror. These are analysed by considering each element sequentially. The image formed by the first is the object for the second, and so on. The same ray tracing and thin lens techniques apply to each lens element.
- The overall magnification of a multiple-element system is the product of the magnifications of its individual elements. For a two-element system with an objective and an eyepiece, this is $m = m_\text{o} m_\text{e},$ where $m_\text{o}$ is the magnification of the objective and $m_\text{e}$ is the magnification of the eyepiece, such as for a microscope.
- Microscopes are instruments for allowing us to see detail we would not be able to see with the unaided eye, and consist of a range of components.
- The eyepiece and objective contribute to the magnification. The numerical aperture $(\text{NA})$ of an objective is given by $\text{NA} = n \sin \alpha$ where $n$ is the refractive index and $\alpha$ the angle of acceptance.
- Immersion techniques are often used to improve the light-gathering ability of microscopes. The specimen is illuminated by transmitted, scattered or reflected light through a condenser.
- The $f/\#$ describes the light-gathering ability of a lens. It is given by $f/\# = \frac{f}{D} \approx \frac{1}{2\,\text{NA}}.$

## Conceptual questions

Geometric optics describes the interaction of light with macroscopic objects. Why, then, is it correct to use geometric optics to analyse a microscope's image?

The image produced by the microscope in [link] cannot be projected. Could extra lenses or mirrors project it? Explain.

Why not have the objective of a microscope form a case 2 image with a large magnification? (Hint: Consider the location of that image and the difficulty that would pose for using the eyepiece as a magnifier.)

What advantages do oil immersion objectives offer?

How does the $\text{NA}$ of a microscope compare with the $\text{NA}$ of an optical fiber?

## Problem exercises

A microscope with an overall magnification of 800 has an objective that magnifies by 200. (a) What is the magnification of the eyepiece? (b) If there are two other objectives that can be used, having magnifications of 100 and 400, what other total magnifications are possible?
(a) 4.00 (b) 1600

(a) What magnification is produced by a 0.150 cm focal length microscope objective that is 0.155 cm from the object being viewed? (b) What is the overall magnification if an $8\times$ eyepiece (one that produces a magnification of 8.00) is used?

(a) Where does an object need to be placed relative to a microscope for its 0.500 cm focal length objective to produce a magnification of $-400$? (b) Where should the 5.00 cm focal length eyepiece be placed to produce a further fourfold (4.00) magnification?

(a) 0.501 cm (b) Eyepiece should be 204 cm behind the objective lens.

You switch from a $1.40\,\text{NA}\ 60\times$ oil immersion objective to a $1.40\,\text{NA}\ 60\times$ oil immersion objective. What are the acceptance angles for each? Compare and comment on the values. Which would you use first to locate the target area on your specimen?

An amoeba is 0.305 cm away from the 0.300 cm focal length objective lens of a microscope. (a) Where is the image formed by the objective lens? (b) What is this image's magnification? (c) An eyepiece with a 2.00 cm focal length is placed 20.0 cm from the objective. Where is the final image? (d) What magnification is produced by the eyepiece? (e) What is the overall magnification? (See [link].)

(a) +18.3 cm (on the eyepiece side of the objective lens) (b) -60.0 (c) -11.3 cm (on the objective side of the eyepiece) (d) +6.67 (e) -400

You are using a standard microscope with a $0.10\,\text{NA}\ 4\times$ objective and switch to a $0.65\,\text{NA}\ 40\times$ objective. What are the acceptance angles for each? Compare and comment on the values. Which would you use first to locate the target area on your specimen? (See [link].)

Unreasonable Results: Your friends show you an image through a microscope. They tell you that the microscope has an objective with a 0.500 cm focal length and an eyepiece with a 5.00 cm focal length. The resulting overall magnification is 250,000. Are these viable values for a microscope?
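The section-summary formulas lend themselves to a quick numeric check. The following short Python sketch (function names are mine; the numbers come from the exercises above) reproduces the first exercise's answers and the acceptance angle of the 0.65 NA dry objective:

```python
import math

def overall_magnification(m_objective, m_eyepiece):
    """m = m_o * m_e for a two-element system."""
    return m_objective * m_eyepiece

def acceptance_angle(numerical_aperture, n=1.0):
    """Invert NA = n * sin(alpha); returns alpha in degrees."""
    return math.degrees(math.asin(numerical_aperture / n))

def f_number(numerical_aperture):
    """f/# ~ 1 / (2 NA)."""
    return 1.0 / (2.0 * numerical_aperture)

# Exercise: overall magnification 800 with a 200x objective.
print(800 / 200)                            # eyepiece magnification: 4.0
for m_o in (100, 200, 400):                 # other total magnifications
    print(overall_magnification(m_o, 4.0))  # 400, 800, 1600

# Acceptance angle and f/# of a 0.65 NA dry objective (n = 1.00).
print(round(acceptance_angle(0.65), 1))     # ~40.5 degrees
print(round(f_number(0.65), 2))             # ~0.77
```

For the oil immersion objectives, the same `acceptance_angle` call would use the refractive index of the immersion oil for `n`, since NA = n sin α can exceed 1 only when n > 1.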
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 17, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.591610312461853, "perplexity": 1428.5545857262182}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737204.32/warc/CC-MAIN-20200807143225-20200807173225-00147.warc.gz"}
https://www.datasciencecentral.com/state-of-the-art-machine-learning-automation-with-hdt-2/
State-of-the-Art Machine Learning Automation with HDT

The technique presented here blends non-standard, robust versions of decision trees and regression. It has been successfully used in black-box ML implementations.

In this article, we discuss a general machine learning technique to make predictions or score transactional data, applicable to very big, streaming data. This hybrid technique combines different algorithms to boost accuracy, outperforming each algorithm taken separately, yet it is simple enough to be reliably automated. It is illustrated in the context of predicting the performance of articles published in media outlets or blogs, and has been used by the author to build an AI (artificial intelligence) system to detect articles worth curating, as well as to automatically schedule tweets and other postings in social networks for maximum impact, with a goal of eventually fully automating digital publishing. This application is broad enough that the methodology can be applied to most NLP (natural language processing) contexts with large amounts of unstructured data. The results obtained in our particular case study are also very interesting.

Figure 1: HDT 1.0.

Here we describe HDT 2.0. The algorithmic framework described here applies to any data set, text or not, with quantitative variables, non-quantitative variables (gender, race), or a mix of both. It consists of several components; we discuss in detail those that are new and original. The other, non-original components are briefly mentioned, with references provided for further reading. No deep technical expertise and no advanced mathematical knowledge are required to understand the concepts and methodology described here. The methodology, though state-of-the-art, is simple enough that it can even be implemented in Excel for small data sets (up to about one million observations).

1. The Problem

Rather than first presenting a general, abstract framework and then showing how it applies to a specific problem (case study), we decided to proceed the other way around, as we believe this will help the reader understand our methodology better. We then generalize to any kind of data set. In its simplest form, our particular problem consists of analyzing historical data about articles and blog posts, to identify features (also called metrics or variables) that, combined together, are good predictors of blog popularity, and to build a system that can predict the popularity of an article before it gets published. The goal is to select the right mix of relevant articles to publish, to increase web traffic, and thus advertising dollars, for a niche digital publisher. As in any similar problem, the historical data is called the training set, and it is split into test data and control data for cross-validation purposes, to avoid over-fitting. The features are selected to maximize some measure of predictive power, as described here. All of this is (so far) standard practice; the reader not familiar with it can Google the keywords introduced in this paragraph. In our particular case, we use our domain expertise to identify great features. These features are pretty generic and apply to numerous NLP contexts, so you can re-use them for your own data sets.

Feature Selection and Best Practices

One caveat is that some metrics are very sensitive to distortion. In our case, the response (that is, what we are trying to predict, also called the dependent variable by statisticians) is the traffic volume.
It can be measured in page views, unique page views, or the number of users who read the article. Page views can easily be manipulated, and the number is inflated by web robots, especially for articles that have little traffic. So instead, we chose "unique page views", a more robust metric available through Google Analytics. Also, older articles have accumulated more page views over time, so we need to correct for this effect. Correcting for time is explained in this article. Here we used a very simple approach instead: focusing on articles from the most recent, big channel (the time window is about two years), and taking the logarithm of unique page views (denoted as pv in the source code in the last section). Taking the logarithm not only smooths out the effect of time and web robots, but also makes perfect sense, as the page view distribution is highly skewed (well modeled using a Zipf distribution), with a few incredibly popular (viral) articles and a large number of articles with average traffic: it is a bit like the income distribution.

As for selecting the features, we have two kinds of metrics that we can choose as predictors:

1. Metrics based on the article title, easy to compute:
• Keywords found in the title
• Article category (blog, event, forum question)
• Channel
• Creation date
• Title contains numbers?
• Title is a question?
• Title contains special characters?
• Length of title

2. Metrics based on the article body, more difficult to compute:
• Size of article
• Does it contain pictures?
• Keywords found in body
• Author (and author popularity)
• First few words

Despite focusing only on a subset of features associated with the article title, we were able to get very interesting, actionable insights; we only used title keywords, and whether the posting is a blog or not. Keep in mind that the methodology used here takes into account all potential key-value combinations, where a key is a subset of features and a value is the corresponding set of feature values: for instance, key = (keyword_1, keyword_2, article category) and value = ("Python", "tutorial", "Blog"). So it is important to bin the variables appropriately when turning them into features, to prevent the number of key-value pairs from exploding. Another mechanism, described further down in this article, is also used to keep the key-value database, stored as a hash table or associative array, manageable. Finally, the technique can easily be implemented in a distributed environment (e.g., Hadoop). Due to the analogy with decision trees, a key-value pair is also called a node, and plays the same role as a node in a decision tree.

2. Methodology and Solution

As we have seen in the previous section, the problem consists of predicting pv, the logarithm of unique page views for an article (over some time period), as a function of keywords found in the title, and of whether the article in question is a blog or not. In order to do so, we created lists of all one-token and two-token keywords found in all the titles, as well as blog status, after cleaning the titles and eliminating stop words such as "that", "and" or "the", which have no impact on the predictions. We were also careful not to eliminate all keywords made up of one or two letters: the one-letter keyword "R", corresponding to the programming language R, has high predictive power. For each element in our lists, we recorded the frequency and traffic popularity.
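To make the node-building step concrete, here is a minimal Python sketch of how such a key-value table could be populated. This is an illustration, not the author's actual code (the Perl, Python and R versions are linked in section 4); the tokenizer, the stop-word list and the statistics kept per node are simplifying assumptions.

```python
import re
from collections import defaultdict

STOP_WORDS = {"that", "and", "the"}  # assumed minimal stop-word list

def tokens(title):
    """Clean a title and return its lowercase keywords, stop words removed."""
    words = re.findall(r"[a-z0-9']+", title.lower())
    return [w for w in words if w not in STOP_WORDS]

def nodes_for(title, is_blog):
    """Yield the key-value pairs (nodes) a title belongs to: every one-token
    keyword and every adjacent two-token keyword, combined with blog status."""
    t = tokens(title)
    for w in t:
        yield (w, is_blog)
    for w1, w2 in zip(t, t[1:]):  # adjacent pairs only, as in the article
        yield (w1, w2, is_blog)

# node -> [count, sum of pv, min pv, max pv]
table = defaultdict(lambda: [0, 0.0, float("inf"), float("-inf")])

def add_article(title, is_blog, pv):
    """Update the statistics of every node the article belongs to."""
    for node in nodes_for(title, is_blog):
        stats = table[node]
        stats[0] += 1
        stats[1] += pv
        stats[2] = min(stats[2], pv)
        stats[3] = max(stats[3], pv)
```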
More precisely, for each key-value pair, we recorded the number of articles (titles, actually) associated with it, as well as the average, minimum and maximum pv across these articles.

Example

For instance, the element or key-value (keyword_1 = "R", keyword_2 = "Python", article = "Blog") is associated with 6 articles, and has the following statistics: avg pv = 8.52, min pv = 7.41, and max pv = 10.45. Since the average pv across all articles is equal to 6.83, this specific key-value pair (also called a node) generates exp(8.52 - 6.83) = 5.42 times more traffic than an average article. It is thus a great node. Even the worst article among the 6 articles belonging to this node, with a pv of 7.41, outperforms the average article across all nodes. So not only is this a great node, it is also a stable one. Some nodes have far larger volatility, for instance when one of the keywords has different meanings, such as the word "training" in "training deep learning" (training set) versus "deep learning training" (courses).

Hidden decision trees (HDT) revisited

Note that here, the nodes are overlapping, allowing considerable flexibility. In particular, nodes with two keywords are sub-nodes of nodes with one keyword. A previous version of this technique, described here, did not consider overlapping nodes. Also, with highly granular features, the number of nodes explodes exponentially. A solution to this problem consists of:

• shuffling the observations
• working with nodes built on no more than 4 or 5 features
• proper binning
• visiting the observations sequentially (after the shuffle) and, every one million observations, deleting nodes that contain only one observation

The general idea behind this technique is to group articles into buckets that are large enough to provide sound predictions, without explicitly building decision trees. Not only are the nodes simple and easy to interpret, but unstable nodes are also easy to detect and discard. There is no splitting/pruning involved as with classical decision trees, making this methodology simple and robust, and thus fit for artificial intelligence (automated processing).

General framework

Whether you are dealing with predicting the popularity of an article, or the risk of a client defaulting on a loan, the basic methodology is identical. It involves training sets, cross-validation, feature selection, binning, and populating hash tables of key-value pairs (referred to here as the nodes). When you process a new observation, you check which node(s) it belongs to. If the best node it belongs to is stable and not too small, you use it to predict the future performance or value of your observation, or to score the transaction if you are dealing with transactional data such as credit card transactions. In our example, if the performance metric (the average pv in the node in question) is significantly above the global average, and other constraints are met (the node is not too small, and the minimum pv in the node in question is not too low, to guarantee stability), then we classify the observation as good, just like the node it belongs to. In our case, an observation is a potential article. Also, you need to update your training set and the node table (including automatically discovered new nodes) every six months or so.
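Continuing the hypothetical sketch above (it reuses nodes_for and table), the scoring step could look as follows. The thresholds here are illustrative placeholders, not the calibrated values discussed in section 3.

```python
def classify(title, is_blog, min_size=4, min_pv=6.9, min_avg=7.6):
    """Mark an article 'good' if it triggers at least one usable node meeting
    the stability criteria, 'bad' if it only triggers usable nodes that fail
    them, and None if it belongs to no usable node at all."""
    usable = False
    for node in nodes_for(title, is_blog):
        if node not in table:
            continue
        n, total, lo, _ = table[node]
        if n < min_size:
            continue  # node too small to be usable
        usable = True
        if lo > min_pv and total / n > min_avg:
            return "good"  # one good, stable node is enough
    return "bad" if usable else None  # None -> hand off to algorithm B
```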
Parameters must be calibrated to guarantee that:

• the error rate (classifying a good observation as bad, or the other way around) is small enough; it is measured using a confusion matrix
• the system is robust: we have a reasonable number of stable nodes that are big enough; it is great if fewer than 3,000 stable, not-too-small nodes cover 80% of the observations (by stable, we mean nodes with low variance), with an average of at least 10 observations per node
• the binning and feature selection mechanisms offer real predictive power: the average response (our pv) measured in a node classified as good is much above the general average, and the other way around for nodes classified as bad; in addition, the response shows little volatility within each node (in our case, pv is relatively stable across all observations within a given usable node)
• we have enough usable nodes (that is, after excluding the small ones) to cover at least 50% of all observations, and if possible up to 95% of all observations (100% would be ideal but never happens in practice)

We discuss the parameters of our technique, and how to fine-tune them, in the next section. Fine-tuning can be automated, or made more robust, by testing (say) 2,000 sets of parameters and identifying regions of stability in the parameter space that meet our criteria (in terms of error rate and so on).

A big question is what to do with observations not belonging to any usable node: they cannot be classified. In our example it does not matter if 30% of the observations cannot be classified, but in many applications it does matter. One way to address this issue is to use super-nodes: in our case, a node for all posts that are blogs, and another one for all posts that are not blogs (these two nodes cover 100% of observations, both past and future). The problem is that usually these super-nodes don't have much predictive power. A better solution consists of using two algorithms: the one described here, based on usable nodes (let's call it algorithm A), and another one, called algorithm B, that classifies all observations. Observations that can't be classified or scored with algorithm A are classified/scored with algorithm B, as sketched below. You can read the details about how to blend the results of two algorithms in one of my patents. In practice, we have used Jackknife regression for algorithm B, a technique that is easy to implement, easy to understand, leads to simple interpretations, and is very robust. These features are important for systems designed to run automatically.

The resulting hybrid algorithm is called Hidden Decision Trees: hidden, because you don't even realize that you have created a bunch of mini decision trees; it was all implicit. The version described here is version 2, with new features to prevent the node table from exploding and to allow nodes to overlap, making it more suitable for data sets with a larger number of variables.

3. Case Study: Results

Our application about predicting page views for an article has been explained in detail in the previous sections. So here we focus on the results obtained.
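The hybrid logic itself is short. In the sketch below, classify from the earlier snippet plays the role of algorithm A, and any scorer that handles all observations can stand in for algorithm B (the author uses Jackknife regression, which is not reproduced here):

```python
def hybrid_classify(title, is_blog, algorithm_b):
    """Algorithm A (node lookup) first; fall back to algorithm B for
    observations that no usable node covers."""
    label = classify(title, is_blog)
    if label is not None:
        return label
    return algorithm_b(title, is_blog)  # e.g., a regression-based classifier
```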
Output from the algorithm

If you run the script listed in the next section, besides producing the table of key-value pairs (the nodes) as a text file for further automated processing, it displays summary statistics that look like the following:

Average pv: 6.81
Number of articles marked as good: 865 (real number is 1079)
Number of articles marked as bad: 1752 (real number is 1538)
Avg pv, articles marked as good: 8.23
Avg pv, articles marked as bad: 6.13
Number of false positives: 50 (bad marked as good)
Number of false negatives: 264 (good marked as bad)
Number of articles: 2617
Error rate: 0.12
Number of feature values: 16712 (marked as good: 3409)
Aggregation factor: 1.62

The number of "feature values" is the total number of key-value pairs found, including the small, unstable ones, regardless of whether they are classified as good or bad. Any article with a pv above the arbitrary value pv_threshold = 7.1 (see source code) is considered good. This corresponds to articles having about 1.3 times more traffic than average, since we use a log scale and the average pv is 6.81. The traffic for articles classified as good by the algorithm (pv = 8.23) is about 4.2 times above the traffic that an average article receives.

Two important metrics are:

• Aggregation factor: an indicator of the average size of a node. The minimum is 1, corresponding to nodes that have only one observation. A value above 5 is highly desirable, but here, because we are dealing with a small data set and with niche articles, even a small value is OK.
• Error rate: the proportion of articles wrongly classified. Here we care much more about bad articles classified as good.

Also note that we correctly identify the vast majority of good articles, but this is because we work with small nodes. Finally, an article is marked as good if it triggers at least one node marked as good (that is, satisfying the criterion defined in the next sub-section).

Parameters

Besides pv_threshold, the algorithm uses 12 parameters to identify a usable, stable node classified as good. These parameters are illustrated in the following piece of code (see source code):

```perl
if ( (($n > 3)   && ($n < 8)   && ($min > 6.9) && ($avg > 7.6)) ||
     (($n >= 8)  && ($n < 16)  && ($min > 6.7) && ($avg > 7.4)) ||
     (($n >= 16) && ($n < 200) && ($min > 6.1) && ($avg > 7.2)) ) {
```

Here, n represents the number of observations in a node, while avg and min are the average and minimum pv for the node in question. We tested many combinations of values for these parameters. Increasing the required size (denoted as n) of a usable node will do the following:

• decrease the number of good articles correctly identified as good
• increase the error rate
• increase the stability of the system
• decrease the predictive power
• increase the aggregation factor (see previous sub-section)

Improving the methodology

Here we share some caveats and possible improvements to our technique. You should use a table of one-token keywords that look like two tokens, for increased efficiency, and treat these keywords as one token. For instance, "San Francisco" is a one-token keyword, despite its appearance. Such tables are easy to build, as you almost always see the two parts together. Also, we looked at nodes containing (keyword_1, keyword_2) where the two keywords are adjacent. If you allow the two keywords to be non-adjacent, the number of key-value pairs (the nodes) increases significantly, but you don't get much additional predictive power in return: there is even a risk of over-fitting.
Another improvement consists of having, or favoring, nodes containing observations spread over a long time period, to avoid any kind of concentration (which could otherwise result in over-fitting). Finally, in our case, we cannot focus exclusively on articles with great potential. It is important to have many less popular articles as well: they constitute the long tail. Without these articles, we face problems such as excessive content concentration, which has negative impacts in the long term. The obvious negative impact is that we might miss nascent topics, getting stuck with a non-adaptive mix of articles at some point, and thus slowing growth.

Interesting findings

Titles with the following features work well:

• contains a number (10, 15 and so on), as we have many popular articles such as "10 great deep learning articles"
• contains the current year (2017)
• is a question (how to)
• is not a blog, but in the book category
• is a blog

Titles containing the following keywords work well:

• everyone (as in "10 regression techniques everyone should know")
• libraries
• infographic
• explained
• algorithms
• languages
• amazing
• r python
• job interview questions
• should know (as in "10 regression techniques everyone should know")
• nosql databases
• versus
• decision trees
• logistic regression
• correlations
• tutorials
• code
• free

You might also like my related articles about a data scientist sharing his secrets, and about turning unstructured data into structured data. To read my best data science and machine learning articles, click here.

4. Source Code

The source code is easy to read and has deliberately been made longer than needed, to provide enough details, avoid complicated iterations, and facilitate maintenance and translation into Python or R. The output file hdt-out2.txt stores the key-value pairs (or nodes) that are usable, corresponding to popular articles. Here is the input data set: HDT-data3.txt. The code has been written in Perl, R and Python. Perl and Python run faster than R. Click on the relevant link below to access the source code, available as a text file. The code was originally written in Perl, and translated into Python and R by Naveenkumar Ramaraju, who is currently pursuing a master's in Data Science at Indiana University.

• Python version
• Perl version
• R version
• Improved R version

For those learning Python or R, this is a great opportunity. HDT (a light version) has been implemented in Excel too: click here to get the spreadsheet (you will have to scroll down to the middle of section 3, just above the images).

Note regarding the R implementation

Required library: hash (R doesn't have a built-in hash or dictionary without imports). You can use either of the script files below.

• The standard version is a literal translation of the Perl code, with the same variable names to the maximum extent possible.
• The improved version uses functions, more data frames and a more R-like approach, to reduce running time (~30% faster) and lines of code. Variable names differ from the Perl version. The output file uses a comma (,) as the delimiter between IDs.

Instructions to run: place the R file and HDT-data3.txt (the input file) in the root folder of the R environment. Execute the .R file in RStudio or using the command-line script:

> Rscript HDT_improved.R

R is known to be slow at text parsing. We could optimize further, by using data frames, if all inputs were within double quotes or had no quotes at all.

Julia version

This was added later by Andre Bieler. The comments below are from him.
For what it's worth, I did a quick translation from Python to Julia (v0.5) and attached a link to the file below; feel free to share it. I stayed as close as possible to the Python version, so it is not necessarily the most "Julian" code. A few remarks about benchmarking, since I see this briefly mentioned in the text:

• This code is absolutely not tuned for performance, since everything is done in global scope. (In Julia it would be good practice to put everything in small functions.)
• Generally, for run times of only a few tenths of a second, Python will be faster due to the compilation times of Julia. Julia really starts paying off for longer execution times.

Click here to get the Julia code.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49966880679130554, "perplexity": 1459.4672379991896}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711637.64/warc/CC-MAIN-20221210005738-20221210035738-00678.warc.gz"}
https://ae.norton.com/online-threats/vbs.lavra.b.worm-2002-090912-2547-99-writeup.html
# Threat Explorer

The Threat Explorer is a comprehensive resource consumers can turn to for daily, accurate, up-to-date information on the latest threats, risks and vulnerabilities.

# VBS.Lavra.B.Worm

Discovered: 09 September 2002
Updated: 13 February 2007
Also Known As: VBS.Thambl
Systems Affected: Windows

VBS.Lavra.B.Worm is a Trojan horse that is written in Microsoft Visual Basic Script. It attempts to delete antivirus and personal firewall software. In an attempt to spread, it copies itself as numerous files to the shared folders of several file-sharing programs.

NOTE: Definitions dated prior to September 12, 2002 may detect this as VBS.Thambl.

### Antivirus Protection Dates

• Initial Rapid Release version 09 September 2002
• Latest Rapid Release version 28 September 2010 revision 054
• Initial Daily Certified version 09 September 2002
• Latest Daily Certified version 28 September 2010 revision 036
• Initial Weekly Certified release date 11 September 2002

Click here for a more detailed description of Rapid Release and Daily Certified virus definitions.

When VBS.Lavra.B.Worm runs, it does the following:

It copies itself as:
• C:\Windows\Lbamht.vbs
• C:\WinNT\Lbamht.vbs

It attempts to delete the following files:
• C:\AntiViral Toolkit Pro\*.*
• C:\Program Files\Command Software\F-PROT95\*.*
• C:\Program Files\McAfee\VirusScan\*.*
• C:\Program Files\Norton AntiVirus\*.*
• C:\Toolkit\FindVirus\*.*
• C:\Program Files\Panda Software\Panda Antivirus Titanium\*.*
• C:\PC-Cillin 95\*.*
• C:\PC-Cillin 97\*.*
• C:\Program Files\Trend Micro\PC-cillin 2002\*.*
• C:\Program Files\Zone Labs\ZoneAlarm\*.*
• C:\Program Files\Tiny Personal Firewall\*.*

It then copies itself as numerous files into the shared folders of these peer-to-peer file-sharing programs:

Grokster
• C:\Program Files\Grokster\My Grokster\CristinaAguilera.Jpg.vbs
• C:\Program Files\Grokster\My Grokster\AVP-Spanish Patch.Zip.VBS
• C:\Program Files\Grokster\My Grokster\Norton Antivirus 2002 Crack.Zip.vbs
• C:\Program Files\Grokster\My Grokster\SilviaSaintDoubleAnalAction.jpg.vbs
• C:\Program Files\Grokster\My Grokster\Panda Titanium Crack.zip.vbs
• C:\ARCHIV~1\Grokster\My Grokster\CristinaAguilera.Jpg.vbs
• C:\ARCHIV~1\Grokster\My Grokster\AVP-Spanish Patch.Zip.VBS
• C:\ARCHIV~1\Grokster\My Grokster\Norton Antivirus 2002 Crack.Zip.vbs
• C:\ARCHIV~1\Grokster\My Grokster\SilviaSaintDoubleAnalAction.jpg.vbs
• C:\ARCHIV~1\Grokster\My Grokster\Panda Titanium Crack.zip.vbs

Morpheus
• C:\Program Files\Morpheus\My Shared Folder\CristinaAguilera.Jpg.vbs
• C:\Program Files\Morpheus\My Shared Folder\AVP-Spanish Patch.Zip.VBS
• C:\Program Files\Morpheus\My Shared Folder\Norton Antivirus 2002 Crack.Zip.vbs
• C:\Program Files\Morpheus\My Shared Folder\SilviaSaintDoubleAnalAction.jpg.vbs
• C:\Program Files\Morpheus\My Shared Folder\Panda Titanium Crack.zip.vbs
• C:\archiv~1\Morpheus\My Shared Folder\CristinaAguilera.Jpg.vbs
• C:\archiv~1\Morpheus\My Shared Folder\AVP-Spanish Patch.Zip.VBS
• C:\archiv~1\Morpheus\My Shared Folder\Norton Antivirus 2002 Crack.Zip.vbs
• C:\archiv~1\Morpheus\My Shared Folder\SilviaSaintDoubleAnalAction.jpg.vbs
• C:\archiv~1\Morpheus\My Shared Folder\Panda Titanium Crack.zip.vbs

ICQ
• C:\Program Files\ICQ\shared files\CristinaAguilera.Jpg.vbs
• C:\Program Files\ICQ\shared files\AVP-Spanish Patch.Zip.VBS
• C:\Program Files\ICQ\shared files\Norton Antivirus 2002 Crack.Zip.vbs
• C:\Program Files\ICQ\shared files\SilviaSaintDoubleAnalAction.jpg.vbs
• C:\Program Files\ICQ\shared files\Panda Titanium Crack.zip.vbs
• C:\archiv~1\ICQ\shared files\CristinaAguilera.Jpg.vbs
• C:\archiv~1\ICQ\shared files\AVP-Spanish Patch.Zip.VBS
• C:\archiv~1\ICQ\shared files\Norton Antivirus 2002 Crack.Zip.vbs
• C:\archiv~1\ICQ\shared files\SilviaSaintDoubleAnalAction.jpg.vbs
• C:\archiv~1\ICQ\shared files\Panda Titanium Crack.zip.vbs

Bearshare
• C:\Program Files\Bearshare\Shared\CristinaAguilera.Jpg.vbs
• C:\Program Files\Bearshare\Shared\AVP-Spanish Patch.Zip.VBS
• C:\Program Files\Bearshare\Shared\Norton Antivirus 2002 Crack.Zip.vbs
• C:\Program Files\Bearshare\Shared\SilviaSaintDoubleAnalAction.jpg.vbs
• C:\Program Files\Bearshare\Shared\Panda Titanium Crack.zip.vbs
• C:\archiv~1\Bearshare\Shared\CristinaAguilera.Jpg.vbs
• C:\archiv~1\Bearshare\Shared\AVP-Spanish Patch.Zip.VBS
• C:\archiv~1\Bearshare\Shared\Norton Antivirus 2002 Crack.Zip.vbs
• C:\archiv~1\Bearshare\Shared\SilviaSaintDoubleAnalAction.jpg.vbs
• C:\archiv~1\Bearshare\Shared\Panda Titanium Crack.zip.vbs

KaZaA
• C:\Program Files\KaZaA\My Shared Folder\CristinaAguilera.Jpg.vbs
• C:\Program Files\KaZaA\My Shared Folder\AVP-Spanish Patch.Zip.VBS
• C:\Program Files\KaZaA\My Shared Folder\Norton Antivirus 2002 Crack.Zip.vbs
• C:\Program Files\KaZaA\My Shared Folder\SilviaSaintDoubleAnalAction.jpg.vbs
• C:\Program Files\KaZaA\My Shared Folder\Panda Titanium Crack.zip.vbs
• C:\ARCHIV~1\KaZaA\My Shared Folder\CristinaAguilera.Jpg.vbs
• C:\ARCHIV~1\KaZaA\My Shared Folder\AVP-Spanish Patch.Zip.VBS
• C:\ARCHIV~1\KaZaA\My Shared Folder\Norton Antivirus 2002 Crack.Zip.vbs
• C:\ARCHIV~1\KaZaA\My Shared Folder\SilviaSaintDoubleAnalAction.jpg.vbs
• C:\ARCHIV~1\KaZaA\My Shared Folder\Panda Titanium Crack.zip.vbs

It adds the values

LARVA    C:\windows\lbamht.vbs
LARVAx   C:\winnt\lbamht.vbs

to the registry key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run so that it runs each time that you start Windows.

On Windows 95/98/Me only, the Trojan adds the lines

@Start C:\Windows\lbamht.vbs>null
@Start C:\Winnt\lbamht.vbs>null

to the C:\Autoexec.bat file so that the Trojan runs when you start Windows.

### Recommendations

Symantec Security Response encourages all users and administrators to adhere to the following basic security "best practices":

• Use a firewall to block all incoming connections from the Internet to services that should not be publicly available. By default, you should deny all incoming connections and only allow services you explicitly want to offer to the outside world.
• Enforce a password policy. Complex passwords make it difficult to crack password files on compromised computers. This helps to prevent or limit damage when a computer is compromised.
• Ensure that programs and users of the computer use the lowest level of privileges necessary to complete a task. When prompted for a root or UAC password, ensure that the program asking for administration-level access is a legitimate application.
• Disable AutoPlay to prevent the automatic launching of executable files on network and removable drives, and disconnect the drives when not required. If write access is not required, enable read-only mode if the option is available.
• Turn off file sharing if not needed. If file sharing is required, use ACLs and password protection to limit access. Disable anonymous access to shared folders. Grant access only to user accounts with strong passwords to folders that must be shared.
• Turn off and remove unnecessary services. By default, many operating systems install auxiliary services that are not critical. These services are avenues of attack.
If they are removed, threats have fewer avenues of attack.
For detailed instructions on how to download and install the Intelligent Updater virus definitions from the Symantec Security Response Web site, click here.

To scan for and delete the infected files:
1. Start your Symantec antivirus program, and make sure that it is configured to scan all files.
2. Run a full system scan.
3. If any files are detected as infected with VBS.Lavra.B.Worm, click Delete.

To delete the values that the Trojan added to the registry:

CAUTION: Symantec strongly recommends that you back up the registry before you make any changes to it. Incorrect changes to the registry can result in permanent data loss or corrupted files. Modify only the keys that are specified. Read the document How to make a backup of the Windows registry for instructions.

1. Click Start, and click Run. The Run dialog box appears.
2. Type regedit and then click OK. The Registry Editor opens.
3. Navigate to the key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run
4. In the right pane, delete these values:
LARVA    C:\windows\lbamht.vbs
LARVAx   C:\winnt\lbamht.vbs
5. Exit the Registry Editor.

To delete the lines that the Trojan added to the Autoexec.bat file:

This is necessary only on Windows 95/98/Me-based computers.

NOTE: (For Windows Me users only) Due to the file-protection process in Windows Me, a backup copy of the file that you are about to edit exists in the C:\Windows\Recent folder. Symantec recommends that you delete this backup copy before you continue with the steps in this section. To do this using Windows Explorer, go to C:\Windows\Recent, and in the right pane select the backup copy of the Autoexec.bat file and delete it. It will be regenerated as a copy of the file that you are about to edit when you save your changes to that file.

1. Click Start, and click Run.
2. Type the following, and then click OK.
edit c:\autoexec.bat
The MS-DOS Editor opens.
3. Look for these two lines:
@Start C:\Windows\lbamht.vbs>null
@Start C:\Winnt\lbamht.vbs>null
4. If one or both exist, then for each one select the entire line. Be sure that you have not selected any other text, and then press Delete.
5. Click File, and click Save.
6. Click File, and click Exit.

Writeup By: Kaoru Hayashi
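Note (not part of the original writeup): on systems where the reg.exe command-line utility is available, the two registry values from step 3 of the removal instructions can also be deleted from a command prompt instead of through the Registry Editor:

```
reg delete "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Run" /v LARVA /f
reg delete "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Run" /v LARVAx /f
```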
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9477817416191101, "perplexity": 19863.537801225517}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195527196.68/warc/CC-MAIN-20190721185027-20190721211027-00539.warc.gz"}
http://math.stackexchange.com/questions/115046/there-are-infinitely-many-triangular-numbers-that-are-the-sum-of-two-other-such
There are infinitely many triangular numbers that are the sum of two other such numbers

In Exercise 9, page 16, of Burton's book Elementary Number Theory, he states the following:

Establish the identity $t_{x}=t_{y}+t_{z}$ (where $t_{n}$ is the $n$th triangular number), with $$x=\frac{n(n+3)}{2}+1,\qquad y=n+1,\qquad z=\frac{n(n+3)}{2}$$ and $n\geq 1$, thereby proving that there are infinitely many triangular numbers that are the sum of two other such numbers.

I tried to find out how he got $x$, $y$ and $z$, but I've failed. I wrote $$\frac{y(y+1)}{2}+\frac{z(z+1)}{2}=\frac{x(x+1)}{2}$$ but I don't know what to do from here. How can one find $x$, $y$, $z$ as above?

- Multiply both sides by 8 and complete the squares; see what happens. – Will Jagy Feb 29 '12 at 23:33
- @WillJagy: I've found $(2y+1)^{2}+(2z+1)^{2}=(2x+1)^{2}+1$. +1 for you. – spohreis Feb 29 '12 at 23:49
- @WillJagy: I just noticed your comment. Please read this thread: meta.math.stackexchange.com/questions/1559/…. There are reasons to add an answer instead of a comment, even if it is just a hint (in this case, a substantial one, so it deserves to be an answer, IMO). – Aryabhata Mar 1 '12 at 0:19
- @Aryabhata, I see what you mean. In this case I felt multiplying by 8 was what I would do next, but I did not have time to finish. Also, I was a little unclear about the OP's notation. – Will Jagy Mar 1 '12 at 0:40

Triangular numbers are the sum of the first $m$ positive integers, so clearly for any $m$ there is always a triangular number which, when $m$ is added to it, is a triangular number: $$\frac{m(m+1)}{2}=m+\frac{m(m-1)}{2}.$$ There are others: for example, any odd number greater than $1$ is the difference between two triangular numbers two steps apart, while any multiple of $3$ greater than $3$ is the difference between two triangular numbers three steps apart, etc. So if $m$ is any triangular number, say $t_k$ where $m=\frac{k(k+1)}{2}$, then we have a triangular number which is the sum of two triangular numbers, and since there are infinitely many triangular numbers, there are infinitely many cases of this. In this case we have $t_{m}=t_k + t_{m-1}$, or $$t_{k(k+1)/2}=t_k+t_{k(k+1)/2 - 1}.$$ Now let $k=n+1$, so $\frac{k(k+1)}{2} - 1 =\frac{(n+1)(n+2)}{2} -1 = \frac{n(n+3)}{2}$ and similarly $\frac{k(k+1)}{2}=\frac{(n+1)(n+2)}{2} = \frac{n(n+3)}{2}+1$. So the last expression of the previous result becomes $$t_{n(n+3)/2 + 1}=t_{n+1}+t_{n(n+3)/2}.$$ This explains how he got his result. It does not explain why he prefers the final step over the slightly simpler previous step.

One way to try to come up with this would be to start from the other side: $$\frac{y(y+1)}{2} + \frac{z(z+1)}{2} = \frac{x(x+1)}{2}$$ Multiplying by $8$ and adding two gives us $$(2y+1)^2 + (2z+1)^2 = (2x+1)^2 + 1$$ i.e. $$(2y+1)^2 - 1 = (2x+1)^2 - (2z+1)^2$$ and so $$y(y+1) = (x-z)(x+z+1)$$ Given a $y$, we can get a solution by putting $$x - z = 1$$ and $$x+z+1 = y(y+1)$$ and solving the system of equations.

I suspect the choices come from this: start with $$(2y+1)^2 + (2z+1)^2 = (2x+1)^2 + 1.$$ If you now try to see what happens when $$x = z + 1,$$ it reduces to $$(2y+1)^2 = 8 z + 9.$$ Once again, if, instead of $y$ itself, we take $$y = n+1,$$ we have $$(2 n+3)^2 = 8 z + 9$$ or $$n^2 + 3 n = 2 z$$ or $$z = \frac{n^2 + 3 n}{2}.$$
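As a quick numerical sanity check (not part of the original thread), the identity can be verified for small $n$ with a few lines of Python:

```python
def t(n):
    """n-th triangular number."""
    return n * (n + 1) // 2

for n in range(1, 6):
    x = n * (n + 3) // 2 + 1
    y = n + 1
    z = n * (n + 3) // 2
    assert t(x) == t(y) + t(z)
    print(f"n={n}: t_{x} = t_{y} + t_{z} = {t(x)}")
```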
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9096264243125916, "perplexity": 145.4350129032567}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049277091.36/warc/CC-MAIN-20160524002117-00168-ip-10-185-217-139.ec2.internal.warc.gz"}
https://www.ideals.illinois.edu/handle/2142/22687
## Files in this item

9215829.pdf (3MB), PDF (no description provided)

## Description

Title: Scanning tunneling microscopy of silicon(100) 2 x 1
Author(s): Hubacek, Jerome S.
Doctoral Committee Chair(s): Lyding, Joseph W.
Department / Program: Electrical and Computer Engineering
Discipline: Electrical and Computer Engineering
Degree Granting Institution: University of Illinois at Urbana-Champaign
Degree: Ph.D.
Genre: Dissertation
Subject(s): Engineering, Electronics and Electrical; Physics, Condensed Matter; Engineering, Materials Science

Abstract: The Si(100) 2 x 1 surface, a technologically important surface in microelectronics and silicon molecular beam epitaxy (MBE), has been studied with the scanning tunneling microscope (STM) to attempt to clear up the controversy that surrounds previous studies of this surface. To this end, an ultra-high vacuum (UHV) STM/surface science system has been designed and constructed to study semiconductor surfaces. Clean Si(100) 2 x 1 surfaces have been prepared and imaged with the STM. Atomic resolution images probing both the filled states and empty states indicate that the surface consists of statically buckled dimer rows. With electronic device dimensions shrinking to smaller and smaller sizes, the Si-SiO$_2$ interface is becoming increasingly important and, although it is the most popular interface used in the microelectronics industry, little is known about the initial stages of oxidation of the Si(100) surface. Scanning tunneling microscopy has been employed to examine Si(100) 2 x 1 surfaces exposed to molecular oxygen in UHV. Ordered rows of bright and dark spots, rotated $45^\circ$ from the silicon dimer rows, appear in the STM images, suggesting that the Si(100)-SiO$_2$ interface may be explained with a $\beta$-cristobalite(100) structure rotated by $45^\circ$ on the Si(100) surface.

Issue Date: 1992
Type: Text
Language: English
URI: http://hdl.handle.net/2142/22687
Rights Information: Copyright 1992 Hubacek, Jerome S.
Date Available in IDEALS: 2011-05-07
Identifier in Online Catalog: AAI9215829
OCLC Identifier: (UMI)AAI9215829
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36839014291763306, "perplexity": 6023.842737091888}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376826306.47/warc/CC-MAIN-20181214184754-20181214210754-00183.warc.gz"}
https://stats.stackexchange.com/questions/298757/factor-analysis-questions-related-to-estimating-and-generating-factor-scores
# Factor analysis - Questions related to estimating and generating factor scores

I'm using factor analysis to combine three independent variables for further use in logistic regression. According to the textbook I'm reading, there are two main options for computing a metric (composite score) for a factor: estimating a factor score (the regression method) and generating a factor score. Estimated factor scores are standardized and weighted values that show the standing of each individual on the factor. Generated factor scores are raw and unweighted values obtained for each individual by either summing or averaging only those variables loading most strongly on a factor.

The textbook states that if one chooses to estimate factor scores, one should assess the factor determinacy coefficient (Beauducel, 2011) before using the factor scores as variables in subsequent analyses. This is because estimated factor scores have the problem of not being unique values (factor indeterminacy). The factor determinacy coefficient should then be at least 0.90 for the factor score to substitute for the observed variables.

I have two questions related to the above:

• How can I assess the factor determinacy coefficient when estimating factor scores?
• How can I generate factor scores?

Thus far I have tried several different R libraries for doing factor analysis, but as far as I understand they all use some variation of estimating factor scores, and I cannot find any way to assess the factor determinacy coefficient. The function fa in the library psych does contain a variable called r.scores after estimating factor scores, which I thought might be relevant. However, it only works when more than one factor is specified (otherwise its value is always 1). Here is some code to illustrate my approach:

```r
library(psych)
f <- fa(ds[ ,c(14,15,17)], nfactors = 1, scores="regression")
f$r.scores # Not useful with 1 factor
factor1 <- f$scores[ ,1] # Estimated factor scores

# Using factor scores in logistic regression, controlling for some demographic variables
fit <- glm(certified ~ factor1 + age + gender, data = ds, family = binomial())
```

• Did you read the detailed answer about factor scores? It sounds like what you call "generated scores" is what it calls the coarse method, and "estimated scores" are its refined methods. The R-squared of determination of the estimated scores by the variables is mentioned there (you have to compute the estimated scores in order to know it). – ttnphns Aug 19 '17 at 8:47
• Most of your question deals with "how to do it in R"; please note that questions formulated that way are generally off-topic. Can you make the question less software-specific? – ttnphns Aug 19 '17 at 8:49
• Don't see how the "logistic" tag fits in here, so removing it. – ttnphns Aug 19 '17 at 8:51
• I don't say you should remove the code, no; the more so as it is annotated. I just said that there were too many R-library-specific points in your question, for me. Otherwise your question is very good. – ttnphns Aug 19 '17 at 9:24
• Please note that the factor score determinacy coefficient is easily known only with the regression (Thurstone's) method, which maximizes it. I'm not aware whether the coefficient can be obtained for other estimation methods. Actually, I doubt it can, because we don't know the true factor values to check against. If you find in the literature that it is possible, then please point me to it.
– ttnphns Aug 19 '17 at 9:33

Regarding the first of the questions: I contacted the author of the psych library (Professor William Revelle), and was informed that the fa function can actually report three different estimates of factor score indeterminacy after conducting factor analysis. The three estimates to look for are "Correlation of scores with factors", "Multiple R square of scores with factors" and "Minimum correlation of possible factor scores". Code example in R:

```r
f <- fa(ds[ ,c(14,15,17)], nfactors = 1, scores="regression")
print(f) # The three estimates of factor score indeterminacy are printed last.
```

• Jea, thank you for inquiring. Can you ask the author as well: do these measures also apply to methods of score estimation other than "regression", and if yes, how can the measures be computed in those instances? – ttnphns Aug 21 '17 at 11:52
• @ttnphns Yes, I can ask him. I will let you know. – Jea Aug 21 '17 at 12:13
• @ttnphns Here are the answers that I received: "The factor score indeterminacy measure as reported in the fa function is based upon the regression methodology of finding the scores. However, the factor score R2 is also reported if you use factor.scores. This is the correct R2 for the method you choose. To make it easier, I have now revised fa so that if the R2 from the scoring method are different from the regression R2, then I report them both." The changes are added to the newest release of the psych library. – Jea Sep 15 '17 at 8:25
• Jea, thank you. But did the author explain (to you, or in his available documentation) how the R-squared (between the scores and the unknown true values) is computed for methods other than the regression method? The formula? – ttnphns Sep 15 '17 at 9:13
• @ttnphns Unfortunately this was not answered in the email I received. I am off on vacation now, so I cannot follow this up any further, but there might be some information in the manual: cran.r-project.org/web/packages/psych/psych.pdf (the description of the fa method starts at page 124). Also, the R code is available from cran.r-project.org/web/packages/psych/index.html (the email address of the author is also found there). I think you will find most of the relevant code if you download the package source and open psych/R/fa.R. – Jea Sep 15 '17 at 12:15
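Neither the question nor the answer shows the second route (generating coarse scores). For completeness, here is a minimal R sketch under the question's own assumptions (columns 14, 15 and 17 of ds are the items loading on the factor); this illustrates the coarse method and is not code from the thread:

```r
# Coarse ("generated") factor scores: average the strongly loading items.
# Standardizing first keeps items with different units comparable.
items <- ds[, c(14, 15, 17)]
generated <- rowMeans(scale(items))

# The raw composite can then be used like any other predictor:
fit <- glm(certified ~ generated + age + gender, data = ds, family = binomial())
```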
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.472331166267395, "perplexity": 1690.7083461874904}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347425481.58/warc/CC-MAIN-20200602162157-20200602192157-00082.warc.gz"}
https://groupprops.subwiki.org/wiki/Fully_invariant_subgroup
# Fully invariant subgroup

## Definition

QUICK PHRASES: invariant under all endomorphisms, endomorphism-invariant

### Equivalent definitions in tabular format

A subgroup $H$ of a group $G$ is termed fully invariant or fully characteristic if it satisfies the following equivalent conditions:

| No. | Shorthand | A subgroup of a group is termed fully invariant if ... | A subgroup $H$ of a group $G$ is termed a fully invariant subgroup of $G$ if ... |
| --- | --- | --- | --- |
| 1 | endomorphism-invariant | it is invariant under all endomorphisms of the whole group | for any endomorphism $\sigma$ of $G$, $\sigma(H) \subseteq H$; or equivalently, $\sigma(h) \in H$ for all $h \in H$ |
| 2 | endomorphism restricts to endomorphism | every endomorphism of the whole group restricts to an endomorphism of the subgroup | for any endomorphism $\sigma$ of $G$, $\sigma(H) \subseteq H$, and the restriction of $\sigma$ to $H$ is an endomorphism of $H$ |

This article defines a subgroup property: a property that can be evaluated to true/false given a group and a subgroup thereof, invariant under subgroup equivalence. This is a variation of characteristicity.

## Examples

### Extreme examples

1. The trivial subgroup is always fully invariant.
2. Every group is fully invariant as a subgroup of itself.

### Examples

1. High occurrence example: In a cyclic group, every subgroup is fully invariant. That's because any subgroup can be described as the set of all $m$th powers for some choice of $m$, and such a set is clearly invariant under endomorphisms. (In fact, it is a verbal subgroup.)
2. More generally, in any abelian group, the set of $m$th powers is a verbal subgroup, and hence fully invariant. The set of elements whose order divides $m$ is also fully invariant, though not necessarily verbal (for instance, in the group of all roots of unity, the subgroup of $m$th roots of unity for fixed $m$ is fully invariant but not verbal).
3. In a (possibly) non-abelian group, certain subgroup-defining functions always yield a fully invariant subgroup. For instance, the derived subgroup is fully invariant, and so are all terms of the lower central series as well as the derived series.

### Non-examples

1. In an elementary abelian group, and more generally in a characteristically simple group, there is no proper nontrivial fully invariant subgroup (in fact, there's no proper nontrivial characteristic subgroup, either).
2. There do exist characteristic subgroups that are not fully invariant; in fact, the center, and terms of the upper central series, may be characteristic but not fully invariant.
Further information: center not is fully invariant

## Metaproperties

| Metaproperty name | Satisfied? | Proof | Statement with symbols |
| --- | --- | --- | --- |
| transitive subgroup property | Yes | full invariance is transitive | If $H \le K \le G$, with $H$ fully invariant in $K$ and $K$ fully invariant in $G$, then $H$ is fully invariant in $G$. |
| trim subgroup property | Yes | | The trivial subgroup and the whole group are always fully invariant. |
| intermediate subgroup condition | No | full invariance does not satisfy intermediate subgroup condition | It is possible to have $H \le K \le G$ such that $H$ is a fully invariant subgroup of $G$ but $H$ is not a fully invariant subgroup of $K$. |
| strongly intersection-closed subgroup property | Yes | full invariance is strongly intersection-closed | If $H_i, i \in I$, are all fully invariant subgroups of $G$, then $\bigcap_{i \in I} H_i$ is also fully invariant in $G$. |
| strongly join-closed subgroup property | Yes | full invariance is strongly join-closed | If $H_i, i \in I$, are all fully invariant subgroups of $G$, then the join $\langle H_i \rangle_{i \in I}$ is also fully invariant in $G$. |
| commutator-closed subgroup property | Yes | full invariance is commutator-closed | If $H, K$ are fully invariant subgroups of $G$, so is $[H, K]$. |
| quotient-transitive subgroup property | Yes | full invariance is quotient-transitive | If $H \le K \le G$ such that $H$ is fully invariant in $G$ and $K/H$ is fully invariant in $G/H$, then $K$ is fully invariant in $G$. |
| finite direct power-closed subgroup property | Yes | full invariance is finite direct power-closed | If $H$ is fully invariant in $G$, then in any finite direct power $G^n$ of $G$, the corresponding direct power $H^n$ is fully invariant. |
| restricted direct power-closed subgroup property | Yes | full invariance is restricted direct power-closed | If $H$ is fully invariant in $G$, then in any restricted direct power of $G$, the corresponding restricted direct power of $H$ is fully invariant. |
| direct power-closed subgroup property | No | full invariance is not direct power-closed | It is possible to have a fully invariant subgroup $H$ inside a group $G$ and an infinite cardinal $\alpha$ such that the direct power $H^\alpha$ is not a fully invariant subgroup inside the direct power $G^\alpha$. |
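To illustrate the typical verification pattern behind the examples above (a standard one-line argument, not part of the original page), here is the check that the derived subgroup is fully invariant: for any endomorphism $\sigma$ of $G$ and any commutator,

$$\sigma([x,y]) = \sigma(x^{-1}y^{-1}xy) = \sigma(x)^{-1}\sigma(y)^{-1}\sigma(x)\sigma(y) = [\sigma(x),\sigma(y)] \in [G,G],$$

so $\sigma$ maps a generating set of $[G,G]$ into $[G,G]$, and hence $\sigma([G,G]) \subseteq [G,G]$.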
## Relation with other properties

### Stronger properties

- **Verbal subgroup**: defined as the set of elements expressible by certain words. Proof of implication: verbal implies fully invariant; proof of strictness: fully invariant not implies verbal (see also the list of examples). Intermediate notions: existentially bound-word subgroup, image-closed fully invariant subgroup, intersection of finitely many verbal subgroups, pseudoverbal subgroup, quasiverbal subgroup, quotient-subisomorph-containing subgroup, weakly image-closed fully invariant subgroup.
- **Intersection of finitely many verbal subgroups**: an intersection of a finite number of verbal subgroups.
- **Pseudoverbal subgroup**: defined as the intersection of normal subgroups for which the quotient group is in a particular pseudovariety. Intermediate notions: quotient-subisomorph-containing subgroup.
- **Existentially bound-word subgroup**: defined as the set of elements satisfying a system of equations. Proof of implication: existentially bound-word implies fully invariant; proof of strictness: fully invariant not implies existentially bound-word.
- **Homomorph-containing subgroup**: contains every homomorphic image. Proof of implication: homomorph-containing implies fully invariant; proof of strictness: fully invariant not implies homomorph-containing. Intermediate notions: intermediately fully invariant subgroup, sub-homomorph-containing subgroup.
- **Subhomomorph-containing subgroup**: contains every homomorphic image of every subgroup. Implication and strictness: via homomorph-containing. Intermediate notions: homomorph-containing subgroup, intermediately fully invariant subgroup, sub-homomorph-containing subgroup, transfer-closed fully invariant subgroup.
- **Order-containing subgroup**: contains every subgroup whose order divides its order. Implication and strictness: via homomorph-containing. Intermediate notions: homomorph-containing subgroup, image-closed fully invariant subgroup, subhomomorph-containing subgroup.
- **Variety-containing subgroup**: contains every subgroup in the variety of groups generated by it. Implication and strictness: via homomorph-containing subgroup. Intermediate notions: homomorph-containing subgroup, intermediately fully invariant subgroup, subhomomorph-containing subgroup, transfer-closed fully invariant subgroup.
- **Normal subgroup having no nontrivial homomorphism to its quotient group**: no nontrivial homomorphism to the quotient group. Intermediate notions: homomorph-containing subgroup, intermediately fully invariant subgroup, quotient-subisomorph-containing subgroup.
- **Normal Hall subgroup**: normal and Hall; its order and index are relatively prime. Intermediate notions: complemented fully invariant subgroup, complemented homomorph-containing subgroup, homomorph-containing subgroup, image-closed fully invariant subgroup, intermediately fully invariant subgroup, normal subgroup having no nontrivial homomorphism to its quotient group, order-containing subgroup, quotient-subisomorph-containing subgroup, sub-homomorph-containing subgroup, variety-containing subgroup.
- **Normal Sylow subgroup**: normal and Sylow. Intermediate notions: complemented fully invariant subgroup, complemented homomorph-containing subgroup, homomorph-containing subgroup, image-closed fully invariant subgroup, intermediately fully invariant subgroup, normal Hall subgroup, normal subgroup having no nontrivial homomorphism to its quotient group, order-containing subgroup, quotient-subisomorph-containing subgroup, sub-homomorph-containing subgroup, variety-containing subgroup.
- **Quotient-subisomorph-containing subgroup**: proof of implication: quotient-subisomorph-containing implies fully invariant; proof of strictness: fully invariant not implies quotient-subisomorph-containing. Intermediate notions: weakly image-closed fully invariant subgroup.
- **Image-closed fully invariant subgroup**: under any surjective homomorphism, its image is fully invariant in the image of the group. Strictness: full invariance does not satisfy the image condition. Intermediate notions: weakly image-closed fully invariant subgroup.
- **Intermediately fully invariant subgroup**: fully invariant in every intermediate subgroup. Strictness: full invariance does not satisfy the intermediate subgroup condition.
- **Transfer-closed fully invariant subgroup**: its intersection with any subgroup is fully invariant in that subgroup. Strictness: full invariance does not satisfy the transfer condition.

### Weaker properties

- **Characteristic subgroup**: invariant under all automorphisms. Proof of implication: fully invariant implies characteristic; proof of strictness: characteristic not implies fully invariant (see also the list of examples). Intermediate notions: finite direct power-closed characteristic subgroup, injective endomorphism-invariant subgroup, normality-preserving endomorphism-invariant subgroup, retraction-invariant characteristic subgroup, strictly characteristic subgroup. Comparison: characteristic versus fully invariant.
- **Normal subgroup**: invariant under all inner automorphisms. Implication and strictness: via characteristic. Intermediate notions: characteristic subgroup, finite direct power-closed characteristic subgroup, fully invariant-potentially fully invariant subgroup, image-potentially fully invariant subgroup, injective endomorphism-invariant subgroup, normal-potentially fully invariant subgroup, normality-preserving endomorphism-invariant subgroup, potentially fully invariant subgroup, retraction-invariant characteristic subgroup, retraction-invariant normal subgroup, strictly characteristic subgroup.
- **Strictly characteristic subgroup**: invariant under all surjective endomorphisms. Proof of implication: fully invariant implies strictly characteristic; proof of strictness: strictly characteristic not implies fully invariant. Intermediate notions: normality-preserving endomorphism-invariant subgroup.
- **Injective endomorphism-invariant subgroup**: invariant under all injective endomorphisms. Proof of strictness: injective endomorphism-invariant not implies fully invariant.
- **Retraction-invariant subgroup**: invariant under all retractions. Intermediate notions: retraction-invariant normal subgroup.
- **Retraction-invariant characteristic subgroup**: characteristic and retraction-invariant.
- **Retraction-invariant normal subgroup**: normal and retraction-invariant.
- **Endomorph-dominating subgroup**: every image under an endomorphism is conjugate to a subgroup of it.
- **Potentially fully invariant subgroup**: the subgroup is fully invariant in some bigger group. Intermediate notions: fully invariant-potentially fully invariant subgroup, normal-potentially fully invariant subgroup.
- **Finite direct power-closed characteristic subgroup**: any finite direct power of the subgroup is characteristic in the corresponding direct power of the whole group. Proof of implication: follows from the facts that full invariance is finite direct power-closed and fully invariant implies characteristic; proof of strictness: finite direct power-closed characteristic not implies fully invariant. Intermediate notions: normality-preserving endomorphism-invariant subgroup.

## Effect of property operators

BEWARE!
BEWARE! This section of the article uses terminology local to the wiki, possibly without giving a full explanation of the terminology used (though efforts have been made to clarify terminology as much as possible within the particular context).

Operator | Meaning | Result of application | Proof and related observations
--- | --- | --- | ---
potentially operator | fully invariant in some larger group | potentially fully invariant subgroup | by definition; any potentially fully invariant subgroup is normal, but normal not implies potentially fully invariant
intermediately operator | fully invariant in every intermediate subgroup | intermediately fully invariant subgroup | any homomorph-containing subgroup satisfies this property
image condition operator | image is fully invariant in any quotient group | image-closed fully invariant subgroup | any verbal subgroup satisfies this property

## Formalisms

BEWARE! This section of the article uses terminology local to the wiki, possibly without giving a full explanation of the terminology used (though efforts have been made to clarify terminology as much as possible within the particular context).

### Second-order description

This subgroup property is a second-order subgroup property, viz., it has a second-order description in the theory of groups.

The property of being fully invariant has a second-order description. A subgroup $H$ of a group $G$ is termed fully characteristic if:

$$\forall\, f: G \to G\ \Big[\big(\forall\, x, y \in G:\ f(xy) = f(x)f(y)\big) \implies \big(\forall\, h \in H:\ f(h) \in H\big)\Big]$$

The condition in parentheses is a verification that the function $f$ is an endomorphism of $G$.

### Function restriction expression

This subgroup property is a function restriction-expressible subgroup property: it can be expressed by means of the function restriction formalism, viz., there is a function restriction expression for it.

Function restriction expression | $H$ is a fully invariant subgroup of $G$ if ... | This means that full invariance is ... | Additional comments
--- | --- | --- | ---
endomorphism $\to$ function | every endomorphism of $G$ sends every element of $H$ to within $H$ | the invariance property for endomorphisms |
endomorphism $\to$ endomorphism | every endomorphism of $G$ restricts to an endomorphism of $H$ | the balanced subgroup property for endomorphisms | hence, it is a t.i. subgroup property: both transitive and identity-true
endomorphism $\to$ endomorphism | every endomorphism of $G$ restricts to an endomorphism of $H$ | the endo-invariance property for endomorphisms; i.e., it is the invariance property for endomorphism, which is a property stronger than the property of being an endomorphism |

## Testing

### GAP command

This subgroup property can be tested using built-in functionality of Groups, Algorithms, Programming (GAP). The GAP command for testing this subgroup property is `IsFullinvariant`. Note that this testing function requires an additional package, the SONATA package.

## State of discourse

### History

The concept was introduced by Levi in 1933 under the German name *vollinvariant* (translating to "fully invariant"). Both the terms "fully invariant" and "fully characteristic" are now in vogue.
### Resolution of questions that are easy to formulate

Any typical question about the behavior of fully invariant subgroups in arbitrary groups that is easy to formulate will also be easy to resolve, either with a proof or a counterexample, unless some other feature of the question significantly complicates it. This is so despite the fact that there are a large number of easy-to-formulate questions about the endomorphism monoid that are still open. The reason is that even though not enough is known about endomorphism monoids, there are other ways to obtain information about the structure of fully invariant subgroups. At one extreme are abelian groups, where the fully invariant subgroups are quite easy to get a handle on. At the other extreme are "all groups," where very little can be said about fully invariant subgroups beyond what can be proved through elementary reasoning. The most interesting situation is in the middle, for instance when we look at nilpotent groups and solvable groups. In these cases there are some restrictions on the structure of fully invariant subgroups, but the exact nature of the restrictions is hard to work out.
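To make the abelian extreme concrete, here is a minimal brute-force sketch in plain Python (an illustration under stated assumptions, not part of the original article): every endomorphism of a finite cyclic group Z_n is multiplication by a fixed residue, so every subgroup (the set of multiples of a divisor of n) is closed under all endomorphisms, i.e. fully invariant. The script checks this exhaustively for n = 12.

```python
# Brute-force verification that every subgroup of the cyclic group Z_n is
# fully invariant. Every endomorphism of Z_n is x -> k*x (mod n), so it
# suffices to check that each subgroup is closed under all such maps.
n = 12

# Subgroups of Z_n are exactly {0, d, 2d, ...} for each divisor d of n.
subgroups = [set(range(0, n, d)) for d in range(1, n + 1) if n % d == 0]

for H in subgroups:
    for k in range(n):  # the endomorphism x -> k*x (mod n)
        assert all((k * h) % n in H for h in H)

print("All", len(subgroups), "subgroups of Z_%d are fully invariant." % n)
```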
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 80, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9099724292755127, "perplexity": 2211.291208664218}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187820700.4/warc/CC-MAIN-20171017033641-20171017053641-00204.warc.gz"}
https://www.physicsforums.com/threads/fourier-transform-convergence.727314/
# Fourier Transform Convergence

1. ### nabeel17

At the extreme ends, from -infinity to infinity, do Fourier transforms always converge to 0? I know that in the case of signals you can never have an infinite signal, so it does go to 0; but speaking generally, if you are taking the Fourier transform of f(x) and you do integration by parts, you get a term ##\left[f(x)e^{ikx}\right]_{-\infty}^{\infty}##. Why does this always equal 0?

2. ### Staff: Mentor

No, not always. If a signal is periodic in one domain then it is discrete in the other domain. So if you have a signal which is discrete in time, then it is periodic in frequency. Since it is periodic in frequency, it does not converge to 0 at infinity.

3. ### nabeel17

OK, then why is it that the first term in the integration by parts goes to 0 regardless of the function (whether it is periodic or not)? For example, when finding the Fourier transform of a derivative:

##F[f'(x)] = \int_{-\infty}^{\infty} f'(x)e^{ikx}\,dx = \left[f(x)e^{ikx}\right]_{-\infty}^{\infty} - ik\int_{-\infty}^{\infty} f(x)e^{ikx}\,dx##

The first term = 0; why is that? If it were a wave function like in QM, it would make sense, because the area under the wave function must be finite and converge to 0 at the extremes for it to have a probability density. But why here?

4. ### Staff: Mentor

I think that the various properties of the Fourier transform all assume that f satisfies the Dirichlet conditions.

5. ### AlephZero

The OP is asking about Fourier transforms, not Fourier series (of periodic functions), which is what #2 and #4 appear to be about.

A reasonable condition for Fourier transforms to behave sensibly is that ##\int_{-\infty}^{+\infty}|f(x)|\,dx## is finite. Note that if you use Lebesgue measure to define integration, that does not imply ##f(x)## converges to 0 as x tends to infinity: ##f(x)## can take any values on a set of measure zero. (Also note, "reasonable" does not necessarily mean either "necessary" or "sufficient"!)

The mathematical correspondence between Fourier series and Fourier transforms is not quite "obvious," since the Fourier transform of a periodic function (defined by an integral with an infinite range) involves Dirac delta functions; indeed, the Fourier transform of a periodic function is identically zero except on a set of measure zero (i.e., the points usually called the "Fourier coefficients"). On the other hand, if you integrate over one period of a periodic function, it is a lot simpler to get to some practical results, even if you have to skate over why the math "really" works out that way.
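To complement AlephZero's point numerically, here is a minimal sketch (NumPy assumed available): f(x) = exp(-|x|) is absolutely integrable, and its transform with the e^{ikx} kernel is exactly 2/(1 + k^2), which visibly decays as k grows, consistent with the Riemann-Lebesgue lemma.

```python
import numpy as np

# F(k) = integral of f(x) * exp(i*k*x) dx for the absolutely integrable
# f(x) = exp(-|x|); the exact answer is 2 / (1 + k^2), which tends to 0.
x = np.linspace(-30.0, 30.0, 2_000_001)
f = np.exp(-np.abs(x))

for k in (1.0, 10.0, 100.0, 1000.0):
    F = np.trapz(f * np.exp(1j * k * x), x)  # trapezoidal-rule estimate
    print("k = %6.0f   |F(k)| = %.3e   exact = %.3e"
          % (k, abs(F), 2.0 / (1.0 + k**2)))
```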
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9774683713912964, "perplexity": 335.50031733167395}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207932596.84/warc/CC-MAIN-20150521113212-00323-ip-10-180-206-219.ec2.internal.warc.gz"}
https://www.hepdata.net/record/ins746743
# Measurement of $\sigma_{\chi_{c2}}{\cal B}(\chi_{c2} \to J/\psi \gamma)/\sigma_{\chi_{c1}} {\cal B}(\chi_{c1} \to J/\psi \gamma)$ in $p \bar{p}$ collisions at $\sqrt{s}$ = 1.96 TeV

The collaboration. Phys.Rev.Lett. 98 (2007) 232001, 2007

Abstract (data abstract): Measurement of the cross-section ratio SIG(CHI_C2)/SIG(CHI_C1) times branching ratio.

#### Table 1

Data from T 1, P 5. 10.17182/hepdata.57248.v1/t1

Ratios of cross section times branching fractions of the $\chi_{cJ}$ states for the prompt events and B decay events. Relative...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9927158951759338, "perplexity": 16173.728455787556}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875144111.17/warc/CC-MAIN-20200219092153-20200219122153-00055.warc.gz"}
https://gmatclub.com/forum/equilateral-triangle-abc-is-inscribed-within-a-circle-as-shown-above-242056.html?sort_by_oldest=true
# Equilateral triangle ABC is inscribed within a circle as shown above.

### Math Expert (Bunuel), 06 Jun 2017

Equilateral triangle ABC is inscribed within a circle as shown above. If the circle has an area of 36π, what is the length of minor arc AC?

A. 3π
B. 4π
C. 5π
D. 6π
E. 9π

(Attachment: EquilateralCircle.png)

### Intern, 06 Jun 2017

Triangle ABC is equilateral, thus minor arcs AC, AB, and BC are equal and each is 1/3 of the circumference (since these 3 arcs make up the whole circumference).

We're given that the area of the circle is 36π. Using the area formula we can derive the radius and then the circumference: $$π*r^2 = 36π$$, so the radius r is 6. Using the circumference formula of a circle, the circumference is $$2*π*r = 2*π*6 = 12π$$, and minor arc AC is $$\frac{12π}{3} = 4π$$.

### Target Test Prep Representative (Scott Woodbury-Stewart), 08 Dec 2019

Minor arc AC is 1/3 of the circumference. Since the area of the circle is 36π, the radius is 6, and thus the circumference is 12π. Minor arc AC is therefore 1/3 × 12π = 4π.

### Intern, 29 Dec 2019

"Minor arc AC is 1/3 of the circumference" -- can somebody explain this?
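In answer to the last question: the vertices of an equilateral triangle are equally spaced around the circle, so each side subtends a central angle of 360°/3 = 120°, and each minor arc is therefore one third of the circumference. A quick symbolic cross-check, as a minimal sketch assuming SymPy is available:

```python
from sympy import pi, sqrt

area = 36 * pi
r = sqrt(area / pi)              # A = pi*r^2  ->  r = 6
circumference = 2 * pi * r       # 2*pi*6 = 12*pi
arc_AC = circumference / 3       # one third of the circle
print(r, circumference, arc_AC)  # 6 12*pi 4*pi
```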
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.783209502696991, "perplexity": 4991.8365054241885}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251728207.68/warc/CC-MAIN-20200127205148-20200127235148-00098.warc.gz"}
http://mathhelpforum.com/algebra/206516-dimensions-problem.html
# Math Help - Dimensions Problem

1. ## Dimensions Problem

First I must apologize if this problem is in the incorrect part of the board; I honestly could not figure out what area it would fall under. If anyone read my intro a good while back: I'm in college and have to take basic college mathematics before I can get to core classes, because I chose to pay literally no attention in class when I was in high school. Anyway, this was the one problem I got incorrect on my math midterm. I realized that the way to solve it was SO easy, but I decided to post it so I can get some feedback on whether I did it correctly (I'm positive this time that I did) and also to give my first attempt at the LaTeX thing. The word problem goes as such:

"A frame that is 18 inches by 24 inches has a mat in it that is 2 1/4 inches all around. What are the dimensions of the picture within the mat?"

I took a photo of the picture; I hope the quality is decent enough. So...

Code:
2 1/4 + 2 1/4 = 4.5
18 - 4.5 = 13.5
24 - 4.5 = 19.5
thus the answer is 13.5 by 19.5

What say you?

2. ## Re: Dimensions Problem

Hm, would anyone also be so kind as to critique the way I wrote out the formula, so I know how the LaTeX should have been set up? Thanks in advance for any consideration, guys.

3. ## Re: Dimensions Problem

You did well with the problem, although some instructors may want units with the numbers (inches, in this case). To use LaTeX, enclose your code within the TEX tags, which you can generate using the sigma button.

4. ## Re: Dimensions Problem

So like this?

$2 1/4 + 2 1/4 = 4.5 18 - 4.5 = 13.5 24 - 4.5 = 19.5$

Answer is $19.5" by 13.5"$

Edit 1: It would appear there are still some kinks in LaTeX for me to figure out. Let me try it this way:

$2 1/4 + 2 1/4 = 4.5$
$18 - 4.5 = 13.5$
$24 - 4.5 = 19.5$

Edit 2: How do I make it actually look like a fraction?

5. ## Re: Dimensions Problem

You have the right idea; for 2 1/4 you may wish to use the code 2\frac{1}{4}. I'm not sure where the < br/ > symbols are coming from.

6. ## Re: Dimensions Problem

$2\frac{1}{4} + 2\frac{1}{4} = 4.5$ $18 - 4.5 = 13.5$ $24 - 4.5 = 19.5$

Well, I got the weird symbols to go away by keeping the equations on one long line rather than giving them their own lines. However, doesn't it seem a bit sloppy this way? It might be best if there is actually a LaTeX tutorial somewhere on this site? I would hate to waste your time with this stuff.

7. ## Re: Dimensions Problem

There is a forum here dedicated to using LaTeX, and there are many tutorials online; just do a search on it. I usually enclose each equation or expression separately, rather than inserting carriage returns within the tags.

8. ## Re: Dimensions Problem

Dear Lord, lol, how did I not notice that forum? Thanks for all the help.
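Following up on the fraction question, here is one way the whole computation could be typeset; this is a minimal sketch using the amsmath align* environment, with \tfrac for a compact mixed fraction (the units are inches, per post #3):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\begin{align*}
  2\tfrac{1}{4} + 2\tfrac{1}{4} &= 4.5\\
  18 - 4.5 &= 13.5\\
  24 - 4.5 &= 19.5
\end{align*}
The picture within the mat is $13.5$ inches by $19.5$ inches.
\end{document}
```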
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9047428965568542, "perplexity": 853.3919713796657}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042989826.86/warc/CC-MAIN-20150728002309-00065-ip-10-236-191-2.ec2.internal.warc.gz"}
https://arxiv.org/abs/1608.08640
hep-ph

# Title: Search for sharp and smooth spectral signatures of $μν$SSM gravitino dark matter with Fermi-LAT

Abstract: The $\mu\nu$SSM solves the $\mu$ problem of supersymmetric models and reproduces neutrino data, simply using couplings with right-handed neutrinos $\nu$'s. Given that these couplings explicitly break $R$ parity, the gravitino is a natural candidate for decaying dark matter in the $\mu \nu$SSM. In this work we carry out a complete analysis of the detection of $\mu \nu$SSM gravitino dark matter through $\gamma$-ray observations. In addition to the two-body decay producing a sharp line, we include in the analysis the three-body decays producing a smooth spectral signature. We first perform a deep exploration of the low-energy parameter space of the $\mu \nu$SSM, taking into account that neutrino data must be reproduced. Then, we compare the $\gamma$-ray fluxes predicted by the model with Fermi-LAT observations; in particular, with the 95$\%$ CL upper limits on the total diffuse extragalactic $\gamma$-ray background using 50 months of data, together with the upper limits on line emission from an updated analysis using 69.9 months of data. For standard values of bino and wino masses, gravitinos with masses larger than about 4 GeV, or lifetimes smaller than $10^{28}$ s, produce too large fluxes and are excluded as dark matter candidates. However, when limiting scenarios with large and close values of the gaugino masses are considered, the constraints turn out to be less stringent, excluding masses larger than 17 GeV and lifetimes smaller than $4\times 10^{25}$ s.

Comments: Minor changes, references added, version published in JCAP. 23 pages, 7 figures, 3 tables
Subjects: High Energy Physics - Phenomenology (hep-ph); High Energy Astrophysical Phenomena (astro-ph.HE)
DOI: 10.1088/1475-7516/2017/03/047
Cite as: arXiv:1608.08640 [hep-ph] (or arXiv:1608.08640v2 [hep-ph] for this version)

## Submission history

From: Daniel Elbio Lopez
[v1] Tue, 30 Aug 2016 20:00:35 GMT (130kb,D)
[v2] Fri, 17 Mar 2017 23:20:55 GMT (131kb,D)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8558283448219299, "perplexity": 3420.4607373976733}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934803848.60/warc/CC-MAIN-20171117170336-20171117190336-00251.warc.gz"}
https://brilliant.org/discussions/thread/how-do-i-integrate-this/
# How do I integrate this?

I came across this problem today:

"Verify for $$u(x,y)=e^{x}\sin(y)$$ the mean value theorem for harmonic functions on a circle $$C$$ of radius $$r=1$$, with its centre at $$z=2+2i$$."

I tried to simplify it but I got stuck at the integral of $$\cosh(e^{i\theta})$$. So my question is: how do I integrate $$\cosh(e^{i\theta})$$? I know that it is somehow related to $$\text{Chi}(e^{i\theta})$$, but I don't know how.

Note by Vishnu C, 2 years, 6 months ago

Sort by:

After some simplification I was able to verify, by integration, that it is true for the given function. But the question still stands: how is it related to $$\text{Chi}(e^{i\theta})$$? I was able to solve the case where the function had limits from 0 to 2π, i.e., I had to use some properties of definite integrals to simplify it. But is it possible to evaluate it with a general limit? - 2 years, 6 months ago

@Sandeep Bhardwaj Sir, @Raghav Vaidyanathan @Shashwat Shukla @Pranjal Jain @Abhishek Sinha Sir: please help him. Thanks a lot! @vishnu c - 2 years, 6 months ago
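For the record, the connection to Chi can be made explicit with one substitution. This is a sketch, using the standard derivative $$\tfrac{d}{dz}\operatorname{Chi}(z) = \cosh(z)/z$$, and glossing over the branch of Chi along the path $$u = e^{i\theta}$$:

```latex
\text{Let } u = e^{i\theta}, \text{ so } du = i e^{i\theta}\,d\theta = iu\,d\theta,
\text{ i.e. } d\theta = \frac{du}{iu}. \text{ Then}
\int \cosh\!\left(e^{i\theta}\right) d\theta
  = \frac{1}{i}\int \frac{\cosh u}{u}\,du
  = -i\,\operatorname{Chi}(u) + C
  = -i\,\operatorname{Chi}\!\left(e^{i\theta}\right) + C.
```

So the antiderivative is $$-i\,\text{Chi}(e^{i\theta})$$ up to a constant, which is exactly the relation hinted at in the question; for a general limit one must track the branch cut of Chi when $$e^{i\theta}$$ crosses the negative real axis.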
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9977700710296631, "perplexity": 2918.772096743929}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187828189.71/warc/CC-MAIN-20171024071819-20171024091819-00179.warc.gz"}
https://www.electro-tech-online.com/threads/fixed-voltage-mppt.149336/
# Fixed-voltage MPPT

Status: Not open for further replies.

#### Hiro Okamura, New Member

Hi everyone. I am new here, so I'm not sure how these threads work (format-wise). I am doing an assignment in which I need to simulate PV panels and track their maximum power point. I am wondering if anyone knows what the initial dip in my power graph is called and how to correct it?

(Attachment: power graph, 4.3 KB)

#### ronsimpson, Well-Known Member

What panel? Data sheet? What simulator? Spice? LTspice? How did you get the graph? Welcome!

#### Seyit Yıldırım, New Member

> what this initial dip is called in my power graph

The initial dip you mention occurs because of shading, or because solar arrays are connected in series/parallel and the individual panels cannot all generate the same voltage.
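For the simulation side of the question, here is a minimal sketch of a single-diode PV model in Python (NumPy assumed; the parameter values below are illustrative round numbers, not taken from any data sheet). It sweeps the voltage, builds the power curve, and picks the maximum power point numerically, which is essentially the point an MPPT routine has to track:

```python
import numpy as np

# Single-diode PV cell model (illustrative parameters, not a real panel):
#   I(V) = Iph - I0 * (exp(V / (n * Vt)) - 1)
Iph = 5.0      # photocurrent, A
I0 = 1e-9      # diode saturation current, A
n = 1.3        # diode ideality factor
Vt = 0.02585   # thermal voltage at ~300 K, V

V = np.linspace(0.0, 0.75, 2000)
I = np.clip(Iph - I0 * np.expm1(V / (n * Vt)), 0.0, None)  # no negative current
P = V * I

k = np.argmax(P)  # index of the maximum power point
print("MPP: V = %.3f V, I = %.3f A, P = %.3f W" % (V[k], I[k], P[k]))
```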
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8046231865882874, "perplexity": 4800.3220514120985}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988775.80/warc/CC-MAIN-20210507090724-20210507120724-00304.warc.gz"}
https://socratic.org/questions/if-an-object-with-uniform-acceleration-or-deceleration-has-a-speed-of-3-m-s-at-t-6
Physics Topics

# If an object with uniform acceleration (or deceleration) has a speed of 3 m/s at t=0 and moves a total of 24 m by t=5, what was the object's rate of acceleration?

Feb 3, 2016

$0.72\ \mathrm{m\,s^{-2}}$

#### Explanation:

We know that

$s = ut + \frac{1}{2} a t^2$

so

$24 = 3 \cdot 5 + 0.5\, a \cdot 5^2$

$9 = \frac{25}{2} a$

$a = 0.72\ \mathrm{m\,s^{-2}}$
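A quick numerical cross-check of the rearrangement, as a minimal Python sketch:

```python
# s = u*t + 0.5*a*t^2  ->  a = (s - u*t) / (0.5 * t**2)
s, u, t = 24.0, 3.0, 5.0   # metres, m/s, seconds
a = (s - u * t) / (0.5 * t ** 2)
print(a)                   # 0.72 (m/s^2)
```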
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 5, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6303858160972595, "perplexity": 1731.095612056263}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986672431.45/warc/CC-MAIN-20191016235542-20191017023042-00284.warc.gz"}
http://math.stackexchange.com/users/30938/doraemonpaul?tab=reputation
# doraemonpaul less info reputation 11039 bio website location age member for 2 years, 2 months seen 16 hours ago profile views 1,542 # 5,790 Reputation 35 Jul 13 +25 13:56 2 events Prove that some function is the solution of some equation +10 04:11 upvote Given $f(f(x))$ can we find $f(x)$? 10 Jul 12 +10 12:54 upvote Particular solution of an ODE 10 Jul 9 +10 21:34 upvote Particular solution of an ODE 5 Jul 6 -2 Jul 5 10 Jul 4 85 Jun 25 10 Jun 23 8 Jun 22 10 Jun 13 0 Jun 12 -17 Jun 11 15 Jun 9 10 Jun 8 -2 Jun 7 30 Jun 3 50 Jun 1 10 May 27 10 May 25 45 May 20 15 May 19 -4 May 18 30 May 16 20 May 12 10 May 11 10 May 2 10 May 1 15 Apr 27 10 Apr 26 10 Apr 25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27770933508872986, "perplexity": 11870.402187145124}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997858892.28/warc/CC-MAIN-20140722025738-00036-ip-10-33-131-23.ec2.internal.warc.gz"}
http://moodle.remc10.org/moodle/course/index.php?categoryid=15
### English 10 Write a concise and interesting paragraph here that explains what this course is about
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9653929471969604, "perplexity": 3715.041705956036}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039741764.35/warc/CC-MAIN-20181114082713-20181114104713-00444.warc.gz"}
http://syymmetries.blogspot.com.au/2016/06/warsaw-workshop-on-non-standard-dark.html
## Sunday, 5 June 2016

### Warsaw Workshop on Non-Standard Dark Matter

For the last few days I've been at the Warsaw Workshop on Non-Standard Dark Matter. It's been very enjoyable! Plenty of interesting ideas, coffee, and social events.

Yesterday I gave a short talk, trying to make the case for a dark matter direct detection search for the sidereal modulation signature. The general idea is that, if dark matter has self-interactions, the dark matter wind which strikes the Earth will interact with any Earth-captured dark matter, leading to a non-trivial spatial distribution which terrestrial detectors traverse throughout the day. I share the slides below this post. If nothing else, you should click through to see some entertaining magnetohydrodynamic simulation animations!

By the way, as of this writing ATLAS+CMS have recorded about 2+2/fb of data (or 20 diphotons in alternative units):

We're quickly moving toward the position we were in by Christmas last year (about 3+3/fb including the CMS $B=0$ data). If the 750 GeV diphoton resonance prevails in the new data, we hope to know by ICHEP on August 3-10. Some authors have taken to calling the would-be particle Ϝ, which is the archaic Greek letter "digamma" -- very fitting! We will see yet whether this name becomes lore...

I also quite like the following, perhaps a future update of the PDG, from Strumia:

Slides
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8412051200866699, "perplexity": 2368.765034363934}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320368.57/warc/CC-MAIN-20170624235551-20170625015551-00606.warc.gz"}
http://mathhelpforum.com/pre-calculus/15788-two-pumps-one-tank.html
# Thread: Two Pumps, One Tank

1. ## Two Pumps, One Tank

Two pumps of different sizes working together can empty a fuel tank in 5 hours. The larger pump can empty this tank in 4 hours less than the smaller one. If the larger one is out of order, how long will it take the smaller one to do the job alone?

2. ## Re: Two Pumps, One Tank

I'll start you off... Define your variables: I'd call the bigger pump $b$ and the smaller pump $s$.

$b =$ the number of tanks per hour the bigger pump can empty
$s =$ the number of tanks per hour the smaller pump can empty

Both of those numbers will be fractions. Now, since you only wanted a hint, I'm only going to give you the first equation and you'll have to find the rest...

"Two pumps of different sizes working together can empty a fuel tank in 5 hours."

$b+s=\frac{1\text{ tank}}{5\text{ hours}}$

Do you need any more help?

3. ## Re: Two Pumps, One Tank

Yes. If you can set up the other equation, I can take it from there.

4. ## Re: Two Pumps, One Tank

The next equation is somewhat weird. If $b=\frac{\text{tanks}}{\text{hours}}$ then $\frac{1}{b}=\frac{\text{hours}}{\text{tanks}}$. So in fact $\frac{1}{b} =$ the number of hours to empty a tank. Since the bigger pump takes 4 hours fewer than the smaller one, we know that:

$\frac{1}{s}-\frac{1}{b}=4$

5. ## Re: Two Pumps, One Tank

So I am dealing with two equations in two unknowns?

6. ## Re: Two Pumps, One Tank

Yes, solve for $b$ and $s$; the required answer is $\frac{1}{s}$, the number of hours for the smaller pump alone.

RonL

7. ## Re: Two Pumps, One Tank

Hello, blueridge! Here's another approach . . .

Together, they can do the job in 5 hours, so in one hour they can do $\frac{1}{5}$ of the job. [1]

The smaller pump can do the job in $x$ hours (note that $x > 4$), so in one hour it can do $\frac{1}{x}$ of the job.

The larger pump takes 4 hours less; it takes $x - 4$ hours, so in one hour it can do $\frac{1}{x-4}$ of the job.

Together, in one hour, they can do $\frac{1}{x} + \frac{1}{x-4}$ of the job. [2]

But [1] and [2] describe the same thing: the fraction of the job done in one hour. There is our equation!

$\frac{1}{x} + \frac{1}{x-4} = \frac{1}{5}$

Multiply by the common denominator $5x(x - 4)$:

$5(x - 4) + 5x = x(x-4)$

which simplifies to the quadratic

$x^2 - 14x + 20 = 0$

The quadratic formula gives us

$x = \frac{14\pm\sqrt{116}}{2} = 7 \pm\sqrt{29} \approx \{1.6,\ 12.4\}$

Since $x > 4$, the solution is $x \approx 12.4$.

Therefore, the smaller pump will take about 12.4 hours working alone.

8. ## Re: Two Pumps, One Tank

Soroban, thank you for sharing a simpler avenue to understanding this question.
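As a cross-check of post #7, here is a minimal sketch using SymPy (assumed available), which solves the same equation and keeps the root with x > 4:

```python
from sympy import symbols, Eq, Rational, solve

x = symbols('x', positive=True)               # hours for the smaller pump alone
eq = Eq(1 / x + 1 / (x - 4), Rational(1, 5))  # combined rate = 1 tank / 5 hours
roots = solve(eq, x)                          # 7 - sqrt(29) and 7 + sqrt(29)
print([r.evalf(4) for r in roots if r > 4])   # [12.39], i.e. about 12.4 hours
```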
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 26, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8761281967163086, "perplexity": 847.01835358599}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320261.6/warc/CC-MAIN-20170624115542-20170624135542-00373.warc.gz"}
https://motls.blogspot.com/2018/10/cheeky-girl-demands-moonwalker.html
## Tuesday, October 16, 2018 ... //

### Cheeky girl demands moonwalker, geologist to shut his mouth on geology

Everyone can live his or her American dream in America; it's a country of great possibilities and amazing upward mobility. Some people say that's no longer the case, but in some negative sense I think it's still true. In particular, I am often shocked by the arrogance of some American nobodies. Many things are worse in Europe than in America, but the arrogance of deluded American Niemands surpasses that of their European counterparts.

Willie Soon sent me the partial transcript and video of a panel discussion, Apollo Plus 50. For an hour, 3+1 panelists discussed the motivation for space research and various relationships between Americans' and Unamericans' curiosity and financial and other interests in various projects in outer space: in history, now, and in the future.

My hyperlink points directly to 1:00:40 in the video, where things suddenly become tough. Nick Sinclair, a reporter for the New York Times, gives a long monologue about some old article in his daily. The only point of the monologue is to say that Harrison Schmitt's climate skepticism is the same thing as people's belief that the moon landing was staged. Schmitt was one of the 12 men who walked on the Moon; he was there with Apollo 17.

Now, Sinclair's comparison is already a bad enough insult. Imagine that you're the most recent one among the 12 moonwalkers of the Homo sapiens species (some microorganisms have walked the Moon with them). You have some good reasons to think that you're pretty important. You also have a special kind of certainty about the proposition that men have really walked on the Moon, because you were one of these 12 apostles. And now an arrogant left-wing journalist demands that you admit that if you have walked on the Moon, every piece of arbitrarily fishy left-wing pseudoscience must also be true.

I admit, I would probably be unable to calm down. But Harrison Schmitt, whose climate skepticism has been known to me for a decade, reacted totally professionally. He explained why science always needs to question things; that geologists know that our planet is not as fragile as some would like to pretend; why the Earth is a complicated system; why the existence of a complex climate model that looks good to somebody isn't genuine evidence in favor of anthropogenic climate drivers, which are the real question; why there are specific uncertainties about the carbon cycle involving the oceans; why the Sun has never been eliminated as a climate driver according to geologists; and why the direction of research is increasingly affected or corrupted by the sources of funding, especially those from the government, something that the science journalists in the hall should investigate.

At some point, a part of the audience screams "Yes" after he rhetorically asks whether humans must be blamed. Yes, we're doomed, Amen. But Harrison Schmitt remains calm. You know, he's calm, he's done real research and written books, he knows what science is, he loves this blue, not green planet, and he has walked on the Moon.

For some reason, Schmitt isn't the boss of the Earth branches at NASA; instead, a Schmidt is. The single letter has far-reaching consequences. For very similar reasons, geologist Harrison's monologues aren't being maximally spread by journalists. They prefer to pick a hysterical Harrison, e.g. the actor Harrison Ford. (Holy crap, that rant was incredible.)

At the end, a spoiled girl named Betsy Mason decided to speak to Harrison Schmitt and tell him to "consider not to speak for geologists" because "she is a geologist". You may make an easy comparison to decide whether she should be described as a researcher or a beer snob and pack leader of brain-dead leftists. Make your own verdict, but only the second answer is correct.

Nevertheless, this pathetic nobody has enough arrogance to demand that, despite a monologue that made so much sense, the famous geologist Harrison Schmitt not speak for geologists. He didn't even speak "for geologists"; he just described how things are evaluated in geology, the field in which he was trained. And she didn't even attempt to present any evidence or isolate an imperfection in Schmitt's monologue. She just unashamedly told him to shut his mouth because she clearly found his observations inconvenient; she must believe that this is how things should work in science. Some additional brain-dead members of her pack applauded her, of course, while Schmitt asked "Really?"

At some moments, moonwalkers may feel safer on the Moon than in Trump's America, which still nurtures the left-wing scum as if it were a bunch of beautiful flowers.

This exchange is another great example showing the far-reaching consequences, destructive for whole countries and our civilization, of affirmative action and the reckless reduction of the spanking of kids. Affirmative action doesn't just mean that, for ideological reasons, 50% of the people whom you hire don't do anything useful in some occupations and the efficiency gets halved. It's worse than that. Many of the "invisible" ones actually contribute with a negative sign: they actively undermine the system and everyone who has actually achieved something tangible. (That's why it's so great that Hungary has banned "gender studies" at the public universities.) They don't respect meritocracy or any authorities that emerged from the old meritocracy. They only respect themselves and the political interests of similar parasitic additions to these fields.

It's never too late, and I recommend that Betsy Mason's parents consider spanking her for several hours. If this is tolerated now, what kind of an adult can grow out of her?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24811282753944397, "perplexity": 3022.4354421249986}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875143695.67/warc/CC-MAIN-20200218120100-20200218150100-00557.warc.gz"}
http://dash.harvard.edu/handle/1/9367005
# In-situ determination of astro-comb calibrator lines to better than $$\textrm{10 cm s}^{-1}$$ Title: In-situ determination of astro-comb calibrator lines to better than $$\textrm{10 cm s}^{-1}$$ Author: Li, Chih-Hao; Glenday, Alexander G.; Benedick, Andrew J.; Chang, Guoqing; Chen, Li-Jin; Cramer, Claire; Fendel, Peter; Furesz, Gabor; Kärtner, Franz X.; Korzennik, Sylvain G.; Phillips, David M.; Sasselov, Dimitar D.; Szentgyorgyi, Andrew H.; Walsworth, Ronald L. Note: Order does not necessarily reflect citation order of authors. Citation: Li, Chih-Hao, Alexander G. Glenday, Andrew J. Benedick, Guoqing Chang, Li-Jin Chen, Claire Cramer, Peter Fendel, et al. 2010. In-situ determination of astro-comb calibrator lines to better than $$\textrm{10 cm s}^{-1}$$. Optics Express 18(12): 13239–13249. Full Text & Related Files: Li_In situDetermination.pdf (336.0Kb; PDF) Abstract: Improved wavelength calibrators for high-resolution astrophysical spectrographs will be essential for precision radial velocity (RV) detection of Earth-like exoplanets and direct observation of cosmological deceleration. The astro-comb is a combination of an octave-spanning femtosecond laser frequency comb and a Fabry-Pérot cavity used to achieve calibrator line spacings that can be resolved by an astrophysical spectrograph. Systematic spectral shifts associated with the cavity can be 0.1-1 MHz, corresponding to RV errors of 10-100 cm/s, due to the dispersive properties of the cavity mirrors over broad spectral widths. Although these systematic shifts are very stable, their correction is crucial to high accuracy astrophysical spectroscopy. Here, we demonstrate an in-situ technique to determine the systematic shifts of astro-comb lines due to finite Fabry-Pérot cavity dispersion. The technique is practical for implementation at a telescope-based spectrograph to enable wavelength calibration accuracy better than 10 cm/s. Published Version: doi:10.1364/OE.18.013239 Other Sources: http://walsworth.physics.harvard.edu/publications/2010_Li_OptExp.pdf http://arxiv.org/abs/1006.0492 Terms of Use: This article is made available under the terms and conditions applicable to Other Posted Material, as set forth at http://nrs.harvard.edu/urn-3:HUL.InstRepos:dash.current.terms-of-use#LAA Citable link to this page: http://nrs.harvard.edu/urn-3:HUL.InstRepos:9367005 ### This item appears in the following Collection(s) • FAS Scholarly Articles [7055] Peer reviewed scholarly articles from the Faculty of Arts and Sciences of Harvard University
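As a rough consistency check of the numbers quoted in the abstract, here is a minimal Python sketch (the ~500 THz optical frequency is an assumed round value for visible light, not taken from the paper): the Doppler relation Δv = c·Δf/f maps the stated 0.1-1 MHz cavity shifts to radial-velocity errors of roughly 6-60 cm/s, the same order as the 10-100 cm/s quoted above.

```python
c = 2.998e8      # speed of light, m/s
f_opt = 5.0e14   # assumed optical frequency, ~500 THz (visible light)

for df in (0.1e6, 1.0e6):   # systematic cavity shifts of 0.1 and 1 MHz
    dv = c * df / f_opt     # Doppler relation: dv / c = df / f
    print("%.1f MHz  ->  %.0f cm/s" % (df / 1e6, dv * 100))
```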
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6812177896499634, "perplexity": 13989.5472053659}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500800168.29/warc/CC-MAIN-20140820021320-00323-ip-10-180-136-8.ec2.internal.warc.gz"}
https://socialsci.libretexts.org/Bookshelves/Gender_Studies/Book%3A_Introduction_to_Women_Gender_Sexuality_Studies_(Kang_Lessard_and_Heston)/04%3A_Gender_and_Work_in_the_Global_Economy
# 4: Gender and Work in the Global Economy

Thumbnail: Baiga women and children in protest walk, India. (Public Domain; Ekta Parishad via Wikipedia).

This page titled 4: Gender and Work in the Global Economy is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Miliann Kang, Donovan Lessard, Laura Heston, and Sonny Nordmarken (UMass Amherst Libraries).
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.949198305606842, "perplexity": 10413.55041411757}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500154.33/warc/CC-MAIN-20230204205328-20230204235328-00251.warc.gz"}
https://tug.org/pipermail/pdftex/2005-November/006136.html
# [pdftex] Hover effects in pdftex.

Tigran Aivazian tigran at bibles.org.uk
Fri Nov 4 22:34:38 CET 2005

On Tue, 1 Nov 2005, John R. Culleton wrote:
> Once again I am late to the party. Is it the same mechanism as
> pdfannot? Can you suggest a document or example to read?

You can gather bits of knowledge from movie15.sty and some other bits of knowledge from attachfile.sty. You can also try the enclosed example I wrote when I was experimenting with this feature. This example is not complete, i.e. I wasn't able to get the appearances working correctly, and so an icon is displayed if you click on the active area. Also, it is very unstable, i.e. you have to position the mouse very carefully to get it right. I am not sure if this is a bug in my sample.

Ok, I know that my sample is far from perfect, but since it is the only thing that exists (to my knowledge) that even attempts to do this --- it is a good place to start. And don't forget to email me the working version when you have fixed it :)

Kind regards
Tigran

    \documentclass[10pt,a5paper,twoside]{book}
    \usepackage[bookmarks=false]{hyperref}

    \newcommand{\PushPindata}{%
      q 1 1 1\space rg 0 G 1 w
      1 6 m 11 6 l 11 13 l 12 13 l 14 11 l 21 11 l 22 12 l
      23 12 l 23 2 l 22 2 l 21 3 l 14 3 l 12 1 l 11 1 l 11 6 l B
      0.5 G 0 7 m 10 7 l 10 8 l 1 8 l S
      1 G 12 12 m 14 10 l 22 10 l 22 11 l S Q
    }
    %\DeclareRobustCommand{\PushPin}{%
    %  \raisebox{-1.25bp}{\parbox[b][14bp]{24bp}{%
    %    \rule{0pt}{0pt}\pdfliteral{\PushPindata}}%
    %  }%
    %}
    \DeclareRobustCommand{\PushPin}{%
      \null%
    }

    \newcounter{appobj}
    \newsavebox{\appbox}
    \savebox{\appbox}{\PushPin}
    \immediate\pdfxform attr { /Subtype /Form } \appbox
    \setcounter{appobj}{\pdflastxform}

    \newlength{\wordwidth}
    \newlength{\wordheight}
    \newlength{\worddepth}

    \newcommand{\annotate}[2]{%
      #1%
      \settowidth{\wordwidth}{\mbox{#1}}%
      \settoheight{\wordheight}{\mbox{#1}}%
      \settodepth{\worddepth}{\mbox{#1}}%
      \pdfannot
        width -\wordwidth
        height \wordheight
        depth \worddepth
    %   width 0pt
    %   height 0pt
    %   depth 0pt
      {
        /Subtype /Text
        /T (word: #1)
        /Contents (Explanation: #2)
        /AP << /N \theappobj\space 0 R >>
      }%
    }

    \begin{document}
    \pagestyle{empty}
    \annotate{Hello}{English greeting.}
    \annotate{world}{Whole planet Earth.}
    \annotate{and}{English conjunction.}
    \annotate{the}{English definite article.}
    \annotate{rest}{The rest, you know.}
    \annotate{of}{Short English word.}
    \annotate{the}{English definite article.}
    \annotate{Universe!}{A differentiable manifold.}
    % ... the eight \annotate lines above are repeated five more times
    % in the original sample to fill out the page ...
    \end{document}
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9313188791275024, "perplexity": 15332.201220495943}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991772.66/warc/CC-MAIN-20210517115207-20210517145207-00453.warc.gz"}
https://read.somethingorotherwhatever.com/entry/HowdoyoufixanOvalTrackPuzzle
# How do you fix an Oval Track Puzzle?

• Published in 2016
• In the collections: Easily explained, Puzzles

The oval track group, $OT_{n,k}$, is the subgroup of the symmetric group, $S_n$, generated by the basic moves available in a generalized oval track puzzle with $n$ tiles and a turntable of size $k$. In this paper we completely describe the oval track group for all possible $n$ and $k$ and use this information to answer the following question: If the tiles are removed from an oval track puzzle, how must they be returned in order to ensure that the puzzle is still solvable? As part of this discussion we introduce the parity subgroup of $S_n$ in the case when $n$ is even.

### BibTeX entry

    @article{HowdoyoufixanOvalTrackPuzzle,
      title = {How do you fix an Oval Track Puzzle?},
      abstract = {The oval track group, {\$}OT{\_}{\{}n,k{\}}{\$}, is the subgroup of the symmetric group, {\$}S{\_}n{\$}, generated by the basic moves available in a generalized oval track puzzle with {\$}n{\$} tiles and a turntable of size {\$}k{\$}. In this paper we completely describe the oval track group for all possible {\$}n{\$} and {\$}k{\$} and use this information to answer the following question: If the tiles are removed from an oval track puzzle, how must they be returned in order to ensure that the puzzle is still solvable? As part of this discussion we introduce the parity subgroup of {\$}S{\_}n{\$} in the case when {\$}n{\$} is even.},
      url = {http://arxiv.org/abs/1612.04476v3 http://arxiv.org/pdf/1612.04476v3},
      year = 2016,
      author = {David A. Nash and Sara Randall},
      comment = {},
      urldate = {2018-03-13},
      archivePrefix = {arXiv},
      eprint = {1612.04476},
      primaryClass = {math.GR},
      collections = {Easily explained,Puzzles}
    }
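To make "the basic moves" concrete, here is one common toy formalisation in Python (mine, not taken from the paper): a state is a tuple of tile labels, one generator cyclically shifts the tiles around the track, and the other reverses the $k$ tiles sitting in the turntable. A simple search over these two moves then enumerates the arrangements reachable from the solved state, i.e. the orbit size of the group generated by the moves under this model.

    def rotate(s):
        """Shift every tile one position around the oval track."""
        return s[-1:] + s[:-1]

    def flip(s, k):
        """Reverse the k tiles currently sitting in the turntable."""
        return tuple(reversed(s[:k])) + s[k:]

    def reachable_states(n, k):
        """Enumerate all tile arrangements reachable from the solved state."""
        start = tuple(range(n))
        seen, stack = {start}, [start]
        while stack:
            s = stack.pop()
            for t in (rotate(s), flip(s, k)):
                if t not in seen:
                    seen.add(t)
                    stack.append(t)
        return seen

    # Small instance: 6 tiles, turntable of size 3 (here the flip is just a
    # transposition of two tiles).
    print(len(reachable_states(6, 3)))  # 72 of the 6! = 720 arrangements

That only a fraction of the arrangements is reachable for some choices of $n$ and $k$ is exactly the kind of dependence the paper classifies.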
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5018126964569092, "perplexity": 1662.922235657759}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662546071.13/warc/CC-MAIN-20220522190453-20220522220453-00469.warc.gz"}
https://arxiv.org/abs/1912.07321
# Transverse Collective Modes in Interacting Holographic Plasmas

Abstract: We study in detail the transverse collective modes of simple holographic models in presence of electromagnetic Coulomb interactions. We render the Maxwell gauge field dynamical via mixed boundary conditions, corresponding to a double trace deformation in the boundary field theory. We consider three different situations: (i) a holographic plasma with conserved momentum, (ii) a holographic (dirty) plasma with finite momentum relaxation and (iii) a holographic viscoelastic plasma with propagating transverse phonons. We observe two interesting new features induced by the Coulomb interactions: a mode repulsion between the shear mode and the photon mode at finite momentum relaxation, and a propagation-to-diffusion crossover of the transverse collective modes induced by the finite electromagnetic interactions. Finally, at large charge density, our results are in agreement with the transverse collective mode spectrum of a charged Fermi liquid for strong interaction between quasi-particles, but with an important difference: the gapped photon mode is damped even at zero momentum. This property, usually referred to as anomalous attenuation, is produced by the interaction with a quantum critical continuum of states and might be experimentally observable in strongly correlated materials close to quantum criticality, e.g. in strange metals.

Comments: 15 pages, 7 figures
Subjects: High Energy Physics - Theory (hep-th); Strongly Correlated Electrons (cond-mat.str-el)
Report number: IFT-UAM/CSIC-19-143
Cite as: arXiv:1912.07321 [hep-th] (or arXiv:1912.07321v1 [hep-th] for this version)

## Submission history

From: Marcus Tornsö
[v1] Mon, 16 Dec 2019 12:35:54 UTC (307 KB)
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8974336981773376, "perplexity": 2382.7847894726683}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250594209.12/warc/CC-MAIN-20200119035851-20200119063851-00408.warc.gz"}
https://pineresearch.com/shop/kb/theory/eis-theory/basic-background-theory/
### EIS Basic Background Theory

Last Updated: 5/7/19 by Neil Spinner

### 1 Theory

Experimental electrochemistry can be as powerful as it is tricky.  Even simple DC methods (e.g., voltammetry, open circuit potential, chronoamperometry, chronopotentiometry) are often plagued by inaccuracies and/or poor signal-to-noise ratios resulting from seemingly insignificant or overlooked sources.  Variables that can affect electrochemical data include, but are not limited to: the state and quality of electrodes, electrolyte, experimental hardware, the physical laboratory layout, software experimental parameters, arrangement of cables, and grounding configuration.

AC techniques, like electrochemical impedance spectroscopy (EIS), can be similarly affected by these variables and sources of error.  The user must exercise particular care and caution when setting up and running EIS experiments, as the impact of small sources of error often has a larger effect on data quality than for DC methods.  Obtaining and interpreting meaningful EIS data, as with many other facets of electrochemistry, requires repeated practice and often some trial-and-error with respect to both the hardware and software.

In AC electrochemistry, a sinusoidal potential (or current) signal is applied to a system and the resulting current (or potential) signal is recorded and analyzed (see Figure 1 for a diagram and Table 1 for the associated terminology).  The frequency and amplitude of the input signal are tuned by the user, while the output signal normally has the same frequency as the input signal but its phase may be shifted by a finite amount.

Figure 1. AC Electrochemistry Sine Wave Input and Output Terminology

| Symbol | Definition |
|---|---|
| $E(t)$ | time-dependent potential |
| $E_o$ (peak) | peak potential amplitude |
| RMS | root mean square potential amplitude |
| pk-pk | peak-to-peak potential amplitude |
| $t$ | time |
| $i(t)$ | time-dependent current |
| $i_o$ (peak) | peak current amplitude |
| $\phi$ | phase angle |
| $f$ | frequency (units of Hz) |
| $\omega$ | angular frequency (units of rad/s) |

Table 1. AC Electrochemistry Input and Output Symbol Definitions

Practically, frequency ($f$) is reported in units of Hz.  However, for mathematical convenience the angular frequency ($\omega$), which has units of rad/s and is equivalent to $2\pi f$, is typically used for calculations instead (e.g., see the input and output signal equations in Figure 1).  Similarly, the phase angle ($\phi$) is typically reported in units of degrees but calculated in units of radians.

There are three conventions often used to define the input (and sometimes output) signal amplitude: peak, peak-to-peak, and RMS.  "Peak" refers to the difference between the sine wave set point (i.e., the potential or current at the beginning of the sine wave period) and its maximum or minimum point (i.e., the potential or current at one quarter of the sine wave period).  "Peak-to-peak" is simply twice the peak value (see Figure 1).

"RMS", which stands for "root mean square", is a mathematical quantity used primarily in electrical engineering to compare AC and DC voltages or currents.  Though its practical relevance and importance to EIS measurements is somewhat minimal, it is still widely used in the industry to characterize input signal amplitude.  Mathematically, it is equivalent to the peak value divided by $\sqrt{2}$, or roughly the peak value times 0.707 (see Figure 1).

During an EIS experiment, a sequence of sinusoidal potential signals with varying frequencies, but similar amplitudes, is applied to an electrochemical system.  Typically, frequencies of each input signal are equally spaced on a descending logarithmic scale from ~10 kHz - 1 MHz to a lower limit of ~10 mHz - 1 Hz.  Application of these input and output signals is usually performed automatically via a potentiostat/galvanostat.

Monitoring the progress of an EIS experiment can be done by observing the input and output signals on a single current vs. potential graph called a Lissajous plot (see Figure 2).  Depending on the system under study, as well as the applied frequency and amplitude, the shape of the resulting Lissajous plot may vary.  Throughout an EIS experiment, the user can observe the progression and pattern of Lissajous plots as a means of identifying possibly erroneous data.

Figure 2. Examples of Typical Lissajous Plots for Stable and Linear Systems

The shape of the current vs. potential Lissajous plot for a stable, linear electrochemical system typically appears as either a tilted oval or straight line that repeatedly traces over itself (see Figure 2).  The width of the oval is indicative of the magnitude of the output signal phase angle.  For example, if the Lissajous plot looks like a perfect circle, it means the output signal is completely out of phase (i.e., +90°) with respect to the input signal.  This is also the EIS response exhibited by an ideal capacitor or inductor.
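To make the input/output relationship and the Lissajous construction concrete, here is a small illustrative sketch (Python with numpy/matplotlib; not from the original article, and the amplitudes are arbitrary) that generates the two sine waves and traces current against potential. Setting the phase shift to 90° reproduces the circular shape described above, while 0° gives a straight line.

    import numpy as np
    import matplotlib.pyplot as plt

    f = 1.0                      # frequency in Hz
    omega = 2 * np.pi * f        # angular frequency in rad/s
    E0, i0 = 0.010, 0.002        # peak amplitudes (arbitrary: 10 mV, 2 mA)
    phi = np.deg2rad(90)         # output phase shift; 90 deg -> ideal capacitor

    t = np.linspace(0, 2 / f, 1000)      # two full periods
    E = E0 * np.sin(omega * t)           # input potential signal
    i = i0 * np.sin(omega * t + phi)     # output current signal

    print("RMS amplitude:", E0 / np.sqrt(2))   # peak / sqrt(2) ~ 0.707 * peak

    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))
    ax1.plot(t, E, label="E(t)")
    ax1.plot(t, i, label="i(t)")
    ax1.set_xlabel("t (s)")
    ax1.legend()
    ax2.plot(E, i)                       # Lissajous plot: current vs. potential
    ax2.set_xlabel("E (V)")
    ax2.set_ylabel("i (A)")
    ax2.set_title("Lissajous plot")
    plt.tight_layout()
    plt.show()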
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 10, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8654327392578125, "perplexity": 2141.2390731917094}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027319470.94/warc/CC-MAIN-20190824020840-20190824042840-00204.warc.gz"}
https://docs.aptech.com/gauss/between.html
# between

## Purpose

Returns a binary matrix with a 1 if the corresponding element of X is between left and right, with an option to specify whether the ends are inclusive.

## Format

mask = between(X, left, right[, inclusive])

Parameters

• x (NxK matrix or dataframe) – Data.
• left (1x1 matrix or dataframe) – Lower limit of the range.
• right (1x1 matrix or dataframe) – Upper limit of the range.
• inclusive (string) – Optional argument, specifies which limits are included in the range. Default = "both". Options are:
  • "left" – Include lower limit only.
  • "right" – Include upper limit only.
  • "neither" – Do not include either limit.
  • "both" – Include both limits.

Returns

mask (NxK matrix) – Equal to 1 if the corresponding element of X is in the specified range, otherwise 0.

## Examples

### Example 1: Select dates in a range

    // Create file name with full path and load data
    fname = getGAUSSHome("examples/beef_prices.csv");
    beef = loadd(fname);
    beef = beef[1:5,.];
    print beef;

        date    beef_price
      199201        116.64
      199202        114.49
      199203        111.11
      199204        108.17
      199205        107.76

    mask = between(beef[.,"date"], "1992-02", "1992-04");

By default, both endpoints are counted as a match.

    mask = 0
           1
           1
           1
           0

You can, however, specify if you would like the endpoints treated differently.

    // Set the final optional input, 'inclusive', to include only the right endpoint
    mask_inc_right = between(beef[.,"date"], "1992-02", "1992-04", "right");

    mask_inc_right = 0
                     0
                     1
                     1
                     0

between() can be used with selif() to filter data.

    // Select rows of the data "if" the mask value is non-zero
    beef_trim = selif(beef, mask);
    print beef_trim;

        date    beef_price
      199202        114.49
      199203        111.11
      199204        108.17

### Example 2: Multiple column use

    x = { 100 200 300,
           40  50  60,
            7   8   9 };

    left = 25;
    right = 125;

    between(x, left, right);

The above code prints the following matrix to screen:

    1.0000000 0.0000000 0.0000000
    1.0000000 1.0000000 1.0000000
    0.0000000 0.0000000 0.0000000
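For readers who want the same behavior outside GAUSS, the semantics are easy to mirror. Below is a rough numpy equivalent (illustrative only; the function name, argument names, and option strings simply copy the GAUSS signature above):

    import numpy as np

    def between(x, left, right, inclusive="both"):
        """Elementwise range test mirroring GAUSS's between()."""
        x = np.asarray(x)
        lo = x >= left if inclusive in ("both", "left") else x > left
        hi = x <= right if inclusive in ("both", "right") else x < right
        return (lo & hi).astype(int)

    x = np.array([[100, 200, 300], [40, 50, 60], [7, 8, 9]])
    print(between(x, 25, 125))
    # [[1 0 0]
    #  [1 1 1]
    #  [0 0 0]]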
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5445842742919922, "perplexity": 10637.194830446244}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764501066.53/warc/CC-MAIN-20230209014102-20230209044102-00025.warc.gz"}
https://www.impan.pl/pl/wydawnictwa/czasopisma-i-serie-wydawnicze/colloquium-mathematicum/all/122/2/86880/lower-quantization-coefficient-and-the-f-conformal-measure
Lower quantization coefficient and the $F$-conformal measure

Volume 122 / 2011, Colloquium Mathematicum 122 (2011), 255-263
MSC: 60Exx, 28A80, 94A34. DOI: 10.4064/cm122-2-11

Abstract

Let $F=\{ f^{(i)} : 1\leq i\leq N\}$ be a family of Hölder continuous functions and let $\{\varphi_i : 1 \leq i\leq N\}$ be a conformal iterated function system. Lindsay and Mauldin's paper [Nonlinearity 15 (2002)] left open the question whether the lower quantization coefficient for the $F$-conformal measure on a conformal iterated function system satisfying the open set condition is positive. This question was positively answered by Zhu. The goal of this paper is to present a different proof of this result.

Authors

• Mrinal Kanti Roychowdhury
  Department of Mathematics
  The University of Texas-Pan American
  1201 West University Drive
  Edinburg, TX 78539-2999, U.S.A.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6872437000274658, "perplexity": 6112.369371538493}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988986.98/warc/CC-MAIN-20210509122756-20210509152756-00309.warc.gz"}
https://www.physicsforums.com/threads/confusion-with-einstein-tensor-notation.694252/
# Confusion with Einstein tensor notation

1. May 28, 2013

### Loro

1. The problem statement, all variables and given/known data

I'm confused about writing down the equation $\Lambda \eta \Lambda^{-1} = \eta$ in the Einstein convention.

2. Relevant equations

The answer is: $\eta_{\mu\nu}\Lambda^{\mu}{}_{\rho}\Lambda^{\nu}{}_{\sigma} = \eta_{\rho\sigma}$

However it's strange, because written this way there seems to be no distinction between $\Lambda$ and $\Lambda^{-1}$. However we know that: $(\Lambda^{-1})^{\mu}{}_{\nu} = \Lambda_{\nu}{}^{\mu}$

3. The attempt at a solution

If the equation were instead $\Lambda B \Lambda^{-1} = B$, where $B$ is a tensor given in the form $B^{\mu}{}_{\nu}$, then it's clear to me how to write it: $\Lambda^{\rho}{}_{\mu} B^{\mu}{}_{\nu} \Lambda_{\sigma}{}^{\nu} = B^{\rho}{}_{\sigma}$

But $\eta$ is given in the form $\eta^{\mu\nu}$ and I don't understand how I can contract it with both $\Lambda^{\mu}{}_{\nu}$ and $\Lambda_{\nu}{}^{\mu}$ in order to arrive eventually at the result quoted in (2).

2. May 28, 2013

### Mandelbroth

Is there an actual question? :tongue: So, your confusion is how (2) works?

3. May 28, 2013

### Loro

Haha, sorry :tongue: I would like to know why (2) works, and possibly how I could arrive at it, starting from an expression that has both $\Lambda^{\mu}{}_{\nu}$ and $\Lambda_{\nu}{}^{\mu}$.

4. May 28, 2013

### Dick

Well, just raise the $\mu$ index and lower the $\rho$ index on the first $\Lambda$ in your form with the B tensor, using the metric tensor.

Last edited: May 28, 2013

5. May 29, 2013

### Loro

Thanks. Like that?

$\Lambda_{\rho}{}^{\mu} \eta_{\mu\nu} \Lambda_{\sigma}{}^{\nu} = \eta_{\rho\sigma}$

But then again both $\Lambda$'s are of the same form - this time they both seem to be inverses.
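For anyone stuck on the same point, here is a short worked expansion (standard textbook material, added here rather than taken from the thread) showing why the two $\Lambda$'s in (2) rightly look identical: the component equation encodes the matrix identity $\Lambda^{T}\eta\Lambda=\eta$ rather than $\Lambda\eta\Lambda^{-1}=\eta$,

$$(\Lambda^{T}\eta\Lambda)_{\rho\sigma} = (\Lambda^{T})_{\rho}{}^{\mu}\,\eta_{\mu\nu}\,\Lambda^{\nu}{}_{\sigma} = \Lambda^{\mu}{}_{\rho}\,\eta_{\mu\nu}\,\Lambda^{\nu}{}_{\sigma} = \eta_{\rho\sigma},$$

and the inverse only appears once indices are moved with the metric: multiplying $\Lambda^{T}\eta\Lambda=\eta$ on the left by $\eta^{-1}$ and on the right by $\Lambda^{-1}$ gives $\Lambda^{-1}=\eta^{-1}\Lambda^{T}\eta$, i.e. $(\Lambda^{-1})^{\mu}{}_{\nu}=\eta^{\mu\rho}\,\Lambda^{\sigma}{}_{\rho}\,\eta_{\sigma\nu}=\Lambda_{\nu}{}^{\mu}$, which is exactly the relation quoted in the first post.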
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9785434603691101, "perplexity": 688.2141242381977}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886102891.30/warc/CC-MAIN-20170817032523-20170817052523-00347.warc.gz"}
https://owenduffy.net/blog/?p=14660
# Measuring trap resonant frequency with an antenna analyser

Finding the resonant frequency of a resonant circuit such as an antenna trap is usually done by coupling a source and power sensor very loosely to the circuit. A modern solution is an antenna analyser or one port VNA; it provides both the source and the response measurement from one coax connector.

Above is a diagram from the Rigexpert AA35Zoom manual showing at the left a link (to be connected to the analyser) and the trap (here made with coaxial cable). The advantage of this method is that no wire attachments are needed on the device under test, and that coupling of the test instrument is usually easily optimised.

## Why / how does it work?

So, what is happening here? Let's create an equivalent circuit of a similar 1t coil and a solenoid with resonating capacitor. The two coupled coils can be represented by an equivalent circuit that is derived from the two inductances and their mutual inductance.

The circuit above represents a 1µH coil and a 10µH coil that are coupled such that only a few percent of the flux of one coil cuts the other (they are quite loosely coupled, as in the pic above). The resonant frequency of the 10µH coil and 100pF capacitor can be calculated to be 5.033MHz… and this is the value we want to find from our measurement.

Above is a plot of the magnitude of S11. You can see that the cursor set to the theoretical (ie known) resonant frequency coincides almost exactly with the minimum |S11|, and therefore almost exactly with the theoretical (ie known) resonant frequency.

Let's increase the coupling. Above, the equivalent circuit with the same coils but 9% flux coupling (the coils have been moved closer together). Above, we have a deeper response, but note the minimum |S11| is now further away from the cursor which is at the theoretical (ie known) resonant frequency. Too much coupling causes interaction with the test object.

### How can you determine how much is too much coupling?

One approach is to simply couple up tightly and find the response, then loosen the coupling until the frequency for minimum response stops moving.

### So where do you measure |S11|?

Your instrument may display S11 labelled as the complex reflection coefficient, or it may display the magnitude of the complex reflection coefficient, or it may display Return Loss (which is -|S11| when |S11| is expressed in dB). VSWR is related to |S11|; minimising VSWR is akin to minimising |S11| (or maximising Return Loss). Use whatever feature your analyser offers.

## Practical problems

Some analysers will not show a useful response for very loose coupling, eg they may not indicate VSWR greater than say 10. You really need to explore the instrument and manual to find if there is a way to display extreme VSWR, even if only at one frequency.

There is good reason why some analysers might not show extreme VSWR. If the inherent resolution of the instrument is poor (eg analysers with 8 bit ADCs), then it may not have sufficient accuracy to usefully display extreme VSWR. Sometimes it is just that the designer didn't really understand the instrument applications in the real world.

Of course this technique will not work on a trap that is substantially enclosed in a shield that prevents magnetic coupling.

## Example

Here is a measurement made of a parallel resonant circuit at 1.8MHz using a 60mm diameter 1t coil of 2mm copper wire connected directly to an AA-600. Above is a ReturnLoss scan. It is not possible to expand the scale any more… did I mention that designers often do not understand real world applications? Nevertheless we can see that the middle of the peak in the response is at 1.813MHz, where ReturnLoss is 0.34dB (which equates to a VSWR of about 51.6).
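As a quick numeric check of the figures quoted above, here is a small illustrative sketch (Python; mine, not from the article) using the standard formulas $f_0 = 1/(2\pi\sqrt{LC})$ for the resonant frequency and $\mathrm{VSWR} = (1+|\Gamma|)/(1-|\Gamma|)$ with $|\Gamma| = 10^{-RL/20}$ for the return loss conversion:

    import math

    def resonant_frequency(L, C):
        """Resonant frequency in Hz of an LC circuit (L in henries, C in farads)."""
        return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

    def return_loss_to_vswr(rl_db):
        """Convert a return loss in dB (a positive number) to VSWR."""
        gamma = 10.0 ** (-rl_db / 20.0)   # magnitude of the reflection coefficient
        return (1.0 + gamma) / (1.0 - gamma)

    # The 10uH coil resonated with 100pF from the equivalent circuit above
    print(resonant_frequency(10e-6, 100e-12))   # ~5.033e6, ie 5.033MHz

    # The ReturnLoss of 0.34dB measured at the response peak
    print(return_loss_to_vswr(0.34))            # ~51, in line with the quoted 51.6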
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8136494755744934, "perplexity": 1519.0298353846129}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578530100.28/warc/CC-MAIN-20190421000555-20190421021543-00005.warc.gz"}
http://math.stackexchange.com/questions/189287/proof-for-heine-borel-theorem?answertab=oldest
Proof for Heine Borel theorem

I am trying to prove the Heine-Borel theorem for compactness of the closed interval $[0,1]$ using König's lemma. This is what I have so far:

1. I assume $[0,1]$ can be covered by $\{(a_i,b_i):i=0,1,2,\ldots\}$.
2. I construct a graph $G$ as follows: First let a vertex be labelled $[0,1]$ (the root). Then consider $[0,1]\setminus((a_0,b_0)\cup(a_1,b_1))$. This consists of $n_1$ closed intervals, where $n_1$ is finite. Join the $[0,1]$ vertex to $n_1$ vertices labelled by these closed intervals (these vertices will be at level 1). Next consider $[0,1]\setminus((a_0,b_0)\cup(a_1,b_1)\cup(a_2,b_2))$. This consists of $n_2$ closed intervals. Each of these closed intervals is a subset of exactly one of the closed intervals considered in the previous step. Make $n_2$ vertices labelled by these closed intervals and join each to the vertex from the previous level whose interval contains it (these vertices will be at level 2). Continue in this way for higher levels, each time obtaining the labels from the closed intervals of $[0,1]\setminus((a_0,b_0)\cup(a_1,b_1)\cup\cdots\cup(a_i,b_i))$.
3. This yields a rooted tree $G$ where each level is finite.
4. Suppose the tree contained an infinite path: $[0,1]\supset[\alpha_1,\beta_1]\supset[\alpha_2,\beta_2]\supset\cdots$.
5. Since a nested sequence of closed intervals has nonempty intersection, there is an element $x$ in it. As $x\in [0,1]$, we have $x\in(a_i,b_i)$ for some $i$. But then $x$ cannot lie in any interval at a level beyond $i$, contradicting 4.
6. So by the contrapositive form of König's lemma, $G$ cannot be infinite. It follows that for some $i$, $[0,1]\setminus((a_0,b_0)\cup(a_1,b_1)\cup\cdots\cup(a_i,b_i))$ is empty. Hence $[0,1]$ is covered by $(a_0,b_0)\cup(a_1,b_1)\cup\cdots\cup(a_i,b_i)$.

My doubts are about the arguments presented in 2. and 6. Are they correct? In particular, is this statement correct: "Each of these closed intervals is a subset of exactly one of the closed intervals considered in the previous step."? What is an upper bound for $n_k$? Thanks

---

The argument is correct, but it can be cleaned up a bit. Here's one possible way.

Without loss of generality assume that $0\in(a_0,b_0)$, $1\in(a_1,b_1)$, and $b_0\le a_1$. Construct a sequence of closed subsets of $[0,1]$ as follows: $C_0=[0,1]$, and $C_{n+1}=C_n\setminus(a_n,b_n)$ for $n\in\Bbb N$.

Claim: Each $C_n$ is the union of a finite family of pairwise disjoint closed intervals (which may be degenerate).

Proof: This is clearly true for $C_0$, $C_1=[b_0,1]$, and $C_2=[b_0,a_1]$. Suppose that it holds for $C_n$, where $n\ge 2$, and write $C_n=\bigcup_{k=1}^m[c_k,d_k]$, where $c_1\le d_1<c_2\le d_2<\dots<c_m\le d_m$. That is, $c_k\le d_k$ for $k=1,\dots,m$, and $d_k<c_{k+1}$ for $k=1,\dots,m-1$. Then $C_{n+1}$ is the disjoint union of the following closed intervals: the intervals $[c_k,d_k]$ such that $d_k\le a_n$ or $c_k>b_n$; the interval $[c_k,a_n]$ if $c_k\le a_n<d_k$; and the interval $[b_n,d_k]$ if $c_k<b_n\le d_k$. The result now follows by induction. $\dashv$

For $n\in\Bbb N$ let $\mathscr{C}_n$ be the set of pairwise disjoint closed intervals that are the connected components of $C_n$, and let $\mathscr{C}=\bigcup_{n\in\Bbb N}\mathscr{C}_n$. It's clear that if $m\le n$ and $I\in\mathscr{C}_n$, there is a unique $J\in\mathscr{C}_m$ such that $I\subseteq J$. If some $C_n$ were empty, then $\{(a_k,b_k):k<n\}$ would already be a finite subcover of $[0,1]$ and we would be done, so assume that every $C_n$ is nonempty. Thus, $\langle\mathscr{C},\supseteq\rangle$ is an infinite tree of height $\omega$, and $\mathscr{C}_n=\operatorname{Lev}_n\mathscr{C}$ for each $n\in\Bbb N$, so $\mathscr{C}$ has finite levels.
It follows from König's lemma (an infinite tree with finite levels has an infinite branch) that there is a branch $\beta=\langle I_n:n\in\Bbb N\rangle$ through $\mathscr{C}$. Then $\beta$ is a nested sequence of closed intervals, so $\bigcap_{n\in\Bbb N}I_n\ne\varnothing$. Fix $x\in\bigcap_{n\in\Bbb N}I_n$. Then $x\in[0,1]$, but for each $n\in\Bbb N$ we have $x\in I_{n+1}\subseteq[0,1]\setminus(a_n,b_n)$, so $x\in[0,1]\setminus\bigcup_{n\in\Bbb N}(a_n,b_n)$, contradicting the assumption that $\{(a_n,b_n):n\in\Bbb N\}$ is a cover of $[0,1]$. Hence some $C_n$ is empty after all, and $[0,1]$ has a finite subcover.

For completeness there are a couple of things that you ought to say first: you really should start with an arbitrary open cover $\mathscr{U}$ of $[0,1]$ and then justify replacing it by a countable cover by open intervals. How you do this depends on what tools you consider to be available.

To answer the final question, note that each step can increase the number of closed intervals by at most one: in my construction this happens exactly when there is a $k$ such that $c_k\le a_n<b_n\le d_k$. Since $|\mathscr{C}_n|=1$ for $n=0,1,2$, this means that $|\mathscr{C}_n|\le n-1$ for $n\ge 2$.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9857609272003174, "perplexity": 58.29648668030843}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375098196.31/warc/CC-MAIN-20150627031818-00060-ip-10-179-60-89.ec2.internal.warc.gz"}
http://www.r-bloggers.com/simple-and-advanced-time-series-with-oracle-r-enterprise/
# Simple and Advanced Time series with Oracle R Enterprise

July 18, 2013. (This article was first published on Oracle R Enterprise, and kindly contributed to R-bloggers.)

This guest post from Marcos Arancibia describes how to use Oracle R Enterprise to analyze Time Series data. In this article, we give an overview of how to use Time Series Analysis against data stored in Oracle Database, using the Embedded R Execution capability to send time series computations to the Oracle Database server instead of processing at the client. We will also learn how to retrieve the final series or forecasts to the client for plotting, forecasting, and diagnosing.

One key thing to keep in mind when using Time Series techniques with data that is stored in Oracle Database is the order of the rows, or records. Because of the parallel capabilities of Oracle Database, when queried for records, one might end up receiving records out of order if an option for order is not specified.

Simple Example using Stock Data

Let's start with a simple Time Series example. First we will need to connect to our Oracle Database using ORE. Then, using the package TTR, we will access Oracle Stock data from the Yahoo Data service, from January 1, 2008 to January 1, 2013, and push it to the database.

    # Load the ORE library and connect to Oracle Database
    library(ORE)
    ore.connect("myuser","mysid","myserver","mypass",port=1521,all=TRUE)

    library(TTR)

    # Get data in XTS format
    xts.orcl <- getYahooData("ORCL", 20080101, 20130101)

    # Convert it to a data frame and get the date
    # Make the date the index
    df.orcl <- data.frame(xts.orcl)
    df.orcl$date <- (data.frame(date=index(xts.orcl))$date)

    # Create/overwrite data in Oracle Database
    # in a table called ORCLSTOCK
    ore.drop(table="ORCLSTOCK")
    ore.create(df.orcl,table="ORCLSTOCK")

    # IMPORTANT STEP!!!
    # Ensure indexing is kept by date
    rownames(ORCLSTOCK) <- ORCLSTOCK$date

    # Ensure the data is in the DB
    ore.ls()

    # Review column names, data statistics and
    # print a sample of the data
    names(ORCLSTOCK)
    [1] "Open"        "High"        "Low"         "Close"       "Volume"
    [6] "Unadj.Close" "Div"         "Split"       "Adj.Div"     "date"

    summary(ORCLSTOCK$Close)
       Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
      13.36   20.53   24.22   24.79   29.70   35.73

    head(ORCLSTOCK)
                            Open     High      Low    Close   Volume
    2008-01-02 01:00:00 21.74414 22.00449 21.58022 21.68629 44360179
    2008-01-03 01:00:00 21.62843 22.28413 21.62843 22.28413 43600532
    2008-01-04 01:00:00 21.95628 22.06235 21.01130 21.24272 46391263
    2008-01-07 01:00:00 21.17523 21.67664 21.01130 21.45486 41527032
    2008-01-08 01:00:00 21.44522 21.52236 20.38453 20.39417 45155398
    2008-01-09 01:00:00 20.57738 20.91487 20.39417 20.83773 49750304
                        Unadj.Close Div Split Adj.Div       date
    2008-01-02 01:00:00       22.49  NA    NA      NA 2008-01-02
    2008-01-03 01:00:00       23.11  NA    NA      NA 2008-01-03
    2008-01-04 01:00:00       22.03  NA    NA      NA 2008-01-04
    2008-01-07 01:00:00       22.25  NA    NA      NA 2008-01-07
    2008-01-08 01:00:00       21.15  NA    NA      NA 2008-01-08
    2008-01-09 01:00:00       21.61  NA    NA      NA 2008-01-09

Pull data from the database for a simple plot:

    # Pull data from Oracle Database (only the necessary columns)
    orcl <- ore.pull(ORCLSTOCK[,c("date","Close","Open","Low","High")])

    # Simple plot with base libraries - Closing
    plot(orcl$date,orcl$Close,type="l",col="red",xlab="Date",ylab="US$",
         main="Base plot:Daily ORACLE Stock Closing points")

    # Simple plot with base libraries - Other Series
    plot(orcl$date,orcl$Open,type="l",col="blue",xlab="Date",ylab="US$",
         main="Base plot:Daily ORACLE Stock: Open/High/Low points")
    lines(orcl$date,orcl$High,col="green")
    lines(orcl$date,orcl$Low,col="orange")
    legend("topleft", c("Opening","High","Low"),
           col=c("blue","green","orange"),lwd=2,title="Series",bty="n")

A different plot option, using the package xts:

    library(xts)

    # Pull data from Oracle Database (only the necessary columns)
    orcl <- ore.pull(ORCLSTOCK[,c("date","Close","Open","Low","High")])

    # Convert data to Time Series format
    orcl.xts <- as.xts(orcl,order.by=orcl$date,dateFormat="POSIXct")

    # Plot original series
    plot(orcl.xts$Close,major.ticks='months',minor.ticks=FALSE,
         main="Time Series plot:Daily ORACLE Stock Closing points",col="red")

Simple Time Series: Moving Average Smoothing

We might be tempted to call functions like the Smoothing Moving Average from open-source CRAN packages against Oracle Database tables, but those packages do not know what to do with an "ore.frame". For that process to work correctly, we can either load the data locally or send the process for remote execution on the Database Server by using Embedded R Execution. We will also explore the built-in Moving Average process from ore.rollmean() as a third alternative.

ALTERNATIVE 1 - The first example pulls the data from Oracle Database into a ts (time series) object first, for a client-side smoothing process.

    library(TTR)

    # Pull part of the database table into a local data.frame
    sm.orcl <- ore.pull(ORCLSTOCK[,c("date","Close")])

    # Convert "Close" attribute into a Time Series (ts)
    ts.orcl <- ts(sm.orcl$Close)

    # Use SMA - Smoothing Moving Average algorithm from package TTR
    ts.sm.orcl <- ts(SMA(ts.orcl,n=30),frequency=365, start=c(2008,1))

    # Plot both Series together
    plot(sm.orcl$date,sm.orcl$Close,type="l",col="red",xlab="Date",ylab="US$",
         main="ORCL Stock Close CLIENT-side Smoothed Series n=30 days")
    lines(sm.orcl$date,ts.sm.orcl,col="blue")
    legend("topleft", c("Closing","MA(30) of Closing"),
           col=c("red","blue"),lwd=2,title="Series",bty="n")

ALTERNATIVE 2 - In this alternative, we will use a server-side example for running the Smoothing via Moving Average, without bringing all data to the client. Only the result is brought locally for plotting. Remember that the TTR package has to be installed on the server in order to be called.
    # Server execution call using ore.tableApply
    # Result is an ore.list that remains in the database until needed
    sv.orcl.ma30 <- ore.tableApply(ORCLSTOCK[,c("date","Close")],ore.connect = TRUE,
      function(dat) {
        library(TTR)
        ordered <- dat[order(as.Date(dat$date, format="%Y-%m-%d")),]
        list(res1 <- ts(ordered$Close,frequency=365, start=c(2008,1)),
             res2 <- ts(SMA(res1,n=30),frequency=365, start=c(2008,1)),
             res3 <- ordered$date)
      }
    );

    # Bring the results locally for plotting
    local.orcl.ma30 <- ore.pull(sv.orcl.ma30)

    # Plot the two series side by side
    # (the third element of the list is the date)
    plot(local.orcl.ma30[[3]],local.orcl.ma30[[1]],type="l",
         col="red",xlab="Date",ylab="US$",
         main="ORCL Stock Close SERVER-side Smoothed Series n=30 days")
    lines(local.orcl.ma30[[3]], local.orcl.ma30[[2]],col="blue",type="l")
    legend("topleft", c("Closing","Server MA(30) of Closing"),
           col=c("red","blue"), lwd=2,title="Series", bty="n")

ALTERNATIVE 3 - In this alternative we will use a server-side example with the computation of Moving Averages using the native ORE in-Database functions, without bringing data to the client. Only the result is brought locally for plotting. Just one line of code is needed to generate an in-Database computation of moving averages and the creation of a new VIRTUAL column in the Oracle Database. We will call this new column rollmean30, and use the function ore.rollmean(). The option align="right" makes the MA look only at the past k days (30 in this case), or fewer, depending on the point in time. This creates a small difference between this method and the previous methods at the beginning of the series, since ore.rollmean() can calculate the first sets of days using the smaller sets of data available, while the other methods discard this data.

    # Moving Average done directly in Oracle Database
    ORCLSTOCK$rollmean30 <- ore.rollmean(ORCLSTOCK$Close, k = 30, align="right")

    # Check that the new variable is in the database
    head(ORCLSTOCK)
                            Open     High      Low    Close   Volume
    2008-01-02 01:00:00 21.74414 22.00449 21.58022 21.68629 44360179
    2008-01-03 01:00:00 21.62843 22.28413 21.62843 22.28413 43600532
    2008-01-04 01:00:00 21.95628 22.06235 21.01130 21.24272 46391263
    2008-01-07 01:00:00 21.17523 21.67664 21.01130 21.45486 41527032
    2008-01-08 01:00:00 21.44522 21.52236 20.38453 20.39417 45155398
    2008-01-09 01:00:00 20.57738 20.91487 20.39417 20.83773 49750304
                        Unadj.Close Div Split Adj.Div       date rollmean30
    2008-01-02 01:00:00       22.49  NA    NA      NA 2008-01-02   21.68629
    2008-01-03 01:00:00       23.11  NA    NA      NA 2008-01-03   21.98521
    2008-01-04 01:00:00       22.03  NA    NA      NA 2008-01-04   21.73771
    2008-01-07 01:00:00       22.25  NA    NA      NA 2008-01-07   21.66700
    2008-01-08 01:00:00       21.15  NA    NA      NA 2008-01-08   21.41243
    2008-01-09 01:00:00       21.61  NA    NA      NA 2008-01-09   21.31665

    # Get results locally for plotting
    local.orcl <- ore.pull(ORCLSTOCK[,c("date","Close","rollmean30")])
    sub.orcl <- subset(local.orcl,local.orcl$date > as.Date("2011-12-16"))

    # Plot the two series side by side
    # First plot original series
    plot(local.orcl$date, local.orcl$Close,type="l",
         col="red",xlab="Date",ylab="US$",
         main="ORCL Stock Close ORE Computation of Smoothed Series n=30 days")
    lines(local.orcl$date,local.orcl$rollmean30,col="blue",type="l")
    legend("topleft", c("Closing","ORE MA(30) of Closing"),
           col=c("red","blue"),lwd=2,title="Series",bty="n")

Seasonal Decomposition for Time Series Diagnostics

Now that we have learned how to execute these processes using Embedded R, we can start using other methodologies required for Time Series using the same server-side computation and local plotting.
It is typical for an analyst to try to understand a Time Series better by looking at some of the basic diagnostics, like the Seasonal Decomposition of Time Series by Loess. These can be achieved by using the stl() command in the following process:

    # Server execution
    sv.orcl.dcom <- ore.tableApply(ORCLSTOCK[,c("date","Close")],ore.connect = TRUE,
      function(dat) {
        ordered <- dat[order(as.Date(dat$date, format="%Y-%m-%d")),]
        ts.orcl <- ts(ordered$Close,frequency=365, start=c(2008,1))
        res <- stl(ts.orcl,s.window="periodic")
      }
    );

    # Get result for plotting
    local.orcl.dcom <- ore.pull(sv.orcl.dcom)
    plot(local.orcl.dcom, main="Server-side Decomposition of ORCL Time-Series",col="blue")

Another typical set of diagnostic charts includes Autocorrelation and Partial Autocorrelation function plots. These can be achieved by using the acf() command with the proper options in Embedded R Execution, so computations happen at the Oracle Database server:

    # Server-side ACF and PACF computation
    # Use function acf() and save result as a list
    sv.orcl.acf <- ore.tableApply(ORCLSTOCK[,c("date","Close")],ore.connect=TRUE,
      function(dat){
        ts.orcl <- ts(dat$Close,frequency=365, start=c(2008,1))
        list(res1 <- acf(ts.orcl,lag.max=120,type="correlation"),
             res2 <- acf(ts.orcl,lag.max=30, type="partial"))
      }
    );

    # Get results for plotting
    # ACF and PACF as members of the list pulled
    local.orcl.acf <- ore.pull(sv.orcl.acf)
    plot(local.orcl.acf[[1]],main="Server-side ACF Analysis for Series ORCL",col="blue",lwd=2)
    plot(local.orcl.acf[[2]],main="Server-side PACF Analysis for Series ORCL",col="blue",lwd=5)

Simple Exponential Smoothing

Using the popular package "forecast", we will use the ses() function to calculate a 90-day horizon (h=90) into the future, using the option criterion=MSE for the model. The package forecast needs to be installed on the Oracle Database server R engine. Then, we will bring the resulting model locally for plotting. Remember to load the library "forecast" locally as well, to be able to interpret the meaning of the ses() output when it's brought locally.

    # Execute ses() call in the server
    sv.orcl.ses <- ore.tableApply(ORCLSTOCK[,c("date","Close")], ore.connect=TRUE,
      function(dat) {
        library(forecast)
        ordered <- dat[order(as.Date(dat$date, format="%Y-%m-%d")),]
        ts.orcl <- ts(ordered$Close,frequency=365, start=c(2008,1))
        res <- ses(ts.orcl, h=90, alpha=0.1, initial="simple")
      }
    );

    # Get SES result locally for plotting
    # Since remote object contains a SES model from package forecast,
    # load package locally as well
    library(forecast)
    plot.orcl.ses <- ore.pull(sv.orcl.ses)
    plot(plot.orcl.ses,col="blue",fcol="red",
         main="ORCL with Server-side SES - Simple Exponential Smoothing Forecast")

Holt Exponential Smoothing

Using the popular package "forecast", we will use the holt() function to calculate a 90-day horizon (h=90) into the future, requesting intervals of confidence of 80 and 95%. Again, the package "forecast" needs to be installed on the Oracle Database server R engine. Then, we will bring the resulting model locally for plotting. Remember to load the library forecast locally as well, to be able to interpret the meaning of the holt() output when it's brought locally.
    # Execute holt() call in the server
    sv.orcl.ets <- ore.tableApply(ORCLSTOCK[,c("date","Close")], ore.connect=TRUE,
      function(dat) {
        library(forecast)
        ordered <- dat[order(as.Date(dat$date, format="%Y-%m-%d")),]
        ts.orcl <- ts(ordered$Close,frequency=365, start=c(2008,1))
        res <- holt(ts.orcl, h=90, level=c(80,95), initial="optimal")
      }
    );

    # Get resulting model from the server
    # Since remote object contains a Holt Exponential Smoothing
    # model from package forecast, load package locally as well
    library(forecast)
    local.orcl.ets <- ore.pull(sv.orcl.ets)
    plot(local.orcl.ets,col="blue",fcol="red",
         main="ORCL Original Series Stock Close with Server-side Holt Forecast")

ARIMA - Auto-Regressive Integrated Moving Average

There are at least two options for fitting an ARIMA model to a Time Series. One option is to use the package "forecast", which allows for an automatic ARIMA fit (auto.arima) to find the best parameters possible based on the series. For more advanced users, the arima() function in the "stats" package itself allows for choosing the model parameters.

    # ARIMA models on the server using auto.arima() from package forecast
    arimaModel <- ore.tableApply(ORCLSTOCK[,c("date","Close")], ore.connect=TRUE,
      FUN = function(dat){
        # load forecast library to use auto.arima
        library(forecast)
        # sort the table into a temp file by date
        ordered <- dat[order(as.Date(dat$date, format="%Y-%m-%d")),]
        # convert column into a Time Series format ts(...) and
        # request creation of an automatic ARIMA model auto.arima(...)
        res <- auto.arima(ts(ordered$Close,frequency=365, start=c(2008,1)),
                          stepwise=TRUE, seasonal=TRUE)
      })

    # Alternative using arima() from package "stats"
    arimaModel <- ore.tableApply(ORCLSTOCK[,c("date","Close")],ore.connect=TRUE,
      FUN = function(dat){
        # sort table into a temp file by date
        ordered <- dat[order(as.Date(dat$date, format="%Y-%m-%d")),]
        # convert column into a Time Series format ts(...) and
        # request creation of a specific ARIMA model using arima(),
        # for example an ARIMA(2,1,2)
        res <- arima(ts(ordered$Close,frequency=365, start=c(2008,1)),
                     order = c(2,1,2))
      })

    # Load forecast package locally to use the model
    # for plotting and producing forecasts
    library(forecast)

    # Show remote resulting Time Series model
    arimaModel
    Series: ts(ordered$Close, frequency = 365, start = c(2008, 1))
    ARIMA(2,1,0)
    Coefficients:
              ar1      ar2
          -0.0935  -0.0192
    s.e.   0.0282   0.0282

    sigma^2 estimated as 0.2323: log likelihood=-866.77
    AIC=1739.55  AICc=1739.57  BIC=1754.96

    # Get remote model using ore.pull for local prediction and plotting
    local.arimaModel <- ore.pull(arimaModel)

    # Generate forecasts for the next 15 days
    fore.arimaModel <- forecast(local.arimaModel, h=15)

    # Use the following option if you need to remove scientific notation of
    # numbers that are too large in charts
    options(scipen=10)

    # Generate the plot of forecasts, including interval of confidence
    # Main title is generated automatically indicating the type of model
    # chosen by the Auto ARIMA process
    plot(fore.arimaModel,type="l", col="blue",
         xlab="Date", ylab="Closing value (US$)",
         cex.axis=0.75, font.lab="serif EUC",
         sub="Auto-generated ARIMA for ORCL Stock Closing")

    # Generate and print forecasted data points plus standard errors
    # of the next 15 days
    forecasts <- predict(local.arimaModel, n.ahead = 15)

    forecasts$pred
    Time Series:
    Start = c(2011, 165)
    End = c(2011, 179)
    Frequency = 365
     [1] 33.29677 33.29317 33.29395 33.29395 33.29393 33.29393 33.29393 33.29393 33.29393 33.29393 33.29393
    [12] 33.29393 33.29393 33.29393 33.29393

    forecasts$se
    Time Series:
    Start = c(2011, 165)
    End = c(2011, 179)
    Frequency = 365
     [1] 0.4819417 0.6504925 0.7807798 0.8928901 0.9924032 1.0827998 1.1662115 1.2440430 1.3172839 1.3866617
    [11] 1.4527300 1.5159216 1.5765824 1.6349941 1.6913898
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23623564839363098, "perplexity": 6331.234984148539}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997893859.88/warc/CC-MAIN-20140722025813-00019-ip-10-33-131-23.ec2.internal.warc.gz"}
https://www.aimsciences.org/journal/1531-3492/2009/12/1
# American Institute of Mathematical Sciences

ISSN: 1531-3492, eISSN: 1553-524X

## Discrete & Continuous Dynamical Systems - B

July 2009, Volume 12, Issue 1

2009, 12(1): 1-22. doi: 10.3934/dcdsb.2009.12.1

Abstract: Two coupled partial differential equations which describe the motion of a viscoelastic (Kelvin-Voigt type) Timoshenko beam are formulated with the complementarity conditions. This dynamic impact problem is considered a boundary thin obstacle problem. The existence of solutions is proved. A major concern is to pursue an investigation into conservation of energy (or energy balance), which is performed both theoretically and numerically.

2009, 12(1): 23-38. doi: 10.3934/dcdsb.2009.12.23

Abstract: This work extends the model developed by Gao (1996) for the vibrations of a nonlinear beam to the case when one of its ends is constrained to move between two reactive or rigid stops. Contact is modeled with the normal compliance condition for the deformable stops, and with the Signorini condition for the rigid stops. The existence of weak solutions to the problem with reactive stops is shown by using truncation and an abstract existence theorem involving pseudomonotone operators. The solution of the Signorini-type problem with rigid stops is obtained by passing to the limit when the normal compliance coefficient approaches infinity. This requires a continuity property for the beam operator similar to a continuity property for the wave operator that is a consequence of the so-called div-curl lemma of compensated compactness.

2009, 12(1): 39-76. doi: 10.3934/dcdsb.2009.12.39

Abstract: We consider a general model of chemotaxis with finite speed of propagation in one space dimension. For this model we establish a general result of stability of some constant states both for the Cauchy problem on the whole real line and for the Neumann problem on a bounded interval. These results are obtained using the linearized operators and the accurate analysis of their nonlinear perturbations. Numerical schemes are proposed to approximate these equations, and the expected qualitative behavior for large times is compared to several numerical tests.

2009, 12(1): 77-108. doi: 10.3934/dcdsb.2009.12.77

Abstract: In this work we derive a hierarchy of new mathematical models for describing the motion of phototactic bacteria, i.e., bacteria that move towards light. These models are based on recent experiments suggesting that the motion of such bacteria depends on the individual bacteria, on group dynamics, and on the interaction between bacteria and their environment. Our first model is a collisionless interacting particle system in which we follow the location of the bacteria, their velocity, and their internal excitation (a parameter whose role is assumed to be related to communication between bacteria). In this model, the light source acts as an external force. The resulting particle system is an extension of the Cucker-Smale flocking model. We prove that when all particles are fully excited, their asymptotic velocity tends to an identical (pre-determined) terminal velocity. Our second model is a kinetic model for the one-particle distribution function that includes an internal variable representing the excitation level.
The kinetic model is a Vlasov-type equation that is derived from the particle system using the BBGKY hierarchy and molecular chaos assumption. Since bacteria tend to move in areas that were previously traveled by other bacteria, a surface memory effect is added to the kinetic model as a turning operator that accounts for the collisions between bacteria and the environment. The third and final model is derived as a formal macroscopic limit of the kinetic model. It is shown to be the Vlasov-McKean equation coupled with a reaction-diffusion equation. 2009, 12(1): 109-131 doi: 10.3934/dcdsb.2009.12.109 +[Abstract](2084) +[PDF](602.8KB) Abstract: We introduce a characterization of exponential dichotomies for linear difference equations that can be tested numerically and enables the approximation of dichotomy rates and projectors with high accuracy. The test is based on computing the bounded solutions of a specific inhomogeneous difference equation. For this task a boundary value and a least squares approach is applied. The results are illustrated using Hénon's map. We compute approximations of dichotomy rates and projectors of the variational equation, along a homoclinic orbit and an orbit on the attractor as well as for an almost periodic example. For the boundary value and the least squares approach, we analyze in detail errors that occur, when restricting the infinite dimensional problem to a finite interval. 2009, 12(1): 133-149 doi: 10.3934/dcdsb.2009.12.133 +[Abstract](2913) +[PDF](799.7KB) Abstract: A global bifurcation result is obtained for families of competitive systems of difference equations $x_{n+1} = f_\alpha(x_n,y_n)$ $y_{n+1} = g_\alpha(x_n,y_n)$ where $\alpha$ is a parameter, $f_\alpha$ and $g_\alpha$ are continuous real valued functions on a rectangular domain $\mathcal{R}_\alpha \subset \mathbb{R}^2$ such that $f_\alpha(x,y)$ is non-decreasing in $x$ and non-increasing in $y$, and $g_\alpha(x, y)$ is non-increasing in $x$ and non-decreasing in $y$. A unique interior fixed point is assumed for all values of the parameter $\alpha$. As an application of the main result for competitive systems a global period-doubling bifurcation result is obtained for families of second order difference equations of the type $x_{n+1} = F_\alpha(x_n, x_{n-1}), \quad n=0,1, \ldots$ where $\alpha$ is a parameter, $F_\alpha:\mathcal{I_\alpha}\times \mathcal{I_\alpha} \rightarrow \mathcal{I_\alpha}$ is a decreasing function in the first variable and increasing in the second variable, and $\mathcal{I_\alpha}$ is a interval in $\mathbb{R}$, and there is a unique interior equilibrium point. Examples of application of the main results are also given. 2009, 12(1): 151-168 doi: 10.3934/dcdsb.2009.12.151 +[Abstract](2348) +[PDF](2061.4KB) Abstract: The purpose of this paper is to present qualitative and bifurcation analysis near the degenerate equilibrium in models of interactions between lymphocyte cells and solid tumor and to understand the development of tumor growth. Theoretical analysis shows that these cancer models can exhibit Bogdanov-Takens bifurcation under sufficiently small perturbation of the system parameters whether it is vascularized or not. Periodic oscillation behavior and coexistence of the immune system and the tumor in the host are found to be influenced significantly by the choice of bifurcation parameters. It is also confirmed that bifurcations of codimension higher than 2 cannot occur at this equilibrium in both cases. 
The analytic bifurcation diagrams and numerical simulations are given. Some anomalous properties are discovered from comparing the vascularized case with the avascular case. 2009, 12(1): 169-186 doi: 10.3934/dcdsb.2009.12.169 +[Abstract](3563) +[PDF](681.8KB) Abstract: The global dynamics of a periodic SIS epidemic model with maturation delay is investigated. We first obtain sufficient conditions for the single population growth equation to admit a globally attractive positive periodic solution. Then we introduce the basic reproduction ratio $\mathcal{R}_0$ for the epidemic model, and show that the disease dies out when $\mathcal{R}_0<1$, and the disease remains endemic when $\mathcal{R}_0>1$. Numerical simulations are also provided to confirm our analytic results. 2009, 12(1): 187-203 doi: 10.3934/dcdsb.2009.12.187 +[Abstract](2533) +[PDF](270.7KB) Abstract: The theory of Lyapunov exponents and methods from ergodic theory have been employed by several authors in order to study persistence properties of dynamical systems generated by ODEs or by maps. Here we derive sufficient conditions for uniform persistence, formulated in the language of Lyapunov exponents, for a large class of dissipative discrete-time dynamical systems on the positive orthant of $\mathbb{R}^m$, having the property that a nontrivial compact invariant set exists on a bounding hyperplane. We require that all so-called normal Lyapunov exponents be positive on such invariant sets. We apply the results to a plant-herbivore model, showing that both plant and herbivore persist, and to a model of a fungal disease in a stage-structured host, showing that the host persists and the disease is endemic. 2009, 12(1): 205-218 doi: 10.3934/dcdsb.2009.12.205 +[Abstract](1984) +[PDF](156.6KB) Abstract: This note is concerned with the identification of the absorption coefficient in a parabolic system. It introduces an algorithm that can be used to recover the unknown function. The algorithm is iterative in nature. It assumes an initial value for the unknown function and updates it at each iteration. Using the assumed value, the algorithm obtains a background field and computes the equation for the error at each iteration. The error equation includes the correction to the assumed value of the unknown function. Using the measurements obtained at the boundaries, the algorithm introduces two formulations for the error dynamics. By equating the responses of these two formulations it is then possible to obtain an equation for the unknown correction term. A number of numerical examples are also used to study the performance of the algorithm. 2009, 12(1): 219-225 doi: 10.3934/dcdsb.2009.12.219 +[Abstract](2110) +[PDF](127.9KB) Abstract: In this paper, we consider the initial-boundary value problem of Burgers equation with a time delay. Using a fixed point theorem and a comparison principle, we show that the time-delayed Burgers equation is exponentially stable under small delays. The result is more explicit, but also complements, the result given by Weijiu Liu [Discrete and Continuous Dynamical Systems-Series B, 2:1(2002),47-56], which was based on the Liapunov function approach. 
2009, 12(1): 227-250 doi: 10.3934/dcdsb.2009.12.227 +[Abstract](1717) +[PDF](470.1KB) Abstract: To mimic the striking capability of microbial culture for growth adaptation after the onset of the novel environmental conditions, a modified heterogeneous microbial population model in the chemostat with essential resources is proposed which considers adaptation by spontaneously phenotype-switching between normally growing cells and persister cells having reduced growth rate. A basic reproductive number $R_0$ is introduced so that the population dies out when $R_0<1$, and when $R_0>1$ the population will be asymptotic to a steady state of persister cells, or a steady state of only normal cells, or a steady state corresponding to a heterogeneous population of both normal and persister cells. Our analysis confirms that inherent heterogeneity of bacterial populations is important in adaption to fluctuating environments and in the persistence of bacterial infections. 2009, 12(1): 251-260 doi: 10.3934/dcdsb.2009.12.251 +[Abstract](2227) +[PDF](143.2KB) Abstract: Some existence theorems are obtained for periodic and subharmonic solutions of ordinary $P$-Laplacian systems by the minimax methods in critical point theory. 2019  Impact Factor: 1.27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8833602070808411, "perplexity": 386.33171527416215}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703495936.3/warc/CC-MAIN-20210115164417-20210115194417-00541.warc.gz"}
http://mathoverflow.net/questions/73636/how-can-i-make-a-non-gaussian-first-order-autoregressive-sequence-of-random-vari
How can I make a non-Gaussian first-order autoregressive sequence of random variables independent? Hi everybody, Consider a sequence of non-Gaussian first-order autoregressive random variables of length $N$, $\mathbf{X}=\{x_i\}_{i=1}^N$, generated from a common stationary distribution $p(\mathbf{x})$, with covariance matrix $$\mathbf{K}_{\mathbf{x}\mathbf{x}}=\text{Toeplitz}(1, \rho, \rho^2, \ldots, \rho^{N-1}),$$ where $\rho$ is a normalized correlation coefficient. Can you please help me find some approaches for transforming $\mathbf{X}$ into a sequence of independent, identically distributed (i.i.d.) random variables?
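One standard first step, sketched below, is to whiten the sequence with the inverse Cholesky factor of $\mathbf{K}_{\mathbf{x}\mathbf{x}}$. For an AR(1) sequence driven by i.i.d. innovations this in fact recovers the innovations, since for this Toeplitz covariance $L^{-1}$ acts (up to scaling) as the differencing $x_i - \rho x_{i-1}$; for general non-Gaussian sequences, however, whitening only removes linear correlation, which is weaker than independence. A minimal sketch, assuming $\rho$ is known (the simulated data and all names here are just for illustration):

```python
# Illustrative sketch: decorrelate an AR(1) sequence with known rho by
# applying the inverse Cholesky factor of its Toeplitz covariance.
import numpy as np
from scipy.linalg import toeplitz, cholesky, solve_triangular

rng = np.random.default_rng(0)
N, rho = 1000, 0.8

# Simulate a non-Gaussian AR(1) sequence (exponential innovations).
x = np.empty(N)
x[0] = rng.exponential()
for i in range(1, N):
    x[i] = rho * x[i - 1] + rng.exponential()
x = x - x.mean()  # center; whitening only addresses second moments

# K = Toeplitz(1, rho, ..., rho^(N-1)); if K = L L^T, then z = L^{-1} x
# has covariance proportional to the identity.
K = toeplitz(rho ** np.arange(N))
L = cholesky(K, lower=True)
z = solve_triangular(L, x, lower=True)

# The entries of z are (approximately) uncorrelated, but for
# non-Gaussian x this does not by itself make them independent.
```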
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8597054481506348, "perplexity": 175.51145483771586}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507444312.8/warc/CC-MAIN-20141017005724-00220-ip-10-16-133-185.ec2.internal.warc.gz"}
https://ahilado.wordpress.com/2017/06/
In Valuations and Completions we introduced the $p$-adic numbers $\mathbb{Q}_{p}$, which, like the real numbers, are the completion of the rational numbers under a certain kind of valuation. There is one such valuation for each prime number $p$, and another for the “infinite prime”, which is just the usual absolute value. Each valuation may be thought of as encoding number theoretic information related to the prime $p$, or to the “infinite prime”, for the case of the absolute value (more technically, the $p$-adic valuations are referred to as nonarchimedean valuations, while the absolute value is an example of an archimedean valuation).

We can consider valuations not only for the rational numbers, but for more general algebraic number fields as well. In its abstract form, given an algebraic number field $K$, a (multiplicative) valuation of $K$ is simply any function $|\ |$ from $K$ to $\mathbb{R}$ satisfying the following properties:

(i) $|x|\geq 0$, with $|x|=0$ if and only if $x=0$

(ii) $|xy|=|x||y|$

(iii) $|x+y|\leq|x|+|y|$

If this seems reminiscent of the discussion in Metric, Norm, and Inner Product, it is because a valuation does, in fact, define a metric on $K$, and by extension, a topology. Two valuations are equivalent if they define the same topology; another way to phrase this statement is that two valuations $|\ |_{1}$ and $|\ |_{2}$ are equivalent if $|x|_{1}=|x|_{2}^{s}$ for some positive real number $s$, for all $x\in K$. The valuation is nonarchimedean if $|x+y|\leq\text{max}\{|x|,|y|\}$; otherwise, it is archimedean.

Just as in the case of the rational numbers, we also have an exponential valuation, defined as a function $v$ from the field $K$ to $\mathbb{R}\cup\{\infty\}$ satisfying the following conditions:

(i) $v(x)=\infty$ if and only if $x=0$

(ii) $v(xy)=v(x)+v(y)$

(iii) $v(x+y)\geq\text{min}\{v(x),v(y)\}$

Two exponential valuations $v_{1}$ and $v_{2}$ are equivalent if $v_{1}(x)=sv_{2}(x)$ for some positive real number $s$, for all $x\in K$.

The idea of valuations allows us to make certain concepts in algebraic number theory (see Algebraic Numbers) more abstract. We define a place $v$ of an algebraic number field $K$ as an equivalence class of valuations of $K$. We write $K_{v}$ to denote the completion of $K$ under the place $v$; these are the generalizations of the $p$-adic numbers and real numbers to algebraic number fields other than $\mathbb{Q}$. The nonarchimedean places are also called the finite places, while the archimedean places are also called the infinite places. To express that a place $v$ is a finite place or an infinite place, we write $v\nmid\infty$ or $v|\infty$ respectively. The infinite places are of two kinds; the ones for which $K_{v}$ is isomorphic to $\mathbb{R}$ are called the real places, while the ones for which $K_{v}$ is isomorphic to $\mathbb{C}$ are called the complex places. The number of real places and complex places of $K$, denoted by $r_{1}$ and $r_{2}$ respectively, satisfies the equation $r_{1}+2r_{2}=n$, where $n$ is the degree of $K$ over $\mathbb{Q}$, i.e. $n=[K:\mathbb{Q}]$.

By the way, in some of the literature, such as in the book Algebraic Number Theory by Jurgen Neukirch, “places” are also referred to as “primes”. This is intentional – one may actually think of our definition of places as being like a more abstract replacement of the definition of primes. This is quite advantageous in driving home the concept of primes as equivalence classes of valuations; however, to avoid confusion, we will stick to using the term “places” here, along with its corresponding notation.
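As a concrete example, take $K=\mathbb{Q}$: the finite places correspond to the prime numbers $p$, with $v_{p}(x)$ given by the exponent of $p$ in the prime factorization of $x$ and $|x|_{p}=p^{-v_{p}(x)}$, while there is a single infinite place given by the usual absolute value, so that $r_{1}=1$ and $r_{2}=0$. For instance, since $50=2\cdot 5^{2}$, we have $v_{5}(50)=2$ and $|50|_{5}=5^{-2}=1/25$, while $|50|_{p}=1$ for every prime $p$ other than $2$ and $5$.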
When $v$ is a nonarchimedean valuation, we let $\mathfrak{o}_{v}$ denote the set of all elements $x$ of $K_{v}$ for which $|x|_{v}\leq 1$. It is an example of a ring with special properties called a valuation ring. This means that, for any $x$ in $K_{v}$, either $x$ or $x^{-1}$ must be in $\mathfrak{o}_{v}$. We let $\mathfrak{o}_{v}^{*}$ denote the set of all elements of $\mathfrak{o}_{v}$ for which $|x|_{v}=1$, and we let $\mathfrak{p}_{v}$ denote the set of all elements of $\mathfrak{o}_{v}$ for which $|x|_{v}<1$. It is the unique maximal ideal of $\mathfrak{o}_{v}$.

Now we proceed to consider the modern point of view in algebraic number theory, which is to consider all these equivalence classes of valuations together. This will lead us to the language of adeles and ideles.

An adele $\alpha$ of $K$ is a family $(\alpha_{v})$, indexed by the places $v$ of $K$, of elements $\alpha_{v}\in K_{v}$ such that $\alpha_{v}\in\mathfrak{o}_{v}$ for all but finitely many $v$. We can define addition and multiplication componentwise on adeles, and the resulting ring of adeles is then denoted $\mathbb{A}_{K}$. The group of units of the ring of adeles is called the group of ideles, denoted $I_{K}$.

For a finite set of primes $S$ that includes the infinite primes, we let

$\displaystyle \mathbb{A}_{K}^{S}=\prod_{v\in S}K_{v}\times\prod_{v\notin S}\mathfrak{o}_{v}$

and

$\displaystyle I_{K}^{S}=\prod_{v\in S}K_{v}^{*}\times\prod_{v\notin S}\mathfrak{o}_{v}^{*}$.

We denote the set of infinite primes by $S_{\infty}$. Then $\mathfrak{o}_{K}$, the ring of integers of the number field $K$, is given by $K\cap\mathbb{A}_{K}^{S_{\infty}}$, while $\mathfrak{o}_{K}^{*}$, the group of units of $\mathfrak{o}_{K}$, is given by $K^{*}\cap I_{K}^{S_{\infty}}$.

Any element of $K$ is also an element of $\mathbb{A}_{K}$, and any element of $K^{*}$ (the group of units of $K$) is also an element of $I_{K}$. The elements of $I_{K}$ which are also elements of $K^{*}$ are called the principal ideles. This should not be confused with the concept of principal ideals; however, the terminology is perhaps deliberately suggestive. In fact, ideles and fractional ideals are related: any fractional ideal $\mathfrak{a}$ can be expressed in the form

$\displaystyle \mathfrak{a}=\prod_{\mathfrak{p}}\mathfrak{p}^{v_{\mathfrak{p}}}$.

Therefore, we have a mapping

$\displaystyle \alpha\mapsto (\alpha)=\prod_{\mathfrak{p}}\mathfrak{p}^{v_{\mathfrak{p}}(\alpha_{\mathfrak{p}})}$

from the group of ideles to the group of fractional ideals. This mapping is surjective, and its kernel is $I_{K}^{S_{\infty}}$.

The quotient group $I_{K}/K^{*}$ is called the idele class group of $K$, and is denoted by $C_{K}$. Again, this is not to be confused with the ideal class group we discussed in Algebraic Numbers, although the two are related; in the language of ideles, the ideal class group is defined as $I_{K}/I_{K}^{S_{\infty}}K^{*}$, and is denoted by $Cl_{K}$. There is a surjective homomorphism $C_{K}\rightarrow Cl_{K}$ induced by the surjective homomorphism from the group of ideles to the group of fractional ideals described in the preceding paragraph.

An important aspect of the concept of adeles and ideles is that they can be equipped with topologies (see Basics of Topology and Continuous Functions).
For the adeles, this topology is generated by the neighborhoods of $0$ in $\mathbb{A}_{K}^{S_{\infty}}$ under the product topology. For the ideles, this topology is defined by the condition that the mapping $\alpha\mapsto (\alpha,\alpha^{-1})$ from $I_{K}$ into $\mathbb{A}_{K}\times\mathbb{A}_{K}$ be a homeomorphism onto its image. Both topologies are locally compact, which means that every element has a neighborhood which is compact, i.e. every open cover of that neighborhood has a finite subcover. For the group of ideles, its topology is compatible with its group structure, which makes it into a locally compact topological group.

In this post, we have therefore seen how the theory of valuations can allow us to consider a more abstract viewpoint for algebraic number theory, and how considering all the valuations together to form adeles and ideles allows us to rephrase the usual concepts related to algebraic number fields, such as the ring of integers, its group of units, and the ideal class group, in a new form. In addition, the topologies on the adeles and ideles can be used to obtain new results; for instance, because the group of ideles is a locally compact topological (abelian) group, we can use the methods of harmonic analysis (see Some Basics of Fourier Analysis) to study it. This is the content of the famous thesis of the mathematician John Tate. Another direction where the concept of adeles and ideles can take us is class field theory, which relates the idele class group to the other important group in algebraic number theory, the Galois group (see Galois Groups). The language of adeles and ideles can also be applied not only to algebraic number fields but also to function fields of curves over finite fields. Together these fields are also known as global fields.

References:
Tate’s Thesis on Wikipedia
Class Field Theory on Wikipedia
Algebraic Number Theory by Jurgen Neukirch
Algebraic Number Theory by J. W. S. Cassels and A. Frohlich
A Panorama of Pure Mathematics by Jean Dieudonne

# Adjoint Functors and Monads

In Category Theory we introduced the language of categories, and in many posts in this blog we have seen how useful it is in describing concepts in modern mathematics, for example in the two most recent posts, The Theory of Motives and Algebraic Spaces and Stacks. In this post, we introduce another important concept in category theory, that of adjoint functors, as well as the closely related notion of monads. Manifestations of these ideas are quite ubiquitous in modern mathematics, and we enumerate a few examples in this post.

An adjunction between two categories $\mathbf{C}$ and $\mathbf{D}$ is a pair of functors, $F:\mathbf{C}\rightarrow \mathbf{D}$, and $G:\mathbf{D}\rightarrow \mathbf{C}$, such that there exists a bijection

$\displaystyle \text{Hom}_{\mathbf{D}}(F(X),Y)\cong\text{Hom}_{\mathbf{C}}(X,G(Y))$,

natural in $X$ and $Y$, for all objects $X$ of $\mathbf{C}$ and all objects $Y$ of $\mathbf{D}$. We say that $F$ is left-adjoint to $G$, and that $G$ is right-adjoint to $F$. We may also write $F\dashv G$.

An adjunction determines two natural transformations $\eta: 1_{\mathbf{C}}\rightarrow G\circ F$ and $\epsilon:F\circ G\rightarrow 1_{\mathbf{D}}$, called the unit and counit, respectively. Conversely, the functors $F$ and $G$, together with the natural transformations $\eta$ and $\epsilon$, are enough to determine the adjunction, therefore we can also denote the adjunction by $(F,G,\eta,\epsilon)$.

We give an example of an adjunction.
Let $K$ be a fixed field, and consider the functors

$F:\textbf{Sets}\rightarrow\textbf{Vect}_{K}$

$\displaystyle G:\textbf{Vect}_{K}\rightarrow\textbf{Sets}$

where $F$ is the functor which assigns to a set $X$ the vector space $F(X)$ made up of formal linear combinations of elements of $X$ with coefficients in $K$; in other words, an element of $F(X)$ can be written as $\sum_{i}a_{i}x_{i}$, where $a_{i}\in K$ and $x_{i}\in X$, and $G$ is the forgetful functor, which assigns to a vector space $V$ the set $G(V)$ of elements (vectors) of $V$; in other words, it simply “forgets” the vector space structure on $V$.

For every function $g:X\rightarrow G(V)$ in $\textbf{Sets}$ we have a linear transformation $f:F(X)\rightarrow V$ in $\textbf{Vect}_{K}$ given by $f(\sum_{i}a_{i}x_{i})=\sum_{i}a_{i}g(x_{i})$. The correspondence $\psi:g\mapsto f$ has an inverse $\varphi$, given by restricting $f$ to the elements of $X$ (viewed inside $F(X)$ as the formal linear combinations with a single coefficient equal to $1$), which yields a set-theoretic function $X\rightarrow G(V)$. Hence we have a bijection

$\displaystyle \text{Hom}_{\textbf{Vect}_{K}}(F(X),V)\cong\text{Hom}_{\textbf{Sets}}(X,G(V))$.

We therefore see that the two functors $F$ and $G$ form an adjunction; the functor $F$ (sometimes called the free functor) is left-adjoint to the forgetful functor $G$, and $G$ is right-adjoint to $F$. (We give a small computational sketch of this bijection below.)

As another example, consider now the category of modules over a commutative ring $R$, and the functors $-\otimes_{R}B$ and $\text{Hom}_{R}(B,-)$ (see The Hom and Tensor Functors). For every morphism $g:A\otimes_{R}B\rightarrow C$ we have another morphism $f: A\rightarrow\text{Hom}_{R}(B,C)$ given by $[f(a)](b)=g(a\otimes b)$. We actually have a bijection

$\displaystyle \text{Hom}(A\otimes_{R}B,C)\cong\text{Hom}(A,\text{Hom}_{R}(B,C))$.

This is called the Tensor-Hom adjunction.

Closely related to the concept of an adjunction is the concept of a monad. A monad is a triple $(T,\eta,\mu)$ where $T$ is a functor from $\mathbf{C}$ to itself, $\eta$ is a natural transformation from $1_{\mathbf{C}}$ to $T$, and $\mu$ is a natural transformation from $T^{2}$ to $T$, satisfying the following properties:

$\displaystyle \mu\circ\mu_{T}=\mu\circ T\mu$

$\displaystyle \mu\circ\eta_{T}=\mu\circ T\eta=1$

Dual to the concept of a monad is the concept of a comonad. A comonad on a category $\mathbf{C}$ may be thought of as a monad on the opposite category $\mathbf{C}^{\text{op}}$.

As an example of a monad, we can consider the action of a fixed group $G$ on sets (such as the symmetric group permuting the elements of a set, for example). In this case, our category will be $\mathbf{Sets}$, and $T$, $\eta$, and $\mu$ are given by

$\displaystyle T(X)=G\times X$

$\displaystyle \eta:X\rightarrow G\times X$ given by $x\mapsto\langle e,x\rangle$, where $e$ is the identity element of $G$

$\displaystyle \mu:G\times (G\times X)\rightarrow G\times X$ given by $\langle g_{1},\langle g_{2},x\rangle\rangle\mapsto \langle g_{1}g_{2},x\rangle$

Adjunctions and monads are related in the following way. Let $F:\mathbf{C}\rightarrow\mathbf{D}$ and $G:\mathbf{D}\rightarrow\mathbf{C}$ be a pair of adjoint functors with unit $\eta$ and counit $\epsilon$. Then we have a monad on $\mathbf{C}$ given by $(G\circ F,\eta,G\epsilon_{F})$. We can also obtain a comonad given by $(F\circ G,\epsilon,F\eta_{G})$.
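Before turning to the converse direction, here is the promised small computational sketch of the free/forgetful adjunction, with $K=\mathbb{R}$, vectors in $V$ modelled as numpy arrays, and elements of $F(X)$ modelled as dictionaries of coefficients (the code and all names in it are only an informal illustration, not part of the mathematical development):

```python
# Toy model of the bijection Hom_Vect(F(X), V) ~ Hom_Sets(X, G(V)):
# formal linear combinations in F(X) are dicts {element: coefficient}.
import numpy as np

def linear_extension(g):
    """psi: extend a set map g : X -> G(V) to a linear map F(X) -> V."""
    def f(formal_sum):
        return sum(a * g(x) for x, a in formal_sum.items())
    return f

def restriction(f):
    """phi: restrict a linear map f : F(X) -> V to the basis elements of X."""
    return lambda x: f({x: 1.0})

# Example: X = {"a", "b"}, V = R^2.
g = lambda x: np.array([1.0, 0.0]) if x == "a" else np.array([0.0, 1.0])
f = linear_extension(g)
print(f({"a": 2.0, "b": -3.0}))  # 2*g("a") - 3*g("b") = [ 2. -3.]
print(restriction(f)("a"))       # recovers g("a") = [1. 0.]
```

The fact that `linear_extension` and `restriction` are mutually inverse is exactly the bijection $\text{Hom}_{\textbf{Vect}_{K}}(F(X),V)\cong\text{Hom}_{\textbf{Sets}}(X,G(V))$ in this special case.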
Conversely, if we have a monad $(T,\eta,\mu)$ on the category $\mathbf{C}$, we can obtain a pair of adjoint functors $F:\mathbf{C}\rightarrow\mathbf{C}^{T}$ and $G:\mathbf{C}^{T}\rightarrow\mathbf{C}$, where $\mathbf{C}^{T}$ is the Eilenberg-Moore category, whose objects (called $T$-algebras) are pairs $(A,\alpha)$, where $A$ is an object of $\mathbf{C}$, and $\alpha$ is a morphism $T(A)\rightarrow A$ satisfying

$\displaystyle \alpha\circ \eta_{A}=1_{A}$

$\displaystyle \alpha\circ \mu_{A}=\alpha\circ T(\alpha)$,

and whose morphisms $h:(A,\alpha)\rightarrow (B,\beta)$ are morphisms $h:A\rightarrow B$ in $\mathbf{C}$ such that

$\displaystyle h\circ\alpha=\beta\circ T(h)$.

In the example we gave above in the discussion on monads, the $T$-algebras are exactly the sets with an action of the group $G$. If $X$ is such a set, then the corresponding $T$-algebra is the pair $(X,h)$, where the function $h:G\times X\rightarrow X$ satisfies

$\displaystyle h(g_{1},h(g_{2},x))=h(g_{1}g_{2},x)$

$\displaystyle h(e,x)=x$.

For comonads, we have a dual notion of coalgebras. These “dual” ideas are important objects of study in themselves, for example in topos theory. Another reason to consider comonads and coalgebras is that in mathematics there often arises a situation where we have three functors

$\displaystyle L:\mathbf{D}\rightarrow\mathbf{C}$

$\displaystyle F:\mathbf{C}\rightarrow\mathbf{D}$

$\displaystyle R:\mathbf{D}\rightarrow\mathbf{C}$

where $L$ is left-adjoint to $F$, and $R$ is right-adjoint to $F$ (a so-called adjoint triple). As an example, consider the forgetful functor $F:\textbf{Top}\rightarrow\textbf{Sets}$ which assigns to a topological space its underlying set. It has both a left-adjoint $L:\textbf{Sets}\rightarrow\textbf{Top}$ which assigns to a set $X$ the discrete topology (where every subset of $X$ is an open set), and a right-adjoint $R:\textbf{Sets}\rightarrow\textbf{Top}$ which assigns to the set $X$ the trivial topology (where the only open sets are the empty set and $X$ itself). Therefore we have a monad and a comonad on $\textbf{Sets}$ given by $F\circ L$ and $F\circ R$ respectively.

Many more examples of adjoint functors and monads can be found in pretty much all areas of mathematics. And according to a principle attributed to the mathematician Saunders Mac Lane (one of the founders of category theory, along with Samuel Eilenberg), such a structure that occurs widely enough in mathematics deserves to be studied for its own sake.

References:
Categories for the Working Mathematician by Saunders Mac Lane
Category Theory by Steve Awodey

# Algebraic Spaces and Stacks

We introduced the concept of a moduli space in The Moduli Space of Elliptic Curves, and constructed explicitly the moduli space of elliptic curves, using the methods of complex analysis. In this post, we introduce the concepts of algebraic spaces and stacks, far-reaching generalizations of the concepts of varieties and schemes (see Varieties and Schemes Revisited), that are very useful, among other things, for constructing “moduli stacks”, which are an improvement over the naive notion of moduli space, namely in that one can obtain from them all “families of objects” by pulling back a “universal object”.

We need first the concept of a fibered category (also spelled fibred category).
Given a category $\mathcal{C}$, we say that some other category $\mathcal{S}$ is a category over $\mathcal{C}$ if there is a functor $p$ from $\mathcal{S}$ to $\mathcal{C}$ (this should be reminiscent of our discussion in Grothendieck’s Relative Point of View). If $\mathcal{S}$ is a category over some other category $\mathcal{C}$, we say that it is a fibered category (over $\mathcal{C}$) if for every object $U=p(x)$ and morphism $f: V\rightarrow U$ in $\mathcal{C}$, there is a strongly cartesian morphism $\phi: f^{*}x\rightarrow x$ in $\mathcal{S}$ with $f=p(\phi)$. This means that any other morphism $\psi: z\rightarrow x$ whose image $p(\psi)$ under the functor $p$ factors as $p(\psi)=p(\phi)\circ h$ must also factor as $\psi=\phi\circ \theta$ under some unique morphism $\theta: z\rightarrow f^{*}x$ whose image under the functor $p$ is $h$. We refer to $f^{*}x$ as the pullback of $x$ along $f$.

Under the functor $p$, the objects of $\mathcal{S}$ which get sent to $U$ in $\mathcal{C}$ and the morphisms of $\mathcal{S}$ which get sent to the identity morphism $i_{U}$ in $\mathcal{C}$ form a subcategory of $\mathcal{S}$ called the fiber over $U$. We will also write it as $\mathcal{S}_{U}$.

An important example of a fibered category is given by an ordinary presheaf on a category $\mathcal{C}$, i.e. a functor $F:\mathcal{C}^{\text{op}}\rightarrow \textbf{Sets}$; we can consider it as a category fibered in sets $\mathcal{S}_{F}\rightarrow\mathcal{C}$.

A special kind of fibered category that we will need later on is a category fibered in groupoids. A groupoid is simply a category where all morphisms have inverses (for instance, a group may be thought of as a groupoid with a single object), and a category fibered in groupoids is a fibered category where all the fibers are groupoids. A set is a special kind of groupoid, since it may be thought of as a category whose only morphisms are the identity morphisms (which are trivially their own inverses). Hence, the example given in the previous paragraph, that of a presheaf, is also an example of a category fibered in groupoids, since it is fibered in sets.

Now that we have the concept of fibered categories, we next want to define prestacks and stacks. Central to the definition of prestacks and stacks is the concept known as descent, so we have to discuss it first. The theory of descent can be thought of as a formalization of the idea of “gluing”.

Let $\mathcal{U}=\{f_{i}:U_{i}\rightarrow U\}$ be a covering (see Sheaves and More Category Theory: The Grothendieck Topos) of the object $U$ of $\mathcal{C}$. An object with descent data is a collection of objects $X_{i}$ in $\mathcal{S}_{U_{i}}$, together with transition isomorphisms $\varphi_{ij}:\text{pr}_{0}^{*}X_{i}\simeq\text{pr}_{1}^{*}X_{j}$ in $\mathcal{S}_{U_{i}\times_{U}U_{j}}$, satisfying the cocycle condition

$\displaystyle \text{pr}_{02}^{*}\varphi_{ik}=\text{pr}_{01}^{*}\varphi_{ij}\circ \text{pr}_{12}^{*}\varphi_{jk}:\text{pr}_{0}^{*}X_{i}\rightarrow \text{pr}_{2}^{*}X_{k}$

The morphisms $\text{pr}_{0}:U_{i}\times_{U}U_{j}\rightarrow U_{i}$ and $\text{pr}_{1}:U_{i}\times_{U}U_{j}\rightarrow U_{j}$ are the projection morphisms. The notations $\text{pr}_{0}^{*}X_{i}$ and $\text{pr}_{1}^{*}X_{j}$ mean that we are “pulling back” $X_{i}$ and $X_{j}$ from $\mathcal{S}_{U_{i}}$ and $\mathcal{S}_{U_{j}}$, respectively, to $\mathcal{S}_{U_{i}\times_{U}U_{j}}$. (For a familiar instance of this, think of vector bundles on an open cover of a topological space: an object with descent data is a family of bundles on the open sets together with isomorphisms on the overlaps satisfying the cocycle condition, which is exactly the usual gluing data for a bundle on the whole space.)
A morphism between two objects with descent data is a collection of morphisms $\psi_{i}:X_{i}\rightarrow X'_{i}$ in $\mathcal{S}_{U_{i}}$ such that $\varphi'_{ij}\circ\text{pr}_{0}^{*}\psi_{i}=\text{pr}_{1}^{*}\psi_{j}\circ\varphi_{ij}$. Therefore we obtain a category, the category of objects with descent data, denoted $\mathcal{DD}(\mathcal{U})$.

We can define a functor $\mathcal{S}_{U}\rightarrow\mathcal{DD}(\mathcal{U})$ by assigning to each object $X$ of $\mathcal{S}_{U}$ the object with descent data given by the pullbacks $f_{i}^{*}X$ and the canonical isomorphisms $\text{pr}_{0}^{*}f_{i}^{*}X\rightarrow\text{pr}_{1}^{*}f_{j}^{*}X$. An object with descent data that is in the essential image of this functor is called effective.

Before we give the definitions of prestacks and stacks, we recall some definitions from category theory:

A functor $F:\mathcal{A}\rightarrow\mathcal{B}$ is faithful if the induced map $\text{Hom}_{\mathcal{A}}(x,y)\rightarrow \text{Hom}_{\mathcal{B}}(F(x),F(y))$ is injective for any two objects $x$ and $y$ of $\mathcal{A}$.

A functor $F:\mathcal{A}\rightarrow\mathcal{B}$ is full if the induced map $\text{Hom}_{\mathcal{A}}(x,y)\rightarrow \text{Hom}_{\mathcal{B}}(F(x),F(y))$ is surjective for any two objects $x$ and $y$ of $\mathcal{A}$.

A functor $F:\mathcal{A}\rightarrow\mathcal{B}$ is essentially surjective if any object $y$ of $\mathcal{B}$ is isomorphic to the image $F(x)$ of some object $x$ in $\mathcal{A}$ under $F$.

A functor which is both faithful and full is called fully faithful. If, in addition, it is also essentially surjective, then it is called an equivalence of categories.

Now we give the definitions of prestacks and stacks using the functor $\mathcal{S}_{U}\rightarrow\mathcal{DD}(\mathcal{U})$ we have defined earlier. If the functor $\mathcal{S}_{U}\rightarrow\mathcal{DD}(\mathcal{U})$ is fully faithful, then the fibered category $\mathcal{S}\rightarrow\mathcal{C}$ is a prestack. If the functor $\mathcal{S}_{U}\rightarrow\mathcal{DD}(\mathcal{U})$ is an equivalence of categories, then the fibered category $\mathcal{S}\rightarrow\mathcal{C}$ is a stack.

Going back to the example of a presheaf as a fibered category, we now look at what it means when it satisfies the conditions for being a prestack, or a stack: (i) $F$ is a prestack if and only if it is a separated presheaf, and (ii) $F$ is a stack if and only if it is a sheaf.

We now have the abstract idea of a stack in terms of category theory. Next we want to have more specific examples of interest in algebraic geometry, namely, algebraic spaces and algebraic stacks. For this we need first the idea of a representable functor (and the closely related idea of a representable presheaf). The importance of representability is that it will allow us to “transfer” interesting properties of morphisms between schemes, such as being surjective, etale, or smooth, to functors between categories or natural transformations between functors. Therefore we will be able to say that a functor or natural transformation is surjective, or etale, or smooth, which is important, because we will define algebraic spaces and stacks as functors and categories, respectively, but we want them to still be closely related, or similar enough, to schemes.

A representable functor is a functor from $\mathcal{C}$ to $\textbf{Sets}$ which is naturally isomorphic to the functor which assigns to any object $X$ the set of morphisms $\text{Hom}(U,X)$, for some fixed object $U$ of $\mathcal{C}$.
A representable presheaf is a contravariant functor from $\mathcal{C}$ to $\textbf{Sets}$ which is naturally isomorphic to the functor which assigns to any object $X$ the set of morphisms $\text{Hom}(X,U)$, for some fixed object $U$ of $\mathcal{C}$. If $\mathcal{C}$ is the category of schemes, the latter functor is also called the functor of points of the object $U$.

We take this opportunity to emphasize a very important concept in modern algebraic geometry. The functor of points $h_{U}$ of a scheme $U$ may be identified with $U$ itself. There are many advantages to this point of view (which is also known as functorial algebraic geometry); in particular we will need it later when we give the definition of algebraic spaces and stacks.

We now have the idea of a representable functor. Next we want to have an idea of a representable natural transformation (or representable morphism) of functors. We will need another prerequisite, that of a fiber product of functors.

Let $F,G,H:\mathcal{C}^{\text{op}}\rightarrow \textbf{Sets}$ be functors, and let $a:F\rightarrow G$ and $b:H\rightarrow G$ be natural transformations between these functors. Then the fiber product $F\times_{a,G,b}H$ is a functor from $\mathcal{C}^{\text{op}}$ to $\textbf{Sets}$, and is given by the formula

$\displaystyle (F\times_{a,G,b}H)(X)=F(X)\times_{a_{X},G(X),b_{X}}H(X)$

for any object $X$ of $\mathcal{C}$.

Let $F,G:\mathcal{C}^{\text{op}}\rightarrow \textbf{Sets}$ be functors. We say that a natural transformation $a:F\rightarrow G$ is representable, or that $F$ is relatively representable over $G$, if for every $U\in\text{Ob}(\mathcal{C})$ and any $\xi\in G(U)$ the functor $h_{U}\times_{G}F$ is representable.

We now let $(\text{Sch}/S)_{\text{fppf}}$ be the site (a category with a Grothendieck topology – see also More Category Theory: The Grothendieck Topos) whose underlying category is the category of $S$-schemes, and whose coverings are given by families of flat, locally finitely presented morphisms. Any etale covering or Zariski covering is an example of this “fppf covering” (“fppf” stands for fidèlement plat de présentation finie, which is French for faithfully flat and of finite presentation).

An algebraic space over a scheme $S$ is a presheaf

$\displaystyle F:((\text{Sch}/S)_{\text{fppf}})^{\text{op}}\rightarrow \textbf{Sets}$

with the following properties:

(1) The presheaf $F$ is a sheaf.

(2) The diagonal morphism $F\rightarrow F\times F$ is representable.

(3) There exists a scheme $U\in\text{Ob}((\text{Sch}/S)_{\text{fppf}})$ and a map $h_{U}\rightarrow F$ which is surjective and etale (this is often written simply as $U\rightarrow F$). The scheme $U$ is also called an atlas.

The diagonal morphism being representable implies that the natural transformation $h_{U}\rightarrow F$ is also representable, and this is what allows us to describe it as surjective and etale, as has been explained earlier.

An algebraic space is a generalization of the notion of a scheme. In fact, a scheme is simply the case where, for the third condition, $U$ is a disjoint union of affine schemes $U_{i}$ and the map $h_{U}\rightarrow F$ is an open immersion. We recall that a scheme may be thought of as being made up of affine schemes “glued together”. This “gluing” is obtained using the Zariski topology. The notion of an algebraic space generalizes this to the etale topology. (A standard example of an algebraic space that need not be a scheme is the quotient of a scheme by a free action of a finite group, or more generally by an étale equivalence relation.) Next we want to define algebraic stacks.
Unlike algebraic spaces, which we defined as presheaves (functors), we will define algebraic stacks as categories, so we need to once again revisit the notion of representability in terms of categories.

Let $\mathcal{C}$ be a category. A category fibered in groupoids $p:\mathcal{S}\rightarrow\mathcal{C}$ is called representable if there exists an object $X$ of $\mathcal{C}$ and an equivalence $j:\mathcal{S}\rightarrow \mathcal{C}/X$. (The notation $\mathcal{C}/X$ signifies a slice category, whose objects are morphisms $f:U\rightarrow X$ in $\mathcal{C}$, and whose morphisms are morphisms $h:U\rightarrow V$ in $\mathcal{C}$ such that $f=g\circ h$, where $g:V\rightarrow X$.)

We give two specific special cases of interest to us (although in this post we will only need the latter):

Let $\mathcal{X}$ be a category fibered in groupoids over $(\text{Sch}/S)_{\text{fppf}}$. Then $\mathcal{X}$ is representable by a scheme if there exists a scheme $U\in\text{Ob}((\text{Sch}/S)_{\text{fppf}})$ and an equivalence $j:\mathcal{X}\rightarrow (\text{Sch}/U)_{\text{fppf}}$ of categories over $(\text{Sch}/S)_{\text{fppf}}$.

A category fibered in groupoids $p : \mathcal{X}\rightarrow (\text{Sch}/S)_{\text{fppf}}$ is representable by an algebraic space over $S$ if there exists an algebraic space $F$ over $S$ and an equivalence $j:\mathcal{X}\rightarrow \mathcal{S}_{F}$ of categories over $(\text{Sch}/S)_{\text{fppf}}$.

Next, following what we did earlier for the case of algebraic spaces, we want to define the notion of representability (by algebraic spaces) for morphisms of categories fibered in groupoids (these are simply functors satisfying some compatibility conditions with the extra structure of the category). We will need, once again, the notion of a fiber product, this time of categories over some other fixed category.

Let $F:\mathcal{X}\rightarrow\mathcal{S}$ and $G:\mathcal{Y}\rightarrow\mathcal{S}$ be morphisms of categories over $\mathcal{C}$. The fiber product $\mathcal{X}\times_{\mathcal{S}}\mathcal{Y}$ is given by the following description:

(1) an object of $\mathcal{X}\times_{\mathcal{S}}\mathcal{Y}$ is a quadruple $(U,x,y,f)$, where $U\in\text{Ob}(\mathcal{C})$, $x\in\text{Ob}(\mathcal{X}_{U})$, $y\in\text{Ob}(\mathcal{Y}_{U})$, and $f : F(x)\rightarrow G(y)$ is an isomorphism in $\mathcal{S}_{U}$,

(2) a morphism $(U,x,y,f) \rightarrow (U',x',y',f')$ is given by a pair $(a,b)$, where $a:x\rightarrow x'$ is a morphism in $\mathcal{X}$, and $b:y\rightarrow y'$ is a morphism in $\mathcal{Y}$, such that $a$ and $b$ induce the same morphism $U\rightarrow U'$, and $f'\circ F(a)=G(b)\circ f$.

Let $S$ be a scheme. A morphism $f:\mathcal{X}\rightarrow \mathcal{Y}$ of categories fibered in groupoids over $(\text{Sch}/S)_{\text{fppf}}$ is called representable by algebraic spaces if for any $U\in\text{Ob}((\text{Sch}/S)_{\text{fppf}})$ and any $y:(\text{Sch}/U)_{\text{fppf}}\rightarrow\mathcal{Y}$ the category fibered in groupoids

$\displaystyle (\text{Sch}/U)_{\text{fppf}}\times_{y,\mathcal{Y}}\mathcal{X}$

over $(\text{Sch}/U)_{\text{fppf}}$ is representable by an algebraic space over $U$.

An algebraic stack (or Artin stack) over a scheme $S$ is a category

$\displaystyle p:\mathcal{X}\rightarrow (\text{Sch}/S)_{\text{fppf}}$

with the following properties:

(1) The category $\mathcal{X}$ is a stack in groupoids over $(\text{Sch}/S)_{\text{fppf}}$.

(2) The diagonal $\Delta:\mathcal{X}\rightarrow \mathcal{X}\times\mathcal{X}$ is representable by algebraic spaces.
(3) There exists a scheme $U\in\text{Ob}((\text{Sch}/S)_{\text{fppf}})$ and a morphism $(\text{Sch}/U)_{\text{fppf}}\rightarrow\mathcal{X}$ which is surjective and smooth (this is often written simply as $U\rightarrow\mathcal{X}$). Again, the scheme $U$ is called an atlas. If the morphism $(\text{Sch}/U)_{\text{fppf}}\rightarrow\mathcal{X}$ is surjective and etale, we have a Deligne-Mumford stack.

Just as an algebraic space is a generalization of the notion of a scheme, an algebraic stack is also a generalization of the notion of an algebraic space (recall that a presheaf can be thought of as a category fibered in sets, and sets are themselves special cases of groupoids). Therefore, the definition of an algebraic stack closely resembles the definition of an algebraic space given earlier, including the requirement that the diagonal morphism (which in this case is a functor between categories) be representable, so that the functor $(\text{Sch}/U)_{\text{fppf}}\rightarrow\mathcal{X}$ is also representable, and we can describe it as being surjective and smooth (or surjective and etale).

As an example of an application of the ideas just discussed, we mention the moduli stack of elliptic curves (which we denote by $\mathcal{M}_{1,1}$ – the reason for this notation will become clear later). A family of elliptic curves over some “base space” $B$ is a fibration $\pi:X\rightarrow B$ with a section $O:B\rightarrow X$ such that the fiber $\pi^{-1}(b)$ over any point $b$ of $B$ is an elliptic curve with origin $O(b)$.

Ideally what we want is to be able to obtain every family $X\rightarrow B$ by pulling back a “universal object” $E\rightarrow\mathcal{M}_{1,1}$ via the map $B\rightarrow\mathcal{M}_{1,1}$. This is something that even the notion of moduli space that we discussed in The Moduli Space of Elliptic Curves cannot do (we suggestively denote that moduli space by $M_{1,1}$). So we need the concept of stacks to construct this “moduli stack” that has this property. A more thorough discussion would need the notion of quotient stacks and orbifolds, but we only mention that the moduli stack of elliptic curves is in fact a Deligne-Mumford stack.

More generally, we can construct the moduli stack of curves of genus $g$ with $\nu$ marked points, denoted $\mathcal{M}_{g,\nu}$. The moduli stack of elliptic curves is simply the special case $\mathcal{M}_{1,1}$. Aside from just curves, of course, we can construct moduli stacks for many more mathematical objects, such as subschemes of some fixed scheme, or vector bundles, also on some fixed scheme.

The subject of algebraic stacks is a vast one, as may perhaps be inferred from the size of one of the main references for this post, the open-source reference The Stacks Project, which consists of almost 6,000 pages at the time of this writing. All that has been attempted in this post is but an extremely “bare bones” introduction to some of its more basic concepts. Hopefully more on stacks will be featured in future posts on the blog.

References:
Stack on Wikipedia
Algebraic Space on Wikipedia
Fibred Category on Wikipedia
Descent Theory on Wikipedia
Stack on nLab
Grothendieck Fibration on nLab
Algebraic Space on nLab
Algebraic Stack on nLab
Moduli Stack of Elliptic Curves on nLab
Stacks for Everybody by Barbara Fantechi
What is…a Stack? by Dan Edidin
Notes on the Construction of the Moduli Space of Curves by Dan Edidin
Notes on Grothendieck Topologies, Fibered Categories and Descent Theory by Angelo Vistoli
Lectures on Moduli Spaces of Elliptic Curves by Richard Hain
The Stacks Project
Algebraic Spaces and Stacks by Martin Olsson
Fundamental Algebraic Geometry: Grothendieck’s FGA Explained by Barbara Fantechi, Lothar Gottsche, Luc Illusie, Steven L. Kleiman, Nitin Nitsure, and Angelo Vistoli

# The Theory of Motives

The theory of motives originated from the observation, sometime in the 1960’s, that in algebraic geometry there were several different cohomology theories (see Homology and Cohomology and Cohomology in Algebraic Geometry), such as Betti cohomology, de Rham cohomology, $l$-adic cohomology, and crystalline cohomology. The search for a “universal cohomology theory”, from which all these other cohomology theories could be obtained, is what led to the theory of motives.

The four cohomology theories enumerated above are examples of what is called a Weil cohomology theory. A Weil cohomology theory, denoted $H^{*}$, is a functor (see Category Theory) from the category $\mathcal{V}(k)$ of smooth projective varieties over some field $k$ to the category $\textbf{GrAlg}(K)$ of graded $K$-algebras, for some other field $K$ which must be of characteristic zero, satisfying the following axioms:

(1) (Finite-dimensionality) The homogeneous components $H^{i}(X)$ of $H^{*}(X)$ are finite-dimensional for all $i$, and $H^{i}(X)=0$ whenever $i<0$ or $i>2n$, where $n$ is the dimension of the smooth projective variety $X$.

(2) (Poincare duality) There is an orientation isomorphism $H^{2n}(X)\cong K$, and a nondegenerate bilinear pairing $H^{i}(X)\times H^{2n-i}(X)\rightarrow H^{2n}(X)\cong K$.

(3) (Kunneth formula) There is an isomorphism

$\displaystyle H^{*}(X\times Y)\cong H^{*}(X)\otimes H^{*}(Y)$.

(4) (Cycle map) There is a mapping $\gamma_{X}^{i}$ from $C^{i}(X)$, the abelian group of algebraic cycles of codimension $i$ on $X$ (see Algebraic Cycles and Intersection Theory), to $H^{2i}(X)$, which is functorial with respect to pullbacks and pushforwards, has the multiplicative property $\gamma_{X\times Y}^{i+j}(Z\times W)=\gamma_{X}^{i}(Z)\otimes \gamma_{Y}^{j}(W)$, and is such that $\gamma_{\text{pt}}$ is the inclusion $\mathbb{Z}\hookrightarrow K$.

(5) (Weak Lefschetz axiom) If $W$ is a smooth hyperplane section of $X$, and $j:W\rightarrow X$ is the inclusion, the induced map $j^{*}:H^{i}(X)\rightarrow H^{i}(W)$ is an isomorphism for $i\leq n-2$, and a monomorphism for $i\leq n-1$.

(6) (Hard Lefschetz axiom) The Lefschetz operator

$\displaystyle \mathcal{L}:H^{i}(X)\rightarrow H^{i+2}(X)$

given by

$\displaystyle \mathcal{L}(x)=x\cdot\gamma_{X}^{1}(W)$

for some smooth hyperplane section $W$ of $X$, with the product $\cdot$ provided by the graded $K$-algebra structure of $H^{*}(X)$, induces an isomorphism

$\displaystyle \mathcal{L}^{i}:H^{n-i}(X)\rightarrow H^{n+i}(X)$.

The idea behind the theory of motives is that all Weil cohomology theories should factor through a “category of motives”, i.e. any Weil cohomology theory

$\displaystyle H^{*}: \mathcal{V}(k)\rightarrow \textbf{GrAlg}(K)$

can be expressed as the following composition of functors:

$\displaystyle H^{*}: \mathcal{V}(k)\xrightarrow{h} \mathcal{M}(k)\rightarrow\textbf{GrAlg}(K)$

where $\mathcal{M}(k)$ is the category of motives.
We can get different Weil cohomology theories, such as Betti cohomology, de Rham cohomology, $l$-adic cohomology, and crystalline cohomology, via different functors (called realization functors) from the category of motives to a category of graded algebras over some field $K$. This explains the term “motive”, which actually comes from the French word “motif”, which is already used in music and the visual arts, among other things, for some kind of common underlying “theme” with different possible manifestations.

Let us now try to construct this category of motives. This category is often referred to in the literature as a “linearization” of the category of smooth projective varieties. This means that we obtain it, in some sense, by starting with the category of smooth projective varieties and then modifying it so that we can do linear algebra, or more properly homological algebra, in it. In other words, we want it to behave like the category of modules over some ring. With this in mind, we want the category to be an abelian category, so that we can make sense of notions such as kernels, cokernels, and exact sequences. An abelian category is a category that satisfies the following properties:

(1) The morphisms between any two objects form an abelian group.

(2) There is a zero object.

(3) There are finite products and coproducts.

(4) Every morphism $f:X\rightarrow Y$ has a kernel and cokernel, and satisfies a decomposition

$\displaystyle K\xrightarrow{k} X\xrightarrow{i} I\xrightarrow{j} Y\xrightarrow{c} K'$

where $K$ is the kernel of $f$, $K'$ is the cokernel of $f$, and $I$ is the kernel of $c$ and the cokernel of $k$ (not to be confused with our notation for fields).

In order to proceed with our construction of the category of motives, which we now know we want to be an abelian category, we discuss the notion of correspondences. The group of correspondences of degree $r$ from a smooth projective variety $X$ to another smooth projective variety $Y$, written $\text{Corr}^{r}(X,Y)$, is defined to be the group of algebraic cycles of $X\times Y$ of codimension $n+r$, where $n$ is the dimension of $X$, i.e.

$\text{Corr}^{r}(X,Y)=C^{n+r}(X\times Y)$

A morphism (of varieties, in the usual sense) $f:Y\rightarrow X$ determines a correspondence from $X$ to $Y$ of degree $0$, given by the transpose of the graph of $f$ in $X\times Y$. Therefore we may think of correspondences as generalizations of the usual concept of morphisms of varieties. Correspondences can moreover be composed, using the pullback, intersection product, and pushforward along the projections from $X\times Y\times Z$: for correspondences $f$ from $X$ to $Y$ and $g$ from $Y$ to $Z$, one sets $g\circ f=\text{pr}_{XZ*}(\text{pr}_{XY}^{*}f\cdot\text{pr}_{YZ}^{*}g)$, and it is this composition that will provide our category of motives with its morphisms.

As we have learned in Algebraic Cycles and Intersection Theory, whenever we are dealing with algebraic cycles, it is often useful to consider them only up to some equivalence relation. In the aforementioned post we introduced the notion of rational equivalence. This time we consider also homological equivalence and numerical equivalence between algebraic cycles. We say that two algebraic cycles $Z_{1}$ and $Z_{2}$ are homologically equivalent if they have the same image under the cycle map, and we say that they are numerically equivalent if the intersection numbers $Z_{1}\cdot Z$ and $Z_{2}\cdot Z$ are equal for all $Z$ of complementary dimension. There are other such equivalence relations on algebraic cycles, but in this post we will mostly be using rational equivalence, homological equivalence, and numerical equivalence.
Since correspondences are algebraic cycles, we often consider them only up to these equivalence relations, and denote the quotient group we obtain by $\text{Corr}_{\sim}^{r}(X,Y)$, where $\sim$ is the equivalence relation imposed; for example, for numerical equivalence we write $\text{Corr}_{\text{num}}^{r}(X,Y)$. Taking the tensor product of the abelian group $\text{Corr}_{\sim}^{r}(X,Y)$ with the rational numbers $\mathbb{Q}$, we obtain the vector space

$\displaystyle \text{Corr}_{\sim}^{r}(X,Y)_{\mathbb{Q}}=\text{Corr}_{\sim}^{r}(X,Y)\otimes_{\mathbb{Z}}\mathbb{Q}$

To obtain something closer to an abelian category (more precisely, we will obtain what is known as a pseudo-abelian category, but in the case where the equivalence relation is numerical equivalence, we will actually obtain an abelian category), we need to consider “projectors”: correspondences $p$ of degree $0$ from a variety $X$ to itself such that $p^{2}=p$. So now we form a category whose objects are $h(X,p)$, for a variety $X$ and projector $p$, and whose morphisms are given by

$\displaystyle \text{Hom}(h(X,p),h(Y,q))=q\circ\text{Corr}_{\sim}^{0}(X,Y)_{\mathbb{Q}}\circ p$.

We call this category the category of pure effective motives, and denote it by $\mathcal{M}_{\sim}^{\text{eff}}(k)$. The process described above is also known as passing to the pseudo-abelian (or Karoubian) envelope.

We write $h^{i}(X,p)$ for the objects of $\mathcal{M}_{\sim}^{\text{eff}}(k)$ that map to $H^{i}(X)$. In the case that $X$ is the projective line $\mathbb{P}^{1}$, and $p$ is the diagonal $\Delta_{\mathbb{P}^{1}}$, we find that

$h(\mathbb{P}^{1},\Delta_{\mathbb{P}^{1}})=h^{0}\mathbb{P}^{1}\oplus h^{2}\mathbb{P}^{1}$

which can also be rewritten as

$\displaystyle h(\mathbb{P}^{1},\Delta_{\mathbb{P}^{1}})=\mathbb{I}\oplus\mathbb{L}$

where $\mathbb{I}$ is the image of a point in the category of pure effective motives, and $\mathbb{L}$ is known as the Lefschetz motive. It is also denoted by $\mathbb{Q}(-1)$. The above decomposition corresponds to the projective line $\mathbb{P}^{1}$ being a union of the affine line $\mathbb{A}^{1}$ and a “point at infinity”, which we may denote by $\mathbb{A}^{0}$:

$\displaystyle \mathbb{P}^{1}=\mathbb{A}^{0}\cup\mathbb{A}^{1}$

More generally, we have

$\displaystyle h(\mathbb{P}^{n},\Delta_{\mathbb{P}^{n}})=\mathbb{I}\oplus\mathbb{L}\oplus...\oplus\mathbb{L}^{n}$

corresponding to

$\displaystyle \mathbb{P}^{n}=\mathbb{A}^{0}\cup\mathbb{A}^{1}\cup...\cup\mathbb{A}^{n}$.

The category of pure effective motives is an example of a tensor category. This means it has a bifunctor $\otimes: \mathcal{M}_{\sim}^{\text{eff}}\times\mathcal{M}_{\sim}^{\text{eff}}\rightarrow\mathcal{M}_{\sim}^{\text{eff}}$ which generalizes the usual notion of a tensor product, and in this particular case it is given by taking the product of two varieties. We can ask for more, however, and construct a category of motives which is not just a tensor category but a rigid tensor category, which provides us with a notion of duals. By formally inverting the Lefschetz motive (the formal inverse of the Lefschetz motive is then known as the Tate motive, and is denoted by $\mathbb{Q}(1)$), we can obtain this rigid tensor category, whose objects are triples $h(X,p,m)$, where $X$ is a variety, $p$ is a projector, and $m$ is an integer. The morphisms of this category are given by

$\displaystyle \text{Hom}(h(X,p,n),h(Y,q,m))=q\circ\text{Corr}_{\sim}^{n-m}(X,Y)_{\mathbb{Q}}\circ p$.
This category is called the category of pure motives, and is denoted by $\mathcal{M}_{\sim}(k)$. The category $\mathcal{M}_{\text{rat}}(k)$ is called the category of Chow motives, while the category $\mathcal{M}_{\text{num}}(k)$ is called the category of Grothendieck (or numerical) motives.

The category of Chow motives has the advantage that it is known to be “universal”, in the sense that every Weil cohomology theory factors through it, as discussed earlier; however, in general it is not even abelian, which is a desirable property we would like our category of motives to have. Meanwhile, the category of Grothendieck motives is known to be abelian, but it is not yet known if it is universal. If the so-called “standard conjectures on algebraic cycles”, which we will enumerate below, are proved, then the category of Grothendieck motives will be known to be universal.

We have seen that the category of pure motives forms a rigid tensor category. Closely related to this concept, and of interest to us, is the notion of a Tannakian category. More precisely, a Tannakian category is a $k$-linear rigid tensor category with an exact faithful functor (called a fiber functor) to the category of finite-dimensional vector spaces over some field extension $K$ of $k$.

One of the things that makes Tannakian categories interesting is that there is an equivalence of categories between a Tannakian category $\mathcal{C}$ and the category $\text{Rep}_{G}$ of finite-dimensional linear representations of the group $G$ of automorphisms of its fiber functor, which is also known as the Tannakian Galois group, or, if the Tannakian category is a “category of motives” of some sort, the motivic Galois group. This aspect of Tannakian categories may be thought of as a higher-dimensional analogue of the classical theory of Galois groups, which can be stated as an equivalence of categories between the category of finite separable field extensions of a field $k$ and the category of finite sets equipped with an action of the Galois group $\text{Gal}(\bar{k}/k)$, where $\bar{k}$ is the algebraic closure of $k$.

So we see that being a Tannakian category is yet another desirable property that we would like our category of motives to have. For this, not only do we have to tweak the tensor product structure of our category, we also need certain conjectural properties to hold. These are the same conjectures we have hinted at earlier, called the standard conjectures on algebraic cycles, formulated by Alexander Grothendieck at around the same time he initially developed the theory of motives. These conjectures have some very important consequences in algebraic geometry, and while they remain unproved to this day, the search for their proof (or disproof) is an important part of modern mathematical research on the theory of motives. They are the following:

(A) (Standard conjecture of Lefschetz type) For $i\leq n$, the operator $\Lambda$ defined by

$\displaystyle \Lambda=(\mathcal{L}^{n-i+2})^{-1}\circ\mathcal{L}\circ (\mathcal{L}^{n-i}):H^{i}\rightarrow H^{i-2}$

$\displaystyle \Lambda=(\mathcal{L}^{n-i})\circ\mathcal{L}\circ (\mathcal{L}^{n-i+2})^{-1}:H^{2n-i+2}\rightarrow H^{2n-i}$

is induced by algebraic cycles.

(B) (Standard conjecture of Hodge type) For all $i\leq n/2$, the pairing

$\displaystyle x,y\mapsto (-1)^{i}(\mathcal{L}x\cdot y)$

is positive definite.

(C) (Standard conjecture of Kunneth type) The projectors $H^{*}(X)\rightarrow H^{i}(X)$ are induced by algebraic cycles in $X\times X$ with rational coefficients.
For this, not only do we have to tweak the tensor product structure of our category; we also need certain conjectural properties to hold. These are the same conjectures we have hinted at earlier, called the standard conjectures on algebraic cycles, formulated by Alexander Grothendieck at around the same time he initially developed the theory of motives. These conjectures have some very important consequences in algebraic geometry, and while they remain unproved to this day, the search for their proof (or disproof) is an important part of modern mathematical research on the theory of motives. They are the following (here $\mathcal{L}$ denotes the Lefschetz operator, given by intersecting with the class of a hyperplane section, and $n$ is the dimension of the variety):

(A) (Standard conjecture of Lefschetz type) For $i\leq n$, the operator $\Lambda$ defined by

$\displaystyle \Lambda=(\mathcal{L}^{n-i+2})^{-1}\circ\mathcal{L}\circ (\mathcal{L}^{n-i}):H^{i}\rightarrow H^{i-2}$

$\displaystyle \Lambda=(\mathcal{L}^{n-i})\circ\mathcal{L}\circ (\mathcal{L}^{n-i+2})^{-1}:H^{2n-i+2}\rightarrow H^{2n-i}$

is induced by algebraic cycles.

(B) (Standard conjecture of Hodge type) For all $i\leq n/2$, the pairing

$\displaystyle x,y\mapsto (-1)^{i}\langle\mathcal{L}^{n-2i}x\cdot y\rangle$

is positive definite on primitive classes.

(C) (Standard conjecture of Kunneth type) The projectors $H^{*}(X)\rightarrow H^{i}(X)$ are induced by algebraic cycles in $X\times X$ with rational coefficients. This implies the following decomposition of the diagonal:

$\displaystyle \Delta_{X}=\pi_{0}+...+\pi_{2n}$

which in turn implies the decomposition

$\displaystyle h(X,\Delta_{X},0)=h(X,\pi_{0},0)\oplus...\oplus h(X,\pi_{2n},0)$

which, writing $h(X,\Delta_{X},0)$ as $hX$ and $h(X,\pi_{i},0)$ as $h^{i}(X)$, we can also compactly and suggestively write as

$\displaystyle hX=h^{0}X\oplus...\oplus h^{2n}X$.

In other words, every object $hX=h(X,\Delta_{X},0)$ of our “category of motives” decomposes into graded “pieces” $h^{i}(X)=h(X,\pi_{i},0)$ of pure “weight” $i$. We have already seen earlier that this is indeed the case when $X=\mathbb{P}^{n}$. We will need this conjecture to hold if we want our category to be a Tannakian category.

(D) (Standard conjecture on numerical equivalence and homological equivalence) If an algebraic cycle is numerically equivalent to zero, then its cohomology class is zero. If the category of Grothendieck motives is to be “universal”, so that every Weil cohomology theory factors through it, this conjecture must be satisfied.

In Algebraic Cycles and Intersection Theory and Some Useful Links on the Hodge Conjecture, Kahler Manifolds, and Complex Algebraic Geometry, we have made mention of the two famous conjectures in algebraic geometry known as the Hodge conjecture and the Tate conjecture. In fact, these two closely related conjectures can be phrased in the language of motives as the statements that the realization functors from the category of motives to the category of pure Hodge structures and to the category of continuous $l$-adic representations of $\text{Gal}(\bar{k}/k)$, respectively, are fully faithful. These conjectures are closely related to the standard conjectures on algebraic cycles as well.

We have now constructed the category of pure motives, for smooth projective varieties. For more general varieties and schemes, there is an analogous idea of “mixed motives”, which at the moment remains conjectural, although there exist several related constructions which are the closest thing we currently have to such a theory. If we want to construct a theory of mixed motives, instead of Weil cohomology theories we must instead consider what are known as “mixed Weil cohomology theories”, which are expected to have the following properties:

(1) (Homotopy invariance) The projection $\pi:X\times\mathbb{A}^{1}\rightarrow X$ induces an isomorphism

$\displaystyle \pi^{*}:H^{*}(X)\xrightarrow{\cong}H^{*}(X\times\mathbb{A}^{1})$

(2) (Mayer-Vietoris sequence) If $U$ and $V$ are open subsets covering $X$, then there is a long exact sequence

$\displaystyle ...\rightarrow H^{i-1}(U\cap V)\rightarrow H^{i}(X)\rightarrow H^{i}(U)\oplus H^{i}(V)\rightarrow H^{i}(U\cap V)\rightarrow H^{i+1}(X)\rightarrow...$

(3) (Duality) There is a duality between cohomology $H^{*}$ and cohomology with compact support $H_{c}^{*}$.

(4) (Kunneth formula) This is the same axiom as the one in the case of pure motives.

We would like a category of mixed motives, which serves as an analogue to the category of pure motives in that all mixed Weil cohomology theories factor through it, but as mentioned earlier, no such category exists at the moment. However, the mathematicians Annette Huber-Klawitter, Masaki Hanamura, Marc Levine, and Vladimir Voevodsky have constructed different versions of a triangulated category of mixed motives, denoted $\mathcal{DM}(k)$.
A triangulated category $\mathcal{T}$ is an additive category with an automorphism $T: \mathcal{T}\rightarrow\mathcal{T}$ called the “shift functor” (we will also denote $T(X)$ by $X[1]$, and $T^{n}(X)$ by $X[n]$, for $n\in\mathbb{Z}$) and a family of “distinguished triangles”

$\displaystyle X\rightarrow Y\rightarrow Z\rightarrow X[1]$

which satisfies the following axioms:

(1) For any object $X$ of $\mathcal{T}$, the triangle $X\xrightarrow{\text{id}}X\rightarrow 0\rightarrow X[1]$ is a distinguished triangle.

(2) For any morphism $u:X\rightarrow Y$ of $\mathcal{T}$, there is an object $Z$ of $\mathcal{T}$ such that $X\xrightarrow{u}Y\rightarrow Z\rightarrow X[1]$ is a distinguished triangle.

(3) Any triangle isomorphic to a distinguished triangle is a distinguished triangle.

(4) If $X\rightarrow Y\rightarrow Z\rightarrow X[1]$ is a distinguished triangle, then the two “rotations” $Y\rightarrow Z\rightarrow X[1]\rightarrow Y[1]$ and $Z[-1]\rightarrow X\rightarrow Y\rightarrow Z$ are also distinguished triangles.

(5) Given two distinguished triangles $X\xrightarrow{u}Y\xrightarrow{v}Z\xrightarrow{w}X[1]$ and $X'\xrightarrow{u'}Y'\xrightarrow{v'}Z'\xrightarrow{w'}X'[1]$ and morphisms $f:X\rightarrow X'$ and $g:Y\rightarrow Y'$ such that the first square “commutes”, i.e. $u'\circ f=g\circ u$, there exists a morphism $h:Z\rightarrow Z'$ such that all the other squares commute.

(6) Given three distinguished triangles $X\xrightarrow{u}Y\xrightarrow{j}Z'\xrightarrow{k}X[1]$, $Y\xrightarrow{v}Z\xrightarrow{l}X'\xrightarrow{i}Y[1]$, and $X\xrightarrow{v\circ u}Z\xrightarrow{m}Y'\xrightarrow{n}X[1]$, there exists a distinguished triangle $Z'\xrightarrow{f}Y'\xrightarrow{g}X'\xrightarrow{h}Z'[1]$ such that “everything commutes”.

A $t$-structure on a triangulated category $\mathcal{T}$ is made up of two full subcategories $\mathcal{T}^{\geq 0}$ and $\mathcal{T}^{\leq 0}$ satisfying the following properties (writing $\mathcal{T}^{\leq n}$ and $\mathcal{T}^{\geq n}$ to denote $\mathcal{T}^{\leq 0}[-n]$ and $\mathcal{T}^{\geq 0}[-n]$ respectively):

(1) $\mathcal{T}^{\leq -1}\subset \mathcal{T}^{\leq 0}$ and $\mathcal{T}^{\geq 1}\subset \mathcal{T}^{\geq 0}$

(2) $\text{Hom}(X,Y)=0$ for any object $X$ of $\mathcal{T}^{\leq 0}$ and any object $Y$ of $\mathcal{T}^{\geq 1}$

(3) for any object $Y$ of $\mathcal{T}$ we have a distinguished triangle

$\displaystyle X\rightarrow Y\rightarrow Z\rightarrow X[1]$

where $X$ is an object of $\mathcal{T}^{\leq 0}$ and $Z$ is an object of $\mathcal{T}^{\geq 1}$.

The full subcategory $\mathcal{T}^{0}=\mathcal{T}^{\leq 0}\cap\mathcal{T}^{\geq 0}$ is called the heart of the $t$-structure, and it is an abelian category. It is conjectured that the category of mixed motives $\mathcal{MM}(k)$ is the heart of a $t$-structure on the triangulated category of mixed motives $\mathcal{DM}(k)$.
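The archetypal example of a $t$-structure, included here for orientation, is the standard one on the derived category $D(\mathcal{A})$ of an abelian category $\mathcal{A}$:

$\displaystyle D^{\leq 0}=\{A^{\bullet}\mid H^{i}(A^{\bullet})=0 \text{ for } i>0\},\qquad D^{\geq 0}=\{A^{\bullet}\mid H^{i}(A^{\bullet})=0 \text{ for } i<0\}$

Its heart consists of the complexes with cohomology concentrated in degree $0$, and this heart is equivalent to $\mathcal{A}$ itself; this is the sense in which a $t$-structure lets one recover an abelian category sitting inside a triangulated one.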
Voevodsky’s construction proceeds in a manner somewhat analogous to the construction of the category of pure motives as above, starting with schemes (say, over a field $k$, although a more general base scheme may be used) as objects and correspondences as morphisms, but then makes use of concepts from abstract homotopy theory, such as taking the homotopy category of bounded complexes, and localization with respect to a certain subcategory, before passing to the pseudo-abelian envelope and then formally inverting the Tate object $\mathbb{Z}(1)$. The triangulated category obtained is called the category of geometric motives, and is denoted by $\mathcal{DM}_{\text{gm}}(k)$. The schemes and correspondences involved in the construction of $\mathcal{DM}_{\text{gm}}(k)$ are required to satisfy certain properties which eliminate the need to consider the equivalence relations that form a large part of the study of the category of pure motives.

Closely related to the triangulated category of mixed motives is motivic cohomology, which is defined in terms of the former as

$\displaystyle H^{i}(X,\mathbb{Z}(m))=\text{Hom}_{\mathcal{DM}(k)}(X,\mathbb{Z}(m)[i])$

where $\mathbb{Z}(m)$ is the tensor product of $m$ copies of the Tate object $\mathbb{Z}(1)$, and the notation $\mathbb{Z}(m)[i]$ tells us that the shift functor of the triangulated category is applied to the object $\mathbb{Z}(m)$ $i$ times. Motivic cohomology is related to the Chow group, which we have introduced in Algebraic Cycles and Intersection Theory, and also to algebraic K-theory, which is another way by which the ideas of homotopy theory are applied to more general areas of abstract algebra and linear algebra. These ideas were used by Voevodsky to prove several related theorems, from the Milnor conjecture to its generalization, the Bloch-Kato conjecture (also known as the norm residue isomorphism theorem).

Historically, one of the motivations for Grothendieck’s attempt to obtain a universal cohomology theory was to prove the Weil conjectures, which are a higher-dimensional analogue of the Riemann hypothesis for curves over finite fields first proved by Andre Weil himself (see The Riemann Hypothesis for Curves over Finite Fields). In fact, if the standard conjectures on algebraic cycles are proved, then a proof of the Weil conjectures would follow via an approach that closely mirrors Weil’s original proof (since cohomology provides a Lefschetz fixed-point formula – we have mentioned in The Riemann Hypothesis for Curves over Finite Fields that the study of fixed points is an important part of Weil’s proof). The last of the Weil conjectures was eventually proved by Grothendieck’s student Pierre Deligne, but via a different approach that bypassed the standard conjectures. A proof of the standard conjectures, which would lead to a perhaps more elegant proof of the Weil conjectures, is still being pursued to this day.

The theory of motives is not only related to analogues of the Riemann hypothesis, which concerns the location of the zeroes of L-functions, but to L-functions in general. For instance, it is also related to the Langlands program, which concerns another aspect of L-functions, namely their analytic continuation and functional equation, and to the Birch and Swinnerton-Dyer conjecture, which concerns their values at special points. We recall from The Riemann Hypothesis for Curves over Finite Fields that the Frobenius morphism played an important part in counting the points of a curve over a finite field, which in turn we needed to define the zeta function (of which the L-function can be thought of as a generalization) of the curve. The Frobenius morphism is an element of the Galois group, and we recall that a category of motives which is a Tannakian category is equivalent to the category of representations of its motivic Galois group. Therefore we can see how “motivic L-functions” can be defined using the theory of motives. As L-functions occupy a central place in many areas of modern mathematics, the theory of motives promises much to be gained from its study, if only we could make progress in deciphering the many mysteries that surround it, of which we have only scratched the surface in this post.
The applications of motives are not limited to L-functions either – the study of periods, which relate Betti cohomology and de Rham cohomology, and lead to transcendental numbers which can be defined using only algebraic concepts, is also strongly connected to the theory of motives. Recent work by the mathematicians Alain Connes and Matilde Marcolli has also suggested applications to physics, particularly in relation to Feynman diagrams in quantum field theory. There is also another generalization of the theory of motives, developed by Maxim Kontsevich, in the context of noncommutative geometry.

References:

- Weil Cohomology Theory on Wikipedia
- Motive on Wikipedia
- Standard Conjectures on Algebraic Cycles on Wikipedia
- Motive on nLab
- Pure Motive on nLab
- Mixed Motive on nLab
- The Tate Conjecture over Finite Fields on Hard Arithmetic
- What is... a Motive? by Barry Mazur
- Motives – Grothendieck's Dream by James S. Milne
- Noncommutative Geometry, Quantum Fields, and Motives by Alain Connes and Matilde Marcolli
- Algebraic Cycles and the Weil Conjectures by Steven L. Kleiman
- The Standard Conjectures by Steven L. Kleiman
- Feynman Motives by Matilde Marcolli
- Une Introduction aux Motifs (Motifs Purs, Motifs Mixtes, Periodes) by Yves Andre
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 688, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9739530086517334, "perplexity": 159.46382797812797}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606226.29/warc/CC-MAIN-20200121222429-20200122011429-00141.warc.gz"}
https://ai.stackexchange.com/questions/6686/what-does-it-mean-derivative-of-an-image/6695
# What does it mean “derivative of an image”?

I am reading a book about OpenCV, and it speaks about some derivatives of images, like Sobel. I am confused about the image derivative! What is it derived from? How can we take a derivative of an image? I know we consider an image (1-channel) as an n*m matrix with intensity values from 0 to 255. How can we differentiate this matrix?

EDIT: a piece of text of the book:

One of the most basic and important convolutions is computing derivatives (or approximations to them). There are many ways to do this, but only a few are well suited to a given situation. In general, the most common operator used to represent differentiation is the Sobel derivative operator. Sobel operators exist for any order of derivative as well as for mixed partial derivatives (e.g., ∂²/∂x∂y).

Imagine a line laid through the image. All pixels along the line count as values, so you can graph the pixels along the line like a function. The derivative is of that 'function'. A black picture and a white picture have the same derivative (0), but a black-fading-to-grey image would have a constant derivative bigger or smaller than zero, depending on the direction of the line in relation to the fading. Hard contrasts have huge derivatives at the points where the line crosses a white/black border. Usually the rows and columns are used as the lines, but you could also lay any oblique line, and some algorithms do. The term 'derivative' is somewhat a misnomer in this case, as usually the pixel values do not get fitted by a function of which a derivative is then taken; the 'derivative' is taken directly by looking at the differences from one pixel to its neighbor. There is a thread on dsp.stackexchange that deals with this; the following illustrative picture is from there:

• Thank you bukwyrm! So you mean for a 100*200 image, Sobel in the x-direction plots 100 functions, one for each row of the image, then takes the derivative of each row, and so on? And also does the same for each column in the y-direction? If I got it right, do you know what the next step of the algorithm is (just like to know)? – Hasani Jun 8 '18 at 15:22
• @Hasani I edited the answer, hopefully addressing your remarks. – bukwyrm Jun 11 '18 at 4:45
• Applying the word DERIVATIVE to the output of a process to characterize image change is NOT a misnomer. Rather, post(s) in the dsp.stackexchange thread are apparently incorrectly using the term. The diagram and description you provided describe what is called a DIFFERENCE SEQUENCE. It is, as you say, applied to either ORTHOGONAL or DIAGONAL slices. It was a design component used by early 1D bar code readers. DERIVATIVE matrices (x-y) or cubes (x-y-time) can be developed using Hamming, trapezoidal, or other windowing along with 2/3D splines or FFTs. (Not all DSP programs are well designed.) – FauChristian Jun 18 '18 at 2:20
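To make the neighbor-difference idea above concrete, here is a minimal sketch in Python/NumPy (my own illustration, not from the book or the thread; the 3×3 kernel is the standard Sobel x-kernel, applied here by a naive 'valid' cross-correlation, which is how image libraries usually apply filters):

```python
import numpy as np

# A tiny 1-channel "image": black on the left, fading to white on the right.
img = np.tile(np.linspace(0, 255, 8), (8, 1))

# Simplest "derivative": difference between each pixel and its right neighbor.
dx = img[:, 1:] - img[:, :-1]   # constant positive values for a left-to-right fade

# The Sobel x-kernel combines this horizontal difference with vertical smoothing.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])

def filter2d(image, kernel):
    """Naive 'valid' cross-correlation (kernel not flipped), enough for this demo."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

print(dx[0])                    # uniform differences along a row
print(filter2d(img, sobel_x))   # uniform response to the horizontal gradient
```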
The term Derivative of an Image in the context you mention has two meanings.

1. A matrix, image, or floating point number that is derived from an image via convolution, passing the image through a two-dimensional NN, the application of an FFT analysis, or some other process. In this context, the word Derivative implies the direction of calculation: Image B is derived from image A.

2. A matrix or cube that represents the rate of change in the image being processed. The change can be measured between only two adjacent pixels, in a single dimension and only one direction at a time, but the applications of this technique are very limited, and such a sequence is one of differences, not a particularly good approximation of the derivative of the light intensity. What is more useful in real recognition systems is two-dimensional or hexagonal windowing (Gaussian, Hamming, Hanning, trapezoidal, cosine, ...) across space and, for video, through time. The calculus term derivative should always reference the theoretical surface being approximated using these techniques, not the discrete matrix or cube that approximates the surface. Such multidimensional convolution and neural network based approaches are less sensitive to capture noise and orientation nuances.

Two-dimensional whole-image or windowed FFT techniques have met with much success because filtering the expected frequency range of the features to be detected is merely an attenuation process. Two- and three-dimensional splines can also be tuned to be useful in the detection of features in an orientation-independent way. In addition to gray scale analysis, color and transparency channels can be selected for independent or parallel analysis, or added to the dimension of the fitting model from which the derivative is taken. Advances in deep networks have blossomed into a new area of image processing and recognition research, bringing new hope to robotics, automated transportation, and cybernetics in general.

• Thank you FauChristian, I updated my question; maybe it clarifies what exactly I asked. – Hasani Jun 9 '18 at 7:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7680535316467285, "perplexity": 868.3557118928192}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655881763.20/warc/CC-MAIN-20200706160424-20200706190424-00031.warc.gz"}
https://programmer.group/matlab-general-simulation-2-notes.html
# matlab general simulation 2 (notes)

#### [example 2.1] Simulate the input and output waveforms of an amplitude modulation system.

Suppose the modulating signal is a cosine wave with an amplitude of 2 V and a frequency of 1000 Hz, the modulation index is 0.5, and the carrier is a cosine wave with an amplitude of 5 V and a frequency of 10 kHz. All cosine waves have an initial phase of 0.

```matlab
% ch2example1prg1.m
dt = 1e-5;                         % simulation sampling interval
T = 3*1e-3;                        % simulation end time
t = 0:dt:T;
input = 2*cos(2*pi*1000*t);        % modulating signal
carrier = 5*cos(2*pi*1e4*t);       % carrier
output = (2 + 0.5*input).*carrier; % modulated (AM) output
% Plot the input signal, the carrier, and the modulated output
subplot(4,1,1); plot(t,input);   xlabel('time t'); ylabel('Modulating signal');
subplot(4,1,2); plot(t,carrier); xlabel('time t'); ylabel('Carrier');
subplot(4,1,3); plot(t,output);  xlabel('time t'); ylabel('AM output');
% Noise-free envelope detection to recover the original signal
y = hilbert(output);
am = (abs(y)/5 - 2)*2;
subplot(4,1,4); plot(t,am); xlabel('time t'); ylabel('Envelope detection'); axis([0 0.003 -2 2]);
```

To mimic the AM waveform as displayed by a real oscilloscope, the next program deliberately sets the frequency of the modulating signal to 1005 Hz, so that the signal period is not an integer multiple of the simulation frame period. When run, you will see a "continuously" sliding modulated signal; the carrier shows phase jitter, and the received signal is contaminated with noise.

```matlab
% ch2example1prg3.m
dt = 1e-6;              % simulation sampling interval
T = 2*1e-3;             % frame period of the simulation
for N = 0:500           % total number of simulated frames
    t = N*T + (0:dt:T); % sampling times within the frame
    input = 2*cos(2*pi*1005*t); % modulating signal; change 1005 to 1000 and the sliding stops
    carrier = 5*cos(2*pi*(1e4)*t + 0.1*randn); % carrier with phase jitter
    output = (2 + 0.5*input).*carrier;         % modulated output
    noise = randn(size(t));                    % noise
    r = output + noise;  % modulated signal after the additive-noise channel
    % Plot the input signal, the carrier, and the received signal
    subplot(3,1,1); plot([0:dt:T],input); xlabel('time t');
    ylabel('Modulating signal'); text(T*2/3,1.5,['Current frame: N=',num2str(N)]);
    subplot(3,1,2); plot([0:dt:T],carrier); xlabel('time t'); ylabel('Carrier');
    subplot(3,1,3); plot([0:dt:T],r); xlabel('time t'); ylabel('AM output');
    set(gcf,'DoubleBuffer','on'); % double buffering to avoid drawing flicker
    drawnow;
end
```
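For readers without MATLAB, here is a rough Python/NumPy equivalent of the envelope-detection step in ch2example1prg1.m (my own sketch, assuming scipy is available; it is not part of the original tutorial):

```python
import numpy as np
from scipy.signal import hilbert

dt = 1e-5                                   # sampling interval, as in the MATLAB example
t = np.arange(0, 3e-3, dt)
m = 2 * np.cos(2 * np.pi * 1000 * t)        # modulating signal, 2 V at 1 kHz
carrier = 5 * np.cos(2 * np.pi * 1e4 * t)   # carrier, 5 V at 10 kHz
am = (2 + 0.5 * m) * carrier                # AM output

envelope = np.abs(hilbert(am))              # magnitude of the analytic signal
recovered = (envelope / 5 - 2) * 2          # undo the carrier gain and DC offset

# Close to zero away from the window edges (hilbert has edge effects):
print(np.max(np.abs(recovered[50:-50] - m[50:-50])))
```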
#### [example 2.2] Simulate the charging process of a capacitor.

A voltage source charges a capacitor through a resistor connected in series with it. Let t = 0 be the initial time (before the initial time, the circuit is disconnected and does not work), let the output voltage x(t) of the voltage source be the unit step function, the voltage across the capacitor be y(t), and the circuit current be i(t); the voltage source is regarded as the system input, and the voltage on the capacitor is regarded as the system output. The initial state of the circuit is y(0). Figure 2.3 shows the circuit diagram and the equivalent system model.

Setting up the circuit equations:

$$\begin{aligned} y(t) &= x(t) - R\,i(t) \\ i(t) &= C\frac{\mathrm{d}y(t)}{\mathrm{d}t} \end{aligned}$$

Simplifying gives:

$$\frac{\mathrm{d}y(t)}{\mathrm{d}t} = \frac{1}{RC}x(t) - \frac{1}{RC}y(t)$$

Substituting $\mathrm{d}y(t) = y(t+\mathrm{d}t) - y(t)$ into the above formula, we get:

$$y(t+\mathrm{d}t) = y(t) + \frac{1}{RC}x(t)\,\mathrm{d}t - \frac{1}{RC}y(t)\,\mathrm{d}t$$

This yields the Euler algorithm for the differential equation, with step size $\Delta \approx \mathrm{d}t$:

$$y(t+\Delta) \approx y(t) + \frac{1}{RC}x(t)\,\Delta - \frac{1}{RC}y(t)\,\Delta$$

```matlab
% ch2example2prg1.m
dt = 1e-5;   % simulation sampling interval
R = 1e3;     % resistance value
C = 1e-6;    % capacitance
T = 5*1e-3;  % simulate the interval from -T to +T
t = -T:dt:T; % discrete time series
y(1) = 0;    % initial capacitor voltage; it stays constant while time is below zero
% To simulate the zero-input response, set y(1) to a nonzero value instead.
% ---- Input signal: choose zero input, step, sinusoid, square wave, etc. ----
x = zeros(size(t));  % initialize the input signal storage matrix
x = 1*(t>=0);        % the input jumps to 1 at t = 0, i.e. a step signal
% To simulate the zero-input response, set x = 0 here instead.
% x = sin(2*pi*1000*t).*(t>=0);   % 1000 Hz sine signal starting at t = 0
% x = square(2*pi*500*t).*(t>=0); % 500 Hz square wave starting at t = 0
% Simulation loop. Note: the circuit does not work before time zero, so the state is unchanged.
for k = 1:length(t)
    time = -T + k*dt;
    if time >= 0
        y(k+1) = y(k) + 1./(R*C)*(x(k)-y(k))*dt; % recursively solve the next state value
    else
        y(k+1) = y(k); % the circuit is disconnected when time is below zero
    end
end
subplot(2,1,1); plot(t,x(1:length(t))); axis([-T T -1.1 1.1]); xlabel('t'); ylabel('input');
subplot(2,1,2); plot(t,y(1:length(t))); axis([-T T -1.1 1.1]); xlabel('t'); ylabel('output');
```
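As a quick cross-check of the Euler recursion (my own sketch, not from the tutorial): for a unit-step input the exact solution is $y(t) = 1 - e^{-t/RC}$, and the recursion should approach it as the step size shrinks.

```python
import math

R, C, dt = 1e3, 1e-6, 1e-5
RC = R * C

y = 0.0
t = 0.0
for _ in range(500):          # integrate the step response out to 5 ms (5 time constants)
    y += (1.0 - y) * dt / RC  # Euler step: dy = (x - y) dt / RC, with x = 1
    t += dt

exact = 1.0 - math.exp(-t / RC)
print(y, exact)  # the two agree to within the O(dt) Euler discretization error
```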
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.987456738948822, "perplexity": 6395.879044714882}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250594603.8/warc/CC-MAIN-20200119122744-20200119150744-00396.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/elementary-and-intermediate-algebra-concepts-and-applications-6th-edition/chapter-1-introduction-to-algebraic-expressions-1-7-multiplication-and-division-of-real-numbers-1-7-exercise-set-page-58/96
## Elementary and Intermediate Algebra: Concepts & Applications (6th Edition)

$-\dfrac{10}{17}$

To get the reciprocal of a number, interchange the numerator and the denominator. As a fraction, the given expression is $-1.7=-\dfrac{17}{10}$, so its reciprocal is

\begin{array}{l} -\dfrac{10}{17} .\end{array}

As a check, $\left(-\dfrac{17}{10}\right)\left(-\dfrac{10}{17}\right)=1$.
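For readers who like to verify such arithmetic mechanically, here is a tiny sketch using Python's fractions module (an illustration, not part of the textbook solution):

```python
from fractions import Fraction

x = Fraction(-17, 10)  # -1.7 as an exact fraction
print(1 / x)           # -10/17, the reciprocal
print(x * (1 / x))     # 1, confirming the reciprocal property
```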
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9962955713272095, "perplexity": 3171.013032045462}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514570830.42/warc/CC-MAIN-20190915072355-20190915094355-00163.warc.gz"}
https://link.springer.com/chapter/10.1007/978-3-319-18827-0_64
# Schwarz Waveform Relaxation for a Class of Non-dissipative Problems

Conference paper

Part of the Lecture Notes in Computational Science and Engineering book series (LNCSE, volume 104)

## Abstract

In this paper, we introduce the results for the Schwarz waveform relaxation (SWR) algorithm applied to a class of non-dissipative reaction diffusion equations. Both the Dirichlet and Robin transmission conditions (TCs) are considered. For the Dirichlet TC, we consider the algorithm for the nonlinear problem $$\partial _{t}u =\mu \partial _{xx}u + f(u)$$, in the case of many subdomains.

## Notes

### Acknowledgement

The author is very grateful to Prof. Martin J. Gander, for his funding of the DD22 conference, his careful reading and revision of this paper, and his professional instruction in many fields. This work was supported by the NSF of Science & Technology of Sichuan Province (2014JQ0035), the project of the Key Laboratory of Cambridge and Non-Destructive Inspection of Sichuan Institutes of Higher Education (2013QZY01) and the NSF of China (11301362, 11371157, 91130003).

### References

1. D. Bennequin, M.J. Gander, L. Halpern, A homographic best approximation problem with application to optimized Schwarz waveform relaxation. Math. Comput. 78, 185–223 (2009)
2. F. Caetano, M.J. Gander, L. Halpern, J. Szeftel, Schwarz waveform relaxation algorithms for semilinear reaction-diffusion equations. Netw. Heterog. Media 5, 487–505 (2010)
3. M.J. Gander, A waveform relaxation algorithm with overlapping splitting for reaction diffusion equations. Numer. Linear Algebra Appl. 6, 125–145 (1998)
4. M.J. Gander, L. Halpern, Optimized Schwarz waveform relaxation for advection reaction diffusion problems. SIAM J. Numer. Anal. 45, 666–697 (2007)
5. M.J. Gander, A.M. Stuart, Space-time continuous analysis of waveform relaxation for the heat equation. SIAM J. Sci. Comput. 19, 2014–2031 (1998)
6. L. Halpern, Absorbing boundary conditions and optimized Schwarz waveform relaxation. BIT Numer. Math. 46, 21–34 (2006)
7. S.L. Wu, Convergence analysis of the Schwarz waveform relaxation algorithms for a class of non-dissipative problems. Manuscript (2014)
8. S.L. Wu, C.M. Huang, T.Z. Huang, Convergence analysis of the overlapping Schwarz waveform relaxation algorithm for reaction-diffusion equations with time delay. IMA J. Numer. Anal. 32, 632–671 (2012)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46780022978782654, "perplexity": 3931.2691278127995}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812405.3/warc/CC-MAIN-20180219052241-20180219072241-00701.warc.gz"}
https://outofthenormmaths.wordpress.com/2011/12/06/intransitive-votes/
In the previous post I gave an example of students deciding who best guessed some lecturers' ages. I chose the numbers carefully so that under three reasonable methods of measuring:

*(Table: Method, First place, Second place, Third place.)*

This is actually almost identical to Condorcet's voting paradox:

| Voter | First preference | Second preference | Third preference |
| --- | --- | --- | --- |
| Voter 1 | A | B | C |
| Voter 2 | B | C | A |
| Voter 3 | C | A | B |

If three people in an election vote for candidates A, B, and C this way, then even a method that takes account of all the preferences, as in the Condorcet voting systems, leads to a deadlock. The Condorcet methods are fair systems of voting, which generally have better properties than the Alternative Vote (AV) system, though they are more complicated to count (but equally simple for the voters, who just list their preferences in order). In each case two out of three people prefer A to B, B to C, and in turn C to A, which can seem a bit bizarre when you first come across it, hence why it is referred to as a paradox. You can find examples on Wikipedia based around this principle supporting AV (also known as IRV, Instant Runoff Voting) over the Condorcet methods, and vice versa, where multiple voters are now in each category instead of a single one. Again, this goes to show why you should choose your evaluation method (here, voting systems) from first principles rather than from the outcomes (the parties they will elect) or solely from considering rare cases where they throw up unexpected results.

Of course, this cyclical behaviour shouldn't seem at all strange: most children have happily played Rock, Paper, Scissors. Rock beats Scissors, which cuts Paper, which envelops Rock. People are instinctively happier when you can order things nicely like the natural numbers or the real numbers: if $a < b$ and $b < c$ then we must have $a < c$. This is a nice property known as transitivity, which the three examples above lack.

*Intransitive dice image from Wikipedia. Numbers on the unseen opposite sides, unlike on normal dice, are the same.*

Another nice example is the nontransitive dice, which are designed so that Die A will on average roll a higher score than Die B; which in turn usually beats Die C; which also wins with a greater than 50% chance against the original Die A. Apparently, Warren Buffett failed to trick a suspicious Bill Gates into gambling with them: Gates was to pick first.
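To see the cycle concretely, here is a quick sketch computing the exact win probabilities for one well-known intransitive trio of dice (the face values here are a standard example, not necessarily the set pictured above):

```python
import itertools

# A classic intransitive trio: each die beats the next with probability 5/9.
dice = {
    "A": [2, 2, 4, 4, 9, 9],
    "B": [1, 1, 6, 6, 8, 8],
    "C": [3, 3, 5, 5, 7, 7],
}

def win_probability(d1, d2):
    """Exact probability that a roll of d1 beats a roll of d2."""
    wins = sum(a > b for a, b in itertools.product(d1, d2))
    return wins / (len(d1) * len(d2))

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"P({x} beats {y}) = {win_probability(dice[x], dice[y]):.3f}")  # 0.556 each
```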
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 3, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6421265602111816, "perplexity": 1764.7721985252542}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424945.18/warc/CC-MAIN-20170725002242-20170725022242-00571.warc.gz"}
http://www.vexorian.com/2013/07/srm-586-meh.html
## Div1 500: Something with dynasties

Not sure what to do. I reduced it to finding out if there is a solution to a system of inequalities, where the variables are the (real) starting year of each nation's calendar. I had issues even implementing stub code. I couldn't debug them because my valgrind was suddenly not providing line numbers. The other day, before releasing my test setup, I actually tweaked my c++ build file a bit; I may have done something that gives valgrind line number amnesia. I actually spent most of the last 10 minutes (before the 10 minute mark) trying to fix this. Not like I had an actual idea of how to solve this problem.

## Div1 250: The one with a function.

You are given an array Y = { Y_0, Y_1, ... Y_(n-1) }, which describes a real function f() with domain [0, n-1]. For each (i < n - 1), the graph of the function contains the line segment between points (i, Y_i) and (i+1, Y_(i+1)). Find a horizontal line y = "something" that intersects the function the largest number of times, and return that number of intersections. Of course, if a horizontal segment exists in f(), there exists a y that has infinitely many intersections; in that case return -1.

This was a good problem and I felt confident I could solve it under 10 minutes. However, I had a bug (I didn't notice an issue with the code) which delayed me a bit past the end of the coding phase. According to KawigiEdit, I would have scored ~209 points in this problem if I had opened it first. Too slow. Then it turns out that my idea was wrong anyway, so I don't think things changed much because of my strategy. If I was having a good day, I *may* have found the challenge case, and maybe attempted to challenge. Who knows? I am still preferring to use my new strategy: whilst the 250 is tricky, I learned a bit more by trying to solve the div1 500 than by solving yet another tricky 250.

Anyway, the solution is to notice that most of the time we only need to try values from Y[] as the position of the horizontal line that crosses f(). This is because any intermediary point in a segment will intersect an equal or lesser number of times than the segment's extremes. So just count, for each Y[i], the number of times it intersects with segments of the function. Does y intersect with the segment that goes from Y[i-1] to Y[i]? If y lies in the interval between Y[i-1] and Y[i], the answer is yes.

However, the mistake I made was that I was counting some intersections twice. The same point (x,y) may be shared by two segments, and you only need to count it once. What I did to fix this small issue was to make sure that y is not equal to Y[i-1] before counting that intersection. However, there is a catch: there are exceptions to the rule. Imagine Y = {0, 5, 0, 5}; in this case, neither 0 nor 5 is an optimal location. 2.5 is better (3 intersections). What is up with that? It turns out that, besides the segment extremes, you need to take a point (any point) between any two extremes of a segment. You only need to check once per pair of segment extremes. In fact, just checking every segment extreme ±0.5 suffices. The ±0.5 can be implemented easily by scaling all values by 2, so you just have to try ±1. I hope to have a formal proof by the time the editorial is released.
```cpp
#include <vector>
#include <algorithm>
using namespace std;

int maximumSolutions(vector<int> Y)
{
    // If there is a horizontal segment, return -1:
    for (int i = 1; i < (int)Y.size(); i++) {
        if (Y[i] == Y[i-1]) {
            return -1;
        }
    }
    // Scale up all values by 2, so the midpoints +-0.5 become integers:
    for (int i = 0; i < (int)Y.size(); i++) {
        Y[i] *= 2;
    }
    int mx = 0;
    // For each y such that abs(y - Y[i]) <= 1:
    for (int i = 0; i < (int)Y.size(); i++) {
        for (int y = Y[i] - 1; y <= Y[i] + 1; y++) {
            // Count the number of intersections:
            int c = (Y[0] == y);
            for (int j = 1; j < (int)Y.size(); j++) {
                if (Y[j] > Y[j-1]) {
                    c += ( (Y[j-1] < y) && (y <= Y[j]) );
                } else {
                    c += ( (Y[j] <= y) && (y < Y[j-1]) );
                }
            }
            mx = std::max(mx, c);
        }
    }
    return mx;
}
```
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4788019061088562, "perplexity": 885.2566570227847}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424910.80/warc/CC-MAIN-20170724202315-20170724222315-00277.warc.gz"}
https://math.meta.stackexchange.com/questions/33473/asking-questions-about-difficulty-reading-proofs
This is a question about advice specifically for reading proofs, not really about writing them. I have recently been reading Algebra by Hungerford (the standard graduate-level 400-some-page one) and a few times I have found myself stuck for as long as hours trying to read one proof. Normally this happens towards the end of a section. Some good examples: P. Hall's theorem (II.7.14) and the Krull-Schmidt theorem (II.3.8). (Somehow this has only happened in chapter 2 so far, and I have read chapters 1, 2, most of 10, part of 3, half of 4, and other bits here and there; I am still stuck on II.3.8, but that is a matter for MSE itself.) Whenever I am not stuck, it goes pretty fast (I have done a lot of algebra more advanced and tricky than most of this book); even a lot of the exercises so far just feel trivial.

I am asking if it is OK if I simply post a proof on MSE itself (one that has not specifically been asked about in the same way before) and basically put circles around the inferences that I am stuck on and ask if someone can explain them to me? If I have spent an hour on a 2-page proof I hope it is not considered "lazy", but I just wanted to ask if there is maybe something else I should be doing. (It doesn't happen extremely often to the extent of an hour or more, but either I am racing through the text with no problem at all or I am just stuck in the mud completely, multiple times more so and more often than when I read the first $$\frac{2}{3}$$ of Altman and Kleiman or any of Lang's Linear Algebra.) I think some of the theorems later in the section I might not ever use, but at the same time it really bothers me to just skip things due to not understanding them.

If you're wondering, yes, I try to spend a good amount of time writing out a 2-3 page proof of something that Hungerford just sort of states like it's obvious, but I don't always get it. Also, I am a first-year M.A. student, so this is to prepare for eventual research after qualifying exams. (I am doing this alongside a first lower-level M.A. course in algebra, and by the way, I have more or less never gotten stuck reading the professor's notes; this problem is something unique to Hungerford.)

• It is better to explain where you are stuck and what precisely it is you do not understand. Getting super-specific makes it easier for others to help you, and often has the happy outcome that you solve the problem by yourself and don't need Math.SE's help. Apr 21, 2021 at 9:42
That tells answerers that there's a conceptual misunderstanding so just writing out a step-by-step proof is probably the wrong approach (bear in mind this won't stop this from happening, just increase your chances of getting the explanation you actually need). Finally, the site aims to be a searchable repository of knowledge, so taking the time to phrase your questions the way other people might is worthwhile: other people with similar problems can then find your question easily and benefit from it. There's a veiled concern in your post (I think; correct me if I'm wrong) that you're expected to write out a two-page proof up to the point where you're stuck. No, you're not, and people are unlikely to read it anyway. What you need to do is excerpt the part where you're getting stuck and provide enough information around it (definitions, assumptions, your reasoning at a high level). This is a useful skill, not just in mathematics but in all walks of life, and its worth getting the practice when you can. • @Countable: the answer clearly says that you don't need to type the full proof but just the parts relevant to your issues. – Paramanand Singh Mod Apr 22, 2021 at 2:26 • Yes ok fine. I think a lot of the time there is a good half or so of the proof that might be relevant to later parts in a proof but yes I can just cut and paste the necessary bits, fair. Also I get that the images are not searchable. My hope was that I could use an image and clearly describe the difficulty by using the image as a visual aid, of course I wouldn't just post an image and say nothing else. If the general idea of the question and the title and the page and source etc. are in text then is it really that important for every word to be searchable by the way? Apr 23, 2021 at 23:58 • @Countable you can include the image in your question but it will be better received if you don't rely on it. Having it there to provide extra information, or because you can't (easily) isolate the relevant parts is acceptable generally; but having the question depend on being able to see the image is not acceptable. (And while you wouldn't post an image and nothing else, exactly that gets closed daily on the site.) It's not essential that every word be searchable, no :) But your question is more useful if it can be searched. Apr 24, 2021 at 15:33 • That makes sense, Thank you :) Apr 24, 2021 at 19:35 • +1 especially for "This is a useful skill, not just in mathematics but in all walks of life" – Neal Apr 28, 2021 at 20:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.592596173286438, "perplexity": 477.404811637168}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662539101.40/warc/CC-MAIN-20220521112022-20220521142022-00355.warc.gz"}
https://geo.libretexts.org/Bookshelves/Geology/Physical_Geology_(Earle)/09%3A_Earths_Interior/9.04%3A_Isostasy
# 9.4: Isostasy

Theory holds that the mantle is able to convect because of its plasticity, and this plasticity also allows for another very important Earth process known as isostasy. The literal meaning of the word isostasy is "equal standstill," but the important idea behind it is the principle that Earth's crust is floating on the mantle, like a raft floating in the water, rather than resting on the mantle like a raft sitting on the ground.

The relationship between the crust and the mantle is illustrated in Figure $$\PageIndex{1}$$. On the right is an example of a non-isostatic relationship between a raft and solid concrete. It's possible to load the raft up with lots of people, and it still won't sink into the concrete. On the left, the relationship is an isostatic one between two different rafts and a swimming pool full of peanut butter. With only one person on board, the raft floats high in the peanut butter, but with three people, it sinks dangerously low. We're using peanut butter here, rather than water, because its viscosity more closely represents the relationship between the crust and the mantle. Although it has about the same density as water, peanut butter is much more viscous (stiff), and so although the three-person raft will sink into the peanut butter, it will do so quite slowly.

The relationship of Earth's crust to the mantle is similar to the relationship of the rafts to the peanut butter. The raft with one person on it floats comfortably high. Even with three people on it the raft is less dense than the peanut butter, so it floats, but it floats uncomfortably low for those three people. The crust, with an average density of around 2.6 grams per cubic centimetre (g/cm3), is less dense than the mantle (average density of approximately 3.4 g/cm3 near the surface, but more than that at depth), and so it is floating on the "plastic" mantle.
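As a rough, idealized illustration of this floating balance (my own sketch, using the densities quoted above and simple Archimedes-style buoyancy; the 35 km crustal thickness is an assumed round number, not a value from this text):

```python
# A floating slab rides with a fraction of its thickness "submerged"
# equal to the ratio of the slab's density to the fluid's density.
rho_crust = 2.6   # g/cm^3, average continental crust (from the text)
rho_mantle = 3.4  # g/cm^3, mantle near the surface (from the text)

thickness_km = 35.0  # assumed crustal thickness, for illustration only

submerged_fraction = rho_crust / rho_mantle
print(f"{submerged_fraction:.0%} of the slab sits below the mantle 'surface'")
print(f"~{thickness_km * (1 - submerged_fraction):.1f} km of a {thickness_km:.0f} km "
      f"slab would stand proud in this idealized picture")
```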
When more weight is added to the crust, through the process of mountain building, it slowly sinks deeper into the mantle and the mantle material that was there is pushed aside (Figure $$\PageIndex{2}$$, left). When that weight is removed by erosion over tens of millions of years, the crust rebounds and the mantle rock flows back (Figure $$\PageIndex{2}$$, right).

The crust and mantle respond in a similar way to glaciation and deglaciation as they do to the growth and erosion of mountain ranges. Thick accumulations of glacial ice add weight to the crust, and as the mantle beneath is squeezed to the sides, the crust subsides. This process is illustrated for the current ice sheet on Greenland in Figure $$\PageIndex{3}$$ (a and b). The Greenland Ice Sheet at this location is over 2,500 meters thick, and the crust beneath the thickest part has been depressed to the point where it is below sea level over a wide area. When the ice eventually melts, the crust and mantle will slowly rebound, but full rebound will likely take more than 10,000 years (Figure $$\PageIndex{3}$$c).

How can the mantle be both solid and plastic? You might be wondering how it is possible that Earth's mantle is rigid enough to break during an earthquake, and yet it convects and flows like a very viscous liquid. The explanation is that the mantle behaves as a non-Newtonian fluid, meaning that it responds differently to stresses depending on how quickly the stress is applied. A good example of this is the behaviour of the material known as Silly Putty, which can bounce and will break if you pull on it sharply, but will deform like a liquid if stress is applied slowly. In this photo, Silly Putty was placed over a hole in a glass tabletop, and in response to gravity, it slowly flowed into the hole. The mantle will flow when placed under the slow but steady stress of a growing (or melting) ice sheet.

Large parts of Canada are still rebounding as a result of the loss of glacial ice over the past 12 ka, and as shown in Figure $$\PageIndex{5}$$, other parts of the world are also experiencing isostatic rebound. The highest rate of uplift is within a large area to the west of Hudson Bay, which is where the Laurentide Ice Sheet was the thickest (over 3,000 m). Ice finally left this region around 8,000 years ago, and the crust is currently rebounding at a rate of nearly 2 centimeters per year. Strong isostatic rebound is also occurring in northern Europe where the Fenno-Scandian Ice Sheet was thickest, and in the eastern part of Antarctica, which also experienced significant ice loss during the Holocene. There are also extensive areas of subsidence surrounding the former Laurentide and Fenno-Scandian Ice Sheets. During glaciation, mantle rock flowed away from the areas beneath the main ice sheets, and this material is now slowly flowing back, as illustrated in Figure $$\PageIndex{3}$$b.

##### Exercise 9.4 Rock density and isostasy

The densities (also known as "specific gravity") of a number of common minerals are given in Table 9.1.

Table 9.1 Densities of common minerals.

| Mineral | Density (g/cm³) |
| --- | --- |
| Quartz | 2.65 |
| Feldspar | 2.63 |
| Amphibole | 3.25 |
| Pyroxene | 3.4 |
| Olivine | 3.3 |

The following table provides the approximate proportions of these minerals in the continental crust (typified by granite), oceanic crust (mostly basalt), and mantle (mainly the rock known as peridotite). Assuming that you have 1,000 cm³ of each rock type, estimate the respective rock-type densities. For each rock type, you will need to multiply the volume of the different minerals in the rock by their density, and then add those numbers to get the total weight for 1,000 cm³ of that rock. The density is that number divided by 1,000. The continental crust is done for you (a short code sketch of the same arithmetic appears after this exercise).

Table 9.2 Determine the density of different kinds of crust.

| Rock Type | Volumes of individual minerals in 1000 cm³ | Grams of individual minerals in 1000 cm³ | Total Weight (grams) | Density (g/cm³) |
| --- | --- | --- | --- | --- |
| Continental Crust (Granite) | Quartz – 180 cm³; Feldspar – 760 cm³; Amphibole – 70 cm³ | Quartz – 477 g; Feldspar – 1999 g; Amphibole – 227 g | 2703 g | 2.70 |
| Oceanic Crust (Basalt) | Feldspar – 450 cm³; Amphibole – 50 cm³; Pyroxene – 500 cm³ | Feldspar – ; Amphibole – ; Pyroxene – | | |
| Mantle (Peridotite) | Pyroxene – 450 cm³; Olivine – 550 cm³ | Pyroxene – ; Olivine – | | |

If continental crust (represented by granite) and oceanic crust (represented by basalt) are like rafts floating on the mantle, what does this tell you about how high or low they should float? This concept is illustrated in Figure $$\PageIndex{6}$$. The dashed line is for reference, showing points at equal distance from Earth's centre.

See Appendix 3 for Exercise 9.4 answers.

- Figure $$\PageIndex{4}$$: "Silly putty dripping" © Eric Skiff. CC BY-SA.
- Figure $$\PageIndex{5}$$: "PGR Paulson2007 Rate of Lithospheric Uplift due to PGR" by NASA. Public domain.
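Returning to Exercise 9.4: for checking your arithmetic, here is a small sketch that automates the multiply-and-sum procedure described above (my own illustration; the mineral volumes and densities are taken directly from Tables 9.1 and 9.2):

```python
# Densities from Table 9.1, in g/cm^3
density = {"Quartz": 2.65, "Feldspar": 2.63, "Amphibole": 3.25,
           "Pyroxene": 3.4, "Olivine": 3.3}

# Mineral volumes (cm^3) making up 1000 cm^3 of each rock type, from Table 9.2
rocks = {
    "Continental crust (granite)": {"Quartz": 180, "Feldspar": 760, "Amphibole": 70},
    "Oceanic crust (basalt)": {"Feldspar": 450, "Amphibole": 50, "Pyroxene": 500},
    "Mantle (peridotite)": {"Pyroxene": 450, "Olivine": 550},
}

for rock, volumes in rocks.items():
    total_grams = sum(v * density[m] for m, v in volumes.items())
    print(f"{rock}: {total_grams:.0f} g per 1000 cm^3 -> {total_grams / 1000:.2f} g/cm^3")
```

Running it reproduces the worked granite row (2.70 g/cm³) and fills in the remaining two.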
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.675345242023468, "perplexity": 1708.8951097547206}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948976.45/warc/CC-MAIN-20230329120545-20230329150545-00283.warc.gz"}
https://ita.skanev.com/C/03/09.html
# Exercise C.3.9

Show that for any random variable $X$ that takes on only the values $0$ and $1$, we have $\Var[X] = \E[X]\E[1-X]$.

Let's first calculate the expectations:

$$\E[X] = 0 \cdot \Pr\{X = 0\} + 1 \cdot \Pr\{X = 1\} = \Pr\{X = 1\} \\ \E[1-X] = \Pr\{X = 0\} \\ \E[X]\E[1-X] = \Pr\{X = 0\} \cdot \Pr\{X = 1\}$$

Now the variance. Since $X$ takes only the values $0$ and $1$, we have $X^2 = X$, and hence $\E[X^2] = \E[X] = \Pr\{X = 1\}$:

$$\Var[X] = \E[X^2] - \E^2[X] = \Pr\{X = 1\} - (\Pr\{X = 1\})^2 = \Pr\{X = 1\} (1 - \Pr\{X = 1\}) = \Pr\{X = 0\} \cdot \Pr\{X = 1\}$$
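As a quick numerical sanity check of this identity (an illustration of mine, not part of the original solution):

```python
import random

# Check Var[X] = E[X] * E[1 - X] empirically for a Bernoulli(p) variable.
p = 0.3
samples = [1 if random.random() < p else 0 for _ in range(100_000)]

ex = sum(samples) / len(samples)                          # estimate of E[X]
var = sum((x - ex) ** 2 for x in samples) / len(samples)  # estimate of Var[X]

print(var)            # both should be close to p * (1 - p) = 0.21
print(ex * (1 - ex))  # E[X] * E[1 - X], since E[1 - X] = 1 - E[X]
```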
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8284596800804138, "perplexity": 208.7406605722885}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689490.64/warc/CC-MAIN-20170923052100-20170923072100-00078.warc.gz"}
https://www.taylorfrancis.com/books/e/9780203427538/chapters/10.1201/9780203427538-34
chapter 12 - Caspase-independent cell death (14 pages, p. 211)

The best-studied members of the death-receptor family are TNF receptor 1 (TNFR1), Fas (also known as CD95 or Apo-1), and the receptors for TNF-related apoptosis-inducing ligand (TRAIL). Whereas it has long been known that TNF-induced death can take the shape of either apoptosis or necrosis (Laster et al., 1988), the ability of the Fas ligand (FasL) and TRAIL to induce necrosis-like PCD has been demonstrated only recently. Except for the dependence on reactive oxygen species (ROS) and, in some cases, serine protease activity, necrotic signaling pathways had remained ambiguous until recently (Denecker et al., 2001). Novel data now demonstrate that TNF, FasL, and TRAIL can trigger caspase-8-independent necrosis-like PCD that is dependent on the Fas-associated death domain (FADD) protein and the kinase activity of the receptor-interacting protein (RIP) (Holler et al., 2000). The dependence of RIP-mediated necrotic PCD on proteases remains to be studied. Interestingly, some TNF-resistant cells are sensitized to TNF-induced necrosis-like PCD upon inhibition of caspases, suggesting that caspases act as survival factors that directly inhibit the TNF-induced necrotic pathway (Khwaja and Tatton, 1999). Death receptors can also trigger caspase-independent apoptosis-like PCD. In immortalized epithelial cells, activated Fas has been reported to recruit Daxx from the nucleus to the receptor complex, and to trigger its binding with apoptosis signal-regulating kinase 1 (Ask1) (Charette et al., 2000; Ko et al., 2001). Others, however, have failed to detect Daxx in the cytosol and have suggested that Daxx enhances Fas-induced caspase-dependent death from its nuclear localization (Torii et al., 1999). Thus, Daxx may stimulate Fas-induced death by two independent mechanisms, the caspase-independent pathway being evident only when caspase activation is defective (Charette et al., 2000) and enough Ask1 is available (Ko et al., 2001). In addition to a caspase-dependent proapoptotic function that depends on its kinase activity, Ask1 possesses a caspase-independent killing function that is independent of its kinase activity and is activated by interaction with Daxx (Charette et al., 2001). Ask1 has also been found to be essential for TNF-triggered apoptosis of primary fibroblasts, but its activation by TNF appears to require ROS (Tobiume et al., 2001) instead of Daxx (Yang et al., 1997). In TNF-treated fibrosarcoma cells, cysteine cathepsins act as dominant execution proteases and bring about apoptosis-like morphologic changes (Foghsgaard et al., 2001). Whether Ask1 and cathepsins act on the same signaling pathway is as yet unknown.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.877696692943573, "perplexity": 23242.622955982548}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999740.32/warc/CC-MAIN-20190624211359-20190624233359-00341.warc.gz"}
http://www.ck12.org/book/Probability-and-Statistics-%2528Advanced-Placement%2529/r1/section/5.1/
# 5.1: The Standard Normal Probability Distribution

## Learning Objectives

- Identify the characteristics of a normal distribution.
- Identify and use the Empirical Rule (68-95-99.7 rule) for normal distributions.
- Calculate a $z$-score and relate it to probability.
- Determine if a data set corresponds to a normal distribution.

## Introduction

Most high schools have a set amount of time in between classes in which students must get to their next class. If you were to stand at the door of your statistics class and watch the students coming in, think about how the students would enter. Usually, one or two students enter early, then more students come in, then a large group of students enter, and then the number of students entering decreases again, with one or two students barely making it on time, or perhaps even coming in late! Try the same by watching students enter your school cafeteria at lunchtime. Spend some time in a fast food restaurant or café before, during, and after the lunch hour and you will most likely observe similar behavior.

Have you ever popped popcorn in a microwave? Think about what happens in terms of the rate at which the kernels pop. Better yet, actually do it and listen to what happens! For the first few minutes nothing happens, then after a while a few kernels start popping. This rate increases to the point at which you hear most of the kernels popping, and then it gradually decreases again until just a kernel or two pops.

Try measuring the height, or shoe size, or the width of the hands of the students in your class. In most situations, you will probably find that there are a couple of students with very low measurements and a couple with very high measurements, with the majority of students centered around a particular value.

Sometimes the door handles in office buildings show a wear pattern caused by thousands, maybe millions of times being pulled or pushed to open the door. Often you will see that there is a middle region that shows by far the most wear, at the place where people opening the door are most likely to grab the handle, surrounded by areas on either side showing less wear. On average, people are more likely to have grabbed the handle in the same spot and less likely to use the extremes on either side.

All of these examples show a typical pattern that seems to be a part of many real life phenomena. In statistics, because this pattern is so pervasive, it seems to fit to call it “normal”, or more formally the normal distribution. The normal distribution is an extremely important concept because it occurs so often in the data we collect from the natural world, as well as many of the more theoretical ideas that are the foundation of statistics. This chapter explores the details of the normal distribution.

## The Characteristics of a Normal Distribution

### Shape

If you think of graphing data from each of the examples in the introduction, the distributions from each of these situations would be mound-shaped and mostly symmetric. A normal distribution is a perfectly symmetric, mound-shaped distribution. It is commonly referred to as a normal curve, or bell curve. Because so many real data sets closely approximate a normal distribution, we can use the idealized normal curve to learn a great deal about such data.
In practical data collection, the distribution will never be exactly symmetric, so just like situations involving probability, a true normal distribution results from an infinite collection of data, or from the probabilities of a continuous random variable.

### Center

Due to this exact symmetry, the center of the normal distribution, or of a data set that approximates a normal distribution, is located at the highest point of the distribution, and all the statistical measures of center we have already studied (mean, median, and mode) are equal. It is also important to realize that this center peak divides the data into two equal parts.

Let’s go back to our popcorn example. The bag advertises a certain time, beyond which you risk burning the popcorn. From experience, the manufacturers know when most of the popcorn will stop popping, but there is still a chance that a rare kernel will pop after a longer, or shorter, period of time. The directions usually tell you to stop when the time between pops is a few seconds, but aren’t you tempted to keep going so you don’t end up with a bag full of un-popped kernels? Because this is real, and not theoretical, there will be a time when it will stop popping and start burning, but there is always a chance, no matter how small, that one more kernel will pop if you keep the microwave going.

### Spread

In the idealized normal distribution of a continuous random variable, the distribution continues infinitely in both directions. Because of this infinite spread, the range would not be a possible statistical measure of spread. The most common way to measure the spread of a normal distribution, then, is using the standard deviation, or typical distance away from the mean. Because of the symmetry of a normal distribution, the standard deviation indicates how far away from the maximum peak the data will be. Here are two normal distributions with the same center (mean):

The first distribution pictured above has a smaller standard deviation, and so the bulk of the data is concentrated more heavily around the mean. There is less data at the extremes compared to the second distribution pictured above, which has a larger standard deviation, and therefore the data is spread farther from the mean value, with more of the data appearing in the tails.

## Investigating the Normal Distribution on a TI-83/4 Graphing Calculator

We can graph a normal curve for a probability distribution on the TI-83/4. Press [Y=]. To create a normal distribution, we will draw an idealized curve using something called a density function. We will learn more about density functions in the next lesson. The command is called a probability density function, and it is found by pressing [2nd] [DISTR] [1]. Enter an $X$ to represent the random variable, followed by the mean and the standard deviation. For this example, choose a mean of 5 and a standard deviation of 1.

Choose [2nd] [QUIT] to go to the home screen. We can draw a vertical line at the mean to show it is in the center of the distribution by pressing [2nd] [DRAW] and choosing VERTICAL. Enter the mean (5) and press [ENTER]. Remember that even though the graph appears to touch the $x$-axis, it is actually just very close to it.

This will graph 3 different normal distributions with various standard deviations to make it easy to see the change in spread.
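As a rough Python analogue of these calculator steps, here is a minimal sketch (assuming numpy, scipy, and matplotlib are available; it is not part of the original lesson) that draws the same density curve with a mean of 5 and a standard deviation of 1:

```python
import numpy as np
from scipy.stats import norm
import matplotlib.pyplot as plt

mu, sigma = 5, 1                        # the values used in the example
x = np.linspace(mu - 4 * sigma, mu + 4 * sigma, 400)

plt.plot(x, norm.pdf(x, mu, sigma))     # the density function, like normalpdf(
plt.axvline(mu, linestyle="--")         # vertical line at the mean
plt.show()
```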
## The Empirical Rule

Because of the similar shape of all normal distributions, we can measure the percentage of data that is a certain distance from the mean no matter what the standard deviation of the set is. The following graph shows a normal distribution with $\mu = 0$ and $\sigma = 1$. This curve is called a standard normal distribution. In this case, the values of $x$ represent the number of standard deviations away from the mean.

Notice that vertical lines are drawn at points that are exactly one standard deviation to the left and right of the mean. We have consistently described standard deviation as a measure of the “typical” distance away from the mean. How much of the data is actually within one standard deviation of the mean? To answer this question, think about the space, or area, under the curve. The entire data set, or 100% of it, is contained under the whole curve. What percentage would you estimate is between the two lines? It is a reasonable estimate to say it is about 2/3 of the total area. In a more advanced statistics course, you could use calculus to actually calculate this area.

To help estimate the answer, we can use a graphing calculator. Graph a standard normal distribution over an appropriate window. Now press [2nd] [DISTR] and choose DRAW ShadeNorm. Insert -1, 1 after the ShadeNorm command and it will shade the area within one standard deviation of the mean. The calculator also gives a very accurate estimate of the area. We can see from this that approximately 68 percent of the area is within one standard deviation of the mean.

If we venture two standard deviations away from the mean, how much of the data should we expect to capture? Make the changes to the ShadeNorm command to find out. Notice from the shading that almost all of the distribution is shaded, and the percentage of data is close to 95%. If you were to venture 3 standard deviations from the mean, 99.7%, or virtually all of the data, is captured, which tells us that very little of the data in a normal distribution is more than 3 standard deviations from the mean. Notice that the shading of the calculator actually makes it look like the entire distribution is shaded because of the limitations of the screen resolution, but as we have already discovered, there is still some area under the curve further out than that.

These three approximate percentages, 68, 95, and 99.7, are extremely important and useful for beginning statistics students, and together they are called the empirical rule. The empirical rule states that the percentages of data in a normal distribution within 1, 2, and 3 standard deviations of the mean are approximately 68, 95, and 99.7, respectively.

## Z-Scores

A $z$-score is a measure of the number of standard deviations a particular data point is away from the mean. For example, let’s say the mean score on a test for your statistics class was an 82 with a standard deviation of 7 points.
If your score was an 89, it is exactly one standard deviation to the right of the mean; therefore your $z$-score would be 1. If, on the other hand, you scored a 75, your score is exactly one standard deviation below the mean, and to show that it is below the mean, we assign it a $z$-score of negative one. All values that are below the mean will have negative $z$-scores. A $z$-score of negative two would represent a value that is exactly 2 standard deviations below the mean, or $82 - 14 = 68$ in this example.

To calculate a $z$-score in which the numbers are not so obvious, you take the deviation and divide it by the standard deviation.

$$z = \frac{\text{Deviation}}{\text{Standard Deviation}}$$

You may recall that the deviation is the observed value of the variable minus the mean value, so in symbolic terms the $z$-score would be:

$$z = \frac{x - \bar{x}}{sd}$$

Example: What is the $z$-score for an A on this test? (Assume that an A is a 93.)

$$z = \frac{x - \bar{x}}{sd} = \frac{93 - 82}{7} = \frac{11}{7} \approx 1.57$$

It is not necessary to have a normal distribution to calculate a $z$-score, but the $z$-score has much more significance when it relates to a normal distribution. For example, if we know that the test scores from the last example are distributed normally, then a $z$-score can tell us something about how our test score relates to the rest of the class. From the empirical rule we know that about 68 percent of the students would have scored between a $z$-score of -1 and 1, or between a 75 and an 89. If 68% of the data is between those two values, then that leaves a remaining 32% in the tail areas. Because of symmetry, that leaves 16% in each individual tail. If we combine the two percentages, approximately 84% of the data is below an 89 score. We typically refer to this as a percentile. A student with this score could conclude that he or she performed better than 84% of the class, and that he or she was in the 84th percentile.

This same conclusion can be put in terms of a probability distribution as well. We could say that if a student from this class were chosen at random, the probability that we would choose a student with a score of 89 or less is 0.84, or there is an 84% chance of picking such a student.

## Assessing Normality

The best way to determine if a data set approximates a normal distribution is to look at a visual representation. Histograms and box plots can be useful indicators of normality, but are not always definitive.
It is often easier to tell if a data set is not normal from these plots. If a data set is skewed right, it means that the right tail is significantly larger than the left. Likewise, skewed left means the left tail has more weight than the right. A bimodal distribution has two modes, or peaks, as if two normal distributions were added together. Multimodal distributions, with two or more modes, often reflect two or more distinct groups in the data. For instance, in a histogram of the heights of American 30-year-old adults, you will see a bimodal distribution: one mode for males, one mode for females.

Now that we know how to calculate $z$-scores, there is a plot we can use to determine if a distribution is normal. If we calculate the $z$-scores for a data set and plot them against the actual values, this is called a normal probability plot, or a normal quantile plot. If the data set is normal, then this plot will be perfectly linear. The closer to being linear the normal probability plot is, the more closely the data set approximates a normal distribution.

Look below at a histogram and the normal probability plot for the same data. The histogram is fairly symmetric and mound-shaped and appears to display the characteristics of a normal distribution. When the $z$-scores are plotted against the data values, the normal probability plot appears strongly linear, indicating that the data set closely approximates a normal distribution.

Example: The following data set tracked high school seniors’ involvement in traffic accidents. The participants were asked the following question: “During the last 12 months, how many accidents have you had while you were driving (whether or not you were responsible)?”

| Year | Percentage of high school seniors who said they were involved in no traffic accidents |
|---|---|
| 1991 | 75.7 |
| 1992 | 76.9 |
| 1993 | 76.1 |
| 1994 | 75.7 |
| 1995 | 75.3 |
| 1996 | 74.1 |
| 1997 | 74.4 |
| 1998 | 74.4 |
| 1999 | 75.1 |
| 2000 | 75.1 |
| 2001 | 75.5 |
| 2002 | 75.5 |
| 2003 | 75.8 |

Figure: Percentage of high school seniors who said they were involved in no traffic accidents. Source: Sourcebook of Criminal Justice Statistics: http://www.albany.edu/sourcebook/pdf/t352.pdf

Here is a histogram and a box plot of this data. The histogram appears to show a roughly mound-shaped and symmetric distribution. The box plot does not appear to be significantly skewed, but the various sections of the plot also do not appear to be overly symmetric either. In the following chart, the $z$-scores for this data set have been calculated.
The mean percentage is approximately 75.35.

| Year | Percentage | $z$-score |
|---|---|---|
| 1991 | 75.7 | 0.45 |
| 1992 | 76.9 | 2.03 |
| 1993 | 76.1 | 0.98 |
| 1994 | 75.7 | 0.45 |
| 1995 | 75.3 | -0.07 |
| 1996 | 74.1 | -1.65 |
| 1997 | 74.4 | -1.25 |
| 1998 | 74.4 | -1.25 |
| 1999 | 75.1 | -0.33 |
| 2000 | 75.1 | -0.33 |
| 2001 | 75.5 | 0.19 |
| 2002 | 75.5 | 0.19 |
| 2003 | 75.8 | 0.59 |

Figure: Table of $z$-scores for senior no-accident data.

Here is a plot of the percentages and the $z$-scores, or the normal probability plot. While not perfectly linear, this plot does have a strong linear pattern, and we would therefore conclude that the distribution is reasonably normal.

One additional clue about normality might be gained from investigating the empirical rule. Remember that in an idealized normal curve, approximately 68% of the data should be within one standard deviation of the mean. If we count, there are 9 years for which the $z$-scores are between -1 and 1. As a percentage of the total data, 9/13 is about 69%, or very close to the ideal value. This data set is so small that it is difficult to verify the other percentages, but they are still not unreasonable. About 92% of the data (all but one of the points) ends up within 2 standard deviations of the mean, and all of the data (which is in line with the theoretical 99.7%) is located between $z$-scores of -3 and 3.

## Lesson Summary

A normal distribution is a perfectly symmetric, mound-shaped distribution that appears in many practical and real data sets, and it is an especially important foundation for making conclusions about data, called inference. A standard normal distribution is a normal distribution in which the mean is 0 and the standard deviation is 1.

A $z$-score is a measure of the number of standard deviations a particular data value is away from the mean. The formula for calculating a $z$-score is:

$$z = \frac{x - \bar{x}}{sd}$$

$Z$-scores are useful for comparing two distributions with different centers and/or spreads. When you convert an entire distribution to $z$-scores, you are actually changing it to a standardized distribution. A distribution has $z$-scores regardless of whether or not it is normal in shape.
If the distribution is normal, however, the $z$-scores are useful in explaining how much of the data is contained within a certain distance of the mean. The empirical rule is the name given to the observation that approximately 68% of the data is within 1 standard deviation of the mean, about 95% is within 2 standard deviations of the mean, and 99.7% of the data is within 3 standard deviations of the mean. Some refer to this as the 68-95-99.7 rule.

There is no straightforward test for normality. You should learn to recognize the normality of a distribution by examining the shape and symmetry of its visual display. However, a normal probability plot, or normal quantile plot, is a useful tool to help check the normality of a distribution. This graph is a plot of the $z$-scores of a data set against the actual values. If the distribution is normal, this plot will be linear.

## Points To Consider

1. How can we use normal distributions to make meaningful conclusions about samples and experiments?
2. How do we calculate probabilities and areas under the normal curve that are not covered by the empirical rule?
3. What are the other types of distributions that can occur in different probability situations?

## Review Questions

1. Which of the following data sets is most likely to be normally distributed? For the other choices, explain why you believe they would not follow a normal distribution.
   1. The hand span (measured from the tip of the thumb to the tip of the extended 5th finger) of a random sample of high school seniors.
   2. The annual salaries of all employees of a large shipping company.
   3. The annual salaries of a random sample of 50 CEOs of major companies, 25 women and 25 men.
   4. The dates of 100 pennies taken from a cash drawer in a convenience store.
2. The grades on a statistics mid-term for a high school are normally distributed with $\mu = 81$ and $\sigma = 6.3$. Calculate the $z$-scores for each of the following exam grades. Draw and label a sketch for each example.
   1. 65
   2. 83
   3. 93
   4. 100
3. Assume that the mean weight of 1-year-old girls in the US is normally distributed with a mean of about 9.5 kilograms and a standard deviation of approximately 1.1 kilograms. Without using a calculator, estimate the percentage of 1-year-old girls in the US that meet the following conditions. Draw a sketch and shade the proper region for each problem.
   1. Less than 8.4 kg
   2. Between 7.3 kg and 11.7 kg
   3. More than 12.8 kg
4. For a standard normal distribution, place the following in order from smallest to largest.
   1. The percentage of data below 1
   2. The percentage of data below -1
   3. The mean
   4. The standard deviation
   5. The percentage of data above 2
5. The 2007 AP Statistics examination scores were not normally distributed, with $\mu = 2.80$ and $\sigma = 1.34$.¹ What is the approximate $z$-score that corresponds to an exam score of 5 (the scores range from 1 to 5)?
   1. 0.786
   2. 1.46
   3. 1.64
   4. 2.20
   5. A $z$-score cannot be calculated because the distribution is not normal.

   ¹ Data available on the College Board website.
6. The heights of 5th grade boys in the United States are approximately normally distributed, with a mean height of 143.5 cm and a standard deviation of about 7.1 cm. What is the probability that a randomly chosen 5th grade boy would be taller than 157.7 cm?
7. A statistics class bought some sprinkle (or jimmies) doughnuts for a treat and noticed that the number of sprinkles seemed to vary from doughnut to doughnut. So, they counted the sprinkles on each doughnut. Here are the results:

   241, 282, 258, 224, 133, 335, 322, 323, 354, 194, 332, 274, 233, 147, 213, 262, 227, 366

   (a) Create a histogram, dot plot, or box plot for this data. Comment on the shape, center and spread of the distribution.

   (b) Find the mean and standard deviation of the distribution of sprinkles. Complete the following chart by standardizing all the values:

   $\mu =$ _______ $\sigma =$ _______

   | Number of Sprinkles | Deviation | $Z$-score |
   |---|---|---|
   | 241 | | |
   | 282 | | |
   | 258 | | |
   | 223 | | |
   | 133 | | |
   | 335 | | |
   | 322 | | |
   | 323 | | |
   | 354 | | |
   | 194 | | |
   | 332 | | |
   | 274 | | |
   | 233 | | |
   | 147 | | |
   | 213 | | |
   | 262 | | |
   | 227 | | |
   | 366 | | |

   Figure: A table to be filled in for the sprinkles question.

   (c) Create a normal probability plot from your results.

   (d) Based on this plot, comment on the normality of the distribution of sprinkle counts on these doughnuts.

Open-ended Investigation: Munchkin Lab.

Teacher Notes: For this activity, obtain two large boxes of Dunkin Donuts’ munchkins. Each box should contain only one type of munchkin. I have found students prefer the glazed and the chocolate, but the activity can be modified according to your preference. If you do not have a Dunkin Donuts near you, the bakery section of your supermarket should have boxed donut holes or something similar you can use. You will also need an electronic balance capable of measuring to the nearest 10th of a gram. Your science teachers will be able to help you out with this if you do not have one. I have used this activity before introducing the concepts in this chapter.
If you remove the words “$z$-score”, the normal probability plot, and the last two questions, students will be able to investigate and develop an intuitive understanding for standardized scores and the empirical rule before defining them. Experience has shown that this data very closely approximates a normal distribution, and students will be able to calculate the $z$-scores and verify that their results come very close to the theoretical values of the empirical rule.

## Review Answers

1.
   1. You would expect this situation to vary normally, with most students’ hand spans centering around a particular value and a few students having much larger or much smaller hand spans.
   2. Most employees could be hourly laborers and drivers, and their salaries might be normally distributed, but the few management and corporate salaries would most likely be much higher, giving a skewed right distribution.
   3. Many studies have been published detailing the shrinking, but still prevalent, income gap between male and female workers. This distribution would most likely be bimodal, with each gender distribution by itself possibly being normal.
   4. You might expect most of the pennies to be from this year or last year, fewer still from the previous few years, and the occasional penny that is even older. The distribution would most likely be skewed left.
2.
   1. $z \approx -2.54$
   2. $z \approx 0.32$
   3. $z \approx 1.90$
   4. $z \approx 3.02$
3. Because the data is normally distributed, students should use the 68-95-99.7 rule to answer these questions.
   1. about 16% (less than one standard deviation below the mean)
   2. about 95% (within 2 standard deviations)
   3. about 0.15% (more than 3 standard deviations above the mean)
4. The standard normal curve has a mean of zero and a standard deviation of one, so all the values correspond to $z$-scores. The corresponding values are approximately:
   1. 0.84
   2. 0.16
   3. 0
   4. 1
   5. 0.025

   Therefore the correct order is: c, e, b, a, d.
5. c
6. 0.025. A height of 157.7 cm is exactly 2 standard deviations above the mean height. According to the empirical rule, the probability of a randomly chosen value being within 2 standard deviations is about 0.95, which leaves 0.05 in the tails. We are interested in the upper tail only, as we are looking for the probability of being above this value.
7. (a) Here are the possible plots showing a symmetric, mound-shaped distribution.
(b) $\mu = 262.222$, $s = 67.837$

| Number of Sprinkles | Deviation | $Z$-score |
|---|---|---|
| 241 | -21.2222 | -0.313 |
| 282 | 19.7778 | 0.292 |
| 258 | -4.22222 | -0.062 |
| 223 | -38.2222 | -0.563 |
| 133 | -129.222 | -1.905 |
| 335 | 72.7778 | 1.073 |
| 322 | 59.7778 | 0.881 |
| 323 | 60.7778 | 0.896 |
| 354 | 91.7778 | 1.353 |
| 194 | -68.2222 | -1.006 |
| 332 | 69.7778 | |
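For completeness, a short Python sketch of the computation behind this answer, using the 18 counts listed in the review question (which lists 224 where this answer table shows 223):

```python
# Mean, sample standard deviation, and z-scores for the sprinkle counts.
sprinkles = [241, 282, 258, 224, 133, 335, 322, 323, 354, 194,
             332, 274, 233, 147, 213, 262, 227, 366]

n = len(sprinkles)
mean = sum(sprinkles) / n                                        # ~262.222
s = (sum((x - mean) ** 2 for x in sprinkles) / (n - 1)) ** 0.5   # ~67.837

for x in sprinkles:
    # value, deviation from the mean, and z-score, as in the table above
    print(x, round(x - mean, 4), round((x - mean) / s, 3))
```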
{"extraction_info": {"found_math": true, "script_math_tex": 264, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8199779391288757, "perplexity": 866.4143594756213}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988718866.34/warc/CC-MAIN-20161020183838-00352-ip-10-171-6-4.ec2.internal.warc.gz"}
http://en.forums.wordpress.com/topic/error-with-latex-product-symbol
error with LaTeX "product" symbol

1. Hello, I need to put a LaTeX formula on my blog using the mathematical "product" symbol. Here is the formula:

\prod_{n=1}^5\frac{n}{n-1}

I know it works in LaTeX (I tested it), but WordPress tells me "Formula does not parse". Does anyone know why?
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.868958592414856, "perplexity": 7309.786298065692}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999679238/warc/CC-MAIN-20140305060759-00008-ip-10-183-142-35.ec2.internal.warc.gz"}
https://link.springer.com/article/10.1007%2Fs00158-010-0493-y
Structural and Multidisciplinary Optimization, Volume 42, Issue 2, pp 293–304

# Effect of approximation fidelity on vibration-based elastic constants identification

Christian Gogu, Raphael Haftka, Rodolphe Le Riche, Jerome Molimard

Research Paper

## Abstract

Some applications, such as identification or Monte Carlo based uncertainty quantification, often require simple analytical formulas that are fast to evaluate. Approximate closed-form solutions for the natural frequencies of free orthotropic plates have been developed and have a wide range of applicability, but, as we show in this article, they lack accuracy for vibration-based material properties identification. This article first demonstrates that a very accurate response surface approximation can be constructed by using dimensional analysis. Second, the article investigates how the accuracy of the approximation used propagates to the accuracy of the elastic constants identified from vibration experiments. For a least squares identification approach, the approximate analytical solution led to physically implausible properties, while the high-fidelity response surface approximation obtained reasonable estimates. With a Bayesian identification approach, the lower-fidelity analytical approximation led to reasonable results, but with much lower accuracy than the higher-fidelity approximation. The results also indicate that standard least squares approaches for identifying elastic constants from vibration tests may be ill-conditioned, because they are highly sensitive to the accuracy of the vibration frequencies calculation.

## Keywords

Identification; Bayesian identification; Response surface approximations; Dimensionality reduction; Plate vibration

## Notes

### Acknowledgments

This work was supported in part by the NASA grant NNX08AB40A. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Aeronautics and Space Administration.

## References

1. Blevins RD (1979) Formulas for natural frequency and mode shape. Van Nostrand Reinhold, New York
2. Buckingham E (1914) On physically similar systems: illustrations of the use of dimensional equations. Phys Rev 4:345–376
3. Dickinson SM (1978) The buckling and frequency of flexural vibration of rectangular isotropic and orthotropic plates using Rayleigh’s method. J Sound Vib 61:1–8
4. Gogu C, Haftka RT, Le Riche R, Molimard J, Vautrin A, Sankar BV (2008) Comparison between the basic least squares and the Bayesian approach for elastic constants identification. J Phys: Conf Ser 135:012045
5. Gogu C, Haftka RT, Bapanapalli S, Sankar BV (2009a) Dimensionality reduction approach for response surface approximations: application to thermal design. AIAA J 47(7):1700–1708
6. Gogu C, Haftka RT, Le Riche R, Molimard J, Vautrin A, Sankar BV (2009b) Bayesian statistical identification of orthotropic elastic constants accounting for measurement and modeling errors. In: 11th AIAA non-deterministic approaches conference, AIAA paper 2009-2258, Palm Springs, CA
7. Gürdal Z, Haftka RT, Hajela P (1998) Design and optimization of laminated composite materials. Wiley Interscience, New York
8. Kaipio J, Somersalo E (2005) Statistical and computational inverse problems. Springer, New York
9. Mottershead JE, Friswell MI (1993) Model updating in structural dynamics: a survey. J Sound Vib 167:347–375
10. Myers RH, Montgomery DC (2002) Response surface methodology: process and product optimization using designed experiments, 2nd edn. Wiley, New York
11. Pedersen P, Frederiksen PS (1992) Identification of orthotropic material moduli by a combined experimental/numerical approach. Measurement 10:113–118
12. Vaschy A (1892) Sur les lois de similitude en physique. Ann Télégr 19:25–28
13. Viana FAC, Goel T (2009) Surrogates toolbox v1.1 user’s guide. http://fchegury.googlepages.com
14. Waller MD (1939) Vibrations of free square plates: part I. Normal vibrating modes. Proc Phys Soc 51:831–844
15. Waller MD (1949) Vibrations of free rectangular plates. Proc Phys Soc B 62(5):277–285

## Authors and Affiliations

Christian Gogu (1, 2; email author), Raphael Haftka (2), Rodolphe Le Riche (1), Jerome Molimard (1)

1. Centre Science des Matériaux et des Structures, Ecole des Mines de Saint Etienne, Saint Etienne Cedex 2, France
2. Mechanical and Aerospace Engineering Department, University of Florida, Gainesville, USA
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9353806972503662, "perplexity": 10484.294410465414}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496671411.14/warc/CC-MAIN-20191122171140-20191122200140-00217.warc.gz"}
https://docs.ceph.com/en/latest/ceph-volume/lvm/batch/
Notice

This document is for a development version of Ceph.

# batch

The subcommand allows creating multiple OSDs at the same time given an input of devices. The batch subcommand is closely related to drive-groups. One individual drive group specification translates to a single batch invocation.

The subcommand is based on create, and will use the very same code path. All batch does is to calculate the appropriate sizes of all volumes and skip over already created volumes.

All the features that ceph-volume lvm create supports, like dmcrypt, preventing systemd units from starting, and defining bluestore or filestore, are supported.

## Automatic sorting of disks

If batch receives only a single list of data devices and no other options are passed, ceph-volume will auto-sort disks by their rotational property and use non-rotating disks for block.db or journal, depending on the objectstore used. If all devices are to be used for standalone OSDs, no matter if rotating or solid state, pass --no-auto. For example, assuming bluestore is used and --no-auto is not passed, the deprecated behavior would deploy the following, depending on the devices passed:

1. Devices are all spinning HDDs: 1 OSD is created per device
2. Devices are all SSDs: 2 OSDs are created per device
3. Devices are a mix of HDDs and SSDs: data is placed on the spinning device, the block.db is created on the SSD, as large as possible.

Note: Although operations in ceph-volume lvm create allow usage of block.wal, it isn’t supported with the auto behavior.

This default auto-sorting behavior is now DEPRECATED and will be changed in future releases. Instead, devices are not automatically sorted unless the --auto option is passed. It is recommended to make use of the explicit device lists for block.db, block.wal and journal.

# Reporting

By default batch will print a report of the computed OSD layout and ask the user to confirm. This can be overridden by passing --yes. If one wants to try out several invocations without being asked to deploy, --report can be passed; ceph-volume will exit after printing the report.

Consider the following invocation:

$ ceph-volume lvm batch --report /dev/sdb /dev/sdc /dev/sdd --db-devices /dev/nvme0n1

This will deploy three OSDs with external db and wal volumes on an NVMe device.

pretty reporting

The pretty report format (the default) would look like this:

$ ceph-volume lvm batch --report /dev/sdb /dev/sdc /dev/sdd --db-devices /dev/nvme0n1
--> passed data devices: 3 physical, 0 LVM
--> relative data size: 1.0
--> passed block_db devices: 1 physical, 0 LVM

Total OSDs: 3

  Type        Path            LV Size     % of device
  ----------------------------------------------------
  data        /dev/sdb        300.00 GB   100.00%
  block_db    /dev/nvme0n1    66.67 GB    33.33%
  ----------------------------------------------------
  data        /dev/sdc        300.00 GB   100.00%
  block_db    /dev/nvme0n1    66.67 GB    33.33%
  ----------------------------------------------------
  data        /dev/sdd        300.00 GB   100.00%
  block_db    /dev/nvme0n1    66.67 GB    33.33%

JSON reporting

Reporting can also produce structured output with --format json or --format json-pretty.

## Explicit sizing

It is also possible to provide explicit sizes to ceph-volume via the arguments

- --block-db-size
- --block-wal-size
- --journal-size

ceph-volume will try to satisfy the requested sizes given the passed disks. If this is not possible, no OSDs will be deployed.
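For instance, a sketch of a sized invocation (the flag is one of those listed above; the device paths and the 60G figure are placeholders, not taken from this page):

$ ceph-volume lvm batch --report --block-db-size 60G /dev/sdb /dev/sdc /dev/sdd --db-devices /dev/nvme0n1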
# Idempotency and disk replacements

ceph-volume lvm batch intends to be idempotent, i.e. calling the same command repeatedly must result in the same outcome. For example, calling:

$ ceph-volume lvm batch --report /dev/sdb /dev/sdc /dev/sdd --db-devices /dev/nvme0n1

will result in three deployed OSDs (if all disks were available). Calling this command again, you will still end up with three OSDs, and ceph-volume will exit with return code 0.

Suppose /dev/sdc goes bad and needs to be replaced. After destroying the OSD and replacing the hardware, you can again call the same command and ceph-volume will detect that only two out of the three wanted OSDs are set up and re-create the missing OSD.

This idempotency notion is tightly coupled to, and extensively used by, OSD Service Specification.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3018955588340759, "perplexity": 10840.133123862253}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704804187.81/warc/CC-MAIN-20210126233034-20210127023034-00772.warc.gz"}
https://www.chemzipper.com/2018/10/hydrolysis-of-salt-of-weak-acid-and.html
[3] CATIONIC AS WELL AS ANIONIC HYDROLYSIS

Take a salt (CH3COONH4) of a weak acid (CH3COOH) and a weak base (NH4OH) and dissolve it in water; the salt dissociates completely:

CH3COONH4 → CH3COO⁻ + NH4⁺

The ions then get hydrolysed according to the reaction:

CH3COO⁻ + NH4⁺ + H2O ⇌ CH3COOH + NH4OH

Such salts undergo hydrolysis because the aqueous solution contains unionised acid as well as base molecules. The nature of the aqueous solution of such a salt depends on the equilibrium constants for cationic and anionic hydrolysis. The hydrolysis constant is

Kh = [CH3COOH][NH4OH] / ([CH3COO⁻][NH4⁺])

Multiplying and dividing by [H⁺] and [OH⁻] and rearranging,

Kh = Kw / (Ka · Kb)

There is an important issue that needs clarification before we move on. In this case both ions (the cation and the anion) get hydrolysed, producing a weak acid and a weak base, so we cannot immediately tell whether the solution is acidic, basic or neutral, and we have considered the degree of hydrolysis of both ions to be the same. Strictly this is not exact, but it is a valid working assumption for the hydrolysis reaction given earlier.

Now we calculate the pH of the solution. If the hydrolysis reaction is at equilibrium, then all the reversible processes occurring in water must also be at equilibrium. The [H⁺] (or [OH⁻]) may be calculated from the dissociation constant of the acid (or the base); here, calculating [H⁺] from the acid and using the fact that at 25 °C the pKw of water is 14 gives

pH = 7 + (pKa - pKb)/2

If Kh1 < Kh2, then Ka > Kb and pKa < pKb; as a result, the solution becomes acidic.
If Kh1 > Kh2, then Ka < Kb and pKa > pKb; as a result, the solution becomes basic.

ILLUSTRATIVE EXAMPLE (1): Calculate the pH of a 0.2 M NH4CN solution. (Given: Ka of HCN is 3×10⁻¹⁰ and Kb of NH4OH is 2.0×10⁻⁵.) (Ans: pH ≈ 9.5)

ILLUSTRATIVE EXAMPLE (2): Calculate the DOD and pH of a 0.2 M NaCN solution. (Given: Ka of HCN is 2.0×10⁻¹⁰.) (Ans: DOD = √(2×10⁻¹⁰) and pH = 11.5)

ILLUSTRATIVE EXAMPLE (3): Calculate the DOD (h) and pH of a 0.2 M C6H5NH3Cl solution. (Given: Ka of C6H5NH3Cl is 4.0×10⁻⁸.) (Ans: DOD = √(20×10⁻⁴) and pH = 6.6)

ILLUSTRATIVE EXAMPLE (4):

ILLUSTRATIVE EXAMPLE (5):
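As a quick check of the formula pH = 7 + (pKa - pKb)/2 on Illustrative Example (1), here is a minimal Python sketch; with the stated Ka = 3×10⁻¹⁰ it gives about 9.41, slightly below the quoted 9.5, presumably due to rounding in the source:

```python
import math

# pH of a salt of a weak acid and a weak base: pH = 7 + (pKa - pKb) / 2
Ka = 3e-10   # HCN, as given in Example (1)
Kb = 2e-5    # NH4OH, as given

pKa = -math.log10(Ka)
pKb = -math.log10(Kb)
pH = 7 + (pKa - pKb) / 2
print(round(pH, 2))  # ~9.41, close to the stated answer of about 9.5
```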
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9026779532432556, "perplexity": 7584.084402085913}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039490226.78/warc/CC-MAIN-20210420183658-20210420213658-00535.warc.gz"}
http://www.rmi.ge/%7Emeskhi/
# 1. Personal data:

a) First name: ALEXANDER
b) Family name: MESKHI
c) Date of birth: May 2, 1972.
d) Place of birth: Tbilisi, Georgia.
e) Citizenship: Citizen of Georgia.
f) Marital status: Married.
g) Children: Two sons.
h) Codice Fiscale (in Italy): MSK LND 72E02 Z136S
i) Personal homepage: http://www.rmi.ge/~meskhi/

2. Positions:

a) 11.05.2000 - up to now: Senior Researcher at A. Razmadze Mathematical Institute, Georgia.
b) 28.09.2009 - up to now: Full Professor of the Technical University of Georgia (Department of Mathematics).
c) 01.2006 - up to now: Foreign Professor of Abdus Salam School of Mathematical Sciences, GC University, Lahore, Pakistan.
d) 01.05.2003 - 01.05.2005: Postdoc position at Scuola Normale Superiore of Pisa, Italy.
e) 02.10.1998 - 11.05.2000: Researcher at A. Razmadze Mathematical Institute, Georgia.

3. Address: A. Razmadze Mathematical Institute of I. Javakhishvili Tbilisi State University, 6 Tamarashvili Str., Tbilisi 0177, Georgia.

4. e-mail: [email protected], [email protected]

5. Tel.: (+995 32) 32 62 47; (+995) 93 57 38 95.

6. Languages: Georgian (mother tongue), Russian, English, Italian.

7. Education:

b) 1990-1995 - Student of the Technical University of Georgia.

8. Scientific degrees:

a) 2001: Doctor of Sciences in Phys.-Math.: "Boundedness and compactness criteria for potential-type operators". A. Razmadze Mathematical Institute (Scientific Consultant: Prof. V. Kokilashvili).
b) 1998: Candidate Degree in Phys.-Math. (Ph.D.): "Two-weight inequalities for singular integrals defined on homogeneous groups". A. Razmadze Mathematical Institute (Supervisor: Prof. V. Kokilashvili).
c) 1995: Master Degree in Mathematics: "Two-weight inequalities for singular integrals and potentials on Heisenberg groups". Technical University of Georgia (Supervisor: Prof. V. Kokilashvili).

9. Main Research Field: Harmonic Analysis and Applications in Partial Differential Equations.

10. Main Topics: Weight theory of integral and differential operators and applications in nonlinear partial differential equations; spectral theory of integral operators; analysis on measure metric spaces; function spaces; abstract harmonic analysis.

11. Research Grants:

a) 18.09.2011-24.09.2013: Shota Rustaveli National Science Foundation Grant, name of the project: "8th International Conference on Function Spaces, Differential Operators, and Nonlinear Analysis", Project No. 11_tr_129.
b) 01.01.2010-01.01.2013: Shota Rustaveli National Science Foundation Grant, "New aspects in function spaces, differential and integral operators and non-linear analysis, and applications in PDE", No. GNSF/ST09_23_3-100.
c) 01.01.2008-01.01.2011: Georgian National Science Foundation Grant, "Some Topics of Harmonic and Non-linear Analysis in Non-standard Setting with Applications in Differential Equations", No. GNSF/ST07/3-169.
d) 2008, 2009: Grant GNAMPA (Italy), University of Padova, Italy.
e) 01.01.2007-01.01.2009: INTAS grant, "Variable Exponent Analysis", No. 06-1000017-8792.
f) 2006-2008: Georgian National Science Foundation Grant, "Nonstandard function spaces and functions with applications to partial differential equations", No. GNSF/ST06/3-010.
g) 01.09.2006-01.09.2008: INTAS grant, "Function spaces and applications to partial differential equations", No. 05-1000008-8157.
h) 1998, 2000-2004, 2006-2009: Georgian President's Grant for Young Scientists.
i) 2001-2002: An INTAS Fellowship grant for Young Scientists (Fellowship Reference No. YSF 01/1-8).
j) 1999, 2001: Grant of the Royal Society (UK).
k) 1999-2003: Grant of the Georgian Academy of Sciences No. 1.7.
l) 1997-1998: J. Soros Grant for Post-graduate Students.
m) 2012-2014: Integral operators and boundary value problems in new function spaces; new aspects in Fourier and wavelet theories. Shota Rustaveli National Science Foundation Grant (contract No. D/13-23).
n) 2013-2015: Modern Problems of Harmonic Approximation and Integral Operator Theories in New Function Spaces; Applications to the Boundary Value Problems (contract number 31/47).

12. Awards:

d) 2000: Euler Premium for young scientists established by the German Mathematical Association.

13. Participation in Conferences, Congresses, Schools etc.:

1. "Multilinear Integral Operators in Weighted Function Spaces", II Advanced Courses of TICMI in Applied Mathematics, 28-29 September 2015, Tbilisi, Georgia, I. Vekua Institute of Applied Mathematics of I. Javakhishvili Tbilisi State University.
2. "Multilinear Integral Operators in Some Non-standard Weighted Function Spaces" (plenary speaker), Swedish-Georgian Conference in Analysis and Dynamical Systems, July 15-22, Tbilisi, Georgia.
3. "Fractional integral operators between Banach function lattices" (invited speaker), International Workshop on Operator Theory and Applications (IWOTA 2015), July 6-10, Tbilisi, Georgia.
4. "One-sided operators in grand variable exponent Lebesgue spaces", Joint International Meeting of the American, European and Portuguese Mathematical Societies (AMS, EMS and SPM, respectively), June 10-13, 2015, Porto, Portugal.
5. "Multisublinear maximal operators in Banach Function Lattices" (semi-plenary talk), V Annual International Conference of the Georgian Mathematical Union, September 8-12, 2014 (book of abstracts, p. 47).
6. "Two-weight criteria for Riesz potentials on cones of radially decreasing functions", Caucasian Mathematics Conference (CMC I), Tbilisi, September 5-6, 2014 (book of abstracts, p. 135).
7. "Weighted criteria for multilinear fractional integrals", International Congress of Mathematicians (ICM 2014), August 13-21, Seoul, Korea.
8. "Interior estimates and regularity of solutions for elliptic equations in non-divergence form with VMO coefficients" (plenary talk), IV International Conference of the Georgian Mathematical Union, dedicated to Academician Victor Kupradze (1903-1985) on the occasion of the 110th anniversary of his birthday and to the Georgian Mathematical Union on the occasion of 90 years from its founding, Tbilisi-Batumi, September 9-15, 2013.
9. "Boundedness and compactness criteria for positive kernel operators in variable exponent Lebesgue spaces", 9th International ISAAC Congress, 5-9 August 2013, Krakow, Poland.

5. M. A. Zaighum, title of the thesis: "Kernel operators in some new function spaces", Abdus Salam School of Mathematical Sciences, Lahore, 2014.

19. Editorial Activities:

b) Member of the Editorial Board of the international journal "Universitas Scientiarum" (Colombia);
c) Member of the Advisory Board of the international journal "Journal of the Prime Research in Mathematics" (Abdus Salam School of Mathematical Sciences, Lahore);
d) Member of the Editorial Board of the international journal "Journal of Mathematical Inequalities" (JMI) (Croatia);
e) Associate Editor of "Journal of Inequalities and Applications" (Springer);
f) Member of the Editorial Board of "Tbilisi Mathematical Journal" (Georgia).

20.
20. Membership:
a) Member of the American Mathematical Society;
b) Member of the Georgian Mathematical Union.

21. Other Information:
a) Included in the list of "Top 100 Professionals" by the Cambridge Bibliographic Center (2015);
b) Included in the list of "2000 Intellectuals of the 21st Century" by the Cambridge Bibliographic Center (2015);
c) Included in "Who is Who" by the Cambridge Bibliographic Center (2015).

Publications:
4. V. Kokilashvili, A. Meskhi, S. Samko, and H. Rafeiro, Integral Operators in Non-standard Function Spaces. Vol. I. Variable Exponent Lebesgue and Amalgam Spaces. Birkhäuser, 2015, pp. 1-586.
5. V. Kokilashvili, A. Meskhi, S. Samko, and H. Rafeiro, Integral Operators in Non-standard Function Spaces. Vol. II. Variable Exponent Hölder, Morrey-Campanato and Grand Spaces. Birkhäuser, 2016, pp. 587-1009.
13. Weighted inequalities for Riemann-Liouville transform, Proc. A. Razmadze Math. Inst. 117 (1998), 151-153.
35. * On the singular numbers for some integral operators, Revista Mat. Compl. 14 (2001), No. 2, 379-393.
97. * Weighted kernel operators in variable exponent amalgam spaces, J. Ineq. Appl. 2013, 2013:173, doi:10.1186/1029-242X-2013-173, pp. 1-28 (with V. Kokilashvili and M. A. Zaighum).
98. * Boundedness of commutators of singular and potential operators in generalized grand Morrey spaces and some applications, Studia Math. 217 (2013), No. 2, 159-178 (with V. Kokilashvili and H. Rafeiro).
99. * Two-weight norm estimates for maximal and Calderón-Zygmund operators in variable exponent Lebesgue spaces, Georgian Math. J. 20 (2013), No. 3, 547-572 (with V. Kokilashvili and M. Sarwar).
101. * On the boundedness of maximal and potential operators in amalgam spaces, J. Math. Inequal. 8 (2014), No. 1, 123-152 (with M. A. Zaighum).
102. * On the boundedness of the multilinear fractional integral operators, Nonlinear Analysis, Theory, Methods and Applications 94 (2014), 142-147 (with V. Kokilashvili and M. Mastylo).
103. * Estimates for nondivergence elliptic equations with VMO coefficients in generalized grand Morrey spaces, Complex Variables and Elliptic Equations 59 (2013), No. 8, 1169-1184 (with V. Kokilashvili and H. Rafeiro).
104. * Grand Bochner-Lebesgue space and its associate space, Journal of Functional Analysis 266 (2014), No. 4, 2125-2136 (with V. Kokilashvili and H. Rafeiro).
105. * Two-weight norm estimates for sublinear integral operators in variable exponent Lebesgue spaces, Studia Sci. Math. Hungarica 51 (2014), No. 3, 384-406 (with V. Kokilashvili).
106. One-weight weak-type estimates for fractional and singular integrals in grand Lebesgue spaces, Banach Center Publications 102 (2014), 131-142 (with V. Kokilashvili).
107. Some fundamental inequalities for trigonometric polynomials in imbeddings of grand Besov spaces, Proc. A. Razmadze Math. Inst. 165 (2014), 105-116 (with V. Kokilashvili).
108. * Maximal and Calderón-Zygmund operators in grand variable exponent Lebesgue spaces, Georgian Math. J. 21 (2014), No. 4, 447-461 (with V. Kokilashvili).
109. * The multisublinear maximal type operators in Banach function lattices, J. Math. Anal. Appl. 421 (2015), No. 1, 656-668 (with V. Kokilashvili and M. Mastylo).
110. * Fractional integral operators between Banach function lattices, Nonlinear Analysis, Theory, Methods and Applications 117 (2015), 148-158 (with V. Kokilashvili and M. Mastylo).
111. * Two-weight norm estimates for multilinear fractional integrals in classical Lebesgue spaces, Fractional Calculus and Applied Analysis 18 (2015), No. 5, 1146-1163 (with V. Kokilashvili and M. Mastylo).
112. * On weighted Bernstein type inequality in grand variable exponent Lebesgue spaces, Mathematical Inequalities and Applications 18 (2015), No. 3, 991-1002 (with V. Kokilashvili).
113. * Weighted kernel operators in $L^{p(x)}(\mathbb{R}_+)$, J. Math. Ineq. 10 (2016), No. 3, 623-639, DOI: 10.7153/jmi-10-50 (with M. A. Zaighum).
114. * Sharp weighted bounds for one-sided operators, Georgian Mathematical Journal (accepted) (with V. Kokilashvili and M. A. Zaighum).
115. * Weighted extrapolation in Iwaniec-Sbordone spaces. Applications to integral operators and theory of approximation, Proc. Steklov Inst. Math. 293 (2016), 161-185. Original Russian text published in Trudy Mat. Inst. Steklova 293 (2016), 167-192 (with V. Kokilashvili).
116. * Interpolation on variable Morrey spaces defined on quasi-metric measure spaces, J. Functional Analysis 270 (2016), No. 10, 3946-3961 (with H. Rafeiro and M. A. Zaighum).
117. Sharp weighted bounds for multiple integral operators, Trans. A. Razmadze Math. Inst. 170 (2016), 75-90 (with V. Kokilashvili and M. A. Zaighum).
118. * Multilinear integral operators in weighted grand Lebesgue spaces, Frac. Calc. Appl. Anal. 19 (2016), 691-724 (with V. Kokilashvili and M. Mastylo).
119. Sharp weighted bounds for the Hilbert transform of odd and even functions, Trans. A. Razmadze Math. Inst. (2016), DOI: http://dx.doi.org/10.1016/j.trmi.2016.07.005 (with G. Gilles).
120. * The Riemann boundary value problem in the frame of variable exponent grand Lebesgue spaces, Georgian Math. J., DOI: 10.1515/gmj-2016-0041 (with V. Kokilashvili and V. Paatashvili).
121. * Sublinear operators in generalized weighted Morrey spaces (Russian), Doklady Akad. Nauk 470 (2016), No. 5, 502-504. Translation in: Doklady Mathematics 94 (2016), No. 2, 558-560.
122. The Riemann-Hilbert problem in the class of Cauchy type integrals with densities of grand Lebesgue spaces, Trans. A. Razmadze Math. Inst. 170 (2016), No. 2, 208-211 (with V. Kokilashvili and V. Paatashvili).
123. Generalized singular integral on Carleson curves in weighted grand Lebesgue spaces, Trans. A. Razmadze Math. Inst. 170 (2016), No. 2, 212-214 (with V. Kokilashvili and V. Paatashvili).
124. * Complex interpolation on variable exponent Campanato spaces of order k, Complex Variables and Elliptic Equations (accepted) (with H. Rafeiro and M. A. Zaighum).
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7002745866775513, "perplexity": 9402.62103394327}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541525.50/warc/CC-MAIN-20161202170901-00180-ip-10-31-129-80.ec2.internal.warc.gz"}
http://rohanvarma.me/Neural-Net-Tensorflow/
# Creating Neural Networks in Tensorflow

This is a write-up and code tutorial that I wrote for an AI workshop given at UCLA, at which I gave a talk on neural networks and implementing them in Tensorflow. It's part of a series on machine learning with Tensorflow, and the tutorials for the rest of them are available here.

### Recap: The Learning Problem

We have a large dataset of $(x, y)$ pairs, where $x$ denotes a vector of features and $y$ denotes the label for that feature vector. We want to learn a function $h(x)$ that maps features to labels, with good generalization accuracy. We do this by minimizing a loss function computed on our dataset: $\sum_{i=1}^{N} L(y_i, h(x_i))$. There are many loss functions we can choose from. We have gone over the cross-entropy loss and variants of the squared error loss functions in previous workshops, and we will once again consider those today.

### Review: A Single "Neuron", aka the Perceptron

A single perceptron first calculates a weighted sum of our inputs. This means that we multiply each of our features $(x_1, x_2, ... x_n) \in x$ with an associated weight $(w_1, w_2, ... w_n)$. We then take the sign of this linear combination, which tells us whether to classify this instance as a positive or negative example.

We then moved on to logistic regression, where we changed our sign function to instead be a sigmoid ($\sigma$) function. As a reminder, the sigmoid function is $\sigma(z) = \frac{1}{1 + e^{-z}}$. Therefore, the function we compute for logistic regression is $h(x) = \sigma(w^Tx + b)$. The sigmoid function is commonly referred to as an "activation" function. When we say that a "neuron computes an activation function", it means that a standard linear combination is calculated ($w^Tx + b$) and then we apply a nonlinear function to it, such as the sigmoid function. There are a few other common activation functions, such as tanh and the rectifier, which we will use below.

### Review: From binary to multi-class classification

The most important change in moving from a binary (negative/positive) classification model to one that can classify training instances into many different classes (say, 10, for MNIST) is that our vector of weights $w$ changes into a matrix $W$. Each row of weights we learn represents the parameters for a certain class.

We also want to take our output and normalize the results so that they all sum to one, so that we can interpret them as probabilities. This is commonly done using the softmax function, which takes in a vector and returns another vector whose elements sum to 1, with each element proportional in scale to what it was in the original vector. In binary classification we used the sigmoid function to compute probabilities; now that we have a vector, we use the softmax function. Here is our current model of learning, then: $h(x) = softmax(Wx + b)$.

### Building up the neural network

Now that we've figured out how to linearly model multi-class classification, we can create a basic neural network. Consider what happens when we combine the idea of artificial neurons with our softmax classifier. Instead of computing a linear function $Wx + b$ and immediately passing the output to a softmax function, we have an intermediate step: pass the output of our linear combination to a vector of artificial neurons, which each compute a nonlinear function. The output of this "layer" of neurons can be multiplied with a matrix of weights again, and we can apply our softmax function to this result to produce our predictions.
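Before writing the Tensorflow version, here is a minimal numpy sketch of the forward pass just described; the shapes, seed, and variable names are illustrative only and are not part of the workshop code:

```python
import numpy as np

def softmax(z):
    z = z - z.max()              # shift by the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(0)
x = rng.normal(size=4)                         # a feature vector with 4 features
W1, b1 = rng.normal(size=(3, 4)), np.zeros(3)  # hidden layer: 3 neurons
W2, b2 = rng.normal(size=(2, 3)), np.zeros(2)  # output layer: 2 classes

hidden = np.maximum(0, W1 @ x + b1)  # nonlinearity (relu here)
probs = softmax(W2 @ hidden + b2)    # h(x) = softmax(W2(relu(W1 x + b1)) + b2)
print(probs, probs.sum())            # class probabilities summing to 1
```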
Original function: $h(x) = softmax(Wx + b)$

Neural network function: $h(x) = softmax(W_2(nonlin(W_1x + b_1)) + b_2)$

The key differences are that we have more biases and weights, as well as a larger composition of functions. This function is harder to optimize, and it introduces a few interesting ideas about learning the weights with an algorithm known as backpropagation.

This "intermediate step" is actually known as a hidden layer, and we have complete control over it, meaning that among other things, we can vary the number of parameters or connections between weights and neurons to obtain an optimal network. It's also important to notice that we can stack an arbitrary number of these hidden layers between the input and output of our network, and we can tune these layers individually. This lets us make our network as deep as we want it.

We're now ready to start implementing a basic neural network in Tensorflow. First, let's start off with the standard import statements, and visualize a few examples from our training dataset.

```python
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data

# load the MNIST dataset (downloads it on first use)
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

# a function that shows examples from the dataset. If num is specified
# (between 0 and 9), then only pictures with that label will be used.
def show_pics(mnist, num=None):
    # figure out which numbers we should show
    to_show = list(range(10)) if num is None else [num] * 10
    for i in range(100):
        batch = mnist.train.next_batch(1)  # gets some examples
        pic, label = batch[0], batch[1]
        if np.argmax(label) in to_show:
            # use matplotlib to plot it
            pic = pic.reshape((28, 28))
            plt.title("Label: {}".format(np.argmax(label)))
            plt.imshow(pic, cmap='binary')
            plt.show()
            to_show.remove(np.argmax(label))

# show_pics(mnist)
show_pics(mnist, 2)
```

```
Extracting MNIST_data/train-images-idx3-ubyte.gz
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
```

As usual, we would like to define several variables to represent our weight matrices and our biases. We will also need to create placeholders to hold our actual data. Anytime we want to create variables or placeholders, we must have a sense of the shape of our data so that Tensorflow has no issues in carrying out the numerical computations. In addition, neural networks rely on various hyperparameters, some of which are defined below. Two important ones are the **learning rate** and the number of neurons in our hidden layer. Depending on these settings, the accuracy of the network may change greatly.

```python
# some functions for quick variable creation
def weight_variable(shape):
    return tf.Variable(tf.truncated_normal(shape, stddev=0.1))

def bias_variable(shape):
    return tf.Variable(tf.constant(0.1, shape=shape))

# hyperparameters we will use
learning_rate = 0.1
hidden_layer_neurons = 50
num_iterations = 5000

# placeholder variables
x = tf.placeholder(tf.float32, shape=[None, 784])  # None = the size of that dimension doesn't matter. Why is that okay here?
y_ = tf.placeholder(tf.float32, shape=[None, 10])
```
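A quick aside on the `None` dimension in the placeholders above (a small illustrative sketch, not part of the workshop code): it leaves the batch size unspecified, so the same graph can later be fed a 100-example training batch or the full 10,000-example test set.

```python
import numpy as np

# Two differently sized batches; both are valid values for a placeholder
# of shape [None, 784]. (Array names here are hypothetical.)
train_batch = np.zeros((100, 784), dtype=np.float32)
test_batch = np.zeros((10000, 784), dtype=np.float32)
# e.g., later: sess.run(y, feed_dict={x: train_batch})
#              sess.run(y, feed_dict={x: test_batch})
```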
We will now actually create all of the variables we need, and define our neural network as a series of function computations. In our first layer, we take our inputs, which have dimension $n * 784$, and multiply them with weights that have dimension $784 * k$, where $k$ is the number of neurons in the hidden layer. We then add the biases to this result, which also have a dimension of $k$. Finally, we apply a nonlinearity to our result. There are, as discussed, several choices, three of which are tanh, sigmoid, and the rectifier. We have chosen to use the rectifier (also known as relu, standing for Rectified Linear Unit), since it has been shown in both research and practice that it tends to outperform and learn faster than other activation functions. Therefore, the "activations" of our hidden layer are given by $h_1 = relu(W_1x + b_1)$.

We follow a similar procedure for our output layer. Our activations have shape $n * k$, where $n$ is the number of training examples we input into our network and $k$ is the number of neurons in our hidden layer. We want our final outputs to have dimension $n * 10$ (in the case of MNIST), since we have 10 classes. Therefore, it makes sense for our second matrix of weights to have dimension $k * 10$ and the bias to have dimension $10$. After taking the linear combination $W_2h_1 + b_2$, we would then apply the softmax function. However, applying the softmax function and then writing out the cross-entropy loss ourselves could result in numerical instability, so we will instead use a library call that computes both the softmax outputs and the cross-entropy loss.

```python
# create our weights and biases for our first hidden layer
W_1, b_1 = weight_variable([784, hidden_layer_neurons]), bias_variable([hidden_layer_neurons])
# compute activations of the hidden layer
h_1 = tf.nn.relu(tf.matmul(x, W_1) + b_1)

# a second hidden layer with 30 neurons
W_2_hidden = weight_variable([hidden_layer_neurons, 30])
b_2_hidden = bias_variable([30])
h_2 = tf.nn.relu(tf.matmul(h_1, W_2_hidden) + b_2_hidden)

# create our weights and biases for our output layer
W_2, b_2 = weight_variable([30, 10]), bias_variable([10])
# compute the logits of the output layer
y = tf.matmul(h_2, W_2) + b_2
```

The cross-entropy loss function is a commonly used loss function. For a single prediction/label pair, it is given by $C(h(x), y) = -\sum_i y_i log(h(x)_i)$.* Here, $y$ is a specific one-hot encoded label vector, meaning that it is a column vector that has a 1 at the index corresponding to its label, and is zero everywhere else. $h(x)$ is the output of our prediction function, whose elements sum to 1.

As an example (with illustrative numbers), we may have a label $y = (0, 1, 0)$ and a softmax output $h(x) = (0.2, 0.543, 0.257)$, so that the contribution to the entire training data's loss by this pair is $-log(0.543) \approx 0.61$. To contrast, we can swap the first two probabilities in our softmax vector, giving $(0.543, 0.2, 0.257)$; we then end up with a higher loss of $-log(0.2) \approx 1.61$. So our cross-entropy loss makes intuitive sense: it is lower when our softmax vector has a high probability at the index of the true label, and it is higher when our probabilities indicate a wrong or uncertain choice. Sanity check: why do we need the negative sign outside the sum?

```python
# define our loss function as the cross entropy loss
cross_entropy_loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
# create an optimizer to minimize our cross entropy loss
# (NOTE: assumed reconstruction - the code below calls optimizer.run(...),
# and plain gradient descent with our learning_rate is a standard choice)
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy_loss)

# functions that allow us to gauge the accuracy of our model
correct_predictions = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))  # a vector where each element is True or False, denoting whether our prediction was right
accuracy = tf.reduce_mean(tf.cast(correct_predictions, tf.float32))  # maps the booleans to 1.0 or 0.0 and computes the accuracy
```
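To see concretely what the two accuracy ops compute, here is a tiny numpy illustration with made-up numbers (not part of the workshop code):

```python
import numpy as np

preds = np.array([[0.1, 0.9], [0.8, 0.2], [0.3, 0.7]])  # softmax-like outputs
labels = np.array([[0, 1], [0, 1], [0, 1]])             # one-hot labels
correct = np.argmax(preds, 1) == np.argmax(labels, 1)   # [True, False, True]
print(correct.astype(np.float32).mean())                # 0.6666667 = accuracy
```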
```python
# we will need to run this in our session to initialize our weights and biases
init = tf.global_variables_initializer()
```

With all of our variables created and our computation graph defined, we can now launch the graph in a session and begin training. It is important to remember that since we declared the $x$ and $y$ variables as placeholders, we will need to feed in data to run the optimizer that minimizes the cross-entropy loss. The data we will feed in (by passing a dictionary feed_dict into our function) will come from the MNIST dataset. To randomly sample 100 training examples, we can use a wrapper provided by Tensorflow: mnist.train.next_batch(100). When we run the optimizer with the call optimizer.run(...), Tensorflow calculates a forward pass for us (essentially propagating our data through the graph we have described), then uses the loss function we created to evaluate the loss, and then computes partial derivatives with respect to each set of weights and updates the weights according to the partial derivatives. This is called the backpropagation algorithm, and it involves significant application of the chain rule. CS 231N provides an excellent explanation of backpropagation.

```python
# launch a session to run our graph defined above
with tf.Session() as sess:
    sess.run(init)  # initializes our variables
    for i in range(num_iterations):
        # get a sample of the dataset and run the optimizer, which calculates a forward
        # pass and then runs the backpropagation algorithm to improve the weights
        batch = mnist.train.next_batch(100)
        optimizer.run(feed_dict={x: batch[0], y_: batch[1]})
        # every 100 iterations, print out the accuracy
        if i % 100 == 0:
            # accuracy and loss both take (x, y) pairs as input, run a forward pass through
            # the network to obtain a prediction, and then compare the prediction with the actual y
            acc = accuracy.eval(feed_dict={x: batch[0], y_: batch[1]})
            loss = cross_entropy_loss.eval(feed_dict={x: batch[0], y_: batch[1]})
            print("Epoch: {}, accuracy: {}, loss: {}".format(i, acc, loss))
    # evaluate our testing accuracy
    acc = accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels})
    print("testing accuracy: {}".format(acc))
```

```
Epoch: 0, accuracy: 0.07999999821186066, loss: 2.2931833267211914
Epoch: 100, accuracy: 0.8399999737739563, loss: 0.6990350484848022
Epoch: 200, accuracy: 0.8700000047683716, loss: 0.35569435358047485
Epoch: 300, accuracy: 0.9300000071525574, loss: 0.26591774821281433
Epoch: 400, accuracy: 0.8999999761581421, loss: 0.3307000696659088
Epoch: 500, accuracy: 0.9399999976158142, loss: 0.23977749049663544
Epoch: 600, accuracy: 0.9800000190734863, loss: 0.09397666901350021
Epoch: 700, accuracy: 0.9200000166893005, loss: 0.2931550145149231
Epoch: 800, accuracy: 0.9399999976158142, loss: 0.20180968940258026
Epoch: 900, accuracy: 0.949999988079071, loss: 0.18461622297763824
Epoch: 1000, accuracy: 0.9700000286102295, loss: 0.18968147039413452
Epoch: 1100, accuracy: 0.9599999785423279, loss: 0.14828498661518097
Epoch: 1200, accuracy: 0.949999988079071, loss: 0.1613173633813858
Epoch: 1300, accuracy: 0.9800000190734863, loss: 0.10008890926837921
Epoch: 1400, accuracy: 0.9900000095367432, loss: 0.07440848648548126
Epoch: 1500, accuracy: 0.9599999785423279, loss: 0.1167958676815033
Epoch: 1600, accuracy: 0.9100000262260437, loss: 0.1591644138097763
Epoch: 1700, accuracy: 0.9599999785423279, loss: 0.10022231936454773
Epoch: 1800, accuracy: 0.9700000286102295, loss: 0.1086776852607727
Epoch: 1900, accuracy: 0.9700000286102295, loss: 0.15659521520137787
Epoch: 2000, accuracy: 0.9599999785423279, loss: 0.09391114860773087
```
```
Epoch: 2100, accuracy: 0.9800000190734863, loss: 0.09786181151866913
Epoch: 2200, accuracy: 0.9700000286102295, loss: 0.11428779363632202
Epoch: 2300, accuracy: 0.9900000095367432, loss: 0.07231700420379639
Epoch: 2400, accuracy: 0.9700000286102295, loss: 0.09908157587051392
Epoch: 2500, accuracy: 0.9599999785423279, loss: 0.15657338500022888
Epoch: 2600, accuracy: 0.9900000095367432, loss: 0.07787769287824631
Epoch: 2700, accuracy: 0.9800000190734863, loss: 0.07373256981372833
Epoch: 2800, accuracy: 0.9700000286102295, loss: 0.062044695019721985
Epoch: 2900, accuracy: 0.9700000286102295, loss: 0.12512363493442535
Epoch: 3000, accuracy: 0.9900000095367432, loss: 0.11000598967075348
Epoch: 3100, accuracy: 0.9700000286102295, loss: 0.20609986782073975
Epoch: 3200, accuracy: 0.9800000190734863, loss: 0.09811186045408249
Epoch: 3300, accuracy: 0.9700000286102295, loss: 0.09816547483205795
Epoch: 3400, accuracy: 0.9700000286102295, loss: 0.10826745629310608
Epoch: 3500, accuracy: 0.9900000095367432, loss: 0.0645124614238739
Epoch: 3600, accuracy: 0.9700000286102295, loss: 0.1555529236793518
Epoch: 3700, accuracy: 0.9700000286102295, loss: 0.06963416188955307
Epoch: 3800, accuracy: 0.9900000095367432, loss: 0.08054723590612411
Epoch: 3900, accuracy: 0.9800000190734863, loss: 0.06120322644710541
Epoch: 4000, accuracy: 0.9900000095367432, loss: 0.06058483570814133
Epoch: 4100, accuracy: 0.9700000286102295, loss: 0.11490124464035034
Epoch: 4200, accuracy: 0.9700000286102295, loss: 0.10046141594648361
Epoch: 4300, accuracy: 0.9800000190734863, loss: 0.04671316221356392
Epoch: 4400, accuracy: 0.9900000095367432, loss: 0.052477456629276276
Epoch: 4500, accuracy: 0.9800000190734863, loss: 0.08245706558227539
Epoch: 4600, accuracy: 0.9900000095367432, loss: 0.041497569531202316
Epoch: 4700, accuracy: 0.9900000095367432, loss: 0.050769224762916565
Epoch: 4800, accuracy: 0.9900000095367432, loss: 0.039090484380722046
Epoch: 4900, accuracy: 0.9900000095367432, loss: 0.0564178042113781
testing accuracy: 0.9653000235557556
```

### Questions to Ponder

• Why is the test accuracy lower than the (final) training accuracy?
• Why is there only a nonlinearity in our hidden layer, and not in the output layer?
• How can we tune our hyperparameters? In practice, is it okay to continually search for the best performance on the test dataset?
• Why do we use only 100 examples in each iteration, as opposed to the entire dataset of 50,000 examples?

### Exercises

1. Using different activation functions. Consult the Tensorflow documentation on tanh and sigmoid, and use one of those as the activation function instead of relu. Gauge the resulting changes in accuracy.
2. Varying the number of neurons. As mentioned, we have complete control over the number of neurons in our hidden layer. How does the testing accuracy change with a small number of neurons versus a large number of neurons? What about the generalization accuracy (with respect to the testing accuracy)?
3. Using different loss functions. We have discussed the cross-entropy loss. Another common loss function used in neural networks is the MSE loss. Consult the Tensorflow documentation and implement the MSELoss() function (one possible sketch appears after the technical note below).
4. Addition of another hidden layer. We can create a deeper neural network with additional hidden layers. Similar to how we created our original hidden layer, you will have to figure out the dimensions for the weights (and biases) by looking at the dimension of the previous layer, and deciding on the number of neurons you would like to use.
Once you have decided this, you can simply insert another layer into the network with only a few lines of code:

1. Use weight_variable() and bias_variable() to create new variables for the additional layer (remember to specify the shape correctly).
2. Similar to computing the activations for the first layer, h_1 = tf.nn.relu(...), compute the activations for your additional hidden layer.
3. Remember to change your output weight dimensions to reflect the number of neurons in the previous layer.

### More

*Technical note: the way this loss function is presented is such that activations corresponding to a label of zero are not penalized at all. The full form of the cross-entropy loss is given by $C(y, h(x)) = -\sum_i \left[ y_i log(h(x)_i) + (1 - y_i) log(1 - h(x)_i) \right]$. However, the previously presented function works just as well in environments with larger amounts of data samples and training for many epochs (passes through the dataset), which is typically the case for neural networks.
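As a possible starting point for exercise 3 above, here is one hedged sketch of an MSE-style loss in the same TF1 API used throughout this tutorial. It assumes the graph built earlier (y are the logits, y_ the one-hot labels); applying softmax before taking squared differences is one design choice among several:

```python
# Sketch: mean squared error between softmax probabilities and one-hot labels.
probs = tf.nn.softmax(y)                          # turn logits into probabilities
mse_loss = tf.reduce_mean(tf.square(probs - y_))  # average squared difference
# train on it exactly like the cross-entropy loss:
optimizer_mse = tf.train.GradientDescentOptimizer(learning_rate).minimize(mse_loss)
```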
{"extraction_info": {"found_math": true, "script_math_tex": 32, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5819457173347473, "perplexity": 1654.478514334761}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824543.20/warc/CC-MAIN-20171021024136-20171021044136-00643.warc.gz"}
http://mathhelpforum.com/calculus/160686-limit-evaluation-3rd-root.html
# Thread: Limit Evaluation 3rd root

1. ## Limit Evaluation 3rd root

Evaluate: $\displaystyle \lim_{x \to \infty} (x+1)^{2/3}-(x-1)^{2/3}$

2. $\displaystyle a^3- b^3= (a- b)(a^2+ ab+ b^2)$

With $\displaystyle a= (x+1)^{2/3}$ and $\displaystyle b= (x- 1)^{2/3}$, that says that $\displaystyle (x+1)^2- (x-1)^2= ((x+1)^{2/3}- (x-1)^{2/3})((x+1)^{4/3}+ ((x+1)(x-1))^{2/3}+ (x-1)^{4/3})$, so

$\displaystyle (x+1)^{2/3}- (x-1)^{2/3}= \frac{(x+1)^2- (x-1)^2}{(x+1)^{4/3}+ ((x+1)(x-1))^{2/3}+ (x-1)^{4/3}}$

3. Another way is to squeeze, using the following inequality, which is valid for all $\displaystyle x\ge\frac{1+\sqrt{5}}{2}$:

$\displaystyle (x+1)^\frac{2}{3} \le (x-1)^\frac{2}{3}+(x-1)^{-\frac{1}{3}}$

(the reverse inequality is true for all $\displaystyle x\le\frac{1-\sqrt{5}}{2}$, with which you can calculate the limit at $\displaystyle -\infty$)
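A quick completing step from the identity in post 2: the numerator is $\displaystyle (x+1)^2-(x-1)^2 = 4x$, which grows like $x$, while the denominator grows like $3x^{4/3}$, so

$\displaystyle \lim_{x \to \infty} (x+1)^{2/3}-(x-1)^{2/3} = \lim_{x \to \infty}\frac{4x}{(x+1)^{4/3}+ ((x+1)(x-1))^{2/3}+ (x-1)^{4/3}} = 0.$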
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9610276818275452, "perplexity": 1522.320221580398}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125936969.10/warc/CC-MAIN-20180419130550-20180419150550-00420.warc.gz"}
https://email.esm.psu.edu/pipermail/macosx-tex/2012-January/048150.html
# [OS X TeX] Does {ntheorem} have a bug related to \newshadedtheorem?

Don Green Dragon fergdc at Shaw.ca
Tue Jan 17 01:55:44 EST 2012

Hi Michael,

>> <<snip>> Well, your hack works here too.
>
> Sweet. :-)

>> However, I never would have guessed that your line
>>
>>> \def\theoremframecommand{\colorbox{yellow}}
>>
>> would allow for the yellow backgrounds!!!

Would you make a few comments about \theoremframecommand or give a reference? From the terminal, "texdoc theoremframecommand" gave no results, and a CTAN search fared no better!

> There's a throwaway line in the ntheorem documentation (page 8) about it.
> Both Shaded and Framed theorems use it to draw boxes. The default is the
> pstricks way, which doesn't play nice with pdfs. I just changed it so that
> pstricks wasn't needed by using the standard (I think) command \colorbox.

Good guess. I read the line \def\theoremframecommand{⟨any box command⟩} but did not have a clue how to use it and subsequently forgot that piece of information.

>> <<snip>>
>> Upon doing so, everything was fine, including references to an instance of
>> \newframedtheorem via both \ref{...} and \thref{...}. So despite the
>> comments in May & Schedler, it "appears" (???) that no reference to
>> {pstricks} is needed.
>
> That's right. pstricks is only needed if \theoremframecommand isn't
> defined, as you discover below.

NO! You discovered it! I recognized the significance only after reading your comment. :-)

>> However, your line \def\theoremframecommand{\colorbox{yellow}} is crucial
>> if one wants to use the \newshadedtheorem guy. I commented it out and
>> received the error message:
>> ============
>> ./Untitled.tex:27: Undefined control sequence.
>> \thmshaded at framecommand ->\psframebox
>> [fillstyle=solid, fillcolor=gray, line...
>> [Named] % YellowBackTheorem
>> ============
>>
>> So why do you refer to your solution as a 'hack'?
>
> Because it's not as elegant as I would like. It's not really a hack, but I
> like the word. :)

Good.

> <<snip>>

Thanks again for the good counsel.

Don Green Dragon
fergdc at Shaw.ca
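For readers who land on this thread from a search, here is a minimal untested sketch assembled from the commands quoted above (the theorem name and body are mine; the \theoremframecommand definition is Michael's trick):

```latex
\documentclass{article}
\usepackage{xcolor}
\usepackage[framed]{ntheorem}
% Define the box command *before* declaring the shaded theorem, so that
% ntheorem does not fall back to its pstricks default (\psframebox).
\def\theoremframecommand{\colorbox{yellow}}
\newshadedtheorem{ythm}{Theorem}
\begin{document}
\begin{ythm}
A shaded theorem with a yellow background, with no pstricks required.
\end{ythm}
\end{document}
```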
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.598804235458374, "perplexity": 5471.875242263793}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948515309.5/warc/CC-MAIN-20171212060515-20171212080515-00548.warc.gz"}
https://www.arxiv-vanity.com/papers/1703.00570/
# Background free search for neutrinoless double beta decay with Gerda Phase II

August 4, 2021

###### Abstract

The Standard Model of particle physics cannot explain the dominance of matter over anti-matter in our Universe. In many model extensions this is a very natural consequence of neutrinos being their own anti-particles (Majorana particles), which implies that a lepton number violating radioactive decay named neutrinoless double beta (0νββ) decay should exist. The detection of this extremely rare hypothetical process requires utmost suppression of any kind of backgrounds. The Gerda collaboration searches for 0νββ decay of ⁷⁶Ge by operating bare detectors made from germanium with enriched ⁷⁶Ge fraction in liquid argon. Here, we report on first data of Gerda Phase II. A background level of about 10⁻³ counts/(keV·kg·yr) has been achieved, which is the world-best if weighted by the narrow energy-signal region of germanium detectors. Combining Phase I and II data we find no signal and deduce a new lower limit for the half-life of 5.3×10²⁵ yr at 90 % C.L. Our sensitivity of 4.0×10²⁵ yr is competitive with the one of experiments with significantly larger isotope mass. Gerda is the first experiment that will be background-free up to its design exposure. This progress relies on a novel active veto system, the superior germanium detector energy resolution and the improved background recognition of our new detectors. The unique discovery potential of an essentially background-free search for 0νββ decay motivates a larger germanium experiment with higher sensitivity.

Keywords: 0νββ decay, ⁷⁶Ge, enriched Ge detectors, active veto

###### pacs: 23.40.-s, 21.10.Tg, 27.50.+e, 29.40.Wk

Gerda collaboration (members also at: Moscow Inst. of Physics and Technology, Moscow, Russia; Int. Univ. for Nature, Society and Man, Dubna, Russia)

## I Introduction

One of the most puzzling aspects of cosmology is the unknown reason for the dominance of matter over anti-matter in our Universe. Within the Standard Model of particle physics there is no explanation for this observation, and hence a new mechanism has to be responsible. A favored model called leptogenesis Davidson et al. (2008) links the matter dominance to the nature of neutrinos and to the violation of lepton number, i.e. the total number of electrons, muons, taus and neutrinos minus the number of their anti-particles. In most extensions of the Standard Model Mohapatra and A.Y.Smirnov (2006); Mohapatra et al. (2007); Päs and Rodejohann (2015), neutrinos are assumed to be their own anti-particles (Majorana particles). This might lead to lepton number violating processes at the TeV energy scale observable at the LHC Päs and Rodejohann (2015) and would result in neutrinoless double beta (0νββ) decay, where a nucleus of mass number A and charge Z decays as (A, Z) → (A, Z+2) + 2e⁻.

Lepton number violation has not been unambiguously observed so far. There are several experimental 0νββ decay programs ongoing using for example ⁷⁶Ge Agostini et al. (2013a); Cuesta et al. (2015), ¹³⁰Te Alfonso et al. (2015); Andringa et al. (2016) or ¹³⁶Xe Gando et al. (2016); Albert et al. (2014); Martin-Albo et al. (2016). They all measure the sum of the electron energies released in the decay, which corresponds to the mass difference Qββ of the two nuclei. The 0νββ decay half-life is at least 15 orders of magnitude longer than the age of the universe. Its observation therefore requires the best suppression of backgrounds. In the GERmanium Detector Array (Gerda) experiment, bare germanium detectors are operated in liquid argon (LAr).
The detectors are made from germanium with the ⁷⁶Ge isotope fraction enriched from 7.8 % to about 87 %. Since source and detector of 0νββ decay are identical in this calorimetric approach, the detection efficiency is high. This Article presents the first result from Gerda Phase II. In the first phase of data taking (Phase I), a limit of T₁/₂ > 2.1×10²⁵ yr (90 % C.L.) was found Agostini et al. (2013a) for an exposure of 21.6 kg·yr and a background of 0.01 counts/(keV·kg·yr) at Qββ = 2039 keV Mount et al. (2010). At that time, the result was based on data from 10 detectors (17.6 kg total mass). In December 2015, Phase II started with 37 detectors (35.6 kg) from enriched material. The mass is hence doubled relative to Phase I. The ambitious goal is an improvement of the half-life sensitivity to the 10²⁶ yr regime for about 100 kg·yr exposure by reducing the background level by an order of magnitude. The latter is achieved by vetoing background events through the detection of their energy deposition in LAr and the characteristic time profile of their signals in the germanium detectors. The expected background is less than one count in the energy region of interest up to the design exposure, which means that Gerda will be the first "background free" experiment in the field. We will demonstrate in this Article that Gerda has reached the envisioned background level, which is the world-best level if weighted by our superior energy resolution. Gerda is therefore best suited not only to quote limits but to identify a signal with high confidence.

## II The experiment

The Gerda experiment Ackermann et al. (2013) is located at the underground Laboratori Nazionali del Gran Sasso (LNGS) of INFN, Italy. A rock overburden of about 3500 m water equivalent removes the hadronic components of cosmic ray showers and reduces the muon flux at the experiment by six orders of magnitude to 1.2 /(m²·h). The basic idea is to operate bare germanium detectors in a radiopure cryogenic liquid like LAr for cooling to their operating temperature of 90 K and for shielding against external radiation originating from the walls (see Extended Data Fig. 1 for a sketch of the setup) Heusser (1995). In Gerda, a 64 m³ LAr cryostat is inside a 590 m³ water tank. The clean water completes the passive shield. Above the water tank is a clean room with a glove box and lock for the assembly of germanium detectors into strings and the integration of the liquid argon veto system.

Gerda deploys 7 coaxial detectors from the former Heidelberg-Moscow Klapdor-Kleingrothaus et al. (2004) and IGEX Aalseth et al. (2002) experiments, and 30 broad energy (BEGe) detectors Agostini et al. (2015a). All diodes have p-type doping (see Extended Data Fig. 2). Electron-hole pairs created in the 1–2 mm thick n⁺ electrode mostly recombine, such that the active volume is reduced. A superior identification of the event topology and hence background rejection is available for the BEGe type (see below). The enriched detectors are assembled into 6 strings surrounding the central one, which consists of three coaxial detectors of natural isotopic composition. Each string is inside a nylon cylinder (see Extended Data Fig. 3) to limit the LAr volume from which radioactive ions like ⁴²K can be collected onto the outer detector surfaces Agostini et al. (2014a). All detectors are connected to custom-made low radioactivity charge sensitive amplifiers Riboldi et al. (2015) (30 MHz bandwidth, 0.8 keV full width at half maximum (FWHM) resolution) located in LAr about 35 cm above the detectors.
The charge signal traces are digitized with 100 MHz sampling rate and stored on disk for offline analysis. In background events some energy is often also deposited in the argon. The resulting scintillation light Agostini et al. (2015b) can be detected to veto them. In Phase II, a cylindrical volume of 0.5 m diameter and 2.2 m height around the detector strings (see Extended Data Figs. 1 and 4) is instrumented with light sensors. The central 0.9 m of the cylinder are defined by a curtain of wavelength shifting fibers which surround the 0.4 m high detector array. The fibers are read out at both ends with 90 silicon photomultipliers (SiPM) Janicsko et al. (2016). Groups of six 3×3 mm² SiPMs are connected together to a charge sensitive amplifier. Sixteen 3" low-background photomultipliers (PMT) designed for cryogenic operation are mounted at the top and bottom surfaces of the cylindrical volume. The distance to any detector is at least 0.7 m to limit the PMT background contribution from their intrinsic Th/U radioactivity. All LAr veto channels are digitized and read out together with the germanium channels if at least one detector has an energy deposition above 100 keV. The nylon cylinders, the fibers, the PMTs and all surfaces of the instrumented LAr cylindrical volume are covered with a wavelength shifter to shift the LAr scintillation light from 128 nm to about 400 nm, to match the peak quantum efficiency of the PMTs and the absorption maximum of the fibers.

The water tank is instrumented with 66 PMTs to detect Cherenkov light from muons passing through the experiment. On top of the clean room are three layers of plastic scintillator panels covering the central 43 m² to complete the muon veto Freund et al. (2016).

## III Data analysis

The data analysis flow is very similar to that of Phase I. The offline analysis of the digitized germanium signals is described in Refs. Agostini et al. (2013a, 2012, 2011a). A data blinding procedure is again applied: events with a reconstructed energy in a window around Qββ are not analyzed but only stored on disk. After the entire analysis chain has been frozen, these blinded events have been processed.

The gain stability of each germanium detector is continuously monitored by injecting charge pulses (test pulses) into the front-end electronics with a rate of 0.05 Hz. The test pulses are also used to monitor leakage current and noise. Only data recorded during stable operating conditions (e.g. gain stability better than 0.1 %) are used for the physics analysis. This corresponds to about 85 % of the total data written on disk. Signals originating from electrical discharges in the high voltage line or bursts of noise are rejected during the offline event reconstruction by a set of multi-parametric cuts based on the flatness of the baseline, polarity and time structure of the pulse. Physical events at Qββ are accepted with an efficiency larger than 99.9 %, estimated with γ lines in calibration data, test pulse events and template signals injected into the data set. Conversely, a visual inspection of all events above 1.6 MeV shows that no unphysical event survives the cuts.

The energy deposited in a germanium detector is reconstructed offline with an improved digital filter Agostini et al. (2015c), whose parameters are optimized for each detector and for several periods. The energy scale and resolution are determined with weekly calibration runs with ²²⁸Th sources.
The long-term stability of the scale is assessed by monitoring the shift of the position of the 2615 keV peak between consecutive calibrations. It is typically smaller than 1 keV for BEGe detectors and somewhat worse for some coaxial ones. The FWHM resolution at 2.6 MeV is between 2.6–4.0 keV for BEGe and 3.4–4.4 keV for coaxial detectors. The width of the strongest lines in the physics data (1525 keV from ⁴²K and 1460 keV from ⁴⁰K) is found to be 0.5 keV larger than the expectation for the coaxial detectors (see Fig. 1). In order to estimate the expected energy resolution at Qββ, an additional noise term is added to take this into account.

For 0νββ decays in the active part of a detector volume, the total energy is detected in this detector in 92 % of the cases. Multiple detector coincidences are therefore discarded as background events. Two consecutive candidate events within 1 ms are also rejected, to discriminate time-correlated decays from primordial radioisotopes, e.g. the radon progenies ²¹⁴Bi and ²¹⁴Po. Candidate events are also refuted if a muon trigger occurred within 10 μs prior to a germanium detector trigger. More than 99 % of the muons that deposit energy in a germanium detector are rejected this way. The induced dead time is 0.1 %.

The traces from PMTs and SiPMs are analyzed offline to search for LAr scintillation signals in coincidence with a germanium detector trigger. An event is rejected if any of the light detectors records a signal of amplitude above 50 % of the expectation for a single photo-electron within 5 μs from the germanium trigger; 99 % of the photons occur in this window. Accidental coincidences between the LAr veto system and the germanium detectors create a dead time which is measured with test pulse events and cross checked with the counts in the ⁴⁰K peak.

Fig. 2 shows the energy spectra for BEGe and coaxial detectors of Phase II with and without the LAr veto cut. Below the ³⁹Ar β endpoint of 565 keV the spectra are dominated by ³⁹Ar decays, up to 1.7 MeV by events from double beta decay with two neutrino emission (2νββ), above 2.6 MeV by α decays on the detector surface, and around Qββ by a mixture of α events, ⁴²K decays and those from the ²³⁸U and ²³²Th decay chains. The two spectra are similar except for the number of α events, which is on average higher for coaxial detectors. The number of α counts shows a large variation between the detectors. The power of the LAr veto is best demonstrated by the ⁴²K γ line at 1525 keV, which is suppressed by a factor 5 (see inset) due to the β particle depositing up to 2 MeV of energy in the LAr. The figure also shows the predicted spectrum from 2νββ decays of ⁷⁶Ge using our Phase I result for the half-life of 1.926×10²¹ yr Agostini et al. (2015d).

The time profile of the germanium detector current signal is used to discriminate 0νββ decays from background events. While the former have a point-like energy deposition in the germanium (single site events, SSE), the latter often have multiple depositions (multi site events, MSE) or depositions on the detector surface. The same pulse shape discrimination (PSD) techniques as in Phase I Agostini et al. (2013b) are applied. Events in the double escape peak (DEP) and at the Compton edge of 2615 keV gammas in calibration data have a similar time profile to 0νββ decays and are hence proxies for SSE. These samples are used to define the PSD cuts and the related detection efficiencies. The latter are cross checked with 2νββ decays.
The geometry of BEGe detectors allows one to apply a simple mono-parametric PSD based on the maximum of the detector current pulse (A) normalized to the total energy (E) Budjáš et al. (2009); Agostini et al. (2011b). The energy dependence of the mean and the resolution of A/E are measured for every detector with calibration events. After correcting for these dependences and normalizing the mean of DEP events to 1, the acceptance range is determined for each detector individually: the lower cut is set to keep 90 % of DEP events, and the upper position is twice the low-side separation from 1. Fig. 3 shows a scatter plot of the PSD parameter versus energy and the projection onto the energy axis. Events marked in red survive the PSD selection. Below 1.7 MeV, 2νββ events dominate, with a survival fraction of about 85 %. The two potassium peaks and Compton scattered photons reconstruct at A/E < 1 (below the SSE band). All 234 α events at higher energies exhibit A/E > 1 and are easily removed. The average 0νββ survival fraction Wagner (2017) is about 87 %. The uncertainty takes into account the systematic difference between the centroids of DEP and 2νββ events and the different fractions of MSE in DEP and 0νββ events.

For coaxial detectors a mono-parametric PSD is not sufficient, since SSE do not have a simple signature Agostini et al. (2013b). Instead, two neural network algorithms are applied to discriminate SSE from MSE and from surface events. The first one is identical to the one used in Phase I. The cut on the neural network qualifier is set to yield a survival fraction of DEP events of 90 % for each detector. For the determination of the efficiency, 2νββ events in physics data and a complete Monte Carlo simulation Kirsch (2014) of physics data and calibration data are used. The simulation considers the detector and the electronics response to energy depositions, including the drift of charges in the crystal Bruyneel et al. (2016). We find a survival fraction for 0νββ events of about 85 %, where the error is derived from variations of the simulation parameters.

The second neural network algorithm is applied for the first time and identifies surface events on the p⁺ contact. Training is done with physics data from two different energy intervals. After the LAr veto cut, events in the range 1.0–1.3 MeV are almost exclusively from 2νββ decay and hence signal-like. Events above 3.5 MeV are almost all from α decays on the p⁺ electrode and represent background events in the training. As efficiency we measure a value of about 93 % for a 2νββ event sample not used in the training. The combined PSD efficiency for coaxial detectors is about 79 %.

## IV Results

This analysis includes the data sets used in the previous publication Agostini et al. (2013a, 2015e), an additional coaxial detector period from 2013 (labeled "PI extra") and the Phase II data from December 2015 until June 2016 (labeled "PIIa coaxial" and "PIIa BEGe"). Table 1 lists the relevant parameters for all data sets. The exposures in the active volumes of the detectors for ⁷⁶Ge are 234 and 109 mol·yr for Phase I and II, respectively. The efficiency is the product of the ⁷⁶Ge isotope fraction (87 %), the active volume fraction (87–90 %), the event fraction reconstructed at full energy in a single crystal (92 %), the pulse shape selection (79–92 %) and the live time fraction (97.7 %). For the Phase I data sets the event selection including the PSD classification is unchanged. An improved energy reconstruction Agostini et al. (2015c) is applied to the data, as well as an updated value for the coaxial detector PSD efficiency of the neural network analysis Kirsch (2014).
Fig. 4 shows the spectra for the combined Phase I data sets and the two Phase II sets. The analysis range is from 1930 to 2190 keV, without the intervals 2104 ± 5 keV and 2119 ± 5 keV of known γ peaks predicted by our background model Agostini et al. (2014a). For the coaxial detectors four events survive the cuts, which means that the background is reduced by a factor of three compared to Phase I (see 'PI golden' in Tab. 1). Due to the better PSD performance, only one event remains in the BEGe data, which corresponds to a background of about 10⁻³ counts/(keV·kg·yr). Consequently, the Phase II background goal is reached.

We perform both a Frequentist and a Bayesian analysis based on an unbinned extended likelihood function Agostini et al. (2015e). The fit function for every data set is a flat distribution for the background (one free parameter per set) and, for a possible signal, a Gaussian centered at Qββ with a width according to the corresponding resolution listed in Tab. 1. The signal strength is calculated for each set according to its exposure, efficiency and the inverse half-life, which is a common free parameter. Systematic uncertainties, like a 0.2 keV uncertainty of the energy scale at Qββ, are included in the analysis as pull terms in the likelihood function. The implementation takes correlations into account.

The Frequentist analysis uses the Neyman construction of the confidence interval and the standard two-sided test statistic Olive et al. (2014); Cowan et al. (2011) with the restriction to the physical region of non-negative signal strength: the frequency distribution of the test statistic is generated using Monte Carlo simulations for different assumed signal strengths. The limit was determined by finding the largest value of the signal strength for which at most 10 % of the simulated experiments had a value of the test statistic more unlikely than the one measured in our data (see Extended Data Fig. 5). Details of the statistical analysis can be found in the appendix.

The best fit yields zero signal events and a 90 % C.L. limit of 2.0 events in 34.4 kg·yr total exposure, or

$$T_{1/2}^{0\nu} > 5.3 \times 10^{25}\ \mathrm{yr}. \qquad (1)$$

The (median) sensitivity assuming no signal is 4.0×10²⁵ yr (see Extended Data Fig. 5). The systematic errors weaken the limit by 1 %. The Bayesian fit yields, for a prior flat in 1/T₁/₂ between 0 and a maximum value, a limit of … yr (90 % C.I.). The sensitivity assuming no signal is … yr.

## V Discussion

The second phase of Gerda has been collecting data since December 2015 in stable conditions with all channels working. The background at Qββ for the BEGe detectors is about 10⁻³ counts/(keV·kg·yr). This is a major achievement, since the value is consistent with our ambitious design goal. We find no hint for a 0νββ decay signal in our combined data and place a limit of T₁/₂ > 5.3×10²⁵ yr (90 % C.L., sensitivity 4.0×10²⁵ yr). For light Majorana neutrino exchange and a nuclear matrix element range for ⁷⁶Ge between 2.8 and 6.1 Menendez et al. (2009); Horoi and Neacsu (2016); Barea et al. (2015); Hyvärinen and Suhonen (2015); Simkovic et al. (2013); Vaquero et al. (2013); Yao et al. (2015), the Gerda half-life limit converts to an effective Majorana neutrino mass below 0.15–0.33 eV (90 % C.L.). We expect only a fraction of a background event in the energy region of interest (1 FWHM) at the design exposure of 100 kg·yr. Gerda is hence the first "background free" experiment in the field. Our sensitivity therefore grows almost linearly with time, instead of by square root as for competing experiments, and reaches … yr for the half-life limit within 3 years of continuous operation. With the same exposure we have a 50 % chance to detect a signal with 3σ significance if the half-life is below … yr.
Phase II has demonstrated that the concept of background suppression by exploiting the good pulse shape performance of BEGe detectors and by detecting the argon scintillation light works. The background at Qββ is at a world-best level: it is lower by typically a factor of 10 compared to experiments using other isotopes, after normalization by the energy resolution and total efficiency; i.e. BI·FWHM/efficiency is superior. This is the reason why the Gerda half-life sensitivity of 4.0×10²⁵ yr for an exposure of 343 mol·yr is similar to the one of Kamland-Zen for ¹³⁶Xe of 5.6×10²⁵ yr, based on a more than 10-fold exposure of 3700 mol·yr Gando et al. (2016).

A discovery of 0νββ decay would have far reaching consequences for our understanding of particle physics and cosmology. Key features for a convincing case are an ultra low background with a simple flat distribution, excellent energy resolution and the possibility to identify the events with high confidence as signal-like, as opposed to an unknown γ line from a nuclear transition. The latter is achieved by the detector pulse shape analysis and possibly a signature in the argon. The concept of operating bare germanium detectors in liquid argon has proven to have the best performance for a discovery, which motivates future extensions of the program. The Gerda cryostat can hold 200 kg of detectors. Such an experiment will remain background-free up to an exposure of 1000 kg·yr, provided the background can be further reduced by a factor of five. The discovery sensitivity would then improve by an order of magnitude, to a half-life of 10²⁷ yr. The 200 kg setup is conceived as a first step for a more ambitious 1 ton experiment, which would ultimately boost the sensitivity to … yr, corresponding to the 10–20 meV range. Both extensions are being pursued by the newly formed LEGeND Collaboration (http://www.legend-exp.org).

## Appendix A: Acknowledgments

The Gerda experiment is supported financially by the German Federal Ministry for Education and Research (BMBF), the German Research Foundation (DFG) via the Excellence Cluster Universe, the Italian Istituto Nazionale di Fisica Nucleare (INFN), the Max Planck Society (MPG), the Polish National Science Centre (NCN), the Russian Foundation for Basic Research (RFBR), and the Swiss National Science Foundation (SNF). The institutions acknowledge also internal financial support. The Gerda collaboration thanks the directors and the staff of the LNGS for their continuous strong support of the Gerda experiment.

## Appendix B: Statistical Methods

This section discusses the statistical analysis of the Gerda data. In particular, the procedures to derive the limit on the half-life, the median sensitivity of the experiment and the treatment of systematic uncertainties are described. A combined analysis of data from Phase I and II is performed by fitting simultaneously the six data sets of Table 1. The parameter of interest for this analysis is the strength of a possible 0νββ decay signal: $S = 1/T_{1/2}^{0\nu}$. The number of expected 0νββ events in the $i$-th data set as a function of $S$ is given by:

$$\mu_S^i = \ln 2 \cdot (N_A/m_a) \cdot \epsilon_i \cdot \mathcal{E}_i \cdot S\,, \qquad (2)$$

where $N_A$ is Avogadro's number, $\epsilon_i$ the global signal efficiency of the $i$-th data set, $\mathcal{E}_i$ the exposure and $m_a$ the molar mass. The exposure quoted is the total detector mass multiplied by the data taking time.
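As a rough numerical cross-check of Eq. (2) against Eq. (1), here is a short sketch; the efficiency and molar mass are assumed round numbers for illustration, not values quoted from Table 1:

```python
import math

# Expected signal counts from Eq. (2) for the half-life limit of Eq. (1).
N_A = 6.022e23   # Avogadro's number, 1/mol
m_a = 0.076      # approximate molar mass of enriched Ge, kg/mol (assumed)
eff = 0.6        # assumed global signal efficiency (product of the factors in the text)
E = 34.4         # total exposure, kg*yr (from the main text)
T = 5.3e25       # half-life limit, yr (Eq. 1)

mu_S = math.log(2) * (N_A / m_a) * eff * E / T
print(round(mu_S, 1))  # ~2.1, consistent with the quoted 90% C.L. limit of 2.0 events
```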
The global signal efficiency accounts for the fraction of ⁷⁶Ge in the detector material, the fraction of the detector active volume, the efficiency of the analysis cuts, the fractional live time of the experiment and the probability that 0νββ decay events in the active detector volume have a reconstructed energy at Qββ. The total number of expected background events as a function of the background index $\mathrm{BI}_i$ is:

$$\mu_B^i = \mathcal{E}_i \cdot \mathrm{BI}_i \cdot \Delta E\,, \qquad (3)$$

where $\Delta E = 240$ keV is the width of the energy region around Qββ used for the fit. Each data set is fitted with an unbinned likelihood function assuming a Gaussian distribution for the signal and a flat distribution for the background:

$$\mathcal{L}_i(D_i|S,\mathrm{BI}_i,\theta_i) = \prod_{j=1}^{N_{\mathrm{obs}}^i} \frac{1}{\mu_S^i+\mu_B^i}\left[\mu_S^i\,\frac{1}{\sqrt{2\pi}\,\sigma_i}\exp\!\left(-\frac{(E_j-Q_{\beta\beta}-\delta_i)^2}{2\sigma_i^2}\right)+\mu_B^i\,\frac{1}{\Delta E}\right] \qquad (4)$$

where $E_j$ are the individual event energies, $N_{\mathrm{obs}}^i$ is the total number of events observed in the $i$-th data set, $\sigma_i$ is the energy resolution and $\delta_i$ is a possible systematic energy offset. The parameters with systematic uncertainties are indicated with $\theta_i$. The parameters $S$ and $\mathrm{BI}_i$ are bound to positive values. The total likelihood is constructed as the product of all $\mathcal{L}_i$ weighted with the Poisson terms pdg:

$$\mathcal{L}(D|S,\mathbf{BI},\theta)=\prod_i\left[\frac{e^{-(\mu_S^i+\mu_B^i)}\,(\mu_S^i+\mu_B^i)^{N_{\mathrm{obs}}^i}}{N_{\mathrm{obs}}^i!}\cdot \mathcal{L}_i(D_i|S,\mathrm{BI}_i,\theta_i)\right] \qquad (5)$$

where $D=\{D_i\}$, $\mathbf{BI}=\{\mathrm{BI}_i\}$ and $\theta=\{\theta_i\}$. A frequentist analysis is performed using a two-sided test statistic Cowan et al. (2011) based on the profile likelihood $\lambda(S)$:

$$t_S=-2\ln\lambda(S)=-2\ln\frac{\mathcal{L}(S,\hat{\hat{\mathbf{BI}}},\hat{\hat{\theta}})}{\mathcal{L}(\hat{S},\hat{\mathbf{BI}},\hat{\theta})} \qquad (6)$$

where $\hat{\hat{\mathbf{BI}}}$ and $\hat{\hat{\theta}}$ in the numerator denote the values of the parameters that maximize the likelihood for a fixed $S$. In the denominator, $\hat S$, $\hat{\mathbf{BI}}$ and $\hat\theta$ are the values corresponding to the absolute maximum likelihood. The confidence intervals are constructed for a discrete set of values $S_j$. For each $S_j$, possible realizations of the experiment are generated via Monte Carlo according to the parameters of Table 1 and the expected number of counts from Eqs. (2) and (3). For each realization $t_{S_j}$ is evaluated. From the entire set the probability distribution $f(t_S|S_j)$ is calculated. The p-value of the data for a specific $S_j$ is computed as:

$$p_{S_j}=\int_{t_{\mathrm{obs}}}^{\infty} f(t_S|S_j)\,\mathrm{d}t_S \qquad (7)$$

where $t_{\mathrm{obs}}$ is the value of the test statistic of the Gerda data for $S_j$. The values of $p_{S_j}$ are shown by the solid line in Extended Data Fig. 5. The 90 % C.L. interval is given by all $S$ values with $p_S \geq 0.1$. Such an interval has the correct coverage by construction. The current analysis yields a one-sided interval, i.e. a limit of $T_{1/2}^{0\nu} > 5.3\times10^{25}$ yr. The expectation for the frequentist limit (i.e. the experimental sensitivity) was evaluated from the distribution of $p_S$ built from Monte Carlo generated data sets with no injected signal ($S=0$). The distribution is shown in Extended Data Fig. 5: the dashed line is the median of the distribution and the color bands indicate the 68 % and 90 % probability central intervals. The experimental sensitivity corresponds to the $S$ value at which the median crosses the p-value threshold of 0.1: $T_{1/2}^{0\nu} > 4.0\times10^{25}$ yr (90 % C.L.).

Systematic uncertainties are folded into the likelihood by varying the parameters $\theta$ in the fits and constraining them by adding multiplicative Gaussian penalty terms to the likelihood. The central values and the standard deviations of the penalty terms for $\epsilon_i$ and $\sigma_i$ are taken from Table 1. The penalty term on $\delta_i$ has a central value equal to zero and a standard deviation of 0.2 keV.

Instead of the two-sided test statistic one can use a one-sided test statistic defined as Cowan et al. (2011):

$$\tilde t_S=\begin{cases}0, & \hat S > S \geq 0\\ -2\ln\lambda(S), & \hat S\leq S\end{cases} \qquad (8)$$

By construction $\tilde t_S = 0$ for $S \leq \hat S$ in all realizations, and consequently $\hat S$ is always included in the 90 % C.L. interval, i.e. the one-sided test statistic will always yield a limit. In our case the resulting limit would be 50 % stronger.
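To make the construction of Eqs. (6)-(7) concrete, here is a deliberately simplified toy sketch: a single counting experiment with known background and without the Gaussian signal shape. All numbers are illustrative, not Gerda parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

def t_S(n_obs, mu_S, mu_B):
    """Two-sided profile-likelihood ratio for a Poisson counting
    experiment with known background, restricted to mu_S >= 0."""
    mu_hat = max(n_obs - mu_B, 0.0)  # MLE of the signal, bounded at zero
    def logL(mu):
        tot = mu + mu_B
        return n_obs * np.log(tot) - tot  # Poisson log-likelihood (n! dropped)
    return -2.0 * (logL(mu_S) - logL(mu_hat))

mu_B, n_obs = 2.5, 1   # illustrative expected background and observed count
mu_test = 2.0          # tested signal strength
t_obs = t_S(n_obs, mu_test, mu_B)

# Toy Monte Carlo distribution of t_S under the tested hypothesis (Eq. 7):
toys = rng.poisson(mu_test + mu_B, size=100_000)
t_toys = np.array([t_S(n, mu_test, mu_B) for n in toys])
p_value = np.mean(t_toys >= t_obs)
print(p_value)  # mu_test is excluded at 90% C.L. if p_value < 0.1
```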
Similar to other experiments Albert et al. (2014); Gando et al. (2016), we want to be able to detect a possible signal and thus we decided a priori to adopt the two-sided test statistic. It is noteworthy that, although the coverage of both test statistics is correct by construction, deciding which one to use according to the outcome of the experiment would result in the flip-flop issue discussed by Feldman and Cousins Feldman:1997qc. The statistical analysis is also performed within a Bayesian framework. The combined posterior probability density function (PDF) is calculated from the six data sets according to Bayes' theorem:

$$P(S,\mathbf{BI}|D,\theta)\propto \mathcal{L}(D|S,\mathbf{BI},\theta)\, P(S)\, \prod_i P(\mathrm{BI}_i) \qquad (9)$$

The likelihood is given by Eq. (5), while $P(S)$ and $P(\mathrm{BI}_i)$ are the prior PDFs for $S$ and for the background indices, respectively. The one-dimensional posterior PDF of the parameter of interest $S$ is derived by marginalization over all nuisance parameters $\mathbf{BI}$. The marginalization is performed by the BAT toolkit bat via a Markov chain Monte Carlo numerical integration. A flat PDF between 0 and 0.1 counts/(keV·kg·yr) is taken as prior for all background indices. As in Ref. Agostini et al. (2013a), a flat prior distribution is taken for $S$ between 0 and $10^{-24}$/yr, i.e. all counting rates up to a maximum are considered to be equiprobable. The parameters $\theta_i$ in the likelihood are fixed during the Bayesian analysis and the uncertainties are folded into the posterior PDF as a last step by an integral average:

$$\langle P(S|D)\rangle = \int P(S|D,\theta)\,\prod_i g(\theta_i)\,\mathrm{d}\theta_i \qquad (10)$$

with $g(\theta_i)$ being Gaussian distributions as in the frequentist analysis. The integration is performed numerically by a Monte Carlo approach. The median sensitivity of the experiment in the case of no signal is  yr (90 % C.I.). The posterior PDF for our data has an exponential shape with the mode at $S=0$. Its 90 % probability quantile yields a limit of  yr. As in any Bayesian analysis, results depend on the choice of the priors. For our limit we assume all signal count rates to be a priori equiprobable. Alternative reasonable choices are, for instance: equiprobable Majorana neutrino masses, which yields a prior proportional to $1/\sqrt{S}$; or scale invariance in the counting rate, namely a flat prior in $\ln S$. The limits derived with these assumptions are significantly stronger (50 % or more), since for both alternatives the prior PDFs increase the probability of low $S$ values. The systematic uncertainties weaken the limit on $T_{1/2}$ by less than 1 % in both the frequentist and Bayesian analyses. In general, the impact of systematic uncertainties on limits is marginal in the low-statistics regime that characterizes our experiment (see also Ref. cousins). The limit derived from the Gerda data is slightly stronger than the median sensitivity. This effect is more significant in the frequentist analysis, as one would expect; see e.g. Ref. biller for a detailed discussion. The probability of obtaining a frequentist (Bayesian) limit stronger than the actual one is 33 % (35 %).
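The Bayesian analysis of Eqs. (9)–(10) can be sketched in the same toy setting. Here the background index is marginalized by brute-force numerical integration on a grid instead of the Markov chain Monte Carlo used by BAT; the data and all numerical values are the same illustrative assumptions as in the frequentist sketch above.

```python
import numpy as np
from scipy.stats import norm

QBB, DELTA_E, SIGMA = 2039.06, 240.0, 1.5        # keV (resolution illustrative)
N_A, M_A = 6.02214076e23, 0.0758                 # assumed molar mass [kg/mol]
EFF, EXPOSURE = 0.5, 10.0                        # assumed efficiency, exposure

rng = np.random.default_rng(1)
events = rng.uniform(QBB - DELTA_E / 2, QBB + DELTA_E / 2, size=5)  # toy data

def log_like(S, BI):
    # log of the extended likelihood, Eqs. (4)-(5), for one data set
    ms = np.log(2) * (N_A / M_A) * EFF * EXPOSURE * S
    mb = EXPOSURE * BI * DELTA_E
    dens = ms * norm.pdf(events, QBB, SIGMA) + mb / DELTA_E
    return -(ms + mb) + np.sum(np.log(dens))

# Flat priors as in Eq. (9): S in [0, 1e-24]/yr, BI in (0, 0.1] cts/(keV kg yr)
S_grid = np.linspace(0.0, 1.0e-24, 400)
BI_grid = np.linspace(1e-4, 0.1, 300)
logL = np.array([[log_like(S, BI) for BI in BI_grid] for S in S_grid])

# Marginalize BI (plain sums are fine on uniform grids), normalize P(S|D),
# and read off the 90% posterior quantile as the credible limit on S
post = np.exp(logL - logL.max()).sum(axis=1)
cdf = np.cumsum(post)
cdf /= cdf[-1]
S90 = S_grid[np.searchsorted(cdf, 0.9)]
print(f"90% credible limit: T_1/2 > {1.0 / S90:.2e} yr")
```

Folding in the systematic uncertainties as in Eq. (10) would amount to repeating this with $\theta$ drawn from the Gaussian penalty distributions and averaging the resulting posteriors.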
## References

• Davidson et al. (2008) S. Davidson, E. Nardi, and Y. Nir, Phys. Rept. 446, 105 (2008).
• Mohapatra and Smirnov (2006) R. Mohapatra and A. Y. Smirnov, Ann. Rev. Nucl. Part. Sci. 56, 569 (2006).
• Mohapatra et al. (2007) R. Mohapatra et al., Rept. Prog. Phys. 70, 1757 (2007).
• Päs and Rodejohann (2015) H. Päs and W. Rodejohann, New J. Phys. 17, 115010 (2015).
• Agostini et al. (2013a) M. Agostini et al. (GERDA Collaboration), Phys. Rev. Lett. 111, 122503 (2013a).
• Cuesta et al. (2015) C. Cuesta et al. (Majorana Collaboration), AIP Conf. Proc. 1686, 020005 (2015).
• Alfonso et al. (2015) K. Alfonso et al. (CUORE Collaboration), Phys. Rev. Lett. 115, 102502 (2015).
• Andringa et al. (2016) S. Andringa et al. (SNO+ Collaboration), Adv. High Energy Phys. 2016, 6194250 (2016).
• Gando et al. (2016) A. Gando et al. (KamLAND-Zen Collaboration), Phys. Rev. Lett. 117, 082503 (2016).
• Albert et al. (2014) J. Albert et al. (EXO-200 Collaboration), Nature 510, 229 (2014).
• Martin-Albo et al. (2016) J. Martin-Albo et al. (NEXT-100 Collaboration), JHEP 1605, 159 (2016).
• Mount et al. (2010) B. J. Mount, M. Redshaw, and E. G. Myers, Phys. Rev. C 81, 032501 (2010).
• Ackermann et al. (2013) K.-H. Ackermann et al. (GERDA Collaboration), Eur. Phys. J. C 73, 2330 (2013).
• Heusser (1995) G. Heusser, Ann. Rev. Nucl. Part. Sci. 45, 543 (1995).
• Klapdor-Kleingrothaus et al. (2004) H. V. Klapdor-Kleingrothaus et al., Phys. Lett. B 586, 198 (2004).
• Aalseth et al. (2002) C. E. Aalseth et al. (IGEX Collaboration), Phys. Rev. D 65, 092007 (2002).
• Agostini et al. (2015a) M. Agostini et al. (GERDA Collaboration), Eur. Phys. J. C 75, 39 (2015a).
• Agostini et al. (2014a) M. Agostini et al. (GERDA Collaboration), Eur. Phys. J. C 74, 2764 (2014a).
• Riboldi et al. (2015) S. Riboldi et al. (2015), http://ieeexplore.ieee.org/document/7465549.
• Agostini et al. (2015b) M. Agostini et al., Eur. Phys. J. C 75, 506 (2015b).
• Janicsko et al. (2016) J. Janicsko et al. (2016), https://arxiv.org/abs/1606.04254.
• Freund et al. (2016) K. Freund et al., Eur. Phys. J. C 76, 298 (2016).
• Agostini et al. (2012) M. Agostini et al., J. Phys.: Conf. Ser. 368, 012047 (2012).
• Agostini et al. (2011a) M. Agostini et al., J. Instrum. 6, P08013 (2011a).
• Agostini et al. (2015c) M. Agostini et al. (GERDA Collaboration), Eur. Phys. J. C 75, 255 (2015c).
• Agostini et al. (2015d) M. Agostini et al. (GERDA Collaboration), Eur. Phys. J. C 75, 416 (2015d).
• Agostini et al. (2013b) M. Agostini et al. (GERDA Collaboration), Eur. Phys. J. C 73, 2583 (2013b).
• Budjáš et al. (2009) D. Budjáš et al., JINST 4, P10007 (2009).
• Agostini et al. (2011b) M. Agostini et al., JINST 6, P03005 (2011b).
• Wagner (2017) V. Wagner, Pulse Shape Analysis for the GERDA Experiment to Set a New Limit on the Half-life of 0νββ Decay of ⁷⁶Ge, PhD thesis, University of Heidelberg (2017).
• Kirsch (2014) A. Kirsch, Search for Neutrinoless Double Beta Decay in GERDA Phase I, PhD thesis, University of Heidelberg (2014).
• Bruyneel et al. (2016) B. Bruyneel, B. Birkenbach, and P. Reiter, Eur. Phys. J. A 52, 70 (2016).
• Agostini et al. (2015e) M. Agostini et al. (GERDA Collaboration), Physics Procedia 61, 828 (2015e).
• Olive et al. (2014) K. Olive et al. (Particle Data Group), Chin. Phys. C 38, 090001 (2014).
• Cowan et al. (2011) G. Cowan et al., Eur. Phys. J. C 71, 1554 (2011).
• Menendez et al. (2009) J. Menendez et al., Nucl. Phys. A 818, 139 (2009).
• Horoi and Neacsu (2016) M. Horoi and A. Neacsu, Phys. Rev. C 93, 024308 (2016).
• Barea et al. (2015) J. Barea, J. Kotila, and F. Iachello, Phys. Rev. C 91, 034304 (2015).
• Hyvärinen and Suhonen (2015) J. Hyvärinen and J. Suhonen, Phys. Rev. C 91, 024613 (2015).
• Simkovic et al. (2013) F. Simkovic et al., Phys. Rev. C 87, 045501 (2013).
• Vaquero et al. (2013) N. L. Vaquero, T. Rodriguez, and J. Egido, Phys. Rev. Lett. 111, 142501 (2013).
• Yao et al. (2015) J. Yao et al., Phys. Rev. C 91, 024316 (2015).
• (43) J. Beringer et al. (Particle Data Group), Phys. Rev. D 86, 010001 (2012).
• (44) G. J. Feldman and R. D. Cousins, Phys. Rev. D 57, 3873 (1998).
• (45) A. Caldwell, D. Kollar, and K. Kröninger, Comput. Phys. Commun. 180, 2197 (2009).
• (46) R. Cousins and V. Highland, Nucl. Instr. Meth. A 320, 331 (1992).
• (47) S. V. Biller and S. M. Oser, Nucl. Instr. Meth. A 774, 103 (2015).
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9003246426582336, "perplexity": 1420.9716471486395}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585518.54/warc/CC-MAIN-20211022181017-20211022211017-00399.warc.gz"}
http://mathhelpforum.com/advanced-applied-math/129477-frequency-modulation-synthesis-series.html
# Thread: Frequency modulation synthesis in series

1. ## Frequency modulation synthesis in series

Hello there, I'm currently working on a project about the mathematics of FM synthesis (for a general overview see the relevant chapter in Music: a Mathematical Offering). I'm trying to do some expansion and simplification with multiple modulating waves, and the maths is getting a bit trying. I want to rearrange an equation of trigonometric functions so that it is in Bessel-function form. So far I've done it for parallel modulating waves, by showing that something of the form

$\sin(c + I_1 \sin\theta_1 + I_2 \sin\theta_2)$

can be rearranged to

$\sum_{k_1} \sum_{k_2} J_{k_1}(I_1)\, J_{k_2}(I_2)\, \sin(c + k_1\theta_1 + k_2\theta_2)$

using the standard trigonometric addition formulae and the (two-sided) Jacobi–Anger expansion

$\sin(z \sin\theta) = \sum_{k=-\infty}^{\infty} J_k(z) \sin(k\theta)$

(I can provide a full proof if needed, but I hope the sketch gives an idea of what I'm aiming for.)

So I'm trying to do the same for modulators in series, starting with

$\sin(\alpha_1 + I_1 \sin(\alpha_2 + I_2 \sin\theta_2))$

Using the addition formula for $\sin$ and then applying the Bessel expansion to the inner sine, I arrive at

$\sin\!\big(\alpha_1 + I_1 \sum_{k} J_k(I_2) \sin(\alpha_2 + k\theta_2)\big)$

Now here's where I get stuck. Can anyone suggest how I might expand/simplify this last expression? Can it even be done?

2. I've actually found the solution to the problem but not the proof, so if someone could help me understand it, that would be great. Rewriting in the author's own terms, he starts with the equation

$s(t) = \sin(2\pi f_c t + I_1 \sin[2\pi f_1 t + I_2 \sin\{2\pi f_2 t\}])$

and ends with

$s(t) = \sum_k \sum_n J_k(I_1)\, J_n(k I_2)\, \sin(2\pi [f_c + k f_1 + n f_2] t)$

Can anyone explain the intermediate steps to me?
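Both identities are easy to sanity-check numerically before chasing the proof. Here is a quick sketch, assuming numpy and scipy are available, with truncated sums standing in for the infinite ones; scipy's `jv` handles negative orders, which is exactly what the two-sided sums need:

```python
import numpy as np
from scipy.special import jv  # Bessel function of the first kind, J_k

t = np.linspace(0.0, 0.01, 2000)
c = 2 * np.pi * 500.0 * t      # carrier phase
th1 = 2 * np.pi * 80.0 * t     # first modulator phase
th2 = 2 * np.pi * 13.0 * t     # second modulator phase
I1, I2 = 1.7, 0.9              # modulation indices
ks = np.arange(-25, 26)        # truncation: J_k(I) dies off quickly in |k|

# Single modulator: sin(c + I1 sin th1) = sum_k J_k(I1) sin(c + k th1)
direct1 = np.sin(c + I1 * np.sin(th1))
series1 = sum(jv(k, I1) * np.sin(c + k * th1) for k in ks)
print(np.max(np.abs(direct1 - series1)))  # numerically zero

# Modulators in series (the quoted result):
# sin(c + I1 sin(th1 + I2 sin th2)) = sum_k sum_n J_k(I1) J_n(k I2) sin(c + k th1 + n th2)
direct2 = np.sin(c + I1 * np.sin(th1 + I2 * np.sin(th2)))
series2 = sum(jv(k, I1) * jv(n, k * I2) * np.sin(c + k * th1 + n * th2)
              for k in ks for n in ks)
print(np.max(np.abs(direct2 - series2)))  # numerically zero
```

That both checks pass suggests the route to the proof: expand the outer sine with Jacobi–Anger first, giving terms of the form $\sin(c + k\theta_1 + kI_2\sin\theta_2)$ with effective index $kI_2$, then apply the same expansion once more to each term.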
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9631268382072449, "perplexity": 397.11474752918565}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948581053.56/warc/CC-MAIN-20171216030243-20171216052243-00537.warc.gz"}