There are $n$ hotels on a street. For each hotel you know the number of free rooms. Your task is to assign hotel rooms for groups of tourists. All members of a group want to stay in the same hotel.
The first input line contains two integers $n$ and $m$: the number of hotels and the number of groups. The hotels are numbered $1,2,\ldots,n$.
The next line contains $n$ integers $h_1,h_2,\ldots,h_n$: the number of free rooms in each hotel.
The last line contains $m$ integers $r_1,r_2,\ldots,r_m$: the number of rooms each group requires.
Print the assigned hotel for each group. If a group cannot be assigned a hotel, print 0 instead.
Abstract: Gerstenhaber and Schack ([GS]) developed a deformation theory of presheaves of algebras on small categories. We translate their cohomological description to sheaf cohomology. More precisely, we describe the deformation space of (admissible) quasicoherent sheaves of algebras on a quasiprojective scheme $X$ in terms of sheaf cohomology on $X$ and $X\times X$. These results are applied to the study of deformations of the sheaf $D_X$ of differential operators on $X$. In particular, in case $X$ is a flag variety we show that any deformation of $D_X$, which is induced by a deformation of $\mathcal{O}_X$, must be trivial. This result is used in [LR3], where we study the localization construction for quantum groups.
By a complex irreducible representation of a group $G$, I mean a simple $\mathbb CG$-module. So my representations need not be unitary and we are working in the purely algebraic setting.
Is a group virtually abelian if and only if all its complex irreducible representations are finite dimensional?
It was shown by E. Thoma that all irreducible unitary representations of a discrete group are finite dimensional if and only if the group is virtually abelian. This is, of course, a different category than I am looking at, but relevant.
It was shown independently by Snider and Wehrfritz that a solvable group whose complex irreducible representations are all finite dimensional is virtually abelian, cf. here. However, it seems that the paper trail grows cold after this, in the sense that these papers don't seem to have many citations on Mathscinet. I am not sure if this is because they are only cited in old papers that are not recorded in the citation part of Mathscinet, or if people stopped working on the problem.
If $G$ is a finitely generated group whose irreducible representations are all finite dimensional, then $G$ is residually finite. This is because $\mathbb CG$ is semiprimitive, which means that it has enough irreducible representations to separate points, and hence $G$ has enough irreducible representations to separate points. But a theorem of Malcev says a finitely generated linear group is residually finite. I don't know if the problem can be reduced to finitely generated groups.
As @YCor pointed out (and as was first observed by Kaplansky), a virtually abelian group has all its complex irreducible representations of bounded degree. Conversely, if $G$ has all its complex irreducible representations of bounded degree, then $G$ is virtually abelian by a result of Isaacs and Passman; see here.
B. Hartley proved that the question has a positive answer for locally finite groups.
Passman and Temple showed that the problem reduces to finitely generated groups and also that if all complex irreducible representations of $G$ are finite dimensional, then $G/N$ is virtually abelian for some finitely generated normal subgroup $N$ of $G$, see here.
Snider proved that the question has a positive answer for periodic (=torsion) groups. See here.
I am primarily interested in the case of a countable group.
Abstract : This paper deals with real-time control under computational constraints. A robust control approach to control/real-time scheduling co-design is proposed using the $H_\infty$ framework for Linear Parameter Varying (LPV) polytopic systems. The originality consists in a new resource sharing between control tasks according to the controlled plant performances. Here the varying parameters are images of the control performance w.r.t. the sampling frequencies. Then a LPV based feedback scheduler is designed to adapt the control tasks periods according to the plant behavior and to the availability of computing resources. The approach is illustrated with a robot-arm controller design, whose feasibility is assessed in simulation.
"I have been working, in collaboration with José Bordes (Valencia) and Chan Hong-Mo (Rutherford-Appleton Laboratory), to build the Framed Standard Model (FSM) for some time now. The initial aim of the FSM is to give geometric meaning to (fermion) generations and the Higgs field(s). The surprise is that doing so has enabled one not only to reproduce some details of the standard model with fewer parameters but also to make testable new predictions, possibly even for dark matter. I find this really quite exciting.
It is well known that general relativity is solidly based on geometry. It would be nice if one can say the same for particle physics. The first steps are hopeful, since gauge theory has geometric significance as a fibre bundle over spacetime, and the gauge bosons are components of its connection.
The standard model (SM) of particle physics is a gauge theory based on the gauge group $SU(3) \times SU(2) \times U(1)$, ignoring discrete identifications for simplicity. The gauge bosons are the photon $\gamma$, $W^\pm, Z$ and the colour gluons. To these are added, however, the scalar Higgs fields, and the leptons and quarks, for which no geometric significance is usually sought nor given. Besides, the latter two have to come in three generations, as represented by the three rows of the table below.
In a gauge theory the field variables transform under the gauge group, and to describe their transformation, we need to specify a local (spacetime dependent) frame, with reference to a global (fixed) frame via a transformation matrix. We suggest therefore that it is natural to incorporate both the local and global symmetries, and to introduce the elements of the transformation matrix as dynamical variables, which we call framons. Consequently, the global $SU(3)$ symmetry can play the role of generations, and the $SU(2)$ framons can give rise to the Higgs field.
The FSM takes the basic structure of the SM, without supersymmetry, in four spacetime dimensions, adds to it a naturally occurring global symmetry, and uses 't Hooft's confinement picture instead of the usual symmetry breaking picture.
As a result of this suggestion, we find that many details of the SM can be explained. Indeed, already by one-loop renormalization, we are able to reproduce the masses of the quarks and leptons, and their mixing parameters including the neutrino oscillation angles, using 7 parameters, as compared to the 17 free parameters of the SM. It also gives an explanation of the strong CP problem without postulating the existence of axions.
In addition, the FSM has predictions which are testable in the near future. What is special about the FSM is the presence of the colour $SU(3)$ framons. They will have effects which make the FSM deviate from the SM. Let me mention one of these which is of immediate relevance to LHC experiments. The FSM has a slightly larger prediction for the mass of the $W$ boson than the SM (see figure). It would be very interesting to see if future improved measurements of the $W$ mass would agree better with FSM.
(Figure 1: Measurements of the W boson mass compared to the SM prediction (mauve) and the FSM predictions (green) at two different vacuum expectation values).
One other possible implication is that some of the bound states involving the $SU(3)$ framons might be suitable candidates for dark matter. Now dark matter is one of the astrophysical mysteries still largely unresolved. We know that constituents of dark matter, whatever they may be, have mass but hardly interact with our world of quarks and leptons. These FSM candidates have exactly these properties, but we need more study to see if they are valid candidates for dark matter. If they were, then it would be a very interesting implication for the FSM."
For a fuller explanation of the work click here.
I'm sorry if this is well-known. I'm new to stable homotopy theory.
There are many models of spectra, and an answer to this MO post says most of them are equivalent, even equivalent while taking the symmetric monoidal structure (smash product) into account.
My question is motivated by wanting to know how these equivalences are established. There's a clean development of the stable $\infty$-category of spectra and the smash product thereof in Lurie's Higher Algebra, and both are characterized by universal properties, but it seems anachronistic to say that checking these universal properties is the way to establish the equivalence of different models.
Finally, even if one is able to establish that the different model categories of spectra with smash product (symmetric, orthogonal, what have you) are equivalent, it seems like this doesn't easily construct a functor from one model to another--for instance, it's not obvious how to take two arbitrary Omega spectra (more or less the model in Higher Algebra) and spit out a pair of symmetric spectra while verifying that their smash products in either model are equivalent.
When known, functors between the models?
The paper you're looking for is "The stable homotopy category is rigid" by Stefan Schwede. It shows that an equivalence on the homotopy category level implies a Quillen equivalence on the model category level.
For monoidal results, check out "Monoidal Uniqueness of Stable Homotopy Theory" by Brooke Shipley. This paper has exactly the same universal property for the model category of spectra that you mention from Lurie, but of course many years before his work. I think it's safe to say these papers of Schwede and Shipley were what Lurie had in mind when he wrote the $\infty$-category version.
Nicolas Papernot, Patrick D. McDaniel. Deep k-Nearest Neighbors: Towards Confident, Interpretable and Robust Deep Learning. CoRR abs/1803.04765 (2018).
Papernot and McDaniel introduce deep k-nearest neighbors where nearest neighbors are found at each intermediate layer in order to improve interpretability and robustness. Personally, I really appreciated reading this paper; thus, I will not only discuss the actually proposed method but also highlight some ideas from their thorough survey and experimental results.
First, Papernot and McDaniel provide a quite thorough survey of relevant work in three disciplines: confidence, interpretability and robustness. To the best of my knowledge, this is one of few papers that explicitly make the connection of these three disciplines. Especially the work on confidence is interesting in the light of robustness as Papernot and McDaniel also frequently distinguish between in-distribution and out-distribution samples. Here, it is commonly known that deep neural networks are over-confident when moving away from the data distribution.
The deep k-nearest neighbor approach is described in Algorithm 1 and summarized in the following. For a trained model and a training set of labeled samples, they first find k nearest neighbors for each intermediate layer of the network. The layer nonconformity with a specific label $j$, referred to as $\alpha$ in Algorithm 1, is computed as the number of nearest neighbors whose labels do not match label $j$. By comparing these nonconformity values to a set of reference values (computed over a set of labeled calibration data), the prediction can be refined. In particular, the probability for label $j$ can be computed as the fraction of reference nonconformity values that are higher than the computed one. See Algorithm 1 or the paper for details.
Algorithm 1: The deep k-nearest neighbor algorithm and an illustration.
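As a rough illustration of the two steps described above (per-layer nonconformity, then conformal credibility against calibration scores), here is a minimal sketch. It is not the authors' code; the helper names, the use of scikit-learn's NearestNeighbors, and the shape of the per-layer representations are all assumptions.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def fit_layer_knn(train_reps, k=75):
    """train_reps: list over layers of (n_train, d_layer) feature arrays."""
    return [NearestNeighbors(n_neighbors=k).fit(r) for r in train_reps]

def nonconformity(x_reps, knns, train_labels, label):
    """alpha(x, label): over all layers, count neighbors whose label differs from `label`."""
    alpha = 0
    for rep, knn in zip(x_reps, knns):
        _, idx = knn.kneighbors(rep.reshape(1, -1))
        alpha += int(np.sum(train_labels[idx[0]] != label))
    return alpha

def credibility(x_reps, knns, train_labels, calib_alphas, n_classes):
    """p_j = fraction of calibration nonconformities >= alpha(x, j); predict argmax_j p_j."""
    calib_alphas = np.asarray(calib_alphas)
    p = np.empty(n_classes)
    for j in range(n_classes):
        p[j] = np.mean(calib_alphas >= nonconformity(x_reps, knns, train_labels, j))
    return p
```

The calibration scores `calib_alphas` would be computed with the same `nonconformity` function on held-out labeled data using the true labels, as in the conformal-prediction setup the paper builds on.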
Finally, they provide experimental results – again considering the three disciplines of confidence/credibility, interpretability and robustness. The main take-aways are that the resulting confidences are more reliable on out-of-distribution samples, which also include adversarial examples. Additionally, the nearest neighbors allow a very basic interpretation of the predictions.
$$((A-\omega^2 I)(A-\omega^2 I)+\omega^2 I)x = b.$$ It may be useful to note that the matrix factors as $$ (A-(\omega^2-i\omega)I)(A-(\omega^2+i\omega)I), $$ where $i^2 = -1$.
Details: $A$ is sparse and I won't have direct access to its entries. The dimension of the null space of $A$ is a non-negligible fraction of $n$. The dimension of the problem, $n$, will be as big as the computer's RAM will allow.
What is a good way to preprocess / precondition this system? Note that the RHS, $b$, will change when $\omega$ changes.
Notes: This is a follow-up question to this one. The idea of the proposed solution to that question shows that if we could perform a complete eigendecomposition on $A$, we would have a pretty much ideal preprocess. I have implemented a Lanczos iteration to approximate this eigendecomposition but it doesn't perform as well as I had hoped. I can explain this idea in more detail as an addendum if there is interest.
Of course full answers are appreciated, but they are not expected. I am mainly looking for ideas to investigate. Any comments and pointers to the literature are much appreciated.
Note to mods: Is this kind of question acceptable? I can change it to something more definite if asking for ideas is unacceptable.
This is what I plan on doing. First note that as $\omega\to \infty$ the matrix starts looking like $I(\omega^4+\omega^2)$, so we are mainly interested in when $\omega$ is comparable to the norm of $A$ and smaller.
I plan on using this information to construct an initial guess for $x$. I am still unsure on what preconditioner to use.
You have noticed that any eigenvector of $A$ is also an eigenvector of your composed matrix. When you compute eigenvectors of your matrix, you can use them in a deflation-type preconditioner, as described in [1, 2].
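For completeness, here is a rough matrix-free baseline that just exploits the complex factorization from the question: solve $(A-(\omega^2-i\omega)I)z=b$ and then $(A-(\omega^2+i\omega)I)x=z$ with an iterative method. This is only a sketch under the assumption that SciPy is available and that `A_matvec` (a hypothetical matrix-vector product for $A$) is the only access to the matrix; it is not a preconditioner by itself.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def solve_factored(A_matvec, n, b, w):
    """Solve ((A - w^2 I)^2 + w^2 I) x = b via the two complex-shifted factors."""
    def shifted(s):
        # (A - s*I) as a matrix-free operator; only matvecs with A are assumed available.
        return LinearOperator((n, n), matvec=lambda v: A_matvec(v) - s * v, dtype=complex)

    z, info1 = gmres(shifted(w**2 - 1j * w), b.astype(complex))
    x, info2 = gmres(shifted(w**2 + 1j * w), z)
    if info1 != 0 or info2 != 0:
        raise RuntimeError("GMRES did not converge; the shifted solves are what need preconditioning")
    return x
```

Since the full matrix is real, the imaginary part of `x` should sit at the level of the solver tolerance and can be discarded; the real difficulty is then preconditioning each shifted solve, which is where the deflation idea above would plug in.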
Given $f: R \times S \to T \times S$, we may write $f = \langle f_1, f_2\rangle$ where $f_1: R \times S \to T$ and $f_2: R \times S \to S$. Since $S$ is Hopfian, $f_2$ is an isomorphism. I wonder whether the fact that $f$ is an isomorphism implies that $f_1$ and $f_2$ are isomorphisms. Also please let me know if this is the correct approach or whether I should look at it in some other way.
P.S.: This is my first question on Math Stack Exchange, so please help me correct any mistakes which I might have made here.
We enumerate each choice i, and then enumerate every other choice j (j ≠ i). Let cnt = 0 at first; if choice j is twice longer than i, let cnt = cnt + 1, and if choice j is twice shorter than i, let cnt = cnt - 1. Then i is great if and only if cnt = 3 or cnt = -3.
If there is exactly one great choice, output it, otherwise output C.
We could deal with this by digits.
Because lowbit(x) takes out the lowest set bit of the number x, we can enumerate the number of trailing zeros.
Then, if we enumerate x as the number of trailing zeros, we enumerate a as well, such that a × 2^x is no more than limit and a is odd. We can find out that lowbit(a × 2^x) = 2^x.
In this order, we would find out that the lowbit() values we are considering are monotonically decreasing.
Because for every two numbers x, y, lowbit(x) is a divisor of lowbit(y) or lowbit(y) is a divisor of lowbit(x).
We can solve it by greedy. When we enumerate x in descending order, we check whether 2^x is no more than sum, and check whether there is such an a. We subtract 2^x from sum if such x and a exist.
If at last sum is not equal to 0, then it must be an impossible test.
If we choose one number whose lowbit = 2^(x-1), then we can choose at most one number whose lowbit = 2^(x-2), at most one number whose lowbit = 2^(x-3) and so on. So the total sum of them is less than 2^x and we can't merge them into sum.
If we don't choose one number whose lowbit = 2^(x-1), then it's just the same as if we don't choose one number whose lowbit = 2^x.
So the total time complexity is O(limit).
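A compact sketch of this greedy (written as a function rather than with the judge's I/O; lowbit(v) is computed as v & -v):

```python
def lowbit(v):
    return v & -v

def child_and_set(sum_, limit):
    chosen = []
    # Consider the numbers 1..limit in decreasing order of lowbit and take a number
    # whenever its lowbit still fits into the remaining sum.
    for v in sorted(range(1, limit + 1), key=lowbit, reverse=True):
        if sum_ == 0:
            break
        if lowbit(v) <= sum_:
            chosen.append(v)
            sum_ -= lowbit(v)
    return chosen if sum_ == 0 else None  # None corresponds to printing -1
```

The sort makes this O(limit log limit); bucketing the numbers by their lowbit value instead recovers the O(limit) bound stated above.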
The best way to delete all n nodes is deleting them in decreasing order of their value.
Consider each edge (x, y); it will contribute v_x or v_y to the total cost when it is deleted.
If we delete the vertices in decreasing order, then it will contribute only min(v_x, v_y), so the total cost is the lowest.
First, there is nothing in the graph. We sort all the areas of the original graph by their animal numbers in decreasing order, and then add them one by one.
When we add area i, we add all the roads (i, j), where j is some area that has been added.
After doing so, we have merged some connected components. If p and q are two areas in different connected components that we have merged just then, f(p, q) must equal v_i, because they are not connected until we add node i.
So we use Union-Find Set to do such procedure, and maintain the size of each connected component, then we can calculate the answer easily.
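A sketch of this procedure with a Union-Find (disjoint set) structure, written over 0-based vertices with `animals[i]` = a_i and `roads` a list of (u, v) pairs. The final division by the number of pairs assumes the problem asks for the average f(p, q), which is not restated in the editorial above.

```python
def child_and_zoo(animals, roads):
    n = len(animals)
    parent = list(range(n))
    size = [1] * n
    added = [False] * n
    adj = [[] for _ in range(n)]
    for u, v in roads:
        adj[u].append(v)
        adj[v].append(u)

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    total = 0  # sum of f(p, q) over all unordered pairs
    for i in sorted(range(n), key=lambda v: animals[v], reverse=True):
        added[i] = True
        for j in adj[i]:
            if not added[j]:
                continue
            ri, rj = find(i), find(j)
            if ri != rj:
                # merging two components: every newly connected pair gets f = animals[i]
                total += animals[i] * size[ri] * size[rj]
                parent[rj] = ri
                size[ri] += size[rj]
    return total / (n * (n - 1) // 2)
```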
In this problem, you are asked to count the triangulations of a simple polygon.
First we label the vertex of polygon from 0 to n - 1.
If the line segment (i, j) crosses the original polygon or is outside the polygon, f[i][j] is just 0. We can check it in O(n) time.
Otherwise, we have f[i][j] = sum over all k with i < k < j of f[i][k] * f[k][j], which means we split the polygon into the triangulation from vertex i to vertex k, a triangle (i, k, j) and the triangulation from vertex k to vertex j. We can sum these terms in O(n) time.
Finally, the answer is f[0][n - 1]. It's obvious that we didn't miss any triangulation. And we use a triangle to split the polygon each time, so if the triangle is different then the triangulation must be different, too. So we didn't count any triangulation more than once.
So the total time complexity is O(n^3), which is sufficient for this problem.
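A sketch of this interval DP. The geometric test "the diagonal (i, j) stays inside the polygon and does not cross its boundary" is abstracted into a `diagonal_ok` predicate; the always-True stub used here corresponds to the convex case (where the count is a Catalan number), and the general case needs a real segment-intersection/containment check. The modulus 10^9 + 7 is an assumption here, since the statement itself is not quoted above.

```python
MOD = 1_000_000_007

def count_triangulations(n, diagonal_ok=lambda i, j: True):
    f = [[0] * n for _ in range(n)]
    for i in range(n - 1):
        f[i][i + 1] = 1                      # adjacent vertices: empty triangulation
    for length in range(2, n):               # pairs by increasing distance
        for i in range(n - length):
            j = i + length
            if not diagonal_ok(i, j):
                continue                     # f[i][j] stays 0
            f[i][j] = sum(f[i][k] * f[k][j] for k in range(i + 1, j)) % MOD
    return f[0][n - 1]

# Convex sanity check: a convex hexagon has Catalan(4) = 14 triangulations.
assert count_triangulations(6) == 14
```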
The important idea of this problem is the following property of x mod k.
If floor(x / k) = 0 (that is, k > x), then x mod k remains x.
If floor(x / k) ≠ 0, then x mod k < x / 2.
We realize that every time a change happens to x, x is reduced by at least a half.
So let the energy of x be log2(x). Every time we modify x, it takes at least 1 unit of energy.
The initial energy of the sequence is the sum of log2(a_i) over all elements.
We use a segment tree to support queries for the maximum (and the sum) over an interval. When we need to deal with operation 2, we keep modifying the maximum of the segment until it is less than x.
Now let's deal with operation 3.
Every time we assign to an element on the segment tree, we charge that element with energy again.
So the total time complexity is O((n + m) · log(n) · log(max a)).
By the way, we can extend the operation 3 to assign all the elements in the interval to the same number in the same time complexity. This is an interesting idea also, but a bit harder. You can think of it.
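A compact sketch of such a segment tree: each node stores the sum and the maximum of its range, and the range-mod operation only recurses into a node while its maximum is at least x, which is exactly what makes the energy argument above give the amortized bound.

```python
class SegTree:
    def __init__(self, a):
        self.n = len(a)
        self.sum = [0] * (4 * self.n)
        self.mx = [0] * (4 * self.n)
        self._build(1, 0, self.n - 1, a)

    def _build(self, node, l, r, a):
        if l == r:
            self.sum[node] = self.mx[node] = a[l]
            return
        m = (l + r) // 2
        self._build(2 * node, l, m, a)
        self._build(2 * node + 1, m + 1, r, a)
        self._pull(node)

    def _pull(self, node):
        self.sum[node] = self.sum[2 * node] + self.sum[2 * node + 1]
        self.mx[node] = max(self.mx[2 * node], self.mx[2 * node + 1])

    def range_sum(self, node, l, r, ql, qr):          # operation 1
        if qr < l or r < ql:
            return 0
        if ql <= l and r <= qr:
            return self.sum[node]
        m = (l + r) // 2
        return (self.range_sum(2 * node, l, m, ql, qr) +
                self.range_sum(2 * node + 1, m + 1, r, ql, qr))

    def range_mod(self, node, l, r, ql, qr, x):       # operation 2
        if qr < l or r < ql or self.mx[node] < x:     # nothing below can change
            return
        if l == r:
            self.sum[node] = self.mx[node] = self.sum[node] % x
            return
        m = (l + r) // 2
        self.range_mod(2 * node, l, m, ql, qr, x)
        self.range_mod(2 * node + 1, m + 1, r, ql, qr, x)
        self._pull(node)

    def point_set(self, node, l, r, pos, x):          # operation 3
        if l == r:
            self.sum[node] = self.mx[node] = x
            return
        m = (l + r) // 2
        if pos <= m:
            self.point_set(2 * node, l, m, pos, x)
        else:
            self.point_set(2 * node + 1, m + 1, r, pos, x)
        self._pull(node)
```

Operations are called as `range_sum(1, 0, n-1, l, r)`, `range_mod(1, 0, n-1, l, r, x)` and `point_set(1, 0, n-1, k, x)`.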
The "+1" is for f_0 = 1.
So the remaining question is: how to calculate the multiplication inverse of a power series and the square root of a power series?
We use f(z) ≡ g(z) (mod z^n) to denote that the first n terms of f(z) and g(z) are the same.
We can simply use the Fast Fourier Transform to deal with multiplication. Note the unusual mod 998244353 (7 × 17 × 2^23 + 1); thus we can use the Number Theoretic Transform.
By doubling n repeatedly, we can get the first n terms of the square root of F(z) in O(n log n) time.
That's all. What I want to share with others is this beautiful doubling algorithm.
So the total time complexity of the solution to the original problem is O(n log n).
There is an algorithm solving this problem using divide and conquer and Fast Fourier Transform, which runs in O(n log^2 n). See the C++ code and the Java code for details.
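A minimal sketch of the two doubling (Newton) iterations used above. For brevity it uses naive O(n^2) truncated multiplication instead of NTT; replacing `mul` with an NTT-based product modulo 998244353 is what gives the O(n log n) bounds quoted above.

```python
MOD = 998244353

def mul(a, b, n):
    """First n terms of the product of power series a and b (naive convolution)."""
    res = [0] * n
    for i, ai in enumerate(a[:n]):
        if ai:
            for j, bj in enumerate(b[:n - i]):
                res[i + j] = (res[i + j] + ai * bj) % MOD
    return res

def inverse(f, n):
    """First n terms of 1/f, assuming f[0] != 0; doubling step g <- g * (2 - f*g)."""
    g = [pow(f[0], MOD - 2, MOD)]
    k = 1
    while k < n:
        k = min(2 * k, n)
        t = [(-x) % MOD for x in mul(f, g, k)]
        t[0] = (t[0] + 2) % MOD
        g = mul(g, t, k)
    return g

def sqrt_series(h, n):
    """First n terms of sqrt(h), assuming h[0] == 1; doubling step s <- (s + h/s) / 2."""
    inv2 = pow(2, MOD - 2, MOD)
    s = [1]
    k = 1
    while k < n:
        k = min(2 * k, n)
        hs = mul(h, inverse(s, k), k)
        s = [(x + y) * inv2 % MOD for x, y in zip(s + [0] * (k - len(s)), hs)]
    return s

# Sanity check: sqrt(1 + 2z + z^2) = 1 + z.
assert sqrt_series([1, 2, 1], 3) == [1, 1, 0]
```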
Can someone please explain the solution for 437C — The Child and Toy. In the question it's given that the child can remove one part at a time. Does that mean he can remove one vertex at a time? If yes, then how is it equal to removing the edges as given in the solution of the editorial?
if you remove a vertex you will break all connections with this vertex, and the cost as given will be sum of costs of all vertices connected to the removed vertex.
If you remove all the edges connected to a vertex at one time, then answer for the first test case will be 50 not 40. 40 will be the answer when the edges are removed one at a time. But in the question it is given that vertices are to be removed one at a time.
I too have the same doubt...can someone please explain!
Removing an edge implies removing the vertices connected to it.
in the first case, you can remove the first vertex, and this will break its connection with the second and the fourth vertex, and the cost will be 60, instead you can remove vertex 4 and 2 and break their connection with vertex 1 and the cost will be 10 + 10 = 20 and since you removed vertex 2 this will also break the connection with vertex 3 and then the total cost will be 50, but this is also not the ideal solution.
First, remove part 3, cost of the action is 20. Then, remove part 2, cost of the action is 10. Next, remove part 4, cost of the action is 10. At last, remove part 1, cost of the action is 0.
Thanks for the task (and all the other ones, too)!
my solutions executed faster than solutions of people who used sorting of the array, although I used 1 extra unnecessary loop.
Can somebody please explain 437B The Child and Set: why does that greedy solution (in decreasing order) work? I could not understand the editorial due to poor English and presentation (didn't expect this from Codeforces tutorials).
What is 'a' here? What is this -> Because for every two numbers x, y, lowbit(x) | lowbit(y) or lowbit(y) | lowbit(x).
Also, if we don't choose a number whose lowbit = 2^x, then we shouldn't choose two numbers whose lowbit = 2^(x-1), but we can choose four such numbers... so what is he trying to say?
During the contest I thought of creating an array of size limit. Then array[i-1] will store the number 2^k where k is the index of the first 1 bit (from right to left) of number i.
Then the problem basically asks to find a subset of the values stored in this array, whose total summation is equal to sum. However I couldn't figure out how to solve this part efficiently.
A simpler explanation of the solution to this problem would be appreciated.
It's similar to the Coin Change problem but with (possibly) many denominations.
Since the low bit of an odd number is always 1 we have ceil(limit / 2) coins with value 1.
We know that every number can be obtained as a sum of 1's. Let's denote the number of coins with value 1 as ones.
If sum <= ones then problem solved, use as many 1's as needed.
Else try to decrease sum as much as possible using coins with value greater than 1. If we can make sum <= ones then problem solved (1).
Unlike the Coin Change problem we don't need to minimize the number of coins, so the best way to decrease sum is to start with the biggest denominations.
Maybe not the best analogy but it can help :).
If we chose two numbers whose lowbit = 2^(x-1) and we have a number whose lowbit = 2^x, then we can replace those two numbers with a single number whose lowbit = 2^x, keeping the sum unchanged. So we should choose at most one number whose lowbit = 2^(x-1).
Can someone please tell me why in "Child and the Zoo" we sort them in increasing way? For me it is the decreasing way that works: when we add new area that has smaller number than areas that it connects then it creates the biggest minimum on the road that has just been created. And that is what we are looking for, isn't it?
Yes , I solved it sorting in decreasing way.
Couldn't understand 437E. What's the final solution? F[n-1]? If so, is there a proof that you won't count a triangulation twice when choosing K? I tried to come up with some sort of divide and conquer technique similar to this, but couldn't come up with anything that would prevent the overlapping solutions to be counted twice..
Yes, it's just f[0][n - 1].
Once we choose k, we also choose a triangle. So for different k, the triangulations are different.
Ahh, got it now! At least I was somewhat close to the correct approach, lol.
Just a minor feedback: when you say "...which means we split the polygon into the triangulation from vertex i to vertex j, a triangle (i, j, k) and the triangulation from vertex j to vertex k."
Shouldn't it be "...which means we split the polygon into the triangulation from vertex i to vertex k, a triangle (i, j, k) and the triangulation from vertex k to vertex j."?
Still one of the best editorials around here!
the line that contains "Because for every two number x, $y$, that ..."
Can somebody please elaborate 437C — The Child and Toy. Please.
For each edge e connecting nodes u and v, the cost to remove e can be f(u) if removed with node u or f(v) if removed with node v. Since we want to minimize the total cost we should remove e with the minimum cost. How?
Indeed my idea was not to disconnect the graph but to construct it, we start with an empty graph, then add the nodes one by one in ascending order of f(node), the new node should be connected only with the existing nodes.
My straightforward solution 6770869 for problem 437E - The Child and Polygon with an intended complexity O(N^3) got TLE. After a bit of optimization (just by a constant factor) it passed 6774608. I am a bit disappointed: why was there such a tight time limit? In a competition like this (where we don't get full feedback) authors usually let even unoptimized solutions with the correct complexity pass. Why wasn't this the case? Have any of you experienced a similar problem?
Thank you for your response. The mistake probably wasn't in the time limit, but in my memoization ;-) I expected that the number of ways is always nonzero and used that value as "not calculated yet". But it can be zero for a collinear points subproblem. It makes me feel much better about the problem!
I have: 5430175, when my deque<> experienced too many reallocations due to being declared in a loop. Solutions with worse complexity and better constant factor passed, my optimal solution didn't.
Why is it C instead of D? I assumed that since D is twice longer than all other options?
Both A and D are great choices. So the child chooses C.
If there is exactly one great choice then the child chooses it. Otherwise the child chooses C (the child think it is the luckiest choice).
Does anyone know a fast way to calculate the square root of a number by prime (or not prime) modulo?
Cipolla's algorithm is waiting for you.
Hi, may someone explain 438D - The Child and Sequence better? Thanks.
http://codeforces.com/blog/entry/12490#comment-171961 Xellos explained it well, too.
But, for some reason, the order in which I consider the numbers matters. If I try all numbers from 1 to limit, I get wrong answer (6785689). But, if I try numbers in the reverse order i.e. from limit down to 1, I get Accepted (6785709).
Can anyone explain why this happened?
I think it is because when you try numbers from limit down to 1, you'll have many numbers whose lowbit = 1 to make the sum equal to the sum this problem requires.
It's just like a bottle: if you put in the big stones first and then the small stones, you can possibly fill the bottle. But if you put in the small stones first and then the big stones, much empty room will possibly remain in the bottle.
I am not completely convinced about my algorithm, too. I don't know if it can be proved.
Your algorithm is surprisingly correct, with a rather convoluted proof. I'm not sure about the details, but basically you prove that your algorithm is correct for small values of limit, then you can also prove that for large values of limit, the sum will diminish quickly enough to reduce to a proven result or otherwise is too large that even with the algorithm in the editorial it's impossible. Probably the thing that saves you is that the small cases are indeed true: lowbit(1) = 1, lowbit(2) = 2, so you can prove for limit ≤ 2 and sum ≤ 3 either your algorithm solves it or your algorithm correctly states it's impossible.
Should I call the process of analyzing the complexity in 438D potential analysis?
Can someone elaborate on how to solve 437D - The Child and Zoo? I don't understand the editorial.
Begin by sorting all the vertices (areas) according to the number of animals in it, in decreasing order. Also begin with an empty graph.
Now add the vertices one by one into the graph in our new order, along with the edges (roads) connected to it.
Whenever we add a vertex v, we introduce a new component in the graph, namely that single vertex. Whenever we add an edge (v, i) for some existing vertex i to the graph, we might join two components, the component containing v (denote A) and the component containing i (denote B). If this is the case, then any path from A to B must necessarily pass the edge (v, i), and therefore pass the vertex v. Due to our sorting of the vertices, v must have the least cost among all vertices present, and so the value of f of this path is a_v, the number of animals in v.
Sorting the vertices, we have the order 40, 30, 20, 10. So first we introduce the fourth vertex. Clearly there's no edge added here, as the graph only contains one vertex.
Next, we introduce the third vertex. Now we have vertices 4 and 3, and we also have the edge 4 - 3. We add the edge 4 - 3 to the graph, which apparently joins two components together. Denote the component with 3 as A, and the component with 4 as B. (Note that A only contains the vertex 3, and B only contains the vertex 4.) Thus any path connecting A and B must visit vertex 3, and hence has a minimum animal of a_3 = 30. There is |A||B| = 1·1 = 1 such path, namely 3 - 4. This gives f(3, 4) = 30.
Introduce the second vertex and the edge 2 - 3. Now, again, the component with 2 is now connected to the component with 3 (namely 3 and 4). Denote again A, B accordingly. Any path joining A and B must visit vertex 2 and has minimum animal of a_2 = 20. There are |A||B| = 1·2 = 2 such paths, namely 2 - 3 and 2 - 4. So f(2, 3) = f(2, 4) = 20.
The same can be said for vertex 1, giving f(1, 2) = f(1, 3) = f(1, 4) = 10, hence the result.
In the second test case, there will be a road that connects two vertices in the same component (if you process the edges from top to bottom, this is edge 3 - 1). We ignore such edges.
In the third test case, when you introduce vertex 5, you will connect it with two components, component 1, 2, 4 and component 6, 7. This is why I keep saying A to be the component containing the current vertex as opposed to only the current vertex alone; after you process edge 4 - 5, you will now have the component 1, 2, 4, 5, thus when you process edge 5 - 6, you will connect |A||B| = 4·2 = 8 such paths connecting 1, 2, 4, 5 to 6, 7.
I hope this is clearer, but just ask if you still don't understand.
Thank you, I understand the solution now. It seems a bit harder than the typical Div2D/Div1B problem though.
Could someone please tell me why the equation F(z) = C(z)F(z)^2 + 1 in 438E has only one solution?
sqrt(1 - 4C(z)) starts with either 1 or -1. If you use the latter root, the numerator 1 - sqrt(1 - 4C(z)) = 2 - zP(z) for some power series P; dividing that by 2C(z), whose constant term is zero, is impossible, so the inverse doesn't exist.
I am sorry that I still don't understand. Could you please give a more detailed explanation? Thank you!
It's easy to find that b_0 is either 1 or -1.
For example, if C(z) = 2z - 4z^2 then sqrt(1 - 4C(z)) is either 1 - 4z or -1 + 4z. If we choose -1 + 4z then F(z) = (1 - (-1 + 4z)) / (2C(z)) = (2 - 4z) / (4z - 8z^2), which doesn't exist as a power series. If we choose 1 - 4z then F(z) = (1 - (1 - 4z)) / (2C(z)) = 4z / (4z - 8z^2) = 1 / (1 - 2z).
Also, quadratic equations usually have two solutions.
Thanks to the explanations, I have understood this algorithm. Actually there can be two different square roots, but only one of them keeps the inverse existing. By the way, I do not know the exact definition of the notation sqrt(S(x)) for a formal power series S(x), but I guess it should be unique. Can someone please answer this question? How is it defined?
But there can be two different X(x)s which satisfy X(x)^2 = S(x). So which one does sqrt(S(x)) refer to? In the editorial above it refers to the one starting with 1, so does it mean that sqrt(S(x)) should always have a non-negative constant term?
In this problem, it's the square root whose constant term is 1.
Due to this, we must take the root with the minus sign, F(z) = (1 - sqrt(1 - 4C(z))) / (2C(z)).
$a \times 2^y = b$ can be true only when y = 0. So, this is always false. So, the numbers chosen are always distinct.
For Div2B (The Child and the Set), I came up with a different solution which runs pretty efficiently. The idea is to continuously "split" numbers that have either already been used or are larger than the limit. We take all the 1 bits in sum and make them numbers (so a sum of 100101 would become 100000, 100, and 1), and split the numbers that don't satisfy the limit.
When splitting a number whose lowbit is x, we create two new numbers with lowbit (x - 1). So by splitting 100, we create numbers 10 and 110 or 1010 or 1110, etc, such that by using any two of these numbers with lowbit (x - 1) we can sum to a number of lowbit (x). Take these two numbers and split them again if necessary (above limit or already used). If the number isn't used and is below the limit, we mark it as used, and move on to split the next number.
The case where we can't split anymore (lowbit = 1, numbers like 111, 11, or 1) ends up becoming the case where we can't reach the sum. If we run out of numbers to split, then that means we have finished the problem, because all numbers satisfy the limit and are unique.
would you like to tell me the complexity of your solution..?
I believe it's O(limit) because we split each number under the limit at most once, though it varies a lot depending on the bits in the sum.
Can anyone link me his code for problem 437D — The Child and Zoo please ?
Can anyone please explain the solution to 438D: Child and Sequence. I cannot understand it fully. I do understand that there cannot be more than log2(x) changes on x. But I don't get the things after that.
Last Friday, I went to one of the health economics seminars that are organised at UCL; the format that is used is that one of the people in the group suggests a paper (typically something that they are working on) but instead of having them leading the discussion, one of the others takes the responsibility of preparing a few slides to highlight what they think are the main points. The author/person who suggested the paper is usually in the room and they respond to the short presentation and then the discussion is open to the group at large.
I missed a couple since they started last summer, but the last two I've been to have really been interesting. Last time the main topic was mapping of utility measures; in a nutshell, the idea is that there are some more or less standardised measures of "quality of life" (QoL $-$ the most common probably being the EQ5D and the SF6D).
However, they are not always reported. For example, you may have a trial that you want to analyse in which data have been collected on a different scale (and I'm told that there are plenty); or, and that's perhaps even more interesting, as Rachael pointed out at the seminar, sometimes you're interested in a disease area that is not quite covered by the standard QoL measures and therefore you want to derive some induced measure by what is actually observed.
In the paper that was discussed on Friday, the authors had used a Beta-Binomial regression and were claiming that the results were more reasonable than when using standard linear regression $-$ which is probably sensible, given that these measures are far from symmetrical or "normally distributed" (in fact the EQ5D is defined between $-\infty$ and 1).
I don't know much about mapping (so it is likely that what I'm about to say has been thoroughly investigated already $-$ although it didn't come out in the seminar, where people were much more clued up than I am), but this got me thinking that this is potentially a problem that one can solve using (Bayesian) hierarchical models.
The (very raw) way I see it is that effectively there are two compartments to this model: the first one (typically observed) is made by data on some non-standard QoL measure and possibly some relevant covariates; then one can think of a second compartment, which can be built separately to start with, in which the assumptions underlying the standard measure of QoL are spelt out (eg in terms of the impact of some potential covariates, or something).
The whole point, I guess, is to find a way of connecting these two compartments, for example by assuming (in a more or less confident way) that each of them is used to estimate some relevant parameter, representing some form of QoL. These in turn have to be linked in some (theory-based, I should think) way. A Bayesian approach would allow for the exchange of information and "feed-back" between the two components, which would be potentially very helpful, for example if there was a subset of individuals on which observations on both compartments were available.
Received June 15, 2017. Published online August 6, 2018.
Abstract: A $*$-ring $R$ is strongly 2-nil-$*$-clean if every element in $R$ is the sum of two projections and a nilpotent that commute. Fundamental properties of such $*$-rings are obtained. We prove that a $*$-ring $R$ is strongly 2-nil-$*$-clean if and only if for all $a\in R$, $a^2\in R$ is strongly nil-$*$-clean, if and only if for any $a\in R$ there exists a $*$-tripotent $e\in R$ such that $a-e\in R$ is nilpotent and $ea=ae$, if and only if $R$ is a strongly $*$-clean SN ring, if and only if $R$ is abelian, $J(R)$ is nil and $R/J(R)$ is $*$-tripotent. Furthermore, we explore the structure of such rings and prove that a $*$-ring $R$ is strongly 2-nil-$*$-clean if and only if $R$ is abelian and $R\cong R_1, R_2$ or $R_1\times R_2$, where $R_1/J(R_1)$ is a $*$-Boolean ring and $J(R_1)$ is nil, $R_2/J(R_2)$ is a $*$-Yaqub ring and $J(R_2)$ is nil. The uniqueness of projections of such rings is thereby investigated.
A certain amount earns simple interest of Rs 1750 after 7 years. Had the interest been 2% more, how much more interest would it have earned?
3 . What should come in place of the question mark(?) in the following number series?
4 . What should come in place of the question mark(?) in the following number series?
Hence, the question mark(?) should be replaced by 16.5.
5 . What should come in place of the question mark(?) in the following number series?
6 . What should come in place of the question mark(?) in the following number series?
7 . What should come in place of the question mark(?) in the following number series?
989.001 + 1.00982 $\times$ 76.792 = ?
We will now look at the algorithm for Newton's Method for approximating roots to functions.
Obtain a function $f$ and assume that a root $\alpha$ exists. Obtain an initial approximation $x_0$ to this root. Also obtain a maximum number of iterations allowed and an error tolerance $\epsilon$.
For $n = 0, 1, 2, \ldots$, compute the next approximation $x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$. If the signs of $f(x_n + \epsilon)$ and $f(x_n - \epsilon)$ are opposites of each other, then the accuracy of $x_n$ is verified and stop the algorithm. $x_n$ is a good approximation to $\alpha$. If the signs of $f(x_n + \epsilon)$ and $f(x_n - \epsilon)$ are the same, then $\alpha$ is not contained in the small interval $[x_n - \epsilon, x_n + \epsilon]$. Print out an error message.
If the maximum number of iterations is reached, then print out a failure message.
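A direct transcription of this algorithm as a small function (the names and the explicit derivative argument are my own choices, not part of the original description):

```python
def newtons_method(f, fprime, x0, max_iterations=50, eps=1e-10):
    x = x0
    for _ in range(max_iterations):
        dfx = fprime(x)
        if dfx == 0:
            raise ZeroDivisionError("f'(x_n) = 0, Newton's method breaks down")
        x = x - f(x) / dfx
        # Sign-check verification: a root lies in [x - eps, x + eps] if f changes sign there.
        if f(x - eps) * f(x + eps) < 0:
            return x
    raise RuntimeError("maximum number of iterations reached without verified accuracy")

# Example: approximate sqrt(2) as the positive root of f(x) = x^2 - 2.
root = newtons_method(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0)
```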
Prime numbers, as we know, are the positive numbers bigger than 1 which are not divisible by any number except themselves and 1. Every number greater than 1 can be represented as a product of prime numbers. These prime numbers are known as the prime factors of that particular number. Prime factorization is the method of finding the prime factors of a number. The set of prime factors of a particular number is always unique.
Example 1: Determine prime factors of 45.
Hence, prime factors of 45 are 3 $\times$ 3 $\times$ 5.
Example 2: Calculate prime factors of 126.
Hence, prime factors of 126 are 2 $\times$ 3 $\times$ 3 $\times$ 7.
Example 3: Determine prime factors of 84.
Hence, prime factors of 84 are 2 $\times$ 2 $\times$ 3 $\times$ 7.
Example 4: Find prime factors of 3250.
Hence, prime factors of 3250 are 2 $\times$ 5 $\times$ 5 $\times$ 5 $\times$ 13.
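The same procedure can be written as a short trial-division routine; the following sketch reproduces the worked examples above:

```python
def prime_factors(n):
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:      # divide out each prime factor as often as it appears
            factors.append(d)
            n //= d
        d += 1
    if n > 1:                  # whatever remains is itself prime
        factors.append(n)
    return factors

assert prime_factors(45) == [3, 3, 5]
assert prime_factors(126) == [2, 3, 3, 7]
assert prime_factors(84) == [2, 2, 3, 7]
assert prime_factors(3250) == [2, 5, 5, 5, 13]
```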
This is less of a "how it's done" question and more of a "am I understanding this right" question, because my rudimentary understanding of physics tells me that if your spaceship is firing a railgun projectile with megatons or gigatons of kinetic energy behind it, recoil from the weapon should also be destroying you as well as your target. Whatever asteroid you cobbled your ship together out of probably isn't going to survive that kind of abuse unless it's made from adamantium.
Or am I wrong? I get the sneaking suspicion there's some fundamental part of how a railgun works that I'm either ignorant of or don't properly understand. Can anyone help clarify this for me?
The conserved quantity causing the recoil is the product of mass and velocity. The projectile is a small fraction of the mass of the ship. The recoil will be the same fraction of the velocity.
E.g. if the projectile is 100g and shot at a million miles per hour, and the ship was 100 tons then its recoil would be 1 mile per hour.
The additional energy needed to correct the motion of the ship can be in principle much smaller than what you expended on the projectile, since that's mass times the square of the velocity. (The real needs will depend on how your thrusters work: how much reaction mass you are willing to lose).
On military vehicles there are special locations called "hardpoints". A hardpoint is a specially reinforced location on the frame that is designed to support the weight of something heavy and/or withstand the force of something with a lot of recoil.
On null-grav vehicles, a hardpoint is also a specially reinforced section of the frame, but the design of the entire frame of the vehicle also takes the location of these hardpoints into consideration, due to the lack of constant acceleration (ie: gravity), and the potential variable acceleration effects inherent in an omni-directional environment. Otherwise any recoil of weapons (or engines) being fired might damage or warp the frame, or possibly impart acceleration in an undesirable way. In reality, weapon systems would either be axially located, or would consist of multiple matched weapons (pairs, triple-config, quad-config, etc.,) placed so that they do not impart spin in any given direction.
It is true that in reality, unleashing a weapon with a exa-joule or zetta-joule output would impart acceleration away from the direction of fire. Even smaller weapons would impart some fraction of acceleration which may result in deflection of angle of travel or even in spin.
Therefore, the frame of a vessel mounting such a capital weapon must be designed to not only withstand the acceleration force of the weapon itself, it must also be able to withstand the force of all the weapons it mounts, plus the engines, plus any impact (armor) it is designed to take, plus any gravitational stresses it may come under during travel near massive objects such as planets, stars, or maybe even stations.
How long does it take to accelerate the projectile?
If you have a really long launcher, you can fling lumps of metal around at great speeds and not have ship-shattering recoil. What the ship feels instead is a continuous push from the launcher over a length of time.
Let me explain that will some simple maths.
The product of $Force\times Time$ (the impulse) is going to be the total amount of momentum imparted to the projectile. You can push really hard for a short length of time, or you can push gently for a longer period of time, and the results will be the same.
When designing your railgun/mass-driver/doomsday-weapon, this is one of the factors you need to consider.
Newton's Third Law: for every action, there's an equal and opposite reaction.
If you push hard against something, the something pushes back hard. If you push against it too hard, you'll damage yourself.
It's not specific to railguns. Railguns aren't magic. Rocket engines work exactly the same way - they throw mass at (relatively) huge velocities. So why doesn't a rocket engine break your ship apart?
First, there's the issue of momentum and inertia. Your ship is much more massive than the projectile, so the recoil only causes a tiny amount of change in the velocity of the ship - a small acceleration. This is extremely important in weapons like railguns (and impulse engines of any kind) - the energy of the projectile goes up with the square of the velocity, but the momentum only goes up linearly. And the conserved quantity here is momentum - so you can increase the energy of the projectile while keeping the recoil identical "simply" by making the projectile less massive. Mind you, energy isn't the only thing you care about with a projectile, but most sci-fi railguns don't care too much about the momentum of the projectile. Lasers are an extreme example - they fire "rounds" that move at the speed of light, but impart only an absurdly tiny amount of momentum (though not zero - you can use this to construct photonic drives or solar sails and the like).
Second, the force of the launch is much more spread out over time. You're accelerating the projectile through a barrel - and the length of the barrel is the difference between the impact of the projectile on the target and on the firing ship. If you have a barrel that's ten meters long, that gives you a lot lower acceleration than when the projectile hits a piece of armour.
In the end, you don't even have to think about railguns and spaceships, really. When you throw a rock at someone, you can cave their skull in easily, while your arm doesn't really get any damage. It's all about making the stress as small as possible on the attacker, while maximizing it on the defender - concentrating the round's impact in time and space.
The basic run down of what's happening is that there is recoil, but the difference is in time.
Let's say you shoot something at a speed of 10 (obviously the numbers and units don't matter, so ignore that). The recoil is also 10. This is the same no matter whether you're using a normal cannon or a railgun.
With a cannon the 10 recoil is all delivered at T1 (or instantly), whereas with a railgun, because of how it works, the recoil is delivered at T1, T2, T3, T4, T5, etc., based on the number of magnets the round passes, which equates to 10 divided by the number of magnets... i.e. Recoil / Time.
So you still get the same recoil, it's just not felt because it is divided temporally, just like how ion drives work vs chemical thrusters. You can get the same velocities. It just takes longer with one over the other which impacts how you'd use them.
Because the ship is so much more massive than the projectile.
Conservation of momentum gives m_ship * v_ship = -m_proj * v_proj. Rearrange it and you can see the ratios: v_ship = -(m_proj / m_ship) * v_proj.
So a 1 kg projectile fired at 1000 m/s will cause a 1000 kg ship to recoil back at 1 m/s.
So if a ship with a mass of 5.5 teratons and a velocity of .1c fired a 100 kiloton projectile at .7 or .99c, the recoil felt by the ship wouldn't disturb it to any great degree?
Let's do the math. We need to solve for v_s (recoil velocity of the ship).
And we get a v_s of −3.818 m/s. The ship will hardly move.
BUT WAIT! That's Newtonian Physics. Momentum being mass times velocity is actually a simplification for speeds much slower than the speed of light. We're working at significant fractions of the speed of light so we need to account for the Lorentz factor. This changes things. Momentum grows without bound as you approach the speed of light.
The full momentum equation is: mv / sqrt(1 − v²/c²). How much of a difference does this make? Let's look at our projectile in both. Newtonian momentum says a 100 kT projectile at .7c has a momentum of 2.1e16 kg·m/s. But its relativistic momentum is 2.94e16 kg·m/s. About a 40% increase.
At 0.99c the relativistic momentum is about 7 times greater than the Newtonian value: 3e16 kg·m/s vs 2.1e17 kg·m/s.
Buuuut the ship is so massive compared to the projectile that this means a change of roughly −38 m/s.
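A quick numeric check of those figures (assuming metric tons, so 5.5 teratons = 5.5e15 kg and 100 kilotons = 1e8 kg, and using p = gamma * m * v for the relativistic momentum):

```python
import math

c = 3e8            # m/s, the rounded value used above
m_ship = 5.5e15    # kg
m_proj = 1e8       # kg

def recoil(v_frac):
    v = v_frac * c
    gamma = 1 / math.sqrt(1 - v_frac ** 2)
    p_newton = m_proj * v            # Newtonian momentum of the projectile
    p_rel = gamma * p_newton         # relativistic momentum
    return p_newton / m_ship, p_rel / m_ship

print(recoil(0.7))    # ~3.8 m/s (Newtonian) vs ~5.4 m/s (relativistic) recoil
print(recoil(0.99))   # ~5.4 m/s vs ~38 m/s
```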
Yes, most such weapons as depicted in fiction won't be viable.
The firing ship has a few advantages. They get to pick where the momentum and kinetic energy is deposited, while the target ship gets hit somewhere random.
They also get to accelerate the projectile over a slightly longer period and over a wider area; the length of the ship, if it is an axis-mount weapon.
But as the KE and velocity of the projectile grow, the time it has to cross the length of the ship goes to zero, which reduces the period it has to "soak" the momentum and KE of the projectile. It still can soak it over a longer distance.
Chemical bonds are limited in strength, and our conventional "matter" based engineering is limited by the strength of chemical bonds. If you accelerate a projectile to a significant fraction of c over the length of your ship, the time it takes is going to be bounded, hence the amount of momentum/energy required to be transferred per second will be bounded below.
Given the strength of chemical bonds, the amount of solid matter the projectile must be coupled with will be bounded below.
A magnetic launcher somewhat helps here, as you can push on/pull on the projectile from far away. However, there are falloff problems with magnetic fields, and near the projectile the slope of the magnetic field is going to be strong enough that no "stationary" matter is going to be permitted.
Eventually the competing "magnetic field slope falls off with distance" and "the field has to be very steep to get the acceleration we need" and "nothing can be close to the projectile and survive while stationary" are going to render the problem unsolvable, probably long before a large-fraction-of-c gun with a barrel modern-naval-ship-length order of magnitude could exist. Let alone finding a way for the crew to survive near the magnetic fields involved.
This then reduces you to fantasy physics.
One way to solve this problem is to spread acceleration over a longer period. Do a high velocity magnetic launch, then use lasers to keep on adding impulse over extremely long distances. Or, have gun barrels that are not single solid objects, but spread over an area of space the size of a planet or solar system or larger.
Recoil dampener / recoil buffer / shock absorber: We add a moving element on the ship, possibly inside the gun, which changes the lots-of-force applied by the gun over a millisecond to lots-of-force/1000 over 1 second.
Armor: We know where the lots-of-force/1000 is applied, so we design the ship to withstand it.
Mass: Compared to the mass of the weapons platform, the mass of the projectile is very small, which cancels out the high velocity. Thus, while the gun can serve to accelerate the ship, it's likely an inefficient drive (unless you select a big enough gun).
When materials are subjected to forces, what happens to them is determined by their properties - their strength (sheer, tensile, compressive, etc.), their brittleness, and other factors. If the force is below the threshold they can handle, then they can transmit that force externally, onto other things; if the force is too great, it causes changes internally, and the material fails.
In other words, if you push something gently, then it can pass that on to what's behind it; push too hard, and it breaks.
They're made of something strong.
They're constructed in such a way as to spread any force applied to them across as much of your craft's structure as possible.
This reduces the amount of force that any one part of your craft takes, helping to keep it below that critical threshold. The important part, however, isn't spreading the force over more space - it's time that's key.
With a weapon like a cannon or railgun, you can accelerate the projectile gradually over a long track or barrel, applying force the whole time by the expansion of gas, a magnetic field, or some other means. This means you can apply a (relatively) gentle force, which adds up over time to give the projectile a very large velocity. This reduces the force that the structure has to support at any given moment while that projectile is being accelerated.
On the other hand, when that projectile strikes a target, the material of that target will be attempting to bring the projectile to rest in a much shorter time. Force is defined as the rate of change of momentum, and the time you spent giving the projectile its momentum is much longer than the amount of time it spends punching through an armour plate - the peak forces are much higher, and therefore more likely to exceed the threshold at which the material fails. This explains how your ship can be made of the same stuff as theirs, but theirs breaks and yours doesn't. By keeping peak forces low, your ship can spread the force out and experience the weapon fire as a slight push applied to the whole ship, while theirs has the force concentrated and experiences it as a hole being punched somewhere.
Because the projectile came in at great speed, they weren't able to spread the force over more time. This meant that they exceeded the peak forces of their materials.
Because their materials failed, they were unable to transmit the forces further through the ship and spread them over space.
Because they couldn't spread the forces across more space, they couldn't affect a large enough part of the ship to add up to much mass.
There's no air to carry a shockwave, so this process is your most effective means of spreading damage through the enemy ship.
This process of generating as many fast-moving chunks as possible so as to spread the damage is called fragmentation. Sometimes your projectile might be designed to shatter on impact, or even explode beforehand; other times it's just about striking in such a way that you maximise the fragments thrown off (perhaps by having a round that will tumble erratically as it passes through the target rather than punching through pointy end first).
While both the firing ship and the target will experience the same change in energy (roughly, as there is still a minimum amount of friction in space), the way this energy is transferred is radically different.
On the receiving end, the transferral of energy is abrupt and near-instantaneous. This means that the applied Force is very large indeed.
On the sending end; the energy is applied over a trajectory, and therefore over time. This makes the Force at any instant much lower. Although it will definitely cause a backwards acceleration for the whole vessel, the material impact is much more limited.
The two material factors to keep in mind here when it comes to measuring the destruction of either party are Strain (Bend until it breaks) and Impact (punching a hole in a sheet of paper). Both of these are mostly uncorrelated. The defending ship will benefit most from a high Impact Resistance, while the firing ship will survive through its Strain Resistance.
If you simultaneously fired another shot in the opposite direction the motions would cancel out and the shooter would not be perturbed.
The Semantic Web is a collection of technologies that facilitate universal access to linked data. The Resource Description Framework (RDF) model is one such technology that is being developed by the World Wide Web Consortium (W3C). A common representation of RDF data is as a set of triples. Each triple contains three fields: a subject, a predicate, and an object. A collection of triples can also be visualized as a directed graph, with subjects and objects as vertices in the graph, and predicates as edges connecting the vertices. When large collections of triples are aggregated, they form massive RDF graphs. Collections of RDF triple data sets have been growing over the past decade, and publicly-available RDF data sets now have billions of triples. As data sizes continue to grow, the time to process and query large RDF data sets also continues to increase. This work presents RDF3x-MPI, a new scalable, parallel RDF data management and querying system based on the RDF3x data management system. RDF3x (RDF Triple eXpress) is a state-of-the-art RDF engine that is shown to outperform alternatives by one or two orders of magnitude, on several well-known benchmarks and in experimental studies. Our approach leverages all the data storage, indexing, and querying optimizations in RDF3x. We additionally partition input RDF data to support parallel data ingestion, and devise a methodology to execute SPARQL queries in parallel, with minimal inter-processor communication. Using our new approach, we demonstrate a performance improvement of up to 12.9$\times$ in query evaluation for the LUBM benchmark, using 32-way MPI task parallelism. This work also presents an in-depth characterization of SPARQL query execution times with RDF3x and RDF3x-MPI on several large-scale benchmark instances. | CommonCrawl |
At first glance, this article appears to be about a physics principle called the conservation of energy, i.e. "Energy is not created or destroyed, only changed in form". But the energy conservation principle only serves as an example, and this article has a higher purpose — it shows how shaping and testing theories creates scientific fields like physics.
Scientific fields don't spring into being because researchers perform experiments and publish papers — if that were true, astrologers could make astrology a science by simply hiring a lot of scientists. Instead, a scientific field is defined by its theories and established principles. A field's theories must be testable and potentially falsifiable in practical tests, and work in the field must address the field's defining theories.
This article will show how a description — "a perfectly insulated enclosure doesn't gain or lose heat energy" — can become a testable, general explanation, a scientific theory, by way of inductive generalization — "the total amount of energy in an isolated system remains constant over time." — and by that process can help define a scientific field.
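For reference, the kinetic-energy relation that the discussion below calls equation (1) is the standard

$$E_k = \frac{1}{2}\,m\,v^2 \qquad (1)$$

where $m$ is the mass of the moving object and $v$ its velocity.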
Because this is about science, it would be nice if there were a convenient way to verify equation (1), using an everyday observation, especially the surprising fact that the energy of a moving mass is proportional to $v^2$, the square of the velocity. Is it really true that an object has energy e at v, but has four times the energy (4 e) at 2 v?
Yes, it's true — concisely, double velocity, four times energy, and it's true about all moving masses. An example is a thrown baseball, which explains why the most skilled major-league pitchers rarely exceed 100 mph fastballs, and why a 100 mph fastball requires twice the throwing energy of a 71 mph ball (since $(100/71)^2 \approx 2$).
Another example is wind velocity and wind energy. When we look at wind velocity charts, unless we know about the kinetic energy equation, we might think that a 100 mph wind is twice as powerful as a 50 mph wind. But in fact it's four times as powerful. This explains why automobile efficiency declines so quickly above 60 mph — the wind resistance begins to increase as the square of the car's velocity.
My favorite kinetic energy example is car stopping distances. Based on equation (1), it would seem that a car's kinetic energy should be proportional to the square of its speed. Is there any way to check this? Yes, as a matter of fact, there is — because of how car braking works, a skid mark's length is a linear measure of dissipated kinetic energy. This means each foot of a skid mark distance represents an equal amount of dissipated energy.
How does that relate to equation (1)? Well, it means that when you double a car's speed, its stopping distance becomes four times greater. Guess how many people know this about their cars.
Again, because this is about science, we should check this idea, compare it to real car stopping distances — see if it holds up. Let's start with the most basic experiment — let's pretend we know nothing about kinetic energy and want to rely on direct tests. So with that in mind I go to an empty parking lot and discover that my car requires 19 feet to stop from a speed of 20 mph (not including reaction time), and 76 feet at 40 mph. Based on my measurements, I can create a description of a stopping car, and I can use it to create a theory about car stopping distances.
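Before formalizing the steps below, notice what those two measurements already suggest:

$$\frac{d_{40}}{d_{20}} = \frac{76\ \text{ft}}{19\ \text{ft}} = 4 = \left(\frac{40\ \text{mph}}{20\ \text{mph}}\right)^2$$

which is exactly the scaling a $v^2$ energy law predicts.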
A field measurement provides two data points that correlate vehicle speed and stopping distance.
The experimental data are used to make a prediction about other speeds not actually measured.
The prediction is successfully compared to field measurements for a range of speeds.
The validation of the theory leads to its formal expression as the kinetic energy equation.
Item (a) above is a simple field measurement — a description — that by itself cannot produce a theory about physics.
Item (b) above makes an inductive, general explanation — a theory — based on a specific experiment.
Item (c) above tests and confirms the theory using more field data.
Item (d) above creates an equation that concisely expresses the theory we've tested.
All of the above steps are required for science, but only the last step can contribute to a field's corpus of tested theory, and to the definition of a scientific field.
Because we have moved from experiment to the level of theory, and because we have tested our theory, we have a reliable general statement about all masses in the universe — all of them follow the kinetic energy equation. This equation can make accurate statements about wind, water, planets and satellites, cars, and any other moving object.
The kinetic energy equation is just one in a group of central defining theories that define physics, and that make it a science.
My reason for explaining this in such detail is because there are fields that fancy themselves to be sciences, but that never get to steps (b) through (d) in the above list. Such fields never generalize observations, test the generalizations, or craft theories that would place their field on a solid scientific foundation. Such fields are sciences in name only.
Potential energy is the energy of position or state. Everyday examples include a charged battery or a book on a high shelf. The electrical energy stored in a fully charged alkaline D-cell battery is about 74,970 joules, or 20.83 watt-hours (source).
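For two masses, the gravitational potential energy that the variable list below refers to is conventionally written

$$GPE = -\,\frac{G\,m_1\,m_2}{r}$$

where $G$ is the gravitational constant and $m_1$, $m_2$ are the two masses.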
r = radius between m1 and m2, meters.
GPE values are always ≤ 0.
We will revisit the issue of negative GPE and the reasoning behind it.
y = changed height of book, 2 meters.
In the case of kinetic energy discussed earlier, the energy added to the system is represented by the velocity of the mass. But where is potential gravitational energy located? We lifted a book in a gravitational field, adding potential energy to the system, but where is the added energy located?
A very small mass indeed, about 1/3 that of a small bacterium.
In a closed system, energy is constant over time.
Potential energy is the energy of position or state.
GPE is present between masses, and as the masses approach, GPE decreases.
Planetary orbits are an excellent example of the principle of energy conservation, because over time, a mass in an elliptical orbit (most orbits are elliptical) converts energy from potential to kinetic and the reverse, while maintaining a constant total energy (the sum of potential and kinetic energy). Also, orbital observations — experimental confirmations — are relatively easy to carry out.
Notice about equation (8) that we're assuming only one of the two masses is in motion. This assumption is optional — the physics is consistent regardless of how the kinetic energy is pictured as distributed between the two masses, but making this assumption simplifies the calculation.
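In these terms, the bookkeeping the simulator performs is presumably of the form

$$E_{\text{total}} = \underbrace{\tfrac{1}{2}\,m\,v^2}_{\text{kinetic}} \; \underbrace{-\,\frac{G\,M\,m}{r}}_{\text{potential}} = \text{constant}$$

with only the orbiting mass $m$ treated as moving, as described above.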
Aware that a picture is thought to be worth $1 \times 10^3$ words, I have written a numerical orbit simulator to show that energy is conserved. Click within the image below to see the result.
Click below to start or stop the animation.
This orbital simulator is realistic enough to show that, as the orbit proceeds and as the orbiting body approaches and recedes from the parent body, the ever-changing relationship between kinetic and potential energy produces a constant sum ("Total energy" above), consistent with the principle of energy conservation.
One of the earliest insights into planetary orbits was made by Johannes Kepler, whose second law says, "A line joining a planet and the Sun sweeps out equal areas during equal intervals of time." This simulation shows agreement with this law (see the green ellipse segment and the "Swept area" value below the animation), apart from a small deviation that results only from limited numerical precision.
Indeed, it seems Kepler's second law contains within it the idea of energy conservation, although Kepler lived long before anyone formalized the idea of energy conservation. In this equivalence, the radius of the green ellipse segment represents potential energy, and its width (its subtended angle) represents kinetic energy. Put another way, the statement that orbital energy is conserved, and Kepler's second law, say the same thing.
Notes on the animation: The orbital simulator uses a relatively new Web (HTML5) feature called "canvas", a simple and efficient way to add animations to a Web page. Unfortunately, as is so often the case, Microsoft's browser doesn't support canvas and must be coerced into supporting it in a somewhat crippled way. If the simulation is slow and you're using Microsoft's browser, by all means install a different browser — any other browser. Browsers that run this simulation blazingly fast include Chrome and Firefox.
Apart from explaining the principle of conservation of energy, my purpose in this article is to show how scientific fields function — how they depend on the existence of testable principles and theories to unify research efforts within the field, as well as confer legitimate scientific status to the field.
Can a field be scientific without established, testable principles and theories? No, not really. Without theories, it's not obvious what the field would mean — what context would exist to guide research.
In a physics context, for example, a new idea that contradicted the principle of energy conservation would have to be accompanied by substantial evidence and an alternative explanation for the many observations that appear to support energy conservation.
Fields without central defining theories (e.g. psychology, sociology and others) don't have this problem — if a new idea comes along, it doesn't have to show how it works within, or contradicts, a pre-existing theoretical framework, for the simple reason that there's no such framework. This makes life very easy for psychologists, who, by wandering in a theoretical vacuum, can say or do practically anything.
This indifference to theoretical issues wouldn't be a problem for psychology any more than it is for philosophy, except for one thing — unlike philosophers, psychologists have clinics and pose as mental doctors to a gullible public.
But because of the absence of tested theories to define and discipline psychology, there is no mental medicine to stand alongside physical medicine. As a result, in spite of the existence of mental health clinics everywhere, there aren't actually any mental doctors to stand alongside physical doctors: the people who staff mental health clinics are each more or less effective solely on the strength of their personal skills, not because of their status as psychologists.
This article shows how physicists solve problems, how physics works, and why it is a science. Physics is a science only because of the existence of well-tested physical theories, as well as constant research to refine and sometimes falsify physical theories.
Physical theories have properties that legitimate scientific theories are expected to have — objectivity, meaning different observers will come to the same conclusion, falsifiability, meaning new persuasive evidence can overthrow existing theories, and consistency, meaning different physical theories must not contradict each other.
The three elements are defined by different underlying physical principles (kinetic energy, potential energy, and conservation of energy), and their ability to work together as well as model nature shows that physical theory is internally consistent as well as empirical. This is why physics is a science — not because of big research budgets, not because of impressive laboratories, not because of white lab coats, but because of theories that are consistent with each other and with nature, theories that define physics.
I regularly hear from psychologists who object to my frequent comparisons of psychology to real science, who try to say there are different kinds of science (and there are — there's science and pseudoscience), who ask why this is all so important. Isn't psychological science good enough?
In reply, I ask whether psychologists intend to pose as mental doctors, tell us what's wrong with us, estimate how many of us are "mentally ill" and how that number seems inevitably to increase as the years go by. If psychologists want to be thought of as doctors, then to avoid more disasters like Recovered Memory Therapy, psychologists must adopt scientific methods. And if they can't or won't do that, they need to stop pretending to be either scientists or doctors.
Conservation of Energy — "Energy is not created or destroyed, only changed in form."
Kinetic energy — the energy of motion.
Potential energy — the energy of position or state.
Inductive Generalization — a logical step whereby an experimental outcome is proposed as a general explanation.
Wind Energy — the kinetic energy of wind.
Vehicle Stopping Distance And Time — an online table of stopping distances derived from field measurements.
Battery Data — shows the electrical energy stored in some common battery types.
Gravitational potential energy — the energy associated with a gravitational field.
Gravitational constant — an empirical physical constant used in gravitational work.
Mass-energy equivalence — the statement that mass is a measure of energy, and vice versa.
Speed of light — an important physical constant.
Johannes Kepler — pioneering astronomer, from a time when it was dangerous to make claims the Church didn't like.
Kepler's laws of planetary motion — an early effort to craft a theory of orbits.
False memory syndrome — the outcome of Recovered Memory Therapy. | CommonCrawl |
Farmer John is attempting to sort his $N$ cows ($1 \leq N \leq 10^5$), conveniently numbered $1 \dots N$, before they head out to the pastures for breakfast.
Currently, the cows are standing in a line in the order $p_1, p_2, p_3, \dots, p_N$, and Farmer John is standing in front of cow $p_1$. He wants to reorder the cows so that they are in the order $1, 2, 3, \dots, N$, with cow $1$ next to Farmer John.
Today the cows are a bit sleepy, so at any point in time the only cow who is paying attention to Farmer John's instructions is the cow directly facing Farmer John. In one time step, he can instruct this cow to move $k$ paces down the line, for any $k$ between $1$ and $N-1$ inclusive. The $k$ cows whom she passes will amble forward, making room for her to insert herself in the line after them.
After that first move, a new cow is at the front of the line and is the only one paying attention to FJ, so in the second time step he may give that cow an instruction, and so forth until the cows are sorted.
Farmer John is eager to complete the sorting, so he can go back to the farmhouse for his own breakfast. Help him find a sequence of instructions that sorts the cows in the minimum number of time steps.
The first line of input contains $N$. The second line contains $N$ space-separated integers: $p_1, p_2, p_3, \dots, p_N$, indicating the starting order of the cows.
The first line should contain a single integer, $K$, giving the minimum number of time steps required to sort the cows.
The second line should contain $K$ space-separated integers, $c_1, c_2, \dots, c_K$, each in the range $1 \ldots N-1$. Furthermore, if in the $i$-th time step FJ instructs the cow facing him to move $c_i$ paces down the line, then after $K$ time steps the cows should be in sorted order.
If there are multiple optimal instruction sequences, your program may output any of them. | CommonCrawl |
The Robertson–Seymour theorem says that any minor-closed family $\mathcal G$ of graphs can be characterized by finitely many forbidden minors.
Is there an algorithm that for an input $\mathcal G$ outputs the forbidden minors or is this undecidable?
Obviously, the answer might depend on how $\mathcal G$ is described in the input. For example, if $\mathcal G$ is given by a machine $M_\mathcal G$ that can decide membership, we cannot even decide whether $M_\mathcal G$ ever rejects anything. If $\mathcal G$ is given by finitely many forbidden minors - well, that's what we're looking for. I would be curious to know the answer if $M_\mathcal G$ is guaranteed to stop on any $G$ in some fixed amount of time in $|G|$. I'm also interested in any related results, where $\mathcal G$ is proved to be minor-closed with some other certificate (as in the case of $TFNP$ or WRONG PROOF).
Update: The first version of my question turned out to be quite easy, based on the ideas of Marzio and Kimpel, consider the following construction. $M_\mathcal G$ accepts a graph on $n$ vertices if and only if $M$ does not halt in $n$ steps. This is minor closed and the running time depends only on $|G|$.
The answer by Mamadou Moustapha Kanté (who did his PhD under supervision of Bruno Courcelle) to a similar question cites A Note on the Computability of Graph Minor Obstruction Sets for Monadic Second Order Ideals (1997) by B. Courcelle, R. Downey, and M. Fellows for a non-computability result (for MSOL-definable graph classes, i.e. classes defined by a Monadic Second order formula) and The obstructions of a minor-closed set of graphs defined by a context-free grammar (1998) by B. Courcelle and G. Sénizergues for a computability result (for HR-definable graph classes, i.e. classes defined by a Hyperedge Replacement grammar).
The crucial difference between the computable and the non-computable case is that (minor-closed) HR-definable graph classes have bounded treewidth, while (minor-closed) MSOL-definable graph classes need not have bounded treewidth. In fact, if a (minor-closed) MSOL-definable graph class has bounded treewidth, then it is also HR-definable.
The treewidth seems to be really the crucial part for separating the computable from the non-computable cases. Another known result (by M. Fellows and M. Langston) basically says that if a bound for the maximum treewidth (or pathwidth) of the finite set of excluded minors is known, then the (finite) minimal set of excluded minors becomes computable.
It is not even known whether the (finite) minimal set of excluded minors for the union (which is trivially minor-closed) of two minor-closed graph classes each given by their respective finite set of excluded minors can be computed, if no information about treewidth (or pathwidth) is available. Or maybe it has even been proved in the meantime that it is non-computable in general.
Segment Tree is used in cases where there are multiple range queries on an array and modifications of elements of the same array. For example, finding the sum of all the elements in an array from indices $$L$$ to $$R$$, or finding the minimum (famously known as the Range Minimum Query problem) of all the elements in an array from indices $$L$$ to $$R$$. These problems can be easily solved with one of the most versatile data structures, Segment Tree.
What is a Segment Tree?
The root of $$T$$ will represent the whole array $$A[0:N-1]$$.
Each leaf in the Segment Tree $$T$$ will represent a single element $$A[i]$$ such that $$0 \le i \lt N$$.
The internal nodes in the Segment Tree $$T$$ represent the union of elementary intervals $$A[i:j]$$ where $$0 \le i \lt j \lt N$$.
The root of the Segment Tree represents the whole array $$A[0:N-1]$$. Then it is broken down into two half intervals or segments, and the two children of the root in turn represent $$A[0:(N-1) / 2]$$ and $$A[ (N-1) / 2 + 1 : (N-1) ]$$. So in each step, the segment is divided into half and the two children represent those two halves. So the height of the Segment Tree will be about $$log_2 N$$. There are $$N$$ leaves representing the $$N$$ elements of the array. The number of internal nodes is $$N-1$$. So, the total number of nodes is $$2 \times N - 1$$.
Update: To update the element of the array $$A$$ and reflect the corresponding change in the Segment tree.
Query: In this operation we can query on an interval or segment and return the answer to the problem (say minimum/maximum/summation in the particular segment).
Since a Segment Tree is a binary tree, a simple linear array can be used to represent the Segment Tree. Before building the Segment Tree, one must figure out what needs to be stored in the Segment Tree's nodes.
For example, if the question is to find the sum of all the elements in an array from indices $$L$$ to $$R$$, then at each node (except leaf nodes) the sum of its children nodes is stored.
A Segment Tree can be built using recursion (bottom-up approach ). Start with the leaves and go up to the root and update the corresponding changes in the nodes that are in the path from leaves to root. Leaves represent a single element. In each step, the data of two children nodes are used to form an internal parent node. Each internal node will represent a union of its children's intervals. Merging may be different for different questions. So, recursion will end up at the root node which will represent the whole array.
For $$update()$$, search the leaf that contains the element to update. This can be done by going to either on the left child or the right child depending on the interval which contains the element. Once the leaf is found, it is updated and again use the bottom-up approach to update the corresponding change in the path from that leaf to the root.
To make a $$query()$$ on the Segment Tree, select a range from $$L$$ to $$R$$ (which is usually given in the question). Recurse on the tree starting from the root and check if the interval represented by the node is completely in the range from $$L$$ to $$R$$. If the interval represented by a node is completely in the range from $$L$$ to $$R$$, return that node's value.
Update: Given $$idx$$ and $$val$$, update array element $$A[idx]$$ as $$A[idx] = A[idx] + val$$.
Queries and Updates can be in any order.
This is the most basic approach. For every query, run a loop from $$l$$ to $$r$$ and calculate the sum of all the elements. So each query will take $$O(N)$$ time.
$$A[idx] += val$$ will update the value of the element. Each update will take $$O(1)$$.
This algorithm is good if the number of queries is very low compared to the number of updates to the array.
First, figure out what needs to be stored in the Segment Tree's node. The question asks for the summation in the interval from $$l$$ to $$r$$, so each node stores the sum of all the elements in the interval represented by that node. Next, build the Segment Tree. The implementation with comments below explains the building process.
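The original listing is not reproduced here, so the following is a minimal Python sketch consistent with the description (the names $$node$$, $$start$$, $$end$$ and the $$2 \times node$$ / $$2 \times node + 1$$ indexing follow the text; the flat array of size $$4 \times N$$ is a common, safe choice):

```python
def build(A, tree, node, start, end):
    if start == end:
        # Leaf node: represents the single element A[start]
        tree[node] = A[start]
        return
    mid = (start + end) // 2
    build(A, tree, 2 * node, start, mid)        # left child covers [start, mid]
    build(A, tree, 2 * node + 1, mid + 1, end)  # right child covers [mid + 1, end]
    # Internal node: merge step (here, the sum of the two children)
    tree[node] = tree[2 * node] + tree[2 * node + 1]

# Usage: tree = [0] * (4 * len(A)); build(A, tree, 1, 0, len(A) - 1)
```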
As shown in the code above, start from the root and recurse on the left and the right child until a leaf node is reached. From the leaves, go back to the root and update all the nodes in the path. $$node$$ represents the current node that is being processed. Since the Segment Tree is a binary tree, $$2 \times node$$ will represent the left child and $$2 \times node + 1$$ will represent the right child. $$start$$ and $$end$$ represent the interval covered by the node. The complexity of $$build()$$ is $$O(N)$$.
To update an element, look at the interval in which the element is present and recurse accordingly on the left or the right child.
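A matching Python sketch of the update routine (same assumptions as the build sketch above):

```python
def update(A, tree, node, start, end, idx, val):
    if start == end:
        # Leaf node: apply A[idx] = A[idx] + val
        A[idx] += val
        tree[node] += val
        return
    mid = (start + end) // 2
    if idx <= mid:
        update(A, tree, 2 * node, start, mid, idx, val)        # element lies in the left half
    else:
        update(A, tree, 2 * node + 1, mid + 1, end, idx, val)  # element lies in the right half
    # Re-merge on the way back up the path from the leaf to the root
    tree[node] = tree[2 * node] + tree[2 * node + 1]
```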
Complexity of update will be $$O(logN)$$.
To query on a given range, check 3 conditions.
If the range represented by a node is completely outside the given range, simply return 0. If the range represented by a node is completely within the given range, return the value of the node which is the sum of all the elements in the range represented by the node. And if the range represented by a node is partially inside and partially outside the given range, return sum of the left child and the right child. Complexity of query will be $$O(logN)$$. | CommonCrawl |
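The three conditions translate directly into a Python sketch (same conventions as above):

```python
def query(tree, node, start, end, l, r):
    if r < start or end < l:
        # 1. Node's interval is completely outside [l, r]
        return 0
    if l <= start and end <= r:
        # 2. Node's interval is completely inside [l, r]
        return tree[node]
    # 3. Partial overlap: combine the answers of the two children
    mid = (start + end) // 2
    return (query(tree, 2 * node, start, mid, l, r)
            + query(tree, 2 * node + 1, mid + 1, end, l, r))
```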
There are 2 vertical segments on both sides and 3 horizontal segments in the center.
What's the largest number you can form that has the same number of visible segments as the value it represents?
For example 4 is formed with 4 visible segments on a digital clock.
Or 5 which has 5 visible segments.
How large can these numbers get, and which is the largest?
Your answer is only valid if you state the answer, and how you came to the conclusion!
There's two answers here: a real answer, and a (slightly) cheating answer.
6 is the largest number where it represents the number of segments.
First, we can put an upper bound on our answer. At most, we have a two digit number. This is because the smallest three digit number (100) is an order of magnitude more than the maximum number of displayed segments ($21 = 7\times3$).
Through a process of elimination, we can work backwards from the largest possible 2 digit number: 14 ($=7\times2$).
We end up with $6$ being the largest number.
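A quick brute-force check (a Python sketch, assuming the usual segment counts per digit) confirms this:

```python
# Standard seven-segment counts for each digit
SEG = {'0': 6, '1': 2, '2': 5, '3': 5, '4': 4, '5': 5, '6': 6, '7': 3, '8': 7, '9': 6}

# 21 = 7 x 3 segments is already a generous upper bound for the display
matches = [n for n in range(1, 22) if sum(SEG[d] for d in str(n)) == n]
print(matches)  # -> [4, 5, 6], so 6 is indeed the largest
```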
If we can front-pad the number with zeros, we can get to infinity, but only certain numbers. This is because each zero we add in front of the number is 6 additional segments.
For example, $0...0400$ can be made with 64 leading zeros: $(64+2)\times6+4 = 400$.
In fact, there's a limited set of numbers that answer this. We know these numbers are greater than six, and we know their remainder when divided by six is equal to the sum of their segments mod 6.
Let $N$ be the number, and $s_n$ be the number of segments of the $n$th digit.
Any number of the form $0...06...6$ will work (assuming enough zeros). Also, any number of the form $0...040...0$.
$N$ is even, iff $\sum s_n$ is even. This is because zero has an even number of segments, and you can't add even numbers together to get odd numbers. The corollary to this is: $N$ is odd, iff $\sum s_n$ is odd. This may seem really obvious, but it helps tremendously.
There are probably many other forms. I'd be interested to see a mathematician work through them all.
I suppose this is also cheating?
This assumes you cannot null-pad the number, since if you can null-pad it, the highest number is infinite.
For a n-digit number, the maximum number of lines visible is 7n. Therefore, the number cannot be greater than 2 digits, since 21<100 for n = 3, and the number of lines increases linearly with n while the number increases exponentially with n.
The 2-digit number also cannot be greater than 14, since the maximum number of visible lines is 14.
(actualValue-lineCount)%6 == 0 will work.
binary. It reads 100000, which uses $32_{10}$ line segments to display the number $32_{10}$.
in any base, when we add an additional nth digit, the clock face value grows geometrically by (radix*n) while the line count grows arithmetically by at most 7. Clearly these values will diverge quickly; to maximize the point before they diverge, we want the most digits, and hence the lowest radix. Ergo, no base higher than 2 will yield a higher answer.
"What about a clock in base square root of two?"
Considering only a four-digit display with a maximum of 28 elements, the highest number is 6 (without leading zeroes) or 22 (with leading zeroes: 00:22), regarding only the decimal system.
Since this is a digital clock, the leftmost digit only has six elements, which can display either 1 or 2. The upper left vertical segment is missing on clock displays. Therefore, the leftmost digit can only display 1, 2, 3, or 7. The lack of facility to display a 0 in the most significant position in a digital clock means the question as asked has no valid answer. Oh sure, 24-hour clocks. Pff. Who has them?
Taking a four-gang seven-segment display, with zero padding, the total number of segments is 28. With zero padding (not disqualified by the questioner) this means 0022 (6+6+5+5) is the highest number.
Taking a four-gang seven-segment display, with zero padding and a colon (as illustrated, although the missing segment in the second digit is a bit off), the total number of elements is 30. The highest number of elements for a display beginning 00:2 displays 00:28 and lights 6+6+2+5+7=26 elements.
This means that, with a flashing colon that is lit at 59 seconds and unlit at 0 seconds, a digital clock will display two consecutive "correct" answers at 21 and 22 minutes past midnight.
I was playing the PC-Game "Darkest Dungeon" recently. In the game, you have to explore dungeons, which consist of connected rooms as shown in the picture below.
You start in fixed room (Entrance). You cannot choose where to start.
The distance between adjacent rooms is the same for all rooms.
What is the shortest path from the entrance that visits every room at least once?
What algorithm(s) could be used to solve this problem?
Are there implementations that are free (and fairly simple) to use for someone like me?
I have found other questions such as this or this without finding an answer. I am familiar with the (basic) TSP and are able to code and solve simple TSPs. Hamiltonian paths didn't solve my problem, because it doesn't allow for multiple visits. The Chinese postman problem does also not apply here in its basic form because I don't have to visit every edge.
As I have stated in the comments, I'm not a computer scientist and am not interested in proving mathematical statements (maybe I'll post this question on Stack Overflow at a later stage). Also, I'm not a programmer, and the chances that I'm able to code a solution myself are pretty slim. But I suspect I'm not the first one dealing with a problem of this nature.
Create a pairwise distance matrix over all rooms (using shortest-path distances), thus adding edges between all pairs of rooms.
Solve the TSP (path version, starting at the Entrance) on this complete graph.
Starting from the Entrance, visit all rooms according to that solution; for non-adjacent rooms, substitute the shortest path between those rooms. (A sketch of this procedure is given below.)
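A minimal Python sketch of the procedure, using networkx (the example room graph is a placeholder; build it from the actual dungeon map, and note the brute-force step is only viable for small dungeons):

```python
import networkx as nx
from itertools import permutations

# Placeholder map: replace with the real rooms and corridors
G = nx.Graph()
G.add_edges_from([("Entrance", "A"), ("A", "B"), ("B", "C"), ("A", "C")])

# 1. Pairwise shortest-path distances between all rooms
dist = dict(nx.all_pairs_shortest_path_length(G))

# 2. Brute-force the shortest route that starts at the Entrance and visits every room
rooms = [r for r in G.nodes if r != "Entrance"]
best_len, best_order = min(
    (sum(dist[a][b] for a, b in zip(("Entrance",) + order, order)), order)
    for order in permutations(rooms)
)

# 3. Expand each leg back into an actual walk through adjacent rooms
walk = ["Entrance"]
for a, b in zip(("Entrance",) + best_order, best_order):
    walk += nx.shortest_path(G, a, b)[1:]
print(best_len, walk)
```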
Will this procedure provide the answer to the problem?
The Travelling Salesman Problem, even if you allow repeating nodes, is NP-hard. See Computational Complexity of TSP.
Umans and Lenhart show hardness results for Hamiltonian Graphs in Solid Grid Graphs, 1997.
TSP for the Euclidean case (or graphs with the triangle inequality) also implies NP-hardness of TSP with node repetition. TSP even for the Manhattan distance $L_1$ (or $L_\infty$) metric is NP-complete. See the original Papadimitriou paper on the topic.
You may be able to prove NP-hardness of TSP for your case by adding arcs to nodes that have corresponding distance as length of shortest path between nodes which will simulate repetitions of the nodes. TSP for your special case looks like an NP-complete problem.
So either write a sufficiently good (heuristic wise) exponential branch and bound algorithm to compute a shortest tour (which may not be all that inefficient, if your graph is small) , or forget about optimization and calculate a good enough approximation.
In addition to the above answer, I would point out some TSP solvers already available.
1) Construct an unweighted undirected graph from the grid- rooms, path junctions are nodes, and edges the paths between those nodes.
2) Find the minimum spanning tree from your start point using depth first search.
3) "Subdivide" the underlying grid so that your minimum spanning tree creates two "lanes".
4) From your starting point walk clockwise in the right-hand lane from node-to-node until you return to the starting point in the complementary lane.
I have to use an iterative method (Newton-Raphson, modified Newton and Broyden) to solve a system of nonlinear equations $f(x)=0$. Every unknown $x_i$ is bounded between $l_i$ and $u_i$, i.e., $l_i<x_i<u_i$, and evaluating $f$ if any of the unknowns goes out of its bounds would give an error.
My current workaround modifies the Newton step so that the new iterate is always inside the corresponding hypercube. However, this is an ad hoc, nonrigorous solution, and I have found papers about reflective techniques (although they were only applied to optimization problems).
Is the above solution a good one or should I try something different?
You should put "mirrors" to bracket your domain.
Suppose my unknown $x$ is bounded between $0$ and $a$. At each iteration of the method, I check whether $0 < x < a$. If not, then while $x < 0$ or $x > a$: if $x < 0$, I replace $x$ with $-x$; otherwise, if $x > a$, I replace $x$ with $a-(x-a)$.
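In code, the same folding rule looks roughly like this (a minimal Python sketch for one component, with general bounds l and u):

```python
def reflect(x, l, u):
    """Fold an iterate back into [l, u] by mirroring it off whichever bound it violates."""
    while x < l or x > u:
        if x < l:
            x = l + (l - x)   # mirror at the lower bound
        else:
            x = u - (x - u)   # mirror at the upper bound
    return x

# Example with l = 0, u = 1:  reflect(-0.2, 0, 1) -> 0.2,  reflect(1.3, 0, 1) -> 0.7
```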
Another method, which might be more complex to program depending on the language you use but is often more successful, is to damp the Newton step: keep dividing the step by 2 while the new iterate is not in your domain of interest.
Select the missing terms in the following question.
In the following question, a group of letters is given which are numbered 1, 2, 3, 4, 5 and 6. Below are given four alternatives containing combinations of these numbers. Select that combination of numbers so that letters arranged accordingly, form a meaningful word.
3 . In Question, which one of the given responses would be a meaningful order of the following ?
4 . In Question, which one of the given responses would be a meaningful order of the following ?
5 . In Question, which one of the given responses would be a meaningful order of the following ?
A family consisted of a man, his wife, his three sons, their wives and three children in each son's family. How many members are there in the family ?
Mani is double the age of Prabhu. Ramona is half the age of Prabhu. If Mani is sixty, find out the age of Ramona.
Some equations are solved on the basis of a certain system. On the same basis, find out the correct answer for the unsolved equation.
2 $\times$ 5 $\times$ 7 = ? | CommonCrawl |
Abstract: The Boltzmann equation models the motion of a rarefied gas, in which particles interact through binary collisions, by describing the evolution of the particle density function. The effect of collisions on the density function is modeled by a bilinear integral operator (collision operator) which in many cases has a non-integrable angular kernel. For a long time the equation was simplified by assuming that this kernel is integrable (the so called Grad's cutoff) with a belief that such an assumption does not affect the equation significantly. However, in the last 20 years it has been observed that a non-integrable singularity carries regularizing properties which motivates further analysis of the equation in this setting.
We study behavior in time of tails of solutions to the Boltzmann equation in the non-cutoff regime by examining the generation and propagation of $L^1$ and $L^\infty$ exponentially weighted estimates and the relation between them. For this purpose we introduce Mittag-Leffler moments which can be understood as a generalization of exponential moments. An interesting aspect of this result is that the singularity rate of the angular kernel affects the order of tails that can be shown to propagate in time. This is based on joint works with Alonso, Gamba, Pavlovic and Gamba, Pavlovic.
Abstract: Consider the Burgers equation with some nonlocal sources, which were derived from models of nonlinear waves with constant frequency. This talk will present some recent results on the global existence of entropy weak solutions, a priori estimates, and a uniqueness result for both the Burgers-Poisson and Burgers-Hilbert equations. Some open questions will be discussed.
Abstract: In this talk, we discuss the stochastic homogenization of certain nonconvex Hamilton-Jacobi equations. The nonconvex Hamiltonians, which are generally uneven and inseparable, are generated by a sequence of (level-set) convex Hamiltonians and a sequence of (level-set) concave Hamiltonians through the min-max formula. We provide a monotonicity assumption on the contact values between those stably paired Hamiltonians so as to guarantee the stochastic homogenization. If time permits, we will talk about some homogenization results when the monotonicity assumption breaks down. | CommonCrawl |
Proceedings of the 34th International Conference on Machine Learning, PMLR 70:664-673, 2017.
Persistence diagrams (PDs) play a key role in topological data analysis (TDA), in which they are routinely used to describe succinctly complex topological properties of complicated shapes. PDs enjoy strong stability properties and have proven their utility in various learning contexts. They do not, however, live in a space naturally endowed with a Hilbert structure and are usually compared with specific distances, such as the bottleneck distance. To incorporate PDs in a learning pipeline, several kernels have been proposed for PDs with a strong emphasis on the stability of the RKHS distance w.r.t. perturbations of the PDs. In this article, we use the Sliced Wasserstein approximation of the Wasserstein distance to define a new kernel for PDs, which is not only provably stable but also provably discriminative w.r.t. the Wasserstein distance $W^1_\infty$ between PDs. We also demonstrate its practicality, by developing an approximation technique to reduce kernel computation time, and show that our proposal compares favorably to existing kernels for PDs on several benchmarks.
$$\mathrm dW=T\cdot\mathrm d\theta$$ where T is torque.
While solving a question like the case of a body rolling down an incline (pure rolling), we usually equate the change in kinetic energy to the work done by gravity (just the translational work). Why do we not count the rotational work done when gravity does provide a torque?
In almost all question of mixed motions (translational and rotational) I have come across, none of them have the application of rotational work done, so I would also like to know when do we apply this concept.
First, for something like a ball rolling down an incline, gravity has no torque about the center of the ball. The force that causes the ball to start rolling is friction. On a frictionless incline the ball would just slide down the incline without rolling.
Now, with that out of the way, it turns out that we do take into account "rotational work" due to friction. Let's assume a constant friction force $f$ and say that the ball with radius $r$ is released from rest on the incline. We know that the net torque about the center of the ball is given by $$\tau = fr$$ using Newton's second law we also have $$\tau=I\alpha$$ where $I$ is the moment of inertia and $\alpha$ is the angular acceleration.
Now, since our torque is constant (since $f$ and $r$ are both constant) we know two other things. First, the work done by friction is given by $$W=\tau\Delta\theta=I\alpha\Delta\theta$$ and second, we can apply our constant acceleration equations. This means that $$\omega^2=2\alpha\Delta\theta$$ where $\Delta\theta$ is the angle the ball rolls through some time after release, and $\omega$ is the angular velocity at that same point in time.
Combining these two relations gives $$W=I\alpha\,\Delta\theta=\frac12I\omega^2$$ and this result might look familiar. It is what we usually associate "rotational kinetic energy" with. So we do take into account the "rotational work", we just do it implicitly with $\frac12I\omega^2$ rather than explicitly (this is similar to how we use potential energy to implicitly take into account the work done by conservative forces rather than explicitly calculating the work done by said forces).
The effect of liquid viscosity on the final instants previous to pinch-off of an air bubble immersed in a stagnant viscous liquid is experimentally and theoretically investigated. Our experiments show that the use of a power-law to describe the collapse dynamics of the bubble is not appropriate in an intermediate range of liquid viscosities, for which a transition from an inviscid to a fully viscous pinch-off takes place. Instead, the instantaneous exponent $\alpha(\tau)$ varies during a single pinch-off event from the typical values of inviscid collapse, $\alpha\simeq 0.58$, to the value corresponding to a fully viscous dynamics, $\alpha\simeq 1$. However, we show that the pinch-off process can be accurately described by the use of a pair of Rayleigh-like differential equations for the time evolution of the minimum radius and the axial curvature evaluated at the minimum radius, $r_1$. This theoretical model is able to describe the smooth transition which takes place from inviscid to viscous-dominated pinch-off in liquids of intermediate viscosity, 10 $\leq\mu\leq$ 100 cP, and accounts for the fact that the axial curvature remains constant when the local Reynolds number becomes small enough, in agreement with our experimental measurements.
*Supported by the Spanish MCyI under project DPI2008-06624-C03-02. | CommonCrawl |
We will now look at an analogous Theorem for evaluating triple integrals over a rectangular box $B$ with iterated integrals.
Theorem 1: Let $w = f(x, y, z)$ be a three variable real-valued function such that $f$ is continuous on the rectangular box $B = [a, b] \times [c, d] \times [r, s]$. Then the triple integral of $f$ over $B$ can be evaluated as iterated integrals, that is, $\iiint_B f(x, y, z) \: dV = \int_r^s \int_c^d \int_a^b f(x, y, z) \: dx \: dy \: dz$.
There are six total ways to evaluate a triple integral over a box using iterated integrals. For example, we could get that $\iiint_B f(x, y, z) \: dV = \int_a^b \int_c^d \int_r^s f(x, y, z) \: dz \: dy \: dx$, or $\iiint_B f(x, y, z) \: dV = \int_c^d \int_a^b \int_r^s f(x, y, z) \: dz \: dx \: dy$. Each of these six possible orders will give rise to the same value.
Let's now look at an example of evaluating a triple integral over a box.
Evaluate the triple integral $\iiint_B 2x - e^y + 3z^2 \: dV$ over the box $B = [0, 2] \times [0, 1] \times [1, 3]$. | CommonCrawl |
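One possible evaluation, integrating in the order $dx \: dy \: dz$ (a worked sketch):

$$\begin{align} \iiint_B 2x - e^y + 3z^2 \: dV &= \int_1^3 \int_0^1 \int_0^2 \left ( 2x - e^y + 3z^2 \right ) \: dx \: dy \: dz \\ &= \int_1^3 \int_0^1 \left [ x^2 - xe^y + 3xz^2 \right ]_{x=0}^{x=2} \: dy \: dz \\ &= \int_1^3 \int_0^1 \left ( 4 - 2e^y + 6z^2 \right ) \: dy \: dz \\ &= \int_1^3 \left [ 4y - 2e^y + 6yz^2 \right ]_{y=0}^{y=1} \: dz \\ &= \int_1^3 \left ( 6 - 2e + 6z^2 \right ) \: dz \\ &= \left [ (6 - 2e)z + 2z^3 \right ]_{z=1}^{z=3} = 2(6 - 2e) + 2(27 - 1) = 64 - 4e \end{align}$$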
1. Each column, row and $3 \times 3$ subgrid must have the numbers 1 to 9.
2. No column, row or subgrid can have two cells with the same number.
The puzzle can be solved with the help of the numbers in the top parts of certain cells. These numbers are the products of the digits in all the cells horizontally and vertically adjacent to the cell.
The cell in the top left corner of this Sudoku contains the number 20. 20 is the product of the digits in the two adjacent cells, which therefore must contain the digits 4 and 5. The 5 cannot go in the cell below the top left hand corner because 5 is not a factor of 96 (the product shown in the third cell down on the left hand side of the puzzle). Therefore 5 must be entered into the cell to the right of the cell containing 20 and 4 in the cell below.
A printable version of the problem can be found here.
A company makes mechanical keypad locks.
The keypad is a set of five buttons arranged vertically.
The buttons are quite close together. Once a button is pushed in it stays in until the lock is opened or reset.
If the right combination of buttons are pushed in the correct order then a separate handle can be turned and the door opens. If not the action of turning the handle resets all the buttons so you start again.
The salesman says that there are $545$ different simple 'unlock combinations or sequences' which can be 'programmed' into this mechanical lock. Simple unlock combinations are combinations which can be punched into the keypad with a single finger.
Is the salesman correct? why or why not?
325 possibilities from Excited Raichu's answer.
(12)345, 1(23)45, 12(34)5 and 123(45).
You have either 4 or 5 buttons pressed by having 2 pairs and you cannot have more than 2 pairs since there are 5 buttons total.
Which is the expected answer from the question!
Well, let's see. It's possible for 1 to 5 buttons to be pushed to create a combination, so let's calculate the number of possibilities for each individually, assuming you can pick the order of the buttons to be pushed.
Well, there are only 5 possible combinations. This isn't hard.
The number of possible combinations is C(5,2), or 10. You can arrange the order of each possible combination in 2!, or 2 ways. So the number of possible 2-button combinations is 20.
Since C(5,3) is also 10, there are 10 different combinations. Each of these combinations can be reordered in 3!, or 6 ways. So the number of possible 3-button combinations is 60.
C(5,4) is 5. So, there are 5 different combinations which each can be rearranged into 4!, or 24 ways. So the number of possible 4-button combinations is 120.
Obviously, there is only 1 5-digit combination, but that can be rearranged in 5!, or 120 ways.
5+20+60+120+120 = 325 possible combinations. Assuming there isn't any trickery here, the salesman is indeed incorrect.
If the buttons were really small then there could be 3 or more pressed simultaneously - but I will ignore these calculations as if that were the case it may be difficult to press only one at a time.
5 combos where 1 button is pushed, 11 combos where 2 buttons are pushed, 9 combos where 3 buttons are pushed, 5 combos where 4 buttons are pushed, and 1 combo where all 5 buttons are pushed.
5*1 + 11*2 + 9*6 + 5*24 + 1*120 = 321 possible combos (assuming you don't count no buttons being pushed as a combo).
Your lock looks a lot like a Kaba Simplex. This lock allows for codes where the order in which the buttons are pressed matters. Also, it allows for buttons needing to be pressed in unison, or not at all. Each button can be pressed only once, though.
Let's start by just pressing a single button at a time. Each code can be anywhere from 1–5 buttons long.
The number of single button codes is simply $5$ since that's the number of buttons.
The total number of codes you can enter this way is $5 + 20 + 60 + 120 + 120 = 325$ which is a lot less than the $545$ promised.
The buttons are quite close together.
so we can push two adjacent buttons together. After all, the Simplex allows for buttons to be pressed simultaneously with one finger. This adds a number of multi-button codes.
This adds $4$ codes, since there are 4 pairs of adjacent buttons that can be pressed together with one finger.
There are again $4$ possible pairs, which leave $3$ options for the single button. All of these can be used in two ways: either the pair first, or the single button first.
That gives us a total of $4 \times 3 \times 2 = 24$ codes with a pair and a single button.
Here we have two options. We can either use two pairs, or a pair and two single buttons.
With two pairs, we have $3$ options for the button we leave unpushed: either an outer button, or the middle button. We can push the top pair first, or the bottom pair first, so we have a total of $3 \times 2 = 6$ codes with two pairs.
With a single pair and two single buttons, we have $4$ options for the pair, then $3$ for the unpushed button, so that's $4 \times 3 = 12$ combinations. We can push the pair first, last, or in between the single buttons, and we can either push the top single button first or last. So that's $12 \times 3 \times 2 = 72$ codes.
That gives us a total of $6 + 72 = 78$ codes for 4 buttons with pairs.
Again, we have two options: two pairs and a single, or one pair and three singles.
With two pairs, the single button can be in $3$ places (just like the unpushed button). We can push the single button first, last, or in between the pairs, and we can push the top pair first or last. This gives us $3 \times 3 \times 2 = 18$ codes.
With a single pair, that pair can be in $4$ positions, the rest of the buttons being the single ones. We can push those single buttons in $3 \times 2 \times 1 = 6$ different orders, and we can push the pair first, second, third, or last. This gives us $4 \times 6 \times 4 = 96$ codes.
The total for 5 buttons with pairs comes to $18 + 96 = 114$ different codes.
Adding these all up gives us $4 + 24 + 78 + 114 = 220$ different codes when adding pairs into the mix.
Adding this to the number of single button codes, we get $325 + 220 = 545$ codes as promised.
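For anyone who wants to double-check the counting, a short brute-force enumeration agrees (a Python sketch; a press is either a single button or an adjacent pair, presses are ordered, and no button may be used twice):

```python
def count_codes(presses):
    """Count all non-empty ordered sequences of presses that reuse no button."""
    total = 0
    def extend(used):
        nonlocal total
        for p in presses:
            if used & p:
                continue
            total += 1
            extend(used | p)
    extend(frozenset())
    return total

singles = [frozenset({b}) for b in range(1, 6)]
pairs   = [frozenset({b, b + 1}) for b in range(1, 5)]   # adjacent pairs only

print(count_codes(singles))          # 325  (single presses only)
print(count_codes(singles + pairs))  # 545  (allowing one-finger pairs)
```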
But can we do more? If we're really fat-fingered, or if the buttons are really close together, maybe we can try pressing 3 adjacent buttons at once.
There are just $3$ triplets we can pick here.
We have the same $3$ triplets and $2$ different single buttons to pick, and we can push the single button either first or last, for a total of $3 \times 2 \times 2 = 12$ different codes.
We can combine a triplet with either a pair, or two single buttons.
There are $2$ different ways to divide the 5 buttons into a pair and a triplet, and $2$ orders in which to press these, so that's $2 \times 2 = 4$ codes.
Or we can pick our triplet in $3$ ways, our remaining single buttons in $2$ different orders, and our triplet either first, last, or in between, so that's $3 \times 2 \times 3 = 18$ codes.
That's $4 + 18 = 22$ different codes.
So that's $3 + 12 + 22 = 37$ additional codes using triplets. If we add this to the $545$ codes we already had, we can even get $545 + 37 = 582$ codes in total.
We might as well continue. Let's try pressing 4 buttons at once. We have the choice of $2$ different quads, and the choice of pressing the remaining button before that, after that, or not at all. That's $2 \times 3 = 6$ codes with quads.
Adding these to what we already had, we get $582 + 6 = 588$ codes in total.
Let's go all the way and add the last possibility of pressing all 5 buttons at once. There's only $1$ way to do that.
That gives us a final total of $588 + 1 = 589$ possible codes that can theoretically entered using just a single finger.
You are allowed to press two buttons or more at the same time with one finger.
590 possibilities exist if not pressing any button counts as a combination.
If our salesman is really correct, then you cannot press more than two buttons at the same time with one finger (or the lock will not register it), and not pressing any button would not be considered a combination.
The handle is not a true reset per se. It's actually a sixth switch.
Your "true" passcode requires a "wrong" five digit code, a reset followed by the "right" one.
Both codes are in fact full codes in their own right.
So your code that appears to the world to be 12345 is obfuscated by doing 54321, Resetting then doing 12345.
Given there are 120 permutations of 5 digits, there are a truly staggering number of permutations of 10 digits (with a Reset in the middle). far more than 545, and so satisfying the salesman's claim by a significant margin.
It seems that some of the solutions are double-counting combinations. The order of the pushes does not matter. Therefore, there are only 32 combinations.
Amateur math enthusiast. Novice programmer. Finally getting my hands dirty with Python. Love Latex. Want to learn linux. Jack of all trades, Master of none.
6 $f:\mathbb R \rightarrow \mathbb R$ be differentiable function such that $f'(x)$ is continuous and $f(x+1)=f(x)+1$ for all $x \in\mathbb R$. | CommonCrawl |
As I don't have access to a Linux machine, I would like to make a plea to Oleg (or anyone else who's able to create new firmware) to add an option to the web interface where the default access rights of the anonymous user can be set.
echo -e "user=anonymous * $x_FIsAnonymousPath 0 $x_FIsAnonymousRights" >> /tmp/stupid-ftpd.conf
Would you be so kind to add this feature? I would really appreciate it!
Oops, I just discovered that disabling anonymous access and creating a user 'anonymous' with password '*' works exactly like a real anonymous user. There's no need to patch the firmware anymore.
Frequency in signal processing is the number of cycles (of the signal) per second.
May I know how to find the time-frequency power spectrogram of a signal?
I am analyzing an EEG signal and I am unable to understand the term time-frequency power spectrogram. Can anyone help me? Thank you in advance.
How is frequency related to data rate?
What are real-valued and complex signals and why is the Fourier transform of a real-valued signal Hermitian?
Why a DFT of two sinusoids is very noisy even with frequency sampling 5 times higher?
Why is it called continuous-time frequency?
What does it mean for a function to have frequencies?
Why do frequencies of analog signals range from $-\infty$ to $\infty$ while frequencies of digital signals are restricted to $[0,2\pi]$?
In Fourier analysis while dealing with discrete-time signals, frequencies range from $0$ to $2\pi$ why? Intuitively how can i understand it?
Can the instantaneous frequency be always derived from an analytic signal?
Is there just a carrier in each frequency band in FDMA?
Estimation of Two Closely Spaced Frequencies?
What is the best frequency estimation algorithm for two closely spaced frequencies in term of the minimum frequency spacing achieved?
How to compute impulse and frequency response of Flanger?
how to calculate the maximum obtainable directivity for a microphone array? | CommonCrawl |
Abstract: In the present article a semilinear wave equation with scale-invariant damping and mass is considered. The global (in time) existence of radial symmetric solutions in even spatial dimension $n$ is proved using weighted $L^\infty-L^\infty$ estimates, under the assumption that the multiplicative constants, which appear in the coefficients of damping and of mass terms, fulfill an interplay condition which yields somehow a "wave-like" model. In particular, combining this existence result with a recently proved blow-up result, a suitable shift of Strauss exponent is proved to be the critical exponent for the considered model. Moreover, the still open part of a conjecture done by D'Abbicco - Lucente - Reissig is proved to be true in the massless case. | CommonCrawl |
S.K. Nechaev, K. Polovnikov, Statistika redkikh sobytii i modulyarnaya invariantnost', UFN, 188 (1), 106-112 (2018) [S.K. Nechaev, K. Polovnikov, Rare-event statistics and modular invariance, Phys. Usp., 61(1), 99-104 (2018)], Scopus: 2-s2.0-85045749757.
V.L. Aksenov, I.G. Brankov, V.A. Zagrebnov, E.A. Ivanov, D.I. Kazakov, S.K. Nechaev, N.M. Plakida, A.M. Povolotskii, V.P. Spiridonov, P. Eksner, Vyacheslav Borisovich Priezzhev (06.09.1944 – 31.12.2017), TMF, 194(3), 383-384 (2018) [I.G. Brankov, V.A. Zagrebnov, E.A. Ivanov, D.I. Kazakov, S.K. Nechaev, N.M. Plakida, A.M. Povolotskii, V.P. Spiridonov, P. Eksner, Viacheslav Borisovich Priezzhev (6 September 1944 – 31 December 2017), Theor. Math. Phys., 194(3), 329-330 (2018)], WoS: 000429233100001.
F. Hivert, S. Nechaev, G. Oshanin, O. Vasilyev, On the distribution of surface extrema in several one- and two-dimensional random landscapes, J. Stat. Phys., 126(2), 243-279 (2007); cond-mat/0509584.
M.V. Tamm, S.K. Nechaev, Necklace-cloverleaf transition in associating RNA-like diblock copolymers, Phys. Rev. E 75, 031904 (2007) (13 pages).
G. Sitnikov, M. Taran, A. Muryshev, S. Nechaev, Application of a two-length-scale field theory to the solvation of neutral and charged molecules, J. Chem. Phys. 124, 094501 (2006) (15 pages); cond-mat/0505337.
A.Y. Grosberg, S. Nechaev, M. Tamm, O. Vasilyev, How long does it take to pull an ideal polymer into a small hole?, Phys. Rev. Lett. 96, 228105 (2006) [4 pages]; cond-mat/0510418.
M.V. Tamm, S.K. Nechaev, I.Ya. Erukhimovich, Statistics of ideal randomly branched polymers in a semi-space, Eur. Phys. J. E 17 (2), 209-219 (2005); cond-mat/0408575.
S. Nechaev, O. Vasilyev, On topological correlations in trivial knots: From Brownian Bridges to crumpled globules, J. Knot Theory and Its Ramifications, 14 (2), 243-263 (2005); cond-mat/0204149.
S. Nechaev, R. Voituriez, Conformal geometry and invariants of 3-strand Brownian braids, Nucl. Phys. B 710(3), 614-628 (2005).
S.N. Majumdar, S. Nechaev, Exact asymptotic results for the Bernoulli matching model of sequence alignment, Phys. Rev. E 72, 020901 (2005) (4 pages); q-bio/0410012.
G.V. Sitnikov, S.K. Nechaev, M.D. Taran, O kolichestvennoi srednepolevoi teorii gidrofobnogo effekta neitral'nykh i zaryazhennykh molekul proizvol'noi geometrii, ZhETF, 128(5), 1099-1116 (2005) [G.V. Sitnikov, S.K. Nechaev, M.D. Taran, A quantitative mean-field theory of the hydrophobic effect of neutral and charged molecules of arbitrary geometry, JETP, 101(5), 962-977 (2005)].
S.K. Nechaev, O.A. Vasilyev, Thermodynamics and Topology of Disordered Knots. Correlations in Trivial Lattice Knot Diagrams, In: Physical and Numerical Models in Knot Theory: Including Applications to the Life Sciences, Chap. 22, p. 421-472 Ed. by J.A. Calvo, K.C. Millett, E.J. Rawdon and A. Stasiak, WSPC: Singaport, 2005. ISBN 981-256-187-0 [Series on Knots and Everything - Vol. 36].
G. Sitnikov, S. Nechaev, Whether the mean-field two-length scale theory of hydrophobic effect can be microscopically approved?, cond-mat/0510045.
S.K. Nechaev, O.A. Vasilyev, On metric structure of ultrametric spaces, J. Phys. A 37(12), 3783-3803 (2004); cond-mat/0310079.
S.N. Majumdar, S. Nechaev, Anisotropic ballistic deposition model with links to the Ulam problem and the Tracy-Widom distribution, Phys. Rev. E 69, 011103 (2004) [5 pages]; cond-mat/0307189.
S.K. Nechaev, O.A. Vasil'ev, On the Metric Structure of Ultrametric Spaces, Tr. MIAN, 245 (Izbrannye voprosy $p$-adicheskoi matematicheskoi fiziki i analiza, Sbornik statei. K 80-letiyu so dnya rozhdeniya akademika Vasiliya Sergeevicha Vladimirova), 182–201 (2004) [Proc. Steklov Inst. Math., 245, 169–188 (2004)].
S.K. Nechaev, O.A. Vasilyev, Topological percolation on a square lattice, cond-mat/0401027.
S. Nechaev, Raphaël Voituriez, Random walks on three-strand braids and on related hyperbolic groups, J. Phys. A 36 (1), 43-66 (2003).
O.A. Vasil'ev, S.K. Nechaev, O topologicheskikh korrelyatsiyakh v trivial'nykh uzlakh: Novye argumenty v pol'zu predstavleniya o skladchatoi polimernoi globule, TMF, 134(2), 164–184 (2003) [O.A. Vasilyev, S.K. Nechaev, Topological correlations in trivial knots: New arguments in favor of the representation of a crumpled polymer globul, Theor. Math. Phys., 134 (2), 142-159 (2003)].
N.D. Ozernyuk, S.K. Nechaev, Analiz mekhanizmov adaptatsionnykh protsessov, Izv. AN, ser. biol., No.4, 457-462 (2002) [N.D. Ozernyuk, S.K. Nechaev, Analysis of mechanisms underlying adaptation processes, Biology Bull., 29 (4), 373-377 (2002)].
A.A. Naidenov, S.K. Nechaev, O reaktsiyakh tipa $A+A+ \ldots+A \to0$ na odnomernoi periodicheskoi reshetke kataliticheskikh tsentrov: tochnoe reshenie, Pis'ma v ZhETF, 76 (1), 68-73 (2002) [A.A. Naidenov, S.K. Nechaev, On the reactions A + A+ ... + A -> 0 at a one-dimensional periodic lattice of catalytic centers: Exact solution, JETP Lett., 76 (1), 61-65 (2002)]; cond-mat/0209271.
S. Nechaev, O. Vasilyev, Topological correlations in trivial knots: new arguments in support of the crumpled polymer globule, cond-mat/0204149.
A. Naidenov, S. Nechaev, Adsorption of a random heteropolymer at a potential well revisited: location of transition point and design of sequences, J. Phys. A 34(28), 5625-5634 (2001); cond-mat/0012232.
S. Nechaev, R. Voituriez, On the plant leaf's boundary, 'jupe à godets' and conformal embeddings, J. Phys. A 34(49), 11069 (2001); cond-mat/0107413.
S. Nechaev, Raphaël Voituriez, On the plant leaf's boundary, "jupe a godets" and conformal embeddings, J. Phys. A 34(49), 11069-11082 (2001).
A. Comtet, S. Nechaev, Raphaël Voituriez, Multifractality in uniform hyperbolic lattices and in quasi-classical Liouville field theory, J. Stat. Phys., 102 (1-2), 203-230 (2001); cond-mat/0004491.
R. Bikbov, S. Nechaev, Topological relaxation of entangled flux lattices: Single versus collective line dynamics, Phys. Rev. Lett. 87, 150602 (2001); cond-mat/0010466.
O.A. Vasil'ev, S.K. Nechaev, Termodinamika i topologiya neuporyadochennykh sistem: Statistika diagramm sluchainykh uzlov na konechnykh reshetkakh, ZhETF, 120(5), 1288-1308 (2001) [O.A. Vasilyev, S.K. Nechaev, Thermodynamics and topology of disordered systems: Statistics of the random knot diagrams on finite lattices, JETP, 93(5), 1119-1136 (2001)]; cond-mat/0111091.
A.M. Vershik, S. Nechaev, R. Bikbov, Statistical properties of locally free groups with applications to braid groups and growth of random heaps, Commun. Math. Phys., 212 (2), 469-501 (2000).
R. Voituriez, S. Nechaev, Multifractality of entangled random walks and non-uniform hyperbolic spaces, J. Phys. A 33(32), 5631-5652 (2000); cond-mat/0001138.
S. Nechaev, G. Oshanin, A. Blumen, Anchoring of polymers by traps randomly placed on a line, J. Stat. Phys., 98 (1-2), 281-303 (2000); cond-mat/9901269.
S. Nechaev, R. Voituriez, Random walks on hyperbolic groups and their Riemann surfaces, math-ph/0012037.
R. Bikbov, S. Nechaev, On the limiting power of the set of knots generated by 1+1-and 2+1-braids, J. Math. Phys., 40(12), 6598-6608 (1999).
R.R. Bikbov, S.K. Nechaev, Ob otsenke sverkhu moshchnosti mnozhestva uzlov, porozhdennykh odnomernymi i dvumernymi kosami, TMF, 120(2), 208–221 (1999) [R.R. Bikbov, S.K. Nechaev, Upper estimate of the cardinality of the set of knots generated by one- and two-dimensional braids, Theor. Math. Phys., 120(2), 985-996 (1999)].
A.M. Vershik, S. Nechaev, R. Bikbov, Statistical properties of braid groups in locally free approximation, math/9905190.
S. Nechaev, Localization in a simple multichain catalytic absorption model, J. Phys. A 31 (8), 1965-1980 (1998); cond-mat/9707314.
J. Desbois, S. Nechaev, Statistics of reduced words in locally free and braid groups: Abstract studies and applications to ballistic growth model, J. Phys. A 31(12), 2767-2789 (1998); cond-mat/9707121.
A. Comtet, S. Nechaev, Random operator approach for word enumeration in braid groups, J. Phys. A 31(26), 5609-5630 (1998); cond-mat/9707120.
G. Oshanin, S. Nechaev, A.M. Cazabat, M. Moreau, Kinetics of anchoring of polymer chains on substrates with chemically active sites, Phys. Rev. E 58 (5), 6134-6144 (1998); cond-mat/9807184.
S.K. Nechaev, Problemy veroyatnostnoi topologii: statistika uzlov i nekommutativnykh sluchainykh bluzhdanii, Uspekhi fiz. nauk, 168 (4), 369-405 (1998) [S.K. Nechaev, Nechaev SK, Problems of probabilistic topology: statistics of knots and noncommutative random walks, Phys. Usp., 41(4), 313-347 (1998)].
S. Nechaev, Statistics of knots and entangled random walks, Les Houches Session LXIX, 643-733 (1999) [Topological Aspects of Low Dimensional Systems: Les Houches summer school, July 7-31, 1998. Ed. by A. Comtet, T. Jolicoeur, S. Ouvry, F. David. Springer, 1999, xxxiv,911pp. ISBN 978-3-540-66909-8]; cond-mat/9812205.
R. Bikbov, S. Nechaev, On the limiting power of set of knots generated by 1+1- and 2+1- braids, math/9807149.
J. Desbois, S. Nechaev, Statistical mechanics of braided Markov chains: I. Analytic methods and numerical simulations, J. Stat. Phys., 88 (1-2), 201-229 (1997).
M. Monastyrsky, S. Nechaev, Correlation functions for some conformal theories on Riemann surfaces, Mod. Phys. Lett. A 12 (9), 589-596 (1997); hep-th/9707121.
S.K. Nechaev, A.Yu. Grosberg, A.M. Vershik, Random walks on braid groups: Brownian bridges, complexity and statistics, J. Phys. A 29(10), 2411-2433 (1996).
A.R. Khokhlov, S.K. Nechaev, Topologically driven compatibility enhancement in the mixtures of rings and linear chains, J. Phys. II France, 6(11), 1547-1555 (1996).
V. Tchijov, S. Nechaev, Rodriguez-S. Romo, Interface structure in colored DLA model, Pis'ma v ZhETF, 64 (7), 504-509 (1996) [JETP Lett., 64 (7), 549-555 (1996)].
S.K. Nechaev, Statistics of Knots and Entangled Random Walks, World Scientific: Singapore, 1996, xiv+190 pp. ISBN: 978-981-02-2519-3.
S. Nechaev, Statistical problems in knot theory and noncommutative random walks, STATPHYS 19: The 19th IUPAP International Conference on Statistical Physics, Xiamen, China, July 31-August 4, 1995, p. 45-55. Ed. by Hao Bailin, World Scientific, 1996, xiii+571 pp. ISBN: 9810223145.
S. Nechaev, Yi-C. Zhang, Exact Solution of the 2D Wetting Problem in a Periodic Potential, Phys. Rev. Lett. 74(10), 1815-1818 (1995).
S. Nechaev, A. Vershik, Random walks on multiconnected manifolds and conformal field theory, J. Phys. A 27 (7), 2289-2298 (1994).
A. Grosberg, S. Izrailev, S. Nechaev, Phase transition in a heteropolymer chain at a selective interface, Phys. Rev. E 50 (3), 1912-1921 (1994).
S. Nechaev, Nematic phase transition in entangled directed polymers, Pis'ma v ZhETF, 60 (4), 277-284 (1994) [JETP Lett., 60 (4), 291-299 (1994)].
S.K. Nechaev, V.G. Rostiashvili, Polymer chain in a random array of topological obstacles : 1. Collapse of loops, J. Phys. II France, 3 (1), 91-104 (1993).
V.G. Rostiashvili, S.K. Nechaev, T.A. Vilgis, Polymer chain in a random array of topological obstacles: Classification and statistics of complex loops, Phys. Rev. E 48 (5), 3314-3320 (1993).
L.B. Koralov, S.K. Nechaev, Ya.G. Sinai, Predel'noe povedenie dvumernogo sluchainogo bluzhdaniya s topologicheskimi ogranicheniyami, Teor. veroyatn. i ee prim., 38(2), 331-344 (1993) [L.B. Koralov, S.K. Nechaev, Y.G. Sinai, Asymptotic-behavior of a 2-dimensional random-walk with topological constraints, Theor. Prob. Appl., 38 (2), 296-306 (1993)].
A. Grosberg, S. Nechaev, Polymer topology, Advances in Polymer Science, Vol. 106, 1-29 (1993) [Polymer Characteristics, Springer, 1993. ISBN 978-3-540-56140-8].
A. Grosberg, S. Nechaev, Averaged Kauffman Invariant and Quasi-Knot Concept for Linear Polymers, Europhys. Lett., 20 (7), 613-619 (1992).
S.K. Nechaev, Ya.G. Sinai, Limiting-type theorem for conditional distributions of products of independent unimodular 2×2 matrices, Bull. Braz. Math. Soc. (Bol. Soc. Bras. Mat., Nova Sér.), 21(2), 121-132 (1991).
L.B. Koralov, S.K. Nechaev, Ya.G. Sinai, Limiting probability distribution for a random walk with topological constraints, Chaos 1(2), 131-133 (1991).
D.V. Khveshchenko, Ya.I. Kogan, S.K. Nechaev, Vortices in the lattice model of planar nematic, Int. J. Mod. Phys. B 5 (4), 647-657 (1991).
S.K. Nechaev, Ya.G. Sinai, Scaling behavior of random walks with topological constraints, In: New trends in probability and statistics. Vol. 1, Proc. 23-th Bakuriani Colloq. in Honour of Yu. V. Prokhorov, Bakuriani, USSR 24 February - 4 March, 1990, 683-693 (1991). Sazonov, V.V.; Shervashidze, T.L. (eds.), Utrecht, Vilnius: VSP, Mokslas. xvi, 702 p. (1991). ISBN 90-6764-133-2.
Ya.I. Kogan, S.K. Nechaev, D.V. Khveshchenko, Vikhri v reshetochnoi modeli dvumernogo nematika, ZhETF, 98 (5), 1847-1856 (1990) [Ya.I. Kogan, S.K. Nechaev, D.V. Khveshchenko, Vortices in a lattice model of a two-dimensional nematic, Sov. Phys. JETP 71(5), 1038-1042 (1990)].
D.V. Khveshchenko, Ya.I. Kogan, S.K. Nechaev, Vortices in the lattice model of planar nematic, Preprint ITEP-90-77, SSCL-282, May 1990. 18pp. | CommonCrawl |
Hence we expect the exterior derivative to give an induced 2-form on $TM$.
This approach however does not successfully lead anywhere (could be my shoddy maths skills to blame however, so I would be extremely interested to see if it did in fact lead somewhere?). If we cannot develop a bracket on $TM$ does this mean that $f,g\in\mathcal C^\infty (TM)$ do not have a Lie algebra structure, and at what point would they inherit such a structure during their transition to the cotangent bundle?
Thank you for your time and I hope this question is appropriate for this forum (?) apologies if not!
Your bracket on $T^*M$ is correct. If you do all computations in $T^*M$ then things will be consistent.
But something is wrong with the way you try to go from $T^*M$ to $TM$. This cannot always be done.
If you start with a Lagrangian 1-form on $TM$ you need a nondegeneracy condition in order to get a symplectic form and hence a Poisson bracket on its sections. (This excludes, e.g., gauge theories; there the bracket is defined only on the quotient space of gauge equivalence classes.) Moreover, the definition of a Hamiltonian vector field is now different.
For more details see Section 18.4 of my online book Classical and Quantum Mechanics via Lie algebras.
@AngusTheMan: The equations come from (18.1) and the definition of a locally Hamiltonian vector field, directly above Proposition 18.1.2. Their significance is that, in the singular case, they single out the gauge invariant functions. [See Example 18.3.1, which one can also cast in a lagrangian form - exercise!] Indeed, these equations are responsible for being able to derive the Jacobi identity: Theorem 18.1.3 proves that one gets a Poisson algebra, in particular a Lie algebra. But this Poisson algebra is $C^\infty(TM)$ only in the regular case! | CommonCrawl |
We have seen how principal component analysis (PCA) can be used in the dimensionality reduction task—reducing the number of features of a dataset while maintaining the essential relationships between the points. While PCA is flexible, fast, and easily interpretable, it does not perform so well when there are nonlinear relationships within the data; we will see some examples of these below.
To address this deficiency, we can turn to a class of methods known as manifold learning—a class of unsupervised estimators that seeks to describe datasets as low-dimensional manifolds embedded in high-dimensional spaces. When you think of a manifold, I'd suggest imagining a sheet of paper: this is a two-dimensional object that lives in our familiar three-dimensional world, and can be bent or rolled around within that three-dimensional space.
Rotating, re-orienting, or stretching the piece of paper in three-dimensional space doesn't change the flat geometry of the paper: such operations are akin to linear embeddings. If you bend, curl, or crumple the paper, it is still a two-dimensional manifold, but the embedding into the three-dimensional space is no longer linear. Manifold learning algorithms would seek to learn about the fundamental two-dimensional nature of the paper, even as it is contorted to fill the three-dimensional space.
Here we will demonstrate a number of manifold methods, going most deeply into a couple techniques: multidimensional scaling (MDS), locally linear embedding (LLE), and isometric mapping (IsoMap).
The output is two dimensional, and consists of points drawn in the shape of the word, "HELLO". This data form will help us to see visually what these algorithms are doing.
This distance matrix gives us a representation of our data that is invariant to rotations and translations, but the visualization of the matrix above is not entirely intuitive. In the representation shown in this figure, we have lost any visible sign of the interesting structure in the data: the "HELLO" that we saw before.
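For concreteness, a sketch of the distance-matrix step (here X stands for the (N, 2) array of "HELLO" points described above; the variable name is an assumption):

```python
from sklearn.metrics import pairwise_distances

# X: assumed (N, 2) array of points tracing out "HELLO"
D = pairwise_distances(X)     # D[i, j] = Euclidean distance between points i and j
print(D.shape)                # (N, N), symmetric, zeros on the diagonal
```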
The MDS algorithm recovers one of the possible two-dimensional coordinate representations of our data, using only the $N\times N$ distance matrix describing the relationship between the data points.
This is essentially the goal of a manifold learning estimator: given high-dimensional embedded data, it seeks a low-dimensional representation of the data that preserves certain relationships within the data. In the case of MDS, the quantity preserved is the distance between every pair of points.
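A sketch of that step with scikit-learn's MDS estimator, fed the precomputed distance matrix D from above (parameter values are illustrative):

```python
from sklearn.manifold import MDS

model = MDS(n_components=2, dissimilarity='precomputed', random_state=1)
out = model.fit_transform(D)  # a 2D point layout whose pairwise distances approximate D
```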
The fundamental relationships between the data points are still there, but this time the data has been transformed in a nonlinear way: it has been wrapped-up into the shape of an "S."
The best two-dimensional linear embedding does not unwrap the S-curve, but instead throws out the original y-axis.
How can we move forward here? Stepping back, we can see that the source of the problem is that MDS tries to preserve distances between faraway points when constructing the embedding. But what if we instead modified the algorithm such that it only preserves distances between nearby points? The resulting embedding would be closer to what we want.
Here each faint line represents a distance that should be preserved in the embedding. On the left is a representation of the model used by MDS: it tries to preserve the distances between each pair of points in the dataset. On the right is a representation of the model used by a manifold learning algorithm called locally linear embedding (LLE): rather than preserving all distances, it instead tries to preserve only the distances between neighboring points: in this case, the nearest 100 neighbors of each point.
Thinking about the left panel, we can see why MDS fails: there is no way to flatten this data while adequately preserving the length of every line drawn between the two points. For the right panel, on the other hand, things look a bit more optimistic. We could imagine unrolling the data in a way that keeps the lengths of the lines approximately the same. This is precisely what LLE does, through a global optimization of a cost function reflecting this logic.
The result remains somewhat distorted compared to our original manifold, but captures the essential relationships in the data!
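A sketch of that computation, assuming XS holds the (N, 3) "wrapped-up" S-shaped version of the data (the variable name and solver choice are assumptions):

```python
from sklearn.manifold import LocallyLinearEmbedding

model = LocallyLinearEmbedding(n_neighbors=100, n_components=2,
                               method='modified', eigen_solver='dense')
out = model.fit_transform(XS)   # tries to flatten the S while keeping local distances
```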
Though this story and motivation is compelling, in practice manifold learning techniques tend to be finicky enough that they are rarely used for anything more than simple qualitative visualization of high-dimensional data.
In manifold learning, there is no good framework for handling missing data. In contrast, there are straightforward iterative approaches for missing data in PCA.
In manifold learning, the presence of noise in the data can "short-circuit" the manifold and drastically change the embedding. In contrast, PCA naturally filters noise from the most important components.
The manifold embedding result is generally highly dependent on the number of neighbors chosen, and there is generally no solid quantitative way to choose an optimal number of neighbors. In contrast, PCA does not involve such a choice.
In manifold learning, the globally optimal number of output dimensions is difficult to determine. In contrast, PCA lets you find the output dimension based on the explained variance.
In manifold learning, the meaning of the embedded dimensions is not always clear. In PCA, the principal components have a very clear meaning.
In manifold learning the computational expense of manifold methods scales as O[N^2] or O[N^3]. For PCA, there exist randomized approaches that are generally much faster (though see the megaman package for some more scalable implementations of manifold learning).
With all that on the table, the only clear advantage of manifold learning methods over PCA is their ability to preserve nonlinear relationships in the data; for that reason I tend to explore data with manifold methods only after first exploring them with PCA.
For toy problems such as the S-curve we saw before, locally linear embedding (LLE) and its variants (especially modified LLE), perform very well. This is implemented in sklearn.manifold.LocallyLinearEmbedding.
If you're interested in getting a feel for how these work, I'd suggest running each of the methods on the data in this section.
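One way to do that, sketched with a handful of scikit-learn estimators on the same assumed XS array (parameters are illustrative, not tuned):

```python
from sklearn.manifold import MDS, LocallyLinearEmbedding, Isomap, TSNE

estimators = {
    'MDS': MDS(n_components=2),
    'modified LLE': LocallyLinearEmbedding(n_neighbors=100, n_components=2,
                                           method='modified'),
    'Isomap': Isomap(n_neighbors=10, n_components=2),
    't-SNE': TSNE(n_components=2, init='pca'),
}
embeddings = {name: est.fit_transform(XS) for name, est in estimators.items()}
```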
One place manifold learning is often used is in understanding the relationship between high-dimensional data points. A common case of high-dimensional data is images: for example, a set of images with 1,000 pixels each can be thought of as a collection of points in 1,000 dimensions – the brightness of each pixel in each image defines the coordinate in that dimension.
We have 2,370 images, each with 2,914 pixels. In other words, the images can be thought of as data points in a 2,914-dimensional space!
We see that for this data, nearly 100 components are required to preserve 90% of the variance: this tells us that the data is intrinsically very high dimensional—it can't be described linearly with just a few components. | CommonCrawl |
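A sketch of that check, assuming the images come from scikit-learn's Labeled Faces in the Wild loader (which matches the 2,370 x 2,914 shape quoted above):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_lfw_people
from sklearn.decomposition import PCA

faces = fetch_lfw_people(min_faces_per_person=30)
print(faces.data.shape)                      # expected (2370, 2914)

model = PCA(n_components=100, svd_solver='randomized').fit(faces.data)
plt.plot(np.cumsum(model.explained_variance_ratio_))   # cumulative variance vs. components
plt.xlabel('number of components')
plt.ylabel('cumulative explained variance')
plt.show()
```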
Title: Are all Secant Varieties of Segre Products Arithmetically Cohen-Macaulay?
Abstract: When present, the Cohen-Macaulay property can be useful for finding the minimal defining equations of an algebraic variety. It is conjectured that all secant varieties of Segre products of projective spaces are arithmetically Cohen-Macaulay. A summary of the known cases where the conjecture is true is given. An inductive procedure based on the work of Landsberg and Weyman (LW-lifting) is described and used to obtain resolutions of orbits of secant varieties from those of smaller secant varieties. A new computation of the minimal free resolution of the variety of border rank 4 tensors of format $3 \times 3 \times 4$ is given together with its equivariant presentation. LW-lifting is used to prove several cases where secant varieties are arithmetically Cohen-Macaulay and arithmetically Gorenstein. | CommonCrawl |
I'm trying to follow Gatheral's Volatility Surface Ch. 1, i.e. the text (pg. 5 and 6) linked to in this question, with further text discussed in this question. I can't figure out how to arrive at the initial basic equation giving the change in the value of the portfolio, so I would be grateful if anyone could help me wrap my head around it.
where $\mu_t$ is the (deterministic) instantaneous drift of stock price returns, $\eta$ the volatility of volatility, and $\rho$ the correlation between random stock price returns and changes in $v_t$. $dZ_1$ and $dZ_2$ are Wiener processes.
of $V=V(S,v,t)$ being the (value of the) option being priced, a quantity $\Delta$ of the underlying stock $S$ and a quantity $\Delta_1$ of an asset $V_1$ whose value depends on volatility (which I assume follows the same valuation notation of $V_1=V_1(S,v,t)$).
where, for clarity, we have eliminated the explicit dependence on $t$ of the state variables $S_t$ and $v_t$, and the dependence of $\alpha$ and $\beta$ on the state variables.
Only a brief pointer should suffice, and I'll try working through it. Also, could you please suggest some calculus reading as a precursor to this book?
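One such pointer, sketched under the dynamics quoted above (this is just the generic two-dimensional Itô expansion, not a claim about the book's exact display): apply Itô's formula to $V(S,v,t)$,

$$dV = \frac{\partial V}{\partial t}\,dt + \frac{\partial V}{\partial S}\,dS + \frac{\partial V}{\partial v}\,dv + \frac{1}{2}\frac{\partial^2 V}{\partial S^2}(dS)^2 + \frac{\partial^2 V}{\partial S\,\partial v}\,dS\,dv + \frac{1}{2}\frac{\partial^2 V}{\partial v^2}(dv)^2,$$

with $(dS)^2 = v S^2\,dt$, $(dv)^2 = \eta^2\beta^2 v\,dt$ and $dS\,dv = \rho\,\eta\,\beta\,v S\,dt$ (writing the variance dynamics as $dv = \alpha\,dt + \eta\beta\sqrt{v}\,dZ_2$). Doing the same for $V_1$ and substituting into $d\Pi = dV - \Delta\,dS - \Delta_1\,dV_1$, then collecting the $dt$, $dZ_1$ and $dZ_2$ terms, gives the change in the value of the portfolio.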
Vector spaces are one of the most common mathematical abstractions. In classical physics, vectors can describe linear motion in space. In operations research, vectors can describe linear constraints to optimization problems. In statistics and machine learning, vectors can describe features in datasets.
In the aforementioned use cases, it is normally sufficient to represent vectors as some array of real values. The rigidity of arrays is the major flaw of this approach: how do you extend arrays to support other functionality such as user-defined annotations? Furthermore, how can we maintain type safety so that the compiler can infer common features of vector spaces? We introduce a simple vector interface to solve this issue.
Before diving into the creation of this interface, we begin with an example vector space to motivate the problem.
Add vector subtraction and additive inversion.
Our first change to the requirements is the addition of two new operations: adding polynomials by their coefficients and scaling a polynomial's coefficients by a scalar value. With these operations, we can consider polynomials as vectors in a vector space. To develop some intuition for these operations, we sketch some examples.
Adding two polynomials, $a$ and $b$, results in a new polynomial where the coefficients have been summed together component-wise.
Multiplying a polynomial, $a$, by a scalar, $\lambda$ means that we scale each coefficient of the polynomial, $a_i$, by the scalar.
The second modification to the requirements introduce vector subtraction and additive inversion. Vector subtraction means that we can subtract the coefficients of two polynomials. Additive inversion means that we negate the coefficients of a polynomial. It is trivial to show that we can define vector subtraction and additive inversion in terms of vector addition and scalar multiplication.
During the construction of our polynomial vector space, we realize that we must implement two operations: vector addition and scalar multiplication. Furthermore, we may infer the implementation of vector subtraction and additive inversion by the implementation of vector addition and scalar multiplication.
This problem appears to generalize well into a library where we can use any object that satisfies the base set of vector space properties in a variety of vector-based algorithms such as models of motion, constrained optimization, and clustering.
Before we introduce any code, we begin by analyzing vector spaces from its mathematical definition as an algebraic structure. Frequently, mathematical abstractions provide a good starting point for modeling by defining the objects needed to model as well as the operations between the model.
Abstract Algebra is the field of mathematics that is concerned with the algebraic structures. Algebraic structures are sets with operations defined on the elements of the set and other optional sets.
Function notation describes operations by the types of their arguments and results. The types of the arguments are to the left of the $\mapsto$ symbol, and the $\times$ symbol delimits the types of arguments. The types of the results are to the right of the $\mapsto$ symbol, and the $\times$ symbol likewise delimits the types of results.
The mathematical definition of vector spaces provides sufficient details about the types we need to model in addition to the operations supported by those types.
For modeling scalars, it is sufficient to use double primitives. However, the implementation of vectors may vary depending on the problem at hand. Trivially, we can represent a vector as an array of doubles, double[]. Unfortunately, this representation is rigid and difficult to extend. We propose an interface instead.
This interface, however, is unsatisfactory because vector implementations are forced to handle interoperability with other vector types. In mathematics, however, it is not the case that a vector is required to interact with vectors from other spaces. To mitigate this problem, we use Java generics with the Curiously Recurring Template Pattern (CRTP) to parameterize the type by its subtypes.
This interface only requires vectors from the same space (subtype) to interact with other vectors in its space.
Suppose we would like this interface to have default implementations for operations that can be defined in terms of other operations.
It is important that we highlight the CRTP pattern here. Without knowing that E is a subtype of Vector<E>, we would not be able to provide default implementations for operations that depend on other abstracted operations. This problem occurs in Java because of type erasure.
Type erasure is a feature of Java where generic types are erased at runtime. This means that unbounded objects of E can only be assumed to have the same interface as Object.
With these interface constraints and default operations, we have completed the interface.
We revisit the implementation of the Polynomial class so that it implements Vector<E> interface.
This implementation no longer requires the vector subtraction nor the additive inverse implementations because the interfaces provides them.
In addition to modeling vector-based types, this interface provides a foundation for generic vector-based algorithms including optimization and matrix factorization. It is easy to conceive a gradient descent optimization algorithm based on this vector interface.
The GradientDescent implementation can now perform optimization on any user-defined type that implements Vector<E> and some notion of a gradient.
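The post's code is Java, but the underlying contract is small enough to sketch in a few lines of duck-typed Python (the names and the fixed step size here are illustrative, not the post's actual GradientDescent):

```python
def gradient_descent(x0, grad, step=0.1, iters=100):
    # x0: any object exposing .add(other) and .scale(scalar) that return the same type
    # grad: a callable mapping such an object to its gradient (another such object)
    x = x0
    for _ in range(iters):
        x = x.add(grad(x).scale(-step))   # x <- x - step * grad(x)
    return x

class Polynomial:
    """Coefficient vector playing the role of the post's Polynomial class."""
    def __init__(self, coeffs):
        self.coeffs = list(coeffs)
    def add(self, other):
        return Polynomial(a + b for a, b in zip(self.coeffs, other.coeffs))
    def scale(self, scalar):
        return Polynomial(scalar * a for a in self.coeffs)
```

The point carries over directly: the optimizer never inspects the concrete type, only the vector-space operations.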
The Vector<E> interface provides an extensible and type safe representation of vector spaces in Java 8 as we have seen with the polynomials example. This theory should also extend to other vector formulations of user-defined types.
Future work for this interface includes the introduction of algorithms backed by this interface, such as gradient descent. I will continue working on this interface as well as libraries for it in my GitHub repository vec. The vision for this project is to enable software engineers to formulate and solve problems in terms of vectors rather than arrays of doubles.
Reference for homological cross lemma?
If the two sequences here are exact, then $\ker(\psi_2 \circ \phi_1) \cong \ker(\phi_2\circ\psi_1)$, and similarly for cokernel.
The proof is straightforward diagram chasing, but we'd like to just cite a source for it if possible. He says he's seen a version of this referred to as the "cross lemma", but a search on this name doesn't yield much besides https://link.springer.com/content/pdf/10.1007/BF00969298.pdf and https://core.ac.uk/download/pdf/82599979.pdf, both of which are working in confusingly more general contexts.
Is there a basic source on algebra where we could find this written down? Does it go by a different name?
What you are looking for is a corollary of [a certain variation of] the $9$-lemma (also known as the $3\times 3$-lemma); the name "cross lemma" might not be so widely used in the literature.
I believe there are some modern references for this statement, but it appears precisely in this form as Proposition 16.5 of Mitchell's "Theory of Categories" (1965).
I know I have already asked a question regarding this proof. However, I wanted to see if my reformulation of this proof (with my better understanding in my own words and after some time) is correct.
There exists a prime factor of the form $3k+2$ and there is no prime factor (from our finite list $p_1,p_2,\ldots,p_r$) of the form $3k+2$. Therefore, the supposition is false and there are infinitely many primes with remainder $2$ when divided by $3$.
Just as when Euclid's proof is erroneously reported (by Dirichlet, G. H. Hardy, and others) to be a proof by contradiction, your proof is more complicated than it needs to be. Instead of supposing that only finitely many exist, just say you have some set of finitely many, then go through your argument to show you can extend it to a larger finite set.
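Concretely, the constructive version can run as follows (a sketch of the standard argument): given any finite list of primes $p_1,\dots,p_r$, each congruent to $2$ modulo $3$, set

$$N = 3\,p_1 p_2 \cdots p_r - 1 .$$

Then $N \equiv 2 \pmod 3$, so $3 \nmid N$, and not every prime factor of $N$ can be $\equiv 1 \pmod 3$ (otherwise their product $N$ would be $\equiv 1$). Hence some prime $q \equiv 2 \pmod 3$ divides $N$; and $q$ cannot be any $p_i$, since each $p_i$ divides $N+1$ and would then divide $1$. So the list can always be extended, and no finite list contains all such primes.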
Other than that I think you're OK.
Why does Friedberg say that the role of the determinant is less central than in former times?
Is it possible to have a $3 \times 3$ matrix that is both orthogonal and skew-symmetric?
How can I find all solutions to differential equation?
What is the probability of $\cos(\theta_1) + \cos(\theta_2) + \cos(\theta_1 - \theta_2) + 1 \le 0$?
I'm aware of this question https://stackoverflow.com/questions/4350215/fastest-nearest-neighbor-algorithm, but it's not the same question as I'm asking, because octrees and their generalizations are only applicable to very small $D$: their cost grows exponentially with it. I'm interested in very high dimensional cases, like $D>1000$. Assume we have a cloud of $N$ points in a $D$ dimensional space. They might have a lower inherent dimension $d$, i.e., they lie on a $d$ dimensional manifold. Now, what is the most efficient way to compute the K nearest neighbors of each point? I've researched this topic a little bit, but most of the literature focuses on KNN clustering, which is slightly different from what I'm asking. First, I don't have any labels here, and those methods which rely on labels to reduce complexity are not of any interest. Second, many methods focus on reducing query processing time, which means they pre-process the data, build a database, and quickly process new queries; but I'm interested in reducing the overall time complexity rather than the query time. What is the state-of-the-art method and its time complexity?
UPDATE: One might assume that these points are uniformly spread in a hypercube. I suppose anything that holds for the uniform distribution will, to a lesser extent, hold for a smoothly changing distribution.
For high-dimensional exact nearest neighbor search the theoretical guarantees are pretty dismal: the best algorithms are based on fast matrix multiplication and have running time of the form $O(N^2D^\alpha)$ for some $\alpha < 1$.
On the other hand you can do better if you are ok with approximation. Locality sensitive hashing can be used to achieve subquadratic in $N$ running times while reporting points which are not much further than the nearest neighbors.
For subsets of Euclidean space with doubling dimension $d$ (which captures the lower-dimensional manifold situation), Indyk and Naor show that there is an embedding into $O(d)$ dimensions (in fact a random projection in the style of the Johnson-Lindenstrauss lemma) which approximately preserves nearest neighbors. If $d$ is small enough you could first apply the embedding, then use a nearest neighbor data structure for low dimension.
You could also try the random projection trees (RP-trees) of Dasgupta and Freund. They are a natural variant of kd-trees in which the cut in each level is done in a random direction, rather than a coordinate direction. I don't think there are any known provable guarantee for how well RP-trees do on nearest neighbor problems, but the paper I have linked does prove that the trees adapt nicely to doubling dimension, so there is hope. The guarantee they give is that, if a point set inside a cell has doubling dimension $d$, then the descendants of the cell $O(d\log d)$ levels down the tree have diameter at most half that of the cell.
Several state of the art approximate nearest neighbor methods in high dimensions are based on reducing the dimension of the space through randomized techniques.
The main idea is that you can exploit concentration of measure to greatly reduce the dimension of the space while preserving distances up to tolerance $\epsilon$.
Indeed, such subspaces are "typical" in a sense, and you can find such a subspace with high probability by simply projecting your points onto a completely random hyperplane (and in the unlikely event that such a hyperplane is not good enough, just pick another random one and project again).
Now you have a nearest neighbor problem on a much much lower dimensional space.
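A sketch of that recipe with scikit-learn (the data, the target dimension of 50 and the neighbor count are arbitrary placeholders; the Johnson-Lindenstrauss / Indyk-Naor analysis is what guides the real choice of target dimension):

```python
import numpy as np
from sklearn.random_projection import GaussianRandomProjection
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X = rng.standard_normal((10_000, 1000))        # N points in D = 1000 dimensions

proj = GaussianRandomProjection(n_components=50, random_state=0)
Xp = proj.fit_transform(X)                     # project onto a random 50-dim subspace

nn = NearestNeighbors(n_neighbors=10).fit(Xp)  # ordinary neighbor search, now in 50 dims
dist, idx = nn.kneighbors(Xp)                  # approximate neighbors of every point
```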
Have you considered a kd-tree? More optimal algorithms usually require a less general case problem.
This document derives the Fourier Series coefficients for several functions. The functions shown here are fairly simple, but the concepts extend to more complex functions.
This can be a bit hard to understand at first, but consider the sine function. The function sin(x/2) oscillates twice as slowly as sin(x) (i.e., each oscillation is twice as wide). In the same way ΠT(t/2) is twice as wide (i.e., slow) as ΠT(t).
Since the function is even there are only $a_n$ terms.
Any interval of one period is allowed but the interval from -T/2 to T/2 is straightforward in this case.
This result is further explored in two examples.
The values for $a_n$ are given in the table below. Note: this example was used on the page introducing the Fourier Series. Note also that in this case $a_n$ (except for n=0) is zero for even n, and decreases as 1/n as n increases.
As you add sine waves of increasingly higher frequency, the approximation improves.
The addition of higher frequencies better approximates the rapid changes, or details, (i.e., the discontinuity) of the original function (in this case, the square wave).
Gibbs overshoot exists on either side of the discontinuity.
The rightmost button shows the sum of all harmonics up to the 21st harmonic, but not all of the individual sinusoids are explicitly shown on the plot. In particular harmonics between 7 and 21 are not shown.
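A quick numerical sketch of those partial sums, using the standard coefficients a_n = (2A/(nπ))·sin(nπ·Tp/T) of the 50% duty-cycle pulse (the amplitude and period values are arbitrary choices):

```python
import numpy as np
import matplotlib.pyplot as plt

A, T = 1.0, 2 * np.pi            # pulse height and period (arbitrary)
Tp = T / 2                        # 50% duty cycle, as in the example above
w0 = 2 * np.pi / T
t = np.linspace(-T, T, 4000)

def partial_sum(n_max):
    x = np.full_like(t, A * Tp / T)                          # a0: the average value
    for n in range(1, n_max + 1):
        a_n = 2 * A / (n * np.pi) * np.sin(n * np.pi * Tp / T)
        x = x + a_n * np.cos(n * w0 * t)
    return x

for n_max in (1, 3, 7, 21):
    plt.plot(t, partial_sum(n_max), label=f'harmonics up to n={n_max}')
plt.legend()
plt.show()   # the ripples near the jumps are the Gibbs overshoot
```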
The values for $a_n$ are given in the table below (note: this example was used on the previous page).
Note that because this example is similar to the previous one, the coefficients are similar, but they are no longer equal to zero for n even.
which generates the same answer as before. This will often be simpler to evaluate than the original integral because one of the limits of integration is zero.
As before, the integral is from -T/2 to +T/2; we make use of the facts that the function is constant for |t|<Tp/2 and zero elsewhere, and that T·ω0 = 2·π.
Euler's identities dictate that e^(+jθ) - e^(-jθ) = 2j·sin(θ), so e^(-jθ) - e^(+jθ) = -2j·sin(θ).
Note that, as expected, c0=a0 and cn=an/2 for n≠0 (since this is an even function, bn=0).
If xT(t) is a triangle wave with A=1, the values for $a_n$ are given in the table below (note: this example was used on the previous page).
Note: this is similar, but not identical, to the triangle wave seen earlier.
As you add sine waves of increasingly higher frequency, the approximation gets better and better, and these higher frequencies better approximate the details, (i.e., the change in slope) in the original function.
The amplitudes of the harmonics for this example drop off much more rapidly (in this case they go as 1/n^2, which is faster than the 1/n decay seen in the pulse function Fourier Series above). Conceptually, this occurs because the triangle wave looks much more like the 1st harmonic, so the contributions of the higher harmonics are less. Even with only the first few harmonics we have a very good approximation to the original function.
There is no discontinuity, so no Gibbs overshoot.
As before, only odd harmonics (1, 3, 5, ...) are needed to approximate the function; this is because of the symmetry of the function.
Thus far, the functions considered have all been even. The diagram below shows an odd function.
It is easiest to integrate from -T/2 to +T/2. Over this interval $x_T(t)=2At/T$.
Note: this is similar, but not identical, to the sawtooth wave seen earlier.
There is Gibbs overshoot caused by the discontinuity.
For this reason, among others, the Exponential Fourier Series is often easier to work with, though it lacks the straightforward visualization afforded by the Trigonometric Fourier Series.
There is Gibbs overshoot caused by the discontinuities.
If the function xT(t) has certain symmetries, we can simplify the calculation of the coefficients.
The first two symmetries were discussed previously in the discussions of the pulse function (xT(t) is even) and the sawtooth wave (xT(t) is odd).
Half-wave symmetry is depicted the diagram below.
The reason the coefficients of the even harmonics are zero can be understood in the context of the diagram below. The top graph shows a function, xT(t) with half-wave symmetry along with the first four harmonics of the Fourier Series (only sines are needed because xT(t) is odd). The bottom graph shows the harmonics multiplied by xT(t).
Now imagine integrating the product terms from -T/2 to +T/2. The odd terms (from the 1st (red) and 3rd (magenta) harmonics) will have a positive result (because they are above zero more than they are below zero). The even terms (green and cyan) will integrate to zero (because they are equally above and below zero). Though this is a simple example, the concept applies for more complicated functions, and for higher harmonics.
The only function discussed with half-wave symmetry was the triangle wave and indeed the coefficients with even indices are equal to zero (as are all of the bn terms because of the even symmetry). The square wave with 50% duty cycle would have half wave symmetry if it were centered around zero (i.e., centered on the horizontal axis). In that case the a0 term would be zero and we have already shown that all the terms with even indices are zero, as expected.
Simplifications can also be made based on quarter-wave symmetry, but these are not discussed here.
(assuming xT(t) is real) we can use the symmetry properties of the Trigonometric Series to find an and bn and hence cn.
The magnitude of the cn terms is even with respect to n: |c-n|=|cn|.
The angle of the cn terms is odd with respect to n: ∠c-n=-∠cn.
If xT(t) is even, then bn=0 and cn is even and real.
If xT(t) is odd, then an=0 and cn is odd and imaginary.
Let's examine the Fourier Series representation of the periodic rectangular pulse function, ΠT(t/Tp), more carefully.
We can change the limits of integration to -Tp/2 and +Tp/2 (since the function is zero elsewhere) and proceed (the function is one in that interval, so we can drop it). We also make use of the fact that ω0=2π/T, and of Euler's identity for sine.
The last step in the derivation is performed so we can use the sinc() function (pronounced like "sink"). This function comes up often in Fourier Analysis.
sinc(x)=0 for all integer values of x except at x=0, where sinc(0)=1. This is because sin(π·n)=0 for all integer values of n. However, at n=0 we have sin(π·n)/(π·n), which is zero divided by zero, but by L'Hôpital's rule we get a value of 1.
The first zeros away from the origin occur when x=±1.
The function decays with an envelope of 1/(π·x) as x moves from the origin. This is because the sin() function has successive maxima with an amplitude of 1, and the sin function is divided by π·x.
The diagram below shows cn vs n for several values of the duty cycle, Tp/T.
The graph on the left shows the time domain function. If you hit the middle button, you will see a square wave with a duty cycle of 0.5 (i.e., it is high 50% of the time). The period of the square wave is T=2·π. The graph on the right shows the values of cn vs n as red circles vs n (the lower of the two horizontal axes; ignore the top axis for now). The blue line goes through the horizontal axis whenever the argument of the sinc() function, n·Tp/T, is an integer (except when n=0). In particular, the first crossing of the horizontal axis is given by n·Tp/T=1, or n=T/Tp (note that this need not be an integer for all values of Tp). There are several important features to note as Tp is varied.
As Tp decreases (along with the duty cycle, Tp/T), so does the value of c0. This is to be expected because c0 is just the average value of the function and this will decrease as the pulse width does.
As Tp decreases, the "width" of the sinc() function broadens. This tells us that as the function becomes more localized in time (i.e., narrower) it becomes less localized in frequency (broader). In other words, if a function happens very rapidly in time, the signal must contain high frequency coefficients to enable the rapid change.
This tells us explicitly that the product of width in frequency (i.e., Δn) multiplied by the width in time (Δt) is constant - if one is doubled, the other is halved. Or - as one gets more localized in time, it is less localized in frequency. We will discuss this more later. | CommonCrawl |
We model a "one-shot learning" situation, where very few (scalar) observations $y_1,...,y_n$ are available. Associated with each observation $y_i$ is a very high-dimensional vector $x_i$, which provides context for $y_i$ and enables us to predict subsequent observations, given their own context. One of the salient features of our analysis is that the problems studied here are easier when the dimension of $x_i$ is large; in other words, prediction becomes easier when more context is provided. The proposed methodology is a variant of principal component regression (PCR). Our rigorous analysis sheds new light on PCR. For instance, we show that classical PCR estimators may be inconsistent in the specified setting, unless they are multiplied by a scalar $c > 1$; that is, unless the classical estimator is expanded. This expansion phenomenon appears to be somewhat novel and contrasts with shrinkage methods ($c < 1$), which are far more common in big data analyses.
[SOLVED] Is it possible for information to be transmitted faster than light by using a rigid pole?
[SOLVED] If I run along the aisle of a bus traveling at (almost) the speed of light, can I travel faster than the speed of light?
[SOLVED] Would time freeze if you could travel at the speed of light?
[SOLVED] If a mass moves close to the speed of light, does it turn into a black hole?
[SOLVED] What really causes light/photons to appear slower in media?
[SOLVED] Does a photon in vacuum have a rest frame?
[SOLVED] How does gravity escape a black hole?
[SOLVED] Why and how is the speed of light in vacuum constant, i.e., independent of reference frame?
[SOLVED] What is the mechanism behind the slowdown of light/photons in a transparent medium?
[SOLVED] Why is a black hole black?
[SOLVED] What is so special about speed of light in vacuum?
[SOLVED] What is the proof that the universal constants ($G$, $\hbar$, $\ldots$) are really constant in time and space?
[SOLVED] How is light affected by gravity?
[SOLVED] Why is there no absolute maximum temperature?
[SOLVED] How can a photon have no mass and still travel at the speed of light?
[SOLVED] Why does the (relativistic) mass of an object increase when its speed approaches that of light?
[SOLVED] Why are objects at rest in motion through spacetime at the speed of light?
[SOLVED] Do photons have acceleration?
[SOLVED] How does light speed up after coming out of a glass slab?
[SOLVED] How does a photon experience space and time?
[SOLVED] Speed of light in a gravitational field?
[SOLVED] Accelerating particles to speeds infinitesimally close to the speed of light?
[SOLVED] If I am travelling on a car at around 60 km/h, and I shine a light, does that mean that the light is travelling faster than the speed of light?
[SOLVED] Is it really possible to break the speed of light by flicking your wrist with a laser pointer?
[SOLVED] Does a photon instantaneously gain $c$ speed when emitted from an electron?
[SOLVED] Will free-fall object into black hole exceed speed of light $c$ before hitting black hole surface?
[SOLVED] Is a photon "fixed in spacetime"?
[SOLVED] What happens when a photon hits a mirror? | CommonCrawl |
As is well known, the QCD coupling $\alpha_s$ is running. The only rigorously known expression for $\alpha_s$ comes from the RGE. Nevertheless, the approximation of fixed coupling is quite often used in QCD in order to simplify calculations. In this case the scale of $\alpha_s$ is conventionally fixed a posteriori from phenomenological considerations. We argue that there are different frozen couplings for calculations in the Double- and Single-Logarithmic Approximations. We present estimates for their values.
Computational Complexity: Foundations of Complexity Lesson 1: What is a computer?
Lesson 1: What is a computer?
This is the first of a long series of posts giving an informal introduction to computational complexity.
Computational complexity theorists try to determine which problems are efficiently solvable on a computer. This sentence already leads to many questions: What is a problem? What is efficiently solvable? Let us first start off with a truly basic question: what is a computer?
In 1936, Alan Turing invented a theoretical computing device now called a Turing Machine. This was before electronic computers existed. He tried to give a simple model of the thought processes of mathematicians. His model has stood the test of time and represents the official model of computation that we still study today.
Instead of giving the formal definition of a Turing machine, let us try a more modern approach. Consider some current programming language like Java. Let us consider the (imaginary) world where a Java program has access to a potentially infinite amount of storage. A Turing machine corresponds to a specific Java program. You might find it a little confusing to think of Turing machine = Java Program but that is the best way to think about it.
This you can interpret as saying that everything that is computable is computable by a Java program.
The Church-Turing thesis cannot be proven, as it is a thesis, but it has led us to define computable as computable by a Turing machine. Now, after about half a century of having real computers, the Turing machine has really proven itself as the right model of computation.
I find the first-order logic notions of $\Sigma_1$ subsets and $\Delta_1$ subsets of $\mathbb N$ to be much more natural than Turing enumerable or Turing decidable, which are after all defined in terms of a particular kind of machine (one of many possible). In my view, the Church-Turing hypothesis looks more plausible if stated in terms of recursive functions (or $\Sigma_1$/$\Delta_1$ sets) than through the language of "machines". Talking about different machines gives the hypothesis a mystifying aura that it does not deserve.
That is not the Church-Turing thesis!?
We prove a Hardy inequality on convex sets, for fractional Sobolev-Slobodeckii spaces of order $(s,p)$. The proof is based on the fact that in a convex set the distance from the boundary is a superharmonic function, in a suitable sense. The result holds for every $1<p<\infty$ and $0<s<1$, with a constant which is stable as $s$ goes to $1$. | CommonCrawl |
Divide and conquer is an extremely powerful concept that is used a lot in computer science, and that can also be applied in real life. We present its application to sorting algorithms. Then we'll talk about a major fundamental open mathematical problem, called P=NC.
As computers take over the world, algorithms more and more rule it. I'm not saying we should use algorithms for everything, at least not as Dr. Sheldon Cooper uses an algorithm to make friends.
Great concepts such as divide and conquer or parallelization have developed a lot and found plenty of applications, including in the way we should deal with real-life issues. Let's mention Facebook, YouTube or Wikipedia, whose success is based on these concepts, by distributing the responsibility of feeding them with content. But what's less known is that astronomy also went distributed, as it has asked everyone to contribute to its development by classifying galaxies on this website. Let's also add Science4All to that list (we'll get there!), which counts on each and every one of you, not only to contribute by writing articles, but also to share the pages and tell your friends!
But what has come is nothing compared to what's coming. In his book Physics of the Future, the renowned theoretical physicist Michio Kaku predicts the future of computers. Today, computers have only one chip, which may be divided into two or four cores. Chips give computers their computing power. But, in the future, chips will be so small and so cheap that they will be everywhere, just like electricity today. Thousands of chips will take care of each and every one of us. I'll let you imagine the infinite possibilities of parallelization.
In this article, we'll present how the divide and conquer approach improves sorting algorithms. Then we'll talk about parallelization and the major fundamental problem $P=NC$.
The problem of sorting data is crucial, not only in computer science. Imagine Sheldon wanted to hire a friend to take him to the comic store. He'd make a list of his friends, ranked by how good friends they are, then he'll ask each friend to come with him. If the friend refuses, he'd have to go to the next friend of the list. Thus, he needs to sort his friends. This can take a while, even for Dr. Sheldon Cooper.
Really? I mean, Sheldon only has a handful of friends, doesn't he?
Even if he only has 6 friends, if he does the sorting badly, it can take him a while. For instance, he could use the bogosort: write his friends' names on cards, throw the cards in the air, pick them up, test if they are sorted, and, if they are not, repeat from step 1. Even though this will eventually work, it will probably take a while. On average, it will take as many throws in the air as the number of ways to sort 6 friends, that is $6! = 6 \times 5 \times 4 \times 3 \times 2 \times 1 = 720$.
But don't underestimate Dr. Cooper's circle. In the pilot of the show, he claims to have 212 friends on MySpace (and that doesn't even include Penny and Amy)! Sorting 212 friends will take a while. With the bogosort, it will take on average about $10^{281}$ throws in the air, which cannot be done in billions of years even with very powerful computers… Obviously, he'll need to do it smartly.
Can't he just ask his most favorite friend who hasn't refused yet, until one accepts?
Yes he can. At the first iteration, he'd have to look through the 212 friends. Then, if the favorite friend refuses, through the other 211 friends. Then, if the second favorite friend refuses, through the other 210 friends… and so on. In the worst case where all his friends refuse (which is a scenario Sheldon really should consider…), he'll have to do $212+211+210+…+1$ basic operations. That is approximately equal to $212^2/2 = 22,472$. Even for Dr. Cooper, this will take a while.
What he's done is almost the selection sort algorithm. This algorithm uses two lists, one called the remaining list, initially filled with all friends, and the other, called the sorted list, initially empty. At each iteration, the most favorite friend of the remaining list is selected and removed from that list, and it is appended to the sorted list. If the number of friends is $n$, the number of operations required for this algorithm is about $n^2/2$. We say that it has a quadratic time complexity.
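A minimal sketch of that selection sort (the preference test is a stand-in for "how good a friend is"):

```python
def selection_sort(friends, better):
    # better(a, b) is True when friend a is preferred to friend b
    remaining, ordered = list(friends), []
    while remaining:
        best = remaining[0]
        for f in remaining[1:]:          # scan the whole remaining list...
            if better(f, best):
                best = f
        remaining.remove(best)           # ...and move the favourite over
        ordered.append(best)
    return ordered                        # roughly n^2/2 comparisons for n friends
```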
There are several other sort algorithms with quadratic time complexity that you can think of, including the insertion sort, the bubble sort and the gnome sort. But there are better algorithms, mainly based on the divide and conquer principle.
Divide and conquer? What do you mean?
Divide and conquer is an extremely powerful way of thinking about problems. The idea is to divide a main problem into smaller subproblems that are easier to solve. Solving the main problem then amounts to dividing it into subproblems, solving the subproblems, and merging their results. This is particularly powerful in the case of parallel algorithms, which I'll get back to later in this article.
In our case, we're going to divide the problem of sorting all of the friends into two subproblems of sorting each half of the friends. Now, there are mainly two ways of handling the dividing and merging phases. Either we focus on the dividing phase and we'll be describing the quick sort, or we focus on the merging phase and this will lead us to the merge sort. Although I actually prefer the merge sort (and I'll explain why later), I will be describing the quick sort.
As I said, the idea of the quick sort is to focus on the dividing phase. What we'll be doing in this phase is dividing the list of friends into a list of relatively good friends and a list of relatively bad friends. In order to do that we need to define a friend that's in-between the two lists, called the pivot. This pivot is usually the first element of the list or an element picked randomly in the list. Friends preferred to the pivot will go into the first list, the others into the second list. In each list, we solve a subproblem that gets it sorted. Merging will become very easy as it will simply consist in appending the second list to the first one.
OK, I know how to divide and merge. But how do we solve the subproblem?
The subproblem is identical to the main problem… Obviously, if a list has only 1 or 2 elements, we don't need to apply a quick sort to sort it… But in other cases, we can use the quick sort for the subproblems! That means that we will apply a quick sort to our two lists, which will sort them.
Can we use an algorithm in its own definition?
Yes, we can, because we use it on a strictly smaller problem, and when the problem is simple enough (that is, we're trying to sort a list of 1 or 2 elements), we don't use the algorithm. Let's apply the quick sort to Sheldon's example, considering only his close friends.
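Here is a minimal C++ sketch of the quick sort just described, sorting integer ranks with a randomly chosen pivot (the importance of that randomness is discussed below):

    #include <random>
    #include <utility>
    #include <vector>

    // Quick sort on the half-open range [lo, hi): pick a random pivot,
    // partition into "better than pivot" / "worse than pivot",
    // then sort each part with the same procedure.
    void quickSort(std::vector<int>& a, int lo, int hi, std::mt19937& rng)
    {
        if (hi - lo < 2) return;  // 0 or 1 element: nothing to sort
        std::uniform_int_distribution<int> pick(lo, hi - 1);
        const int pivot = a[pick(rng)];  // random pivot avoids the sorted-input worst case
        int i = lo, j = hi - 1;
        while (i <= j) {  // partition around the pivot
            while (a[i] < pivot) ++i;
            while (a[j] > pivot) --j;
            if (i <= j) { std::swap(a[i], a[j]); ++i; --j; }
        }
        quickSort(a, lo, j + 1, rng);  // the "good" part
        quickSort(a, i, hi, rng);      // the "bad" part
    }

Calling quickSort(list, 0, (int)list.size(), rng) sorts the whole list in place.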
And you're saying that this algorithm performs better than the selection sort?
Yes that's what I'm saying. Let's do a little bit of math to prove it. Suppose we have $n$ friends to sort. Denote by $T(n)$ the number of operations needed to perform the quick sort. The dividing phase takes $n$ operations. Once divided, we get two equally smaller problems. Solving one of them corresponds to sorting $n/2$ friends with the quick sort, which takes $T(n/2)$ operations. Solving both of them therefore takes twice as long, that is, $2T(n/2)$. The merging phase can be done very quickly, with only 1 operation. That's why we have the relation $T(n) = 2T(n/2)+n+1$. Solving this equation leads you to approximate $T(n)$ by $n \log(n)/\log(2)$. For large values of $n$, this number is much, much smaller than $n^2/2$.
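For the curious, here is a quick informal unrolling of that recurrence, assuming $n$ is a power of 2 and ignoring the $+1$: $T(n) \approx 2T(n/2) + n = 4T(n/4) + 2n = \ldots = 2^k T(n/2^k) + kn$. Stopping at $k = \log_2 n$ gives $T(n) \approx n\,T(1) + n \log_2 n$, which is the $n \log(n)/\log(2)$ mentioned above.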
Yes I did. It can be shown that, by picking the pivot randomly, the complexity will on average be nearly the one we wrote. We talk about average complexity. But in the worst case, the complexity is not as good. In fact, the worst-case complexity of the quick sort is the same as that of the selection sort: it is quadratic. That's why I prefer the merge sort, which, by making sure the two subproblems are equally small, guarantees that the complexity is always about $n \log n$, even in the worst case.
The reason why I wanted to talk about the quick sort is to show the importance of randomness in the algorithm. If we used the first element as the pivot, then sorting an already sorted list would be the worst case. Yet, that input will probably come up quite often… Randomness enables us to obtain a good complexity even in that case. Read more on randomness in my article on probabilistic algorithms!
The problem $P=NC$ is definitely a less-known problem but I actually find it just as crucial as the $P=NP$ problem (see my article on P=NP). Obviously if both were proven right, then $P=NC=NP$, which means that complicated problems could be solved… almost instantaneously. I really like the $P=NC$ problem as it highlights an important new way of thinking that needs to be applied in every field: parallelization.
The term parallel here is actually opposed to sequential. It is also known as distributed. For instance, suppose you want to compare the prices of milk, lettuce and beer in shops of New York City. You could go into each store, check the price and get out, but, as you probably know, NYC is a big city and this could take you months, if not years. What Brian Lehrer did on WNYC was apply the concept of parallelization to the problem of comparing prices in NYC shops: he asked his listeners to check the prices as he opened a web page to gather the information. In a matter of days the problem was solved. Check out the obtained map.
It surely does! The concept of divide and conquer was already extremely powerful in the case of a sequential algorithm, so I'll let you imagine how well it performs in the case of parallelized algorithms.
Except that there are no problems of egos with computers! Parallelization is a very important concept as more and more computers enable it. As a matter of fact, your computer (or even your phone) probably has a dual-core or a quad-core processor, which means that algorithms can be parallelized into 2 or 4 subproblems that run simultaneously. But that's just the tip of the iceberg! Many algorithms now run in cloud computing, which means that applications are running on several servers at the same time. The number of subproblems you can run is now simply enormous, and it will keep increasing.
Parallelization is now a very important way of thinking, because we now have the tools to actually do it. And to do it well.
Almost. $NC$, which stands for Nick's class, named after Nick Pippenger, is the set of decision problems that can be solved using a polynomial number of processors in polylogarithmic time, that is, with fewer than $(\log n)^k$ steps, where $k$ is a constant and $n$ is the size of the input. That means that parallelization would enable us to solve $NC$ problems very, very quickly: in a matter of seconds if not less, even for extremely large inputs.
And what does the "P" stand for?
$P$ stands for Polynomial. It's the set of decision problems that can be solved in polynomial time with a sequential algorithm. I won't be dwelling too much on these definitions; you can read my future article on $P=NP$ for better definitions.
As any parallelized problem can be sequenced by solving the parallelized subproblems one after the other, it can easily be proved that any $NC$ problem is a $P$ problem. The big question is proving whether a $P$ problem is necessarily an $NC$ problem or not. If $P=NC$, then any problem that we can solve with a single machine in reasonable time could be solved almost instantaneously with cloud computing. Applications in plenty of fields would be extraordinary. But if $P \neq NC$, which is, according to Wikipedia, what scientists seem to suspect, then some problems are intrinsically not parallelizable.
Do we have the concept of P-completeness to prove P=NC?
Yes we do, just like we have the concept of NP-completeness to prove $P=NP$! There are a few problems that have been proved to be P-complete, that is, problems that are at least as hard as every other $P$ problem. If one of them is proven to be in $NC$, then every other $P$ problem will be in $NC$. Proving that a P-complete problem is in $NC$ would therefore solve the problem $P=NC$. One of these problems is the decision problem associated with the very classical linear programming problem. This problem is particularly interesting because it has a lot of applications (and there would be an awful lot more if it could be parallelized!). Read my article on linear programming to learn about it!
I guess so… Still, a few sequentially polynomial problems have been parallelized and can now be solved much more quickly. For instance, let's get back to sorting algorithms. In the two divide and conquer algorithms we have described, subproblems are generated and can easily be parallelized. The number of subproblems can never exceed the size of the list, so it is polynomial in the size of the list at any time. In the case of nearly equal subproblems, the number of iterations is about the logarithm of the size of the list. Therefore the parallelized sorting algorithm runs on a polynomial number of processors in almost polylogarithmic time. The only remaining problem is the time complexity of the merging phase for the merge sort, and of the dividing phase for the quick sort.
However, those two difficulties have been overcome. In 1988, Richard Cole found a smart way to parallelize the merging phase of the merge sort. In 1991, David Powers used implicit partitions to parallelize the dividing phase of the quick sort. In both cases, this led to a logarithmic complexity. With huge datacenters, Google can now sort web pages in a matter of seconds. Impressive, right?
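To give a concrete flavor of parallelized sorting, here is a naive C++ sketch in which the two half-problems run on separate threads; it is not Cole's or Powers' algorithm (the merge step here is still sequential), just an illustration of the divide and conquer structure:

    #include <algorithm>
    #include <future>
    #include <vector>

    // Naive parallel merge sort: sort the two halves on separate threads,
    // then merge them sequentially. 'depth' limits how many threads are spawned.
    void parallelMergeSort(std::vector<int>& a, int depth = 2)
    {
        if (a.size() < 2) return;
        std::vector<int> left(a.begin(), a.begin() + a.size() / 2);
        std::vector<int> right(a.begin() + a.size() / 2, a.end());
        if (depth > 0) {
            auto other = std::async(std::launch::async,
                                    [&left, depth] { parallelMergeSort(left, depth - 1); });
            parallelMergeSort(right, depth - 1);
            other.get();  // wait for the half running on the other thread
        } else {
            std::sort(left.begin(), left.end());    // small enough: sort sequentially
            std::sort(right.begin(), right.end());
        }
        std::merge(left.begin(), left.end(), right.begin(), right.end(), a.begin());
    }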
I'm sure there is a lot more to say on parallelization (and definitely a lot more research to be done), especially in terms of protocols, which leads to problems of mechanism design, as each computer may act for its own purpose. I'm no expert in parallelization and if you are, I encourage you to write an article on that topic.
Computer science has already totally changed the world. For the new generation that includes myself, it's hard to imagine how people used to live without computers, how they wrote reports or sought information. Yet, this is only the beginning, and parallelization will change the world in a way I probably can't imagine. The advice I'd give you is to think with parallelization. It's a powerful concept that needs to be applied in various fields, including in company strategies.
In fact, I have applied divide and conquer by choosing the apparent lack of structure of the articles for Science4All. Classical structures would have involved a tree structure with categories of articles and subcategories, probably by themes of fields. But I'm betting on a more horizontal structure, where the structure is created by the links between the articles, via related articles or via authors. | CommonCrawl |
A Furuta pendulum is a serial connection of two thin, rigid links, where the first link is actuated by a vertical control torque while it is constrained to rotate in a horizontal plane; the second link is not actuated. The second link of the conventional Furuta pendulum is constrained to rotate in a vertical plane orthogonal to the first link, under the influence of gravity. Methods of geometric mechanics are used to formulate a new global description of the Lagrangian dynamics on the configuration manifold $(S^1)^2$. In addition, two modifications of the Furuta pendulum, viewed as double pendulums, are introduced. In one case, the second link is constrained to rotate in a vertical plane that contains the first link; global Lagrangian dynamics are developed on the configuration manifold $(S^1)^2$. In the other case, the second link can rotate without constraint; global Lagrangian dynamics are developed on the configuration manifold $S^1 \times S^2$. The dynamics of the Furuta pendulum models can be viewed as under-actuated nonlinear control systems. Stabilization of an inverted equilibrium is the most commonly studied nonlinear control problem for the conventional Furuta pendulum. Nonlinear, under-actuated control problems are introduced for the two modifications of the Furuta pendulum introduced in this paper, and these problems are shown to be extremely challenging.
The BMC License Usage Collection Utility is a free application that enables you to run license compliance reports on the BMC products your organization owns. This information can help you determine whether you're under-utilizing or over-utilizing licenses. Although you can run license reports from individual BMC products, the license utility enables you to run reports all at once across many BMC products (See Supported BMC products). The license utility provides specific license report information for each BMC product, so the report contents and formatting vary (see Reports overview for details on each product's report). You can also view BMC product license usage per product deployment in the form of bar charts (see Dashboard overview).
Configuration is quick and easy. You configure the license utility only once for each BMC product by connecting to the servers that host the BMC products (see Configuring product servers). When you're ready to run the reports, the license utility automatically connects to each server and produces a report in CSV format (see Running reports). Each time reports are run, they are stored in the user\Reports folder where the License Utility is installed ($LicenseUsageCollectorHome\licenseusagecollector\user\Reports\timestamp folder).
After you review the reports and email them to the BMC License Compliance team, you can also talk to your BMC sales representative about managing your under-utilized or over-utilized licenses.
You can download the free license utility either from the BMC Communities page or from the BMC Electronic Product Distribution (EPD) site (see Performing the installation).
For a brief overview and demo, see our videos on YouTube. | CommonCrawl |
Algebraic expressions with more than one variable can look scary. To make variable expressions easier to solve, simply simplify! Don't be frightened; I will walk you through the steps.
This expression has two different variables: 3x + 2y. Hmm… How do you simplify this expression? Maybe you're thinking 3x + 2y = 5xy. STOP! This is wrong! You can't simplify this expression anymore. Combining terms with x and terms with y is not possible.
It's almost as wrong as Dr. Evil attempting to create a "sharkuin" by combining a shark with a penguin! Just like sharks and penguins, variables such as x and y are not the same. Repeat to yourself: You can only add and subtract like terms! When adding and subtracting variables, never, ever combine unlike terms. Never, ever!
Let's try out this example. We will go step by step. x + 3 · (10 + y) − 7x − y.
Step 1 is to write out the expression as a sum. See the parentheses? You will need to use the Distributive Property to multiply three by the binomial contained inside the parentheses. Now you have: x + 30 + 3y − 7x − y.
Step 2 is to group the like terms. You can use the Commutative Property to move the terms. Place the x's with x's, the y's with y's, and so on. This will make the next step easier: x − 7x + 3y − y + 30.
Now, Step 3 is to simplify the expression by combining the like terms. For terms that have numbers and variables, remember the coefficient tells us how many there are of each variable: −6x + 2y + 30.
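Putting the whole computation on one line: $x + 3 \cdot (10 + y) - 7x - y = x + 30 + 3y - 7x - y = (1 - 7)x + (3 - 1)y + 30 = -6x + 2y + 30$.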
Step 1: Write the expression as a sum. If there are parentheses, use the Distributive Property.
Step 2: Move the terms around to group the like terms by using the Commutative Property.
And lastly, Step 3, simplify the expression by combining the like terms.
Most important: Never, ever try to combine unlike terms. Here is your reminder: Don't create sharkuins! Don't make the same mistake as Dr. Evil. He had to learn the hard way.
Before evaluating expressions or solving equations, your goal is to always simplify as much as possible. To do this, you should follow some simple steps, especially where variables are concerned. You should know how to use the Distributive Property as well as the rules for combining variables.
If you have complex expressions or equations, you first look for addition or subtraction inside the parentheses. Then, you can simply follow the rest of the order of operations (PEMDAS, BEDMAS, BODMAS, BIDMAS). Afterwards, you group like terms before combining like terms.
In case you were wondering what like terms and unlike terms are and how to group and combine like and unlike terms when variables are involved, this video will enlighten you. If you had problems with what to combine, always keep in mind, that different variables should be treated like different species in real life. That being said, you will master simplifying variable expressions in no time, promised.
Don't try to sneak past this topic in maths if you have difficulties; simplifying variable expressions is one of the most important issues for all your future math topics and in our video it will be explained step by step to you. Relax, lean back, watch and learn.
Would you like to put what you've learned into practice? With the exercises for the video Simplifying Variable Expressions, you can review and practice it.
Determine whether or not the terms can be combined.
In math, you are only allowed to combine like terms.
Variables that are not the same are called unlike terms.
Only two of the choices have terms that you can combine.
We can't combine a shark and a penguin to make a sharkuin or a penguishark.
In math, this is like the concept of only being able to combine like terms. We only can combine terms that have the same combination of numbers or variables.
The addends have different variables, so we can't combine them.
Both of these terms have an $x$, so we can combine the two to become $-6x$.
The addends in this expression are unlike terms because the $30$ is missing an $x$, and therefore they cannot be combined.
We see that we can combine those terms by subtracting the coefficients, which leaves us with $2y$.
The terms have different variables and therefore are not able to be combined.
You know $10+5=15$. Therefore, numbers can be added. Numbers are like terms.
$10\times(3+y)=30+10y$ using the Distributive Property. As you can see, the multiplication of a number and a variable $y$ is allowed.
For example, $x-7x=-6x$ but $3x+2y$ can't be simplified.
Never, ever combine unlike terms.
Phrased a bit differently, this means that only like terms can be combined. Only like terms can be added or subtracted.
Since these are like terms (both terms contain the variable $y$), the coefficients can be subtracted, leaving us with $3y - 1y = 2y$.
These terms each have a different variable. No matter how hard we try, they aren't combinable.
Use the Distributive Property to transform $3\times(10+y)$ to $3\times 10+3y=30+3y$.
For example, $x$ and $3y$ aren't like terms.
We remember that we can only combine (add or subtract) like terms.
Since we have to follow PEMDAS, we have to use the Distributive Property to rewrite the expression as a sum.
The next step is grouping like terms. Here, we use the Commutative Property.
Finally, we combine like terms by adding or subtracting their coefficients.
This gives us the simplified expression, seen above.
Remember: Never, ever combine unlike terms.
Combining like terms means adding or subtracting the coefficients of the variables.
Write the expression as a sum using the Distributive Property.
Analyze the expressions for terms that can be simplified.
Two pocket calculators plus three pocket calculators gives you five pocket calculators.
On the other hand, two pocket calculators and three cell phones cannot be combined.
Like terms have the same combination of variables and the exponents are of the same degree.
When we simplify expressions, we first have to identify the like terms.
Always remember: Never ever combine unlike terms.
Remember, you only can combine like terms.
Combining sharks with penguins is similar to adding or subtracting $3x$ and $2y$. This is impossible.
We can only combine, add or subtract, like terms. First, we locate the like terms and put them into groups. Then, we can add or subtract like terms by adding or subtracting the coefficients.
It's important to note the following: Never, ever combine unlike terms. | CommonCrawl |
We have described the design and implementation of an extensive toolkit of visualization techniques. In this chapter we examine several case studies to show how to use these tools to gain insight into important application areas. These areas are medical imaging, financial visualization, modelling, computational fluid dynamics, finite element analysis, and algorithm visualization. For each case, we briefly describe the problem domain and what information we expect to obtain through visualization. Then we craft an approach to show the results. Many times we will extend the functionality of the Visualization Toolkit with application-specific tools. Finally, we present a sample program and show resulting images.
The visualization design process we go through is similar in each case. First, we read or generate application-specific data and transform it into one of the data representation types in the Visualization Toolkit. Often this first step is the most difficult one because we have to write custom computer code, and decide what form of visualization data to use. In the next step, we choose visualizations for the relevant data within the application. Sometimes this means choosing or creating models corresponding to the physical structure. Examples include spheres for atoms, polygonal surfaces to model physical objects, or computational surfaces to model flow boundaries. Other times we generate more abstract models, such as isosurfaces or glyphs, corresponding to important application data. In the last step we combine the physical components with the abstract components to create a visualization that aids the user in understanding the data.
Radiology is a medical discipline that deals with images of human anatomy. These images come from a variety of medical imaging devices, including X-ray, X-ray Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and ultrasound. Each imaging technique, called an imaging modality, has particular diagnostic strengths. The choice of modality is the job of the radiologist and the referring physician. For the most part, radiologists deal with two-dimensional images, but there are situations when three-dimensional models can assist the radiologist's diagnosis. Radiologists have special training to interpret the two-dimensional images and understand the complex anatomical relationships in these two-dimensional representations. However, in dealing with referring physicians and surgeons, the radiologist sometimes has difficulty communicating these relationships. After all, a surgeon works in three dimensions during the planning and execution of an operation; moreover, surgeons are much more comfortable looking at and working with three-dimensional models.
Figure 12-1. A CT slice through a human head.
This case study deals with CT data. Computed tomography measures the attenuation of X-rays as they pass through the body. A CT image consists of levels of gray that vary from black (for air), to gray (for soft tissue), to white (for bone). Figure 12-1 shows a CT cross section through a head. This slice is taken perpendicular to the spine approximately through the middle of the ears. The gray boundary around the head clearly shows the ears and bridge of the nose. The dark regions on the interior of the slice are the nasal passages and ear canals. The bright areas are bone. This study contains 93 such slices, spaced 1.5 mm apart. Each slice has \(256^2\) pixels spaced 0.8 mm apart with 12 bits of gray level.
Our challenge is to take this gray scale data (over 12 megabytes) and convert it into information that will aid the surgeon. Fortunately, our visualization toolkit has just the right techniques. We will use isocontouring techniques to extract the skin and bone surfaces and display orthogonal cross-sections to put the isosurface in context. From experience we know that a density value of 500 will define the air/skin boundary, and a value of 1150 will define the soft tissue/bone boundary. In VTK terminology, medical imaging slice data is image data. Recall from Chapter 5 that for image data, the topology and geometry of the data is implicitly known, requiring only dimensions, an origin, and the data spacing.
The steps we follow in this case study are common to many three-dimensional medical studies.
For each anatomical feature of interest, create an isosurface.
Transform the models from patient space to world space.
This case study describes in detail how to read input data and extract anatomical features using iso-contouring. Orthogonal planes will be shown using a texture-based technique. Along the way we will also show you how to render the data. We finish with a brief discussion of medical data transformations. The complete source code for the examples shown in this section is available from Medical1.cxx, Medical2.cxx, and Medical3.cxx.
Medical images come in many flavors of file formats. This study is stored as flat files without header information. Each 16-bit pixel is stored with little-endian byte order. Also, as is often the case, each slice is stored in a separate file, with the file suffix being the slice number, of the form prefix.1, prefix.2, and so on. Medical imaging files often have a header of a certain size before the image data starts. The size of the header varies from file format to file format. Finally, another complication is that sometimes one or more bits in each 16-bit pixel are used to mark connectivity between voxels. It is important to be able to mask out these bits as they are read.
VTK provides several image readers, including one that can read raw formats of the type described above: vtkVolume16Reader. To read this data we instantiate the class and set the appropriate instance variables as follows.
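What follows is a hedged sketch rather than the original listing; the file prefix is a placeholder for wherever the slice files live, and the dimensions and spacings are taken from the description of the full-resolution study above:

    #include <vtkNew.h>
    #include <vtkVolume16Reader.h>

    int main()
    {
        vtkNew<vtkVolume16Reader> v16;
        v16->SetDataDimensions(256, 256);        // pixels per slice
        v16->SetDataByteOrderToLittleEndian();   // 16-bit little-endian pixels
        v16->SetFilePrefix("headsq/fullhead");   // placeholder: files fullhead.1, fullhead.2, ...
        v16->SetImageRange(1, 93);               // 93 slices
        v16->SetDataSpacing(0.8, 0.8, 1.5);      // 0.8 mm pixels, 1.5 mm between slices
        v16->Update();
        return 0;
    }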
The FilePrefix and FilePattern instance variables work together to produce the name of files in a series of slices. The FilePattern, which by default is %s.%d, generates the filename to read by performing a C-language sprintf() of the FilePrefix and the current file number into the FilePattern format specifier.
We can choose from three techniques for isosurface visualization: volume rendering, marching cubes, and dividing cubes. We assume that we want to interact with our data at the highest possible speed, so we will not use volume rendering. We prefer marching cubes if we have polygonal rendering hardware available, or if we need to move up close to or inside the extracted surfaces. Even with hardware assisted rendering, we may have to reduce the polygon count to get reasonable rendering speeds. Dividing cubes is appropriate for software rendering. For this application we'll use marching cubes.
For medical volumes, marching cubes generates a large number of triangles. To be practical, we'll do this case study with a reduced resolution dataset. We took the original \(256^2\) data and reduced it to \(64^2\) slices by averaging neighboring pixels twice in the slice plane. We call the resulting dataset quarter since it has 1/4 the resolution of the original data. We adjust the DataSpacing for the reduced resolution dataset to 3.2 mm per pixel. Our first program will generate an isosurface for the skin.
The flow in the program is similar to most VTK applications.
Create a mapper to generate rendering primitives.
Create actors for all mappers.
The filter we have chosen to use is vtkMarchingCubes. We could also use vtkContourFilter since it will automatically create an instance of vtkMarchingCubes as it delegates to the fastest subclass for a particular dataset type. The class vtkPolyDataNormals is used to generate nice surface normals for the data. vtkMarchingCubes can also generate normals, but sometimes better results are achieved when the normals are computed directly from the surface (vtkPolyDataNormals) rather than from the data (vtkMarchingCubes). To complete this example, we take the output from the isosurface generator vtkMarchingCubes and connect it to a mapper and actor via vtkPolyDataMapper and vtkActor. The C++ code follows.
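This is not the book's original listing; the following is a minimal sketch of the pipeline just described, with the file prefix and background color as placeholders and the isosurface value of 500 taken from the text:

    #include <vtkActor.h>
    #include <vtkMarchingCubes.h>
    #include <vtkNew.h>
    #include <vtkOutlineFilter.h>
    #include <vtkPolyDataMapper.h>
    #include <vtkPolyDataNormals.h>
    #include <vtkRenderWindow.h>
    #include <vtkRenderWindowInteractor.h>
    #include <vtkRenderer.h>
    #include <vtkVolume16Reader.h>

    int main()
    {
        // Read the reduced-resolution (quarter) study.
        vtkNew<vtkVolume16Reader> v16;
        v16->SetDataDimensions(64, 64);
        v16->SetDataByteOrderToLittleEndian();
        v16->SetFilePrefix("headsq/quarter");   // placeholder path to the slice files
        v16->SetImageRange(1, 93);
        v16->SetDataSpacing(3.2, 3.2, 1.5);

        // Isosurface at the air/skin boundary.
        vtkNew<vtkMarchingCubes> skinExtractor;
        skinExtractor->SetInputConnection(v16->GetOutputPort());
        skinExtractor->SetValue(0, 500);

        // Recompute surface normals for nicer shading.
        vtkNew<vtkPolyDataNormals> skinNormals;
        skinNormals->SetInputConnection(skinExtractor->GetOutputPort());
        skinNormals->SetFeatureAngle(60.0);

        vtkNew<vtkPolyDataMapper> skinMapper;
        skinMapper->SetInputConnection(skinNormals->GetOutputPort());
        skinMapper->ScalarVisibilityOff();

        vtkNew<vtkActor> skin;
        skin->SetMapper(skinMapper);

        // An outline provides context for the isosurface.
        vtkNew<vtkOutlineFilter> outlineData;
        outlineData->SetInputConnection(v16->GetOutputPort());
        vtkNew<vtkPolyDataMapper> mapOutline;
        mapOutline->SetInputConnection(outlineData->GetOutputPort());
        vtkNew<vtkActor> outline;
        outline->SetMapper(mapOutline);

        vtkNew<vtkRenderer> aRenderer;
        aRenderer->AddActor(outline);
        aRenderer->AddActor(skin);
        aRenderer->SetBackground(0.1, 0.2, 0.4);   // placeholder background color

        vtkNew<vtkRenderWindow> renWin;
        renWin->AddRenderer(aRenderer);
        renWin->SetSize(640, 480);

        vtkNew<vtkRenderWindowInteractor> iren;
        iren->SetRenderWindow(renWin);

        // Initialize the event loop and then start it.
        iren->Initialize();
        renWin->Render();
        iren->Start();
        return 0;
    }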
Figure 12-2. The skin extracted from a CT dataset of the head. See MedicalDemo1.cxx and MedicalDemo1.py.
To provide context for the isosurface an outline is created around the data. An initial view is set up in a window size of \(640 \times 480\) pixels. Since the dolly command moves the camera towards the data, the clipping planes are reset to insure that the isosurface is completely visible. Figure 12-2 shows the resulting image of the patient's skin.
We can improve this visualization in a number of ways. First, we can choose a more appropriate color (and other surface properties) for the skin. We use the vtkProperty method SetDiffuseColor() to set the skin color to a fleshy tone. We also add a specular component to the skin surface. Next, we can add additional isosurfaces corresponding to various anatomical features. Here we choose to extract the bone surface by adding an additional pipeline segment. This consists of the filters vtkMarchingCubes, vtkPolyDataMapper, and vtkActor, just as we did with the skin. Finally, to improve rendering performance on our system, we create triangle strips from the output of the contouring process. This requires adding vtkStripper.
Figure 12-3. Skin and bone isosurfaces. See MedicalDemo2.cxx and MedicalDemo2.py.
Figure 12-3 shows the resulting image, and the following is the C++ code for the pipeline.
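The listing below is a sketch rather than the original code; volumePort stands for the reader's output port from the previous example, and the bone color is illustrative:

    #include <vtkActor.h>
    #include <vtkAlgorithmOutput.h>
    #include <vtkMarchingCubes.h>
    #include <vtkNew.h>
    #include <vtkPolyDataMapper.h>
    #include <vtkProperty.h>
    #include <vtkRenderer.h>
    #include <vtkStripper.h>

    // Additional pipeline segment: extract the bone surface, strip it for
    // faster rendering, and add it to the renderer alongside the skin.
    void AddBoneActor(vtkAlgorithmOutput* volumePort, vtkRenderer* aRenderer)
    {
        vtkNew<vtkMarchingCubes> boneExtractor;
        boneExtractor->SetInputConnection(volumePort);
        boneExtractor->SetValue(0, 1150);  // soft tissue / bone boundary

        vtkNew<vtkStripper> boneStripper;  // triangle strips render faster
        boneStripper->SetInputConnection(boneExtractor->GetOutputPort());

        vtkNew<vtkPolyDataMapper> boneMapper;
        boneMapper->SetInputConnection(boneStripper->GetOutputPort());
        boneMapper->ScalarVisibilityOff();

        vtkNew<vtkActor> bone;
        bone->SetMapper(boneMapper);
        bone->GetProperty()->SetDiffuseColor(1.0, 1.0, 0.9);  // ivory-like bone color

        aRenderer->AddActor(bone);  // the renderer keeps a reference to the actor
    }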
The Visualization Toolkit provides other useful techniques besides isocontouring for exploring volume data. One popular technique used in medical imaging is to view orthogonal slices, or planes, through the data. Because computer graphics hardware supports texture mapping, an approach using texture mapping gives the best result in terms of interactive performance.
We will extract three orthogonal planes corresponding to the axial, sagittal, and coronal cross sections that are familiar to radiologists. The axial plane is perpendicular to the patient's neck, sagittal passes from left to right, and coronal passes from front to back. For illustrative purposes, we render each of these planes with a different color lookup table. For the sagittal plane, we use a gray scale. The coronal and axial planes vary the saturation and hue table, respectively. We combine this with a translucent rendering of the skin (we turn off the bone with the C++ statement bone->VisibilityOff()). The following VTK code creates the three lookup tables that are used in the texture mapping process.
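The following is a sketch rather than the original listing; the table range is a placeholder CT window, and the fixed hue used for the saturation table is illustrative:

    #include <vtkLookupTable.h>

    // Three lookup tables for the three planes: gray scale (sagittal),
    // varying hue (axial), and varying saturation (coronal).
    void BuildPlaneLookupTables(vtkLookupTable* bwLut, vtkLookupTable* hueLut,
                                vtkLookupTable* satLut)
    {
        bwLut->SetTableRange(0, 2000);     // placeholder CT density window
        bwLut->SetSaturationRange(0, 0);   // no color saturation: gray scale
        bwLut->SetHueRange(0, 0);
        bwLut->SetValueRange(0, 1);        // black to white
        bwLut->Build();

        hueLut->SetTableRange(0, 2000);
        hueLut->SetHueRange(0, 1);         // sweep through the hues
        hueLut->SetSaturationRange(1, 1);
        hueLut->SetValueRange(1, 1);
        hueLut->Build();

        satLut->SetTableRange(0, 2000);
        satLut->SetHueRange(0.6, 0.6);     // a fixed hue (placeholder)
        satLut->SetSaturationRange(0, 1);  // gray to fully saturated
        satLut->SetValueRange(1, 1);
        satLut->Build();
    }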
Figure 12-4. Composite image of three planes and translucent skin. See MedicalDemo3.cxx and MedicalDemo3.py.
Figure 12-4 shows the resulting composite image.
In this example, the actor named skin is rendered last because we are using a translucent surface. Recall from "Transparency and Alpha Values" in Chapter 7 that we must order the polygons composing transparent surfaces for proper results. We render the skin last by adding it to aRenderer's actor list last.
We need to make one last point about processing medical imaging data. Medical images can be acquired in a variety of orders that refer to the relationship of consecutive slices to the patient. Radiologists view an image as though they were looking at the patient's feet. This means that on the display, the patient's left appears on the right. For CT there are two standard orders: top to bottom or bottom to top. In a top to bottom acquisition, slice i is closer to the patient's feet than slice i - 1. Why do we worry about this order? It is imperative in medical applications that we retain the left/right relationship. Ignoring the slice acquisition order can result in a flipping of left and right. To correct this, we need to transform either the original dataset or the geometry we have extracted. (See "Exercises" in this Chapter for more information.) Also, you may wish to examine the implementation of the classes vtkVolume16Reader and vtkVolumeReader (the superclass of vtkVolume16Reader). These classes have special methods that deal with transforming image data.
The previous example described how to create models from gray-scale medical imaging data. The techniques for extracting bone and skin models are straightforward compared to the task of generating models of other soft tissue. The reason is that magnetic resonance and, to some extent, computed tomography, generate similar gray-scale values for different tissue types. For example, the liver and kidney in a medical computed tomography volume often have overlapping intensities. Likewise, many different tissues in the brain have overlapping intensities when viewed with magnetic resonance imaging. To deal with these problems researchers apply a process called segmentation to identify different tissues. These processes vary in sophistication from almost completely automatic methods to manual tracing of images. Segmentation continues to be a hot research area. Although the segmentation process itself is beyond the scope of this text, in this case study we show how to process segmented medical data.
For our purposes we assume that someone (or many graduate students) have laboriously labeled each pixel in each slice of a volume of data with a tissue identifier. This identifier is an integer number that describes which tissue class each pixel belongs to. For example, we may be given a series of MRI slices of the knee with tissue numbers defining the meniscus, femur, muscles, and so forth. Figure 12-5 shows two representations of a slice from a volume acquired from a patient's knee. The image on the left is the original MRI slice; the image on the right contains tissue labels for a number of important organs. The bottom image is a composite of the two images.
Notice the difference in the information presented by each representation. The original slice shows gradual changes at organ borders, while the segmented slice has abrupt changes. The images we processed in the previous CT example used marching cubes isocontouring algorithm and an intensity threshold to extract the isosurfaces. The segmented study we present has integer labels that have a somewhat arbitrary numeric value. Our goal in this example is to somehow take the tissue labels and create grayscale slices that we can process with the same techniques we used previously. Another goal is to show how image processing and visualization can work together in an application.
To demonstrate the processing of segmented data we will use a dataset derived from a frog. This data was prepared at Lawrence Berkeley National Laboratories and is included with their permission on the CD-ROM accompanying this book. The data was acquired by physically slicing the frog and photographing the slices. The original segmented data is in the form of tissue masks with one file per tissue. There are 136 slices per tissue and 15 different tissues. Each slice is 470 by 500 pixels. (To accommodate the volume readers we have in VTK, we processed the mask files and combined them all in one file for each slice.) We used integer numbers 1--15 to represent the 15 tissues. Figure 12-6 shows an original slice, a labeled slice, and a composite of the two representations.
Figure 12-6. Photographic slice of frog (upper left), segmented frog (upper right) and composite of photo and segmentation (bottom). The purple color represents the stomach and the kidneys are yellow. See FrogSlice.cxx and FrogSlice.py.
Before we describe the process to go from binary labeled tissues to gray-scale data suitable for isosurface extraction, compare the two images of the frog's brain shown in Figure 12-7. On the left is a surface extracted using a binary labeling of the brain. The right image was created using the visualization pipeline that we will develop in this example.
plus possibly many more parameters to control decimation, smoothing, and so forth. Working in C++, we would have to design the format of the file and write code to interpret the statements. We make the job easier here by using Tcl interpreter. Another decision is to separate the modelling from the rendering. Our script will generate models in a "batch" mode. We will run one VTK Tcl script for each tissue. That script will create a vtk output file containing the polygonal representation of each tissue. Later, we can render the models with a separate script.
Figure 12-8 shows the design of the pipeline. This generic pipeline has been developed over the years in our laboratory and in the Brigham and Women's Hospital Surgical Planning Lab. We find that it produces reasonable models from segmented datasets. Do not be intimidated by the number of filters (twelve in all). Before we developed VTK, we did similar processing with a hodgepodge of programs all written with different interfaces. We used intermediate files to pass data from one filter to the next. The new pipeline, implemented in VTK, is more efficient in time and computing resources.
We start by developing Tcl scripts to process the volume data. In these scripts, we use the convention that user-specified variables are in capital letters. First we show the elements of the pipeline and subsequently show sample files that extract 3D models of the frog's tissues.
Figure 12-7. The frog's brain. Model extracted without smoothing (left) and with smoothing (right). See ViewFrogBoth.cxx and ViewFrogBoth.py.
We assume here that all the data to be processed was acquired with a constant center landmark. In VTK, the origin of the data applies to the lower left of an image volume. In this pipeline, we calculate the origin such that the x,y center of the volume will be (0,0). The DataSpacing describes the size of each pixel and the distance between slices. DataVOI selects a volume of interest (VOI). A VOI lets us select areas of interest, sometimes eliminating extraneous structures like the CT table bed. For the frog, we have written a small C program that reads the tissue label file and finds the volume of interest for each tissue.
The SetTransform() method defines how to arrange the data in memory. Medical images can be acquired in a variety of orders. For example, in CT, the data can be gathered from top to bottom (superior to inferior), or bottom to top (inferior to superior). In addition, MRI data can be acquired from left to right, right to left, front-to-back (anterior to posterior) or back-to-front. This filter transforms triangle vertices such that the resulting models will all "face" the viewer with a view up of (0,-1,0), looking down the positive z axis. Also, proper left-right correspondence will be maintained. That means the patient's left will always be left on the generated models. Look in SliceOrder.tcl to see the permutations and rotations for each order.
All the other parameters are self-explanatory except for the last. In this script, we know that the pipeline will only be executed once. To conserve memory, we invoke the ReleaseDataFlagOn() method. This allows the VTK pipeline to release data once it has been processed by a filter. For large medical datasets, this can mean the difference between being able to process a dataset or not.
Figure 12-8. The segmented volume to triangle pipeline. Volume passes through image pipeline before isosurface extraction..
Some segmentation techniques, especially those that are automatic, may generate islands of misclassified voxels. This filter looks for connected pixels with the ISLAND_REPLACE label, and if the number of connected pixels is less than ISLAND_AREA, it replaces them with the label TISSUE. Note that this filter is only executed if ISLAND_REPLACE is positive.
The rest of the pipeline requires gray-scale data. To convert the volume that now contains integer tissue labels to a gray-scale volume containing only one tissue, we use the threshold filter to set all pixels with the value TISSUE (the tissue of choice for this pipeline) to 255 and all other pixels to 0. The choice of 255 is somewhat arbitrary.
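The book's pipeline element is written in Tcl; here is a rough C++ equivalent of the thresholding step it describes (the 255/0 values come from the text, the function name is made up):

    #include <vtkAlgorithmOutput.h>
    #include <vtkImageThreshold.h>

    // Select one tissue from the labeled volume: pixels equal to tissueLabel
    // become 255, everything else becomes 0.
    void SelectTissue(vtkImageThreshold* selectTissue,
                      vtkAlgorithmOutput* labeledVolume, int tissueLabel)
    {
        selectTissue->SetInputConnection(labeledVolume);
        selectTissue->ThresholdBetween(tissueLabel, tissueLabel);
        selectTissue->ReplaceInOn();
        selectTissue->SetInValue(255);  // the tissue of interest
        selectTissue->ReplaceOutOn();
        selectTissue->SetOutValue(0);   // everything else
    }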
Lower resolution volumes produce fewer polygons. For experimentation we often reduce the resolution of the data with this filter. However, details can be lost during this process. Averaging creates new pixels in the resampled volume by averaging neighboring pixels. If averaging is turned off, every SAMPLE_RATE pixel will be passed through to the output.
Now we can process the volume with marching cubes just as though we had obtained gray-scale data from a scanner. We added a few more bells and whistles to the pipeline. The filter runs faster if we turn off gradient and normal calculations. Marching cubes normally calculates vertex normals from the gradient of the volume data. In our pipeline, we have concocted a gray-scale representation and will subsequently decimate the triangle mesh and smooth the resulting vertices. This processing invalidates the normals that are calculated by marching cubes.
There are often many more triangles generated by the isosurfacing algorithm than we need for rendering. Here we reduce the triangle count by eliminating triangle vertices that lie within a user-specified distance to the plane formed by neighboring vertices. We preserve any edges of triangles that are considered "features."
This filter uses Laplacian smoothing described in "Mesh Smoothing" in Chapter 9 to adjust triangle vertices as an "average" of neighboring vertices. Typically, the movement will be less than a voxel.
Of course we have already smoothed the image data with a Gaussian kernel so this step may not give much improvement; however, models that are heavily decimated can sometimes be improved with additional polygonal smoothing.
To generate smooth shaded models during rendering, we need normals at each vertex. As in decimation, sharp edges can be retained by setting the feature angle.
Triangle strips are a compact representation of large numbers of triangles. This filter processes our independent triangles before we write them to a file.
Finally, the last component of the pipeline writes the triangle strips to a file.
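The individual pipeline elements are Tcl in the book's scripts; the following C++ sketch strings together the tail of the pipeline just described (decimate, smooth, normals, strip, write), with illustrative parameter values:

    #include <vtkAlgorithmOutput.h>
    #include <vtkDecimatePro.h>
    #include <vtkNew.h>
    #include <vtkPolyDataNormals.h>
    #include <vtkPolyDataWriter.h>
    #include <vtkSmoothPolyDataFilter.h>
    #include <vtkStripper.h>

    // Sketch of the tail of the segmented-volume pipeline; 'isoSurfacePort'
    // is the marching-cubes output, and the numeric parameters (reduction,
    // iterations, feature angle) are placeholders.
    void WriteTissueModel(vtkAlgorithmOutput* isoSurfacePort, const char* fileName)
    {
        vtkNew<vtkDecimatePro> decimator;                 // reduce the triangle count
        decimator->SetInputConnection(isoSurfacePort);
        decimator->SetTargetReduction(0.9);
        decimator->PreserveTopologyOn();
        decimator->SetFeatureAngle(60.0);

        vtkNew<vtkSmoothPolyDataFilter> smoother;         // Laplacian smoothing of the vertices
        smoother->SetInputConnection(decimator->GetOutputPort());
        smoother->SetNumberOfIterations(10);
        smoother->SetRelaxationFactor(0.1);
        smoother->SetFeatureAngle(60.0);

        vtkNew<vtkPolyDataNormals> normals;               // vertex normals for smooth shading
        normals->SetInputConnection(smoother->GetOutputPort());
        normals->SetFeatureAngle(60.0);

        vtkNew<vtkStripper> stripper;                     // compact triangle strips
        stripper->SetInputConnection(normals->GetOutputPort());

        vtkNew<vtkPolyDataWriter> writer;                 // write the model to a .vtk file
        writer->SetInputConnection(stripper->GetOutputPort());
        writer->SetFileName(fileName);
        writer->Write();
    }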
If you have gotten this far in the book, you know that the Visualization Toolkit uses a demand-driven pipeline architecture and so far we have not demanded anything. We have just specified the pipeline topology and the parameters for each pipeline element.
Updating the last element of the pipeline causes it to execute. In practice we do a bit more than just Update the last element of the pipeline. We explicitly Update each element so that we can time the individual steps. The script frogSegmentation.tcl contains the more sophisticated approach.
set SAMPLE_RATE "1 1 1"
set VOI "167 297 154 304 $START_SLICE $END_SLICE"
Parameters in frog.tcl can also be overridden. For example, skeleton.tcl overrides the standard deviation for the Gaussian filter.
set VOI "23 479 8 473 0 $ZMAX"
set GAUSSIAN_STANDARD_DEVIATION "1.5 1.5 1"
Note that both of these examples specify a volume of interest. This improves performance of the imaging and visualization algorithms by eliminating empty space.
Another script, marchingFrog.tcl, uses similar parameters but processes the original gray-scale volume rather than the segmented volume. This script is used in skin.tcl to extract the skin. The file marchingFrog.tcl does not have the island removal or threshold pipeline elements since the data already has gray-scale information.
Once the models are generated with the process just outlined, they can be rendered using the following tcl script called ViewFrog.tcl. First we create a Tcl procedure to automate the creation of actors from the model files. All the pipeline elements are named consistently with the name of the part followed by the name of the pipeline element. This makes it easy for the user to identify each object in more sophisticated user interfaces.
The rest of the script defines a standard view.
(a) All frog parts and translucent skin.
(b) The complete frog without skin.
(c) No skin or skeleton.
Figure 12-9. Various frog images. (a) See ViewFrogSkinAndTissue.cxx and ViewFrogSkinAndTissue.py.; (b). See ViewFrog.cxx and ViewFrog.py.; (c) See ViewFrogA.cxx and ViewFrogA.py.
Figure 12-9 shows three views of the frog.
This lengthy example shows the power of a comprehensive visualization system like VTK.
We mixed image processing and computer graphics algorithms to process data created by an external segmentation process.
We developed a generic approach that allows users to control the elements of the pipeline with a familiar scripting language, tcl.
We separated the task into a "batch" portion and an "interactive" portion.
The folks at Lawrence Berkeley National Laboratory have an impressive Web site that features the frog used in this example. The site describes how the frog data was obtained and also permits users to create mpeg movies of the frog. There are also other datasets available. Further details on "The Whole Frog Project" can be found at http://www-itg.lbl.gov/Frog. Also, the Stanford University Medical Media and Information Technologies (SUMMIT) group has on-going work using the Berkeley frog. They are early VTK users. Enjoy their Virtual Creatures project at: http://summit.stanford.edu/creatures.
The application of 3D visualization techniques to financial data is relatively new. Historically, financial data has been represented using 2D plotting techniques such as line plots, scatter plots, bar charts, and pie charts. These techniques are especially well suited for the display of price and volume information for stocks, bonds, and mutual funds. Three-dimensional techniques are becoming more important due to the increased volume of information in recent years, and 3D graphics and visualization techniques are becoming interactive. Interactive rates mean that visualization can be applied to the day-to-day processing of data. Our belief is that this will allow deeper understanding of today's complex financial data and enable more timely decisions.
In this example we go through the process of obtaining data, converting it to a form that we can use, and then using visualization techniques to view it. Some of the external software tools used in this example may be unfamiliar to you. This should not be a large concern. We have simply chosen the tools with which we are familiar. Where we have used an Awk script, you might choose to write a small C program to do the same thing. The value of the example lies in illustrating the high-level process of solving a visualization problem.
Each line stores the data for one day of trading. The first number is the date, stored as the last two digits of the year, followed by a two-digit month and finally the day of the month. The next three values represent the high, low, and closing price of the stock for that day. The next value is the volume of trading in thousands of shares. The final value is the volume of trading in millions of dollars.
We used an Awk script to convert the original data format into a VTK data file. (See the VTK User's Guide for information on VTK file formats; or see VTK File Formats.) This conversion could be done using many other approaches, such as writing a C program or a Tcl script.
The above Awk script performs the conversion. Its first line outputs the required header information indicating that the file is a VTK data file containing polygonal data. It also includes a comment indicating that the data represents stock values. There are a few different VTK data formats that we could have selected. It is up to you to decide which format best suits the data you are visualizing. We have judged the polygonal format ( vtkPolyData ) as best suited for this particular stock visualization.
The next line of the Awk script creates a variable named count that keeps track of how many days worth of information is in the file. This is equivalent to the number of lines in the original data file.
The next fourteen lines convert the six digit date into a more useful format, since the original format has a number of problems. If we were to blindly use the original format and plot the data using the date as the independent variable, there would be large gaps in our plot. For example, 931231 is the last day of 1993 and 940101 is the first day of 1994. Chronologically, these two dates are sequential, but mathematically there are (940101--931231=) 8870 values between them. A simple solution would be to use the line number as our independent variable. This would work as long as we knew that every trading day was recorded in the data file. It would not properly handle the situation where the market was open, but for some reason data was not recorded. A better solution is to convert the dates into numerically ordered days. The preceding Awk script sets January 1, 1993, as day number one, and then numbers all the following days from there. At the end of these 14 lines the variable, d, will contain the resulting value.
The next line in our Awk script stores the converted date, closing price, and dollar volume into arrays indexed by the line number stored in the variable count. Once all the lines have been read and stored into the arrays, we write out the rest of the VTK data file. We have selected the date as our independent variable and x coordinate. The closing price we store as the y coordinate, and the z coordinate we set to zero. After indicating the number and type of points to be stored, the Awk script loops through all the points and writes them out to the VTK data file. It then writes out the line connectivity list. In this case we just connect one point to the next to form a polyline for each stock. Finally, we write out the volume information as scalar data associated with the points. Portions of the resulting VTK data file are shown below.
Now that we have generated the VTK data file, we can start the process of creating a visualization for the stock data. To do this, we wrote a Tcl script to be used with the Tcl-based VTK executable. At a high level the script reads in the stock data, sends it through a tube filter, creates a label for it, and then creates an outline around the resulting dataset. Ideally, we would like to display multiple stocks in the same window. To facilitate this, we designed the Tcl script to use a procedure to perform operations on a per stock basis. The resulting script is listed below.
The first part of this script consists of the standard procedure for renderer and interactor creation that can be found in almost all of the VTK Tcl scripts. The next section creates the objects necessary for drawing an outline around all of the stock data. A vtkAppendPolyData filter is used to append all of the stock data together. This is then sent through a vtkOutlineFilter to create a bounding box around the data. A mapper and actor are created to display the result. In the next part of this script, we define the procedure to add stock data to this visualization. The procedure takes five arguments: the name of the stock, the label we want displayed, and the x, y, z coordinates defining where to position the label. The first line of the procedure indicates that the variable ren1 should be visible to this procedure. By default the procedure can only access its own local variables. Next, we create the label using a vtkTextSource, vtkPolyDataMapper, and vtkFollower. The names of these objects are all prepended with the variable $prefix so that the instance names will be unique. An instance of vtkFollower is used instead of the usual vtkActor, because we always want the text to be right-side up and facing the camera. The vtkFollower class provides this functionality. The remaining lines position and scale the label appropriately. We set the origin of the label to the center of its data. This ensures that the follower will rotate about its center point.
The next group of lines creates the required objects to read in the data, pass it through a tube filter and a transform filter, and finally display the result. The tube filter uses the scalar data (stock volume in this example) to determine the radius of the tube. The mapper also uses the scalar data to determine the coloring of the tube. The transform filter uses a transform object to set the stock's position based on the value of the variable zpos. For each stock, we will increment zpos by 10, effectively shifting the next stock over 10 units from the current stock. This prevents the stocks from being stacked on top of each other. We also use the transform to compress the x-axis to make the data easier to view. Next, we add this stock as an input to the append filter and add the actors and followers to the renderer. The last line of the procedure sets the follower's camera to be the active camera of the renderer.
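The script itself is written in Tcl; for orientation, here is a rough C++ sketch of the per-stock portion described above (radius, scalar range, and the compression factor are placeholders, and the label creation with vtkTextSource and vtkFollower is omitted):

    #include <vtkActor.h>
    #include <vtkNew.h>
    #include <vtkPolyDataMapper.h>
    #include <vtkPolyDataReader.h>
    #include <vtkRenderer.h>
    #include <vtkTransform.h>
    #include <vtkTransformPolyDataFilter.h>
    #include <vtkTubeFilter.h>

    // One stock: read the polyline, vary the tube radius with the volume
    // scalars, shift it along z so the stocks do not overlap, and add it.
    void AddStock(vtkRenderer* ren, const char* fileName, double zpos)
    {
        vtkNew<vtkPolyDataReader> reader;
        reader->SetFileName(fileName);

        vtkNew<vtkTubeFilter> tube;
        tube->SetInputConnection(reader->GetOutputPort());
        tube->SetNumberOfSides(8);
        tube->SetRadius(0.5);                         // base radius (placeholder)
        tube->SetVaryRadiusToVaryRadiusByScalar();    // radius follows trading volume

        vtkNew<vtkTransform> transform;
        transform->Translate(0.0, 0.0, zpos);         // each stock shifted by 10 units
        transform->Scale(0.15, 1.0, 1.0);             // compress the time axis (placeholder)

        vtkNew<vtkTransformPolyDataFilter> transformFilter;
        transformFilter->SetInputConnection(tube->GetOutputPort());
        transformFilter->SetTransform(transform);

        vtkNew<vtkPolyDataMapper> mapper;
        mapper->SetInputConnection(transformFilter->GetOutputPort());
        mapper->SetScalarRange(0, 8000);              // volume range (placeholder)

        vtkNew<vtkActor> actor;
        actor->SetMapper(mapper);
        ren->AddActor(actor);
    }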
Back in the main body of the Tcl script, we invoke the AddStock procedure four times with four different stocks. Finally, we add the outline actor and customize the renderer and camera to produce a nice initial view. Two different views of the result are displayed in Figure 12-10. The top image shows a history of stock closing prices for our four stocks. The color and width of these lines correspond to the volume of the stock on that day. The lower image more clearly illustrates the changes in stock volume by looking at the data from above.
Figure 12-10 Two views from the stock visualization script. The top shows closing price over time; the bottom shows volume over time ( stocks.tcl ).
A legitimate complaint with Figure 12-10 is that the changing width of the tube makes it more difficult to see the true shape of the price versus time curve. We can solve this problem by using a ribbon filter followed by a linear extrusion filter, instead of the tube filter. The ribbon filter will create a ribbon whose width varies in proportion to the scalar value of the data. We then use the linear extrusion filter to extrude this ribbon along the y-axis so that it has a constant thickness. The resulting views are shown in Figure 12-11.
Figure 12-11. Two more views of the stock case study. Here the tube filter has been replaced by a ribbon filter followed with a linear extrusion filter. See Stocks.cxx and Stocks.py.
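A hedged C++ sketch of the ribbon-plus-extrusion variation just described (the book's version is a Tcl script; the width factor and scale factor here are illustrative):

    #include <vtkAlgorithmOutput.h>
    #include <vtkLinearExtrusionFilter.h>
    #include <vtkNew.h>
    #include <vtkPolyDataMapper.h>
    #include <vtkRibbonFilter.h>

    // Replace the tube filter with a ribbon whose width follows the volume
    // scalars, then extrude the ribbon so it has a constant thickness.
    void UseRibbonInsteadOfTube(vtkAlgorithmOutput* stockPolyline,
                                vtkPolyDataMapper* mapper)
    {
        vtkNew<vtkRibbonFilter> ribbon;
        ribbon->SetInputConnection(stockPolyline);
        ribbon->VaryWidthOn();                       // width proportional to the scalar (volume)
        ribbon->SetWidthFactor(5.0);                 // placeholder scale factor

        vtkNew<vtkLinearExtrusionFilter> extrude;    // give the ribbon a constant thickness
        extrude->SetInputConnection(ribbon->GetOutputPort());
        extrude->SetExtrusionTypeToVectorExtrusion();
        extrude->SetVector(0, 1, 0);                 // extrude along the y axis
        extrude->SetScaleFactor(0.7);                // placeholder thickness

        mapper->SetInputConnection(extrude->GetOutputPort());
    }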
The Visualization Toolkit has some useful geometric modelling capabilities. One of the most powerful features is implicit modelling. In this example we show how to use polygonal descriptions of objects and create "blobby" models of them using the implicit modelling objects in VTK. This example generates a logo for the Visualization Toolkit from polygonal representations of the letters v, t, and k.
Figure 12-12. The visualization pipeline for the VTK blobby logo.
We create three separate visualization pipelines, one for each letter. Figure 12-12 shows the visualization pipeline. As is common in VTK applications, we design a pipeline and fill in the details of the instance variables just before we render. We pass the letters through a vtkTransformPolyDataFilter to position them relative to each other. Then we combine all of the polygons from the transformed letters into one polygon dataset using the vtkAppendPolyData filter. The vtkImplicitModeller creates a volume dataset of dimension \(64^3\) with each voxel containing a scalar value that is the distance to the nearest polygon. Recall from "Implicit Modelling" in Chapter 6 that the implicit modelling algorithm lets us specify the region of influence of each polygon. Here we specify this using the SetMaximumDistance() method of the vtkImplicitModeller. By restricting the region of influence, we can significantly improve performance of the implicit modelling algorithm. Then we use vtkContourFilter to extract an isosurface that approximates a distance of 1.0 from each polygon. We create two actors: one for the blobby logo and one for the original polygon letters. Notice that both actors share the polygon data created by vtkAppendPolyData. Because of the nature of the VTK visualization pipeline (see "Implicit Execution" in Chapter 4), the appended data will only be created once by the portion of the pipeline that is executed first. As a final touch, we move the polygonal logo in front of the blobby logo. Now we will go through the example in detail.
First, we read the geometry files that contain polygonal models of each letter in the logo. The data is in VTK polygonal format, so we use vtkPolyDataReader .
We want to transform each letter into its appropriate location and orientation within the logo. We create the transform filters here, but defer specifying the location and orientation until later in the program.
We collect all of the transformed letters into one set of polygons by using an instance of the class vtkAppendPolyData.
Since the geometry for each letter did not have surface normals, we add them here. We use vtkPolyDataNormals. Then we complete this portion of the pipeline by creating a mapper and an actor.
We create the blobby logo with the implicit modeller, and then extract the logo with vtkContourFilter. The pipeline is completed by creating a mapper and an actor.
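Continuing the sketch above, the normals, the implicit model, the isosurface, and the two actors look roughly like this. The 64 x 64 x 64 sample dimensions and the contour value of 1.0 follow the text; the maximum-distance value is an assumption.

    # Smooth normals plus a mapper/actor for the polygonal letters
    normals = vtk.vtkPolyDataNormals()
    normals.SetInputConnection(append.GetOutputPort())

    letterMapper = vtk.vtkPolyDataMapper()
    letterMapper.SetInputConnection(normals.GetOutputPort())
    letterActor = vtk.vtkActor()
    letterActor.SetMapper(letterMapper)

    # Blobby version: sample the distance-to-polygon field on a 64x64x64 volume,
    # then extract the isosurface at a distance of 1.0
    blobby = vtk.vtkImplicitModeller()
    blobby.SetInputConnection(append.GetOutputPort())
    blobby.SetSampleDimensions(64, 64, 64)
    blobby.SetMaximumDistance(0.1)        # region of influence; value assumed

    contour = vtk.vtkContourFilter()
    contour.SetInputConnection(blobby.GetOutputPort())
    contour.SetValue(0, 1.0)

    blobbyMapper = vtk.vtkPolyDataMapper()
    blobbyMapper.SetInputConnection(contour.GetOutputPort())
    blobbyMapper.ScalarVisibilityOff()
    blobbyActor = vtk.vtkActor()
    blobbyActor.SetMapper(blobbyMapper)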
To improve the look of our resulting visualization, we define a couple of organic colors. Softer colors show up better on some electronic media (e.g., VHS video tape) and are pleasing to the eye.
These colors are then assigned to the appropriate actors.
Figure 12-13. A logo created with vtkImplicitModeller. See BlobbyLogo.cxx and BlobbyLogo.py.
And finally, we position the letters in the logo and move the polygonal logo out in front of the blobby logo by modifying the actor's position.
An image made from the techniques described in this section is shown in Figure 12-13. Note that the image on the left has been augmented with a texture map.
Computational Fluid Dynamics (CFD) visualization poses a challenge to any visualization toolkit. CFD studies the flow of fluids in and around complex structures. Often, large amounts of supercomputer time are used to derive scalar and vector data in the flow field. Since CFD computations produce multiple scalar and vector data types, we will apply many of the tools described in this book. The challenge is to combine multiple representations into meaningful visualizations that extract information without overwhelming the user.
Display the computational grid. The analyst carefully constructed the finite difference grid to have a higher density in regions where rapid changes occur in the flow variables. We will display the grid in wireframe so we can see the computational cells.
Display the scalar fields on the computational grid. This will give us an overview of where the scalar data is changing. We will experiment with the extents of the grid extraction to focus on interesting areas.
Explore the vector field by seeding streamlines with a spherical cloud of points. Move the sphere through areas of rapidly changing velocity.
Try using the computational grid itself as seeds for the streamlines. Of course we will have to restrict the extent of the grid you use for this purpose. Using the grid, we will be able to place more seeds in regions where the analyst expected more action.
For this case study, we use a dataset from NASA called the LOx Post. It simulates the flow of liquid oxygen across a flat plate with a cylindrical post perpendicular to the flow [Rogers86]. This analysis models the flow in a rocket engine. The post promotes mixing of the liquid oxygen.
We start by exploring the scalar and vector fields in the data. By calculating the magnitude of the velocity vectors, we derive a scalar field. This study has a particularly interesting vector field around the post. We seed the field with multiple starting points (using points arranged along a curve, referred to as a rake) and experiment with parameters for the streamlines. Streampolygons are particularly appropriate here and do a nice job of showing the flow downstream from the post. We animate the streamline creation by moving the seeding line or rake back and forth behind the post.
Following our own advice, we first display the computational grid. The following Tcl code produced the left image of Figure 12-14.
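The Tcl listing itself is not reproduced here; the following Python-bindings sketch shows the same pipeline. The PLOT3D file names, the function numbers, and the two grid extents are assumptions, not the exact values from the book's script.

    import vtk

    # Read the PLOT3D structured-grid solution for the LOx post
    pl3d = vtk.vtkMultiBlockPLOT3DReader()
    pl3d.SetXYZFileName("postxyz.bin")      # assumed file names
    pl3d.SetQFileName("postq.bin")
    pl3d.SetScalarFunctionNumber(100)       # density; assumed choice of scalar
    pl3d.SetVectorFunctionNumber(200)       # velocity
    pl3d.Update()
    grid = pl3d.GetOutput().GetBlock(0)

    # Extract two computational planes and display them as wireframe
    actors = []
    for extent in ((0, 14, 29, 29, 0, 37), (0, 14, 0, 0, 0, 37)):   # assumed extents
        plane = vtk.vtkStructuredGridGeometryFilter()
        plane.SetInputData(grid)
        plane.SetExtent(*extent)
        mapper = vtk.vtkPolyDataMapper()
        mapper.SetInputConnection(plane.GetOutputPort())
        mapper.ScalarVisibilityOff()
        actor = vtk.vtkActor()
        actor.SetMapper(mapper)
        actor.GetProperty().SetRepresentationToWireframe()
        actors.append(actor)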
Figure 12-14. Portion of computational grid for the LOx post. See LOxGrid.cxx and LOxGrid.py.
To display the scalar field using color mapping, we must change the actor's representation from wireframe to surface, turn on scalar visibility for each vtkPolyDataMapper, set each mapper's scalar range, and render again, producing the right image of Figure 12-14.
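Continuing the sketch above, the switch to a color-mapped surface only touches the actors and their mappers:

    # Change from wireframe to surface and turn on color mapping of the scalar field
    lo, hi = grid.GetPointData().GetScalars().GetRange()
    for actor in actors:
        actor.GetProperty().SetRepresentationToSurface()
        actor.GetMapper().ScalarVisibilityOn()
        actor.GetMapper().SetScalarRange(lo, hi)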
Now, we explore the vector field using vtkPointSource. Recall that this object generates a random cloud of points around a spherical center point. We will use this cloud of points to generate stream-lines. We place the center of the cloud near the post since this is where the velocity seems to be changing most rapidly. During this exploration, we use streamlines rather than streamtubes for reasons of efficiency. The Tcl code is as follows.
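In place of the Tcl listing, here is a Python-bindings sketch using the modern vtkStreamTracer (the book's script uses the older streamline classes); the seed-sphere center, radius, and propagation length are assumptions meant to be tuned interactively.

    # Spherical cloud of seed points placed near the post
    seeds = vtk.vtkPointSource()
    seeds.SetCenter(0.9, 2.5, 30.0)         # assumed position near the post
    seeds.SetRadius(0.5)
    seeds.SetNumberOfPoints(25)

    streamer = vtk.vtkStreamTracer()
    streamer.SetInputData(grid)
    streamer.SetSourceConnection(seeds.GetOutputPort())
    streamer.SetMaximumPropagation(50)
    streamer.SetIntegrationDirectionToBoth()
    streamer.SetIntegratorTypeToRungeKutta4()

    streamMapper = vtk.vtkPolyDataMapper()
    streamMapper.SetInputConnection(streamer.GetOutputPort())
    streamActor = vtk.vtkActor()
    streamActor.SetMapper(streamMapper)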
Figure 12-15. Streamlines seeded with spherical cloud of points. Four separate cloud positions are shown. See LOxSeeds.cxx and LOxSeeds.py.
Figure 12-15 shows streamlines seeded from four locations along the post. Notice how the structure of the flow begins to emerge as the starting positions for the streamlines are moved up and down in front of the post. This is particularly true if we do this interactively; the mind assembles the behavior of the streamlines into a global understanding of the flow field.
Figure 12-16. Streamtubes created by using the computational grid just in front of the post as a source for seeds. See LOx.cxx and LOx.py.
For a final example, we use the computational grid to seed streamlines and then generate streamtubes as is shown in Figure 12-16. A nice feature of this approach is that we generate more streamlines in regions where the analyst constructed a denser grid. The only change we need to make is to replace the rake from the sphere source with a portion of the grid geometry.
There are a number of other methods we could use to visualize this data. A 3D widget such as the vtkLineWidget could be used to seed the streamlines interactively (see "3D Widgets and User Interaction" in Chapter 7). As we saw in "Point Probe" in Chapter 8, probing the data for numerical values is a valuable technique. In particular, if the probe is a line we can use it in combination with vtkXYPlotActor to graph the variation of data value along the line. Another useful visualization would be to identify regions of vorticity. We could use Equation 9-12 in conjunction with an isocontouring algorithm (e.g., vtkContourFilter) to create isosurfaces of large helical density.
Finite element analysis is a widely used numerical technique for finding solutions of partial differential equations. Applications of finite element analysis include linear and nonlinear structural, thermal, dynamic, electromagnetic, and flow analysis. In this application we will visualize the results of a blow molding process.
In the extrusion blow molding process, a material is extruded through an annular die to form a hollow cylinder. This cylinder is called a parison. Two mold halves are then closed on the parison, while at the same time the parison is inflated with air. Some of the parison material remains within the mold while some becomes waste material. The material is typically a polymer plastic softened with heat, but blow molding has been used to form metal parts. Plastic bottles are often manufactured using a blow molding process.
Designing the parison die and molds is not easy. Improper design results in large variations in the wall thickness. In some cases the part may fail in thin-walled regions. As a result, analysis tools based on finite element techniques have been developed to assist in the design of molds and dies.
The results of one such analysis are shown in Figure 12-17. The polymer was molded using an isothermal, nonlinear-elastic, incompressible (rubber-like) material. Triangular membrane finite elements were used to model the parison, while a combination of triangular and quadrilateral finite elements were used to model the mold. The mold surface is assumed to be rigid, and the parison is assumed to attach to the mold upon contact. Thus the thinning of the parison is controlled by its stretching during inflation and the sequence in which it contacts the mold.
Figure 12-17 illustrates 10 steps of one analysis. The color of the parison indicates its thickness. Using a rainbow scale, red areas are thinnest while blue regions are thickest. Our visualization shows clearly one problem with the analysis technique we are using. Note that while the nodes (i.e., points) of the finite element mesh are prevented from passing through the mold, the interior of the triangular elements are not. This is apparent from the occlusion of the mold wireframe by the parison mesh.
Figure 12-17. Ten frames from a blow molding finite element analysis. Mold halves (shown in wireframe) are closed around a parison as the parison is inflated. Coloring indicates thickness-red areas are thinner than blue. See Blow.cxx and Blow.py.
To generate these images, we used a Tcl script shown in Figure 12-18 and Figure 12-19. The input data is in VTK format, so a vtkUnstructuredGridReader was used as a source object. The mesh displacement is accomplished using an instance of vtkWarpVector. At this point the pipeline splits. We wish to treat the mold and parison differently (different properties such as wireframe versus surface), but the data for both mold and parison is combined. Fortunately, we can easily separate the data using two instances of class vtkConnectivityFilter. One filter extracts the parison, while the other extracts both parts of the mold. Finally, to achieve a smooth surface appearance on the parison, we use a vtkPolyDataNormals filter. In order to use this filter, we have to convert the data type from vtkUnstructuredGrid (output of vtkConnectivityFilter ) to type vtkPolyData. The filter vtkGeometryFilter does this nicely.
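A Python-bindings sketch of this split pipeline for a single time step; the file name, the array names (thickness9/displacement9), and the region id used to pick out the parison are assumptions.

    import vtk

    reader = vtk.vtkUnstructuredGridReader()
    reader.SetFileName("blow.vtk")               # assumed file name
    reader.SetScalarsName("thickness9")          # assumed array names for one time step
    reader.SetVectorsName("displacement9")

    # Displace the mesh by the computed displacement vectors
    warp = vtk.vtkWarpVector()
    warp.SetInputConnection(reader.GetOutputPort())

    # Pull the parison out of the combined dataset by connectivity
    parison = vtk.vtkConnectivityFilter()
    parison.SetInputConnection(warp.GetOutputPort())
    parison.SetExtractionModeToSpecifiedRegions()
    parison.AddSpecifiedRegion(2)                # assumed region id of the parison

    # Convert to polydata so vtkPolyDataNormals can smooth the surface
    geometry = vtk.vtkGeometryFilter()
    geometry.SetInputConnection(parison.GetOutputPort())
    normals = vtk.vtkPolyDataNormals()
    normals.SetInputConnection(geometry.GetOutputPort())
    normals.SetFeatureAngle(60)

    parisonMapper = vtk.vtkPolyDataMapper()
    parisonMapper.SetInputConnection(normals.GetOutputPort())
    parisonActor = vtk.vtkActor()
    parisonActor.SetMapper(parisonMapper)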
Visualization can be used to display algorithms and data structures. Representing this information often requires creative work on the part of the application programmer. For example, Robertson et al. [Robertson91] have shown 3D techniques for visualizing directory structures and navigating through them. Their approach involves building three dimensional models (the so-called "cone trees") to represent files, directories, and associations between files and directories. Similar approaches can be used to visualize stacks, queues, linked lists, trees, and other data structures.
In this example we will visualize the operation of the recursive Towers of Hanoi puzzle. In this puzzle there are three pegs ( Figure 12-20 ). In the initial position there are one or more disks (or pucks) of varying diameter on the pegs. The disks are sorted according to disk diameter, so that the largest disk is on the bottom, followed by the next largest, and so on. The goal of the puzzle is to move the disks from one peg to another, moving the disks one at a time, and never placing a larger disk on top of a smaller disk.
Figure 12-19. Tcl script to generate blow molding image (Part two of two).
Figure 12-20. Towers of Hanoi. (a) Initial configuration. (b) Intermediate configuration. (c) Final configuration. (a) See HanoiInitial.cxx and HanoiInitial.py.; (b). See HanoiIntermediate.cxx and HanoiIntermediate.py.; (c) See Hanoi.cxx and Hanoi.py.
Figure 12-21. C++ code for recursive solution of Towers of Hanoi.
Figure 12-22. Function to move disks from one peg to another in the Towers of Hanoi example. The resulting motion is in small steps with an additional flip of the disk.
The classical solution to this puzzle is based on a divide-and-conquer approach [AhoHopUll83]. The problem of moving n disks from the initial peg to the second peg can be thought of as solving two subproblems of size n-1. First move n-1 disks from the initial peg to the third peg. Then move the nth disk to the second peg. Finally, move the n-1 disks on the third peg back to the second peg.
The solution to this problem can be elegantly implemented using recursion. We have shown portions of the C++ code in Figure 12-21 and Figure 12-22. In the first part of the solution (which is not shown in Figure 12-21 ) the table top, pegs, and disks are created using the two classes vtkPlaneSource and vtkCylinderSource. The function Hanoi() is then called to begin the recursion. The routine MovePuck() is responsible for moving a disk from one peg to another. It has been jazzed up to move the disk in small, user-specified increments, and to flip the disc over as it moves from one peg to the next. This gives a pleasing visual effect and adds the element of fun to the visualization.
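The recursion itself is only a few lines. The book's version is C++ (Figure 12-21 and Figure 12-22); the sketch below is an equivalent in Python, with move_puck standing in for the animated MovePuck() routine.

    def hanoi(n, source, target, spare, move_puck):
        """Move n disks from peg source to peg target using peg spare."""
        if n == 1:
            move_puck(source, target)            # animate one disk move
            return
        hanoi(n - 1, source, spare, target, move_puck)
        move_puck(source, target)
        hanoi(n - 1, spare, target, source, move_puck)

    # Example: print the seven moves needed for three disks
    hanoi(3, "A", "B", "C", lambda s, t: print("move top disk from", s, "to", t))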
Because of the clear relationship between algorithm and physical reality, the Towers of Hanoi puzzle is relatively easy to visualize. A major challenge facing visualization researchers is to visualize more abstract information, such as information on the Internet, the structure of documents, or the effectiveness of advertising/entertainment in large market segments. This type of visualization, known as information visualization, is likely to emerge in the future as an important research challenge.
This chapter presented several case studies covering a variety of visualization techniques. The examples used different data representations including polygonal data, volumes, structured grids, and unstructured grids. Both C++ and Tcl code was used to implement the case studies.
Medical imaging is a demanding application area due to the size of the input data. Three-dimensional visualization of anatomy is generally regarded by radiologists as a communication tool for referring physicians and surgeons. Medical datasets are typically image data---volumes or layered stacks of 2D images that form volumes. Common visualization tools for medical imaging include isosurfaces, cut planes, and image display on volume slices.
Next, we presented an example that applied 3D visualization techniques to financial data. In this case study, we began by showing how to import data from an external source. We applied tube filters to the data and varied the width of the tube to show the volume of stock trading. We saw how different views can be used to present different pieces of information: viewed from the front, the visualization is a conventional price display; viewed from above, it shows trade volume.
In the modelling case study we showed how to use polygonal models and the implicit modelling facilities in VTK to create a stylistic logo. The final model was created by extracting an isosurface at a user-selected offset.
Computational fluid dynamics analysts frequently employ structured grid data. We examined some strategies for exploring the scalar and vector fields. The computational grid created by the analyst serves as a starting point for analyzing the data. We displayed geometry extracted from the finite difference grid, scalar color mapping, and streamlines and streamtubes to investigate the data.
In the finite element case study, we looked at unstructured grids used in a simulation of a blow molding process. We displayed the deformation of the geometry using displacement plots, and represented the material thickness using color mapping. We saw how we can create simple animations by generating a sequence of images.
We concluded the case studies by visualizing the Towers of Hanoi algorithm. Here we showed how to combine the procedural power of C++ with the visualization capabilities in VTK. We saw how visualization often requires our creative resources to cast data structures and information into visual form.
The case studies presented in the chapter rely on having interesting data to visualize. Sometimes the hardest part of practicing visualization is finding relevant data. The Internet is a tremendous resource for this task. Paul Gilster [Gilster94] has written an excellent introduction to many of the tools for accessing information on the Internet. There are many more books available on this subject in the local bookstore.
In the stock case study we used a programming tool called AWK to convert our data into a form suitable for VTK. More information on AWK can be found in The AWK Programming Language [Aho88]. Another popular text processing language is Perl [Perl95].
If you would like to know more about information visualization you can start with the references listed here [Becker95] [Ding90] [Eick93] [Feiner88] [Johnson91] [Robertson91]. This is a relatively new field but will certainly grow in the near future.
[Aho88] A. V. Aho, B. W. Kernighan, and P. J. Weinberger. The AWK Programming Language. Addison-Wesley, Reading, MA, 1988.
[AhoHopUll83] A. V. Aho, J. E. Hopcroft, and J. D. Ullman. Data Structures and Algorithms. Addison-Wesley, Reading, MA, 1983.
[Becker95] R. A. Becker, S. G. Eick, and A. R. Wilks. "Visualizing Network Data." IEEE Transactions on Visualization and Graphics. 1(1):16--28,1995.
[deLorenzi93] H. G. deLorenzi and C. A. Taylor. "The Role of Process Parameters in Blow Molding and Correlation of 3-D Finite Element Analysis with Experiment." International Polymer Processing. 3(4):365--374, 1993.
[Ding90] C. Ding and P. Mateti. "A Framework for the Automated Drawing of Data Structure Diagrams." IEEE Transactions on Software Engineering. 16(5):543--557, May 1990.
[Eick93] S. G. Eick and G. J. Wills. "Navigating Large Networks with Hierarchies." In Proceedings of Visualization '93. pp. 204--210, IEEE Computer Society Press, Los Alamitos, CA, October 1993.
[Feiner88] S. Feiner. "Seeing the Forest for the Trees: Hierarchical Displays of Hypertext Structures." In Conference on Office Information Systems. Palo Alto, CA, 1988.
[Gilster94] P. Gilster. Finding It on the Internet: The Essential Guide to Archie, Veronica, Gopher, WAIS, WWW (including Mosaic), and Other Search and Browsing Tools. John Wiley & Sons, Inc., 1994.
[Johnson91] B. Johnson and B. Shneiderman. "Tree-Maps: A Space-Filling Approach to the Visualization of Hierarchical Information Structures." In Proceedings of Visualization '91. pp. 284--291, IEEE Computer Society Press, Los Alamitos, CA, October 1991.
[Perl95] D. Till. Teach Yourself Perl in 21 Days. Sams Publishing, Indianapolis, Indiana, 1995.
[Robertson91] G. G. Robertson, J. D. Mackinlay, and S. K. Card. "Cone Trees: Animated 3D Visualizations of Hierarchical Information." In Proceedings of ACM CHI '91 Conference on Human Factors in Computing Systems. pp. 189--194, 1991.
[Rogers86] S. E. Rogers, D. Kwak, and U. K. Kaul, "A Numerical Study of Three-Dimensional Incompressible Flow Around Multiple Post." in Proceedings of AIAA Aerospace Sciences Conference. vol. AIAA Paper 86-0353. Reno, Nevada, 1986.
12.1 The medical example did nothing to transform the original data into a standard coordinate system. Many medical systems use RAS coordinates. R is right/left, A is anterior/posterior and S is Superior/Inferior. This is the patient coordinate system. Discuss and compare the following alternatives for transforming volume data into RAS coordinates.
12.2 Modify the last example found in the medical application ( Medical3.cxx ) to use vtkImageDataGeometryFilter instead of vtkImageActor. Compare the performance of using geometry with using texture. How does the performance change as the resolution of the volume data changes?
12.3 Modify the last medical example ( Medical3.cxx ) to use vtkTexture and vtkPlaneSource instead of vtkImageActor.
12.4 Change the medical case study to use dividing cubes for the skin surface.
12.5 Combine the two scripts frogSegmentation.tcl and marchingFrog.tcl into one script that will handle either segmented or grayscale files. What other parameters and pipeline components might be useful in general for this application?
12.6 Create polygonal / line stroked models of your initials and build your own logo. Experiment with different transformations.
12.7 Enhance the appearance of Towers of Hanoi visualization.
a) Texture map the disks, base plane, and pegs.
b) Create disks with central holes.
12.8 Use the blow molding example as a starting point for the following.
a) Create an animation of the blow molding sequence. Is it possible to interpolate between time steps? How would you do this?
b) Create the second half of the parison using symmetry. What transformation matrix do you need to use?
12.9 Start with the stock visualization example presented in this chapter.
a) Modify the example code to use a ribbon filter and linear extrusion filter as described in the text. Be careful of the width of the generated ribbons.
b) Can you think of a way to present high/low trade values for each day? | CommonCrawl |
Let $C$ be the compact cylinder $S^1\times [0,1]$. A 3-manifold $M$ with incompressible boundary is called acylindrical if every map $(C,\partial C)\to (M,\partial M)$ that sends the components of $\partial C$ to essential curves in $\partial M$ is homotopic rel $\partial C$ into $\partial M$.
I'm looking, for each $g\geq 2$, for examples of compact, orientable, acylindrical, hyperbolic 3-manifolds $M_g$ with non-empty, incompressible boundary such that each component of $\partial M_g$ is homeomorphic to the surface of genus $g$.
I'm sure such things should be well known to the experts.
Here's a little motivation. Such examples would be useful because, given an arbitrary hyperbolic 3-manifold $N$ with incompressible boundary, you can glue copies of the $M_g$ to the non-toroidal boundary components of $N$ and the result, by Geometrization (for Haken 3-manifolds, so you only need Thurston, not Perelman), is a hyperbolic 3-manifold of finite volume.
Akira Ushijima. The canonical decompositions of some family of compact orientable hyperbolic 3-manifolds with totally geodesic boundary. Geom. Dedicata, 78(1):21–47, 1999.
with the additional requirement that every boundary component has the same genus $g$.
To construct such manifolds you may draw pictures of sufficiently knotted graphs in $S^3$ consisting of some copies of genus-$g$ graphs, and take their complements. Then you can use orb to check whether the complement has a hyperbolic structure with geodesic boundary.
An alternative construction uses ideal triangulations, extending Thurston's original "knotted y" example from his notes. Pick a bunch of tetrahedra and pair their faces so that every edge in the resulting triangulation has valence $> 6$. Then remove an open star at each vertex. Geometrization guarantees that the resulting manifold admits a hyperbolic metric with geodesic boundary (because you can put an angle structure à la Casson which excludes any normal surface with $\chi \geq 0$).
For example, you can take $g\geqslant 2$ tetrahedra and pair the faces in such a way that the resulting triangulation consists of one vertex and one edge only (which has thus valence $6g$). The resulting manifold is a hyperbolic 3-manifold with connected genus-$g$ geodesic boundary. Its hyperbolic structure is simply obtained by giving each tetrahedron the structure of a truncated regular hyperbolic tetrahedron with all dihedral angles of angle $\pi/(3g)$. Thurston's knotted y is obtained in this way for $g=2$.
The manifolds constructed in this way are "the simplest ones" among those having a connected genus-$g$ boundary, from different viewpoints: they have smallest volume (as a consequence of a result of Miyamoto) and smallest Matveev complexity: we have investigated these manifolds here. There are many such manifolds because there are many triangulations with one vertex and one edge: their number grows more than exponentially in $g$.
MR0860677 (88b:32050) Brooks, Robert(1-UCLA) Circle packings and co-compact extensions of Kleinian groups. Invent. Math. 86 (1986), no. 3, 461–469.
The idea is that given a circle-packed hyperbolic surface (such are dense in teichmuller space, by an earlier theorem of Brooks) one can manufacture a hyperbolic manifold whose boundary consists of four copies of the surface.
See the proof of theorem 19.8 in my book. I explain two constructions, one via the orbifold trick as Igor explained and the other using Myers' theorem. Myers' idea is: Take a genus-$g$ handlebody $H$ and take a knot $K\subset H$ which busts all essential annuli and disks in $H$. Then do a Dehn surgery on $H$ along $K$.
Given a complete binary tree ($n = 2^d$ leaves) with integers in the leaves.
Goal: How can we minimize the number of inversions in the array of leaf values (read from left to right), using only the operation of choosing a non-leaf node and swapping the two subtrees under it? For example, if the sequence is (4,2,1,3), then choosing the root (swapping its direct children) followed by choosing the right child of the root (and swapping its direct children) turns the sequence into (1,3,2,4), which has only one inversion. This would be the minimum number of inversions.
My approach: My idea was to use something like merge sort but modified. First I would recursively split the array and then, when I merge back together, I'd need to change the strategy. For the first level of merges (say I have the sibling leaves 4 and 2 from the example) I can simply put those in the correct order because our swapping operation doesn't prevent this.
However, once I have to merge something like [1,2] and [3,4] (and larger groups which are powers of 2), I have to move those groups as single units (because of the operation we have to use). I can't put it in exact order like I want but I was thinking maybe it would be best to merge the larger groups by summing up all the numbers inside the groups and moving the groups with the smallest sums to the left to minimize inversions. That seems inefficient though. All the while I have to somehow count inversions. I'm really kind of lost on what to do besides what I've thought of so far. Would anyone be able to help?
Let us denote by $L(x)$ the set of leaves in the subtree rooted at $x$. The idea is that the decision whether to switch the two children $x_1,x_2$ of $x$ depends only on $L(x_1),L(x_2)$, and depends on how many of the pairs $(a,b) \in L(x_1) \times L(x_2)$ satisfy $a < b$ and how many satisfy $b < a$.
If the sets $L(x_1),L(x_2)$ were available to you in their sorted forms, then you could count the number of pairs of each type by merging these sets and doing some accounting. This suggests a recursive algorithm which sorts the leaves using the merge sort strategy.
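A sketch of this recursion in Python. Each call returns the minimum number of inversions achievable inside a subtree together with its leaf values in sorted order; the two cross-pair counts are accumulated while scanning the sorted halves.

    def min_inversions(leaves):
        """leaves: leaf values of a complete binary tree, length a power of two.
        Returns (minimum inversions achievable, sorted leaf values)."""
        n = len(leaves)
        if n == 1:
            return 0, list(leaves)
        inv_left, left = min_inversions(leaves[: n // 2])
        inv_right, right = min_inversions(leaves[n // 2:])

        # Cross pairs between the two sorted halves:
        #   keep = pairs (a, b) with a from the left child, b from the right, a > b
        #   swap = pairs with b > a, i.e. the cost if the two subtrees are swapped
        keep = swap = 0
        j = k = 0
        for a in left:
            while j < len(right) and right[j] < a:
                j += 1
            keep += j                          # right values strictly smaller than a
            while k < len(right) and right[k] <= a:
                k += 1
            swap += len(right) - k             # right values strictly larger than a
        return inv_left + inv_right + min(keep, swap), sorted(left + right)

    # The example from the question, (4, 2, 1, 3), gives 1 inversion:
    print(min_inversions([4, 2, 1, 3])[0])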
Abstract: We describe the Hochschild cohomology algebra for algebras of dihedral type in the subfamily of the family $D(2\mathcal B)$, for which the parameter $c$ is equal to $0$. In the calculation of multiplication in this cohomology algebra, we use the minimal bimodule projective resolution for the algebras under consideration that was constructed in the previous paper of the authors. The obtained results allow us to describe the Hochschild cohomology algebra also for algebras in the family $D(2\mathcal A)$ for which $c=0$.
Key words and phrases: Hochschild cohomology algebra, algebras of dihedral type, bimodule resolution. | CommonCrawl |
Welcome to the Database of Ring Theory!
A repository of rings, their properties, and more ring theory stuff.
See rings and properties on record.
You can search for rings by their properties. If you are only interested in commutative rings, try the specialized search with expanded, commutative-only properties.
There are a lot of ways you could help, even if you don't know specifically what yet.
This is a ring which is one-sided pseudo-Frobenius. The construction is involved, and not complete yet. Also, if you had any trouble visiting the theorems section recently, please try again. I fixed a bug there.
This is an example of a ring which is one-sided principally injective.
Added this ring and this ring, examples of simple right V domains, which I hear are pretty rare.
I posted a few graphs of implications between ring properties. This is the first time I've regenerated the graphs in a long time, so there are probably problems. Please let me know if you find any. Thanks!
Congratulations MatheinBoulomenos for the winning submission! The next several rings are submissions from the contest too, so check them out. Also thanks to kc_kennylau for working so hard filling in a lot of gaps, especially around ring cardinalities.
site news: "Corrections to Examples of Commutative rings"
Put up the first draft of a TeX document of errata for Examples of Commutative rings. Find the link on the errata page.
Owing to the difficulties of sensibly ordering and searching dimensions, I've settled on this presentation of dimensions. As always, if some data is missing, please submit it for entry!
Site update: Three new features!
On ring detail pages you will now find Dimensions and Subsets tabs. Take a look at $\mathbb Z$ to see how they work. Also, there is now a ring theory literature errata page so we can track mistakes as we find them. You can get to it through the Citations menu item.
I added PCI rings to the collection. Interestingly, this is the only property I'm aware of in the database for which it is currently unknown whether or not the definition is symmetric or asymmetric.
View a list of resources for studying ring theory.
Math Counterexamples: Mathematical exceptions to the rules or intuition.
GroupProps: the group properties wiki. | CommonCrawl |
The capacitor current-voltage equation has a derivative form and an integral form.
I could have perhaps described the "t to tau" substitution step in the video a little better.
Say you have some function, $f()$. An example would be $f(x) = 2x$. The variable inside the parentheses, in this case $x$ gets matched up with the variable name $x$ on the right side. If I wanted to, I could express exactly the same function with $f(y) = 2y$. All I did was swap out $x$ and put in $y$. The meaning of the function did not change.
We can do this trick when the function is an integral. In this video I did exactly that. I swapped $t$ out of the integral expression and replaced it with tau, $\tau$.
$\displaystyle \int i(t) \,dt$ can be written as $\displaystyle \int i(\tau) \,d\tau$ and it means exactly the same thing.
That's the "$t$ to $\tau$" step.
The obvious question is, "Why bother?" This is going to be a bit long.
The integral form of the capacitor equation is $v(\text T) = \dfrac{1}{\text C}\displaystyle\int_{-\infty}^{\,\text T} i(t) \,dt$, where the definite integral runs from $t = -\infty$ up to $t = \text T$. The value $t = \text T$ is the moment we stop the integral. I call that time "now". The equation tells you the voltage on the capacitor "now." It tells you to add up all the current that ever flowed in or out of the capacitor, accounting for all time back to the beginning of the universe $(t = -\infty)$. Of course, we only go back in time as far as makes sense, like back to when we turned the circuit on, or back to the last time we were sure of the voltage.
When you evaluate a definite integral, by the time you are done the independent variable $(t$ and $dt)$ inside the integral will vanish. That's why we don't care what the independent variable is called. It could be $t$ or $\tau$ or *@#%. We don't care. It is going to disappear.
The reason for doing the $t$ to $\tau$ switch is it frees me up to do another switch, purely for the cosmetic look of the equation. I want to replace $\text T$ with $t$, just because I like the look of little $t$ as the independent variable.
After swapping $\text T$ for $t$ the result is $v(t) = \dfrac{1}{\text C}\displaystyle\int_{-\infty}^{\,t} i(\tau) \,d\tau$, and this is the equation you see at 8:03. After doing all the name changes, the variable $t$ is the time you stop the integration and learn the voltage on the capacitor.
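As a quick check of this form: if a constant current $\text I_0$ starts flowing into the capacitor at $t = 0$ (and nothing flowed before that), the integral gives $v(t) = \dfrac{1}{\text C}\displaystyle\int_{0}^{\,t} \text I_0 \,d\tau = \dfrac{\text I_0 \, t}{\text C}$, the familiar voltage ramp you get when charging a capacitor with a constant current.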
In some textbooks you may see $t$ used both inside the integral and as the upper limit.
I think this notation is deceiving because it glosses over the two interpretations of time, the continuous time axis and the time now when you want to know the voltage. | CommonCrawl |
Which square—large, medium, or small—covers more of the plane? Explain your reasoning.
Draw three different quadrilaterals, each with an area of 12 square units.
b. not tile the plane.
The area of this shape is 24 square units. Which of these statements is true about the area? Select all that apply.
The area can be found by counting the number of squares that touch the edge of the shape.
It takes 24 grid squares to cover the shape without gaps and overlaps.
The area can be found by multiplying the sides lengths that are 6 units and 4 units.
The area can be found by counting the grid squares inside the shape.
The area can be found by adding $4 \times 3$ and $6 \times 2$.
Here are two copies of the same figure. Show two different ways for finding the area of the shaded region. All angles are right angles.
Which shape has a larger area: a rectangle that is 7 inches by $\frac 34$ inch, or a square with side length of $2 \frac12$ inches? Show your reasoning.
The diagonal of a rectangle is shown.
Decompose the rectangle along the diagonal, and recompose the two pieces to make a different shape.
How does the area of this new shape compare to the area of the original rectangle? Explain how you know.
The area of the square is 1 square unit. Two small triangles can be put together to make a square or to make a medium triangle.
Which figure also has an area of $1\frac 12$ square units? Select all that apply.
Priya decomposed a square into 16 smaller, equal-size squares and then cut out 4 of the small squares and attached them around the outside of the original square to make a new figure.
How does the area of her new figure compare with that of the original square?
The area of the new figure is greater.
The two figures have the same area.
The area of the original square is greater.
We don't know because neither the side length nor the area of the original square is known.
The area of a rectangular playground is 78 square meters. If the length of the playground is 13 meters, what is its width?
Explain why the student's statement about area is incorrect.
Find the area of each shaded region. Show your reasoning.
Find the area of each shaded region. Show or explain your reasoning.
Two plots of land have very different shapes. Noah said that both plots of land have the same area.
Do you agree with Noah? Explain your reasoning.
A homeowner is deciding on the size of tiles to use to fully tile a rectangular wall in her bathroom that is 80 inches by 40 inches. The tiles are squares and come in three side lengths: 8 inches, 4 inches, and 2 inches. State if you agree with each statement about the tiles. Explain your reasoning.
Regardless of the size she chooses, she will need the same number of tiles.
Regardless of the size she chooses, the area of the wall that is being tiled is the same.
She will need two 2-inch tiles to cover the same area as one 4-inch tile.
She will need four 4-inch tiles to cover the same area as one 8-inch tile.
If she chooses the 8-inch tiles, she will need a quarter as many tiles as she would with 2-inch tiles.
Select all of the parallelograms. For each figure that is not selected, explain how you know it is not a parallelogram.
a. Decompose and rearrange this parallelogram to make a rectangle.
b. What is the area of the parallelogram? Explain your reasoning.
Find the area of the parallelogram.
Explain why this quadrilateral is not a parallelogram.
Find the area of each shape. Show your reasoning.
Find the areas of the rectangles with the following side lengths.
Select all parallelograms that have a correct height labeled for the given base.
The side labeled $b$ has been chosen as the base for this parallelogram.
Draw a segment showing the height corresponding to that base.
Find the area of each parallelogram.
If the side that is 6 units long is the base of this parallelogram, what is its corresponding height?
Do you agree with each of these statements? Explain your reasoning.
A parallelogram has six sides.
Opposite sides of a parallelogram are parallel.
A parallelogram can have one pair or two pairs of parallel sides.
All sides of a parallelogram have the same length.
All angles of a parallelogram have the same measure.
A square with an area of 1 square meter is decomposed into 9 identical small squares. Each small square is decomposed into two identical triangles.
What is the area, in square meters, of 6 triangles? If you get stuck, draw a diagram.
How many triangles are needed to compose a region that is $1\frac 12$ square meters?
Which three of these parallelograms have the same area as each other?
Which of the following pairs of base and height produces the greatest area? All measurements are in centimeters.
Here are the areas of three parallelograms. Use them to find the missing length (labeled with a "?") on each parallelogram.
The Dockland Building in Hamburg, Germany is shaped like a parallelogram.
If the length of the building is 86 meters and its height is 55 meters, what is the area of this face of the building?
Select all segments that could represent a corresponding height if the side $m$ is the base.
Find the area of the shaded region. All measurements are in centimeters. Show your reasoning.
To decompose a quadrilateral into two identical shapes, Clare drew a dashed line as shown in the diagram.
She said that the two resulting shapes have the same area. Do you agree? Explain your reasoning.
Did Clare partition the figure into two identical shapes? Explain your reasoning.
Triangle R is a right triangle. Can we use two copies of Triangle R to compose a parallelogram that is not a square?
If so, explain how or sketch a solution. If not, explain why not.
Two copies of this triangle are used to compose a parallelogram. Which parallelogram cannot be a result of the composition? If you get stuck, consider using tracing paper.
a. On the grid, draw at least three different quadrilaterals that can each be decomposed into two identical triangles with a single cut (show the cut line). One or more of the quadrilaterals should have non-right angles.
b. Identify the type of each quadrilateral.
A parallelogram has a base of 9 units and a corresponding height of $\frac23$ units. What is its area?
A parallelogram has a base of 9 units and an area of 12 square units. What is the corresponding height for that base?
A parallelogram has an area of 7 square units. If the height that corresponds to a base is $\frac14$ unit, what is the base?
Select all segments that could represent a corresponding height if the side $n$ is the base.
To find the area of this right triangle, Diego and Jada used different strategies. Diego drew a line through the midpoints of the two longer sides, which decomposes the triangle into a trapezoid and a smaller triangle. He then rearranged the two shapes into a parallelogram.
Jada made a copy of the triangle, rotated it, and lined it up against one side of the original triangle so that the two triangles make a parallelogram.
Explain how Diego might use his parallelogram to find the area of the triangle.
Explain how Jada might use her parallelogram to find the area of the triangle.
Find the area of the triangle. Explain or show your reasoning.
Which of the three triangles has the greatest area? Show your reasoning.
If you get stuck, use what you know about the area of parallelograms to help you.
Draw an identical copy of each triangle such that the two copies together form a parallelogram. If you get stuck, consider using tracing paper.
A parallelogram has a base of 3.5 units and a corresponding height of 2 units. What is its area?
A parallelogram has a base of 3 units and an area of 1.8 square units. What is the corresponding height for that base?
A parallelogram has an area of 20.4 square units. If the height that corresponds to a base is 4 units, what is the base?
Select all drawings in which a corresponding height $h$ for a given base $b$ is correctly identified.
For each triangle, a base and its corresponding height are labeled.
a. Find the area of each triangle.
b. How is the area related to the base and its corresponding height?
Here is a right triangle. Name a corresponding height for each base.
Find the area of the shaded triangle. Show your reasoning.
Andre drew a line connecting two opposite corners of a parallelogram. Select all true statements about the triangles created by the line Andre drew.
Each triangle has two sides that are 3 units long.
Each triangle has a side that is the same length as the diagonal line.
Each triangle has one side that is 3 units long.
When one triangle is placed on top of the other and their sides are aligned, we will see that one triangle is larger than the other.
The two triangles have the same area as each other.
While estimating the area of the octagon, Lin reasoned that it must be less than 100 square inches. Do you agree? Explain your reasoning.
Find the exact area of the octagon. Show your reasoning.
For each triangle, a base is labeled $b$. Draw a line segment that shows its corresponding height. Use an index card to help you draw a straight line.
Select all triangles that have an area of 8 square units. Explain how you know.
Find the area of the triangle. Show your reasoning.
If you get stuck, carefully consider which side of the triangle to use as the base.
Can side $d$ be the base for this triangle? If so, which length would be the corresponding height? If not, explain why not.
Find the area of this shape. Show your reasoning.
On the grid, sketch two different parallelograms that have equal area. Label a base and height of each and explain how you know the areas are the same.
Mark each vertex with a large dot. How many edges and vertices does this polygon have?
Find the area of this trapezoid. Explain or show your strategy.
Lin and Andre used different methods to find the area of a regular hexagon with 6-inch sides. Lin decomposed the hexagon into six identical triangles. Andre decomposed the hexagon into a rectangle and two triangles.
Find the area of the hexagon using each person's method. Show your reasoning.
Identify a base and a corresponding height that can be used to find the area of this triangle. Label the base $b$ and the corresponding height $h$.
2. Find the area of the triangle. Show your reasoning.
On the grid, draw three different triangles with an area of 12 square units. Label the base and height of each triangle.
What is the surface area of this rectangular prism?
Which description can represent the surface area of this trunk?
The number of square inches that cover the top of the trunk.
The number of square feet that cover all the outside faces of the trunk.
The number of square inches of horizontal surface inside the trunk.
The number of cubic feet that can be packed inside the trunk.
Which figure has a greater surface area?
A rectangular prism is 4 units high, 2 units wide, and 6 units long. What is its surface area in square units? Explain or show your reasoning.
Draw an example of each of the following triangles on the grid.
A right triangle with an area of 6 square units.
An acute triangle with an area of 6 square units.
An obtuse triangle with an area of 6 square units.
Find the area of triangle $MOQ$ in square units. Show your reasoning.
Is this polyhedron a prism, a pyramid, or neither? Explain how you know.
How many faces, edges, and vertices does it have?
Tyler said this net cannot be a net for a square prism because not all the faces are squares.
Do you agree with Tyler's statement? Explain your reasoning.
Explain why each of the following triangles has an area of 9 square units.
A parallelogram has a base of 12 meters and a height of 1.5 meters. What is its area?
A triangle has a base of 16 inches and a height of $\frac18$ inches. What is its area?
A parallelogram has an area of 28 square feet and a height of 4 feet. What is its base?
A triangle has an area of 32 square millimeters and a base of 8 millimeters. What is its height?
Find the area of the shaded region. Show or explain your reasoning.
Can the following net be assembled into a cube? Explain how you know. Label parts of the net with letters or numbers if it helps your explanation.
What polyhedron can be assembled from this net? Explain how you know.
Find the surface area of this polyhedron. Show your reasoning.
Here are two nets. Mai said that both nets can be assembled into the same triangular prism. Do you agree? Explain or show your reasoning.
Here are two three-dimensional figures.
Tell whether each of the following statements describes Figure A, Figure B, both, or neither.
This figure is a polyhedron.
This figure has triangular faces.
There are more vertices than edges in this figure.
This figure has rectangular faces.
This figure is a pyramid.
There is exactly one face that can be the base for this figure.
The base of this figure is a triangle.
This figure has two identical and parallel faces that can be the base.
Select all units that can be used for surface area. Explain why the others cannot be used for surface area.
Find the area of this polygon. Show your reasoning.
Jada drew a net for a polyhedron and calculated its surface area.
What polyhedron can be assembled from this net?
Jada made some mistakes in her area calculation. What were the mistakes?
Find the surface area of the polyhedron. Show your reasoning.
A cereal box is 8 inches by 2 inches by 12 inches. What is its surface area? Show your reasoning. If you get stuck, consider drawing a sketch of the box or its net and labeling the edges with their measurements.
Twelve cubes are stacked to make this figure.
How would the surface area change if the top two cubes are removed?
Here are two polyhedra and their nets. Label all edges in the net with the correct lengths.
What three-dimensional figure can be assembled from the net?
Match each quantity with an appropriate unit of measurement.
Here is a figure built from snap cubes.
Find the volume of the figure in cubic units.
Find the surface area of the figure in square units.
True or false: If we double the number of cubes being stacked, both the volume and surface area will double. Explain or show how you know.
Which two figures suggest that her statement is true?
Which two figures could show that her statement is not true?
Draw a pentagon (five-sided polygon) that has an area of 32 square units. Label all relevant sides or segments with their measurements, and show that the area is 32 square units.
Draw a net for this rectangular prism.
Find the surface area of the rectangular prism.
a. Decide if each number on the list is a perfect square.
b. Write a sentence that explains your reasoning.
Decide if each number on the list is a perfect cube.
b. Explain what a perfect cube is.
A square has side length 4 cm. What is its area?
The area of a square is 49 m2. What is its side length?
A cube has edge length 3 in. What is its volume?
Prism A and Prism B are rectangular prisms. Prism A is 3 inches by 2 inches by 1 inch. Prism B is 1 inch by 1 inch by 6 inches.
Select all statements that are true about the two prisms.
They have the same volume.
They have the same number of faces.
More inch cubes can be packed into Prism A than into Prism B.
The two prisms have the same surface area.
The surface area of Prism B is greater than that of Prism A.
What information would you need to find its surface area? Be specific, and label the diagram as needed.
Find the surface area of this triangular prism. All measurements are in meters.
What is the volume of a cube with edge length 8 in?
What is the volume of a cube with edge length $\frac 13$ cm?
A cube has a volume of 8 ft3. What is its edge length?
a. What three-dimensional figure can be assembled from this net?
b. If each square has a side length of 61 cm, write an expression for the surface area and another for the volume of the figure.
Draw a net for a cube with edge length $x$ cm.
Here is a net for a rectangular prism that was not drawn accurately.
Explain what is wrong with the net.
Draw a net that can be assembled into a rectangular prism.
Create another net for the same prism.
State whether each figure is a polyhedron. Explain how you know.
Here is Elena's work for finding the surface area of a rectangular prism that is 1 foot by 1 foot by 2 feet.
She concluded that the surface area of the prism is 296 square feet. Do you agree with her conclusion? Explain your reasoning.
No practice problems for this lesson. | CommonCrawl |
Del Pezzo Surfaces in Weighted Projective Spaces (Jan 23 2013, last revised Nov 15 2016). We study singular del Pezzo surfaces that are quasi-smooth and well-formed weighted hypersurfaces. We give an algorithm how to classify all of them.
Review of "Garden of integrals" (Feb 06 2008). This is a review of the book "Garden of integrals" by Frank Burk.
The $\infty$-harmonic potential is not always an $\infty$-eigenfunction (Oct 11 2012, last revised Oct 26 2012). In this note we prove that there is a convex domain for which the $\infty$-harmonic potential is not a first $\infty$-eigenfunction.
Vector product algebras (Oct 30 2008). Vector products can be defined on spaces of dimensions 0, 1, 3 and 7 only, and their isomorphism types are determined entirely by their adherent symmetric bilinear forms. We present a short and elementary proof for this classical result.
Trigonometry of The Gold-Bug (May 31 2012). The classic Edgar Allan Poe story The Gold-Bug involves digging for pirate treasure. Locating the digging sites requires some simple trigonometry.
Your task is to divide the numbers $1,2,\ldots,n$ into two sets so that the sums of the sets are equal.
Print "YES", if the division is possible, and "NO" otherwise.
After this, if the division is possible, print an example how to create the sets. First, print the number of elements in the first set and the elements, and then, print the second set in a similar way. | CommonCrawl |
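A common approach: the split is possible exactly when $n(n+1)/2$ is even, and one valid division can be built greedily by taking numbers from $n$ downward while they still fit into the target sum. A short Python sketch (reading $n$ from input and fast output are left out; the function just returns the two sets):

    def two_sets(n):
        total = n * (n + 1) // 2
        if total % 2 == 1:
            return None                      # odd total sum: division impossible
        target = total // 2
        first = []
        for x in range(n, 0, -1):            # greedily take the largest numbers that fit
            if x <= target:
                first.append(x)
                target -= x
        chosen = set(first)
        second = [x for x in range(1, n + 1) if x not in chosen]
        return first, second

    result = two_sets(7)
    if result is None:
        print("NO")
    else:
        a, b = result
        print("YES")
        print(len(a)); print(*a)
        print(len(b)); print(*b)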
August 2014, 111 pages, softcover, 17 x 24 cm.
This book is based on a lecture course given by the author at the Educational Center of Steklov Mathematical Institute in 2011. It is designed for a one semester course for undergraduate students, familiar with basic differential geometry, complex and functional analysis.
The universal Teichmüller space $\mathcal T$ is the quotient of the space of quasisymmetric homeomorphisms of the unit circle modulo Möbius transformations. The first part of the book is devoted to the study of geometric and analytic properties of $\mathcal T$. It is an infinite-dimensional Kähler manifold which contains all classical Teichmüller spaces of compact Riemann surfaces as complex submanifolds which explains the name "universal Teichmüller space". Apart from classical Teichmüller spaces, $\mathcal T$ contains the space $\mathcal S$ of diffeomorphisms of the circle modulo Möbius transformations. The latter space plays an important role in the quantization of the theory of smooth strings. The quantization of $\mathcal T$ is presented in the second part of the book. In contrast with the case of diffeomorphism space $\mathcal S$, which can be quantized in frames of the conventional Dirac scheme, the quantization of $\mathcal T$ requires an absolutely different approach based on the noncommutative geometry methods.
The book concludes with a list of 24 problems and exercises which can be used during the examinations. | CommonCrawl |
Nine of ten cards, among which there is an ace of hearts, are distributed to three players so that the first one receives 3, the second - 4, and the third - 2 cards. How many cards combinations exist, where an ace of hearts gets to a third player?
The third player must receive the ace of hearts together with one of the other $9$ cards ($9$ choices). Of the remaining $8$ cards, choose $4$ for the second player ($\binom{8}{4}=70$ ways) and then $3$ of the remaining $4$ for the first player ($\binom{4}{3}=4$ ways); the last card is simply not dealt. Thus there are $9\times70\times4=2520$ ways to distribute the cards.
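For anyone who wants to double-check the count, a brute-force enumeration in Python (card 0 plays the role of the ace of hearts) confirms the 2520:

    from itertools import combinations

    cards = range(10)
    count = 0
    for third in combinations(cards, 2):            # the third player's 2 cards
        if 0 not in third:
            continue                                # the ace of hearts must be here
        rest = [c for c in cards if c not in third]
        for first in combinations(rest, 3):         # the first player's 3 cards
            remaining = [c for c in rest if c not in first]
            for second in combinations(remaining, 4):
                count += 1                          # the one leftover card is not dealt
    print(count)                                    # prints 2520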
We present recent results in the theory of turbulent momentum transport pertinent to the description of intrinsic rotation. Emphasis is placed on the self-consistent evolution of poloidal and toroidal flows. Both turbulent and neo-classical stresses are considered, allowing for the recovery of purely neo-classical flows, as well as the description of deviations induced by the background turbulence. Along with radial force balance, toroidal and parallel force balance are utilized to constrain the evolution of poloidal and toroidal momentum. Within the turbulent toroidal momentum flux, in the limit of small but finite inverse aspect ratio, two distinct non-diffusive contributions capable of spinning up a plasma from rest are identified. The first results from E $\times$ B shear induced symmetry breaking of the underlying wave population, whereas the second follows from charge separation induced by the polarization drift. An expression for the poloidal flow, including both neo-classical and turbulent stresses, is obtained from parallel force balance. Potentially significant deviations from neo-classical poloidal rotation are found, which are in turn seen to provide a robust means of enhancing toroidal flow generation. Ongoing work is devoted to the development of a self-consistent model describing the coupled poloidal and toroidal flow evolution. | CommonCrawl |
Deep Learning Scaling Is Predictable, Empirically by Baidu Research.
Models are becoming deeper and deeper from the 8 layers of AlexNet to the 1001-layer ResNet.
Training on large datasets is way quicker: ImageNet can now (with enough computing power) be trained in less than 20 minutes.
Dataset size are increasing each year.
The Deep Learning (DL) community has created impactful advances across diverse application domains by following a straightforward recipe: search for improved model architectures, create large training data sets, and scale computation.
However it also notes that new models and hyperparameter configurations often depend on epiphany and serendipity.
In order to harness the power of big data (more data, more computation power, etc.), models should not be designed to shave an epsilon off the error rate on ImageNet, but rather be designed to get better with more data.
The paper models the generalization error as a power law, $\epsilon(m) \approx \alpha\, m^{\beta_g}$, where $\epsilon(m)$ is the generalization error at $m$ training samples, $\alpha$ is a constant related to the problem, and $\beta_g$ is the steepness of the learning curve.
$\beta_g$ is said to settle between -0.07 and -0.35.
Baidu tested four domains: machine translation, language modeling, image classification, and speech recognition.
For each domain, a variety of architectures, optimizers, and hyperparameters is tested. To see how models scale with dataset size, Baidu trained models on samples ranging from 0.1% of the original data to the whole data (minus the validation set).
The paper's authors try to find the smallest model that is able to overfit each sample.
Baidu also removed any regularizations, like weight decay, that might reduce the model's effective capacity.
In all domains, they found that the required model size grows sublinearly with the dataset size.
The first thing that we can conclude from these numbers is that text-based problems (translation and language modeling) scale poorly compared to image problems.
It is worth noting that (current) models seem to scale better on higher-dimensional data: image and speech are of a higher dimensionality than text.
You may also wonder why image has two entries in the table: one for top-1 generalization error, and one for top-5. This is one of the most interesting findings of this paper. Current models of image classification improve their top-5 faster than top-1 as data size increases! I wonder why that is.
The small data region, where models are given so little data that they can only make random guesses.
The power law region, where models follow the power law. However the learning curve steepness may be improved.
The irreducible error, a combination of the Bayes error (on which the model cannot improve) and the dataset defects that may impair generalization.
Given the power law, researchers can train their new architecture on a small dataset, and have a good estimation of how it would scale on a bigger dataset. It may also give a reasonable estimation of the hardware and time requirements to reach a chosen generalization error.
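A minimal sketch of that workflow with NumPy: fit the exponent on a log-log scale from a handful of small-scale runs, then extrapolate. The error values below are made-up placeholders, not numbers from the paper.

    import numpy as np

    # Hypothetical measurements: validation error at a few training-set sizes
    m = np.array([1e4, 3e4, 1e5, 3e5, 1e6])
    err = np.array([0.42, 0.36, 0.30, 0.255, 0.215])

    # Fit log(err) = log(alpha) + beta_g * log(m)
    beta_g, log_alpha = np.polyfit(np.log(m), np.log(err), 1)
    alpha = np.exp(log_alpha)
    print("estimated beta_g:", round(beta_g, 3))

    # Extrapolate the expected error at 100x the largest dataset used so far
    m_big = 1e8
    print("predicted error at 1e8 samples:", round(alpha * m_big ** beta_g, 3))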
We suggest that future work more deeply analyze learning curves when using data handling techniques, such as data filtering/augmentation, few-shot learning, experience replay, and generative adversarial networks.
Baidu also recommends searching for ways to push the boundaries of the irreducible error. To do that we should be able to distinguish between what contributes to the Bayes error, and what does not.
Baidu Research showed that models follow a power law curve. They empirically determined the power law exponent, or steepness of the learning curve, for machine translation, language modeling, image classification, and speech recognition.
This power law expresses how much a model can improve given more data. Models for text problems are currently the least scalable.
Abstract: Accurate ab-initio quantum mechanical calculations of pyrrole dimers are reported. The thermodynamical stabilities of dimers with $\alpha - \alpha, \alpha - \beta$, and $\beta - \beta$ type linkages are compared in order to predict the possibilities of branching in polypyrroles. Calculations employing large basis sets and including electron correlation effects predict the $\alpha - \alpha$ dimers as the most stable form. However, an $\alpha - \beta$ type bonding requires only 1.5-2.0 kcal/mol, and the energy necessary to introduce a $\beta - \beta$ type bond is 3.6-4.0 kcal/mol. These values show that a high degree of branching is possible even at room temperature. | CommonCrawl |
We will now use the above theorems to prove a very important result.
Proposition 1: Let $X$ be a locally convex topological vector space. If $Y$ is a proper closed subspace of $X$ then for every $x_0 \in X \setminus Y$ there exists a continuous linear function $f$ on $X$ such that $f(x_0) \neq 0$ and $f |_Y = 0$. | CommonCrawl |
Recently O. Pechenik studied the cyclic sieving of increasing tableaux of shape $2\times n$, and obtained a polynomial on the major index of these tableaux, which is a $q$-analogue of refined small Schröder numbers. We define row-increasing tableaux and study the major index and amajor index of row-increasing tableaux of shape $2 \times n$. The resulting polynomials are both $q$-analogues of refined large Schröder numbers. For both results we give bijective proofs. | CommonCrawl |
Does there exist a matrix $A\neq I$ such that $A^n=1$ for any $n$?
How to calculate probability and expected value?
If a random variable is uniformly bounded, i.e., $|X_j| \leq C$ for all $j = 1,2,\ldots$, does the fourth moment exist?
If $W_t$ is a Brownian motion, is $W_t^3$ a martingale? | CommonCrawl |
It is known to be finitely generated, but the question of finite presentation is still open. Given that the mapping class group is well known to be finitely presented, a natural question to ask is then, under what conditions, if any, does every finitely generated subgroup of a given group permit a finite presentation? A group satisfying this property is said to be coherent. Of course, if the mapping class group could be shown to be coherent then the question of finite presentation of the Torelli group is trivially solved. All I have done is say: if every finitely generated subgroup of the mapping class group is finitely presented, then the Torelli group is finitely presented, which isn't very helpful. But the question of coherence could provide a roundabout way of solving this problem. More importantly, however, it serves as motivation for the question: under what conditions can a group be said to be coherent?
Using this we can embed in with generators , , and . And similarly we can embed in with . So we have that these linear groups are also not coherent.
Surface groups are 1-relator groups. Moreover, any subgroup of a surface group is another surface group, and thus is finitely presented. This is a nice easy case as surfaces have a powerful underlying topological restriction, namely the classification of surfaces.
In fact it is also known that the fundamental groups of 3-manifolds are coherent, and similarly it is the underlying topological properties that allow this to be shown, specifically the Scott core theorem.
What if we let X be the wedge of two circles, then take Y to be the direct product of two copies of X. If we thicken Y slightly we obtain a 3-manifold. However the fundamental group of Y is unaffected by this thickening, remaining F2 x F2, which as you have already said is incoherent?
Don't know if you are still reading this, but the mapping class group contains copies of $F_2 \times F_2$ once the genus is at least $2$, so it is not coherent.
Hey – thanks for the reply! I know how to get an $F_2$ sitting in there (take Dehn twists about suitable curves and use the ping pong lemma). I guess then with genus 2 or higher we have enough room to do this twice disjointly?
The structure of subgroup of mapping class groups generated by two Dehn twists.
Proc. Japan Acad. Ser. A Math. Sci. 72 (1996), no. 10, 240–241. | CommonCrawl |
I'm trying to solve a 1D Poisson equation with pure Neumann boundary conditions. I've found many discussions of this problem, e.g.
Some of the answers seem unsatisfactory though. For example, the answer in 2) contains a document that explains how the matrix $A$ changes when applying Neumann boundary conditions, but does not explain how to solve the singular system. The comment left in 2) by @Evgeni Sergeev is a reference to a problem with mixed boundary conditions, and not pure Neumann boundary conditions.
That said, there are some useful things I've found. I do like the second comment given in 1) by @Sumedh Joshi. It suggests subtracting the mean from the RHS. Other references I've read suggest using Dirichlet BCs at one location in the computational domain, which should result in a unique solution; however, I prefer removing the mean since this seems more elegant.
Where $u_b,u_i,f_b,\theta$ are the $u$ at the boundary, $u$ at the first interior point, $f$ at the boundary and the slope of the solution at the boundary (zero in this case) respectively. The interior of this matrix is the same as the one discussed in the document given in the accepted answer in 2).
Here I show the setup and results of a small MATLAB script where I print out $A, f, mean(f)$ and the (attempted) solution $u$ using MATLAB's backslash operator.
Warning: Matrix is singular to working precision.
My question is why is this not working? I thought that removing the mean of the RHS would work as suggested in @Sumedh Joshi's comment. Any help is greatly appreciated.
I found out that this same exact problem setup works perfectly fine for 12 unknowns (and 15 and much larger, e.g. 100), but does not work for 10 unknowns (as posted). So I suppose that the problem is set up correctly. This still leaves the question, though: what is going on here? It seems that this may be more of a question regarding numerical analysis.
As stated by Peter, the matrix of the all-Neumann-boundary Poisson problem is indeed singular, and therefore the MATLAB backslash operator, which attempts to use direct solvers, cannot deal with it. Conjugate Gradient, on the other hand, can; you just need to, as you have, ensure the mean of f equals 0 before handing it to e.g. PCG.
I would like to stress that arbitrarily replacing a Neumann boundary with a fixed (arbitrary) valued Dirichlet boundary is not the correct way of dealing with the problem.
% are incorporated into the RHS. For plotting values need to be appended.
Here both the solutions and their first derivatives are very different, especially near where the boundary condition of the problem has been changed.
There are many different ways of dealing with singular matrices, but using an arbitrary Dirichlet boundary condition is not one. For the problem at hand it is easiest to switch to an iterative solver like PCG and ensure Discrete Compatibility.
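To make this concrete, here is a small Python/NumPy sketch (my own illustration, not the original MATLAB code) of a pure-Neumann 1D Poisson problem, $-u'' = f$ with $u'(0) = u'(1) = 0$, solved with conjugate gradients after enforcing the zero-mean compatibility condition:

import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

n = 100
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)

# Symmetric Neumann Laplacian: singular by design, null space = constant vectors.
main = np.full(n, 2.0)
main[0] = main[-1] = 1.0
A = diags([-np.ones(n - 1), main, -np.ones(n - 1)], [-1, 0, 1]).tocsr() / h**2

f = np.cos(np.pi * x)    # a compatible right-hand side
f = f - f.mean()         # enforce discrete compatibility: sum(f) = 0 exactly

u, info = cg(A, f, atol=1e-12)   # CG handles the consistent singular SPD system
u = u - u.mean()                 # pick the zero-mean representative of the solution family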
First a short answer - it did not work because your matrix is singular. The matrix is singular because you did not do all the steps necessary for the case of a zero Neumann boundary condition.
A longer answer - I guess your expectation that it should work (I might be wrong) was due to the fact that the mean of the right-hand side is in your case (numerically almost) zero. There is a step in this kind of problem where the mean of the right-hand side is subtracted from it, but this alone is not enough, and in your case it is not necessary because it is already equal to 0.
In fact, the zero mean of $f$ (or zero sum of entries in your $f$) means in your case that there exists a solution to your problem, in fact, there are infinitely many solutions. You still have to do some additional work to find at least one solution from those infinitely many ones.
Let me first note how to see easily that your matrix is singular. I mean the matrix in your output from MATLAB, i.e. without the first and last rows and columns that contain zero entries only. If you sum all rows in your matrix $A$ you get a row with zeros only. This can happen only for a singular matrix. In fact it means that you can delete e.g. the last row in your matrix, because it is redundant: it can be expressed from the other rows.
Luckily, if you do the same with the right-hand side $f$ then you also get zero (which also means the mean of $f$ is zero). If this were not the case, your system would have no solution (a brief explanation - it would be like having a zero row in the matrix with a nonzero right-hand side; such an equation cannot be fulfilled).
So now, what to do to find at least one representative solution? In general you must add some additional equation that makes the solution unique. It means you have to add one row to the matrix and one corresponding right-hand side entry.
One option you mentioned in your question (and it is the most typical one) is to prescribe an arbitrary value for, e.g., the last unknown (for which in fact you erased the row in $A$). It is as if you replace the zero Neumann boundary condition with a Dirichlet one. So it amounts to adding the row $(0 \, \ldots \, 0 \, 1)$ to replace the erased one.
Another one is to require that the mean of the unknowns has a prescribed value, e.g. zero. It is as if you add a row with ones only, i.e. $(1 \, \ldots \, 1)$, to replace the erased row. That is one reason why this option is not very popular: it changes the (sparsity) structure of your matrix.
Finite difference Neumann boundary conditions: uneven weighting of edge nodes? | CommonCrawl |
We draw at random a number in interval [0,1] such that each number is "equally likely". Suppose we do the experiment two times (independently), giving us two numbers in [0,1]. What is the probability that the sum of these numbers is greater than 1/2?
My understanding is that since the experiments are independent, I am able to multiply the probability of each experiment with each other directly.
My attempt at this was to calculate probability of each experiment where x (the random number) is less than 1/4, thus giving me the probability of 1/16 when I multiply them. This would imply that there is a 15/16 chance my sum is greater than 1/2. However the answer is wrong since it is supposed to be 7/8.
Choosing two numbers randomly (uniformly, independently) in the unit interval $[0,1]$ is the same as choosing a single point uniformly in the unit square $[0, 1]\times [0,1]$, and looking at its first and second coordinate.
Now, take a look at that square (you can even draw it, if you want). See if you can tell which points are such that their two coordinates add up to more than $\frac12$. (The line $x+y = \frac12$ is very relevant here, because it consists of the points where the two coordinates add up to exactly $\frac12$. That is what $x + y = \frac12$ really means, after all. You can draw that too to help you.) What's the area of that region?
As an extra exercise, you tried looking at the region where both the variables were smaller than $\frac14$. Can you draw that into the square? Can you see why your answer was wrong?
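A quick Monte Carlo check (my own sketch, not part of the original answer) confirms the 7/8 value:

import numpy as np

rng = np.random.default_rng(0)
x, y = rng.random(10_000_000), rng.random(10_000_000)
print((x + y > 0.5).mean())   # prints approximately 0.875 = 7/8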
What is the probability that the product of $20$ random numbers between $1$ and $2$ is greater than $10000$?
what's the probability of a number getting picked from a pool fewer than 3 times?
Probability question on outcome without replacement, selecting $k$ balls with a different number. | CommonCrawl |
It got very easy to do Machine Learning: you install a ML library like scikit-learn or xgboost, choose an estimator, feed it some training data, and get a model which can be used for predictions.
Ok, but what's next? How would you know if it works well? Cross-validation! Good! How would you know that you haven't messed up the cross validation? Are there data leaks? If the quality is not good enough, how to improve it? Are there data preprocessing errors or other software bugs? ML systems are notably good at hiding bugs - they can adapt, so often in case of bug there is a small quality drop, but the system as a whole still works. Should we put the model to production? Can the model be trusted to do reasonable things? Are there pitfalls? How to convince others the system works in a reasonable ways?
There is no silver bullet; I don't know a true answer for these questions. But understanding of how the model "thinks" - what its decisions are based on - should be a big part of the answer.
AI-powered robots haven't conquered us (yet), so let's start with a 19th century Machine Learning method: Linear Regression.
Most people agree that $price = 1.5 \times radius + 0.4 \times salami + 0.1 \times tomato$ is pretty understandable. Looking at coefficients of a Linear Regression can be enough for humans to see what's going on. For example, we can see that in some sense salami is more important than tomatoes - salami count is multiplied by 0.4, while tomato count is multiplied only by 0.1; this can be useful information. We can also see how much adding a piece of salami or increasing radius by 1cm affects the price.
There are caveats though. If scales of features are not the same then comparing coefficients can be misleading - maybe there are 25 tomato slices on average, and only 2 salami slices on average (weird 19th century pizzas!), and so tomatoes contribute much more to the price than salami, despite their coefficient being lower. It is also obvious that radius and salami coefficients can't be compared directly. Another caveat is that if features are not independent (e.g. there is always an extra tomato per salami slice), interpreting coefficents gets trickier.
One more observation is that to explain the behavior we didn't care how to train the model (how we came up with radius/salami/tomato coefficients), we only needed to know the final formula (algorithm) used at prediction time. It means that we can look at Ridge or Lasso or Elastic Net regression the same way, as they are the same at prediction time.
That said, understanding of the training process can be important for understanding behavior of the system. For example, if two features are correlated, Ridge regression tend to keep both features, but set lower coefficients for them, while Lasso may eliminate one of the features (set its weight to zero) and use a high coefficient for the remaining feature. It means that e.g. in Ridge regression you're more likely to miss an important feature if you look at top coefficients, and in Lasso a feature can get a zero weight even if it is almost as important as the top feature.
So, there are two lessons. First, looking at coefficients is still helpful, at least as a sanity check. Second, it is good to understand what you're looking at, because there are caveats.
It is no longer 19th century: we don't have to walk around beautiful Italian towns, drink coffee and eat pizzas to collect a dataset, we can now go to the Internet. Likewise, for Linear Regression we can use libraries like numpy or scikit-learn instead of visiting a library, armed with quill and paper.
The result is still scary, but at least we can check if a feature contributes positively or negatively to a price. For example, CRIM (crime level) is a negative factor, while CHAS (if a river is nearby) is a positive factor. It is not possible to compare coefficients directly because scales of features are different; we may normalize data to make scales comparable using e.g. preprocessing utilities from scikit-learn - try it yourselves.
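For reference, a minimal sketch of what that inspection looks like (my own code; load_boston ships with older scikit-learn versions and is deprecated in recent ones):

from sklearn.datasets import load_boston
from sklearn.linear_model import LinearRegression

data = load_boston()
reg = LinearRegression().fit(data.data, data.target)

# Pair each feature name with its learned coefficient, largest magnitude first.
for name, coef in sorted(zip(data.feature_names, reg.coef_), key=lambda t: -abs(t[1])):
    print(name, round(coef, 3))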
To make inspecting coefficients easier we created eli5 Python library. It can do much more than that, but it started from a snippet similar to a snippet above, which we were copy-pasting across projects.
It shows the same coefficients, but there is also a "<BIAS>" feature. We forgot about it when writing the ``get_formula`` snippet: LinearRegression by default creates a feature which is 1 for all examples (it is called "bias" or "intercept"); its weight is in the ``reg.intercept_`` attribute.
So, the lesson here is that machine learning libraries like scikit-learn expose coefficients of trained ML models; it is possible to inspect them, and eli5 library makes it easier.
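Continuing the sketch above, the eli5 call itself is a one-liner (argument name as I recall it; check the eli5 docs for the exact signature):

import eli5
eli5.show_weights(reg, feature_names=list(data.feature_names))  # includes the <BIAS> row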
Find some keywords specific for categories. For example, 'computer', 'graphics', 'photoshop' and '3D' for computer graphics, and 'kidney', 'treatment', 'pill' for medicine.
Count how many of the keywords from each set are in the document. Category which gets more keywords wins.
We can write it this way: $y = computer + graphics + photoshop - kidney - treatment - pill$; if $y > 0$ then the text is about computer graphics; if it is less than zero then we have a medical document.
Many smart people are lazy, so they likely won't be fond of idea of adjusting all these coefficients by hand. A better idea could be to take documents about CG and documents about medicine, then write a program to find best coefficients automatically.
It already starts to look suspiciously similar to the pizza Linear Regression formula, doesn't it? The difference is that we are not interested in the ``y`` value per se, we want to know if it is greater than zero or not.
Such "y" function is often called a "decision function": we compute "y", check if it is greater or less than zero, and make a yes/no decision. And in fact, this is a very common approach: for example, at prediction time linear Support Vector Machine (SVM) works exactly as our "y" function. Congratulations - now you know how linear SVMs work at prediction time! If you look at coefficients of a linear SVM classifier applied for text classification using "bag-of-words" features (similar to what we've done), then you'll be looking at the same weights as in our example. There is a weight (coefficient) per word, to do prediction linear SVM computes weighted sum of tokens present in a document, just like our "y" function, and then the result is compared to 0.
We may also notice that a larger "y" (positive or negative) means that we're certain a document is about CG or about medicine (it has more relevant keywords), while ``y`` close to zero means we either don't have enough information, or the keywords cancel each other out.
Let's say we calculated "y" and got the value 2.5. What does $y=2.5$ mean? To make "y" easier to interpret it'd be nice for it to be in the range from 0 to 1 - this way we can think about it as a probability. For example, when the keyword sum is a very large negative number, "y" could be close to 0 (the probability of the document being a CG document is close to 0), when there is no information "y" could be 0.5, and when the sum is a large positive number "y" could be close to 1.0.
If we squash the weighted sum with the logistic (sigmoid) function, $p = 1/(1+e^{-y})$, then we get a Machine Learning model called Logistic Regression. Congratulations - you now know how Logistic Regression works at prediction time!
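In code the whole prediction step is just a weighted sum plus a squashing function; a tiny sketch with made-up keyword weights:

import numpy as np

weights = {"computer": 1.0, "graphics": 1.0, "photoshop": 1.0,
           "kidney": -1.0, "treatment": -1.0, "pill": -1.0}

def prob_computer_graphics(tokens):
    y = sum(weights.get(tok, 0.0) for tok in tokens)   # linear decision function
    return 1.0 / (1.0 + np.exp(-y))                    # logistic squashing into (0, 1)

print(prob_computer_graphics("the new graphics card renders photoshop filters".split()))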
Note that at prediction time Logistic Regression and Linear SVM do exactly the same if you only need yes/no labels and don't need probabilities. But they still differ in how weights are selected during the training, i.e. for the same training data you'll get different weights (and so different predictions). Logistic Regression chooses best weights for good probability approximation, while Linear SVM chooses weights such as that decisions are separated by a larger margin; it is common to get a tiny bit higher yes/no accuracy from a linear SVM, but linear SVMs as-is won't give you a probability score.
Now as you know how Logistic Regression and linear SVMs work and what their coefficients mean, it is time to apply them to a text classification task and check how they are making their predictions. These simple linear models are surprisingly strong baselines for text classification, and they are easy to inspect and debug; if you have a text classification problem it is a good idea to try text-based features and a linear model first, even if you want to go fancy later.
Scikit-Learn docs have a great tutorial on text processing using bag-of-words features and simple ML models. The task in the tutorial is almost the same as in our example: classify a text message as a message about computer graphics, medicine, atheism or Christianity. This tutorial uses 4 possible classes, not two. We only discussed how to classify a text document into two classes (CG vs medicine), but don't worry.
A common way to do multi-class classification (and the way which is used by default in most of scikit-learn) is to train a separate 2-class classifier per each class. So under the hood there will be 4 classifers: CG / not CG, medicine / not medicine, atheism / not atheism, Christianity / not Christianity. Then, at prediction time, all four classifiers are employed; to get a final answer highest-scoring prediction among all classifiers is used.
It means that instead of inspecting a single classifier we'll be inspecting 4 different classifiers which work together to get us an answer.
The final model showed in the tutorial is a linear SVM trained on TF*IDF bag-of-words features using SGD training algorithm. We already know how a linear SVM works at prediction time, and we don't care about training algorithm.
TF*IDF bag-of-words features are very similar to "bag-of-words" features we used before - there is still a coefficient per word. The difference is that instead of counting words or simply checking if a word is in a document, a more complex approach is used: words counts are now normalized according to document length, and the result is downscaled for words that occur in many documents (very common words like "he" or "to" are likely to be irrelevant).
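The pipeline from the tutorial boils down to something like the following sketch (hyperparameters simplified, so exact accuracy numbers may differ from the ones quoted below):

from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import make_pipeline

categories = ['alt.atheism', 'comp.graphics', 'sci.med', 'soc.religion.christian']
train = fetch_20newsgroups(subset='train', categories=categories)

pipe = make_pipeline(
    TfidfVectorizer(),
    SGDClassifier(loss='hinge', random_state=0),   # a linear SVM trained with SGD
)
pipe.fit(train.data, train.target)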
Here we have much more parameters than in previous examples - a parameter per word per class; there are 4 classes and 20K+ words, so looking at all parameters isn't feasible. Instead of displaying everything eli5 shows only parameters with largest absolute values - these parameters are usually more important (of course, there are caveats).
We can see that a lot of these words make sense - "atheism" is a relevant word for atheism-related messages, "doctor" is a good indicator that a text is a medical text, etc. But, at the same time, some of the words are surprising: why do "keith" and "mathew" indicate a text about atheism, and "pitt" indicates a medical text? It doesn't sound right, something is going on here.
> exterminated through out history, so I guess it was not unusual.
> be considered a "fair" trial by US standards.
Aha, we have messages as training examples, and some guy named Mathew wrote some of them. His name is in the message header (From: mathew...), and in the message footer. So instead of focusing on message content, our ML system found an easier way to classify messages: just remember person names and email addresses of notable message authors. It may depend on a task, but most likely this is not what we wanted model to learn. Likely we wanted to classify message content, not message authors.
It also means that likely our accuracy scores are too optimistic. There are messages mentioning Mathew both in training and testing part, so the model can use message author name to get score points. A model which thinks "Oh, this is my old good friend Mathew! He only talks about atheism, I don't care much about what he's saying" can still get some accuracy points, even if it does nothing useful for us.
A lesson learned: by inspecting model parameters sometimes it is possible to check if the model is solving the same problem as we think.
It doesn't make sense to try more advanced models or tune parameters of the current model at this point: it looks like there is a problem in task specification, and evaluation setup is also not correct for the task we're solving (assuming we're interested in message texts).
So it could give us at least two ideas: 1) probably we could get a better train/test split for the data if messages by the same author (or mentioning the same author, e.g. via replying) only appear either in train or in test part, but not in both; 2) to train an useful classifier on this data it could make sense to remove message headers, footers, quoting, email addresses, to make model focus on message content - such model could be more useful on unseen data.
But does the model really only care about Mathew in the example? Until now, we were checking model coefficients; it allows us to get some general feeling of how the model works. But this method has a downside: it is not obvious why a decision was made on a concrete example.
A related downside is that coefficients depend on feature scales; if features use different scales we can't compare coefficients directly. While indicator bag-of-word features (1 if a word is in a document and 0 otherwise) use the same scale, with TF*IDF features input values are different for different words. It means that for TF*IDF a coefficient with top weight is not necessarily the most important, as in the input data word weight could be low because of IDF multiplier, and a high coefficient just compensates this.
We only looked at coefficients for words, but we haven't checked which words are in the document, and what values the coefficients are multiplied by. Previously we were looking at something like $y = 2.0 \times atheism + 1.9 \times keith + 1.4 \times mathew + \ldots$ (for all possible words), but for a concrete example the values of "mathew" and "from" are known - they could be raw word counts in the document, or 0/1 indicator values, or TF*IDF weighted counts, as in our example, and the list of words is much smaller - for most of the words the value is zero.
Green highlighting means positive contribution, red means negative.
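In eli5 this per-document view comes from explain_prediction / show_prediction; roughly (a sketch reusing the pipeline above, argument names from memory):

import eli5
clf = pipe.named_steps['sgdclassifier']
vec = pipe.named_steps['tfidfvectorizer']
eli5.show_prediction(clf, train.data[0], vec=vec, target_names=train.target_names)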
It seems the classifier still uses words from message text, but names like "Mathew", email addresses, etc. look more important for a classifier. So yeah, even without author name classifier likely makes a correct decision for this example, but it focuses mostly on wrong parts of the message.
Model is no longer able to use author names, emails, etc.; it must learn how to distinguish messages based only on text content, which is a harder (and arguably a more realistic) task.
We've removed some useful information as well, e.g. message subject or text of quoted messages. We should try to bring this information back, but we need to be very careful with evaluation: for example, messages from the test set shouldn't quote messages from train set, and vice versa.
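For this particular dataset the simplest way to do that preprocessing is already built into scikit-learn (the standard option, not necessarily exactly what was used here):

train = fetch_20newsgroups(subset='train', categories=categories,
                           remove=('headers', 'footers', 'quotes'))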
Preprocessing helped - all (or most) of author names are gone, and feature list makes more sense now.
Some of the features still look strange though - why is "of" the most negative word for computer graphics documents? It doesn't make sense. "Of" is just a common word which appears in many documents. Probably, all other things equal, a document is less likely to be a computer graphics document, and the model learned to use a common, "background" word "of" to encode this information.
A classical approach for improving text classification quality is to remove "stop words" from text - generic words like "of", "it", "was", etc., which shouldn't be specific to a particular topic. The idea is to make it easier for model to learn something useful. In our example we'd like the model to focus more on the topic-specific words and use a special "bias" feature instead of relying on these "background" words.
Mmm, it looks like many "background" words are no longer highlighted, but some of them still are. For example, "don" in "don't" is green, and "weren" in "weren't" is also green. It looks suspicious, and indeed - we've spotted an issue with scikit-learn 0.18.1 and earlier: stop words list doesn't play well with the default scikit-learn tokenizer. Tokenizer splits contractions (words like "don't") into two parts, but stop words list doesn't include first parts of these contractions.
Accuracy improved a tiny bit - 0.820 instead of 0.819.
A lesson learned: by looking at model weights and prediction explanations it is possible to spot preprocessing bugs. This particular bug was there in scikit-learn for many years, but it was only reported recently, while it was easy for us to find this bug just by looking at "eli5.explain_prediction" result. If you're a reader from the future then maybe this issue is already fixed; examples use scikit-learn 0.18.1.
We may also notice that last parts of contractions ("t" in "don't") are not highlighted, unlike first parts ("don"). But "t" is not in the stop words list, just like "don". What's going on? The reason is that default scikit-learn tokenizer removes all single-letter tokens. This piece of information is not mentioned in scikit-learn docs explicitly, but the gotcha becomes visible if we inspect the prediction result. By looking at such explanations you may get a better understanding of how a library works.
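Both effects are easy to see directly by running the default analyzer on a contraction and checking the built-in stop word list (a sketch; the import path and the exact list contents vary a bit across scikit-learn versions):

from sklearn.feature_extraction.text import CountVectorizer, ENGLISH_STOP_WORDS

analyzer = CountVectorizer().build_analyzer()
print(analyzer("don't"))              # ['don'] - the single-letter "t" is dropped
print('don' in ENGLISH_STOP_WORDS)    # False in the affected versions - the mismatch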
Another lesson is that even with bugs the pipeline worked overall; there was no indication something is wrong, but after fixing the issue we've got a small quality improvement. Systems based on Machine Learning are notably hard to debug; one of the reasons is that they often can adapt to such software bugs - usually it just costs us a small quality drop. Any additional debugging and testing instrument is helpful: unit tests, checking of the invariants which should hold, gradient checking, etc.; looking at model weights and inspecting model predictions is one of these instruments.
So far we've only used individual words as features. There are other ways to extract features from text. One common way is to use "n-grams" - all subsequences of a given length. For example, in a sentence "The quick brown fox" word 2-grams (word bigrams) would be "The quick", "quick brown" and "brown fox". So instead of having a parameter per individual word we could have a parameter per such bigram. Or, more commonly, we may have parameters both for individual words and n-grams. It allows to "catch" short phrases - often only a phrase has a meaning, not individual words it consists of.
It is also possible to use "char n-grams" - instead of splitting text into words one can use a "sliding window" of a given length. "The quick brown fox" can be converted to a char 5-gram as "The q", "the qu", "he qui", etc. This approach can be used when one want to make classifier more robust to word variations, typos, and to make a better use of related words.
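Switching to char n-grams is a one-line change in the vectorizer (sketch; the exact n-gram range used in the post is a guess on my part):

vec_char = TfidfVectorizer(analyzer='char', ngram_range=(3, 5))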
N-grams are overlapping; individual characters are highlighted according to weights of all ngrams they belong to. It is now more clear which parts of text are important; it seems char n-grams make it all a bit more noisy.
By the way, haven't we removed stop words already? Why are they still highlighted? We're passing the stop_words argument to TfidfVectorizer as before, but it seems this argument does nothing now. And it indeed does nothing - scikit-learn ignores stop words when using char n-grams; this is documented, but still easy to miss.
So maybe char n-grams are not that much worse than words for this data - accuracy of a word-based model without stop words removal is similar (0.796 instead of 0.792). It could be the case that after removing stop words and tuning optimal SGDClassifier parameters (which are likely different for char-based features) we can get a similar or better quality. We still don't know if this is true, but at least after the inspection we've got some starting points.
got better understanding of how the whole text processing pipeline works.
scikit-learn docs and tutorials are top-notch; they can easily be the best among all ML software library docs and tutorials, and our findings don't change that. Such problems are common in the real world: small processing bugs, misunderstandings; there were similar data issues in every single real-world project I've worked on. Of course, you can't detect and fix all problems by looking inside models and their predictions, but with eli5 at least you have better chances of spotting such problems.
We've been using these techniques for many projects: model inspection is a part of data science work in our team, be it classification tasks, Named Entity Recognition or adaptive crawlers based on Reinforcement Learning. Explanations are not only useful for developers, they are helpful for users of your system as well - users get a better understanding of how a system works, and can either trust it more, or become aware of its limitations - see this blog post from our friends Hyperion Gray for a practical example.
eli5 library is not limited to linear classifiers and text data; it supports several ML frameworks (scikit-learn, xgboost, LightGBM, etc.) and implements several model explanation methods, both model-specific and model-agnostic. Library is improving; we're trying to get most proven explanation methods available in eli5. But even with all its features it barely scratches the surface; there is a lot of research going on, and it is exciting to see how this field develops.
| CommonCrawl |
(1) There exist uncountably many ergodic invariant probability measures.
(2) Atomic invariant measures are weak star dense in the set of all invariant probability measures.
The proof of (2) is contained in the much more general theorem due to Sigmund. This is because the main result of Sigmund "Generic Properties Of Invariant Measures for Axiom A-Diffeomorphisms" Inventiones Math. 11 (1970), pp. 99-109 applies. One may argue that Sigmund considers only toral automorphisms, but his reasoning is based on the fact that the maps he considers have the periodic specification property. The proof that periodic specification suffices for (2) to hold can be found in Ergodic theory on compact spaces (Volume 527 of Lecture notes in mathematics) by Manfred Denker, Christian Grillenberger, Karl Sigmund (Springer-Verlag, 1976). It is easy to see that the maps $x\mapsto mx$ ($m>1$) have the periodic specification property (a theorem of Blokh stating that topological mixing implies specification for maps on graphs can also be invoked, and the proof that $x\mapsto mx$ is topologically mixing is an exercise). In any case, much more is known and Ian Morris above is right - the idea follows Parthasaraty's article.
P.S. and (1) follows from another Sigmund paper ON THE CONNECTEDNESS OF ERGODIC SYSTEMS or can be deduced from the general result: if a nontrivial simplex of invariant measures (known to be a Choquet simplex; see the inverse problem for ergodic measures) has a dense set of ergodic measures (extreme points), then the set of extreme points must be arcwise connected and hence uncountable. This is a consequence of the Lindenstrauss, Olsen and Sternfeld result (The Poulsen simplex, Annales de l'institut Fourier 28.1 (1978): pp. 91-114).
What do singular, atomless invariant measures of $\times d$ look like?
weak star density of atomic invariant measures ? | CommonCrawl |
Hi everybody. I am new to sage. I want to construct rings, ideals, and quotient rings. I used Z.IntegerRing() to generate the ring of integers.
Question 1: Then I used I = Z.ideal(2) to get the ideal generated by 2 (the even numbers). Question: How can I display the elements? E.g. I(2) does not work for displaying the second element of the ideal.
Question 2: To generate the quotient ring Z/2Z I used S = Z.quotient_ring(I). What if I want to generate the quotient ring 2Z/6Z? S = I.quotient_ring(J) does not work (I = Z.ideal(2), J = Z.ideal(6)).
I think Z is defined as Z = IntegerRing(), not Z.IntegerRing().
There is no such type as "the type of elements of an ideal" or anything close to it.
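A small Sage session illustrating the point (a sketch: an ideal of ZZ has no indexed "second element", but you can look at generators, test membership and enumerate multiples yourself):

Z = IntegerRing()        # or simply ZZ
I = Z.ideal(2)
print(I.gens())          # (2,)
print(4 in I, 3 in I)    # True False
print([2*k for k in range(5)])   # list a few elements of I by hand
S = Z.quotient(I)        # the quotient ring Z/2Z
print(S)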
Question 2: I am not certain about this question, but I do not think you can define $2\mathbb Z/6\mathbb Z$ in SageMath. It is isomorphic to $\mathbb Z/3\mathbb Z$, which you can define. | CommonCrawl |
The d-wave vortex lattice state is studied within the framework of Bogoliubov-de Gennes (BdG) mean field theory. We allow antiferromagnetic (AFM) order to develop self-consistently along with d-wave singlet superconducting (dSC) order in response to an external magnetic field that generates vortices. The resulting AFM order has strong peaks at the vortex centers, and changes sign, creating domain walls along lines where $\nabla \times j_s \approx 0$. The length scale for decay of this AFM order is found to be much larger than the bare d-wave coherence length, $\xi$. Coexistence of dSC and AFM order in this system is shown to induce $\pi$-triplet superconducting order. Competition between different orders is found to suppress the local density of states at the vortex center and comparison to recent experimental findings is discussed. | CommonCrawl |
A new metro connection between Metsälä and Syrjälä has finally been opened. There are $n$ stations on the line; the first station is Metsälä and the last station is Syrjälä.
The first train travelled from Metsälä to Syrjälä. You know for each station the total number of passengers who either entered or left the train. Based on this information, your task is to find lower and upper bounds for the maximum number of passengers in the train between two stations.
The train was empty at the beginning and end of the journey, and no passenger left the train immediately after entering it at the same station.
The first input line contains an integer $n$: the number of stations.
The next line has $n$ integers $x_1,x_2,\ldots,x_n$: the measured number of passengers at each station.
Print two integers: the lower and upper bound for the maximum number of passengers.
You can assume that there is at least one way how the passengers could have acted.
Explanation: In the lower bound there are 5, 2 and 6 passengers between the stations. In the upper bound there are 5, 8 and 6 passengers between the stations. | CommonCrawl |
Is this what I am supposed to do? And at what locations do I put 0s to make the kernel 100x100?
Where the real part of the result is $ac-bd$, and the imaginary part is $ad+bc$.
Then you can take the inverse 2D-DFT.
Note that, for an alias-free circular convolution implementation, your DFT size should be at least $100 + (5-1)/2 = 102$ points per dimension, i.e. $102 \times 102$. Then, after the inverse DFT of $102 \times 102$ points, discard the first $2$ rows and columns and retain the remaining $100 \times 100$ part, which contains the samples of the filtered image.
You're trying to implement convolution through multiplication in frequency domain.
you need to do complex multiplication. Not "real or imaginary or magnitude", but complex multiplication; your signal and your filter are complex-valued, and there's as much information in the real part as in the imaginary part, and you mustn't drop that information.
What you'll do is cyclic convolution, because for the discrete Fourier transform, multiplication in frequency domain is equivalent to cyclic convolution in time domain; that's something different than what's normally called "convolution". Again, if you need that acyclic convolution, you'll have to implement segmented convolution (look for "overlap-add" and "overlap-save" methods. You'll find them for 1D signals everywhere, for 2D, too, likely, but understand the 1D case first!); if you then implement the DFT by an FFT, you get what we call "fast convolution"; maybe that's the term you want to google for.
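A NumPy sketch of the zero-padded frequency-domain convolution discussed above (my own illustration; here I pad to the full linear-convolution size 104 = 100 + 5 - 1, which is the simplest alias-free choice and differs slightly from the 102-point scheme in the other answer):

import numpy as np
from scipy.signal import convolve2d

img = np.random.rand(100, 100)
ker = np.random.rand(5, 5)

N = 104                                   # >= 100 + 5 - 1 avoids circular wrap-around
F = np.fft.fft2(img, s=(N, N)) * np.fft.fft2(ker, s=(N, N))   # complex multiplication
full = np.real(np.fft.ifft2(F))           # linear convolution recovered from the product

ref = convolve2d(img, ker, mode='full')   # direct convolution for comparison
print(np.allclose(full, ref))             # True (up to floating-point error)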
How does subpixel image shifting using DFT really work? | CommonCrawl |
The goal is to make slice pointfree, i.e. write it as slice = ..., and thereby illustrate the systematic approach for doing so.
where $\alpha$ is a parameter and $F$ is a function definition for the function f that does not contain $\alpha$.
The next parameter which we want to remove is called len. It appears in $(2)$ as an argument to take. Since take len is the first argument to (.), but we need it on the right-most side, we are going to apply flip :: (a -> b -> c) -> b -> a -> c to (.). Afterwards the function composition can be explicitly stated, as we did when transforming from $(1)$ to $(2)$. | CommonCrawl |
Han, X.Y.;Xu, Z.R.;Wang, Y.Z.;Tao , X.;Li, W.F.
This experiment was conducted to investigate the effect of cadmium levels on weight gain, nutrient digestibility and the retention of iron, copper and zinc in tissues of growing pigs. A total of one hundred and ninety-two crossbred pigs (barrows, Duroc$\times$Landrace$\times$Yorkshire, 27.67$\pm$1.33 kg average initial body weight) were randomly allotted to four treatments. Each treatment had three replicates with 16 pigs per pen. The corn-soybean basal diets were supplemented with 0, 0.5, 5.0, 10.0 mg/kg cadmium respectively, and the feeding experiment lasted for eighty-three days. Cadmium chloride was used as the cadmium source. The results showed that pigs fed the diet containing 10.0 mg/kg cadmium had lower ADG and FCR than any other treatment (p<0.05). Apparent digestibility of protein in the 10.0 mg/kg cadmium-treated group was lower than that of the other groups (p<0.05). There was lower iron retention in some tissues of the 5.0 mg/kg and 10.0 mg/kg cadmium treatments (p<0.05). However, pigs fed the diet containing 10.0 mg/kg cadmium had higher copper content in most tissues than any other group (p<0.05). There was a significant increase in zinc retention in the kidney of the 10.0 mg/kg cadmium-supplemented group (p<0.05), and zinc concentrations in the lymphaden, pancreas and heart of the 10.0 mg/kg cadmium treatment were lower than those of the control (p<0.05). This study indicated that a relatively high cadmium level (10.0 mg/kg) could decrease pig growth performance and change the retention of iron, copper and zinc in most tissues during an extended cadmium exposure period.
| CommonCrawl |
Can we apply the Newton-Raphson method treating $i$ as a constant, or do we have to substitute $x=a+ib$ and solve two simultaneous equations?
As Robert Israel answered, the work to be done is the same and the same problems are faced, in particular the choice of $x_0$.
Newton-Raphson is exactly the same for equations involving complex numbers. You just have to do the arithmetic using complex numbers.
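A short Python sketch of Newton-Raphson run directly in complex arithmetic (my own example equation, $z^3 = 1$, with a complex starting point):

def newton(f, df, x0, tol=1e-12, max_iter=100):
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)      # ordinary complex division
        x -= step
        if abs(step) < tol:
            break
    return x

root = newton(lambda z: z**3 - 1, lambda z: 3 * z**2, x0=-1 + 1j)
print(root)   # converges to one of the complex cube roots of unity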
| CommonCrawl |
This question is similar to Leak Information?, but with new settings.
Let $a,b,c,d,e,f$ be selected at random from $\mathbb Z^*_q$. Also, let $r_1,r_2,r_3,z_1,z_2,z_3,p_1,p_2,p_3$ be selected at random from $\mathbb Z^*_q$.
Does $(u_1=r_1\cdot x,\ \ w_1=r_1\cdot z, \ \ u_2=r_2\cdot x,\ \ w_2=r_2\cdot z)$ Leak Information? | CommonCrawl |
After this 3-tape Turing machine is constructed, we can convert it to a 1-tape Turing machine, if necessary.
Such a universal machine can be physically realized via hardware (like a PC, designed to execute $\delta$ at 3 gigahertz), and the machine code might then come from a computer program written by a 6-year-old girl from Ohio. This video is from a nerd who wanted to build himself a hands-on realization of a universal Turing machine.
The described process changes the runtime only as $\mathcal O\left(T(n)\right)\to \mathcal O\left(\lceil\log T(n)\rceil\cdot T(n)\right)$. | CommonCrawl |
Abstract: We study the semileptonic differential decay rates of $B_c$ meson to S-wave charmonia, $\eta_c$ and $J/\Psi$, at the next-to-leading order accuracy in the framework of NRQCD. In the heavy quark limit, $m_b \to \infty$, we obtain analytically the asymptotic expression for the ratio of NLO form factor to LO form factor. Numerical results show that the convergence of the ratio is perfect. At the maximum recoil region, we analyze the differential decay rates in detail with various input parameters and polarizations of $J/\psi$, which can now be checked in the LHCb experiment. Phenomenologically, the form factors are extrapolated to the minimal recoil region, and then the $B_c$ to charmonium semileptonic decay rates are estimated. | CommonCrawl |
A method of proving mathematical results based on the principle of mathematical induction: An assertion $A(x)$, depending on a natural number $x$, is regarded as proved if $A(1)$ has been proved and if for any natural number $n$ the assumption that $A(n)$ is true implies that $A(n+1)$ is also true.
The proof of $A(1)$ is the first step (or base) of the induction and the proof of $A(n+1)$ from the assumed truth of $A(n)$ is called the induction step. Here $n$ is called the induction parameter and the assumption of $A(n)$ for the proof of $A(n+1)$ is called the induction assumption or induction hypothesis. The principle of mathematical induction is also the basis for inductive definition. The simplest example of such a definition is the definition of the property: "to be a word of length $n$ over a given alphabet $a_1,\ldots,a_k$".
The base of the induction is: Each symbol of the alphabet is a word of length 1. The induction step is: If $E$ is a word of length $n$, then each word $Ea_i$, where $1\leq i\leq k$, is a word of length $(n+1)$. Induction can also start at the zero-th step.
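As a concrete illustration (a standard textbook example, not part of the original article), consider the assertion $A(n)$: $1+2+\cdots+n = n(n+1)/2$. The base of the induction holds, since $1 = 1\cdot 2/2$. For the induction step, assume $A(n)$; then $1+2+\cdots+n+(n+1) = n(n+1)/2 + (n+1) = (n+1)(n+2)/2$, which is exactly $A(n+1)$. By the principle of mathematical induction, $A(n)$ is true for every natural number $n$.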
It often happens that $A(1)$ and $A(n+1)$ can be proved by similar arguments. In these cases it is convenient to use the following equivalent form of the principle of mathematical induction. If $A(1)$ is true and if for each natural number $n$ from the assumption: $A(x)$ is true for any natural number $x<n$, it follows that $A(x)$ is also true for $x=n$, then $A(x)$ is true for any natural number $x$. In this form the principle of mathematical induction can be applied for proving assertions $A(x)$ in which the parameter $x$ runs through any well-ordered set of a transfinite type (transfinite induction). As simple examples of transfinite induction one has induction over a parameter running through the set of all words over a given alphabet with the lexicographic ordering, and induction over the construction formulas in a given logico-mathematical calculus.
Sometimes, for the inductive proof of an assertion $A(n)$ one has to inductively prove, simultaneously with $A(n)$, a whole series of other results without which the induction for $A(n)$ cannot be carried out. In formal arithmetic one can also give results $A(n)$ for which, within the limits of the calculus considered, an induction can not be carried out without the addition of new auxiliary results depending on $n$ (see ). In these cases one has to deal with the proof of a number of assertions by compound mathematical induction. All these statements could formally be united into one conjunction; however, in practice this would only complicate the discussion and the possibility of informal sensible references to specific induction assumptions would disappear.
In some concrete mathematical investigations the number of notions and results defined and proved by compound induction has reached three figures (see ). In this case, because of the presence in induction of a large number of cross references to the induction assumptions, for a concise (informal) understanding of any (even very simple) definition or results for a large value of the induction parameter, the reader must be familiar with the content of all induction ideas and properties of these ideas for small values of the induction parameter. Apparently, the only logically correct outcome of this circle of problems is the axiomatic presentation of all these systems of ideas. Thus, a great number of ideas defined by compound mathematical induction lead to the need for an application of the axiomatic method in inductive definitions and proofs. This is a visual example of the necessity of the axiomatic method for the solution of concrete mathematical problems, and not just for questions relating to the foundations of mathematics.
In the article above, the natural numbers are $1,2,\ldots$ (i.e. excluding $0$).
| CommonCrawl |
Bondarev B. V., Zhmykhova T. V.
A problem of calculating the probability of ruin of an insurance company over an infinite number of steps is considered in the case where the company is able to invest its capital in a bank deposit at any time. As the distribution describing claim amounts to the insurance company, the gamma distribution with parameters $n$ and $\alpha$ is chosen.
English version (Springer): Ukrainian Mathematical Journal 59 (2007), no. 4, pp 500-512.
Citation Example: Bondarev B. V., Zhmykhova T. V. Evaluation of the probability of bankruptcy for a model of insurance company // Ukr. Mat. Zh. - 2007. - 59, № 4. - pp. 447–457. | CommonCrawl |
Are there too many 8-digit primes $p$ for Mersenne primes $M_p$?
I put $12?$ as there could still be some $8$-digit $p$ for which $M_p$ is prime. So my question is, isn't that $12$ out of place? Or does the LPW conjecture expect large deviations from $5.92$?
It has not been verified whether there are more such $p$ between $3.7\times10^7$ and $7.7\times10^7$, so some of these counts may change.
The red count for $8$ digits in base $12$ is still incomplete. Since this range roughly overlaps with $10^7$–$10^9$, $L(10)$ suggests there will be a total of about $12$ such $p$ there, more or less. If not, this nice increasing pattern will be ruined.
Inline formulas are surrounded by single dollar signs. For example, $f(x) = ax^2 + bx + c$ renders as $f(x)=ax^2+bx+c$.
Greek letters are mostly written by simply spelling them after a backslash with capitalization indicated by first letter. For example, $\alpha \beta \Gamma \Delta$ renders as $\alpha \beta \Gamma \Delta$.
For a quick tutorial and command reference, please see our MathJax quick reference page. See also our additional pointers and references. | CommonCrawl |
Comment. The trace of an endomorphism $\alpha$ of a finite-dimensional vector space $V$ over the field $k$ may be defined as the trace of any matrix representing it...
To understand total variation we first must find the trace of a square matrix. A square matrix is a matrix that has an equal number of columns and rows. Important examples of square matrices include the variance-covariance and correlation matrices.
4/09/2014 · Trace of a matrix and its properties explained. To ask your doubts on this topic and much more, click here: http://www.techtud.com/video-lecture/lecture-trace
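Concretely, the trace of a square matrix is just the sum of its diagonal entries,
$$\operatorname{tr}\begin{pmatrix} a_{11} & a_{12}\\ a_{21} & a_{22}\end{pmatrix} = a_{11} + a_{22},$$
so for a variance-covariance matrix $\Sigma$ the total variation in this sense is $\operatorname{tr}(\Sigma)=\sum_i \sigma_{ii}$, the sum of the individual variances.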
The bounty I placed on this question expires in the next 24 hours.
I have a psychological data set which, traditionally, would be analysed using a paired samples t test. The design of the experiment is $39\ \text{(subjects)} \times 7\ \text{(targets)} \times 2\ \text{(conditions)}$, and I'm interested in the difference in a given variable between the conditions.
The traditional approach has been to average across targets so that I have 2 observations per participant, and then compare these averages using a paired t test.
I wanted to use a mixed-models approach, as has become increasingly popular in this field (e.g. Baayen, Davidson & Bates, 2008), and so the first model I fit, which I thought should approximate the results of the t test, was one with $\mathit{condition}$ as a fixed effect and random intercepts for $\mathit{subjects}$ (i.e. $\mathit{var} = \alpha + \beta \cdot \mathit{condition} + \mathrm{Intercept}(\mathit{subject}) + \epsilon$). Obviously, the full model would also include random intercepts for $\mathit{targets}$.
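Written out, that first model is
$$y_{ij} = \alpha + \beta\,\mathrm{condition}_{ij} + u_j + \varepsilon_{ij}, \qquad u_j \sim N(0,\sigma_u^2), \quad \varepsilon_{ij} \sim N(0,\sigma^2),$$
with $j$ indexing subjects and $i$ the observations within a subject; the full model adds a second, crossed random intercept for targets.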
However, I'm struggling to understand why I achieve pretty divergent results between the two approaches. Can anyone explain what's going on here? I've also seen (what I understand to be) a similar question asked here, with an answer about correlation structure which I'm not equipped to understand. If this is also what's at issue here, I would appreciate if anyone could suggest some resources to read up on this.
Edit: I've posted the example data, and R script, here.
I'm only analysing the correct responses (think of it as analogous to reaction time), so there are missing cases - not every participant provides 7 data points per condition.
When I analyse all responses, rather than just the correct ones, the difference between the two results is reduced, but not eliminated. This suggests to me that the missing cases are a factor here.
The variable isn't normally distributed. In my final model, I scale it using a Box-Cox transformation, but I omit that here for consistency with the t test.
As pointed out by @PeterFlom, the $df$s differ hugely between the two approaches, but I assume this to be because the t test is being applied to the aggregate data (2 observations per participant, 1 per condition), while the mixed model is applied to raw scores ($<14$ observations per participant, $<7$ per condition).
@BenBolker notes that the t values also differ pretty considerably.
My analysis code is below.
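(Stripped down to a sketch rather than the full script: assume a long-format data frame dat with columns var, condition, subject and target; the mixed models use lme4, and the p values for the fixed effect come from lmerTest or a likelihood-ratio test rather than from summary() alone.)

```r
library(lme4)

## (1) Traditional approach: average over targets, then a paired t test
agg  <- aggregate(var ~ subject + condition, data = dat, FUN = mean)
agg  <- agg[order(agg$condition, agg$subject), ]   # align subjects across the two conditions
cond <- split(agg$var, agg$condition)              # one vector of subject means per condition
t.test(cond[[1]], cond[[2]], paired = TRUE)

## (2) Mixed model on the raw (unaveraged) scores, random intercept for subject only
m1 <- lmer(var ~ condition + (1 | subject), data = dat)
summary(m1)

## (3) Full model: crossed random intercepts for subject and target
m2 <- lmer(var ~ condition + (1 | subject) + (1 | target), data = dat)
summary(m2)
```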
Notice, specifically, the jump in the p value from $p = .188$ in the t test, to $p = .021$ from either lmer method.
I've tried, and failed, to provide a reproducible example of this using the anorexia dataset in the MASS package, so I assume the problem is something idiosyncratic to my data, but I don't understand what.
What happens when you combine Relational Databases, Logic, and Machine Learning?
Answer: Statistical Relational Learning. Maybe I can get the book for Christmas.
Unfortunately, when you have 30 full day workshops in a two day period, you miss most of them. I could only attend the three listed above. There were many other great ones.
In "Semantic Hashing", Salakhutdinov and Hinton (2007) show how to classify documents with binary vectors. They combine deep learning and graphical models to assign each document a binary vector. Similar documents can be found by using the L1 difference between the binary vectors. Here is their abstract.
We show how to learn a deep graphical model of the word-count vectors obtained from a large set of documents. The values of the latent variables in the deepest layer are easy to infer and give a much better representation of each document than Latent Semantic Analysis. When the deepest layer is forced to use a small number of binary variables (e.g. 32), the graphical model performs "semantic hashing": Documents are mapped to memory addresses in such away that semantically similar documents are located at nearby addresses. Documents similar to a query document can then be found by simply accessing all the addresses that differ by only a few bits from the address of the query document. This way of extending the efficiency of hash-coding to approximate matching is much faster than locality sensitive hashing, which is the fastest current method. By using semantic hashing to filter the documents given to TF-IDF, we achieve higher accuracy than applying TF-IDF to the entire document set.
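As a toy illustration of that lookup step (random codes here, not the learned ones from the paper): treating each document's code as a 0/1 vector, the L1 distance is just the Hamming distance, and retrieval amounts to collecting everything within a small Hamming ball of the query.

```r
## Toy sketch: retrieve documents whose binary codes lie within a small
## Hamming ball of a query code (codes stored as a 0/1 matrix, one row per document).
set.seed(1)
codes <- matrix(rbinom(10 * 32, 1, 0.5), nrow = 10)   # 10 documents, 32-bit codes
query <- codes[1, ]
hamming <- rowSums(abs(sweep(codes, 2, query, "-")))  # L1 distance = Hamming distance for 0/1 codes
which(hamming <= 3)                                   # documents within 3 bit flips (includes the query itself)
```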
In the seminal paper "Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data", Lafferty, McCallum, and Pereira (2001) introduce a very popular type of Markov random field for segmentation. Conditional Random Fields (CRFs) are used in many fields including machine translation, parsing, genetics, and transmission codes. They can be viewed as an undirected analogue of hidden Markov models. The paper describes conditional random fields, provides an iterative method to estimate the parameters of the CRF, and reports experimental comparisons between CRFs, hidden Markov models, and maximum entropy Markov models.
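For the linear-chain case, the model has the familiar log-linear form
$$p(\mathbf{y}\mid \mathbf{x}) \;=\; \frac{1}{Z(\mathbf{x})}\,\exp\!\Bigl(\sum_{t}\sum_{k}\lambda_k\, f_k(y_{t-1}, y_t, \mathbf{x}, t)\Bigr),$$
where the $f_k$ are feature functions over adjacent labels and the observation sequence, the $\lambda_k$ are the weights that the iterative method estimates, and $Z(\mathbf{x})$ normalizes over all label sequences.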
The setup is a state-space model of the form
$$x_{n+1} = f_n(x_n) + v_n, \qquad y_n = G_n x_n + \epsilon_n,$$
where $x_n$ is the state, $f_n$ is the known state transition function, $y_n$ is a real-valued observation vector, $G_n$ is the known observation matrix (full rank), $v_n$ is the process noise vector (not necessarily Gaussian), and $\epsilon_n$ is the observation noise vector (once again, not necessarily Gaussian). The object of the game is to estimate $x_n$ given the transition functions, the observation matrices, and the observations $y_n$.
In particular, outside of the dynamic estimation regime, signal priors based on the notion of sparsity have led to state-of-the-art performance in many linear inverse problems (e.g., undersampling, inpainting, denoising, compressed sensing, etc.) across several different application domains (e.g. natural images, audio, hyperspectral imagery, etc.).
…way that is well-matched to the assumptions of the problem.
The paper includes several experimental results in section IV. | CommonCrawl |
Maria Udan-Johns, Rocio Bengoechea, Shaughn Bell, Jieya Shao, Marc I Diamond, Heather L True, Conrad C Weihl and Robert H Baloh.
Prion-like nuclear aggregation of TDP-43 during heat shock is regulated by HSP40/70 chaperones.. Human molecular genetics 23(1):157–70, January 2014.
Abstract TDP-43 aggregation in the cytoplasm or nucleus is a key feature of the pathology of amyotrophic lateral sclerosis and frontotemporal dementia and is observed in numerous other neurodegenerative diseases, including Alzheimer's disease. Despite this fact, the inciting events leading to TDP-43 aggregation remain unclear. We observed that endogenous TDP-43 undergoes reversible aggregation in the nucleus after the heat shock and that this behavior is mediated by the C-terminal prion domain. Substitution of the prion domain from TIA-1 or an authentic yeast prion domain from RNQ1 into TDP-43 can completely recapitulate heat shock-induced aggregation. TDP-43 is constitutively bound to members of the Hsp40/Hsp70 family, and we found that heat shock-induced TDP-43 aggregation is mediated by the availability of these chaperones interacting with the inherently disordered C-terminal prion domain. Finally, we observed that the aggregation of TDP-43 during heat shock led to decreased binding to hnRNPA1, and a change in TDP-43 RNA-binding partners suggesting that TDP-43 aggregation alters its function in response to misfolded protein stress. These findings indicate that TDP-43 shares properties with physiologic prions from yeast, in that self-aggregation is mediated by a Q/N-rich disordered domain, is modulated by chaperone proteins and leads to altered function of the protein. Furthermore, they indicate that TDP-43 aggregation is regulated by chaperone availability, explaining the recurrent observation of TDP-43 aggregates in degenerative diseases of both the brain and muscle where protein homeostasis is disrupted.
Bernadett Kalmar, Ching-Hua Lu and Linda Greensmith.
The role of heat shock proteins in Amyotrophic Lateral Sclerosis: The therapeutic potential of Arimoclomol.. Pharmacology & therapeutics 141(1):40–54, January 2014.
Abstract Arimoclomol is a hydroxylamine derivative, a group of compounds which have unique properties as co-inducers of heat shock protein expression, but only under conditions of cellular stress. Arimoclomol has been found to be neuroprotective in a number of neurodegenerative disease models, including Amyotrophic Lateral Sclerosis (ALS), and in mutant Superoxide Dismutase 1 (SOD1) mice that model ALS, Arimoclomol rescues motor neurons, improves neuromuscular function and extends lifespan. The therapeutic potential of Arimoclomol is currently under investigation in a Phase II clinical trial for ALS patients with SOD1 mutations. In this review we summarize the evidence for the neuroprotective effects of enhanced heat shock protein expression by Arimoclomol and other inducers of the Heat Shock Response. ALS is a complex, multifactorial disease affecting a number of cell types and intracellular pathways. Cells and pathways affected by ALS pathology and which may be targeted by a heat shock protein-based therapy are also discussed in this review. For example, protein aggregation is a characteristic pathological feature of neurodegenerative diseases including ALS. Enhanced heat shock protein expression not only affects protein aggregation directly, but can also lead to more effective clearance of protein aggregates via the unfolded protein response, the proteasome-ubiquitin system or by autophagy. However, compounds such as Arimoclomol have effects beyond targeting protein mis-handling and can also affect additional pathological mechanisms such as oxidative stress. Therefore, by targeting multiple pathological mechanisms, compounds such as Arimoclomol may be particularly effective in the development of a disease-modifying therapy for ALS and other neurodegenerative disorders.
Samantha Zinkie, Benoit J Gentil, Sandra Minotti and Heather D Durham.
Expression of the protein chaperone, clusterin, in spinal cord cells constitutively and following cellular stress, and upregulation by treatment with Hsp90 inhibitor.. Cell stress & chaperones 18(6):745–58, November 2013.
Abstract Clusterin, a protein chaperone found at high levels in physiological fluids, is expressed in nervous tissue and upregulated in several neurological diseases. To assess relevance to amyotrophic lateral sclerosis (ALS) and other motor neuron disorders, clusterin expression was evaluated using long-term dissociated cultures of murine spinal cord and SOD1(G93A) transgenic mice, a model of familial ALS. Motor neurons and astrocytes constitutively expressed nuclear and cytoplasmic forms of clusterin, and secreted clusterin accumulated in culture media. Although clusterin can be stress inducible, heat shock failed to increase levels in these neural cell compartments despite robust upregulation of stress-inducible Hsp70 (HspA1) in non-neuronal cells. In common with HSPs, clusterin was upregulated by treatment with the Hsp90 inhibitor, geldanamycin, and thus could contribute to the neuroprotection previously identified for such compounds in disease models. Clusterin expression was not altered in cultured motor neurons expressing SOD1(G93A) by gene transfer or in presymptomatic SOD1(G93A) transgenic mice; however, clusterin immunolabeling was weakly increased in lumbar spinal cord of overtly symptomatic mice. More striking, mutant SOD1 inclusions, a pathological hallmark, were strongly labeled by anti-clusterin. Since secreted, as well as intracellular, mutant SOD1 contributes to toxicity, the extracellular chaperoning property of clusterin could be important for folding and clearance of SOD1 and other misfolded proteins in the extracellular space. Evaluation of chaperone-based therapies should include evaluation of clusterin as well as HSPs, using experimental models that replicate the control mechanisms operant in the cells and tissue of interest.
Ari M Chow, Derek W F Tang, Asad Hanif and Ian R Brown.
Induction of heat shock proteins in cerebral cortical cultures by celastrol.. Cell stress & chaperones 18(2):155–60, March 2013.
Abstract Alzheimer's disease, Parkinson's disease and amyotrophic lateral sclerosis (ALS) are 'protein misfolding disorders' of the mature nervous system that are characterized by the accumulation of protein aggregates and selective cell loss. Different brain regions are impacted, with Alzheimer's affecting cells in the cerebral cortex, Parkinson's targeting dopaminergic cells in the substantia nigra and ALS causing degeneration of cells in the spinal cord. These diseases differ widely in frequency in the human population. Alzheimer's is more frequent than Parkinson's and ALS. Heat shock proteins (Hsps) are 'protein repair agents' that provide a line of defense against misfolded, aggregation-prone proteins. We have suggested that differing levels of constitutively expressed Hsps (Hsc70 and Hsp27) in neural cell populations confer a variable buffering capacity against 'protein misfolding disorders' that correlates with the relative frequencies of these neurodegenerative diseases. The high relative frequency of Alzheimer's may due to low levels of Hsc70 and Hsp27 in affected cell populations that results in a reduced defense capacity against protein misfolding. Here, we demonstrate that celastrol, but not classical heat shock treatment, is effective in inducing a set of neuroprotective Hsps in cultures derived from cerebral cortices, including Hsp70, Hsp27 and Hsp32. This set of Hsps is induced by celastrol at 'days in vitro' (DIV) 13 when cultured cortical cells reached maturity. The inducibility of a set of neuroprotective Hsps in mature cortical cultures at DIV13 suggests that celastrol is a potential agent to counter Alzheimer's disease, a neurodegenerative 'protein misfolding disorder' of the adult brain that targets cells in the cerebral cortex.
Zaorui Zhao, Alan I Faden, David J Loane, Marta M Lipinski, Boris Sabirzhanov and Bogdan A Stoica.
Neuroprotective effects of geranylgeranylacetone in experimental traumatic brain injury.. Journal of cerebral blood flow and metabolism : official journal of the International Society of Cerebral Blood Flow and Metabolism 33(12):1897–908, 2013.
Abstract Geranylgeranylacetone (GGA) is an inducer of heat-shock protein 70 (HSP70) that has been used clinically for many years as an antiulcer treatment. It is centrally active after oral administration and is neuroprotective in experimental brain ischemia/stroke models. We examined the effects of single oral GGA before treatment (800 mg/kg, 48 hours before trauma) or after treatment (800 mg/kg, 3 hours after trauma) on long-term functional recovery and histologic outcomes after moderate-level controlled cortical impact, an experimental traumatic brain injury (TBI) model in mice. The GGA pretreatment increased the number of HSP70(+) cells and attenuated posttraumatic $\alpha$-fodrin cleavage, a marker of apoptotic cell death. It also improved sensorimotor performance on a beam walk task; enhanced recovery of cognitive/affective function in the Morris water maze, novel object recognition, and tail-suspension tests; and improved outcomes using a composite neuroscore. Furthermore, GGA pretreatment reduced the lesion size and neuronal loss in the hippocampus, cortex, and thalamus, and decreased microglial activation in the cortex when compared with vehicle-treated TBI controls. Notably, GGA was also effective in a posttreatment paradigm, showing significant improvements in sensorimotor function, and reducing cortical neuronal loss. Given these neuroprotective actions and considering its longstanding clinical use, GGA should be considered for the clinical treatment of TBI.
V F Lazarev, D V Sverchinskyi, M V Ippolitova, A V Stepanova, I V Guzhova and B A Margulis.
Factors Affecting Aggregate Formation in Cell Models of Huntington's Disease and Amyotrophic Lateral Sclerosis.. Acta naturae 5(2):81–9, 2013.
Abstract Most neurodegenerative pathologies stem from the formation of aggregates of mutant proteins, causing dysfunction and ultimately neuronal death. This study was aimed at elucidating the role of the protein factors that promote aggregate formation or prevent the process, respectively, glyceraldehyde-3-dehydrogenase (GAPDH) and tissue transglutaminase (tTG) and Hsp70 molecular chaperone. The siRNA technology was used to show that the inhibition of GAPDH expression leads to a 45-50% reduction in the aggregation of mutant huntingtin, with a repeat of 103 glutamine residues in a model of Huntington's disease (HD). Similarly, the blockage of GAPDH synthesis was found for the first time to reduce the degree of aggregation of mutant superoxide dismutase 1 (G93A) in a model of amyotrophic lateral sclerosis (ALS). The treatment of cells that imitate HD and ALS with a pharmacological GAPDH inhibitor, hydroxynonenal, was also shown to reduce the amount of the aggregating material in both disease models. Tissue transglutaminase is another factor that promotes the aggregation of mutant proteins; the inhibition of its activity with cystamine was found to prevent aggregate formation of mutant huntingtin and SOD1. In order to explore the protective function of Hsp70 in the control of the aggregation of mutant huntingtin, a cell model with inducible expression of the chaperone was used. The amount and size of polyglutamine aggregates were reduced by increasing the intracellular content of Hsp70. Thus, pharmacological regulation of the function of three proteins, GAPDH, tTG, and Hsp70, can affect the pathogenesis of two significant neurodegenerative diseases.
Eun-Hyun Kim, Hyo-Soon Jeong, Hye-Young Yun, Kwang Jin Baek, Nyoun Soo Kwon, Kyoung-Chan Park and Dong-Seok Kim.
Geranylgeranylacetone inhibits melanin synthesis via ERK activation in Mel-Ab cells.. Life sciences 93(5-6):226–32, 2013.
Abstract AIMS: Geranylgeranylacetone (GGA) has shown cytoprotective activity through induction of a 70-kDa heat shock protein (HSP70). Although HSP70 is reported to regulate melanogenesis, the effects of GGA on melanin synthesis in melanocytes have not been previously studied. Therefore, this study investigated the effects of GGA on melanogenesis and the related signaling pathways. MAIN METHODS: Melanin content and tyrosinase activities were measured in Mel-Ab cells. GGA-induced signal transduction pathways were investigated by western blot analysis. KEY FINDINGS: Our results showed that GGA significantly decreased melanin content in a concentration-dependent manner. Similarly, GGA reduced tyrosinase activity dose-dependently, but it did not directly inhibit tyrosinase. Western blot analysis indicated that GGA downregulated microphthalmia-associated transcription factor (MITF) and tyrosinase protein expression, whereas it increased the phosphorylation of extracellular signal-regulated kinase (ERK) and mammalian target of rapamycin (mTOR). Furthermore, a specific ERK pathway inhibitor, PD98059, blocked GGA-induced melanin reduction and then prevented downregulation of MITF and tyrosinase by GGA. However, a specific mTOR inhibitor, rapamycin, only slightly restored inhibition of melanin production by GGA, indicating that mTOR signaling is not a key mechanism regulating the inhibition of melanin production. SIGNIFICANCE: These findings suggest that activation of ERK by GGA reduces melanin synthesis in Mel-Ab cells through downregulation of MITF and tyrosinase expression.
Tatsuya Hoshino, Koichiro Suzuki, Takahide Matsushima, Naoki Yamakawa, Toshiharu Suzuki and Tohru Mizushima.
Suppression of Alzheimer's disease-related phenotypes by geranylgeranylacetone in mice.. PloS one 8(10):e76306, January 2013.
Abstract Amyloid-$\beta$ peptide (A$\beta$) plays an important role in the pathogenesis of Alzheimer's disease (AD). A$\beta$ is generated by the secretase-mediated proteolysis of $\beta$-amyloid precursor protein (APP), and cleared by enzyme-mediated degradation and phagocytosis. Transforming growth factor (TGF)-$\beta$1 stimulates this phagocytosis. We recently reported that the APP23 mouse model for AD showed fewer AD-related phenotypes when these animals were crossed with transgenic mice expressing heat shock protein (HSP) 70. We here examined the effect of geranylgeranylacetone, an inducer of HSP70 expression, on the AD-related phenotypes. Repeated oral administration of geranylgeranylacetone to APP23 mice for 9 months not only improved cognitive function but also decreased levels of A$\beta$, A$\beta$ plaque deposition and synaptic loss. The treatment also up-regulated the expression of an A$\beta$-degrading enzyme and TGF-$\beta$1 but did not affect the maturation of APP and secretase activities. These outcomes were similar to those observed in APP23 mice genetically modified to overexpress HSP70. Although the repeated oral administration of geranylgeranylacetone did not increase the level of HSP70 in the brain, a single oral administration of geranylgeranylacetone significantly increased the level of HSP70 when A$\beta$ was concomitantly injected directly into the hippocampus. Since geranylgeranylacetone has already been approved for use as an anti-ulcer drug and its safety in humans has been confirmed, we propose that this drug be considered as a candidate drug for the prevention of AD.
Sarah J Weisberg, Roman Lyakhovetsky, Ayelet-chen Werdiger, Aaron D Gitler, Yoav Soen and Daniel Kaganovich.
Compartmentalization of superoxide dismutase 1 (SOD1G93A) aggregates determines their toxicity.. Proceedings of the National Academy of Sciences of the United States of America 109(39):15811–6, 2012.
Abstract Neurodegenerative diseases constitute a class of illnesses marked by pathological protein aggregation in the brains of affected individuals. Although these disorders are invariably characterized by the degeneration of highly specific subpopulations of neurons, protein aggregation occurs in all cells, which indicates that toxicity arises only in particular cell biological contexts. Aggregation-associated disorders are unified by a common cell biological feature: the deposition of the culprit proteins in inclusion bodies. The precise function of these inclusions remains unclear. The starting point for uncovering the origins of disease pathology must therefore be a thorough understanding of the general cell biological function of inclusions and their potential role in modulating the consequences of aggregation. Here, we show that in human cells certain aggregate inclusions are active compartments. We find that toxic aggregates localize to one of these compartments, the juxtanuclear quality control compartment (JUNQ), and interfere with its quality control function. The accumulation of SOD1G93A aggregates sequesters Hsp70, preventing the delivery of misfolded proteins to the proteasome. Preventing the accumulation of SOD1G93A in the JUNQ by enhancing its sequestration in an insoluble inclusion reduces the harmful effects of aggregation on cell viability.
David J Gifondorwa, Ramon Jimenz-Moreno, Crystal D Hayes, Hesam Rouhani, Mac B Robinson, Jane L Strupe, James Caress and Carol Milligan.
Administration of Recombinant Heat Shock Protein 70 Delays Peripheral Muscle Denervation in the SOD1(G93A) Mouse Model of Amyotrophic Lateral Sclerosis.. Neurology research international 2012:170426, 2012.
Abstract A prominent clinical feature of ALS is muscle weakness due to dysfunction, denervation and degeneration of motoneurons (MNs). While MN degeneration is a late stage event in the ALS mouse model, muscle denervation occurs significantly earlier in the disease. Strategies to prevent this early denervation may improve quality of life by maintaining muscle control and slowing disease progression. The precise cause of MN dysfunction and denervation is not known, but several mechanisms have been proposed that involve potentially toxic intra- and extracellular changes. Many cells confront these changes by mounting a stress response that includes increased expression of heat shock protein 70 (Hsp70). MNs do not upregulate Hsp70, and this may result in a potentially increased vulnerability. We previously reported that recombinant human hsp70 (rhHsp70) injections delayed symptom onset and increased lifespan in SOD1(G93A) mice. The exogenous rhHsp70 was localized to the muscle and not to spinal cord or brain suggesting it modulates peripheral pathophysiology. In the current study, we focused on earlier administration of Hsp70 and its effect on initial muscle denervation. Injections of the protein appeared to arrest denervation with preserved large myelinated peripheral axons, and reduced glial activation.
Gong-Jhe Wu, Wu-Fu Chen, Han-Chun Hung, Yen-Hsuan Jean, Chun-Sung Sung, Chiranjib Chakraborty, Hsin-Pai Lee, Nan-Fu Chen and Zhi-Hong Wen.
Effects of propofol on proliferation and anti-apoptosis of neuroblastoma SH-SY5Y cell line: new insights into neuroprotection.. Brain research 1384:42–50, April 2011.
Abstract Recently, it has been suggested that anesthetic agents may have neuroprotective potency. The notion that anesthetic agents can offer neuroprotection remains controversial. Propofol, which is a short-acting intravenous anesthetic agent, may have potential as a neuroprotective agent. In this study, we tried to determine whether propofol affected the viability of human neuroblastoma SH-SY5Y cells by using the MTT assay. Surprisingly, our results showed that propofol at a dose of 1-10 $\mu$M could improve cell proliferation. However, at higher doses (200 $\mu$M), propofol appears to be cytotoxic. On the other hand, propofol could up-regulate the expression of key proteins involved in neuroprotection including B-cell lymphoma 2 at a dose range of 1-10 $\mu$M, activation of phospho-serine/threonine protein kinase at a dose range of 0.5-10 $\mu$M, and activation of phospho-extracellular signal-regulated kinases at a dose range of 5-10 $\mu$M. Similarly, we demonstrate that propofol (10 $\mu$M) could elevate protein levels of heat shock protein 90 and heat shock protein 70. Therefore, we choose to utilize a 10 $\mu$M concentration of propofol to assess neuroprotective activities in our studies. In the following experiments, we used dynorphin A to generate cytotoxic effects on SH-SY5Y cells. Our data indicate that propofol (10 $\mu$M) could inhibit the cytotoxicity in SH-SY5Y cells induced by dynorphin A. Furthermore, propofol (10 $\mu$M) could decrease the expression of the p-P38 protein as well. These data together suggest that propofol may have the potential to act as a neuroprotective agent against various neurologic diseases. However, further delineation of the precise neuroprotective effects of propofol will need to be examined.
Atsushi Sanbe, Takuya Daicho, Reiko Mizutani, Toshiya Endo, Noriko Miyauchi, Junji Yamauchi, Kouichi Tanonaka, Charles Glabe and Akito Tanoue.
Protective effect of geranylgeranylacetone via enhancement of HSPB8 induction in desmin-related cardiomyopathy.. PloS one 4(4):e5351, January 2009.
Abstract BACKGROUND: An arg120gly (R120G) missense mutation in HSPB5 (alpha-beta-crystallin ), which belongs to the small heat shock protein (HSP) family, causes desmin-related cardiomyopathy (DRM), a muscle disease that is characterized by the formation of inclusion bodies, which can contain pre-amyloid oligomer intermediates (amyloid oligomer). While we have shown that small HSPs can directly interrupt amyloid oligomer formation, the in vivo protective effects of the small HSPs on the development of DRM is still uncertain. METHODOLOGY/PRINCIPAL FINDINGS: In order to extend the previous in vitro findings to in vivo, we used geranylgeranylacetone (GGA), a potent HSP inducer. Oral administration of GGA resulted not only in up-regulation of the expression level of HSPB8 and HSPB1 in the heart of HSPB5 R120G transgenic (R120G TG) mice, but also reduced amyloid oligomer levels and aggregates. Furthermore, R120G TG mice treated with GGA exhibited decreased heart size and less interstitial fibrosis, as well as improved cardiac function and survival compared to untreated R120G TG mice. To address possible mechanism(s) for these beneficial effects, cardiac-specific transgenic mice expressing HSPB8 were generated. Overexpression of HSPB8 led to a reduction in amyloid oligomer and aggregate formation, resulting in improved cardiac function and survival. Treatment with GGA as well as the overexpression of HSPB8 also inhibited cytochrome c release from mitochondria, activation of caspase-3 and TUNEL-positive cardiomyocyte death in the R120G TG mice. CONCLUSIONS/SIGNIFICANCE: Expression of small HSPs such as HSPB8 and HSPB1 by GGA may be a new therapeutic strategy for patients with DRM.
Satoshi Endo, Nobuhiko Hiramatsu, Kunihiro Hayakawa, Maro Okamura, Ayumi Kasai, Yasuhiro Tagawa, Norifumi Sawada, Jian Yao and Masanori Kitamura.
Geranylgeranylacetone, an inducer of the 70-kDa heat shock protein (HSP70), elicits unfolded protein response and coordinates cellular fate independently of HSP70.. Molecular pharmacology 72(5):1337–48, 2007.
Abstract Geranylgeranylacetone (GGA), an antiulcer agent, has the ability to induce 70-kDa heat shock protein (HSP70) in various cell types and to protect cells from apoptogenic insults. However, little is known about effects of GGA on other HSP families of molecules. We found that, at concentrations >/=100 microM, GGA caused selective expression of 78-kDa glucose-regulated protein (GRP78), an HSP70 family member inducible by endoplasmic reticulum (ER) stress, without affecting the level of HSP70 in various cell types. Induction of ER stress by GGA was also evidenced by expression of another endogenous marker, CCAAT/enhancer-binding protein-homologous protein (CHOP); decreased activity of ER stress-responsive alkaline phosphatase; and unfolded protein response (UPR), including activation of the activating transcription factor 6 (ATF6) pathway and the inositol-requiring ER-to-nucleus signal kinase 1-X-box-binding protein 1 (IRE1-XBP1) pathway. Incubation of mesangial cells with GGA caused significant apoptosis, which was attenuated by transfection with inhibitors of caspase-12 (i.e., a dominant-negative mutant of caspase-12 and MAGE-3). Dominant-negative suppression of IRE1 or XBP1 significantly attenuated apoptosis without affecting the levels of CHOP and GRP78. Inhibition of c-Jun NH(2)-terminal kinase, the molecule downstream of IRE1, by 1,9-pyrazoloanthrone (SP600125) did not improve cell survival. Blockade of ATF6 by 4-(2-aminoethyl) benzenesulfonyl fluoride enhanced apoptosis by GGA, and it was correlated with attenuated induction of both GRP78 and CHOP. Overexpression of GRP78 or dominant-negative inhibition of CHOP significantly attenuated GGA-induced apoptosis. These results suggested that GGA triggers both proapoptotic (IRE1-XBP1, ATF6-CHOP) and antiapoptotic (ATF6-GRP78) UPR and thereby coordinates cellular fate even without induction of HSP70.
Rachel L Galli, Donna F Bielinski, Aleksandra Szprengiel, Barbara Shukitt-Hale and James A Joseph.
Blueberry supplemented diet reverses age-related decline in hippocampal HSP70 neuroprotection.. Neurobiology of aging 27(2):344–50, 2006.
Abstract Dietary supplementation with antioxidant rich foods can decrease the level of oxidative stress in brain regions and can ameliorate age-related deficits in neuronal and behavioral functions. We examined whether short-term supplementation with blueberries might enhance the brain's ability to generate a heat shock protein 70 (HSP70) mediated neuroprotective response to stress. Hippocampal (HC) regions from young and old rats fed either a control or a supplemented diet for 10 weeks were subjected to an in vitro inflammatory challenge (LPS) and then examined for levels of HSP70 at various times post LPS (30, 90 and 240 min). While baseline levels of HSP70 did not differ among the various groups compared to young control diet rats, increases in HSP70 protein levels in response to an in vitro LPS challenge were significantly less in old as compared to young control diet rats at the 30, 90 and 240 min time points. However, it appeared that the blueberry diet completely restored the HSP70 response to LPS in the old rats at the 90 and 240 min times. This suggests that a short-term blueberry (BB) intervention may result in improved HSP70-mediated protection against a number of neurodegenerative processes in the brain. Results are discussed in terms of the multiplicity of the effects of the BB supplementation which appear to range from antioxidant/anti-inflammatory activity to signaling.
Masahisa Katsuno, Chen Sang, Hiroaki Adachi, Makoto Minamiyama, Masahiro Waza, Fumiaki Tanaka, Manabu Doyu and Gen Sobue.
Pharmacological induction of heat-shock proteins alleviates polyglutamine-mediated motor neuron disease.. Proceedings of the National Academy of Sciences of the United States of America 102(46):16801–6, November 2005.
Abstract Spinal and bulbar muscular atrophy (SBMA) is an adult-onset motor neuron disease caused by the expansion of a trinucleotide CAG repeat encoding the polyglutamine tract in the first exon of the androgen receptor gene (AR). The pathogenic, polyglutamine-expanded AR protein accumulates in the cell nucleus in a ligand-dependent manner and inhibits transcription by interfering with transcriptional factors and coactivators. Heat-shock proteins (HSPs) are stress-induced chaperones that facilitate the refolding and, thus, the degradation of abnormal proteins. Geranylgeranylacetone (GGA), a nontoxic antiulcer drug, has been shown to potently induce HSP expression in various tissues, including the central nervous system. In a cell model of SBMA, GGA increased the levels of Hsp70, Hsp90, and Hsp105 and inhibited cell death and the accumulation of pathogenic AR. Oral administration of GGA also up-regulated the expression of HSPs in the central nervous system of SBMA-transgenic mice and suppressed nuclear accumulation of the pathogenic AR protein, resulting in amelioration of polyglutamine-dependent neuromuscular phenotypes. These observations suggest that, although a high dose appears to be needed for clinical effects, oral GGA administration is a safe and promising therapeutic candidate for polyglutamine-mediated neurodegenerative diseases, including SBMA.
Hiroaki Adachi, Masahisa Katsuno, Makoto Minamiyama, Chen Sang, Gerassimos Pagoulatos, Charalampos Angelidis, Moriaki Kusakabe, Atsushi Yoshiki, Yasushi Kobayashi, Manabu Doyu and Gen Sobue.
Heat shock protein 70 chaperone overexpression ameliorates phenotypes of the spinal and bulbar muscular atrophy transgenic mouse model by reducing nuclear-localized mutant androgen receptor protein.. The Journal of neuroscience : the official journal of the Society for Neuroscience 23(6):2203–11, 2003.
Abstract Spinal and bulbar muscular atrophy (SBMA) is an inherited motor neuron disease caused by the expansion of the polyglutamine (polyQ) tract within the androgen receptor (AR). The nuclear inclusions consisting of the mutant AR protein are characteristic and combine with many components of ubiquitin-proteasome and molecular chaperone pathways, raising the possibility that misfolding and altered degradation of mutant AR may be involved in the pathogenesis. We have reported that the overexpression of heat shock protein (HSP) chaperones reduces mutant AR aggregation and cell death in a neuronal cell model (Kobayashi et al., 2000). To determine whether increasing the expression level of chaperone improves the phenotype in a mouse model, we cross-bred SBMA transgenic mice with mice overexpressing the inducible form of human HSP70. We demonstrated that high expression of HSP70 markedly ameliorated the motor function of the SBMA model mice. In double-transgenic mice, the nuclear-localized mutant AR protein, particularly that of the large complex form, was significantly reduced. Monomeric mutant AR was also reduced in amount by HSP70 overexpression, suggesting the enhanced degradation of mutant AR. These findings suggest that HSP70 overexpression ameliorates SBMA phenotypes in mice by reducing nuclear-localized mutant AR, probably caused by enhanced mutant AR degradation. Our study may provide the basis for the development of an HSP70-related therapy for SBMA and other polyQ diseases.
T Ooie, N Takahashi, T Saikawa, T Nawata, M Arikawa, K Yamanaka, M Hara, T Shimada and T Sakata.
Single Oral Dose of Geranylgeranylacetone Induces Heat-Shock Protein 72 and Renders Protection Against Ischemia/Reperfusion Injury in Rat Heart. Circulation 104(15):1837–1843, 2001.
Abstract Background– Induction of heat-shock proteins (HSPs) results in cardioprotection against ischemic insult. Geranylgeranylacetone (GGA), known as an antiulcer agent, reportedly induces HSP72 in the gastric mucosa and small intestine of rats. The present study tested the hypothesis that oral GGA would induce HSP72 in the heart and thus render cardioprotection against ischemia/reperfusion injury in rats. Methods and Results– Cardiac expression of HSPs was quantitatively evaluated in rats by Western blot analysis. Ten minutes of whole-body hyperthermia induced HSP72 expression in the rat hearts. A single oral dose of GGA (200 mg/kg) also induced expression of HSP72, which peaked at 24 hours after administration. Therefore, isolated perfused heart experiments using a Langendorff apparatus were performed 24 hours after administration of 200 mg/kg GGA (GGA group) or vehicle (control group). After a 5-minute stabilization period, no-flow global ischemia was given for 20, 40, or 60 minutes, followed by 30 minutes of reperfusion. During reperfusion, the functional recovery was greater and the released creatine kinase was less in the GGA group than in the control group. Electron microscopy findings revealed that the ischemia/reperfusion-induced damage of myocardial cells was prevented in GGA-treated myocytes. Conclusions– The results suggest that oral GGA is cardioprotective against ischemic insult through its induction of HSP72. | CommonCrawl |
I am a postdoc in the Math Department at University of Hawaii (2016–2017). In fall 2017 I will move to University of Colorado, Boulder.
I have worked at Iowa State University (2014–2016) and University of South Carolina (2012–2014).
My primary research area is universal algebra; current projects focus on lattice theory, computational complexity, and universal algebraic approaches to constraint satisfaction problems. Other research interests include logic, computability theory, type theory, category theory, functional programming, dependent types, and proof-carrying code.
Most of my research papers are posted on my Math arXiv page or my CS arXiv page. A more comprehensive collection of my work resides in my Github repositories.
This post describes the steps I use to update or install multiple versions of the Java Development Kit (JDK) on Linux machines.
The alt-comm repository contains a note giving an alternate decription of the (non-modular) commutator that yields a polynomial time algorithm for computing it. This is inspired by the alternate description of the commutator given by Kearnes in .
It is not hard to see that 3-SAT reduces to the problem of deciding whether all coatoms in a certain partition lattice are contained in the union of a collection of certain principal filters. Therefore, the latter problem, which we will call the covered coatoms problem (CCP), is NP-complete. In this post we simply define CCP. Later we check that 3-SAT reduces to CCP, and then develop some ideas about constructing a feasible algorithm to solve CCP.
The questions below appeared on an online test administered by Jane Street Capital Management to assess whether a person is worthy of a phone interview.
This is a markdown version of the tutorial Learn You an Agda and Achieve Enlightenment! by Liam O'Connor-Davis.
I made this version for my own reference, while working through the tutorial, making some revisions and additions, and a few corrections. You may prefer the original.
Consider two finite algebras that are the same except for a relabeling of the operations. Our intuition tells us that these are simply different names for the same mathematical structure. Formally, however, not only are these distinct mathematical objects, but also they are nonisomorphic.
Isotopy is another well known notion of "sameness" that is broader than isomorphism. However, I recently proved a result that shows why isotopy also fails to fully capture algebraic equivalence.
Claim: If the three $\alpha_i$'s pairwise permute, then all pairs in the lattice permute.
I started a GitHub repository called TypeFunc in an effort to get my head around the massive amount of online resources for learning about type theory, functional programming, category theory, $\lambda$-calculus, and connections between topology and computing. | CommonCrawl |
Abstract: Quantizing the mirror curve of certain toric Calabi-Yau (CY) three-folds leads to a family of trace class operators. The resolvent function of these operators is known to encode topological data of the CY. In this paper, we show that in certain cases, this resolvent function satisfies a system of non-linear integral equations whose structure is very similar to the Thermodynamic Bethe Ansatz (TBA) systems. This can be used to compute spectral traces, both exactly and as a semiclassical expansion. As a main example, we consider the system related to the quantized mirror curve of local $\mathbb P^2$. According to a recent proposal, the traces of this operator are determined by the refined BPS indices of the underlying CY. We use our non-linear integral equations to test that proposal. | CommonCrawl |
I recently did a course on Associative Rings. Do there exist non-Associative rings? I'd like to discuss certain examples here.
I'm told that $\mathbb{R}^3$ is one such ring, which is non-associative under the operation of vector product. In other words, in general $A\times (B\times C)\neq (A\times B)\times C$.
After thinking about it for a bit, I found a supporting example in no time. We can see that $\hat{i}\times(\hat{i}\times \hat{j}) = \hat{i}\times \hat{k} = -\hat{j}$. However, $(\hat{i}\times \hat{i})\times \hat{j} = \vec{0}\times \hat{j} = \vec{0}$.
Wikipedia says that Lie Algebras are also examples of non-Associative rings. But that is a topic for another time. | CommonCrawl |