Would there be an interest in having more specialized chat rooms, like the one for homotopy theory that currently exists? So far, the homotopy theory one seems to be fairly successful, and it seems that other topics might also enjoy a chat room like it. Obviously, if such a chat room is to be useful, it will need to have at least a few experts interested in frequenting it, which would then mean that people should be at least somewhat sparing with pings. I am posting this both as a general question about whether people would be interested in such chat rooms, and as a place where people can plan such chat rooms. If you have a topic you would like a chat room for, put it as an answer. Upvotes for that answer should then be interpreted as someone saying "yes, I would use such a chat room to at least some extent" (preferably with a comment so it can be gauged who those interested people are). July 3: Well, it is gone now; it is worth starting a chat room and just chatting with random strangers, until such time as people interested in that topic begin to post there. Jon started a Homotopy chat room. It turned out not to have MO as a parent. So I started one, and Jon and I chatted, then some homotopy people. After a bit, it was revealed that my room was also not MO. So Manish merged the two rooms and mushed the posts together. So, I think the thing to do is settle on a title, begin chatting in the new room with anyone at all on MO, and try to get graduate students involved, because they are sufficiently hyper to keep a room going. Should that work, look for postdocs and new assistant professors in or near the field. Online chat does seem to be a matter of taste. The relatively stable configuration is one or two established people peering in when they have time and are in the right mood, answering questions. Time will tell whether it is possible to hold scheduled sessions with larger numbers of people peering in, as an informal seminar. Jon is trying something along those lines, so maybe we will be seeing that in the coming weeks. Some friends and I wanted a chat room based on geometry and QFT (and also whatever topology, representation theory, ... comes into it). I've gone ahead and created it: https://chat.stackexchange.com/rooms/82100/geometryphysics. If you're interested, feel free to stop by! I would like to have a chat room for representation theory (though possibly "algebraic representation theory" would be more precisely the subject I mean when I say representation theory). I'd really like to see a category theory chatroom. This would obviously have significant overlap with the homotopy theory chatroom, and could potentially detract from both, I don't know. But pure category theory (and this could include $\infty$-category stuff, to try and steal some of the n-labbers away maybe) is pretty lovely. I think that adding a list of past chatrooms which are now frozen but have been active at least for some time might be a useful addition to this question. I added the rooms I am aware of and made this post community wiki. Feel free to update the list of rooms. There were two incarnations of a room for algebraic geometry. This one had little activity apart from the first few days. The other one seems to have generated more activity. (I would say that for the first 6 to 8 months the activity was not entirely negligible.) I will also add a link to Jon Beardsley's advice on what can help make a room active, which was posted in one of the algebraic geometry rooms.
A room for discussions about representation theory was suggested by Tobias Kildetoft, and he indeed created such a room, although it never generated much activity. Differential geometry as in the first two chapters of Kobayashi and Nomizu. Stacks as in Angelo Vistoli's notes (also available on the arXiv). Is anyone interested in the same? I would like to see a chatroom that deals with group theory and group-theoretic approaches to geometry. I know that geometry isn't a really fashionable subject right now, but I would like to talk to people who are interested in it.
CommonCrawl
The birds always behave in a consistent and predictable manner, in the sense that $x$'s response to hearing $y$ is always the same bird $xy$ regardless of the weather or time of day. This means that the operation $(x,y) \mapsto xy$ of saying $y$'s name to $x$ can be regarded as a mathematical function. Definition: A forest is a set $\mathcal F$ equipped with a binary operation $\mathcal F \times \mathcal F \to \mathcal F$, denoted by juxtaposition. The elements of $\mathcal F$ are called birds. If $x,y \in \mathcal F$, then we interpret $xy$ as $x$'s response to hearing the name of $y$. (We use a dot to separate bird names when they are spelled out.) For example, consider a two-element forest $\{P, C\}$ consisting of a parrot $P$ and a cardinal $C$, with $Px = x$ and $Cx = C$ for every bird $x$. In this two-element forest, the parrot simply repeats back the name of whatever bird it hears, while the cardinal only talks about itself. Whenever we have three birds, $x,y,$ and $z,$ we must be careful to distinguish between the names $(xy)z$ and $x(yz).$ The first is $xy$'s response to $z$, while the second is $x$'s response to $yz;$ these are not necessarily the same! Thus, the operation of "multiplying" two birds is in general neither commutative nor associative. We will use parentheses throughout to indicate the order in which birds are spoken to. We will begin our study of ornithology by describing some special types of relationships between birds. Definition: Let $\mathcal F$ be a forest, and let $x, y, z \in \mathcal F$ be any three birds. We say that $x$ is fond of $y$ if $x$ responds to $y$'s name by repeating it. In symbols, $x$ is fond of $y$ if $xy = y$. We say that $x$ is normal if it is fond of some bird in $\mathcal F$ (including possibly itself). Otherwise, if $x$ is not fond of any bird in $\mathcal F$ (not even itself), it is abnormal. We say that $x$ composes $y$ with $z$ if for all birds $f \in \mathcal F$, we have that $xf = y(zf)$. This means that $x$'s response to $f$ is the same as $y$'s response to $z$'s response to $f$. Note that a given forest might not contain a composition for every pair of birds. Moreover, when a composition exists, it might not be unique! For this reason, composing birds is subtly different from composing mathematical functions.
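To make the definitions concrete, here is a small Python sketch (my own illustration, not from the original article) that models a finite forest as an operation table and checks the properties defined above, using the two-element parrot/cardinal forest as the example.

```python
# A minimal model of a finite forest: a set of birds plus a binary operation,
# stored as a dict mapping (x, y) -> x's response to hearing y's name.
# Illustrative sketch only; names are my own.

class Forest:
    def __init__(self, birds, op):
        self.birds = set(birds)
        self.op = op                      # dict: (x, y) -> xy

    def call(self, x, y):
        """x's response to hearing y's name, i.e. the bird xy."""
        return self.op[(x, y)]

    def is_fond(self, x, y):
        """x is fond of y if xy = y."""
        return self.call(x, y) == y

    def is_normal(self, x):
        """x is normal if it is fond of some bird (possibly itself)."""
        return any(self.is_fond(x, y) for y in self.birds)

    def composes(self, x, y, z):
        """x composes y with z if xf = y(zf) for every bird f."""
        return all(self.call(x, f) == self.call(y, self.call(z, f))
                   for f in self.birds)

# The two-element forest: a parrot P (Px = x) and a cardinal C (Cx = C).
P, C = "parrot", "cardinal"
op = {(P, P): P, (P, C): C, (C, P): C, (C, C): C}
F = Forest([P, C], op)

print(F.is_fond(P, C))      # True: the parrot repeats back any name (PC = C)
print(F.is_normal(C))       # True: the cardinal is fond of itself (CC = C)
print(F.composes(P, P, P))  # True: Pf = P(Pf) = f for every bird f
```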
CommonCrawl
How do I generate the Cayley table for the direct product of the symmetric group $S_n$ and the additive group of integers modulo $m$, $\mathbb Z/m\mathbb Z$ (also written $\mathbb Z_m$)? Use the cartesian_product and multiplication_table methods. To figure out which elements the labels a, b, c, etc. stand for, list the elements of the group. The multiplication table labelled with the elements themselves is hard to read.
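A hedged Sage sketch of this approach (to be run in a Sage session), using the cartesian_product and multiplication_table methods named above; the particular groups chosen and the names='elements' option are illustrative assumptions, and $\mathbb Z/m\mathbb Z$ is represented by an isomorphic cyclic permutation group so that both factors are written multiplicatively.

```python
# Illustrative Sage sketch (run inside Sage); not the only way to do this.
G = SymmetricGroup(3)                  # S_3
H = CyclicPermutationGroup(4)          # isomorphic to Z/4Z, written multiplicatively
P = cartesian_product([G, H])          # the direct product S_3 x Z/4Z

# Default labels are letters a, b, c, ...; list the elements to decode them.
print(list(P))

# Or label rows/columns by the elements themselves (explicit, but hard to read).
T = P.multiplication_table(names='elements')
print(T)
```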
CommonCrawl
This is a question from the Graph Theory (Bondy & Murty) text which has me rather stumped. (a) Formulate as a network flow problem the problem of determining whether a given pair $(p, q)$ is realizable by a simple bipartite graph. For (a), we view each $x_i$ as a source and each $y_j$ as a sink and connect as follows: include the edge $(x_i,y_j)$ where allowed by $p$ and $q$, with capacity $1$ on every such edge. Not 100% sure how to go about the flows, but I believe we can claim that the pair is realizable if the constructed network has a feasible flow. I am rather unsure how to go about doing (b) from the max-flow min-cut theorem (apologies for my lack of progress). Edit: the upper bound of the sum $\sum q_j$ in the inequality of part (b) should be $k$, not $n$. I assigned just this question on a problem set recently; you can find the answer on the first page of the Problem Set 3 solutions.
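The construction sketched in part (a) can be checked mechanically. Below is a hedged Python sketch (using networkx, which is not mentioned in the question) of the standard super-source/super-sink completion of that network: capacity $p_i$ on the source edges, $q_j$ on the sink edges, and $1$ on each $(x_i, y_j)$ edge; the pair is realizable iff the maximum flow saturates the source edges.

```python
# Hedged sketch: decide whether degree sequences p (for X) and q (for Y) are
# realizable by a simple bipartite graph, via a max-flow computation.
import networkx as nx

def realizable(p, q):
    if sum(p) != sum(q):
        return False
    G = nx.DiGraph()
    s, t = "s", "t"
    for i, pi in enumerate(p):
        G.add_edge(s, ("x", i), capacity=pi)        # source -> x_i, capacity p_i
    for j, qj in enumerate(q):
        G.add_edge(("y", j), t, capacity=qj)        # y_j -> sink, capacity q_j
    for i in range(len(p)):
        for j in range(len(q)):
            G.add_edge(("x", i), ("y", j), capacity=1)   # simple graph: at most one edge
    value, _ = nx.maximum_flow(G, s, t)
    return value == sum(p)

print(realizable([2, 2, 2], [3, 2, 1]))   # True
print(realizable([3, 1], [2, 2]))         # False: x_1 would need degree 3 with only two vertices opposite
```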
CommonCrawl
One way to think about the SD is in terms of errors in prediction. Suppose I am going to generate a value of the random variable $X$, and I ask you to predict the value I am going to get. What should you use as your predictor? A natural choice is $\mu_X$, the expectation of $X$. But you could choose any number $c$. The error that you will make is $X - c$. About how big is that? For most reasonable choices of $c$, the error will sometimes be positive and sometimes negative. To find the rough size of this error, we will avoid cancellation as before, and start by calculating the mean squared error $E[(X-c)^2]$. Expanding about $\mu_X$ gives $E[(X-c)^2] = \sigma_X^2 + (\mu_X - c)^2 \ge \sigma_X^2$, with equality if and only if $c = \mu_X$. What we have shown is that the predictor $\mu_X$ has the smallest mean squared error among all choices $c$. That smallest mean squared error is the variance of $X$, and hence the smallest root mean squared error is the SD $\sigma_X$. If we have two competing estimators of a parameter, we can use expected values and SDs to compare them. As an example, recall the German warplanes example of Data 8. The model was that we were observing $X_1, X_2, \ldots , X_n$, which are $n$ draws made at random with replacement from $1, 2, \ldots , N$. The goal was to estimate $N$, the total number of warplanes. Here is the simulation we did in Data 8, using a sample of size 30 to estimate $N$, which we had taken to be 300. We constructed the estimator $2A - 1$, where $A$ is the sample average, to be unbiased, and indeed its empirical distribution is symmetric around the parameter 300. The estimator $M$, the sample maximum, is clearly biased: it can never be larger than $N$ but it can be smaller. If we just compare expectations, then $E(2A-1) = 300$ while $E(M) \ne 300$, so it seems as though $2A-1$ is the better estimate. Comparing SDs, however, tells a different story: the empirical SD of $M$ is much smaller than that of $2A-1$. This tells us that though $M$ is typically going to be below 300, it won't be below by much, whereas $2A-1$ can be quite far away from 300, underestimating about as often as it overestimates. Perhaps now you'd prefer to use $M$ instead of $2A-1$. This is an example of the bias-variance tradeoff that is common in estimation. Both expectation and SD are important in choosing one estimator over another. As in this example, the choice isn't always clear. But if the estimator with the lower SD also happens to be unbiased, then you can declare it the winner.
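The simulation referred to above is not included here, so the following is a hedged numpy re-creation of it (the number of repetitions, the random seed, and the variable names are assumptions, not the original Data 8 code): it draws repeated samples of size 30 from $1, \ldots, 300$ and compares the two estimators $2A-1$ and $M$.

```python
# Hedged re-creation of the warplanes simulation: compare 2A - 1 (unbiased)
# with M = sample max (biased but less variable).
import numpy as np

N, n, reps = 300, 30, 10_000
rng = np.random.default_rng(0)

samples = rng.integers(1, N + 1, size=(reps, n))   # draws with replacement from 1..N
est_2A_minus_1 = 2 * samples.mean(axis=1) - 1
est_M = samples.max(axis=1)

for name, est in [("2A - 1", est_2A_minus_1), ("M", est_M)]:
    print(f"{name}: mean = {est.mean():.1f}, SD = {est.std():.1f}")
# Typically: 2A - 1 is centered at about 300 with the larger SD,
# while M sits a bit below 300 with a much smaller SD.
```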
CommonCrawl
We will see some instances of swindles in mathematics, primarily focusing on some in geometric topology due to Barry Mazur. The Casson invariant is defined for the class of oriented integral homology 3-spheres. It satisfies certain properties, and reduces to the Rohlin invariant mod 2. We will define the Casson invariant as half of the algebraic intersection number of irreducible representation spaces (the space consisting of representations of the fundamental group into SU(2)), and then prove that this definition satisfies the expected properties. In this series of (3-5) lectures, I will talk about different aspects of a class of contact 3-manifolds for which geometry, dynamics and topology interact subtly and beautifully. The talks are intended to include short surveys on "compatibility", "Anosovity" and "Conley-Zehnder indices". The goal is to use the theory of Contact Dynamics to show that conformally Anosov contact 3-manifolds (in particular, contact 3-manifolds with negative $\alpha$-sectional curvature) are universally tight, irreducible and do not admit a Liouville cobordism to the tight 3-sphere. The talk will discuss a paper by Gompf and Miyazaki of the same name, without going too much into the heavy technical details. measure theory that would imply the spheres uniqueness in this property. the theorems of Lutz and Martinet.
CommonCrawl
This formula uses the formula for the field due to a thin shell solenoid, integrated over a range of radii, to obtain the magnetic field at any point on the axis of a finite solenoid. n is the number of turns of wire per unit length in the solenoid. r1 is the inner radius of the solenoid, in meters. r2 is the outer radius of the solenoid, in meters. P is the total power consumed by the coil, in watts. $\lambda$ is equal to the total conductor cross section area divided by the total coil cross section area, which ranges from 0.6 to 0.8 in typical coils. $\rho$ is the conductor resistivity, in units of ohms times length. The length units must match those of r1. j is the bulk current density in the coil cross section, in amperes per square meter. N is the total number of turns of wire in the coil. Note that the geometry factor G is maximum when $\alpha=3$ and $\beta=2$. A coil built with a given inner diameter and input power will deliver the highest central field strength when these conditions are met. The following Python code shows how to use these formulas to calculate magnetic fields.
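The Python code mentioned above did not survive extraction, so here is a hedged stand-in: my own integration of the thin-shell solenoid field over the winding radius, with variable names and example values that are assumptions rather than the original code.

```python
# Hedged sketch: on-axis field of a finite, thick solenoid, obtained by
# integrating the thin-shell solenoid formula over radii r1..r2.
# j is the bulk current density (A/m^2); x1 and x2 are the signed axial
# distances (in meters) from the field point to the two ends of the coil.
import math

MU0 = 4e-7 * math.pi   # vacuum permeability, T*m/A

def axial_field(j, r1, r2, x1, x2):
    def term(x):
        return x * math.log((r2 + math.hypot(r2, x)) / (r1 + math.hypot(r1, x)))
    return 0.5 * MU0 * j * (term(x2) - term(x1))

# Example (made-up numbers): coil with r1 = 2 cm, r2 = 5 cm, length 10 cm,
# field at the coil's center (x1 = -0.05 m, x2 = +0.05 m), j = 5e6 A/m^2.
print(axial_field(5e6, 0.02, 0.05, -0.05, 0.05), "tesla")
```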
CommonCrawl
``Two quaternion algebras over a number field $K$ are isomorphic iff they are ramified at the same set of places." As detailed here, Hasse introduced an invariant, now known as the Hasse invariant, in two papers (here and here) to classify central simple algebras.
CommonCrawl
I'm trying to understand what happens to the intensity and polarization of light that passes through a quarter-wave plate. I believe I know what happens when unpolarized light and linearly polarized light pass through. However, I'm not sure what happens when circularly polarized light passes through. According to a lecture I'm looking at from my university, the light should emerge linearly polarized. Unfortunately, there is no explanation as to why (I would expect it to be still circularly polarized) or how the intensity is changed (if at all). I suspect it has something to do with the shift that occurs when the light passes through the plate, but I'm not sure. Note that this is all in the context of monochromatic waves. A waveplate is a birefringent crystal. Birefringence is a particular kind of anisotropy where the refractive index depends on the plane of polarization of the light: there are two, orthogonal linear polarization planes which have a relatively lower ("fast axis") and higher ("slow axis") refractive index. A waveplate of retardation $\phi$ is of such a thickness that the phase delay differs for these two polarization states by $\phi$ as light passes through the crystal. Write the field as a superposition of components along the fast and slow axes, with coefficients $\alpha,\,\beta,\,\delta,\,\epsilon$ encoding the magnitude and phase of the component of the electric field vector along the fast and slow axes. For circularly polarized input, the two components have equal magnitude and a quarter-cycle phase difference, so the head of the field vector undergoes uniform circular motion, and this is therefore circular polarization. The quarter-wave plate adds a further quarter-cycle delay to the slow component, and the fast and slow components are now either in or exactly out of phase. It's not hard to see that the head of this vector undergoes simple harmonic rectilinear motion, and that therefore we're now dealing with linear polarization. Circularly polarised light can be thought of as being produced by two simple harmonic motions of the same frequency and amplitude oscillating at right angles to one another with a phase difference of $90^\circ$. The introduction of a quarter wave plate changes the phase of one of the simple harmonic motions by $90^\circ$ so that the phase difference between the two simple harmonic motions becomes either $0^\circ$ or $180^\circ$ and this is linear polarisation. There is a nice simulation of the addition of two shms to be found at this website.
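As a quick numerical check of the claim that circular in gives linear out (with the intensity unchanged), here is a hedged Jones-calculus sketch in Python; the conventions used (fast axis along $x$, a relative phase of $i$ on the slow axis, global phases dropped) are assumptions of this sketch, not taken from the lecture.

```python
# Jones-calculus check: a quarter-wave plate turns circular polarization into
# linear polarization without changing the intensity. Fast axis along x.
import numpy as np

qwp = np.array([[1, 0],
                [0, 1j]])                      # quarter-wave plate, up to a global phase

circ_in = np.array([1, 1j]) / np.sqrt(2)       # circularly polarized input
out = qwp @ circ_in                            # = (1, -1)/sqrt(2): linear at -45 degrees

print(out)
print(np.linalg.norm(circ_in)**2, np.linalg.norm(out)**2)   # intensities: both 1.0

# The two output components differ only by a real factor (up to a common phase),
# i.e. they oscillate in phase or exactly out of phase -> linear polarization.
```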
CommonCrawl
Title: Error Estimates and Convergence Rates for Filtered Back Projection. Abstract: We consider the approximation of target functions from fractional Sobolev spaces by the method of filtered back projection (FBP), which gives an inversion of the Radon transform. To this end, we analyze the intrinsic FBP approximation error which is incurred by the use of a low-pass filter with finite bandwidth, before we prove $L^2$-error estimates on Sobolev spaces of fractional order. The obtained error bounds are affine-linear with respect to the distance between the filter's window function and the constant function $1$ in the $L^\infty$-norm. Assuming more regularity of the window function, we refine the error estimates and prove convergence of the FBP approximation in the $L^2$-norm as the filter's bandwidth goes to infinity. We finally give asymptotic convergence rates in terms of the bandwidth of the low-pass filter and the smoothness of the target function.
CommonCrawl
A jet is simulated on the background of a galactic wind headed by a radiative bow shock. The wind shell, which is due to the radiative bow shock, is effectively destroyed by the impact of the jet cocoon, thanks to Rayleigh-Taylor instabilities. Associated strong HI absorption, and possibly also molecular emission, in high redshift radio galaxies, which is observed preferentially in the smaller ones, may be explained by that model, which is an improvement of an earlier radiative bow shock model. The model requires temperatures of $\approx 10^6$ K in the proto-clusters hosting these objects, and may be tested by high resolution spectroscopy of the Ly$\alpha$ line. The simulations show that - before destruction - the jet cocoon fills the wind shell entirely for a considerable time with an intact absorption system. Therefore, radio imaging of sources smaller than the critical size should reveal the round central bubbles, if the model is correct.
CommonCrawl
Which answer figure will complete the question figure? If a mirror is placed on the line MN, then which of the answer figures is the right image of the given figure? If 782 = 20 and 671 = 17, then 884 = ? 5 $\times$ 6 $\times$ 4 = 456, 3 $\times$ 6 $\times$ 5 = 536, 4 $\times$ 8 $\times$ 7 = ? A cyclist goes 30 km to North and then turning East he goes 40 km. Again he turns to his right and goes 20 km. After this, he turns to his right and goes 40 km. How far is he from his starting point? 35 + 7 - 5 $\div$ 5 $\times$ 6 = ? One morning at 7 o'clock, Naresh started walking with his back towards the Sun. Then he turned towards left, walked straight and then turned towards right and walked straight. Then he again turned towards left. Now in which direction is he facing?
CommonCrawl
The dot after the 0 in the first array results in an array of floating point numbers. Without it the entries would all be integers. A NumPy command similar to the built-in function range is the arange function. By default it creates an integer array. Here are two ways to create a floating point array using the arange function. Read the warning in the documentation regarding floating point issues with non-integer valued step size. Arrays with random entries can be created using functions in numpy.random. For example, the function rand creates arrays with entries drawn independently from a uniform distribution on $[0,1)$. Other commonly used functions are for creating arrays of zeros and ones. Evenly spaced points in an interval are useful as the sample points for many applications such as plotting functions. These can be created by the linspace function. The number of points desired between the start and end points can be specified. Sometimes the space for an array needs to be created ahead of time, with entries to be filled in later. One could create an array of zeros or ones. But this would be wasteful since each memory location would have to be set to 0 or 1. Memory access is expensive (in terms of time) and it is best to avoid accessing memory locations if all that is required is allocation of space. NumPy provides the empty function for this. The values printed are whatever values happen to be stored at the locations that are commandeered for the array space. Create a uniform subdivision of the interval -1.3 to 2.5. Generate an array of length $3n$ filled with the cyclic pattern 1, 2, 3. Create an array of the first 10 odd integers. For variety let's switch to importing numpy first rather than importing individual functions as needed. A common special matrix is the identity matrix. A square identity matrix can be created with the identity function. As usual, the data type can be specified. The eye function is more flexible than identity. It can do what identity can do. In addition eye can be used to create matrices of arbitrary shapes with ones along a specified diagonal. For example, you can create a rectangular matrix with ones on the super diagonal. Negative numbers specify diagonals below the main diagonal (which has index 0). Toeplitz matrices arise, for example, when discretizing the derivative operator. These matrices have constants on various diagonals. The following example shows how to create a Toeplitz matrix with 2 along the main diagonal and -1 on the super- and sub-diagonals. Let's now put together matrices of various types to create a block matrix structure using the bmat function. Create a 10 x 10 array of zeros and then "frame" it with a border of ones. Create an 8 x 8 array with a checkerboard pattern of zeros and ones using a slicing+striding approach. Create a block matrix with the $(0,0)$ block being a $5 \times 5$ random matrix, the $(0,1)$ and $(1,0)$ blocks being $5\times 5$ identity matrices and the $(1,1)$ block a zero matrix.
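The notebook's code cells did not survive extraction, so here is a hedged sketch of the constructions described above; the specific function choices (scipy.linalg.toeplitz, np.bmat, np.tile) are assumptions consistent with the text, not the original cells.

```python
# Hedged re-creation of the array constructions discussed above.
import numpy as np
from scipy.linalg import toeplitz

print(np.arange(0., 5.))                 # float array; np.arange(0, 5) would be integers
print(np.random.rand(3))                 # uniform draws on [0, 1)
print(np.linspace(-1.3, 2.5, 9))         # evenly spaced points, endpoints included
print(np.empty(4))                       # uninitialized: whatever happens to be in memory

print(np.identity(3, dtype=int))         # square identity matrix
print(np.eye(3, 5, k=1))                 # rectangular, ones on the super-diagonal

# Toeplitz matrix: 2 on the main diagonal, -1 on the super- and sub-diagonals.
col = np.zeros(5); col[0], col[1] = 2, -1
print(toeplitz(col))

# Exercises sketched above:
n = 3
print(np.tile([1, 2, 3], n))             # cyclic pattern of length 3n
print(np.arange(1, 20, 2))               # first 10 odd integers

framed = np.zeros((10, 10)); framed[[0, -1], :] = 1; framed[:, [0, -1]] = 1
board = np.zeros((8, 8), dtype=int); board[1::2, ::2] = 1; board[::2, 1::2] = 1

blocks = np.bmat([[np.random.rand(5, 5), np.identity(5)],
                  [np.identity(5),       np.zeros((5, 5))]])
print(blocks.shape)                      # (10, 10)
```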
CommonCrawl
In the last entry, I enthused about how much mileage we can get with a standard model object and a set of standard transformations thereof—and I haven't even gotten to Bayesian updating and hierarchical modeling, which will come next time. In this entry, I'll present a demo implementation of a truncate-at-zero transformation. It gives me another excuse to show how more offbeat models can be given entirely standard treatment. For those of you who might want to implement something like this, via whatever platform, you might gain benefit from seeing the method I use to implement transformations. Some of you might find this entry to be TMI, in which case, come back in three days. I'm using Apophenia for all of the examples because, as noted in prior posts, I really don't think there's anything out there like it. C custom is to put a library-specific prefix at the head of every function or structure in a library, to prevent name conflicts, so you'll see a lot of apop_s in the code below. It gets monotonous, but is very clear. If you think I'm being a narcissist for writing around the library of stats functions I wrote, I'll hook you up with the people who think I need to do exponentially more self-promotion and you can debate amongst yourselves. At least I didn't name the package after myself - I'm lookin' at you, Ross and Robert (smiley face). If you hate C, I encourage you to try reading through the sample code anyway and see if you get the gist even if the extra stars and ampersands seem like line noise (and then get my C textbook so it'll all become clear…). As a final apology, writing this revealed a bug in the apop_beta model, so you'll want to download a recent copy of the library to run this against. I'll first walk you through the use of the transformation, and then its construction. One of my favorite tests of a model is a round-trip where we start with a set of parameters, draw a few thousand elements using the RNG, and then use those elements as a data set to estimate parameters. If the parameters at the end of the round trip are reasonably close to the parameters we started with, then we have more confidence that the model is at least internally consistent. Start with an unparameterized stock apop_normal. Set its parameters to (1, 1), creating a parameterized model. Apply the truncation function to create a truncated, parameterized model. Use the model to make 20,000 draws. Use the data and the truncated model (the parameters will be thrown away) to estimate the parameters given the data set. This produces a new model with non-NULL parameters. Print the estimated model parameters to the screen, so we can see that the results are reasonably close to the original parameters of (1, 1). Here's the actual program. I broke this up into main and round_trip functions, so I can do the draw-and-estimate routine for all four models. apop_model truncated_model; //these two will be defined in the next file. //this copying and NULLifying is unnecessary; it's just so you know I'm not cheating. As is custom with many C files, it's best to read from the bottom up, as the last function in the file is where execution starts, and that function calls elements declared earlier in the file, which call elements still earlier in the file. At the bottom, you'll find the transformation function, truncate_model. It outputs a model object, so everything above that point describes the model object to be returned.
These were the only two items also defined in the file above that makes use of the transformation, so you can see that the rest of this file can remain private, internal workings to the transformation. double cutoff; //A global variable, so you know this isn't for production use. The function itself only has to make a copy of the truncated_model, store a copy of the original model in the new model (the apop_model object has a ->more pointer just for purposes like these) and wire the two parameter sets together. The truncated model that gets returned is declared in the line above the truncate_model function: it has log likelihood, draw, and prep functions, which are as described. The prep function is Apophenia-specific, and is what gets called by the apop_estimate function to make sure the parameter set isn't NULL and other such housekeeping. Of course, if a data point is below the cutoff, that has probability zero and thus log probability $-\infty$. So there's the tour of the code: the transformation function outputs a model that wraps the code in a simple model object that does the appropriate transformations to the base model that it holds in its internal storage, then the main routine can live up to the promise of egalitarian treatment of the pre- and post-transformation models. As a final point of foreshadowing of future entries, those of you who got this compiled and running will notice how each run produced a covariance matrix for the parameters.
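The same round trip can be sketched outside of Apophenia. Below is a hedged Python/scipy version of the idea (this is not the Apophenia API; the function names, the cutoff at zero, and the use of rejection sampling for draws are my own choices): truncate a normal at a cutoff by renormalizing the density, draw 20,000 values, then re-estimate the parameters by maximum likelihood.

```python
# Hedged Python sketch of the truncate-at-zero round trip (not Apophenia code).
import numpy as np
from scipy import stats, optimize

CUTOFF = 0.0

def truncated_loglike(params, data):
    """Log likelihood of N(mu, sigma) truncated below at CUTOFF."""
    mu, sigma = params
    if sigma <= 0 or np.any(data < CUTOFF):
        return -np.inf                      # points below the cutoff have probability zero
    tail = 1.0 - stats.norm.cdf(CUTOFF, mu, sigma)
    return np.sum(stats.norm.logpdf(data, mu, sigma)) - len(data) * np.log(tail)

def truncated_draws(mu, sigma, n, rng):
    """Draw from the truncated model by rejection: redraw anything below CUTOFF."""
    out = np.empty(n)
    filled = 0
    while filled < n:
        d = rng.normal(mu, sigma, size=n)
        d = d[d >= CUTOFF]
        take = min(len(d), n - filled)
        out[filled:filled + take] = d[:take]
        filled += take
    return out

rng = np.random.default_rng(1)
data = truncated_draws(1.0, 1.0, 20_000, rng)          # start from parameters (1, 1)
fit = optimize.minimize(lambda p: -truncated_loglike(p, data),
                        x0=[0.5, 0.5], method="Nelder-Mead")
print(fit.x)                                            # should land near (1, 1)
```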
CommonCrawl
Abstract. We prove rigorous lower and upper bounds for the mass gap of the ferromagnetic spin 1/2 XXZ chain. The two bounds coincide asymptotically in the Ising limit $\Delta\to\infty$. Near the isotropic point, $\Delta=1$, the estimates are good enough to determine the critical behaviour of the mass gap unambiguously. The derivation does not rely on exact solutions.
CommonCrawl
Two closed vessels of equal volume contain air at 105 kPa, 300 K and are connected through a narrow tube. If one of the vessels is now maintained at 300 K and the other at 400 K, what will be the pressure in the vessels? I don't want the solution of this question, but I want to know why the pressure will be the same in both vessels when one is maintained at 300 K and the other at 400 K. The molecules in the vessel at 400 K will have greater kinetic energy and will exert greater pressure on the walls than the molecules of the other vessel. Now one will say that some of the molecules of the vessel at the higher temperature will move to the other vessel so that the pressure remains the same in both vessels, but why is that so? Why can't the pressures in the two vessels be different? By the ideal gas law at fixed volume and amount of gas, if $T$ is increased, $p$ is increased. As $p$ is increased, there will be a net flow of mass, until they are balanced. That's what we observe. Now, we can think of it microscopically. Initially, there are the same number of particles in each vessel. When heating up there is an increase in velocity. So, in the high-temperature vessel the same number of molecules move with higher velocity. Imagine a plane in the middle of the narrow tube: because of the higher velocities, the number of collisions per unit time will be higher in the $400\ \mathrm K \to 300\ \mathrm K$ direction. It indicates a net flow of mass. This flow of mass will equalize the pressures on both sides. This experiment divides the container so that pressure (a "flow" of gas molecules against the walls) can be equalized through the narrow tube, but net flow of heat from hot molecules passing through the narrow tube is seriously impeded (essentially does not occur because each section is maintained at 300 K or 400 K). This non-equilibrium condition is held static by the external devices maintaining the set temperatures and the narrow tube, which connects the pressures in the two "systems" but disconnects the heat flow. To see that the pressures will eventually be the same, imagine connecting the two vessels at 300 K by the narrow tube, then quickly heating one to 400 K. The pressure will increase in the heated vessel and, depending on the narrowness of the tube, will decrease as gas flows into the cooler vessel (where the pressure will increase because there will be more molecules there, even though it is still at 300 K). After some time, the pressures will equalize. I'm going to answer this question in two parts: First, why will some of the molecules of the vessel at higher temperature move to the other vessel? Pressure is simply the force per unit area that the gas exerts. When a gas is sitting in a vessel, it exerts a force on the walls - and the walls in turn exert an equal and opposite force. Thus, nothing moves (on a macroscopic level) in the system. If you open a tube to another vessel, then the gas in each vessel will be exerting a force on the gas in the tube. When both vessels are at equal temperatures and pressures, they exert equal and opposite forces, and the gas remains static (on a macroscopic level), and there is no net flow. Heating up one vessel would increase the pressure due to the ideal gas law: $$pV=Nk_\mathrm BT.$$ Since the higher temperature vessel will have a higher pressure, it will exert more force on the gas in the tube, and the gas has to move away from it. Secondly, why can't the pressure in both the vessels be different? The answer is that the system is closed, and the number of molecules is conserved - molecules cannot enter from outside the two vessels. 
Thus, molecules can't move from the hotter vessel to the cooler vessel forever - the hotter vessel would run out of molecules. The critical part of the ideal gas law to keep in mind is the $N$, the number of molecules. In the hotter vessel, molecules will travel out, and $N$ will be reduced, as long as the pressure is greater than in the cooler vessel. Since volume and temperature are constant, pressure must drop until the pressures are equal.
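For completeness, here is the standard mole-conservation calculation for the original numbers (the usual textbook route, sketched here rather than taken from any particular answer). With each vessel of volume $V$, initial state $(p_0, T_0) = (105\ \mathrm{kPa}, 300\ \mathrm K)$, and final temperatures $T_1 = 300\ \mathrm K$, $T_2 = 400\ \mathrm K$, conservation of the total amount of gas gives
$$\frac{p_f V}{R T_1}+\frac{p_f V}{R T_2}=\frac{2 p_0 V}{R T_0} \quad\Longrightarrow\quad p_f=\frac{2 p_0 T_1 T_2}{T_0 (T_1+T_2)} = \frac{2\cdot 105\cdot 300\cdot 400}{300\cdot 700}\ \mathrm{kPa} = 120\ \mathrm{kPa}.$$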
CommonCrawl
We say that centrifugal force is fictitious, yet we still use it in some problems. If the centrifugal force is equal and opposite to the centripetal force, wouldn't that make the net force zero? The centrifugal force is a "fictitious force" that appears when working in a rotating coordinate system. Basically (together with the Coriolis force) it's the imaginary force that would, if real, make objects move with respect to a fixed, non-rotating coordinate system in the same way that they actually move (due to inertia) with respect to the rotating coordinates that we're using. If all that sounds really confusing, please see the second half of this earlier answer I wrote, where I try to explain this in more detail (and with pictures!). Anyway, the point of these fictitious forces is that they let us apply the same laws and formulas of Newtonian physics in a rotating frame of reference as we would in a non-rotating one, and still get physically correct results, as long as we remember to also include the effect of those imaginary forces on all objects. For example, in a normal non-rotating coordinate system, a stationary object will remain stationary if (and only if) all the forces acting upon it cancel out, so that the net force acting on the object is zero. In a rotating coordinate system, an object which is stationary with respect to the coordinates (i.e. rotating along with them around the same axis at the same speed) will remain so if (and only if) all the real and imaginary forces acting on it cancel out, leaving an (apparent) net force of zero. In this example, since the spheres are motionless with respect to the rotating coordinate system, and since the centripetal and centrifugal forces balance out, they will remain motionless with respect to the rotating coordinates — i.e. they will continue to rotate around the same axis at the same speed. First, it must be stated that Newton's laws only hold in inertial frames. What this means is that accelerations must arise from forces (Second Law), and these forces arise from interactions (Third Law). The issue with rotating frames is that accelerations arise when forces of interactions are not present, so Newton's laws do not hold in rotating frames. However, the second law ($\mathbf F=m\mathbf a$) is nice to use since it tells us how to determine the position and velocity of a body given initial conditions. Therefore, we define "fictitious" centrifugal and Coriolis forces in order to keep this framework. They are "fictitious" because they are an artifact of the rotating reference frame rather than interactions, but they are not fake (for example, they are very real for anyone going around a sharp turn in a car). Essentially we have opted to abandon the third law in order to keep the second law. Now, onto your specific inquiry: If you are in a rotating frame, and there is a force equally opposing the centrifugal force, then yes the net force is zero (assuming no Coriolis force either). Therefore in the rotating frame there is no acceleration of the object in question. Of course, if you looked at the scenario from an inertial frame you would have a non-zero acceleration of the object as there is now a non-zero net force that is the centripetal force.
CommonCrawl
Suppose $f\colon\mathbb R\to\mathbb R$ satisfies $f(x+y)=f(x)+f(y)$ for all $x,y$. I know that if $f$ is continuous at one point then it is continuous at every point. From this I want to show that $f(x)=xf(1)$. Can anybody help me prove this? (1) Look at $0$ first: $f(0)=f(0+0)=f(0)+f(0)$, so $f(0)=0=0\cdot f(1)$. (2) Use induction to prove that $f(n)=nf(1)$ for every positive integer $n$, and use $f(0)=0$ to show that $f(n)=nf(1)$ for every negative integer as well. (3) Since the same induction gives $f(nx)=nf(x)$, we get $nf\left(\frac1n\right)=f(1)$, so $f\left(\frac1n\right)=\frac1nf(1)$ for every positive integer $n$. (4) Once you've got it for $f\left(\frac1n\right)$, use the idea of (2) to get it for all rationals. Then use continuity at a point.
CommonCrawl
Abstract: One of the main contributions of this paper is to illustrate how large deviation theory can be used to determine the equilibrium distribution of a basic droplet model that underlies a number of important models in material science and statistical mechanics. The model is simply defined. $K$ distinguishable particles are placed at random onto the $N$ sites of a lattice, where the ratio $K/N$, the average number of particles per site, equals a constant $c \in (1,\infty)$. We focus on configurations for which each site is occupied by at least one particle. The main result is the large deviation principle (LDP), in the limit where $K \rightarrow \infty$ and $N \rightarrow \infty$ with $K/N = c$, for a sequence of random, number-density measures, which are the empirical measures of dependent random variables that count the droplet sizes. The rate function in the LDP is the relative entropy $R(\theta | \rho^*)$, where $\theta$ is a possible asymptotic configuration of the number-density measures and $\rho^*$ is a Poisson distribution restricted to the set of positive integers. This LDP reveals that $\rho^*$ is the equilibrium distribution of the number-density measures, which in turn implies that $\rho^*$ is the equilibrium distribution of the random variables that count the droplet sizes. We derive the LDP via a local large deviation estimate of the probability that the number-density measures equal $\theta$ for any probability measure $\theta$ in the range of these random measures.
CommonCrawl
In Part I of this article, I introduced the problem and the initial naive solution to it. In this part, I will discuss the random algorithm that was used to solve the problem, and the results we achieved using it. In Part I, I introduced the naive (master/worker) algorithm, and the first version of a distributed algorithm. The distributed algorithm allows worker processes to ask other worker processes (chosen at random) for work when needed. This actually introduced a defect: far too many messages were exchanged between workers. When a worker process $P_i$ receives a request for work from another process $P_j$, it will provide $P_j$ with some portion of its own pending tasks (if any). Process $P_i$ may split its own queue into two equal halves and send one half to the requesting process $P_j$, as our first try did. However, $P_i$ could split its own queue unevenly and assign some unequal portion of the work to $P_j$. But how much should it share? Remember that work queue items in this case are either files or directories. If a work queue entry is a file, then it's easy to quantify how much work that represents (keep in mind that at this point, the items are just filenames with no information about whether they are files or directories). But if a work queue entry is a directory, then how does one quantify how much work it represents without first fully exploring the directory? For each work queue item Process $P_i$ gives to Process $P_j$, Process $P_i$ really has no idea how much work it's giving away - which makes our goal of equitable load distribution seem impossible. When faced with such a decision, it is impossible to know the ideal solution without having previously solved the exact instance of the problem. Having no other information to go on, and knowing the problem with splitting the queues equally, I chose to split them randomly. The idea was that when a process received a work request from another, it would just give away a random portion of the work (even if multiple processes requested work). What if random is wrong? Theoretically, splitting the queue at random could actually cause very unbalanced load distribution. How bad could it be? To be specific about cost, we'll use a simple model for network cost: $$C(n) = \alpha + n\beta$$ In this model, $\alpha$ is the network latency, $\beta$ is the average transmission cost for one network unit, and $n$ is the message length. Now that we have a metric, let's examine the worst case scenario. Asymptotically, the queue can be split at most $n$ times for $n$ elements. This is because for every split, the splitting process will always keep at least one element to process. No matter how the work is distributed amongst the processes, the pending work of a process decreases by at least one unit after a queue split. If we assume that there are $p$ processes and each process consumes one unit of the $n$ initial work units between two consecutive splits of its queue, then the work to be done decreases by at least one for each queue split. This in turn implies a reduction by $p$ units over $p$ processes after their corresponding queue splits. The communication cost is dependent on the queue splitting, since every queue split implies two communications (a request and a reply), and a reply may carry up to $O(n)$ queue items, costing up to $C(n) = \alpha + n\beta$. With at most $n$ splits, we know that in the absolute worst case, the cost of this distributed algorithm is $O(n^2)$. In practice however, this case was never encountered. Below is a graph comparing the distributed algorithm to the centralized one. 
It's easy to see from the graph that our distributed algorithm outperforms the centralized algorithm by a significant margin, even with fewer processes. You might be wondering, can we do better? To find out, I profiled the code to see what exactly was going on. The graph above shows the results of profiling the distributed algorithm by function. The first function, communicate(), represents all of the communication overhead that our algorithm uses (OpenMPI in our implementation). The callback() function is called for every file in the tree. If you wanted to copy the file tree, then the callback would be a function that copies one file. In this test, it was just a logging function. Next, the function readdir() is the function used to read the contents of a directory onto a worker's queue. Finally, lstat() is the function used to determine if a filename is a directory or file. We can safely assume that the distributed algorithm can't really get much faster, because the graph above shows that the vast majority of the execution time is spent in system calls to lstat and readdir, as it should be. Additionally, the same amount of work could be accomplished with far fewer resources, as shown in the first graph. The implementation of this algorithm is still in use today on supercomputers at the Los Alamos National Laboratory. The algorithm was implemented as a library called libcircle, named for the circular ordering of processes for termination detection (explained in Part I). It's written in C, and the source can be found on GitHub. The original implementation has been wrapped so that it can be used from Go, and you can also find that library on GitHub. If you've ever put 'too many' files into a directory, then you know what happens when you type ls -l (long listing). Each file's attributes must be read using the stat system call so that the file's attributes (metadata) can be displayed in the output. Calling stat for hundreds of thousands of files can take a very long time (depending on the filesystem). Large filesystems can contain millions or billions of files inside of many directories. When I worked at the Los Alamos National Laboratory, we had several large parallel file systems attached to multiple supercomputers. These parallel file systems were used as fast, temporary storage for jobs running on the supercomputers. Users were expected to migrate their data to a more reliable (slower) file system when they were done with it, in order to free space on the parallel file system for other users. We observed that more often than not, users didn't even know how much space they were using on the parallel file system. To make matters worse, the standard tools for finding that information relied on calling stat serially. If you give a scientist 100,000 processors, they will write 100,000 files over and over again. This is the first part of the story of how a distributed, randomized file tree walk algorithm was developed. At the time I had a working knowledge of the file system internals from many nights spent keeping the file system running. I was also a graduate student and had worked on several parallel code projects for fun and profit. I knew that our file system metadata could be queried in parallel, and even a small speedup would be better than a serial process. Armed with the skills and the knowledge, I set out to solve the problem at hand: efficiently visit every file in a tree in parallel. 
If the general problem could be solved, then it could be applied to any operation on a file tree (copying, deleting, stating). The first, and simplest, solution to implement was a master/worker architecture. The master process distributes work units to each of the worker processes, one or more units at a time. Each worker performs the work assigned and sends the results back to the master process (the result might actually contain more work). In this case, the work units are file metadata queries and the results are the attributes. The master would begin by opening the root directory to be traversed, reading its children onto the queue, and then dispatching those items to worker processes. Worker processes would stat each item they received and, if the item was a directory, would send its children to the master process. Although simple, this solution is far from ideal. First of all, the process of querying metadata can be done by nodes independently (in other words - distributed), but using the master/worker architecture centralizes it. The second problem can't be explained without a little context. The parallel program for this algorithm would be running on a supercomputer, whose network interconnect was designed to maximize throughput between every pair of communication endpoints. With this in mind, note that all of the communication takes place between the master process and the worker processes, while none takes place between the workers themselves. To demonstrate this point, the implementation for the master/worker file tree walk was instrumented so that communication between the processes was logged (both message size and message count). The graph above shows a heat map of the communication data for a tree walk on a 471TB file system using 30 processes (called ranks). Each process is identified by its rank, and each intersection of two ranks in the graph is colored based on the amount of data that was exchanged between them. For example, rank 0 didn't send any data to rank 0, so that block is colored white (meaning 0 Bytes). You can see that rank 0 (the master process) exchanged data with each worker, but favored the first few (the implementation used a LIFO queue). This was indicative of another problem with the algorithm: unbalanced load distribution. Also note that because workers didn't exchange any data amongst themselves, all of the corresponding blocks are white. Although it was significantly faster than a serial tree walk, a lot of performance was left on the table. How much performance did we leave on the table using a centralized algorithm? The final algorithm, presented in part II, was instrumented as well, and the results are shown below. Only 8 processes were used, because the new algorithm is more efficient, and we were able to saturate the file system metadata servers with a smaller number of workers. You may be thinking that networking != performance, and you are correct. But you can rest assured that I'll show the speedup achieved by this load redistribution in part II. These visualizations were made using D3.js. You can see the source here. I was determined to make it faster, and started from scratch. This time however, I abandoned the master/worker architecture in favor of dynamic load distribution. I devised a simple scheme in which all processes were implemented as simple state machines. There would be no master process. Instead, each worker would maintain its own work queue, and operate independently using a simple loop. 
If the process had work items queued, it would process them; otherwise it would ask another worker for work at random. After processing a work queue item, if another process had requested work from it, then it would send that process half of its own queue. This scheme is known as work stealing. The first thing I noticed was that a large number of messages were exchanged as the processes were requesting work from each other, performing the same tree walk as in the first figure. The graph above shows the number of messages sent between all nodes over time. At first, only one process has work - the one that opened the first directory. Then there is a flurry of communication as processes ask each other for work (often without success). Then at the end of the walk, communication spikes again as processes begin to starve. But that raises another question: without a master process, how would the processes even know when the walk was over? Luckily, the problem of distributed termination detection had already been solved by Dijkstra. The algorithm is very clever, and uses simple rules to accomplish its goal. I'll summarize them here, but if you want to know more then you should read the paper. All processes are logically ordered. This one is easy because the processes are already uniquely identifiable by their MPI rank. Processes have a color, which can be black or white. Every process is initially colored white. There is a token which can be passed between processes (only one process can possess the token at a time). The token can be colored black or white. Initially, the first process (rank 0) has the token, and the token is white. When the first process (rank 0) is idle, it sends the white token to the next process (rank 1). Any time a process sends work to another process with a lower rank, it colors itself black. If a process is black and receives the token, it makes the token black, makes itself white, and then forwards the token to the next process. If a process is white and receives the token, it forwards the token unchanged. Tokens are only forwarded by a process when it has no work in its queue. Termination is detected when the first process (rank 0) receives a white token. The implementation of this algorithm can be found on GitHub here. I would eventually like to rewrite it in a higher level language, but the C implementation has been sufficient for my uses so far.
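To make the worker loop concrete, here is a hedged, single-machine Python sketch of the randomized work-stealing walk (my own illustration, not libcircle's API): threads stand in for MPI ranks, queues are stolen from in random shares as in Part II, and a shared counter of outstanding items stands in for the token-based termination detection described above, which is only needed when there is no shared memory.

```python
# Hedged sketch: randomized work-stealing tree walk with threads in place of
# MPI ranks. The shared "outstanding" counter replaces the token algorithm,
# which is only necessary in a distributed-memory setting.
import os
import random
import threading
import time
from collections import deque

NUM_WORKERS = 4
queues = [deque() for _ in range(NUM_WORKERS)]
locks = [threading.Lock() for _ in range(NUM_WORKERS)]
outstanding = 0                              # queued or in-progress items
count_lock = threading.Lock()

def push(rank, path):
    global outstanding
    with count_lock:
        outstanding += 1
    with locks[rank]:
        queues[rank].append(path)

def worker(rank, callback):
    global outstanding
    while True:
        with locks[rank]:
            item = queues[rank].popleft() if queues[rank] else None
        if item is None:
            with count_lock:
                if outstanding == 0:         # nothing queued or running anywhere
                    return
            # Ask a random victim for work; it hands over a random share,
            # always keeping at least one item for itself.
            victim = random.choice([r for r in range(NUM_WORKERS) if r != rank])
            first, second = sorted((victim, rank))
            with locks[first], locks[second]:
                n = len(queues[victim])
                if n > 1:
                    for _ in range(random.randint(1, n - 1)):
                        queues[rank].append(queues[victim].pop())
            time.sleep(0.001)
            continue
        try:
            callback(item)                   # e.g. stat, copy, or just log the entry
            if os.path.isdir(item) and not os.path.islink(item):
                try:
                    names = os.listdir(item)
                except OSError:
                    names = []
                for name in names:
                    push(rank, os.path.join(item, name))
        finally:
            with count_lock:
                outstanding -= 1

push(0, ".")                                 # seed the walk with the root directory
threads = [threading.Thread(target=worker, args=(r, print)) for r in range(NUM_WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```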
CommonCrawl
CMS Collaboration; Canelli, Florencia; Kilminster, Benjamin; Aarestad, Thea; Brzhechko, Danyyl; Caminada, Lea; De Cosa, Annapaoloa; Del Burgo, Riccardo; Donato, Silvio; Galloni, Camilla; Hreus, Tomas; Leontsinis, Stefanos; Mikuni, Vinicius Massami; Neutelings, Izaak; Rauco, Giorgia; Robmann, Peter; Salerno, Daniel; Schweiger, Korbinian; Seitz, Claudia; Takahashi, Yuta; Wertz, Sebastien; Zucchetta, Alberto; et al (2018). Measurement of normalized differential $\mathrm{t\overline{t}}$ cross sections in the dilepton channel from pp collisions at $\sqrt{s} = 13$ TeV. Journal of High Energy Physics, 04:060. Normalized differential cross sections for top quark pair production are measured in the dilepton (e+e−, μ+μ−, and μ∓e±) decay channels in proton-proton collisions at a center-of-mass energy of 13 TeV. The measurements are performed with data corresponding to an integrated luminosity of 2.1 fb−1 using the CMS detector at the LHC. The cross sections are measured differentially as a function of the kinematic properties of the leptons, jets from bottom quark hadronization, top quarks, and top quark pairs at the particle and parton levels. The results are compared to several Monte Carlo generators that implement calculations up to next-to-leading order in perturbative quantum chromodynamics interfaced with parton showering, and also to fixed-order theoretical calculations of top quark pair production up to next-to-next-to-leading order.
CommonCrawl
Let $N$ and $M$ be two integers with $M\ge N\ge2$. There is a list of altogether $M$ statements that is divided into three parts: the first part consists only of the first statement; the second part consists of the $N-1$ statements with numbers from $2$ up to $N$; the third (possibly empty) part consists of the remaining statements with numbers from $N+1$ up to $M$. Statement 1: Not all the statements on this list are false. Statement n: All statements with a number divisible by $n$ are false. The statement with number $k$ with $N+1\le k\le M$ in the third part may be arbitrary. Determine (as a function of $N$) the minimum value of $M$ for which this system of statements does not yield a paradox. (1) No statement $n$ with $2\le n\le N$ in the second part can be true, as it would imply its own falseness. Hence every statement in the second part of the list is false. (2) Since each such statement $n$ is false, some statement with a number divisible by $n$ must be true; it cannot lie in the second part, and $1$ is not divisible by $n$, so for every $n$ with $2\le n\le N$ there is some true statement in the third part of the list whose number is divisible by $n$. (3) Now for $n=N$, the above discussion implies that the third part of the list contains a true statement whose number is divisible by $N$. The smallest multiple of $N$ larger than $N$ is $2N$, so this implies $M\ge2N$. The first statement is TRUE. All statements in the second part are FALSE. All statements in the third part are TRUE (let us say that they state $1+1=2$). Since every number $n$ with $2\le n\le N$ has a multiple in the range $N+1,N+2,\ldots,2N$, the truth values are compatible and do not yield a paradox. The answer is that the minimum value is $M=2N$. N can be as small as 2. Essentially, for your highest $n=N$, you have to have a true statement at $2N$. Between N and M, every prime will need to be true. The minimum value of M is N+1. Statements 1 to (n-1) say: Not all statements in this list are false. If the first statements are true, this does not put any requirements on the rest of the list, as they themselves are true. If the first statements are false, then all statements must be false, and thus the statements in the second part must be true. Since the statements in the second part imply their own falseness, this leads to a paradox. Therefore all statements in the first part are true. 1) Not all the statements on the list are false. Am I missing the question entirely?
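The answer $M=2N$ can also be verified by brute force for small $N$. Here is a hedged Python sketch (my own, not from the thread): it enumerates all truth assignments, treats the third-part statements as freely choosable, and reports the smallest $M$ admitting a consistent assignment.

```python
# Brute-force check that the minimum M without a paradox is 2N (for small N).
from itertools import product

def consistent(N, M):
    """True if some truth assignment makes statements 1..N self-consistent;
    statements N+1..M are arbitrary, so their values are free."""
    for v in product([False, True], repeat=M):   # v[k-1] = value of statement k
        if v[0] != any(v):                       # statement 1: not all are false
            continue
        ok = True
        for n in range(2, N + 1):                # statement n: all multiples of n are false
            claim = all(not v[k - 1] for k in range(n, M + 1, n))
            if v[n - 1] != claim:
                ok = False
                break
        if ok:
            return True
    return False

def min_M(N):
    M = N
    while not consistent(N, M):
        M += 1
    return M

for N in range(2, 7):
    print(N, min_M(N))    # prints M = 2N in each case
```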
CommonCrawl
A topological space $X$ such that every mapping $g\colon A\to X$ of any closed subset $A$ of an arbitrary normal space $Y$ can be extended to the entire space $Y$. A direct product of absolute retracts for normal spaces is an absolute retract for normal spaces, as is any retract (cf. Retract of a topological space) of an absolute retract for normal spaces. In particular, the following spaces are absolute retracts for normal spaces: the unit interval $I$; the $n$-dimensional cube $I^n$; and the Hilbert cube $I^\omega$. Any two mappings of a binormal space into an absolute retract for normal spaces are homotopic; while a binormal absolute retract for normal spaces is contractible into a point. Usually an absolute retract (AR) for normal spaces is defined to be a normal space which is a retract of every normal space in which it is imbedded as a closed subset. A space satisfying the property in the article above is then called an absolute extensor (AE) for normal spaces. One then proves that a space is an (AR) if and only if it is an (AE). Absolute retracts and extensors can be defined for any class of spaces (i.e. not just for normal spaces). A binormal space is a space $X$ for which the product $X\times I$ is normal. It can be proved that a space $X$ is binormal if and only if $X$ is normal and countably paracompact (cf. Paracompactness criteria).
CommonCrawl
Abstract: In this article it is proved that for any quasimetric $d$ on a set $X$ with a base-point $p_X$ there exists a maximal invariant extension $\hat\rho$ on the free monoid $F^a(X,\mathcal V)$ in a non-Burnside quasi-variety $\mathcal V$ of topological monoids (Theorem 6.1). This fact makes it possible to prove that for any non-Burnside quasi-variety $\mathcal V$ of topological monoids and any $T_0$-space $X$ the free topological monoid $F(X,\mathcal V)$ exists and is abstract free (Theorem 7.1). Corollary 10.2 affirms that $F(X,\mathcal V)$, where $\mathcal V$ is a non-trivial complete non-Burnside quasi-variety of topological monoids, is a topological digital space if and only if $X$ is a topological digital space. Keywords and phrases: quasi-variety of topological monoids, free monoid, invariant distance, quasimetric.
CommonCrawl
Characteristics of sets on the real line. The least upper bound of a given set of real numbers is the smallest number bounding this set from above; its greatest lower bound is the largest number bounding it from below. This will now be restated in more detail. Let there be given a subset $X$ of the real numbers. A number $\beta$ is said to be its least upper bound, denoted by $\sup X$ (from the Latin "supremum" — largest), if every number $x\in X$ satisfies the inequality $x\leq\beta$, and if for any $\beta'<\beta$ there exists an $x'\in X$ such that $x'>\beta'$. A number $\alpha$ is said to be the greatest lower bound of $X$, denoted by $\inf X$ (from the Latin "infimum" — smallest), if every $x\in X$ satisfies the inequality $x\geq\alpha$, and if for any $\alpha'>\alpha$ there exists an $x'\in X$ such that $x'<\alpha'$. For example, for the closed interval $[a,b]$ one has $\sup[a,b]=b$ and $\inf[a,b]=a$, and both numbers belong to the set; for the open interval $(a,b)$ one also has $\sup(a,b)=b$ and $\inf(a,b)=a$, but neither number belongs to the set. These examples show, in particular, that the least upper bound (greatest lower bound) may either belong to the set (e.g. in the case of the interval $[a,b]$) or not belong to it (e.g. in the case of the interval $(a,b)$). If a set has a largest (smallest) member, this number will clearly be the least upper bound (greatest lower bound) of the set. Each non-empty set of real numbers has a unique least upper bound (greatest lower bound), finite or infinite. All non-empty sets bounded from above have finite least upper bounds, while all those bounded from below have finite greatest lower bounds. The terms least upper (greatest lower) limit of a set are also sometimes used instead of the least upper bound (greatest lower bound) of a set, in one of the senses defined above. By the least upper bound (greatest lower bound) of a real-valued function, in particular of a sequence of real numbers, one means the least upper bound (greatest lower bound) of the set of its values (cf. also Upper and lower limits). Commonly, an upper bound of a set $S$ of real numbers is a number $b$ such that for all $x\in S$ one has $x\leq b$. The least upper bound of $S$ is then defined as an upper bound $B$ such that for every upper bound $b$ one has $B\leq b$. Analogous definitions hold for a lower bound and the greatest lower bound. If the least upper bound of $S$ belongs to $S$, then it is called the maximum of $S$. If the greatest lower bound of $S$ belongs to $S$, then it is called the minimum of $S$. The fundamental axiom of the real number system, or continuity axiom, may be expressed in the form: every non-empty set of real numbers bounded above has a real number supremum.
CommonCrawl
I have been trying to make my way through the homotopy type theory book, slowly but surely, and I just finished reading this introductory series of 3 articles on hott on ScienceForAll. I'm aware that we'd get stuck at something like uudd(0), but then I'm sure we could have more rules to swap the ups and downs around or something. Then, having those equalities at the definitional level, u(d(n))=n : Z (3-bar equality symbol), we would get the equality type from above u(d(n))=n (as a type). Is the problem that it's too strong? If I understand the question correctly, one answer is that the rules of type theory are not (supposed to be) arbitrarily chosen independently of each other like the axioms of set theory are. They come in "packages", one for each "type-forming operation", and each package has the same general shape: it consists of a Formation rule, some Introduction rules, some Elimination rules, and some Computation rules. A Formation rule tells you how to introduce a type, e.g. "if $A$ and $B$ are types, so is $A\times B$". An Introduction rule tells you how to introduce terms in that type, e.g. "if $a:A$ and $b:B$, then $(a,b):A\times B$". An Elimination rule tells you how to use terms in that type to construct terms in other types, e.g. "if $f:A\to B\to C$, then $rec(f):A\times B\to C$". And a Computation rule tells you what happens when you apply an Elimination rule to an Introduction rule, e.g. "$rec(f)((a,b)) \equiv f(a)(b)$". These four groups of rules that pertain to any type former can't be chosen arbitrarily either; they have to be "harmonious". There's no formal definition of what this means, but the idea is that the Introduction and Elimination rules should determine each other, and the Computation rules should tell you exactly how to apply any Elimination rule to any Introduction rule and no more. A bit more specifically, there are two kinds of type formers: positive ones and negative ones. For a positive type, you choose the Introduction rules, and then the Elimination rules are essentially determined by saying "in order to define a function out of our new type, it suffices to specify its value on all the inputs coming from some Introduction rule". For a negative type, you choose the Elimination rules, and then the Introduction rules are essentially determined by saying "in order to construct an element of our new type, it suffices to specify how all the Elimination rules would behave on that element". In both cases, the Computation rules then say that these "specifications" do in fact hold (as definitional equalities). Why do we require this sort of harmony between the rules? From a computational point of view, it's so that we can actually compute with the Computation rules. If you didn't have that sort of harmony, then you might end up with "stuck" terms with an Elimination form applied to an Introduction form but no applicable Computation rule, or conversely if there were too many Computation rules then you might have some terms that try to "compute" to many different things. From a category-theoretic point of view, it's because we're specifying objects by universal properties: a positive type former has a "left" universal property like a colimit, while a negative type former has a "right" universal property like a limit. I wrote a blog post about this here. A different answer is that one of the purposes of higher inductive types is to define homotopy types containing nontrivial paths. 
The judgmental equalities coming from computation rules cannot give rise to nontrivial paths, because there is no way for two things to be "judgmentally equal in more than one way". By contrast, two things can be propositionally equal in more than one way (because an equality type can contain more than one term), so we can regard those as paths. Higher inductive types also make sense in "extensional type theory" where there is no (or little) distinction between propositional and judgmental equality, and in this case it is true that every path-constructor of a HIT gives rise to a judgmental equality as well as a propositional one. In this case, they are less interesting, since all types are 0-truncated, but they still have a good deal of uses. However, at a basic level the path-constructors are still Introduction rules for the reasons described in my other answer, with the resulting judgmental equalities coming from the "reflection rule" of extensional type theory.
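For concreteness, the product-type example used in the first answer can be displayed as the usual package of inference rules (this is just a restatement of the rules quoted above, not an addition to them):

\[
\frac{A\ \mathsf{type}\quad B\ \mathsf{type}}{A\times B\ \mathsf{type}}\ (\mathrm{Formation})
\qquad
\frac{a:A\quad b:B}{(a,b):A\times B}\ (\mathrm{Introduction})
\]
\[
\frac{f:A\to B\to C}{\mathrm{rec}(f):A\times B\to C}\ (\mathrm{Elimination})
\qquad
\mathrm{rec}(f)((a,b))\equiv f(a)(b)\ (\mathrm{Computation})
\]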
CommonCrawl
I am reading the paper below about the optimal bid-ask spread in a market-making strategy. It finds an approximation to the optimal solution, but I cannot understand how, in practice, to set the parameters for a sample stock (e.g. AAPL). Assuming I have the stock below, how can I find all the parameters for the optimal bid-ask spread? How do I set $A$ and $k$ for my example stock? Which parameter accounts for the tick size? The constant frequency $\Lambda$ of market buy/sell orders is estimated by dividing the total volume traded over a day by the average size of market orders on that day. The exponent $\alpha$ is taken from the literature (it is presented in the original study), or else needs to be calculated from your specific dataset. The remaining parameter is likewise either taken from the literature or estimated from the historical values of your specific stock. Then $A = \Lambda / \alpha$ and $k = \alpha K$, with $K$ being a scaling parameter for the temporary impact of a market order.
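Putting the recipe above together, here is a minimal sketch (not from the paper) of how the parameters could be combined; all numerical values are placeholders rather than real AAPL data, and $\alpha$ and $K$ are assumed to come from the literature or from your own estimation, as the answer says.

#include <cstdio>
#include <cmath>

int main() {
    double dailyVolume  = 5.0e7;   // shares traded over the day (hypothetical)
    double avgOrderSize = 200.0;   // average market-order size (hypothetical)
    double alpha        = 1.5;     // exponent taken from the literature (placeholder)
    double K            = 0.5;     // scaling of the temporary impact of a market order (placeholder)

    double Lambda = dailyVolume / avgOrderSize;  // constant order-arrival frequency
    double A = Lambda / alpha;                   // A = Lambda / alpha
    double k = alpha * K;                        // k = alpha * K

    printf("Lambda = %.1f orders/day, A = %.1f, k = %.2f\n", Lambda, A, k);

    // The arrival intensity at distance delta from the mid-price is then
    // modelled as lambda(delta) = A * exp(-k * delta).
    double delta = 0.05;
    printf("lambda(%.2f) = %.2f\n", delta, A * std::exp(-k * delta));
}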
CommonCrawl
We obtain sufficient conditions for the differentiability of solutions to stationary Fokker--Planck--Kolmogorov equations with respect to a parameter. In particular, this gives conditions for the differentiability of stationary distributions of diffusion processes with respect to a parameter. Keywords: stationary Fokker--Planck--Kolmogorov equation, differentiability with respect to a parameter. Mathematics Subject Classification: Primary: 35J15; Secondary: 60J6.
CommonCrawl
Let $G$ and $H$ be two topological groupoids. Suppose that I have two morphisms $G \rightrightarrows H$ and I want to take the 2-coequalizer of these maps. I'd like an explicit description of (a particular model for) this weak colimit. I can do this very easily for groupoids in SET by constructing a groupoid with objects $G_0 \coprod H_0$, where I describe the arrows in terms of generators and relations. However, I then don't know what topology to put on the arrows. I'd also be happy if, instead of this, someone knew an explicit description of a weak pushout diagram of topological groupoids. General colimits of topological groupoids are shown to exist in the paper with Lew Hardy referred to by Jeremy (Math. Nachr. 71 (1976) 273-286); essentially, existence is an application of the Freyd adjoint functor theorem. This also makes it quite difficult to describe the topology explicitly, but in practice we often want only the universal property. Homotopy colimits are constructed as colimits: for example, a homotopy pushout is a double mapping cylinder, where the topologically discrete groupoid $\mathcal I$, the groupoid version of the unit interval, with two objects $0,1$, takes the place of the usual unit interval. The general homotopy colimit is more complicated, and I think you would have to refer to specialised papers, or ask Tim Porter!
CommonCrawl
But this obviously does not work. In general, I'd like to be able to impose more constraints on the $x_i$'s when they're being sorted. So basically we sort it the way it would be sorted with all subscripts == 1. This has the advantage of not returning a (potentially) wrong answer if the sort order is uncertain.
CommonCrawl
Now we're ready to implement some complex data types that are useful in contests. An abstract data type is just a data type that is defined by its behavior from the perspective of the user. In other words, an abstract data type is only defined by what it does, not by how it does it. You'll see what I mean in the following sections, where we study three basic ones and implement them. To implement an abstract data type is to write code that performs the behavior of that data type. We begin with the stack.

push x. Place the object x on top of the stack.
pop. Remove the topmost item in the stack.
is_empty. Returns true if the stack is empty, and false otherwise.

You can imagine this as a stack of books, for example. You can only place a book on top of the stack (push), and only take the topmost book (pop). This sounds simple enough, but in fact there are a few minor details that we need to focus on. What should be the type of the objects that can be put in the stack? The definition above allows for any type to be used, so we can imagine, for example, a stack of ints, longs, strings, etc. For our purposes, let's just implement a stack of ints. We also mention that there is a way to implement a stack so that it can take any type. You do this using C++ templates, but that's a lesson for another time. What happens if you pop an empty stack? This might seem minor, but it is better to define the stack's behavior now rather than later, so that the data type behaves predictably, robustly and reliably, and its user (which is you) will be able to handle it properly. For our purposes, let's just say we return 0 in case an empty stack is popped. What should the initial state of the stack be? In other words, what should a new stack contain? The most obvious answer is that it should be empty. So given the definition of a stack, how do we implement one? This sounds very simple to implement. Let's investigate an implementation of the stack using an array: it just uses a fixed array, and every push places an element at the "end". The length attribute signifies how many things have already been inserted, so we know where to put new items, where the topmost item is, and whether the stack is empty. Hooray! Our stack seems to work as intended! Except there is a major problem with our implementation: it can only hold up to $100$ items. This is because we only allocated a size of $100$ for contents. If we try to push more than a hundred items onto the stack, then we either get a segmentation fault or some obscure, hard-to-fix bugs. We should fix our implementation. The quickest fix is to simply allocate a much bigger array, but we still run into errors if we push more than a million items onto the stack. Surely this is rarer, but it can still happen, and when it does, it might hurt much more. More importantly, this is horribly inefficient if you only want to add a few items to the stack! A better implementation (a reconstruction sketch follows below) creates a new array that has $100$ more slots of available space whenever we run out. Now, does this work? We can test it again using our example code above. You can see that the output is still the same. Furthermore, if you perform a test that pushes lots of items, you'll find that it still works as intended. Thus, we just wrote our first working stack implementation! Notice that we didn't need to modify our example code to test our new implementation. That's a nice thing about abstract data types. We only specified the behavior, not how it's implemented, so any code that uses our data type should still work as intended.
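Since the original code listings did not survive in this copy, here is a minimal reconstruction sketch of the array-based stack with reallocation described above; the names Stack, contents and the exact test in main are assumptions of this sketch, not necessarily the tutorial's own code.

#include <cstdio>

struct Stack {
    int *contents;   // dynamically allocated storage
    int capacity;    // current allocated size
    int length;      // number of items currently in the stack

    Stack() {
        capacity = 100;
        contents = new int[capacity];
        length = 0;                 // a new stack starts out empty
    }
    ~Stack() { delete[] contents; }

    bool is_empty() { return length == 0; }

    void push(int x) {
        if (length == capacity) {
            // out of space: allocate 100 more slots and copy everything over
            int *bigger = new int[capacity + 100];
            for (int i = 0; i < length; i++) bigger[i] = contents[i];
            delete[] contents;
            contents = bigger;
            capacity += 100;
        }
        contents[length++] = x;
    }

    int pop() {
        if (is_empty()) return 0;   // popping an empty stack returns 0, as agreed above
        return contents[--length];
    }
};

int main() {
    Stack s;
    s.push(3); s.push(5);
    int a = s.pop(), b = s.pop(), c = s.pop();
    printf("%d %d %d\n", a, b, c);  // prints "5 3 0"
}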
Indeed, your favorite programming language contains tons of implementations of abstract data types. These implementations get constantly updated between versions, but your programs that use them don't really break. They will only break if the abstract data types themselves change, but language designers and implementors take a lot of care that this only happens rarely. Unfortunately, while the above works, it runs really slowly, because all that reallocating and copying takes a lot of time. Consider, for example, pushing one million times. Most of the time you only perform a few operations, but every $100$ pushes we need to reallocate. On the first reallocation, we allocate an array of size $200$; on the second, $300$; and so on. Thus, we can estimate the number of operations to be $$3\cdot 1000000 + (200 + 300 + 400 + \ldots + 1000100) > 5\times 10^9.$$ That's a lot of operations for just one million pushes! We can state this in a different way: we say that the worst-case running time of a push operation is $O(n)$, where $n$ is the number of items in the stack. We would like to have a faster implementation. Luckily, we can use linked lists, which we learned about in the previous section!

this->top = NULL; // in C++, "NULL" is just a synonym for "0"

This implementation uses a linked list to represent a stack. The first node of the linked list represents the top of the stack, and is referred to by top. Clearly, each operation only does a fixed amount of work, so we say that the running time of each operation is $O(1)$. This is the best you can hope for! We encourage you to check that it still works.

enqueue x. Place the object x at the back of the queue.
dequeue. Remove the item in front of the queue.
is_empty. Returns true if the queue is empty, and false otherwise.

You can imagine this as a queue of people waiting to order at McDollibee, for example. People that just arrived go to the back of the queue (enqueue), and the person in front of the queue will be served first (dequeue). First in, first out. Some people call "enqueue" and "dequeue" "push" and "pop" just like in stacks, but the names don't really matter too much. Here we just use "enqueue" and "dequeue" so we don't confuse terms. This implementation uses an array and two indices called front and back to determine the extent of the queue in the array. The items are always in a contiguous subarray of the array at any point. front points to the first item of this subarray, and back points to the first available space after this contiguous subarray. back is never behind front, and the two coincide exactly when the queue is empty. We encourage you to write some code that tests whether this works as intended. Let us know if it does! Like our first stack implementation, this also suffers from the problem of enqueueing too many items; our queue can only support up to $100$ enqueue operations. We can also solve this by reallocating the array every time back moves past the boundary of our array, like before. But again, this strategy is slow; the worst case for enqueue is $O(n)$. Well, let's try using a linked list instead. Unfortunately, a naive linked-list implementation is still slow. Notice that in an enqueue operation, we perform a while loop just to walk to the back of the queue. When the queue contains a lot of things, this is slow! Specifically, the running time of an enqueue is still $O(n)$. The fix is to keep pointers to both ends of the list. Study the roles of the front and back pointers carefully. This time we see that each operation runs in $O(1)$ time, which is really great! Our last abstract data type is the deque (double-ended queue), which supports the following operations:

push_left x. Place the object x at the left end of the deque.
push_right x. Place the object x at the right end of the deque. pop_left. Remove the leftmost item of the deque. pop_right. Remove the rightmost item of the deque. is_empty. Returns true if the deque is empty, and false otherwise. We leave it to you to implement this new data type with arrays. I assure you it won't be very hard, because it is actually very similar to our previous array implementations! Because of that though, it will probably be slow too. So the next thought is to use linked lists with back and front pointers just like in our queue implementation. Before proceeding, we encourage you to implement it with linked lists yourself. The real solution for this problem is to use a doubly linked list, as mentioned in the previous section! The idea is to add a new attribute in each node called prev, in addition to data and next, which points to the previous node in the linked list. There will be more pointers to update on every operation, but at least we'll now be able to implement all operations in $O(1)$ time!
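As a companion to the exercise above, here is a minimal sketch of the doubly-linked-list deque just described, under the same conventions as before (int elements, popping an empty structure returns 0); it is an illustration, not the tutorial's own code.

#include <cstdio>

struct Node {
    int data;
    Node *prev, *next;
    Node(int d) : data(d), prev(NULL), next(NULL) {}
};

struct Deque {
    Node *front, *back;   // pointers to both ends of the doubly linked list
    Deque() { front = back = NULL; }

    bool is_empty() { return front == NULL; }

    void push_left(int x) {
        Node *node = new Node(x);
        node->next = front;
        if (front != NULL) front->prev = node; else back = node;
        front = node;
    }

    void push_right(int x) {
        Node *node = new Node(x);
        node->prev = back;
        if (back != NULL) back->next = node; else front = node;
        back = node;
    }

    int pop_left() {
        if (is_empty()) return 0;
        Node *node = front;
        int x = node->data;
        front = node->next;
        if (front != NULL) front->prev = NULL; else back = NULL;
        delete node;
        return x;
    }

    int pop_right() {
        if (is_empty()) return 0;
        Node *node = back;
        int x = node->data;
        back = node->prev;
        if (back != NULL) back->next = NULL; else front = NULL;
        delete node;
        return x;
    }
};

int main() {
    Deque d;
    d.push_left(1); d.push_right(2); d.push_left(3);   // deque now holds 3, 1, 2
    int a = d.pop_right(), b = d.pop_left(), c = d.pop_left();
    printf("%d %d %d\n", a, b, c);                      // prints "2 3 1"
}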
CommonCrawl
Abstract: We provide non-smooth atomic decompositions for Besov spaces $B^s_{p,q}(\mathbb{R}^n)$, $s>0$, $0<p,q\leq \infty$, defined via differences. The results are used to compute the trace of Besov spaces on the boundary $\Gamma$ of bounded Lipschitz domains $\Omega$ with smoothness $s$ restricted to $0<s<1$ and no further restrictions on the parameters $p,q$. We conclude with some more applications in terms of pointwise multipliers.
CommonCrawl
I'm not sure if this is elementary math or something more advanced, so I apologize ahead of time if I put this in the wrong place. I have a large pot that needs to be lined entirely with a teflon sheet. The pot is 10" in diameter and 12" tall. I need to know how to figure out what size square sheet I need to buy that will fit inside the pot to line the bottom and sides. Yes, this question is more complicated than it first seems. You can't line the interior of a round pot with a square sheet without wrinkles, but then you have to account for the surface area of the wrinkles. Yes, I can cut the sheet to my will, and wrinkles don't matter for my purposes. So from your math I could buy a 24"x24" sheet, and if I stick it in the pot evenly, the shortest side should still reach the top of the pot (or above it); then I could easily just trim all the excess. I meant that if you can cut the sheet into any shapes, then it will work. For example, to cover the sides of the pot (a $12$ by $10\pi$ rectangle, essentially), you could cut two strips of $12$ by $5\pi$ ($12$ by $15.71$) rectangles. You can then use the remaining sheet to attempt to morph it into a perfect circle of diameter $10$ (though I don't know if this is perfectly possible in a finite number of cuts). You will need a 10" diameter circle for the bottom. We can easily assume that you will need to use a 10" square section of sheet to make this (100 sq. in.). You then need, as eddybob123 said, a $12 \times 10\pi$ strip for the sides. The easiest way to make this all fit in a sheet is to use a 12" x 10" section to make the base (more wasteful, but it is then the same width as the strip). We then need to be able to make a $12 \times 10(1+\pi)$ strip in order to cover the whole pot. That's 12" x 41.4", which can easily be done from a 24" x 24" sheet (cut as two 12"-wide strips), with a section 12" x 6.6" left over, plus the offcuts from making the circular base.
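For anyone who wants to redo the arithmetic in the thread, here is a tiny sketch with the thread's 10" diameter and 12" height hard-coded; it just reproduces the numbers quoted above.

#include <cstdio>
#include <cmath>

int main() {
    const double PI = 3.14159265358979;
    double diameter = 10.0, height = 12.0;

    double sideStripLength  = PI * diameter;               // ~31.4" of strip for the sides
    double totalStripLength = diameter + sideStripLength;  // base square plus sides, ~41.4"

    printf("side strip: %.1f\" x %.1f\"\n", height, sideStripLength);
    printf("base + sides as one %.0f\"-wide strip: %.1f\" long\n", height, totalStripLength);
    // Two 12"-wide strips cut from a 24" x 24" sheet give 48" of total length,
    // which covers the 41.4" needed, as stated in the thread.
}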
CommonCrawl
We use a new compilation of the hadronic $R$-ratio from available data for the process $e^+e^-\to$ hadrons below the charm mass to determine the strong coupling $\alpha_s$, using finite-energy sum rules. Quoting our results at the $\tau$ mass to facilitate comparison to the results obtained from similar analyses of hadronic $\tau$-decay data, we find $\alpha_s(m_\tau^2)=0.298\pm 0.016\pm 0.006$ in fixed-order perturbation theory, and $\alpha_s(m_\tau^2)=0.304\pm 0.018\pm 0.006$ in contour-improved perturbation theory, where the first error is statistical, and the second error combines various systematic effects. These values are in good agreement with a recent determination from the OPAL and ALEPH data for hadronic $\tau$ decays. We briefly compare the $R(s)$-based analysis with the $\tau$-based analysis. 1. Add captions to Tables and enumerate them. 2. In section 2 the authors infer that more data in the region s<4 GeV2 should imply a more precise determination of αs as compared with the higher s region. However, I do not see why this should be true in general, since theoretical uncertainties play a crucial role in the extraction of αs. Response: This is discussed in much more detail in Ref. , to which we refer through footnote 1. 3. Explain briefly how the errors quoted in the last column of would-be Table 1 are obtained. Response: This is very simple: by propagation of the data errors. We added a note stating this. 4. The error on αs displayed in Figure 4 is not the total error. This is stated by the authors in footnote 5, but it should also be stated in the caption of Figure 4. The total error should possibly be indicated in the Figure, or reported in the text. Response: We added a sentence to the caption indicating that the individual error bars on the points reflect the fit errors. The purpose of this plot is explained in detail in the text, and it's goal is not to show the final values of the error bars on αs, which can be found in Eq. (6). 5. Data for w4 should also be included in Figure 4, according to the content of would-be Table 2. Response: We did not do this to avoid clutter in the plot. We added a sentence to this extent in the caption of this figure. 6. What is the effect of neglected higher dimensional condensates on the data in Figure 4, and how do they compare with the duality violations (black circles) especially in the low smin0 region? Response: No higher dimensional condensates have been neglected, as explained in the text. The condensates we included follow directly from Cauchy's theorem. 7. The authors should address more clearly the stability of their fitted results reported in Figure 4 when varying smin0 and smax0. An analogous observation applies to section 5, see point 9. Response: This is discussed in detail in the paper this writeup summarizes, see Ref. , to which we refer extensively for all details. 8. What is the channel displayed in Figure 5 left panel, V, A, V+A? And what are the uncertainties in the fitted curves? Response: We chose the V channel, as this makes most sense in comparison with the e+e-based plots. We added a clarification to the caption. 9. Section 5 concludes that the determination of αs from e+e− data and the one from τ data are consistent. However, there is hardly enough information that can be extracted from Figure 5 and the surrounding text. Importantly, what happens to the final value of αs and to the fitted parameters of the duality-violation model when one varies smin0 in the τ-data fits? This is a relevant point. 
I think that an accurate study of this dependence is needed in order to assess the stability and consistency of the results and to draw conclusions. If a complete analysis cannot be worked out in a reasonably short time, the authors can at least acknowledge this point in their contribution and formulate their final remarks accordingly. Response: This writeup is not concerned with the τ-based analysis, for which we refer to Refs. [11,18], and references therein. The authors have implemented most of the revisions. Though I would have preferred a more thorough reaction to point 9 of my report, there is sufficient improvement and the contribution can now be accepted for publication.
CommonCrawl
Recall that if $X$ and $Y$ are sets then a function $f : X \to Y$ is a rule which assigns to each $x \in X$ a unique $y \in Y$. We can more precisely define a function in terms of sets. Definition: Let $X$ and $Y$ be sets. A Function from $X$ to $Y$ is a subset $f$ of $X \times Y$ such that for every $x \in X$, if $(x, y_1), (x, y_2) \in f$ then $y_1 = y_2$. In other words, a function from a set $X$ to a set $Y$ can be defined as a subset of the cartesian product $X \times Y$ such that for each $x \in X$ there is exactly one ordered pair $(x, y)$ where $y \in Y$. Definition: Let $X$ and $Y$ be sets. A Relation from $X$ to $Y$ is a subset $R$ of $X \times Y$. The definition above more precisely defines a binary relation. More generally, if $X_1, X_2, ..., X_n$ are sets then an $n$-ary relation on $X_1$, $X_2$, …, $X_n$ is a subset $R$ of $X_1 \times X_2 \times ... \times X_n$. The terms "relation" and "binary relation" will mean the same thing unless specified otherwise. Observe the definition of a relation closely. Note that a relation from $X$ to $Y$ is simply a subset $R$ of the cartesian product $X \times Y$. There is nothing more to it! Consider, for example, $X = \{0, 1\}$ and $Y = \{a, b\}$ with $a \neq b$, and let $R = \{(0, a), (0, b)\}$. Clearly $R$ is a relation. Is $R$ a function? If $R$ were a function, then for $0 \in X$ the pairs $(0, a), (0, b) \in R$ would force $a = b$, which is a contradiction. So $R$ is NOT a function.
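As a small illustration of the definition (not part of the original article), one can check the condition mechanically for a finite relation stored as a list of ordered pairs; the sketch below, with made-up names, tests whether any $x$ is related to two different elements of $Y$, which is exactly the uniqueness condition in the displayed Definition (totality over $X$ would need a separate check under the "exactly one" reading).

#include <cstdio>
#include <map>
#include <set>
#include <vector>
#include <utility>

// Returns true if no x appears with two distinct second components.
bool isFunction(const std::vector<std::pair<int, char>> &R) {
    std::map<int, std::set<char>> images;
    for (const auto &p : R) images[p.first].insert(p.second);
    for (const auto &entry : images)
        if (entry.second.size() > 1) return false;  // some x has two distinct images
    return true;
}

int main() {
    // R = {(0,'a'), (0,'b')} is a relation from X = {0,1} to Y = {'a','b'},
    // but not a function, since 0 is related to both 'a' and 'b'.
    std::vector<std::pair<int, char>> R = {{0, 'a'}, {0, 'b'}};
    printf("R is %sa function\n", isFunction(R) ? "" : "not ");

    // f = {(0,'a'), (1,'a')} satisfies the uniqueness condition.
    std::vector<std::pair<int, char>> f = {{0, 'a'}, {1, 'a'}};
    printf("f is %sa function\n", isFunction(f) ? "" : "not ");
}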
CommonCrawl
Aims. Results obtained in very-high-energy (VHE; E >= 100 GeV) gamma-ray observations performed with the H.E.S.S. telescope array are used to investigate particle acceleration processes in the vicinity of the young massive stellar cluster Westerlund 1 (Wd 1). Methods. Imaging of Cherenkov light from gamma-ray induced particle cascades in the Earth's atmosphere is used to search for VHE gamma rays from the region around Wd 1. Possible catalogued counterparts are searched for and discussed in terms of morphology and energetics of the H.E.S.S. source. Results. The detection of the degree-scale extended VHE gamma-ray source HESS J1646-458 is reported based on 45 h of H.E.S.S. observations performed between 2004 and 2008. The VHE gamma-ray source is centred on the nominal position of Wd 1 and detected with a total statistical significance of similar to 20 sigma. The emission region clearly extends beyond the H.E.S.S. point-spread function (PSF). The differential energy spectrum follows a power law in energy with an index of Gamma = 2.19 +/- 0.08(stat) +/- 0.20(sys) and a flux normalisation at 1 TeV of Phi(0) = (9.0 +/- 1.4(stat) +/- 1.8(sys)) x 10(-12) TeV-1 cm(-2) s(-1). The integral flux above 0.2 TeV amounts to (5.2 +/- 0.9) x 10(-11) cm(-2) s(-1). Conclusions. Four objects coincident with HESS J1646-458 are discussed in the search of a counterpart, namely the magnetar CXOU J164710.2-455216, the X-ray binary 4U 1642-45, the pulsar PSR J1648-4611 and the massive stellar cluster Wd 1. In a single-source scenario, Wd 1 is favoured as site of VHE particle acceleration. Here, a hadronic parent population would be accelerated within the stellar cluster. Beside this, there is evidence for a multi-source origin, where a scenario involving PSR J1648-4611 could be viable to explain parts of the VHE gamma-ray emission of HESS J1646-458. Context. 1ES 0414+009 (z = 0.287) is a distant high-frequency- peaked BL Lac object, and has long been considered a likely emitter of very-highenergy (VHE, E > 100 GeV) gamma-rays due to its high X-ray and radio flux. Aims. Observations in the VHE gamma-ray band and across the electromagnetic spectrum can provide insights into the origin of highly energetic particles present in the source and the radiation processes at work. Because of the distance of the source, the gamma-ray spectrum might provide further limits on the level of the extragalactic background light (EBL). Methods. We report observations made between October 2005 and December 2009 with H. E. S. S., an array of four imaging atmospheric Cherenkov telescopes. Observations at high energies (HE, 100 MeV-100 GeV) with the Fermi-LAT instrument in the first 20 months of its operation are also reported. To complete the multi-wavelength picture, archival UV and X-ray observations with the Swift satellite and optical observations with the ATOM telescope are also used. Results. Based on the observations with H.E.S.S., 1ES 0414+009 is detected for the first time in the VHE band. An excess of 224 events is measured, corresponding to a significance of 7.8 sigma. The photon spectrum of the source is well described by a power law, with photon index of Gamma(VHE) = 3.45 +/- 0.25(stat) +/- 0.20(syst). The integral flux above 200 GeV is (1.88 +/- 0.20(stat) +/- 0.38(syst)) x10(-12) cm(-2) s(-1). 
Observations with the Fermi-LAT in the first 20 months of operation show a flux between 200 MeV and 100 GeV of (2.3 +/- 0.2(stat)) x 10(-9) erg cm(-2) s(-1), and a spectrum well described by a power-law function with a photon index Gamma(HE) = 1.85 +/- 0.18. Swift/XRT observations show an X-ray flux between 2 and 10 keV of (0.8-1) x 10(-11) erg cm(-2) s(-1), and a steep spectrum Gamma(X) = (2.2-2.3). Combining X-ray with optical-UV data, a fit with a log-parabolic function locates the synchrotron peak around 0.1 keV. Conclusions. Although the GeV-TeV observations do not provide better constraints on the EBL than previously obtained, they confirm a low density of the EBL, close to the lower limits from galaxy counts. The absorption-corrected HE and VHE gamma-ray spectra are both hard and have similar spectral indices (approximate to 1.86), indicating no significant change of slope between the HE and VHE gamma-ray bands, and locating the gamma-ray peak in the SED above 1-2 TeV. As for other TeV BL Lac objects with the gamma-ray peak at such high energies and a large separation between the two SED humps, this average broad-band SED represents a challenge for simple one-zone synchrotron self-Compton models, requiring a high Doppler factor and very low B-field. The giant radio galaxy M 87 with its proximity (16 Mpc), famous jet, and very massive black hole ((3-6) x 10(9) M-circle dot) provides a unique opportunity to investigate the origin of very high energy (VHE; E > 100 GeV) gamma-ray emission generated in relativistic outflows and the surroundings of supermassive black holes. M 87 has been established as a VHE gamma-ray emitter since 2006. The VHE gamma-ray emission displays strong variability on timescales as short as a day. In this paper, results from a joint VHE monitoring campaign on M 87 by the MAGIC and VERITAS instruments in 2010 are reported. During the campaign, a flare at VHE was detected triggering further observations at VHE (H.E.S.S.), X-rays (Chandra), and radio (43 GHz Very Long Baseline Array, VLBA). The excellent sampling of the VHE gamma-ray light curve enables one to derive a precise temporal characterization of the flare: the single, isolated flare is well described by a two-sided exponential function with significantly different flux rise and decay times of tau(rise)(d) = (1.69 +/- 0.30) days and tau(decay)(d) = (0.611 +/- 0.080) days, respectively. While the overall variability pattern of the 2010 flare appears somewhat different from that of previous VHE flares in 2005 and 2008, they share very similar timescales (similar to day), peak fluxes (Phi(>0.35 TeV) similar or equal to (1-3) x 10(-11) photons cm(-2) s(-1)), and VHE spectra. VLBA radio observations of 43 GHz of the inner jet regions indicate no enhanced flux in 2010 in contrast to observations in 2008, where an increase of the radio flux of the innermost core regions coincided with a VHE flare. On the other hand, Chandra X-ray observations taken similar to 3 days after the peak of the VHE gamma-ray emission reveal an enhanced flux from the core (flux increased by factor similar to 2; variability timescale <2 days). The long-term (2001-2010) multi-wavelength (MWL) light curve of M 87, spanning from radio to VHE and including data from Hubble Space Telescope, Liverpool Telescope, Very Large Array, and European VLBI Network, is used to further investigate the origin of the VHE gamma-ray emission. No unique, common MWL signature of the three VHE flares has been identified. 
In the outer kiloparsec jet region, in particular in HST-1, no enhanced MWL activity was detected in 2008 and 2010, disfavoring it as the origin of the VHE flares during these years. Shortly after two of the three flares (2008 and 2010), the X-ray core was observed to be at a higher flux level than its characteristic range (determined from more than 60 monitoring observations: 2002-2009). In 2005, the strong flux dominance of HST-1 could have suppressed the detection of such a feature. Published models for VHE gamma-ray emission from M 87 are reviewed in the light of the new data. Context. Several newly discovered very-high-energy (VHE; E > 100 GeV) gamma-ray sources in the Galaxy are thought to be associated with energetic pulsars. Among them, middle-aged (greater than or similar to 10(4) yr) systems exhibit large centre-filled VHE nebulae, offset from the pulsar position, which result from the complex relationship between the pulsar wind and the surrounding medium, and reflect the past evolution of the pulsar. Aims. Imaging Atmospheric Cherenkov Telescopes (IACTs) have been successful in revealing extended emission from these sources in the VHE regime. Together with radio and X-ray observations, this observational window allows one to probe the energetics and magnetic field inside these large-scale nebulae. Methods. H.E.S.S., with its large field of view, angular resolution of less than or similar to 0.1 degrees and unprecedented sensitivity, has been used to discover a large population of such VHE sources. In this paper, the H. E. S. S. data from the continuation of the Galactic Plane Survey (-80 degrees < l < 60 degrees, vertical bar b vertical bar < 3 degrees), together with the existing multi-wavelength observations, are used. Results. A new VHE gamma-ray source was discovered at RA (J2000) = 13(h)56(m)00(s), Dec (J2000) = -64 degrees 30'00 '' with a 2' statistical error in each coordinate, namely HESS J1356-645. The source is extended, with an intrinsic Gaussian width of (0.20 +/- 0.02)degrees. Its integrated energy flux between 1 and 10 TeV of 8 x 10(-12) erg cm(-2) s(-1) represents similar to 11% of the Crab Nebula flux in the same energy band. The energy spectrum between 1 and 20 TeV is well described by a power law dN/dE proportional to E-Gamma with photon index Gamma = 2.2 +/- 0.2(stat) +/- 0.2(sys). The inspection of archival radio images at three frequencies and the analysis of X-ray data from ROSAT/PSPC and XMM-Newton/MOS reveal the presence of faint non-thermal diffuse emission coincident with HESS J1356-645. Conclusions. HESS J1356-645 is most likely associated with the young and energetic pulsar PSR J1357-6429 (d = 2.4 kpc, tau(c) = 7.3 kyr and (E) over dot = 3.1 x 10(36) erg s(-1)), located at a projected distance of similar to 5 pc from the centroid of the VHE emission. HESS J1356-645 and its radio and X-ray counterparts would thus represent the nebula resulting from the past history of the PSR J1357-6429 wind. In a simple one-zone model, constraints on the magnetic field strength in the nebula are obtained from the flux of the faint and extended X-ray emission detected with ROSAT and XMM-Newton. Fermi-LAT upper limits in the high-energy ( HE; 0.1-100 GeV) domain are also used to constrain the parent electron spectrum. 
From the low magnetic field value inferred from this approach (similar to 3-4 mu G), HESS J1356-645 is thought to share many similarities with other known gamma-ray emitting nebulae, such as Vela X, as it exhibits a large-scale nebula seen in radio, X-rays and VHE gamma-rays. Several models of Quantum Gravity predict Lorentz Symmetry breaking at energy scales approaching the Planck scale (similar to 10(19) GeV). With present photon data from the observations of distant astrophysical sources, it is possible to constrain the Lorentz Symmetry breaking linear term in the standard photon dispersion relations. Gamma Ray Bursts (GRB) and flaring Active Galactic Nuclei (AGN) are complementary to each other for this purpose, since they are observed at different distances in different energy ranges and with different levels of variability. Following a previous publication of the High Energy Stereoscopic System (H.E.S.S.) collaboration , a more sensitive event-by-event method consisting of a likelihood fit is applied to PKS 2155-304 flare data of MJD 53944 (July 28, 2006) as used in the previous publication. The previous limit on the linear term is improved by a factor of similar to 3 up to M(QG)(1), > 2.1 X 10(1B) GeV and is currently the best result obtained with blazars. The sensitivity to the quadratic term is lower and provides a limit of M(QG)(q) > 6.4 x 10(10) GeV, which is the best value obtained so far with an AGN and similar to the best limits obtained with GRB. (C) 2011 Elsevier B.V. All rights reserved. Aims. Previous observations with the HESS telescope array revealed the existence of extended very-high-energy (VHE; E > 100 GeV) gamma-ray emission, HESS J1023-575, coincident with the young stellar cluster Westerlund 2. At the time of discovery, the origin of the observed emission was not unambiguously identified, and follow-up observations have been performed to further investigate the nature of this gamma-ray source. Methods. The Carina region towards the open cluster Westerlund 2 has been re-observed, increasing the total exposure to 45.9 h. The combined dataset includes 33 h of new data and now permits a search for energy-dependent morphology and detailed spectroscopy. Results. A new, hard spectrum VHE gamma-ray source, HESS J1026-582, was discovered with a statistical significance of 7 sigma. It is positionally coincident with the Fermi LAT pulsar PSRJ1028-5819. The positional coincidence and radio/gamma-ray characteristics of the LAT pulsar favors a scenario where the TeV emission originates from a pulsar wind nebula. The nature of HESS J1023-575 is discussed in light of the deep HESS observations and recent multi-wavelength discoveries, including the Fermi LAT pulsar PSRJ1022-5746 and giant molecular clouds in the region. Despite the improved VHE dataset, a clear identification of the object responsible for the VHE emission from HESS J1023-575 is not yet possible, and contribution from the nearby high-energy pulsar and/or the open cluster remains a possibility. A search for a very-high-energy (VHE; >= 100 GeV) gamma-ray signal from self-annihilating particle dark matter (DM) is performed towards a region of projected distance r similar to 45-150 pc from the Galactic center. The background-subtracted gamma-ray spectrum measured with the High Energy Stereoscopic System (H.E.S.S.) gamma-ray instrument in the energy range between 300 GeV and 30 TeV shows no hint of a residual gamma-ray flux. 
Assuming conventional Navarro-Frenk-White and Einasto density profiles, limits are derived on the velocity-weighted annihilation cross section <sigma nu > as a function of the DM particle mass. These are among the best reported so far for this energy range and in particular differ only little between the chosen density profile parametrizations. In particular, for the DM particle mass of similar to 1 TeV, values for <sigma nu > above 3 x 10(-25) cm(3) s(-1) are excluded for the Einasto density profile. The gamma-ray pulsar PSR B1706-44 and the adjacent supernova remnant (SNR) candidate G 343.1-2.3 were observed by H. E. S. S. during a dedicated observation campaign in 2007. As a result of this observation campaign, a new source of very-high-energy (VHE; E > 100 GeV) gamma-ray emission, H.E.S.S. J1708-443, was detected with a statistical significance of 7 sigma, although no significant point-like emission was detected at the position of the energetic pulsar itself. In this paper, the morphological and spectral analyses of the newly-discovered TeV source are presented. The centroid of H. E. S. S. J1708-443 is considerably offset from the pulsar and located near the apparent center of the SNR, at alpha(J2000) = 17(h)08(m)11(s) +/- 17(s) and delta(J2000) = -44 degrees 20' +/- 4'. The source is found to be significantly more extended than the H. E. S. S. point spread function (similar to 0.1 degrees), with an intrinsic Gaussian width of 0.29 degrees +/- 0.04 degrees. Its integral flux between 1 and 10 TeV is similar to 3.8 x 10(-1)2 ph cm(-2) s(-1), equivalent to 17% of the Crab Nebula flux in the same energy range. The measured energy spectrum is well-fit by a power law with a relatively hard photon index Gamma = 2.0 +/- 0.1(stat) +/-0.2(sys). Additional multi-wavelength data, including 330 MHz VLA observations, were used to investigate the VHE gamma-ray source's possible associations with the pulsar wind nebula of PSR B1706-44 and/or with the complex radio structure of the partial shell-type SNR G 343.1-2.3. The high-frequency peaked BL Lac object PKS 2005-489 was the target of a multi-wavelength campaign with simultaneous observations in the TeV gamma-ray (H.E.S.S.), GeV gamma-ray (Fermi/LAT), X-ray (RXTE, Swift), UV (Swift) and optical (ATOM, Swift) bands. This campaign was carried out during a high flux state in the synchrotron regime. The flux in the optical and X-ray bands reached the level of the historical maxima. The hard GeV spectrum observed with Fermi/LAT connects well to the very high energy (VHE, E > 100 GeV) spectrum measured with H.E.S.S. with a peak energy between similar to 5 and 500 GeV. Compared to observations with contemporaneous coverage in the VHE and X-ray bands in 2004, the X-ray flux was similar to 50 times higher during the 2009 campaign while the TeV gamma-ray flux shows marginal variation over the years. The spectral energy distribution during this multi-wavelength campaign was fit by a one zone synchrotron self-Compton model with a well determined cutoff in X-rays. The parameters of a one zone SSC model are inconsistent with variability time scales. The variability behaviour over years with the large changes in synchrotron emission and small changes in the inverse Compton emission does not warrant an interpretation within a one-zone SSC model despite an apparently satisfying fit to the broadband data in 2009. The Sculptor and Carina dwarf spheroidal galaxies were observed with the H.E.S.S. Cherenkov telescope array between January 2008 and December 2009. 
The data sets consist of a total of 11.8 h and 14.811 of high quality data, respectively. No gamma-ray signal was detected at the nominal positions of these galaxies above 220 GeV and 320 GeV, respectively. Upper limits on the gamma-ray fluxes at 95% CL assuming two forms for the spectral energy distribution (a power law shape and one derived from dark matter annihilation) are obtained at the level of 10(-13)-10(-12) cm(-2) s(-1) in the TeV range. Constraints on the velocity weighted dark matter particle annihilation cross section for both Sculptor and Carina dwarf galaxies range from <sigma v > 10(-21) cm(3) s(-1) down to <sigma v > similar to 10(-2)2 cm(3) s(-1) on the dark matter halo model used. Possible enhancements of the gamma-ray flux are studied: the Sommerfeld effect, which is found to exclude some dark matter particle masses, the internal Bremsstrahlung and clumps in the dark-matter halo distributions. (C) 2010 Elsevier B.V. All rights reserved. Context. Observations of very high-energy.-rays from blazars provide information about acceleration mechanisms occurring in their innermost regions. Studies of variability in these objects lead to a better understanding of the mechanisms in play. Aims. To investigate the spectral and temporal variability of VHE (>100 GeV) gamma-rays of the well-known high-frequency-peaked BL Lac object PKS 2155-304 with the HESS imaging atmospheric Cherenkov telescopes over a wide range of flux states. Methods. Data collected from 2005 to 2007 were analyzed. Spectra were derived on time scales ranging from 3 years to 4 min. Light curve variability was studied through doubling timescales and structure functions and compared with red noise process simulations. Results. The source was found to be in a low state from 2005 to 2007, except for a set of exceptional flares that occurred in July 2006. The quiescent state of the source is characterized by an associated mean flux level of (4.32 +/- 0.09(stat) +/- 0.86(syst)) x 10(-11) cm(-2) s(-1) above 200 GeV, or approximately 15% of the Crab Nebula, and a power-law photon index of Gamma = 3.53 +/- 0.06(stat) +/- 0.10(syst). During the flares of July 2006, doubling timescales of similar to 2 min are found. The spectral index variation is examined over two orders of magnitude in flux, yielding different behavior at low and high fluxes, which is a new phenomenon in VHE gamma-ray emitting blazars. The variability amplitude characterized by the fractional rms F(var) is strongly energy-dependent and is proportional to E(0.19 +/- 0.01). The light curve rms correlates with the flux. This is the signature of a multiplicative process that can be accounted for as a red noise with a Fourier index of similar to 2. Conclusions. This unique data set shows evidence of a low-level.-ray emission state from PKS 2155-304 that possibly has a different origin than the outbursts. The discovery of the light curve ognormal behavior might be an indicator of the origin of aperiodic variability in blazars. Aims. The properties of the broad-band emission from the high-frequency peaked BL Lac H 2356-309 (z = 0.165) are investigated. Methods. Very high energy (VHE; E > 100 GeV) observations of H 2356-309 were performed with the High Energy Stereoscopic System (HESS) from 2004 through 2007. Simultaneous optical/UV and X-ray observations were made with the XMM-Newton satellite on June 12/13 and June 14/15, 2005. NRT radio observations were also contemporaneously performed in 2005. ATOM optical monitoring observations were also made in 2007. 
Results. A strong VHE signal, similar to 13 sigma total, was detected by HESS after the four years HESS observations (116.8 h live time). The integral flux above 240 GeV is I(> 240 GeV) = (3.06 +/- 0.26(stat) +/- 0.61(syst)) x 10(-12) cm(-2) s(-1), corresponding to similar to 1.6% of the flux observed from the Crab Nebula. A time-averaged energy spectrum is measured from 200 GeV to 2 TeV and is characterized by a power law (photon index of Gamma = 3.06 +/- 0.15(stat) +/- 0.10(syst)). Significant small-amplitude variations in the VHE flux from H 2356-309 are seen on time scales of months and years, but not on shorter time scales. No evidence for any variations in the VHE spectral slope are found within these data. The XMM-Newton X-ray measurements show a historically low X-ray state, characterized by a hard, broken-power-law spectrum on both nights. Conclusions. The broad-band spectral energy distribution (SED) of the blazar can be adequately fit using a simple one-zone synchrotron self-Compton (SSC) model. In the SSC scenario, higher VHE fluxes could be expected in the future since the observed X-ray flux is at a historically low level. Universität Heidelberg, Germany;Universite ́ Paris Diderot, France;Ecole Polytechnique, France. Max Planck Inst Kernphys, Germany ; Dublin Inst Adv Studies, Ireland ; Natl Acad Sci Republ Armenia, Armenia. Natl Acad Sci Republ Armenia, Armenia ; Yerevan Phys Inst, Armenia. DESY, Germany ; Univ Potsdam, Germany. Nicolaus Copernicus Astron Ctr, Poland. Max-Planck-Institut für Kernphysik, Germany ; Dublin Institute for Advanced Studies, Ireland ; National Academy of Sciences of the Republic of Armenia, Armenia . Max-Planck-Institut für Kernphysik, Germany . University of Namibia, Namibia . University of Amsterdam, The Netherlands . University of Adelaide, Australia . North-West University, South Africa . Université Pierre et Marie Curie Paris 6, Université Denis Diderot Paris 7, France. The University of Warsaw, Poland. Max-Planck-Institut für Kernphysik, Germany ; Instytut Fizyki Jądrowej PAN, Poland. University of the Witwatersrand, South Africa . North-West University, South Africa ; University of Namibia, Namibia. Dublin Institute for Advanced Studies, Ireland. The University of Leicester, UK. Nicolaus Copernicus Astronomical Center, Poland. Instytut Fizyki Jądrowej PAN, Poland. University of the Free State, South Africa . Universität Hamburg, Institut für Experimentalphysik, Germany. Max-Planck-Institut für Kernphysik,, Germany ; Now at Institut de Ciències de l'Espai, Spain. Max-Planck-Institut für Kernphysik, Germany ; ITA Universität Heidelberg, Germany . Universität Potsdam, Germany ; DESY, Germany. Université Denis Diderot Paris 7, France. Re-observations with the HESS telescope array of the very high-energy (VHE) source HESS J1018-589 A that is coincident with the Fermi-LAT γ-ray binary 1FGL J1018.6-5856 have resulted in a source detection significance of more than 9σ and the detection of variability (χ$^2$/ν of 238.3/155) in the emitted γ-ray flux. This variability confirms the association of HESS J1018-589 A with the high-energy γ-ray binary detected by Fermi-LAT and also confirms the point-like source as a new VHE binary system. The spectrum of HESS J1018-589 A is best fit with a power-law function with photon index Γ = 2.20 \plusmn 0.14$_stat$ \plusmn 0.2$_sys$. Emission is detected up to ~20 TeV. 
The mean differential flux level is (2.9 \plusmn 0.4) \times 10$^-13$ TeV$^-1$ cm$^-2$ s$^-1$ at 1 TeV, equivalent to ~1% of the flux from the Crab Nebula at the same energy. Variability is clearly detected in the night-by-night light curve. When folded on the orbital period of 16.58 days, the rebinned light curve peaks in phase with the observed X-ray and high-energy phaseograms. The fit of the HESS phaseogram to a constant flux provides evidence of periodicity at the level of N$_sigma$\gt 3σ. The shape of the VHE phaseogram and measured spectrum suggest a low-inclination, low-eccentricity system with amodest impact from VHE γ-ray absorption due to pair production (τ \lsim 1 at 300 GeV). Univ Hamburg, Inst Experimentalphys, D-22761 Hamburg, Germany. Max Planck Inst Kernphys, D-69029 Heidelberg, Germany ; Dublin Inst Adv Studies, Dublin 4, Ireland ; Natl Acad Sci Republ Armenia, Yerevan, Armenia . Max Planck Inst Kernphys, D-69029 Heidelberg, Germany. Natl Acad Sci Republ Armenia, Yerevan, Armenia ; Yerevan Phys Inst, Yerevan 375036, Armenia. Humboldt Univ, Inst Phys, D-12489 Berlin, Germany. Univ Erlangen Nurnberg, Inst Phys, D-91058 Erlangen, Germany. Univ Namibia, Dept Phys, Windhoek, Namibia. Univ Durham, Dept Phys, Durham DH1 3LE, England. DESY, D-15738 Zeuthen, Germany ; Univ Potsdam, Inst Phys & Astron, D-14476 Potsdam, Germany. Nicolaus Copernicus Astron Ctr, PL-00716 Warsaw, Poland. Linnaeus University, Faculty of Technology, Department of Physics and Electrical Engineering. Nicolaus Copernicus Astron Ctr, PL-00716 Warsaw, Poland. Context. About 40% of the observation time of the High Energy Stereoscopic System (H.E.S.S.) is dedicated to studying active galactic nuclei (AGN), with the aim of increasing the sample of known extragalactic very-high-energy (VHE, E > 100 GeV) sources and constraining the physical processes at play in potential emitters.Aims. H.E.S.S. observations of AGN, spanning a period from April 2004 to December 2011, are investigated to constrain their gamma-ray fluxes. Only the 47 sources without significant excess detected at the position of the targets are presented.Methods. Upper limits on VHE fluxes of the targets were computed and a search for variability was performed on the nightly time scale.Results. For 41 objects, the flux upper limits we derived are the most constraining reported to date. These constraints at VHE are compared with the flux level expected from extrapolations of Fermi-LAT measurements in the two-year catalog of AGN. The H.E.S.S. upper limits are at least a factor of two lower than the extrapolated Fermi-LAT fluxes for 11 objects Taking into account the attenuation by the extragalactic background light reduces the tension for all but two of them, suggesting intrinsic curvature in the high-energy spectra of these two AGN.Conclusions. Compilation efforts led by current VHE instruments are of critical importance for target-selection strategies before the advent of the Cherenkov Telescope Array (CTA). Univ Hamburg, Inst Expt Phys, D-22761 Hamburg, Germany. Natl Acad Sci Republ Armenia, Yerevan 0019, Armenia. Univ Amsterdam, Astron Inst Anton Pannekoek, GRAPPA, NL-1098 XH Amsterdam, Netherlands. Uniwersytet Jagiellonski, Astron Observ, PL-30244 Krakow, Poland.
Let us consider the flow in a flat channel around a cylinder at Reynolds number $Re = 100$, when self-oscillations occur leading to the detachment of vortices in the aft part of the cylinder. In this problem it is necessary to calculate the drag coefficient, the lift coefficient and the pressure difference between the frontal and aft parts of the cylinder as functions of time, the maximum drag coefficient, the maximum lift coefficient, the Strouhal number and the pressure difference $\Delta P(t)$ at $t = t_0 + 1/(2f)$. The frequency $f$ is determined by the period of oscillation of the lift coefficient, $f = f(c_L)$. The data for this test, the code and the results are shown below. Note that our results differ from the allowable values by several percent, but if you look at all the results of Table 4 from the cited article, then the agreement is quite acceptable. The worst prediction is for the Strouhal number. We note that we use the explicit Euler method, which gives an underestimate of the Strouhal number, as follows from the data in Table 4. The attached file contains the working code for test 2D3 describing the flow around the cylinder in a flat channel with a change in the flow velocity.

The Euler method is a poor way to solve a differential equation: it can be numerically unstable. We almost passed the test; therefore, the solution is stable here. Explicit Euler is widely used in solving problems in hydrodynamics. The stability of the Euler method depends on the step of integration. In this case, the difference between the solutions in v11.01 and v11.3 is due to the integration over the surface of the cylinder when calculating the drag and lift coefficients. In our example, 2000 integrals over the surface are calculated. From these data, the maximum values of the coefficients and the Strouhal number are computed. This is probably the main source of errors.

Thanks for it! It is pretty amazing and fun to see that some fluid mechanics example simulations (textbook-scope ones, for example) can be carried out in Mathematica, i.e. by way of pure coding and then hitting the Shift+Enter button. Needless to say, for fluid mechanics simulations in any slightly more complicated 2-D geometries or RL 3-D geometries, one would resort to GUI-driven software where the user interacts with the mouse and context menus and does not have to code anything. Often enough I miss GUI interactivity (similar to Word, Excel, PowerPoint, Comsol) in our software, but then I also realize that Mathematica grants more flexibility and direct control over details which can't be manipulated in those specialized user-friendly applications. The bottom line is that for each task at hand one should choose the proper tool wisely. Me too, I enjoy trying to use the software for anything and everything at least once. It is good for my coding learning experience.

de Vahl Davis, G. (1983): Natural convection of air in a square cavity: a bench mark numerical solution. Int. J. Numer. Methods Fluids 3, 249-264. Here $\vec u$ is the velocity field vector, $T$ the temperature, $\vec g$ the gravity vector, $Pr$ the Prandtl number, and $Ra$ the Rayleigh number. After the release of Mathematica version 12, I tested a non-linear FEM on the convection problem and compared it with my method. Surprisingly, the agreement is one to one. I congratulate the Wolfram team on a great achievement. The following code generates the same output as my code implementing the method of the false transient from the paper de Vahl Davis, G. (1983): Natural convection of air in a square cavity: a bench mark numerical solution. Int. J. Numer. Methods Fluids 3, 249-264.
There are two different models: an incompressible viscous flow at $Re=100$ and convection in the Boussinesq approximation in a rectangular cavity with a Rayleigh number of $Ra=10^3, 10^4, 10^5$. Transition to turbulence in the wake of the cylinder in 3D occurs already at $Re=170-260$. We can put $Re = 1000$ so that everything is mixed in the wake. In what approximation do you want to calculate convection? How is the cylinder oriented relative to the vertical?

I consulted a textbook on boundary layer flow which states that the turbulence onset on the surface is at $Re = 5 \times 10^5$. It may well be that turbulence in the wake starts earlier. The cylinder axis is vertical to the convection direction, as in your first example. The cylinder has a diameter of, say, 60 cm, and is a few Kelvin cooler than the incoming air. I'd like to model the resulting air cooling pattern in space and time. I'm not sure which approximation to choose, I'm not an expert on CFD -- that's why I'm asking in this forum.

The figure shows the temperature at various points in time when hot air flows around a cold cylinder at $Re=10^3$. The Mathematica FEM method can be used to calculate unsteady viscous compressible flows around an aerofoil in subsonic, transonic and supersonic modes. Here an example is given for the wing profile NACA2415 for the Mach number $M = 0.55$, Reynolds number $Re = 1000$, and angle of attack $\alpha = \pi/16$.

I tried several times to plot the velocity distribution profile in the open channel using the Piecewise command. How can I plot this profile for sections 1 and 2? The figure shows the laminar flow velocity profiles on a flat plate at $Re=800$. May I have the command or notebook you used, for my future study? Use this file with Mathematica 11.3. If you have questions, ask here.
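The remarks above about the explicit Euler method being conditionally stable can be illustrated independently of the Mathematica code under discussion. Below is a minimal Python sketch of forward Euler applied to a stiff linear test equation; the test equation and the step sizes are illustrative assumptions of mine, not taken from the 2D3 benchmark.

```python
import numpy as np

def explicit_euler(f, y0, t0, t1, h):
    """Integrate y' = f(t, y) with the explicit (forward) Euler method."""
    ts = np.arange(t0, t1 + h, h)
    ys = np.empty_like(ts)
    ys[0] = y0
    for i in range(len(ts) - 1):
        ys[i + 1] = ys[i] + h * f(ts[i], ys[i])
    return ts, ys

# Stiff test equation y' = lam * y with exact solution exp(lam * t).
lam = -50.0
f = lambda t, y: lam * y

# Explicit Euler is stable for this equation only if |1 + h*lam| <= 1.
for h in (0.05, 0.03, 0.01):
    ts, ys = explicit_euler(f, 1.0, 0.0, 1.0, h)
    err = np.max(np.abs(ys - np.exp(lam * ts)))
    print(f"h = {h:5.3f}  |1 + h*lam| = {abs(1 + h * lam):4.2f}  max error = {err:.3e}")
```

With h = 0.05 the amplification factor exceeds 1 and the error blows up; with the smaller steps the solution decays as it should, which is the step-size dependence described above.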
Proof. We may and do replace $R$ by its localization at $\mathfrak p$. Then $\mathfrak p = \mathfrak m$ is the maximal ideal of $R$ and $A = R/I$. Let $f_1, \ldots, f_r \in I$ be a minimal sequence of generators. The completion of $A$ is equal to $A^\wedge = R^\wedge/(f_1, \ldots, f_r)R^\wedge$ because completion for finite modules over the Noetherian ring $R_\mathfrak p$ is exact (Algebra, Lemma 10.96.1). If (1) holds, then the image of the sequence $f_1, \ldots, f_r$ in $R^\wedge$ is a regular sequence by assumption. Hence it is a regular sequence in $R$ by Algebra, Lemmas 10.96.2 and 10.67.5. Thus (1) implies (2).
Still bemused, the sage went along to the local school, which was deeply embroiled in GCSE investigations, and once again stated his problem. 'That's a very interesting question,' said the teacher. The moral of this story is that, as Humpty Dumpty once said, 'when I use a word, it means just what I want it to mean, and nothing else'. The term 'proof' is just such a word. In different contexts it means very different things. To a judge and jury it means something established by evidence 'beyond a reasonable doubt'. To a statistician it means something occurring with a probability calculated from assumptions about the likelihood of certain events happening randomly. To a scientist it means something that can be tested -- the proof that water boils at 100°C is to carry out an experiment. A mathematician wants more -- simply predicting and testing is not enough -- for there may be hidden assumptions (that the water boiling is always carried out at normal atmospheric pressure and not, say, on the top of Mount Everest).

Faced with such a question, you might begin by thinking 'as many as you like', or 'infinity'. Then you may begin to realize that a square could be cut into 4, 9 or 16 by dividing it into pieces of equal size. If you play about with possible ways of cutting a square into smaller squares, you may suddenly see that any square can be cut into four smaller squares, including the smaller squares themselves. Aha! A square could be cut into four quarters and one quarter cut into four again, losing the quarter as a counted square but gaining four smaller ones - so a square can be cut into seven squares. Cutting any of the squares in this picture into four squares gives three extra squares. Thus it is possible to cut a square into 7 squares, 10 squares, 13 squares, 16, 19, 22, ... and so on. It is easy to see that IF a square can be cut into n squares THEN it can be cut into n+3 squares. It is this general result which is the key to the solution of the problem. For instance, if I could cut a square into six smaller squares, then I could do 6+3=9, 9+3=12, and so on, to get the sequence 6, 9, 12, 15, ... If I could cut a square into five smaller squares, then I could get the sequence 5, 8, 11, 14, ..., and so on. But can I?

One attack suggested by students is to look at a picture like figure 3, and to propose that, by rubbing out a number of lines in a three by three subdivision, it is possible to "glue four squares into a single square". This gives one big square and five smaller squares, making six squares in all. But can you cut a square into five smaller squares? One student, Paul, suggested to me that if you can do n squares you can do n-3 by joining a block of four squares together as in Figure 3. Is Paul's suggestion correct? It is certainly true for n=9, as Figure 3 shows, but is it true for all whole numbers n?

There is a well-known story of the experimental physicist who claimed to prove that 60 is divisible by every other number. He came to this conclusion by considering a sequence of cases to establish the pattern: 1, 2, 3, 4, 5, 6, and then moved on to a few others at random to test out the theory: 10, 12, 20, 30, and concluded that his result was experimentally verified. He was surpassed in this endeavour by an engineer who noticed that all odd numbers seemed to be prime... One - well, that's an oddity, but we'll include it in - three, five, seven, good, we're getting somewhere - nine? Oh, nine... Let's leave that a moment - eleven, thirteen - fine.
The exceptional case of nine must have been an experimental error. This story, which I claim bears no relationship to any known physicist or engineer, living or dead, does illustrate the important difference between proof by looking at a number of cases and proper mathematical proof. It is not enough to consider just a number of cases, for all of them may have some hidden common assumption. For instance, we might conclude from a number of experiments that water always boils at 100°C because we never have the experience of trying to boil water on the top of Mount Everest. Scientific proof depends on the predictability of experiments: we conjecture that when we carry out an experiment it will have a predicted outcome. Such proof is not appropriate in mathematics, where we must provide a logical argument that the conclusion follows from explicitly stated assumptions.

The idea is first to get a good idea how and why the result works, sufficient to believe its truth. Convincing oneself is, regrettably, all too easy. So pleased is the average mortal when the 'Aha!' strikes that, even if shouting 'Eureka' and running down the street in a bath towel is de rigueur, it is very difficult to believe that the blinding stroke of insight might be wrong. So the next stage is to convince a friend - another student, perhaps - which has the advantage that explaining something to someone else at least makes one sort out the ideas into some kind of coherent argument. The final stage in preparing a convincing argument, according to 'Thinking Mathematically', is to convince an enemy - a mythical arbiter of good logic who goes over every stage of an argument with a fine toothcomb to seek out weak links. A student might very well convince himself of the truth of the argument "IF I can cut a square into n smaller squares, THEN I can cut the square into n-3 squares". He might even convince a friend by showing pictures such as figure 3. But an enemy might put up figure 4, where a square is cut into eight smaller squares (seven the same size, plus one bigger one). Here there is no set of four smaller subsquares in a group which can be amalgamated into one larger square to reduce the eight sub-squares to five sub-squares.

(a) Find all numbers n such that a square can be cut into n smaller sub-squares and prove that this is actually possible for every such number n. (b) For all the numbers n not included in part (a), prove that it is not possible to cut a square into such a number of smaller squares. You should certainly have a go at this before moving to the next section.

Proof requires a careful statement of assumptions and a precise argument showing how a clearly stated result is deduced. It is surprising how often we miss the fact that a statement has implicit, unspoken assumptions. Look at the square problem. Into how many squares can I cut a square? For what numbers is this not possible? If a square is cut into more than one square, there will be a corner of a smaller square in each corner of the original square. Thus, if there is more than one square, there must be at least four. There cannot be two or three. Perhaps you might like to try to extend this argument to cover other cases which you suspect cannot be done (if there are any...). I have given this problem to hundreds of undergraduates over the years and we have all eventually agreed on which values of n cannot be done. It has become quite a party piece which I have also tried out with many sixth-formers.
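As an aside, the "add three" argument can be checked mechanically. The following Python sketch is my own illustration: it takes as given the dissections into 1, 4, 6, 7 and 8 squares that appear in the article (the eight-square dissection is the figure 4 described below) and closes this set under the n -> n+3 rule. Note that it only shows which values are constructible this way; it says nothing about which values are impossible, which is exactly what part (b) asks you to prove.

```python
def reachable(limit, base=(1, 4, 6, 7, 8)):
    """Values of n <= limit obtainable from the base dissections via n -> n + 3."""
    found = set()
    frontier = set(base)
    while frontier:
        n = frontier.pop()
        if n <= limit and n not in found:
            found.add(n)
            frontier.add(n + 3)
    return found

limit = 30
good = reachable(limit)
print(sorted(good))
print("not obtained this way:", sorted(set(range(1, limit + 1)) - good))
```

Running this lists every n up to the limit except 2, 3 and 5, which matches the cases discussed in the text.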
It was ten years after I first met the problem that a perceptive fourteen-year-old girl in a problem-solving session came up with an original thought. She suggested that the problem had not explicitly stated that the paper could not be cut and then glued together again in a different way. Her solution for n=2 is given in figure 5.

Square problem version 2: A square is cut into n smaller squares by making single straight line cuts, without joining together cut parts into larger wholes. What are the possible values of n? The exceptional cases found earlier would still be exceptions to this better phrased problem. Figure 5 would now fail to be a counterexample to this because it breaks the rule about not sticking together cut parts into larger wholes.

Square problem version 3: Into how many subsquares n is it possible to cut a square, if it is allowed to join cut parts into larger wholes? The answer to problem version 3 is likely to be different from that to version 2. You should see if any or all of the 'impossible' numbers from version 2 now become 'possible'. For instance, in Figure 4 we can clearly take any four of the smaller squares and move them together to glue them into one medium-sized square. Thus, if we allow sticking together, we can re-form the square in Figure 4 into five squares of different sizes: one large, one medium, and three little ones. With a little ingenuity perhaps you can solve the case n=3 for version 3 of the problem. Perhaps now you can specify the solutions of both problems. They will be different. This shows that precision in making mathematical statements is all-important.

Once we have got precise statements of the assumptions (P) underlying a theorem and what it is we are trying to prove (Q), then a mathematical proof of the theorem is in the form "IF P is true THEN Q is true". In everyday language the conventions are sometimes different. "If your father comes home before six o'clock then you can have some chocolate before dinner-time". Here the assumption P is "father comes home before six o'clock" and the deduction Q is "you can have chocolate before dinner-time". Presumably father brings the chocolate and if he arrives sufficiently early you can have some without spoiling your appetite. But also contained in the statement is the implication that IF father does not come home before six o'clock, THEN you will NOT have chocolate before dinner-time. There is often an implication in everyday language that IF P happens THEN Q will follow, but IF P FAILS THEN Q FAILS ALSO. In mathematics such an assumption is not made. Here a proof in the form IF P THEN Q simply requires that if P is true, then Q must be true also. If P is false, then no implication as to the truth or falsehood of Q is necessary.

If x > 6 then x > 3. In mathematics this is considered a true statement. If x is a number bigger than 6 then it must also be bigger than 3. However, consider these as separate statements, where P is "x > 6" and Q is "x > 3". What happens for various values of x? If x=7 then P is true and Q is also true. In fact, when x is a number bigger than 6 then P is true and it will follow also that Q is true. But if x=5 then P is false but Q is true, and if x=3 then P is false and Q is false. Thus when P is false, Q can be true or false. We simply have no interest in what happens in this case. A particle of mass M rests on a rough plane with coefficient of friction $\mu$, inclined to the horizontal at an angle $\alpha$. Show that if the particle slides down the plane then $\tan\alpha> \mu$.
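For reference, here is a minimal sketch of the intended implication, under the usual assumptions that the particle starts at rest and that Coulomb friction applies with a single coefficient $\mu$. Resolving forces normal to the plane gives $N = Mg\cos\alpha$, while the component of gravity along the plane is $Mg\sin\alpha$. For the particle to slide, the driving force must exceed the largest available friction force $\mu N$, so
$$Mg\sin\alpha > \mu N = \mu Mg\cos\alpha \quad\Longrightarrow\quad \tan\alpha > \mu .$$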
What students often do is to assume $\tan\alpha> \mu$ and deduce that the particle slides. They have been asked to prove IF P THEN Q where P is "the particle slides" and Q is "$\tan\alpha> \mu$". They often prove IF Q THEN P. In this case it happens that the two things are equivalent. P happens if and only if Q happens. But the question only asks for the implication from P to Q and the students only prove the implication from Q to P. You might feel that this is a trivial matter. But logically it is totally erroneous. In mathematics it often happens that IF P THEN Q is true but IF Q THEN P is false. For instance, it is true that IF x > 6 THEN x > 3, but the other way round: IF x > 3 THEN x > 6 is clearly false. Thus it is important to distinguish between the two. The statement IF Q THEN P is called the converse of the statement IF P THEN Q. It is important to distinguish between the proof of a statement and the proof of its converse. One may be true and the other may be false.

Another example occurred with the case of "into how many squares can I cut a square" (version 2). It is true to say that if a square can be cut into n pieces then it can be cut into n+3 pieces. The converse, that if it can be cut into n+3 pieces it can be cut into n pieces, is false, as can be seen from the cases n = 3, 5. Then, because the derivative of a constant is zero, we can deduce that IF $P$ is true THEN $Q$ is true. What we cannot do is to deduce the converse: IF $Q$ is true THEN $P$ is true. Oh, you may say, that's cheating, we don't normally meet functions like that in the calculus... No we don't. Nor do we normally have the personal experience of boiling water on the top of Mount Everest, which would prove that water doesn't always boil at 100°C. To be sure that the mathematics will always work it is necessary to state precisely the assumptions and to take great care over the deductions. This proves to be rather hard. Indeed it tends to be the province of university pure mathematics rather than A-level.

You may find the accent on precise proof in mathematics rather esoteric. Other scientists are known to make such jibes at mathematicians. Indeed they say you can tell whether someone is an engineer, physicist or mathematician by setting fire to his wastepaper basket. The engineer will make a cursory calculation and swamp the basket with enough water to put out the fire and more. The physicist will sit down, calculate exactly how much water is needed and pour the exact quantity on the fire. The mathematician? The mathematician will sit down and calculate exactly how much water is needed. Thus the mathematician stands accused of developing a precise theory that is devoid of application. This could not be further from the truth. In our universities computer scientists are growing increasingly worried that students no longer seem to understand the finer points of proof. This is especially true since the demise of Euclidean geometry, which was largely concerned with the ritual of deducing one statement about a geometrical figure from given assumptions. It may have serious consequences.
As we use increasingly sophisticated software to run our lives, we need computer scientists and programmers who can write provably correct software that does not contain horrendous bugs - unlike the kind of software that caused the stock market crash because it was designed to sell under certain conditions which occurred late one Friday, and the computers attempted to outdo each other as the selling fed back into the system, causing even more selling and then the eventual collapse of the market. It is therefore even more important in today's technological climate to pay attention to the niceties of well-formulated statements and logical deduction. "To prove or not to prove" is a question that can have only one answer, for proof is an essential component of technological order in the future.

Professor in Mathematical Thinking at the University of Warwick (1992-present). Educated at Victoria School (1945-1952), The Grammar School, Wellingborough (1952-1960) and Wadham College, Oxford (1960-1966), where I obtained first class honours in Mathematics, the Junior Mathematics Prize (1963) and a DPhil in Mathematics (1967). From 1966 to 1969 I was a lecturer in Mathematics at Sussex University. Since 1969, I have been on the staff of the University of Warwick, as a lecturer in mathematics with special interests in education in the Mathematics Institute (1969-1980), then within what has become the Institute of Education, being awarded a personal chair in 1992.
As far as I understand, implied volatility (IV) is a lucky parametrization of the vanilla option's price. That is, instead of deciding how much the call is worth now, you can decide on its IV and put this in the Black-Scholes (BS) formula, since all other inputs (underlying price, time to maturity, etc.) are readily available. In that case, we use the IV $\sigma$ as a free variable which we adjust to fit the market prices. This parametrization and the choice of the free variable are by no means unique: for example, we can say that instead of the BS price for a call $V(S,\sigma,\dots)$ we invent $W(S,\sigma,\alpha,\dots) = \alpha\cdot V(S,\sigma,\dots)$. In that case, we may estimate $\sigma$ as a 30-day end-of-day volatility of underlying returns (so that it becomes a fixed quantity, measurable from market data) and let $\alpha$ be a new free variable. It will have a similar effect: raise $\alpha$ to raise the call price, and we can talk in terms of an implied $\alpha$ surface rather than an IV surface. The popularity of the IV parametrization seems to come from the fact that it's simpler: we just use the BS framework and don't have to come up with new variables. Am I right, or am I missing some points here?

The IV is hence an inconsistent model: it's like we pick up a random formula (say the BS formula) with one free variable, and just try to fit the output to the market prices by changing the value of this variable. Because of that, we can't do much against the market - in contrast, if the CRR binomial model predicted the statistics of underlying prices correctly and we got a market price significantly different from the CRR price, we could trade it and make a risk-free profit by hedging. The IV approach does not even seem to have a potential here: you are relying on the market prices, and cannot say whether they are right or wrong. Am I right here as well?

For the reasons above, I have the following question. Gatheral writes that more consistent stochastic volatility models are used to derive values for exotic options, with parameters fitted to vanilla option prices. Does it mean that we can't do better with vanilla option prices just by using the IV approach? Please tell me if the question is not clear, I'd be happy to fix that.

CRR is just a numerical approximation to Black--Scholes. Its main use is in getting American option prices. There is no real difference other than slight inaccuracy when using it for Europeans. So no, it wouldn't do what you ask. Your questions are philosophical. What is the purpose of the model? If you estimate the volatility from a time series then you can use it to assess the prices of vanilla options. If you fit it to vanilla options then you can't, but you can then use it to price exotics. You might find looking at Rebonato's Volatility and Correlation helpful.
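To make the "IV as a free variable" point concrete, here is a minimal Python sketch (my own illustration, not part of the question) that inverts the Black-Scholes formula for a European call: given a market price, it solves for the σ that reproduces it. The input numbers are made up for the example.

```python
from math import exp, log, sqrt

from scipy.optimize import brentq
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

def implied_vol(price, S, K, T, r):
    """Solve bs_call(S, K, T, r, sigma) = price for sigma by root bracketing."""
    return brentq(lambda s: bs_call(S, K, T, r, s) - price, 1e-6, 5.0)

# Illustrative numbers only.
S, K, T, r = 100.0, 105.0, 0.5, 0.01
market_price = 4.10
iv = implied_vol(market_price, S, K, T, r)
print(f"implied vol = {iv:.4f}, repriced = {bs_call(S, K, T, r, iv):.4f}")
```

The same bracketing trick would work for the hypothetical implied-alpha parametrization described in the question: fix σ from historical data and solve for α instead.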
Write a program to read in a visual description of a connect-the-dots puzzle, then connect the dots and print the result. Input contains up to $100$ images. Each image is rectangular and has dots indicated by the symbols $0, 1, \ldots, 9$, a, b, …, z, A, B, …, Z. It is guaranteed that if any dot appears, then all preceding dots in the above sequence also appear. So, for example, if the dot 'b' occurs, then dots $0$ through $9$ and 'a' also occur. Each dot appears at most once. Non-dots are indicated by periods. Each image has $1$ to $100$ rows and $1$ to $100$ columns. Input ends at end of file. For each image, print out the dots-connected image, as illustrated in the sample output. Connected dots will always be vertically or horizontally related; there are no diagonal lines. The lines should not obscure any dots, and if two lines cross, then indicate that as shown with a plus (+). Do not connect the last dot with the first dot. Produce a blank line between each pair of images.
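Here is a hedged Python sketch of one way to solve this. The drawing rules follow the statement above; the one assumption I have added is that consecutive images in the input are separated by blank lines, since the excerpt does not say how images are delimited.

```python
import string
import sys

ORDER = string.digits + string.ascii_lowercase + string.ascii_uppercase

def connect(image):
    grid = [list(row) for row in image]
    # Locate the dots and put them in the 0-9, a-z, A-Z connection order.
    pos = {ch: (r, c) for r, row in enumerate(image)
                      for c, ch in enumerate(row) if ch != '.'}
    dots = [pos[ch] for ch in ORDER if ch in pos]
    # Draw a horizontal or vertical segment between each consecutive pair.
    for (r1, c1), (r2, c2) in zip(dots, dots[1:]):
        if r1 == r2:                                   # horizontal segment
            for c in range(min(c1, c2) + 1, max(c1, c2)):
                if grid[r1][c] == '.':
                    grid[r1][c] = '-'
                elif grid[r1][c] == '|':
                    grid[r1][c] = '+'                  # crossing an earlier line
                # dots and existing '+' are never overwritten
        else:                                          # vertical segment
            for r in range(min(r1, r2) + 1, max(r1, r2)):
                if grid[r][c1] == '.':
                    grid[r][c1] = '|'
                elif grid[r][c1] == '-':
                    grid[r][c1] = '+'
    return [''.join(row) for row in grid]

def main():
    # Assumption: images are separated by blank lines.
    blocks = [b.splitlines() for b in sys.stdin.read().split('\n\n') if b.strip()]
    print('\n\n'.join('\n'.join(connect(image)) for image in blocks))

if __name__ == '__main__':
    main()
```

Because lines only ever overwrite periods or turn a perpendicular line into '+', dots are never obscured and the last dot is not joined back to the first, as required.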
To get MathJax support, the easiest thing to do is to add the MathJax script tag to _include/themes/yourtheme/default.html, where "yourtheme" is whichever theme you're currently using. I'm sure there is a better way to do this so that you don't need to add this whenever you change themes, but this is working for me for the time being. However, with Maruku (the default markdown rendering engine with Jekyll), actually using math can be a pain, since it likes to replace _ with <em> no matter where it is in the post. So $$x_i$$ gets converted to $$x<em>i$$, which is not what we want. To get around this I switched the markdown engine (to kramdown) in the _config.yml file at the top directory of my repository. Combining this with the fact that I write my posts using R Markdown, so I can easily insert code and output into my posts, makes the whole process a lot nicer. I have heard good things about rdiscount as a markdown rendering engine but I don't think I'll use it. It doesn't appear to support the syntax I described in this post, and I rather like this syntax. kramdown appears to work well enough for my needs, so I think I'll stick with that for now.
50 Does every real function have this weak continuity property? 17 $G$ a group, with $p$ a prime number, and $|G|=2^p-1$, is it abelian? 16 Does every real function have this weak derivation property? 7 If $f$ is continuous and $f(f(x))+f(x)+x=0$ for all $x$ in $\mathbb R^2$, then $f$ is linear? 5 A new characterization of an annulus in the plane?
In a BRDF $f_r$, assuming the Cook-Torrance model, we have a microfacet distribution $D$. This distribution essentially models the roughness of the material, as far as I've understood. Given a direction $v$, it models the proportion of microfacets having that orientation. Now say we have a material; for each point $p$, in theory we have a 3D function that tells us the orientation in a given direction. This explanation (not detailed) makes sense to me; however, when I read about the roughness map, what I see is that it's a kind of texture where each pixel has values between 0 and 1 (from black to white). I struggle to understand the relationship between this map and the microfacet distribution.

Most normal distribution functions (NDFs) are parametrized by some variable (typically $m$ or $\alpha$) that determines the "roughness" or "spikiness" of the NDF (this is often meant to be the rms slope of the surface). We can think of the NDF as a function of the direction as well as of the roughness value (as in $D(\theta,\phi,m)$; here, $\phi$ and $\theta$ are the spherical coordinates of the microsurface normal direction). A roughness map can also be thought of as a function: one that returns a value for $m$ for every point $p$ on a surface. Hence, $m(p)$. And, because $m(p)$ may be different for every $p$ but stays fixed when we only vary $\phi$ and $\theta$, we effectively get a whole different NDF at every point on the surface.

You have misunderstood this. $D$ is a normal distribution function (or NDF for short), so it doesn't really give you a single normal, but a distribution. In a (specular) BRDF you are always using the normal that is the half vector between the incoming and outgoing light, since in this theory every microfacet is a perfect mirror, and thus every microfacet reflects light exactly about its (micro) surface normal, and in that one direction only. You also must not confuse roughness with the microfacet normal. The roughness is a more or less arbitrary value (and a scalar, so a single one at that). The roughness is used differently depending on the overall BRDF (e.g. the roughness in GGX/Trowbridge-Reitz is different from the roughness in Oren-Nayar) and therefore can have different ranges. Still, if you look at one specific NDF, you will see how the roughness is used. For GGX/Trowbridge-Reitz, for instance, $D(\omega_m) = \dfrac{\alpha^2}{\pi\left((\omega_n\cdot\omega_m)^2(\alpha^2-1)+1\right)^2}$ with $\alpha = \mathrm{roughness}^2$ (a common remapping), where $\omega_m$ is the microfacet normal and $\omega_n$ is the geometry normal. You can see here that the roughness is being treated like a single value. The higher the roughness is, the more randomly your microfacets (or their normals) are distributed. The more that happens, the less your surface will reflect light concentrated into the same direction (i.e. the fewer sharp highlights you will have). Thus, your material is more diffuse. Since your roughness is a single value, it makes sense for the map to have only values in $\left[0, 1\right]$, and therefore it is a gray scale image.
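To illustrate the answer's point that the roughness is a single scalar knob while the microfacet normal is the variable the distribution is evaluated over, here is a small Python sketch of the isotropic GGX/Trowbridge-Reitz NDF. It is a generic illustration, not tied to any particular engine's conventions; the sample roughness values are arbitrary, as if read from three different texels of a roughness map.

```python
import numpy as np

def ggx_ndf(cos_nm, roughness):
    """Isotropic GGX / Trowbridge-Reitz normal distribution function.

    cos_nm    : dot product between the geometry normal n and the microfacet normal m
    roughness : artist-facing roughness in [0, 1]; alpha = roughness**2 (common remapping)
    """
    alpha = roughness ** 2
    denom = cos_nm ** 2 * (alpha ** 2 - 1.0) + 1.0
    return alpha ** 2 / (np.pi * denom ** 2)

angles = np.linspace(0.0, np.pi / 2, 5)   # angle between n and m
for r in (0.1, 0.3, 0.8):                 # three texels of a hypothetical roughness map
    d = ggx_ndf(np.cos(angles), r)
    print(f"roughness={r}: D = {np.round(d, 3)}")
```

For a low roughness the distribution is a sharp spike around the geometry normal (sharp highlights); for a high roughness it spreads out, which is exactly the "different NDF per texel" behaviour described above.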
How do I see LaTeX math on any web page and in email? This is a follow up to this closed question. I open a random page, such as something on arXiv at 8:05 p.m. EST, and I see all these dollar signs, and I sigh and I wish that I could see nicely formatted math formulas instead, just like on MO. Is it possible? Can one write a Greasemonkey script to apply jsMath after the fact even if the page authors did not think of it? A Mozilla Firefox addon? Please share your solutions. Seeing like this is an active community of people with similar interests, I am sure that hundreds or thousands of mathematicians would benefit from a solution. The Greasemonkey MathML script written by Steve Cheng and linked to in Scott Morrison's answer worked only partially for me in Firefox on Windows 7: it did not display many \mathbb, \mathcal, and \mathfrak characters because the corresponding Unicode characters were missing in the fonts. Installing additional STIX and Asana Math fonts did not help, in fact it made the display looking worse. So I rewrote the script (a long and tedious job finding the correct Unicode codes and putting them in the right places). I also added arxiv.org, front.math.ucdavis.edu, MathSciNet, and mail.google.com to the sites supported by default, and added miscellaneous characters and TeX commands missing in the original script. Yes, it works with gmail (!) if you switch to the basic HTML view. So now you can read an email from your collaborator and see typeset math right there. Now tell me you haven't always wished and prayed for this? I know I have. Here are the detailed instructions for the method that produces good results using Mozilla Firefox on Windows 7. I haven't tested on other systems, you are welcome to share your experiences in the comments. Click here to install the Greasemonkey Firefox extension. Download a modified Greasemonkey script from here and save it to your Desktop. From the Firefox menu bar, File > Open File, navigate to the downloaded script and open it. Greasemonkey will offer to install it. Do that. That should be it. Check how it works by looking at some arXiv abstracts such as this, or this. Even when the authors use custom notations, such as \red or \cE, removing the dollar signs, putting math in a different font, and using sub- and superscripts dramatically increases the readability in my experience. Edit: I also fixed the displayed formulas with double dollars, which the original script did not handle correctly. So now you can also view this and this. So in the end this was more of a community service than a question. Enjoy the results! I wrote a little program called GmailTeX which adds $\TeX$ capability to Gmail. You can get it here. Added 9/1/2010: Thanks to Kristi Tsukida, it is now available as a Google Chrome extension, for easy installation. display-latex2 described in the other answer (written by Steve Cheng, to which I made some modifications). ASCIIMathML, written by Peter Jipsen. I think the best choice by far is to use MathJax, an active project with a very professional development team. It is free, open source, and it is backed by the American Mathematical Society, the American Physical Society, and SIAM, among others. That is the way to go, if you are able to install it on your server. I wrote a very simple Greasemonkey script which allows you to use your local installation of MathJax on any web page, and in Gmail (in "basic HTML" and "print" views). 
The same web page also contains a Greasemonkey script allowing you to pick and choose between the three locally installed math engines. In the long run, the best way would be for arXiv, MathSciNet, and Gmail to use MathJax on their servers. I made a Gmail Lab request for this. If more people support it, maybe they will do it; that would be great.

Here are detailed instructions. I am hesitant to bump this question too often, so for minor edits I will update this web site instead. First, download and install MathJax and the MathJax web fonts. Install MathJax on your web server. Install the MathJax fonts from the MathJax-webfonts(-beta2)/fonts/HTML-CSS/TeX/otf directory, so that MathJax uses your locally installed MathJax TeX fonts. Next, download mathjaxthispage.user.js and save it to your Desktop. The script assumes that your MathJax installation resides in http://localhost/MathJax. If it is different, edit the script accordingly. From the Firefox menu bar, File > Open File, navigate to the downloaded script and open it. Greasemonkey will offer to install; do that. Start surfing. The following settings make for a more pleasant viewing experience when browsing pages with non-standard TeX macros, for example arXiv.org. The first line controls the magnification of math output, and you can change it to your liking. The other lines are needed to fix a bug with Mozilla's display (otherwise, Mozilla does not display MathML correctly).

Yes, it is clearly possible to write a Greasemonkey script that loads the jsMath library, then calls the appropriate code. You might also look at this description of building a Greasemonkey script that loads an external library. It seems sufficiently complicated (waiting to ensure the library has loaded before calling it) that I'm not interested in the details. If that worked, create a new bookmark in your browser with the above line as the "location". When you come to a page where you want to process the math, just click on the bookmark. As with VA's solution, this doesn't seem to work with the standard view in Gmail. Mathjax.org may not be happy with people loading the script from them to view random web pages. If your browser supports MathML, this solution should produce MathML output, but I haven't tested this yet (because it doesn't work in Firefox for me). If you can get MathML output, please leave a comment saying so. If you had to do something special, please edit this answer (it's CW).

Edit (VA): I believe the only browsers supporting MathML now are Firefox and IE with MathPlayer installed. Using the default output (HTML-CSS) should work very well if you have installed the MathJax fonts on your machine. Or you can wait until the official STIX fonts come out (it has been 13 years of delays, but maybe they are really coming out on May 17?) and install those. Also, noErrors.js and noUndefined.js are not currently part of the standard MathJax distribution, but they can be obtained from the recent builds as explained in the other answer.

Recently I have found the following extension, with the help of which it is possible to send email with MathJax equations (however, I am not sure whether, to read the sent emails, the receiver also needs to add the extension). The extension is TeX All the Things.
This number is called the Copeland–Erdős constant, and is known to be irrational and normal. I believe its transcendence or otherwise is an open problem. This source claims that it has been proved to be transcendental, but the paper they refer to is the one in which it was proved to be normal and so I think the source is mistaken. For now, the knowledge that it is almost surely transcendental will have to suffice!
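For concreteness, here is a tiny Python sketch (purely illustrative) that builds the first few digits of the constant by concatenating the primes after a decimal point.

```python
def is_prime(n):
    """Trial-division primality test; fine for small illustrative ranges."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def copeland_erdos(num_digits):
    """The Copeland-Erdos constant 0.235711131719... to num_digits digits, as a string."""
    digits, n = "", 2
    while len(digits) < num_digits:
        if is_prime(n):
            digits += str(n)
        n += 1
    return "0." + digits[:num_digits]

print(copeland_erdos(40))   # 0.2357111317192329313741434753596167717379
```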
This problem is very simply stated. You are given a positive integer **X** and you want to compute its square. The good thing is that **X** consists only of **N** repetitions of the same digit. Sounds very simple, but the problem is that **X** can be very large here. In all subtasks, you have to handle at most $20$ test cases. In order to return the result, you are asked to compute its hash with the given hash function. Since this can be done in linear time in terms of the length of the result, from now on we assume that you only need to compute the result as $X \cdot X$. Since squaring a number is multiplying it by itself, use long multiplication, which is sometimes called grade-school multiplication or the Standard Algorithm, and speed up the computation using prefix and suffix sums of digits of the intermediate result.

In the simplest subtask, since **N** is at most $9$, we know that **X** fits in a $32$-bit integer and hence the result fits in a $64$-bit integer, so you can compute it as the product of two $32$-bit integers. In the second subtask, **N** is at most $100$, so the previous method cannot be used here. The good news is that **N** is not big enough to prevent us from using standard long multiplication, as taught in elementary school. Using this method, we can easily compute the result for a single test case in $O(N^2)$ time. In this subtask, we have $N$ up to $2 \cdot 10^4$, and you can solve this task using any reasonable speed-up of a quadratic multiplication algorithm; in other words, you can solve the problem of multiplying two long numbers in general. This can be done, probably in the easiest way, by representing $X$ in some significantly larger base than $10$ and then applying the standard long multiplication algorithm. Other well-known methods for multiplying two long numbers fast are the FFT and the Karatsuba algorithm, but since they are not so easy to implement, they are not such good choices here.

Let $M = d \cdot X$ and let $m[k], m[k-1], \ldots$ be the digits of $M$. The first important thing to notice is that $M$ can have $N$ or $N + 1$ digits. Now, in order to get the result, we need to compute the following sum: $M + 10 \cdot M + 10^2 \cdot M + \ldots$. We can do this by accumulating the sum of each column in the above scheme independently. Notice that the exact digits we need to accumulate in a certain column depend only on the number of digits of $M$. After that, in order to get the final result, we have to normalize the results from each column to base 10. This is easy to do by iterating over the sums of columns from right to left, computing for each one its sum modulo 10 as the digit corresponding to this column in the result, and carrying the outcome of dividing this sum by 10 to the next column. However, if we accumulate the sum of each column naively, the method has quadratic time complexity, which is too much for this subtask. In order to speed it up, let's take a closer look at the above adding scheme. The result for each column is the sum of the first $l$ digits of $M$ for some $l$, or the sum of the last $l$ digits of $M$ for some $l$. So in order to get the sum of each column, we can first precompute prefix sums and suffix sums of the digits of $M$, and then we can get the sum for each column in constant time, which is a big speed-up over summing these digits naively. This method has linear time complexity, but be careful with your implementation; in particular, avoid copying and reversing arrays.
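Here is a short Python sketch of the prefix-sum idea described above. It is my own illustration of the technique rather than the official solution, and it omits the hashing step; digit positions are 0-based from the least significant digit.

```python
def square_repdigit(d, n):
    """Decimal digits of (the digit d repeated n times) squared, as a string, in O(n)."""
    # Step 1: M = d * X, computed digit by digit (every digit of X is d).
    m, carry = [], 0
    for _ in range(n):
        carry, digit = divmod(d * d + carry, 10)
        m.append(digit)
    if carry:
        m.append(carry)
    L = len(m)                                   # N or N + 1 digits

    # Step 2: prefix sums of the digits of M.
    pref = [0] * (L + 1)
    for i, digit in enumerate(m):
        pref[i + 1] = pref[i] + digit

    # Step 3: each column of M + 10*M + ... + 10^(n-1)*M sums a contiguous
    # block of M's digits, so it is a prefix-sum difference; then carry.
    out, carry = [], 0
    for j in range(n - 1 + L):
        lo, hi = max(0, j - n + 1), min(j, L - 1)
        carry, digit = divmod(pref[hi + 1] - pref[lo] + carry, 10)
        out.append(digit)
    while carry:
        carry, digit = divmod(carry, 10)
        out.append(digit)

    return ''.join(str(x) for x in reversed(out))

assert square_repdigit(3, 3) == str(333 ** 2)
assert square_repdigit(9, 20) == str(int('9' * 20) ** 2)
```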
In addition, it is better to implement the solution using static arrays rather than dynamic ones. Another way of solving the large test case ($10^6$ digits) is to represent the number as $111\ldots111$ ($n$ ones) times $d$. Now the square of the number is equal to the square of $111\ldots111$ times the square of $d$. As we know, the square of $111\ldots111$ follows a fixed pattern of digits. We represent the square of $111\ldots111$ as a char array and then just use a standard digit multiplication algorithm to compute the result into another char array. Then it's just finding the hash from the result array. Hence, the entire problem is solved in $O(N)$.

Getting Access Denied when trying to access the author's and tester's solutions. I am calculating the square in $O(N)$, but my last test case is still giving TLE. Help, anyone? One difference from my implementation: you don't have to use the modulo each time when summing up the hash. In fact the sum fits into a long long, so computing the modulo once is enough. Maybe that's already enough to pass. I used the same logic but I still don't know why my code does not give a correct answer for at least the first 3 subtasks. Given below is the link to my solution. I am not able to find the test case for which my code goes wrong. Please help me out. I found the error in your logic. You are assuming that the square of $111\ldots111$ ($n$ ones) is $1234\ldots n\ldots 4321$. But that's correct only up to $n=9$. For example, the square of 111111111111111 (1 repeated 15 times) is 12345679012345654320987654321.
Let $x=x_0$. Thus $P(x_0)$ is given. Let $y$ be arbitrary. So we have to prove $P(y)$. $y$ being arbitrary means that either $y=x_0$ or $y\ne x_0$. So now we prove $P(x_0) \lor P(z)$ where $z$ is arbitrary and $z\ne x_0$. Is this correct? If not, what is wrong?

Your proof is bad. You don't have that $P(x_0)$ is given; you have to show that such an $x_0$ exists. The point is that if there is an $x_0$ such that $\lnot P(x_0)$, then $P(x_0)\rightarrow\forall y\,P(y)$ is vacuously true; otherwise, for all $x$ it is true that $P(x)$ and therefore $\forall y\,P(y)$ is true, and the implication is again true, in which case any choice of $x$ will prove the statement. One key point is that the proof doesn't constructively show which of the options holds: is it the case that $\forall y\,P(y)$ is true, or perhaps there exists some $x$ such that $\lnot P(x)$? Here we can find a discussion of the Drinker paradox.

We can prove it in Hilbert style using the axiom system of Herbert Enderton, A Mathematical Introduction to Logic (2nd ed., 2001):
$\vdash (\forall x\, \beta \rightarrow \alpha) \leftrightarrow \exists x(\beta \rightarrow \alpha)$, if $x$ does not occur free in $\alpha$.
$\vdash \forall x P(x) \rightarrow P(y)$.
$\forall x P(x) \vdash \forall y P(y)$.
$\vdash (\forall x P(x) \rightarrow \forall y P(y))$.
$\vdash \exists x (P(x) \rightarrow \forall y P(y))$.
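The same case split can be written as a machine-checked proof. The following is a sketch in Lean 4, assuming Mathlib is available (the lemma `not_forall` and the `by_cases`/`obtain` tactics come from there, and exact names may vary slightly between versions); the nonemptiness assumption corresponds to the "the pub is not empty" clause of the drinker paradox.

```lean
import Mathlib

-- In any nonempty domain there is some x such that if x satisfies P then everyone does.
-- Classical logic is essential: the proof does not tell us which case holds.
theorem drinker {α : Type*} [Inhabited α] (P : α → Prop) :
    ∃ x, P x → ∀ y, P y := by
  by_cases h : ∀ y, P y
  · -- everyone satisfies P, so any witness works
    exact ⟨default, fun _ => h⟩
  · -- otherwise pick a counterexample x₀ with ¬ P x₀;
    -- then P x₀ → ∀ y, P y is vacuously true
    obtain ⟨x₀, hx₀⟩ := not_forall.mp h
    exact ⟨x₀, fun hPx₀ => absurd hPx₀ hx₀⟩
```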
Abstract: Optimal maps in $\mathbb{R}^n$ to disconnected targets necessarily contain discontinuities (i.e. tears). But how smooth are these tears? When the target components are suitably separated by hyperplanes, non-smooth versions of the implicit function theorem can be developed which show the tears are hypersurfaces given as differences of convex functions --- DC for short. If in addition the targets are convex, the tears are actually $C^{1,\alpha}$. Similarly, under suitable affine independence assumptions, singularities of multiplicity $k$ lie on DC rectifiable submanifolds of dimension $n+1-k$. These are stable with respect to $W_\infty$ perturbations of the target measure. Moreover, there is at most one singularity of multiplicity $n$. This represents joint work with Jun Kitagawa.
After reading this question I was wondering whether mathematicians have tried to invent better names for the exceptional simple Lie groups $F_4, E_6, E_7, E_8$. These names seem a bit obscure and do not show that we have a series of four groups here. This series is of course different from the infinite series $SO_n, SU_n, Sp_n$ (pardon, it seems there is no plural form for "series" in English; in Polish there is). The letters $A,B,C,D,E,F,G$ were fine for Killing and Cartan, who were classifying all simple compact Lie groups. Now we could have better names showing that these groups are groups of isometries of series of projective spaces over algebras that are tensor products of the octonions with the real, complex, quaternion and octonion algebras. This is materialized in the Freudenthal magic square. However, the letters used in the magic square do not seem to represent the symmetry. For example, the first row is: $A_1,A_2,C_3,F_4$. In other words we can name it: $SO_3,SU_3,Sp_3,F_4$. This does not reflect the fact that position $(k,n)$ in the magic square represents the tensor product of division algebras $\mathcal A_k\otimes\mathcal A_n$ where $k,n=1,2,4,8$ and $\mathcal A_k$ denotes the reals, complex numbers, quaternions or octonions. The effort of better naming shouldn't be undervalued. By naming groups properly we also understand them better. Do you agree?

Since doubts were expressed in the comments about whether any projective spaces exist for the exceptional Lie groups, I put the following references below. In this paper exceptional Riemannian symmetric spaces are defined. Huang, Yongdong; Leung, Naichung Conan, A uniform description of compact symmetric spaces as Grassmannians using the magic square, Math. Ann. 350, No. 1, 79-106 (2011). ZBL1280.53050. In this paper the Freudenthal magic square is used to define the exceptional Lie algebras in a uniform way using division algebras. Barton, C. H.; Sudbery, A., Magic squares and matrix models of Lie algebras, Adv. Math. 180, No. 2, 596-647 (2003). ZBL1077.17011. Ruth Moufang started work on octonionic geometry. I don't know much about it. Probably some work is still to be done in this area.

Each Lie group reveals some symmetry: what symmetry? How do we define Lie groups? Usually they are defined as automorphisms of some structure. Alternatively, one can first define the group and then use it to define the structure it preserves. For the exceptional Lie groups it is not easy to define either the group or the structure. As an exercise, please try to define the $E_7$ Lie group.
Typesetting in wxMaxima. 1. Introduction. To make your paper look as good as possible you should be familiar with the following: how to format text and create sections (and subsections, etc.), how to typeset equations, and how to incorporate graphics. Typesetting rules for mathematics may depend on the specific field of science, i.e. there may be different standards for math, physics, chemistry, etc. I discussed some of the issues regarding typesetting rules for mathematical physics in my EuroTeX 2009 proceedings paper.

$\LaTeX$ is designed for typesetting, and can do some calculations too. So the main strength of $\LaTeX$ here is that if and when you want to add some other common document feature to your math papers, like cross-referencing, a bibliography, numbered lists, etc., you'll already be in the right domain, and can do all that by editing the same one file. 29/03/2018: In the latest blog post, Dr Timothy Walton discusses LaTeX - the typesetting programme used to format the mathematics throughout this blog site.

Abstract. Too often our students submit incomplete homework that is disorganized, unclear, and nonlinear. Typesetting with $\LaTeX$, although time consuming for those new to the software, strengthens communication by forcing the organization and proper notation required by the precise, formal language of mathematics. Math fonts are discussed further on pages 22 and 30. The ellipsis "…" in the previous expression was produced using \ldots. Notice that the dots are aligned with the base of the characters.
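As a minimal illustration of the kind of typesetting being discussed (my own generic example, not taken from any of the sources quoted above), here is a tiny LaTeX fragment showing the \ldots ellipsis in math mode together with a displayed equation:

```latex
\documentclass{article}
\begin{document}

% \ldots gives baseline dots; \cdots gives centered dots between operators.
The sum of the first $n$ positive integers, $1 + 2 + \ldots + n$, satisfies
\begin{equation}
  1 + 2 + \cdots + n = \frac{n(n+1)}{2}.
\end{equation}

\end{document}
```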
We describe the asymptotic behavior as $t\to \infty$ of the solution of $u_t=\Delta_p u$ in $\mathbb{R}^N$, for $(2N+1)/(N+1)\le p < N$ and non-negative, integrable initial data. Optimal rates in $L^q$, $q=2-1/(p-1)$, for the convergence towards a self-similar profile corresponding to a solution with Dirac distribution initial data are found. They are connected with optimal constants for a Gagliardo-Nirenberg inequality.
Let $ X $ be an irreducible hypersurface given by a polynomial $ f \in K[[x_1, \ldots, x_d]][z] $, where $ K $ denotes an algebraically closed field of characteristic zero. The variety $ X $ is called quasi-ordinary with respect to the projection to the affine space defined by $ K[[x_1, \ldots, x_d]] $ if the discriminant of $ f $ is a monomial times a unit. In my talk I am going to present the construction of an invariant that allows to detect whether a given polynomial $ f $ (with fixed projection) defines a quasi-ordinary singularity. This involves a weighted version of Hironaka's characteristic polyhedron and successive embeddings of the singularity in affine spaces of higher dimensions. Further, I will explain how the construction permits to view $ X $ as an "overweight deformation" of a toric variety which leads then to the proof of our characterization.
I am the chief wizard of computing at https://www.ncssm.edu, a nationally recognized magnet school for science and math that is a campus of The University of North Carolina. I completed my undergraduate degree at Indiana University and my Ph. D at The University of Texas. 66 Why can ALL quadratic equations be solved by the quadratic formula? 51 What is the purpose of the first test in an inductive proof? 50 Prove: $\int_0^\infty \sin (x^2) \, dx$ converges. 50 Is there a mathematical symbol for "For every element"?
I'm trying to simulate - via the Metropolis-Hastings algorithm - a sample $X$ of size 10000 from a density $f$ using a proposal distribution $g$. A classical step after generating my sample $X$ is to discard the first thousand values or so, so that I only keep $X_n$ with $n$ big enough that $X_n$ approximately follows $f$. However, after some reading (here and here), I am under the impression that this is unnecessary if we start from a state $x_0\in M$ that should be reached with high probability. While I think I get the point these texts are trying to make, starting at a large $n$ seems absolutely necessary to me so that $X$, starting from $n$, follows $f$. So, should I skip a thousand values and only consider my chain from then on, or should I inspect the output values and start from the mode of $f$?
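For context, here is a minimal Python sketch (an illustration, not the poster's setup) of a random-walk Metropolis-Hastings sampler with an explicit burn-in parameter; the target density, the starting point and the tuning constants are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    """Unnormalized log-density of the target f; a standard normal for illustration."""
    return -0.5 * x ** 2

def metropolis_hastings(n_samples, burn_in=1000, x0=0.0, step=1.0):
    """Random-walk Metropolis-Hastings; the first `burn_in` draws are discarded."""
    x = x0
    chain = []
    for i in range(n_samples + burn_in):
        proposal = x + step * rng.normal()        # symmetric proposal g
        log_alpha = log_target(proposal) - log_target(x)
        if np.log(rng.uniform()) < log_alpha:     # accept/reject step
            x = proposal
        if i >= burn_in:                          # keep only post-burn-in draws
            chain.append(x)
    return np.array(chain)

samples = metropolis_hastings(10_000)
print(samples.mean(), samples.std())              # should be near 0 and 1
```

Setting burn_in=0 and x0 to a high-density point reproduces the alternative strategy mentioned in the question; comparing the two runs is an easy way to see how much (or how little) the burn-in matters for a given starting point.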
Abstract: It is shown that the kernel of a Toeplitz operator with $2\times 2$ symbol $G$ can be described exactly in terms of any given function in a very wide class, its image under multiplication by $G$, and their left inverses, if the latter exist. As a consequence, under many circumstances the kernel of a block Toeplitz operator may be described as the product of a space of scalar complex-valued functions by a fixed column vector of functions. Such kernels are said to be of scalar type, and in this paper they are studied and described explicitly in many concrete situations. Applications are given to the determination of kernels of truncated Toeplitz operators for several new classes of symbols.
The next step is to construct a function space over the mesh. We start by loading a Mesh in 2D. You can print the expression back to the screen to check that everything is ok. If you want to use the expression a*x+b*y, you have to define a and b as options (either in your code or in the library). Then we compute the gradient of g and f. Template arguments are given to grad to specify the shape of the gradient: in the case of \(\nabla g\), it is \(1\times2\), and \(2\times 2\) for \(\nabla f\), since we are in 2D. Then we compute the Laplacian of g and f. Then we compute the divergence of f. Finally we evaluate these expressions at one point given by the options x and y.

$ ./feelpp_tut_myexpression --a=3 --functions.g="<your_function>" --functions.f="<your_function>"

We start with the functions g=1 and f=(1,1). The symbolic calculus system worked as expected.
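As an independent cross-check of the shapes mentioned above (a 1x2 gradient for the scalar g and a 2x2 Jacobian for the 2D vector field f), here is a small SymPy sketch in Python. It is not Feel++ code, and the vector field f used below is my own nontrivial choice so that the derivatives are not all constant.

```python
import sympy as sp

x, y, a, b = sp.symbols('x y a b')

g = a * x + b * y                     # scalar expression, as in the tutorial
f = sp.Matrix([x * y, x + y])         # a sample 2D vector field (illustrative choice)

grad_g = sp.Matrix([[sp.diff(g, x), sp.diff(g, y)]])   # shape 1 x 2
jac_f = f.jacobian([x, y])                             # shape 2 x 2
lap_g = sp.diff(g, x, 2) + sp.diff(g, y, 2)            # scalar Laplacian
div_f = sp.diff(f[0], x) + sp.diff(f[1], y)            # divergence of f

print("grad g =", grad_g, "shape", grad_g.shape)
print("jac f  =", jac_f, "shape", jac_f.shape)
print("lap g  =", lap_g)
print("div f  =", div_f)
print("g at (1, 2) with a=3, b=1:", g.subs({x: 1, y: 2, a: 3, b: 1}))
```

The printed shapes (1, 2) and (2, 2) match the template arguments described in the tutorial text.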
a nonlocal SPS type equation in balls of $\mathbb R^3$ with Dirichlet boundary conditions. which changes sign exactly $k$ times in the radial variable. of radial global solutions for the associated nonlocal parabolic problem having $k+1$ nodal regions at every time.
Once in a while, someone on Reddit asks for justification why everyone there hates PHP. I never reply, because there's too much to list in a comment, but maybe I can write a definitive post here. Most recently updated: 7 Feb 2014, for new features in PHP 5.6.0a1. A day like any other: In order to test that all the categories of products are behaving correctly on the website, I spend an hour writing a page to display a table of in-stock (further subdivided) and out-of-stock items. About 6 business days after finishing, while waiting for review: instead of reading the entire history of every single Planet MySQL blog, I spend another half hour fancying up the CSS of my page. My boss catches me, asks what the page is about, rejects the hypothesis that testing is important, and lectures me. We are not making enough money to pay you your pathetic rate; do not do extra work. Several business days later: the system is finally approved and live. Nobody in the office is trained on it when an order comes in. The order is for an out-of-stock item. The Big Boss is rather angry, and demands to know whether there is some way to find out "what the site thinks it has in stock." My boss answers "No." I am silent. I'm already looking for a new job. Business Day 88 (about four months into the 90-day evaluation period): after 2 days and 2 emails, I finally get a meeting with the Big Boss to announce that I'm going to terminate my at-will employment after Day 89 to start my next job, 45 miles closer to home, at $pay * 1.38 + $benefits * 1.25. (I ultimately decide to tell him the exact offered salary, though I can't tell if he's BS'ing me on whether it's an acceptable/common question to ask, because I figure he won't match it. He doesn't even try to come up with a counteroffer.) He threatens that I might need to stay 2 weeks because he doesn't know if I can leave. The last project was finished somewhere around Day 76, and has been waiting for review. Every time I pinged my boss on a review, ever, including this final task, the answer was: "later today." Day 89 was thankfully uneventful. Recently at work, I updated our SPF policy to something accurate. Along the way, to understand the policy I was deploying and what the previous version actually meant, I had to understand the various rules and types involved. When I attended community college, the computers in the labs were running Windows 95, pretty much in a state of constant hilarity. I'll get to that some other time, though; today's wacky hijinks are about The Server: the most secure machine on all of the campus, since it was the master authentication source. I read "Linking in JSON" the other day. I knew someone had already gotten started on JSON Schema. (A quick search shows JSON namespace ideas floating around.) JSON as the lightweight alternative to XML being turned into XML? This is beginning to sound familiar. With lightweight formats, we tend to get a proliferation of variants for different uses. (Not just images, naturally, and CPAN manages to use more than one.) Heavier formats tend to have a problem I'm going to call "Accessories Not Included": they get sufficiently large and complex that not all readers support all format options. If the growth is arrested early enough, you end up with a handful of profiles; if it gets out of hand, you have over ten of them. This sort of thing is practically destined to keep happening. 
More features generally cost more memory and processing time, or some other inconvenience like a compilation step, which is against the religion of some developers. Thus, lightweight versions of things spring up in opposition to whatever is perceived to be too heavy. Sometimes compilation is considered the lightweight alternative, since it's not done on every request. Though sometimes many similar projects proliferate because they just aren't that hard. It's easier to write a web framework than to learn one, so there are a lot of them. Made it out of the link forest and need something more to kill the time? Maybe you want to subscribe to my feed, or follow me on twitter. Why are device pixels so meaningful that we get stuck designing around pixels, even though we "know" we should design for device-independent units? If a user wants to zoom so that features are physically 50% larger, then the same problems of rendering 1-pixel features on 1.5px areas occur, but this time we know we can't tweak physical size. Antialiasing happens instead, resulting in a zoomed but blurry UI. Worse, subpixel rendering adds noise when not rendering precisely onto the intended subpixels, but the font rendering is done by the time the zooming layer gets to see it on Linux. Unless everything is lovingly hinted and/or provided at multiple PPI steps, there's basically no solution to the problem. I'm willing to bet that people will skip properly handling multiple PPI settings if it's any more complicated than supporting power-of-2 sizes. As long as pixels matter, which they will up to 600 PPI or more, people are going to design for pixels. This problem is that there's too much writing on the web that attacks "RPC systems" as the logical opposite of REST, and I took this assumption unquestioned. Imagine watching a baseball game on a TV where ESPN is a smart app, not a dumb channel. When you're watching a game, you could tell the TV to show you the career statistics for the current batter. You could ask the HBO app which other movies this actress has been in. Point is: it'd be better for both viewers and the networks if a TV "channel" were an interactive app rather than a mere single stream of video. Splitting my attention between things means I don't remember either thing. Worse, the things that a smart channel offers me in Gruber's vision—the things actually related to the show I'm watching—are useless trivialities. If I had smart TV and lacked the discipline to avoid these side quests, then I wouldn't gain anything out of my screen time. I'd forget the answers to the fleeting distractions, and also not be able to remember what I was watching in the first place. I can say all this because I already know what the price of distraction is. I refuse to pick up my iPod while watching things, no matter how interesting it seems at the time, because I'd rather focus on the show or movie. What makes me happy? It's not the Internet; it's not TV; it's not apps; it won't be all three of them rolled together into smart TV. However, a smart TV done well will still be a success in the market. We'll find out sooner or later whether Apple did it well. It's almost certain that they'll try. Many languages these days are fairly Lispy, except for being homoiconic and thus having a full-strength macro system instead of C's token pasting or many other languages' nothing. But which ones are absolutely vital to learn, and which ones are "just different languages"? 
Update, 1 Oct 2012: This post is largely obsolete, as Amazon recently added private streaming support to the CloudFront section of the AWS Console. The original post follows. I'll just assume you're aware of the IaaS offering known as Amazon Web Services, AWS. CloudFront is a CDN in the AWS micropayments-as-you-go style, which offers the ability to serve non-public content stored in S3. This is a compendium of the things I learned setting up a private streaming distribution for use with PHP. This is going to be fairly low-level, since I like to drink deeply of the systems I'm working with. I don't think AWS works smoothly enough yet that you can put the API on the "it's magic" side of the line. What's the difference? Which one is standard? Is one harder to break? Why don't any of the outputs match either of Wikipedia's examples? Create the new bucket with the desired name. Go to the old bucket and select all files: click the first and then shift+click the last. Above the file listing, in the button row, is one marked "Actions", which opens a menu that includes "Cut" and "Copy". Pick one. Go to the new bucket, click Actions, and Paste your files. Done. No 3rd-party software required. Why would anyone want to rename a bucket? In our case, we created a StudlyCapsStyle bucket, which can't be used with CloudFront's dns-compatible-style. In double-checking this post for accuracy, I noticed that Cut/Copy are available on the right-click menu for a single selection, but not the multi-select. Weird. This post is a companion to Perl and Unicode in Brief, an attempt to cover the same ground more concisely. This is an extended remix of my recent post on the subject, only less of a rambling story and more focused. Again, I'll start with some background definitions. I'll also assume that you're going to make everything UTF-8, because as a US-centric American who has the luxury of using English, that's what makes the most sense for my systems. However, if you understand everything I wrote, it should not be difficult to make everything UTF-16 or any other encoding you desire. Note: this post has been superseded. REST is easy to describe. It goes a little something like this: "You have some representation, and you send (or receive) the whole thing to read it or make changes." People coming from Clojure would understand it as REST sends values. I can GET an object, receive the value, manipulate it, and PUT the new value. It's so easy because it just uses HTTP! Right? Maybe not. If REST is so easy, why is there HATEOAS*? Shouldn't that have been obvious? Why do we have arguments about versioning and parameters and formats and headers on Reddit? Then if I get an error message, mysql will tell me e.g. "Error ... at line 42100". Running "vim +42100 binlog-play.sql" lets me inspect the stream to see what went wrong in detail. The final piece of the puzzle is that executing "FLUSH LOGS;" or "mysqladmin flush-logs" will push mysqld on to the next binlog file, so you can safely play out the one you want. Once you've finished processing a file through mysqlbinlog, you can just remember the file boundary, and flush mysql's logs if you want to process the one it's presently writing to. This is in regards to piping mysqlbinlog output from one mysql server into the mysql client to execute on another; the post I linked above discusses doing so for switching to Amazon RDS. 
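That workflow is easy to script. Below is a minimal sketch of my own (not code from the original post) that uses Python's subprocess module to render one binlog file to SQL first, so that an "Error ... at line 42100" can be inspected with vim +42100 binlog-play.sql, and then streams the rendered file into the destination server's mysql client. The host name, user, and binlog file name are hypothetical.

```python
import subprocess

BINLOG = "mysql-bin.000042"  # hypothetical binlog file on the source server
DEST_CLIENT = ["mysql", "-h", "dest.example.com", "-u", "repl", "-p"]  # destination

# Render the binlog to plain SQL so it can be inspected on error.
with open("binlog-play.sql", "wb") as out:
    subprocess.run(["mysqlbinlog", BINLOG], stdout=out, check=True)

# Stream the rendered SQL into the destination server.
with open("binlog-play.sql", "rb") as sql:
    subprocess.run(DEST_CLIENT, stdin=sql, check=True)

# Afterwards, rotate the source's binlog so the file just played stays closed:
#   mysqladmin flush-logs    (or run "FLUSH LOGS;" on the source)
```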
The basic strategy is to minimize downtime by loading a database dump from the source on the destination, then use mysqlbinlog on the source and the mysql client to feed updates from the source to the destination. The updates can be faster to load than a new dump; and when it's time to switch servers, it's a matter of stopping database clients, turning off the source mysqld, sending the final binlog updates, pointing the clients to the destination server, and turning the clients back on. As opposed to waiting for a whole dump to load while the clients are off.

This post is a companion to Perl and Unicode in Brief, an attempt to cover similar ground more concisely. And this post is a revised version of the one you're currently reading. I'm currently moving data from a (relatively old now) MySQL 5.0 server into Amazon RDS. I've been here before, when I was moving data from MySQL 4.x into 5.0 and mangling character sets. This time, I want to make 100% sure everything comes across with maximum fidelity, and also get the character encoding as stored to be labeled correctly in MySQL.

Character Set: a specific table to translate between characters and numbers. Example: ASCII defines characters for numbers 0-127; "A" is 65. This can also be described as "a set of characters, and their corresponding representation inside the computer."

Character Encoding: a means of "packing" numbers from the character set into a container. Example: UTF-8. The Unicode character 0x2013 becomes 0xE2,80,93. The "E" signifies "Part 1 of 3", and part of the remaining bytes simply indicate "Continued"; the 0x2013 is then divided up to fit in the parts of the bytes that aren't indicating their "Part 1" or "Continued" status. In the specific case of UTF-8, the encoding is designed so that the ASCII range 0-127 (0x00-7F) is encoded without change: a leading 0-7 means "Part 1 of 1".

8-bit character encoding: In older, simpler days, character sets defined only as many characters as could fit in 8 bits, and defined the encoding as simply the numbers. Character number 181 would encode as a byte (8 bits) with value 181. A character encoding implies the associated character set, because the encoding defines how numbers in its character set become individual bytes. How characters in other sets would be encoded is left undefined and basically impossible. This last point is why MySQL lets you set "character sets" to UTF-8, though the latter is an encoding.

One of them succumbed to death by risk-aversion. One of them couldn't let go of the tether and fly. I think Linus said the same of svn: paraphrased, "If you're trying to make 'a better CVS' then you have already lost, because CVS is too broken to fix." Hey, sapphirepaw: make sure what you do is good on its own, not "an X only different".

I'm getting old: if I were to pass on at the same age Jobs did, my life would be more than half over already. What separates me from Jobs? There's the matter of leverage, where he could take his vision and coordinate the prototyping and development of it, into the iPod, the iPhone, the MacBook Air, the iPad. There's also the matter of having vision. In 2006 or so, I beheld my first iPod in real life, an old (FireWire based) model with a physical click-wheel. In 2008 I picked up a different, small MP3 player and for the first time, immediately noticed the limitations of digital control. Without having handled the iPod and getting a feel for the analog response of the wheel, I probably wouldn't have given the buttons a second thought.
Do you want to scroll on the generic? Click-click-click-click. Or click-and-hold, guess at how long you need to go (since the screen is slow enough to be unreadable at this scrolling speed, and they don't slow updates to compensate), and release. The point here is, Jobs saw humans as inherently analog, and adapted all of his machines to analog control. It's a simple thing, but Jobs was apparently devoted to HCI. The "vision" simply falls out of that. It's not like the limitations of digital control weren't apparent in the 1980s. Compare Rad Racer to a real car's steering wheel. Anyone focused on "how it feels" could have been Jobs back then, inventing 2010 in the 16-bit era instead of carrying 8-bit paradigms through the 1990s. In contrast, I seem to lack vision because I'm busy implementing arbitrarily complex business rules at work, and staying away from the bleeding edge of gadgetry. I'm not in the consumer space; I'm not taking any research toward the consumer space; and I'm not thinking about what's next for it, either (at least, not beyond what turns out to actually be the next thing*.) But, I'm also having little impact on the wider world, writing code that never leaves the house. It's important, but after I am gone, will these be the best years of my life? Will I think college was the best time of my life, forever? I think it's time to put my free time to better use and do something instead of watching the world slowly develop towards Jobs' vision on its own. * I have a dead draft which discusses the crazy idea of "having a set-top box inside the remote" in 2006 or so. It then points out that h.264-over-wifi ought to handle the bandwidth to do exactly that from your iPhone now. It starts fleshing out what would be necessary to make it happen, then abruptly ends with a note: "Two days after I started writing this, Apple announced AirPlay." Wait. What? Not only does ec2-user have no password, but I didn't change its NOPASSWD line in sudoers. One of the first things I did at my current job was to rewrite a Perl/CGI (the module, and the actual cgi-script execution model) site into PHP. Part of this site implemented a single-signon (SSO) system for a partner site that hosts our training videos. Clicking the link led to the innocuously-named "training_jump.pl" CGI script. The goal in life of the training_jump is to redirect a user to the partner site, passing along the username and email address. The partner site creates the user if necessary, starts a session for them on its server, and ultimately displays the actual training content. Inside training_jump is an innocuous-looking "use OAuth::Lite;" line. I didn't know what OAuth was at the time, so of course I went and looked it up: OAuth is designed to let a site like ExampleMashup authenticate someone as "twitter user chris" without needing to ask chris directly for their twitter password. Of course, this makes no sense, because in our case, we possess the account, not our partner. Likewise, once the login is complete, the user should end at our partner's site rather than our own. We have nothing to use the oauth token for, because we don't perform any operation at the partner site aside from the login. Yet here inside training_jump was OAuth. The user hit training_jump; we redirected to the partner by IP address (!) 
with the OAuth request token, all the necessary user data, a callback URL (training_jump again); they duly redirected; we collected the response token and redirected the user back to the partner with that token as the parameter. The end result is still kind of fragile, in that AFAIK, it only works in the first browser you sign up with. If you log in with Firefox, then try it in Chrome, the latter gives you an error somewhere along the line instead of videos. IIRC, research at the time indicated that there was no good PHP OAuth library, and/or the suitable libraries didn't implement the exact flavor of OAuth API that was being used by the Perl code. I'm absolutely certain I considered replacing the Perl entirely, but I don't remember why I rejected PHP OAuth as a solution. I couldn't simply continue using training_jump as-is, because the CGI module and PHP store their session data in different locations, in different formats. The username in the PHP session wouldn't be accessible to pass through the authentication dance, and it was clearly inadvisable to modify training_jump to accept a username as a URL parameter. Nowadays, training_jump has been succeeded by the cleverly named training_jump2, which actually reads request variables on stdin and produces an answer on stdout. (The format of this text is much like LiveJournal's ancient API, from back when I had a LiveJournal client. There was no convenient interchange format, as the Perl code didn't have JSON installed at the time and PHP didn't have XML. "Lightweight" eats you again.) The PHP training_jump manages the connection between server environment and training_jump2, and training_jump2 simply had its server environment replaced with communicating over pipes. We're in the negotiation phase of moving to the provider's newer platform, which has a proper, encrypted SSO system. training_jump2 is slated to become irrelevant, eventually. In the meantime, it's the only bit of Perl CGI that never made the jump to mod_php. I programmed in assembly for some time, using pointers without understanding what they were, or that they were called pointers. When I finally got to learning C, the pointer syntax was downright inscrutable, but when I got it, suddenly all of C and all of assembler laid clear before me, all at once. It was a beautiful thing. I was reminded about this while reading this post from HN. It inspired me to try explaining pointers from the opposite direction. Instead of trying to teach pointers via C syntax, let me try to start with pointers outside of programming, then discuss them in relation to C and PHP. I opened up a PDF in the default Gnome PDF reader last night, and it was once again a terrible experience. It opened with the zoom set to "fit page width", and the scrolling set to continuous. There's no concept of a persistent user preference, or user preferences that override the document preferences. Then I got to considering the underlying reasons why I didn't like the default display. In its original conception, "Network Neutrality" as I understood it was about a lack of privilege amongst competing traffic sources: that Google, Viacom, the atheism reddit, the Anglican Council, and the Time Cube site would all be subject to equal traffic slowdowns in the face of congestion. A bit of thought would suggest that treating individual packets equally was not, in fact, desirable: you probably don't want your VOIP call and each individual P2P connection to be subject to the same rules, really. 
You'd rather the call got through even at the expense of delaying a few packets of your (or your neighbor's) download. Certain large ISPs have been trying to twist it to mean they can charge on both sides, for content providers to be allowed to send data to "their" customers, though the customers are already paying (quite profitably for the companies) for their own access. They would be charging everyone for access, so it's "neutral," right? This is an anti-neutrality stance trying to co-opt the word so that it sounds like a good thing. Pro-neutrality forces (in the first sense) argue that requiring content providers to pay for carriage, or for "premium" speeds, would completely destroy the internet as we know it. Also, many of them believe they are preserving existing neutrality, but this turns out to be incorrect. A content delivery network (CDN) essentially is an implementation of pay-for-speed, because the content provider pays for their content to be stored closer to end-users, which reduces load time for those users. Although the end-user's ISP doesn't receive payment directly, the content provider's payment to the CDN also funds the overall system by paying for the CDN's own connectivity at the ends, and infrastructure in the middle. I think the value of the Internet is in two things: uniformity of access for end-users, and fair division of capacity. Uniformity of access is simply that any connection should be able to carry packets from any content provider, so that the view of "the Internet" from any one ISP is the same view as from any other. Otherwise, "the Internet" would cease to have meaning, as it reverted to the days of online services like CompuServe, Prodigy, and AOL. Fair division of capacity is exactly what it says on the tin, that speeds and latencies should be balanced among customers of an ISP. I shouldn't be able to start a download and prevent Netflix from delivering video to my neighbor, and a bunch of people on 6Mbps connections shouldn't be able to deny service to 1.5Mbps subscribers. The real emotional punch that gets brought into neutrality discussions seems to come from the leonine terms the ISPs would like to apply: around one-tenth of the current (often secret) usage limits, for as low as six-tenths of the price, as in Time-Warner's experiment last year. Though the current arrangement is apparently profitable and growing more so over time: the cost of carriage is falling faster than inflation is diluting revenues. The fear is that ISPs will establish these terms "in order to build out next-generation networks" and then not follow through on that investment, artificially limiting their service and allowing inflated payments that do nothing but lift the artificial restriction—in order to offer what is on the market today. Promises, after all, are cheap. This fear is only exacerbated by the incumbent ISPs' wars against municipal broadband. City-owned networks are being opposed in many states as 'unfair' competition. In at least one case, the city in question embarked on its network building course because the ISP claimed they would never offer higher speed. Yet as soon as the city decided to offer higher speed itself if nobody else was going to, the ISP frantically began upgrading their infrastructure, hurrying to complete it before the city's project was finished, so they could argue that the city network was 'unnecessary' due to the ISP offering its (new) high-speed service. 
This fear is further exacerbated by the regular broadband reports showing that countries with more competition amongst ISPs, regardless of urbanization, have the fastest speeds and highest limits on data transferred, where applicable. If larger companies truly did have more efficiency and more benefit to the customer as they claim, then the average US broadband connection should meet—or exceed—the average connection in Japan. Instead, large companies' performance suggests they are the major impediment to improved service. For the Internet to continue its course of innovation and convenience for the American consumer, protection of uniformity of access and fair division of capacity are sorely needed. Placing these responsibilities into the hands of existing large ISPs who have been actively demonstrating their complete lack of commitment to the principles, or their customers, except when threatened en masse with an alternative network, is clearly the wrong course of action to ensure the result. It is putting the fox with feathers stuck in its teeth in charge of the hen house. TCP assumes that the connection has a more or less constant RTT and assumes delays are losses due to congestion somewhere on the path from A to B. This struck a special chord with me, because I had just recently read about TCP algorithms that had been designed to combat "buffer bloat": instead of scaling strictly based on packet loss, assume increases in latency are due to buffering on the path. Then, back off to avoid both packet loss and longer latency, which is measured by RTT. Since 3G attempts to implement reliable delivery itself, TCP-in-3G bears performance characteristics similar to TCP-in-TCP that is explained in Avery Penwarr's sshuttle README. (sshuttle takes care to extract data from the one TCP connection and copy it to a technically distinct connection, instead of wrapping it, in order to avoid the problem.) And actually, I see that Singleton linked to another source going into more detail, which I skipped reading the first time around. So not only is 3G a bad transport for that reason, but the variable RTT its delivery mechanism introduces also sinks TCP algorithms which try to use increased RTT to avoid queueing in buffers. The buffer-avoidance aspect can't distinguish between "bad" buffers like those in a cheap home router that take huge chunks of data off the Ethernet at 100 Mbps, then dribble it out at 0.6 Mbps to the Internet at large; and "good" buffers like those in the 3G system that are unclogging the spectrum rather than crowding other users of the tubes. Singleton proposes some mitigations for app developers; I'd rather try to "fix" TCP so that it gracefully handles variable RTT. It may violate the perfect conceptual segregation of the OSI Seven Layer Model, but simply having the phone's TCP stack aware of the wireless interface itself would go a long way toward mitigating the problem. Perhaps if the 3G hardware could indicate "link restored" and "backlog cleared", TCP could skip using the RTT of packets received between those events in its congestion avoidance. It seems like WiFi would need some mitigations as well. It is particularly prone to periods of "solid" packet loss, occasionally even destroying the beacon signal and thus kicking everyone off, and periods of fairly reliable reception. However, when you do get reception back, the data pours in without significant degradation in speed, so the underlying issue is a bit different. 
However, the connection always seems to be particularly slow if it has the bad luck of being started during a period of loss. In the end, the problems seem to come from allowing endpoints to specify receive-windows, but not the network. TCP views the network as a dumb thing that it can draw conclusions about based on end-to-end behavior. Yet the increasing prevalence of wireless, and of sending TCP over wireless links, seems to indicate that "the network" should be able to add metadata to the packets (probably at the IP level, since the network is conceptually unable to peek inside of IP data) to indicate that the delivery of the packet was delayed for reliability. Unfortunately, rogue devices could set that bit for their buffer-bloated packets, so it's about as practical as the Evil Bit. In Python, the sum() builtin gives you the ability to take a list, say [1, 2, 10] and find the sum of it as if you had written out 1 + 2 + 10. What happens if we put these two observations together? Can we sum() a list of lists to get one flattened list? Nope. sum() internally starts with "0 + (first element of sequence)" so you can only pass things that can be added to integers. Over the past couple of weeks, I have assembled a reader in PHP, such that it understands code of the form (print (== (+ 4 4 6) (- 30 15 1))) and will be able to create PHP source that ultimately prints out "1". It's kind of brokenly stupid in other ways, but it's the bare-bones skeleton of a working compiler. Something I have never been able to build prior to this attempt, largely because I wanted to tokenize something superficially like PHP, and I always got bored of defining all the stupid tokens. Going with s-expressions made for only a handful of token types so that I could get on with the interesting bits instead of grinding out pages of /&&|\|\|/ crud. Because almost anything can go in an identifier, I can treat everything as identifiers for now. There are a few obvious things it needs next: string types. Variables. defun. defmacro. Separate namespaces for functions and variables, defined by context, so you can say (array_map htmlspecialchars row) and it will know that the first argument passed is a callable and the second is a expression, so that they can compile to 'htmlspecialchars' and $row, respectively. And to serve its original purpose as an "enhanced PHP"-to-PHP compiler, it needs to read that source language rather than s-expressions. Of course, with a non-sexp-based language, macros might not work out so well, but I do want to be able to run code to rewrite the AST (or the whole tokenizer: aka reader macros) at compile-time. There's a bunch of features I want to add, too. Proper named arguments. Multiple-value return. Ubiquitous lexical scope, so obviously let and its function equivalent (flet perhaps?). Something else that I'm forgetting at the moment. In the long run, I also want to do some optimizations; ideally, I could turn $efoo = array_map('htmlspecialchars', $foo); into $efoo=array(); foreach ($foo as $k=>$v) $efoo[$k]=htmlspecialchars($v); as well as doing simple optimizations like i++; to ++i;. I'd also love to be able to compile some 5.3 code like $foo::bar("baz"), ?:, and "nowdoc" syntax into 5.2-compatible renditions (answer to the first: call_user_func(array($foo, 'bar'), "baz") though my accumulated wisdom now considers such things to be a code smell). 
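For readers who have never written one, a reader for s-expressions really is only a handful of lines. The sketch below is my own illustration in Python (the post's actual compiler is written in PHP): a tokenizer with only a few token types and a recursive parser that turns (print (== (+ 4 4 6) (- 30 15 1))) into nested lists, ready for later passes.

```python
def tokenize(src):
    # Parentheses become their own tokens; everything else splits on whitespace.
    return src.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    tok = tokens.pop(0)
    if tok == "(":
        form = []
        while tokens[0] != ")":
            form.append(parse(tokens))
        tokens.pop(0)  # discard the closing ")"
        return form
    # Almost anything can go in an identifier, so atoms stay plain strings for now.
    return tok

ast = parse(tokenize("(print (== (+ 4 4 6) (- 30 15 1)))"))
print(ast)  # ['print', ['==', ['+', '4', '4', '6'], ['-', '30', '15', '1']]]
```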
The weird thing about this is that if I succeed, I'll be doing what Rasmus did to create PHP—riffing on an existing system in the domain to come up with something a little better. It began with a simple bit of laziness: I wanted a preprocessor so that I could write as if PHP had multiple return values. I'd write "return $x, $y;" in the callee, and "$a, $b = fn();" in the caller, and the preprocessor would rewrite it to valid PHP (throwing array() and list() around the appropriate expressions). But I'm even too lazy for that. To do this right, I'd need to fully parse the PHP, so I could understand more complicated return expressions like method calls. So instead of that, I slapped together a lexer for s-expressions. They're a lot less hairy, and this is just some twisted experiment. Alternatively, I can define a package in SBCL that emits PHP, and have the reader and macros for free. Then my head exploded. * Because this tool was intended for PHP shops, the compiler would have to be written in and emit PHP so there's no Scary Foreign Language involved, other than the compiler's input. And originally, that input language was going to be almost PHP. Relying on others' suggested magic numbers for crypto is probably a Bad Idea, so recently I studied Diffie-Hellman a while to understand what the "DH Group" parameter was in my IPSEC setup, and my PuTTY settings. DH turns out to be a lot like RSA, so bit lengths are comparable between the two and neither is directly comparable to symmetric ciphers like AES. A specific Diffie-Hellman exchange happens using some parameters: a generator for the base, and a prime to use as modulus. (An exponent remains secret.) DH Groups refer to specific, pre-chosen prime-and-generator pairs so that, for example, SSH can negotiate "group 14" instead of transferring the complete parameters themselves. These groups have been standardized in RFC 2409, with additional groups defined in RFC 3526. The latter RFC defines the bit lengths of the groups explicitly, stating that group 5 is 1536 bits, group 14 is 2048, and group 16 is 4096 bits. As far as I can tell, groups 1 and 2 defined in the earlier RFC are only 768 and 1024 bits, respectively. Rackspace recently announced that they'd like to discontinue Slicehost at some point, migrate everyone to the EC2-like Rackspace Cloud, and make people worry per GB about the bandwidth they're consuming. So I'm preparing a move to Linode for more of everything*, and in the planning, I've come across a new argument in favor of whimsical names for servers. If I give each server a whimsical name, like alice.example.com and bob.example.com, I can always refer to the old and new IP addresses as "alice" and "bob", while the change of IP of "www" propagates through the DNS. Between the time where the new address is set and the old one is expired (and note that there's no way to force an ISP's resolver to honor the TTL if they choose to assume "no TTLs will be shorter than an hour") the name being transitioned points to a more-or-less random server. Basically, the whimsical name is like a server ID, and the service-based names are just conveniences. Though a program is three lines long, someday it must be maintained; though a server hosts one service, someday it will have to be replaced. When an organization gets big enough that it can't generate whimsy as fast as it needs servers, then it should go with something more regular for the server name, but each server should still have a unique, non-service-based name. 
* Except bandwidth, but the 11% difference is smaller than my current monthly consumption, so it turns out not to matter much. Even if it did matter, that much transfer on The Cloud (insert angelic chord here) would be expensive, so Linode still wins. They say you don't get a second chance to make a first impression, but that depends on who you are. Apple seems to have managed a couple of major architecture transitions and their own Vista without too much ill will, yet Microsoft was practically crucified for Vista with no architecture transitions. Fair warning: many links in this post lead to tvtropes. I used to believe that my computer was mine, and no program had any authority to do anything without my consent. (This can probably be traced back to my days on Slashdot, a decade ago; if I didn't get the opinion from there, they certainly reinforced it.) I believed I was sufficiently smart to manage my own software, without everyone's updater constantly nagging me to do so. I especially didn't want the updater to do it on its own; this often lead to problems, especially when Firefox got updated behind the scenes while I was using it. However, I liked automatic security updates on Linux, so I got rather used to restarting Firefox when links mysteriously failed to be followed, or menus and tabs couldn't be opened—these being the days before the "Firefox has been updated and needs to be restarted" notification. If you've ever tried to add a directory to man's search path, you've undoubtedly noticed that the MANPATH environment variable replaces rather than extends man's built-in search path. Today, I rediscovered a clever little setup on a machine at work. Add your desired MANPATH lines to this file at whatever position you wish. Don't forget to curse the lack of an include mechanism at this point, which prevents you from automatically getting changes to /etc/man.config. Cheer up, because there probably won't be any. Add an alias to your shell. For bash, you would put something like alias man='man -C ~/.config/man/man.config' (which obviously includes the name of the file chosen in step 1) into ~/.bashrc. Remember to source ~/.bashrc to make it take effect in the current session. That's all! Now when you run man, your personal manpages will be searched as well. The documentation for man on the system in question claims that it will use $PATH to guess at additional man page locations, but this does not actually work for me. Having a command in ~/.install/bin does not allow man to find the manpage in ~/.install/share/man. I ran into some interesting problems in Perl, which invoked more learning around the require/use mechanisms and how constants are interpreted. In this post, I'll lay out some general terms about variable scoping, such as lexical scope, dynamic scope, the differences between them, and how they all interact in Perl. And then I'll cover require and use with that foundation in place. If you've been wondering about lexicals or closures, this is your post. I've tried to lay things out more or less from basic principles, despite the verbosity of the approach, because this has taken me years to understand. I started programming with Perl in 2000 and still learned a bit more about it today. Yes, it's 2011 now. Hopefully, you can read this and get it in less time. The following information applies to Ubuntu 10.04 LTS, Lucid Lynx, with xserver-xorg-input-wacom installed to provide xsetwacom. 
This is about fine-tuning your tablet; if your tablet isn't working at all, you probably need bug #568064. There used to be a wacomcpl program to graphically configure a Wacom tablet; this quit working with changes to the upstream project and/or the Tcl dependency, so it hasn't been working for me for some time. Before it quit working, I set up a script to call the xsetwacom command-line program with the desired results, so the loss didn't affect me. Mainly, I had adjusted the active area so that tracing a circle on the tablet would result in a circular shape on the monitor. $aspect is the aspect ratio of the monitor, obtained by dividing where you write the colon. For example, 16:10 = 16/10 = 1.6. Alternatively, you can divide the width in pixels by the height, so a 2560x1600 display has an aspect of 2560/1600 = 1.6. (If you have square pixels, which practically everyone does because they're so convenient.) The monitor being narrower or wider refers to whether the monitor's aspect is lower or higher than the tablet's, respectively. You can calculate the tablet's aspect by dividing $w by $h; obtaining them is the subject of the next section. $w and $h come from the actual tablet, which you can find easily enough. In these commands, $T represents your tablet's name, which you can get from `xsetwacom list dev`. In my case, there's a tool name attached, so it prints "Wacom Bamboo 4x5 Pen STYLUS" (among other things) but only the "Wacom Bamboo 4x5 Pen" portion is the actual device name. The first command simply resets the coordinates to cover the full tablet, just in case they have been changed. $w is BottomX-TopX, and $h is BottomY-TopY. Armed with this information, you should now choose the correct formula from above, and substitute all the numbers. In my case, the top coordinates are both 0, so BottomX=$w=14720, and BottomY=$h=9200. Altering TopX or TopY means that the inactive portion of the tablet runs down the left or across the top. I don't really care where the dead zone ends up, so I chose the method that results in the fewest calculations needed. You could just as easily set BottomX to BottomX-$x_offset to move the dead zone to the right side of the tablet, or adjust both TopX and BottomX by half of the $x_offset to keep the active area centered.
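The arithmetic is easy to get backwards, so here is a small Python restatement of it (my own sketch, not the post's script). It assumes the goal stated above: shrink the tablet's active area along one axis until its aspect ratio matches the monitor's, so that circles traced on the tablet stay circular on screen. The example numbers are the Bamboo values quoted in the post.

```python
# Tablet extent reported by xsetwacom (BottomX - TopX, BottomY - TopY).
w, h = 14720, 9200            # example values for a Bamboo 4x5
monitor_aspect = 2560 / 1600  # monitor width / height, e.g. 1.6

tablet_aspect = w / h
if monitor_aspect > tablet_aspect:
    # Monitor is wider than the tablet: keep the full width, give up some height.
    active_h = round(w / monitor_aspect)
    print("set TopY to", h - active_h, "(or BottomY to", active_h, ")")
else:
    # Monitor is narrower (or equal): keep the full height, give up some width.
    active_w = round(h * monitor_aspect)
    print("set TopX to", w - active_w, "(or BottomX to", active_w, ")")
```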
CommonCrawl
"position 10: if there are no degenerate conics. position 9: if they are a pair of real lines. position 7: if the only degenerate conic in the pencil is a double real line."

OK, I looked at the paper. I think the passage you are asking about is very confusingly written. What they appear to be saying is that for this specific value, $\alpha A + B$ is the (unique) degenerate conic in the pencil. So the plural "conics" in the first line and the pronoun "they" later on are incorrect. So to sum up:
1) the pencil is $\lambda A + B$, where $\lambda$ is a variable.
2) $\alpha A + B$ is a specific conic in the pencil, the unique degenerate one.
3) the referee did a bad job.

Assuming the conics meet in four points, $Ap_k = Bp_k = 0$, so that $(\alpha A + B)p_k = 0$ and all the conics in the pencil have these points in common. In particular, there are conics in the pencil that are degenerate, i.e. pairs of lines through the four points.
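To make the summary concrete, here is a small numerical sketch (mine, not from the paper or the answer above), assuming $A$ and $B$ are the symmetric $3 \times 3$ matrices of the two conics in homogeneous coordinates: the degenerate members of the pencil $\lambda A + B$ are exactly the values of $\lambda$ for which $\det(\lambda A + B) = 0$, a cubic in $\lambda$.

```python
import numpy as np

# Two conics as symmetric 3x3 matrices in homogeneous coordinates.
# A: x^2 + y^2 - 1 = 0 (unit circle); B: x*y = 0 (already a pair of lines).
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, -1.0]])
B = np.array([[0.0, 0.5, 0.0],
              [0.5, 0.0, 0.0],
              [0.0, 0.0, 0.0]])

# det(lambda*A + B) is a cubic polynomial in lambda; sample it and find its roots.
lams = np.linspace(-5, 5, 11)
coeffs = np.polyfit(lams, [np.linalg.det(l * A + B) for l in lams], 3)
print(np.roots(coeffs))  # each real root lambda gives a degenerate conic lambda*A + B
```

For this toy pencil the roots come out as $0$ and $\pm 1/2$, and indeed $\tfrac12 A + B$ is $(x+y)^2 = 1$, a pair of parallel lines.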
CommonCrawl
In India, there is a confusing term `non-bank financial company' (NBFC). This is an unfortunate phrase as the term, when taken literally, includes insurance companies, etc. In India, it denotes a $10 \times 2 \times 2$ classification of business models which are regulated by the RBI. There is a lot of confusion in the present regulatory treatment of these classes of firms. The existing levers of regulation are inappropriate, and it is not clear why RBI -- which should be about sound money and sound banking -- is doing all this work. These concerns are becoming particularly important in the context of the fintech revolution, where all kinds of new firms are being shoe-horned into NBFC regulation. Consumer protection. Financial firms generally require a layer of restrictions, that impact upon their dealings with customers, that improve fair play. These problems are heightened when the financial firm directly deals with unsophisticated individuals. Micro-prudential regulation. When a financial firm makes a high intensity promise to a consumer, generally there is a need for restrictions upon the risk-taking by the firm, to curtail the probability of firm failure. Such micro-prudential regulation is (in turn) motivated by consumer protection: we wish to improve how consumers are treated in their dealings with the financial firm. When a firm takes a deposit from a household, that requires micro-prudential regulation, but when the firm lends to a household, the household is quite comfortable with the prospect of firm default, and no micro-prudential regulation is required. Resolution. When a financial firm makes promises to consumers, or when a financial firm is systemically important, the conventional bankruptcy process (of IBC) is inadequate. A specialised bankruptcy process is required, which is run by the Resolution Corporation. Systemic risk regulation. The behaviour of firms needs to be restricted from the viewpoint of systemic risk. This is mostly about system thinking, and not looking at individual firms ("the woods and not the trees"). But one ("trees") element of this tends to be a reduced target failure probability for a few firms which are termed `systemically important'. Part XII does resolution (S.286 to S.310). This has morphed into the FRDI Bill. Part XIII does systemic risk regulation (S.311 to S.341). This treatment is non-sectoral. There is no special law which defines consumer protection for banks vs. consumer protection for mutual funds. All kinds of financial business is treated identically, within these four components. The advantage of non-sectoral law is that the law does not have to be modified when new business models are invented, or when multiple kinds of activities are undertaken under one roof. Now let's apply this thought process to what, in today's India, would be called an NBFC. To keep things simple, consider a company which finances itself using the bond market, has no unsophisticated consumers, and gives out loans to companies. How would we think about regulating this? Consumer protection: As this firm has no unsophisticated customers, this simplifies the problem of consumer protection. See Table 5.5 in FSLRC Volume 1. The protections that would have to be enforced are: professional diligence, unfair contract terms, unfair conduct, privacy, fair disclosure and redress. Micro prudential regulation: As this firm makes no promises to unsophisticated individuals, there is no need for micro-prudential regulation. 
The bond market is what will discipline the risk taking of this firm. This is similar to how the bond market shapes the leverage and access to debt capital of an ordinary non-financial firm. Resolution: Ordinary IBC processes will suffice to deal with failure. The bond market will reward more resolvable businesses with a lower cost of capital. Systemic risk regulation: Until the balance sheet becomes 1 per cent of GDP, i.e. $20 billion, the firm is not systemically important. By this logic, for most NBFCs, there is a need for a little bit of consumer protection and nothing else. Most of the existing edifice of NBFC regulation, which seems to be inspired by the regulation of banks, is not required. Enacting the Indian Financial Code addresses this situation at two levels. First, as described above, it gives a clear conceptual framework on how to think about financial regulation, without encoding business models into the law. Second, the FSLRC regulation-making process encourages the institutionalised application of mind. When mistaken ideas start out in the regulation-making process, there will be greater push back. The staff of financial agencies will rise to higher quality thinking when placed into the FSLRC regulation-making process. In a previous article, Renuka Sane and I wrote about the barriers faced for the Fintech Regulatory Sandbox. The question discussed here -- the problems associated with shoe-horning fintech into the NBFC framework -- connects integrally to that. Once a project is proven in the sandbox, it will come out into the regulation making process. If the concepts and principles of the regulation-making process have basic defects, this will hamper the working of the regulation-making process, and yield poor outcomes.
CommonCrawl
where $c$ is the number of classes. SAXVSM is described formally in Algorithm 13. Parameters $l$, $\alpha$ and $w$ are set through cross validation on the training data. Predictions are made using a 1-NN classification based on the word frequency distribution of the new case and the $tf\cdot idf$ vectors of each class. The Cosine similarity measure is used.
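The prediction step can be written down in a few lines. Below is an illustrative numpy sketch of that final step only (the array shapes and names are my own assumptions, not the notation of the source): each class has a $tf\cdot idf$ vector over the SAX words, a new case is reduced to its word-frequency vector, and the predicted class is the one with the highest Cosine similarity.

```python
import numpy as np

def predict(class_tfidf, case_tf):
    """class_tfidf: (c, m) matrix, one tf*idf row per class over m SAX words.
    case_tf: length-m word-frequency vector of the new case."""
    num = class_tfidf @ case_tf
    den = np.linalg.norm(class_tfidf, axis=1) * np.linalg.norm(case_tf) + 1e-12
    cosine = num / den
    return int(np.argmax(cosine))  # index of the best-matching class

# Toy example with c = 2 classes and m = 4 SAX words.
class_tfidf = np.array([[2.0, 0.0, 1.0, 0.0],
                        [0.0, 1.5, 0.0, 2.5]])
case_tf = np.array([3.0, 0.0, 1.0, 0.0])
print(predict(class_tfidf, case_tf))  # -> 0
```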
CommonCrawl
How does real-time signal processing work when the signal we deal with is assumed to be known for all times? One thing that never sat right with me after my signal processing course is that in order to do signal processing, we must know the entire signal $x(t)$. For instance, I cannot take the Fourier or Z transform of a single data point. Let's say however I am writing a program for a self-driving car. I need to make a decision as soon as the signals from the sensors come in. I don't have a full signal. At most I have one or two data points. How can I do signal processing in these real-life, real-time situations?

You never work with the full signal. Your $x(t)$ usually just considers the last N samples. That is why most systems work with a certain delay. Depending on the value of N (and your sample rate) this can be almost instantaneous from a human perspective.

One of the properties of a real-time DSP system is that it is causal. We just cannot look into the future, so in computing output sample $y[n]$, none of the future input samples $x[n+1], x[n+2], \ldots$ can be used in that computation. This means that only the present and past input samples $x[n], x[n-1], x[n-2], \ldots$ and past output samples $y[n-1], y[n-2], \ldots$ can be used in that calculation. If the system is LTI (and causal), that means that the impulse response $h[n]$ must be causal, i.e. equal to zero for all negative values of $n$: the impulse response cannot react before the driving impulse that causes the response. So even if the limits on the Fourier integral are $\pm \infty$, that does not mean that the quantity being integrated is anything other than zero for all times in the future.

If we have a finite window of real-world data, but wish to apply mathematical concepts which seem to require signals that are infinite in length or duration, many other assumptions are often applied: assuming that the data repeats periodically outside an FFT aperture, or assuming that the data is zero outside the data aperture, or assuming the data is arbitrary, or Gaussian random, or continuous with some number of finite derivatives, etc., outside the aperture, but that a finite-length rectangular (or other) windowing function has already been applied to that longer signal.
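As a concrete illustration of the "only the last N samples" point and of causality, here is a small Python sketch of my own (the filter values are made up): a causal FIR filter applied sample by sample, so that $y[n]$ depends only on $x[n], x[n-1], \ldots, x[n-N+1]$ and never on future inputs.

```python
import numpy as np
from collections import deque

h = np.array([0.25, 0.25, 0.25, 0.25])      # toy causal FIR impulse response, N = 4
buf = deque([0.0] * len(h), maxlen=len(h))  # the last N input samples, newest first

def process_sample(x_n):
    """Called as each sample arrives; uses only present and past inputs."""
    buf.appendleft(x_n)
    return float(np.dot(h, buf))  # y[n] = sum_k h[k] * x[n-k]

stream = [1.0, 2.0, 3.0, 4.0, 5.0]
print([process_sample(x) for x in stream])
# -> [0.25, 0.75, 1.5, 2.5, 3.5]: the output lags the input, but never looks ahead.
```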
CommonCrawl
Precision timing detectors for high energy physics experiments with temporal resolutions of a few tens of picoseconds are of pivotal importance to master the challenges posed by the highest energy particle accelerators. Calorimetric timing measurements have been a focus of recent research, enabled by exploiting the temporal coherence of electromagnetic showers. Scintillating crystals with high light yield as well as silicon sensors are viable sensitive materials for sampling calorimeters. Silicon sensors have very high efficiency for charged particles. However, their sensitivity to photons, which carry a large fraction of the energy in an electromagnetic shower, is limited. To enhance the efficiency of detecting photons, materials with higher atomic numbers than silicon are preferable. In this paper we present test beam measurements with a Cadmium-Telluride sensor as the active element of a secondary emission calorimeter, with a focus on the timing performance of the detector. A Schottky type Cadmium-Telluride sensor with an active area of 1 cm$^2$ and a thickness of 1 mm is used in an arrangement with tungsten and lead absorbers. Measurements are performed with electron beams in the energy range from 2 GeV to 200 GeV. A timing resolution of 20 ps is achieved under the best conditions.

Precision timing detectors for high energy physics experiments with temporal resolutions of a few tens of picoseconds are of pivotal importance to master the challenges posed by the highest energy particle accelerators. Calorimetric timing measurements have been a focus of recent research, enabled by exploiting the temporal coherence of electromagnetic showers. In this paper, we present results of studies of a calorimeter prototype using Cadmium-Telluride (CdTe) sensors as the active material. CdTe has been studied extensively in the context of thin film solar cells and has become a mature and wide-spread technology. It has also been used as a radiation detector for nuclear spectroscopy, and is known to have high quantum efficiency for photons in the x-ray range of the spectrum. This feature is of particular interest in the context of its use in calorimetry because it would be sensitive to secondary particles in the keV range, a significant component of the electromagnetic shower. Therefore, this first study of electromagnetic showers using CdTe sensors has the potential to yield new insight into the behavior of secondary particles produced within an electromagnetic shower with energies in the keV range, and to yield an improvement on the energy measurement due to the additional contribution of the higher energy x-ray photons to which previous calorimeters were not sensitive.

The recent interest in precision timing has resulted in new studies of the timing properties of silicon sensors. These studies have found a time resolution at the 20 ps level, provided a sufficiently large signal size, in a variety of applications ranging from calorimetry to charged particle detectors. The signal formation process in CdTe sensors is very similar to the process in silicon and has similar potential to yield precise timestamps. In this article, we study the signal response of the CdTe sensor to electromagnetic showers of varying energies and at different shower depths. We also study the timing performance of the CdTe sensors for electromagnetic showers.
The semi-conducting properties of Cadmium-Telluride have been studied for many decades, in particular in the context of using the material in photovoltaic applications. Cadmium-Telluride sensors are widely used in X-ray detectors. They have also been investigated for synchrotron radiation detectors in accelerator technology. In our previous studies we have demonstrated that increasing the primary sensor signal is crucial to achieve good timing resolutions. Cadmium-Telluride features a significantly larger efficiency for detecting photons in the 10 to 100 keV energy range compared to silicon sensors. The higher atomic number of Cadmium and Tellurium, averaging to about 48 for the compound bulk material, results in a higher interaction cross section for photons in this energy range. Photons with such energies are abundant in electromagnetic showers. Furthermore, CdTe sensors are available with thicknesses of 1 mm and more. The path length of the charged shower particles in the sensor material scales accordingly, resulting in a larger primary signal.

Our measurements were conducted with a CdTe Schottky type diode purchased from Acrorad. It is 1 cm$^2$ in transverse size and 1 mm thick. It was operated at a bias voltage of 700 V, and the dark current was between 3 nA and 6 nA depending on the environmental conditions in the test beam experimental zones. The sensor was placed in a box made of 0.3 mm copper sheets sealed with copper tape to shield against environmental noise. A broadband amplifier with a bandwidth of 1.5 GHz was used to amplify the signals from the sensor.

We performed the measurements at the H2 beamline of the CERN North-Area testbeam facility and the T9 beamline of the CERN East-Area testbeam facility. They provide secondary electron beams from the Proton Synchrotron (PS) and Super Proton Synchrotron (SPS) of energies ranging from 2 GeV to 200 GeV. The DAQ system uses a CAEN V1742 switched capacitor digitizer based on the DRS4 chip. Wire chambers are used to measure the position of each incident beam particle in the plane transverse to the beamline. A micro-channel plate photomultiplier (MCP-PMT) detector is used to provide a very precise reference timestamp. The precision of the time measurement for both types of MCP-PMTs is less than 10 ps.

Our initial results are encouraging and motivate future work on more detailed comparisons with simulation and more detailed measurements of transverse and longitudinal shower profiles. We have measured the rise time for signals in the Schottky type CdTe sensor diode to be about 1.3 ns, which makes the device suitable for precision timing applications. The large ionization signal yield we achieve with a 1 mm thick sensor is equally favorable for precision timing applications. We observe dependencies of the measured time on the geometric position of the beam particle impact point on the sensor, which may indicate differences in the charge collection dynamics. More detailed studies of this aspect are needed, and a more optimal design of the connection of the sensor readout is envisioned. Correcting for these dependencies yields time resolutions of 25 ps for a single layer CdTe sensor of transverse area 1 cm $\times$ 1 cm, uniformly sampled by the electromagnetic shower of electrons with energy above 100 GeV after 6 radiation lengths of tungsten and lead absorber. In the most favorable region of the sensor we observe time resolutions as low as 20 ps. These initial results are encouraging and motivate further in-depth studies in the future.
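As an aside on method: a time resolution like the 20-25 ps quoted above is typically extracted as the Gaussian width of the time-difference distribution between the sensor under test and the MCP-PMT reference, with the (small) reference contribution subtracted in quadrature. The snippet below is only a generic illustration of that procedure; it is not the analysis code of this paper, and all numbers in it are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
# Pretend per-event time differences (in ns): CdTe minus MCP-PMT reference.
dt = rng.normal(loc=0.0, scale=0.026, size=5000)   # invented 26 ps combined width

sigma_meas = np.std(dt)          # width of the delta-t distribution
sigma_ref = 0.010                # assumed MCP-PMT reference resolution, 10 ps
sigma_cdte = np.sqrt(sigma_meas**2 - sigma_ref**2)  # subtract in quadrature
print(f"CdTe time resolution ~ {sigma_cdte * 1e3:.0f} ps")
```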
CommonCrawl
Abstract: As robots become more ubiquitous and capable, it becomes ever more important to enable untrained users to easily interact with them. Recently, this has led to study of the language grounding problem, where the goal is to extract representations of the meanings of natural language tied to perception and actuation in the physical world. In this paper, we present an approach for joint learning of language and perception models for grounded attribute induction. Our perception model includes attribute classifiers, for example to detect object color and shape, and the language model is based on a probabilistic categorial grammar that enables the construction of rich, compositional meaning representations. The approach is evaluated on the task of interpreting sentences that describe sets of objects in a physical workspace. We demonstrate accurate task performance and effective latent-variable concept induction in physical grounded scenes.

Abstract: When a large feedforward neural network is trained on a small training set, it typically performs poorly on held-out test data. This "overfitting" is greatly reduced by randomly omitting half of the feature detectors on each training case. This prevents complex co-adaptations in which a feature detector is only helpful in the context of several other specific feature detectors. Instead, each neuron learns to detect a feature that is generally helpful for producing the correct answer given the combinatorially large variety of internal contexts in which it must operate. Random "dropout" gives big improvements on many benchmark tasks and sets new records for speech and object recognition.

This paper introduced Dropout, a new layer type. It has a parameter $\alpha \in (0, 1)$. The output dimensionality of a dropout layer is equal to its input dimensionality. With a probability of $\alpha$, any neuron's output is set to 0. At testing time, the output of all neurons is multiplied by $1 - \alpha$ to compensate for the fact that no output is set to 0 there. A much better paper, by the same authors but 2 years later, is [Dropout: a simple way to prevent neural networks from overfitting](http://www.shortscience.org/paper?bibtexKey=journals/jmlr/SrivastavaHKSS14). Dropout can be interpreted as training an ensemble of many networks, which share weights. It was notably used by [ImageNet Classification with Deep Convolutional Neural Networks](http://www.shortscience.org/paper?bibtexKey=krizhevsky2012imagenet).

Abstract: After a more than decade-long period of relatively little research activity in the area of recurrent neural networks, several new developments will be reviewed here that have allowed substantial progress both in understanding and in technical solutions towards more efficient training of recurrent networks. These advances have been motivated by and related to the optimization issues surrounding deep learning. Although recurrent networks are extremely powerful in what they can in principle represent in terms of modelling sequences, their training is plagued by two aspects of the same issue regarding the learning of long-term dependencies. Experiments reported here evaluate the use of clipping gradients, spanning longer time ranges with leaky integration, advanced momentum techniques, using more powerful output probability models, and encouraging sparser gradients to help symmetry breaking and credit assignment.
Abstract: After a more than decade-long period of relatively little research activity in the area of recurrent neural networks, several new developments will be reviewed here that have allowed substantial progress both in understanding and in technical solutions towards more efficient training of recurrent networks. These advances have been motivated by and related to the optimization issues surrounding deep learning. Although recurrent networks are extremely powerful in what they can in principle represent in terms of modelling sequences, their training is plagued by two aspects of the same issue regarding the learning of long-term dependencies. Experiments reported here evaluate the use of clipping gradients, spanning longer time ranges with leaky integration, advanced momentum techniques, using more powerful output probability models, and encouraging sparser gradients to help symmetry breaking and credit assignment. The experiments are performed on text and music data and show off the combined effects of these techniques in generally improving both training and test error.

* Recurrent Neural Networks (RNNs) are very powerful at modelling sequences but they are not good at learning long-term dependencies.
* The paper discusses the reasons behind this difficulty and some suggestions to mitigate it.
* RNNs form a deterministic state variable $h^t$ as a function of the input observation and the previous state, with learnable parameters deciding what will be remembered about the past sequence.
* When the computations performed by an RNN are unfolded through time, a deep neural network with shared weights is realised. The cost function of this deep network depends on the output of the hidden layers.
* Gradient descent updates can "explode" (become very large) or "vanish" (become very small).
* **Clip Gradient** - when the norm of the gradient vector $g$ is above a threshold, the update is done in the direction of $\text{threshold} \cdot g/\lVert g \rVert$ (see the sketch after this list). This normalisation implements a simple form of second-order normalisation (the second-order derivative will also be large in regions of exploding gradient).
* **Leaky integration** - different values of $\alpha$ allow a different amount of the previous state to "leak" through the unfolded layers further in time. This simply expands the time-scale of the vanishing gradients and does not totally remove them.
* Use **output probability models** like a Restricted Boltzmann Machine or NADE to capture higher-order dependencies between variables in the case of multivariate prediction.
* By using **rectifier non-linearities**, the gradient on hidden units becomes sparse, and these sparse gradients help the hidden units to specialise. The basic idea is that if the gradient is concentrated in fewer paths (in the unfolded computational graph), the vanishing-gradient effect is limited.
* A **simplified Nesterov momentum** rule is proposed to allow storing past velocities for a longer time while actually using these velocities more conservatively. The new formulation is also easier to implement.
* SGD with these optimisations outperforms vanilla SGD.
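The gradient-clipping rule in the list above is easy to state in code; here is a minimal NumPy sketch (my own illustration, with an arbitrary threshold value, not code from the paper).

```python
import numpy as np

def clip_gradient(g, threshold=5.0):
    """Rescale gradient g so that its norm is at most `threshold`.

    If ||g|| exceeds the threshold, the update direction is kept but its
    length is reduced, i.e. g <- threshold * g / ||g||.
    """
    norm = np.linalg.norm(g)
    if norm > threshold:
        g = g * (threshold / norm)
    return g

g = np.array([3.0, 4.0])          # norm 5
print(clip_gradient(g, 2.5))      # [1.5, 2.0], norm 2.5
```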
Abstract: While high-level data parallel frameworks, like MapReduce, simplify the design and implementation of large-scale data processing systems, they do not naturally or efficiently support many important data mining and machine learning algorithms and can lead to inefficient learning systems. To help fill this critical void, we introduced the GraphLab abstraction which naturally expresses asynchronous, dynamic, graph-parallel computation while ensuring data consistency and achieving a high degree of parallel performance in the shared-memory setting. In this paper, we extend the GraphLab framework to the substantially more challenging distributed setting while preserving strong data consistency guarantees. We develop graph based extensions to pipelined locking and data versioning to reduce network congestion and mitigate the effect of network latency. We also introduce fault tolerance to the GraphLab abstraction using the classic Chandy-Lamport snapshot algorithm and demonstrate how it can be easily implemented by exploiting the GraphLab abstraction itself. Finally, we evaluate our distributed implementation of the GraphLab abstraction on a large Amazon EC2 deployment and show 1-2 orders of magnitude performance gains over Hadoop-based implementations.

* The GraphLab abstraction exposes an asynchronous, dynamic, graph-parallel computation model in the shared-memory setting.
* This paper extends the abstraction to the distributed setting.
* [Link](http://vldb.org/pvldb/vol5/p716_yuchenglow_vldb2012.pdf) to the paper.
* Sometimes computation requires modeling dependencies between data, e.g. modeling dependencies between similar users for the recommendation use case.
* In many cases, asynchronous procedures outperform synchronous ones, e.g. linear systems, belief propagation, stochastic optimization, etc.
* Iterative computation converges asymmetrically, and convergence can be accelerated by dynamic scheduling, e.g. do not update parameters that have already converged.
* Ensuring that all parallel executions have an equivalent serial execution is desirable for both correctness and faster convergence.
* The program state is stored as a directed graph: **G = (V, E, D)**, where D is the user-defined data (model parameters, algorithm state, statistical data, etc.). The graph data **D** is mutable, but the structure of the graph **(V, E)** is immutable.
* The *update* function is a stateless procedure that modifies the data within the scope of a vertex and schedules the execution of the *update* function on other vertices.
* **Scope** of a vertex ($S_v$) - the data corresponding to the vertex, its edges and its adjacent vertices.
* **update**: $f(v, S_v) \rightarrow (S_v, T)$, where T is the set of vertices where the *update* function is scheduled to be invoked.
* Scheduling of computation is decoupled from the movement of data and no message passing is required between vertices.
* The input to the model is G and T, the initial set of vertices to be updated. During each step, a vertex is extracted from T, updated, and a set of vertices is added to T (for future computation).
* Vertices in T can be executed in any order, with the only constraint that all vertices be eventually executed.
* A sync operation runs in the background to maintain global aggregates concurrently. These global values are read by the *update* function and written by the sync operation.
* Consistency models: full consistency gives full read/write access in the *scope*, and the scopes of concurrently updating vertices cannot overlap; edge consistency gives read/write access on the vertex and the adjacent edges but only read access to adjacent vertices; vertex consistency gives write access to the vertex and read access to adjacent edges and vertices, so all vertices can run the update function simultaneously.
* A two-phase partitioning process load-balances the graph on an arbitrary cluster size. In the first phase, partition the graph into k parts (k >> number of machines). Each part, called an **atom**, is a file of graph generating commands. An atom also stores information about **ghosts** (the set of vertices and edges adjacent to the partition boundary). An atom index file contains the connectivity structure and file locations for the k atoms as a meta-graph. In the second phase, this meta-graph is partitioned over the physical machines.
* A vertex coloring (no adjacent vertices have the same color) is constructed to serialize parallel execution of dependent tasks (in our case, vertices in the graph). For the edge consistency model, execute all vertices of the same color before going to the next color and run the sync operation between color steps. Changes to ghost vertices and edges are communicated asynchronously as they are made. Vertex consistency is trivial - assign the same color to all the vertices.
* For distributed locking, reader-writer locks are associated with each vertex. Each machine can update only its local vertices. The ghosting system uses caching to eliminate waits on remote, unchanged data. Lock requests and synchronization are pipelined to hide network latency.
* Each machine maintains a pipeline of vertices for which locks have been requested but not yet granted. A vertex is executed once lock acquisition and data synchronization are complete. Non-blocking reader-writer locks, which work through callback functions, are used.
* Synchronous checkpointing suspends computation to save all data modified since the last checkpoint. Asynchronous checkpointing, based on the Chandy-Lamport snapshot algorithm, is better; the snapshot step becomes an *update* function in the GraphLab abstraction.
* One instance of GraphLab runs on each machine. These processes are symmetric and communicate via RPC. The first process additionally acts as the master and computes the placement of atoms based on the atom index. Each process maintains a local scheduler (for its vertices) and a cache to access remote data. A distributed consensus algorithm is used to decide when all the schedulers are empty.
* The biggest strength of the paper is its extensive experiments. GraphLab benefits from the use of background asynchronous communication and pipelined locking, but its communication layer is not as efficient as MPI's communication layer.

Lee, George and Lin, Jimmy J. and Liu, Chuang and Lorek, Andrew and Ryaboy, Dmitriy V. The [paper](http://vldb.org/pvldb/vol5/p1771_georgelee_vldb2012.pdf) presents Twitter's logging infrastructure, how it evolved from application-specific logging to a unified logging infrastructure, and how session-sequences are used as a common-case optimization for a large class of queries.

Twitter uses **Scribe** as its messaging infrastructure. A Scribe daemon runs on every production server and sends log data to a cluster of dedicated aggregators in the same data center. Scribe itself uses **Zookeeper** to discover the hostname of the aggregator. Each aggregator registers itself with Zookeeper, and the Scribe daemon consults Zookeeper to find a live aggregator to which it can send the data. Colocated with the aggregators is the staging Hadoop cluster, which merges the per-category stream from all the server daemons and writes the compressed results to HDFS. These logs are then moved into the main Hadoop data warehouse and are deposited in per-category, per-hour directories (e.g. /logs/category/YYYY/MM/DD/HH). Within each directory, the messages are bundled in a small number of large files and are partially ordered by time.

Twitter uses **Thrift** as its data serialization framework, as it supports nested structures and was already being used elsewhere within Twitter. A system called **Elephant Bird** is used to generate Hadoop record readers and writers for arbitrary Thrift messages. Production jobs are written in **Pig(Latin)** and scheduled using **Oink**.

Initially, all applications defined their own custom formats for logging messages. While this made it easy to develop application logging, it had many downsides as well:
* Inconsistent semantics associated with each category name, causing a resource discovery problem.
* Inconsistent formats of log messages.

All these issues make it difficult to reconstruct user session activity. The unified client-event logging is an effort within Twitter to develop a single logging framework that gets rid of all the issues discussed previously. A hierarchical, six-level schema is imposed on all the events. For example, the event `web:home:mentions:stream:avatar:profile_click` is logged whenever there is an image profile click on the avatar of a tweet in the mentions timeline for a user on twitter.com (read from right to left).
The alternative design was a tree-based model for logging client events. That model allowed for an arbitrarily deep event namespace with as fine-grained logging as required, but the fixed six-level client events model was chosen to make the top-level aggregate queries easier. A client event is a Thrift structure with a fixed set of components.

Benefits of the unified format:
* All log messages share a common format with clear semantics.
* All log messages are stored in a single place.

Two problems remain:
* A lot of brute-force scans.
* Large group-by operations needed to reconstruct user sessions.

Possible remedies:
* Reorganize complete Thrift messages by reconstructing user sessions - this solves the second problem but not the first.
* Use a columnar storage format - this addresses the first issue, but it just reduces the time taken by the mappers and not the number of mappers itself.

The materialized session sequences are much smaller than the raw client event logs (around 50 times smaller) and address both issues. To enhance the accessibility of the client event logs, an automatically generated event data catalog is used along with a browsing interface that allows users to browse, search and access sample entries for the various client events. (These sample entries are the same entries that were mentioned in the previous section.) The catalog is rebuilt every day and is always up to date.

Applications of session sequences:
* Summary statistics - session sequences are used to compute various statistics about sessions.
* Event counting - used to understand what fraction of users take advantage of a particular feature.
* Funnel analytics - used to focus on user attention in a multi-step process like the signup process.
* User modeling - used to identify "interesting" user behavior. N-gram models (from the NLP domain) can be extended to measure how important temporal signals are by modeling user behavior on the basis of the last n actions. The paper also mentions the possibility of extracting "activity collocations" based on the notion of collocations.

Session sequences are limited in the sense that they capture only the event name and exclude other details. The solution adopted by Twitter is to use a generic indexing infrastructure that integrates with Hadoop at the level of InputFormats. The indexes reside with the data, making it easier to reindex the data. An alternative would have been to use **Trojan layouts**, which embed indexing in the HDFS block header, but this means that indexing would require the data to be rewritten. Another possible extension would be to leverage more analogies from the field of Natural Language Processing. This would include the use of automatic grammar induction techniques to learn a hierarchical decomposition of user activity. Another area of exploration is around leveraging advanced visualization techniques for exploring sessions and mapping interesting behavioral patterns into distinct visual patterns that can be easily recognized.

Abstract: We present the architecture behind Twitter's real-time related query suggestion and spelling correction service. Although these tasks have received much attention in the web search literature, the Twitter context introduces a real-time "twist": after significant breaking news events, we aim to provide relevant results within minutes. This paper provides a case study illustrating the challenges of real-time data processing in the era of "big data".
We tell the story of how our system was built twice: our first implementation was built on a typical Hadoop-based analytics stack, but was later replaced because it did not meet the latency requirements necessary to generate meaningful real-time results. The second implementation, which is the system deployed in production, is a custom in-memory processing engine specifically designed for the task. This experience taught us that the current typical usage of Hadoop as a "big data" platform, while great for experimentation, is not well suited to low-latency processing, and points the way to future work on data analytics platforms that can handle "big" as well as "fast" data.

Twitter's Real-Time Related Query Suggestion Architecture. The paper tells the story behind the architecture that supports Twitter's real-time related query suggestions, why the architecture had to be designed twice, and what lessons can be learned from this exercise. It does not talk much about the algorithms; rather, it talks about the different design decisions that led to the current architecture — the focus is not on how things were done but why they were done a certain way.

Twitter has an interesting use case — search assistance — which boils down to things like a user searching for "Obama" and getting results for related queries like "White House" as well. Spelling correction is also a part of search assistance. The problem is quite well researched from the data-volume perspective, but in Twitter's context velocity mattered as much as volume. The results had to adapt to rapidly evolving global conversations in real-time — where real-time loosely translates to a target latency of 10 minutes. The real-time sense is important in Twitter's context, where "relevance" has a temporal aspect as well. For example, after the Nepal earthquake in 2015, the query "Nepal" led to results related to "earthquake" as well. Then, when the new constitution was passed in Nepal, the query "Nepal" led to results related to "constitution". The time frame in which suggestions have maximum impact is very narrow. A suggestion made too early would seem irrelevant, while a suggestion made too late would seem obvious and hence less impactful. These fast-moving signals have to be mixed with slow-moving ones for insightful results. Sometimes the query volume is very low, in which case a longer observation period is needed before suggestions can be made. Twitter noticed that 17% of the top 1000 query terms churn on an hourly basis, meaning they are no longer in the top 1000 after an hour. Similarly, around 13% of the top 1000 query terms are churned out every day. This suggested that fine-grained tracking of search terms was needed.

Twitter started with a basic idea: if two queries are seen in the same context, then they are related. This still leaves a large open design space. For example, context can be defined by the user's search session, or a tweet, or both. Measures like log likelihood, the chi-square test, etc. can be used to quantify how often two queries appear together. To account for the temporal effect, counts are decayed over time. Finally, Twitter has to combine these factors, and some more, to come up with a ranking mechanism. This paper does not focus on which algorithms were chosen for these tasks; it focuses on how an end-to-end system was created.

Twitter has a powerful petabyte-scale Hadoop-based analytics platform. Both real-time and batch processes write data to the Hadoop Distributed File System (HDFS).
These include bulk exports from databases, application logs, and many other sources. Contents are serialized using either Protocol Buffers or Thrift, and LZO-compressed. There is a workflow manager, called Oink, which schedules recurring jobs and handles dataflow dependencies between jobs. For example, if job B requires data generated by job A, A will be scheduled first. Twitter wanted to take advantage of this stack, and the first version was deployed in the form of a Pig script, on top of the existing stack, that aggregated user search sessions to compute term and co-occurrence statistics and ranked related queries. While the results were pretty good, the latency was too high: results were not available for several hours.

Twitter uses Scribe to aggregate streaming log data in an efficient manner. These logs are rich with user interactions and are used by the search assistant. A Scribe daemon runs on each production server, where it collects and sends local log data (consisting of a category and a message) to a cluster of aggregators which are co-located with a staging Hadoop cluster. This cluster merges per-category streams from the server daemons and writes the results to the HDFS of the staging cluster. These logs are then transformed and moved to the main Hadoop data warehouse in hourly chunks. These log messages are put in per-category, per-hour directories and are bundled in a small number of large files. Only now can the search assistant start its computations. The hierarchical aggregation is required to "roll up" data into a few large files, as HDFS is not good at handling large numbers of small files. As a result, there is a huge delay from when the logs are generated to when they are available for processing. Twitter estimated that they could bring the latency down to tens of minutes by re-engineering their stack, though even that would be too high.

Hadoop is not meant for latency-sensitive jobs. For example, a large job could take tens of seconds just to start up — irrespective of the amount of data crunched. Moreover, the Hadoop cluster was a shared resource across Twitter. Using a scheduler (in this case, FairScheduler) is not the ideal solution, as the focus is on a predictable end-to-end latency bound and not on resource allocation. Lastly, the job completion time depends on stragglers. For some scenarios, a simple hash partitioning scheme created chunks of "work" with varying sizes. This led to widely varying running times for different map-reduce jobs. For scripts that chain together Hadoop jobs, the slowest task becomes the bottleneck. Just like with log imports, Twitter estimated the best-case scenario for computing query suggestions to be of the order of ten minutes.

Starting with the Hadoop stack had many advantages: a working prototype was built quickly and ad hoc analysis could be done easily. This also helped them understand the query churn and make some important observations about which factors to use in the search assistant. For example, Twitter discovered that only two sources of context — search sessions and tweets — were good enough for an initial implementation. But due to the high latency, Twitter had to restrict this solution to the experimental stage.

1. Frontend Nodes — These are lightweight in-memory caches which periodically read fresh results from HDFS. They are implemented as a Thrift service, and can be scaled out to handle increased query load.
2. Backend Nodes — These nodes perform the real computations.
The backend processing engine is replicated but not sharded. Every five minutes, computed results are persisted to HDFS, and every minute the frontend caches poll a known HDFS location for updated results. ZooKeeper is used for automatic resource discovery and robust failover.

3. Rankers — Periodically execute one or more ranking algorithms, reading raw features from the in-memory stores.

The in-memory stores are:
1. Sessions store: keeps track of (anonymized) user sessions observed in the query hose and, for each session, the history of the queries issued in a linked list. Sessions older than a threshold are discarded. Metadata is also tracked separately.
2. Query statistics store: retains up-to-date statistics, like the session count, about individual queries. These also include a weighted count based on a custom scoring function. This function captures things like the fact that the association between two consecutively typed queries is stronger than that between two consecutively clicked hashtags. These weights are periodically decayed to reflect decreasing importance over time (see the sketch below). It also keeps additional metadata about the query, like its language.
3. Query co-occurrence statistics store: keeps track of pairs of co-occurring queries. Weighting and decaying are applied as in the query statistics store.

**Query Flow** — As a user query flows through the query hose, query statistics are updated in the query statistics store, the query is added to the sessions store, and some old queries may be removed. For each previous query in the session, a query co-occurrence is formed with the new query, and the statistics in the query co-occurrence statistics store are also updated.

**Tweet Flow** — As a tweet flows through the firehose, its n-grams are checked to determine whether they are query-like or not. All matching n-grams are processed just like the query above, except that the "session" is the tweet itself.

Old entries are periodically pruned to control the overall memory footprint of the service; even user sessions with no recent activity are pruned.

**Ranking cycles** — Rankers are triggered periodically to generate suggestions for each query based on the various accumulated statistics. The top results are then persisted to HDFS.

Challenges:
1. Since there is no sharding, each instance of the backend processing engine must consume the entire firehose and query hose to keep up with the incoming data.
2. The memory footprint for retaining the various statistics, without any pruning, is very large. But if the footprint is reduced, by say pruning, the quality and coverage of the results may be affected. Another approach could be to store less session history and decay the weights more aggressively, though that may again affect the quality of the results.
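To make the "weighted counts with periodic decay" idea concrete, here is a small Python sketch of an exponentially decayed co-occurrence counter. The class name, decay factor, pruning threshold and data structures are my own illustrative assumptions, not Twitter's implementation.

```python
from collections import defaultdict

class DecayedCooccurrenceCounter:
    """Toy decayed counter for query co-occurrences.

    Each observed pair gets weight 1.0; on every decay cycle all weights
    are multiplied by `decay`, so older co-occurrences gradually lose
    influence on the ranking.
    """
    def __init__(self, decay=0.5):
        self.decay = decay
        self.weights = defaultdict(float)

    def observe(self, query_a, query_b, weight=1.0):
        self.weights[(query_a, query_b)] += weight

    def decay_all(self):
        # Called periodically, e.g. once per ranking cycle.
        for pair in list(self.weights):
            self.weights[pair] *= self.decay
            if self.weights[pair] < 1e-6:      # prune negligible entries
                del self.weights[pair]

    def top_related(self, query, k=5):
        related = [(b, w) for (a, b), w in self.weights.items() if a == query]
        return sorted(related, key=lambda t: -t[1])[:k]

counter = DecayedCooccurrenceCounter(decay=0.5)
counter.observe("nepal", "earthquake")
counter.decay_all()
counter.observe("nepal", "constitution")
print(counter.top_related("nepal"))   # "constitution" now outweighs "earthquake"
```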
A lot of systems exist which solve the problem in pieces, e.g. message queues like Kafka for moving data in real time and Facebook's ptail for running Hadoop operations in real time, but there is no end-to-end general data platform that can adapt itself to perform analytics on both short- and long-term data and combine their results according to the latency bounds of different contexts.

This paper is about estimating the complexity of a distributed branch-and-bound algorithm over graphical models. The paper proposes a distributed/parallel Branch-and-Bound algorithm and evaluates its efficiency for load balancing. The ability to search multiple subproblems of the same problem in parallel would speed up the search significantly. However, balancing the parallel search load is important for efficiency. This paper does a complete case study of learning how to parallelize And/Or-graph Branch-and-Bound search. They use a set of features collected from the graph (static) or from the search problem (dynamic) and evaluate each of the three learning cases: per problem instance, per problem class, and across problem classes, using linear and non-linear regression methods. They extensively evaluate all the possible combinations and show the pros and cons of using each type of learning.

Pros: The paper is well written, covers all the possibilities for learning, and has a good discussion that educates the reader. The evaluations are extensive and conclusive. The literature review is complete.

Cons: From a practicality point of view, the bottleneck problem is not discussed: when would parallelizing AOGBB be bad? Also, the motivation for choosing the features that they used is not given: are they a standard set of features? (The good point is that they showed that some of the features were more important than others based on their experiments.)
If $M = N = 64$, we'll have $64$ basis functions. These functions have two variables, $p$ and $q$. Now, to verify, I take the first basis function, with $m = n = 0$. Clearly, this function changes with different values of $p$ and $q$. Why is it drawn with the same color in the following image (the upper-left gray square)?

Because the upper-left square corresponds to the basis function with $m = 0$ and $n = 0$, and $\cos(\alpha) = 1$ if $\alpha = 0$, the value is the same constant for every $p$ and $q$, which is why it is all gray. Basically, as you increment the frequency indices $m$ and $n$ you get basis images that oscillate more in either the vertical or horizontal direction.
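A small NumPy sketch (my own illustration, not from the answer) that builds an un-normalised 2D DCT-II basis image for frequency indices $(m, n)$ and confirms that the $(0, 0)$ basis image is constant; the $8 \times 8$ size and the omission of the normalisation factors are assumptions made for brevity.

```python
import numpy as np

def dct_basis_image(m, n, M=8, N=8):
    """Un-normalised 2D DCT-II basis image for frequency indices (m, n).

    Entry (p, q) is cos(pi*(2p+1)*m / (2M)) * cos(pi*(2q+1)*n / (2N)).
    """
    p = np.arange(M).reshape(-1, 1)
    q = np.arange(N).reshape(1, -1)
    return np.cos(np.pi * (2 * p + 1) * m / (2 * M)) * \
           np.cos(np.pi * (2 * q + 1) * n / (2 * N))

print(dct_basis_image(0, 0))   # all ones: the constant (gray) basis image
print(dct_basis_image(0, 1))   # oscillates along q (the horizontal direction)
```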
Abstract: Let $X$ be a compact connected Abelian group. It is well known that there then exist topological automorphisms $\alpha_j, \beta_j$ of $X$ and independent random variables $\xi_1$ and $\xi_2$ with values in $X$ and distributions $\mu_1, \mu_2$ such that the linear forms $L_1 = \alpha_1\xi_1 + \alpha_2\xi_2$ and $L_2 = \beta_1\xi_1 + \beta_2\xi_2$ are independent, but $\mu_1$ and $\mu_2$ are not represented as convolutions of Gaussian and idempotent distributions. In other words, in this case even a weak analogue of the Skitovich-Darmois theorem does not hold. We prove that there exists a compact connected Abelian group $X$ such that if we consider three linear forms of three independent random variables taking values in $X$, and the linear forms are independent, then at least one of the distributions is idempotent.
What does this notation stand for when describing a code? This code has appeared in some online course material.

$Z_2^5$ means that you are working in $GF(2)^5$. In $GF(2)$, addition is given by $0 + 0 = 0\\ 0 + 1 = 1\\ 1 + 0 = 1\\ 1 + 1 = 0$ It is equivalent to XOR. Multiplication is given by $0 \times 0 = 0\\ 0 \times 1 = 0\\ 1 \times 0 = 0\\ 1 \times 1 = 1$ It is equivalent to AND. The exponent $5$ is the dimension of the space (or the size of the vectors): here you are using a space of dimension 5, hence 5 coordinates.
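A tiny Python sketch, purely to illustrate the answer above: addition in $GF(2)$ is XOR, multiplication is AND, and elements of $Z_2^5$ are length-5 bit vectors added coordinate-wise. The function names are my own.

```python
def gf2_add(a, b):
    return a ^ b          # addition mod 2 is XOR

def gf2_mul(a, b):
    return a & b          # multiplication mod 2 is AND

# Elements of Z_2^5 are length-5 bit vectors; addition is coordinate-wise.
u = [1, 0, 1, 1, 0]
v = [0, 1, 1, 0, 1]
print([gf2_add(x, y) for x, y in zip(u, v)])   # [1, 1, 0, 1, 1]
```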
The DisturbLabel paper proposes yet another regularization technique to prevent overfitting. Unlike other regularization methods such as weight decay, dropout, or DropConnect, DisturbLabel regularizes the loss. The TLDR version of the CVPR 2016 paper is: deliberately add noise (wrong labels) to a small portion of the ground-truth labels in each minibatch during training, and this helps reduce overfitting. As a result, the generalization of the network improves and so does the test-set accuracy. Adding label noise contributes to: 1. A noisy loss. 2. A noisy gradient backpropagation. The authors state that DisturbLabel is equivalent to combining a large number of models trained with different noisy data. $$ \text{ground-truth label} = [0, 0, 1, 0, 0, 0, 0, 0, 0, 0] $$ $$ P(\alpha) = [0.01, 0.01, 0.91, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01] $$ So the new label for that particular example is drawn from $P(\alpha)$, e.g. with `numpy.random.choice`. Consequently, the label will be the same as the ground-truth label most of the time, but occasionally during training the example is assigned a wrong label. In other words, if $\alpha=10$, you would expect that roughly 10% of the labels are assigned a wrong label in each minibatch. ### effect of noise: - The algorithm has better performance for small noise rates such as $\alpha=10, 20$. The algorithm's performance degrades for larger noise rates such as $\alpha=50$.
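A hedged NumPy sketch of the label-disturbing step described above; the function name and the 10-class setup are illustrative assumptions, not code from the paper.

```python
import numpy as np

def disturb_label(true_label, num_classes=10, alpha=0.10):
    """Return a possibly-disturbed label, as in the summary above.

    With probability 1 - alpha + alpha/C the ground-truth label is kept;
    each of the other C - 1 classes is drawn with probability alpha/C.
    For C = 10 and alpha = 0.1 this is exactly the [0.01, ..., 0.91, ..., 0.01]
    distribution P(alpha) shown above.
    """
    probs = np.full(num_classes, alpha / num_classes)
    probs[true_label] = 1.0 - alpha + alpha / num_classes
    return np.random.choice(num_classes, p=probs)

labels = [disturb_label(2) for _ in range(20)]   # mostly 2, occasionally a wrong class
print(labels)
```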
"When you have eliminated the impossible, whatever remains, however improbable, must be the truth." Thus Arthur Conan Doyle has Sherlock Holmes describe a crucial part of his method of solving detective cases. Sherlock Holmes often takes pride in adhering to principles of scientific reasoning. Whether or not this particular element of his analysis can be called scientific is not straightforward to decide, however. Do scientists use 'no alternatives arguments' of the kind described above? Is it justified to infer a theory's truth from the observation that no other acceptable theory is known? Can this be done even when empirical confirmation of the theory in question is sketchy or entirely absent? In recent years, formal and empirical approaches have become central to study group decision-making. Social network analysis, agent-based modeling and simulation techniques are meanwhile widely used not only in sociology, political science, social psychology, and economics. Also philosophers increasingly point to the potentials of these approaches for addressing questions in political and moral philosophy, formal epistemology, and philosophy of science more generally. Whereas the importance of social dynamics and network structures for investigating into decision-making has been largely acknowledged, the application and results of these novel approaches raise a number of philosophical issues that have not yet been discussed in the literature. This workshop will bring together social scientists, philosophers, decision theorists, and psychologists to explore and discuss the potentials and limitations of these approaches for scientific practice and philosophy alike. We invite submissions of a title, a short abstract of 100 words and an extended abstract of 1000 words by 25 May, 2014. We anticipate that there will be space for about four contributed talks. Find all the details on the event organized by the Munich Center for Mathematical Philosophy (MCMP) and CAS - Center for Advanced Studies LMU online here: http://www.dgn2014.philosophie.uni-muenchen.de/index.html. In the past two decades, agent-based models (ABMs) have become ubiquitous in philosophy and various sciences. ABMs have been applied, for example, to study the evolution of norms and language, to understand migration patterns of past civilizations, to investigate how population levels change in ecosystems over time, and more. In contrast with classical economic models or population-level models in biology, ABMs are praised for their lack of assumptions and their flexibility. Nonetheless, many of the methodological and epistemological questions raised by ABMs have yet to be fully articulated and answered. For example, there are unresolved debates about how to test (or "validate") ABMs, about the scope of their applicability in philosophy and the sciences, and about their implications or our understanding of reduction, emergence, and complexity in the sciences. This conference aims to bring together an interdisciplinary group of researchers aimed at understanding the foundations of agent-based modeling and how the practice can inform and be informed by philosophy. We invite submissions of extended abstracts of 750-1000 words for contributed talks by 1 June 2014. Decisions will be made by 15 June 2014. (1) Are bridge laws allowed in the scrutability base, and if so does this trivialize scrutability theses? Bridge laws are certainly not disallowed from the base in general (indeed, I'd have psychophysical bridge laws in my own base). 
When I said that bridge laws were not allowed in the base, I was discussing a specific scrutability thesis: microphysical scrutability (where the base must be microphysical truths alone). On the other hand, building in separate bridge laws for water, kangaroos, and everything else will lead to a non-compact scrutability base. So there's no trivialization of the central compact scrutability thesis here. (2) Is Carnap's $\omega$-rule powerful enough to yield scrutability of mathematical truths? My discussion of the $\omega$ rule is intended to illustrate my response to the godelian objection to the scrutability of mathematical truths, rather than a general account of the knowability of mathematical truths. It's an example of an idealized infinitary process that can get around godelian limitations. The $\omega$-rule suffices to settle first-order arithmetical truths but of course other infinitary methods will be needed in other domains. It's just false that inference rules assume the knowability of their premises, so there's no trivialization here. (3) Is there a circularity in nomic truths being scrutable from microphysical truths and vice versa? If one distinguishes ramsified and nonramsified versions of microphysical truths, any apparent circularity disappears. non-ramsified microphysical truths are scrutable from ramsified causal/nomic truths, which are scrutable from ramsified microphysical truths (including microphysical laws). (4) What about Newman's and Scheffler's problems? The "contemporary Newman problem" isn't a problem for my thesis, as my ramsification base isn't an observational base. As for Scheffler's problem: my first reaction (though this really is quick) is that Scheffler's example involves either ramsifying a trivial theory or giving an incomplete regimentation (and then ramsification) of a nontrivial theory. If those material conditionals really constitute the whole content of the theory (and the theory gives the whole content of the relevant theoretical term), then it's trivial in the way suggested. If the theory is formulated more completely e.g. with nomic or causal conditionals, the objection won't arise. Certainly the problem won't arise for the Ramsey sentences that my procedure yields. (5) Why think special science truths are scrutable? The arguments for scrutability of special science truths are in Chapters 3 and 4 (supplemented by 6), which are not discussed in the critical notice. The excursus on the unity of science is not intended as a primary argument for scrutability of special science truths. Rather, it is connecting the scrutability thesis to the unity/reduction literature and making the case that the thesis is a weak sort of unity/reduction thesis that survives common objections to unity or reduction theses. Congress of Logic, Methodology and Philosophy of Science (CLMPS 2015). (the abstract submission for Logic Colloquium 2015 opens in early 2015). The congress will host six plenary lectures and several invited lectures. the abstracts well suited for IUHPS themes. in total 30 minutes (20 min for the presentation + 10 min for the discussion). of this paper must submit the paper and be registered as a participant). Constructing the World. Oxford University Press, 2012. Raatikainen, P. 2014. "Chalmers Blueprint of the World", International Journal of Philosophical Studies 22 (1):113-128. 
This is the third in a series of three posts in which I rehearse what I hope to say at the Author Meets Critics session for Lara Buchak's tremendous* new book Risk and Rationality at the Pacific APA in a couple of weeks. The previous two posts are here and here. In the first post, I gave an overview of risk-weighted expected utility theory, Buchak's alternative to expected utility theory. In the second post, I gave a prima facie reason for worrying about any departure from expected utility theory: if an agent violates expected utility theory (perhaps whilst exhibiting the sort of risk-sensitivity that Buchak's theory permits), then her preferences amongst the acts don't line up with her estimates of the value of those acts. In this post, I want to consider a way of reconciling the preferences Buchak permits with the normative claims of expected utility theory. I will be making a standard move. I will be redescribing the space of outcomes in such a way that we can understand any Buchakian agent as setting her preferences in line with her expectation (and thus estimate) of the value of that act. In recent formal epistemology, a lot of attention has been paid to a programme that one might call accuracy-first epistemology. It is based on a particular account of the goodness of doxastic states: on this account, a doxastic state -- be it a full belief, a partial belief, or a comparative probability ordering -- is better the greater its accuracy; Alvin Goldman calls this account veritism. This informal idea is often then made mathematically precise and the resulting formal account of doxastic goodness is used to draw various epistemological conclusions. In this post, the doxastic states with which I will be concerned are credences or partial beliefs. Such a doxastic state is represented by a single credence function $c$, which assigns a real number $0 \leq c(X) \leq 1$ to each proposition $X$ about which the agent has an opinion. Thus, a measure of accuracy is a function $A$ that takes a credence function $c$ and a possible world $w$ and returns a number $A(c, w)$ that measures the accuracy of $c$ at $w$: $A(c, w)$ takes values in $[-\infty, 0]$. Beginning with Joyce 1998, a number of philosophers have given different characterisations of the legitimate measures of accuracy: Leitgeb and Pettigrew 2010; Joyce 2009; and D'Agostino and Sinigaglia 2009. Leitgeb and Pettigrew give a very narrow characterisation, as do D'Agostino and Sinigaglia: they agree that the so-called Brier score (or some strictly increasing transformation of it) is the only legitimate measure of accuracy. Joyce, on the other hand, gives a much broader characterisation. I find none of these characterisations adequate, though I won't enumerate my concerns here. Rather, in this post, I'd like to offer a new characterisation.
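For orientation, the Brier score mentioned above is standardly defined as follows; this is the textbook definition, stated here for convenience rather than quoted from the cited papers. For a credence function $c$ defined on propositions $X_1, \ldots, X_n$ and a possible world $w$, let $v_w(X_i) = 1$ if $X_i$ is true at $w$ and $v_w(X_i) = 0$ otherwise. Then the (accuracy-oriented, negated) Brier score is

$$A(c, w) = -\sum_{i=1}^{n} \big(v_w(X_i) - c(X_i)\big)^2,$$

so perfectly accurate credences receive the maximum value $0$, consistent with accuracy measures taking values in $[-\infty, 0]$ as described above.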
What is the shortest string that contains all permutations of an alphabet? What is the shortest string $S$ over an alphabet of size $n$, such that every permutation of the alphabet is a substring of $S$? I thought of this problem while reading an open problem on the shortest supersequence of all permutations. Here are some thoughts: given a permutation $abcde$, you can generate all its cyclic rotations with no additional cost: $abcdeabcd$. Then you'd want to capitalize on the largest number of symbols, so we repeat the deepest symbol, in this case $a$, complete the permutation with $e$, and generate all cyclic rotations: $abcdeabcdaebcda$. So far we've generated all cyclic rotations of $abcde$ and $bcdae$. Notice that the first $n-1$ characters are rotated. We continue to generate all cyclic rotations of $cdabe,dabce$, and then we have to bring back one symbol shallower: $\ldots dabcedabcadebcad$. So we moved from $dabce$ to $bcade$. This corresponds to a rotation of the middle $n-2$ characters, followed by the usual rotation of $n-1$ characters. Now we get back to the earlier operation (rotation of $n-1$), occasionally applying the new rotation, until we get stuck; we would then need to do a rotation of the middle $n-3$ characters, and so on. I would conjecture that the preceding scheme is optimal; the diligent reader can recursively calculate its length.

I researched this question 20 years ago and found the length of the shortest string containing all the permutations of n objects to be as stated in http://www.notatt.com/permutations.pdf. We created a computer algorithm to generate all possible strings containing all permutations of n objects and proved this minimal length through brute force for alphabets up to 11 objects. We never could find a proof that our algorithm generated the shortest strings for any n and I would love for someone to pick this subject up. I've found that most mathematicians disregard this topic as already done when in fact, upon close examination, it has not been proven. If anyone knows of such a proof, please pass it along. You can find our paper at Minimal Superpermutations, Ashlock D., and J. Tillotson, Congressus Numerantium 93 (1993), 91-98.

[NB] This argument as stated is incorrect, and the error is pointed out by Yuval in the comments below. One can indeed use the De Bruijn graph to get all permutations, exactly as I described below, but as Yuval's comment implies, one will also get plenty of non-permutations. The length of the string $S$ is not what my argument says it is, and the problem of the MINIMAL LENGTH sequence seems to require a different argument. This is very dangerous because all those undeserved points might cause someone to dismiss what might be an interesting and difficult question. Since there are $n!$ permutations of $n$ symbols, the string $S$ must have length at least $n!+n-1$. That is, there must be $n!$ starting positions for a permutation, and the last starting position must be succeeded by $n-1$ symbols. There is in fact such a string, which is obtained from an Eulerian path in the De Bruijn graph $B(n,n)$. This is a directed graph whose vertices are all strings of $n-1$ distinct symbols, with a directed edge from $u$ to $v$ if there is a symbol $s$ such that $v$ has the form $u_1s$, where $u_1$ is $u$ with the left-most symbol deleted. We label the edge with $s$. Every vertex has the same in-degree and out-degree, so the graph has an Eulerian path.
The labels of the edges on any such path, together with the string where the path ends, give a minimal sequence of the kind you want. See this Wikipedia article http://en.wikipedia.org/wiki/De_Bruijn_sequence for terminology, references and a nice explanation.
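Since the thread is about verifying candidate strings, here is a short Python sketch (my own, not from the referenced paper) that checks whether a string contains every permutation of an alphabet as a substring, and finds a shortest such string for tiny alphabets by brute-force breadth-first search; the search is only practical for $n \le 3$ or so.

```python
from itertools import permutations
from collections import deque

def contains_all_permutations(s, alphabet):
    perms = {"".join(p) for p in permutations(alphabet)}
    return all(p in s for p in perms)

def shortest_superpermutation(alphabet):
    """BFS over strings in order of length; only feasible for very small n."""
    target = {"".join(p) for p in permutations(alphabet)}
    queue = deque([""])
    while queue:
        s = queue.popleft()
        if all(p in s for p in target):
            return s
        for c in alphabet:
            queue.append(s + c)

print(contains_all_permutations("123121321", "123"))  # True (length 9, the known minimum for n = 3)
print(len(shortest_superpermutation("12")))           # 3, e.g. "121"
```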
On the Ground State Energy of the Translation Invariant Pauli-Fierz Model. II. We determine the ground state energy of the translation invariant Pauli-Fierz model for an electron with spin, to subleading order $O(\alpha^2)$ with respect to powers of the fine-structure constant $\alpha$, and prove rigorous error bounds of order $O(\alpha^3)$. A main objective of our argument is its brevity. Keywords and Phrases: Translation invariant Pauli-Fierz Hamiltonian, spectral theory, ground state energy.
Ideas for further contemplation from the blog post Determinant from Matrix Multiplication:
* For $n \times n$ matrices, use the Laplace Expansion (which was probably how you learned to calculate determinants in school, anyway; see the sketch below).
* Derive a general formula for $n \times n$ matrices using Mathematical Induction.
* Are there consequences for the eigenvalues?
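As a concrete companion to the first idea, here is a short Python sketch (my own) of the Laplace (cofactor) expansion along the first row; it is exponential in $n$ and meant only as an illustration, not as an efficient determinant routine.

```python
def det(matrix):
    """Determinant via Laplace expansion along the first row (O(n!))."""
    n = len(matrix)
    if n == 1:
        return matrix[0][0]
    total = 0
    for j in range(n):
        # Minor: remove row 0 and column j, then recurse.
        minor = [row[:j] + row[j+1:] for row in matrix[1:]]
        total += (-1) ** j * matrix[0][j] * det(minor)
    return total

print(det([[1, 2], [3, 4]]))                    # -2
print(det([[2, 0, 1], [1, 3, 2], [1, 1, 1]]))   # 0
```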
Please forgive my limited vocabulary, I'm not a math guru (yet) but I'm trying to solve a practical real-life challenge in a 3D scene I'm creating with PovRay. Given a line going through x=0, y=0, z=0 in 3D Cartesian space. I know the angles the line has on the xz plane and the yz plane as they would be when viewing them orthographically. I want to know the angle that the line makes towards the z axis. When xz=90° and yz=90° then the angle should be 0° (or 180° or 360°). When xz=0° and yz=0° then the angle should be 90° (or 270°). My intuition says that xz=45° and yz=45° should be more than 45°. Slicing a cube from corner to opposite corner, so probably 180 - (inverse tan(0.5/1)) = 153.43°. Not sure if I have that correct though. I could lay out the known values in a spreadsheet and try to brew a formula averaging the two angles and using a sine or cosine to wobble between the extremes, but I'd never know if the intermediate answers would actually be correct. Things also get tricky when the x and y angles differ; I'm not sure where to start: summing them, taking an average, or what. For now I just need a formula, but if someone could add some enlightenment (enable me to reason out the formula on my own) that would be super!

I suppose that when you project your line orthographically on the $x,z$ plane and measure the angle in that plane, you are measuring that angle from the $x$ axis; and likewise you measure the angle in the $y,z$ plane from the $y$ axis. You know the 3D line goes through $(0,0,0).$ You can easily detect if the line is completely in the $x,y$ plane, $90$ degrees from the $z$ axis, and in that case you have a solution. Otherwise, look at the point where the line crosses the plane $z = 1$ and call it $(x_1, y_1, 1)$; the two projected angles determine $x_1$ and $y_1$ (for instance, the tangent of the angle in the $x,z$ plane is $1/x_1$). Now that you know the coordinates $x_1$ and $y_1,$ you can construct a right triangle with vertices at $(0,0,0),$ $(x_1, y_1, 1),$ and $(0,0, 1).$ In this triangle, the angle at $(0,0, 1)$ is the right angle, the side from $(0,0,0)$ to $(x_1, y_1, 1)$ lies along your 3D line, and the side from $(0,0,0)$ to $(0,0, 1)$ lies along the $z$-axis. So the angle between your line and the $z$-axis is the angle at $(0,0,0)$ in this triangle.
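A small Python sketch (my own) of the resulting formula, using the conventions above with the projected angles measured from the $x$ and $y$ axes; the degenerate case where either projected angle is 0° is left unhandled.

```python
import math

def angle_to_z_axis(angle_xz_deg, angle_yz_deg):
    """Angle between the line and the z axis, given its projected angles.

    At z = 1 the line passes through (x1, y1, 1) with x1 = 1/tan(angle_xz)
    and y1 = 1/tan(angle_yz), so the angle to the z axis is
    atan(sqrt(x1^2 + y1^2)).
    """
    x1 = 1.0 / math.tan(math.radians(angle_xz_deg))
    y1 = 1.0 / math.tan(math.radians(angle_yz_deg))
    return math.degrees(math.atan(math.hypot(x1, y1)))

print(angle_to_z_axis(90, 90))   # ~0    (the line runs along the z axis)
print(angle_to_z_axis(45, 45))   # ~54.74 (the space diagonal of a cube)
```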
This is a generalization of Place 4x12 detainees on a 7x7 grid of cells: colour the squares of an $n\times n$ grid with four colours, using each colour equally often, so that no two squares of the same colour touch at an edge or corner. For which values of $n$ is this possible?

This fills in all cells of the $n\times n$ square, uses each colour exactly the same number of times, and no two same-colour squares touch at edge or corner. Then simply extend this solution outwards by continuing each row and column in a pattern of the form ABAB... or BABA... or CDCD... or ACAC... or whatever, according to the way that row or column has already started when reading from the centre outwards. For instance, how do we extend the first row to the right? We read rightwards from the centre column: D, C, so the next one must be D, then C again, and so on. How do we extend the second column downwards? Again, read downwards from the centre row: C, A, so the next one must be C, then A again, and so on. The corners are then filled in in the only way they can be: once the rest of the $2\times2$ square starting from that corner is filled in, there's only one way the corner can be done. Now there are five of each colour filled in, so the remaining four cells must contain exactly one of each. But if those AB pairs on the left are all aligned the same way, then none of those four can be B; similarly, if the CD pairs on the left are both aligned the same way, then none of them can be D; and similarly for the pairs on the right. So we need to swap some pairs. How about this? Now there's exactly one way to fill in those four remaining cells, which gives us the solution written above for $n=5$. ... and that this extension outwards from the centre cross can be used to construct a solution for all odd $n$ simply by extending further and further, one outer layer at a time.

Make a square of 2 by 2 with the 4 colors and spam it from beginning to end. Easy peasy. Put the empty square in the middle, put a different color on each side of the empty square, then spiral outward. Will never fail.
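A short Python sketch (my own) that verifies a proposed colouring against the two constraints discussed above, equal colour counts and no two same-coloured cells touching at an edge or corner; `None` marks an empty cell.

```python
from collections import Counter

def valid_colouring(grid):
    """Check equal colour counts and no same-colour contact (8 neighbours)."""
    n = len(grid)
    counts = Counter(c for row in grid for c in row if c is not None)
    if len(set(counts.values())) > 1:
        return False
    for i in range(n):
        for j in range(n):
            if grid[i][j] is None:
                continue
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    if (di, dj) == (0, 0):
                        continue
                    ii, jj = i + di, j + dj
                    if 0 <= ii < n and 0 <= jj < n and grid[ii][jj] == grid[i][j]:
                        return False
    return True

# The repeated 2x2 block from the second answer, on a 4x4 grid:
grid = [list("ABAB"), list("CDCD"), list("ABAB"), list("CDCD")]
print(valid_colouring(grid))   # True
```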
I need to find all the sub-square matrices of a $4 \times 4$ matrix. Then find the determinant of all $70$ of them. I was directed to Sage but I know little about it. A simple program would be much appreciated. Note: if you already figured out how to input a matrix, you would save potential answerers some time by putting that code in your question, ready to copy-paste.

print ( "A[ %s , %s ].det() = %s"

To get a square sub-matrix from a square matrix, you just need to delete an equal number of rows and columns. Notice the methods delete_rows and delete_columns. To enumerate subsets of size 2 of a set S, we can use Subsets(S, k=2). Combining all this, write a loop to iterate over all possible numbers of rows and columns to delete.
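Here is a hedged sketch of the loop the answer describes, written for the Sage (Python-based) interpreter; it uses the `Subsets`, `delete_rows` and `delete_columns` tools named above, while the example matrix and the exact output format are my own guess at what the fragmentary `print` was aiming for. The indices printed are the deleted rows and columns.

```python
# Run inside Sage: enumerate square submatrices of a 4x4 matrix A
# by deleting an equal number of rows and columns, and print determinants.
A = matrix(QQ, 4, 4, list(range(16)))

print("A[ [] , [] ].det() = %s" % A.det())          # delete nothing: the full matrix
for k in range(1, 4):                               # delete k rows and k columns
    for rows in Subsets([0, 1, 2, 3], k):
        for cols in Subsets([0, 1, 2, 3], k):
            sub = A.delete_rows(list(rows)).delete_columns(list(cols))
            print("A[ %s , %s ].det() = %s" % (sorted(rows), sorted(cols), sub.det()))
```

This prints $1 + 16 + 36 + 16 = 69$ determinants (submatrix sizes 4 down to 1); the count of 70 in the question also includes the empty $0\times 0$ submatrix, whose determinant is 1 by convention.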
Abstract: Let $\mathcal K$ be a complete quasivariety of completely regular universal topological algebras of continuous signature $\mathcal E$ (which means that $\mathcal K$ is closed under taking subalgebras, Cartesian products, and includes all completely regular topological $\mathcal E$-algebras algebraically isomorphic to members of $\mathcal K$). For a topological space $X$ by $F(X)$ we denote the free universal $\mathcal E$-algebra over $X$ in the class $\mathcal K$. Using some extension properties of the Hartman-Mycielski construction we prove that for a closed subspace $X$ of a metrizable (more generally, stratifiable) space $Y$ the induced homomorphism $F(X)\to F(Y)$ between the respective free universal algebras is a closed topological embedding. This generalizes one result of V.Uspenskii concerning embeddings of free topological groups.
Topic modeling is an unsupervised machine learning method to discover abstract topics within a collection of unlabelled documents. Each collection of documents (corpus) contains a "latent" or "hidden" structure of topics. Some topics are more prominent in the whole corpus, some less. In each document multiple topics are covered, each to a different degree. The latent variable $z$ describes the topic structure, as each word of each document is thought to be implicitly assigned to a topic.

Questions of interest:
* What are the topics that appear in the corpus?
* Which words are prominent in which topics?
* Which topics appear in which documents?

Metadata fields in the corpus:
* speaker: connected to speaker metadata like age, party, etc.
* type: categorical "chair", "poi" or "speech" – we only need "speech"

Example documents:
* $D_1$: "Regarding the financial situation of Russia, President Putin said ..."
* $D_2$: "In the first soccer game, he only sat on the bank ..."
* $D_3$: "The conference on banking and finance ..."

Open questions:
* how to set the parameters $K$ (number of topics), $\alpha$ and $\beta$? (see the sketch below)
* how to split the data? hold out whole documents or words in documents ("document completion")?
* how to compare training and test outcomes? → which properties should "ideal" posterior distributions $\phi$ and $\theta$ have?

Further notes on evaluating topics:
* topics should be "well separated" (as with Cao), but with an added penalty for too many "topic splits"
* used an "expert-driven topic annotation study"
* saliency and distinctiveness show a large amount of very common words like "herr", "dame", "kollege", "sagen"
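A minimal scikit-learn sketch of fitting LDA with the three hyperparameters mentioned above, the number of topics $K$, the document-topic prior $\alpha$ and the topic-word prior $\beta$; the toy documents reuse the examples above, and the parameter values are illustrative assumptions only.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "Regarding the financial situation of Russia, President Putin said ...",
    "In the first soccer game, he only sat on the bank ...",
    "The conference on banking and finance ...",
]

X = CountVectorizer(stop_words="english").fit_transform(docs)

K = 2
lda = LatentDirichletAllocation(
    n_components=K,          # number of topics K
    doc_topic_prior=0.1,     # alpha
    topic_word_prior=0.01,   # beta (called eta in some implementations)
    random_state=0,
)
theta = lda.fit_transform(X)   # document-topic proportions (theta)
phi = lda.components_          # unnormalised topic-word weights (phi)
print(theta.round(2))
```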
I want to compute the estimate of $\beta$ for a linear model $Y = X\beta + \varepsilon $ with $$\varepsilon \sim N_d(0, \sigma^2V),$$ where $V$ is a $d\times d$ positive definite, symmetric matrix. It is straightforward to generalize the theory of ordinary linear models (where $\varepsilon \sim N_d(0, \sigma^2I_d)$) to this general case. In fact, it is sufficient to consider the Cholesky decomposition of the matrix $V$ and then transform the variables (details, for instance, here). Now that the theory is clear, how can I specify and solve this problem in R?
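The question asks specifically about R, where the `gls` function in the `nlme` package, or a manual whitening with `chol`, would be the usual route. Purely to illustrate the Cholesky whitening trick itself, here is a hedged NumPy sketch with made-up data; the equicorrelated $V$ and all variable names are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 50
X = np.column_stack([np.ones(d), rng.normal(size=d)])
V = 0.5 * np.eye(d) + 0.5                      # a simple positive definite V
beta_true = np.array([1.0, 2.0])
L = np.linalg.cholesky(V)                      # V = L L^T
y = X @ beta_true + L @ rng.normal(size=d)     # errors ~ N(0, V)

# Whitening: multiply by L^{-1}; then ordinary least squares applies,
# because L^{-1} eps ~ N(0, sigma^2 I).
Xw = np.linalg.solve(L, X)
yw = np.linalg.solve(L, y)
beta_gls, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
print(beta_gls)    # close to [1, 2]
```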
In this talk, we prove that the stationary Swift-Hohenberg equation has chaotic dynamics on a critical energy level for a large (continuous) range of parameter values. The first step of the method relies on a computer assisted, rigorous, continuation method to prove the existence of a periodic orbit with certain geometric properties. The second step is topological: we use this periodic solution as a skeleton, through which we braid other solutions, thus forcing the existence of infinitely many braided periodic orbits. A semi-conjugacy to a subshift of finite type shows that the dynamics is chaotic. In this talk, I will introduce and explain some algorithmic methods for the automated analysis of dynamical systems. The discretization of the dynamics makes it possible to use fast graph algorithms to extract the recurrent and gradient-like dynamics, and to represent the system in terms of a Conley-Morse decomposition. Recently developed algorithms, including the automatic homology computation, provide additional information and make it possible to obtain mathematically meaningful, rigorous results. Moreover, if an $n$-parameter family of dynamical systems is considered, then using outer approximations of dynamics provides an algorithmic method to prove certain continuation results, as well as to detect possible bifurcations. A nonlinear Leslie population model will be used as a sample discrete dynamical system which illustrates the effectiveness of this approach. In this talk we are concerned with singular boundary value problems arising in hydrodynamics and cosmology. In the case of spherical symmetry, the original partial differential equation may be reduced to a second order ordinary differential equation (ODE). This is the case, for example, in the formation of bubbles or droplets in a gas-liquid mixture. We are interested in solutions of the resulting ODE which are strictly increasing on the positive semi-axis and have finite limits at $0$ and $\infty$ (bubble-type solutions). Necessary and sufficient conditions for the existence of such solutions are obtained in the form of a restriction on the equation coefficients. The asymptotic behavior of certain solutions of this equation is analysed near the two singularities (when $r\rightarrow 0+$ and $r\rightarrow \infty$), where the considered boundary conditions define one-parameter families of solutions. Based on the analytic study, an efficient numerical method is proposed to compute the needed solutions approximately. Some results of the numerical experiments are displayed and their physical interpretation is discussed.
Dr. James Coakley received his B. Eng. degree in Mechanical Engineering from University College Dublin. He earned his Ph.D. in Materials Science and Engineering at Imperial College London. After postdoctoral appointments at Imperial College London and at Oak Ridge National Laboratory, he was awarded a three-year Marie Curie research fellowship in 2014, co-hosted between Northwestern University and the University of Cambridge. He is a member of the Oak Ridge National Laboratory review committee, by invitation, and has been an invited speaker and session chair at the Minerals, Metals and Materials Society (TMS) conference. His interests lie in the research and development of structural materials, with a focus on relating the microstructure evolution that occurs both during alloy processing and during the lifetime of the component to the concomitant change in mechanical properties. He utilizes a broad range of scientific techniques, including alloy processing, electron microscopy, atom probe tomography, in-situ and ex-situ neutron and X-ray diffraction and small angle scattering, materials modeling, and developing data analysis protocols. The projects that he has undertaken have been strongly rooted in industrial and government interests and demands, forming collaborations between industry and academia to achieve a common goal.

Coakley, J., Radecka, A., Dye, D., Bagot, P. A., Martin, T. L., Prosa, T. J., Chen, Y., Stone, H. J., Seidman, D. N., Isheim, D. (2018). Characterizing nanoscale precipitation in a titanium alloy by laser-assisted atom probe tomography. Materials Characterization, 141, 129--138.
Li, M., Coakley, J., Isheim, D., Tian, G., Shollock, B. (2018). Influence of the initial cooling rate from $\gamma'$ supersolvus temperatures on microstructure and phase compositions in a nickel superalloy. Journal of Alloys and Compounds, 732, 765--776.
Coakley, J., Seong, B., Dye, D., Ohnuma, M. (2017). Isothermal omega kinetics in beta-titanium alloys. Philosophical Magazine Letters, 97, 83--91.
Coakley, J., Ma, D., Frost, M., Dye, D., Seidman, D. N., Dunand, D. C., Stone, H. J. (2017). Lattice strain evolution and load partitioning during creep of a Ni-based superalloy single crystal with rafted $\gamma'$ microstructure. Acta Materialia, 135, 77--87.
Coakley, J., Isheim, D., Radecka, A., Dye, D., Stone, H. J., Seidman, D. N. (2017). Microstructural evolution in a superelastic metastable beta-Ti alloy. Scripta Materialia, 128, 87--90.
Coakley, J., Lass, E. A., Ma, D., Frost, M., Seidman, D. N., Dunand, D. C., Stone, H. J. (2017). Rafting and elastoplastic deformation of superalloys studied by neutron diffraction. Scripta Materialia, 134, 110--114.
Isheim, D., Coakley, J., Radecka, A., Dye, D., Prosa, T., Chen, Y., Bagot, P., Seidman, D. (2016). Atom-Probe Tomography: Detection Efficiency and Resolution of Nanometer-Scale Precipitates in a Ti-5553 Alloy. Microscopy and Microanalysis, 22, 702--703.
Coakley, J., Rahman, K. M., Vorontsov, V. A., Ohnuma, M., Dye, D. (2016). Effect of precipitation on mechanical properties in the $\beta$-Ti alloy Ti--24Nb--4Zr--8Sn. Materials Science and Engineering: A, 655, 399--407.
Coakley, J., Radecka, A., Dye, D., Bagot, P. A., Stone, H. J., Seidman, D. N., Isheim, D. (2016). Isothermal omega formation and evolution in the Beta-Ti alloy Ti-5Al-5Mo-5V-3Cr. Philosophical Magazine Letters, 96, 416--424.
Radecka, A., Coakley, J., Jones, I. P., Rugg, D., Lindley, T. C., Dye, D. (2016). Ordering and the micromechanics of Ti--7Al. Materials Science and Engineering: A, 650, 28--37.
Swinburne, T., Glavicic, M., Rahman, K., Jones, N., Coakley, J., Eakins, D., White, T., Tong, V., Milathianaki, D., Williams, G., et al. (2016). Picosecond dynamics of a shock-driven displacive phase transformation in Zr. Physical Review B, 93, 144119.
Liu, Q., Coakley, J., Seidman, D. N., Dunand, D. C. (2016). Precipitate evolution and creep behavior of a W-free Co-based superalloy. Metallurgical and Materials Transactions A, 47, 6090--6096.
Radecka, A., Coakley, J., Vorontsov, V., Martin, T., Bagot, P., Moody, M., Rugg, D., Dye, D. (2016). Precipitation of the ordered $\alpha_2$ phase in a near-$\alpha$ titanium alloy. Scripta Materialia, 117, 81--85.
Radecka, A., Bagot, P., Martin, T., Coakley, J., Vorontsov, V., Moody, M., Ishii, H., Rugg, D., Dye, D. (2016). The formation of ordered clusters in Ti--7Al and Ti--6Al--4V. Acta Materialia, 112, 141--149.
Coakley, J., Vorontsov, V. A., Littrell, K. C., Heenan, R. K., Ohnuma, M., Jones, N. G., Dye, D. (2015). Nanoprecipitation in a beta-titanium alloy. Journal of Alloys and Compounds, 623, 146--156.
Coakley, J., Vorontsov, V. A., Jones, N. G., Radecka, A., Bagot, P. A., Littrell, K. C., Heenan, R. K., Hu, Frédéric, Magyar, A. P., Bell, D. C., et al. (2015). Precipitation processes in the Beta-Titanium alloy Ti--5Al--5Mo--5V--3Cr. Journal of Alloys and Compounds, 646, 946--953.
Yan, H., Coakley, J., Vorontsov, V. A., Jones, N. G., Stone, H. J., Dye, D. (2014). Alloying and the micromechanics of Co--Al--W--X quaternary alloys. Materials Science and Engineering: A, 613, 201--208.
Warwick, J., Coakley, J., Raghunathan, S., Talling, R., Dye, D. (2012). Effect of texture on load partitioning in Ti-6Al-4V. Acta Materialia, 60, 4117--4127.
Coakley, J., Reed, R. C., Warwick, J. L., Rahman, K. M., Dye, D. (2012). Lattice strain evolution during creep in single-crystal superalloys. Acta Materialia, 60, 2729--2738.
Yan, H., Vorontsov, V. A., Coakley, J., Jones, N. G., Stone, H. J., Dye, D. (2012). Quaternary alloying effects and the prospects for a new generation of Co-base superalloys. Superalloys, 53, 705.
Coakley, J., Dye, D., Basoalto, H. (2011). Creep and creep modelling of a multimodal nickel-base superalloy. Acta Materialia, 59, 854--863.
Coakley, J., Basoalto, H., Dye, D. (2010). Coarsening of a multimodal nickel-base superalloy. Acta Materialia, 58, 4019--4028.
Dye, D., Coakley, J., Vorontsov, V., Stone, H., Rogge, R. (2009). Elastic moduli and load partitioning in a single-crystal nickel superalloy. Scripta Materialia, 61, 109--112.
Radecka, A., Vorontsov, V., Coakley, J., Rahman, K., Bagot, P., Martin, T., Moody, M., Jones, I., Lindley, T., Rugg, D., Dye, D. (2016). Ordering in α Titanium Alloys. Proceedings of the 13th World Conference on Titanium (pp. 971-978). Wiley-Blackwell.
Would it be possible to create an artificial iron star? I ran a quick calculation, and the mass of iron required to form a black hole is on the order of 10^40 kg. So, suppose that in my story there is almost this much iron available to somehow be brought together in one place, but the builders don't want to create a black hole; instead they want to fuse iron into heavier elements. In the beginning a large sphere would be created, and then more iron would be added with low temperature and kinetic energy. For the purpose of the story, ignore any practical issues with doing this or the usefulness thereof. Such a large mass should, under its own gravity, be under tremendous pressure (and possibly temperature) at the core. Normally, iron doesn't fuse because such a reaction would be endothermic, but because we aren't starting with a star, is it possible for such fusion to occur in such an artificially constructed setting? For comparison, the mass of the Milky Way galaxy is estimated to be only about two and a half orders of magnitude more than the amount of iron we are talking about, so this is a LOT of iron.

First, you seem to be familiar with the binding energy curve for elements. As you can see, iron is at the top of the binding energy curve, so it has the most binding energy per nucleon. If you wanted to 'fuse' iron into something heavier, then you need to add energy. This is not impossible to imagine happening. The conditions inside a supernova allow nucleosynthesis of elements heavier than iron, so incredible though supernovas may be, perhaps some super-advanced alien race could accomplish it by adding enough energy to a giant iron core. The problem here comes from the number of neutrons present in iron. Iron does not have enough neutrons to form uranium or some of the other heavier elements that you suggested. Having iron and only iron in your core gives you a limited number of things that you can make. You can only add nuclei together that have 26 protons in them, so the first-order fusion products will all have 52 protons, and somewhere between 56 and 64 neutrons. From a chart of nuclides, we can see that none of these configurations is stable. 52 protons is tellurium, which requires at least 68 neutrons to be stable. Thus, these short-term fusion products will decay, via $\beta+$ decay, into mostly tin, with some cadmium, indium, silver and the like mixed in, depending on the initial isotopes. This decay takes elements 'down' the periodic table; tellurium has a higher atomic number than any of its products, so $\beta+$ decaying isotopes will end up with a lower number of protons once the decay chain stops at a stable isotope. This process could potentially continue, with the tin fusing with another iron atom to make osmium, which would decay into ytterbium or hafnium; that could then pick up another iron, etc. The problem is that eventually your fusion products are going to be elements large enough that they need massive amounts of neutrons to maintain stability. While iron can be stable at 14:13 neutrons:protons, lead is only stable at 62:41, and uranium's longest-lived isotope is 73:46. Ultimately, large atoms that are sufficiently neutron deficient will start to undergo $\alpha$ decay, where they lose an $\alpha$ particle, which is 2 neutrons and 2 protons. In this decay, they will rapidly drop in atomic number until you are back close to where you started. Heavier elements obviously got created, but how?
By the r-process, where nuclei of atoms with atomic numbers above iron accumulate extra neutrons. These extra neutrons cause isotopes to $\beta-$ decay. The result of this decay is that an element moves 'up' the periodic table, gaining atomic number as unstable neutrons turn into protons. This is the only way that mostly stable elements high up in the periodic table (like uranium and thorium) can be formed. The r-process occurs in two situations (that we know about). The first is a neutron star collision, where there are obviously plenty of extra neutrons to spare. The second is in the core of a Type IIb supernova, the most violent kind. Much is unclear about how this works; there has only been one observed, nearby Type IIb supernova, and the first neutron star merger was observed in 2017. Here is a chart of the source of nucleosynthesis for elements. You can see that the neutron star merger (which is also possibly occurring in Type IIb supernovas) is how you get the vast majority of your heavy elements.

How can you get enough neutrons? In a supernova, and in stars in general, neutrons form from the p-p fusion chain. Two protons fuse together, a highly unstable configuration. This immediately (as in, no measured half-life immediately) $\beta+$ decays into deuterium, which has a neutron. Deuterium can then fuse with other assorted combinations of hydrogen and helium to form free neutrons. In a core collapse supernova, the neutron flux is theorized to be such that extra neutrons can accumulate on the heavier elements, thereby making even heavier elements. From your alien's perspective, though, the best way to add neutrons has got to be a neutron star. If they can get their hands on a galaxy's worth of iron, they can probably find a neutron star lying around too. Take a giant iron ball, add an immense amount of energy, and then throw in a neutron star. Metal!

As you pointed out, iron is normally the element where typical solar fusion ends. However, if enough alpha particles are present, nickel and zinc could result. Also, neutron capture offers a less energetic route to elements heavier than iron. This is the contribution supernovae make towards forming these elements. And you also mention bringing all of that iron together, which would be a very bad idea if you are trying not to create a huge lump of molten lead. Maybe bring it all together in controlled bursts, to match the amount of alpha particles or neutrons you can toss at it, then remove your newly formed elements to a safer distance.
I came up with a question about the Fatou components of some types of rational functions. In some sense, I need to give a computational criterion for the existence of an attracting periodic basin of a rational function, which is related to Fatou's theorem. I tried some examples; however, sometimes I cannot find an attracting periodic point (period from 2 to 100) with the computer, and I have no way to tell whether $f$ has an attracting basin or not. For example, $$f(z) = -2/3*z+(-2*z^3+1)/(3*z^3+5*z).$$ Using the software Ultra Fractal, it seems that the Julia set of this example is very small (maybe a Cantor set); I can only see that the whole screen is black.

It seems that there is no criterion (an algorithm which will always give an answer). Consider the following simplified problem: is there a criterion that $z^2+c$ has an attracting fixed point in $C$? The set of such $c$ is a union of disjoint open regions. Each region is bounded by an algebraic curve, but the number of regions is infinite. Thus the set is not semi-algebraic, and there can be no "criterion". So the situation is the following: if your computer finds an attracting point, then you are done. If not, you cannot draw any conclusion. But if the word "criterion" means only a sufficient condition for the absence of attracting points, then such criteria exist. In doing computation, one only has to look at the trajectories of the critical points. So if all critical points are eventually absorbed by repelling cycles, then there are no attracting fixed points, and $J=C$. This is a nice, verifiable condition which is most frequently used in showing that $J=C$ for some specific function.

As Alexandre points out, if every critical point of a rational function is pre-periodic, then the Julia set of the function is the entire Riemann sphere. This is stated as Theorem 4.3.1 in Beardon's Iteration of Rational Functions. A simple example is $$f(z) = (z-2)^2/z^2$$ whose only critical points are $0$ and $2$, and these satisfy $$2\rightarrow0\rightarrow\infty\rightarrow1\rightarrow1.$$ Unfortunately, this is a rather narrow criterion in that there may be many functions with full Julia set (i.e., the whole Riemann sphere) that don't satisfy it. There is, however, in principle a necessary and sufficient condition for a rational function to have full Julia set: there simply needs to be one point whose orbit is dense in the Riemann sphere (Thm 4.3.2 of Beardon). While I suspect this is impractical to use in a proof, it's simple to use as numerical evidence when you are unable to find attractive periodic orbits. Applying this to your function, I obtained the following picture, which plots the first $20,000$ points of a random orbit on the Riemann sphere. I include the Mathematica code since I think it's quite simple and illustrates the idea. Definitely not proof of a dense orbit, but compelling enough that I wouldn't spend my resources looking for very long attractive behavior. A similar picture is generated by the other example. We can now search for values of $\lambda$ where some attractive behavior is detected. It's not too hard to prove that $\infty$ is an attractive fixed point whenever $\lambda>3/2$. Thus, for all $\lambda\leq3/2$, we iterate from all six critical points of $f$ in search of attractive behavior. In the image below, the black region indicates that no attractive behavior was detected at all. The lighter shaded region indicates that some attractive behavior was detected.
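The Mathematica snippet referred to above did not survive in this copy of the page. As a rough stand-in (my own sketch, not the answerer's code), here is a Python version of the same idea: iterate the given map from an arbitrary starting point and plot the orbit on the Riemann sphere via stereographic projection, to eyeball whether it looks dense.

# Hedged stand-in for the missing Mathematica snippet (not the original code):
# plot a long orbit of f(z) = -2z/3 + (-2z^3 + 1)/(3z^3 + 5z) on the Riemann
# sphere and look for visually dense behaviour.
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401  (needed on older matplotlib)

z = complex(0.4, 0.3)        # arbitrary starting point
pts = []
for _ in range(20_000):
    den = 3.0 * z**3 + 5.0 * z
    if den == 0:
        break                 # landed exactly on a pole; stop
    z = -2.0 * z / 3.0 + (-2.0 * z**3 + 1.0) / den
    if abs(z) > 1e12:
        break                 # numerically at infinity; stop rather than overflow
    pts.append(z)

# Stereographic projection of the orbit onto the unit sphere.
w = np.array(pts)
d = 1.0 + np.abs(w)**2
x, y, s = 2 * w.real / d, 2 * w.imag / d, (np.abs(w)**2 - 1) / d

ax = plt.figure().add_subplot(projection="3d")
ax.scatter(x, y, s, s=0.5)
plt.show()

This only reproduces the visual check described in the answer, not the original picture.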
For each critical point, we iterate up to 250 times until an attractive orbit of length up to 25 is detected; we then add up those six values and shade according to that sum. Thus, a very light shade indicates that all six critical points converged quickly to an orbit. Of interest for your question is the fact that the small yellow dot at the point $\lambda=1$ appears to be wholly contained in the black region. Again, not proof, but compelling.

Then there are rational maps arbitrarily nearby that do have nonrepelling cycles, and hence we cannot distinguish between the two cases by considering our map only up to finite precision. Of course, when your coefficients are given in some absolute way (e.g. as rational or algebraic numbers), then you can ask about this as a classical problem in computer science. This problem seems less natural, although I would expect it also to be undecidable. There is a result that says that the Mandelbrot set is undecidable in the sense of Shub-Smale (which assumes infinite-precision arithmetic), which suggests at least that the problem can't be solved in any naive way. Of course, there are specific examples for which it can be verified that there are no nonrepelling cycles, as mentioned by Alexandre.
Are f.g. projective modules (of constant rank) free over the ring $A$ which is the total quotient ring of a reduced non-Noetherian commutative ring? Note that the dimension of $A$ need not be $0$. I've posted an answer here.

I can give an example of a ring $A$ which is reduced and in which every non-unit is a zero-divisor. It may be helpful to find the answer over $A$. Let $R=k[x_1,x_2,...]$ be the polynomial ring (in infinitely many variables) over a field; let $m =(x_1,x_2,...)$ be a maximal ideal. Fix an integer $n>0$ and define $I$ to be the ideal generated by the products $x_l x_j$ with $l$, $j$ distinct and $l> n$. Define $A= R_m /I$. Then $A$ is reduced, the total quotient ring of $A$ is $A$ itself, and the dimension of $A$ is $n$.

The dimension of a reduced commutative Noetherian ring in which every non-unit is a zero-divisor is $0$. Hence Tom's comment will not work in general, as commented by others.
Flood-It is a popular one-player game on many smart phones. The player is given an $n \times n$ board of tiles where each tile is given one of $6$ colours (numbered $1$–$6$). Each tile is connected to up to $4$ adjacent tiles in the North, South, East, and West directions. A tile is connected to the origin (the tile in the upper left corner) if it has the same colour as the origin and there is a path to the origin consisting only of tiles of this colour. A player makes a move by choosing one of the $6$ colours. After the choice is made, all tiles that are connected to the origin are changed to the chosen colour. The game proceeds until all tiles have the same colour. The goal of the game is to change all the tiles to the same colour, preferably with the fewest moves possible. If there is a tie, break ties by choosing the lowest numbered colour.

The input consists of multiple test cases. The first line of input is a single integer, not more than $20$, indicating the number of test cases to follow. Each case starts with a line containing the integer $n$ ($1 \leq n \leq 20$). The next $n$ lines each contain $n$ characters, giving the initial colours of the $n \times n$ board of tiles. Each colour is specified by a digit from $1$ to $6$.

For each case, display two lines of output. The first line specifies the number of moves needed to change all the tiles to the same colour. The second line specifies $6$ integers separated by a single space. The $i$-th integer gives the number of times colour $i$ is chosen as a move in the game.
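The statement above appears to have lost the sentence describing which colour the player picks on each move (only the tie-breaking rule survives). The sketch below assumes the usual greedy rule for this problem: on each move, choose the colour that maximises the number of tiles connected to the origin afterwards, breaking ties by the lowest colour number. That rule is an assumption on my part, not something stated in the text above.

# Hedged sketch of a Flood-It simulator.  The greedy choice rule (maximise the
# flooded region, ties -> lowest colour) is an assumption; only the tie-break
# appears in the problem text above.
from collections import deque
import sys

def flooded(board, n):
    """Set of cells connected to the origin by same-coloured neighbours."""
    colour = board[0][0]
    seen, q = {(0, 0)}, deque([(0, 0)])
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < n and 0 <= nc < n and (nr, nc) not in seen \
                    and board[nr][nc] == colour:
                seen.add((nr, nc))
                q.append((nr, nc))
    return seen

def play(board, n):
    moves, counts = 0, [0] * 6
    while True:
        region = flooded(board, n)
        if len(region) == n * n:
            return moves, counts
        best = None
        for colour in range(1, 7):
            if colour == board[0][0]:
                continue                      # repainting with the same colour gains nothing
            trial = [row[:] for row in board]
            for r, c in region:
                trial[r][c] = colour
            size = len(flooded(trial, n))
            if best is None or size > best[0]:
                best = (size, colour, trial)  # strict '>' keeps the lowest colour on ties
        _, colour, board = best
        moves += 1
        counts[colour - 1] += 1

def main():
    data = sys.stdin.read().split()
    t, idx = int(data[0]), 1
    for _ in range(t):
        n = int(data[idx]); idx += 1
        board = [[int(ch) for ch in data[idx + r]] for r in range(n)]
        idx += n
        moves, counts = play(board, n)
        print(moves)
        print(*counts)

if __name__ == "__main__":
    main()

For $n \le 20$ the repeated flood fills are cheap, so no cleverer data structure is needed under these limits.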
meaning of $\mathcal O(-1)$ in algebraic geometry? In quantum field theory, multi-loop Feynman integrals are basic ingredients of calculating high order corrections. Recently, I have come across the paper A Feynman integral via higher normal functions. It seems that Feynman integrals have some relations to algebraic geometry. Since I am physics-oriented, I don't know much about algebraic geometry. Could anyone provide some explanations or expository articles on this topic? Thanks a lot! This is somewhat of a borderline question for the forum. Generally "tell me about stuff" requests don't do very well. It might make sense to start a meta thread where you could refine your question beforehand. I think that Ryan has a point, although I'd be very interested in what answers show up! You should look at the book "Feynman motives" by Marcolli. @DanPetersen: To me, having seen your MO activity, it would be great to read any comments you could give on the subject. There are many resources where information can be found. For example, look at the slides here and the references therein. Also see these notes and perhaps just look at this proceeding. If you have any questions, feel free to comment or email me.
Steady states to this generalized TASEP? We have a one-dimensional lattice of length $n$ populated with particles ($p_1,p_2,p_3$ in this case) that hop to the right, but only if the neighboring cell contains no particle ($p_1$ may hop, but $p_2$ may not). Each particle independently waits a random amount of (continuous) time before trying to hop. Particles are injected into the lattice at rate $r_i$ as long as the first cell is empty and ejected from the lattice at rate $r_e$ as long as the last cell is full.

Now we generalize our TASEP by treating our "lattice" exactly as if it was just made up of particles all along. We allow any edge to hop independently. If an edge has multiple possible hops, it chooses one at random. We allow any edge to be added to the graph between any two nodes at rate $r_i$ (if no edge exists between those two nodes) and any edge to be deleted from the graph at rate $r_e$ (if the edge exists between two random nodes), all while the edges independently head-move or tail-move.

Given a random directed graph on $n$ nodes, what are the steady states of this process as we vary $r_i$ and $r_e$? What is the hitting time of these states? For large $n$ and as time $\rightarrow \infty$, what is the expected degree distribution (example below)? Besides the degree distribution, what other measures of the resultant graphs differ significantly from random graphs?

#Return a list of all possible head moves for the edge (source, target1).
#Return a list of all possible tail moves for the edge (source, target1).
#Return a list of all available moves for the edge (source, target1).
#Build our digraph. Note we don't mind self-loops!
if G.size()>0: #Check if there are edges.
    random_edge=choice(list(G.edges())) #Pick an edge at random.
    if len(move_list)>0: #Check if edge has moves.
        random_move=choice(move_list) #Make a random move.
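The graph generalization above depends on definitions of "head moves" and "tail moves" that are not reproduced in this copy (they appear to rely on figures and code that were cut), so I will not guess at them. The base model in the first paragraph is fully specified, though, and a small continuous-time (Gillespie-style) simulation of it may still be useful as a baseline. This is my own sketch, not the asker's code; the hop rate is taken to be 1, which the post does not state explicitly.

# Hedged Gillespie-style sketch of the plain TASEP described above:
# particles hop right at rate 1 (assumed), are injected at rate r_i when the
# first site is empty, and are ejected at rate r_e when the last site is full.
import random

def simulate_tasep(n, r_i, r_e, t_max, seed=0):
    rng = random.Random(seed)
    occ = [0] * n
    t = 0.0
    while t < t_max:
        # Enumerate the events that are currently possible, with their rates.
        events = []
        if occ[0] == 0:
            events.append(("inject", None, r_i))
        if occ[-1] == 1:
            events.append(("eject", None, r_e))
        for i in range(n - 1):
            if occ[i] == 1 and occ[i + 1] == 0:
                events.append(("hop", i, 1.0))
        total = sum(rate for _, _, rate in events)
        if total == 0:
            break
        t += rng.expovariate(total)                       # exponential waiting time
        kind, i, _ = rng.choices(events, weights=[e[2] for e in events])[0]
        if kind == "inject":
            occ[0] = 1
        elif kind == "eject":
            occ[-1] = 0
        else:
            occ[i], occ[i + 1] = 0, 1
        # A real study would record densities/currents here to estimate the
        # steady state; omitted to keep the sketch short.
    return occ

if __name__ == "__main__":
    print(simulate_tasep(n=20, r_i=0.4, r_e=0.6, t_max=1000.0))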
Why would the value of a call option go to infinity as volatility goes to infinity? I understand how you could solve this question by taking $\sigma \rightarrow \infty$ in the solution to the Black-Scholes equation. However, I cannot understand this on a more heuristic level. Surely as volatility goes to infinity you also have a larger chance that the option finishes out of the money (and potentially by a long way)?

The stock price may fall a lot with high volatility, but you can only lose the price of the option if you bought the option. So the upside gets bigger but the downside is bounded.
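A quick numerical check of the Black-Scholes formula shows the effect: the call value increases monotonically in $\sigma$ and, in this model, approaches the spot price $S$ in the $\sigma \to \infty$ limit (so it grows, but it is capped by $S$ rather than literally diverging), which is consistent with the bounded-downside, unbounded-upside intuition in the answer above. This is my own illustration; the parameter values below are arbitrary.

# Black-Scholes call price as a function of volatility: a quick numerical
# check that the price rises with sigma and tends to the spot price S.
import numpy as np
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

S, K, T, r = 100.0, 100.0, 1.0, 0.0
for sigma in [0.2, 1.0, 5.0, 20.0, 100.0]:
    print(f"sigma = {sigma:7.1f}  call = {bs_call(S, K, T, r, sigma):8.4f}")
# The printed prices increase with sigma and approach 100 (= S).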
I decided to try out day-time usage on 2 consecutive days, taking the 100mg at noon or 1 PM. On both days, I thought I did feel more energetic but nothing extraordinary (maybe not even as strong as the nicotine), and I had trouble falling asleep on Halloween, thinking about the meta-ethics essay I had been writing diligently on both days. Not a good use compared to staying up a night. Another common working memory task is the n-back task, which requires the subject to view a series of items (usually letters) and decide whether the current item is identical to the one presented n items back. This task taxes working memory because the previous items must be held in working memory to be compared with the current item. The easiest version of this is a 1-back task, which is also called a double continuous performance task (CPT) because the subject is continuously monitoring for a repeat or double. Three studies examined the effects of MPH on working memory ability as measured by the 1-back task, and all found enhancement of performance in the form of reduced errors of omission (Cooper et al., 2005; Klorman et al., 1984; Strauss et al., 1984). Fleming et al. (1995) tested the effects of d-AMP on a 5-min CPT and found a decrease in reaction time, but did not specify which version of the CPT was used. However, normally when you hear the term nootropic kicked around, people really mean a "cognitive enhancer" — something that does benefit thinking in some way (improved memory, faster speed-of-processing, increased concentration, or a combination of these, etc.), but might not meet the more rigorous definition above. "Smart drugs" is another largely-interchangeable term. The evidence? Although everyone can benefit from dietary sources of essential fatty acids, supplementation is especially recommended for people with heart disease. A small study published in 2013 found that DHA may enhance memory and reaction time in healthy young adults. However, a more recent review suggested that there is not enough evidence of any effect from omega 3 supplementation in the general population. However, they fell short in several categories. The key issue with their product is that it does not contain DHA Omega 3 and the other essential vitamins and nutrients needed to support the absorption of Huperzine A and Phosphatidylserine. Without having DHA Omega 3 it will not have an essential piece to maximum effectiveness. This means that you would need to take a separate pill of DHA Omega 3 and several other essential vitamins to ensure you are able to reach optimal memory support. They also are still far less effective than our #1 pick's complete array of the 3 essential brain supporting ingredients and over 30 supporting nutrients, making their product less effective. Vinpocetine walks a line between herbal and pharmaceutical product. It's a synthetic derivative of a chemical from the periwinkle plant, and due to its synthetic nature we feel it's more appropriate as a 'smart drug'. Plus, it's illegal in the UK. Vinpocetine is purported to improve cognitive function by improving blood flow to the brain, which is why it's used in some 'study drugs' or 'smart pills'. The effect? 3 or 4 weeks later, I'm not sure. When I began putting all of my nootropic powders into pill-form, I put half a lithium pill in each, and nevertheless ran out of lithium fairly quickly (3kg of piracetam makes for >4000 OO-size pills); those capsules were buried at the bottom of the bucket under lithium-less pills. 
So I suddenly went cold-turkey on lithium. Reflecting on the past 2 weeks, I seem to have been less optimistic and productive, with items now lingering on my To-Do list which I didn't expect to. An effect? Possibly. Because executive functions tend to work in concert with one another, these three categories are somewhat overlapping. For example, tasks that require working memory also require a degree of cognitive control to prevent current stimuli from interfering with the contents of working memory, and tasks that require planning, fluency, and reasoning require working memory to hold the task goals in mind. The assignment of studies to sections was based on best fit, according to the aspects of executive function most heavily taxed by the task, rather than exclusive category membership. Within each section, studies are further grouped according to the type of task and specific type of learning, working memory, cognitive control, or other executive function being assessed. Phenotropil is an over-the-counter supplement similar in structure to Piracetam (and Noopept). This synthetic smart drug has been used to treat stroke, epilepsy and trauma recovery. A 2005 research paper also demonstrated that patients diagnosed with natural lesions or brain tumours see improvements in cognition. Phenylpiracetam intake can also result in minimised feelings of anxiety and depression. This is one of the more powerful unscheduled Nootropics available. Long-term use is different, and research-backed efficacy is another question altogether. The nootropic market is not regulated, so a company can make claims without getting in trouble for making those claims because they're not technically selling a drug. This is why it's important to look for well-known brands and standardized nootropic herbs where it's easier to calculate the suggested dose and be fairly confident about what you're taking. Smart pills have huge potential and several important applications, particularly in diagnosis. Smart pills are growing as a highly effective method of endoscopy, particularly for gastrointestinal diseases. Urbanization and rapid lifestyle changes leaning toward unhealthy diets and poor eating habits have led to distinctive increasing lifestyle disorders such as gastroesophageal reflux disease (GERD), obesity, and gastric ulcers. One of the most popular legal stimulants in the world, nicotine is often conflated with the harmful effects of tobacco; considered on its own, it has performance & possibly health benefits. Nicotine is widely available at moderate prices as long-acting nicotine patches, gums, lozenges, and suspended in water for vaping. While intended for smoking cessation, there is no reason one cannot use a nicotine patch or nicotine gum for its stimulant effects. But notice that most of the cost imbalance is coming from the estimate of the benefit of IQ - if it quadrupled to a defensible $8000, that would be close to the experiment cost! So in a way, what this VoI calculation tells us is that what is most valuable right now is not that iodine might possibly increase IQ, but getting a better grip on how much any IQ intervention is worth. Supplements, medications, and coffee certainly might play a role in keeping our brains running smoothly at work or when we're trying to remember where we left our keys. But the long-term effects of basic lifestyle practices can't be ignored. "For good brain health across the life span, you should keep your brain active," Sahakian says. 
"There is good evidence for 'use it or lose it.'" She suggests brain-training apps to improve memory, as well as physical exercise. "You should ensure you have a healthy diet and not overeat. It is also important to have good-quality sleep. Finally, having a good work-life balance is important for well-being." Try these 8 ways to get smarter while you sleep. Power times prior times benefit minus cost of experimentation: (0.20 \times 0.30 \times 540) - 41 = -9. So the VoI is negative: because my default is that fish oil works and I am taking it, weak information that it doesn't work isn't enough. If the power calculation were giving us 40% reliable information, then the chance of learning I should drop fish oil is improved enough to make the experiment worthwhile (going from 20% to 40% switches the value from -$9 to +$23.8). Coconut oil was recommended by Pontus Granström on the Dual N-Back mailing list for boosting energy & mental clarity. It is fairly cheap (~$13 for 30 ounces) and tastes surprisingly good; it has a very bad reputation in some parts, but seems to be in the middle of a rehabilitation. Seth Robert's Buttermind experiment found no mental benefits to coconut oil (and benefits to eating butter), but I wonder. Perceptual–motor congruency was the basis of a study by Fitzpatrick et al. (1988) in which subjects had to press buttons to indicate the location of a target stimulus in a display. In the simple condition, the left-to-right positions of the buttons are used to indicate the left-to-right positions of the stimuli, a natural mapping that requires little cognitive control. In the rotation condition, the mapping between buttons and stimulus positions is shifted to the right by one and wrapped around, such that the left-most button is used to indicate the right-most position. Cognitive control is needed to resist responding with the other, more natural mapping. MPH was found to speed responses in this task, and the speeding was disproportionate for the rotation condition, consistent with enhancement of cognitive control.
Central heating plants are often used on large building complexes such as university campuses or military bases. Utilidors can be used to contain heat distribution lines and other utilities between a utility station and serviced buildings. Traditional thermal analysis of utilidors is one-dimensional, with heat transfer correlations used to estimate the effects of convection, radiation, and two-dimensional geometric effects. The expanding capabilities of computers and numerical methods suggest that more detailed analysis and possibly more energy-efficient designs could be obtained. This work examines current methods of estimating the convection and radiation that occur across an air space in square and rectangular enclosures and compares them with numerical and experimental data. A numerical model was developed that solves the energy, momentum, and continuity equations for the primitive variables in two dimensions; radiation between free surfaces was also included. Physical experiments were conducted with two 10-ft-long apparatuses; one had a 1-ft $\times$ 1-ft cross section, the other was 2 ft $\times$ 4 ft. Several pipe sizes and configurations were studied with the 1-ft $\times$ 1-ft apparatus. The 2-ft $\times$ 4-ft apparatus was limited to containing 4- and 8-inch insulated pipes. Corresponding numerical studies were conducted. Difficulties in modeling large enclosures or those with large temperature differences (Rayleigh numbers above $10^7$) were encountered. Results showed good agreement between numerical and experimental average heat transfer rates, and for insulated pipe cases these results also compared well with rates obtained from one-dimensional analysis. A new effective conductivity correlation for air in a square enclosure was developed, and its use was demonstrated in numerical conduction solutions and compared with full numerical convection and radiation solutions and with experimental data. Reasonably good results were achieved when there was a small temperature difference across the air gap.
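For readers unfamiliar with the Rayleigh-number threshold mentioned above, a quick order-of-magnitude check (my own illustration, using standard room-temperature air properties, not numbers from the thesis) shows how easily an air-filled utilidor gap exceeds $10^7$:

# Order-of-magnitude Rayleigh number for a buoyant air gap,
# Ra = g * beta * dT * L^3 / (nu * alpha).  Property values are standard
# room-temperature figures for air, not data from the thesis above.
g     = 9.81          # m/s^2
beta  = 1.0 / 293.0   # 1/K, ideal-gas expansion coefficient near 20 C
nu    = 1.5e-5        # m^2/s, kinematic viscosity of air
alpha = 2.1e-5        # m^2/s, thermal diffusivity of air

def rayleigh(dT, L):
    return g * beta * dT * L**3 / (nu * alpha)

print(f"1-ft (0.30 m) gap, 20 K difference: Ra = {rayleigh(20, 0.30):.2e}")
print(f"2-ft (0.61 m) gap, 20 K difference: Ra = {rayleigh(20, 0.61):.2e}")
# Both values come out well above 1e7, the regime the abstract flags as
# difficult to model.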
Definition at line 31 of file Block.php. Create a new block with specified parameters on a user, IP or IP range. Definition at line 131 of file Block.php. References $options, DB_REPLICA, isAutoblocking(), isCreateAccountBlocked(), isEmailBlocked(), isHardblock(), isSitewide(), isUsertalkEditAllowed(), User\newFromId(), null, setBlocker(), setExpiry(), setHideName(), setReason(), setTarget(), setTimestamp(), wfGetDB(), and wfTimestamp(). Checks if a block applies to a particular namespace. Definition at line 2060 of file Block.php. Checks if a block applies to a particular page. This check does not consider whether $this->isUsertalkEditAllowed returns false, as the identity of the user making the hypothetical edit isn't known here (particularly in the case of IP hardblocks, range blocks, and auto-blocks). Definition at line 2088 of file Block.php. Check if the block prevents a user from resetting their password. Definition at line 2151 of file Block.php. References getSystemBlockType(), isCreateAccountBlocked(), and null. Determine whether the Block prevents a given right. A right may be blacklisted or whitelisted, or determined from a property on the Block object. For certain rights, the property may be overridden according to global configs. Definition at line 1227 of file Block.php. References $res, RequestContext\getMain(), isCreateAccountBlocked(), isEmailBlocked(), isSitewide(), null, and User. Referenced by chooseBlock(), and User\isBlockedFromCreateAccount(). Checks if a block applies to a particular title. Definition at line 2037 of file Block.php. References as, getRestrictions(), and isSitewide(). Determine whether the block allows the user to edit their own user talk page. This is done separately from Block::appliesToRight because there is no right for editing one's own user talk page and because the user's talk page needs to be passed into the Block object, which is unaware of the user. This method does not validate that the passed in talk page belongs to the block target since the target (an IP) might not be the same as the user's talk page (if they are logged in). Definition at line 1991 of file Block.php. References appliesToNamespace(), appliesToPage(), RequestContext\getMain(), isSitewide(), isUsertalkEditAllowed(), and NS_USER_TALK. If there are multiple exact or range blocks at the same level, the one chosen is random This should be used when $blocks where retrieved from the user's IP address and $ipChain is populated from the same IP address information. Definition at line 1517 of file Block.php. References appliesToRight(), as, false, isHardblock(), null, and IP\toHex(). Referenced by User\getBlockedStatus(), and BlockTest\testBlocksOnXff(). WebResponse $response The response on which to unset the cookie. Definition at line 1809 of file Block.php. Retroactively autoblocks the last IP used by the user (if it is a user) blocked by this Block. This will use the recentchanges table. Definition at line 757 of file Block.php. References $dbr, $options, $res, $wgPutIPinRC, as, DB_REPLICA, doAutoblock(), getTarget(), getType(), ActorMigration\newMigration(), wfDebug(), and wfGetDB(). Delete the row from the IP blocks table. Definition at line 515 of file Block.php. References DB_MASTER, getId(), wfGetDB(), and wfReadOnly(). Check if a block has expired. Delete it if it is. Definition at line 937 of file Block.php. Autoblocks the given IP, referring to this Block. string $autoblockIP The IP to autoblock. int|bool Block ID if an autoblock was inserted, false if not. 
Definition at line 856 of file Block.php. References $mId, getBlocker(), getExpiry(), getHideName(), getReason(), getRestrictions(), getSystemBlockType(), getTarget(), isAutoblocking(), isCreateAccountBlocked(), isSitewide(), isUsertalkEditAllowed(), null, plain, Hooks\run(), wfDebug(), wfMessage(), and wfTimestampNow(). Definition at line 734 of file Block.php. References getTarget(), getType(), isAutoblocking(), Hooks\run(), and wfDebug(). Referenced by insert(), and update(). Check if two blocks are effectively equal. Definition at line 292 of file Block.php. References getExpiry(), getHideName(), getReason(), getRestrictions(), isAutoblocking(), isCreateAccountBlocked(), isEmailBlocked(), isHardblock(), isSitewide(), and isUsertalkEditAllowed(). Find Restriction by type and value. Definition at line 2110 of file Block.php. References $type, $value, as, getRestrictions(), and null. Referenced by appliesToNamespace(), and appliesToPage(). Get/set a flag determining whether the master is used for reads. Definition at line 1133 of file Block.php. Get a timestamp of the expiry for autoblocks. Definition at line 1346 of file Block.php. Definition at line 718 of file Block.php. References getBlocker(), getHideName(), getReason(), CommentStore\getStore(), isCreateAccountBlocked(), isSitewide(), isUsertalkEditAllowed(), and ActorMigration\newMigration(). Get the user who implemented this block. User User object. May name a foreign user. Definition at line 1755 of file Block.php. Referenced by doAutoblock(), getAutoblockUpdateArray(), getBlockErrorParams(), getBy(), getByName(), getDatabaseArray(), and insert(). Get block information used in different block error messages. Definition at line 1896 of file Block.php. References $blocker, $lang, $link, getBlocker(), getByName(), getExpiry(), getId(), IContextSource\getLanguage(), getReason(), IContextSource\getRequest(), getSystemBlockType(), getTarget(), IContextSource\getUser(), User\getUserPage(), MessageLocalizer\msg(), and string. Definition at line 1436 of file Block.php. References $rows, as, DB_MASTER, DB_REPLICA, IP\isValid(), LIST_OR, IP\toHex(), and wfGetDB(). Get the user id of the blocking sysop. Definition at line 1041 of file Block.php. Get the username of the blocking sysop. Definition at line 1050 of file Block.php. Referenced by getBlockErrorParams(), and ApiQueryUserInfo\getBlockInfo(). Get the BlockID cookie's value for this block. This is usually the block ID concatenated with an HMAC in order to avoid spoofing (T152951), but if wgSecretKey is not set will just be the block ID. string The block ID, probably concatenated with "!" and the HMAC. Definition at line 1822 of file Block.php. References getId(), RequestContext\getMain(), and MWCryptHash\hmac(). Definition at line 683 of file Block.php. References $forcedTargetID, $mParentBlockId, Wikimedia\Rdbms\IDatabase\encodeExpiry(), getBlocker(), getExpiry(), getHideName(), User\getId(), getRangeEnd(), getRangeStart(), getReason(), CommentStore\getStore(), isAutoblocking(), isCreateAccountBlocked(), isEmailBlocked(), isHardblock(), isSitewide(), isUsertalkEditAllowed(), ActorMigration\newMigration(), string, and Wikimedia\Rdbms\IDatabase\timestamp(). Get the block expiry time. Definition at line 1709 of file Block.php. Referenced by doAutoblock(), equals(), getBlockErrorParams(), ApiQueryUserInfo\getBlockInfo(), getDatabaseArray(), isExpired(), and setCookie(). Get whether the block hides the target's username. Definition at line 1104 of file Block.php. 
Referenced by doAutoblock(), equals(), getAutoblockUpdateArray(), and getDatabaseArray(). Definition at line 1058 of file Block.php. Referenced by delete(), getBlockErrorParams(), ApiQueryUserInfo\getBlockInfo(), getCookieValue(), BlockTest\testInsertExistingBlock(), BlockTest\testRestrictionsFromDatabase(), update(), and updateTimestamp(). Get the stored ID from the 'BlockID' cookie. The cookie's value is usually a combination of the ID and a HMAC (see Block::setCookie), but will sometimes only be the ID. string $cookieValue The string in which to find the ID. int|null The block ID, or null if the HMAC is present and invalid. Definition at line 1845 of file Block.php. References false, RequestContext\getMain(), MWCryptHash\hmac(), and null. Referenced by User\getBlockFromCookieValue(), and UserTest\testAutoblockCookies(). Get the component of an IP address which is certain to be the same between an IP address and a rangeblock containing that IP address. Definition at line 454 of file Block.php. Get the key and parameters for the corresponding error message. Definition at line 1872 of file Block.php. References $params, getBlockErrorParams(), getSystemBlockType(), isSitewide(), and null. Return the tables, fields, and join conditions to be selected to create a new block object. Definition at line 260 of file Block.php. Referenced by CleanupBlocks\execute(), and BlockTest\testNewFromRow(). Get a set of SQL conditions which will select rangeblocks encompassing a given range. Definition at line 421 of file Block.php. References $dbr, DB_REPLICA, LIST_AND, null, and wfGetDB(). Get the IP address at the end of the range in Hex form. Definition at line 1022 of file Block.php. References list, IP\parseRange(), IP\toHex(), and type. Get the IP address at the start of the range in Hex form. Definition at line 1003 of file Block.php. Get the reason given for creating the block. Definition at line 1084 of file Block.php. Referenced by doAutoblock(), equals(), getAutoblockUpdateArray(), getBlockErrorParams(), ApiQueryUserInfo\getBlockInfo(), and getDatabaseArray(). Definition at line 1328 of file Block.php. References Html\element(), getTarget(), text, and wfMessage(). Getting the restrictions will perform a database query if the restrictions are not already loaded. Definition at line 1937 of file Block.php. Referenced by appliesToTitle(), doAutoblock(), equals(), and findRestriction(). Get the system block type, if any. Definition at line 1123 of file Block.php. Referenced by appliesToPasswordReset(), doAutoblock(), getBlockErrorParams(), ApiQueryUserInfo\getBlockInfo(), getPermissionsError(), and insert(). Get the target for this particular Block. Note that for autoblocks, this returns the unredacted name; frontend functions need to call $block->getRedactedName() in this situation. Definition at line 1699 of file Block.php. Referenced by defaultRetroactiveAutoblock(), doAutoblock(), doRetroactiveAutoblock(), getBlockErrorParams(), getRedactedName(), getTargetAndType(), and isValid(). Get the target and target type for this particular Block. Definition at line 1689 of file Block.php. Get the timestamp indicating when the block was created. Definition at line 1729 of file Block.php. Referenced by ApiQueryUserInfo\getBlockInfo(), and updateTimestamp(). Get the type of target for this particular block. Autoblocks have whichever type corresponds to their target, so to detect if a block is an autoblock, we have to check the mAuto property instead. Definition at line 1676 of file Block.php. 
Referenced by defaultRetroactiveAutoblock(), ApiBase\dieBlocked(), doRetroactiveAutoblock(), getTargetAndType(), isAutoblocking(), isHardblock(), SpecialBlock\maybeAlterFormDefaults(), and shouldTrackWithCookie(). Given a database row from the ipblocks table, initialize member variables. Definition at line 468 of file Block.php. References DB_REPLICA, CommentStore\getStore(), isAutoblocking(), isCreateAccountBlocked(), isEmailBlocked(), isHardblock(), isSitewide(), isUsertalkEditAllowed(), User\newFromAnyId(), null, setBlocker(), setExpiry(), setHideName(), setReason(), setTarget(), setTimestamp(), text, wfGetDB(), and wfTimestamp(). Referenced by newFromRow(), and newLoad(). Will fail if there is a conflicting block (same name and options) already in the database. Definition at line 543 of file Block.php. References $mId, $wgBlockDisablesLogin, DB_MASTER, doRetroactiveAutoblock(), getBlocker(), getDatabaseArray(), getSystemBlockType(), insert, null, setId(), wfDebug(), and wfGetDB(). Referenced by LocalIdLookupTest\addDBData(), ApiUnblockTest\setUp(), ApiBlockTest\testBlockByBlockedUser(), ApiUserrightsTest\testBlockedWithoutUserrights(), ApiUserrightsTest\testBlockedWithUserrights(), ApiQueryBlocksTest\testExecuteBlock(), ApiQueryBlocksTest\testExecuteRestrictions(), ApiQueryBlocksTest\testExecuteSitewide(), BlockTest\testNewFromRow(), ApiUnblockTest\testUnblockSelfWhenBlocked(), and ApiUnblockTest\testUnblockWhenBlocked(). Definition at line 1155 of file Block.php. Referenced by __construct(), doAutoblock(), doRetroactiveAutoblock(), equals(), getDatabaseArray(), initFromRow(), shouldTrackWithCookie(), and update(). Get or set the flag indicating whether this block blocks the target from creating an account. Definition at line 1187 of file Block.php. Referenced by __construct(), appliesToPasswordReset(), appliesToRight(), doAutoblock(), equals(), getAutoblockUpdateArray(), getDatabaseArray(), and initFromRow(). Get or set the flag indicating whether this block blocks the target from sending emails. Definition at line 1200 of file Block.php. Referenced by __construct(), appliesToRight(), equals(), getDatabaseArray(), and initFromRow(). Definition at line 954 of file Block.php. References getExpiry(), wfDebug(), and wfTimestampNow(). Definition at line 1142 of file Block.php. References $isHardblock, getType(), true, and wfSetVar(). Referenced by __construct(), chooseBlock(), equals(), getDatabaseArray(), and initFromRow(). Indicates that the block is a sitewide block. This means the user is prohibited from editing any page on the site (other than their own talk page). Definition at line 1174 of file Block.php. Referenced by __construct(), appliesToNamespace(), appliesToPage(), appliesToRight(), appliesToTitle(), appliesToUsertalk(), ApiBase\dieBlocked(), doAutoblock(), equals(), getAutoblockUpdateArray(), ApiQueryUserInfo\getBlockInfo(), getDatabaseArray(), getPermissionsError(), initFromRow(), SpecialBlock\maybeAlterFormDefaults(), and prevents(). Get or set the flag indicating whether this block blocks the target from editing their own user talk page. Definition at line 1213 of file Block.php. Referenced by __construct(), appliesToUsertalk(), doAutoblock(), equals(), getAutoblockUpdateArray(), getDatabaseArray(), initFromRow(), and prevents(). Is the block address valid (i.e. since 1.33 No longer needed in core. Definition at line 971 of file Block.php. References getTarget(), null, and wfDeprecated(). Checks whether a given IP is on the autoblock whitelist. 
Definition at line 810 of file Block.php. References $cache, $line, $lines, array(), as, DB_REPLICA, IP\isInRange(), plain, wfDebug(), wfGetDB(), and wfMessage(). Definition at line 192 of file Block.php. References $dbr, $res, DB_REPLICA, null, and wfGetDB(). Referenced by User\getBlockFromCookieValue(), ApiUnblockTest\setUp(), BlockTest\testCrappyCrossWikiBlocks(), BlockTest\testINewFromIDReturnsCorrectBlock(), and BlockTest\testRestrictionsFromDatabase(). Definition at line 503 of file Block.php. Given a target and the target's type, get an existing Block object if possible. string | User | int | null $vagueTarget As above, but we will search for any block which affects that target (so for an IP address, get ranges containing that IP; and also get any relevant autoblocks). Leave empty or blank to skip IP-based lookups. Block|null (null if no relevant block could be found). The target and type of the returned Block will refer to the actual block which was found, which might not be the same as the target you gave if you used $vagueTarget! Definition at line 1397 of file Block.php. Referenced by BlockTest\addBlockForUser(), SpecialContributions\contributionsSub(), ApiBlockTest\doBlock(), ApiUnblock\execute(), ApiBlock\execute(), SpecialUnblock\execute(), User\getBlockedStatus(), MediaWiki\Auth\CheckBlocksSecondaryAuthenticationProviderTest\getBlockedUser(), ApiUnblockTest\getBlockFromParams(), Article\getRobotPolicy(), DeletedContributionsPage\getSubTitle(), User\isBlockedFromCreateAccount(), SpecialBlock\maybeAlterFormDefaults(), SpecialBlock\processForm(), SpecialUnblock\processUnblock(), EditPage\showIntro(), Article\showMissingArticle(), User\spreadBlock(), BlockTest\testBlockedUserCanNotCreateAccount(), ApiBlockTest\testBlockWithoutRestrictions(), ApiBlockTest\testBlockWithRestrictions(), MediaWiki\Auth\AuthManagerTest\testCheckAccountCreatePermissions(), BlockTest\testCrappyCrossWikiBlocks(), ApiEditPageTest\testEditWhileBlocked(), BlockTest\testINewFromTargetReturnsCorrectBlock(), SpecialBlockTest\testMaybeAlterFormDefaults(), SpecialBlockTest\testMaybeAlterFormDefaultsPartial(), ApiMoveTest\testMoveWhileBlocked(), SpecialBlockTest\testProcessForm(), SpecialBlockTest\testProcessFormExisting(), SpecialBlockTest\testProcessFormRestrictions(), SpecialBlockTest\testProcessFormRestrictionsChange(), and BlockTest\testT31116NewFromTargetWithEmptyIp(). Load a block from the database which affects the already-set $this->target: 1) A block directly on the given user or IP 2) A rangeblock encompassing the given IP (smallest first) 3) An autoblock on the given IP. User | string | null $vagueTarget Also search for blocks affecting this target. Doesn't make any sense to use TYPE_AUTO / TYPE_ID here. Leave blank to skip IP lookups. Definition at line 322 of file Block.php. References $res, as, DB_MASTER, DB_REPLICA, initFromRow(), list, LIST_OR, null, IP\parseRange(), string, IP\toHex(), type, and wfGetDB(). From an existing Block, get the target and the type of target. Note that, except for null, it is always safe to treat the target as a string; for User objects this will return User::__toString() which in turn gives User::getName(). Definition at line 1619 of file Block.php. References $target, IP\isValid(), IP\isValidRange(), User\newFromName(), null, IP\sanitizeIP(), and IP\sanitizeRange(). Referenced by BlockListPager\formatValue(), SpecialBlockList\getBlockListPager(), SpecialBlock\getTargetAndType(), and SpecialBlock\setParameter(). Get/set whether the Block prevents a given action. 
bool|null Null for unrecognized rights. Definition at line 1273 of file Block.php. References $res, RequestContext\getMain(), isSitewide(), isUsertalkEditAllowed(), null, User, and wfSetVar(). Purge expired blocks from the ipblocks table. Definition at line 1355 of file Block.php. References $fname, Wikimedia\Rdbms\IDatabase\addQuotes(), DeferredUpdates\addUpdate(), DB_MASTER, Wikimedia\Rdbms\IDatabase\delete(), Wikimedia\Rdbms\IDatabase\selectFieldValues(), Wikimedia\Rdbms\IDatabase\timestamp(), wfGetDB(), and wfReadOnly(). Return the list of ipblocks fields that should be selected to create a new block. Definition at line 216 of file Block.php. Set the user who implemented (or will implement) this block. Definition at line 1763 of file Block.php. References $user, User\isUsableName(), and User\newFromName(). Referenced by __construct(), and initFromRow(). Set the 'BlockID' cookie to this block's ID and expiry time. The cookie's expiry will be the same as the block's, to a maximum of 24 hours. WebResponse $response The response on which to set the cookie. Definition at line 1785 of file Block.php. References $expiryTime, false, getCookieValue(), getExpiry(), WebResponse\setCookie(), and wfTimestamp(). Set the block expiry time. Definition at line 1719 of file Block.php. Referenced by __construct(), initFromRow(), and updateTimestamp(). Set whether ths block hides the target's username. Definition at line 1114 of file Block.php. Definition at line 1068 of file Block.php. Set the reason for creating the block. Definition at line 1094 of file Block.php. Definition at line 1957 of file Block.php. Set the target for this block, and update $this->type accordingly. Definition at line 1747 of file Block.php. Set the timestamp indicating when the block was created. Definition at line 1739 of file Block.php. Check if the block should be tracked with a cookie. Definition at line 2132 of file Block.php. References RequestContext\getMain(), getType(), and isAutoblocking(). Update a block in the DB with new parameters. The ID field needs to be loaded first. Definition at line 620 of file Block.php. References $mId, $success, DB_MASTER, doRetroactiveAutoblock(), getAutoblockUpdateArray(), getDatabaseArray(), getId(), isAutoblocking(), null, wfDebug(), and wfGetDB(). Update the timestamp on autoblocks. Definition at line 979 of file Block.php. References DB_MASTER, getId(), getTimestamp(), setExpiry(), setTimestamp(), wfGetDB(), and wfTimestamp(). Definition at line 60 of file Block.php. Definition at line 63 of file Block.php. Definition at line 78 of file Block.php. Referenced by getBlocker(), and getBlockErrorParams(). Definition at line 69 of file Block.php. Definition at line 84 of file Block.php. Definition at line 81 of file Block.php. Definition at line 90 of file Block.php. Definition at line 39 of file Block.php. Definition at line 57 of file Block.php. Definition at line 42 of file Block.php. Definition at line 54 of file Block.php. Definition at line 45 of file Block.php. Definition at line 51 of file Block.php. Referenced by doAutoblock(), getId(), insert(), and update(). Definition at line 48 of file Block.php. Definition at line 33 of file Block.php. Definition at line 36 of file Block.php. Definition at line 93 of file Block.php. Definition at line 87 of file Block.php. Definition at line 66 of file Block.php. Referenced by getTarget(), and parseTarget(). After the block has been loaded from the database, this can only be USER, IP or RANGE. Definition at line 75 of file Block.php. 
Referenced by findRestriction(), and getType(). Definition at line 99 of file Block.php. Referenced by SpecialContributions\contributionsSub(), ApiBase\dieBlocked(), ApiUnblock\execute(), SpecialUnblock\execute(), SpecialBlockList\getBlockListPager(), SpecialUnblock\getFields(), DeletedContributionsPage\getSubTitle(), SpecialBlock\maybeAlterFormDefaults(), SpecialUnblock\processUnblock(), EditPage\showIntro(), and Article\showMissingArticle(). Definition at line 100 of file Block.php. Referenced by SpecialUnblock\execute(), and SpecialBlockList\getBlockListPager(). Definition at line 97 of file Block.php. Referenced by BlockTest\addXffBlocks(), SpecialUnblock\execute(), BlockListPager\formatValue(), User\getBlockFromCookieValue(), SpecialBlockList\getBlockListPager(), SpecialUnblock\getFields(), SpecialBlock\processForm(), SpecialUnblock\processUnblock(), and SpecialBlock\validateTarget(). Definition at line 98 of file Block.php. Referenced by BlockTest\addXffBlocks(), MediaWiki\Auth\AuthManager\checkAccountCreatePermissions(), SpecialContributions\contributionsSub(), SpecialUnblock\execute(), BlockListPager\formatValue(), User\getBlockFromCookieValue(), SpecialBlockList\getBlockListPager(), SpecialUnblock\getFields(), DeletedContributionsPage\getSubTitle(), SpecialBlock\maybeAlterFormDefaults(), SpecialBlock\processForm(), SpecialUnblock\processUnblock(), MediaWiki\Auth\CheckBlocksSecondaryAuthenticationProvider\testUserForCreation(), and SpecialBlock\validateTarget(). Definition at line 96 of file Block.php. Referenced by SpecialBlock\blockLogFlags(), ApiBlock\execute(), SpecialUnblock\execute(), BlockListPager\formatValue(), User\getBlockFromCookieValue(), SpecialBlockList\getBlockListPager(), SpecialUnblock\getFields(), SpecialBlock\processForm(), UserTest\testAutoblockCookieInfiniteExpiry(), UserTest\testAutoblockCookies(), UserTest\testAutoblockCookiesDisabled(), and SpecialBlock\validateTarget().
Banach spaces, function spaces, real functions, integral transforms, theory of distributions, measure theory. Functional characterization of local correlation matrices? Can the characteristic function of a Borel set be approached by a sequence of continuous function through a certain convergence in $L^\infty$? Is every pair of closed linear subspaces boundedly regular? How to prove the binary function uniformly boundary?
A short introduction to Sullivan models, with the Sullivan model of a free loop space and the detailed proof of the Vigué-Sullivan theorem on the Betti numbers of free loop spaces. Workshop on free loop spaces in Strasbourg, November 2008, scanned notes.

Talk given as part of the Journées d'Algèbre "Dualité et structures BV en algèbre et en topologie", in Clermont-Ferrand, June 2009. Abstract, scanned notes.

Talk given at the workshop Strings in Copenhagen, February 15-19, 2010. Scanned notes taken by a participant. Abstract: In our paper Batalin-Vilkovisky algebra structures on Hochschild Cohomology, we showed that the (dual of the) Hochschild homology of a Calabi-Yau algebra is a Batalin-Vilkovisky algebra. In On the Classification of Topological Field Theories, Jacob Lurie announced that the Hochschild homology of a Calabi-Yau algebra, more generally, has the structure of a closed topological conformal field theory.

Talk given at the workshop Algebraic Homotopy and its Applications, June 25-29, 2012. Scanned notes.

Extended talk given at the workshop String Topology and Related Topics, April 16, 2013. Notes taken by Richard Hepworth. Let $M$ be a simply-connected closed manifold. Chas and Sullivan have defined a product on the shifted homology of the free loop space, $\mathbb{H}_*(LM)$. Consider, over any field, the usual homological Eilenberg-Moore spectral sequence converging to $H_*(LM)$. Using results of Félix and Thomas, we show that this spectral sequence is multiplicative with respect to the Chas-Sullivan loop product and that its $E_2$-term is the Hochschild cohomology of $H^*(M)$. This gives a new method to compute the loop homology algebra of spheres and complex projective spaces. This is joint work with K. Kuribayashi and T. Naito.
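The statement in the last abstract can be displayed compactly; the coefficient field $k$ and the precise grading conventions below are my own (standard) guesses, not something spelled out in the abstract:
$$E_2 \;\cong\; HH^{*}\big(H^{*}(M;k),\,H^{*}(M;k)\big) \;\Longrightarrow\; \mathbb{H}_{*}(LM;k),$$
a multiplicative spectral sequence compatible with the Chas-Sullivan loop product on $\mathbb{H}_*(LM;k)$.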
I have come to think that answers that clearly required less than ten minutes of effort on the part of the answerer should just be left unposted. There are a few hit-and-run operators that make me want to introduce rate limitations for answerers as well. Six per day, 50 per month is generous enough. Focus on quality, not quantity. The answerers are not sitting in some exam here. After you have reached 10k by answering calculus, leave that arena to the noobs, please (unless you can add a genuinely different point of view to that rare question where a new pearl can be found).

I think this is an excellent idea. It would stop answering machines who answer any and all garbage questions, and it would stop general rep farming. Do you think we should rate limit answerers? Is this even possible with the software?

Actually, I do not agree with the suggestion. I assume the goal is to stop rep-farmers from providing low-quality answers. However, if you look at the first page of users (ordered by reputation this year) here, you will identify several users you encountered when reviewing the low-quality queue, but there are also quite a lot of users who provide great answers. These users easily provide more than 200 answers every month. I just don't see why writing more answers will in general decrease the quality of the answers (if you write 10 answers today, it doesn't mean some of those 10 answers are of lower quality; it doesn't even mean that you are answering low-quality questions). Instead of targeting users who give lots of answers, we should target those who give lots of low-quality answers. IMO they can be identified easily; we all have a list in mind. Just downvote and delete their low-quality answers.

I posted 55 answers in April 2018. Some of them were quick to write, but I should hope none of them are fit to be handed in as homework by the asker. Only two of them were on questions tagged calculus. I would be interested in knowing which of them you or Jyrki think should not have been posted. One example: "Can diagonal matrix $D \in M_n$ be permuted ($P$) to render a scaled identity matrix, i.e., $\alpha I_n = P D P^*$?", which has this answer, written in less than ten minutes. Is it not worth posting? This answer that I'm writing right now took less than ten minutes, and that's including looking at the most recent questions and seeing whether they have answers. Is it not worth posting?

Firstly, there are many questions that I can answer well in much less than 10 minutes. In fact, I very rarely spend that long on an answer. Secondly, this notion that we all care about our rep score is completely misplaced. Some of us think the whole reputation system is completely childish. In fact, I've come to the conclusion it reflects the US-centric culture of this site; the idea that you can motivate people by giving them useless points as a reward is a singularly American idea. So please don't assume that the only reason we are answering questions is because we value the brownie points.

Is the question limit removed?
You are working as a guide on a tour bus for retired people, and today you have taken your regular Nordic seniors to The Gate of Heavenly Peace. You let them have a lunch break where they could do whatever they liked. Now you have to get them back to the bus, but they are all walking in random directions. You try to intercept them and send them straight back to the bus. Minimize the time before the last person is in the bus. You will always be able to run faster than any of the tour guests, and they walk with constant speed, no matter what you tell them. The seniors walk in straight lines, and the only way of changing their direction is to give them promises of camphor candy. A senior will neither stop at nor enter the bus before being given such a promise.

There are a number of test cases (at most $10$), each consisting of:
A line with an integer $1 \leq n \leq 8$, the number of people on the tour.
A line with a floating point number $1 < v \leq 100$, your maximum speed (you start in the bus at the origin).
Then follow $n$ lines, each containing four floating point numbers $x_i$ $y_i$ $v_i$ $a_i$: the starting coordinates ($-10^6 \leq x_i, y_i \leq 10^6$), speed ($1 \leq v_i < 100$) and direction ($0 \leq a_i < 2\pi$) of each of the tour guests.
The input is terminated by a case with $n = 0$, which should not be processed. All floating point numbers in the input are written in standard decimal notation and have no more than $6$ digits.

For each test case, print a line with the time it takes before everybody is back in the bus (the origin). Round the answer to the nearest integer. The answer will never be larger than $10^6$.
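Since the statement fixes $n \leq 8$ and guarantees that the guide is strictly faster than every guest, one plausible approach is to brute-force the visiting order and intercept each guest as early as possible within that order. The sketch below assumes that this greedy-within-an-order strategy suffices; that is an assumption of mine, not something stated in the problem.

```python
import sys, math
from itertools import permutations

def intercept(px, py, gx, gy, vx, vy, v):
    # Earliest dt >= 0 at which a guide at (px, py) running at speed v
    # meets a guest currently at (gx, gy) moving with velocity (vx, vy):
    # solve |(gx - px + vx*dt, gy - py + vy*dt)| = v*dt for dt.
    rx, ry = gx - px, gy - py
    a = vx * vx + vy * vy - v * v      # strictly negative: the guide is faster
    b = 2 * (rx * vx + ry * vy)
    c = rx * rx + ry * ry
    disc = b * b - 4 * a * c           # non-negative since a < 0 and c >= 0
    return (-b - math.sqrt(disc)) / (2 * a)

def solve(v, guests):
    best = float("inf")
    for order in permutations(range(len(guests))):
        t, px, py, finish = 0.0, 0.0, 0.0, 0.0   # guide starts at the bus (origin)
        for i in order:
            x, y, gv, ang = guests[i]
            vx, vy = gv * math.cos(ang), gv * math.sin(ang)
            dt = intercept(px, py, x + vx * t, y + vy * t, vx, vy, v)
            t += dt
            px, py = x + vx * t, y + vy * t                  # meeting point
            finish = max(finish, t + math.hypot(px, py) / gv)  # walk back to origin
        best = min(best, finish)
    return best

def main():
    data = sys.stdin.read().split()
    pos, out = 0, []
    while True:
        n = int(data[pos]); pos += 1
        if n == 0:
            break
        v = float(data[pos]); pos += 1
        guests = [tuple(map(float, data[pos + 4 * i: pos + 4 * i + 4])) for i in range(n)]
        pos += 4 * n
        out.append(str(int(round(solve(v, guests)))))
    print("\n".join(out))

if __name__ == "__main__":
    main()
```

With $n \leq 8$ there are at most $8! = 40320$ visiting orders per test case, so the brute force over orders is cheap; whether intercepting each guest as early as possible within a fixed order is truly optimal is left here as an unverified assumption.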
We study an ultracold atom-cavity coupling system. The model is described by a noninteracting Bose-Einstein condensate contained in a Fabry-Pérot optical resonator, in which two incommensurate standing-wave modes are excited and thus form a quasiperiodic optical lattice potential for the atoms. A constant external force, provided by the atoms' gravity, is exerted by tilting the cavity. Special emphasis is paid to the variation of the cavity dynamics and of the atomic Bloch oscillations as modulated by the cavity light field. We also perform numerical simulations using mean-field theory, the adiabatic approximation and the split-operator method, which show a disturbance to the periodicity of the dynamics of the cavity and the atoms, because the superposition of two standing waves with different frequencies breaks the periodicity of the potential. Additionally, we give an analysis from the point of view of the lattice band structure. By coupling the cavity standing-wave mode to the atomic centre-of-mass motion, the dynamics of the intra-cavity photon number and of the transmitted light are strongly affected by the atomic dynamics, so they can serve as a probe of the atomic dynamics.
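The abstract mentions a split-operator (split-step) simulation of a noninteracting condensate in a tilted, two-colour quasiperiodic lattice. As an illustration only, here is a minimal single-particle split-step sketch of that kind of setup; every number in it (lattice depths, wave numbers, tilt, grid size) is a placeholder of mine rather than a value from the paper, and the cavity back-action itself is not modelled.

```python
import numpy as np

# Minimal split-step evolution of a 1D wave packet in a tilted bichromatic
# (quasiperiodic) lattice, with hbar = m = 1. All parameters are illustrative.
L, N = 80.0, 2048
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

V1, V2, F = 4.0, 1.0, 0.05                 # two lattice depths and a constant tilt
k1, k2 = 1.0, (np.sqrt(5) - 1) / 2         # incommensurate wave numbers
V = V1 * np.cos(k1 * x) ** 2 + V2 * np.cos(k2 * x) ** 2 + F * x

psi = np.exp(-x ** 2 / (2 * 5.0 ** 2)).astype(complex)   # broad Gaussian start
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * (L / N))

dt, steps = 0.005, 4000
half_V = np.exp(-1j * V * dt / 2)          # half step in the potential
full_K = np.exp(-1j * (k ** 2 / 2) * dt)   # full kinetic step in Fourier space
for _ in range(steps):
    psi = half_V * psi
    psi = np.fft.ifft(full_K * np.fft.fft(psi))
    psi = half_V * psi

com = np.sum(x * np.abs(psi) ** 2) * (L / N)   # centre of mass; its drift/oscillation
print(com)                                     # is the Bloch-oscillation-like signal
```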
I live in Bozeman, Montana, as an assistant professor of mathematics at Montana State University. Previously, I was a postdoc at the University of Southern California, Harvard University, and the University of Copenhagen. I received my PhD through Stanford University under the supervision of Ralph Cohen. See my abbreviated Curriculum Vitæ for specifics.

Much of my research is framed by the following paradigm. Invariants of locally standard entities -- such as manifolds, links, or (derived) schemes with very few points -- that possess a nuanced local-to-global expression can be constructed from (higher-)algebraic, or (higher-)categorical, data. The result, factorization homology, is an association of an object, such as a vector space, to such an entity and such a (higher-)algebra/category. Deformations of the algebraic/categorical input are organized as a likewise algebra/category -- this is Koszul duality. Through factorization homology, Poincaré duality intertwines with Koszul duality, thereby offering unforeseen identities of, and within, the values of factorization homology.

A factorization homology primer (with John Francis; to appear in The Homotopy Handbook). We give a consolidated account of the theory of, examples/instances of, and general features of factorization homology ("alpha"). This includes discussing relationships with factorization algebras, defects, filtrations, and Poincaré/Koszul duality.

Flagged higher categories (with John Francis; to appear in CBMS proceedings). We introduce flagged $(\infty,n)$-categories, as a model-independent characterization of Segal sheaves on Joyal's category $\Theta_n$. We indicate some compelling examples of such. This result can be interpreted as a non-linear instance of iterated Koszul duality.

The geometry of the cyclotomic trace (with Aaron Mazel-Gee and Nick Rozenblyum; last revised October 2017). Using enriched factorization homology and linearization, we give a construction of the cyclotomic trace from algebraic K-theory to topological cyclic homology. Essentially, for each derived scheme X we organize the derived loop space LX as a quasi-coherent sheaf over the stratified algebraic stack of the previous paper; for each vector bundle over X, "trace of monodromy" defines a global function on LX. Much of this paper surveys its supporting papers.

A naive approach to genuine G-spectra and cyclotomic spectra (with Aaron Mazel-Gee and Nick Rozenblyum; last revised October 2017). We interpret the $\infty$-category of cyclotomic spectra as that of quasi-coherent sheaves on a stratified algebraic stack over the sphere spectrum, with TC being global functions; we do similarly for genuine G-spectra. We do this by organizing Tate constructions as recollements, and use Glassman's definition of stratified stable categories.

Factorization homology of enriched $(\infty,1)$-categories (with Aaron Mazel-Gee and Nick Rozenblyum; last revised October 2017). We define factorization homology of enriched $(\infty,1)$-categories over oriented 1-manifolds (as well as directed graphs).

The bordism hypothesis (with John Francis; last revised August 2017). We supply a proof of the bordism hypothesis, using factorization homology.

Fibrations of $\infty$-categories (with John Francis; last revised February 2017). We give a model-independent account of fibrations among $\infty$-categories, with exponentiable fibrations playing a central role. We show that each notion of fibration is classified by an $\infty$-category.
This offers an operationally practical technique for making constructions in $\infty$-category theory, in a model-independent manner.

Factorization homology I: higher categories (with John Francis and Nick Rozenblyum; to appear in Advances in Mathematics). We define vari-framings, and develop factorization homology over vari-framed stratified manifolds with coefficients in higher categories. We prove that this construction embeds higher categories as invariants of vari-framed stratified manifolds.

A stratified homotopy hypothesis (with John Francis and Nick Rozenblyum; to appear in the Journal of the European Mathematical Society). We give a geometrically convenient model for $\infty$-categories using stratified spaces, and introduce some universal examples.

Poincaré/Koszul duality (with John Francis; to appear in Communications in Mathematical Physics). We articulate a duality among certain topological field theories that exchanges perturbative sigma-models and reduced state-sum models; this duality generalizes Poincaré duality as well as Koszul duality.

Zero-pointed manifolds (with John Francis; last revised July 2017). We give a convenient category of manifolds that is the home for dualities. We recover the Bar-coBar construction in this way, and prove a general version of non-abelian Poincaré duality.

Factorization homology of stratified spaces (with John Francis and Hiro Lee Tanaka; Selecta Mathematica (N.S.) 23 (2017), no. 1, 293-362). We define factorization homology over structured stratified spaces, and characterize such through excision.

Local structures on stratified spaces (with John Francis and Hiro Lee Tanaka; Advances in Mathematics 307 (2017), 903-1028). We develop a theory of stratified spaces and their moduli. We characterize local structures on them.

Factorization homology of topological manifolds (with John Francis; Journal of Topology 8 (2015), no. 4, 1045-1084). We classify excisive invariants of topological manifolds by way of factorization homology of disk-algebras.

Configuration spaces and $\Theta_n$ (with Richard Hepworth; Proceedings of the American Mathematical Society 142 (2014), no. 7, 2243-2254). We explain that the category $\Theta_n$ encodes configuration spaces of points in Euclidean n-space.

Counting bitangents with stable maps (with Renzo Cavalieri; Expositiones Mathematicae, volume 24, no. 4, pages 307-335). We use ideas from Gromov-Witten theory to do some enumerative geometry.

An approach to less climate-impactful conferences (with Lukas Brantner, Theo Johnson-Freyd, Andre Henriques, and Aaron Mazel-Gee; London Mathematical Society Newsletter, no. 480, pages 32-33). The aims: to decrease the environmental impact of cutting-edge mathematics conferences, and to decrease the financial (and other) burdens of travel to attend such conferences, thereby reaching a broader community of researchers.

Factorization homology II: adjoints (with John Francis and Nick Rozenblyum; to appear). We show that, in the presence of adjoints, factorization homology is naturally defined on solidly n-framed stratified manifolds.

The orthogonal group and adjoints (with John Francis; to appear). We amalgamate the Schubert stratifications of Grassmannians to combinatorialize the orthogonal group, as a group. We construct a lax-action of this combinatorial orthogonal group on n-categories. This action is implemented by adjoining adjoints.
Authors of accepted papers should read Final Manuscripts below. Authors who wish to present their work at ITW2014 must submit manuscripts with a concise description of their work. Submitted manuscripts should be of sufficient depth for review by experts in the field. A submission should contain a clear presentation of the merits of the research, including (i) the main contributions, (ii) why the contributions are of interest, and (iii) how the new contributions relate to prior work. The IEEE ITW 2014 proceedings will be published by IEEE, and the accepted papers will be available on IEEE Xplore. Regular papers will undergo a peer review process and accepted papers must be presented at the conference by one of the authors. To ensure a fair and timely review of the submissions, we ask that authors follow the guidelines below. IEEE reserves the right to exclude a paper from distribution after the conference (for example, removal from IEEE Xplore) if the paper is not presented at the conference.

Manuscript submission is on EDAS. Begin a new submission at http://edas.info/N17627. Minimum font size 10 points. No page numbers, no headers, no footers. Papers must be submitted as a PDF, in A4 paper format (i.e. not US Letter). The abstract should be limited to 300 words. The title and authors in EDAS must agree with the title and authors in the PDF file. Non-compliant manuscripts will not be included in the proceedings and IEEE Xplore. Further help is available at EDAS.

The format of the final version of the paper is the same as that of the initial submission, with font size no smaller than 10 points. PDF files should be verified for IEEE format compatibility at PDF eXpress Plus, using the conference code 32406XP (use your existing user account, or create a new one). Then, upload your final manuscript PDF via EDAS, using the link given in your acceptance notification e-mail. Uploads must be completed by 17 August 2014.

In order to be included in the final program and the proceedings, at least one author must register at a non-student rate for each accepted paper (with a maximum of three papers registered by the same author). The author registration deadline is also 17 August 2014. Accepted papers without registered authors will not appear in either the final program or the proceedings. In order to link your registration form to a paper, please use the paper's 10-digit EDAS identifier, for example 1569947323. You have to report this number in the registration form. Instructions on registration are available on the registration page.

An IEEE copyright form must be submitted for each accepted paper. Authors are requested to use EDAS electronic signing for the copyright form: the copyright transfer can be accomplished by clicking the blue (+) button next to 'Copyright form' in EDAS. This will take you to the IEEE electronic copyright center, where you can sign the form for your paper electronically. The EDAS abstract will be used in the printed ITW2014 Abstracts book, so authors should update any changes to the abstract in both the PDF file and the EDAS abstract. LaTeX notation (for example, $\alpha_i$) is allowed in the EDAS title and abstract.

Important information: If at least one author has not registered at a non-student rate before the deadline, the paper will be automatically withdrawn. Accepted papers must be presented by one of the authors. To be included in the proceedings and IEEE Xplore, papers must be compatible with the IEEE format; please use PDF eXpress Plus.
Note that the IEEE reserves the right to exclude a paper from distribution after the conference (for example, removal from IEEE Xplore) if the paper is not presented at the conference.
BJT stands for Bipolar Junction Transistor. It is a three-terminal electronic device constructed of doped semiconductor material and may be used in amplifying or switching applications.

Why does emitter current increase with increasing collector-to-base voltage in the common-base configuration of a BJT?
Why is the base-emitter voltage (Vbe) of a BJT not showing as 0.7 V in a Proteus simulation?
What kind of transistor turns on with 0.2 volts? I need a circuit to turn on an LED with 0.2 volts. I want to do it with a transistor as a switch. What kind of transistor will turn on at this voltage?
How can I calculate the base current of a transistor to get a specific emitter-collector voltage?
What's happening to this 3906?
Two cascaded transistors: where is Vout of a transistor when there is a shorted collector resistor?
Using an analogue multiplexer to control a motor controller?
Can the parameters $\alpha$ and $\beta$ be used for any BJT?
Is it possible to use an NPN BJT as a switch, from a single power source?
How do I calculate the collector current of this circuit with a bipolar junction transistor?
Common collector design with MMBT3904: will cutoff happen?
Can a BJT base-collector junction be safely used as a diode?
Which region is the transistor operating in?
BJT differential amplifier: What causes common-mode voltage gain?
What is the theorem behind the basic BJT biasing strategy (divide Vcc into three parts)?
Are bipolar logic families like ECL/CML currently used for radiation-hardened computing devices?
A question about understanding a BJT differential amplifier?
CE/CB BJT amplifier(s) + load lines!
Paper summary (davidstutz): Sharma and Chen provide an experimental comparison of different state-of-the-art attacks against the adversarial training defense by Madry et al. They consider several attacks, including the Carlini-Wagner attacks, elastic-net attacks, as well as projected gradient descent. Their experimental finding – that the defense by Madry et al. can be broken by increasing the allowed perturbation size (i.e., epsilon) – should not be surprising: every network trained adversarially will only defend reliably against attacks from the attacker used during training.

A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu. Towards deep learning models resistant to adversarial attacks. arXiv, 1706.06083, 2017.
N. Carlini and D. Wagner. Towards evaluating the robustness of neural networks. In IEEE Symposium on Security and Privacy (SP), 39-57, 2017.
P.Y. Chen, Y. Sharma, H. Zhang, J. Yi, and C.J. Hsieh. EAD: Elastic-net attacks to deep neural networks via adversarial examples. arXiv preprint arXiv:1709.04114, 2017.

Also find this summary at [davidstutz.de](https://davidstutz.de/category/reading/).

Abstract: The Madry Lab recently hosted a competition designed to test the robustness of their adversarially trained MNIST model. Attacks were constrained to perturb each pixel of the input image by a scaled maximal $L_\infty$ distortion $\epsilon$ = 0.3. This discourages the use of attacks which are not optimized on the $L_\infty$ distortion metric. Our experimental results demonstrate that by relaxing the $L_\infty$ constraint of the competition, the elastic-net attack to deep neural networks (EAD) can generate transferable adversarial examples which, despite their high average $L_\infty$ distortion, have minimal visual distortion. These results call into question the use of $L_\infty$ as a sole measure for visual distortion, and further demonstrate the power of EAD at generating robust adversarial examples.
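For illustration, here is a generic projected-gradient-descent attack under the $L_\infty$ ball of radius $\epsilon = 0.3$ that the competition used. This is a sketch of the standard PGD formulation, not of the EAD (elastic-net) attack discussed in the paper, and the model, step size and iteration count are placeholder choices of mine.

```python
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=0.3, alpha=0.01, steps=40):
    """Standard L-infinity projected gradient descent on inputs in [0, 1]."""
    # Random start inside the eps-ball, then iterate signed-gradient ascent steps.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Ascend the loss, then project back onto the eps-ball around x and into [0, 1].
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

# Usage sketch (hypothetical model and data):
# x_adv = pgd_linf(trained_mnist_model, images, labels, eps=0.3)
```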