What's the reason behind the current remaining the same after passing through a resistance? I've been wondering why this really happens. Intuitively, if electrons are driven by an EMF (ignoring the wire's resistance), $n$ coulombs would pass by a point per second until they encounter something that slows them down, and thus the rate of flow would change. Why does the current remain the same? One answer I saw somewhere that made sense to me is that the resistance indeed slows the electrons down, but the electrons lose some of their energy to compensate for the loss of velocity in a way that brings the current back to the constant value, and this loss of energy is called the voltage drop, which is why the voltage decreases across a resistance. Is this true?
Current is a measure of how much charge passes a given point (or cross section) of a wire per unit time. If the currents were not equal at all points in a simple circuit, there would have to be charges entering or exiting the circuit. This, however, does not happen. Water pipe analogy: current is something like the liters per minute that pass through a certain point. If there are no leaks or additional pipes joining in, there has to be the same water flow in liters per minute at each point.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/442719", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 5, "answer_id": 0 }
Rutherford gold experiment When an alpha particle approaches the gold atom's nucleus, it slows down due to electrostatic repulsion, right? But then why is the acceleration (or velocity) not a minimum at that point (the point where the alpha particle reverses its direction), and why is the speed a minimum? Isn't the resultant force on the alpha particle decreasing, which should cause its acceleration to be a minimum?
The electrostatic repulsion force becomes larger as the particles are brought closer together. Since force is proportional to acceleration, this must mean that the acceleration is at a maximum when the particles are at their closest distance. When something changes direction, the velocity vector changes direction. Therefore, at the point of changing directions the velocity is $0$.
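For anyone who wants to see this quantitatively, here is a minimal numerical sketch (purely classical, head-on approach; all values are illustrative rather than real experimental parameters) showing that at the turning point the speed is essentially zero while the acceleration is at its maximum:

```python
import numpy as np

# Head-on classical Coulomb repulsion of an alpha particle by a gold nucleus.
# Illustrative values only, not Rutherford's actual beam parameters.
k = 8.9875e9              # Coulomb constant, N m^2 C^-2
e = 1.602e-19             # elementary charge, C
q_alpha, q_gold = 2*e, 79*e
m_alpha = 6.64e-27        # kg

r, v, dt = 1.0e-12, -1.5e7, 1e-23   # start 1 pm away, moving toward the nucleus
rs, vs, accs = [], [], []
for _ in range(10_000_000):
    a = k*q_alpha*q_gold/(m_alpha*r**2)   # repulsive, directed away from the nucleus
    v += a*dt                             # semi-implicit Euler step
    r += v*dt
    rs.append(r); vs.append(v); accs.append(a)
    if r > 1.0e-12:                       # the particle has bounced back out
        break

i_turn = int(np.argmin(rs))               # step of closest approach
print("closest approach:", rs[i_turn], "m")
print("speed there     :", abs(vs[i_turn]), "m/s   (the minimum along the trajectory)")
print("acceleration    :", max(accs), "m/s^2 (the maximum, reached at closest approach)")
```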
{ "language": "en", "url": "https://physics.stackexchange.com/questions/443202", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
What are some good resources to learn Vector Spaces for Quantum Mechanics? I am currently using Shankar's Principles of Quantum Mechanics. I had no trouble understanding finite dimension vector spaces using it. But I find it difficult to understand infinite dimensional vector spaces using this book. What are some alternative resources that I can use?
The first chapter of Shankar's Quantum Mechanics contains a thorough introduction to the necessary linear algebra. I have found that learning linear algebra from a math textbook can be somewhat counterproductive, but I might be wrong.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/443498", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Could a microwave oven be tuned to defrost well? Typical microwave ovens do a lousy job of defrosting because liquid water absorbs their radiation far better than ice. So once a spot melts, it will quickly rise to cooking temperature while the rest of the food remains frozen. Would it be possible to build an oven that uses microwaves absorbed preferentially by ice instead, so it would defrost well? Such an oven would presumably be inefficient for cooking, but still valuable.
In order to “tune” a microwave oven to handle defrosting differently from cooking, I think you would need more than one microwave frequency, and I don't think that's going to happen since the FCC sets the frequency range and the frequency of 2450 MHz has become the industry standard. Even if the range permitted by the FCC allowed another frequency more favorable to ice, it would probably drive up the cost more than it would be worth to consumers. As far as I know, the defrost cycle on all microwave ovens involves varying the on and off times. Shorter on times and longer off times defrost more evenly. However, they also take longer to defrost, and users nowadays expect everything to happen quickly in a microwave oven. It winds up being a compromise. Hope this helps.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/443693", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Could quantum fluctuations spawn real matter? Would it be plausible for fluctuations in the QED vacuum to spawn actual matter (such as the quarks and electrons that constitute a hydrogen atom) given enough time and space?
In QED, too, total energy is conserved at all times! A difference between QED and classical electrodynamics is that the expression for the total energy is slightly altered. Classical electrodynamics has a kinetic energy $T$ and a potential energy $V$ arising from electromagnetic fields. In the classical theory, total energy is conserved, i.e. $T+V = const.$ But in quantum electrodynamics and other quantum field theories, you also have an additional zero-point energy $\hbar \omega$, which is responsible for e.g. the Casimir effect (uncharged plates attract each other when they are extremely close together). The frequency $\omega$ can be interpreted as how fast significant changes in the system take place. For a many-particle system with an extremely high collision frequency, the value of $\omega$ will also be high, altering the energy balance to $T+V + \hbar \omega = const.$ The last term on the left-hand side of this equation is also called the "self-energy" in quantum kinetic theory. This self-energy is a complex-valued quantity, where the real part describes the zero-point energy, while the imaginary part is inversely proportional to the lifetime of the excited state. Hence, a higher zero-point energy implies shorter lifetimes of excited states. Another example of the change of the effective energy/Hamiltonian due to quantum effects is shown e.g. in this paper: https://arxiv.org/abs/0706.1090
{ "language": "en", "url": "https://physics.stackexchange.com/questions/443802", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 4, "answer_id": 3 }
Why do we consider identical particles for Bose-Einstein condensation? Why do we consider identical particles, like identical composite bosons, for BEC? Why do we not consider non-identical particles of different masses, etc.?
The irreducible representations of the Poincaré group are labelled by the mass $m$ and the spin $s$. So as soon as you have particles with different masses, they are intrinsically different and obey their own Bose/Fermi/... statistical distribution. I assume here that you do not mean identical to be a synonym of indistinguishable. If you do, please clarify.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/444109", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Does a rock use up energy to maintain its shape? A rock sitting on land, the ocean floor, or floating in space maintains its shape somehow. Gravity isn't keeping it together because it is too small, so I'm assuming it is chemical or nuclear bonds keeping it together as a solid. If not it would simply crumble apart. So, what type of energy maintains the shape of a rock, where did this energy come from, and is it slowly dissipating? As a corollary, if a large rock is placed on top of a small rock, is the energy required to maintain the shape of the small rock 'used' at a greater rate?
Consider an answer by contradiction: Imagine the rock is in the vacuum of outer space with no energy able to be added to it. Suppose it does use energy to maintain shape. Then at some point, it will run out of energy and the shape will change. Now, since it is out of energy and can't change shape, isn't it now maintaining shape without energy?
{ "language": "en", "url": "https://physics.stackexchange.com/questions/444307", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 5, "answer_id": 3 }
Can light be compressed? What if we take a cylindrical vessel with an inside surface completely reflecting and attach a piston such that it is also reflecting. What will happen to light if we compress it like this?
Ideally, this is essentially the same as compressing a quantum gas of any other boson. Macroscopically, there is a pressure exerted by the photon gas on the walls of the chamber, so compressing the piston will take work and thus will increase the internal energy of the photon gas. Microscopically, by compressing the chamber, we are making the wavelengths of the supported modes shorter, and thus the frequency and energy of the photons in the chamber will increase. So either way, the internal energy of the photon gas will go up. The exact amount by which the internal energy increases depends on how the piston is compressed, e.g. adiabatically vs. diabatically. In the specific case where the piston is compressed adiabatically, the occupation of each mode of the chamber remains unchanged. So the light in the chamber gets "blue-shifted", but the number of photons in a given mode does not change. In summary, the light gets bluer (higher frequency).
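For a rough quantitative handle (assuming the cavity holds a thermal, isotropic photon gas and the compression is slow enough to be adiabatic), the standard blackbody relations give $$u=aT^4,\qquad P=\frac{u}{3},\qquad S\propto VT^3\;\Rightarrow\;VT^3=\text{const}\;\Rightarrow\;PV^{4/3}=\text{const},$$ so halving the volume raises the temperature (and hence the typical photon frequency) by a factor of $2^{1/3}\approx 1.26$ and the radiation pressure by $2^{4/3}\approx 2.5$.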
{ "language": "en", "url": "https://physics.stackexchange.com/questions/444407", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "37", "answer_count": 2, "answer_id": 1 }
Euler’s Equations of Motion for a Rigid Body and Inertial Forces Euler’s equations of motion for a rigid body can be interpreted as a rewriting of Newton’s second law for rotations in a rotating frame. They basically tell us that the sum of the torques equals the rate of change of the body’s angular momentum, in the rotating frame. Do we then not need to take into account inertial forces when computing the torques in rotating coordinates?
Do we then not need to take into account inertial forces when computing the torques in rotating coordinates? No, but there is an inertial torque you have to worry about. From the perspective of an inertial frame, the rotational analog of Newton's second law for rotation about the center of mass is $$\frac{d\boldsymbol L}{dt} = \sum_i \boldsymbol \tau_{\text{ext},i}\tag{1}$$ where $\boldsymbol L$ is the object's angular momentum with respect to inertial, $\boldsymbol \tau_{\text{ext},i}$ is the $i^\text{th}$ external torque, and the differentiation is from the perspective of the inertial frame. Note that this pertains to non-rigid objects as well as rigid bodies. The relationship between the time derivatives of any vector quantity $\boldsymbol q$ from the perspectives of co-located inertial and rotating frames is $$\left(\frac{d\boldsymbol q}{dt}\right)_\text{inertial} = \left(\frac{d\boldsymbol q}{dt}\right)_\text{rotating} + \boldsymbol \Omega \times \boldsymbol q\tag{2}$$ where $\boldsymbol\Omega$ is the frame rotation rate with respect to inertial. For a rigid body, the body's angular momentum with respect to inertial but expressed in body-fixed coordinates is $\boldsymbol L = \mathbf I\,\boldsymbol \omega$ where $\mathbf I$ is the body's moment of inertia tensor and $\boldsymbol \omega$ is the body's rotation rate with respect to inertial but expressed in body-fixed coordinates. Since a rigid body's inertia tensor is constant in the body-fixed frame, we have $$\left(\frac{d\boldsymbol L}{dt}\right)_\text{body-fixed} = \frac{d(\mathbf I \boldsymbol \omega)}{dt} = \mathbf I \frac{d\boldsymbol\omega}{dt}\tag{3}$$ Combining equations (1), (2), and (3) yields $$\mathbf I \frac{d\boldsymbol\omega}{dt} = \sum_i \boldsymbol \tau_{\text{ext},i} - \boldsymbol \omega\times(\mathbf I \, \boldsymbol \omega)\tag{4}$$ These are Euler's equations of motion for a rigid body. No inertial forces come into play. However, the term $-\boldsymbol \omega\times(\mathbf I \, \boldsymbol \omega)$ is essentially an inertial torque. Just as inertial forces vanish in inertial frames, so does this inertial torque.
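As a sanity check on equation (4), here is a small numerical sketch of the torque-free case (the inertia tensor and initial rate are arbitrary, made up only for illustration); the inertial-torque term $-\boldsymbol\omega\times(\mathbf I\,\boldsymbol\omega)$ is exactly what keeps the rotational kinetic energy and the magnitude of the angular momentum constant during the integration:

```python
import numpy as np

# Torque-free Euler equations  I dω/dt = -ω × (I ω)  in body-fixed axes.
I = np.diag([1.0, 2.0, 3.0])           # principal moments, kg m^2 (arbitrary)
I_inv = np.linalg.inv(I)
omega = np.array([0.1, 2.0, 0.1])      # initial body rate, rad/s (arbitrary)

def deriv(w):
    return I_inv @ (-np.cross(w, I @ w))

dt = 1e-3
for _ in range(50_000):                # classical RK4 integration
    k1 = deriv(omega)
    k2 = deriv(omega + 0.5*dt*k1)
    k3 = deriv(omega + 0.5*dt*k2)
    k4 = deriv(omega + dt*k3)
    omega = omega + dt*(k1 + 2*k2 + 2*k3 + k4)/6

print("kinetic energy 0.5 ω·Iω :", 0.5*omega @ I @ omega)       # stays at its initial value
print("|L| in body axes        :", np.linalg.norm(I @ omega))   # also conserved
```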
{ "language": "en", "url": "https://physics.stackexchange.com/questions/444621", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Wavelength and relativity From the de Broglie equation, λ=h/p. But p=mv, and velocity depends on the reference frame, so is the wavelength also relative? In other words, does the wavelength depend on the reference frame?
In other words, does the wavelength depend on the reference frame? Yes, but the variation of wavelength we're talking about here is not, as claimed in two other answers, the same as a standard Doppler effect. An electron, in its rest frame, has a wavelength of infinity, i.e., a wavenumber ($k=2\pi/\lambda$) of zero. There is no Doppler shift formula that is going to transform 0 to some finite wavenumber (or $\infty$ to some finite wavelength). If you measure a wavenumber of 0 in some frame, then you have essentially no information, and you cannot find out the wavenumber in some other frame without knowing some additional information, such as the mass of the particle. (In more formal mathematical language, a tensor that is zero in one frame is zero in all frames.) For a sound wave or a light wave, there is an observable quantity that tells you the amplitude. You can tell where there are nodes (amplitude=0), and measure the wavelength by finding the distance between them. Therefore the wavelength must have some knowable transformation law when you go from one frame to another. Not so for a wavefunction. The wavefunction is not observable. The wavefunction of an electron moving at a definite velocity does not have nodes that are at detectable points in space ($e^{ikx}$ is never zero). The wavelength does transform, but not according to any Doppler shift formulas. A particular wavetrain can be 3 wavelengths long according to one observer and 4 wavelengths long according to another (a situation that would be impossible with a sound or light wave).
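A quick numerical illustration of that last point (this is just the standard Lorentz transformation of the wave four-vector $(\omega/c,\,k)$, with an arbitrary boost speed of my choosing): start from the electron's rest frame, where $k=0$ and $\omega=mc^2/\hbar$, and boost; the finite wavenumber you get reproduces $\lambda=h/p$, whereas a light-like Doppler factor applied to $k=0$ gives back zero:

```python
import numpy as np

hbar = 1.054571817e-34   # J s
h    = 2*np.pi*hbar
m    = 9.1093837015e-31  # electron mass, kg
c    = 2.99792458e8      # m/s

k0, w0 = 0.0, m*c**2/hbar           # rest frame: zero wavenumber, Compton frequency
v      = 0.6*c                      # boost speed (arbitrary choice)
gamma  = 1/np.sqrt(1 - (v/c)**2)

k1 = gamma*(k0 + (v/c)*(w0/c))      # Lorentz-transformed wavenumber
p  = gamma*m*v                      # relativistic momentum in the boosted frame

print("λ from the boosted wavevector:", 2*np.pi/k1)
print("λ = h/p (de Broglie)         :", h/p)
print("Doppler factor applied to k=0:", np.sqrt((1 + v/c)/(1 - v/c))*k0)   # stays 0
```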
{ "language": "en", "url": "https://physics.stackexchange.com/questions/444937", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Reflection and transmission coefficients for wave function in $\delta$-potential Let's assume we have some one-dimensional delta potential $V(x)=V_0 \delta(x)$. Then I have found numerous problems where the approach for a wave function is $$\varphi(x)=\begin{cases}e^{ikx}+re^{-ikx},\ & x<0\\te^{ik'x},\ &x>0\end{cases}$$ I have two questions about this: * *The Schrödinger equation for this wave function outside of $x=0$ yields $\frac{\hbar^2 k'^2}{2m}=E=\frac{\hbar^2k^2}{2m}$. This means $k'=k$. Is this correct? Can we say that in general the wave vector $k$ must be the same if the wave propagates in the same potential (which outside of $x=0$ is just $V=0$)? And if not, why do we then have a legit approach where the reflected part and the incoming part of the WV in the area $x<0$ have the same wave vector? *By definition, for the transmission coefficient $T$ we have $T=\left|\frac{\varphi(\infty)}{\varphi(-\infty)}\right|^2=|t|^2$ which confuses me. Isn't $t$ already the coefficient of transmission? What else is $t$ if not the coefficient? And if it is the coefficient, what did I get wrong about the definition of $T$?
* *You have understood this aspect correctly. The bottom line is: $\psi(x)$ is claimed to satisfy the time independent Schrodinger equation, so if in doubt, plug it in and check that it does! *Transmission here is defined to be the ratio of two physically observable rates, namely $T = R$(transmit) $/ R$(incident) where $R$(incident) is the rate at which right-moving particles would be detected before the barrier if a detector were placed there, and $R$(transmit) is the rate at which right-moving particles would be detected after the barrier if a detector were placed there. These rates are proportional to the modulus-squared of the quantum amplitude associated with each plane wave, not the quantum amplitude itself, and they also involve a factor $k$ or $k'$ to account for the faster motion (higher flux) when the wavevector is high. To be precise, $$ T = \frac{ |t|^2}{ |1|^2 } \frac{k'}{k} $$ where I include the $|1|^2$ term to keep the logic clear (your incident wave has amplitude $1$) and the ratio of wavevectors obviously evaluates to $1$ when $k'=k$, but more generally this will not always happen. To understand this really fully you need to learn about the probability current or flux which is given by $$ {\bf j} = \frac{\hbar}{2 mi}(\Psi^* \nabla \Psi - \Psi \nabla \Psi^*) $$ (this expression can be related to the continuity equation which expresses the conservation of the number of particles, or if you prefer, the conservation of probability).
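If it helps, here is a small numerical sketch of point 2 for the $\delta$-barrier itself (units with $\hbar=m=1$ and the particular values of $V_0$ and $E$ are my own arbitrary choices): solve the two matching conditions at $x=0$, namely continuity $1+r=t$ and the derivative jump $\varphi'(0^+)-\varphi'(0^-)=2V_0\,\varphi(0)$, then check that $R+T=1$:

```python
import numpy as np

V0, E = 1.3, 2.0               # arbitrary illustrative values, with hbar = m = 1
k = np.sqrt(2*E)               # same k on both sides, since V = 0 away from x = 0

# Unknowns (r, t):   r - t = -1                  (continuity, 1 + r = t)
#                    ik r + (ik - 2 V0) t = ik   (derivative jump across the delta)
A = np.array([[1.0,  -1.0],
              [1j*k, 1j*k - 2*V0]], dtype=complex)
b = np.array([-1.0, 1j*k], dtype=complex)
r, t = np.linalg.solve(A, b)

R = abs(r)**2
T = abs(t)**2                  # times k'/k, which equals 1 here
print("r =", r, " t =", t)
print("R + T =", R + T)                         # 1, as flux conservation demands
print("closed form t = 1/(1 + i V0/k) :", 1/(1 + 1j*V0/k))
```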
{ "language": "en", "url": "https://physics.stackexchange.com/questions/445067", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is wave function collapse the only source of 'randomness' in QM? What about field fluctuations? Are these two even distinct? Basically I want to know the validity of the statement, "All randomness originates from wave function collapse" or maybe "The only true random event is the collapse of wavefunctions" This seemed to jive with me initially, but then I thought about the random fluctuations in underlying quantum fields, as well as the idea that the quantum fluctuations at the big bang combined with hyperinflation may have caused the uneven distribution of matter we see today. Those effects aren't due to wave function collapse, right? Are there more sources of randomness? Is there a general statement we can make about randomness and where it physically originates from?
It is important to understand that fields don't fluctuate. This is explored in the question Are vacuum fluctuations really happening all the time? (spoiler: the answer is no). The randomness you are talking about is due to measuring some quantity when the wavefunction is not an eigenstate of that quantity. For example suppose we are measuring energy. If our wavefunction is not an eigenstate of energy we can write it as a sum of energy eigenstates: $$ \Psi = a_1 \psi_1 + a_2 \psi_2 + a_3 \psi_3 + ~ ... $$ where the $\psi_i$ are the energy eigenfunctions. Then measuring the energy randomly collapses the wavefunction to one of these eigenstates $\psi_i$ with probability $\left| a_i \right|^2$. This is the random element in QM, and it applies to quantum fields in exactly the same way.
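A minimal numerical sketch of that statement (the amplitudes are arbitrary, the measurements are simulated): the only random ingredient is which eigenstate each measurement picks, with Born probabilities $\left|a_i\right|^2$:

```python
import numpy as np

a = np.array([0.6, 0.8j, 0.0])        # expansion coefficients a_1, a_2, a_3 (arbitrary)
p = np.abs(a)**2                      # Born probabilities, here 0.36, 0.64, 0
p = p/p.sum()                         # normalize, in case the state wasn't normalized

rng = np.random.default_rng(0)
outcomes = rng.choice(len(a), size=100_000, p=p)    # simulated energy measurements
print("empirical frequencies:", np.bincount(outcomes, minlength=len(a))/len(outcomes))
print("Born probabilities   :", p)
```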
{ "language": "en", "url": "https://physics.stackexchange.com/questions/445230", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Examples of central forces on the path of orbit? In solving a problem from Goldstein (3.13), I solved for multiple properties of a circular orbit with the attractive central force where the path of orbit crosses the point of the force (at origin). The solutions were simple enough to find, but what's been in the back of my mind is what type of physical system does this represent? I am used to Kepler type problems where the central force is located within the orbit and not on it. What system would this be applied to? Or is it merely an exercise?
Consider this scenario: a spring is connected to a bead, and the other end of the spring is attached to a point on a circular frame; the bead is constrained to move on that circular frame, and the attachment point of the spring on the frame is fixed and taken as the origin. HOPE THIS HELPS. Note that this problem is merely an exercise problem, and this spring system is just to give you a feel for it. But the type of attractive force depends on the question, so do not apply the mechanics of a spring to the problem; just do what is stated. There are infinitely many systems possible for this scenario.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/445323", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
Why are protons and neutrons the "right" degrees of freedom of nuclei? This question may sound stupid but why do we visualize nuclei as composed of a bunch of neutrons and protons? Wouldn't the nucleons be too close together to be viewed as different particles? Isn't the whole nucleus just a complicated low energy state of QCD?
We can measure the form factors of bound nucleons, for instance by doing quasi-elastic scattering of a proton out of the nucleus, $A(e,e'p)$, at low energy loss (my dissertation work involved this reaction for deuterium, helium, carbon and iron). The results are quite similar to (but measurably not identical to) the equivalent results on free protons. That similarity makes the choice of nucleons as the degrees of freedom a good starting point.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/445575", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24", "answer_count": 2, "answer_id": 1 }
How are the coefficients determined in the high temperature expansion of the 2D Ising model? I have been studying the 2D Ising model lately and have been looking at high and low temperatures. But I'm having problems when trying to understand the high temperature one. The final expansion looks like this: $$Z =(\cosh K)^{2N}2^{N}\sum (\tanh K)^{l}$$ with $$K = \beta J$$ I understand the part inside the sigma sums for all the possible closed loops, with $l$ being the length of the loop. When computing the expansion to the 8th order of $l$ the answer is (I'll use $\tanh K = \epsilon$ ): $$Z =(\cosh K)^{2N}2^{N}(1+ N\epsilon^4+2N\epsilon^6+N\frac{N+9}{2}\epsilon^8 + ...)$$ What I don't understand is how are the closed loops in the lattice counted.
I am certainly not the person on Physics SE with most expertise concerning lattice models, but since nobody has offered an answer yet, here is mine. As you have indicated, the partition function can be expressed as a high-temperature expansion involving closed loops or polygons (of nearest-neighbour interaction terms) on the square lattice. The essential point is that all the diagrams that do not involve an even number of lines at each vertex will cancel out to zero, once the summation over spins is carried out. The loops can be categorized according to number of lines: larger loops contribute at a higher order in the expansion. The counting of loops is explained clearly here and probably in several textbooks as well. I'm just going to reproduce some of the material in the table on p9 of that link (in case it disappears in future). For the smaller loops, you can do the counting "by hand". For larger ones, it's a numerical exercise best tackled on a computer. The smallest loops are of order 4 and 6. Here they are: For an $N$-spin system, assuming periodic boundaries so there are no "edge effects" which might restrict the placement of the loop, there are $N$ possibilities for the location of the 4-line loop. Just consider the number of options for the bottom left corner, for instance. This is the coefficient of the $\epsilon^4$ term. For the 6-line loop, there are again $N$ possible locations, but also it may take either of two orientations (horizontal or vertical). So the coefficient of the $\epsilon^6$ term is $2N$. For the 8-line loops, there are several arrangements. Here are two of them. The one on the left is two squares. Having placed the first one in any of the allowed $N$ positions, the second one can be placed in any of $N-5$ remaining positions. The $5$ excluded ones are directly on top of the first one, and in $4$ adjacent positions (north, south, east and west). Then there is a factor $2$ to account for the fact that the two loops are identical. So this gives a contribution $N(N-5)/2$. For the one on the right, there are $2N$ possibilities, similar to the 6-line loop seen before. There are still two more shapes to consider: The one on the left has $N$ possible locations, but for each one there are $4$ possible orientations, so the contribution is $4N$. The one on the right just has $N$ possible locations. Adding all these up gives the coefficient of $\epsilon^8$ as $$ \frac{N(N-5)}{2} + 2N + 4N + N = \frac{N(N+9)}{2} $$ The table in the linked chapter also gives the calculation for $\epsilon^{10}$, but I'll stop here.
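A quick symbolic check (pure book-keeping with sympy, using exactly the placement counts listed above) that the four 8-line contributions really add up to the quoted coefficient:

```python
import sympy as sp

N = sp.symbols('N', positive=True)

pair_of_squares   = N*(N - 5)/2   # two separated elementary squares
two_orientations  = 2*N           # the 8-line loop with two possible orientations
four_orientations = 4*N           # the 8-line loop with four possible orientations
single_shape      = N             # the remaining 8-line loop, one orientation

total = sp.expand(pair_of_squares + two_orientations + four_orientations + single_shape)
print(total)                                  # N**2/2 + 9*N/2
print(sp.simplify(total - N*(N + 9)/2) == 0)  # True
```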
{ "language": "en", "url": "https://physics.stackexchange.com/questions/445692", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
The minimum diameter of a sphere such that a cone may balance on it There is a solid sphere of diameter D, with a right circular cone placed on top of it. The cone has a height h and the diameter of the cone base is d, and d=h. Explain why the minimum value of D (diameter of sphere) must be d=h in order that the cone may still be balanced? I have attempted using moment of inertia for this question and then realised I may be better off using moments/torque, to see how the normal force changes with a changing diameter of sphere. I used the centre of the sphere as the centre of rotation, and based my work off the centre of mass being in the cone. Looking at other examples and explanations of torque and moments I could not figure out how to solve this particular problem in this manner. I also looked into calculating the restoring force for the cone but could not find any guidance regarding this that was not directly to do with pendulums. Any ideas on different methods of solving this would be welcome as well as anything which may be useful for the techniques I have tried to use.
HINT: Think about the potential energy $U(\theta)$ of the cone when the contact point is at the top of the sphere ($\theta = 0$) vs. when the contact point is at an angle $\theta$ from the vertical. If the cone is stable when $\theta = 0$, what can you say about the potential energy function $U(\theta)$ at that point?
{ "language": "en", "url": "https://physics.stackexchange.com/questions/445800", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Primary field in CFT and path integral I should feel ashamed to ask such a naive question, but anyway let me start with the $\phi^4$ theory in the Minkowski spacetime, which has a Lagrangian of the form $$\frac{1}{2}(\partial\phi)^2-\frac{1}{4!}g\,\phi^4$$ One says that it is scale invariant if under the transformation $x^\mu \rightarrow \lambda x^\mu$, the field $\phi$ transforms as $\phi(x)\rightarrow \frac{1}{\lambda}\phi(x)$. So when we consider QFT, we take the path integral of this Lagrangian over all the configurations of the field $\phi(x)$. But if we are interested in scale invariance of this theory, in the path integral formulation, do we only integrate over the configurations of $\phi$ which transform in the way $\phi(x)\rightarrow \frac{1}{\lambda}\phi(x)$? Similarly in QFT, a primary field transforms in a very specific way (like the rules of the tensor transformation). When we consider the corresponding quantum theory, in the path integral, do we only integrate over the field configurations of primary fields? Instead of integrating over all the fields which might not be primary (of the same type)!
To be brief, no you integrate over all field configurations. Field configurations are not operators they are ordinary functions that are summed over in the path integral. Primary fields are operators. They appear in correlation functions which involve an expectation value over all field configurations. The conformal invariance happens at the level of operators and correlation functions.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/445912", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Metric for 2D de Sitter? What is the correct metric to use for two dimensional de Sitter? If one starts with the following metric, which looks similar to de Sitter in 4 dimensions: $$ds^2 = -dt^2 + e^{2H t} dx^2,$$ one can calculate $R = 2H^2$, and $R_{00} = -H^2$, which gives the $\Lambda = 0$, which is not the solution one is looking for. What should be the correct metric to use for the same?
In two-dimensional spacetime, the Einstein tensor $R_{ab}-\frac{1}{2}g_{ab}R$ is identically zero, which explains why you get $\Lambda=0$. In any number $D$ of spacetime dimensions, including $D=2$, de Sitter spacetime can be constructed like this. Start with the $D+1$ dimensional Minkowski metric $$ -(\mathrm dX^0)^2+\sum_{k=1}^D(\mathrm dX^k)^2. \tag{1} $$ The submanifold defined by the condition $$ \sum_{k=1}^D(X^k)^2=L^2+(X^0)^2 \tag{2} $$ is $D$-dimensional de Sitter spacetime. The length parameter $L$ is related to the cosmological constant $\Lambda$ by $$ \Lambda = \frac{(D-2)(D-1)}{2L^2}. \tag{3} $$ This is equation (4) in "Les Houches Lectures on de Sitter Space". Setting $D=2$ recovers your result $\Lambda=0$. By the way, equations (13)-(14) in the same paper show how to derive the de Sitter metric in the form $$ -\mathrm dt^2+e^{2t}\sum_{k=1}^{D-1}(\mathrm dx^k)^2 $$ starting from equations (1)-(2). For $D=2$, this reduces to the form shown in the question.
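In case it is useful, here is a short sympy sketch that computes the curvature of the metric in the question from scratch and confirms $R_{00}=-H^2$, $R=2H^2$, and a vanishing Einstein tensor in $D=2$ (conventions assumed: $R^a{}_{bcd}=\partial_c\Gamma^a_{bd}-\partial_d\Gamma^a_{bc}+\dots$, $R_{bd}=R^a{}_{bad}$):

```python
import sympy as sp

t, x, H = sp.symbols('t x H', real=True)
coords = [t, x]
g = sp.diag(-1, sp.exp(2*H*t))           # ds^2 = -dt^2 + e^{2Ht} dx^2
ginv = g.inv()
n = 2

def Gamma(a, b, c):                      # Christoffel symbols Γ^a_{bc}
    return sp.Rational(1, 2)*sum(
        ginv[a, d]*(sp.diff(g[d, b], coords[c]) + sp.diff(g[d, c], coords[b])
                    - sp.diff(g[b, c], coords[d])) for d in range(n))

def Riem(a, b, c, d):                    # R^a_{bcd}
    expr = sp.diff(Gamma(a, b, d), coords[c]) - sp.diff(Gamma(a, b, c), coords[d])
    expr += sum(Gamma(a, c, e)*Gamma(e, b, d) - Gamma(a, d, e)*Gamma(e, b, c)
                for e in range(n))
    return sp.simplify(expr)

Ric = sp.Matrix(n, n, lambda b, d: sp.simplify(sum(Riem(a, b, a, d) for a in range(n))))
R = sp.simplify(sum(ginv[b, d]*Ric[b, d] for b in range(n) for d in range(n)))
G = (Ric - sp.Rational(1, 2)*g*R).applyfunc(sp.simplify)

print("Ricci tensor   :", Ric)   # diag(-H**2, H**2*exp(2*H*t))
print("Ricci scalar   :", R)     # 2*H**2
print("Einstein tensor:", G)     # the zero matrix
```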
{ "language": "en", "url": "https://physics.stackexchange.com/questions/446259", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How did Coulomb arrive at the value of the electron charge? The charge of one electron is known to be $1.6 \times 10^{-19}$ C, or alternatively, 1 Coulomb contains the charge of $6.24 \times 10^{18}$ electrons. I am just wondering if these numbers were arbitrarily chosen or were derived through some calculations.
1 Coulomb is defined as 1 As, where the Ampere is defined as a current producing a given amount of force between two ideal conductors, and the second is defined in multiples of the period of a transition in Cs. Since both definitions of Ampere and second are somewhat arbitrary, them combining to the numbers you have given is just as arbitrary. Starting next year we will have a new definition of the Ampere, which is actually based on how many electrons flow through a conductor per second, so then the Coulomb is more directly defined in terms of electrons, but the number itself is just as arbitrary.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/446419", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Can we ever "measure" a quantum field at a given point? In quantum field theory, all particles are "excitations" of their corresponding fields. Is it possible to somehow "measure" the "value" of such quantum fields at any point in the space (like what is possible for an electrical field), or the only thing we can observe is the excitations of the fields (which are particles)?
Quantum Fields can't be physical, you can see this from the Equivalence Theorem which states that if I have a quantum field $\Phi(x)$, I can perform a field redefinition in my action $\Phi(x)\rightarrow \Phi'(x) = f(\Phi(x))$, so that as long as $f(\Phi(x))$ satisfies some simple properties, all S-matrix elements (basically everything we can measure) are invariant. The value of the field can't possibly be an observable.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/446492", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 6, "answer_id": 5 }
Transformation of a Lagrangian $$L(\lambda,\mu,\dot{\lambda},\dot{\mu})=\frac{m}{2}(\lambda^2+\mu^2)(\dot{\lambda}^2+\dot{\mu}^2)-\alpha \lambda^2\mu^2,$$ I'm supposed to express this Lagrangian through $x=\lambda^2-\mu^2$ and $y=2\lambda\mu$. My first thought was to use $x+\mu^2=\lambda^2$ by putting it into the second equation, but then I get $y=2\mu\sqrt{x+\mu^2}$ and don't know how to proceed.
This is the answer that physshyp had in mind but felt like not writing down. Define the complex variables $\zeta = \lambda + i\, \mu$ and $z = x + i\, y$. Then \begin{align}\zeta^2 =& (\lambda + i\, \mu)^2= (\lambda + i\, \mu)(\lambda + i\, \mu) \\ =& \lambda^2 + i\, \lambda\, \mu + i \, \mu \, \lambda + (i\, \mu)^2 = \lambda^2 + 2\, i\, \lambda\, \mu - \, \mu^2 \\ =& (\lambda^2 - \mu^2) + i (2 \, \lambda \, \mu) \end{align} Consequently, since \begin{align} &x = \lambda^2 - \mu^2\\ &y = 2\, \lambda \mu \end{align} we have $$z = x + i\, y = (\lambda^2 - \mu^2) + i (2 \, \lambda \, \mu) = (\lambda + i\, \mu)^2 = \zeta^2$$ So in complex numbers, $$z = \zeta^2$$ Now, it is easy to differentiate the change of variables and get $$\dot{z} = 2\, \zeta\, \dot{\zeta}$$ Then, by taking absolute value squared of complex numbers $$|\dot{z}|^2 = 4\, |\zeta|^2\, |\dot{\zeta}|^2$$ If you expand in real coordinates, recalling the definition of absolute value squared of complex numbers $$\dot{x}^2 + \dot{y}^{2} = |\dot{z}|^2 = 4\, |\zeta|^2\, |\dot{\zeta}|^2 = 4 \, (\lambda^2 + \mu^2)\,(\dot{\lambda}^2 + \dot{\mu}^2)$$ The latter expression is the first term of the Lagrangian and, combined with the fact that $y = 2\, \lambda\, \mu$, we get the desired change of variables in the Lagrangian function $$L = \frac{m}{2}\, (\lambda^2 + \mu^2)\,(\dot{\lambda}^2 + \dot{\mu}^2) - \alpha\, (\lambda \, \mu)^2 = \frac{m}{2}\,\frac{1}{4}\, (\dot{x}^2 + \dot{y}^{2}) - \alpha \frac{1}{4}\, y^2 = \frac{m}{8}\, (\dot{x}^2 + \dot{y}^{2}) - \frac{\alpha}{4}\, y^2$$
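A quick sympy check of the final result (it just verifies the algebra above by substituting the change of variables back in):

```python
import sympy as sp

t = sp.symbols('t')
m, alpha = sp.symbols('m alpha', positive=True)
lam, mu = sp.Function('lam')(t), sp.Function('mu')(t)

x = lam**2 - mu**2
y = 2*lam*mu

L_new  = sp.Rational(1, 8)*m*(sp.diff(x, t)**2 + sp.diff(y, t)**2) - sp.Rational(1, 4)*alpha*y**2
L_orig = sp.Rational(1, 2)*m*(lam**2 + mu**2)*(sp.diff(lam, t)**2 + sp.diff(mu, t)**2) \
         - alpha*lam**2*mu**2

print(sp.simplify(L_new - L_orig))   # 0
```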
{ "language": "en", "url": "https://physics.stackexchange.com/questions/446642", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Are the Fermi-Dirac, Bose-Einstein and Boltzmann distributions all probabilities, or are they ways to get to probabilities? HyperPhysics has a page on the energy distribution functions (here); it says that each of the distributions is the probability that a particle has a certain energy state E, but other websites like this one say that Fermi-Dirac provides the probability. I interpret this as meaning that you can use the distribution function to get the probability, and that the distribution itself, $\bar n_{FD}={1\over e^{\beta (\epsilon - \mu)} +1}$, is not the probability. Is this the case? The same question applies to the other two types of distributions.
Start with the grand canonical partition function $Y$ and the microstate $r=(n_{p_1},n_{p_2},...)=\{n_p\}$: \begin{align} Y&=\sum_r\exp\left(-\beta\left(E_r\left(V,N_r\right)-\mu N_r\right)\right)\\ &=\sum_{n_{p_1}=0}^\infty\exp\left(-\beta\left(\epsilon_{p_1}-\mu\right)n_{p_1}\right)\cdot\sum_{n_{p_2}=0}^\infty\exp\left(-\beta\left(\epsilon_{p_2}-\mu\right)n_{p_2}\right)\cdot...\\ &=\frac{1}{1-\exp\left(-\beta\left(\epsilon_{p_1}-\mu\right)\right)}\cdot\frac{1}{1-\exp\left(-\beta\left(\epsilon_{p_2}-\mu\right)\right)}\cdot...\\ &=\prod_p\frac{1}{1-\exp\left(-\beta\left(\epsilon_{p}-\mu\right)\right)} \end{align} Now calculate the mean occupation number of the state with momentum $p_1$: \begin{align} \bar{n_{p_1}}&=\frac{1}{Y}\sum_rn_{p_1}\exp\left(-\beta\left(E_r-\mu N_r\right)\right)\\ &=...\\ &=\frac{1}{\exp\left(\beta\left(\epsilon_{p_1}-\mu\right)\right)-1} \end{align} This is the Bose-Einstein statistic, which gives you an idea of which occupation number is to be expected at a given energy / momentum. The Fermi-Dirac statistic can be derived accordingly and likewise gives a relation between the energy and the mean occupation number. I would not consider this a probability, but rather a distribution from which probabilities can be derived.
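A small numerical cross-check for a single bosonic mode (the values of $\beta$, $\epsilon$, $\mu$ are arbitrary, chosen with $\epsilon>\mu$): summing over occupation numbers directly reproduces the closed-form Bose-Einstein mean occupation:

```python
import numpy as np

beta, eps, mu = 1.0, 0.7, 0.2          # arbitrary, with eps > mu so the sum converges
n = np.arange(0, 2000)                 # truncated sum over occupation numbers
w = np.exp(-beta*(eps - mu)*n)         # weight of occupation n for this single mode

n_mean_sum = (n*w).sum()/w.sum()
n_mean_BE  = 1.0/(np.exp(beta*(eps - mu)) - 1.0)
print(n_mean_sum, n_mean_BE)           # agree to machine precision
```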
{ "language": "en", "url": "https://physics.stackexchange.com/questions/446773", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Having trouble making sense of Einstein's thought experiment So I was reading about Einstein's thought experiment where he tries to show that simultaneous events in one frame may not be simultaneous in another frame. So, in the given pic, light from B' reaches Mavis before light from A' and I get that because she is moving to the right. But what if Mavis had another way to check whether the lightning hits the two ends simultaneously or not? What if I have two clocks attached at the ends A' and B' and the two clocks are synchronized. Attached to the ends A' and B' are devices that record the time when the lightning hits them. So if the lightning hits both A' and B' simultaneously with respect to Stanley, then by looking at the recorded times from A' and B', can't Mavis also come to the same conclusion?
Stanley doesn't know about A' and B'. In his world, there are A and B. What does it mean for Stanley to "see the strike occur at A' and B' simultaneously?" If it means he can read Mavis's clocks and he sees that they read the same time (let's say $t=0$) when they are struck, then he sees the following: When the clock at A' reads $t=0$, lightning strikes it. At this point, the clock at B' has not reached the spatial coordinate of his B because the car is not long enough. Moreover, the clock at B' reads $t<0$. After some time, the front end of the car reaches B (and B') and lightning strikes it: Mavis's clock at B' reads $t=0$ (but her clock that was at A' has moved, and now reads $t>0$). Some more time passes and both flashes reach Mavis simultaneously (in all reference frames, since there is no spatial separation between the 2 events: rear flash reaches Mavis and front flash reaches Mavis). Stanley concludes that the rear flash occurred first, and that the spatial separation between the 2 flashes was greater than the length of the train car.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/447051", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Does a particle with infinite energy escape an infinite well? Currently, my modern physics class is going over particles in finite and infinite wells, general quantum formalism, and tunneling. What happens to a particle as it gains an infinite amount of energy? Does it stay inside of the infinite well? Does it escape? Can it not be determined? Does it depend? Are there any issues with this question? Is it valid? Is there anything I need to define or presume before I ask it? Do I need to define the rates at which the potential of the walls go to infinity, or the rate at which the particle's energy goes to infinity?
Particles and potential wells are in the framework of quantum mechanics. In this framework one cannot be talking of potential wells arbitrarily changing the particle's energy, because the energy is strictly defined by the solution of the quantum mechanical equation for the given potential. What happens to a particle as it gains an infinite amount of energy? Does it stay inside of the infinite well? Here is an example with specific boundary conditions of an infinite potential well using the time independent Schrodinger equation for the solutions. The particle can be in one of these states, where $n$ can go to infinity. The energy is on the y-axis. Taking the limit of $n$ to infinity, a level exists at each step, since the solution is a periodic function. Does a particle with infinite energy escape an infinite well? For this model, no. It will be caught in one specific value of $n$. There is no "outside" in this model. The issue is to accept that particles and potential wells belong to the quantum mechanical regime and the models have to follow specific rules. Do I need to define the rates at which the potential of the walls go to infinity, or the rate at which the particle's energy goes to infinity? One may model infinite potential wells in different ways, also time dependent, BUT the possible energy states of the particle are defined by the potential well and the boundary conditions; one cannot independently change the energy of the particle.
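For concreteness, the textbook energy levels of a one-dimensional infinite well of width $L$ are $$E_n=\frac{n^2\pi^2\hbar^2}{2mL^2},\qquad n=1,2,3,\dots$$ so the spectrum is an unbounded ladder of discrete levels: however much energy the particle is given, it sits on some rung $n$, and there is no continuum of states "outside" the well for it to escape into.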
{ "language": "en", "url": "https://physics.stackexchange.com/questions/447199", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Can a battleship float in a tiny amount of water? Given a battleship, suppose we construct a tub with exactly the same shape as the hull of the battleship, but 3 cm larger. We fill the tub with just enough water to equal the volume of space between the hull and the tub. Now, we very carefully lower the battleship into the tub. Does the battleship float in the tub? I tried it with two large glass bowls, and the inner bowl seemed to float. But if the battleship floats, doesn't this contradict what we learned in school? Archimedes' principle says "Any floating object displaces its own weight of fluid." Surely the battleship weighs far more than the small amount of water that it would displace in the tub. Note: I originally specified the tub to be just 1 mm larger in every direction, but I figured you would probably tell me when a layer of fluid becomes so thin, buoyancy is overtaken by surface tension, cohesion, adhesion, hydroplaning, or whatever. I wanted this to be a question about buoyancy alone.
The issue is just in your "definition" of displaced. When we say "the buoyant force is equal to the weight of the displaced fluid" (which is more true than it seems people are saying it is), displaced does not mean "how much fluid overflows out of our container" (unless we started with a full container). The displaced fluid really just means how much fluid is pushed out of the way. What this leads to is that whatever volume of the object is submerged under the fluid surface, this is the volume of the fluid displaced. If we were to calculate the weight of this volume of water, we would find that it is equal to the buoyant force exerted on the object. Therefore, in your example, if the volume of the boat that is submerged would give a volume of water that weighs the same as the boat, then the boat will float. How you get to this final configuration is irrelevant. As a counter example to using the idea of water spilling out of a container, just imagine a boat in the ocean, where no water is spilling out of a container, yet the boat still floats.
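A rough numerical illustration (all figures below are made up purely for the sake of the example, not real battleship dimensions):

```python
# Illustrative numbers only.
rho_water = 1000.0          # kg/m^3
g = 9.81                    # m/s^2

ship_mass = 50_000e3        # a 50,000-tonne ship, in kg
V_below_waterline = ship_mass/rho_water          # submerged volume needed to float
print("submerged volume needed:", V_below_waterline, "m^3")          # 50,000 m^3

wetted_area = 7000.0        # m^2 of hull facing the tub (made-up figure)
gap = 0.03                  # the 3 cm gap between hull and tub
water_in_tub = wetted_area*gap*rho_water
print("water actually in the tub:", water_in_tub/1000.0, "tonnes")   # ~210 tonnes

# The buoyant force depends on the volume the hull occupies below the waterline,
# not on how much water happens to remain in the tub:
print("buoyant force:", rho_water*g*V_below_waterline, "N")
print("ship's weight:", ship_mass*g, "N")
```

The point of the numbers: only a couple of hundred tonnes of water sit in the gap, yet the buoyant force still equals the ship's 50,000-tonne weight, because it is set by the submerged volume, not by the amount of water left in the container.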
{ "language": "en", "url": "https://physics.stackexchange.com/questions/448673", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "39", "answer_count": 8, "answer_id": 6 }
When is it more efficient to blow air over a wet laundry in order to dry it - when it's wetter or when it's drier? If I want to speed the drying of laundry and allocate for it one hour of fanning - should I use it just after I hang it to dry or several hours later? When we want to cool hot tea it's better to add the cold milk just before drinking it. Is it the same also in this case? Edit: I like your answers, all in favor of the "in the beginning" option. Is it always the better option in cases where doing nothing is also a solution, i.e. when we don't really have to invest energy in order to get the desired outcome (unlike the coffee case where the milk is usually colder than the air)?
This case is quite similar to the heating case in some ways, but not others. Thermal gradients behave quite similarly to the concentration gradients that drive evaporation. That is, the rate of evaporation is greater if the difference in concentration is greater. The rate is also increased by convection, the same as with heating. A big difference is what we are doing in this situation compared to adding milk to a warm cup. In that case, you have a low temperature mass you can add, and you are just looking for the right time to add it. For that to be equivalent here, we would need some sort of absorbent object to put into the mix, which is obviously not the case here. The analogous heat transfer question to ask here is "If I can only blow on the coffee once, should I blow on it right after I pour it, or right before I drink it?" Given that we are blowing for a set amount of time, and the greatest mass transfer occurs when the gradient is greatest and there is convection, to maximize the mass transfer over the time being blown, I would choose to blow on it when it is the wettest. This will, at the very least, get you closer to dry far earlier.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/448900", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
Can very very few photons form the EMWs? One maybe-interesting question, please! From the quantum point of view, the electromagnetic waves (EMWs) consist of photons. However, if there are only very very few photons, can they form a wave-like macro EM field? OR If a spherical monochromatic EMW (frequency $\nu$) propagates and decays to a very low level of energy flux density, e.g., for every square meter the energy flux is far less than $1\cdot h\nu$ per second, then do the EM fields still exist there? OR If the EMW is extremely weak (by the value of the corresponding energy flux density), can the electric field and magnetic field still exist in spacetime and still propagate in the shape of waves? Or, in this case, does the wave form only refer to the quantum wave function indicating the probability of where the photons appear? Thank you very much!
The smallest EM wave is generated by a single electron in an atom and carries a discrete amount of energy; we can call it a photon. This small EM wave tends to propagate in the one direction where its E and M fields are strongest; the solution to Maxwell's equations says the E and M fields are well confined to sinusoids in a certain direction. However, the wave function is a different function for the photon, and it describes a probabilistic nature, so that the photon has a small chance of being anywhere but the greatest chance of going in a straight direction. So for Q1, a few photons do form a field, but a localized one. For Q2 and Q3 I am not an expert, but I would say that the QM photon description is not the same as the Maxwell EM field, so the EM field is not measurable everywhere, though QM says it is possible.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/449021", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Light through a cylindrical fiber cable which has decreasing radius; one in the shape of a helix I thought of the idea during breakfast this morning, and it has been nagging me all day - so hopefully (probably) I will find some good answers here. I'm not a physics student (economics), so please be gentle! Suppose that you have a fiber cable (or similar with $\approx$ 100% reflectivity). At the start, light initially passes through a radius $d_i$ and then travels through the cable - which is shaped as a helix - and then passes out at the end of the cable, which has a radius $d_e$ where $d_e \ll d_i$. Basically, what will happen? Given perfect conditions (e.g. $d_e \rightarrow 0$) - shouldn't the light beam be intensified? All answers are appreciated!
The great advantage of fiber cables is that they are almost perfect waveguides: they turn corners, so a helical or any other geometry of the cable does not affect the light within, and that is why they are useful in communications. Only if the diameter of the cable became smaller would there be an intensification of the light. Think of a water pipe in your geometry: the shape is irrelevant for the pressure, which is constant as long as the diameter of the pipe is the same.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/449210", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why can't a particle penetrate an infinite potential barrier? I am studying basic quantum theory. My question is: Why can't a particle penetrate an infinite potential barrier? The reasoning that I have applied is that particles under consideration have finite energy. So, to cross an infinite potential barrier the particle requires infinite energy. But I cannot think of the mathematical relation between potential and energy so that indeed I am convinced that to cross an infinite potential barrier the particle needs infinite energy. What is the relation between the potential and energy of quantum mechanical particles?
The relation between the particle's wave function $\psi(x)$, potential $V(x)$ and energy is $$ E = \int dx\ \psi^*(x)\left(-\frac{\hbar^2}{2m}\psi''(x) + V(x)\psi(x)\right) \tag{$*$} $$ Suppose $V(x)$ is bounded from below and is equal to $+\infty$ on some interval $[x_1,x_2]$. If $\psi(x)\neq 0$ for $x\in[x_1,x_2]$, then the energy $E$ is infinite. The term containing the second derivative is always non-negative, so it cannot compensate this infinity. Update. This relation is well known in quantum mechanics. I didn't mention that the norm of a wave function is usually taken to be $1$: $$ \int dx\ \psi^*(x)\psi(x) = 1 $$ Under this condition the Schrodinger equation $$ -\frac{\hbar^2}{2m}\psi''(x) + V(x)\psi(x) = E\psi(x) $$ multiplied by $\psi^*(x)$ and integrated over $x$, gives the relation ($*$). The term $$ -\frac{\hbar^2}{2m}\int dx\ \psi^*(x)\psi''(x) $$ corresponds to the kinetic energy of a particle, so it must be non-negative. Indeed, integration by parts leads to the following manifestly non-negative expression $$ \frac{\hbar^2}{2m}\int dx\ \psi'^*(x)\psi'(x). $$ By the way, the quantity $\psi''(x)/\psi(x)$ can be either positive or negative.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/449333", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 1 }
How can two electrons repel if it's impossible for free electrons to absorb or emit energy? There is no acceptable/viable mechanism for a free electron to absorb or emit energy, without violating energy or momentum conservation. So its wavefunction cannot collapse into becoming a particle, right? How do 2 free electrons repel each other then?
It is true that the reactions $$e + \gamma \to e, \quad e \to e + \gamma$$ cannot occur without violating energy or momentum conservation. But that doesn't mean that electrons can't interact with anything! For example, scattering $$e + \gamma \to e + \gamma$$ is perfectly allowed. And a classical electromagnetic field is built out of many photons, so the interaction of an electron with such a field can be thought of as an interaction with many photons at once. There are plenty of ways a free electron can interact without violating energy or momentum conservation, so there's no problem here.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/449431", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 5, "answer_id": 4 }
Would a charge imbalance act like dark energy? I realize that there are theoretical reasons to reject the idea that the charges on electrons and protons may not be exactly equal and opposite; and I am not suggesting that they're not. Edited 12/20/18: However, I would like to know: if there were a very, very tiny imbalance between the electron's charge and the proton's charge (on the order of one part in $10^{36}$ or less), would it result in a cosmic expansion that resembles the expansion attributed to dark energy? The rationale is that all atoms would have a very slight net charge of the same sign if there were such an imbalance; and as a result there would be a very slight net repulsive electrostatic force between all nominally neutral atoms. If the imbalance were small enough, atoms should still form, gravity should be sufficient to bind most (nominally neutral) matter together on most scales, and most other phenomena should be as currently observed, but it seems that at cosmological distances there might be some observable effects.
I don't think this idea works, for the simple reason that the electromagnetic and gravitational forces scale the exact same way: they are both proportional to their respective 'charge' and inverse square. Since your idea implies most apparently neutral objects would have roughly the same charge to mass ratio (since it depends on just the electron density), if the repulsion effect were strong enough to beat gravity at cosmological scales, it would also beat gravity on everyday scales, because the ratio of the two would remain exactly the same. But we're not repelled from the Earth.
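A back-of-the-envelope version of this argument (the per-atom net charge $\varepsilon e$ on hydrogen atoms is hypothetical, and the separation cancels out of the ratio, which is exactly the scale-independence used above):

```python
# Ratio of electrostatic repulsion to gravitational attraction for two hydrogen
# atoms, each carrying a hypothetical net charge eps*e. Distance cancels out.
k_e = 8.9875e9      # N m^2 C^-2
G   = 6.674e-11     # N m^2 kg^-2
e   = 1.602e-19     # C
m_H = 1.674e-27     # kg

def ratio(eps):
    return k_e*(eps*e)**2/(G*m_H**2)

for eps in (1e-36, 1e-21, 1e-18):
    print(f"eps = {eps:.0e}:  F_electric/F_gravity = {ratio(eps):.2e}")
# Only around eps ~ 1e-18 does the repulsion rival gravity, and then it does so
# at every distance at once, on Earth just as much as at cosmological scales.
```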
{ "language": "en", "url": "https://physics.stackexchange.com/questions/449555", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
Violating Newton's First Law! Suppose you are inside a very large empty box in deep space, floating (i.e. not touching the box anywhere initially). The box is at complete rest. Now you push the box forward from inside. You would go backwards, but the box will move forward to conserve momentum. However, since you were inside the box, your force is an internal force, and yet the box would have moved forward. So doesn't this violate Newton's first law, as an internal force made a body move from a state of rest?
If you say that pushing against the box (and the box pushing against you) is an internal force, then that means that you and the box are considered to be two internal parts of a single object. The center of mass of this object does not move when you push against the box, since all forces are internal. So you will move in such a way that the center of mass of you and the box does not accelerate. The net force on this object is zero, and the object does not accelerate (if it accelerated, then its center of mass would also accelerate), so Newton's First Law is fulfilled.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/449699", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 6, "answer_id": 3 }
Is the period a physical observable in General Relativity? I am currently studying the classical tests of GR. To justify the introduction of a test based on the Doppler effect, the professor says that the previous test (the Shapiro radar-echo test) is based on non-physical parameters such as the radius of Earth's orbit, since it changes when the coordinates change. Furthermore, he introduces the period and eccentricity of the Earth's orbit (orbital parameters) because they are physical parameters (I suppose this means they don't change under a change of coordinates/metric). However, I don't see the difference with the radius, since making the change $t \rightarrow 2t$ seems to change the period by a factor of 2. The only thing I could imagine is that the transformed metric does not satisfy the Einstein equation anymore, but I don't think so. What have I misunderstood? Or am I right?
As Elio and others have written, the comment by your professor is not really very meaningful. I would rather like to write about the title of your question. Of course the period of revolution of the Earth is a physical observable; after all, you can measure it! Obviously different observers might measure different periods, but this does not mean anything special. Even energy has different values for different observers, but this does not mean that energy is not a physical observable. Adding in GR does not really change much. GR is not a crazy theory, but a very successful physical theory describing our universe. Even the Maya were able to measure the period of revolution of the Earth, so it would be amazing if GR could not do the same, only much, much better.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/449798", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
From where do electrons flow to make a bulb light? Suppose we have the "basic" stuff: a battery, 2 pieces of wire and a bulb. The battery has a potential difference. But from where do the electrons flow to make the bulb light? From the wire, from the battery, or from both? Also, if electrons flow from the battery and go through the wire (a conductor), then why does this not happen in insulators? Insulators don't give up electrons, but why don't they let electrons flow?
Electrons from the material in the entire circuit flow. Conductors are different from insulators because their atomic structure is made of a “sea of electrons” around the positive nuclei. These electrons are free to move from atom to atom in conductors and not in insulators. The why may be better explained by a chemist, but I believe it's a property that metals near the “center” of the periodic table have (I'm probably not technically right about this).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/450081", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Why can the pion decay into two photons? The neutral pion belongs to the pseudoscalar meson octet, so it has, in the ground state ($L=0$): \begin{align} P_{\pi^0}&=-1 \\ C_{\pi^0}&=+1. \end{align} And the photon has \begin{align} P_\gamma = -1 \\ C_\gamma=-1. \end{align} Therefore, since electromagnetic interactions conserve parity and charge conjugation, why does the process \begin{equation} \pi^0 \rightarrow \gamma\gamma \end{equation} occur? Doesn't it violate parity? In the example I have seen in class, $C$ conservation is used to explain why the $\pi^0$ cannot decay into three photons, since for $\pi^0 \rightarrow \gamma\gamma\gamma$ we have \begin{equation} C_i = +1 \neq C_f = (-1)^3 = -1 \end{equation} and, for $\pi^0 \rightarrow \gamma\gamma$, \begin{equation} C_i = +1 = C_f = (-1)^2 = +1, \end{equation} so regarding $C$ conservation it should be allowed. But, considering P conservation, \begin{align} \pi^0 \rightarrow \gamma\gamma \qquad \Rightarrow \qquad P_i = (-1)^{L}\times \underbrace{(-1)}_{\text{intrinsic parity}} = -1 \neq P_f = (-1)^2 = +1 \end{align} so it would be forbidden for $L=0$. And, with the same argument, the decay into three photons would be allowed. What am I missing?
The photons have intrinsic spin (or, better, helicity) one, so the pair can have odd orbital angular momentum while still conserving total angular momentum (which has to be zero, as the pion is spinless). Specifically, the spins of the two photons can combine to give total spin $S=1$. This, combined with an orbital angular momentum $L=1$, has a $J=0$ component which permits the pion to decay into two photons. You can check from the Clebsch-Gordan table that the final two-photon wavefunction is symmetric under particle permutation, as required by Bose statistics.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/450457", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
How to understand the two-point correlation function in momentum space? Let's take the Ising model as an example and study the two-point spin-spin correlation function: $$\langle s_0 s_r\rangle = \frac{\sum_{\{s_i\}}e^{K\sum_{\langle i ,j\rangle}s_i s_j} s_0 s_r}{\sum_{\{s_i\}}e^{K\sum_{\langle i ,j\rangle}s_i s_j} }.$$ At high temperature, i.e., when $K$ is small, the two-point correlation function decays exponentially: $$G(r)\equiv\langle s_0 s_r\rangle \sim \exp(-r/\xi).$$ In momentum space, the two-point correlation function becomes: $$G(k)\sim \frac{1}{k^2+ 1/\xi^2}.$$ I think that in real space the meaning of the correlation function is straightforward to understand, but how can one understand the form $$G(k)\sim \frac{1}{k^2+ 1/\xi^2}$$ in momentum space directly? What is the physical picture in momentum space?
If the spins are at positions $\bf R$, it is possible to define a $\bf k$-dependent collective variable $s_{\bf k}$ (Fourier component of the spin configuration) as: $$ s_{\bf k}=\sum_{\bf R} e^{i\bf k \cdot R}s_{\bf R} $$ (maybe with a normalization factor depending on the exact choice of definition). The k-space two-point correlation function is the Fourier transform of the spin-spin correlation function in r-space, $G({\bf R},{\bf R'})=\left< s_{\bf R}s_{\bf R'} \right>$, which, for a translationally invariant system, is also equal to $\left< s_{\bf 0}s_{\bf R'-R} \right>$, so that $$ G({\bf k})=\left< s^*_{\bf k}\,s_{\bf k} \right> = \left< s_{\bf k}\,s_{-\bf k} \right>. $$ From this formula, and taking into account that for non-zero wavevectors $s_{\bf k}$ can be interpreted as a fluctuation of the spin density, the physical meaning of $G({\bf k})$ is that of a correlation between density fluctuations of the same wavevector. It is an especially important quantity because it is possible to show that it is the most important factor, depending on the spin values and positions, of the neutron scattering cross section. Therefore, it provides a direct method to measure two-point correlations in real magnetic systems.
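One concrete way to see the connection is to Fourier transform the real-space form numerically. Below is a short Python sketch (my own, one-dimensional for simplicity, with an arbitrary correlation length): the transform of $e^{-|r|/\xi}$ comes out as the Lorentzian $2\xi/(1+k^2\xi^2)\propto 1/(k^2+1/\xi^2)$, so the momentum-space correlator simply says that correlated density fluctuations die off for wavevectors beyond $k\sim 1/\xi$.

```python
import numpy as np

xi = 2.0                                    # correlation length (arbitrary units)
r = np.linspace(-200, 200, 400001)          # wide real-space grid so exp(-|r|/xi) has fully decayed
dr = r[1] - r[0]
G_r = np.exp(-np.abs(r) / xi)               # real-space correlator ~ exp(-r/xi)

for k in (0.1, 0.5, 1.0, 2.0):
    G_k = np.sum(G_r * np.cos(k * r)) * dr          # 1-D Fourier transform (even integrand)
    lorentzian = 2 * xi / (1 + (k * xi) ** 2)       # = (2/xi) / (k^2 + 1/xi^2)
    print(k, round(G_k, 4), round(lorentzian, 4))   # the two columns agree
```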
{ "language": "en", "url": "https://physics.stackexchange.com/questions/450582", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Angular momentum in different points I have a question about angular momentum: Is it possible to have a system where angular momentum is conserved relative to one point, but not conserved relative to another?
Angular momentum relative to an origin ${\mathcal O_1}$ is $$ \mathbf{L_1} = \mathbf{r_1 \times p}$$ where $\mathbf r_1$ is the position vector of the particle relative to the origin ${\mathcal O_1}$; note that the momentum $\mathbf p$ does not depend on the choice of origin. Now suppose that angular momentum is conserved about ${\mathcal O_1}$. Then $$ \frac{d \mathbf L_1}{dt} = \mathbf{\dot{r}_1 \times p} + \mathbf{r_1 \times \dot{p}} = \frac{1}{m} \mathbf{p \times p} + \mathbf{r_1 \times \dot{p}} =0. $$ The first term vanishes identically, so conservation about ${\mathcal O_1}$ amounts to $$ \mathbf{r_1 \times F} =0 . $$ Now let's look at some other origin $\mathcal{O}_2$, given that $\mathbf L$ is conserved about $\mathcal O_1$. The first term must vanish again, that's fine, but what about the second term? Does $$\mathbf{r_2 \times F} \stackrel{?}{=}0. $$ Well, no, not necessarily. Namely, just choose an origin for which the force is perpendicular to your position vector.
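As a concrete numerical illustration of the last point, here is a short Python sketch (my own toy setup, not taken from the question): a particle on a circular orbit under a central force directed toward the origin $\mathcal O_1$. Its angular momentum about $\mathcal O_1$ stays constant, while about a displaced origin $\mathcal O_2$ it oscillates in time, because the torque $\mathbf r_2\times\mathbf F$ does not vanish there.

```python
import numpy as np

m, R, w = 1.0, 1.0, 2.0                  # mass, orbit radius, angular frequency (toy values)
d = 0.5                                  # second origin O2 = (d, 0), displaced from the force center

for t in np.linspace(0.0, 3.0, 7):
    r = np.array([R * np.cos(w * t), R * np.sin(w * t)])        # position, orbit centered on O1
    p = m * R * w * np.array([-np.sin(w * t), np.cos(w * t)])   # momentum
    L1 = r[0] * p[1] - r[1] * p[0]                              # L about O1: constant
    r2 = r - np.array([d, 0.0])
    L2 = r2[0] * p[1] - r2[1] * p[0]                            # L about O2: time dependent
    print(round(L1, 3), round(L2, 3))
```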
{ "language": "en", "url": "https://physics.stackexchange.com/questions/450722", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Is there a useful way to visualize the symmetries of the relativistic Riemann curvature tensor? I find it useful to see diagrams such as trees, colored 2D and 3D arrays, etc., which illustrate how terms combine in composite expressions. For example, the following is my visualization of the genesis of multinomial coefficients: I believe the field of mathematics to which I allude is called combinatorics. I'm wondering if anybody has developed useful visual aids illustrating how the various combinations of index values of the relativistic Riemann curvature tensor participate in symmetries. This is progress toward something along the lines of what I'm envisioning: The red text cells represent the completely antisymmetric constraint. The blue background cells are those which I haven't processed yet. The black text are the independent components in increasing index order. The light red background cells are permuted on the first and last pairs of indices. In the case of $R_{0\alpha\beta\gamma}$ one of the upper triangle red text cells can be expressed in terms of the other two. There are 18 black text cells and 3 upper triangle red text $R_{0\alpha\beta\gamma}$ cells. So, at least the number comes out right. By "relativistic" Riemann tensor, I mean the Riemann-Christoffel curvature tensor of the locally Minkowskian, pseudo-Riemannian 4-space of general relativity, having non-definite metric of signature $\pm2$ in local Riemann normal coordinates depending on the defined coordinate convention.
In Geroch's Differential Geometry notes (1972, ISBN 978-1927763063) page 60, he uses an octahedron to describe the symmetries of the Riemann Curvature Tensor. In his example, using the upper back triangular face, the sum of the terms at the vertices is $$R_{bdca}+R_{cdab}+R_{dacb} \stackrel{{}_{[ab][cd]}}{=}R_{dbac}+R_{dcba}+R_{dacb} \stackrel{{}_{a[bcd]}}{=}R_{d[bac]}=0.$$ Other relations follow from working with this figure. This is likely based on Milnor's octahedron in Milnor's Morse Theory (1963, ISBN 978-0691080086) page 54, (2) is defined as $R(X,Y)Z+R(Y,Z)X+R(Z,X)Y=0$ and Milnor says "Formula (2) asserts that the sum of the quantities at the vertices of the shaded triangle W is zero".
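If it helps, the counting that such pictures organize can be checked directly. Here is a small Python sketch (my own, just standard combinatorics, not taken from either book): treating $R_{abcd}$ as a symmetric matrix over antisymmetric index pairs and then subtracting the first Bianchi identities reproduces the 20 independent components in four dimensions.

```python
from itertools import combinations
from math import comb

n = 4                                        # spacetime dimension
pairs = list(combinations(range(n), 2))      # antisymmetric index pairs [ab]
P = len(pairs)                               # 6 such pairs in four dimensions

symmetric_over_pairs = P * (P + 1) // 2      # R_{[ab][cd]} symmetric under pair exchange: 21
bianchi = comb(n, 4)                         # independent identities R_{[abcd]} = 0: just 1 here
print(symmetric_over_pairs - bianchi)        # 20 independent components, n^2(n^2-1)/12 in general
```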
{ "language": "en", "url": "https://physics.stackexchange.com/questions/450889", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Gauss theorem and inverse square law I know that Gauss's law states that the flux of the electric field through a closed surface is Q/ε, but does Gauss's theorem also work for fields that do not obey an inverse-square law?
I'd like to draw a distinction: Gauss's Theorem (also called the Divergence Theorem): $$\iint_{\partial V} \mathbf{E} \cdot d\mathbf{A} = \iiint_V \nabla \cdot \mathbf{E}\ dV $$ This is a purely mathematical statement and holds for all differentiable vector fields $\mathbf{E}$. Gauss's Law: $$\nabla \cdot \mathbf{E} = \frac{\rho}{\epsilon_0}$$ Plugging this into Gauss's Theorem we have that $$\iint_{\partial V} \mathbf{E} \cdot d\mathbf{A}= \frac{1}{\epsilon_0} \iiint_V \rho(\mathbf{r}) \ dV \equiv \frac{Q_{enc}}{\epsilon_0}. $$ So to answer your question, Gauss's Theorem is always true. It must be. However, Gauss's Law didn't have to be true; it just so happens to be a law of physics in the universe we find ourselves in. Having said that, Gauss's law will be true for any vector field $\mathbf{F}$ that satisfies the differential equation $$\nabla \cdot \mathbf{F} \propto \Lambda(\mathbf{r}), $$ where $\Lambda$ is just some scalar field that is well defined over the volume $V$. “Gauss's law” (i.e. in integral form; I put it in quotes because we are just plugging it into Gauss's theorem, but this is how many use the term) for such a field would look like $$\iint_{\partial V} \mathbf{F} \cdot d\mathbf{A}= \alpha \iiint_V \Lambda(\mathbf{r}) \ dV \equiv \alpha \tilde{Q}_{enc}, $$ for some constant $\alpha$, where $\tilde{Q}_{enc}$ is just what we define to be how much "charge" is enclosed by our surface. Conclusion: The fact that the $\mathbf{E}$ field falls like $\frac{1}{r^2}$ just makes the integral on the left hand side do-able at a fixed $r$ (actually the fact that it only depended on $r=|\mathbf{r}|$ is what made it do-able); it does not mean that the corresponding field $\mathbf{F}$ has to fall like $\frac{1}{r^2}$. When you think about all the times you actually used Gauss's law to calculate the $\mathbf{E}$ field, recall that you actually had to assume that it fell like $\frac{1}{r^2}$ in order to do the integral simply.
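To make the conclusion concrete, here is a rough numerical sketch in Python (my own illustrative setup, nothing from the answer above): the flux of an inverse-square field through a sphere comes out as $4\pi$ no matter where the 'charge' sits inside or how big the sphere is, while for a field whose magnitude falls like $1/r^3$ the same surface integral changes with the geometry, so no statement of the form "flux = constant × enclosed charge" survives.

```python
import numpy as np

def flux(field, R=1.0, ntheta=200, nphi=400):
    """Numerically integrate field . n dA over a sphere of radius R centred at the origin."""
    th = (np.arange(ntheta) + 0.5) * np.pi / ntheta
    ph = (np.arange(nphi) + 0.5) * 2 * np.pi / nphi
    TH, PH = np.meshgrid(th, ph, indexing="ij")
    n = np.stack([np.sin(TH) * np.cos(PH), np.sin(TH) * np.sin(PH), np.cos(TH)], axis=-1)
    dA = R**2 * np.sin(TH) * (np.pi / ntheta) * (2 * np.pi / nphi)
    E = field(R * n)
    return np.sum(np.sum(E * n, axis=-1) * dA)

src = np.array([0.3, 0.0, 0.0])            # point source placed off-centre, inside both spheres

def inverse_square(pts):                   # |F| ~ 1/r^2 (Coulomb-like)
    d = pts - src
    r = np.linalg.norm(d, axis=-1, keepdims=True)
    return d / r**3

def inverse_cube(pts):                     # |F| ~ 1/r^3 (not Coulomb-like)
    d = pts - src
    r = np.linalg.norm(d, axis=-1, keepdims=True)
    return d / r**4

print(flux(inverse_square, R=1.0), flux(inverse_square, R=2.0))  # both ~ 4*pi ~ 12.566
print(flux(inverse_cube, R=1.0), flux(inverse_cube, R=2.0))      # two different numbers
```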
{ "language": "en", "url": "https://physics.stackexchange.com/questions/451119", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Does the half-life of an element mean it will never decay completely? Example: Half life of Polonium-194 is 0.7 seconds. If we supposedly take 50g of Polonium, there will surely be a time when no more of this Polonium will be left because if we consider the decay discretely, in the form of individual atoms, won't there be a time when the last atom decays completely? Does this mean an element can decay completely? If so, why don't we actually 'run out' of natural radioactive elements? Is it so because the elements they decay into combine to form the parent element again?
No, not really. For example, suppose you have a sample of $2^{1000}$ atoms with half-life $t$. (Note: there are only about $10^{80}$ protons in the Universe.) * *If you wait for a time $t$, then half of them have decayed, and you have on average $2^{999}$ remaining. *If you wait $1000t$, then you will have on average a single radioactive atom remaining. *And if you wait $2000t$, then the average number of remaining radioactive atoms is $2^{-1000}$. That means it is overwhelmingly likely that actual number of remaining atoms is zero. Your half-second polonium isotope was among the isotopes produced in the supernova explosion whose detritus recombined to form our solar system, five-ish billion years ago. That "primordial" polonium is entirely gone. The radioactive elements that remain are the ones with billion-year half-lives. A famous consequence of this is the natural fission reactor at Oklo, Gabon, where several tons of uranium ore underwent rainwater-moderated fission about two billion years ago. This was possible because, on a younger Earth, there was more of the highly-fissionable, shorter-lived isotope U-235 in uranium ores than there is today. A similar structure formed by geological processes today wouldn't fission because natural uranium is no longer sufficiently enriched.
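The same point can be made with a quick Monte Carlo sketch in Python (my own toy numbers): start with a million atoms of a 0.7-second isotope and let each one decay independently. Since $2^{20}\approx 10^6$, the expected survivor count drops below one after about 20 half-lives, i.e. the sample is gone in well under a minute; even the full 50 g of the question, roughly $10^{23}$ atoms, is only about 77 half-lives' worth.

```python
import numpy as np

rng = np.random.default_rng(0)
half_life = 0.7                          # seconds, as in the question
dt = 0.1                                 # simulation time step, seconds
p_decay = 1 - 0.5 ** (dt / half_life)    # probability that a given atom decays in one step

n = 1_000_000                            # initial number of atoms (a toy sample)
t = 0.0
while n > 0:
    n -= rng.binomial(n, p_decay)        # each surviving atom decays independently
    t += dt
print(f"last atom gone after ~{t:.1f} s (~{t / half_life:.0f} half-lives)")
```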
{ "language": "en", "url": "https://physics.stackexchange.com/questions/451240", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 7, "answer_id": 1 }
Explicit computation of singular part of two-loop sunrise diagram For $\phi^4$, there is a two-loop self-energy contribution from the sunrise (sunset) diagram. The integral is $$ I(p)=\int\frac{d^D p_1}{(2\pi)^D}\frac{d^Dp_2}{(2\pi)^D}\frac{1}{(p_1^2+m^2)(p_2^2+m^2)[(p-p_1-p_2)^2+m^2]}. $$ I tried to use tricks like Feynman parameters, but still cannot get an explicit result. I saw this question Two-loop regularization and I found that the exact solution of the above integral is very complex and needs a lot of fancy techniques. However, in order to do the renormalization I only need the singular part of $I(p)$ ($\epsilon$-expansion with $D=4-\epsilon$). My question: * *Is there some easy and direct method to compute just the singular part of $I(p)$, i.e. the coefficients of $1/\epsilon^2$ and $1/\epsilon$? By the way, I saw this answer https://physics.stackexchange.com/a/79236/169288 but it only gives an explicit computation of the singular part of $I^\prime(p^2=m^2)$, which is not what I need. It is enough to point me to literature or textbooks which show the explicit computation details; I can rarely find textbooks covering explicit two-loop computations. PS: The following is my attempt with the ordinary Feynman parameter trick. $$ I(p)=\int\frac{d^D p_1}{(2\pi)^D}\frac{d^Dp_2}{(2\pi)^D}\frac{1}{(p_1^2+m^2)(p_2^2+m^2)[(p-p_1-p_2)^2+m^2]}$$ $$I(p)=\int\frac{d^D p_1}{(2\pi)^D}\frac{d^Dp_2}{(2\pi)^D} \int dx dy dz \delta(x+y+z-1) \frac{2}{\mathcal{D}^3}$$ with $$\mathcal{D}=x p_1^2 +y p_2^2 +z(p-p_1-p_2)^2 +m^2 = \alpha k_1^2 + \beta k_2^2 + \gamma p^2 +m^2 $$ with $$\alpha = x+z $$ $$\beta = \frac{xy + yz +zx}{x+z}$$ $$\gamma = \frac{xyz}{xy + yz +zx}$$ $$k_1= p_1 + \frac{z}{x+z}(p_2-p)$$ $$k_2 =p_2 - \frac{xz}{xy + yz +zx} p.$$ The Jacobian is $\frac{\partial(k_1,k_2)}{\partial(p_1,p_2)}=1.$ $$I(p)=\int_0^1 dx dy dz \delta(x+y+z-1) \int\frac{d^D k_1}{(2\pi)^D}\frac{d^Dk_2}{(2\pi)^D} \frac{2}{(\alpha k_1^2 + \beta k_2^2 + \gamma p^2 +m^2 )^3} $$ $$I(p)=\int dx dy dz \delta(x+y+z-1) \int\frac{d^D k_1}{(2\pi)^D}\frac{d^Dk_2}{(2\pi)^D} \int_0^{+\infty} dt t^2 e^{-t(\alpha k_1^2 + \beta k_2^2 + \gamma p^2 +m^2) }.$$ Doing the Gaussian integrals over $k_1$ and $k_2$, $$I(p)= \int dx dy dz \delta(x+y+z-1)\int_0^{+\infty} dt \frac{t^{2-D}}{ (4\pi)^D (\alpha \beta)^{D/2}}e^{-t(\gamma p^2 +m^2)}$$ $$I(p)=\int dx dy dz \delta(x+y+z-1) \frac{\Gamma(3-D)}{(4\pi)^D(\alpha \beta)^{D/2}(\gamma p^2 +m^2)^{3-D}}.$$ Using $D= 4-\epsilon$, $$I(p)=\frac{1}{(4 \pi)^4}\int dx dy dz \delta(x+y+z-1) \frac{\gamma p^2 +m^2}{(\alpha \beta)^2}\Gamma(-1+\epsilon) \left(\frac{\sqrt{\alpha \beta}}{\gamma p^2 +m^2}\right)^\epsilon$$ $$\Gamma(-1+\epsilon)= -\frac{1}{\epsilon} +\gamma_E-1 +\mathcal{O}(\epsilon)$$ $$\left(\frac{\sqrt{\alpha \beta}}{\gamma p^2 +m^2}\right)^\epsilon = 1 + \epsilon \ln\left(\frac{\sqrt{\alpha \beta}}{\gamma p^2 +m^2}\right)+\mathcal{O}(\epsilon^2).$$ Up to $0$th order in $\epsilon$, $$I(p)=\frac{1}{(4 \pi)^4}\int dx dy dz \delta(x+y+z-1) \frac{\gamma p^2 +m^2}{(\alpha \beta)^2}\left(-\frac{1}{\epsilon}- \ln\left(\frac{\sqrt{\alpha \beta}}{\gamma p^2 +m^2}\right) +\gamma_E-1\right) .$$ There are two integrals I need to compute: $$I_1 = \int_0^1 dx dy dz \delta(x+y+z-1) \frac{\gamma p^2 +m^2}{(\alpha \beta)^2} $$ $$I_2 = \int_0^1 dx dy dz \delta(x+y+z-1) \frac{\gamma p^2 +m^2}{(\alpha \beta)^2}\ln\left(\frac{\sqrt{\alpha \beta}}{\gamma p^2 +m^2}\right).$$ I can't find explicit results for these two.
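One intermediate step above that is easy to machine-check is the Laurent expansion of $\Gamma(-1+\epsilon)$. A short sympy sketch (my own; it uses the recursion $\Gamma(z+1)=z\,\Gamma(z)$ to move the pole into an analytic factor) reproduces the coefficients quoted above:

```python
import sympy as sp

eps = sp.symbols('epsilon')
# Gamma(eps - 1) = Gamma(eps + 1) / (eps * (eps - 1)) by the recursion Gamma(z + 1) = z Gamma(z)
expr = sp.gamma(eps + 1) / (eps * (eps - 1))
print(sp.series(expr, eps, 0, 2))
# expect: -1/epsilon + (EulerGamma - 1) plus O(epsilon)-type terms, as used above
```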
If you can be satisfied with the $m=0$ case, the integral is easy provided you work in configuration space rather than momentum space. The $x$-space propagator in $n$ dimensions is $$ g(x,x') = \frac{1}{ (n-2)S_{n-1}} \left(\frac {1}{|x-x'|}\right)^{n-2} $$ where $S_{n-1} = 2\pi^{n/2}/\Gamma(n/2)$ is the surface area of the $n$-ball. Your Feynman diagram in configuration space is $$ I(p)= \int {d^nx}\, e^{ipx}[g(x,0)]^3 $$ which you can evaluate (after renaming the following integral's integration variable $k$ as $x$ and its $x$ as the external momentum $p$) using the standard Fourier integral $$ \int \frac{d^n k}{(2\pi)^n} e^{ikx} |k^2|^s = \frac{4^s}{\pi^{n/2}}\frac{\Gamma(s+n/2)}{\Gamma(-s)} \frac{1}{|x|^{2s+n}}. $$ The $m=0$ case is good enough to get the $\beta$ function and the wavefunction renormalization $Z$, as these do not depend on $m$ anyway. If the $m$ dependence is important to you, you will need to evaluate the Fourier transform of the cube of a Bessel "K" function.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/451474", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 1, "answer_id": 0 }
A drone, or any lifting vehicle, enclosed in a container. Will it lift along with the container? Consider a drone, or any lifting vehicle, enclosed in a container. Will it lift along with the container? Suppose I place a small drone in a large container of negligible weight and place them in space. Will the drone move forward, pushing the container? I know it doesn't work, but I don't know the reason.
The reason it will not work is that the air the drone pushes down will push the bottom of the container down too. Imagine yourself in a box half your height. Will you be able to stand up straight? No, because you will be pushing the bottom of the box down with the same force that you are using to push yourself up :) Edit: Per Aaron Stevens' comment, it is worth mentioning that in the case of a very tall box it is possible for the drone to lift the box momentarily (briefly), as it takes time for the air pressure to build up on the bottom of the box. It is also possible to lift the box momentarily if the drone speeds up really fast and hits the top of the box; the drone's inertia would move the box. The downward force on the box bottom equals the drone's thrust (F1 = F2) because the air pushed down by the drone's blades has to transfer its momentum to something in order to be able to change course and return to the blades, and the bottom part of the box is the only thing around. If the box (container) is completely sealed, it will take the full impact (100%) of the air pushed down by the drone.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/451818", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Different predictions from differential vs integral form of the Maxwell–Faraday equation? Assume a toroidal solenoid with a variable magnetic field inside (and zero outside) and a circular wire around one of the sides. Because there is no magnetic field outside the solenoid, we have $$\nabla \times E = - \frac{\partial B}{\partial t}=0,$$ which impies that E is conservative, that is, $$\int_{\partial \Sigma} E.d\ell =0$$ On the other hand, using the integral form we get: $$\int_{\partial \Sigma} E.d\ell = - \frac{\partial}{\partial t}\int_\Sigma B \cdot dS \ne0,$$ because there is a changing B inside the surface. What is it wrong with my reasoning?
You are re-discovering the Aharonov-Bohm effect. It is not a problem of the differential vs. integral form of Maxwell's equations; the issue is that, in order to prove the equivalence between the local condition of vanishing curl and the global condition of a vanishing line integral of the field, a simply connected domain is required. This is not the case if you have a toroidal solenoid (a closed loop that goes around the solenoid cannot be contracted to a point). For more information see a previous Q&A. In particular, among the first comments you'll find a reference to an experiment performed with a toroidal solenoid.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/451919", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Poynting theorem in Landau and Lifshitz’ field theory book In Landau & Lifshitz’s The Classical Theory of Fields, in section 31, they have proved the Poynting theorem (equation 31.6) in its integral form. In the footnote on page 76, they mention We assume that at the given moment there are no charges on the surface itself. If this were not the case, then on the right we would have to include the energy flux transported by particles passing through the surface. I would like to know how would the additional term look like, and any physical situation in which this term is important?
I think the answer is the following: a charge on the surface of the volume creates an electric field $\vec E=4\pi \sigma \vec n$, where $\sigma$ is the surface charge density, and the corresponding additional energy flux is $\oint c\, \sigma\, [\vec n,\vec H]\,df$. Such surface charges form, for example, on the surface of a dielectric placed in an external field (in a conductor they instead form currents), so this term is essential for the situation of a dielectric in an external field.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/452025", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why does the acceleration come out different when using $F=ma$ and when using $\tau = I \alpha $? Consider a disc of mass $M$ and radius $R$, on which I apply a force $F$ tangentially. Now using $F=Ma$, the acceleration comes out to $$a=F/M$$ Now, let's use the torque equation. Here, the moment of inertia $I$ is $\frac12MR^2$, and let $\alpha$ be the angular acceleration. The torque equals $FR$, so $$FR=\frac12MR^2\alpha$$ and, applying the rolling-without-slipping assumption $a=\alpha R$, we get $$a=2F/M$$ What gives rise to this discrepancy?
For translations, it doesn't matter how the mass of a body is distributed; the acceleration will be $a=F/m$. For rotations, the distribution of the mass is important: a ring with a large radius is harder to get into rotation than a small ring with the same mass. This is captured by the moment of inertia.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/452309", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
Noether's theorem for scale invariance When we have the Lagrangian $$\mathcal{L} = \frac{1}{2} \partial _\mu \phi\partial^\mu \phi \tag{1} $$ We have a symmetry given by $$x^\mu\mapsto e^\alpha x^\mu, \qquad\phi\mapsto e^{-\alpha} \phi.\tag{2}$$ I'm struggling to find the Noether charge for this symmetry. The formula is $$j^\mu=\frac{\partial \mathcal{L}}{\partial\partial_\mu\phi}\delta\phi-k^\mu\tag{3}$$ where $$\delta \phi=-\phi \tag{4}$$ in this case, but I can't find $k^\mu$ such that $$\delta \mathcal {L}=\partial _\mu k^\mu .\tag{5}$$
If you want to compute the Noether currents, you can proceed as follows: $$x'^u=x^u+\delta x^u \quad \delta x^u=e^aE_a^u$$ $$\phi'=\phi+\delta\phi \quad \delta\phi=e^aX_a$$ $$J_a^u=\left[\eta_p^uL-\frac{dL}{dd_u\phi}d_p\phi\right]E_a^p+\frac{dL}{dd_u\phi}X_a$$ In your case this gives $$E^u=x^u \quad X=-\phi$$ $$J^u=\frac{1}{2}d_p\phi\, d^p\phi\, x^u-d_v\phi\, d^u\phi\, x^v-(d^u\phi)\phi$$ Using the Euler-Lagrange equation $d_ud^u\phi=0$ you can verify explicitly that $d_uJ^u=0$, i.e. that this current is conserved on-shell.
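As a sanity check of that conservation claim, here is a small sympy sketch (my own; the particular massless plane-wave superposition is an arbitrary choice, not something from the question) that builds this $J^u$ for an explicit solution of the wave equation and verifies that its divergence vanishes:

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z', real=True)
X = [t, x, y, z]                      # coordinates x^mu
eta = sp.diag(1, -1, -1, -1)          # Minkowski metric, signature (+,-,-,-)

# an assumed massless solution of the wave equation (my choice):
phi = sp.cos(t - z) + sp.cos(t - x)

d_lo = [sp.diff(phi, c) for c in X]                                    # d_mu phi
d_up = [sum(eta[m, n] * d_lo[n] for n in range(4)) for m in range(4)]  # d^mu phi
L = sp.Rational(1, 2) * sum(d_lo[m] * d_up[m] for m in range(4))       # Lagrangian density

x_dot_dphi = sum(d_lo[n] * X[n] for n in range(4))                     # x^nu d_nu phi

# dilatation current J^mu = L x^mu - (d^mu phi)(x^nu d_nu phi) - phi d^mu phi
J = [L * X[m] - d_up[m] * x_dot_dphi - phi * d_up[m] for m in range(4)]

div = sum(sp.diff(J[m], X[m]) for m in range(4))
print(sp.simplify(div))               # -> 0: the current is conserved on this solution
```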
{ "language": "en", "url": "https://physics.stackexchange.com/questions/452414", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Gravitational redshift discrepancy? I want to compute the redshift of a signal emitted by a static observer at $r=R_1$, $\phi=\phi_1$ and received by another static observer at $r=R_2$, $\phi=\phi_2$ with $R_2>R_1$, in the Schwarzschild metric. I determined it in two different ways, obtaining different results. First I considered the metric for a static observer $$ds^2=-(1-\frac{2m}{r})dt^2=-d\tau^2$$ $$dt=\frac{d\tau_1}{(1-\frac{2m}{R_1})^{1/2}}=\frac{d\tau_2}{(1-\frac{2m}{R_2})^{1/2}}$$ so that $$\frac{\lambda_2}{\lambda}=\frac{(1-\frac{2m}{R_2})^{1/2}}{(1-\frac{2m}{R_1})^{1/2}}$$ Instead, using the symmetry of the metric under time translations we have $$\frac{dt}{d\tau}(1-\frac{2m}{r})=constant$$ $$dt=\frac{d\tau_1}{(1-\frac{2m}{R_1})}=\frac{d\tau_2}{(1-\frac{2m}{R_2})}$$ giving $$\frac{\lambda_2}{\lambda}=\frac{(1-\frac{2m}{R_2})}{(1-\frac{2m}{R_1})}$$ What am I doing wrong?
The expression $$\left( 1 - \frac{r_s}{r}\right)\frac{dt}{d\tau} = \frac{E}{mc^2} = {\rm constant}$$ would apply to an inertial observer in the Schwarzschild metric. i.e. The $\tau$ here corresponds to the proper time experienced by an inertial (free-falling) observer. Your first expression $$ \left( 1-\frac{r_s}{r}\right)^{1/2}dt = d\tau$$ would apply in a case where $dr = d\phi=0$, i.e. the case where $\tau$ is the proper time experienced by a stationary, and therefore non-inertial observer.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/452471", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Action of rotation operator on spin 1/2 system In Sakurai book on QM in chapter 3, he states the following relation $$e^{\frac{iS_z\phi}{\hbar}}[(\rvert+\rangle\langle-\rvert)+(\rvert-\rangle\langle+\rvert)]e^{\frac{-iS_z\phi}{\hbar}}$$ $$=e^{\frac{i\phi}{2}}\rvert+\rangle\langle-\rvert e^{\frac{i\phi}{2}}+e^{\frac{-i\phi}{2}}\rvert-\rangle\langle+\rvert e^{\frac{-i\phi}{2}}$$ The problem I am having in understanding the above relationship is this: from where does the $e^{\frac{i\phi}{2}}$ comes into the equation?
The operator $S_z$ is the operator representing the $z$-component of angular momentum, which also generates rotations about the $z$ axis. Assuming that $|\pm\rangle$ are the eigenstates of $S_z$ for a spin-$1/2$ object, this gives $$ S_z|\pm\rangle=\pm\frac{\hbar}{2}|\pm\rangle, $$ which then implies $$ \exp\left(\frac{iS_z\phi}{\hbar}\right)|\pm\rangle= \exp\left(\frac{\pm i\phi}{2}\right)|\pm\rangle $$ and $$ \exp\left(\frac{-iS_z\phi}{\hbar}\right)|\pm\rangle= \exp\left(\frac{\mp i\phi}{2}\right)|\pm\rangle. $$
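If it helps, this can be checked numerically in a few lines of Python (my own quick sketch, with an arbitrary angle): conjugating $|+\rangle\langle-|+|-\rangle\langle+|$ by the rotation operator indeed produces the phases $e^{\pm i\phi}$ that appear in Sakurai's expression.

```python
import numpy as np

phi = 0.7                                   # arbitrary rotation angle
ket_p = np.array([[1.0], [0.0]])            # |+>
ket_m = np.array([[0.0], [1.0]])            # |->

U = np.diag(np.exp(1j * phi / 2 * np.array([1.0, -1.0])))   # exp(i S_z phi / hbar) in this basis
A = ket_p @ ket_m.T + ket_m @ ket_p.T                       # |+><-| + |-><+|

lhs = U @ A @ U.conj().T                    # rotate the operator
rhs = np.exp(1j * phi) * ket_p @ ket_m.T + np.exp(-1j * phi) * ket_m @ ket_p.T
print(np.allclose(lhs, rhs))                # True
```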
{ "language": "en", "url": "https://physics.stackexchange.com/questions/452566", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Double slit experiment record but not look at it I am having a tough, tough, stressful week trying to write an article about quantum mechanics. I know there was a question asking the same thing before, but I didn't understand it and I did not want to wake up an old thread. The double slit experiment we all know: if we put an apparatus at the slits to detect which slit the electron has gone through but do not look at the data it spits out, what happens on the screen? (Looking at the screen doesn't change anything, right?) 1) If an active observer is not necessary for the wave function to collapse, the existence of a sensor will destroy the interference pattern because the sensor has interfered with the system. 2) If there needs to be an observer to read off the data from the sensor and acknowledge that the electron has acted like a particle and gone through either of the slits, the interference pattern will not be destroyed when the sensor is just there but no one has read the data. Which one is it?
The interference pattern will be destroyed even before the wavefunction collapse. Let's say your particle is described by the wavefunction $|\text{p}>$. When passing the double slit (I call those the up and down slits), it becomes entangled with the sensor, and your wavefunction becomes: $|\psi> \equiv|\text{sensor}_{up}>\otimes |\text{p}_{up}>+|\text{sensor}_{down}>\otimes |\text{p}_{down}>$ When you want the probability to find the particle at a certain position on the screen, you will calculate $<\psi|\psi>$, but since $<\text{sensor}_{up}|\text{sensor}_{down}>=0$, you won't get the term $<\text{p}_{up}|\text{p}_{down}>$, which is the one you need to have an interference pattern. Simply adding the sensor destroyed the interference, whether you look at it or not.
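A tiny numerical sketch of the same statement (my own toy numbers for the wavelength, slit separation and screen distance, nothing from the question): with orthogonal sensor states the $\psi_{up}^{*}\psi_{down}$ cross term is multiplied by $\langle\text{sensor}_{up}|\text{sensor}_{down}\rangle=0$, so the screen pattern is just the sum of the two one-slit patterns, while without the sensor the fringes survive.

```python
import numpy as np

wavelength = 1.0
k = 2 * np.pi / wavelength
d = 5.0              # slit separation (toy value)
L = 100.0            # slit-to-screen distance (toy value)
xs = np.linspace(-30, 30, 7)                 # a few positions on the screen

r_up = np.sqrt(L**2 + (xs - d / 2) ** 2)     # path lengths from each slit
r_dn = np.sqrt(L**2 + (xs + d / 2) ** 2)
psi_up = np.exp(1j * k * r_up) / r_up        # spherical-wave amplitudes
psi_dn = np.exp(1j * k * r_dn) / r_dn

no_sensor = np.abs(psi_up + psi_dn) ** 2                   # cross term survives: fringes
with_sensor = np.abs(psi_up) ** 2 + np.abs(psi_dn) ** 2    # orthogonal sensor states kill it
print(np.round(no_sensor / with_sensor, 2))                # oscillates between ~0 and ~2
```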
{ "language": "en", "url": "https://physics.stackexchange.com/questions/452678", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Intuition behind dual vectors ('Bongs of a bell' does not help) Similar to the post here (How to visualize the gradient as a one-form?), I'm wondering about an intuition behind dual vectors and differential forms (and the link in that answer to Thorne's notes is broken now). I'm not as familiar with level sets (as mentioned in the post above), and both Carroll and MTW leave the explanation somewhat wanting... MTW's "bongs of a bell" explanation is particularly useless, and not having quantum mechanics experience means that "kets" and "bras" are not helpful either. Am I just unprepared for the material? I didn't think QM was a prereq for GR... Is there an intuitive explanation for the relationships between vectors, their duals, and a geometric object they describe? One of the key differences between vectors and their duals seems to be that dual vectors are reliant on the metric. It's also clear that dual vectors occupy a space of the same dimensionality as the vectors and can act on a geometric object to return its components in the 'original' vector space. As an example, we know the dual vector $\overline{w}^1$ acts on $\overrightarrow{v}$ as $\overline{w}^1 \overrightarrow{v} = v^1$; is that equivalent to $\overline{w}^1 g_{aa} v^a = v^1$? Or does the dual vector act on $\overrightarrow{v}$ in another way?
Since no one has explicitly mentioned Schutz's explanation in A First Course in General Relativity I will outline it, as it is particularly intuitive: As others have mentioned, dual vectors/one-forms/covectors can be seen as a map from a vector to a scalar (and in general, covariant n-th order tensors map contravariant n-th order tensors to scalars and vice versa). In index notation this is just $s=w_iv^i$ Visually, vectors can be neatly represented as an arrow–an object with direction and magnitude–and naturally then level sets map these arrows to scalars by counting the number of rungs the arrow crosses. You have probably seen level sets before; think topographic maps showing elevation–each rung represents some set height. Page 62 of Schutz has the following figure which summarizes:
{ "language": "en", "url": "https://physics.stackexchange.com/questions/452774", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Does the eigenbasis associated with an observable changes after measuring a different observable? Suppose a system is initially in a superposition: $$\psi(x) = \sum\limits_{i}|c_i\phi_i(x)\rangle$$ After a position measurement, the wave function collapses to one of the position eigenfunctions,$\phi_i(x).$ Geometrically, I understand this as projecting the wave function to one of its components along its position eigenbasis in Hilbert space. If I then measure momentum, the wavefunction is projected to one of its component along its momentum eigenbasis. If I measure position again, would the set of position eigenbasis change? Or is it still the same set of position eigenbasis $\{|\phi_i\rangle\}$?
The state your initial state collapses onto is always one of the eigenstates of the observable you are measuring. These eigenstates are defined a priori, and don't change as long as the observable doesn't change. So, the formal answer to your question depends on the picture you are working in: * *if you are in the Schroedinger picture, where operators don't change and states evolve, the eigenstates (and eigenvalues) of the operator $\hat X$ don't evolve in time and are the same at every instant $t$ *if you are in the Heisenberg picture, and you make your measurements at two different instants $t=0$ and $t=t_1$, your position operator would be $\hat X(t_1)=e^{iHt_1/\hbar}X(0)e^{-iHt_1/\hbar}$, so the eigenstates of this operator are related to the initial ones by $|\phi(x_i,t_1)\rangle=e^{iHt_1/\hbar}|\phi(x_i,0)\rangle$
{ "language": "en", "url": "https://physics.stackexchange.com/questions/453418", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What if humans doubled in size... and everything else... could we notice? After the big bang, everything expanded from a small mass. That expansion is said to be still happening. Imagine if everything observable constantly grew in size. E.g. everything slowly doubled in size over a decade? Would we notice a difference? Would it seem the same because everything we measure by grew as well? Assume the speed of light adjusted as well. The farther from the big bang, the exponentially slower the expansion. Imagine the expansion of the universe is happening within us. Is it possible?
We always measure size relative to a known size. When we look through a microscope, unless we already know that the objects are small, they will look very large. That is the first way, by seeing. The second is by touching, and the third is through the known speed of an object and its known size. Usually we measure size by these means. If everything starts to grow in size, and the speeds of objects (including the speed of light, which is used to measure length) are also adjusted with respect to size, then every possibility of noticing the difference in size is adjusted away. You couldn't even tell whether you had just doubled or tripled in size. This is similar to saying that if time stopped for 3 hours, you could not know that it had stopped.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/453507", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Is there anything special about ebonite and fur? I'm from Czech Republic, born 1980. From elementary school, we all remember this mantra: When ebonite rod is rubbed with fox fur, electrostatic charge is created. Electrostatic charge is created by rubbing ebonite rod with fox fur. Rubbing ebonite fur with fox fur creates electrostatic charge. Etc. ad nauseam. So... Is there anything special about the combination of ebonite and fox fur that makes it especially useful for teaching kids about electricity? Does there even exist a clear distinction between things that do and things that don't create electrostatic charge by rubbing? The irony: I can't remember ever hearing the word 'ebonite' in any other context than this particular strange example. (I never even knew what ebonite was until about 15 minutes ago when I googled it.)
Static electricity is observed with a plastic comb after you comb dry hair. So there is nothing special about ebonite except the ancient reports which led to the name electricity.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/453656", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Why does a rope grabbed by a single end seem to follow the hand that grabs it? Context I am thinking about the physics of the movement of a rope in the context of a person running and holding a rope in one hand (the other end is "free", with nothing attached). When the person is just standing, the rope simply hangs down vertically. Update after comments: the rope does not stay in a "horizontal state" (my wrong original assumption was that when the person is running, the rope follows the direction of the person's hand, that is, if the person runs forwards (horizontally), the rope hangs horizontally from the hand). When the person is not running, I assume that the main effect that explains why the rope hangs vertically is gravity. Main question What physics principles / ideas can explain the movement of the rope when the person is running? I guess that air friction might be one. Am I right? What would the others be? Tension? Side question While running, the person moves their arm upwards. The rope transitions from horizontal to diagonal (switching from one height to the current hand's height) and then returns to horizontal. What principles are involved in these transitions / movements?
The rope can be modeled as an elastic system consisting of individual massive elements connected by springs. Each massive element moves under the action of elastic forces, gravity and air resistance. Let's consider the following task: at the initial moment of time the rope hangs vertically; during the next 1 second the upper end of the rope moves with acceleration 2, and after that it moves at a constant speed 2. We solve the problem both with air resistance and without it. Both versions are presented in animations 1 and 2. We see that in the presence of resistance the rope deviates from the vertical both while accelerating and while moving at constant speed. Without resistance, the rope is deflected during the acceleration and then oscillates about the vertical. (Animation 1: with resistance. Animation 2: without resistance.)
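For anyone who wants to reproduce this kind of result, here is a rough Python sketch of such a mass-spring rope (all parameter values are my own choices; the prescribed motion of the held end follows the scenario in the answer, acceleration 2 for 1 s and then constant speed 2, read as SI units). It is only a toy integrator, not the code behind the animations.

```python
import numpy as np

N, m, L = 20, 0.02, 1.0                  # segments, node mass (kg), rope length (m)
k_spring, c_drag, g = 500.0, 0.02, 9.81  # spring stiffness, linear air drag, gravity
rest = L / N
dt, steps = 2e-4, 50000                  # time step and number of steps (10 s of motion)

pos = np.zeros((N + 1, 2))
pos[:, 1] = -np.arange(N + 1) * rest     # start hanging straight down from the hand
vel = np.zeros_like(pos)

def hand(t):
    """Prescribed motion of the held end: accelerate at 2 for 1 s, then constant speed 2."""
    a, t1 = 2.0, 1.0
    x = 0.5 * a * t**2 if t < t1 else 0.5 * a * t1**2 + a * t1 * (t - t1)
    return np.array([x, 0.0])

for step in range(steps):
    t = step * dt
    forces = np.zeros_like(pos)
    forces[:, 1] -= m * g                        # gravity on every node
    forces -= c_drag * vel                       # crude air resistance
    for i in range(N):                           # springs between neighbouring nodes
        dvec = pos[i + 1] - pos[i]
        dist = np.linalg.norm(dvec)
        f = k_spring * (dist - rest) * dvec / dist
        forces[i] += f
        forces[i + 1] -= f
    vel += forces / m * dt                       # semi-implicit Euler
    pos += vel * dt
    pos[0] = hand(t + dt)                        # the hand overrides the top node
    vel[0] = 0.0

print(pos[-1])    # position of the free end after the run
```

Plotting `pos` at intermediate times shows the trailing, deflected shape described above when drag is on, and the pendulum-like oscillation when `c_drag` is set to zero.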
{ "language": "en", "url": "https://physics.stackexchange.com/questions/453952", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Fire -Thermodynamics Could a steel box lined with Rockwool (interior only) be an adequate shelter during a fire? How can I determine the temperature inside of the box at peak fire temp? How long could someone withstand the peak internal temperature? How could I cool the interior? Would a fire extinguisher explode at peak temperature?
Imagine a box of area $A$, volume $V$, density $\rho$, temperature $T$ and interior heat capacity $C$ surrounded by a fire at temperature $T_f$. It has thermal insulation of thickness $d$ and thermal conductivity $k$. Let's ignore the thermal capacity of the insulator. The heat flow across the area will be $kA(T_f-T)/d$ watts, and hence the internal temperature will grow as $$T' = kA(T_f-T)/\rho CVd.$$ Assuming all the values on the right except $T$ are constant, the solution to this is $$T(t)=T_f + (T(0)-T_f)\exp(-[kA/\rho CVd]t).$$ If the maximum acceptable temperature is $T_{max}$, it will be reached after a time $$t=-\left[\frac{\rho CVd}{kA}\right]\ln\left(\frac{T_{max}-T_f}{T(0) - T_f}\right).$$ Throwing some random numbers at this: stone wool has a thermal conductivity around 0.020 W/m K. Assume a $V=8$ cubic meter box containing air, $\rho=1$ kg/m$^3$, $C=1.00$ kJ/kg.K (ignoring temperature and pressure dependency!), and $A=24$ square meters. Let's set $T_f=700$ K, $T(0)=300$ K and $T_{max}=400$ K. Let's add a meter of rock wool, $d=1$ m. Then I get 4794.7 seconds, or about 80 minutes. That doesn't sound too crazy given that it is a pretty mild fire and a lot of insulation. Using 1 cm of insulation gives you about 48 seconds instead. The other questions, like how to cool the interior or when an extinguisher explodes, have the answer "it depends".
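For convenience, the arithmetic can be packed into a few lines of Python (same assumed numbers as above), which reproduces roughly 80 minutes for a metre of insulation and roughly 48 seconds for a centimetre:

```python
import math

k, A, d = 0.020, 24.0, 1.0        # W/(m K), m^2, and m of rock wool
rho, C, V = 1.0, 1000.0, 8.0      # kg/m^3, J/(kg K), m^3 of enclosed air
T0, Tf, Tmax = 300.0, 700.0, 400.0

tau = rho * C * V * d / (k * A)                    # thermal time constant, seconds
t = tau * math.log((Tf - T0) / (Tf - Tmax))        # time until the interior hits Tmax
print(t, t / 60)                                   # ~4795 s, ~80 minutes
print(t * 0.01, "s with only 1 cm of insulation")  # t scales linearly with d: ~48 s
```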
{ "language": "en", "url": "https://physics.stackexchange.com/questions/454034", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is the Higgs boson an elementary particle? If so, why does it decay? The Higgs boson is an excitation of the Higgs field and is very massive and short lived. It also interacts with the Higgs field and thus is able to experience mass. Why does it decay if it is supposed to be an elementary particle according to the standard model?
All fundamental or elementary particles decay after being born. Take, for example, electron. While being created in some process, it "decays" into "another electron" and many soft photons. As it is unlikely that "another electron" may stay without further interactions with its environment, it continues to interact, i.e., generally speaking, absorb and emit soft photons.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/454292", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "36", "answer_count": 6, "answer_id": 1 }
Why can blue LEDs be used for generating white light, but red LEDs cannot? LEDs consist of pn-junctions, so why can blue LEDs be used for generating white light, but red LEDs cannot?
A blue LED emits at a wavelength of about 450 nm, so its photons have more energy than red photons at about 600 nm. To create white light, phosphors are used; these were discovered a long time ago and are also used in fluorescent bulbs (converting UV into blue, green and red) and in old CRT TVs, which converted electrons into colours of light. A phosphor atom takes in a higher-energy photon and then produces a lower-energy photon (colour) and heat. There are many different phosphor chemicals (thousands to tens of thousands) that absorb UV, blue or other light and emit different colours of light.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/454379", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Is it possible to have a planet entirely made out of liquid water? Earth is mostly covered in oceans, but they only go a few kilometres deep. It's obviously not possible to have a planet the size of the earth to be made entirely out of water, because of the kind of pressures reached in the interior. a. But say that we did, how far down from the surface would water remain water before presumably turning to ice under the pressure? b. How large a 'planet' could we have made entirely out of water? Would it be able to attain the size of a small dwarf planet like Ceres?
The related question points out that water would become ice at a depth of around sixty km. This answers the first question. And this suggests we should expect a body of water about 120 km in diameter to remain water all the way through. This is far smaller than a dwarf planet and more the size of a large asteroid.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/454932", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Conservation of momentum if kinetic energy is converted to mass There is a moving object. Through an unspecified (science fiction) mechanism its kinetic energy is converted to mass and the object comes to rest. The mechanism is fictional but in good scifi it is good to adhere to the laws of nature. Does the conversion of kinetic energy to mass violate conservation of momentum? Or is conservation of momentum just a case of conservation of energy, which is conserved when converted to mass?
Through an unspecified (science fiction) mechanism its kinetic energy is converted to mass and the object comes to rest. That part is fine. Einstein gives us the equivalence between mass and energy. So converting the kinetic energy to mass is quite doable. Any form of energy storage will do this. Does the conversion of kinetic energy to mass violate conservation of momentum? No. Energy and momentum are different. Momentum must be conserved as well. The total mass/energy of the system before stopping and after stopping must have the same momentum.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/455764", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 1 }
What is the meaning of effective density in porous media? Is the density of air inside the pore space not the same as the density of free air? I am trying to understand the physical meaning of using an effective density in porous media. Is it a fictitious value? Can't I use the densities of the solid and the fluid as they are while modeling porous media?
The density inside a porous medium cannot be the same, in general, as the density outside. The simplest way to understand why is to look at the system (porous medium + air inside and air outside) as a two-component system. Interactions inside the porous medium modify the chemical potential of the air inside. The condition of equilibrium requires that the chemical potentials of the air inside and outside must be equal. Therefore, with temperature and pressure being the same, the concentration (density) of air inside and outside must be different. From the experimental point of view, people do not use chemical potentials, since the relation with density is unknown. Usually some estimate of the porous network volume is obtained with adsorption experiments, and the amount of air can be obtained by weighing the sample (or using Archimedes' principle).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/457236", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Is it possible to use visible light to transmit "radio" like AM/FM? When I see a big radio antenna, I like to imagine it's a giant incandescent light bulb filament standing vertically, but of an invisible light. So it "glows" the radio, performing oscillations which contain all the music/voice information. But in reverse, is it possible to create a practical experiment which modulates (or something) an analog audio signal and transmits it by glowing some sort of light, then have an antenna or sensor pick it up and reproduce the signal on a speaker? Is it possible to use a monopole antenna to detect light?
But in reverse, is it possible to create a practical experiment which modulates (or something) an analog audio signal and transmits it by glowing some sort of light, then have an antenna or sensor pick it up and reproduce the signal on a speaker? Yes, in principle. Analog modulation of optical signals is not super common, but it is done, for example in many CATV-over-optical-fiber systems. Free-space optical communication is commonly done between a hand-held remote control and a television set. Optical communication of audio signals is done in the TOSLINK interconnect. There's no technological reason these things aren't all combined into a single analog, free-space, audio communication system, only economic reasons: we have cheaper ways of doing it, so nobody has bothered to commercialize such a thing. It would be pretty easy to set up a classroom demonstration where an audio signal is sent to an LED, which illuminates a photodiode a few cm away, which connects through an amplifier to drive a speaker, if you wanted to demonstrate such a thing. Even with much older technology, there was the photophone developed by Alexander Graham Bell.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/457547", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
If I leave a glass of water out, why do only the surface molecules vaporize? If I leave a glass of water out on the counter, some of the water turns into vapor. I've read that this is because the water molecules crash into each other like billiard balls and eventually some of the molecules at the surface acquire enough kinetic energy that they no longer stay a liquid. They become vapor. Why is it only the molecules on the surface that become vapor? Why not the molecules in the middle of the glass of water? After all, they too are crashing into each other. If I put a heating element under the container and increase the average kinetic energy in the water molecules to the point that my thermometer reads ~100°C, the molecules in the middle of the glass do turn into vapor. Why doesn't this happen even without applying the heat, like it does to the surface molecules?
The water molecules in the liquid attract each other. Their thermal velocity distribution allows some molecules to be fast enough to overcome this attraction. If a molecule at the surface happens to be kicked by such a fast molecule, it may receive an impulse stronger than the attractive forces and therefore leave the liquid. The same kick inside the liquid would be passed on to other molecules very efficiently. If a gas bubble is to form inside the liquid, the reduced attraction between the molecules in the gas is part of the energy penalty that has to be paid for the bubble formation, in the form of heat from an external heat source.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/457717", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Collision of rotating sticks Q: Two identical uniform sticks are rotating about their stationary centers with equal angular speeds. The vertical stick is slowly raised until its top end collides with the center of the horizontal stick. The sticks join together to make a rigid object in the shape of a T. Assume that the collision takes place when the top stick lies in the plane of the paper. Immediately after the collision, one point (in addition to the CM) on the T will instantaneously be at rest. Where is this point? I was thinking that when/where they collide, their rotations would be in opposite directions, and so would cancel out and make the instantaneously still point the point of connection. Is this not correct? How could one prove or disprove this analytically/mathematically? Besides just thinking about it or visualizing it, I'm not sure how to go about this problem. Could anyone offer some guidance?
After thinking about it some more, we need to calculate the velocity of the T-joint where the two pieces connect. For now, consider the rotation of the vertical rod. Its angular momentum is $$L=\frac{m l^2}{12}\omega.$$ When they attach, angular momentum about the x axis is conserved. The center of mass of the combined pieces is $l/4$ below the T-joint, which will be the new center of the rotation about the x axis. The moment of inertia of the combination about this axis is $$I_{\rm new}=m \left(\frac{l}{4}\right)^2+\frac{m l^2}{12}+m \left(\frac{l}{4}\right)^2,$$ which is the moment of inertia of the original vertical bar about its original center of mass, plus the contribution of the $l/4$ offset from the original center of mass, plus the contribution of the top bar. The equation for the new angular velocity about this axis is $$L=I_{\rm new}\,\omega_{\rm new}$$ or $$\omega_{\rm new}=\frac{2}{5} \omega,$$ making the velocity at the T-joint $$v=\frac{l}{4}\,\omega_{\rm new}=\frac{l \omega }{10}.$$ The top bar is still rotating with angular velocity $\omega$, so the point to the right of the T-joint on the top bar that has the same velocity in the opposite direction, and is therefore instantaneously at rest, lies at a distance $$\text{dist}=\frac{v}{\omega }=\frac{l}{10}.$$
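The ratios can be double-checked with exact fractions in a couple of lines of Python (setting $m=l=\omega=1$, my own normalization):

```python
from fractions import Fraction as F

L0 = F(1, 12)                            # initial angular momentum, (1/12) m l^2 omega
I_new = F(1, 16) + F(1, 12) + F(1, 16)   # l/4 offset + vertical bar + top bar at l/4
omega_new = L0 / I_new                   # new angular velocity
v_joint = F(1, 4) * omega_new            # speed of the T-joint, in units of l*omega
print(omega_new, v_joint)                # 2/5 and 1/10: the rest point sits l/10 from the joint
```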
{ "language": "en", "url": "https://physics.stackexchange.com/questions/457912", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Einstein's Elevator - Constant acceleration eventually reaches $c$. Can't that be used to detect gravity vs acceleration? Objects with mass that continuously accelerate will eventually approach $c$, but cannot exceed it. So if I find myself in an elevator, unable to determine if I'm in a uniform gravitational field or accelerating, I will resolve this by: * *calculating the acceleration, a *solve the equation for t, the amount of time it takes for my mass to reach c accelerating at a. *If the acceleration stops before t, I was accelerating through empty space *If the acceleration continues beyond t, I must be in a uniform gravitational field Why won't this work?
In your idea, the at-rest observer sees the accelerating rocket (or elevator) acquire a velocity $\Delta v_1=a \Delta t_1$, then an additional $\Delta v_2=a \Delta t_2$ in the next interval of time $\Delta t_2$. You assumed all the $\Delta v_i$ will eventually add up to c or greater. Unfortunately, according to special relativity, velocities do not add this way; instead, a special velocity-addition formula must be used that prevents ever reaching c. Whereas velocities are not additive, there is a quantity called the Lorentz boost parameter (also called rapidity) $\lambda$ which is additive for the observer. It is related to the velocity by $\frac{v}{c}=\tanh(\lambda)$. Notice that for an infinitesimal velocity $\frac{\Delta v}{c}= \Delta \lambda$ and then $\frac{a\Delta t}{c}=\frac{\Delta v}{c}= \Delta \lambda$, where $a$ is the constant acceleration experienced in the rocket frame and $\Delta t$ is the time that passes in the rocket frame. Now add up all these $\Delta \lambda$ to get $\lambda=\frac{at}{c}$. The final velocity is then $\frac{v}{c}=\tanh(\frac{at}{c})$. Notice that as $t \rightarrow \infty$, $\frac{v}{c} \rightarrow 1$, and you never get a velocity greater than c.
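A quick numerical way to see the difference (my own illustrative numbers, with the times read as proper time aboard the rocket at a constant 1 g): the naive sum $at/c$ sails past 1, while $\tanh(at/c)$ only creeps toward it.

```python
import numpy as np

c = 3.0e8                       # m/s
a = 9.81                        # proper acceleration, about 1 g
years = np.array([0.1, 0.5, 1.0, 2.0, 5.0])
t = years * 3.15e7              # proper time in seconds

naive = a * t / c               # Galilean addition of the velocity increments
relativistic = np.tanh(a * t / c)   # rapidities add, velocities do not
print(np.round(naive, 3))           # exceeds 1 after about a year
print(np.round(relativistic, 3))    # approaches 1 but never reaches it
```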
{ "language": "en", "url": "https://physics.stackexchange.com/questions/458130", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 0 }
What's the difference in a $P$-$V$ diagram that is curved versus one that is straight? So what would the difference be between the graph above versus one that has the same initial and final points but the path is curved. I'm sure it has something to do with temperature, so does it mean temperature is constant? Or is there something else going on?
You have $PV=NkT$, and along the path, $P=-aV+b$, where I assume you know how to compute $a$ and $b$. Substituting the second equation into the first gives $$T=\frac{V(b-aV)}{Nk},$$ which means that, along the path, the temperature varies quadratically with the volume.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/458182", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Central Forces: Newtonian/Coulomb force vs. Hooke's law We know that a body under the action of a Newtonian/Coulomb potential $1/r$ can describe an elliptic orbit. On the other hand, we also know that a body under the action of two perpendicular Simple Harmonic Motions can also have an elliptic orbit. Hence I was wondering if we can differentiate between a body under the influence of a central potential $1/r$ and a body under the action of two perpendicular SHM’s just by observing the orbits without prior knowledge of the potential they are under. So my question is how can we differentiate between these two potentials?
I was wondering if we can differentiate between a body under the influence of a central potential 1/r and a body under the action of two perpendicular SHM's just by observing the orbits without prior knowledge of the potential they are under. As a first note, you have described the two motions in different ways: the former dynamically, the latter kinematically. In the first case your description points to the kind of force acting; in the second, to the motion being a composition of two SHMs. Of course you know the dynamics, as is shown by your title, where Hooke's law is recalled. A second point is: what do you exactly mean by "just by observing the orbits"? If you mean simply discovering that both orbits are ellipses, obviously there's no answer - they are indistinguishable. At the other extreme, I assume you don't think of identifying the center of force, which would give an easy solution. Yet this could be done by pure kinematics, computing the (vector) acceleration. But there is an intermediate way, which uses Kepler's second law (the law of areas). In the Newton/Coulomb case the speed has a minimum at one extreme of the major axis and a maximum at the other. In Hooke's case the speed at both extremes of the major axis is the same (and is the minimum), while the maximum is attained at the extremes of the minor axis. Thus a simple measurement of speeds would give the answer.
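That speed measurement is easy to make quantitative. Here is a small Python sketch (the semi-axes and the value of $GM$ are my own illustrative choices): for the Hooke ellipse the speeds at the two ends of the major axis are equal, while for a Kepler ellipse of the same shape the vis-viva equation gives very different speeds at perihelion and aphelion.

```python
import numpy as np

# Hooke ellipse: x = A cos(wt), y = B sin(wt); speed at (+A, 0) and (-A, 0)
A, B, w = 2.0, 1.0, 1.0
print(B * w, B * w)                    # equal (and minimal) speeds at both major-axis extremes

# Kepler ellipse with the same semi-axes: vis-viva v^2 = GM (2/r - 1/a)
GM = 1.0
a = A
e = np.sqrt(1 - (B / A) ** 2)
r_peri, r_apo = a * (1 - e), a * (1 + e)
v_peri = np.sqrt(GM * (2 / r_peri - 1 / a))
v_apo = np.sqrt(GM * (2 / r_apo - 1 / a))
print(v_peri, v_apo)                   # unequal: fast at perihelion, slow at aphelion
```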
{ "language": "en", "url": "https://physics.stackexchange.com/questions/458438", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
In particle colliders, according to QM, how are two particles able to "collide"? According to QM, we know that the act of measurement forces a particle to acquire a definite (up to experimental errors) position, so in a particle collider, like the one at CERN, by what means do we force particles to acquire a definite position, so that they "collide"? My gut says the answer will first point out that we are not actually colliding anything, but rather forcing the probability distributions of two particles, say protons, to overlap, and at the end they "somehow" acquire a position, hence "collide", but this is just an educated guess.
To get particles to actually collide in a collider, many, many particles are formed into a high-speed beam which is separated into clumps that circulate one way around the collider, while other particles are similarly circulating around in the opposite direction. When both beams have been given the right amount of energy, they are then aimed at one another so the clumps intersect inside a sensor array that detects the products of any collisions that take place there. This process involves millions upon millions of particles each time the clumps are steered together, and the collisions are set up in this way millions upon millions of times- which means that the experimenters rely on probability to furnish enough collision opportunities to make the experiment worthwhile- even though in any given collision, they do not have precise control over or knowledge of the positions of every single one of the particles in the beam clumps as they pass through the detector. Instead, they rely upon the detector to track the products of all the collisions that do occur as they get knocked out of the beam and spray outwards. The trajectories of those new particles can be traced backwards to infer the location of each collision, and (among other things) verify that the collision products actually did originate inside the detector and were not just background noise that the detector array responded to.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/458587", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23", "answer_count": 3, "answer_id": 1 }
Rabi flopping vs. rate equation approach? In Chapter 7 of C. J. Foot's Atomic Physics, Foot discusses the interaction of a two-level atom with radiation. He derives the phenomenon of Rabi flopping from the Schrodinger equation, using perturbation theory and the rotating wave approximation as is standard to do. Then he says this: The population oscillates between the two levels. When $Ωt = π$ all the population has gone from level 1 into the upper state, $|c_2(t)|^2 = 1$, and when $Ωt = 2π$ the atom has returned to the lower state. This behaviour is completely different from that of a two-level system governed by rate equations where the populations tend to become equal as the excitation rate increases and population inversion cannot occur. What is the distinction he draws here? How does one reconcile the fact that the use of rate equations generally does not allow for population inversion between the two levels, as he says, but that the Schrodinger equation does? Is there a more subtle issue, such as the assumption of coherence in the case of Rabi flopping, involved here? Is it just that the "rate equation" model is simply wrong? I'm happy to try to clarify the question if it needs it.
The distinction is precisely given by the balance between the 'coherent dynamics' (i.e., the Rabi flopping) and the rate of decoherence. In particular, the coherent dynamics of Rabi oscillations only holds when there is no dissipation, so the system remains in a pure quantum state. The rate equations governing the system in the presence of decoherence describe the evolution of the density matrix of the system. The system may begin pure but quickly end up in a statistical mixture of the two states. This is a more complete picture for the dynamics of a real-world quantum system, but this picture always simplifies to the picture of coherent dynamics in the limit of no decoherence.
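A minimal numerical sketch of this contrast (the Rabi frequency $\Omega$ and the excitation rate $R$ are made-up values): on resonance the coherent population is $P_2(t)=\sin^2(\Omega t/2)$, while the simple two-level rate-equation model referred to in the question (excitation at rate $R$, spontaneous emission neglected) saturates at $P_2=1/2$ and never inverts.

```python
import numpy as np

Omega = 2 * np.pi * 1.0   # assumed Rabi frequency (rad per unit time)
R = 1.0                   # assumed excitation rate in the rate-equation model

t = np.linspace(0, 3, 7)

# Coherent (Schrodinger) evolution on resonance: full inversion at Omega*t = pi
P2_rabi = np.sin(Omega * t / 2) ** 2

# Rate equations: dN2/dt = R(N1 - N2), with N1 + N2 = 1 and N2(0) = 0
P2_rate = 0.5 * (1 - np.exp(-2 * R * t))

for ti, pr, pq in zip(t, P2_rate, P2_rabi):
    print(f"t = {ti:.2f}:  rate-equation P2 = {pr:.3f}   Rabi P2 = {pq:.3f}")
# The rate-equation population never exceeds 1/2 (no inversion),
# while the Rabi population oscillates all the way up to 1.
```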
{ "language": "en", "url": "https://physics.stackexchange.com/questions/458717", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Transparency and visibility of light in tyndall effect Oil and water are both transparent however, they lose their transparency once they are mixed together. What is the reason for this? The size of the molecules are still the same so why does the substance become cloudy?
Although the size of the molecules is still the same, the oil (nonpolar) cannot dissolve in the water (polar), so oil droplets form inside the water, and these droplets are much larger than the molecules themselves. The droplets (an emulsion) scatter light and cause the Tyndall effect.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/458960", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Hydrodynamic interaction between two spheres in $Re\ll 1$ flow I am studying the interaction between two spherical particles of radius $a$ in a low Reynolds number flow. Because of linearity, I know that their respective velocities will be linear in the forces applied to them. Similarly, the force $\boldsymbol{F}_j$ applied on one particle contributes to the velocity $\boldsymbol{v}_i$ of the other through a term which is linear in $\boldsymbol{F}_j$. I write this as follows $$\boldsymbol{v}_1=(6\pi a)^{-1}\boldsymbol{F}_{1}+\boldsymbol{H}\left(r_{12}\right)\cdot\boldsymbol{F}_{2}$$ $$\boldsymbol{v}_2=(6\pi a)^{-1}\boldsymbol{F}_{2}+\boldsymbol{H}\left(r_{21}\right)\cdot\boldsymbol{F}_{1}$$ where $H$ is the hydrodynamic interaction tensor that depends on the relative positions $\boldsymbol{r}_{ij}$ of the two spheres ($i=1,2$). Here is my question: if I wanted to look at the limit of far field, in principle I would assume that $a\ll r_{ij}$ and look at what happens to the equations. This can be done formally by nondimensionalising with respect to the typical distance $\ell$ such that $r_{ij}\sim \ell$, define $$\epsilon=\frac{a}{\ell}$$ and take the limit $\epsilon\rightarrow 0$. However, this seems to present problems, because the friction terms are proportional to $a^{-1}$, so would diverge in such an expansion. What am I missing? If the divergence is indeed physically relevant, what is its meaning? How can one deal with it in order to study the limit of far field?
When distance between spheres is large compared to their size, the velocity of each sphere is predominantly determined by the balance between drag force and external force acting on it, and the inter-sphere interaction force is negligible. That's why the external force term is "blowing up" in relation to interaction force when $\epsilon\to 0$, and if the non-dimensionalization is done correctly then the drag force term also blows up at the same rate as the external force term (so there is balance between the two as $\epsilon\to0$). An equivalent (and perhaps less repugnant) way to say it is that interaction force is becoming negligible in relation to the other two forces on the body. Mathematically it means that to leading order in $\epsilon$ the velocity of each sphere is determined by the balance between drag force and external force, and inter-sphere interaction appears only at higher order.
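A sketch of the scaling argument, assuming the standard far-field (Oseen-type) form of the interaction tensor, $\boldsymbol H(r)\sim\frac{1}{8\pi r}\left(\mathsf I+\hat{\boldsymbol r}\hat{\boldsymbol r}\right)$ (in the same units as the question, with the viscosity absorbed), and writing $r_{12}=\ell\,\tilde r$, $\boldsymbol v=U\tilde{\boldsymbol v}$, $\boldsymbol F=F_0\tilde{\boldsymbol F}$: $$\tilde{\boldsymbol v}_1=\frac{F_0}{6\pi a\,U}\,\tilde{\boldsymbol F}_1+\frac{F_0}{8\pi \ell\,U}\,\frac{\mathsf I+\hat{\boldsymbol r}\hat{\boldsymbol r}}{\tilde r}\cdot\tilde{\boldsymbol F}_2 .$$ Choosing the velocity scale from the drag-force balance, $U=F_0/(6\pi a)$, makes the drag and external-force terms $\mathcal O(1)$ while the interaction term carries a factor $\tfrac{3}{4}\epsilon$ with $\epsilon=a/\ell$, so it is the interaction term, not the drag, that drops out as $\epsilon\to 0$.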
{ "language": "en", "url": "https://physics.stackexchange.com/questions/459154", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Does the existence of electrons validate the integral form of electric fields? For an arbitrary charged object, it seems to be the case that we express it as a continuous sum (a sum on the reals, i.e. an integral) of point charges $dq$ that obey the canonical Coulomb's law force. That is to say, for an arbitrary charged object, we split it up into tiny $dq$'s (located at $\vec r'$), with the field contribution at a reference point $P$ at $\vec r$ equal to (letting $\vec r - \vec r' = \vec \zeta$) $$\vec E_{dq} = k \, dq \, \frac{\hat{\zeta}}{\zeta^2}$$ implying $$\vec E = k \int \frac{\hat \zeta}{\zeta^2}\, dq.$$ But why do we assume that $dq$ contributes the form $\vec E_{dq}$? It's almost like there's a fundamentally point-like charged particle composing all charged objects... aha! Electrons. But wasn't this theory established independently of electrons? How could we justify it without electrons? Do we need to? Is that even the justification for it? Why are we allowed to assume all charged objects are made of infinitesimal point charges, and do electrons have anything to do with it?
If I understand the question correctly, it is about the impact on the field of the difference between a continuous and a discrete charge distribution. This difference manifests itself as shot noise.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/459375", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
What is energy in quantum mechanics? Is it wrong to say energy is the expectation value of Hamiltonian? Or should I say energy is the eigenvalue of Hamiltonian?
You must be a bit more explicit in your language than in the classical case. Either could be correct, but I lean towards the eigenvalue description and I'll explain why. First of all, the Hamiltonian $\hat{H}$ is something which "belongs" to the system. Energy is something which belongs to a state. So for example if a state $|\psi_n\rangle$ has $$ \hat{H}|\psi_n\rangle = E_n |\psi_n\rangle $$ we can unambiguously say "this state has energy $E_n$", because every time you measure it you will get energy $E_n$, and also the average energy of this state, $E_{avg}$, is $E_n$. However, as was just pointed out in "Superposition principle forbids quantisation?", we can consider states which are superpositions of energy eigenstates, such as $a|\psi_n\rangle + b|\psi_m\rangle$, which can have average energy anywhere between $E_n$ and $E_m$. One could say it is a state with this new energy, $E_{avg} = |a|^2E_n + |b|^2E_m$, but I think that would be misleading, because if you do a measurement on this state you will never measure energy $E_{avg}$; you will always get either $E_n$ or $E_m$. The language I would use is either that it is a state which is in a superposition of energy states or (more daringly) that it is a state which has both energy $E_n$ and $E_m$.
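A small numerical illustration of this point (the eigenvalues and amplitudes are arbitrary choices): the expectation value is $|a|^2E_n+|b|^2E_m$, but simulated individual measurements only ever return one of the eigenvalues.

```python
import numpy as np

En, Em = 1.0, 3.0                    # assumed eigenvalues of H
H = np.diag([En, Em])                # H written in its own eigenbasis

a, b = 1 / np.sqrt(2), 1 / np.sqrt(2)
psi = np.array([a, b])               # superposition a|psi_n> + b|psi_m>

E_avg = np.real(psi.conj() @ H @ psi)
print("Expectation value <H> =", E_avg)   # 2.0 here, which is not an eigenvalue

# Simulated measurements: outcomes are always En or Em, with Born probabilities
rng = np.random.default_rng(0)
outcomes = rng.choice([En, Em], size=10, p=[abs(a)**2, abs(b)**2])
print("Individual measurement outcomes:", outcomes)
```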
{ "language": "en", "url": "https://physics.stackexchange.com/questions/459507", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Why do we neglect higher order terms in Cauchy's Equation? Cauchy's equation for finding the refractive index for light of a given wavelength is: $$n(\lambda)=A+\dfrac{B}{\lambda^2}+\dfrac{C}{\lambda^4}+\dots$$ This formula, however, is simplified to $n(\lambda)=A+\dfrac{B}{\lambda^2}$ by neglecting higher order terms. This is what I don't understand. The wavelength of visible light is approximately $6\cdot10^{-7}\,\mathrm{m}$, which is less than $1$. Shouldn't the contribution of the higher order terms then be larger than that of the lower order terms?
The Cauchy equation is an empirical relationship. However, the refractive index can be obtained from the classical Lorentz model, where a light wave creates oscillatory motion of the electrons and the electron displacements form dipole moments. This polarizes the medium, and the refractive index can be estimated theoretically. (https://www.phys.ksu.edu/personal/cdlin/class/class02a/s2-jing-li.ppt) $$n=\sqrt{1+\frac{\omega_p^2}{\omega_0^2-\omega^2-i\gamma\omega} }$$ Here $\omega_p$ is the plasma frequency and $\omega_0$ denotes the resonance absorption frequencies. If we can neglect absorption, we set $\gamma=0$: $$n=\sqrt{1+\frac{p^2}{1-x^2}}\approx a+bx^2+cx^4+\dots$$ where $x=\frac{\omega}{\omega_0}\sim\frac{\lambda_0}{\lambda}$ and $p=\omega_p/\omega_0$ is a constant. When we Taylor expand, we see that to this level of approximation the empirical equation is justified, and we can expect the Cauchy equation to fit the refractive index in limited spectral regions, for some materials; higher orders do not add much. More often the Sellmeier equation is used, which describes the behavior of the refractive index better.
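A quick symbolic check of that expansion (a sketch, with $p$ kept symbolic): the series of $n=\sqrt{1+p^2/(1-x^2)}$ around $x=0$ contains only even powers of $x\sim\lambda_0/\lambda$, which is exactly the Cauchy form $A+B/\lambda^2+C/\lambda^4+\dots$, with contributions that shrink when $x\ll1$.

```python
import sympy as sp

x = sp.symbols('x', positive=True)
p = sp.symbols('p', positive=True)

n = sp.sqrt(1 + p**2 / (1 - x**2))
series = sp.series(n, x, 0, 7).removeO()
print(sp.simplify(series))
# Only even powers x^0, x^2, x^4, x^6 appear; with x ~ lambda_0/lambda this is
# n ~ A + B/lambda^2 + C/lambda^4 + ..., i.e. the Cauchy form, and for x << 1
# each successive term is smaller, which is why the higher orders are dropped.
```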
{ "language": "en", "url": "https://physics.stackexchange.com/questions/459629", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Hooke's full unapproximated law It is known that Hooke's law, relating the restoring force of a spring to the distance of retraction from the equilibrium position, is only an approximation. That is, the equation $F=-kx$ is only the linear term that approximates the relationship, but it gets less accurate the more the spring is retracted from the equilibrium position. There is also an elastic limit for the spring, which makes the relationship vertically asymptotic at large enough distances, and the law completely breaks down there. The relationship looks to me more like a tangent function than a line when considering the increasing deviation and asymptotic behavior. Since $F=-kx$ is only an approximation, then what is the full story? What is the actual relationship? I couldn't find the answer anywhere on the internet. Is it a transcendental function, or perhaps some non-elementary function? Or does the function defining the relationship depend on the material used and local physical quantities?
The answer isn't well defined, since the full behaviour is different for different materials. It is not that the function is some mysterious transcendental function, but rather some function typically described by a series of powers of x with coefficients dependent on the material.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/459711", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
What is the electromagnetic 4-vector potential in Reissner-Nordström coordinates? If you have the usual Reissner-Nordström metric of a charged black hole, is the electromagnetic potential of the black hole still: $$A_0(r,\theta,\phi) = \frac{Q}{r}$$ in these units?
A Schwarzschild black hole has no charge and no electrostatic potential. This is the potential of a Reissner-Nordström black hole, in Gaussian units. If you are talking about the potential of a point charge at rest outside a Schwarzschild black hole, with $r$ being some measure of distance from the charge rather than the Schwarzschild radial coordinate, then this is not its potential. The potential and electrostatic field of a charge at rest outside the hole are not spherically symmetric around the charge as in flat spacetime, because the curvature of the black hole warps the field. The electrostatic potential of a point charge at rest outside a Schwarzschild black hole was investigated by Copson in 1928 and his solution was corrected by Linet in 1976. The solution is $$V=\frac{q}{b r}\left[ \frac{(b-M)(r-M)-M^2\cos\theta}{\sqrt{(r-M)^2+(b-M)^2-2(b-M)(r-M)\cos\theta-M^2\sin^2\theta}}+M \right]$$ where $M$ is the mass of the black hole, $r$ and $\theta$ are Schwarzschild coordinates, $q$ is the charge of the point particle outside the horizon, and the point particle is at rest at $r=b$, $\theta=0$. This is written in units where $G=c=1$. Here is a contour plot of the potential. The larger blue circle is the black hole. The smaller blue circle is the point particle. The white area is an artifact of the rendering, where the contour lines get too close together. The horizon is an equipotential surface. The asymmetry of the field causes a gravitationally-induced electrostatic self-force on the charged particle. It is directed away from the hole and, in the frame of a freely-falling observer instantaneously at rest at the position of the charge, has magnitude $$F=\frac{GMq^2}{c^2b^3}.$$ This was first calculated exactly by Smith and Will in 1980. https://journals.aps.org/prd/abstract/10.1103/PhysRevD.22.1276 Interestingly, as $b\rightarrow2M$ so that the charge is just outside the horizon, the potential does become symmetrically centered on the hole, rather than on the charge, and is just $q/r$ outside and 0 inside.
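For anyone who wants to reproduce a contour plot like the one described, here is a sketch that simply evaluates Linet's expression on a grid (units $G=c=1$; the values $M=1$, $b=4$, $q=1$ are illustrative, and the region inside the horizon $r<2M$ is masked out):

```python
import numpy as np
import matplotlib.pyplot as plt

M, b, q = 1.0, 4.0, 1.0   # illustrative values, G = c = 1

# Grid in the x-z plane; r, theta are Schwarzschild coordinates, charge on the z-axis at r = b
x = np.linspace(-10, 10, 400)
z = np.linspace(-10, 10, 400)
X, Z = np.meshgrid(x, z)
r = np.sqrt(X**2 + Z**2)
theta = np.arctan2(np.abs(X), Z)      # angle measured from the direction of the charge

num = (b - M) * (r - M) - M**2 * np.cos(theta)
den = np.sqrt((r - M)**2 + (b - M)**2
              - 2 * (b - M) * (r - M) * np.cos(theta)
              - M**2 * np.sin(theta)**2)
V = q / (b * r) * (num / den + M)

V = np.ma.masked_where(r < 2 * M, V)   # hide the region inside the horizon

plt.contour(X, Z, V, levels=np.linspace(0.02, 0.5, 25))
plt.gca().set_aspect('equal')
plt.title("Potential of a charge at rest at r = b outside a Schwarzschild hole")
plt.show()
```

Evaluating the expression on the horizon $r=2M$ gives the constant value $q/b$, which checks the statement that the horizon is an equipotential surface.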
{ "language": "en", "url": "https://physics.stackexchange.com/questions/459791", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Spot welding technique I am trying to make electrical contacts by the spot welding technique. For Hall effect measurements, the contact size should be as small as possible, and the sample is very, very thin (0.025 mm). While spot welding, I am somehow able to make contacts, but it creates a hole in my sample. How do I get rid of this? Is there any other way of making such fine contacts on such thin samples?
Yes, there are several, as follows. First there is thermosonic bonding, in which a heated tool applies local pressure to the two pieces being bonded, and then a burst of ultrasound is used to vibrate that tool. This technique is used to make contact between a silicon chip and a kapton/gold flexible leadframe (or "flex circuit") in a process called TAB or Tape Automated Bonding. Second, you can try solderbonding, in which a paste containing finely milled solder particles is silkscreened onto (for example) a printed circuit board, and a silicon chip is then placed upside-down atop the solder paste areas and heated. The solder melts and connects the PC board to the interconnect pads on the chip in what is known as the flip-chip process. Third, you can glue the parts together using an epoxy containing finely milled silver spherules which is called conductive epoxy.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/460068", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why is the internal energy the expected value of energies of individual particles? In this Wikipedia page: https://en.wikipedia.org/wiki/Partition_function_(statistical_mechanics) .. the total sum of energy in an ideal gas is given as: $$\langle E \rangle = \sum_s E_s P_s $$ where $s$ runs over all states. But isn't this just the expected value of the energy for a single particle? Shouldn't the sum of energy be just: $$ E = \sum_s E_s $$ Why do we take the expected value?
Reading carefully the Wikipedia page, one finds that the internal energy is the "ensemble average energy, which is the sum of the microstate energies weighted by their probabilities". Therefore, $E_s$ is the energy of the s-th microstate, where a microstate is the microscopic state of $N$ particles. Once the equilibrium ensemble has been fixed, the probability of each microstate is a function of the energy of the microstate and the ensemble average must be $$ \left<E\right>= \sum_s P_s E_s $$
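As a concrete illustration, here is a toy three-level system (the energies and temperature are made-up values, $k_B=1$): the Boltzmann weights give the $P_s$, and the internal energy is the weighted sum over microstates, not the bare sum of the $E_s$.

```python
import numpy as np

E = np.array([0.0, 1.0, 2.0])   # assumed microstate energies (units with k_B = 1)
T = 1.5                         # assumed temperature

weights = np.exp(-E / T)
Z = weights.sum()               # canonical partition function
P = weights / Z                 # probability of each microstate

E_avg = np.sum(P * E)           # <E> = sum_s P_s E_s
print("Z =", Z)
print("P_s =", P)
print("<E> =", E_avg, " (compare the bare sum of energies:", E.sum(), ")")
```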
{ "language": "en", "url": "https://physics.stackexchange.com/questions/460191", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Radial term in the spin-orbit coupling The spin-orbit interaction for the hydrogen atom is of the form $\hat{H_1} = A\frac{1}{r^3}\pmb{\hat{L}}\cdot \pmb{\hat{S}}$ Now in my course, we treated this interaction by working in the basis of total angular momentum $\pmb{J}$ and from there calculated the energy eigenvalues of $\hat{H_1}$ and assumed that these were the corrections to the energy levels. My question is, what exactly is $\frac{1}{r^3}$? Because if we treat this term as an operator, then it is not obvious at all that $\hat{H_1}$ should commute with $\hat{H_0} = \frac{\pmb{\hat{p}}^2}{2m}-\frac{e^2}{\hat{r}}$. This non-commutativity then implies that you can't correct the energy eigenvalues of $\hat{H_0}$ with those of $\hat{H_1}$. So my question is, are we actually doing some kind of perturbation theory where we assume that $\frac{1}{\hat{r}^3}$ is actually $\langle\frac{1}{\hat{r}^3}\rangle$, i.e. the expectation value in the eigenstates of $\hat{H_0}$? In that case the two operators would commute and the corrections to the energies would make sense. Thank you.
You have uncovered one of the pitfalls of trying to treat spin non relativistically. If you had used the Dirac Hamiltonian rather than the Schrodinger Hamiltonian you would not encounter this problem. Frequently these $1/r^3$ singularities can be avoided in non relativistic treatments by using the fact that the nucleus is not really a point charge. It is also important to use the Foldy/Wouthuysen reduction methodology to make sure that you are being consistent in your non relativistic treatment. I addressed this issue in a paper long ago: L. D, Miller, "A Foldy-Wouthuysen reduction of the atomic relativistic Hartree-Fock equations", J. Phys B., At. Mol. Phys. 20 (1987), p. 4309-4316
{ "language": "en", "url": "https://physics.stackexchange.com/questions/460326", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How come planets with different masses can orbit at the same velocity at the same altitudes? Angular momentum is equal to r × p and angular momentum is also what gives planets with lower orbits a higher speed (because angular momentum is conserved). So as r decreases either m or v (p=mv) has to increase and as the mass can’t change the velocity has to increase. This I can understand, but the velocity of an orbit is the same for all planets at the same altitude. This doesn’t make sense to me as I would think that the velocity wouldn’t be as high if the mass is bigger, but the planets would really orbit at the same velocity, right? What am I missing here? How come the velocity can be the same even though the masses are different?
That the velocity of an orbiting body is independent of the orbiting body's mass is a consequence of Kepler's laws. Kepler's laws, however, are only approximately correct. Newtonian mechanics says otherwise: it says that, ignoring the influences of other planets, the angular velocity of a planet orbiting the Sun is $\sqrt{G(M_{\text{sun}} + M_{\text{planet}})/R^3}$. For our solar system, even the largest planet is less than 1/1000th of the mass of the Sun. This means Kepler's laws are good to two or three decimal places. Beyond the third decimal place, Kepler's laws are not so good. A specific example: The Moon's mass is about 0.0123 times that of the Earth. This means that a tiny object at the Moon's position would orbit the Earth a bit more slowly than does our Moon, as the numerical check below shows.
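Here is the numerical check (standard textbook values for $G$, the Earth, the Moon and the mean Earth–Moon distance):

```python
import numpy as np

G = 6.674e-11          # m^3 kg^-1 s^-2
M_earth = 5.972e24     # kg
M_moon = 7.342e22      # kg (about 0.0123 M_earth)
R = 3.844e8            # m, mean Earth-Moon distance

omega_test = np.sqrt(G * M_earth / R**3)             # tiny test object
omega_moon = np.sqrt(G * (M_earth + M_moon) / R**3)  # the actual Moon

print("test object :", omega_test, "rad/s")
print("Moon        :", omega_moon, "rad/s")
print("ratio       :", omega_moon / omega_test)      # ~1.006, i.e. a ~0.6% difference
```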
{ "language": "en", "url": "https://physics.stackexchange.com/questions/460742", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Why aren't particles constantly "measured" by the whole universe? Let's say we are doing the double slit experiment with electrons. We get an interference pattern, and if we put detectors at slits, then we get two piles pattern because we measure electrons' positions when going through slits. But an electron interacts with other particles in a lot of different ways, e.g. electric field, gravity. Seems like the whole universe is receiving information about the electron's position. Why is it not the case and the electron goes through slits "unmeasured"? Bonus question: in real experiments do we face the problem of not "shielding" particles from "measurement" good enough and thus getting a mix of both patterns on the screen?
There are time-scales related to interactions, or, equivalently, interaction rates. These interaction rates are often calculated in lowest order based on Fermi’s Golden Rule. An experiment that measures electron interference needs to make sure that the time-of-flight of the electrons from the electron source to the observation screen is much shorter than any of the time-scales of possible interactions. In interference experiments, we therefore define a coherence time for the interfering particles. In real experiments, we do indeed face the problem of shielding particles from being measured by the environment, before they interfere. For example, in electron interferometers realized in solid-state devices, we have to go to very low temperatures, where the interactions between electrons and phonons become very 'slow' (their rate becomes very small). We also have to make sure that the devices are small enough that the Coulomb-interaction between electrons, which persists even at the lowest temperatures, does not spoil the interference (the decoherence rate due to electron-electron interaction does also depend on temperature: the rate becomes smaller with decreasing temperature).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/460855", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "66", "answer_count": 6, "answer_id": 4 }
Parity transformation and mirror reflection I have some trouble understanding what exactly is parity transformation. The definition of parity transformation is a flip in the sign of all three spatial coordinates, ie $$(x,y,z) \rightarrow (-x,-y,-z).$$ Consider a stationary particle at a position $(a,b,c)$ in space described by a coordinate system $(x,y,z)$. Does parity transformation mean that the particle is still at the exact point in space but its position is now described by $(-a,-b,-c)$? But often parity is talked about as a mirror reflection and it seems to me that a mirror reflection means physically moving the particle from point $(a,b,c)$ to $(-a,-b,-c)$ in a coordinate system $(x,y,z)$. Which of the above 2 cases is parity transformation really referring to? If it refers to both cases, why are the two cases the same? In one case a particle is fixed in space while in another case a particle is moved to another point in space.
These are more general properties of parity which, I think, give a better explanation of what this symmetry is really about. The general definition of parity is an operator $\mathcal{P}$ with the properties $\mathcal{P} = \mathcal{P}^*$ and $\mathcal{P}^n=\mathbb{1}$, where $*$ denotes complex conjugation. Most of the time, people stick with $\mathcal{P}^2=\mathbb{1}$. Perhaps it is also easier to look at parity in a discrete Hilbert space in a one-dimensional system. For this it is enough to look for mirror symmetry. That is, consider a tight-binding chain in which each site is described by the basis \begin{equation} \lbrace |\psi_1\rangle,\dots,|\psi_N\rangle\rbrace. \end{equation} The parity $\mathcal{P}$ should act on this basis as $|\psi_1\rangle\rightarrow|\psi_N\rangle$, $|\psi_2\rangle\rightarrow|\psi_{N-1}\rangle$, and so forth. Fixing the basis, you can write the parity operator as \begin{equation} \mathcal{P} = \begin{pmatrix} & & & 1 \\ & &1& \\ &\unicode{x22f0}& &\\ 1 & & & \end{pmatrix}, \end{equation} where all the other entries are zero. Furthermore, this representation is also suitable to represent parity in discrete models which are not one-dimensional, for example benzene, ethylene, etc. Finally, from my point of view, parity is explicitly put into action when you incorporate relativity into quantum mechanics, i.e. in Quantum Field Theory. There, the position is $x^\mu = (t,\vec{x})$ and, fixing a metric, the parity operator is given by the matrix \begin{equation} \mathcal{P}^\mu_{\;\nu} = \begin{pmatrix} 1 & & & \\ &-1 & &\\ & & -1 & \\ & & & -1 \end{pmatrix}. \end{equation} Then $\mathcal{P}: (t,\vec{x}) \rightarrow (t,-\vec{x})$, which amounts to a matrix multiplication $\mathcal{P}x$. As a bonus, returning to non-relativistic quantum mechanics, people have studied parity-symmetric Hamiltonians a lot for the past 10 years or so.
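A small numerical check of the anti-diagonal representation for an assumed chain of $N=5$ sites: the matrix is real, so $\mathcal P=\mathcal P^*$, and it squares to the identity.

```python
import numpy as np

N = 5
P = np.fliplr(np.eye(N))                # anti-diagonal ones: |psi_j> -> |psi_{N+1-j}>

print(np.allclose(P, P.conj()))         # True: P = P*
print(np.allclose(P @ P, np.eye(N)))    # True: P^2 = identity

v = np.arange(1, N + 1)                 # amplitudes on sites 1..N
print(P @ v)                            # site order reversed: [5 4 3 2 1]
```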
{ "language": "en", "url": "https://physics.stackexchange.com/questions/460996", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Do topological transitions only occur at Dirac points? Topological phase transitions happen when the band gap closes. It is not true that all band crossings are topological. There are Dirac (linear) band crossings, quadratic band crossings, Dirac-like triply degenerate band crossings, double Dirac cone crossings, semi-Dirac transitions (linear in one direction and quadratic in another) etc. Even in 1D, all the band crossings I recall look linear. In 2D, all the band crossings I recall are Dirac cones. I feel like I have been told that some quadratic dispersions can be a topological transition but I am not sure if I remember correctly. Are all topological phase transitions in electronic bands/ photonic bands linear/ Dirac points?
I'll leave the aspect of classifying band closings at topological transitions to others, and focus on this statement: Topological phase transitions happen when the band gap closes. Although that's the standard story, there's a growing understanding that you can actually have topological transitions without gap closings. These so-called first-order topological transitions require some degree of interaction between electrons (or possibly other constituent particles). Topological transitions in non-interacting electron systems should still be continuous, and have an associated gap closing. Relevant literature: * *Amaricci et al. First order character and observable signatures of topological quantum phase transitions, Phys. Rev. Lett. 114, 185701 (2015). *Imriška, Wang, and Troyer First order topological phase transition of the Haldane--Hubbard model, Phys. Rev. B 94, 035109 (2016) *Juricic, Abergel and Balatsky First-order quantum phase transition in three-dimensional topological band insulators, Phys. Rev. B 95, 161403 (2017)
{ "language": "en", "url": "https://physics.stackexchange.com/questions/461112", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why does friction cause energy to be lost as heat when it appears to be an energy transfer mechanism? For example, when we move/walk, we apply a force (via friction) on the Earth, and the Earth in turn on us. So essentially I see it as an energy transfer, as follows: suppose I move in the same direction as the Earth's rotation. Here I am applying a force in such a way as to increase my velocity from the initial $\Omega_{\text{earth}} \times R_{\text{earth}}$, so as to move relative to the Earth. I also reduce the Earth's angular rotation during this walking motion due to the force I apply on the Earth. As a whole the system has the same energy. In light of this, friction doesn't appear to me as a heat-dissipating force.
You are describing static friction. Static friction is not dissipative. It's only when your foot skids on the surface that the friction force becomes sliding or kinetic friction. It is only sliding or kinetic friction that dissipates energy in the form of heat. Hope this helps.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/461241", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Muon decay in muon's frame of reference, time the muon expects us to record I have already asked a question regarding this concept but it was flagged as a duplicate. I know this misconception is very common for special relativity, but I haven't found a question that talks about the misconception I'm having, or at least I haven't made the link between their question and mine. My question is: why does it look like in this case the factor by which the clocks are slower is $γ^2$? On Earth, let's say a muon has to travel a distance $d$ at a speed $v$. We expect the time it takes for the muon to reach us to be $t$, where $t=d/v$, but factoring in the time dilation of moving clocks, the muon has only experienced time $t/γ$ in our frame of reference, hence we can observe it on Earth. But in the muon's frame of reference, the distance it has to travel is shorter by a factor of gamma, so the time it experiences to travel to Earth will be $t_2$, where $t_2=(d/γ)/v$. This is the same time as we calculated for the muon in our frame of reference. But according to the muon, our clocks are also running slow, hence we should record in our labs a dilated time, $t_2/γ$. This is our original expected time $t$, but reduced by a factor $γ^2$. Why is this so? I am sure it has something to do with the simultaneity of events but I don't know where to start.
Indeed, the muon will think that our clock is not only running slow, but also that the lab clock at the destination was set ahead of the clock at the starting point, by the usual relativistic $dv/c^2$ simultaneity offset. The final reading we take down in the lab, according to the muon, is $$\frac{t_2}{\gamma} + \frac{dv}{c^2} = \frac{t}{\gamma^2} + \frac{v^2}{c^2}t = t\left(1-\frac{v^2}{c^2}\right) + \frac{v^2}{c^2}t = t$$ which is perfectly consistent with what we calculate in our frame.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/461596", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How do ion thrusters create a force against the spacecraft? I recently saw an old thread, How do reaction engines create a force against the rocket?, get bumped up, and it asks a good question: in a chemical rocket, the fact that the rocket exhaust gets propelled away means that Newton's Third Law requires that there be some force acting on the rocket in the other direction, but the Third Law itself does not actually specify what that force is, with the answer being that it's the pressure of the gas in the combustion chamber and on the engine bell that produces an unbalanced force on the engine. I'd like to ask exactly the same question, but for an ion thruster instead. As in the chemical rocket, the fact that there's an ion stream going away at high velocity implies that there needs to be a point at which the outgoing ions exert some form of electric force on the thruster. So: what is the nature of this force, and how does it work?
The principle is very simple, though of course actually constructing the things is a lot more complicated. A propellant gas is ionised between two charged plates. The cations are attracted to the negative plate and repelled by the positive plate and acquire an energy $E = qV$ and a momentum $p = \sqrt{2mE}$. The plates, due to the potential they're being held to, also carry an electric charge (which is what attracts and repels the ions), and these charges also feel a (small) unbalanced force coming from the ions, so the plates (and the spaceship they are attached to) acquire an equal and opposite momentum $-p$. So the electrostatic force pushes the cations one way and the spaceship the other. The electrons acquire an equal kinetic energy but since they are much lighter than the cations their momentum is negligible. It is the cations that propel the spaceship. The negative plate is a grid, so the majority of the cations fly straight through the grid and out the other side. At this point the electrons collected at the positive plate are recombined with the charged exhaust gas to neutralise it. The neutralised gas feels no electrostatic force so it shoots off with basically the same momentum as it gained when accelerated between the plates. The end result is simply that the propellant goes one way and the two charged plates, and the spaceship they are attached to, goes the other way.
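A back-of-the-envelope sketch with assumed numbers (a singly charged xenon ion accelerated through 1 kV and a 1 A beam current; real thrusters differ in detail, this is only a ballpark):

```python
import numpy as np

q = 1.602e-19            # C, charge of a singly ionised atom
m = 131.3 * 1.661e-27    # kg, xenon ion mass
V = 1000.0               # V, assumed grid potential difference

E = q * V                # kinetic energy gained between the plates
v = np.sqrt(2 * E / m)   # exhaust speed
p = np.sqrt(2 * m * E)   # momentum per ion; the spacecraft picks up -p per ion

print(f"exhaust speed  ~ {v/1e3:.1f} km/s")
print(f"momentum/ion   ~ {p:.2e} kg m/s")

# Thrust for an assumed beam current of 1 A (ions per second = I/q)
I = 1.0
thrust = (I / q) * p
print(f"thrust at 1 A  ~ {thrust*1e3:.1f} mN")
```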
{ "language": "en", "url": "https://physics.stackexchange.com/questions/461755", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
How can we derive from $\{G,H\}=0$ that $G$ generates a transformation which leaves the form of Hamilton's equations unchanged? In the Hamiltonian formalism, a transformation generated by a function $G$ is defined to be a symmetry if $$\{G,H\}=0 ,$$ where $H$ denotes the Hamiltonian. On the other hand, a symmetry is a transformation which maps each solution of the equations of motion into another solution. And this requires that the form of the equations of motion remains unchanged. Therefore, it should be possible to show that it follows from $\{G,H\}=0$ that Hamilton's equations are unchanged by the transformation generated by $G$. Concretely, we have \begin{align} q \to q' &= q + \epsilon \frac{\partial G}{\partial p} \\ p \to p' &= p - \epsilon\frac{\partial G}{\partial q} \\ H \to H' &=H + \epsilon\{H,G\} \end{align} and we want to show that if for the original $q$ and $p$ Hamilton's equations \begin{align} \frac{dp}{dt}&= -\frac{\partial H}{\partial q} \\ \frac{dq}{dt} &= \frac{\partial H}{\partial p} \end{align} hold, they also hold for $q'$ and $p'$: \begin{align} \frac{dp'}{dt}&= -\frac{\partial H'}{\partial q'} \\ \frac{dq'}{dt} &= \frac{\partial H'}{\partial p'} \end{align} How can this be shown explicitly? Using the transformation rules explicitly yields for Hamilton's first equation \begin{align} \frac{dp}{dt}&= -\frac{\partial H}{\partial q} \\ \therefore \quad \frac{d(p' + \epsilon\frac{\partial G}{\partial q})}{dt}&= -\frac{\partial (H + \epsilon\{H,G\} )}{\partial (q' - \epsilon \frac{\partial G}{\partial p} )} \\ \therefore \quad \frac{d(p' + \epsilon\frac{\partial G}{\partial q})}{dt}&= -\frac{\partial H }{\partial (q' - \epsilon \frac{\partial G}{\partial p} )} \\ \end{align} But I've no idea how to proceed from here.
Express your equations of motion as $$ \dot q= [H,q]\\\\\dot p=[H,p] $$ Note that, on the mass shell, any function $f(q,p)$ obeys $\dot f(q,p)=[H,f]$. Now just hit the bracket $[G,\,\cdot\,]$ on both sides of the equations of motion. Since $[G, H]=0$, $G$ is conserved, $\dot G=0$, and so the bracket with $G$ commutes with the time derivative. Using the Jacobi identity on the right hand side gives you $$ [G,[H,q]] = [H,[G,q]] + [[G,H],q] = [H,[G,q]] $$ Now you are finished, since you can use the linearity of $[H,\,\cdot\,]$ to add this new equation to your old equation: $$ \frac{d}{dt}(q+\epsilon [G,q])=[H,(q+\epsilon [G,q])] $$ The same is true for the $p$-equation.
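As a concrete (though not general) check of this argument, here is a symbolic computation for one assumed example, a 2D isotropic oscillator with $G=L_z$:

```python
import sympy as sp

x, y, px, py = sp.symbols('x y p_x p_y')
qs, ps = [x, y], [px, py]

def pb(f, g):
    """Poisson bracket {f, g}."""
    return sum(sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)
               for q, p in zip(qs, ps))

H = (px**2 + py**2) / 2 + (x**2 + y**2) / 2   # rotationally symmetric Hamiltonian
G = x * py - y * px                            # angular momentum L_z

print(pb(G, H))   # 0  ->  G generates a symmetry

# The Jacobi-identity step used above: {G,{H,q}} = {H,{G,q}} when {G,H} = 0
for q in qs + ps:
    lhs = pb(G, pb(H, q))
    rhs = pb(H, pb(G, q))
    print(sp.simplify(lhs - rhs))   # 0 for each coordinate and momentum
```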
{ "language": "en", "url": "https://physics.stackexchange.com/questions/461874", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Density $\rho$ in the Friedmann equations In the Friedmann equations: $$\ddot{a}=-\frac{4}{3}\pi G\left(\rho+\frac{3p}{c^2}\right)a$$ $$\dot a^2+Kc^2=\frac{8}{3}\pi G\rho a^2$$ I don't understand whether $\rho$ is the mass density deriving from $m_0$ (the rest mass) or from $\gamma m_0$. In other words, is $\rho c^2$ the energy density due only to the rest energy $E=m_0 c^2$, or due to the total energy $E=\gamma m_0 c^2$ (rest energy, kinetic energy, internal energy, ...)? I think that $\rho$ is $\frac{m_0}{V_0}$ (where $V_0$ is the proper volume), so that I could write $\rho=\frac{m_0}{V_0}=\frac{E}{V_0 c^2}=\frac{\epsilon}{c^2}$, with $\epsilon$ the (rest) energy density, but I'm not sure. Could someone clarify my ideas, please? Summary We can summarize the point thanks to the help of Ben Crowell and Elio Fabri: $\rho$ is in general the energy density of the cosmological fluid, but we are in the comoving frame, so $\rho$ is related to the rest energy ($Mc^2$) of the entire fluid (with mass $M$) because we see the fluid at rest. All the particles (galaxies) of the cosmological fluid contribute to this rest energy through their own energy (rest energy, kinetic energy, interaction energies), as in the example by Ben Crowell of the proton in a body.
The $\rho$ comes from the component $T^{tt}$ of the stress-energy tensor, which is the density of mass-energy $E$, not the density of mass. We never have any way of knowing or defining the density of mass. For example, I could say that a proton in my body has some mass which contributes to my mass, but its mass may actually be in forms such as the kinetic energy of its quarks. Also, mass is not additive in relativity, but the stress-energy tensor is a tensor, which means we want to be able to talk about adding stress-energy tensors. BTW, do yourself a favor and stop writing factors of $c$ when you do GR. $c=1$ in any system of units that is sensible for GR.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/461999", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why do electrons move towards a vacant position (hole) in a crystal lattice? Why do electrons in a crystal lattice move towards the vacant position? Aren't electrons stable in their current position?
Consider first an intrinsic semiconductor crystal at zero temperature. The crystal will be charge neutral. All states in the valence bands (and at lower energies) are occupied with electrons, all higher energy states above the band gap are unoccupied. Suppose now that we remove one electron from the valence band, thereby creating a ‘hole’. A ‘hole’ in a crystal lattice means that there is a net positive charge. Let us now put the previously removed electron into a conduction band state. The crystal as a whole will now again be charge neutral, but there exists a positive charge in the valence band, and a negative charge in the conduction band. Both can freely move through the crystal. As a result of their opposite electric charge they will attract each other (Coulomb interaction), so the electron will tend to move towards the hole and vice versa.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/462398", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Classification of 2D time dependent diffusion equation I was trying to classify the following PDE: $$\frac{\partial{u}}{\partial{t}}=\frac{\partial^2{u}}{\partial{x^2}}+\frac{\partial^2{u}}{\partial{y^2}}$$ where $u = u(x,y,t)$. I was originally using the definition of $B^2-4AC$ and found this equation to be elliptic, which is true for the Laplace equation however I was wondering if the dependence on time changes this. I was also wondering if this PDE is inhomogeneous and linear? Thank you!
Homogeneous, linear and parabolic. As a generalization of the 2-dimensional equation, any equation of the form $$ \partial_t u = -L u $$ where $L$ is positive elliptic (such as $-\nabla^2$) is said to be parabolic. It shares with the 2d case the fact that it has well-defined solutions with initial value data on a surface of constant $t$.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/462507", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why do objects rebound after hitting the ground? When an object, say a shoe, falls from a height (under the influence of gravity), it rebounds after hitting the ground. For an object to move upwards, it requires a force to overcome its weight. When the shoe hits the ground some of its energy is lost and the ground pushes back with a force less than its weight, so why does it rebound, since the upward force is not large enough to overcome its weight?
There are also many objects that do not rebound when they hit the ground but rather get deformed. In that case the potential energy stored in the body at a height is used to deform the body, and some energy is lost in the form of heat or sound energy. Objects which are elastic in nature have a tendency to rebound; these objects don't get deformed, or only get deformed a little bit.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/462618", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
Is internal resistance of cell part of the equivalent resistance of the network of resistors? Do we include the internal resistance of cell while calculating equivalent resistance of network? Take, for instance, the question given. Do we include the 1 ohm internal resistance while calculating equivalent resistance of the network?
From the wording "A network [...] to a battery with internal resistance" I would say the network is everything except the battery, where the battery consists of the ideal voltage source and the $1\,\Omega$ resistor. It is impossible to know for sure though what the author really meant.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/462703", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Do orbiting planets have infinite energy? I know that planets can't have infinite energy, due to the law of conservation of energy. However, I'm confused because I see a contradiction and it would be great if someone could explain it. Energy is defined as the capacity to do work. Work is defined as Force x Distance. Force is defined as Mass x Acceleration. Thus, if we accelerate a mass for some distance by using some force, we are doing work, and we must have had energy in order to do that work. In orbit, planets change direction, which is a change in velocity, which is an acceleration. Planets have mass, and they are moving over a particular distance. Thus, work is being done to move the planets. In an ideal world, planets continue to orbit forever. Thus, infinite work will be done on the planets as they orbit. How can infinite work be done (or finite work over an infinite time period, if you'd like to think of it that way) with a finite amount of energy? Where is the flaw in this argument?
Remember that work is force times displacement, not distance, and only the component of the force along the direction of motion does any work. For a circular orbit the gravitational force is always perpendicular to the planet's velocity, so it does no work at any instant. For an elliptical orbit gravity does positive work on the infalling half and an equal amount of negative work on the outgoing half, so over each complete revolution the net work is zero: the planet's speed and energy return to their initial values, and no energy needs to be supplied to keep it orbiting.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/462768", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 4 }
Confusion over units in force equation? While discussing Newton's laws, our book says that force is proportional to the rate of change of momentum, so F is proportional to mass × acceleration if the mass is constant. So $F=kma$ where $k$ is a constant. They then say we choose a unit of force such that it produces an acceleration of $1\ \mathrm{m/s}^2$ in a $1\ \mathrm{kg}$ mass, so $1\ \mathrm{N}=k\cdot 1\,\mathrm{kg}\cdot 1\,\mathrm{m/s}^2$. Then they say $k=1$. How is $k=1$? It should be $1\,\mathrm{N}/(1\,\mathrm{kg\, m/s}^2)$, which is different from just $1$. Force is always written as $F=ma$, not $F=kma$, which seems wrong to me. This question is different from the other one: it asks about the actual concept of dimensions, whereas the asker of the other question was confused about the choice of number, not of dimension.
I'm breaking your question into parts to identify what confuses you. "So $F=kma$ where $k$ is a constant" — OK up to there. "They say we choose a unit of force such that it produces an acceleration of $1\,\mathrm{m\,s^{-2}}$ in a $1\,\mathrm{kg}$ mass, so $1\,\mathrm{N}=k \cdot 1\,\mathrm{kg}\cdot 1\,\mathrm{m\,s^{-2}}$" — so they "define" $1\,\mathrm{N}$ such that $F=1\,[\mathrm{N}] = k \times 1\,[\mathrm{kg}] \times 1\,[\mathrm{m\,s^{-2}}]$. If $k$ were equal to $2\,[\mathrm{K}]$ ($[\mathrm{K}]$ = kelvin), then we would have $1\,[\mathrm{N}] = 2\,[\mathrm{K\,kg\,m\,s^{-2}}]$ as a definition of $1\,\mathrm{N}$. But that's not the case, because that's not how $1\,\mathrm{N}$ is defined (see @zdimension's comment below for example). The issue is that the definition in your book, from what you report, is not a definition (unless the author defined "force" beforehand), as my example above has just shown.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/463036", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 2 }
How does Hamilton's Principle give us the path taken? We defined the action as: $$\mathcal{S}(t)=\int_{t_1}^{t_2}\mathcal{L}(q_i,\dot{q_i},t) dt$$ where $q_i(t_1)$ and $q_i(t_2)$ are known and fixed. Hamilton's principle states that the path that is followed has minimum action. Suppose we know just the initial coordinates of a system i.e. $q_i(t_1)$ and not its final coordinates. How can we find out the path followed by the system using the least-action principle (Hamilton's Principle)? As it seems to me that it can only be used when both end points are known.
OP is correct: The stationary action principle (SAP)/Hamilton's principle(HP) needs$^1$ boundary conditions (BCs), i.e. both initial and final conditions. This is because we need the $$\text{boundary-terms}~=~\left[\sum_{j=1}^np_j\delta q^j \right]_{t=t_i}^{t=t_f}~=~0\tag{1}$$ to vanish when we vary the action $\delta S$ to find stationary paths. In conclusion: The SAP/HP can not be applied directly to solve an initial value problem (IVP). (Of course the SAP/HP can be used indirectly in a certain sense: 1. First use SAP/HP with pertinent BCs to establish the EOMs in the first place. 2. Next use the EOMs to solve an IVP.) See also e.g. this related Phys.SE post. -- $^1$ Note that if the Lagrangian $L(q,t)$ does not depend on velocities $\dot{q}$, i.e. the system is static, then we don't need any BCs.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/463271", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Why does a sonar or radar's frequency correlate with its resolution? A sonar's (or radar's) frequency determines its limit on the smallest size that it can detect and its resolution. I've heard that it's due to aliasing; if so, please explain the reason behind it a little more. EDIT: My own understanding: lower frequencies don't reflect well off small objects, so the reflected wave has a smaller amplitude and this increases the inaccuracy, but this is a practical rather than a physical limit, as devices with more precision can also detect smaller amplitudes better. Is this correct?
I’ll confine my answer to pulsed radars. Longer wavelengths reflect just fine from large targets, unless the targets have been specially shaped to minimize back-scatter by diverting reflections, as in the design of stealthy aircraft. The range resolution is roughly $c/2B$, where B denotes signal bandwidth, which is limited in practice to roughly 10% of the frequency. The cross-range angular resolution is roughly $\lambda /d$, where d denotes the diameter of the antenna. (Remember that the terminology of resolution is topsy-turvy. A small value for resolution is called high resolution. More is less.)
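Plugging in some assumed numbers to make the two formulas concrete (an X-band radar at 10 GHz, a 10% bandwidth, and a 1 m antenna):

```python
c = 3.0e8          # m/s
f = 10e9           # Hz, assumed carrier frequency (X band)
B = 0.1 * f        # Hz, bandwidth ~10% of the carrier
d = 1.0            # m, assumed antenna diameter

wavelength = c / f
range_res = c / (2 * B)          # ~0.15 m
angular_res = wavelength / d     # ~0.03 rad ~ 1.7 degrees

print(f"wavelength         = {wavelength*100:.1f} cm")
print(f"range resolution   = {range_res:.2f} m")
print(f"angular resolution = {angular_res:.3f} rad")
# Halving the carrier frequency halves the usable bandwidth and doubles the
# wavelength, so both resolution figures get coarser (numerically larger).
```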
{ "language": "en", "url": "https://physics.stackexchange.com/questions/463489", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why is the neutrino a ghost particle? Why are neutrinos called ghost particles? Why are they not affected by strong magnetic fields? Why do they not interact with matter? Why do they not interact with the gravitational field? I am unable to understand it.
This is a misleading way of talking about neutrinos in popularized accounts, not recommended for physics vocabulary, since in particle physics theories "ghost" has a different mathematical meaning. The everyday version of "ghost" is a moving apparition that can pass through walls and appear randomly, and popularizers attribute this adjective to neutrinos because they are hard to detect, as they interact mainly through the weak interaction. Thus they can cross large distances in space, passing through matter like "ghosts". Nevertheless, in our experiments we do detect neutrinos and antineutrinos, form beams of them and experiment with them, and have thus given them their niche in the elementary particle table of the standard model. Nothing ghostly, just weakly interacting. I have worked in experiments with neutrino beams, and we detected them by their interactions, which helped build up the standard model. In the experiment shown above, the neutrino beam enters from the bottom; there is a magnetic field perpendicular to the picture, so the charged tracks turn. The neutrino, being neutral, leaves no track but interacts with the hydrogen in the bubble chamber. "Why does it not interact with the gravitational field?" As it has an energy-momentum four-vector, it will interact with the gravitational field, though very weakly; look at the forces again and their strengths. Ignore the adjective in popularized articles. Neutrinos are just hard to detect.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/463651", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
How much time does it take for a broken magnet to recover its poles? I understand that when you cut a magnet you end up with 2 magnets, but I wonder how much time it takes for the magnetic domains to rearrange and form the new pole. I know the answer may vary depending on the size of the magnet, the material, and some other variables, so I'm searching for an answer as general as possible, and for how the variables may affect the answer.
The molecules that make up the magnet have a magnetic dipolar moment. You can think of them as small magnets aligned so that the total magnetic field is the sum of all the small magnets. If you cut a magnet in two, the two magnets are still made of aligned dipolar moments, so there is no rearrangement of poles. The two pieces will automatically be magnetized.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/464256", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Schrodinger's Equation in three dimensions Consider the Hamiltonian in Schrödinger's equation, $$H=\sum^3_{i=1} \frac{p^2_i}{2m_i}+V(x_1,x_2,x_3).$$ In the one-dimensional case, we can analyse the shape of the potential, i.e. $$V(x)=\frac{1}{2}m_1 \omega^2_1 x^2$$ is the potential for the quantum oscillator. The ground state of the quantum oscillator looks like a Gaussian. For the two-dimensional oscillator we can write $$V(x,y)=\frac{1}{2}m_1 \omega^2_1 x^2+ \frac{1}{2}m_2 \omega^2_2 y^2,$$ and the ground state of this system again looks like a Gaussian in two dimensions. If we proceed further we can write $$V(x,y,z)=\frac{1}{2} m_1 \omega^2_1 x^2+\frac{1}{2}m_2 \omega^2_2 y^2+\frac{1}{2}m_3 \omega^2_3 z^2$$ as the potential of the three-dimensional harmonic oscillator. I expect that the ground state of this system is again a Gaussian, but in three dimensions I am unable to understand what shape it will take. What will happen if we increase the number of dimensions further, say to more than three?
The graphical representation of the probability density distributed over the three-dimensional space would be a four-dimensional plot--just like the plot of a probability density distribution over one dimension is two-dimensional and that of a probability density distribution over two dimensions is three-dimensional. There is no direct way to visualize a four-dimensional plot except via its projections onto lower dimensional spaces. Now, for the specific theory of decoupled harmonic oscillators, the ground state would be the multiplication of Gaussians in each of the dimensions. Thus, the full probability density distribution over the three-dimensional space, when projected onto one (or two) dimension(s), would simply look like Gaussians in those lower dimensions. But, this simplification is owing to the decoupling of oscillators, such a simplification would not be generically possible.
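A small sketch of that last point for decoupled oscillators (the frequencies are arbitrary, and $\hbar=m_i=1$): the 3D ground-state density is a product of Gaussians, and integrating out coordinates leaves Gaussians in the remaining ones, which is what the lower-dimensional projections look like.

```python
import numpy as np

w1, w2, w3 = 1.0, 2.0, 0.5      # assumed oscillator frequencies, hbar = m = 1

x = np.linspace(-4, 4, 101)
dx = x[1] - x[0]
X, Y, Z = np.meshgrid(x, x, x, indexing='ij')

# Ground-state probability density: product of Gaussians
rho = np.exp(-(w1 * X**2 + w2 * Y**2 + w3 * Z**2))
rho /= rho.sum() * dx**3        # normalise numerically

# Project out z: the marginal over (x, y) is again a 2D Gaussian
rho_xy = rho.sum(axis=2) * dx
# Project out y and z: the marginal over x is a 1D Gaussian
rho_x = rho.sum(axis=(1, 2)) * dx**2

print("total probability :", rho.sum() * dx**3)                      # ~1
print("width of x-marginal:", np.sqrt(np.sum(rho_x * x**2) * dx))    # ~1/sqrt(2*w1)
```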
{ "language": "en", "url": "https://physics.stackexchange.com/questions/464729", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
What are the Basic Properties of a Photon? I want to grasp the idea of a photon. While researching, I have come upon many different ways of describing a photon, but have found "quantum of the electromagnetic field" to be most satisfying. However, I still have a few questions about this description. I. What does 'quantum' mean in this context? Quantum of what physical quantity? II. What features do photons exhibit as a wave? (wavelength, speed, et cetera) III. What features do photons exhibit as a particle? (mass, spin, et cetera) I would especially thank if anyone could explain the momentum of a photon as a wave and as a particle. For anybody wondering, I am a high school student interested, but not fluent in physics.
I. A photon is the quantum, or the basic building block, of the electromagnetic field. For example, visible light, which is an electromagnetic field, is a large collection of photons. Photons exhibit wave-particle duality. This means that they have some properties that exhibit their wave-like properties manifestly, while some other properties exhibit the particle-like properties. II. Wave-like properties - Some frequency/wavelength and processes like reflection, refraction, diffraction, interference, polarization and dispersion III. Particle-like properties - Blackbody radiation, Photoelectric effect, Compton scattering, pair production, non-zero momentum (and energy) causing radiation pressure and bending of light under the action of gravity, spin angular momentum
{ "language": "en", "url": "https://physics.stackexchange.com/questions/465083", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
If sound is a longitudinal wave, why can we hear it if our ears aren't aligned with the propagation direction? If a sound wave travels to the right, then the air molecules inside only vibrate left and right, because sound is a longitudinal wave. This is only a one-dimensional motion. If our ears are oriented perpendicular to this oscillation, e.g. if they are pointing straight up, how can we hear it?
The revised question, as I understand it, amounts to asking how it is possible for a sound wave propagating along (instead of towards) a wall with a small hole in it to generate any sound waves on the other side of the hole. What happens in this case is easiest to explain with a diagram: Whenever the air pressure on the upper side of the hole is different from the pressure on the lower side of the hole, the air on the lower side of the hole sees a net force, and so a new pressure wave is generated on the lower side of the hole. This is an example of diffraction. As long as the wavelength of the wave on the upper side of the hole is much bigger than the diameter of the hole, at any moment the air pressure on the upper side of the hole will be almost constant over the entire diameter of the hole, no matter which direction the wave is propagating in. This is why it doesn't matter if the wave is moving along the wall. Acoustic wavelengths in air (for the frequency range audible by humans) are roughly 17 mm–17 m, and the ear canal is maybe 5 mm in diameter, so, for all but the highest frequencies, the wavelength will indeed be much bigger than the hole. (The diagram doesn't illustrate this. Sorry about that.)
{ "language": "en", "url": "https://physics.stackexchange.com/questions/465203", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 9, "answer_id": 2 }