Why can you hear sound over a wall? I know it has to do with diffraction of sound, as it is a wave, but how exactly does this diffraction occur?
It is not quite clear to me what you are asking, but I would refer to simulations like PhET: Wave Interference. There you can switch between sound waves, surface waves on water (a ripple tank), or light waves. There are two important things to realize:

* The wavelength of sound waves is of the order of 1 meter, comparable to the height of walls.
* We experience sound level as a logarithmic measure of the intensity; even a tenfold reduction in power reduces the sound level by only 10 dB, for example from 60 dB to 50 dB.
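The decibel arithmetic in the second point can be checked directly; a minimal sketch (the helper name is just illustrative):

```python
import math

def level_change_db(power_ratio):
    """Change in sound level (dB) for a given ratio of acoustic powers."""
    return 10 * math.log10(power_ratio)

# A tenfold reduction in power lowers the level by only 10 dB,
# e.g. from 60 dB down to 50 dB.
print(level_change_db(1 / 10))   # -10.0
print(level_change_db(100))      # 20.0
```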
{ "language": "en", "url": "https://physics.stackexchange.com/questions/506677", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 0 }
When can I set $d=4$ in dimensional regularization? I am using dimensional regularization to extract the divergence of some complicated integral. I work in $d=2\omega$ dimensions, with $\omega\approx 2$. After I extract the divergence, I have an expression of the form $$f(\omega)\Gamma(\omega-2)\int_{-\infty}^{\infty}d\tau_3 d\tau_4 \frac{1}{(x_{13}^2)^{\omega-1}}\frac{1}{(x_{24}^2)^{\omega-1}}\frac{1}{(x_{34}^2)^{\omega-2}}\tag{1}$$ with $x_{ij}:=x_i-x_j$. Now I know how to compute $$\int_{-\infty}^{\infty}d\tau_3 \frac{1}{(x_{13}^2)^{\omega-1}}\tag{2}$$ but the last factor spoils it. However, the integral seems finite, so if I send $\omega\to 2$ now, the last factor is simply $1$ and the integral is easy to compute. Am I allowed to send $\omega\to 2$ for just one part of the integral, if the latter is finite? More generally, can I send $\omega\to 2$ for parts of a computation if they are finite in this number of dimensions? Note that although this is a mathematical question, I felt that it belonged on the physics site since (1) dimensional regularization is a tool that is used a lot in QFT, (2) the computation is directly related to physics research, and (3) other people in the physics community have probably thought about this question before. Clarification about the notation: I forgot a few details about the remaining integrals: $x_{3\mu}$ and $x_{4\mu}$ are, respectively, defined as $(0,0,0,\tau_3)$ and $(0,0,0,\tau_4)$, while $x_1=(1,0,0,0)$ and $x_2=(x_2^1,x_2^2,0,0)$. Note that I work in Euclidean space. Thus, the integrals can be written as: $$\int_{-\infty}^{\infty} d\tau_3 d\tau_4 \frac{1}{(x_1^2+\tau_3^2)^{\omega-1}}\frac{1}{(x_2^2+\tau_4^2)^{\omega-1}}\frac{1}{(x_{34}^2)^{\omega-2}}\tag{3}$$ If I set $\omega=2$, the integrals decouple and become elementary. This perhaps shows why my question arose in the first place.
The important quantities in dimensional regularization are precisely the poles you will obtain in the limit $\omega \rightarrow 2$ and their associated residues. In other words, your bare correlation functions will involve integrals with some divergences, $$ I = \sum_{n = 1}^m\frac{a_n}{(\omega - 2)^n} + \mathrm{finite}, $$ and the renormalization of your theory consists of getting rid of those poles, and in order to do so, you'll need to specify the constants $a_n$. Now the issue is that, within intermediate calculations, you may have multiple poles contributing. For example, consider the function $$ \frac{f(\omega)}{(\omega - 2)^2}, $$ where $f(2)$ is finite. The problem here is that when you set $\omega$ equal to $2$ within this function, you are actually missing out on a first-order pole in $\omega$. Instead, you should write $$ \frac{f(\omega)}{(\omega - 2)^2} = \frac{f(2)}{(\omega - 2)^2} + \frac{f'(2)}{(\omega - 2)} + \mathrm{finite} $$ where $f'(\omega) = df(\omega)/d\omega$. This is a potential issue in your case, since your integrals are multiplying $\Gamma(\omega - 2)$ which already has a pole, but I don't quite understand your notation (how are the $\tau_i$'s and $x_{ij}$'s related?). But if the integrals are all finite for $\omega = 2$ then you are safe with the replacement.
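The pole bookkeeping described above can be checked with a computer algebra system. A sketch using SymPy, with $f = \exp$ as a stand-in for an arbitrary function that is finite at $\omega = 2$ (so here $f(2) = f'(2) = e^2$):

```python
import sympy as sp

# Laurent structure of f(omega)/(omega - 2)^2 around omega = 2,
# using f = exp as an illustrative stand-in for a function finite at 2.
w = sp.symbols('omega')
f = sp.exp(w)
expr = f / (w - 2)**2

# Coefficient of the double pole: should be f(2)
double_pole_coeff = sp.limit((w - 2)**2 * expr, w, 2)
# Coefficient of the simple pole (the residue): should be f'(2)
simple_pole_coeff = sp.residue(expr, w, 2)

print(double_pole_coeff, simple_pole_coeff)  # exp(2) exp(2)
```

Naively setting $\omega = 2$ in $f$ would keep the `double_pole_coeff` term but lose the `simple_pole_coeff` term entirely, which is exactly the danger described above.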
{ "language": "en", "url": "https://physics.stackexchange.com/questions/506744", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How does the Lorenz Gauge condition lead to four wave equations? The 1972 book by L. Eyges's, The Classical Electromagnetic Field, on p. 184, in $\S$11.7, Integral Forms of The Potential, the statement "We now turn to the problem of finding $\mathbf{A}$ and $\mathbf{\Phi}$ in terms of $\mathbf{J}$ and $\rho$. For this purpose, the Lorenz gauge is the more convenient one. In this gauge we have four equations in (11.33)." appears. Equation 11.33 is stated on p. 182 as $$ \nabla^2 \mathbf{A}- \frac{1}{c^2} \frac{\partial^2 \mathbf{A}}{\partial t^2} = - \frac{4 \pi \mathbf{J}}{c}, \\ \nabla^2 \phi - \frac{1}{c^2} \frac{\partial^2 \phi}{\partial t^2} = - 4 \pi \rho $$ Why does the author claim that this is four equations when only two are clearly written?
The Equation, $$ \nabla^2 \mathbf{A}- \frac{1}{c^2} \frac{\partial^2 \mathbf{A}}{\partial t^2} = - \frac{4 \pi \mathbf{J}}{c} $$ , is actually three seperate equations in three dimensions. In Cartesian coordinates this Equation expands to, $$\nabla^2 A_x- \frac{1}{c^2} \frac{\partial^2 A_x}{\partial t^2} = - \frac{4 \pi J_x}{c}, \\ \nabla^2 A_y- \frac{1}{c^2} \frac{\partial^2 A_y}{\partial t^2} = - \frac{4 \pi J_y}{c}, \\ \nabla^2 A_z- \frac{1}{c^2} \frac{\partial^2 A_z}{\partial t^2} = - \frac{4 \pi J_z}{c}. $$ Therefore, given these three equations, and the equation for $\phi$, there are four total equations.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/506933", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Escape velocity and orbital velocity confusion If orbital velocity is reduced when we want to put a satellite in a higher orbit, and if to achieve a lower orbit we need to increase its velocity, then how come by increasing the speed of the satellite we can make that same satellite escape Earth's gravity?
This is an extension of Dale's answer. We need to introduce a bit of orbital mechanics. The specific energy of the satellite has the form $$ E = \frac{v^2}{2} - \frac{k}{r}.\tag{1} $$ In general, a bound orbit has the shape of an ellipse with semi-major axis $a$ and eccentricity $\varepsilon$. The distance of the satellite at its periapsis is $r_\text{peri} = a(1-\varepsilon)$, and likewise at its apoapsis $r_\text{apo} = a(1+\varepsilon)$. At the apsides, the specific angular momentum $h$ is simply the product of the distance and the velocity: $$h = r_\text{peri}v_\text{peri} = r_\text{apo}v_\text{apo}.\tag{2}$$ If we plug this into $(1)$, we get $$ E = \frac{h^2}{2a^2(1-\varepsilon)^2} - \frac{k}{a(1-\varepsilon)} = \frac{h^2}{2a^2(1+\varepsilon)^2} - \frac{k}{a(1+\varepsilon)}.\tag{3} $$ From this, we obtain $$ E\left[a^2(1+\varepsilon)^2-a^2(1-\varepsilon)^2\right] = -k\left[a(1+\varepsilon) - a(1-\varepsilon)\right],\tag{4} $$ which can be simplified to $$ E = -\frac{k}{2a}.\tag{5} $$ Plug this into $(1)$, and we have an expression of the velocity in terms of the distance and semi-major axis: $$ v^2 = \frac{2k}{r} - \frac{k}{a}.\tag{6} $$ Now, suppose we start with a satellite on a circular orbit with radius $r_1$ and velocity $v_1$. Then we have $a_1 \equiv r_1$ and $$ v_1^2 = \frac{2k}{r_1} - \frac{k}{a_1} = \frac{k}{r_1}.\tag{7} $$ We would like to bring this into a higher circular orbit with radius $r_2 > r_1$ and velocity $v_2$. For such an orbit $a_2 \equiv r_2$ and $$ v_2^2 = \frac{2k}{r_2} - \frac{k}{a_2} = \frac{k}{r_2}.\tag{8} $$ Clearly, $v_2 < v_1$. But how can we put the satellite into such an orbit? The answer: by giving it two boosts, one at distance $r_1$, and one at distance $r_2$. First we boost it in such a way that the orbit changes from a circular orbit into an elliptical orbit with periapsis $r_1$ and apoapsis $r_2$. 
In other words, the new semi-major axis $\bar{a}$ and eccentricity $\bar{\varepsilon}$ must be such that $$ \begin{align} \bar{a}(1-\bar{\varepsilon}) &= a_1 = r_1,\\ \bar{a}(1+\bar{\varepsilon}) &= a_2 = r_2.\tag{9} \end{align} $$ From these we find $$ \begin{align} 2\bar{a} &= r_1 + r_2,\\ \bar{\varepsilon} &= \frac{r_2-r_1}{r_1 + r_2}.\tag{10} \end{align} $$ The satellite will follow this new orbit if we boost its initial velocity $v_1$ to a new velocity $\bar{v}_1$, given by $$ \bar{v}_1^2 = \bar{v}_\text{peri}^2 = \frac{2k}{r_1} - \frac{k}{\bar{a}} = \frac{r_2}{r_1}\frac{2k}{r_1 + r_2} = v_1^2\frac{2r_2}{r_1 + r_2}.\tag{11} $$ When the satellite has completed half an orbit, it will be at its apoapsis $r_2$ with velocity $\bar{v}_2$, given by $$ \bar{v}_2^2 = \bar{v}_\text{apo}^2 = \frac{2k}{r_2} - \frac{k}{\bar{a}} = \frac{r_1}{r_2}\frac{2k}{r_1 + r_2} = v_2^2\frac{2r_1}{r_1 + r_2}.\tag{12} $$ Finally, at $r_2$ we perform a second boost from velocity $\bar{v}_2$ to $v_2$, and the orbit of the satellite will change into a circular orbit with radius $r_2$. As you can see, $$\bar{v}_2 < v_2 < v_1 < \bar{v}_1,\tag{13}$$ so $\Delta v_1 = \bar{v}_1 - v_1 > 0$ and $\Delta \bar{v}_2 = v_2 - \bar{v}_2 > 0$, but $v_2 < v_1$.
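The chain of inequalities $(13)$ is easy to confirm numerically; a small sketch in units where $k = 1$, with illustrative radii $r_1 = 1$, $r_2 = 4$:

```python
import math

# Numerical check of the velocity ordering for the two-boost transfer.
# Units are arbitrary: k = GM is set to 1, radii are illustrative.
k = 1.0
r1, r2 = 1.0, 4.0

v1 = math.sqrt(k / r1)                            # circular orbit, eq. (7)
v2 = math.sqrt(k / r2)                            # circular orbit, eq. (8)
v1_bar = math.sqrt(v1**2 * 2 * r2 / (r1 + r2))    # eq. (11), after first boost
v2_bar = math.sqrt(v2**2 * 2 * r1 / (r1 + r2))    # eq. (12), at apoapsis

assert v2_bar < v2 < v1 < v1_bar                  # eq. (13)
print(v2_bar, v2, v1, v1_bar)
```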
{ "language": "en", "url": "https://physics.stackexchange.com/questions/507053", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
On work done by internal forces which comes out not equal to zero 1) Let us consider a block which explodes, due to some internal mechanism, into two smaller fragments of equal masses. The system was initially at rest and now has some finite kinetic energy (consistent with momentum conservation). We can hence say, by the work-energy theorem, that work has been done by the internal force, since there are no other forces acting on the system. But this seems to contradict the fact that work done by internal forces is always 0. Where am I going wrong? I have researched similar questions on Stack Exchange and other sites but to no avail. Also, textbooks for some reason do not include much theory on this matter, which adds to my woes. 2) I have another question: in a two-mass spring-block system, does the spring do any work? It should be 0 according to me, as it is an internal force when solving from the COM frame, but is this also true from a ground frame? While writing the work-energy theorem for this system, would the spring work show up even in the form of potential energy?
Something to keep in mind is that "internal force" is a subjective term. It completely depends on what we say the system is, and therefore what we say is "internal" and "external". However, the work done by a force is not dependent on this distinction. Therefore, we should not expect that the label of "internal" or "external" should influence how much work the force actually does. As you have shown, internal forces can certainly do work. As long as a force is applied to an object over some distance (i.e. $\int\mathbf F\cdot\text d\mathbf x\neq0$), work is being done.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/507135", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Can a "time dimension" be part of a spherical topology? I've heard it speculated that the spatial part of the universe is a 3-sphere. Or a 3-torus. But usually, I guess, it's assumed that the "time" dimension just has its own geometry, like a line, in Cartesian product with the geometry of the spatial dimensions. I don't know much about topology, nor about the constraints placed on topology by geometry. So I don't know if the shape of the manifold "cares" that the geometry treats one of those dimensions differently (particularly because the dimensions are symmetric in a sphere). Basically, can a manifold whose metric has the Lorentz signature $-+++$ be a 4-sphere? Or more generally, can a manifold with a $(1,n-1)$ signature metric be an $n$-sphere? Also, let me know if I'm using imprecise or bad language here.
No, Lorentzian manifolds can be spheres only in odd dimensions. This is because the Euler characteristic of compact Lorentzian manifolds must vanish, which it doesn't for $S^n$ for even $n$, cf. e.g. this MathOverflow question.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/507275", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Why doesn't hot charcoal glow blue? I was learning about black-body radiation and the explanations given by Max Planck and Albert Einstein when a thought crossed my mind. When we heat an iron piece, its color changes gradually from red, orange, and yellow to bluish white. Yet such a change is not visible in a glowing piece of charcoal obtained from wood. Why is wood charcoal not able to glow in colors of higher frequencies?
The glowing color of an object is based on its temperature. Wood charcoal probably won't get hot enough to look blue. The Sun's surface is around 5500 degrees Celsius and emits all colors of EM radiation. Wood charcoal would take a lot of help from the user (pumping air onto the charcoal) to approach that temperature, and it would probably turn to ash before that temperature is even reached. To glow with a blue color, the required temperature is probably greater than 8000 degrees.
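For a rough sense of the temperatures involved, Wien's displacement law gives the temperature at which a blackbody's spectrum merely *peaks* at a blue wavelength; the 450 nm "blue" figure below is an assumption, and actually *looking* distinctly blue (rather than white) requires hotter still:

```python
# Rough Wien's-law estimate of the temperature whose blackbody spectrum
# peaks at a "blue" wavelength (~450 nm is an assumed value).
WIEN_B = 2.898e-3      # Wien displacement constant, m*K
lam_blue = 450e-9      # assumed blue wavelength, m

T_peak = WIEN_B / lam_blue
print(round(T_peak))   # 6440 (kelvin) -- far above what charcoal reaches
```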
{ "language": "en", "url": "https://physics.stackexchange.com/questions/507374", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21", "answer_count": 3, "answer_id": 1 }
Are there scattering targets other than nuclei, protons or electrons in experimental particle physics? I am not too familiar with particle physics, so maybe I missed something. Typical scattering targets seem to be nuclei, protons, electrons, i.e. stable targets, which of course makes some sense. Have there ever been scattering experiments involving two (moderately) unstable partners, e.g. muon - muon or muon-charged pion scattering?
There are other forms of baryonic matter besides the ones you've listed, e.g., white dwarfs and neutron stars. There are cases where particle physicists have obtained useful bounds on certain observables from this. For example, Giddings and Mangano rule out certain scenarios involving large extra dimensions because microscopic black holes would have destroyed all our white dwarfs and neutron stars. There is also two-photon physics: https://en.wikipedia.org/wiki/Two-photon_physics

Giddings and Mangano 2008, "Astrophysical implications of hypothetical stable TeV-scale black holes," https://arxiv.org/abs/0806.3381
{ "language": "en", "url": "https://physics.stackexchange.com/questions/507516", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How can atmospheric $CO_2$ absorption of infrared be 100% when its atmospheric concentration is 0.04%? An absorption spectrum, taken from high in the atmosphere, of infrared radiation emitted from the Earth shows that at the 15µm wavelength there is almost complete absorption. This is attributed to absorption by CO2 in the atmosphere at this wavelength. However, the concentration of CO2 in the atmosphere is generally about 0.04%, or 400 parts per million. This says to me that in any cubic metre of volume there would be a large space not occupied by CO2 molecules, through which such radiation would pass uninhibited and therefore ultimately simply pass out to space. This seems contradictory to me. Could someone perhaps elaborate on what is happening here?
Let's say a 15µm photon has a nonzero probability of interacting with any $CO_2$ molecule that it passes within a distance of 1 wavelength. How many $CO_2$ molecules are there in a cylinder with radius 15µm and extending from the earth's surface out to space? More than $10^{16}$. I think what bothers you is the idea that $CO_2$ can be so tremendously much more effective at absorbing this wavelength than $O_2$ or $N_2$. Else why would the ratio be relevant? It's a bit like wondering how someone can die from arsenic poisoning when only 0.04% of their meal was arsenic. It's a small fraction of the total, but that doesn't mean it's not important.
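A back-of-the-envelope version of that counting, with rough assumed values for the molecular number density and an effective column height:

```python
import math

# Order-of-magnitude count of CO2 molecules in a thin cylinder
# (radius ~ one 15 um wavelength) through the atmosphere.
# All numbers are rough textbook values, not precise data.
n_air = 2.5e25          # air molecules per m^3 near the surface
x_co2 = 400e-6          # CO2 mole fraction, 400 ppm
H = 8.0e3               # effective atmospheric scale height, m
r = 15e-6               # cylinder radius = one wavelength, m

n_co2 = x_co2 * n_air                 # ~1e22 CO2 molecules per m^3
volume = math.pi * r**2 * H           # cylinder volume, m^3
count = n_co2 * volume
print(f"{count:.1e}")                 # well above 10^16
```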
{ "language": "en", "url": "https://physics.stackexchange.com/questions/507675", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 0 }
Can we detect a cyclic coordinate by just inspecting the Lagrangian? I'm reading through Susskind-Hrabovsky's Theoretical Minimum. On page 126, where they are talking about cyclic coordinates, an example is given: Suppose two particles moving on a line with a potential energy that depends on the distance between them... The Lagrangian is derived as: $$L = \frac{m}{2}(\dot{x}_1^2 + \dot{x}_2^2) - V(x_1 - x_2).\tag{16}$$ It is suggested that if the Lagrangian doesn't depend on a coordinate $q_i$, then that coordinate is cyclic and its conjugate momentum is conserved. Then, a coordinate transform is applied and the Lagrangian in the new coordinates is derived: $$x_+ = \frac{x_1+x_2}{2}, \qquad x_{-} = \frac{x_1-x_2}{2}, $$ $$L = m(\dot{x}_+^2 + \dot{x}_{-}^2) - V(x_{-})$$ Then it is discussed that there is actually a hidden cyclic coordinate whose conjugate momentum is conserved (namely the total momentum): $$p_{+} = 2m\dot{x}_{+} = m\dot{x}_1 + m\dot{x}_2$$

1. If there may exist a transformation that reveals a hidden cyclic coordinate (hence a conserved conjugate momentum), then doesn't that invalidate the original statement about being able to detect a cyclic coordinate by merely looking at the Lagrangian?
2. In general, how can we find the transformation which reveals the cyclic coordinate?

Also, there are some doubts about the derived terms:

3. Shouldn't the potential energy in the new coordinates be $\,V(2\times{x_{-}})$?
4. Shouldn't $\,p_+ = m\dot{x}_+$? Where did that $2$ come from?
1 & 2. A bit oversimplified, a strategy to find candidates for cyclic coordinates is to find coordinates that parametrize equipotential surfaces of the potential $V$.

3. In physics we often use the same notation for a function $V$ and its value $V(x)$ at a point $x$. If we transform the argument $x=f(y)$, we often don't bother to write $V\circ f(y)$ but just write $V(y)$, in a common physics misuse of notation. The transformation $f$ is implicitly understood.

4. Use the definition of the canonical/conjugate momentum $p_+:=\frac{\partial L}{\partial \dot{x}_+}$.
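That last definition resolves the factor of $2$ asked about, and it can be checked symbolically; a SymPy sketch of differentiating the transformed Lagrangian:

```python
import sympy as sp

t = sp.symbols('t')
m = sp.symbols('m', positive=True)
xp = sp.Function('x_plus')(t)
xm = sp.Function('x_minus')(t)
V = sp.Function('V')

# Lagrangian in the transformed coordinates, as in the question
L = m * (xp.diff(t)**2 + xm.diff(t)**2) - V(xm)

# x_plus is cyclic: L does not depend on x_plus itself ...
assert sp.diff(L, xp) == 0

# ... and its conjugate momentum follows from p_+ = dL/d(xdot_+)
p_plus = sp.diff(L, xp.diff(t))
print(p_plus)  # 2*m*Derivative(x_plus(t), t)
```

The factor of $2$ appears because the kinetic term carries $m$, not $m/2$, in these coordinates.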
{ "language": "en", "url": "https://physics.stackexchange.com/questions/507776", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Physical reason for $T^2=a^3$ when $T$ is in years and $a$ is in AU Kepler's third law states $$T^2\propto a^3$$ When $T$ is in years and $a$ is in AU, the proportionality constant becomes $1$. This can't be a coincidence; I would like to know the physical reason for it.
It is almost trivial! The law in SI units: $$\tag{1} T^2 = \frac{4 \pi^2}{G M} \, a^3. $$ Now write this for $T_0 = 1~\mathrm{year}$ and $a_0 = 1~\mathrm{AU}$: $$\tag{2} \frac{T^2}{T_0^2} = \frac{\displaystyle{\frac{4 \pi^2}{G M} \, a^3}}{\displaystyle{\frac{4 \pi^2}{G M} \, a_0^3}} \equiv \frac{a^3}{a_0^3}. $$ Now, define $a' = a / a_0$ and $T' = T / T_0$, so $a'$ is now measured in AU and $T'$ in years: $$\tag{3} T'^{2} = a'^3. $$
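That $T_0 = 1$ year and $a_0 = 1$ AU really do satisfy $(1)$ for the Sun (the physical input hidden in the unit choice) can be double-checked numerically; constants below are rounded to four significant figures:

```python
import math

# Check that sqrt(4*pi^2 * (1 AU)^3 / (G * M_sun)) is one year.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30     # solar mass, kg
AU = 1.496e11        # astronomical unit, m
year = 3.156e7       # one year, s

T = math.sqrt(4 * math.pi**2 * AU**3 / (G * M_sun))
print(T / year)      # ~1.00
```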
{ "language": "en", "url": "https://physics.stackexchange.com/questions/507893", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Complex number representation of a wave There are some aspects of waves I am confused about, for instance in Chapter 11, Fraunhofer Diffraction. The incoming electric fields can be partially expressed as $e^{i(kr-\omega t)}$. I have two questions regarding this:

1. What does $\,(kr-\omega t)\,$ indicate, and how do we know it is these values? Why not just use $\omega t?$
2. Why is there suddenly an introduction of an imaginary component of the oscillation? What is the significance of the imaginary component of the incoming oscillation, and why can we express the fields in this way?

I would appreciate a layman's-terms or down-to-earth explanation for 2, but a detailed explanation for 1.
We write $kr$ to show how the wave changes through space. For example, you can fix $t=$ constant, so that the factor $e^{-i\omega t}$ is constant, and then you can see the changes through space just by shifting the "$x$" space coordinate. "$kx-\omega t$" expresses the whole phase: you take the real part of this exponential, and after multiplying by the amplitude you get the value of your field at this moment $t$ and this coordinate $x$. When we write a wave in complex form, we assume that after all mathematical manipulations we take the real part of it. So you could write Re[...] everywhere, and it would be the same.
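The "take the real part at the end" convention is easy to verify numerically; a minimal sketch with arbitrary $k$, $\omega$, and a frozen time:

```python
import numpy as np

# Check that Re[e^{i(kx - wt)}] is just the familiar cosine wave.
k, w = 2.0, 3.0                 # arbitrary wavenumber and angular frequency
t = 1.7                         # freeze time; the spatial profile comes from k*x
x = np.linspace(0, 10, 500)

complex_wave = np.exp(1j * (k * x - w * t))
assert np.allclose(complex_wave.real, np.cos(k * x - w * t))
print("real part matches cos(kx - wt)")
```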
{ "language": "en", "url": "https://physics.stackexchange.com/questions/507992", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is tension always the same as centripetal force? For example, if a ball is attached to a string and released from a vertical height and then pivots around a point to initiate circular motion, the tension is equal to the centripetal force. If, on the other hand, a ball hangs from a string and is hit in such a way that it travels in a vertical circle, the tension is not simply equal to the centripetal force. When is tension equal to centripetal force, and when is it another value? The scenarios above are taken from previous problems I've seen in class. I'm not sure if I've explained them as clearly as needed, but I think the general idea should be understood.
I think that part of the misunderstanding is due to the use of the term centripetal force in a context where more than one force is involved. Take the vertical circular motion as an example. The circulating mass is under the action of two forces: the gravitational attraction due to the Earth and the tension in the string. These two forces produce a net force on the mass which causes a centripetal (towards the centre) acceleration. As the mass progresses it is still acted upon by the same two forces, and there is a net force acting on the mass. That net force can be resolved into a radial component and a tangential component. The radial component is responsible for the centripetal acceleration of the mass. Even when only one force acts, as with the Earth in an elliptical orbit around the Sun, one must be careful, as the gravitational attraction of the Sun can be split into two components, with one component causing the centripetal acceleration. If, however, the orbit is assumed to be circular, then one can say that the gravitational attractive force as a whole causes a centripetal acceleration, and that the acceleration is due to a centripetal (towards the centre) force, although personally I would avoid calling it that.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/508090", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Proof of continuity of voltage across a capacitor It is known that the voltage drop across a capacitor is a continuous function of time. This means that, for each instant $t_0$, we may write: $V(t_0^-) = V(t_0^+)$. This relationship is widely used in the time-domain analysis of RC circuits, for instance, and it is due to the fact that if $V$ were not continuous, it would mean infinite current through the capacitor, which is not physically possible. But what is the mathematical proof of that? I remember I saw it in a book but I cannot find it any more. I remember that it was a consequence of the relationship $i = C\,dV/dt$....
Well, you wrote your answer yourself! Since $i=C\, dV/dt$, it means that for the current $i$ to be finite, $V$ may not have "jumps". A function that has jumps does not have a derivative at that point, from a rigorous mathematical point of view. If one does "hand-waving" about that, one might (erroneously) speak of some kind of "pseudo-derivative" of a function with a jump, but then this object would be infinite. There are rigorous ways to speak of such things, which mathematicians call "distributions", but that would lead us too far. Just see that a finite current implies a finite derivative of $V$, and thus you need $V(t_0^-) = V(t_0^+)$.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/508402", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Confusion regarding the Fermi-Dirac distribution function The Fermi-Dirac distribution function given in http://hyperphysics.phy-astr.gsu.edu/hbase/quantum/disfd.html is different from what I learnt. What I learnt had the chemical potential $\mu$ in place of $E_F$. Isn't $\mu$ only equal to $E_F$ when $T = 0$? Or am I mixing stuff up?
In solid-state physics, especially in the subfield of electronic devices, it is not uncommon to use Fermi energy as a synonym of chemical potential. Some texts explicitly disclose this possible source of confusion and misuse of the term Fermi energy; others do not.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/508560", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to disturb a particle distribution I have a set of N macroscopic particles, each representing a group of electrons. Each of these macroscopic particles has a different charge. My system is one-dimensional, so all particles' positions are described by their position along the z-axis. I have an array positions of length N, where each element represents the z position of a single particle: positions[i] represents the position of the i$^{th}$ particle. These values for the positions are such that the charge density is uniform along z between $0$ and $L$: $$\rho_1(z) = N_1$$ I need to disturb my charge distribution by a factor $$\delta \rho = A \cos \left( \frac{\pi mz}{L} \right)$$ in order to have a final charge distribution of $$\rho_2(z) = N_1 + A \cos \left( \frac{\pi mz}{L} \right)$$ with $m$ an integer equal to or larger than 1, in order to keep the charge conserved. I have been struggling with this for longer than I would like to admit; could someone point me in the right direction? How could I modify each positions[i] in order to change my charge distribution as expected? Effectively what I need is to remove particles from the region where $\delta \rho < 0$ and add them in regions where $\delta \rho > 0$, but following the cosine density I need.
I am assuming that you want to keep the total charge constant. Let's assume that your array is of size $2m$. Then you can update the first $m$ entries of the array using the assignment $r[i] \leftarrow r[i] + A\cos(\pi i/m)$ and the last $m$ entries using the assignment $r[i] \leftarrow r[i] - A\cos(\pi i/m)$. I suggest that you modify each half separately to ensure that the total charge indeed remains constant.
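Another common way to impose a prescribed density on particle positions, rather than shifting array halves, is inverse-CDF mapping: each particle keeps its quantile under the uniform distribution and moves to the position with the same quantile under the target density. A sketch; all parameter values (`N1`, `A`, `m`, `L`, particle count) are illustrative assumptions:

```python
import numpy as np

# Move uniform particles so their density becomes N1 + A*cos(pi*m*z/L),
# via the inverse cumulative distribution of the target density.
N1, A, m, L = 1.0, 0.3, 2, 10.0           # need |A| < N1 so the density stays positive
rng = np.random.default_rng(0)
positions = rng.uniform(0, L, 100_000)    # initially uniform particles

# Cumulative distribution of the target density (analytic integral);
# the cosine term integrates to zero for integer m, so charge is conserved.
z_grid = np.linspace(0, L, 10_001)
cdf = N1 * z_grid + A * L / (np.pi * m) * np.sin(np.pi * m * z_grid / L)
cdf /= cdf[-1]

# Each particle's uniform quantile is z/L; invert the CDF by interpolation.
new_positions = np.interp(positions / L, cdf, z_grid)

# Quick check: histogram should follow the target (normalized) density.
hist, edges = np.histogram(new_positions, bins=50, range=(0, L), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
expected = (N1 + A * np.cos(np.pi * m * centers / L)) / (N1 * L)
print(np.max(np.abs(hist - expected)))    # small residual, just sampling noise
```

This moves particles smoothly out of the $\delta\rho < 0$ regions and into the $\delta\rho > 0$ regions while keeping the particle count fixed.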
{ "language": "en", "url": "https://physics.stackexchange.com/questions/508680", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Can someone explain what is the force the ball will exert? If a ball is in free fall, then the force exerted by the ball on the ground would be $mg$. But that's not the case: in real life the ball would hit with more force. Yet when I draw a free-body diagram, there is only one force acting on it, $mg$. Can someone explain what force the ball will exert?
Perhaps you also want to calculate the contact force between the ball and the ground. Your ball falls from height $H$ with initial velocity $v_0$ and then touches the ground. If we take a simple model of the ground with stiffness $k$ and damping $d$, you get this equation of motion: $$m\ddot{x}+d\,\dot{x}+k\,x=m\,g\tag 1$$ The contact force is the spring force plus the damper force (free-body diagram): $$F_c=d\,\dot{x}+k\,x$$ To find the solution of equation (1), you need the initial conditions $x(t=0)=0$ and $\dot{x}(t=0)=v_e$, where $v_e$ is the velocity of the ball when it touches the ground, $v_e=\sqrt{v_0^2+2\,g\,H}$, with $v_0$ the initial velocity of the ball. A diagram of the contact force shows this: the blue line is the weight of the ball, the red one has initial velocity $v_0=0$, and the green one has initial velocity $v_0=10$. You see that the contact force is much higher than the weight and depends on the velocity $v_e$, which is a function of the height $H$ and the initial velocity $v_0$.
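A sketch of integrating equation (1) numerically to reproduce such a curve; all parameter values below are illustrative, not taken from the answer:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Spring-damper contact model of eq. (1); x is the penetration depth.
m, g = 0.5, 9.81          # ball mass (kg), gravity (m/s^2) -- illustrative
k, d = 5e4, 5.0           # ground stiffness (N/m) and damping (N*s/m)
H, v0 = 2.0, 0.0          # drop height (m) and initial speed (m/s)

v_e = np.sqrt(v0**2 + 2 * g * H)    # impact speed

def rhs(t, y):
    x, xdot = y
    return [xdot, g - (d * xdot + k * x) / m]   # m*xddot + d*xdot + k*x = m*g

sol = solve_ivp(rhs, (0, 0.05), [0.0, v_e], max_step=1e-5)
F_contact = d * sol.y[1] + k * sol.y[0]         # F_c = d*xdot + k*x

# Peak contact force in units of the weight m*g: far larger than 1.
print(F_contact.max() / (m * g))
```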
{ "language": "en", "url": "https://physics.stackexchange.com/questions/508808", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Newton's third law and Coulomb's law Coulomb's law states that if we have two charges $q_{1}$ and $q_{2}$, then $q_{1}$ will act on $q_{2}$ with a force $$ \textbf{f}_{12}=\frac{q_{1}q_{2}}{r_{12}^2} { \hat {\textbf {r}}_{12}},$$ and $q_{2}$ will similarly act on $q_{1}$ with a force $\textbf{f}_{21}$ such that $$\,\textbf{f}_{21}=-\textbf{f}_{12}.$$ Suppose the only things we knew were that the repulsive forces vary like $r^{-2}$, and that they depend on the magnitude of the charges involved. Can we infer from these two observations alone that $\textbf{f}_{21}=-\textbf{f}_{12}$? Or would we need further experiments to establish this equation? The collinearity can be deduced from symmetry considerations. What about the magnitude?
It is worth repeating that laws in physics are axioms: there is no proof or derivation, other than that the law is necessary so that a physical-mathematical theory can choose those solutions that fit existing data and, importantly, are predictive in new situations. Laws, in effect, are a distillate of data. Coulomb's law defines one of the possible forces, so that Newton's laws can be used in order to have classical-mechanics solutions and predictability in kinematics problems involving charges.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/509039", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Is there a limit to the number of electrons a single hydrogen atom can have? Is there a limit to the number of electrons a single hydrogen atom can have? If so, what is it, and why? Is the answer to the "why" scalable to helium?
Your question is about the hydrogen ion and what happens when it gains electrons. The bare proton, without any electron, is what we usually call the hydrogen ion; when it binds two electrons it becomes a negative ion. Negative ions with two or more extra electrons beyond that are unstable. You are basically asking whether you can bind a proton with more than two electrons. You can, though, try to use an external magnetic field to keep such a state stable: https://link.springer.com/article/10.1007/s00601-009-0018-7
{ "language": "en", "url": "https://physics.stackexchange.com/questions/509254", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Representation of the $\rm SU(5)$ model in GUT In Srednicki's textbook Quantum Field Theory, section 97 discusses Grand Unification. On page 606, it states: In terms of $\rm SU(5)$, we have \begin{equation} 5 \otimes 5 = 15_{S} \oplus 10_{A} \tag{97.5} \end{equation} where the subscripts $S$ and $A$ refer to symmetric and antisymmetric respectively. To my understanding, $15_{S}$ is a $15 \times 15$ matrix, and $10_{A}$ is a $10 \times 10$ matrix. Am I right? However, in the text, a left-handed Weyl field $\chi_{ij} = - \chi_{ji}$ in the 10 representation is defined. Its components are given by \begin{equation} \chi_{ij} = \left( \begin{array}{ccccc} 0 & \overline{u}^{g} & -\overline{u}^{b} & u_{r} & d_{r} \\ -\overline{u}^{g} & 0 & \overline{u}^{r} & u_{b} & d_{b} \\ \overline{u}^{b} & -\overline{u}^{r} & 0 & u_{g} & d_{g} \\ -u_{r} & - u_{b} & -u_{g} & 0 & \overline{e} \\ -d_{r} & -d_{b} & -d_{g} & -\overline{e} & 0 \end{array} \right). \tag{97.12} \end{equation} Why is $\chi_{ij}$ not a $10 \times 10$ matrix, but a $5\times 5$ matrix?
Am I right? Only in a small way, but basically not. (97.5) denotes dimensionalities of the irreducible representations of SU(5) involved, so how the respective vectors are acted upon by the coproduct of SU(5) generators. On the left hand side, you have two quintuplets (5-vectors), each acted upon by 5×5 matrices $T^a_5$, so the whole reducible rep acted upon by 25×25 matrices $$ \Delta (T^a)_{25}= T^a_5\otimes 1\!\!1 _5 + 1\!\!1 _5 \otimes T^a_5 . $$ On the right hand side, you have the reduction of the 25-dim vector into two separated vectors of dimension 15 and 10 respectively, each one acted on by 15×15 and 10×10 generators respectively, $$ T^a_{15} \oplus T^a_{10} , $$ That is the 25×25 matrices break up consistently into upper left 15×15 blocks and lower right 10×10 blocks, as your group theory text should detail. All 5,25,15,10 dim matrices obey the very same su(5) Lie algebra! So (97.12) is basically a 10-dim complex vector, with the 10 degrees of freedom of the upper triangular piece arranged into a 5×5 antisymmetric matrix format by superfluous replication for future convenience. (It couples to a fermion 5 and a Higgs 5 in the invariant term in the lagrangian, so it pays to have two loose 5 indices to saturate, instead of a loose 10.) An analog you might use to fix your thinking is looking at the Kronecker composition of two doublets (spin 1/2s) of SU(2) into a (symmetric) triplet (spin one) and an antisymmetric singlet (spin 0), $$ 2\otimes 2= 3_s\oplus 1_A . $$ If you chose to, you could arrange the 3-vector on the right into the format of a real antisymmetric 3×3 matrix, as one routinely does in the logarithm of 3×3 rotation matrices, which amounts to three angular momentum generators $\vec L$.
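The SU(2) analogy in the last paragraph can be checked directly with a few lines of numerical linear algebra. This is a minimal sketch (not from the textbook): it builds the coproduct $\Delta(S^a) = S^a\otimes 1\!\!1 + 1\!\!1\otimes S^a$ for spin-1/2 generators and verifies that, in the singlet/triplet basis, the $4\times 4$ matrices block-diagonalize into the antisymmetric $1_A$ and the symmetric $3_S$:

```python
import numpy as np

# Spin-1/2 generators S^a = sigma^a / 2
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
I2 = np.eye(2)

def coproduct(S):
    # Delta(S^a) = S^a (x) 1 + 1 (x) S^a acting on the 4-dim space 2 (x) 2
    return np.kron(S, I2) + np.kron(I2, S)

# Basis ordering |00>, |01>, |10>, |11>
singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)       # antisymmetric 1_A
triplet = [np.array([1.0, 0, 0, 0]),                 # symmetric 3_S
           np.array([0, 1.0, 1, 0]) / np.sqrt(2),
           np.array([0, 0, 0, 1.0])]

# Change of basis: first column the singlet, then the three triplet states
U = np.column_stack([singlet] + triplet)

for S in (sx, sy, sz):
    D = U.conj().T @ coproduct(S) @ U
    # the blocks mixing singlet and triplet must vanish: 1_A and 3_S decouple
    assert np.allclose(D[0, 1:], 0) and np.allclose(D[1:, 0], 0)
```

The same bookkeeping, with $5\times 5$ generators acting on the 25-dimensional product space, is what underlies Eq. (97.5).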
{ "language": "en", "url": "https://physics.stackexchange.com/questions/509363", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Interpretation of the photon scattering rate? The photon scattering rate $\Gamma$ describes the rate at which photons scatter off an atom$^1$. In a two-level system, the ansatz for the photon scattering rate often is given by \begin{equation} \Gamma = \rho_{22}\gamma \end{equation} where $\rho_{22}$ is the probability to find the atom in the excited state and $\gamma$ is the rate of spontaneous decay. However, I don't see the connection between the ansatz above and what the photon scattering rate is physically meant to be. $^1$In my imagination, the photon scattering rate is the absorption rate for photons at a certain frequency $\omega$. Hence $\Gamma(\omega)$ shows the saturation broadened Lorentzian absorption line of the atom, centered around a resonance frequency.
Considering light as a stream of photons of energy $\hbar\omega$, photon scattering is usually defined as cycles of absorption and subsequent spontaneous emission: $$\Gamma_{\rm sc}(\mathbf{r}) = \frac{P_{\rm abs}}{\hbar\omega} = \frac{1}{\hbar\epsilon_0 c}\,\operatorname{Im}(\alpha)\, I(\mathbf{r}).$$ http://cds.cern.ch/record/380296/files/9902072.pdf The photon scattering rate is the radiated power divided by the photon energy $\hbar\omega$: $$R_{\rm sc} = \frac{P_{\rm rad}}{\hbar\omega}$$ http://atomoptics-nas.uoregon.edu/~dsteck/teaching/quantum-optics/quantum-optics-notes.pdf
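To connect the ansatz $\Gamma = \rho_{22}\gamma$ from the question with the saturation-broadened Lorentzian it describes, here is a sketch using the standard steady-state excited-state population of a driven two-level atom (the decay rate and saturation parameter below are assumed example values, not from the linked references):

```python
import numpy as np

gamma = 2 * np.pi * 6.07e6   # spontaneous decay rate (rad/s); Rb-like, assumed
s0 = 2.0                     # on-resonance saturation parameter, assumed

def scattering_rate(delta):
    # steady-state excited-state population of a driven two-level atom,
    # then the ansatz Gamma = rho22 * gamma from the question
    rho22 = (s0 / 2) / (1 + s0 + (2 * delta / gamma) ** 2)
    return rho22 * gamma

delta = np.linspace(-10 * gamma, 10 * gamma, 100001)
G = scattering_rate(delta)

assert np.argmax(G) == len(delta) // 2   # the line peaks on resonance
# saturation-broadened Lorentzian: FWHM = gamma * sqrt(1 + s0)
above_half = delta[G >= G.max() / 2]
fwhm = above_half[-1] - above_half[0]
assert np.isclose(fwhm, gamma * np.sqrt(1 + s0), rtol=1e-2)
```

This is exactly the shape the questioner describes: a Lorentzian absorption line in the detuning $\delta$, broadened by saturation.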
{ "language": "en", "url": "https://physics.stackexchange.com/questions/509576", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Is the sound of hammering nails louder since it travels through the walls? The question might sound silly but hear me out. When people are hammering nails into a wall in a nearby room, you can often hear it very loudly. It seems natural to suspect that this is because, on top of being loud to begin with, the sound is traveling through the walls. (Sound travels faster through solids, after all.) But on the other hand, sound waves should dissipate faster in walls than in air. Signal strength is what's relevant here, not wave speed. And sound must attenuate faster in walls than in air, because thickening a wall acts to lower the volume of sound passing through. So I think it's a reasonable question: Does someone hammering a nail into the wall in the room next-door sound louder than, say, the exact same sound (with the same source amplitude) being played from a speaker in the middle of that room? In other words, can noises from the next room over be amplified by "wall effects"?
Hammering on a wall will transmit the sound vibrations much more strongly than playing a recording of hammering towards the wall, even at the same volume level in the room. This is because instead of the sound waves hitting the wall, the hammer itself is hitting the wall, and so the displacement waves are being fed directly into the wall structure.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/509672", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Entanglement of initially unentangled state of two non-interacting systems Suppose we have two subsystems $A$ and $B$, and the Hamiltonian $H$ for the system separates into a sum $H_A(t) + H_B(t)$ of two time-dependent Hamiltonians $H_A(t)$ and $H_B(t)$ that act only on subsystems $A$ and $B$ respectively. Suppose the collective system begins in a non-entangled (product) state $\psi_o \equiv |a\rangle \otimes |b\rangle$ and the system evolves under the action of $H$ for some period of time so that the state is now some $\psi'$. Will $\psi'$ also be a product state? If not, wouldn't it be surprising that correlations would develop in non-interacting systems? A short time $dt$ later the Schrodinger equation tells us that $$ |a\rangle \otimes |b\rangle \to |a\rangle \otimes |b\rangle + \frac{dt}{i\hbar} \left( H_A | a \rangle \otimes | b \rangle + | a \rangle \otimes H_B | b \rangle \right) $$ so that to first order in $dt$ we could write $$ |a\rangle \otimes |b\rangle \to \left( 1 + \frac{dt}{i\hbar} H_A \right) |a\rangle \otimes \left( 1 + \frac{dt}{i\hbar} H_B \right) |b\rangle $$ so that it appears that the state remains in a product state. I do not know how to show this for finite $dt$. Could we add a term $-\frac{\Delta t^2}{\hbar^2}$ in the Dyson series, i.e. \begin{align} U(t_o,t) & \equiv \Pi_{n=0}^N \left(1 + \frac{\Delta t}{i\hbar}\left( H_A(t_o + n \Delta t ) + H_B(t_o + n \Delta t) \right) \right) \\ & \to \Pi_{n=0}^N \left[ \left(1 + \frac{\Delta t}{i\hbar} H_A(t_o + n \Delta t ) \right)\left(1 + \frac{\Delta t}{i\hbar} H_B(t_o + n \Delta t ) \right) \right] \equiv \tilde{U}(t_o,t) \end{align} where $\Delta t \equiv \frac{t-t_o}{N}$ so that $U=\tilde{U}$ in the limit $N \to \infty$?
Define $|\alpha(t)\rangle$ and $|\beta(t)\rangle$ as the solutions of the IVPs \begin{align} i\hbar \frac{\mathrm d}{\mathrm dt}|\alpha(t)\rangle & = H_A(t) |\alpha(t)\rangle \quad\text{under}\quad |\alpha(0)\rangle = |a\rangle, \\ i\hbar \frac{\mathrm d}{\mathrm dt}|\beta(t)\rangle & = H_B(t) |\beta(t)\rangle \quad\text{under}\quad |\beta(0)\rangle = |b\rangle. \end{align} Then you can show that \begin{align} i\hbar \frac{\mathrm d}{\mathrm dt}|\alpha(t)\rangle\otimes|\beta(t)\rangle & = (H_A(t)+H_B(t)) |\alpha(t)\rangle\otimes|\beta(t)\rangle \\ \text{under}\quad |\alpha(0)\rangle\otimes|\beta(0)\rangle & = |a\rangle\otimes|b\rangle, \end{align} i.e. the product state $|\alpha(t)\rangle\otimes|\beta(t)\rangle$ is a solution of the TDSE you're interested in; since that solution is unique, it follows that it is the solution you're interested in. $$\tag*{$\blacksquare$}$$
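The uniqueness argument can also be checked numerically. A minimal sketch ($\hbar = 1$, with hypothetical random time-dependent Hamiltonians; not part of the original answer): evolving the two factors separately and the product state jointly, step by step with the Hamiltonians frozen on each step, gives the same state at every time, i.e. the product state never becomes entangled:

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_herm(n):
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (A + A.conj().T) / 2

def step(H, dt):
    # exact propagator exp(-i H dt) for Hermitian H (hbar = 1)
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w * dt)) @ V.conj().T

# hypothetical time-dependent Hamiltonians for a qubit (A) and a qutrit (B)
HA0, HA1 = rand_herm(2), rand_herm(2)
HB0, HB1 = rand_herm(3), rand_herm(3)
HA = lambda t: HA0 + np.sin(t) * HA1
HB = lambda t: HB0 + np.cos(t) * HB1

a = np.array([1, 0], dtype=complex)       # |a>
b = np.array([0, 1, 0], dtype=complex)    # |b>
psi = np.kron(a, b)                       # |a> (x) |b>

dt, steps = 1e-3, 2000
for n in range(steps):
    t = n * dt
    a = step(HA(t), dt) @ a               # evolve the factors separately...
    b = step(HB(t), dt) @ b
    H = np.kron(HA(t), np.eye(3)) + np.kron(np.eye(2), HB(t))
    psi = step(H, dt) @ psi               # ...and the product state jointly

# the joint evolution never entangles the factors
assert np.allclose(psi, np.kron(a, b), atol=1e-8)
```

The agreement is exact (up to floating point) because $H_A\otimes 1$ and $1\otimes H_B$ commute, so each frozen-step propagator factorizes as $e^{-iH_A dt}\otimes e^{-iH_B dt}$.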
{ "language": "en", "url": "https://physics.stackexchange.com/questions/509939", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Derivation of scaled Richardson number and speed of internal gravity waves for density stratified fluid My questions relate to page 29 of this document. In particular, on page 29, two expressions for the bulk Richardson number are given; one is said to be 'before scaling'. How is the first expression for the bulk Richardson number $Ri_0 = gh/\rho_0$ derived from the unscaled $Ri_0 = g\Delta\rho h/(\rho_0 \Delta U^2)$? I can't seem to find a way of linking the two. Also, how is the phase speed of the internal gravity wave derived as $c = \sqrt{Ri_0/\alpha}$? I can't seem to figure out how this was obtained. Any help with the above questions is much appreciated.
I am not sure, but there might be a small error in this derivation that compensates itself. It has been quite some time since I have seen a linear stability analysis, so take the following thoughts with a pinch of salt. The Richardson number is defined as $$ Ri := \frac{g}{\rho} \frac{\frac{\partial \rho}{\partial z}}{\left( \frac{\partial u}{\partial z} \right)^2} \qquad \frac{\text{buoyancy}}{\text{flow shear}}. \tag{1}\label{1}$$ When approximating the derivatives in \eqref{1} as finite differences (bulk Richardson number) this leads to $$ Ri = \frac{g}{\rho} \frac{\frac{\Delta \rho}{\Delta z}}{\left( \frac{\Delta u}{\Delta z} \right)^2} = \frac{g}{\rho} \frac{\Delta \rho \Delta z}{\Delta u^2}. \tag{2}\label{2}$$ We shift to the frame moving with the mean velocity $\overline{U} = \frac{U_1 + U_2}{2}$ and for convenience introduce $\Delta z = h$ (see page 26) as the thickness of the velocity transition layer between 1 and 2, which is assumed linear and of finite thickness. Thus \eqref{2} can also be written for the point $z = 0$ (hence the index $0$), considering the intervals I and II, as $$ Ri_0 = \frac{g}{\rho_0} \frac{\Delta \rho\, h}{\Delta U^2}. \tag{3}\label{3} $$ Now we consider a scaled system where all differences are normalised to the range $\left[ -1, 1 \right]$, such as in the figure on page 29, and therefore all the changes are $1 - (-1) = 2$ ($\Delta U = 2$ and $\Delta \rho = 2$). This allows us to simplify \eqref{3} to $$ Ri_0 = \frac{g h}{2 \rho_0} \tag{4}\label{4} $$ which is referred to as "after scaling". Contrary to the document I still got a factor of two in there, but it eliminates itself later on.
The phase speed is given as (page 5) $$ c_p = \frac{\omega}{k} \tag{5}\label{5} $$ and the dispersion relation by (page 5) $$ \omega^2 = \frac{g ( \rho_2 - \rho_1) k}{\rho_1 + \rho_2} \tag{6}\label{6} $$ When transforming \eqref{6} to the scaled system $\rho_{1,2} = \rho_0 \pm 1$ this results in $$ \omega^2 = \frac{g \overbrace{[\rho_0 + 1 - (\rho_0 - 1)]}^{2} k}{\underbrace{ \rho_0 + 1 + \rho_0 - 1}_{2 \rho_0} } = \frac{g k}{\rho_0} \tag{7}\label{7}$$ and therefore, combining \eqref{5} and \eqref{7}, $c_p$ is given by $$ c_p = \frac{1}{k} \sqrt{\frac{g k}{\rho_0}} = \sqrt{\underbrace{\frac{g h}{2 \rho_0}}_{Ri_0} \underbrace{\frac{2}{k h}}_{\frac{1}{\alpha}}} = \sqrt{\frac{Ri_0}{\alpha}} \tag{8}\label{8} $$ as (page 26) $ \alpha = \frac{k h}{2}$ holds.
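As a quick numerical sanity check of Eq. (8) (with hypothetical values for $g$, $h$, $\rho_0$ and $k$; not part of the document), both routes to the phase speed agree:

```python
import numpy as np

g, h, rho0, k = 9.81, 2.0, 10.0, 0.7   # assumed values in the scaled system

Ri0 = g * h / (2 * rho0)                   # Eq. (4): Delta_rho = Delta_U = 2
alpha = k * h / 2                          # definition from page 26
c_dispersion = np.sqrt(g * k / rho0) / k   # c_p = omega/k, omega^2 from Eq. (7)
c_richardson = np.sqrt(Ri0 / alpha)        # Eq. (8)

assert np.isclose(c_dispersion, c_richardson)
```

Both reduce to $c_p = \sqrt{g/(k\rho_0)}$, which is why the stray factors of two cancel.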
{ "language": "en", "url": "https://physics.stackexchange.com/questions/510016", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Dimensional regularization of a divergent integral Suppose there is an integral in four-dimensional Euclidean space \begin{equation} I_{d=4}=\int_0^\infty d^4x\frac{1}{|x|^2},~ \end{equation} which is divergent; $|x|$ is the length of the vector. Can one use dimensional regularization to compute this integral by using $d^4x \to d^dx$, with $d=4-\epsilon$? Or, more abstractly, my question is: if I want to compute an integral $I_{d=4}$ that is divergent, for example in the range $2<d<5$, can we use dimensional regularization by writing $d=4+\epsilon$ and then, at the end of the calculation, let $\epsilon\to0$?
In dimensional regularisation this integral would normally be set to zero - the reason is that the integrand contains no dimensionful parameter upon which the result can depend. This is curious in QFT because it removes IR and UV divergences at the same time.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/510172", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Should the ground state electron density of an atom go to zero at the origin? I have heard from my professor that the particle density of electrons (in the ground state) of an atom should vanish near the nucleus. Hydrogen is an obvious counter-example, so I am trying to work out what he could have meant. Which quantum phenomenon is he thinking of?
As per QM, there are three competing effects that balance out to keep an electron at a stable energy level: * *the electron's potential EM energy keeps it close to the nucleus *the electron's kinetic energy (momentum) keeps it away from the nucleus *the HUP keeps it away from the nucleus (in case it should get too close) In quantum mechanics, the uncertainty principle (also known as Heisenberg's uncertainty principle) is any of a variety of mathematical inequalities asserting a fundamental limit to the precision with which certain pairs of physical properties of a particle, known as complementary variables or canonically conjugate variables, such as position x and momentum p, can be known or, depending on interpretation, to what extent such conjugate properties maintain their approximate meaning, as the mathematical framework of quantum physics does not support the notion of simultaneously well-defined conjugate properties expressed by a single value. Now the HUP makes sure that there is a very low probability for the electron to exist inside the nucleus (maybe this is what you are referring to).
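A standard back-of-the-envelope estimate makes this balance quantitative (a sketch, not from the answer above): take the HUP as $p \sim \hbar/r$, so the kinetic energy grows like $1/r^2$ as the electron is squeezed toward the nucleus, and minimise the total energy over $r$. The minimum lands at the Bohr radius with roughly the hydrogen ground-state energy:

```python
import numpy as np

hbar = 1.054571817e-34   # J s
m_e = 9.1093837015e-31   # kg
ke2 = 2.30708e-28        # e^2 / (4 pi eps0) in J m

r = np.linspace(1e-12, 5e-10, 200000)
# crude energy estimate: confinement (p ~ hbar/r via the HUP) vs attraction
E = hbar**2 / (2 * m_e * r**2) - ke2 / r

r_min = r[np.argmin(E)]
assert np.isclose(r_min, 5.29e-11, rtol=1e-2)             # ~ the Bohr radius
assert np.isclose(E.min() / 1.602e-19, -13.6, rtol=1e-2)  # ~ -13.6 eV
```

The $1/r^2$ confinement term is exactly the "HUP keeps it away from the nucleus" effect: squeeze the electron closer and its kinetic energy cost outgrows the Coulomb gain.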
{ "language": "en", "url": "https://physics.stackexchange.com/questions/510298", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How can one tell they are accelerating? In Newtonian mechanics all inertial reference frames follow the same laws of physics. Why does this break down for acceleration? In a rocket you feel acceleration because the rocket is accelerating but everything inside is staying at the same speed, so it looks like there is a force pushing it back. But if everything in the rocket is equally accelerated (let's say because the rocket is charged, including all of the inside, so that the rocket doesn't push on anything and it is accelerating towards a much larger opposite charge), how can you tell you're accelerating? It will just look like the earth is accelerating away from you. Is there some mathematical way of showing from both reference frames that it will look like you are the one being accelerated? What would the path of constant acceleration look like if the speed of light is constant? And is there a Lorentz transform for acceleration, or even a general Lorentz transform for more complicated motion?
You're correct - if every single particle that you have access to experiences the exact same acceleration, then you have no way to detect that you're accelerating, even in principle. In practice, gravitational fields are the only way to arrange for this to happen. Every time you literally feel acceleration, it's because different parts of your body are experiencing slightly different instantaneous accelerations, so your body is experiencing internal forces.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/510521", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 0 }
Problem Regarding Buoyancy A spherical marble of radius $1\,$cm is stuck in a circular hole slightly smaller than its own radius (for calculation purposes , both are equal) at the bottom of a filled bucket of height $10\,$cm. Find the force on the marble due to the water. I have always been troubled by problems like this. Does the marble not displace a certain volume of fluid? Should a buoyant force not act on it? However, in this problem, the answer happens to equal the product of the pressure, and the projection area.... And, when I came across this similar problem :- A steel ball is floating in a trough of mercury. If we fill the empty part of the trough with water, what happens to the steel ball? The answer to this one is that the steel ball rises. Here, instead of multiplying the pressure and area of projection, and arguing that a net downward force acts, we argue that the steel ball displaces water, and causes an upward buoyant force to act. My question is, when does one know which force to apply?
For the first problem, you should know that buoyancy arises due to a pressure difference, which in turn arises due to the mass of fluid above a certain level. You are correct in saying that the ball displaces a volume of water, equal here to half the volume of the sphere. Since the water only touches the top half of the marble, the force acting on it due to the water is (mass of water above the ball)·g, which is just another way of saying that it equals pressure times projection area. The actual volume of water above the ball = (h × area of the base of the cylindrical region) − (half the volume of the sphere), and now it's quite easy to proceed.
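Plugging the problem's numbers into this prescription (a sketch; $g$ and the water density are assumed standard values):

```python
import numpy as np

rho, g = 1000.0, 9.81    # water density (kg/m^3) and gravity (m/s^2), assumed
R, h = 0.01, 0.10        # marble radius and water depth (m), from the problem

V_cylinder = np.pi * R**2 * h          # column of base area pi R^2 up to the surface
V_hemisphere = (2 / 3) * np.pi * R**3  # upper half of the marble fills this much

# downward force = weight of the water actually sitting above the marble
F = rho * g * (V_cylinder - V_hemisphere)

assert np.isclose(F, 0.288, atol=0.005)   # about 0.29 N downward
```

Note that had the marble been fully immersed rather than plugging the hole, the usual upward buoyant force $\rho g \cdot \frac{4}{3}\pi R^3$ would apply instead; the hole removes the upward pressure on the lower hemisphere.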
{ "language": "en", "url": "https://physics.stackexchange.com/questions/510647", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Can frequency affect loudness? I don't think there is any relationship in frequency and loudness, and frequency only affects the pitch of the wave. But could a really high frequency somehow affect the amplitude of a transverse wave in real-life conditions?
I first want to clarify a few terms as they are commonly used in acoustics: * *Frequency is the number of times per second that the sound pressure changes from low to high. *Amplitude is an objective physical measure of the strength of the sound wave. For a sound wave with an amplitude of 1 Pa, the high sound pressure is the atmospheric pressure plus 1 Pa, while the low sound pressure is atmospheric pressure minus 1 Pa. Put simply, sound amplitude is often expressed logarithmically as decibels. *Loudness is the subjective perception of how strong the sound is. This perception depends on individuals' hearing. For example, someone with age-related hearing loss would typically perceive a high-frequency sound as less loud than someone with "normal" hearing. People's hearing is also more sensitive to some frequencies than others. In other words, if you play sounds at different frequencies but the same amplitudes to a person, he or she would perceive them as having different loudness. This effect is often characterised through equal-loudness curves such as the ones below. Sounds along each of the red lines are, on average for humans, perceived as equally loud. This shows us that people are much less sensitive to low-frequency sound, and most sensitive around 3–4 kHz, although this effect diminishes as sounds get louder. This is a well-understood effect that is taken into account when setting limits for or measuring noise from e.g. traffic or industry. I hope that answered your question!
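To make the amplitude-versus-decibel relation concrete, here is a small sketch converting the 1 Pa amplitude example above to a sound pressure level (using the standard 20 µPa reference pressure):

```python
import numpy as np

p_amp = 1.0     # sound pressure amplitude (Pa), the example from above
p_ref = 20e-6   # standard reference pressure for SPL in air (Pa)

p_rms = p_amp / np.sqrt(2)          # RMS of a sinusoidal pressure wave
spl = 20 * np.log10(p_rms / p_ref)  # sound pressure level in dB

assert np.isclose(spl, 91.0, atol=0.5)   # a 1 Pa tone is roughly 91 dB SPL
```

This conversion is purely about amplitude; how loud that 91 dB tone is *perceived* to be still depends on its frequency, per the equal-loudness curves.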
{ "language": "en", "url": "https://physics.stackexchange.com/questions/511119", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
In adiabatic expansion does the internal energy of an ideal gas decrease? By the First Law of thermodynamics, for an ideal gas, if there isn't heat transfer, the work done by the gas is equal to the decrease in internal energy of the gas. Suppose that I have a perfectly-insulated syringe closed at one end with a frictionless piston on the other. The syringe initially contains an ideal gas of volume $V$. If I pull the piston outward, the volume of the gas increases. Since I am the one applying the force, work is done by me instead of by the contained gas. So, in this case, does the internal energy of the gas remain constant?
Microscopically, gas molecules impacting on the outward-moving surface get reflected with a lower velocity. Of course, the average molecular velocity is orders of magnitude larger than the velocity of the piston and the difference in energy for each collision is very small. But it all adds up, and the gas is cooling.
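The reflection described above can be written in one line (speeds below are assumed for illustration): transform to the piston frame, flip the velocity, and transform back.

```python
u = 50.0    # piston recession speed (m/s), assumed
v = 500.0   # molecular speed toward the piston (m/s), assumed

# Elastic reflection off a wall receding at speed u:
# in the piston frame the molecule arrives at v - u and leaves at -(v - u);
# back in the lab frame that is 2u - v.
v_reflected = 2 * u - v    # = -(v - 2u): moving away, and slower

assert abs(v_reflected) == v - 2 * u
assert abs(v_reflected) < v    # the molecule's kinetic energy has decreased
```

Each collision thus drains a little kinetic energy from the gas into the piston, which is the microscopic face of the macroscopic cooling.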
{ "language": "en", "url": "https://physics.stackexchange.com/questions/511444", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Concept of Maximum Kinetic Energy What is the concept of the maximum kinetic energy of an electron when a photon is incident on a metal surface? Why can the ejected electron have a range of kinetic energies?
When light of a certain frequency is incident on a metal, photoelectrons may be emitted by the metal. If the light has exactly the threshold energy, the photoelectrons will have zero kinetic energy. If the wavelength is smaller than the threshold wavelength, the photoelectrons will come out of the surface with some kinetic energy. The energy carried by a photon is imparted to an electron; the electron gains some kinetic energy, and it may collide with atoms in the metal, losing some of that kinetic energy to the atoms. If an electron does not collide with any atoms, it comes out of the surface with the maximum kinetic energy possible.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/511597", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Question about description of Gibbs free energy When we were introduced to the Gibbs free energy, it was derived as follows: First law: $dU=dq+dw$ Second law: $dS>dq/T$ for a spontaneous change. Note $dq$ and $dw$ are inexact differentials. Substituting $dq=dU-dw$ into the second law gives us: $TdS>dU-dw$ Using $dw=-P_{ext}dV$: $TdS>dU+P_{ext}dV$ or, $dU+P_{ext}dV - TdS<0$ Now, keeping pressure and temperature constant, we can say that: $dU+P_{ext}dV - TdS<0$ = $d(U+P_{ext}V - TS)<0$ = $dG<0$, where $G$ is the Gibbs free energy. Here is my problem. A few lectures later, when we were being introduced to the idea of chemical potential, the Gibbs free energy was rewritten as a function of pressure and temperature in the following way: $dG=Vdp-SdT$; this expression was derived using the result above. My question is that if pressure and temperature were constant in the above expression, aren't $dp$ and $dT$ always 0? If so, how is this a valid expression for $G$?
I'm going to call a thermodynamic transformation that's both isothermal and isobaric a $TP$ transformation for convenience. Let's say we have an irreversible $TP$ transformation $A$ (with $T=T_0$ & $p=p_0$) that starts from state $1$ and ends at state $2$. $$\text{State 1 : }p_1=p_0 \;|\;T_1=T_0 \;|\;V_1\;|\;S_1\; |\; G_1=U_1+p_1V_1-T_1S_1$$ $$\text{State 2 : }p_2=p_0 \;|\;T_2=T_0 \;|\;V_2\;|\;S_2\;|\;G_2=U_2+p_2V_2-T_2S_2$$ $$\text{For an irreversible isothermal process (Second Law) : }Q_A \leq T_0 (S_2-S_1) \tag{1}$$ $$\text{For an irreversible isobaric process : }W_A=p_0(V_2-V_1)\tag{2}$$ $$\text{First Law of Thermodynamics : }\Delta U = U_2 - U_1 = Q_A - W_A \tag{3}$$ $$\Delta G=G_2-G_1=\Delta U + p_0(V_2-V_1)-T_0(S_2-S_1)=Q_A-W_A+W_A-T_0(S_2-S_1)$$ $$ \Rightarrow \Delta G \leq 0$$ $$\Delta U=\int_{1|\text{process O}}^2(TdS-pdV) \text{ holds iff the process O from state $1$ to state $2$ is reversible} \tag{4}$$ $$\underline{\text{The Subtle Detail}}$$ Given transformation $A$ exists, it is not guaranteed that there must also exist a reversible $TP$ process (with the same $T_0$ and $p_0$) that connects the same initial and final states. However, if there exists a reversible $TP$ transformation $B$ (with the same $T=T_0$ & $p=p_0$) that goes from state $1$ to state $2$, then $$\Delta U \stackrel{\text{Eq. $(4)$}}= \int_{ 1|\text{process B}}^2(T_0 dS - p_0 dV)= T_0(S_2-S_1) - p_0(V_2-V_1) \tag{5}$$ $$\stackrel{\text{Follows from Eq. $(2),(3)$ and $(5)$}}\Rightarrow Q_{\text{over any $PT$ transformation (can be either reversible or irreversible)}}=T_0 (S_2-S_1)$$ $$\Rightarrow \Delta G=0$$
{ "language": "en", "url": "https://physics.stackexchange.com/questions/511775", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Does diffraction depend on refractive index of a medium? Does a diffraction pattern depend on the refractive index of the medium? Does the transmitting medium influence the diffraction phenomenon, or is it caused by the light and the edge alone? Will a diffraction pattern be identical in air and in water?
Yes, diffraction does depend on the refractive index of a medium. The invariant property of a light source is its frequency, and the wavelength this light takes in a given medium will change with the medium's refractive index. Diffraction is a spatial interference phenomenon, which means that the locations where the diffraction pattern's maxima and minima appear are determined by the geometries where a certain number of wavelengths add up (or don't). Thus, in general, working in a medium with refractive index $n$, as far as diffraction goes, is equivalent to working with a wavelength that is $n$ times shorter.
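A small numerical illustration (with hypothetical slit width and wavelength, using the standard single-slit first-minimum condition $\sin\theta = \lambda/a$): in water the wavelength shrinks by the factor $n$, so the first diffraction minimum sits at a smaller angle than in air, by roughly that same factor.

```python
import numpy as np

lam_vac = 500e-9   # vacuum wavelength (m), hypothetical
a = 10e-6          # slit width (m), hypothetical
n_water = 1.33

theta_air = np.arcsin(lam_vac / a)                 # first minimum, n = 1
theta_water = np.arcsin(lam_vac / (n_water * a))   # wavelength / n in water

assert theta_water < theta_air
# in the small-angle regime the whole pattern shrinks by roughly a factor n
assert np.isclose(theta_air / theta_water, n_water, rtol=1e-2)
```
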
{ "language": "en", "url": "https://physics.stackexchange.com/questions/512007", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How does adding a second magnet on the other side of a coil affect induced voltage? From what I understand, if you have a magnet moving with a relative velocity towards a coil, Lenz's law states that the current flow induced in the coil will create a magnetic field that opposes the change that induced it. So it will essentially create a magnet with an opposing field to the magnet moving towards it. In the picture below, disregarding magnet 2, if magnet 1 moves towards the coil it induces a magnetic field in the coil as shown, with the 'north pole' of the coil repelling the north pole of magnet 1. If you only look at magnet 2 now, moving away from the coil as shown, it creates a 'south pole' in the coil that attracts the north pole of magnet 2. If I am correct until this point, would that mean that if the two magnets were in motion together, in the same direction but on opposite sides of the coil, the induced voltage in the coil would be doubled? Is there something that I am missing in this interaction? Thanks in advance for any clarification.
What you are thinking is right. This is a perfect application of the famous superposition theorem: the individual effects of both magnets are added together. In this case, since they are supporting each other, the induced EMFs add together. Consider what will happen when magnet 2 is going in the other direction: the effects of both magnets might cancel out, and no EMF might be induced in the coil!
{ "language": "en", "url": "https://physics.stackexchange.com/questions/512284", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
If I kept a body in a groove on a frictionless circular table, and rotate the table with a constant angular velocity, what will happen to the block? I have read that it would slide off the table, because apparently in the frame of the table it experiences a centrifugal force outwards, but I can't seem to agree with that logic. In the ground frame of reference there is no force in the direction of the groove (let me call it the instantaneous x axis). Centrifugal force is just a pseudo force, right? So how is it that in the frame of the table it experiences an outward force? According to me it is just like keeping a block on a frictionless table: causing the table to move would have no effect on the block, which would eventually fall off the table because it would eventually run out of table. But here the table is only rotating, so there is no chance of that happening. Perhaps it's some deep-rooted misconception about circular motion... please help! EDIT: THE GROOVE IS DIAMETRICAL.
The groove applies a normal reaction on the block, which is the reason that the block rotates along with the table with the same angular velocity as the table. Now, since this normal force is tangential, it causes the tangential speed of the block to increase. Tangential velocity is equal to angular velocity cross radius, and the angular velocity remains constant (the same as the table's); therefore the radius (the distance of the block from the centre of the table) increases, which is manifest as the centrifugal force. The given link may help (pg 127 from the cover, question 13). Hope this bridges the gap in understanding: https://drive.google.com/file/d/1cpuVbLtqdYiT_p5BAaUiWgKWFi2Gty1l/view
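The radial runaway can be sketched numerically (with hypothetical values): in the frame co-rotating with the table the Coriolis force is perpendicular to the diametrical groove and is balanced by the groove's normal reaction, so along the groove only the centrifugal term acts, $\ddot r = \omega^2 r$, and the block's distance from the centre grows like $\cosh(\omega t)$.

```python
import numpy as np

omega = 2.0          # table's angular velocity (rad/s), assumed
r, v = 0.1, 0.0      # initial radial position (m) and radial speed (m/s)
dt, steps = 1e-5, 100000

# semi-implicit Euler integration of r'' = omega^2 * r
for _ in range(steps):
    v += omega**2 * r * dt
    r += v * dt

t = steps * dt   # 1 s of motion
assert np.isclose(r, 0.1 * np.cosh(omega * t), rtol=1e-3)
```

Starting from rest relative to the groove, the block's radial coordinate grows without bound, which is exactly the "sliding off the table" that the question finds puzzling.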
{ "language": "en", "url": "https://physics.stackexchange.com/questions/512652", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
How is many worlds different to basic probabilities? In the many worlds theory the universes branch according to the wave function. We find ourselves travelling down one path of an immense number of these branches of a tree. When we branch, we have no interaction with the other branches. This sounds to me very similar to classical mechanics where we have a probability function which defines the chances of a particular outcome to occur. When we observe it we see that in this universe we got a particular outcome. But there are also nearly an infinite number of other imaginary universes where the other outcomes occurred according to the probability function (similar to the wave function). I guess I am asking what’s the difference between the imaginary universes of non-realised outcomes, and the many worlds branches we never travel down? We have no interaction with them so they are as real as our imagination.
In the many worlds theory the universes branch according to the wave function. I would consider this to be not quite right. Many-worlds is not a theory, it's an interpretation (MWI). For all practical purposes, in essentially every experiment ever done, its predictions are the same as those of, for example, the Copenhagen interpretation (CI). Therefore they are not different theories. MWI also doesn't have to involve the concept of branching. The talk about branching is more of a heuristic or a way it's presented in popularizations. The most austere versions of MWI simply posit the same postulates that everyone agrees on for quantum mechanics, and doesn't add an extra postulate about collapse as in CI. No branching. I guess I am asking what’s the difference between the imaginary universes of non-realised outcomes, and the many worlds branches we never travel down? We have no interaction with them so they are as real as our imagination. (1) Classical probability doesn't allow for interference. Therefore the ways in which we predict probabilities are different in quantum mechanics than in a classical stochastic system. (2) MWI and CI can be viewed as approximations to decoherence. Decoherence has a time-scale on which it occurs. This time-scale is different from anything in classical systems. The dead and live copies of Schrodinger's cat can in principle interfere with each other, even after days have passed -- the interference effects just fall off exponentially, with such a short time-scale, that it becomes impractical to observe them.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/512750", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How do I experimentally measure the surface area of a rock? I hope this is the right place to ask this question. Suppose I found a small irregular shaped rock, and I wish to find the surface area of the rock experimentally. Unlike for volume, where I can simply use Archimedes principle, I cannot think of a way to find the surface area. I would prefer an accuracy to at least one hundredth of the stone size. How can I find the surface area experimentally?
Difficult. Adsorb some chemical, heat it up, measure the amount that evaporates? I would look at the literature, maybe start with a search for "experimental determination of the surface area" in geological contexts. Edit: a molecular probe should give something close to the maximum value. There is an end to the length scale when dealing with real materials; a rock is not a mathematical fractal. After letting in a suitable kind of molecules and pumping them out, thermally stimulated desorption would measure the adsorbing area.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/512834", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "140", "answer_count": 24, "answer_id": 11 }
When the direction of a movement changes, is the object at rest at some time? The question I asked was disputed amongst 17th-century physicists (at least before the invention of calculus). Reference: Spinoza, Principles of Descartes' Philosophy (Part II: Descartes' Physics, Proposition XIX). Here, Spinoza, following Descartes, denies that a body whose direction is changing is at rest for some instant. https://archive.org/details/principlesdescar00spin/page/86 How is it solved by modern physics? If the object is at rest at some instant, one cannot understand how the movement starts again (due to the inertia principle). If the object is not at rest at some instant, it seems necessary that there is some instant at which it goes in both directions (for example, some moment at which a ball bouncing on the ground is both falling and going back up). In which false assumptions does this dilemma originate, according to modern physics?
It boils down to the direction of the applied force. If the force acts exactly opposite to the motion, then the object does indeed come to rest, but it would only remain at rest if the forces were then in equilibrium. When you throw a ball upward, gravity doesn't stop working magically: the net force still points downward and carries the object through the brief instant at which the velocity is zero. In all other cases, the applied force and the existing motion together determine the new path and the speed along it.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/512902", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 4, "answer_id": 3 }
Why coupled oscillators tend to seek integer frequency ratios? In this document, the author writes (page 225) Coupled oscillators have a tendency to seek frequency ratios which can be expressed as rational numbers with small numerators and denominators. For example, Mercury rotates on its axis exactly three times for every two rotations around the sun, so that one Mercurial day lasts two Mercurial years. In a similar way, the orbital times of Jupiter and the minor planet Pallas around the sun are locked in a ratio of 18 to 7 (Gauss calculated in 1812 that this would be true, and observation has confirmed it). This is also why the moon rotates once around its axis for each rotation around the earth so that it always shows us the same face. Is that true? Can we prove mathematically that Coupled oscillators love rational frequency ratios? Oh, it appears that planetary motion is not an oscillator. But anyway, I just want some reference to verify whether this is true, preferably with mathematical derivations.
The motion of two coupled harmonic oscillators is the sum of two "beat" frequency oscillations. The frequencies are functions of the masses and spring constants and can take any value, not necessarily "rational numbers with small numerators and denominators". I don't think this is a correct analogy to orbital resonances. This page gives some mathematics of orbital resonances. It explains how resonances can be stable, but it is not clear to me how planets get into these states - it seems to me that these must be low energy states in some sense, so there must be some dissipative mechanism leading to them. Maybe that is covered in a more advanced course.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/513210", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 1 }
Which approach to take with a vertical spring? Let's say we have a spring hanging vertically with spring constant $k$ attached to a block of mass $m$. The system is at rest. Then, you pull the mass downwards, extending the spring by distance $x$, then let go. The spring will, of course, bounce back to its original spot. What is the velocity of the object at its initial resting location? To solve this, I took two approaches, but I'm not sure which one is right. The first is a work approach. When the block returns to its old location, it is the same except now has the new energy it received from the prior extension, so I can say $W=Fd$, i.e. $W=\frac{1}{2}kx^2$. So $\frac{1}{2}kx^2=\frac{1}{2}mv^2$, therefore $v=\sqrt{\frac{k}{m}}\,|x|$. However, I don't consider gravitational potential, which worries me. If we do so, we can say at the rest location the energy is just $mgh$ where $h$ is $x$. At the bottom, the energy is just $\frac{1}{2}kx^2$, so $mgx+\frac{1}{2}mv^2=\frac{1}{2}kx^2$ and $v=\sqrt{\frac{kx^2-2gmx}{m}}$. Which approach do I take?
Both approaches are actually the same, if you do them correctly. I will address your second case first, writing $y$ for the stretch that your question calls $x$. You are correct to use conservation of energy and say that the potential energy stored in the spring at the lowest point is equal to the sum of the kinetic energy and the potential energy due to gravity at the equilibrium point. So you were correct with the equation $$\frac12ky^2=mgy+\frac12mv^2$$ Let's now look at the first case, but let's do it correctly. We know that the net work done on the mass is equal to its change in kinetic energy: $$W_\text{net}=W_\text{gravity}+W_\text{spring}=\Delta K=\frac12mv^2-0$$ We can easily determine the work done by gravity and the spring force using the definition of work $W=\int\mathbf F\cdot\text d\mathbf y$ $$W_\text{gravity}=\int_{-y}^0(-mg)\,\text dy'=-mgy$$ $$W_\text{spring}=\int_{-y}^0(-ky')\,\text dy'=\frac12ky^2$$ Putting it all together we have $$W_\text{net}=-mgy+\frac12ky^2=\frac12mv^2$$ You can see this is exactly the same as your second case. So both methods you have proposed are exactly the same. The issue with the first case in your question is just as you said: you didn't include the work done by gravity.
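A quick numerical sanity check of the energy equation $\frac12 k y_0^2 = m g y_0 + \frac12 m v^2$ (the values below are illustrative and not from the original post): integrate the equation of motion $m\ddot y = mg - ky$ directly and compare the speed at $y=0$ with the closed form.

```python
import math

# Illustrative values; y is measured downward from the spring's
# natural length, as in the answer's bookkeeping.
k, m, g = 10.0, 1.0, 9.8
y0 = 3.0                 # initial stretch; must exceed 2*m*g/k to reach y = 0

# Closed-form speed at y = 0 from (1/2) k y0^2 = m g y0 + (1/2) m v^2
v_expected = math.sqrt((k * y0**2 - 2 * m * g * y0) / m)

# Independent check: integrate m y'' = m g - k y with velocity-Verlet
dt = 1e-5
y, v = y0, 0.0
a = g - (k / m) * y
while y > 0.0:
    y += v * dt + 0.5 * a * dt**2
    a_new = g - (k / m) * y
    v += 0.5 * (a + a_new) * dt
    a = a_new

v_numeric = abs(v)
print(v_expected, v_numeric)     # agree to a few parts in 1e4
```

The agreement is only limited by the time step and the small overshoot past $y=0$ at loop exit.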
{ "language": "en", "url": "https://physics.stackexchange.com/questions/513352", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Are the boundary conditions purely a consequence of Maxwell's equations? The boundary conditions, namely were all these, realized only by looking at Maxwell's equations? Or is there a physical reasoning behind them? For example, Why does the component of the electric field parallel to the surface of interface remain unaltered? I also read that the reason light bends when it passes through another medium is because only the normal component gets altered and the horizontal component remains the same(whereas the velocity gets altered because of the other electrons in the material that are driven by the source and produce a separate wave with a different phase and the superposition of these two waves seem to alter the speed of light in a medium $^\dagger$). My question in short is, What would be my answer,if someone asked me, to explain the boundary conditions, without equations*. *If that's purely based on equations, please ignore "without equations", but there's got to be something that's physically occurring which led us to create a model, right?. $^\dagger$Is that right?
Consider an electrically polarized continuous medium. In its bulk it is neutral, but a bound charge appears at its boundary. For a magnetically polarized medium, a bound surface current appears. These surface charges and currents are what produce the jumps in the fields: the surface charge makes the normal component of $\mathbf E$ discontinuous, and the surface current makes the tangential component of $\mathbf B$ discontinuous, while the tangential $\mathbf E$ and the normal $\mathbf B$ remain continuous.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/513436", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Where does $\pi$ come from in the Heisenberg equation? In class today we were taught about Heisenberg’s equation, $$\Delta x\Delta p\ge\frac{h}{4\pi}. $$ Experience tells me that any time an equation involves pi, circles aren’t far behind. Obviously this is true in geometry, but even pure number theory equations, such as $\Sigma_{n=1}^{\infty} \frac1{n^2}=\frac{\pi^2}6$, you can always find a way to construct the problem such that circles are involved and the solution, including pi, naturally jumps out. The natural question, then, is: what do circles have to do with Heisenberg? Why is Planck’s constant divided by a multiple of pi, and why specifically $4\pi$?
Very sketchy, but $\Delta x \Delta p$ has units of angular momentum. Angular momentum is quantized (Bohr's condition), which can be interpreted as a standing-wave condition on the "circular" orbit of the electron, $$n \lambda = 2 \pi r,$$ which is where the $\pi$ comes from.
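Another way to see where the $4\pi$ lives: in terms of $\hbar = h/(2\pi)$ the bound reads $\Delta x\,\Delta p \ge \hbar/2$, and a Gaussian wavepacket saturates it exactly. A small numerical sketch (illustrative units with $h=1$, not tied to any physical system):

```python
import numpy as np

h = 1.0                       # work in units where Planck's constant h = 1
hbar = h / (2 * np.pi)
sigma = 0.7                   # arbitrary wavepacket width

x = np.linspace(-20, 20, 200001)
dx = x[1] - x[0]
psi = np.exp(-x**2 / (4 * sigma**2))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)       # normalize

# position spread
mean_x = np.sum(x * np.abs(psi)**2) * dx
delta_x = np.sqrt(np.sum((x - mean_x)**2 * np.abs(psi)**2) * dx)

# momentum spread: <p> = 0 here, and <p^2> = hbar^2 * integral |dpsi/dx|^2
dpsi = np.gradient(psi, x)
delta_p = hbar * np.sqrt(np.sum(np.abs(dpsi)**2) * dx)

print(delta_x * delta_p, h / (4 * np.pi))         # both ~0.0796
```

The product comes out as $\hbar/2 = h/(4\pi)$ regardless of the chosen width $\sigma$.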
{ "language": "en", "url": "https://physics.stackexchange.com/questions/513566", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Mean free path equation derivation I was reading about mean free path equation derivation online and stumbled upon this: We will derive the equation using the following assumptions: let’s assume that the molecule is spherical, and the collision occurs when one molecule hits another, and only the molecule we are going to study will be in motion and the rest of the molecules will be stationary. Let’s consider our single molecule to have a diameter $d$ and all the other molecules to be points. This does not change our criteria for collision, as our single molecule moves through the gas, it sweeps out a short cylinder of cross section area $πd^2$ between successive collisions... which got me confused. Wouldn't the cross section area be equal to $πr^2 = πd^2/{4}$? To be clear, I read that here (under the section 'Derivation of Mean Free Path').
In fact there are two analogous formulas for calculating the collision mean free path by investigating the so-called collision cylinder or tube. They differ in the diameter of the collision cylinder $D$, which can be either $d$ or $2d$ (where $d$ is the molecular diameter), and both are still correct. The confusion arises because they apply to two different cases. * *electron-molecule collisions in a gas discharge. Here both assumptions you stated in your question apply very well: the radius of the electron (the colliding particle) is zero, and the other molecules are at rest, i.e. at zero velocity w.r.t. the fast, tiny electrons. Obviously the diameter of the collision cylinder is $D=d$. *molecule-molecule (or atom-atom) collisions in neutral gases. Here the previous assumptions fail: we have to consider a collision cylinder of diameter $D=2d$, and we must correct for the erroneous assumption that all the other molecules are at rest by a factor of $1/\sqrt{2}$.
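To see the practical difference between the two conventions, here is a rough numerical sketch for air at room conditions (the effective molecular diameter is an assumed round number, so the result is only order-of-magnitude):

```python
import math

k_B = 1.380649e-23       # Boltzmann constant, J/K
T = 300.0                # K
P = 101325.0             # Pa
d = 3.7e-10              # m, assumed effective diameter of an "air molecule"

n = P / (k_B * T)        # number density from the ideal-gas law

# molecule-molecule case: cylinder diameter D = 2d (area pi d^2),
# with the 1/sqrt(2) correction for the motion of the target molecules
lam = 1.0 / (math.sqrt(2) * math.pi * d**2 * n)

# point-projectile case (fast electron through the same gas):
# cylinder diameter D = d (area pi d^2 / 4), no sqrt(2) correction
lam_point = 1.0 / (math.pi * (d / 2)**2 * n)

print(lam)               # roughly 7e-8 m, the usual ~70 nm quoted for air
print(lam_point)         # 4*sqrt(2) times longer
```

The ratio between the two conventions is exactly $4\sqrt 2$, which is why it matters to pick the right one for the right physical situation.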
{ "language": "en", "url": "https://physics.stackexchange.com/questions/513777", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Is running a gas stove more efficient than turning on an in-building radiator? It's cold in your studio apartment and you have two options: * *Turn on the four burners and the oven on your gas stove; or, *Crank open the two small radiators. Both deliver heat directly to the space of your apartment and we can ignore, for the moment at least, the impact the location of each source might have on how efficiently it heats the space. Is one method more efficient than the other? It seems to me that, if so, the difference will come in the difference in efficiency we might find between the building's boiler and the apartment's stove. For example, while your stove might extract as heat only 75% of the available energy of the gas the building boiler extracts 90%. If they have the same efficiency, it would seem, we would have minimal to no difference, as then we're in the domain simply of energy exchange from one medium to another. Is this the right way to think about this question?
Assuming you have a gas stove that is not externally vented, and a gas central heating system, the stove will probably be more thermally efficient. I would not recommend using your stove, as it releases dangerous carbon monoxide and other waste gases into your room, and is not intended for extended use without ventilation. That said, with the stove all of the heat from the burned gas goes into the room. With central heat, the burned gas heats a medium that is circulated to transfer heat. Then the burned gas, still containing some of the heat, is vented to the outside. WARNING: if you use gas heat (especially unvented stoves) it is a good idea to have carbon monoxide alarms and smoke alarms.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/514102", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Can the work by kinetic friction on an object be zero? We know that friction is of two types - static and kinetic. Static friction acts when there is no relative motion between the surfaces in contact. Kinetic friction takes place when surfaces rub against each other. I was wondering whether the work done by the kinetic friction can be positive, negative or zero. * *Positive work - When an object is placed on a rapidly moving belt, it moves along with the belt but with slipping (relative motion between the surfaces exist) when there is no enough friction to prevent slipping. Here the work done by the kinetic friction is positive, as the direction of frictional force and the displacement is same. *Negative work - Work done by kinetic friction, when an object moving on a rough surface slows down, is negative as the direction of friction and displacement are opposite to each other. I'm unable to think of any circumstances when the work done by kinetic friction is zero because of the following reasons: * *Work done on an object is zero if displacement is zero. In our case, if displacement is zero, the frictional force acting on the object is static and not kinetic in nature. *Work done is also zero when the force and displacement are perpendicular to each other. The only example I am aware of is circular motion. As the point at which the wheel touches the ground is at rest. The nature of friction is again static. So, can the work by kinetic friction on an object be zero? Please note: I read the answers for the following two related questions. There is no clear explanation on the two aspects of friction (static and kinetic) in those answers. Simply they don't have enough details. * *Work done by Friction. Can it be positive or zero? *Positive work done by friction
Yes, the work done by kinetic friction can be zero, because work is frame-dependent. For example, consider a block slipping on the ground: in the ground frame, the work done by kinetic friction on the block is negative. But now observe the block in its own rest frame: there the work done by each and every force is zero, since the displacement of the block with respect to itself is zero.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/514234", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 1 }
How does the speed of light affect our cosmic observations/detections? -I learned that if you look at a star 10 million light-years away, the light that you are looking at is 10 million years old, since the photons have been traveling at the speed of light for 10 million years to reach you. -I also learned that all EM radiation travels at the speed of light. -I know that there is nothing faster than light, but objects that move slower than light redshift and blueshift on the visible spectrum depending on which direction they are moving with respect to the observer. I'm asking this question because, if I'm putting the pieces together correctly, the stuff we observe "now" really all happened tens, hundreds, thousands, millions, and billions of years ago. Correct? Would that mean that gamma-ray bursts, X-rays, radio waves (all limited by the speed of light) are all events that happened in the past, and that their EM radiation takes time to reach us in every situation?
Yes: the speed of light is slow on cosmic scales, and everything you see happened in the past.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/514325", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
The boundary conditions of an electromagnetic wave hitting a surface When I try to solve the Fresnel equations, I don't understand the condition E(i)+E(r)=E(t), where E(i) is the incident wave, E(r) is the reflected wave and E(t) is the transmitted wave. So the question is how it can be that E(t) equals the sum of both fields, while the intuitive equation would be E(i)=E(r)+E(t)...
The boundary condition here is derived from the Faraday-Maxwell law, and says that the component of the electric field parallel to the boundary is continuous. That is, the electric field parallel to the boundary is the same either side of the boundary. Since solutions of Maxwell's equations can superpose, that means if there are multiple electric fields on one side of the boundary (in this case, the components of the fields due to the incident and reflected waves), then these must be added, in a vectorial way, and their sum must be the same either side of the boundary. Your intuitive approach is perhaps confusing electric field with energy. Energy is conserved at the boundary, but this says that the flux in equals the flux out $$\vec{N}_i\cdot d\vec{s} = -\vec{N}_r\cdot d\vec{s} + \vec{N}_t\cdot d\vec{s},$$ where the terms represent the Poynting vectors integrated over the beam area, and the minus sign before the reflected term is because the Poynting vector of the reflected wave and $d\vec{s}$ are in opposite directions.
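As a concrete illustration (normal incidence only, non-magnetic media, with arbitrarily chosen indices): the field condition is $1+r=t$, not $1=r+t$, and energy is still conserved because the energy flux goes as $n|E|^2$, not as $E$.

```python
n1, n2 = 1.0, 1.5    # e.g. air to glass, normal incidence

# Fresnel amplitude coefficients at normal incidence
r = (n1 - n2) / (n1 + n2)   # reflected field amplitude / incident
t = 2 * n1 / (n1 + n2)      # transmitted field amplitude / incident

# Field boundary condition: E_i + E_r = E_t  (per unit incident field)
field_balance = 1 + r - t

# Energy boundary condition: R + T = 1
R = r**2
T = (n2 / n1) * t**2        # index factor because intensity ~ n |E|^2
energy_balance = R + T

print(field_balance, energy_balance)   # ~0 and ~1 up to rounding
```

So the "missing" energy intuition is resolved by the index factor in the transmitted intensity, exactly as the Poynting-vector argument says.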
{ "language": "en", "url": "https://physics.stackexchange.com/questions/514446", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Will August always be summer in the northern hemisphere? Is the Earth's orbit precessing, or are there other effects which will create a shift between our calendar (day counting) and the Earth's orbit? I imagine these effects to be small, but I'm asking about long timescales. [Edit] To formulate my question better, let me be more precise. Assume that our calendar never changes, meaning we keep counting days always in the same way (seconds and days defined by an atomic clock, 365 days = 1 year, the usual leap years, etc.), and consider what will happen in, I don't know, 100k-1M years? Or, if this timescale is wrong, what should it be for us to see a shift between the seasons and the months we are used to?
The Gregorian calendar, the calendar most widely used, adds a leap day every 4 years, and skips 3 of those leap days every 400 years (century years not divisible by 400). This gives it an average year of 365.2425 days. It has been proposed to omit another leap day every 4000 years to keep the Gregorian calendar even closer to the astronomical mean tropical year, currently 365.24219 days.
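The leap-year arithmetic can be checked directly by counting days over one full 400-year cycle (a trivial sketch):

```python
def is_leap(y):
    # Gregorian rule: every 4th year, except centuries not divisible by 400
    return y % 4 == 0 and (y % 100 != 0 or y % 400 == 0)

days_400 = sum(366 if is_leap(y) else 365 for y in range(1, 401))
avg_year = days_400 / 400
print(days_400, avg_year)        # 146097 days -> 365.2425

# with the proposed extra skipped leap day every 4000 years:
print(avg_year - 1 / 4000)       # ~365.24225
```

That residual difference from 365.24219 days is what drives the slow drift of the calendar against the seasons over very long timescales.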
{ "language": "en", "url": "https://physics.stackexchange.com/questions/514567", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
What happens to the accretion disk when two black holes merge? I'm aware that accretion disks around black holes are formed from the swirling mass of matter that is slowly being stripped of its atoms, but what happens to it when two black holes merge? I was thinking maybe it gets ejected or sucked in completely. Not sure why though. Specifically, what happens to the nearby objects: objects that are stationary and objects in motion ($v = nc,$ where $0.6<n<1$)?
If there are two BHs moving around, there's a big chance that there's no accretion disk to start with, or that it is far away from the binary. Once both BHs have merged, a far-away accretion disk would still be there, without much change. Or, depending on the initial conditions (an accretion disk around each BH, for example), the matter in the disks could be partly absorbed by the BHs during the coalescence process, and partly diffused away. Some of it may still end up orbiting the final BH. All scenarios are possible; it just depends on the initial conditions.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/514755", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Ground → ship Wi-Fi bandwidth in my fast moving spaceship I have a hypothetical question that hopefully makes sense considering only the rudimentary amount of knowledge I have on relativity. Assume I am in a spaceship orbiting around a spherical satellite, at a certain fixed radius. The satellite sends data to the ship via radio (something like satellite internet or other long-range data radio, using electromagnetic signals). Would the download speed (received data rate seen on-board ship) be faster or slower, compared to when I am stationary with respect to the satellite? Also consider that attenuation is negligible. What I initially thought was that time dilation would make "my" clock in the spaceship tick slower, hence the signal from the "satellite" would be received faster. I thought I should add this to clarify my train of thought that led to the question. Please do note that it is a hypothetical question. (A comment pointed out that a spherically symmetric EM wave is impossible, therefore I have taken down the diagram and the statement regarding so.)
Basically your question is the same as the one Michelson and Morley asked. Their experiment used the Earth traveling around the Sun instead of a spaceship. Their results did not show a difference in the speed of light either way.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/514883", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 5, "answer_id": 4 }
Status of Space-Time Many physicists conjecture that space-time is not fundamental. Is this the orthodox view in physics these days? Follow ups - If a philosopher argues that space-time is reducible, are any physicists likely to argue? Are there many or any theories (for instance versions of string theory) that actually require a fundamental space-time? This not asking how space-time can be emergent. The question is asking whether the view that space-time is emergent is considered orthodox, or to what extent it is endorsed by physicists.
Since I am unaware of the level of physics you are familiar with, I shall try to give an answer for the general audience. Earlier, when Einstein presented the idea of space-time, it was assumed, and indeed critical to his theory of general relativity, that space-time is a continuous background on which every event in the universe takes place. But later on, to preserve the principles of quantum mechanics (mostly the uncertainty principle), it was argued that space-time needs to have a discrete/discontinuous structure. One of the "emerging" ideas in physics, loop quantum gravity, tries to account for this by introducing uncertainty into space-time itself, so to speak. Hence, the classical idea of continuous space-time is, well, old. Its status as a fundamental background on which events occur is also being challenged.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/515049", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Finding potential using spherical harmonics I have been trying to solve the following question: The potential on the surface of a sphere is given by $\mathbf {V = V_{0} \sin^2\theta \sin2\phi,\;}$ find the potential outside the sphere. I am trying to solve it by separation of variables in spherical coordinates, using the following formula for the potential outside the sphere, $$V=\sum_{l=0}^\infty\frac{B_{lm}}{r^{l+1}} {Y_l}^m (\theta,\phi)$$ Now the potential on the surface of the sphere is given, so we can use that at r=R as, $$\tag{1}V_{0}\sin^2\theta\sin 2\phi=\sum_{l=0}^\infty\frac{B_{lm}}{R^{l+1}} {Y_l}^m (\theta,\phi)$$ Next, for the value of $B_{lm}$ I multiply both sides with ${Y^*_l}^m$ and integrate. The RHS becomes $\frac{B_{lm}}{R^{l+1}}$ while the LHS becomes interesting. I note that $\sin^2\theta$ $\sin 2\phi$ can be converted into $Y_2^2$ with some constant factor. $Y_2^2$ is given as follows: $$ Y_2^2= A \sin^2\theta\ e^{im\phi}$$ So my problem is, can I somehow convert this into $Y_2^2$ so that it simply gives me the left hand side of equation ${(1)}?\;$ I see that $\sin 2\phi$ is the imaginary part of $e^{im\phi}$ with $m=2$. Please guide me through this.
You already noticed $\sin 2\phi$ is the imaginary part of $e^{2i\phi}$. Another way to say this is $$\sin 2\phi=\frac{i}{2}\left(-e^{2i\phi}+e^{-2i\phi}\right)$$ From the table of spherical harmonics you have: $$Y_2^{+2}(\theta,\phi)=\frac{1}{4}\sqrt{\frac{15}{2\pi}}\sin^2\theta \ e^{2i\phi}$$ $$Y_2^{-2}(\theta,\phi)=\frac{1}{4}\sqrt{\frac{15}{2\pi}}\sin^2\theta \ e^{-2i\phi}$$ Putting this together you find (even without doing any integral): $$\sin^2\theta\ \sin 2 \phi= 2i\sqrt{\frac{2\pi}{15}} \left(-Y_2^{+2}(\theta,\phi) + Y_2^{-2}(\theta,\phi)\right)$$ You see, one spherical harmonic was not enough. You needed two of them.
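You can check the final identity numerically, using the explicit table expressions quoted above and a batch of random angles (numpy only):

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.uniform(0, np.pi, 1000)      # polar angle
phi = rng.uniform(0, 2 * np.pi, 1000)    # azimuthal angle

# explicit Y_2^{+2}, Y_2^{-2} as quoted from the table above
c = 0.25 * np.sqrt(15 / (2 * np.pi))
Y2p2 = c * np.sin(theta)**2 * np.exp(+2j * phi)
Y2m2 = c * np.sin(theta)**2 * np.exp(-2j * phi)

lhs = np.sin(theta)**2 * np.sin(2 * phi)
rhs = 2j * np.sqrt(2 * np.pi / 15) * (-Y2p2 + Y2m2)

err = np.max(np.abs(lhs - rhs))
print(err)                               # ~1e-16: the identity holds pointwise
```

The imaginary parts cancel exactly between the $m=+2$ and $m=-2$ terms, which is why the real boundary potential needs both.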
{ "language": "en", "url": "https://physics.stackexchange.com/questions/515221", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why does a photon have spin 1? Are we taking the photon spin to be one to describe the electromagnetic force, or is there any equation (is it the relativistic Schrodinger or Dirac equation?) with a solution that tells us that its value is one?
The question $\textit{why it has spin 1}$ is inappropriate. Particles, by definition, are embedded into irreducible representations of the Poincaré group, i.e., a field. Fields with distinct Lorentz representations have distinct phenomenology, and so we must $\textbf{choose}$ the representation of the field in order to describe the correct phenomenology of the particle. The photon is a particular case of this; it is a boson with two degrees of freedom (two independent polarizations) which is its own antiparticle. In particular, the circular polarization of the photon is characteristic of massless spin-1 particles, since other spins like 0 or 2 have different polarization patterns. So, without going too deep into the theory, it is phenomenologically unavoidable to have a spin-1 photon.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/515319", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
How can the mechanism of electrons in an atom be explained? I am a high school student who takes both Physics and Chemistry. Recently I learnt about the quantum mechanical point of view of looking at electrons or nuclei. I also learnt that the wave functions can be obtained by solving the Schrodinger's equation with various conditions specific to the problem (such as the particle in a box). My shallow understanding of quantum mechanics is that we can only know the probability of an electron existing at a certain position and time, and the actual position can be determined when the 'observation' takes place. The chemical bondings and chemical reactions are the results of electric interactions between nuclei and electrons. The Coulomb force is a function of the distance between two charges, so it is important that the exact locations of electrons should be known. But taking into consideration quantum mechanics, we don't even know where the electrons are, and we built up a subject called Chemistry, and most importantly, CHEMISTRY STILL WORKS VERY WELL. So, what is going on?
Chemistry and classical mechanics work well because they deal with statistical behavior involving many atoms and molecules, rather than individual particles. Due to the Law of Large Numbers, the overall behavior corresponds very closely to the probabilities calculated using quantum mechanics. So in many cases quantum behavior can be ignored, and classical models (like the planetary diagram of electrons orbiting around the nucleus) can be used -- it's an approximation (like all models), but it's normally close enough. You generally only have to worry about quantum mechanical effects when you're dealing with very small numbers of particles. For instance, engineers designing microelectronic circuits have to deal with this, because they create "wires" that are just a few molecules thick.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/515438", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 4 }
Is the Casimir energy in CFT an observable? We know that if we transform a 2d conformal field theory from a plane to a cylinder with perimeter $L$, the ground state energy will be shifted by $$E = -\frac{c}{24L}$$ due to the Schwarzian derivative term in the transformation of stress energy tensor. This energy is the difference of a theory on a cylinder and the same theory on a plane. How can we compare the ground state energy of two theories on different spacetime? Therefore I would like to know is this energy a physical observable? And if not, why is it important?
Of course, the free energy on the cylinder is not a measurable observable if you're given the theory on the infinite plane. But one can measure other observables which are proportional to the central charge, such as the two-point function of the stress-energy tensor. There are situations where that expression is an observable. If you have a one-dimensional quantum system with periodic boundary conditions that flows to a (1+1)-dimensional CFT, then its ground state energy will generically be given by the formula $$ E = E_1 L + E_0 - \frac{\pi v c}{6L} + \cdots, $$ where the omitted terms fall off faster than $1/L$. (See below about the mismatch between our expressions.) Here, $E_1$, $E_0$, and $v$ are non-universal constants ($v$ is the velocity of excitations at low energy, usually called the "speed of light" in a field theory textbook). Then it is possible to "measure" the central charge term. For example, say you do some Monte-Carlo simulations to obtain the velocity $v$ of excitations, and then numerically calculate the ground state energy for several (large) values of $L$ and match it to the above equation. This lets you determine $c$. In practice, it is much easier to extract the central charge from the entanglement entropy. In particular, for an open one-dimensional quantum system, the entropy associated with tracing out half of the system is $S = (c/6) \log L$. As a side-note, I think that what you are calling $L$ is really the radius of the cylinder, which is related to the perimeter by a factor of $2 \pi$. Finally, you are only considering the holomorphic sector, and above I'm everywhere considering also the antiholomorphic sector with an identical central charge. So that's why my expression is off by $4 \pi$ compared to yours.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/515570", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Minimum time to cover distance with variable acceleration I have a problem to solve and I got stuck. The question is: Given a vehicle of mass $m$ that can move at variable speed, but with a maximum acceleration of $a_1$ and a minimum deceleration of $a_2$, calculate the minimum time to cover distance $\ell$. The starting and ending speed should be zero. I need some pointers or a link, please and thank you.
We accelerate and then, when there is no choice anymore, we decelerate. Indeed, if your available deceleration were infinite, you would be able to brake completely brutally at the very end, and that would be optimal; with a finite deceleration you tend toward that limit. The idea for proving it is to work with the distance covered, because that is simpler to do. Write $v_M$ for the maximal velocity reached, $a_M$ for the maximal acceleration and $a_m$ for the minimal deceleration. Decelerating from $v_M$ takes $t_d=v_M/a_m$ over a distance $l_d=\int_0^{t_d}(v_M-a_m t)\,dt=v_M t_d-a_m t_d^2/2=v_M^2/(2a_m)$. During the acceleration phase, $v(t)=\int_0^t a_M\,dt'=a_M t$ and $x(t)=a_M t^2/2$. At $t_a$ we reach the maximal velocity, so $v_M=a_M t_a$ and $l_a=a_M t_a^2/2$. Then $$L=l_a+l_d=\frac{a_M t_a^2}{2}+\frac{(a_M t_a)^2}{2a_m}=t_a^2\left(\frac{a_M}{2}+\frac{a_M^2}{2a_m}\right).$$ Now you can finish: solve this for $t_a$, which gives you $l_a$ and $v_M=a_M t_a$; from $v_M$ you get $t_d$, and then $T=t_a+t_d$. This is a sketch rather than a rigorous optimality proof, but it is how to tackle the problem.
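Carrying the algebra through, the total time collapses to $T=\sqrt{2\ell\,(1/a_1+1/a_2)}$. Here is a small sketch checking that closed form against the step-by-step construction (variable names are mine, and no speed cap is assumed):

```python
import math

def min_time(L, a_acc, a_dec):
    """Bang-bang profile: accelerate at a_acc, then brake at a_dec,
    starting and ending at rest over a distance L."""
    t_a = math.sqrt(2 * L / (a_acc * (1 + a_acc / a_dec)))  # from L above
    v_max = a_acc * t_a
    t_d = v_max / a_dec
    return t_a + t_d

# compare with the closed form T = sqrt(2 L (1/a1 + 1/a2))
L, a1, a2 = 100.0, 3.0, 2.0
print(min_time(L, a1, a2), math.sqrt(2 * L * (1 / a1 + 1 / a2)))

# symmetric sanity check: a1 = a2 = a gives T = 2*sqrt(L/a)
print(min_time(1.0, 1.0, 1.0))   # 2.0
```

The symmetric case is easy to verify by hand: accelerating at $1\,\mathrm{m/s^2}$ for 1 s covers 0.5 m, braking covers the other 0.5 m, so 1 m takes 2 s.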
{ "language": "en", "url": "https://physics.stackexchange.com/questions/515723", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why is torque a cross product? If I'm not wrong, torque is perpendicular to both the radius and the force, i.e., it is along the axis of rotation. Questions that arise are: why do we consider the length between the axis/point of rotation and the point where the force is applied while calculating torque? More importantly, why is torque a cross product?
Torque is defined as $\quad\vec{\tau}=\frac{d\vec{J}}{dt}$ where $\vec{J}$ is the angular momentum of the object. The angular momentum is defined as $\vec{J}=\vec{r}\times \vec{P}$. Then $$ \vec{\tau}=\frac{d\vec{J}}{dt}=\frac{d(\vec{r}\times \vec{P})}{dt}=\frac{d\vec{r}}{dt}\times\vec{P}+\vec{r}\times\frac{d\vec{P}}{dt} $$ but $$ \frac{d\vec{r}}{dt}\times\vec{P}=\frac{d\vec{r}}{dt}\times m\vec{v}=\frac{d\vec{r}}{dt}\times m\frac{d\vec{r}}{dt}=0 $$ so $$ \vec{\tau}=\vec{r}\times\frac{d\vec{P}}{dt}=\vec{r}\times\vec{F} $$ which is the answer to the question.
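The derivation above can be checked numerically (a sketch; the trajectory and mass are arbitrary choices). Because $\frac{d\vec{r}}{dt}\times\vec{P}=0$, a finite-difference derivative of $\vec{J}=\vec{r}\times\vec{P}$ should match $\vec{r}\times\vec{F}$ with $\vec{F}=m\vec{a}$:

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

m = 2.0
r   = lambda t: (math.cos(t), math.sin(t), t)        # sample trajectory
vel = lambda t: (-math.sin(t), math.cos(t), 1.0)     # dr/dt
acc = lambda t: (-math.cos(t), -math.sin(t), 0.0)    # d^2r/dt^2

def J(t):
    # angular momentum r x p, with p = m*v
    return cross(r(t), tuple(m * c for c in vel(t)))

t, h = 0.7, 1e-6
dJdt = tuple((p - q) / (2 * h) for p, q in zip(J(t + h), J(t - h)))  # central difference
torque = cross(r(t), tuple(m * c for c in acc(t)))                   # r x F
```

The two tuples agree to numerical precision, which is exactly the statement $\vec{\tau}=d\vec{J}/dt=\vec{r}\times\vec{F}$.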
{ "language": "en", "url": "https://physics.stackexchange.com/questions/516011", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 7, "answer_id": 1 }
Why can't 2 electrons be in the same quantum state when they are far apart? I understand that when 2 electrons are confined to a very small volume of space, slightly bigger than their de Broglie wavelength, one of the pair must jiggle with increased momentum due to the Pauli exclusion principle. But looking at G. Smith's comment in my earlier question, why can't 2 electrons separated by a vast distance of space share the same quantum state? It doesn't make sense to me, unless the electrons are bound to an atom; then each of them must go to a different energy level, since even the 2 electrons sharing the lowest energy state must have different spin states.
Electrons, in general, don't necessarily have well-defined positions, so the idea that they are "separated with a vast distance of space" is nebulous, at best. I'm going to assume that when you say 2 electrons separated with a vast distance of space you mean 2 electrons whose wavefunctions are well-localized (i.e. strongly peaked in position space) and for which the peaks of the wavefunctions are separated by a large distance. As you can plainly see from this characterization, the wavefunctions of the two electrons are very different, since they peak in different places. The wavefunction is part of the electron's quantum state.* This means that, since the electrons have different position wavefunctions, they are in different quantum states. *There are other ways to define the quantum state that don't directly reference position, such as giving a momentum-space wavefunction or a decomposition into eigenstates of a particular potential, but the same principle applies - if whatever you use to represent the quantum state is different for one electron than for another, they're in different quantum states.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/516520", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Is my visualization of a photon correct? When trying to visualize a photon, I imagine it as an electromagnetic wave of very short length. Is this accurate?
The visualization of a photon is not really part of physics, because physics is a science and as such it only makes statements about things that can be tested by observations. The existence of photons is inferred from the interactions of light with matter. We can therefore say that photons are involved when such interactions take place, i.e., when light is emitted or absorbed. If we entertain the question: how does light produce these quantized phenomena?, then we can think about light in terms of photons even when it does not interact with matter. (But we cannot confirm this view scientifically.) Quantum mechanics does not tell us how the photons behave individually. All it tells us is the probability to observe a photon given certain experimental conditions. One can derive the probability distribution for the observation of a photon from the electromagnetic field. However, the photon itself would then have to be a dimensionless point particle. (Let me stress again, this view is strictly non-scientific.) Just to remove a misconception that is being spread on PhysicsSE: an optical field is NOT the coherent superposition of lots of photons. In quantum mechanics the state of a photon can be represented by a Fock state $|\psi_n^{(1)}\rangle$, where $\psi_n$ represents the wave function of the photon and the superscript indicates that it is a single excitation (one photon) of the wave function. One can form the coherent superposition of many different single-photon states like this: $$ |\phi\rangle = \sum_n |\psi_n^{(1)}\rangle C_n , $$ such that $$ \sum_n |C_n|^2 = 1 . $$ What is $|\phi\rangle$ then? Is it the state of a multi-photon optical field? No! It is still a single-photon state. All that happened is that its wave function is now the coherent superposition of all the wave functions of those single-photon states.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/516862", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Work Done by a time-variable Force My problem gives a time-dependent force as follows: lets say that the force is fairly simple, $F=6t$ lets say that we want to find the work done in the 1st second. Here's my approach: $W=F(t).v$ So, for a small interval where force can be considered constant, $dW=F.dv$ using the kinematical relations: $v+dv=v+(6t/m)*dt$ $dv=6tdt$ (for m=1kg) therefore, $=>dw=6t*6tdt$ $=>dw=36t^2dt$ so we can integrate using any limits on time I guess? But in my case, this doesn't work (wrong answer) idk why, it seems correct to me lol For anyone who wants the answer, its $4.5J$ (at least according to my textbook)
You are confusing work and power. Because of the pioneering work (no pun intended !!!) of James Watt, the unit of power is called the Watt and denoted by $W$. This should not be considered as the first letter of "work" in the physical meaning of the word. I think this may be the cause of your confusion. You are supposed to compute the work. Work is the integral in time of power.
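To make the distinction concrete with the numbers from the question (a sketch, taking $m=1\,\mathrm{kg}$ and starting from rest): integrating $F/m$ gives $v(t)=3t^2$, and work is the time integral of the power $P=Fv$, so $W=\int_0^1 6t\cdot 3t^2\,dt=\int_0^1 18t^3\,dt=4.5\ \mathrm{J}$, matching the textbook. A numerical integration confirms this:

```python
F = lambda t: 6.0 * t        # given force (m = 1 kg)
v = lambda t: 3.0 * t**2     # v(t) = integral of F/m from 0, starting at rest

# Work = integral of power P = F*v over the first second (midpoint rule)
N = 100_000
dt = 1.0 / N
W = sum(F((i + 0.5) * dt) * v((i + 0.5) * dt) * dt for i in range(N))
```

The asker's expression $dW = F\,dv$ is not even dimensionally a work; the correct infinitesimal is $dW = P\,dt = Fv\,dt$.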
{ "language": "en", "url": "https://physics.stackexchange.com/questions/517031", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 0 }
Phase transition on magnetic materials Is the ferromagnetic-to-paramagnetic phase transition a reversible process? If I start with a ferromagnetic material with a spontaneous magnetization below the Curie temperature, and then I start to heat it, it will become paramagnetic above the critical temperature. If I then slowly drop the temperature back below the Curie temperature, will I recover the ferromagnetic behaviour with the same spontaneous magnetization as before?
Let me rephrase your question slightly to make it clearer what (I think) you are asking. Suppose we start with a ferromagnetic material above the Curie point and we cool it through the Curie point in the absence of any external influence, e.g. no externally applied magnetic field. And suppose we repeat this experiment many times. Will the final state of the material always be the same? The answer is that without any external field the total magnetic field will be zero. This is because the magnetic domains formed as we pass through the Curie point will be randomly oriented and their total magnetic field will sum to zero. In this I agree with Pieter. But it is only the total field that is the same. If we watched an individual spin in the material as we cycled it through the Curie point, it would not have the same orientation each time. And unless there was some controlling factor, like defects in the solid, the magnetic domains would not be the same each time. In any system we have random thermal fluctuations, and in a ferromagnet above the Curie point there will be random thermal fluctuations in the alignment of the dipoles. As we cool towards the Curie temperature these fluctuations will get larger and larger, and at some point magnetic domains will nucleate and start to grow. These give rise to the domains we observe in the material at low temperature. But the nucleation process is random, and hence the final pattern of the domains will be random. So while the overall field is always zero after the cooling, the microstructure will not be.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/517113", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 2 }
Why is the divergence of the field zero in Maxwell's equations? I read in a book called Vector Analysis by Murray R. Spiegel from Schaum's Series, and I found a statement that the divergence of the electric field is zero. My teacher told me that divergence means something which originates from a point and meets another point, simply source and sink. And I know that an electric field originates from a point charge, and in a dipole its sink is the negative charge, so why is the divergence of the field said to be zero in Maxwell's equations?
When ${\bf \nabla} \cdot {\bf E}$ is introduced in Vector Analysis by Murray R. Spiegel, it is stated explicitly that it is proportional to the charge density and therefore it is zero only if the charge density is zero. I guess that you may have been misled by the solved problem n. 19 of Chapter 4, where it is shown that $$ {\bf \nabla} \cdot \left( \frac{{\bf r}}{r^3}\right)=0. $$ I.e. that the divergence of a Coulomb-like field would be zero. In that case, you have to be careful. The equality holds at the points where the function is differentiable, i.e. everywhere but the origin ($\bf r = 0$). At the origin, that vector function is singular and its divergence can be evaluated, within distribution theory, only as a generalized Dirac delta function $\delta({\bf r})$. For more details, have a look at this Q&A on Math.SE.
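The away-from-the-origin statement can be verified numerically (a sketch, using central finite differences on $\vec r/r^3$ at an arbitrary point away from the singularity):

```python
def field(x, y, z):
    # the Coulomb-like field r / r^3
    r3 = (x*x + y*y + z*z) ** 1.5
    return (x / r3, y / r3, z / r3)

def divergence(x, y, z, h=1e-5):
    # central finite differences of each component along its own axis
    dFx = (field(x + h, y, z)[0] - field(x - h, y, z)[0]) / (2 * h)
    dFy = (field(x, y + h, z)[1] - field(x, y - h, z)[1]) / (2 * h)
    dFz = (field(x, y, z + h)[2] - field(x, y, z - h)[2]) / (2 * h)
    return dFx + dFy + dFz

div_away = divergence(1.0, 2.0, 3.0)   # far from the singular point at the origin
```

The result vanishes to numerical precision, while no finite-difference stencil can capture the delta-function contribution sitting exactly at the origin.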
{ "language": "en", "url": "https://physics.stackexchange.com/questions/517243", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
What happens when a weapon breaks on impact? Here is the situation: You are attacking someone with a wooden pole (such as a pole arm or tree branch). You either (1) hit as hard as you can and the pole breaks into two pieces on impact OR (2) hit quite hard but the pole remains in tact. Assume that you are hitting the same spot with the same angle and everything, so the only difference is how much force was applied. Which would cause more damage? My gut instinct is that a weapon that breaks would transfer less of the impact to the person it hits, thereby causing less damage. However my boyfriend pointed out that if you are hitting someone hard enough that the pole breaks, you have used the maximum amount of force that the pole can withstand, so the most force you can transfer with a single blow has been done. [Edit: I don't have an education in physics beyond high school, but it looks like this question has never been answered on this site before. That could be because I didn't know the physics terms to search for though! Any pointers in the right direction are appreciated.]
You don't want to break the weapon; you want to break another object with the weapon. It's a game-theoretic thing: you don't want to lose your weapon partway through a fight :D. Physically, though, the blow that breaks the pole is harder on the enemy than one where the pole stays intact, since it delivered the most force the pole could transmit.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/517580", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
Confusing definition of proper time – which is correct? I have googled for „definition of proper time“ This source https://www.collinsdictionary.com/dictionary/english/proper-time gives the following definition: * *proper time ... measured by a clock that has the same motion as the observer. Any clock in motion relative to the observer ... will not, according to the theory of relativity, measure proper time. However, according to this answer Confusing time dilation - proper time is higher? „In other words, it is the time registered by a clock that is carried from one event to the other“ exactly the moving clock measures proper time interval and this time interval is the shortest due to time dilation. Is definition in Collins Dictionary wrong? Please help resolve this contradiction.
The definitions are both trying to say the same thing, but they are not quite managing to avoid all scope for misunderstanding. For a non-technical appreciation of the meaning of proper time you should start with the principle that proper time is the time experienced at any point in one's own reference frame. As you sit at your desk marvelling at the clarity of my answer you are experiencing proper time in the reference frame in which you are stationary. Any clocks that are stationary relative to you will record time at the same rate you experience it. Anybody moving relative to you will experience their own proper time, which will be faithfully recorded by any clocks moving with them (ie clocks that are stationary in that person's reference frame). The Collins definition was insufficiently precise. It should have said that any clock moving with respect to an observer will not measure proper time in the observer's frame of reference. If a clock is moved between two other clocks that are stationary relative to each other, the time it records is a proper time for that clock's frame of reference, and it will be shorter than the time that appears to have elapsed according to the stationary clocks.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/517662", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Near-field energy transfer If you absorb energy in the near field of the antenna, it will produce a loading effect on the source. Whereas in the far field it will not. Is there an intuitive explanation why this is true for one type of the field and not the other? Can the common electrical transformer be thought of as two antennas in each others near field?
You can think of two antennas in each other's near field as two halves of an air-core transformer. As such, they will load each other in ways that don't happen in the far field. This principle can be used to couple RF power to an antenna in a manner that prevents induced currents from other nearby antennas from propagating in the antenna leads. Instead of antenna leads that travel directly from the transmitter to the antenna, the antenna leads are broken and a pair of dipole antennas are inserted there, less than a wavelength apart. The near-field coupling is almost as good as the unbroken case, and the presence of a physical break in the antenna line interrupts the induced currents.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/517826", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Lagrangian for a forced system Suppose that for a non-forced system Lagrange's equations are \begin{equation*} \left\{ \begin{array}{l} m\ddot{x}+\left( k_{1}+k_{2}\right) x-k_{2}y+2c_{1}\dot{x}=0 \\ m\ddot{y}-k_{2}x+\left( k_{2}+k_{3}\right) y+2c_{2}\dot{y}=0.% \end{array}% \right. \end{equation*} But if the system is subject to external forces, say $F_{x}, $ $F_{y}$, which would be the Lagrangian in this case? Can we add $F_{x},$ $F_{y}$ in the right-hand sides?
The short answer is yes: when the system is not conservative because of dissipation or driving, one must include generalized forces on the right hand side of the usual EL equation: \begin{align} \frac{d}{dt}\frac{\partial L}{\partial \dot q_k}-\frac{\partial L}{\partial q_k}={\cal F}_k\, , \end{align} where ${\cal F}_k$ is the generalized force on the (generalized) coordinate $k$. You have already included damping so you need to include the driving term, again “by hand”. In the simplest example of a harmonic force on a 1d system, the equations of motion would then be of the form \begin{align} \ddot{x}+\frac{\omega_0}{Q}\dot{x}+\omega_0^2 x = A\cos(\omega t) \end{align} where for a force $F_0\cos(\omega t)$ and $A=F_0/m$.
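As a concrete illustration (a sketch; all parameter values are made up), one can integrate this driven, damped equation of motion numerically and check that the late-time amplitude matches the standard steady-state result $|x| = A/\sqrt{(\omega_0^2-\omega^2)^2 + (\omega_0\,\omega/Q)^2}$:

```python
import math

w0, Q, A, w = 1.0, 2.0, 1.0, 1.5    # illustrative parameters

def deriv(t, x, v):
    # x'' + (w0/Q) x' + w0^2 x = A cos(w t)
    return v, A * math.cos(w * t) - (w0 / Q) * v - w0 * w0 * x

# RK4 integration until the transient (decay time ~ 2Q/w0) has died out
x, v, t, dt = 0.0, 0.0, 0.0, 0.001
amp = 0.0
while t < 80.0:
    k1x, k1v = deriv(t, x, v)
    k2x, k2v = deriv(t + dt/2, x + dt/2*k1x, v + dt/2*k1v)
    k3x, k3v = deriv(t + dt/2, x + dt/2*k2x, v + dt/2*k2v)
    k4x, k4v = deriv(t + dt, x + dt*k3x, v + dt*k3v)
    x += dt/6 * (k1x + 2*k2x + 2*k3x + k4x)
    v += dt/6 * (k1v + 2*k2v + 2*k3v + k4v)
    t += dt
    if t > 70.0:                     # measure the amplitude in steady state
        amp = max(amp, abs(x))

analytic = A / math.sqrt((w0**2 - w**2)**2 + (w0 * w / Q)**2)
```

The agreement confirms that adding the generalized force "by hand" to the right-hand side reproduces the familiar driven-oscillator physics.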
{ "language": "en", "url": "https://physics.stackexchange.com/questions/518045", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Observed speed of a receding light source Let’s say there’s a planet 4 light years away from Earth and we send a rocket ship towards that planet at 99.9% light speed. We stay behind on Earth and watch the rocket ship travel towards the other planet. Eventually we should be able to see our rocket ship reach it’s destination. How much time will have elapsed for us until we see that occur? My intuition would say about 4 years. But I also know that when we observe such a far-away planet, we are ”seeing it as it was 4 years ago”. Well 4 years ago the rocket was still on Earth, so how can I be seeing it landing on the planet now? Something has to give, but what? Will it appear as if the trip took 8 years to complete?
It all depends on what the meaning of the word "appear" is. In about eight years, you'll see the ship land, and you'll say "Ah. I see the ship landing at a place four light years away, so it must have landed four years ago". Does "appear" refer to what you see, or to the meaning you attribute to what you see after you've made the necessary corrections? If the first, the ship appears to land in eight years. If the second, it appears to land in four. An analogy: Suppose I stand really really far away from you. Would you say that I "appear" to have shrunk to half my height? By one meaning of the word "appear", the answer is clearly yes. By another, no, I don't appear to have shrunk, I just appear to be really far away, but to still be six feet tall.
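For the numbers in the question, the bookkeeping is simple arithmetic (a sketch): in Earth's frame the ship takes $4/0.999 \approx 4.004$ years to arrive, and the light from the landing needs 4 more years to come back, so you watch the landing about 8 years after launch:

```python
L = 4.0          # distance to the planet, in light-years
beta = 0.999     # ship speed in units of c

t_arrive = L / beta        # Earth-frame travel time of the ship, in years
t_seen = t_arrive + L      # add the light travel time back to Earth
```

So by the first meaning of "appear" the trip looks like ~8 years; by the second, after correcting for light travel time, you conclude it took ~4 years.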
{ "language": "en", "url": "https://physics.stackexchange.com/questions/518147", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
What are real world applications of the Duistermaat–Heckman formula? In the famous 1984 paper "The Moment Map and Equivariant Cohomology" by Atiyah and Bott, an equivariant de Rham theory was presented in relation to the Duistermaat–Heckman formula $$ \int_M e^{-itf} \frac{\omega^n}{n!} = \sum_p \frac{e^{-itf(p)}}{(it)^n \text{e}(p)},$$ where $(M,\omega)$ is a symplectic manifold and $f: M \to \mathbb{R}$ is the moment map on $M$ coming from a circle action. This sparked my curiosity in the physical applications of equivariant cohomology (i.e., equivariant de Rham theory), especially the above formula. What are real world applications of the Duistermaat–Heckman formula? By real world applications, I mean physical applications in classical mechanics, statistical mechanics, quantum mechanics, or quantum field theory, but probably not string theory. I would appreciate if someone could provide real world applications of the Duistermaat–Heckman formula.
Here are some applications that might qualify for your criterion of 'real world': Yasui, Y., & Ogura, W. (1996). Vortex filament in a three-manifold and the Duistermaat-Heckman formula. Physics Letters A, 210(4-5), 258-266. Karki, T., & Niemi, A. J. (1994). On the Duistermaat-Heckman Integration Formula and Integrable Models. arXiv preprint hep-th/9402041. Bismut, J. M. (2011). Duistermaat–Heckman formulas and index theory. In Geometric Aspects of Analysis and Mechanics (pp. 1-55). Birkhäuser Boston. Zhang, L., Jiang, Y., & Wu, J. (2019). Duistermaat–Heckman measure and the mixture of quantum states. Journal of Physics A: Mathematical and Theoretical, 52(49), 495203.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/518321", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Do colors differ in terms of speed? Here is a very simple question about light. As far as I remember from the school program, each color is merely one of the frequencies of light. I also remember that each color's wave length is different. On the other hand, when talking about the speed of light, I've always heard only one value. Why is it so? Shouldn't it be like the red color's speed must be way higher (or lower) than, say, the purple color's speed? I am quite confused here. (Sorry if my question is too foolish, but it has bugged me for years and I was quite bad at physics at school and have never touched it since I finished school)
No, they are related by the formula $$c_0 = f \cdot \lambda$$ with speed of light in vacuum $c_0$, frequency $f$ and wavelength $\lambda$. A change in frequency demands an inversely proportional change in wavelength and vice versa, since the speed is constant. It is not possible to change the frequency and leave the wavelength constant. This is quite intuitive, because the frequency counts how often a wave peak passes in a given time, and of course that depends on the wavelength.
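A quick numerical check (a sketch; the wavelengths are the usual textbook values for red and violet light) that both colors travel at the same speed, just with different frequency-wavelength combinations:

```python
c0 = 299_792_458.0             # speed of light in vacuum, m/s

lambda_red = 700e-9            # ~700 nm
lambda_violet = 400e-9         # ~400 nm

f_red = c0 / lambda_red        # ~4.3e14 Hz: longer wavelength, lower frequency
f_violet = c0 / lambda_violet  # ~7.5e14 Hz: shorter wavelength, higher frequency

# Each product f * lambda recovers the same c0
speed_red = f_red * lambda_red
speed_violet = f_violet * lambda_violet
```
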
{ "language": "en", "url": "https://physics.stackexchange.com/questions/518511", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 5, "answer_id": 2 }
Light beam vs sound beam Why is it that it's very common to have beams of light but not beams of sound? Laser beams are widely available, and I am aware that it is also possible to direct sound, however, we rarely see examples of it. Is it more difficult to direct due to longer wavelength or is it more dispersive in air or something?
Wave beams require a transverse cross-section with a length of the same order of magnitude as the wavelength. Whereas for light we can get very tiny and focused beams (of $\mu m$ order), for sound the wavelength (of centimeter or meter order) means you cannot get beams so focused. Hence the utility of such beams, to either transmit information or focus energy, is quite limited. I am aware of such a device for crowd-control use (https://en.wikipedia.org/wiki/Long_Range_Acoustic_Device).
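The scale mismatch is easy to quantify (a sketch, using round textbook numbers): wavelength is speed divided by frequency, so audible sound in air spans centimeters to meters, while visible light is sub-micron.

```python
v_sound = 343.0        # speed of sound in air, m/s
c_light = 3.0e8        # speed of light, m/s (rounded)

# wavelength = speed / frequency
lam_1kHz = v_sound / 1_000.0       # ~0.34 m: a 1 kHz tone
lam_20kHz = v_sound / 20_000.0     # ~17 mm: upper edge of human hearing
lam_green = c_light / 6.0e14       # ~0.5 micrometer: green light
```

So even the shortest audible wavelengths are tens of thousands of times longer than optical ones, which is why a laser-like sound beam needs a very large emitter.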
{ "language": "en", "url": "https://physics.stackexchange.com/questions/518646", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 2 }
Moving charge in different frames of reference Imagine we have a uniform magnetic field, $\mathbf{B}$, and a single electron moving normal to it. The electron will produce a magnetic field of its own which interacts with $\mathbf{B}$, and so the electron experiences a force. This is perfectly fine, but what troubles me is when we switch perspectives. If we are moving with the electron, then to us the electron is stationary, so it produces no magnetic field and hence there is no interaction with $\mathbf{B}$, making it experience no force. How can this be possible? Clearly there should be something that I am missing allowing for a force to be exerted, but all we see is a stationary electron in a magnetic field, and it will somehow experience a force out of nowhere. What's going on?
Electric and magnetic fields are in effect different views of a single electromagnetic field. That is, if we have an electromagnetic field then different observers moving at different velocities will see the electromagnetic field as different combinations of an electric field and a magnetic field. And it is this that answers your question. We lab observers see a stationary magnetic field. However to the moving electron the same electromagnetic field appears as a combination of a magnetic field and an electric field. It is the electric field that appears in the electron's rest frame that exerts the force on the electron and makes it move in the trajectory observed in the lab.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/518789", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Pressure needed to compress iron to double the density? What is the pressure needed to compress iron to twice its density? (with "its density" meaning solid iron at room temperature and atmospheric pressure, reference 7.874 g/cm³ from Google. )
It is difficult to calculate such properties accurately. The standard technique is density functional theory, which is as much an art of approximation as science. I am not qualified to speak to experimental techniques involving sudden compression, but they are of obvious interest to designers of nuclear weapons. Geophysical models are informative because they can to some extent be validated by seismology. Even the huge pressure at the center of the Earth is not quite enough to double the density. The density there is estimated at 13.1 g/cm³, the pressure 364 GPa, and the bulk modulus 1425 GPa. A fit to data over the limited range of densities (only 2.5%) and pressures (10%) in the inner core suggests that the bulk modulus varies as $K\propto P^{0.60}$. Using $dP=K\ d(\ln \rho )$ to extrapolate 20% in density, you might estimate $P$ = 810 GPa when $\rho$ = 15.75 g/cm³. Caveat: The extrapolation is clearly far out on a limb and shouldn't be trusted. Naive power-law extrapolations are shaky. $P\sim \rho^{x}$ would obviously break down at low pressure.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/519170", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Physical interpretation for wave function in infinite square well If you look at the wave function of a particle in infinite square problem for some specific energy level, say for n =1, then the probability of particle to be found in middle of the well is higher than at any other point. Similarly for higher energy levels, there are points called nodes where the particle can't be found. What is the physical interpretation for this? Why are some points more probable than others?
I will not go deep into the nature of reality; instead I will try to convince you why it has to look this way from a logical point of view: -The potential is infinite at the edges of the well. Therefore, we have to agree that the probability to find the particle there is zero. These are the nodes of the wavefunction squared. -We also have to agree that the particle exists somewhere in the well, so the wavefunction cannot just be zero everywhere inside. -Probabilities cannot be negative, so if the wavefunction squared is to be non-zero within the well, it has to be positive. It also has to be smooth, since it wouldn't make sense for the probability to change abruptly from one point to another: there is nothing special about any point that would lead to this. -The whole system is symmetric with respect to a vertical line in the centre of the well. There is nothing special about either side. -If we put all of that together, we end up with a single smooth bump that vanishes at the walls and peaks at the centre of the well (the ground-state probability density). If you want to then understand why sometimes nodes appear within the well, you need to learn some quantum mechanics. Or simply read about standing waves, and then realise the wavefunction of the electron within the well is just a standing wave with the constraint of having nodes at the limits of the well.
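These features can be verified directly from the standard infinite-well wavefunctions $\psi_n(x)=\sqrt{2/L}\,\sin(n\pi x/L)$ (a sketch, taking $L=1$): the $n=1$ probability density peaks at the centre, while $n=2$ has a node exactly there, and each density still integrates to 1.

```python
import math

L = 1.0

def prob_density(n, x):
    """|psi_n(x)|^2 for the infinite square well of width L."""
    psi = math.sqrt(2.0 / L) * math.sin(n * math.pi * x / L)
    return psi * psi

p1_centre = prob_density(1, L / 2)   # maximum of the n=1 density: 2/L
p2_centre = prob_density(2, L / 2)   # node of the n=2 density: 0

# Normalization check for n=2 (midpoint rule): the node does not remove probability
N = 10_000
norm2 = sum(prob_density(2, (i + 0.5) * L / N) * L / N for i in range(N))
```
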
{ "language": "en", "url": "https://physics.stackexchange.com/questions/519580", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
How often do positrons appear in a typical Wilson cloud chamber? I would like to know a rough estimate of how often positrons appear per meter cubed per second in a typical Wilson cloud chamber based off of your experience. I'm interested in the same quantity for the other typically observed particles as well. Also on a more theoretical note if you know of reasons behind there proportions that would be cool too. But in general I'm just wondering how often positrons end up running into you body while you're walking around on earth.
A Wilson chamber is an interesting way of seeing the cosmic rays which continually bombard the earth, as well as any radioactive material placed close to it, but it is not good at collecting data in an organized enough way to answer your question: But in general I'm just wondering how often positrons end up running into your body while you're walking around on earth There is a large amount of data on cosmic rays. Positrons come from secondary interactions of these high-energy particles, either of galactic or intergalactic origin, or from scatterings in the atmosphere, and even from scatterings in the Wilson chamber itself. They are a secondary component, important mainly for studying the origin of cosmic rays; for example, they contribute very little to the flux. In the Particle Data Group review one finds the cosmic-ray fluxes, a secondary part of which will be any positrons that have been generated from the primaries (fig 29.2). If you are worried about radiation due to positrons, it will be a very small component of the general background radiation.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/520021", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Distance between laser diode and photodetector I have a laser diode with a wide divergent elliptical beam (angles: 57°-13°). If I want to use a light detector with 50mm diameter, how can I estimate the distance at which I can put the LD from the detector to have all the LD beam inside the detector? I do not know the beam waist.
I do not know the beam waist. The beam waist is almost certainly not important. The reason the divergence is at such a high angle is that the waist diameter is very small. Likely less than a micron in the "out-of-plane" direction, leading to the 57° divergence in this direction. If you just assume the waist is a point, and do the basic trigonometry for the 57° divergence, you'll get very close to the right answer. But remember that 57° is not the angle of the cone that contains all the emitted power. It's probably the angle at which the emission intensity has fallen by half from the center of the beam. To get all of the emitted power, you'll want to make your detector diameter 2 or 3 times the diameter predicted by the 57° cone angle.
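Putting numbers to this reasoning (a sketch; the 2.5× margin and the treatment of 57° as the full half-maximum divergence angle follow the discussion above, and a point-like waist is assumed): the cone diameter at distance $d$ is $2d\tan(57°/2)$, and requiring the margin times that to fit on the 50 mm detector bounds the distance.

```python
import math

detector_diameter = 50.0      # mm
full_angle_deg = 57.0         # the wide (fast) axis dominates the geometry
margin = 2.5                  # detector should be 2-3x the half-maximum cone diameter

half_angle = math.radians(full_angle_deg / 2.0)

# Treating the waist as a point source, the beam diameter at distance d (mm)
# is 2*d*tan(half_angle); require margin * beam_diameter <= detector_diameter.
d_max = detector_diameter / (margin * 2.0 * math.tan(half_angle))   # ~18 mm
```

So with these assumptions the diode should sit within roughly 18 mm of the 50 mm detector to capture essentially all of the emitted power.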
{ "language": "en", "url": "https://physics.stackexchange.com/questions/520202", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Justification for Loop de Loop minimum speed I was trying to figure out the minimum speed an object would have to travel on a loop not to lose contact with the loop. Setting the centripetal force equal to gravity $m\frac{v^2}{r} = mg$ gives $v = \sqrt{gr}$ that explanation is valid and makes sense to me but I was wondering why a conservation of energy approach wasn't. Entering the loop with speed $v$ and setting Kinetic energy equal to gravitational potential $0.5mv^2 = mgR$ gives $v = \sqrt{2gr}$ which obviously is not the same. Why is this explanation not correct?
What you have done is, you have taken the initial kinetic energy as $\frac{1}{2}mv^2$, then you have taken the change in potential energy to be $2mgR$ at the topmost point not$mgR$, that was your first mistake. Even then you are having zero velocity at the topmost point, which means the body has no velocity, and hence would just fall down instead of completing the loop.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/520382", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Why are 8 images formed for an object kept symmetrically between two mirrors at an angle of 50°? Let us consider two mirrors $M_1$ and $M_2$ kept at $50^\circ$ with each other. An object $O$ is kept symmetrically between the mirrors, making an angle of $25^\circ$ with each. Now the number of images is given by the formula: $$n = \frac{360}{\theta}.$$ If $n$ is odd, the number of images is $n$ for an asymmetrically placed object and $n-1$ for a symmetrically placed object. If $n$ is even, the number of images is $n-1$ for all positions of the object. Applying the formula to this case we get $$n=\frac{360^\circ}{50^\circ} = 7.2,$$ ignoring the decimal part. As the object is symmetrically placed, the number of images becomes $n-1 = 6$. But the ray diagram I have drawn shows otherwise: here 8 images are formed. So which one should I follow, the ray diagram or the formula?
The formula $n = \frac{360}{\theta}$ has on one side an integer and on the other a continuous variable. This should give you pause. For correctness, we may either restrict $\theta$ to precisely those values which give integral $360/\theta$ or modify the formula. So, for $50^\circ$, the formula doesn't hold. Truncating the decimal isn't something the formula can do, and so it can't be expected to yield a right answer. Anyways, for the symmetric case, the general formula would be $$n=2\left\lfloor m\right\rfloor+ \lceil\{m\}\rceil\left(1+\lceil\{m-1/2\}\rceil\right),$$ where $m=\frac{\pi-\phi}{2\phi}$, $\theta=2\phi$, and $\{\cdot\}$ denotes the fractional part. This, indeed, gives $n=8$ for $\theta = 50^\circ$ (i.e. $\phi=25^\circ$). For $\theta=\frac{2 \pi}{k}$, $k$ integer, $n\left(\frac{2 \pi}{k}\right)=k-1$, as expected. Extending continuous variables to discrete cases can sometimes be tricky. The first term in the formula counts the ordinary "above the mirror" reflections. The factor $\lceil\{m\}\rceil$ terminates the formula in case the reflection falls on the mirrors. The "$1+\ldots$" accounts for those last two reflections just below the mirrors. The last factor, $\lceil\{m-1/2\}\rceil$, corrects for $\theta=\frac\pi2,\frac\pi3,\frac\pi4,\frac\pi5,\dots$, when those two last reflections coincide. Note that $\lceil\{x\}\rceil = 0$ for $x$ a positive integer, and $1$ otherwise.
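The formula above is easy to check numerically; here is a small sketch (exact fractions are used to dodge floating-point trouble at the special angles where the fractional parts vanish):

```python
import math
from fractions import Fraction

def num_images_symmetric(theta_deg):
    """Image count for an object placed symmetrically between two plane
    mirrors at angle theta (degrees), per the formula in the answer."""
    phi = Fraction(theta_deg) / 2            # half-angle, in degrees
    m = (180 - phi) / (2 * phi)              # equals (pi - phi)/(2*phi), exactly
    frac = lambda x: x - math.floor(x)       # fractional part
    term = math.ceil(frac(m)) * (1 + math.ceil(frac(m - Fraction(1, 2))))
    return 2 * math.floor(m) + term

print(num_images_symmetric(50))   # 8, matching the ray diagram
print(num_images_symmetric(90))   # 3, i.e. k - 1 for theta = 360/4
```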
{ "language": "en", "url": "https://physics.stackexchange.com/questions/520527", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Applications and limitations of the Hamilton-Jacobi formalism It was my understanding that the Hamiltonian formalism was inadequate to describe systems that are invariant under time reparametrization or that have gauge symmetries. However, I see in Classical Dynamics by Jorge V. José and Eugene J. Saletan, that both a relativistic particle and a particle under an electromagnetic potential are described using the Hamilton-Jacobi formalism, dealing the right equations of motion. I wonder why does this work: Are there systems that can be treated with by Hamilton-Jacobi formalism but yield false results when treated by Hamilton? Is there a way to adequately treat systems with the mentioned invariances through Hamiltonian mechanics? If so, are their Hamiltonians always of the form $H=T+V$?
* *Hamiltonian formalism also works for gauge systems, although one has to introduce constraints (and possibly the corresponding Lagrange multipliers), see e.g. Ref. 1. For the relativistic point particle, see e.g. this Phys.SE post. *In fact the Hamilton-Jacobi equation is derived from the Hamiltonian formalism, not the other way around. *The Hamiltonian $H$ is not always of the form $T+U$. See e.g. this Phys.SE post for the corresponding Lagrangian question. References: * *M. Henneaux & C. Teitelboim, Quantization of Gauge Systems, 1994.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/520776", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Quantum operator calculations We define the quantum operator $$ P^\mu=\int{\frac{d^3p}{(2\pi)^3}}p^\mu a_p^\dagger a_p $$ Now how can I calculate $$ \langle p_2|P^\mu|p_1\rangle~? $$ My attempt: $$ \langle p_2|P^\mu|p_1\rangle =\int{\frac{d^3p}{(2\pi)^3}}\langle 0|a_{p_2} p^\mu a_p^\dagger a_p a_{p_1}^\dagger|0\rangle. $$ Now we know that $\langle0|a_p a_q^\dagger|0\rangle =\delta^{(3)}(p-q)$ but I'm not quite sure how it works with multiple states in the bra-ket.
Use the canonical commutation relation $[a_p, a^{\dagger}_q] = (2\pi)^3\delta^3(p-q)$: We have that $a_{p_2}p^{\mu}a^{\dagger}_pa_pa^{\dagger}_{p_1} = p^{\mu}a_{p_2}a^{\dagger}_pa^{\dagger}_{p_1}a_p + p^{\mu}a_{p_2}a^{\dagger}_p[a_p, a^{\dagger}_{p_1}]$. The first term is ignored, because when considering $\langle 0|p^{\mu}a_{p_2}a^{\dagger}_pa^{\dagger}_{p_1}a_p|0\rangle$ we have an annihilation operator hitting the vacuum state, so this term in the integrand must vanish. Plugging in the commutator, your integral is equal to $\int\frac{d^3p}{(2\pi)^3}p^{\mu}(2\pi)^3\delta^3(p-p_1)\langle 0|a_{p_2}a^{\dagger}_p|0\rangle$. You can now integrate out the delta function: $p_1^{\mu}\langle 0|a_{p_2}a^{\dagger}_{p_1}|0\rangle$. Repeating the same trick with commuting the operators and using $\langle 0|0\rangle=1$, we have $p_1^{\mu}(2\pi)^3\delta^3(p_1-p_2)$, pretty much as expected. The much easier route is to recognize that the states $|p_1\rangle$ are eigenstates of the operator $P^\mu$ with eigenvalue $p_1^\mu$. Using this, $\langle p_2|P^\mu|p_1\rangle = p_1^\mu\langle p_2|p_1\rangle$, and using the normalization of singly excited states, this equals $p_1^\mu(2\pi)^3\delta^3(p_1-p_2)$.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/521011", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How to justify symmetry in a torus for the calculation of $\mathbf H$? The typical textbook example of finding $\mathbf H$ in a torus filled with a material with magnetic permeability $\mu_0$ (of course we don't need this, that's to find $\mathbf B$ later) always starts like this: "The symmetry of the problem suggests that the field lines of $\mathbf H$ inside the torus are circles centered in the torus' axis and the magnitude depends only on the distance $r$ from the axis" and then they just take $\mathbf H$ out of the integral along a circle in Ampere's law to find $\mathbf H$. I find this argument is not detailed enough to convince me. How can I justify that the other components of $\mathbf H$ are zero (radial and vertical, if using cylindrical coordinates)?
The Biot-Savart law $$ \vec H = \frac{I}{4 \pi}\int_C \frac {d\vec l \wedge \vec r'}{\vert \vec r' \vert^3} $$ (with $I$ the current in the loop) tells us that the contribution to $\vec H$ of a current element $d \vec l$ is perpendicular both to $d \vec l$ and to the vector linking the point of measurement of $\vec H$ to the current element $d\vec l$. A toroidal coil can be approximated by a succession of circular current loops. The field measured in the torus in the plane of the current loop (winding) must be perpendicular to the plane of the winding, i.e. parallel to the axis of the torus. If the core has a high $\mu$, the boundary condition for the magnetic induction, i.e. the continuity of the normal component of $\vec B$ across a boundary, will make sure that the field lines stay axially oriented and do not leak out of the torus.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/521148", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Time period of a simple pendulum in an accelerated frame Suppose I have a simple pendulum oscillating in an accelerated frame then my textbook says that the time period of the pendulum is no longer given by: $$ T = 2\pi\sqrt{\frac{L}{g}} $$ but by: $$ T = 2\pi\sqrt{\frac{L}{a_{eff}}} $$ where $a_{eff}$ is the magnitude of vector sum of the acceleration due to gravity, $g$ and acceleration of the frame $a$. Can anyone explain why it is so?
In the frame of the cart a pseudo-force acts towards the left. Notice that this is the new equilibrium position of the bob. Now, let us turn the axis of the drawing so as to make it easier for us to understand. Here $g_{eff} = \sqrt{a^2+g^2}$. Now, we take a small angular displacement $\theta$ and analyse the motion. We get: $$\tau = mg_{eff}\,l\sin\theta.$$ But $\theta \ll 1$, so $$\tau = mg_{eff}\,l\,\theta.$$ So we get $$C = mg_{eff}\,l \quad\text{and}\quad I = ml^2.$$ Finally, for the time period, $$T = 2\pi \sqrt{\frac{I}{C}} = 2\pi \sqrt{\frac{ml^2}{mg_{eff}\,l}} \implies T = 2\pi \sqrt{\frac{l}{g_{eff}}} = 2\pi \sqrt{\frac{l}{\sqrt{g^2+a^2}}}.$$ Hope this helps!
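The result can be checked with a few lines; a minimal sketch (SI values, `a` is the cart's horizontal acceleration, and `a = 0` recovers the usual formula):

```python
import math

def pendulum_period(L, g=9.81, a=0.0):
    """Period of a simple pendulum of length L in a frame with
    horizontal acceleration a (a=0 recovers the usual formula)."""
    g_eff = math.hypot(g, a)           # |vector sum of g and a|
    return 2.0 * math.pi * math.sqrt(L / g_eff)

print(pendulum_period(1.0))            # ~2.01 s, the familiar result
print(pendulum_period(1.0, a=9.81))    # shorter period: g_eff = g*sqrt(2)
```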
{ "language": "en", "url": "https://physics.stackexchange.com/questions/521229", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
A doubt in trigonometric approximation used in the derivation of mirror formula The following text is from Concepts of Physics by Dr. H.C.Verma, from chapter "Geometrical Optics", page 387, topic "Relation between $u$,$v$ and $R$ for Spherical Mirrors": If the point $A$ is close to $P$, the angles $\alpha$,$\beta$ and $\gamma$ are small and we can write $$\alpha\approx\frac{AP}{PO},\ \beta=\frac{AP}{PC}\ \ \ \text{and} \ \ \gamma \approx\frac{AP}{PI}$$ As $C$ is the centre of curvature, the equation for $\beta$ is exact whereas the remaining two are approximate. The terms on the R.H.S. of the equations for the angles $\alpha$,$\beta$ and $\gamma$, are the tangents of the respective angles. We know that, when the angle $\theta$ is small, then $\tan\theta\approx\theta$. In the above case, this can be imagined as, when the angle becomes smaller, $AP$ becomes more and more perpendicular to the principal axis. And thus the formula for the tangent could be used. But, how can this approximation result in a better accuracy for $\beta$ when compared to $\alpha$ and $\gamma$? I don't understand the reasoning behind the statement: "As $C$ is the centre of curvature, the equation for $\beta$ is exact whereas the remaining two are approximate." I can see the author has used "$=$" instead of "$\approx$" for $\beta$ and he supports this with that statement. But why is this so? Shouldn't the expression for $\beta$ be also an approximation over equality? Is the equation and the following statement really correct?
I believe the author is assuming that you recognize that AP is an arc by the context of the image. The main point here then is that C is the center of curvature of the mirror while O and I are not. As C is the center, AC = PC, and the usual arc length formula can be applied exactly with angle $\beta$ even if $\beta$ isn't assumed small. On the other hand, O and I are not centers of curvatures. Thus, OA $\neq$ PO and AI $\neq$ PI, so the arc length formula does not apply exactly. However, as you note, AP becomes more tangent to the horizontal axis as the angles become smaller. For small angles, arc AP is approximately a straight line perpendicular to the horizontal axis and the angles are approximately the tangents of the angles, so you can apply the tangent definition to relate AP to PO and AP to PI by the angles in this case.
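The quality of the small-angle replacement $\tan\theta\approx\theta$ used for $\alpha$ and $\gamma$ can also be seen numerically; the relative error falls off like $\theta^2/3$:

```python
import math

# relative error of replacing tan(theta) by theta, for shrinking angles (radians)
for theta in (0.5, 0.1, 0.01):
    rel = abs(math.tan(theta) - theta) / theta
    print(f"theta = {theta:5} rad: relative error = {rel:.1e}")
```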
{ "language": "en", "url": "https://physics.stackexchange.com/questions/521447", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Why does light spread out? So we know the light that's emitted from a torch (flashlight) must be moving in straight lines, so why does it spread out when moving? Why does it cover larger area?
So we know the light that's emitted from a torch (flashlight) must be moving in straight lines, so why does it spread out when moving? Why does it cover larger area? The reason for such spreading out lies in the idea of "Diffraction". You can read more on diffraction at this resource. The following are some useful points for a quick understanding. The roots of why Diffraction happens lies in "Heisenberg's Uncertainty Principle", which you can understand at the following two resources. * *Heisenberg's Uncertainty Principle Explained by Veritasium. https://www.youtube.com/watch?v=a8FTr2qMutA *Heisenberg's Uncertainty Principle in action! by Dr. Walter Lewin. https://www.youtube.com/watch?v=0FGo8mi-5w4 Following is a quick summary of Diffraction on the basis of Heisenberg's Uncertainty Principle. Heisenberg's uncertainty principle tells us that it is impossible to simultaneously measure the position and momentum of a particle with infinite precision. In our everyday lives we virtually never come up against this limit, hence why it seems peculiar. In this experiment a laser is shone through a narrow slit onto a screen. As the slit is made narrower, the spot on the screen also becomes narrower. But at a certain point, the spot starts becoming wider. This is because the photons of light have been so localised at the slit that their horizontal momentum must become less well defined in order to satisfy Heisenberg's uncertainty principle.
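A rough single-slit estimate illustrates the trade-off described above: the angle of the first diffraction minimum, $\sin\theta = \lambda/a$, grows as the aperture $a$ shrinks (the wavelength and slit widths below are illustrative):

```python
import math

wavelength = 633e-9                      # HeNe laser wavelength (m); illustrative
for a in (1e-3, 1e-4, 1e-5):             # slit widths (m)
    theta = math.degrees(math.asin(wavelength / a))   # first single-slit minimum
    print(f"slit {a*1e6:8.1f} um -> first minimum at {theta:.3f} deg")
```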
{ "language": "en", "url": "https://physics.stackexchange.com/questions/521967", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Energy conservation in motional emf If a rod enters a region of uniform magnetic field, a potential difference arises between the ends of the rod. The work required to create this potential difference comes from the magnetic field. If the work done by the magnetic field increases the potential energy of the rod, then would the kinetic energy possessed by the rod decrease?
The electric potential energy due to the separation of charges along the rod comes from one of two places. If an external force acts on the rod to keep it moving at constant velocity, that force supplies the energy. Otherwise, the energy comes from a decrease in the kinetic energy of the rod: while the potential difference is being set up, the magnetic force on the moving charges does negative work on the rod.
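A toy power balance for the standard rod-on-rails setup makes the first case concrete (all values illustrative; the rod is kept at constant velocity by an external agent, so the agent's mechanical power exactly covers the electrical power):

```python
B, L, v, R = 0.5, 0.2, 3.0, 10.0   # field (T), rod length (m), speed (m/s), resistance (ohm)
emf = B * L * v                     # motional emf
I = emf / R                         # induced current
F = B * I * L                       # magnetic force opposing the motion
P_mech = F * v                      # power supplied by the external agent
P_elec = emf * I                    # electrical power dissipated in R
print(f"{P_mech:.6f} W  {P_elec:.6f} W")   # equal: 0.009000 W each
```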
{ "language": "en", "url": "https://physics.stackexchange.com/questions/522077", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What are $a$ and $a^*$ called in the context of a classical harmonic oscillator? Consider a harmonic oscillator defined by the coupled differential equations \begin{align} \begin{split} \dot{X} &= \omega Y \\ \dot{Y} &= - \omega X \, . \end{split} \tag{1} \end{align} Defining new variables $a = X + i Y$ and $a^* = X - i Y$, produces a new uncoupled system of equations \begin{align} \begin{split} \dot{a} &= - i \omega \, a \\ \dot{a}^* &= i \omega \, a^* \, . \end{split} \tag{2} \end{align} In classical physics [1] (or just in the mathematical context of this transformation used to solve a pair of coupled differential equations) what are the variables a and $a^*$ called? [1]: In the context of quantum mechanics, the variables $a$ an $a^*$ would in fact be operators and would be called the "raising" and "lowering" operators.
I would call them normal modes, which are by definition the degrees of freedom of a system that oscillate at a single frequency. Beyond terminology there is a whole body of classical theory behind this term making it useful, for example for more complex oscillators or continuous oscillating fields.
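A small numerical check that the complex combination really decouples the system: with $a(t) = a(0)e^{-i\omega t}$, the real and imaginary parts recover $X$ and $Y$ satisfying the original coupled equations (the values below are arbitrary):

```python
import numpy as np

omega = 2.0
a0 = 1.0 + 2.0j                     # arbitrary initial condition a(0) = X(0) + i Y(0)
t = np.linspace(0.0, 5.0, 1000)
a = a0 * np.exp(-1j * omega * t)    # solution of da/dt = -i*omega*a
X, Y = a.real, a.imag

# numerically verify the original coupled equations dX/dt = omega*Y, dY/dt = -omega*X
dX = np.gradient(X, t)
dY = np.gradient(Y, t)
assert np.allclose(dX[10:-10], omega * Y[10:-10], atol=1e-3)
assert np.allclose(dY[10:-10], -omega * X[10:-10], atol=1e-3)
```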
{ "language": "en", "url": "https://physics.stackexchange.com/questions/522163", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Internal force disintegrating a solid body? Let $M$ be a block on a frictionless surface. Now let us mentally divide (not physically) the block in a 1:2 ratio (i.e. let the $1/3$ on the left be called $M_1$ and the $2/3$ on the right be called $M_2$). So $M_1$ applies force $F_1$ on $M_2$ and $M_2$ applies force $F_2$ on $M_1$, and by the 3rd law they are equal. Hence the acceleration of $M_1$ would be $2a$ and that of $M_2$ would be $a$. Shouldn't this deform the block?
I believe you are making a mistake in the way you are imagining the force being applied on the masses. The scenario where $M_1$ and $M_2$ move with accelerations $2a$ and $a$ respectively arises when $F$ is applied to each body separately, imagining both are rigid bodies. The idea of deformation in your mind, as I perceive it, is that if $F$ is applied to either body when they are kept adjacent to each other, the difference in acceleration means that they should deform. BUT, as Aaron clearly showed you, this is not equivalent to applying $F$ to both bodies independently. Here the force will be redistributed in such a way that both bodies have the same acceleration. The whole idea works only under the rigid-body condition $F_{12} = F_{21}$. This is the key. You can read more about the $\textbf{strong}$ and $\textbf{weak}$ laws of action and reaction.
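A one-line sanity check of the rigid-body picture, assuming $F$ is applied on the outer face of $M_1$ so the whole block shares one acceleration (illustrative numbers):

```python
M, F = 3.0, 9.0                 # total mass (kg) and applied force (N); illustrative
a = F / M                       # the whole rigid block accelerates together
M1, M2 = M / 3, 2 * M / 3       # the mental 1/3 : 2/3 split
F_internal = M2 * a             # force M1 must transmit to M2 (= 2F/3)
print(F_internal)               # 6.0
```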
{ "language": "en", "url": "https://physics.stackexchange.com/questions/522514", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 3 }
Gravitational potential energy defined as the work done on a mass Our physics sir made us write that gravitational potential energy is the work done in bringing a mass from infinity to a point without acceleration, but I am confused because if acceleration is $0$ it means that the external force is 0, and hence net work done should always be zero. Then how can potential energy be anything other than zero?
...if acceleration is $0$ it means that the external force is $0$... No. If acceleration is $0$ then net force is $0$. ...and hence net work done should always be zero. Yes, in this scenario the net work is in fact $0$, since the net force is $0$. However, this means that there are (at least) two forces acting on the object in question: gravity $F_g$ and an external force $F_e$. These two forces must be equal and opposite. This is a standard treatment/explanation of potential energy. We move the body with a constant velocity, as $F_g=-F_e$, and so the work done by the external force $W_e$ is equal to the negative of the work done by gravity $W_g$. By definition, the work done by gravity is also equal to the negative change in potential energy $\Delta U$. Finally, if we start "at infinity" where $U(\infty)=0$ and end at position $x$, then $\Delta U=U(x)-U(\infty)=U(x)$, Therefore, we have $$W_e=-W_g=-(-\Delta U)=\Delta U=U(x)$$ So then we have what you stated at the beginning: gravitational potential energy is the work done in bringing a unit mass from infinity to a point without acceleration.
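A numerical version of this construction, assuming Newtonian gravity with Earth-like numbers: integrate the work done by the external force (equal and opposite to gravity) from a very large radius in to $r$, and compare with the analytic $U(r)=-GMm/r$.

```python
import numpy as np

G, M, m = 6.674e-11, 5.972e24, 1.0      # SI units; Earth-like values for illustration
r, r_far = 6.371e6, 1e12                 # end point and a stand-in for "infinity"

rs = np.logspace(np.log10(r), np.log10(r_far), 400_000)
F_ext = G * M * m / rs**2                # outward external force balancing gravity
# moving inward from r_far to r, each displacement opposes F_ext, so W_ext < 0
W_ext = -np.sum(0.5 * (F_ext[1:] + F_ext[:-1]) * np.diff(rs))

U = -G * M * m / r                       # analytic potential energy at r
assert abs(W_ext - U) / abs(U) < 1e-3
```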
{ "language": "en", "url": "https://physics.stackexchange.com/questions/522649", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
running on the outside of a moving train Seen a few films lately where the hero runs on top of a moving train, for example "Under Siege 2". It gives the impression that the train is moving quite fast (presumably its normal operating speed on a straight section, of order 100 km/hr $\approx$ 30 m/s). Is it realistic that a man sized object could run along the top of the train? What sort of forces would he have to overcome to stay on the roof and what speed of train could he/she safely manage?
If the train is moving at constant velocity, in order for the man to walk on top of it in the direction of travel he would need to exert a force to overcome the force of air resistance. On the other hand, he could easily walk opposite the direction of travel of the train, since the force of the air at his back would cause him to accelerate. The bottom line is that the velocity of the train is irrelevant if it is constant, were it not for the air resistance. If, for example, the train were moving at constant velocity in a vacuum (no air), the man could walk freely in either direction, only needing to apply the force necessary to accelerate his mass in the desired direction. The velocity of the train would have no effect on him. I accept that in a vacuum there are no particular issues, but the question did have the aerodynamics tag. In that case, in air, it would be unreasonable for the man to run along the top of the train towards the front, since even with no wind the relative velocity of the air against the man would be 100 km/hr (62 miles per hour), which is nearly hurricane-force wind. On the other hand, he would be accelerated towards the back of the train, requiring no effort. Hope this helps.
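A back-of-envelope drag estimate (quadratic drag law with rough, assumed values for the person's drag coefficient and frontal area) shows why running forward at 30 m/s relative airspeed is unreasonable:

```python
rho, Cd, A = 1.2, 1.0, 0.8     # air density (kg/m^3), drag coeff, frontal area (m^2); rough guesses
v = 30.0                        # ~100 km/h relative airspeed (m/s)
F_drag = 0.5 * rho * Cd * A * v**2
print(f"{F_drag:.0f} N")        # ~432 N, roughly half the weight of a 90 kg person
```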
{ "language": "en", "url": "https://physics.stackexchange.com/questions/522756", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why does the second law of thermodynamics prevent 100% efficiency? So far in my thermodynamics lecture course, my understanding of the laws of thermodynamics is that the first law is about the conservation of energy, the second law says entropy must always increase or stay the same which apparently results in the fact you can never achieve 100% efficiency of heat engines, unless at $T = 0\,\mathrm K$, and the last law says that you can't get to $T= 0\,\mathrm K$. I have never explicitly seen why the fact that entropy must always increase or stay the same results in the prevention of achieving 100% efficiency. The only proof I have is showing the Carnot cycle is the most efficient and that is only 100% efficient if the cold reservoir is at absolute zero, which it can not be at. Is there any way to work from the statement: $\Delta S \geq 0$ (for any process in a closed system), to some result which says you can not achieve 100% efficiency?
In any irreversible process, heat is lost, in almost every case by friction of some kind (resistance in a circuit, friction between gases, friction between plates moving on top of each other, magnetic and electrical friction, or friction in waves, for example). This is wasted energy, so you will never be able to get 100% efficiency. Unless the process is a reversible one; but real processes are not. In the Carnot cycle, the maximum efficiency is $1 - T_c/T_h$, set by the reservoir temperatures; for typical operating temperatures it comes out well below 100%, around 68%. Try looking up why this is the case! But can't you use this heat again for doing work? Yes, but then you have to catch it first and store it. If you succeed in doing this, then the whole story repeats again, etc., etc., until you come close to a perpetuum mobile.
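The Carnot bound mentioned above depends only on the reservoir temperatures; a two-line sketch:

```python
def carnot_efficiency(T_hot, T_cold):
    """Maximum efficiency any heat engine can reach (temperatures in kelvin)."""
    return 1.0 - T_cold / T_hot

print(carnot_efficiency(500.0, 300.0))   # 0.4
print(carnot_efficiency(300.0, 0.0))     # 1.0, but only at unreachable absolute zero
```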
{ "language": "en", "url": "https://physics.stackexchange.com/questions/522878", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 3 }
Why are the masses of $W^{\pm}$ and $Z^0$ different? We know that through the Higgs phenomenon, the weak bosons become massive. In our Lagrangian the $W^\pm$ boson is usually defined as $\frac{1}{\sqrt{2}}(W^1_\mu\mp iW^2_\mu)$ and $Z^o$ is usually defined as $(-B_\mu+W^3_\mu)$ ignoring pre factors and couplings. Because of these definitions the masses of $W^\pm$ and $Z^o$ are different. Is the reason of these definitions purely experimental? Or was there a reason for doing this purely from theoretical grounds?
You could ignore over-all factors, but definitely not couplings. The (not quite purely) theoretical mass term in the generic Weinberg-Salam model is, instead, proportional to $$ (W_\mu^1)^2+ (W_\mu^2)^2+\left (W_\mu^3-\frac{g'}{g} B_\mu\right ) ^2, $$ (not quite purely, as the form was all but suggested by a skein of experimental facts--a hugely long story involving the neutrality of neutrinos and the chirality of the charged currents, ultimately spelled out by Glashow in 1961). But, certainly, the magnitude of the weak mixing, $$ \frac{g'}{g} \equiv \tan \theta_W $$ is a purely experimental fact of life. (Theoretically arbitrary, unless you joined speculative explanatory schemes in GUTs, etc...) Theoretically, if nature chose $g'=0$, so $\theta_W=0$, the mass of the Z would be the same as that of the charged Ws, since the argument of the third parenthesis is $Z_\mu / \cos \theta_W$. But, experimentally, nature chose $\sin^ 2 \theta _W$ = 0.2397 ± 0.0013 instead of 0. "Nobody really knows why"...
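A quick numeric illustration of the tree-level relation $M_W = M_Z\cos\theta_W$ implied by the mass term above, using the $\sin^2\theta_W$ value quoted (this ignores scheme dependence and radiative corrections, which shift $M_W$ to the measured ~80.4 GeV):

```python
import math

sin2_thetaW = 0.2397          # experimental value quoted above
M_Z = 91.19                   # measured Z mass in GeV
M_W_tree = M_Z * math.sqrt(1.0 - sin2_thetaW)   # tree level: M_W = M_Z * cos(theta_W)
print(f"{M_W_tree:.1f} GeV")  # ~79.5 GeV at tree level
```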
{ "language": "en", "url": "https://physics.stackexchange.com/questions/523109", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What is the cross-section size of a photon? How "wide" is a photon, if any, of its electromagnetic fields? Is there any physical length measurement of these two orthogonal fields, $E$ and $M$, from the axis of travel? When a photon hits a surface, and is absorbed by an electron orbital, this width comes into play, as there could have been more than one electron that could have absorbed the photon? This is not my personal query, I found it while I was surfing the web, and found it interesting, so I posted it here.
This is a puzzle. I hear that you can slowly build up an interference pattern using an ultra low intensity beam. This suggests that the EM wave associated with each photon can interfere with other parts of itself to determine the probability that the photon will be absorbed at a particular point. When I was at MIT they had a grating spectrometer that could spread the spectrum over a width of 10 feet or more. That would require a good sized photon. On the other hand, the photons in a laser beam can only interact with things that fall within the width of the beam which can be focused down to the width of the track on a DVD (and the temporal length of the photon must be shorter than that of a bit on the track).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/523385", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 5, "answer_id": 3 }
Gauss law when dealing with materials with conductivity Suppose we have a parallel plate capacitor filled with two dielectric materials, one with conductivity $\sigma_1$ and permittivity $\epsilon_1$ and the other one with conductivity $\sigma_2$ and permittivity $\epsilon_2$. Each dielectric has thickness equal to half of the distance that separates the plates. The capacitor is connected to a battery of potential V. I am asked to find out the electric field between the plates. Applying Gauss law, I find that the electric displacement vector inside the capacitor is equal to the superficial charge density, $\sigma$. From here, I can calculate $\sigma$, supposing we are dealing with linear dielectrics: $V = \int_0^\frac{d}{2} \frac{D}{\epsilon_1} dl + \int_\frac{d}{2}^d \frac{D}{\epsilon_2} dl = \frac{\sigma d \left( \epsilon_1 + \epsilon_2 \right)}{2\epsilon_1\epsilon_2} \iff \sigma = \frac{2V\epsilon_1\epsilon_2}{d(\epsilon_1+\epsilon_2)}$ From here I conclude that: $E_1 = \frac{\sigma}{\epsilon_1} = \frac{2V\epsilon_2}{d(\epsilon_1+\epsilon_2)}$ $E_2 = \frac{\sigma}{\epsilon_2} = \frac{2V\epsilon_1}{d(\epsilon_1+\epsilon_2)}$ The problem is that according to my professor the solution to this part of the exercise is: $E_1 = \frac{2V\sigma_2}{d(\sigma_1+\sigma_2)}$ $E_2 = \frac{2V\sigma_1}{d(\sigma_1+\sigma_2)}$ which he obtains by imposing boundary conditions and calculating the current densities. My question is: why is my procedure wrong? What have I assumed that is not correct?
Your technique is not wrong... if $\sigma_1=\sigma_2=0$. See, if the materials you are working with can conduct current, the parts of the system with free charges will not just be the plates: any part of the (partially) conducting media can also hold free charge brought there by the current that flows in these materials. This is your mistaken assumption. In this problem, the interface between $\epsilon_1,\sigma_1$ and $\epsilon_2,\sigma_2$ can carry a free charge density $\sigma'$, since the system in equilibrium can have brought it from either plate. Accounting for this, your potential equation now reads $$V = \int_0^\frac{d}{2} \frac{D_-}{\epsilon_1} dl + \int_\frac{d}{2}^d \frac{D_+}{\epsilon_2} dl,\ \text{ with }\ D_\pm=\sigma\pm\frac{\sigma'}{2},$$ $$\implies V = \frac{\sigma d \left( \epsilon_1 + \epsilon_2 \right)}{2\epsilon_1\epsilon_2} + \frac{\sigma' d \left( \epsilon_1 - \epsilon_2 \right)}{4\epsilon_1\epsilon_2}.$$ The idea is that this new interface charge density is a free parameter: its value depends on the current-equilibrium state, since this is our other boundary condition. In other words, the system's equilibrium state can only be fully described by both charge and current boundary conditions. Your professor simplifies this by first finding the current equations and seeing that they do not depend on the charge boundary conditions (and thus determine the voltage and electric fields fully). (Edit:) A nice way to illustrate this is to assign $\sigma=0$ or $\sigma=\infty$ to one of the parts. For example, in the case where $\sigma_1=0$, no current can flow in the first region. This necessarily means that $E_2=0$, since otherwise the current coming from the 2nd plate would always want to flow into the first region. For another example, if $\sigma_2\to\infty$, the second part cannot have any electric field inside of it, since there are no electric fields inside (perfect) conductors. These results hold regardless of the dielectric properties.
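A quick check, with made-up numbers, that the professor's conductivity-based fields satisfy both steady-state conditions: equal current density $J=\sigma E$ in the two layers, and fields integrating back to the applied voltage:

```python
V, d = 10.0, 1e-3              # applied voltage (V) and plate separation (m); made up
s1, s2 = 2.0, 5.0              # conductivities sigma_1, sigma_2 (S/m); made up
E1 = 2.0 * V * s2 / (d * (s1 + s2))
E2 = 2.0 * V * s1 / (d * (s1 + s2))
assert abs(s1 * E1 - s2 * E2) < 1e-6           # same current density J in both layers
assert abs((d / 2.0) * (E1 + E2) - V) < 1e-9   # fields integrate back to V
```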
{ "language": "en", "url": "https://physics.stackexchange.com/questions/523784", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
I dont understand the work equation I don't understand how work = force * displacement as if a force of say 1 Newton was to be applied to two objects of different mass until the object reached a displacement of say 1 meter, surely the object of less mass would displace 1 meter in less time (due to faster acceleration) meaning the force would be applied for less time resulting in less work. I know there is something fundamentally wrong with my understanding of this but I'm not sure exactly what. any help would be greatly appreciated.
You are confusing the concept of "power" (what the user supplies to the system) with the concept of "energy" (something inherent to the system). Power is defined as $P=\frac{dW}{dt}=\mathbf{F}\cdot\mathbf{v}$, with $\mathbf{v}$ the velocity, $\mathbf{F}$ the force, $P$ the power and $W$ the work done. The work $W$ will eventually be the same in the two cases, $W = F\,\Delta x$, but the power differs: the lighter object covers the metre in less time and moves faster, so the same work is delivered at a higher rate. By the work-energy theorem, each object ends up with the same kinetic energy, just at a different speed and after a different time. The work that is done does not depend on time, and it should not be confused with the power the user gives to the system or the power that is dissipated by it. If you call $R$ the rate of change of the energy of the system, $R=\frac{dE}{dt}+w$, with $E$ the internal energy and $w$ the rate of dissipation of energy.
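A small sketch of the scenario in the question (1 N over 1 m, two masses): the lighter object finishes sooner and faster, but both gain exactly $W = F\,\Delta x = 1\ \mathrm{J}$ of kinetic energy; only the power differs.

```python
import math

F, dx = 1.0, 1.0                  # 1 N applied over 1 m, as in the question
results = {}
for m in (1.0, 4.0):
    a = F / m
    t = math.sqrt(2.0 * dx / a)   # time to cover dx starting from rest
    v = a * t                     # final speed
    KE = 0.5 * m * v**2           # kinetic energy gained
    results[m] = (t, KE)
    print(f"m={m} kg: t={t:.2f} s, v={v:.2f} m/s, KE={KE:.2f} J")
```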
{ "language": "en", "url": "https://physics.stackexchange.com/questions/523904", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Ways to derive black-body radiation in Unruh effect I know two ways to derive black-body radiation in Unruh effect and they are: * *Using Bogoliubov coefficients (N. D. Birrell and Paul Davies) *Using Page approximation (David J. Toms and Leonard Parker) Are there other ways to derive this effect (for example using Hamiltonian diagonalization maybe)?
Unruh effect can be considered as a special case of Hawking radiation (See the beautiful answer of Motl to this question). Then finding ways to discover the Hawking effect gives you ways to infer the Unruh radiation existence. Three lovely and strongly physical derivations of the Hawking effect: 1) Cancelling gravitational anomalies https://arxiv.org/abs/gr-qc/0502074 2) Avoiding equivalence principle violations https://arxiv.org/abs/1102.5564 3) Hawking's derivation using the imaginary time trick https://hapax.github.io/physics/imaginary-time/
{ "language": "en", "url": "https://physics.stackexchange.com/questions/524238", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What properties a medium must have to allow waves to travel? There are many types of waves - sound waves, water waves, light 'waves' etc. What are the common properties of the media in which these various types of wave travel? And how these properties enable the wave propagation? I'm especially interested in a mathematical description of these properties. (If it's reasonable to ask for it.)
Light does not need a medium. Mechanical waves need a medium with inertia and a restoring force. For longer waves on a water surface, it is gravity that provides the restoring force driving the surface back toward flatness. The motion of the water below the surface is not so easy to describe mathematically. Sound waves in air or water are pressure waves in which elasticity provides the restoring force; these are longitudinal waves. Transverse waves cannot exist in a fluid because there is no restoring force for a shearing deformation. Solids have an elastic shear modulus, so they also support transverse waves. Mathematically one often ends up with the wave equation, a differential equation that for mechanical waves is derived from Newton's law $F = ma$. The solutions are then functions of position and time that can be written as $$f(x,t) = f(x-vt),$$ which propagate with a velocity $v.$
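As a numerical sanity check of that last statement, one can verify (here with an assumed Gaussian pulse and an arbitrary wave speed) that a function of the form $f(x-vt)$ satisfies the 1D wave equation $\partial_t^2 f = v^2\,\partial_x^2 f$, using central finite differences:

```python
import math

v = 2.0  # assumed wave speed

def f(x, t):
    """A Gaussian pulse travelling to the right: any f(x - v*t) would do."""
    return math.exp(-(x - v * t) ** 2)

# Central finite differences for the second derivatives at one sample point.
x, t, h = 0.3, 0.1, 1e-4
f_tt = (f(x, t + h) - 2 * f(x, t) + f(x, t - h)) / h**2
f_xx = (f(x + h, t) - 2 * f(x, t) + f(x - h, t)) / h**2

# The wave equation f_tt = v^2 * f_xx holds up to discretisation error.
print(abs(f_tt - v**2 * f_xx) < 1e-4)  # True
```

The same check passes for any smooth profile of the single variable $x - vt$, which is why the general solution propagates rigidly with speed $v$.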
{ "language": "en", "url": "https://physics.stackexchange.com/questions/524377", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Does deep inelastic scattering produce photon? I know that DIS produces hadron jets, which are formed from the intense energy of the interaction. But I wonder, are photons also produced? And if so, what are the processes that create these photons?
Photons are seen in the final state in many cases. The "final" products of any process include only stable particles (or at least those long-lived enough not to matter in the context of the detector you are using): electrons, protons, neutrons, neutrinos, and photons (plus possibly muons, depending on the size of your detector system and the energies involved). All other products decay or re-interact after production. You get photons from * *decay of unstable particles *direct radiation off charged particles involved in the interaction vertex *interactions of charged products with the medium of the detector (bremsstrahlung) *annihilation of particles with their charge-conjugation partners (i.e. matter-antimatter) and other less common causes.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/524512", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Is the car braking time formula $ T = v / (\mu_s \, g) $ valid only for uniformly accelerated motion? I'm wondering if the car braking time formula is valid only for uniformly accelerated motion. $$ T = \frac{v} {\mu_s \, g} $$ with $ v $ average speed, $ \mu_s $ static friction coefficient between the wheel and the ground, $ g $ gravitational acceleration on the earth. I derived it in this way ($ F_{s, max} = \mu_s \, N = \mu_s \, m \, g $ maximum static friction force; $ N $ normal force, $ m $ car mass): $$ F_{s, max} = m \, a $$ $$ \mu_s \, m \, g = m \, \frac{v} {T} $$ $$ T = \frac{v} {\mu_s \, g} $$ where $ a $ is the average acceleration of the car. Thank you in advance.
Yes, your equation is only valid for uniformly accelerated motion. This is because you substituted $a = \frac{v}{T}$ in your derivation, and that substitution is only valid during constant acceleration.
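Under that constant-deceleration assumption ($a = \mu_s\,g$ throughout the stop), the formula can be sketched as follows; the friction coefficient here is an illustrative value, and $v$ is taken as the initial speed:

```python
G = 9.81  # gravitational acceleration, m/s^2

def braking_time(v_m_s, mu_s=0.7):
    """T = v / (mu_s * g); only valid if the deceleration is constant."""
    return v_m_s / (mu_s * G)

def braking_distance(v_m_s, mu_s=0.7):
    """d = v^2 / (2 * mu_s * g), from the same constant-acceleration kinematics."""
    return v_m_s ** 2 / (2 * mu_s * G)

v = 100 / 3.6  # 100 km/h expressed in m/s
print(round(braking_time(v), 2), "s")      # ~4.05 s
print(round(braking_distance(v), 1), "m")  # ~56.2 m
```

If the deceleration varied during the stop (e.g. the wheels lock and the friction drops to its kinetic value), one would instead have to integrate $a(t)$ over the braking interval.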
{ "language": "en", "url": "https://physics.stackexchange.com/questions/524688", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Why is ampere still a base unit? The ampere is still a base unit, according to the SI brochure. However, in my perception the recent redefinition of units effectively defines the Coulomb as e/(1.602 176 634 × 10^−19), and the ampere is derived as 1 A = 1 C/s. Why did they not make the coulomb a base unit, instead of the ampere, last year?
It appears that they have deemphasized the concept of "base units". They did not remove the term, but mention that all units are now defined in terms of constants. As wikipedia puts it: With the 2019 redefinition, the SI is constructed around seven defining constants, allowing all units to be constructed directly from these constants. The designation of base units is retained but is no longer essential to define SI measures. From the 9th edition of the SI Brochure The choice of the base units was never unique, but grew historically and became familiar to users of the SI. This description in terms of base and derived units is maintained in the present definition of the SI, but has been reformulated as a consequence of adoption of the defining constants. With that position, there seems to be no strong desire to modify which units comprise the set of base units.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/524788", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 3, "answer_id": 0 }
Clarification on the difference between metastable states and excited states The answer to the question What is the difference between metastable states and excited states? is that the difference lies in the time that the system spends in a given state. So, for example, take the hydrogen atom and the state $2s$. Which time tells us the difference between an excited state and a metastable state?
A long lifetime of an excited state is just an indication of a metastable state. The physical mechanism producing a long lifetime is the presence of some dynamical obstruction of the direct mechanism for decaying to a lower state. It is quite a widespread mechanism, encompassing nuclear physics, electronic states in atoms and molecules, and condensed phases of matter. An example from atomic physics is the case of transitions forbidden by electric dipole selection rules. The much smaller matrix elements corresponding to quadrupole transitions make the lifetimes of those states significantly longer than in the case of electric dipole transitions. In classical physics, metastable states require an activation energy to overcome an energy barrier separating the basin of the excited state from the ground state.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/524853", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Can particles with fractional charge exist in isolation? Since quarks are ruled out, I wonder if it is possible for free fractional charges to exist, not counting virtual particles?
All the data gathered over the last 200 years or so show only integer multiples of the elementary charge, and the models that classify the behavior of particles are successful in predicting new data. Fractional charges arose in the models to describe the symmetries of hadrons, which were experimentally found to be composite. The models have not been invalidated; within the mainstream there are no free fractional charges. They have not been observed, and that non-observation is what the models describe.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/525063", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }