What do we mean by charge in physics? I am drawing a comparison between electric charge and colour charge. Electric charges communicate via (virtual) photons, and the photon is itself a boson carrying no charge. What about colour charges: how do they communicate with each other if the bosons themselves carry the charge?
Charge in this context refers to the fundamental coupling between particles (either matter or gauge bosons) and gauge bosons. Let me explain, starting with the QED sector:

* The electron is electrically charged. This means it interacts with the photon.
* The photon is electrically neutral. This means it does not interact with other photons at tree level. (There are higher-order photon-photon interactions mediated by virtual particle-antiparticle pairs, but there is no fundamental photon-photon vertex.)

Now we get on to the QCD sector:

* The quarks are electrically charged. This means they interact with photons.
* The quarks are also "colourfully" charged (they have colour charge). This means they interact with gluons.
* Gluons also have colour charge. This means that (unlike photons) gluons interact with other gluons. There is a gluon-gluon interaction vertex, even at tree level.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/525275", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Problem involving gravitational potential energy Two uniform solid spheres of equal radii $R$, but mass $M$ and $4M$ have a centre separation of $6R$. The two spheres are held fixed. A projectile of mass $m$ is projected from the surface of the sphere $M$ and towards the second sphere along the line joining the centres of the two spheres. Obtain an expression for the minimum speed $v$ of the projectile so it reaches the surface of the second sphere. I was looking at the answer of this problem and noticed that the neutral point (i.e. the point where the forces from the two spheres on the projectile exactly cancel out) had been calculated and conservation of energy had been applied at the neutral point $N$ and at the surface ($E_s$ being the mechanical energy at the surface). $$E_s= \frac 12 mv^2-\frac {GMm}R-\frac{4GMm}{5R} $$ $$E_N= -\frac{GMm}{2R}-\frac{4GMm}{4R}$$ Equating $E_s$ and $E_N$ gives $v=\sqrt\frac{3GM}{5R}$. What I did not understand was that while writing the mechanical energy at the neutral point $N$ they assumed the kinetic energy of the projectile to be zero. If the kinetic energy of the particle at $N$ is zero, implying that the particle is stationary, then how would it reach the surface of the second sphere since there is no force pulling it towards $4M$?
In this situation, the minimum speed is a limiting value: launched at exactly that speed, the projectile arrives at the null point with vanishing kinetic energy (strictly, it approaches it asymptotically). Anything above that minimum will carry the projectile beyond the null point, after which the net gravitational force points towards $4M$ and pulls it the rest of the way.
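The energy bookkeeping in the question can be checked with a few lines of exact rational arithmetic (a sketch working in units $G = M = R = m = 1$; the variable names are mine, not from the problem):

```python
from fractions import Fraction as F

# Neutral point: GM/d^2 = 4GM/(6R - d)^2 gives d = 2R from the sphere of mass M
d = 2
assert F(1, d**2) == F(4, (6 - d)**2)

# Energy conservation between the surface and the neutral point fixes v^2 = 3GM/(5R)
v2 = F(3, 5)
E_surface = F(1, 2) * v2 - 1 - F(4, 5)   # (1/2)mv^2 - GMm/R - 4GMm/(5R)
E_neutral = -F(1, 2) - F(4, 4)           # -GMm/(2R) - 4GMm/(4R)
assert E_surface == E_neutral
```

Both sides come out to $-3/2$ in these units, confirming $v=\sqrt{3GM/5R}$.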
{ "language": "en", "url": "https://physics.stackexchange.com/questions/525347", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Does momentum conservation imply energy conservation? I was trying to figure out a situation in which energy is conserved and momentum is not, and it was quite easy to find one case: a stone tied to a string moving in uniform circular motion. Then I thought to consider the reverse situation, in which momentum is conserved but energy is not. To me it seems that as soon as we choose a system in which momentum is conserved, it is automatically implied that in such a system energy is conserved too. But these two conservation laws come into existence from two different symmetries (one related to the invariance of physical laws under translation in space, and the other under translation in time), so they need not imply one another. So it would be quite helpful

* if someone can point out a case in which momentum is conserved but energy is not;
* otherwise, if the above is not possible, explain why that is the case.

[Note that I am considering every form of energy of the given system.] This is to summarise my question so that no further confusion occurs to future visitors. The question in short is:

* can we lose time symmetry and retain translation symmetry? (Give an appropriate example for the case.)
If you’re just considering mechanical energy, consider the case of a firework or other exploding object. The total momentum is conserved, but the kinetic energy increases at the expense of the chemical energy of the explosion. More formally, momentum conservation is associated with the homogeneity of space (invariance under spatial translation), while energy conservation is associated with invariance under translation in time: the explosion happens at one moment and the (mechanical) energy changes; the string in your circular-motion example pulls the stone in different directions at different points, and momentum changes. So you can devise different situations that do different things.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/525478", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 9, "answer_id": 2 }
Time ordering of Normal ordered products in Wick's theorem I have a small doubt regarding Wick's theorem. Are normal-ordered products also time ordered? In Wick's theorem we usually don't write the time-ordering symbol in front of the normal-ordered products, which is why I ask. In the S-matrix expansion for a particular process, if the normal-ordered products are not time ordered, might that cause some process to occur in reverse order? How is this possible?
* Time-ordering and normal ordering are two different operator ordering prescriptions.
* It is not meaningful to apply them simultaneously, because which prescription should we then follow? However, nested ordering prescriptions do make sense, cf. e.g. this Phys.SE post.
* (One version of) Wick's theorem translates between them. See e.g. my related Phys.SE answer here.
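To make the translation concrete, the simplest (free real scalar field, two-point) case of Wick's theorem reads

$$T\{\phi(x)\phi(y)\} \;=\; {:}\phi(x)\phi(y){:} \;+\; \langle 0|\,T\{\phi(x)\phi(y)\}\,|0\rangle ,$$

i.e. a time-ordered product equals the normal-ordered product plus the contraction (a c-number propagator). No additional time-ordering is applied to the normal-ordered piece; the theorem converts one prescription into the other, it does not stack them.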
{ "language": "en", "url": "https://physics.stackexchange.com/questions/525560", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Difference between oscillation and radiation? I'm asking this specifically in terms of the Zeeman effect, but in general I have read some things about oscillations and orientations that are confusing me. If we have a magnetic dipole, propagating a field in all 3 spatial directions, and then we apply an external magnetic field in one direction, the external field would apply torque to the dipoles such that they align with the field - effectively giving them energy. Which means we have 3 different energies, hence 3 different angular frequencies. In terms of the Zeeman effect, this corresponds to the 3 spectral lines. We can detect these energies as light. If observed perpendicular to the field, we get 3 lines, and if viewed parallel we get 2 (transverse and longitudinal Zeeman effect). But why? I really don't understand this logic. And this is where my question above comes into play... I read in a few places that for the longitudinal Zeeman effect, with the field and observational direction parallel, the magnetic dipole along that direction is $\textit{oscillating}$ but not $\textit{radiating}$ - hence it is not detected, and only 2 lines are observed. What is the difference between oscillation and radiation? I can intuitively 'see' how oscillation conserves the energy, and radiation by definition radiates it out, but in this context I cannot distinguish between the two. Why does it radiate when observed perpendicular, but not radiate when observed parallel to the field? I know this is messy, I'm quite confused. Any help is appreciated!
An oscillating electric charge emits electromagnetic radiation. However, the spatial distribution of this radiation is not uniform: it follows the dipole pattern, with intensity proportional to $\sin^2\theta$, where $\theta$ is the angle measured from the axis of oscillation. In particular, the dipole does not radiate at all along the direction of its oscillation, which is why the component oscillating along the field is not seen when observing parallel to the field.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/525715", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Fermion zero modes extra conditions? A fermion zero mode is a zero eigenfunction, $$i\gamma^\mu(\partial_\mu-iA_\mu)\psi=0$$ The number of zero modes is apparently related to the instantons of the gauge field. But now my question is about 'ordinary' solutions to the Dirac equation. Even if there is no gauge field and even in Euclidean space with a mass term the Dirac equation has solutions $$(i\gamma^\mu\partial_\mu + m)\psi=0$$ For instance, a possible basis choice in 2D is, $\gamma^0=\sigma^1, \gamma^1=\sigma^2$. Then there is the simple solution with no $x^1$ dependence, $$(\psi_L,\psi_R)=(e^{i m x^0},e^{i m x^0}).$$ Why are these ordinary solutions not considered when zero modes are considered? In Luboš Motl's answer here, he goes so far as to say solutions with non-zero mass don't exist in Euclidean space, but I don't see why not, I just explicitly found an obvious one. Is there some extra condition that goes into the definition of the zero mode that I am missing?
The eigenvalues of the Euclidean Dirac operator are of the form $i\lambda+m$, with $\lambda,m\in {\mathbb R}$. Instanton backgrounds can allow solutions with $\lambda=0$, but if $m\ne 0$ you cannot get $i\lambda+m=0$. Your confusion comes from the fact that the Euclidean Dirac operator with hermitian $\gamma^\mu$ obeying $\gamma^\mu\gamma^\nu+\gamma^\nu\gamma^\mu= 2\delta^{\mu\nu}$ is $$ \gamma^\mu \partial_\mu+m. $$ There is no "$i$" before the gammas. This absence of "$i$" is essential precisely because there must be no zero modes in the massive Euclidean theory, so that the Euclidean propagator $(\gamma^\mu \partial_\mu+m)^{-1}$ always exists. It's only when we go back to Minkowski signature that we can go "on shell." Your zero mode solution has $p_0=m$, which is a Minkowski energy-momentum relation.
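A quick way to see the eigenvalue structure (a sketch under the hermiticity conventions above): since the $\gamma^\mu$ are hermitian and $\partial_\mu$ is anti-hermitian, the operator $\gamma^\mu\partial_\mu$ is anti-hermitian, so its eigenvalues are purely imaginary. Then

$$(\gamma^\mu\partial_\mu)\,\psi = i\lambda\,\psi \;\Rightarrow\; (\gamma^\mu\partial_\mu + m)\,\psi = (i\lambda + m)\,\psi, \qquad |i\lambda+m| = \sqrt{\lambda^2+m^2}\;\ge\; |m| \;>\; 0,$$

so for $m\ne 0$ the massive Euclidean operator has no zero modes and its inverse, the propagator, always exists.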
{ "language": "en", "url": "https://physics.stackexchange.com/questions/526366", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
The Vertical Launch of a Rocket From Q7 on Pg.22 of "Upgrade Your Physics" by BPhO/Machacek A rocket of initial mass $M_0$ is being launched vertically in a uniform gravitational field of strength $g$. (a) Calculate the final velocity of the rocket 90 % of whose launch mass is propellant, with a constant exhaust velocity $u$. Assume that the propellant is consumed evenly over one minute. Attempt: Let $\alpha$ denote the fuel consumption in $\mathrm{kg\ s^{-1}}$ Then the constant thrust provided by the exhaust is given by: $$T=\alpha u \tag{1}$$ The acceleration $a(t)$ of the rocket at some time $t$ after the launch: $$T-M(t)g=M(t)a(t) \tag{2}$$ where $$M(t)=M_0-\alpha t \tag{3}$$ is the mass of the rocket at time $t$. Using $v=\int a(t)\,\mathrm dt $, I got $$v(t)=\int\limits_0^t\left(\frac{\alpha u}{M_0-\alpha t}-g\right)\,\mathrm dt=u\ln\left(\frac{M_0}{M_0-\alpha t}\right)-gt \tag{4}$$ since $v_0=0$. Can $\alpha$ and $t$ somehow be eliminated or do I need more information to answer the question? Any conceptual errors in my working? Later on, the question also asks for the velocity at main engine cut-off and the greatest height reached (which I think can be obtained by integrating eq. $(4)$ but the notion of time is again needed here?).
As you say: $$M(t) = M_o - \alpha t$$ But you also know that at time $\tau$ after launch the rocket has mass $0.1M_o$ (since it has consumed all of its fuel). Therefore $$0.1M_o = M_o - \alpha \tau \tag 1$$ $$\Rightarrow \tau = \frac {0.9M_o}{\alpha} \tag 2$$ After substituting $(1)$ and $(2)$ into your equation we get: $$ v(\tau) = u\ln \left(10 \right)-g\frac {0.9M_o}{\alpha}$$ Since the problem states that the propellant is consumed evenly over one minute, $\tau = 60\ \mathrm{s}$, so $\alpha = 0.9M_o/(60\ \mathrm{s})$ and the burn-out velocity is simply $$v(\tau) = u\ln(10) - g\,(60\ \mathrm{s}).$$
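The closed form can be cross-checked by integrating $a(t)$ numerically (a sketch; the exhaust speed $u$ below is an assumed illustrative value, not given in the problem):

```python
import math

M0, u, g = 1.0, 2500.0, 9.81   # M0 in arbitrary units; u is an assumed exhaust speed
tau = 60.0                      # burn time: propellant consumed evenly over one minute
alpha = 0.9 * M0 / tau          # propellant consumption rate

# Closed form: v(tau) = u*ln(10) - g*tau
v_closed = u * math.log(10.0) - g * tau

# Midpoint-rule integration of a(t) = alpha*u/(M0 - alpha*t) - g over [0, tau]
n = 200_000
dt = tau / n
v_num = sum((alpha * u / (M0 - alpha * (i + 0.5) * dt) - g) * dt for i in range(n))

assert abs(v_num - v_closed) < 1e-3
assert v_closed > 0
```

Note that $v(\tau)$ is independent of $M_0$: the only remaining unknown is the exhaust speed $u$.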
{ "language": "en", "url": "https://physics.stackexchange.com/questions/526458", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Conceptual freshman year physics question about acceleration A particle moves along the x-axis. When its acceleration is positive, A. its velocity must be positive B. it must be speeding up C. it must be slowing down D. its velocity must be negative E. none of the above is always true The answer to this is $E$, but according to the analysis I've done if the velocity is positive according to formula $a=v/t$, positive velocity means positive acceleration according to the formula. Can anybody explain why the answer is given as $E$?
Two examples should illustrate why answer "E" is the correct answer. When you work a physics problem, you get to decide which direction is positive. Accordingly, when you are driving down the road, it is valid to state that the direction that is in front of your car is the positive direction. Example 1: You are starting from the "x=0" position, and the stop light turns green. As you press the accelerator pedal, your velocity is positive, you are speeding up, and your acceleration is positive. Example 2: You are at position x=100 m, and you are traveling in the reverse direction (negative x direction; you are backing up) at 10 m/s. For whatever reason, you put your automatic transmission into "drive" and press on the accelerator pedal. At that point, your velocity is negative, you are slowing down, and your acceleration is positive. In both examples, acceleration is positive, but that is the ONLY thing that the two examples have in common.
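The two examples can be condensed into a check on the sign of the speed change $\mathrm{d}|v|/\mathrm{d}t$ (a minimal sketch; the function name is mine):

```python
def speed_change(v, a, dt=1e-3):
    """Change in speed (|v|) after a short time dt under constant acceleration a."""
    return abs(v + a * dt) - abs(v)

# Example 1: positive velocity, positive acceleration -> speeding up
assert speed_change(10.0, 2.0) > 0
# Example 2: negative velocity (backing up), positive acceleration -> slowing down
assert speed_change(-10.0, 2.0) < 0
```

Since positive acceleration can accompany either speeding up or slowing down, none of statements A-D always holds.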
{ "language": "en", "url": "https://physics.stackexchange.com/questions/526566", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
What is the meaning of a negative frequency obtained from the Doppler equation? If an observer is moving away from a stationary sound source with a velocity $V'$, then the observed frequency is $(1-V'/v)f$, where $v$ is the speed of sound and $f$ is the frequency observed at rest. Now if $V'>v$, what will actually happen? What is the meaning of the negative frequency?
This will be easier to picture with you in a boat on a lake with a wave source. Say you travel away from the source, but slower than the waves propagate. Then the waves will travel past you and hit your boat from behind. In your formula for the doppler frequency, this will give a positive frequency. If you speed up to exactly the speed of the waves, you will travel perfectly synchronous with the waves and no waves will hit your boat, as you are exactly travelling along with them. The doppler formula correctly gives a frequency of 0. If you speed up even more, you will start hitting the waves with your boat from behind. The frequency you hit the waves with is again given by your formula, but now with a negative sign. So the sign in the result tells you whether the sound waves hit you from behind, or if you are fast enough to overtake the waves and you will hit them from behind (which is equivalent to them hitting you from the front).
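The three regimes described above fall straight out of the formula in the question (a sketch; the source frequency and sound speed are arbitrary illustrative values):

```python
def observed_frequency(f, v_observer, v_sound=343.0):
    """Doppler-shifted frequency for an observer receding from a stationary source."""
    return (1.0 - v_observer / v_sound) * f

f = 440.0
assert observed_frequency(f, 100.0) > 0      # slower than sound: waves still overtake you
assert observed_frequency(f, 343.0) == 0.0   # moving with the waves: none pass you
assert observed_frequency(f, 400.0) < 0      # faster than sound: you overtake the waves
```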
{ "language": "en", "url": "https://physics.stackexchange.com/questions/526801", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Antimatter and quantum mechanics This question could have a very simple answer but I could not find that answer anywhere. My question is: since electrons, protons, etc. all have their antiparticles, why are they not mentioned in quantum physics? And if they are real, should they not be included in the Schrödinger equation?
The non-relativistic behavior of antiparticles can be understood with the Schrodinger equation. For example, anti-hydrogen is approximated by the Schrodinger equation to the same accuracy as hydrogen is. This is often not mentioned in an introductory Quantum Mechanics course. But the relationship between particles and antiparticles can only be understood using relativistic quantum mechanics, such as the Dirac equation or relativistic quantum field theory. Quantum electrodynamics (QED) is an example of the latter and explains, among many other things, how an electron and a positron can annihilate into photons. Since charged particles and their antiparticles can annihilate to produce photons, which are never non-relativistic, the non-relativistic Schrodinger equation cannot explain this interaction. Also, the Schrodinger equation cannot represent particles or antiparticles appearing or disappearing, like QED can. But the Schrodinger equation can explain how an anti-proton binds with an anti-electron (positron) to make anti-hydrogen, since this does not involve relativistic processes and no particles appear or disappear.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/526907", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What is the reason to believe that the laws of physics are the same in all frames of reference? The first postulate of Special Relativity is that the laws of physics must be the same in all frames of reference, i.e. invariant under coordinate transformations. I know this might be moot to ask, but after reading a critic's paper on Special Relativity, I thought the question needed to be answered. Is there any supporting evidence which suggests so, other than the evidence of common sense and intuition? As we know, common sense and intuition are easily defied in many physical theories, like Quantum Mechanics, so we should not rely upon such assertions to formulate an entire theory of the Universe.
The laws of physics being the same in all inertial reference frames is an idea that originates not with Einstein but with Galileo, who noted that if you're in a sealed windowless room on a ship you can't tell whether the ship is moving (though you can tell if it's accelerating, such as when it bobs). Special relativity differs from Galilean relativity only in how we convert between inertial reference frames, claiming the transformation of spacetime coordinates to be Lorentzian instead of Galilean. With the right assumptions, you can show that only one of these two families of transformations can be consistent. The Galilean case is then the special case $c^{-2}=0$, which is measurably incorrect. The positive empirical value of $c^{-2}$ is known precisely enough that we define the metre in terms of the second.
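The claim that the Galilean transformation is the $c^{-2}=0$ limit can be illustrated numerically: at everyday speeds the two transformations agree to extremely high precision (a sketch; the event coordinates and boost speed are arbitrary):

```python
import math

def lorentz_boost(x, t, v, c):
    """Lorentz transformation of an event (x, t) into a frame moving at speed v."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return gamma * (x - v * t), gamma * (t - v * x / c ** 2)

def galilean_boost(x, t, v):
    """Galilean transformation: the c**-2 -> 0 limit of the Lorentz boost."""
    return x - v * t, t

c = 299_792_458.0              # m/s
x, t, v = 1000.0, 1e-3, 30.0   # an everyday event and an everyday speed (30 m/s)
xl, tl = lorentz_boost(x, t, v, c)
xg, tg = galilean_boost(x, t, v)

# At v/c ~ 1e-7 the two transformations are indistinguishable in ordinary life
assert abs(xl - xg) < 1e-6
assert abs(tl - tg) < 1e-9
```

The measurable differences only appear at speeds comparable to $c$, which is exactly where experiment favours the Lorentzian form.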
{ "language": "en", "url": "https://physics.stackexchange.com/questions/527004", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 6, "answer_id": 1 }
What happens when the amplitudes of interfering waves are different in the phenomenon of beats? I had read that for the formation of beats, two waves must interfere such that they have similar but not identical frequencies, and their amplitudes should be identical. I don't understand why their amplitudes should be identical for the formation of beats. What would happen if their amplitudes were not identical? Can someone help me out with this?
Beats occur, as you say, when the difference in frequency is low enough for us to make out the slow variation. With equal amplitudes, at the beat minima the signal is ideally interfered down to $0\%$ amplitude. If the amplitudes of the two signals are very different, we still get a periodic reduction in intensity, but not an attenuation to zero: the envelope only dips to $|A_1-A_2|$. If one amplitude is $10$X the other, the minima are $9A$ against maxima of $11A$, so the signal still retains roughly $80\%$ of its maximum amplitude, and such shallow beats are much harder to detect. In other words, the amplitudes do not need to be identical, but equal amplitudes make the phenomenon easiest to observe.
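The depth of the beat minima follows from the slowly varying envelope $\sqrt{A_1^2+A_2^2+2A_1A_2\cos(2\pi\,\Delta f\,t)}$, whose minimum is $|A_1-A_2|$. A quick numerical check (a sketch with arbitrary frequencies):

```python
import numpy as np

def beat_envelope(a1, a2, f1, f2, t):
    """Slowly varying amplitude envelope of a1*sin(2*pi*f1*t) + a2*sin(2*pi*f2*t)."""
    return np.sqrt(a1**2 + a2**2 + 2 * a1 * a2 * np.cos(2 * np.pi * (f1 - f2) * t))

t = np.linspace(0.0, 0.5, 200_001)              # covers two full beat periods at 4 Hz
equal = beat_envelope(1.0, 1.0, 440.0, 444.0, t)
unequal = beat_envelope(1.0, 0.1, 440.0, 444.0, t)

assert equal.min() < 1e-3                        # equal amplitudes: envelope dips to ~0
assert abs(unequal.min() - 0.9) < 1e-3           # unequal: dips only to |A1 - A2|
```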
{ "language": "en", "url": "https://physics.stackexchange.com/questions/527359", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 0 }
Entropy change in the free expansion of a gas Consider the adiabatic free expansion of a gas: since there is no external pressure, the work done on the system is 0, and since the walls are insulated (hence adiabatic) the heat absorbed is 0. However, since this is an irreversible process, the entropy change is > 0, hence dQ > 0. However, there is no heat absorption. What am I missing?
Substitute the irreversible spontaneous expansion with an appropriate reversible process. This can be done thanks to entropy being a state function. Since in the spontaneous expansion of an ideal gas the temperature remains constant, you can choose a reversible isothermal expansion with the same initial and final states as the spontaneous expansion. The entropy change of the reversible isothermal expansion is ΔS = Q/T, where Q is the heat absorbed along this reversible path. Because the internal energy of an ideal gas does not change at constant temperature, the heat absorbed during the reversible isothermal expansion equals the work done by the gas. Since the pressure is not constant during the expansion but is a function of volume, integrate it: $$\Delta S = \frac{Q}{T} = \frac{W}{T} = \frac{1}{T}\int_{V_1}^{V_2} p\,\mathrm dV = nR\ln\frac{V_2}{V_1} > 0.$$ The dQ > 0 in your reasoning refers to this reversible replacement path, not to the actual adiabatic free expansion, in which no heat flows.
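As a concrete number (a sketch for one mole of ideal gas doubling its volume; the values are illustrative):

```python
import math

n, R = 1.0, 8.314        # one mole; gas constant in J/(mol K)
V1, V2 = 1.0, 2.0        # free expansion doubling the volume

# Entropy change via the reversible isothermal replacement path:
# dS = Q_rev/T = W/T = n*R*ln(V2/V1), independent of T
dS = n * R * math.log(V2 / V1)

assert dS > 0                     # irreversible process: entropy increases
assert abs(dS - 5.763) < 0.01     # about 5.76 J/K per mole for a doubling
```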
{ "language": "en", "url": "https://physics.stackexchange.com/questions/527438", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 4, "answer_id": 3 }
Conservation of Energy stored in electric field? Let's say due to some particle process, an electron is created at time $t >0$. And from this moment on, the electric field will start to propagate to infinity at the speed of $c$. But we know that the energy stored in the electric field is proportional to the volume integral of $E^2$. Then wouldn't this mean more energy is being stored in the field as time passes? How is the total energy conserved and what offsets the continual increase in stored field energy?
No, energy conservation is not violated, because energy is required to create an electron in the first place. Moreover, in postulating an electron created without specifying any source, you have violated the law of conservation of charge: in any real process the charge must come from somewhere, e.g. the electron is created together with a positron, and the far fields of such a pair largely cancel.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/527594", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Does friction do work or dissipate heat? I know there are a bunch of similar questions but I read through them all and they don't answer my question. Let's say I give a box on a floor an initial "kick" of force such that it has kinetic energy $KE$. Due to friction between the box and the floor, the box will slide to a halt. This means the friction must supply work equal and opposite to the object's energy: $W = -KE$. However, we know that friction is an irreversible process. This means there is an entropy increase $S > 0$. But according to the classical definition of entropy, $S = \frac{Q}{T}$. Since work does not appear in this equation, this would imply there had to be a heat transfer at some point, but where? Is the friction force also generating heat?
This is an answer to the original title: Is friction work or heat? Neither work nor heat. Friction is a force: "Friction is the force resisting the relative motion of solid surfaces, fluid layers, and material elements sliding against each other." There are several types of friction. In physics one has to be accurate in the use of terms: the units are different for force and for work, while work and heat both have the units of energy. The work done against friction ends up, through the electromagnetic interactions at the surfaces (including some radiation), as heat in the solid lattices.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/527671", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Why is the law of conservation of angular momentum (seemingly) being violated over here? Description of the system Assume two point masses, one at the point $C$ and the other on the circumference of a circle of radius $R$. They attract one another gravitationally and no external forces act on them. The point mass at $C$ is so massive that it sits at the COM of the two-particle system, and the COM is at rest. The lighter point mass revolves about the COM with angular velocity $\omega$. We fix our coordinate origin at one point on the circumference and find the angular momentum of both particles from the following formula: $$ \boldsymbol {\ell} = \mathbf r \times \mathbf p$$ Question Clearly the magnitude of the angular momentum $\ell _{\alpha}$ of the particle on the circumference is given as follows: $$\ell _{\alpha} = Rp\sin (\omega t) \tag 1 $$ Whereas that of the one at the centre is $0$. This means the total angular momentum of the system is: $$L = Rp\sin (\omega t)$$ This is a time-dependent equation, meaning that the angular momentum is variable, which should not be the case as this system is an isolated one. I think that since the law of conservation of angular momentum cannot be violated, there must be something wrong with the method / conclusion. So

* What am I doing wrong here that leads to this conclusion?
* Is it possible to show mathematically that angular momentum is conserved?

Please don't skip this section and then later tell me that angular momentum here is $\ell = rp$. Derivation of the Eq.
(1) The magnitude of the angular momentum about the point $O'$ is given by $$\ell _{\alpha} = r(t) p \sin \phi$$ Here $$\theta (t) = \omega t $$ Clearly (via the sum of the interior angles of the triangle) $$ \alpha = \frac {\pi}{2} - \frac {\theta (t)}{2}$$ Therefore $$ \phi = \frac {\pi}{2} + \alpha = \pi - \frac {\theta (t)}{2}$$ Therefore $$\boxed {\begin {align} \ell _{\alpha} & = r(t) p \sin \left (\pi - \frac {\theta (t)}{2} \right ) \\ & = r(t) p \sin \left ( \frac {\theta (t)}{2}\right)\end {align}}$$ Now (using the law of sines) $$\frac {R}{\sin \alpha} = \frac {r(t)}{\sin \theta (t)}$$ Therefore $$r(t) = 2R \cos \left ( \frac {\theta (t) }{2} \right) $$ Now substituting this into the equation for $\ell$ we get $$\Rightarrow \ell _{\alpha} = Rp \left (2 \cos \left ( \frac {\theta (t) }{2} \right) \sin \left ( \frac {\theta (t)}{2}\right) \right) $$ $$ \ell _{\alpha} = Rp \sin \theta (t) $$ Substituting $\theta (t) = \omega t $ $$\boxed { \ell _{\alpha} = Rp \sin {\omega t}}$$
For circular motion about the centre, ${\bf r} \times {\bf p}$ has magnitude $rp$, as these are perpendicular vectors and $\sin 90^\circ=1$. So the angular momentum about the centre is constant, hence conserved: there is no torque with respect to the centre of the circle. The angular momentum of the orbiting particle alone is indeed not conserved with respect to any other origin. The reason is that there is a torque ${\bf r}\times {\bf f}$ with respect to any point other than the centre, as ${\bf r}$ is then no longer parallel to ${\bf f}$. This torque is equal and opposite to the torque exerted by the small mass on the big one. The *total* angular momentum is conserved, regardless of the choice of origin.
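One can verify numerically that, about an off-centre origin, $\mathrm dL_z/\mathrm dt$ of the orbiting particle equals the torque of the central force about that origin, so the time dependence is exactly accounted for by that torque (a sketch in units $m=R=\omega=1$, with the origin placed at the particle's initial position on the circumference):

```python
import math

m, R, w = 1.0, 1.0, 1.0
x0, y0 = R, 0.0                 # angular-momentum origin on the circumference

def L_z(t):
    """z-component of the orbiting particle's angular momentum about (x0, y0)."""
    x, y = R * math.cos(w * t), R * math.sin(w * t)
    px, py = -m * R * w * math.sin(w * t), m * R * w * math.cos(w * t)
    return (x - x0) * py - (y - y0) * px

def torque_z(t):
    """z-component of the torque of the central (centripetal) force about (x0, y0)."""
    x, y = R * math.cos(w * t), R * math.sin(w * t)
    Fx, Fy = -m * w * w * x, -m * w * w * y
    return (x - x0) * Fy - (y - y0) * Fx

t = 0.7
dLdt = (L_z(t + 1e-6) - L_z(t - 1e-6)) / 2e-6    # numerical derivative
assert abs(dLdt - torque_z(t)) < 1e-6            # dL/dt = torque about this origin
assert abs(L_z(math.pi / 2) - L_z(0.0)) > 0.5    # L about this origin is NOT constant
```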
{ "language": "en", "url": "https://physics.stackexchange.com/questions/527892", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 7, "answer_id": 6 }
Conduction band and free electron I have learnt that at room temperature there are some free electrons (not bound to any nucleus) in a conductor, and when an electric field is applied they form an electric current. I am quite comfortable with this theory, but then I was introduced to band theory, which talks of valence and conduction bands, and it is written that when electrons can jump from the valence to the conduction band they conduct electricity. I cannot get what these bands are and why that jump causes current flow. I think the energy levels of the valence electrons of all the atoms in the crystal form the valence band (correct me if wrong), but I have no idea about the conduction band. Is it the allowed energy levels of the free electrons present in the crystal, or something else? Please clarify.
Solids contain a huge number of atoms packed closely together. An isolated atom has a discrete set of electronic energy levels; when the atoms are brought together, their outermost electronic levels overlap and spread into energy bands, so as to preserve the Pauli exclusion principle. When the distance between atoms approaches the equilibrium spacing, this band splits into two bands separated by an energy gap Eg. The upper band is called the conduction band, and the lower one is called the valence band. Thus, apart from the low-lying and tightly bound "core" levels, the crystal has two bands of available energy levels separated by an energy gap Eg wide, which contains no allowed energy levels for electrons to occupy. This gap is sometimes called the "forbidden band". https://en.m.wikipedia.org/wiki/File:Solid_state_electronic_band_structure.svg
{ "language": "en", "url": "https://physics.stackexchange.com/questions/527985", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
What makes a wheel spin? I don't fully grasp what makes a wheel much easier to move than to push a solid block. The pressure at the point of contact between a wheel and the ground must be pretty enormous compared to the pressure created by a block of same material and mass as the wheel. Friction is defined as the product of normal force exerted on the object and the coefficient of friction between the object and ground. So I assume that for two identical objects of infinite masses this parameter does not make any difference. Given these circumstances, I don't understand the physics behind it. Am I missing some other attributes of a wheel that makes it easier to move?
Some of the other answers are correct, but at too high a level of physics to be appropriate to the question. This needs a low-tech answer. First, consider how much easier it is to walk than it would be to drag yourself across the ground. The reason is that you're not dragging anything - you lift a foot up, you move it forwards, you put it down, and repeat. It's easy to move the foot in the air, and the foot on the ground is fixed in place, not dragging. Every point around the outside of a wheel is like a foot - with a vast number of feet around the outside of the wheel. When the wheel is not moving, it's like it's standing on the one foot at the bottom. When the wheel rolls, a new foot is coming down in front while the old foot goes up in back. The feet don't drag on the ground, each foot just lifts up and goes over the wheel and comes down in front. Going up and down doesn't drag. Theoretically there would be zero friction for a perfect wheel rolling on perfect ground, because nothing drags. However the wheel and ground aren't perfect - there are tiny bumps and the surfaces bend, so there is still a small amount of friction. This is called rolling friction. Rolling friction is tiny compared to dragging friction.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/528079", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25", "answer_count": 14, "answer_id": 4 }
How do I find the equilibrium points in an electric field created by three or more charges? I know that for two charges $q_1$ and $q_2$ of the same polarity, the neutral point will be at the internal section of the segment by the ratio of $\sqrt q_1$:$\sqrt q_2$, and if they are of opposite polarity the point will be at the external division by the same ratio. Now, for 3 charges I tried to compare this with the center of mass formula. For two mass $m_1$ and $m_2$ the center of mass will be found at the internal section with the ratio of $\frac{1}{m_1}:\frac{1}{m_2}$ So I came to the formula of point $(x,y)$ being the neutral point : $$(x,y) = (\frac{\sum_{i=0}^n\frac{x_i}{\sqrt q_i}}{\sum_{i=0}^n\frac{1}{\sqrt q_i}}, \frac{\sum_{i=0}^n\frac{y_i}{\sqrt q_i}}{\sum_{i=0}^n\frac{1}{\sqrt q_i}})$$ But it did not work. I did not find any good resource for finding the neutral point for three or more charges. So here I am.
There is in general no simple closed formula for the positions of the neutral points of a system of 3 or more charges. The centre-of-mass formula does not apply because the neutral points have no connection with the centre of mass. There can be several neutral points but only one centre of mass. If charges $Q_i$ are placed at points ($x_i, y_i, z_i$) then the resultant electric field at a point P ($x_0, y_0, z_0$) is zero when the sums of the electric field components at P in the $x, y$ and $z$ directions are separately zero: $$\sum \frac{Q_i X_i}{R_i^3}=\sum\frac{Q_i Y_i}{R_i^3}=\sum\frac{Q_i Z_i}{R_i^3}=0$$ where $X_i=x_i-x_0, Y_i=y_i-y_0, Z_i=z_i-z_0, R_i^2=X_i^2+Y_i^2+Z_i^2$. In general the above triplet of equations will be coupled and transcendental, having only numerical solutions. For arrangements which have some symmetry it may be possible to simplify the calculation by pairing charges about a single line of symmetry. This will eliminate two co-ordinates, reducing the system to a single-variable polynomial equation of degree 8. For example, suppose identical charges $Q$ are placed at the vertices A and B, and charges $2Q$ at vertices C and D, of a rectangle ABCD of dimensions $3\times 2$. Consider the electric field at a point P which lies on the perpendicular bisector of AB and CD. The resultant field $\mathbf{E_1}=2kQ\frac{x}{r^3}$ due to charges AB and the field $\mathbf{E_2}=2k(2Q)\frac{y}{s^3}$ due to charges CD point in opposite directions along the same straight line. Here $y=3-x, r^2=1+x^2, s^2=1+y^2$. The resultant electric field at P will be zero when $$\frac{x}{(1+x^2)^{3/2}}=\frac{2y}{(1+y^2)^{3/2}}$$ This must be solved numerically. Wolfram Alpha gives the roots $x \approx 0.236168, 0.990072, 2.95105$. A further 2 solutions would be found from considering off-axis points. This is most easily seen from symmetry for 4 charges arranged in a square, then deforming the square.
The following diagram illustrates the variation of the electric fields $E_1, E_2$ due to charges AB, CD respectively when the charges are the same on each pair. The curves intersect at 3 points, which are the null points. For $n$ charges arranged in a (possibly irregular) convex polygon there will be $n+1$ neutral points all lying within the polygon. [reference or proof required] All neutral points are points of unstable equilibrium. This can be seen by applying Gauss' Law to a small volume surrounding the neutral point. The Gaussian surface contains no charge so the total flux across its surface is zero; it must have as many lines of flux leaving as there are entering. Stable points must have all field lines converging on them from every direction. This is incompatible with the total flux being zero. Related questions : Neutral points in a system of charges on the vertices of a square Need a more efficient way to find where the $E$ field is zero How do I choose the right value of $r$ to find where the electric field is zero? See also : number of null points (author unknown)
{ "language": "en", "url": "https://physics.stackexchange.com/questions/528301", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Does polarisation matter in double slit experiment? So I am studying diffraction, in particular the diffraction of electromagnetic waves using a double-slit setup. However, there seems to be no mention of the polarisation of the electromagnetic waves, and I wondered whether the experiment would differ if the polarisation were different? I also have the same question regarding the following setup using a conducting plate. Surely polarisation would result in a differing pattern of interference?
Polarisation can matter in real two-slit experiments. If metal is used for the slits, the boundary conditions are different for perpendicular and parallel polarisation. The difference becomes important when the slit width and separation are comparable to the wavelength, so for large diffraction angles.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/528592", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Unique characterization of Ideal gas In Thermodynamic state - Wikipedia, it defines a thermodynamic state as: A thermodynamic state of a system is its condition at a specific time, that is fully identified by values of a suitable set of parameters known as state variables. In the part explaining state functions, it says In the most commonly cited simple example, an ideal gas, the thermodynamic variables would be any three variables out of the following four: mole number, pressure, temperature, and volume. Thus the thermodynamic state would range over a three-dimensional state space. I would think this is not a result in thermodynamics. Is this just an assumption made about ideal gas, or can it be derived from considering the model of the ideal gas statistically?
The ideal gas law reads $$ p V = n R T $$ where $R$ is a constant or alternatively, $p V = N k_B T$, where $k_B$ is a constant. Thus, if three of the four variables are given, you can use this equation to determine the fourth variable. How to derive this law statistically, was discussed here.
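For example, solving the ideal gas law for the pressure once the other three variables are given (the values below are 1 mol at roughly standard conditions):

```python
R = 8.314462618  # J / (mol K), molar gas constant

def pressure(n, V, T):
    # solve p V = n R T for the fourth variable, p
    return n * R * T / V

p = pressure(n=1.0, V=0.0224, T=273.15)  # 1 mol in 22.4 L at 0 °C
print(p)  # ≈ 1.01e5 Pa, i.e. about 1 atm
```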
{ "language": "en", "url": "https://physics.stackexchange.com/questions/528706", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Three-Dimensional Picture from 2D tiles The CMS Silicon Pixel detector can create three-dimensional pictures of a particle's trajectory. It specifically says that "because the detector is made of 2D tiles, rather than strips, and has a number of layers, we can create a three-dimensional picture." Why the emphasis on "rather than strips"? Link: http://cms.cern/detector/identifying-tracks/silicon-pixels
To get a trajectory in space you need (x, y, z) points. The two-dimensional panels provide the (x, y) of a hit (with an error set by the tile width), and the layers provide the z variable from their z location. A strip has only one dimension in the panel plane, so it cannot give two coordinates. From the description of strip detectors, a complicated layer geometry would be needed to get the extra variable, introducing unnecessary errors.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/528823", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Solution to infinite particle creation in EM by classical sources In this question: Peskin and Schroeder "Particle Creation by a Classical Source" particle creation by a classical source is discussed. Doesn't this mean that a static constant source would create infinite energy? I heard that QFT solves this problem by quantizing the source as well. How does this work?
A classical charge $|Q|\gg e$ (source) is neutralized gradually by the oppositely charged particles of the created pairs. So the number of pairs created is less than $|Q/e|$.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/529060", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How does the current remain the same in a circuit? I understand when we say current, we mean charge (protons/electrons) passing past a point per second. And the charges have energy due to the e.m.f. of the power supply. Now tell me, if a lamp has resistance and you hook it in the circuit, how will the current stay the same? The charges obviously lose energy in the lamp and so become SLOWER, which should mean current decreases, right? [Edit] All answers explained a bit of everything, so it was hard to choose one. If YOU are looking for an answer, please check the others too, in case the accepted one doesn't answer your question.
@Farcher answer, particularly the last paragraph, sums it up perfectly. The positive work done by the electric field on the charge giving the charge kinetic energy equals the negative work done by the lattice structure that takes away the kinetic energy of the charge increasing the internal energy of the structure. Ultimately, the energy is dissipated as light and heat to the surroundings (a.k.a resistance heating). A mechanical analog is pushing an object at constant velocity on a surface with friction. The positive work done in pushing the box between two points exactly equals the negative friction work for a net work of zero and no change in the kinetic energy (velocity) of the box. The result is an increase in the temperature at the interface and eventual heat transfer to the surroundings. Although not exact, you can think of the external force as analogous to the electric field force, the box analogous to the charge, the velocity analogous to current, and the surface with friction analogous to electrical resistance. Hope this helps.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/529224", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 9, "answer_id": 6 }
What are the eigenfunctions of Hamiltonian of a free particle? From my actual understanding of quantum physics observable are operators, when we measure some observable we will find an eigenvalue of such operator, and the system will collapse in the eigenstate. The Hamiltonian is the operator related to energy, just like in classical mechanics, and Hamiltonian eigenvalues are, under some assumptions, the energy of a system. When we have a free particle, the Hamiltonian is: $$H=-\frac{h^{2}}{2m}\frac{\partial^2 }{\partial x^2}$$ So the eigenfunctions of the Hamiltonian should be the solutions to the second order linear equation: $$-\frac{h^{2}}{2m}\frac{\partial^2 \psi}{\partial x^2} = E\psi$$ The solutions are a linear combination of $e^{ikx}$ and $e^{-ikx}$, and I'd expect them to be something like $$\psi(x)=Ae^{ikx}+Be^{-ikx}$$ But different book I saw just give $$\psi(x)=Ae^{\pm ikx}$$ Which looks like mine just half of the solutions I thought. Am I missing something?
$ψ(x)=Ae^{ikx}+Be^{−ikx}$ is the correct general eigenfunction for a given eigenvalue $E$. But there is the boundary condition that $ψ(x)$ must go to zero at plus and minus infinity. To satisfy it, $E$ cannot take a single fixed value; one must superpose solutions with energies from a continuous interval $[E_1,E_2]$, so $ψ(x)$ must be a Fourier integral (a wave packet).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/529364", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is it possible to build a quantum logic circuit that has feedback loop? Using classic logic gates, it is possible to make circuits with loops, i.e. with feedback. I wonder whether that is still the case using quantum gates. I do not mean sequential circuits that are synchronized using clock, but simple logic circuits with feedback loops. Thanks.
Using the output of a gate as input of another, in the quantum case, amounts to applying something like a CNOT operation. Unless you want classical feedback (e.g. the choice of the next gate being conditional on a previous measurement result), in which case the evolution ceases to be unitary (which doesn't mean that it cannot be done: this is common e.g. in one-way quantum computation schemes). If instead you refer to the output being fed to the input, the question is a bit ill-posed, because when you write a quantum circuit the "wires" don't really refer to the information flowing between spatially separated gates, but rather to the information flow in time. In other words, if you were to pass your information carrier twice through the same physical device implementing some operation, you would still probably write the corresponding circuit in the normal sequential way. If you are just asking whether it is possible to pass an information carrier through the same "quantum device" more than once, then sure this is possible. For example, if you encode a qubit in the polarisation of a photon, you can have a feedback loop with the photon passing through the same device multiple times using a fiber optic cable (and likely something to implement further operations at each loop).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/529498", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Graph of periodic motion due to wave A wave is a disturbance in a medium; due to this disturbance the particles in the medium oscillate. Because of this oscillation we say that the wave is sinusoidal, since the motion of the particle is periodic. So we REPRESENT the motion as a sine wave. Periodic motion can be represented by sine graphs. So, instead of representing it with a sine graph, can we represent it as a square (instead of curved) periodic graph? Ps: also please confirm if the first paragraph is correct (kind of confused about that too).
Re. Due to this oscillation we say that the wave is sinusoidal because the motion of the particle is periodic Sinusoidal motion is not required for oscillation. For example square waves or triangular waves oscillate. So if instead of representing in sine graph can we represent it as square (instead of curves) periodic graph ?. YES.
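To make the "YES" concrete: a square periodic graph can itself be built up from sine waves. A sketch using a truncated Fourier series of a unit square wave:

```python
import math

def square_wave_partial(x, n_terms=1000):
    # truncated Fourier series of a unit square wave:
    # (4/pi) * sum_k sin((2k+1)x) / (2k+1)
    return 4 / math.pi * sum(math.sin((2 * k + 1) * x) / (2 * k + 1)
                             for k in range(n_terms))

print(square_wave_partial(math.pi / 2))   # ≈ +1 (flat top of the square wave)
print(square_wave_partial(-math.pi / 2))  # ≈ -1 (flat bottom)
```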
{ "language": "en", "url": "https://physics.stackexchange.com/questions/529697", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Relation between grand potential and expected number of particles in an energy state? During the review of my lecture notes I stumbled upon an equation that gives me some trouble understanding. The big task that motivates the following is to express the entropy $S$ with the expected number of particles in an energy state $\langle n_i \rangle$. Since the entropy also relates with the grand potential we are looking for an expression that gives a relation between the grand Potential $\Omega$ and $\langle n_i \rangle$ first. An expression for $\langle n_i \rangle$ is for example $$\langle n_i \rangle = \frac{1}{e^{\beta(E_i-\mu)}+\gamma} \quad \text{with} \quad \gamma= \begin{cases} +1,\,& \text{Fermi-Dirac}\\ -1,\,& \text{Bose-Einstein}\\ 0^+,\,& \text{Maxwell-Boltzmann} \end{cases}.$$ Now my notes make the equation, where I can't understand the second equality $$\Omega = -\frac{1}{\beta}\ln \mathcal{Z}_G \stackrel{?}{=} \sum_i (E_i-\mu)\langle n_i\rangle.$$ I've seen an expression for $\ln \mathcal{Z}_G$ that looks like $$\ln \mathcal{Z}_G = \frac{1}{\gamma}\sum_i \ln\left[ 1+ \gamma e^{-\beta(E_i-\mu)}\right],$$ but I don't know if this can help me in any way. I tried to find the relation by doing some algebra, but I never seem to get to the equality $-\frac{1}{\beta}\ln \mathcal{Z}_G = \sum_i (E_i-\mu)\langle n_i\rangle$. I had the idea that maybe one needs to do some kind of approximation, but then again I am clueless what and how. It would be great if someone could show how I get from the LHS to the RHS
The trick is to use the derivative of a logarithm. Let's do the calculation for the FD case: $$-\frac{1}{\beta}\ln \mathcal{Z}_G \stackrel{?}{=} \sum_i (E_i-\mu)\langle n_i\rangle$$ On the right-hand side of the equation you have a structure of the form $\frac{f'(x)}{f(x)}=\frac{d(\ln f(x))}{dx}$. Remember the identity for $\langle n_i \rangle$ $$\langle n_i \rangle=-\frac{1}{\beta}\frac{\partial \ln Z_G}{\partial E_i}$$ and you will get the left side.
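The identity $\langle n_i \rangle=-\frac{1}{\beta}\frac{\partial \ln Z_G}{\partial E_i}$ can be checked numerically for a single Fermi-Dirac mode (the values of $\beta$, $\mu$, $E$ below are arbitrary test values):

```python
import math

beta, mu, E = 2.0, 0.5, 1.3  # arbitrary test values

def ln_Z(E):
    # single-mode FD grand partition function: ln(1 + e^{-beta (E - mu)})
    return math.log(1 + math.exp(-beta * (E - mu)))

# central finite difference for d(ln Z)/dE, then multiply by -1/beta
h = 1e-6
n_from_derivative = -(ln_Z(E + h) - ln_Z(E - h)) / (2 * h * beta)

n_fd = 1 / (math.exp(beta * (E - mu)) + 1)  # Fermi-Dirac occupation
print(n_from_derivative, n_fd)  # the two agree
```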
{ "language": "en", "url": "https://physics.stackexchange.com/questions/529844", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Is the commutation relation in quantum mechanics right? $$[ \hat X, \hat P_x\hat F(x)\hat P_x] =\frac{\hbar}{i}[\hat F(x)\hat P_x+ \hat P_x\hat F(x)]$$ It's given in the book "Basic Quantum Mechanics" by R.L. White. Maybe I am doing something wrong. What I am getting is: $$[ \hat X, \hat P_x\hat F(x)\hat P_x] =-\frac{\hbar}{i} [\hat P_x\hat F(x)+2\hat F(x)\hat P_x]$$ This is how I was doing: \begin{align} [ \hat X, \hat P_x\hat F_x\hat P_x]\psi & = \hat X \hat P_x\hat F_x\hat P_x\psi - \hat P_x\hat F_x\hat P_x \hat X \psi \\ &= \cdots - \frac{\hbar^2}{i^2} \frac{\partial}{\partial x}(F_x\psi+F_xx \frac{\partial \psi}{\partial x}) \\ & = - \frac{\hbar^2}{i^2} (F'_x\psi+2F_x \frac{\partial \psi}{\partial x}) \\ &= -\frac{\hbar}{i}(\frac{\hbar}{i}\frac{\partial F_x}{\partial x}\psi+ 2F_x \frac{\hbar}{i}\frac{\partial \psi}{\partial x}) \\ &= -\frac{\hbar}{i} (\hat P_x\hat F_x \psi + 2\hat F_x \hat P_x \psi) \\ & = -\frac{\hbar}{i}[\hat P_x\hat F(x)+2\hat F(x)\hat P_x]\psi \end{align}
I think you can try using the identity $[\hat{A},\hat{B}\hat{C}\hat{D}]=[\hat{A},\hat{B}]\hat{C}\hat{D}+\hat{B}[\hat{A},\hat{C}]\hat{D}+\hat{B}\hat{C}[\hat{A},\hat{D}]$. Then, according to your question, we have $$ \begin{align*} [\hat{X},\hat{P}_{X}\hat{F}(\hat{X})\hat{P}_{X}]\Psi&=[\hat{X},\hat{P}_{X}]\hat{F}(\hat{X})\hat{P}_{X}\Psi +\hat{P}_{X}[\hat{X},\hat{F}(\hat{X})]\hat{P}_{X}\Psi+\cdot\cdot\cdot \end{align*} $$ Then you can use some of the known commutators that you can find in standard quantum mechanics textbooks. As for commutators with $\hat{F}(\hat{X})$, you need to Taylor expand it as a function of operators and find a pattern.
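The identity $[\hat{A},\hat{B}\hat{C}\hat{D}]=[\hat{A},\hat{B}]\hat{C}\hat{D}+\hat{B}[\hat{A},\hat{C}]\hat{D}+\hat{B}\hat{C}[\hat{A},\hat{D}]$ is purely algebraic, so it can be sanity-checked with random matrices (here 2×2, with a hand-rolled matrix product):

```python
import random

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def matadd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def matsub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def comm(A, B):  # [A, B] = AB - BA
    return matsub(matmul(A, B), matmul(B, A))

random.seed(0)
rand_mat = lambda: [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
A, B, C, D = rand_mat(), rand_mat(), rand_mat(), rand_mat()

lhs = comm(A, matmul(B, matmul(C, D)))
rhs = matadd(matadd(matmul(comm(A, B), matmul(C, D)),
                    matmul(B, matmul(comm(A, C), D))),
             matmul(matmul(B, C), comm(A, D)))
# lhs and rhs agree entry by entry
```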
{ "language": "en", "url": "https://physics.stackexchange.com/questions/529937", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If different wavelengths of light have different speeds, how can they move together as a white light in air? My question is with respect to Newton's experiment of using two identical glass prisms [in which one is inverted with respect to the first one]. When he allowed all the colors of the spectrum to pass through the second prism, he found a beam of white emerging from the other side of the second prism. And I know that refraction is due to different speeds of different wavelengths of light. So, How can those colors recombine to form a beam of white light (since different colors have different speeds)?
The speed of light in a vacuum is constant for all wavelengths. In other media (like glass), it can vary. The speed of light in a particular medium doesn't depend on its history, only on what the medium is. Questions such as " Will it speed back up to the speed of light?" don't make sense - the speed of light depends only on what it's currently traveling in. The different wavelengths do move at different speeds in air, but the difference is so small that white light remains "white" - all the colors move at effectively the same speed. The colors in that figure are combining to white light because they have traveled the same path length. As you can see, the purple beam traveled the least distance in the first prism, but it also travels the furthest distance in the second prism. The red beam does the reverse - it traveled the furthest distance in the first prism and the least distance in the second. Added together the two colors traveled the same path length, so they recombine.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/530051", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why isn’t the center of the Earth cold? If the pressure of the Earth is keeping the inner core solid, keeping it rigid to take up the least space, and temperature is dependent on how much the atoms are moving, why isn’t the inner core cold? If the pressure is so high that it’s forcing the inner core to be solid then the atoms can’t move around and thus they can’t have temperature.
I'll answer by analogy with a spring: * *Temperature <=> Energy in vibration of spring *Pressure <=> Compression of spring *Phase state (solid or liquid) <=> Movement of the spring Temperature is basically the energy of the moving particles. If you take our analogous spring and have no weight on it (no pressure) it can bounce around with great movement as a liquid would. By virtue of the momentum of how fast the spring can travel in this state, it has high "temperature". If you put a heavy weight on it (and thus apply pressure), the force from the spring becomes very high (with the same amount of energy), but obviously the distance it covers is a lot less. The energy in the spring is the same (equivalent to being the same temperature but in a solid state). End analogy, the spring can have the same amount of energy in it's vibration, but one case can be constrained by weight (high force, low movement) and is free (low force, high movement).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/530369", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 7, "answer_id": 1 }
Why is the von Neumann entropy maximised when $p_n=1/N$? For example, $\hat \rho=\sum_{\substack{n}} p_n |E_n \rangle \langle E_n|$ is a stationary mixed state of a given quantum system, where $|E_n \rangle$ are eigenstates of the Hamiltonian $\hat H$ with the eigenvalues $E_n$. In every book that I read on quantum information, it says that $p_n=1/N$ for the maximal value of the von Neumann entropy for the given mean energy. I don't see how we could end up with this value. And I would like to understand how to prove that. Any references to books/websites where I can read about the derivation is also much appreciated. Any help is much appreciated.
I'm assuming you meant to say that $p_n=1/N$ maximises the von Neumann entropy without constraining the mean energy (otherwise the statement is not true). One way to see it is using Lagrange multipliers. You want to maximise $S(\mathbf p)\equiv-\sum_i p_i \log p_i$ in the hyperplane $\sum_i p_i=1$. For this to be the case, $\nabla_{\mathbf p} S$ needs to be proportional to $\nabla_{\mathbf p}(\sum_i p_i-1)=\sum_i \hat{\mathbf e}_i$. This means that, for some $\lambda$, you have $$ - (\log p_i + 1) = \lambda, $$ and thus $p_i=e^{-1-\lambda}$. Imposing $\sum_i p_i=1$ then gives $N=e^{1+\lambda}$. We conclude that $p_i = 1/N$ for all $i$. The case constraining the average energy can also be worked out, but now you are maximising $S(\mathbf p)$ in the intersection of the hyperplanes $\sum_i p_i=1$ and $\sum_i p_i E_i=E$. A variation of Lagrange multipliers can handle this more general case (and you get a distribution of the type $p_n\simeq e^{-\beta E_n}$).
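A quick numerical sanity check: the uniform distribution attains $S=\ln N$, and any normalised perturbation of it has lower entropy (the perturbed distribution below is an arbitrary example):

```python
import math

def entropy(p):
    # entropy of a diagonal (classical) probability distribution
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

N = 4
uniform = [1.0 / N] * N
perturbed = [0.30, 0.20, 0.27, 0.23]  # arbitrary normalised perturbation of 1/4 each

print(entropy(uniform))    # ln 4 ≈ 1.3863
print(entropy(perturbed))  # strictly smaller
```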
{ "language": "en", "url": "https://physics.stackexchange.com/questions/530519", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How strong would the electromagnetic field of the earth and the planets would have to be, in order to mimic the effects of gravity? How strong would the combined forces of electromagnetism on the earth and planets need to be, to mimic, and therefore, replace gravity?
The gravitational force between the Earth and Sun is easily calculated to be about $3.5\times10^{22}$ newtons. If the Earth and the Sun had opposite charges of magnitude $3.0\times10^{17}$ coulombs, their electrostatic attraction could "replace" this gravitational attraction. (Unequal charges would also work as long as their product produced the same result.) This would require about $1.9\times10^{36}$ excess electrons on the Earth and the same number of excess protons on the Sun, or vice versa. The Earth is estimated to have between $10^{49}$ and $10^{50}$ atoms, so there would need to be one extra electron or proton on the Earth for about every ten trillion atoms. The voltage at the surface of the Earth would be $4.2\times10^{20}$ volts and the electric field strength would be $6.6\times10^{13}$ volts per meter. This would be a problem for atomic structure as this kind of field strength would ionize atoms. These calculations are for amusement. They are not an endorsement of the silly Electric Universe "theory".
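The numbers above are easy to reproduce (physical constants rounded to four significant figures):

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2, gravitational constant
k = 8.988e9          # N m^2 C^-2, Coulomb constant
e = 1.602e-19        # C, elementary charge
M_sun = 1.989e30     # kg
M_earth = 5.972e24   # kg
r = 1.496e11         # m (1 AU)

F_grav = G * M_sun * M_earth / r**2   # ≈ 3.5e22 N
Q = math.sqrt(F_grav * r**2 / k)      # equal charges giving the same attraction
n_particles = Q / e                   # ≈ 1.9e36 excess electrons/protons
print(F_grav, Q, n_particles)
```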
{ "language": "en", "url": "https://physics.stackexchange.com/questions/531140", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
Difference between adding Force Vectors and adding Velocity Vectors Consider the following two situations. Case I I am able to solve this question. The answer to this question is = 5 m/s. I have attached the solution in the end. This is not the doubt. Kindly read further to understand the theoretical doubt. Case II Now, in Case I, suppose we replace the velocities by two Forces F1=5N and F2=3N, applied on the body in the same directions as in Case I and we are supposed to calculate the net force on the body, and this modified situation, we call Case II. Now, in Case II, we can apply the formula for net force given by the Parallelogram Law of Vectors as shown below and that gives the right answer in Case II. But, interestingly, when the same knowledge of Parallelogram Law is applied in Case I, it doesn't give the right answer. According to my textbook applying Parallelogram Law in Case I, like Case II, is wrong. I do not understand the reason behind it. Both, Force and Velocity are vectors and Parallelogram Law of Vectors, as I understand, should be applicable for all the vectors, so why it is the case that Parallelogram Law of Vector Addition gives the right answer in Case II, but NOT in Case I. Why are we treating the Force Vector and Velocity Vector differently? I would appreciate both - Mathematical and Intuitive Understanding. The solution to Case I. Let's say the net velocity is V then the components of this velocity along V1 and V2 should be equal to V1 and V2 itself, due to string conservation and hence following is the solution. "Theta" is the angle net velocity makes with V1 (Let's say).
The velocities add exactly like the forces, here to about 7.21 m/s, not to 5 m/s. The solution you give is wrong. Do you really believe that the net velocity has exactly the size and direction of v1? The equation which gives you sin(theta)=0 is wrong. Maybe just add the two vectors graphically to see it?
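A sketch of the parallelogram law applied to the velocity magnitudes. The angle between the two velocities is not stated in the text; cos(theta) = 0.6 (theta ≈ 53.13°) is the assumed value that reproduces the quoted ≈7.21 m/s:

```python
import math

def resultant(a, b, theta):
    # parallelogram law: |R| = sqrt(a^2 + b^2 + 2 a b cos(theta))
    return math.sqrt(a * a + b * b + 2 * a * b * math.cos(theta))

# assumed angle between the 5 m/s and 3 m/s velocities
v = resultant(5.0, 3.0, math.acos(0.6))
print(round(v, 2))  # 7.21
```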
{ "language": "en", "url": "https://physics.stackexchange.com/questions/531260", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
Direction of average acceleration in circular motion I know that the instantaneous acceleration is always directed towards the center of the circle.But what about average acceleration. In the above figure my book says place change in velocity along the line that bisects angle $r$ and $r'$ and observe that it is directed towards centre. my question is that is there any rule that we should place it along the angle bisector between the two given points to get average acceleration direction. Any help will be appreciated
Since the average acceleration is along Δv (a=Δv/Δt), the average acceleration is perpendicular to Δr. We already know that, since the path is circular, v is perpendicular to r, and so is v' to r', according to the figure given by you. (Since the velocity vectors v and v' are always perpendicular to the position vectors, the angle between them is also ΔΘ.) Note that the book stated that If we place Δv on the line that bisects the angle between r and r', we see that it is directed towards the centre of the circle. It was just a verification of the statement already quoted above. A perpendicular from the centre bisects the chord, and since PCP' is an isosceles triangle, it bisects the angle between r and r' (from the angle bisector theorem). Obviously it will be directed towards the centre considering the geometry of the figure, or understand it this way: the perpendicular bisector of a chord passes through the centre of the circle. It is a very fundamental theorem which has many applications in various fields of Physics and Mathematics. Actually, it is not a rule, it is just an approach to introduce this topic at the beginner level, assuming that you know basic geometry, up to 10th grade. In a nutshell, since Δv is perpendicular to Δr, it passes through the centre of the circle. And so, the average acceleration is directed towards the centre. Note that the average acceleration only changes into instantaneous acceleration if we take the limit (Δt->0). So its direction is towards the centre. In fig(c), we are only approximating the situation on an infinitesimally small scale, but the overall concept remains the same. P.S.: Read my comments on your question.
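One can verify numerically that Δv is perpendicular to Δr for any two points on a circular path at constant speed (the radius, speed and angles below are arbitrary choices):

```python
import math

R, speed = 2.0, 3.0      # arbitrary radius and (constant) speed
t1, t2 = 0.7, 1.1        # two arbitrary angular positions (radians)

# position and velocity on the circle (v is always tangent, |v| constant)
r1 = (R * math.cos(t1), R * math.sin(t1))
r2 = (R * math.cos(t2), R * math.sin(t2))
v1 = (-speed * math.sin(t1), speed * math.cos(t1))
v2 = (-speed * math.sin(t2), speed * math.cos(t2))

dr = (r2[0] - r1[0], r2[1] - r1[1])
dv = (v2[0] - v1[0], v2[1] - v1[1])
dot = dr[0] * dv[0] + dr[1] * dv[1]
print(dot)  # 0 up to rounding: the average acceleration is perpendicular to Δr
```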
{ "language": "en", "url": "https://physics.stackexchange.com/questions/531369", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Why traditional turbulence theory concerns so much about statistics such as correlations? I have been wondering why the traditional turbulence theory, e.g., Kolmogorov's 1941 theory, concerns so much about things like two-point correlations, structure functions, their scalings, and so forth. I saw somebody says that, IF you know all these statistics, then you know the entire field. So my question is * *What do these statistics actually tell us? How do these statistics imply the spatial structure of the field? *Why the scalings matter so much? *Is this statistical approach adopted from statistical mechanics, field theory, or some other branches of physics?
* *The average two-point correlation tells us how large the phenomena appearing in the flow are. Theory actually shows that turbulence quickly becomes a cascade of similar phenomena at smaller and smaller length scales. Turbulence is a mechanism for transferring energy from macro to micro scales that is much more efficient than mere diffusion. By observing correlations you can learn about the energy flow down to the molecular level. *Scaling matters so much because of how similar the phenomena appearing in the turbulent flow are, so only things that are independent of scale can make up such a fractal (self-similar) structure. A dimensional analysis can be used to determine the actual relations. *This statistical approach is a result of the question we are asking: what is actually happening in this flow? Correlation gives us a lot of information about it.
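A minimal illustration of the first point: the two-point (auto)correlation of a synthetic 1D signal reveals the size of the structures in it. Here the "field" is just white noise smoothed over a window of w samples, a stand-in for a signal with correlation length of order w (all numbers are arbitrary):

```python
import random

random.seed(42)
N, w = 4000, 20   # series length and smoothing window (sets the correlation length)

noise = [random.gauss(0, 1) for _ in range(N)]
# smoothing white noise over w samples produces a field correlated over ~w samples
field = [sum(noise[i:i + w]) / w for i in range(N - w)]

def autocorr(x, lag):
    # normalised two-point correlation <x(i) x(i+lag)> of a 1D series
    m = sum(x) / len(x)
    var = sum((xi - m) ** 2 for xi in x) / len(x)
    n = len(x) - lag
    return sum((x[i] - m) * (x[i + lag] - m) for i in range(n)) / (n * var)

print(autocorr(field, 0), autocorr(field, w // 2), autocorr(field, 5 * w))
# ≈ 1, sizeable, ≈ 0: the correlation decays over the structure size w
```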
{ "language": "en", "url": "https://physics.stackexchange.com/questions/531459", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Are the boundary and initial conditions only effecting parameters make turbulence unpredictable? The book that I used to study turbulence states that "in a laboratory experiment initial and boundary conditions can not be fully under control, despite all the effort there will be infinitesimal variations between experiments. Turbulence amplifies these variations and instantaneous velocity will differ from experiment to experiment." What I don't understand is the boundary and initial conditions are the only reason of unpredictability? If we consider ideal experiments that initial and boundary conditions exactly the same, can we say that the variation of instantaneous velocity in time at a specific location will be exactly the same for all experiments?
This is just a long comment. The idealization must include instruments that have infinite precision, in a world without thermal fluctuations. In such a case, yes: if everything is exactly equal (including the position and velocity of every molecule), the predictions will be too. All this assumes a classical world, because quantum effects would make identical evolution impossible, even if you had the same initial and boundary conditions. Also notice that in a chaotic system the evolution is exponentially sensitive to the initial conditions, so the evolutions of the different experiments will agree better at the beginning than at later times. However, I do not think the level of precision needed to predict turbulence even for a few seconds is reachable.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/531588", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is the movement of heated gas via combustion considered "work" and thus a form of mechanical energy? I'm teaching middle school kids about energy, but I got curious for my own education: Does heated gas, as a product of combustion, produce mechanical energy or is this conversion (from chemical to mechanical) too negligible to consider as a significant form of mechanical energy? (Maybe due to the negligible mass of the gas products $\frac12mv^2$) If not, is there a scenario where the combustion in a system can be altered in such a way where mechanical work is done by the gas?
It is chemical energy, and it can be transformed into heat and from there to work, like in an Otto engine. Here the efficiency is limited by the Carnot efficiency. Alternatively, it can be transformed directly into work, like in molecular engines or fuel cells, which in general are more efficient. I might be wrong on this, but I do not think there is an equivalent of the Carnot cycle that limits the efficiency of converting chemical energy into work. It can be theoretically close to 100% according to this source
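For the Otto-engine remark, the Carnot bound is just eta = 1 - T_cold/T_hot; the reservoir temperatures below are illustrative values, not taken from the question:

```python
def carnot_efficiency(T_hot, T_cold):
    # maximum fraction of heat convertible to work between two reservoirs
    # (temperatures in kelvin)
    return 1 - T_cold / T_hot

eta = carnot_efficiency(1200.0, 300.0)  # e.g. combustion gases vs. ambient
print(eta)  # 0.75
```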
{ "language": "en", "url": "https://physics.stackexchange.com/questions/531777", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why do springs have a linear relationship? Why does: F = k*(change in position) Why can't the relationship be quadratic or higher ordered?
Springs do not always follow Hooke's law. Hooke's law is a very good law, and it handles a lot of cases, but it's not The Law. As J.G. points out in his answer, Hooke's law can be seen as an approximation that's good for small changes. As it turns out, for the way springs deform, it's a very good law because springs tend to deform in a "small change" way: every part of the spring deforms just a little. Hooke got his name on the law because enough springs are close enough to this ideal linear behavior that it's useful. All models are wrong; some are useful. There are many cases where Hooke's law doesn't apply. It's very common for suspension springs to not follow Hooke's law. They're designed that way to create a smoother ride while still protecting against bottoming out. If you look at a leaf spring on a truck or a train, they are designed to engage more and more linear spring elements to create an overall behavior which is closer to a square law.
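A toy model of a spring that does not follow Hooke's law: a linear term plus a stiffening cubic term (the coefficients are made up for illustration). For small deflections the force is nearly k·x, so Hooke's law holds; for large deflections it departs strongly:

```python
def spring_force(x, k=100.0, beta=5000.0):
    # hypothetical progressive spring: linear term plus a stiffening cubic term
    return k * x + beta * x**3

# ratio of actual force to the purely linear (Hooke) prediction k*x
ratio_small = spring_force(0.01) / (100.0 * 0.01)  # ≈ 1.005: Hooke works fine
ratio_large = spring_force(0.30) / (100.0 * 0.30)  # 5.5: strongly non-linear
print(ratio_small, ratio_large)
```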
{ "language": "en", "url": "https://physics.stackexchange.com/questions/532430", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Why are Pauli matrices the same in any frame? On page 157 of Schwartz's QFT book, he writes that "$\sigma_i$ do not change under rotations". If so, the changes in $\psi$ and $B$ cancel, so we find that $(\vec{\sigma} \cdot \vec B)\psi$ is rotationally invariant. But why are the Pauli matrices the same in any frame? Any hint or reference would be helpful!
Pauli matrices are fixed sets of numbers; they do not transform under rotations, in contrast to the vector $\vec{B}$ or the field $\psi$! See for details An introduction to spinors, around (31). Another useful reference: Spin, topology, SU(2)$\to$ SO(3), around (7). Main idea: using terms like $(\sigma^i B^i)$ one can convert a rotation of the vector into a rotation of the spinor indices: $$ (\sigma^i (B^i)^\prime) = (\sigma^i e^{i\alpha J}B^i) = e^{i\alpha\sigma/2 }(\sigma^i B^i) e^{-i\alpha\sigma/2 } $$ And due to the transformation of $\psi$: $$ \psi^{\prime } = e^{i\alpha\sigma/2 } \psi $$ one has: $$ (\sigma^i (B^i)^\prime) \psi^\prime = e^{i\alpha\sigma/2 }(\sigma^i B^i) \psi $$ So this term transforms like $\psi$, and the Schrödinger–Pauli equation is rotationally invariant. The author is simply not very explicit in stating this.
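This is easy to check numerically. The sketch below verifies the conjugation identity for a rotation about $z$, using the sign convention $U = e^{-i\alpha\sigma_z/2}$ (the opposite sign, as in the formulas above, just corresponds to rotating the other way):

```python
import numpy as np

# Pauli matrices: a fixed set of numbers, identical in every frame
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

alpha = 0.7                      # arbitrary rotation angle about z
B = np.array([0.3, -1.2, 0.5])   # arbitrary "magnetic field" vector

# Spinor rotation U = exp(-i*alpha*sz/2) = cos(a/2) I - i sin(a/2) sz
U = np.cos(alpha / 2) * I2 - 1j * np.sin(alpha / 2) * sz

# SO(3) rotation of the vector B by +alpha about z
R = np.array([[np.cos(alpha), -np.sin(alpha), 0],
              [np.sin(alpha),  np.cos(alpha), 0],
              [0,              0,             1]])
Bp = R @ B

sigma_dot = lambda v: v[0] * sx + v[1] * sy + v[2] * sz

# Rotating the vector index equals conjugating with the spinor rotation
lhs = sigma_dot(Bp)
rhs = U @ sigma_dot(B) @ U.conj().T
print(np.allclose(lhs, rhs))   # True
```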
{ "language": "en", "url": "https://physics.stackexchange.com/questions/532831", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 1 }
Splitting a single particle wave function The wikipedia article on the double slit experiment contains the following animation: https://upload.wikimedia.org/wikipedia/commons/transcoded/a/a0/Double_slit_experiment.webm/Double_slit_experiment.webm.180p.vp9.webm Here we can see that part of the wavefunction is reflected back at the electron source. Does this only happen when there are multiple particles? If the experiment is set up so that at any one time, there is at most 1 electron between the electron source and the screen, can this sort of reflection still happen? More generally: what happens when a wave function of a single particle splits in two parts, with each part propagating in a different direction? Is this even possible?
When a single electron is fired it will not reproduce the full probability function that you see in the animation; in fact there are other paths of low probability that are not even shown. Any particle has a chance of going anywhere, which is why we can say a single particle has many possible paths or wave functions. The single path that a particle ultimately takes is a different wave function from the probability distribution shown in the animation, which tries to show ~>99% of the probable paths.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/532994", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How can I determine the mean size (area) of the surface reconstruction domains from a LEED (low energy electron diffraction) pattern? How can I determine the mean size (area) of the surface reconstruction domains from a low-energy electron diffraction (LEED) pattern? The cross-section of the electron beam is definitely going to be much larger than the average domain area, so I can't just measure every single domain. There will probably have be to some statistical approach right? And I think spot intensity profile might have to be used too.
Focus your beam to a spot size smaller than the domain size and scan the intensity of a peak that is due to the surface reconstruction.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/533086", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why bound currents cannot be detected in experiment? In today's group meeting about anomalous Nernst effect, I learned that bound currents cannot be detected in experiment. Why?
While finding the vector potential due to a piece of magnetized material with magnetization M, it turns out to be the same as the potential produced by a volume current and a surface current, called bound currents. The physical interpretation is that in a uniformly magnetized material there are tiny current loops which produce dipole moments. The net effect of these loops is a surface current. It is clear that this is just an analogy to understand the net effect of those tiny loops. If you try to measure the surface bound current you will be unable to, as in reality there are only tiny current loops.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/533180", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How to find the magnetic field of a current using the differential form of Maxwell's equations? To find the magnetic field produced by a long straight wire, one would use either the Biot-Savart law or Ampere's Law in integral form. How do you find this simple result starting from $\nabla \cdot \vec{B} = 0$ and $\nabla \times \vec{B} = \mu_0 \vec{J}$? Let's imagine a current flowing in the $\hat{y}$ direction; then the previous equations are: $$ \partial_x B_x+ \partial_y B_y + \partial_z B_z = 0$$ $$ \partial_y B_z - \partial_z B_y = 0$$ $$ \partial_x B_y - \partial_y B_x = 0$$ $$ \partial_z B_x - \partial_x B_z = \mu_0 J_y$$ And then? What next?
As others already pointed out, it is hard to solve this problem in cartesian coordinates and from the differential Maxwell equations. But anyway, here is a rough sketch without going too much into the details. The current density $\vec{J}$ is zero everywhere, except in the wire (at $x=0, z=0$) where it is infinite and pointing in $\hat{y}$-direction, in such a way that the total current through a small circle around the wire is $I$. This can be described using Dirac delta functions: $$\vec{J}=\hat{y}I\delta(x)\delta(z)$$ Therefore in your 4th equation you need to write $I\delta(x)\delta(z)$ instead of $J_y$. From the symmetry of your situation we try the following: * *$B_y$ is zero, *$B_x$ is independent of $y$, *$B_z$ is independent of $y$. Using this approach, from your 4 equations the 2nd and 3rd are trivially satisfied. And the 1st and 4th equation become $$\begin{align} \partial_x B_x + \partial_z B_z &= 0 \\ \partial_z B_x - \partial_x B_z &= \mu_0 I\delta(x)\delta(z) \end{align} \tag{1}$$ By some clever guessing you get the solution $$\begin{align} B_x &= -C\frac{z}{x^2+z^2} \\ B_z &= +C\frac{x}{x^2+z^2} \end{align} \tag{2}$$ with a still unknown pre-factor $C$. You can easily check the correctness of this solution by plugging it into the differential equations (1), at least for outside of the wire ($x\neq 0, z\neq 0$). For finding the pre-factor $C$ you need to plug the solution (2) into the second of differential equations (1) and then integrate it over a small area in the $x$-$z$-plane containing the wire. Due to the singularity there (at $x=0, z=0$) this is a tricky business. The result of this integration is $2\pi C = \mu_0 I$. 
So we have the solution $$\begin{align} B_x &= -\frac{\mu_0 I}{2\pi}\frac{z}{x^2+z^2} \\ B_z &= +\frac{\mu_0 I}{2\pi}\frac{x}{x^2+z^2} \end{align} \tag{3}$$ Rewriting this solution (3) from cartesian to cylindrical coordinates gives $$\vec{B}=\frac{\mu_0 I}{2\pi r}\hat{\phi},$$ where $r$ is the distance from the $y$-axis, and $\hat{\phi}$ is the azimuthal unit-vector around the $y$-axis. So finally, we arrived at the same well-known solution as found by other methods.
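As a numerical sanity check (in units where $\mu_0 I = 1$), finite differences confirm that solution (3) satisfies the two nontrivial equations away from the wire, and a discretized loop integral around the wire reproduces Ampère's law in integral form:

```python
import math

mu0_I = 1.0                     # work in units where mu0*I = 1
C = mu0_I / (2 * math.pi)

def Bx(x, z): return -C * z / (x**2 + z**2)
def Bz(x, z): return  C * x / (x**2 + z**2)

def ddx(f, x, z, h=1e-6): return (f(x + h, z) - f(x - h, z)) / (2 * h)
def ddz(f, x, z, h=1e-6): return (f(x, z + h) - f(x, z - h)) / (2 * h)

# The two nontrivial Maxwell equations at a point away from the wire
x0, z0 = 0.8, -0.3
div_B  = ddx(Bx, x0, z0) + ddz(Bz, x0, z0)   # should be ~0
curl_y = ddz(Bx, x0, z0) - ddx(Bz, x0, z0)   # should be ~0 (J = 0 here)
print(div_B, curl_y)

# Ampere's law: the line integral of B on a circle around the wire
# (in the x-z plane) should equal mu0*I.
r, N = 0.5, 100000
integral = 0.0
for k in range(N):
    phi = 2 * math.pi * k / N
    x, z = r * math.cos(phi), r * math.sin(phi)
    tx, tz = -math.sin(phi), math.cos(phi)   # tangent direction
    integral += (Bx(x, z) * tx + Bz(x, z) * tz) * (2 * math.pi * r / N)
print(integral)   # ~1.0 = mu0*I
```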
{ "language": "en", "url": "https://physics.stackexchange.com/questions/533341", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 1 }
Orthogonality of a Lorentz Boost Matrix in terms of an invariant I have been doing questions recently involving Lorentz boosts. However I was wondering if the Lorentz boost matrix $Λ$ is orthogonal. $$ \left[\begin{array}{cccc}\hat {ct} \\ \hat x\end{array}\right] = \left[\begin{array}{cccc}{\cosh \varphi} & {-\sinh \varphi} \\ {-\sinh \varphi} & {\cosh \varphi}\end{array}\right] \left[\begin{array}{cccc}{ct} \\ x\end{array}\right] =Λ(\varphi)\left[\begin{array}{cccc}{ct} \\ x\end{array}\right] $$ My understanding: For a matrix to be orthogonal $ΛΛ^T=Λ^TΛ=I$. That is, $Λ^T=Λ^{-1}$; however this is not the case with the given matrix here. So instead of using that definition, could I prove it is orthogonal in terms of an invariant? My attempt: If I denote $\eta $ to be the Minkowski metric, which is an invariant, the matrix representing a Lorentz boost is orthogonal with respect to this Minkowski metric $$ \Lambda \eta \Lambda^T = \eta \text{ or } \Lambda^{-1} = \eta \Lambda^T\eta.$$ Is this a correct statement?
Yes your statement is correct. Rotations are isometries of 3D Euclidean space: they preserve the inner product defined using the Euclidean metric. Rotations + boosts are isometries of 4D Minkowski space: they preserve the inner product defined using the Minkowski metric (technically this isn't an inner product since it's not positive definite, it's a symmetric bilinear form). Or, put differently: The Euclidean metric is left invariant under rotations and the Minkowski metric is left invariant under Lorentz transformations. For rotations this gives us $R^TR=1$, but this isn't the case for boosts.
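In 1+1 dimensions this is easy to verify numerically, using the boost matrix from the question and the $(+,-)$ signature:

```python
import numpy as np

phi = 1.3                          # arbitrary rapidity
ch, sh = np.cosh(phi), np.sinh(phi)

L = np.array([[ch, -sh],
              [-sh, ch]])          # 1+1D Lorentz boost matrix
eta = np.diag([1.0, -1.0])         # Minkowski metric, (+,-) convention

# A boost is NOT orthogonal with respect to the identity ...
print(np.allclose(L @ L.T, np.eye(2)))                 # False
# ... but it does preserve the Minkowski metric:
print(np.allclose(L @ eta @ L.T, eta))                 # True
# equivalently, its inverse is eta L^T eta:
print(np.allclose(np.linalg.inv(L), eta @ L.T @ eta))  # True
```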
{ "language": "en", "url": "https://physics.stackexchange.com/questions/533446", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Doubt related to the use of Gaussian Surfaces My textbook says we need to take care not to let the Gaussian Surface pass through any discrete charge. However, the Gaussian Surface can pass through a continuous charge distribution. Why so?
Point charges correspond to a discontinuous charge distribution. For instance, if your surface is a sphere of radius $r$ enclosing a uniform charge distribution $\rho_0$, then the enclosed charge is perfectly defined and is a continuous function of $r$ ($Q(r) = 4/3 \pi r^3 \rho_0$), so when using the macroscopic Gauss equation the RHS will be defined and regular. However, if there is a single point charge $q$ sitting at $r=R$, suddenly you will have a discontinuity, as $Q(R-\varepsilon) \simeq 4/3 \pi R^3 \rho_0$ but $Q(R+\varepsilon) \simeq 4/3 \pi R^3 \rho_0 + q$. For this reason, it does not really make sense to associate a precise enclosed charge with $r=R$, because the limits $r=R^-$ and $r=R^+$ are different. All of this comes from the fact that the charge density $\rho(r)$ diverges at the position of a point charge.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/533558", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Effective action for ferromagnetism and ferroelectricity In Three Lectures On Topological Phases Of Matter, section 2.1 mentions that: $$ I^\prime = \int dt d^3x \; \left(\vec{a}\vec{E}+\vec{b}\vec{B}\right) $$ corresponds to ferromagnetism and ferroelectricity, and that $$ I^{\prime\prime} = \int dt d^3x \; \left(a_{ij}E^iE^j+b_{ij}B^iB^j\right) $$ corresponds to electric and magnetic susceptibility. Could somebody clarify why? I would greatly appreciate any answers!
The energy of an electric dipole moment $\bf{p}$/magnetic dipole moment $\bf{m}$ in the external field is proportional to it, $W = -\bf{p\cdot E}$ or $W = -\bf{m\cdot B}$. In a ferromagnetic sample the local magnetic dipole moment is proportional to the element of volume $d^3x$. This is just the same as you have in the expression for $I'$. Then, if you consider a dielectric or diamagnetic sample, it gets polarized in the external electric or magnetic field. The susceptibility is a tensor that relates, for example, the dipole moment per unit volume $\bf{P}$ (polarization) and the external field $\bf{E}$: $$P^i=a^i_jE^j.$$ Since $\bf{P}$ is again a dipole moment, you get the expression $I''$ for the action.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/533821", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Clarification of the concept "less resistance means less heating" in a wire So my textbook says that the reason cables that are supposed to carry high currents are thicker than those that are meant to carry less current is that "less resistance (of the wire) means less heating...". Is this even true? Isn't CURRENT the reason wires heat up? If we decrease resistance, more current flows, and that should produce more heating!
If we decrease resistance, more current flows You're talking about the resistance of the transmission wires that carry electric current from the generating station (or other power supply) to the load. Normally those things are sized such that the power dissipated in the transmission line is much less than the power dissipated in the load. Yes, decreasing the resistance of the transmission line will increase the total current IF the load is purely resistive*, but even if you could decrease it to zero, it only would increase the total current by a small amount because the resistance of the load dominates the equation. $I_{total} = \frac{V_{supply}}{R_{line}+R_{load}}$, where $R_{line}\ll R_{load}$ At the same time, decreasing the transmission line resistance relative to the load resistance will increase the fraction of the total power that is delivered to the load, which generally is what we want. $P_{load}={I_{total}}^2R_{load}$, and $P_{line}={I_{total}}^2R_{line}$ * Some loads, including motors, and electronic equipment with switching power supplies, may behave differently from a resistor (i.e., do not obey Ohm's Law.)
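A small numerical sketch (the supply voltage and resistances are made-up illustrative values) shows both statements at once: halving the line resistance changes the total current by only about 1%, but roughly halves the power wasted in the line itself.

```python
# Illustrative numbers (not from any real installation): a 240 V supply
# feeding a 10 ohm resistive load through a transmission line.
V = 240.0
R_load = 10.0

def powers(R_line):
    """Return (P_load, P_line) for a given line resistance."""
    I = V / (R_line + R_load)            # total current (series circuit)
    return I**2 * R_load, I**2 * R_line

# Halving the line resistance barely changes the total current,
# but halves (approximately) the power dissipated in the line.
for R_line in (0.2, 0.1):
    P_load, P_line = powers(R_line)
    print(R_line, round(P_load, 1), round(P_line, 1))
```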
{ "language": "en", "url": "https://physics.stackexchange.com/questions/533927", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 8, "answer_id": 7 }
What can cause a steam condensate pipe to oscillate and is this normal? I was visiting an industrial site not too long ago and I noticed an interesting phenomenon involving one of their steam condensate pipes. This (insulated) pipe was suspended from the ceiling. It hung down about 20 feet, supported by a series of supports. Each support consisted of a roller (positioned under the pipe) and a rod, which ran from the roller to the ceiling directly overhead. It ran roughly 100 yards from one end of the facility to the other. This pipe also included a U-shaped expansion loop. As I watched it, I noticed that it was swaying back and forth, with a period of about 2 seconds and an amplitude of a couple of feet. It followed the description of a second-harmonic standing wave in a string with fixed ends, where there is a stationary point in the middle that does not move. I was (and am) still very curious about this. Is this often seen in a steam system? Is this ok? What is this called? What causes it? Should it be dampened? Will it hurt the life of the pipe? It may be ok, but it really reminded me of the Tacoma bridge. I was assuming that the oscillation was caused by water flowing around the expansion loop, imparting momentum to it as it encountered the turns, which then caused the pipe to oscillate at its resonance frequency.
This is common in piping systems carrying steam and connected to big boilers and turbines, for the following reasons. A boiler generates a strong random "rumble" while operating, which comes from the boiling process inside it. When connected to piping systems which possess compliance and inertia, those pipe runs are driven with that random spectrum and resonate at their natural frequencies. The amplitudes that result are sufficient to abrade the pipe joint seals to the point where they develop significant leaks.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/534033", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How to find the critical exponent of some directional dependent correlation length? I am working on a two-dimensional anisotropic system with a correlation length diverging with different critical exponents in different directions. And I am wondering if there is any theoretical prediction on what exponent characterizes the divergence of the correlation length in an arbitrary direction? More specifically, if $\nu_x$ and $\nu_y$ characterize the divergence of the correlation length in the x and y directions (i.e. $\xi_x \sim t^{-\nu_x}$ and $\xi_y \sim t^{-\nu_y}$), what critical exponent should I expect if I look at the correlation length in some arbitrary direction, $u=\cos(\theta)e_x + \sin(\theta) e_y$? I searched on the internet but I haven't found anything. I'll be happy if someone has any good reference on this type of thing too.
When you refer to there being two different correlation lengths in two orthogonal directions, I assume what you mean is that the correlation functions take the form $$ G(x,y) = \exp\left[ - x/\xi_x - y/\xi_y \right] $$ at long distances (let me know if you have something else in mind). Now, if you consider the decay of correlations along some direction $u=cos(\theta)e_x + sin(\theta) e_y$ in space, then the correlation function will decay as $$ G(r,\theta) = \exp\left[ - \left(\cos(\theta)/\xi_x + \sin(\theta)/\xi_y \right) r \right], $$ where I'm considering $\theta$ fixed and $r$ to be the Euclidean distance between the two points being considered for the correlation function. Then we identify the correlation length in the $u$ direction as $$ \xi_u = \left(\cos(\theta)/\xi_x + \sin(\theta)/\xi_y \right)^{-1}. $$ Now as we approach the critical point, the correlation lengths diverge as $\xi_x = c_x t^{-\nu_x}$ and $\xi_y = c_y t^{-\nu_y}$ with some non-universal constants $c_{x,y}$. Let's say I've chosen coordinates such that $\nu_x > \nu_y$. Then we can write $$ \xi_u = \left(\cos(\theta)/\xi_x + \sin(\theta)/\xi_y \right)^{-1} = t^{-\nu_y} \left( c_x^{-1} \cos(\theta) t^{\nu_x - \nu_y} + c_y^{-1} \sin(\theta) \right)^{-1}. $$ Since $\nu_x > \nu_y$, the quantity in the parentheses smoothly goes to a constant as $t \rightarrow 0^+$, and we find that the correlation length diverges as $$ \xi_u = \frac{c_y}{\sin \theta} \ t^{-\nu_y}. $$ So if we consider correlation length in an arbitrary direction, it diverges with the smaller of the two critical exponents $\nu_x$ and $\nu_y$ (unless it is parallel to the direction with the larger $\nu$).
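One can check the claimed asymptotics numerically with illustrative (non-universal) constants: the ratio of $\xi_u$ to the predicted form $(c_y/\sin\theta)\,t^{-\nu_y}$ should approach 1 as $t \to 0^+$.

```python
import math

# Illustrative non-universal constants and exponents
c_x, c_y = 1.0, 2.0
nu_x, nu_y = 1.0, 0.5          # nu_x > nu_y by choice of axes
theta = 0.6                     # arbitrary direction

def xi_u(t):
    """Directional correlation length from the combination rule above."""
    xi_x = c_x * t**(-nu_x)
    xi_y = c_y * t**(-nu_y)
    return 1.0 / (math.cos(theta) / xi_x + math.sin(theta) / xi_y)

# Compare with the predicted asymptotic form (c_y/sin(theta)) t^{-nu_y}
for t in (1e-2, 1e-4, 1e-6):
    ratio = xi_u(t) / ((c_y / math.sin(theta)) * t**(-nu_y))
    print(t, ratio)   # tends to 1 as t -> 0+
```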
{ "language": "en", "url": "https://physics.stackexchange.com/questions/534135", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Gravity, matter vs antimatter I have a simple question regarding matter-antimatter gravity interaction. Consider the following thought experiment: If we imagine a mass $m$ and an antimass $m^-$, revolving around a large mass $M$, the potential energy of mass $m$ should be: $$ U_1=-\frac{GmM}{R} $$ and the potential energy of mass $m^-$ should be: $$ U_2=-\frac{GmM}{R} $$ or: $$ U_2=\frac{GmM}{R} $$ depending on the sign of the gravity interaction between matter and antimatter. If the two particles annihilate to energy, then the gravitational field of $M$ will interact with the emitted photons and will change their frequency. But, as the interaction between gravity and the photons has nothing to do with the question of the gravity between matter and antimatter, can't we simply use the interaction between gravity and photons, and energy conservation, to establish the nature of the gravity interaction between matter and antimatter?
There are also constraints on antimatter gravitational coupling from studies of neutral mesons. In the Standard Model, neutral kaons (down-antistrange and strange-antidown) can oscillate into one another via weak interactions. By measuring the decays of the kaon beam, you can put very accurate constraints on the rate of the oscillation. Adding a weird gravitational coupling to the antimatter component of the meson creates changes in the oscillation that would be detectable and are not detected. See Tests of the Equivalence Principle with Neutral Kaons for a discussion.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/534289", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 4, "answer_id": 3 }
Velocity after applying a force in the vacuum I'm sorry for such a simple question, but I just need to be sure. I understand that a change of speed occurs only while a force is applied, and that if one punches a ball in free space it will move forever with a constant velocity. Some point-like body with mass $m$ is situated in vacuum, and has initial velocity $v_1=0 \space m/s$. Some force now acts on the body for an infinitely short period of time. The acceleration that the application of this force gives the body equals $a=5 \space m/s^2$. Will the velocity afterwards be $v_2=0+5 =5\space m/s$? Also, if the force acts for a non-infinitely-short period of time, how do I calculate then? I found this from https://physics.stackexchange.com/a/231120/255554 $$x=( x + \frac{|F| }{2m} t^{2} ) $$ It seems it can be applied to both of my cases, but I don't know why the mass is multiplied by 2. And can you please confirm: if 1 Newton is the force that during 1 second changes a 1 kg body's velocity by 1 m/s, then 2 Newtons is the force that changes: * *if mass is the same: during 1 second, velocity by 2 m/s *if mass is 2 kg: during 1 second, velocity by 1 m/s Am I understanding correctly?
Firstly, the body will only accelerate while the force is being applied, and it will move at a constant velocity the instant the force stops being applied. Your final equation is just a variation on $$x=\frac12at^2$$ Why that factor of ½ arises can be shown using elementary calculus, or by a geometrical argument. Both statements about a 1 Newton force in your update are correct.
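For the non-instantaneous case, that factor of 2 appears because the velocity itself grows linearly from zero, so the average velocity over the interval is only half the final one. A tiny simulation with the numbers from the question ($a = 5\ \mathrm{m/s^2}$ acting for 1 s on a body starting at rest) shows both results:

```python
# Euler integration of constant acceleration, starting from rest.
a = 5.0            # m/s^2 while the force acts
dt = 1e-5          # time step, s
N = 100_000        # number of steps -> total time N*dt = 1 s
v = x = 0.0
for _ in range(N):
    x += v * dt    # position advances at the *current* velocity...
    v += a * dt    # ...which itself grows linearly from zero

print(v)   # ~5.0 m/s : v = a*t
print(x)   # ~2.5 m   : x = a*t^2/2 -- the average velocity is v/2,
           # which is where the 2 in the denominator comes from
```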
{ "language": "en", "url": "https://physics.stackexchange.com/questions/534394", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
The Enigma of Universal Gravitation Forces This is taken from a book called "Physical Paradoxes and Sophisms" by V. N. Lange. 1.22. The Enigma of Universal Gravitation Forces The law of gravitation can be written $F=\gamma\frac{m_1m_2}{R^2}$. By analyzing this relationship we can easily arrive at some interesting conclusions: as the distance between the bodies tends to zero, the force of their mutual attraction must rise without limit to infinity. Why then can we lift up, without much effort, one body from the surface of another body (e.g., a stone from the Earth) or stand up after sitting on a chair?
We have the power to overcome this near-infinite force of gravity because the electromagnetic forces generated by our muscles are much, much stronger. Every cell in our body burns on the order of 1 to 10 million molecules of ATP every second, and that's when those cells are at rest. During periods of high intensity, that ATP burning increases 1000-fold in muscle cells. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3005844/
{ "language": "en", "url": "https://physics.stackexchange.com/questions/534515", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 8, "answer_id": 6 }
What does Heisenberg's uncertainty principle tell about nature? I agree with the fact that the principle points out to the inaccuracy in the measurement of the two quantities of the particles (momentum and position). But measurements apart, does it explain anything about how nature works, in general? As in, I think the particle would have some exact value of momentum at that point in space (if not, please explain why). So why not just tell that 'okay it does possess some momentum at that position, but I can't tell what that exact value is'? Edit: I understood that the principle points out at nature as a whole, in general, and does not just point out at measurements
Look at a neutron star. The particles are under so much compression that all position locations will be occupied. Since we don't see matter more dense than this we assume that the position locations approach maximal definition. This constraint means, according to the Heisenberg Uncertainty Principle that the momenta of the neutrons must be highly undefined. Basically speaking, the denser the neutron matter becomes the more momentum space we get. As more mass is added the radius of the star decreases but the momentum space increases. Once a critical mass is reached the radius of the matter in position decreases to its Schwarzschild radius and we can no longer speak about its position or momentum. Nature's mystery is cloaked by an Event Horizon. I like to give this example because it shows quantum effects on a stellar scale and challenges our intuition. I don't think we can understand fully the Heisenberg Uncertainty Principle though because our brain is too large.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/534614", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23", "answer_count": 11, "answer_id": 8 }
Is a pseudo-Goldstone boson always a pseudoscalar particle? There are several examples of pseudo-Goldstone bosons which are CP-odd particles, such as the pion, as well as many axion-inspired models. If we invert the logic, Are all pseudo-Goldstone boson of CP-odd type? Or, can they be CP-even too? Is there a known example?
I'm not sure why you would think all (pseudo-)Goldstone bosons have to be CP odd. This would be the result if the spontaneously broken symmetry is a chiral symmetry ($SU(2)_A$ for pions, $U(1)_{\text{PQ}}$ for axions), but of course you can spontaneously break other kinds of symmetries too. For example, consider a complex scalar field with $$\mathcal{L} = |\partial_\mu \phi|^2 + m^2 |\phi|^2 - \lambda |\phi|^4 + \epsilon (\phi^3 + {\phi^*}^3).$$ We can take this complex scalar to be C even and P even. The parameter $\epsilon$ can be taken small technically naturally, because it is an explicit breaking of the $U(1)$ symmetry $\phi \to e^{i \theta} \phi$. Upon spontaneous symmetry breaking, the phase of $\phi$ is a Goldstone boson, which picks up a small mass due to $\epsilon$, and is CP even. This kind of setup is used in Affleck-Dine baryogenesis.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/534776", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Intuitive explanation why rate of energy transfer depends on difference in energy between two materials? The temperature of an object will decrease faster if the difference in temperature between the object and it's surroundings is greater. What is the intuitive explanation for this?
You know that temperature is related to the microscopic kinetic energy of the atoms and molecules that make up a material. For simplicity, let's assume the two materials consist of monatomic ideal gases with one having a higher temperature than the other. Then the temperature of the two gases is a measure of the average kinetic energy of the gas atoms, which in turn depends on the speed of the atoms. If the two gases are brought into contact with one another, at the interface collisions between the higher-speed atoms of the higher-temperature gas with the lower-speed atoms of the lower-temperature gas will transfer kinetic energy to the lower-speed atoms. Those atoms will, in turn, move into the bulk of the gas and collide with others, increasing their speed. Eventually, when thermal equilibrium is reached, the two gases reach some common intermediate temperature. How quickly the temperature rises in the lower-temperature gas will depend on how quickly the atoms at the interface move into the bulk of the gas and collide with other atoms, raising the overall kinetic energy. All other things being equal, that penetration will be quicker the higher the speed of the atoms after colliding with the more energetic atoms of the higher-temperature gas at the interface. That speed will be greater the higher the speeds of the higher-temperature gas atoms, which in turn increase with the temperature of the gas. Bottom line: The greater the temperature difference, the more quickly energy is transferred into the interior of the lower-temperature material and the faster its temperature rise. For more discussion of temperature I suggest you look at the Hyperphysics website. Hope this helps.
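The macroscopic summary of this microscopic picture is Newton's law of cooling, which says the cooling rate is proportional to the temperature difference. A minimal sketch (the rate constant is illustrative, not from any particular material):

```python
# Newton's law of cooling: dT/dt = -r * (T - T_env).
# The rate constant r is illustrative, not from any real material.
r = 0.1          # 1/s
T_env = 20.0     # temperature of the surroundings, deg C

def cooling_rate(T):
    """Instantaneous rate of temperature drop, in deg C per second."""
    return r * (T - T_env)

# Doubling the temperature difference doubles the rate of heat loss:
print(cooling_rate(60.0))    # difference of 40 deg C
print(cooling_rate(100.0))   # difference of 80 deg C: twice the rate
```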
{ "language": "en", "url": "https://physics.stackexchange.com/questions/535035", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 1 }
Number of electrons in an orbital How do we know number of electrons per element since electrons do not have shape and volume? Isn't an electron just quantized fluctuating probability wavefunction? Is there an experimental study supporting the idea that electrons are the moving particles in orbitals?
When we solve the hydrogen atom Hamiltonian, we get quantised energy states that are allowed for an electron. These states correspond to the wavefunction of the electron and are called orbitals. And since these orbitals are stationary states, the number of electrons in the ground state is constant. So when you say: Isn't an electron just quantized fluctuating probability wavefunction? Is there an experimental study supporting the idea that electrons are the moving particles in orbitals? The quantised fluctuating wavefunction itself is what we call an orbital. However, it is to be noted that orbitals are exact only for the hydrogen atom. This is because an orbital inherently doesn't involve electron-electron interaction. Spectral lines provide indirect information about the presence of orbitals. But there has been direct evidence of quantised states by means of orbital tomography; however, the validity of orbitals has been discussed here.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/535152", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Symmetry breaking and higgs representation I was wondering if there is a criterion for the representation the Higgs should transform under, or if it's a case-by-case scenario. For instance, electroweak symmetry breaking is done with a Higgs in the fundamental representation of SU(2). Using an adjoint representation does not break the symmetries generated by the third Pauli matrix (and therefore not all the gauge bosons get masses, et cetera). To break SU(5), we need to use a Higgs in the adjoint representation. According to this paper (http://www-f1.ijs.si/~ziherl/Greljo12.pdf), the reason is: "Since SU(5) has 24 gauge bosons, and SM has 12, the rest of the gauge bosons should get mass after SSB. So, we need to get at least 12 Goldstone bosons. Minimal representation of the Higgs which can do the job is 24, adjoint Higgs." I do not understand his reasoning; what is this criterion for the Higgs to be able to 'do the job'?
Table III of the legendary 1974 paper by Ling-Fong Li, required canonical reading for theory students, details which low-lying Higgs representations break SU(n) groups to what subgroup and why. The "job" is to SSBreak 12 of the 24 symmetry directions of SU(5), so the remaining 12, so far unbroken at this stage, comprise the 8+3+1=12 of the SU(3)×SU(2)×U(1) of the SM. Table III tells you the adjoint Higgs rep of SU(5), the 24, breaks it to just SU(3)×SU(2)×U(1), virtually magically! (This was the "could this be a coincidence?" moment of its inceptors.) The smaller reps all have problems: The fundamental, the 5, breaks SU(5) to only SU(4), so only 9 Goldstone bosons. Taking two of those breaks it to SU(3), so 16 Goldstone bosons—far too many—which would have driven model builders mad by its dysfunctional subtlety. The symmetric two-tensor, the 15, breaks it to SU(4), with only 8 Goldstone bosons, or to O(5), with 10 Goldstone bosons: not enough in either case. The antisymmetric two-tensor, the 10, breaks it to SU(3), so 16 Goldstone bosons, as above: too many. So the adjoint does the job indeed.
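The counting in the quoted paragraph can be written out explicitly: the number of broken generators equals the number of Goldstone bosons needed, one per gauge boson that must become massive.

```python
# Counting broken generators = needed Goldstone bosons for SU(5) -> SM.
def dim_su(n):
    """Number of generators (gauge bosons) of SU(n)."""
    return n * n - 1

dim_su5 = dim_su(5)                  # 24
dim_sm = dim_su(3) + dim_su(2) + 1   # 8 + 3 + 1 = 12 (the U(1) adds one)

broken = dim_su5 - dim_sm
print(dim_su5, dim_sm, broken)       # 24 12 12: exactly 12 Goldstone
                                     # bosons for the 12 massive bosons
```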
{ "language": "en", "url": "https://physics.stackexchange.com/questions/535270", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Why does charge on a capacitor remain constant when a dielectric is fully inserted between the plates of the capacitor? We have a capacitor, let's say of capacitance C, which is charged to a voltage, say V. Then the voltage source is disconnected and a dielectric of dielectric constant, say k, is inserted fully between the plates of the parallel-plate capacitor. We are asked to find the change in the charge stored by the capacitor and the change in voltage. Now what I am not getting is why the charge stored in the capacitor remains constant. The surface charge density decreases due to polarisation of the dielectric, and so the net charge on the plates should decrease, yet we are considering the charge to be constant. Please correct me.
why does charge stored in capacitor remain constant. Because you disconnected the voltage source. It's meant to be implied that the capacitor is disconnected from all external circuits. Therefore there's nowhere for the charge to go. And since charge is a conserved quantity, that means the charge on the capacitor plate must remain constant. The surface charge density decreases due to polarisation of dielectric and so the net charge on the plates should decrease yet we are considering charge to be constant. The charge associated with the polarization only compensates for some of the charge on the plate, it doesn't remove it. The charge associated with polarization is in the dielectric, and the charge on the plate is on the plate.
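A quick numerical restatement of the bookkeeping (the component values here are illustrative assumptions, not from the question):

```python
# Isolated (disconnected) capacitor: the plate charge Q is fixed.
# Inserting a dielectric multiplies C by k, so V must drop to Q/(kC).
C, V, k = 10e-6, 12.0, 3.0   # farads, volts, dielectric constant (made up)
Q = C * V                    # charge placed on the plates before insertion

C_new = k * C
V_new = Q / C_new            # same Q, larger C => smaller V

print(V_new)                 # V/k = 4.0
```

The product C·V stays equal to the original Q at every step, which is exactly the conservation statement in the answer.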
{ "language": "en", "url": "https://physics.stackexchange.com/questions/535454", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Power Spectrum Density of real-valued time series data Suppose there is real-valued time-series data X(t) and the corresponding auto-correlation function ACF(t)=$\left<X(0)X(t)\right>$. As written in Wikipedia, the Power Spectrum Density (PSD) can be calculated using either X(t) or ACF(t). If one chooses to calculate the PSD using the ACF, I can write the following: $PSD(\omega)=\mathcal{F}\{ACF(t)\}$. However, I get a PSD which is a complex number with a non-infinitesimal imaginary part. From the method which uses $X(t)$ for the PSD, I think the PSD should be a real number ($PSD(\omega)=E\left[|\hat{X}(\omega)|^2\right]$). I have two choices: 1) taking only the real part, 2) taking the norm (using $|z|=\sqrt{zz^*}$). Which one is the valid method?
The autocorrelation function is defined as: $$ r_{a b}\left( i, j \right) = E\left[ a_{i} \ b_{j}^{*} \right] \tag{0} $$ where $a(b)$ is an arbitrary time series signal and $i(j)$ is the corresponding index, respectively. The $E\left[ x \ y \right]$ term is the expectation value between $x$ and $y$ and the asterisk indicates the complex conjugate of the argument. The Fourier transform of a time series signal $x(t)$ is given by: $$ \tilde{x}\left( \omega \right) = \frac{ 1 }{ \sqrt{ 2 \ \pi } } \int_{-\infty}^{\infty} \ dt \ x\left( t \right) \ e^{-i \ \omega \ t} \tag{1} $$ where $\omega$ is the angular frequency. The inverse involves switching $\tilde{x}$ and $x$ and changing the sign of $i$ in the exponent. Then the power spectral density or PSD is defined by: $$ s_{x}\left( \omega \right) = C_{o} \ \lvert \tilde{x}\left( \omega \right) \rvert^{2} \tag{2} $$ where $C_{o}$ is a constant used for normalization and units, depending on method and/or computer language used (they each have slightly different normalizations for FFTs). The Wiener–Khinchin theorem allows you to define the autocorrelation function of $x(t)$ in terms of the PSD or the converse. That is, the PSD can be defined as: $$ s_{x}\left( \omega \right) = \int_{-\infty}^{\infty} \ dt \ r_{x x}\left( t \right) \ e^{-i \ \omega \ t} \tag{3} $$ I have two choices... Which one is valid method? In principle, they are the same. If you already have $x(t)$ why bother with the autocorrelation, just take the absolute value squared of the FFT of $x(t)$ (with proper normalization included based upon the specific language used).
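One can check numerically that the residual imaginary part is pure round-off when the ACF and PSD are computed consistently. A sketch using NumPy's FFT conventions (a white-noise test signal; the circular ACF is obtained via the Wiener–Khinchin theorem itself):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(256)

# PSD directly from the signal (real and non-negative by construction)
psd_direct = np.abs(np.fft.fft(x))**2

# PSD as the FFT of the (circular) autocorrelation function
acf = np.fft.ifft(psd_direct).real      # circular ACF of x
psd_from_acf = np.fft.fft(acf)

# The imaginary part is floating-point noise, so taking the real part
# (the asker's option 1) is the valid choice.
print(np.max(np.abs(psd_from_acf.imag)))
```

If the imaginary part of your FFT'd ACF is not at this round-off level, the usual culprit is feeding in a one-sided (t >= 0 only) ACF instead of the symmetric, two-sided one.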
{ "language": "en", "url": "https://physics.stackexchange.com/questions/535554", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Can momentum never be zero in quantum mechanics? I have seen that Zettili's QM book deals with $E>V$ and $E<V$ (tunnelling) in the case of potential wells, deliberately avoiding the $E=V$ case, so I thought maybe something is intriguing about this and made this up. Suppose the total energy of the particle is equal to its potential energy. Then its kinetic energy should be zero (speaking non-relativistically). But the kinetic energy operator is $\hat{T}=\hat{p}^2/2m$ (where $\hat{p}=-i\hbar\frac{\partial}{\partial x}$), so clearly, since the kinetic energy is 0 here, the momentum eigenvalue will also vanish. Now, putting $E=V$ in the time-independent Schrödinger equation (1D) we get $$\frac{\partial^2\psi}{\partial x^2}=\frac{2m(V-E)}{\hbar^2}\psi\implies\frac{d^2\psi}{d x^2}=0\implies\psi=Ax+B$$ where $A$ and $B$ are arbitrary constants. Since the wave function must vanish at $\pm\infty$, $A=0$; hence the wave function equals a constant $B$ and is not normalizable. So, a particle with no momentum (or kinetic energy) gives a physically unrealizable wave function! Does this imply $E=V$ is a restricted critical case, or that momentum can't be zero in quantum mechanics, or did I just go wrong somewhere?
I would like to add two points to the accepted answer:

* If you use the periodic boundary conditions trick to normalize the momentum eigenstates, then all the momentum eigenstates become normalizable, including the zero-momentum eigenstate.
* The discussion of tunneling in QM books is usually within the quasi-classical approximation, which breaks down when the difference $V - E$ is small. Thus, one usually treats the cases when the particle energy is well below or well above the barrier.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/535610", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Can elementary particles be explained adequately by a wave-only model? I have been watching quantum mechanics documentaries and reading a layman's book called "The Quantum Universe". I believe I understand why the double-slit experiments exclude a particle-only model. However, I do not understand why the particle portion of particle-wave duality is needed. When I google the title of this question, I do not get what I feel is an adequate explanation of why the particle side of wave-particle duality is needed. I believe the explanations assert that a particle moves in a wave-like/probabilistic manner, but what is the evidence that requires a particle even exist instead of the wave itself being the whole story? Is it because elementary particles have quantized states? Can elementary 'waves' not simply exist in quantized states without a particle? I guess I would also like to know how a wave-only model would differ from string theory, if you would not mind. My understanding is that string theory replaces particles with vibrating strings that seem an awful lot like quantized waves in my head. Forgive me if this is a duplicate; my googlefu did not reveal one.
Sort of, yes. The many-worlds interpretation of quantum mechanics essentially says that there aren't actually any particles, just the quantum waves and our observations of them - the "particles" are just our limited observations of a small slice of the complete quantum waveform. As a result, you could say that they're a wave-only explanation of fundamental particles, since the particles don't "actually" exist. Here's a Youtube video explaining it in a bit more detail.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/535739", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 6, "answer_id": 5 }
How do we know not all photons are absorbed? Only those of specific energies? When a photon hits an electron in an atom, its energy has to be equal to the difference in energy between the current shell and a shell with a higher energy level, otherwise it is not absorbed at all. How do we know not all photons are absorbed? Wouldn't at least some energy of the photon be absorbed since it is an oscillation in the EM field?
We can shoot photons of different energies at atoms and see what goes through and what is absorbed. Only photons of the specific energies will be absorbed, and no partial absorption of a photon's energy occurs. This is one of the results from quantum mechanics. The first excitation energy of hydrogen is 10.2 eV. If you shoot photons of energy 12 eV, they will not excite the H atom, not even partially. You need a photon of 10.2 eV to excite the H atom.
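The quoted 10.2 eV follows from the Bohr formula $E_n = -13.6\,\text{eV}/n^2$ for hydrogen; a sketch of the numbers:

```python
# Bohr-model hydrogen levels: only an exact transition energy can be
# absorbed; a 12 eV photon matches no transition out of the ground state.
def E(n):
    return -13.6 / n**2          # energy of level n, in eV

first_excitation = E(2) - E(1)   # 10.2 eV
print(round(first_excitation, 1))

allowed = [round(E(n) - E(1), 2) for n in range(2, 6)]
print(allowed)                   # 12 eV is not in this list
```

Any photon energy that falls between the allowed differences (below the 13.6 eV ionization threshold) simply passes through.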
{ "language": "en", "url": "https://physics.stackexchange.com/questions/535851", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Vector Helmholtz Equation In my recent exercise book I've derived the following equation that needs solving: $\nabla^2\vec{u} + k^2\vec{u} = 0.$ The deformation vector points only in the $\hat{e}_r$ direction. I didn't want to write out the Laplacian in spherical coordinates, so I tried using what I learned in my PDE course the previous semester. It turns out the vector Helmholtz equation is quite different from the scalar one we've studied. Suppose I have basic knowledge of solving the scalar Helmholtz equation in spherical (and other) coordinate systems. Is there any analogy that translates over to the vector version? In other words, should I be able to solve the vector Helmholtz equation if I can solve scalar versions?
Yes, indeed you can use your knowledge of the scalar Helmholtz equation. The difficulty with the vectorial Helmholtz equation is that the basis vectors $\mathbf{e}_i$ also vary from point to point in any other coordinate system other than the cartesian one, so when you act $\nabla^2$ on $\mathbf{u}$ the basis vectors also get differentiated. This forces you to calculate $\nabla^2 \mathbf{u}$ through the identity $$ \nabla^2 \mathbf{u} = \boldsymbol{\nabla} (\boldsymbol{\nabla} \cdot \mathbf{u}) - \boldsymbol{\nabla}\times (\boldsymbol{\nabla}\times \mathbf{u}) \tag{1} $$ which is really cumbersome to deal with by brute force. A smart way to avoid all the hassle is by using the ansatz $$ \mathbf{u} = \mathbf{r} \times (\boldsymbol{\nabla} \psi) \tag{2} $$ where $\psi$ satisfies the scalar Helmholtz equation $$ (\nabla^2 + k^2) \psi = 0. $$ To check that $(\nabla^2 + k^2) \mathbf{u} = 0$ yourself you have to plug the ansatz $(2)$ on $(1)$ and make use of many vector identities and the scalar Helmholtz equation. The calculation is quite involved, so I'll point you to check Reitz, Milford & Christy's Foundations of Electromagnetic Theory, there they do the full calculation. With ansatz $(2)$ proven, it's just a matter of plugging the relevant mode $\psi_{lm}$ in eq. $(2)$ that you get your solution $\mathbf{u}_{lm}$.
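The ansatz can be verified symbolically for a concrete scalar solution. A sketch with SymPy, using the plane wave $\psi = e^{ikz}$ (any scalar Helmholtz solution would do; the Laplacian may be applied component-wise here only because the Cartesian basis vectors are constant, which is exactly the subtlety discussed above):

```python
import sympy as sp

x, y, z, k = sp.symbols('x y z k', real=True)
psi = sp.exp(sp.I * k * z)        # solves the scalar Helmholtz equation

grad = lambda f: sp.Matrix([sp.diff(f, v) for v in (x, y, z)])
lap = lambda f: sum(sp.diff(f, v, 2) for v in (x, y, z))

r = sp.Matrix([x, y, z])
u = r.cross(grad(psi))            # the ansatz u = r x grad(psi)

# residual of the vector Helmholtz equation, component-wise in Cartesian
residual = sp.Matrix([lap(c) for c in u]) + k**2 * u
print(sp.simplify(residual))      # zero vector
```

The same check with a spherical mode $\psi_{lm}$ works too, but is slower to simplify.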
{ "language": "en", "url": "https://physics.stackexchange.com/questions/536044", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Is the electromagnetic field a real physical entity? Up till now, I always thought that electric and magnetic fields are mathematical constructs which aid our understanding. What was a one-step process of particle $A$ exerting force on a particle $B$ is replaced by a two-step process of the particle $A$ creating a field and the particle $B$ then entering it and experiencing the force. But I just read a chapter on electromagnetic (EM) waves. From what I understood, if a charged particle $A$ moves, the electromagnetic field induced by it also changes, and this change in the field can be modeled by a wave equation. This wave equation would then determine the force experienced by a particle $B$ when it enters the field at a particular position $\vec{r}$ and time $t$. My confusion starts when I see that these varying EM fields (or EM waves) produce an actual physical phenomenon called light. So if EM fields were imaginary, then these EM waves should also be imaginary. But experiment and experience show that they are very real. Hence the only possible reason for my confusion is that EM fields are real-world entities. Can someone confirm this for me?
You should consider electromagnetic fields to be just as “real” as matter because both have energy, momentum, and angular momentum. “Reality” is a vague concept and isn’t what is important here. What is important is that energy, momentum, and angular momentum can only be locally conserved if the EM field transports them. In the Standard Model of particle physics, everything consists of just seventeen fields. For example, there is an electron-positron field. So either all seventeen are “real” or none of them are. It makes no sense to say that the electron-positron field is real but the EM field isn’t.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/536169", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 0 }
How does the baryon asymmetry control temperature fluctuations of the CMB? The temperature fluctuations of the Cosmic Microwave Background (CMB) have a sensitive dependence on the amount of baryon asymmetry of the universe. In fact, analysis of CMB fluctuations is one of the ways of inferring the amount of baryon asymmetry. However, purely on physical grounds, how does one understand how the amount of baryon asymmetry controls the fluctuations of the CMB temperature?
I am afraid this will not be a complete answer; also, there is a similar question on the site [How does the CMB constrain the baryon asymmetry? ]. If the universe were uniformly occupied by equal amounts of matter and antimatter, it is reasonable to imagine that the CMB spectrum would take note of the frequent annihilations. I am going to quote the following recent work on this. The crux of their argument is that "if large domains of matter and antimatter exist, then annihilations would take place at the interfaces between them. If the typical size of such a domain was small enough, then the energy released by these annihilations would result in a diffuse gamma ray background and a distortion of the cosmic microwave radiation". This paper in turn cites other, earlier work on this issue, but these date far back, so I will not cite them here. If the universe had far separated regions dominated by matter or antimatter, so that annihilations were infrequent, this would not be as visible, so there is a question of how efficient annihilations have to be to leave its signature on the CMB. Nevertheless, in a uniform plasma, large enough antimatter density would have an imprint on CMB as this diffuse background.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/536259", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Why is the speed of light in vacuum a universal constant? While getting familiar with relativity, the second postulate has me stuck: "The speed of light is constant for all observers". Why can't light slow down for an observer travelling in the same direction as the light?
The constancy of the speed of light in a vacuum in all inertial frames is a postulate of the special theory of relativity. The postulate does not by itself assert that the speed of light is the fastest travelling speed in the Universe, but if you study special relativity closely you will understand that only massless particles can travel at this invariant speed. Nearly massless particles such as neutrinos travel really fast but never reach the speed of light, because only massless particles, like the light quanta themselves, do. Now, it follows from Maxwell's theory of electromagnetism that EM waves travel at a speed fixed entirely by the constants of the theory, $c = 1/\sqrt{\mu_0\varepsilon_0}$, with no reference to the motion of the source or of the observer; in other words, EM fields/waves don't change their speed when the reference frame is changed. This was the most important observation to come out of Maxwell's theory, and it revolutionized physics when Einstein took it as a postulate. It was then understood that light itself, i.e. EM waves, consists of these massless particles, photons. Physics also answers why such postulates are made.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/536432", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 7, "answer_id": 6 }
Action of quantum Fourier transform on two-fermion states In section 2.2 of the paper https://arxiv.org/abs/1807.07112, there appears a Fourier transformation named $F_k^n$ that comes out of a matrix called $F_2$, $$ F_2 = \begin{pmatrix} 1 & 0 & 0 & 0\\ 0 & 1/\sqrt{2} & 1/\sqrt{2} & 0\\ 0 & 1/\sqrt{2} & -1/\sqrt{2} & 0\\ 0&0&0&-1 \end{pmatrix} $$ This matrix seems to be explained in the appendix of https://doi.org/10.1103/PhysRevA.79.032316, but I don't understand where the $c^\prime s$ appeared and how it is related to the Fourier transformation.
It's the beamsplitter unitary (a.k.a. the QFT in two dimensions, a.k.a. the Hadamard gate), represented via its action on two-mode Fermion states. With two modes, there are four possible fermionic states: $|11\rangle\equiv c_1^\dagger c_2^\dagger |\text{vac}\rangle$ (one fermion per mode), $|01\rangle\equiv c_2^\dagger |\text{vac}\rangle$ (a fermion in the second mode), $|10\rangle\equiv c_1^\dagger |\text{vac}\rangle$ (a fermion in the first mode), and $|00\rangle\equiv|\text{vac}\rangle$ (no fermion at all). A beamsplitter will act on these states as follows: \begin{align} |00\rangle &\to |00\rangle, \\ |10\rangle &\to \frac{1}{\sqrt2}(|10\rangle+ |01\rangle), \\ |01\rangle &\to \frac{1}{\sqrt2}(|10\rangle- |01\rangle), \\ |11\rangle &\to -|11\rangle. \end{align} To see this you just need to consider that the beamsplitter acts on the fermionic modes as $$c_1^\dagger\to\frac{1}{\sqrt2}(c_1^\dagger+c_2^\dagger), \qquad c_2^\dagger\to\frac{1}{\sqrt2}(c_1^\dagger-c_2^\dagger),$$ so that for example $$|11\rangle\equiv c_1^\dagger c_2^\dagger |\text{vac}\rangle \to \frac12(c_1^\dagger+c_2^\dagger)(c_1^\dagger-c_2^\dagger)|\text{vac}\rangle \to -c_1^\dagger c_2^\dagger|\text{vac}\rangle.$$ The rest of the rules is similarly derived. The matrix reported in the paper is simply the matrix representation of these rules. This is a special case of a more general problem: given a unitary $U$, how does it act on many-fermion states? The general result is that the scattering amplitude between an $n$-fermion, $m$-mode input $$|r_1,...,r_m\rangle\equiv c_1^{r_1\dagger}\cdots c_n^{r_n \dagger}|\text{vac}\rangle$$ and an output $|s_1,...,s_m\rangle$ is given by the determinant of the matrix obtained from $U$ by taking its first column $r_1$ times, its second column $r_2$ times, etc., and similarly taking rows according to the occupation numbers in $|s_1,...,s_m\rangle$. 
For example, applying this to the above case with $U=H$ and $|r_1,r_2\rangle=|11\rangle$, we get that the probability amplitude of $|11\rangle$ evolving into $|11\rangle$ is the determinant of $H$ itself, which is $-1$, consistently with what we found by direct analysis before.
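These rules can be checked numerically. A sketch (the basis is ordered as $(|00\rangle, |10\rangle, |01\rangle, |11\rangle)$, one consistent convention that reproduces the matrix quoted in the question):

```python
import numpy as np

s = 1 / np.sqrt(2)
H = np.array([[s, s],
              [s, -s]])                 # single-mode Hadamard/beamsplitter

# Two-mode fermionic representation, built from the scattering rules:
# the |11> -> |11> amplitude is det(H) = -1 (fermionic antisymmetry).
F2 = np.array([[1, 0, 0, 0],
               [0, s, s, 0],
               [0, s, -s, 0],
               [0, 0, 0, np.linalg.det(H)]])

print(np.round(np.linalg.det(H)))       # -1
print(np.allclose(F2 @ F2.T, np.eye(4)))  # F2 is unitary (real orthogonal)
```

Filling the doubly-occupied entry with the determinant, rather than hard-coding the sign, makes the determinant rule from the last paragraph explicit.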
{ "language": "en", "url": "https://physics.stackexchange.com/questions/536851", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Harmonic waves: direction left or right? Consider $E(x,t)=A\sin(kx-\omega t)$ where $k=2\pi / \lambda$, with $\lambda$ the wave length and $A$ its amplitude. We have $$E(x,t)=A\sin(k(x-vt))$$ so this wave is going to the right. Now, if I want to make it going to the left, I just have to change the sign of $v$ which leads : $$E(x,t)=A\sin(k(x+vt)).$$ Is that right? But, in my course it is written that changing the sign of $k$ can change the direction (left or right) of the propagation of the wave, but I don't understand why. Any help would be appreciated,
The plane progressive harmonic wave of the form $$E(x,t)=A\sin(kx-\omega t)$$ where $\omega =kv$, represents a wave with speed $v$ travelling in the $+x$ direction, while $$E(x,t)=A\sin(kx+\omega t)$$ represents a wave with speed $v$ travelling in the $-x$ direction. Now let me change the sign of $k$ in the first equation: $$E(x,t)=A\sin(-kx-\omega t)=-A\sin(kx+\omega t)$$ that is, a wave moving in the $-x$ direction. Similarly, changing the sign in the second equation: $$E(x,t)=A\sin(-kx+\omega t)=-A\sin(kx-\omega t)$$ that is, a wave moving in the $+x$ direction. So either of the two signs can be flipped; both operations are equivalent.
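A quick numerical check of the direction claims, tracking a point of constant phase (the numbers are arbitrary):

```python
import numpy as np

k, w = 2.0, 3.0                      # arbitrary wavenumber and frequency

# A crest of A sin(kx - wt) keeps kx - wt = pi/2, so it moves right;
# for A sin(kx + wt) the crest at kx + wt = pi/2 moves left.
t = np.array([0.0, 0.1])
crest_right = (np.pi / 2 + w * t) / k    # from sin(kx - wt)
crest_left = (np.pi / 2 - w * t) / k     # from sin(kx + wt)

print(crest_right)                   # increasing: moves in +x
print(crest_left)                    # decreasing: moves in -x
```

Flipping the sign of either $k$ or $\omega$ flips which of the two trajectories you get, which is the equivalence stated above.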
{ "language": "en", "url": "https://physics.stackexchange.com/questions/537089", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why are complex numbers used in electronics? The impedance of a capacitor or an inductor is imaginary. How do we know these quantities are imaginary?
Using complex numbers means you are describing a value in a different domain, and in a complex number system the imaginary part doesn't mean that the value of the capacitor is imaginary. The imaginary number signifies the phase (vector) rotation between voltage and current when a voltage is applied across the component or a current flows through it. I would suggest you watch the series on complex numbers by Welch Labs on YouTube. It might help you to understand the number system better!
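A sketch of what that "rotation" means for a capacitor's impedance $Z_C = 1/(j\omega C)$ (the component values are made up for illustration):

```python
import cmath

f, C = 50.0, 1e-6                 # 50 Hz, 1 uF (illustrative values)
w = 2 * cmath.pi * f
Z = 1 / (1j * w * C)              # purely imaginary impedance

# The phase of Z is -90 degrees: in a capacitor the current leads the
# voltage by a quarter cycle. The "imaginary" value encodes this
# rotation, not anything unphysical about the component.
print(cmath.phase(Z))             # -pi/2
```

The magnitude |Z| gives the usual ohmic ratio of voltage to current amplitudes, while the angle carries the timing information that a single real number cannot.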
{ "language": "en", "url": "https://physics.stackexchange.com/questions/537446", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 3 }
Why can vector components not be resolved by Laws of Vector Addition? A vector at any angle can be thought of as the resultant of two vector components (namely sin and cos). But a vector can also be thought of as the resultant or sum of two vectors following the Triangle Law of Addition or the Parallelogram Law of Addition, as a vector in reality could be the sum of two vectors which are NOT at 90°. The only difference here will be that it is not necessary for the components to be at a right angle. In other words, why do we take components as perpendicular to each other and not at any other angle (using the Triangle Law and Parallelogram Law)?
Because they have nothing to do with vectors and their addition. They are simple sketches or representations of algebraic objects, elements of vector spaces.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/537550", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 7, "answer_id": 6 }
Does the magnetic field exist for only two particles? I have read an article about the relationship between electric fields and magnetic fields, which involves special relativity. But I wonder, does the effect of relativity always take place? When a single electron is flying by another single electron, and when their distance is minimized, the displacement between the two electrons is perpendicular to the direction of the velocity, so the distance between the two electrons should not be affected by relativity, which means the force between them can be simply derived from Coulomb's law. Is the reasoning above correct? If not, could you please point the problems out? I am only a sophomore and my mother tongue is not English, so I would be extremely thankful if you can use simple math and simple language. :D
When a single electron is flying by another single electron, and when their distance minimizes, the displacement between two electrons is perpendicular to the direction of the speed, so the distance between two electrons should not be affected by relativity, which means the force between them can be simply derived by Coulomb's law. Classical physics says that the electric field around a moving electron is not the same spherically symmetric electric field that exists around a non-moving electron, and because of that Coulomb's law does not apply. In addition to that, there is a magnetic field too. Relativity says that a moving electric field is length-contracted, and that explains the forces felt by a charge next to a moving charge. Here you can see a depiction of a length-contracted electric field. (About half way through the page) http://physics.weber.edu/schroeder/mrr/MRRtalk.html Oh yes, in the picture the arrows pointing up and down are not unchanged, they are extra long. This means that the electric field is extra strong in the direction perpendicular to the motion of the field.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/537877", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Solving TDSE for molecular alignment I'm doing my thesis, and I have to solve a TDSE for molecular alignment - non-adiabatic and non-resonant laser-induced alignment - and I really need your help to solve it: $$i\frac{\partial\Psi_{JM}(\theta,\phi,t)}{\partial t}=\left[BJ^2-\frac{E(t)^2}{2}(\alpha_\parallel\cos^2\theta+\alpha_\perp\sin^2\theta)\right]\Psi_{JM}(\theta,\phi,t).$$ I tried the split-operator method like this: $$\Psi(\theta,\phi,t+\Delta t)=\exp\left(-iH_o\frac{\Delta t}{2}\right)\exp\left(-iV\left(t+\frac{\Delta t}{2}\right)\Delta t\right)\exp\left(-iH_o\frac{\Delta t}{2}\right)\sum_{jm}c(t)\Psi_{jm},$$ but the sum of $|c|^2$ after a time step is not $1$; it could be $2$, $3$, $4$...
Split-operator is going to work perfectly fine for non-resonant alignment simulations. The drifting normalization is probaby due to not small enough time-step or an implementation error. In rigid rotor alignment simulations the direct exponentiation of the molecule+field hamiltonian in spherical harmonics basis is also a possible way of solving the TDSE. There are a number of computer codes capable of doing this type of calculation. I suspect that in order for you to get a practical answer to your question, it should be more specific. Then I should be able to help.
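The norm-drift diagnosis is easy to illustrate on a toy two-level stand-in for the rotational basis (the Hamiltonian values are arbitrary): exact exponential propagation is unitary, while a naive first-order step is not, and its norm error shrinks with the time step.

```python
import numpy as np

H = np.array([[0.0, 0.5],
              [0.5, 2.0]])                  # arbitrary Hermitian "Hamiltonian"
E, V = np.linalg.eigh(H)

def U(dt):
    """Exact propagator exp(-i H dt) via the eigendecomposition."""
    return V @ np.diag(np.exp(-1j * E * dt)) @ V.T

c = np.array([1.0, 0.0], dtype=complex)     # initial state

for dt in (0.1, 0.01):
    exact = U(dt) @ c                       # norm stays exactly 1
    euler = c - 1j * dt * (H @ c)           # naive first-order step
    print(dt, np.linalg.norm(exact), np.linalg.norm(euler))
```

If each factor in your splitting is applied as a true unitary (an exponential, not a truncated expansion), the coefficients cannot drift; growth to 2, 3, 4... points at a non-unitary propagation step or a too-large $\Delta t$.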
{ "language": "en", "url": "https://physics.stackexchange.com/questions/538130", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why can a partial derivative be added to a Hamiltonian in canonical transformations? In canonical transformations, how come we allow the Hamiltonian to change by a partial time derivative of the generating function? $$H'(P, Q, t) = H(p, q, t) + \frac{\partial F}{\partial t}.$$ Here $F$ is the generating function. I mean, geometrically that is not how a function should transform when there is a change of variables. Geometrically it should be $$H'(P, Q, t) = H(p, q, t).$$ In Lagrangian mechanics it is indeed so: $$L'(Q, \dot{Q}, t) = L(q, \dot{q}, t).$$
A more geometric approach is to consider the $(2n+1)$-dimensional contact manifold ${\cal M}$ with coordinates $(q^i,p_j,t)$. The Hamiltonian action functional is $$S_H[\gamma]~=~\int_I \gamma^{\ast} \Theta, \qquad \Theta~=~p_j \mathrm{d}q^j -H \mathrm{d}t, \tag{1}$$ where $\gamma:I\to {\cal M}$ is a curve. This action formulation (1) is world-line (WL) reparametrization invariant. Let us for simplicity work in the static gauge $\gamma^0(t)=t$. The Euler-Lagrange (EL) equations (i.e. Hamilton's equations) remain the same if we change the contact 1-form $\Theta$ by an exact 1-form $$ P_j \mathrm{d}Q^j -K\mathrm{d}t ~=~ \Theta^{\prime}~=~\Theta- \mathrm{d}F.\tag{2}$$ From this geometric perspective, the transformation law $$ K~=~H + \frac{\partial F}{\partial t} \tag{3}$$ is just the standard way in which the $t$-component $\Theta_t=-H$ of the contact 1-form $\Theta$ transforms under a change by an exact 1-form (given various other restrictions on the transformation). References:

* S. G. Rajeev, A Hamilton-Jacobi Formalism for Thermodynamics, Annals Phys. 323 (2008) 2265, arXiv:0711.4319.
* H. Geiges, An introduction to contact topology, 2008. (A pdf file of lecture notes from 2004 can be found on the author's webpage.)
{ "language": "en", "url": "https://physics.stackexchange.com/questions/538253", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Why are some energies dependent on reference frame, and some are not? And why is transfer between them possible? For example the chemical energy of a kilogram of gasoline is 44-46 MJ/kg. It is only dependent on its chemical structure, which stays the same, whether the gas tank moves or stays still relative to the observer. But the kinetic energy of a car depends on the reference frame. In a reference frame of a car A of the same speed, the car B in question have no kinetic energy. But for a bystander, car B has a lot of kinetic energy. What puzzles me, is: 1 - why are some energies relative to reference frame and some are not? 2 - why the "absolute" energy from gasoline can be changed into kinetic energy of car, and therefore change into "relative" energy? I wouldn't be surprised if someone answers "the chemical energy is also relative", but I can't understand why.
All energy is more or less frame-dependent. This is obvious in the case of the kinetic energy of a moving car, but less so for a quantity of gasoline. An illustration: let's say the chemical energy of 1 kg gasoline is 45 MJ at rest. If you put this gasoline in motion so it moves at 1000 m/s, the total energy will now be 45.5 MJ, since 1 kg of matter moving at 1000 m/s has a kinetic energy of 0.5 MJ. This situation becomes more clear when looking at it through special relativity instead of the Newtonian approximation. In special relativity, energy is the time-component of the 4-momentum vector. The temporal and spatial components of this 4-vector are transformed into each other when changing reference frames, while only $m_0c^2$ is guaranteed to stay constant.
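The arithmetic of the illustration, spelled out (Newtonian kinetic energy, with the numbers from the example above):

```python
# 1 kg of gasoline at rest: 45 MJ chemical energy (the example's figure).
# Set it moving at 1000 m/s and the total grows by the kinetic term.
m, v = 1.0, 1000.0            # kg, m/s
E_chem = 45e6                 # J, assumed rest-frame chemical energy
E_kin = 0.5 * m * v**2        # 0.5 MJ

print((E_chem + E_kin) / 1e6) # 45.5 (MJ)
```

In the frame comoving with the gasoline the kinetic term vanishes, so even the "chemical" total is frame-dependent, which is the point of the answer.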
{ "language": "en", "url": "https://physics.stackexchange.com/questions/538791", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 5, "answer_id": 4 }
What is the flux through a square plane containing a point charge? Consider a square plane of finite area A and let a point charge q be placed on the plane. What is the electric flux through the plane due to the point charge? I reckon it to be zero, as all electric field lines in the plane are parallel to the plane, but my book tells me it is non-zero. My question: why is the flux non-zero? I do not want exact calculations, just an explanation of why the flux is non-zero, as intuitively it looks like zero to me.
It's easy. Consider the point charge as a point P on the sheet. We can draw 3D radial field lines all around it, and you can notice that, apart from the field lines parallel to the sheet, there are other field lines passing through the surface (imagine a 3D sphere centred on the point P); hence the flux is not zero. It is in fact $q/\epsilon_0$ through the finite sheet over which the charge q is placed, which can easily be obtained by using Gauss's law with the 3D sphere we imagined as a closed Gaussian surface. Don't get confused about whether the field lines originate from the sheet or pass through it; in fact, for every surface over which we want to find the electric flux, we can assume the field lines either originate from the surface or pass through it, and both views are equivalent. If the charge were placed at a corner of the sheet, the flux would be zero, as the radial field lines don't get any surface to create electric flux. I hope you understand. (Stay safe amidst the chaos of COVID-19.)
{ "language": "en", "url": "https://physics.stackexchange.com/questions/538914", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
The integral in centre of mass (one dimension) This is the integral for finding the centre of mass in one dimension: $$X=\frac{1}{M}\int xdm.\tag{1}$$ But I was wondering whether we could do it by taking x as the integrating variable: A homogeneous rod of length $X$ is split into $N$ regions of width $\Delta x$. Let $m$ be the mass of every such region. As $N\rightarrow \infty$, $\Delta x \rightarrow dx.$ $$\text{Centre of mass}=\frac{\int mdx}{\int m}=\frac{m}{M}\int_0^xdx=\frac{mX}{2M},\tag{2}$$ where $M=\text{total mass}$ and $m=\text{some constant}.$ But it is known that $$\text{Centre of mass}=\frac{X}{2},\tag{3}$$ $$\therefore \frac{mX}{2M}=\frac{X}{2}\tag{4},$$ $$\therefore m=M\tag{5}.$$ It is faulty ($m$ is not equal to $M$). Where have I gone wrong? Is it possible to express the integral (in the one-dimensional centre-of-mass expression) by having $x$ as the integrating variable (like I attempted to do above)? If not, then why?
Your equation (2) is wrong. To use $x$ as the integrating variable you need to change $x\,dm$ into $x\,\dfrac{dm}{dx}\,dx$. This means we need to define $m$ as a function of $x$, and the most reasonable way to do that while keeping the original meaning of $dm$ is to let $m(x)$ be the mass of the section that goes from $0$ to $x$.
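As a numeric sanity check (a sketch; the constant density $dm/dx = M/X$ is the natural choice for a homogeneous rod, and the values of $X$ and $M$ are arbitrary): integrating $x\,(dm/dx)\,dx$ and dividing by $M$ recovers $X/2$.

```python
# Centre of mass of a homogeneous rod of length X and mass M,
# using x as the integration variable: dm = (dm/dx) dx = (M/X) dx.
X = 2.0
M = 5.0
lam = M / X          # linear density dm/dx, constant for a homogeneous rod

N = 100_000
dx = X / N
# midpoint rule for (1/M) * integral of x * lam dx from 0 to X
com = sum((i + 0.5) * dx * lam * dx for i in range(N)) / M

print(com)   # X/2 = 1.0
```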
{ "language": "en", "url": "https://physics.stackexchange.com/questions/539996", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Special relativity and tensile stress If an observer studies a cubic structure crystal from a moving frame of reference while speeding towards the crystal, he would expect to measure the atoms in the crystal closer together in the direction of his travel compared to distance of atoms in a perpendicular direction. How would this observer explain the tensile stress force he observes on the crystal which someone standing next to the crystal will not detect?
How would this observer explain the tensile stress force he observes on the crystal which someone standing next to the crystal will not detect? Stress is the space-space components of the stress energy tensor. For the stationary observer the stress energy tensor is $$\left( \begin{array}{cccc} \rho & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array} \right)$$ For the moving observer the stress energy tensor is $$\left( \begin{array}{cccc} \gamma^2 \rho & v \gamma^2 \rho & 0 & 0 \\ v \gamma^2 \rho & v^2 \gamma^2 \rho & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array} \right)$$ So there is a nonzero component of stress in the moving frame in the $xx$ direction. In an engineering stress tensor this would represent a compressive stress rather than a tensile stress. However, in relativity this term includes a momentum convection term. So according to the moving observer, the reason that the term is zero for the stationary observer is that because he is comoving with the crystal there is no momentum convection past him.
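This can be checked directly by boosting the rest-frame tensor, $T' = \Lambda T \Lambda^{T}$; a small sketch in units with $c = 1$ and illustrative values for $\rho$ and $v$:

```python
import math

def boost_stress_energy(rho, v):
    """Boost T = diag(rho, 0, 0, 0) along x with velocity v (c = 1)."""
    g = 1.0 / math.sqrt(1.0 - v * v)          # gamma
    # Lorentz boost matrix in (t, x, y, z) coordinates
    L = [[g, g * v, 0, 0],
         [g * v, g, 0, 0],
         [0, 0, 1, 0],
         [0, 0, 0, 1]]
    T = [[rho if i == j == 0 else 0.0 for j in range(4)] for i in range(4)]
    # T' = L T L^T, written out by hand
    LT = [[sum(L[i][k] * T[k][j] for k in range(4)) for j in range(4)]
          for i in range(4)]
    return [[sum(LT[i][k] * L[j][k] for k in range(4)) for j in range(4)]
            for i in range(4)]

rho, v = 2.0, 0.6
g2 = 1.0 / (1.0 - v * v)                      # gamma^2
Tp = boost_stress_energy(rho, v)
print(Tp[0][0], g2 * rho)           # gamma^2 rho
print(Tp[0][1], v * g2 * rho)       # v gamma^2 rho
print(Tp[1][1], v * v * g2 * rho)   # v^2 gamma^2 rho, the xx "stress" term
```

The three printed pairs reproduce exactly the non-zero entries of the moving-frame tensor quoted above.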
{ "language": "en", "url": "https://physics.stackexchange.com/questions/540056", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Symmetry transformations: a doubt about the relations that we assume true When we deal with symmetry transformations in quantum mechanics we assume that, if before the symmetry transformation we have $ \hat A | \phi_n \rangle = a_n|\phi_n \rangle,$ and after the symmetry transformation we have $ \hat A' | \phi_n' \rangle = a_n'|\phi_n' \rangle,$ then $a_n'=a_n$. I think the reason for this relation is that $\hat A$ and $\hat A'$ are equivalent observables (for example the energy in two different frames of reference). The problem is that, if $\hat A=\hat X$ where $\hat X$ is the position operator, then this relation seems wrong, because we would have $ \hat X | x \rangle = x|x \rangle$ and $ \hat X' | x' \rangle =x|x' \rangle$ both true, which means that the position eigenstate seen by two different frames of reference is seen with the same coordinates. How can this be true if the systems are, for example, translated relative to each other?
* *This is not an assumption, it is a requirement for consistency. The symmetry transformation acts on operators and states, it does not act on numbers. So the equation $A\lvert \psi_n \rangle = a_n\lvert \psi_n\rangle$ simply becomes $A'\lvert \psi_n'\rangle = a_n\lvert \psi_n'\rangle$ after applying the transformation. This equation must be true for any linear transformation on the space of states, regardless of whether it is a symmetry or not. *So when the transformation is a translation by $a$, it acts as $\hat{x}\mapsto \hat{x} - a$ on the position operator and $\lvert x\rangle \mapsto \lvert x + a\rangle$ on its eigenstates. The equation $\hat{x}\lvert x\rangle = x\lvert x\rangle$ becomes $(\hat{x}-a)\lvert x + a\rangle = x\lvert x + a\rangle$. There is nothing inconsistent about this - note that the transformed equation does not claim that $\lvert x + a\rangle$ would be a position eigenstate with eigenvalue $x$, but instead says that $\lvert x + a\rangle$ is an eigenstate of $\hat{x}-a$ with eigenvalue $x$.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/540128", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
How do we know that one particular solution for the velocities of a two-body elastic collision is the correct one over the other? Assuming there is a 1-D collision between two bodies, having masses $m_1$ and $m_2$, if we conserve energy and momentum, we get two solutions. $$ v_{1,i} = v_{1,f} \\ v_{2,i} = v_{2,f} $$ or $$ v_{1,i} = -v_{1,f} \\ v_{2,i} = -v_{2,f} $$ Both of these are valid mathematical solutions under the conservation laws. If so, apart from practical experimentation, how do we decide which one of these is the correct answer? Is there an analysis that we should do locally within the system, rather than just using global laws? Note: Subscripts i and f denote initial and final states.
In 1 dimension, in the center-of-momentum frame, there are only 2 types of elastic scattering: (1) Forward scattering: $$ v'_i = v_i $$ for $i \in \{1, 2\}$ which looks like no collision at all. (2) Backward scattering: $$ v'_i = -v_i $$
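To see both branches explicitly in an arbitrary frame (a sketch; the exchange formulas below are the standard textbook solution of the two conservation equations, and the masses and velocities are illustrative values):

```python
def elastic_final_velocities(m1, m2, v1, v2):
    """Both solutions (v1f, v2f) of 1-D momentum + kinetic-energy conservation."""
    trivial = (v1, v2)                                    # forward scattering
    v1f = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)      # backward scattering
    v2f = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return trivial, (v1f, v2f)

m1, m2, v1, v2 = 2.0, 3.0, 1.0, -1.0
_, (u1, u2) = elastic_final_velocities(m1, m2, v1, v2)

print(m1 * u1 + m2 * u2, m1 * v1 + m2 * v2)               # momentum conserved
print(m1 * u1**2 + m2 * u2**2, m1 * v1**2 + m2 * v2**2)   # energy conserved

# In the COM frame the non-trivial branch is just v -> -v:
vcm = (m1 * v1 + m2 * v2) / (m1 + m2)
print(u1 - vcm, -(v1 - vcm))     # equal: COM velocity of body 1 reverses
print(u2 - vcm, -(v2 - vcm))     # equal: COM velocity of body 2 reverses
```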
{ "language": "en", "url": "https://physics.stackexchange.com/questions/540239", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Does Pascal's Law hold true in this scenario? In the attached image, it is stated that the pressure at point A is equal to the pressure at point B (both at the same height). My question: can this be justified using Pascal's Law? Whether yes or no, how is it justified? To clarify, the outline represents the boundary of the container, which is completely filled to the top by the liquid (the top slanted surface does not represent the meniscus).
For the liquid to stay like that, it has to be held in place by a container on top (or some other forces), or else $h_1$ and $h_2$ would be the same because the water would level out. Let's assume that the top of the water column at point $B$ is at atmospheric pressure. For things to remain balanced, the pressure at the top of column $A$ has to be above atmospheric pressure. Even though there's no water directly above it, for things to remain in balance, the water at the top of column $A$ has to be at the same pressure as the water in column $B$ at that same elevation, or else the higher-pressure water could push it out of the way. Essentially, the water at the top of column $A$ has some additional pressure, and would actually be pushing on the top of the container. This additional pressure at the top of column $A$ is easy to calculate in this case; it is just $p_{top A} = \rho g (h_2 - h_1)$. This means the pressure at the bottom of column $A$ is $$p_{bottom A} = p_{top A} + \rho g h_1 = \rho g (h_2 - h_1) + \rho g h_1 = \rho g h_2$$
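A quick numeric illustration of this bookkeeping (a sketch with made-up values for the density and column heights):

```python
rho, g = 1000.0, 9.81        # water density (kg/m^3), gravity (m/s^2)
h1, h2 = 0.3, 0.5            # column heights in metres (illustrative)

p_top_A = rho * g * (h2 - h1)          # gauge pressure at the top of column A
p_bottom_A = p_top_A + rho * g * h1    # add the hydrostatic head of column A
p_bottom_B = rho * g * h2              # gauge pressure at the bottom of column B

print(p_bottom_A, p_bottom_B)   # equal: both are rho*g*h2
```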
{ "language": "en", "url": "https://physics.stackexchange.com/questions/540343", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How can a red light photon be different from a blue light photon? How can photons have different energies if they have the same rest mass (zero) and same speed (speed of light)?
"Different color" is a perception in your brain. Red and blue are different sensations, and the root of the different sensations is some property of the photon that can produce different responses. In the case of the human eye, the property that makes the difference is the frequency (equivalently, the energy) of the photon. Photons of different energies stimulate the light sensors in the retina with different strengths: blue photons stimulate the blue-sensitive sensors more, red photons stimulate the red-sensitive sensors more, finally giving the different sensations of color in your brain. In very low light, such as at night, a fourth type of light sensor, which responds to visible-light photons of different energies far less selectively, is stimulated much more than the blue and red sensors; most of the visual signal sent to your brain then comes from that fourth type of sensor, and this is why you can't see color well in low light. All this is for human eyes. Other eyes (including bio-engineered eyes) can have different types of color sensors, and could even generate signals to the brain based not on the frequency/energy property but on other properties, like polarization.
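The photon property in question is easy to quantify with $E = hc/\lambda$; a sketch using typical red and blue wavelengths (650 nm and 450 nm are my illustrative choices):

```python
h = 6.626e-34        # Planck constant, J*s
c = 2.998e8          # speed of light, m/s
eV = 1.602e-19       # joules per electronvolt

def photon_energy_eV(wavelength_m):
    return h * c / wavelength_m / eV

E_red = photon_energy_eV(650e-9)
E_blue = photon_energy_eV(450e-9)
# same speed, same zero rest mass, different energy:
print(E_red, E_blue)   # roughly 1.9 eV vs 2.8 eV
```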
{ "language": "en", "url": "https://physics.stackexchange.com/questions/540485", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "29", "answer_count": 6, "answer_id": 5 }
Is Stokes' law, for drag force in fluids, accurate? In high school, I was taught that Stokes' law is dependent on assumption that drag force is proportional to velocity, viscosity and radius of the sphere (and the powers/exponents are evaluated using dimensional analysis). Is Stokes' law proven or is it just an assumption?
As indicated, Stokes' law applies when the inertial effects of the fluid are negligible: the pressure forces balance the viscous forces. It can be shown that this is true when the Reynolds number is less than about 1. In the general case, with reasonable assumptions, dimensional analysis can justify that the drag force on a sphere is of the form $F=(1/2)\rho v^2 \pi a^2C_x(Re)$, with $C_x(Re)$ a dimensionless function of the Reynolds number $Re=\rho v a/\eta$, where $\rho$ is the fluid density and $\eta$ its dynamic viscosity. If we add the idea that inertia does not intervene, the density must disappear from this relation, and the only solution is a function of the form $C_x(Re)=c/Re$ with $c$ a constant. What remains is $F=c'v\pi a \eta$, which is Stokes' law. Of course, the coefficient $c'=6\pi$ cannot be obtained by dimensional analysis.
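A numeric sanity check of the regime (a sketch; the sphere size and speed are illustrative values for a small particle settling in water, and $6\pi$ is the stated coefficient):

```python
import math

eta = 1.0e-3      # dynamic viscosity of water, Pa*s
rho = 1000.0      # fluid density, kg/m^3
a = 10e-6         # sphere radius, m
v = 1e-4          # speed, m/s

Re = rho * v * a / eta            # Reynolds number
F = 6 * math.pi * eta * a * v     # Stokes drag

print(Re)   # 1e-3, well below 1, so the inertia-free assumption holds
print(F)    # about 1.9e-11 N
```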
{ "language": "en", "url": "https://physics.stackexchange.com/questions/540590", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Decoupling of ghost fields in axial-gauge QCD After quantizing QCD using the Faddeev-Popov "prescription", we end up with the original QCD Lagrangian plus the gauge-fixing term, \begin{equation} -\frac{1}{2\alpha}(n\cdot A)^2, \end{equation} and the ghost fields action \begin{equation} S_\mathrm{g}(\phi,\bar{\phi},A)= \int\bar{\phi}(x)\bigl([n\cdot A(x),\phi(x)]+n\cdot\mathrm{d}\phi(x)\bigr)\,\mathrm{d}x. \end{equation} It is usually said that, using the axial gauge, the ghost fields decouple from the gauge field. As long as $A$ appears in the ghost fields action $S_\mathrm{g}(\phi,\bar{\phi},A)$, a ghost-gluon vertex is created, so ghosts don't go away. In $S_\mathrm{g}(\phi,\bar{\phi},A)$, $A$ appears in the product $n\cdot A$: I thought that the gauge condition $n\cdot A=0$ would help to eliminate this term, effectively removing $A$ from $S_\mathrm{g}(\phi,\bar{\phi},A)$. But wouldn't this mean that the gauge fixing term is zero, too? Surely it cannot be, or we would be back at the beginning of the whole gauge-fixing procedure. Also, the way the Faddeev-Popov prescription is usually presented in the literature, in order to "create" the gauge-fixing term, it requires a modification of the gauge condition $n\cdot A=0$ to $n\cdot A-\nu=0$ where $\nu$ is some $\mathrm{su}(N)$-valued function (just like $A$), then an integration on $\nu$ using a Gaussian weight, which in the end becomes the gauge-fixing term. But then $n\cdot A$ isn't zero, so the relative term in the ghost action shouldn't even cancel, if I'm guessing correctly. Exactly then how can I prove that the ghost fields really decouple?
In the path integral with a $R_{\xi}$-gauge-fixing term ${\cal L}_{GF}=-\frac{\chi^2}{2\xi}$, the axial gauge-fixing condition $\chi=n\cdot A\approx 0$ is only imposed in a quantum average sense. In general the gauge-fixing condition may be violated by quantum fluctuations, except in the Landau gauge $\xi=0^+$, where such quantum fluctuations are exponentially suppressed (in the Wick-rotated Euclidean path integral). Therefore, only in the Landau gauge $\xi=0^+$, we may remove $n\cdot A$ from the Faddeev-Popov (FP) term. In this case the FP ghosts decouple from the gluon-field, cf. OP's question.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/540696", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Finite barrier. Constant including minus or not? For a finite potential barrier of magnitude $V_0$ between $x=-a$ and $x=a$ we know that the time-independent Schrödinger equation is $\Psi'' +\frac{2m}{\hbar^2}E\Psi=0$ for $x<-a$. Let $E<V_0.$ Normally we set $k_1^2=\frac{2mE}{\hbar^2}$ and get $\Psi''+k_1^2\Psi=0$ which would give $$\Psi=A_1e^{ik_1x} + B_1e^{-ik_1x}.$$ But if we set $k_2^2=\frac{-2mE}{\hbar^2}$ we get $\Psi'' - k_2^2\Psi=0$ and the solution $$\Psi=A_2e^{k_2x} + B_2e^{-k_2x}.$$ Why is the second solution incorrect, while the first one is correct?
The difference is in the sign of $E$. The definition $k_2^2=-2mE/\hbar^2$ with $E>0$ implies that $k_2$ is pure imaginary, i.e. $k_2=i k_1$ with $k_1^2=+2mE/\hbar^2>0$. Then $e^{k_2 x}= e^{ik_1 x}$, so the second form describes the same solutions as the first. On the other hand, the definition $k_1^2=+2mE/\hbar^2$ gives $k_1$ real, so again $e^{i k_1x}$.
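A quick finite-difference check (a sketch with $k_1 = 1$ in arbitrary units) that $e^{k_2x} = e^{ik_1x}$ indeed satisfies $\Psi'' + k_1^2\Psi = 0$:

```python
import cmath

k1 = 1.0
k2 = 1j * k1          # k2^2 = -k1^2 forces k2 to be imaginary

def psi(x):
    return cmath.exp(k2 * x)      # = exp(i*k1*x)

# second derivative by central differences
x, h = 0.7, 1e-5
d2 = (psi(x + h) - 2 * psi(x) + psi(x - h)) / h**2

residual = d2 + k1**2 * psi(x)    # vanishes for a true solution
print(abs(residual))              # small: finite-difference error only
```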
{ "language": "en", "url": "https://physics.stackexchange.com/questions/540866", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Indefinite integral of a density function Suppose that $\rho(x)=\frac{dm}{dx}$ is the linear density of a rod. Can we find the mass at each point of the rod by integrating $\rho(x)$, so that $$m(x)=\int\rho(x)dx?$$ Can we do the same with probability density in quantum mechanics, so that $$P(x)=\int|\Psi|^{2}dx$$ (assuming a one-dimensional wavefunction)? In the case of probability density I think we can't, because the probability at every point would be 0, since position is a continuous variable. Any ideas?
It's neither possible to find the mass of a point nor the (quantum) probability at such a point. It is possible to find the mass of a small interval $\delta x$, located at $x$, as: $$m(x,x+\delta x)=\int_x^{x+\delta x}\rho(x)\text{d}x$$ Similarly: $$P(x,x+\delta x)=\int_x^{x+\delta x}|\Psi|^{2}\text{d}x$$ Note that in both cases, when $\delta x=0$, the integral returns $0$.
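A numeric illustration (a sketch using a normalized Gaussian for $|\Psi|^2$): the probability of the interval $[x, x+\delta x]$ shrinks to zero with $\delta x$, which is exactly why a single point carries zero probability.

```python
import math

def prob_density(x, sigma=1.0):
    """|Psi|^2 for a normalized Gaussian wavefunction."""
    return math.exp(-x**2 / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

def prob(x, dx, n=1000):
    """Midpoint-rule integral of |Psi|^2 over [x, x+dx]."""
    h = dx / n
    return sum(prob_density(x + (i + 0.5) * h) for i in range(n)) * h

for dx in (1.0, 0.1, 0.01, 0.001):
    print(dx, prob(0.0, dx))   # probability shrinks roughly linearly with dx
```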
{ "language": "en", "url": "https://physics.stackexchange.com/questions/541072", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Why do not we consider the topological term in Abelian gauge theory? The second Chern form $\epsilon^{\mu\nu\rho\sigma} F_{\mu\nu}F_{\rho\sigma}$ is topological in 4-dimensional spacetime. However, we usually only consider this term in non-Abelian gauge theory, but not in Abelian gauge theory. Is this term vanishing identically for Abelian gauge field? Somehow, I cannot see it. Or actually, we do consider it, e.g. in QED. But I never see any discussion on it.
For Abelian gauge theory $$\epsilon^{\mu\nu\lambda\rho}F_{\mu\nu} F_{\lambda\rho}=2\,\epsilon^{\mu\nu\lambda\rho}\partial_\mu(A_\nu F_{\lambda\rho}),$$ where the Bianchi identity $\epsilon^{\mu\nu\lambda\rho}\partial_\mu F_{\lambda\rho}=0$ has been used. Thus, the term in the action coming from this term can be converted to a surface integral of $A_\nu F_{\lambda\rho}$, which vanishes since $F_{\lambda\rho}$ vanishes on the surface at infinity. But for nonabelian gauge theories with gauge coupling $g$, this term is $$\sim \epsilon^{\mu\nu\lambda\rho} \partial_\mu(A_\nu^a\partial_\lambda A_\rho^a-\frac{g}{3}f_{bca}A_\nu^a A_\lambda^b A_\rho^c)$$ which does not vanish at infinity because $A^a_\mu$ need not vanish at infinity.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/541223", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Mass in different inertial frames EDIT: In the standard textbooks on classical mechanics that I know, after the notion of mass of a body is introduced, it is tacitly assumed that the mass of a body is the same in all inertial frames. Does this fact follow from other basic principles of classical mechanics (like the Galilean principle of relativity), or is it an independent experimental fact? A reference discussing this issue would be very helpful.
The idea from early treatments of special relativity that mass increases with velocity was superseded in general relativity and is better not used. It is a fundamental principle that the laws of physics are covariant - they are formulated using tensor (& vector & scalar invariant) quantities so as to be the same for all observers. Proper mass, or rest mass, is the invariant magnitude of the energy-momentum 4-vector $(E,\mathbf p)$ and satisfies (in units with $c=1$) $$m^2 = E^2 - \mathbf p^2. $$ There is no need for another concept of mass. There is no point in conflating energy with relativistic mass, since this only results from misapplying Newtonian equations instead of replacing them with relativistic tensor equations. We already have a good word, energy. There is no need to call it relativistic mass.
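A small numeric check of this statement (a sketch in units with $c = 1$; the boost speed is an arbitrary choice): under a boost, $E$ and $p$ change but $E^2 - \mathbf p^2$ does not.

```python
import math

def boost(E, p, v):
    """Lorentz boost of an (E, p) pair along the momentum axis, c = 1."""
    g = 1.0 / math.sqrt(1.0 - v * v)
    return g * (E + v * p), g * (p + v * E)

m = 1.0                     # rest mass
E, p = m, 0.0               # particle at rest
E2, p2 = boost(E, p, 0.8)   # the same particle seen from a frame in relative motion

print(E2, p2)                          # energy and momentum are frame-dependent
print(E**2 - p**2, E2**2 - p2**2)      # both equal m^2 = 1.0
```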
{ "language": "en", "url": "https://physics.stackexchange.com/questions/541322", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How do electrostatic air filters maintain their charge? 3M meets the N95 filtration specification on disposable respirators by constructing them with "electrostatically charged microfiber filter media." How does air filter fabric stay charged? Fun grade school demonstrations involving everything from balloons to Van de Graaff generators show how rubbing certain materials together can impart an electrostatic charge, and that attractive charge can be observed as hair, paper, and other items are drawn towards the charged material. But the same demonstrations show that the electrostatic charge dissipates on contact with the same items that are attracted to it. In fact, the charge will dissipate over time just through contact with air. I imagine that charged media can be sandwiched between two layers of electrostatic insulators to prevent discharge on contact with hair, skin, clothing, etc. But given that the sandwich has to be permeable to large volumes of air (for respiration), how is a charge maintained on the media for any significant length of time? (Related question: By what mechanism does electrostatic media aid in particulate filtration?)
The fibres are made from high-resistivity synthetic polymers called electrets, which can maintain a permanent dipole moment both on the surface and in the bulk. In many ways they are the electrostatic equivalent of permanent magnets. They are charged by placing excess charge on the surface and by producing permanent bulk dipoles during the manufacturing process: while the polymer is still molten, the material is subjected to a very high electric field (corona charging), and it is then allowed to cool, which locks the bulk dipoles into position. Because the manufactured fibres have such a high resistivity and such resistance to the adsorption of moisture, they can stay "electrised" (the electrostatic analogue of magnetised) for very many years, but they obviously have a finite lifetime when in use. The advantage of having the charged fibres is that the gap between fibres can be made larger (but not too large), which allows freer passage of air through the mask and makes breathing easier.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/541382", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Fourier Optics - Impulse Response of Free Space from Fresnel Transfer Function I am currently reading the chapter "Fourier Optics" in the book "Fundamentals of Photonics" by Saleh and Teich. However I am not able to follow one specific mathematical derivation. On page 111 the transfer function of free space is derived $$ H(\nu_x, \nu_y) = \text{exp}(-j 2 \pi d \sqrt{\lambda^{-2} - \nu_x^2 - \nu_y^2}).$$ $d$ is the distance the light travels from the input plane to the output plane, $\lambda$ is the wavelength, and $\nu_x$ and $\nu_y$ are the spatial frequency components. After that, this formula is simplified using the Fresnel approximation, which assumes that the frequency components $\nu_x$ and $\nu_y$ in the input wave are much smaller than the system bandwidth $\lambda^{-1}$. The resulting approximated transfer function is $$ H_{\text{Fresnel}}(\nu_x, \nu_y) = \text{exp}(j \pi \lambda d (\nu_x^2 + \nu_y^2)) \cdot \text{exp}(-j k d).$$ This still makes sense to me, everything is fine so far. However, after that they derive the impulse response of the system by applying the inverse Fourier transform to the transfer function $H_{\text{Fresnel}}$. The resulting function is $$h(x,y) \approx \dfrac{j}{\lambda d} \cdot \text{exp}(-j k d) \cdot \text{exp}(-j k \dfrac{x^2+y^2}{2 d}).$$ And honestly, I have absolutely no idea how they arrive at that expression. The inverse Fourier transform is $$h(x, y) \approx \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} H_{\text{Fresnel}}(\nu_x, \nu_y) \cdot \text{exp}(-j 2 \pi (\nu_x x + \nu_y y)) d\nu_x d\nu_y.$$ Small annotation: for some reason they flipped the signs in the Fourier transform compared to the standard notation. So the core question is: how did they solve this integral? There is a correspondence table at the end of the book, but I have no clue how it should help. Kind regards
Remember that for $a>0$ the Fresnel integral is $$ \int_{-\infty}^{\infty} e^{iax^2}\,dx = e^{i\pi/4} \sqrt{\frac \pi{a}}, $$ because of the need to push the contour off the real axis with $x= e^{i\pi/4}t$. Your integral is the product of two such Fresnel integrals, one in $x$ and one in $y$, and so you get $$ \left[e^{i\pi/4} \sqrt{\frac \pi{a}}\right]^2= \frac{\pi i}{a}. $$
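A numeric check of this value (a sketch: the oscillatory integral is regularized with a small damping $\varepsilon$, using $\int e^{(ia-\varepsilon)x^2}dx = \sqrt{\pi/(\varepsilon - ia)}$, which tends to $e^{i\pi/4}\sqrt{\pi/a}$ as $\varepsilon\to 0$; the values of $a$, $\varepsilon$ and the grid are my own choices):

```python
import cmath, math

a = 1.0
eps = 0.01

# Damped Fresnel integrand, summed with the midpoint rule:
L, n = 60.0, 120_000
h = 2 * L / n
s = sum(cmath.exp((1j * a - eps) * (-L + (i + 0.5) * h) ** 2)
        for i in range(n)) * h

exact_damped = cmath.sqrt(math.pi / (eps - 1j * a))    # Gaussian formula, continued
fresnel = cmath.exp(1j * math.pi / 4) * math.sqrt(math.pi / a)

print(s)              # matches exact_damped
print(exact_damped)   # close to fresnel for small eps
print(fresnel)
```

The phase of the limiting value is exactly $\pi/4$, which is the contour-rotation factor in the formula above.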
{ "language": "en", "url": "https://physics.stackexchange.com/questions/541467", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
What is the best way to imagine the difference between vectors and one-forms? I am studying the GR and reading the Schutz. He is defining the one-form as $\widetilde{p} = p_{\alpha}\widetilde{w}^{\alpha}$, and a vector $\vec{A} = A^{\beta}\vec{e}_{\beta}$ such that $$\widetilde{p}(\vec{A}) = p_{\alpha}A^{\beta}{w}^{\alpha}(e_{\beta})= p_{\alpha}A^{\beta}\delta^{\alpha}_{\beta}$$ for ${w}^{\alpha}(e_{\beta}) = \delta^{\alpha}_{\beta}$ The books define one-forms as functions that take vectors as their arguments. And I believe its a good definition but I am still confused. For me, it seems that there's not much difference between the two of them. For instance, in Minkowski space, the component transformation between vectors and one-forms are just defined as $$V_{\alpha} = \eta_{\alpha\beta}V^{\beta}$$ For instance if the component of a vector is $\vec{V} = (a,b,c,d)$, then its components in one-from is $\widetilde{V} = (-a,b,c,d,)$. The interesting thing is that in Euclidian space says they are equal which is clear from the above expression. Let me express what I understand. One-forms are like vectors but with different components. For instance in general we define a vector in the form of $\vec{A} = A^{\beta}\vec{e}_{\beta}$. So by using the basis vectors $\vec{e}_{\beta}$ we create new basis vectors such that $\widetilde{w}^{\alpha}$. So one-forms are just vectors but written on another basis?
To keep it simple, think of vectors (contravariant vectors) as column matrices, think of one-forms (covariant vectors) as row matrices (elements of the dual space), and think of the pairing $\widetilde p(\vec A)$ as the multiplication of a row matrix by a column matrix.
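In Minkowski coordinates this picture is literal (a sketch; lowering an index with $\eta = \mathrm{diag}(-1,1,1,1)$ turns the column $(a,b,c,d)$ into the row $(-a,b,c,d)$, matching the transformation quoted in the question):

```python
eta = [[-1, 0, 0, 0],
       [0, 1, 0, 0],
       [0, 0, 1, 0],
       [0, 0, 0, 1]]          # Minkowski metric, signature (-,+,+,+)

def lower(V):
    """Components of the one-form V_a = eta_ab V^b (column -> row)."""
    return [sum(eta[a][b] * V[b] for b in range(4)) for a in range(4)]

def pair(p_row, A_col):
    """One-form acting on a vector: row matrix times column matrix."""
    return sum(p_row[a] * A_col[a] for a in range(4))

V = [1, 2, 3, 4]
print(lower(V))            # [-1, 2, 3, 4]
print(pair(lower(V), V))   # -1 + 4 + 9 + 16 = 28, the invariant V.V
```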
{ "language": "en", "url": "https://physics.stackexchange.com/questions/541571", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 0 }
Derivation of Optical Absorption Coefficient in Semiconductors I have been researching how to derive an expression for the absorption coefficient in semiconductors. I know the absorption coefficient can be expressed as such $$\alpha = A(hf-E_g)^{n}$$ with $n = \frac{1}{2}$ and $n = 2$ for direct band gap and indirect band gap respectively. I have seen a few explanations via use of effective mass and momentum to infer this, but they all seem to take big steps with no clear and logical explanation. I am stumped on how to derive this equation. Any help would be much appreciated.
Starting with parabolic bands. The absorbed photon has energy $h\nu$ and generates an electron and a hole at energy levels $E_2$ and $E_1$ respectively. Energy and momentum balance imply, $$ h\nu = E_2 - E_1 = E_c(k) - E_v(k)$$ where $k$ is the momentum of the photo-generated electron and hole (it's the same for both carriers), and $m_c$ and $m_v$ are the conduction and valence band effective masses, $$ E_c(k) = E_g + \frac{\hbar^2 k^2 }{2m_c} $$ $$ E_v(k) = - \frac{\hbar^2 k^2 }{2m_v} $$ Solving these for $k$, $$ k^2 = \frac{2m_r}{\hbar^2}\left(h\nu - E_g\right) $$ where the reduced effective mass is defined as, $$ \frac{1}{m_r} = \frac{1}{m_c} + \frac{1}{m_v} $$ The parabolic bands define the density of states of the conduction band $\rho_c(E) \propto \left(E - E_g\right)^{1/2} $ and of the valence band $\rho_v(E)$; however, not all of these states can couple to a photon of energy $h\nu$, only states which conserve both energy and momentum. We need to know the optical joint density of states $\rho(\nu)$, which determines the electronic states coupled by a photon of energy $h\nu$. There are a number of ways of deriving this. The simplest is relating an infinitesimal change in the conduction band density of states at the electron energy to an infinitesimal change in the joint optical density of states at the photon energy, $$ \rho_c(E_2) dE_2 = \rho(\nu) d\nu $$ $$ \rho(\nu) = \frac{dE_2}{d\nu} \rho_c(E_2) $$ Therefore you end up with the joint optical density of states being proportional to, $$ \rho(\nu) \propto \left(h\nu - E_g\right)^{1/2} $$ The linear absorption coefficient $\alpha$ is proportional to the joint optical density of states, so $$ \alpha = A \left(h\nu - E_g\right)^{1/2} $$ The derivation for indirect semiconductors is much the same, but phonons must be included to conserve momentum. This accounts for the different exponent.
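A quick numeric check of the exponent (a sketch with illustrative numbers; a GaAs-like gap of 1.42 eV and an arbitrary prefactor are my own choices): data generated from the direct-gap law give back $n = 1/2$ as the slope of $\log\alpha$ versus $\log(h\nu - E_g)$, which is also how the exponent is usually extracted from measured absorption edges.

```python
import math

Eg = 1.42          # band gap in eV (GaAs-like, illustrative)
A = 1.0e4          # prefactor, arbitrary units

hv = [Eg + 0.01 * k for k in range(1, 40)]          # photon energies above the gap
alpha = [A * math.sqrt(e - Eg) for e in hv]         # direct-gap absorption edge

# least-squares slope of log(alpha) vs log(hv - Eg)
xs = [math.log(e - Eg) for e in hv]
ys = [math.log(a) for a in alpha]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))

print(slope)   # 0.5 for a direct gap (it would be 2 for an indirect gap)
```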
{ "language": "en", "url": "https://physics.stackexchange.com/questions/541669", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Why is the answer different with energy conservation vs forces? Q) An insect crawls up a hemispherical surface very slowly.The coeffiecient of friction is $\mu$ between surface and insect.If line joining the centre of hemispherical surface to the insect makes an angle $\alpha $ with the vertical, find the maximum possible value of $\alpha$. With the force method, the solution can be found as at the highest point the frictional force would be equal to gravitational force.Therefore, $$\mu mg\cos\alpha=mg \sin\alpha$$ $$\implies \cot \alpha=1/\mu$$ However, when I tried to do this by energy conservation,equating the total frictional force with potential energy the answer was different. Let $\theta$ be angle covered by it and $d\theta$ be a small angle covered by it. $$mgr(1-\cos\alpha)=\int_0^\alpha \mu (mg\cos\theta )*rd\theta$$ $$mgr(1-\cos\alpha)=\mu mgr \sin\alpha$$ $$2\sin^2\frac{\alpha}{2}=\mu 2\sin\frac{\alpha}{2}\cos\frac{\alpha}{2}$$ $$\cot\frac{\alpha}{2}=1/\mu$$ Why is the answer different if I used force or if i use energy conservation?
The difference in energy between the two static equilibrium positions may only be some potential energy difference. You may assume the friction force is $F=\mu N$ during sliding, where $\mu$ is the kinetic friction coefficient (taken equal to the static friction coefficient), but since this force is non-conservative, the work done by this force will not account for any potential energy change; instead, it's lost. The balance of energy between the two positions will thus only tell you that the change in potential energy is the work of the weight force, which is not helpful for the determination of $\alpha$.
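Numerically the two methods give genuinely different angles (a sketch with $\mu = 0.5$ as an illustrative value), which confirms they cannot both be right; only the force-balance angle satisfies the static condition:

```python
import math

mu = 0.5
alpha_force = math.atan(mu)        # from cot(alpha) = 1/mu
alpha_energy = 2 * math.atan(mu)   # from cot(alpha/2) = 1/mu (the flawed balance)

print(math.degrees(alpha_force))    # about 26.6 degrees
print(math.degrees(alpha_energy))   # about 53.1 degrees: the "energy" answer is larger

# sanity check of the correct static condition mu*cos(a) = sin(a):
print(mu * math.cos(alpha_force) - math.sin(alpha_force))   # essentially 0
```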
{ "language": "en", "url": "https://physics.stackexchange.com/questions/541882", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Solving the geodesic equation for a Schwarzschild metric Using the Schwarzschild solution is there a simple differential equation describing the four position of a particle influenced by a Schwarzschild metric using the geodesic equation. How would the simplest form look like?
In the Schwarzschild solution, we can write the geodesic equation in the form of the equations of motion $$ r^2 \dot\phi = h = \mathrm {const}$$ $$ {\dot r}^2 = {2\mu\over r} - \bigg(1-{2m\over r}\bigg){h^2\over r^2} $$ Einstein showed that solving these equations perturbs the Newtonian orbits and results in orbital precession. For comparison, the first equation is also true in Newtonian gravity (Kepler's second law). The equation in Newtonian gravity corresponding to the second is $$ {\dot r}^2 = {2\mu\over r} - {h^2\over r^2} + k$$ where $k$ is a constant.
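These equations can be integrated numerically (a sketch in units with $\mu = GM = 1$; differentiating the second equation gives $\ddot r = -\mu/r^2 + h^2/r^3 - 3mh^2/r^4$, with the energy constant entering through the initial conditions, and the weak-field perihelion advance per orbit should come out close to the standard value $6\pi m\mu/h^2$; the orbit parameters below are my own choices):

```python
import math

mu = 1.0                 # GM (Newtonian), geometrized units
m = 1.0e-3               # GM/c^2: small, so the orbit is weak-field
h = math.sqrt(0.96)      # specific angular momentum (a = 1, e = 0.2 ellipse)

def r_accel(r):
    # from differentiating rdot^2 = 2*mu/r - (1 - 2m/r)*h^2/r^2
    return -mu / r**2 + h**2 / r**3 - 3.0 * m * h**2 / r**4

def deriv(s):
    r, vr, phi = s
    return (vr, r_accel(r), h / r**2)

def rk4_step(s, dt):
    k1 = deriv(s)
    k2 = deriv(tuple(x + 0.5 * dt * k for x, k in zip(s, k1)))
    k3 = deriv(tuple(x + 0.5 * dt * k for x, k in zip(s, k2)))
    k4 = deriv(tuple(x + dt * k for x, k in zip(s, k3)))
    return tuple(x + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for x, a, b, c, d in zip(s, k1, k2, k3, k4))

# start exactly at a perihelion: r = 0.8, rdot = 0, phi = 0
s, dt = (0.8, 0.0, 0.0), 0.001
rs, phis = [s[0]], [s[2]]
for _ in range(9000):                 # a bit more than one radial period
    s = rk4_step(s, dt)
    rs.append(s[0])
    phis.append(s[2])

# next perihelion = first interior minimum of r; refine with a parabola in phi
i = next(k for k in range(1, len(rs) - 1)
         if rs[k] < rs[k - 1] and rs[k] < rs[k + 1])
x0, x1, x2 = phis[i - 1], phis[i], phis[i + 1]
y0, y1, y2 = rs[i - 1], rs[i], rs[i + 1]
num = (x1 - x0) ** 2 * (y1 - y2) - (x1 - x2) ** 2 * (y1 - y0)
den = (x1 - x0) * (y1 - y2) - (x1 - x2) * (y1 - y0)
phi_perihelion = x1 - 0.5 * num / den

precession = phi_perihelion - 2 * math.pi
predicted = 6 * math.pi * m * mu / h**2
print(precession, predicted)   # close to 0.0196 rad per orbit in both cases
```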
{ "language": "en", "url": "https://physics.stackexchange.com/questions/542012", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What do the fixed points of a RG equation mean and what are its importance? Can somebody explain to me what the fixed points of a renormalization group mean? What is their physical significance in the sense that why do we study them and what do we get to know from them?
Chiral anomaly gave an excellent answer in QFT language but I'll provide a more physical way of thinking about fixed points. Consider two distinct physical systems (for example a Ni or Fe magnet) that flow to the same fixed point under RG; then near the critical point they have similar behaviors in heat capacity or magnetic susceptibility measurements. More precisely, the critical exponents, which characterize how these quantities diverge, are the same. That is to say they belong to the same universality class, since they flow to the same fixed point. But you could say the fixed points don't exactly describe the physical systems we are studying, so why bother? Because studying Fe/Ni exactly is too complicated. We are content enough to correctly get the exponents that describe a non-generic measurement. Edits To be more explicit, the example of ferromagnetism I gave can be described by the Landau-Ginzburg hamiltonian $$\beta H = \int d^dx \, [\frac{t}{2}m^2 + \frac{K}{2} (\nabla m)^2+..] + u \int d^dx \,m^4$$ where $t$ is the reduced temperature and $u$ the interaction term. The RG flow diagram is given in Kardar, chapter 5; for $d<4$, the non-zero fixed point is called the Wilson-Fisher fixed point, which describes the paramagnet/ferromagnet phase transition. In that diagram, two relatively straight lines flow into the same WF fixed point. Real materials like Ni or Fe correspond to some curve in this $t$-$u$ parameter plane that people measure, and at some point both curves cross the critical line and flow into the WF fixed point.
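The flow-to-a-fixed-point behaviour is easy to see in a schematic one-loop flow (a sketch, not the actual Ni/Fe calculation: I use the standard form $du/d\ell = \epsilon u - u^2$ with illustrative coefficients, whose non-trivial fixed point sits at $u^* = \epsilon$):

```python
eps = 0.5           # epsilon = 4 - d, illustrative value

def flow(u0, steps=20_000, dl=0.001):
    """Euler-integrate the schematic one-loop flow du/dl = eps*u - u^2."""
    u = u0
    for _ in range(steps):
        u += dl * (eps * u - u * u)
    return u

# Two "different materials" = two different starting couplings:
u_a = flow(0.05)
u_b = flow(0.30)
print(u_a, u_b)   # both close to eps = 0.5: same fixed point, same universality class
```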
{ "language": "en", "url": "https://physics.stackexchange.com/questions/542183", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 0 }
Interesting generalisation about a parallel ray between 2 mirrors I have been self-studying from home nowadays and came across a result in a video lecture on 'Ray Optics': If an incident ray is parallel to one of two mirrors kept at an angle $\theta = \frac{\pi}{2n}$, where $n \in \mathbb{N}$, then the ray will fall normally on one of the mirrors and retrace its path. In the end, there will be $2n - 1$ total reflections. The teacher supports the statement with examples, taking $n = 2$ and $n = 3$. [Statement and example diagrams] I ask for a general proof of the statement. I feel there must be a geometrical insight that forces the incident ray to become normal to one of the mirrors. I have some experience with olympiads and watch a lot of 3Blue1Brown videos, so I feel like there must be something I am missing. I tried constructing a quadrant of a circle in the $xy$-plane, imagining one of my mirrors to be the $x$-axis and the other mirror to be formed where a light beam parallel to the $x$-axis intersects the circle. Then I tried using parametric coordinates to reach the final point where the ray becomes normal to one of the mirrors. But I was not able to think any further, and was also unable to define a point on the circumference where $\theta = \frac{\pi}{2n}$.
For a simple 'geometrical' answer the following sketch may suffice. Red lines represent the first mirror and its images; blue lines represent the second mirror and its images. I think the diagram is self-explanatory in confirming that there are $2n-1$ reflections, and that the 'middle' one is perpendicular to the mirror. But for a wordy explanation, see below the diagram. Because the initial angle is $\frac{\pi}{2n}$, there will be $4n$ mirrors (and mirror images) in the circle (one at each of $0$, $\frac{\pi}{2n}$, $\frac{2\pi}{2n}$, ..., $\frac{(4n-1)\pi}{2n}$), i.e. $2n$ of them in the half-circle. So any input beam parallel to one of the mirrors will intersect precisely $2n-1$ of the mirrors. Note that there will be a mirror image at $\frac{n\pi}{2n} = \frac{\pi}{2}$, which is perpendicular to the first mirror and also perpendicular to the light beam. That is the image point (or reflection) where the beam strikes normal to the mirror (the $n^{\text{th}}$ reflection). From the symmetry of the diagram it is easy to see that after that halfway point, the beam/mirror intersections as the beam exits are at the same angles as those that occurred on the way to the $n^{\text{th}}$ intersection. The same diagram can also be used to show that a light beam that is not parallel to one of the mirrors has a strict maximum of $2n$ reflections before exiting the 'mirror-maze'.
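The counting can also be checked by brute force. The sketch below traces an actual 2D ray through a wedge of angle $\theta = \frac{\pi}{2n}$, entering parallel to the mirror along the $x$-axis, and counts reflections until the ray leaves; the starting height and offset are arbitrary choices.

```python
import math

def count_reflections(n, eps=1e-9, t_max=1e6):
    """Trace a ray entering a wedge of angle theta = pi/(2n), parallel to
    the mirror along the x-axis, and count reflections until it exits."""
    theta = math.pi / (2 * n)
    # Mirror lines through the origin: M1 along the x-axis, M2 at angle theta.
    normals = [(0.0, 1.0), (-math.sin(theta), math.cos(theta))]
    y0 = 1.0
    p = (y0 / math.tan(theta) + 5.0, y0)  # start inside the wedge, well out
    d = (-1.0, 0.0)                       # heading toward the apex
    count = 0
    while True:
        # Nearest forward intersection with either mirror line.
        best_t, best_n = None, None
        for nx, ny in normals:
            denom = nx * d[0] + ny * d[1]
            if abs(denom) < eps:
                continue  # ray (numerically) parallel to this mirror
            t = -(nx * p[0] + ny * p[1]) / denom
            if eps < t < t_max and (best_t is None or t < best_t):
                best_t, best_n = t, (nx, ny)
        if best_t is None:
            return count  # no forward hit: the ray has left the wedge
        # Move to the hit point and reflect: d' = d - 2 (n . d) n
        p = (p[0] + best_t * d[0], p[1] + best_t * d[1])
        nd = best_n[0] * d[0] + best_n[1] * d[1]
        d = (d[0] - 2 * nd * best_n[0], d[1] - 2 * nd * best_n[1])
        count += 1

for n in (2, 3, 5):
    print(n, count_reflections(n))  # 3, 5, 9, i.e. 2n - 1
```

The `t_max` cutoff discards the numerically huge "grazing" intersection of the exiting ray (which leaves exactly parallel to the first mirror, up to rounding), so only genuine reflections are counted.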
{ "language": "en", "url": "https://physics.stackexchange.com/questions/542347", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Is boiling once more effective than boiling twice half the amount? In a real-life (non-ideal) situation, is boiling 1 litre of water in a kettle the same as boiling two times 500 millilitres in terms of consumed energy? In the latter, I assume the kettle is still hot from the previous boiling.
Intuition tells me that the only difference is due to heat lost while the water is at high temperature. Assuming the same kettle, the rate of heat loss at a given temperature should be the same in both cases. Because the smaller volume heats faster, it spends less time at high temperature, so its net energy loss is smaller. I integrated the equation for the rate of temperature increase and obtained a logarithmic law for the time taken. This confirmed my intuition, although there is not a great deal in it. (I did not attempt to fit realistic numbers, but the numbers are not relevant to the principle.)
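The integration mentioned above can be written out explicitly. A minimal sketch, assuming a constant heater power $P$ and a Newton-type loss $k(T - T_a)$; all the numbers below are illustrative placeholders, not fitted to a real kettle.

```python
import math

def boil_time(m, P=2000.0, k=5.0, c=4186.0, T0=20.0, Tb=100.0, Ta=20.0):
    """Closed form from integrating m*c*dT/dt = P - k*(T - Ta):
    the 'logarithmic law' for the time to heat mass m from T0 to Tb."""
    return (m * c / k) * math.log((P - k * (T0 - Ta)) / (P - k * (Tb - Ta)))

def boil_time_euler(m, P=2000.0, k=5.0, c=4186.0, T0=20.0, Tb=100.0,
                    Ta=20.0, dt=0.01):
    """Crude Euler integration of the same ODE, as a cross-check."""
    T, t = T0, 0.0
    while T < Tb:
        T += dt * (P - k * (T - Ta)) / (m * c)
        t += dt
    return t

print(boil_time(1.0), boil_time_euler(1.0))  # the two agree closely
```

The electric energy consumed is then just $P$ times this time, so any difference between the two strategies shows up directly in the heating times.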
{ "language": "en", "url": "https://physics.stackexchange.com/questions/542585", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Energy stored in an electric field I know the mathematical proof that $U=\frac{\epsilon_0}{2}\int\vec{E}^2\,dv$ is the energy stored in a given volume of space due to an electric field, but I don't understand what it actually means; I lack physical intuition for this result. For example, if I want to calculate the work needed to assemble a charged sphere of radius $R$, why is it required to integrate over the entire space, from radius $r=\infty$ to radius $r=R$? Every insight will help. Right now it is a purely mathematical result for me, and I'm not even sure how to use it properly.
One needs to perform work in order to build up a charge distribution. Doing work is an energy transfer, and this energy is stored in the form of the electric field. An easy example is a parallel-plate capacitor. Its capacitance is $$C=\frac{\epsilon_0A}{d},$$ and the energy supplied by moving the charge from one plate to the other is $$U=\frac{Q^2}{2C}=\frac{CV^2}{2}= \frac{\epsilon_0A}{d}\frac{E^2d^2}{2}=\frac{\epsilon_0E^2}{2}Ad,$$ i.e. exactly what we call the energy of the field in the volume $Ad$ between the plates.
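As a quick numerical sanity check of this identity (the plate dimensions and voltage below are arbitrary illustrative numbers):

```python
# Check that Q^2/(2C) equals the field-energy density times the volume
# for a parallel-plate capacitor.
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m

A = 0.01     # plate area, m^2 (illustrative)
d = 1e-3     # plate separation, m (illustrative)
V = 9.0      # applied voltage, volts (illustrative)

C = eps0 * A / d
Q = C * V
E = V / d    # uniform field between the plates

U_charge = Q**2 / (2 * C)              # energy from moving the charge
U_field = 0.5 * eps0 * E**2 * (A * d)  # energy density times volume

print(U_charge, U_field)  # the two agree
```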
{ "language": "en", "url": "https://physics.stackexchange.com/questions/542797", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Physics of the trikke tricycle I love my trikke, but I still do not understand what propels it forwards. It is very clear that the energy comes from my legs and not from my arms (I only have to touch the handle bar ever so lightly), but I do not see how my shifting weight from side to side can result in a forward pointing force. How is the side to side movement converted into a forward moving force? (And just to be clear: My trikke is not electric).
See the technical studies at http://www.lastufka.net/trikke/. The main physical principle is torque generated via angular-momentum transfer from the body to the Trikke. According to skiers, generating the motion feels very much like cross-country skiing, and both legs and arms can contribute to the angular-momentum transfer. The motion is more like that of a washing-machine agitator: side-to-side motion cannot by itself contribute, since all force components perpendicular to the wheels are canceled by friction, and only those parallel to the wheels (applied via torque) produce motion. "Leaning" (technically, cambering the front wheel via the steering column) helps the rider generate torque, but is not necessary.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/542899", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 2 }
Incident and reflected waves Let's consider a transverse wave on a string that is reflected at a wall. I understand that the speeds of the incident and reflected waves are equal. However, I don't understand why the frequencies of the two waves are the same. Can anyone please explain this fact?
The spatial boundary conditions on the fields must hold at all times, which is not possible unless the incident, reflected, and transmitted waves share the same temporal part, so that it "cancels out" for all times. You have a spatial boundary (or obstacle), so it changes the spatial part of the wave; but why should it change the temporal part?
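A small numerical sketch of that argument, for a string fixed at a wall at $x=0$ (the amplitude and frequencies are arbitrary illustrative numbers): taking $y_{\text{inc}} = A\sin(kx-\omega t)$ and a reflected wave $y_{\text{ref}} = A\sin(kx+\omega' t)$ of equal amplitude, the displacement at the wall vanishes for all times only when $\omega' = \omega$.

```python
import math

def max_wall_displacement(w_ref, w_inc=2.0, A=1.0):
    """Max |y_inc + y_ref| at the wall x = 0 over a window of times, for
    y_inc = A sin(k x - w_inc t) and y_ref = A sin(k x + w_ref t)."""
    times = [0.01 * i for i in range(4000)]
    return max(abs(A * math.sin(-w_inc * t) + A * math.sin(w_ref * t))
               for t in times)

print(max_wall_displacement(2.0))  # ~0: wall stays fixed at all times
print(max_wall_displacement(2.5))  # order 1: boundary condition violated
```

With matched frequencies the two terms cancel identically at $x=0$; with any mismatch there are times when the sum is of order the amplitude, so the fixed-end condition forces the reflected frequency to equal the incident one.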
{ "language": "en", "url": "https://physics.stackexchange.com/questions/542995", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }