Q | A | meta
---|---|---|
The work-energy principle for particles reversing direction I've been trying to find an answer to this question, but have really been stumped so far.
The work-energy principle says that work done on a single particle is equal to its change in kinetic energy. Now let's say a particle is moving in the +x direction at constant speed v and we perform work to reverse its direction so that it moves in the -x direction at constant speed v. This clearly requires work, but its change in kinetic energy is zero, because it has the same speed at the beginning and the end.
Sorry if there is an obvious answer to this, but I have been racking my brain over it until now!
| Let the particle of mass $m$ be the system under consideration.
Consider the particle initially moving in the $\hat x$ direction with velocity $\vec v_{\rm initial}= v \,\hat x$ at position $A$, with a constant external force $\vec F = F (-\hat x)$ acting on it in the $-\hat x$ direction.
The direction of motion is reversed at position $B$.
When the particle reaches position $A$ again it has velocity $\vec v_{\rm final}= v(-\hat x)$.
When going from $A$ to $B$ the work done on the particle by the external force is $\vec F \cdot \Delta \vec x =[F(-\hat x)]\cdot [\Delta x \,\hat x] = - F\,\Delta x$
The negative sign means that the force is doing negative work on the particle which can be interpreted as the particle doing positive work on the force (ie the kinetic energy of the particle has decreased).
When going from $B$ to $A$ the work done on the particle by the external force is $\vec F \cdot \Delta \vec x =[F(-\hat x)]\cdot [\Delta x \,(-\hat x)] = + F\,\Delta x$
The positive sign means that the force is doing positive work on the particle (ie the kinetic energy of the particle has increased).
So the total work done by the external force is $(-F\Delta x)+(+F\Delta x) =0$ and this is equal to the change in kinetic energy of the system $\frac 12 mv^2 -\frac 12 mv^2 =0$
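As a quick numerical check of this bookkeeping (the force and displacement magnitudes below are arbitrary illustrative values):
F, dx = 2.0, 5.0                 # magnitude of the force and of the displacement from A to B
W_AB = (-F) * (+dx)              # force along -x, displacement along +x
W_BA = (-F) * (-dx)              # force along -x, displacement along -x
print(W_AB, W_BA, W_AB + W_BA)   # -10.0, 10.0, 0.0 -> the total work is zero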
An example of such motion, although not with a constant force, is a particle executing simple harmonic motion.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/424821",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Degenerate parametric amplifier: quadratures The degenerate parametric amplifier is described by the Hamiltonian:
$H=\hbar \omega a^\dagger a-i\hbar \chi /2 \left[e^{2i\omega t}a^2-e^{-2i\omega t}(a^\dagger)^2\right]$
where $a$ and $a^\dagger$ are just the annihilation and creation operators and $\chi$ is just a real constant.
If we define the quadratures as:
$X_1=a+a^\dagger \ \ \ \ \ \ \ \ \ ; \ \ \ \ \ \ \ \ \ X_2=a-a^\dagger$
How can we calculate the quadratic fluctuations (uncertainties) of these quadratures? In particular, I read that they satisfy the equations:
$(\Delta X_i)^2(t)=e^{2\chi t}(\Delta X_i)^2(0)$
I tried applying the Dirac (interaction) picture, since we can easily separate:
$H=\underbrace{\hbar \omega a^\dagger a}_{H_0}+\underbrace{-i\hbar \chi /2 \left[e^{2i\omega t}a^2-e^{-2i\omega t}(a^\dagger)^2\right]}_{H_1}$
where $H_0$ is just the Hamiltonian for the harmonic oscillator (with known solution) and $H_1$ is just a perturbation. This allows us to find the equations of motion for $a$ and $a^\dagger$, but I'm not sure how to get the form of $(\Delta X_i)^2 (t)$ shown above.
PS: I haven't studied time-dependent perturbation theory, so I'm not sure whether it's necessary to solve this problem.
| Commonly, the quadratures are defined as $X_1=(\hat{a}^{\dagger}+\hat{a})/2$ and $X_2=i(\hat{a}^{\dagger}-\hat{a})/2$. You can use the equations of motion for the operators $\hat{a}^{\dagger}$ and $\hat{a}$. Solve the differential equations for these, which are not complicated. Then use the quadrature operators. On page 41 of the paper (https://arxiv.org/abs/0901.3439), there is a process that could help you finish the calculation.
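For what it is worth, here is a minimal numerical sketch (not part of the linked paper) that checks the claimed exponential behaviour in a truncated Fock space. It assumes $\hbar=1$, the interaction-picture Hamiltonian $H_I=-i(\chi/2)\left[a^2-(a^\dagger)^2\right]$, and the quadrature normalization above; the cutoff and the value of $\chi$ are arbitrary.
import numpy as np
from scipy.linalg import expm

N, chi = 60, 0.3                           # Fock-space cutoff and squeezing rate (arbitrary)
a = np.diag(np.sqrt(np.arange(1, N)), 1)   # annihilation operator
ad = a.conj().T
X1 = (a + ad) / 2
X2 = 1j * (ad - a) / 2
H = -1j * (chi / 2) * (a @ a - ad @ ad)    # interaction-picture Hamiltonian
vac = np.zeros(N, dtype=complex); vac[0] = 1.0

def var(X, psi):
    mean = psi.conj() @ X @ psi
    return (psi.conj() @ X @ X @ psi - mean**2).real

for t in (0.0, 1.0, 2.0):
    psi = expm(-1j * H * t) @ vac
    print(t, var(X1, psi), np.exp(2*chi*t)/4,    # X1 variance grows as exp(+2 chi t)
             var(X2, psi), np.exp(-2*chi*t)/4)   # X2 variance is squeezed as exp(-2 chi t)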
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/424951",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is this statement of conservation of charge circular? According to Wikipedia:
A closed system is a physical system that does not allow certain types of transfers (such as transfer of mass and energy transfer) in or out of the system.
According to my textbook, the principle of conservation of charge is:
The algebraic sum of all the electric charges in any closed system is constant.
Isn't this circular logic? In terms of charge, a "closed system" is one in which charge can neither exit nor enter. If the charge neither exits nor enters, then of course the sum thereof stays constant.
Or is the principle saying that the only way for the sum of charge in a system to change is via transfer of charge in or out of the system? (In this case, wouldn't it make more sense to state the principle as "charge can neither be created nor destroyed"?)
| These statements are not circular but equivalent: you can assume one is true and the other follows. That is:
if
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/425013",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "33",
"answer_count": 6,
"answer_id": 1
} |
Is the reason the sunshine is 'extra bright' after rain due to refraction of the additional water in the air? Quite frequently after the sun comes out after rain I experience a 30 minute period where the sunshine is 'unusually bright'. Such that it makes my eyes water.
My question is: Is the reason the sunshine is 'extra bright' after rain due to refraction of the additional water in the air?
| It may not be direct sunlight that does this.
Any wet surfaces, particularly if the water has not yet had a chance to disperse and run off, will likely reflect more ambient light, as well as direct light at different angles, than they would when dry (when they'd absorb more, without the water to reflect). I would typically notice this as streets becoming much brighter after rain (it's both a curse and a blessing if you do amateur street photography, as I do).
Also consider that just before it rains the air is likely to be carrying more water vapor than after. It's hard to tell from your question, but there's also the issue of cloud cover - when it rains there is typically cloud overhead, and even on a generally cloudy day rain clouds will be darker and absorb more light.
As someone mentioned in the comments, human vision adapts to brightness, so it's worth mentioning some timescales for that. I'm not an expert in human vision, but timescales of 10-30 minutes would not be unusual from what I've read. So if it was a little darker due to cloud before the rain, and relatively brighter due to stronger reflections and/or less cloud after the rain, your eyes would take time to adjust. Note that this will vary quite a lot from person to person.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/425094",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
How do I calculate time dilation on GPS satellites due to Earth's mass? I found this answer which gives the formula
$$T_2 = \frac{T_0}{\sqrt{1-\frac{2GM}{c^2 R}}}$$
that should result in the time $T_2$ passed on a GPS satellite while $T_0$ seconds pass in the center of earth. I assume the following:
*$G\approx 6.674 \cdot 10^{-11} \frac{\text{N} \cdot \text{m}^2}{\text{kg}^2}$ is the gravitational constant
*$M = 5.97237 \cdot 10^{24} \text{kg}$ is the mass of Earth
*$c = 299792458 \text{m/s}$ is the speed of light,
*$R = 20180\,\text{km} + 6378\,\text{km}$ is the distance of the GPS satellite from Earth's center of mass.
Then I get:
\begin{align}
t_\Delta &= T_2 - T_0\\
&= \left (\frac{1}{\sqrt{1-\frac{2GM}{c^2 R}}} - 1 \right ) \cdot T_0\\
&\approx \left (\frac{1}{\sqrt{0.9999999996660677}} - 1 \right ) \cdot 86400s\\
&\approx 14.4\mu s
\end{align}
While it is in the same ballpark, it is still quite different from the 45$\mu s$ provided in the linked answer. Where is the mistake?
Python script
T0 = 24 * 60 * 60          # one day in seconds
G = 6.673e-11              # gravitational constant, N m^2 / kg^2
M = 5.97237e+24            # mass of Earth, kg
c = 299792458              # speed of light, m/s
R = 26558.16               # distance from Earth's center, km (converted to m below)
print((1/(1 - 2 * G * M / (c**2 * R * 10**3))**0.5 - 1 ) * T0)
| The expressions in the quoted answer are pretty misleading (I hesitate to say wrong, but, well). These are time dilations with respect to flat space, so with respect to an observer at infinity which is not moving with respect to objects in the gravitational field of Earth. What you actually need are the dilations with respect to an observer on the surface because those are the clocks you are comparing, not some hypothetical clock-at-infinity.
[I apologise in advance for the really casual sign conventions and names for things below: I am just improvising, badly. Sorry.]
So, if we assume that the Earth is not rotating (because I don't want to bother with the special-relativistic effect due to that as well as I am lazy and it is very small), clocks on the surface will run slow with respect to far-off clocks, and we can compute that rate (I am using $r$ for rates)
$$r_E = \frac{1}{\sqrt{1 - \frac{2GM}{c^2 R_E}}}$$
Where $R_E$ is the radius of the Earth.
Plugging in numbers we get
$$r_E \approx 1 + 6.961\times 10^{-10}$$
This is how much clocks on the surface run slow compared to 'stationary' clocks at infinity.
For the satellite, the gravitational correction (again, with respect to clocks at infinity) is
$$r_S \approx 1 + 1.670\times 10^{-10}$$
So the GR difference between the rate on the surface and the rate on the satellite is
$$r_S - r_E \approx -5.291\times 10^{-10}$$
Which means the satellite runs fast compared to us. But then we need to correct that by the special relativity factor which is (using $\rho$ because I need some new variable as I've chosen terrible names)
$$
\begin{align}
\rho_S &= \frac{1}{\sqrt{1-\frac{v^2}{c^2}}}\\
&\approx 1 + 8.352\times 10^{-11}
\end{align}
$$
And this is in the opposite direction (the satellite is slow compared to us), so the total difference in rates is
$$r_S - r_E + \rho_S - 1 \approx -4.456\times 10^{-10}$$
And multiplying by $86400\,\mathrm{s}$ to get a daily difference, we get
$$-38.5\,\frac{\mathrm{\mu s}}{\text{day}}$$
Which, I think, is about right.
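A short script reproducing these numbers (the orbital radius and speed of a GPS satellite are approximate values; the constants are the standard ones):
import math
G, M, c = 6.674e-11, 5.97237e24, 299792458.0
R_E, R_S, v = 6.371e6, 2.656e7, 3.874e3         # Earth radius (m), GPS orbital radius (m), GPS speed (m/s)

r_E = 1 / math.sqrt(1 - 2*G*M / (c**2 * R_E))   # surface clock relative to infinity
r_S = 1 / math.sqrt(1 - 2*G*M / (c**2 * R_S))   # satellite clock relative to infinity
rho_S = 1 / math.sqrt(1 - v**2 / c**2)          # special-relativistic factor

rate = (r_S - r_E) + (rho_S - 1)                # net fractional rate difference, as above
print(rate * 86400 * 1e6, "microseconds per day")   # about -38.5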
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/425560",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Forms of transformation Suppose $O$ is an object to be transformed, and $S$ is the transformation operator. Sometimes the transformation is in the form
\begin{equation}
O \rightarrow SO. \tag{1}
\end{equation}
But sometimes the transformation is in the form
\begin{equation}
O \rightarrow SOS^{-1}.\tag{2}
\end{equation}
I am confused. I know that there is some difference between these two cases; I just don't know what the difference is. What kind of objects transform in the first way, and what kind of objects transform in the second way? Is there any rule?
| (1) is a Lorentz transformation, while (2) is a similarity transformation.
Lorentz transformations include rotations and boosts. A similarity transformation is performed on a square matrix and leaves its characteristic polynomial, trace, and determinant invariant. The transformed matrix is similar to the original matrix in the sense that they represent the same linear map under two different bases. The matrix $S$ in (2) is the change-of-basis matrix. In quantum mechanics and quantum field theory, similarity transformations are mostly used to diagonalize a matrix and find its eigenvalues. See Wikipedia for the derivation of the transformation in form (2).
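A quick numerical illustration of the claim about form (2) - a similarity transformation preserves the trace, determinant and eigenvalues (the matrices below are arbitrary examples):
import numpy as np
rng = np.random.default_rng(0)
O = rng.standard_normal((3, 3))     # object to be transformed
S = rng.standard_normal((3, 3))     # any invertible change-of-basis matrix
O2 = S @ O @ np.linalg.inv(S)       # form (2)

print(np.trace(O), np.trace(O2))                # equal traces
print(np.linalg.det(O), np.linalg.det(O2))      # equal determinants
print(np.sort_complex(np.linalg.eigvals(O)))    # same eigenvalue spectrum
print(np.sort_complex(np.linalg.eigvals(O2)))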
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/425688",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Symmetry of the dielectric tensor In the book Principles of Optics by Max Born, in chapter XIV, the rate of change in the electric energy density $w_{e}$ is generalised to
\begin{equation}
\frac{dw_{e}}{dt} = \frac{1}{4\pi}\sum_{kl}\,E_{k}\epsilon_{kl}\dot{E}_{l}
\tag 1
\end{equation}
in order to take into account anisotropic media. It is said, however, that the right side of the equation above cannot be interpreted as the rate of change in the electric energy density unless
\begin{equation}
\frac{dw_{e}}{dt} = \frac{1}{4\pi}\sum_{kl}\,E_{k}\epsilon_{kl}\dot{E}_{l}\, = \frac{1}{8\pi}\sum_{kl}\,\epsilon_{kl}( E_{k}\dot{E}_{l}+ \dot{E}_{k}E_{l})
\tag 2
\end{equation}
that is, unless
\begin{equation}
\sum_{kl}\,\epsilon_{kl}( E_{k}\dot{E}_{l} - \dot{E}_{k}E_{l}) = 0
\tag 3
\end{equation}
which implies $\epsilon_{kl} = \epsilon_{lk}$, given that $k$ and $l$ are dummy indices.
Now, for isotropic media $\epsilon_{kl} = \epsilon\,\delta_{kl}$ and equation (1) is as expected. I don't understand, however, why this expression can only be identified with the change in the electric energy density if the requirement of equation (2) is satisfied. To me, it all seems like circular reasoning because you can only write the equation (2) if the tensor is symmetric to start with. Can you help me understand this reasoning?
| The requirement of permutation symmetry in couplings of this form is a pretty universal feature, and the core reason for it is that for the energy to be a well-defined function of the state variables, you need it to be path-independent.
This is easiest to see using a concrete example, so consider a 2D case in which the permittivity tensor reads
$$
\epsilon
= \begin{pmatrix} \epsilon_{xx} & \epsilon_{yx} \\ \epsilon_{xy} & \epsilon_{yy} \end{pmatrix}
= \begin{pmatrix} 0 & 0 \\ \epsilon_{xy} & 0 \end{pmatrix},
$$
and consider two processes that take $(E_x,E_y)$ from $(0,0)$ to $(E_0,E_0)$,
*via the leg $(E_x,E_y): (0,0) \to (0,E_0)\to (E_0,E_0)$, versus
*via the leg $(E_x,E_y): (0,0) \to (E_0,0)\to (E_0,E_0)$,
with each side of the square traversed uniformly over a time $T$.
In the first process, you have
$$
\frac{dw_{e}}{dt}
= \frac{1}{4\pi}\sum_{kl}\,E_{k}\epsilon_{kl}\dot{E}_{l}
= \frac{1}{4\pi}\,E_{x}\epsilon_{xy}\dot{E}_{y}
= 0
$$
on the first leg, because ${E}_x=0$, and on the second leg you have $\dot{E}_y=0$, so you also get
$$
\frac{dw_{e}}{dt}
= \frac{1}{4\pi}\sum_{kl}\,E_{k}\epsilon_{kl}\dot{E}_{l}
= \frac{1}{4\pi}\,E_{x}\epsilon_{xy}\dot{E}_{y}
= 0,
$$
and you conclude that $\Delta w_e=0$.
On the other hand, in the second process, you also have $\dot{E}_y=0$, so you also have $\frac{dw_{e}}{dt} =0$, but the closing side of the square is different, since
$$
\frac{dw_{e}}{dt}
= \frac{1}{4\pi}\sum_{kl}\,E_{k}\epsilon_{kl}\dot{E}_{l}
= \frac{1}{4\pi}\,E_{x}\epsilon_{xy}\dot{E}_{y}
= \frac{1}{4\pi}\,E_{0}\epsilon_{xy}\frac{E_0}{T}
= \frac{1}{4\pi}\,\frac{1}{T}\epsilon_{xy}E_{0}^2
\neq 0,
$$
and you conclude that $\Delta w_e = \frac{1}{4\pi}\epsilon_{xy}E_{0}^2\neq 0$.
As you can see, the coupling tensor I started with is inconsistent with $w_e$ being a function of the state variables. A slightly more formalized version of the same argument is enough to show that this property is feasible if and only if the coupling tensor is symmetric in each pair of indices.
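If it helps, here is a small numerical version of the same argument (not part of the original derivation): integrate $\frac{dw_e}{dt}=\frac{1}{4\pi}\sum_{kl}E_k\epsilon_{kl}\dot E_l$ along the two paths, with an example tensor whose only nonzero entry is the $E_x$-$\dot E_y$ coupling; all numerical values are arbitrary.
import numpy as np

eps = np.array([[0.0, 0.3],
                [0.0, 0.0]])        # only the E_x - dE_y/dt coupling is nonzero (not symmetric)
E0, steps = 1.0, 2000

def delta_w(corner):
    # integrate dw/dt along (0,0) -> corner -> (E0,E0), each leg traversed uniformly
    path = np.vstack([np.linspace([0.0, 0.0], corner, steps),
                      np.linspace(corner, [E0, E0], steps)])
    E, dE = path[:-1], np.diff(path, axis=0)
    return np.einsum('ik,kl,il->', E, eps, dE) / (4 * np.pi)

print(delta_w([0.0, E0]))   # first process:  0
print(delta_w([E0, 0.0]))   # second process: eps_xy * E0^2 / (4 pi), nonzero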
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/425820",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Binding energy between an $1s$ electron and its nucleus I've always thought that the binding energy decreases as the electron moves/jumps away from the nucleus.
Then when I see the radial probability distribution for $1s$ electron, there is a probability for finding the electron everywhere. Since the binding energy depends only on $n$, does this mean the binding energy remains constant no matter where the electron is found ?
If that's the case, kindly consider this situation :
1) A $1s$ electron is $100m$ away from the nucleus.
2) It absorbs a photon and jumps into the $2s$ orbital. Now can the electron in this orbital stay closer to the nucleus (say at $20\,$m)? If yes, isn't this counterintuitive? (How can the far-away $1s$ electron have greater binding energy compared to the closer $2s$ electron?)
| Before looking at the atom, consider a classical example of a comet on an elongated elliptical orbit around the Sun. While particles are not anything like planets, even this rough classical analogy seems to address your concern. The kinetic energy farther from the Sun is lower, but the total energy is still the same, and the kinetic energy is irrelevant to your question. You can catch one comet with a lower energy farther away and another with a higher energy closer to the Sun. Clearly the second one will be moving faster, but speed also is irrelevant to your question; only the total energy matters.
Please keep in mind that electrons are not like comets and don't actually "rotate" around the nucleus, so this example is just a rough analogy to get some helpful intuition.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/426020",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
How can the solutions to equations of motion be unique if it seems the same state can be arrived at through different histories? Let's assume we have a container, a jar, a can or whatever, which has a hole at its end. If there were water inside, via a differential equation we could calculate the time by which the container is empty.
But here is the thing: through the differential equation, with an initial condition, I should be able to know everything about the container: past, present and future.
But let's assume I come and I find the container empty. Then
*It could have always been empty
*It could have been emptied in the past before my arrival
So this means I am not able to know, actually, its whole story: past, present and future.
So it seems there is an absurdity in claiming that the solution of the differential equation is unique. Where am I wrong?
| Now imagine you have a jar, and there is a drop of water moving vertically behind the hole. Can you solve this one, provided you have the coordinates and the velocity of the drop? Yes, you can. The only difference is that the initial state of the jar is not enough for solving the (jar, water) system; you also need the information about the water.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/426445",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 14,
"answer_id": 13
} |
Applying the Heisenberg uncertainty principle to photons The speed of light is a universal constant, so we definitely know the speed of the photons. If we know the speed, then we should not have any information about their location, because of Heisenberg's uncertainty principle. But I'm one hundred percent sure when light goes through my window.
Why is this so?
| I would like to mention an interpretation that made these concepts easier for me to understand.
Imagine that your window shrank into approximately wavelength size. You would certainly observe diffraction phenomenon under this condition.
You can interpret it as an outcome of the uncertainty principle in the case where the X axis is parallel to the wall in which you installed your tiny window.
(After you decreased the uncertainty of photon position [on X axis], the photon increased its uncertainty of momentum [on X axis too] for the uncertainty principle to be conserved)
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/426674",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 5,
"answer_id": 1
} |
Why is it much more difficult to horizontally throw a toy balloon than a football? If you horizontally throw a sphere of radius $R$ it will feel, in this direction, a drag force due to air. Assume the drag is given by Stokes law, $F_D=6\pi\eta R v$, where $\eta$ is the air viscosity and $v$ is the horizontal speed. This force cannot "see" the internal structure of a toy balloon, a football or even a metal sphere. However, anyone who ever played with balls and toy balloons noticed that for the same throwing, the ball will have higher horizontal reach for the same time interval. Just think about someone kicking toy balloons and footballs and the distances reached in each case. How is the resistive force considerably greater for the toy balloon?
Even if we consider a quadratic drag, $bv^2$, I suppose the coefficient $b$ would depend only on the fluid and the geometry of the bodies. Again the drag would be equal.
Another way to put this question: How does the density of the sphere contribute for the resistive force?
| We know that $F = \frac{dp}{dt}$, where $p=mv$ is the momentum.
If both the balloon and the ball initially have the same speed when you throw them, then the ball will have more momentum than the balloon because it is more massive ($p=mv$).
So if the force of friction (drag) on both of them is the same, the rate of change of momentum is the same. But since the momentum of the balloon is very small, its momentum will go to zero faster.
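A rough numerical illustration of this point (not a rigorous model): the same quadratic drag force $bv^2$ acting on two spheres of the same size but very different mass. The drag coefficient, masses and initial speed below are made-up illustrative values.
b, v0, dt = 0.01, 10.0, 1e-4      # drag coefficient (kg/m), initial speed (m/s), time step (s)
for name, m in [("football (0.45 kg)", 0.45), ("balloon (0.003 kg)", 0.003)]:
    v, x = v0, 0.0
    for _ in range(int(2.0 / dt)):    # integrate m dv/dt = -b v^2 for 2 seconds
        v += -(b / m) * v * v * dt
        x += v * dt
    print(f"{name}: speed after 2 s = {v:.2f} m/s, distance covered = {x:.1f} m")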
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/426744",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
} |
What is a state variable and a full differential, e.g. in entropy? I am studying basic concepts of entropy and statistical physics. I have read a lot about what entropy is, that it is obtained from heat via an integrating factor, and about getting it as a full (exact) differential.
Anyway, what I am trying to grasp in all that story is what a full differential actually means here, and what it means for some variable to be a state variable (state function).
For example $dS=\delta Q/dT$.
I know I needed to find the integrating factor, which is temperature as a universal thermodynamic variable, and that I need one extensive and one intensive variable to get a state variable.
But I get confused when I actually start to think about what the state variable here really is and what it usually tells us.
It is the same with work, where $dV= \delta W/(-p)$.
Thanks for help :)
| The equation should read $dS=dQ_{rev}/T$, where the subscript rev refers exclusively to a reversible path between the initial and final states of the system. This reversible path does not necessarily have to bear any resemblance whatsoever to the actual process path between the initial and final states. If you don't apply the equation to a reversible path (which you may have to devise), the equation does not give the entropy change. Only for a reversible path is 1/T an integrating factor for dQ to obtain dS. For a primer on how to determine the entropy change for a system experiencing any process (whether reversible or irreversible), see the following link: https://www.physicsforums.com/insights/grandpa-chets-entropy-recipe/
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/426891",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
A simple question on finding the number of macro states for a system of two Einstein solids Consider two identical Einstein solids each with $N$ oscillators in thermal contact with each other and suppose that $$q_{\mathrm{total}}=q_A+q_B=2N$$
How many different macrostates are there (i.e. possible values for the total energy of solid A)?
My attempt and reasoning:
I imagined that we have two boxes A and B. I started by saying that the first macrostate is $q_A=2N, q_B=0$, the next is $q_A=2N-1, q_B=1$, and so on until $q_A=2N-2N=0, q_B=2N$. Therefore the total number of macrostates is $2N+1$.
Question: Is this correct? and if it is not, is there any mathematically rigorous way to describe it?
| This is more a question of how you define a macrostate in your model. In the conventional Einstein solid model, we consider the energy observable. The reasoning for this is that we are using the model to study thermal energy (specifically heat capacity), and so all internal energy is taken to be heat, and changes in heat are directly observable.
From that conventional perspective, with two Einstein solids in thermal contact, and the total energy fixed, we cannot define separate macrostates for each of the subsystems. If this were an actual physical system, as it evolved the two subsystems would constantly exchange heat energy and thus there would be no distinguishable macrostates with the same total energy.
Consistent with that view, the correct way of looking at your model is to consider both Einstein solids together as a single solid. In which case for a fixed total energy, there is only one macrostate.
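If one nevertheless labels each value of $q_A$ as a separate macrostate (the counting attempted in the question), a short enumeration, with a small $N$ chosen only for illustration, shows the $2N+1$ values together with the number of microstates $\Omega(q,N)=\binom{q+N-1}{N-1}$ compatible with each split:
from math import comb

N = 3                          # oscillators in each solid (small, for illustration)
q_total = 2 * N

def omega(q, N):
    # microstates of an Einstein solid with N oscillators and q energy units
    return comb(q + N - 1, N - 1)

splits = [(qA, q_total - qA) for qA in range(q_total + 1)]
print(len(splits), "possible values of q_A")           # 2N + 1 = 7
for qA, qB in splits:
    print(qA, qB, omega(qA, N) * omega(qB, N))         # microstates for that split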
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/426974",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Turning off an inductor: Experimental problems I am trying to measure the current in dependence of time in turning on and off an inductor.
If I choose an inductor with low inductance (36 mH), it works (almost) as expected for turning on, but for turning off it doesn't.
Here is my setup:
and here are the results (red curve for turning on, black curve for turning off):
I measured the resistance of the current path for turning off with a voltmeter to be about 10 ohms. In the case of turning on, the resistance is also approximately 10 ohms, as you can see from the current curve (in this case 5.2 volts).
What did I do wrong and how can I fix it?
Edit
Here are details of the setup:
*Ammeter: Sensor Cassy 2
*Voltage source: Peak tech 6150
*Inductor: Leybold Coil
| You need to provide a path for your current when the switch is open. See picture for details. At step 1, you have current. At step 2, your current dies out. So at step 3, when you expect your current, it is not there.
If you fix the circuit as in step 4,5,6, you are fine.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/427080",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
If an object moves at constant speed, does it necessarily have constant velocity? If an object moves at constant speed, does it necessarily have constant velocity?
| If an object is moving at a constant speed, it does not necessarily move with constant linear velocity, because linear velocity is speed together with a direction; if the direction is not constant then the linear velocity will vary.
e.g. An object moving with constant speed in a circular path certainly doesn't have constant linear velocity because its direction keeps changing, tangentially, at every instant. However, its angular velocity will be constant.
Please note that the speed and the linear velocity are both constant only if the object travels in a straight-line path without a u-turn.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/427482",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Force, work and the apparent disappearance of Mechanical Energy A man exerts force on a wall of bricks. The man must have consumed the energy he possessed (mechanical energy?) to exert the force. The man sweats and tires himself out but the wall does not move.
The force is given as $F = ma$ but no acceleration was produced in the wall. So, the force is zero even though force was exerted. The definition of force says that it moves, tends to move, stops or tends to stop the motion of a body.
How can the force be zero?
The work done is zero since displacement is zero. The Work-Energy principle says that work done on a body appears as a change in its kinetic or potential energy. Since work done is zero, no change in the energy of the wall occurs. Where did all of the energy the man spent go?
| First, it does not require energy to produce a force. A ladder can lean against a wall, exerting a force on it indefinitely without the expenditure of energy. A man exerting the same force sweats and gets tired, not because energy is required to produce force but simply because the human body is a very inefficient machine.
Your second paragraph makes the same mistake as in your previous question. As I explained there, the proper form of Newton’s 2nd law is $\Sigma F = ma$. You are neglecting the other forces acting on the wall. So the force exerted by the man is not zero, but there are other forces acting on the wall such that $\Sigma F=0$
For your third paragraph, the energy the man spent went to thermal energy. This is where energy usually goes for inefficient machines.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/427652",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Confusion about how an electron gun works I'm a little unclear about the charge balance aspect of an electron gun. Referring to this diagram and similar diagrams I've seen, what I don't get is: wouldn't the target of the electrons have to be connected to the positive anode so that the electrons fired at a target can be recycled if the electron gun needs to operate continuously? Is the target generally placed on the anode opening so it's connected to the positive?
| Think about it this way. The only reason this connection exists in the TV scenario is to hit a target for illumination/display purposes. In other applications, such as linear accelerators, we shoot the beam through a window into a vacuum waveguide filled with a resonant microwave frequency. This acceleration scenario creates energies on the megavolt scale, with the target held at ground. Crashing into a cooled tungsten target from this highly excited, accelerated state produces photons at high MV levels. If we make the target gold, we can maintain electron output at MV (megavolt) levels. This target is also at ground and attenuates the electron beam.
Note: not to be toyed with - these are fatal energies.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/427735",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 3
} |
What type of fields existed in the early universe? In quantum field theory we associate a field to every particle. So how many elementary fields exist in nature? Why are fields associated to particles different from fields associated to fundamental forces? And if fields are fundamental and have existed since the universe formed, why are we trying to develop other theories?
| The fundamental fields of the Standard Model are the various quarks (up, down, ..), leptons (electron, electron neutrino, mu, ..), gauge bosons and the Higgs boson, where the matter fields differ from the force fields by their statistics - gauge bosons are, well, bosonic, like the Higgs field, whereas matter fields are fermionic. It is not plausible that these fields existed literally 'since the universe formed': QFT is not complete as it does not include gravity, and it seems to be impossible to extend it in a naive way to make it do so.
Hence, it seems that the Standard Model fields should arise as some low-energy limit to a more fundamental theory of everything, which does include gravity. Which theory this should be is a matter of active research, although many think it is probably string theory.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/427841",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Differentials and small changes in thermodynamics This may seem like an elementary question, but I'm a bit confused right now about this. From the first and second laws of thermodynamics, and from the definition of enthalpy (per unit mass), we have the equation (as an example, and at constant pressure):
$$
dh=c_p dT.
$$
But I often come across this other form:
$$
\Delta h=c_p\Delta T,
$$
but from the sources I've seen, it's not made clear that these deltas represent incremental changes. Is that the case? The second expression ought to be written
$$
\Delta h = \int_{T_i}^{T_f}c_p dT,
$$
right? In any case I'm not sure I understand that second form, because at which temperature is $c_p$ measured, $T_i$ or $T_i+\Delta T$?
| For a perfect gas, $c_p$ is actually independent of temperature, so both equations are equivalent. Some real gases actually show behavior very close to temperature independence of $c_p$, e.g. ammonia.
In addition, because the coefficients of temperature dependence of $c_p$ of most gases are not that large, over a small temperature rise it is valid to approximate the first equation with the second form.
Or you may just be reading about some approximate or computational method.
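As a quick numerical comparison of the exact and approximate forms, here is a sketch assuming an illustrative temperature-dependent $c_p(T)=a+bT$ (the coefficients are made up, not data for any particular gas):
import numpy as np
a, b = 1000.0, 0.2                      # J/(kg K) and J/(kg K^2), illustrative values
cp = lambda T: a + b * T
Ti, Tf = 300.0, 320.0                   # a small temperature rise

T = np.linspace(Ti, Tf, 10001)
dh_exact = np.trapz(cp(T), T)           # Delta h = integral of c_p dT
dh_at_Ti = cp(Ti) * (Tf - Ti)           # c_p evaluated at T_i
dh_at_Tf = cp(Tf) * (Tf - Ti)           # c_p evaluated at T_i + Delta T
print(dh_exact, dh_at_Ti, dh_at_Tf)     # all close for a small Delta T and mild c_p(T)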
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/427993",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
} |
Why is conservation of momentum not valid here?
To explain my confusion, I would provide the following system:
The two masses $m$ and $M$, with $M\gg m$, are moving towards each other(as directed by the arrows) with a common constant speed $V_0$. There is no friction between any two surfaces and all collisions are perfectly elastic.
I take all velocities +ve towards right and I call the velocity of $m$ after collision $V$. As $M\gg m$, there will be negligible change in the velocity of $M$ after collision. Also, as the collision is elastic, the velocity with which the two masses approach each other must be equal to the velocity with which they get separated.
Therefore,
\begin{align}
V_0-(-V_0)&=(-V_0)-V
\\V&=-3V_0
\end{align}
Now, this seems quite true to me.
But, when we apply conservation of momentum,
\begin{align}
mV_0-MV_0&=mV-MV_0
\\V&=V_0
\end{align}
So why is conservation of momentum not valid here?
|
As $M\gg m$, there will be negligible change in the velocity of $M$
after collision.
Yes, the change in the velocity of $M$ will be negligible, but what is conserved is not the velocity but the momentum and, since $M\gg m$, even a small change in the velocity of $M$ will translate in a relatively big change in its momentum.
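A quick numerical check of this, using the exact 1D elastic-collision formulas (the masses and $V_0$ below are arbitrary illustrative values):
m, M, V0 = 1.0, 1000.0, 1.0
v_m, v_M = V0, -V0                              # before: moving towards each other

v_m_after = ((m - M) * v_m + 2 * M * v_M) / (m + M)
v_M_after = ((M - m) * v_M + 2 * m * v_m) / (m + M)

print(v_m_after, v_M_after)                     # close to -3*V0, and close to (but not exactly) -V0
print(m * v_m + M * v_M, m * v_m_after + M * v_M_after)   # total momentum: identical before and after
print(M * (v_M_after - v_M), m * (v_m_after - v_m))       # M's momentum change is not negligible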
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/428095",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What is a complex phase shift? In a complex methods course I am taking, we were given an equation for a particular driven harmonic oscillator where the driving force is trigonometric. I have worked out the math and obtained an equation that tells me that the driving frequency at resonance is the natural frequency multiplied by i. My tutor tells me that this is a 90 degree phase shift, but I don't really understand why. Isn't a phase shift obtained by adding or subtracting 90 degrees? And how can a frequency, which is a measurable physical value, take on imaginary values? I would understand if we were talking about velocity. Because velocity has a direction, addition or scalar multiplication by a real value would not describe a 90 degree rotation of the vector. But frequency is a scalar quantity. What does it mean to have an imaginary frequency?
| Multiplying by $i$ is indeed a phase shift by 90 degrees.
Note that $i=e^{i\pi/2}$. Writing whatever driving signal in complex form, since it is sinusoidally driven, it will have an $e^{i\omega t}$ in it, multiplying by $i$ multiplies by $e^{i\pi/2}$, and when you multiply the exponentials you add the exponents to get $e^{i(\omega t+\pi/2)}$.
Taking the real part to get an answer that actually makes sense physically, you would have a $\cos(\omega t+\pi/2)$ dependency in your driving.
I think this is what you are asking, hope this helps.
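A one-line numerical check of the identity used above - multiplying $e^{i\omega t}$ by $i$ is the same as adding $\pi/2$ to the phase ($\omega$ and the sample times are arbitrary):
import numpy as np
w, t = 2.0, np.linspace(0, 3, 7)
lhs = 1j * np.exp(1j * w * t)
rhs = np.exp(1j * (w * t + np.pi / 2))
print(np.allclose(lhs, rhs))                             # True
print(np.allclose(lhs.real, np.cos(w * t + np.pi / 2)))  # the real part is cos(wt + pi/2)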
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/428223",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
How are RF Waves transmitted? What is the mode of transmission for RF waves at 1800 MHz? Is it ground wave propagation, line of sight propagation or atmospheric reflection (from the ionosphere)?
What are the different ways for different frequencies of RF waves?
| Frequency is essential in this discussion. What happens at 50 MHz is irrelevant at 1800 MHz. At 1800 MHz waves travel by line of sight. They can be impacted by tropospheric ducting and, I believe, also by thunderstorms.
Search for radio propagation in Wikipedia - insufficient prior research.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/428426",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Why does work depend on distance? So the formula for work is$$
\left[\text{work}\right] ~=~ \left[\text{force}\right] \, \times \, \left[\text{distance}\right]
\,.
$$
I'm trying to get an understanding of how this represents energy.
If I'm in a vacuum, and I push a block with a force of $1 \, \mathrm{N},$ it will move forwards infinitely. So as long as I wait long enough, the distance will keep increasing. This seems to imply that the longer I wait, the more work (energy) has been applied to the block.
I must be missing something, but I can't really pinpoint what it is.
It only really seems to make sense when I think of the opposite scenario: when slowing down a block that is (initially) going at a constant speed.
| I see several answers that all seem to explain it, but for someone trying to understand the why, perhaps it's best answered simply.
It is going to be a lot more "work" for me to push a heavy trashcan out the door and down the driveway than the amount of "work" for me to just push the trashcan out of the house.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/428525",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "32",
"answer_count": 11,
"answer_id": 6
} |
Probability of a vector lying in an interval Suppose we have a vector $\vec{v}$ with constant length that is equally likely to be pointing in any direction, specified by $\theta$ w.r.t the $x$ axis .
How can I compute the probability of the $x$ component of $\vec{v}$ lying in the range $v_{x}$ to $v_{x} + dx$?
By working and looking online, I have been able to figure out the following:
Let $|v|$ denote the magnitude of the vector $v$. Then,
$v_{x} = |v|\cos(\theta)$
We have $P(v_{x} \leq u) = P(v\cos(\theta) \leq u) = P(v\cos(\theta) \leq u,\ 0 \leq \theta \leq \pi) + P(v\cos(\theta) \leq u,\ \pi \leq \theta \leq 2\pi)$.
Let the variable $w = u/v$. Then, our probability $P(v_{x} \leq u)$ equals
$P(\theta \geq \arccos(w),\ 0 \leq \theta \leq \pi) + P(\theta \leq \arccos(w),\ \pi \leq \theta \leq 2\pi) = \frac{|\pi - \arccos(w)|}{\pi} $
However, I'm not sure how I'm supposed to proceed. I don't see how this relates, since it doesn't introduce $\mathop{dx}$? Could someone please help me from here?
| A slightly less laborious way of writing this is: Let $\theta$ be a random variable describing the angle of the vector relative to the $x$ axis. Then we can write the vector in Cartesian coordinates as:
$$
\vec{v} = |v|.(\cos \theta, \sin \theta)
$$
Thus the probability of $v_x < \vec{v}_x < v_x + dx$ is simply the probability that $v_x < |v| \cos \theta < v_x + dx$.
If you want to, you can rewrite this as: $\arccos{\frac{v_x}{|v|}} < \theta < \arccos{\frac{v_x + dx}{|v|}}$. This is assuming that $\theta$ is between $0$ and $\pi$.
To proceed any further you need to integrate between those two bounds, and you need to know the distribution of $\theta$. Also this might be a question more suited to mathematics.stackexchange.com.
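If it helps, here is a Monte Carlo sketch of the quantity being asked for, assuming $\theta$ is uniformly distributed: draw $\theta$, form $v_x=|v|\cos\theta$, and estimate the probability of landing in a small interval $[x, x+dx]$. The comparison density $1/(\pi\sqrt{|v|^2-x^2})$ is the standard result for a uniform angle.
import numpy as np
rng = np.random.default_rng(1)
v, x, dx = 1.0, 0.3, 0.01

theta = rng.uniform(0, 2 * np.pi, 10**6)
vx = v * np.cos(theta)
p_mc = np.mean((vx >= x) & (vx < x + dx))         # Monte Carlo estimate of the interval probability
p_density = dx / (np.pi * np.sqrt(v**2 - x**2))   # f(x) dx from the analytic density
print(p_mc, p_density)                            # both about 0.0033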
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/428666",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Is there any moon without its planet? Is there any planet without its star?
Is there any moon or any planet wandering in outer space without a definite orbit?
(The names moon and planet are used here only to convey size and spherical shape.)
|
João Bosco asked: Is there any planet without its star?
They are called rogue planets, and of course there can also be rogue moons: for example, a regular planet can lose its moon when a rogue planet comes too close and disrupts the system, so that the moon is ejected.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/428807",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why doesn't a charged particle moving with constant velocity produce electromagnetic waves? A charged particle moving with an acceleration produces electromagnetic waves. Why doesn't a charged particle moving with a constant velocity produce electromagnetic waves? As far I understand, the electric and magnetic fields in space will still be time-dependent, if a charged particle is moving with constant velocity, so they could have given rise to electromagnetic waves, but they don't.
Also, why do accelerating charged particles produce electromagnetic waves? What is Nature's intention behind this phenomena?
| Riemannium's answer tackles why you need acceleration to form EM waves. I will come at it from a different direction that I think gets at your question title as to why charges moving at a constant velocity do not produce EM waves. In the subsequent discussion all mentioned reference frames are inertial reference frames.
The easiest way to reason that charges moving at a constant velocity relative to us will not emit radiation is to observe that we can always boost to a frame moving along with the charge. Then we will just see a stationary charge with just a constant electric field.
Now, it wouldn't make sense for us not to see an EM wave in our frame while someone moving by at some speed relative to us would. If an EM wave exists in one inertial frame it must exist in all inertial frames. Therefore, it must be that a charge moving at a constant velocity (in some inertial reference frame) cannot produce an EM wave.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/428922",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 4,
"answer_id": 2
} |
Does ferromagnetic material decrease magnetic field intensity in surrounding area? I have a doubt regarding the behaviour of ferromagnetic materials. I know that magnetic fields are said to increase in intensity inside the bulk of a ferromagnetic material, or to converge into it.
Does this mean or imply that the magnetic field intensity B in the region surrounding the ferromagnetic material decreases relative to what it was when the ferromagnetic material was absent?
Here in the image, does the magnetic field intensity decrease in the regions below and above the iron piece?
| Yes, it does decrease outside of the iron piece. Imagine the new magnetic field introduced by the iron piece (would look like a bar magnet). When superimposing with the original magnetic field, you'll find cancellation outside the iron piece.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/429049",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
String theory and background independence I have read that string theory assumes strings live in spacetime defined by general relativity, which makes the theory background dependent (although general relativity is a background independent theory). Background independence dictates that spacetime emerge from ingredients more fundamental than spacetime. Quoting Brian Greene, “Then, the theories ingredients - be they strings, branes, loops, or something else discovered in the course of further research - coalesced to produce a familiar, large-scale spacetime” (Greene, The Fabric of the Cosmos, 2004: 491).
My question: why couldn’t spacetime just be spacetime, a fundamental entity? If so, string theory, based on general relativity, would not be a “great unsolved problem” facing string theory.
| You're asking a question that deserves an enormous amount of clarification. I just want to share some very illuminating references on the idea of "background independence" in the context of string theory.
What is background independence and how important is it?
At least two philosophers understood background independence
Is space and time emergent? ER-EPR correspondence adds a voice
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/429311",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 3
} |
Chirality of the Electromagnetic Field Tensor I have learned that chirality is a concept, that appears for $(A,B)$ representations of the Lorentz group, where $A\neq B$.
An example would be a Dirac spinor, corresponding to the representation $(\tfrac{1}{2},0)\oplus(0,\tfrac{1}{2})$, where we can identify left- and right-chiral components.
Wikipedia lists the electromagnetic field strength tensor $F_{\mu\nu}=\partial_\mu A_\nu-\partial_\nu A_\mu$ as transforming under the $(1,0)\oplus(0,1)$ representation of the Lorentz group.
Supposing my first sentence is true, where can I see chirality in the electromagnetic field strength tensor?
| There's a great existing answer, I just thought I'd check where the "rotation" comes from.
As you know, the electromagnetic field tensor decomposes under $SO(3)$ into two vectors, $\mathbf{E}$ and $\mathbf{B}$, which are preserved under rotation. In fact, any linear combination of $\mathbf{E}$ and $\mathbf{B}$ is preserved under rotations. Now if we add in the boosts, the specific combinations that are preserved under both rotations and boosts are
$$\mathbf{E} = \pm i \mathbf{B}.$$
These correspond to the $(1, 0)$ and $(0, 1)$ irreps; they are called self-dual and anti-self-dual fields.
Here we're working with complex-valued electromagnetic fields, i.e. we have
$$\mathbf{E} = \mathbf{E}_0 e^{ik \cdot x}, \quad \mathbf{B} = \pm i \mathbf{E}_0 e^{ik\cdot x}.$$
To get representative real-valued solutions, we may take the real part. For a wave propagating along $\hat{\mathbf{z}}$, guessing $\mathbf{E}_0 \propto (1, \pm i, 0)^T$, we find the self-dual and anti-self-dual fields correspond to light waves with clockwise and counterclockwise circular polarization, a clear manifestation of chirality. You can't boost or rotate a clockwise polarized wave into anything but a clockwise polarized wave.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/429782",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 3,
"answer_id": 0
} |
Deriving or building a Hamiltonian from a Density Matrix Is it possible to create a Hamiltonian if given a Density Matrix?
If you already have the Density Matrix, then is the Partition Function (Z) even needed?
This Q is not about physics. It's about an application of math to poorly defined and dynamic systems such as populations (any kind) and stock/bond portfolios (just another type of population).
I see a connection here.
| No. Suppose you have an admixture of spin-up/spin-down states:
$$
\rho=\left(\begin{array}{cc}
1-\alpha & 0 \\
0 &\alpha\end{array}\right)\, .
$$
There is no information about the evolution of the system, in the sense that there is no reason to suppose that this density matrix must evolve according to $H=\omega \sigma_z$ or $H=\omega\sigma_x$ or the more general $H=\omega \hat n\cdot\vec\sigma$ for that matter.
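A small numerical illustration of this point ($\alpha$, $\omega$ and $t$ below are arbitrary, and $\hbar=1$): the diagonal $\rho$ above is left unchanged by $H=\omega\sigma_z$ but not by $H=\omega\sigma_x$, so $\rho$ alone does not single out a Hamiltonian.
import numpy as np
from scipy.linalg import expm

alpha, w, t = 0.3, 1.0, 0.7
rho = np.diag([1 - alpha, alpha]).astype(complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)

def evolve(rho, H, t):
    U = expm(-1j * H * t)
    return U @ rho @ U.conj().T

print(np.allclose(evolve(rho, w * sz, t), rho))   # True: rho is stationary under omega*sigma_z
print(np.allclose(evolve(rho, w * sx, t), rho))   # False: the same rho evolves under omega*sigma_x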
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/429886",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Dependency of temperature acquired by a resistor on power A nichrome heating element across a 230 V supply consumes 1.5 kW of power and heats up to 750°C. A tungsten bulb across the same supply operates at a much higher temperature of 1600°C in order to emit light.
Does it mean that the tungsten bulb necessarily consumes greater Power?
| A typical incandescent light bulb with a tungsten filament consumes somewhere between $25$W and $100$W (much less than $1.5kW$) and reaches temperatures on the order of $2500^{\circ}C$ (much greater than $750^{\circ}C$).
So, why does it get hotter than a nichrome heating element?
It is because the temperature of a heated object depends not only on its power consumption, but also on its thermal resistance, which, in turn, depends on such factors as its surface area, air circulation, etc. The greater the thermal resistance of an object, the greater its temperature rise relative to the ambient temperature, given the same heat dissipation rate (which is close to the power consumption rate for the nichrome wire and somewhat lower for the tungsten filament, which radiates out some of the consumed power as light).
So, since, in this example, the power consumption of the tungsten bulb is substantially lower than the power consumption of the nichrome heating element, we have to conclude that the former must have much higher thermal resistance, e.g., because the diameter of the tungsten filament is smaller than the diameter of the nichrome wire.
In summary, the fact that a tungsten filament gets hotter than a nichrome heating element, does not mean that it consumes more power.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/429995",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Why are coordinate systems used in General Relativity if it is a background independent theory? I am studying topological manifolds as a prerequisite to studying General Relativity, and although this question is premature since I have not yet begun the latter, it is bothering me.
From basic physics I always heard that General Relativity is background independent. Is this the same thing as saying it can be done without coordinate systems or just that it must be done with a coordinate system but the one you use doesn't really matter?
| Pretty much the latter: coordinates are the tools we use to describe what is going on: there are no coordinates in nature and it does not really matter which coordinate system we use, so long as it's 'good': it should provide a suitably differentiable 1-1 map of the manifold (or an open subset of the manifold) into $\mathbb{R}^4$, so that we can talk about what is going on. Of course, in practice the choice of coordinates matters a lot, because you want to choose one that makes calculations practical: this is just the same thing as in other parts of physics: you don't really want to be working on a problem with spherical symmetry in cartesian coordinates, say.
Of course there are also things you can do in GR without using coordinates at all: only some (probably most!) calculations require coordinates.
However there is a slight caveat to this: the way manifolds are defined (at least to a physicist: mathematicians may have more abstract definitions) is by specifying that there are continuous 1-1 maps between open sets of the manifold and open sets of $\mathbb{R}^n$. These collections of maps are, in fact, coordinate systems for the manifold. So you could argue that there are, inherently, coordinate systems. But it does not matter what the coordinate systems are: you can choose any good set you like.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/430209",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Reversing magnet polarity to increase/decrease Eddy currents? I have a cast iron wheel with magnets around the inner radius as a braking mechanism. If I were to add additional magnets around the outer radius, would the amount of eddy currents increase or decrease if the polarity of the outer magnets was opposite?
EDIT:
Here's a diagram
where black circle, red and green rectangles depict the wheel, existing magnets and new magnets, respectively. The existing magnets have the S side facing the wheel. So my question is, if the green magnets have the N side facing the wheel, will Eddy currents increase or decrease? What if the S sides face the wheel? In fact, do the polarities of any of the magnet matter?
| If the magnets in the two sets were facing each other (i.e., if green outer magnets were shifted to the $6$ o'clock position), it would be pretty obvious that, in order to increase eddy currents, the outer magnets would have to be installed with their north poles facing the wheel, so that the magnetic fields of the two sets boosted each other rather than canceled each other.
When the outer magnets are located as shown on the diagram (at about $3$ o'clock), the interaction of the magnetic fields of the two sets is not as significant.
If the wheel was made out of aluminum or other non-ferromagnetic material, we could say that the polarity of the outer magnets would not matter, i.e., in either case, they would generate roughly the same additional eddy currents and similarly increase braking action.
In your example, though, the wheel is made out of cast iron, a ferromagnetic material, and, as such, it would bend the magnetic field lines and increase the interaction between the two sets. It is hard to predict the degree of this interaction, but, to be sure (and since it costs nothing), it makes sense to install the outer magnets with their north poles facing the wheel.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/430328",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How does particle physics use deep neural networks to find particles? Does anyone use deep learning - RNN, CNN or any other architecture of deep neural networks - to assess the standard model or to detect new or unseen particles? What's the status these days on this frontier?
| The short answer is no.
The verification process for the standard model involved comparing predictions furnished by that model with experimental data, mostly from particle accelerators, including some which were purpose-built to serve in this way. Neither the process of writing down the mathematical underpinnings for the model nor that of designing and building accelerators or the detectors used with them required neural networks.
One of the frontiers in this field is the search for a candidate particle to furnish the missing mass needed to account for the dynamics of spiral galaxies and the clumping behavior of galaxy clusters, groups, and supergroups. It is not clear (to me anyway) how neural networks would be useful in this arena.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/430466",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
What happens to a radioactive material's atom when it disintegrates? Suppose you initially had $2^n$ radioactive atoms (where $n$ is an integer). Now after a number of half-lives the number of atoms left becomes 1. Now what will happen to it - will it disintegrate so that the leftover is half an atom? And if the decay stops at that point, then the statement "the decaying of radioactive atoms would never end" would be wrong.
| Radioactivity does not mean that an atom disappears. It means that the atom splits into one or more different smaller atoms or fragments of atoms. Very little mass is lost. The mass of all the fragments is not much less than the mass of the original atoms.
When the last radioactive atom has decayed, the process of radioactive disintegration stops. It does not go on forever, as the mathematical model suggests. You cannot have fractions of an atom left over, and fractions of an atom cannot decay at any time. It is like the average Western family having 2.4 children: there are no families with 0.4 of a child.
If the fragments are unstable they will also decay, with a different half-life.
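A toy stochastic simulation of this point: with a whole number of atoms the decay terminates, unlike the continuous exponential model. The initial number of atoms and the per-step decay probability are arbitrary illustrative values.
import random
random.seed(0)
n_atoms = 2**10        # initial number of radioactive atoms
p_decay = 0.05         # probability that a given atom decays in one time step
step = 0

while n_atoms > 0:
    n_atoms -= sum(random.random() < p_decay for _ in range(n_atoms))
    step += 1
print("all atoms decayed after", step, "steps")   # always an integer count, never half an atom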
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/430764",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
What happens to gravity and spacetime when mass turns to energy? What will happen to the distorted space and time around a mass when it is converted into energy?
Will it go back to its original configuration (i.e. with $0$ gravity)?
Or does space time oscillate? Or is there something else that happens?
| There are several ways to convert mass into energy. The mass of a star e.g. decreases slowly due to nuclear fusion processes. In this case energy is radiated away which corresponds to a slight decrease of mass. As the mass of the star remains spherical symmetric during this process, so does its gravitational field and thus the spacetime around the star will not be distorted.
The complete conversion of mass into energy is possible by particle-antiparticle annihilation. If you perform this process in an ideal box (one that withstands the heat), the weight of the box will not change and neither will the spacetime around it.
Distortions of spacetime are caused by asymmetric processes (more technically if the mass quadrupole moment of a system changes over time) as during the merger of black holes, supernovae explosions and the like.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/430909",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Is $\sin\left[2\alpha\right]\cos\left[2\alpha\right]\ge0$ a valid restriction on the angles of the principal stresses in 2D elasticity? This question pertains to Elasticity: Tensor, Dyadic, and Engineering Approaches By: Pei Chi Chou, Nicholas J. Pagano, Section 1.4.
The objective under discussion is to find the directions of stationary normal stress. The following transformation equations have been established:
$$\sigma^{\overline{x}}=\sigma^{x}\cos^{2}\left[\alpha\right]+\sigma^{y}\sin^{2}\left[\alpha\right]+2\tau^{x}{}_{y}\cos\left[\alpha\right]\sin\left[\alpha\right],\tag{1.5}$$
$$\sigma^{\overline{y}}=\sigma^{x}\sin^{2}\left[\alpha\right]+\sigma^{y}\cos^{2}\left[\alpha\right]-2\tau^{x}{}_{y}\cos\left[\alpha\right]\sin\left[\alpha\right].\tag{1.7}$$
Using the trigonometric identities
$$\sin\left[2\alpha\right]=2\cos\left[\alpha\right]\sin\left[\alpha\right],\tag{1.8a}$$
$$\sin^{2}\left[\alpha\right]=\frac{1}{2}\left(1-\cos\left[2\alpha\right]\right),\tag{1.8b}$$
$$\cos^{2}\left[\alpha\right]=\frac{1}{2}\left(1+\cos\left[2\alpha\right]\right),\tag{1.8c}$$
these become
$$\sigma^{\overline{x}}=\frac{\sigma^{x}+\sigma^{y}}{2}+\left(\frac{\sigma^{x}-\sigma^{y}}{2}\cos\left[2\alpha\right]+\tau^{x}{}_{y}\sin\left[2\alpha\right]\right),\tag{1.9a}$$
$$\sigma^{\overline{y}}=\frac{\sigma^{x}+\sigma^{y}}{2}-\left(\frac{\sigma^{x}-\sigma^{y}}{2}\cos\left[2\alpha\right]+\tau^{x}{}_{y}\sin\left[2\alpha\right]\right).\tag{1.9b}$$
We set the derivative with respect to $\alpha$ of either of these equations ($\sigma^{\overline{x}}$ in this case) equal to zero
$$\left(\sigma^{x}-\sigma^{y}\right)\sin\left[2\alpha\right]=2\tau^{x}{}_{y}\cos\left[2\alpha\right],\tag{1.11}$$
and find the two roots $\left\{ 2\alpha_{1},2\alpha_{2}\right\}$ of the resulting trigonometric expression
$$\tan\left[2\alpha\right]=\frac{2\tau^{x}{}_{y}}{\sigma^{x}-\sigma^{y}}.\tag{1.12}$$
It follows that $2\alpha_{2}=2\alpha_{1}\pm\pi;$ thus $\alpha_{2}=\alpha_{1}\pm\frac{\pi}{2}.$ The sine and cosine of $2\alpha$ are
$$\sin\left[2\alpha\right]=\pm\frac{2\tau^{x}{}_{y}}{\sqrt{4\left(\tau^{x}{}_{y}\right)^{2}+\left(\sigma^{x}-\sigma^{y}\right)^{2}}},\tag{1.12a}$$
$$\cos\left[2\alpha\right]=\pm\frac{\sigma^{x}-\sigma^{y}}{\sqrt{4\left(\tau^{x}{}_{y}\right)^{2}+\left(\sigma^{x}-\sigma^{y}\right)^{2}}}.\tag{1.12b}$$
In the first sentence of page 10, the book claims that $\sin\left[2\alpha\right]$ and $\cos\left[2\alpha\right]$ are either both positive or both negative. That would restrict $\alpha$ to $0\le\alpha\le\frac{\pi}{4}$ or $\frac{\pi}{2}\le\alpha\le\frac{3\pi}{4}$. More significantly, it would require $\tau^{x}{}_{y}$ and $\sigma^{x}-\sigma^{y}$ to have the same arithmetic sign. I see no justification for either of those restrictions. Is the claim that $\sin\left[2\alpha\right]\cos\left[2\alpha\right]\ge0$ valid?
The assertion that arithmetic signs of $\sin\left[2\alpha\right]$ and $\cos\left[2\alpha\right]$ must match appears to contradict the discussion in the final paragraph on page 10 in which the case of
$$\frac{\pi}{2}<2\alpha\iff2\tau^{x}{}_{y}>0\wedge\left(\sigma^{x}-\sigma^{y}\right)<0$$
is considered. That clearly makes the signs of eq 1.12a and eq 1.12b different.
It might be the case that any system can be fully characterized by considering a range of angles which conforms to $\sin\left[2\alpha\right]\cos\left[2\alpha\right]\ge0$ , but that is not the claim made by the authors.
| Ref. 1 writes [admittedly somewhat confusingly]:
[...], and noting that the sine and cosine are either both plus or both minus, [...]
Ref. 1 does not say that sine and cosine are either both positive or both negative in eqs. (1.12a)-(1.12b), which would have been incorrect$^{\dagger}$. Rather, Ref. 1 is trying to say [that it follows from eq. (1.12)] that either both the upper $+$ signs in the $\pm$ symbol apply, or both the lower $-$ signs in the $\pm$ symbol apply, but a mixture with one upper $+$ sign and one lower $-$ sign is not allowed.
References:
*
*Pei Chi Chou & Nicholas J. Pagano, Elasticity: Tensor, Dyadic, and Engineering Approaches, 1967; p.9-10.
$^{\dagger}$ The variable $\alpha$ is $2\pi$-periodic.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/431038",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Could you have sand pipes like water pipes? It's common knowledge that sand behaves like water when in small grains. So can you make a pipe that carries sand in the same way pipes carry water? If not, is there another way you could?
Tricky for sand: most grains of sand are 'sharp', so they lock into each other and form jams in the pipe unless you have some fluid (e.g. air or water) carrying them along. There are types of sand with smooth, polished grains which flow more freely (e.g. desert sand), but these aren't used in construction, so there isn't much effort put into moving them around.
Smoother particles like grain are routinely moved around in pipes.
The other problem is how to pump them. Pumping requires a fluid that you can compress in the pump so there is a pressure difference that transfers force to other particles. Although it's easy to lift particles to the top of a pipe and have them flow down, it's less clear how you can pump solid sand particles up without using some carrier fluid like air or water.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/431485",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 4,
"answer_id": 2
} |
What causes clock drift in quartz oscillators? Usually, computer seem to use quartz oscillators. In contrast to atomic caesium clocks they seem to have a relatively big drift and thus we need protocols like NTP to correct them.
What causes this clock drift in quartz oscillators? Is it something that could be improved? Are there some fundamental properties stopping quartz oscillators from reaching some accuracy? And what is this accuracy?
| The number one factor limiting the long-term accuracy of quartz crystal oscillators is ambient temperature fluctuation. Crystal ovens exist for keeping quartz oscillators at a constant temperature for applications requiring high time keeping accuracy, but that's obviously not a practical solution for things like quartz oscillator watches.
...the oven-controlled crystal oscillator (OCXO) achieves the best frequency stability possible from a crystal. The short term frequency stability of OCXOs is typically $1 \times 10^{-12}$ over a few seconds, while the long term stability is limited to around $1 \times 10^{-8}$ (10 ppb) per year by aging of the crystal. Achieving better performance requires switching to an atomic frequency standard, such as a rubidium standard, caesium standard, or hydrogen maser. Another cheaper alternative is to discipline a crystal oscillator with a GPS time signal, creating a GPS-disciplined oscillator (GPSDO). Using a GPS receiver that can generate accurate time signals (down to within ~30 ns of UTC), a GPSDO can maintain oscillation accuracy of $10^{-13}$ for extended periods of time.
(Wikipedia: Crystal Oven)
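As a rough back-of-the-envelope check on what these stability figures mean for timekeeping, the sketch below converts a fractional frequency error into seconds of drift per year. The 20 ppm figure for an ordinary uncompensated watch crystal is an assumed typical value; the other two come from the quote above.

```python
# Rough drift estimates from fractional frequency stabilities.
seconds_per_year = 365.25 * 24 * 3600          # ~3.16e7 s

for name, frac in [("cheap watch crystal (~20 ppm, assumed)", 20e-6),
                   ("OCXO aging (~10 ppb per year)",           1e-8),
                   ("GPS-disciplined (~1e-13)",                1e-13)]:
    drift = frac * seconds_per_year
    print(f"{name:40s} -> about {drift:.2e} s of error per year")
```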
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/431683",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Once introduced will an electric and/or magnetic field live for ever? So if you generate an electric field or a magnetic field, will it live for ever? Because whenever you get rid of that field, for example by discharging a capacitor to get rid of an electric field, it will result in a changing magnetic field, and that will result in a changing electric field, and that will keep on going on its own. Does it mean then that, once introduced, an electric or magnetic field will become immortal? :)
|
Does it mean then that once introduced electric field or magnetic
field will become immortal?
Not necessarily - as mentioned in a comment and in praveen kr's answer, EM energy can be converted into other forms of energy - but, if it is not, it may get close to immortality. Take the light coming from stars that has been travelling for billions of years: that's pretty close to immortality, and it is not dead yet.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/431753",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 1
} |
How are particles in a collision chosen? In synchrotron particle colliders, how are the particles which are collided chosen? For the most part, collisions of different types of particles don't do anything like you might expect in a video game; there is no secret recipe list of cool things, each which require different types of particles. So, what makes certain particles more favorable than others?
| In rough lines, when one is planning a particle collider experiment, one has a theory which will be tested by the experiments in the collider.
There are two streams: discovery machines, such as the Tevatron and now the LHC, which validate theories by discovering the new particles those theories predict, and accuracy machines, such as LEP, the electron-positron collider, and the future ILC.
Hadronic collisions have much higher cross sections and, as the other answer explains, can reach, due to the mass of the hadrons, much higher energies, opening channels not available to the lower-energy (perforce) electron-positron colliders. But the theory is very messy because of QCD, and no accurate theory predictions can be checked or fitted. Electron-positron collisions have much simpler Feynman-diagram contributions in the calculations, and thus a theory can be fitted with small errors, as the standard model was fitted at LEP.
I think it was Feynman (or maybe somebody else; I like it as an analogy) who said: "if you want to study the inside of a clock you do not bang two clocks against each other and count the fallen gears, you use a screwdriver".
So it depends on what the people proposing the machines want to do. Go for accuracy, or for spectacular new particles, which would herald supersymmetry, the proposed extension of the standard model.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/432162",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
How does this action in this picture reduce $R$? (Angular Momentum) I was doing a course on Brilliant today when I came across this question:
In the picture, the question asks me what actions that must be done in order to maximize the distance I travel during takeoff from the curved ramp, and presents me with three choices: Stand up , Duck down or Do nothing.
I was really puzzled by this question so I went to get the explanation instead. This was what came up:
The correct answer was to stand up. However, after getting the explanation, I was still puzzled about how standing up will "shorten the radius of the curve around which she is traveling". (I do know that "shortening the radius of the curve around which she is traveling" will reduce R and will let the biker's velocity increase. I am puzzled about how that action can reduce R) Can anybody help?
$R$ is the distance from the center of the circle to the center of mass. By standing up, you raise the center of mass and consequently shorten $R$. The center of mass is the average position of the mass of a body. If a force acts on the center of mass of any body, the body will only accelerate linearly, not rotationally. The center of mass of a person is roughly in their middle, i.e. at about half their height. For a kneeling person, that might be 50 cm from the ground, but for a standing person 80-90 cm. $R$ is the only relevant distance in this system when calculating the angular momentum, so shortening it while keeping the angular momentum $L$ constant in fact increases $v$.
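A minimal numerical illustration of the same point, with made-up numbers: holding the angular momentum $L = mvR$ fixed while shortening $R$ necessarily raises $v$.

```python
# With angular momentum L = m * v * R held fixed, shortening R raises v.
# All numbers below are illustrative only.
m = 60.0            # kg, rider plus bike
R1, v1 = 2.0, 5.0   # m, m/s while crouched
L = m * v1 * R1

R2 = 1.7            # m, after standing up (centre of mass closer to the ramp's centre)
v2 = L / (m * R2)
print(f"v goes from {v1:.2f} m/s to {v2:.2f} m/s")   # ~5.88 m/s
```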
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/432448",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Why are the Lyapunov and Lindeberg Central Limit Theorem conditions often satisfied in the real world? Some background for the question.
I've been trying to understand why so many things have a Gaussian Distribution. There are a lot of questions about this on StackExchange but none of them were answered in sufficient detail to satisfy me.
First, I know that oftentimes people model phenomena as Gaussian when they are not, to make the math easier. I am not asking about these. I am wondering why so many phenomena actually are Gaussian or approximately Gaussian.
Second, people often say a Gaussian maximizes entropy because energy is conserved and is quadratic ($E=\frac{1}{2}mv^2$). However, this only means that velocity distributions for systems in thermal equilibrium are Gaussian, whereas many things besides velocity, such as human height, are also Gaussian in the real world.
Third, the Central Limit Theorem is often put as an explanation. People claim that the sum of independent random variables tends to have a Gaussian Distribution so long as it satisfies certain conditions. I believe they are referring to the Lyapunov and Lindeberg variants of the Central Limit Theorem.
Which brings me to my actual question: Why are the conditions of the Lyapunov and Lindeberg Central Limit Theorem often satisfied in the real world?
There are two reasons that come to mind: firstly, many real-world phenomena are collective actions with many, many steps involved. Brownian motion is a good example, where thousands of collisions with a small particle can result in its random jiggles seen in a microscope. The familiar pin-board sorting of balls into a Pascal's-triangle distribution also has rank after rank of identical disturbances (random bounces), and approximates well a Gaussian distribution.
The second reason is more subtle: we make measurements with instruments that we calibrate, and calibration takes out the zero-offset and the linear-with-independent-variable terms in the errors, but retains the second and higher moments (sigma-squared and such); when we consider only small errors, the error distribution dominated by the lowest nonvanishing moment, the second moment, is one of those things we always assume is Gaussian. The fact is, in the limit of small random errors, when only the second moment matters, the Gaussian distribution result is identical to the 'real' result of ANY other distribution.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/432594",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
Closed field lines in case of a Bar magnet Field lines in the case of charges go from +ve to -ve, but in the case of a magnet they don't start or stop anywhere. They form closed loops. Is this a consequence of the fact that single poles don't exist, or is something else going on here?
| In this answer I explained that macroscopic magnetic and electric fields are created by the alignment of the magnetic dipole moments of subatomic particles respectively a charge separation.
... incase of magnet, they (the field lines) don't start or stop anywhere. They form closed loops. Is this consequence of the fact that single poles dont exist or something else is going on here?
In a closer view, the magnetic field of a permanent magnet is the sum of the aligned magnetic dipole moments of the involved electrons and protons. They form a common magnetic field. Nature in our surroundings is made of two kinds of electric charge and of closed magnetic loops from subatomic particles. Nature simply does not provide magnetic monopoles. Only by understanding what's going on inside electrons, and perhaps by improving the quark model, might it be possible to settle the question of magnetic monopoles.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/432741",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Are there limitations to the type of paths needed in the path integral formulation of quantum mechanics? In some places it is stated that one needs to include all paths in the path integral approach to quantum mechanics. But in the implementations I have seen one has been content with paths that goes in small steps along an operator, and not included paths that for instance goes to another galaxy and draws Mona Lisa and then goes somewhere else et cetera et cetera and then goes to the end point. So I assume there is some guiding principle or perhaps some bounds that show what kind of paths and how many paths are sufficient to bring the error down to an acceptable level?
It seems reasonable to me that the particle moves slower than the speed of light, for example. And that it doesn't teleport or branch off into fewer/more trajectories (unless that is needed for chemistry).
It depends on the Lagrangian. For physical systems, the Lagrangian has a kinetic-energy part and another part following from some potential energy. In such systems the kinetic term has a particular property: it is quadratic in velocity. When potential energy is absent - as in the case of a free particle, for example - the action integral is Gaussian. In that case only a small set of trajectories, those near the maximum of the Gaussian density, contributes to the value of the integral.
When potential-energy terms are present, such a potential usually has a maximum, etc. Most of the time such a maximum may be approximated by a parabola (quadratic terms again).
In both cases stationary-phase techniques are used, and only trajectories close to the classical trajectory contribute to the sum.
In purely mathematical cases it may be required to include all the paths, as you mentioned. And of course for some systems, even physical ones, there are more sophisticated examples, where for example the potential-energy part is flat, or the system has not only potential-energy terms but also some topological constraints. In such cases various subsets of the set of all possible trajectories have to be included in the calculations.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/432853",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Uniaxial stress question Let's have a rectangular profiled bar. Let us introduce force $\vec{F}$ which pull the bar apart. In the picture below let us make a virtual horizontal cut $A$.
Well, everything is in the picture. Nothing fancy. But the part I'm stuck with is this:
Let's, instead of cut $A$, make a cut $B$ which will be perpendicular to $A$'s normal. That is, $B$'s normal is perpendicular to $\vec{F}$. From my point of view, the force $\vec{F}$ will now be shearing plane $B$. But, of course, every textbook says that there will be NO stress (neither normal nor tangential) on the plane $B$.
And that's where I'm stuck: My intuition says that $\vec{F}$ will shear $B$, but theory says -- it will not.
I guess my problem lies in the fact that I don't understand why tractions (forces) on cuts with different normals can't add up. But nowhere have I seen any thorough explanation of this inability to compare tractions on different cuts.
Please, help.
I think the force is assumed to be uniformly distributed over the cross section. If uniformly distributed, there will be no shearing effect. But if it is applied at a point then the analysis becomes complicated. Refer to the image attached.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/433371",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
} |
Quantumness of gravity and Padmanbhan insight on holography Padmanabhan (and also Klinkhammer and others) argues that even classical gravity is already "quantum"...since:
$$F=G\dfrac{Mm}{r^2}=\dfrac{L_p^2 c^3 Mm}{\hbar r^2}$$
Is this "naive" argument right? And related to this, how should we understand holography in newtonian gravity, provided it makes sense?
References:
*
*https://arxiv.org/pdf/0912.3165.pdf
*https://arxiv.org/pdf/gr-qc/0703009.pdf
*https://arxiv.org/pdf/1006.2094.pdf
As the Planck length $\ell_{\mathrm {P}}$ is defined in terms of the gravitational constant: ${\displaystyle \ell _{\mathrm {P} }={\sqrt {\frac {\hbar G}{c^{3}}}}}$, then yes, trivially we have $G = \ell_{\mathrm {P}}^2c^3/\hbar$. This in no way implies that classical gravity is 'quantum', at least not in any sense in which the term quantum is normally used.
And it doesn't seem like those papers are suggesting that. Instead it is talking about an alternative perspective where the Planck length is fundamental instead of $G$. Quoting from the third paper:
Moreover, having a new fundamental constant $l$ may help in resolving
a potential problem of Verlinde’s approach regarding the total entropy of a general
equipotential screen.
Being able to reconstruct the classical limit is the minimum one would expect if entropic gravity is true.
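For completeness, a quick numerical check of the identity quoted in the question, using CODATA values:

```python
# Numerical check of G = l_P**2 * c**3 / hbar, i.e. l_P = sqrt(hbar*G/c**3).
hbar = 1.054571817e-34   # J s
G    = 6.67430e-11       # m^3 kg^-1 s^-2
c    = 2.99792458e8      # m/s

l_P = (hbar * G / c**3) ** 0.5
print(l_P)                      # ~1.616e-35 m
print(l_P**2 * c**3 / hbar)     # recovers G ~ 6.674e-11
```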
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/433520",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How do base kets satisfy Schrödinger's equation in Schrödinger picture and why don't they evolve with time? According to Sakurai, eigenvalue equation for an operator $A$, $A|a'\rangle=a'|a'\rangle$. In the Schrödinger picture, $A$ does not change, so the base kets, obtained as the solutions to this eigenvalue equation at t=0, for instance, must remain unchanged.
*
*Since base kets do not evolve with time $|a',t\rangle=|a'\rangle$ and is independent of t.
Schrödinger equation
$$i\hbar\frac{\partial |a',t\rangle}{\partial t}=H|a',t\rangle,$$
the LHS is zero and RHS is non-zero. Why is the Schrödinger equation not satisfied?
*Suppose $A$ commutes with $H$ (Hamiltonian).
$A|a'\rangle=a'|a'\rangle$ and evolution operator is $U(t,0)=\exp(-\frac{iHt}{\hbar})$
$$UA|a'\rangle=Ua'|a'\rangle$$
Since $H$ and $A$ commute, $U$ and $A$ also commute.
$$AU|a'\rangle=a'U|a'\rangle$$
So the eigenvalue remains same and eigenket is now $U|a'\rangle$ and evolves with time, which reduces to $|a'\rangle$ at t=0.
So, I can conclude that base kets evolve with time when $A$ commutes with the Hamiltonian. This has the additional advantage that the Schrödinger equation is now satisfied.
As stated in the book, the base kets do not change in the Schrödinger picture. Is this statement wrong in the above case?
| Only kets that represent physical systems ("state vectors") satisfy the Schrodinger equation. Basis kets don't represent physical systems, but just a system of coordinates, so they don't.
Your question is analogous to asking why the coordinates of a random point in space don't satisfy Hamilton's or the Euler-Lagrange equations. There's just nothing there to time-evolve.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/433651",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 1
} |
Fire-powered thrusters? So recently I have been working on a science project looking for ideas that could possibly help in space expeditions, and one of my ideas would be a rocket that would be powered by flame. Does anyone here have an answer?
What you describe is exactly how rocket engines work today: a very violent chemical reaction inside a rocket motor causes extremely hot gases (there's your flame!) to fly out of the exit nozzle at tremendous speed. The pressure forces that accelerated the hot gas produce a reaction force on the motor which equals the motor's thrust.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/433945",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What does the $R$ stand for in $R_\xi$ gauge? The $R_\xi$ gauge fixing condition is a term that can be added to a Lagrangian to choose a certain gauge:
$$
\delta\mathcal L = -\frac{1}{2\xi}(\partial_\mu A^\mu)^2
$$
Here, $\xi$ is the parameter that decides the gauge, but where does the $R$ come from?
| Following the comment by @AccidentalFourierTransform, here are two references:
*
*M. Srednicki, "Quantum Field Theory", 4th Edition, Chapter 62, page 377 (emphasis mine)
Here we have used the freedom to add $k^\mu$ or $k^\nu$ terms to put the propagator into generalized Feynman gauge or $R_\xi$ gauge. (The name $R_\xi$ gauge has historically been used only in the context of spontaneous symmetry breaking – see section 85 – but we will use it here as well. $R$ stands for renormalizable and $\xi$ stands for $\xi$.)
$\phantom{x}$
*M. E. Peskin, D. V. Schroeder, "An Introduction to Quantum Field Theory" (2018), Chapter 21, page 738 (emphasis mine)
The gauges defined by the possible values of $\xi$ are known as the renormalizability, or $R_\xi$, gauges.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/434102",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Dispersion of cavity photons I read a paper called "strong coupling phenomena in microcavity structure" in that with regard to photons in a microcavity
For small $k$, the dispersion is parabolic, and so it can be described by a cavity photon effective mass $M = h n_c/(c L_c)$. This mass is very small, typically $\sim 10^{-5} m_e$ [10]. Such dispersions can be measured directly in angle tuning experiments (as discussed in section 6): moving away from normal incidence in a reflectivity measurement introduces an in-plane component to the photon wavevector [3]. In-plane wavenumbers up to $k \approx 10^{7} \,\text{m}^{-1}$ can be probed in this way
Do the lines corresponding to the dispersion for photons in the graphs below demonstrate the difference between photons with different $k$ vectors (ignoring all the excitons and stuff).
Yes, they do. The horizontal axis of the graph gives the in-plane component of $k$ (it is connected to the angle of incidence through a sine), so you can read off the photon dispersion as (in units $\hbar=1$):
$E_{\rm phot}(k)= \omega_0 + k^2/2m$, where $k$ is the in-plane momentum of the photons.
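A small sketch evaluating that parabolic dispersion over the quoted range of in-plane wavenumbers, with an assumed effective mass of $10^{-5}m_e$ and an assumed $k=0$ photon energy of 1.5 eV (both purely illustrative; at the largest $k$ the parabolic approximation is already being stretched):

```python
# Evaluate the parabolic cavity-photon dispersion E(k) ~ E0 + hbar^2 k^2 / (2 M)
# for an assumed effective mass M ~ 1e-5 electron masses (illustrative numbers).
hbar = 1.054571817e-34   # J s
me   = 9.1093837e-31     # kg
eV   = 1.602176634e-19   # J

M  = 1e-5 * me
E0 = 1.5 * eV            # assumed cavity photon energy at k = 0

for k in (0.0, 2e6, 5e6, 1e7):            # in-plane wavenumbers, 1/m
    dE = hbar**2 * k**2 / (2 * M)         # parabolic shift above E0
    print(f"k = {k:.0e} 1/m  ->  E - E0 = {dE/eV*1000:7.1f} meV")
```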
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/434353",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Chronology protection: current status I am looking for some fresh references on the Chronology Protection Conjecture. I am aware of this question, but the answer there seems to resort to energy conditions.
But weren't they shown to be violated in QFT, even in averaged form?
I heard physicists are mostly "uncomfortable" with CTCs (right?). What is currently the most accepted "working hypothesis" which prevents CTCs?
Some mathematical details would be much appreciated as I am a mathematician.
Dismissing all closed timelike curves on purely physical grounds is hard since a lot of them are fairly benign (such as the classic timelike cylinder where two spacelike hypersurfaces of a globally hyperbolic spacetime are identified), but those are usually not considered very problematic since you can always just go to the universal cover without issues. Other spacetimes with non-compact chronology violating regions can also be fairly benign and are harder to disprove, short of finding out the initial conditions of the universe.
The common type of CTCs argued against are the ones which can be constructed by experiment, so-called compactly generated chronology horizons. Chronology horizons are as you may know a type of Cauchy horizon leading to a chronology violating region, and a chronology horizon is compactly generated if its null generators remain in a compact region of spacetime at some point in the past (this usually means that they all stem from some closed null geodesic, called a fountain).
It has been shown [1] that, in a compactly generated Cauchy horizon, there are points called base points which are past terminal accumulation points of the null generators (either points on the closed null geodesics or accumulation points if the spacetime is causal but has imprisoned curves). In such a case, the Klein-Gordon equation is singular at those points, which leads the stress-energy tensor to be divergent.
This is the current big argument for chronology protection : in such cases, the perturbation to the vacuum is such that the stress-energy tensor diverges, so that the solution is meaningless : obviously such a thing would disrupt the solution before the formation of a time machine.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/434485",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Energy difference between enantiomers (matter/antimatter) I am aware of the fact that enantiomers have different energies, for example L-amino acids have different energy than D-amino acids. The difference is not significant and is most usually about $10^{-18}$ eV. (1)
Recently I have read that antimatter mirror images of compounds have actually the same energy. So L-amino acids will actually have the same energy as antimatter D-amino acids.
Can someone explain in relatively simply terms (meaning not too much math) why enantiomers have different energies and why matter-antimatter enantiomers have the same energy?
Also if L is the more stable enantiomer for normal matter, will D be the more stable enantiomer for antimatter?
(1) Amino Acids and the Asymmetry of Life: Caught in the Act of Formation - Uwe Meierhenrich
I will expand on this later, but there is a main difference between regular enantiomers, in which the particles are the same but in a different configuration, and antimatter enantiomers, in which all particles reverse their properties. In the antimatter case, chirality relationships, for instance, remain the same, so there is no change in energy; but in a regular enantiomer the particles are the same but in different configurations, and parity-non-conserving energy differences can be and have been calculated.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/434620",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
What is the experimental evidence that the nucleons are made up of three quarks? What is the experimental evidence that the nucleons are made up of three quarks? What is the point of saying that nucleons are made of quarks when there are also gluons inside it?
| When I was in university, sitting on my dinosaur, one of my profs mentioned he worked on a neutron polarization system on a particle accelerator at Chalk River.
I had to ask how you polarized a neutral particle with a magnet, still thinking in terms of classical particles and charges. He explained that since there's an internal quark structure, even though the outside looks neutral there's enough asymmetry for you to work with.
It was not until much later that I read how these actually work, there's complexity of course, but the basics are there.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/434985",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "31",
"answer_count": 6,
"answer_id": 3
} |
Why does increasing the volume in which a gas can move increase its entropy? Let's say we have a box with a non-permeable wall separating the box in half. There is gas on one side of the wall. Now we remove the wall so that the gas can diffuse into the other half of the box.
It is said that the entropy of the gas increases because the molecules now have more room to move, and therefore there are more states that the gas can be in. I can understand this well.
But the change in entropy is also defined as follows:
$\displaystyle \Delta S = \frac{Q}{T}$
Where $T$ is the temperature of the gas and $Q$ is the change in heat of the system. But if we look at this definition, why did the entropy change for the gas inside the box? By just removing the wall, the kinetic energy of the molecules does not change, therefore the temperature does not either. We also didn't add any heat to the system, so $Q$ is zero as well. So why did the entropy change?
| A statistical mechanics perspective. The phase space of an atom in the gas is described by the microstates, made from position and momentum values, $(x, y, z, p_x, p_y, p_z)$.
The allowed position states are constrained by the dimensions (l, w and d) of the box, $0<x<l$, $0<y<w$, $0<z<d$.
When you increase the box size, you have increased $W$, the number of microstates and therefore increased the entropy of the gas, according to the Boltzmann entropy definition,
$S=k_B \ln W$
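A minimal numerical sketch of the resulting entropy change for the situation in the question (gas initially confined to one half of the box, then allowed to fill the whole box), using the statistical expression rather than $Q/T$: for the irreversible removal of the wall $Q=0$, but a reversible isothermal expansion between the same two end states would absorb $Q = T\,\Delta S$.

```python
import math

# Entropy increase for free expansion of an ideal gas from V to 2V:
# the accessible position states per atom double, so
# Delta S = N * k_B * ln(V2 / V1), even though Q = 0 for the irreversible jump.
kB = 1.380649e-23        # J/K
NA = 6.02214076e23       # atoms in one mole

dS = NA * kB * math.log(2.0)
print(f"Delta S for one mole doubling its volume: {dS:.2f} J/K")   # ~5.76 J/K
```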
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/435119",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 2
} |
What is possible intuitive explanation of inelastic relativistic collsion? In classical mechanics, we say an inelastic collision happens when some energy is transferred to heat and noise without changing the total sum of momentum. However, in special relativity, every component of 4 momentum is preserved, but not the sum of masses. How can we explain it intuitively like we did in classical mechanics?
In a collision, the total relativistic energy is conserved (working in units where $c=1$, so energies are written in mass units):
$$E_{rel,total,f} = \gamma_{Ai} m_A + \gamma_{Bi}m_B.$$
If, in addition, the particles retain their rest masses in the interaction, we can write the left-hand side as
$$E_{rel,total,f} = \gamma_{Af} m_A + \gamma_{Bf}m_B,$$
and, since the rest masses are unchanged,
$$ m_A + m_B = m_A +m_B.$$
Subtracting this identity from the energy-conservation statement gives
$$(\gamma_{Af}-1) m_A + (\gamma_{Bf}-1) m_B = (\gamma_{Ai}-1) m_A + (\gamma_{Bi}-1)m_B,$$
so the total relativistic kinetic energy is conserved in an elastic collision, just as the total kinetic energy is conserved in an elastic collision in classical mechanics.
If the particles change mass in the collision, this last equation no longer holds... and you now have an inelastic interaction.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/435261",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Can I use time evolving block decimation (TEBD) to simulate the dynamics for many body localized systems? In the many-body localized phase, the system is described by quasi-local integrals of motion ("l-bits"). The entanglement does grow logarithmically with time. So if I use TEBD to get the real-time evolution will it be efficient? Or it will not work at strong disorder?
| If the entanglement entropy scales like $\sim \log t$, then the required bond dimension (and hence, the computational cost) for TEBD scales as a power law in $t$ (because it's exponential in entanglement entropy). If you call that "efficient" then TEBD is efficient. But if you want to go to very late times you're obviously still going to have problems.
For example, in my paper https://arxiv.org/pdf/1603.08001.pdf, we used TEBD to simulate the time-evolution of an MBL system at short times. But we also needed to go to very late times (for example $t \sim 10^{10}$ in some natural units) and for that we had to resort to exact diagonalization.
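A back-of-the-envelope sketch of the scaling argument: if $S(t)\approx c\ln t + s_0$, the bond dimension needed grows like $e^{S}\sim t^{c}$, which is tolerable at short times but hopeless at $t\sim10^{10}$. The constants $c$ and $s_0$ below are purely illustrative.

```python
import math

# If the half-chain entanglement entropy grows as S(t) ~ c*ln(t) + s0,
# the MPS bond dimension needed scales roughly as chi ~ exp(S) ~ t**c.
c, s0 = 0.5, 1.0     # illustrative constants only
for t in (10, 100, 1000, 10**6, 10**10):
    chi = math.exp(c * math.log(t) + s0)
    print(f"t = {t:>12}  ->  chi ~ {chi:.3e}")
```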
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/435413",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
What is quantum fluctuation in the Bose-Einstein condensates theory? I would like to understand what quantum fluctuation really means. I think it's the particles that are not in the ground state. Am I right? But then, what are the differences between quantum fluctuations and thermal fluctuations?
Another question. How do these quantum fluctuations stabilize the BEC against the mean-field collapse? I've read something about soft and hard excitations, but I do not know how these stabilize the BEC.
Dinesh, this subject is also one that I do not understand in detail and would like to understand better (especially the LHY correction), but I can say this much: BECs with attractive interaction are unstable in general. Atoms will rush to clump on top of one another and decay via three-body loss. However, if there is another, repulsive interaction, brought about by quantum fluctuations (with which the LHY correction is associated), it will cancel out the attractive interaction at some length scale. Hence the BEC stabilizes into small droplets, as per the paper you cite.
Note that the dipolar interaction (which is studied in your reference) is not crucial. If I recall correctly, its role is to provide enough repulsive interaction to reduce the large attractive s-wave interaction, so that the net interaction (without quantum fluctuations) is a small attractive interaction. The LHY correction is quite tiny, so you wouldn't observe its effect unless the attractive term you are trying to balance is already small.
Here is an example of stable BEC droplet without dipolar interaction, by researchers from ICFO: https://arxiv.org/abs/1708.07806
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/435611",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What is a correct loudspeaker connecting scheme and why? Thinking about "what would be the best shape for a subwoofer box?" I came to the idea of a barrel, with its sides (or covers) "replaced" with speakers:
I have a stereo bass amplifier which is fed from a single signal source. So there are 2 possible ways to connect the terminals to the amplifier channels:
*
*With zero phase offset: this would double the pressure inside (lower or higher than ambient), compared with a single speaker in a barrel
*
*With inverse phase offset: this would cancel out the internal pressure. When the front speaker's diaphragm is pushed forward, the back one is also pushed forward.
I am interested in which scheme is used and why - what consequences the inner pressure would have for the bass sound, and what effect each of the two schemes would create.
| Assuming that, when listening to the music, you are sitting at a point equidistant from the two bass drivers then the sound from them will travel an equal distance from each driver to reach you. If you wire the speakers in antiphase then the sound from the two drivers will be in antiphase when it reaches you and will interfere destructively. This will reduce the volume where you're sitting, which is almost certainly not what you want.
If you did the experiment in an anechoic chamber the sound intensity could in principle be reduced to zero. In practice in a normal room there will be lots of scatter from the walls of the room so the volume will be decreased a bit, but not to zero.
So, wire the two drivers in phase.
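A minimal sketch of the argument: at the equidistant listening point the two driver signals simply add, so an in-phase connection doubles the pressure amplitude while an antiphase connection cancels it. The frequency and time instant below are arbitrary.

```python
import math

# Two equidistant bass sources driven in phase vs. in antiphase:
# at the symmetric listening point their pressures add or cancel.
def pressure(t, f, phase_offset):
    return math.sin(2*math.pi*f*t) + math.sin(2*math.pi*f*t + phase_offset)

f = 40.0                     # Hz, a typical subwoofer frequency
t = 0.004                    # an arbitrary instant
print(pressure(t, f, 0.0))       # in phase: twice the single-driver value
print(pressure(t, f, math.pi))   # antiphase: ~0 at the equidistant point
```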
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/435744",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
How to determine the angle of the weight force in an incline force vector problem? Today in class I was introduced to some basic incline problems. I know that the weight force can be resolved into 2 components - the parallel and the perpendicular. I was given the angle of the ramp to be $30^\circ$, which makes the parallel component $w\cos 30^\circ$ and the perpendicular component $w\sin 30^\circ$. Now, this is the part of my question: how do we know that the angle of the ramp ($30^\circ$) is equal to the angle used when resolving the weight? Couldn't the angle in the weight resolution be a different angle? Why not? Thanks!
|
I hope this clears the confusion. Use trigonometry to find the components of force.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/435883",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Does time dilation mean that faster than light travel is backwards time travel? Ok. So my question is, I've always heard it said that faster-than-light travel is supposedly backwards time travel.
However, the time dilation formula is
$$T=\frac{T_0}{\sqrt{1-v^2/c^2}}$$
And while it is true that speeds greater than $c$ make the quantity under the square root negative, doesn't the whole thing get rendered a complex fraction, rather than a negative or backwards time flow, since the square root of a negative number is a complex one?
Wouldn't this then mean that faster than light travel does something weird, rather than backwards time travel? In other words, wouldn't what happens during faster than light travel be some sort of travel in a complex plane, and wouldn't that have radically different implications from backwards time travel, depending on the direction one took FTL?
| I don't know what you mean by "some sort travel in a complex plane". Faster than light travel is by definition some object that changes position from $x_0$ to $x_1$ in such a way that $\dfrac{x_1-x_0}{\Delta t}>c$, where $\Delta t$ is the elapsed time. There is no time travel involved when this happens, but causality will take a blow if events at $x_1$ depend on events at $x_0$.
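The question's observation about the Lorentz factor can be checked directly: evaluated naively for $v>c$ it comes out purely imaginary rather than negative, which is one way of seeing that the formula simply stops describing anything physical there. A minimal sketch:

```python
import cmath

# Lorentz factor 1/sqrt(1 - v^2/c^2) evaluated below and above c.
c = 1.0
for v in (0.5, 0.9, 0.99, 1.5, 2.0):
    gamma = 1 / cmath.sqrt(1 - (v/c)**2)
    print(f"v = {v:4.2f} c  ->  gamma = {gamma:.3f}")   # purely imaginary for v > c
```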
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/436002",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 4,
"answer_id": 1
} |
IQHE, quantized conductance, and zeeman splitting I've been trying to understand IQHE by reading these lecture notes by David Tong.
Mainly, I was trying to understand the quantized hall resistivity in terms of the number of Landau levels crossing the fermi energy.
Then, I began thinking about why spin induced Zeeman splitting is never really mentioned in the context of IQHE.
The lecture notes say that it's because typically the Zeeman splitting is very small and it polarizes the spin of the electron.
I think the spin based splitting of energy states still confuses me because in my mind, with the spins of electrons taken into account, you have twice as many energy states crossing the fermi energy.
The filling factor in IQHE is the number of landau levels crossing the fermi energy (as shown in the image below). To me, the spin Zeeman splitting seems to double that number.
| The definition of filling factor is, $\nu \equiv \frac{\text{number of particles}}{\text {number of flux quanta}}$. I guess even if you include the Zeeman splitting this is not going to change.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/436240",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Work done by a gas In the expression for work done by a gas,
$$W=\int P \,\mathrm{d}V,$$
aren't we supposed to use internal pressure?
Moreover work done by gas is the work done by the force exerted by the gas, but everywhere I find people using external pressure instead of internal pressure.
Work is done by the gas against the external pressure. If there is a case of free expansion of the gas (as into vacuum), the work done by the gas is zero, as no opposing forces are present to prevent the expansion of the gas; hence it is evident that the work done by the gas is only against the external pressure. If the process is quasistatic (that is, infinitesimally slow) then the outside pressure is almost equal to the internal pressure, so the work done may be evaluated by considering the pressure of the gas; but if the process is not quasistatic then we must consider the external pressure only.
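A small numerical sketch of the two limiting cases mentioned above, for one mole of ideal gas doubling its volume: in the quasistatic case the external pressure tracks the gas pressure and $W = nRT\ln(V_2/V_1)$, while in free expansion the external pressure is zero and so is the work.

```python
import math

# Quasistatic isothermal expansion vs. free expansion of one mole of ideal gas.
R, T = 8.314, 300.0          # J/(mol K), K
V1, V2 = 1.0, 2.0            # arbitrary volume units (only the ratio matters)

W_quasistatic = R * T * math.log(V2 / V1)   # here P_ext ~ P_gas = nRT/V
W_free = 0.0                                # expansion into vacuum: P_ext = 0
print(W_quasistatic, W_free)                # ~1729 J vs 0 J
```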
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/436339",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 7,
"answer_id": 4
} |
Where is the right place to put the pressure gauge to measure the pressure of a tank? Studying the basic concepts of Fluid Mechanics, applied to pressure gauges, and looking at schematics in many places, a question came into my mind: Where is the right place to put the pressure gauge to measure the pressure of a tank?
The first case would be if the tank contains a gas. In this situation, Çengel's Fluid Mechanics book clarified it to me:
Since the gravitational effects of gases are negligible, the pressure anywhere in the tank and at position 1 has the same value.
Thus, I can put it anywhere in the tank if it contains a gas.
The second case would be if the tank contains a liquid, especially when the tank is large. In this situation, the decision that seems more logical to me is to put the pressure gauge in the bottom of the tank. However, in all the places that I looked, the point "A" was the chosen one to measure pressure (as shown in the images below in points M, N, A and B), which I believe that gives the average pressure of the tank because the point is located at height of its geometric center:
$$p_{average}=\frac1H \cdot\int_0^H\gamma h \,dh=\frac{\gamma H}{2}=p_A$$
Images sources: MATHalino/PennState College of Engineering (MNE)/The SensorsGuide/University of Sydney (MDP)/ScienceStruck/Chegg
So, where is the right place to put it to measure pressure of a tank? Why the points M/N/A/B were chosen instead of the botton of their tanks to calculate the pressure in the images above?
Related questions:
*
*Why textbooks use geometric center/centerline of the pipe when calculating/measuring pressure?
*Why using average pressure in calculations gives the most accurate results?
Unlike a sensor, a gauge has to be where you can see it. Often you don't want the process liquid to get into the gauge, so the gauge is placed above the liquid level.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/436475",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 5,
"answer_id": 1
} |
Rindler Coordinates and homogeneous Gravity Field I understood from the equivalence principle that an accelerated observer in free space is equivalent to a stationary observer in a gravitational field.
As far as I understand, this means that to analyze systems at the Earth's surface it is possible to use Rindler coordinates.
However, the 00 element of the Rindler metric contains a term $\alpha x$, where $\alpha$ is the proper acceleration and $x$ is the position. It is clear what to substitute for $\alpha$, but what number should one plug in for $x$?
The Rindler metric describes a homogeneous gravitational field (as you say in your title), i.e. a field that is the same everywhere. This means it is only an approximation at the Earth's surface, since on Earth the gravitational acceleration changes with height. The Rindler metric will describe the geometry only over a region small enough that the change in $g$ with height is negligible.
With this caveat it's perfectly reasonable to use the Rindler metric as a local approximation. If we write the metric in the form:
$$ \mathrm ds^2 = -\left(1 + \frac{a}{c^2}x \right)^2 c^2 ~\mathrm dt^2 + \mathrm dx^2 $$
then the $x$ coordinate is the distance measured by the observer at the origin and likewise the time coordinate is the time measured by the person at the origin. So in this case you would put the origin at the surface of the Earth and set the acceleration to $g$. So $x$ is zero at the surface.
We need to be careful about the sign of $x$, though in fact the obvious choice is the correct one. The acceleration $a$ is the proper acceleration of the observer at $x=0$, and if you are standing on the surface of the Earth your proper acceleration points upwards, i.e. away from the centre of the Earth. So upwards is the positive direction.
So to summarise, $x$ is zero at the surface, positive above the surface and negative below the surface.
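As a sanity check on using this metric near the Earth's surface: to leading order the fractional difference in clock rates between $x=0$ and a height $x$ is $gx/c^2$. A minimal sketch with a few illustrative heights:

```python
# Fractional clock-rate difference (1 + g*x/c^2) - 1 = g*x/c^2 between the
# surface (x = 0) and a height x above it, using the metric written above.
g = 9.81           # m/s^2
c = 2.99792458e8   # m/s

for x in (1.0, 100.0, 1e4):    # heights in metres
    print(f"x = {x:8.1f} m  ->  fractional rate difference ~ {g*x/c**2:.3e}")
```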
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/436576",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Defining generalized momentum in terms of kinetic energy versus a Lagrangian Reputable authors (e.g., Bergmann, Wells, Susskind) define generalized momentum using the Lagrangian $L$ as $$p_{i}\equiv\frac{\partial L}{\partial\dot{q}^{i}}.\tag{1}$$
Joos and Freeman define generalized momentum for holonomous-scleronomous systems using the kinetic energy $T$ as $$p_{i}\equiv\frac{\partial T}{\partial\dot{q}^{i}}.\tag{2}$$
There is no direct contradiction due to the qualification that the system is holonomous-scleronomous. Nonetheless, it begs the question: what would be the consequences of one definition over the other in more general circumstances? Put differently, why choose one over the other?
| *
*The canonical/conjugate momentum (1) is the natural/fundamental notion in Lagrangian formalism. (Also recall that there exist velocity-dependent potentials $U(q,\dot{q},t)$.)
*The kinetic momentum (2) only exists if there is a natural notion of a kinetic term $T$ in the Lagrangian $L$. (The kinetic term $T$ is by the way not always the kinetic energy $K$. Think e.g. of a relativistic point particle, cf. this Phys.SE post.)
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/436662",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Why is the power of a filament lamp directly proportional to the cube of its voltage? I was doing a textbook question on how the power of a bulb varies with the potential difference across it. I plotted this graph:
(V is on the x axis and P is on the y axis.)
I was then told that this graph obeys the relationship $P=kV^3$ (note that for ohmic conductors the relationship is $P=kV^2$, where $k=\frac1 R$), and I was then told to explain this relationship. I'm not terribly sure where to start; I know that increasing the voltage increases the temperature of the filament and therefore the resistance, but doesn't that mean the current decreases, therefore decreasing the power of the bulb?
For those of you in the British education system, I'm just starting my A level in Physics (which means I'm 17 years old for everyone else), so a decently non-technical answer would be appreciated...
| Think of it the following way. You were right in writing the relationship between power, voltage and resistance:
$$P=\frac{V^2}{R}.$$
But this equation was said to not fit the data, and instead
$$P=kV^3.$$
Comparing these two equations, we arrive at the relationship
$$R=\frac{1}{kV}.$$
This is the thing we need to explain. As you've hinted, it's related to the temperature dependence of the material.
Consider the filament to be a black body of area $A$ and at temperature $T$. The electrical power going through it should be radiated away as black-body radiation. We then say
$$P=\sigma AT^4,$$
Where $\sigma$ is the Stefan-Boltzmann constant. Comparing this to the expression given for the power
$$kV^3=\sigma AT^4$$
$$V=\left(\frac{\sigma A}{k}\right)^{1/3}T^{4/3}.$$
Substituting this into our expression for the resistance we get
$$R=\left(\frac{1}{k^2\sigma A}\right)^{1/3} T^{-4/3}.$$
This is quite odd, to be honest. It implies the resistance decreases with temperature, which is quite the opposite of what we have with conductors usually. Are you sure the power goes as $V^3$?
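The algebra above can be checked symbolically. The following sketch (assuming the sympy library is available) eliminates $V$ and $P$ from $P=kV^3$, $P=\sigma A T^4$ and $P=V^2/R$, and recovers $R\propto T^{-4/3}$.

```python
import sympy as sp

# Symbolic check of the steps above: from P = k V**3, P = sigma*A*T**4 and
# P = V**2 / R, eliminate V and P to get R as a function of T.
V, T, R, k, sigma, A = sp.symbols('V T R k sigma A', positive=True)

V_of_T = sp.solve(sp.Eq(k*V**3, sigma*A*T**4), V)[0]        # V ~ T^(4/3)
R_of_T = sp.solve(sp.Eq(k*V_of_T**3, V_of_T**2/R), R)[0]    # R ~ T^(-4/3)
print(sp.simplify(R_of_T))
```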
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/436762",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Can Heat Transfer occur between two bodies with the same temperature but different states only through Latent Heat Transfer? I understand that temperature difference is the driving force for heat transfer, but I have been wondering whether there would be any heat transfer if, say, steam at 100 degrees Celsius and water at 100 degrees Celsius are passed through the two sides of a heat exchanger.
Yes, it's possible if the states are somehow not reversible (which might not be the proper word for it). For example, if you have water and air at ambient temperature, then water will evaporate as long as the air isn't saturated.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/436885",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Relationship between intensity and amplitude of light wave I am confused about the relationship between intensity and amplitude of a wave. My understanding is that the energy in a wave is proportional to its intensity, which is proportional to the square of the maximum height of the wave. Is that a correct understanding? If that understanding is correct: I have a red light and I increase the brightness, what happens? Do I increase the amplitude of the wave?
| If you insist on taking a wave, your statements are correct. The energy of the wave IS proportional to the intensity, which is in turn proportional to the square of amplitude of vibrations produced by the wave(in this case being the vibrations of electric and magnetic fields). When you increase the brightness, the amplitude of vibrations of the electric and magnetic fields increase, leading to an increase in the energy carried by the wave.
If, on the other hand, you consider photons, intensity is the number of photons reaching your eye per unit area per unit time. This again relates directly to energy; more the number of photons, more the energy. When you increase the brightness, you are actually increasing the number of photons. I do admit, though, that I can't bring an amplitude into this photon picture. Maybe someone else can help.
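To attach a number to the photon picture in the last paragraph: for red light the energy per photon is $hc/\lambda$, so a given intensity corresponds to a definite photon arrival rate, and doubling the brightness doubles that rate. A minimal sketch with an assumed intensity:

```python
# Photon flux corresponding to a given intensity, for red light (~650 nm).
# The intensity value is an assumed illustrative number.
h = 6.62607015e-34      # J s
c = 2.99792458e8        # m/s
wavelength = 650e-9     # m
E_photon = h * c / wavelength           # ~3.06e-19 J per photon

intensity = 1e-3                        # W/m^2, an assumed dim red glow
flux = intensity / E_photon             # photons per m^2 per second
print(f"{flux:.2e} photons / m^2 / s")  # ~3.3e15
```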
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/437002",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Physical Significance of $U$ (Internal Energy), $H$ (Enthalpy), $F$ (Free Energy) and $G$ (Gibbs Free Energy)? I know their mathematical definitions and how these terms are interrelated (mathematically), but I fail to understand the physical meaning of all but one, namely INTERNAL ENERGY.
It seems implausible to me that these are just mathematical terms that serve the purpose that
*
*If $T, V, N$ are known
we use $F=F(T, V, N) $
where $F$: Free Energy or Helmholtz Free Energy
*If $T, P, N$ are known
we use $G=G(T, P, N)$
where $G$: Gibbs Free Energy
*If $S, P, N$ are known
we use $H=H(S, P, N)$
where $H$: Enthalpy
and that's all. They have no physical significance?
What I know of $U$ (internal energy) is that it is a measure of the kinetic energy of the system's molecules and hence also of the system's temperature. The greater the molecular K.E., the more heat energy is produced by molecular collisions and hence the higher the temperature.
I am expecting similar physical explanations to other thermodynamic variables which I couldn't find even on other stack exchange threads!
| U, F, G, and H are sometimes referred to as Thermodynamic Potentials. A nice (in my opinion) explanation of the physical significance of these properties in relation to entropy (S) and system work can be found on the Hyperphysics web site under "Thermodynamic Potentials".
Hope this helps.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/437150",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
What causes burns when in contact with hot water? As I understand it, thermal energy (heat) is simply a measure of the kinetic energy of an object (for example: water). Hot water is simply water with a larger kinetic energy in its molecules, right?
So how do my hands get burned if I immerse them in hot water? Do the particles collide with my hand and produce burns?
PS: I may have a completely wrong understanding of how heat works.
| It should be noted that this question is not that much about heat, as about biochemistry. The actual damage to living tissues is not caused by kinetic "bombardment" by fast molecules, but proteins permanently switching to a different spatial conformation, which is favored at higher temperatures.
This explains why 310 Kelvin water feels fine, but 320 Kelvin (i.e. 47 °C) water burns.
Unlike people, certain bacteria contain only proteins that do not undergo denaturation so easily, so they happily live even at 140 °C.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/437376",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Introduction to nuclear physics I want to self-study nuclear physics in order to understand nuclear reactors and nuclear weapons. What books can you recommend?
| Here are some available references. The Atomic Nucleus by Evans. Nuclear Reactor Theory by Lamarsh. The Los Alamos Primer by Serber. Building the Bombs by Loeber. Most of the details for nuclear weapons are classified or at least limited distribution. You can also search the web for articles by Drell and Peurifoy related to nuclear weapons. Much of the information on the web related to nuclear weapons is incorrect.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/437765",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
What does resonant frequency in the Q factor mean? For the Q factor of a body undergoing forced oscillations, does the resonant frequency refer to the driving frequency or to the body's natural frequency?
The term resonant frequency seems to mean the body's natural frequency (since this frequency corresponds to resonance); but I've just seen a question in my physics textbook about a pendulum oscillating at a frequency $f$ under forced vibration, and the solution used that frequency as the 'resonant frequency' in the Q factor equation to calculate the Q factor of the system.
| In your question, you state that the pendulum is oscillating at a frequency f and that it is being forced by a source. In this context it appears that what the author meant was that the pendulum was being driven at its natural frequency f.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/437920",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Example in which light takes the path of maximum optical length According to the modern version of Fermat's principle,"A light ray in going from point A to point B must traverse an optical path length that is stationary with respect to variations of that path.".Is a maximum optical path length possible ?What if we keep adding deviations to the optical path length?
| You are quoting Wikipedia.
There is no maximum optical path length from point A to point B (the path could be made arbitrarily long). The more the path deviates from the stationary one, the more length is added, in such a way that the phases of neighbouring paths get so mixed up that their contributions cancel (no light).
EDIT: Feynman's little book QED offers a nice, accessible explanation of this phenomenon.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/438212",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
How can a particle in circular motion about a fixed point accelerate, if the point doesn't too? When a particle is performing uniform circular motion attached to a string about a fixed centre, at any instant of time its acceleration is directed towards the centre but the centre has no acceleration. But I was taught in school this is not possible because of the string constraint:
The accelerations of the ends of a string are the same if the string is not slack.
Where am I wrong?
| I think what you're asking is "How can a particle accelerate towards a point without ever getting closer?"
Acceleration is "change in Velocity," and Velocity is the combination of speed and direction. So acceleration can mean "a change in speed", "a change in direction", or a combination of the two. In order for any object to travel in a circle, it must change direction; it must accelerate.
The particle on the string is in a kind of equilibrium caused by the string. If it didn't accelerate enough toward the point, its current velocity would move it away from the point; if it accelerated too much, it would get closer. But the string causes the particle to remain at a constant distance, causing just enough acceleration to maintain the orbit.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/438421",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 7,
"answer_id": 5
} |
Harmonic Oscillator Trial Wavefunction I was learning today about trial wave functions for a harmonic oscillator.
We learnt that the solution to Schrödinger equation for a harmonic oscillator is a Gaussian curve, i.e.
$$
f(x) = e^{-x^2} .
$$
Testing a trial function such as:
$$
\psi = N_{0}e^{-ax^2}
$$
where $x$ is position gave
$$
\frac{d^2\psi}{dx^2} = N_{0} \ (4a^2x^2 - 2a)\cdot e^{-ax^{2}} .
$$
Applying this to Schrödinger's equation using reduced mass $\mu$
$$
-N_{0} \cdot \frac{\hbar^2}{2\mu}(4a^2x^2 - 2a)e^{-ax^2}+ \frac{1}{2}kx^2\ \cdot N_{0}e^{-ax^2} = E\ \cdot \ N_{0}e^{-ax^2}
$$
simplified to
$$
- \frac{\hbar^2}{2\mu}(4a^2x^2 - 2a)+ \frac{1}{2}kx^2 = E.
$$
The lecturer mentioned that as the total energy $E$ was constant,
$E$ cannot be dependent on position $x$ which made sense from studies on Harmonic Motion.
Then he continued to state:
We therefore have a solution to the Schrödinger equation if the terms in $x$ are equal and opposite and cancel.
Suddenly the equation becomes:
$$
\frac{\hbar^2}{2\mu} \cdot 4a^2x^2 = \frac{1}{2}kx^2
$$
and solving for $a$:
$$
a = \frac{\sqrt{k\mu}}{2\hbar}.
$$
My question is how did the equation
$$
- \frac{\hbar^2}{2\mu}(4a^2x^2 - 2a)+ \frac{1}{2}kx^2\ = E
$$
suddenly transform into
$$
\frac{\hbar^2}{2\mu} \cdot 4a^2x^2 = \frac{1}{2}kx^2
$$
in just one line?
| The basic point is that the equation involving $E$ is an identity, which must hold for all values of $x$, not just particular values of $x$. So, all the $x$-dependent parts must cancel identically. Sometimes, identities are distinguished from simple equations by using the symbol $\equiv$ rather than $=$.
Slightly more generally, if you
bring all the terms onto the left and rearrange your equation into the form of
a polynomial identity
$$
C_0 + C_1 x + C_2 x^2 + \ldots + C_n x^n \equiv 0
$$
which must hold for all values of $x$, then it follows that all the coefficients $C_i$ must vanish. You can show this by setting $x=0$ (hence $C_0=0$); then by differentiating with respect to $x$ and setting $x=0$ (hence $C_1=0$); and so on.
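If it helps to see the coefficient matching done explicitly, here is a minimal sketch in Python/SymPy (the symbol names simply mirror the question's notation):

```python
import sympy as sp

x, a, E, hbar, mu, k = sp.symbols('x a E hbar mu k', positive=True)

# Left-hand side minus E of the simplified Schrodinger equation,
# viewed as a polynomial identity in x that must vanish for every x.
expr = -hbar**2/(2*mu)*(4*a**2*x**2 - 2*a) + sp.Rational(1, 2)*k*x**2 - E

coeffs = sp.Poly(expr, x).all_coeffs()   # coefficients of x^2, x^1, x^0
eqs = [c for c in coeffs if c != 0]      # each one must vanish identically

print(sp.solve(eqs, [a, E], dict=True))
# a = sqrt(k*mu)/(2*hbar)  and  E = (hbar/2)*sqrt(k/mu)
```

Setting the $x^2$ coefficient to zero reproduces $a = \sqrt{k\mu}/2\hbar$, and the constant term then gives the ground-state energy.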
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/438681",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
If $\sum_n \ c_n \ \psi_n(x,t)$ represents an arbitrary state for a given solution to the TISE, what are the bases for a free particle? If $\sum_n c_n \psi_n(x,t)$ represents an arbitrary vector in the Hilbert space of solutions to Schrodinger's equation with a given potential function, this makes makes sense to me. Each $\psi_n$ can be thought as a basis vector, and thus each state is a linear combination of basis vectors. However, an arbitrary vector for a free particle is represented as:
$$\psi(x,t) = \int_{-\infty}^{\infty} A(k) \ e^{i(kx-\omega t)} \ dk$$ rather than a sum. I feel like I've lost the ability to think in a linear mathematical way here. Are there no bases? If there are, how could they be? A basis for me assumes an arbitrary vector by a linear combination such as:
$$u = \sum_i \alpha_i v_i, \quad \alpha_i \in F, \quad v_i \in V$$
Where $V$ is a vector space and $F$ is a field. I couldn't think of it in terms of an integral.
| The integral is a sum. We just changed from a discrete index to a continuous one, since $k$ isn't bounded by any quantization conditions. The bases are the separable solutions to the free particle Hamiltonian, but since they aren't normalizable, they can't represent a physical state. We can, however, think of them as handy mathematical idealizations.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/438765",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Is there a medium less dense than vacuum, in which light can travel faster than $c$? Is there a medium less dense than vacuum, in which light can travel faster than $c$? If not, can we make it?
| It depends on the volume! You should read something about the Casimir effect. Even in a complete vacuum you always have virtual particles. Reality is quantum, and the quantum vacuum is not empty! It cannot be!
So basically, when the volume is bounded, as between capacitor plates, there are fewer possible excitations inside (fewer kinds of virtual particles) than in a large unbounded volume.
It seems there are many possible vacuum energy densities, and some of them are 'emptier' than others, even though all of them are empty!
The Casimir effect produces a force between the capacitor plates that can be measured!
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/438921",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 1
} |
Retarded potentials with a dirac delta fail to give Lienard-Wiechert In the derivation of the Liénard-Wiechert potential the expression for the retarded potential is given
$$\varphi(\mathbf{r}, t) = \frac{1}{4\pi \epsilon_0}\int \frac{\rho(\mathbf{r}', t_r')}{|\mathbf{r} - \mathbf{r}'|} d^3\mathbf{r}'$$
and it is applied to a moving particle, given by the time varying distribution (a moving dirac delta):
$$\rho(\mathbf{r}', t') = q \delta^3(\mathbf{r'} - \mathbf{r}_s(t'))$$
Now why is it NOT the case that for this particular time-varying distribution we simply have:
$$\varphi(\mathbf{r}, t) = \frac{1}{4\pi \epsilon_0} \frac{q}{|\mathbf{r} - \mathbf{r}_s (t')| } $$
?
i.e., why does it not directly follow form the definition of the Dirac delta given by:
$$\int f(x) \delta(x - x_0) = f(x_0)$$
?
Being a mathematician familiar with distributions in the sense of Schwartz, I would like a mathematically rigorous answer in terms of this.
| The definition of the Dirac function cannot be applied directly to obtain your third expression, because $t_r'$ in $\delta^3(\mathbf r' - \mathbf r_s(t_r'))$ is a function of $\mathbf r'$.
To evaluate the integral, one must change the integral to new integration variables $\mathbf y$ in such a way that the definition can be applied, which requires that the argument is of the form
$$
\mathbf y - \mathbf y_0
$$
where $\mathbf y_0$ does not depend on $\mathbf y$.
For example, we can introduce new variables in this way:
$$
\mathbf y = \mathbf r' - \mathbf r_s(t_r')
$$
and then use this definition to re-express the integral. In doing so a Jacobian factor will be introduced, and that's why the result (the Lienard-Wiechert electric potential) is different from your third expression.
Some examples of how the change of variables is done can be seen here:
http://booksite.academicpress.com/andrilli/elementary/content/jacobian.pdf
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/439049",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
How can I find inlet pressure of a valve by measuring the mass flow rate? I don't have a manometer to measure the water pressure at my inlet valve. Can I measure it by having the valve fill an empty container, recording how many liters it delivers in how many seconds, and calculating the actual water pressure from that?
| In principle, yes, you can calculate the inlet pressure of a valve by measuring the mass flow rate as it discharges to atmospheric conditions. However, to do this, you need to know something about how the flow rate relates to the pressure drop across the valve, which will depend on the size and design of the valve. I actually work in the valve industry and a common concept that is used is the flow capacity, Cv, which is calculated using the following formula:
$$Cv=\frac{w}{N\sqrt{\Delta p\cdot\rho}}$$
where $w$ is the mass flow rate, $N$ is a constant (depending on units used), $\Delta p$ is the pressure drop and $\rho$ is the fluid density. Essentially, Cv is a measure of how much mass of a given fluid will flow through the valve, for a given pressure drop - i.e. it characterizes the valve design as it relates to flow performance.
So, if you know what the Cv value is for a given valve, then you would be able to calculate the inlet pressure from the measured mass flow rate.
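As a rough sketch of how you would invert that relation in practice (the numbers and the constant N below are made up purely for illustration; use the constant appropriate to your unit system and the Cv from the valve's datasheet):

```python
import math

def inlet_pressure(mass_flow, Cv, N, rho, p_outlet):
    """Invert Cv = w / (N * sqrt(dp * rho)) for the pressure drop dp,
    then add the outlet pressure (atmospheric if discharging to air).
    All quantities must be in units consistent with the constant N."""
    dp = (mass_flow / (N * Cv)) ** 2 / rho
    return p_outlet + dp

# Hypothetical numbers just to show the call:
print(inlet_pressure(mass_flow=0.5, Cv=2.0, N=5e-4, rho=1000.0, p_outlet=101325.0))
```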
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/439455",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Isotropy of the universe in different reference frames Suppose that we put Bob and Alice into intergalactic space. If they look around they will see the light from distant galaxies shifted according to the Hubble law. More importantly, the light is (on average) isotropic.
Now suppose we accelerate Bob to e.g. $\beta = 0.999$. He should see the light at forward angles shifted towards the blue and at backward angles shifted more towards the red. In other words, the universe isn't isotropic to Bob anymore.
Then again, if we accelerate both of them to $\beta$, the relative situation is identical, but now both 'should' see an anisotropic universe. According to this reasoning, there exists a special reference frame where the universe is isotropic. This, of course, isn't what we measure.
What is the solution to this issue?
My take:
Suppose the motion is along a common $x$ axis. At $t=0$ (according to Alice) two distant galaxies at $x = a$ and $x = -a$ shoot out a signal. Then Bob sees these events occurring at $t_{\pm} = \pm \gamma \, \frac{\beta}{c} a$. In other words, the backward signal was emitted when that galaxy was much younger and therefore (Hubble law) moving more slowly (giving a small redshift). But as Bob is moving away from it, it redshifts further. Also, the forward galaxy is much older and is originally redshifted a lot. But as Bob is moving towards it, it blueshifts.
It would be great if there were some explanation without using the whole machinery of GTR. Thanks!
|
According to this reasoning, there exists a special reference frame where the universe is isotropic. This, of course, isn't what we measure.
There is such a special frame, and that is what we measure. The special frame is the frame moving with the Hubble flow, and in that frame, the CMB is observed to be uniform in all directions, with no differences in Doppler shifts. The Hubble flow can also be characterized approximately as the frame in which the galaxies are at rest. That's why the universe looks nearly isotropic in the earth's frame. In the earth's frame, the CMB does have a difference between one side (slightly blueshifted) and the opposite one (slightly redshifted). However, because our galaxy is nearly at rest relative to the Hubble flow, these shifts are rather small.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/439771",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Formula for potential energy? Conservation of energy? How would we know what formula to use for potential energy?
In my class, $mgh$ was used, but when dealing with a spring, it's ${1\over2}kx^2$. Is that because that's the elastic potential energy formula?
Also, for elastic and inelastic collisions, momentum is conserved. But kinetic energy is conserved only in elastic collisions, what does this really mean?
| Both equations for potential energy are of the form $\text{force}\times \text{distance}$ ie $mg \times h$ and $kx \times x$.
The factor $\frac 12$ is there for the spring potential energy because the force does not stay constant as the extension of the spring changes unlike the gravitational force on a mass which stays constant as its height changes.
So for the spring one uses the average force $\frac{kx}{2}$, which multiplied by the extension $x$ gives the potential energy $\frac 12 kx^2$.
---
The momentum of a system is conserved if there are no external forces acting on a system.
The energy of a system is conserved in such a case but there may be an interchange between differing forms of energy.
In particular the kinetic energy can:
*
*decrease, which is called an inelastic collision, with the kinetic energy being converted into heat, light and work done in permanent deformation.
*stay the same, which is called an elastic collision, during which there may be deformation but any elastic potential energy stored is then converted back to kinetic energy.
*increase, which is called a superelastic collision, during which the kinetic energy of the system actually increases. An example is an explosion, during which chemical energy is converted into kinetic energy.
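A quick numerical sketch (made-up 1D numbers) showing momentum conserved in both cases but kinetic energy only in the elastic one:

```python
m1, u1, m2, u2 = 2.0, 3.0, 1.0, 0.0      # 2 kg at 3 m/s hits 1 kg at rest

# Elastic collision (standard 1D result)
v1 = ((m1 - m2)*u1 + 2*m2*u2) / (m1 + m2)
v2 = ((m2 - m1)*u2 + 2*m1*u1) / (m1 + m2)

# Perfectly inelastic collision (bodies stick together)
v_common = (m1*u1 + m2*u2) / (m1 + m2)

print(m1*u1 + m2*u2, m1*v1 + m2*v2, (m1 + m2)*v_common)   # momentum: 6, 6, 6
print(0.5*m1*u1**2 + 0.5*m2*u2**2,                         # KE before: 9 J
      0.5*m1*v1**2 + 0.5*m2*v2**2,                         # elastic:   9 J
      0.5*(m1 + m2)*v_common**2)                           # inelastic: 6 J
```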
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/439978",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
} |
Non-compatibility between relativity and quantum mechanics Is the discrepancy between quantum mechanics and relativity only in the math involved or is it much deeper? That is, do the same interactions have different and non-comparable interpretations in both, or are the mathematical equations involved wrong with respect to each other?
What are some examples of this discrepancy?
How far along are we from an unified theory?
| The unified theory of particle physics, $\operatorname{SU}(3)\times \operatorname{SU}(2) \times \operatorname{U}(1)$, uses the Klein-Gordon, Dirac, and quantized Maxwell equations to solve for quantum mechanical systems, and at a meta level quantum field theory. All of these are 100% compatible with special relativity.
If by relativity you mean General Relativity (GR), i.e. the quantization of gravity, then yes: GR has not been definitively quantized yet, i.e. there is no standard model for it. String theories have quantized gravity and unify it with the standard model of particle physics, but there is still no definite model which can be tested for validity, as there are too many of them.
The discrepancy with the standard model is in the math involved , because GR is a deterministic classical theory, whereas the standard model of particle physics is based on quantum mechanics and probabilities.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/440096",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Wave behavior of particles When people say that every moving particle has an associated wave, do they mean that the particles will move up and down physically, for example when we say that a moving electron has a wave associated with it, does the electron physically oscillate? Or is it some other wave, like a probability wave? I really don't understand the latter.
| The wave is there to describe the phenomena of diffraction and of interference. Particle beams can interfere destructively: no intensity at some spot when both beams are on.
This can be described by a phase and the mathematics of waves. When phases are opposite, the sum is zero. Feynman explains this (in his little book QED, I recommend to read that) with electrons having dials that turn around. Or one could represent phase as color.
But physically, there is no transverse wave oscillating up and down. Physically, there is no dial. Physically there is no color. These are just representations of phase, which mathematically describes the phenomena of many different kinds of waves.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/440237",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 6,
"answer_id": 2
} |
Hamiltonian for a magnetic field An atom has a magnetic moment, $\mu = -g\mu_B S$, where $S$ is the electronic spin operator ($S=(S_x,S_y,S_z)$) and the $S_i$ are the Pauli matrices, given below. The atom has a spin $\frac{1}{2}$ nuclear magnetic moment and the Hamiltonian of the system is
\begin{gather*}
H = -\mu .B + \frac{1}{2}A_0S_z
\end{gather*}
The first term is the Zeeman term, the second is the Fermi contact term and $A_0$ is a real number. Obtain the Hamiltonian in matrix form for a magnetic field, $B=B_x,B_y,B_z$. Show that when the atom is placed in a magnetic field of strength B, aligned with the z axis, transitions between the ground and excited states of the atom occur at energies:
\begin{gather*}
E= g\mu_B B + \frac{1}{2}A_0
\end{gather*}
The Pauli Matrices are:
\begin{gather*}
S_x = \frac{1}{2}
\begin{bmatrix}
0 & 1 \\ 1 & 0
\end{bmatrix} ,
\ S_y = \frac{1}{2}
\begin{bmatrix}
0 & -i \\ i & 0
\end{bmatrix} ,
\ S_z = \frac{1}{2}
\begin{bmatrix}
1 & 0 \\ 0 & -1
\end{bmatrix}
\end{gather*}
Where do I even start for a solution to this problem? I am unclear as to how to formulate the B matrix. If I can get that, hopefully the second part will become apparent.
| The Hamiltonian is calculated as
\begin{align}
H =& \, g \mu_B \, \left(B_x S_x + B_y S_y + B_z S_z\right) \, + \, \frac{1}{2}A_0 S_z = \\
=& \, \frac{g \mu_B}{2} \, \left(B_x \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} + B_y \begin{bmatrix} 0 & -i \\ i & 0 \end{bmatrix} + B_z \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}\right) \, + \, \frac{1}{4}A_0 \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} \\
=& \, \frac{g \mu_B}{2} \, \begin{bmatrix} B_z & B_x-iB_y \\ B_x+iB_y & -B_z \end{bmatrix} \, + \, \frac{1}{4} \begin{bmatrix} A_0 & 0 \\ 0 & -A_0 \end{bmatrix} \\
\end{align}
In the case of a constant magnetic field aligned with the $z-$axis, $B_x = B_y=0$ and $B_z = B$. Then
$$H = \, \frac{g \mu_B}{2} \, \begin{bmatrix} B_z & 0 \\ 0 & -B_z \end{bmatrix} \, + \, \frac{1}{4} \begin{bmatrix} A_0 & 0 \\ 0 & -A_0 \end{bmatrix} = \frac{1}{2}\begin{bmatrix} g\mu_B\, B_z+\frac{1}{2}A_0 & 0 \\ 0 & - \, g\mu_B\,B_z-\frac{1}{2}A_0 \end{bmatrix} $$
By solving the linear eigenvalue equations
$$H \, | \psi \rangle = \lambda\, | \psi \rangle $$ you would get the basis energy states (the eigenvectors $| \psi \rangle$) and their energy levels (the eigenvalues $\lambda$). Since $H$ is a 2 by 2 matrix, we write
$$ | \psi \rangle = \begin{bmatrix}\psi_1 \\ \psi_2 \end{bmatrix}$$ the equation is
$$\begin{bmatrix} \frac{1}{2} g\mu_B\, B_z+\frac{1}{4}A_0 & 0 \\ 0 & - \, \frac{1}{2} g\mu_B\,B_z-\frac{1}{4}A_0 \end{bmatrix} \, \begin{bmatrix}\psi_1 \\ \psi_2 \end{bmatrix} = \lambda \, \begin{bmatrix}\psi_1 \\ \psi_2 \end{bmatrix}$$ so it is easy to see that the eigenvectors are
$$\begin{bmatrix} 1 \\ 0 \end{bmatrix} \text { and } \begin{bmatrix} 0 \\ 1\end{bmatrix}$$ with energy levels
$$ \frac{1}{2} g\mu_B\, B_z+\frac{1}{4}A_0 \,\, \text { and }\,\, - \frac{1}{2} g\mu_B\, B_z-\frac{1}{4}A_0$$ respectively. There are only two eigenstates, and the transition from one to the other happens when the energy is equal to the difference of the energy levels, i.e.
$$\left(\frac{1}{2} g\mu_B\, B_z+\frac{1}{4}A_0 \right) - \left( - \frac{1}{2} g\mu_B\, B_z-\frac{1}{4}A_0\right) = g\mu_B\, B_z+\frac{1}{2}A_0$$
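If you want to check the algebra numerically, here is a minimal sketch (illustrative units with $g\mu_B = 1$ and an arbitrary $A_0$; only the structure matters):

```python
import numpy as np

Sx = 0.5*np.array([[0, 1], [1, 0]], dtype=complex)
Sy = 0.5*np.array([[0, -1j], [1j, 0]])
Sz = 0.5*np.array([[1, 0], [0, -1]], dtype=complex)

def hamiltonian(Bx, By, Bz, g_muB=1.0, A0=0.2):
    # H = g*mu_B (B . S) + (1/2) A0 Sz, as derived above
    return g_muB*(Bx*Sx + By*Sy + Bz*Sz) + 0.5*A0*Sz

H = hamiltonian(0.0, 0.0, 1.0)            # field along z
evals, evecs = np.linalg.eigh(H)
print(evals)                               # +/- (g muB B/2 + A0/4)
print(evals[1] - evals[0])                 # transition energy g muB B + A0/2
```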
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/440351",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Integral formula for inertia tensor Writing down the balance of angular momentum, we introduce the inertia tensor by the formula
\begin{equation}
J(t)a \cdot b = \int_{S(t)} \rho (t,x)\left( a \times \left( x - X(t) \right)\right)\cdot \left(b \times \left( x - X(t) \right) \right) dx
\end{equation}
for some vectors $a,b$, some body $S(t)$ at time $t$, the density $\rho$ and the centre of mass $X(t)$.
Now, what confuses me, is that later we use the expressions
\begin{equation}
J(t)a \ \ \ \text{and} \ \ \ J(t)a \times a,
\end{equation}
which (as far as I can see) are not immediately clear from the above formula.
For the first one I would expect something like:
\begin{equation}
J(t)a = \int_{S(t)} \rho (t,x)\left( a \times \left( x - X(t) \right)\right)dx,
\end{equation}
and for the second one:
\begin{equation}
J(t)a \times a = \int_{S(t)} \rho (t,x)\left( a \times \left( x - X(t) \right)\right)\times \left(a \times \left( x - X(t) \right) \right) dx
\end{equation}
But those are just guesses. Can someone tell me the precise definition of these terms? Thank you in advance.
| Welcome to Physics SE!
The definition of the inertia tensor that you seem to be using can also be written, as on the Wikipedia Moment of Inertia page
$$
\mathbf{J} = -\int d\mathbf{x} \, \rho(\mathbf{x}) \,
\big[ \Delta\mathbf{x} \big] \cdot \big[ \Delta\mathbf{x} \big]
$$
where $\Delta \mathbf{x} = \mathbf{x}-\mathbf{X}$
and $[\ldots]$ is short for a $3\times3$ skew-symmetric matrix constructed
from the vector $\Delta \mathbf{x}\equiv (\Delta x_1, \Delta x_2, \Delta x_3)$.
When one of these matrices multiplies a vector, the result can be represented as a vector cross product:
\begin{align*}
\big[ \Delta\mathbf{x} \big] \cdot \mathbf{b}
&=
\begin{pmatrix} 0 & -\Delta x_3 & \Delta x_2 \\
\Delta x_3 & 0 & -\Delta x_1 \\
-\Delta x_2 & \Delta x_1 & 0 \end{pmatrix}
\begin{pmatrix} b_1 \\ b_2 \\ b_3 \end{pmatrix}
\\
&= \begin{pmatrix} \Delta x_2 \, b_3 - \Delta x_3 \, b_2 \\
\Delta x_3 \, b_1 - \Delta x_1 \, b_3 \\
\Delta x_1 \, b_2 - \Delta x_2 \, b_1 \end{pmatrix}
= \Delta\mathbf{x} \times \mathbf{b}
= -\mathbf{b} \times\Delta\mathbf{x} .
\end{align*}
I've taken the liberty of writing the matrices and vectors in bold,
it's just more familiar to me.
Actually, on the Wikipedia page, the equation is given in terms
of a sum over discrete masses rather than an integral over a mass density, but it's equivalent.
If we contract this matrix with two arbitrary vectors $\mathbf{a}$ and $\mathbf{b}$, we get your starting equation. I would prefer to write the left hand side as $\mathbf{a}\cdot\mathbf{J}\cdot\mathbf{b}$, or even as $\mathbf{a}^T\cdot\mathbf{J}\cdot\mathbf{b}$, not as $\mathbf{J}\,\mathbf{a}\cdot\mathbf{b}$, because your notation makes it look like $\mathbf{a}$ and $\mathbf{b}$ are being combined together in a scalar product, which is not the case. Your equation doesn't have the minus sign: the change in sign comes from one of the vector products on the right being $\mathbf{a}\cdot\big[ \Delta\mathbf{x} \big]$ and the other being $\big[ \Delta\mathbf{x} \big]\cdot\mathbf{b}$.
So, I believe your starting equation is obtained from mine by
\begin{align*}
\mathbf{a}\cdot\mathbf{J}\cdot\mathbf{b} &=
-\int d\mathbf{x} \, \rho(\mathbf{x}) \,
\mathbf{a}\cdot\big[ \Delta\mathbf{x} \big] \cdot \big[ \Delta\mathbf{x} \big]
\cdot\mathbf{b}
\\
&=
-\int d\mathbf{x} \, \rho(\mathbf{x}) \,
\left(\big[ \Delta\mathbf{x} \big]^T\cdot\mathbf{a}\right)
\cdot
\left(\big[ \Delta\mathbf{x} \big]\cdot\mathbf{b}\right)
\\
&=
\int d\mathbf{x} \, \rho(\mathbf{x}) \,
(\mathbf{a}\times\Delta\mathbf{x})
\cdot
(\mathbf{b}\times\Delta\mathbf{x}) .
\end{align*}
I'm omitting the $T$ transpose sign on vectors, to avoid clutter; I don't believe that there is any ambiguity.
Now to your question. The main point is that $\mathbf{a}$ and $\mathbf{b}$ are arbitrary. Since they are arbitrary, your starting equation does completely specify $\mathbf{J}$. You can always choose one or both of $\mathbf{a}$ and $\mathbf{b}$ to be Cartesian basis vectors, to express any result in terms of components, if you wish. Alternatively, you can use the expression I gave above, and simply don't contract with the vector on the left. So I reckon
\begin{align*}
\mathbf{J}\cdot\mathbf{a} &= -\int d\mathbf{x} \, \rho(\mathbf{x}) \,
\big[ \Delta\mathbf{x} \big] \cdot \big[ \Delta\mathbf{x} \big] \cdot \mathbf{a}
\\
&= -\int d\mathbf{x} \, \rho(\mathbf{x}) \,
\big[ \Delta\mathbf{x} \big] \cdot (\Delta\mathbf{x}\times \mathbf{a})
\\
&= \int d\mathbf{x} \, \rho(\mathbf{x}) \,
\big[ \Delta\mathbf{x} \big] \, (\mathbf{a}\times \Delta\mathbf{x})
\\
&= \int d\mathbf{x} \, \rho(\mathbf{x}) \,
\Delta\mathbf{x} \times (\mathbf{a}\times \Delta\mathbf{x})
\\
&= \int d\mathbf{x} \, \rho(\mathbf{x}) \,
\left(
|\Delta\mathbf{x}|^2 \mathbf{a}-
(\Delta\mathbf{x} \cdot\mathbf{a})\Delta\mathbf{x})
\right)
.
\end{align*}
I'm not completely sure about your second expression,
because of the same notational concerns I raised above.
Clearly you don't mean
$\mathbf{J} \cdot(\mathbf{a}\times\mathbf{a})$
because the quantity in parentheses vanishes identically.
So I guess you mean $\mathbf{a}\times(\mathbf{J}\cdot\mathbf{a})$,
or $(\mathbf{J}\cdot\mathbf{a})\times\mathbf{a}$.
If the first of these is true, then the answer is
$$\mathbf{a}\times(\mathbf{J}\cdot\mathbf{a})
=
-\int d\mathbf{x} \, \rho(\mathbf{x}) \,
(\Delta\mathbf{x} \cdot\mathbf{a})(\mathbf{a}\times\Delta\mathbf{x})
$$
while if it's the second, just drop the negative sign.
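The identities used above are easy to spot-check numerically; here is a small sketch with random vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
dx = rng.normal(size=3)      # stands in for Delta x
a  = rng.normal(size=3)

def skew(v):
    # [v] such that skew(v) @ b = v x b
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

lhs = -skew(dx) @ skew(dx) @ a                       # -[dx].[dx].a
mid = np.cross(dx, np.cross(a, dx))                  # dx x (a x dx)
rhs = np.dot(dx, dx)*a - np.dot(dx, a)*dx            # |dx|^2 a - (dx.a) dx
print(np.allclose(lhs, mid), np.allclose(mid, rhs))  # True True
```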
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/440486",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Contravariant metric in Newton-Cartan spacetime I'm interested in the geometrized Newtonian gravitation or Newton-Cartan theory. In every reference that I have found begins saying that a Newton-Cartan spacetime is a manifold $M$ with some structures. Among then, is always pointed a contravariant metric $g^{ab}$ that represents the spatial distances.
My question is: why is contravariant? Should it not be a covariant metric to measure the length of vectors? I understand that a contravariant metric measures lengths and angles of covectors or 1-forms.
| The metric structure in Newton-Cartan geometry is given by two elements (in d+1 spacetime dimensions):
*
*A contravariant metric $h^{\mu\nu}$ of rank d
*A one-form $\psi_\mu$ spanning the radical of $h$, namely $h^{\mu\nu}\psi_{\nu}=0$.
The 1-form $\psi$ allows to distinguish between timelike ($\psi_\mu X^\mu\neq0$) and spacelike ($\psi_\mu X^\mu=0$) vector fields (there are no light-like vectors).
Consistently with usual Newtonian theory, the notion of distance should only makes sense to measure spatial distances (as opposed to space-time distances as in general relativity).
Now, one can show that the contravariant metric $h$ provides exactly what is needed, as it can be shown that the above definition of $h$ is in fact equivalent to defining a d-dimensional Riemannian metric $\gamma$ acting on the kernel of $\psi$, namely $\gamma$ acts on spacelike vector fields and thus provides a notion of spatial distance.
The situation is even clearer when the distribution of spacelike vector fields is involutive (i.e. if $[X,Y]$ is spacelike for all spacelike vector fields X and Y or equivalently if $\psi$ satisfies the Frobenius integrability condition $d\psi\wedge\psi=0$). In this case, the $d+1$-dimensional spacetime is foliated by $d$-dimensional hypersurfaces (absolute spaces) corresponding to leaves of equal time, each of which is endowed with a $d$-dimensional Riemannian metric $\gamma$ allowing to measure spatial distances within this instantaneous $d$-dimensional space.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/440588",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 1
} |
Doubt about ray diagrams In a ray diagram, 2 rays are considered enough to locate the image of a point on a given object. But how can we say that the rays other than the one we drew will meet at that same point?
I guess we can justify this by saying that we get only one image of a given object by a single mirror/lens (right?). So every point on the object must correspond to only one point on the only image. Is this reasoning correct?
Also, can somebody provide a more "rigorous" proof ( maybe with some math involved)
Thanks
| This is a direct result of paraxial optics. By paraxial, one means that all the rays are nearly parallel to the optical axis.
Let's make this claim more rigorous. Any given ray at some point is characterized by its height $x$ and angle $\theta$ in respect to the optical axis. In this scenario, nearly every optical element can be approximated as a linear transformation of the $(x,\theta)$ vector, since $\theta\ll 1$ is small. In other words, we can associate with every optical system a matrix, called ABCD matrix, such that
$$\left(\matrix{x^\prime\\ \theta^\prime}\right)=\left(\matrix{A&B\\C&D}\right)\left(\matrix{x\\ \theta}\right)$$
where the $\prime$ indicates the coordinates after the system. In particular $x^{\prime}=Ax+B\theta$. In the special case of $B=0$ we can assert that $x^{\prime}=Ax$, i.e. $x^{\prime}$ is independent of $\theta$. Thus all the rays from height $x$ before the system intersect at a point of height $x^{\prime}$ immediately after. In this sense $B=0$ is the condition for imaging. In the case of an ideal lens, this reduces to the famous imaging formula
$$\frac{1}{u}+\frac{1}{v}=\frac{1}{f}$$
For more information you can refer to any undergraduate book on optics (Fundamentals of Photonics by Saleh and Teich for example), or simply to this Wikipedia page.
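As an illustration, here is a short sketch composing ABCD matrices for free space and a thin lens; when the object and image distances satisfy the lens equation, the B element of the system matrix vanishes:

```python
import numpy as np

def free_space(d):
    return np.array([[1.0, d], [0.0, 1.0]])

def thin_lens(f):
    return np.array([[1.0, 0.0], [-1.0/f, 1.0]])

f, u = 0.10, 0.30                    # focal length and object distance in m (made-up)
v = 1.0/(1.0/f - 1.0/u)              # image distance from 1/u + 1/v = 1/f

# Object plane -> lens -> image plane (rightmost matrix acts first)
M = free_space(v) @ thin_lens(f) @ free_space(u)
print(M)          # the top-right (B) element is ~0: the imaging condition
print(M[0, 0])    # A is the transverse magnification, -v/u
```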
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/440716",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
} |
Air Pressure in a Mine In Sunday's "60 Minutes" TV program the correspondent descended into a gold mine said to be 2 miles (3 km) deep. What equation describes the air pressure relative to sea level atmospheric pressure?
| Barometric formula is the equation which can estimate the pressure at different heights:
$$
p=p_0\cdot\exp\left(-\frac{mgh}{k_BT}\right)
$$
The $p_0$ is the reference pressure, $T$ is the temperature in K, $k_B$ is the Boltzmann constant, and $h$ is the height.
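A rough numerical sketch for the mine (treating the 3 km depth as a negative height relative to the surface and assuming a constant temperature; a real mine is warmer at the bottom, so this is only an estimate):

```python
import math

k_B = 1.380649e-23     # J/K
m   = 4.81e-26         # average mass of an air molecule in kg (~29 u)
g   = 9.81             # m/s^2
T   = 288.0            # K, assumed constant
p0  = 101325.0         # Pa at the reference (surface) level

h = -3000.0            # 3 km below the reference level
p = p0 * math.exp(-m*g*h/(k_B*T))
print(p/p0)            # roughly 1.4: about 40% above surface pressure
```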
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/440846",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Cylinder vs cylinder of double the radius roll down an incline plane, which one wins? A solid cylinder and another solid cylinder with the same mass but double the radius start at the same height on an incline plane with height h and roll without slipping. Consider the cylinders as disks with moment of inertias I=(1/2)mr^2. Which one reaches the bottom of the incline plane first?
According to this, the velocity of any body rolling down the plane is
$$v=\sqrt{\frac{2 g h}{1 + c}}$$
where c is the constant in moment of inertia (for example, c=2/5 for a solid sphere).
My thought process was that since the radius doubled, c=2. So, the velocity of the doubled cylinder would be less, therefore finishing later. Similarly, if its moment of inertia increases, its angular and linear acceleration decrease. However, my other peers and even my professor disagree, saying that radius and mass do not play a role in the velocity of the body, since both m and r will cancel in an actual calculation of the velocity.
Could anyone elaborate on whether I am right or wrong?
|
My thought process was that since the radius doubled, c=2
$c$ is not the moment of inertia itself, it's the constant in $I = cMR^2$. For your two solid cylinders, the constant will be the same, even though $I$ will differ because $R$ will differ.
Similarly, if its moment of inertia increases, its angular and linear acceleration decrease.
You're correct that the angular acceleration decreases. But that doesn't mean the linear acceleration decreases.
If we put the same rotational energy into the cylinders, the larger one must spin slower. How much slower?
$$ E = \frac{1}{2} I \omega^2$$
$$ \omega ^2 = \frac {2 E}{I}$$
$$ \omega = \sqrt{ \frac {2 E}{MR^2} }$$
Since mass and energy are constant here, we can replace them and the factor of two with a single constant $k$.
$$ \omega = \frac{k}{R}$$
So as I goes up (and energy and mass are constant) it has angular speed that is inversely proportional to R. But because it's rolling, we know that $v = \omega R$.
$$ v = \omega R$$
$$ v = \frac{k}{R} R = k$$
The radius has fallen out. The rotational speed depends on the radius, but the linear speed does not.
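A quick numerical check (made-up height) that the radius never enters:

```python
import math

def rolling_speed(h, c, g=9.81):
    # v = sqrt(2 g h / (1 + c)) for a body with I = c m r^2 rolling without slipping
    return math.sqrt(2*g*h/(1 + c))

h = 1.0
print(rolling_speed(h, c=0.5))   # solid cylinder, any radius
print(rolling_speed(h, c=0.5))   # solid cylinder of double the radius: same c, same speed
print(rolling_speed(h, c=2/5))   # a solid sphere would be slightly faster
```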
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/440946",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
What is the measured decay rate of antineutrons? / What is the measured mean lifetime of antineutrons? Please do not post any "answers" dealing with predicted/theoretical estimates. The question specifically asks for measured / experimental evidence.
| According to Fundamental Symmetries, ed. Bloch, P., Pavlopoulos, P., Klapisch, R. 1987, page 82:
The measurement of this lifetime has not yet been attempted as it requires very slow antineutrons. Low-energy antineutrons are created in the antiproton source of antiproton accumulators, and they can be produced in the charge exchange reaction $\bar{p}p \to \bar{n}n$, where the momentum of the incoming antiproton is above 1 GeV/c. It is not inconceivable that antineutrons could be trapped in a magnetic storage device. In contrast to the neutron, decaying antineutrons can probably be more easily detected owing to the outgoing antiproton and its subsequent annihilation. This could allow the antineutron lifetime to be determined directly from the exponential decay. One would thus be free from normalization problems. An accuracy of at least 1% should be achievable.
So unless someone has done the experiment since then, the answer to your question is that there is no experimental measurement of the antineutron lifetime.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/441056",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Is the normal force equal to weight if we take the rotation of Earth into account? In my physics class we were doing problems such that we set $N$ (normal force) $= mg$. I understand that by Newton's Third Law, if I exert a force on the ground, then the ground will exert an equal and opposite force on me. However, the part that I am slightly confused about is that when the Earth rotates, and thus I rotate too, I am accelerating with the centripetal force towards the center of the earth (assuming I am at the equator). How am I doing this if the normal force equals $mg$? If the normal force doesn't equal mg then why isn't the ground exerting an equal and opposite force?
| Here is a diagram of an ideal spherical Earth radius $R$, mass $M$ rotating at an angular speed $\omega$ with an object mass $m$ in contact with the surface of the Earth.
The object on the Earth is subject to two forces:
gravitational attraction $\frac{GMm}{R^2}=mg$ where $g$ is the gravitational field strength and a reaction due to the Earth $N$.
The net force on the object produces the centripetal acceleration of the object.
At the poles there is no centripetal acceleration, so $mg -N_{\rm pole} = 0 \Rightarrow N_{\rm pole} = mg$, which is the equation that you quoted in your first sentence.
At the Equator the equation of motion is $mg - N_{\rm equator} = mR\omega^2$ so the normal reaction $N_{\rm equator}$ is smaller than the gravitational attraction $mg$.
At other points on the Earth the reaction $N$ is smaller than the gravitational attraction $mg$ but not by as much as at the equator but you will note that on a spherical Earth that reaction is no longer normal to the Earth's surface.
A better approximation to the shape of the Earth is that it is an oblate spheroid (like a squashed sphere) as shown greatly exaggerated below.
With the Earth being that shape the reaction force on the mass is normal to the surface and in general a plumb line does not point towards the centre of the Earth.
Now another correction has to be made as the value of the gravitational field strength $g$ varies from being a maximum at the poles and a minimum at the Equator.
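For a feel of the size of the effect on the spherical model, here is a quick numerical sketch (the 70 kg person is an arbitrary choice):

```python
import math

G, M, R = 6.674e-11, 5.97e24, 6.371e6    # SI units, spherical Earth
m = 70.0                                  # kg
omega = 2*math.pi/86164.0                 # sidereal day

g = G*M/R**2
N_pole    = m*g                           # no centripetal term at the poles
N_equator = m*g - m*R*omega**2            # mg - m R omega^2 at the equator
print(N_pole, N_equator, (N_pole - N_equator)/N_pole)   # difference ~0.3%
```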
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/441245",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 5,
"answer_id": 3
} |
Understanding projective measurements as a special case of POVM measurements ("third postulate" in Nielsen and Chuang) I am working through Nielsen and Chuang's book and am confused about a detail from sections 2.2.3 and 2.2.5.
On page 88 of my copy (section 2.2.5), they write
Projective measurements can be understood as a special case of Postulate 3. Suppose the measurement operators in Postulate 3, in addition to satisfying the completeness relation $\sum_m M_m^\dagger M_m = I$ also satisfy the conditions that $M_m$ are orthogonal projectors, that is, the $M_m$ are Hermitian, and $M_mM_{m^\prime} = \delta_{m,m^\prime}M_m$.
It seems to me that they're implying that orthogonal projectors are (1) Hermitian and also (2) satisfy $M_mM_{m^\prime} = \delta_{m,m^\prime}M_m$.
My question: My understanding is that a projector is simply an operator which satisfies $P^2 = P$, and for projectors to be orthogonal means that the composition of two distinct ones always yields zero, i.e. $(P_1 \circ P_2)(x) = 0$ for all $x$. But this is all covered by part (2) of the statement alone. So why is (1) necessary?
Edit: Here is a screenshot of their statement of postulate 3
| https://en.wikipedia.org/wiki/Projection_(linear_algebra)#Orthogonal_projections states:
An orthogonal projection is a projection for which the range U and the null space V are orthogonal subspaces.
Thus, orthogonality is a property of a single projection, not of a set of projections, as you state it (some kind of mutual orthogonality) -- so the immediate answer to your question is: "You are using the wrong definition of orthogonal projection".
Immediately afterwards, it is shown that:
A projection is orthogonal if and only if it is self-adjoint.
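A quick numerical illustration of the distinction (NumPy, arbitrary vectors): an orthogonal projector is idempotent and Hermitian, whereas an oblique projector is idempotent but not Hermitian.

```python
import numpy as np

# Orthogonal projector onto the span of |v>:  P = |v><v| / <v|v>
v = np.array([1.0, 2.0, 0.5])
P = np.outer(v, v) / np.dot(v, v)
print(np.allclose(P @ P, P), np.allclose(P, P.conj().T))   # True True

# Oblique projection: idempotent but not Hermitian
Q = np.array([[1.0, 1.0],
              [0.0, 0.0]])
print(np.allclose(Q @ Q, Q), np.allclose(Q, Q.T))           # True False
```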
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/441378",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Do centrifugal force and gravity differ in their effects on objects? If the type of object matters, consider the human body. If the situation matters, consider standing on the inside wall of an O'Neill cylinder compared to standing on the surface of Earth.
"Differ in their effects on objects" means: Would the object be able to tell the difference? That is, is there an instrument that could tell whether it is placed in an O'Neil cylinder or on the surface of a planet from the effects (acceleartion, I suppose) of centrifugal force and gravity alone?
| What we normally think of as “gravity” on earth is actually a mix of gravitational and centrifugal force: plumb bobs don’t hang toward the center of the earth, but rather slightly toward the opposite pole. They are both static body forces, so it’s not possible to directly tell them apart locally.
But any rotating frame also has Coriolis force, which is detectable locally.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/441606",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Why does warm air rise up? Warm air has more energy than cold air. This means that, according to the Einstein equation $E = mc^2$, the warmer air has a greater mass than the cold air. Why does the warm air rise, if it has a greater mass, which means that the gravitational attraction between the Earth and the warm air is greater?
| Buoyancy and the ideal gas law.
PV = nRT
P is pressure
V is volume
n is number
R is a constant
T is temperature
In a closed container, if you increase T then P goes up.
In the open atmosphere, P stays fixed, so V goes up instead.
With the same mass and a larger V, the density drops, and buoyancy then takes over.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/441954",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
} |
Is there some physical interpretation of the parallel exterior region? Let the maximal extension of the Schwarzschild spacetime be given. It admits as coordinates the Kruskal-Szekeres coordinates $(T,X,\theta,\phi)$ with $$T^2-X^2<1$$
since the singularity occurs at $T^2-X^2=1$. This spacetime is divided into four regions:
*
*Region I: this is the exterior region. One can define in this region the usual $t,r$ coordinates by $$r=2M\left(1+W_0\left(\dfrac{X^2-T^2}{e}\right)\right),\quad t=4M\tanh^{-1}\dfrac{T}{X}$$
*Region II: this is the black hole region. One can also define the above two coordinates here, but now they are $$r=2M\left(1+W_0\left(\dfrac{X^2-T^2}{e}\right)\right),\quad t=4M\tanh^{-1}\dfrac{X}{T}$$
*Region III: this is the parallel exterior region, on which we have $$r=2M\left(1+W_0\left(\dfrac{X^2-T^2}{e}\right)\right),\quad t=4M\tanh^{-1}\dfrac{T}{X}$$
*Region IV: this is the white hole region, on which we have $$r=2M\left(1+W_0\left(\dfrac{X^2-T^2}{e}\right)\right),\quad t=4M\tanh^{-1}\dfrac{X}{T}$$
Now, regions I and II together comprise the usual Schwarzschild spacetime. On the other hand, we have also regions III and IV. I remember Wald says in his book these regions are unphysical.
Is that really the case? Is there no physical interpretation for regions III and IV? Especially for the parallel exterior region: isn't there any known physical interpretation of what it might be physically, or how it might really arise in a given situation?
Is the maximal extension of Schwarzschild spacetime physically meaningful or is it just mathematically meaningful?
|
Is that really the case? Is there no physical interpretation for regions III and IV? Especially for the parallel exterior region: isn't there any known physical interpretation of what it might be physically, or how it might really arise in a given situation?
Yes, that's really the case. Those regions don't exist for a black hole that forms by gravitational collapse. For a black hole that forms by gravitational collapse, the Penrose diagram looks like this:
Related: Can black holes form in a finite amount of time?
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/442037",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Capacitor demo explanation I know that for a charged capacitor as one separates the plates further apart the voltage increases while the capacitance decreases.
But surely as the plates are pulled further and further apart the potential difference across the plates or voltage cannot rise indefinitely? Where does it stop?
also can someone please explain more in detail perhaps with a schematic the setup seen in this video?
https://www.youtube.com/watch?v=e0n6xLdwaT0
Especially: if he charges the capacitor with a power supply and then disconnects the power supply, where is the current measured as the plates are moved apart? I assume the plates aren't electrically connected to each other, otherwise the capacitor would discharge itself?
|
to take the gravitational potential energy as comparison feels weird because the further a mass gets from another mass the less force it experiences until a point where the force experienced is so negligible that it counts only theoretically
The same thing is happening with the charge plates.
At a close distance (when the separation is much less than the size of the plates), the field between the plates is uniform and the potential increases linearly with distance. This is analogous to how we treat gravitational energy near the earth. The field is nearly uniform, so we assume energy and potential increase linearly with height.
At larger distances, we can no longer assume the field is uniform and the change in energy or potential with increase in distance starts to decrease rapidly. At large distances, the forces/gravitational/electric fields tend to zero.
When the capacitor plates are small, the linear region for separating the plates will also be small.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/442152",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Accurate Equation for Earth's Gravitational Binding Energy This is a relatively important question for anyone who can answer it. I am trying to find the equation that accurately solves for Earth's Gravitational Binding Energy. The information below is from the wikipedia page:
Assuming that the Earth is a uniform sphere (which is not correct, but is close enough to get an order-of-magnitude estimate) with $M = 5.97 \times 10^{24}$ kg and $r = 6.37 \times 10^6$ m, $U$ is $2.24 \times 10^{32}$ J. This is roughly equal to one week of the Sun's total energy output. It is 37.5 MJ/kg, 60% of the absolute value of the potential energy per kilogram at the surface.
The actual depth-dependence of density, inferred from seismic travel times (see Adams–Williamson equation), is given in the Preliminary Reference Earth Model (PREM). Using this, the real gravitational binding energy of Earth can be calculated numerically as $U = 2.487 \times 10^{32}$ J.
So what I wish to know, for the latter result ($2.487 \times 10^{32}$ J), is what actual equation is used to get it.
And I do not mean the standard GBE equation, which gave the other result above.
|
I am trying to find the equation that accurately solves for Earth's Gravitational Binding Energy.
There isn't a single equation for this (and much of real science does not yield convenient single formulas as solutions). What the PREM produces for density is a set of piecewise approximate polynomial functions that model the theorized density that produces a reasonable match to measured data (like for example seismic data).
So working from this page by Dave Typinski you could develop an integral equation for the gravitational binding energy. I'm not going to actually do that myself, but the density polynomial functions are just quadratic, and you can apply this simply enough to the integral for gravitational binding energy :
$$U = 16\pi^2 G \int_0^R r \rho(r) \left[ \int_0^r \rho(r) r^2 dr \right] dr $$
This is tedious to do with the piecewise functions for density, but not difficult.
For a constant density $\rho(r)=\rho_0$ you can see this reduces to the familiar equation :
$$U = 16\pi^2G \rho_0^2 \int_0^R r \left[\frac 1 3 r^3\right] dr = \frac {16\pi^2G} {15} \rho_0^2 R^5 = \frac {3}{5} \frac {GM^2} R$$
So what I wish to know, for the latter result ($2.487\times 10^{32}\,J$), is what actual equation is used to get it.
I can't answer that definitively, but it would most likely be either a numerical integration or something like I've outlined.
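To make the procedure concrete, here is a sketch of the numerical integration with a uniform density as a placeholder (swap the rho(r) function for the PREM piecewise polynomials to approach the quoted $2.487\times10^{32}$ J; with the uniform density it reproduces the $\frac{3}{5}GM^2/R$ value):

```python
import numpy as np
from scipy import integrate

G, M, R = 6.674e-11, 5.97e24, 6.371e6
rho0 = M / (4.0/3.0*np.pi*R**3)

def rho(r):
    # Uniform density placeholder; replace with the PREM piecewise polynomials.
    return rho0

def enclosed_mass(r):
    val, _ = integrate.quad(lambda s: 4*np.pi*s**2*rho(s), 0.0, r)
    return val

U, _ = integrate.quad(lambda r: 4*np.pi*G*r*rho(r)*enclosed_mass(r), 0.0, R)
print(U)                 # ~2.24e32 J for uniform density
print(0.6*G*M**2/R)      # analytic 3GM^2/(5R) for comparison
```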
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/442379",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Plasma and helicon wave frequencies in iron according to Drude model The first chapter of Ashcroft and Mermin's Solid State Physics discusses electromagnetic waves in metals. One of the exercises requires calculation of the plasma and helicon frequencies using the Drude model. For iron, I got $2.3*10^{16}$ for the plasma frequency and 11.5 for the helicon wave frequency (radians per second). The first seems too high and the second seems too low. Are they reasonable?
The given formula for the plasma frequency is $\omega_p^2=\frac{4\pi n e^2}{m}$ where n is the density of charge carriers, e is the electron charge and m is the electron mass. The authors seem to be using gaussian units although this is not entirely clear. For n I used $1.7*10^{23}\:cm^{-3}$
My cyclotron frequency for a magnetic field of 10 kilogauss, $1.76*10^{11}$, seems to be right.
For the helicon wave frequency the given formula is $\omega=\omega_c(\frac{k^2c^2}{\omega_p^2})$ where $\omega_c$ is the cyclotron frequency and $\omega_p$ is the plasma frequency. The value of $k=2\pi/\lambda$ corresponds to a wavelength of 1 cm.
| For the plasma frequency, you're only about one order of magnitude off; the correct value is $9.89 \times 10^{14}$ Hz (source). The reason for the difference is that because of iron's long-range periodic crystal structure, electrons in iron have an effective mass significantly higher than their physical mass. Note that there is no single effective electron mass for a given substance, so you can't just go look up the effective mass in a table and substitute it into your equation; rather, the plasma frequency generally has to be determined by experiment.
For a metal, it is reasonable to expect helicon frequencies in the range of perhaps $10^1$ to $10^4$ Hz (for example, see this lab). Your value is definitely on the low side and probably isn't realistic. The error here is due to at least two things, the incorrect plasma frequency and not using the cyclotron effective mass to compute the cyclotron frequency.
In any case, I should add that you've applied the Drude model correctly; the fact that the results do not match what we observe in real life is due to limitations of the model, not any mistake on your part.
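For reference, here is the arithmetic in Gaussian units that reproduces the numbers in the question (the bare Drude-model values, before any effective-mass correction):

```python
import math

# Gaussian (cgs) units
e, m_e, c = 4.803e-10, 9.109e-28, 2.998e10   # statC, g, cm/s
n   = 1.7e23        # carriers per cm^3, as used in the question
B   = 1.0e4         # gauss (10 kilogauss)
lam = 1.0           # cm

omega_p = math.sqrt(4*math.pi*n*e**2/m_e)    # ~2.3e16 rad/s
omega_c = e*B/(m_e*c)                        # ~1.76e11 rad/s
k = 2*math.pi/lam
print(omega_p, omega_c, omega_c*(k*c/omega_p)**2)   # helicon ~11.5 rad/s
```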
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/442470",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
What is the shape of a gravitational wave form? What is the shape of a gravitational wave as it hits the Earth, particularly the time portion.
Does time start at normal speed, then slow slightly, and then return to normal speed?
Or does it start at a normal speed, slow down slightly, then speed up slightly, and then return to normal speed?
Those other questions only concerned whether time dilation exists. I'm more concerned with the shape of the wave form. So not the same questions at all.
| The new report here, published today, shows three very nice examples of gravitational waves coming from the merger of two black holes on page 2. So I can see the wave shapes very clearly.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/442588",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |